Simpler Android apps with Flow and Mortar

Bust up your app into tidy little modules with these two libraries.

Written by Ray Ryan.

Doctor, it hurts when I go like that.
Don’t go like that!
—Henny Youngman

When Fragments were introduced to Android the Square Register team jumped on board. We were already chopping activities into subscreens in a clumsy way, and we did ourselves a lot of good moving to the new hotness. But we also bought ourselves a lot of headaches: default constructors only, offscreen fragments mysteriously being brought back to life at odd moments, no direct control over animation, an opaque back stack manager — to use Fragments well required mastering an increasingly arcane folklore.

So now we do something else, and we’d like to share a couple of new libraries that help us do it. We use Flow to keep track of what View to show when. Yes, we just use View instances directly as the building blocks of our apps. It’s okay because the views stay simple (and testable) by injecting all their smarts from tidy little scopes defined with Mortar.

Flow: URLs for your app

The first new toy, Flow, is a backstack model that knows you likely have to deal with both the device’s back button and the ActionBar’s up button. The things that Flow manages are referred to as “screens,” with a lowercase “s” — they’re not instances of any particular class. To Flow, a screen is a POJO that describes a particular location in your app. Think of it as a URL, or an informal Intent.

For example, imagine a music player app, with Album and Track screens:

@Screen(layout = R.layout.album_view)
class AlbumScreen {
  public final int id;

  public AlbumScreen(int id) { this.id = id; }
}

@Screen(layout = R.layout.track_view)
class TrackScreen implements HasParent<AlbumScreen> {
  public final int albumId;
  public final int trackId;

  public TrackScreen(int albumId, int trackId) {
    this.albumId = albumId;
    this.trackId = trackId;
  }

  @Override public AlbumScreen getParent() {
    return new AlbumScreen(albumId);
  }
}

The two screen types provide just enough information to rummage around for the model classes that have the actual album and track info. The optional HasParent interface implemented by the TrackScreen here declares what should happen when the up button is tapped. Moving to the right track screen from a list on the album screen is as easy as:

setOnItemClickListener(new OnItemClickListener() {
  @Override public void onItemClick(AdapterView<?> parent, View view,
      int position, long id) {
    flow.goTo(new TrackScreen(albumId, position));
  }
});

Going back or up is just as simple: flow.goBack() or flow.goUp().
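Under the hood a backstack is just that: a stack. Here is a toy model in plain Java of the goTo/goBack/goUp semantics — purely illustrative, not Flow’s actual implementation (the nested HasParent interface is a stand-in for Flow’s):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy backstack, purely illustrative -- not Flow's actual implementation.
class ToyFlow {
  interface HasParent { Object getParent(); }

  private final Deque<Object> backstack = new ArrayDeque<>();

  ToyFlow(Object root) { backstack.push(root); }

  // goTo pushes a new screen on top of the stack.
  void goTo(Object screen) { backstack.push(screen); }

  // goBack pops, unless we're already at the root
  // (in which case Android itself should handle the back press).
  boolean goBack() {
    if (backstack.size() <= 1) return false;
    backstack.pop();
    return true;
  }

  // goUp swaps the current screen for its declared parent,
  // mirroring the HasParent contract.
  boolean goUp() {
    if (!(backstack.peek() instanceof HasParent)) return false;
    Object parent = ((HasParent) backstack.pop()).getParent();
    backstack.push(parent);
    return true;
  }

  Object current() { return backstack.peek(); }
}
```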

So what happens on-screen when you call these methods? You decide. While Flow provides the @Screen annotation as a convenience for instantiating the view to show for a particular screen, actually displaying the thing is up to you. A really simple Activity might implement the Flow.Listener interface this way:

@Override public void go(Backstack backstack, Direction direction) {
  Object screen = backstack.current().getScreen();
  setContentView(Screens.createView(this, screen));
}

It shouldn’t take a lot of imagination to see how to embellish this with animation based on the given Direction (FORWARD or BACK).

Mortar: Blueprints for each of those URLs

If Flow tells you where to go, Mortar tells you what to build when you get there.

Major views in our apps use Dagger to inject all their interesting parts. One of the best tricks we’ve found is to create @Singleton controllers for them. Configuration change? No problem. The landscape version of your view will inject the same controller instance that the portrait version was just using, making continuity a breeze. But all those controllers for all those views live forever, occupying precious memory long after the views they manage have been torn down. And just how can they get their hands on the activity’s persistence bundle to survive process death?

Mortar solves both of these problems. Each section of a Mortar app (each screen if you’re also using Flow) is defined by a Blueprint with its very own module. And the thing most commonly provided is a singleton Presenter, a view controller with a simple lifecycle and its own persistence bundle.

Going back to our music player example, using Mortar the AlbumScreen might look something like this:

@Screen(layout = R.layout.album_view)
class AlbumScreen implements Blueprint {
  final int id;

  public AlbumScreen(int albumId) { this.id = albumId; }

  @Override public String getMortarScopeName() {
    return getClass().getName();
  }

  @Override public Object getDaggerModule() {
    return new Module();
  }

  @dagger.Module(addsTo = AppModule.class)
  class Module {
    @Provides Album provideAlbum(JukeBox jukebox) {
      return jukebox.getAlbum(id);
    }
  }
}

We’re imagining here an app-wide JukeBox service that provides access to Album model objects. See that @Provides Album method at the bottom? That’s a factory method that will let the AlbumView inflated from R.layout.album_view simply @Inject Album album directly, without messing with int ids and the like.

public class AlbumView extends FrameLayout {
  @Inject Album album;

  public AlbumView(Context context, AttributeSet attrs) {
    super(context, attrs);
    Mortar.inject(context, this);
  }

  // ...
}

To take the example further, suppose the AlbumView is starting to get more complicated. We want it to edit metadata like the album name, and of course we don’t want to lose unsaved edits when our app goes to the background. It’s time to move the increasing smarts out of the view and over to a Presenter. Let’s keep the Android view concerned with Android-specific tasks like layout and event handling, and keep our app logic cleanly separated (and testable!).

@Screen(layout = R.layout.album_view)
class AlbumScreen implements Blueprint {
  final int id;

  public AlbumScreen(int albumId) { this.id = albumId; }

  @Override public String getMortarScopeName() {
    return getClass().getName();
  }

  @Override public Object getDaggerModule() {
    return new Module();
  }

  @dagger.Module(addsTo = AppModule.class)
  class Module {
    @Provides Album provideAlbum(JukeBox jukebox) {
      return jukebox.getAlbum(id);
    }
  }

  @Singleton
  static class Presenter extends ViewPresenter<AlbumView> {
    private final Album album;

    @Inject Presenter(Album album) { this.album = album; }

    @Override protected void onLoad(Bundle savedState) {
      AlbumView view = getView();
      if (view != null) {
        // Show the album, restoring any in-progress
        // name edit from savedState.
      }
    }

    @Override protected void onSave(Bundle outState) {
      outState.putString("name-in-progress", getView().getEditedName());
    }

    void onSaveClicked() {
      // Push the edited name to the JukeBox.
    }
  }
}

The view will now look something like this (in part). Notice how the AlbumView lets the presenter know when it’s actually in play, through overrides of onAttachedToWindow and onDetachedFromWindow.

public class AlbumView extends FrameLayout {
  @Inject AlbumScreen.Presenter presenter;

  private final TextView newNameView;

  public AlbumView(Context context, AttributeSet attrs) {
    super(context, attrs);
    Mortar.inject(context, this);

    // The original view ids were lost from the post; these are stand-ins.
    this.newNameView = (TextView) findViewById(R.id.new_name);
    findViewById(R.id.save).setOnClickListener(new OnClickListener() {
      @Override public void onClick(View view) {
        presenter.onSaveClicked();
      }
    });
  }

  @Override protected void onAttachedToWindow() {
    super.onAttachedToWindow();
    presenter.takeView(this);
  }

  @Override protected void onDetachedFromWindow() {
    super.onDetachedFromWindow();
    presenter.dropView(this);
  }

  // ...
}

Because AlbumScreen.Presenter is scoped to just this screen, we have confidence that it will be gc’d when we go elsewhere. The AlbumScreen class itself serves as a self-contained, readable definition of this section of the app. And do you see those onLoad and onSave methods on the Presenter? Those are the entire Mortar lifecycle. We just haven’t found a need for anything more.

It works for us

So that’s how we’re doing it these days, and life is pretty good. Flow and Mortar are both still taking shape, though — hopefully with help from you.


Android leak pattern: subscriptions in views

In Square Register Android, we rely on custom views to structure our app. Sometimes a view listens to changes from an object that lives longer than that view.

For instance, a HeaderView might want to listen to username changes coming from an Authenticator singleton:
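The original snippet was embedded as an image and lost; here is a reconstruction consistent with the surrounding discussion (RxJava 1 style; names like usernameChanges() and R.id.username are assumptions):

```java
public class HeaderView extends LinearLayout {
  private TextView usernameView;
  private Subscription subscription;

  public HeaderView(Context context, AttributeSet attrs) {
    super(context, attrs);
  }

  @Override protected void onFinishInflate() {
    super.onFinishInflate();
    // Find child views once inflation is done...
    usernameView = (TextView) findViewById(R.id.username);
    // ...then subscribe to username changes from the long-lived singleton.
    subscription = Authenticator.getInstance()
        .usernameChanges()
        .subscribe(new Action1<String>() {
          @Override public void call(String username) {
            usernameView.setText(username);
          }
        });
  }
}
```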

onFinishInflate() is a good place for an inflated custom view to find its child views, so we do that and then we subscribe to username changes.

The above code has a major bug: We never unsubscribe. When the view goes away, the Action1 stays subscribed. Because the Action1 is an anonymous class, it keeps a reference to the outer class, HeaderView. The entire view hierarchy is now leaking, and can’t be garbage collected.

To fix this bug, let’s unsubscribe when the view is detached from the window:
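A reconstruction of that fix (the original snippet was also an embedded image), added to HeaderView:

```java
  @Override protected void onDetachedFromWindow() {
    super.onDetachedFromWindow();
    subscription.unsubscribe();
  }
```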

Problem fixed? Not exactly. I was recently looking at a LeakCanary report caused by a very similar piece of code: somehow View.onDetachedFromWindow() was not being called, which created the leak.

While debugging, I realized that View.onAttachedToWindow() wasn’t called, either. If a view is never attached, obviously it won’t be detached. So, View.onFinishInflate() is called, but not View.onAttachedToWindow().

Let’s learn more about View.onAttachedToWindow():

  • When a view is added to a parent view with a window, onAttachedToWindow() is called immediately, from addView().
  • When a view is added to a parent view with no window, onAttachedToWindow() will be called when that parent is attached to a window.

We’re inflating the view hierarchy the typical Android way:
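Reconstructed (the layout name is a stand-in), that is nothing more than a setContentView() call in onCreate():

```java
public class MainActivity extends Activity {
  @Override protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    // Inflates the hierarchy; onFinishInflate() fires for each custom view,
    // but nothing is attached to a window yet.
    setContentView(R.layout.main);
  }
}
```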

At that point, every view in the view hierarchy has received the View.onFinishInflate() callback, but not the View.onAttachedToWindow() callback. Here’s why:

View.onAttachedToWindow() is called on the first view traversal, sometime after Activity.onStart()

ViewRootImpl is where the onAttachedToWindow() call is dispatched, during that first view traversal.

Cool, so we don’t get attached in onCreate(). What about after onStart(), though? Isn’t onStart() always called after onCreate()?

Not always! The Activity.onCreate() javadoc gives us the answer:

You can call finish() from within this function, in which case onDestroy() will be immediately called without any of the rest of the activity lifecycle (onStart(), onResume(), onPause(), etc) executing.


We were validating the activity intent in onCreate(), and immediately calling finish() with an error result if the content of that intent was invalid:
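Reconstructed, with hypothetical helpers standing in for the validation and the error payload:

```java
@Override protected void onCreate(Bundle savedInstanceState) {
  super.onCreate(savedInstanceState);
  setContentView(R.layout.main);

  if (!intentIsValid(getIntent())) {  // hypothetical validation helper
    setResult(RESULT_CANCELED, errorResult());  // hypothetical error payload
    finish();
    // onDestroy() runs next; onStart() never will, so the inflated
    // hierarchy is never attached to (or detached from) a window.
    return;
  }
}
```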

The view hierarchy was inflated, but never attached to the window and therefore never detached.

Picture the good old activity lifecycle diagram, updated: onAttachedToWindow() slots in after onStart(), on the first view traversal, and finish() in onCreate() skips that entirely.

With that knowledge, we can now move the subscription code to onAttachedToWindow():
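A reconstruction of the final HeaderView version, with subscription now symmetric with unsubscription (usernameChanges() remains a stand-in name):

```java
  @Override protected void onAttachedToWindow() {
    super.onAttachedToWindow();
    subscription = Authenticator.getInstance()
        .usernameChanges()
        .subscribe(new Action1<String>() {
          @Override public void call(String username) {
            usernameView.setText(username);
          }
        });
  }

  @Override protected void onDetachedFromWindow() {
    super.onDetachedFromWindow();
    subscription.unsubscribe();
  }
```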

This is for the better anyway: symmetry is good, and unlike the original implementation, we can add and remove that view any number of times.


Upgrading a Reverse Proxy from Netty 3 to 4

Tracon is our reverse HTTP proxy powered by Netty. We recently completed an upgrade to Netty 4 and wanted to share our experience.

Written by Chris Conroy and Matt Davenport.

Tracon: Square’s reverse proxy

Tracon is our reverse HTTP proxy powered by Netty. Several years ago, as we started to move to a microservice architecture, we realized that we needed a reverse proxy to coordinate the migration of APIs from our legacy monolith to our rapidly expanding set of microservices.

We chose to build Tracon on top of Netty in order to get efficient performance coupled with the ability to make safe and sophisticated customizations. We are also able to leverage a lot of shared Java code with the rest of our stack in order to provide rock-solid service discovery, configuration and lifecycle management, and much more!

Tracon was written using Netty 3 and has been in production for three years. Over its lifetime, the codebase has grown to 20,000 lines of code and tests. Thanks in large part to the Netty library, the core of this proxy application has proven so reliable that we’ve expanded its use into other applications. The same library powers our internal authenticating corporate proxy. Tracon’s integration with our internal dynamic service discovery system will soon power all service-to-service communication at Square. In addition to routing logic, we can capture a myriad of statistics about the traffic flowing into our datacenters.

Why upgrade to Netty 4 now?

Netty 4 was released three years ago. Compared to Netty 3, the threading and memory models have been completely revamped for improved performance. Perhaps more importantly, it also provides first class support for HTTP/2. Although we’ve been interested in migrating to this library for quite a while, we’ve delayed upgrading because it is a major upgrade that introduces some significant breaking changes.

Now that Netty 4 has been around for a while and Netty 3 has reached the end of its life, we felt that the time was ripe for an overhaul of this mission-critical piece of infrastructure. We want to allow our mobile clients to use HTTP/2 and are retooling our RPC infrastructure to use gRPC which will require our infrastructure to proxy HTTP/2. We knew this would be a multi-month effort and there would be bumps along the way. Now that the upgrade is complete, we wanted to share some of the issues we encountered and how we solved them.

Issues encountered

Single-threaded channels: this should be simple!

Unlike Netty 3, in Netty 4, outbound events happen on the same single thread as inbound events. This allowed us to simplify some of our outbound handlers by removing code that ensured thread safety. However, we also ran into an unexpected race condition because of this change.

Many of our tests run with an echo server, and we assert that the client receives exactly what it sent. In one of our tests involving chunked messages, we found that we would occasionally receive all but one chunk back. The missing chunk was never at the beginning of the message, but it varied from the middle to the end.

In Netty 3, all interactions with a pipeline were thread-safe. However, in Netty 4, all pipeline events must occur on the event loop. As a result, events that originate outside of the event loop are scheduled asynchronously by Netty.

In Tracon, we proxy traffic from an inbound server channel to a separate outbound channel. Since we pool our outbound connections, the outbound channels aren’t tied to the inbound event loop. Events from each event loop caused this proxy to try to write concurrently. This code was safe in Netty 3 since each write call would complete before returning. In Netty 4, we had to more carefully control what event loop could call write to prevent out of order writes.

When upgrading an application from Netty 3, carefully audit any code for events that might fire from outside the event loop: these events will now be scheduled asynchronously.
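The race is easy to model without Netty. In the sketch below (plain Java, purely illustrative of the pattern rather than Tracon’s code), a single-threaded executor plays the role of the event loop: callers never touch shared state directly but hand work to the loop, which is how out-of-order writes are prevented. In real Netty 4 code the equivalent check is channel.eventLoop().inEventLoop(), scheduling via execute() when it returns false.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Toy model of Netty 4's threading rule: all pipeline work runs on one
// event-loop thread, and calls from other threads are handed off to it
// instead of mutating shared state inline.
class EventLoopDemo {
  private final ExecutorService loop = Executors.newSingleThreadExecutor();
  private final List<String> writes = new ArrayList<>();

  // Safe from any thread: the write is funneled through the single
  // loop thread, so chunks can never interleave out of order.
  void write(String chunk) {
    loop.execute(() -> writes.add(chunk));
  }

  // Drains the loop and returns everything written, in order.
  List<String> drain() {
    loop.shutdown();
    try {
      loop.awaitTermination(5, TimeUnit.SECONDS);
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
    return writes;
  }
}
```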

When is a channel really connected?

In Netty 3, the SslHandler “redefines” a channelConnected event to be gated on the completion of the TLS handshake instead of the TCP handshake on the socket. In Netty 4, the handler does not block the channelConnected event and instead fires a finer-grained user event: SslHandshakeCompletionEvent. Note that Netty 4 replaces channelConnected with channelActive.

For most applications, this would be an innocuous change, but Tracon uses mutually authenticated TLS to verify the identity of the services it is speaking to. When we first upgraded, our channelActive handler for mutual authentication lacked the expected SSLSession. The fix is simple: listen for the handshake completion event instead of assuming the TLS setup is complete on channelActive:

@Override public void userEventTriggered(ChannelHandlerContext ctx, Object evt)
    throws Exception {
  if (evt.equals(SslHandshakeCompletionEvent.SUCCESS)) {
    Principal peerPrincipal = engine.getSession().getPeerPrincipal();
    // Validate the principal
    // ...
  }
  super.userEventTriggered(ctx, evt);
}

Recycled Buffers Leaking NIO Memory

In addition to our normal JVM monitoring, we added monitoring of the size and amount of NIO allocations by exporting the JMX bean java.nio:type=BufferPool,name=direct since we want to be able to understand and alert on the direct memory usage by the new pooled allocator.

In one cluster, we were able to observe an NIO memory leak using this data. Netty provides a leak detection framework to help catch errors in managing the buffer reference counts. We didn’t get any leak detection errors because this leak was not actually a reference count bug!

Netty 4 introduces a thread-local Recycler that serves as a general purpose object pool. By default, the recycler is eligible to retain up to 262k objects. ByteBufs are pooled by default if they are less than 64kb: that translates to a maximum of 17GB of NIO memory per buffer recycler.

Under normal conditions, it’s rare to allocate enough NIO buffers to matter. However, without adequate back-pressure, a single slow reader can balloon memory usage. Even after the buffered data for the slow reader is written, the recycler does not expire old objects: the NIO memory belonging to that thread will never be freed for use by another thread. We found the recyclers completely exhausted our NIO memory space.

We’ve notified the Netty project of these issues, and several upcoming fixes will provide saner defaults and limit the growth of objects.

We encourage all users of Netty to configure their recycler settings based on the available memory and number of threads and profiling of the application. The number of objects per recycler can be configured by setting -Dio.netty.recycler.maxCapacity and the maximum buffer size to pool is configured by -Dio.netty.threadLocalDirectBufferSize. It’s safe to completely disable the recycler by setting the -Dio.netty.recycler.maxCapacity to 0, and for our applications, we have not observed any performance advantage in using the recycler.

We made another small but very important change in response to this issue: we modified our global UncaughtExceptionHandler to terminate the process if it encounters an error since we can’t reasonably recover once we hit an OutOfMemoryError. This will help mitigate the effects of any potential leaks in the future.

class LoggingExceptionHandler implements Thread.UncaughtExceptionHandler {
  private static final Logger logger = Logger.getLogger(LoggingExceptionHandler.class);

  /** Registers this as the default handler. */
  static void registerAsDefault() {
    Thread.setDefaultUncaughtExceptionHandler(new LoggingExceptionHandler());
  }

  @Override public void uncaughtException(Thread t, Throwable e) {
    if (e instanceof Exception) {
      logger.error("Uncaught exception killed thread named '" + t.getName() + "'.", e);
    } else {
      // Errors such as OutOfMemoryError are not recoverable: die fast.
      logger.fatal("Uncaught error killed thread named '" + t.getName() + "'. Exiting now.", e);
      System.exit(1);
    }
  }
}

Limiting the recycler fixed the leak, but this also revealed how much memory a single slow reader could consume. This isn’t new to Netty 4, but we were able to easily add backpressure using the channelWritabilityChanged event. We simply add this handler whenever we bind two channels together and remove it when the channels are unlinked.

/**
 * Observe the writability of the given inbound pipeline and set the {@link ChannelOption#AUTO_READ}
 * of the other channel to match. This allows our proxy to signal to the other side of a proxy
 * connection that a channel has a slow consumer and therefore should stop reading from the
 * other side of the proxy until that consumer is ready.
 */
public class WritabilityHandler extends ChannelInboundHandlerAdapter {
  private final Channel otherChannel;

  public WritabilityHandler(Channel otherChannel) {
    this.otherChannel = otherChannel;
  }

  @Override public void channelWritabilityChanged(ChannelHandlerContext ctx) throws Exception {
    boolean writable = ctx.channel().isWritable();
    otherChannel.config().setOption(ChannelOption.AUTO_READ, writable);
    super.channelWritabilityChanged(ctx);
  }
}

The writability of a channel will go to not writable after the send buffer fills up to the high water mark, and it won’t be marked as writable again until it falls below the low water mark. By default, the high water mark is 64kb and the low water mark is 32kb. Depending on your traffic patterns, you may need to tune these values.

If a promise breaks, and there’s no listener, did you just build /dev/null as a service?

While debugging some test failures, we realized that some writes were failing silently. Outbound operations notify their futures of any failures, but if each write failure has shared failure handling, you can instead wire up a handler to cover all writes. We added a simple handler to log any failed writes:

public class PromiseFailureHandler extends ChannelOutboundHandlerAdapter {
  private final Logger logger = Logger.getLogger(PromiseFailureHandler.class);

  @Override public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise)
      throws Exception {
    promise.addListener(future -> {
      if (!future.isSuccess()) {
        logger.info("Write on channel " + ctx.channel() + " failed", promise.cause());
      }
    });
    super.write(ctx, msg, promise);
  }
}

HTTP codec changes

Netty 4 has an improved HTTP codec with a better API for managing chunked message content. We were able to remove some of our custom chunk handling code, but we also found a few surprises along the way!

In Netty 4, every HTTP message is converted into a chunked message. This holds true even for zero-length messages. While it’s technically valid to have a 0 length chunked message, it’s definitely a bit silly! We installed object aggregators to convert these messages to non-chunked encoding. Netty only provides an aggregator for inbound pipelines: we added a custom aggregator for our outbound pipelines and will be looking to contribute this upstream for other Netty users.

There are a few nuances with the new codec model. Of note, LastHttpContent is also an HttpContent. This sounds obvious, but if you aren’t careful you can end up handling a message twice! Additionally, a FullHttpResponse is also an HttpResponse, an HttpContent, and a LastHttpContent. We found that we generally wanted to handle this as both an HttpResponse and a LastHttpContent, but we had to be careful to ensure that we didn’t forward the message through the pipeline twice.

Don’t do this

if (msg instanceof HttpResponse) {
  // ... handle the response ...
}
if (msg instanceof HttpContent) {
  // ... handle the content ...
}
if (msg instanceof LastHttpContent) {
  // Duplicate handling! This was already handled above as an HttpContent!
}
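One workable shape is to treat the instanceof checks as independent concerns and nest the end-of-message case inside the content branch, so a FullHttpResponse triggers each concern exactly once. A runnable sketch using stand-in marker interfaces (hypothetical, not the real Netty types):

```java
// Stand-in marker interfaces mirroring the shape of Netty 4's HTTP codec
// hierarchy -- hypothetical, not the real io.netty.handler.codec.http types.
interface HttpResponse {}
interface HttpContent {}
interface LastHttpContent extends HttpContent {}
interface FullHttpResponse extends HttpResponse, LastHttpContent {}

class CodecDemo {
  // Each concern fires at most once: headers for any HttpResponse,
  // content for any HttpContent, and end-of-message bookkeeping nested
  // inside the content branch so it is never handled twice.
  static String handle(Object msg) {
    StringBuilder handled = new StringBuilder();
    if (msg instanceof HttpResponse) {
      handled.append("headers ");
    }
    if (msg instanceof HttpContent) {
      handled.append("content ");
      if (msg instanceof LastHttpContent) {
        handled.append("end");
      }
    }
    return handled.toString().trim();
  }
}
```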

Another nuance we discovered in some test code: LastHttpContent may fire after the receiving side has already received the complete response if there is no body. In this case, the last content is serving as a sentinel, but the last bytes have already gone out on the wire!

Replacing the engine while the plane is in the air

In total, our change to migrate to Netty 4 touched 100+ files and 8k+ lines of code. Such a large change coupled with a new threading and memory model is bound to encounter some issues. Since 100% of our external traffic flows through this system, we needed a process to validate the safety of these changes.

Our large suite of unit and integration tests was invaluable in validating the initial implementation.

Once we established confidence in the tests, we began with a “dark deploy” where we rolled out the proxy in a disabled state. While it didn’t take any traffic, we were able to exercise a large amount of the new code by running health checks through the Netty pipeline to check the status of downstream services. We highly recommend this technique for safely rolling out any large change.

As we slowly rolled out the new code to production, we also relied on a wealth of metrics in order to compare the performance of the new code. Once we addressed all of the issues, we found that Netty 4 performance using the UnpooledByteBufAllocator is effectively identical to Netty 3. We’re looking forward to using the pooled allocator in the near future for even better performance.


We’d like to thank everyone involved in the Netty project. We’d especially like to thank Norman Maurer / @normanmaurer for being so helpful and responsive!



Introducing Cleanse: A Lightweight Dependency Injection Framework For Swift

Cleanse is a pure Swift dependency injection library.

Written by Mike Lewis.

Dependency Injection for All (Mobile Devices)

Several years ago, I was introduced to dependency injection (DI) while working on a Java service here. Our Java “Service Container” is built on top of Guice. After a small learning curve, it clearly became one of those technologies I couldn’t live without. DI makes software more loosely coupled and more testable, while requiring less annoying boilerplate code.

After working on a couple Java services, it was time for me to go back to iOS to build our lovely Square Appointments App in Objective-C. Moving to Objective-C meant giving up powerful DI frameworks such as Guice and Dagger. Yes, there were and still are a couple DI implementations for Objective-C, but we felt they lacked safety, excessively used the Objective-C runtime, or were just too verbose to configure. Well, that didn’t stop us. We came up with a Frankensteinian solution that used LibClang and generated code based on “Annotations”. It was modeled after Dagger 1 and gave compile-time checking and several other benefits.

About a year ago, we started adopting Swift. Having a DI library based on Objective-C started to show its weaknesses, even after we added support for creating modules in Swift. We couldn’t inject things that didn’t bridge to Objective-C, such as structs or Observables. The code generation solution required a bit of Xcode trickery, such as maintaining custom build rules and having to manually touch files to trigger recompilation. We didn’t want to give up DI, though, since we also hate writing boilerplate code!

Enter Cleanse

In an ideal world, we’d have implemented something like Dagger 2 for Swift. Unfortunately, Swift lacks tooling such as annotation processors, annotations, and a Java-like reflection library. Swift does, however, have an incredibly powerful type system.

We leverage this type system to bring you Cleanse, our dependency injection framework for Swift. Configuring Cleanse modules may look very similar to configuring Guice modules. However, a lot of inspiration has also come from Dagger 2, such as components and the lack of ObjectGraph/Injector types, which allow for unsafe practices. This lets us have a modern DI framework with a robust feature set that we can use today!

A Quick Tour

We made a small example playground that demonstrates wiring up an HTTP client to make requests to GitHub’s API.

Unlike the two de facto Java DI frameworks, Dagger and Guice, which support several types of bindings and injections, Cleanse operates primarily on factory injection. In this context, a factory is just a function type that takes 0..N arguments and returns a new instance. Conveniently, if one has a GithubListMembersServiceImpl type, GithubListMembersServiceImpl.init is a factory for GithubListMembersServiceImpl, which makes it almost equivalent to constructor injection.

Let’s say we have a protocol defined which should list the members of a GitHub organization.

protocol GithubListMembersService {
  func listMembers(organizationName: String, handler: [String] -> ())
}

And we implement this protocol as GithubListMembersServiceImpl:

struct GithubListMembersServiceImpl : GithubListMembersService {
  // We require a github base URL and an NSURLSession to perform our task
  let githubURL: TaggedProvider<GithubBaseURL>
  let urlSession: NSURLSession

  /// Lists members of an organization (ignores errors for sake of example)
  func listMembers(organizationName: String, handler: [String] -> ()) {
    let url = githubURL.get().URLByAppendingPathComponent("orgs/\(organizationName)/public_members")

    let dataTask = urlSession.dataTaskWithURL(url) { data, response, error in
      guard let data = data, result = (try? NSJSONSerialization.JSONObjectWithData(data, options: [])) as? [[String: AnyObject]] else {
        return  // ignoring errors for the sake of the example
      }

      handler(result.flatMap { $0["login"] as? String })
    }

    dataTask.resume()
  }
}

We want this implementation to be provided whenever a GithubListMembersService is requested. To do this, we configure it in a Module. Modules are the building blocks for configuring Cleanse.

struct GithubAPIModule : Module {
  func configure<B : Binder>(binder binder: B) {
    // Configure GithubListMembersServiceImpl to be the implementation of GithubListMembersService
    binder
      .bind(GithubListMembersService.self)
      .to(factory: GithubListMembersServiceImpl.init)

    // While we're at it, configure the github Base URL to be "https://api.github.com"
    binder
      .bind(NSURL.self)
      .tagged(with: GithubBaseURL.self)
      .to(value: NSURL(string: "https://api.github.com")!)
  }
}

You may have noticed that GithubListMembersServiceImpl requires an NSURLSession. To satisfy that requirement, we’ll need to configure that as well. Let’s make another module:

struct NetworkModule : Module {
  func configure<B : Binder>(binder binder: B) {
    // Make `NSURLSessionConfiguration.ephemeralSessionConfiguration` be provided
    // when one requests a `NSURLSessionConfiguration`
    binder
      .bind(NSURLSessionConfiguration.self)
      .to(factory: NSURLSessionConfiguration.ephemeralSessionConfiguration)

    // Make `NSURLSession` available.
    // It depends on `NSURLSessionConfiguration` configured above (`$0`)
    binder
      .bind(NSURLSession.self)
      .to {
        NSURLSession(
          configuration: $0,
          delegate: nil,
          delegateQueue: NSOperationQueue.mainQueue()
        )
      }
  }
}

We can assemble these two modules in a Component. A Component is essentially a Module which also declares a Root type for an object graph. In this case, we want our root to be GithubListMembersService.

struct GithubListMembersComponent : Component {
  // When we build this component we want `GithubListMembersService` returned
  typealias Root = GithubListMembersService

  func configure<B : Binder>(binder binder: B) {
    // Install both the modules we have made
    binder.install(module: NetworkModule())
    binder.install(module: GithubAPIModule())
  }
}

Now it’s time to build the component and get our GithubListMembersService! We call build() on an instance of our component. This returns the Root type, GithubListMembersService.

let membersService = try! GithubListMembersComponent().build()

If there are validation errors constructing our object graph, they would be thrown from the build() method.

Now, let’s see who the members of Square’s GitHub org are:

membersService.listMembers("square") { members in
  print("Fetched \(members.count) members:")

  for (i, login) in members.enumerate() {
    print("\(i + 1). \(login)")
  }
}

A more detailed getting started guide can be found in the README or by taking a look at our example app.

The Code

One can check out Cleanse on GitHub.

Cleanse is a work in progress, but we feel it has the building blocks for a very powerful and developer friendly DI framework. We’d like to encourage community involvement for developing more advanced features (e.g. Subcomponents like in Dagger 2). Its current implementation supports both Swift 2.2 and the open source version of Swift 3.

We’re in the process of migrating the Square Appointments App entirely to Cleanse for all our DI needs. Expect to see exciting new features, improvements, more documentation, examples, and maybe even some more articles over the coming weeks and months.


Introducing Square’s Register API for Android

Developers can now build custom Android point-of-sale applications that take swipe, dip, or tap payments through Square hardware, and integrate with Square’s software and services.

Written by Pierre-Yves Ricau.

Following our launch of Register API for iOS in March, developers can now build custom Android point-of-sale applications that take swipe, dip, or tap payments through Square hardware and integrate with Square’s software and services. This builds on our existing API offerings for Square Register and eCommerce.

Register API for Android

The Square Register API lets you focus on what you do best: creating an amazing point-of-sale experience for your merchants while Square takes care of moving the money. You can build a custom point of sale with specific features for your business’ needs, or start a technology company for a new point of sale and sell it to businesses. Get started today by creating your Android application!

Maybe you will build a custom point of sale for lawn care service companies, an in-store on-floor retail business checkout, an optimized cart and membership funnel for wineries, a self checkout for doggie daycares — your imagination is the limit! Your app doesn’t have to handle any payments information, which makes PCI compliance a non-event. And you won’t need to think for a second about integrating with hardware card readers. Build your custom app and distribute it like normal on the Play Store. When it is time for your app to initiate a payment, call our SDK with an amount to start the Square Register app on the payment screen. The buyer completes the payment in Register (by swiping, tapping, dipping, or keying in the card) and then focus and control automatically returns back to your app with the result of the charge. Thanks to Square Register, all the money movement heavy lifting is taken care of. Android Register API supports all our hardware, including the new Square Contactless + Chip Reader.

To start taking payments, it’s as simple as three lines of code:

ChargeRequest chargeRequest = new ChargeRequest.Builder(1_00, USD).build();
Intent chargeIntent = registerClient.createChargeIntent(chargeRequest);
startActivityForResult(chargeIntent, CHARGE_REQUEST_CODE);

Square Register will come to the foreground and complete the payment on your behalf. Once that’s done, we’ll return the payment result to your app.

@Override protected void onActivityResult(int requestCode, int resultCode,
    Intent data) {
  if (requestCode == CHARGE_REQUEST_CODE) {
    if (resultCode == Activity.RESULT_OK) {
      ChargeRequest.Success success = registerClient.parseChargeSuccess(data);
      // Handle the successful charge...
    } else {
      ChargeRequest.Error error = registerClient.parseChargeError(data);
      // ...or surface the error.
    }
  } else {
    super.onActivityResult(requestCode, resultCode, data);
  }
}

Pricing is the same as other payments completed using Square Register. The Register API for Android is currently only available in the US and Canada, with other markets to quickly follow.

We are busy building out Square’s commerce platform, to give merchants solutions to help them easily run their business. We’re eager to hear your feedback! You can reach us at [email protected] and follow @SquareDev on Twitter for more updates about Square’s developer platform and community. Get started today by creating your Android application!
