Mobile App Performance

Update 3/16/15: I updated my performance numbers in a follow-up post.

A few months ago, I decided to write my very first mobile app, and maybe naively, I wanted to make it cross-platform. Without going into great detail, I started writing a location tracking app for auto racing, and to ensure that I would be processing the GPS coordinates the same on both iOS and Android, I had a goal of writing the logic once. Given the current mobile development tools available, I believed this would be possible without much trouble.

I ran across Google's J2ObjC first and started working with it. However, because various bugs (which have since been fixed) held me back, I began looking into other tools for developing cross-platform mobile applications. Since many of these tools allow you to write code in non-native platform languages, I began worrying about the performance of the various tools and decided to test them. I shared some of my findings with @brunobowden, a fellow J2ObjC hacker and Xoogler, and he suggested I post my results.

These are the tools I surveyed.

  • J2ObjC — Google's transpiler that translates Java to Objective-C. It also includes basic Java libraries ported to Objective-C. Basic usage is to develop the app logic in Java to be shared between your Android and iOS apps. The Android app uses the logic natively. For iOS, you translate the Java logic to Objective-C and import the translated code into your Swift or Objective-C project. J2ObjC is open-sourced on GitHub and freely available.
  • Xamarin — A C# implementation that targets iOS, Android, and Windows Phone. Using Xamarin Studio, you can write your entire app in C# and compile it down to native bytecode. For iOS, Xamarin apps use SGen or Boehm for garbage collection. Pricing starts at $25 per month.
  • RoboVM — A Java implementation that targets iOS. RoboVM command-line tools compile Java into native iOS bytecode. RoboVM implements its own garbage collector on iOS. Like J2ObjC, the Android app would be native. Pricing is unavailable as RoboVM is in beta.
  • RubyMotion — A Ruby implementation that targets iOS, Android, and OS X. RubyMotion command-line tools compile Ruby apps into native bytecode for iOS and Android. Ruby gems must target RubyMotion, which precludes many existing gems. RubyMotion implements its own garbage collector on iOS. Pricing starts at $15 per month.

A quick thing about me

Given that I'm not a prolific medium-er, twitter-er, share-er, etc, I thought I would “color” this post with my background. I say “color” since I am not attempting to back up what I have found. Instead, I hope you see how I arrived at my conclusions and that, as always, YMMV.

I was one of the original Google engineers back in 1999. I worked on a whole range of things, including scaling our index from 50M to 2B+ docs, creating unified search, and creating enterprise search. Since leaving Google, I haven't been actively coding on a daily basis, except for a one-year stint helping a friend launch Roostify. Other things like family, angel investing, and car racing (yes, that's me above) took priority. I will be the first to admit I am a little rusty and out of practice. Still, I think this post will be informative, as I believe some signal is better than no signal.

Testing Methodology

My two goals in testing were to measure the computational performance of the generated code by each tool and to measure how much memory each variation of the app would use. I extracted the computational logic from my location tracking app and ported it to Java, C#, Swift and Ruby. Unit tests were ported to all languages to verify correctness of the logic in each tool.

I varied the number of iterations through the logic based on whether I was measuring computational performance or memory management. One iteration of the logic consisted of the following steps:

  1. Create a “session” object
  2. For each GPS coordinate in a list of 600, create another object, run some computations, and add it to a list in the “session” object.
  3. Remove the strong reference to the “session” object.
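Sketched in Java, one iteration looks roughly like the following. The `Session` and `GpsPoint` classes and the per-point distance computation are illustrative stand-ins, not my app's actual code:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-ins for the app's real classes; the names and the
// per-point computation are placeholders, not my actual logic.
class GpsPoint {
    final double lat, lon;
    double distanceFromPrev;

    GpsPoint(double lat, double lon) {
        this.lat = lat;
        this.lon = lon;
    }
}

class Session {
    final List<GpsPoint> points = new ArrayList<>();
}

class Iteration {
    // One iteration: build a session, process the coordinates, drop the reference.
    static int runOnce(double[][] coords) {
        Session session = new Session();          // 1. create the "session" object
        GpsPoint prev = null;
        for (double[] c : coords) {               // 2. one object + computation per point
            GpsPoint p = new GpsPoint(c[0], c[1]);
            if (prev != null) {
                double dLat = p.lat - prev.lat;
                double dLon = p.lon - prev.lon;
                p.distanceFromPrev = Math.sqrt(dLat * dLat + dLon * dLon);
            }
            session.points.add(p);
            prev = p;
        }
        int processed = session.points.size();
        session = null;                           // 3. remove the strong reference
        return processed;
    }
}
```

Removing the last strong reference is what gives each environment's memory management (ARC or a garbage collector) the chance to reclaim the whole session.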

My testing environment:

  • OS X 10.10.2 on a 2012 MacBook Pro
  • Xcode 6.1.1 (6.3 Beta for Swift 1.2)
  • Android Studio 1.0.2
  • RubyMotion 3.5
  • Xamarin Studio 3.7
  • RoboVM (beta)
  • J2ObjC 0.9.5
  • iPhone 6 / iOS 8.1.2
  • Moto X 2014 / Android 5.0

iOS Computational Performance

Using J2ObjC, I created a pure Objective-C app to serve as the benchmark on iOS. I also ported the logic to Swift to see how Apple's new language would perform. To test computational performance, I looped through the logic 1,000 times. The computation was done on the main UI thread, and I ran the test 10 times to get an average. All code was compiled with release optimization (i.e., -Ofast) when available. Here are the results from the iOS apps running on my iPhone 6.
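The harness itself is just a timed loop. Here is a minimal Java sketch, with `workload` as a placeholder for one iteration of my logic (each real app ran the equivalent loop in its own language):

```java
class Benchmark {
    // Placeholder standing in for one iteration of the GPS logic.
    static double workload() {
        double acc = 0;
        for (int i = 0; i < 600; i++) {
            acc += Math.sqrt(i);
        }
        return acc;
    }

    // Wall-clock average over `runs` runs, each executing `iterations` iterations.
    static double averageSeconds(int runs, int iterations) {
        double total = 0;
        for (int r = 0; r < runs; r++) {
            long start = System.nanoTime();
            for (int i = 0; i < iterations; i++) {
                workload();
            }
            total += (System.nanoTime() - start) / 1e9;
        }
        return total / runs;
    }
}
```

`System.nanoTime()` is used because it is a monotonic elapsed-time clock, unlike `System.currentTimeMillis()`.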

iOS Computational Performance in seconds (lower is better)
Average iOS Computational Performance in seconds (lower is better)
  • J2ObjC/noARC — As expected, this pure Objective-C app completed the tests fastest. The logic was written in Java and translated to Objective-C using J2ObjC without ARC. There may be more performance to be gained had the logic been written in Objective-C from the start. If someone wants to implement the logic in Objective-C, I would be interested in testing that.
  • J2ObjC/ARC — I wanted to see how much overhead ARC would cause. The Java logic was translated with J2ObjC using the -use-arc flag and ARC was enabled in the Xcode project Build Settings. As we can see, ARC causes at least a 50% performance hit in my test.
  • Swift 1.1 — When Apple launched Swift, they claimed performance increases over Objective-C. I have not seen this: my tests show a pure Swift 1.1 app running almost four times slower than the J2ObjC-translated app.
  • Swift 1.2 — Apple released Swift 1.2 in Xcode 6.3 Beta just as I was finishing this post. With more claimed performance increases, I needed to find out if this was true. As you can see, huge improvements were made, as the Swift code now runs 2.5 times faster. It is still not as fast as J2ObjC, but at least it is competitive.
  • Swift 1.1/J2ObjC — Xcode allows Swift apps to include Objective-C code. In this bridged test app, the GPS points were first loaded in Swift and then sent to the non-ARC Objective-C (J2ObjC translated Java) logic. Not surprisingly, it is quicker than pure Swift, but crossing the bridge comes at some expense.
  • Swift 1.2/J2ObjC — Swift 1.2 shows improvement in this hybrid app too. The almost 40% increase in performance brings this type of app much closer to the performance of a J2ObjC app.
  • Xamarin — The Xamarin iOS app was compiled with “Use LLVM optimizing compiler” and “Use the SGen garbage collector”. Xamarin is the first non-native platform language compiler for iOS, and I am impressed by its performance. Though not quite as fast as J2ObjC, Xamarin is much faster than pure Swift 1.1 and even better than the Swift 1.1/Objective-C hybrid. Xamarin is on par with Swift 1.2 and slightly slower than Swift 1.2/J2ObjC. In some instances, Xamarin was even quicker than J2ObjC, as noted in runs #2 and #3. However, this was not consistent, and I am still speculating about why this is the case.
  • RoboVM — RoboVM is a huge surprise. The performance is on par with J2ObjC consistently, and it’s still in beta.
  • RubyMotion — RubyMotion's iOS performance is horrible. RubyMotion is not even the youngest in the group, and with the number of “success stories”, I expected it to have reasonable performance. However, its performance is way off the mark.

Android Computational Performance

On Android, the Java logic was imported into a native app. The Xamarin Android app just needed a native Android UI. Unfortunately, the RubyMotion Android app could not be tested because RubyMotion doesn't implement the Math module on Android, which my logic depends heavily upon. I filed a bug report, and I will run the test when support is available. RoboVM does not have a test since it is a Java language compiler for iOS, which makes a RoboVM Android app the same as a native Android app. Like the iOS computational performance test, the Android apps executed 1,000 iterations of the logic. Both apps were compiled with release flags. Here are the results from the Android apps running on my Moto X.

Android Computational Performance in seconds (lower is better)
  • Java — Java was the native Android benchmark. Note, however, that the performance is not as good as on the iPhone 6. The difference is very similar to other testing between the two phones; for example, see the “Performance benchmarks” section of this iPhone 6 / Moto X comparison.
  • Xamarin — Xamarin, once again, shows how competitive it is by being barely slower than a native Android app. Xamarin also exhibits the same variance on Android, as it had on iOS.
  • RubyMotion — Unknown until RubyMotion fixes its Math module bug.

iOS Memory Performance

To measure memory performance, I increased the number of iterations from 1,000 to 10,000. This increase caused the apps to start crashing on my iPhone due to memory errors, so I ran each app in the simulator instead. Using Apple's Instruments tool, I took screenshots of the memory usage. The table below summarizes the results.

iOS Memory Performance table

Launch is how much memory the app reserves for itself at startup. Peak is the highest amount of memory used. Settle is where memory usage ended up after the test's memory was released. Run time is how long the test took to run once; note that this is not reflective of raw performance, because Instruments was attached to the app to monitor memory usage.


J2ObjC

Once again, this pure Objective-C app was used as the benchmark. ARC was added to see how the memory usage was affected, but the memory usage was quite similar. Instead, the app took 50% longer to finish the test.

J2ObjC iOS memory profile
J2ObjC/ARC iOS memory profile


Swift

Both Swift 1.1 and 1.2 used about the same actual amount of memory, similar to J2ObjC, but clearly, Swift 1.2 was far more efficient: it allocated 104.99 MB while Swift 1.1 allocated almost 9 times that. The run time was also lightning quick at 1.6 s. With better overall memory usage than a J2ObjC app, Swift 1.2 became more reasonable very quickly.

Swift 1.1 iOS memory profile
Swift 1.2 iOS memory profile


Swift / J2ObjC

Not surprisingly, the memory profile was similar to the J2ObjC memory profile. Swift 1.2 had no noticeable impact on memory usage compared to Swift 1.1. However, Swift 1.2 performed almost 50% better.

Swift 1.1/J2ObjC iOS memory profile
Swift 1.2/J2ObjC iOS memory profile

Xamarin / RoboVM

Finally, I compared the non-native platform apps' memory usage. Both Xamarin and RoboVM showed amazing memory usage, barely using any memory to run the computation: 10.06 MB and 3.67 MB, respectively. Compared to their initial memory numbers, the amount of memory they used to run the test was negligible. They both finished the test much faster than the J2ObjC app, and reclaiming the memory took no time. Clearly, the garbage collectors for both apps were very aggressive, but it looked like neither sacrificed any performance. RoboVM was especially notable since it used much less memory at peak and in total than Xamarin.

Xamarin iOS memory profile
RoboVM iOS memory profile


RubyMotion

Unlike RubyMotion's computational performance, its garbage collector did a great job. Though not quite as good as Xamarin or RoboVM, RubyMotion still allocated within the range of J2ObjC, with a peak of 21.95 MB of memory used. Performance was still an issue, as it took over 10 minutes to execute. However, it should be noted that the performance problem was not completely contained within the app: the slow execution caused Instruments on my MacBook to use over 20 GB of memory recording the profile. With only 16 GB of available RAM, Instruments started swapping, which compounded the performance problem.

RubyMotion iOS memory profile

Android Memory Performance

Running the test on a Nexus 5 API 21 emulator, I took screenshots of each app’s memory profile in Memory Monitor inside Android Studio. I tested both a native Java app and a Xamarin Android app. As before, the RubyMotion app was not available. For the test, I loaded the app, executed the test, manually initiated garbage collection, and executed the test a second time.
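For reproducibility, the same sequence can be approximated in code. This Java sketch is a hypothetical helper, not what I actually ran; I triggered collection from Memory Monitor's UI, and `System.gc()` is only a request to the VM:

```java
class HeapProbe {
    // Approximate used heap, analogous to what Memory Monitor charts.
    static long usedHeapBytes() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    // Mirrors the manual procedure: run the test, request a GC, run it again.
    static long[] profile(Runnable test) {
        long[] snapshots = new long[3];
        test.run();
        snapshots[0] = usedHeapBytes();  // heap after the first run
        System.gc();                     // stand-in for Memory Monitor's manual GC
        snapshots[1] = usedHeapBytes();  // heap after collection
        test.run();
        snapshots[2] = usedHeapBytes();  // heap after the second run
        return snapshots;
    }
}
```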


Java

The Java profile below shows the app started out with a heap a little over 1 MB. During the test, the heap got as high as 2.7 MB and remained at that size until I manually triggered garbage collection, which dropped the heap down to 1.75 MB. The second test execution showed the same heap usage as the first, and afterwards, the heap again returned to 1.75 MB. Execution times for both tests were a little over 13 seconds.

Java Android memory profile


Xamarin

The Xamarin app started off with a high heap size (compared to Java) of about 6.5 MB. Though it was not readily apparent, I executed the test at about the 7-second mark. The test took a little over 9 seconds to execute, which is a little quicker than Java. At 20 seconds, I ran the garbage collector, and the app released a lot of memory, bringing the heap down to 1.5 MB. At about 27 seconds, I ran the test again, but the heap did not grow. Both tests took about the same time to execute, so the heap size did not seem to have an impact. I am not sure why Xamarin needed such a large heap in the beginning.

Xamarin Android memory profile


Caveats

  1. My test case is unique. As with any performance measurement, your results will depend on what you are testing. It is very possible that Xamarin and RoboVM perform super well for my situation but fail horribly for yours.
  2. I believe my Objective-C test app can optimize its memory usage. It did not make sense that it used so much memory; the full load on the CPU could have prevented it from reclaiming memory.
  3. I do not have a lot of experience developing mobile apps. Though I have some doubt, there is a chance my test apps could be further optimized with platform-specific tricks that I am unaware of.

Final Thoughts

I am impressed with both Xamarin and RoboVM. Both have shown that it is definitely possible to develop mobile apps in non-native platform languages. This just makes RubyMotion’s performance all the more disappointing.

It is great to see Swift performance increase with the latest release. When I tested Swift 1.1, I was disappointed to see its performance come in worse than even Xamarin's and RoboVM's. However, it is great to see Swift 1.2 close the gap on Objective-C, and encouraging that Apple put this release out in a timely fashion.

If you are looking to use the native development environments for both iOS and Android while attempting to implement cross-platform logic, J2ObjC would be a great choice.

Based on my test results, my tool choice might be a moot point. My real-world testing example showed that the majority of the tools tested can handle 1,000 x 600 = 600,000 GPS coordinates per second. This is far faster than the GPS update rate of a modern smartphone (1 Hz) or an external GPS unit (50 Hz). I could make my logic 600,000 / 50 = 12,000 times more expensive before I start encountering the possibility of missing a GPS update.

Having said that, the performance-minded engineer in me plans on developing the rest of my app in Xamarin. If RoboVM were out of beta and had better support for storyboards, I would have considered using it. I never thought I would be a C# developer, as it is hard to get over the Microsoft stigma, but alas, these Xamarin guys know what they are doing.


A special thanks to @brunobowden for edits and suggestions. Also, thanks to @tomball and @nadavrot for performance suggestions.


