Recap of sessions: MobileIoTCon 2016, part 2

Satyajit Malugu · Published in mobile-testing · Apr 30, 2016 · 5 min read

This post is a follow-up to my earlier recap, which covered logistics and keynotes. In this post I'd like to go in depth on the concurrent sessions that I attended.

Uber's app testing

This was the first mobile testing session of the conference and definitely one of the highlights. The speakers did a good job of introducing the problems that come with Uber's scale and gave an in-depth review of their orchestration program, Octopus.

Their main problem was to test both the rider and driver apps at the same time and make sure they stay in sync. Using this tool they were able to simulate not only 1:1 scenarios but also 1:many scenarios for UberPool. They showed a cool demo of one driver app in sync with four rider apps. Octopus seems to handle creating multiple emulators and simulators, running them in different languages, parallelizing tests, and so on (a rough sketch of the core idea follows below).
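For a flavor of what multi-app orchestration looks like, here is a minimal sketch of driving a rider session and a driver session from one test with the Appium Java client. This is my own illustration, not Uber's Octopus code; the server URLs, capabilities, app paths, and accessibility ids are all assumptions.

```java
import io.appium.java_client.MobileBy;
import io.appium.java_client.MobileElement;
import io.appium.java_client.android.AndroidDriver;
import org.openqa.selenium.remote.DesiredCapabilities;

import java.net.URL;

public class RiderDriverSyncSketch {
    public static void main(String[] args) throws Exception {
        // Capabilities for two separate emulators; device names and apk paths are made up.
        DesiredCapabilities riderCaps = new DesiredCapabilities();
        riderCaps.setCapability("platformName", "Android");
        riderCaps.setCapability("deviceName", "emulator-5554");
        riderCaps.setCapability("app", "/builds/rider.apk");

        DesiredCapabilities driverCaps = new DesiredCapabilities();
        driverCaps.setCapability("platformName", "Android");
        driverCaps.setCapability("deviceName", "emulator-5556");
        driverCaps.setCapability("app", "/builds/driver.apk");

        // One Appium server per device keeps the two sessions independent.
        AndroidDriver<MobileElement> riderApp =
                new AndroidDriver<>(new URL("http://localhost:4723/wd/hub"), riderCaps);
        AndroidDriver<MobileElement> driverApp =
                new AndroidDriver<>(new URL("http://localhost:4725/wd/hub"), driverCaps);

        try {
            // Drive both apps from the same test so their state can be checked in lock-step.
            riderApp.findElement(MobileBy.AccessibilityId("request_ride")).click();
            driverApp.findElement(MobileBy.AccessibilityId("accept_ride")).click();
            // ...assert here that both apps now show the same trip state...
        } finally {
            riderApp.quit();
            driverApp.quit();
        }
    }
}
```

Scaling the same idea to 1:many (one driver, several riders) is mostly a matter of holding a list of sessions instead of two variables and parallelizing the setup.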

I can't help but get the feeling that, like many other tech companies (including mine), Uber's testing team is more obsessed with test automation than with testing.

Links

  • Talk
  • Octopus tool (they plan to open source it, so keep an eye out)
  • Author's Twitter handle: Apple Chow

Lessons for IoT testing from mobile testing

Steven Winter's talk was full of wisdom gained over his years managing QA organizations. He did a great job of summarizing how the focus of mobile testing and automation has evolved over the years and how we can apply those lessons to IoT testing. The gist: IoT testing is not different; it can be done using the same principles, but with extra focus on scale and security.

Testing mobile SDKs at Brightcove

As a mobile tester who has only dealt with testing native apps, I found that this talk from Jim gave good insight into how SDK testing is done. An SDK is something that developers of native apps consume, so an SDK team faces the same challenges as mobile testers, and then some, because you are never sure how your clients are going to use the SDK. Also, Brightcove ships a video player, so they definitely have to test video rendering and the associated intricacies of codecs, playback formats, API versions, and so on.

Their team performs a lot of functional testing using Android's instrumentation framework and Apple's equivalent (a minimal sketch follows below). He also showed how they refactored their test cases and how they made their testing framework scalable.
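As a rough illustration of the on-device side, an Android instrumentation test might look like the sketch below. This is not Brightcove's code; the SDK entry point is a hypothetical stand-in, and the support-library test runner reflects the tooling of that era.

```java
import android.content.Context;
import android.support.test.InstrumentationRegistry;
import android.support.test.runner.AndroidJUnit4;

import org.junit.Test;
import org.junit.runner.RunWith;

import static org.junit.Assert.assertNotNull;

@RunWith(AndroidJUnit4.class)
public class PlayerSdkInstrumentationTest {

    @Test
    public void sdkGetsARealAppContext() {
        // Instrumentation tests run on a device or emulator, so the SDK can be
        // exercised against a real app context, real codecs, and real API levels.
        Context context = InstrumentationRegistry.getTargetContext();
        assertNotNull(context.getPackageName());

        // Hypothetical SDK entry point standing in for the player under test:
        // VideoPlayerSdk sdk = new VideoPlayerSdk(context);
        // assertNotNull(sdk);
    }
}
```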

Shift-left mobile testing

Well, this was my talk and obviously the best of them all ;) As a speaker, I think the talk went well; there were a lot of questions at the end. There were moments where I felt I lost the crowd, but switching topics got their attention back. The primary takeaway from this talk was to think about test automation as a whole (unit, functional, partial, and UI tests), not just UI tests. I then gave a few steps on how to skew the test pyramid back toward its ideal shape.

Thanks Pablo for the pics!

Test infrastructure at MathWorks

Ankit and Binod did a good job of explaining their mobile testing infrastructure, which scales to multiple test groups within their company. Their problem statement was to fit mobile infrastructure into the company's existing CI pipeline, which required a few customizations. They wrapped Appium to provide uniform selectors in Java and Swift (a sketch of the Java side is below). Emulators and simulators come from a pool of machines, and their system 'loans' them out for running tests. Some good ideas and lessons on how to do this in a large, legacy, multi-product company.
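Here is a minimal sketch of what a uniform-selector wrapper could look like on the Java side, assuming an Appium-backed driver. It is my own illustration of the idea, not MathWorks' framework; the class and method names are made up.

```java
import io.appium.java_client.AppiumDriver;
import io.appium.java_client.MobileBy;
import io.appium.java_client.MobileElement;

// Thin wrapper so test code uses one platform-agnostic selector call,
// whether the underlying session is an AndroidDriver or an IOSDriver.
public class UniformSelectors {

    private final AppiumDriver<MobileElement> driver;

    public UniformSelectors(AppiumDriver<MobileElement> driver) {
        this.driver = driver;
    }

    // Accessibility ids map to content-description on Android and to
    // accessibilityIdentifier on iOS, so one id can serve both platforms.
    public MobileElement byId(String accessibilityId) {
        return driver.findElement(MobileBy.AccessibilityId(accessibilityId));
    }

    public void tap(String accessibilityId) {
        byId(accessibilityId).click();
    }

    public void type(String accessibilityId, String text) {
        byId(accessibilityId).sendKeys(text);
    }
}
```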

XCUITest for UI testing in iOS

I only attended this session partially, but judging by the slides and the audience engagement, this talk will be useful and practical after the conference. Jason walked through a lot of code and illustrated best practices in Swift using the new XCUITest framework. He showed page objects, composition, and how his company started investing in this Swift framework. Lots of practical tips and showcasing of what is possible with the new toolset.

Also, Jason was very active on Twitter during the conference, tweeting with #MobileIotCon.

Bing's infrastructure for mobile testing

Danni is my wife's colleague at Microsoft, and we recognized each other just before the start of the conference. I knew Bing had a mature infrastructure and an Appium-based framework for their native and mobile web automation, and Danni gave a good overview of it. She also talked at length about when to use real devices versus simulators and how Bing makes those choices. Lots of practical wisdom for anyone looking to create a testing infrastructure at scale.

She was a confident speaker, stayed on topic, and covered a lot of ground.

It looks like the theme of the conference was mobile infrastructure, though every solution seemed very particular to its company's needs rather than something that could be shared.

Combinatorial test patterns for mobile

Jon Hager is a veteran software tester with decades of testing experience and book(s) to his name. The state space for mobile testing, given the devices, OS versions, chipsets, hardware, etc., seems humongous; how can one test all of it? Jon has a scientific solution for it: combinatorial test matrix generation. His approach is agnostic to test execution and concentrates only on test case generation based on math (pairwise testing). He walked us through a tool that generates these cases; its algorithm reduced the test cases from 65k to roughly 150.

Though it would be a little hard to implement in a pragmatic setting, it is a very useful way of thinking about test case generation that avoids duplication and attacks scale; a toy sketch of the pairwise idea follows.
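To make the pairwise idea concrete, here is a toy Java sketch, not Jon's tool: it keeps a full combination only if it covers at least one value pair not yet seen, which already shrinks the suite well below the exhaustive count (purpose-built pairwise generators such as IPOG do much better). The parameter values are made up for illustration.

```java
import java.util.*;

public class PairwiseSketch {

    public static void main(String[] args) {
        // Hypothetical mobile test dimensions; the real state space is much larger.
        String[][] params = {
            {"Pixel", "Galaxy S7", "iPhone 6s", "Nexus 5"},  // device
            {"OS v1", "OS v2", "OS v3", "OS v4"},            // OS version
            {"WiFi", "LTE", "3G"},                           // network
            {"en", "de", "ja"}                               // locale
        };

        List<int[]> all = cartesianProduct(params);
        Set<String> uncovered = allPairs(params);

        // Greedy selection: keep a combination only if it covers a new pair.
        List<int[]> suite = new ArrayList<>();
        for (int[] combo : all) {
            if (uncovered.removeAll(pairsOf(combo))) {
                suite.add(combo);
            }
            if (uncovered.isEmpty()) break;
        }

        System.out.println("Exhaustive combinations: " + all.size());
        System.out.println("Greedy pairwise suite:   " + suite.size());
    }

    // Every full combination of parameter values, encoded as value indices.
    static List<int[]> cartesianProduct(String[][] params) {
        List<int[]> result = new ArrayList<>();
        result.add(new int[0]);
        for (String[] values : params) {
            List<int[]> next = new ArrayList<>();
            for (int[] prefix : result) {
                for (int v = 0; v < values.length; v++) {
                    int[] combo = Arrays.copyOf(prefix, prefix.length + 1);
                    combo[prefix.length] = v;
                    next.add(combo);
                }
            }
            result = next;
        }
        return result;
    }

    // Every (parameter, value) x (parameter, value) pair that must be covered.
    static Set<String> allPairs(String[][] params) {
        Set<String> pairs = new HashSet<>();
        for (int p = 0; p < params.length; p++) {
            for (int q = p + 1; q < params.length; q++) {
                for (int v = 0; v < params[p].length; v++) {
                    for (int w = 0; w < params[q].length; w++) {
                        pairs.add(p + ":" + v + "|" + q + ":" + w);
                    }
                }
            }
        }
        return pairs;
    }

    // The pairs exercised by one concrete combination.
    static Set<String> pairsOf(int[] combo) {
        Set<String> pairs = new HashSet<>();
        for (int p = 0; p < combo.length; p++) {
            for (int q = p + 1; q < combo.length; q++) {
                pairs.add(p + ":" + combo[p] + "|" + q + ":" + combo[q]);
            }
        }
        return pairs;
    }
}
```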

Summary

Overall, the conference was packed with quality, like-minded folks who are solving mobile testing problems today. I made a few contacts and learned about others' problems and how they are approaching them. Best of all, I could relate to most of these people; that feeling is why I come to conferences.
