AnDevCon 2017 — What I learned

I spent the last two days at AnDevCon 2017, hosted in Reston, VA. The quality of the speakers was quite good in general. Unfortunately, I couldn’t go to some of the talks. I wish I could split into smaller Manuels sometimes to watch them all!

This is a brief summary of the talks I went to and what I learned from them. If you’re reading this and attended the conference, and went to talks I couldn’t make it to, please don’t hesitate to contact me so we can include what you learned in this article! That’d be awesome!

Without further delay… here it is!

Bootstrapping IoT products with Google

by Dave Smith

Android Things doesn’t have the Application layer, since your app is the only one running on the device. There are also some limitations in the Framework layer: Content Providers and Wallpapers are no longer applicable. For example, you can use Notifications, but they’re not guaranteed to be shown: the device might not have a display, so the notification has nowhere to go.

Permissions are automatically granted, and you have Google Play Services available with everything that entails.

You have to specify a single entry point to your application; otherwise, the system will behave oddly.

You have two layers for interacting with Android Things hardware: Peripheral I/O, and User Drivers, which connects to the hardware and binds its data to the standard Android APIs.
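As a sketch of the Peripheral I/O side, this is roughly what toggling a pin looks like. The pin name "BCM6" is an assumption; real names depend on your board:

```java
import java.io.IOException;
import com.google.android.things.pio.Gpio;
import com.google.android.things.pio.PeripheralManagerService;

// Hypothetical sketch: drive a GPIO pin from your single entry-point Activity.
// "BCM6" is an assumed pin name; the actual names depend on your board.
void blinkOnce() throws IOException {
    PeripheralManagerService service = new PeripheralManagerService();
    Gpio led = service.openGpio("BCM6");
    try {
        led.setDirection(Gpio.DIRECTION_OUT_INITIALLY_LOW);
        led.setValue(true); // drive the pin high, e.g. lighting an LED
    } finally {
        led.close(); // always release the peripheral when you're done
    }
}
```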

In my opinion, one of the best features is the Android Things console. It lets you keep your devices updated (app updates, platform updates, security patches…). It will also include metrics in the future!

Maintainable Espresso Tests with Robots and Screenshots

by Sam Edwards

Sam walked us through how to create a deterministic UI automation testing approach with Espresso. For that, the Robot pattern is key!

The Robot Testing pattern is a test architecture pattern that focuses on maintainability. You extract all the logic for interacting with the UI into a Robot, so when something breaks, you only have to change the Robot instead of every test that uses that screen.
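Here’s the shape of the pattern stripped of Espresso specifics: a plain-Java sketch where a Map stands in for the screen. In a real test, the robot’s methods would wrap onView(...).perform(...) calls instead; the hypothetical LoginRobot and its method names are my own:

```java
import java.util.HashMap;
import java.util.Map;

// A plain-Java sketch of the Robot pattern. In a real Espresso test the
// methods would wrap onView(...).perform(...) calls; here a Map stands in
// for the screen so the shape of the pattern is easy to see.
class LoginRobot {
    private final Map<String, String> screen = new HashMap<>();

    LoginRobot typeUsername(String name) { screen.put("username", name); return this; }
    LoginRobot typePassword(String pw)   { screen.put("password", pw);   return this; }
    LoginRobot tapLogin()                { screen.put("state", "logged-in"); return this; }

    // Assertions about the screen live in the robot too, so tests read as intent.
    boolean isLoggedIn() { return "logged-in".equals(screen.get("state")); }
}
```

A test then reads like a sentence, e.g. `new LoginRobot().typeUsername("manuel").typePassword("secret").tapLogin()`, and if the login screen changes, only the robot changes.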

What about screenshots? Well, you can get more information in the awesome talk he gave at Droidcon NYC 2016. It’s hard for product people to value the amount of time you spend writing tests, right? That’s because all they can see is console output. Screenshots are a game-changer: with them, they can see what’s actually being tested.

He explained how to use Spoon and Falcon to capture the state of your app while running tests, and then present the captures in a nice dashboard. They also help solve the fragmentation problem: using the Android emulator, you can take screenshots across all kinds of devices with any OS version, screen size, etc.
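Capturing a screenshot with Spoon is a one-liner at the point in the test you care about; the tag name here is arbitrary:

```java
import com.squareup.spoon.Spoon;

// Inside an instrumentation test, after driving the UI to the state you
// want to document; "login_success" is just an arbitrary tag.
Spoon.screenshot(activity, "login_success");
// Spoon groups the shots per device and API level into an HTML report.
```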

Sam in action

Putting the “Pro” in Proximity: Interactive Demos of the Nearby API

by Chad Schultz

This talk was about what’s available in Google’s Nearby API. There are three different types: Messages, Notifications and Connections.

One of the takeaways is that it’s available for both Android and iOS.

Basically, you can connect nearby devices so they can communicate with each other by sending messages, notifications or data. The possibilities are endless: real-time collaboration, broadcasting a resource, file sharing, push notifications, etc.

Everything is based on location, and you do need to ask for some permissions (in this case, by enabling autoManage()). How it finds devices is not something we should worry about: internally, it uses Bluetooth and WiFi to create those connections.

An Introduction to RxJava

by Matt Dupree

Matt took a bold approach to explaining RxJava to beginners in a way I hadn’t seen before. He built the concept up, starting from Java arrays all the way to RxJava Observables, touching on intermediate concepts like arrays in JavaScript and lambdas in Java 8.

He started by talking about the different kinds of hells that Android developers experience, with an analogy to Dante’s Inferno. In particular, callback hell in Android! We’re used to nesting callbacks inside other callbacks: ClickListeners, AsyncTasks…

Dante’s inferno

He defined an Observable as a “Declarative Composable Sequence” explaining in detail what each word meant to him.

It’s composable because you can chain operators however you like, in a declarative way. The sequence comes in when you’re receiving or emitting items. He also put some emphasis on memory management: with Observables in RxJava, you don’t have to worry about memory at all; it’s handled for you.
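Since Matt built the idea up from Java arrays and lambdas, you can preview the “declarative composable sequence” feel with plain Java 8 streams before reaching for RxJava. This is only an analogy (streams are pull-based and synchronous, Observables are push-based), but the chaining shape is the same:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// The "declarative composable sequence" feel, previewed with Java 8 streams.
// An RxJava chain looks the same, with Observable.fromIterable(...) at the
// top and subscribe(...) at the bottom instead of a terminal collect.
public class ComposableSequence {
    public static void main(String[] args) {
        List<Integer> result = Arrays.asList(1, 2, 3, 4, 5).stream()
                .filter(n -> n % 2 == 1)   // declarative: say *what*, not *how*
                .map(n -> n * 10)          // composable: chain as many steps as you like
                .collect(Collectors.toList());
        System.out.println(result); // [10, 30, 50]
    }
}
```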

A Room with a ViewModel: Android’s Architecture Components

by Mark Murphy

We learned a lot about the new Google library Room, which is basically a pretty powerful abstraction layer over SQLite.

He explained several concepts:

  • how to use the @TypeConverter annotation with different scopes so Room can persist any kind of data in the DB;
  • how to use the @ForeignKey annotation to relate entities and assign an onDelete action (for example, a cascading delete);
  • how to use the @Relation annotation to fetch custom data across different entities from a single Dao (this requires creating a POJO, though).
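Here’s a hedged sketch of what the first two annotations look like together, using the 2017-era `android.arch.persistence.room` package. The Author/Book entities and their columns are made up for illustration:

```java
import android.arch.persistence.room.Entity;
import android.arch.persistence.room.ForeignKey;
import android.arch.persistence.room.PrimaryKey;
import android.arch.persistence.room.TypeConverter;
import java.util.Date;

// Hypothetical entities: a Book belongs to an Author, with a cascading
// delete so removing an author removes their books too.
@Entity
class Author {
    @PrimaryKey public long id;
    public String name;
}

@Entity(foreignKeys = @ForeignKey(
        entity = Author.class,
        parentColumns = "id",
        childColumns = "authorId",
        onDelete = ForeignKey.CASCADE))
class Book {
    @PrimaryKey public long id;
    public long authorId;
    public Date published; // needs a converter: SQLite can't store Date directly
}

// @TypeConverter teaches Room how to persist types SQLite doesn't know.
class Converters {
    @TypeConverter
    public static Long fromDate(Date d) { return d == null ? null : d.getTime(); }

    @TypeConverter
    public static Date toDate(Long millis) { return millis == null ? null : new Date(millis); }
}
```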

Migrations in Room are easier to test; the documentation explains how to do it. If you want to migrate from version 1 to 3 and you don’t supply a custom 1-to-3 migration, Room will migrate first from 1 to 2, and then from 2 to 3. In any case, you’ll have to write the migration SQL manually with db.execSQL() statements.
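A single migration step looks roughly like this; the table and column names are assumptions for the sake of the example:

```java
import android.arch.persistence.db.SupportSQLiteDatabase;
import android.arch.persistence.room.migration.Migration;

// Hypothetical migration: version 2 adds a "published" column to "Book".
// Register it with Room.databaseBuilder(...).addMigrations(MIGRATION_1_2).
static final Migration MIGRATION_1_2 = new Migration(1, 2) {
    @Override
    public void migrate(SupportSQLiteDatabase db) {
        db.execSQL("ALTER TABLE Book ADD COLUMN published INTEGER");
    }
};
```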

In case you want to encrypt Room, there’s the CWAC-SafeRoom library, which is a bridge between Room and SQLCipher for Android.

Google Keynote: What’s New with the Google Assistant and Google’s IoT Platform

by Wayne Piekarski

Wayne started by explaining how things have changed over the years. In the past, apps for different devices were installed separately; now everything is connected and you don’t have to do that. Apps are all about context and what the user wants to do: ubiquitous computing.

Android Auto is great for media or messaging apps! The car will get the app automatically if you have it installed on your device, and that opens up a lot of opportunities.

We talked about Android Things before, but he expanded on the concept and also talked about using TensorFlow with it, with examples of what we can do (focusing on video and audio processing).

We can benefit from the Google Assistant with Actions on Google. All the conversations run in the cloud, and it allows you to build conversational interfaces. No more robots saying: “press 1 to do this… press 2 to do this other thing…”! You can also get the Google Assistant SDK and put it in whatever device you want! It’s easy to integrate with Android Things, for example. You can even use it with Smart Home!

Google Cloud IoT is still in beta, but you can process all your data there with BigQuery, Cloud Functions, etc. You can even use it for mass deployments of devices.

He also announced a new version of Google Glass (the Enterprise Edition), which is meant to be used in industrial and commercial environments.

Rx Concurrency: A Prescription for Multi-threading

by Blake Meike

In my opinion, this was one of the best talks at AnDevCon. Blake took on a really difficult topic and explained it in a really approachable way.

He started by explaining how AsyncTasks manage memory and the problems you can run into there! I found this quite useful and completely in context: for example, you’re not guaranteed that onPreExecute will run on the same thread as doInBackground, only one task of the same type will execute at a time, and so on.

Talking about RxJava, he explained how to use the observeOn and subscribeOn operators. observeOn creates an Observable that proxies the onNext calls onto the thread you specify, whereas subscribeOn makes the subscribe call to the top-most Observable happen on a given thread, so that Observable executes its code on that particular thread.
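In RxJava 2 terms, that usually looks like the classic network-then-UI chain below. Note that `fetchFromNetwork()` and `textView` are placeholders, not a real API:

```java
import io.reactivex.Observable;
import io.reactivex.android.schedulers.AndroidSchedulers;
import io.reactivex.schedulers.Schedulers;

// subscribeOn pushes the subscription (and the work above it) onto io();
// observeOn switches where the onNext calls below it are delivered.
// fetchFromNetwork() and textView are placeholders for illustration.
Observable.fromCallable(() -> fetchFromNetwork())
        .subscribeOn(Schedulers.io())              // upstream work runs on an io() thread
        .observeOn(AndroidSchedulers.mainThread()) // results delivered on the main thread
        .subscribe(result -> textView.setText(result));
```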

He also explained that Schedulers are just executors that can manage different threads. They’re in charge of creating and assigning threads to the tasks they receive.

He went on to explain the different types of Schedulers:

  • single() runs everything on one shared background thread; for the main thread you use AndroidSchedulers.mainThread() from RxAndroid.
  • newThread() creates a new thread for each new task (use carefully).
  • trampoline() queues the task to run on the current thread as soon as the work already queued there completes. This one is a bit confusing, to be honest.
  • computation() can create as many threads as the number of CPUs + 1. It mimics what the machine can actually do in parallel; the thread pool is bounded in this case.
  • io() has no limit on the number of threads it can create, so be careful not to block threads when using this Scheduler.
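The “Schedulers are just executors” point can be made concrete with plain java.util.concurrent pools. These are rough stdlib analogies to the shapes above, not RxJava’s actual implementation:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SchedulerShapes {
    public static void main(String[] args) throws Exception {
        int cpus = Runtime.getRuntime().availableProcessors();

        // computation()-like: a bounded pool sized to the CPU count,
        // matching what the machine can actually run in parallel.
        ExecutorService computation = Executors.newFixedThreadPool(cpus + 1);

        // io()-like: unbounded, it grows a new thread whenever all
        // existing ones are busy (hence "don't block threads here").
        ExecutorService io = Executors.newCachedThreadPool();

        Future<Integer> answer = computation.submit(() -> 21 * 2);
        System.out.println(answer.get()); // 42

        computation.shutdown();
        io.shutdown();
    }
}
```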

Bring your App to life with After Effects and Lottie

by Gabriel Peal

The most visually appealing talk of the conference! It was full of nice animations, and it left me extra motivated to create my own using Lottie. He started by showing what Lottie can do… have you seen the examples in their GitHub repo? Pretty impressive, huh?

He put animations this way: “Shapes… that change… over time”. Sounds easy!

He explained some concepts that are needed to understand how Lottie works: Bézier curves, fills, gradients, masks, etc., and also how those features were implemented in Lottie for Android.
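Playing an After Effects animation exported with Bodymovin is only a few lines with the 2017-era Lottie API; the file name and view id here are assumptions:

```java
import com.airbnb.lottie.LottieAnimationView;

// Inside onCreate(): "hello.json" is an assumed Bodymovin export placed
// in src/main/assets, and R.id.animation_view is a made-up layout id.
LottieAnimationView view = (LottieAnimationView) findViewById(R.id.animation_view);
view.setAnimation("hello.json");
view.loop(true);      // 2017-era API; newer versions use setRepeatCount()
view.playAnimation();
```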

Check out the Lottie app in the Google Play Store to see what it’s capable of.

Gabriel in action

Thanks all for reading,

Did you attend AnDevCon 2017? Do you want to share your thoughts with the wider community? Let me know and I’ll include them as part of this article!
