An Android slice of Google I/O 2019

Victor Ireri
Published in ASOS Tech Blog
13 min read · Jul 3, 2019

Another seemingly innocuous email notification appeared on my phone as I sat at my desk at the ASOS London office. On closer inspection, it turned out to be the news I was hoping for. As luck would have it, I had won a ticket to the largest annual developer conference - Google I/O. Two 11-hour flights and seven days later, I was back in London with a brain bulging with information from the past week. Here’s an Android-flavoured slice of the humongous ̶P̶i̶e̶ Quiche that was the 2019 Google I/O conference.

Google I/O countdown

Android

Bubbles
This new multitasking feature will allow users to keep track of their most important tasks and access them anywhere in the operating system at any time via a floating, draggable icon that sits above other apps. Selecting the bubble expands it into a small, fully interactive window which hosts the app. Samsung users familiar with the ‘pop-up view’ will notice that Bubbles shares some similarities with it.
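For the curious, here’s a minimal sketch of what creating a bubble might look like with the Q APIs - the PendingIntent and Icon are assumed to be built elsewhere, and the intent should point at a resizable, embeddable activity:

```kotlin
import android.app.Notification
import android.app.PendingIntent
import android.graphics.drawable.Icon

// Sketch only: a bubble is described by BubbleMetadata attached to a
// notification. The intent drives the expanded, interactive window.
fun bubbleMetadata(bubbleIntent: PendingIntent, icon: Icon): Notification.BubbleMetadata =
    Notification.BubbleMetadata.Builder()
        .setIntent(bubbleIntent)   // activity shown when the bubble expands
        .setIcon(icon)             // the floating, draggable icon
        .setDesiredHeight(600)     // height of the expanded window, in dp
        .build()
```

The metadata is then attached via Notification.Builder.setBubbleMetadata(…) when posting the notification.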

Notifications
Notifications will now be accompanied by suggested actions - more on that later - and will also receive Automatic Prioritisation. This means that each notification will be categorised as either High or Low priority. High priority notifications will work as they do now and may even trigger a ‘Bubble’ if configured, while Low priority notifications will only be displayed in the notification shade (no icon will be displayed in the status bar). The goal of this feature is to prevent a constant bombardment of notifications. Focus mode will also help achieve this by allowing users to temporarily ‘pause’ apps they find distracting.
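The prioritisation itself is a system decision, but it builds on the channel importance developers already declare. A rough sketch, with hypothetical channel ids and names:

```kotlin
import android.app.NotificationChannel
import android.app.NotificationManager

// Hypothetical channels: a high-importance one that can peek (and
// bubble, if configured) and a low-importance one that sits quietly
// in the notification shade.
fun createChannels(manager: NotificationManager) {
    manager.createNotificationChannel(
        NotificationChannel("orders", "Order updates", NotificationManager.IMPORTANCE_HIGH)
    )
    manager.createNotificationChannel(
        NotificationChannel("promos", "Promotions", NotificationManager.IMPORTANCE_LOW)
    )
}
```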

Privacy

Q is introducing a new section in the device settings dedicated to privacy. It will give users full control of the information they choose to share on a per-app basis. New restrictions have also been added to prevent apps from launching themselves from the background without user interaction. Location permission requests are also changing: the options presented to the user are no longer binary. The user can now choose to allow an app access all the time, only while the app is in the foreground, or not at all.
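In practice, this means background location becomes its own permission on Q. A minimal sketch of requesting it (the request code is an arbitrary constant):

```kotlin
import android.Manifest
import android.app.Activity
import android.os.Build

// On Q (API 29), access while the app is in the background must be
// requested separately from foreground access.
fun requestLocation(activity: Activity) {
    val permissions = if (Build.VERSION.SDK_INT >= 29) {
        arrayOf(
            Manifest.permission.ACCESS_FINE_LOCATION,
            Manifest.permission.ACCESS_BACKGROUND_LOCATION
        )
    } else {
        arrayOf(Manifest.permission.ACCESS_FINE_LOCATION)
    }
    activity.requestPermissions(permissions, 42) // arbitrary request code
}
```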

Huawei Mate X

Foldables
They are the future. Yeah, I said it! The unveiling of the Huawei Mate X prototype convinced me that foldable devices aren’t just a gimmick. Developers who already follow best practices in design and architecture should find that their apps will work on these new form factors with little intervention. Some optimisations may be required, however, to better present content on different screen ratios.
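Folding or unfolding reaches a well-behaved app as an ordinary configuration change. As a sketch (the activity name is hypothetical), an activity that opts into handling these changes itself might adapt like this:

```kotlin
import android.content.res.Configuration
import androidx.appcompat.app.AppCompatActivity

// Assumes the manifest declares
// android:configChanges="screenSize|smallestScreenSize|screenLayout";
// otherwise the system simply recreates the activity on fold/unfold,
// which well-architected apps already survive.
class CatalogueActivity : AppCompatActivity() {
    override fun onConfigurationChanged(newConfig: Configuration) {
        super.onConfigurationChanged(newConfig)
        // Re-balance the UI for the new ratio, e.g. switch between
        // one- and two-pane layouts.
        val useTwoPanes = newConfig.screenWidthDp >= 600
    }
}
```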

Gesture navigation
Device manufacturers have pioneered more immersive experiences with custom navigation and gestures, but catering to this variety has become a tricky task for designers and developers. So, Google has worked with OEMs to unify the ecosystem by providing support for just two primary navigation models: the traditional ‘3-button’ and the new ‘fully gestural’. We’re all familiar with the former, but the latter has yet to permeate the Android user base the way its iOS counterpart has since the release of the iPhone X. The three simple gestures are as follows: swipe up to go home, swipe up and hold to view recent apps, and swipe from the left or right edge to go back.

Android Q’s fully gestural navigation

Of the three, the back gesture will likely affect the most apps because numerous components, such as drawers, allow or require the user to swipe from the edge. The use of drawers in our own app, for example, may need to be reconsidered. Eliminating the navigation drawer entirely and moving to a bottom navigation bar is certainly a favoured option, but before undertaking such a task, Google suggests a number of optimisations that can be made to limit gesture conflicts while maintaining a predictable user experience. One of these, excluding small edge regions from the system gesture, is sketched below.
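The sketch uses the new Q API for gesture exclusion (doOnLayout comes from AndroidX core-ktx):

```kotlin
import android.graphics.Rect
import android.os.Build
import android.view.View
import androidx.core.view.doOnLayout

// Carve a view (e.g. a drawer handle) out of the system's back-gesture
// area so the user can still swipe it. Keep exclusions minimal: the
// system caps the total height an app may exclude per edge.
fun excludeFromBackGesture(handle: View) {
    if (Build.VERSION.SDK_INT >= 29) {
        handle.doOnLayout {
            handle.systemGestureExclusionRects =
                listOf(Rect(0, 0, handle.width, handle.height))
        }
    }
}
```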

Dark theme
My favourite Android Q announcement was the system-wide dark theme setting. As the name suggests, it will affect the entire user experience from apps to notifications to widgets, so Google is now strongly recommending that developers support it to match growing user demand. Google, of course, intends to do the same and has committed to adding dark themes to all its apps by the time Q is released. As a prolific dark theme user, this announcement was music to my ears (and eyes 😍)!

Testing Night Mode in the ASOS Android app

As I found out while working on the ASOS Android app, implementing a new theme can be tricky. As well as some refactoring to allow the use of alternative assets based on the current theme, numerous discussions were had about how best to translate our light theme to a dark one across various features while maintaining the look and feel our users have come to love. To help with this transition, and to give users a dark theme with almost no effort, developers can now make use of ‘Force Dark’. This opt-in feature automatically converts your app to a dark theme and requires only one additional line of XML in your app’s theme: android:forceDarkAllowed="true". You can also limit its scope just as easily, right down to individual components. The future is bright for dark themes, 😉 so find out more about their design principles and how to implement one in your app.
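On the code side, here’s a minimal sketch of wiring an app up to the new setting with AndroidX AppCompat, plus scoping Force Dark per view - the runtime twin of the XML attribute above:

```kotlin
import android.os.Build
import android.view.View
import androidx.appcompat.app.AppCompatDelegate

// Follow the system-wide dark theme setting on Q; older versions can
// fall back to the battery-saver heuristic.
fun applyDefaultNightMode() {
    val mode = if (Build.VERSION.SDK_INT >= 29) {
        AppCompatDelegate.MODE_NIGHT_FOLLOW_SYSTEM
    } else {
        AppCompatDelegate.MODE_NIGHT_AUTO_BATTERY
    }
    AppCompatDelegate.setDefaultNightMode(mode)
}

// Opt a single view out of Force Dark at runtime (API 29+).
fun excludeFromForceDark(view: View) {
    if (Build.VERSION.SDK_INT >= 29) {
        view.isForceDarkAllowed = false
    }
}
```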

Kotlin
At Google I/O 2017, it was announced that Kotlin would be added as an official programming language for Android. Two years on, not only is it the preferred language, its use is growing rapidly outside of Android. In fact, it’s one of the most-loved programming languages. The high adoption rate and user satisfaction can be partially attributed to Google’s endorsement and partnership with JetBrains, their combined effort to promote Kotlin/Everywhere and investments in the language based on user feedback.

Android Studio

What’s new in Android Studio? Nothing. Well, no brand new features at least. But that’s not necessarily bad news. In an effort dubbed ‘Project Marble’, the Android Studio engineering team has been focusing exclusively on addressing the top issues affecting users. This includes improving system health, polishing features and fixing bugs. As a QA Engineer, I have to commend them for making what was certainly not an easy decision.

Layout inspector
An upcoming update will introduce 3D view hierarchies, reminiscent of the ‘Debug View Hierarchy’ in Xcode. Direct links from view attributes in the inspector to the corresponding source code will also be added. Visualising the different layers of a given activity, along with the ability to jump straight to the source code, will greatly help with debugging.

Emulators
These are now faster, more versatile and more reliable. As well as refactoring the internals to optimise for speed, features such as Snapshots and Quick Boot have drastically sped up the set-up process. Versatility has improved with new features such as support for foldable devices and virtual environments for AR development. Reliability was addressed in three ways: emulators now share the exact same code base as a physical device; over 900,000 compatibility tests are run on both emulators and physical devices; and end-to-end testing via the Play Store is fully supported.

Tips and tricks
Just how efficient are you when working with Android Studio? Whether you’re a newbie or a pro, there’s something in this talk for you; I could tell from the audible gasps from the crowd as the demos revealed numerous efficient ways of working. Topics included: profiler tools, editing code, IDE navigation, design tools and building & deploying.

Google Play

Here’s a rundown of the notable features and improvements that have been introduced to Google Play:

  • The introduction of App Bundles last year saw many apps, including ASOS, significantly reduce their app size. The positive results prompted Google to find new ways to help developers optimise further, so new metrics, reporting and guidance have been introduced to the Google Play Console. In the same vein, dynamic feature modules, which have just been fully released after a successful beta, will allow specific features or resources to be installed separately within the app, on demand, to help keep download sizes small.
  • In-app updates will provide a seamless update experience for users by showing an update dialog within the app in question. The feature adheres to the targeting rules set in the Play Console, so if only 5% of users should get the update, then only they will see the dialog (see the sketch after this list).
  • Internal app sharing is now available to make sharing APKs and (crucially) app bundles a quick and painless process without the need to create and manage a release.
  • Custom peer groups allow developers to use the combined average of Android Vitals statistics from 8-12 peers (of the developer’s choice) as a benchmark. A separate list of peers can also be configured for Play Store rating comparisons.
  • Play Store ratings will be changing for the better in August 2019. Average app ratings will no longer be a lifetime cumulative value but will instead be calculated with heavier weight given to more recent reviews to better reflect the app’s current state. 🎉 Your app’s new rating is now available alongside the lifetime rating in the Play Console.
  • Custom store listings were recently introduced to allow for better targeting of audiences per country and region. Now, users can also be targeted by the following app install states: not yet installed, uninstalled and currently installed. This discards the ‘one size fits all’ approach in favour of a more customised experience where the listing title, icon, description and graphics can be changed to retain existing users and entice new and previous ones.
  • The Play Academy has now been updated to include these and many more improvements made to Google Play over the last year.
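As referenced above, here’s roughly what triggering a flexible in-app update looks like with the Play Core library - a sketch, assuming the library is on the classpath and with an arbitrary request code:

```kotlin
import android.app.Activity
import com.google.android.play.core.appupdate.AppUpdateManagerFactory
import com.google.android.play.core.install.model.AppUpdateType
import com.google.android.play.core.install.model.UpdateAvailability

private const val UPDATE_REQUEST_CODE = 1234 // arbitrary

// Ask Play whether an update (honouring the Play Console targeting
// rules) is available, then launch the flexible update flow.
fun checkForUpdate(activity: Activity) {
    val manager = AppUpdateManagerFactory.create(activity)
    manager.appUpdateInfo.addOnSuccessListener { info ->
        if (info.updateAvailability() == UpdateAvailability.UPDATE_AVAILABLE &&
            info.isUpdateTypeAllowed(AppUpdateType.FLEXIBLE)
        ) {
            manager.startUpdateFlowForResult(
                info, AppUpdateType.FLEXIBLE, activity, UPDATE_REQUEST_CODE
            )
        }
    }
}
```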

AI, AR & Machine Learning

Yup, it’s those buzzwords again. Few tech innovators can resist mentioning these terms, but as Google has shown, they’re more than just phrases misused to feign integration of exciting but complex technologies; the progress made with Google Assistant and other products is a testament to that. Though, as I have come to understand, the human element underpinning many of these advancements cannot be ignored.

Next generation assistant
Processing speech is a complex task that has historically required an internet connection and around 100GB of storage for machine learning models. These models have now been shrunk to 0.5GB - small enough to be viably stored on mobile devices, with no internet connection required - which Google has hailed as a significant milestone. The practical advantages of these advancements were showcased in a live demo on stage: the lightning-fast responses of Google Assistant when facing back-to-back commands were impressive. As you’d expect, the Pixel 3a & 3a XL are the first devices to benefit from this.

Adaptive notifications
AI & Machine Learning are being used in Android notifications to bring users more features such as suggested smart replies and actions. To achieve this, the notification content is fed into on-device machine learning models which compute what actions the user might want to take (if any) and the output is then displayed. The options given to the user include text-based replies and actions such as opening navigation in Google Maps if an address is detected.
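Apps receive these suggestions for free on Q, but can opt a notification out if they already generate their own - a sketch, with a placeholder channel id and strings:

```kotlin
import android.app.Notification
import android.content.Context

// "messages" is a placeholder channel id. Contextual actions and
// smart replies are generated by the system unless disallowed.
fun buildMessageNotification(context: Context): Notification =
    Notification.Builder(context, "messages")
        .setSmallIcon(android.R.drawable.ic_dialog_email)
        .setContentTitle("New message")
        .setContentText("Dinner at 7?")
        .setAllowSystemGeneratedContextualActions(false) // opt out (API 29+)
        .build()
```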

Google Maps
Navigation guidance using AR has been introduced to Google Maps, but for now, it’s only available on Pixel devices. 😢 Dotted around the venue, however, were AR codes that could be scanned with the Google I/O app on any AR-enabled phone to provide an AR view of the conference activities from that location. I found it a very useful application of the technology, especially where there were numerous unmarked paths.

Google Search
When available, search results now include a ‘View in 3D’ option, allowing users to place a 3D object in their space and manipulate its size, position and orientation. In the demo, a great white shark juxtaposed with ordinary objects, like chairs and tables, really brought it to life. Google is working with numerous partners to bring more AR content to Search.

Google Lens
Some significant improvements were announced this year for Lens, Google’s image recognition product. When viewing a menu, it can now identify the restaurant, pick out the most popular dishes and provide links to reviews. I can be quite picky with food, so this is a welcome addition. As well as translating printed text and overlaying the translation on the original, Lens can now also read the text aloud while highlighting each word. As demonstrated at the Google Keynote, this is a powerful feature that can help those unable to read gain independence.

Duplex on the web
We got our first look at Google Duplex at last year’s I/O conference where the conversational expertise of the automated voice assistant was put on display as it placed a restaurant reservation over the phone. Now, Duplex is branching out to web-based bookings. To start with, only a narrow set of use cases will be supported such as rental car booking and cinema tickets. I’d like to see what it does when it encounters reCAPTCHA.

AI for everyone
With great power comes great responsibility. Google appears acutely aware of that phrase and feels it’s their duty to continuously review and improve technology that is consumed by billions worldwide. Enhancing accessibility features and reducing bias are two examples of the actions they have taken.

The internal decision-making of AI models has long been difficult to understand; even their creators don’t fully grasp how certain aspects work. Google is working to clear the fog and gain a deeper understanding of how, for example, an image is processed, in order to uncover biases that may exist. The example given was that of identifying an image of a doctor: if the model was trained primarily on images of male doctors, it may incorrectly learn that being male is an important factor in identifying a doctor.

Live Transcribe, an app that transcribes speech (via the device microphone) to text in real time, was introduced earlier this year to help those who are deaf or hard of hearing. At Google I/O, it was announced that this functionality has been extended to all media sources in a feature called Live Caption. Google is also exploring how captions can be used with phone calls via Live Relay. Pushing these technologies even further is Project Euphonia, whose goal is to better understand distorted speech - caused, perhaps, by a neurological disorder - and even to translate facial gestures and nonlinguistic vocalisations into actions for those unable to speak. In these cases, personalised communication ML models are driving the research forward. Google is also reaching out to those who would benefit from this technology in an effort to bring the project’s goals to fruition.

The greatest impact of these products seems to be the autonomy and confidence they give their users to explore and interact with the world.

The rest

There was plenty more going on at Google I/O than I’ve mentioned here but fear not, there’s a YouTube playlist of all the recorded sessions. There are over 175 of them - that’s a lot. The livestream event schedule may therefore come in handy, as it lets you quickly filter the list of sessions.

Here are a few more fun bits:

  • Speechless Live, where speakers had the daunting task of giving a presentation with slides they had never seen before, was a barrel of laughs and a demonstration of how difficult improv can be.
  • Wear OS was hardly mentioned during the conference so my colleague put this to the panel at the Android Fireside Chat.
    👏👏👏
  • This year’s Android collectible is modular. The head, upper and lower body and legs are detachable. They were available in the four Google colours - red, green, blue and yellow - allowing unique colour combinations to be created.
  • Fungineers, an entertainment group with an eclectic mix of performances, put a smile on everyone’s faces as they made their way around the venue.

On a personal note

It was my first time in the US, so at almost every turn I experienced snippets of US culture that I had only ever seen on TV, such as shopping at Walmart or treating myself to waffles and pancakes for breakfast. 😋 I also had the opportunity to cross the iconic Golden Gate Bridge. Fun fact: when Google Street View was launched in 2007, it was the first location I visited. The bridge looks even more grandiose in real life. From the moment the plane touched down at San Francisco airport to watching the final Keynote of the event in the Amphitheatre, the whole experience felt surreal. I did my best to absorb as much as I could from the city as well as the conference and, with any luck, I’ll be back in the not-too-distant future. I’d like to give a big thanks to my colleagues Savvas Dalkitsis & Andrea Trocino, who accompanied me on this trip and helped make my first US visit truly unforgettable.

More about me

I work at ASOS primarily as a QA Engineer on the Android app but I have been known to develop small features and fix bugs. When I’m not dissecting apps, you’ll find me daydreaming about space travel or pondering the dystopian futures presented to us by shows such as Black Mirror.
