Here’s what to expect from intelligent apps in 2018

Technology research company Gartner puts out a yearly Top 10 Strategic Technology Trends list, aimed at giving a high-level overview of the trends we should expect to see play out over the year to come. The 2017 list correctly called out several rapid-growth areas for the year, including blockchain, conversational systems, AI/ML, and, most exciting for us, intelligent apps. Well, we just got around to looking at Gartner’s 2018 outlook and were pleased to see that intelligent apps get a second year in the limelight.

Here’s what they had to say:

Over the next few years every app, application and service will incorporate AI at some level. AI will run unobtrusively in the background of many familiar application categories while giving rise to entirely new ones.

So this got us asking: what did we learn from 2017, and where do we think 2018 is headed?

tl;dr: 2017 was the year of language and vision. We believe 2018 will bring surprising new categories of machine learning happening right on the mobile device. Led by new standards for extracting machine learning features from device location and user behaviors, prediction is going to get exciting!

What did we see in 2017?

Advances in on-device intelligence in 2017 fell into three primary categories: vision, language, and access. These were the necessary first steps in a multi-year trend because we already know how to extract machine learning features from raw data in those domains directly on the device. We’ll get into how that problem can be solved for more diverse fields in our 2018 section below.

Vision

Moving computer vision off the cloud and onto the device makes sense for some pretty practical reasons. First and foremost is speed, then image quality and transfer size. Most people take far more high-quality photos on their mobile phones than companies like Apple and Google actually want to move over the air to servers somewhere in the cloud. Additionally, not everyone wants their images moved off their device, so the ability to run machine learning right in an app opens up a lot of new functionality even for people who are wary of sharing their images with services.

There is another big reason why moving imagery-based machine learning to the device is an optimal early step in on-device intelligence: feature extraction from images is relatively straightforward. While there are many ways to approach feature extraction and normalization for imagery, it is a well-documented challenge and can be largely contained to a single photo or a single library of photos.

In 2017 we saw some big moves in the image space. One interesting example is the on-device face detection work Apple released for developers. And although not totally new in 2017, Clarifai’s work was pushed out as a public SDK for on-device image learning.
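To make that concrete, here is a minimal sketch of what on-device face detection looks like with Apple’s Vision framework in iOS 11. Error handling is simplified, but the important point is that the pixels never have to leave the phone.

import UIKit
import Vision

// A minimal sketch of on-device face detection with Apple's Vision framework (iOS 11+).
func detectFaces(in image: UIImage) {
    guard let cgImage = image.cgImage else { return }

    let request = VNDetectFaceRectanglesRequest { request, _ in
        guard let faces = request.results as? [VNFaceObservation] else { return }
        // Each observation carries a normalized bounding box; no pixels leave the device.
        for face in faces {
            print("Found a face at \(face.boundingBox)")
        }
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}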

Language

Like image analysis, natural language processing (NLP) has some practical reasons for moving on-device early in the transition off the cloud. Like images, extracting features from language is fairly easily constrained to a sentence, a conversation, or an individual, removing the need for server-side resources.

In contrast to Google’s server-side language tool, Cloud Natural Language, Apple updated (WWDC video) its NLP tools in iOS so that many of the basic routines a developer needs can be handled right on the device. Apple’s NLP tools give developers access to Apple-scale trained models that run securely in their apps for the benefit of users. While maybe not perfect, it will be interesting to watch what other kinds of models trained by the giants are made available to any developer. Google also shared an on-device language model of its own, specifically for translation, highlighting the diversity of language-focused algorithms that are going to quickly move to the phone.
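As a quick illustration, here is a small sketch of the kind of on-device NLP the updated NSLinguisticTagger API in iOS 11 makes possible. The sample sentence is just illustrative; the named-entity tagging runs entirely on the phone using Apple’s trained models.

import Foundation

// A small sketch of Apple's updated on-device NLP API (NSLinguisticTagger, iOS 11+).
let text = "Tim Cook introduced new machine learning tools at WWDC in San Jose."

let tagger = NSLinguisticTagger(tagSchemes: [.nameType], options: 0)
tagger.string = text

let range = NSRange(location: 0, length: text.utf16.count)
let options: NSLinguisticTagger.Options = [.omitWhitespace, .omitPunctuation, .joinNames]
let interesting: [NSLinguisticTag] = [.personalName, .placeName, .organizationName]

// Named-entity recognition happens right on the device.
tagger.enumerateTags(in: range, unit: .word, scheme: .nameType, options: options) { tag, tokenRange, _ in
    if let tag = tag, interesting.contains(tag) {
        let name = (text as NSString).substring(with: tokenRange)
        print("\(name): \(tag.rawValue)")
    }
}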

Access

Most of the examples above use a hybrid on-device/in-the-cloud approach, where models running on the device are actually trained in the cloud (like the famous Not Hotdog app). In 2017 both Apple and Google delivered frameworks to help independent companies navigate that approach of training in the cloud and running on the device, an important first step in the transition to on-device machine learning.

To facilitate this hybrid approach, Apple released a handful of products at WWDC, with CoreML being one of the most significant [WWDC Video]. CoreML is a set of tools to help developers move trained models to the device and then run them efficiently with Apple’s optimized systems for machine learning. From Google, we saw TensorFlow Lite, the little sibling of the ubiquitous machine learning and neural net library currently in every data scientist’s toolkit. The objectives of both tools are essentially the same: help developers design, build, train, and deploy models offline so they can be executed neatly on-device.
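To give a flavor of what that looks like from the app’s side, here is a minimal CoreML sketch. FlowerClassifier is a hypothetical model: it would be trained in the cloud, converted to a .mlmodel file, and bundled with the app, where Xcode generates a Swift class for it.

import CoreGraphics
import CoreML
import Vision

// A minimal sketch of running a cloud-trained model on the device with CoreML.
// "FlowerClassifier" is a hypothetical .mlmodel bundled with the app.
func classify(image: CGImage) {
    guard let model = try? VNCoreMLModel(for: FlowerClassifier().model) else { return }

    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let top = results.first else { return }
        // Inference happens locally; only a label and a confidence are produced.
        print("\(top.identifier): \(top.confidence)")
    }

    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try? handler.perform([request])
}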

What can we expect to see in 2018?

2018 is set up to be a huge year for machine learning on mobile devices. Huge. Where 2017 was all about language and vision, we think that 2018 is going to see on-device intelligence touching exciting and sometimes surprising new areas. With the new tools available to help facilitate cloud training and on-device running of new models, now it’s all about collecting the right data so that this kind of intelligence is possible on any device.

Here are some of the areas we expect to see developments over the next 11 or 12 months.

Tools for distributed training

Tools for distributed learning are an area where we expect to see advances this year. While the problem probably won’t be solved outright in any grand way, we know people are excited about it and others are working on solutions. While deep neural nets can require tons of computing resources, many mobile applications only need smaller models trained on specific groups or cohorts of users. A search for distributed training support was probably among the first Stack Overflow questions asked after CoreML was announced at WWDC. Keep your eye out for new startups and technologies trying to tackle this domain.

On device learning for individuals

There are some really interesting companies trying to do this already. Relationship apps, scheduling apps, and bots have been quick to leverage the intelligence that can come from mobile devices to help predict what to do for the user. Think about any health, wellness, or fitness app you download with the goal of personal improvement. Now imagine that app being able to understand your patterns of engagement to help reinforce your desired behaviors (meditate more, sleep better, go for a run). This is a perfect area for on-device models, or as Matthijs Hollemans put it:

Sleep tracking / Fitness apps. Before these apps can make recommendations on how to improve your health, they first need to learn about you. For privacy reasons, you may not want this data to leave your device. [machinethink]
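None of this requires heavyweight models to be useful. Even a toy sketch like the one below, which keeps a running estimate of a user’s bedtime entirely on the device, hints at what personal, private, on-device learning can look like (all of the names here are made up for illustration).

import Foundation

// A toy sketch of learning an individual's pattern entirely on the device:
// an exponential moving average of bedtime, stored locally in UserDefaults.
// The names and the model are hypothetical; real apps would use richer models,
// but the data never has to leave the phone.
struct BedtimeModel {
    private let key = "averageBedtimeMinutes"
    private let learningRate = 0.2

    // Update the running estimate with tonight's bedtime (minutes after midnight).
    func update(with bedtimeMinutes: Double) {
        let previous = UserDefaults.standard.object(forKey: key) as? Double ?? bedtimeMinutes
        let updated = (1 - learningRate) * previous + learningRate * bedtimeMinutes
        UserDefaults.standard.set(updated, forKey: key)
    }

    // Read back the learned estimate to drive reminders or recommendations.
    func predictedBedtime() -> Double? {
        return UserDefaults.standard.object(forKey: key) as? Double
    }
}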

Tools to make raw mobile data ready for machine learning

The hybrid approach of training in the cloud and executing on the device will only grow more popular in 2018, which means we still need more tools to make it as simple as possible. One challenge we have seen is that many machine learning tasks sit at the end of fairly long and complex ETL pipelines that exist entirely in the cloud. The problem is that the data used to train those models looks nothing like what is available to the app on the device.

While vision and language make it relatively straightforward to extract consistent features no matter where you prepare the data, things like location data and time series can be pretty tricky without bigger databases and SQL queries.
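As a sketch of what we mean, here is one hypothetical way an app could turn a raw location fix into the same kind of feature vector a cloud pipeline might produce, computed directly on the device. The choice of features and the learned “home” point are assumptions for illustration.

import CoreLocation
import Foundation

// A hedged sketch of turning raw mobile data (a location fix plus its timestamp)
// into a feature vector for a model, computed on-device the same way a cloud
// ETL pipeline would compute it. Feature choices are illustrative.
func features(for location: CLLocation, home: CLLocation) -> [Double] {
    let calendar = Calendar.current
    let hour = Double(calendar.component(.hour, from: location.timestamp))
    let weekday = Double(calendar.component(.weekday, from: location.timestamp))

    // Distance from a learned "home" point, in kilometers.
    let distanceFromHome = location.distance(from: home) / 1000.0

    // Encode hour of day cyclically so 23:00 and 01:00 end up close together.
    let hourSin = sin(2 * Double.pi * hour / 24.0)
    let hourCos = cos(2 * Double.pi * hour / 24.0)

    return [hourSin, hourCos, weekday, distanceFromHome]
}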

On-device behavioral learning

While vision and language were the low-hanging fruit for the on-device modeling we saw in 2017, we expect 2018 to be about going much further. As we solve the issues around standardized feature extraction and the availability of the same features on-device and in the cloud (for the hybrid approach), we will see a greater diversity of models running on the phone.

Predictive shopping and on-demand services

One of the earliest adopters of behavioral prediction will probably be consumer shopping and on-demand services. The ability to make better guesses, more cheaply and in real time, will make it possible for these apps to get their users what they want, faster.

Take any brand app with brick-and-mortar purchasing (e.g. Starbucks) or any generic skip-the-line app (Ritual, BonApp, Skip The Dishes, etc.) and imagine what happens when they are able to predict your next arrival time. While the UX components of these interactions will be the harder bit to figure out, we’ll be seeing app updates this year that add predictive learning to these types of apps.
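A purely hypothetical sketch of what that might look like on the device: a small cloud-trained regressor, shipped with the app via CoreML, estimates minutes until the user’s next visit from on-device features like the ones computed above. ArrivalPredictor and its input and output names are invented for illustration.

import CoreML

// A hypothetical sketch of predictive ordering with a cloud-trained CoreML
// regressor. "ArrivalPredictor", "features", and "minutesUntilArrival" are
// invented names; an Xcode-generated model class would expose similar APIs.
func minutesUntilNextVisit(features: [Double]) -> Double? {
    guard let input = try? MLMultiArray(shape: [NSNumber(value: features.count)],
                                        dataType: .double) else { return nil }
    for (index, value) in features.enumerated() {
        input[index] = NSNumber(value: value)
    }

    let model = ArrivalPredictor()
    let prediction = try? model.prediction(features: input)
    return prediction?.minutesUntilArrival
}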

Keeping up with the models

2018 is going to be a busy year for machine learning on device. We want to do our best to keep track of new models and fields being tackled on mobile devices, so we put together a small list to track projects as they emerge. You can find the list on our git repo here. If you know of any models we are missing (I’m sure there are quite a few), please submit a pull request or just email me at andrew@textile.io.

If you are interested in learning more about what we are up to, or why we think on-device modeling could be key to better privacy for users, visit Textile.io.