How we’re creating a privacy-preserving AI for your smartphone

Olivier Corradi · Published in Snips Blog · Feb 29, 2016 · 10 min read

Snips is about instantly accessing your personal data (places, meetings, colleagues, bookings…) on your smartphone, and providing you with quick access to associated apps. To do that, the Snips AI constructs a knowledge graph of your personal data, and understands how you interact with that data in a given context. It understands that you go to the gym on Tuesdays, and that you communicate with your sister using WhatsApp.

Building a product heavily based on Artificial Intelligence is a challenging task. We face three main challenges:

  1. People tend to have a mental model of how a product works. They expect it to behave in a certain way in specific situations. An AI is quite a difficult thing to have a mental model for, so the AI needs to be perfectly predictable and apparently simple. All the inherent complexity needs to be hidden from the person, and as far as they are concerned, there should be no technology. It should feel like there is no AI.
  2. The Snips AI is here to save you time. As people use the product, they will forge new habits. Therefore, the product has to be completely reliable and have a low battery footprint. It has to have the ability to know when it doesn’t know, in order to avoid presenting erroneous information or making invalid inferences. One bad experience is enough to break a habit, so a product that is reliable only 80% or 90% of the time is not good enough.
  3. A good AI is an AI that knows you well. It needs access to sensitive information. That is why we are such strong advocates of privacy by design. Personal data gathered by the Snips AI never leaves the device, meaning that computations have to happen on the device. We can’t leverage the power of the cloud, and we don’t have access to large amounts of personal data that could be used to bootstrap our algorithms.

How has Snips organised itself to build a product given those challenges? In this blog post, we’ll go through how we deal with a very limited amount of data, how we constructed a rapid prototyping environment, how we transform those prototypes into product features, and finally how engineers and data scientists collaborate. It’s a global overview of what is currently happening at Snips, as I have been fortunate to be involved in many aspects of what the company is creating.

Dealing with small data

By cross-referencing different sources of data, the AI constructs a personal knowledge graph: a structured representation of a person’s habits and app interactions. The Snips AI therefore needs to gather data about:

  • the physical context of a person (location, temperature, rain, public transport schedules, traffic…)
  • digital clues about a person’s past or future intentions (geolocation history, address book entries, calendar events, emails, messages, app usage history…)
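
To make this concrete, here is a minimal sketch of what such a personal knowledge graph could look like in Python. The node kinds and relation names are illustrative assumptions, not Snips’ actual schema:

    from dataclasses import dataclass, field

    # Minimal sketch of a personal knowledge graph. The node kinds and
    # relation names are illustrative assumptions, not Snips' actual schema.

    @dataclass(frozen=True)
    class Node:
        kind: str  # e.g. "person", "place", "app"
        name: str

    @dataclass
    class KnowledgeGraph:
        edges: list = field(default_factory=list)  # (source, relation, target)

        def add(self, source: Node, relation: str, target: Node) -> None:
            self.edges.append((source, relation, target))

        def relations_of(self, node: Node) -> list:
            return [(r, t) for s, r, t in self.edges if s == node]

    me = Node("person", "me")
    graph = KnowledgeGraph()
    graph.add(me, "visits_on_tuesdays", Node("place", "gym"))
    graph.add(me, "messages_sister_via", Node("app", "WhatsApp"))
    print(graph.relations_of(me))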

The obvious sensitivity of personal data such as calendar events requires us to take a very serious approach to privacy. We never upload those events to our servers. In fact, our servers are only used for our internal tooling infrastructure, and never receive data from anyone.

We do, however, need to get our hands on some data in order to build and test the AI. That’s why we launched the SEEDS initiative, and why Snipsters (that’s what employees call themselves) collect as much data as they can about themselves, using custom-made apps for example. However, the relatively small amount of data collected, compared to the wide range of possible situations, prohibits us from using traditional big data approaches, i.e. very flexible algorithms that can capture complex patterns because they have access to large datasets.

Instead, we rely on less flexible algorithms with a substantial amount of pre-defined structure. As time goes by, this prior behaviour is adjusted by learning from newly acquired data. Manually pre-defining a structure which conservatively adapts to observed behaviour gives us reasonable guarantees that algorithms correctly generalise to situations not covered by the data collected, for example in countries where we haven’t run experiments. Pre-defining a structure also speeds up the learning process, because the structure itself is already present: it only has to be adjusted to a person’s habits or environment.
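
As a rough illustration of this idea (the model and parameters below are assumptions chosen for the example, not Snips’ production code), a Beta-Binomial prior encodes a pre-defined structural assumption, such as “habits repeat weekly”, and adapts conservatively as observations arrive:

    # A minimal sketch of a structured prior that adapts conservatively.
    # The Beta(1, 2) prior and the habit example are illustrative
    # assumptions, not values taken from Snips' production models.

    class WeeklyHabitBelief:
        """Belief that a person visits a given place on a given weekday."""

        def __init__(self, alpha: float = 1.0, beta: float = 2.0):
            self.alpha = alpha  # pseudo-counts of weeks the place was visited
            self.beta = beta    # pseudo-counts of weeks it was not

        def update(self, visited: bool) -> None:
            # Conjugate Beta-Binomial update from one observed week.
            if visited:
                self.alpha += 1.0
            else:
                self.beta += 1.0

        @property
        def probability(self) -> float:
            # Posterior mean probability that the habit holds.
            return self.alpha / (self.alpha + self.beta)

    belief = WeeklyHabitBelief()
    for visited in [True, True, False, True]:  # four observed Tuesdays
        belief.update(visited)
    print(f"P(gym on Tuesday) = {belief.probability:.2f}")  # 0.57

Because the prior contributes pseudo-counts, a handful of observations adjusts the estimate without letting a single noisy week overturn it, which is exactly the kind of conservative adaptation described above.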

Assessing the correctness of the constructed knowledge graph is a challenge in itself. For example, reconstructing a person’s moving habits requires understanding which places they visited. However, we only have weak digital clues: a geolocation trace, and possibly a text message or a calendar event if we’re lucky. A statistical model combines all those digital clues and estimates which place is the most likely to have been visited. One problem arises: because we don’t know which place was visited in reality, we have no ground truth available to assess the precision of the algorithm. We lack access to so-called supervised data.

From this noisy geolocation trace, the AI has to figure out which place this person visited.

We are therefore forced to supervise our data ourselves. For this particular example, we use the Moves app to note down which places we visit. We can then assess the performance of the algorithm by comparing the history of places visited with the constructed knowledge graph. This also gives us a certain confidence in whether or not the algorithm can generalise to new situations.

We strive to do probabilistic inference, meaning that we build algorithms able to express a confidence level in their answer. For this particular example, the algorithm outputs the most likely visited place with an associated probability. If the confidence is sufficient, then the supposedly visited place is added to the knowledge graph and the associated habit recorded. If the confidence is not sufficient, then the AI knows it doesn’t know, and it won’t record that pattern because it is likely to be inaccurately described. The critical confidence level is calibrated using the supervised data we collected.
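
Here is a hedged sketch of that know-when-you-don’t-know pattern. The Gaussian noise model, the text-clue boost and the 0.8 threshold are illustrative assumptions; in this scheme, the threshold plays the role of the critical confidence level calibrated on supervised data:

    import math

    # A sketch of probabilistic place inference with abstention.
    # The likelihood functions and the 0.8 threshold are assumptions
    # for illustration, not the production model.

    def gps_likelihood(place_xy, fix_xy, sigma_m=50.0):
        """Likelihood of a noisy GPS fix given a candidate place."""
        d = math.dist(place_xy, fix_xy)
        return math.exp(-0.5 * (d / sigma_m) ** 2)

    def text_likelihood(place_name, message):
        """Crude text clue: boost places mentioned in a recent message."""
        return 3.0 if message and place_name.lower() in message.lower() else 1.0

    def infer_place(candidates, fix_xy, message, threshold=0.8):
        """Return (place, confidence), or (None, confidence) when unsure."""
        scores = {
            name: gps_likelihood(xy, fix_xy) * text_likelihood(name, message)
            for name, xy in candidates.items()
        }
        total = sum(scores.values())
        name, best = max(scores.items(), key=lambda kv: kv[1])
        confidence = best / total  # normalised posterior over candidates
        if confidence < threshold:
            return None, confidence  # the AI knows it doesn't know
        return name, confidence

    candidates = {"Gym": (0.0, 0.0), "Cafe": (40.0, 30.0)}  # metres
    place, conf = infer_place(candidates, fix_xy=(5.0, 5.0),
                              message="see you at the gym!")
    print(place, round(conf, 2))  # Gym 0.81: confident enough to record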

Rapid prototyping environment

A tech company’s potential to create value comes from its ability to prototype quickly and iterate fast. We’ve always liked to build cool stuff with data, and that’s why prototyping has been part of the Snips DNA from the start. We’ve always focused on hiring extremely versatile people who combine a deep understanding of data science with the ability to prototype fast, whether it takes the form of an app, a backend or a dashboard.

“One day you have a crazy new idea, the next you’re testing it live on your phone. This is how it is to be part of a family of bright and versatile people. It makes you feel like anything is possible.” — Colas Kerkhove, Snipster

People passionate about building stuff will develop the tools they need. This is how we ended up developing an infrastructure and a whole tooling suite that allows Snipsters to go from idea to prototype with the least amount of work and time possible. As an example, we’ve developed a web-based IDE that allows us to visualise spatial data and prototype algorithms with instant feedback. By coding HTML sliders directly in the IDE, we can build interfaces that help us explore and understand the impact of changing an algorithm’s parameters. Code can be saved, shared and version controlled. We call this tool the explorable.

Rapid prototyping of a metro detection algorithm using the “explorable”. The geolocation trace (in blue) is used to infer how far on the metro line (green point) the person has travelled. JSON or CSV data can be drag-and-dropped onto the interface, or a REST API can be queried.

In order to rapidly prototype algorithms, one also has to be very efficient at collecting and/or constructing datasets. That is why we built the ContextApp for iOS and Android, whose purpose is to gather real-time data from all device sensors and system APIs for research purposes. Those datasets are centralised in a backend, and made available to anyone who has a prototype idea and wants to test it on collected data.

Constructing a knowledge graph also requires understanding language, in order to make sense of digital clues left in text messages for example. The first task in such an endeavour is to construct a dataset of text messages with their associated meaning. One of our data scientists built a tagging tool which allows us to very efficiently tag words with their associated meaning. Everyone in the company is highly incentivised to use the tool (cookie rewards, anyone?), and it has been improved with keyboard shortcuts so as to minimise the pain of using it.
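
To give a sense of what such tagged data could look like, here is an illustrative example; the label set (CONTACT, PLACE, DATETIME, O for untagged words) is an assumption, not the tool’s actual format:

    # Illustrative example of a tagged text message. The label set
    # (CONTACT, PLACE, DATETIME, O for untagged words) is an assumption,
    # not the actual schema used by the tool.
    tagged_message = {
        "text": "Dinner with Anna at Chez Marcel on Friday 8pm",
        "tokens": [
            ("Dinner", "O"), ("with", "O"), ("Anna", "CONTACT"),
            ("at", "O"), ("Chez", "PLACE"), ("Marcel", "PLACE"),
            ("on", "O"), ("Friday", "DATETIME"), ("8pm", "DATETIME"),
        ],
    }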

From prototype to production

Prototypes need to be made robust and solve a real user problem in order to be integrated into the product. It is sometimes frustrating to see how many prototypes we build compared to how few we eventually ship in our app. The reason is that it takes a lot of time and effort to make something robust enough. However, the effort is far from wasted, because we acquire solid domain expertise from knowing what works, what doesn’t, and why.

“In an environment where time is the biggest constraint, our job is not only to build algorithms, but first to figure out what really needs an algorithm.” — Adrien Ball, Snipster

It makes a lot of sense to test new algorithms as close to the product as possible. It forces the team to think about UI and UX, which often offer much simpler solutions than algorithms do. For example, why spend a couple of days waiting for enough data to detect where a person lives instead of just asking them through the UI?

If the feature makes sense, a new build of the app containing the new algorithm is created. We add debug screens or notifications that give us real-time, on-device insights. There’s nothing more exciting than holding your phone and seeing an algorithm work live. It gives so much more intuition about its inner workings than running it offline does. It also exposes the data scientist to engineering problems: network latency, varying data quality, OS constraints… Dashboards and logs are nevertheless used to inspect and debug retrospectively (the explorable is used extensively at this stage).

The reality check is sometimes quite harsh. Battery drain might render the algorithm unusable in practice, or the process could get killed because the algorithm uses too much memory or takes too long to execute. This is especially true on Android, where devices vary radically in specs and limitations.

The multidisciplinary feature team

Typically, a team responsible for implementing a simple feature would consist of engineers and designers. The technical details of how the feature is implemented are left to the engineers, as long as the feature works from a user perspective.

For a feature heavily based on data science, the data scientist(s) who designed it must be added to the team. The battery impact must be assessed, and all the edge cases where the algorithm might fail must be identified and proper fallbacks implemented, both in terms of engineering and UI. The algorithm requires data sources, so a data pipeline must be designed or integrated with. This raises the complexity of building a new feature, and forces a multidisciplinary team to work very closely together.

A team of Snipsters working (very) closely

However, a data science feature requires a lot of prototyping and exploration before the engineers can start implementing it. It is very difficult to stay agile and avoid the sequential waterfall model when people wait on other people to finish their work. How do you move fast without isolating the product and data science teams and risking that data scientists become disconnected from users (read: build over-engineered solutions to problems that users don’t even have)?

That’s a tough one. We’ve had several iterations, and the currently implemented organisation model focuses on building a company-wide, use-case-driven roadmap for the coming months. Data science teams are tasked with researching and prototyping the algorithms that will become the building blocks enabling a particular feature identified in the roadmap. When the time comes to implement the feature, the associated data scientists physically change rooms and sit with the product team. They effectively become part of the product team, and pair-program their algorithms in the production codebase. They are confronted with production-level concerns (battery, network latency…) together with engineers, and think through user interactions together with designers.

The interesting part is that this organisation model emerged by itself. We witnessed it by observing the flow of people at the office, as we don’t have dedicated desks. The Snips HQ has several floors, with the product teams located on the first and the data scientists on the second. We started observing convection-like motions of data scientists moving up and down between floors as they alternated between algorithm research with other data scientists and implementation with product teams. We’re still researching the best ways to share knowledge between data science and product teams, and the best ways to make sure product teams feel confident enough to own the implementation of complex data science algorithms.

What’s next

We’ve come a long way, but there’s much more to learn and experiment with on the product, data science and organisational fronts. Future posts will dig deeper into those aspects, so stay tuned!

If you’re as excited about the product as we are, head for the beta and help us improve our AI. Also, we’re hiring, so if building an AI-based product excites you, join us!
