Building Chris: A journey through inception, design and technology

We set out to make driving smarter and safer in every car. More specifically, we set out to reinvent how the average commuter stays connected while driving. Over 500 million cars on the road across North America and Europe today are more than 9 years old and offer none of the connectivity we as a society have grown accustomed to. The consequences are serious: accident rates are rising because distracted drivers engage illegally with their smartphones.

So, we posed the question: How can we change this behavior of distracted driving, while improving user experience and connectivity?

We then began our journey from discovery to solution through a series of phases, including:

  1. Inception
  2. Use Cases
  3. Design
  4. Market Research
  5. Integrating Technology

Early Pitches

We started testing the early idea by pitching it to friends and friendly potential investors. This enabled us to collect feedback, generate additional ideas, and build depth: which markets to prioritize, potential partners, and competitive intel. Just as importantly, these conversations answered whether the concept was clear and whether there were potential commercial obstacles. Did they get the idea? How did they react? What were their concerns?

Nailing the Basics

“Who is the customer?” and “What problem are you solving for them?” are the two big hairy questions in the early days. It’s important to answer them early. Formulate a hypothesis based on what you believe will happen, and then stick with it as the core directive as you develop.

Once we had the very early “core team” in place, we ran a workshop on those two questions with the “honorary guest participants” whose input we wanted. It was a hot summer day, and the discussion made it even hotter. But we followed the framework “MVP” vs. “later” vs. “never” vs. “doesn’t matter” for both questions, and at the end of the day, we had reached agreement.

The purpose of being clear on those two questions from the beginning is to de-bottleneck the design process. Importantly — do NOT change that decision in the design process unless you have truly and fully validated that it does not make sense to pursue this market segment or problem. You can change the solution you are building as a “solution pivot”, i.e. the kind of product that will solve the problem, and you can change that quite radically (from digital to physical, etc.), but if you change your mind on the customer or problem, you will reset the design process back to the start.

Product Design Iterations

In the first month we ran an extensive “phase 0” of the design, exploring more than 20 design directions, researching materials and similar products, collecting inspiration, etc. We then used simple tools, like popularity scores (the green stickers) and post-it notes, to reach some of the fundamental decisions around the design. The important takeaway from this exercise is a reconfirmation of what you are and what you are not, and a treasure trove of ideas that you can tap into for the next months and years.

Finding and Interviewing Customers

One really useful side effect of our office location in an industrial area of Berlin is that it is right next to two car maintenance shops, one focusing on tires, and the other on general maintenance. People go there to get their tires changed or car repaired, and then they sit and wait for 30–45 minutes. It was a perfect opportunity to conduct customer interviews with drivers for a product that goes into their cars!

For example, we would load a range of design variations onto a tablet, walk in, spend an afternoon, and come back with 5–10 high quality, in depth interviews of potential target customers. And we didn’t spend a dime!

Testing with Videos

Videos are a very powerful way to test concepts and features. For example, we were unsure how important the gesture recognition would be to the proposition, and whether people understood what to do.

When we ran our first smoke tests on a simple web page with product picture galleries, we had the impression people “rationally” understood the product but did not emotionally connect. They also had no clue what “gesture recognition” meant or what it was good for. So we got in the car and recorded a video with high-quality photography, UI visuals and professional voiceovers. The result was a video that looked good and achieved results. Suddenly, everyone picked up on the gestures, and they were truly wowed — and we knew that hand gestures would be an essential part of Chris.

Display and Hand Gesture Recognition

Finding the right hardware setup combines many years of engineering experience with lots of research and prototyping to verify that components do what they should. For example, we started prototyping with rectangular displays, but it became clear very quickly that they were simply not a good fit for the car.

One of our design principles was that drivers should never have to touch the device while driving, since that is both distracting from the road and also often requires the driver to lean forward, further reducing the ability to react in a sudden dangerous situation. So we focused on voice and hand gesture recognition.

Gesture sensors differ vastly in their capabilities: some time-of-flight sensors can detect a wide set of highly complex gestures, and there is going to be a lot of innovation in this space in the coming years. For Chris, our feeling was that we should not bother drivers with complex gestures, which demand more cognitive bandwidth, but keep it to very few, simple gestures that can be detected with very high reliability and at a reasonable distance from the device. For this, we looked at radiation-based passive IR sensors, reflection-based IR sensors and electrical near-field 3D gesture controllers.
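To illustrate why simple gestures are so much easier to detect reliably, here is a minimal, hypothetical sketch of swipe classification from a two-channel reflective IR sensor: the channel whose reflection peaks first reveals which side the hand entered from. The function name, threshold, and sample data are illustrative assumptions, not our actual firmware.

```python
# Hypothetical sketch: classifying a left/right swipe from a two-channel
# reflective IR sensor. Channel A sits to the left of channel B; the
# channel that crosses the threshold first tells us the swipe direction.

def classify_swipe(channel_a, channel_b, threshold=0.5):
    """Return 'left-to-right', 'right-to-left', or None.

    channel_a / channel_b: equal-length lists of normalized readings
    (0.0 = no reflection, 1.0 = hand directly above the sensor).
    """
    def first_crossing(samples):
        for i, value in enumerate(samples):
            if value >= threshold:
                return i
        return None

    a = first_crossing(channel_a)
    b = first_crossing(channel_b)
    if a is None or b is None or a == b:
        return None  # hand never crossed both sensors cleanly
    return "left-to-right" if a < b else "right-to-left"

# A hand sweeping left to right triggers channel A before channel B:
print(classify_swipe([0.1, 0.7, 0.9, 0.4], [0.1, 0.2, 0.8, 0.9]))
```

A detector this simple is robust precisely because it asks so little of the sensor data, which is the trade-off the paragraph above describes.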

Collecting NLP Test & Training Data

One of the very first steps in creating a voice-based product is to collect test and training data. These should be utterances, phrases, and sentences of varying degrees of difficulty within the domains you are targeting (say, email, music, navigation) and within the contexts you will operate in.

For us, this was: in the car, not in the car, in the car with open windows, windows closed, high speeds, low speeds, cobblestone roads, etc.

This catalogue of phrases, contexts and domains creates a huge matrix of areas to record. What worked well for us was to have two people drive around (a male/female combination for the different voices): the co-driver would read the script to the driver, and the driver would repeat the phrase back. The whole conversation was recorded and later cut up. Of course, some elements of the training data can be crowdsourced.
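The recording matrix described above is just the cartesian product of domains, contexts and speakers. A short sketch of how one might enumerate the sessions to schedule (the specific domain and context lists here are examples, not our actual catalogue):

```python
# Illustrative sketch: every combination of domain, driving context and
# speaker becomes one recording session to schedule and later cut up.
from itertools import product

domains = ["email", "music", "navigation"]
contexts = ["parked", "city, windows closed", "city, windows open",
            "highway", "cobblestone road"]
speakers = ["male", "female"]

sessions = [
    {"domain": d, "context": c, "speaker": s}
    for d, c, s in product(domains, contexts, speakers)
]

print(len(sessions))  # 3 domains x 5 contexts x 2 speakers = 30 sessions
```

Even with short lists, the matrix grows quickly, which is why crowdsourcing parts of the collection becomes attractive.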

Equipped with our test and training set, we evaluated different ASR (automatic speech recognition) and NLU (natural language understanding) systems, configurations and setups to get to the magic formula.
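Comparing ASR systems on a fixed test set usually comes down to word error rate (WER): the word-level edit distance (substitutions, insertions, deletions) between the system's transcript and the reference, divided by the reference length. The source does not name its exact metric, so this is a standard sketch rather than our evaluation harness:

```python
# Word error rate via word-level edit distance (Levenshtein).
def word_error_rate(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost)  # substitution / match
    return dp[len(ref)][len(hyp)] / len(ref)

# One dropped word out of four: WER = 1/4
print(word_error_rate("play some jazz music", "play jazz music"))  # 0.25
```

Running the same metric over every cell of the recording matrix (highway vs. parked, windows open vs. closed) is what lets you compare configurations fairly.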

One finding was that certain kinds of noise reduction can significantly degrade the accuracy of the language processing, so we made sure to include a range of filtering and noise-cancellation solutions in the hardware design, allowing us to continuously optimize the speech recognition even after the devices are in the market.

Parting Thoughts: “See the Whole” as early as possible

“Seeing the whole” is one of the seven principles of lean software development, and a critical one in our view. Our product involves software and smartphones, cloud platforms, a variety of SDKs, speech recognition, embedded device software and communication protocols between these. And of course, you have app developers, server and full stack developers, embedded software developers and NLP/AI developers and data scientists. In such a setup, it’s easy to end up with a bit of a mess once the pieces come together.

That’s why we pushed as early as possible to bring together all systems (app, NLP, embedded), even if the resulting prototype still had very limited functionality. One of the first features of our prototype app was a simple way to take apart what the ASR heard, what the NLU understood, how long it all took, and so on. A great way for our AI team to start banging on accuracy and speed for months to come! Once we had that first integrated prototype, we simply worked, and are still working, every day to make it better, more functional and sleeker for our customers.
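The kind of debug trace described above can be sketched as a small record that each pipeline stage writes into: the ASR transcript, the NLU intent, and per-stage timings. Field and function names here are illustrative assumptions, not our actual schema; the lambdas stand in for real ASR/NLU service calls.

```python
# Hypothetical sketch of an end-to-end pipeline trace: what the ASR heard,
# what the NLU understood, and how long each stage took.
import time
from dataclasses import dataclass, field

@dataclass
class PipelineTrace:
    utterance_audio_ms: int            # length of the recorded audio
    asr_transcript: str = ""
    nlu_intent: str = ""
    timings_ms: dict = field(default_factory=dict)

def traced(trace, stage, fn, *args):
    """Run one pipeline stage and record its wall-clock duration."""
    start = time.perf_counter()
    result = fn(*args)
    trace.timings_ms[stage] = round((time.perf_counter() - start) * 1000, 1)
    return result

# Stub stages standing in for the real ASR and NLU services:
trace = PipelineTrace(utterance_audio_ms=1800)
trace.asr_transcript = traced(trace, "asr", lambda: "navigate home")
trace.nlu_intent = traced(trace, "nlu",
                          lambda text: "start_navigation", trace.asr_transcript)
print(trace.asr_transcript, trace.nlu_intent, sorted(trace.timings_ms))
```

Surfacing a trace like this in the prototype app is what lets an AI team attack accuracy and latency stage by stage instead of guessing where an end-to-end failure came from.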

Become a part of the journey. Pre-order your Chris now.