Our Approach to Product Design at Rainfall

An exploration of a self-driving vehicle user interface

Marc Anderson
7 min read · May 29, 2019

--

Last week I partnered with the Adobe Live team to showcase Adobe XD, Adobe’s user experience design tool, and to talk about my design philosophy for creating digital products and how it has shaped our process here at Rainfall.

Watch me on Adobe Live.

Since then, I have received a number of messages asking for more details about how we approach product design, from setting up the initial framework for applications to exploring various segments of functionality.

In light of that, I thought it would be fun to pull a piece from our archive: a Friday afternoon exploration of a simple car navigation interface, completed in the summer of last year.

This is far from complete, but offers an inside look at how we explore and test ideas that may or may not make their way to the final product.

The Brief

Assumptions

From the initial paragraph of the brief, a few key points are noted, along with their implications:

  • This is an electric vehicle, so charge and vehicle range play a key role in trip success.
  • The touch interface is central to the overall experience, and most key vehicle actions will be available from this panel.

Because the brief uses the terms “driver” and “passenger,” we can assume the seating arrangement in this vehicle is more traditional, and not the more collaborative, center-facing model proposed in some vehicle concepts.

The specifications call for a screen resolution of 1024 × 730; therefore, this screen is neither capable of, nor meant to, offer the same number of omnipresent features as the Tesla Model 3’s, given the disparity in physical size.

This screen will sit in the traditional dash-mounted position, with the expectation that the “driver” will sit to its left (in left-hand-drive countries) and the passenger will sit to its right, something like this:

Finally, because this is a “fully self-driving” vehicle, we can craft an experience in which the driver need not be aware of vehicle health and operational particulars at all times.

Instead, we can create a model that is contextually aware of the status of both vehicle and passengers alike, and present only the information necessary in each of those contexts.

The Framework

Before designing any experience, I like to start with a bird’s-eye view of how the user will access each portion of it. In a way, this is a light heuristic exploration that will be evaluated further through prototyping and testing.

Even though this vehicle is fully self-driving, the driver, or the individual overseeing its movement, might still require access to specific controls and information related to its safety, functionality, and current operational status.

Therefore, all operational controls such as speed, lights, wipers, and defrost are kept on what is traditionally the driver’s side, alongside climate adjustment.

Secondary controls and information are kept on the passenger’s side, along with their associated climate adjustment.

Core vehicle controls sit on top of the experience on the left / right, with navigation and entertainment centrally located underneath
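
To make that split a bit more concrete, here’s a rough TypeScript sketch of how the layout could be modeled; every name in it is invented for illustration and isn’t part of the brief:

```typescript
// Hypothetical model of the three screen regions described above.
interface ScreenLayout {
  driverControls: string[];    // operational controls, traditionally on the left
  passengerControls: string[]; // secondary controls and information, on the right
  center: "navigation" | "entertainment"; // the large central area
}

const layout: ScreenLayout = {
  driverControls: ["speed", "lights", "wipers", "defrost", "climate"],
  passengerControls: ["time", "weather", "phone", "settings", "volume", "climate"],
  center: "navigation",
};
```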

Many of the primary and secondary controls require toggling, multiple selections, or increasing and decreasing of scale, so when selected, those options expand the panel to allow for the subsequent interaction.

This can be a tabbed or slider structure, with multi-selection being accomplished through tap and drag or multiple taps.

Core control options can be multi-selected, toggled, or scaled through an expanded state of each panel
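
Another way to look at those three interaction styles is as a small set of control types that determine how an expanded panel behaves. Again, this sketch is purely illustrative:

```typescript
// Illustrative control types for the expanding panels (names are invented).
type PanelControl =
  | { kind: "toggle"; label: string; on: boolean }                                  // e.g. rear defrost
  | { kind: "multiSelect"; label: string; options: string[]; selected: string[] }   // e.g. vent direction
  | { kind: "scale"; label: string; min: number; max: number; value: number };     // e.g. temperature

// The expanded panel chooses its interaction from the control's kind.
function gestureFor(control: PanelControl): string {
  switch (control.kind) {
    case "toggle":
      return "single tap";
    case "multiSelect":
      return "tap and drag, or multiple taps";
    case "scale":
      return "drag to increase or decrease";
  }
}
```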

Finally, the large area at the center of the experience is kept for the two pieces of information that driver and passengers might be most interested in while the vehicle is in motion:

  • Trip status and navigation
  • Audio and visual entertainment

Each of these experiences can be viewed with the core controls visible; however, because this car is self-driving, each also allows for a completely focused state where those additional controls are not present.

Vehicle entry and charting a course

When first entering the vehicle, we assume that the occupants would like to be taken to a particular destination. Therefore, the screen will default to the navigational view and provide information about the current health and range of the battery, including the expected drive time until depletion, in this case two hours and thirty minutes.

If the need is simply to re-park the vehicle, a single tap will bring back the core controls.

Navigational view with current vehicle location, charge status, and expected driving duration
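
The brief doesn’t specify how that duration is derived, but the simplest plausible estimate is remaining charge divided by an average consumption rate. A quick sketch, with invented figures:

```typescript
// Illustrative estimate of "expected drive time until depletion".
// Both inputs are hypothetical; a real system would use a rolling
// average of recent consumption.
function driveTimeUntilDepletion(remainingKWh: number, avgKWhPerHour: number) {
  const totalHours = remainingKWh / avgKWhPerHour;
  let hours = Math.floor(totalHours);
  let minutes = Math.round((totalHours - hours) * 60);
  if (minutes === 60) { hours += 1; minutes = 0; } // guard against rounding up
  return { hours, minutes };
}

driveTimeUntilDepletion(37.5, 15); // → { hours: 2, minutes: 30 }
```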

Once a destination is entered, the vehicle plans a route that contains any necessary charging stops and begins step-by-step route navigation.

Remember, this information is unnecessary to operate the vehicle, but will provide the occupants with important locational information throughout their journey.

Trip added to navigation with midpoint charge destination automatically added
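
The planning logic itself sits outside this exploration, but one plausible reading of “any necessary charging stops” is a greedy rule: whenever the remaining range can’t cover the rest of the trip, route through the furthest reachable charger. A sketch, with all types and data invented:

```typescript
// Sketch of a greedy charging-stop planner. Types and data are invented.
interface Charger { name: string; distanceKm: number } // distance from trip start

function planChargingStops(
  destinationKm: number,
  currentRangeKm: number, // range on the current charge
  fullRangeKm: number,    // range after a full charge
  chargers: Charger[]     // sorted by distanceKm, ascending
): Charger[] {
  const stops: Charger[] = [];
  let position = 0;
  let range = currentRangeKm;
  while (position + range < destinationKm) {
    // furthest charger still reachable from the current position
    const reachable = chargers.filter(
      c => c.distanceKm > position && c.distanceKm <= position + range
    );
    if (reachable.length === 0) throw new Error("destination unreachable");
    const stop = reachable[reachable.length - 1];
    stops.push(stop);
    position = stop.distanceKm;
    range = fullRangeKm; // assume a full charge at each stop
  }
  return stops;
}
```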

A tap takes the user to the core experience, with the navigation present at its center.

Primary view with navigation visible

Interacting with the core controls

This is the primary view, with core controls visible and navigation present as the central information.

On the left we have:

  • Vehicle speed
  • Current movement status (Gear is unnecessary because the vehicle is self-driving)
  • Headlights
  • Wipers
  • Rear defrost
  • Front defrost
  • Fan speed
  • Seat warmer and vent direction
  • Temperature

On the right we have:

  • Current time
  • Weather
  • Overall vehicle controls (Mirrors, drive comfort, etc.)
  • Phone and calling
  • Settings
  • Entertainment volume
  • Fan speed
  • Seat warmer and vent direction
  • Temperature

Primary view with navigation visible and route charted

Tapping on the driver’s side temperature control allows adjustment through a drag gesture.

Primary view with driver’s “scaling” panel open
Driver’s side temperature increased

The controls on each side are independent and can be adjusted simultaneously.

Both panels can be open simultaneously thanks to assumed multi-touch functionality
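
In implementation terms, that independence just means binding each touch to the panel it started on. Here’s a sketch using the standard Pointer Events API; the adjustTemperature handler is hypothetical:

```typescript
// Each active pointer is bound to the panel it started on, so the
// driver's and passenger's drags never interfere.
const activeDrags = new Map<number, { panel: "driver" | "passenger"; startY: number }>();

function onPointerDown(e: PointerEvent, panel: "driver" | "passenger") {
  activeDrags.set(e.pointerId, { panel, startY: e.clientY });
}

function onPointerMove(e: PointerEvent) {
  const drag = activeDrags.get(e.pointerId);
  if (!drag) return;
  const delta = drag.startY - e.clientY; // dragging up increases the value
  adjustTemperature(drag.panel, delta);  // hypothetical handler
}

function onPointerUp(e: PointerEvent) {
  activeDrags.delete(e.pointerId);
}

declare function adjustTemperature(panel: "driver" | "passenger", delta: number): void;
```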

Revealing Navigational Turn-by-turn Directions

The default state of the navigational view displays only the next action the vehicle will take; the full route, however, can be previewed by tapping on that action.

From the subsequent list, the route can be viewed, modified, and canceled.

Trip added to navigation with vehicle traveling along the suggested route
Navigational view with list of directions exposed

Revealing Entertainment

Entertainment is central to the experience given that the vehicle will drive itself, so a solution was developed that keeps it central to the display.

A swipe up on the navigation reveals whatever media is currently playing, along with the core controls for that media.

Primary view with navigation visible and route charted

Want to watch a movie or show without the additional clutter of vehicle controls? Not a problem: tapping on the screen makes all surrounding elements invisible.

Primary view with entertainment visible and additional controls toggled invisible

To change the currently playing media, a vertical swipe-up or scroll gesture in the center of the screen will reveal the media selection screen. Options include video, streaming audio, and traditional radio.

This is also the screen the user sees when swiping the navigational view up before they have chosen a particular piece of content.

Primary view with entertainment toggled and media selection exposed

A tap on the down arrow or outside the core media controls returns the user to the currently playing content, while a swipe down from the top of the screen again brings the navigation front and center, alongside the core controls.

Primary view with navigation visible and route charted
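
Taken together, the center of the experience behaves like a small state machine driven by three gestures. This sketch is my reading of the flows described above, not production logic:

```typescript
// Hypothetical gesture-driven state machine for the center area.
type CenterView = "navigation" | "nowPlaying" | "focused" | "mediaSelection";
type Gesture = "swipeUp" | "swipeDown" | "tap";

function next(view: CenterView, gesture: Gesture): CenterView {
  switch (view) {
    case "navigation":
      return gesture === "swipeUp" ? "nowPlaying" : view;  // reveal current media
    case "nowPlaying":
      if (gesture === "swipeUp") return "mediaSelection";  // choose other media
      if (gesture === "swipeDown") return "navigation";    // bring navigation back
      return "focused";                                    // tap hides all controls
    case "focused":
      return gesture === "tap" ? "nowPlaying" : view;      // tap restores controls
    case "mediaSelection":
      if (gesture === "tap") return "nowPlaying";          // down arrow / outside tap
      if (gesture === "swipeDown") return "navigation";
      return view;
  }
}
```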

The Framework in Action

This prototype explores the key interactions of the framework.

Additional Exploration and Future Vision

This exploration barely scratched the surface of this system and its potential use cases.

In addition to further testing and refinement, a number of further explorations could follow this exercise, such as using multi-touch gestures to make quick adjustments to common actions, or activating voice control through a simple trigger phrase.

And there you have it, a look behind the curtain at our design process and the power of deep yet rapid functional exploration. Thanks for reading!

Marc founded Rainfall, a brand experience design studio in Seattle and New York that specializes in graphic, app, and website design. We are always hungry for new clients. You can also follow Rainfall’s work on Instagram. Or reach out directly to discuss product design and branding.

Discuss a project

View our work

