Designing “Google Maps on Crack”: Reimagining AI Ridesharing with Argo

CL (designfirebase) · 8 min read · Jun 4, 2018

Full disclosure: not a real app (yet).

For this project, I served as creative director and strategic design lead for a group of five in the Integrated Design 350: Advanced Design for AI course at UT Austin.

This spring, in the last semester of my master's program at the University of Texas at Austin, I decided to build a courseload that revolved around my interests and really pushed my skills. Though I went into my MA sequence with only a vague understanding of the advertising industry and no set opinion on where I'd fit in, the interests I developed across different spheres and verticals ended up uniting pretty serendipitously. (Spoiler alert: "fit" wasn't the word I should've used as a diagnostic, as it turns out.) I learned that tech, UX, design, and storytelling all do fit with one another, and actually should coexist to create the best products, companies, design methodologies, and teams.

But for now, we’re going to talk about Argo.

Most of us have some exposure to the idea of autonomous ridesharing. Whether it's understood only through vehicle "autopilot" features from the likes of Tesla, BMW, and Mercedes, or through research by larger companies invested in autonomous technology (think Waymo by Google, or even Uber), it's a step closer to the chromed future sci-fi promised us, and today it's easier than ever to see.

My group in Advanced Design for AI, a course offered by UT's integrated design department and taught by the illustrious Jennifer Sukis, was tasked with developing a cognitive experience using IBM's communication model and designed with respect to IBM's AI-specific design principles. The moment we got on board with one pitch was summed up by a question a group member led with:

Imagine you’re in an autonomous rideshare, and it’s raining outside — how would you turn on the windshield wipers?

It makes sense. And that's only one issue: what about explaining quick swerves, or determining the "best" route for different users? Right now, neither consumers nor producers seem to be thinking much about the pain points early adopters will hit once this technology reaches the market. That leaves serious potential in the autonomous vehicle market (and ridesharing, but we'll get to that later) to design truly delightful experiences for users, and to develop solutions that make the relationship between people and AI seamless, comfortable, unique, and scalable for the future, all from the get-go.

Hi, I’m Argo. Where are we going?

Argo is an intelligent conversation interface designed to interact with autonomous vehicle ridesharing systems and create a better experience for the rider. We envisioned Argo to operate as the vocal component to an autonomous vehicle’s mainframe, acting as a chauffeur of sorts to speak with riders while getting them where they need to go.
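
To make that "windshield wipers in the rain" moment concrete, here's a rough sketch, in Python, of how a rider's utterance might get routed to a cabin control rather than a navigation action. Everything in it (the intent keywords, the VehicleControls stand-in, the canned replies) is hypothetical; it's a thought experiment in code, not Argo's actual architecture.

```python
# Hypothetical sketch: routing a rider's utterance to a vehicle action.
# The intent keywords and the VehicleControls stand-in are illustrative,
# not part of any real Argo or vehicle API.

class VehicleControls:
    """Stand-in for the car systems Argo would talk to."""

    def set_wipers(self, on: bool) -> str:
        return f"Wipers {'on' if on else 'off'}."

    def explain_maneuver(self, reason: str) -> str:
        return f"I swerved because {reason}."

def handle_utterance(utterance: str, car: VehicleControls) -> str:
    """Very rough keyword matching; a real build would lean on an NLU service."""
    text = utterance.lower()
    if "wiper" in text or "raining" in text:
        return car.set_wipers(on=True)
    if "why" in text and "swerve" in text:
        return car.explain_maneuver("there was debris in the lane")
    return "Sorry, I didn't catch that. Could you rephrase?"

if __name__ == "__main__":
    car = VehicleControls()
    print(handle_utterance("Hey Argo, it's raining, can you turn on the wipers?", car))
```

The point of the sketch is the separation: conversation handling sits in front, and the vehicle's own systems stay behind a narrow interface Argo can speak to.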

Unfortunately, we ran into some problems early on in our explanation of Argo: not in its value proposition, exactly, but in its execution. After our first pitch, we were asked to clarify what type of AI system Argo was best likened to. Without missing a beat, we answered "J.A.R.V.I.S.", as we aimed to design an interface that could liaise between intelligent technology and its users, and navigate problems that balanced required computational power with conversational finesse. There was also a significant amount of indication in our research, as one group member said while cracking a smile, that people might be more comfortable with a smartass in the driver's seat.

Users

Our team dedicated a significant amount of time to deciding whether Argo would be specific to ridesharing, or an interface that would attach to and parlay with the systems of personal semi- or fully-autonomous vehicles. We opted to develop Argo in tandem with a ridesharing experience to best understand the needs of users as they pertain to their relationship with the road, their car, being a passenger, and getting where they need to go. Because Argo's central mission is to get people where they need to go as simply as possible, and because personal vehicles come with extraneous hassles Argo may not yet be equipped to handle (a hail within a parking lot, for instance, or speaking to emergency responders in the case of an accident), we decided to roll Argo out as a rideshare navigator, driver, and chauffeur first.

During the user research phase of this task, we decided to home in on one of the more common hassles of ridesharing: accessibility and language barriers. Our conversations indicated that many passengers who had used popular ridesharing services in the past (looking at you, Uber/Lyft) had encountered a driver with whom they experienced communication obstacles, including linguistic differences. But beyond these conversations, we also wanted to understand the barriers facing some of our more extreme users, to make sure we had all our bases covered. Through the interviews we conducted, we developed holistic user personas that we'd use to make Argo the best version of itself.

And we found some interesting things. Some of our interviewees were riders in the ridesharing equation, but had experience in other relevant verticals such as restaurants or the service industry. Combined with their exposure to riders and drivers who faced language barriers with one another, these interviewees gave us valuable insight into what riders wanted, needed, and expected in their trip experiences, sometimes without even knowing it.

From here we decided on five design goals for Argo, based on the conversations we had: we intended to design Argo to be conversational, perceptive, transparent, savvy, and locally proficient.

Design and Features

Though Argo would be an interface/integration outside of the car and an "all-knowing voice" within one, we wanted the app's communication with the AI to reflect our design goals both inside and outside of a vehicle. We designed the first few touchpoints to be sleek and easy, echoing many existing ridesharing apps for simplicity and familiarity. After signing up, the user would input an address, be shown a confirmation screen with relative distance, traffic density, and a pricing estimate, and they'd be off to the races.
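
For the curious, here's roughly what that confirmation step could look like as data. The field names and pricing constants below are made up for illustration; none of this is real Argo business logic.

```python
# Illustrative only: the fields and pricing constants are assumptions,
# not anything we actually specified for Argo.

from dataclasses import dataclass

@dataclass
class TripEstimate:
    distance_km: float
    traffic_density: str   # e.g. "light", "moderate", "heavy"
    price_estimate: float

def build_confirmation(distance_km: float, traffic_density: str) -> TripEstimate:
    base_fare = 2.50                      # assumed flat pickup fee
    per_km = 1.20                         # assumed per-kilometre rate
    surge = {"light": 1.0, "moderate": 1.15, "heavy": 1.35}[traffic_density]
    price = round((base_fare + per_km * distance_km) * surge, 2)
    return TripEstimate(distance_km, traffic_density, price)

print(build_confirmation(8.4, "moderate"))
# TripEstimate(distance_km=8.4, traffic_density='moderate', price_estimate=14.47)
```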

But beyond the standard specs, we wanted to include a few features that reflected specific pain points we discovered through our research.

One of our interviewees expressed frustration over current ridesharing apps’ disjointed “connection” feature — that is, the inclusion of a license plate number or vehicle model/make/color to help identify your vehicle, or the presence of markers on an integrated map to help both rider and driver sync with a pick-up location. But what happens if the rider is preoccupied, or the driver is in a different vehicle, or the pick-up point is confusing/unfamiliar?

On a recent trip to LA, I ran into this issue myself: while surely well-intentioned, the ridesharing pick-up points outside LAX seemed wildly disorganized and were anything but clear. After asking three separate (and very kind) airport staff for directions, I finally found my pick-up point, but still had to wait 25 minutes for my driver to arrive, thanks to traffic.

With this in mind, we designed an AR feature into Argo's system to alleviate this issue for good: using a combination of geolocation, Google Street View for accuracy, and the user's phone camera, the app highlights a ring around the designated pick-up point, then notifies the user to re-open the app and see Argo's specific car highlighted once it arrives.
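
The arrival half of that flow could be as simple as a distance check. Here's a sketch using the haversine formula; the 40-meter threshold, the notify wording, and the example coordinates are assumptions on my part.

```python
# Hedged sketch: deciding when the car is close enough to the pick-up
# point to prompt the rider to open the AR view. The 40 m threshold
# and the message copy are assumptions.

from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2) -> float:
    """Great-circle distance in metres between two lat/lon points."""
    r = 6_371_000  # Earth radius in metres
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * r * asin(sqrt(a))

def maybe_notify_rider(car_pos, pickup_point, radius_m=40):
    if haversine_m(*car_pos, *pickup_point) <= radius_m:
        return "Your Argo is here. Open the app to see it highlighted."
    return None

# Example: a car approaching an LAX-style pick-up island
print(maybe_notify_rider((33.9425, -118.4081), (33.9427, -118.4079)))
```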

Another feature we thought people would really love, especially the perpetually overbooked, like a few personal friends of ours here in Austin, is an AI driver with the foresight to schedule another car for pick-up after identifying two consecutive appointments in two separate locations.

Should the user opt in, Argo would know to call a car after their meeting from 2–3pm at WeWork to take them to their 3:30pm haircut — accounting for traffic and sync time, of course, and confirming with the user once more before calling the scheduled car.
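
Under the hood, that kind of chaining could start as a simple pass over the rider's calendar. The sketch below assumes a basic event format, a stand-in travel-time estimator, and a five-minute sync buffer; all of those are illustrative choices, not a spec.

```python
# Illustrative sketch of chained scheduling. The calendar format,
# travel-time estimator, and 5-minute sync buffer are assumptions.

from datetime import datetime, timedelta

def estimate_travel(origin: str, destination: str) -> timedelta:
    """Stand-in for a routing/traffic service; returns a fixed estimate here."""
    return timedelta(minutes=18)

def propose_chained_pickups(events: list[dict], sync_buffer=timedelta(minutes=5)):
    """If two consecutive events sit at different locations, suggest a pickup."""
    proposals = []
    for prev, nxt in zip(events, events[1:]):
        if prev["location"] != nxt["location"]:
            travel = estimate_travel(prev["location"], nxt["location"])
            pickup = min(prev["end"], nxt["start"] - travel - sync_buffer)
            proposals.append(
                f"Call a car at {pickup:%H:%M} from {prev['location']} "
                f"to reach {nxt['location']} by {nxt['start']:%H:%M}?"
            )
    return proposals  # each proposal still needs the rider's confirmation

events = [
    {"location": "WeWork", "start": datetime(2018, 6, 4, 14, 0),
     "end": datetime(2018, 6, 4, 15, 0)},
    {"location": "Barbershop", "start": datetime(2018, 6, 4, 15, 30),
     "end": datetime(2018, 6, 4, 16, 0)},
]
print(propose_chained_pickups(events))
```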

Just for fun (and to address the "local guide" aspect of our design goals), we wanted Argo to offer opinions on local restaurants and landmarks, based on popularity scores sourced from Google Maps, Yelp, and TripAdvisor.

(Incidentally, Musashiya Udon was a spot Google recommended on my trip. Next time you’re in Westwood, give it a peek.)
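
If you wanted to prototype that "local guide" scoring, a weighted blend of ratings across sources is one obvious starting point. The source weights and the data shape below are pure assumptions; we never actually integrated any of these services.

```python
# Hypothetical aggregation of popularity scores. The source weights
# and rating scales are assumptions for illustration only.

def blended_score(ratings: dict[str, float]) -> float:
    """Blend per-source ratings (all assumed to be on a 0-5 scale) into one score."""
    weights = {"google_maps": 0.4, "yelp": 0.35, "tripadvisor": 0.25}
    total = sum(weights[src] * score for src, score in ratings.items())
    weight_sum = sum(weights[src] for src in ratings)
    return round(total / weight_sum, 2)

# e.g. a spot like Musashiya Udon, with made-up ratings:
print(blended_score({"google_maps": 4.5, "yelp": 4.0, "tripadvisor": 4.3}))
```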

The most important feature we foresaw in Argo was its potential to effortlessly remind users of things that came up in conversation. If a recommendation came up earlier in the ride, Argo could prompt the user, verbally or through an app interaction, to confirm whether they'd like to receive notifications about the places they'd discussed with it.
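
Conceptually, that's just a small memory plus an opt-in prompt. Here's a minimal sketch with an in-memory store; the class, method names, and confirmation flow are hypothetical stand-ins for whatever persistence and notification layer a real build would use.

```python
# Sketch only: an in-memory conversation store and a hypothetical opt-in flow.

class ConversationMemory:
    def __init__(self):
        self.mentioned_places: list[str] = []
        self.opted_in = False

    def note_place(self, place: str) -> None:
        """Record a place Argo recommended or discussed during the ride."""
        if place not in self.mentioned_places:
            self.mentioned_places.append(place)

    def opt_in_prompt(self) -> str:
        return ("Want me to remind you later about "
                f"{', '.join(self.mentioned_places)}? Just say yes.")

    def confirm(self, said_yes: bool) -> None:
        self.opted_in = said_yes

memory = ConversationMemory()
memory.note_place("Musashiya Udon")
print(memory.opt_in_prompt())
memory.confirm(True)  # later, a reminder could be pushed for each saved place
```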

Why should we care?

For this section, it seems only appropriate to call forward a portion of a piece penned by Alvaro Soto, former design lead at IBM Watson and one of the biggest fans (and critics) of Argo during our final pitch. In User Centered AI Products, he writes the following:

When it comes to 1-second (AI) tasks, the principle “Just because you can doesn’t mean you should” should still apply. Buyers and users will need to see a real benefit for the technology to diffuse in the market. … And adoption of AI will be slow if innovators ignore the user’s job.

One of Alvaro's praises of Argo was its ability to take something largely treated as an afterthought (user experience in autonomous vehicles) and transform it into something that keeps the benefits of human-to-human interaction while bringing in the efficiency of human-to-machine interaction. Our goal with Argo is not to alienate drivers and passengers from one another, but to take a potential inevitability of future roadway infrastructure and optimize it to create the best of both worlds, with user experience and logistical effectiveness both in mind.

This being said, with the current (2018, Q2) state of technological innovation around AI, it’s hard to innovate in terms of efficiency without delving into some degree of idealism. It’s our job as designers to always keep our users — whether extreme or normative — front-of-mind in the work we do and in the products we create. Otherwise, what are we creating for?
