So Much Information, So Little Space: Evolving Emu’s UX


In 2012, I missed a meeting. The founder of a well-known tech company had emailed me, asking to meet up. We’d picked a time, scheduled it, and…I’d forgotten to put it on my calendar. By the time I realized, it was too late.

I’ll never know why this tech celebrity wanted to chat. Add it to the list of embarrassing moments that bubble up to the surface when my guard is down.

This has happened to me more than once — and, I suspect, to you as well.

Not every problem has a technological solution. But I couldn’t help thinking this one did — after all, computers excel at keeping track of things. In this case, the event existed in my email; it just needed to get into my calendar somehow, or at least generate a reminder.

Making that happen automatically was beyond my technical abilities — it would require expertise in natural language processing and machine learning — but as an interaction designer I knew I could reduce the effort needed to transfer the information to my calendar. I thought that might be enough.

I wasn’t the first to tackle this problem. From Fantastical for Mac (one of my favorites) to Doodle and Boomerang Calendar to Apple’s Data Detectors, we’re all trying to achieve better integration between the conversations where we discuss plans and the tools where we get stuff done — calendars, to-do lists, OpenTable, Fandango, Yelp, etc., not to mention the good old-fashioned telephone.

When they’re successful, these integrations reduce a tedious, multi-step interaction to a few simple clicks, without taking you away from what you’re doing. When they fail, it’s because they build their own cumbersome workflows, and you still stumble over technology.

This five-screen wizard is Doodle’s replacement for the message, “What works best? Tues at 2:30 or 3:30, Thurs at noon, or Wed at 2?”

I thought I could do better. I conceived a Gmail plug-in that would turn an email thread into a calendar event in just a few clicks. I designed it, showed it to a few advisors, and got positive feedback — even a suggestion of possible funding.

As I was digging into the technical details, I met Gummi, my eventual co-founder. He was tackling a similar problem, from the perspective of his own background in machine learning and natural language processing. We recognized that the most powerful solution in this space had to combine a smart back-end with a polished UX, and teamed up.

We talked about calendar-centric products, task-centric products, notebook-centric products. The more we talked, the more we came back to communication — and, increasingly, to texting or chat. Today, people text with friends, family, and co-workers to manage their daily chaos, especially the inevitable last-minute changes and snags. This involves a lot of typing (“Yeah, it’s a Chinese place downtown.” “I’m five blocks away!”), and a lot of app-switching. (“Hang on, I’ll check my calendar.”) It all adds up to something more than mere annoyance: Lost opportunities, like my missed meeting. Cranky spouses. Embarrassment and frustration, confusion and panic, as the details of daily life become overwhelming.

Assistants like Siri and Google Now try to pull it all together for you. And as web search gets ever more powerful, we’ve seen how the right kind of content aggregation can create real convenience.

But these moments of magic and efficiency are still few and far between. And they’re often far removed from the real action — strange notifications coming in from left field. The more Gummi and I discussed the problem, the more convinced we became that conversation is the center of the action. So why not bring the intelligence, the assistance, directly into the conversation?

Defining the Problem

We set out to create Emu, a texting product smart enough to understand your conversation, with the ability to pull in contextually relevant information and offer especially useful actions based on that information. From a product-design standpoint that meant (a) discovering the types of coordination already happening over text, and enhancing them; and (b) creating workflows so simple and compelling that they would foster new behaviors.

Our big UX challenge? Keep it simple. Learn from our cumbersome predecessors and avoid repeating their mistakes. After all, texting itself is easy. So time and again, we’ve returned to these guidelines:

(1) Emu’s UX must be more effective than the pure messaging that would happen without it.
(2) It must be efficient and feel lightweight.
(3) And it must let users stay in the context at hand — the conversation.

That’s especially tricky in a mobile app, where space is at a premium. It goes double for messaging: the keyboard is typically onscreen, leaving very little space for content.

Our natural language processing engine lets us streamline the UX. It figures out what you’re trying to do: schedule a time to meet up, remind your spouse to take out the trash, etc. Because we detect these things, we don’t need UI to collect them.
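
To make that concrete, here is a deliberately naive sketch of what intent detection can look like. It uses keyword matching for illustration only, and every name in it is made up; Emu’s actual engine is a full NLP/ML system, not a handful of string checks.

```swift
import Foundation

// Hypothetical sketch: classify a message into a coarse "intent" so the UI
// can offer the right action without asking the user what they meant.
enum MessageIntent {
    case schedule    // "Are you free Friday night?"
    case reminder    // "Remind me to take out the trash"
    case dining      // "Let's try that Chinese place downtown"
    case none
}

func detectIntent(in message: String) -> MessageIntent {
    let text = message.lowercased()
    if text.contains("remind") { return .reminder }
    if text.contains("free") || text.contains("meet") || text.contains("friday") {
        return .schedule
    }
    if text.contains("restaurant") || text.contains("dinner") || text.contains("place") {
        return .dining
    }
    return .none
}

// Because the intent comes out of the message itself, the UI only has to
// offer the next step, not collect the details.
print(detectIntent(in: "Are you free Friday night?"))  // schedule
```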

But that’s not a panacea. Coordination is a dialogue; technology can support it but can’t replace it. Software can determine that I’m scheduling something; it can show me my calendar for tonight; but it can’t tell me whether I feel like going out. No matter how smart technology gets, we need UI…we still have to interact with our users.

Action Buttons: Simple and Elegant

One of our earliest designs is still one of my favorites. When you send or receive a message that suggests an action, we give you a small button right in the message:

Action buttons in my initial wireframes, my early mocks for the app, and in Emu today.
An action button with a hint.

In usability tests over the years, I’ve seen plenty of users who are reluctant to explore unknown functionality; for them, a simple icon isn’t enough. We didn’t want to put a label on every button, since labels take up a lot of room. So we added hints that accompany a particular action button the first three times it appears. They let us play both sides of the trade-off between clarity on one hand and information density on the other.
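
Here’s a rough sketch of that “first three times” rule, with hypothetical names and no persistence; the point is just the counting logic, not how Emu actually stores it.

```swift
import Foundation

// Minimal sketch: show a hint alongside a given action type only the first
// three times that action type appears, then fall back to the icon alone.
struct HintTracker {
    private let maxShowings = 3
    private var counts: [String: Int] = [:]   // action type -> hints shown so far

    mutating func shouldShowHint(for actionType: String) -> Bool {
        let shown = counts[actionType, default: 0]
        guard shown < maxShowings else { return false }
        counts[actionType] = shown + 1
        return true
    }
}

var tracker = HintTracker()
_ = tracker.shouldShowHint(for: "schedule")   // true  (1st appearance)
_ = tracker.shouldShowHint(for: "schedule")   // true  (2nd)
_ = tracker.shouldShowHint(for: "schedule")   // true  (3rd)
_ = tracker.shouldShowHint(for: "schedule")   // false (icon alone from here on)
```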

“Voltron”: Out of Sight, Out of Mind

Those little buttons are great for simple actions, but we need to display information too. When you mention a restaurant, we want to give you reviews and available tables, not just a simple Reserve button. We want to show movie trailers and ratings and showtimes, not just a Buy Tickets option.

We considered showing such information right in the chat, but we worried that these enhancements would overwhelm the conversation.

We thought about making them expandable: the information is hidden at first, and a button expands it. Expandable/collapsible UI can be effective on the desktop, especially when it uses animation to preserve context (to help your eye track the transition of onscreen elements as one of them expands). But it’s rarely usable on mobile: even with animation, too much of the screen changes too fast for the user to follow what’s happened. That’s why simple iPhone interactions like picking from a list are implemented as drill-downs: it might seem heavier than an expansion, but you don’t lose your place.

From the start, we’ve been intrigued by the idea of an alternate, coordination-centric view of the conversation, an organized view of the planning “thread” that weaves its way through the chat. So we chose an approach we called Voltron: a categorized list of all the dates, times, places, and other planning-related entities in the chat. A badge in the chat let you know when new Voltron items were waiting.

Sketches for “Voltron.”

Sometimes you have to build something before you understand its limitations. Once Voltron was functional enough to play with, we knew it wasn’t right. It was so close to the chat — just on the flip side — but that distance was an unbridgeable gulf. Out of sight, out of mind.

The Drawer: Effective but Weird

The “mini view” drawer slid up over the keyboard when you tapped an action button.

A chat app needs the keyboard onscreen for quick typing and sending. Hiding and showing it all the time is irritating, especially on Android where the transition isn’t animated. But we thought we could use its real estate temporarily, putting our contextual info in a transient drawer that slid up over the keyboard.

Say a friend texts you, “Are you free Friday night?” Emu adds an action button to the message. Tap it, and a mini view of your calendar slides up over the keyboard, showing your availability Friday night. Swipe away the drawer when you’re done. Simple and in-context.

This approach worked perfectly…but it was weird. Something about our novel over-the-keyboard approach just felt strange. Every time that drawer slid up, it felt like we were violating some unspoken mobile UX principle. I can’t really explain it, but sometimes “This feels weird” is enough reason to seek another approach.

Which Brings Us Back To…

All these attempts led us back to an approach we’d rejected: putting compact, highly relevant info in the messages themselves. Yes, it takes up space. Yes, it means less conversation onscreen at once. But in terms of providing information in context, you can’t beat it.

A scheduling and restaurant enhancement in Emu today. These are automatically added to the message when it’s sent.

Ultimately, we concluded the trade-off was worth it. We think users will, too.

Going forward, we can put our intelligent back-end to work here too, creating more situational nuance. Suppose you’re talking about your favorite restaurant. If we think you’re actively planning something, we can show you reservations, as we do now; but if you’re discussing the awesome meal you had last night, we can save space by omitting them.
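
Sketched as code (hypothetical names, not our shipped logic), that situational rule might look something like this:

```swift
// Illustrative only: a restaurant mention pulls in reservation info only when
// the conversation looks like active planning, not reminiscing.
enum ConversationMode {
    case planning      // "Want to go there tonight?"
    case reminiscing   // "That meal last night was amazing."
}

struct RestaurantEnhancement {
    let showReviews: Bool
    let showReservations: Bool
}

func enhancement(for mode: ConversationMode) -> RestaurantEnhancement {
    switch mode {
    case .planning:
        // Actively making plans: surface reviews and open tables.
        return RestaurantEnhancement(showReviews: true, showReservations: true)
    case .reminiscing:
        // Just talking about a past meal: save the space.
        return RestaurantEnhancement(showReviews: true, showReservations: false)
    }
}
```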

Where to?

So at this point, our UX is a perfect reflection of a perfect product, and we plan to sit back and let the accolades and users roll in.

Yeah, right.

In implementing our enhancements, we’ve kept things really simple. We’re exploring deeper functionality, like more interactive coordination and collaboration, but we’re proceeding with caution: we’d much rather err on the side of simplicity than build a powerful solution that collapses under its own weight. We’re constantly mindful of our first UX principle: if we can’t make it easier than just replying to the message, we won’t build it.

We’re also constantly evaluating and reconsidering our UX. At its core, it’s similar to other chat experiences. That’s great for the familiarity and instant usability it brings. But is there something more appropriate to our target audience and the use cases we want to enable? Are there better navigational paradigms, more effective ways to present a conversation than the traditional chat-bubble view? We’re sketching, designing, and prototyping…and time will tell.

I remain especially excited about the ongoing partnership between UX and AI. Our UI can facilitate the necessary dialogue between the AI and the user; and it can manage the fact that a machine-learning system is probabilistic (you can’t always predict what it will come up with, nor can you ensure 100% accuracy). Meanwhile, the AI can enable the UX to tailor itself not only to time and place, not merely to the content of a conversation in the abstract, but to the interplay between those things, and a user’s own behavior over time. Truly intelligent UX is an outgrowth of every signal we can throw at it, both microscopic and macroscopic. We’re just scratching the surface so far.

Emu for iPhone is currently in beta. You can download it at http://emu.is/dld/.
