“A person looking at photo thumbnails on their phone” by João Silas on Unsplash

Intelligent Design: How to be User-First When it Comes to AI

Last month in Montreal, Jana Eggers, CEO of Nara Logics Inc, gave the opening keynote at AI Fest. In her talk, titled “AI: The Good, the Bad, and the Beautiful”, Eggers spoke about the incredible advancements being made in the field of AI and the amazing things that can happen because of them. But she also addressed the fact that when things are easy, there is enormous opportunity for messing it all up.

The advancements in AI have enabled even small companies to harness the power of artificial intelligence with just a few simple lines of code. “The big companies are doing the research, and we can take those learnings and apply them,” Eggers said. “You can build a GAN in 50 lines of code.” Advocating for a more practical approach to AI, she highlighted how AI implementations aren’t as exclusive as they used to be. “Just seven lines of code can produce AI.”

In essence, artificial intelligence and machine learning have reached a point of democratization. Anyone can now include some form of them in their product or service, if they really want to.
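Eggers's point is easy to demonstrate. As a sketch only (this is not one of the examples she cites, and a 1-nearest-neighbour rule is about as simple as machine learning gets), here is a complete, working "learned" classifier in roughly a dozen lines of plain Python:

```python
from math import dist  # Euclidean distance (Python 3.8+)

def nearest_neighbor(query, examples):
    """Return the label of the training example closest to `query`.

    `examples` is a list of (feature_vector, label) pairs.
    This 1-NN rule is one of the simplest forms of machine learning:
    the "model" is just the data itself.
    """
    return min(examples, key=lambda ex: dist(query, ex[0]))[1]

# Toy data (hypothetical features): (hours_online, purchases) -> user segment
training = [((1.0, 0.0), "browser"), ((8.0, 5.0), "power user")]

print(nearest_neighbor((7.0, 4.0), training))  # closest to the "power user" example
```

The feature names and labels here are invented for illustration; the point is simply that the barrier to entry really is a handful of lines, which is exactly why the design questions that follow matter.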

The result of this easy AI access has mostly been a lot of hand-wringing. Some of the fretting focuses on the tangible, immediate concerns around data protection and how we can safeguard against improper use of personal information or embedded biases, and some of it focuses on more esoteric concerns, like what might happen if the machines become ‘too smart’ or whether they should be granted rights of citizenship. Either way, the concerns are ever-present.

At Myplanet, we’ve been building digital products and software for the better part of a decade with a clear focus on user-centered experiences. But in an increasingly “intelligent” world, where everything from smartphones to smart refrigerators is a standard part of our lives, what does it mean to be user-centered? And how do we let our users know that we’ve thought through their experience, that we’re creating something that works for them at every level?

Need to Know Basis

In his book The Laws of Simplicity, John Maeda discusses the idea of trust. “The more a system knows about you,” he says, “the less you have to think.”

For many, this is the best part of the rise of AI: all those mundane, annoying little details can be offloaded to the machines, freeing us up to do more dynamic, interesting things. But letting go of control requires an enormous amount of trust, and it can be difficult to trust a thing we know nothing about (as is the case for most users of AI-infused software).

Which is why Maeda continues on, saying, “Conversely, the more you know about the system, the greater control you can exact. Thus the dilemma for the future use of any product or service is resolving the following point of balance for the user: How much do you need to know about a system? How much does the system know about you?”

It’s a problem we’re all facing with increasing regularity as AI starts to fold into even the most mundane products and services. Where do we draw the lines? And when we do draw those lines, how can we make them clear without over-burdening our users?

All in the Details

To start with, we need to view AI through a particular lens. As Eggers maintains, AI operates best as no more and no less than “a tool for us to use”; she says we should be taking a “people plus machines” approach.

She sees a future much like the one we have been envisioning and working towards at Myplanet — one where AI shoulders the mundane and tedious tasks that humans don’t need to perform and leaves us to do the more creative, more innovative, and more meaningful work. A big part of achieving that harmony will come through thoughtful design approaches.

Thoughtfulness in how we design our experiences isn’t a new concept, of course. The heart of a user-centered approach is to consider the user: their mindset, their experiences, their needs, and their wants, and to shape an experience around those understandings. That requires, by definition, a lot of careful thinking and evaluating. And as Maeda notes, this has long been a driving force in consumer products, especially for high-end products from companies like Braun, Apple, or Bang & Olufsen.

“We can only truly relax when we trust that we’re in the finest hands and are treated with the best intentions. Being able to lean back and relax often seems impossible in our competitive society. [Bang & Olufsen]’s exquisite design inspires you to lower your guard. Their extraordinary attention to detail melts fear into safety — causing you to float away in its care.”

For our users to feel as though they’re being “treated with the best intentions” requires at least a strongly implied (if not explicitly stated) high level of attention to detail. With consumer products — an iPhone or a high-end speaker, for example — a user can feel the difference in their hands. There is a tangible quality that comes with that level of design.

But that attention to detail becomes harder for users to identify when we’re dealing with software, and especially with AI. So much of the experience is hidden away behind the scenes, buried beneath lines of code that are incomprehensible to the average user. So how do we establish trust?

One way we see companies working towards this goal is through things like clear Terms of Service policies (think back to the slew of emails when GDPR came into effect) — explicit declarations of how and why your data is being used, with ways for you to opt out if you wish. But even with those options, people feel lost. (How many of those emails did you actually read? More to the point, how many of the policies did you read?) The reason is that words on the screen still feel… removed. It’s too abstract.

It’s important to have those guidelines in place, but the best way to build trust isn’t through legalese. The way we’ll successfully build trust with our users is by creating experiences that feel as detail-oriented and thought-through as a well-designed consumer product always should. We don’t have the advantage of tangible objects, but we can still make the experiences feel a cut above.

There’s an oft-referenced story from the book “In Search of Excellence” about how important it is that airplane tray tables are properly cleaned, because if the tray tables aren’t properly maintained, what does that say to passengers about the engines? Attention to detail conveys a sense of quality and trustworthiness that stating aloud “We check our engines before every flight!” simply can’t.

If our software is slow and clunky, if the interactions don’t work correctly or it’s difficult to navigate, if it doesn’t do what we need it to do, even if it just doesn’t feel right, we’ve lost user trust. That attention to detail is crucial in all design. But it becomes amplified when we’re asking users to trust us with sensitive information and a system that operates without direct input.

There is no shortage of hand-wringing and theorizing about the state of the world and how AI is ruining/saving us from ourselves. But at the end of the day, the way we’re going to build trust is twofold: 1) think through the policies and procedures around AI implementation and make sure they’re available for our users to dig into if they wish, and 2) show that we’re paying attention to the experience as a whole.

When we design experiences that offer utility and joy that reach beyond expectations, we telegraph that we’ve thought through every aspect of the product or service we’re offering. Users stop worrying that we skipped over thinking through the impact of including AI or cut corners on creating a secure implementation of it because we’re setting an example through the experience itself.

The last few years have brought a shift towards more “intelligent” design — the use of AI to power better, more intuitive experiences. We hear a lot about the need to pare down our designs, to focus on building the right thing. But that’s only part of the conversation. As Maeda notes, “Achieving clarity isn’t difficult… The true challenge is achieving comfort.” And comfort comes when we don’t just find a problem and fix it. It comes when we fix it in a way that feels natural, thought-through, and with a level of attention to detail that reassures our user that we’ve thought as much about the hidden aspects of our products as we have about the pieces they interact with.


Interested in helping to shape the future of AI implementations for thousands of people? Join us as we build smarter interfaces that re-energize workplaces. Apply today!

And thanks for reading — be sure to 👏 so others can find the article, too!