Designing for an Authentic AI

Sri dhar
10 min read · Jul 20, 2018


Mechanical Duck, built by Jacques de Vaucanson (1738, France). Source: https://commons.wikimedia.org/wiki/File:MechaDuck.png

Higher-order automation, as opposed to mechanical automation

During my stint as a co-founder and product manager at Bizosys (2009–2015), a company developing Hadoop-based products to manage large-scale data (structured, unstructured, and time-series sensor data), I had an overwhelming moment when a machine system learned from past data to predict future events. This was for a telecom service provider who wanted the ability to accurately predict communication tower failures. There were over a hundred parameters, ranging from network health to the fuel levels in power generators, to weather and national holidays. Remotely located towers could go down for days unattended. Initially we tried Weka but were unable to get prediction accuracy beyond 55% — no great business benefit at that level of reliability. We then tried a self-learning program deploying HotSAX, a window-shifting algorithm that discovers discordant (anomalous) patterns in data. The results were exciting, with accuracy in the high 90s. Suddenly, this opened up new opportunities for the telecom infrastructure team: they could plan their shifts around reliable predictions, and downtime was reduced, yielding significant, tangible business benefits.
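For the curious, here is a minimal sketch of the discord search at the heart of HotSAX, reduced to a brute-force scan over an invented toy signal. The real algorithm adds SAX-based pruning to speed up this quadratic scan, and none of this is the production code we deployed:

```python
import numpy as np

def find_discord(series, window):
    """Brute-force time-series discord search, the idea behind HotSAX.

    The discord is the window whose distance to its nearest
    non-overlapping neighbour is largest, i.e. the most anomalous
    stretch of the signal. HotSAX adds SAX-based heuristics to prune
    this O(n^2) scan, but the answer is the same.
    """
    n = len(series) - window + 1
    best_dist, best_idx = -1.0, -1
    for i in range(n):
        a = series[i:i + window]
        nearest = np.inf
        for j in range(n):
            if abs(i - j) < window:  # skip overlapping (trivial) matches
                continue
            nearest = min(nearest, np.linalg.norm(a - series[j:j + window]))
        if nearest > best_dist:
            best_dist, best_idx = nearest, i
    return best_idx, best_dist

# Toy signal: a clean sine wave with one injected glitch near index 300
t = np.linspace(0, 20 * np.pi, 1000)
signal = np.sin(t)
signal[300:310] += 2.0

idx, dist = find_discord(signal, window=20)
print(f"Most discordant window starts at {idx} (distance {dist:.2f})")
```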

This sort of reliability can otherwise only be matched by humans with tacit knowledge gained from decades of experience: say, a train driver on a Southern Pacific line who remembers record snowfalls and knows how to deal with a developing snowstorm. Where a machine falls short is in predicting with minimal training data. In our telecom experiment, for example, feeding three quarters of the data to the algorithms produced excellent predictions for the fourth quarter. Humans, on the other hand, can manage within bounded rationality. If human thought as we know it is essentially Cartesian, then our knowledge of our experiences is ultimately traceable to our knowledge of the world around us. We know that such thought leads to errors: once you operate a light switch, you expect it to work the same way elsewhere, and when it doesn't, you adapt to the situation or enquire into it. The difference is in our learning capacities and input conditions. This is evident in the comparison between Mooney images and machine-based face recognition.

A tale of two faces!

As this Smithsonian article says, "The early Greeks and Renaissance artists had birds on their brains," and there was always a quest for the robot. Vaucanson's mechanical (incontinent) duck of the 18th century was perhaps as awe-inspiring to audiences then as the AI-driven automation unfolding today. Until recently, automation was rule-based, at least in production. With the announcements of deep-learning successes, a new era is emerging.

This brings me to the premise of this story: how do we design experiences for higher-order automation instead of mundane mechanical systems? Consider an old analog temperature controller compared to a connected Nest device. How are we supposed to engage beyond its visible appearance and display controls? Cognitively, the old task was straightforward: decide, when in the room, how hot or cold the room should be, and turn the dial clockwise or anti-clockwise. With a connected device, an app can learn from your past spins of the dial and recommend, or even offer to preset, the temperature via a toast notification, sensing that you are 30 minutes away from the air-conditioning system. The device has already contributed to the larger big-data pool; analyses of consumption patterns feed utility companies' load predictions, which in turn control the sluice gates of hydroelectric dams to produce power for the consumer, who is expected to turn the AC to a comfortable 24 degrees in 30 minutes.
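A toy sketch of that anticipatory preset logic might look like the following; the history, hours, and thresholds are all invented for illustration:

```python
from collections import defaultdict
from statistics import median

# Hypothetical history of past "spins on the dial": (hour_of_day, setpoint °C)
history = [(7, 26.0), (18, 24.0), (18, 23.5), (18, 24.0), (19, 24.0)]

def learned_setpoint(history, hour):
    """Recommend the median setpoint the user has chosen at this hour before."""
    by_hour = defaultdict(list)
    for h, temp in history:
        by_hour[h].append(temp)
    past = by_hour.get(hour)
    return median(past) if past else None

def maybe_notify(minutes_away, hour):
    """Offer a preset via notification when the user is ~30 minutes out."""
    if minutes_away <= 30:
        temp = learned_setpoint(history, hour)
        if temp is not None:
            return f"You are {minutes_away} min away. Preset the AC to {temp}°C?"
    return None  # too far out, stay quiet

print(maybe_notify(28, hour=18))  # -> "You are 28 min away. Preset the AC to 24.0°C?"
```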

When you see the capabilities of advancing technology such as New Zealand-based Soul Machines, technology is not just fascinating; it resets our relationship with machines. Just as Ava has trained itself, or been helped by its creators, to mimic human expressions, would the machine be 'aware' of its learning? Could it learn to factor in the responses or expressions in a conversation and change how it smiles the next time it sees the same person — a man, woman, or child? Would it smile at the pet cat (which an overzealous robot might see both as a pet and as food) in the same manner as it would at a human? Would it spook the cat or dog with its smile, and realize "uh-oh"? The larger question: how much 'cultural learning' does the machine pick up? How would a driverless car behave in traffic in Arizona, or in Bangalore, India (where I am from)? Would the driverless car honk, as drivers do in India, for the heck of it? Is honking a cultural thing? Does the machine learn these nuances?

Creating Ava — Soul Machines

As a user experience designer trained to adopt a user-centered approach (and I do), I ask: which user center am I designing for? The user as an individual, the user as part of a community, as part of the larger ecosystem, or as a speck in the biome? Our knowledge has advanced thanks to cognitive neuroscience, driven by fMRI insights, to map human cognition better than ever before. What qualities do I care for beyond usability? What matters when it comes to the user's relationship to the ecosystem? Transcendence? Uncertainty? Can AI, with its highly scalable, high-performance processing of vast data, support humans with suggestions in these complex situations?

There are multiple degrees to the user center

New technologies have the potential to trigger these thoughts, while businesses attempt to balance growth with remaining sustainable. This is especially true of platform businesses that serve connected consumers' needs by connecting producers to them via a platform infrastructure. The UX designer needs to work closely with technologists (a point I have underscored in another story, on the "Future of UX") to determine where to anchor the user experience in a complex, interlinked, connected world.

Nir Eyal ~ “behavioral designer, at the intersection of psychology, technology, and business.”

Langdon Winner ~ “attempts to fix and humanize the internet usually reflect the same consumerism, narcissism & profit seeking that are the root of the problem”

Authenticity and free will

We want machines to learn in order to develop better products and technology (irrespective of whether it aids consumerist growth), to understand human psychology (irrespective of whether it leads to narcissistic behaviors online), or to enhance business productivity (primarily as a profit-seeking, year-on-year growth measure). AI and technology here are cast in the role of mere tools. Not the partners they ought to be.

Nir Eyal and Langdon Winner are two diverse experts I respect and am aware of as a designer — one attempting to design new behaviours, the other refusing to be naive about the responsibility to be shouldered while harnessing technology. User research and ethnography feed my creative highs when I know which interface elements to tweak above the line of visibility; being bold enough to recognize that the underlying systems may not be apolitical when deployed is a harder challenge to comprehend. More here, where Langdon Winner enquires, "Do Artifacts Have Politics?"

In the decades building up to the newer AI-based solutions, we have imagined user experiences in the same rule-based manner across collaborating experts — designers, engineers, technologists, marketers, product managers — all focusing on transactions!

The Half Full Cup — remove noisy information before analysis and design

Consider flipping this.

Gone are the days of limited computing power. Gone are the days of siloed organizations and consumers. We have come far from the days when Bill Gates reportedly proclaimed that 640K ought to be enough! Yet while technology has advanced beyond even Moore's Law, we retain those Gatesian heuristics. We look at data as having noise — incomplete data, bad data, and so on — which in the past would have crashed rigid, rule-based computer systems. Remember the blue screen of death!

Dunn/Belnap multi-valued logic

After all, what is noisy data? Is it like the proverbial weed, i.e., a plant without a benefit for human consumption? I find succour in political theory for such behaviours: specifically, Dunn/Belnap multi-valued logic. A voter in an election could be voting in multiple ways beyond the boolean for-or-against! What we refer to as bad or noisy data is likely to hold rich information: political, fuzzy, inconsistent, outlier tidbits of data, perhaps!

Not Boolean: "How the swing voter went extinct" by Alvin Chang. Source: https://www.vox.com/policy-and-politics/2016/11/4/13496688/swing-voters-dying-cartoon
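A minimal sketch of Belnap's four values makes the point: represent each datum by the classical evidence it carries (told true, told false, both, or neither), and "inconsistent" records compose instead of being thrown away. The voter framing below is my own illustration, not from Dunn or Belnap:

```python
# Belnap's four values as sets of classical evidence:
# told true, told false, told both (conflict), told nothing (silence)
T, F, BOTH, NONE = frozenset("t"), frozenset("f"), frozenset("tf"), frozenset()

def belnap_and(x, y):
    """Conjunction: told-true iff both are told-true;
    told-false iff either is told-false."""
    out = set()
    if "t" in x and "t" in y:
        out.add("t")
    if "f" in x or "f" in y:
        out.add("f")
    return frozenset(out)

# A voter record that is both "for" and "against" is not noise to discard;
# it is the BOTH value, and it composes consistently with other evidence:
print(belnap_and(BOTH, T) == BOTH)  # True: the conflict survives, visibly
print(belnap_and(NONE, F) == F)     # True: silence plus a "no" is a "no"
```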

Why not let machine learning differentiate good from bad data? What are the opportunities for technology and design? That opportunity lies in the Half Empty Cup of data that we traditionally let drop to the floor!

The Half Empty Cup — let the machine learn to tell between good vs bad/noisy data. Let AI generate anticipatory user interfaces. Think of them as A/B tests on steroids.
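As one concrete (and deliberately simple) way to do this, an unsupervised outlier model can score every record rather than discarding the messy ones up front. This sketch uses scikit-learn's IsolationForest on invented data:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
clean = rng.normal(0, 1, size=(500, 2))      # well-behaved readings
messy = rng.uniform(-6, 6, size=(25, 2))     # the records we used to drop
data = np.vstack([clean, messy])

# Score every record instead of filtering "noise" out before analysis;
# the flagged points become a signal in their own right.
model = IsolationForest(contamination=0.05, random_state=42).fit(data)
labels = model.predict(data)                 # +1 = inlier, -1 = outlier

flagged = data[labels == -1]
print(f"{len(flagged)} records set aside for a second look, not for the floor")
```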

In fact, as we move architecturally from monolithic systems to microservices-based systems, there is an opportunity to use machine learning and information-discovery automation (agents) to mash up fascinating views of information, presented within accepted aesthetic conventions and appealing to common sensibilities, as machine-generated user experiences!

The key, I believe, lies in how we decompose the functional elements, which I construct as a diagonal that slices the vertical stack comprising the system layer, the interaction layer, and the user-intent layer.

Decomposing micro-interactions to be served by underlying microservices.
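One way to read that diagonal slice in code: each micro-interaction binds a user intent to a UI component and to the microservice that feeds it, and an agent composes a view from whichever intents it predicts. All names and endpoints below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class MicroInteraction:
    """One diagonal slice through the stack: a user intent (intent layer),
    the UI element that serves it (interaction layer), and the
    microservice endpoint behind it (system layer)."""
    intent: str
    component: str
    service: str

# A hypothetical registry for a utilities app; names and endpoints invented
registry = [
    MicroInteraction("check usage",   "usage-sparkline", "/api/usage/summary"),
    MicroInteraction("pay bill",      "one-tap-pay",     "/api/billing/pay"),
    MicroInteraction("report outage", "outage-form",     "/api/tickets/create"),
]

def compose_view(predicted_intents):
    """Assemble a screen from the slices matching the intents an ML model
    predicts, rather than hard-wiring one monolithic page."""
    return [m for m in registry if m.intent in predicted_intents]

for m in compose_view({"check usage", "pay bill"}):
    print(f"render {m.component} <- {m.service}")
```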

Assuming we progress to this scenario, UX designers and engineers have the opportunity to look at data, as well as user experiences, holistically. We could redesign the five-star rating/feedback mechanism to free it from its transactional moorings.

Data-driven, AI-driven technology can lead to more wholesome, personalized user experiences, provided it makes sense of all the data

Rhetorically, one may ask: are such machine-generated experiences authentic? Can the mere mimicry of human expressions, as with Soul Machines' Ava, create lasting trust?

Pause and ask: is there something synthetic, unnatural, about such computed personalization? Is such personalization actually benevolent? Are we allowing machines to manipulate us into believing it's our free will that drives us? Is there an eerie suspicion of a manipulative entity or organization with an agenda? Is the intent behind personalization authentic, and not fake?

Designing for technology and user experiences needs to weigh in on the output of AI: how it's tuned, how it learns. AI-generated UX builds first on trust, wherein the user in some manner places trust in the data he or she unlocks. Such data is authentic since it flows from the user to the AI system. It's from that base that AI generates UX that generates delight. Even if the UX disappoints, the core trust still remains. It is an authenticity flowing from the sense that the user empowered the AI system. However, technology can only go so far. As Descartes points out, free will is "the ability to do or not do something" (Meditation IV), and "the will is by its nature so free that it can never be constrained" (Passions of the Soul, I, art. 41). But I suppose that as long as the human consumer of tech-served choices believes they are not interfering with her free will, it should be OK.

I choose to do or not do something — is there a tilt? Is the salt enough?

A light human touch makes a thing personal. Authenticity is further cemented by a deft user touch, a tweak to personalize. When untouched by the user, it is incomplete, impersonal, and does not empower human free will. The role of UX for AI is a little like the light touch one gives to set right a tilted painting, or that little dash of extra salt in a dish! Such actions make it a signature, something very personal: an expression of human free will.

Design will stay relevant by celebrating that need: free will. UX designers recognize that and incorporate it, irrespective of the process used to discover it. Assume AI builds on trust where possible, to learn and generate delightful UX. Assume the UX is authentic because it allowed the user to configure or change it. Even if the human finds it authentic, does the machine know? Algorithms that interpret this and feed it back, representing it as new learning, will be key for scale. UX design needs to train ML for such representational feedback.

Error handling in AI-driven systems, if such a thing is possible with automation

Lastly, as a design practitioner in the big data space, the other aspect of AI, besides authenticity, that I feel UX designers should focus on is error handling. If processing for choice using multi-valued logic allows the automation of user interfaces, then we need to similarly diversify the system's responses and the feedback to and from users. An error such as 404 Page Not Found is a binary verdict; in our AI-driven world, there is room for error that needs to be flagged in shades. User interface designers and information architects need to devise fresh UI approaches to flag the false positives and false negatives that an AI-based system may throw up. This will require the user experience to elicit users' critical thinking, so that they become aware of issues and flag them.

How can UX incorporate behavioural cues that trigger critical thinking — to detect errors and act to prevent or flag them? Immensely useful in driverless-car ecosystems and fake-news publishing
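A small sketch of what that could mean in an interface: surface the model's confidence, hedge low-confidence output, and give the user a one-tap way to flag a false positive or false negative that is routed back as a training signal. The Prediction shape, threshold, and wording below are all invented:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Prediction:
    """A model output surfaced to the user, with room to disagree."""
    label: str
    confidence: float
    user_flag: Optional[str] = None  # "false_positive" or "false_negative"

def render(pred):
    """Hedge low-confidence output and invite the user's critical eye,
    instead of asserting certainty the way a binary 404 does."""
    if pred.confidence < 0.7:
        return (f"Possibly: {pred.label} ({pred.confidence:.0%} sure). "
                "Does this look right? [yes] [flag]")
    return f"{pred.label} ({pred.confidence:.0%})"

def flag(pred, kind):
    """Route the user's correction back as a labelled example;
    in a real system this would be queued for retraining."""
    pred.user_flag = kind
    return pred

p = Prediction("tower failure within 6 hours", 0.58)
print(render(p))   # the UI hedges and asks, rather than asserts
flag(p, "false_positive")
```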

These two, the authenticity of AI-generated UX and error handling in unsupervised ML systems, and how UX designers address them, will bridge what I call the last-mile delivery of UX. They will be the pivots in UX for AI: less visual and more cerebral!
