The second edition of a more mature UX.Live conference featured some fantastic speakers over the course of three days.
This blogpost summarises my notes and thoughts concerning Design practice. It comments on talks by Noel Lyons (@noel_lyons), Morgane Peng (https://medium.com/@morganepeng), David Attwater (Enterprise Integration Group), Richard Banks (@rbanks).
Make sure to check out my other article about User Research at UX.Live.
People often ask about the ROI of Design — treating it as a delivery function only. Building on a talk by Noel Lyons (Barclays) and my own experience, I would argue that design contributes in several different ways:
- Empathising to answer the question of what problem to solve — and why
- Exploring a breadth of ideas to find a more appropriate solution
- Illustration and vision building
- Collaboration, sharing of ideas and knowledge
In a fun talk, Morgane Peng shared ideas on how to deal with stakeholders that don’t get Design. Her simple but crucial steps to a seat at the table:
- Learn the business — and enough domain expertise to really contribute to discussions
- Be an ally, not a resource — and align to business strategy
- Be part of the solution — solve business challenges
Design for Interaction with Intelligent Systems
Various talks touched on Design for/with AI or new technology.
David Attwater presented on the design and development of voice interaction. It brings a number of specific challenges that are also relevant to interaction with intelligent actors more generally.
Lack of a Metaphor
Voice interfaces provide no visual metaphor at all — and we tend to underestimate interaction with intelligent systems, for example by approaching them through keyword-based search. David brilliantly summarised Theory of Mind with a simple illustration.
Theory of Mind
As individuals, we constantly visualise our position in our environment (physical and social). We perceive our surroundings and their constraints on our actions.
When interacting with a cognitive machine, we assume it behaves in a similar way, namely that it is aware of the shared environment. We not only wonder what it can do, what it is able to understand and remember — but also what it thinks we might know. Conversely, we expect it to have some understanding and model of what we want, prefer and have done previously.
During interaction through language we try to align with our conversational partner in various ways:
Lexical alignment: use of the same words/terminology
Grammatical alignment: use of similar grammatical structures, for example:
- “What time does your shop close?” — “Five o’clock”
- “At what time does your shop close?” — “At five o’clock”
- “He gave the book to her” — “She gave it back to him”
- “The book was given to Helen” — “Then it was given back”
- “Go one across then three down” — “I’m at one up and four across”
- “Go to box A4” — “I’m at B6 currently”
Guidelines for Human-AI-Interaction
Richard Banks spoke about designing for and with AI. Data-driven solutions often break key usability principles: personalised search, for example, breaks the expectation of a search experience that is “consistent” across users and time. Different users will get different results — as will the same user at different times.
It is difficult to understand the limitations of, and errors made by, any intelligent system. Explanations should give an indication of these constraints — and carefully reflect the granularity that can be expected from the underlying model.
Microsoft published guidelines that can be used to probe and steer design efforts. For each guideline there are obviously numerous possible solutions — depending on the system and context.
Why are there all these light bulbs and pens sprinkled through the post? Design is all about ideas — find the full set of (beautiful) icons here: Web & Seo Glyphs Icons Collection | Noun Project