To Be Designed Conference Notes

The Ethics of AI by Cennydd Bowles

What are the ethical priorities for things like smart cars? MIT tried to crowdsource answers. Mercedes will prioritize the driver and passengers over pedestrians. Others have said people who buy SUVs are already choosing to hurt others.

At level 1 automation, human drivers need 1.9–3 seconds to react to a handover from the automated system. Emergency transitions can cause the human to freeze, which makes this level the highest risk.

What are the limits of automation? In Dallas recently, the police killed a man with a robot.

Were there no other options, though? Will nations be more likely to start wars, knowing they are less likely to hurt their own citizens by sending robots rather than soldiers into battle?

Disruption has a history of ruining an industry before it replaces it with a viable option. It's said that 30+% of industry will be replaced in the coming decade, which will cause serious upheaval, and that 47% of current US jobs are at risk in the next two decades (Carl Benedikt Frey and Michael Osborne, The Future of Employment).

We have a duty to behave in a certain way

A data-driven company tends to think of people as subjects over time and thus tends to push the limits of ethics.

There’s a tendency to believe that if you’re not paying for a service, then you are the product. Does the project I take on define me? The work you turn down also defines you, too. Accessibility should be something that says “I’ll treat every human like they deserve.”

Culture will dictate values. We need to work at convincing our senior staff on ethics; you may need to build a business case for it (revenue, cost, risk). Build an ethical infrastructure: it must be part of the culture. (Book: Design for Real Life.) Appoint a designated, constructive dissenter. Diversity is an ethical early-warning system. Research has ethical benefits: it focuses on the humanity of the system, and product teams are often too insular. Policies and docs (whistleblowing and the like) can influence culture, but culture mainly comes from the top.

As designers we’re responsible for the ethics of technology. Is it right that technologists can route around governments and their ethics? Disregarding the politics of our work is a political act in and of itself.

There are some countries that are completely unaware of the changes that are coming due to tech. No one yet agrees on how to handle this. It helps to have an understanding of moral psychology and philosophy.

— — — — — — — — —

Crafting Holistic Experiences with Sound, Light and Motion — Carla Diana

Products will be more social than ever. How do we combine these gestures and design accordingly? Some robots can learn through human conversation, and design has always been about a conversation. Sound, light, and motion are becoming part of our conversations within design.

Apple designed the Mac’s “breathing” light to mimic human breathing. There are subtle things we can do that refer to subconscious behaviors we’re familiar with. We’re going to see more light projected onto surfaces and the use of LEDs in subtle interactions. Tactile feedback (cold for cold weather), smell, light, touch, sound, “breaths of air”: a lot more exploration for designers.

When starting to design for smart devices, it’s important to map out the ecosystem (we’re all interconnected these days):

And storyboards:

We can’t just look at the product in isolation; we need to look at the situations that can unfold over time.

Chart the ergonomics

Think about how users interact with the device. How do they know if the robot vacuum is dirty?

Using sound, light, and motion tends to give these products a personality, which is much more powerful for user interaction.

What are the critical messages that need to be communicated? Break down how the robot moves, what it does… Does it emit sound?

— — — — — — — — — —

Designing Conversations — Giles Colbourne

Privacy is important. The preferred place (for AI assistants) is at home.

Figure out how your bot can gather data to make guesses at what the user wants. ResistBot

Think about the formats of your answer. Think about the time pressure, error rate, shared knowledge.

Think through the varying options a bot might think it hears.

Using intimate language makes the bot feel more personal. Consider the user’s theory of mind and expectations.

This bot responds similarly to R2-D2. Going smart and sassy is the default, but most bots aren’t actually smart and sassy the way humans are.

Humans rely on facial expressions to understand behavior. Think about the social and emotional intelligence of bots. Clippy was patronizing and stupid to most users; that’s why we hated him. Recommenders would often suggest condoms alongside cucumbers, because the bot is going after the data, not the human meaning. Facebook has this problem as well: it’ll suggest memories that aren’t always happy memories you want to see again. However, it can quite wisely suggest someone is about to commit suicide. We need to not just rely on the data but think further about the human meaning.

Feelings are individual and unique, and they come and go. If you don’t deal with the emotional ramifications of someone’s day within your product, it’ll fail. Just listen to the user and then ask them what they’d like you to do about it; people just want to be heard. Be upfront with people and let them know they are dealing with a bot, and let the user control when they need to talk to a human. AI and deep learning are inevitable because we are limited in how much data we can keep in our heads. Things are more service-designed than siloed.

2m+ cumulative reviews of the report today

Design was invented to create more economic impact.

— — — — — — — — — — —

The UX of Predictive Behavior for the IoT — Mike Kuniavsky

Service is strategy, tech is the product

With IoT, the experience isn’t just in buying the product but in the service after buying it.

Amazon: We want you loyal to our service.

Just connecting to the internet doesn’t make something valuable.

Business models: virtual ownership (Uber, Lyft, Airbnb, etc.) and consumable renewables (Brita, games, etc.). Amazon Dash buttons give an opportunity to mine people’s intent; companies can preemptively sell you things before you even want to buy them. Predictive behavior enabled by machine learning is a value prop in most IoT devices: they learn from your behavior and predict what you need and when.
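The “predict what you need and when” idea can be sketched very simply: estimate the next reorder date for a consumable from the average gap between past purchases. This is a minimal illustration, not any vendor’s actual algorithm; the purchase history and the `predict_next_purchase` helper are invented for the example.

```python
from datetime import date, timedelta
from statistics import mean

def predict_next_purchase(purchase_dates):
    """Predict the next purchase date from a sorted list of past dates.

    Uses the mean gap between consecutive purchases; a real system
    would weigh recency, seasonality, and usage signals.
    """
    gaps = [(b - a).days for a, b in zip(purchase_dates, purchase_dates[1:])]
    avg_gap = mean(gaps)
    return purchase_dates[-1] + timedelta(days=round(avg_gap))

# Invented history: e.g. water-filter purchases roughly monthly.
history = [date(2017, 1, 5), date(2017, 2, 4), date(2017, 3, 7)]
print(predict_next_purchase(history))  # 2017-04-06
```

A device or service could fire the “reorder?” prompt a few days before the predicted date, which is exactly the preemptive-selling opportunity described above.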

Computers are getting better at recognizing objects and, now, the content of images.

This is why Facebook asks you about your friends in photos and wants you to tag people: it trains their face-recognition feature.

Knowing how devices behave matters. For all the times we get things right, it’s actually the times we get it wrong that create cognitive load and inform the user about the experience. How do you design for uncertainty? Predictability is very valuable, even in something that is flawed.

Without a clear story you don’t know what decision you need to make.

Human behavior in conversation is often filled with interruptions, misunderstandings, and “repair”. How does a machine handle these situations?
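One common way a machine handles misunderstanding is a “repair” turn: when the recognizer’s best guess is low-confidence, surface the guess and ask, instead of acting on it. This is a hedged sketch of that pattern; the `respond` function, intents, and confidence threshold are all invented for illustration.

```python
def respond(hypotheses, threshold=0.7):
    """Pick a reply given recognizer hypotheses: a list of
    (intent, confidence) pairs sorted best-first.

    High confidence -> act; low confidence -> repair turn;
    nothing heard -> ask for a repeat.
    """
    if not hypotheses:
        return "Sorry, I didn't catch that. Could you repeat it?"
    intent, confidence = hypotheses[0]
    if confidence >= threshold:
        return f"OK, doing: {intent}"
    # Repair: show the best guess so the user can confirm or correct it.
    return f"Did you mean '{intent}'?"

print(respond([("play music", 0.92)]))                  # OK, doing: play music
print(respond([("play music", 0.41), ("pay", 0.39)]))   # Did you mean 'play music'?
print(respond([]))                                      # asks for a repeat
```

The design point is that the repair turn hands control back to the human rather than guessing silently, which matches the advice above about listening and letting users correct the machine.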

How do you encourage people to give up data? Google teaches you how to speak to it better.

Users need to know what the expectations are.

It’s important for people to be able to say no.

One app kept asking “Is this you?” until he eventually turned it off.

We need to figure out what these devices can do for us. It takes 6 months for machine learning to learn our behaviors.

Technology asks for more than it gives right now. We need to offer more value to the users.

Stay connected with what’s happening for the user around them on a moment-to-moment, day-to-day basis.

The Buddha walks into a bar.

There’s a trend away from connectedness with devices and towards human connections.

Make something the world needs

These devices are given agency by us to do things on our behalf; thus, agentive tech.

ShotSpotter sprinkles microphones around a neighborhood and can then detect where a gunshot came from, helping police get to the location within minutes. Propero helps farmers.

Typical interaction design vs Agentive

Once a culture commits to a technology, it begins to change things.

We don’t have general AI yet. Tech hasn’t gotten to the point where it can review every scenario, the way Tic-Tac-Toe can be searched exhaustively, and just stop playing because it knows it can’t win.
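The “review all scenarios” point is literal for a game as small as Tic-Tac-Toe: a full minimax search visits every reachable position and proves that perfect play from the empty board is a draw. A minimal sketch (board encoding and function names are my own, not from the talk):

```python
from functools import lru_cache

# All eight winning lines on a 3x3 board indexed 0..8.
WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
        (0, 3, 6), (1, 4, 7), (2, 5, 8),
        (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    for a, b, c in WINS:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def minimax(board, player):
    """Value of the position for X under perfect play:
    +1 X wins, 0 draw, -1 O wins. `board` is a 9-tuple of 'X'/'O'/' '."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    if ' ' not in board:
        return 0  # board full, no winner: draw
    values = []
    for i, cell in enumerate(board):
        if cell == ' ':
            nxt = list(board)
            nxt[i] = player
            values.append(minimax(tuple(nxt), 'O' if player == 'X' else 'X'))
    # X maximizes the value, O minimizes it.
    return max(values) if player == 'X' else min(values)

print(minimax((' ',) * 9, 'X'))  # 0: perfect play from the start is a draw
```

Narrow AI can do this because the state space is tiny; the talk’s point is that general intelligence would need the same kind of foresight in open-ended domains, where exhaustive search is impossible.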

Automatic — little constrained UX

Narrow AI is good within its knowledge base. It can automatically do things.

Assistive — can do the job and create connection (a robotic doctor, but maybe not a nurse). Physiology, skills, art.

Agentive — goes and does it for us
