Where are the Humans in AI?

Spring 2018

  • How does the creative interaction between people and AI fit into the broader scope?
  • What would that environment look like? (not just optimizing or automating tasks)

Intelligence emerges from context. Are there critical moments where it could explain itself? How much are trust and context factors (finding a piece of bread on the ground)? Building relationships with our technology. The perception of things and how they should enable humans instead of replacing them. Specific small functions compared to complex and diverse functions.

Jason Salavon Lecture Notes

  • Mondrian and Cézanne — the things around a tree as variables, a proxy translation of tree-ness
  • A site for formal manipulation
  • Want to make shape — but the structure can give meaning to the situation
  • 100+ deep learning — GAN can’t learn — visualizing the learning process (as an animation)
  • TV and film data sets are easy to get, millions of frames, broad interrogation of the data set (neural network)
  • yGan with fluid simulation
  • Data and generativity — the way they merge as a data visualization
  • With common parameters, how can it translate into something recognizable?
  • The archetype of top films — every shot is averaged to one color (see the color-averaging sketch after this list)
  • High degree of abstraction that still shows meaning — through color relationships and pacing
  • A way of analyzing and critiquing film? Practical data visualization?
  • Data is just real things happening in space in time — inherent shape that is given
  • Whether or not you’re thinking about it
  • “Mannered in use of it”
  • The striation is the information
  • Is my selection still influencing the outcome?
  • How fast a data set can converge with something
  • The ideal of the mean (which doesn’t actually exist in the data) v. the median, which is grounded more in actual reality
  • Data’s innate malleability
  • Proper tension with the environment
  • Latent culture
  • Translation of shots from movies — a simple loss on a full-frame reconstruction is not enough (a non-existent frame in the data set)
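The shot-averaging idea above is simple enough to sketch. Here is a minimal, hypothetical version in Python (using OpenCV, which is an assumption on my part): it collapses a film into one mean color per sampled frame. Salavon's actual pipeline would segment the film into shots first and average within each shot, so treat this as an approximation of the technique, not his method.

```python
import cv2
import numpy as np

def film_to_colors(video_path, sample_every=24):
    """Collapse a film into a strip of average colors, one per sampled frame.
    A stand-in for per-shot averaging: a real version would detect shot
    boundaries first and average within each shot."""
    cap = cv2.VideoCapture(video_path)
    colors, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % sample_every == 0:
            # Mean over all pixels leaves a single BGR color for this frame
            colors.append(frame.reshape(-1, 3).mean(axis=0))
        idx += 1
    cap.release()
    return np.array(colors)  # shape: (n_samples, 3)
```

Plotted as a horizontal strip, the color relationships and pacing mentioned above become visible at a glance.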

Would this inherent shape be equated to an objective representation of the data? Would the curation of the data, the choice of variables, and then the interpretation of form still be considered objective? Jason is dancing along the line of what objectivity means.

Questions:

  • Where are the eyes in AI?
  • Could AI act as an emotional connection?
  • Can it give people control and empowerment?
  • Can AI learn a noise language, instead of just words? — soundscape
  • WWDC talk on sound design

AI — things that respond, but do they learn? Is it just different forms of learning? Expert systems — or does it just appear intelligent? What happens when the system fails?

Grey Walter and his mechanical tortoises, Elmer and Elsie

Could you simulate aspects of human intelligence in simple ways? “It’s hungry, it wants some light.” It’s not even a computer, but it exhibits a form of consciousness.

Vehicles — Braitenberg drew a series of simple vehicles that appear to exhibit human emotions. It’s a form of AI, but is this intelligent?
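Braitenberg's point is easy to demonstrate in code. Here is a minimal sketch of my own (not from the lecture): crossed sensor-to-motor wiring makes a two-wheeled vehicle turn toward a light source, and observers read the behavior as an emotion (Braitenberg's "aggression" vehicle, 2b).

```python
import math

def light_at(sx, sy, light):
    """Light intensity falls off with squared distance from the source."""
    d2 = (sx - light[0]) ** 2 + (sy - light[1]) ** 2
    return 1.0 / (1.0 + d2)

def step(x, y, heading, light, dt=0.1):
    """One step of a crossed-wiring vehicle: each sensor drives the opposite
    wheel, so the brighter side pulls the vehicle toward the light.
    No model, no goal, yet it looks like it 'wants' the light."""
    lx, ly = x + math.cos(heading + 0.5), y + math.sin(heading + 0.5)  # left sensor
    rx, ry = x + math.cos(heading - 0.5), y + math.sin(heading - 0.5)  # right sensor
    left_wheel = light_at(rx, ry, light)    # crossed: right sensor -> left wheel
    right_wheel = light_at(lx, ly, light)   # crossed: left sensor -> right wheel
    speed = (left_wheel + right_wheel) / 2
    heading += (right_wheel - left_wheel) * 5 * dt  # a faster right wheel turns it left
    return (x + speed * math.cos(heading) * dt,
            y + speed * math.sin(heading) * dt,
            heading)

# Watch it home in on the light over time
x, y, h = 0.0, 0.0, 0.0
for _ in range(2000):
    x, y, h = step(x, y, h, light=(3.0, 4.0))
```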

How is AI categorized? Is it by what it does? What is the technology behind it?

Alan Turing — “if a human can’t tell which is which, it is an intelligent machine.” The Logic Theorist (Newell & Simon) — adapt, be malleable? Autonomy? A rational model of intelligence, but other cultures weight spatial intelligence more (Australia) — based on intuition — when will this water tip over? Emotional intelligence?

Intelligent about what?

Bot Poetics — An AI as a Horcrux — what part of yourself can you encapsulate? What part of yourself would you send into the future?

Presentation: Human Infrastructures of AI

Madeleine Elish

  • Intelligence and autonomy initiative
  • Socio-technical system — science, tech and literature embedded together
  • Data & Society — non-profit research institute
  • Books: The Four Mirrors of War, The Counselor, Override, The Winter of Our Discontent
  • Future forum: sci-fi can ground discussions in AI
  • Future perfect — literal infrastructures and speculative fiction

There is no such thing as technological autonomy

The History of Autopilots — “my Roomba just went off” — what kinds of changes arise around responsibility and liability?

Court Cases and Accidents: a counterintuitive pattern

As the complexity of the systems advanced, and autopilot systems controlled more and more of the flight, the responsibility of the pilot remained the same. The operator was like a sponge, soaking up liability.

Ex. the 2009 Airbus crash — blamed on human operator error, not on the disempowerment that changed the nature and expectations of the pilots’ job. Humans are really bad at taking over control at times of high stress. “The handoff problem” — the moral crumple zone: a way to talk about how autonomous systems deflect accountability. It protects the actual system and scapegoats the human, with illogical expectations placed on that human. Systems were described as perfect — “preventing any dumb moves by the pilot”. Pilots lose their skills because they are not flying as much.

“A human driver is meant to always have their hands on the wheel” — a moral problem v. Google’s car with no steering wheel. Perceptions of the technologies’ success are not matching up with reality. In a recent crash the driver’s head was down and looking away — blame falls on the driver, not the technology.

AI is not magic — the metaphors we use

Dig deeper into the social perceptions of technology and how they inform the way use unfolds. Technology is treated as faultless; people overestimate the capacity of machines and underestimate that of humans. The definitional slipperiness of AI: it is the scariest and most universal term. There are still opportunities to shape that term even more. IBM — a commercial on a support group and the personification of robots. Humans love to see agency in the animation of robots.

Magic: an echo chamber of prototypical features without the messy reality of deployment. Magic without understanding the techniques, practice, and background. Magic hides human infrastructure — the people involved in training data sets, the curators, the maintenance — dangerous because the term obscures the resources that make it possible.

Always ask who is being made the hero. Does it come at the expense of someone else? Empowering the user by calling out some components can make other areas invisible.

Tactics of Intelligence — the way machine intelligence is undermined

Madeleine’s perspective: machine intelligence is not all it’s cracked up to be. The perceptions and realities of technology don’t always match. Technologies can really govern and change society. What are the ways that bias creeps into data sets? Ex. facial recognition. The system designer wants it to perform well. Adversarial machine learning: hackers can undermine systems in intentional ways (Ian Goodfellow, OpenAI). A stop sign could be interpreted in multiple ways.
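The stop-sign point can be made concrete with the Fast Gradient Sign Method, the attack Goodfellow helped popularize. Below is a toy sketch against a plain logistic-regression classifier; the real stop-sign work targeted deep vision models, so this illustrates the principle, not that experiment.

```python
import numpy as np

def fgsm(x, w, b, y_true, eps):
    """Fast Gradient Sign Method: move every input feature a small step
    in the direction that increases the classifier's loss."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))  # model's predicted probability
    grad_x = (p - y_true) * w               # gradient of cross-entropy loss w.r.t. x
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w, b = rng.normal(size=256), 0.0          # a toy linear 'sign classifier'
x = 0.05 * w                              # an input classified 'stop' with high confidence
adv = fgsm(x, w, b, y_true=1.0, eps=0.1)  # small per-feature nudges
for v in (x, adv):
    print(1.0 / (1.0 + np.exp(-(v @ w + b))))  # confidence collapses on the nudged input
```

The perturbation is small per feature, but because every feature is nudged in the most damaging direction at once, the prediction flips.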

The Practice of Everyday Life — the difference between tactics and strategies — what would that look like in AI? Adam Harvey — tricking facial recognition systems.

Unfit Bits — (Surya Mattu, Tega Brain) Insurance systems may be using fitness data. What’s to stop you from tricking the Fitbit? Seamful design instead of seamless. Re-appropriate and critique new technologies.

Questions

  • There’s a difference between your relationships with humans and with machines — AI as a system of tools; any thoughts on the interactions we make with AI versus with people?

Inanimate (non-human) objects playing important roles is not new. Distinct from ours — there are different kinds of agency that non-human objects can have (ex. wind: you can perceive that it caused something to happen). Not all actors have equal kinds of agency (apples and oranges). How will this change our larger set of relationships? Positioning Google Assistant as a slave (clean it, do it)… Should we be polite to robots?

  • The filter bubble — What are the limitations of AI’s understanding of emotions/identity?

The double-edged sword of personalization. Media manipulation and siloing can create a filter bubble where you are only around people who think like you. A huge danger of cutting ourselves off from the world. An agent might represent you somehow (Neon Genesis??). You are oversimplified; systems are imperfect and will fill in the gaps (Amy is manually curated; CrowdFlower — human-in-the-loop AI). Always just one step away — what are the potential actions that can be taken by an actor that is imperfect — a socio-technical company system. In the past, people have always extended the senses of the self (William James — the self as all that a man can call his own; women were once counted that way — a historical reflection).

  • What are the shifting definitions of AI?

“The Defecating Duck, or, the Ambiguous Origins of Artificial Life” — it had an internal system that allowed it to take in corn and poop it out. Historical ways people think about artificial life — what does it mean to be alive? What constitutes intelligence? How does that change historically and culturally? The recent explosion is distinct compared to the 50s: simulating logic and abstract representations of language (a rejection of Skinner) v. the recent rise of machine learning (a behavior-based view of what constitutes intelligence). It doesn’t matter what the thinking is, only the matching of patterns. Facial recognition — it doesn’t know what a face is, just a pattern of pixels that satisfies the definition of the task.

Are these things only optimizing a particular task, or could they be a general intelligence? If we assume it’s about meaning, and the systems making decisions don’t understand the meaning, will we then add meaning into things?

Gary Marcus — SSRN — types of intelligence. What are the mismatches between capabilities and expectations? When a social meaning is not represented adequately by a system, how do we deploy that system? Is our life defined by meaning?

  • Our ego is reflected in the AI’s behavior — how would you go about identifying bias, and helping people accept that these are not flawless systems?

Seamful design — making the seams part of the design process and the experience of the product. Scroll loading — thinking about the interface in a non-magical way. How much do you reveal behind the scenes? The PAIR initiative — a Google Brain group that focuses on visualizations. A design-and-explanation perspective. Alternative interfaces in the Facebook news feed. Can it just be thought of as layers — I should be able to see below the surface? Machine learning and deep learning in a knowledge presentation and explainability sense. Explainability v. transparency. Would this go into the education system?

  • How do people create models of AI?

A model that describes how something works: the remote is talking to the TV. What are the different modes of explainability? Metaphors and phrases are really useful; there is huge power in establishing shared languages and mental models. This process should be very careful — it could lead to deeply flawed interpretations of technology. Data: bodily and digestive terms. This feeds into a complete anthropomorphism — that AI is akin to humans (even though it is not). Pattern recognition v. learning. These words carry other contexts into the definition.

The cloud: it is a set of huge, energy-consuming data warehouses in rural places. There is no sense of the physicality of data — it’s not stored in the sky. It’s owned, and it’s a bunch of silicon and fiber-optic cables. The cloud is a reasonable model of how the system behaves, but a corrosive idea — a natural, given thing without ownership that completely erases the implications for climate change. It takes huge amounts of energy and material resources. Machine learning is maybe a way to problematize the ways we use models to facilitate good things. The way things are framed (superintelligence) affects how they get funded or interpreted. People initially make exciting and persuasive metaphors, but have not necessarily thought through the baggage. This matters in the law — intellectual property rights — the cyber world as a place. It really influenced litigation around the technology.

  • How do we move away from the corporate context of AI?

New Thoughts

What if AI could take on multiple identities? As a mother, a woman, a sister — can it learn what these roles mean if it has never experienced them? Would you help to teach the AI about these experiences? What if you surrounded the AI, instead of it surrounding you as an exoskeleton?

AI and Ethics

David Danks and Emily LaRosa

Ethics: human values, human interests — the thing you are doing every day, all the time. Economic impacts can be ethical as well; they can affect your ability to realize your goals and values. Policy and regulation inform social norms. How can you shape the broader social systems to try and achieve those values?

Ex. AI and Medical Diagnosis

  • What is the impact of this kind of system?
  • How many people would go out of business?
  • How many more people could this help?
  • What is the impact on the relationship between patient and doctor?
  • What happens when the doctor becomes a middleman with the AI?
  • What are the psychological harms to doctors (socially and individually)?
  • Multi-tiered systems that emerge in relation to technology

Cost-benefit analysis as a priority and a selling point. What is it that people actually want, not what they think the developer wants? Language use — ‘public perception’ and ‘social implications’ v. ethics. What are the relationships around trust? Learn how to recognize when a decision is beyond your training.

How do human-like AIs affect our social interactions?

  • What is it that makes us human?
  • What will matter is AI in particular uses

How do we treat AI as they become more human-like?

  • AI is not as smart as we think it is; its capabilities are very limited
  • Does this connect with animal rights and those debates?

Even if its performance is not very good, if the presentation alludes to a different skillset, that partially explains to people what its capabilities are. Be aware of the specific morals and cultural norms. Is there a whole new set of interactions with technology?

Humans are very good at taking small behaviors and making large inferences about other people. Eye contact is a powerful tool that develops relationships and understanding. Unfortunately, you had better get it right, or the intention may create the opposite effect. There are some cases where it is better to make the system less human-like. Theory of mind — assumptions, and anger as a result of inability.

My theory of your mind, and theory of mind in general — human behavior is generated by systems of goals and knowledge. Anthropomorphizing a dog —

  1. Making wrong inferences about how it’s feeling (smiling v. stressed out), how do you take empirical evidence to figure out what’s going on in the mind
  2. It can be structured differently — “stressed out is maybe the wrong word” — so language matters

AIs process the world in deeply different ways than we do. Viewing the world as cars and people v. a different conceptual picture. Passive hacking — and our interpretation of how it’s processing the world.

  • What’s a good metaphor to equate an AI to? You being similar to me, and me observing your behavior. Using analogies.

Automated v. autonomous. ‘Thinking’; ‘data sets, points’. It’s not a decision completely out of the blue — they do make inferences based on the data sets that we give them. We talk about automated, not autonomous (the best that’s out there), all the time. What can it do (deep learning, neural nets)?

A goldfish, or a tapeworm? It has a programmed function — a mosquito v. a needle — and responds reflexively to the environment. The metaphor might be very different from what’s going on underneath.

Can AI be useful when switching contexts? The human ability to recognize context outpaces AI. Right now it exists in a space where humans decide if it is in a good place in the system.

Trust development

Legitimization comes from other people, and this affects your trust in whoever is pushing the product. Repeated interaction is a way to build trust. Organizational behavior — trust is multi-dimensional: predictability (repeated interaction) (…wife v. car), and why or how systems come to behave (understanding trust — what do you value, what do you believe). Do people adapt to the tech, or does the tech adapt? Sometimes AI also makes correlations that may not be true when given really wide data sets.

__________________________________________________

Cathy O’Neil: Weapons of Math Destruction

Rise of the Robotariat

elarosa@andrew.cmu.edu

Concept 01

How can AI develop an emotional literacy (body language, the limitations of what is said v. implied, different forms of communication) — an extension of the self, empowerment, and extension of capabilities? Because it thinks differently, how can that reveal the limitations around how it understands identity?

Teaching AI about Multi-identity, might have a two-way bias.

The goal is to investigate question and help people understand. Who would benefit from this, who wouldn’t? Would you want everyone to have this? Do you need a skillset to use this, education, certain background?

  • Emotional Communication of Artificial Intelligence
  • I’m exploring emotional artificial intelligence and how it can learn unique communication, teach others, and empower people to take control of their environment
  • The final form of the project might be a simulation of what the product would be like
  • I would love to see examples of the different physical forms of artificial intelligence
  • I am interested in discussing the limitations of AI’s understanding of emotions

Inspiration

Shots from the show Neon Genesis Evangelion — visualizations of the internal workings of human to technology connection (Eva). They show the chemical communication between the two entities. The two sides end up in a symbiotic relationship where one can only exist and connect with the other (other people that try and connect with the specific Eva are rejected).

Thought Process

Issues: I want to veer away from emotional intelligence and focus more on exposing the relationship and dynamic between the person and the AI. There is still a limitation and a disconnect of understanding between the inner workings of a human v. an AI, and I think this will limit the communication and translation idea in Concept 01. I may try to find a way to expose this limitation.

Conference on Ethics and AI

AI in the Open World

Intro: Herb Simon — people and their artifacts are simple systems in complex environments. How do you transition technologies from the lab to the open world? Societal impacts and policies — how does the system know how to work with people? Neural networks — people learned how to input large amounts of data to get things like speech recognition and object recognition. Currently, there are widely available tools that can recognize gender and basic emotions, and opportunities to have big influences in the world.

“I believe we have an ethical imperative to apply AI where we can to help save lives.” Ex. medical error is the third most common cause of death in the USA. Can AI act as the safety net under a bridge? The model can surface what an expert may not notice: a surprise, blind spots, biases, and gaps. How does machine intellect work to extend our abilities — complementarity.

Rough Edges and Critical Problems:

  • Capabilities. Blind spots and biases in algorithms — how can we work to minimize their impact?
  • Values. Agency and alignment — who is the principal agent of that decision, and whose life is at stake?
  • Misuse. Human rights; risk of death and serious injury; legal, ethical, and privacy issues; denial of consequences; persuasion; manipulation of attention, beliefs, and cognition

“We were building one-off systems. We could talk to the people who would buy them about the upsides and downsides.” Open-world challenges: replicable, re-usable, internet-scale — nuances that have wide and deep societal influences.

Capabilities and Trust

Can we give systems the ability to know what they do and don’t know (from unconsciously incompetent to consciously incompetent)? Reliable predictions of performance — describing the car as having “confidence”. How can we learn what the car does not know? Would predicting the car’s confidence still have more blind spots? Issue: how can a system that learned entirely from a training set continue learning in practice?
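One concrete reading of “consciously incompetent” is a reject option: the model acts only when its confidence clears a bar and hands off otherwise. A minimal sketch (raw softmax confidence is a weak proxy for real uncertainty; calibration or ensembles would be needed in practice):

```python
import numpy as np

def predict_or_defer(probs, threshold=0.90):
    """Act only when the top-class probability clears a threshold;
    otherwise defer to a human. The point is not accuracy but knowing
    when the system is outside its competence."""
    confidence = probs.max(axis=1)
    labels = probs.argmax(axis=1)
    return [int(l) if c >= threshold else "defer"
            for l, c in zip(labels, confidence)]

probs = np.array([[0.97, 0.02, 0.01],   # confident: predict class 0
                  [0.40, 0.35, 0.25]])  # uncertain: hand off to a human
print(predict_or_defer(probs))          # [0, 'defer']
```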

Practices and Designs: Phased trials? Required studies and standards on reporting. Disclosure of risks and failures; failsafe designs.

Bias and Fairness

Emotion detection — the emotion classifier at Microsoft has failures with children (bias and performance issues). How can we address biases in our services?

Values and Agency

Pattern recognizers (ex. recent data in a medical case — what you are most likely to see in the world). A true positive rate and a false positive rate. If you operate at a given threshold, the classifier can tell you that 3/4 of the time it is correct. For a cardiac patient v. a spam filter, that threshold carries very different stakes; this performance significantly affects the implications for the end user. Most ML takes data sets and gives you a probability. We need to do more full decision analysis. “The golden data-decisions pipeline.” It’s not just automation from the output of one of these systems: data — predictions — decisions. You can’t just ship a classifier, use it in the world, and expect it to work for everyone. In any AI system — whose utility function? Who inspected it? Who did we reveal it to?
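The cardiac-patient v. spam-filter contrast is exactly where decision analysis enters: the operating threshold should fall out of the costs of each error type, not ship at a default 0.5. A hypothetical sketch:

```python
import numpy as np

def best_threshold(scores, labels, cost_fp, cost_fn):
    """Choose the operating threshold that minimizes expected cost.
    scores: model probabilities; labels: ground truth (0 or 1)."""
    thresholds = np.linspace(0.01, 0.99, 99)
    costs = [cost_fp * np.sum((scores >= t) & (labels == 0)) +   # false alarms
             cost_fn * np.sum((scores < t) & (labels == 1))      # misses
             for t in thresholds]
    return thresholds[int(np.argmin(costs))]

scores = np.array([0.225, 0.335, 0.485, 0.635, 0.845])  # toy model outputs
labels = np.array([0,     1,     0,     1,     1])      # toy ground truth
print(best_threshold(scores, labels, cost_fp=10, cost_fn=1))  # ~0.49: spam-like, cutoff rises
print(best_threshold(scores, labels, cost_fp=1, cost_fn=10))  # ~0.23: cardiac-like, cutoff drops
```

Same classifier, same data: only the costs change, and the deployment decision changes with them.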

AI, Attention and Engagement

Algorithmic, large-scale, personalized targeting. Simon: “These are adversarial attacks for someone else’s gain.” Long-term preferences on time well spent. Policies, disclosure. Time and attention controls; education.

AI and Manipulation

Ex. tweets that can influence what people click on. Cambridge Analytica and data-driven behavior change. We need reflection and guidance on how to continue processes for study and proactivity. Funding of programs and research centers as roles for industry, civil society, academia, and government. Could we have a study every five years aimed at reflecting on the evolution of AI?

Peter Drucker: 20th century business practices. “Profit is not the primary goal”, only the essential condition for sustainability. Leaders in every institution are also responsible for the community.

Corporate and Community Responsibility

Aether committee at Microsoft. They have working groups. Refer to the guiding principles… Partnership on AI — to benefit people and society — a non-profit focused on the community.

Moving Ahead

“We need to pursue…”

  • Principles of intelligence
  • Better understand human intelligence and its computational variants
  • Work to solve societal challenges
  • Identify and address costs
  • Bring in multiple disciplines

Act responsibly in the world to transform existing situations into more desirable outcomes.

Questions:

The future of human cognitive biases; the issue of transparency — what is needed beyond accuracy and performance measures?

  • We might want other features — it will vary depending on usage. I can see a day when some independent party has certified the data sets used for fairness and biases. In other cases we need to think through how to best explain inferences — visualizations (a population understanding policy at a hospital). We can’t look at AI as one thing; I wish it was called computational intelligence. It refers to a large set of disciplines seen through a computational lens. Each space will have different requirements for how to adapt to human beings.

AI for a purpose: diverse people have diverse goals that may come into conflict. Does that fall to the developer or the user — or should the AI know?

  • Fixed technologies that can recognize patterns and do simple things. When we build these systems, they can be used to do specific things, and we know how well each performs based on a test set; this captures an intended purpose. I can imagine another system weaving this in, one that knows the intended purpose. In a future system, it could be interesting to see whether a system trained on personalized data performs well depending on who interacts with it (kid, wife, etc.)

Technological Literacy — is this something that people should know?

  • You would hope one day (ex. an automated car), rather than being concerned with the manufacturer, that the car cannot drive alone until the user has been assessed and has knowledge of the system. Modularizing operations — show visually what will happen in the system, and get end-user assessments that they understand. The sophisticated systems are understood by maintenance, etc. What are the terms of reference and usage that acknowledge this interconnected dynamic of different groups / touch points? Can systems monitor their own performance — a layer of reflection that looks out for certain problems — is the base system going off the rails? An overview and meta-view of what’s going on at the base level. Can you give the blind spot a signature? The nature of our technologies will morph over time. We need continuing studies instead of a “we solved the problem” mentality. The challenges and opportunities will never go away; we just need to stay on it. The advances can change as we go.
  • These things then influence people’s responses to the biases (ex. cars, aviation studies: it’s hard for people to take over). The importance of understanding cognitive psychology is deep and broad. Prototype robotic surgeons working hand-in-hand. A mix of initiatives — how do we do this in a fluid manner? When to start, who is making contributions. Mutual grounding and conversations. We need to make sure AI can augment humans instead of replacing them. The future of jobs — also about helping to coordinate; AI might help shift the nature of the job.

Will the rise of automation help to create the rise in the caring economy (value of the human touch, experience)? Could caretakers, educators, etc. have the top pay because they do the job the best?

Herb Simon, again — we are actors of the future, not observers. Our actions can determine the future’s shape to advance our own goals. Computation is about building worlds. Pittsburgh: support equity, social welfare, and technological innovation that pairs with social innovation. But it also has some of the biggest racial disparities in the sharing of wealth. We almost have two entirely different communities side-by-side.

Concept 02

Questions: How does the interface and the AI fit into someone’s life? Would it ask you questions? Is this something that’s designed to be everything? Compatibility, dating with your AI — is this something everyone would use? What would this offer people? How much work would you put in? Is it more public? Something that’s continuously listening, knows all of your activity, has a “complete picture of you”? Is it a companion to you all the time? Is it a friend or a partner, but with a different way of thinking about things? Someone changes what they do as a result of this — co-creation. Show how, over time, someone changes how they do something. Work out a case for this: in what situation? What would you plan to make or do? How critical do I want to be with this? Is this coming? Is this great? Is it a new interface for things that already exist? It shows where the info is — reframing. Maintain a critical edge; leave some things unresolved. What is the AI’s interface for the human? How does the hierarchy and dynamic work, and would that highlight the AI or the human? Is the AI side actually a group of people in a call center? Who else does its knowledge come from (stereotypes and shadow people)?

A visualization of personal data — from a person’s (blue) and an AI’s (red) perspective

Storyboarding

Critique Notes:

  • Two people, the AI is the screen
  • Scanner — shown in your body?
  • Shadows in a cave — that’s your understanding of the world
  • How close do you stick to the metaphor of a confessional?
  • Would you know who the other person is?
  • Google my data, shows (my account)
  • Build up that profile: “Data Selfie”, GrandPerspective
  • Right now it is all a digital reflection — maybe it’s new types of data: gum wrappers, times looked in the mirror
  • “The House That Spied on Me” — Kashmir Hill
  • Numerology, date of birth and time

How do you express an undefined output? Confessions? Self-reflection? The AI is changing who you are as a person. The output isn’t important anymore; it’s the dynamic and the symbol. Does it test what you believe, or do you reflect in a certain way — and why do people even consider that? Is the relationship valid? Do you even want to notice, and when you do, then what happens? Different roles you embody.

With identity and reflection, there is vulnerability — the ability to externalize and look into a situation. What do you give up to engage with this thing? Maybe it asks you a vulnerable question:

  • When’s the last time you looked in the mirror?
  • What’s the last text you sent?
  • How many questions have you asked today?
  • What are you worrying about right now?

________________

  • What question would an AI want to know about you?

What do I want people to experience? If you get to see everyone else’s response, would you change your answer?

__________________________________________

Concept 03

What are the dialogues we have with AI?

How does AI influence our behavior? Artificial intelligence changes its definition of who you are based on your interactions. However, we can fail to realize that this dynamic in turn influences our self-perception and actions. The framing of content can limit how we perceive the bigger picture. What categories of your identity does an AI represent?

Does the AI take on the identity of you as a researcher, a mother or a woman?

Deliverable: I want to develop an interaction that can display this relationship.

  1. A person feels a desire to check in on their AI’s identity. This is reinforced by showing how many people around them have gone through the process. This social pressure influences the initial decision to go through the process.
  2. In order to interact with AI, there is an exchange, and a moment of vulnerability. This dynamic also occurs in a confessional. A person must give up information about themselves in order to enter.
  3. A person can now enter the AI’s perspective and see what others have responded. From the inside, people can change the question that is asked.
  4. After seeing these responses, would a person change their answer?

I want to leave room for spelling errors and human nature.

  1. “Type a response to enter” — when standing in front of the reflection, the AI asks you a question (ex. why do you look in the mirror?). This flow is sketched in code after the list.
  2. You have to type your answer in order to see the other side of the reflection and others’ answers.
  3. You can choose to change the question — something that probes what you think reflects the true self
  4. Would you change your answer?
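A minimal sketch of this type-to-enter flow, hypothetical and only meant to pin down the interaction logic (step 3, changing the question, is left out for brevity):

```python
wall = {"Why do you look in the mirror?": []}  # question -> everyone's answers

def mirror_session(question, get_input=input):
    """Steps 1, 2, and 4 above: answer to enter, see the wall, optionally revise."""
    answer = get_input(f"{question} (type a response to enter) ")
    wall[question].append(answer)         # stored verbatim, spelling errors and all
    print("\n".join(wall[question]))      # the reflection flips to others' answers
    revised = get_input("Would you change your answer? (enter to keep it) ")
    if revised:
        wall[question].append(revised)
```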

The wall of answers creates a collective identity through vulnerable questions. A wall of vulnerabilities.

_____________________________

AI voice interaction — how would the AI find the gaps between your identity and the identity it embodies? What would an AI want to know about you?

Letting people control the whole project may get messy.

What is not a fact, not quantifiable.

How do you know the AI is not already collecting that information?

What question would an AI ask to get a more holistic picture of you?

Can it do something with people’s answers? A “data dashboard for people.” Analyze the text somehow — key words, highlighting certain words. What is the AI extracting from the data set?
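A bare-bones version of that extraction: count words, drop stopwords, surface the rest. TF-IDF or embeddings would give a richer picture, and the stopword list here is a placeholder.

```python
from collections import Counter
import re

STOPWORDS = {"the", "a", "an", "i", "to", "of", "and", "in", "my", "is",
             "it", "about", "what", "me"}

def keywords(answers, top_n=10):
    """Pull the most frequent non-stopwords out of the wall of answers,
    one guess at what the installation's 'AI' might be extracting."""
    words = [w for text in answers
             for w in re.findall(r"[a-z']+", text.lower())
             if w not in STOPWORDS]
    return Counter(words).most_common(top_n)

print(keywords(["I worry about my deadlines",
                "worrying about what people think of me"]))
# [('worry', 1), ('deadlines', 1), ('worrying', 1), ('people', 1), ('think', 1)]
```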

Physical and digital “holes”. See the word typed, and send it to the other side.
