Where are the humans in AI?

aisha dev
7 min read · Apr 17, 2018


IDEAS — Embedding politeness and social cues into AI. What if you created an AI that would only work if you thanked it and spoke to it politely? Company-oriented ethics. Parents could adjust its settings to teach values. How could you use it as a teaching platform? Rehabilitation?

Avatar — there is a genuine connection with the things around you.

What if AI were based on religious beliefs? Buddhism, Christianity.

Buy products that will give you the intelligence you need — you go to a doctor, you’re short on EQ, and you get an AI to aid and train you in EQ.

The Ethics of AI

GETTY IMAGES via Wired

Instead of thinking about the ethics of current or developing AI in terms of its ethical effect on people and cultures, I’m going to be looking at the ethics, or the moral code, of the actual AI: Alexa, Cortana, Siri.

What are the values that these systems hold? How does our behaviour change in response to these networks?

“Amazon Echo Is Magical. It’s Also Turning My Kid Into an Asshole.”

Ethical and Religious Theory

If data is the ultimate renewable, if it is the ultimate footprint of all human activity and, in some cases, human predictability, will we be able to use data to test religious, moral and ethical theory?

Materialising the invisible — how realistic can that materialisation be?

If there were different emerging moral trends, how would we introduce them into AI? Getting a system of beliefs that aligns with your beliefs and then teaches them back to you, keeping you in check.

Or what if these ethical/moral beliefs were constructed into the AI by an increasingly polarised world? What would ethical beliefs in context look like?

Theory of Mind and Intelligence Profiling

This tangent spun out of a conversation with David Danks. He mentioned considering AI rights the way we consider animal rights; there is a certain level of anthropomorphising that takes place in this situation.

Many of these rights, however, are specific to cultural mores and morals and integrate a set of social interactions. Much of this depends on how you make inferences from observations and happenings, from the experienced world.

For example, on the road, people see people to be mindful of, cars, trees, etc. Self-driving cars do not see people as we see people; they don’t see the rich picture, they see pixels and data to avoid via computer vision.

What if we took drivers from different places? A driver from Pittsburgh would see people, trees, signs, etc. A driver from Delhi would see crowds and a really wide range of precarious and unsafe vehicles.

How do we construct ideas of how other people think? How do we construct ideas of why people do what they do?

“Data is infinite.” — Cennydd Bowles. If data really is infinite, it lacks time; it doesn’t have a past, present and future. What if everything we have done, along with everything we could potentially do, is already solidified in data?

Exploring the idea of Artificial Ethics

What if we could create a system that would record and predict people’s decision-making processes according to their ethical codes?

Visualising “where people are coming from.” We hear this phrase thrown around, especially in such a polarised world. What if this system could literally visualise the contexts people are situated in, so that their decisions become more understandable?

Visual Inspiration

  1. Quipu

This is an Incan system of accounting and record-keeping, based entirely on knots in rope and worn as a belt by the community accountant. The knot patterns are unique to each individual or family, according to their own personal account. It is essentially a footprint of their data, presented in a comprehensible way for administrative purposes.
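To get a feel for how decodable that kind of form is, here is a tiny Processing sketch of the general idea (my own drastic simplification, with a made-up account value), assuming each decimal place of a number becomes one cluster of knots along a single cord:

// Hypothetical quipu-style encoding: one knot cluster per decimal
// place, most significant digit at the top of the cord.
int account = 347; // made-up value for illustration

void setup() {
  size(200, 400);
  background(255);
  stroke(120, 80, 40);
  line(width/2, 20, width/2, height - 20); // the cord

  String digits = str(account); // split the value into digits
  noStroke();
  fill(60, 30, 10);
  for (int place = 0; place < digits.length(); place++) {
    int knots = digits.charAt(place) - '0';
    for (int k = 0; k < knots; k++) {
      // each digit is drawn as a cluster of knots
      float y = 40 + place * 110 + k * 12;
      ellipse(width/2, y, 10, 10);
    }
  }
}

Reading the value back is just counting knots cluster by cluster; the decoding lives in the body of the form itself.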

  2. Computational Quilt

This got me thinking about textiles and weaves — how they’re so specific to cultures, people and places.

Computational quilt — Lorrie Faith Cranor (2013)

The Thing

Theory of the human black box — free will and decision making

Daniel Shiffman’s Fractal Trees — Processing
Generating different patterns, moral codes — Ernst Haeckel

A system that reads your fractals
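For reference, the heart of Shiffman’s tree is a single recursive branch function. A minimal Processing version, with the branching angle standing in (purely as an assumption of mine) for one dial of a person’s moral code:

// After Daniel Shiffman's recursive fractal tree in Processing.
// The angle is imagined here as one parameter of a "moral code".
float angle = PI / 6;

void setup() {
  size(400, 400);
}

void draw() {
  background(255);
  stroke(0);
  translate(width/2, height); // root the tree at the bottom centre
  branch(100);
}

void branch(float len) {
  line(0, 0, 0, -len); // draw this branch
  translate(0, -len);  // move to its tip
  if (len > 4) {       // stop recursing once branches get tiny
    pushMatrix();
    rotate(angle);
    branch(len * 0.67); // right sub-branch
    popMatrix();
    pushMatrix();
    rotate(-angle);
    branch(len * 0.67); // left sub-branch
    popMatrix();
  }
}

Vary the angle, the shrink factor or the stopping length and you get recognisably different trees from the same rule, which is exactly the property a readable “moral fractal” would need.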

Where the rot occurs — what does it do?

MARISA

Visualising — modular system — crystal growth — particle systems

Rot — seeing the logical fallacies — questioning the consistency of beliefs

Colour vs. particle systems — physical representation of the inconsistencies
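A rough Processing sketch of that colour idea; the “inconsistency” scores here are random stand-ins for a real measure, and every mapping is an assumption of mine:

// Hypothetical: particles whose colour encodes how "inconsistent"
// a belief is (blue = consistent, red = inconsistent).
ArrayList<PVector> positions = new ArrayList<PVector>();
ArrayList<Float> scores = new ArrayList<Float>();

void setup() {
  size(400, 400);
  for (int i = 0; i < 200; i++) {
    positions.add(new PVector(random(width), random(height)));
    scores.add(random(1)); // stand-in for a real inconsistency measure
  }
}

void draw() {
  background(20);
  noStroke();
  for (int i = 0; i < positions.size(); i++) {
    PVector p = positions.get(i);
    float f = scores.get(i);
    // consistent beliefs read blue, inconsistent ones read red
    fill(lerpColor(color(60, 120, 255), color(255, 60, 60), f));
    ellipse(p.x, p.y, 6, 6);
    p.add(random(-1, 1), random(-1, 1)); // gentle drift
  }
}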

shared moral codes — family to

Memory orbs — Blade Runner 2049

DIFFERENT

Completely analog — specimens in a natural history museum — life cycle of the thing — you

cross-section — tree radius thing

Rhino/Grasshopper model

The Idea of Moral Consistency

“What people should strive for, in Greene’s estimation, is moral consistency that doesn’t flop around based on particulars that shouldn’t determine whether people live or die.”

This extract was taken from an article in The Atlantic on how Buddhist monks responded to a variation of the trolley problem. The choice thought to be made only by “psychopaths and economists” was also made by these monks.

The data physicalisation would also be a representation of the judgement of morality and of the decisions we make based on our moral codes: what is considered prudent or even noble in one culture or belief system might be considered brutal and inhumane in another.

“AI systems must embed some kind of ethical framework. Even if they don’t lay out specific rules for when to take certain behaviors, they must be trained with some kind of ethical sense.”

The article goes on to point out that although some sort of ethical code should be embedded into AI technologies, people aren’t comfortable with the idea of companies making those decisions on their behalf. “Again, in that instance, people don’t hold consistent views. They say, in general, that cars should be utilitarian and save the most lives. But when it comes to their specific car, their feelings flip.”

Do we know which parts of our moral intuition are features and which are bugs?

The Thing 2.0

Avatar connections
Spy Kids 2 animal agents

The Moral Code is a way to sync up with the AI technologies around us. For example, when entering a self-driving car, instead of turning a key, the way to activate the car would be to plug in this code. The machine would learn your moral code and your ethical system and make decisions based on you. This way, you would be responsible, just as you would be if you were driving. This would also prevent companies from embedding their own moral codes into the things we use and, by extension, into the larger systems we exist within.
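Purely as a thought experiment, the handshake might look something like this in Processing-style Java. Every class, field and threshold below is invented for illustration, not a claim about how such a system should actually weigh anything:

// Speculative sketch of the Moral Code "ignition" described above.
class MoralCode {
  float utilitarian;   // 0 = never trade lives, 1 = always save the most
  float riskTolerance; // how cautiously the car should drive

  MoralCode(float u, float r) {
    utilitarian = u;
    riskTolerance = r;
  }
}

class Car {
  MoralCode code; // no code plugged in, no movement

  boolean activate(MoralCode key) {
    code = key; // responsibility transfers to the rider, not the maker
    return code != null;
  }

  String decide(int livesLostSwerving, int livesLostStraight) {
    // dilemmas are resolved with the rider's code, not factory defaults
    if (code.utilitarian > 0.5) {
      // utilitarian riders pick whichever option costs fewer lives
      return livesLostSwerving < livesLostStraight ? "swerve" : "straight";
    }
    return "straight"; // non-utilitarian default: do not intervene
  }
}

The only point of the sketch is the ignition: decide() runs off whatever code the rider plugged in, which is what makes the rider, rather than the manufacturer, accountable.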

Potential names for the thing:

  1. Moral Key
  2. Moral Compass
  3. AE Tree
  4. Ethical Footprint
  5. Moralmetrics

Moral Code Typologies

Moral key typologies

Crumple zone — Data & Society

decodability of the form

Hang the form; annotate the shadow, the reflection, etc.
