Domesticating Intelligence

An attempt to rethink how we design future intelligent products, at the Copenhagen Institute of Interaction Design

As part of the Visiting Faculty at CIID, together with Joshua Noble, we had the chance this year to rethink our previous course, “The secret life of connected products”, and push it a bit further into the near future. We asked James Auger to join us for the start of the course to bring his experience around futures, smartness and the domestication of robots, and Churu Yun to step up the prototyping and industrial design craft.

In June we ran a three-week class to explore, discuss, research and design how Smartness and (Artificial) Intelligence are making the transition from Kickstarters, visions and laboratories and becoming part of our everyday life.

Honda Asimo and the family that he never had

When labelling a product “smart”, we charge it with assumptions that change the way we interact with it, and with expectations that influence the way we experience its flaws. Experiences with “smart” products seem to converge into a passive takeover of tasks that hides all the complexity and control behind “simple” and hidden interfaces.

With a growing awareness of the implications of algorithmic decision making (ehm, where to start… Tesla? Nest?) and the huge amount of tools that allow AI-like functionalities to leak even into very mundane objects, it was the right time to reflect on some of these trends and challenge the notions of ‘smartness’ and ‘intelligence’.

While there is a big buzz around chatbots and conversational UIs, and personalities are touted as one of the next main materials of interaction design, in this course we pushed the students to go beyond the existing metaphors of talkative butlers, the goals of a quantified and efficient life, and the fears of robotic takeovers.

We also wanted them to get deeper into some of the “black boxes”, to understand the processes of computer vision and machine learning, to play with them, and to get to know these new tools that will have to become part of the interaction design lingo and materials.

We put them in a real and mundane future, where things don’t work and these intelligences are inserted into situations which they may not understand and which may not understand them; where both user and object will need to adapt, becoming familiar and comfortable with one another: a process we can equate to domestication.

On Dogs, bots and domestication

Domestication is an interesting lens through which to understand the successes and failures of technologies introduced into our lives; you can read more about this topic in James’s PhD thesis at the RCA.

In brief, Domestication means a shift in habitat, where an organism is adapted to the new environment via human agency, adapting its function, its form and its interaction with us.

The dog perhaps represents the best example of domestication (for a natural organism): it evolved from a dangerous hunting animal that could only be handled by a few into something… completely different. Its function evolved beyond utilitarian needs, its form and interaction shaped and mediated by living together with humans and by understanding their language, signs and needs.

‘If you could read the genome of the dog like a book, you would learn a great deal about who we are and what makes us tick.’ — Michael Pollan

Looking at some of the technologies that tried to become part of our daily life, we see that some also had to evolve in form, function and interaction to be fully ‘domesticated’, while others are still failing at this process.

Computers at first failed to be accepted in our homes when sold as tools for making tasks more efficient, like planning a dinner menu or printing invitations. They became welcome later when their environment changed: with the digitization of media, their main role and function shifted and they became a central hub of our homes.

Robots are an example of something that was never truly domesticated, a recurrent ‘technological dream’ living mostly in movies, conferences and ads. They were never really accepted into homes in their anthropomorphic form to automate our daily life, but rather as “robotic” cleaners and other types of appliances. Instead they became extremely successful in their ‘arm’ evolution in the industrial context, where repetition and automation are of great value.

In a similar way, when we look at some of the incarnations of “smart” in today’s products, we can see a similar pattern of recurring dreams and pushbacks from people (e.g. the smart fridge…). Most of these products represent a view of the world where more automated, efficient and optimized tasks promise a life that not everyone is necessarily looking for, and try to sell a future of “generic users in their perfect glass cages” where everything works smoothly. But what would actually be smart for a more “real” and imperfect future?

The early examples of “intelligence” that we can now see in some of our homes (Nest, Echo and Google Home) may have abandoned the anthropomorphic shape, but they are still based on the metaphor of a talkative butler. What if we could explore different metaphors of interaction, like horses, centaurs, puppies, shepherds and teachers?

To push beyond what smartness and intelligence mean today and find new, even weird, meanings and incarnations, we started the class with this pretty broad set of questions:

What would be a new notion of smartness and intelligence that goes beyond the automated dream? What new roles and motives would it serve beyond making our life more streamlined and efficient? What new forms of intelligence and ecosystems can we take inspiration from? What new interactions, forms, metaphors can we explore to design more domesticated intelligent products?

The process of domestication of intelligence

In the first week of the class we focused on challenging the meaning of “smart” and “intelligent”. However, as a first step we had to agree among ourselves on what smart meant, or at least on a version of it. This is the one that James, Josh, Churu and I arrived at:

  • It can sense its environment (through time and space)
  • It can compute that sensory information (with specific goals)
  • It can act in some way on the world (with personality or behaviour)
  • It’s part of an ecosystem (of people, products, processes and companies)
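Reduced to code, this loose definition reads as a sense → compute → act loop embedded in a wider ecosystem. Here is a minimal, purely illustrative sketch; all the names, rules and values are our own invention, not taken from any real product:

```python
# A minimal sketch of the four-part definition of "smart":
# sense -> compute (toward a goal) -> act (with a behaviour),
# embedded in an ecosystem of people and products. Everything here is illustrative.

def sense(environment):
    """It can sense its environment (through time and space)."""
    return {"light": environment["light"], "people_present": environment["people"] > 0}

def compute(reading, goal="save_energy"):
    """It can compute that sensory information (with specific goals)."""
    if goal == "save_energy":
        # Only act when someone is there and the room is dark.
        return reading["people_present"] and reading["light"] < 0.3
    return False

def act(should_turn_on):
    """It can act in some way on the world (with personality or behaviour)."""
    return "lamp: fading on, politely" if should_turn_on else "lamp: staying off"

# Part of an ecosystem: the environment includes people and other products.
environment = {"light": 0.1, "people": 2}
print(act(compute(sense(environment))))  # → lamp: fading on, politely
```

Even at this toy scale, the interesting design questions sit in `compute`: whose goal is `save_energy`, and who chose it?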

An example we used to break the ice came from a previous discussion with James: imagine a ‘smart’ lift in a company building with too many people in it, having to mediate who should step out. What information does it have about them? What logic and motive will it use to decide? And how will it communicate? Will it choose fit/unfit/important/hurried/premium people?
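To make the bias question concrete, the lift dilemma can be sketched as a few lines of scoring logic. The criteria and weights below are hypothetical, invented only to show that someone has to encode a motive:

```python
# A hypothetical sketch of the 'smart lift' dilemma: given more passengers
# than capacity, which logic decides who steps out? The criteria and their
# weights are invented; each one encodes a motive the designer chose.

def who_steps_out(passengers, capacity):
    """Rank passengers by an (arbitrary) priority score; those ranked
    beyond capacity are asked to leave."""
    def priority(p):
        return (2.0 * p.get("is_premium", False)
                + 1.5 * p.get("in_a_hurry", False)
                - 1.0 * p.get("is_fit", False))  # the fit can take the stairs
    ranked = sorted(passengers, key=priority, reverse=True)
    return [p["name"] for p in ranked[capacity:]]

passengers = [
    {"name": "Ada", "is_premium": True},
    {"name": "Bo", "in_a_hurry": True},
    {"name": "Cy", "is_fit": True},
]
print(who_steps_out(passengers, capacity=2))  # → ['Cy']
```

Change one weight and a different person is asked to leave: the “smartness” is entirely in whose values the score reflects.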

We used this as a very loose framework to get the students to explore the new potential functions and interactions of a smart product or system and come up with their own definitions to go beyond smart=automated and intelligence=human.

Each team looked into what information could be part of the “environment” of a product, thinking of senses even beyond the human ones, and started to map the complexity of sources that can influence its computing with some simple Bayesian network visualizations.
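In the spirit of that mapping exercise, here is a toy Bayesian calculation showing how two uncertain “senses” combine to influence what a product believes. The network structure and all the numbers are made up for illustration:

```python
# A toy Bayesian-network-style calculation: two noisy observations
# (noise heard, lights on) update a product's belief that its owner is home.
# Structure and probabilities are invented for illustration.

p_home = 0.6                              # prior P(home)
p_noise_given = {True: 0.7, False: 0.1}   # P(noise | home)
p_lights_given = {True: 0.8, False: 0.2}  # P(lights on | home)

def posterior_home(noise, lights):
    """P(home | noise, lights) by Bayes' rule, assuming the two
    observations are conditionally independent given 'home'."""
    def likelihood(home):
        pn = p_noise_given[home] if noise else 1 - p_noise_given[home]
        pl = p_lights_given[home] if lights else 1 - p_lights_given[home]
        return pn * pl
    num = p_home * likelihood(True)
    den = num + (1 - p_home) * likelihood(False)
    return num / den

print(round(posterior_home(noise=True, lights=True), 3))   # → 0.977
print(round(posterior_home(noise=False, lights=False), 3)) # → 0.111
```

The students’ diagrams did the same thing on paper: tracing which sources feed a belief, and how wrong priors quietly shape a product’s conclusions.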

We talked a lot about whose goals products might serve and about the biases in algorithmic decision making. Thinking of profit-driven or humanitarian self-driving cars that have to deal with crashes, or of products whose main goal is their own survival, helped the students find new scenarios.

We looked at different forms of intelligence that live around us (dogs, cats, birds, …) and at how, by treating the home/supermarket/farm/… as a complex ecosystem of people and products, we could design new rules, relationships and even completely new ‘services’.

A Bayesian network from Real Prediction Machines by Auger+Loizeau and the “Killing Smart” poster by Automato

We channeled the explorations into a few areas where machines might start to come up: Believing, Governing, Nourishing, Learning or Entertaining, and we let the students get lost in the topic. For a while…

A first experiment in AI politician fights.
A mix of ideation. On the left, a set of religious products (faith USB, sabbath lamps and Christian printers); on the right, a vision of a server parliament

Each group started tackling complex questions and scenarios where intelligence could leak into our life at very different scales, and after a week of deep discussions, post-its thrown away and brain-aching, some interesting scenarios started to pop up:

What if religious belief were used as a basis for the behaviour of an object?

What if a toy’s fear could be used to teach kids about their fears?

What if animal happiness were the measure of the way a farm works?

If we consider the home as an ecology, what would be considered the fittest product and how would it evolve?

If the city is a home for some, what would a helpful intelligent system be?

What would daily life be like if government were fully or partially run by non-human intelligences?

In the second week we focused more on form and interaction.

Josh arrived with a lot of code and examples, and we jumped straight into “making” bits of intelligence: dealing with it as a material to shape and trying to figure out how to define its initial form and interaction.

After a week of very abstract future scenarios, it was a big switch to figure out what to sense and how, exploring conceptual models with actual code and starting to prototype behaviours and interfaces.

Josh gave a few tutorials on getting information from APIs in p5.js, recognizing shapes and colors with OpenCV, understanding text through sentiment analysis with Alchemy, making very basic bots with wit.ai, and understanding the basics of machine learning with Wekinator. We tried to give a sense of a new toolbox to design with intelligence, without forcing any tool in particular.
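To give a flavour of the Wekinator exercises, here is the train-by-example idea sketched in plain Python (the real tool maps live OSC inputs to outputs and offers several model types; this toy 1-nearest-neighbour version, with made-up sensor values, only illustrates the principle):

```python
# Wekinator-style interaction, reduced to a toy: "record" a few example
# sensor readings per gesture, then classify new readings by nearest
# neighbour. Gesture names and values are invented for illustration.

import math

def train(examples):
    """examples: list of (input_vector, label) pairs recorded by demonstration."""
    return list(examples)

def classify(model, x):
    """Return the label of the closest recorded example (1-NN)."""
    return min(model, key=lambda ex: math.dist(ex[0], x))[1]

# Two pretend sensor channels (e.g. accelerometer axes), three gestures.
model = train([
    ((0.1, 0.9), "wave"),
    ((0.8, 0.2), "tap"),
    ((0.5, 0.5), "rest"),
])
print(classify(model, (0.15, 0.85)))  # → wave
```

What made Wekinator valuable in class was exactly this workflow: designers teach by demonstrating, not by programming rules, which changes how a product’s behaviour gets shaped.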

Some groups started exploring the perspective of an object, defining indexes of fear, or how an object would understand whether it fits the context it is in.

Some focused on defining the touchpoints of the new, larger systems that they imagined, like talking to an algorithmic government or designing a clock that judges you if you are lazy.

In the third week we focused on exploring the relationship over time and the implications that emerge from the scenarios the groups worked on. We stressed designing the right metaphors and interfaces to relate to these new inhabitants. With the help of Churu, the class focused on building a good story and expressing it both in working prototypes and in videos, exploring what parts of their vision could be made real enough and what could become part of a storytelling piece.

New scenarios of domestication

And without further ado, here are the final projects. They are a mix of nearer and further futures, some more critical than others, but all of them pose new questions and propose new ways of thinking about the function, the interaction, the forms and ultimately the relationship with smartness and artificial intelligence in our lives.

On the Origin of Objects (by Dario Loerke, Jivitesh Ranglani, Mette Morch and Sena Partal) explores the various survival strategies of a simple light bulb along its evolutionary path to becoming the fittest object at home. By sensing and learning from its environment, the object starts making decisions about its form, function or affordances. The project showcases a series of artefacts that actively acquire new physical and digital qualities in order to stay relevant to their owner.

Three evolutions of a lamp

Ascetic Aesthetic (by Leila Byron, Nicolas Armand, Charlie Gedeon and Monika Seyfried) is an exploration of the hidden motives and biases of intelligent products. The project is a collection of seven products that base their judgment on religious principles. The result is a system where a user is blissfully happy following the rules but never questions why those rules are needed. The project highlights the aspects of AI that are obscure and non-adaptive: a set of promises of well-being for the individual without any clear reward system or explanation.

Guardians of the Dark (by Luca Mustacchi, Kate Twomey, Sophie Chow and Priscila Ferreira) explores how to design transitional objects for kids that evolve with their fears and aspirations. The guardians are three creatures that respond differently to light and sound and help the kid get over the fear of the dark by going together on a journey into their own fears.

Real-Time Democracy (by Ines Araujo, Luuk Rombouts, Lars Kaltenbach and Bjorn Karmann) explores the everyday implications of a government fully or partially run by an artificial intelligence. In a future where the government is ruled by AI, would people be more engaged or even more disconnected? It’s a scenario where human-robot relationships have evolved to the point where they are largely considered normal, and politics becomes a series of activities spread across objects in private and public space. More on their blog, and hopefully the video soon too.

SurvivAI (by Adriana Chiaia, Cyrus Kamath, Mary Mikhail and Mikio Kiura) is a secret learning system camouflaged within the urban environment to help homeless people survive in cities. The system is devised as a brick that reads sensor information and people’s feedback to determine whether places are safe to sleep in or good for finding food.

Meadow (by Justine Syen, Grishma Rao, Daan Wijers and Iskra Uscumlic) is an exploration of animal needs as the main input for an automated farm. Meadow discards the idea of a traditional farm run by a farmer and instead lets the cows run their own lives: we call it “the ultimate free range”. It’s a non-farm with no farmer that directs animals in the open fields based on their needs for food, health and also… death, through a series of Cow-Computer Interfaces.

On more real/fictional intelligences

With this class we tried to jump a little bit forward and look at the issues that happened with “smart” and will possibly happen with “AI”: words that we attach to products too easily, without really thinking about the hidden biases and implications they contain.

As happened with learning thermostats and self-driving cars, more examples will show the need for new languages and interfaces for people to deal with, trust and interact with future intelligent objects, but also for a different type of design.

Translated to the world of data, the introduction of a new service, products or algorithms requires a responsible design that considers moments when things start to disappoint, embarrass, annoy or stop working or stop being useful. — Fabien Girardin

Most of the projects that came out of this class might be more fictional in nature, and some prototypes were not actually using machine learning or neural networks, but we believe this to be a good way to explore the complex field of intelligence: building something real enough to be experienced, rooted in the observation of people/machine relationships and in an understanding of code, but also fictional enough to be inspiring, even a bit critical of the status quo, and thoughtful about flaws and ‘real’ futures. They are examples to push forward a dialogue that we must have about what a better and wanted use of intelligence in everyday products looks like, and about how to start inserting and unpacking this topic in the education of today’s designers.

With a good mix of real coding experimentation and design fiction, we pushed the students (and ourselves) to dig deeper into ‘machines’ and their scary logical interiors, to understand what is there to influence, translate and turn into possible controls and interfaces for people. At the same time, by stepping into parallel presents or near futures, we were free to imagine completely new scenarios and to think a bit more critically about implications, rather than just adding smartness and intelligence ‘as a feature’.

I hope that we can finally try to domesticate these intelligent beasts for more surprising and interesting uses, even if we might get bitten in the process.


Thanks to all the CIID crew (Pavla, Annelie, Alie, Simona, Peter) for hosting us and bringing us to Copenhagen from all corners of the world for these weeks. Thanks to all the students for having put in a lot of brain and heart, and for having survived. Thanks to James Auger, Churu Yun and Joshua Noble for having built and run this experiment. And thanks to the others in automato.farm, Matthieu and Saurabh, and to all the people who are fighting this battle too and have given us so many references and projects to point at.

Some reads and videos that inspired us

Some projects that we talked about