On the Democratisation & Escalation of Creativity — Chapter 01


We live in times where science fiction authors struggle to keep up with reality. In recent years, there has been an explosion of research and experiments dealing with creativity and A.I. Almost every week there is a new bot that paints, writes stories, composes music, designs objects or builds houses: Artificial Intelligence systems performing creative tasks.

Our research started by wondering about this phenomenon and playfully experimenting with it. This led to an in-depth investigation of what we call “CreativeAI”. This document is the first chapter of our adventure into CreativeAI, aiming to establish a backstory and language we can use to talk about this intricate subject. Our initial intuition was that creativity is a central force throughout human history, and one that is currently evolving in interesting ways. In our attempt to understand this phenomenon, we think about creativity and technology in a structured way. We focus on two emerging creation patterns, Assisted Creation and Generative Creation, and argue that they are leading to the Democratisation and Escalation of Creativity.

The goal of this project is to find a set of guiding principles, metaphors and ideas that inform the development of a CreativeAI praxis, new theories, experiments, and applications. To explore this space, we investigate history and technology, construct a narrative and develop a vision for a future where CreativeAI helps us raise the human potential.


  1. Creativity
  2. Assisted Creation
  3. Generative Creation
  4. Conclusion
  5. Authors & Acknowledgments
  6. References


1. Creativity

Creativity is central to the human condition and takes many forms in our daily activities, yet defining it is challenging. This section provides a selective overview of historical, theoretical and technological metaphors for creativity that are relevant for CreativeAI.

Ancient cultures, including the thinkers of Ancient Greece, China, and India, lacked our concept of creativity [1]. They viewed creativity as a form of discovery, a view that would dominate the West until the Renaissance. By the 18th century, mentions of creativity became more frequent, linked with the concept of imagination [2]. In the late 19th century, theorists such as Wallas, Wertheimer, Helmholtz and Poincaré [3] began to reflect on and publish their creative processes, pioneering the scientific study of creativity.

The scientific study of creativity produced many theories, models and systems throughout the 20th century: philosophical, sociological, historical, technical and practical. While defining creativity in objective terms was and still is challenging, the systematic study of creativity and its enabling factors allowed industries such as advertising, architecture, design, fashion, film and music to adopt creative processes rapidly and reproduce them at scale.

Science, technology and creativity have a long, intertwined history. Selecting which metaphors to explore is an important research decision. We explore three metaphors: Augmented Creativity, Computational Creativity and Creative Systems.

Augmented Creativity

In “As We May Think” (1945), Vannevar Bush imagines the “memex”, a desk-like device that lets people search through a library of articles via a series of switches [4]. While entirely mechanical, Bush describes a device featuring hyperlinked text, aggregated notes and bookmarks, all extending the human capacity to research and process information: in essence, the web.

Vannevar Bush / Memex (1945)

The article inspired a young Douglas Engelbart to quit his job and attend graduate school at UC Berkeley [5]. Influenced by Bush’s memex concept, Engelbart went on to write a report, published in 1962, titled “Augmenting Human Intellect: A Conceptual Framework”. In it he describes a “writing machine [that] would permit you to use a new process of composing text (..) You can integrate your new ideas more easily, and thus harness your creativity more continuously (..) This will probably allow you to devise and use even-more complex procedures to better harness your talents…” [6].

Douglas Engelbart (1968)

Engelbart did not only provide a vision of interacting with a computer system; he had a guiding philosophy [7]. He believed that computers could be used to extend the ways we think, represent and associate ideas in our minds [8]. His vision was not just to automate processes but to multiply the power of people and collaborators by creating systems that augment our intellect, humanity and creativity. His goal was to raise the human potential [9].

Sketchpad (1963) and First Virtual Reality Headset (1968) by Ivan Sutherland

Ivan Sutherland, a student of Claude Shannon (who in turn was a student of Vannevar Bush), built a working system inspired by the Memex as early as 1963. His seminal PhD project “Sketchpad” [10] is considered the ancestor of modern computer-aided design (CAD) programs [11]. It demonstrated the potential of interactive computer graphics for technical and creative purposes.

“The Mother of All Demos” (1968), Doug Engelbart and his team from the Augmentation Research Center in Menlo Park.

Only a few years later, Engelbart’s Augmentation Research Center (ARC) at the Stanford Research Institute invented a range of technologies still widely used today, among them video conferencing and the mouse [12]. Simultaneously, John McCarthy had founded the Stanford Artificial Intelligence Laboratory (SAIL). McCarthy’s group wasn’t concerned with augmentation, but wanted to reproduce human intelligence electronically [13]. Engelbart’s Center and McCarthy’s Laboratory brought together Ph.D.s, hardware and software hackers, and high school students, including Steve Wozniak and Steve Jobs [14], to experiment collectively.

Xerox Parc Computers and GUI (1970s)
Mass Market Video Chat (2005) / VR (2016)

When Xerox (a paper company) decided to fund its Palo Alto Research Center (PARC) in 1970 [15], it quickly attracted ARC and SAIL veterans eager to work on personal computing, user interface design and graphics. The facility developed a number of innovations, such as Ethernet, and pioneered a new metaphor for doing creative work with computer systems: the Desktop. Soon after Xerox opened its center, a more informal but equally important movement emerged to explore computers: the Homebrew Computer Club. Homebrew attracted a mix of antiwar activists, makers and computer scientists. Ultimately, dozens of companies, including Apple and Microsoft, and technologies such as the Personal Computer (PC) would come out of the Homebrew movement [16].

Apple Computer 1, by Apple Computer Company (1976)

Computational Creativity

Already in 1950, Claude Shannon was able to approximate proper English grammar and generate new sentences using computational methods [17]. Such early research in “computational creativity” led to an interdisciplinary dialogue exploring the use of computational approaches for creative problems.
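
Shannon’s approach can be sketched as a simple first-order word chain: record which words follow which in a corpus, then walk that table at random. A minimal illustration in Python (the toy corpus and function names are our own, not Shannon’s):

```python
import random
from collections import defaultdict

def build_chain(words):
    """Record, for every word, the words that follow it in the corpus."""
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length, rng):
    """Walk the chain, sampling a random successor at each step."""
    out = [start]
    for _ in range(length - 1):
        successors = chain.get(out[-1])
        if not successors:
            break
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug".split()
chain = build_chain(corpus)
print(generate(chain, "the", 8, random.Random(42)))
```

Because frequent word pairs appear more often in the successor lists, the walk reproduces the statistics of the source text while producing sentences that never occurred in it.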

IBM 7094 with IBM 7151 Console (1962) / Creative use of Computer Graphics by A. Michael Noll at Bell Labs (1962).
Generative music video, by Raven Kwok (2015)

Starting in the early 1960s, researchers at Bell Labs pioneered the use of computers for creativity. In a series of breakthrough experiments, they generated graphics, animations and art [18] with early computer systems. One of the most active researchers was Michael Noll. In 1970, he made a call to action: “What we really need is a new breed of artist-computer scientist” [19]. Noll’s call was soon echoed by artists and musicians, such as Brian Eno. As early as 1975, Eno was using algorithmic and generative principles to compose music; he later described his work as “using the technology that was invented to make replicas to make originals” [20].

Back cover of Brian Eno’s generative music album “Discreet Music” (1975) / Computer-Generated Ballet, Michael Noll (1960s)
Generating Music From Sport Data (2015) / Music Style Transfer (2015) / Machine Learning Drum Machine (2015)

A further milestone was set in 1979 by Benoit Mandelbrot [21] with the discovery of the Mandelbrot set. He was the first to use computer graphics to display fractal geometric images, showing how visual complexity can arise from simple rules. Fractals had a profound effect on how we perceive creativity and machines. They led many to ask “can a computer/algorithm be creative?” and inspired scientists, artists and engineers to experiment with creativity.
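
The rule behind the Mandelbrot set really is that simple: iterate z ← z² + c and check whether z escapes. A minimal escape-time sketch in Python (the resolution, iteration cap and character choices are arbitrary):

```python
def escape_time(c, max_iter=50):
    """Iterate z -> z*z + c; return the step at which |z| exceeds 2,
    or max_iter if it never does (the point is assumed to be in the set)."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return n
    return max_iter

# Coarse ASCII rendering: '#' marks points that never escaped.
for row in range(11):
    im = 1.25 - row * 0.25
    line = ""
    for col in range(40):
        re = -2.0 + col * 0.075
        line += "#" if escape_time(complex(re, im)) == 50 else " "
    print(line)
```

A dozen lines of arithmetic produce the infinitely detailed boundary that so impressed Mandelbrot’s contemporaries; colouring points by their escape step yields the familiar psychedelic renderings.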

Benoit Mandelbrot / Mandelbrot Fractal (1979)
Generative Shoe Midsoles by Nervous System (2015) / Mandelbulb 3D Fractals (2009)

Video games pioneered the industrial application of computational creativity. Around 1978, games started to make extensive use of procedural systems to define game maps and character behaviours [22]. Such methods allowed for the development of complex gameplay without excessive hand-authoring. Games such as SimCity [23] by Will Wright developed these concepts further with playful interactive simulations of complex systems.
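
The core idea of procedural content is that a small rule set plus a seed stands in for hand-authored data. A toy Python sketch of a seeded tile map (the dungeon format and parameters are invented for illustration):

```python
import random

def generate_map(width, height, seed, wall_chance=0.35):
    """Derive a whole floor plan from one seed: the same seed always
    reproduces the same map, so only the seed needs to be stored."""
    rng = random.Random(seed)
    return [
        "".join("#" if rng.random() < wall_chance else "." for _ in range(width))
        for _ in range(height)
    ]

for row in generate_map(width=24, height=6, seed=1978):
    print(row)
```

This is how early titles fit sprawling dungeons into a few kilobytes, and the same seed-driven principle scales up to the planet-sized worlds of later procedural games.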

Procedural Games: Beneath Apple Manor (1978) / Akalabeth (1980)
Procedural Game Universe — No Man’s Sky (2016)

Since the 1980s, focused research in industry and academia has led to the formalisation of computational creativity as a scientific discipline [24]. At the same time, a wide range of fields — such as computer science, architecture and design — started intensely experimenting with computation creatively. Finding a single definition for computational creativity is challenging, yet many have tried. A currently often cited definition is: “create computations which — if they were made by humans — would be deemed creative” [25].

DeepForger — Image Style Transfer with Deep Neural Networks (2016)

Today, interest in creativity from an A.I. perspective has begun to blossom, with yearly conferences, schools and PhD programs dedicated to computational creativity [26]. A steady surge of ideas and techniques that are at least computationally creative in intention has moved into the mainstream: A.I. characters, artificial musicians, journalist bots, generative architecture and neural nets that “dream”. While such systems are nowhere near human capabilities, they are actively being used in culture, industry and academia to create outputs that are increasingly met with great curiosity by the public. In many areas, systems are making the leap from experimentation to production, leading to new creative processes and outputs.

Woman working on ENIAC — The first electronic general-purpose computer (1940s).

Creative Systems

After World War II, the United States enjoyed a period of euphoria. The Allied Powers had triumphed, seemingly through science, technology and systems thinking. In this environment, the Josiah Macy Jr. Foundation organized a series of conferences from 1946 to 1953 “on the workings of the human mind” [27], later titled “Cybernetics”. The aim of the conferences was to promote meaningful communication across scientific disciplines and restore unity to science [28]. Attendees included J.C.R. Licklider, Margaret Mead, Heinz von Foerster, John von Neumann, Claude Shannon and Norbert Wiener.

Macy Conference attendees (1940s)

Inspired by the conference, in 1948 Wiener published his seminal work “Cybernetics: or Control and Communication in the Animal and the Machine” [29] and Shannon published “A Mathematical Theory of Communication” [30]. Such works laid the foundation for today’s information age by providing a scientific theory for concepts such as “information”, “communication”, “feedback” and “control”.

Norbert Wiener / Claude Shannon / Ancient Greek Ship Steersman
Steering a boat: Control is Circular / The ultimate machine, Claude Shannon et al.

Wiener defined cybernetics as the science of adaptive, feedback-based control [31]. The name comes from the Ancient Greek word for steersman. Cybernetics takes the view that control in complex environments must be conversational: it requires not just action but also listening and adaptation. To steer a boat across a lake, you have to use your tiller and sails to adjust to changing winds and currents. The cybernetic model of control is circular: decisions depend not only on how well people carry out their intentions but also on how the environment responds.
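
The circular model can be made concrete with a few lines of proportional control, the simplest cybernetic loop: observe the error, apply a correction, and let the environment answer back. A toy Python sketch (the boat scenario, gain and wind values are illustrative, not from Wiener):

```python
def steer(heading, target, wind, gain=0.5, steps=40):
    """Closed-loop steering: each cycle senses the error between goal and
    observation, acts in proportion to it, and absorbs the environment's push."""
    for _ in range(steps):
        error = target - heading   # listen: how far off course are we?
        heading += gain * error    # act: correct in proportion to the error
        heading += wind            # the environment responds (constant crosswind)
    return heading

# The loop settles near the target despite the wind, holding a small
# steady offset that exactly balances the disturbance.
print(round(steer(heading=0.0, target=90.0, wind=-2.0), 1))  # → 86.0
```

The point is the circularity: the controller never computes a course in advance; it converges by repeatedly comparing intention with the environment’s response.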

An early link between Cybernetics and Creativity was made in 1968 with the exhibition “Cybernetic Serendipity” at the Institute of Contemporary Arts in London [32]. The show explored connections between creativity and technology. Artists such as Gordon Pask and Nam June Paik used systems to generate music, poetry, movies, paintings and computer graphics [33].

This new spirit of creation was addressed by Buckminster Fuller in his notion of the “comprehensive designer”, which he describes as “an emerging synthesis of artist, inventor, mechanic, objective economist and evolutionary strategist” [34].

Cybernetic Serendipity Exhibition (1968)
Cybernetics/Control Theory in action today: Robot, Boston Dynamics (2016), Robotic Painter (2013)

In the following decades, cybernetic ideas would profoundly impact thinking in fields such as business, politics, art, design and architecture [35]. As Pask noted, “architects are first and foremost systems designers,” but they lack “an underpinning and unifying theory… Cybernetics is a discipline which fills the bill” [36]. By systematically integrating context and relationships, cybernetics pushed creation & design beyond its object-based approach.

While cybernetics went out of fashion in the 1970s, its legacy lives on in fields such as Control Theory, Complex Systems Studies, Interaction Design and Design Thinking [37]. Today, holistic approaches that attempt to combine technological, human and social needs are invoked in many fields. Inspired by cybernetics, creative systems thinking has found “surprising” application in areas such as software (agile, open source), management (Google 20% time), labour (Uber / Lyft) and resource allocation (algorithmic trading / Amazon).

Examples / Media

The following is a selection of projects from Augmented Creativity, Computational Creativity and Creative Systems research. The aim is to provide visual context and show progress over time.

1. Computer Interaction Input Device (1968)
2. Mouse — Mass market Input Device (1982)
3. Touch Screen — Input Device (1982)
4. Mass Market Voice Control (2011)
5. Mass Market Virtual Reality Headset (2016)

1. Sketchpad — Computer Aided Design (1963)
2. Autocad — Mass market CAD Tools (1982)
3. Maya — Mass Market 3D CAD 
4. Generative Bicycle — 3D Printed (2015)
5. Generative Dress — 3D printed (2016)

1. Tetris — Procedural Gameplay (1984)
2. Simcity — Simulation of Complex Systems (1989)
3. Spore — Procedural Game Characters (2008)
4. Minecraft — Procedural 3D Worlds (2011)
5. No Man’s Sky — Universe Simulation (2016)

1. Hypercubes — Computer Graphics/Animation (1968)
2. Fractals — Complexity from Simple Rules (1980)
3. Mandelbulb — 3D Fractals
4. DeepDream — Generative Painting (2015)
5. NeuralPatch — Generative Style Transfer — (2016)

— Intermission —

In the previous section, we explored the long, intertwined history of science, technology and creativity. In this process we investigated three metaphors for creativity: Augmented Creativity, Computational Creativity and Creative Systems. In the following sections, we consider how these metaphors have developed further and extrapolate two main categories of activity today: Assisted Creation and Generative Creation.

2. Assisted Creation

Humans have used tools to extend their creative capabilities since the Stone Age, adapting to changing needs. While mastering creative skills used to be attainable only for a few, assistive systems are making creativity more accessible. This section presents three generations of assisted creation systems and explores how they democratise and escalate creativity.

Inspired by Engelbart’s vision from the 1960s, countless scientific papers and experiments explored how to assist humans in performing “creative” tasks, or, as researcher Ben Shneiderman put it, technologies that allow more people “to be more creative more of the time” [1]. Such research, coupled with the emerging PC revolution, allowed companies like Apple and Lotus to build early digital applications for creative tasks. Ultimately, this movement led to the founding of companies such as Autodesk (1982) [2] and Adobe (1982) [3] that focused exclusively on building tools and systems that enable creativity.

Industry pioneered the development of first generation assisted creation systems in the 1980s: Photoshop, AutoCAD, Pro Tools, Word and many more. First generation systems mimic analogue tools with digital means [4]. The human’s full attention is required to drive the creative process: feedback is slow and assistance limited. Yet such tools allowed experts and non-experts alike to be more creative, which led to a flood of new creative processes and outputs.

Adobe Photoshop 1.0 (1988) / Autodesk Autocad 1.0 (1982)

Camera autofocus, invented by Leica in 1976 [5], is an early example of a second generation assisted creation system. In these systems, humans and machines negotiate the creative process through tight action-feedback loops. The machine is given greater agency so that control can be shared, and decisions are made collaboratively with the system. Second generation systems are ubiquitous today, used in production across cultures and industries.

Leica SLR Camera with Autofocus (1976) / Autocorrect (1991) / Autotune (1998)

Autocorrect, invented in 1991 by Dean Hachamovitch at Microsoft [6], changed how millions of people write. Autotune, invented in 1998 by former Exxon engineer Andy Hildebrand [7], transformed how music is made. The impact such systems have had on creativity is hard to measure, yet clearly significant: by lowering the bar of mastery, assisted creation systems empower experts and non-experts alike to shift their attention to higher-level issues, perform complex creative tasks more reliably and experiment quickly. While such systems are not without their risks and complications [8], ultimately they enable us to be more creative, more of the time.

Assisted Creation 3.0

Second generation systems are often limited and limiting: negotiation for control is blunt and interactions are not fine-grained. Due to such limitations, widely used tools such as autocomplete have a mixed reputation. A set of new ideas and techniques, coming from diverse research disciplines, promises to overcome previous limitations. We define them as third generation assisted creation systems (AC 3.0). A shared vision is to design systems that negotiate the creative process in fine-grained conversations, augment creative capabilities and shorten the skill-acquisition time from novice to expert. Third generation assisted creation principles are finding practical use across an expanding range of creative tasks.

To name a few examples:

  • Assisted Drawing helps illustrators to draw, by correcting strokes.
  • Assisted Writing helps authors to write, by improving text style.
  • Assisted Video helps directors to edit, by fine-tuning movie cuts.
  • Assisted Music helps musicians to make music, by suggesting ideas.

Assisted Photo Enhancement (2016) / Assisted User Gesture Map (2014)
Assisted Freehand Drawing (2011)

Describing the breadth of ongoing research in a few examples is challenging, as there are many ideas and domains to explore. To track assisted creation, we analyzed recent research publications across many organisations with the help of machine learning, graph theory and visualization. Judged on quantitative measures (publications and experiments), assisted creation research and use is on the rise across creative disciplines. Notably, Machine Learning (ML) and Human Computer Interaction (HCI) are contributing a steady stream of research relevant to the design of assisted creation systems. Together, ML and HCI provide us with a conceptual framework for machine intelligence in a human context.

As early as 2011, Rebecca Anne Fiebrink, HCI/ML researcher at Goldsmiths, University of London, fittingly asked: “Can we find a use for machine learning algorithms in unconventional contexts, such as the support of human creativity and discovery?” [9]. In the years since, Fiebrink’s call has been taken up by a multidisciplinary community: a wide range of new ideas, theories, experiments, approaches and products are being explored and developed. Ongoing HCI/ML research offers us new possibilities and metaphors for the design of assisted creation systems.

Selection of graph analysis of ongoing HCI/ML research (AE, 2016)

Democratisation and Escalation

By researching Assisted Creation, we recognise two emerging trends with implications for creativity: 1. Assisted creation systems are making a wide range of creative skills more accessible. 2. Collaborative platforms, such as online video and open source, are making it easier to learn new creative skills. As these trends increasingly converge, they shorten the skill-acquisition time from novice to expert. This is leading to a phenomenon we have named “the democratisation of creativity”. We explore these trends further and extrapolate a vision.

Assisted Handwriting Beautification (2013) / Assisted Fashion Style Selection (2015) / Assisted Animation with Webcam (2015)

TREND 1: Creativity is becoming more accessible.

While having a photo studio or music recording studio at home was but a dream for a 1980s creator, today it is one click away. Such trends, observable for many creative tasks, are empowering non-experts and experts alike to be more creative, more of the time. One could say the price of creation is falling. On this trajectory, a key challenge has been the “high barrier to entry” [10] for those without specific skills or talents. Today, assisted creation systems are increasingly lowering this bar by actively guiding creative processes and bootstrapping the learning of new skills.

Assisted Reading (2015) / Assisted Hair Design from Photos (2015) / Assisted CV Writing (2015)

TREND 2: Collaboration is becoming more accessible.

Already in the 1960s, Engelbart’s vision was not only about enhancing individuals: he wanted to augment the collective intelligence and creativity of groups, to improve collaboration and group problem-solving ability. With the rise of collaborative and social software, and a deeper theoretical understanding of how groups can use technology to self-organize and cooperate, systems are emerging that can make groups effectively more creative. A key notion is that creativity is a collective process that can be strengthened through technology, but goes beyond purely technological means: human capabilities and tool capabilities have to be raised in sync.

The escalation of creativity

By projecting these trends into the (near) future, we can start to imagine a scenario we call “the escalation of creativity”: a world where creativity is highly accessible and anyone can write at the level of Shakespeare, compose music on par with Bach, paint in the style of Van Gogh, be a master designer and discover new forms of creative expression. For a person who does not have a particular creative skill, gaining a new capability through assisted creation systems is highly empowering. If creative tasks can be mastered on demand and access is democratically shared, age-old notions such as “expert” or “design” are bound to be redefined. Further, this escalation could lead to scenarios such as using creativity as a means of empathic communication, at scale.

Assisted Drumming with Robotic Arm (2016) / Assisted Physical Table (2015)
Even though such scenarios are currently fiction, thinking about the implications of the democratisation and escalation of creativity influences today’s design decisions. Creating systems that are respectful of cultural practices, support different types of creativity, are responsive to human needs, provide feedback transparently and are ethically grounded is already proving to be highly challenging.

Automation or Augmentation

A first question we should ask ourselves when talking about the democratisation and escalation of creativity is “Are we designing tools that empower us or autopilots that replace us?”. Such questions have been negotiated in a global discussion starting over 5,000 years ago with the use of oxen in agriculture [11]. History shows us that any technology has feedback dynamics and momentum, or in the words of Marshall McLuhan: “First we shape our tools, thereafter they shape us” [12]. Nonetheless, we do not see technology as a primary force of nature: human decisions and actions play a key role, negotiated through culture, politics and power.

Collage of “Portrait of a Family in a Landscape” (1641) / LeNet, the first convolutional neural network, used to automatically read bank cheques (1989)

A second question to address is “what does automation mean?”. Our current understanding of automation is heavily influenced by ideas from the industrial revolution: mass-producing goods with mechanical butlers. While concerns about mass unemployment due to certain types of automation have to be taken seriously, automation is not inherently bad; some “jobs” might be better left to machines, as they can be inhumane and wasteful of human potential. Shifting the discussion to one of human potential and capability (reflecting on our strengths and weaknesses, needs and dreams) allows us to reframe fears of automation as opportunities for augmentation.

Augmentation is not the same as automation: Where automation promises to “free us from inhumane tasks”, augmentation aims at strengthening our capabilities. It is the notion of raising the collective human potential, not replacing it. To analyse this notion further, we refer to a framework introduced by NASA, for thinking about autonomy: The H-Metaphor [13]. It proposes to view our interactions with systems more as we do with horses, instead of butlers.

Image from “The H-Metaphor” by NASA (2003)

Think of a rider on a horse: if the rider uses deliberate movements, the horse follows exactly. As the control becomes vaguer, the horse resorts to familiar behaviour patterns and takes over control. Being able to “loosen or tighten the reins” leads to a smooth ebb and flow of control between human and horse, rather than instructions and responses. Considering feedback and control is key.
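
The rein metaphor maps naturally onto a weighted blend of commands, where one parameter decides how much authority the human holds at any instant. A deliberately tiny Python sketch (the function and parameter names are our own, not from the NASA report):

```python
def blend_control(human_cmd, machine_cmd, rein_tightness):
    """Mix the two command streams: rein_tightness = 1.0 means the human
    drives exactly; 0.0 hands control to the machine's default behavior."""
    assert 0.0 <= rein_tightness <= 1.0
    return rein_tightness * human_cmd + (1.0 - rein_tightness) * machine_cmd

# Tight reins: the rider's steering command dominates.
print(blend_control(human_cmd=10.0, machine_cmd=2.0, rein_tightness=0.9))
# Loose reins: the machine falls back to its own familiar pattern.
print(blend_control(human_cmd=10.0, machine_cmd=2.0, rein_tightness=0.1))
```

Real shared-control systems vary the tightness continuously, for example from how deliberate or confident the human’s input appears, producing the ebb and flow of control the metaphor describes.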

HCI/ML Researcher Roderick Murray-Smith suggests using the H-Metaphor and control theory when thinking about interface dynamics. He predicts:

“Future devices will be able to sense much more on and around them, offering us more ways to interact. We can use this to let go sometimes and be casual about our interactions” [14].

First Film Recording of Race Horse by Eadweard Muybridge (1878)

Having the ability to interact with systems casually, and to let go of control at times, promises substantially improved forms of human-machine and human-human collaboration. By combining human intuition with machine intelligence, shared control principles let us imagine new creative processes, not possible independently by either human or machine. Such principles promise to make creativity more accessible and raise our collective potential.

While a range of dystopian outcomes can easily be imagined, we deliberately choose to explore a vision that focuses on opportunities, not fears. In order to prevent bleak future scenarios, complex metaphorical and ethical questions cannot be an afterthought; they are an opportunity for collaborative exploration and a call to action for collective decision making.

Examples / Media

The following is a selective overview of ongoing assisted creation research, experiments and products, across a range of creative tasks / disciplines.

Assisted Photography

1. Assisted Photo Enhancement (2016).
2. Assisted prediction of photo memorability (2015).
3. Assisted Categorisation and Tagging of Photos (2015).
4. Auto Photo Colorisation (2016)
5. Realtime Smile and Emotion Detection (2015).

Assisted Drawing

1. Assisted Handwriting Beautification (2013).
2. Assisted Freehand Drawing with Real-time Guidance (2013).
3. Autocomplete hand-drawn animations (2015).
4. Animating drawings with face recognition (2015).
5. Robotic Handwriting Assistance (2013).

Assisted Read/Write

1. Assisted CV text creation and optimisation (2015).
2. Auto-respond to email (2015)
3. User guided / Automatic summarization of text (2015).
4. Text Style transfer from English to Shakespeare (2015).
5. Word Processor with a Crowd Inside (2010).

Assisted Music

1. Music style and harmony transfer, genre to genre (2014).
2. Composing Music with Augmented Drawing (2009).
3. Assisted Musical Genre Recognition (2013).
4. 909 Drum-machine that learns from behaviour (2015).
5. Assisted Robo Guitarist (2013).

Assisted Design

1. Learning Visual Clothing Style (2015).
2. Assisted Design of 3d models by merging shapes (2015)
3. Learning Perceptual Shape Style Similarity (2015).
4. Parsing Sewing Patterns into 3D Garments (2013).
5. Shape Shifting Table (2015).

Assisted Experiments

1. Wearable Assisted Text-Reading Device (2015).
2. Pain Visualization through patient text (2013).
3. Hair Modeling with DB (2015).
4. Assisted Ethical Decision Making, with a fan (2015).
5. Text Entry for Novice2Expert Transitions (2014).

Assisted Community

1. Real Time video stream of creative processes (2015).
2. Massive Open Online Course (2012).
3. Large scale Open Source Collaboration (2008).
4. Creative Process Question Answer Sites (2010).
5. Zero Cost Creative Content Distribution (2007).

Assisted Culture

1. Assisted drumming with third robot arm (2016).
2. Assisted Vending, selects drinks based on looks (2016). 
3. Computer Ballet (2016).
4. Assisted Karaoke Singing with Face Swap (2016).
5. Pingpong Assistant with AR Glasses (2015).

3. Generative Creation

Our ability to represent complex creative problems is increasing. A fundamental shift in perspective is allowing us to revisit many creative problems. The following section presents generative creation and explores how it democratises and escalates creativity.

Representation has played a pivotal role throughout human history. Ever more detailed representational systems have allowed us to communicate complex phenomena in understandable terms, helping us organise information, manage problems and make informed decisions. Since the invention of writing, representational strategies have evolved substantially, from the inclusion of measurement in the early 16th century [1] to the adoption of perspective drawing in the Renaissance [2]: new forms of representation have led to revolutions in science and technology.

Historic Representational Experiments

Abstraction strategies, such as drawing and writing, try to represent big ideas with highly limited means. They force humans to keep all the moving parts in their heads. As Matt Jezyk (Autodesk) suggests, such tools were invented in the age of documentation, when the bandwidth available to represent problems was low [3]. Jezyk describes the 20th century as the age of optimization: new techniques, such as simulation, expanded our representational bandwidth significantly and allowed many disciplines and industries to adopt reproducible abstraction methods at scale.

Early 20th-Century Simulations

Historically, the use of simulations was largely isolated within individual fields. 20th-century studies of systems theory and cybernetics, combined with the proliferation of computers, led to a more unified, systematic perspective: the age of models. A model is a high-bandwidth, computational representation of reality. The model represents the system (its characteristics and behaviors), whereas the simulation represents the operation of the system over time. This new representational paradigm does not rely on abstraction methods but tries to make “things that behave like the thing they represent”.
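
The distinction can be shown in a dozen lines: the model encodes the system’s state and rule of behavior, while the simulation operates that model over time. A minimal Python sketch using Newton’s law of cooling (the coffee-cup scenario and all constants are ours, for illustration only):

```python
class CoolingModel:
    """The model: the system's state and its rule of behavior."""
    def __init__(self, temp, ambient, rate):
        self.temp, self.ambient, self.rate = temp, ambient, rate

    def step(self, dt):
        # Newton's law of cooling: heat loss proportional to the difference.
        self.temp -= self.rate * (self.temp - self.ambient) * dt

def simulate(model, dt, steps):
    """The simulation: operate the model over time and record its trajectory."""
    history = [model.temp]
    for _ in range(steps):
        model.step(dt)
        history.append(model.temp)
    return history

coffee = CoolingModel(temp=90.0, ambient=20.0, rate=0.1)
trace = simulate(coffee, dt=1.0, steps=30)
print(round(trace[-1], 1))  # the cup drifts toward room temperature
```

Nothing here abstracts the coffee into a static description; the model behaves like the thing it represents, and the simulation is simply that behavior unfolded over time.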

21st Century Digital Modeling

Models give us an infrastructure for representing the overall problem. They help us understand complex, interconnected issues and develop a deeper understanding of the inherent logic and relationships of parts. While modeling techniques date back to at least the early 1940s (nuclear-bomb simulation) [4], modeling long remained the exclusive domain of experts and was prohibitively expensive. Today, modeling and simulation methods are becoming highly accessible and cheap.

We argue that accessible modeling techniques are allowing us to negotiate a wide range of creative problems from a higher-level perspective and to create differently. We explore this emerging pattern — its culture, technology and implications — and name it the generative age.

The Generative Age

Already in the 1960s, Engelbart pointed to the impact digital technologies have on our representational ability: “We can represent information structures within the computer that will generally be far too complex to study directly” [5]. A conceptual link between these new representational abilities and creativity was made in the early 2000s by designers like Patrik Schumacher, co-founder of Zaha Hadid Architects. He describes an “ontological shift” from the platonic ideal shapes of the past 5000 years to new computational “primitives” [6].

What he was alluding to is a fundamental shift in perspective, from 3D to nD: while the Renaissance gave us the ability to represent reality from a three-dimensional perspective (3D), the generative age enables us to represent (model) complexity and see (infer) reality from a probabilistic, or high-dimensional, perspective (nD). Inspired by such ideas, influential design manifestos [7], books [8] and software [9] were published, laying the foundation for a new movement: Generative Design.

Generative Column Design, created with digital manufacturing, by Michael Hansmeyer (2010)
“Digital Grotesque”, by Michael Hansmeyer and Benjamin Dillenburger (2013)

Michael Hansmeyer, an architect, describes Generative Design as “thinking about designing not the object but a process to generate objects” [10]. He is implying a shift from object to process — from certainty to probability — suggesting that instead of designing one “artefact”, we use computational models to design processes that generate infinite “artefacts”.
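Hansmeyer's shift from object to process can be sketched as a parametric generator. In this hypothetical example, the designed artefact is not a single chair but a process that yields an endless stream of chair variants (the attributes and their values are purely illustrative):

```python
import itertools
import random

def make_chair(seat_height, legs, material, rng):
    # One run of the process yields one concrete artefact.
    return {
        "seat_height_cm": seat_height + rng.uniform(-2, 2),  # slight variation
        "legs": legs,
        "material": material,
    }

def chair_process(seed=42):
    # The design is the process itself: it can generate artefacts indefinitely.
    rng = random.Random(seed)
    options = itertools.product([40, 45, 50], [3, 4], ["wood", "steel"])
    for seat_height, legs, material in itertools.cycle(options):
        yield make_chair(seat_height, legs, material, rng)

chairs = list(itertools.islice(chair_process(), 5))  # sample five variants
```

Instead of shipping one chair, the designer ships `chair_process`, and every sample drawn from it is a distinct artefact.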

“Housing Agency System: Mass-Customization System for Housing” by Autodesk (2012)

Essentially, Generative Design is an umbrella term describing ongoing research and development in diverse fields, ranging from architecture and industrial design to machine learning. A shared vision is to empower human designers to explore a greater number of design possibilities from a new perspective and to lower the time between intention and execution. In the generative age, the cost of creating diversity and complexity is falling. This allows us to create an order of magnitude more elaborate form and function: for example, bicycles that are mass-customizable to people’s individual taste while using a fraction of the material traditionally required. While generative approaches are not constrained to any particular field, architecture and, more recently, design have been among the first disciplines to systematically take hold of them, as illustrated in the following examples.

Generative Dress: “Kinematics” by Nervous System (2014) / Generative Shoes: “Molecule-shoes” by Francis Bitonti (2014) / Generative Shirts: “Processing Foundation” (2015)
Generative Car: “Hack Rod” by Autodesk (2015) / Generative Bicycle: “Skeleton” by Gary Liao (2016)
Generative Study “gaudism” by echonoise (2013) / Generative Architecture “Heydar Aliyev Centre” by Zaha Hadid (2012) / Generative Study by Designmorphine (2015)
Generative Chair with “Dreamcatcher” by Autodesk (2015) / Generative Lamp “Hyphae” by Nervous System (2014) / Lamp made with 3d room scan by Hybrid Platform (2015)
Generative Bow: “Tekina — Optimal Recurve Bow” by Aminimal Studio (2015) / Generative Motion Art by Raven Kwok (2015)

While generative models have been used for creative applications since the 1970s (procedural game content), recent research advances — driven notably by Machine Learning and Deep Learning — are leading to a quantitative and qualitative leap in generative modeling capabilities. Today, new models are released practically every week. Research projects with acronyms such as VAE [11], DRAW [12], VRNN [13], GAN [14], DCGAN [15], LAPGAN [16] and GRAN [17] are allowing us to model complexity with greater resolution and to apply modeling techniques to a wider range of creative problems.

Autoencoding images beyond pixels (2015)

The application of generative machine learning models to creative tasks is a recent development, yet it is already leading to the discovery of new primitives for creation: design building blocks that are applicable across many creative domains. Such creative generative models have been successfully used to generate fashion items, paintings, music, poems, song lyrics, journalistic news articles, furniture, image and video effects, industrial designs, comics, illustrations and architecture, to name just a few applications. See the examples below for a selection of projects.

“Semantic Shape Editing Using Deform Handles” (2015) / “Procedural Modeling Using Autoencoder Networks” (2015)
“NeuralDoodle” — Semantic Image Style Transfer (2016) / Automatic Colorization of B/W Images with Neural Nets (2016) / “Neural Image Analogies” (2001/2016)
Deep Visual Analogy-Making (2015) / Generative Form Editor “Cindermedusae” (2015) / Exploratory Modeling with Collaborative Design (2012)
Generated Street Sign Images (2015) / Generated Fake Chinese Characters (2015) / Generated Choreography and Animation (2016)
RNN generated Super Mario Levels (2016) / RNN generated TED Talks (2015) / RNN generated Wikipedia Article (2015)
Artist Agent: Reinforcement learning ink painting (2013) / BrainFM — Dynamic Generative Music for Relaxation (2015) / Jukedeck — Generative Music for Videos (2015)

Generative models let us explore data in unprecedented ways. To give an example, imagine a chair: we can represent its characteristics, such as color, height or style, as dimensions in a high-dimensional information space. This space can be filled with data about millions of chairs, with similar chairs mapped in the vicinity of one another. This creates a chair model, which can be explored and visualized.

“Joint Embeddings of Shapes and Images via CNN Image Purification” (2015)

Such high-dimensional topologies allow us to easily retrieve information, explore data and ask questions about relationships, logic and meaning: e.g. “Show me all chairs that are red and tall”. Further, they allow us to make predictions and infer new characteristics: e.g. “Show me all chairs that are similar to chair A and B but unlike chair C”. Finally, we can use high-dimensional spaces to generate new objects: e.g. “Make me a chair that resembles a car, and is comfortable to sit in”.
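The kinds of query above can be sketched over a toy chair space. Here each chair is a vector whose dimensions are hand-picked illustrative attributes (redness, height, comfort); retrieval is attribute filtering, and the analogy query is plain vector arithmetic. A real system would learn such an embedding rather than hand-craft it.

```python
import numpy as np

# Toy chair space: rows are chairs, columns are attribute dimensions.
names = ["stool", "armchair", "barstool", "throne"]
chairs = np.array([
    [0.1, 0.4, 0.3],   # stool
    [0.8, 0.5, 0.9],   # armchair
    [0.2, 0.9, 0.4],   # barstool
    [0.9, 0.8, 0.7],   # throne
])
RED, HEIGHT, COMFORT = 0, 1, 2

# Retrieval: "show me all chairs that are red and tall".
red_and_tall = [n for n, v in zip(names, chairs) if v[RED] > 0.5 and v[HEIGHT] > 0.6]

# Inference: "like A and B, but unlike C", as vector arithmetic.
query = chairs[1] + chairs[3] - chairs[0]        # armchair + throne - stool
closest = names[int(np.argmin(np.linalg.norm(chairs - query, axis=1)))]
```

Generation goes one step further, decoding a point in this space back into a new chair, which is what generative models are trained to do.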

Generative models invite designers to play with data and generate infinite imaginative variations and solutions to creative problems. With powerful tools to explore, optimise and test creative design ideas rapidly, we computationally maximise the opportunity for serendipity. While generative models can be used to perform classical creative tasks efficiently, they additionally open up a range of new creative capabilities that are incomparable with classical methods.

Artificial Serendipity: Systems that maximise the opportunity for serendipity.
Multi-Modal Network Diagram (2015) / Generating Stories about Images (2015)

Recent advances in machine learning make it possible to include data from different “modalities” in a single model and to translate between them. The key insight is that all forms of information can be encoded in a shared information space. Early research into multimodality has led to a set of widely adopted systems: “Auto Translate” [18] lets us translate from one language to another; “Speech2Text” [19] transcribes audio to text. Multi-modal machine learning is allowing for more complex scenarios that go beyond simple translation of data: generating images from text [20], text from videos [21], music from movement [22], 3D shapes from shopping data, etc. We call this:

Artificial Synesthesia: Systems that enable inter-sensory experiences.
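A minimal sketch of the shared information space described above: if sentences and images are embedded into the same vector space (here with hand-made toy vectors; a real system would learn them), translating between modalities reduces to a nearest-neighbour search across the modality boundary.

```python
import numpy as np

# Toy shared space: both modalities are encoded as points in the same 2-D space.
text_embeddings = {
    "a cat on a sofa": np.array([0.9, 0.1]),
    "a mountain lake": np.array([0.1, 0.9]),
}
image_embeddings = {
    "cat_photo.jpg":  np.array([0.85, 0.15]),
    "lake_photo.jpg": np.array([0.12, 0.88]),
}

def translate(query_vec, target_space):
    # Cross-modal "translation": nearest neighbour in the target modality.
    return min(target_space, key=lambda k: np.linalg.norm(target_space[k] - query_vec))

# Text -> image: which image best matches the sentence?
match = translate(text_embeddings["a cat on a sofa"], image_embeddings)
```

The same lookup works in either direction, or between any pair of modalities encoded into the shared space.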

Democratisation and Escalation

The generative age gives us a new canvas for creativity, which we have only just started to explore. While it is hard to predict where these developments will take us, emerging trends are worth investigating, as their impact can already be felt. By extrapolating these developments and thinking about their implications, we arrive at a scenario we call “the democratisation and escalation of creativity”.

We explore this notion further and describe four trends:

Image from Apollo 10 Space Mission (1969)
Image from “Large-scale Image Memorability” (2015)

1. Generative Perspective: 
For the first time in human history, we can create from a blended, or generative, perspective — one that mixes elements of the collective human perspective, the machine perspective and the individual perspective. It gives us the ability to transcend creative constraints, such as habit, socialisation and education, and to create objects that are altogether new.

2. Generative Predictions: 
These systems take the concepts of recommendation, personalization and customisation and apply them to the creative process. The vision is to have systems that suggest potential next “actions”, allow people to casually adjust aspects of designs according to personal needs and histories, and enable us to playfully discover creativity.

Image from “Corporate Social Networking Platforms As Cognitive Factories” (2016)
Image from “Generative strategies for welding” (2016)

3. Generative Markets: 
In the future, generative models might be shared in an open collaborative model marketplace (OCMM). While current marketplaces allow us to trade artefacts/products, generative markets will facilitate the sharing of recipes to create unlimited new artefacts. In essence, think of it as GitHub for open-source creativity.

4. Generative Manufacturing: Emerging digital manufacturing techniques, such as 3D printing, are being combined with generative systems to create physical objects. Early signs of this trend can be observed in Shapeways, Kickstarter and the “maker movement”. It is starting to redefine the relationship between creation, production and consumption.

Autodesk Project “Dreamcatcher” (2015)

When projecting such scenarios even further into the future, we arrive at the realisation that Generative Creation has profound implications for fields like technology, manufacturing, resource allocation, economics and politics. Already today, Generative Creation methods are leading to the democratisation of creativity in many areas. By lowering the time between intention and realisation, Generative Creation is leading to an escalation of new “artefacts” — forms, functions and aesthetics. It allows us to explore what lies beyond the artefact.

Combined with new manufacturing techniques, Generative Creation is redefining concepts such as production, consumption, labour and innovation. As current economic models are largely built around the notion of “artefacts”, a renegotiation of fundamentals is foreseeable. While predicting the future of the democratisation and escalation of creativity is impossible, thinking about narratives, opportunities and implications informs today’s decisions and visions. Or in the words of Robert Anton Wilson:


The following is a selective overview of generative creation research, experiments and products across a range of creative tasks and disciplines.

Generative Experiments

1. Generating Flora and Fauna (2015).
2. Generating Chairs, Tables and Cars (2015).
3. Generative Font Design with Neural Networks (2015).
4. Generative Manga Illustration (2015).
5. Generating Faces with Manifold Traversal (2015).

Generative Design

1. Semantic Shape Editing Using Deform Handles (2015).
2. Generative Motorcycle Swingarm Design (2015).
3. Generative Airplane Partition Design (2015).
4. Generative Data-Driven Shoe Midsole Design (2015).
5. Generative Jewellery Design (2015).

Generative Text

1. Generating Stories about Images (2015).
2. Generating Sentences from a Continuous Space (2015).
3. Generative Journalism (2010).
4. Generating Cooking recipes with Watson (2015).
5. Generating Clickbait Web content and site (2015).

Generative Serendipity

1. Exploratory Modeling with Collaborative Design (2015).
2. Generative Music Score Composition with RNNs (2015).
3. Messa di Voce, Generative Theater (2003).
4. Generative Image Style Transfer (2015).
5. Interactive Neural Net Hallucinations (2015).

Generative Architecture

1. Generative Column Design and Manufacturing (2010).
2. Generating House 3D Models with Housing Agents (2012).
3. Heydar Aliyev Center (2012).
4. Generative Biologically Inspired Form (2015).
5. Francis Bitonti on 3D Printing (2015).

Generative Design

1. Generative Strategies for Welding (2015).
2. Generative Car Chassis Design (2016).
3. Generative Mass-Customized Knitwear (2016).
4. Generative Mass-Customized T-shirts and Bags (2016).
5. Generative Lampshade based on room 3D scan (2015).

Generative Games

1. Generative Creation of a Universe (2014).
2. Texture Synthesis (2015).
3. Generative Character Controls (2012).
4. Generative Game Map and Characters (2013).
5. Generative enemy manager (2010).

Generative Synesthesia

1. Synesthesia Mask Lets You Smell Colors (2016).
2. Cross-modal Sound Mapping Using ML (2013).
3. Expressing Sequence of Images with Sentences (2015).
4. Generative Graffiti, adapting to Music (2016).
5. Music to 3D Game (2001).


Our research journey began with a series of experiments — playfully exploring the space between creativity and A.I. It led to an in-depth investigation into creativity, which reinforced our initial intuition that creativity is a central, evolving force throughout human history. New metaphors such as Augmented Creativity, Computational Creativity and Creative Systems allowed us to approach creativity from new perspectives and explore how it intersects with technology.

During this journey, we have tried to think about creativity and technology in a structured way. This has allowed us to recognize, analyze and define emerging creation patterns. We focused on two general types: Assisted Creation and Generative Creation. Together, these patterns are leading to a vision we call the democratization and escalation of creativity: a world where creativity is highly accessible, through systems that empower us to create from new perspectives and raise the collective human potential. Through our research, we learned to appreciate creativity as an ever-evolving driving force of humanity and as a wide-open frontier for interdisciplinary research and development.

A primary goal of this research project was to find a set of guiding principles, metaphors and ideas that inform the development of future theories, experiments and applications. By combining different domains into one narrative, we formulate a new school, or praxis, for creativity: CreativeAI. Its desire is to explore and celebrate creativity. Its goal is to develop systems that raise the human potential. Its belief is that addressing the “what” and “why” is as important as the “how”. Its conviction is that complex ethical questions are not an afterthought, but an opportunity to be creative collectively.

Finally, CreativeAI is a question, rather than an answer. Its only demand is more collaboration and creativity. It is an invitation for play!

“The creation of something new is not accomplished by the intellect but by the play instinct acting from inner necessity. The creative mind plays with the objects it loves” — Carl Jung


About the Authors

Roelof Pieters 

Samim Winiger


Special thanks to: Boris Anthony, Jake Witlen, Hendrik Heuer, Kiran Varanasi, Francis Tseng, Mark Riedl, Jack Clark, Alex J. Champandard, Gene Kogan, David J.Klein, Simone Rebaudengo, Matthieu Cherubini, Saurabh Datta, Max Niederhofer, Dave Ha, Kyle Kastner, Mattias Östmar, Melisande Middleton, Tor Sanden, Mami Ebara.

Further, thanks to people on twitter for inspiration, conversations and helpful suggestions!



3. Generative Creation

  1. Human scale measurements and proportions became commonplace, most famously in Leonardo Da Vinci studies, ie The Vitruvian Man, in ca. 1492. Studies of anatomy, light and the landscape by Leonardo see specifically “A Treatise on Painting” by Leonardo da Vinci, assembled by his pupil Francesco Melzi and published in France and Italy in 1651 and Germany in 1724 [full text at].
  2. Renaissance painters Brunelleschi and Alberti developed theories of perspective to represent three-dimensional objects on a two-dimensional surface. Again it was Leonardo Da Vinci who improved on these theories through his scientific approach to art. See again: Leonardo da Vinci “A Treatise on Painting”
  3. Matt Jezyk (2015) Design Computation Symposium Part 1, AS12577, Autodesk University, Las Vegas [video at].
  4. Early applications of computer simulations were almost all military. Arguably the first massive computer simulation was during the Manhattan Project in World War II to model the detonation process of a nuclear bomb. See also: Masco, J. (2006) The Nuclear Borderlands: The Manhattan Project in Post-Cold War New Mexico. Princeton University Press.
  5. Doug Engelbart (1986) “The Augmented Knowledge Workshop” at ACM Conference on the History of Personal Workstations, held at Rickey’s Hyatt House in Palo Alto, California, January 9–10, 1986. [Video], [Transcription].
  6. Schumacher, P. (2010) The Parametricist Epoch: Let the Style Wars Begin, In: AJ — The Architects’ Journal, Number 16, Volume 231, 06. May 2010 [text at]; Schumacher, P. (2014) The Historical Pertinence of Parametricism and the Prospect of a Free Market Urban Order, In: Poole, M. and Shvartzberg, M. The Politics of Parametricism, Digital Technologies in Architecture, Bloomsbury Academic, New York 2015 [text at].
  7. Schumacher, P. (2008) Parametricism as Style — Parametricist Manifesto, London, Presented and discussed at the Dark Side Club , 11th Architecture Biennale, Venice 2008 [text at].
  8. Bohnacker, H., Gross, B.,. Laub, J., Lazzeroni, C. (2009) Generative Gestaltung: Entwerfen, Programmieren, Visualisieren. Schmidt, Mainz [book & info at].
  9. ie Rhinoceros (typically abbreviated Rhino, or Rhino3D), a popular 3D computer graphics and CAD application, used for many generative design tasks (ie 3D printing) because its geometric model is based on the NURBS mathematical model, which focuses on producing mathematically precise representations. A popular visual scripting language add-on is Grasshopper, used extensively under generative artists and designers.
  10. Michael Hansmeyer: Building unimaginable shapes, TEDGlobal 2012, Jun 2011 [Ted Talk] (citation at 10m12s).
  11. Diederik P Kingma, Max Welling (2013) Auto-Encoding Variational Bayes. Arxiv, GitXiv (Code & Article); Danilo Jimenez Rezende, Shakir Mohamed, Daan Wierstra (2014) Stochastic Backpropagation and Approximate Inference in Deep Generative Models. [Arxiv].
  12. Gregor, K., Danihelka, I., Graves, A., Rezende, D.J., Wierstra, D., (2015) DRAW: A Recurrent Neural Network For Image Generation. [Arxiv], [GitXiv] (Code & Article).
  13. Chung, J., Kastner, K., Dinh, L., Goel, K., Courville, A., Bengio Y., (2015) A Recurrent Latent Variable Model for Sequential Data. Arxiv, GitXiv (Code & Article)
  14. Goodfellow, I.J.,Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y. (2014) Generative Adversarial Networks. [Arxiv], [GitXiv] (Code & Article).
  15. Radford, A, Metz, L., Chintala, S. (2015) Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. [Arxiv], [GitXiv] (Code & Article).
  16. Denton, E., Chintala, S., Szlam, A., Fergus, R. (2015) Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks. [Arxiv], [GitXiv] (Code & Article).
  17. Im, D.J., Kim, C.D., Jiang, H., Memisevic, R. (2016) Generating images with recurrent adversarial networks. [Arxiv].
  18. “Auto Translate”, most famously implemented in Google Translate.
  19. A flurry of Speech-to-text APIs and service has become available over the past few years, ie Google’s speech-api (api key needed).
  20. Mansimov, E., Parisotto, E., Ba, J.L., Salakhutdinov, R. (2015) Generating Images from Captions with Attention. [Arxiv].
  21. Yao, L., Torabi, A., Cho, K., Ballas, N., Pal, C., Larochelle, H. and Courville, A. (2015) Describing videos by exploiting temporal structure. In ICCV15. [Arxiv], [GitXiv] (Code & Article); Venugopalan, A., Xu, H., Donahue, J., Rohrbach, M., Mooney, R. and Saenko, K. (2015) Translating videos to natural language using deep recurrent neural networks. In NAACL15. Arxiv; Srivastava, N., Mansimov, E. and Salakhutdinov, R. (2015) Unsupervised learning of video representations using LSTMs. In ICML15. [Arxiv], [GitXiv] (Code & Article); Venugopalan, S., Rohrbach, M., Donahue, J., Mooney, R., Darrell, T. and Saenko, K. (2015) Sequence to Sequence — Video to Text. In ICCV15. [Arxiv], [GitXiv] (Code & Article); Simonyan, K. and Zisserman, A. (2014) Two-stream convolutional networks for action recognition in videos. In NIPS14. at Arxiv; Sharma, S., Kiros, R. and Salakhutdinov, R. (2015) Action Recognition using Visual Attention. [Arxiv], [GitXiv] (Code & Article)
  22. Fiebrink, R.; Trueman, D. and Cook, P. R. (2011) The Wekinator: Software for using machine learning to build real-time interactive systems. Demo at MusicTechFest London (2015), [Video], [Software]; 
    Vik, C. (2012) Astral Body, Demonstration at PACT Theatre, Sydney, 18 February 2012 [Video].