Human Factors: How We Designed an Adaptive Culture for Our AI Company

creative.ai
16 min read · Jan 22, 2018


Most companies won’t survive the transition to a workplace powered by Artificial Intelligence. It all comes down to their internal culture. Some teams simply don’t have the freedom to adapt quickly enough; others won’t use the flexibility they already have. In fast-paced environments where AI drives progress, organizational change will become the distinguishing factor.

Early on at creative.ai, we realized we couldn’t even build the kind of products we wanted to build without changing the way our company operated. Since then, we’ve continued to evolve our internal culture to level-up alongside our own AI systems. Not only is this approach helping us rapidly innovate in technology, but—more importantly—it led us to develop the humanist principles necessary to take it further!

(Alex Champandard, co-founder of creative.ai, presenting the company’s invited talk at “AI & Society” in Tokyo.)

This article is based on creative.ai’s invited talk at “A.I. & Society”, delivered recently in Tokyo, and describes the motivation and story behind our company culture.

Watch the video recording of our invited talk here. (17 minutes)

Many of the graphic designers and visual artists that creative.ai works with wonder about the future. They wonder if we’re building a machine with a big red button that says “AUTOMATE CREATIVITY”, and what would happen if someone pressed that button... Are we heading for some kind of cartoon-like moment where we hit the ceiling and slide down the wall? Who knows! That’s the question everyone has, and it’s probably why you’re reading this today...

But we’d like to flip the question around: instead of asking what the impact of technology on society will be in the future, we’d like to ask what society does to technology. What has been the impact of our culture and societal values on technology?

We realize this is the complete opposite perspective to most of the conversation online, and to many people at the “A.I. & Society” event too. But we hope it stimulates some neurons and gets you to think about the problem from a different angle.

After a quick survey of our followers on social media, we found that 2 out of 3 people in the AI community and the IT industry in general believe that machines will have strictly superior abilities compared to humans at some stage in the future — both intellectually and creatively. That’s two out of three! (67% also happens to be the proportion of Twitter bots among our followers, but that’s likely just a coincidence.)

As a team at creative.ai, 70% of us think that humans will always have superior intellectual and creative abilities to machines. Machines may be more efficient or effective at particular tasks, but humans will always be able to do the superset of those abilities. We believe humans will always have the upper hand, so we’re coming at this situation from a different perspective.

At creative.ai we think of Artificial Intelligence as a technology tree, rather than an exponential curve of doom. It’s very much like playing a game of Civilization, where you have different branches to pick and many options to choose, mix and match. Some of the techniques we used in the past may not have been funded well enough, or we didn’t have the hardware or the understanding as a society to pursue that research…

Now we have the choice to be able to go back into the technology tree and decide what we want to achieve. What do we want to accomplish? How do we want society to turn out? We can jump back and make decisions according to these goals, given the extra perspective we have now.

For creative.ai, two of our founding pillars and the inspiration for the company are:

  1. Augmentation & Interaction — when you work on creative things, that tactile feel and playful interaction is very important. This goes back to the work of Douglas Engelbart in the late 1960s. We’re big fans of his work and are building on his accomplishments from early in this technology tree.
  2. Patterns & Generative Life — when building living systems, almost like organisms, they emerge from patterns. This is based on ideas Christopher Alexander has developed since the 1970s, which inform our design philosophy.

These are our own two guiding lights that we’re basing our approach on.

We also believe very strongly that the Team—or the Organization—equals the Product. That’s Conway’s law: whatever patterns are present in your organization will be reflected directly in your product. So any advantages, opportunities, disadvantages, quirks, and politics you have in the company will translate directly into the product you’re building.

At creative.ai, we’re firm believers in this law, but we’ve also taken the opposite approach, which is called the Inverse Conway Manoeuvre. It means designing the product you want to build, then structuring your organization so that the product emerges as designed. We’re very mindful about working on both of these, levelling them up side-by-side.

We’d like to take you through a short history of creative.ai. It’s a very young company, relatively speaking, but we’ve been through several different mindsets, reminiscent of The Industrial Age, then The Information Age, and a more humanist age coming up that we call The Creative Age.

For each of these mindsets, everything affects the whole company and the whole product. There’s AI and design involved everywhere, from the culture to legal, including messaging and communication. Everything is inter-related, so each one of these things affects the others. This article will dig into each of them…

When we began as a startup, we didn’t think like a big corporation; that’s the advantage of a small company. But we inherited lots of external pressure and expectations, so we fell into traps of the corporate mindset that were not beneficial for us.

Looking at it from a legal perspective, ownership is control. As founders, we initially had all of the ownership, with some allocated to the team. Then, over time, the investors incrementally buy more and more of the company as it grows and goes through multiple rounds of funding. As shareholders, whether founders or investors, we’re basically a small group of people that have control over the company.

We don’t think this structure, even when you have a board of directors, is intrinsically ethical. A few people are in control of what our A.I. technology can do and what its impact on society is. We’ve talked a lot about addressing this, for example with ethics committees, but since this ownership and control structure is not ethical in itself, we think that’s a bit like putting lipstick on a pig. (We’ve been redesigning this accordingly, as you’ll see below.)

The ownership structure reflects itself directly in the team too, where it’s usually set up as a command-and-control structure. The investors love to hear what your roles are: who is Chief Creative Officer? Chief of Technology? Science Officer? We had these different labels that we gave ourselves; even though we were a small company, they were expected of us. In the future maybe we’ll have a Chief AI Officer with an army of Machine Learning engineers reporting to them ;-)

This setup creates a lot of tension, both up and down the hierarchy between “leadership” and the team itself, and between separate branches of the hierarchy—in particular if you separate design and technology!

People tend to be frustrated and dissatisfied when working in these kinds of environments, no matter the company size. It’s coming from a place of power and control, which affects the people on the team and their mindsets. Everyone spends more time on internal company politics (we call this “Job 2”, the one you’re not hired for), unable to affect things outside of their small area of influence in the hierarchy.

What does design do? Is it a separate branch from technology? Do designers just write documents as large specifications and throw them over the fence? This isn’t a very participatory culture; not only does it produce worse results, with fewer people bought into the designs, but it also causes a lot of sadness…

Looking at the statistics for the U.S., 18% of employees are actively sabotaging the projects they are on, and 52%, that’s over half of all employees, are retired on the job! Are these the kinds of emotions you want in the people building or using the next generation of A.I. systems?

These are direct consequences of the power structure that goes on in almost all companies today, as a reflection of the ownership structure and the hierarchy that goes with it.

At creative.ai, we wanted to change all this, but we inherited a lot of baggage from the outside. People defined us as a team of engineers solving creativity. This is very much a trans-humanist message, where we come in on a shining white horse trying to save humanity as a whole by solving creativity.

Actually, there’s nothing to be solved; creativity is a process and doesn’t need a solution to be found… and engineering is the least important part of it.

As a product, we talked about generative pipelines which resonated with some investors and big companies as well. The idea is that there’s an overall A.I. system that controls the generation of things end-to-end, then we put human-shaped holes inside this pipeline wherever there were problems we couldn’t “solve” with technology or where we needed more data.

This was a hybrid human/machine system, but it was not a humanist system. The people were basically cogs in a larger process that we were just using to gather data. It’s a form of centralization of power, just like the ownership and the hierarchy, but in this case it’s in the machine. Words like “cloud computing” and “platform” remind us about this power dynamic…
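
To make that concrete, here’s a minimal sketch of what such a pipeline could look like in code. Everything in it (the Stage and HumanStage classes, the run_pipeline helper) is a hypothetical illustration of the concept, not our actual implementation:

```python
# A minimal sketch of a "generative pipeline" with human-shaped holes.
# All names here (Stage, HumanStage, run_pipeline) are hypothetical
# illustrations of the concept, not creative.ai's actual system.
from typing import Callable, List


class Stage:
    """A fully automated step in the end-to-end pipeline."""
    def __init__(self, name: str, transform: Callable[[dict], dict]):
        self.name = name
        self.transform = transform

    def run(self, artifact: dict) -> dict:
        return self.transform(artifact)


class HumanStage(Stage):
    """A 'human-shaped hole': a person fills the gap the technology
    can't, and their answer is captured as training data."""
    def run(self, artifact: dict) -> dict:
        result = self.transform(artifact)  # the human does the work
        print(f"[{self.name}] captured example: {artifact} -> {result}")
        return result


def run_pipeline(stages: List[Stage], artifact: dict) -> dict:
    """Drive the artifact through every stage, machine or human."""
    for stage in stages:
        artifact = stage.run(artifact)
    return artifact


# Example: an automated upscaling step followed by a human curation step.
upscale = Stage("upscale", lambda a: {**a, "resolution": "2x"})
curate = HumanStage("curate", lambda a: {**a, "approved": True})
print(run_pipeline([upscale, curate], {"image": "draft.png"}))
```

Notice how the person is just another callable the pipeline invokes: that’s exactly the cog-in-the-machine dynamic we describe above, and what we moved away from.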

On the technology side, everyone talks about Big Data and Deep Learning. For the same reasons, they’re not the kinds of techniques we primarily pursue, because with them every single individual is less important. You sacrifice every individual for the sake of the average; so if you’re not average, you’ll get sub-optimal performance.

But nobody is average. In the creative space, everyone wants to stand out! These techniques, as defined today, didn’t work for us; we couldn’t rely on them as our primary technology to empower individual creativity and agency. We have lots of data, but we don’t think of it this way. Even the concept of a Server Farm is a centralization of power, but you get the idea…

As a team, we started out firmly in The Information Age with a more distributed mindset, which affected how we approached things internally.

As you saw above, the ownership structure has a strong impact on the entire company and its culture. From a legal perspective, however, we were not “distributed”: we were not a cooperative with equal ownership, except at the very start when the company was founded. After that, we inherited a lot of legal structures and expectations.

Trying to keep things as fair as possible, we give out a lot of shares to employees and have deterministic salary models too. We also wrote exit clauses that specify constraints on our technology, should something out of our control happen to the company...

The thing we benefited from the most was having a distributed team from all around Europe. We were able to hire a dream team of programmers, designers and architects, in A.I. and M.L., frontend and backend, each of whom fit well with our company culture so they could onboard quickly and efficiently.

We found that consultants and contractors were the most keen to join forces with us. But we became very aware of this power tension between the company and the individuals, and we didn’t want to turn into a Gig Economy or the “Uber for AI”. We were very mindful of this and drifted out of this model.

When you have this power relationship between the company and its team members, the tension is like a competition and also affects the individuals themselves. Conflicts can emerge between the design and the technology side of building the same AI.

We got around this by using popular self-management techniques and more transparency, which helps a lot! But these are remnants of the Industrial Age mindset, where the hierarchy is flattened out and the network must police itself: trying to control everything going on, making sure everyone shares all the time. There’s no trust there.

When people have more agency in a network, being able to decide what they work on and who they work with, it creates confusion and uncertainty about what they should be working on. Should I work on this new feature of the AI product, or design the UI for its frontend? Should I collaborate with this team, or that one over there?

So instead of being angry and frustrated with the hierarchy and lack of freedom, team members become more afraid of making missteps. From school and work life, most people have inherited a mindset of being in a hierarchy and when that structure is gone, most don’t know what to do with the freedom.

This fragmentation and fear also translated into the way we communicated about things. It was about our software vs. established tools, about our humanist approach vs. others that want to automate creativity. For example: “if you use our tool then we’ll use more augmentation.” We defined ourselves with this message so we could stick together and face the uncertainty of the future.

The product in a distributed mindset is very fragmented; you have lots of applications on your mobile phone for style transfer or photo manipulation, etc. We fell into this trap too by considering many applications to build on top of our platform. Passing data between these different applications is difficult and doesn’t make for a very good experience. These are side effects of having a distributed culture, for example in a company where product teams don’t collaborate as an integrated whole towards a bigger vision.

On the technology side, we’re building what we call the Generative Web, where algorithms can run on the server on behalf of designers, or on their own computers or mobiles as appropriate. The computation happens wherever it’s most suitable, and the data gets passed around accordingly.
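
As a rough illustration of that placement decision, here’s a minimal sketch. The Job fields and heuristics are our own illustrative assumptions, not a published creative.ai API:

```python
# A minimal sketch of the placement decision behind the "Generative Web".
# The Job fields and heuristics below are illustrative assumptions,
# not a published creative.ai API.
from dataclasses import dataclass


@dataclass
class Job:
    model_size_mb: int    # size of the weights that would need shipping
    interactive: bool     # does the designer need low-latency feedback?
    device_has_gpu: bool  # capability of the designer's own machine


def choose_backend(job: Job) -> str:
    """Pick where a generative algorithm should run."""
    if job.interactive and job.device_has_gpu:
        return "client"   # keep the feedback loop tactile and immediate
    if job.model_size_mb > 500:
        return "server"   # too heavy to download to a phone or laptop
    return "client" if job.device_has_gpu else "server"


# Example: a style-transfer preview while the designer is sketching.
print(choose_backend(Job(model_size_mb=40, interactive=True, device_has_gpu=True)))
# -> "client"
```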

This kind of infrastructure is one of the biggest benefits of The Information Age mindset, making the most of the internet and the many hybrid devices that are connected together.

As a company, we’ve found ourselves most recently moving into this more collective mindset, which is new for us. We pulled together lots of different approaches about this, and are very curious to see where it will take us in the future. (We call this The Creative Age.)

In this mindset, it all starts with the people. We assume that everyone on the team is mature, rational and responsible, and has a purpose. Team members know why they’re in the company and know what they want to achieve in life. It also means they have an “Internal Locus of Control” and feel like they have agency over their environment.

This changes the internal culture significantly. It creates a culture that’s based on Trust, since you can rely on each other to behave like mature, rational adults. You don’t need radical transparency to make sure nobody’s making mistakes; you can rely on team members asking for help when they need it, and sharing often because they know it helps everyone learn. If someone does make a mistake, there’s a culture of Respect—or maybe even Love—to make sure the problem gets resolved in an appropriate fashion.

This culture also affects the way the company addresses the team: moving more into coaching to help everyone grow, supporting and encouraging personal alignment, and offering lifelong employment as long as no foundational rules are broken. This helps create a space that’s conducive not only to growth and learning, but to being more creative!

This is a great environment for building innovative A.I.-based products, because it lets the team self-organize dynamically to tackle problems that would be difficult to solve within a rigid structure. The kinds of things we’re exploring, nobody else has explored before, and we need the space for self-organization so everyone can come together and complement each other’s skills.

We picture it like a band. As soon as there’s a tune for a specific milestone or deadline, a demo or client project, or a presentation that needs to be prepared, people with the necessary skills can just come together like a band—then go back to regular jamming or practicing, or join another project.

This approach makes self-organization faster and very dynamic. It’s not a fixed network structure or hierarchy; it’s more like a living organism that’s pulsating and vibrating to a common beat. It’s quicker to respond to external changes too.

We also think of the legal side, the control of the company, as a shared performance, in this case with four different voices: the team, the community, the clients/partners, and the investors. Each has their own style, tone and message to bring into this band that helps steer the company as a whole.

This makes for a much more ethical structure, since there’s not a single mindset (profit) dominating the Board of Directors. Instead, it’s the shared performance of a very diverse set of voices that come together in unison—in this case, in favor of the common good. (This structure is sometimes called Fair Shares, and we’re currently looking into it with our partners.)

As we move into this more humanist phase, this also affects how the product works. The product is no longer a rigid tool that we hit things with, it’s an instrument that you play. Since it’s an A.I.-amplified instrument that learns how you play, it becomes a feedback loop in the cybernetic sense.

You work perfectly fine on your own as a creator, and this instrument can also do things on its own. But when you put the two together, the sum is greater than the individual parts. This is the core theme of augmentation.

It also reflects in the way we communicate about our product. The messaging is about taking our A.I. instruments and putting them into groups to augment and improve those groups. This helps empower everyone on those teams with more agency and a stronger sense of mastery over their creative process.

The technology is equally fascinating, and worth a whole other article! It’s all about interaction: how do creators interact together? How do we build our technology into instruments that are easy to interact with? How can these instruments plug together modularly and come together in jamming sessions and creative environments where everyone contributes their own perspectives and skills?
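
To give a feel for what “plugging together modularly” could mean in practice, here’s a minimal sketch under our own assumptions: the Instrument protocol and the jam() helper are hypothetical names for illustration, not our actual API.

```python
# A minimal sketch of instruments plugging together modularly in a jamming
# session. The Instrument protocol and jam() helper are hypothetical names
# for illustration, not creative.ai's actual API.
from typing import List, Protocol


class Instrument(Protocol):
    def play(self, piece: dict) -> dict:
        """Take the shared piece, contribute to it, and pass it on."""
        ...


class ColorHarmonizer:
    """An A.I. instrument suggesting a palette for the piece."""
    def play(self, piece: dict) -> dict:
        piece.setdefault("palette", []).append("complementary blues")
        return piece


class LayoutSuggester:
    """Another instrument, contributing a layout suggestion."""
    def play(self, piece: dict) -> dict:
        piece["layout"] = "rule-of-thirds grid"
        return piece


def jam(instruments: List[Instrument], piece: dict) -> dict:
    """One round of the session: every instrument adds its own voice."""
    for instrument in instruments:
        piece = instrument.play(piece)
    return piece


print(jam([ColorHarmonizer(), LayoutSuggester()], {"title": "poster draft"}))
```

Because every instrument exposes the same small interface, human and A.I. contributions compose freely, and any instrument can join or leave the session at any time.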

To conclude, we think there’s a direct relationship between how you structure your organization and the product. If you want to have a positive impact on society you should start at home with the organization itself and rethink everything from the ground up. We’ve done so very mindfully by taking the organization’s principles and levelling up the product accordingly, but sometimes the product was ahead of the organization and we had to level up the team as well.

By doing this, we’ve likely gone outside the comfort zone of most managers and leaders. But seeing how the team and potential future hires resonate with our culture, we know that in the future the best performers and the best teams will be the ones that really engage with these ideas and resonate with the projects they’re working on.

With all that said, if this message resonates with you and you’d like to work with us in any way, whether on creative topics or AI-related topics, we’d love to hear from you!

If you’ve read everything this far, we’re sure our paths will cross again on our journeys into The Creative Age. See you soon… #⚘

Watch the video recording of our invited talk here. (17 minutes)
