On AI As Architecture Of Choice

Part I — How human choices and values shape human-machine relationships

This piece is the first in an upcoming series on how AI is incorporated into our lives by companies and startups. The purpose is simple: bypass vague reflections on AI ethics and focus on how the companies that work on AI see our future lives and organisations.

Disclaimer: this piece focuses on Satya Nadella’s (Microsoft’s CEO) vision of AI in human-machine interactions, and I happen to work at Microsoft France. I simply chose it as my first anchor because it is a most original and interesting case of a strategic and ideological corporate pivot. The analysis is entirely my own and has nothing to do with corporate communication. These reflections are based solely on material publicly available on the Internet, and there is not a single word about a Microsoft product.


Three years ago, facing a breathtaking view in the sole coworking space in Matera, Italy, I was finishing my first long-form piece about humans and machines, Of cooperation between men and machines, for a p2p approach to collective intelligence. I was concerned about the meaning of our work once AI had conquered one human capability after another. How could humans and algorithms become complementary? How could technology help trigger collective work? Why do the best decisions and creations stem from communities relying on diversity? While I was still into the evolutionary stuff (the noosphere, the betterment of humanity and similar theories I have since left behind), this work already carried what has remained my obsession to this very day: the will to understand how to design technology that makes humans more human, capable, and free.

It has been clear over the past few weeks that things are speeding up as far as AI is concerned. A Partnership on AI has been launched by Amazon, Google, IBM and Microsoft to conduct research and reflect on AI risks and the ethical choices they entail. Microsoft has just merged several divisions to create one integrated 5,000-employee AI division. The New Yorker’s latest “Money” issue features a major article about Sam Altman, one of the founding fathers of the OpenAI initiative (together with Elon Musk, Reid Hoffman and Peter Thiel), with a substantial part dedicated to his thoughts on AI. That the tech archipelago will obsessively be thinking and talking about AI in the coming year(s) is beyond doubt. How, and what for?


Beyond good and evil

With the notable exception of Elon Musk and intellectuals such as Tim O’Reilly, few tech enthusiasts have demonstrated a sincere concern regarding the possible substitution of humans by robots and, to be more specific, of human labour and skills by AI and machines. What’s more, reflections about AI, data analysis and machine learning have seemed mostly locked into polarized positions: the pros, with their dreams of singularity and their celebration of the day DeepMind won at Go; and the cons, the technosceptics (Morozov and Carr, to name a few) pleading for the return of public authority to “Silicon Valley” (by now more a concept than a place). Being pro or con AI makes no more sense than being pro or con technology, for at least three reasons: AI is everywhere, organically incorporated into the value chains and organisations that produce goods and services we cannot live without; there is no clean cut between what we call human and artificial intelligence, since AI is basically a protocol; last but not least, humankind’s curiosity would never allow us to drop such a fascinating development of our knowledge. Yet to avoid the fate of the Internet, which has evolved into something very different from, even antithetical to, its early days, we had better ask the right questions right now.

Arguably there are a lot of them, and ethics and security top the pyramid. Yet these topics mask two questions that are just as critical but less immediate:

1. How do AI-based products and services get shaped by the philosophy and values of the company / group of people that make them?

2. Why are these factors critical for the development of relationships between humans and machines in our near future?

I’ll take the example of a company that I happen to work at, Microsoft, for two main reasons. First, because it has put a great deal of effort into developing its AI division in the past months, it has had to reveal its vision and intentions quite bluntly. Second, having dedicated some of my time to understanding the company’s vision of the future of tech, I find that Satya Nadella’s theory of how AI gets infused into software and devices stands out among the tech bulge brackets.


A lot has been written about Microsoft’s pivot and the part Satya Nadella has played in it. All this has tremendously contributed to the company regaining the most intangible and fundamental asset in the economy of attention: sexiness. But, really, sexiness is overrated, because it is ephemeral and superficial: one day you’re cool, the next you’re lame. What matters is what’s happening behind the fickle veil of fuss. Beyond business strategy and organisational restructuring, Mr Nadella has made the company relentlessly ask itself the following questions:

1. How should an organisation interact with and impact socio-economic and human structures?

2. What is Microsoft’s take on the future of the human-machine relationship, and how should it be translated into the design of everything the company makes?

Let’s examine both.


An organisation’s goal is inherently social

An organisation constantly generates externalities while simultaneously being affected by them. In theory, because all side effects are external and do not affect its performance, a company has few incentives to consider them, let alone integrate them as key elements of its mission (the story of Ford raising his workers’ salaries to create a market for his own cars is admittedly a fairy tale). At the same time, an organisation is deeply entrenched in the environment it operates in and, because it is intertwined with people, other organisations and institutions, it holds an inherently social responsibility. Because of this tension, public policies are designed to push companies to internalize their externalities through laws and regulations. But are there companies that acknowledge, without public constraint, that their social impact is not merely another layer of corporate responsibility to be put on top of (or rather under) the business KPIs, but the foundation, the spine of the company, to be fully infused into its mission? Quite a few, actually, and many of them are B Corps: Patagonia, Ben & Jerry’s, Hootsuite, and many more.

What about the tech bulge brackets? Apple’s obsession with design and craft as a mark of respect for people, Google’s belief in the Singularity: both are now part of history (I am not discussing here their actual faithfulness to these ethical standards) and both are paramount to understanding their business strategies. Microsoft, on the contrary, had long been labelled the company whose obsession with selling Office licences had undermined the vision and stamina that were once part of Bill Gates’ legacy. You could not be a credible player in AI with such a disastrous starting point.

That explains why Satya Nadella has put a great deal of effort into building solid ground for the company’s understanding of its own mission. Although to my knowledge he has never mentioned it directly, his intellectual kinship with Amartya Sen is obvious (both are Indian-born, both humanists, both deep believers in the virtues of markets and development). Concepts such as “capabilities” are omnipresent in his speeches, and the obsession with putting ethics first resonates soundly with Sen’s work (see On Ethics and Economics). This theoretical framework helps us understand his willingness to distance himself from “one-size-fits-all” discourse: a company should adapt its strategy and products to local cultures and social environments. In an interview, Mr Nadella says that when he was young and still living in India, he knew perfectly well which foreign companies were there to bring something valuable to the country, and which ones were prepared to tear it apart for the sake of profit[1]. Yet asserting Microsoft’s mission as inherently social was only the first step: necessary to correct its past deeds, insufficient to re-establish it as a credible innovation pioneer. For that purpose, an organisation needs a proper ideology, softly called a “vision”.


Everything needs to change, so everything can stay the same

The times of ideologies are seemingly behind us, swiftly replaced by rationality and scientism. There are very few companies that do assert, beyond PR and fuss, a vision for the world to come. Maybe that’s the reason for my generation’s disappointment in big corporations and our increasing passion for startups, where “vision” and “people” are arguably the sole indispensable assets. But there too, vision is often mistaken for “what the world shall look like tomorrow to make my success plausible”. Few really think or care about the real impact a business choice would have on the architecture of our future.

Giant tech companies are an exception. Alphabet, Amazon, Facebook and Tesla all have a vision. Among the four of them, two have an ideology: Alphabet and Tesla. The former has an obvious take on the singularity and the fall of traditional structures such as institutions, States and regulations. Elon Musk is obsessed with achieving autonomy from natural and economic determinisms by creating integrated ecosystems (I share most of this; who wouldn’t?). Although fundamentally different, the two have in common the will to eliminate human labour from business and consumer processes altogether. Their vision of technology is that it is entirely substitutable for human labour. Google now answers the questions you haven’t formulated yet, and Tesla cars will have to decide whom to spare and whom to sacrifice in case of an accident. The underlying goal is obviously glorious: liberating us from nasty tasks so we can dedicate our time to what we want. Of course, what the story doesn’t tell is whether we will still be able to know what we want, let alone want anything at all.

Mr Nadella’s vision of the fate of human labour is markedly different from those theories. In most of his interviews, he acknowledges that automation and organisational change are massively destroying jobs, but only to put a new kind of pressure on humans: permanent reskilling. Things are not becoming easier, but trickier: far from rendering human labour obsolete, machines’ skills remain inferior to human capabilities.

One would be a fool to abandon to machines and algorithms the kingdom of our reason, emotions and will. Yet that’s exactly what’s happening today: much like a bunch of consumers downloading the latest cool app just to find out it’s useless, we abandon to the invisible hand of tech our most precious treasure, attention. “With all this abundance of technology, what I don’t have is the attention span, to live a full life, to enjoy it”, Mr Nadella says in an interview. We are delegating our decisions and our autonomy to obscure automata whose design rules we mostly ignore. What has truly changed in the past twenty years is not fundamentally the power of tech (AI, machine learning, VR: all have been in place for a long time) but our acceptance of gradual submission to it, as if in the long term we were already doomed. But we as consumers, employees and citizens are not the only ones to blame: the ones responsible for our surrender are those who design tech to nudge us into such behaviours, predetermining the choices we are able to make.

As a consequence, the remedies don’t lie on the side of “less tech” or a techless society, but in designing tech that augments our freedom, capabilities and choices, and, most of all, in education.


Rules for AI that would augment human choices and capabilities

Tech is not neutral: the machines and educational environments we create are heavily biased, yet we perceive them as objective, simply because we ignore what kind of human choices have been infused into their design. We think that we know, while we should be aware that we know nothing. A technology embodies not only the conscious choices of its creators (which are ideological) but also what we do not see: the structural context, the subjective biases, the creators’ values, their state of mind at the moment of creation, and so on. Kate Crawford explains it brilliantly in her piece Artificial Intelligence’s White Guy Problem: “Like all technologies before it, artificial intelligence will reflect the values of its creators. So inclusivity matters — from who designs it to who sits on the company boards and which ethical perspectives are included. Otherwise, we risk constructing machine intelligence that mirrors a narrow and privileged vision of society, with its old, familiar biases and stereotypes.” We can design technology that enhances our autonomy and variety of choices. Or we can choose to make it opaque and biased in order to lure users into the behaviours we want them to adopt.
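To make the point concrete, here is a deliberately naive sketch, entirely hypothetical (the function, fields and weights are invented for illustration and drawn from no real product), of how a seemingly objective ranking quietly embeds its designers’ values:

```python
# Hypothetical sketch: a "neutral" restaurant ranking that quietly
# encodes a value choice. All names and weights are invented.

def match(restaurant, query):
    """Crude relevance: fraction of query words found in the description."""
    words = query.lower().split()
    text = restaurant["description"].lower()
    return sum(w in text for w in words) / max(len(words), 1)

def rank_restaurants(restaurants, query):
    """Return restaurants sorted by a composite score."""
    def score(r):
        relevance = r["rating"] * match(r, query)
        # The design choice hides here: partner venues get a silent boost.
        # The user sees an "objective" list, but part of the choice has
        # already been made for them.
        promotion = 0.3 if r["is_partner"] else 0.0
        return relevance + promotion
    return sorted(restaurants, key=score, reverse=True)
```

Nothing in the interface reveals the boost; making that weight visible, or letting the user switch it off, is what transparency about the architecture of choice would look like.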

Here I want to make room for the work of a philosopher and motorcycle workshop owner, Matthew Crawford. In his book The World Beyond Your Head, Crawford unravels the strategy behind tech design choices: a war for our attention. Only through sustained attention (to the world, to others, to ourselves) can we access the world and the other, and only through genuine experience (physical, sometimes rough, unpleasant) can we sustain attention. To steal our attention, tech makers (he speaks broadly of entertainment designers) therefore seek to shield us from experience by building “layers of representation” that limit the diversity of our choices and prevent us from being aware of the kind of choices technology has already made for us. Do you remember the last time you yelled at a device because it was too slow, obscure or unresponsive? How uncompassionate are you when an app doesn’t live up to your standards? Now think about the wealth of patience our grandparents had to deploy to make much simpler technology work. Again, it’s not only the technology that has changed, but also, in a very infantile way, our willingness to tolerate its flaws.

I am not sure whether Satya Nadella has read Matthew Crawford’s books (and the latter was certainly not thinking of Microsoft when arguing for better tech choices), but I cannot help drawing a parallel when I hear Mr Nadella state the following rules for AI:

1. It should augment human capabilities and allow for more experience of, and contact with, the real world (make us makers).

2. It should be transparent enough that we can understand the architecture of choice that has been infused into the tech, in order to take back control when needed.

3. It should openly integrate a set of values relying mostly on empathy and diversity.


“It is wise to have decisions of great moment monitored by generalists. Experts and specialists lead you quickly into chaos.” (Frank Herbert)

These rules obviously only make sense if they are infused into the products and services a company designs and manufactures (this is actually the direct consequence of one of the three rules). What does it mean, practically, that Satya Nadella wants to focus on “technology that partners with humans”?

One can find the beginning of an answer in what Mr Nadella calls “conversation as a service”, an interaction between humans and machines that should take place primarily through human language, not code. To increase the capabilities of all, AI should be able to speak the language of human intelligence and respond to its needs. First and foremost, this conversation should be initiated not by the AI but by the human: it should not answer in our place before we even know we want to ask a question. For instance, when I browse online to find a restaurant, technology is more “honest” when it doesn’t immediately give me what I haven’t asked for (whether the place is animal-friendly, for instance) but lets me formulate the question first. Generally, AI should be there when I decide I need it, and not the other way round. We should not turn the whole world into coders and blame those who can’t or don’t want to learn how to code for being obsolete.
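What could such a human-initiated conversation look like in practice? Here is a minimal, entirely hypothetical sketch (the loop and the toy answer function are my own illustration, not any actual product or API): the assistant never speaks first, applies only the filters the human asked for, and discloses the criteria it used.

```python
# Minimal sketch of a human-initiated "conversation as a service" loop.
# The assistant waits for the human, applies only requested filters,
# and surfaces the criteria it used. All names are invented.

def find_restaurant(query):
    """Toy answer function: applies only the filters the user asked for."""
    criteria = []
    if "animal-friendly" in query.lower():
        criteria.append("animal-friendly")
    # No unrequested filters are silently added on the user's behalf.
    return "Here are places matching your request.", criteria or ["none"]

def assistant_session(answer_fn):
    """Run a dialogue in which every exchange is initiated by the human."""
    while True:
        query = input("> ")  # the human always opens the exchange
        if query.strip().lower() in {"quit", "exit"}:
            break
        answer, criteria = answer_fn(query)
        print(answer)
        # Transparency: expose the architecture of choice that was applied.
        print(f"(criteria applied: {', '.join(criteria)})")

# Example: assistant_session(find_restaurant)
```

The design choice worth noting is the direction of initiative: the loop blocks on input() and does nothing until asked, the opposite of a recommender that pushes suggestions before a question has even been formulated.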

Productivity is a matter of attention allocation; it is not a measure of the tasks we achieve per hour. This “conversation as a service” aims at freeing humans from the need for unreasonable specialization and letting them focus where we have a definitive comparative advantage over machines: dedicating our attention to what matters most, namely our peers and the world; to “look outward; look for living principles, knowing full well that such principles change, that they develop”, in the words of Frank Herbert. A society built on such principles is anti-individualistic; it strives to create organisations and institutions of the people, for the people. Our desires, biases and sometimes counterintuitive goals are not mere flaws to be balanced or corrected by algorithms, but the very traits that define us as humans, the basis of human work and creation. AI that partners with humans would actually enhance these irreplaceable ingredients of true work and creation. It should give us the keys to being makers, indeed a community of autonomous makers, a goal that is very dear to my heart.

[1] From my own experience as a young Russian girl living in Moscow in the early 90s (so-called capitalism, in fact savage chaos), I have kept similar memories, and they still heavily shape my vision of business (with a mission), markets (with regulation) and the State (without corruption).