Building a positive future with AI: a paradigm shift where Europe can take the lead

Insights from Hello Tomorrow

Alex Terrien
Future Positive
11 min read · Nov 7, 2017

--

Intro: is “Homo-Machinalis” beneficial to humanity?

“We no longer are mere users of technology, we are now part of the technologies themselves.” This is how the all-star AI panel started last week at the Hello Tomorrow Summit, arguably the leading “deep-tech” conference in Europe. Steffi Friedrichs, a scientist by training who now works as a policy analyst for the OECD focused on biotechnology, nanotechnology and converging technologies (BNCTs), couldn’t have been clearer, and her fellow panelists agreed: technology is now so pervasive that we can no longer consider it as separate from humanity.

This struck me as a paradigm-shifting thought. While Homo-Sapiens are gradually converging and merging with machines, potentially ushering in a new stage in the evolution of our genus, Friedrichs effectively (and perhaps unwittingly) declared the age of “Homo-Machinalis” already open.

AI panel at Hello Tomorrow (from left to right): Alexandre Cadain, Laurent Alexandre, Stuart Russell, Cédric Villani, Steffi Friedrichs

With this in mind, the panel proposed solutions to ensure we build machines and AIs that are truly and unequivocally beneficial for Sapiens. Alongside Friedrichs, it gathered Laurent Alexandre, the French entrepreneur and outspoken thought leader on the potentially devastating impact of accelerating nanotechnologies, biotechnologies, information technologies and cognitive sciences (NBICs) on Europe’s economy; Cédric Villani, the world-famous mathematician and Fields Medal recipient who was recently elected to the French parliament; Stuart Russell, the AI pioneer; and moderator Alexandre Cadain, the AI XPRIZE Ambassador, founder of AI startup Anima and WEARE|FUTUREPOSITIVE member.

Concerns about AI are widespread, fuelled by apocalyptic declarations from world-leading academics and corporate leaders alike, and by some of our favorite science fiction portraying AI as competition for humanity (cementing in our imagination the idea of evil AI). Against that backdrop, the panelists’ suggestions were particularly insightful. As AI and machines become increasingly embedded in the very fabric of our lives, I wanted to share their points and briefly explore how their solutions challenge the way we need to think about AI, both individually and as a society.

Technology is politics

Laurent Alexandre opened with a single powerful message: “technological success is a POLITICAL issue, not a technological issue.” Accelerating and converging technologies will shake the foundations of society as we know it, challenging our conception of work, our relationships with each other, perhaps our purpose as a species. The economic implications are massive: the complete devaluation of skilled and unskilled labor as we know it, replaced by machines developed and operated at marginal cost. The social repercussions are just as significant. Our current political system, Alexandre notes, is nowhere near ready for this socio-economic tsunami; none of our political leaders, with a few notable exceptions like fellow panelist Villani, understands the scope of the change ahead. To address this, he proposed that scientists urgently get out of their labs and get involved with politics: those who best understand how these technologies work need to actively steer the political ship.

Reframing technology as a political issue rather than a mere economic opportunity requires us to revisit the way we educate our political leaders. Designing mutually beneficial partnerships between Sciences Po (the leading political science school in France) or l’École Nationale d’Administration (the school that produces France’s top civil servants), or their US equivalents, and newer educational bodies like Singularity University or the newly created French The Camp should be at the top of every university president’s agenda. More importantly, it requires giving technology center stage in our political discourse. Though politicians have spent the past two years focused on the opportunities brought about by technology, few are proactively raising citizens’ awareness of the implications of converging and accelerating technologies, empowering them to take ownership of the conversation, or publicly debating the frameworks we will need to implement. We need to take these steps urgently to build collective awareness and avoid sleepwalking into potentially destructive scenarios. Led by Villani, the French government just launched its first parliamentary open conversation on AI, taking place on November 14, and I’m excited to see what comes of it, but we need to do a lot more to bring citizens into the fold (more on this below).

Viewing technology through this political lens seems particularly critical for Europe. The incentives for developing AI are enormous (the value of achieving human-level AI is potentially greater than the GDP of the entire planet today), so governments, research institutions and the world’s largest companies are racing to build ever more powerful AI. In practice, however, the quality of an algorithm depends first and foremost on the amount of data it can be trained on. No company in Europe has the industrial base (the raw data) to build algorithms that can compete with those developed by the American giants Google (now Alphabet), Apple, Facebook and Amazon (commonly known as the GAFA) or their Asian counterparts Baidu, Alibaba, Tencent and Xiaomi (the BATX). Seen through that lens, technology becomes an issue of national sovereignty, not just of economic prosperity. A collaborative government-citizen push on properly regulating the industry would pave the way for Europe to take a leading role in the global conversation on AI.

Accelerating long-term research

Villani’s suggestion to create more “integrated” AI was, perhaps unsurprisingly given his background, to significantly increase our investments in long-term research. As he highlighted, even some of the most talented AI researchers in the world will tell you that they don’t always understand how their algorithms work (especially when it comes to unsupervised learning). While we understand how AI systems work to reproduce what he calls “human-based” knowledge — which in and of itself generates enormous social value by empowering people with sharper skills, opening up new possibilities — we don’t yet understand how AI might impact us in ways which we don’t currently suspect, or which are hard to imagine. Boosting research will help us find the right ways to incorporate algorithms, their rules and processes into our lives. Increasing long-term investments, however, will require a concerted effort from national governments, the European Union and Europe’s largest companies, working together on long-term horizons — not an easy task.

Most of our AI innovation will come from private enterprises, heavily incentivized to invest significant resources in R&D. Given Europe’s starting handicap (its aforementioned lack of industrial base), producing competitive AI will partly rely on our ability to enable tech transfers from our world-beating research universities into private companies, empowering researchers and scientists to take the technologies they developed in their labs and “excubate” them in entrepreneurial ventures. I’m particularly excited about the work that organizations like Oxford Sciences Innovation (disclaimer: my partner works for them) and Deep Science Ventures are doing to encourage scientists to commercialize their research. Beyond commercialization, building leading European AI systems will require companies to find innovative ways to collect datasets that don’t rely on having hundreds of millions or billions of users. Companies like Mapillary are at the forefront of that movement, leveraging crowd-sourced images to build machine learning capabilities that can compete with the GAFAs and the BATX. The early signs seem positive: the number of European deep-tech startups is growing year-on-year, as are investments in deep tech, and more European cities are becoming hubs of deep-tech (and AI) talent (for more data, see slides 69–81 of Atomico’s 2016 State of European Tech).

But these positive signals tend to highlight companies with “near-term” applications that can gain traction in the marketplace and justify capital injections from venture investors. More fundamental, long-horizon research will require us to significantly increase long-term investments. It’s encouraging to see the GAFAs establishing deep-tech engineering centers across Europe, in cities like Zurich (Alphabet), London (Alphabet, Facebook, Microsoft, Twitter), Paris (Facebook) and Berlin (Apple, Amazon), since these companies can leverage their record profits to fund long-term research. Building Europe’s own capabilities, however, looks like it will require public-private-academic partnerships financing research labs at an unprecedented scale, and I have yet to see a significant European initiative built to achieve this.

Encoding for uncertainty

Building on Villani’s point, Stuart Russell offered a technical key to Villani’s initial challenge: what happens when we build AI without fully understanding the impact these technologies might have on us? Russell proposes “designing machines in such a way that constitutionally the only thing they want is what we want.” He calls those machines “human-compatible AI” and proposes three rules to govern their design:

  1. Each machine has one objective: maximize the realization of human values. In other words, their purpose is to make us happy.
  2. The machine is initially uncertain about what those values are.
  3. The machine can learn about human preferences and values through informational clues in our behavior: the choices we make indicate, however imperfectly, our underlying preferences.

I find rule #2 the most intriguing and critical: hard-coding our values into an AI would impose a single view of the world that machines would then implement. That isn’t reflective of reality. Not only do people around the world hold different values, but many of our core values are never made explicit, and values can evolve over a lifetime. Our AI machines need to account for this.
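To make rules #2 and #3 concrete, here is a minimal sketch in Python (my own illustration, not anything shown on the panel) of a machine that starts uncertain over a set of invented “value hypotheses” and updates its belief from observed human choices, using a standard Boltzmann-rational choice model. Every name and number in it is made up.

```python
import math

# Toy sketch of Russell's rules #2 and #3: the machine starts uncertain
# over candidate value hypotheses (rule 2) and updates its belief from
# observed human choices (rule 3). Hypotheses and utilities are made up.

# Each hypothesis assigns a utility to each option the human might pick.
HYPOTHESES = {
    "values_leisure": {"work late": 0.1, "go home": 0.9},
    "values_career":  {"work late": 0.8, "go home": 0.2},
}

BETA = 2.0  # assumed consistency of the human: higher = less noisy choices

def likelihood(hypothesis, choice, options):
    """Boltzmann-rational model: better options are chosen more often."""
    utils = HYPOTHESES[hypothesis]
    weights = {o: math.exp(BETA * utils[o]) for o in options}
    return weights[choice] / sum(weights.values())

def update(belief, choice, options):
    """Rule 3: Bayesian update of the belief after one observed choice."""
    posterior = {h: p * likelihood(h, choice, options) for h, p in belief.items()}
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

# Rule 2: a uniform prior -- no values are hard-coded into the machine.
belief = {h: 1.0 / len(HYPOTHESES) for h in HYPOTHESES}

# The machine watches the human choose "go home" twice.
for _ in range(2):
    belief = update(belief, "go home", ["work late", "go home"])

print(belief)  # mass shifts toward "values_leisure", but never to certainty
```

The toy model shows rule #2 at work: the machine’s belief moves with the evidence, but it never collapses into a single hard-coded worldview.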

Russell also invoked what he calls the gorilla problem: imagine gorillas discussing whether it was a good idea for their ancestors to give rise to humans. You would imagine they’re not too happy about Homo-Sapiens coming into existence. If that is how we would end up feeling about super-intelligent machines, then we should stop AI development now, before it’s too late.

While it may be hard to infer human values and preferences from a single point in history, historical data going back thousands of years might offer richer clues to what makes humans happy. As Russell reminded us, machines will very soon know how to read and understand what they are reading, and very soon after that they will have read everything humans have ever written. By observing behaviors over the course of history, and analyzing which of them advanced overall human happiness and which pushed humanity away from it, machines could potentially identify which broad values to follow (and which to discard if they serve too narrow a group). If machines somehow adopt a destructive behaviour, the historical record should help them recognize the damage to happiness. Even if machines don’t know exactly what they are doing wrong, they will understand that they are doing something wrong, giving them an incentive (as per rule one) to turn themselves off (or allow humans to turn them off).
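To see why that incentive falls out of Russell’s rules, here is a back-of-the-envelope sketch (again my own illustration, with invented numbers) comparing the expected human value of acting unilaterally with that of deferring to a human overseer who can hit the off switch. It is close in spirit to the off-switch arguments Russell and his collaborators have made elsewhere.

```python
# Back-of-the-envelope sketch (mine, not the panel's) of the shutdown
# incentive: a machine unsure whether its planned action helps or harms
# humans compares acting unilaterally with deferring to a human overseer
# who can switch it off. All numbers below are invented for illustration.

p_good = 0.6            # machine's belief that the planned action helps humans
value_if_good = 1.0     # realized human value if the action helps
value_if_bad = -10.0    # realized human value if the action harms

# Acting unilaterally: the machine gets whatever the action yields.
ev_act = p_good * value_if_good + (1 - p_good) * value_if_bad

# Deferring: the human, who knows their own values better, lets the
# action proceed when it is good and switches the machine off when it
# is bad, in which case nothing happens (value 0).
ev_defer = p_good * value_if_good + (1 - p_good) * 0.0

print(f"act unilaterally: {ev_act:+.2f}")   # 0.6*1.0 + 0.4*(-10.0) = -3.40
print(f"defer to human:   {ev_defer:+.2f}") # 0.6*1.0 + 0.4*0.0    = +0.60
# Because its only objective is realized human value (rule 1) and it is
# uncertain what that value is (rule 2), the machine prefers to defer.
```

The more certain the machine becomes that its action is good, the smaller the gap; the incentive to accept oversight comes precisely from the uncertainty that rule #2 builds in.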

Most interestingly, that process might even teach us things we don’t currently know or acknowledge about ourselves, as it forces us to make explicit preferences we had never expressed before (for example, “we like having our own arms and legs”). Contrary to popular belief, especially in such a divisive political climate, we’ll likely learn that we have more in common in our preferences for our future lives than we imagine today.

Inclusive development

Russell’s point in many ways offered an algorithmic answer to one of the most important insights from the panel, which came from Friedrichs, who opened it. Perhaps unknowingly, she laid out a vision for a path to building AIs that empower humanity: “[since] society [and humans] is now an integral part of the tech we are developing, [we need to be] much more inclusive” as we develop those technologies. The key word here is inclusive.

We cannot afford to build a future that hangs by a thread at the whim of a few technology billionaires. The expression of human values, both those we have in common and the preferences that make us all different, is becoming ever more important to integrating technology in a way that empowers us, and that diversity of thought needs to be reflected in how we do it. This is where we come back to mobilizing citizens in a responsible way and engaging them in the political process of defining frameworks for technological advancement. We need more opportunities for citizens who bring different perspectives to the table to get involved in the conversation. And in opening those opportunities, we need to make a proactive effort to reach segments of society that won’t naturally be drawn to them. Here, companies like Make.org can make a dramatic difference in our ability to reach far and wide and take the pulse of citizen preferences.

A recent Make.org campaign to engage citizens in the French political debate

As Norbert Wiener, the famous mathematician and father of cybernetics and modern control theory, said back in 1960: “we had better be quite sure that the purpose put into the machine is the purpose which we really desire.” That statement rings truer than ever today.

Conclusion: what’s our role in this?

Being at the forefront of technological development is a privilege that, in our minds, comes with a responsibility, and we need to think about what we can do to ensure we are indeed building positive futures.

I agree with Laurent Alexandre that politics needs an intravenous infusion of sharp scientific minds. Politicians seem to be waking up, as demonstrated by the EU General Data Protection Regulation (GDPR), which will come into force roughly 200 days from now. But while some of these regulations are important, others risk further curtailing Europe’s ability to innovate. Some GDPR clauses, such as the Right to be Forgotten or the Right of Access, make sense from a privacy perspective and don’t have significant downsides. Others, however, like the Right not to be Subject to an Automated Decision or the controversial and ill-defined “Right to Explanation”, may curtail innovation in machine learning and deep learning, depending on how they are implemented and enforced in court. Beyond politics, we’ll need to figure out how to encourage long-term investments in these areas when neither our governments nor our leading corporations have the resources to compete with the GAFAs and the BATXs.

From an investment perspective, venture capitalists and other technology investors need to own their share of responsibility and start weighing societal value alongside financial value when they make investment decisions. My co-founder Sofia Hmich and I seek out companies that take ownership of their societal responsibility, led by founders who, no matter how technical and ambitious, are driven by deep-rooted intentions to positively advance society. These founders manage their companies for the long term by anchoring those intentions in the company’s vision, governance and exit strategy.

But perhaps the most effective solution is Friedrichs’ suggestion: inclusion by design. The implications of developing ever more intelligent technology are so large in scope, sometimes so difficult to understand, and so fundamentally transformative for society that too many people, overwhelmed by a feeling of helplessness, choose not to take the time to think about how to build this integrated future. This may be the biggest societal risk of them all: a growing chasm between a small minority who take the defining decisions and a large majority who, too confused to get involved, sail passively into the future. It’s on all of us to prevent that from happening.

Join us at FuturePositive Coffee

With a few other actors, we’re taking a first step toward bringing together, in an open format, anyone interested in taking a more proactive role in leveraging accelerating technologies to shape a prosperous, equitable and inclusive future: we’re launching WEARE|FUTUREPOSITIVE. If this sounds like your jam, join us at our first PositiveCoffees in Paris on November 14 and London on November 28.
