Vyacheslav Polonski: ‘Solve our technology problem with humanity’

Published in Tech’s Good · 8 min read · Sep 4, 2017
Vyacheslav Polonski, Network Scientist at the University of Oxford, and founder and CEO of Avantgarde Analytics

We spoke with Vyacheslav Polonski, a Network Scientist at the University of Oxford, and founder and CEO of Avantgarde Analytics. He researches human dynamics and complex social networks at the Oxford Internet Institute. At Avantgarde Analytics, he advises senior political leaders on algorithmic election campaigns and AI governance. Vyacheslav has previously studied at Harvard University, Oxford University and the LSE.

He is a frequent speaker at the World Economic Forum and is actively involved in its Global Shapers Community, where he serves as the Outgoing Curator of the Oxford Hub, was selected for the WEF Expert Network on Behaviour Science, and was featured as one of the Top 100 Visionary Leaders Under 30 by Real Leaders Magazine. His research and commentary have been highlighted in the media, including TechCrunch, Newsweek, Forbes, Fortune, the BBC, Bloomberg and The New York Times.

What is Avantgarde Analytics’ mission?

Our mission is to reinvigorate democracy with artificial intelligence. We are a team of data enthusiasts who met at Oxford University and embarked on a journey to develop state-of-the-art machine learning solutions for politics. Our first area of focus is algorithmic campaigning, helping candidates for office build more meaningful and truly personal relationships with each individual voter.

What inspired you personally to get involved in this work?

Throughout my career, I’ve always enjoyed going in new directions and trying things that others haven’t tried yet. So it was during my PhD studies that I started experimenting with different ways to apply my research to benefit society.

A catalysing moment was meeting PM David Cameron and ten of his cabinet ministers in April 2016. The meeting was convened by the World Economic Forum, and I was fortunate to be invited as a Global Shaper. Coming just one month before the EU referendum, it was an opportunity for me to present my research and urge the Remain camp to change its campaign strategy.

At the time, I was leading a large-scale research project that investigated the social media side of the Brexit campaigns. Analysing millions of data points, we were one of the first to point towards the powerful effects of polarisation and echo chambers in the debate. The Leave camp had routinely outmuscled its rival, with more vocal and active supporters across almost all social media platforms. The Brexit message was more intuitive and emotionally charged, which contributed to its momentum online. The Remain camp had to radically change course and step up its social media game.

But our research could do little to change the minds of the other attendees, many of whom would wake up in horror at the final referendum result. Even though we had the right data insights, we failed to act upon them. This is when we realised that knowing was not enough; we had to take action.

We continued developing our technology. We continued working with political leaders. Now we are ready to deliver AI solutions for the next generation of election campaigns. There has never been a better time to be a politician, but it’s an even better time to be a machine learning engineer working for a politician.

What is the main challenge you think technology presents to society?

A lot of people are excited about the prospect of the “singularity” as the final frontier of artificial intelligence. But clearly, there are much more pressing questions that we need to address first. As artificial intelligence expands its reach, there are people who intend to use its enormous potential for evil. In fact, there is evidence that AI-powered technologies have been systematically misused to manipulate citizens in recent elections.

One example of such manipulation is the use of political bots to spread right-wing propaganda and fake news on social media. Bots are autonomous accounts that are programmed to aggressively spread one-sided political messages to manufacture the illusion of public support. This is an increasingly widespread tactic that attempts to shape public discourse and distort political sentiment.

In addition to shaping online debate, AI can also be used to target and manipulate individual voters. During the U.S. presidential election, an extensive advertising campaign was rolled out that targeted persuadable voters based on their individual psychology.

The problem with using AI in political campaigns is not the technology itself, but rather the covert nature of its use and the targeted messages that preyed on people’s psychological vulnerabilities. Paranoid voters received ads with mostly fear-based messages, while people with a conservative predisposition received arguments grounded in tradition and community.

We have reached a point where technology can be used to manipulate people’s emotions and this is a very disquieting trend. A representative democracy depends on free and fair elections in which citizens can vote their conscience, free of manipulation. Yet AI and related technologies threaten to undermine fair elections if they continue to be methodically used to manipulate voters and promote extreme alt-right narratives.

How are you trying to do better than other people working in this space?

There are a number of innovative companies working in this field. What we all have in common is the belief that all is not lost. AI itself is not harmful. The same algorithmic tools used to mislead, misinform and confuse can be re-purposed to support democracy and increase civic engagement. An ethical approach to AI can work to inform and serve an electorate.

At Avantgarde Analytics, this is precisely what we are trying to achieve. Our cross-disciplinary teams of experts work directly with political clients to run better AI-powered campaigns in an ethical and legitimate way.

We can, for example, programme political bots to step in when people share articles that contain known misinformation. They could issue a warning that the information is suspect and explain why. This could help to debunk known falsehoods, like the infamous article that falsely claimed the pope had endorsed Donald Trump.
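A minimal sketch of such a fact-checking reply bot, assuming a simple lookup table of debunked URLs stands in for a real fact-check database (all names and URLs here are hypothetical, not Avantgarde’s actual system):

```python
# Hypothetical sketch of a misinformation-flagging bot. A real deployment
# would query a maintained fact-check database; KNOWN_FALSE is a stand-in.
from typing import Optional

KNOWN_FALSE = {
    "example.com/pope-endorses-trump":
        "Warning: this claim has been debunked. The pope made no such endorsement.",
}

def _normalise(url: str) -> str:
    """Strip scheme and trailing slash so equivalent URLs compare equal."""
    for prefix in ("https://", "http://"):
        if url.startswith(prefix):
            url = url[len(prefix):]
    return url.rstrip("/")

def check_shared_link(url: str) -> Optional[str]:
    """Return a warning to post as a reply if the URL is known misinformation."""
    return KNOWN_FALSE.get(_normalise(url))

print(check_shared_link("https://example.com/pope-endorses-trump"))
```

In practice, the interesting engineering lives in the database behind `KNOWN_FALSE` and in explaining *why* a claim is suspect, not just flagging it.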

We can also use AI to listen more closely to what people have to say and make sure their voices are clearly heard by their elected representatives. Despite the many digital tools available, the average voter today is neither more informed nor more engaged, and certainly does not have greater trust in politics. We think there is a real opportunity to use AI to empower voters. Based on these insights, we can deploy AI techniques that educate voters on a variety of political issues and help them make up their own minds.

People are often overwhelmed by political information in TV debates and newspapers. This is where AI can help them discover the political positions of each candidate on what they care about most. For example, if a person is interested in environment policy, we could use an AI targeting tool to find out what each party has to say about the environment.
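As an illustration of this kind of issue-based lookup, here is a toy sketch that matches manifesto excerpts to a topic by keyword overlap. The parties, statements and keyword lists are all invented; a production tool would use NLP topic classification rather than keywords:

```python
# Toy sketch of issue-based position lookup (not Avantgarde's actual tool).
# All manifesto excerpts and party names below are invented.

MANIFESTOS = {
    "Party A": [
        "We will cut carbon emissions by 40% and invest in renewable energy.",
        "We will lower income tax for middle earners.",
    ],
    "Party B": [
        "We will expand offshore drilling to secure energy independence.",
        "We will increase funding for public schools.",
    ],
}

TOPIC_KEYWORDS = {
    "environment": {"carbon", "emissions", "renewable", "drilling", "energy"},
}

def _words(sentence: str) -> set:
    """Lowercase a sentence and split it into bare words, dropping punctuation."""
    cleaned = "".join(c if c.isalnum() or c.isspace() else " " for c in sentence)
    return set(cleaned.lower().split())

def positions_on(topic: str) -> dict:
    """Map each party to its statements that touch on the given topic."""
    keywords = TOPIC_KEYWORDS[topic]
    return {
        party: [s for s in statements if keywords & _words(s)]
        for party, statements in MANIFESTOS.items()
    }

for party, statements in positions_on("environment").items():
    print(party, statements)
```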

People-focused techniques like this can help to break up echo chambers and give individuals diverse information. But at all points during the campaign, we have to ensure that personalised political ads always serve the voters and help them be more informed, rather than undermining their interests.

How are you measuring your impact?

Data is crucial to our work in delivering empirically grounded results and recommendations. This is why, every time we implement a new machine learning system, we systematically measure its accuracy and impact against conventional methods. We conduct experiments and develop predictive and explanatory models. Once we have solid evidence that a specific intervention was successful in one region, we can scale it up to more people in other regions.
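The region-by-region evaluation described above can be sketched as a simple treatment-versus-control comparison. The figures below are invented for illustration, and a real evaluation would also test for statistical significance:

```python
# Hedged sketch of comparing an intervention region against a control
# region and estimating the relative lift. All numbers are made up.

def engagement_rate(engaged: int, reached: int) -> float:
    """Fraction of reached voters who engaged with the campaign."""
    return engaged / reached

def lift(treatment_rate: float, control_rate: float) -> float:
    """Relative improvement of the intervention over the baseline."""
    return (treatment_rate - control_rate) / control_rate

control = engagement_rate(engaged=1200, reached=20000)    # 0.06
treatment = engagement_rate(engaged=1800, reached=20000)  # 0.09
relative_lift = lift(treatment, control)                  # 0.5, i.e. a 50% lift
print(f"Relative lift: {relative_lift:.0%}")
```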

Continuous evaluation and intelligent feedback ensure that we are always optimising our algorithmic campaign operations. In some projects, our goal is to be the megaphone for the messages of a specific political party. In others, we come in only to ensure a fair election climate online that does not favour any particular political agenda. In either case, we want to make sure that everything we do is informed by deep data insights.

What have your main challenges been so far?

AI has an intellectual lineage stretching back many decades. Unfortunately, there are still a lot of myths about machine learning that persist today. Many of these misconceptions stem from dystopian depictions of AI in Hollywood and inaccurate reports about AI in mass media. Ex Machina, Transcendence, Her, and I, Robot are good movies, but they have only served to fuel societal concerns about the chilling long-term effects of AI.

Elon Musk said that AI constitutes a fundamental risk to the existence of human civilization. Microsoft’s Tay bot decided to become a white supremacist after a day of interacting with Twitter users. There are fears that the automation of jobs will make millions of people unemployed. In general, people like to focus on the negative side of AI, but there are many incredible positive effects for society.

I strongly believe that AI will redefine many industries and systems of infrastructure, but not everything that can be automated will be automated. In the future AI has the promise to help society solve some of its toughest challenges by augmenting human intelligence — not by replacing it.

In the political domain, the use of AI techniques is not going away anytime soon — it is simply too valuable to politicians and their campaigns. However, they should commit to use AI as ethically and judiciously as possible to ensure that their attempts to sway voters do not undermine democracy as a whole.

What is the one thing you think all tech for good ventures need to do to ensure they are ethically responsible?

Many people have argued that we need to encode morality into our machines to mitigate their ethical risks. But the ethical dilemmas we’re dealing with are far too complex and require more than a simple formula to solve. This is why we need to stop trying to solve our human problem with technology, but instead try to solve our technology problem with humanity.

One good analogy that helps clarify some of these issues is the tale of the Ring of Gyges. Written by Plato around 380 BC, it tells the story of a mystical ring that granted its owner the power to become invisible at will. With the magic of the ring, the owner could do whatever he pleased: he could seek pleasure, wealth and power without fear of being caught or punished for injustice. So Plato asks: under what conditions can the owner of the ring act morally and use his incredible power for good?

Plato’s conclusion is that morality is not a social construction but comes from within. His view of ethical responsibility is connected to people’s virtues that allow them to do the right thing irrespective of others.

I think this is something that many designers of AI systems still need to learn: differentiating between what is technically possible and what is ethically justifiable. Machine intelligence has gotten a lot smarter in recent years and now offers incredible opportunities for social good. But whether AI systems are programmed to do something beneficial or something utterly devastating depends on the people designing these technologies.

My advice to technology entrepreneurs in this field would thus be to follow your internal moral compass and always think about the people you’re designing the AI systems for. If you have the algorithmic equivalent of the ring of power, it’s up to you to make the right choice.

Tech’s Good is a digital publication critically evaluating the #SocialImpact of #tech. We encourage a transparent innovation culture within the #tech4good ecosystem.