Photo by Frederic Köberl on Unsplash

How Could Democracy Benefit from AI?

The global population is getting interconnected and, in theory, we can all benefit from access to information, knowledge exchange, networking, and global communities. On the other hand, there are major side effects and threats to democracies — such as the fake news phenomenon, information overload, and distorted reality.

George Krasadakis
13 min read · Nov 2, 2023


How could Artificial Intelligence help us make better sense of our digital world? How could AI help people make better decisions about the politicians they select? How could governments leverage AI to better connect with people and the global socio-economic system? How could AI make ‘running a country’ simpler and more transparent?

Joseph Yun, Enrico Panai, Agnis Stibe, Mike Tamir, and Jennifer Victoria Scurrell share their insights.

Joseph Yun

AI Architect and Research Professor in Electrical & Computer Engineering — University of Pittsburgh • USA

If you have not watched ‘The Social Dilemma’, I would suggest allocating some time to do so. The documentary describes how social media companies boost exposure to the content that gets the most engagement, because this kind of content sells more digital advertising. The (un)intended consequence is that individuals see more and more extreme content in their social media feeds, since this happens to be the type of content that drives the most engagement. Given that this content is personalized to each user, the downstream effect is that users are served increasingly extreme content that confirms their natural biases (exposed by their profiles and the type of content they engage with). Play this model out long enough and you have a society filled with individuals whose ‘personalized realities’ are extremely different from one another. Does this sound familiar?

We could develop AI-based models for ‘consensus building.’ - Joseph Yun

Extreme views and divided societies existed long before the Internet. What did not exist until the last couple of decades is the increasing use of AI-based algorithms that hyper-personalize the content served via social media platforms — extreme content that drives engagement and advertising sales. These algorithms are essentially playing against themselves: they keep making the models better and better, which means that individuals’ ‘personalized realities’ become more and more extreme and divisive over time.

Many individuals and groups are well aware of this phenomenon and are brainstorming solutions, starting from the realization that the only way to fight the speed at which AI-based models are being built is to build competing AI-based models. For most AI-based algorithms, the builder must define a target or goal (a reward function) that the algorithm tries to optimize. In the current state of social media business models, that target is ad revenue, and the winning strategy is serving ever more personalized and extreme content. Some of us are suggesting that a healthier target for democracy could be ‘movement towards common ground’ — a strategy no AI has discovered yet, simply because we have not tried to tune the models to aim for this state. While there are open questions about how to measure this ‘movement towards common ground’ and how it could be a profitable business model for a social media company, there is a clear need for such a shift: one could look at various regions of our country and immediately see things trending toward a destabilization that makes it difficult for businesses even to operate.

To avoid this situation, people must come closer together in consensus on various matters. The goal is not assimilation or brainwashing, but rather enough consensus that diverse people with a plurality of views and backgrounds can live prosperously and peacefully.
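Yun’s point about optimization targets can be sketched in a few lines. The posts, scores, and the ‘divisiveness penalty’ below are all invented for illustration; real feed-ranking objectives are far more elaborate, but the principle — the same candidate content, ranked very differently under two reward functions — is the same.

```python
# Toy sketch: the same candidate posts, ranked under two objectives.
# All data and both scoring functions are hypothetical.

posts = [
    {"id": "outrage-take", "engagement": 0.95, "divisiveness": 0.90},
    {"id": "balanced-explainer", "engagement": 0.55, "divisiveness": 0.10},
    {"id": "common-ground-story", "engagement": 0.60, "divisiveness": 0.05},
    {"id": "mild-clickbait", "engagement": 0.75, "divisiveness": 0.60},
]

def ad_revenue_score(post):
    # Proxy for today's target: engagement alone.
    return post["engagement"]

def common_ground_score(post, penalty=1.0):
    # Hypothetical alternative target: engagement minus a divisiveness penalty.
    return post["engagement"] - penalty * post["divisiveness"]

feed_engagement = sorted(posts, key=ad_revenue_score, reverse=True)
feed_consensus = sorted(posts, key=common_ground_score, reverse=True)

print([p["id"] for p in feed_engagement][:2])  # ['outrage-take', 'mild-clickbait']
print([p["id"] for p in feed_consensus][:2])   # ['common-ground-story', 'balanced-explainer']
```

The only change between the two feeds is the reward function handed to the ranker, which is exactly the lever Yun argues we have not yet tried to pull.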

AI has played a part in pulling this democracy apart, so now we need to consider ways of using it to bring us back together. If we go down the path of building models that drive consensus within a diverse society, we can start using those models to assess the content and laws being produced, sponsored, or supported by politicians. This could be immensely helpful to individuals who, instead of simply voting along party lines, are willing to vote for politicians who bring people together rather than drive them further apart.

We could use the growing body of work on making AI algorithms more transparent and explainable to gain AI-based knowledge of what builds consensus versus division in society. We could use these findings to speak in a more informed way about the strengths and weaknesses of the society we live in. We could also develop AI-based models for ‘consensus building’ from data sourced from other countries and cultures, giving us a better picture of how we can relate to those outside our geographic borders.

Does this sound like wishful thinking? If so, let me ask you one question: do you feel comfortable watching our society and democracy continue to be fueled by AI algorithms essentially built to promote extreme and even violent views and perspectives?

Joseph Yun is an AI Architect and Research Professor in Electrical and Computer Engineering. His research focuses on novel data science algorithms, user-centric analytics systems, and the societal considerations of AI-based advertising and marketing. Yun is the founder of the Social Media Macroscope, an open research environment for social media analytics.

Enrico Panai

AI and Data Ethicist — Éthiciens du numérique / BeEthical.be • France

People might be afraid that AI could one day run a country. What we are talking about should then be called ‘AIcracy’, because at the root of the Greek word democracy there is ‘demos’, which means the people. So, as they say in philosophy, this hypothesis is ontologically impossible. AI will not be able to ‘run a country’, but neither will the coffee shop under my house, because AI has a huge flaw: it has no ‘meaning’.

We humans are semantic agents, but machines in general are syntactic agents. The form is perfect, but there is no meaning. It’s like saying “I love Warwick, the capital of France”. It doesn’t make sense, but it is syntactically correct. To put it another way, AI systems reduce difficulty into complexity. If you look at the Latin etymology, complex means made up of several parts (cum + plexus). In a computational sense, when something is complex we just need more resources to solve it. So AI uses a sum of specific skills to solve easy problems, but cannot solve difficult things. AI plays chess very well, but we don’t use it to tie our shoes. Playing chess is complex. Tying shoes is difficult. And since AI cannot reduce the difficulty of tying shoes to a complex task, it cannot do it.

The reality of a democracy is so difficult that it cannot be reduced to a sum of elements. - Enrico Panai

Now a question arises: is democracy difficult (which again comes from the Latin dis + facile and means not-easy), or is it the sum of a number of small, simple problems? Here a worldview comes into play. The engineering tendency, throughout the history of mankind, has been to reduce a difficulty into many small problems in order to solve them. René Descartes, the father of modern critical thinking, advised dividing every difficulty into as many parts as possible and necessary to solve each problem separately (Discourse on Method, 1637). The Cartesian method has advanced science and the world, but reality is probably more than the sum of its parts. So the method is valid only as long as we recognise its metaphysical limit: the real cannot be reduced. For a representation to be similar to the real, it would need the real’s own dimensions (whatever they may be) — a bit like the geographical map in Jorge Luis Borges’ story (On Exactitude in Science, 1946) that, to represent an empire, had to equal the empire’s size (a fictional map drawn at the scale of a mile to the mile). Such a map would no longer be useful.

The same happens with democracy. The reality of a democracy is so difficult that it cannot be reduced to a sum of elements. However, we can and should exploit AI to improve some of its processes (those that can be reduced from difficult to complex). The ethical error consists in not distinguishing difficult problems from complex ones. We cannot entrust to AI systems the choice of political representatives, the understanding of current events, or the management of our states. But we can make the most of AI in all those syntactical processes that it is so good at. We can entrust it with the complex tasks. There remains the crucial point of distinguishing complexity from difficulty. That is why ethicists are needed: to make this axiological distinction in order to avoid wasting time, mitigate the risks, and make the most of the power of AI for democracy and humanity.

Enrico Panai is an Information and Data Ethicist and a Human Information Interaction Specialist. He is the founder of the French consultancies “Éthiciens du numérique” and BeEthical.be, a member of the French Standardisation Committee for AI, and a ForHumanity Fellow.

Agnis Stibe

Artificial Intelligence Program Director — EM Normandie Business School • France

Democracy is a term that people have created to describe a state in which there is more collective good and freedom than injustice and suppression. Of course, this is an oversimplification. Nevertheless, it is highly important to establish at least some background understanding before adding another sophisticated term, namely Artificial Intelligence (AI). Again, an important reminder: AI is a tool, nothing more. A very advanced and high-potential digital tool, of course.

For centuries, people have been developing tools to make their lives easier and more pleasant. That naturally brings satisfaction to the bright side of our human nature. The one that continuously seeks joy, happiness, and fulfilment. In modern terms, it can be framed as a striving towards efficiency and performance. Indeed, better tools can bring desired results faster, using fewer resources. Therefore, it sounds like a good idea to develop more advanced tools.

Nonetheless, the dark side of human nature also takes part in the evolutionary journey. Most often, it manifests itself through the same tools, only applied towards achieving opposite ends. Almost any tool can be used for good or bad: a hammer, a knife, a phone, a computer, an algorithm, an AI system. All tools are great at revealing one essential truth about people — their intentions, good and bad.

Any advanced technology is a great mirror for people to see their bright and dark sides. Obviously, not everyone is rushing to use this unique opportunity. It is a choice. A decision. Do I want the technology to expose my bright intentions, or do I want it to hide my dark sides? The more capable innovations are created, the more they can reveal who we really are as people, organizations, and societies.

Any advanced technology is a great mirror for people to see their bright and dark sides. - Agnis Stibe

AI will gradually open the doors to all the deeply hidden secrets accumulated over the course of human evolution. It will allow people to finally meet and know their true nature across the spectrum of all inherited aspects, ranging from the deeply dark to the very bright. If applied mindfully, AI can significantly foster all the major processes toward democracy, with transparency serving as the key catalyst on this journey.

Fake news, hatred, distorted reality, and manipulation are not the products of AI. They are outcomes of human actions, often underpinned by dark intentions. So AI, contrary to some misleading opinions, can actually help trace the origins of such destructive content. Firstly, it can detect, filter, classify, and analyse dark patterns. Secondly, it can learn to become more efficient at these activities. Thirdly, it can suggest alternatives for dealing with current and future circumstances.

Just as the police are an institutional force counterbalancing the darkness of human nature, the values of democracy can be firmly grounded in AI right from the start. Who can make this decision? Yes, you are right: we, the people. Everyone in governance, organizations, companies, and institutions, globally. The core decision should be simple and straightforward: transfer all the good sides of human nature, while refusing to pass on any dark patterns.

If applied mindfully, AI can significantly foster all the major processes towards democracy. - Agnis Stibe

Indeed, it might seem difficult and challenging to completely control how much, and what, AI is taught about humans. However, it is a decision that people can make. With persistence, honesty, transparency, and dedication, any regulatory institution can find efficient ways to strengthen democracy with AI. The ever-increasing capabilities of technological innovations can greatly simplify everyday tasks at all levels of management and governance.

Agnis Stibe is the Artificial Intelligence Program Director and Professor of Transformation at EM Normandie Business School and a globally recognized corporate consultant and scientific advisor at AgnisStibe.com. He provides the authentic science-driven STIBE method and practical tools for hyper-performance. 4x TEDx speaker, MIT alum.

Mike Tamir

Chief ML Scientist, Head of ML/AI — Susquehanna International Group • USA

Information integrity, and how we can leverage ML to detect and prevent misinformation, has been a passion of mine for many years now. Even in an era of corrosive social media misinformation, it is important to appreciate the benefits of free, widespread information exchange. Figuring out ways to leverage ML to detect and combat manipulative, non-knowledge-based content could not be more critical to preserving these benefits without succumbing to the threats that misinformation poses to the health of democracy.

From a 50,000-foot view, detecting what is ‘fact-based’ is intimately related to truth detection, and ‘knowing the truth’ is, of course, a challenge humans have been working on since the invention of language; so expecting an ML algorithm to become a magic truth detector might be naive. That being said, the research we did in the FakerFact project revealed that there are other, more indirect, ways of detecting malicious intent in sharing ‘information.’ Specifically, while ML algorithms may not be able to detect, given a bit of text, whether every claim in the text is true or false based only on the words on the page, what they can do is detect subtle patterns of language usage that are more (or less) common when the text aims at sharing facts versus when it is primarily focused on manipulating reader behavior.

Expecting an ML algorithm to become a magic truth detector might be naïve. - Mike Tamir

The FakerFact project revealed that when a journalist or scientist intends to share the available facts and present the information in a logical progression that allows for intellectual scrutiny by the reader, the text tends to be detectably different from text where the author is pushing a conclusion regardless of the available facts, or manipulating those facts to force a desired reaction in the reader.

In other words, ML can often reliably detect when the author of a text has an agenda. This is an encouraging discovery that can hopefully be leveraged by both for-profit and not-for-profit organizations to combat misinformation and support knowledge sharing, to the benefit of a thriving democracy.
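The idea of classifying intent from surface patterns of language, rather than verifying claims, can be illustrated with a toy model. This is not FakerFact’s actual method: the training texts, the labels, and the minimal bag-of-words Naive Bayes classifier below are all invented for illustration.

```python
import math
from collections import Counter

# Hypothetical toy training set: a model learns word-usage patterns,
# not the truth of any claim.
train = [
    ("the study measured outcomes and reports the data with caveats", "informative"),
    ("we describe the method its limits and the evidence found so far", "informative"),
    ("they are lying to you wake up before it is too late", "manipulative"),
    ("share this now everyone must know the shocking hidden truth", "manipulative"),
]

def fit(examples):
    # Per-label word counts (bag of words).
    counts = {"informative": Counter(), "manipulative": Counter()}
    for text, label in examples:
        counts[label].update(text.split())
    return counts

def predict(counts, text):
    vocab = {w for c in counts.values() for w in c}
    scores = {}
    for label, c in counts.items():
        total = sum(c.values())
        # Log-likelihood with add-one smoothing; uniform prior over labels.
        scores[label] = sum(
            math.log((c[w] + 1) / (total + len(vocab))) for w in text.split()
        )
    return max(scores, key=scores.get)

model = fit(train)
print(predict(model, "the report describes the data and its limits"))  # informative
print(predict(model, "wake up they are lying share this now"))         # manipulative
```

Even this crude sketch never checks whether any statement is true; it only scores how the text is written, which is the indirect signal Tamir describes.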

Mike Tamir, PhD is a data science leader, specializing in deep learning, NLP, and distributed scalable machine learning. Mike is experienced in delivering data products for use cases including text comprehension, image recognition, recommender systems, targeted advertising, forecasting, user understanding, and customer analytics. He is a pioneer in developing training programs in industry-focused machine learning and data science techniques.

Jennifer Victoria Scurrell

PhD Candidate in AI Politics — ETH Zurich • Switzerland

Education is everything. Many issues in the globalised world — be it climate change or global health crises like the Covid-19 pandemic — can be tackled by equipping people with critical thinking. Especially with regard to political opinion formation and participation in democratic decision-making processes, digitalisation, and specifically Artificial Intelligence, can support us in gaining political literacy. In times of disinformation, algorithmic content curation, bot armies, and democratic backsliding, it is more important than ever to provide citizens with the right tools to help them make their choices independently, without any nudging or manipulation.

Artificial Intelligence can support us in gaining political literacy. - Jennifer Victoria Scurrell

So why not have your own personal Artificial Intelligence buddy? An AI that accompanies and supports you in political decision-making by acting as a sparring partner, discussing the political issues at stake with you. This conversational AI could provide insights based on your values and attitudes, nurturing the argument with scientific facts. It would recommend news articles, algorithmically customised to your interests and stance, so you could learn more about a topic. In parallel, it would show you articles that you would normally not see or read. As such, the AI would lead you out of the filter bubble or echo chamber, which is important, as a good citizen in a democracy is a fully informed citizen. Moreover, as political and societal issues are often complex, the AI could break a topic down and explain it accurately, in the way most illuminating and accessible to you. The AI could even represent politicians standing for election in the form of a hologram, so that you could discuss important issues and positions directly with their digital twins before deciding which candidate to vote for.
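The mixing behavior of such a hypothetical ‘buddy’ recommender — mostly aligned content, plus a guaranteed share of counter-bubble reads — can be sketched as follows. The articles, stances, and mixing rule are all invented for illustration.

```python
# Hypothetical article pool, each item tagged with an editorial stance.
articles = [
    {"title": "Carbon tax explained", "stance": "pro"},
    {"title": "Why a carbon tax works", "stance": "pro"},
    {"title": "The case against carbon taxes", "stance": "contra"},
    {"title": "Carbon pricing: a neutral primer", "stance": "neutral"},
]

def recommend(user_stance, items, n_matching=2, n_outside=1):
    """Serve mostly stance-aligned articles, but always include
    some from outside the user's filter bubble."""
    matching = [a for a in items if a["stance"] == user_stance]
    outside = [a for a in items if a["stance"] != user_stance]
    return matching[:n_matching] + outside[:n_outside]

feed = recommend("pro", articles)
print([a["title"] for a in feed])
# ['Carbon tax explained', 'Why a carbon tax works', 'The case against carbon taxes']
```

The key design choice is that the counter-bubble quota is a hard guarantee rather than a by-product of engagement ranking — the inverse of how most feeds work today.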

Turning away from this utopia: the technology is there to support us in everyday life, as well as in complex situations such as thinking critically about the political decisions we must make. However, there are still many basic problems that scientists, together with the developers and providers of AI systems, and society at large, must address. Be it privacy issues, the black-box problem, biased training data, or the risk of being hacked and manipulated: a personal AI buddy for political decision-making is still a far-off reality. Current incidents in the virtual realm of social media, but also in political and societal reality, demonstrate that humanity cannot yet handle AI technology in benevolent ways without slipping into maleficent enticement.

What can we do about it? When developing AI systems and technology, we should always think one step ahead: how could the tool be used in a harmful way, and how can we prevent that? If scientists, developers, tech providers, and policymakers follow a basic ethical framework for creating AI in a transparent way (see Beard & Longstaff 2018), society can regain and consolidate trust in technology, science, tech companies, and politics. If we comply prudently with ethical regulations, the utopian dream of living side by side with good and trustworthy AI might become reality, and we can use AI justly and with integrity, educating citizens to become more critical and informed in the process of democratic decision-making.

Jennifer Victoria Scurrell is a political scientist and pursues her PhD at the Center for Security Studies (CSS) at ETH Zurich. In her dissertation, she examines the influence of bots on political opinion formation in online social networks and the resulting implications for political campaigning.

Excerpt from 60 Leaders on AI (2022) — the book that brings together unique insights on the topic of Artificial Intelligence: 230 pages presenting the latest technological advances along with the business, societal, and ethical aspects of AI. Created and distributed on principles of open collaboration and knowledge sharing: created by many, offered to all, at no cost.


George Krasadakis

Technology & Product Director - Corporate Innovation - Data & Artificial Intelligence. Author of https://theinnovationmode.com/ Opinions and views are my own