AI for Social Good at DAIA, AILA, and SingularityNet

Published in DAIA · 6 min read · Jul 22, 2019

AI has recently had a bad rap in the news for causing social problems, from biased credit and hiring policies to political or corporate hacking of public opinion. Along with this has come a push within the AI community to find ways AI can solve, rather than create, social problems. One instance is our DAIA partner, the Artificial Intelligence Los Angeles (AILA) group, which focuses on making Los Angeles an AI hub for social good. Toward this goal, AILA has promoted collaborations between activists and AI researchers through summits and symposia. In April, AILA held an AI Earth Summit to address problems like climate change and the loss of species diversity, followed by an AI Ethics and Fairness Symposium in June; another summit, on healthcare, will follow in October. The summits bring together experts in AI and social policy to brainstorm and articulate problems on the first day, followed by startup-focused hackathons on the second day to prototype solutions. The symposia take up just one evening, but are just as interactive and solution-focused.

Such collaborations are excellent participatory ways to take a crack at society’s most pressing problems. We at SingularityNET believe that AI is critical to solving modern social problems, and we are developing ways to support software collaborations toward this end. One is to combine AI with computer simulation in such a way that AI becomes not a black box but a convincing tool for iterating through solutions to social problems at the pace modern societies need. The “convincing” part is critical here: at the AI Earth Summit we learned that even though almost everyone believes solving climate change is urgent, people do not act because they don’t believe anything can be done about it. The same could be said of the problems of bias, inequity, and other progressive concerns: the majority may want a more equitable world, but are not convinced that progressive solutions will not make conditions worse. We propose to use AI to shore up those solutions. At present, AI may be misused by interested parties that capitalize on deception through confirmation of bias or disinformation campaigns, but combined with simulation, AI can be a tool that extends reason in an objective manner.

Simulation is an extension of reason by virtue of its ability to compute the effects of phenomena we agree exist. If from those phenomena we can derive multiple effects that we also agree exist, then our model has good explanatory power. Simulation models are objectively closer to or farther from reality depending on whether the events within them correlate with one another as the corresponding events do in the real world, and they are more convincing the greater the correspondence. This correspondence is an objective thing to argue over, which means that simulation models can be built upon collaboratively over time by people who initially disagree but may come to agree based on reasonable conclusions from evidence: in this case, the simulation evidence of what logically follows from agreed-upon states.
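This notion of correspondence can be made concrete. The sketch below is an illustration only, not SingularityNET code; the scoring function, variable names, and data are all hypothetical. It scores a simulation model by how closely the correlations among its output variables match the correlations observed in real-world data:

```python
import numpy as np

def correspondence_score(simulated: np.ndarray, observed: np.ndarray) -> float:
    """Both arrays have shape (n_samples, n_variables).

    Returns a score in [0, 1]: 1 means the pairwise correlation
    structure of the simulation matches the real data exactly.
    """
    sim_corr = np.corrcoef(simulated, rowvar=False)
    obs_corr = np.corrcoef(observed, rowvar=False)
    # Mean absolute difference between the two correlation matrices,
    # rescaled so identical structure -> 1 and maximal disagreement -> 0.
    diff = np.abs(sim_corr - obs_corr).mean()
    return 1.0 - diff / 2.0

rng = np.random.default_rng(0)
# "Real" data: two strongly correlated variables.
real = rng.multivariate_normal([0, 0], [[1, 0.8], [0.8, 1]], size=1000)
good_model = real + rng.normal(0, 0.1, real.shape)  # tracks reality closely
bad_model = rng.normal(0, 1, real.shape)            # ignores reality

print(correspondence_score(good_model, real))  # near 1
print(correspondence_score(bad_model, real))   # noticeably lower
```

People who disagree about a model can then argue over an objective number rather than over impressions, and improve the model until the score rises.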

The tool of simulation gives us a way to collaborate not only on perception, on what the world we are acting upon is like, but also on the effects of the actions we take on the world. Gone are the days when we could expect justice to arise naturally, without collaborative, reasoned discourse on social rules beforehand. Before writing, for example, knowledge was passed down through legends in which people participated through decentralized retelling, making the legends wiser and more satisfying over time for all involved. This happened naturally, without the need for founding fathers to design constitutions that promote decentralized collaboration in order to protect democracy and the interests of all stakeholders. However, since many natural social collaborations, such as social discourse and markets, have been replaced by computer proxies, it makes sense to test these algorithms through simulation: to evaluate their social effects against a different set of values than those which incentivized their creation, and to see their effects on different stakeholders. In a collaborative, transparent environment, deciding together on the rules of social interaction is a way to combat the centralization of rulesets made by companies incentivized solely by near-term profits or by entities motivated by power. Such systems, if transparent and convincing, can win in the market in the long term, for example by demonstrating greater fairness and protection to market participants than the rules of social interaction encoded in corporate or politically motivated algorithms, which often work through deception. Simulation can unveil such deception.

AI is an essential part of the equation through its emulation of adaptation, for it is the ability of one part of a living system to adapt to another which, when the relations between entities are encoded correctly, fuels growth. To find the consequences of actions, we must look not only at individual reactions to actions, but at individual reactions to those reactions, and so on; these make up the virtuous or vicious cycles of social structure in a complex adaptive system. Simulation with AI is necessary to view the effects of action in such systems.
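A toy illustration of such a feedback loop, with all names, parameters, and dynamics invented for this sketch: each agent repeatedly adapts its level of cooperation toward the population's average, so reactions to reactions compound, and a small initial bias snowballs into a virtuous or a vicious cycle.

```python
import random

def simulate(n_agents=50, steps=100, seed=1, initial_bias=0.6):
    """Minimal complex adaptive system: cooperation levels in [0, 1]."""
    rng = random.Random(seed)
    coop = [min(1.0, max(0.0, rng.gauss(initial_bias, 0.1)))
            for _ in range(n_agents)]
    for _ in range(steps):
        avg = sum(coop) / n_agents
        # Each agent reacts to the population's prior reaction:
        # cooperation is reinforced when others cooperate (avg > 0.5)
        # and eroded when they defect. This is the feedback loop.
        coop = [min(1.0, max(0.0, c + 0.1 * (avg - 0.5))) for c in coop]
    return sum(coop) / n_agents

print(simulate(initial_bias=0.6))  # drifts toward 1: a virtuous cycle
print(simulate(initial_bias=0.4))  # drifts toward 0: a vicious cycle
```

No single agent's reaction predicts the outcome; only simulating the chain of mutual adaptations reveals which cycle the system falls into.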

Simulation and AI can help solve many of the problems brought up at the AILA summits and symposia. Participants in the AI Earth Summit sought to develop sustainable agriculture systems in which waste products are reused, to track climate change’s complicated effects on biodiversity, and to use the power of the market to incentivize climate-friendly behaviors. All these needs involve data which, when brought together with adaptive simulation, will yield richer outcomes than when considered alone. For example, with crowdsourced facts about species, perhaps from apps like iNaturalist, put together in their natural adaptive relations in a holistic simulation, we may be able to better anticipate and prevent extinctions. We can make more convincing arguments about the effects of climate change on human beings and explore the complex effects climate policies would have on the environment.

Participants in the AI Ethics and Fairness Symposium sought to understand bias caused by AI and to use AI to alleviate bias, to help ensure that justice is blind, and to expose and find alternatives to a corrupt business culture in which revenue is of sole importance. At the same time, they wanted to address the anonymization that leads to trolling and political deception, and to use AI to find ways for the workers it has displaced to participate in the economy. At SingularityNET we are already starting to address many of these issues through simulation and AI.

In our Political Influence project, we study political polarization in social media by observing its psychological and social effects on intelligent agents with simulated cognitive dissonance, so that strategic information warfare campaigns can be detected automatically.

In our Equity project, we are exploring a new metric, a modified Gini coefficient, that measures inequity while taking into account potential to contribute to the economy, so that the economy may be seen as a reinforcement learning system. With the modified Gini metric we can see how inequity shifts knowledge in society toward exploitation, where market participants’ capacities go unmeasured, while random choice shifts knowledge toward exploration, where participants are sampled indiscriminately but given no incentives. Maximizing wealth in proportion to talent with the modified Gini, however, brings about a fairer society that can also maximize utility, creating the best products for consumers. Perhaps “winner take all” usages of online market reputation systems, in which consumers typically pick only the “best” sellers the reputation system knows of, leave most sellers unknown. This is one algorithmic explanation for the increasing inequality we see in society today. Our simulations test different reputation systems and their usages so as to foresee their effects on markets and on society before they go live.
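The modified Gini metric itself is not spelled out in this article, so the sketch below uses the standard Gini coefficient and a hypothetical toy market (all parameters invented for illustration) to show the "winner take all" dynamic: when buyers always pick the highest-rated known seller, and each sale further boosts that seller's reputation, nearly all sales concentrate on one participant.

```python
import random

def gini(wealth):
    """Standard Gini coefficient: 0 = perfect equality,
    approaching 1 = one participant holds everything."""
    w = sorted(wealth)
    n = len(w)
    total = sum(w)
    if total == 0:
        return 0.0
    # Gini via the sorted-rank formula.
    cum = sum((i + 1) * x for i, x in enumerate(w))
    return (2 * cum) / (n * total) - (n + 1) / n

def market(n_sellers=20, n_sales=1000, winner_take_all=True, seed=0):
    rng = random.Random(seed)
    rating = [rng.random() for _ in range(n_sellers)]  # initial reputation
    wealth = [0] * n_sellers
    for _ in range(n_sales):
        if winner_take_all:
            # Buyers always pick the highest-rated seller they know of.
            s = max(range(n_sellers), key=lambda i: rating[i])
        else:
            # Buyers explore sellers at random.
            s = rng.randrange(n_sellers)
        wealth[s] += 1
        rating[s] += 0.01  # each sale further boosts the winner's reputation

    return wealth

print(gini(market(winner_take_all=True)))   # near 1: extreme concentration
print(gini(market(winner_take_all=False)))  # low: sales spread out
```

Running the same market under different reputation rules and comparing the resulting inequality is the kind of before-deployment test such simulations make possible.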

Just as legends were the product of participatory social evolution in ancient societies, so can simulation with AI be the product of participatory social evolution in modern societies. AI has as much capacity to be a truth-finding tool of objectivity and an extension of reason that supports democratic decentralization as it does to undermine democracy by promoting disinformation, inequity, bias, and ecological irresponsibility through the hacking of public opinion. Up to this point, interested parties have contributed vastly more funds to the latter, and little has been done to counteract these misuses of AI. Online market and social media companies, beholden to shareholders, cannot be trusted to protect consumers against deception and divisiveness when these increase profits. However, organizations like DAIA, AILA, and SingularityNET are developing methods to answer these threats in a decentralized and participatory manner.

Text by Dr. Deborah Duong, AI Researcher at SingularityNET
