Using dumb AI to solve complex problems

Marc Alexander
Lux et Libertas
3 min read · Jul 14, 2016

The Human Nature Lab at Yale is currently experimenting with using very simple AI to solve complex social coordination tasks. Nicholas Christakis, the lab's Principal Investigator and the head of the Yale Institute for Network Science, had an ingenious idea to test whether introducing simple automated agents can improve coordination in human social networks. His team has designed online experiments in which human subjects are recruited to play coordination games in groups where at least some of the members are AI bots with simple pre-defined strategies.

Bots are widely used by social media platforms, online forums, and mobile apps to drive user engagement, generate content, and improve targeted advertising. While most users find bots an annoying feature of online advertising, their practical uses have made them increasingly indispensable for driving traffic online. Outside these basic commercial applications, the potential of AI bots remains largely untapped.

Conceptually, the idea of using dumb AI is fascinating because it mirrors an earlier development in the opposite direction: using dumb HI (human intelligence) to complement AI in complex problem-solving tasks. The most famous early examples include companies such as PayPal combining machine learning algorithms with simple human oversight to identify the transaction irregularities that characterize common fraud in online payment systems. Crowdsourcing solutions in computer vision and pattern recognition likewise complement complex AI algorithms with simple human input or oversight to improve the accuracy of solutions. Mirroring this model of smart AI and dumb HI, my lab is now exploring the addition of simple AI bots to complex social coordination problems involving multiple human decision-makers, whose strategic interactions are almost impossible to predict or model with complete accuracy.

The idea of using dumb AI is also interesting because human decision-making is riddled with simple biases and well-characterized psychological fallacies. Despite possessing incredible powers of intuition and analytic thinking, even the best human thinkers often have difficulty forming unbiased expectations about rare outcomes or high-risk situations. Depending on our individual brain chemistries, we assign different weights to our previous experiences and apply different discounting factors to future utilities. We may also be evolutionarily hard-wired, and culturally soft-wired, to fall back on specific heuristics when choosing among many alternatives or under a high degree of uncertainty. An interesting question is whether adding dumb AI bots to a set of human players can minimize or even eliminate some of these well-established biases and fallacies in human decision-making.
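To make the point about discounting factors concrete, here is a small illustrative sketch (my own, not from the lab's experiments) comparing the exponential discounting assumed by a fully "rational" agent with the hyperbolic form often used to describe human bias. The function names and parameter values are hypothetical:

```python
def exponential_discount(value, delay, rate=0.1):
    """Rational-agent model: a constant per-period discount rate."""
    return value / ((1 + rate) ** delay)

def hyperbolic_discount(value, delay, k=0.1):
    """Human-bias model: value falls off as 1 / (1 + k * delay)."""
    return value / (1 + k * delay)

# With matched parameters the two models agree at a delay of one period,
# then diverge: the hyperbolic curve flattens out, retaining more value
# at long delays -- the "fat tail" characteristic of human discounting.
for delay in (1, 5, 20):
    print(f"delay={delay:2d}  "
          f"exponential={exponential_discount(100, delay):6.2f}  "
          f"hyperbolic={hyperbolic_discount(100, delay):6.2f}")
```

Two human players holding different `rate` or `k` values will value the same future payoff differently, which is one source of the coordination failures the experiments probe.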

Some simple examples include AI bots programmed to introduce a degree of randomness into a set of pre-defined strategies. Humans are very good at intuitively solving coordination problems, given the right incentives, but pursuing a self-interested strategy can sometimes produce sub-optimal outcomes at the group level. In certain situations, introducing agents that randomly select among a few good strategies may be just enough to shift the equilibrium of group-level outcomes. Other situations may call for AI bots that repeatedly play the same strategy regardless of other players' non-cooperative behavior: bots can be programmed to be insensitive to the emotional responses, grudges, envy, and propensity to punish non-cooperators that even well-intentioned, intelligent humans succumb to easily.
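The two bot types described above can be sketched in a few lines. This is a hypothetical toy model (the class names, the imitate-the-majority "human", and the coordination game are my own assumptions, not the lab's actual code): agents each pick "A" or "B", and the group scores a round whenever everyone coordinates on the same action.

```python
import random

class RandomBot:
    """Introduces noise: picks uniformly among a fixed set of good strategies."""
    def __init__(self, strategies=("A", "B")):
        self.strategies = strategies

    def choose(self, history):
        return random.choice(self.strategies)

class SteadfastBot:
    """Plays the same strategy every round, immune to grudges, envy,
    and the urge to punish non-cooperators."""
    def __init__(self, strategy="A"):
        self.strategy = strategy

    def choose(self, history):
        return self.strategy  # insensitive to other players' past behavior

class CopycatHuman:
    """Crude stand-in for a human player: imitates the most common
    action from the previous round."""
    def choose(self, history):
        if not history:
            return random.choice(("A", "B"))
        last = history[-1]
        return max(set(last), key=last.count)

def play(agents, rounds=50):
    """Repeated coordination game: count rounds where all actions match."""
    history, coordinated = [], 0
    for _ in range(rounds):
        actions = [agent.choose(history) for agent in agents]
        coordinated += len(set(actions)) == 1
        history.append(actions)
    return coordinated

group = [CopycatHuman(), CopycatHuman(), SteadfastBot("A"), RandomBot()]
print(f"coordinated rounds: {play(group)} / 50")
```

Even in this toy setting, the steadfast bot acts as an anchor the imitators converge toward, while the random bot occasionally perturbs the group out of a bad convention, which is the intuition behind the equilibrium-shifting idea above.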


Yale network scientist and biologist interested in genomics of social networks and evolution of human cooperation