Modelling incentives: importance of ML-based approach and the human factor

Piotr Grudzien
Incentivai
Jul 12, 2018


It’s fair to say that nowadays there is increasing interest in systems that coordinate behaviour and build trust by setting incentives for users.

How do you design such a system? How do you reward and penalise users such that their dominant strategy is to be honest, reliable, unbiased and hard-working? It’s all about the incentives.

We might also want to decentralise our system such that nobody owns it, or rather, everyone owns it. Enter blockchain and smart contracts. News curation is just one application among many: DAOs, insurance, prediction markets, lending, storage, compute power and many more…

Getting the incentives right is all about mechanism design. Mechanism design is hard.

If there is no free lunch in machine learning, in mechanism design there is nothing to eat at all.

At Incentivai, we build a tool for testing the incentive structure of your system. We simulate your environment and observe the behaviour and failure modes identified by ML agents. One way to look at it is that the agents approach your smart contract system the way AlphaZero approaches chess.
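To make that concrete, here is a minimal, self-contained sketch (in Python, with entirely made-up payoffs and a deliberately simple learning rule; it is not Incentivai's actual implementation) of an agent learning whether honesty or an exploit pays off better in a toy environment:

```python
import random
from collections import defaultdict


class ToyMechanismEnv:
    """Hypothetical stand-in for a smart-contract environment.
    Actions: 0 = behave honestly, 1 = attempt an exploit."""

    def step(self, action: int) -> float:
        if action == 0:
            return 1.0  # steady reward for honest participation
        # Exploit attempt: occasional large payoff, otherwise a penalty.
        return 10.0 if random.random() < 0.1 else -2.0


class EpsilonGreedyAgent:
    """Minimal learning agent that estimates the value of each action."""

    def __init__(self, n_actions: int = 2, epsilon: float = 0.1):
        self.n_actions = n_actions
        self.epsilon = epsilon
        self.values = defaultdict(float)  # running average reward per action
        self.counts = defaultdict(int)

    def act(self) -> int:
        if random.random() < self.epsilon:
            return random.randrange(self.n_actions)  # explore
        return max(range(self.n_actions), key=lambda a: self.values[a])

    def learn(self, action: int, reward: float) -> None:
        self.counts[action] += 1
        self.values[action] += (reward - self.values[action]) / self.counts[action]


if __name__ == "__main__":
    env, agent = ToyMechanismEnv(), EpsilonGreedyAgent()
    for _ in range(20_000):
        action = agent.act()
        agent.learn(action, env.step(action))
    # With these made-up payoffs the exploit has negative expected value,
    # so the agent learns that honesty is the better strategy.
    print("estimated value of honesty:", round(agent.values[0], 2))
    print("estimated value of exploit:", round(agent.values[1], 2))
```

Because the exploit's expected payoff is negative here, the agent converges on honest behaviour; the interesting cases are the parameter settings where it does not.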

See our case study, the first two blog posts and the concept paper to learn more.

Why Machine Learning?

The use of Machine Learning agents in simulations is critical for several reasons:

  • agent behaviour is real-world-like
  • agents are capable of identifying new failure modes
  • agent behaviour quantifies the importance of failure modes

Will they offer bribes? Will they accept bribes?

The recently published analysis looked at the Nexus Mutual system (a decentralised alternative to insurance). One of the key failure modes is the possibility of submitting false insurance claims and offering bribes to users who vote to accept them.

While it is obvious that such an attack exists in theory, it is crucial to simulate and check under what circumstances it is more prevalent and how to tune system parameters to mitigate it.

For the attack to be a real threat, there need to be both users who find it beneficial to offer a bribe and those who are willing to accept it. During simulations, Machine Learning agents make decisions that are most likely to be beneficial for them.

[Figure: Bribe acceptance rate as a function of vote bond size (risk-averse users)]

One way to discourage users from accepting bribes is to increase the amount they put at stake when voting. They would then only accept a bribe if they strongly believed others would vote the same way; otherwise the risk is too high.

In the graph above we can see that increasing the bond size to a relative value of 30 reduces bribe acceptance rates across all scenarios to around 20%.
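To see why a larger bond helps, here is a back-of-the-envelope decision rule under a simplified, risk-neutral payoff model (the function and the numbers are illustrative, not the actual Nexus Mutual mechanism):

```python
def bribe_is_worth_it(bribe: float, bond: float, p_same_vote: float) -> bool:
    """Accept the bribe only if its expected payoff beats honest voting (payoff 0).

    Toy model: the voter keeps the bribe either way, but forfeits the bond
    whenever the vote goes against them (probability 1 - p_same_vote).
    """
    return bribe - (1 - p_same_vote) * bond > 0


# With a small bond, a 50/50 belief about other voters already makes the bribe
# attractive; with a bond of 30 the voter must be highly confident others will
# vote the same way.
print(bribe_is_worth_it(bribe=5, bond=5, p_same_vote=0.5))   # True
print(bribe_is_worth_it(bribe=5, bond=30, p_same_vote=0.5))  # False
print(bribe_is_worth_it(bribe=5, bond=30, p_same_vote=0.9))  # True
```

With a bribe of 5 and a bond of 30, the voter needs to be more than roughly 83% confident that others will vote the same way before accepting becomes worthwhile.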

The human factor

The users of your system will eventually be real people, so modelling the human factor is essential.

Taking insurance as an example, it is particularly important to go beyond modelling humans as rational profit-maximising agents. They need to be modelled as risk-averse: preferring a continuous low cost (the premium) to occasional large losses, even though going uninsured has a higher expected long-term return.
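As a minimal sketch of what risk aversion means here (a standard concave CARA utility with made-up numbers, not the model used in the simulations):

```python
import math


def cara_utility(wealth: float, risk_aversion: float = 0.05) -> float:
    """Concave (CARA) utility: large one-off losses hurt more than proportionally."""
    return -math.exp(-risk_aversion * wealth)


wealth = 100.0
premium = 6.0             # cost of buying cover
loss, p_loss = 50.0, 0.1  # 10% chance of a 50-unit loss when uninsured

# In expected *money*, cover is the worse deal: it costs 6 while the
# expected uninsured loss is only 5. A purely profit-maximising agent skips it.
print("expected money, insured:  ", wealth - premium)        # 94.0
print("expected money, uninsured:", wealth - p_loss * loss)  # 95.0

# In expected *utility*, the risk-averse agent still prefers the cover.
insured = cara_utility(wealth - premium)
uninsured = p_loss * cara_utility(wealth - loss) + (1 - p_loss) * cara_utility(wealth)
print("prefers insurance:", insured > uninsured)  # True
```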

For the system to be robust, however, it cannot rely on any single risk profile. Your design must not be overly sensitive to any particular assumption; it should perform well across a wide range of scenarios.

[Figure: Bribe acceptance rate as a function of vote bond size (less risk-averse users)]

The graph above is equivalent to the one shown earlier, except it assumes that users are less risk-averse. Even though, as expected, they purchase fewer insurance covers (50–100 as opposed to 200–250), their willingness to accept bribes follows a similar pattern. This shows that increasing the bond size is a robust measure against bribing schemes.
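That kind of robustness check can be sketched as a simple parameter sweep; the payoffs, beliefs and risk-aversion levels below are illustrative assumptions rather than the actual simulation setup:

```python
import math


def accepts_bribe(bond: float, bribe: float, p_same_vote: float,
                  risk_aversion: float) -> bool:
    """Accept the bribe if its expected *utility* beats honest voting (utility of 0)."""

    def utility(x: float) -> float:
        # Concave (CARA) utility: higher risk_aversion penalises losses more.
        return -math.exp(-risk_aversion * x)

    expected_utility = (p_same_vote * utility(bribe)
                        + (1 - p_same_vote) * utility(bribe - bond))
    return expected_utility > utility(0.0)


# Sweep bond sizes across several risk profiles. In this toy model the smallest
# bond is accepted by every profile, while the large bonds are rejected by all
# of them; the mitigation holds regardless of the assumed risk profile.
for risk_aversion in (0.01, 0.05, 0.2):  # less to more risk-averse
    decisions = [accepts_bribe(bond, bribe=5, p_same_vote=0.6,
                               risk_aversion=risk_aversion)
                 for bond in (5, 10, 20, 30)]
    print(f"risk_aversion={risk_aversion}: accept at bond 5/10/20/30 -> {decisions}")
```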

Conclusion

Reasoning about interactions in complex systems and designing incentive structures is hard. However, getting them right is a crucial prerequisite for widespread user adoption.

Running simulations of your system prior to deployment allows for iterative improvement of your design. Machine Learning agents make simulation results more insightful and real-world-like.

Agents make decisions that are best for them, just as humans do. That is what makes them capable of identifying and quantifying various failure modes.

If you and your team would like to use Incentivai to test the incentive structure of your system, please reach out or find us on Twitter.

Please read the Incentivai legal disclaimer
