Introducing Maker Risk Dashboard

Building a tool for computer-aided governance

Jan Osolnik
Block Analitica
Jan 28, 2022 · 7 min read


Maker Risk Dashboard

Introduction

DAO governance involves decentralized community members coming together and going through the political process of compromise in order to make decisions. These decisions ideally optimize for some mission or vision, which is best stated explicitly. Decentralized mechanism design enables us to align incentives across different stakeholder groups even when their views diverge on the exact steps toward a specific DAO’s aim. The decentralized component is relevant because it provides stronger resistance to various attack vectors (adversarial behavior). It also introduces new ones that are less present in centralized governance (e.g. the collective action problem). In theory, this can all be well understood and modeled. In practice, we’re constantly relearning that social coordination is messy and that we’re just beginning to learn how to use our new tooling well.

The challenge (and opportunity) of DAOs is the wide diversity of opinions with no objective truth to validate those views against. Besides having a clearly stated strategy, the movement towards community alignment can be supported by data-driven decision making. Data shouldn’t be the only source used to make our decisions, as too much reliance on it can lead to local optimization, making us disregard the big picture. Yet in a lot of cases, it can definitely help. This post explores the topic of computer-aided governance and how we use the Maker Risk Dashboard as a tool for that aim. We’ll dive deeper into specific simulation models and their impact in upcoming posts.

Computer-Aided Governance

In the context of steering complex systems such as MakerDAO, the decision-support process can be described as Computer-Aided Governance. It often leverages models to build an approximate representation of the systems that we’re interested in. We can then use different modeling methodologies to compute estimates and predictions across various conditions and assumptions. These include optimizations and what-if analyses. This is crucial for creating a shared understanding for the purpose of political discussion.

“All models are wrong but some are useful”

George E. P. Box

The above, commonly used aphorism describes well the humility with which we need to approach the problem of modeling. While scientific models always fall short of the complexities of reality, this shouldn’t discourage us from our aim of using them to make better informed decisions.

“Simple causal reasoning about a feedback system is difficult because the first system influences the second and second system influences the first, leading to a circular argument. This makes reasoning based upon cause and effect tricky, and it is necessary to analyze the system as a whole.”

Karl Johan Åström and Richard M. Murray, Feedback Systems: An Introduction for Scientists and Engineers

There is also the consideration of complexity: reductionist approaches that isolate individual components sometimes don’t measure up to the problems at hand. This is interlinked with the questions and audience we define, both of which should shape our chosen modeling methodology. With the increasing integration of different DeFi protocols (e.g. D3M), the DeFi ecosystem is becoming more and more interconnected. Besides growing the industry’s capacity for rapid iteration, this also introduces previously unknown levels of systemic risk.

In the process of aiming for protocol growth, decentralized governance can take various forms. One dimension is how much we want people to be involved in the process: it can vary from automated governance to more human-in-the-loop approaches. Alongside governance automation, there is also the aim to gradually eliminate governance altogether (governance minimization).

So far, MakerDAO’s approach has been to first learn to walk before running. Besides a large governance surface (decision space), there are also many intangibles that are difficult to quantify. The latter include the dynamic risk profiles of collateral assets, such as governance and oracle risk. Other examples of risk parameter setting demand domain expertise that is inevitably subjective (there is no scientifically “true” answer), which often makes it unsuitable for governance automation. For this reason, it’s crucial that we’re mindful of “subjective” choices of “objective” measures, especially since many decisions are made from summary statistics (mean, median, etc.) that can tell a different story across different levels of aggregation. Any kind of rigorous data analysis therefore also requires a path of a priori subjective decisions.
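As a toy illustration of how summary statistics can tell a different story across aggregation levels, consider the sketch below. The collateral names and debt figures are invented for illustration and do not come from actual Maker data:

```python
# Hypothetical vault debt figures (in thousands of DAI), grouped by
# collateral type. All numbers are invented for illustration.
vaults = {
    "ETH-A": [100, 120, 110],        # several mid-sized vaults
    "WBTC-A": [5, 5, 5, 5, 2000],    # mostly tiny vaults plus one whale
}

# Pooled statistics across the whole portfolio...
all_debts = [d for debts in vaults.values() for d in debts]
pooled_mean = sum(all_debts) / len(all_debts)

# ...tell a different story than per-collateral statistics.
group_means = {k: sum(v) / len(v) for k, v in vaults.items()}
group_medians = {k: sorted(v)[len(v) // 2] for k, v in vaults.items()}

# For WBTC-A, the mean (404) and the median (5) describe the "typical"
# vault completely differently: the choice of summary statistic and
# aggregation level is itself an a priori subjective decision.
```

Here the whale vault dominates the WBTC-A mean while the median ignores it entirely; neither is “the” correct answer without a prior, subjective choice of what question we are asking.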

There are also arguments that too much human-in-the-loop involvement in governance processes is detrimental. Too many decisions taken on-chain can cause governance overhead and a vicious cycle of voter apathy. On the other hand, automated governance with too many parameter changes can harm protocol UX by making it difficult for users to follow its development.

Another factor to consider in setting governance parameters is the constant (and necessary) tension between maximizing growth on one side and minimizing risk on the other. Maker’s MKR burn vs. Surplus Buffer trade-off discussions are a clear example of that.

The above Computer-Aided Governance (CAG) Map and Process (MAP) aims to create a high-level view of many of the challenges presented above. With a systematic approach to decentralized governance, we can continuously build tools and processes that enable us to make better informed decisions.

MakerDAO Risk Dashboard

The Maker Risk Core Unit aims to propose risk-minimized governance parameters by relying on professionally developed risk metrics. We use the MakerDAO Risk Dashboard as a tool for Computer-Aided Governance. Some of the tasks in our mandate include risk parameter proposals, collateral onboarding evaluations, monitoring of portfolio exposure, and DeFi system risk monitoring. We specialize solely in studying financial risk, which means that other risk domains (such as smart contract risk) are beyond our team’s scope.

While deep dives into various research topics demand domain expertise provided by our risk analysts, our software engineers integrate the chosen methodologies into our dashboard. This includes scheduled model reruns, automated alerting and continuous monitoring of MakerDAO as a complex system.
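To make “automated alerting” concrete, here is a minimal sketch of what a threshold-based alert check might look like. The metric names, thresholds, and rule structure are all hypothetical, not the dashboard’s actual implementation:

```python
# Hypothetical alert rules: metric name -> (threshold, direction).
# None of these names or values come from the real dashboard.
ALERT_RULES = {
    "collateralization_ratio": (1.75, "below"),   # alert if ratio drops below
    "debt_ceiling_utilization": (0.90, "above"),  # alert if utilization exceeds
}

def check_alerts(metrics):
    """Return the rules breached by the latest metric snapshot."""
    breaches = []
    for name, (threshold, direction) in ALERT_RULES.items():
        value = metrics.get(name)
        if value is None:
            continue  # metric missing from this snapshot; skip the rule
        if (direction == "below" and value < threshold) or \
           (direction == "above" and value > threshold):
            breaches.append((name, value, threshold))
    return breaches

# A scheduler (e.g. cron) would run such a check after each model rerun
# and forward any breaches to a notification channel.
```

The point is not the specific rules but the shape of the loop: scheduled reruns produce fresh metrics, a pure function evaluates them against thresholds, and only breaches reach a human.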

As mentioned above, protocol integration in the DeFi ecosystem is increasing. This makes both data integration across different sources and continuous ecosystem monitoring increasingly important.

Analytics platforms such as Dune Analytics have proven invaluable in showcasing the open and transparent ethos of crypto. There are also some quality dashboards created by MakerDAO’s core contributors. Meanwhile, we have experienced first-hand that these kinds of platforms fall short when building more custom risk analyses and simulations.

Data science is a difficult domain to grasp in itself and using it as a framework of methodologies to govern a complex system introduces another layer of complexity. In a machine learning model, it is sometimes possible to isolate certain real-world phenomena and extrapolate the future based on historical data. Even then there are many challenges with ML model monitoring and validation such as concept drift. On the other hand, in a complex system where we misrepresent a single component’s function by isolating it from other interconnected components, it can be detrimental if we don’t think thoroughly about our model assumptions. That’s why simulation modeling is as much about systems thinking as it is about technical implementation of the chosen solution.

Given DeFi’s rapid development, it is crucial for us to effectively curate information and proactively follow up on value-adding tasks for the MakerDAO community. Curation in practice means combining objective data with subjective social activity across the Maker forum, Discord channels and, of course, Twitter. Separate from proactive engagement, and equally (if not more) important, is studying potential failure modes and presenting appropriate risk mitigation strategies. In both cases, this can mean either surfacing new questions or answering questions coming from the community. MakerDAO’s community is especially well-versed in high-quality discussions, which can be invaluable for bringing up arguments about various decision trade-offs.

There are three key modeling principles that we aim to follow when building our models: actionability, conservatism and explainability.

Actionability focuses on choosing to model a part of the system through which we can have an impact on the system’s performance. Risk premiums and debt ceilings are key risk parameters in Maker Governance which makes them valuable mechanics for us to model. There are other similar areas that we’re modeling such as auction parameters and D3M parameters.
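To make the risk premium idea concrete, here is a toy expected-loss calculation. This is not the Maker Risk team’s actual methodology, and both input figures below are invented placeholders; it only illustrates why the parameter is actionable, since governance can set the premium directly against an estimated loss rate:

```python
def toy_risk_premium(prob_bad_debt, loss_given_bad_debt):
    """Toy annualized risk premium as an expected-loss rate.

    prob_bad_debt: hypothetical probability that the collateral portfolio
        produces bad debt over a year (e.g. estimated via simulation).
    loss_given_bad_debt: fraction of outstanding debt lost in that event.
    """
    return prob_bad_debt * loss_given_bad_debt

# Example (invented numbers): a 5% yearly chance of an event wiping out
# 40% of the debt implies a ~2% risk premium just to break even in
# expectation, before any conservatism buffer is added on top.
premium = toy_risk_premium(0.05, 0.40)
```

A real model would derive the probability and loss terms from collateral volatility, liquidity, and liquidation dynamics rather than take them as constants, but the lever is the same: the computed rate feeds directly into a proposable governance parameter.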

Conservatism guides our modeling assumptions towards proposing parameters that are safe(r) for the protocol to implement and minimize the risk of ruin. It’s an (intentional) implicit bias that moves us away from parameter setting that maximizes growth at the expense of potential risk of ruin.

Explainability is key to understanding the impact of system levers on model output and communicating it to community stakeholders. It matters when sharing a model that others can play with to understand how small changes in model inputs can sometimes cause large changes in the outputs. That invariably teaches us that system modeling is as much art as it is science.

Conclusion

Through computer-aided governance, we aim to support better informed decisions, guided by our dashboard and its underlying models. When building simulation models, there is always a possibility that certain unknown unknowns are not incorporated. Assumptions can be too simplistic, too optimistic, or just plain wrong. Sufficient domain expertise can mitigate these risks, but there is no easy, bulletproof way around them. Another key element of CAG that aims to tackle this challenge is stakeholders’ ability to monitor the systems of interest and observe the impact of chosen decisions. That can provide a valuable feedback loop for further improvement of decentralized governance.

In the next post, we’ll take a deep dive into the collateral risk model that we use to propose two key governance risk parameters: the risk premium and the debt ceiling.

Finally, some of the mental models shared in this post are a result of our interactions with the token engineering community, which fosters collective learning on the sustainable design of decentralized governance systems and is well aligned with our team’s aim.

Acknowledgements

Thanks to Angela Kreitenweis, Danilo Lessa Bernardineli, Kurt Barry and Michael Zargham for reading earlier drafts of this article and providing valuable feedback.
