A Tailored Reward Function Methodology

Nomiks
Oct 2, 2023 · 24 min read


This article was produced by Nomiks, a Web 3.0 firm specialising in Token Economics design and monitoring.

Preamble

Given a protocol emitting a certain number of reward tokens to a set of users, how can its equity be controlled? How can a reward function be found and optimized?

A sampling of rewarded Creators after 600 days: given a protocol emitting a certain number of reward tokens (here 20M tokens called PASSION), how can its equity be controlled? And conversely, how can the formula be adjusted? Different colors represent various rewarded activities or behaviors

In a protocol, how can one arbitrate between directing specific incentives to boost economic growth and ensuring equity for all its users?

Introduction

In the world of cryptocurrency and blockchains, a crucial question arises: how can rewards be distributed justly, according to specific strategies that promote economic activities and encourage (rewardable) behaviors? For a protocol, having such control over the balance of a reward distribution, which could be predefined initially, would be invaluable and even crucial for defining its governance rules, decentralization, clarity, and ultimately its adoption.

But is this always possible? At Nomiks, we claim “Yes!”

The demonstration remains delicate and goes through some distribution analysis tools, statistical modeling and the creation of an objective function.

It is essential to highlight certain key steps in our methodology.

Regarding the balance of reward distribution, we will use Lorenz curves and the Gini coefficient. A Lorenz Curve is a graphical tool used in economics and statistics to represent the distribution of a certain quantity or value, such as wealth, within a population. It makes it possible to visualize the inequality of the distribution. In a perfectly equal world, x% of the population would earn x% of the total income, y% would earn y%, and so on. This would translate into a straight line, called the line of perfect equality, running diagonally from the bottom-left corner to the top-right corner of the graph. But this is rarely the case! To quantify the inequality represented by the Lorenz Curve, we will then use an index called the Gini coefficient.

After recalling the standard principles of equity measurements, we will statistically adapt the Lorenz Curve to apply it to different reward distribution optimizations. We will present two immediate methodological applications and then extensively address a use case. This use case is operational. It is a project that Nomiks assisted in its tokenomics. This project (named X) wanted, among other things, to test five types of reward issuances for its token (linear, proportional, capped, exponential, logistic) in order to calculate and adjust, for each of these groups as well as globally, the deviation from the equitability of its reward protocol.

This project X has certain specificities that we will need to recall. However, the current Nomiks study extrapolates this use case by answering the question of equitable reward sharing with a quantitative approach integrating what we will call, for the three “Horizons” of a protocol:

  • Horizon 1 of its economic activities,
  • Horizon 2 of its users’ (rewardable) behaviors,
  • Horizon 3 of the growth activation strategies of its users.

The ultimate aim is to provide a protocol with all the arbitration data it needs to make the right choice for issuing its rewards.

One caution about the articulation of our methodological construction: distribution (Lorenz) is not emission. Most of the emission will be largely determined by a certain scoring of these “Horizons”. Our “turnkey” arbitrage solution simply compensates for this pre-determination.
And that’s already a lot!

We will conclude with the utmost generality by presenting an optimization and equity control applicable to any type of project.

Measures of Equity: Some Principles

Gini Coefficient

The Gini Coefficient measures inequality in the distribution of rewards and is calculated from the area between the actual Lorenz Curve and the line of equity. The Gini Coefficient formula is:

G = A / B

Where:

  • G is the Gini Coefficient.
  • A is the area between the actual Lorenz Curve and the line of equity.
  • B is the total area under the line of equity.

The closer G is to 0, the more equitable the distribution is. The closer G is to 1, the more unequal the distribution is.

Lorenz Curve

In general, a so-called Lorenz curve can be drawn by plotting the cumulative percentage of individuals (or groups) on the x-axis and the cumulative percentage of the values (e.g., income) they hold on the y-axis.

Given an ordered set of values V1 ≤ V2 ≤ … ≤ Vn held by a set of individuals (or groups) P1, P2, …, Pn, the general formula for the cumulative share of values Li up to the ith individual (or group) is:

Li = (V1 + V2 + … + Vi) / (V1 + V2 + … + Vn)

Where:

  • Li is the cumulative percentage of values up to the ith individual (or group) on the y-axis.
  • Vj is the jth value of the distribution.
  • n is the total number of individuals (or groups).

The Lorenz Curve is typically created by connecting the points (i/n, Li) for each i on a chart. When this curve approaches the reference diagonal line, it signals a relatively equitable distribution of values. Conversely, if the curve deviates from this diagonal line, it indicates a more unequal distribution.
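
To make these constructions concrete, here is a minimal Python sketch (assuming NumPy) that builds the Lorenz points Li from a vector of values and computes the Gini coefficient with the standard closed form for sorted data:

```python
import numpy as np

def lorenz_gini(values):
    """Lorenz curve points and Gini coefficient of a distribution."""
    v = np.sort(np.asarray(values, dtype=float))   # V1 <= V2 <= ... <= Vn
    n = v.size
    L = np.cumsum(v) / v.sum()                     # cumulative shares Li
    x = np.arange(1, n + 1) / n                    # population fractions i/n
    # Standard closed form of G = A / B for sorted data
    gini = 2 * np.sum(np.arange(1, n + 1) * v) / (n * v.sum()) - (n + 1) / n
    return x, L, gini

# Example: a very unequal distribution of rewards among five users
_, _, g = lorenz_gini([1, 1, 2, 5, 100])
print(f"Gini = {g:.3f}")   # ~0.74, far from the equity line
```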

A Sensitivity Analysis

It can be interesting to calculate the cumulative shares of rewards given to a population, then to represent them graphically to evaluate how close this distribution is to an ideally equitable distribution among all participants in the economy. This is therefore an adaptation of the Lorenz Curve, which allows us to evaluate and visualize the distribution of rewards within a population and thus highlight any existing inequalities.

Consider a random variable X representing the rewards given, in a protocol, to a population. The cumulative distribution function (CDF) of X gives us the probability that a reward is lower (or higher) than a certain value, while the Lorenz Curve shows us how these rewards are distributed among all users (by fractions of the population).

Let:

  • For the actual cumulative distribution of rewards: X (random variable)
  • For the ideally equitable cumulative distribution (diagonal line): Y = [0, 1/n, 2/n, …, 1] where n is the total number of participants.

The problem is as follows: how do the variables of a given reward model influence the overall equitability of the protocol, and how do we approach the best fit?

This is a sensitivity analysis that Nomiks actually carried out for protocol X. We will omit certain details such as the resulting distribution density of the different emission models actually tested to synthesize the method on a single reward model: that of an exponential decline. Initially, a large amount of rewards can be issued, but over time, this quantity decreases, following a decreasing exponential curve.

This model is often chosen to reward early users over later ones. Its decline rate can be parameterized. The general formula for such an exponential decline is:

y(t) = y0 · e^(−b·t)

where y(t) is the value of the quantity at time t, y0 is the initial value of the quantity (i.e., its value at t = 0), e is the base of natural logarithms (approximately equal to 2.71828), b is a constant decline rate and t is time.

The negative exponent ensures that, as t increases, y(t) decreases. The speed of this decline is determined by the parameter b, which can be adjusted: a high value of b means a rapid decline, while a low value means a slow decline.

Consider a model of exponential decline for an average reward distribution (left chart). A sensitivity analysis was conducted in accordance with this model, deployed according to 5 values of a positive and increasing decline variable b (the negative exponent of our exponential curve being −b). Thus, we have 5 different distribution modulations and consequently 5 tests… until finding (right chart) the one that comes closest to the desired fairness. The distribution closest to perfect equity is given by the exponential function with the lowest decline rate (b = 0.1). But is perfect equity the best choice? In this case, and generally?

Consider a generic model of exponential decline for an average reward distribution (left chart above). A methodical sensitivity analysis involves deploying this model based on 5 values of a positive and increasing decline variable b (the negative exponent of our exponential curve being −b). Therefore, we have 5 different distribution modulations and consequently 5 tests… until finding the one that comes closest to the desired fairness (right chart).
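
A minimal sketch of such a sweep, reusing the `lorenz_gini` helper defined earlier (the emission horizon, initial emission y0 and time normalization are illustrative assumptions, not Project X's actual settings):

```python
import numpy as np

# Sweep the decline rate b of an exponential emission y(t) = y0 * e^(-b*t),
# compute the Gini coefficient of the rewards it induces across time
# cohorts, and compare each scenario to the equity target.
y0, horizon = 1000.0, 600             # assumed initial emission and days
t = np.arange(horizon) / horizon      # normalized time grid (assumption)
for b in [0.1, 0.5, 1.0, 2.0, 5.0]:   # five decline-rate scenarios
    rewards = y0 * np.exp(-b * t)     # reward of the cohort arriving at t
    _, _, g = lorenz_gini(rewards)
    print(f"b = {b:>3}: Gini = {g:.3f}")   # lowest b -> closest to equity
```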

Objective Function

The resolution of the desired best fit problem that motivates such a sensitivity analysis fully follows an optimization problem of an objective function.

  1. Maximization or Minimization Objective: Objective functions are used to maximize or minimize a certain quantity → here, it is about minimizing inequity (or maximizing equity).
  2. Decision Variables: Objective functions typically depend on decision variables, which are the parameters that the agent can control or adjust to achieve its goal → here, the economic agent is the protocol itself which can adjust the parameters of its reward emission model.
  3. Constraints: Often, there are constraints or limitations that must be respected → here, the constraints are the total number of tokens allocated to rewards, the maximum and minimum (thresholds) that can return to users, etc.
  4. Optimal Solutions: The goal is to find the values of the decision variables that optimize the objective function while respecting the constraints, the optimal values being those that minimize the desired quantity. → here, the optimal solution remains a matter of trade-off between several equity minimizations (gradual, depending on the value of the decision variable b).
  5. Numerical Optimization: The resolution of objective functions often involves numerical optimization methods. → in our case, a simple brute-force search allowed us to find the best fit (see the sketch below).
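
For instance, a hedged sketch of the brute-force resolution under a fixed token budget (grid bounds and horizon are assumptions; `lorenz_gini` is the helper defined earlier):

```python
import numpy as np

TOTAL_TOKENS = 20_000_000                   # constraint: fixed reward budget
t = np.arange(600) / 600.0                  # assumed emission horizon

best_b, best_g = None, float("inf")
for b in np.linspace(0.05, 5.0, 100):       # brute-force grid on b
    emission = np.exp(-b * t)
    emission *= TOTAL_TOKENS / emission.sum()   # respect the token budget
    _, _, g = lorenz_gini(emission)
    if g < best_g:                          # objective: minimize inequity
        best_b, best_g = b, g
print(f"best b = {best_b:.2f}, Gini = {best_g:.3f}")
```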

Emission Functions and Arbitration

Reward Emission: some reminders

In addition to the exponential decline function, there are several other models, depending on the desired protocol incentive strategy (the exponential emission, as we said, is mainly used to encourage early adopters); each of these can likewise be subjected to tests and analyses in order to evaluate its equity.

  • In a linear emission model, the number of tokens issued increases at a constant rate over time. Unlike an exponential model, where the emission rate changes significantly, a fixed number of tokens is issued at each defined time interval. Mathematically, the emission of a token according to a linear model can be represented as:

y(t) = m·t + c

where y(t) is the total number of tokens issued up to a certain time, m is the emission rate (number of tokens issued per unit of time), t is the time elapsed since the beginning of the emission, and c is the initial number of tokens issued at t = 0 (it could be zero if the emission starts from scratch).
In this model, m additional tokens are issued at every time interval, resulting in a linear increase in the total number of tokens in circulation. It encourages constant and predictable adoption.

  • In a proportional emission model, the number of tokens issued at each interval is proportional to a certain measure, often the remaining amount to be issued or another variable metric. Proportional emission generally reduces the number of tokens issued at each step, as the reference base decreases over time. Mathematically, taking the remaining balance as the reference base, the emission of a token according to a proportional model can be represented as:

y(t) = y(t−1) + r · (C − y(t−1))

where y(t) is the total number of tokens issued up to time t, r is the proportional rate (a percentage of the reference base), C is the total amount to be issued, and y(t−1) is the total number of tokens issued up to the previous time (or the previous step).
In this formula, at every time interval, a quantity r · (C − y(t−1)) of tokens is issued. If r is set to, for example, 0.05 (or 5%), then 5% of the remaining balance is issued at each step. This means that even though the percentage remains constant, the absolute number of tokens issued decreases over time, as it is always a percentage of a shrinking remaining balance. This model allows for a gradual reduction in the supply of new tokens while rewarding early adopters.

  • In a tiered capped emission model, tokens are issued in fixed lots or “tiers” until a determined cap is reached. Once a tier is filled, the next one begins, until the total number of tokens issued reaches the cap or the set limit. This is often used to structure initial coin offerings (ICOs) or token sales where participants can get benefits for early participation while offering a clear and defined emission structure. Mathematically, the emission of a token according to a tiered capped model could be represented as:

y(t) = p1 for t ≤ t1, p1 + p2 for t1 < t ≤ t2, …, p1 + … + pk once the last tier k is filled

where y(t) is the number of tokens issued by time t, pi is the cap of the ith tier, and ti is the time at the end of the ith tier. In this model, tokens are issued up to p1 until time t reaches t1. After that, the next tier begins, issuing tokens up to p2, and so on, until the last tier is reached or the total cap is reached. The tiered emission mechanism ensures that emission is structured and predictable, allowing participants to clearly understand how many tokens will be issued at each step. This model can encourage early participation. (A code sketch of these three schedules follows this list.)

Restating the Problem

Let’s assume that a protocol wants to encourage different revenue streams while maintaining a certain balance in this token issuance over time.

The problem is as follows: is it possible for the protocol to arbitrate between a certain reward emission strategy and the gap in equity? In other words: how can it build its optimal incentive strategy?

In the graph below, four reward emission models (Linear, Proportional, Fixed, Exponential Decline) are represented, crossed with two main user types, called Fans and TVL (cf. above). Both participate (but differently) in increasing the protocol's revenues.

That makes eight graphs, showing for these two user types and for each reward model the gap in equity (given by the Gini coefficient).

This series of graphics intersects two types of distributed rewards with four reward emission models detailed previously. The Gini coefficient indicates the deviation from perfect equity. It is notably inequitable for the TVL with a proportional model. It is perfectly equitable for all users with a linear model. But is this a desirable incentive strategy?

If a single reward function is used, one model can be fair for one type of activity and not the other. An issuance may seem fair for one parameter, but not for the other. In general, there is non-linearity (as we shall see later) between what we call “incentive centers”.

Methodology Part 1: Our Optimization Steps Algorithm

This article does not address the case of emission optimization based on inflation mitigation, but rather the sizing of reward issuance within a protocol in the service of its token economy. This comes at the cost of a slightly more complex optimization process and a quite crucial intermediate modeling step. Let’s present its deployment in a concise manner.

Optimizing the issuance of rewards within a protocol is an exercise that closely depends on the definition of clear objectives. A primary goal might focus on minimizing sell pressure to control inflation. From this perspective, optimization can be based on an emission using a logistic function (specifically, a sigmoid) and is quite straightforward (through a simple curve fitting mechanism, for example).

However, if the ambition is to drive the economic activity of the protocol, the approach is more sophisticated. In this context, the protocol should consider optimizing several intrinsic dimensions, which can be referred to as its secondary objectives, such as:

  • Economic Activities Properly: These activities can encompass direct revenue sources for the protocol, like royalties, transaction fees, and subscriptions.
  • “Behaviors”: Beyond mere transactions, the protocol might want to incentivize specific behaviors among its users. This could include promoting content quality, encouraging a high activity rate, or managing financial ratios like the holding or debt ratio.
  • Acquisition: How the protocol wishes to expand its user base is critical. This could mean primarily targeting tech-savvy pioneers, pushing for mass adoption by the general public, or focusing on later adopters.

Once these secondary objectives are clearly defined, the next step is to determine how to develop their economic scope. This involves encouraging, for example, the quality of produced content or a high activity rate. The choice of customer acquisition strategies also fits into this dynamic, whether by targeting market pioneers, pushing for mass adoption, or focusing on later adopters.

Finally, after detailing these elements, it’s possible to propose a tailored token distribution strategy. This strategy would be based on the choices and possible weightings from the previous steps, ensuring that the reward issuance actively follows the protocol’s objectives (primary then secondary).

This approach follows steps 1 through 5 of the following algorithm:

However, at the end of these steps, we have not yet achieved the promised optimization. A further step (6) is necessary, which involves the project’s modeling phase, including data mining, sampling, and data bootstrapping. Only at the end of this step will we be able to formulate a reward objective function and proceed with its purely numerical/quantitative optimization (step 7).

We will detail these steps in the continuation of this article after briefly presenting our theory of the three “Horizons”.

Brief Theory of the Three Horizons and Their Weightings

Our Use Case

As mentioned earlier, the use case that allowed us to develop our approach is operational. It is a project (X) that Nomiks assisted in the design of its token economy. Let’s quickly introduce its specifics before extrapolating. Project X is a platform that aims to bring Content Creators and their community of Supporters closer by monetizing social interactions. Its economic design includes a marketplace at its core. The Supporters of Creator A pay a subscription to access the exclusive publications that Creator A publishes (thus obtaining a membership). Creator A earns a certain percentage of the subscription revenue paid by their Supporters. Promoters can relay content, fans can be paid for their posts (via mint fees), more discreet users can receive management fees for their good and loyal services, etc.

All of this sketches a rich and complex economy of monetizing social interactions. We were thus led to design, among other things, an incentive mechanism called “Fan wars” inspired by the “bribes” of the Curve protocol. This “Fan wars” allocates part of the inflation to Creators according to their ranking via a score, which depends on several dimensions: each Creator account has its own Creator score resulting from a function that treats variations of different inputs according to what we identified as (three) different “Horizons”.

In such a context, the bearers of Project X asked themselves the following question: how best to direct incentives? And first: what are our incentive centers (towards which poles do we want to direct incentives)? And then: how do we measure all this?

The Three “Horizons”

Conceptually, Nomiks managed to frame this type of questioning in a preliminary analysis of what was called “Horizons” (in the thread of the “secondary objectives” of a protocol seen previously). What are these “Horizons,” how do we distinguish them, and how do we use them to obtain expressive and adapted incentive curves?

  • Horizon 1: This relates to the economic activities of the protocol. Obviously, these are crucial. But how do we understand them? These are the inflows of the protocol that we have come to identify.
  • Horizon 2: Our reflection led us to conclude that these behaviors are actually determined by the category to which the protocol belongs and the Utility of its token. For example, if the protocol falls into the ‘Open Digital Economy — Content Creation’ category and the Utility of its token has a ‘Work Token’ aspect (for an overview of Nomiks’ universal classification of protocols: https://medium.com/@Nomiks/what-do-you-know-about-the-competitive-landscape-of-your-web3-project-843fc7bb89fe), then the behaviors relevant to content production will polarize along a qualitative vs. quantitative axis. To avoid any reference to economic agents, we prefer to speak of a dualistic dynamic.
  • Horizon 3: This relates to the growth in the number of users or their acquisition. This level takes into account the user acquisition strategy of the protocol. Does the protocol intend to direct its incentives towards early adopters, with a view to mass adoption or even late adoption?

The Idea of an Incentive Score

To integrate these three Horizons into an incentive curve that addresses Project X’s problem, we followed the following method:

  1. Pre-determine certain specific criteria of the project in a general matrix (called the incentive centers matrix) for each Horizon. Depending on the importance given to each criterion by the protocol, a weight is assigned to it.
  2. Calculate an incentive score for each vector of this matrix.
  3. Represent the overall incentive score on a graph that integrates these three “Horizons”.

Basis for Calculating Different Incentive Centers

It is possible to capture in a formula a comprehensive incentive strategy implementing these three “Horizons”. Equipped with this score, the protocol can control its reward pools based on a selection of its incentive centers along these “Horizons”, namely:

  • Economic Activities:
    We have been tasked with identifying certain centers of incentives within this Horizon. Here they are in full: Royalties, Transaction Fees, Subscriptions, Ads, Direct Sales, Flat Fees & One-Time Purchases. Let’s briefly explain how each of these terms can be viewed as an economic inflow into a protocol.
    * Royalties refer to payments made to content creators or rights holders based on the usage or sale of their intellectual property, such as music, books, or patents. In the context of a protocol, they can represent revenue generated from users interacting with or utilizing intellectual property within the protocol.
    * Transaction Fees: in a protocol, these fees can represent income generated from users engaging in transactions.
    * Subscriptions revenue is earned when users pay regular fees to access a service or content over a specific period (including users subscribing to premium features or services offered by the platform).
    * Ads revenue is generated from displaying advertisements to users within a platform and may involve displaying ads to users as they interact with the protocol, with the protocol earning income from advertisers.
    * Direct Sales refer to revenue generated from selling digital goods, services, or access to specific features or content directly to users.
    * Flat Fees can represent income from charging users a flat fee for certain actions or privileges within the protocol.
    * One-Time Purchases can represent income from users buying non-recurring items or services.
  • Behaviors:
    We previously discussed this aspect, which we preferred to refer to as “Dualistic Dynamics.” It became apparent in our use case that a comprehensive reward emission incentive strategy must also encompass such dynamics. In a context of content publishing, the dual qualitative production and quantitative production seemed particularly relevant. We won’t detail them here.
  • (User) Acquisition Strategy:
    Here, our use case aligns with a general concern: does the strategy involve favoring early adopters, mass adoption, or even late adoption? These are three exclusive incentive centers from which the protocol must choose within this horizon.

a) So, we start with a Decision Matrix in line with the strategies of our use case protocol:

Mdecision is a binary matrix that contains 1s (True) and 0s (False), representing assignments. Each row of the matrix signifies a “decision” or an “assignment” to a specific category or option.
- The 1s indicate a positive decision or assignment to a particular option.
- The 0s denote a negative decision or a lack of assignment to that option.
(This kind of matrix is commonly used to represent classification, assignment, or decision problems where you have multiple options, and you need to determine whether each option is activated (1) or not (0) for each specific example or case represented by a row in the matrix).

We then perform matrix multiplication with Mweighted. Mweighted is a larger matrix that displays scores based on the correlation between each subcategory within the horizons and the emission function.

b) In our use case, the resulting MHorizon matrix for each Horizon would be:

In the end, the scores generated by these matrices allow us to determine the type of emission to prioritize for this project. (In our use case, the weights are similar in this example but may vary depending on the analyzed competitive landscape.)
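
Schematically, this scoring step can be sketched as a matrix product (the matrices below are toy values, not the project's actual weights; Mdecision and Mweighted follow the naming above):

```python
import numpy as np

# Toy example: 3 incentive centers x 4 emission types
# (linear, proportional, tiered, exponential). Values are illustrative.
M_weighted = np.array([
    [0.2, 0.5, 0.1, 0.9],   # e.g. Transaction Fees
    [0.4, 0.3, 0.6, 0.2],   # e.g. Subscriptions
    [0.1, 0.2, 0.3, 0.8],   # e.g. early-adopter acquisition
])
M_decision = np.array([1, 0, 1])    # incentive centers the protocol activates

scores = M_decision @ M_weighted    # one score per emission type
models = ["linear", "proportional", "tiered", "exponential"]
print(scores, "->", models[int(scores.argmax())])   # here: exponential
```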

It is the summation of these three “Horizons” that determines the choice of emission type. In most cases, this will lead to the implementation of an exponential function of this type:

y(t) = y0 · e^(−b·t)

Methodology Part 2: Data Mining and Modeling

Let’s continue with our algorithm starting from step 6.

To measure the distribution of rewards on each incentive center, it was necessary to model the evolution of the protocol's economic inflows. Activities were modeled through sampling and calibration of historical data, while “behaviors” (or dualistic dynamics) were based on linear or non-linear regression models.

This part is the most technically dense. We have condensed it to allow for a comprehensive overview of our entire method. It can be skimmed quickly and then revisited for a more in-depth reading later.

Generation of `fans` and `TVL` Distributions

Fans & TVL distributions are based on historical data. In order to increase the sample size and better estimate a realistic distribution, the data have been densified by bootstrapping (a resampling technique used to estimate statistics on a population by sampling a dataset with replacement; it allows the estimation of the sampling distribution of almost any statistic).

Where the original dataset is:

D = {x1, x2, …, xn}

and the bootstrapped sample, drawn with replacement from D, is:

D* = {x*1, x*2, …, x*m}
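
A minimal bootstrap sketch (the historical observations shown are illustrative placeholders):

```python
import numpy as np

rng = np.random.default_rng(42)

# Historical Fans/TVL observations (illustrative placeholders)
original = np.array([120.0, 340.0, 90.0, 560.0, 210.0, 75.0, 430.0])

# Densify by bootstrapping: resample with replacement, many times, and
# study the sampling distribution of a statistic (here, the mean)
boot_means = [rng.choice(original, size=original.size, replace=True).mean()
              for _ in range(10_000)]
print(np.percentile(boot_means, [2.5, 50, 97.5]))
```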

Model parameters distribution targets & motions

We used a (targeted) Brownian Bridge for the TVL behavior. This refers to a specific type of mathematical model that describes random motion: a stochastic process that starts at an initial value, ends at a target value, and fluctuates between these two points based on Brownian motion. The Fans fluctuations follow another calculated motion based on fan accumulation mechanisms.

If B(t) is a standard Brownian motion (with B(0) = 0), T is a fixed final time, b0 is the starting value and bT is the target value, then the targeted Brownian Bridge BB(t), with BB(0) = b0 and BB(T) = bT, is defined by:

BB(t) = b0 + (bT − b0)·(t/T) + B(t) − (t/T)·B(T)

Where 0 ≤ t ≤ T.
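
A short simulation sketch of this targeted bridge (step count and endpoint values are illustrative):

```python
import numpy as np

def brownian_bridge(b0, bT, T, n, rng=None):
    """Targeted Brownian bridge: starts at b0, pinned to bT at time T."""
    rng = rng or np.random.default_rng()
    t = np.linspace(0.0, T, n)
    dW = rng.normal(0.0, np.sqrt(T / (n - 1)), size=n - 1)
    W = np.concatenate(([0.0], np.cumsum(dW)))   # standard Brownian path
    return b0 + (bT - b0) * t / T + (W - (t / T) * W[-1])

# e.g. drive a modeled quantity from 1.0 to 2.5 over 600 daily steps
path = brownian_bridge(b0=1.0, bT=2.5, T=600.0, n=600)
```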

Motion of the two selected Model inflows

  • TVL: Given an initial value S0, the value at time t under GBM (Geometric Brownian Motion, a mathematical model used in finance to simulate the random behavior of stock prices, taking into account both their expected returns and variability) is:

S(t) = S0 · e^((μ − σ²/2)·t + σ·W(t))

Where μ is the expected return (drift term), σ is the volatility (diffusion term) and W(t) is a standard Brownian motion. (A simulation sketch of both inflows follows the Fans description below.)

  • Fans:

This is the essence of the fan growth behavior based on historical data. It can be inferred that the overall growth on day i is influenced by an exponential trend, modified by a random fluctuation, with the entire process subject to a 10% probability of manifesting. If this probabilistic event doesn’t occur, the growth remains the same as the previous day.

1. Trend Growth:

2. Random Fluctuation:

3. Trajectory Update: Given the 10% probability of growth -:

if rand() ≤ 0.1 then A(i) = F(i) else A(i) = A(i−1)
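
A hedged simulation sketch of both inflows (the drift, volatility, trend rate and noise scale are illustrative stand-ins for the values calibrated on historical data in the original study):

```python
import numpy as np

rng = np.random.default_rng(7)
days = 600

# TVL inflow: GBM path S(t) = S0 * exp((mu - sigma^2/2) * t + sigma * W(t))
mu, sigma, S0 = 0.05, 0.2, 1_000_000     # assumed drift, volatility, start
dt = 1.0 / 365
t = np.arange(days) * dt
W = np.concatenate(([0.0], np.cumsum(rng.normal(0, np.sqrt(dt), days - 1))))
tvl = S0 * np.exp((mu - 0.5 * sigma**2) * t + sigma * W)

# Fans inflow: exponential trend + random fluctuation, manifesting with
# 10% probability each day (g and the noise scale are assumptions)
g = 0.01
fans = np.empty(days)
fans[0] = 100.0
for i in range(1, days):
    trend = fans[i - 1] * np.exp(g)            # 1. trend growth
    F_i = trend * (1 + rng.normal(0, 0.05))    # 2. random fluctuation
    # 3. trajectory update: if rand() <= 0.1 then A(i) = F(i) else A(i-1)
    fans[i] = F_i if rng.random() <= 0.1 else fans[i - 1]
```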

Adoption function

To model user adoption, Nomiks utilizes an instantiation process that relies on a pre-modeled cumulative adoption curve, shaped as a logistic function derived from historical data.
If t represents time (in days), the cumulative adoption function that fits the empirical data points is:

A(t) = 1 / (1 + e^(−k·(t − t0)))

Where e is the base of the natural logarithm, k is the slope of the function at its inflection point and t0 is the inflection point of the curve (a sigmoid).

Instantiation

A distribution is generated based on the previous function to simulate the creators’ instances. Each creator is represented by a unique creation day, defined by a normal distribution centered around day 300 with a standard deviation of 100 days. Mathematically, if D is a random variable representing the creation day of an instance:

D ∼ N(μ, σ²)

Where μ = 300 and σ = 100.

Nb: The distribution is constrained to lie between days 1 and 1400 in this model. μ and σ are calculated from the previous function.
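
A minimal sketch of this instantiation, assuming the constraint is enforced by clipping (resampling out-of-range draws would be an alternative):

```python
import numpy as np

rng = np.random.default_rng()

def creation_days(n_creators, mu=300, sigma=100, low=1, high=1400):
    """Draw each creator's creation day from N(mu, sigma^2), clipped
    to the [low, high] window required by the model."""
    days = rng.normal(mu, sigma, size=n_creators)
    return np.clip(np.round(days), low, high).astype(int)

days = creation_days(100)   # 100 simulated creator instances
```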

Equity Control & Optimization — Decentralizing Rewards

Following the implementation of constraints on the 3 Horizons, significant imbalances may have already affected fairness in reward distribution. How can we remedy this?

We formulated a custom objective function to meet all the constraints and issues related to our use case; it calculates the rewards distributed to Fans, TVL holders, and quality content contributors.

  • Reward(TVL, Fans, QualityContributors): the reward function, which takes three inputs: the TVL, the number of Fans, and the quality content contributors.
  • αTVL and αFans: proportionality constants that determine how the TVL and Fans values influence the reward.
  • The exponential terms signify exponential decay: a term of the form e^(−αTVL·TVL) approaches zero as the TVL grows, due to the nature of the exponential function; similarly, e^(−αFans·Fans) approaches zero as the number of Fans grows.
  • Bonus(QualityContentScore): the Bonus distributed to quality content contributors. It is constant.

By multiplying by Total Tokens/Total Score, our formula normalizes the combined score of TVL and Fans to determine how many tokens should be distributed as rewards. If Total Score is the sum of all incentive scores from participants, then this formula ensures that all rewards total “Total Tokens”.

Our objective function integrates all 3 “Horizons” as our use case requires; they are in its formula. The Dynamic Horizon is incorporated in the term (Bonus(QualityContentScore)), and the economic and adoption Horizons are incorporated in the exponential functions; incentives to Fans and TVL are expected to decrease as their number increases.

But this only works as long as the coefficients of the exponential functions are well dimensioned! Actually, in the first round, the result obtained may well meet the constraints on the three “Horizons”, but the distribution may turn out to be largely unequal, with a small part of the users absorbing the majority of the rewards while the others capture almost nothing.

In the example above, the resulting Gini coefficient is 0.8606, which is particularly inequitable, given the exponents initially chosen for the exponential terms (αTVL = 0.01 and αFans = 0.003) to distribute the 20 million tokens.

The resolution of such an objective function then consists of iterating to find the values of αTVL and αFans that bring the distribution to its equity target.

The graph below recalls the objective function to be optimized so that its Lorenz curve aims for a Gini coefficient G close to 0.5 (best fit). Recall that G = A/B, where A is the area between the real Lorenz Curve and the equity line and B is the total area under the equity line. The smaller A is, the closer the curve is to its equity line.

Lorenz Curve of our formula that relates the cumulative proportion of creators in percentage to the cumulative proportion of reward tokens distributed in percentage. (Rewards for Quality Content contributors are omitted, as they do not significantly impact the curve and only play a marginal role in achieving the best fit).

Solving the best fit problem follows an optimization process tailored for our Use Case.

  1. Maximization Objective: The goal is to approach a target which is a Gini coefficient of ~0.5. In this sense, we want to maximize (1-G).
  2. Decision Variables: These are the exponential terms of our formula.
  3. Constraints: They are at the discretion of the protocol itself. This mainly concerns the total number of tokens allocated to rewards (in our use case, 20 million), as well as the range of the exponential variables, which determine the rate of decline, that is to say, the difference between rewards distributed to early adopters and the others (a strong rate of decline increases this gap), the project lifespan, etc.
  4. Optimal Solution: The optimal solution consists, given the decision variables and constraints, in approaching the target.
  5. Numerical Optimization: After a certain number of iterations, we reach the desired target (a sketch of this loop follows the list).
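
Putting the pieces together, a hedged sketch of this iterative search (the reward formula encodes our reading of the bullets above, and the creator population is simulated; Project X's exact function and data are not reproduced; `lorenz_gini` is the helper defined earlier):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(7)
TOTAL_TOKENS, TARGET_GINI = 20_000_000, 0.5

# Simulated creator population (stand-ins for the modeled TVL/Fans data)
tvl = rng.lognormal(8, 1.5, size=1000)
fans = rng.lognormal(4, 1.2, size=1000)
bonus = 0.01                                    # constant quality bonus

def rewards(a_tvl, a_fans):
    score = np.exp(-a_tvl * tvl) + np.exp(-a_fans * fans) + bonus
    return TOTAL_TOKENS * score / score.sum()   # normalize to the budget

# Iterate over candidate (alpha_TVL, alpha_Fans) pairs and keep the pair
# whose Gini coefficient lands closest to the 0.5 target
best = min(product(np.geomspace(1e-5, 1e-1, 30), repeat=2),
           key=lambda a: abs(lorenz_gini(rewards(*a))[2] - TARGET_GINI))
print("alpha_TVL, alpha_Fans =", best)
```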

The diagram below shows how, by taking into account the constraints of the protocol on “Horizons” 1, 2, and 3, we improved the equity between users in a decentralized manner and in a way that is satisfactory for the use case protocol, which remains the arbitrator of its incentive strategy.

This graph displays, for 100 creators randomly selected and sorted by their Total Value, the token reward distribution in line with the incentive strategy (our three colors) of the protocol X. To calculate the Gini coefficient, the data is reshaped to correlate the cumulative proportion of our creators with the cumulative proportion of issued tokens. The Gini coefficient reaches the desired target when the terms αTVL and αFans of our objective function are respectively 0.0003 and 0.0001.

Quod Erat Demonstrandum

To summarize: at the beginning of our approach is the determination of a primary objective. Is it about limiting selling pressure, or stimulating the economic activities (in the broad sense) of a protocol? In the latter case, we define several secondary objectives according to what we have called the “Horizons” of a protocol. Key indicators are identified to weight the incentive centers of that protocol. Incentive scores can then be calculated, which constitute the initial parameters of a whole incentive system.

From there, a recommendation for the issuance of reward tokens can be made.

Then comes a crucial modeling step. Using a set of test data, we calibrate the token distribution suitable for the protocol’s incentive strategy.

But is this reward distribution equitable?
It all depends on the equity requirements of the protocol! In other words, it is a trade-off, but one operating on quantitative variables (especially the initial parameters of the protocol and the variables of the reward issuance curve). We use the Gini coefficient of a Lorenz Curve and a reward objective function to provide the protocol with all the measures of equity control, giving meaning to a “decentralized control.”

We believe that this quantitative study offers results that go far beyond the specific use case from which we started. Indeed, the methodology we have implemented has the major advantage of being adaptable and extendable to a multitude of cases.
Our methodology can be adapted to any protocol wishing to develop a reward issuance system targeting the incentive centers it has previously identified and wants to favor. Any protocol’s incentive strategy, taking into account its dynamics, its economic activities, and its specific acquisition objectives, can be embedded, with some adjustments and modeling, in a complete objective function integrating these three “Horizons”: QED.

At the end of this tokenomic engineering study, Nomiks is proud to announce the concrete implementation of such control over a decentralized and balanced reward distribution (according to the specific incentive strategy of a protocol): it is the pillar of a “rewards” module (among other components like “ICO”, “allocation”, “Vesting” and “Ruin Tests”) of its online SaaS software.

If you want to know more about this module, understand all its features, or be part of the cohort of early users or beta testers: contact@nomiks.io


If you find an error or a typo, or simply want further explanation, please contact our Research Analyst or our Head Data Science.

This article, written under the guidance of Pascal Duval, Research Analyst at Nomiks, is the result of a collaboration of the Nomiks team. The methodology and analysis conducted are notably the result of the work of Yann Mastin, our Head Data Science.


Nomiks

Nomiks is a token design & risk management research lab. We design, audit and stress test your token economy.