An introduction to objective functions

Nomiks
28 min read · Dec 7, 2023


Introduction

An objective function is a function used to evaluate the quality or performance of a system, model, or solution with respect to a specific objective. Objective functions in economics are mathematical expressions used to model the behavior of economic agents (individuals, firms, governments) with the goal of maximizing or minimizing some quantity. They are commonly used in economic analysis, operations research, and other fields to make rational decisions in light of specific objectives and constraints.
This article examines the use of objective functions in the economic context, and in particular in the crypto economy.
We take a look at some local and global optimization methods, with an emphasis on evolutionary algorithms, which seem particularly congruent with Web3.
The reason for this congruence, however, will not be explained in this article, but will be left to the imagination of the reader.

The principles

When solving an objective function problem, there are several things to consider:

  1. What we want to maximize (or minimize): Objective functions can be used to maximize or minimize some quantity. For example, a company may want to maximize its profits, while a consumer may want to minimize his purchase cost.
  2. Decision variables: Objective functions generally depend on decision variables, which are the parameters that the economic agent can control or adjust to achieve a goal.
  3. Constraints: Often there are constraints or limitations that must be respected. For example, a company may have a limited budget for advertising, or a consumer may have a time limit for making a purchase decision.
  4. Optimal Solutions: The goal is to find the values of the decision variables that optimize the objective function, given the constraints. Optimal values are those that maximize or minimize the desired quantity.
  5. Numerical Optimization: Objective functions are often solved using numerical optimization methods (our aim is not to cover them all, but to provide a synopsis in the appendix).
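To make these ingredients concrete before turning to the economic examples, here is a minimal sketch in Python (with invented numbers) of a firm choosing a production quantity to maximize profit under a capacity constraint:

from scipy.optimize import minimize_scalar

# Toy objective (invented numbers): profit(q) = revenue - cost for quantity q.
# Decision variable: q. Constraint: capacity 0 <= q <= 15.
def profit(q):
    return 10 * q - 0.5 * q**2 - 20

# Maximizing profit is the same as minimizing its negative
res = minimize_scalar(lambda q: -profit(q), bounds=(0, 15), method="bounded")
print(f"Optimal quantity: {res.x:.2f}, profit: {profit(res.x):.2f}")  # q = 10, profit = 30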

First example taken from monetary policy:

Suppose a central bank wants to stabilize inflation at a certain target (obviously below current or actual inflation), taking into account certain parameters. To do this, the central bank uses the so-called Taylor rule, a formula used to guide monetary policy, in particular to determine the policy rate the central bank should set in response to inflation and output differentials. This is our objective function:

InterestRateLoan = r* + ITarget + a × (IReal − ITarget) + b × ProductionGap

where:

  • InterestRateLoan is a prime rate, in this case the interest rate on loans to commercial banks.
  • r* is the natural rate of real interest. This is the rate that would prevail if the economy were operating at its potential (i.e., with no output gap) and if actual inflation were equal to target inflation. In many applications of the Taylor rule, its value is often set at 2%. For us, it’s a constraint variable.
  • IReal is the rate of inflation, currently 5.9%. This is an initial figure.
  • ITarget is the target inflation rate that the central bank aims to achieve. This is a decision variable that we bound between 2% and 5%.
  • a is the central bank’s responsiveness to inflation differentials. The higher the value of a, the more vigorously the central bank reacts to inflation differentials. If current inflation is above target, the central bank will raise its policy rate more with a high a than with a low a.
  • b is the responsiveness of the central bank to the production gap (see below). The higher the value of b, the stronger the central bank’s response to the output gap.
    In short, a and b are tools for the central bank to calibrate its response to economic challenges. They are determined by the central bank’s preference for price stability over the stabilization of economic activity. In practice, the exact values of a and b can be debated and vary depending on the central bank and the nature of economic conditions. a and b are decision variables that we bound as follows: 0.1 ≤ a, b ≤ 2.
  • ProductionGap is the difference between the current level of output (or GDP) and the economy’s “natural” or potential level of output. A positive output gap indicates an overheating economy, while a negative gap indicates an underutilization of resources. The central bank can raise interest rates to cool an overheated economy or lower them to stimulate an underperforming one. For us, this is a constraint that we have set at -1.
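Before optimizing anything, we can evaluate the rule once. The decision-variable values below (a = 0.5, b = 0.5, ITarget = 3%) are arbitrary, chosen only to show the mechanics:

# One-off evaluation of the Taylor rule with illustrative decision-variable values
r_star = 2.0            # natural real interest rate (constraint)
IReal = 5.9             # current inflation
ProductionGap = -1.0    # output gap (constraint)
a, b, ITarget = 0.5, 0.5, 3.0  # arbitrary decision-variable values

InterestRateLoan = r_star + ITarget + a * (IReal - ITarget) + b * ProductionGap
print(InterestRateLoan)  # 2 + 3 + 0.5 * 2.9 + 0.5 * (-1) = 5.95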

For practice and to become familiar with objective functions, let’s look for a value of InterestRateLoan as close as possible to 3.5% (an arbitrary target, chosen for the exercise), i.e. let’s minimize the difference (InterestRateLoan − 3.5). The optimization problem thus described is formulated as a linear equation that relates the interest rate on commercial bank loans to several other variables: actual inflation, target inflation, the production gap, and the central bank’s responsiveness to inflation and to this gap. The key features that make this a largely linear optimization problem are:

  1. Linear objective function: For fixed reactivity coefficients, the Taylor rule formula is a linear combination of variables and coefficients, with no higher powers or non-linear functions (e.g., logarithms, exponentials). Note, however, that once a and ITarget are both treated as decision variables, the product a × (IReal − ITarget) introduces a mild non-linearity.
  2. Linear constraints: These constraints are also linear. For example, the inflation target constraint (between 2% and 5%) and the ranges for the reactivity coefficients a and b (between 0.1 and 2) are linear constraints. Similarly, the natural rate of real interest set at 2% and the output gap set at -1 are constants.

Despite the largely linear nature of this problem, the nonlinear least squares algorithm may be preferable for reasons of simplicity of formulation, convergence, and accuracy. It handles the specific objective function of the problem directly (including the product term noted above) and provides solutions close to the actual optimization objective.

Example of a least squares optimization code.

from scipy.optimize import least_squares

# Variable names kept from the French original:
# IRéel => IReal
# ICible => ITarget
# EcartDeProduction => ProductionGap
# TauxInteretPret => InterestRateLoan

# Data
r_star = 2.0              # natural real interest rate
IRéel = 5.9               # current inflation rate
EcartDeProduction = -1.0  # output gap

# Residual function for the least squares method
def taylor_residual(params):
    a, b, ICible = params
    # Interest rate according to the Taylor rule
    TauxInteretPret = r_star + ICible + a * (IRéel - ICible) + b * EcartDeProduction
    return TauxInteretPret - 3.5  # we want to minimize this difference

# Bounds for a, b and ICible
bounds = ([0.1, 0.1, 2], [2, 2, 5])  # (lower bound, upper bound) for each variable

# Initial guess for a, b and ICible
initial_guess = [0.5, 0.5, 4]

# Solve with nonlinear least squares
result = least_squares(taylor_residual, initial_guess, bounds=bounds)

# Results
a_optimized, b_optimized, ICible_optimized = result.x
TauxInteretPret_optimized = r_star + ICible_optimized + a_optimized * (IRéel - ICible_optimized) + b_optimized * EcartDeProduction
if result.success:
    output = {
        "a optimized": a_optimized,
        "b optimized": b_optimized,
        "ICible optimized": ICible_optimized,
        "optimized loan interest rate": TauxInteretPret_optimized
    }
else:
    output = "Error: optimization failed."
print(output)

{'a optimized': 0.10364142688889007,
 'b optimized': 1.6171517032331146,
 'ICible optimized': 2.7953849717662833,
 'optimized loan interest rate': 3.500000000000003}

The function taylor_residual returns, for a triplet of values (a, b, ICible), the difference between the interest rate calculated using the Taylor rule and the 3.5% target. The least squares algorithm attempts to minimize the absolute value of this residual. The starting point of the optimization is defined by initial_guess, a vector containing the initial estimates of the triplet. From this point, the algorithm explores the parameter space to find the values that minimize the residual function, while respecting the bounds defined by bounds.

Initial value dependence is a common problem in nonlinear optimization, as many of these methods are based on iterative procedures that start from an initial estimate. If the function to be optimized has multiple local minima, the algorithm may find different local minima depending on the initial estimate. To counteract this, we can also use a multi-start method: this approach involves running the optimization algorithm several times with different random initial estimates and selecting the best solution from all the starts. This method is particularly useful when the objective function has multiple local minima and you want to increase your chances of finding the global minimum.

Example of an optimization code implementing a multi-start method on top of the nonlinear least squares method

from scipy.optimize import least_squares
import numpy as np

# Data
r_star = 2.0              # natural real interest rate
IRéel = 5.9               # current inflation rate
EcartDeProduction = -1.0  # output gap

# Residual function for the least squares method
def taylor_residual(params):
    a, b, ICible = params
    # Interest rate according to the Taylor rule
    TauxInteretPret = r_star + ICible + a * (IRéel - ICible) + b * EcartDeProduction
    return TauxInteretPret - 3.5  # we want to minimize this difference

# Bounds for a, b and ICible
bounds = ([0.1, 0.1, 2], [2, 2, 5])  # (lower bound, upper bound) for each variable

# Number of random starts
num_starts = 10

# Store the result of each start
results = []
for _ in range(num_starts):
    # Generate a random initial guess for a, b and ICible within the bounds
    initial_guess = np.random.rand(3) * (np.array(bounds[1]) - np.array(bounds[0])) + np.array(bounds[0])
    # Solve with nonlinear least squares
    result = least_squares(taylor_residual, initial_guess, bounds=bounds)
    # Add the result to the list
    results.append((result.fun, result.x))

# Sort the results by objective value and select the best solution
best_result = min(results, key=lambda x: np.sum(np.abs(x[0])))

# Results
a_optimized, b_optimized, ICible_optimized = best_result[1]
TauxInteretPret_optimized = r_star + ICible_optimized + a_optimized * (IRéel - ICible_optimized) + b_optimized * EcartDeProduction
output = {
    "a optimized": a_optimized,
    "b optimized": b_optimized,
    "ICible optimized": ICible_optimized,
    "optimized loan interest rate": TauxInteretPret_optimized
}
print(output)

{'a optimized': 0.13643581628975424,
 'b optimized': 1.8508182620893088,
 'ICible optimized': 2.9480691696148216,
 'optimized loan interest rate': 3.500000000000007}

In this script, multiple starts (defined by num_starts) are performed, with randomly generated initial estimates for the parameters a, b, and ICible at each iteration. Each result is stored in the results list, and after all starts are complete, we select the solution with the lowest value of the objective function (i.e., the smallest absolute residual) as our best solution.

Unlike local optimization methods, global optimization methods are designed to search the entire solution space for the global minimum. These global methods include heuristic algorithms, which are often inspired by natural phenomena to explore the solution space. Particle Swarm Optimization (PSO) is an example of such a heuristic algorithm. Inspired by the social behavior of birds in swarms, this method differs from local approaches in that it explores the search space with multiple solutions (particles) that adjust their positions according to their own experience and that of other particles.

Example of a PSO global optimization code

from pyswarm import pso
import numpy as np

# Data
r_star = 2.0              # natural real interest rate
IRéel = 5.9               # current inflation rate
EcartDeProduction = -1.0  # output gap

# Objective function for PSO
def taylor_objective(params):
    a, b, ICible = params
    # Interest rate according to the Taylor rule
    TauxInteretPret = r_star + ICible + a * (IRéel - ICible) + b * EcartDeProduction
    return np.abs(TauxInteretPret - 3.5)  # we want to minimize this difference

# Bounds for a, b and ICible
lb = [0.1, 0.1, 2]  # lower bounds
ub = [2, 2, 5]      # upper bounds

# Run particle swarm optimization
xopt, fopt = pso(taylor_objective, lb, ub)

# Results
a_optimized, b_optimized, ICible_optimized = xopt
TauxInteretPret_optimized = r_star + ICible_optimized + a_optimized * (IRéel - ICible_optimized) + b_optimized * EcartDeProduction
output = {
    "a optimized": a_optimized,
    "b optimized": b_optimized,
    "ICible optimized": ICible_optimized,
    "optimized loan interest rate": TauxInteretPret_optimized,
    "objective function": fopt
}
print(output)

Stopping search: Swarm best objective change less than 1e-08
{'a optimized': 0.1,
 'b optimized': 2.0,
 'ICible optimized': 3.233333331518751,
 'optimized loan interest rate': 3.499999998366876,
 'objective function': 1.63312385836889e-09}

In this script, pso is the function that executes the PSO algorithm. It takes as input the objective function and the lower (lb) and upper (ub) bounds of the parameters to be optimized. It returns the best solution found (xopt) and the value of the objective function at this solution (fopt).

Another global method: Simulated annealing

Simulated annealing is a probabilistic optimization technique that mimics the controlled cooling process known in metallurgy as annealing. It is useful for finding a global minimum in a complex search space. To use simulated annealing in Python, you can use the scipy library, whose dual_annealing function provides an implementation of this method.

import numpy as np
from scipy.optimize import dual_annealing

# Data
r_star = 2.0              # natural real interest rate
IRéel = 5.9               # current inflation rate
EcartDeProduction = -1.0  # output gap

# Objective function for simulated annealing
def taylor_objective(params):
    a, b, ICible = params
    # Interest rate according to the Taylor rule
    TauxInteretPret = r_star + ICible + a * (IRéel - ICible) + b * EcartDeProduction
    return np.abs(TauxInteretPret - 3.5)  # we want to minimize this difference

# Bounds for a, b and ICible
bounds = [(0.1, 2), (0.1, 2), (2, 5)]  # (lower bound, upper bound) for each variable

# Run simulated annealing
result = dual_annealing(taylor_objective, bounds)

# Results
a_optimized, b_optimized, ICible_optimized = result.x
TauxInteretPret_optimized = r_star + ICible_optimized + a_optimized * (IRéel - ICible_optimized) + b_optimized * EcartDeProduction
output = {
    "a optimized": a_optimized,
    "b optimized": b_optimized,
    "ICible optimized": ICible_optimized,
    "optimized loan interest rate": TauxInteretPret_optimized,
    "objective function": result.fun
}
print(output)

{'a optimized': 0.1340434000475078,
 'b optimized': 1.064008595590175,
 'ICible optimized': 2.04762286634576,
 'optimized loan interest rate': 3.5000000000158717,
 'objective function': 1.5871748360041238e-11}

In this script, dual_annealing is the function that executes the simulated annealing algorithm. It takes as input the objective function and the bounds of the parameters to be optimized.

Simulated annealing, like other global optimization methods, can take longer to converge, especially for high-dimensional search spaces or complex objective functions. It is also known to allow efficient exploration of the solution space, reducing the probability of getting stuck in local minima.

Global optimization in general (which includes PSO and the annealing method seen here) can take longer to converge than local optimization methods, especially when the search space is large or the objective function is complex.

In most practical cases, the least squares algorithm will provide a solution in much less time than an exhaustive search. However, as we have seen, there is a huge number of triplets that satisfy the objective of bringing the interest rate to 3.5%, because the objective function has multiple minima. A central bank concerned with finding the smallest value of ICible may prefer to order these triplets by increasing ICible. This can be done using a random search approach: a large number of random starting points are generated, and the nonlinear least squares method is run from each one.

import numpy as np
from scipy.optimize import least_squares

# Data
r_star = 2.0              # natural real interest rate
IRéel = 5.9               # current inflation rate
EcartDeProduction = -1.0  # output gap

# Residual function for the least squares method
def taylor_residual(params):
    a, b, ICible = params
    # Interest rate according to the Taylor rule
    TauxInteretPret = r_star + ICible + a * (IRéel - ICible) + b * EcartDeProduction
    return TauxInteretPret - 3.5  # we want to minimize this difference

results = []
# Generate 1000 random starting points and optimize from each one
for _ in range(1000):
    initial_guess = [np.random.uniform(0.1, 2), np.random.uniform(0.1, 2), np.random.uniform(2, 5)]
    result = least_squares(taylor_residual, initial_guess, bounds=([0.1, 0.1, 2], [2, 2, 5]))
    if result.success:
        a_optimized, b_optimized, ICible_optimized = result.x
        TauxInteretPret_optimized = r_star + ICible_optimized + a_optimized * (IRéel - ICible_optimized) + b_optimized * EcartDeProduction
        results.append((a_optimized, b_optimized, ICible_optimized, TauxInteretPret_optimized))

# Sort the results by increasing ICible (ties broken by distance to the target)
sorted_results = sorted(results, key=lambda x: (x[2], abs(x[3] - 3.5)))

# Display the first 50 results
for i, (a, b, ICible, TauxInteretPret) in enumerate(sorted_results[:50]):
    print(f"Result {i+1}: a = {a:.4f}, b = {b:.4f}, ICible = {ICible:.4f}, TauxInteretPret = {TauxInteretPret:.4f}")

A possible result is:

Result 1: a = 0.3217, b = 1.7596, ICible = 2.0075, TauxInteretPret = 3.5000
Result 2: a = 0.1714, b = 1.1788, ICible = 2.0125, TauxInteretPret = 3.5000
Result 3: a = 0.2286, b = 1.4012, ICible = 2.0127, TauxInteretPret = 3.5000
Result 4: a = 0.2777, b = 1.5927, ICible = 2.0135, TauxInteretPret = 3.5000
Result 5: a = 0.3361, b = 1.8203, ICible = 2.0140, TauxInteretPret = 3.5000
Result 6: a = 0.1379, b = 1.0554, ICible = 2.0206, TauxInteretPret = 3.5000
Result 7: a = 0.1184, b = 0.9850, ICible = 2.0263, TauxInteretPret = 3.5000
Result 8: a = 0.3012, b = 1.6948, ICible = 2.0290, TauxInteretPret = 3.5000
Result 9: a = 0.3169, b = 1.7610, ICible = 2.0368, TauxInteretPret = 3.5000
Result 10: a = 0.3226, b = 1.7838, ICible = 2.0382, TauxInteretPret = 3.5000
...
Result 20: a = 0.1245, b = 1.0464, ICible = 2.0697, TauxInteretPret = 3.5000
...
Result 50: a = 0.1637, b = 1.3118, ICible = 2.2074, TauxInteretPret = 3.5000

Summary considerations

In all the examples we have given, we obtained different optimal values for a, b and ICible. This difference is explained by the nature of the algorithms and the shape of the objective function.

  1. Local vs. global optimization: Local optimization methods, as implemented in functions such as least_squares, generally use gradient-based algorithms or variants thereof (such as Levenberg-Marquardt) and tend to find the local minimum closest to the starting point. In contrast, global optimization methods, such as PSO or simulated annealing, are designed to explore a larger search space to find the global minimum. This can lead to different optimal values for the parameters.
  2. Objective function: If the objective function has several local minima, global optimization methods are generally preferable, as they are less likely to get stuck in a non-optimal local minimum.
  3. Complexity of the parameter space: In a complex parameter space with many dimensions, global optimization methods can find parameter combinations that local methods won’t discover.
  4. Stability and reproducibility: Some methods, particularly stochastic ones like PSO, can give slightly different results at each run, due to their random nature. Simulated annealing, although a global method, can also be subject to this variability.

Which method is the best depends on several factors:

  • Accuracy: If all methods give the same InterestRateLoan, this indicates that they all reach a solution that satisfies the condition of our objective function. However, the “best” method will be the one that gives the most accurate solution (i.e., the one closest to the true global minimum), if this accuracy is known.
  • Consistency: A method that produces consistent results across runs may be preferred.
  • Computational complexity: If a method achieves similar accuracy but with fewer or faster iterations, it may be considered superior.
  • Understandability and control: Sometimes a method may be preferred if it is easier to understand and implement, or if it offers more control over the optimization process (e.g., by adjusting algorithm parameters). We won’t discuss this aspect here.

In practice, it is often advisable to use several optimization methods and compare the results to gain more confidence in the solution found. If different methods converge to the same solution, this increases confidence in the optimality of that solution.

We ran several optimization algorithms on our Taylor function.

These results show the optimal parameters found by each method described above to minimize the objective function, the InterestRateLoan (TauxInteretPret) calculated from these parameters, and the time taken by each method.

  1. Least Squares: This method found parameters that give an InterestRateLoan of 3.501102, slightly above the 3.5 target. The execution time is quite fast.
  2. PSO (Particle Swarm Optimization): This global optimization method found parameters giving an InterestRateLoan of 3.500077, closer to the target, with a reasonable execution time.
  3. Basinhopping (a stochastic method related to simulated annealing): This method succeeded in finding parameters that give an InterestRateLoan exactly equal to the 3.5 target. However, it takes longer to run than the previous methods.
  4. Multi-start: This method restarts the algorithm several times from different starting points to avoid local minima. It found parameters giving an InterestRateLoan slightly below the target, at 3.499999. It takes longer than the others because of the multiple starts.

In summary, if accuracy were our main concern, Basinhopping would have given the best result, hitting the target exactly. However, if execution time matters, Least Squares or PSO may be better options because they are faster, although they don’t hit the 3.5 target exactly. The multi-start method offers a good compromise between accuracy, execution time, and consistency.
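As an illustration of this kind of comparison, here is a minimal timing harness restricted to the SciPy methods used above (the exact figures will vary from run to run and from machine to machine, so treat it as a sketch rather than a benchmark):

import time
import numpy as np
from scipy.optimize import basinhopping, dual_annealing, least_squares

r_star, IRéel, EcartDeProduction = 2.0, 5.9, -1.0

def residual(params):
    a, b, ICible = params
    return r_star + ICible + a * (IRéel - ICible) + b * EcartDeProduction - 3.5

def objective(params):
    return abs(residual(params))

bounds = [(0.1, 2), (0.1, 2), (2, 5)]
x0 = [0.5, 0.5, 4]

def timed(label, run):
    t0 = time.perf_counter()
    x = run()
    print(f"{label:<15} |residual| = {objective(x):.2e}  time = {time.perf_counter() - t0:.3f}s")

timed("least_squares", lambda: least_squares(residual, x0, bounds=([0.1, 0.1, 2], [2, 2, 5])).x)
timed("dual_annealing", lambda: dual_annealing(objective, bounds, seed=42).x)
timed("basinhopping", lambda: basinhopping(objective, x0, niter=50, seed=42,
    minimizer_kwargs={"method": "L-BFGS-B", "bounds": bounds}).x)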

And in Tokenomics? Trent McConaghy

In a groundbreaking article, Trent McConaghy, with his background in electrical and civil engineering, reverse-engineers the incentive structure of Bitcoin.

Let’s decompose this objective function:

E(Ri) ∝ Hi × T

  1. Calculation of the expectation: The expected reward for actor i depends on the probability that this actor will succeed in solving a block, which depends directly on his hash power Hi. The higher the hash power of actor i, the higher his probability of solving a block.
  2. Product Hi × T: Actor i’s hash power Hi is multiplied by the number of bitcoins distributed per block, T. This value represents the proportion of the total mining reward that actor i can expect to receive by solving a block.
  3. Global expression: The expected reward E(Ri) for actor i is then proportional to the product of his hash power Hi and the number of bitcoins distributed per block T.

The expression E(Ri) ∝ Hi × T is a proportionality relationship rather than an equation. It expresses the relationship between the expected reward E(Ri) for actor i, the hash power of that actor Hi, and the number of bitcoins distributed per block T. So it is not an equation where both sides are equal, but a proportionality: the symbol ∝ indicates that both sides increase or decrease together, but not necessarily at a 1:1 rate. There could be a constant proportionality factor that is not specified. If the expression were E(Ri) = k × Hi × T with a constant factor k, it would indicate a direct relationship with a specific scaling factor. In the absence of this constant and with the use of ∝, the expression simply indicates a proportionality relationship without specifying the exact factor. In many models, especially when describing general relationships without going into exact details, proportionality is a concise and clear way of describing the relationship between variables.

The purpose of the objective function thus highlighted is first and foremost network security!

  • Expected Value: In statistics, expectation is the average value of a random variable. In the context of bitcoin mining, a miner’s reward expectation is the average of the rewards he can expect to receive over a large number of attempts (or blocks mined). If a miner has a 1% chance of solving the next block, then his reward expectation for that block is 1% of the total block reward. Example: Let’s say the total reward for solving a block is 10 bitcoins. If a miner has a 10% chance (due to his hash power) of solving the next block, then his expected reward is 0.10×10=1 bitcoin.
  • Hashrate and Block Reward: Multiplying the hashrate by the block reward in the equation seems to indicate that the more processing power a miner has, the more reward he can expect. This makes sense: the higher the hashrate, the higher the probability of solving the next block. The hashrate is not the only factor that determines the reward; the total block reward (which in the case of bitcoin decreases over time) is also a key factor.
  • Objective function: In the context of bitcoin mining, the objective function is to maximize reward expectation. Miners increase their hashrate to raise their expected reward, but they must also take into account the costs of doing so (such as electricity and equipment). An individual miner using standard mining hardware, such as an Antminer S19, might have a hashrate of about 100 terahashes per second (TH/s). When this miner successfully solves a block, he receives a reward of 6.25 bitcoins. In terms of expected value, however, this hashrate must be related to the total power of the network. As an indication, the total hashrate of the bitcoin network is about 150 EH/s (exahashes per second), an estimate based on recent trends. A hashrate of 100 TH/s therefore represents approximately 0.000067% of the total hashrate of the Bitcoin network, and the miner’s expected earnings for each block are simply 0.000067% of the total block reward. In other words, over time he can expect to earn an average of 0.000067% of 6.25 bitcoins per block mined by the network: about 0.0000042 BTC, or roughly 417 satoshis (a fraction of a euro at the current BTC rate).
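A minimal sketch of this expected-value calculation (the network figures are the rough estimates used above):

# Expected block reward for a miner, given his share of the network hashrate
miner_hashrate = 100e12    # 100 TH/s (e.g., one Antminer S19)
network_hashrate = 150e18  # ~150 EH/s, rough estimate
block_reward = 6.25        # BTC per block

share = miner_hashrate / network_hashrate  # probability of solving a given block
expected_btc = share * block_reward        # average BTC per block
print(f"Network share: {share * 100:.6f}%")                            # ~0.000067%
print(f"Expected reward: {expected_btc * 1e8:.0f} satoshis per block")  # ~417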

How well does Bitcoin do towards its objective function of maximizing security? The answer: incredibly well! From this simple function, Bitcoin has incentivized people to spend hundreds of millions of dollars to design custom hashing ASICs and building ASIC mining farms. Others are creating mining pools with thousands of participants. Now the hash rate is greater than all supercomputers combined. Electricity usage is greater than most small countries, and on track to overtake USA by July 2019. All in pursuit of Bitcoin token block rewards! (Not all of it is good, obviously.)

Token Engineering Case Studies — Analysis of Bitcoin, Design of Ocean Protocol. TE Series Part III.

Trent McConaghy

It all started with a simple block reward equation, but at the cost of energy consumption and entropy creation. Can we avoid this too, without questioning the ingenuity of Bitcoin (whose purpose, let’s remember, is network security!)?

With the emergence of large mining farms and mining pools, the issue of fairness in the distribution of rewards has become more complex, all while creating ecological waste.

McConaghy’s point is particularly interesting: if Bitcoin’s objective function were effectively left in the hands of an AI, we would find ourselves in the greatest danger. His point is that the design of an objective function is entirely a matter of the tokenomist’s ethics, and that we cannot blame a system (such as Bitcoin) for succeeding perfectly at what it was designed to do.

Ocean’s objective function

In the same article, McConaghy describes another objective function: that of the Ocean protocol, which he co-founded. The Ocean protocol is a communication and data exchange protocol designed to facilitate the sharing and monetization of data and digital assets, particularly in the field of artificial intelligence (AI) and machine learning.

This new objective function is designed to reward players who identify and invest in high-quality, relevant data assets, while taking into account the popularity of these assets in the network. It is defined as follows:

E(Rij) ∝ log10(Sij) × log10(Dj) × T × Ri

It links a player’s conviction to the actual popularity of the data asset. It is an incentive function: it encourages players to invest in quality data assets, since they are rewarded according to the actual popularity of these assets. Explanation of terms:

  1. Sij : actor i’s OCEAN token stake in data asset j: This is a measure of actor i’s belief in the value or relevance of data asset j. The more actor i invests in data asset j (in terms of OCEAN participation), the more he believes in its value. This encourages players to invest in data assets they consider relevant or valuable.
  2. Dj: a direct measure of the popularity of data asset j. If the data asset is frequently consumed, this indicates that it is considered valuable by the community.
  3. T (number of tokens distributed during a certain time interval): a constant determining the size of the reward. OCEAN has a token issuance rule somewhat similar to Bitcoin’s, but more sophisticated.
  4. Ri (attack vector mitigation): Here we extrapolate for demonstration purposes. In the Ocean documentation, this refers to a factor that reduces an actor’s reward if his behavior is deemed malicious or if he tries to exploit the system. For example, if an actor attempts to manipulate the popularity of a data asset to his advantage, Ri could decrease to reduce the incentive for such actions. If Ri = 1, this could indicate that no malicious behavior has been detected for actor i, so his reward is not mitigated. If 0 < Ri < 1, this would suggest that the actor has displayed some level of suspicious or malicious behavior, leading to a reduction in his reward. If Ri = 0, this could mean that highly malicious behavior has been detected, completely nullifying the actor’s reward.
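To see how these terms combine, here is a minimal sketch of the reward expectation for a single actor and data asset (the stake, popularity, and mitigation values are invented):

import numpy as np

# E(Rij) proportional to log10(stake) * log10(popularity) * T * Ri
def expected_reward(S_ij, D_j, T=10, R_i=1.0):
    return np.log10(S_ij) * np.log10(D_j) * T * R_i

print(expected_reward(S_ij=100, D_j=150))           # honest actor: reward unmitigated
print(expected_reward(S_ij=100, D_j=150, R_i=0.5))  # suspicious behavior: reward halved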

Example of an Ocean optimization problem.

Let’s maximize the expected reward of a user on the Ocean network (this is a simplification and an extrapolation, again by way of example).

Background: A user on the OCEAN network has 500 OCEAN tokens and wishes to maximize his expected reward E(Rij) by adjusting his OCEAN stake: judiciously choosing how to allocate it among 3 data assets (datatoken_j, datatoken_k, datatoken_l) and selecting his level of benevolence each time. In other words, maximize E(Rijkl), which is proportional to:

log10(Sij) × log10(Dj) × T × Rij + log10(Sik) × log10(Dk) × T × Rik + log10(Sil) × log10(Dl) × T × Ril

Data for each datatoken pool (three in all):

  1. Stakes: Sij is the number of OCEAN tokens invested by user i in datatoken j; Sik the number invested in datatoken k; Sil the number invested in datatoken l.
  2. Total number of OCEAN tokens in any datatoken pool = 1000
  3. Number of OCEAN tokens distributed during a time interval T = 10 (for the exercise)

Constraints:

  1. 0 ≤ Sij, Sik, Sil ≤ 500: a user’s investment in any datatoken pool cannot exceed half the tokens in that pool. The user must diversify.
  2. An attack vector value is set for each datatoken pool: 0.25 (close to malevolent) for pool L, 0.50 (neutral) for pool K, 1 (benevolent) for pool J.
  3. Other initial variables: Dj = 100, the average number of downloads per day (over x months) of datatoken j; Dk = 150, the average for datatoken k; Dl = 200, the average for datatoken l.

From the moment he wrote his founding articles on token engineering, Trent McConaghy stressed the importance of evolutionary algorithms.

These are families of algorithms whose principles are inspired by the theory of evolution to solve various problems. They are therefore bio-inspired calculation methods. The idea is to evolve a set of solutions to a given problem, with a view to finding the best results. These are so-called stochastic algorithms, as they iteratively use random processes. The vast majority of these methods are used to solve optimization problems, and in this sense they are metaheuristics, although the general framework is not necessarily dedicated to optimization algorithms in the strict sense. They are also classified as computational intelligence methods.

Wikipedia

Duly noted: We’ll use a genetic algorithm to find the best distribution of 500 ocean tokens among three pools of data tokens that maximizes the expected reward while respecting the constraints of the problem.

import random
random.seed(42)  # fix the seed of the random number generator

from deap import base, creator, tools, algorithms
import numpy as np

# Problem parameters
TOKENS_TOTAL = 500
TOKEN_PAR_POOL = 1000
DJ = 100
DK = 150
DL = 200
T = 10
R_ij = 1.0   # fixed for pool J
R_ik = 0.5   # fixed for pool K
R_il = 0.25  # fixed for pool L

# Fitness function with a constraint on the sum of S_ij, S_ik, S_il
def evalReward(individual):
    S_ij, S_ik, S_il = individual
    if abs(S_ij + S_ik + S_il - TOKENS_TOTAL) > 1e-6 or S_ij <= 0 or S_ik <= 0 or S_il <= 0:
        return -1,  # penalty for violating the constraints (also keeps log10 well-defined)
    return (np.log10(S_ij) * np.log10(DJ) * T * R_ij +
            np.log10(S_ik) * np.log10(DK) * T * R_ik +
            np.log10(S_il) * np.log10(DL) * T * R_il,)

# Type creation
creator.create("FitnessMax", base.Fitness, weights=(1.0,))
creator.create("Individual", list, fitness=creator.FitnessMax)
toolbox = base.Toolbox()

# Attribute generator
toolbox.register("attr_float", random.uniform, 0, TOKENS_TOTAL)

# Individual structure with a constraint on the sum of S_ij, S_ik, S_il
def create_individual():
    S_ij = random.uniform(0, TOKENS_TOTAL)
    S_ik = random.uniform(0, TOKENS_TOTAL - S_ij)
    S_il = TOKENS_TOTAL - S_ij - S_ik
    return [S_ij, S_ik, S_il]

toolbox.register("individual", tools.initIterate, creator.Individual, create_individual)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)

# Genetic operators
toolbox.register("evaluate", evalReward)
toolbox.register("mate", tools.cxBlend, alpha=0.5)
toolbox.register("select", tools.selTournament, tournsize=3)

# Custom mutation that keeps the sum of tokens equal to TOKENS_TOTAL
def custom_mutate(individual):
    individual[0] = random.uniform(0, TOKENS_TOTAL)
    individual[1] = random.uniform(0, TOKENS_TOTAL - individual[0])
    individual[2] = TOKENS_TOTAL - individual[0] - individual[1]
    return individual,

toolbox.register("mutate", custom_mutate)

# Population creation
population = toolbox.population(n=100)

# Genetic algorithm
NGEN = 40
for gen in range(NGEN):
    offspring = algorithms.varAnd(population, toolbox, cxpb=0.5, mutpb=0.2)
    fits = toolbox.map(toolbox.evaluate, offspring)
    for fit, ind in zip(fits, offspring):
        ind.fitness.values = fit
    population = toolbox.select(offspring, k=len(population))

# Display the best individual with details
best_ind = tools.selBest(population, 1)[0]
S_ij, S_ik, S_il = best_ind
print(f"Best Individual: [{best_ind[0]:.2f}, {best_ind[1]:.2f}, {best_ind[2]:.2f}]")
print(f"Best Fitness: {best_ind.fitness.values[0]:.2f}")
print(f"Tokens in datatoken j: {best_ind[0]:.2f} ({best_ind[0]/TOKEN_PAR_POOL*100:.2f}%)")
print(f"Tokens in datatoken k: {best_ind[1]:.2f} ({best_ind[1]/TOKEN_PAR_POOL*100:.2f}%)")
print(f"Tokens in datatoken l: {best_ind[2]:.2f} ({best_ind[2]/TOKEN_PAR_POOL*100:.2f}%)")

The benevolence/malevolence values for each pool are fixed and not part of the optimization in this example. A more interesting formulation would leave these values to the discretion of the user as decision variables (which is more likely in practice). This exercise is left to the reader.

How does the algorithm work?

  1. Initialization: The algorithm starts by creating an initial population of individuals. Each individual, in this case, represents a potential solution to the problem and is encoded as a vector of three numbers (the amounts Sij, Sik, Sil) representing the tokens invested in each pool of data tokens.
  2. Evaluation: Each individual is evaluated by a fitness function. This function calculates the quality of the solution represented by the individual. In our case, the fitness function is based on the formula given above to maximize the expected reward, taking into account the constraint that the sum of the invested tokens must equal 500.
  3. Selection: Individuals are selected for reproduction. Those with better fitness are more likely to be selected. This step simulates the process of natural selection, where the best individuals are more likely to pass on their genes to the next generation.
  4. Crossover: Selected individuals are mated and their vectors are mixed to produce offspring. This simulates sexual reproduction. In our case, crossover is performed by mixing the amounts invested in the parents’ datatoken pools to obtain those of the children.
  5. Mutation: With a certain probability, random mutations are applied to the offspring to introduce variability. This prevents the algorithm from getting stuck in local optima. In our case, the mutation randomly adjusts the amounts invested in each pool.
  6. New generation: The offspring form the next generation of individuals, and the process repeats. Each generation should theoretically be better than or at least equal to the previous one.
  7. Termination: The algorithm terminates after a specified number of generations, or when other termination criteria are met. The best solution found up to that point is taken as the optimal solution.

Ocean is one of the first projects to clearly present objective functions as part of its engineering, opening the way to a whole range of problems (and their solutions).

Conclusion

Objective functions in tokenomics, as demonstrated by the examples of Bitcoin and the Ocean protocol, serve as a powerful tool for translating the fundamental aspirations of a crypto-economic system into a tangible, actionable token design. By encapsulating the essence of a system’s goals and constraints, these functions pave the way for a simpler, more effective approach to aligning the complex dynamics of decentralized networks with desired economic and social outcomes.

In the context of Bitcoin, the objective function effectively aligns the incentives of network participants with the overall goal of network security. This alignment is evident in the immense resources and effort devoted to mining activities, indicating the effectiveness of the function in driving desired behavior, while raising crucial questions about ecological impact. Yet Bitcoin cannot be blamed for its success.

The Ocean Protocol represents a different facet of tokenomics, where the objective function is adapted to encourage the identification and support of valuable data. This approach not only encourages participation, but also fosters a data ecosystem that is both rich and relevant by aligning individual incentives with the collective goal of data utility and accessibility.

In our view, the use of objective functions in tokenomics opens up new horizons for the design of crypto-economic systems. These functions provide a systematic and quantifiable approach to incentive design, ensuring that the diverse and often competing interests of participants are consistently aligned with the overall goals of the system. The examples presented highlight not only the potential of objective functions to shape the behavior of decentralized networks, but also the importance of careful design to avoid unintended consequences.

As the field of tokenomics evolves, the exploration and refinement of objective functions will bridge the gap between a system’s theoretical underpinnings and its practical implementation, providing a clear framework for aligning incentives with desired outcomes.

Appendix

Optimization Algorithms Overview

In the realm of optimization, algorithms are broadly categorized into two types: global and local optimization methods. Global optimization methods aim to find the best solution over the entire search space, making them suitable for complex problems with multiple local optima. These methods are often more computationally intensive but provide a broader exploration of the solution space. Local optimization methods, on the other hand, focus on improving a solution in a specific region of the search space. They are typically faster but may converge to a local optimum rather than the global one.

Here’s a brief overview of the optimization methods used in this article:

  • Nonlinear least squares (local): gradient-based, fast, but sensitive to the starting point.
  • Multi-start and random search (local searches globalized): repeated local optimizations from many random starting points.
  • Particle Swarm Optimization (global, heuristic): a population of particles collectively explores the search space.
  • Simulated annealing and basinhopping (global, stochastic): accept occasional uphill moves to escape local minima.
  • Genetic algorithms (global, evolutionary): evolve a population of candidate solutions through selection, crossover, and mutation.

Each of these algorithms is designed to tackle specific types of optimization problems, taking into account the nature of the objective function and the constraints inherent to the problem. The choice of an algorithm depends on the problem’s characteristics and the balance between accuracy and computational efficiency required.

Glossary of Key Terms

  • Local Minima: In the context of optimization, a local minimum is a point where the function value is lower than at all other nearby points, but possibly higher than at a point farther away. Essentially, it’s the lowest point in its immediate vicinity. In complex functions, there may be multiple local minima.
  • Global Minima: The global minimum is the point where the function achieves its lowest possible value over its entire domain. In a well-behaved convex function, the global minimum is unique. A non-convex function can have several local minima, of which the lowest overall is the global minimum.
  • Linear: A function is linear if the change in the function is proportional to the change in the input variables. Linear functions are represented by straight lines in a two-dimensional space.
  • Convex: A set or a function is convex if the line segment between any two points in the set or function lies entirely in the set or below the function’s graph. In optimization, convex problems are easier to solve since local minima are also global minima.
  • Differentiable: A function is differentiable at a point if it has a derivative at that point. This means the function’s rate of change at that point is defined.
  • Non-linear: A function is non-linear if it does not form a straight line when graphed. Such functions have a variable rate of change and can represent more complex relationships.
  • Non-convex: A function or set that is not convex. In optimization, non-convex problems are more challenging as they can have multiple local minima.
  • Discrete: Relating to distinct, separate, and countable values or states. Discrete optimization problems involve variables that can only take on discrete values, like integers.
  • Continuous: Relating to unbroken and uninterrupted sequences or values. Continuous optimization problems involve variables that can take on any value within a given range.
  • Global Optimization: The process of finding the best solution from all feasible solutions, not just a local optimum in a specific area.
  • Hessian: In calculus, the Hessian matrix is a square matrix of second-order partial derivatives of a scalar-valued function. It describes the local curvature of a function of many variables.
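The difference between local and global minima, and between local and global search, can be seen in a few lines on a deliberately non-convex function (a hypothetical example, unrelated to the Taylor rule):

from scipy.optimize import dual_annealing, minimize

# f has two basins: a local minimum near x = +1.13 and the global one near x = -1.30
def f(x):
    return x[0]**4 - 3 * x[0]**2 + x[0]

for x0 in (-2.0, 2.0):
    res = minimize(f, [x0])  # gradient-based local search
    print(f"local search from {x0:+.0f}: x* = {res.x[0]:+.3f}, f = {res.fun:.3f}")

res = dual_annealing(f, bounds=[(-3, 3)], seed=0)  # stochastic global search
print(f"global search:           x* = {res.x[0]:+.3f}, f = {res.fun:.3f}")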


Nomiks

Nomiks is a token design & risk management research lab. We design, audit and stress test your token economy.