Akash Whitepaper

Curious Cosmonaut Research

Nov 10, 2022

The accuracy of this whitepaper and its originating information is in no way endorsed by Curious Cosmonaut Research. This paper’s content was aggregated from the Akash website on 11/7/2022 as part of a series to make content more accessible from one place in Cosmos.

Akash Network: Decentralized Cloud Infrastructure Marketplace

Overclock Labs

May 13, 2018

Version 0.0.3

Note: The Akash Network is an active research project and new versions of this paper will manifest at akash.network. For comments and suggestions please reach out at research@akash.network

Abstract

Cloud computing — the process of offloading work to remote servers — is inherently broken. While it mostly works as advertised, we’ve found that inefficiencies still plague the system. The products produced by the major cloud providers are usable, but they are limited by shortcomings that can be solved today with advancements in container technology and a powerful token economy. The purpose of this white paper is to put forward our plan for a cloud services market called Akash Network, the world’s first global spot market for cloud computing.

We see a future where the global cloud infrastructure of the world is decentralized and distributed between all cloud service providers; a market that deploys and liquidates (increasingly commoditized) data center compute in a secure, fast and transparently spot priced manner. Services are sold in a democratic but unified ecosystem that anyone can use.

In this paper, we present Akash, a cloud infrastructure network that is decentralized, competitive, and able to distribute applications between multiple cloud service providers around the globe. The paper will introduce the state of the existing market, outline how we are using the latest developments in serverless container orchestration to combat these issues, explain the basics and necessity of the network’s native token, AKASH, and finally present our roadmap for launch.

Contents

1 Introduction

1.1 A Troubled Industry

2 The Akash Network

2.1 The Akash Blockchain

2.2 The Akash Token, AKASH

3 Marketplace

4 Deployment

4.1 Manifest Distribution

4.2 Overlay Network

5 Automation

5.1 Example: Latency-Optimized Deployment

5.2 Example: Machine Learning Deployment

List of Figures

  1. Illustration of on-chain and off-chain interactions amongst various participants in the Akash network
  2. Summary of procurement from Marketplace. (1) User’s deployment order is posted to the orderbook (2) Datacenters post eligible fulfillment orders for the deployment order (3) The best fulfillment order is matched with the deployment order, creating a new lease.
  3. Illustration of Akash’s overlay network
  4. Illustration of slower performance due to higher latencies for end-users distributed across the globe for a single datacenter deployment
  5. Illustration of improved network performance by dynamically distributing workloads and their state across datacenters in close proximity to the end-users
  6. A machine learning batch job under less load running a single master and single worker node
  7. A machine learning batch job under load running a single master and multiple worker nodes

1 Introduction

The Akash Network (Akash) is a secure, transparent, and decentralized cloud computing marketplace that connects those who need computing resources (clients) with those that have computing capacity to lease (providers).

Akash acts as a “super” cloud platform (supercloud) — providing a unified layer above all providers on the marketplace so as to present clients with a single cloud platform, regardless of which particular provider they may be using.

Clients use Akash because of its cost advantage, usability, and flexibility to move between cloud providers, and the performance benefits of global deployments. Providers use Akash because it allows them to earn profits from either dedicated or temporarily-unused capacity.

1.1 A Troubled Industry

By 2020, cloud infrastructure providers will account for 53% of global internet traffic[Cisco(2016)], out of which Amazon, Google, and Microsoft will deliver 80% of the payload[Forrester(2017)].

While the cloud will deliver the majority of the workloads, the future of the internet stands at a risk of being consolidated, centralized, and at the mercy of these three providers.

The primary driver for cloud adoption is the promise of flexibility and cost advantage, but the reality is that the products offered by cloud providers are overpriced, complicated, and lock clients into ecosystems that limit their ability to innovate, compete, and have sovereignty over their infrastructure needs.

The difference in capital expenditure — purchasing hardware and leasing datacenter space — between running in the cloud and self-managing on-premise is marginal; however, the cloud providers have a significant advantage in operating expenditure because of their investments in automation with minimal human touch.

Even though running computing on-premise can offer much better flexibility, performance, and security, organizations are abandoning their datacenter operations and migrating to the cloud because they are finding it increasingly hard to justify the operating costs due to lack of adequate automation along with low utilization footprint. Idle, underutilized servers prove to be costly and wasteful. Analysts estimate that as many as 85% of servers in practice have underutilized capacity [Glanz(2012)] [Kaplan et al.(2008)Kaplan, Forrest, and Kindler] [Liu(2011)] [Koomey and Taylor(2015)].

Cloud providers drive margins by building hyper-scale installations, i.e., consolidating resources in a few datacenters for economic efficiency, and by cross-selling fully managed backend services, such as databases, cache stores, API gateways, etc.

Being hyper-scale allows them to oversubscribe their customers, driving higher margins but creating single points of failure. Geographically distributed workloads offer much better reliability and end-user performance; however, the cloud providers make it extremely hard for clients to be multi-regional because it doesn’t work in their best interest.

The cloud providers prefer customers to deploy their applications in a single datacenter and penalize them for being cross-regional or multi-zonal, usually through hefty bandwidth fees and variable regional pricing. This is why AWS’ pricing model is different for each region for the same exact resource.

Even though selling instances is lucrative, Cloud Providers usually charge a small amount for instances compared to the premium they charge for managed backend services (PaaS); analogous to the old burgers-and-fries model where a restaurant needs to sell burgers at a loss so that they can sell the more addictive fries at a high margin.

The PaaS services sold by the providers tend to be white-labeled open source projects where the original authors are never incentivized, and the cloud providers have no incentive to evolve the product. For example, AWS’ ElastiCache is a white-labeled version of Redis, an open source project — much loved by developers — written by Salvatore Sanfilippo and maintained by Redis Labs.

As of the writing of this paper, a managed Redis server in US East (Ohio) running on r3.8xlarge is priced at $31,449/yr [Amazon(2017a)], whereas the same instance without Redis costs $18,385/yr [Amazon(2017b)]. The extra $13,064 buys the customer nothing more than “peace of mind.” Neither Sanfilippo nor Redis Labs is compensated for their efforts.

Also, the more services a customer uses, the more dependent they become on the cloud provider. The complexity introduced by an increasing number of features, service availability concerns, and codification using non-standard APIs leads to customers being locked in by the cloud vendors, preventing clients from exploring better options in the marketplace while inhibiting innovation.

This model adopted by the providers stifles innovation, as it dramatically reduces the chance of an open source project succeeding. Cloud providers effectively act as middle-men that set the rules of engagement for the industry while making no contribution to society on the whole.

2 The Akash Network

The foundational design objective of the Akash Network is to maintain a low barrier to entry for providers while at the same time ensuring that clients can trust the resources that the platform offers them. To achieve this, the system requires a publicly-verifiable record of transactions within the network. To that end, the Akash Network is implemented using blockchain technologies as a means of achieving consensus on the veracity of a distributed database.

Akash is, first and foremost, a platform that allows clients to procure resources from providers. This is enabled by a blockchain-powered distributed exchange where clients post their desired resources for providers to bid on. The currency of this marketplace is a digital token, the Akash (AKASH), whose ledger is stored on a blockchain.

Akash is a cloud platform for real-world applications. The requirements of such applications include:

  • Many workloads deployed across any number of datacenters.
  • Connectivity restrictions which prevent unwanted access to workloads.
  • Self-managed so that operators do not need to constantly tend to deployments.

To support running workloads on procured resources, Akash includes a peer-to-peer protocol for distributing workloads and deployment configuration to and between a client’s providers.

Workloads in Akash are defined as Docker containers. Docker containers allow for highly-isolated and configurable execution environments, and are already part of many cloud-based deployments today.

2.1 The Akash Blockchain

The Akash blockchain provides a layer of trust in a decentralized and trustless environment. Clients inherently trust today’s large infrastructure Providers based primarily on the brand equity they’ve built over years. Akash does not and should not require that same leap of faith, since any Provider with capacity can compete to offer services on Akash. Instead, the blockchain earns trust via an open and transparent platform. Data on the chain is an immutable and public record of all transactions, including each Provider’s fulfillment history.

Akash is also politically decentralized. No single entity controls the network and no intermediary facilitates transactions. Therefore no entity is incentivized to control the network or to extract marginal revenue from it. As an example, a large company such as Coca-Cola can participate in the network as a Provider, providing compute to another large company or to an individual developer, yet all three parties are on equal footing in the network.

Figure 1: Illustration of on-chain and off-chain interactions amongst various participants in the Akash network

2.2 The Akash Token, AKASH

The Akash Token (AKASH) is used to simplify the exchange of value and align economic incentives with proper user behavior. The Akash token is the marketplace currency used to pay for leased compute infrastructure on Akash’s decentralized network. Our token serves two primary functions in Akash’s ecosystem.

In a market that is expected to reach $737 billion, with well over 21% annual growth [Gartner(2017)], the liquidity of AKASH will be matched by the demand for compute power. Along this line of thought, we have full confidence that the network and AKASH will achieve maximum liquidity for its early adopters and end-state users.

2.2.1 Staking

The stability of the Akash network relies on a staking system that prevents bad actors from abusing our system. A staking system provides a prohibitive monetary disincentive for bad actors who consider participating in our network. The risk of fraudulent behavior is highest when new, unknown providers join our network. Rather than requiring a centralized or federated approval process for new accounts, the Akash network allows anyone to join.

When a new provider chooses to offer its resources on the Akash network, rather than being approved, it must stake a meaningful value on the network in Akash tokens. There is no minimum stake amount, but participation in Akash Network governance is proportional to a provider’s stake, taken as a fraction of the sum of all stakes. Additionally, stake contribution is factored into a provider’s reputation score, which tenants may use as a deployment criterion.
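The proportionality described above can be sketched directly. This is a minimal illustration, not the network's implementation; the provider names and stake amounts are invented:

```python
def governance_weight(stakes: dict, provider: str) -> float:
    """A provider's governance weight: its stake as a fraction of all stakes."""
    total = sum(stakes.values())
    return stakes[provider] / total if total else 0.0

# hypothetical providers and staked amounts (in AKASH)
stakes = {"provider-a": 5_000, "provider-b": 15_000, "provider-c": 30_000}
weight = governance_weight(stakes, "provider-b")  # 15,000 / 50,000 = 0.3
```

Because weights are fractions of the total, they always sum to one across all providers.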

2.2.2 Global Payments

Akash tokens mitigate the foreign exchange risk that usually results from cross-border payments. Taking the place of fiat for these transactions, Akash tokens simplify the exchange of value in the cloud infrastructure industry. Our matching engine competitively prices each container compute against a prevailing market amount of Akash tokens. When a tenant is matched with a provider, the tenant pays Akash tokens to the network, which are subsequently paid to the provider according to the terms of the lease.

3 Marketplace

Infrastructure procurement — the process through which clients lease infrastructure from providers — on Akash is implemented through a decentralized exchange (marketplace).

The marketplace consists of a public order book and a matching algorithm. Clients place deployment orders, which contain a specification of the client’s service needs, and datacenters place fulfillment orders to bid on deployment orders. Deployment orders include the maximum amount the client is willing to pay for a fixed number of computing units (as measured by memory, cpu, storage, and bandwidth) for a specific amount of time; fulfillment orders declare the price that the provider will provide the resources for.

Deployment orders are open for a client-defined length of time, as measured to the second. While the deployment order is open, providers may post fulfillment orders to bid on it.

A fulfillment order is eligible to match with a deployment order if the fulfillment order satisfies all minimum specifications of the deployment order. Given a deployment order and a set of eligible fulfillment orders, the fulfillment order offering the lowest price will be matched with the deployment order. If multiple fulfillment orders are eligible for a match and offer the same price, the fulfillment order placed first will be matched with the deployment order.
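The matching rule above (lowest eligible price wins, earliest placement breaks ties) can be sketched in a few lines. This is a simplified model for illustration; eligibility here is reduced to a price check, whereas the real marketplace compares full resource specifications:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Fulfillment:
    provider: str
    price: int       # bid price for the requested resource units
    placed_at: int   # when the bid was placed (e.g., block height)

def match_order(max_price: int, bids: List[Fulfillment]) -> Optional[Fulfillment]:
    """Lowest eligible price wins; earliest placement breaks price ties."""
    eligible = [b for b in bids if b.price <= max_price]
    if not eligible:
        return None
    return min(eligible, key=lambda b: (b.price, b.placed_at))

bids = [Fulfillment("dc-1", 90, placed_at=2),
        Fulfillment("dc-2", 80, placed_at=5),
        Fulfillment("dc-3", 80, placed_at=3)]
winner = match_order(100, bids)  # dc-3: lowest price, placed before dc-2
```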

Businesses and individual consumers will want and need to protect how they are publicly displaying their use of compute power. To guard against competitor data mining and other attack vectors, a homomorphic encryption layer is added.

A lease is created when a match occurs between a deployment and fulfillment order. The lease contains references to the deployment and fulfilment orders. Leases will be the binding agent in fulfilling a deployment.

Figure 2: Summary of procurement from Marketplace. (1) User’s deployment order is posted to the orderbook (2) Datacenters posts eligible fulfillment orders for the deployment order (3) The best fulfillment order is matched with the deployment order, creating a new lease.

4 Deployment

Once resources have been procured, clients must distribute their workloads to providers so that they can execute on the leased resources. We refer to the current state of the client’s workloads on the Akash Network as a deployment.

A user describes their desired deployment in a manifest. The manifest is written in a declarative file format that contains workload definitions, configuration, and connection rules. Providers use workload definitions and configuration to execute the workloads on the resources they are providing, and use the connection rules to build an overlay network and firewall configurations.

A hash of the manifest is known as the deployment version and is stored on the blockchain-based distributed database.

4.1 Manifest Distribution

The manifest contains sensitive information which should only be shared with participants of the deployment. This poses a problem for self-managed deployments — Akash must distribute the workload definition autonomously, without revealing its contents to unnecessary participants.

To address these issues, we devised a peer-to-peer file sharing scheme in which lease participants distribute the manifest to one another as needed. The protocol runs off-chain over a TLS connection; each participant can verify the manifest they received by computing its hash and comparing this with the deployment version that is stored on the blockchain-backed distributed database.
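The verification step can be sketched as follows: a recipient hashes the manifest it received and compares the result against the deployment version recorded on chain. The hash function and the manifest encoding here are assumptions for illustration, not the network's actual wire format:

```python
import hashlib
import json

def deployment_version(manifest: dict) -> str:
    """Hash of the manifest; a canonical (sorted-key) JSON encoding is assumed."""
    canonical = json.dumps(manifest, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

def verify_manifest(manifest: dict, on_chain_version: str) -> bool:
    """Accept a received manifest only if its hash matches the on-chain version."""
    return deployment_version(manifest) == on_chain_version

manifest = {"services": {"web": {"image": "nginx"}},
            "expose": [{"service": "web", "to": ["internet"]}]}
version = deployment_version(manifest)  # stored on chain at deployment time
```

Any tampering with the manifest in transit changes its hash, so the comparison fails.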

In addition to providing private, secure, autonomous manifest distribution, the peer-to-peer protocol also enables fast distribution of large manifests to a large number of datacenters.

4.2 Overlay Network

By default, a workload’s network is isolated — nothing can connect to it. While this is secure, it is not practical for real-world applications. For example, consider a simple web application: end-user browsers should have access to the web tier workload, and the web tier needs to communicate to the database workload. Furthermore, the web tier may not be hosted in the same datacenter as the database.

On the Akash Network, clients can selectively allow communications to and between workloads by defining a connection topology within the manifest. Datacenters use this topology to configure firewall rules and to create a secure network between individual workloads as needed.
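As a sketch of how a connection topology expands into firewall allow-rules, consider the web/database example above. The topology format and rule representation here are invented for illustration:

```python
def firewall_rules(topology: dict) -> list:
    """Expand a 'destination -> allowed sources' map into (src, dst) allow pairs."""
    return [(src, dst) for dst, sources in topology.items() for src in sources]

topology = {
    "web": ["internet"],  # end-user browsers may reach the web tier
    "db":  ["web"],       # only the web tier may reach the database
}
rules = firewall_rules(topology)  # anything not listed stays isolated
```

Note the default-deny stance: the database accepts traffic from the web tier but not from the internet, matching the isolation-by-default behavior described above.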

Figure 3: Illustration of Akash’s overlay network

To support secure cross-datacenter communications, providers expose workloads to each other through an mTLS tunnel. Each workload-to-workload connection uses a distinct tunnel.

Before establishing these tunnels, providers generate a TLS certificate for each required tunnel and exchange these certificates with the necessary peer providers. Each provider’s root certificate is stored on the blockchain-based distributed database, enabling peers to verify the authenticity of the certificates it receives.

Once certificates are exchanged, providers establish an authenticated tunnel and connect the workload’s network to it. All of this is transparent to the workloads themselves — they can connect to one another through stable addresses and standard protocols.

5 Automation

The dynamic nature of cloud infrastructure is both a blessing and a curse for operations management. That new resources can be provisioned at will is a blessing; the exploding management overhead and complexity of said resources is a curse. The goal of DevOps — the practice of managing deployments programmatically — is to alleviate the pain points of cloud infrastructure by leveraging its strengths.

The Akash Network was built from the ground up to provide DevOps engineers with a simple but powerful toolset for creating highly-automated deployments. The toolset is comprised of the primitives that enable non-management applications — generic workloads and overlay networks — and can be leveraged to create autonomous, self-managed systems.

Self-managed deployments on Akash are a simple matter of creating workloads that manage their own deployment themselves. A DevOps engineer may employ a workload that updates DNS entries as providers join or leave the deployment; tests response times of web tier applications; and scales up and down infrastructure (in accordance with permissions and constraints defined by the client) as needed based on any number of input metrics. The “management tier” may be spread across all datacenters for a deployment, with global state maintained by a distributed database running over the secure overlay network.

5.1 Example: Latency-Optimized Deployment

Figure 4: Illustration of slower performance due to higher latencies for endusers distributed across the globe for a single datacenter deployment

Many web-based applications are latency-sensitive — lower response times from application servers translate into a dramatically improved end-user experience. Modern deployments of such applications employ content delivery networks (CDNs) to deliver static content such as images to end users quickly.

CDNs provide reduced latency by distributing content so that it is geographically close to the users that are accessing it. Deployments on the Akash Network can not only replicate this approach, but beat it — Akash gives clients the ability to place dynamic content close to an application’s users.

To implement a self-managed dynamic delivery network on Akash, a DevOps engineer would include a management tier in their deployment which monitors the geographical location of clients. This management tier would add and remove datacenters across the globe, provisioning more resources in regions where user activity is high, and fewer resources in regions where user activity is low.

Figure 5: Illustration of improved network performance by dynamically distributing workloads and their state across datacenters in close proximity to the end-users
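A management-tier scaling policy of this kind can be sketched as a simple function from regional activity to desired capacity. The threshold (one replica per 1,000 requests/minute) and the region names are invented for illustration:

```python
def desired_replicas(requests_per_min: int) -> int:
    """Replicas for a region: one per 1,000 req/min; release idle regions."""
    if requests_per_min == 0:
        return 0
    return max(1, -(-requests_per_min // 1000))  # ceiling division

activity = {"us-east": 4200, "eu-west": 900, "ap-south": 0}
plan = {region: desired_replicas(rpm) for region, rpm in activity.items()}
# plan: us-east scaled up, eu-west kept minimal, ap-south released entirely
```

A real management tier would feed such a plan back into new deployment orders on the marketplace, procuring or relinquishing leases region by region.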

5.2 Example: Machine Learning Deployment

Machine learning applications employ a large number of nodes to parallelize computations involving large datasets. They do their work in “batches” — there is no “steady state” of capacity that is required.

A machine learning application on Akash may use a management tier to proactively procure resources within a single datacenter. As a machine learning task begins, the management tier can “scale up” the number of nodes for it; when a task completes, the resources provisioned for it can be relinquished.

Figure 6: A machine learning batch job under less load running a single master and single worker node

Figure 7: A machine learning batch job under load running a single master and multiple worker nodes

References

[Amazon(2017a)] Amazon. Amazon elasticache pricing. 2017a. URL https://aws.amazon.com/elasticache/pricing/.

[Amazon(2017b)] Amazon. Amazon ec2 pricing. 2017b. URL https://aws.amazon.com/ec2/pricing/.

[Cisco(2016)] Cisco. Cisco global cloud index: Forecast and methodology, 2015–2020. 2016. URL https://www.cisco.com/c/dam/en/us/solutions/collateral/service-provider/global-cloud

[Forrester(2017)] Forrester. Predictions 2018: Cloud computing accelerates enterprise transformation everywhere. 2017. URL https://www.forrester.com/report/Predictions+2018+Cloud+Computing+Accelerates+Enterpr

[Gartner(2017)] Gartner. Forecast analysis: Public cloud services, worldwide, 2q17 update. 2017. URL https://www.gartner.com/doc/3803517.

[Glanz(2012)] James Glanz. Power, pollution and the internet. 2012. URL http://www.nytimes.com/2012/09/23/technology/data-centers-waste-vast-amounts-of-ener

[Kaplan et al.(2008)Kaplan, Forrest, and Kindler] James Kaplan, William Forrest, and Noah Kindler. Revolutionizing data center energy efficiency. 2008. URL https://www.sallan.org/pdf-docs/McKinsey_Data_Center_Efficiency.pdf.

[Koomey and Taylor(2015)] Jonathan Koomey and Jon Taylor. New data supports finding that 30 percent of servers are ’comatose’, indicating that nearly a third of capital in enterprise data centers is wasted. 2015. URL https://anthesisgroup.com/wp-content/uploads/2015/06/Case-Study_DataSupports30Percen

[Liu(2011)] Huan Liu. A measurement study of server utilization in public clouds. 2011. URL http://ieeexplore.ieee.org/document/6118751/.

AKT: Akash Network Token & Mining Economics

Greg Osuri, Adam Bozanich[1]

Akash Network, Akash Network

(Dated: January 31, 2020)

Akash is a marketplace for cloud compute resources which is designed to reduce waste, thereby cutting costs for consumers and increasing revenue for providers. This paper covers the economics of the Akash Network and introduces the Akash Token (AKT). We describe an economic incentive structure designed to drive adoption and ensure the economic security of the Akash ecosystem. We propose an inflationary mechanism to achieve economic goals. We provide calculations for mining rewards and inflation rates. We also present mechanisms for allowing a multitude of fee tokens.

ACKNOWLEDGMENTS

We thank Sunny Aggarwal (Research Scientist, Tendermint), Gautier Marin (Tendermint), Morgan Thomas (Co-Founder, Kassir), and Brandon Goldman (Fmr. Lead Architect, Blockfolio) for providing valuable comments that significantly improved the manuscript.

I. INTRODUCTION

Cloud infrastructure is a $32.4 billion industry[1] and is predicted to reach $210 billion by 2022[2].

By 2021, 94% of all internet applications and compute instances are expected to be processed by Cloud Service Providers (CSP), with only 6% processed by traditional data centers[3]. The primary driver for this growth is the poor utilization rate of IT resources provisioned in traditional data centers: no more than 6% of their maximum computing output is delivered on average over the course of the year[4], and up to 30% of servers are comatose[5] — using electricity but delivering no useful information services.

With 8.4 million data centers globally, an estimated 96% of server capacity underutilized, and accelerated global demand for cloud computing, the three leading cloud service providers — Amazon Web Services (AWS), Google Cloud, and Microsoft Azure — dominate the cloud computing market with 71% market share[1], and this figure is expected to increase. These providers’ offerings are complicated, inflexible, restrictive, and come at a high recurring cost with vendor lock-in agreements[6]. Increased cloud usage has made cloud cost optimization the top priority of cloud service users for three consecutive years[7].

Outside of the large incumbent providers, organizations do not have many options for cloud computing. Akash aims to create efficiencies in the cloud hosting market by repurposing compute resources that go to waste in the current market.

By leveraging a blockchain, Akash introduces decentralization and transparency into an industry currently controlled by monopolies. The result is that cloud computing becomes a commodity, fueled by a competitive free market, available and accessible anywhere in the world, at a fraction of current costs.

Akash is the world’s first and only Supercloud for serverless computing, enabling anyone with a computer to become a cloud provider by offering their unused compute cycles in a safe and frictionless marketplace.

In this paper, we present an economic system that uses Akash Network’s native currency, AKT, to achieve economic sovereignty in our decentralized computing ecosystem. We also propose an inflation design for mitigating the inherent adoption challenges that face an early market economy — insufficient demand from tenants (consumers of computing) and insufficient supply from providers, each of which negatively impacts the other. We also present a mechanism for a stable medium of exchange by solving for token volatility, a major challenge for adoption of decentralized ecosystems.

Note: This whitepaper represents a continuous work in progress. We will endeavor to keep this document current with the latest development progress. As a result of the ongoing and iterative nature of the Akash development process, the resulting code and implementation will likely differ from what this paper represents.

We invite the interested reader to peruse the Akash GitHub repo at https://github.com/ovrclk as we continue to open-source various components of the system over time.

A. Definitions

Akash Token (AKT): AKT is the native token of the Akash Network. The core utility of AKT is as a staking mechanism to secure the network and normalize compute prices for the marketplace auction. The amount of AKTs staked towards a validator defines the frequency by which the validator may propose a new block and its weight in votes to commit a block. In return for bonding (staking) to a validator, an AKT holder becomes eligible for block rewards (paid in AKT) as well as a proportion of transaction fees and service fees (paid in any of the whitelisted tokens).

Validator: Validators secure the Akash network by validating and relaying transactions, proposing, verifying and finalizing blocks. There will be a limited set of validators, initially 64, required to maintain a high standard of automated signing infrastructure. Validators charge delegators a commission fee in AKT.

Delegator: Delegators are holders of the AKT and use some or all of their tokens to secure the Akash chain. In return, delegators earn a proportion of the transaction fee as well as block rewards.

Provider: Providers offer computing cycles (usually unused) on the Akash network and earn a fee for their contributions. Providers are required to maintain a stake in AKT as collateral, proportional to the hourly income earned; hence, every provider is a delegator and/or a validator.

Tenant: Tenants lease computing cycles offered by providers for a market-driven price set using a reverse auction process (described below).

II. NETWORK OVERVIEW

The Akash Network is a secure, transparent, and decentralized cloud computing marketplace that connects those who need computing resources (clients) with those that have computing capacity to lease (providers). Akash acts as a supercloud platform providing a unified layer above all providers on the marketplace so as to present clients with a single cloud platform, regardless of which particular provider they may be using.

Tenants use Akash because of its cost advantage, usability, and flexibility to move between cloud providers, and the performance benefits of global deployments. Providers use Akash because it allows them to earn profits from either dedicated or temporarily-unused capacity.

A unit of computing (CPU, Memory, Disk) is leased as a container on Akash. A container [8] is a standard unit of software that packages up code and all its dependencies, so the application runs quickly and reliably from one computing environment to another. A container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries, and settings.

Anyone with a physical machine (i.e., computer, server) can slice the machine’s resources into containers using a process called virtualization. Docker is a company that provides widely adopted container virtualization technology, and it is common to refer to containers as “docker images.” The relation between a physical computer and a container is illustrated in fig. 1.

All marketplace transactions are on the Akash blockchain. To lease a container, the tenant (developer) requests a deployment by specifying the type(s) of unit(s), and the quantity of each type of unit. To specify a type of unit, the tenant specifies attributes to match, such as region (e.g. US) or privacy features (e.g. Intel SGX). The tenant also specifies the maximum price they are willing to pay for each type of unit.

An order is created in the order book (upon acceptance by a validator).

The provider(s) that match all the requirements of the order then place a bid, competing on price. The provider that matches the requirements and bids the lowest amount wins, upon which a lease is created between the tenant and the provider for the order. For every successful lease, a portion of the lease amount (the Take Fee) is paid to the stakers as described in sec. IV A.
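The settlement of a successful lease splits the lease amount between the provider and the stakers. A minimal sketch follows; the 10% take rate and the integer micro-unit denomination are placeholders, not the network's actual parameters:

```python
def settle_lease(lease_amount: int, take_bps: int = 1000):
    """Split a lease amount (in micro-units) between provider and stakers.

    take_bps is the Take Fee in basis points (1000 bps = 10%, a placeholder).
    Integer arithmetic avoids rounding surprises in token accounting.
    """
    fee = lease_amount * take_bps // 10_000   # portion paid to stakers
    payout = lease_amount - fee               # remainder paid to the provider
    return payout, fee

payout, fee = settle_lease(1_000_000)  # 10% take fee
```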

Figure 1: A simple illustration of containerized applications in relation to the physical servers. The stack, from top to bottom: Containerized Applications; Container Runtime (Docker); Host Operating System (Linux); Physical Server (Bare Metal / Cloud VM).

A. Proof of Stake Based Consensus

Akash employs a blockchain secured by a Proof-of-Stake consensus model as a Sybil resistance mechanism for determining participation in its consensus protocol and implements the Tendermint [9] algorithm for Byzantine fault-tolerant consensus. Tendermint was designed to address the speed, scalability, and environmental concerns with Proof of Work with the below set of properties:

a) Validators take turns producing blocks in a weighted round-robin fashion, meaning the algorithm has the ability to seamlessly change the leader on a per-block basis.

b) Strict accountability for Byzantine faults allows for punishing misbehaving validators and providing economic security for the network.

Anyone who owns an Akash token can bond (or delegate) their coins and become a validator, making the validator set open and permissionless. The limited resource of Akash tokens acts as a Sybil prevention mechanism.

Voting power is determined by a validator’s bonded stake (not reputation or real-world identity). No single actor can create multiple nodes in order to increase their voting power, as voting power is proportional to bonded stake. Validators are required to post a “security deposit” which can be seized and burned by the protocol in a process known as “slashing”.

These security deposits are locked in a bonded account and only released after an “unbonding period” in the event the staker wishes to unbond. Slashing allows for punishing bad actors that are caught causing any attributable Byzantine faults that harm the well-functioning of the system.

The slashing conditions and the respective attributable Byzantine faults and punishments are beyond the scope of this paper. (For more information, please review the Akash Network Technical Whitepaper.)

1. Limits on Number of Validators

Akash’s blockchain is based on Tendermint consensus which gets slower with more validators due to the increased communication complexity. Fortunately, we can support enough validators to make for a robust globally distributed blockchain with very fast transaction confirmation times, and, as bandwidth, storage, and parallel compute capacity increases, we will be able to support more validators in the future.

On Genesis day, the number of validators is set to Vi(0) = Vi,0 = 64, and the number of validators at year t will be:

Vn(t) = ⌈log2(2t) · Vi,0⌉ (1)

So, in 10 years, there will be Vn(10) = 277 validators, as illustrated in fig. 2.
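As a sanity check, eq. (1) can be evaluated numerically; a minimal sketch (the function name is ours, not part of the protocol):

```python
import math

V_INITIAL = 64  # validators at Genesis, per the text

def validator_limit(t_years: float) -> int:
    """Ceiling of log2(2t) * 64, per eq. (1); meaningful for t >= 0.5."""
    return math.ceil(math.log2(2 * t_years) * V_INITIAL)

# ceil(log2(20) * 64) = ceil(276.6) = 277, matching the 10-year figure above.
ten_year_count = validator_limit(10)
```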

Figure 2: Number of validators over the years

III. AKT: THE AKASH NETWORK TOKEN

The primary functions of AKT are staking (which provides security to the network), lease settlement, and acting as a unit of measure for pricing all currencies supported by the marketplace. Although AKT can be used for settling transactions in the marketplace, leases can be settled using a multitude of tokens, as described in later sections of this paper. However, transaction fees and block rewards are denominated in AKT. The income stakers earn is proportional to the tokens staked and the length of the staking commitment. In summary, AKT performs three main functions: Resolve, Reward, and Reserve.

A. Resolve

Akash relies on a blockchain in which a set of validators vote on proposals. Each proposal is weighed by the proposer’s voting power, which is the sum of the tokens they have staked and the tokens bonded to them (stakers can delegate voting power to validators).

B. Reward

Users of AKT stake tokens to subsidize operating and capital expenditures. Stakers are rewarded proportional to the number of tokens staked, the length of lockup time, and the overall tokens staked in the system. Lockup times can vary anywhere from one month to one year. Flexibility in lockup encourages stakers who stake for shorter periods (bear markets), in a self-adjusting inflationary system that is designed to optimize for lower price pressure during bear markets.

C. Reserve

Fees on Akash can be settled using a multitude of currencies along with AKT. However, the market order book uses the Akash Token (AKT) as the reserve currency of the ecosystem. AKT provides a novel settlement option to lock in an exchange rate between AKT and the settlement currency. This way, providers and tenants are protected from the price volatility of AKT expected to result from its low liquidity. In this section, we also present a mechanism, “Transaction Ordering using Consensus Weighted Median” (described in sec. IVD), to establish exchange rates without the need for an oracle.

IV. SETTLEMENT AND FEES

This section describes various fees incurred by users of Akash Network.

A. Take Fee

For every successful lease, a portion of the lease amount (Take Fee) goes to a Take Income Pool. The Take Income Pool is later distributed to stakers based on their stake weight (amount staked and time remaining to unlock, described in detail in the following sections). The Take Rate depends on the currency used for settlement. The proposed take rate at Genesis is 10% when using AKT (TokenTakeRate) and 20% when any other currency is used (TakeRate). The TokenTakeRate and TakeRate parameters are subject to community consensus managed by governance.

B. Settlement with Exchange Rate Lockin

The lease fees are denominated in AKT, but they can be settled using any whitelisted tokens. There is an option to lock in an exchange rate between AKT and the settlement currency. This protects providers and tenants from the price volatility of AKT expected to result from its low liquidity.

For example, suppose a lease is set to 10 AKT and locks in an exchange rate of 1 AKT = 0.2 BTC. If the price of AKT doubles, i.e., 1 AKT = 0.4 BTC, the tenant is required to pay only 5 AKT. Conversely, if the price of BTC doubles while the price of AKT stays the same, i.e., 1 AKT = 0.1 BTC, then the tenant is required to pay 20 AKT.
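The settlement arithmetic in this example reduces to fixing the lease’s value in the settlement currency at lock-in time; a minimal sketch (the function name is ours):

```python
def settlement_amount(lease_akt: float, locked_rate: float, current_rate: float) -> float:
    """AKT owed at settlement: the lease's value in the settlement currency
    is fixed at lease_akt * locked_rate, then converted back to AKT at the
    current exchange rate."""
    return lease_akt * locked_rate / current_rate

# 10 AKT lease, rate locked at 1 AKT = 0.2 BTC:
pay_if_akt_doubles = settlement_amount(10, 0.2, 0.4)  # AKT price doubles -> 5 AKT
pay_if_btc_doubles = settlement_amount(10, 0.2, 0.1)  # BTC price doubles -> 20 AKT
```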

C. Fees Using a Multitude of Tokens

In order to avoid network abuse (e.g. DoS attacks), all transactions and leases on Akash are subject to fees. Every transaction has a specific associated fee, GasLimit, for processing the transaction, as long as it does not exceed BlockGasLimit. The GasLimit is the amount of gas which is deducted from the sender’s account balance to issue a transaction.

Unlike most other blockchain platforms that require fees to be paid in the platform’s native cryptocurrency, such as Ethereum [10], Bitcoin [11], and Neo [12], Akash accepts a multitude of tokens for fees. Each validator and provider on Akash can choose to accept any currency or a combination of currencies as fees.

The resulting transaction fees, minus a network tax that goes into a reserve pool, are split among validators and delegators based on their stake (amount and length).

D. Transaction Ordering using Consensus Weighted Median

In order to prioritize transactions when multiple tokens are used, validators need a mechanism to determine the relative value of the transaction fee. For example, let us assume we had an oracle to inform us that the relative value of BTC is 200 AKT, and that of ETH is 0.4 AKT. Suppose we have two transactions of equal gas cost, and the transaction fees on them are 10 BTC and 6000 ETH, respectively. The first transaction’s fee is equivalent to 2000 (10 × 200) AKT and the second transaction’s fee is equivalent to 2400 (6000 × 0.4) AKT. The second transaction will therefore have higher priority. In order to get these relative values without using an oracle, we can employ a Consensus Weighted Median using Localized Validator Configuration [13] mechanism.
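The prioritization arithmetic above can be sketched directly; the relative values are the assumed oracle numbers from the example, and the helper name is ours:

```python
# Assumed relative token values in AKT, taken from the example in the text.
relative_value = {"BTC": 200.0, "ETH": 0.4}

def fee_in_akt(amount: float, token: str) -> float:
    """Normalize a transaction fee to AKT for priority ordering."""
    return amount * relative_value[token]

tx_fees = [("tx1", 10, "BTC"), ("tx2", 6000, "ETH")]
# Sort highest normalized fee first: tx2 (2400 AKT) outranks tx1 (2000 AKT).
ordered = sorted(tx_fees, key=lambda tx: fee_in_akt(tx[1], tx[2]), reverse=True)
```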

In this method, each validator maintains a local view of the relative values of the tokens in a config file that is periodically updated, and submits these values as “votes” on-chain as a transaction; the consensus relative value is then obtained as a stake-weighted mean of the votes.

Let’s say, for example, there are five validators {A, B, C, D, E} with voting powers {0.3, 0.3, 0.1, 0.1, 0.2}, respectively. They submit the following votes for their personal views of each token:

A : AKT = 1,BTC = 0.2

B : AKT = 2,BTC = 0.4

C : AKT = 12,BTC = 2

D : AKT = 4,BTC = 1

E : AKT = 1.5,BTC = 0.5

These values are stored on-chain in an ordered list along with the validator that placed each vote.

AKT : [1A,1.5E,2B,4D,12C]

BTC : [0.2A,0.4B,0.5E,1D,2C]

The proposer takes a weighted mean (by stake) of the votes for each whitelisted token to determine a consensus relative value of each token, where w̄(xn) = WeightedMean(xn):

AKT : w̄([1, 0.3], [1.5, 0.2], [2, 0.3], [4, 0.1], [12, 0.1])

BTC : w̄([0.2, 0.3], [0.4, 0.3], [0.5, 0.2], [1, 0.1], [2, 0.1])

which gives us the relative value for each token: AKT = 2.8 and BTC = 0.58, respectively.
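The worked example can be reproduced in a few lines; a minimal sketch (variable names are ours):

```python
# Validator voting powers (sum to 1) and their local token-value votes,
# from the five-validator example in the text.
powers = {"A": 0.3, "B": 0.3, "C": 0.1, "D": 0.1, "E": 0.2}
votes = {
    "AKT": {"A": 1.0, "B": 2.0, "C": 12.0, "D": 4.0, "E": 1.5},
    "BTC": {"A": 0.2, "B": 0.4, "C": 2.0, "D": 1.0, "E": 0.5},
}

def consensus_value(token: str) -> float:
    """Stake-weighted mean of the validators' votes for one token."""
    return sum(powers[v] * value for v, value in votes[token].items())

akt = consensus_value("AKT")  # ~2.8
btc = consensus_value("BTC")  # ~0.58
```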

V. TOKEN ECONOMICS AND INCENTIVES

Providers earn income by selling computing cycles to tenants who lease computing services for a fee. However, in the early days of the network, there is a high chance the providers will not be able to earn a meaningful income due to a lack of sufficient demand from the tenants (consumers of computing), which in turn hurts demand because of lack of supply.

To solve this problem, we will incentivize the providers using inflation by means of block rewards until a healthy threshold can be achieved.

In this section, we describe the economics of mining and Akash Network’s inflation model. An ideal inflation model should have the following properties:

• Early providers can provide services at exponentially lower costs than in the market outside the network, to accelerate adoption.

• The income a provider can earn is proportional to the number of tokens they stake.

• The block compensation for a staker is proportional to their staked amount, the time to unlock and overall locked tokens.

• Stakers are incentivized to stake for longer periods.

• Short-term stakers (such as some bear market participants) are also incentivized, but they gain a smaller reward.

• To maximize compensation, stakers are incentivized to re-stake their income.

A. Motivation

Akash Network aims to secure early adoption by offering exponential cost savings as a value proposition for tenants, and the efficiency of a serverless infrastructure as an additional value proposition for tenants and providers. These value propositions are extremely compelling, especially for data and compute intensive applications such as machine learning.

B. Stake and Bind: Mining Protocol

A provider commits to provide services for at least time T and intends to earn service income r every compensation period Tcomp = 1 day. Providers stake Akash tokens s and specify an unlock time t1, where the minimal lock time t1 − t should not be less than Tmin = 30 days. Additionally, they delegate (voting power) to a validator v by bonding their stake via a BindValidator transaction.

A staker is a delegator and/or a validator to whom delegators delegate. Every provider is a staker, but not every staker is a provider; there can be stakers who are pure delegators providing no other services, and there can be stakers who are pure validators providing no other services.

At any point, a staker can: a) split their stake (or any piece of their stake) into two pieces; b) increase their stake s by adding more AKT; c) increase the lock time T, where T > Tmin.

Stakers may choose to split their stake because compensation depends on the lock time, as will be addressed in later sections.

C. General Inflation Properties

1. Initial Inflation

If we assume Akash will have the same proportion of tokens locked as NuCypher [14] and DASH [15], λ = 60%, then 1 − λ = 40% of the supply of AKT will be in circulation. The adjusted inflation rate I* for an inflation rate I will be:

I* = I / (1 − λ), (2)

Considering that ZCash [16] had I* = 350% (at the turnaround point during the overall bull market), which corresponds to I = 140% APR, it is reasonably safe to set the initial inflation to I0 = 100% APR (meaning 1/365 per day).

2. Inflation Decay

We define the inflation decay factor (the time needed to halve the inflation rate) to be T1/2 = 2 years. The inflation as a function of the time t elapsed since Genesis then looks like:

I(t) = I0 · 2^(−t/T1/2) = I0 · exp(−(ln 2/T1/2) · t), (3)

In this case, the dependence of the token supply on the time t is:

M(t) = M0 + ∫ I(t) dt = M0 + I0 · (T1/2/ln 2) · (1 − 2^(−t/T1/2)), (4)

If we let i0 be the relative inflation rate, then I0 = i0 · M0. For 100% APR, i0 = 1 and I0 = M0, which gives us the maximum number of tokens which will ever be created (as illustrated in fig. 3):

Mmax = M(∞) = M0 · (1 + i0 · T1/2/ln 2) ≈ 3.89 · M0, (5)

where M0 is the initial number of tokens.

Figure 3: Token supply and tokens locked over years with an initial inflation of 100% APR that is halving every 2 years

3. Staking Time and Token Creation

We will reward the full compensation (γ = 1) to the stakers who are committed to stake for at least T1 = 1 year (365 days). Those who stake for Tmin = 1 month will get close to half the compensation (γ ≈ 0.54). In general,

γ = 0.5 + 0.5 · min(Ti, T1)/T1, (6)

subject to the constraint Ti,initial > Tmin, (7)

where the unlocking time Ti means the time left to unlock the tokens: Ti = t1 − t. t1 is the time when the tokens will be unlocked, and t is the current time. The initial Ti cannot be set smaller than Tmin = 1 month, but it eventually becomes smaller than that as time passes and t gets closer to t1.

Shorter stake periods (for lower rewards) result in a lower daily token emission. Considering that miners will most likely stake for short periods during a bear market, we can expect token emission to decline during a bear market, which will help to boost the price. Therefore we can expect this mechanism to support price stability.

The emission half decay time T1/2/γ*, where γ* is the mean staking parameter, is prolonged when γ < 1: T1/2 effectively prolongs to 4 years instead of 2 if all stakers have γ* = γ = 0.5.

Assuming all miners have the same compensation rate, the total supply over time (cf. eq. 4) at mean staking parameter γ* will then look like:

M(t) = M0 · (1 + i0 · (T1/2/ln 2) · (1 − 2^(−γ* · t/T1/2))). (8)
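The supply and staking-time formulas can be sanity-checked numerically. A minimal sketch under the paper’s stated parameters (M0 = 100 million tokens, 100% APR initial inflation, T1/2 = 2 years, T1 = 1 year, Tmin = 1 month; function names are ours):

```python
import math

M0 = 100e6          # initial token supply
T_HALF = 2.0        # inflation half-life, in years
I0 = 1.0 * M0       # 100% APR initial inflation, in tokens per year

def supply(t: float) -> float:
    """Token supply at year t: M0 + I0*(T_half/ln 2)*(1 - 2^(-t/T_half))."""
    return M0 + I0 * (T_HALF / math.log(2)) * (1 - 2 ** (-t / T_HALF))

# Maximum supply ever created: M0 * (1 + T_half/ln 2) ~ 3.89 * M0 ~ 389M tokens.
M_MAX = M0 * (1 + T_HALF / math.log(2))

def gamma(t_i_years: float, t1: float = 1.0) -> float:
    """Staking compensation factor: 0.5 + 0.5 * min(Ti, T1)/T1."""
    return 0.5 + 0.5 * min(t_i_years, t1) / t1

one_month_gamma = gamma(1 / 12)  # close to half the compensation (~0.54)
```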

D. Delegate Pool Distribution

The exponential is a solution of a differential equation in which inflation is proportional to the amount of not yet mined tokens:

I(t) = (ln 2/T1/2) · (Mmax − M(t)), (9)

dM = I(t) dt, (10)

where M(t) is the current token supply with M(0) = M0, and dt can be equal to the mining period (1 day). Each validator can trivially calculate its dM in a few operations using the token supply M from the last period. The amount of tokens mined for the validator pool v in the period at time t can be calculated according to the formula:

δmv,t = (sv/S) · (ln 2/T1/2) · (Mmax − M(t)) · δt, (11)

δMt = Σv δmv,t, (12)

where sv is the number of tokens bound to the validator’s delegate pool v and S is the total number of tokens locked. Instead of calculating the whole sum over v, each validator can add their own portion δmv,t.

The distribution factor for a delegator bound to pool v is:

κ = (1/2) · (γ/γv + s/Sv), (13)

where γv is the aggregate stake compensation factor for the pool, s is the delegator’s stake, and Sv is the sum of all tokens bound to the pool.
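The per-period emission and distribution can be sketched as follows. The stake amounts are made up for illustration, and the helper names are ours:

```python
import math

M0 = 100e6                  # initial supply
T_HALF_DAYS = 2 * 365       # inflation half-life, in days
I0_DAILY = M0 / 365         # 100% APR expressed per day
M_MAX = M0 + I0_DAILY * T_HALF_DAYS / math.log(2)

def daily_emission(m_current: float) -> float:
    """Eqs. (9)-(10) with dt = 1 day: dM = (ln 2 / T_half) * (M_max - M)."""
    return math.log(2) / T_HALF_DAYS * (M_MAX - m_current)

def pool_share(s_v: float, s_total: float, d_m: float) -> float:
    """Eq. (11): a pool's slice of the day's emission, proportional to its stake."""
    return s_v / s_total * d_m

def distribution_factor(g: float, g_v: float, s: float, s_v: float) -> float:
    """Eq. (13): kappa = 1/2 * (gamma/gamma_v + s/S_v)."""
    return 0.5 * (g / g_v + s / s_v)

# On day one the whole network mints roughly M0/365 tokens (100% APR).
d_m = daily_emission(M0)
```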

E. Mining strategies and expected compensation

In this section, we look at three possibilities: a staker liquidating all the compensation while extending the lock time (Liquidate mining compensation), a staker adding all the compensation to their current stake, and a miner waiting for their stake to unlock after time T. Each of these possibilities could have different distributions of γ. Let’s consider γ = 1 and γ = 0.5 as the two extreme values of γ. Let’s take the amount of tokens locked to be λ = 60%, as in DASH.

1. Liquidate Mining Compensation

In this scenario, all stakers in the pool are liquidating all their earnings every Tcomp period. The total amount of tokens staked in the network can be expressed as S = λM. Assume all the delegators have equal amounts of stake bound to the pool. The amount of stake stays constant in this case, and equal to mi = s, making mv = sv and γ = γv where, γv is the mean staking parameter of the pool. Then, the pool mining rate (i.e. the cumulative pool reward) is:

drv/dt = γv · (Sv/(λM(t))) · (ln 2/T1/2) · (Mmax − M(t)). (14)

When we substitute M(t) from eq. 8 and integrate over time, we find the total pool compensation:

rv(t) = (Sv/λ) · ln(M(t)/M0), (15)

If ∆rv(t) = rv(t) − C, where C is the validator’s commission, the individual staker’s compensation is:

r(t) = κ · ∆rv(t) = (1/2) · (γ/γv + s/Sv) · ∆rv(t). (16)

With γ = 1 (staking for 1 year), λ = 60% (60% of all AKT staked), and C = 0.1 · r(t), staker compensation in AKT starts at 0.45% per day, or 101.6% during the first year of staking.

We should note that if other miners stake for less than a year (γ∗ < 1), the inflation rate decays slower, and the compensation over a given period will be higher.

Figure 4: Daily compensation over time assuming 60% tokens locked for lock times of 1 year and 1 month

2. Re-stake mining compensation

Instead of liquidating mining compensation, it could be re-staked into the pool in order to increase the delegator’s stake. In this case, the actual stake s is constantly increasing with time:

ds/dt = γ · (s/(λM(t))) · (ln 2/T1/2) · (Mmax − M(t)). (17)

If we substitute M(t) from eq. 8 and solve this differential equation for s, we get:

s(t) = s(0) · (M(t)/M0)^(1/λ). (18)

Assuming the validator commission is 1%, if γ = 1 (staking for 1 year or more) and λ = 60% (60% of all tokens in the network are staked), delegate compensation in AKT starts at 0.45% per day, or s(1) − s(0) = 176.5% over the first year of staking.
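The first-year growth under restaking can be checked numerically. A sketch assuming the exponential supply curve with i0 = 1, γ = 1, and λ = 0.6, and ignoring commission (so the result lands slightly above the 176.5% quoted):

```python
import math

T_HALF = 2.0    # inflation half-life, years
LAMBDA = 0.6    # fraction of tokens locked

def supply_ratio(t: float) -> float:
    """M(t)/M0 with i0 = 1 (100% APR): 1 + (T_half/ln 2)*(1 - 2^(-t/T_half))."""
    return 1 + (T_HALF / math.log(2)) * (1 - 2 ** (-t / T_HALF))

def restaked_stake_ratio(t: float) -> float:
    """s(t)/s(0) = (M(t)/M0)^(1/lambda), per eq. (18)."""
    return supply_ratio(t) ** (1 / LAMBDA)

growth = restaked_stake_ratio(1.0) - 1  # roughly 1.78, i.e. ~178% before commission
```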

3. Take mining compensation and spindown

When the node spins down, the staker does not extend the staking end time t1, and the compensation constantly decreases as the time left to unlock becomes smaller and smaller, effectively decreasing γ gradually towards 0.5. This is the default behavior. To avoid this, the staker should set t1 large enough, or increase t1 periodically.

4. FAQ

How many tokens will ever be in existence? We will start with 100 million tokens, and the maximum number of tokens ever created will be 389 million, as illustrated in fig. 3.

What is the inflation rate? The inflation rate will depend on the mix of short-term and long-term miners working in the system. Depending on this, the initial inflation will be between 50% APR (if all miners are very short term) and 100% APR (if all miners commit for the long term). The inflation will decay exponentially every day, halving some time between 2 years (if all the miners are long term) and 4 years (if all the miners are short term), as illustrated in fig. 5.

Figure 5: Annual inflation over the years when tokens are locked with long and short commitments

VI. RELATED WORK

The majority of proof-of-stake networks, such as Ethereum 2.0 [17], Tezos [18], and Cardano [19], use a single-token model. However, some networks are experimenting with more novel models. In this section, we will review some of these systems and explore the differences from Akash’s token model.

A. Cosmos Hub

Akash and Cosmos Hub both use the Tendermint [9] consensus algorithm and share a core set of values around interoperability and user experience. Similar to Cosmos’s Atom [13], AKT’s primary utility is to provide economic security to the network. Akash’s model improves on Cosmos’s model in several ways. First, AKT provides a mechanism to normalize compute prices for the marketplace auction. Second, Akash introduces a mechanism to lock in an exchange rate to a reserve currency of choice, mitigating the market-driven volatility risk of AKT when leasing computing for more extended periods. Finally, Akash’s block reward distribution is proportional to the time and amount of a stake, unlike Cosmos’s model, where the distribution is homogeneous for a fixed time. Cosmos imposes a 21-day “unbonding” period (a lock-up) with no incentive to commit for longer periods, whereas stakers in Akash can choose to commit for anywhere from one month to a year, receiving ~54% and 100% compensation respectively.

B. NEO

According to NEO’s white paper [12]:

The NEO network has two tokens: NEO, representing the right to manage the NEO blockchain, and GAS, representing the right to use the NEO blockchain.

On the surface, NEO’s primary utility is as a staking token and GAS is the fee token. However, on closer observation, NEO’s model is very different from Akash’s model.

Firstly, NEO is used as a mechanism to determine how many votes each NEO account gets without a requirement to stake tokens. Each account can vote for as many validator candidates as they wish and each validator candidate they vote for receives the number of votes equivalent to the amount of NEO in the voter’s account.

With regards to the fee, NEO’s chain only supports a single fee token, unlike Akash’s multi-token model. Furthermore, unlike Akash, NEO does not provide volatility protection for the GAS tokens.

C. EOS

EOS’s delegated proof of stake consensus [20] has similarities with Akash’s model but is extensively different. In EOS, each token holder can stake their tokens in order to vote for block producers and, in return, they are rewarded in resource units such as CPU, RAM, and NET that can be spent for transactions on the network. However, as in NEO, the staking token EOS is not staked by the block producers, and it is not slashable in the case of misbehavior.

In EOS, staking means stakers put tokens into a lockup period without necessarily contributing to the functionality of the network. Stakers earn rewards in CPU, RAM, and NET that are used to purchase computational resources on the network. These resources are not transferrable. CPU and NET are only spendable by the receiver, whereas RAM can be traded with other users in a Bancor-style marketplace [21].

EOS burns these resources upon spending instead of giving them to block producers. The validator compensation model is unclear, considering transaction fees are not the primary mechanism. EOS is effectively a single-token network, despite the nuances and additional steps.

VII. CONCLUSION

This paper explains the network and mining economics of Akash Network and presents the various incentives and utilities of different tokens in the staking and fee mechanisms. The Akash Token (AKT) acts as the staking token and reserve currency for the network, while a multitude of tokens can be used for settlement.

[1] “Worldwide Market Share Analysis: IaaS and IUS.” [Online]. Available: https://www.gartner.com/en/newsroom/press-releases/2019-07-29-gartner-says-worldwide-iaas-public-cloud-services-market-grew-31point3-percent-in-2018

[2] “Cloud Infrastructure Market — Global Forecast to 2022.” [Online]. Available: https://www.marketsandmarkets.com/PressReleases/cloud-infrastructure.asp

[3] “Cisco Global Cloud Index: Forecast and Methodology, 2016–2021 White Paper.” [Online]. Available: https://www.cisco.com/c/en/us/solutions/collateral/service-provider/global-cloud-index-gci/white-paper-c11-738085.html

[4] J. Kaplan, N. Kindler, and F. William, “Revolutionizing Data Center Efficiency,” McKinsey and Company. [Online]. Available: https://www.sallan.org/pdf-docs/McKinsey_Data_Center_Efficiency.pdf

[5] “Uptime Institute Comatose Server Savings Calculator.” [Online]. Available: https://uptimeinstitute.com/resources/asset/comatose-server-savings-calculator

[6] “Prime Leverage: How Amazon Wields Power in the Technology World.” [Online]. Available: https://www.nytimes.com/2019/12/15/technology/amazon-aws-cloud-competition.html

[7] “RightScale 2019 State of the Cloud Report.” [Online]. Available: https://www.flexera.com/about-us/press-center/rightscale-2019-state-of-the-cloud-report-from-flexera-identifies-cloud-adoption-trends.html

[8] “What is a Container?” [Online]. Available: https://www.docker.com/resources/what-container

[9] E. Buchman, J. Kwon, and Z. Milosevic, “The latest gossip on BFT consensus.” [Online]. Available: https://arxiv.org/abs/1807.04938

[10] G. Wood, “Ethereum: A Secure Decentralised Generalised Transaction Ledger.” [Online]. Available: https://gavwood.com/paper.pdf

[11] N. Satoshi, “Bitcoin: A Peer-to-Peer Electronic Cash System.” [Online]. Available: https://bitcoin.org/bitcoin.pdf

[12] “NEO Whitepaper.” [Online]. Available: http://docs.neo.org/docs/en-us/basic/whitepaper.html

[13] S. Aggarwal, “Cosmos Multi-Token Proof of Stake Token Model.” [Online]. Available: https://github.com/cosmos/cosmos/blob/master/Cosmos_Token_Model.pdf

[14] M. Egorov and M. Wilkinson, “NuCypher: Mining & Staking Economics.” [Online]. Available: https://www.nucypher.com/static/whitepapers/mining-paper.pdf

[15] E. Duffield and D. Diaz, “Dash: A Payments-Focussed Cryptocurrency.” [Online]. Available: https://github.com/dashpay/dash/wiki/Whitepaper

[16] “ZCash Emission Rate.” [Online]. Available: https://z.cash/technology/

[17] “Ethereum 2.0 White Paper.” [Online]. Available: https://github.com/ethereum/wiki/wiki/White-Paper

[18] L. M. Goodman, “Tezos: a self-amending crypto-ledger.” [Online]. Available: https://tezos.com/static/white_paper-2dc8c02267a8fb86bd67a108199441bf.pdf

[19] A. Kiayias, A. Russell, B. David, and R. Oliynykov, “Ouroboros: A Provably Secure Proof of Stake Blockchain Protocol.” [Online]. Available: https://iohk.io/research/papers/#ouroboros-a-provably-secure-proof-of-stake-blockchain-protocol

[20] D. Larimer, “EOS: Technical Whitepaper.” [Online]. Available: https://github.com/EOSIO/Documentation/blob/master/TechnicalWhitePaper.md

[21] “EOS RAM 101: Non-Technical Guidebook for Beginners.” [Online]. Available: https://medium.com/coinmonks/eos-ram-101-non-technical-guidebook-for-beginners-6f971322042e

[1] greg@akash.network, adam@akash.network
