An Industry for the Future; A Company for a Future Industry
Constellation’s Vision for an Open Network
By: Benjamin J. Jorgensen
Part 1: Introduction
I think we have all read enough introductions that open with a nod to Bitcoin and the white paper Satoshi Nakamoto wrote, followed by a brief summary and analysis. Still, it goes without saying that Bitcoin and Satoshi set the stage for the birth of not only a revolution but an alternative industry, one wrapped in the concept of the 4th Industrial Revolution, with new socioeconomic rules and a foundation for reimagining transactions and a connected world. Out of that spawned Ethereum, and in the years following its launch the Ethereum Foundation popularized smart contracts, encoding logic over homogeneous data sets: a step forward from simple transactions and toward real-world developer adoption. Here we are today, 10 years after Bitcoin's debut and 5 years after the launch of Ethereum, with Constellation Network. Constellation is an evolution of transactions and smart contracts: a microservice ecosystem that supports big data applications by simultaneously running multiple layers of consensus, which can be applied to more than just monetary transactions. It is an evolution that truly embraces the ethos of open source by unveiling technology that appeals to developers who want to explore a secure communications protocol for big data.
What drew me into the space was an opportunity to be part of a completely different industry and a different way of thinking, and a team that wanted to believe in a radical and romantic worldview. Organically, Constellation has a postmodern air to it in that it embraces elements of existing technologies and industries, transcends political agendas, expands on existing notions of decentralization, and adheres to knowledge as the source of truth, all the while weaving in a secure digital currency as a means to represent and exchange value. It is, quite literally, a postmodernist's dream.
1.1 A Postmodern Industry
Let me back up a second to set the stage. I don't believe that Constellation is a technology in THE technology industry (as it pertains to our connotation of Silicon Valley). I believe that Constellation is a technology in a completely different industry — a hybrid of industries and features — a postmodern industry, if you will. An industry that is just being created: a synthesis of cryptocurrency, blockchain, and the internet of things that, most importantly, provides connectivity, like glue, between a multitude of industries. It is an evolution past the existing internet protocol layer of TCP/IP and beyond the application layer of Google, Facebook, Apple, Netflix, eBay, and PayPal; conversely, it is an evolution of the fat protocol problem we see with early blockchain technologies. This evolution embraces a balance of the protocol and application layers. It is altogether a synthesis of existing solutions, new and old business models, and technical and non-technical products and features.
Constellation is, by definition, a technology, where a technology is denoted as "the application of scientific knowledge for practical purposes, especially in industry":
Technology — According to Merriam Webster (2019):
- the practical application of knowledge, especially in a particular area
- a capability given by the practical application of knowledge
- a manner of accomplishing a task especially using technical processes, methods, or knowledge
- the specialized aspects of a particular field of endeavor
Technology is not an industry; it is an application of knowledge. Constellation, like technology itself, is an application of knowledge that is a hybrid of cryptocurrency/blockchain, distributed systems computing, machine learning, and enterprise (SME) adoption.
1.2 Melting Pot of Past Learnings
One of the main flaws over the past year in the cryptocurrency and blockchain space is that many people, including myself, came into the space thinking that we could apply our traditional Silicon Valley "technology" industry experience (SaaS, advertising tech, software and hardware engineering, etc.) and bolt it onto a nascent blockchain industry. Modern-day imperialists, in some capacity: "We can solve all your problems." Many of those people bowed out extremely quickly. I believe most of us were wrong; in fact, we should have let the industry mature naturally. There was nothing wrong with the space to begin with. It isn't as simple as bolting on traditional practices simply because we were in IT, and a lot of people realized that and entered and exited the space quicker than any trend I have ever seen in Silicon Valley, and I have lived through a few: gaming, advertising, MarTech.
I believe that the cryptocurrency and blockchain space is its own industry with its own roles and hierarchies (financiers, companies, vendors, teams/employees, and contributors/influencers), incentives and motivations, rules of engagement (valuations, market caps, business and marketing mechanisms, and growth factors), vernacular (technical, academic, and memetic), and economics. We have entered something completely new, and we should study it as a new culture, with a beginner's mind and an anthropological lens, rather than as simply more tech in a tech industry.
THE RULES JUST SIMPLY DON’T APPLY! This is a postmodern synthesis of a new industry.
There is an opportunity to merge these new industries with existing industries, a postmodern modality, creating something altogether "new" (although postmodernists don't completely believe in "new," since everything in this philosophy is a construct and synthesis of past behaviors, references, symbols, and ideas). Constellation is building a technology that can actually be used in existing stacks and by developers solving real-world big data and functional programming problems. Developers at SMEs (small, medium, and enterprise businesses) play a role in our ecosystem because this is how mass adoption occurs and confidence in the industry is built. It isn't everything, but it is a solid indicator of an emerging market. Major companies and organizations pay developers to be part of open source communities and gain a sense of community; meanwhile, a new business model relies on bottom-up selling rather than C-suite buy-in, and as a result massive organizational and cultural shifts have occurred by appealing to subcultures. As adoption by individuals and homebrew communities continues to flourish, then come the evangelists at SMEs, followed by partnerships.
1.3 Setting the Stage for SMEs
The Constellation approach to SMEs starts by using SME clients/partners to aid us in developing our MVP (minimum viable product) and product positioning (a play right out of The Four Steps to the Epiphany and The Lean Startup). What SME partnerships provide is product direction, product-market fit, and revenue, which not only helps build a healthy organization but also demonstrates an exchange of value: is there a need in the market for such a solution, and will someone pay for it? Beyond that, clients and partners teach us to communicate our message effectively and mold our vision and product to meet a demand. That being said, SMEs can prohibit the growth of an organization and its technology through too much value engineering at the early stages of a company. This can cripple an organization if it doesn't keep its eye on the vision. We take this sentiment into account every day.
Part 2: Knowledge Industry Revolution
The Constellation vision sets forth that successful adoption of distributed ledger technology and distributed networks will come through easy access via an API that appeals to a broader developer base: easy integration, implementation, and testing (something very few of our comparable companies offer). Secondly, we see an opportunity to take large data sets (transit, financial, IoT, automotive data, etc.) from a multitude of sources and put them on a distributed ledger to make data accessible to all and governed by a community of data validators. Value extraction from the data space is a relatively new thing, not even 10 years old, and yet it is rapidly changing the world in front of us. Thirdly, we envision the opportunity for these validated and open data sets to be queryable, like a knowledge graph, and merged together to create new logic, new businesses, new insights, and more transparency. We believe that the data of the world should be governed by a community, evaluated by reputation, and managed accordingly. This postmodern industry is the advent of the Knowledge Industry: a synthesis of these overlapping, inefficient industries composing a data revolution.
We have entered an era where there is limited transparency into the data we provide to major multinational conglomerates. It has been abused and recklessly left insecure, exposing billions of people to data hacks and breaches costing trillions of dollars. Data should be governed by you and validated by you; it should NOT be merely mined, bought, and sold between the Googles, Facebooks, and Apples of the world. We have entered an era of truth and authenticity, and it is time that we create a distributed network of big data, vast amounts of data, governed by a community that is in turn governed by reputation, and exchanged peer to peer (p2p).
2.1 Constellation Proposal
To do this, we need to build quality open source tools that stand a chance of developer adoption, with use cases that meet developer interests and problems. Developers don't adopt faulty tools that aren't scalable, are poorly constructed, and don't solve problems in their wheelhouse, and an open source community will not evangelize such technology frameworks to a parent organization.
Constellation has built Spore Technologies (see 4.3). Spore enables us to appeal to the big data developer community, in the same vein as UX-focused products on top of Spark like Databricks, by making it simple to connect to our Network and Protocol. Our entire code base, including the Protocol and our consensus model (PRO — Proof of Reputable Observations), is built from the ground up. These are homegrown solutions pieced together from scientific knowledge and research — a true technology. We saw that existing technologies did not stand a chance of organic open source adoption, and that the only way forward was to build everything from scratch while simultaneously inviting established and well-known big data engineers to help us build infrastructure tools that meet the threshold of today's centralized solutions.
In building our Protocol (see 4.1), we saw an opportunity to position ourselves in connection with existing big data tools rather than solely against, or alongside, other blockchain companies. We took notice of how the Spark developer community built out limited use cases around big data firehoses and rapidly grew a thriving community of engineers who went back to their organizations and started applying these tools in-house: ground up. As a result of Spark adoption, managed services like Databricks and Cloudera were developed, in the same way that RedHat was built around Linux. We are not opposed to being correlated and connected with other DLT technologies, but we saw a chance to appeal to a broader and more commercially viable developer community in big data (machine learning and AI). Spore, as a set of infrastructure tools including core components in cryptographic security, sits nicely between data pipeline management tools such as Apache Storm and Kafka, adding further visibility, security, and audit trails to data pipelines (Figure 1).
Figure 1 — Displays how we integrate with existing tech stacks in the big data space. We integrate with Cloudera and Databricks and fit alongside big data tools such as Kafka and Storm (Apache) by adding more visibility and security to a data scientist’s data pipelines.
This diagram helps us appeal to technologists that might have a need for a blockchain solution but have a need for better data infrastructure tools.
2.2 The Evolution of Multi-Generational Blockchains from Bitcoin to Ethereum to IOTA
Existing protocols like Ethereum and Bitcoin have successfully created distributed ledger technologies, but with limited developer application support. Additionally, these technologies are linear blockchains that require the entire network to validate transactions. This causes severe latency issues and will not support enterprise and consumer applications that demand the speed and throughput to process billions of transactions. Furthermore, the consensus models used to govern these networks are largely undemocratic, with PoW (Proof of Work) and PoS (Proof of Stake) ultimately consolidating much of the power, which poses risks to true developer adoption. As such, Constellation is addressing many of these issues and building an asynchronous "chain": a more democratic, secure, and scalable way to enable a world of connectivity and a true vision of AI and ML.
IOTA was the first blockchain company to rely on a directed acyclic graph (DAG) architecture to address scalability in the blockchain industry and ultimately appeal to the IoT industry. They have done a great job of deploying PoCs (proofs of concept) with major multinational conglomerates, like Bosch, and attaining a $1B+ market cap. It is absolutely incredible. But while they have made the DAG architecture known and attempted to battle real-world big data issues, the approach comes with many issues, including lost transactions that are randomly sent to a pool of nodes that may be offline. While this might be an easy way of solving load balancing and rate limiting, it places a strain on the network by either forcing nodes to crash or spamming the network, and it doesn't provide a clean-up mechanism or reconciliation for lost transactions. Constellation could actually offer up part of our technology, dynamic partitioning (a take on Spark's dynamic partitioning), to help reorganize the data in the network before and after data is accepted into IOTA's network.
Another major point we touched on earlier is the lack of native application integration interfaces and application support, something nearly every blockchain company has failed to address. Many of these technologies simply require you to code in their language, deploy, and hope to God everything works. This is not something open source developers or enterprise partners want to tackle ("build it, cross your fingers, don't look, deploy"). Great for PoCs, but less than ideal for real-world deployment.
Once again, we evaluated many of the companies and projects out there, and what we saw were several solutions offering interesting capabilities (smart contracts are a huge leap forward), but nothing that could address the market for big data, AI, and machine learning, or mix data sets from different sources (heterogeneous data sets — see 4.4).
Part 3: Adoption Beyond Existing Blockchain Technologies
Our strategy in gaining adoption around our technology will not only rely on articulating a vision of the evolution of existing blockchain solutions and appealing to a broad and sophisticated open source community, but will be to appropriately seed partnerships across government bodies, research institutions, and SME’s.
For government bodies and research entities, we provide an opportunity to redefine solutions around multi-domain operations and around how open networks, at scale, can handle big data operations. These organizations are incentivized to improve standards of living and ensure civilian security (governments), and to develop a knowledge industry (academia and research entities) that furthers innovation without monetary motivators.
For SMEs, we are faced with articulating a vision to key buyer personas, including VPs of Engineering, CTOs, and data science teams, around improving existing big data tech stacks and downstream data visibility. While "blockchain" as a product does not have a budget line item and is not actively being demanded by customers (no one says "we demand that blockchain be in our solutions"), we are navigating how to appeal to different stakeholders in an organization and their existing needs and problems.
3.1 Private Sector: SMEs
Having additional security around big data will lend itself nicely to the connected world we live in, especially for mission-critical data. The next major point of hacks and vulnerabilities will be IoT devices and infrastructure. Vulnerabilities and security risks around data in transit and data in use in the IoT sector are readily apparent not only across smart buildings, energy grids, autonomous vehicles, IT, and health care, but also around operational technology (OT) and the data that pertains to employee safety in companies. Security around data in use and data in transit doesn't just revolve around consumer data and IoT, but also around operational technology and infrastructure.
While cybersecurity at scale is a clearly identifiable use case and industry, visionaries of the IoT and cybersecurity sectors will be challenged to implement and secure open networks and distributed ledger technology to meet growing security concerns around connectivity. The IoT and cybersecurity industries have extremely dated legacy infrastructure and established corporate immune systems that lack the agility and desire needed to evolve. Additionally, many enterprise and private sector companies are motivated by money and quick go-to-market plans driven by customer demands for features and functionality; security vulnerabilities are typically not addressed until an actual breach or hack occurs. This leaves an opportunity for government bodies to get ahead of the curve and move faster than a monetarily incentivized organization. Many government RFPs (requests for proposals) are illuminating vast security concerns along with the urgent need to merge data from legacy systems (multi-domain operations). As a result, there is an immediate and marketable need to validate and notarize data through open networks and tie in cryptographic security at scale — enter a DAG architecture with scalability and the ability to organize data for queryable purposes.
3.2 Beyond Cryptographic Security
Beyond security, AI and machine learning will need a new type of infrastructure to scale to the vision we all have of connectivity and autonomous, connected everything. While AI is the hottest thing out there, it only touches and utilizes data after that data has been aggregated and implemented; AI is only as good as the data that goes into it. While great hardware can be exponentially improved with AI, how do we know the data is valid, secure, and has an immutable audit trail? Are we prepared to bet everything on connected everything with a central audit trail? What if it is spoofed or hacked, and data is altered to set AI on a new course?
With an open network audit trail, and data governed by a community (a commitment to decentralization), we will be able to pinpoint exactly where and when a hack occurred and prevent systems from adding bad data to their algorithms. Tragically, last year we saw what happened when Uber's autonomous vehicle crashed and killed someone. There wasn't an audit trail and immutable record of the maintenance log on the lidar to attribute responsibility. We can continue to discuss how Constellation is more scalable and faster, but this puts us in a new category altogether: ensuring safety, accountability, and saving lives. For interoperability to exist, we need a secure and reliable audit trail of data (Figure 2), a solution that is appended as close to the source as possible. An immutable insurance policy on our data, if you will.
Figure 2 — Shows how Constellation’s Spore Infrastructure Tools impact downstream data visibility, providing an added layer of security, while improving upstream data products.
3.3 Public Sector
In reviewing many of the RFPs from various United States government branches and programs (NIST, SBIR, AFWERX, the Air Force, NASA, and NSF), there is a driving need to advance certain technologies and industries such as advanced manufacturing, artificial intelligence, quantum information technologies, space exploration, wireless technologies, IoT, and energy and power systems. At the core of these various government RFPs is a foundational focus on big data validation and interoperability between domains and systems. Constellation's approach is an alternative, open, and scalable infrastructure with supporting economics. Constellation's secure communications protocol and distributed graph-based database (DAG) will scale better and be more secure than current centralized solutions and cloud infrastructures. Furthermore, our Protocol will ensure secure message/event transfer between devices while creating a tamper-proof audit trail of the metadata generated. By securely validating the integrity of data across various industries, we will create an interoperable network of trusted data, governed by a self-organizing community.
Part 4: The Constellation Umbrella: The Protocol, Spore Technologies, Constellation Network
Constellation has built a distributed ledger and protocol using a directed acyclic graph (DAG) architecture with a novel reputation-based consensus model called Proof of Reputable Observations (PRO), hosted by an incentivized community and programmatically optimized for network efficiency. Our consensus model, PRO, along with the DAG architecture, ensures security without sacrificing speed. Furthermore, our microservice framework will appeal to the roughly 9-million-strong JVM development community due to its ability to directly integrate with JVM code, while finding commercial viability across the IoT and cybersecurity industries. Our goal is to provide an open network of concurrent consensuses, building a network of data that can further the extraction of value in the data space.
Our main technical components include validators, the Protocol, and Spore Technologies. My best analogy for the organization of Constellation is Coca-Cola. At Coca-Cola, you have the chemists tweaking and maintaining the recipe. They have to integrate commodities while managing shifting flavors (due to weather or new sourcing), with the goal of maintaining consistency. No one can bother these chemists; there are no writing utensils, and no photos may be taken in this secret room. Above the chemists, you have the business and product people working to price and bottle the product to get it to a sellable state. Above them, you have the marketing people making the product known.
For Constellation, The Protocol is where the chemists live (no one goes in and no one goes out); Constellation Network is the governance and workings of The Protocol; and Spore is the ability for real world data to be piped on to the network as well as utility of our Protocol and the Network.
4.1 The Protocol
The Protocol — The Constellation Network sits on a secure communications protocol whose core features are its architecture, its consensus model, and the language it was built in (Scala). The Protocol has a programmatically determined reputation system, called Proof of Reputable Observation (PRO), that serves as our consensus model and our method of organizing and optimizing nodes in the Network. Our vision was not achievable with traditional consensus models, so we built ours from the ground up, incorporating machine learning to balance our complex network topology with a performance-based approach to validator rewards. This is commonly seen in real life (influencers build reputation, and Uber drivers give and receive scores). Our protocol is built on a non-linear, asynchronous data model called a directed acyclic graph (DAG). Concurrency allows our network to integrate with the scalable infrastructure that comprises big data tools, something that, due to the infancy of distributed ledger technology, had not been achieved before.
The technological end goal of our application support was to create a high-level API for distributed datastores, like the models used in common MapReduce frameworks. Specifically, we provide the ability to define state channels as MapReduce and streaming join operations across state channels.
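To make this concrete, here is a toy Python sketch of what "a state channel as a MapReduce operation" and "a join across state channels" could look like. The function names, event schema, and join logic are invented for illustration; this is not Constellation's API.

```python
from collections import defaultdict

def map_reduce_channel(events, map_fn, reduce_fn):
    """Toy sketch of a state channel defined as a MapReduce operation:
    map each incoming event to (key, value) pairs, then reduce per key."""
    grouped = defaultdict(list)
    for event in events:
        for key, value in map_fn(event):
            grouped[key].append(value)
    return {key: reduce_fn(values) for key, values in grouped.items()}

def join_channels(left, right, key):
    """Sketch of a streaming-style join across two state channels on a shared key."""
    index = {record[key]: record for record in right}
    return [{**l, **index[l[key]]} for l in left if l[key] in index]

# Hypothetical sensor events piped into a state channel.
events = [
    {"sensor": "a", "reading": 3},
    {"sensor": "b", "reading": 5},
    {"sensor": "a", "reading": 7},
]

# Aggregate readings per sensor, MapReduce-style.
totals = map_reduce_channel(
    events,
    map_fn=lambda e: [(e["sensor"], e["reading"])],
    reduce_fn=sum,
)
# totals == {"a": 10, "b": 5}

# Join the telemetry channel against a second (hypothetical) metadata channel.
joined = join_channels(
    [{"sensor": "a", "reading": 3}],
    [{"sensor": "a", "location": "warehouse-7"}],
    key="sensor",
)
```

The point of the sketch is the shape of the API: a developer declares the map, reduce, and join logic, and the network handles distribution and validation.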
4.2 Constellation Network — Validators
Constellation Network — The layer below the network comprises distributed node operators that validate the data, maintained by the state channels, which is piped in through an API (Spore). Just as Chainlink is an on-ramp for data onto DLT, we provide an on-ramp for existing systems to transfer data onto our DAG network. We want to make it easy for any developer to decide which data schema/set needs extra security and notarization and should be piped into the Constellation Network.
State channels are their own networks, designed to process specific data with specific verification criteria. The $DAG network is the state channel for $DAG data, our cryptocurrency. State channels will want to operate at least one $DAG node if they want throughput. State channels can be integrated to create composite data types; an example would be integrating a health record state channel with $DAG to create a PII data marketplace.
Here is a metaphor for $DAG nodes vs. state channels:
- $DAG Nodes are Apples
- State Channel A nodes are oranges
- A state channel is a way of storing all schemas of a data set
- State Channel A nodes can exist alone (private network)
- State Channel A nodes can also host $DAG nodes, making them fruit salad nodes (data marketplace)
- The fruit salad is the composite of dataset A data and $DAG Data ($ transactions)
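The fruit-salad metaphor above can be sketched in a few lines of Python. The `Node` class and channel names are invented for illustration only; they are not Constellation's data structures.

```python
class Node:
    """Toy model of a network node that hosts one or more state channels."""

    def __init__(self, channels):
        self.channels = set(channels)

    def is_composite(self):
        # A composite ("fruit salad") node hosts its own data channel
        # alongside the $DAG channel, enabling a data marketplace.
        return "$DAG" in self.channels and len(self.channels) > 1

apples = Node({"$DAG"})                          # a plain $DAG node
oranges = Node({"health_records"})               # state channel A node (private network)
fruit_salad = Node({"health_records", "$DAG"})   # data-marketplace node
```

Here `apples` and `oranges` each stand alone, while `fruit_salad` composes dataset A data with $DAG transactions.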
Node operators/validators host nodes dedicated to validating state channel data. To grow the network, we are onboarding up to 100 nodes for initial network stability and as the foundation: foundation nodes. These network validators will be chosen on several factors, including existing familiarity, geolocation, and capabilities. Over time, we envision the network of node validators scaling, free of human influence, by foundation nodes assigning trusted nodes to their network. Think of this like fans of sports teams.
Initially, we will require each node to stake a certain amount of $DAG (more to come in the Tokenomics Model) to validate transactions on the network. In turn, the validator will earn $DAG for successfully validating transactions. Validator rewards will come from a designated allocation of 1.6B $DAG to be distributed over 10 years.
Validator rewards are distributed based on performance in consensus, and validators are selected and organized based on several factors determined in the programmed logic, including, but not limited to, geolocation, past consensus participation, and the ability to attract other trustworthy node validators. The Network is designed to enable multiple consensus rounds to happen simultaneously while using dynamic partitioning, similar to Spark, to organize a hierarchical topology, optimizing throughput.
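As a minimal sketch of performance-weighted rewards from a fixed pool: the stakes, scores, and pool size below are invented for illustration, and the formula is a stand-in, not Constellation's actual reward logic.

```python
def epoch_rewards(validators, pool):
    """Toy sketch: split one epoch's reward pool among staked validators
    in proportion to a performance score (e.g. consensus participation)."""
    eligible = {name: v for name, v in validators.items() if v["stake"] > 0}
    total_score = sum(v["score"] for v in eligible.values())
    return {name: pool * v["score"] / total_score for name, v in eligible.items()}

# Hypothetical validators; node-c has not staked and earns nothing.
validators = {
    "node-a": {"stake": 250_000, "score": 0.9},
    "node-b": {"stake": 250_000, "score": 0.6},
    "node-c": {"stake": 0,       "score": 1.0},
}

rewards = epoch_rewards(validators, pool=1_000)
# node-a receives 600, node-b receives 400, node-c is excluded
```

The design choice this illustrates: staking gates participation, while ongoing performance, not stake size, drives the payout split.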
4.3 Spore Technologies — Data Validation and Infrastructure Tools
Spore Technologies — This is our proprietary, paid platform of licensed state channels: a microservice platform that provides a hosted, UX-enabled application for SMEs to build distributed applications, state channels, on a decentralized network (the Network), similar to Splunk and Databricks. The path to monetization likewise mirrors Splunk, Cloudera, and Databricks, positioning Constellation as a bolt-on technology to these advanced infrastructure tools. While any open source developer could replicate or build solutions similar to our paid platform, it is easier for corporate partners looking for a managed solution.
Think of this as the application layer to a protocol — our application layer happens to be about managing and enabling data to come onto a network. Spore is an application on top of the same set of infrastructure tools Constellation provides, enabling application developers to create state channels, validate, and notarize data at scale with hosted networks, data science notebooks and graphical user interfaces.
Spore relies on Constellation's secure communications protocol to add additional security, appended at the source or at an API, while creating a tamper-proof audit trail for multi-domain systems, AI, and big data. Spore provides data scientists with downstream visibility and fits nicely into existing big data management tools such as Cloudera and Databricks. Our goal with Spore is to empower a developer community in big data, AI, and machine learning to create a more secure and interoperable ecosystem with our tools. Through a simple-to-use API interface, developers will be able to pipe data onto our DAG network.
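A hypothetical client-side sketch of what "piping data onto the network" might look like: the `notarize` helper, the channel name, and the envelope shape are all invented here; a real Spore client would POST such an envelope to the API rather than just returning it.

```python
import hashlib
import json

def notarize(record, channel):
    """Hypothetical sketch: hash a record so the network can notarize it
    and build a tamper-evident audit trail without exposing raw data."""
    payload = json.dumps(record, sort_keys=True).encode()
    digest = hashlib.sha256(payload).hexdigest()
    # In a real client, this envelope would be sent to the Spore API endpoint.
    return {"channel": channel, "sha256": digest, "record": record}

envelope = notarize({"device": "sensor-42", "temp_c": 21.5}, channel="iot-telemetry")
```

Because the hash is computed deterministically at the source, any downstream consumer can recompute it and detect tampering anywhere in the pipeline.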
4.4 How it all plays together: Connecting the Dots
APIs are what connect centralized systems to decentralization; this bridge is central to our commitment to making a truly decentralized network happen.
Existing notarization approaches fall short because they only store the time somewhere; they have no means of establishing it as its own entity. Constellation wants a time chain that not only notarizes the time but also pays validator rewards and self-governs, through reputation, who can participate in it. Essentially, we create an incentive model to accurately represent the time. If you look at how Augur works, they have reputation-based bets that are essentially manually validated. However, once you have something like time embedded on its own chain, it becomes trivial to programmatically use a 'declarative base layer' to validate things.
Constellation splits contract/execution-style logic into three layers: base, shared application logic via dependencies, and unshared logic via executor services. The base layer is anything that can be expressed and understood by all nodes, i.e., transactions of the form "did this piece of data exist before this one," or "the presence of two items on chain equals some value," and so on. This is equivalent to some of the more 'corporate' smart contract platforms that have come out recently (i.e., it is very easy to declare, non-Turing-complete, and automatically understood by all nodes). Something easy to put on the base layer is a REST API call to some external service; any node can verify it with no shared code.
Let’s say we have a REST API call to Coinbase with a specified time of interest:
- One chain that is doing consensus on time
- One chain that is doing consensus on the price of bitcoin
Now someone wants to make a declarative bet that bitcoin will go above X value by Y time. This requires no code execution to verify; it is a declarative, base-layer, application-oriented transaction.
Now let’s make it a more complicated bet:
- You take the time and price as inputs (i.e., you declare them in an easy way)
- You feed them into some program (this gets validated either by another application user inheriting the same code or through an executor service)
- The output of the program then gets committed on another chain
- If the output of the program is > 10, for example (let's say the program is sqrt(price)*5 / 20, or something else hard to express in base layer logic)
- Then the final evaluation can be done on the base layer, isolating the code execution
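The steps above can be sketched end to end in Python. The chain values and the settlement flow are illustrative assumptions; only the program sqrt(price)*5 / 20 and the > 10 check come from the example itself.

```python
import math

# Declarative inputs, as if read from two chains at consensus (values invented):
time_chain_value = 1_700_000_000   # unix time agreed on by the time chain
price_chain_value = 2_500          # BTC price agreed on by the price chain

def bet_program(price):
    """The off-base-layer program from the example: sqrt(price) * 5 / 20.
    Executed by an application node or executor service, not by every node."""
    return math.sqrt(price) * 5 / 20

# The program's output gets committed on another chain...
output = bet_program(price_chain_value)

# ...and the final evaluation is a simple base-layer comparison
# that any node can verify without running the program itself.
bet_settles = output > 10
# sqrt(2500) * 5 / 20 = 12.5, so the bet settles
```

This shows the layering: arbitrary code runs once in an isolated execution layer, while every node only needs to verify the cheap declarative comparison at the end.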
Applying to Mobility Sector
Now the way that this might apply to mobility is more complicated and is ultimately what Constellation aims to do. Essentially you have a marketplace where nodes need to be incentivized to propagate valid data:
- they need to trade data to one another
- they need to perform calculations and verify the results of other calculations
- they all need to agree on a world model very quickly
- there are numerous types of inputs they’re coming to agreement on
- there’s money at stake involved with the data they’re agreeing on
Part 5: Business Models
One of the challenging things about Constellation, and about many companies with a cryptocurrency (especially a utility token) and revenue-producing solutions, is that the organization has to consider two business models: one that derives value and demand from the utility of the token (like a commodity: see Generative Tokenomics), and one that operates in our existing capitalistic world. In public securities, a security often pays a dividend to shareholders, grants rights to specific property and terms, and carries some correlation between the share price and overall revenue (ultimately determined by the market, so this correlation is not exact). In Constellation's case, however, as a utility token we do not have shareholders; instead, individuals purchase $DAG, our token, for future network use.
$DAG, the token, is essential to the network. To attach value to information, you need $DAG. In the same way you use smart contracts for business logic, you use $DAG to transact value across database- and data-driven applications. If you want to exchange information, it is essential to use $DAG. The network itself is a data exchange mechanism, and that ties utility to the token.
The Constellation Network, a DAG, is focused on building a network that provides high throughput for big data use cases and needs. A DAG network's throughput is not determined by block size but by the number of nodes participating in the network. Each node provides compute resources to the network: the more nodes that join, the more resources, in the form of throughput, become available. Next week, Constellation will release a detailed paper showing how throughput, reputation, and $DAG are all linked. For now, I merely want to shed light on how the overlapping business types are connected.
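The scaling claim above can be illustrated with a toy model. The linear relationship and the per-node rate here are purely illustrative assumptions on my part, not the formula from the forthcoming paper:

```python
# Toy illustration: in a DAG network, aggregate throughput grows with
# the number of participating nodes rather than being capped by a block
# size. The linear model and the hypothetical 100 tx/s per node are
# illustrative assumptions, not Constellation's actual figures.

def network_throughput(node_count, tx_per_node_per_sec=100):
    """Aggregate throughput if each node contributes a fixed amount of
    compute to the network."""
    return node_count * tx_per_node_per_sec

for nodes in (10, 100, 1_000):
    print(f"{nodes} nodes -> {network_throughput(nodes)} tx/s")
```

Contrast this with a fixed-block-size chain, where adding nodes adds security and redundancy but not capacity.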
What most cryptocurrency companies struggle with is the need to drive revenue, which, unlike with a security, doesn't necessarily mean higher network value, since revenue is a different form of currency and incentive. A company like Constellation needs both a revenue model, to display an exchange of value and build a healthy company with a P&L, and network utility. Basically, these are two business models that need to naturally and logically work together: the crypto economic model and the enterprise business model. Since many SMEs have yet to adopt a coin as part of their natural business process (and are probably several years out as they figure out tax obligations, custodianship, etc.), and very few organizations have a line item for cryptocurrency to purchase throughput on an open network, Constellation is building two models that work with one another and prevent the "chicken-and-the-egg" scenario.
5.1 Enterprise Business Model
As for Constellation's enterprise business model, the tactic we are taking to gain adoption is to appeal to big data science teams and VPs of engineering by offering a technology, Spore Technologies, that is the stripped-down version of the Constellation Network: data validation and infrastructure tools that provide added security, an audit trail, and additional downstream data pipeline visibility, and that fit with existing infrastructure tools (Kafka, Storm) and data management tools (Cloudera, Databricks). As such, our model is the following:
- We charge for hosted services (which may mean creating on-premise private DAG networks, or purchasing $DAG and creating state channels on the public network on behalf of our client).
- Our model will be to charge the client on a per-event/message, per-API-call, or per-node basis (all of which can easily be backed into one another).
- These are common models across Cloudera, MuleSoft, and Databricks.
- $DAG is used to purchase extra throughput on the network when validating and creating an audit trail for mission-critical data*.
- We foresee that Constellation Network Inc. will act as the custodian, purchasing $DAG on exchanges on behalf of our clients.
*Please note that we do not anticipate that clients will initially put all of their data on the Constellation Network. The first phase will be to onboard mission-critical data and certain segments and schemas of the data pipeline.
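To make the "backed into one another" claim concrete, here is a small sketch of how a per-event price converts to an equivalent per-node price and back. All rates and volumes are made-up illustrative numbers, not actual Constellation pricing:

```python
# Hypothetical conversion between the pricing bases described above.
# The rates and event volumes are illustrative assumptions only.

def per_event_to_per_node(price_per_event, events_per_node_per_month):
    """Equivalent monthly per-node price at a given event volume."""
    return price_per_event * events_per_node_per_month

def per_node_to_per_event(price_per_node, events_per_node_per_month):
    """Equivalent per-event price implied by a monthly per-node fee."""
    return price_per_node / events_per_node_per_month

# E.g. $0.0005/event at 10M events per node per month comes out to
# roughly a $5,000/node/month plan ...
monthly_node_price = per_event_to_per_node(0.0005, 10_000_000)
print(monthly_node_price)
# ... and the same plan backs into the same per-event rate:
print(per_node_to_per_event(monthly_node_price, 10_000_000))
```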
5.2 Crypto Economic Model
Many partners and clients we are engaging with are cautious about putting their data on an open network; having talked with these partners, I will wholeheartedly challenge anyone who claims otherwise. Some of our partners have already created public data feeds, and we will bring that public data onto Constellation so as to further build out our notion of a knowledge graph and begin creating a universal source of truth for a myriad of data sources. Most of the partnerships we are exploring will move their data (which is metadata) onto the Constellation Network during the lifetime of our engagement with the client, vacillating between public and private networks.
As such, we want to actively consider both the crypto economic model and the enterprise business model, and provide an incentivization mechanism for both that reflects licensing and a value exchange for our time and for use of components of our technology (whether we use the open network or not).
In the case where we onboard a client or partner that has an immediate use for a private DAG network and will acclimate to the open network over time, we keep our commitment to a long-term deflationary tokenomics model:
- We will burn 10M $DAG for each partnership brought onto the Constellation Network, regardless of contract size or the current market value of our token.
- In the event that we cannot publicly announce our partnership due to non-disclosure agreements, we will still announce the acquisition and burn of 10M $DAG publicly.
- We will announce new partners at the beginning of every quarter, at which point DAG will be burned.
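The arithmetic of this commitment is simple enough to sketch. The 10M $DAG per partnership is from the commitment above; the starting supply and the quarterly partnership counts below are hypothetical placeholders, not real figures:

```python
# Toy model of the quarterly burn commitment: 10M $DAG burned per
# partnership, applied at the start of each quarter. Starting supply
# and partnership counts are illustrative assumptions.

BURN_PER_PARTNERSHIP = 10_000_000  # $DAG, per the commitment above

def remaining_supply(starting_supply, partnerships_per_quarter):
    """Circulating supply after burning for each quarter's new partners."""
    supply = starting_supply
    for new_partners in partnerships_per_quarter:
        supply -= new_partners * BURN_PER_PARTNERSHIP
    return supply

# E.g. a hypothetical 3.7B starting supply and four quarters of deals:
print(remaining_supply(3_700_000_000, [2, 1, 3, 2]))  # 3620000000
```

Because the burn is fixed in $DAG rather than in fiat terms, the deflationary pressure is independent of contract size and token price, which is exactly the commitment stated above.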
This model, we believe, is the first attempt at satisfying a two-sided organization with two business models. Many blockchain and cryptocurrency companies have only figured out revenue models that amount to consulting and advising agreements around the blockchain industry as a whole. Our focus is to build a company with a scalable business model that can grow exponentially while bridging the gap between network utility and value as adoption of the industry and technology blossoms.
Part 6: Summary
In summary, we live in a fascinating, fast-paced, exciting, volatile, and dynamic time. There is a romantic curiosity about the possibilities of humankind like no other. The United Nations calls out 17 Sustainable Development Goals, and number nine is a call to innovate: "Goal 9: Industry, Innovation, and Infrastructure: Think of new ways to repurpose old materials" (United Nations SDG 2019). Constellation provides a new framework for innovation: a new infrastructure tool to unlock value in data. Constellation is a postmodern synthesis that applies theory, overlaps existing and advanced big data tooling with a decentralized and open network approach, and introduces a new industry, a knowledge industry. Just as we extract value from big data, we can extract and attribute essential value to knowledge in ways we have never seen before.