Introducing Proxima.one

Data for Web 3.0

Chase Smith
Proxima.one
4 min read · Aug 22, 2021


Blockchain and decentralized networks have opened up a variety of disparate data and execution networks that can operate in an autonomous and trustless manner. With the advent of these networks, we have fallen into the same trap that older software architectures did: we have siloed the data so that it is only accessible through bridges or in limited formats from blockchain-specific APIs. Connecting balances between two contracts, or aggregating the liquidity of multiple exchanges, is currently done without regard for preserving the verifiability of the information being aggregated. Proxima provides a network that trustlessly connects to the blockchain without relying on a secondary consensus.

Motivation

As a team, we see data I/O as one of the biggest bottlenecks for blockchain at this point in time. In working on the data availability layer for blockchain, we realized that a major drawback is the manner in which blockchain databases are built. Our mission is to overcome the data I/O bottleneck for trustless distributed networks. We intend to do this by providing a performant data layer and a trustless query protocol.

Proxima

Our solution is designed to combine, transform, and utilize data from a secure source. Blockchain is the natural starting point, but this can be done for any data source that has a verification method, including blockchain consensus, trusted signers, and multi-sig curation parties. Three properties guide the design:

  • Extensibility.
  • Security and Auditability.
  • Scalable Performance.

The stream processing of Proxima is designed to provide a trustless data layer that maintains an audit trail, so its output can be used with the same trust as the blockchain it originates from. The stream processor takes in a series of event streams and pushes them through a variety of event-processing functions, with the resulting value(s) being output to another set of streams. Throughout this entire process of transforming, merging, and splitting, the data can always be verified back to the chain itself.
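
To make this concrete, here is a minimal TypeScript sketch of a trace-preserving transformation; the type and function names are illustrative assumptions, not Proxima's published API.

```typescript
// Illustrative only: events carry a trace that is extended, never
// dropped, so output data can still be verified back to its chain.
interface StreamEvent<T> {
  type: string;      // the action, data update, or operation
  payload: T;
  trace: string[];   // identifiers leading back to the source chain
}

// Apply a transformation to each event on a stream, appending this
// step's identifier to the trace.
function transformStream<A, B>(
  input: StreamEvent<A>[],
  stepId: string,
  fn: (payload: A) => B
): StreamEvent<B>[] {
  return input.map((e) => ({
    type: e.type,
    payload: fn(e.payload),
    trace: [...e.trace, stepId],
  }));
}
```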

Services: Streams and data sources

In order to do this, Proxima uses Data Services that store events, operations, and materialized views. Data from Proxima Services is consumed by the Proxima Streaming Server, which transforms, merges, splits, and aggregates large pieces of data into an efficient schema. The Proxima Streaming system includes multiple elements that enable tracing and auditing of data (a sketch of a service interface follows the list):

  • Proxima-defined streams
  • User-defined streams
  • External data sources
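
As an illustration of the architecture, a data service could expose something like the following interface; the method names are assumptions made for this sketch, not the actual Proxima API.

```typescript
// Hypothetical shape of a Proxima data service: it stores events,
// operations, and materialized views, and exposes the first two as
// streams for the streaming server to consume.
interface DataService {
  events(streamId: string): AsyncIterable<unknown>;
  operations(streamId: string): AsyncIterable<unknown>;
  materializedView<V>(viewId: string): Promise<V>;
}
```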

Events

Streams are composed of events, which are given a type, a payload, and a trace. The type of an event refers to the action, data update, or operation it represents. Events are produced and consumed as a core part of the Proxima Protocol. To ensure that they are valid, each event carries a trace that enables it to be authenticated: the trace contains an identifier for the event's originating function, along with that function's arguments.
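
In code, an event and its trace might look roughly like this; it is a more detailed view of the trace than the sketch above, and the names remain illustrative.

```typescript
// Sketch of an event and its trace. The trace identifies the function
// that produced the event and the arguments it was called with, which
// is enough to re-derive and authenticate the event.
interface Trace {
  functionId: string;   // identifier of the originating function
  args: unknown[];      // arguments the function was applied to
}

interface ProximaEvent<T> {
  type: string;   // action, data update, or operation
  payload: T;
  trace: Trace;
}
```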

Event-processing Functions

Events are processed using functions that can take in one or many events and update one or many streams; two of these are sketched after the list.

  • Merge
  • Transform
  • Split
  • Filter
  • Accumulate
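
As a sketch, here is what filter and accumulate could look like over in-memory arrays; the real protocol operates over streams, and these signatures are assumptions.

```typescript
// Simplified event shape for the sketch (trace omitted for brevity).
interface Ev<T> { type: string; payload: T }

// Filter: keep only events whose payload passes a predicate.
function filterEvents<T>(events: Ev<T>[], pred: (p: T) => boolean): Ev<T>[] {
  return events.filter((e) => pred(e.payload));
}

// Accumulate: fold a stream of events into a single running value.
function accumulate<T, Acc>(
  events: Ev<T>[],
  init: Acc,
  step: (acc: Acc, p: T) => Acc
): Acc {
  return events.reduce((acc, e) => step(acc, e.payload), init);
}
```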

Applications

Our mission is to create an efficient means of using data in a secure and trustless manner. To accomplish this, we are focusing on constructing a complete solution for decentralized exchanges that can then be generalized to many other use cases.

  • Core exchange data: processing and aggregation of multiple data sources.
  • Analytics and extras
  • Document store
  • External service triggers

How it works

By combining multiple data-manipulation functions, it is possible to extend the feature set. To increase the complexity of the operations that can be performed with Proxima, there is a specific set of function circuits that can be facilitated through self-hosted or consensus-driven authorization functions.
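
One way to picture a function circuit, purely as an illustration: data-manipulation functions composed into a pipeline whose output is gated by an authorization function, which could be self-hosted or consensus-driven.

```typescript
// Illustrative composition of data-manipulation steps into a
// "circuit", gated by an authorization function.
type Step<A, B> = (input: A) => B;

function circuit<A, B, C>(
  first: Step<A, B>,
  second: Step<B, C>,
  authorize: (output: C) => boolean
): Step<A, C | null> {
  return (input) => {
    const out = second(first(input));
    return authorize(out) ? out : null;  // reject unauthorized output
  };
}
```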

Not all data needs to stay in Proxima. In fact, there are instances in which data needs to trigger external services; to support this, one needs to be able to create the specific operation, along with services that consume these operations and trigger the external actions.
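
A consumer that bridges stream data to an external service could look roughly like the following; the event type and endpoint handling are placeholder assumptions.

```typescript
// Hypothetical consumer: watch a stream and call out to an external
// service whenever a matching operation appears.
async function triggerExternal(
  stream: AsyncIterable<{ type: string; payload: unknown }>,
  endpoint: string
): Promise<void> {
  for await (const event of stream) {
    if (event.type === "operation.created") {
      // fetch is available globally in modern Node and browsers
      await fetch(endpoint, {
        method: "POST",
        headers: { "content-type": "application/json" },
        body: JSON.stringify(event.payload),
      });
    }
  }
}
```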

Using Consensus for Action Verification, Not Action Creation

Authorization/authentication: consensus functions are essentially lambdas that can be scaled quickly because they do not use external data, and consensus is chosen per transaction (consensus is a one-off).

Core Processes

Events

Example event: Ethereum is burned for the benefit of the Binance contract, with user id: 1 and amount: 4.
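
Written as data, that example event might look like this; the type name, function identifier, and empty args are placeholders.

```typescript
// The example event as an illustrative object, not real Proxima values.
const burnEvent = {
  type: "eth.burn",
  payload: { userId: 1, amount: 4 },
  trace: { functionId: "eth.watchBurns", args: [] as unknown[] },
};
```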

Non-consensus

Generation of an operation: because Ethereum was burned, we need to generate a transaction for the Binance contract that updates the user with id: 1 by amount: 4. The operation is tagged with the event that created it. Note that at this stage the operation is unsigned, verifiable, and valid: a general operation.
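
A sketch of that generation step, with hypothetical types:

```typescript
// Illustrative: derive an unsigned, verifiable operation from a burn
// event, tagged with the id of the event that created it.
interface Operation {
  target: string;                              // e.g. the Binance contract
  update: { userId: number; amount: number };
  sourceEvent: string;                         // tag for the originating event
  signature: string | null;                    // unsigned at this stage
}

function generateOperation(ev: {
  id: string;
  payload: { userId: number; amount: number };
}): Operation {
  return {
    target: "binance-contract",                // placeholder target
    update: ev.payload,
    sourceEvent: ev.id,
    signature: null,                           // consensus signs later
  };
}
```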

Consensus

The consensus layer verifies the audit trail and then signs the update. It audits the operation (which requires no outside information) and signs it. Any consensus mechanism can be used: multi-sig, on-chain, or self-hosted.
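
A sketch of the consensus step, where verifyTrail and sign stand in for whichever consensus mechanism is chosen:

```typescript
// Illustrative: consensus audits the operation's trail (no outside
// information needed) and signs it; verifyTrail and sign represent
// the chosen mechanism (multi-sig, on-chain, or self-hosted).
function consensusSign<Op extends { sourceEvent: string }>(
  op: Op,
  verifyTrail: (sourceEvent: string) => boolean,
  sign: (op: Op) => string
): Op & { signature: string } {
  if (!verifyTrail(op.sourceEvent)) {
    throw new Error("audit trail failed verification");
  }
  return { ...op, signature: sign(op) };
}
```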

Disputes

Disputes can be resolved programmatically: determine whether the data is bad, or whether the verification is bad. Any issue is scoped to an individual transaction. Because disputes are programmatic, the longer a transaction or operation stands without a dispute, the more likely its data is correct; and the more people use the data, the more likely individuals are to file disputes against it.
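
Because the checks are programmatic and scoped to a single operation, dispute resolution can be sketched as a pure function; the predicates here are assumptions, not part of a published API.

```typescript
// Illustrative per-transaction dispute check: resolve by re-checking
// the data itself, then its verification.
type DisputeResult = "data_bad" | "verification_bad" | "valid";

function resolveDispute<Op>(
  op: Op,
  dataMatchesSource: (op: Op) => boolean,
  trailVerifies: (op: Op) => boolean
): DisputeResult {
  if (!dataMatchesSource(op)) return "data_bad";
  if (!trailVerifies(op)) return "verification_bad";
  return "valid";
}
```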

Evolution of the Protocol

Our goal is more than purely data; we see data as a fundamentally core piece of integrating blockchain into the real world.
