Q3–4 Development Progress and Code Released on GitHub!
Development Progress Since July 2018
For the past few months, our development team has been writing and rigorously testing thousands of lines of code across our overall network and back-end technology architecture, to ensure that every feature in Airbloc functions correctly and interacts properly with the others.
After almost half a year of development and private testing, Airbloc’s code is now released on the official Airbloc GitHub!
Our development priorities have been on our back-end features, chiefly categorized into the following major overarching areas:
Data Management and Warehouse | Identity Data Management | Data Relay, Exchange and Export | Airbloc Web and Mobile SDKs | Airbloc APIs | Aero Sidechain Development | Airbloc Node Development | Development Language Migration from Python to Golang
Data Management and Warehouse
- Implemented data warehousing functionality.
- Bundle system for storing data — A bundle is a set of data records belonging to the same category. Data on Airbloc is registered and stored as a bundle, instead of each record being stored separately.
- Enhanced data ID representation — A data ID is now represented as <Bundle ID>/<User ANID>, which allows easier user-centric data management.
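To sketch what this representation buys: composing and splitting such IDs is a one-line operation, which makes user-centric lookups within a bundle straightforward. The type and field names below are illustrative assumptions, not Airbloc's actual API.

```go
package main

import (
	"fmt"
	"strings"
)

// DataID illustrates the <Bundle ID>/<User ANID> representation.
// Field names are assumptions for illustration only.
type DataID struct {
	BundleID string
	UserANID string
}

// String joins the two components into the combined ID form.
func (d DataID) String() string {
	return d.BundleID + "/" + d.UserANID
}

// ParseDataID splits a combined ID back into its components.
func ParseDataID(s string) (DataID, error) {
	parts := strings.SplitN(s, "/", 2)
	if len(parts) != 2 || parts[0] == "" || parts[1] == "" {
		return DataID{}, fmt.Errorf("invalid data ID: %q", s)
	}
	return DataID{BundleID: parts[0], UserANID: parts[1]}, nil
}

func main() {
	id := DataID{BundleID: "bundle42", UserANID: "anid7"}
	fmt.Println(id)
	parsed, err := ParseDataID("bundle42/anid7")
	fmt.Println(parsed.BundleID, parsed.UserANID, err == nil)
}
```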
- Integrated data streaming through gRPC streams — This allows efficient data ingestion and the ability to integrate with continuous data pipelines.
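Since a gRPC stream delivers records one at a time, ingestion can group them into a bundle on the fly. The sketch below models the stream as a Go channel standing in for a gRPC client stream; the record type is a hypothetical stand-in, not Airbloc's schema.

```go
package main

import "fmt"

// DataRecord is a hypothetical record type for illustration.
type DataRecord struct {
	UserANID string
	Payload  string
}

// IngestStream consumes records from a stream (modeled here as a
// channel, standing in for a gRPC client stream) and groups them
// into one bundle. Grouping amortizes per-record overhead, the same
// property that makes gRPC streaming suit continuous pipelines.
func IngestStream(records <-chan DataRecord) []DataRecord {
	bundle := make([]DataRecord, 0)
	for rec := range records {
		bundle = append(bundle, rec)
	}
	return bundle
}

func main() {
	ch := make(chan DataRecord, 2)
	ch <- DataRecord{UserANID: "anid1", Payload: "click"}
	ch <- DataRecord{UserANID: "anid2", Payload: "install"}
	close(ch)
	fmt.Println(len(IngestStream(ch)))
}
```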
- Improved the bulk data management system by introducing the Batch concept — A batch is a cached set of data record IDs, which enables more memory-efficient data handling by using a trie-like data structure.
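A minimal sketch of the Batch idea, assuming record IDs are plain strings: in a prefix trie, IDs that share a prefix (such as the same bundle ID) share storage, which is where the memory savings come from. This is an illustration, not the production data structure.

```go
package main

import "fmt"

// node is one level of a simple prefix trie over record-ID strings.
type node struct {
	children map[byte]*node
	terminal bool
}

func newNode() *node { return &node{children: map[byte]*node{}} }

// Batch caches a set of data record IDs in a trie, so IDs with a
// common prefix (e.g. the same bundle ID) share nodes.
type Batch struct{ root *node }

func NewBatch() *Batch { return &Batch{root: newNode()} }

// Add inserts a record ID into the batch.
func (b *Batch) Add(id string) {
	n := b.root
	for i := 0; i < len(id); i++ {
		c := id[i]
		if n.children[c] == nil {
			n.children[c] = newNode()
		}
		n = n.children[c]
	}
	n.terminal = true
}

// Contains reports whether a record ID is in the batch.
func (b *Batch) Contains(id string) bool {
	n := b.root
	for i := 0; i < len(id); i++ {
		n = n.children[id[i]]
		if n == nil {
			return false
		}
	}
	return n.terminal
}

func main() {
	b := NewBatch()
	b.Add("bundle1/anid1")
	b.Add("bundle1/anid2")
	fmt.Println(b.Contains("bundle1/anid1"), b.Contains("bundle2/anid1"))
}
```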
- Integrated data warehouse with our meta-database, BigchainDB.
- Implemented an Amazon S3 storage driver and protocol — essential for integrating Airbloc into Airbridge’s back-end data pipeline.
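One way such a storage driver abstraction might look, sketched with an in-memory backend standing in for S3 (a real S3 driver would issue PutObject/GetObject requests against a bucket instead). The interface shape is an assumption for illustration, not Airbloc's actual driver API.

```go
package main

import (
	"errors"
	"fmt"
)

// StorageDriver abstracts the warehouse's object storage backend,
// so an Amazon S3 driver and other backends can be swapped freely.
type StorageDriver interface {
	Put(key string, data []byte) error
	Get(key string) ([]byte, error)
}

// MemoryDriver is a stand-in backend used here for illustration.
type MemoryDriver struct{ objects map[string][]byte }

func NewMemoryDriver() *MemoryDriver {
	return &MemoryDriver{objects: map[string][]byte{}}
}

// Put stores a copy of the object under the given key.
func (m *MemoryDriver) Put(key string, data []byte) error {
	m.objects[key] = append([]byte(nil), data...)
	return nil
}

// Get retrieves a previously stored object.
func (m *MemoryDriver) Get(key string) ([]byte, error) {
	data, ok := m.objects[key]
	if !ok {
		return nil, errors.New("not found: " + key)
	}
	return data, nil
}

func main() {
	var driver StorageDriver = NewMemoryDriver()
	driver.Put("bundles/bundle1", []byte("records"))
	data, err := driver.Get("bundles/bundle1")
	fmt.Println(string(data), err == nil)
}
```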
Identity Data Management
- Implemented Account Management System — Users can now use Airbloc (e.g. control their own data, withdraw ABL rewards) using either their own private key or a password.
- Introduced Proxy Accounts — These allow users to authenticate with a password instead of wallet apps or MetaMask, and to delegate transaction fees.
- Implemented Temporary Accounts, which can be used during DAuth data collection authentication — These allow users to create a temporary account on Airbloc without the need for MetaMask or a private key.
- Implemented DAuth on both server-side and client-side.
Data Relay, Exchange and Export
- Designed an Abstract Exchange System through smart contracts — The new system is designed to support any type of data exchange by enabling data consumers to offer a smart contract according to the type of exchange they want.
- Improved the data exchange process to include three stages: Order, Open and Close — This allows a more customizable data exchange process through smart contracts.
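The three stages can be sketched as a small state machine. On Airbloc the actual transitions are enforced on-chain by the exchange smart contracts, so the Go version below is purely an illustrative model with hypothetical names.

```go
package main

import (
	"errors"
	"fmt"
)

// Stage mirrors the three exchange stages: Order, Open, Close.
type Stage int

const (
	Ordered Stage = iota
	Opened
	Closed
)

// Offer is a hypothetical off-chain view of one exchange offer.
type Offer struct{ stage Stage }

// NewOffer starts an offer in the Ordered stage.
func NewOffer() *Offer { return &Offer{stage: Ordered} }

// Open moves the offer from Ordered to Opened.
func (o *Offer) Open() error {
	if o.stage != Ordered {
		return errors.New("offer must be in Ordered stage")
	}
	o.stage = Opened
	return nil
}

// Close settles an Opened offer.
func (o *Offer) Close() error {
	if o.stage != Opened {
		return errors.New("offer must be in Opened stage")
	}
	o.stage = Closed
	return nil
}

func main() {
	o := NewOffer()
	fmt.Println(o.Open() == nil, o.Close() == nil, o.stage == Closed)
}
```

Encoding the stages this way makes invalid transitions (closing an offer that was never opened, opening twice) fail explicitly rather than silently.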
- Refactored the architecture of the exchange smart contract.
- Added support for Ricardian Contracts — We’re currently researching a better way to use Ricardian Contracts for enterprise-level data exchange, which will be implemented soon.
- Implemented Data Export Functionality — Data consumers can view purchased data by providing a re-encryption key for the data.
Airbloc Web and Mobile SDKs
- Implemented the first version of the Airbloc Mobile and Web SDKs — The SDKs contain data collection authentication functionality and can collect multivariate data types leveraging Airbridge’s data pipeline.
- Developed DAuth UI/UX for both web and mobile.
- Tested Airbloc mobile SDKs on sample applications.
Airbloc APIs
- Introduced gRPC and Protobuf — To ensure the Airbloc Protocol is polyglot and capable of handling massive data volumes, gRPC is the best choice for our API implementations.
- Implemented APIs essential for exchanging data.
- Improved API design by separating User-side APIs and Provider-and-Consumer-side APIs.
Aero Sidechain Development
- Implemented Plasma Cash proof of concept — Our approach is focused on implementing Plasma Cash as a framework by only using smart contracts and watchtower nodes (Validator, Operator).
- Worked on a Sparse Merkle Tree implementation that can serialize and cache the tree data.
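A toy version of the caching idea behind sparse Merkle trees, assuming SHA-256 hashing and a much smaller depth than a real Plasma Cash tree: empty subtrees at every level share one precomputed default hash, so only non-empty nodes ever need to be stored or serialized. This sketch is not the actual Aero implementation.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

const depth = 8 // a real Plasma Cash SMT would use 64 or 256 levels

func hashPair(l, r []byte) []byte {
	h := sha256.Sum256(append(append([]byte{}, l...), r...))
	return h[:]
}

// SMT is a minimal sparse Merkle tree. Empty subtrees at each level
// share one precomputed "default" hash, the caching trick that keeps
// a sparse tree cheap to store and serialize.
type SMT struct {
	leaves   map[uint64][]byte
	defaults [][]byte // defaults[i] = hash of an empty subtree of height i
}

func NewSMT() *SMT {
	defaults := make([][]byte, depth+1)
	empty := sha256.Sum256(nil)
	defaults[0] = empty[:]
	for i := 1; i <= depth; i++ {
		defaults[i] = hashPair(defaults[i-1], defaults[i-1])
	}
	return &SMT{leaves: map[uint64][]byte{}, defaults: defaults}
}

// Set stores the hash of a value at a leaf index.
func (t *SMT) Set(index uint64, value []byte) {
	h := sha256.Sum256(value)
	t.leaves[index] = h[:]
}

// Root recomputes the root, substituting cached defaults for any
// empty sibling instead of descending into empty subtrees.
func (t *SMT) Root() []byte {
	level := map[uint64][]byte{}
	for k, v := range t.leaves {
		level[k] = v
	}
	if len(level) == 0 {
		return t.defaults[depth]
	}
	for h := 0; h < depth; h++ {
		next := map[uint64][]byte{}
		for idx := range level {
			p := idx / 2
			if _, done := next[p]; done {
				continue
			}
			left, ok := level[p*2]
			if !ok {
				left = t.defaults[h]
			}
			right, ok := level[p*2+1]
			if !ok {
				right = t.defaults[h]
			}
			next[p] = hashPair(left, right)
		}
		level = next
	}
	return level[0]
}

func main() {
	tree := NewSMT()
	r0 := tree.Root()
	tree.Set(3, []byte("coin"))
	fmt.Println(len(r0) == 32, string(r0) != string(tree.Root()))
}
```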
Airbloc Node Development
- Migrated from Python to Go. We were originally working on the first implementation of the Airbloc Protocol in Python. However, we decided to switch to Go because it is faster and provides more robust concurrency than Python, properties that are more suitable for a high-volume data exchange protocol.
- Switched the local database from LevelDB to BadgerDB. BadgerDB is an embedded key-value store written in pure Go, and is roughly 2–3x faster than comparable stores.
- Added P2P networking functionality using libp2p. Since we have switched to Go, there’s no reason not to use the best P2P networking stack :)