Everything you wanted to know about Flowertokens but were afraid to ask — Part 3: the Oracle, and natural asset tokenization

This post serves as an explanation of the underlying infrastructure behind the Flowertokens experiment, a discussion of its flaws, and a look at what we learned from designing and implementing it. If you aren’t familiar with the project, you can get familiar here.

The two components that will be discussed are:

  1. The core functions of the Oracle
  2. The linking of tokens and their metadata

Core Oracle functions

The first of these components to be discussed is the Oracle’s core functions. Flowertokens aims to create the first combined crypto-collectible and physical asset. But how? Everything involving real-world data (i.e. data that originates from outside a blockchain) and distributed ledgers is hard. The ‘oracle problem’ describes this pretty well:

Just because you can throw data into a ledger, doesn’t mean it can be trusted.

A common problem with the tokenization of intangible and fungible assets concerns token consistency. In decentralized ledgers such as Ethereum, data consistency is achieved via consensus mechanisms and transaction rules. These leave no space for exceptions and are inherently decentralized. Yet in the real world there is no perfect consistency: human actors don’t always obey the rules. The same applies to real-world data, which often originates from a single point (or a single human), and there is as yet no proven way of verifying such data in the manner implemented in decentralized ledgers.

We acknowledge this problem and plan to experiment further with novel cryptoeconomic systems that minimize (and hopefully remove) this single point of failure. However, removing this issue was out of scope for the Flowertokens project. Flowertokens was thus designed with a centralized point of verification; since the installation is site-specific, trustless verification was not strictly required for the project to fulfil its purpose.

So let’s get into the details. The installation is monitored by a Canon EOS 7D, a digital SLR camera. The camera takes a photo of the grow-rack every 13 minutes, triggered by an on-site computer. This computer then processes (crops and resizes) the captured image, and the images are uploaded to our main image server. This gives us the ability to provide a ‘live’ image for the project’s frontend, as well as to travel through the history of the installation. The same history view is also provided for each individual flower.

Early test with a gladiola
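The per-flower cropping step can be sketched as follows. The rack dimensions, slot layout, and function name here are illustrative assumptions; the post only states that the cropped per-flower images end up around 200×600 pixels.

```python
# Sketch of the crop step run by the on-site computer.
# Geometry (3200 px wide rack, 16 slots) is an assumption chosen so
# that each slot comes out as the ~200x600 px crop the post mentions.

def crop_box(slot, rack_width=3200, rack_height=600, columns=16):
    """Return the (left, top, right, bottom) pixel box for one
    flower slot in the full grow-rack image."""
    slot_width = rack_width // columns
    left = slot * slot_width
    return (left, 0, left + slot_width, rack_height)
```

The resulting box could then be passed to any image library’s crop call before resizing and uploading.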

Once a day, this local server also analyzes the captured image. Here we use PlantCV, an image-processing framework designed for plant-biology research, built on the open-source platforms OpenCV, NumPy, and Matplotlib. The analysis yields values such as plant height and the pixel areas covered by different parts of the colour spectrum. This allows us to calculate the growth rate of each flower, as well as detect whether a flower is blooming.

Current successful recognition
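The two derived values can be sketched as below. The function names and the bloom threshold are assumptions for illustration; only the idea of deriving growth rate from height readings and blooming from colour areas comes from the post.

```python
# Toy versions of the two derived metrics (names and thresholds are
# illustrative, not the project's actual code).

def growth_rate(prev_height_cm, curr_height_cm, interval_days=1):
    """Growth rate from two successive daily height readings."""
    return (curr_height_cm - prev_height_cm) / interval_days

def is_blooming(color_areas, bloom_threshold=50):
    """Flag a flower as blooming once enough non-green pixels
    (e.g. petal colours) appear in the analysed crop."""
    non_green = sum(px for hue, px in color_areas.items() if hue != "green")
    return non_green >= bloom_threshold
```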

As mentioned in Part 2, the initial testing of this system was done with different, larger flowers in mind. These early tests thus involved plants with greater surface areas, which were easier to detect. Now that we are using dahlias grown from seed, the technical infrastructure is being pushed to its limits: the camera’s resolution is often not enough to detect the small saplings, and the height and growth-rate readings appear inaccurate until a certain size is reached. Since the cropped individual images are no bigger than 200×600 pixels (where 1 pixel corresponds to roughly 0.2 cm), it is hard for PlantCV to detect the plants’ stems, which leads to reading errors. We are currently experimenting with different recording setups to counter this effect. Based on past experiments with previous plants, however, we predict that these reading errors will stop once the plants have grown above a certain size (and thus have wider stems).

Current half-failing recognition
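The pixel-scale limit described above can be made concrete with a small sketch. The 0.2 cm-per-pixel figure comes from the post; the minimum-stem-width heuristic and function names are assumptions.

```python
PX_TO_CM = 0.2  # from the post: 1 pixel is around 0.2 cm in the crops

def height_cm(height_px):
    """Convert a measured height in pixels to centimetres."""
    return height_px * PX_TO_CM

def likely_reliable(stem_width_px, min_px=3):
    """Heuristic: treat readings as noise while the stem spans only a
    pixel or two (the 3 px threshold is an assumption)."""
    return stem_width_px >= min_px
```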

Natural asset tokenization

So how is this data linked to the tokens?

Within the system, each flower is represented by an ERC721 token. Because each flower is an individual natural asset, we use individual non-fungible tokens.

Every flower has an associated token ID and associated metadata (e.g. the ‘growth_rate’ and ‘height’ parameters shown below). Flowers may vary in colour or size, but they can still be represented uniformly in the system. For every flower, the oracle generates a JSON file with the following structure:

```json
{
  "Flower": {
    "id": 1,
    "height": 5.494048134972253,
    "growth_rate": 0.021186424461041794,
    "timestamp": 1531990624
  }
}
```

The original ERC721 Token Proposal suggested that metadata be ‘attached’ to the token in the form of a hash, referring to data stored in a decentralized database. Since the metadata created via the image analysis is not immutable, but instead resembles version control, we chose the Dat protocol for our metadata storage. Dat makes the updating process very easy: when a new version of the JSON data is created, the Dat repo automatically syncs it with its peers, while remaining available at the same hash. Furthermore, Dat allows the entire history of that hash (i.e. every set of JSON data that has ever been stored at it) to be viewed.
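Dat itself is a JavaScript/CLI tool, so the following is only a toy Python model of the property described above: a single stable address whose latest version is served by default, with the full version history still viewable.

```python
class VersionedStore:
    """Toy model of Dat-style metadata storage (not the Dat API):
    one stable address, append-only history, latest version first."""

    def __init__(self, address):
        self.address = address   # stands in for the Dat hash/URL
        self._versions = []

    def publish(self, metadata):
        """Append a new version, as a Dat repo does when files change."""
        self._versions.append(metadata)

    def latest(self):
        """The version peers see by default."""
        return self._versions[-1]

    def history(self):
        """Every version ever stored at this address."""
        return list(self._versions)
```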