Building an Ether DEX Price Aggregator with Fluence

A decentralized application development tutorial

Andreas Pogiatzis
Published in Coinmonks
14 min read · Jun 29, 2021


It’s been a long time since my last post, as my workload has been quite overwhelming lately. Things are getting back to normal now, though, so I thought I would restart my blog activity with a tutorial post focused on a quite intriguing technology stack that I have recently worked with.

In the context of the Hack The System hackathon, one of the medium-difficulty challenges was to build an Ether price discovery service using the Fluence stack. In fact, the challenge seemed quite easy, as most of the required parts had already been implemented and showcased during one of Fluence’s workshops. Therefore, I took the initiative of stepping this up a notch and implementing an Ether price aggregator across DEXes (Decentralized Exchanges) rather than only fetching the Ether price.

Assuming that this is a new technology for you (as it was for me at the time of the hackathon), I have composed a couple of FAQs below to cover some fundamental background knowledge.

What is Fluence?

Fluence (by Fluence Labs) is an Open Application Platform. To better understand the term, let’s first clarify what an Open Platform is.

In layman’s terms, an Open Platform is a software system built on published/documented programming interfaces and standards to enable open external development.

That is, allowing external parties to build on top of the aforementioned platform. Referring back to the initial term, an Open Application Platform allows the open development of applications that conform with the standards that the platform dictates.

An example of such a platform is Heroku. It enables the deployment of an application (assuming it has been built with one of the supported frameworks) on the platform, which can then be accessed by everyone else.

In Fluence’s case however, there are some characteristics which make it stand out amongst other application platforms. Quoted directly from the official Fluence page these are:

  1. Decentralized: Build with confidence as access can’t be disabled or integrations broken.
  2. Composable: Reuse code and data to build new features on top of existing applications.
  3. Accelerated: Radically faster path to market by abstracting complex backends.
  4. Rewarding: Apps and low-level components will be monetized by their usage. Build popular elements and earn.

I must admit that the first two characteristics (Decentralized, Composable) were the ones that captivated me, given that the rest can be achieved only if the first two are in place. For the sake of keeping this post simple, I will just note that a far more thorough vision of why these properties are critical elements of future application development can be found in the official manifesto.

What comprises the Fluence stack?

The Fluence stack consists of four fundamental layers: Execution, Composition, Topology and Security & Accounting. Below is a table summarizing the technologies for each one of those layers.

Source: https://github.com/fluencelabs/fldist. Due to the rapid development of the Fluence stack, keep in mind that the figure is slightly outdated.

In addition to those layers, Fluence Labs has developed supporting tools such as a local Docker node/development environment, an interactive REPL, and Fluence-Distributor, all of which support Fluence development.

Note: FCE has been renamed to Marine.

Unfortunately, however, at the time of writing this post the last layer (Security & Accounting) is still under development; therefore this tutorial covers only the first three layers.

What is a Particle?

Intuitively, the Fluence stack closely resembles a microservice architecture where each service has a single responsibility and no external dependencies. This is also referred to as a shared-nothing architecture.

To distil the meaning of a Particle, we first need to understand communication in a distributed system. Normally, a set of microservices communicates via some protocol. Whether that is HTTP, gRPC or something else, in the case of a chained interaction it is up to each and every service to perform its part of the sequence so that the initial purpose is fulfilled. In the Fluence stack, this is abstracted away from the services using the Aquamarine Intermediate Representation (AIR). In this sense, the Fluence stack allows you to program (choreograph) service interaction via AIR scripts! This is an extremely important point, so take some time to digest it!

To make it clearer let’s see an example and compare it with a conventional distributed microservice architecture.

Consider the scenario of a modern e-shop application.

To build the e-shop system, four microservices can be designed that interact with each other to achieve the desired functionality. These are:

  • User service: Manages User’s account.
  • Inventory Service: Manages the inventory of the products (Available quantity etc…)
  • Order service: Manages customer’s orders.
  • Shipping Service: Manages anything that has to do with shipping. (Scheduling, tracking etc…)

When a user places an order, the client interacts with the Order Service through a protocol (usually HTTP) to place the order. In turn, the Order Service checks with the User Service that the user is indeed authenticated, then checks with the Inventory Service to ensure the availability of the item and deduct the quantity from the available stock, and finally instructs the Shipping Service to schedule a shipment and notify the customer. Note that throughout this communication chain, the business logic of each of those services is responsible for responding to the Order Service.

As you can see, there is a chain of events happening which is dictated by each service individually. If in the future I would like the Inventory Service (instead of the Order Service) to interact with the Shipping Service directly, then I would have to modify both the Inventory and Order services.

Now let’s imagine the same scenario built on top of the Fluence stack. In this case, the client would utilise an AIR script (in which, by the way, you can control authentication) which choreographs a network request that dictates which services are called, in what order, which functions are invoked and where the result is returned! Considering the change described above, only a minor modification to the AIR script would be required to achieve it!
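To make this concrete, here is a rough sketch of what such a choreography could look like in AIR. Everything below is hypothetical and invented purely for illustration; the service names, function names and variables do not come from a real system:

```
; Hypothetical choreography for the e-shop order flow (all names are made up):
; authenticate the user, reserve stock, then schedule the shipment, in sequence.
(seq
    (call node (user_service "authenticate") [user_id] auth_result)
    (seq
        (call node (inventory_service "reserve") [item_id quantity] reservation)
        (call node (shipping_service "schedule") [reservation] shipment)
    )
)
```

Rerouting the flow, for example having the Inventory Service trigger shipping directly, would only mean reordering these call expressions, with no service redeployment required.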

Referring back to the initial question, this programmed network request is called a Particle. A more technical definition would be: a conflict-free replicated data structure containing (application) data + script + metadata, evaluated and executed on the Aqua VM, itself a Wasm module running on Marine.

It is also important to mention that Fluence Labs has also been developing Aqua, a higher-level programming language that compiles down to (the lower-level) AIR.

Ether DEX Price Aggregator Tutorial

The remaining parts of the post will walk you through the development of the application. This includes a simple Fluence service that performs curl requests and an Angular 11 front end.

Screenshot of the application

The full code of the project is provided in the blogpost branch here: https://github.com/apogiatzis/fluence-ether-dex-price

For the sake of keeping the article short and concise, I will only reference the important parts of the code. You are therefore encouraged to refer back to the GitHub repository while reading to get a better understanding of the code.

Requirements

First things first, let’s define the scope of our DEX Price aggregator. There should be a user interface that accepts the following inputs:

  1. Token ticker (DAI, USDT, REN etc…)
  2. Action (Buy or Sell)

Upon a price query, the frontend must send a particle to a Fluence service that queries Ether DEX prices based on the selected token and action. A ready-made HTTP API for aggregating Ether prices across various DEXes is provided by AirSwap here. Therefore, it makes sense not to reinvent the wheel and just use that directly. Note that it would have been completely feasible (and more challenging) to query the exchanges within the Fluence service rather than using the API.

Finally, the results of the Fluence service are required to be displayed intuitively to the user via a table.

Development

Before beginning the development of the application, the environment must be set up properly. Initially, ensure that Docker, NodeJS (at least v14) and Rust are installed on your system. Then, install the Marine compiler and the Marine REPL to assist with development. To do so, you can use Rust’s package manager, cargo:

  • cargo install mrepl
  • cargo install marine

Fluence Labs also provide an already set-up development Docker container here; however, I would encourage you to follow along and set up the environment yourself in order to acquire a better understanding of Fluence’s tooling.

With all the above, you are set to jump into coding!

The folder structure illustrated below is opinionated, but I feel that it helps keep things organized. Feel free to adopt a different structure for other Fluence projects, but I would recommend following along for this one.

- project_name
|--- ng-app # Angular application root folder
| |--- ....
|
|--- services # Fluence services
| |--- curl # curl Fluence service root folder
| |--- .... # other services...
|
|--- artifacts
| |--- *.wasm # Built wasm modules
|
|--- app-<env>.config.json # environment Fluence app config
|--- repl.toml # Marine REPL config
|--- package.json # defines root dependencies and scripts
|--- package-lock.json # Auto generated
|--- README.md

Start off by creating a folder and initialising a NodeJS project: npm init

Install fldist as a root dependency: npm install --save fldist. Fluence-Distributor (fldist for short) is yet another Fluence tool which assists with the deployment of services and apps on the Fluence network. It could have been installed globally; however, when installed as part of the root project dependencies, anyone downloading the project can view, install and use the correct Fluence-Distributor version.

What is more, we can add the following script aliases to the root package.json file to aid development:

"scripts": {     "fluence-build": "rm -rf artifacts && for D in services/*; do if [ -d \"${D}\" ]; then (echo \"[+] Compiling ${D}\" && cd \"${D}\" && marine build --release && mkdir -p ../../artifacts/ && cp t arget/wasm32-wasi/release/*.wasm ../../artifacts/); fi; done",     "fluence-deploy": "fldist deploy_app --env ${FLUENCE_ENV} -i app-${FLUENCE_ENV}.config.json -o ng-app/src/fluence/deployment.${FLUENCE_ENV}.json",     "fluence-node": "docker run --rm --name fluence_node -e RUST_LOG=\"info\" -p 1210:1210 -p 4310:4310 -d fluencelabs/fluence:latest -t 1210 -w 4310 -k gKdiCSUr1TFGFEgu2t8Ch1XEUsrN5A2UfBLjSZvfci9SPR3NvZpACfcpPGC3eY4zma1pk7UvYv5zb1VjvPHwCjj --local",     "fluence-node-peerid": "docker logs fluence_node 2>&1 | grep \"server peer id\" | rev | cut -d' ' -f1 | rev"}

Find a description for each script below:

  • fluence-build : Builds any Fluence services that reside in the services folder and dumps the wasm artifacts to the artifacts folder.
  • fluence-deploy : Looks for an app-<env>.config.json configuration file and deploys the defined app on Fluence network. Environment is set by the FLUENCE_ENV variable.
  • fluence-node : Runs a local docker node for local testing.
  • fluence-node-peerid : Retrieves the peer id of a running local node.

Fluence curl service

Hopefully you have grokked the nuances of the proposed folder structure by now, so let’s kick off with building the Fluence curl service! To make things clearer, it is essential to clarify that a Service is a collection of one or more WebAssembly modules. In this case, the service comprises only one module; therefore you might see the terms service and module used interchangeably throughout the post.

Create a folder named curl in the root services folder and run cargo init inside it. This should create a child src folder and a Cargo.toml file. All you need to know is that this is the basic folder structure of a Rust package.

Add another file config.json in the curl folder which contains the following:
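As a rough sketch, such a config could look like the following. The field names here are an assumption based on the mountedBinaries mechanism explained next; cross-check the GitHub repository for the exact file, since the schema has varied between Fluence tooling versions:

```json
{
  "name": "curl",
  "mountedBinaries": {
    "curl": "/usr/bin/curl"
  }
}
```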

This file defines the configuration of the service. For clarification, mountedBinaries here just provides access to /usr/bin/curl from within the Fluence service.

This is necessary because WebAssembly does not have any built-in IO capabilities therefore we wouldn’t be able to perform an HTTP request otherwise.

A great introduction on how services are configured and work can be found here: https://www.youtube.com/watch?v=om038Mnzj_w

Moving on, replace the contents of the src/main.rs file with this:
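As a hedged sketch of that module, the following reconstructs the two functions discussed next on top of the marine_rs_sdk crate and the mounted curl binary. Treat the exact crate name, argument handling and return type as assumptions; the authoritative version lives in the GitHub repository:

```rust
// Sketch only: based on the #[marine] macro and mounted curl binary described
// in this post; verify the details against the repository.
use marine_rs_sdk::marine;
use marine_rs_sdk::MountedBinaryResult;

pub fn main() {}

// Perform an HTTP GET by shelling out to the mounted curl binary.
#[marine]
pub fn get_request(url: String) -> MountedBinaryResult {
    curl(vec![url])
}

// Perform an HTTP POST with a request body.
#[marine]
pub fn post_request(url: String, data: String) -> MountedBinaryResult {
    curl(vec!["-X".into(), "POST".into(), "-d".into(), data, url])
}

// Import of the host-mounted curl binary declared in config.json.
#[marine]
#[link(wasm_import_module = "host")]
extern "C" {
    fn curl(cmd: Vec<String>) -> MountedBinaryResult;
}
```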

Pay attention to how curl is utilized to create the post_request and get_request functions, which are exposed to the Fluence service using the #[marine] macro!

Well done! You have just coded a Fluence service.

Let’s go ahead and test it out! Recall that we created some shortcut scripts in the package.json file. It’s time to put them to use!

To test the service using the Marine REPL, we need to build it first. Navigate back to the root folder; npm run fluence-build will iterate over all folders in the services folder and attempt to build them as Fluence services. All build artifacts (WASM modules) will be dumped to the artifacts folder.

The final thing before we start testing is to configure our REPL, that is, to specify which modules to load and where to find them. Create a repl.toml file in the project’s root folder and add the following:
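A minimal repl.toml could look roughly like this. The schema shown is an assumption based on the mrepl version available at the time, so verify it against the repository:

```toml
modules_dir = "artifacts/"

[[module]]
    name = "curl"

    [module.config]
    mounted_binaries = { curl = "/usr/bin/curl" }
```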

Now run the REPL by executing marine repl repl.toml. You should be greeted with the message below:

This is the REPL with the curl module already loaded. Keep in mind that in a multi-service scenario, the repl.toml file can be extended with more modules so that all of them are loaded at once.

Run interface curl while being in the REPL to get a list of the available functions of the module:

From the REPL, we can call module functions directly, like so:

call <module> <function> [<params>]

Carry on with testing the get_request function:

call curl get_request "https://random-data-api.com/api/bank/random_bank?size=3"

Works like a charm! Consider testing post_request an exercise for the reader. Go ahead and try it!

Deploy locally

The next step is to deploy the Fluence service locally so that it can be called by the front end Angular application through an AIR script.

Ensure that you have docker installed and launch a local Fluence node by executing: npm run fluence-node

Fetch the Peer ID of the newly created node by running npm run fluence-node-peerid.

Recall that all npm run ... scripts are defined in package.json. Feel free to refer back to this if you are curious about the exact bash commands that these aliases correspond to.

To let fldist know where to deploy the service, you must create an application config. You can have one application config per environment. Create an app-local.config.json file with the format shown below, where <peer-id> is the peer ID of the node to deploy to.
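For illustration, such an app config could be shaped roughly as follows, with <peer-id> substituted accordingly. The exact schema expected by fldist deploy_app is an assumption here; the repository contains the authoritative file:

```json
{
  "services": [
    {
      "name": "curl",
      "node": "<peer-id>",
      "modules": [
        {
          "name": "curl",
          "path": "./artifacts/curl.wasm",
          "config": "./services/curl/config.json"
        }
      ]
    }
  ]
}
```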

Your local Fluence node peer ID should be:

12D3KooWKEprYXUXqoV5xSBeyqrWLpQLLH4PXfvVkDJtmcqmh5V3

This is fixed because, when launching the local node, the private key is given as part of the docker container creation command (see package.json).

With your Fluence node ready to host the curl Fluence service, move on to rebuilding the curl service: npm run fluence-build.

Now before running the npm run fluence-deploy command it is insightful to inspect the actual command behind this alias:

fldist deploy_app --env ${FLUENCE_ENV} -i app-${FLUENCE_ENV}.config.json -o ng-app/src/fluence/deployment.${FLUENCE_ENV}.json

As you can see, the environment variable FLUENCE_ENV is used to specify the environment and find the corresponding app configuration file; in addition, the JSON file with the deployment details will be written to ng-app/src/fluence/deployment.<FLUENCE_ENV>.json. This file is picked up by the Angular application so that the Fluence client knows the correct node and service ID (see ng-app/src/environments/.. )

At this stage, you should download the repository with the full code and copy the ng-app folder into your project folder. This avoids spending time building the UI, which is outside the scope of this blog post, and it also ensures that the destination folder for the deployment artifacts exists. Now you are all set to deploy the Fluence service using:

FLUENCE_ENV=local npm run fluence-deploy

When the deployment is finished, you should see something like this:

Store the service-id (the curl field) somewhere, because it will be used further below.

Congratulations! You have deployed your Fluence service!

Testing AIR scripts

With this in hand now you can test an AIR script (which is how the frontend queries the service) to see the whole Fluence stack in action!

Review the following AIR script:
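The script itself is best read from the repository; as a hedged reconstruction of its shape, it could look roughly like this. The node variable and the returnService/run callback names are assumptions based on the fldist conventions of the time; only the service variable is confirmed by the run command further below:

```
(xor
    (seq
        (call node (service "get_request")
            ["https://ethereum-dex-prices-service.production.airswap.io/sell?amount=1&symbol=DAI&decimals="]
            result)
        (call %init_peer_id% ("returnService" "run") [result])
    )
    (call %init_peer_id% ("returnService" "run") [%last_error%])
)
```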

For functional programming enthusiasts, the S-expression syntax may look familiar from LISP, Clojure etc… For the rest, it may seem confusing at first, but it is actually pretty straightforward. In layman’s terms, the above script attempts to call get_request with the URL “https://ethereum-dex-prices-service.production.airswap.io/sell?amount=1&symbol=DAI&decimals=” as a parameter, on the service whose id is set by the service variable, and returns the result. In the case of an error, it returns the error instead. For more information about AIR, please refer to the documentation.

Save the above AIR script in a file called req.air and test it against the deployed service using fldist :

fldist run_air -p req.air -d '{"service": "07e91bc8-273e-4fdf-9a58-4409a292746a"}' --env local --node-id 12D3KooWKEprYXUXqoV5xSBeyqrWLpQLLH4PXfvVkDJtmcqmh5V3

The command above utilises the run_air subcommand to run the given AIR script against the service with id 07e91bc8-273e-4fdf-9a58-4409a292746a, hosted on the node with id 12D3KooWKEprYXUXqoV5xSBeyqrWLpQLLH4PXfvVkDJtmcqmh5V3 (which is our local node).

A response similar to the one below should be received:

[
{
"error": "",
"ret_code": 0,
"stderr": "",
"stdout": "[{\"Bamboo Relay\":{\"exchangeName\":\"Bamboo Relay\",\"totalPrice\":0.0008333333333333334,\"tokenAmount\":\"1\",\"tokenSymbol\":\"DAI\",\"avgPrice\":0.0008333333333333334,\"timestamp\":1624287973500,\"error\":null}},{\"Kyber\":{\"exchangeName\":\"Kyber\",\"totalPrice\":0.000500727994965449,\"tokenAmount\":1,\"tokenSymbol\":\"DAI\",\"avgPrice\":0.000500727994965449,\"timestamp\":1624287973339,\"error\":null}},{\"Uniswap\":{\"exchangeName\":\"Uniswap\",\"totalPrice\":0.0004982218341102291,\"tokenAmount\":\"1\",\"tokenSymbol\":\"DAI\",\"avgPrice\":0.0004982218341102291,\"timestamp\":1624287973701,\"error\":null}},{\"Switcheo\":{\"exchangeName\":\"Switcheo\",\"totalPrice\":0.00035138145619503076,\"tokenAmount\":\"1\",\"tokenSymbol\":\"DAI\",\"avgPrice\":0.00035138145619503076,\"timestamp\":1624287973491,\"error\":null}},{\"Eth2Dai\":{\"exchangeName\":\"Eth2Dai\",\"totalPrice\":0.000250187640730547,\"tokenAmount\":\"1\",\"tokenSymbol\":\"DAI\",\"avgPrice\":0.000250187640730547,\"timestamp\":1624287973767,\"error\":null}},{\"Ethfinex\":{\"exchangeName\":\"Ethfinex\",\"timestamp\":1624287973364,\"tokenSymbol\":\"DAI\",\"tokenAmount\":\"1\",\"error\":\"no price data found on Ethfinex for DAI\"}},{\"Bancor\":{\"exchangeName\":\"Bancor\",\"timestamp\":1624287975588,\"error\":\"Error: ESOCKETTIMEDOUT\",\"tokenAmount\":1,\"tokenSymbol\":\"DAI\"}},{\"IDEX\":{\"exchangeName\":\"IDEX\",\"timestamp\":1624287972614,\"tokenSymbol\":\"DAI\",\"tokenAmount\":\"1\",\"error\":\"no price data found on IDEX for DAI\"}},{\"DDEX\":{\"exchangeName\":\"DDEX\",\"timestamp\":1624287974841,\"tokenSymbol\":\"DAI\",\"tokenAmount\":\"1\",\"error\":\"no price data found on DDEX for DAI\"}},{\"Radar Relay\":{\"exchangeName\":\"Radar Relay\",\"timestamp\":1624287972593,\"tokenSymbol\":\"DAI\",\"tokenAmount\":\"1\",\"error\":\"no price data found on Radar Relay for DAI\"}},{\"Saturn Network\":{\"exchangeName\":\"Saturn 
Network\",\"timestamp\":1624287977826,\"tokenSymbol\":\"DAI\",\"tokenAmount\":\"1\",\"error\":\"no price data found on Saturn Network for DAI\"}},{\"AirSwap\":{\"exchangeName\":\"AirSwap\",\"timestamp\":1624287977674,\"error\":\"No quotes returned by any AirSwap maker\",\"tokenSymbol\":\"DAI\",\"tokenAmount\":\"1\"}},{\"Forkdelta\":{\"exchangeName\":\"Forkdelta\",\"timestamp\":1624287979173,\"tokenSymbol\":\"DAI\",\"tokenAmount\":\"1\",\"error\":\"no price data found on Forkdelta for DAI\"}}]"
}
]

The above confirms that the AIR script works as intended!

Run the Angular application

It is time to see the whole application in action by running the Angular application. To keep the post short I won’t be getting into how to build the Angular frontend. However, you should already have everything you need if you copied the ng-app folder over, as described in the steps above.

In a nutshell the Angular application uses an AIR script similar to the one you tested out earlier and sends a particle to the specified Fluence node and service id (based on the environment) using the Fluence JavaScript SDK.
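In sketch form, the core of that interaction could look roughly as follows. The API names (createClient, Particle, sendParticleAsFetch) and the shape of the deployment file are assumptions based on the 2021-era @fluencelabs/fluence JS SDK; check the version pinned in ng-app/package.json and the actual code in ng-app before relying on any of this:

```typescript
// Sketch only: API names and deployment-file fields are assumptions; the
// authoritative code lives in the ng-app folder of the repository.
import { createClient, Particle, sendParticleAsFetch } from '@fluencelabs/fluence';
import * as deployment from '../fluence/deployment.local.json';

const AIR_SCRIPT = `...`; // the AIR script tested earlier with fldist

async function queryPrices(): Promise<string> {
  // Connect to the node recorded by the fluence-deploy script.
  const client = await createClient(deployment.node);

  // Build a particle: AIR script + data (the deployed curl service id) + TTL.
  const particle = new Particle(AIR_SCRIPT, { service: deployment.curl }, 20000);

  // Send the particle and await the value the script hands to its callback.
  const [result] = await sendParticleAsFetch<[string]>(client, particle, 'run');
  return result;
}
```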

To launch the Angular application, navigate to the ng-app folder, run npm install to install all NodeJS dependencies, and then:

npm start -- --configuration=local

Wait until the development server is up and running and visit http://localhost:4200/

You should see a simple interface with a form to search for Ether prices against a specific token and action. Try it out and verify that it works!

Conclusion

That’s all from me, folks! Although this is a very simple project, the possibilities with the Fluence stack are endless. In future posts I will try to demonstrate more advanced concepts, like a multi-service application and authentication. Feel free to fork, extend or experiment with the project, and please leave any feedback or questions that may arise in the comments below this post! 👍
