Building autonomous applications on the Fluence network

Joera Mulders
9 min read · Oct 29, 2021


In this post I would like to explain why I chose to build on the Fluence network. A few weeks ago I began programming with Fluence during a Gitcoin hackathon. It was a success and I have been making more progress since. It will take a while before I understand enough to write an authoritative text on the subject, so let me just go ahead and write up several preliminary thoughts and experiences. Please correct me in the comments when I err.

Computation networks in the decentralised ecosystem

There is a new world, or perhaps a new dimension, taking shape, in which humans coordinate their actions through decentralised protocols: applications that are controlled by a set of smart contracts which in turn can be governed by communities of people. The rules are programmed by humans. Decisions are made by humans. As humans we do not lose power vis-à-vis machines; rather, we create new environments where the distribution of power between humans is more transparent and predictable. There will be very hierarchical communities, there will be very egalitarian communities, and much in between. Their nature and power distributions, however, will be clear, and people can choose what kind of community they want to join for different aspects of their lives.

We now see the first of these protocols emerging in our lives: blockchains, decentralised exchanges and other DeFi services built on top of them.

Many of the enabling technologies, however, like the large API providers and analytics firms that power the exchanges, are built as centralised companies. I am not disparaging their current services. Without them we would not be able to trade or mint NFTs. In time, however, my expectation is that the larger decentralised protocols will come to prefer to cooperate with services that are similar in nature: decentralised and governed by smart contracts.

For us to build these services we need so-called computation networks that are more programmable than the blockchain, while not breaching the trustless nature we have come to expect. There is no need to be as secure as a blockchain. It's about being secure enough to perform computations complementary to blockchains.

For example, when a smart contract delegates a computation task to be run, it does not need to run the task multiple times and compare outcomes. It only needs to be sure that the code being run is authentic and that the published results cannot be replaced with something altered.

There are several decentralised computation layer projects that promise to fill this space. Over the past years I have been reading up on several of them, but I never got the impression that the code was ready for me to work with. For many, the proposition is to rent out a decentralised network of machines to perform tasks like rendering for video editing software. This may very well become a commercial success, but I am not looking for a network that is designed around a specific use case; I want one that will permit me to build from my imagination. I do not want to start single computation tasks from my computer. I envision a system of tasks running permanently and autonomously on a network.

So then a few weeks ago I stumbled upon the Fluence network.

Discovering the Fluence network

I had just written my first application in Rust: a data aggregation pipeline. And now I was curious to find out how much of that Rust code would actually compile to wasm. Wasm I understand as an efficient, portable unit of code able to run on very small, low-cost machines. Or on your refrigerator…

Having worked as a web developer for many years, I tend to be put off by standard project websites with their generic promises and partnerships. fluence.network has a clean design, and there is a large 'getting started' button that leads straight into the documentation. The documentation starts with a lot of explanation of the p2p and network-related concepts, which I set aside as interesting for later.

What mattered to me is that without much fuss I could build a service in Rust, compile it to wasm, deploy it on a local node and see what happens; that is, figure out what my next step should be.

Building services with Marine

Initially, the most relevant part of Fluence's marine_rs_sdk crate is a macro that exposes a Rust function of your service as a public method on the node. In other words: you can call it after connecting to the node. When you are building or debugging a single service you can use the Marine REPL command-line tool, mrepl, which mimics a node and prints your println!() output in the terminal.

Exposing methods of a service with the marine macro
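A minimal sketch of such a facade, following the standard Fluence greeting template (the function name and body are illustrative):

```rust
use marine_rs_sdk::marine;
use marine_rs_sdk::module_manifest;

module_manifest!();

pub fn main() {}

// The #[marine] macro exposes this function as a public method of the
// service once the compiled wasm module is deployed on a node.
#[marine]
pub fn greeting(name: String) -> String {
    format!("Hi, {}!", name)
}
```

After deployment you can call greeting through the node, or load the compiled module in mrepl and call it from the REPL.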

You will find out that many of the more complex Rust crates do not compile to wasm, or are not compatible with Marine because of a pinned serde version (the latter will probably be fixed). The ethers or web3 libraries, for example, will not compile.

The positive aspect of this is that the Rust code you write can be rather simple. As a mostly frontend developer coming from TypeScript, I did not have to work with the more rusty design patterns like traits and implementations. The code can be very linear. Everything is synchronous. And services, of course, should be rather small by design anyhow. If I can do it, chances are that you can do it better.

So, then — without ethers or web3 — how can I interact with Ethereum?

Your service can consist of multiple submodules, with the one that exposes methods with the marine crate generally being called the facade. There is a Rust submodule written by the Fluence team that enables curl operations within your service. This curl adapter has become my Swiss army knife, or perhaps the hammer for everything that may or may not look like a nail.
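A sketch of how a facade links against that adapter, modelled on Fluence's url-downloader example (the module name and the fetch wrapper are illustrative):

```rust
use marine_rs_sdk::marine;
use marine_rs_sdk::MountedBinaryResult;

// Import the curl_request function exported by the curl adapter
// submodule; under the hood it shells out to the host's curl binary.
#[marine]
#[link(wasm_import_module = "curl_adapter")]
extern "C" {
    fn curl_request(cmd: Vec<String>) -> MountedBinaryResult;
}

// A thin wrapper: fetch a URL and return the response body as a string.
#[marine]
pub fn fetch(url: String) -> String {
    let result = curl_request(vec![url]);
    String::from_utf8(result.stdout).unwrap_or_default()
}
```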

Learning how to construct requests to the JSON-RPC APIs of Ethereum and IPFS is a bit cumbersome at first, but eventually it helped me a lot to better understand these technologies themselves. Libraries tend to abstract away much of the cryptographic data transformations. Libraries also create constructs, like instances of contracts and wallets, that are easy to use but also obfuscate the actual process you are programming.

You can still use individual typings from these libraries. I use the typings from the ethers-core crate to construct eth_sendRawTransaction and eth_call requests for the Ethereum JSON-RPC API. For the input data argument specific to the contract being called I use the ethabi crate. The graphql_client crate can be used to fetch data from a subgraph. For the interactions with IPFS I have used the /api/v0/dag endpoints of the HTTP API, which handle the encoding and decoding on the IPFS side.
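As an illustration, here is roughly how an eth_call can be pieced together: encode the input data with ethabi, wrap it in a JSON-RPC body and send it through the curl adapter. This is a hedged sketch; the ABI fragment is simplified, and it re-uses the curl_request import shown above.

```rust
use marine_rs_sdk::marine;
use ethabi::{Address, Contract, Token};

// A minimal ABI fragment for balanceOf(address) — illustrative only.
const ERC20_ABI: &str = r#"[{"name":"balanceOf","type":"function",
"inputs":[{"name":"owner","type":"address"}],
"outputs":[{"name":"","type":"uint256"}]}]"#;

// Encode the calldata for balanceOf(owner) with the ethabi crate.
fn encode_balance_of(owner: &str) -> String {
    let contract = Contract::load(ERC20_ABI.as_bytes()).unwrap();
    let function = contract.function("balanceOf").unwrap();
    let bytes = hex::decode(owner.trim_start_matches("0x")).unwrap();
    let data = function
        .encode_input(&[Token::Address(Address::from_slice(&bytes))])
        .unwrap();
    format!("0x{}", hex::encode(data))
}

// Wrap the calldata in a JSON-RPC eth_call and send it with curl.
#[marine]
pub fn balance_of(rpc_url: String, contract: String, owner: String) -> String {
    let body = format!(
        r#"{{"jsonrpc":"2.0","id":1,"method":"eth_call","params":[{{"to":"{}","data":"{}"}},"latest"]}}"#,
        contract,
        encode_balance_of(&owner)
    );
    let result = curl_request(vec![
        rpc_url,
        "-X".into(), "POST".into(),
        "-H".into(), "Content-Type: application/json".into(),
        "-d".into(), body,
    ]);
    String::from_utf8(result.stdout).unwrap_or_default()
}
```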

Do you see how you can start piecing together data flows from multiple protocols?

After some puzzling I had built services for all the individual tasks I need to perform.

Preparing a raw transaction for Ethereum
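The final submission step then looks roughly like this (a sketch; it assumes the transaction has already been RLP-encoded and signed with help from the ethers-core typings, and re-uses the curl_request import from earlier):

```rust
use marine_rs_sdk::marine;

// Submit a signed, RLP-encoded transaction to an Ethereum JSON-RPC
// endpoint. `signed_tx` is the 0x-prefixed hex blob produced by the
// signing step.
#[marine]
pub fn send_raw_transaction(rpc_url: String, signed_tx: String) -> String {
    let body = format!(
        r#"{{"jsonrpc":"2.0","id":1,"method":"eth_sendRawTransaction","params":["{}"]}}"#,
        signed_tx
    );
    let result = curl_request(vec![
        rpc_url,
        "-X".into(), "POST".into(),
        "-H".into(), "Content-Type: application/json".into(),
        "-d".into(), body,
    ]);
    String::from_utf8(result.stdout).unwrap_or_default()
}
```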

Connecting services with aqua

Then it was time to piece them together. On the Fluence network you can do that with the Aqua language. With Aqua you can program sequences of services on a p2p network and distribute tasks over multiple nodes. You write the code in Aqua and compile it to the .air format, which you can run on an "entry" node in the network.

By that time I had spun up a single local node using the fluencelabs/node docker image. The first thing to do was to deploy my service, the compiled wasm code, on my node.

Eventually I will want to use other nodes in the network, so I cannot assume that a service is already present on a node. Rather, I want the Aqua code, the "task distributor" in my words, to deploy a service, run it and then remove it again.

An Aqua function deploying a service, running methods on it and removing the service
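The shape of such a function, using the Srv declarations from aqua-lib's builtin.aqua, is roughly the following sketch; MyService and run() stand in for your own service interface, and the module and blueprint registration from the vault is omitted:

```aqua
import "@fluencelabs/aqua-lib/builtin.aqua"

-- The interface the deployed wasm module is expected to expose
-- (illustrative; replace with your own service's methods).
service MyService:
    run() -> string

-- Create a service from a blueprint, call it, then remove it again,
-- all on the same node.
func run_once(node: string, blueprint_id: string) -> string:
    on node:
        service_id <- Srv.create(blueprint_id)
        MyService service_id
        result <- MyService.run()
        Srv.remove(service_id)
    <- result
```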

My current workflow is that I have extended the shell script that compiles the Rust code to wasm with a curl upload to an IPFS node. In Aqua, the task distributor fetches the wasm code from that IPFS node. Aqua has bindings to IPFS on your Fluence node that work for the general add and get methods, meaning that you can upload and download files, not arbitrary data. So, using the bindings, you can fetch the wasm file from your remote IPFS node. The wasm code will be stored in a local /tmp folder on your Fluence node called the "vault".
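That shell script is nothing more exotic than this sketch (host name and paths are placeholders):

```bash
#!/usr/bin/env bash
# Compile the Rust service to wasm, then push the artifact to an IPFS
# node over its HTTP API. The returned content hash is what the Aqua
# task distributor later fetches into the Fluence node's vault.
marine build --release
curl -X POST \
  -F file=@target/wasm32-wasi/release/my_service.wasm \
  "http://my-ipfs-node:5001/api/v0/add"
```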

Because we effectively deploy a service from a content hash, we can be sure that the code has not been tampered with. Furthermore, imagine that we have a smart contract governed by a DAO, which states which content hash to use for which service. That DAO can then vote on implementing an update of the code. And since we re-deploy the service for each task, such an update would take effect the next time the task is run.

After a while I was able to have Aqua/AIR run the complete process I had initially programmed in Rust on a node, split up into individual wasm services. Yay!

Making it autonomous

The next challenge was to keep my process running. With the Script functions you can have an Aqua/AIR task distributor run another compiled Aqua/AIR task distributor on an interval. My project, for example, has a service that uses eth_getFilterChanges to poll the blockchain for an event on a specific contract at an interval of 60 seconds. When a transaction takes place and an event is fired on the smart contract, the Fluence polling service will (eventually) notice the event and run the task.

An Aqua function that runs other compiled Aqua functions at intervals
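In outline it comes down to handing the compiled .air text to the node's built-in Script service (a sketch against the declarations in aqua-lib's builtin.aqua; check the signatures of the version you are using):

```aqua
import "@fluencelabs/aqua-lib/builtin.aqua"

-- Register a compiled .air script on a node so that the node itself
-- re-runs it on an interval (e.g. "60" for every 60 seconds). The
-- air_script argument is the compiled output of the task distributor.
func schedule(node: string, air_script: string, interval: ?string) -> string:
    on node:
        script_id <- Script.add(air_script, interval)
    <- script_id
```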

I found the Aqua language in itself not to be that difficult to learn. The examples in this repository are very helpful. I heavily re-use code from the builtin.aqua and process.aqua files. In general I found that it was most productive to tell the team in the Telegram chat what I wanted to achieve and ask which of the examples I should take a look at.

The tediousness of programming with Aqua comes with debugging your code. In your console you can only see the data that is returned by the Aqua process. So yes, you can pass data on, but it is time-consuming to keep adding and removing arguments and typings to functions. I did learn to read the logs from the node in your docker container, but there is, as far as I know, no way to add your own custom logs.

I should add that in many use cases you may have a front-end in the browser. In such a case you will use Aqua with TypeScript, and debugging could perhaps be easier. I have not tried it.

Another reason for custom logging is the need to acquire a better understanding of the p2p architecture. In my project, for example, I will want to limit the use of services to a single process. Within the wasm code, the marine crate gives you the get_call_parameters function, which provides you with information about the larger process in which your single service is performing a task. Which peer (node) initiated the process? Which peer deployed this specific service? What is the id of the particle (a data flow)?

I really want to log this struct, so I can see and imagine how I can design for security. The solution I see now is to add, while I am developing, a curl call to a logging service like Elasticsearch within all of my services.
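A sketch of that workaround: read the call parameters inside the service and POST them, through the curl adapter, to a logging endpoint (the Elasticsearch URL and index are hypothetical, for development only):

```rust
use marine_rs_sdk::{get_call_parameters, marine};

// Report who initiated the particle and which peer deployed this
// service by shipping the call parameters to an external log store.
// Re-uses the curl_request import from the curl adapter section.
#[marine]
pub fn log_context() {
    let cp = get_call_parameters();
    let body = format!(
        r#"{{"init_peer_id":"{}","service_creator":"{}","particle_id":"{}"}}"#,
        cp.init_peer_id, cp.service_creator_peer_id, cp.particle_id
    );
    let _ = curl_request(vec![
        "http://my-logging-host:9200/fluence-logs/_doc".into(),
        "-X".into(), "POST".into(),
        "-H".into(), "Content-Type: application/json".into(),
        "-d".into(), body,
    ]);
}
```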

Summarising …

Well, that is about it for now. When I get a better understanding of the network, I will write another post. I should also write a post on the project I am building. It is called the Public Record DAO.

To succinctly answer the question I asked myself at the start of writing: I have come to enjoy programming on the Fluence network because it doesn't tell me how to use it. Fluence provides me with the tools that I need to program the applications I have been imagining for some time now: autonomous applications that securely conduct data flows between other decentralised protocols.

Interestingly, the other developers in the community calls have very different ideas and visions. If you want to understand how powerful an open, un-opinionated platform can become, look at Ethereum.

To be continued.
