Building the Future: A Guide to Decentralizing Your Platform using Swarm

Aman Bishnoi
Published in videowiki.pt
12 min read · Sep 1, 2023

Hello, I’m Aman Bishnoi, a full stack developer at Videowiki. In today’s digital age, where data is often described as the “new oil,” the concept of decentralization has become more critical than ever before. As we navigate an increasingly interconnected world, the need for a shift in the way we handle and distribute data has become abundantly clear. In this blog post, we’ll delve into the world of decentralization, exploring why it’s essential, and how technologies like Swarm and FairDrive, which we’ve embraced at Videowiki, are leading the charge in transforming centralized platforms into more democratic, secure, and user-centric systems. Join me on this journey as we uncover the power and potential of decentralization in reshaping the digital landscape.

Decentralized Storage and the Advantages of Swarm Protocol

Decentralized storage, a method of distributing data across a network of nodes instead of a single location, offers enhanced security, privacy, resilience, and cost-effectiveness compared to centralized storage solutions. This approach ensures that even if certain nodes fail or are compromised, data remains accessible. The use of decentralized storage has gained momentum due to its potential to mitigate censorship and downtime risks.

Benefits of Decentralized Storage

  1. Security: Avoids single points of failure, increasing resistance against hacks or data loss.
  2. Privacy: Distributed nature makes tracking and control of data access more challenging, enhancing privacy.
  3. Resilience: Reduced vulnerability to censorship or downtime attempts due to data distribution.
  4. Cost-effectiveness: Shared storage costs across network nodes can be more economical for large datasets.

Swarm Protocol

Swarm, built on the Ethereum blockchain, is a decentralized storage protocol offering numerous advantages:

  1. Security: Utilizes a decentralized network structure to eliminate single points of failure.
  2. Privacy: Encrypts data for restricted access, ensuring only intended recipients can decode it.
  3. Resilience: Distributed architecture safeguards against downtime and censorship threats.
  4. Scalability: Adaptable to varying application needs, making it suitable for different use cases.
  5. Cost-effectiveness: Provides an economical solution for storing substantial amounts of data.

How Swarm Works

  1. Data Division: Swarm divides data into smaller fragments, or chunks (see the illustrative sketch after this list).
  2. Node Distribution: Fragments are stored on multiple network nodes.
  3. Constant Communication: Nodes communicate to track data locations, ensuring availability even if some nodes go offline.
  4. Client Access: Accessing data stored in Swarm requires a Swarm client, which connects to the network and retrieves the requested data.
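
Purely as an illustration of the chunking idea in step 1 — not how Bee actually stores data — you can mimic the division of a file into Swarm-sized pieces with standard tools (Swarm's chunk size is 4 KB; the file name is hypothetical):

split -b 4096 myfile.bin chunk_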

In conclusion, the adoption of decentralized storage brings forward several advantages over centralized storage solutions, such as improved security, privacy, resilience, and cost-effectiveness. Swarm, a leading decentralized storage protocol, operates on the Ethereum blockchain, offering a secure, private, and resilient storage solution through its distributed network architecture.

Introducing FairOS: Seamless Integration with Swarm

FairOS is an innovative decentralized operating system that operates atop the Swarm peer-to-peer network. It offers numerous advantages over traditional operating systems, including decentralization, security, privacy, efficiency, and scalability.

FairOS seamlessly integrates with Swarm by leveraging the network for storing and sharing files. Files are fragmented into chunks and securely stored on multiple nodes within the Swarm network, ensuring constant accessibility, even if some nodes experience downtime.

Additionally, all files are encrypted before being stored on the Swarm network, preventing unauthorized access. This makes FairOS a secure and private solution for storing and sharing files.

A diagram of how FairOS interacts with the Swarm Bee node is available in the FairOS documentation:

https://docs.fairos.fairdatasociety.org/docs/fairOS-dfs/introduction

Implementing Swarm Bee Node

Bee is available for Linux in .deb package format, and it can be installed on various system architectures. One notable advantage of this installation method is that it automatically configures Bee to run as a service during the installation process.

wget https://github.com/ethersphere/bee/releases/download/v1.17.2/bee_1.17.2_amd64.deb
sudo dpkg -i bee_1.17.2_amd64.deb
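
Because the package registers Bee as a systemd service, you can confirm it is running straight away (a quick sanity check, not an official step):

sudo systemctl status bee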

For additional installation methods and configuration details, please refer to the official Bee installation documentation.

Find Bee address

To run as a full node or light node, Bee needs to issue a Gnosis Chain transaction, which is funded with xDAI. To obtain our node’s Gnosis Chain address, we can read it directly from the key file:

sudo cat /var/lib/bee/keys/swarm.key

Output from cat /var/lib/bee/keys/swarm.key:

{"address":"215693a6e6cf0a27441075fd98c31d48e3a3a100","crypto":{"cipher":"aes-128-ctr","ciphertext":"9e2706f1ce135dde449af5c529e80d560fb73007f1edb1636efcf4572eed1265","cipherparams":{"iv":"64b6482b8e04881446d88f4f9003ec78"},"kdf":"scrypt","kdfparams":{"n":32768,"r":8,"p":1,"dklen":32,"salt":"3da537f2644274e3a90b1f6e1fbb722c32cbd06be56b8f55c2ff8fa7a522fb22"},"mac":"11b109b7267d28f332039768c4117b760deed626c16c9c1388103898158e583b"},"version":3,"id":"d4f7ee3e-21af-43de-880e-85b6f5fa7727"}

The address field contains the node’s Gnosis Chain address; simply add the 0x prefix when using it.
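
If you have jq installed, a small sketch for extracting the prefixed address in one step (it assumes the default key path used above):

sudo cat /var/lib/bee/keys/swarm.key | jq -r '"0x" + .address'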

Funding Bee Node

Prior to funding your node, you need to acquire a certain amount of xDAI. You can obtain xDAI from sources like the faucets available at stakely or gnosisfaucet. Alternatively, if you hold DAI on the Ethereum network, you can use the xDAI bridge to mint xDAI on the Gnosis Chain.

Once you have obtained enough xDAI, you are ready to fund your Bee node. To activate a Bee full node on the mainnet, its wallet also needs to hold xBZZ. You can transfer xBZZ to the Gnosis Chain through the Omnibridge, and a minimum of 10 xBZZ is recommended for full nodes to perform optimally.

Following the transfer of xDAI and xBZZ to the Gnosis Chain address obtained in the previous step, proceed to restart the node.

sudo systemctl restart bee

Initialisation

Upon its initial launch in full mode, Bee needs to deploy a chequebook on the Gnosis Chain and synchronize the postage stamp batch store, which is required for validating stored and forwarded chunks. This process can take a while. Once it finishes, you’ll see Bee initiating connections, adding peers, and joining the network. You can track the progress by monitoring the logs during this phase.

sudo journalctl --lines=100 --follow --unit bee

Verifying Bee’s Functionality

Initial Verification of Installed Bee Version

bee version

After funding the Bee node, deploying the chequebook, and synchronizing the postage stamp batch store, the HTTP API of the node will become active and listen at localhost:1633.

To ensure proper functionality, you can perform a GET request to localhost on port 1633.

curl localhost:1633

The result should be:

Ethereum Swarm Bee
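
You can also confirm the chequebook deployment through the debug API on port 1635 (the same API used for staking and stamp purchases below); for example:

curl localhost:1635/chequebook/address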

Stake your node with Bee

At present, the minimum stake is 10 xBZZ. Make sure the node’s wallet holds enough xBZZ, plus some native tokens to cover gas.

You can then execute the command below to stake 10 xBZZ. The amount is denominated in PLUR, the smallest unit of xBZZ; 1 xBZZ equals 1e16 PLUR.

curl -XPOST localhost:1635/stake/100000000000000000

If the command executed successfully it returns a transaction hash that you can use to verify on a block explorer.
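
As a quick check, you can also query the stake endpoint of the debug API to confirm the deposited amount (assuming a Bee version with staking support, as used above):

curl localhost:1635/stake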

Upload and Download files

To upload your data onto Swarm, you need to commit a portion of your xBZZ tokens, effectively spending them. This signals to storer and forwarder nodes that the content holds value. Before moving forward, you therefore need to purchase postage stamps.

Currently, the simplest approach to begin uploading content is by purchasing a sufficiently large batch. This choice minimizes the likelihood of numerous chunks ending up in the same bucket.

The quantity you designate will determine the duration for which your chunks will persist within Swarm. Due to variable pricing, it’s not feasible to precisely predict when your chunks will exhaust their balance. Nevertheless, an estimate can be made based on the existing price and the remaining batch balance.

To commence, we recommend setting the batch depth to 20 and the amount to 10,000,000,000 for your batches. This initial configuration should enable you to upload several gigabytes of data for a span of a few weeks.

curl -s -XPOST http://localhost:1635/stamps/10000000000/20
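
A successful purchase returns a JSON response containing the new batch ID, which later steps use to stamp uploads. For example (the ID shown is purely illustrative):

{
"batchID": "78a26be9b42317fe6f0cbea3e47cbd0cf34f533db4e9c91cf92be40eb2968264"
}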

After purchasing your batch, it will require a few minutes for other Bee nodes within the Swarm network to recognize and register it. Please be patient and allow some time for your batch to propagate throughout the network before advancing to the next step.

To inquire about your stamps, initiate a GET request to the stamp endpoint.

curl http://localhost:1635/stamps
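
If you have jq installed, a small sketch for narrowing the output to the most useful fields (field names reflect recent Bee versions and may differ in yours):

curl -s http://localhost:1635/stamps | jq '.stamps[] | {batchID, usable, batchTTL}'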

If your batch is nearing depletion, or you wish to prolong its duration to safeguard against potential storage price increases, you have the option to extend the batch’s Time-to-Live (TTL). You can achieve this by adding more time to your batch through the stamps endpoint. In this process, you’ll utilize an HTTP PATCH request while providing the pertinent batchID.

curl -X PATCH "http://localhost:1635/stamps/topup/6d32e6f1b724f8658830e51f8f57aa6029f82ee7a30e4fc0c1bfe23ab5632b27/10000000"

You are now ready to proceed with uploading and downloading content on Swarm. For additional information, please refer to the official Swarm documentation.

To upload a file, initiate an HTTP POST request targeting the files endpoint of the Bee API. Ensure to incorporate your Batch ID within the Swarm-Postage-Batch-Id header, adhering to the specified structure. Additionally, you have the option to include the relevant MIME type in the Content-Type header, along with a designated file name using the name query parameter. This facilitates proper handling of the file by web browsers and various applications.

curl --data-binary "@bee.jpg" -H "Swarm-Postage-Batch-Id: 78a26be9b42317fe6f0cbea3e47cbd0cf34f533db4e9c91cf92be40eb2968264" -H "Content-Type: image/jpeg" http://localhost:1633/bzz
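
To also set the file name mentioned above, the same upload can pass it through the name query parameter (the batch ID shown is illustrative):

curl --data-binary "@bee.jpg" -H "Swarm-Postage-Batch-Id: 78a26be9b42317fe6f0cbea3e47cbd0cf34f533db4e9c91cf92be40eb2968264" -H "Content-Type: image/jpeg" "http://localhost:1633/bzz?name=bee.jpg"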

All data uploaded to Swarm is publicly accessible. To maintain the privacy of sensitive files, it’s imperative to encrypt them before uploading.

After a successful upload, you will receive a JSON-formatted response. This response will incorporate a swarm reference or hash, acting as the address for the uploaded file. As an example:

{
"reference": "22cbb9cedca08ca8d50b0319a32016174ceb8fbaa452ca5f0a77b804109baa00"
}

After uploading your file to Swarm, retrieving it is as straightforward as making an HTTP GET request.

Replace the hash at the end of the URL with your data’s specific reference.

For instance, using curl:

curl -OJL http://localhost:1633/bzz/042d4fe94b946e2cb51196a8c136b8cc335156525bf1ad7e86356c2402291dd4/

Alternatively, you can even access the URL directly in your browser:

http://localhost:1633/bzz/042d4fe94b946e2cb51196a8c136b8cc335156525bf1ad7e86356c2402291dd4/

FairOS-dfs Installation

The Decentralized File System (dfs) is designed specifically for FairOS. It serves as a lightweight, stateless layer that leverages Swarm’s core components to deliver advanced functionalities such as:

  1. Presenting a structured file system.
  2. Establishing logical drive creation capabilities.
  3. Managing users and permissions.
  4. Facilitating charging and payment processes.
  5. Supporting mutable, indexed data structures on top of an immutable file system.

Use cases for dfs encompass:

  1. Personal data storage.
  2. Storage of application data, catering to both Web 3.0 DApps and traditional Web 2.0 applications.
  3. Data sharing on an individual and organizational level.

To install FairOS-dfs, you can use the following Docker command:

docker run \
-p 9090:9090 \
--rm -it fairdatasociety/fairos-dfs \
server \
--ens-network="testnet" \
--rpc="https://xdai.dev.fairdatasociety.org" \
--beeApi="https://bee-1.fairdatasociety.org" \
--postageBlockId=0000000000000000000000000000000000000000000000000000000000000000

Once you’ve completed testing and are happy with the setup, you can move forward with deploying FairOS-dfs. Use the same command, but point --beeApi at the address of your own hosted Bee API and set --postageBlockId to the batch ID you purchased earlier. With these adjustments made, run the command to start FairOS-dfs with your configuration.

To test your FairOS-dfs node, you can send a cURL request to http://localhost:9090, and it should respond with “OK”.
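
For instance (port 9090 is the one mapped in the Docker command above):

curl http://localhost:9090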

You can now employ FairOS APIs to engage with the FairOS decentralized file system (dfs). These APIs encompass functionalities like user management, pod handling, file systems, key-value stores, and document stores. Our focus will be on creating APIs for uploading and downloading user data.

Create user account API

curl 'http://localhost:9090/v2/user/signup' -H 'Content-Type: application/json' -d '{"userName":"<username>","password":"<password>"}'

Upon making the API call, you will receive a response similar to this:

{
"address": "0x8C49B85011596609d313E2DE2dD0AC39140a4970",
"message": "insufficient funds",
"mnemonic": "drift game dutch coach minimum either business hour ski normal admit banana"
}

To finalize the signup process, you must fund the provided address with 0.2 Sepolia ETH. After successfully funding the account, call the signup API again to complete the account creation process.

You can use the following script to fund wallets:

const { Alchemy, Network, Wallet, Utils } = require("alchemy-sdk");
const dotenv = require("dotenv");

// Load API_KEY and PRIVATE_KEY from the .env file
dotenv.config();
const { API_KEY, PRIVATE_KEY } = process.env;

// Connect to the Sepolia network through Alchemy
const settings = {
  apiKey: API_KEY,
  network: Network.ETH_SEPOLIA,
};
const alchemy = new Alchemy(settings);

// Wallet holding the Sepolia ETH used for funding
let wallet = new Wallet(PRIVATE_KEY);

async function main() {
  const nonce = await alchemy.core.getTransactionCount(
    wallet.address,
    "latest"
  );

  // EIP-1559 transaction sending 0.2 ETH to the new FairOS account
  let transaction = {
    to: "wallet_address_to_fund",
    value: Utils.parseEther("0.2"),
    gasLimit: "21000",
    maxPriorityFeePerGas: Utils.parseUnits("5", "gwei"),
    maxFeePerGas: Utils.parseUnits("20", "gwei"),
    nonce: nonce,
    type: 2,
    chainId: 11155111, // Sepolia chain ID
  };

  // Sign locally and broadcast through Alchemy
  let rawTransaction = await wallet.signTransaction(transaction);
  let tx = await alchemy.core.sendTransaction(rawTransaction);
  console.log("Sent transaction", tx);
}

main();

Please substitute wallet_address_to_fund with the actual wallet address you wish to fund. This script uses the Alchemy SDK to fund wallets with 0.2 Sepolia ETH.
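
A minimal way to run it, assuming you save the script as fund.js (a hypothetical name) and keep the Alchemy API key and the funding wallet’s private key in a .env file with the variable names the script expects:

# .env
API_KEY=<your_alchemy_api_key>
PRIVATE_KEY=<your_funding_wallet_private_key>

# install dependencies and run the script
npm install alchemy-sdk dotenv
node fund.js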

To complete the signup process, call the signup API again using the following command:

curl 'http://localhost:9090/v2/user/signup' -H 'Content-Type: application/json' -d '{"userName":"<username>","password":"<password>","mnemonic":"<12 words from bip39 list>"}'

Login

After completing the signup process, you can proceed to log in using the following API call:

curl 'http://localhost:9090/v2/user/login' -H 'Content-Type: application/json' -d '{"userName":"<username>","password":"<password>"}' -v

Upon using the signup and login APIs, a ‘fairOS-dfs’ cookie is returned for the browser to store. Additionally, starting from version 0.9.5, a successful login also yields an accessToken, which should be included in API request headers as “Bearer <accessToken>”.

You can apply any of these methods for authentication while using the other APIs.

Creating User Pods for Data Storage

To establish a dedicated pod for each user to store data, you can utilize the following API call:

curl --request POST 'http://localhost:9090/v1/pod/new' -H 'Content-Type: application/json' -d '{"podName":"<podName>","password":"<password>"}' -H 'Cookie: fairOS-dfs=<COOKIE_FROM_LOGIN>'
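
If you are using the accessToken method described above instead of the cookie, the same call can send the token as a Bearer credential, conventionally via the Authorization header (a sketch):

curl --request POST 'http://localhost:9090/v1/pod/new' -H 'Content-Type: application/json' -H 'Authorization: Bearer <accessToken>' -d '{"podName":"<podName>","password":"<password>"}'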

After setting up the pod, you can access it using the following API call:

curl --request POST 'http://localhost:9090/v1/pod/open' -H 'Content-Type: application/json' -d '{"podName":"<podName>","password":"<password>"}' -H 'Cookie: fairOS-dfs=<COOKIE_FROM_LOGIN>'

Creating User Directories

To establish directories within the user’s pod for storing data, use the following API call:

curl --request POST 'http://localhost:9090/v1/dir/mkdir' --header 'Content-Type: application/json' --data-raw '{"dirPath": "<dir_with_path>","podName": "<podName>"}' -H 'Cookie: fairOS-dfs=<COOKIE_FROM_LOGIN>'
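
To verify the directory exists, FairOS-dfs also provides a directory listing endpoint; a sketch assuming it accepts the same podName/dirPath values as query parameters (check the API reference for your version):

curl 'http://localhost:9090/v1/dir/ls?podName=<podName>&dirPath=<dir_with_path>' -H 'Cookie: fairOS-dfs=<COOKIE_FROM_LOGIN>'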

Uploading Files

With all preparations complete, you can proceed to upload files using the following API call:

curl --request POST -H "fairOS-dfs-Compression: gzip" --form 'dirPath=<dir_with_path>' --form 'podName=<podName>' --form 'blockSize=<in_Mb>' --form 'files=@<filename1>' http://localhost:9090/v1/file/upload --header 'Content-Type: multipart/form-data' -H 'Cookie: fairOS-dfs=<COOKIE_FROM_LOGIN>'

Downloading Files

curl --request POST --form 'filePath="<filePath>"' --form 'podName="<podName>"' http://localhost:9090/v1/file/download -H 'Cookie: fairOS-dfs=<COOKIE_FROM_LOGIN>' --header 'Content-Type: multipart/form-data'

For the filePath, use the format /directory/filename. This format helps identify the location and name of the file you intend to interact with in the API call.
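
For instance, to fetch a file stored at /documents/report.pdf in a pod named mydata and save it locally (both names are hypothetical), you could run:

curl --request POST --form 'filePath="/documents/report.pdf"' --form 'podName="mydata"' http://localhost:9090/v1/file/download -H 'Cookie: fairOS-dfs=<COOKIE_FROM_LOGIN>' --header 'Content-Type: multipart/form-data' --output report.pdf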

Delete File

To delete a file, you can execute the following API call:

curl --request DELETE http://localhost:9090/v1/file/delete --header 'Content-Type: application/json' -H 'Cookie: fairOS-dfs=<COOKIE_FROM_LOGIN>' --data-raw '{"podName": "<podName>","filePath": "<filePath>"}'

With the Delete API covered, we have addressed all the essential APIs. If you’re interested in exploring more, further details are available in the FairOS-dfs API documentation.

If you’re interested in experimenting with the APIs above, you can use the VideoWiki uploader. To get started, you’ll need a functional Bee node and a running FairOS server.

Furthermore, if you’d like to use this code to develop and release a browser extension of your own, refer to the instructions in the VideoWiki extension repository: https://github.com/VideoWiki/extension. Google’s Chrome Web Store publishing guide walks through the release process: https://support.google.com/chrome/a/answer/2714278?hl=en.

Here is the link to the VideoWiki extension if you’d like to explore it further: https://chrome.google.com/webstore/detail/videowiki-uploader/beoblccnlpefppbmgkdekcbcjfcpmefa/related?hl=en-GB.

Summary

In summary, the adoption of decentralization through technologies like Swarm and FairOS has the potential to be a transformative step for your platform. This approach not only enhances data security but also empowers users by granting them greater control over their data. By decentralizing storage and operations, you contribute to a digital landscape that is more democratic, secure, and user-centric.

As we look to the future, the importance of decentralization is set to grow even further. It offers a pathway to a more resilient and censorship-resistant internet, where individuals have the autonomy to safeguard their data and online presence.

At Videowiki, we have firsthand experience of the potential of decentralization to reshape the digital realm. We encourage you to explore these technologies more deeply and consider how they can benefit your own platform. By building towards a decentralized future, we not only protect data but also lay the foundation for a more inclusive and equitable digital world. Join us on this exciting journey towards a decentralized future.

Thanks for Reading

We appreciate your time and interest in exploring the possibilities of decentralization with us. If you have any questions or would like to continue this conversation, please don’t hesitate to get in touch.

A Shout-Out to Our Dedicated Teammates

Before we wrap up, I’d like to take a moment to express my deep appreciation for the incredible individuals who have poured their time, expertise, and passion into making this journey of decentralization possible. To my teammates — Ritik Mahajan and Anish Jha — you are the beating heart of our collective efforts. Your dedication to innovation and relentless pursuit of excellence have been invaluable. I’m immensely grateful for your unwavering support. Here’s to the outstanding teamwork that has made this all possible! 🌟 #TeamAppreciation #ThankYouTeam

Stay in Touch

If you found this tutorial helpful and are eager to explore more articles from us, be sure to follow our contributors and stay connected for the latest insights and updates.

References

Access relevant resources and references to deepen your understanding of the topics discussed in this blog post.

We look forward to connecting with you and continuing our journey towards a decentralized future.
