IPFS (the InterPlanetary File System) promises a better and more efficient way of sharing files. For example, instead of having everyone in a classroom download a file from Dropbox, why not fetch it from someone else in the room? This removes a lot of network overhead. But what about getting files onto the IPFS network? I will go over an example that uploads a file from the browser to IPFS directly.
To create an uploading feature, a developer needs to receive data from the browser and then store it somewhere. It could be a service like Amazon Web Services S3 or another file hosting service. They could also build one themselves if so desired, but that may not be the best use of time for a development team on a budget.
There is nothing wrong with this structure. It allows a developer to write server code to modify the image, for example. There could also be multiple storage solutions. But all of these setups increase the amount of bandwidth used by the application: a 1 MB upload becomes 2 MB of transfer, because the server has to upload the file to the storage solution in turn. Bandwidth is cheap, but it can be cheaper!
Browser to IPFS:
This is nice because we save on network costs. It is also doable with current storage platforms, though they may require a few authentication requests first. You will see how easy it is to upload directly to IPFS.
To run the example, you need an IPFS node running on your local computer. To make the tutorial easier, you will also configure CORS on your local node. Follow the IPFS install guide for your operating system and then do the following:
- Stop IPFS with ctrl-c
ipfs config --json API.HTTPHeaders.Access-Control-Allow-Methods '["PUT", "GET", "POST", "OPTIONS"]'
ipfs config --json API.HTTPHeaders.Access-Control-Allow-Origin '["*"]'
ipfs daemon
We are configuring ipfs to return the necessary headers for CORS to work. The last command just restarts the ipfs service locally.
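If the commands succeeded, the API section of the IPFS config file (~/.ipfs/config by default) should contain entries along these lines. This is a sketch of the expected shape, not a complete config:

```
"API": {
  "HTTPHeaders": {
    "Access-Control-Allow-Methods": ["PUT", "GET", "POST", "OPTIONS"],
    "Access-Control-Allow-Origin": ["*"]
  }
}
```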
Hosting a Website:
There are several ways to host a website on your laptop. You can use a server like SimpleHTTPServer, or just drag and drop index.html into the browser. The gist is here. Save the file in a directory of your choice.
Using Node to host a website:
If you want to use node, run the following in the terminal:
npm install http-server -g
http-server -p 1337
Using Python to host a website:
If you are on OSX, you most likely have Python installed by default. This is nice because there will be no need to install anything. In the terminal, run the following:
python -m SimpleHTTPServer 1337
The above starts an HTTP server on port 1337 in the local directory.
Using the file system to host a website:
The easiest way to host your website is to drag and drop index.html into any browser. No need to install or run any commands.
However you host the index.html file, you should see the following:
The example is a browser-to-IPFS image uploader. By having the browser upload the image directly to IPFS, developers save bandwidth. Normally, to upload an image, a client uploads it to a server, which in turn saves it somewhere else. Why have client → server → directory when you can have client → directory?
- Create an HTML input field of type file
- Create an HTML button to trigger the upload
- Create a reader with const reader = new FileReader()
- Call reader.readAsArrayBuffer with the file inside
- Bind to the reader load event emitter method
- Create an ipfs object bound to the local IPFS node on port 5001
- Create a buffer of the image that was read by the reader
- Call ipfs.files.add with the buffer and a callback function
- Create the URL string from the returned hash
- Modify data on the DOM directly
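The steps above can be sketched as a single index.html. This is a minimal sketch, not the exact gist: the CDN URLs and the window.IpfsApi and window.buffer globals are assumptions that depend on which browser bundles you include.

```html
<!-- Minimal sketch of the uploader page; script URLs and globals are assumptions -->
<input type="file" id="photo" />
<button id="upload">Upload</button>
<img id="result" />

<script src="https://unpkg.com/ipfs-api/dist/index.js"></script>
<script async src="https://wzrd.in/standalone/buffer"></script>
<script>
  // ipfs object bound to the local node we configured for CORS earlier
  const ipfs = window.IpfsApi('localhost', '5001')

  document.getElementById('upload').onclick = () => {
    const file = document.getElementById('photo').files[0]
    const reader = new FileReader()

    reader.onloadend = () => {
      // FileReader gives an ArrayBuffer; ipfs.files.add expects a Buffer
      const buf = buffer.Buffer.from(reader.result)
      ipfs.files.add(buf, (err, result) => {
        if (err) return console.error(err)
        // Build the URL from the returned hash and show the image
        const url = 'https://ipfs.io/ipfs/' + result[0].hash
        document.getElementById('result').src = url
      })
    }

    reader.readAsArrayBuffer(file)
  }
</script>
```

Open the page, pick an image, and click Upload; the img tag should eventually point at the gateway URL for the new hash.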
It’s easy to create a service that receives data from the browser and then stores it on a storage platform. As a developer you can write code to do what you want. Conversely, a front-end developer has more restrictions because their code runs in browsers. This adds some complexity.
The browser does not have direct access to files on disk. As a matter of fact, FileReader will give you a fake path to the file selected (C:/fakedir/some/path.png). This is a big difference from server code, where reading a file is easy and direct. But that’s OK, because FileReader allows you to read files in different formats, such as readAsText. We use readAsArrayBuffer in the example.
In Node.js, you have access to the native require('buffer').Buffer module; the browser does not. This means we need to look for a compatible Buffer implementation. Luckily, others have done the work! I found a browser-compatible buffer module by feross. In the example code, I added an async HTML script tag to include the buffer object. If I were using Webpack, I could have included buffer as require('buffer/').Buffer. Either way, as long as the buffer code is accessible, this will work.
Having users upload directly to IPFS is nice because it removes the need for some server code. One drawback is that it requires a user to have IPFS running locally, and most users will not know why or how to install it. There is a Chrome plugin, but it’s still a hoop to jump through. Another drawback is that images are not replicated between nodes; a file only exists on the nodes that have accessed it. This means that if node A receives the image, only node A knows about it. We need to copy the file to multiple servers to increase the availability of the file.
One way to solve the issues above is to host your own IPFS node and point the example at its port 5001 API. This means you have to manage a few servers, but at least you don’t have a server processing images in between the upload. But if your server goes down, the image becomes unavailable again. To survive restarts, you could use ipfs.pin to tell IPFS to store the image on disk instead of only in memory. This works for a single server, but not for all the other nodes you are hosting.
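For example, once an upload returns a hash, pinning it on the node keeps the data in local storage across restarts. The hash below is a placeholder; substitute the one returned by ipfs.files.add:

```
ipfs pin add <your-image-hash>
```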
Let’s get back to the topic of keeping images available in IPFS after servers are destroyed. The goal is to have the uploaded image on as many servers as possible so it becomes more accessible. After an image is uploaded to one IPFS node, that node could upload it again to other servers, so the same image lives on many IPFS nodes. Another way is to save it on a cloud platform like AWS or GCE. Then, when accessing images, if IPFS is slow or does not respond, use the AWS or GCE image URL.
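The cloud-fallback idea can be sketched as a tiny helper that produces the URLs to try in order. The function name, gateway, and bucket URL are all assumptions for illustration:

```javascript
// Hypothetical helper: given a content hash and a cloud copy of the same
// image, return candidate URLs to try in order (IPFS first, cloud second).
function candidateUrls (hash, cloudUrl) {
  return [
    'https://ipfs.io/ipfs/' + hash, // the IPFS gateway copy
    cloudUrl                        // e.g. the same image stored on S3 or GCE
  ]
}

const urls = candidateUrls(
  'QmSomeHash', // placeholder hash
  'https://example-bucket.s3.amazonaws.com/photo.png'
)
```

A client would request the first URL and, on timeout or error, fall back to the second.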
This issue of file availability is open at the moment; IPFS in and of itself does not do that work, but availability can be built on top of it. The IPFS team is working on Filecoin to incentivize people to store data: uploaders win by having others save their data, and hosts win by receiving Filecoin for their work. There are also other protocols like Sia, Storj, and MaidSafe.
You should now be able to see the overall structure for uploading an image from the browser to IPFS. There are a few drawbacks, like availability, speed, and ease of use. It’s still early, and hopefully more work will push the technology forward!