How Fiverr Accelerates Attachments

Oren Levitzky · Fiverr Tech · Apr 22, 2020

Fiverr is changing how the world works together, connecting freelancers with businesses around the world. In 2019, over 2.4 million customers bought a wide range of services from freelancers working in over 160 countries.

Over the years, as we have listened to our community of buyers and sellers, we have continued investing in our product, keeping our focus on our users' needs. One of the biggest challenges we have had to overcome is file sharing.

Scaling Fiverr with AWS

File sharing is such an important part of our infrastructure because it is used extensively by both buyers and sellers to exchange information and deliverables on the Fiverr platform. For example, when an order is made on Fiverr, a buyer usually receives at least one attachment from the seller. It might be the logo they ordered, the video they wanted edited, or any of the other services on the platform. In addition, when users want to sell their services, they must showcase their previous work, or examples of it, on their Gig page (a Gig is a digital service offered on Fiverr) and upload content to be viewed, such as images, videos, audio files, PDFs, and other types of visual files.

With millions of users and businesses connecting and working together on our platform, we have a multitude of attachments to manage. On average we handle millions of attachments per month, amounting to tens of terabytes of new storage every month!

This quantity of data is one of the many reasons we chose to migrate our infrastructure to AWS a few years ago. The idea was to make our lives easier and, more importantly, to improve our users' experience. That included Amazon EC2 Auto Scaling groups, Amazon Aurora, Amazon Simple Storage Service (Amazon S3), and many other AWS services.

The next step was to implement a new system architecture that supports this scale of attachments while extending its features. We ended up with a file size limit that is practically unlimited, we sped up upload and download times by 3–4X, and we found a better solution for compressing and optimizing files for preview, using Cloudinary's image and video management platform. And BOOM! This, without a doubt, exceeded our expectations.
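The article doesn't show how previews are generated, but one convenient way to use Cloudinary for preview optimization is its URL-based transformations: the resize, quality, and format parameters are encoded directly in the delivery URL. A minimal sketch (the cloud name and public ID below are placeholders, not Fiverr's actual setup):

```python
def cloudinary_preview_url(cloud_name: str, public_id: str, width: int = 600) -> str:
    """Build a Cloudinary delivery URL for an optimized preview.

    Cloudinary applies the transformations encoded in the URL path:
    w_<n>,c_limit caps the width without upscaling, while q_auto and
    f_auto let Cloudinary pick the best quality and format per client.
    cloud_name and public_id are hypothetical values for illustration.
    """
    transform = f"w_{width},c_limit,q_auto,f_auto"
    return f"https://res.cloudinary.com/{cloud_name}/image/upload/{transform}/{public_id}"
```

For example, `cloudinary_preview_url("demo", "sample.jpg")` yields a URL that serves a width-capped, automatically compressed version of the original upload.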

The high-level architecture looks like this:

The main idea behind this architecture is that our users' devices upload attachments to our Amazon S3 account directly, without passing through our own servers. Routing uploads through our own servers (as in the previous solution) hurt upload performance, resulting in higher latency, and, more importantly, put heavy load on those servers. The new architecture gives our users a much better experience: speeding up upload times helps them deliver more content, with better quality, much faster. After connecting users directly to our S3 service (which is located in the US), we saw that it was faster than our previous system, but in many countries the experience was still not where we wanted it to be.

Using Amazon S3 Transfer Acceleration to speed up our experience

To solve this, we enabled Amazon S3 Transfer Acceleration, which routes our users through nearby Amazon CloudFront edge locations, usually in the same region. Enabling Amazon S3 Transfer Acceleration was very easy: just choose the desired S3 bucket and click a button. We managed to speed up upload/download times for those countries by about 3X!
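From the client's point of view, the only thing that changes once acceleration is enabled on the bucket is the endpoint hostname: requests go to the global `s3-accelerate` endpoint instead of the regional one (boto3 exposes this as the `use_accelerate_endpoint` config flag). A minimal sketch of that endpoint choice:

```python
def s3_upload_host(bucket: str, region: str, accelerate: bool = False) -> str:
    """Pick the S3 endpoint hostname for a client transfer.

    With Transfer Acceleration enabled on the bucket, clients use the
    global s3-accelerate endpoint, which routes the transfer through the
    nearest edge location; otherwise they use the regional endpoint.
    The signing and request format stay the same either way.
    """
    if accelerate:
        return f"{bucket}.s3-accelerate.amazonaws.com"
    return f"{bucket}.s3.{region}.amazonaws.com"
```
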

Our architecture would not be complete without making sure that all the files uploaded to our S3 account are secure. Regardless of scale and file size, each file is scanned for viruses immediately, without any impact on our users' experience. Now our users can focus on their work and deliver the best product they can!
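The article doesn't name the scanning setup, but a common pattern for "immediate, no user impact" scanning is to trigger it from S3 upload notifications so it runs out-of-band. A hypothetical handler, where the event shape is the standard S3 notification format and `scan` stands in for whatever antivirus engine is actually used:

```python
import urllib.parse

def handle_upload_event(event, scan):
    """Scan each object referenced by an S3 notification event.

    `scan` is a callable (bucket, key) -> "clean" or "infected"; it is a
    placeholder for a real antivirus engine (e.g. a ClamAV worker).
    Object keys in S3 events arrive URL-encoded, so decode them first.
    """
    results = {}
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        results[(bucket, key)] = scan(bucket, key)
    return results
```

Because the scan is driven by the bucket's own event stream, it keeps up with the upload rate automatically and never sits in the user's upload path.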
