Design Instagram

Let’s design a photo-sharing service like Instagram, where users can upload photos to share them with other users.


Why Instagram?

Instagram is a social networking service that enables users to upload and share their pictures and videos publicly or privately with other users.

We plan to design a simpler version of Instagram for this exercise, where users can share photos and follow other users. The newsfeed for each user will consist of top images from all the people the user follows.

Requirements and Goals of the System

We will focus on the following set of requirements while designing Instagram:

Functional requirements:

  1. Users should be able to upload, download, and view photos.
  2. Users can follow other users.
  3. The system should generate and display a user’s newsfeed consisting of top photos from all the people the user follows.

Non-functional requirements:

  1. Our service needs to be highly available.
  2. The acceptable latency of the system is 200ms for newsfeed generation.
  3. Consistency can take a hit (in the interest of availability). If a user doesn’t see a photo for a while, it should be fine.
  4. The system should be highly reliable; any photo/video uploaded should never be lost.

We will not cover additional features that Instagram has, such as tags, comments, search, suggestions, etc., as they are advanced concepts for now.

Some Design Considerations

The system will be read-heavy, so we should focus on building a system that can retrieve photos quickly.

  1. Practically, users can upload as many photos as they like. Therefore, efficient management of storage should be a crucial factor while designing this system.
  2. Low latency is expected while reading images.
  3. Data should be 100% reliable. If a user uploads an image, the system should guarantee that it will never be lost.

Capacity Estimation

  • Let’s assume we have 1 Billion total users, with 500 Million daily active users.
  • Let’s assume 20% of the users add a picture every day, so 100 Million new images are added daily.
  • Average photo file size =~ 1 MB
  • Total space required for storage of 1 day of photos = 100 M * 1 MB = 100 TB
  • Total space required for 5 years = 100 TB * 365 days * 5 years =~ 200 PB
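
As a quick sanity check, these storage numbers can be reproduced with a few lines of Python (the inputs are just the assumptions listed above):

daily_active_users = 500_000_000
upload_fraction = 0.20                   # 20% of DAU add one photo per day
photos_per_day = daily_active_users * upload_fraction        # 100 million
avg_photo_size_mb = 1

daily_storage_tb = photos_per_day * avg_photo_size_mb / 1e6  # MB -> TB
five_year_storage_pb = daily_storage_tb * 365 * 5 / 1e3      # TB -> PB

print(daily_storage_tb, five_year_storage_pb)
# 100.0 TB per day, 182.5 PB (roughly 200 PB) over five years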


High-Level System Design

We need to support two scenarios at a high level: one to upload photos and the other to view/search photos. Our service would need some object storage servers to store the photos and database servers to store their metadata.
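
A minimal in-memory sketch of these two entry points is shown below; every name and data structure here is an illustrative assumption, not the real service API.

# Sketch of the two high-level scenarios: upload and view.
import itertools

photo_store = {}        # stand-in for object storage: PhotoID -> raw bytes
photo_metadata = {}     # stand-in for the metadata DB: PhotoID -> row
id_sequence = itertools.count(1)

def upload_photo(user_id, image_bytes):
    photo_id = next(id_sequence)
    photo_store[photo_id] = image_bytes              # write the file
    photo_metadata[photo_id] = {"user_id": user_id}  # write the metadata row
    return photo_id

def view_photo(photo_id):
    # In the real system this would consult the metadata DB (or a cache)
    # and then fetch the file from object storage or a CDN.
    return photo_store[photo_id]

pid = upload_photo(user_id=42, image_bytes=b"\x89PNG...")
print(view_photo(pid))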

Database Schema

We need to store the metadata about users, their uploaded photos, and the people they follow.

The photo table will store all data related to a photo. We need to have an index on (PhotoID, CreationDate) since we need to fetch recent images first.

All the metadata related to photos can go to a table, where the ‘key’ would be the ‘PhotoID’ and the ‘value’ would be an object containing PhotoLocation, UserLocation, CreationTimestamp, etc.

We also need to store relationships between users and photos to know who owns which photo. Another relationship we would need to store is the list of people a user follows. So for the ‘UserPhoto’ table, the ‘key’ would be ‘UserID,’ and the ‘value’ would be the list of ‘PhotoIDs’ the user owns, stored in different columns. We will have a similar scheme for the ‘UserFollow’ table.
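
A rough sketch of this key-value layout, using plain Python dicts as stand-ins for the actual tables (all field values are made up for illustration):

# Illustrative key-value layout of the three metadata tables.
photo_table = {
    # key = PhotoID, value = object describing the photo
    1001: {
        "PhotoLocation": "s3://photos/shard7/1001.jpg",
        "UserLocation": "48.8566,2.3522",
        "CreationTimestamp": 1687180800,
    },
}

user_photo_table = {
    # key = UserID, value = list of PhotoIDs the user owns
    42: [1001, 1002, 1003],
}

user_follow_table = {
    # key = UserID, value = list of UserIDs this user follows
    42: [7, 99, 256],
}

print(user_photo_table[42])   # all photos owned by user 42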

A straightforward approach for storing the above schema would be to use an RDBMS like MySQL since we require joins. But relational databases come with their challenges, especially when we need to scale them.

We can store the above metadata schema in a wide-column datastore like Cassandra. Key-Value stores, in general, always maintain a certain number of replicas to offer reliability. Also, in such data stores, deletes don’t get applied instantly. Instead, data is retained for some days (to support undeleting) before getting removed from the system permanently.

The raw image files can be stored in a distributed file storage system such as HDFS or an object store like S3.

Metadata Size Estimation

Let’s estimate how much data will be going into each table and how much total storage we will need for five years.

Assuming each “int” and “datetime” is four bytes and each “bigint” is eight bytes, every row in the User table will be 80 bytes:

UserID (4 bytes) + Name (32 bytes) + Email (32 bytes) + DateOfBirth (4 bytes) + CreationDate (4 bytes) + LastLogin (4 bytes) = 80 bytes

If we have 1 Billion users, we will need 80 GB of total storage.

Each row in the Photo table will be 272 bytes:

PhotoID (8 bytes) + UserID (4 bytes) + PhotoPath (256 bytes) + CreationDate (4 bytes) = 272 bytes

If 100M new photos get uploaded every day, we will need 50 TB of storage for five years.

100 M * 272 bytes * (365 * 5) ~= 50 TB

Each row in the UserFollow table will consist of 8 bytes. If we have 1 billion users and, on average, each user follows 500 others, we would need 4 TB of storage for the UserFollow table:

1 Billion users * 500 followers * 8 bytes ~= 4 TB

Therefore, the total space required for all metadata tables for 5 years = 0.08 TB + 50 TB + 4 TB ~= 54 TB
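
These per-table numbers are easy to double-check with a few lines of Python (decimal units, using the row sizes listed above):

users = 1_000_000_000
user_row_bytes = 4 + 32 + 32 + 4 + 4 + 4      # 80 bytes per User row
photo_row_bytes = 8 + 4 + 256 + 4             # 272 bytes per Photo row
follow_row_bytes = 8                          # 8 bytes per UserFollow row

user_table_gb = users * user_row_bytes / 1e9                  # ~80 GB
photo_table_tb = 100e6 * photo_row_bytes * 365 * 5 / 1e12     # ~50 TB
follow_table_tb = users * 500 * follow_row_bytes / 1e12       # ~4 TB

print(user_table_gb, photo_table_tb, follow_table_tb)
# 80.0  49.64  4.0  -> roughly 54 TB of metadata in total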


Component Design

Writes or photo uploads could be slow as they have to go to the disk, whereas reads will be faster, especially if they are being served from cache.

Uploading users can consume all the available connections, as uploading is a slow process. This means that ‘reads’ cannot be served if the system gets busy with all the ‘write’ requests. We should keep in mind that web servers have a connection limit before designing our system. If we assume that a web server can have a maximum of 500 connections at any time, it can’t have more than 500 concurrent uploads or reads. To handle this bottleneck, we can split reads and writes into separate services. We will have dedicated servers for reads and different servers for writes to ensure that uploads don’t hog the system.

Separating read and write requests will also allow us to scale and optimize each of these operations independently.

Reliability and Redundancy

Losing files is not an option for our service. Therefore, we will store multiple copies of each file so that if one storage server dies, we can retrieve the photo from a copy on a different storage server.

This same principle also applies to other components of the system. If we want to have high availability of the system, we need to have multiple replicas of services running in the system so that even if a few services die down, the system remains available and running. Redundancy removes the single point of failure in the system. For example, if two instances of the same service run in production and one fails or degrades, the system can fail over to the healthy copy. Failover can happen automatically or require manual intervention.

Data Sharding

Let’s discuss different schemes for metadata sharding:

Let’s assume we shard based on the ‘UserID’ so that all photos of a user stay on the same shard. If one DB shard is 1 TB, we will need around 55 shards to store roughly 54 TB of metadata. Let’s assume, for better performance and scalability, we keep 100 shards.

So we’ll find the shard number by ‘UserID % 100’ and then store the data there. Then, to uniquely identify any photo in our system, we can append the shard number to each PhotoID.

Each DB shard can have its own auto-increment sequence for PhotoIDs, and since we will append the ShardID to each PhotoID, it will be unique throughout our system.
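
A tiny sketch of this UserID-based scheme (the shard count and ID format are illustrative assumptions):

NUM_SHARDS = 100

def shard_for_user(user_id):
    return user_id % NUM_SHARDS

def global_photo_id(user_id, local_photo_id):
    # local_photo_id comes from the shard's own auto-increment sequence and is
    # only unique within that shard; prefixing the shard number makes it
    # unique across the whole system.
    return f"{shard_for_user(user_id)}-{local_photo_id}"

print(shard_for_user(1234))          # 34
print(global_photo_id(1234, 987))    # "34-987"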

What could be the issues with this partitioning scheme?

  1. How would we handle hot users (celebrities/influencers)? Several people follow such hot users, and many others see every photo they upload.
  2. Some users will have many photos compared to others, thus making a non-uniform distribution of storage.
  3. Storing all photos of a user on one shard can cause unavailability of all of the user’s data if that shard is down or higher latency if it is serving high load etc.

If we instead generate unique PhotoIDs first and then find the shard number through ‘PhotoID % 100’, the above problems are solved. We would not need to append the ShardID to the PhotoID in this case, as the PhotoID will itself be unique throughout the system.

We cannot have an auto-incrementing sequence in each shard to define the PhotoID because we need to know the PhotoID first to find the shard where it will be stored. One solution could be to dedicate a separate database instance to generating auto-incrementing IDs. If our PhotoID can fit into 64 bits, we can define a table containing only a 64-bit ID field. So whenever we would like to add a photo to our system, we can insert a new row in this table and take that ID as the PhotoID of the new photo.

Wouldn’t this key-generating database be a single point of failure? Yes, it would be. A workaround could be defining two such databases, one generating even-numbered IDs and the other odd-numbered. For MySQL, the following configuration can define such sequences:

KeyGeneratingServer1:

auto-increment-increment = 2
auto-increment-offset = 1

KeyGeneratingServer2:

auto-increment-increment = 2
auto-increment-offset = 2

We can put a load balancer in front of these databases to round-robin between them to deal with downtime. Both these servers could be out of sync, with one generating more keys than the other, but this will not cause any issue in our system. We can extend this design by defining separate ID tables for Users, Photo-Comments, or other objects present in our system.
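
A minimal sketch of how an application server might round-robin between the two key-generating databases, here simulated with in-memory counters rather than real MySQL instances:

import itertools

class KeyServer:
    # Simulates one MySQL instance configured with the offsets shown above.
    def __init__(self, offset, increment=2):
        self.next_id = offset
        self.increment = increment

    def new_id(self):
        current = self.next_id
        self.next_id += self.increment
        return current

odd_server = KeyServer(offset=1)    # generates 1, 3, 5, ...
even_server = KeyServer(offset=2)   # generates 2, 4, 6, ...
round_robin = itertools.cycle([odd_server, even_server])

photo_ids = [next(round_robin).new_id() for _ in range(5)]
print(photo_ids)   # [1, 2, 3, 4, 5]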

Alternatively, we can implement a ‘key’ generation scheme similar to the one discussed in Designing a URL Shortening service.

First, we can have a large number of logical partitions to accommodate future data growth, such that in the beginning, multiple logical partitions reside on a single physical database server. Since each database server can have multiple database instances running on it, we can have separate databases for each logical partition on any server. So whenever we feel that a particular database server has a lot of data, we can migrate some logical partitions from it to another server.

Finally, we can maintain a config file (or a separate database) that can map our logical partitions to database servers, enabling us to move partitions around quickly. Whenever we want to move a partition, we only have to update the config file to announce the change.
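
A sketch of such a mapping, kept here as a plain dictionary instead of a config file or database (the partition count and server names are made up):

# Maps logical partition -> physical database server.
# Moving a partition only requires updating this mapping.
PARTITION_MAP = {
    0: "db-server-1",
    1: "db-server-1",
    2: "db-server-2",
    3: "db-server-3",
}
NUM_LOGICAL_PARTITIONS = len(PARTITION_MAP)

def server_for_photo(photo_id):
    partition = photo_id % NUM_LOGICAL_PARTITIONS
    return PARTITION_MAP[partition]

print(server_for_photo(1001))   # partition 1 -> "db-server-1"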

Ranking and Newsfeed Generation

To create the News Feed for any given user, we need to fetch the latest, most popular, and relevant photos of the people the user follows.

For simplicity, let’s assume we need to fetch the top 100 photos for a user’s News Feed. Our application server will first get the list of people the user follows and then fetch the metadata of each user’s latest 100 photos. In the final step, the server will submit all these photos to our ranking algorithm, which will determine the top 100 photos (based on recency, likes, etc.) and return them to the user. A possible problem with this approach is higher latency, as we have to query multiple tables and perform sorting/merging/ranking on the results. To improve efficiency, we can pre-generate the News Feed and store it in a separate table.
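
A simplified sketch of this on-the-fly generation, assuming hypothetical helper functions for the metadata queries and the ranking score:

import heapq

def generate_newsfeed(user_id, get_followees, get_latest_photos, rank_score, feed_size=100):
    # Gather the latest photos of everyone the user follows, then rank them.
    candidates = []
    for followee_id in get_followees(user_id):
        candidates.extend(get_latest_photos(followee_id, limit=100))
    # rank_score would combine recency, likes, etc.
    return heapq.nlargest(feed_size, candidates, key=rank_score)

# Toy usage with stub data sources:
follows = {1: [2, 3]}
photos = {2: [{"photo_id": 10, "likes": 5}], 3: [{"photo_id": 11, "likes": 9}]}
feed = generate_newsfeed(
    1,
    get_followees=lambda uid: follows[uid],
    get_latest_photos=lambda uid, limit: photos[uid][:limit],
    rank_score=lambda p: p["likes"],
)
print([p["photo_id"] for p in feed])   # [11, 10]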

We can have dedicated servers that are continuously generating users’ News Feeds and storing them in a ‘UserNewsFeed’ table. So whenever any user needs the latest photos for their News-Feed, we will simply query this table and return the results to the user.

Whenever these servers need to generate the News Feed of a user, they will first query the UserNewsFeed table to find the last time the News Feed was generated for that user. Then, new News-Feed data will be generated from that time onwards (following the steps mentioned above).

Clients can pull the News-Feed contents from the server at a regular interval or manually whenever they need it. Possible problems with this approach are:

a) New data might not be shown to the users until clients issue a pull request.
b) Most of the time, pull requests will result in an empty response if there is no new data.

Servers can push new data to users as soon as it is available. To manage this efficiently, clients have to maintain a long-poll connection with the server to receive updates. A possible problem with this approach is a user who follows a lot of people, or a celebrity user who has millions of followers; in these cases, the server has to push updates quite frequently.

We can adopt a hybrid approach. We can move all the users who have a high number of follows to a pull-based model and only push data to those users who follow a few hundred (or thousand) accounts. Another approach could be that the server pushes updates to all users at no more than a certain frequency, letting users with a lot of follows/updates pull data regularly.

Newsfeed Creation with Sharded Data

One of the most important requirements for creating the News Feed of any given user is to fetch the latest photos from all the people the user follows. For this, we need a mechanism to sort photos by their creation time. To do this efficiently, we can make the photo creation time part of the PhotoID. Since we will have a primary index on PhotoID, it will be quite quick to find the latest PhotoIDs.

We can use epoch time for this. Our PhotoID will have two parts; the first part will represent epoch time, and the second part will be an auto-incrementing sequence. So to make a new PhotoID, we can take the current epoch time and append an auto-incrementing ID from our key-generating DB. We can figure out the shard number from this PhotoID ( PhotoID % 100) and store the photo there.

What could be the size of our PhotoID?

Let’s say our epoch time starts today; how many bits would we need to store the seconds for the next five years?

(24*60*60) sec/day * 365 (days a year) * 5 (years) = 160 Million seconds

We would need 28 bits to store this number (2²⁸ ≈ 270 Million). Since, on average, we are expecting about 1,157 new photos per second (100 M / 86,400 sec), we can allocate 11 additional bits to store an auto-incremented sequence. So every second, we can store 2¹¹ = 2,048 new photos.

PhotoID size = 28 + 11 = 39 bits ≈ 5 bytes

We can reset our auto-incrementing sequence every second.
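
Putting the pieces together, a process-local sketch of this ID scheme could look like the following; the custom epoch value and counter are assumptions, and a real system would take the per-second sequence from the key-generating DB rather than a local variable:

import time

EPOCH_START = 1687132800     # assumed custom epoch ("today"), in Unix seconds
SEQUENCE_BITS = 11           # up to 2,048 photos per second

_sequence = 0
_last_second = -1

def next_photo_id():
    global _sequence, _last_second
    seconds = int(time.time()) - EPOCH_START
    if seconds != _last_second:
        _last_second, _sequence = seconds, 0   # reset the sequence every second
    else:
        _sequence += 1                         # up to 2,047 within one second
    # High bits: seconds since our epoch; low 11 bits: per-second sequence.
    return (seconds << SEQUENCE_BITS) | _sequence

photo_id = next_photo_id()
print(photo_id, "-> shard", photo_id % 100)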

Caching

Our service would need a massive-scale photo delivery system to serve globally distributed users. Our service should push its content closer to the user using many geographically distributed photo cache servers and use CDNs (for details, refer to Caching in the Appendix).

We can introduce a cache for metadata servers to cache hot database rows, using Memcached (or a similar store). Application servers can quickly check whether the cache has the desired rows before hitting the database.

A Least Recently Used (LRU) policy can be a reasonable cache eviction policy for our system. Under this policy, we discard the least recently viewed row first.
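
For illustration, a bare-bones LRU cache for metadata rows can be built on Python’s OrderedDict; the capacity and keys here are arbitrary, and a production system would use Memcached/Redis instead:

from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)          # mark as most recently used
        return self.items[key]

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)   # evict the least recently used row

cache = LRUCache(capacity=2)
cache.put("photo:1", {"path": "/p/1.jpg"})
cache.put("photo:2", {"path": "/p/2.jpg"})
cache.get("photo:1")                         # photo:1 is now the most recent
cache.put("photo:3", {"path": "/p/3.jpg"})   # evicts photo:2
print(cache.get("photo:2"))                  # None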

How can we build a more intelligent cache?

If we go with the eighty-twenty rule, i.e., 20% of photos generate 80% of the read traffic, then certain photos are so popular that most requests are for them. This suggests that we can try caching 20% of the daily read volume of photos and metadata.
