Scaling @Turbohire
Attempt #1 (WIP)
Structure
Scaling Frontend
- Virtualization
- Pagination
Scaling Backend
- DB pagination
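As a preview of the DB pagination item above, here is a minimal keyset (seek) pagination sketch. It is an illustration only, not our production code; the table name, columns, and SQLite backend are hypothetical stand-ins:

```python
import sqlite3

def fetch_page(conn, after_id=0, page_size=50):
    """Keyset pagination: filter on the last seen id instead of using
    OFFSET, so fetching deep pages stays an index seek rather than a
    scan-and-discard."""
    cur = conn.execute(
        "SELECT id, name FROM candidates WHERE id > ? ORDER BY id LIMIT ?",
        (after_id, page_size),
    )
    return cur.fetchall()
```

The client passes the last `id` it saw as `after_id` to get the next page; this stays fast no matter how deep the page is, unlike `OFFSET`-based pagination.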
When we initially started building Turbohire, we began with the basics: a backend service responding to a frontend service. The frontend was designed to throttle API calls to reduce load on the backend server. The product has since grown, and we now need to handle the enormous amounts of data that were previously throttled.
To make this happen, our engineers worked around the clock to design an infrastructure capable of handling huge volumes of data while maintaining data consistency.
Here we will look at a few architectural decisions we took to scale different services based on their use cases.
Designing a scalable pipeline to process incoming resumes
This is the module where Turbohire’s core resume processing happens. The input is a large number of files that must be processed and then fed into different databases for different use cases.
This pipeline uses many independent components developed by different teams, so it is important for the pipeline to be fault tolerant. If one of the components fails, the rest of the pipeline should keep running, and the failed item should be retried without losing data.
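The fault-tolerance idea above can be sketched as per-item retries plus a dead-letter list, so one bad file never stalls the whole pipeline. This is a minimal illustration under assumed names (`with_retries`, `process_resume`, the step functions), not our actual pipeline code:

```python
import time

def with_retries(step, max_attempts=3, delay=0.0):
    """Wrap a pipeline step so transient failures are retried a few
    times before being treated as a real failure."""
    def wrapped(item):
        for attempt in range(1, max_attempts + 1):
            try:
                return step(item)
            except Exception:
                if attempt == max_attempts:
                    raise
                time.sleep(delay)
    return wrapped

def process_resume(resume, steps, dead_letter):
    """Run one resume through each independent step in order. If a
    step keeps failing after retries, park the item in a dead-letter
    list for later inspection instead of crashing the pipeline."""
    data = resume
    for step in steps:
        try:
            data = with_retries(step)(data)
        except Exception as err:
            dead_letter.append((resume, step.__name__, str(err)))
            return None
    return data
```

In a real deployment the dead-letter list would typically be a queue or table that an operator (or a scheduled job) drains and replays, but the shape of the logic is the same.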