Building realtime interfaces for SaaS

Siddharth Kshetrapal
Published in Practo Engineering
May 19, 2017 · 4 min read

At Practo, we have a software offering (Ray by Practo) that is used by more than 40,000 doctors and their staff. They use it to manage their schedules and digitize medical records.

Doctor’s scheduler

When it comes to performance, SaaS products are fundamentally different from mass consumer products in a few ways.

  1. User-tailored content: Each authenticated doctor/receptionist sees content specific to their clinic.
  2. Quick updates: Because Ray sits in the clinic’s workflow, data changes really fast: a patient checks in, a walk-in appointment comes up, a phone call reschedules an appointment, and so on.

For these reasons, Ray cannot benefit from the static caching mechanisms that we have deployed for our consumer-facing website practo.com, where content does not change as fast.

The fact that a doctor needs to see their schedule for an entire week (or even a month) before giving an appointment doesn’t help either.

But, that does not mean the application can be slow! Our users expect the application to load really fast and always be in sync across all user devices within a clinic.

So, how do we power that?

Story time!

Rewind to 2013. While closing sales with a number of big names in healthcare, we were presented with an interesting problem statement by our users.

Doctor: I want to know how many people are waiting outside the cabin at any time. The consultations are 15 minutes long and my receptionist handles the waiting area.

Sounds easy. Let’s just poll the server every 5 minutes. That will do it.
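In browser terms, that first cut looks roughly like the sketch below. The endpoint, payload shape, and helper names here are made up for illustration; they are not Ray’s actual API.

```typescript
// Naive polling: every client asks the server for the waiting queue on a
// fixed interval, whether or not anything has changed.
const POLL_INTERVAL_MS = 5 * 60 * 1000; // every 5 minutes

// Hypothetical helpers and identifiers, stubbed for illustration.
declare function renderWaitingList(
  queue: { patientId: string; checkedInAt: string }[]
): void;
const currentClinicId = "clinic-123";

async function pollWaitingQueue(clinicId: string): Promise<void> {
  const res = await fetch(`/api/clinics/${clinicId}/waiting-queue`);
  if (!res.ok) return; // ignore transient errors; try again on the next tick
  renderWaitingList(await res.json());
}

setInterval(() => pollWaitingQueue(currentClinicId), POLL_INTERVAL_MS);
```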

It solved the problem (kind of), but it created another big problem for us: 15k users were now making requests to our API servers every 5 minutes. We had to quadruple the number of servers we were running.

This is the architecture that we ended up with.

This wouldn’t scale well. We needed a way to reduce the number of requests that were coming to our servers.

Socket.IO had just released its 1.0.0 version around that time, so we thought we’d give it a shot. This is what our architecture looked like.

Data flow is now unidirectional.

The unidirectional data flow removed the need for clients to poll for data; data would now be “pushed” to the client. We were able to replace all the servers we had added with one small EC2 instance!
Bonus: since data was flowing in one direction, debugging was much easier.
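A minimal sketch of that push model, assuming one Socket.IO room per clinic and written against the current Socket.IO server API (event names and payloads are illustrative, not Ray’s actual schema):

```typescript
import { Server } from "socket.io";

const io = new Server(3000);

io.on("connection", (socket) => {
  // After authenticating, each device joins the room for its clinic.
  socket.on("join-clinic", (clinicId: string) => {
    socket.join(`clinic:${clinicId}`);
  });
});

// The API layer calls something like this whenever an appointment changes,
// so every device in the clinic receives the update without polling.
export function broadcastAppointmentUpdate(clinicId: string, appointment: object) {
  io.to(`clinic:${clinicId}`).emit("appointment:update", appointment);
}
```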

But we still had initial load times that were not so great. Even though our API response times looked good (~150ms), transferring all this data over the network could take up to 5 seconds!

Average load times ~ 4.5s!
95% of our users were on Chrome and Safari

Chrome and Firefox had supported IndexedDB since early 2013. Safari, on the other hand, did not support it at the time (but was planning to implement it). Looking at which browsers our users were on, we decided it was okay to implement this as a progressive enhancement.
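In practice, progressive enhancement here just means feature-detecting IndexedDB and treating the local copy as an optional fast path. A rough sketch, with hypothetical helper names:

```typescript
// Hypothetical helpers, stubbed for illustration.
declare function readScheduleFromCache(clinicId: string): Promise<object | null>;
declare function fetchScheduleFromApi(clinicId: string): Promise<object>;
declare function renderSchedule(schedule: object): void;

const canCacheLocally = typeof indexedDB !== "undefined"; // false on Safari back then

async function loadSchedule(clinicId: string): Promise<void> {
  if (canCacheLocally) {
    const cached = await readScheduleFromCache(clinicId);
    if (cached) renderSchedule(cached); // paint instantly from the local copy
  }
  renderSchedule(await fetchScheduleFromApi(clinicId)); // always reconcile with the server
}
```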

No latency between UI and database = super fast load times. We used web workers so that we could store huge amounts of data without blocking the main thread.
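Roughly, the main thread forwards incoming socket updates to a worker, and the worker does the IndexedDB writes. A sketch under those assumptions (database, store, and message names are made up):

```typescript
// cache-worker.ts — runs as a Web Worker, so large IndexedDB writes
// never block the UI thread.
self.onmessage = (event: MessageEvent<{ appointments: { id: string }[] }>) => {
  const open = indexedDB.open("ray-cache", 1);
  open.onupgradeneeded = () => {
    open.result.createObjectStore("appointments", { keyPath: "id" });
  };
  open.onsuccess = () => {
    const store = open.result
      .transaction("appointments", "readwrite")
      .objectStore("appointments");
    for (const appointment of event.data.appointments) {
      store.put(appointment); // upsert by id
    }
  };
};
```

On the main thread, updates arriving over the socket are simply handed off with something like `cacheWorker.postMessage({ appointments: [...] })`.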

Latency = poof!

Now that we were caching all the data in the browser, the initial load times were amazing, the schedulers were always in sync, and we saved a lot of money. Win-win.

Average load times came down to ~2.2s

Four years later, after growing our customer base five-fold, this solution still powers our scheduler with the same web sockets instance!

Follow us on Twitter for regular updates. If you liked this article, please hit the ❤ button to recommend it. This will help other Medium users find it.
