Making your Meteor app fly, Part 2: Server response time optimization

Dhaval Chaudhary
Published in Fasal Engineering
6 min read · Apr 21, 2020

At Fasal, good software design and optimization run in our blood, be it app size or speed. We make sure we measure and optimize to the fullest so that our customers (farmers) in the field have a smoother experience.

In part 1 of this series, we discussed how we brought down the app size by 50% to less than 5MB. In this article, we are sharing our experience in optimizing server response time.

To take on any optimization challenge, we first have to measure. As the saying goes — if we can't measure it, we can't improve it.

In our case, we looked into all the components that can impact server response time and measured each precisely. We use the ELK stack and Monti APM for our logging and monitoring needs. We identified that —

  1. Our infra deployment strategy (application servers and database servers) was not optimal, leading to a higher RTT (Round Trip Time) between services and databases.
  2. Incomplete indexing on MongoDB collections was leading to slow-running queries.
  3. Background jobs, blocking Meteor calls, and large response sizes were adding overhead.

Let's dig into each point in detail and explore the possibilities for optimization.

The first step was finding the optimal deployment strategy for the app servers and the databases.

Initially, we were a small team and we didn't want to spend time doing DevOps and maintaining multiple servers. So we hosted our application on Galaxy, which is pretty cool and well managed by the Meteor experts. The only limitation we faced: it is only available in the US East (N. Virginia), Europe (Ireland), and Asia Pacific (Sydney) regions. To start, we deployed our application servers to the nearest region (Sydney for us). But even the nearest region wasn't an optimal solution, considering that the majority of our customers are in India. So it was clear we had to re-model our app server deployment strategy.

We found a solution that worked for us: moving our servers to the nearest region, which reduced server reachability time (RTT) by 3x to 4x. Here are a few options that might also help you in case Galaxy is not available in your region.

  1. Deploying on AWS Elastic Beanstalk
  2. Using Mup (we used this)
  3. A Dockerized solution

We can test RTT using any ping tool. We used meteor-ping. It's an old package, but it gives a pretty fair idea of ping time.
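If you would rather not depend on a package, a rough equivalent is easy to sketch. This is a minimal, framework-agnostic helper (the shape is our own, not from meteor-ping): time a trivial async round trip and average a few samples.

```javascript
// Minimal RTT sketch: time how long a trivial async round trip takes.
// `fn` can be any async function; in a Meteor client you would pass a
// wrapper around a no-op server method call.
async function pingOnce(fn) {
  const start = Date.now();
  await fn();
  return Date.now() - start; // elapsed milliseconds
}

// Average several samples to smooth out jitter.
async function averageRtt(fn, samples = 5) {
  let total = 0;
  for (let i = 0; i < samples; i += 1) {
    total += await pingOnce(fn);
  }
  return total / samples;
}
```

In a Meteor client you might run it as `averageRtt(() => Meteor.callAsync('ping'))` against a server method that simply returns (the `ping` method name here is an assumption).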

We use MongoDB as our database, and for MongoDB hosting there are many managed options available, like Atlas, ScaleGrid, mLab, and more. We could also host and manage it ourselves, but we would discourage that option, especially for a startup. Coming back to improving response time between our services and databases —

We have to make sure the network latency between our database and our application server is always minimal. One way to achieve that is by placing the database as close to the application server as possible.

Indexes on MongoDB collections and slow-running queries

To identify slow-running queries, we need an APM tool. DBaaS platforms like Atlas have performance advisors that flag slow-running queries and suggest indexes based on query patterns. Here is an example from Atlas.

If you are using Galaxy, you can use its APM, or there's a free tool, Monti APM, which we use and have found very useful for performance monitoring of Meteor applications. After successfully integrating the APM, we can go hunting for slow-running queries. Go to the Methods tab of Monti APM and sort by DB time.

Sort by DB time to get the list of functions that need attention
Example of the time drill-down on methods

Drill down into these methods and check why the queries are taking more time. If there's scope for optimization, go ahead and do it.

The next step is to do the same with our Meteor publications.

Publication listing sorted by response time

If you are also using a hosted Mongo service, check out the performance advisor in your service provider's dashboard. It will look something like the below —

Atlas performance advisor, in case you are using their service

Once we have identified the queries that are slow and can be improved or need an index, go ahead and fix them.

This can improve our query running time drastically. There are also many other advanced improvements listed here that we can follow for further gains.
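As a sketch of the fix (the collection and field names are assumptions for illustration), a compound index matching a common query pattern can be created through the raw Node driver collection that Meteor exposes:

```javascript
// Assumed query pattern: plots filtered by farmId and sorted newest
// first, so a compound index on { farmId, createdAt } fits it.
const plotQueryIndex = { farmId: 1, createdAt: -1 };

function ensurePlotIndexes(Plots) {
  // rawCollection() returns the underlying Node MongoDB driver
  // collection, whose createIndex() builds the index if missing.
  return Plots.rawCollection().createIndex(plotQueryIndex);
}

// In a Meteor app, run it once at boot:
// Meteor.startup(() => ensurePlotIndexes(Plots));
```

After creating an index, re-check the query in the profiler or with `explain()` to confirm it is actually being used.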

Background jobs, blocking Meteor calls, and response size

Another important task is to identify all Meteor calls that do heavy lifting, like processing batch files, generating PDFs and Excel files, sending emails, etc., which can be done in the background or handed off to a separate batch service that offloads the work from our main application server. Offloading frees up system resources so the user gets faster responses. In scenarios where we are bound to run such jobs on the same application server, use Meteor.defer, which runs the work in the background. It will still take up system resources, but it will not block the user's response.
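A rough sketch of that pattern (the `requestReport` method and `generateReportPdf` helper are hypothetical names, and the Meteor object is passed in only to keep the sketch self-contained):

```javascript
// A method that answers immediately and defers the heavy lifting.
function defineReportMethod(Meteor, generateReportPdf) {
  Meteor.methods({
    requestReport(plotId) {
      // Queue the expensive work; the client gets its reply right away.
      Meteor.defer(() => {
        generateReportPdf(plotId); // runs after the response is sent
      });
      return { queued: true };
    },
  });
}
```

The client can then poll or subscribe for the finished report instead of waiting on the method call itself.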

For offloading such tasks there are again many approaches. One good approach is going serverless: it is event-based, and we don't have to worry about managing another server. We will cover this in detail in our next article. We are in the process of moving all our batch jobs to serverless.

Next, try to reduce the wait time of Meteor method calls. To reduce wait time, we have to identify methods that are totally independent and can execute in parallel.

Wait time example

In the above example, if I find that the getAllFarm method can execute in any order and the client will not be affected, then I will go ahead and call this.unblock inside getAllFarm. This allows the following method calls to run without waiting for getAllFarm to finish.

this.unblock allows the next method call from the client to run before this call has finished. This matters because if we have a long-running method, we don't want it to block the UI until it finishes.
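A minimal sketch of the pattern (the collection name is illustrative, and dependencies are passed in to keep the sketch self-contained):

```javascript
// A long-running method that steps out of the per-client queue.
function defineFarmMethods(Meteor, Farms) {
  Meteor.methods({
    getAllFarm() {
      // Let the next method call from this client start immediately
      // instead of queueing behind this one.
      this.unblock();
      return Farms.find({}).fetch();
    },
  });
}
```

Only unblock methods whose ordering truly doesn't matter; once unblocked, a later call from the same client may finish before this one.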

Next, check all the responses where we are sending unnecessary fields and remove them.

Also, try to limit the fields in Mongo queries and fetch only the necessary fields from the database. This step might not show immediate improvement, but it will help us in the long run.
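For example (the publication and field names here are assumptions), Meteor's `fields` option maps to a MongoDB projection, so a publication can send only what the list view renders:

```javascript
// Publish only the fields the client list view actually needs.
function defineFarmListPublication(Meteor, Farms) {
  Meteor.publish('farms.list', function () {
    return Farms.find(
      { ownerId: this.userId },
      { fields: { name: 1, location: 1 } } // MongoDB projection
    );
  });
}
```

Trimming fields shrinks both the wire payload and the client-side cache that Meteor has to keep in sync.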

A few more tips that can be used depending on the requirement —

  • Use aggregation when running two or more queries for resources that are linked to each other. Example: say the task is to get plot data and farm data. Instead of running two Mongo queries one after another, use a single aggregate to fetch both in one query. (Check whether the aggregate we write uses an index or not.)
  • Don't run long loops on the server; they block the CPU and become a problem when we have to scale.
  • Before adopting any package, read about it and check its code to make sure it won't cause performance problems.
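The first tip can be sketched with a single `$lookup` pipeline (collection and field names are assumptions for illustration):

```javascript
// Join farm data onto plots in one round trip instead of two queries.
const plotWithFarmPipeline = [
  { $match: { ownerId: 'user-1' } }, // a selective first stage can use an index
  {
    $lookup: {
      from: 'farms',          // the farms collection
      localField: 'farmId',   // plot -> farm reference
      foreignField: '_id',
      as: 'farm',
    },
  },
  { $unwind: '$farm' },       // assuming each plot references exactly one farm
  { $project: { name: 1, 'farm.name': 1, 'farm.location': 1 } },
];

// In Meteor server code, run it through the raw driver:
// Plots.rawCollection().aggregate(plotWithFarmPipeline).toArray();
```

Keep the `$match` stage first so the pipeline can use an index; `$lookup` stages themselves only use indexes on the foreign collection's join field.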

Let us know if you have tried other options that helped you improve response time, and give us your feedback on the approach we took.
