Developing High-Performance Applications & Microservices

Saurabh Gupta
8 min read · Jul 15, 2022


Microservices and their advantages

Microservices design has become very popular. Application teams are moving toward a microservices architecture because it offers many advantages.

We should remember that a microservice is not just a big monolithic application broken into sub-applications; it is much more than that. The core idea of microservices is to create a self-contained piece of functionality that offers clear interfaces and can have its own internal components.

Below are some high-level advantages of a microservices-based architecture:

  • Loose coupling and high cohesion
  • Supports both horizontal and vertical scaling: easy to scale up and scale down.
  • Diversified and modern technology stack: each microservice can be implemented in a different programming language and can be deployed on heterogeneous servers or clouds.
  • Modular and distributed: small individual microservices organized around business capabilities.
  • Fast time-to-market: development, testing, and deployment are much faster than for a monolithic service.
  • Easy to deploy, test and productionize.

Microservices Performance Best Practices


Microservices come with many benefits, but their management, maintenance, and performance can degrade very easily. Many software engineers find it difficult to build and maintain high-performance microservices.

As the number of microservices in an application (or project) grows, maintaining overall application performance becomes a major challenge, and troubleshooting microservice performance is a tough job.

I have been working on microservices for the last few years, and I think there is no direct or defined way to measure whether a microservice is 100% efficient and performant. I believe that good development is a habit, not a one-time job. If we keep some best practices and recommendations in mind while developing and reviewing any code, whether it is a microservice or a simple program, we will end up writing efficient, high-performance code. Here are some best practices I follow while writing and reviewing code:

Design, Communication & Security:

1. Choose the best-fit technology and use its best practices for developing the microservice: Choose the technology stack based on the functionalities and business features/use cases. Sometimes we (engineers) form a bias (an emotional attachment) toward a technology and then try to develop all the microservices on our favorite stack. It took me time to change this perspective. Example: AI/ML-based services can be built in Python or a similar language; if we try to build AI/ML models in Java, they might not perform as expected. Once we have identified the best-fit tech stack, we should design and build optimized code for it by following that technology's best practices.

2. Design microservices using the SOLID principles: The SOLID design principles are the foundation of a good, efficient object-oriented program or application, and I advocate applying them when designing microservices as well. See my tech blog on Designing Effective Microservices by following SOLID Design Principles.
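As a quick illustration, here is a minimal Java sketch of the Single Responsibility and Interface Segregation principles applied to a service contract. The interfaces and record types are hypothetical names used only for this example, not part of any specific framework.

```java
import java.time.LocalDate;
import java.util.List;

// Hypothetical domain types, defined only so the sketch is self-contained.
record OrderRequest(String sku, int quantity) {}
record Order(String id, String sku, int quantity) {}
record OrderSummary(LocalDate day, long orderCount) {}

// Single Responsibility / Interface Segregation: ordering and reporting are
// separate contracts, so a client that only places orders never depends on
// reporting methods, and each interface has a single reason to change.
interface OrderService {
    Order placeOrder(OrderRequest request);
    Order getOrder(String orderId);
}

interface OrderReportingService {
    List<OrderSummary> summarizeOrders(LocalDate from, LocalDate to);
}
```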

3. Design the microservice architecture for performance and security: Security and the APIs of a microservice should be part of the design from the start, not the last item of implementation, because an unsecured service causes its consumers more harm than benefit. OAuth and Kerberos are widely used security protocols, and their libraries are readily available in most programming languages.
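For example, on a Spring-based stack a resource server can validate OAuth 2.0 bearer tokens with a few lines of configuration. This is a minimal sketch assuming Spring Security's OAuth 2.0 resource-server support is on the classpath and the JWT issuer is configured in the application properties.

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.Customizer;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.web.SecurityFilterChain;

@Configuration
public class SecurityConfig {

    @Bean
    public SecurityFilterChain apiSecurity(HttpSecurity http) throws Exception {
        http
            // No anonymous access: every request must be authenticated.
            .authorizeHttpRequests(auth -> auth.anyRequest().authenticated())
            // Validate incoming bearer JWTs against the configured issuer.
            .oauth2ResourceServer(oauth2 -> oauth2.jwt(Customizer.withDefaults()));
        return http.build();
    }
}
```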

4. Microservice communication: use asynchronous (non-blocking) requests wherever possible: Synchronous requests are blocking and can cause serious performance bottlenecks. Asynchronous communication is the exchange of data/messages between two or more services without requiring all of them to respond immediately; in simple terms, the interacting services do not all need to be up and running at the moment of communication. This can be achieved via messaging queues or database polling. Kafka is one of the most widely used solutions for communication between microservices. Here is my tech blog on Microservices: Intra-service communication.
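Here is a minimal sketch of this pattern using Spring Kafka; the topic name, consumer group id, and handler class are illustrative assumptions, not part of any specific project.

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

// Producer side: the caller returns immediately; the broker buffers the event
// until a consumer picks it up, so neither service blocks on the other.
@Service
public class OrderEventPublisher {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public OrderEventPublisher(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    public void publishOrderCreated(String orderId) {
        kafkaTemplate.send("order-events", orderId, "ORDER_CREATED");
    }
}

// Consumer side: events are processed whenever this service is up; the
// producer is unaffected if this service is temporarily down.
@Service
class OrderEventHandler {

    @KafkaListener(topics = "order-events", groupId = "billing-service")
    public void onOrderEvent(String payload) {
        System.out.println("Processing event: " + payload);
    }
}
```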

5. Keep the memory footprint within limits: A microservice's footprint and business logic should be small and atomic; it should solve a particular use case, not everything, and performance improves as a result. A good microservice should not expose methods/functions that are not directly related (e.g., sales and procurement). If we integrate with a microservice for a specific use case and end up using less than 30–40% of its features, then we are probably not calling a microservice at all; such services are more like low-performance mini-monoliths.

6. OAuth & Kerberos token caching: Security tokens are costly and time-consuming to generate. The best practice is for calling systems to cache OAuth and/or Kerberos tokens to avoid hitting the token-generating API frequently. I prefer a cache time of 60 to 180 minutes, although it depends on the level of security we want to provide in our microservices. Spring provides good integration for caching and refreshing tokens as a non-blocking background process. If distributed messaging services (like Kafka) are being used in an application, their authentication tokens should also be cached to improve performance.
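Here is a minimal, framework-agnostic sketch of the idea in Java; the Supplier stands in for whatever OAuth/Kerberos client your stack provides, and the TTL is an illustrative value.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.function.Supplier;

// Client-side token caching: fetch the token once and reuse it until it is
// about to expire, instead of hitting the token endpoint on every request.
public class CachedTokenProvider {

    private final Supplier<String> tokenFetcher; // calls the real token endpoint (assumption)
    private final Duration cacheTtl;             // e.g. 60-180 minutes, per your security policy

    private volatile String cachedToken;
    private volatile Instant expiresAt = Instant.EPOCH;

    public CachedTokenProvider(Supplier<String> tokenFetcher, Duration cacheTtl) {
        this.tokenFetcher = tokenFetcher;
        this.cacheTtl = cacheTtl;
    }

    public synchronized String getToken() {
        // Refresh only when the cached token is missing or has expired.
        if (cachedToken == null || Instant.now().isAfter(expiresAt)) {
            cachedToken = tokenFetcher.get();
            expiresAt = Instant.now().plus(cacheTtl);
        }
        return cachedToken;
    }
}
```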

Database Related:

7. Use the right database type/technology: A microservice's response time directly depends on the response time of its data source and underlying database, so database selection and data modeling are very important. A microservice or microservices architecture can involve an RDBMS, a key-value store, and/or unstructured data (e.g., images, videos). To improve performance, structured data should be stored in an RDBMS, and unstructured data should be stored in a NoSQL data store like MongoDB or Cassandra.

The database type (RDBMS, NoSQL, object DB, etc.) should be chosen based on the use case; we should not try to follow a “one solution fits all” approach.

8. Database-side caching: Database queries and responses should be cached for infrequently changing data (or reference data). This reduces hits to the database and saves it from getting overloaded. Ehcache is a good open-source caching library that integrates well with Hibernate, Spring, and JPA. We should also spend some time identifying a good indexing/partitioning strategy for database tables. I prefer an 80/20 or 70/30 ratio of database to cache size, i.e., keeping the 20–30% most frequently accessed data in the cache improves performance.
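A minimal sketch using Spring's cache abstraction (which can be backed by Ehcache, Caffeine, and others); it assumes @EnableCaching is configured, and the cache name and repository are hypothetical.

```java
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class CountryLookupService {

    private final CountryRepository repository;

    public CountryLookupService(CountryRepository repository) {
        this.repository = repository;
    }

    // Reference data rarely changes, so repeated lookups are served from the
    // "countries" cache instead of hitting the database every time.
    @Cacheable("countries")
    public String findCountryName(String isoCode) {
        return repository.findNameByIsoCode(isoCode);
    }
}

// Hypothetical repository interface, shown only to keep the sketch self-contained.
interface CountryRepository {
    String findNameByIsoCode(String isoCode);
}
```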

9. Optimize database calls/queries: Avoid fetching the entire row (tuple) from the database. Suppose an API hits a database table and returns 10 attributes, while the table has 40 attributes (or more). If we execute a “select all” or “select * by id” query, it will return the whole row/tuple. A better approach is to select only the attributes required for the API response. This saves network cost and executes faster as well.
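One way to do this with Spring Data JPA is an interface-based projection, sketched below; the entity, projection, and repository names are hypothetical, and the same effect can be achieved with explicit column lists in SQL/JPQL.

```java
import jakarta.persistence.Entity;
import jakarta.persistence.Id;
import org.springframework.data.jpa.repository.JpaRepository;
import java.util.Optional;

// Hypothetical entity; imagine dozens of other columns beyond these three.
@Entity
class CustomerEntity {
    @Id Long id;
    String name;
    String email;
}

// The projection lists only the attributes the API response needs, so the
// generated SQL selects these columns instead of the full row.
interface CustomerSummary {
    Long getId();
    String getName();
    String getEmail();
}

interface CustomerRepository extends JpaRepository<CustomerEntity, Long> {
    Optional<CustomerSummary> findSummaryById(Long id);
}
```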

10. Database connection pooling: DBCP (database connection pooling) reduces the cost of creating and closing connections by maintaining a connection pool. Because making a new connection for each request is costly and time-consuming, a pool allows the same database connections to be reused across requests. It is also worth spending time tuning the number of idle/stand-by connections; this enables good resource utilization.
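As an illustration, here is a minimal sketch configuring a HikariCP pool (the default pool in Spring Boot); the JDBC URL, credentials, and sizing values are placeholder assumptions to be tuned against real load.

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import javax.sql.DataSource;

public class ConnectionPoolFactory {

    public static DataSource createPool() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://localhost:5432/orders"); // placeholder URL
        config.setUsername("app_user");                               // placeholder credentials
        config.setPassword("change-me");
        config.setMaximumPoolSize(20);   // upper bound on concurrent connections
        config.setMinimumIdle(5);        // idle/stand-by connections kept warm
        config.setIdleTimeout(60_000);   // ms before an idle connection is retired
        return new HikariDataSource(config);
    }
}
```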

11. Use database clustering: Database clustering with load balancing allows the database to respond to queries faster. There are multiple ways of handling this:

  1. A master-slave (primary-replica) configuration, where the replicas are read-only and eventually consistent (see the routing sketch after this list), or
  2. A master-master configuration, which is a bit slower than master-slave.
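For the primary-replica case, this is a minimal sketch of how an application can route read-only work to a replica and writes to the primary using Spring's AbstractRoutingDataSource; the lookup keys, the ThreadLocal flag, and the two DataSource instances are assumptions made for illustration.

```java
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;
import javax.sql.DataSource;
import java.util.HashMap;
import java.util.Map;

public class ReadWriteRoutingDataSource extends AbstractRoutingDataSource {

    // Per-request flag set by the caller (e.g. in a service or interceptor).
    private static final ThreadLocal<Boolean> READ_ONLY = ThreadLocal.withInitial(() -> false);

    public static void markReadOnly(boolean readOnly) {
        READ_ONLY.set(readOnly);
    }

    @Override
    protected Object determineCurrentLookupKey() {
        // Reads go to the replica; everything else goes to the primary.
        return READ_ONLY.get() ? "replica" : "primary";
    }

    // Wiring sketch: register the primary and replica pools under their keys.
    public static ReadWriteRoutingDataSource of(DataSource primary, DataSource replica) {
        ReadWriteRoutingDataSource routing = new ReadWriteRoutingDataSource();
        Map<Object, Object> targets = new HashMap<>();
        targets.put("primary", primary);
        targets.put("replica", replica);
        routing.setTargetDataSources(targets);
        routing.setDefaultTargetDataSource(primary);
        routing.afterPropertiesSet(); // resolve targets when not managed as a Spring bean
        return routing;
    }
}
```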

12. Database tuning & choosing a good index and/or partition strategy: Fine-tune the table space, disk space, and user space. Choosing the right index or partition strategy is essential: a good index or partition strategy can optimize query time, while a bad index selection can degrade performance.

If we are using JPA, it is a good practice to log and review the JPA-generated queries during the development phase, as sometimes these queries add unnecessary joins and self-joins.
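For a Spring Boot application, one way to review the generated SQL during development is a small dev-profile configuration like the sketch below; the property values shown are standard Hibernate settings, while the bean and profile names are assumptions.

```java
import org.springframework.boot.autoconfigure.orm.jpa.HibernatePropertiesCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;

@Configuration
public class DevSqlLoggingConfig {

    @Bean
    @Profile("dev") // keep noisy SQL logging out of production
    public HibernatePropertiesCustomizer showSqlCustomizer() {
        return props -> {
            props.put("hibernate.show_sql", "true");   // print each generated SQL statement
            props.put("hibernate.format_sql", "true"); // pretty-print for easier review
        };
    }
}
```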

Server-side caching & Scaling:

13. Server-side caching: Good caching leads to big performance gains, while a bad caching strategy degrades performance. I suggest caching the response of a microservice based on the request and its parameters. If the response does not change frequently (responses like image, movie, or item details), it can be cached keyed on the input parameters; this improves performance because the business logic/compute does not need to be executed again for similar or identical requests. Memcached and Redis are good examples of in-memory caches that store key-value data between the application and the database. Redis is an advanced, distributed, in-memory caching tool that also offers backup and restore facilities, and both integrate very well with Spring-based microservices. For videos (clips, movies, etc.), a CDN is a good solution; here is the wiki link for content delivery network.
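Here is a minimal cache-aside sketch with Spring Data Redis; the key prefix, TTL, and the downstream ItemDetailsClient are illustrative assumptions.

```java
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.stereotype.Service;
import java.time.Duration;

@Service
public class ItemDetailsCache {

    private final StringRedisTemplate redis;
    private final ItemDetailsClient client; // hypothetical downstream call / business logic

    public ItemDetailsCache(StringRedisTemplate redis, ItemDetailsClient client) {
        this.redis = redis;
        this.client = client;
    }

    public String getItemDetails(String itemId) {
        String key = "item-details:" + itemId;
        // Serve from the cache when the same request was answered recently.
        String cached = redis.opsForValue().get(key);
        if (cached != null) {
            return cached;
        }
        // Otherwise compute once and cache the result with an expiry.
        String fresh = client.fetchDetails(itemId);
        redis.opsForValue().set(key, fresh, Duration.ofMinutes(30));
        return fresh;
    }
}

// Hypothetical client interface, shown only to keep the sketch self-contained.
interface ItemDetailsClient {
    String fetchDetails(String itemId);
}
```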

14. Scaling: Vertical scaling (scaling up) and horizontal scaling (scaling out) are the two recommended ways to handle increased load on a microservice.

Vertical scaling refers to increasing the resources (such as memory) of a single service instance. Vertical scaling therefore requires a microservice restart (downtime) and has a hard dependency on the availability of the underlying infrastructure.

Horizontal scaling refers to adding new nodes to serve requests. This can be done within the same host/cloud pool or across different hosts/cloud pools. Within the same host/cloud pool, an auto-scaler is a very common service available from most cloud providers; we can configure auto-scaling based on HTTP throughput, memory usage, etc.

Across different hosts, a load balancer is required to route traffic to microservices running on multiple nodes/cloud pools. A load balancer can be purely geographic, round-robin, or more customized. Load balancing is a vast topic, and I will cover it in future blogs.

API Gateways, Rate Limiters, and Proxies:

15. Rate limiters: An API gateway or an in-house API rate limiter protects APIs from overuse and increases the availability of the microservices. A load balancer also helps with throttling, i.e., capping the number of requests that hit a service at any point in time. I would suggest enabling auto-scaling along with multi-node deployments, and then adding a load balancer to distribute the requests. A simple token-bucket sketch is shown below.
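This is a minimal, self-contained sketch of the token-bucket idea behind most rate limiters; the capacity and refill rate are illustrative, and in practice a gateway feature or a library such as Bucket4j would usually be used instead.

```java
public class TokenBucketRateLimiter {

    private final long capacity;          // maximum burst size
    private final double refillPerMilli;  // tokens added per millisecond
    private double tokens;
    private long lastRefillAt;

    public TokenBucketRateLimiter(long capacity, double refillPerSecond) {
        this.capacity = capacity;
        this.refillPerMilli = refillPerSecond / 1000.0;
        this.tokens = capacity;
        this.lastRefillAt = System.currentTimeMillis();
    }

    // Returns true if the request may proceed, false if it should be throttled (e.g. HTTP 429).
    public synchronized boolean tryAcquire() {
        long now = System.currentTimeMillis();
        // Refill tokens in proportion to the elapsed time, capped at the bucket capacity.
        tokens = Math.min(capacity, tokens + (now - lastRefillAt) * refillPerMilli);
        lastRefillAt = now;
        if (tokens >= 1.0) {
            tokens -= 1.0;
            return true;
        }
        return false;
    }
}
```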

Conclusion

I believe that good development is a habit, not a one-time task. If we keep some best practices and recommendations in mind while developing any code, whether it is a microservice or a simple program, we will end up writing efficient, high-performance code.

Thanks for reading. If you liked the story, please clap, share, and follow for more such content. As always, please reach out with any questions/feedback.

LinkedIn: saurabh-gupta-engr
GitHub: SaurabhGupta-repo
