Microservices at Property Finder

At Property Finder, we put a lot of effort into bringing people together from different squads across the company around important mutual topics. As a lead backend architect, I always feel rather compelled to engage others and bring forward change from within the company. Therefore, I recently held a presentation with the goal of introducing everyone to the topic of microservices at Property Finder: the history and the evolution curve, the role of microservices and the plans we have for the future.

The case for services-oriented architecture

To start, let’s take a trip back in time…

Property Finder has been around for over 15 years and is the leading digital real estate platform in the UAE and Middle East. We operate in 7 countries and are constantly expanding that reach using a technology stack that has evolved numerous times. Codebases have been rewritten in full, frameworks have been swapped out, data stores have been upgraded — and the list goes on!

Naturally, our main monolith app kept growing in features, including both backend and frontend code, with several different frameworks on the frontend side, different types of tests and so on. As the business was growing, the domain and responsibility of the codebase grew as well, and it expanded more and more.

Over time, a clear need to split the codebase into smaller, independent, detachable services emerged.

However, sensing that something should be done does not make a clear case for doing it. To verify the need, we pulled together all of the prerequisites for our microservices journey, and we also wanted to make sure our team structure would enable us to adopt a different development process.

The size of our product and tech team is already in the three digits. Our tech team is split into different teams, which we call squads (loosely following the Spotify squad model, with some natural adjustments that evolved internally). During our discovery phase, we confirmed that our team structure was well established: squads are self-organized and take responsibility and ownership. We realized that the shape of a microservices landscape would be a good reflection of our team structure. This also matches Conway's Law: "organizations that design systems are constrained to produce designs which are copies of the communication structures of these organizations". It therefore seemed natural to move ahead with the migration with our current team structure in mind.

Don’t follow blindly

To ensure that we were not blindly following an established pattern, and that we had a solid case for moving to a microservices architecture, we reviewed several important topics.

We were aware of all the FOMO and excitement around microservices, which are actually not a new thing (service-oriented architecture has been well-known since the 2000s). We also knew that several companies around the world have moved their development strategies towards microservices for little more than the sake of it. We didn't want to be one of those companies. Therefore, it was agreed that the move to a microservices architecture would not be mandatory, would not happen at an organization-wide level, and would be done only if, and when, it made sense.

The ground rules were established: the service should have clear boundaries, properly-scoped responsibility, and a squad assigned to it from development through monitoring to deployment. In return, the benefits were clear:

  • the service would focus on one task only (as in the Unix philosophy: do one thing and do it well).
  • deployment time would be reduced from an hour to mere minutes, as the codebase would be minimal compared to a big monolith and the continuous integration pipeline would run much faster.
  • it would present a chance to develop in a new programming language.
  • it would allow us to focus on performance more deeply, as we were unable to make any big gains with a PHP 7 monolith anymore.
  • it would allow us to rethink our monitoring strategies, and would allow us to introduce enhanced service observability, tracing and profiling.
  • it would benefit from better scalability, especially as we moved to containerized workloads in production.

The move to microservices allowed us to scale services individually, whereas previously we could only scale the whole codebase. Out of the many independent services we have, we focused on the one that made the most sense and had a narrow scope of responsibility. This service is responsible for parsing thousands of our clients' XML files containing their entire property inventory and updating our database accordingly in bulk. It has only a few JSON API endpoints for on-demand imports, a bulk import run periodically via cron, and a couple of RabbitMQ consumers.

The case for Go

The Go gopher is an iconic mascot and one of the most distinctive features of the Go project. People love it so much that you can even create your own custom gopher.

Ultimately, we used the chance to rewrite an existing feature in a language entirely new to us. We decided to go with Go. Created at Google 10 years ago as an alternative to C++, it won our sympathies and felt like a strong choice. Of all its positives, the following stand out:

  • clean syntax: Go has fewer than 30 reserved keywords, and they're all you need to know. Picking up the syntax was rather easy for people coming from Java, PHP, and Python backgrounds.
  • statically typed: with the advent of type hints and return types and increased strictness in PHP 7, we were more or less already used to declaring types everywhere. In Go, this allowed us to fully escape from the dreaded array type-hint or no type-hints at all that the monolith app inevitably contained.
  • powerful standard library: even though all of us were happy Symfony users, striving to write SOLID, clean code without coupling ourselves to the framework too much (besides Doctrine everywhere…), we were happy to decide not to use any framework at all and to rewrite the new service with the tools Go offers out of the box: HTTP client & server, I/O utilities, RPC, database access, JSON, templates, testing, benchmarking, and so on.
  • fast compilation and fast execution: even though APIs written in PHP 7 with strict OPcache configuration can go a long way, on average we gained 10x faster execution time without the long compilation times of other languages such as Java…our app is ready to go in a few seconds!
  • concurrency primitives make squeezing out performance simple and fun again: we tried not to write Go code with PHP in mind, but rather to rethink our approach, since we can do better now: I/O pipelining, worker pools, and so on. Our tests run with the native race detector turned on.
  • small, portable binaries: as we’re using Docker images we were glad to be able to create a production image that’s less than 20 MB in total and contains only our binary on top of Alpine Linux, and includes ca-certificates and our config files. That’s all!
  • testing, benchmarking and code coverage tools out of the box: we’re keeping things simple here: writing table-driven tests and using testify/assert for assertions.
  • easy dependency management with Go modules: originally we used godep, but nowadays things are smooth with Go modules and day-to-day minor updates are done in a snap. Unlike in PHP, we can even have two different versions of the same package.
  • composition instead of inheritance: the code that we write cannot get into inheritance hell and it’s very straightforward to embed structs.
  • profiling & live tracing in production: in PHP we were already big fans of Blackfire, and I even gave a presentation about it at the ViennaPHP meetup last year. We were happy to continue production performance profiling with zero overhead in Go.

Deployment strategy

After our CI pipeline runs our unit and integration tests, lints the codebase, generates Docker images, and pushes them to AWS Elastic Container Registry, our feature branches can be merged to master. From that point on, rolling out a new version to production is as simple as switching the desired image version, and takes anywhere from seconds to a minute, depending on how long it takes to drain the existing open connections from the load balancer.

Post-deployment: the three pillars of observability

Logs, metrics, and traces are often known as the three pillars of observability. While plainly having access to logs, metrics, and traces doesn’t necessarily make systems more observable, these are powerful tools that, if understood well, can unlock the ability to build better systems. (Distributed Systems Observability, by Cindy Sridharan)

After a deployment is done, we continue observing our service and its communication in a distributed world using Jaeger, checking our metrics in New Relic and Grafana, and examining our logs in Kibana. The switch to a small microservice allowed us to iterate faster and ship numerous deployments per day. By running our production services on EKS we can easily roll back to a previous version in case of performance degradation or for any other reason.

In this regard, we try to make our logs highly contextualised by tagging HTTP requests (or RabbitMQ messages) with request IDs, container names and hostnames, image versions, and so on, and we get the bigger picture through Jaeger, especially when calls to other services are involved. We measure and monitor our database queries, HTTP calls, and the most important transaction segments (XML parsing, S3 uploads, etc.). As the old saying goes: if you can't measure it, you can't improve it.

Conclusion

At Property Finder, we needed more flexibility to shorten iteration cycles and deliver a scalable, high-performing service of critical business value. Migrating a service with very well-defined boundaries was a successful project for us, and we used it as a learning experience for our first microservice written in Go. After extended periods of stability and continuous feature development and improvement, we started moving other chunks out of our big monolith app as well.

We realise Go is not always the right tool, and that’s fine — we don’t push it and we choose the right tool for the job. We use different languages across our company and the knowledge we gained and concepts we experienced in Go made a big impact throughout our tech team.

Two years later, we can say that Go has met all of our goals and it continues to be an important part of our constant effort to deliver the best and fastest property search experience.

If you liked this article and would like to be part of the brilliant team of engineers that produced it, why not have a look at our latest vacancies here.

Emir Beganović
Property Finder Engineering and Tech Blog

Architecting service-oriented server software and distributed cloud-native systems at scale.