PickMe: The story of Sri Lanka’s booming tech platform

Shayanthan Kanaganayagham
6 min read · Feb 21, 2019


How we built a modern technology infrastructure from the ground up in Sri Lanka

“Shall we take a PickMe then?”

…is a common phrase many of us in Colombo use when we want to go somewhere, much like saying “Just Google it”.

PickMe is Sri Lanka’s very own ride-hailing platform. With the PickMe Passenger app, you can:

  • Order Taxis — a range of vehicles, from a tuk-tuk to a mini hatchback to a luxury sedan.
  • Order Food — from a large variety of restaurants, delivered right to your doorstep in Colombo.
  • Order Trucks — a light or a heavy truck, on demand, for anyone’s logistical needs.

PickMe’s Driver App, meanwhile, is used by thousands of drivers on a daily basis.

Driven by a desire to address Sri Lanka’s transportation problems with new technology, PickMe launched back in 2015, with its technology built from the ground up. Today, in 2019, PickMe is a multi-cloud platform with the power of more than 5,000 CPUs, handling nearly 1,000,000 events per second across a distributed, event-driven system.

In this post, I’ll share the journey of how we developed our technology from scratch into the burgeoning machine it is today.

Three Years Ago

PickMe started as a simple system to connect drivers to passengers and vice versa, via two mobile apps — one for drivers and one for passengers.

In late 2015, PickMe started life as a monolithic application. Our code base was simple and was more than effective in solving PickMe’s core business problems in the early stages. The fact that we had a single code base was very favorable for quick development and product releases.

PickMe quickly penetrated the market and started gaining traction.

In a short time, PickMe had managed to etch itself comfortably in the Sri Lankan market.

Naturally, the expectations and demand for PickMe started to increase.

Our codebase kept growing steadily, and new features were being introduced. With this growth, we were now discovering the limits of a monolithic application.

The DevOps team started experiencing critical service degradations, and the engineering team began struggling to add new features, fix bugs, and pay down technical debt.

The business team, meanwhile, wasn’t really happy with this rather staggering development. (As a business team would be, usually :P). At the business team’s insistence, we were expected to deliver dramatic growth in the capabilities and capacity of our systems in a limited amount of time. The business wanted stability and growth, and the engineering team had to get on with that mission.

The engineering team was now faced with three questions:

  1. How to preserve the system’s stability and availability?
  2. How to make the system scalable?
  3. How to handle fast-paced product development?

The challenge the team faced was how to resolve these three questions while still rolling out important features.

We quickly assembled a committee — the Engineering Steering Committee. (Yes, that’s a very Sri Lankan committee name :P). With the help of this committee, we were able to communicate effectively across the teams in the department and come to a cohesive understanding of how to proceed into the future.

We were now set for what we named as an “Architecture Revamp”.

Taking on the challenge

The first decision we took was to move away from our monolithic codebase to a microservice architecture. We reckoned it was the right step forward, as we were now aiming for availability, scalability, and the flexibility to add new features. At the same time, we wanted our system to support high concurrency.

A problem we noticed in the old system was that the languages we had used to build it were proving ineffective as demand increased. The technology had been more than adequate to take us through the startup phase, but now we had to amp it up. We wanted our critical front-facing APIs written in a much more versatile programming language.

We decided that “GoLang” by Google was the best language for the job. It performed excellently in our initial testing for modularity and concurrency. It was modern, flexible, and scalable.

With the choice made, we started implementation — a high-concurrency microservice architecture powered by “GoLang” in place of the old monolithic application.
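
To illustrate the concurrency model that won us over, here is a minimal, hypothetical sketch (not our production code) of handling dispatch requests concurrently with goroutines and channels:

```go
package main

import (
	"fmt"
	"sync"
)

// DispatchRequest is a hypothetical stand-in for a ride request
// arriving at one of the front-facing APIs.
type DispatchRequest struct {
	TripID      string
	PassengerID string
}

// handle simulates the work a dispatch worker would do for one request.
func handle(req DispatchRequest) string {
	return fmt.Sprintf("trip %s assigned for passenger %s", req.TripID, req.PassengerID)
}

func main() {
	requests := make(chan DispatchRequest, 100)
	results := make(chan string, 100)

	// A small pool of workers draining the request channel concurrently.
	var wg sync.WaitGroup
	for w := 0; w < 4; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for req := range requests {
				results <- handle(req)
			}
		}()
	}

	// Feed a few sample requests, then close the channel so workers exit.
	for i := 0; i < 10; i++ {
		requests <- DispatchRequest{
			TripID:      fmt.Sprintf("T%03d", i),
			PassengerID: fmt.Sprintf("P%03d", i),
		}
	}
	close(requests)

	wg.Wait()
	close(results)

	for r := range results {
		fmt.Println(r)
	}
}
```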

IoT protocols incoming!

We anticipated that, with heavy amounts of data flowing to our services, the plain HTTP protocol we were using would be too heavy and chatty for remote client communication at that scale.

So, we replaced HTTP with a lightweight protocol built for low-bandwidth situations: MQTT (MQ Telemetry Transport).

With MQTT in place, we observed significant advantages on the client side in terms of resource usage, performance, and speed. Thanks to MQTT, which is essentially an IoT protocol, our application stacks saw a discernible increase in performance.
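
For a feel of what client communication over MQTT looks like, here is a minimal sketch using the open-source Eclipse Paho Go client. The broker address, client ID, and topic names are hypothetical placeholders, not our actual ones:

```go
package main

import (
	"fmt"
	"time"

	mqtt "github.com/eclipse/paho.mqtt.golang"
)

func main() {
	// Hypothetical broker address and client ID, for illustration only.
	opts := mqtt.NewClientOptions().
		AddBroker("tcp://broker.example.com:1883").
		SetClientID("driver-app-demo")

	client := mqtt.NewClient(opts)
	if token := client.Connect(); token.Wait() && token.Error() != nil {
		panic(token.Error())
	}
	defer client.Disconnect(250)

	// Subscribe to trip updates for a hypothetical driver; QoS 1 means
	// at-least-once delivery even over a flaky mobile connection.
	client.Subscribe("drivers/D123/trips", 1, func(_ mqtt.Client, msg mqtt.Message) {
		fmt.Printf("received on %s: %s\n", msg.Topic(), msg.Payload())
	})

	// Publish a small location update as a compact JSON payload.
	client.Publish("drivers/D123/location", 1, false, `{"lat":6.9271,"lng":79.8612}`)

	// Give the broker a moment to deliver messages before we exit.
	time.Sleep(2 * time.Second)
}
```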

150+ microservices with event-driven architecture

Next, we turned our attention to our back-end.

The first thing we did was to introduce Apache Kafka into our core back-end application stack.

Then we benchmarked several message serialization frameworks for our event messages, and we defined proper messaging standards — schemas and versions for every event message.

After that, it was time to implement a centralized schema registry, and to research and benchmark in-memory key-value data stores for use as data stores in our highly scalable microservices.
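
To give a flavour of what “schemas and versions for event messages” means in practice, here is a simplified, hypothetical event envelope in Go. JSON is used purely for readability; the serialization framework, field names, and schema identifiers are illustrative, not the ones we actually settled on after benchmarking:

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// TripEventV1 is a hypothetical, versioned event schema. The schema name and
// version travel with every message so consumers can decode it safely even as
// the schema evolves.
type TripEventV1 struct {
	Schema    string    `json:"schema"`  // e.g. "trip.created"
	Version   int       `json:"version"` // schema version, bumped on change
	TripID    string    `json:"trip_id"`
	DriverID  string    `json:"driver_id"`
	Timestamp time.Time `json:"timestamp"`
}

func main() {
	evt := TripEventV1{
		Schema:    "trip.created",
		Version:   1,
		TripID:    "T001",
		DriverID:  "D123",
		Timestamp: time.Now().UTC(),
	}

	// In production, the payload would be serialized with the chosen
	// framework, validated against a centralized schema registry, and
	// published to Kafka.
	payload, err := json.Marshal(evt)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(payload))
}
```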

With all that work done, we were able to successfully implement a fully-fledged event-driven application architecture. In 6 months, a young tech startup team had transformed into a high-tech engineering team.

Apache Kafka and event sourcing

Apache Kafka is not, obviously, an attack helicopter :P

Rather, it is a distributed system designed for data streams.

Kafka, we observed, had the capacity to facilitate high-throughput, horizontally scalable, fault-tolerant messaging, and it also allowed for geographically distributed data streams.

We noticed numerous advantages in using Apache Kafka in our back-end. Kafka makes our life easy by guaranteeing event ordering for a given partition key within a topic. We started using Kafka streams exclusively to implement event sourcing in our new stream-centric application.
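
As a sketch of how per-key ordering plays out, here is a minimal producer using the open-source segmentio/kafka-go client (one of several Go clients, not necessarily the one we use). The broker address and topic are hypothetical; the point is that keying every event by trip ID sends all of a trip’s events to the same partition, where Kafka guarantees their order:

```go
package main

import (
	"context"
	"fmt"

	kafka "github.com/segmentio/kafka-go"
)

func main() {
	// Hypothetical broker and topic, for illustration only.
	w := &kafka.Writer{
		Addr:     kafka.TCP("localhost:9092"),
		Topic:    "trip-events",
		Balancer: &kafka.Hash{}, // route by key, so one trip -> one partition
	}
	defer w.Close()

	// All three events share the key "T001", so they land on the same
	// partition and consumers see them in exactly this order.
	events := []string{"trip.created", "trip.driver_assigned", "trip.completed"}
	for _, evt := range events {
		err := w.WriteMessages(context.Background(), kafka.Message{
			Key:   []byte("T001"),
			Value: []byte(fmt.Sprintf(`{"type":%q,"trip_id":"T001"}`, evt)),
		})
		if err != nil {
			panic(err)
		}
	}
}
```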

Developing stream-processing APIs in GoLang for our new platform was a major challenge, as Kafka and the Kafka Streams APIs were evolving in the Java world. However, the K-Stream API we developed in GoLang was ready to take up the task.

Another challenge we faced was that most of the microservices needed to aggregate data from multiple sources. For this, we used Kafka streams and materialized views, which added implementation effort and complexity. Despite the intricacies, however, we were able to gain a higher degree of scalability and availability with a minimal compromise on consistency.
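
As a rough sketch of the materialized-view idea (not our actual K-Stream API), here is a hypothetical consumer that folds trip events from a Kafka topic into an in-memory view that a microservice could serve queries from; the broker, topic, and consumer-group names are placeholders:

```go
package main

import (
	"context"
	"encoding/json"
	"fmt"

	kafka "github.com/segmentio/kafka-go"
)

// tripEvent mirrors the hypothetical event payload used in the producer sketch.
type tripEvent struct {
	Type   string `json:"type"`
	TripID string `json:"trip_id"`
}

func main() {
	// Hypothetical brokers, topic, and consumer group, for illustration only.
	r := kafka.NewReader(kafka.ReaderConfig{
		Brokers: []string{"localhost:9092"},
		Topic:   "trip-events",
		GroupID: "trip-view-builder",
	})
	defer r.Close()

	// The "materialized view": latest known state per trip, rebuilt purely by
	// replaying the event stream. A real service would also snapshot this
	// (e.g. into an embedded store such as RocksDB) and expose it over an API.
	view := make(map[string]string)

	for i := 0; i < 100; i++ { // bounded loop so the sketch terminates
		msg, err := r.ReadMessage(context.Background())
		if err != nil {
			break
		}
		var evt tripEvent
		if err := json.Unmarshal(msg.Value, &evt); err != nil {
			continue // skip malformed messages
		}
		view[evt.TripID] = evt.Type
		fmt.Printf("trip %s -> %s\n", evt.TripID, evt.Type)
	}
}
```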

Time to assemble … the league.

In order to sustain and expand the new platform we had built, we assembled a research group.

We gathered some of our highest performing folks from different teams into the group and named it “The Research and Development Team”.

The new research team worked hard day and night, generating new knowledge.

They then rejoined their original teams, carrying the new knowledge with them. Armed with it, we modularized our applications as well as our scrum teams, and accelerated development.

We set up small teams within the core scrum team to focus on GIS, load testing, simulation testing, research and development on RocksDB, K-Stream API development, and finally feature development.

Scrum practices helped us with transparency and visibility across our micro-teams.

Although it was immensely challenging, we managed to align the technology with business needs, our resources with the plan, and our stakeholders around a common vision.

With all this hard work, we’ve been successful in creating a world-class technology platform from scratch, here on Sri Lankan soil.

Now and beyond …

With more than 150 microservices created and running, we now have the capacity to handle operations well beyond our current business scale.

Our focus is now twofold:

  1. Minimising the risk of data inconsistency.
  2. Optimising our microservices for capacity, low latency, and power intensity.

Kafka and GoLang are our weapons of choice for both objectives.

Here at the PickMe Advanced Technology Centre in Nawala, Sri Lanka, we are laying the foundation for a better, leading-edge technology infrastructure in the country.

We invite you to drop by our office and have a chat with our team anytime if you’re interested in computing, distributed event-driven architecture, and business. We are always happy to help.

Writer: Shayanthan Kanaganayagham and Editor: Ashen Monnankulama — Wednesday, February 20, 2019
