Extending the Brain of the Smartest Two-Wheeler in the World

Karlo Novak
Greyp
Oct 3, 2019 · 7 min read

At Greyp, we build award-winning electric bikes, but we still like to say we are not a bike company. We are more than that — we are a high-tech mobility company building out a connected mobility platform. That is why, this month, we launched a digitalized equity offering for retail investors to own a piece of Greyp and help us power beyond bikes.

In my humble opinion, it is our tech that makes our bikes so cool and helps us understand how to shape the future of connected mobility. So I wanted to take a deep tech dive and share some of the fun work we have been doing on the back end. I am a Greyp Backend Developer — and yes, I do have fun.

Specifically, this article covers the technological aspects of our cloud implementation that make our latest G6 bike a perfect blend of high-tech and two wheels.

On March 15th, Greyp Bikes launched a new all-electric, always-connected smart bike — the G6. And while everyone was enjoying the launch show, with a demo of remote-control capabilities, live streaming and competitive gaming, our software team was behind the curtain monitoring servers, memory, CPU, threads and all of the “invisible” stuff that makes the bike so cool. The bike itself is loaded with all kinds of technology and is powered by a Linux brain. But its Linux brothers, living in the cloud, are also helping it all the time, making it what it really is — the smartest bike on the planet.

Illustration 1: Ladies and gents… The G6!

1. The rise of IoT

Lately, we’ve been witnessing tons of different devices getting connected to the Internet, leveraging its power and global reach. The biking industry is no different, and that’s how the G6 was born. The Greyp G6 is an always-connected electric bike, meaning that at every moment it maintains a connection to the servers running in the cloud — “the backend”. Connectivity is provided by T-Mobile through an e-SIM card built into the bike, which opens up a whole new range of experiences for riders.

Illustration 2: https://europe.hughes.com/sites/europe.hughes.com/files/IOT.png

2. The platform

To create a platform for an innovative product, many things have to be taken into consideration. A big decision is choosing a cloud platform (PaaS) to host and run all the services. After many days of brainstorming, architecting and drawing cloud schemes of the system, we decided to go with Amazon Web Services (AWS). This was mainly because of the in-house knowledge we already had on the subject, but also because of its popularity, vast adoption among developers around the world and its Java/JVM friendliness (yes, we’re heavy on JVM usage!).

AWS includes a nice IoT MQTT broker — perfect for lightweight communication with a device, in this case a bike. But there was also the other side of the coin to consider — communication with the cellphone application. One of the options was to create a custom implementation using WebSockets, for instance. But we were looking for something to make our lives easier and did not want to reinvent the wheel. So we decided to go with Google’s Firebase platform, which is widely adopted among mobile developers.

The system architecture has undergone many improvements and changes since the development started. This is approximately how it looks right now:

Illustration 3: Back-end tech stack

We’ll briefly cover each of the technologies used and explain how and why each was needed to build our system.

3. Data flows

As already mentioned, there are two entry points into the system. Each is implemented separately, with different technologies chosen per use case, and each comes with its own security and scaling concerns.

3.1 Bike to cellphone

The first thing to consider when creating an entry point to a system is which communication protocol to use. The de facto protocol of IoT is MQTT because of its lightweight nature, and that’s what we went with. Without going into too much detail comparing other options, MQTT has proven to be the right choice. It copes really well with the bike’s limited bandwidth and potentially unreliable connections (Mt. Everest, anyone?), and it is easy on the battery, whose long-lasting capacity is a high priority for us at Greyp.
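To make the MQTT leg a bit more concrete, here is a minimal sketch of what publishing a small telemetry payload could look like, written with the Eclipse Paho Java client. The endpoint, client id, topic layout and payload bytes are all made up for illustration (and a real AWS IoT connection also needs X.509 device certificates for mutual TLS), so treat this as a sketch of the protocol usage, not our actual bike firmware.

```java
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class TelemetryPublisher {

    public static void main(String[] args) throws MqttException {
        // Hypothetical AWS IoT endpoint and client id.
        MqttClient client = new MqttClient(
                "ssl://example-ats.iot.eu-west-1.amazonaws.com:8883", "bike-0001");

        MqttConnectOptions options = new MqttConnectOptions();
        options.setCleanSession(true);
        // In reality the TLS socket factory with the device certificates goes here.
        client.connect(options);

        // A tiny, made-up binary payload, e.g. the current speed as encoded by the firmware.
        byte[] payload = new byte[] {0x01, 0x19};

        MqttMessage message = new MqttMessage(payload);
        message.setQos(1); // "at least once" delivery tolerates flaky connectivity

        // Hypothetical topic layout: one telemetry topic per bike.
        client.publish("greyp/bikes/bike-0001/telemetry", message);
        client.disconnect();
    }
}
```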

The MQTT broker we use is provided by AWS IoT and is used for high-throughput ingestion of all the data coming from the bike. Inside the AWS network resides a Lambda function which reads the binary data provided by the MQTT broker, parses it and delivers it to Apache Kafka for further processing.
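As a rough, hedged illustration of that hop, the handler below takes a payload handed over by an assumed IoT rule and forwards it to a Kafka topic. The event shape, topic name and broker address are assumptions; the real rule configuration and binary parsing are specific to our system.

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Base64;
import java.util.Map;
import java.util.Properties;

public class TelemetryIngestHandler implements RequestHandler<Map<String, Object>, Void> {

    // Reused across invocations served by the same Lambda container.
    private static final KafkaProducer<String, byte[]> producer = createProducer();

    private static KafkaProducer<String, byte[]> createProducer() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka-1:9092"); // hypothetical broker address
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.ByteArraySerializer");
        return new KafkaProducer<>(props);
    }

    @Override
    public Void handleRequest(Map<String, Object> event, Context context) {
        // Assumed event shape: the IoT rule forwards the bike id and the
        // base64-encoded binary payload. The real shape depends on the rule SQL.
        String bikeId = (String) event.get("bikeId");
        byte[] rawTelemetry = Base64.getDecoder().decode((String) event.get("payload"));

        // Parsing of the binary format would happen here; we just pass the bytes on.
        producer.send(new ProducerRecord<>("bike-telemetry", bikeId, rawTelemetry));
        return null;
    }
}
```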

Apache Kafka is the backbone of our system and all the data must pass through it to reach its final destination. Currently, the core system has a few consumers. The most important ones are the consumer that collects data and passes it to the “cold storage” of InfluxDB, and the one that passes data to a Firebase Realtime Database, making it available to cellphone applications. This is where Apache Kafka really shines — our cold storage (InfluxDB), which saves all the data ever collected from a bike, and Firebase don’t need to receive bike data at the same pace (real-time metrics such as current speed, for instance). Kafka enables us to consume messages at different paces: delivering them in near real time to Firebase (to be shown in the cellphone app) and delivering them eventually to InfluxDB (for data analysis, statistics and monitoring). Pulling the data lets us optimize resource usage for each particular use case. Using Kafka also enables us to scale horizontally with increasing load, as well as to connect many other decoupled consumers that can process incoming data at their own pace and for their own use case (competitive gaming, for instance).
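The “different paces” part boils down to independent Kafka consumer groups reading the same topic, each keeping its own offsets. A minimal, hypothetical sketch of one such consumer (the topic name, group id and sink call are invented for illustration):

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class FirebaseSinkConsumer {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka-1:9092"); // hypothetical broker address
        props.put("group.id", "firebase-sink");         // the InfluxDB sink would use its own group id
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        try (KafkaConsumer<String, byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("bike-telemetry"));
            while (true) {
                // Each consumer group tracks its own offsets, so the Firebase sink and
                // the InfluxDB sink read the very same messages at their own pace.
                ConsumerRecords<String, byte[]> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, byte[]> record : records) {
                    pushToFirebase(record.key(), record.value()); // hypothetical sink call
                }
            }
        }
    }

    private static void pushToFirebase(String bikeId, byte[] telemetry) {
        // In a real service this would write to the Realtime Database (see section 3.2).
    }
}
```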

All of our services (called “GVC”, which stands for Greyp Vehicle Cloud in the scheme above) are mostly implemented using Spring Boot and Java, though other languages are also used. Each build is packaged into a Docker image hosted in an ECR repository and is run and orchestrated by Amazon’s ECS.

Although NoSQL is pretty neat and popular for narrow use cases, Firebase in the role of a data store isn’t able to fulfill all of our needs. That’s why we also use MySQL (on RDS) for special queries (Firebase’s JSON tree doesn’t excel in some cases) and for storing specific data. Another store we use is Amazon’s S3, which keeps all of our bikes’ collected media (videos, photos…) safe.
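For completeness, putting a piece of media into S3 is a short call with the AWS SDK for Java; the bucket and key names below are hypothetical.

```java
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

import java.nio.file.Paths;

public class MediaUploader {

    public static void main(String[] args) {
        try (S3Client s3 = S3Client.create()) {
            PutObjectRequest request = PutObjectRequest.builder()
                    .bucket("greyp-bike-media")                 // hypothetical bucket
                    .key("bike-0001/rides/2019-10-03/clip.mp4") // hypothetical key layout
                    .build();

            // Uploads the file; credentials and region come from the environment.
            s3.putObject(request, RequestBody.fromFile(Paths.get("clip.mp4")));
        }
    }
}
```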

3.2 Cellphone to bike

As mentioned, cellphone-to-bike communication was thrown at Firebase. The Firebase platform is used for authenticating application users and for its Realtime Database part. Realtime Database is actually a combination of two things: one is a database, as its name implies, for permanent data storage, and the other is a mechanism for delivering data to cellphones in real time over its underlying WebSocket connection.
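On the backend we talk to the Realtime Database through the Firebase Admin SDK, and a single write serves both purposes: it persists the value and pushes it to connected cellphones over the WebSocket. A minimal sketch, with the project URL, path layout and fields being assumptions:

```java
import com.google.auth.oauth2.GoogleCredentials;
import com.google.firebase.FirebaseApp;
import com.google.firebase.FirebaseOptions;
import com.google.firebase.database.DatabaseReference;
import com.google.firebase.database.FirebaseDatabase;

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

public class RealtimeDatabaseWriter {

    public static void main(String[] args) throws IOException {
        FirebaseOptions options = FirebaseOptions.builder()
                .setCredentials(GoogleCredentials.getApplicationDefault())
                .setDatabaseUrl("https://example-project.firebaseio.com") // hypothetical project
                .build();
        FirebaseApp.initializeApp(options);

        // Hypothetical path: per-bike live data the cellphone app subscribes to.
        DatabaseReference ref = FirebaseDatabase.getInstance()
                .getReference("bikes/bike-0001/live");

        Map<String, Object> update = new HashMap<>();
        update.put("speedKmh", 25.4);
        update.put("batteryPercent", 87);

        // The connected cellphone app receives this over the underlying WebSocket.
        ref.setValueAsync(update);
    }
}
```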

A perfect use case for this kind of connection is issuing commands to a bike when the devices are not close to each other (no BT/Wi-Fi connection available). For that case we’ve implemented a custom command-processing mechanism which receives cellphone commands via the Realtime Database. That mechanism is also part of our core application services (GVC), whose task is to transform cellphone commands into valid MQTT messages delivered to the bike via the IoT broker.
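Here is a hedged sketch of the idea behind that mechanism: listen for commands appearing under a bike’s node in the Realtime Database and republish them as MQTT messages through AWS IoT. The paths, topic names and command format are invented; the real GVC service naturally also does validation, authorization and acknowledgement handling.

```java
import com.google.firebase.database.ChildEventListener;
import com.google.firebase.database.DataSnapshot;
import com.google.firebase.database.DatabaseError;
import com.google.firebase.database.FirebaseDatabase;
import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.services.iotdataplane.IotDataPlaneClient;
import software.amazon.awssdk.services.iotdataplane.model.PublishRequest;

public class CommandForwarder {

    private final IotDataPlaneClient iot = IotDataPlaneClient.create();

    // Assumes FirebaseApp has already been initialized elsewhere in the service.
    public void listenForCommands(String bikeId) {
        // Hypothetical path where the cellphone app drops commands for this bike.
        FirebaseDatabase.getInstance()
                .getReference("bikes/" + bikeId + "/commands")
                .addChildEventListener(new ChildEventListener() {

                    @Override
                    public void onChildAdded(DataSnapshot snapshot, String previousChildName) {
                        // Assumed command format: a simple string such as "LOCK" or "HONK".
                        String command = snapshot.getValue(String.class);

                        // Republish the command as an MQTT message the bike is subscribed to.
                        iot.publish(PublishRequest.builder()
                                .topic("greyp/bikes/" + bikeId + "/commands")
                                .qos(1)
                                .payload(SdkBytes.fromUtf8String(command))
                                .build());
                    }

                    @Override public void onChildChanged(DataSnapshot snapshot, String prev) { }
                    @Override public void onChildRemoved(DataSnapshot snapshot) { }
                    @Override public void onChildMoved(DataSnapshot snapshot, String prev) { }
                    @Override public void onCancelled(DatabaseError error) { }
                });
    }
}
```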

When a bike receives a command, it does the processing, acts accordingly and returns a response that ends up in Firebase (and so on the cellphone), following the same path explained in the previous bike-to-cellphone data flow.

4. Monitoring

If you’re a back-end developer and like to sleep at night without checking the servers all the time, you have to implement automated monitoring and alarm mechanisms over your system. This is where we come to the final two components of our cloud that we haven’t discussed so far — Amazon’s CloudWatch and jmxtrans.

CloudWatch is Amazon’s monitoring and observability service, which gives you insight into your applications, services and other resources through log and metric collection. AWS-managed resources can be monitored out of the box and are extremely easy to configure. Since our core services are Docker/ECS powered and act as a black box, the JVM that resides inside has to be monitored manually.

Luckily, CloudWatch is very extensible and allows us to collect custom metrics and deliver them to a centralized dashboard. There, they join all of the metrics coming from other Amazon-managed resources. JVM metrics are acquired through a JMX endpoint using a great connector called jmxtrans (also hosted as a Docker container inside ECS).
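In production, jmxtrans scrapes the JMX endpoint and ships the values for us, but the underlying idea is simply CloudWatch’s PutMetricData API. A hedged, programmatic equivalent (the namespace, metric name and dimension are made up):

```java
import software.amazon.awssdk.services.cloudwatch.CloudWatchClient;
import software.amazon.awssdk.services.cloudwatch.model.Dimension;
import software.amazon.awssdk.services.cloudwatch.model.MetricDatum;
import software.amazon.awssdk.services.cloudwatch.model.PutMetricDataRequest;
import software.amazon.awssdk.services.cloudwatch.model.StandardUnit;

public class JvmMetricsReporter {

    public static void main(String[] args) {
        try (CloudWatchClient cloudWatch = CloudWatchClient.create()) {
            // A single JVM heap sample, e.g. as read from the MemoryMXBean.
            MetricDatum heapUsed = MetricDatum.builder()
                    .metricName("JvmHeapUsedBytes")          // hypothetical metric name
                    .unit(StandardUnit.BYTES)
                    .value(256.0 * 1024 * 1024)
                    .dimensions(Dimension.builder()
                            .name("Service")
                            .value("gvc-core")               // hypothetical service name
                            .build())
                    .build();

            cloudWatch.putMetricData(PutMetricDataRequest.builder()
                    .namespace("Greyp/GVC")                  // hypothetical namespace
                    .metricData(heapUsed)
                    .build());
        }
    }
}
```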

OK, so now that we’re monitoring everything, we can create alarms to wake us up if something goes wrong (but it never does, right? :P), and we’re done!

Illustration 4: Good night, sleep tight, thanks to monitoring!

5. The end

And that’s all, folks, for my first deep-dive article on Greyp’s tech. I hope you learned something new, got inspired to build your next IoT project or just had fun getting some insight into the coolest bike’s cloud system. All criticism (praise), comments and questions are very welcome!

Also, visit our official website at greyp.com or join us in building cool and innovative products here.

Stay up to date on Greyp and our digitalized equity offering; the pre-sale starts on Monday, October 14th.
