How we created OneView for Deutsche Telekom’s OneApp

Nikhil Gupta
Deutsche Telekom Digital Labs
4 min read · Nov 4, 2019

Our Background

It all started at the end of 2017, when Deutsche Telekom (DT) decided to start its own development centre in Gurgaon, India. The idea was to incubate an in-house development team instead of outsourcing, as most telecom giants do. DT's determination led to the launch of the Gurgaon Development Center (GDC) in March 2018.

OneApp: Our first Project

One of GDC's initial projects was to take over DT's self-care app, OneApp. DT Europe is divided into 10 national companies (NatCos), each of which historically managed its own self-care app. In a bold move, DT decided to drive the app centrally instead of letting 10 different product teams do it. The result was OneApp: a single app codebase catering to 10 different NatCos. Our aim was to develop the app centrally but let the NatCos drive business operations, so it was only logical to back the app with a robust Content Management System (CMS).

We started developing user stories for the app and the BFF (Backend for Frontend), and every one of them was configurable via the CMS, which gave NatCos enough flexibility to switch features on and off based on their local business requirements. Since the BSS/OSS stack of each NatCo was different, the BFF had to be deployed locally at each NatCo. This is what the final OneApp architecture looked like:

OneApp Architecture
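To make the CMS-driven toggles mentioned above concrete, here is a minimal Kotlin sketch of how the app might switch features on or off per NatCo. The names (NatcoConfig, FeatureToggles, the feature keys) are illustrative assumptions, not the actual OneApp API.

```kotlin
// Hypothetical sketch of CMS-driven feature toggles; names are illustrative, not the real OneApp API.
data class NatcoConfig(val natco: String, val features: Map<String, Boolean>)

class FeatureToggles(private val config: NatcoConfig) {
    // A feature is active only if the NatCo has switched it on in the CMS.
    fun isEnabled(feature: String): Boolean = config.features[feature] ?: false
}

fun main() {
    // In the real app this configuration would be fetched from the CMS at startup.
    val config = NatcoConfig(
        natco = "NatCo-A",
        features = mapOf("billPayment" to true, "roamingPacks" to false)
    )
    val toggles = FeatureToggles(config)
    println(toggles.isEnabled("billPayment"))   // true
    println(toggles.isEnabled("roamingPacks"))  // false
    println(toggles.isEnabled("eSimOrdering"))  // false (not configured -> off)
}
```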

Operational nightmare

Each locally deployed BFF dumps its logs (app-BFF-local stack API communication) into Elasticsearch, and the data can be visualised using Kibana dashboards. This enabled the development team to debug production issues, the Ops team to monitor API health checks, and the Product team to track KPIs such as payment and login success rates.
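As an illustration of the kind of document each BFF might index, here is a hedged sketch of one log entry. The field names are assumptions made for this post, not the actual OneApp schema.

```kotlin
// Hypothetical shape of a single BFF log document as it might be indexed into Elasticsearch.
data class ApiLogEntry(
    val natco: String,            // which NatCo's BFF produced the entry
    val endpoint: String,         // BFF endpoint called by the app
    val downstreamSystem: String, // local BSS/OSS system the BFF talked to
    val statusCode: Int,
    val latencyMs: Long,
    val timestamp: String         // ISO-8601, so Kibana can build time-series dashboards
)

fun main() {
    val entry = ApiLogEntry(
        natco = "NatCo-A",
        endpoint = "/billing/summary",
        downstreamSystem = "local-bss",
        statusCode = 200,
        latencyMs = 142,
        timestamp = "2019-11-04T10:15:30Z"
    )
    println(entry) // in production this document would be indexed into Elasticsearch
}
```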

But due to our architecture, what we ended up with was a separate Kibana dashboard per NatCo. In the absence of a central data lake, these teams had to manage multiple Kibana dashboards, and there was no single dashboard where the central team could easily visualise and compare app traffic across NatCos.

Central data lake

A single Kibana dashboard needed a central data lake, and the most obvious way to fill it was to push data from the BFF. But that would have required each NatCo's local ops team to enable outbound internet access. This was the moment that made me agree with one of Trump's famous quotes:

“Sounds good, doesn’t work”

Donald Trump.

We realised that getting this done at the NatCo level would mean an eternity spent on security and other approvals, and since this was a burning issue for us, we couldn't afford to let it stall.

OneApp to the rescue

During our quest for a central data lake, we realised that all the data we needed was already being produced in OneApp itself: the app could easily log its API interactions with the BFF into Elasticsearch, which could then be used to build the different dashboards our teams needed.

But instead of traditional HTTP, we decided to go with MQTT, which gave us near real-time logging as opposed to batched logging over HTTP. A minimal sketch of what such a publish from the app might look like is below.
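The sketch assumes the Eclipse Paho MQTT client; the broker URL, topic layout, and payload shape are placeholders, not the actual OneView setup.

```kotlin
// Minimal sketch of publishing an API-log event over MQTT, assuming the Eclipse Paho client.
import org.eclipse.paho.client.mqttv3.MqttClient
import org.eclipse.paho.client.mqttv3.MqttMessage
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence

fun publishApiLog(brokerUrl: String, natco: String, payloadJson: String) {
    val client = MqttClient(brokerUrl, "oneapp-logger", MemoryPersistence())
    client.connect()

    val message = MqttMessage(payloadJson.toByteArray()).apply {
        qos = 1 // at-least-once delivery: log events should not silently disappear
    }
    // One topic per NatCo keeps the central pipeline able to tell traffic apart.
    client.publish("oneview/logs/$natco", message)
    client.disconnect()
}

fun main() {
    // Broker URL and payload are placeholders for illustration only.
    publishApiLog(
        brokerUrl = "tcp://broker.example.com:1883",
        natco = "natco-a",
        payloadJson = """{"endpoint":"/billing/summary","statusCode":200,"latencyMs":142}"""
    )
}
```

In a real app the connection would be kept open and events queued, rather than connecting per publish as this sketch does.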

But there was one issue: GDPR compliance. Since this was a central data lake, any data stored in it had to be GDPR compliant, and the app could not afford to log the wrong data, since correcting that would mean another app release.

While solving this problem, we also realised that the definition of sensitive data changes from NatCo to NatCo. A classic example was a parameter named profileId: for a few NatCos it was just a random number mapped to the local customer profile, but for others it could be an email ID.

Sidekick CMS

Given the nature of the data, we decided it would only make sense for the local IT team to define which parameters in an API are sensitive and need special attention or encapsulation.

To solve this, we came up with a CMS module where a NatCo uploads the App-BFF API Swagger definition, and a NatCo admin can then mark every parameter as Plain text, Encrypted, Hash, or Skip. This configuration is synced with OneApp on the fly. So, in our earlier example, NatCo A can mark profileId as plain text while NatCo B marks it as a hash. A minimal sketch of how the app might apply such a policy follows.
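The policy values below mirror the CMS options described above; everything else (function names, the example payload, the stand-in encryption) is an assumption made for illustration.

```kotlin
// Minimal sketch of applying the per-NatCo field policy before an event leaves the app.
import java.security.MessageDigest

enum class FieldPolicy { PLAIN_TEXT, ENCRYPTED, HASH, SKIP }

fun sha256(value: String): String =
    MessageDigest.getInstance("SHA-256")
        .digest(value.toByteArray())
        .joinToString("") { "%02x".format(it) }

// `encrypt` is a stand-in for whatever encryption the app would actually use.
fun sanitize(
    payload: Map<String, String>,
    policy: Map<String, FieldPolicy>,
    encrypt: (String) -> String
): Map<String, String> =
    payload.mapNotNull { (key, value) ->
        when (policy[key] ?: FieldPolicy.SKIP) { // fields with no policy are dropped by default
            FieldPolicy.PLAIN_TEXT -> key to value
            FieldPolicy.ENCRYPTED  -> key to encrypt(value)
            FieldPolicy.HASH       -> key to sha256(value)
            FieldPolicy.SKIP       -> null
        }
    }.toMap()

fun main() {
    // NatCo B treats profileId as sensitive, so its policy hashes it; NatCo A could keep it plain.
    val natcoBPolicy = mapOf("profileId" to FieldPolicy.HASH, "tariff" to FieldPolicy.PLAIN_TEXT)
    val payload = mapOf("profileId" to "user@example.com", "tariff" to "Tariff M", "msisdn" to "+49123456789")
    println(sanitize(payload, natcoBPolicy) { "<encrypted>" })
    // -> {profileId=<sha-256 hex>, tariff=Tariff M}; msisdn has no policy entry and is skipped
}
```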

This is what our final architecture looked like:

OneView Architecture

The world is not enough

So after development was complete, our developers were able to debug production issues, the Ops team was able to monitor API health checks, and the Product team was able to track different business KPIs. But then we realised that what we had created was more than this, and that we could harness much more from this data lake.

If you look at step 2 of the diagram above, OneApp is sending more than just API responses. We will discuss that in a follow-up post.
