The most complete AIOps platform for critical telecom networks (Part 2):

AI in Telcos: the hallmarks of a suitable AI approach

This is the second instalment of a series of blog posts navigating the rapidly changing world of critical telecom networks and offering insights into the future of the market. The series aims to contextualise the requirements of technological solutions for highly complex, data-driven operational environments, and to highlight OPT/NET’s very own flagship AIOps platform, OptOSS AI: the most complete real-time AIOps monitoring & analysis platform for critical infrastructure networks.

This series was originally posted on LinkedIn.
Make sure to follow us there to stay up to date and learn more!

Full BCG report — Transforming Telcos with Artificial Intelligence

A changing landscape

The amount of customer data traffic that Telcos face is growing exponentially. The enormous variety, velocity, and volume of customer data, together with the associated data from the network elements and systems controlling those flows, now far exceed the capabilities of the human operators tasked with processing it, analysing it, and making informed decisions. The usage landscape is also changing rapidly: customers are abandoning fixed landlines in favour of mobile connections, and interactive data usage now far outstrips voice traffic. Soon, non-interactive (i.e. machine-to-machine) data will surpass even that by many orders of magnitude.

The emerging reality for Telcos is that their networks must be optimised for data traffic rather than voice traffic, in terms of both infrastructure and monitoring. This optimisation is taking place as an ongoing transition to a more future-proof Telco model. As discussed in our previous article, this new model incorporates concepts such as software-defined self-healing networks, zero-touch digital service delivery, closed-loop automation, and a data core & insights engine.

This data core & insights engine would provide a unified, real-time overview of customers and network elements, and would serve as the backbone guiding all of the other elements of the transition, in particular closed-loop automation and zero-touch service delivery.

Enter AI…

The telecommunications industry is looking for Artificial Intelligence (AI) and Machine Learning (ML) systems capable of processing and analysing the ever-increasing deluge of information, spanning both customer traffic and the telemetry from traditional and rapidly multiplying virtualised network infrastructures. Given the sheer amount of data, producing a unified, insightful overview that is scalable, reliable and functional in real time is already a genuine challenge.

There are several key challenges in this environment, which form the basis for a set of requirements for a truly suitable AI approach for the future of Telcos.

Primarily, the challenges are as follows:

  1. Data Acquisition & Sifting — How does one efficiently sift through the sheer ocean of available data and home in on what really matters? There is obviously tremendous business value in these oceans of data, but the common practice of simply dumping all of it for later analysis has led to the appearance of massive so-called “data swamps”, where data comes to rot. This is a real problem in its own right, with escalating storage costs to boot.
  2. Speed of Acquisition/Processing — How long can one afford to wait, in the context of making data-driven decisions? For example, detecting a potential severe outage in the making or a potential distributed denial of service attack requires a timely detection & mitigation protocol.
  3. Intelligent Responses — How can a person’s domain-specific intelligence and expertise be infused into an AI/ML system, such that it “knows” the right course of action when it detects anomalies in the data? The requirement goes beyond mere detection, into identification and remediation. Remember, all of this also has to happen in near real time to be effective, rather than as an afterthought during a post-mortem inspection!
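To make the second challenge concrete, here is a minimal, purely illustrative sketch (not OptOSS AI’s actual method) of flagging anomalies in a streaming metric, such as an error rate per second, using a rolling z-score over a fixed window:

```python
from collections import deque
import math

def rolling_zscore_anomalies(stream, window=50, threshold=3.0):
    """Flag values that deviate sharply from a rolling window of history."""
    history = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(stream):
        if len(history) >= 10:  # wait for a minimal baseline
            mean = sum(history) / len(history)
            var = sum((x - mean) ** 2 for x in history) / len(history)
            std = math.sqrt(var)
            deviation = abs(value - mean)
            # Flag any jump on a flat baseline, or a > threshold-sigma outlier.
            if (std == 0 and deviation > 0) or (std > 0 and deviation / std > threshold):
                anomalies.append((i, value))
        history.append(value)
    return anomalies

# A flat stream with one sudden spike (e.g. an error-rate burst):
stream = [10.0] * 100
stream[60] = 500.0
print(rolling_zscore_anomalies(stream))  # → [(60, 500.0)]
```

Because the detector only keeps a bounded window, it runs in constant memory per metric, which is what makes processing the stream as it arrives feasible, rather than deferring everything to a post-mortem batch job.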

With these challenges in mind, we have the necessary context to extrapolate the functional requirements and/or hallmarks of a suitable AI approach.

Hallmarks of a suitable AI approach:

  1. Data agnosticism:
    The difficulty of dealing with an ever-increasing volume of data points is further compounded by the fact that many mission-critical processes and decisions rely on a variety of data sources and formats. An effective AI solution needs to be generic, meaning it can process a broad swathe of both structured and unstructured formats from different sources & devices. Although the ability to generalise is important, there are limits to what is possible due to the myriad intricate specifics of the various data protocols. Getting it right from the beginning is the most challenging step and requires true industry expertise; getting it wrong will doom an entire approach from the start.
  2. Real-time:
    Mission-critical environments require real-time insights in order to make the right decisions before things spiral out of control. Supporting complex operations over billions of events in real time is simply beyond the scope of most existing AI approaches, as most established industry players readily acknowledge.
  3. Scalability & Reliability:
    The AI approach should culminate in a platform/service that is highly performant and resilient even under the demanding (and ever-increasing) load of thousands of data sources. Rather than resorting to the “noise reduction” method, which effectively blindly filters out troves of potentially valuable data, it should have the capacity to process and parse it in its entirety. Typical noise reduction methods are akin to vacuum-cleaning a crime scene before letting the detectives in!
  4. Continuous learning:
    There should be a continuous feedback loop between the AI system and its human operators, such that the insights derived from the experts’ knowledge & skills are learned over time. This enables potential remediation strategies to be enacted with the appropriate context and understanding, but at a much larger scale and with continuous availability.
  5. Convenience:
    Many typical AI & ML approaches rely heavily on the availability of massive, correctly labelled and validated training datasets. Unfortunately, this means that any attempt to integrate these approaches involves a tedious and expensive process that disrupts existing workflows, and it often requires data science expertise and a willingness to share such data. That is not always feasible, and can be outright impossible under active data privacy laws (e.g. GDPR in the EU and HIPAA in the US). To be practical, a suitable approach should slide seamlessly into existing workflows: in the immediate term acting purely as a supplementary ‘tool’ in the operator’s arsenal, then growing organically in capability over days or weeks of usage.
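The first hallmark, data agnosticism, can be illustrated with a small, hypothetical normalisation layer. The sketch below maps three invented record shapes (a syslog-style line, a JSON log line, and an already-structured SNMP-like dict) onto one unified event; real telecom feeds are far richer, and every field name here is made up for illustration:

```python
import json
import re
from datetime import datetime, timezone

# Hypothetical syslog-style pattern: "<priority>host message".
SYSLOG_RE = re.compile(r"^<(?P<pri>\d+)>(?P<host>\S+) (?P<msg>.*)$")

def normalise(raw):
    """Map a raw record of unknown format to one unified event dict."""
    event = {"received": datetime.now(timezone.utc).isoformat()}
    if isinstance(raw, dict):                    # already structured (e.g. an SNMP trap)
        event.update(source=raw.get("agent"), message=str(raw.get("trap")))
    elif raw.lstrip().startswith("{"):           # JSON log line
        data = json.loads(raw)
        event.update(source=data.get("host"), message=data.get("message"))
    elif (m := SYSLOG_RE.match(raw)):            # syslog-style line
        event.update(source=m.group("host"), message=m.group("msg"))
    else:                                        # unknown format: keep the raw text
        event.update(source=None, message=raw)
    return event

print(normalise("<34>router1 Link flap on Gi0/1")["source"])  # → router1
print(normalise('{"host": "fw2", "message": "deny tcp"}')["source"])  # → fw2
```

The point of the sketch is the fall-through branch at the end: an unrecognised record is retained with its raw text rather than discarded, so no data is silently filtered out on ingestion.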

Enter OptOSS AI:

The most complete AIOps platform for critical networks

The OptOSS AI platform is the culmination of OPT/NET B.V.’s 50+ years of combined experience in the telecom industry. The tooling was developed and tested by network engineers and AI scientists, for network engineers and operators.

Our team has spent decades solving the unsolvable: untangling the most complex network issues in environments where failure simply isn’t an option. Our team initially built OptOSS AI as a supplementary tool to provide a global overview of complex networks in the scope of our consultancy services, and assisted in diagnosing root-cause issues for customers worldwide.

Over time, and with advances in the field of AI beckoning, we saw the potential to massively scale our own expertise by infusing the tool with our expansive domain knowledge. At that point it became much more than a simple tool: it became a massively scalable extension of our expertise that could always be called upon, never tired, and never missed anything of importance in the oceans of data.

With OptOSS AI, our consultants could perform very complex technical audits and ‘post-mortem’ network examinations in a fraction of the usual time and effort, solving long-standing enigmatic incidents and finding the true root causes of costly service disruptions.

Up Next
Stay tuned for the next edition of this blog series, where we dive deeper into the history and current capabilities of OptOSS AI — the most complete AIOps platform for critical networks — and illustrate how it handily meets the needs of the Telcos of the future.

At OPT/NET B.V., we focus on enabling our partners & users to rapidly deploy real-time AI solutions in mission-critical environments. Mission-critical can be defined as “crucial for the continued functioning of an organisation”. From telecom networks to disaster zones, from farmlands to the open ocean. In a world of increasing technological connectivity, maintaining the resilience and robustness of these environments becomes increasingly relevant (not to mention complex).

Follow us on LinkedIn to stay up to date & learn more about how exactly OptOSS AI helps to facilitate the transition to a more agile, digital-first type of organisation, and navigate an increasingly disruptive environment.
