How did cyber security become a global issue, and what is to be done about it?

Dan Tofan
Aug 9, 2018 · 10 min read

Fig. 1 — Global Cyber Security

In the last two decades, one term has become very popular by attaching itself to many traditional terms related to war, terrorism and security, expressing the many implications of technological development in our everyday lives. "Cyber security" belongs to a wide family of modern terms inspired by the multidisciplinary science called cybernetics. Norbert Wiener defined "cybernetics" in 1948 as "the scientific study of control and communication in the animal and the machine." You can read more on the etymology of "cyber" here.

Security is a basic human need, along with food and water. As humanity evolved and technology became an integral part of our lives, being technologically secure became a paramount need. Judging by the level of attention the media gives it nowadays, one might infer that being cyber secure is in and of itself an important accomplishment.

There is no universally accepted definition of cyber security, as each authoritative organization offers its own version. Nonetheless, they all point in the same direction. One of the simplest and most straightforward definitions I could find comes from the US National Institute of Standards and Technology: "the ability to protect or defend the use of cyberspace from cyber attacks".

Short history

Cyber security has a few decades of history behind it, over which it has evolved massively. It all started around the 1960s with the famous "phreaking" attacks. One of the most notorious is John Draper's fraudulent access to free calls within the AT&T network using a toy whistle that could generate a tone at the 2600 Hz frequency. Similarly, Kevin Mitnick ("the world's most famous hacker") broke into "the Ark" (a computer system belonging to DEC) in 1979, just to prove to his friends that he could do it.

In the 1980s the FBI investigated a breach at the NCSS company, where an employee compromised the password list just for fun, so he could poke around other computers and see what interesting things he could find. He developed and used a password cracker for the purpose, but no harm was recorded in the end.

By 1983 the FBI made the first hacking-related arrests in the USA: a group called the 414s was arrested for breaking into some 60 computer systems. The hacker profile started taking shape in the press at the time: "young, male, intelligent, highly motivated and energetic". 1984 brought the first hacker publication, "2600: The Hacker Quarterly", edited by Emmanuel Goldstein in New York. In 1986 the US Congress adopted the Computer Fraud and Abuse Act in response to the growing number of attacks against computer systems.

Meanwhile in Europe, the Chaos Computer Club (CCC) was formed in Germany in 1981.

The first computer viruses appeared in the 1970s, but the ones that achieved high impact came later, around 1987 (Cascade, Friday the 13th and Stoned). In 1988 Robert Morris released the Morris worm, which managed to infect about 10% of all ARPANET-connected computers (around 60,000 at the time), blocking the activities of several universities and governmental organizations. Later that year the first Computer Emergency Response Team (now CERT/CC) was established.

The Computer Misuse Act 1990 was adopted in the UK, criminalizing any unauthorized access to computer systems.

Other notable events were scattered across the 1990s, when the Windows operating system became hugely popular and thus the preferred target for hackers. Many Windows-based malware samples were created in that period, and the number of attacks grew exponentially.

After 2000, hacking rose to another level as communities realized its potential. Around 2003 the hacktivist group Anonymous emerged.

In 2007 Estonia was targeted by a massive DDoS attack that took down several banks and governmental institutions for a few days. The year 2010 brought the unveiling of what has been called "the first cyber weapon": Stuxnet, an extremely complex piece of malware able to manipulate industrial control systems. As hacking entered the era of cyber-terrorism and cyber-espionage, we saw further developments such as Operation Aurora and the Sony PlayStation attack.

A decade after the Estonia attack, in 2017, two very large incidents, unprecedented in scale and effects, hit users globally: WannaCry and Petya/NotPetya. If Stuxnet was not a wake-up call, these surely were.

Overall, over the years, computing technology has become faster and cheaper and has found an ever-growing number of real-life applications. The boom in technology adoption across the world has created an inter-subjective virtual reality (what we call "online") that has grown so much that it is now undoubtedly an integral part of our day-to-day lives. Cashless wallets, for example, are very common these days: you only carry a banking card tied to an account where some electronically stored bits stand for your money. Many real-life scenarios and activities have been transposed online because, in theory, they simplify our daily lives. Overall, we live more and more in the digital world.

Fig. 2 — Evolution of Technology

The remaining unbridged gaps

Nevertheless, such rapid growth left behind some unbridged gaps. The building blocks of what we nowadays call the Internet (its protocols) were designed in the 1960s. Large global deployment at low cost could not have happened if those building blocks were redesigned every 10 years. This is why we still rely on legacy infrastructure and protocols designed many decades ago (e.g. TCP/IP, SS7, Diameter). While they provide the basic functionality, they cannot natively sustain newer requirements such as encryption. Patches and workarounds are continuously developed, and they do work for a while, but barely enough to keep up with the Internet's rapid growth. Piling on patches, additional modules, libraries and so on to fix bugs or add features overcomplicates systems, introducing far too many variables and dependencies and making them hard to manage. And this is how vulnerabilities are born.
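A familiar illustration of this retrofitting: TCP carries bytes in the clear, so encryption had to be layered on top of it as TLS rather than built in. A minimal Python sketch of that "wrap the legacy protocol" pattern (the host name passed to the function is whatever server you choose; the function itself is illustrative, not from any particular library):

```python
import socket
import ssl

# TCP itself has no notion of confidentiality; the fix is to upgrade an
# already-open TCP socket to TLS after the fact, a classic patch layered
# over a protocol designed long before modern security requirements.
context = ssl.create_default_context()            # sane defaults: certificate checks on
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse obsolete protocol versions

def open_secure_channel(host: str, port: int = 443) -> ssl.SSLSocket:
    """Open a plain TCP connection, then wrap it in TLS."""
    raw = socket.create_connection((host, port), timeout=5)
    return context.wrap_socket(raw, server_hostname=host)
```

The point is not the code itself but the shape of the solution: security bolted onto the outside of an old protocol, which works, yet adds another layer of configuration (certificate validation, version pinning) that can itself be misconfigured.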

The gaps left behind by this historic IT&C development have allowed a complex cyber security industry to rise. Hackers have evolved from script kiddies to nation states, cyber-espionage outfits and organized crime groups, turning the Internet into a battlefield. The low cost and ease of mounting hacking operations gave everybody hope that they could become rich or important, or even change the world order, overnight.

Fig. 3 — Cyber attacks evolution (Credits Infosec Institute)

Fig. 3 depicts how cyber threats have evolved over time. Their increasing complexity mirrors the similar trend in technology (apps, software packages, development frameworks, connectivity solutions etc.). We can also notice how the overall purpose of threats has shifted from simple malicious code made for fun to highly specialized malware targeting specific activities or industry branches. The shift is not unusual: more granularity was needed to cover the broad range of fields where IT&C was adopted, together with their peculiarities.

The complexity of the cyber security industry

Given these circumstances, there was enough room for a cyber security industry to develop. Probably the first antivirus software was released in 1987 by G Data Software, followed closely by Ultimate Virus Killer (UVK), McAfee and NOD. In 1987 antivirus was the only cyber security solution available, but the range of threats was also narrow. In 2018 the landscape of available security solutions looks a lot more complicated, as you can see in Fig. 4. Choosing a security solution is becoming more of an investment than a simple acquisition.

Fig. 4 — Cyber Security Vendors Landscape

What is nevertheless surprising is that the deeper you go into the subject, the clearer it becomes that there is no 100% security. No matter how much security you buy, there is always a residual risk of being hacked. All that investment might therefore seem in vain in the end.

And speaking of prices, you should know that security is not cheap at all. Gartner predicted that the global cyber security market would reach "$96.3 billion in 2018, an increase of 8 percent from 2017". There are a lot of studies covering this subject, but the reality is that a company's cyber security budget depends on many factors: whether they have had a breach, what types of systems they run, and so on. One study mentions average yearly spending for the UK financial sector of about 17,900 British pounds. That might not seem much, but the study covered companies of all sizes (small, medium and large). I can guarantee you that the cyber security budget of a large enterprise can reach several million (euros, dollars or pounds).

On top of that we can add the skills shortage, visible especially in the US and Europe. According to Cybersecurity Ventures, the current estimate of open cyber security positions in the US is around 350,000, and the predicted global shortfall of cyber security workers by 2021 is 3.5 million. At this point, you might want to reconsider your career.

So, in the end, what is to be done to solve this problem?

Well, for some years now, governments have been imposing regulations (e.g. HIPAA, FISMA, the EU-US Privacy Shield, the EU GDPR, the EU NIS Directive). Bruce Schneier, a reputable cyber security expert, argues that companies will not make sufficient investments in cyber security unless government forces them to do so. I couldn't agree more: cyber security is not a direct source of revenue for most companies, but a huge expense. In their quest for profit, companies are reluctant to make such costly investments and might need some type of "incentive". One might even think that in some cases regulation, and only regulation, can solve problems where technology has failed to provide a proper response. Take GDPR, for example: an enormous and complex regulatory package meant to curb the data theft done on a regular basis by everybody, just because they can.

But regulation has drawbacks. First, you need a strong apparatus to enforce and monitor it. Not many countries have succeeded in this area, especially when it comes to technology; usually only a well-developed state with a strong democracy can maneuver such initiatives efficiently. Moreover, the Internet is vast, spanning many states, including those with little regulation in place, or those that choose to avoid it. Nevertheless, regulation is a good measure, and I strongly suggest exploring it.

Another approach might be user education and awareness. Some say that the more technologically knowledgeable users become, the lower their chances of being hacked. I do not contest this, as it is certainly true. This website provides statistics on internet usage worldwide. At the end of 2017, the global internet penetration rate was about 54%, but I couldn't find statistics on how knowledgeable those 54% are. Now consider that the other half of the world is expected to join us online in the next 5 to 10 years. How long did it take you to learn how to use the internet safely? Building a security culture might solve the issue, but it will take us quite a while.

Other voices say that companies and providers should take action and assume responsibility for protecting their customers. We depend on our providers in many situations, most notably in the cloud. They have very good visibility into the threat landscape that concerns their clients, and they are truly the first ones who can take preventive measures and even respond in case of an incident. I see this as a very effective and efficient way of assuring security; the only issue is that it needs large-scale adoption. In a previous post I detailed how cooperation is done in cyber security and described initiatives such as the Cybersecurity Tech Accord. I consider this a rather good start, but let's see the results.

This Forbes article offers some interesting ideas on how to fix cyber security from renowned experts in the field. Among them is Heather Adkins, Manager of Information Security at Google: "For the last 20 years we've been playing catch-up to fix operating systems that were designed in the 60s and 70s. We need to rethink that from the ground up". She means a complete redesign of today's technologies. This might also work, and in the end it could even be the best solution. Trustworthy Computing, an initiative launched by Microsoft in 2002, is arguably a key reason the company has survived and still makes a good profit today; occasionally you need to give your systems a proper reset. However, the issue here is the multitude of components and systems we use every day, built with different technologies, belonging to different parties and supported by different kinds of communities (open source, commercial etc.). We need a serious joint effort from all of these parties if anything notable is to come of it.

There you have it: we have identified four big approaches: regulation, user education, responsible providers and complete redesign. All of them are valid, though none of them can be fully applied by itself in real life. You can regulate as much as you want, but if objective reality cannot sustain the requirements, the regulation is useless. You can educate, but you will never reach 100% user awareness; there will always be a rather large share of the population unaware of what they should be aware of. Responsible providers only help if adoption is broad. And redesign cannot be done at full scale, because the Internet is a network of communities, each with its own ideas, rules and technologies.

There are other approaches out there as well, but I stopped at these four because I consider them the most important. I do think we can achieve global cyber security by adopting a balanced policy consisting of the four elements described above, applied simultaneously at the necessary intensity. There are areas where more regulation solves more, while others might need complete redesign. Achieving cyber security needs all of the above; we only need to figure out how much of each, and when.

The question remains, then: who is to decide when and where to apply what? Waiting for your input on this.


Originally published on August 9, 2018.
