Continuous delivery for data protection by design and by default — Restricting access to privileged environments.

Johan Sydseter · May 15, 2019 · 12 min read

In a series of articles, I will present a way of continuously delivering software while protecting the privacy of natural persons by design and by default.

[Cover image attribution: Mahnazy Azdani]

Restricting access to privileged environments is the most important concern when protecting personal information, and one of the easiest measures to get right. I will go through a couple of points that are important in this regard.

Managing access control risk for compliance

GDPR Recital 39 states that "personal data should be processed in a manner that ensures appropriate security and confidentiality of the personal data, including for preventing unauthorized access to or use of personal data and the equipment used for the processing."

In every practical sense, this means that every employee has to go through authentication and authorization procedures to get access to privileged environments, and to the equipment used in those environments, whenever these environments and equipment are used in the processing of personal data. But what is considered appropriate security?

This is addressed in Article 32:

…to ensure a level of security appropriate to the risk, including inter alia as appropriate:

(a) the pseudonymisation and encryption of personal data;

(b) the ability to ensure the ongoing confidentiality, integrity, availability and resilience of processing systems and services; …

So a risk assessment is mandatory: you have to define what is appropriate to the risk (Recital 76). If someone gained unauthorized access to the system and made the personal data public, what would the consequences be for the data subject?

But how can we know whether the access control and general security are good enough? To tell you the truth, nobody really knows. We can only assume. This is why a risk management approach to security is so important: you have to continually assess what is appropriate according to what is happening in the world around you. Recital 78 mentions that a system should be developed "with due regard to the state of the art", meaning that you should always take into account that the environment in which the system operates is changing and that the system, therefore, needs to change with it. The moment you stop considering this, you will get careless and expose yourself.

The questions will always come after the security breach is a fact. Why didn’t we have stricter procedures for patching? Why didn’t we secure all of our access points the same way? The post mortem of the security breach will reveal whether your security was built into the system, meaning that you really were doing data protection by design and by default according to Article 25, or whether your security was a patch on your system architecture.

When you start doing risk assessments, you will probably find out that you have to have the same authentication mechanisms across all your access points. This is by far the easiest approach. Some stakeholders might try to sway you into making it easier to access parts of the system that don’t expose personal data. It might sound like a good idea, but it’s rather challenging to do, as you will have to maintain exceptions in your access control system. Most access control systems I know of, at least in the digital world, aren’t made that way. Either you use access control, or you don’t; either you use password-based access control, or you use PKI. You will probably need to handle auditing, monitoring, logging, and data subject access differently as well when using different types of access control. Designing, implementing and maintaining all of this will, with time, become challenging and water down your security architecture until nothing is left.

Your risk assessment will probably highlight that your development and testing tools need to be secured as well. It’s easy to ignore the importance of a tool, especially digital tools. Take version control systems used for development as an example. A version control system won’t expose private data and isn’t used in a production environment, but the code that is stored there will eventually be deployed into a production environment. That means you have to secure it in the same manner as your production system and be very careful not to version-control sensitive information, like passwords and usernames, that can be used in an attack. It might sound obvious, but if I had a dollar for each of the passwords I have found lying around in version control, I could have earned far more than what I’ve earned writing articles.
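A minimal sketch of what catching those committed passwords could look like: a pre-commit or CI check that greps the repository for credential-shaped lines. The patterns and file handling here are illustrative, not an exhaustive secret-detection tool.

```python
import re
from pathlib import Path

# Patterns that commonly indicate credentials committed by mistake (illustrative).
SECRET_PATTERNS = [
    re.compile(r"password\s*[:=]\s*\S+", re.IGNORECASE),
    re.compile(r"(api[_-]?key|secret)\s*[:=]\s*\S+", re.IGNORECASE),
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
]

def scan_file(path: Path) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs that look like committed secrets."""
    hits = []
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append((lineno, line.strip()))
    return hits
```

Wire a check like this into the same pull-request gate as the rest of your reviews, so a leaked credential fails the build before it ever reaches the trunk.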

Securing your privileged environments

Privileged environments are best left completely off limits. It should be completely unnecessary to log onto your system manually for operation, system maintenance or development purposes. You have a very long way to go if you think otherwise. Ask yourself why you need access to privileged environments in the first place. Your answer will most likely highlight your disregard for production safety, security, and privacy. I dare you to show me that having direct access is faster, cheaper and as good. "If it’s fast and cheap it won’t be good. If it’s cheap and good, it won’t be fast. If it’s fast and good, it won’t be cheap." – Tom Waits

You might want to give read access for monitoring, log and configuration management purposes to DevOps engineers. This is where role-based access will be required. You can even create the possibility of staging application deployments or system configuration changes. The changes can be applied once operations team members have given their approval. On my current project, we have all the configuration and deployments version-controlled. New deployments and configuration changes happen through pull-request approval. At least two senior staff members have to approve a merge to the trunk, a configuration change or a deployment. This makes sure that no unintended changes get applied and that intended changes go through the proper channels for approval, review, and testing.
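The two-senior-approvals rule above can be sketched as a small gate function. The review data structure and the names are hypothetical; in practice this logic usually lives in your code-hosting platform's branch protection settings, but writing it out makes the policy explicit.

```python
REQUIRED_APPROVALS = 2  # policy from the text: two senior staff members

def can_merge(reviews: list[dict], senior_staff: set[str]) -> bool:
    """Allow a merge to the trunk only when at least two distinct
    senior staff members have approved the change."""
    approvers = {
        r["reviewer"]
        for r in reviews
        if r["state"] == "approved" and r["reviewer"] in senior_staff
    }
    return len(approvers) >= REQUIRED_APPROVALS
```

Note that the set comprehension counts distinct approvers, so one senior engineer approving twice still blocks the merge.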

Such an approval step is appropriate to cover Article 32.4, which states that:

The controller and processor shall take steps to ensure that any natural person acting under the authority of the controller or the processor who has access to personal data does not process them except on instructions from the controller, unless he or she is required to do so by Union or Member State law. Article 32.4

Keep in mind that opening up production read access and treating configuration and deployments as code (meaning that they are version-controlled) requires that all connected sensitive information is stored elsewhere, in a solution like HashiCorp Vault.
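A minimal sketch of keeping that sensitive information out of the repository: the deployment code looks secrets up at runtime and fails loudly when they are absent. The environment variable here is a stand-in for a proper secrets manager such as Vault; the variable names are illustrative.

```python
import os

def get_secret(name: str) -> str:
    """Fetch a secret from the process environment instead of the repo.
    In production this lookup would go to a secrets manager such as
    HashiCorp Vault; the environment variable is a simplified stand-in."""
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"secret {name!r} is not configured")
    return value
```

Failing with an error when a secret is missing is deliberate: a silent empty default is exactly the kind of thing that ends up committed as a hard-coded fallback password.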

Your continuous integration, test and staging environments should be privileged as well, meaning that they should be identical to your production environment with regard to security and access control. It will definitely slow you down in the beginning, but you will regain the momentum later on when you set up your production environment. It’s an almost ingenious way of doing data protection by design and by default, as your security architecture will be developed, tested and staged through the same procedures as the rest of the software that you build. You will be able to verify very early on that your security is good enough. You can even have your penetration testers come on board and test before you are ready for production, since the security mechanisms will be ready before you have your production environment up and running. For your project, this might end up providing the slack it needs to get ready for delivery, as it will give you time to make adjustments after the offensive security test period is over.

Remember to create separate access zones for your continuous integration, test, staging and production environments. If a password or certificate gets leaked, it won’t have the same security impact, given that it can only be used in development or test.

Prevent unauthorized processing by ensuring both the integrity and the confidentiality of the system

One of the topics that have led to the most contention on the projects I have been in is the use of OAuth2 for authorization.

The challenges with OAuth2 are well known and have been described very colorfully by one of OAuth’s ex-authors, Eran Hammer, in his blog post, “OAuth 2.0 and the Road to Hell”.

To be clear, OAuth 2.0 at the hand of a developer with a deep understanding of web security will likely result in a secure implementation. However, at the hands of most developers – as has been the experience from the past two years – 2.0 is likely to produce insecure implementations.

And that is my experience as well. One example I’d like to highlight is the method by which the client application is meant to authenticate itself to receive an access token. After the user has logged on and handed control over to the client application, it’s the client application’s turn to authenticate itself. The standard way of doing so is to use a client id and client secret. The client id and client secret are nothing more than a username and password sent in clear text, hopefully over a secure connection. If a man-in-the-middle were able to gain access to the communication, or to spoof the resource server, he could also effectively spoof the client application. This is why I always recommend using signing or mTLS for authentication and OAuth2 for authorization.

The three most important security principles to follow are what is known as the CIA triad. The CIA triad is also mentioned in several places in the European General Data Protection Regulation, e.g.:

(b) the ability to ensure the ongoing confidentiality, integrity, availability and resilience of processing systems and services; Article 32

The regulation specifically recommends choosing measures that prevent unauthorized processing by ensuring both the integrity and the confidentiality of the system. Article 5(1)(f)

When using passwords, you do not ensure the integrity of the authentication request. If someone hacks your secure line, something that does happen from time to time, they could steal the password and change it, thereby taking on the identity of the user. The only way to guarantee the integrity of the access request is to sign the request with a private key.
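The integrity property described above can be sketched with a message authentication code: instead of sending the secret itself, the client sends a signature over the request, and the server recomputes and compares it, so any tampering in transit is detected. A real deployment along the lines the article recommends would use an asymmetric signature, where the private key never leaves the client; the stdlib HMAC below is a simplified, shared-key stand-in for illustrating the idea.

```python
import hashlib
import hmac

def sign_request(client_id: str, body: bytes, key: bytes) -> str:
    """Compute a MAC over the client identity and request body,
    so the request cannot be altered without invalidating the tag."""
    return hmac.new(key, client_id.encode() + body, hashlib.sha256).hexdigest()

def verify_request(client_id: str, body: bytes, key: bytes, signature: str) -> bool:
    """Recompute the MAC server-side and compare in constant time."""
    expected = sign_request(client_id, body, key)
    return hmac.compare_digest(expected, signature)
```

A password offers no equivalent check: once stolen, it authenticates any request. A signed request binds the credential to the exact bytes being sent.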

The OAuth2-based OpenID Connect 1.0 specification has an authentication method called the private_key_jwt client assertion, which uses the concept of signing the request for access. If you’re thinking about using OAuth2 and OpenID Connect, then I would recommend such an approach. It will guarantee the integrity of the system access. There is another option if such an alternative is not available to you: you can require that the client application communicate using an mTLS client certificate. When the communication between the client application and the resource server is set up using mTLS, integrity and confidentiality can be guaranteed.
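To make the private_key_jwt idea concrete, here is a sketch of how such a client assertion is assembled: a JWT whose claims (iss, sub, aud, jti, exp) follow OpenID Connect Core. One caveat on the sketch: to stay stdlib-only it signs with HS256, while a real private_key_jwt assertion is signed with the client's private key (e.g. RS256), which is what gives the non-shared-key integrity guarantee.

```python
import base64
import hashlib
import hmac
import json
import time
import uuid

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def client_assertion(client_id: str, token_endpoint: str, key: bytes) -> str:
    """Build a JWT client assertion in the shape used by private_key_jwt.
    HS256 here is a stdlib stand-in for the asymmetric signature a real
    client would produce with its private key."""
    header = {"alg": "HS256", "typ": "JWT"}
    claims = {
        "iss": client_id,          # the client asserts its own identity
        "sub": client_id,
        "aud": token_endpoint,     # bound to this token endpoint only
        "jti": str(uuid.uuid4()),  # unique id, prevents replay
        "exp": int(time.time()) + 60,
    }
    signing_input = (
        f"{b64url(json.dumps(header).encode())}."
        f"{b64url(json.dumps(claims).encode())}"
    )
    sig = hmac.new(key, signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{b64url(sig)}"
```

The aud and exp claims matter: the assertion is only valid for one token endpoint and for a short window, which limits what an intercepted assertion is worth.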

No matter whether you use client credentials or client certificates, make sure that your solution takes into account that certificates and client secrets need to be registered by an end-user during an initial onboarding process and rotated on a regular basis. None of the project managers I have worked with had stopped to consider that passwords and certificates need to be registered upfront and rotated regularly. The biggest threat to Internet security these days is that nobody changes their certificates and passwords.

Identity Management Systems and Federated Identity

Even if you are making a backend system, you will still have to think about whether you are going to use your own Identity Management System or whether you will federate access to your system to a third party. In both cases, you need to consider the implications that the Identity Management System has with regard to personal data.

You might come to the conclusion, when looking into your use case, that your backend system might not need to know whether there is an end-user using the system. This tends to be the case for many B2B systems, where the customer simply has been given a general username and password that can be used by their PoS system or ERP system to access your API on behalf of their organization. You still need to register an administrator and set up proper onboarding and password rotation for the customer, but you can push some of the responsibility for ensuring integrity and traceability onto the customer’s system, as they are the ones that will have to deal with end-users. One way of doing this is to use the OAuth2 client credentials grant flow. This is perfectly fine, and the cheapest solution, as long as you take into consideration that your customer’s client application might get compromised, and put proper security measures in place for mitigating that risk, like binding your ACLs to the client credentials.
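For reference, the client credentials grant boils down to one form-encoded POST to the authorization server's token endpoint. The sketch below only builds the request body (RFC 6749, section 4.4); the endpoint URL and scope names are hypothetical, and in production the secret would of course come from a secrets store, not a literal.

```python
from urllib.parse import urlencode

def client_credentials_request(client_id: str, client_secret: str, scope: str) -> bytes:
    """Form-encode an OAuth2 client credentials token request.
    The result is POSTed to the token endpoint over TLS with
    Content-Type: application/x-www-form-urlencoded."""
    return urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": scope,
    }).encode()
```

Scoping the token narrowly (here a hypothetical orders:read) is one way of binding your ACLs to the client credentials, as suggested above.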

Personal data considerations with regard to access control usually come up when you are creating an API for a user interface or are creating an end-user application. In the case where you are using your own IMS, you will be storing personal information on behalf of end-users, no doubt about it. But what if you have federated access to your customer’s identity servers by using a solution like Azure AD B2C? The customers will, in that case, not be registered by you, so you do not need to store personal data. You are, however, still processing personal data, since the personal identifiers belonging to your customers are federated back to your application or API through the authentication and authorization requests. In that case, please keep in mind that these identifiers may have consequences for the data subject’s privacy. (Recital 30)

Let’s say that the client application is a digital appointment booking form for your local doctor’s office, made so that patients can register their doctor appointments. The next consideration you need to make is how you are going to use these personal identifiers for authentication and authorization. If the identifiers belong to patients registering themselves for a doctor’s appointment, then those identifiers might very quickly be connected with a person’s health information. Health information belongs to the special categories of personal data defined in Article 9, and if that is the case, you are required to do a DPIA. Given that you are operating in an environment where it’s not certain whether you are processing personal information or special categories of personal information, doing a DPIA in any case is the best solution. Through your DPIA you might come to understand that your IMS is involved in the processing of special categories of personal information.

Authentication methods for users

According to Recital 64, “The controller should use all reasonable measures to verify the identity of a data subject who requests access, in particular in the context of online services and online identifiers.”

If you are developing an end-user client application, you need to consider whether your users will authenticate using username/password, PKI, Biometric information, token, SMS/Phone, key card or pin and whether you should use multi-factor authentication schemes.

The answer to that question depends on what is available to you. In general, username/password is the best option if you do not have the possibility to use PKI or key cards. The other options can easily be hacked. Using e.g. fingerprints may seem more secure at first, but when you look at possible attack vectors, it’s probably one of the less secure options. Faking a fingerprint only requires the attacker to offer you a drink, take a picture of your drinking glass, then, with Photoshop, increase the contrast of the image and feed it through a 3D printer accessible at your local library (yes, my local library has a 3D printer). This makes phone authentication less secure as well, since most phones these days use a fingerprint scanner to give users access to their phones. See: “I attempted to fool the new Samsung Galaxy S10”.

It’s also very easy to gain access to another person’s phone number. All you need is the person’s business card. Most telecom employees aren’t very thorough when they are out on the street selling mobile subscriptions. Many of them get paid per subscription they sell. If you offer them someone’s business card, you will probably be able to get them to transfer the phone number on the business card over to you when buying a mobile subscription from them. PKI is the next step if you want to improve your security. When key cards are used, make sure the employees managing the key cards are properly trained with regard to security. Handing out a key card should never happen without the other person showing a proper ID.

In my opinion, backup codes are best for cases where users forget their passwords. You should never use SMS or phone calls as a backup solution in case users forget their password. If your users have problems remembering their passwords, rather help them by safely storing their passwords in a secure password-sharing solution like 1Password.
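Generating backup codes is straightforward with a cryptographically secure random source; the count, length, and alphabet below are illustrative choices, not a standard.

```python
import secrets

def generate_backup_codes(count: int = 10, length: int = 8) -> list[str]:
    """Generate single-use backup codes from a CSPRNG. Store only
    their hashes server-side and invalidate each code after use."""
    # Alphabet drops look-alike characters (0/o, 1/l/i) so users
    # can read the codes back without ambiguity.
    alphabet = "abcdefghjkmnpqrstuvwxyz23456789"
    return ["".join(secrets.choice(alphabet) for _ in range(length))
            for _ in range(count)]
```

Treat the codes like passwords: show them to the user once, then keep only hashed copies.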

When considering MFA, it is better to use a digital or physical key generator together with backup codes. There is no point in using SMS or other phone authentication methods. If you want to improve your security, consider enforcing regular password rotation before opting for MFA methods. MFA methods are there because most users’ passwords are a combination of birth dates and their dog’s name; just watch this video from the Jimmy Kimmel Live show.
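The digital key generators mentioned above are typically authenticator apps implementing HOTP (RFC 4226) and its time-based variant TOTP (RFC 6238). Both fit in a few lines of stdlib Python, which shows why they are preferable to SMS: the one-time code is derived from a shared secret and a counter or clock, with no phone network to intercept.

```python
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password with dynamic truncation."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, period: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based variant: the counter is the current
    30-second time step, as used by authenticator apps."""
    return hotp(key, int(time.time()) // period, digits)
```

The server holds the same shared secret and accepts a small window of adjacent time steps to allow for clock drift.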

To conclude

I started this article by talking about risk management. Ensuring compliance with regard to restricting access to personal data means that you need to document your risk assessments and how you have chosen to mitigate the risks identified through them. You can have the best IMS that exists, but if you do not document why you are using it, others will start to question why you are using that expensive and overly complicated piece of software. When I am not working, I am home helping with the potty training of my two boys. Part of that training is teaching them that the work is not done before they have done the paperwork. The same also holds true when building an access control system.


Johan Sydseter

Co-leader for OWASP Cornucopia and co-creator of Cornucopia Mobile App Edition, an application security engineer, developer, architect and DevOps practitioner.