“Encryption as a Service”, Buzzword or valuable technology?

Marco Amann
Digital Frontiers — Das Blog
10 min read · Jul 9, 2021

What is EaaS and does it even make sense? In this article we discuss some concepts found in EaaS offerings and explore where and under which circumstances they make sense.

Photo modified from Amol Tyagi on Unsplash

This article is split into two parts: the first discusses some concepts of EaaS, whether they may make sense, and gives you some aspects to think about before considering using it. The second part shows a sample implementation using some of Hashicorp Vault’s features in this area.

What claims to be Encryption as a Service

To be honest, when I first read about “Encryption as a Service”, I prepared to get out my Bingo card: this promised to be a gold mine of snake oil. But since I could not find “Military Grade Encryption” in the first section, maybe this was legit?

There is probably no field in IT where more snake oil is sold than in security. Snake-oil vendors have a unique advantage here: you normally can’t see security working. If someone sold you some shady AI-powered chip, you can run your workload on it and may or may not see a difference. But with security-related systems? You only hear from them when they fail to do their job, and by then it is too late. Your ciphertexts (hopefully) all look pretty boring, so let’s approach this topic by first discussing the principles at play and then presenting a bit of example code.

To keep the scope of this article constrained, I limit the discussion to a single technology, the transit secrets engine of Hashicorp Vault, but refer to other offerings where applicable.

Vault Transit Secrets Engine

Vault has a secrets engine (their name for a component with some related functionality) called transit that allows for encryption and decryption of data sent to its API. We use this functionality as our working definition of EaaS; for now this is sufficient. Vault exposes its API (in the default configuration) as an HTTP endpoint, so virtually every other application can make use of the offered service.

Here is a sketch of an example exchange with the transit backend over its HTTP API; the token and ciphertext below are illustrative, and note that the plaintext has to be base64-encoded:
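POST /v1/transit/encrypt/email-key HTTP/1.1
X-Vault-Token: <client token>

{ "plaintext": "YWxpY2VAZXhhbXBsZS5jb20=" }

HTTP/1.1 200 OK

{ "data": { "ciphertext": "vault:v1:8SDd3WHDOjf7mq…" } }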

Trust model

Assume your application has been entrusted with some sensitive plaintext. This means you need to trust your application, no matter what you do with the plaintext afterwards. Further, you have to trust the transport that brought you the secret in the first place: in many cases some TLS connection from a client, or a (hopefully also TLS-secured) connection to your database.

If you store the sensitive data somewhere, you either have to trust the storage and all those who have access (now and in the future) or encrypt it and have to trust all those who hold the key.

So why introduce Vault as another party here?

If you introduce Vault to manage your secret keys, access to them, and the encryption and decryption of data, you obviously have to trust the Vault. Furthermore, you need to trust the transport you use to communicate with the instance.

Is this reasonable? I would argue it is, under the condition that you, better yet some team in your company, operate the Vault. Sending secrets over the network is most often a bad idea. But since the secrets reached your service somehow, that problem has to be considered in context: a tightly controlled, mutually authenticated TLS connection to Vault is probably less of a problem than a connection made by a user with IE6, sending you the secret in the first place. Running Vault as an internal service is probably no less secure than running any other internal service, including a storage backend and your application. And since Vault is Free Software (Mozilla Public Licence), you can even audit it if you are so inclined.

Beware of all-in-one cloud offerings (if you can afford it)

This might be contrary to the opinions of some of my colleagues, but here it goes: I would advise against using hosted services that create and manage your keys for you. If your regulations require the use of FIPS 140-2 certified modules, such offerings are probably the easiest way for you. But if you care about the actual security benefits, not merely about regulations, and can afford to have your ops guys maintain a local installation, I would advise you to do so. Letting your secret keys ever leave your infrastructure is problematic. If they never leave your infrastructure because all your infrastructure already is hosted on some cloud, then the benefit of hosting your encryption service in your own datacenter vanishes.

Some conceptual benefits

Whilst there are many reasons against EaaS, there are some strong reasons to implement EaaS in your environment.
First and foremost, you do not have to implement it yourself. This argument ranges from Schneier’s law to whether you can explain to your junior developer why you would choose AES-GCM over AES-CTR or vice versa. If you externalise all encryption, your application is encumbered with less complexity: secure random numbers, nonces, cipher specs, and so on.

If your application never sees the actual encryption keys, you do not have to manage them. This reduces the likelihood of accidentally exposing them, e.g. by a misplaced configuration or by vulnerabilities in public-facing applications, introduced by you or the framework you run on. This is also true for operations that otherwise would need coordination in a distributed system, like key rotation.

If the chosen solution supports centralised policies and authentication, operations like granting and revoking access to signing or decryption are an order of magnitude easier compared to enforcing this in a set of distributed application instances. Further, auditing the whole system becomes much easier, if it was even feasible before.

Some pitfalls

Problems with security solutions are often nuanced, so I do not attempt to create an exhaustive list that covers all aspects you have to consider before using EaaS in your system. Rather, I present the three aspects that I personally find the most concerning.

Using centralised encryption services creates a single point of failure. If the encryption service stops working, all services relying on it will also stop. Depending on your use case, this might be the whole environment. So if you use your own EaaS service, make sure you use and understand the provided high availability features and failure modes. This of course extends to the storage implementation beneath that service and a way to have it recover from a crash. In the case of Vault, this requires you to have your set of unseal keys ready, probably involving a bunch of people and a bit of ceremony.
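For Vault specifically, a minimal sketch of an HA-capable server configuration with integrated (Raft) storage could look like the following; paths, hostnames and certificate locations are placeholders:

storage "raft" {
  path    = "/opt/vault/data"
  node_id = "vault-node-1"
}

listener "tcp" {
  address       = "0.0.0.0:8200"
  tls_cert_file = "/etc/vault/tls/vault.crt"
  tls_key_file  = "/etc/vault/tls/vault.key"
}

api_addr     = "https://vault-node-1.internal:8200"
cluster_addr = "https://vault-node-1.internal:8201"

Several such nodes joined into one Raft cluster give you failover, but each node still has to be unsealed after a restart, which is exactly the ceremony mentioned above.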

If you keep your keys centralised, they present a nice target for an attack, since they basically are the master key to your application. This does not mean that the setup is harder to secure this way: who can reliably rotate encryption keys located in each and every application instance by hand? Instead I want to emphasise that the processes around your EaaS service have to be clearly documented and monitored to prevent fraudulent behaviour.

One last point I want to address is that you need to make sure that you do not accidentally expose secrets due to the sheer simplicity of using the encryption service. Tight access control policies and separate user accounts for your roles should ensure you won’t absent-mindedly click the wrong button in the Web-UI and have your master encryption key deleted (this totally did not happen to me during development) or tick the checkbox that allows export of said key.

Example application

To try out Vault for myself, I wrote a tiny API that allows a user to post and retrieve notes that are stored in an encrypted form. Let’s have a look at how it works: we expose three endpoints, one for adding, one for listing and one for retrieving notes.
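A minimal sketch of what these endpoints could look like with Spring; all names are illustrative and not taken from the actual project:

import java.security.Principal
import org.springframework.web.bind.annotation.*

// Hypothetical DTO and service interface, just to outline the API surface
data class NoteDto(val title: String, val content: String)

interface NoteService {
    fun save(username: String, note: NoteDto)
    fun listTitles(username: String): List<String>
    fun load(username: String, title: String): NoteDto
}

@RestController
@RequestMapping("/notes")
class NoteController(private val notes: NoteService) {

    // Add a note; the service encrypts the content before persisting it
    @PostMapping
    fun add(principal: Principal, @RequestBody note: NoteDto) =
        notes.save(principal.name, note)

    // List the titles of the caller's notes
    @GetMapping
    fun list(principal: Principal): List<String> =
        notes.listTitles(principal.name)

    // Retrieve (and decrypt) a single note
    @GetMapping("/{title}")
    fun get(principal: Principal, @PathVariable title: String): NoteDto =
        notes.load(principal.name, title)
}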

Vault Transit: Encrypting the Email Addresses

I decided the application should save the email addresses of registered users. Instead of saving them directly in the database, I want to save them encrypted, to minimize problems in the case of the database being leaked. Since the addresses are only saved for manually contacting a user in the case of misuse, their storage and retrieval rates are low. The procedure was designed as follows:

  • Receive the email address upon signup
  • Send the email address to Vault to have it encrypted and receive the ciphertext
  • Store the ciphertext in the user table for later use
  • If required, send the ciphertext to Vault to have it decrypted

Vault allows us to give the public-facing application access to the encryption functionality only, but never to decryption operations on email addresses; a matching policy is sketched below. You might ask: why not simply use asymmetric encryption, with the public key in the public-facing application and the private key in an internal one? Well, if there are more than two parties, this becomes quite complicated and might require re-encryption of secrets for certain use cases. Think of GPG for group emails with hybrid encryption, or password sharing in teams.
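Such an encrypt-only policy for the public-facing application could be as small as this, assuming the transit engine is mounted at its default path and the key is named email-key:

# Allow encrypting with the email key, but not decrypting
path "transit/encrypt/email-key" {
  capabilities = ["update"]
}

An internal service that needs to read the addresses would instead receive a policy for transit/decrypt/email-key.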

Let’s build this.

For this example, we need a transit backend in Vault. If you want to follow along, enable the backend in your installation.
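With the CLI this is a one-liner, assuming a token with sufficient rights and the default mount path:

vault secrets enable transit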

The transit secrets engine in the Vault UI

Within the backend, we need to have a named encryption key. Vault lets you choose from an array of symmetric and asymmetric ciphers and modes of operation. Since in our case the same application will encrypt and decrypt messages, probably only seconds apart, I could not justify using asymmetric encryption for anything useful here. Choose whatever matches your requirements. I named our keys email-key and notes-key, which are referred to throughout the code later.
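Creating the two named keys can, for example, be done via the CLI; aes256-gcm96 is Vault’s default key type, so stating it explicitly is optional:

vault write -f transit/keys/email-key
vault write transit/keys/notes-key type=aes256-gcm96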

Pay attention when using “Convergent Encryption”: Vault allows keys to support convergent encryption upon creation. When encrypting, you have to supply a “context” alongside your plaintext and Vault will generate a deterministic ciphertext. This is done by deriving a non-random nonce. This mode has some applications but allows for a set of attacks; among others, distinguishing attacks are possible by design, since identical plaintexts encrypted under the same context yield identical ciphertexts.
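For illustration, a convergent key has to be created with key derivation enabled, and every encrypt call must then supply a base64-encoded context; the key name and context here are made up:

vault write transit/keys/convergent-key convergent_encryption=true derived=true
vault write transit/encrypt/convergent-key \
    plaintext=$(base64 <<< "alice@example.com") \
    context=$(base64 <<< "user-signup")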

Using the Spring Vault library, encrypting some plaintext is trivial:

import org.springframework.vault.core.VaultOperations
import org.springframework.vault.support.Plaintext

val operations: VaultOperations = vault
val transit = operations.opsForTransit()
// Encrypt with the named key "email-key"; Vault returns a "vault:v1:…" ciphertext
val ciphertext = transit.encrypt("email-key", Plaintext.of(email))
return ciphertext.ciphertext

Note that you do not have to care about SecretKeySpecs or Initialization Vectors (IV) here. This leaves less room for messing them up.

Vault allows us to rotate encryption keys and “rewrap” ciphertext, meaning decrypting with the old and encrypting with the new version of a key. This can come in handy if you are required to rotate keys, although the security benefit gained by rewrapping alone can be questionable.
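With the CLI, rotating a key and rewrapping an existing ciphertext under the new key version could look like this; the ciphertext is shortened:

vault write -f transit/keys/email-key/rotate
vault write transit/rewrap/email-key ciphertext="vault:v1:8SDd3WHDOjf7mq…"

Old key versions remain usable for decryption until you raise the key’s min_decryption_version, so rewrapping stored ciphertexts can happen lazily.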

It is worth noting here that, with the designed system, the key never leaves the Vault. This changes with the datakey we use for the notes, as described in the next section.

To debug your application, you can make use of the various functionalities provided by the Vault UI

Using a Datakey

Each user should have their own symmetric encryption key in the system for their notes. This has multiple benefits: it allows us to revoke individual keys if they become compromised, and once we delete a key from the database, we can be sure that all ciphertexts stored under it are inaccessible. This makes GDPR-compliant deletes a breeze. These keys are cached inside the application and stored in the database, secured by another key only known to the Vault. When retrieval becomes necessary, we can let Vault decrypt the user key for us. Instead of implementing key generation ourselves, Vault can handle this for us: its datakey feature was made for exactly that purpose.

Using Vault, we could directly encrypt and decrypt every note with a named key, just as we did for the email addresses. In this context, a datakey provides two advantages. First, I do not want to make another API call for each and every message, which might become slow and cumbersome, e.g. for listing or bulk inserts. The second reason was outlined above: we want individual keys for each user, and datakeys are easier to handle than creating a named key in Vault for each and every user.
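For completeness, here is one way (not taken from the original project) the unwrapped datakey could be used locally with the JDK’s crypto APIs, so that note contents never travel to Vault. Keep in mind that Vault hands out the plaintext datakey base64-encoded, so it has to be decoded into raw bytes first:

import java.security.SecureRandom
import javax.crypto.Cipher
import javax.crypto.spec.GCMParameterSpec
import javax.crypto.spec.SecretKeySpec

// Hypothetical helper: encrypt one note with the user's unwrapped 256-bit datakey
fun encryptNote(dataKey: ByteArray, note: ByteArray): Pair<ByteArray, ByteArray> {
    val iv = ByteArray(12).also { SecureRandom().nextBytes(it) } // fresh 96-bit nonce per note
    val cipher = Cipher.getInstance("AES/GCM/NoPadding")
    cipher.init(Cipher.ENCRYPT_MODE, SecretKeySpec(dataKey, "AES"), GCMParameterSpec(128, iv))
    return iv to cipher.doFinal(note) // persist both the IV and the ciphertext
}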

User and Key Creation

Requesting a new key from Vault is as simple as calling the transit backend. The path below will give us a key wrapped by the named key notes-key; in this context, “wrapped” means encrypted with that key. The resulting key can be safely stored alongside our user, since it can only be decrypted by contacting Vault (and being allowed to do so).

transit/datakey/wrapped/notes-key

Note, however, that Vault will not save any information about the generated key; saving it and remembering the name of the wrapping key is your obligation.
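For example, requesting such a wrapped datakey via the CLI could look like this; the returned ciphertext is exactly what we persist next to the user:

vault write -f transit/datakey/wrapped/notes-key

Requesting transit/datakey/plaintext/notes-key instead additionally returns the key in plaintext, which is what an application needs if it wants to encrypt locally right away.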

Decrypt Ciphertexts using Vault

Upon each database read or write concerning our notes, we need access to the key for the particular user. To let Vault decrypt it for us, we use the Vault library provided by Spring.

import org.springframework.vault.support.Ciphertext

val transit = operations.opsForTransit()

// Load the wrapped (encrypted) datakey stored alongside the user
val maskedKey = userRepo.findByUsername(username)?.userKey ?: ...
// Let Vault unwrap it with the named key "notes-key"
val key = transit.decrypt("notes-key", Ciphertext.of(maskedKey))
return key.plaintext

The nice thing about this is that, compared to an implementation by hand, there is much less room for error.

Revoking access to decryption

A nice property of the resulting system is that we can revoke access to the saved notes for individual instances of our application by deauthenticating them in Vault, without affecting any other component.
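Assuming each instance authenticates with its own token (or AppRole), revocation is a single call, for example via the CLI using the token’s accessor:

vault token revoke -accessor <accessor-of-the-instance>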

Summary

We have seen that EaaS has a lot of benefits in certain use cases, if we are aware of the implications of its usage. Whilst there are offerings in the cloud, you might consider hosting such a service yourself, if you can spare the ops. With Vault and Spring, using externalised encryption services is concise and spares you a lot of security-relevant, boring and error-prone code. Vault’s datakey functionality can be used offline to encrypt data without having to do network roundtrips, and provides the ability to have the key delivered to you already wrapped by Vault.

Thanks for reading! If you have any questions, suggestions or critique regarding the topic, feel free to respond or contact me. You might be interested in the other posts published in the Digital Frontiers blog, announced on our Twitter account.
