Application-Level Encryption (ALE) with gRPC-java

Ivan Bahdanau
5 min read · Feb 16, 2023


Preface

So, you own both a gRPC client and a gRPC server that exchange serialized protobufs, and you'd like to make sure that under no circumstances can the payload, precious and sensitive data, be accessed or intercepted by a man-in-the-middle attack. The only places where data can be encrypted or decrypted are the client and the server. On top of that, each client needs its own key, shared only between that client and the server, so that a client can neither impersonate another client nor read other clients' messages.

The why?

Some readers may say: well, gRPC runs over HTTP/2, and TLS-encrypted traffic can solve the problem. That is true: TLS encryption helps mitigate man-in-the-middle attacks, but it does not fully protect against intruders gaining access to unencrypted traffic if TLS termination happens before the request reaches its final destination: the gRPC server.

And here is why. Modern cloud solutions come with a lot of optimizations and filtering to help with efficient traffic routing. It is not just a gRPC client talking to a gRPC server directly. In reality it is gRPC Client → CloudWhereServerIs → Proxy1 → … → ProxyN → gRPC Server. The request may have to pass through several proxies, often for separation of concerns: some proxies rate-limit traffic, some look for unusual activity, some reshape traffic or make sure connections to the final destination are routed efficiently, and some terminate TLS so they can analyze request headers and metadata to decide where and how to route the request.

Once a proxy (Proxy3 in our case), likely an application-level firewall, terminates TLS to do its analysis of request headers or payload, it may choose not to re-establish a TLS connection before sending the request to the next proxy or the final server (see discussions of L7 firewalls for why a proxy would do this). As an optimization, when a request is carried between proxies internally after TLS has been terminated, the proxy can attach a flag to the request and pass it along unencrypted (in HTTP, for example, the X-Forwarded-Proto header can carry whether the original request was secure). This way the proxy does not spend extra time re-establishing TLS, the server does not have to terminate it again, and both still know whether the original client request was secure. But even if the TLS connection is re-established for downstream hops, the proxy that terminated it in the first place has access to the raw payload, and that is no good.

The fact that a request travels through an internal network unencrypted, or that some link in the chain of proxies has access to the raw data, leaves the protobuf payload exposed to anyone who manages to gain access to those proxies. This problem goes away if the protobuf payload is encrypted by the client and can only be decrypted by the server.

What is in scope

This post describes how to intercept bytes at the client level and encrypt them before they are sent to the server, and how to set up the server side to intercept and decrypt bytes before deserializing them into protos. When the response is ready, the same steps happen in reverse: the response is encrypted by the server and decrypted by the client.

What is out of scope

Distributing keys between clients and servers is out of scope for this post. Safe key distribution is incredibly important, but covering it would make this post far too long, so I won't include it here.

How to encrypt gRPC payload by client and decrypt by server

The Java implementation of gRPC (grpc-java) lets us use Marshallers to “massage/intercept” bytes on the client side before they are sent to the server, and to do the same trick when the request is received by the server.

In this example I'll be using the Tink library, which makes encryption and decryption very easy. Tink's documentation describes how to create key files that can later be used to set up encryption/decryption. For simplicity and demo purposes, we can use this static method, which builds the Aead object that does the encrypting and decrypting:

import com.google.crypto.tink.Aead;
import com.google.crypto.tink.KeyTemplates;
import com.google.crypto.tink.KeysetHandle;
import com.google.crypto.tink.aead.AeadConfig;
import java.security.GeneralSecurityException;

public static synchronized Aead initiateTink() {
  try {
    // Register Tink's AEAD key managers, then generate a fresh AES128-GCM keyset.
    AeadConfig.register();
    KeysetHandle handle = KeysetHandle.generateNew(KeyTemplates.get("AES128_GCM"));
    return handle.getPrimitive(Aead.class);
  } catch (GeneralSecurityException e) {
    throw new RuntimeException(e);
  }
}

All this code does is generate an Aead object in memory that can encrypt or decrypt given bytes with aead.encrypt(..) or aead.decrypt(..). Of course, in the real world you would generate the Aead with Tink code, save it to a key file, distribute those key files between clients and servers, and then use the Tink library to load Aead objects for encryption/decryption.
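To illustrate what an Aead does under the hood without pulling in Tink, here is a minimal, self-contained sketch of the same AES-GCM authenticated encrypt/decrypt round trip using only the JDK's javax.crypto (this is not Tink code; Tink's aead.encrypt(plaintext, associatedData) performs the equivalent of the steps below, including IV handling):

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class AeadRoundTrip {
  public static void main(String[] args) throws Exception {
    // Generate a fresh 128-bit AES key (roughly what the AES128_GCM template does).
    KeyGenerator keyGen = KeyGenerator.getInstance("AES");
    keyGen.init(128);
    SecretKey key = keyGen.generateKey();

    byte[] plaintext = "precious and sensitive data".getBytes(StandardCharsets.UTF_8);
    byte[] associatedData = "client-id-a".getBytes(StandardCharsets.UTF_8);

    // Encrypt: random 12-byte IV + AES-GCM with a 128-bit auth tag.
    byte[] iv = new byte[12];
    new SecureRandom().nextBytes(iv);
    Cipher enc = Cipher.getInstance("AES/GCM/NoPadding");
    enc.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
    enc.updateAAD(associatedData);
    byte[] ciphertext = enc.doFinal(plaintext);

    // Decrypt with the same key, IV and associated data; a wrong key,
    // tampered ciphertext or mismatched AAD would throw AEADBadTagException.
    Cipher dec = Cipher.getInstance("AES/GCM/NoPadding");
    dec.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
    dec.updateAAD(associatedData);
    byte[] decrypted = dec.doFinal(ciphertext);

    System.out.println(Arrays.equals(plaintext, decrypted)); // prints "true"
  }
}
```

The associated data plays the same role as the second argument to Tink's aead.encrypt: it is authenticated but not encrypted, which is handy for binding a ciphertext to a particular client id.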

Next, we need to update our client with an interceptor that encrypts requests and decrypts responses. The code for this will look something like:


// clientABRequestKey and clientABResponseKey are Aead objects from above,
// used to encrypt requests and decrypt responses.
final ManagedChannel encryptedChannel = NettyChannelBuilder
    .forTarget(grpcServerPath)
    .usePlaintext()
    .intercept(new ClientE2eEncryptingInterceptor(
        clientABRequestKey, clientABResponseKey, CLIENT_ID))
    .build();

The code for ClientE2eEncryptingInterceptor can be found here.
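To make the core idea of the interceptor concrete without depending on the linked repo, here is a simplified, self-contained sketch of a marshaller that wraps a delegate and encrypts/decrypts the serialized bytes. Note the hedges: the Marshaller interface below is a stand-in for gRPC's real io.grpc.MethodDescriptor.Marshaller (which works with InputStreams), and plain JDK AES-GCM stands in for Tink's Aead; the class and method names are illustrative, not the repo's actual API:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class MarshallerSketch {

  // Simplified stand-in for io.grpc.MethodDescriptor.Marshaller.
  interface Marshaller<T> {
    byte[] toBytes(T message);
    T fromBytes(byte[] bytes);
  }

  // Wraps a delegate marshaller: serialize then encrypt on the way out,
  // decrypt then deserialize on the way in. This delegation is the same
  // idea the client/server interceptors in the post are built on.
  static final class EncryptingMarshaller<T> implements Marshaller<T> {
    private final Marshaller<T> delegate;
    private final SecretKey key;

    EncryptingMarshaller(Marshaller<T> delegate, SecretKey key) {
      this.delegate = delegate;
      this.key = key;
    }

    @Override public byte[] toBytes(T message) {
      try {
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal(delegate.toBytes(message));
        // Prepend the IV so the receiving side can decrypt.
        return ByteBuffer.allocate(iv.length + ciphertext.length)
            .put(iv).put(ciphertext).array();
      } catch (Exception e) {
        throw new RuntimeException(e);
      }
    }

    @Override public T fromBytes(byte[] bytes) {
      try {
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        // First 12 bytes are the IV, the rest is ciphertext + auth tag.
        cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, bytes, 0, 12));
        return delegate.fromBytes(cipher.doFinal(bytes, 12, bytes.length - 12));
      } catch (Exception e) {
        throw new RuntimeException(e);
      }
    }
  }

  public static void main(String[] args) throws Exception {
    // A trivial "protobuf-like" delegate that just serializes a String.
    Marshaller<String> plain = new Marshaller<String>() {
      @Override public byte[] toBytes(String m) { return m.getBytes(StandardCharsets.UTF_8); }
      @Override public String fromBytes(byte[] b) { return new String(b, StandardCharsets.UTF_8); }
    };
    KeyGenerator keyGen = KeyGenerator.getInstance("AES");
    keyGen.init(128);
    Marshaller<String> encrypting = new EncryptingMarshaller<>(plain, keyGen.generateKey());

    String roundTripped = encrypting.fromBytes(encrypting.toBytes("hello over the wire"));
    System.out.println(roundTripped); // prints "hello over the wire"
  }
}
```

The bytes that leave toBytes are what a proxy would see on the wire: an IV plus AES-GCM ciphertext, useless without the key.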

And finally, the server code for this setup will look something like:

private static Server startGrpcServer() throws IOException {
  final ImmutableMap<String, Aead> requestDecryptionKeys = ImmutableMap.of(
      CLIENT_ID_A, clientABRequestKey,
      CLIENT_ID_C, clientCBRequestKey);
  final ImmutableMap<String, Aead> responseEncryptionKeys = ImmutableMap.of(
      CLIENT_ID_A, clientABResponseKey,
      CLIENT_ID_C, clientCBResponseKey);
  return NettyServerBuilder.forPort(SERVER_PORT)
      .intercept(new ServerE2eAppEncryptionInterceptor())
      .addService(
          ServerInterceptors.intercept(
              ServerInterceptors.useMarshalledMessages(
                  new HelloServiceGrpcImpl().bindService(),
                  new ServerRequestDecryptor(requestDecryptionKeys),
                  new ServerResponseEncryptor(responseEncryptionKeys))))
      .build()
      .start();
}

ServerE2eAppEncryptionInterceptor, ServerRequestDecryptor and ServerResponseEncryptor can also be found in the same GitHub repo.

As a result, we have the following setup:

Since we encrypted the payload on the client side, it doesn't matter that the TLS connection gets terminated. There is no way that Proxy3 or Proxy4 in the diagram above could access the serialized payload and make sense of it.

Finally, making a gRPC call with the client we created above is done exactly the same way as with a regular gRPC client:

...
final HelloServiceGrpc.HelloServiceFutureStub helloServiceFutureStub =
    HelloServiceGrpc.newFutureStub(encryptedChannel);
final ListenableFuture<HelloResponse> responseFuture = helloServiceFutureStub.hello(
    HelloRequest.newBuilder().setRequestPayload(requestPayload).build());
...

And on the server side, the request is also processed exactly the same way:

private static class HelloServiceGrpcImpl extends HelloServiceGrpc.HelloServiceImplBase {

  @Override
  public void hello(HelloRequest request,
      StreamObserver<HelloResponse> responseObserver) {
    responseObserver.onNext(HelloResponse.newBuilder() /* ... */ .build());
    responseObserver.onCompleted();
  }
}

The fact that neither the gRPC client code nor the server handler needs to change is achieved thanks to the interceptors and marshallers we set up globally: on the ManagedChannel for the client and on the gRPC Server object for the server.

The full code and more documentation are available in the e2e app-level encryption GitHub repository.
