A Developer’s Experience: Linking C++ and Python with gRPC in Docker

Iftimie Alexandru
5 min read · Feb 14, 2024


I had the recent opportunity to work on a project where the primary focus was the interaction between a C++ client and a Python server with Windows and Linux support. The requirements for this project revolved around performance and efficiency with a request-response model.


TL;DR

A simple gRPC setup with a C++ client and a Python server. A single Docker container builds and runs both. Run the commands below to pull a pre-built Docker image and run it.

Code on GitHub: https://github.com/Iftimie/gRPC-cpp-python/tree/127d65442ee8e5166a510690e23ed31c7ba9c8da

docker pull iftimiefa/grpc-cpp-python:initial
docker run --rm -it -p 50000:50000 --name grpc-cpp-python iftimiefa/grpc-cpp-python:initial

Goal of the project

The project involved a pre-existing C++ application, tasked with sending images to a Python-based tool for quick processing, emphasizing the need for low latency and reduced bandwidth usage.

In the initial phase, I explored various options for this setup.

  1. One of the initial considerations was the use of low-level built-in sockets. While this approach initially seemed straightforward due to the lack of dependencies, it soon became apparent that it would complicate the addition of new endpoints. The need for custom implementation for each new feature would be like “reinventing the wheel.”
  2. I considered the creation of a RESTful API, employing JSON over HTTP. This is a widely adopted method on the web. However, the challenge lay in the C++ client implementation, which promised to be cumbersome and potentially suboptimal in terms of performance. On the Python side, there would be less of a concern, given the abundance of efficient frameworks available.
  3. Apache Thrift was also on the table, boasting a long-standing presence in the field. It offers an Interface Definition Language (IDL) for defining interfaces, and the rest of the code is auto-generated, making use of an efficient binary protocol. This cross-language support is a significant advantage, particularly for projects involving multiple programming languages.
  4. Ultimately, I settled on gRPC. Though it’s a relatively newer framework compared to Apache Thrift, it mirrors many of Thrift’s functionalities at a high level. The decisive factor for choosing gRPC was its growing popularity and the robust community support it enjoys.

At first, I faced significant challenges in integrating gRPC with the C++ client, primarily due to complexities in the build configuration and tools, as well as my limited experience with low-level programming. I’d like to share the insights I gained from this experience, in the hope that it might assist others encountering similar difficulties.

Installing dependencies

The easiest way to work with gRPC in C++ is by using vcpkg. I will provide a Docker image that has everything configured, but to give you the gist, this is what is necessary to install vcpkg and gRPC. The only inconvenience is that vcpkg builds are quite big: the resulting Docker image is about 6 GB.

# Dockerfile
FROM ubuntu:latest

RUN apt-get update && apt-get install -y git curl zip unzip tar cmake build-essential pkg-config python3 python3-pip
RUN git clone https://github.com/microsoft/vcpkg.git
RUN ./vcpkg/bootstrap-vcpkg.sh
RUN ./vcpkg/vcpkg install grpc --triplet x64-linux-release

RUN python3 -m pip install grpcio-tools

Defining our interface

Before we write the client and server, we have to define the data structures they will exchange and the endpoints available on the server. That interface resides in message.proto.

// message.proto
syntax = "proto2";

package grpctemplate;

service MyService {
  rpc SayHello(RequestHello) returns (ResponseHello) {}
}

message RequestHello {
  required string name = 1;
}

message ResponseHello {
  required string output = 1;
}

Building the server

To implement our server, we first have to generate the Python code based on the message.proto file. We have already installed the grpcio-tools package in the previous section.

Next, we want to generate message_pb2.py and message_pb2_grpc.py. The first contains the data structure definitions, while the second contains the client and server code that operates on those data structures.

WORKDIR /app
ADD message.proto message.proto
RUN mkdir py
RUN python3 -m grpc_tools.protoc --proto_path=. --python_out=py --grpc_python_out=py message.proto

Finally, we can write the server code that imports the two previously generated files.

# server.py
from concurrent import futures
import time

import grpc

import message_pb2
import message_pb2_grpc


class MyServiceServicerImpl(message_pb2_grpc.MyServiceServicer):
    def SayHello(self, request: message_pb2.RequestHello, context):
        print(f"Received request from {request.name}")
        response = message_pb2.ResponseHello()
        response.output = "Hello " + request.name + "!"
        return response


def serve():
    pool = futures.ThreadPoolExecutor(max_workers=1)
    server = grpc.server(pool, maximum_concurrent_rpcs=10)
    message_pb2_grpc.add_MyServiceServicer_to_server(MyServiceServicerImpl(), server)
    server.add_insecure_port('[::]:50000')
    server.start()
    while True:
        print(f"Requests in queue: {pool._work_queue.qsize()}")
        time.sleep(5)


if __name__ == '__main__':
    serve()
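Note that `_work_queue` is a private attribute of `ThreadPoolExecutor`, so this monitoring loop relies on an implementation detail rather than a public API. A minimal stdlib sketch of the same queue-peeking trick, with no gRPC involved:

```python
from concurrent import futures
import threading
import time

pool = futures.ThreadPoolExecutor(max_workers=1)
gate = threading.Event()

# Occupy the single worker with a task that blocks until we release it.
pool.submit(gate.wait)
time.sleep(0.2)  # give the worker time to dequeue the blocking task

# Subsequent submissions pile up in the executor's internal queue.
for _ in range(3):
    pool.submit(lambda: None)

# _work_queue holds the tasks no worker thread has picked up yet.
print(pool._work_queue.qsize())  # → 3

gate.set()
pool.shutdown(wait=True)
```

The server's loop does exactly this every five seconds; just be aware that a private attribute like this can change between Python versions.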

Finally, we add the server to the Dockerfile

ADD py/server.py py/server.py
ENTRYPOINT ["python3", "py/server.py"]

We build the image and run the container

docker build . -t grpc-cpp-python
docker run -it -p 50000:50000 --name grpc-cpp-python grpc-cpp-python

If you followed along, you should see log lines showing the number of requests in the queue.

Requests in queue: 0
Requests in queue: 0

Building the C++ client

We will follow a similar procedure as we did for the server. I discovered the paths of the protoc and grpc_cpp_plugin executables the hard way.

ENV PROTOC_PATH=/vcpkg/installed/x64-linux-release/tools/protobuf/protoc
ENV GRPC_CPP_PLUGIN=/vcpkg/installed/x64-linux-release/tools/grpc/grpc_cpp_plugin
RUN mkdir cpp
RUN $PROTOC_PATH --cpp_out=cpp --grpc_out=cpp --plugin=protoc-gen-grpc=$GRPC_CPP_PLUGIN message.proto

Executing the above commands produces four files: message.grpc.pb.cc, message.grpc.pb.h, message.pb.cc, and message.pb.h.

Next, we can use these files in our client code.

// client.cpp
#include <iostream>
#include <memory>
#include <string>

#include <grpcpp/grpcpp.h>

#include "message.grpc.pb.h"
#include "message.pb.h"

int main() {
    std::string connection = "127.0.0.1:50000";
    std::shared_ptr<grpc::Channel> channel = grpc::CreateChannel(connection, grpc::InsecureChannelCredentials());
    auto stub = grpctemplate::MyService::NewStub(channel);

    grpc::ClientContext context;
    grpctemplate::RequestHello query;
    query.set_name("Alex");
    grpctemplate::ResponseHello result;
    try {
        grpc::Status status = stub->SayHello(&context, query, &result);
        std::cout << status.error_message() << std::endl;
        std::cout << result.output() << std::endl;
    } catch (const std::exception& ex) {
        std::cerr << "Exception caught: " << ex.what() << std::endl;
    }
    std::cout << "Done" << std::endl;
    return 0;
}

At this point, we could build the client directly with g++ and specify all the include and library paths manually, but a more elegant way is to use a CMakeLists.txt.

cmake_minimum_required(VERSION 3.22)
project(cppclient)

set(Protobuf_USE_STATIC_LIBS ON)

find_package(Protobuf REQUIRED)
find_package(gRPC REQUIRED)

add_executable(client client.cpp message.grpc.pb.cc message.pb.cc)

target_include_directories(client PUBLIC
    ${CMAKE_CURRENT_BINARY_DIR}
    ${Protobuf_INCLUDE_DIR}
)

target_link_libraries(client
    protobuf::libprotobuf
    gRPC::grpc
    gRPC::grpc++
)

Given these files, our Dockerfile will look like below.

ADD cpp/client.cpp cpp/client.cpp
ADD cpp/CMakeLists.txt cpp/CMakeLists.txt
ENV CMAKE_TOOL_CHAIN=/vcpkg/scripts/buildsystems/vcpkg.cmake
RUN mkdir cpp/build && cd cpp/build && cmake -DVCPKG_TARGET_TRIPLET=x64-linux-release -DCMAKE_TOOLCHAIN_FILE=$CMAKE_TOOL_CHAIN .. && make

Executing the server and the client

I wanted to keep the interaction simple: everything you need is contained in a single Docker container. You build the image, which installs everything, builds both the server and the client, and provides an entry point that executes them both.

For this, I have provided a basic start.sh file. It starts the server in the background, waits a second for it to be ready, then executes the client.

#!/bin/bash
python3 py/server.py &
sleep 1
cpp/build/client
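The fixed one-second sleep works for this demo, but it races if the server happens to take longer to start. A sturdier alternative (hypothetical, not part of the repo) is to poll the port until it accepts connections; a stdlib-only sketch:

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 10.0) -> bool:
    """Poll until a TCP connection to (host, port) succeeds or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # Success means the server's listening socket is accepting connections.
            with socket.create_connection((host, port), timeout=1.0):
                return True
        except OSError:
            time.sleep(0.1)  # not up yet; back off briefly and retry
    return False
```

start.sh could then invoke this helper (exiting non-zero on timeout) in place of `sleep 1` before launching the client.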

The Dockerfile is modified in the following way

ADD start.sh start.sh
RUN chmod +x start.sh
ENTRYPOINT ["./start.sh"]

Finally, the following commands build and run the container.

docker build . -t grpc-cpp-python
docker rm grpc-cpp-python
docker run -it -p 50000:50000 --name grpc-cpp-python grpc-cpp-python

The output of the container should look like below.

Requests in queue: 0
Received request from Alex

Hello Alex!
Done

That’s it, folks. I believe this is one of the most basic usages of gRPC, with everything contained in a single Dockerfile.
