Experimenting with Delivering a Reactive Python Application
Recently, I had to extend a console script written in Python 3 to be reactive. The new incarnation needed to execute a task on an interval or when it received a TCP signal. Sometimes it would receive a burst of messages; other times, a signal could arrive very close to when the timer fired. To avoid redundant processing, the combined stream of events from the timer and the network was debounced. Since I have experience with ReactiveX in other languages, I knew it would be a good fit and decided this was the perfect time to learn RxPY and Python 3’s asyncio.
If you just want to skip ahead to the code, you can find it on GitHub.
To start, I needed a simple TCP server. Luckily, the Python documentation has an example echo server. Getting it to work with RxPY is easy: after reading the data, the server pushes it onto an Rx Subject. A Subject is both an Observer and an Observable, so data can be pushed into it while other parts of the codebase listen for those updates.
The other part is the timer. Rx has one built in; in this context, it just needs a Scheduler that hooks into asyncio: RxPY’s AsyncIOScheduler. With that, I have all my data sources. For my problem, I merged the two streams and debounced the resulting observable sequence. Should I need to transform the data coming over the socket, or do something else entirely in the future, RxPY has me covered: it ships well over a hundred operators just waiting to be used.
Over the years, I’ve come to appreciate being the one responsible for running what I build. To make that as easy as possible, I really want a single package to deliver. Docker is all the rage right now, and for good reason. But after building on the official Python base image, my image weighed in at about 750MB. This is just too big; I want something smaller.
This is when I turned to pex. With it, I can generate a single executable file that bundles all of the project’s dependencies along with my source. All it needs to run is a Python interpreter. This might seem like a good place to stop, but there are more problems to solve.
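A typical invocation is a one-liner; the project layout, entry point `myapp.main:main`, and file names below are illustrative, but the flags (`-r`, `-e`, `-o`) come from pex’s CLI:

```shell
# Bundle the current project, its requirements, and an entry point
# into a single executable archive (names here are illustrative).
pex . -r requirements.txt -e myapp.main:main -o myapp.pex

# The result runs anywhere a compatible Python interpreter exists.
./myapp.pex
```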
To build the pex file reliably, you need the same version of Python installed that I had. This is where Docker can add real value, if we can solve that image-size issue. With pex and an advanced Docker technique I’ve seen called “Dockerception”, we can. With this technique, one image performs the heavy lifting of installing packages and generating a binary like our pex file. Running that builder image writes the pex file and a Dockerfile, as a tar stream, to standard out. That tar stream becomes the input to a second Docker build, which starts from a much smaller base image and doesn’t waste layers building anything. The result is an image about a third the size. An added benefit is that Docker is the only tool needed to build and run the application, making the entire lifecycle extremely portable.
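Concretely, the pipeline can be sketched like this; the file names, image tags, and base images are illustrative, and `docker build -` reading a tar build context from stdin is standard Docker behavior:

```shell
# builder.Dockerfile -- does the heavy lifting, then streams its
# artifacts (the pex file plus a runtime Dockerfile) to stdout as tar:
#
#   FROM python:3.6
#   COPY . /app
#   WORKDIR /app
#   RUN pip install pex && \
#       pex . -r requirements.txt -e myapp.main:main -o myapp.pex
#   CMD tar -cf - myapp.pex Dockerfile.runtime
#
# Dockerfile.runtime -- the slim image the tar stream builds:
#
#   FROM python:3.6-slim
#   COPY myapp.pex /myapp.pex
#   CMD ["/myapp.pex"]

# Build the builder, run it, and pipe its tar stream straight into a
# second docker build ("-" tells docker to read the context from stdin).
docker build -t myapp-builder -f builder.Dockerfile .
docker run --rm myapp-builder | docker build -t myapp -f Dockerfile.runtime -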