A Basic Algo Trading System In Rust: Part V: Polishing, Containerization, Cloud Deployment
We’re back for the fifth and final installment of this series. This time we’ll be turning the app into something production-ready and deploying it (as a Docker image) on Google Cloud Platform.
Architectural Decision Record
- For the docker image, we’ll use a two-stage build (separate compile and deployment image bases) for efficiency, but not go all the way to a “From Scratch” image
- I chose GCP somewhat arbitrarily over AWS: I have more professional experience with AWS and wanted to broaden my horizons a bit. (Note: the container can be deployed just as easily on either platform, and either can reach the associated MongoDB Atlas instance.)
- Mongo Atlas will be used as the cloud-based DB provider. It is managed, ubiquitous, and gives excellent performance.
- There’s no need for container orchestration — we’ll deploy to a single Container OS instance on Google Compute Engine.
- We’ll need a “real” logging solution for features like rolling log files — I chose log4rs.
- Per 12-Factor App standards, secrets will be moved from config to the environment.
- Because I believe in processes that never need bouncing, and to avoid the cloud-side complexities of restarting containers, the system will be modified to never need a restart; instead, it will re-initialize itself daily.
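On the secrets point, the 12-Factor approach amounts to reading configuration like connection strings from the environment at startup. A minimal sketch (the variable name MONGODB_URI is a placeholder for illustration, not necessarily the app's actual name):

```rust
use std::env;

// Read a required secret from the environment, failing fast at startup if
// it is absent. MONGODB_URI is a placeholder variable name for illustration.
fn mongo_uri() -> String {
    env::var("MONGODB_URI").expect("MONGODB_URI must be set in the environment")
}
```

Failing fast here is deliberate: a missing secret should abort startup loudly rather than surface later as a mysterious connection error.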
Reset For Day
main has been significantly refactored to restart services at the end of the calendar day:
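The refactored main isn't reproduced in this excerpt, but its shape can be sketched: an outer loop that starts the services, waits for the calendar day to turn over, then shuts them down and goes around again. The service start/stop calls below are hypothetical stand-ins, and day turnover is computed in UTC here for a stdlib-only sketch:

```rust
use std::thread;
use std::time::{Duration, SystemTime, UNIX_EPOCH};

/// Day number since the Unix epoch (UTC); it ticks over exactly at day-turnover.
fn utc_day_number(t: SystemTime) -> u64 {
    t.duration_since(UNIX_EPOCH)
        .expect("clock before epoch")
        .as_secs()
        / 86_400
}

fn run_forever() {
    loop {
        let start_day = utc_day_number(SystemTime::now());
        // start_services(); -- hypothetical: spawn MarketData, TradingService, Persistence

        // Park until the calendar day changes.
        while utc_day_number(SystemTime::now()) == start_day {
            thread::sleep(Duration::from_secs(60));
        }

        // stop_services(); -- hypothetical: signal shutdown and join threads,
        // then loop around and re-initialize everything for the new day.
    }
}
```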
Note that the Tradier market-data websocket API disconnects after 15 minutes of inactivity; since U.S. markets are closed at the point of day-turnover, the new MarketData implementation will be sitting in a reconnect loop at that moment:
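The reconnect logic can be sketched generically as a retry with capped exponential backoff, with the actual websocket dial abstracted behind a closure. This is an illustrative sketch, not the app's real code:

```rust
use std::thread;
use std::time::Duration;

/// Retry `connect` until it succeeds, doubling the delay after each failure
/// up to a one-minute cap. `connect` stands in for the real websocket dial.
fn connect_with_retry<T, E>(
    mut connect: impl FnMut() -> Result<T, E>,
    initial_delay: Duration,
) -> T {
    let mut delay = initial_delay;
    loop {
        match connect() {
            Ok(conn) => return conn,
            Err(_) => {
                thread::sleep(delay);
                delay = (delay * 2).min(Duration::from_secs(60));
            }
        }
    }
}
```

The cap matters overnight: without it, a long market-closed stretch would push the backoff delay to absurd lengths and slow the morning reconnect.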
A related change was to use Crossbeam's non-blocking try_recv instead of recv in both TradingService and Persistence for cleaner error handling.
Here is the updated TradingService main loop:
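The gist isn't embedded in this excerpt, but the shape of the loop can be illustrated with std's mpsc channel, whose try_recv has the same non-blocking semantics as Crossbeam's. The Msg type and its variants are hypothetical stand-ins for the real message types:

```rust
use std::sync::mpsc::{Receiver, TryRecvError};
use std::thread;
use std::time::Duration;

// Hypothetical message type; the real service carries quotes and order events.
enum Msg {
    Quote(f64),
    Shutdown,
}

fn trading_loop(rx: Receiver<Msg>) {
    loop {
        match rx.try_recv() {
            Ok(Msg::Quote(_px)) => {
                // Evaluate the strategy against the new quote (elided).
            }
            Ok(Msg::Shutdown) => break,
            // Nothing waiting: yield briefly instead of blocking forever.
            Err(TryRecvError::Empty) => thread::sleep(Duration::from_millis(10)),
            // Sender gone: treat it like a shutdown at day-turnover.
            Err(TryRecvError::Disconnected) => break,
        }
    }
}
```

The win over blocking recv is that Empty and Disconnected become distinct, explicitly handled cases, so the loop can exit cleanly when the producer shuts down for the day.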
Logging
We’ve already seen the simple log4rs init in the gist above.
Log4rs is, of course, patterned after the ancient, extremely popular Java library log4j. There is not a lot of interesting stuff to talk about here.
For the sake of completeness, here is my config:
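The original embed isn't reproduced here, but a representative log4rs YAML along these lines (file names, patterns, and limits are illustrative, not the app's actual values) provides console output plus a size-based rolling file:

```yaml
appenders:
  stdout:
    kind: console
    encoder:
      pattern: "{d(%Y-%m-%d %H:%M:%S)} {l} {t} - {m}{n}"
  rolling:
    kind: rolling_file
    path: "log/algo-trading.log"
    encoder:
      pattern: "{d(%Y-%m-%d %H:%M:%S)} {l} {t} - {m}{n}"
    policy:
      trigger:
        kind: size
        limit: 10 mb
      roller:
        kind: fixed_window
        pattern: "log/algo-trading.{}.log"
        count: 5
root:
  level: info
  appenders:
    - stdout
    - rolling
```

A file like this is loaded once at startup with log4rs::init_file("log4rs.yaml", Default::default()).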
[Note: I decided against some kind of structured logging solution, which is overkill for this application.]
Docker
There are countless treatises on the web regarding Docker in general and Dockerizing Rust applications in particular, so we will not dwell on mundane details.
However, I am going to mention one blocking snafu that I ran into: if deploying to an x86-64/AMD64-architecture VM (the two names denote the same architecture) rather than ARM, you must specify that platform when building the image.
On macOS, after installing the Rust x86-64 Linux target with
rustup target add x86_64-unknown-linux-gnu
run either of these equivalent commands to build the image:
docker build --platform=linux/amd64 -t algo-trading .
docker build --platform linux/x86_64 -t algo-trading .
Here is the Dockerfile. As noted earlier, I do use a separate build image to keep the resulting image size reasonable:
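The Dockerfile itself isn't embedded in this excerpt; a representative two-stage version (base-image tags, binary name, and paths are assumptions) looks like:

```dockerfile
# Stage 1: build in a full Rust toolchain image
FROM rust:1.78 AS builder
WORKDIR /app
COPY . .
RUN cargo build --release

# Stage 2: slim runtime image; only the binary and TLS roots come along
FROM debian:bookworm-slim
RUN apt-get update \
    && apt-get install -y --no-install-recommends ca-certificates \
    && rm -rf /var/lib/apt/lists/*
COPY --from=builder /app/target/release/algo-trading /usr/local/bin/algo-trading
CMD ["algo-trading"]
```

The payoff of the second stage is size: the multi-gigabyte Rust toolchain stays in the builder image, and only the compiled binary ships.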
MongoDB Atlas
To run in the cloud, one generally has several choices for the database:
- Create a DB instance manually on a manually-managed VM
- Create a Docker image to run a DB in another container VM
- Make use of a cloud-managed DB
I chose the last option. The system already runs on MongoDB, and MongoDB Atlas is Mongo's managed cloud offering, accessible from virtually any VM — in fact, I now use the Atlas instance of Mongo even when running the system locally.
It was easy to set up per the instructions linked above.
Here is a look at the Atlas dashboard with my instance running:
Cloud Deployment
As noted, I’ve been deploying the image to Google Compute Engine. Deploying to AWS ECS would be just as simple and directly analogous to the below.
A prerequisite to deploying on GCE Container OS is creating a repository in Google Artifact Registry and pushing one's image there.
For deployment, I followed the very simple instructions here, which entail just choosing VM properties, specifying the image URL and environment variables, and launching.
The VM will create and launch a container upon startup:
After creating the instance, SSH is accessible either via the GCE Console (Web UI) or via the gcloud CLI; for example:
gcloud compute ssh --zone "us-central1-f" "algo-trading-1" --project "algo-trading-1-434115"
(The exact gcloud command can be conveniently obtained right from the Instances UI.)
SSH’ing into the new instance and tailing the container log with the docker logs command, we are greeted with algo-trading running in the cloud:
And — we are done!
I’m not using the system to trade a funded account yet, but I may.
If I do, however, I will likely follow Quant Rules and keep my secrets very close to my vest, rather than on Medium. :)