
Enterprise Pixelflood :) — part 4
In part 1 of this series Pixelflood was introduced. Part 2 explained the infrastructure in more detail. Part 3 was all about CI/CD and the DevOps tools we used.
This part, part 4, is the final part of this series. It’s about the application architecture and the lessons learnt. Yes… we’ll probably do this again next year and very much want to learn… and improve… to be (even ;) ) more successful.
Application architecture
Our Pixelflood “client”, called Pixelflut, consists of multiple components. The components and what they (should) do are described below.
Initially our Pixelflood client started out as a very simple client, probably as it should be and, some people would even say, as it should have stayed :). The first versions of the client had just enough functionality to send coordinate+color packets as fast and efficiently as possible. Later on, more and more functionality was added (without losing effectiveness, by the way).
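For context: the Pixelflut wire protocol is plain ASCII, one newline-terminated `PX <x> <y> <rrggbb>` command per pixel. A minimal early-style client could look like the sketch below (the server address, canvas size and colour are placeholders, not our actual configuration):

```csharp
using System;
using System.Net.Sockets;
using System.Text;

class MinimalPixelflutClient
{
    static void Main()
    {
        // Placeholder endpoint: the real address and port depend on the event setup.
        using (var tcp = new TcpClient("pixelflood.example", 1234))
        using (var stream = tcp.GetStream())
        {
            var batch = new StringBuilder();
            var rng = new Random();

            while (true)
            {
                batch.Clear();
                // Group many commands into a single write; small per-pixel
                // writes would waste most of the available bandwidth.
                for (int i = 0; i < 500; i++)
                {
                    int x = rng.Next(0, 1280);
                    int y = rng.Next(0, 720);
                    batch.Append("PX ").Append(x).Append(' ').Append(y).Append(" FF6600\n");
                }
                byte[] bytes = Encoding.ASCII.GetBytes(batch.ToString());
                stream.Write(bytes, 0, bytes.Length);
            }
        }
    }
}
```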
Not only did we make a lot of performance improvements, we also adapted to Pixelflood server changes and changes in the game rules, and we adopted new functionality requested by our supporters :). These supporters were mostly the people who lent us their compute, memory and bandwidth capacity to host the OKD cluster nodes (discussed in part 2 of the series) and, inherently, the Pixelflut client software.
Just a few examples of functionality which was added over time:
- We wanted to be able to easily swap the picture the clients were sending
- We wanted statistics: queues, bandwidth, memory usage and more
- We wanted to be able to (temporarily) disable clients
- We wanted to be able to pool clients/group clients together
- We wanted to be able to dynamically increase and decrease threads
- We wanted to be able to dynamically change timeouts, # of packets grouped together, and more
- We wanted to be able to change the connection string: IP address, port number and protocol (we support both UDP and TCP)
We decided that adding this kind of logic to the application code would not be ideal, which is why the application design was extended with some “rules management” in the master component (still not ideal, of course, as it is, to some extent, still part of the code). With this addition we were able to change parameters on-the-fly without restarting the clients.
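To sketch the idea (all names and the command format below are illustrative, not the actual Pixelflut client types): the send loops read their parameters from a shared settings object on every iteration, and a command from the master simply mutates that object.

```csharp
using System.Threading;

// Illustrative sketch only: mutable runtime settings shared between the
// component that receives master updates and the send loops.
public class RuntimeSettings
{
    private int _packetsPerBatch = 500;
    private int _sendTimeoutMs = 2000;
    private int _enabled = 1;

    public int PacketsPerBatch
    {
        get => Volatile.Read(ref _packetsPerBatch);
        set => Volatile.Write(ref _packetsPerBatch, value);
    }

    public int SendTimeoutMs
    {
        get => Volatile.Read(ref _sendTimeoutMs);
        set => Volatile.Write(ref _sendTimeoutMs, value);
    }

    public bool Enabled
    {
        get => Volatile.Read(ref _enabled) == 1;
        set => Volatile.Write(ref _enabled, value ? 1 : 0);
    }

    // Applies a command pushed by the master, e.g. "set packets-per-batch 1000"
    // (this command format is made up for the example).
    public void Apply(string command)
    {
        var parts = command.Split(' ');
        if (parts.Length == 3 && parts[0] == "set")
        {
            switch (parts[1])
            {
                case "packets-per-batch": PacketsPerBatch = int.Parse(parts[2]); break;
                case "send-timeout-ms": SendTimeoutMs = int.Parse(parts[2]); break;
                case "enabled": Enabled = parts[2] == "true"; break;
            }
        }
    }
}
```

Because a send loop re-reads PacketsPerBatch at the top of each batch, a pushed change takes effect on the very next batch, with no restart and no downtime.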
In short:
- We had an infrastructure based on OpenShift (OKD) which we could easily expand (scale out) by adding a system (PC, laptop, etc.), deploying a virtual machine and running an Ansible playbook.
- We had a CI/CD setup to make sure code commits got automatically built, deployed, tested and released to production without manual intervention.
- We had application logic “outside” of the client’s code, so we could change various parameters on-the-fly without code changes or application downtime. Ideally we would have used a proper rules management solution to define and apply rules; in our case we’re still “limited” to the logic defined in the master component’s code.

Now let’s have a look at the components of our Pixelflut client (a simplified wiring sketch follows the list):
ConsoleHost:
- Handles user input and output for the client.
- Manages the underlying components: PoolHost, Swarm service, Stats collector, etc.
- Reads and processes settings from a configuration file and from command line arguments.
PixelFlutSwarmPoolHostMediator:
- Acts as a mediator by subscribing for updates from the Swarm master and pushing these updates (which are in fact new commands) towards the PixelFlutPoolHost. Every new connection will be made with these new commands.
PixelFlutPoolHost:
- Initiates dynamic pools (of clients) based on parameters or commands given from the mediator.
- Manages vital system resources (memory, TCP stack) to make sure the system stays healthy.
- Pushes results towards listener (PixelFlutPoolHostStatsCollector).
PixelFlutPoolHostStatsCollector:
- Collects and manages outcomes retrieved from the PixelFlutPoolHost.
- Calculates averages and other useful counters.
PixelFlutPool:
- Initiates PixelFlut clients dynamically based on connection strings and connection factories.
- Runs the client and makes sure it stays active.
- Pushes results towards listeners (PixelFlutPoolHost).
PixelFlutClient:
- Connects to the PixelFlood server (the actual server component controlling the screen).
- Sends commands (coordinates and color code).
- Pushes results towards listeners (PixelFlutPool).
PixelFlutSwarmService:
- Connects to a Swarm master.
- Receives updates on request, or subscribes for automatic updates whenever changes are made on the master.
- Pushes results/updates towards listeners (SwarmPoolHostMediator).
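To make the listener chain concrete, here is a heavily simplified wiring sketch (the interfaces and method bodies are illustrative; the real components carry far more state, error handling and counters):

```csharp
using System;

// Illustrative interfaces and classes only, not the actual implementation.
public interface IResultListener
{
    void OnResult(long pixelsSent, long bytesSent);
}

public class PixelFlutPoolHostStatsCollector : IResultListener
{
    private long _pixels, _bytes;

    public void OnResult(long pixelsSent, long bytesSent)
    {
        // Single-threaded sketch; the real collector aggregates safely
        // across pools and also computes averages.
        _pixels += pixelsSent;
        _bytes += bytesSent;
    }

    public void Report() =>
        Console.WriteLine($"pixels: {_pixels}, bytes: {_bytes}");
}

public class PixelFlutPoolHost
{
    private readonly IResultListener _listener;

    public PixelFlutPoolHost(IResultListener listener) => _listener = listener;

    // Called from the pools below; results bubble up to the stats collector.
    public void PushResult(long pixels, long bytes) => _listener.OnResult(pixels, bytes);

    // Called by the mediator when the Swarm master pushes a new command;
    // new connections are then made with the new parameters.
    public void ApplyCommand(string command) { /* rebuild/retune pools here */ }
}

public class PixelFlutSwarmPoolHostMediator
{
    private readonly PixelFlutPoolHost _poolHost;

    public PixelFlutSwarmPoolHostMediator(PixelFlutPoolHost poolHost) => _poolHost = poolHost;

    // Subscribed to updates coming in via the PixelFlutSwarmService.
    public void OnMasterUpdate(string command) => _poolHost.ApplyCommand(command);
}
```

The ConsoleHost constructs and owns this chain, with the PixelFlutSwarmService feeding updates into the mediator.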
Results/lessons learnt
So, that’s it for the application, the DevOps team, the process and tools, and the infrastructure. Did we learn anything? In short: yes! We learnt a lot.
Most of the things written in the infrastructure and CI/CD blogs were, to a certain extent, new to our team or to part of our team. Outside of the Campzone event we’re a group of friends, not a group of colleagues; we’re not used to collaborating in the way we did during these days. Setting up VSTS (Visual Studio Team Services) to work together with OpenShift was another first for us. All of this was a great experience and we learnt a lot.
To mention just a few things we learnt and some possible improvements:
- We created some Pixelflood server code to virtualize the screen hardware (and the real Pixelflood server) used at the event. This allowed for local testing (UAT) before releasing the application to production (a minimal sketch of such a mock server follows this list). It also allowed us to compare clients: at some point we tested a Rust client from a ‘friendly’ competitor to see how it performed.
- Having a scale-out infrastructure based on OpenShift (enterprise Kubernetes), with Continuous Integration and Continuous Deployment in place, gives you a very attractive environment for software development.
- Our client code performs rather well and can fill the available network capacity. For instance, when 1 Gb/s is available, at least 0.9 Gb/s will be used.
- Scaling out, adding more capacity, is very easy: simply press the button and another 1, 10 or 50 clients get deployed within the cluster.

- Although we initially thought bandwidth would not be the issue, it actually is: even an efficient client pushing data at 1 Gb/s will be outperformed by one on a 10 Gb/s connection. We had expected the competition to be more ‘fair’, in the sense that the server would equalize the competitors, so we did not anticipate this. The upside: when we increase our bandwidth, the implementation will not need any changes.
- For fun, and to expand our knowledge, we are thinking of adding some industry-standard middleware, for example for decision/rules management, between the master and client components in the Pixelflut client architecture.
- Another idea is to transform the application architecture into a more standards-based, API-based, microservices architecture and maybe even add something like Istio.
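To illustrate the first lesson in the list above: a mock server for local testing can be very small. The sketch below (the port and the behaviour are placeholders, not our actual test server) accepts TCP clients and merely counts PX commands instead of driving a screen, which is already enough to compare the pixel rates of different clients:

```csharp
using System;
using System.IO;
using System.Net;
using System.Net.Sockets;
using System.Threading;
using System.Threading.Tasks;

// A minimal stand-in for the real Pixelflood server: it accepts TCP clients
// and counts PX commands instead of driving the screen hardware.
class MockPixelfloodServer
{
    static long _pixels;

    static void Main()
    {
        var listener = new TcpListener(IPAddress.Any, 1234); // placeholder port
        listener.Start();
        Console.WriteLine("mock Pixelflood server listening on :1234");

        while (true)
        {
            TcpClient client = listener.AcceptTcpClient();
            Task.Run(() => HandleClient(client));
        }
    }

    static void HandleClient(TcpClient client)
    {
        using (client)
        using (var reader = new StreamReader(client.GetStream()))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                if (line.StartsWith("PX "))
                    Interlocked.Increment(ref _pixels);
            }
        }
        Console.WriteLine($"client disconnected; total pixels so far: {_pixels}");
    }
}
```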
Special thanks
Thank you very much for reading this Enterprise Pixelflood blog series. Feel free to leave any comments, questions or suggestions.
Special thanks go out to Clan Badjas [CBJ], [DeF] and [Watt]. Many thanks to all the other teams and people who supported us in making this happen.
See you in 2019!

