Real-Time Neural Style Transfer in Quake3

In this article, I will demonstrate how to set up a Real-Time Neural Style Transfer (NST) pipeline on the open-source game ioQuake3 using the Ubuntu operating system. How well this runs depends on how much compute power you have available; to set expectations, a single RTX 2060 produces something closer to a slide show than a playable frame rate.

The first thing we need to do is export screen buffers on demand. Luckily for us, this is relatively simple with an open-source video game that renders using OpenGL. We will use glReadPixels() to export JPEGs at a blistering speed, and then use TensorFlow Keras in Python to set up a daemon that reads these exported screen buffers and converts them to NST frames as fast as it possibly can.

First things first: clone the ioQuake3 repository, change directory into it, install the SDL2 development package, and then compile the project like so:

git clone https://github.com/ioquake/ioq3
cd ioq3
sudo apt install libsdl2-dev
make

Hopefully you got through that process with no errors; if not, you can join us over at the official ioQuake3 Discord and ask for some assistance.

The next step is to modify the ioQuake3 source code to export JPEG screen buffers on demand. For this you will need to navigate to ioq3/code/renderergl2/tr_backend.c, since ioQuake3 uses the GL2 renderer by default. Open this file in an appropriate text editor such as Visual Studio Code, or Gedit, which comes pre-installed with the official distribution of Ubuntu.

In this file, navigate to the function that deals with swapping the screen buffer, RB_SwapBuffers; as of the 29th of July 2021 this starts on line 1346. You will need to add two lines of code after the texture swapping test, like so:
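
A minimal sketch of those two lines, assuming the exported frame is called buff.jpg under the default screenshots folder (the filename is my choice here; any name works as long as the Python daemon looks for the same one):

	// If the previous frame has been consumed (deleted), export a fresh one.
	// 'v' is my username -- swap in your own, and keep the filename in sync
	// with the Python daemon.
	if( access( "/home/v/ioq3/baseq3/screenshots/buff.jpg", F_OK ) == -1 )
		RB_TakeScreenshotJPEG( 0, 0, glConfig.vidWidth, glConfig.vidHeight, "screenshots/buff.jpg" );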

Because we are using the C function access(), you will also need to include the unistd header at the beginning of the file, after line 21, like so:
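
#include <unistd.h>	// gives us access()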

So, back to the two lines of code we added; let's explain how they work.

What we are doing here is a simple check that a file exists; if the file does not exist, we export a screenshot as a JPEG. This lets us trigger the ioQuake3 game into generating a new screenshot every time we delete the old one! Keep in mind that the RB_TakeScreenshotJPEG() function saves to a relative directory depending on which mod is loaded; by default, in a vanilla Quake3 game with no mods, this will be the ioq3/baseq3 directory in your home folder. As such, you will need to change the username 'v' in the access() path to your own.

The RB_TakeScreenshotJPEG() function uses glReadPixels() to read the screen buffer, as you can see in the RB_ReadPixels() function:
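
Here is a simplified sketch of what RB_ReadPixels() boils down to (the real ioq3 version also pads each row out to the current GL_PACK_ALIGNMENT and returns the padding to the caller):

// Simplified: read the back buffer as tightly packed 24-bit RGB.
byte *RB_ReadPixels( int x, int y, int width, int height )
{
	byte *buffer = ri.Hunk_AllocateTempMemory( width * height * 3 );

	// Read straight out of the framebuffer. Requesting GL_FLOAT here
	// instead would hand back values already normalised to 0-1.
	qglReadPixels( x, y, width, height, GL_RGB, GL_UNSIGNED_BYTE, buffer );

	return buffer;
}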

Keep that in mind if you ever export raw buffers as GL_FLOAT for training data using glReadPixels(): if you don't plan to mean or sample-wise normalise your buffers, the data will come already 0–1 normalised for you.

There's just one more thing you will need to do: we can't actually access the RB_TakeScreenshotJPEG() function from the tr_backend.c file at the moment. We will need to expose it in the appropriate header file. Head over to ioq3/code/renderergl2/tr_local.h and pop our function prototype in after line 1974, like so:
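
Assuming the signature matches the definition (in tr_init.c in current revisions), the prototype is just:

void RB_TakeScreenshotJPEG( int x, int y, int width, int height, char *fileName );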

Brilliant! We're done; you can now type make in the console again to compile our changes.

Now it's time to create the Python daemon, which will run as fast as it can (on a 1 ms infinite loop), reading the output JPEG, running an NST neural network over it, and then writing the result to our tmp folder as /tmp/newbuff.bmp, or whatever image type you desire. For this, we will be using the NST neural network from the Keras.io code examples page here. This code needs only minimal adaptation so that the infinite loop does only what it needs to; this way, the algorithm runs as fast as it can in real time on the screenshots output by the ioQuake3 game.

We ultimately want our loop to look something like this:
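
A minimal sketch of that loop, assuming a run_style_transfer() wrapper around the Keras NST code (the names here are illustrative; the full script below is the real reference):

import os
import time

inim = "/home/v/ioq3/baseq3/screenshots/buff.jpg"  # screenshot exported by ioQuake3
outim = "/tmp/newbuff.bmp"                         # result picked up by the image viewer

while True:
    if os.path.isfile(inim):
        run_style_transfer(inim, outim)  # one NST pass over the latest frame
        os.remove(inim)                  # deleting the input triggers a fresh screenshot
    time.sleep(0.001)                    # 1 ms poll interval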

The full script file is available here: https://pastebin.com/WF6aasEz

At the end, we output the NST-modified image to the tmp directory as /tmp/newbuff.bmp and delete the old screenshot, triggering the ioQuake3 game to output a new one, and then repeat the process. The default Ubuntu image viewer is Eye of GNOME, but on Xubuntu I use the Ristretto image viewer, which automatically reloads modifications to the image file it has open in real time, essentially giving you a second screen side by side with the game window. Eye of GNOME should also do this, but in case it doesn't, you know what to do.

Assuming you've saved yourself a bit of time and grabbed the pre-modified script from here, you will just need to modify the config variables near the beginning:

You will need to update the inim path with your user home directory like before, replacing 'v' with your username. You will need to set the stim variable to the path of a style image you desire to use; it doesn't have to be a JPEG. The img_nrows variable changes the size of your output image, and the iterations variable defines the 'deepness' of the end result of the NST neural network (personally I tend to use 6 iterations, but if you can run at 10–33, that's even better). It's also the main factor that determines how fast the network can output new images: the higher the number, the better the quality of the outputs, but the longer they take to generate.
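
For illustration, a plausible set of values (the paths are placeholders to adapt to your own system; the variable names are the ones used in the script):

inim = "/home/v/ioq3/baseq3/screenshots/buff.jpg"  # screenshot exported by the game
stim = "/home/v/styles/style.png"                  # style image; any format Pillow reads
img_nrows = 256                                    # output image height in pixels
iterations = 6                                     # NST optimisation steps per frame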

Now all you need to do is run the ioQuake3 game, run the Python NST script in a separate console window, and then open the /tmp/newbuff.bmp file in the image viewer of your choice. If you've done all that correctly, you should end up with something like this:

This is the render speed of my RTX 2060 OC at 6 iterations and a little higher at the end.

Notes for basic optimisation: you might want to use tf.keras.preprocessing.image.load_img over tf.keras.utils.get_file, as the latter is technically for downloading images from the web, so there may be some performance benefit in not using that function to load local files. Also, consider which format you export images from the ioQuake3 game as, import them into Python as, and then output the new image as; saving and loading different image file types each have their own overheads. If you can export the raw pixels from ioQuake3 as GL_FLOAT and load them into Python as a NumPy array, you'd save on the superfluous JPEG save and load.
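
For example, a hypothetical raw float dump could be slurped straight into an array with no decode step (the filename and resolution here are placeholders):

import numpy as np

w, h = 1920, 1080
# Raw GL_FLOAT RGB pixels are already 0-1 normalised.
frame = np.fromfile("/tmp/rawbuff.f32", dtype=np.float32).reshape(h, w, 3)
frame = frame[::-1]  # glReadPixels() starts at the bottom-left, so flip vertically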

If you don't want to get too invasive about grabbing screen buffers from video games, for example if you'd like to grab the screen buffer of a closed-source game, then you can check out this Stack Overflow answer, which shows the fastest method of grabbing whole screen captures on Linux. Keep in mind the slowest part of that code is the savepng() function: my benchmarks show it running at 3,000+ FPS with no processing on the grabbed image, or 6 FPS with savepng(), on my 1920x1080 display (check the benchmark here).
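
The heart of that approach boils down to an X11 XGetImage() call on the root window, roughly like this (a bare-bones sketch; the linked answer adds the speed-ups and the savepng() step):

// Build with: gcc grab.c -lX11
#include <X11/Xlib.h>
#include <X11/Xutil.h>

int main( void )
{
	Display *dpy = XOpenDisplay( NULL );
	Window root = DefaultRootWindow( dpy );
	XWindowAttributes attr;
	XGetWindowAttributes( dpy, root, &attr );

	// Grab the whole screen as a ZPixmap; the grab itself is fast,
	// it's the image encode afterwards that dominates the runtime.
	XImage *img = XGetImage( dpy, root, 0, 0, attr.width, attr.height,
	                         AllPlanes, ZPixmap );

	XDestroyImage( img );
	XCloseDisplay( dpy );
	return 0;
}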

You could also try converting this Keras model to a TFLite model and integrating TFLite directly into the ioQuake3 source code using TFLite Micro. I'm not sure how well that would go, but it might be worth looking into.

Edit: Here’s one that actually runs in real-time.
