Creating a Machine Learning Auto-Shoot Bot for CS: GO. Part 3.

Jan 18 · 5 min read

Continuing from Part 2 of “Creating a Machine Learning Auto-shoot Bot for CS:GO”, I have used offline training with my minimalist adaptation of the VGG network, originally designed by the Visual Geometry Group at Oxford University, to get satisfactory head-shot results in Counter-Strike: Global Offensive.

Where we last left off, I had used real-time training to teach the network I had dubbed TBVGG3 to detect and shoot at the football on the map Dust II, with little to no misfire. It was not as easy to train as I would have liked, particularly for such a simple, well-defined object: a mostly white round football with black pentagons chequered across its surface.

Of course, I knew what the problem was: with real-time training, it is hard to get the random sample variation that a good training process needs. In the original article, where I trained a small neural network on 3x3-pixel inputs to target mostly aqua-blue models, the scene was much less complex and easier to train against, having mostly a black or grey background with no real textural variation (because I had disabled textures). In CS:GO, however, I was training against noisy, textured pixel data, so to attain a well-trained model I needed to first sample a dataset and then train a network on that dataset in an offline manner.

I first designed an adaptation of the FPS bot [fps_dataset_logger.c] which takes small snapshots of whatever I aimed at in the reticle field and saves them to file when I press certain keyboard keys; another key shows a framing border so that I can line up the shots before taking samples. The samples were saved out in a range of formats, all in R,G,B order: raw pixel data at 8 bits (1 byte) per channel; floats normalised to 0–1; zero-centred floats; and finally the set I used for training, mean-centred floats. I used this program to collect 300 samples of Counter-Terrorists only for this demonstration, and 300 samples of random background scenery.

I then designed a small program [fps_offline_training.c] to do the offline training; the trained weights were supplied to the original [fps_autoshoot_tbvgg3.c] program for real-time detection and shooting. The offline training itself was very simple: it just loaded the 300 samples of Counter-Terrorists (mostly head and upper-body samples) and the 300 samples of random background scenery. This was nowhere near enough training data for a flawless model, as there are many different Counter-Terrorist player models and a number of different maps and backgrounds. I estimate a sample set of 1,000 to 4,000 would have been more adequate, but collecting one is a very time-consuming process; enemies don’t stand still for you that often, and which player models the game gives you per match is completely random. Still, the 300-sample set worked satisfactorily well, as you can see in the following play-through video:

This model was trained using a Learning Rate of 0.003 and a Gradient Gain of 0.0065

As you can see, there is minimal misfire, and the upper-body/head detection gives good results; maybe not good enough to play a real online game, but good enough to kill a few bots.

I have one more video to show which demonstrates the detection, although there are four in total on the YouTube channel.

So is there going to be a Part 4? Am I done and dusted with this? Well, sort of, because real-time training on this kind of data is just not realistic; no one wants to spend hours in-game alternating player-model scans with scenery scans. I could make the offline training process more friendly for an end user who is not accustomed to compiling source code by making a three-step system: scan a dataset in-game, train the dataset offline, and play the dataset in-game. But this would come with some moral issues. This kind of model, with enough work, could create a pretty good head-shot bot for any FPS game on the market, and for me to make that process easy and user-friendly would do the industry of online first-person shooters no favours. It may use a little more CPU and lower your frame rate a little in-game, but with enough training it has the potential to do particular harm to online games. That is not what I set out to achieve, and I don’t want to risk being known as “the guy who ruined online gaming”, so I am happy to provide the training dataset I have demonstrated in the videos above, which works best on the map Engage against Counter-Terrorist player models only, but purely for research purposes, because I know it would not create a competitive enough foe for even novice players.

So this is where I am leaving it (set ACTIVATION_SENITIVITY to 0.7 when used on the map Engage).

I think one could possibly reduce the number of filters per layer and, with a larger training dataset, still attain high detection accuracy while also reducing the CPU load, which for this small network was a tad higher than I would have liked.

I enjoyed making this series, and it was nostalgic for me to revisit Counter-Strike; the last time I had played was pre-2010 on the Source version (11 years ago to date!) and I must say it’s still a brilliant, high-quality game to play, one where dying actually makes me laugh in a “damn you, cheeky, I can’t believe you got me” kind of way.

Please forgive me, Gabe; it’s been 15 years since we exchanged emails on the topic of making games being more rewarding than making cheats, and here I still am, making cheats for Valve Software titles. I feel like I have let you down :’(, but also, 2008 was a bad year for the games industry. I did author two Half-Life 2 modifications independently, so it’s not as if I didn’t try, and I have made a whole bunch of games since then; I just suppose, in a strange way, we’ve come full circle back to my roots. I never was a very good game designer, but saying that, neither was I that good at making cheats.

Yours truly,


The Startup
