Counter-Strike is one of the most popular first-person shooter games. The game pits two teams against each other: the Terrorists and the Counter-Terrorists. Each side is tasked with eliminating the other while also completing its own objectives.
After spending several hundred hours killing Terrorists and defusing bombs, I thought: why not create a bot and see how well it fares against human players? (P.S.: We don't encourage cheating or the use of aimbots and other hacks. Keep the game clean.)
In this article we will train our own convolutional neural network object detection classifier from scratch that can detect the players in the game. In subsequent articles, we will use OpenCV and PyAutoGUI to automate the bot.
- TensorFlow-GPU: We won't go into detail on how to install this. Follow this beautifully explained video and you are good to go:
2. Set up TensorFlow Directory and Anaconda Virtual Environment.
- We will first create an object classifier that can detect the Counter-Terrorist and Terrorist players in the game. For this we created a dataset of players, combining photos from a dataset we found online with photos we gathered ourselves (in-game screenshots and Google). The dataset is available here: https://github.com/Hrishi321/Python-Plays-CS.
- Download the TensorFlow Object Detection API GitHub repository:
Create a folder directly in C: and name it “tensorflow1”. This working directory will contain the full TensorFlow object detection framework, as well as our training images, training data, trained classifier, configuration files, and everything else needed for the object detection classifier.
- Download the Object detection models according to your needs:
TensorFlow provides several object detection models (pre-trained classifiers with specific neural network architectures) in its model zoo. Some models (such as the SSD-MobileNet model) have an architecture that allows for faster detection but with less accuracy, while others (such as the Faster-RCNN model) give slower detection but with more accuracy. We are using the Faster-RCNN-Inception-V2 model. Extract the downloaded faster_rcnn_inception_v2_coco_2018_01_28.tar.gz file (in our case) to the C:\tensorflow1\models\research\object_detection folder.
- Download this wonderful repository located on this page and extract all of its contents directly into the C:\tensorflow1\models\research\object_detection folder.
Many of our object detection scripts are taken from here, with slight modifications to suit our requirements.
- Since we want to train our own object detector, delete the following files (do not delete the folders):
1. All files in \object_detection\images\train and \object_detection\images\test
2. The “test_labels.csv” and “train_labels.csv” files in \object_detection\images
3. All files in \object_detection\training
4. All files in \object_detection\inference_graph
- Annotate the images using LabelImg. This process is basically drawing boxes around the objects in each image.
LabelImg saves a .xml file containing the label data for each image. These .xml files will be used to generate TFRecords, which are one of the inputs to the TensorFlow trainer. Once you have labeled and saved each image, there will be one .xml file for each image in the \test and \train directories.
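As an illustration, a LabelImg annotation for an image with a single player box might look like the snippet below (the file name, coordinates, and the assumption that 'c' is the dataset's Counter-Terrorist label are ours):

```xml
<annotation>
    <folder>train</folder>
    <filename>ct_001.jpg</filename>
    <size>
        <width>800</width>
        <height>600</height>
        <depth>3</depth>
    </size>
    <object>
        <name>c</name>
        <bndbox>
            <xmin>100</xmin>
            <ymin>120</ymin>
            <xmax>220</xmax>
            <ymax>340</ymax>
        </bndbox>
    </object>
</annotation>
```

Each `<object>` element corresponds to one drawn box; an image with several players will have several `<object>` entries.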
- We now generate the TFRecords that serve as input data to the TensorFlow training model. We use the xml_to_csv.py and generate_tfrecord.py scripts from Dat Tran’s Raccoon Detector dataset, with some slight modifications to work with our directory structure.
First, the image .xml data will be used to create .csv files containing all the data for the train and test images. From the \object_detection folder, issue the following command in the Anaconda command prompt:
(tensorflow1) C:\tensorflow1\models\research\object_detection> python xml_to_csv.py
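Under the hood the conversion is straightforward: each .xml annotation is parsed and flattened into one CSV row per bounding box. This is a minimal sketch of that parsing step (not the exact xml_to_csv.py script, which uses pandas and globs over the \train and \test folders):

```python
import xml.etree.ElementTree as ET

def xml_to_rows(xml_text):
    """Parse one PASCAL VOC annotation and return one tuple per bounding box:
    (filename, width, height, class, xmin, ymin, xmax, ymax)."""
    root = ET.fromstring(xml_text)
    filename = root.findtext("filename")
    width = int(root.findtext("size/width"))
    height = int(root.findtext("size/height"))
    rows = []
    for obj in root.findall("object"):
        box = obj.find("bndbox")
        rows.append((
            filename, width, height, obj.findtext("name"),
            int(box.findtext("xmin")), int(box.findtext("ymin")),
            int(box.findtext("xmax")), int(box.findtext("ymax")),
        ))
    return rows
```

The real script simply runs this over every .xml file in a directory and writes the accumulated rows to train_labels.csv or test_labels.csv.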
Next, open the generate_tfrecord.py file in a text editor. Replace the label map starting at line 31 with our own label map, where each object is assigned an ID number:

# TO-DO replace this with label map
def class_text_to_int(row_label):
    if row_label == 'c':
        return 1
    elif row_label == 'ch':
        return 2
    elif row_label == 't':
        return 3
    elif row_label == 'th':
        return 4
    else:
        return None
Then, generate the TFRecord files by issuing these commands from the \object_detection folder:

python generate_tfrecord.py --csv_input=images\train_labels.csv --image_dir=images\train --output_path=train.record
python generate_tfrecord.py --csv_input=images\test_labels.csv --image_dir=images\test --output_path=test.record

These will be used to train the new object detection classifier.
- Create Label Map and Configure Training:
The label map tells the trainer what each object is by defining a mapping of class names to class ID numbers.
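For our four classes, a labelmap.pbtxt along these lines would match the ID numbers used in generate_tfrecord.py (the file name follows the Object Detection API convention; the class names are the dataset's labels):

```
item {
  id: 1
  name: 'c'
}

item {
  id: 2
  name: 'ch'
}

item {
  id: 3
  name: 't'
}

item {
  id: 4
  name: 'th'
}
```

Note that the num_classes value in the training configuration file must match the number of item entries here (4).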
See, that was easy!
- Now it's time to run the training:
From the \object_detection directory, issue the following command to begin training:
python train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/faster_rcnn_inception_v2_pets.config
If everything has been set up correctly, TensorFlow will initialize the training.
The training routine periodically saves checkpoints about every five minutes. You can terminate the training by pressing Ctrl+C while in the command prompt window. I typically wait until just after a checkpoint has been saved to terminate the training. You can terminate training and start it later, and it will restart from the last saved checkpoint. The checkpoint at the highest number of steps will be used to generate the frozen inference graph.
- Export Inference Graph:
From the \object_detection folder, issue the following command, replacing the “XXXX” in “model.ckpt-XXXX” with the highest-numbered checkpoint file in the training folder:
python export_inference_graph.py --input_type image_tensor --pipeline_config_path training/faster_rcnn_inception_v2_pets.config --trained_checkpoint_prefix training/model.ckpt-XXXX --output_directory inference_graph
This creates a frozen_inference_graph.pb file in the \object_detection\inference_graph folder. The .pb file contains the object detection classifier.
- Using the Newly Trained Object Detection Classifier:
To test our object detector, move a picture of the object or objects into the \object_detection folder, and run Object_detection_image.py, using the suitable image name.
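The detector returns a box, a score, and a class ID for every candidate detection, and the script only draws those above a confidence threshold. That filtering step can be sketched like this (a minimal illustration, not the exact Object_detection_image.py code; the threshold value is our choice):

```python
def filter_detections(boxes, scores, classes, min_score=0.6):
    """Keep only detections whose confidence meets the threshold.

    boxes:   list of (ymin, xmin, ymax, xmax) in normalized coordinates
    scores:  list of confidence values in [0, 1]
    classes: list of integer class IDs (matching labelmap.pbtxt)
    """
    return [
        (box, score, cls)
        for box, score, cls in zip(boxes, scores, classes)
        if score >= min_score
    ]
```

Raising min_score trades missed players for fewer false positives, which matters once the bot starts acting on detections.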
We are able to detect the Counter-Terrorist and Terrorist players successfully. In the next article, we will try to automate the shooting process.