Python plays Counter Strike GO (Part 1)

Hrishikesh Saikia
Jun 11 · 5 min read

Counter-Strike is one of the most popular first-person shooter games. The game pits two teams against each other: the Terrorists and the Counter-Terrorists. Both sides are tasked with eliminating the other while also completing separate objectives.

After spending several hundred hours killing Terrorists and defusing bombs, I thought: why not create a bot and see how well it fares against human players? (PS: We don't encourage cheating or the use of aimbots and other hacks. Keep the game clean.)

In this article we will train our own convolutional neural network object detection classifier from scratch that can detect the players in the game. In the subsequent articles, we will use OpenCV and PyAutoGUI to automate the bot.

Pre-Requisites:

2. Set up TensorFlow Directory and Anaconda Virtual Environment.

The Process:

Part 1: Detection

Download the TensorFlow models repository from GitHub: https://github.com/tensorflow/models

Create a folder directly in C: and name it “tensorflow1”. This working directory will contain the full TensorFlow object detection framework, as well as our training images, training data, trained classifier, configuration files, and everything else needed for the object detection classifier.

TensorFlow provides several object detection models (pre-trained classifiers with specific neural network architectures) in its model zoo. Some models (such as the SSD-MobileNet model) have an architecture that allows for faster detection but with less accuracy, while others (such as the Faster-RCNN model) give slower detection but with more accuracy. We are using the Faster-RCNN-Inception-V2 model. Extract the downloaded faster_rcnn_inception_v2_coco_2018_01_28.tar.gz file (in our case) into the C:\tensorflow1\models\research\object_detection folder.

https://github.com/EdjeElectronics/TensorFlow-Object-Detection-API-Tutorial-Train-Multiple-Objects-Windows-10

Many of our object detection scripts are taken from here, with slight modifications to suit our requirements.

To start from a clean slate, delete the following (keep the folders themselves):

3. All files in \object_detection\training

4. All files in \object_detection\inference_graph

To label the training images, use LabelImg (see the LabelImg GitHub repository for download links).

LabelImg in action.

LabelImg saves a .xml file containing the label data for each image. These .xml files will be used to generate TFRecords, which are one of the inputs to the TensorFlow trainer. Once you have labeled and saved each image, there will be one .xml file for each image in the \test and \train directories.
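To see what is inside one of those annotation files, here is a small sketch that parses a LabelImg-style (Pascal VOC format) .xml annotation with the standard library. The sample XML string and the class name 'c' are illustrative, not taken from our actual dataset files:

```python
# Sketch: reading one LabelImg (Pascal VOC) .xml annotation with the
# standard library. The tag names (filename, object, bndbox, xmin, ...)
# follow the Pascal VOC format that LabelImg writes.
import xml.etree.ElementTree as ET

SAMPLE_XML = """<annotation>
  <filename>ct_001.jpg</filename>
  <size><width>1280</width><height>720</height><depth>3</depth></size>
  <object>
    <name>c</name>
    <bndbox><xmin>100</xmin><ymin>50</ymin><xmax>220</xmax><ymax>300</ymax></bndbox>
  </object>
</annotation>"""

def parse_annotation(xml_text):
    """Return (filename, [(label, xmin, ymin, xmax, ymax), ...])."""
    root = ET.fromstring(xml_text)
    filename = root.findtext("filename")
    boxes = []
    for obj in root.iter("object"):
        label = obj.findtext("name")
        bb = obj.find("bndbox")
        coords = tuple(int(bb.findtext(t)) for t in ("xmin", "ymin", "xmax", "ymax"))
        boxes.append((label,) + coords)
    return filename, boxes

filename, boxes = parse_annotation(SAMPLE_XML)
print(filename, boxes)  # ct_001.jpg [('c', 100, 50, 220, 300)]
```

Each such file carries one bounding box per labeled object, which is exactly the data the TFRecord conversion scripts below consume.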

First, the image .xml data will be used to create .csv files containing all the data for the train and test images. From the \object_detection folder, issue the following command in the Anaconda command prompt:

(tensorflow1) C:\tensorflow1\models\research\object_detection> python xml_to_csv.py

Next, open the generate_tfrecord.py file in a text editor. Replace the label map starting at line 31 with our own label map, where each object is assigned an ID number.

# TO-DO replace this with label map
def class_text_to_int(row_label):
    if row_label == 'c':
        return 1
    elif row_label == 'ch':
        return 2
    elif row_label == 't':
        return 3
    elif row_label == 'th':
        return 4
    else:
        return None
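For reference, the same class-name-to-ID mapping can be written more compactly with a dict. This is purely illustrative; the tutorial script uses the if/elif chain above:

```python
# Illustrative alternative: the class-name -> ID mapping as a dict.
# The labels 'c', 'ch', 't', 'th' are the four classes used in this
# project's dataset.
LABEL_MAP = {'c': 1, 'ch': 2, 't': 3, 'th': 4}

def class_text_to_int(row_label):
    # .get returns None for unknown labels, matching the original behavior
    return LABEL_MAP.get(row_label)

print(class_text_to_int('ch'))  # 2
```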

Then, generate the TFRecord files by issuing these commands from the \object_detection folder:

python generate_tfrecord.py --csv_input=images\train_labels.csv --image_dir=images\train --output_path=train.record
python generate_tfrecord.py --csv_input=images\test_labels.csv --image_dir=images\test --output_path=test.record

These produce a train.record and a test.record file, which will be used to train the new object detection classifier.
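Before running the conversion, it can be worth sanity-checking the CSVs. A minimal sketch, assuming the column layout that xml_to_csv.py writes (filename, width, height, class, xmin, ymin, xmax, ymax); the CSV text is inlined here so the example is self-contained:

```python
# Sanity-check a labels CSV: count boxes per class and catch degenerate
# boxes. In practice you would open images/train_labels.csv instead of
# using this inlined sample.
import csv, io
from collections import Counter

CSV_TEXT = """filename,width,height,class,xmin,ymin,xmax,ymax
ct_001.jpg,1280,720,c,100,50,220,300
ct_001.jpg,1280,720,ch,130,50,180,110
t_002.jpg,1280,720,t,600,200,700,450
"""

def summarize(csv_text):
    counts = Counter()
    for row in csv.DictReader(io.StringIO(csv_text)):
        # every box must have positive width and height
        assert int(row["xmax"]) > int(row["xmin"]), row
        assert int(row["ymax"]) > int(row["ymin"]), row
        counts[row["class"]] += 1
    return counts

print(summarize(CSV_TEXT))  # Counter({'c': 1, 'ch': 1, 't': 1})
```

Badly labeled boxes (zero width or height) are a common cause of silent training problems, so failing fast here saves time later.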

The label map tells the trainer what each object is by defining a mapping of class names to class ID numbers.

item {
  id: 1
  name: 'c'
}

item {
  id: 2
  name: 'ch'
}

item {
  id: 3
  name: 't'
}

item {
  id: 4
  name: 'th'
}
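To illustrate what the trainer extracts from this file, here is a toy parser that recovers the ID-to-name mapping. This is a hypothetical helper for illustration only, not part of the TensorFlow API, and it handles only this simple flat pbtxt shape:

```python
# Toy parser: pull id/name pairs out of a simple .pbtxt label map.
import re

LABEL_MAP_TEXT = """
item { id: 1 name: 'c' }
item { id: 2 name: 'ch' }
item { id: 3 name: 't' }
item { id: 4 name: 'th' }
"""

def load_label_map(text):
    """Return {class_id: class_name} from a flat pbtxt label map."""
    ids = [int(m) for m in re.findall(r"id:\s*(\d+)", text)]
    names = re.findall(r"name:\s*'([^']*)'", text)
    return dict(zip(ids, names))

print(load_label_map(LABEL_MAP_TEXT))  # {1: 'c', 2: 'ch', 3: 't', 4: 'th'}
```

The IDs here must match the ones returned by class_text_to_int in generate_tfrecord.py, otherwise the trainer will associate boxes with the wrong class names.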

See, that was so easy!

EZ!

From the \object_detection directory, issue the following command to begin training:

python train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/faster_rcnn_inception_v2_pets.config

If everything has been set up correctly, TensorFlow will initialize the training.

Training the model.

The training routine periodically saves checkpoints about every five minutes. You can terminate the training by pressing Ctrl+C while in the command prompt window. I typically wait until just after a checkpoint has been saved to terminate the training. You can terminate training and start it later, and it will restart from the last saved checkpoint. The checkpoint at the highest number of steps will be used to generate the frozen inference graph.
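Picking that highest-numbered checkpoint can be automated. A small sketch (a hypothetical helper, demonstrated on fake checkpoint index files in a temporary directory):

```python
# Find the highest-step model.ckpt-XXXX prefix in a training directory.
import os, re, tempfile

def latest_checkpoint_step(train_dir):
    """Return the largest step number among model.ckpt-*.index files."""
    steps = []
    for name in os.listdir(train_dir):
        m = re.match(r"model\.ckpt-(\d+)\.index$", name)
        if m:
            steps.append(int(m.group(1)))
    return max(steps) if steps else None

# Demo with fake checkpoint index files in a temp directory.
with tempfile.TemporaryDirectory() as d:
    for step in (1000, 2500, 1800):
        open(os.path.join(d, f"model.ckpt-{step}.index"), "w").close()
    print(latest_checkpoint_step(d))  # 2500
```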

Once training is done, export the frozen inference graph, replacing XXXX with the step number of the highest checkpoint in the training folder:

python export_inference_graph.py --input_type image_tensor --pipeline_config_path training/faster_rcnn_inception_v2_pets.config --trained_checkpoint_prefix training/model.ckpt-XXXX --output_directory inference_graph

This creates a frozen_inference_graph.pb file in the \object_detection\inference_graph folder. The .pb file contains the object detection classifier.

To test our object detector, move a picture of the object or objects into the \object_detection folder, and run Object_detection_image.py with the appropriate image name.

Counter Terrorist Detection.
Terrorist Detection.

Conclusion:

We are able to detect the Counter-Terrorist and Terrorist players successfully. In the next article we will try to automate the shooting process.

Goodbye!
