Playing Tekken with OpenCV and Python

Creative use of Image Processing Techniques.

Harsh Malra
5 min read · Oct 19, 2020

After completing my project on GTA V with Gesture Navigation, I wanted to take the core idea to the next level and play even better games. So my first choice was fighting games, and one of the best games in that category was Tekken (SFTK).

The main concept is simple: whatever action the human player performs (like punching), the character in the game replicates.

So, to implement this idea, I came up with two approaches.

The first was the obvious but complex one: take 200+ images of myself doing different actions (like punching and jumping), then use them to train a model to classify the different poses/actions and simulate them in-game. But this approach had some problems.

The first problem was that I would have to take many images of myself, 200+ in each category, which would be quite hectic. For the model to generalize, I would also need images of different people of different sizes, with different backgrounds and lighting conditions. All of this could take a lot of time.

The second approach was to simply reuse the idea (explained fully below) from my previous projects on Virtual Switch and Gesture Gaming, which would let me finish fast. After that, if I wished, I could always improve it by adding some ML/CV techniques.

So, for the proof of concept, I decided to go with my second approach.

TL;DR - Code is here.

Concept -

The main concept behind it is quite simple: there is a Virtual Switch which, whenever pressed, simulates the corresponding action (e.g. kicking) in the game. (See the gif below.)

Note — If you haven’t already, make sure you first read my blog posts on the Virtual Switch and Gesture Gaming, because most of the code and programming logic has been taken from there.

Steps -

  1. Track the face.
  2. Move the virtual switch bounding boxes relative to the face bbox.
  3. Use a particular region as a virtual switch dedicated to a particular action.

Bbox (bounding box) is the term used to refer to a region of interest (the inside of a rectangle/box).

If you have finished reading my previous post on Gesture Gaming (compulsory), you will remember that we tracked the position of the hand, and its deviation from the center was used to call the functions responsible for navigation.

The only difference here is that we are tracking the face, and the virtual switch boxes move relative to it. For moving within the game, the logic used is -

  1. When we move forward past a certain threshold, the player also starts moving forward, and vice versa.
  2. We store a horizontal line passing through the center of the face. If we jump and go above this centerline by a threshold, the player also jumps. Similarly, when we go below the line, the player squats down. (A sketch of this logic follows below.)

And for the purpose of action, we will use the Virtual Switch.
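To make the movement logic concrete, here is a minimal sketch of the jump/squat decision. It assumes a hypothetical face tracker that returns an (x, y, w, h) bbox; the threshold values and return labels are illustrative, not the project's actual ones.

```python
JUMP_THRESHOLD = 40    # pixels above the stored centerline (illustrative)
SQUAT_THRESHOLD = 40   # pixels below the stored centerline (illustrative)

def movement_action(face_bbox, centerline_y):
    """Decide jump/squat from the face's vertical deviation from the centerline."""
    x, y, w, h = face_bbox
    face_center_y = y + h // 2
    deviation = centerline_y - face_center_y  # positive when the face is above the line
    if deviation > JUMP_THRESHOLD:
        return "jump"
    elif deviation < -SQUAT_THRESHOLD:
        return "squat"
    return None
```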

Notebooks

The full code is here.

Setup.ipynb ->

Note: You need to run this notebook only if the default settings don’t work for you. Otherwise, you can skip this.

In this notebook, we define the starting position of the human player, so that the tracking algorithms can follow them from that position.

First, set up your camera and place it at a fixed location.

Since we track the face to follow the position of the human player in the game, execute the Face box cell and then -

  1. Get ready in position from where you will start.
  2. After the timer finishes, make a bbox around your face.
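As a rough illustration of this setup step, here is how the face bbox could be drawn and handed to a tracker using OpenCV's built-in ROI selector. The actual notebook cells may differ, and the CSRT tracker requires the opencv-contrib package.

```python
import cv2

cap = cv2.VideoCapture(0)
ret, frame = cap.read()

# Draw a box around your face with the mouse, then press ENTER/SPACE.
face_bbox = cv2.selectROI("Face box", frame, showCrosshair=False)
cv2.destroyWindow("Face box")

# Hand the bbox to a tracker so the face is followed from this position.
tracker = cv2.TrackerCSRT_create()  # on some builds: cv2.legacy.TrackerCSRT_create()
tracker.init(frame, face_bbox)
print("Initial face bbox:", face_bbox)
cap.release()
```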

If you also want to add new switches for new actions, run the Buttons cell and then -

  1. Set n, the total number of switches you want to add.
  2. Keep your face inside its box and do the action (e.g. kicking). Then draw a box around the region you want to allocate to the kicking action.
  3. To save it for later, copy the printed output and paste it into Switch.py.

Note that Switch 0 will be mapped to Action Key 0, so map each action to its key in the Actions class.
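For reference, the printed output you paste into Switch.py might look something like this. The coordinates and variable name are purely hypothetical; the real format depends on the notebook's print statement.

```python
# Hypothetical default switch positions as (x, y, w, h) tuples;
# Switch 0 maps to Action Key 0, Switch 1 to Action Key 1, and so on.
DEFAULT_SWITCHES = [
    (420, 180, 80, 80),  # Switch 0 -> e.g. punch
    (420, 300, 80, 80),  # Switch 1 -> e.g. kick
]
```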

Gameplay.ipynb ->

This is the main notebook; run it directly if you want to start playing.

We initialize the Buttons object, which contains all the Virtual Switches. Passing training as False makes it use the default values.

The rest of the logic is the same as in GTA V Gesture Navigation.

The only change is that we pass the current frame to the Buttons object, which tracks the change in position and performs the in-game actions corresponding to the virtual switches pressed. (See the sketch below.)
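Putting the two paragraphs above together, the gameplay loop could look roughly like this. Buttons and its run method are sketched from the descriptions in this post, so treat the names and signatures as assumptions.

```python
import cv2
from Switch import Buttons  # assuming Buttons is defined in Switch.py

buttons = Buttons(training=False)  # False -> use the saved default switch positions

cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    buttons.run(frame)  # track the face and fire any pressed virtual switches
    cv2.imshow("Tekken controller", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```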

Switch.py ->

This script contains all the essential functions.

Switch — This class implements the Virtual Switch. Read more about it here.
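Based on the background-subtraction details mentioned later in this post, a simplified Switch might look like the sketch below. The real class in Switch.py may differ in its parameters and press logic.

```python
import cv2

class Switch:
    """A virtual switch that is 'pressed' when enough motion occurs in its bbox."""

    def __init__(self, bbox, threshold=0.3, history=100):
        self.bbox = bbox            # (x, y, w, h) of the switch region
        self.threshold = threshold  # fraction of moving pixels that counts as a press
        self.subtractor = cv2.createBackgroundSubtractorMOG2(history=history)

    def is_pressed(self, frame):
        x, y, w, h = self.bbox
        roi = frame[y:y + h, x:x + w]
        mask = self.subtractor.apply(roi)  # white pixels = change vs. background
        motion_ratio = cv2.countNonZero(mask) / float(w * h)
        return motion_ratio > self.threshold
```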

Buttons — This class is used to -

  1. store all the Switch objects
  2. pass the current frame to each switch to decide which one is pressed.

bbox_wrt_center — This function calculates the coordinates of a switch relative to the center of the face. This is done so that when we move, the switch also moves accordingly.
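Here is a hedged sketch of what bbox_wrt_center might compute; the signature and the offset convention are assumptions based on the description above.

```python
def bbox_wrt_center(face_bbox, offset, size):
    """Place a switch at a fixed offset from the face center so it follows the face."""
    fx, fy, fw, fh = face_bbox
    cx, cy = fx + fw // 2, fy + fh // 2  # center of the face bbox
    dx, dy = offset                      # the switch's stored offset from that center
    sw, sh = size
    return (cx + dx, cy + dy, sw, sh)
```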

Run — This function takes the current frame and passes it to the switches. If any switch is pressed (returns True), the action corresponding to that switch is triggered in the game.

Actions — This class maps each Switch to the respective key for an in-game action (punch, move left, etc.).

Note — I haven’t tested this on different PCs, so if an action doesn’t work on yours, try different values for the time gap between PressKey and ReleaseKey.
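To illustrate the mapping and the timing note above, here is a sketch of an Actions-style class. It assumes the ctypes-based PressKey/ReleaseKey helpers commonly used for DirectInput games (e.g. a directkeys.py module); the scan codes and the key map are hypothetical.

```python
import time
from directkeys import PressKey, ReleaseKey  # hypothetical helper module

# Illustrative DirectInput scan codes; adjust to your in-game key bindings.
KEY_U, KEY_J = 0x16, 0x24

class Actions:
    KEYMAP = {0: KEY_U,  # Switch 0 -> punch
              1: KEY_J}  # Switch 1 -> kick

    def press(self, switch_index, gap=0.1):
        key = self.KEYMAP[switch_index]
        PressKey(key)
        time.sleep(gap)  # increase this gap if actions don't register on your PC
        ReleaseKey(key)
```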

Keep In Mind -

  1. Make sure the video quality is decent and there is proper lighting. Since the switches work by measuring pixel changes (noise), low image quality or low light may cause anomalies.
  2. You can play with the history parameter of the background subtractor and change the threshold of the switch according to your needs.

Conclusion

So this was my attempt to use my image-processing and Python skills to create a fun way of playing Tekken. This was just an experiment with the idea, and in the future I may use better techniques for better results.

If you liked my work, show your support by giving this a clap (up to 50 claps are possible).

If you have any doubts or suggestions, let me know in the comments.

Feel free to connect with me on LinkedIn and Github.

Thanks for reading.
