TensorFlow & reflective tape šŸ€ (am I bad at basketball?)

Zack Akil
Towards Data Science
6 min read · Oct 5, 2018


Recently a friend got me into basketball. Turns out, it's a lot harder than it looks. No matter, I can over-engineer a solution using machine learning. If you're into ML and shooting hoops, then there's also this article that combines TensorFlow and basketball in a simulation.

nothing but net… if there was a net

The task is to find the exact angle of my shots. Then I can hopefully use that information in a proactive way to get better.

psst! the code for all of this is on my GitHub

Task 1: Collecting data

I didn't need to follow the seams of the ball, but it looks cool

I don't have access to a 3D tracking studio fitted with 200 cameras, but I do have eBay. It's quite easy to buy reflective tape online and stick it to the ball. Then (thanks to the lack of lighting at my local court) I can record some footage of me practicing in the evening and capture the ball's movements.

The torch built into my phone provides the perfect light source to bounce off the reflective tape on the ball.

As a result, the footage shows a sparkly object flying through a mostly dark scene, perfect for doing some image manipulation in Python.

Task 2: Getting our video into Python

Firstly I do everything in Python, and a really easy way to import video into Python is to use a library called 'scikit-video', so I installed that:

pip install scikit-video

and then used it to load in my video as a matrix:

from skvideo.io import vread

video_data = vread('VID_20180930_193148_2.mp4')

The shape of this data (which you can find by running video_data.shape) is (220, 1080, 1920, 3), which means 220 frames of 1080x1920 pixels with 3 channels of colour (red, green, blue):

raw video data
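If you want to double-check that yourself:

print(video_data.shape)  # (220, 1080, 1920, 3): frames, height, width, colour channels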

Task 3: Extracting the shot (image processing)

So I want to get the data on just the ball's movement. Fortunately, it's one of the only things moving in the video, so I can do my favourite video processing trick: delta frame extraction! (that's what I call it, but it's more commonly known as frame differencing).

By subtracting all of the pixel values in one frame from all of the pixel values in the next frame you will be left with non-zero values in just the pixels that have changed.

calculating delta frame in order to isolate moving pixels

Cool cool cool, now I do that for each frame in the video and combine the result into one image:

adding together all of the delta frames from the video sequence
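The exact code is in the repo; as a rough numpy sketch of the idea (every variable name besides video_data is my own):

import numpy as np

# use a signed type so subtracting frames can go negative
frames = video_data.astype(np.int16)

# difference between each consecutive pair of frames:
# only the pixels that changed survive as non-zero values
delta_frames = np.abs(frames[1:] - frames[:-1])

# add every delta frame together into one image of the whole shot
combined_trail = np.clip(delta_frames.sum(axis=0), 0, 255).astype(np.uint8)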

Next is extracting the shot data into a usable format, so we'll convert the pixel values that are lit up into a list of x and y points. The code to do this uses a numpy function called numpy.where, which will find all of the values that are True in an array and return their indices (i.e. their positions in the matrix).

But before we do that, we'll quickly crop out just the ball's trajectory and flip the data so that it starts at the origin (the bottom left of the scene):
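Something like this (the slice bounds here are placeholders, not my real crop; pick them by looking at the combined image):

# crop down to just the region containing the ball's trail
cropped_trail = combined_trail[300:900, 500:1500]

# flip vertically so the start of the shot ends up at the bottom left
cropped_trail = cropped_trail[::-1]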

and the resulting image:

Notice how it still seems upside down? That's only because images tend to be drawn starting at the top left corner (note the axis numbering). When we convert the pixels to data points and draw them on a normal graph they will get drawn starting at the bottom left corner.

now we run our numpy.where code to get the pixels as data points:

import numpy

pixels_that_are_not_black = cropped_trail.sum(axis=2) > 0
y_data, x_data = numpy.where(pixels_that_are_not_black)
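When we plot the extracted points on a normal graph, the shot comes out the right way up (matplotlib is my assumption here; any plotting library works):

import matplotlib.pyplot as plt

plt.scatter(x_data, y_data, s=1)
plt.xlabel('x (pixels)')
plt.ylabel('y (pixels)')
plt.show()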
awesome! our relatively clean ball trajectory data

Task 4: Build TensorFlow model

This is where TensorFlow shines. You may be used to hearing about using TensorFlow for building neural networks, but you can define almost any mathematical formula and tell it to optimise whatever parts of it you want. In our case we will use the formula for a trajectory, which we know from primary school to be:

y = x·tan(θ) - (g·x²) / (2·v²·cos²(θ))

extremely mathematical equation of trajectory that I found online

θ (theta) is the angle of the shot (the value we really care about)

v is the initial velocity

g is gravity (9.8 m/s²)

x is the horizontal position (data we have already)

y is the vertical position (data we have already)

A far more interesting way to see the equation in action is to play around with this trajectory tool.
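To make the formula concrete, here's a plain numpy version you can poke at (my own sketch; the 45° angle and 10 m/s velocity are made-up example values):

import numpy as np

def trajectory_y(x, angle_degrees, velocity, gravity=9.8):
    # height y of the ball at horizontal position x, straight from the equation above
    theta = np.deg2rad(angle_degrees)
    return x * np.tan(theta) - (gravity * x ** 2) / (2 * velocity ** 2 * np.cos(theta) ** 2)

# e.g. a 45 degree shot launched at 10 m/s
xs = np.linspace(0, 10, 6)
print(trajectory_y(xs, 45.0, 10.0))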

We can use the ball trail of my shot as the x and y of the equation and task TensorFlow with finding the correct angle (θ) and initial velocity (v) that fit my shot's x and y data:

We'll start by recreating our trajectory equation in TensorFlow:

first tell it what data we will feed it when we run the optimisation:

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 1])
y = tf.placeholder(tf.float32, [None, 1])

next tell it what variables we want it to tweak and tune in order to fit the trajectory curve to our data:

angle_variable = tf.Variable(40.0, name='angle_variable')
force_variable = tf.Variable(100.0, name='force_variable')

gravity_constant = tf.constant(9.8, name='gravity_constant')

and join all of these together (warning: it's going to look quite messy, but it's just the maths equation seen before written in TensorFlow syntax):

import numpy as np

# deg2rad isn't defined in the snippets above; a straightforward version:
def deg2rad(degrees):
    return degrees * np.pi / 180.0

left_hand_side = x * tf.tan(deg2rad(angle_variable))
top = gravity_constant * x ** 2
bottom = (2 * force_variable ** 2) * (tf.cos(deg2rad(angle_variable)) ** 2)
output = left_hand_side - (top / bottom)

then tell TensorFlow how to tell if it's doing a good job or not at fitting the trajectory function to our data:

# the lower this score, the better
error_score = tf.losses.mean_squared_error(y, output)

create an optimiser that will do the actual tweaking of variables (angle_variable and force_variable) in order to reduce the error_score:

optimiser = tf.train.AdamOptimizer(learning_rate=5) 
optimiser_op = optimiser.minimize(error_score)

Task 5: Magic

We can now run the optimisation task to find the angle_variable and force_variable values that fit my shot.

sess = tf.Session()
sess.run(tf.global_variables_initializer())

# do 150 steps of optimisation
for i in range(150):
    sess.run([optimiser_op],
             feed_dict={x: np.array(x_data).reshape(-1, 1),
                        y: np.array(y_data).reshape(-1, 1)})

# pull out the angle that the optimiser settled on
found_angle = sess.run(angle_variable.value())
print(found_angle)
TensorFlow finding the angle of my shot

At the end of that optimisation we find that the trajectory function that best fits my shot data has an angle of ~61°… not sure what to do with that information… I guess I could look at what professional shooting angles are for comparison… to be continued.

The lesson to take away: you can always distract yourself with completely unnecessary (but fun) machine learning.

All of the code I used is available on my GitHub:
