Nerdy McJumpyface — analyzing the jump shot

Authored by: Ramzi BenSaid

Eric Schmidt
Analyzing NCAA Basketball with GCP
Sep 4, 2018 · 14 min read


In our previous post, we covered the data we collected and the architecture we built for our smart basketball court at Google Cloud NEXT ’18. This post explores the flow of that raw data from Google Cloud Storage into a more manageable one-entry-per-shot state in BigQuery, and some of the analysis that followed. Our goal was to gain better insight into human shooting mechanics in order to understand how they influence shots — specifically, what goes into a good jump shot. While it turns out we didn’t have all that many good jump shots in our dataset, we were still able to uncover some interesting patterns and findings that support common pieces of basketball shooting advice.

A quick recap: the raw data from our 3D IR system was stored as CSVs in GCS upon collection. It covered the location (x, y, z) and rotation (pitch, yaw, roll) for seven rigid markers on the shooter, the ball and the hoop, as well as a timestamp for each of the 180 frames captured per second and some user information.

Given how much data we were sitting on (roughly 4,600 shots from 350 shooting sessions), we wanted to plan out some of the metrics we were interested in collecting before starting ingest (after all, ingestion is a function of data engineering). After brainstorming and testing, we started to get an idea of some metrics we wanted to explore in the shooting motion.

We built a few that we knew would be important into the installation before NEXT — things like shot distance and launch angle, which informed our ‘Naismith Score’ game during the event. But our post-event brainstorming added concepts and calculations such as:

  • Vertical jump: the average height off the ground of a shooter’s left foot and right foot at the frame of ball release.
  • Foot spread: the distance between the shooter’s feet at the frame when they begin the shooting motion.
  • Gather midpoint angle: the angle traced by the ball’s location as it moves through three frames: 1) when the shooting motion starts, 2) when the ball is released, and 3) the midpoint frame between those two. (A rough sketch of these metric calculations follows this list.)
  • Form consistency: a measure of how similar one shot is to the next in a shooter’s session.
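
To make these more concrete, here is a minimal sketch of how the vertical jump, foot spread and gather midpoint angle might be computed from per-frame marker data. The frame dictionaries and field names (left_foot_z, ball_xyz and so on) are illustrative assumptions rather than our actual schema, and the sketch assumes the gather midpoint angle’s vertex sits at the midpoint frame.

import math

def distance_3d(a, b):
    # Euclidean distance between two (x, y, z) points.
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def vertical_jump(release_frame):
    # Average height of both feet off the floor at the frame of ball release.
    return (release_frame['left_foot_z'] + release_frame['right_foot_z']) / 2.0

def foot_spread(form_start_frame):
    # Distance between the feet when the shooting motion begins.
    return distance_3d(form_start_frame['left_foot_xyz'], form_start_frame['right_foot_xyz'])

def gather_midpoint_angle(form_start_frame, midpoint_frame, release_frame):
    # Angle formed by the ball's position at form start, midpoint and release,
    # measured at the midpoint frame via the law of cosines.
    a = distance_3d(midpoint_frame['ball_xyz'], form_start_frame['ball_xyz'])
    b = distance_3d(midpoint_frame['ball_xyz'], release_frame['ball_xyz'])
    c = distance_3d(form_start_frame['ball_xyz'], release_frame['ball_xyz'])
    return math.degrees(math.acos((a ** 2 + b ** 2 - c ** 2) / (2 * a * b)))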

These metrics weren’t chosen at random — they correspond to much of the received wisdom regarding important features of a jump shot. Whether you’re consulting a quick Google or YouTube search or working closely with experienced coaches and players, you’ve probably heard the following:

  • Use your legs: this allows for greater upper body control from all areas of the court. (vertical jump)
  • Keep a balanced base: without it, it’s difficult to get the right lift. (foot spread)
  • Watch your arm motion: without the proper mechanics, you’ll release the ball on a trajectory that has no chance of ending in the hoop. (gather midpoint angle)
  • Be consistent: at the end of the day, the ball needs to go in the basket. If you consistently get the right release angle and direction, it doesn’t matter what your shot looks like. (form consistency)

It’s one thing to have intuition — or even experience — around these concepts. But it’s another to have real data that can reinforce each premise.

With the above metrics (and others) in hand, the next step was the actual ingest. While a simple Dataflow read/write job would have allowed us to move the raw data from GCS to BigQuery, some of our desired metrics were going to require work outside of BigQuery. So we decided it would be better to write these calculations into the Dataflow job to begin with, and for that, we’d need to use Java. Or Python.

(Tip: Try to codify as much transformation and metric formulation as possible within your ETL flow, rather than in ad hoc downstream queries or within feature development.)

Ingest: making a guinea pig out of Python

Today, most Dataflow jobs are written in Java, since Java was the first SDK for Cloud Dataflow. However, since the rest of our processing and downstream predictive modeling was written in Python, we decided to write our Dataflow jobs using the Python SDK for Apache Beam. The main benefit here is that we were able to reuse existing Python code within our new pipelines. (Copy → paste!)
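
To give a feel for the setup, here is a minimal sketch of the skeleton of such a pipeline in the Python SDK. The bucket path, column layout and parsing logic are placeholders, not our exact job:

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def parse_frame(line):
    # Hypothetical CSV layout: user handle, shot number, frame number, timestamp,
    # then x/y/z and pitch/yaw/roll for each rigid marker.
    fields = line.split(',')
    return {'user_handle': fields[0], 'shot_number': fields[1], 'frame_number': fields[2], 'raw': fields}

with beam.Pipeline(options=PipelineOptions()) as p:
    raw_frames = (p
                  | 'Read raw CSVs' >> beam.io.ReadFromText('gs://our-court-bucket/raw/*.csv', skip_header_lines=1)
                  | 'Parse frames' >> beam.Map(parse_frame))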

As with any Dataflow job, the first step was reading the data into a PCollection, which in this case held about 1.5 million entries. The metrics we decided to create relied largely on six important frames of the shooting motion:

  • Catch: the first frame
  • Form start: the frame in which the ball is at its lowest point while still in hand
  • In-hand midpoint: the frame between form start and release
  • Release frame: the frame in which the ball leaves the shooter’s hands
  • Tenth frame after release: roughly 0.055 seconds after release (a frame which later helps gauge the launch angle and release speed)
  • Frame at first return to rim height: assuming shot trajectory follows a parabolic shape and gets above rim height

Before actually isolating any of these frames, there was some housekeeping to be done in our opening PTransform: creating some very important IDs and applying some simple calculations to each frame (a sketch of this step follows the list):

  • shot_id
  • frame_id
  • frame_id with ball height
  • Rim centroid location (middle of the circle of the rim)
  • Distance from ball to hoop centroid
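
As a rough sketch (not our exact code), the opening housekeeping DoFn might look something like this; the marker field names and ID construction are assumptions for illustration:

import math
import apache_beam as beam

class AddIdsAndBasics(beam.DoFn):
    def process(self, frame):
        # Build a shot_id from the session handle and shot number, plus a frame_id
        # that orders frames within the shot.
        frame['shot_id'] = '%s_%s' % (frame['user_handle'], frame['shot_number'])
        frame['frame_id'] = int(frame['frame_number'])

        # Rim centroid (middle of the rim circle) and the ball's distance to it.
        rim_centroid = (frame['rim_x'], frame['rim_y'], frame['rim_z'])
        ball = (frame['ball_x'], frame['ball_y'], frame['ball_z'])
        frame['ball_to_rim_dist'] = math.sqrt(sum((b - r) ** 2 for b, r in zip(ball, rim_centroid)))
        yield frame

# first_processed = raw_frames | 'Housekeeping' >> beam.ParDo(AddIdsAndBasics())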

Next, we started the isolation process by separating frames where the ball was in the shooter’s hand from in-flight frames using a simple ParDo, and joined the frame_ids back using a GroupByKey:

in_hand_frames = (first_processed
                  | beam.ParDo(GetInHandFrames())
                  | 'In hand frame grouping' >> beam.GroupByKey())

released_frames = (first_processed
                   | beam.ParDo(GetReleasedFrames())
                   | 'Released frame grouping' >> beam.GroupByKey())
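
For context, a DoFn like GetInHandFrames might key each frame by shot_id and keep only the frames where the ball is still near a hand marker; the distance threshold and field names below are our own illustrative assumptions, not the production code:

import apache_beam as beam

class GetInHandFrames(beam.DoFn):
    # Hypothetical reconstruction: a frame counts as 'in hand' if the ball centroid
    # is within half a foot of either hand marker.
    IN_HAND_THRESHOLD_FT = 0.5

    def process(self, frame):
        closest_hand = min(frame['ball_to_left_hand_dist'], frame['ball_to_right_hand_dist'])
        if closest_hand < self.IN_HAND_THRESHOLD_FT:
            yield (frame['shot_id'], frame['frame_id'])

After the GroupByKey, each element is a (shot_id, [frame_ids]) pair, which is the shape the FlatMap lambdas below operate on.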

Then we actually isolated the important frames. We mostly used inline beam.FlatMap lambda functions like this one:

release_iframe = in_hand_frames | beam.FlatMap(lambda element: [(max(element[1]), element[0])])

With each of the important frame_ids isolated, we used a CoGroupByKey to join the frame information back into a new PCollection, then used another CoGroupByKey on each shot_id to join all of the important frames, and finally ran all of the shot information through a large ParDo where the various metrics were calculated.

down_to_rim_info = (({'frame': down_to_rim_frame, 'processed_frame': fp_bpy}
                     | 'Join rim height frame info' >> beam.CoGroupByKey())
                    | 'Getting rim height stats' >> beam.ParDo(FilterRelevantFrame()))

all_shot_info = ({'first_iframe': first_iframe_info,
                  'gather_midpoint_iframe': gather_midpoint_iframe_info,
                  'release_iframe': release_iframe_info,
                  'tenth_released_iframe': tenth_released_iframe_info,
                  'down_to_rim_info': down_to_rim_info,
                  'rim_bounces': rim_bounces}
                 | 'Joining main shot frames' >> beam.CoGroupByKey())

all_shot_stats = all_shot_info | beam.ParDo(GetShotMetrics(), get_dist, get_3D_dist, get_angle, get_angle2, format_date)

Job performance and architecture

By the numbers, this Dataflow job is composed of 10 FlatMap lambda transforms, 11 ParDo functions using up to five side-input functions (for calculations), four GroupByKey transforms and eight CoGroupByKey transforms.

In the end, this job turns ~1.5 million rows of raw data from 350 shooting sessions into 4,589 shot entries in BigQuery with 50 data points for each shot. Configured with a fixed worker pool of 10 nodes, this job took about eight minutes of wall clock time.
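
For reference, pinning a fixed worker pool like this is a matter of pipeline options. A sketch of what that configuration might look like (project, region and bucket names are placeholders):

from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(
    runner='DataflowRunner',
    project='my-gcp-project',              # placeholder
    region='us-central1',                  # placeholder
    temp_location='gs://my-bucket/tmp',    # placeholder
    num_workers=10,
    autoscaling_algorithm='NONE',          # fixed pool of 10 workers, no autoscaling
)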

Here is what the architecture of the job looks like:

Ingesting all shot frames and producing one record per shot

(Note: 1.5M rows is by no means a Big Data challenge. However, the complex reshaping of this data lends itself to using a framework like Apache Beam, which provides abstractions for pruning, grouping, joining, etc. as a means to create repeatable ETL processes. Should our data size grow to 10M, 100M, or 1B rows, we’ll have a graph in place that can automatically be scaled — no refactoring needed.)

Analysis: what makes a good jump shot?

We knew we wanted to see if analyzing human movement measurements could help us determine what makes a good shooter good, but where to begin?

With all of our metrics loaded into BigQuery (except one; more on that later), we began our analysis in R. We used R because we needed to build a plethora of visualizations, and because it was a better fit than our standard iPython environment for this use case. Fortunately, R has seamless support for BigQuery. (More on BigQuery support for R over here.)

The first step was grouping the various metrics we created (upwards of 27) and understanding their distributions. We looked at the middle 90% of values for each and then rounded values to an appropriate measure. For example, 90% of release heights were between 5.5 and 9.3 feet, so we rounded those values up to the nearest whole foot. Other metrics were a bit less familiar. For example, 90% of values for our verticality metric (a calculation measuring the angle of the shooter’s body to the floor) were between 23.8° and 64.3°, so we grouped them in 10-degree increments. With our metrics now clustered into manageable groups, we started to investigate whether there were any significant relationships between them and field goal percentage.
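
A rough pandas sketch of that trimming and bucketing step (the file name, shot_made column and fixed bin sizes are illustrative; in practice the rounding rule and bin width varied by metric):

import pandas as pd

shots = pd.read_csv('shot_metrics.csv')  # hypothetical export of the BigQuery shot table

def bucket_metric(series, bin_size):
    # Keep the middle 90% of values, then snap each remaining value to a bin.
    lo, hi = series.quantile([0.05, 0.95])
    trimmed = series[(series >= lo) & (series <= hi)]
    return (trimmed / bin_size).round() * bin_size

shots['release_height_bucket'] = bucket_metric(shots['release_height'], 1.0)   # whole feet
shots['verticality_bucket'] = bucket_metric(shots['form_verticality'], 10.0)   # 10-degree bins

# Field goal percentage by bucket
print(shots.groupby('release_height_bucket')['shot_made'].mean())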

Foot spread to release

One metric that showed above average performance across the board was a particular range of a calculated ratio. That ratio was the distance between the shooter’s feet at the outset of the shot divided by the height at which the ball was released. We wanted to see if shot distance (a common exploratory metric for basketball analysis) had any major effect on the significance of this ratio, but it turned out to be a strong indicator of FG% at all of our shot distance clusters.

We intentionally ignored shots taken within 0–5 feet of the hoop as they are most often layups (which have a vastly different shooting motion, and we were only interested in jump shots). At each cluster beyond five feet, shooters whose feet were separated by between 30% and 40% of their eventual release height shot the best.

This makes sense. If you refer to any jump shot instructions online, or just think back to your last basketball practice, you’ll know it’s often recommended that a shooter’s feet be about shoulder-width apart. Since we didn’t have rigid body markers for the shoulders, we decided to use this ratio as a proxy (consider: taller shooters likely have broader shoulders as well as a higher release point; shorter players likely have narrower shoulders as well as a lower release point, but the ratio stays roughly the same). It held up.
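
Checking a relationship like this is a straightforward grouped aggregation. Here is a sketch in pandas, reusing the foot_split_to_release ratio from our shot table (shot_distance and shot_made are illustrative column names):

import pandas as pd

shots = pd.read_csv('shot_metrics.csv')          # hypothetical export of the shot table
shots = shots[shots['shot_distance'] > 5]        # ignore likely layups inside 5 feet

shots['ratio_bucket'] = pd.cut(shots['foot_split_to_release'], bins=[0, 0.2, 0.3, 0.4, 0.5, 1.0])
shots['distance_bucket'] = pd.cut(shots['shot_distance'], bins=[5, 10, 15, 20.75, 35])

# Field goal percentage for each ratio bucket within each distance cluster
print(shots.groupby(['distance_bucket', 'ratio_bucket'])['shot_made'].mean().unstack())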

Bolstered by finding one metric that worked across a variety of shot distances, we looked for other significant trends by plotting and grouping various combinations of metrics, but we didn’t find much. Since we couldn’t go back and just add more shots to the dataset, we wondered if we might be looking at too much noise. No disrespect to any of our attendees, but NEXT is a tech conference, not a sporting event, and we saw some ugly jump shots (some sessions included zero makes in 60 seconds of uncontested shooting). While sessions like that might be able to tell us what a bad jump shot looks like, we were still hunting for good jump shots. So, we pivoted to see if we could find the shooters who were making them.

Wheat vs. chaff: the distance beyond 20.75’

Of our 4,589 shots, over 4,000 were taken between five and 20 feet from the basket, which is a mid-range shot in college basketball. If the modern analytics movement in basketball has done one thing for the game, it’s that it’s replaced mid-range shots with the three-point shot. In Michael Jordan’s final year with the Chicago Bulls (1997–98) there was an average of 12.7 three-point shots attempted per game. That number has been on a steady rise and hit an all-time high of 29.0 last year. The explanation is simple: if Player A attempts 20 two-point shots and makes 45% (an average value) they score 18 points. If Player B attempts 20 three-point shots and makes 35% (an average value), they score 21 points. 21 > 18. If any basketball team today were shooting 87% of their shots from mid-range, they’d a) lose a lot of games and b) see their whole staff get fired.

Analytics aside, we had the NCAA three-point line (20’9”) marked on the court and participants in an (albeit not very intense) shooting competition. Those who know the game of basketball and have played a bit are more likely to know a three-pointer is more valuable, and might attempt more of them. But since we were trying to isolate good shooters and not just people who attempt threes, we decided to look at shooters who made at least one three-pointer in a session.
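
A sketch of that segmentation in pandas (user_handle matches the column in our shot table; shot_distance and shot_made are illustrative names, and 20.75 feet is the NCAA line we had marked):

import pandas as pd

shots = pd.read_csv('shot_metrics.csv')  # hypothetical export of the shot table

# Group A: anyone with at least one made shot from beyond the NCAA three-point line.
made_threes = shots[(shots['shot_distance'] >= 20.75) & (shots['shot_made'] == 1)]
group_a = set(made_threes['user_handle'])

shots['group'] = shots['user_handle'].map(lambda u: 'A' if u in group_a else 'B')

# Compare field goal percentage by group at each distance cluster.
distance_cluster = pd.cut(shots['shot_distance'], bins=[5, 10, 15, 20.75, 35])
print(shots.groupby(['group', distance_cluster])['shot_made'].mean())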

There were 123 made NCAA-distance three-pointers spread out among 45 shooters. And upon closer examination, those 45 shooters were better than the rest of our participants from every distance.

Now that we’d found one way to segment a group of good shooters (Group A), we thought that perhaps they’d been doing something differently from the rest (Group B). Once again, we tried grouping our various metrics to see if they were related to whether or not the shooter had made a three-pointer, but there were only minor differences across the board. It was tempting to just chalk it up to ‘shooter’s touch,’ until we started to look at what factors might be influencing touch in the first place.

Demystifying ‘shooter’s touch’ (a little)

‘Shooter’s touch’ is a nebulous term that suggests a fair amount of skill mixed with a hint of luck. But since we could actually see the entire shot trajectory, we could look at the literal beginning of a touch (hand placement) and see if or how it affected the rest of the shot.

Hand placement…

Group A shooters, on average, placed their hands about one inch further apart at the beginning of their shooting motion than Group B. This may not seem like much, but on a ball with a circumference of 29.5 inches, that difference meant the angle generated by the centroid of the ball and each hand was about 2° wider, which meant that the ball sat more in the shooting hand.

Affects launch angle…

With the ball sitting more in the shooting hand (as opposed to a more two-handed, push-style shot) Group A shooters had a higher shot trajectory — 2° higher. Again, that may not sound like much, but that launch angle difference meant that the average peak of the shot paths from Group A was almost 9 inches higher than the average paths from Group B.

And yields more bounce opportunities

With all that extra height, Group A’s shots were likely seeing better entrance angles and better downward trajectories. And satisfying though swishes may be, shots that get a lot of touches as they dance around the rim and eventually drop seem like pure luck. But perhaps not entirely. For all shots that hit the rim and bounced up, Group A’s shots were 6% more likely to be good than their counterparts from Group B. Put together, Group A’s hand placement led to higher shot angles, which led to bounced shots that had a better chance of making it in the basket. Broken down this way, you can see how ‘shooter’s touch’ actually adds up to buckets.

Consistency, consistency, consistency

The one metric we created outside of our ingestion job in Dataflow was around consistency. Unlike the metrics built around particular parts of the shooting motion, this is an aggregated metric, relative to the rest of the shooters in our dataset. We compared consistency across each of our metrics and scored each shooter accordingly, using a combination of SQL in BigQuery and pandas to do it.

First, in SQL, we started by eliminating layups in order to look only at jump shots. Though we had previously filtered out layups using shot distance clusters, we wanted to try another, more specific method of removing them this time. We filtered out any shot released less than 8 feet from the rim, which would catch layups started from further than 5 feet away. (After all, a good player might start a layup motion from the middle of the key but release the ball only a few feet from the rim.) Since we couldn’t go back and review the tape, we wanted a more expansive definition of a layup to ensure we were truly looking at jump shots, and if we had to sacrifice a handful of very short jump shots for this round, so be it.

Next, we grouped each of our metrics by user_handle and collected the standard deviation of each user with regards to each metric.

SELECT
  user_handle,
  COUNT(*) AS entries,
  round(avg(release_height), 3) AS avg_release_height,
  round(stddev(release_height), 3) AS stddev_release_height,
  round(avg(gather_midpoint_angle), 3) AS avg_gather_midpoint_angle,
  round(stddev(gather_midpoint_angle), 3) AS stddev_gather_midpoint_angle,
  round(avg(foot_split_to_height), 3) AS avg_foot_split_to_height,
  round(stddev(foot_split_to_height), 3) AS stddev_foot_split_to_height,
  round(avg(foot_split_to_release), 3) AS avg_foot_split_to_release,
  round(stddev(foot_split_to_release), 3) AS stddev_foot_split_to_release,
  round(avg(release_speed), 3) AS avg_release_speed,
  round(stddev(release_speed), 3) AS stddev_release_speed,
  round(avg(form_verticality), 3) AS avg_form_verticality,
  round(stddev(form_verticality), 3) AS stddev_form_verticality,
  round(avg(shot_max_height), 3) AS avg_shot_max_height,
  round(stddev(shot_max_height), 3) AS stddev_shot_max_height,
  round(avg(max_height_over_dist), 3) AS avg_max_height_over_dist,
  round(stddev(max_height_over_dist), 3) AS stddev_max_height_over_dist,
  round(avg(first_ten_frames_dist), 3) AS avg_first_ten_frames_dist,
  round(stddev(first_ten_frames_dist), 3) AS stddev_first_ten_frames_dist,
  round(avg(form_to_release_time), 3) AS avg_form_to_release_time,
  round(stddev(form_to_release_time), 3) AS stddev_form_to_release_time,
  round(avg(vertical_jump), 3) AS avg_vertical_jump,
  round(stddev(vertical_jump), 3) AS stddev_vertical_jump,
  round(avg(spin_rpm), 3) AS avg_spin_rpm,
  round(stddev(spin_rpm), 3) AS stddev_spin_rpm,
  round(avg(foot_spread), 3) AS avg_foot_spread,
  round(stddev(foot_spread), 3) AS stddev_foot_spread,
  round(avg(jump_angle), 3) AS avg_jump_angle,
  round(stddev(jump_angle), 3) AS stddev_jump_angle,
  round(avg(jump_distance), 3) AS avg_jump_distance,
  round(stddev(jump_distance), 3) AS stddev_jump_distance,
  round(avg(ball_in_hand_angle), 3) AS avg_ball_in_hand_angle,
  round(stddev(ball_in_hand_angle), 3) AS stddev_ball_in_hand_angle,
  round(avg(launch_angle), 3) AS avg_launch_angle,
  round(stddev(launch_angle), 3) AS stddev_launch_angle,
  round(avg(hand_spread), 3) AS avg_hand_spread,
  round(stddev(hand_spread), 3) AS stddev_hand_spread,
  round(avg(hand_dist_apart), 3) AS avg_hand_dist_apart,
  round(stddev(hand_dist_apart), 3) AS stddev_hand_dist_apart
FROM shots.test2
WHERE release_to_rim_dist > 8
-- AND user_handle = 'tensorflowsaysno'
GROUP BY user_handle

Then we took these results to Python. For each metric, we computed the percentile of each shooter’s standard deviation relative to the rest of the shooters, then averaged those percentiles for a given user; that average became our consistency score. This method allowed us to look at form consistency across a multitude of different metrics and finally determine how consistent someone’s jump shot was.
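
A rough sketch of that scoring step in pandas, assuming per_user_stddevs.csv holds the output of the query above (one row per user_handle, one stddev_* column per metric):

import pandas as pd

per_user = pd.read_csv('per_user_stddevs.csv')  # hypothetical export of the query results

stddev_cols = [c for c in per_user.columns if c.startswith('stddev_')]

# Percentile rank of each shooter's standard deviation for each metric,
# relative to every other shooter (lower = more consistent on that metric).
percentiles = per_user[stddev_cols].rank(pct=True)

# The consistency score is the average percentile across all metrics.
per_user['consistency_score'] = percentiles.mean(axis=1)
print(per_user[['user_handle', 'consistency_score']].sort_values('consistency_score').head(12))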

The lower a score, the more consistent a shooter was across the metrics we collected. We clustered these results and found that only 12 shooters had a consistency score below 0.2. As we’d hoped, they had the highest field goal percentage of any cluster.

Meanwhile, the cluster with the lowest field goal percentage by consistency score was the one between 0.2 and 0.3, which is a bit anomalous. But upon further investigation, it turns out that cluster was dragged down significantly by a few especially poor shooters. That cluster of 18 shooters included two who failed to make a single shot, two who made only one shot, and two who made only two shots; combined, those six went a dismal six-for-57 from the field.

Awfully consistent, you might say.

Post production

Truthfully, we don’t think this 4,589-shot dataset is enough to conclusively determine what makes a good jump shot, let alone teach someone how to shoot one properly. We saw a wide range of basketball ability, and too few shots per person to determine someone’s skill level. In her talk about analyzing over 22 million shots using Noah Basketball at the 2018 MIT Sloan Sports Analytics Conference, data scientist Rachel Marty says you need around 1,000 shots from someone before you can really understand them as a shooter; we had no more than 131 shots from a single person in our dataset. Granted, those numbers are relative to the signal quality of the data (there is a bit of apples and oranges being mixed here), but the gist is the same.

Nevertheless, our data led to a few key outcomes: first, we were able to find a few interesting trends and put some numbers behind conventional wisdom when it comes to shooting — we learned a bit about what contributes to a good jump shot, even if not what defines a good jump shooter. Second, we learned that we need not only more data, but also more balanced data. (Perhaps we should set this court up in a non-tech conference environment!) Finally, by leveraging Google Cloud services, we created a repeatable and scalable workflow that can continue to be implemented as we gather more data to test new hypotheses, abandon dead ends, and improve our analysis.

We’ll continue to refine our insights, explore new metrics, and adjust the overall smart court experience. Next up: seeing whether 4,589 shots are enough to move the needle on our predictive model. See you out there.
