Octopus rigged to the user's body with PoseNet

Elena Glazkova · Sep 28, 2020

Oliver-Rose and I continue working on our two-player game. We are still using webRTC and ngrok, and this week we integrated the PoseNet machine learning model into the project.

Oliver-Rose came up with a very cool idea to rig two human bodies to one Octopus image, so that Player 1 would control two tentacles with their arms and Player 2 would control the other two. We also discussed controlling the remaining tentacles with the players' legs but decided to start coding for arms and see how it goes.

Oliver-Rose also designed the Octopus, and I absolutely love how it looks!

So, first off, take a look at the wonderful creature that we created. Thanks to PoseNet, each user controls the legs and one eye (with their nose).

Challenges

This week we couldn't get the interaction between different computers to work, and we couldn't quite figure out why: it could be PoseNet and some inaccuracies in our code, the peer connection (which we debugged and seemingly got perfectly connected), or something else that we're missing for now.

In our code we get the array of detected keypoints that we need from PoseNet (for instance, left shoulder, left elbow, left wrist), store each specific keypoint in a variable, and then use the keypoints' x and y positions as the arguments for the curves that draw the Octopus body. We have two separate functions (and two separate arrays) for the two users, drawKeypoints() and drawKeypoints2(), plus a dedicated piece of code to make sure that we get data from the second player (the one who doesn't have the servers running on their machine).
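Roughly, the keypoint-to-curve part looks like the sketch below. This is a simplified illustration rather than our exact code: sendToPeer() is a placeholder for the WebRTC data-channel plumbing that ships Player 1's keypoints to the other machine, and drawKeypoints2() would do the same drawing from the keypoints received from Player 2.

```javascript
let video;
let poseNet;
let pose1 = []; // Player 1's keypoints (detected locally)
let pose2 = []; // Player 2's keypoints (received over the peer connection)

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();

  // ml5.js wrapper around PoseNet; 'pose' fires on every detection
  poseNet = ml5.poseNet(video);
  poseNet.on('pose', gotPoses);
}

function gotPoses(poses) {
  if (poses.length > 0) {
    pose1 = poses[0].pose.keypoints;
    // sendToPeer(pose1); // placeholder: push keypoints to the other player over WebRTC
  }
}

// Draw one of Player 1's tentacles from their arm keypoints
function drawKeypoints() {
  const shoulder = pose1.find((k) => k.part === 'leftShoulder');
  const elbow = pose1.find((k) => k.part === 'leftElbow');
  const wrist = pose1.find((k) => k.part === 'leftWrist');
  if (!shoulder || !elbow || !wrist) return;

  // The joint positions become control points of the tentacle curve
  noFill();
  stroke(0);
  strokeWeight(8);
  beginShape();
  curveVertex(shoulder.position.x, shoulder.position.y); // first point repeated
  curveVertex(shoulder.position.x, shoulder.position.y);
  curveVertex(elbow.position.x, elbow.position.y);
  curveVertex(wrist.position.x, wrist.position.y);
  curveVertex(wrist.position.x, wrist.position.y); // last point repeated
  endShape();
}

function draw() {
  background(220);
  drawKeypoints();
  // drawKeypoints2(); // same idea, reading from pose2 once Player 2's data arrives
}
```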

However, we are not getting data from the second player for now, so we decided to book our class professor's office hours to get help and give our cutie Octopus at least two more legs and one more eye.

Process

Both servers and the code in the p5 web editor up and running.

Early tests

PoseNet recognizes my nose even under a mask, which is pretty amazing.
