Hand-Manipulated Holographic 3D Model Using JavaScript

Develop an Interactive JavaScript 3D Model You Can Move with Hand Gestures

Bilal Rifas
Sep 23 · 8 min read

Create an Interactive JavaScript 3D Model

Ever wondered what it would be like to move 3D objects on your screen using hand gestures, like in Iron Man? Elon Musk showed off a similar gesture-based workflow at SpaceX and presented it as a new era of designing 3D models.

I’m here to tell you that you can do it. You don’t need to be a billionaire, genius, philanthropist — you just need a little bit of JavaScript.

In this article, I’ll walk you through a cool project, and along the way we’ll explore some great JavaScript modules. Thanks to JavaScript’s ecosystem, we can plug in almost any module and create awesome things with it.

Without further delay, let’s get started!

Project Prerequisites

Although I’ll be explaining almost every step I follow, there are still a few things you should have ready:

  • An IDE or editor (anything you’re comfortable with; I’ll be using VS Code for this project)
  • Basic knowledge of JavaScript
  • A PC or Laptop with a working webcam
  • You should be a Marvel fan because with great power comes great responsibility.


Modules You’ll Need

In this whole project, we’ll only be using two modules. You read that right: just two modules will do the trick.

Three.js


See that cool-looking box in the demo? We can create something like that with this JavaScript library: an easy-to-use, lightweight 3D library with a default WebGL renderer. It also provides Canvas 2D, SVG, and CSS3D renderers.

WebGL (Web Graphics Library) is a JavaScript API for rendering interactive 3D and 2D graphics within any compatible web browser without the use of plug-ins. We can directly integrate it into the HTML <canvas> element.

You can create some amazing things with this library, so go and experiment on your own. If you want to read more about three.js, visit its website or read its documentation.

Handtrack.js


Handtrack.js is a library for prototyping real-time hand detection (bounding boxes) directly in the browser. Underneath, it uses a trained convolutional neural network (SSDLite with a MobileNetV2 backbone) that provides bounding-box predictions for the location of hands in an image, trained using the TensorFlow Object Detection API.

It provides a wrapper that lets you prototype hand- and gesture-based interactions in your web applications. It takes in an HTML media element (img, video, or canvas) and returns an array of bounding boxes, class names, and confidence scores.

You can do plenty of other things with this library, too. I’d recommend reading the documentation to learn more, but if you only want to build this project, then this is enough for now.

File Structure

It’s always a good idea to structure your folders and components well, so that as your project grows it stays easy to read, maintain, and scale.

project/
├── index.html
├── style.css
└── script.js

For this project, we’ll structure our components first. If you want to keep your structure the same as mine, follow this layout, but feel free to make your own; just make sure the script and CSS paths you reference in the HTML match it.

The HTML Side

In this section, we’ll create the index.html and style.css files. These two are the root files of our whole project and are fairly simple to read and understand. Let’s take a look at the code:
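A minimal sketch of what index.html could look like. The CDN URLs and the element IDs (myvideo, canvas, trackbutton, data) are assumptions on my part; use whichever versions and names you prefer, and keep your CSS selectors in sync.

<!DOCTYPE html>
<html>
  <head>
    <title>Hand-Manipulated 3D Model</title>
    <link rel="stylesheet" href="style.css" />
    <!-- illustrative CDN builds of three.js and handtrack.js -->
    <script src="https://cdnjs.cloudflare.com/ajax/libs/three.js/r128/three.min.js"></script>
    <script src="https://cdn.jsdelivr.net/npm/handtrackjs/dist/handtrack.min.js"></script>
  </head>
  <body>
    <!-- the tracker division: webcam feed, detections, and coordinates -->
    <div id="tracker">
      <video id="myvideo" width="320" height="240"></video>
      <canvas id="canvas" width="320" height="240"></canvas>
      <button id="trackbutton" onclick="toggleVideo()">Toggle Camera</button>
      <div id="data">X: 0, Y: 0</div>
    </div>
    <script src="script.js"></script>
  </body>
</html>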

This is a very simple file, with just the libraries and the CDNs we’ll be using for this project.

In the tracker division, we create elements with custom IDs that make up the webcam screen in the top left of the browser: the video and canvas elements render the camera feed, and the data division displays the updated X and Y coordinates.

The CSS for this project is pretty self-explanatory:
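A sketch of style.css to match; apart from the #myvideo rule, which the next paragraph explains, everything here is cosmetic and safe to change.

body {
  margin: 0;
  overflow: hidden; /* the three.js canvas will fill the window */
}

#tracker {
  position: absolute;
  top: 0;
  left: 0;
  z-index: 1; /* keep the webcam view above the 3D canvas */
}

#myvideo {
  display: none; /* hidden until the user toggles the camera on */
}

#data {
  color: #fff;
  font-family: monospace;
}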

The #myvideo element has its display property set to none, which will come in handy later: the video stays hidden by default, and the screen only appears when the user clicks the ‘Toggle Camera’ button. This makes the whole experience more user-friendly.

The JS Side

Now comes the fun part of the project. To display any 3D graphics on the screen we need three.js, but just importing the script won’t help. To actually see something on the screen, you’ll need three things: a scene, a camera, and a renderer.

Setting up the three objects

Let’s create these three elements in our project:
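Something along these lines, following the standard three.js setup:

// the scene holds every object, light, and camera we create
const scene = new THREE.Scene();

// PerspectiveCamera(fieldOfView, aspectRatio, nearClippingPlane, farClippingPlane)
const camera = new THREE.PerspectiveCamera(
  75,
  window.innerWidth / window.innerHeight,
  0.1,
  1000
);

// the renderer draws the scene into a <canvas> element,
// which we append to the document body
const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);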

With these commands, we now have our camera, scene, and renderer set up. Three.js offers us a wide range of cameras but for this particular project, we’ll be using the Perspective Camera.

Let’s see what’s going on here. In the above code, we’ve initialized the camera object with a PerspectiveCamera, which takes the following configuration: PerspectiveCamera({field of view}, {aspect ratio}, {near clipping plane}, {far clipping plane});

Last but not least, we add the renderer element to our HTML document. This is a <canvas> element the renderer uses to display the scene to us.

But if you run this, you’d probably get a black screen with a blue line and the X and Y coordinates at the top left.

But we don’t see the object yet. Why?

Let’s solve that problem.


Creating and displaying the 3D object

To add the cube, we’ll create a BoxGeometry: an object that contains the vertices and faces of the cube.

Again, three.js comes with a lot of materials, but we’ll be using MeshBasicMaterial for this project. The third thing we need is a Mesh. A mesh is an object that takes a geometry and applies a material to it, which we can then insert into our scene and move around freely.

When we call scene.add(), the cube is automatically added at the coordinates (0, 0, 0). That would put the camera and the cube inside each other, so to avoid this we simply move the camera out a bit with camera.position.z = 5;
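Putting those three pieces together looks roughly like this (the size and color are my choices; pick your own):

const geometry = new THREE.BoxGeometry(1, 1, 1);
const material = new THREE.MeshBasicMaterial({ color: 0x00ff00 });
const cube = new THREE.Mesh(geometry, material);

scene.add(cube); // the cube lands at (0, 0, 0)
camera.position.z = 5; // pull the camera back so the cube sits in front of it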

But if you run this, you still can’t see the cube! It’s frustrating, I know. Hang on just a little bit longer!

Actually, you’ve already created the cube in your project; it just isn’t visible yet. To make it visible, write the following:
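This is the standard three.js render loop:

function animate() {
  requestAnimationFrame(animate); // ask the browser to call us again on the next frame
  renderer.render(scene, camera); // draw the scene from the camera's point of view
}
animate();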

What this does is create a function called animate(). That function has a render call inside, which draws the scene and thus finally lets you see the cube.

But honestly, it’s a boring green cube which just sits there, right? You might be wondering “I could have done it with simple CSS as well, why go through all this WebGL and render…”


Allow me to show you the magic of three.js…

Try typing this code into your project:
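A sketch of the updated loop; the 0.01 increment is an arbitrary speed, so tweak it to taste:

function animate() {
  requestAnimationFrame(animate);
  cube.rotation.x += 0.01; // rotate a tiny bit every frame...
  cube.rotation.y += 0.01; // ...around two axes at once
  renderer.render(scene, camera);
}
animate();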

Voilà, you have a spinning 3D cube.


Customizing the cube

We already have the cube designed and ready, but it doesn’t look the way it did in the demo, does it? The demo one looks way better, so let’s customize our cube to match it.

All you have to do is change a few settings in your code:
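Here’s a sketch of the whole script with those tweaks; the cube’s dimensions and color are assumptions of mine, so adjust them until it looks right to you:

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(
  75,
  window.innerWidth / window.innerHeight,
  0.1,
  1000
);
const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// wireframe: true is the setting that gives the hologram look
const geometry = new THREE.BoxGeometry(2, 2, 2);
const material = new THREE.MeshBasicMaterial({ color: 0x00ff00, wireframe: true });
const cube = new THREE.Mesh(geometry, material);
scene.add(cube);
camera.position.z = 5;

function animate() {
  requestAnimationFrame(animate);
  cube.rotation.x += 0.01;
  cube.rotation.y += 0.01;
  renderer.render(scene, camera);
}
animate();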

This is the whole file from the start, with a few additional elements in the code. We’ve also added a wireframe, which gives the cube that Iron Man hologram feel. By the end of this step, you should see a slowly spinning green wireframe cube.


We are already halfway there.

Let’s play around with handtrack.js

First, we need to load a model with the help of the library. We’ve also defined some parameters, but this step is completely optional:
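A sketch using the parameter names from the handtrack.js documentation; the values here are my guesses, so tune them for your setup:

const modelParams = {
  flipHorizontal: true, // mirror the feed, like a selfie camera
  maxNumBoxes: 1,       // we only care about one hand
  iouThreshold: 0.5,    // overlap threshold for non-max suppression
  scoreThreshold: 0.7,  // minimum confidence before a detection counts
};

let model = null;
handTrack.load(modelParams).then((loadedModel) => {
  model = loadedModel;
  console.log("Model loaded");
});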

This process would also have worked if we’d used just the handTrack.load() method, but we want to add some more customizations to our project, so we’ve defined extra parameters.

Once we’re done loading the handtrack.js model, we need to load the video stream into the canvas, as defined in the HTML. For that, we use a method called handTrack.startVideo():
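A sketch of the toggle, wired to the ‘Toggle Camera’ button from the HTML (toggleVideo and runDetection are names I’ve assumed; runDetection is defined in the next step):

const video = document.getElementById("myvideo");
let isVideo = false;

function toggleVideo() {
  if (!isVideo) {
    handTrack.startVideo(video).then((status) => {
      if (status) {
        isVideo = true;
        video.style.display = "block"; // reveal the hidden video element
        runDetection();
      }
    });
  } else {
    handTrack.stopVideo(video);
    isVideo = false;
    video.style.display = "none";
  }
}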

NOTE: The boolean variable isVideo helps with toggling the stream. This is a much more user-friendly way of handling camera access than keeping the camera open all the time.

Now comes the part where we need a bit of machine learning… but don’t worry, you don’t have to do anything! Handtrack.js will detect and predict the data for you with the help of Google’s TensorFlow.

This is the code to get predictions from handtrack.js:
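A sketch of the detection loop; handleHand is a hypothetical helper we’ll fill in at the end:

const canvas = document.getElementById("canvas");
const context = canvas.getContext("2d");

function runDetection() {
  model.detect(video).then((predictions) => {
    // draw the camera frame plus bounding boxes onto the tracker canvas
    model.renderPredictions(predictions, canvas, context, video);
    if (predictions.length > 0) {
      handleHand(predictions[0].bbox);
    }
    if (isVideo) {
      requestAnimationFrame(runDetection); // keep detecting while the camera is on
    }
  });
}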

The trick now is to track the coordinates of the hand on the video canvas and apply the corresponding changes to the three.js object.

The prediction object from model.detect() returns the following results:
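For a single detected hand, it looks roughly like this (the numbers are purely illustrative):

[
  {
    bbox: [160, 110, 120, 130], // [x, y, width, height] of the box around the hand
    class: "hand",
    score: 0.98
  }
]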


The bbox key gives the top-left coordinates, width, and height of the box drawn around the hand. But we’re trying to get the center-point coordinates of the hand; to do that, we use a simple formula:

centerX = bbox[0] + bbox[2] / 2   // x + width / 2
centerY = bbox[1] + bbox[3] / 2   // y + height / 2

Another problem is that the object’s canvas and the tracker’s canvas have very different scales, and neither source has its origin at the center. To take care of that, we first shift the coordinates so that the origin of the video canvas sits at its center.

Once that’s done, taking care of the scale issue is easy: divide by a constant factor. Putting it all together looks something like this:
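A sketch of the hypothetical handleHand helper from the detection loop; the divisor of 50 is a tuning assumption, so experiment until the cube tracks your hand comfortably:

function handleHand(bbox) {
  // center of the detected hand on the video canvas
  const centerX = bbox[0] + bbox[2] / 2;
  const centerY = bbox[1] + bbox[3] / 2;

  // shift so the origin sits at the middle of the video canvas
  const shiftedX = centerX - video.width / 2;
  const shiftedY = centerY - video.height / 2;

  // scale canvas pixels down to scene units; canvas Y grows downward
  // while scene Y grows upward, hence the minus sign
  cube.position.x = shiftedX / 50;
  cube.position.y = -shiftedY / 50;

  // update the on-screen coordinate readout
  document.getElementById("data").innerText =
    "X: " + centerX.toFixed(0) + ", Y: " + centerY.toFixed(0);
}

With this in place, toggle the camera, move your hand in front of the webcam, and the wireframe cube should follow it around the screen.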

Conclusion

There you go! Your very own 3D object, which you can control with your hand.

It’s easy these days to develop 3D models using JavaScript and make them interactive as well. Wasn’t it fun doing all the hard work to get that amazing final result? If you face any difficulties, please let me know in the comments section.

