How to get animation morph targets from a webcam with Hallway’s AvatarWebKit

Jacob Muchow
Published in QuarkWorks, Inc.
Mar 21, 2022 · 3 min read

Let’s say you want to animate a 2D/3D model from your webcam. To do that, you’ll likely want to run an algorithm on the webcam’s video stream that outputs blendshapes (also known as morph targets) you can use in your scene.
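To make that concrete: a blendshape result is just a set of named facial coefficients, each weighted from 0 to 1, that a renderer can feed into the matching morph targets. Here’s a hypothetical sketch in TypeScript (the names follow the ARKit convention; your pipeline’s names may differ):

```typescript
// Hypothetical shape of a blendshape result: each key names a facial
// movement, each value is how strongly it is expressed (0 = none, 1 = full).
type BlendShapes = Record<string, number>

const frame: BlendShapes = {
  jawOpen: 0.42,      // mouth is partially open
  eyeBlinkLeft: 0.05, // left eye nearly fully open
  browInnerUp: 0.31,  // inner brows slightly raised
}
```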

Developing this tech is actually quite painstaking (as I learned over the last two years 🥲). If you search around the web, you will find a lot of solutions that more or less result in a basic “mouth open or shut” look. Hardly the effect you dreamed of. It certainly wasn’t mine. I want my character to BE me.

For a couple years, we’ve been developing a machine learning pipeline that goes beyond the other options you can find on the internet. And the best news is our tech can run anywhere — desktop, mobile, or even in your web browser.

Our Avatar Story

Recently, we released a desktop app we are calling “Hallway Tile,” currently available on macOS (Windows and Linux soon to come!). The app lets you customize an avatar, use one of our pre-made ones, or even import your own.

Then you can appear as that avatar in video calls in other apps on your computer like Zoom, Chrome, Microsoft Teams, Discord, FaceTime, etc.

Previously, we had created a video call website called Hallway that we still use every day for our team meetings. One of our favorite features from the website is a web-based avatar system that let us join as an emoji that animates along with our facial expressions. We loved the idea of not having to join our team meetings on pure video 24/7. That work generated the idea for the native app: we wanted to put more power in the hands of an individual, instead of working on a team product first.

We have released our underlying machine learning & animation tech for the web in the form of AvatarWebKit. It’s an easy-to-use node module that will get you off and running with real-time blendshapes so you can render your own scenes.
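Getting it into a project is a single npm install and an import. A minimal sketch (the package and class names here are assumptions; check the AvatarWebKit repo for the canonical ones):

```typescript
// Install first (assumed package name):
//   npm install @quarkworks-inc/avatar-webkit
import { AUPredictor } from '@quarkworks-inc/avatar-webkit'
```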

Okay, much like a Pinterest post, now that I’ve told you my life story I’m ready to spill the details.

Down to Business

First, you will need to reach out to us so we can create access tokens for you.

Once you’ve received access tokens, you’re ready to get set up with the blendshape SDK.

1. Create your predictor.
2. Subscribe to results.
3. Start your predictor with a video stream.

After you’ve done this, you will start receiving blendshapes and transforms. It’s that easy!
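Put together, the three steps look roughly like the sketch below. This is a minimal outline, not API documentation: the class and option names (`AUPredictor`, `apiToken`, `dataStream`, `start`) are assumptions, so verify them against the SDK docs for your version.

```typescript
// Class, option, and field names assumed; verify against the SDK docs.
import { AUPredictor } from '@quarkworks-inc/avatar-webkit'

async function startTracking() {
  // 1. Create your predictor with the access token you received from us.
  const predictor = new AUPredictor({ apiToken: 'YOUR_API_TOKEN' })

  // 2. Subscribe to results. Each emission carries blendshape weights
  //    plus head transform data you can apply to your scene.
  predictor.dataStream.subscribe((results) => {
    console.log(results)
  })

  // 3. Start the predictor with a webcam video stream.
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { width: 640, height: 360 },
    audio: false,
  })
  await predictor.start({ stream })
}

startTracking()
```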

The next step is rigging these blendshapes up to some type of scene. Stay tuned for another blog post about how to do that part. Or feel free to reach out to us! We are excited to see what kind of impact this tech can have on video calls, films, games, and more.
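As a small preview of that post: if your model lives in three.js and its morph targets share names with the incoming blendshapes, driving the mesh can be as simple as copying weights each frame. A minimal sketch (the helper below is illustrative, not part of the SDK, and assumes matching names):

```typescript
import { Mesh } from 'three'

// Apply named blendshape weights (0..1) to a three.js mesh's morph targets.
// Assumes the mesh was exported with morph target names that match the
// blendshape names coming out of the predictor.
function applyBlendShapes(mesh: Mesh, blendShapes: Record<string, number>) {
  const dict = mesh.morphTargetDictionary
  const influences = mesh.morphTargetInfluences
  if (!dict || !influences) return

  for (const [name, weight] of Object.entries(blendShapes)) {
    const index = dict[name]
    if (index !== undefined) influences[index] = weight
  }
}
```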

For access to our web SDK, get in touch with us here.

If you have other questions, you can also get in touch with us on Discord.

Subscribe to our newsletter for more updates on what we’re doing next!

Jacob Muchow is a software engineer and co-founder at QuarkWorks, Inc. Just trying to do something great: working on real-time animations for digital characters using your camera.