How it’s made: Holobooth

Very Good Ventures Team · Published in Flutter · Jan 24, 2023

A virtual photo booth experience showcasing Flutter and Machine Learning

Introducing the Flutter Forward Holobooth, a new virtual photo booth experience showcasing the power of Flutter, Firebase, and machine learning (using MediaPipe and TensorFlow.js). Start by selecting your avatar (Dash or Sparky) and transport yourself to a tropical beach, a volcanic mountain, outer space, the ocean floor, or somewhere else! Since we can't transport everyone to Nairobi to attend Flutter Forward in person, we wanted to provide a virtual experience that is just as exciting. With Holobooth, you can capture a short video to commemorate your virtual visit, then show your friends by sharing it on Twitter or Facebook.

Landing screen for the Flutter Forward Holobooth web app. On the left, Dash is taking a picture inside a photo booth decorated with purple and blue hues and the Flutter logo. On the right is a button to get started.
Try out the Flutter Forward Holobooth at holobooth.flutter.dev

The Holobooth builds on the first version of the Photo Booth app from Google I/O 2021. Instead of taking photos of you and Dash or Sparky, Holobooth uses machine learning to control animations of Dash or Sparky using your facial expressions.

We’ll break down how our team collaborated with Google to create a more immersive and futuristic photo booth experience by tapping into the power of Google tools. We used Flutter and Firebase to build the Holobooth app. Web ML in JavaScript allowed us to take the experience to the next level with virtual, interactive, 3D avatars. Let’s dive into how we built it!

Detecting faces with TensorFlow.js

One of the most exciting features of the Holobooth is the ability to map live video of your face onto a 3D model of Dash (or Sparky) as you travel through their virtual world. If your face expresses surprise, Dash's face expresses surprise, and so on. To achieve this, we used the camera plugin for Flutter web together with TensorFlow.js. More specifically, we used the MediaPipe FaceMesh model, which estimates 468 3D face landmarks in real time, to detect the features of the user's face within the camera frame across web and mobile browsers.

A man with a grey shirt and glasses sitting in a chair. On his face are a bunch of red dots that map onto his features. There is a high concentration of red dots around his eyes and around his mouth.
Features detected with the MediaPipe FaceMesh model

Based on the position of each facial feature, we can determine whether the user is in frame, whether their eyes or mouth are open, and more. As the user moves around the camera view, the MediaPipe FaceMesh model (available via the TensorFlow.js Face Landmarks Detection package) tracks the exact coordinates of the user's features so that we can mirror them on Dash or Sparky. While there isn't an official Dart package for TensorFlow.js yet, the Dart JS package allowed us to import the JavaScript library into a Flutter web app (see the tensorflow_models package folder for details). The FaceGeometry constructor in face_geometry.dart shows how the detected keypoints are combined into rotation, eye, mouth, and distance geometry:

FaceGeometry({
  required tf.Face face,
  required tf.Size size,
}) : this._(
        // Head rotation derived from the full set of keypoints.
        rotation: FaceRotation(keypoints: face.keypoints),
        leftEye: LeftEyeGeometry(
          keypoints: face.keypoints,
          boundingBox: face.boundingBox,
        ),
        rightEye: RightEyeGeometry(
          keypoints: face.keypoints,
          boundingBox: face.boundingBox,
        ),
        mouth: MouthGeometry(
          keypoints: face.keypoints,
          boundingBox: face.boundingBox,
        ),
        // How close the face is to the camera, relative to the image size.
        distance: FaceDistance(
          boundingBox: face.boundingBox,
          imageSize: size,
        ),
      );

const FaceGeometry._({
  required this.rotation,
  required this.mouth,
  required this.leftEye,
  required this.rightEye,
  required this.distance,
});
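To give a flavor of that interop layer, here is a minimal sketch of what a package:js binding to the TensorFlow.js face-landmarks-detection API could look like. The wrapper names here are illustrative assumptions rather than the actual tensorflow_models code:

@JS()
library face_detector_bindings;

import 'package:js/js.dart';
import 'package:js/js_util.dart' as js_util;

// Binds the global createDetector function exposed by the TensorFlow.js
// face-landmarks-detection script loaded on the page.
@JS('faceLandmarksDetection.createDetector')
external Object _createDetector(String model, _DetectorConfig config);

@JS()
@anonymous
class _DetectorConfig {
  external factory _DetectorConfig({String runtime, bool refineLandmarks});
}

// Wraps the returned JS promise in a Dart Future.
Future<Object> createFaceDetector() {
  return js_util.promiseToFuture<Object>(
    _createDetector(
      'MediaPipeFaceMesh', // model name from the TF.js SupportedModels enum
      _DetectorConfig(runtime: 'tfjs', refineLandmarks: true),
    ),
  );
}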

Animating backgrounds and avatars with Rive and TensorFlow.js

We turned to Rive to bring Holobooth animations to life. Rive is a web app built in Flutter that provides tools for building highly performant, lightweight, interactive animations that are easy to integrate into a Flutter app. We collaborated with talented designers at Rive and HOPR design studio to create animated Rive graphics that work seamlessly within our Flutter app. The animated backgrounds and avatars are Rive animations.

On the left is a face that moves to the left, then right, up, down, blinks, then opens its mouth. Dash mimics the same movements as the face on the left moves.
Move your face to see the Rive model mimic your behavior

The avatars use Rive State Machines that allow us to control how an avatar behaves and looks. In the Rive State Machine, designers specify all of the inputs. Inputs are values that are controlled by your app. You can think of them as the contract between design and engineering teams. Your product’s code can change the values of the inputs at any time, and the State Machine reacts to those changes.

For Holobooth, we used inputs to control things like how wide the mouth is open. Using the features detected by the FaceMesh model, we map each one to the corresponding input on our avatar models. Through the StateMachineController, those input values determine how the avatar appears on screen:

class CharacterStateMachineController extends StateMachineController {
  CharacterStateMachineController(Artboard artboard)
      : super(
          artboard.animations.whereType<StateMachine>().firstWhere(
            (stateMachine) => stateMachine.name == 'State Machine 1',
          ),
        );
  // ...
}

For example, the avatar models have a property that measures how open the mouth is, on a scale from 0 (fully closed) to 1 (fully open). If the user closes their mouth within the camera view, the app feeds the corresponding value into the avatar model so that the avatar's mouth closes on screen as well.
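As an illustration, here is a minimal sketch (not the app's exact code) of driving a Rive state machine input from the detected mouth openness; the input name 'MouthOpen' is an assumed example:

import 'package:rive/rive.dart';

/// Sketch: pushes a 0-1 mouth openness value, derived from the FaceMesh
/// geometry, into a Rive state machine input.
class AvatarMouthDriver {
  AvatarMouthDriver(StateMachineController controller)
      : _mouthOpen = controller.findInput<double>('MouthOpen');

  final SMIInput<double>? _mouthOpen;

  void update(double openness) {
    // Clamp to the 0-1 contract agreed with the designers.
    _mouthOpen?.value = openness.clamp(0.0, 1.0).toDouble();
  }
}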

Capturing the dynamic photo with Firebase

The main feature of Holobooth is the GIF or video that you can share to celebrate Flutter Forward. We turned to Cloud Functions for Firebase to generate your dynamic photo and upload it to Cloud Storage for Firebase. Once you press the camera button, the app captures frames for five seconds. A Cloud Function then uses FFmpeg to convert those frames into a single GIF and video, which are uploaded to Cloud Storage for Firebase. You can download your GIF or video for later viewing or to manually upload it to social media.
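On the client side, the capture loop could look something like this rough sketch; the takeSnapshot callback and the frame rate are illustrative assumptions, not the app's actual API:

import 'dart:typed_data';

/// Sketch: collects raw frames for five seconds, ready to be sent to a
/// Cloud Function for conversion. takeSnapshot is a hypothetical stand-in
/// for the app's actual frame grab.
Future<List<Uint8List>> captureFrames(
  Future<Uint8List> Function() takeSnapshot,
) async {
  final frames = <Uint8List>[];
  final end = DateTime.now().add(const Duration(seconds: 5));
  while (DateTime.now().isBefore(end)) {
    frames.add(await takeSnapshot());
    // Roughly 24 frames per second (illustrative pacing).
    await Future<void>.delayed(const Duration(milliseconds: 42));
  }
  return frames;
}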

User selects Dash as an avatar, then selects an animated background of outer space with planets and stars. A rocket moves diagonally up to the left behind Dash. The user selects a blue wizard hat with stars, a matching shirt, and a Flutter mug, then presses the camera button to record a dynamic photo. The final photo is displayed on a separate screen with buttons to share the photo, download it, or retake it.
Capturing the dynamic photo

To share your GIF directly to Twitter or Facebook, you can click the share button. You are then taken to the selected platform with a pre-populated post containing a photo of the first frame of your video. To see the full video, click on the link to your holocard — a web page that displays your video in full and a button directing visitors to try out Holobooth for themselves!

Holocard page with the first frame of a user’s dynamic photo on the left. Dash is wearing an astronaut suit in front of a futuristic city. On the right is the Flutter Forward event logo with the text “Check out my Flutter holocard” and a button that says “Try now” where users can take their own photo in the Holobooth.
Example holocard

Challenges and how we addressed them

Holobooth contains a lot of elements that expand what's possible with Flutter, like machine learning models and Rive graphics, all while ensuring a performant, smooth experience for users.

Working with TensorFlow.js was a first for us at Very Good Ventures. There are currently no official Flutter libraries for TensorFlow.js, so much of our early work on this project focused on experimenting with the available models to figure out which one fit our needs. Once we settled on the landmark detection model, we had to make sense of the data the model outputs and map it onto the Rive animations. Here is an early exploration with face detection:

A man wearing a light blue shirt and red hoodie is moving his face around the screen. There is a blue box around his face and red dots mapping onto his face, with a high concentration of dots around his eyes and mouth. As he moves his face, red dots move along with his facial features.
Early exploration of face detection

The official Flutter camera plugin gave us a lot of functionality out of the box, but it doesn't currently support streaming images on the web. For Holobooth, we forked the camera plugin to add this functionality, and we hope the official plugin supports it in the future.

Another challenge was optimizing for performance. Recording the screen can be an expensive operation because the app captures a lot of data. We also had to account for users accessing the app from many different browsers and devices, and we wanted to ensure a smooth, performant experience no matter what device they're using. On desktop, backgrounds are animated and laid out in landscape orientation. To optimize for mobile browsers, backgrounds are static and cropped to fit portrait orientation. Since mobile screens are smaller than desktop ones, we also reduced the resolution of image assets to cut the initial page load and the amount of data used by the device.
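A simplified sketch of that kind of adaptive choice might look like this; the 768-pixel breakpoint and asset paths are illustrative assumptions, not the app's actual values:

import 'package:flutter/material.dart';
import 'package:rive/rive.dart';

/// Sketch: show a lightweight static background on narrow (mobile)
/// layouts and the animated Rive background on desktop-sized screens.
class AdaptiveBackground extends StatelessWidget {
  const AdaptiveBackground({super.key});

  @override
  Widget build(BuildContext context) {
    final isMobileLayout = MediaQuery.of(context).size.width < 768;
    return isMobileLayout
        // Static, portrait-cropped image keeps mobile page loads light.
        ? Image.asset('assets/backgrounds/space_mobile.png', fit: BoxFit.cover)
        // Full animated background for desktop browsers.
        : RiveAnimation.asset('assets/backgrounds/space.riv');
  }
}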

For more details on how we addressed these challenges and more, you can check out the open source code. We hope that this can serve as inspiration for developers wanting to experiment with TensorFlow.js, Rive, and videos, or even those just looking to optimize their web apps.

Looking forward

In creating this demo, we wanted to explore the potential for Flutter web apps to integrate with TensorFlow.js models in an easy, performant, and fun way. While a lot of what we've built is still experimental, we're excited about the future of machine learning in Flutter apps and the delightful experiences it can create for users on any device! Join the community conversation, let us know what you think, and tell us how you might use machine learning in your next Flutter project.

Take a video in the Holobooth and share it with us on social media!
