Building Voice-Enabled Prototypes in Framer

A set of tools to make VUI (voice user interface) prototypes with Framer.

Philip Belov
Framer
4 min read · May 24, 2017


I’d like to share how to make prototypes that can react to voice in Framer. My goal is to give you the tools to make VUI (voice user interface) prototypes quickly and easily. Don’t have Framer yet? Download a 14-day free trial to follow along with this tutorial.

In Framer, you can make any layer (or, more precisely, any layer’s property) react to voice. By “react to voice” I mean respond to the voice input volume that the microphone receives.

Here’s a simple demonstration of the voice-enabled prototype. You can play around with it, change some properties, etc. It’s all up to you. 😉

How it works

To make all this magic happen, I used the Web Audio API to handle all the audio-related work.

If you’re interested in diving into the topic further, I added links to helpful articles at the end of this post.

Please note: the prototype only works in browsers that support the Web Audio API, which means you’ll most likely need to run it on desktop.
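For reference, here’s a rough plain-JavaScript sketch of how the Web Audio API can be wired up to read the microphone’s input volume. The actual prototype is written in Framer’s CoffeeScript, so the function names here are my own:

```javascript
// Compute a normalized volume (RMS, in [0, 1]) from an analyser's
// time-domain data. Byte samples center at 128 for silence.
function readVolume(analyser, buffer) {
  analyser.getByteTimeDomainData(buffer);
  let sum = 0;
  for (const sample of buffer) {
    const centered = (sample - 128) / 128;
    sum += centered * centered;
  }
  return Math.sqrt(sum / buffer.length);
}

// Browser-only setup: ask for the microphone and report volume every frame.
async function startListening(onVolume) {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const ctx = new AudioContext();
  const analyser = ctx.createAnalyser();
  analyser.fftSize = 256;
  ctx.createMediaStreamSource(stream).connect(analyser);
  const buffer = new Uint8Array(analyser.fftSize);
  const tick = () => {
    onVolume(readVolume(analyser, buffer));
    requestAnimationFrame(tick);
  };
  tick();
}
```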

I also added some comments to the code of the demo prototype so you can better understand how everything works.

How to use the prototype

Next, I’ll explain how you can use the code from the prototype in your own creations.

Let’s take a look at the respondToVoice function.

As you can see, this function takes a layer as a parameter. This layer is what will actually ‘react’ to voice input, changing in the ways you want. In the example below, we’ll change the scale and border width of our layer according to the input voice volume.

Make note of the following lines of code:
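As a rough sketch of the idea in plain JavaScript (the real respondToVoice lives in the prototype’s CoffeeScript, and the state values and MAX_VOLUME below are illustrative, not the prototype’s actual numbers):

```javascript
const MAX_VOLUME = 0.5; // assumed ceiling for the input volume

// Two illustrative states: the layer's properties at rest and at full volume.
const states = {
  inactive: { scale: 1, borderWidth: 2 },
  active: { scale: 1.4, borderWidth: 8 },
};

// Set the layer's properties somewhere between the two states,
// proportionally to the current input volume.
function respondToVoice(layer, volume) {
  const t = Math.min(volume / MAX_VOLUME, 1); // normalize to [0, 1]
  layer.scale =
    states.inactive.scale + t * (states.active.scale - states.inactive.scale);
  layer.borderWidth =
    states.inactive.borderWidth +
    t * (states.active.borderWidth - states.inactive.borderWidth);
}
```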

What we’re doing here is setting the property value according to the current input volume. How does this actually work? Our layer has two states, inactive and active, and we need to animate it between the properties of these states. To do this, I’ve used the Utils.modulate function:

1. It takes the incoming sound volume as the first parameter,

2. and maps it from the [0, MAX_VOLUME] range to the range between the property values of the active and inactive states (MAX_VOLUME is the maximum value of the input sound volume).
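In plain JavaScript, Utils.modulate behaves roughly like this (a sketch; Framer’s real implementation is in CoffeeScript, and its last parameter is a limit flag that clamps the result to the target range):

```javascript
// A plain-JavaScript sketch of Utils.modulate: linearly map `value`
// from one range onto another, optionally clamping to the target range.
function modulate(value, [fromLow, fromHigh], [toLow, toHigh], limit = false) {
  const progress = (value - fromLow) / (fromHigh - fromLow);
  let result = toLow + progress * (toHigh - toLow);
  if (limit) {
    const lo = Math.min(toLow, toHigh);
    const hi = Math.max(toLow, toHigh);
    result = Math.min(Math.max(result, lo), hi);
  }
  return result;
}

const MAX_VOLUME = 0.5; // illustrative value
// A volume halfway to MAX_VOLUME lands halfway between the
// inactive scale (1) and the active scale (1.4):
modulate(0.25, [0, MAX_VOLUME], [1, 1.4], true); // → 1.2
```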

Here’s an illustration that shows what’s actually happening:

Mapping Input Voice Volume Value to Layer’s Property Value

In the case above, I’m mapping the scale and border width of the layer according to the current input volume.

Now that we have our layer properties set, what’s next? Animation! 🎉🤘

Above you can see a chunk of code that animates our layer according to the input volume. I’ve added some animation options, like curve and time, so the transition looks smoother and more fluid. You can edit these values to get other interesting results.

After creating the animation for our layer, we need to loop it because the voice input we’re getting is continuous:
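As a plain-JavaScript sketch of the looping idea (the prototype itself uses Framer’s animate API with curve and time options; smoothStep and the smoothing factor here are my own stand-ins for the animation curve):

```javascript
// Ease the current value a fraction of the way toward the target.
// Repeated every frame, this approximates a smooth animation curve.
function smoothStep(current, target, factor = 0.2) {
  return current + (target - current) * factor;
}

// Browser-only: re-read the volume-driven target and ease toward it,
// looping forever because the voice input is continuous.
function runLoop(layer, getTargetScale) {
  const tick = () => {
    layer.scale = smoothStep(layer.scale, getTargetScale());
    requestAnimationFrame(tick);
  };
  tick();
}
```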

One small note before I wrap up: don’t forget that you can play around with the different variable values in the prototype. I’d pay attention to the MAX_VOLUME variable if you want the layer to be less ‘sensitive’ and ‘responsive.’
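To see why: the input volume gets divided by MAX_VOLUME before it’s mapped onto the layer’s properties, so raising MAX_VOLUME means the same input moves the layer less. A tiny sketch (the function name is mine, not from the prototype):

```javascript
// The fraction of the property range a given input volume reaches:
// raising maxVolume shrinks it, making the layer less responsive.
function sensitivity(volume, maxVolume) {
  return Math.min(volume / maxVolume, 1);
}

sensitivity(0.2, 0.5); // → 0.4 of the way to the active state
sensitivity(0.2, 1.0); // → only 0.2 with a higher MAX_VOLUME
```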

So yeah, that’s it.

Don’t forget to check out the prototype and create awesome things, including a full-fledged prototype of a voice-enabled personal assistant (you just need to add the SpeechRecognition API).

I hope that I’ve saved you a little bit of time. 🕐
