Manipulate your 3D content with gestures in AR.js
A couple of weeks ago, on A-Frame’s Slack group, someone asked if anyone knew of a component for manipulating 3D elements with gestures. Then Rigel Benton, from 8th Wall, shared an example of how they manipulate A-Frame elements with gestures. As soon as I looked into it, I thought that approach could help answer one of the most frequently asked questions about AR.js: “How can I handle click events on 3D elements?”.
8th Wall’s solution is excellent: they developed a component that listens to touch events on the scene and emits custom events with the details needed to manipulate each element. Since AR.js and 8th Wall work differently, we took advantage of this gesture detection and implemented the solution within our image tracking and marker tracking approaches.
Before going into detail about the technical implementation, let’s see what we can do with it.
Use your fingers!
I guess you are curious to try this out.
And since it’s WebAR, it’s as easy as you can imagine: open this link on your phone and scan this picture.
Then pinch and zoom the 3D model once it appears on the scene.
Amazing, right? This way, you can easily manipulate the content of your AR scene, and your users will be able to play with 3D content using familiar patterns like pinch, zoom and drag. But you are not limited to just those: keep reading and you’ll find out how to add your own gesture events!
How does it work?
In order to manipulate our 3D element using gestures, we need two A-Frame components: gesture-detector and gesture-handler.
Gesture detector needs to be placed on the a-scene. It listens to regular touch events and emits a custom event indicating how many fingers were involved (“one”, “two”, “three” or “many”), passing along details of the event such as the position and coordinates where the user touched the screen.
Emitted events are named after how many fingers were detected when the user started touching the screen, and each gesture produces three types of events: start, move and end. For example, if you want to listen to a two-finger gesture, you can listen for any of “twofingerstart”, “twofingermove” and “twofingerend”. These three kinds of events are emitted for “one”, “two”, “three” or “many” fingers, and each one carries a detail object with the touch count, position, spread, and screen coordinates.
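For instance, you could inspect what a two-finger gesture reports with something like the sketch below. The detail field names here follow the description above and are illustrative; check the gesture-detector source for the exact shape of the detail object.

// Illustrative: log the detail object of a two-finger gesture.
const sceneEl = document.querySelector("a-scene");

sceneEl.addEventListener("twofingermove", (event) => {
  console.log("fingers:", event.detail.touchCount);
  console.log("position:", event.detail.position);
  console.log("spread:", event.detail.spread);
});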
Now that we are listening for touch events, we need to tie them to our element in order to manipulate it. That is what gesture handler does: we listen for the custom touch events on the scene and trigger a function that manipulates our 3D element.
For example, if we want to listen for a single finger moving on the screen to rotate the element:
sceneEl.addEventListener("onefingermove", handleRotation);
And for a two-finger event:
sceneEl.addEventListener("twofingermove", handleScale);
As you may have guessed, these events will be triggered even if the marker is not visible, so we need to ensure that this only works when the marker has been found. We can easily do that with AR.js marker events:
“markerFound” is fired when a marker (in Marker Based) or a picture (in Image Tracking) has been found, and “markerLost” is fired when it has been lost.
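One simple way to use these events is a visibility flag that the gesture handlers check before doing anything. This is a sketch of the idea, assuming a single a-marker element in the scene:

// Sketch: track whether the marker (or image target) is currently visible.
const markerEl = document.querySelector("a-marker");
let isVisible = false;

markerEl.addEventListener("markerFound", () => {
  isVisible = true;
});

markerEl.addEventListener("markerLost", () => {
  isVisible = false;
});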
Once we are sure that gesture events are handled only when the element is visible, we can start manipulating it! Thanks to the event details provided by the gesture detector, we can edit the 3D properties of our elements.
Gesture handler comes by default with pinch-to-zoom and one-finger spin gestures. Let’s take a look at how they work.
To rotate an element, we need to define a rotation factor, which determines how fast the element rotates in response to the gesture. Gesture handler comes with a default rotation factor of 5, but you can modify it via component attributes, as we will see later. Besides the rotation factor, we also need to know how far and in which direction the user has dragged across the screen. Fortunately, the custom gesture events come with that information, ready to use.
Since we are listening to “onefingermove” to rotate our element, the event detail we need to consider is positionChange, which stores how the X and Y coordinates have changed during the gesture. So, our rotation function will be:
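The following is a sketch of such a handler. The #model selector and the isVisible flag from the marker-event sketch above are assumptions for illustration; the component in the repo may differ slightly.

// Sketch of a one-finger rotation handler.
// rotationFactor controls how fast the element rotates (default 5 in gesture-handler).
const rotationFactor = 5;
const modelEl = document.querySelector("#model"); // your 3D entity

function handleRotation(event) {
  if (!isVisible) return; // flag from the marker-event sketch above

  // positionChange reports how far the finger moved since the last event.
  modelEl.object3D.rotation.y += event.detail.positionChange.x * rotationFactor;
  modelEl.object3D.rotation.x += event.detail.positionChange.y * rotationFactor;
}

document.querySelector("a-scene").addEventListener("onefingermove", handleRotation);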
To scale our element using pinch-to-zoom, we need to listen to “twofingermove”. In this case, we would like to know how far our fingers have spread across the screen. Once again, the gesture detector has this information ready for us via startSpread and spreadChange.
Before applying the scale, we need to define a couple of things: a maximum and a minimum scale. We do not want the element to disappear or become incredibly big, do we?
Gesture handler comes with default scale limits of 0.3 and 8 respectively, which can be modified via component attributes. So, the scaling function will be:
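Again as a sketch, assuming the same #model entity and isVisible flag as above:

// Sketch of a two-finger pinch-to-zoom handler.
// minScale and maxScale clamp the result (defaults 0.3 and 8 in gesture-handler).
const minScale = 0.3;
const maxScale = 8;
const modelEl = document.querySelector("#model");
const initialScale = modelEl.object3D.scale.clone();
let scaleFactor = 1;

function handleScale(event) {
  if (!isVisible) return; // flag from the marker-event sketch above

  // spreadChange / startSpread describe how much the fingers moved apart,
  // relative to their distance when the gesture started.
  scaleFactor *= 1 + event.detail.spreadChange / event.detail.startSpread;
  scaleFactor = Math.min(Math.max(scaleFactor, minScale), maxScale);

  modelEl.object3D.scale.set(
    scaleFactor * initialScale.x,
    scaleFactor * initialScale.y,
    scaleFactor * initialScale.z
  );
}

document.querySelector("a-scene").addEventListener("twofingermove", handleScale);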
This is how the full a-scene looks with gesture detector and handler. Remember to add a “clickable” class to the element and reference it in the raycaster properties to avoid performance warnings in your AR scene. With AR.js, the raycaster and cursor attributes should be placed on the marker element.
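Here is a sketch of what that markup could look like; the marker preset, model asset and element ids are placeholders:

<a-scene arjs gesture-detector embedded vr-mode-ui="enabled: false">
  <!-- raycaster and cursor live on the marker, scoped to .clickable elements -->
  <a-marker
    preset="hiro"
    raycaster="objects: .clickable"
    emitevents="true"
    cursor="fuse: false; rayOrigin: mouse;"
  >
    <a-entity
      id="model"
      class="clickable"
      gltf-model="url(path/to/model.gltf)"
      gesture-handler
    ></a-entity>
  </a-marker>

  <a-entity camera></a-entity>
</a-scene>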
Now that you know how to handle gesture events in AR.js, you can easily create custom transformations and cool effects in your projects! This sample works with just one marker, but it could be extended to support multiple ones. We will release more tutorials soon on how to do that.
Here you can find the repo that contains this example and more information on how to use it: https://github.com/fcor/arjs-gestures.
Again, you can also open this link on your phone and scan this picture to try it out.
Credits
Once again, we would like to thank Rigel Benton and 8th Wall for sharing their work and making this possible. We are all one community, and we are making science fiction a reality by making WebAR happen.