How we built GraffitiGala — collaborative augmented-reality graffiti

Joe Romano
Jan 30, 2018


At Hack@Brown 2018 we built GraffitiGala, a web app that lets users collaborate on AR graffiti in real time, in a shared physical and virtual space.

Alex and Marley drawing in real time

How it works

3D Rendering

The 3D rendering and AR features are based heavily on three.js and three.ar.js, which uses ARKit on iPhone and ARCore on Android. For this proof of concept, we used the graffiti demo from three.ar.js, but future applications could extend to virtual objects or entire virtual rooms. We also used an experimental AR-enabled browser, but we expect AR browsers to reach wide adoption soon.
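For a sense of what that looks like in code, here is a minimal three.ar.js bootstrap in the style of the library's example boilerplate. This is a sketch of the general setup, not GraffitiGala's exact code:

```javascript
// Minimal three.ar.js setup, adapted from the library's examples.
let vrDisplay, renderer, arView, camera, vrControls, scene;

THREE.ARUtils.getARDisplay().then((display) => {
  if (!display) {
    // No ARKit/ARCore-backed display: requires an AR-enabled browser.
    THREE.ARUtils.displayUnsupportedMessage();
    return;
  }
  vrDisplay = display;
  init();
});

function init() {
  scene = new THREE.Scene();
  renderer = new THREE.WebGLRenderer({ alpha: true });
  renderer.setSize(window.innerWidth, window.innerHeight);
  renderer.autoClear = false; // we composite the camera feed and the scene
  document.body.appendChild(renderer.domElement);

  // ARView renders the camera passthrough; ARPerspectiveCamera keeps the
  // projection matrix in sync with the physical camera.
  arView = new THREE.ARView(vrDisplay, renderer);
  camera = new THREE.ARPerspectiveCamera(
    vrDisplay, 60, window.innerWidth / window.innerHeight,
    vrDisplay.depthNear, vrDisplay.depthFar
  );
  vrControls = new THREE.VRControls(camera);
  update();
}

function update() {
  vrControls.update();   // pose the camera from device tracking
  renderer.clearColor();
  arView.render();       // draw the camera feed
  renderer.clearDepth();
  renderer.render(scene, camera);
  vrDisplay.requestAnimationFrame(update);
}
```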

Backend

The backend relies on Firebase's Realtime Database. As each client draws, it sends the current path, represented as raw position data (coordinates, path velocity, etc.), to the database. To avoid storing 3D meshes, which can be very large and cumbersome, all paths are rendered client-side: clients listen for new paths and for updates to existing paths, and render meshes for all other paths as needed.
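As a rough sketch of that sync loop, using the Firebase Web SDK of the time: the database layout and field names below are illustrative, and renderNewPath/updatePathMesh stand in for the client-side mesh code.

```javascript
// Push raw path data as the local user draws; meshes are never stored.
const pathsRef = firebase.database().ref('rooms/demo/paths');

function startPath() {
  // Each path gets its own key; points are appended as the stroke grows.
  return pathsRef.push({ points: [] });
}

function extendPath(pathRef, position, velocity) {
  pathRef.child('points').push({
    x: position.x, y: position.y, z: position.z,
    velocity: velocity,
  });
}

// Every client listens for new paths and for updates to existing ones,
// and rebuilds the corresponding mesh locally.
pathsRef.on('child_added', (snap) => renderNewPath(snap.key, snap.val()));
pathsRef.on('child_changed', (snap) => updatePathMesh(snap.key, snap.val()));
```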

Calibrating a device using a square piece of paper

Calibration & shared coordinate systems

By requiring an initial calibration before seeing and editing the world, we synchronize each client's coordinate and rotation systems. The calibration is done by lining up a real-world square with a box on the screen, collecting an initial position and an initial rotation (via the device's quaternion). This allows us to normalize positional data when drawing paths are uploaded and downloaded.
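In three.js terms, the calibration boils down to capturing the camera's pose at the moment the real square lines up with the on-screen box (variable names here are illustrative):

```javascript
// Capture the calibration frame once the user confirms alignment.
const initialPosition = camera.position.clone();
const initialQuaternion = camera.quaternion.clone();
```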

Normalization is done after download by applying the initial quaternion as a transform to the position vector and then translating it by the initial position. When uploading paths, the opposite is done: the vector is translated in the negative direction and the conjugate of the initial quaternion is applied as a transform. Three.js's Vector3 and Quaternion classes make this very simple.
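A sketch of those two transforms, assuming the initialPosition and initialQuaternion captured above:

```javascript
// Shared frame -> this device's local frame, applied to downloaded points.
function toLocal(point) {
  return point.clone()
    .applyQuaternion(initialQuaternion)
    .add(initialPosition);
}

// Local frame -> shared frame, applied before upload: undo the translation,
// then apply the conjugate (the inverse, for unit quaternions) of the rotation.
function toShared(point) {
  return point.clone()
    .sub(initialPosition)
    .applyQuaternion(initialQuaternion.clone().conjugate());
}
```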

Alex drawing a long path throughout the room

Trying it out

The application isn’t optimized and can still consume a lot of (potentially expensive) resources, so we’re not hosting it anywhere public.

However, the code is available on GitHub along with a brief getting-started guide. You will need an experimental AR-enabled browser, which means being comfortable installing a custom Android app, or signing and installing a custom iOS app.

In the future

It would be great to see this expanded to more advanced shared spaces, like a shared working environment or a shared classroom, by adopting objects other than graffiti brushstrokes. We would also love to see the code optimized so that a scalable, multi-room system could be built.

Given more advanced GPS technology or more sophisticated computer vision (to recognize known environments), we hope it would be possible to build a single, seamless, worldwide collaborative environment, with calibration based on the user's position and detected surroundings.

Acknowledgements

Team Members

Alex Sekula, Dan Murphy, Josh Chipman, and Joe Romano

Thanks

Thanks to Marley Rafson for working on and demoing the library, for loaning us a Google Pixel, and for the support!

Thanks to Hack@Brown for providing the motivation and support that made it all possible!

Contact

If you have questions, you can reach me on Twitter @joerromano.

AR!
