Learning how to design in 3D is hard, and learning how to do that in a medium that hasn’t existed very long is even harder. That’s basically the experience I’m having with augmented reality, and I’ve been trying to think of how I can get myself to dedicate more time to solving design problems in AR — but I think I stumbled upon a pretty good plan!
You may have heard about October-themed daily challenges, where you do a thing every day in October, but if you haven’t, here are a couple of examples:
- Inktober (people draw with ink every day in October, sometimes to an “official” daily prompt from Jake Parker)
- Hacktoberfest (programmers work on open source software to try and get at least 4 contributions merged into projects)
And as I was going through those I thought to myself “hmmm, I should get back to ink work … or maybe do more open source? WAIT I’ve been wanting to do this personal thing (web augmented reality) consistently for a while — let’s do that!” So I’m doing that! The goal is that every day in October I’m going to make an augmented reality experiment and link it at the bottom of this post with examples of the demos. You can also find all the demos via this link.
Some notes on the process
- I’m not using prompts for each day because I have a back catalog of ideas I’ve been writing down for like 2 months about potential interesting designs and interactions in AR — so I’ll be trying out versions of those!
- Some days I’m going to be super busy, and on those days I’ll probably just model something or throw a vaguely interesting animation piece together and call it a day. ¯\_(ツ)_/¯
- When I have time though, I’m going to try and consider design & interaction challenges. I have quite a few in my notes to try out, but leave a comment or message me on Twitter or something if you have ideas about interactions/AR things you’d like to see!
- Tools! For those of you that care, I’ll be using A-Frame, AR.js, and hosting all of these on Glitch.com so that you can easily inspect or remix the code! WebAR is v new and I’m trying to make it easy to share what I learn.
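If you want to follow along from zero, a minimal marker-based scene is only a few lines of markup. This is a sketch, not the exact code from my demos, and the script URLs/versions are illustrative — check the current A-Frame and AR.js docs before copying:

```html
<!-- Minimal A-Frame + AR.js scene: a box that appears on the Hiro marker. -->
<html>
  <body>
    <script src="https://aframe.io/releases/0.7.0/aframe.min.js"></script>
    <script src="https://jeromeetienne.github.io/AR.js/aframe/build/aframe-ar.js"></script>
    <a-scene embedded arjs>
      <a-marker preset="hiro">
        <a-box color="tomato" position="0 0.5 0"></a-box>
      </a-marker>
      <a-entity camera></a-entity>
    </a-scene>
  </body>
</html>
```

Point a phone camera at a printed (or on-screen) Hiro marker and the box appears anchored to it.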
Day 1: Here we have a torus-knot looping between furled & unfurled. This is actually an animation I made for a series of 3D loading icons, but I think it makes a fun first post!
Day 2: October 2nd’s AR piece is a remix of a simple noise shader I originally found in the A-Frame docs. I threw in some color and made it so that a tap or click changes a variable in the shader. AR’s fun on its own, but I think the really interesting bits emerge when we start to use interaction — so a click or tap like in today’s experiment is the basic building block to making a full-fledged AR app ~experience~ on the web!
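The click wiring can be as small as a closure that swaps a uniform value. This is a sketch with made-up names (`uSpikiness` and the value list are my assumptions, not the demo’s actual code):

```javascript
// Cycle a shader uniform through a list of values on each click.
// `uSpikiness` and the values are hypothetical; any uniform works the same way.
function makeClickToggler(uniforms, key, values) {
  let i = 0;
  return function onClick() {
    i = (i + 1) % values.length;
    uniforms[key].value = values[i];
  };
}

// Inside an A-Frame component you'd grab the mesh material and listen globally:
// const mat = this.el.getObject3D('mesh').material;
// window.addEventListener('click', makeClickToggler(mat.uniforms, 'uSpikiness', [0.2, 1.0]));
```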
Day 3: The third piece utilizes the experimental Web Speech API to speak a word randomly generated using RiTa JS. The pitch & voice are determined by the syllable count of the generated word. If speech isn’t available in the user’s browser, the speech checkbox is grayed out and a little “(Unavailable)” message appears.
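The feature detection plus syllable-to-pitch idea looks roughly like this sketch (the linear mapping constants are my guesses, not the demo’s formula; the Web Speech API clamps pitch to the 0–2 range):

```javascript
// Map a word's syllable count to an utterance pitch in the API's 0–2 range.
// The 0.5 + 0.25·n mapping is an assumption, not the original demo's formula.
function pitchForSyllables(n) {
  return Math.min(2, Math.max(0, 0.5 + n * 0.25));
}

// Speak the word if the browser supports it; return false so the UI can
// gray out the checkbox and show the "(Unavailable)" message otherwise.
function speakWord(word, syllables) {
  if (typeof window === 'undefined' || !('speechSynthesis' in window)) return false;
  const utter = new SpeechSynthesisUtterance(word);
  utter.pitch = pitchForSyllables(syllables);
  window.speechSynthesis.speak(utter);
  return true;
}
```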
Day 4: This fourth experiment is all about planets! 🌏🌍🌎 I was inspired by this wonderful collection of space globe textures, which pointed me to NASA’s database of solar system maps, which I ended up using for all the textures in the demo. Additionally, this is my first proper UI experimentation in AR — it has a slider at the bottom of the screen (out of the way, convenient for mobile users) that shifts the position of the planets, effectively acting as an AR scroll mechanism!
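The slider-as-scroll mapping is essentially one line. In this sketch (all the names and the row width are mine), a 0–1 slider value becomes an x offset for the whole planet group:

```javascript
// Convert a 0–1 slider value into an x offset for the planet group,
// so dragging the slider "scrolls" the row of planets past the marker.
function planetOffset(sliderValue, rowWidth) {
  return -sliderValue * rowWidth; // slide right, planets move left
}

// Wiring sketch: on input, move the A-Frame group entity.
// slider.addEventListener('input', (e) => {
//   const x = planetOffset(parseFloat(e.target.value), 8);
//   planetsEl.setAttribute('position', { x: x, y: 0, z: 0 });
// });
```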
Day 6: This interaction experiment takes day two’s spiky shader sphere & adds some interaction — namely, if you drag your mouse on the screen or drag a finger across a touch screen, we read in the delta of the x position and scale the sphere up or down (with some clamping, the upper bound being 6). This sort of interaction makes phone-based AR more interesting, I think, and I hope to incorporate it more in my future work!
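The drag-to-scale logic reduces to accumulating the pointer’s x delta and clamping. A sketch, where the sensitivity and the lower bound are my stand-ins:

```javascript
// Scale by the horizontal drag delta, clamped so the sphere can neither
// vanish nor blow up. The 0.5 lower bound and sensitivity are illustrative.
function scaleFromDrag(currentScale, deltaX, sensitivity = 0.01, min = 0.5, max = 6) {
  const next = currentScale + deltaX * sensitivity;
  return Math.min(max, Math.max(min, next));
}

// Pointer events give you the delta; on each move:
// scale = scaleFromDrag(scale, e.movementX);
// sphereEl.object3D.scale.setScalar(scale);
```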
Day 7: Today’s outline experiment was inspired by Professor Lee Stemkoski’s Three.js outline demo, in which he basically does the same thing I do here (sans animation): clone an object, give it a slightly larger scale (e.g. 1.01), and set its rendering faces to `THREE.BackSide`. You can see the source for this little outline component by remixing this glitch or taking a look at this gist — it’s under 20 lines!
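The trick itself fits in a few lines. Here’s a sketch (the function name and 1.05 factor are mine, and `THREE` is passed in explicitly; the real component in the gist may differ):

```javascript
// Back-face outline: render a slightly scaled copy of the mesh with only its
// back faces visible, so it peeks out around the original as a silhouette.
function makeOutline(mesh, THREE, factor = 1.05) {
  const outlineMaterial = new THREE.MeshBasicMaterial({
    color: 0x000000,
    side: THREE.BackSide, // draw back faces only
  });
  const outline = new THREE.Mesh(mesh.geometry, outlineMaterial); // share geometry
  outline.scale.copy(mesh.scale).multiplyScalar(factor);
  return outline; // add this to the same parent as `mesh`
}
```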
Day 8: This combo experiment combines the torus animation from day 1 and the shader from day 2’s demo.
Day 9: This puppy-filled GIF experiment combines a couple of interesting things I’ve been toying around with for a while:
- Multiple discrete markers. While it’s difficult to tell from the video below, there are two markers above that are traditional standards from ARToolKit: the Hiro marker (which I’ve been using every day so far) and the Kanji marker. I simply made two `a-anchor` elements with those presets and added the same template to each anchor. In the future I hope to make my own markers and try out even more complex multi-marker scenes.
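The two-marker markup is just two preset anchors with the same contents. A sketch (primitive names vary across AR.js versions — some builds expose these as `<a-marker>` instead):

```html
<!-- Two discrete markers, each rendering the same template. -->
<a-scene embedded arjs>
  <a-anchor preset="hiro">
    <a-box color="tomato"></a-box>
  </a-anchor>
  <a-anchor preset="kanji">
    <a-box color="tomato"></a-box>
  </a-anchor>
  <a-camera-static></a-camera-static>
</a-scene>
```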
- Marker resizing. The video below demonstrates a concept I’ve had for a while: displaying markers digitally can be frustrating (usually because of screen glare) but it does mean you can resize the marker at will (as opposed to the more traditional printed ones). This is applicable to all of my AR.js demos, I just haven’t been demonstrating it — maybe in the future I can actually calculate the size of the marker on screen and use its relative size to affect a shader or model property in the generated scene!
Day 10: I was so inspired by day 9’s GIF experiment that day 10’s demo is another GIF experiment — although this time it’s playing around with the marker animating. I tried for a bit to get this to line up to the marker on screen, but trying to align the scene isn’t the solution — I think the proper solution is to drive Three.js/A-Frame using the screen-space coordinates returned from AR.js, which I’ve noted down to try out later!
Day 11: This is a modern recreation of one of the earliest forms of animation: a zoetrope! I wrote a little component that cycles through the circularly laid planes in this scene frame by frame very quickly, similar to the effect of a zoetrope’s drum spinning very fast. I also prototyped this one in VR, which is a pretty good workflow (easier to test, fast to iterate on when dealing with complicated animations). The animation is Horse In Motion.
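The frame-flipping component boils down to hiding the current plane and showing the next on a fast timer. A sketch with assumed names and an assumed tick rate (80 ms is my guess, not the component’s actual value):

```javascript
// Advance to the next frame index, wrapping around like a zoetrope drum.
function nextFrame(frame, total) {
  return (frame + 1) % total;
}

// Show exactly one of the circularly laid planes at a time, flipping
// every `tickMs` milliseconds.
function startZoetrope(planes, tickMs = 80) {
  let frame = 0;
  planes.forEach((p, i) => p.setAttribute('visible', i === 0));
  return setInterval(() => {
    planes[frame].setAttribute('visible', false);
    frame = nextFrame(frame, planes.length);
    planes[frame].setAttribute('visible', true);
  }, tickMs);
}
```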
Day 12: An animated pirouette made of spheres! This one uses motion capture data from the CMU Graphics Lab Motion Capture Database; the smaller skeleton is drawn by THREE.js’s SkeletonHelper, while the blue spheres are my own simple recreation.
Day 13: Playing around with Kinect point cloud data! I revisited an A-Frame point cloud experiment I did earlier this year with my friend Sage Jenson for this experiment — as you might be able to tell from the video below, I was trying to push the limits in terms of geometry and scene-size for marker-based AR with this one.
Day 15: This monochromatic tubular experiment is just me playing around with variables (color from rainbow to a blue lightness gradient, shape from boxes to cylinders) from day 14’s 10PRINT experiment. I have some more ideas for simple generative experiments that I think would be interesting to translate into AR, and I plan on working on those this week!
Day 16: This generative piece alters the color and radius of tori as they tile above the marker, but I took the slider from my day 4 planets demo and made it a bit interactive by allowing you to change the spawn rate of the tori.
Day 17: This weird imaginary organ scene simply cycles through some of Presstube’s imaginary organs. Simple interaction: get a different organ on-click! Will probably revisit this one to solve some loading issues b/c the organ files are pretty large.
Day 19: Some arcs spinning around! This one is more of a fun little animation experiment: just a few PNGs of arcs I made in Illustrator, layered over each other with some delayed animation.
Day 21: A few layers of gradients! I made these gradients using only the selection and gradient tools in Photoshop, and noticed they play nicely with each other, so I threw them into AR and animated their opacity!
Day 22: A prototype for a comic idea! I’m interested in the idea of comics that not only change visually in augmented reality, but narratively as well — a sort of x-ray superpower on the fabric of a narrative. This is my first prototype towards that idea: a skull GIF whose marker superimposes the face that used to be on the now-haunted skull.
Day 23: Some wacky AR furniture! This one isn’t too heavy on interaction (tap or click to get back a random piece from a pre-populated list of IDs), but I wanted an excuse to mess around with Archilogic’s furniture.3d.io API, a collection of 3D scanned furniture! This is partially exciting because it points to the web (once it has access to SLAM) being able to facilitate use cases like IKEA’s home furnishing app.
Day 24: A “holographic” artwork/meme/collage. This is another collaboration with Trauma Doll, this time using randomly generated animation, more layers, and slight opacity to get the layers to appear through each other, creating a more interesting effect that interacts with the surrounding environment.
Day 25: More GIFs, this time from the ’90s! I took a few GIFs from searching the Internet Archive’s Gif Cities project to make an AR experience that I think qualifies as an interesting toy — it adds more GIFs each time you click/tap & they get closer to you the more there are!
Day 26: A paper airplane experiment! This is a rather involved animation: my hand-drawn illustration (the paper airplane, which is a geometric plane with the illustration showing on both sides) moving along a path as if it’s floating on a funny air current. There’s also a dashed line in the mix, meant to be reminiscent of a doodle of a paper airplane — that was made using equally spaced A-Frame `a-box` entities along the path. V happy with how this one came out!
Day 27: A Stranger Things inspired scene! While appropriate for today (October 27, 2017 — the premiere of season 2 of the show), this scene also shows off the use of red lighting to achieve a similar effect as in the title screen, as well as dramatic lights by the rotating waffles — so I actually got to play around with some new techniques!
Day 29: A birthday-themed experiment! This one generates a balloon with a different color & then sends it floating along a path, a relatively complicated animation that was simple to achieve with a few components like aframe-alongpath & Kevin’s animation component!
Day 30: A generative line sculpture! Lines are generated along Catmull–Rom splines defined by a set of points, starting with 3 points. Every couple of seconds a point gets added, while every 4 seconds a point gets deleted. Red & blue lights along with rotation complete the AR kinetic sculpture.
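The point-list bookkeeping behind this is small. A sketch (the minimum of 3 points comes from the description above; the function names are mine, and the spline rebuild with `THREE.CatmullRomCurve3` is left as a comment):

```javascript
// Grow the control-point list (a new random point every couple of seconds).
function addPoint(points, makePoint) {
  return points.concat([makePoint()]);
}

// Shrink it (every 4 seconds), but never below the starting 3 points.
function removeOldest(points, min = 3) {
  return points.length > min ? points.slice(1) : points;
}

// After each change you'd rebuild the curve and its line geometry:
// const curve = new THREE.CatmullRomCurve3(points.map(p => new THREE.Vector3(...p)));
// line.geometry.setFromPoints(curve.getPoints(200));
```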
Day 31: The final ARctober experiment: a 🎃 jack-o’-lantern with a floating candle inside! This was the last experiment of #ARctober 2017. I hope you’ll join me next year if you’re interested, or send along ideas & feedback!