Building an Oddity on the Web for David Bowie featuring the Moon 🌓

In celebration of the 50th Anniversary of Space Oddity

Jul 11
This is Ground Control to Major Tom

Today marks the 50th anniversary of the release of David Bowie’s breakthrough single “Space Oddity.” Bowie released the track as a gimmick to take advantage of the excitement surrounding the Apollo 11 moon mission, which celebrates its own 50th anniversary next week. Bowie’s estate will be revealing a never-before-seen video of “Space Oddity” next week at a NASA event at The Kennedy Center.

As a fan of developing odd things for artists on the web, I was given the chance to build a marketing campaign to drive awareness of this moment. We decided on a simple app that would allow fans to unlock a new mix of “Space Oddity” if they went outside and took a photo of the moon. Use the app to find out if the moon is currently visible where you live. Then, look up, capture it, and listen. Read on to find out where this idea came from and how it was built.

Far Above the Moon

When I returned to freelancing in the Fall of 2017, my first project back involved building a constellation navigator for the Foo Fighters. This web app used a device’s gyroscope to control a user’s viewpoint within a half dome 3D scene of the night sky based on their location. At the time, I was living in Brooklyn and I couldn’t see a single star in the sky to actually test if the thing was working. That’s when I decided to add the moon and sun to the scene. I kinda made a mental note that if a client ever came asking for something moon related, I might be able to revisit that tech. Flash forward to the beginning of this summer and you can just about imagine my excitement when Bowie’s team approached me about doing something for “Space Oddity.”

I pitched a few things but the one that resonated most was an app which allowed fans to unlock something if they journeyed outside and took a photo of the moon. So, I put the proposal together, sent it over, and waited a moon cycle to hear back.

As soon as I received approval, I dusted off the old Foo Fighters project and quickly realized something was wrong. Apple had turned off Device Orientation & Motion by default in iOS Safari. They also didn’t provide the same permission API users have come to expect for Location and Camera access (though one is coming in iOS 13). This meant that a user would need to go into their device settings and flip the right switch in order for the app to function correctly. I didn’t like that and became a bit panicked.
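For reference, the permission prompt Apple eventually shipped in iOS 13 looks roughly like the sketch below. It wasn’t available when I built this, and handleOrientation here is just a placeholder handler, but it shows what the eventual API offers:

// Sketch: the iOS 13+ permission flow for device orientation
// Must be triggered by a user gesture; handleOrientation is a placeholder
if (typeof DeviceOrientationEvent !== 'undefined' &&
    typeof DeviceOrientationEvent.requestPermission === 'function') {
  DeviceOrientationEvent.requestPermission().then(state => {
    if (state === 'granted') {
      window.addEventListener('deviceorientation', handleOrientation)
    }
  })
}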

Luckily, I spent the majority of May exploring topics like TensorFlow and image recognition APIs. I even wrote a blog about the importance of bananas in accessible marketing concepts. 🍌 So the new technical solution involved leaning on an image recognition service to detect if the moon was visible in a photo. I put together a quick prototype and confirmed that this direction was viable. The client agreed and I got to work.

Forecasting the Moon

Simple transitions powered by Anime.js

Since our app won’t function if the moon isn’t actually visible, it became imperative that we were able to provide our users with forecasts for moonrise and moonset times. Side note: I had absolutely no idea when the moon was visible and was always sorta surprised to see it both at night and during the day. I’m sure you’ve also said, “Oh look, it’s the moon.” We just don’t give moonrises the same amount of love as a sunrise, but let me tell you, a well-timed moonrise can be an incredible viewing experience. I should know, I saw about 10 of these during testing.

Anyway, how do you forecast the moon? Easy. You lean on smart folks like Vladimir Agafonkin to write incredible libraries like suncalc, which provides the exact set of moon functions you need. Want to find out the moon’s current illumination and phase? Just do this:

SunCalc.getMoonIllumination(new Date())

How about the moon’s position in the sky based on your location?

SunCalc.getMoonPosition(new Date(), latitude, longitude)
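suncalc also has getMoonTimes for rise and set times. Here’s a rough sketch of how these calls might be paired with the browser’s Geolocation API — the wiring is an assumption on my part, not the production code:

// Sketch: a tiny moon forecast using the Geolocation API and suncalc
// getMoonTimes returns rise/set Dates; altitude > 0 means the moon is up
navigator.geolocation.getCurrentPosition(({ coords }) => {
  let { latitude, longitude } = coords
  let times = SunCalc.getMoonTimes(new Date(), latitude, longitude)
  let position = SunCalc.getMoonPosition(new Date(), latitude, longitude)
  console.log('Moonrise:', times.rise, 'Moonset:', times.set)
  console.log('Moon is up right now:', position.altitude > 0)
})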

Amazing stuff. Paired with the Geolocation API, you’ve got a lightning-fast moon forecast function with a tiny footprint, built for the web. We can also use this data to create a moon phase visual, so let’s discuss how that was done next.

Moon Phase

You can’t build an app about the moon without creating moon phase visuals. I feel like we all have our favorite phase. Full moon? 🌕 Banana Moon? 🌘 New Moon? 🌑 Vladimir’s suncalc library defines moon phase as a fraction between 0 and 1. Here’s a full breakdown.

Phase   Name
-----   ---------------
0       New Moon
        Waxing Crescent
0.25    First Quarter
        Waxing Gibbous
0.5     Full Moon
        Waning Gibbous
0.75    Last Quarter
        Waning Crescent
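If you want that name as a string in the app, a minimal mapping of the fraction onto the table above might look like the sketch below. The intermediate breakpoints are my own assumption; suncalc only defines the four principal values:

// Sketch: map suncalc's phase fraction (0–1) to a readable name
// Breakpoints between named phases are assumed, not defined by suncalc
function moonPhaseName (phase) {
  if (phase < 0.03 || phase > 0.97) return 'New Moon'
  if (phase < 0.22) return 'Waxing Crescent'
  if (phase < 0.28) return 'First Quarter'
  if (phase < 0.47) return 'Waxing Gibbous'
  if (phase < 0.53) return 'Full Moon'
  if (phase < 0.72) return 'Waning Gibbous'
  if (phase < 0.78) return 'Last Quarter'
  return 'Waning Crescent'
}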

By using this measurement, we can create a visual of the moon’s current phase. There are many ways to go about this but I chose the vector graphics library Paper.js because I figured the phase really just consisted of a few arcs.

Starting with a New Moon, you’ll notice that the lit limb starts filling in from the right until it completely covers the circle at Full Moon. The moon then begins to empty from the right until its limb is no longer visible. So, we’re really just dealing with two arcs which originate from the top center and terminate at the bottom center. Their x positions are determined by the current phase of the moon. In Paper.js, it might look something like this.

let size = 200
let path = new Path()
let leftX, rightX

// The left edge of the lit area sweeps from right to left while waxing
if (phase >= 0.5) {
  leftX = 0
} else {
  leftX = size - (phase / 0.5 * size)
}

// The right edge sweeps from right to left while waning
if (phase <= 0.5) {
  rightX = size
} else {
  rightX = size - ((phase - 0.5) / 0.5 * size)
}

// Two arcs from top center to bottom center and back again
path.add([size / 2, 0])
path.arcTo([leftX, size / 2], [size / 2, size])
path.arcTo([rightX, size / 2], [size / 2, 0])

Pretty cool. And suncalc also provides the parallactic angle of the moon, which you can use to rotate your visual to match the moon’s appearance from the current user’s location. Simply set a pivot point in the center and rotate.

path.pivot    = [size / 2, size / 2]
path.rotation = angle
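One small detail worth flagging: getMoonPosition returns parallacticAngle in radians, while Paper.js rotations are in degrees, so the angle above presumably goes through a conversion along these lines:

// Sketch: convert suncalc's radians to the degrees Paper.js expects
let { parallacticAngle } = SunCalc.getMoonPosition(new Date(), latitude, longitude)
let angle = parallacticAngle * 180 / Math.PI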

For no reason at all, I also chose to use this visual to create a dynamic favicon on the desktop site. To do this, you simply need to replace the favicon href with a toDataURL representation of the Paper.js canvas.

let link = document.querySelector("link[rel*='icon']")
link.href = canvas.toDataURL("image/png")

Detecting The Moon

As I mentioned earlier, I replaced the Device Orientation solution with image recognition. These services work easily enough: you send one a photo and it sends you back an array of labels which it believes are in the photo. For example, one of my earlier moon photos returned:

["Sky", "Cloud", "Atmospheric phenomenon", "Daytime", "Atmosphere", "Brown", "Cumulus", "Morning", "Celestial event", "Calm", "Evening", "Horizon", "Meteorological phenomenon", "Dusk", "Sunlight", "Dawn", "Space", "Moon", "Astronomical object", "Night"]

There are many of these image recognition services available. Some of the leaders include the Google Vision API, Amazon Rekognition, Clarifai, IBM Visual Recognition, and Azure Computer Vision. I cared less about price and more about results (and ease of use), so I wrote a simple testing suite which was able to get results from each of these APIs in bulk. Then I pulled down 1000 photos from the #moon tag on Instagram. (Taking a photo of the moon from a mobile phone is tough, so I wanted photos which reflected the reality of what the app would be dealing with: bad user photos.)
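The harness itself was nothing fancy — conceptually something like the sketch below, where services, photos, and sendToService are illustrative stand-ins rather than real client code:

// Sketch: run every test photo through every service and tally "Moon" hits
// `services`, `photos`, and `sendToService` are hypothetical stand-ins
async function compareServices (services, photos) {
  for (let service of services) {
    let hits = 0
    for (let photo of photos) {
      let labels = await sendToService(service, photo)
      if (labels.includes('Moon')) hits++
    }
    console.log(service, `${hits} / ${photos.length} photos labeled "Moon"`)
  }
}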

After a bit of analysis, it was clear that the Google Vision API returned a bunch of great results (including “Moon”) and would be easy to work with. However, it didn’t always return “Moon.” There were various problems to contend with. Maybe the moon was obscured by clouds. Maybe you were taking a photo of a day moon on a bright day. One evening, we had an incredible sunset here and the app thought the moon was the sun. And this was only in my personal testing. I discussed this with the client and we decided to err on the side of leniency, accepting other matches such as “Atmosphere” and “Space.” Since we already restricted access to the app to moments when the moon was visible, a picture of “Space” should be acceptable.
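In code, that leniency boils down to checking the returned labels against a small accepted list — something like this sketch, where anything in ACCEPTED_LABELS beyond “Moon,” “Atmosphere,” and “Space” would be an assumption:

// Sketch: unlock if any returned label is on the accepted list
// Only "Moon", "Atmosphere", and "Space" are confirmed above
const ACCEPTED_LABELS = ['Moon', 'Atmosphere', 'Space']

function photoUnlocks (labels) {
  return labels.some(label => ACCEPTED_LABELS.includes(label))
}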

I’ve already written a few blogs about using WebRTC to create a camera within a web app. I would suggest checking out the case studies for “CAMERA” and Girls Like You if you’re looking for a breakdown of that technology. The trick is drawing the current frame of a <video> tag onto an HTML5 canvas.
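Here’s roughly what that drawing step looks like — a sketch which assumes a video element that’s already playing a getUserMedia stream:

// Sketch: copy the current video frame onto a canvas for capture
// `video` is assumed to be a <video> element fed by getUserMedia
let canvas = document.createElement('canvas')
canvas.width = video.videoWidth
canvas.height = video.videoHeight
canvas.getContext('2d').drawImage(video, 0, 0, canvas.width, canvas.height)

From here we can use the toDataURL function to grab the Base64 representation of the photo which Google requires.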

let data = canvas.toDataURL('image/jpeg', 1.0)
data = data.replace('data:image/jpeg;base64,', '')

You can then send that data to the annotate function of the Google Vision API using your favorite language. Google provides a bunch of interesting detection systems such as Text detection or Face detection. In the case of our project, we used the Label detection service.
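As a sketch of what that request can look like from JavaScript via the REST endpoint — the key handling and maxResults value are assumptions, and a real app would likely proxy this through its own server:

// Sketch: ask the Vision API for labels on the Base64 photo from above
// GOOGLE_API_KEY is a placeholder; run this inside an async function
let response = await fetch(
  `https://vision.googleapis.com/v1/images:annotate?key=${GOOGLE_API_KEY}`,
  {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      requests: [{
        image: { content: data },
        features: [{ type: 'LABEL_DETECTION', maxResults: 20 }]
      }]
    })
  }
)
let json = await response.json()
let labels = json.responses[0].labelAnnotations.map(a => a.description)

Those labels are exactly what feeds the acceptance check above.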

Thanks


Thank you to Maddie Brady for initially asking for my help on this campaign and for being a champion for the concept. Thanks to everyone on the digital teams who helped test the app, provide translations, and be spirited members of my tiny operation. Thanks to the estate for understanding that a gimmicky concept such as this fits right into the mischievousness which Bowie was trying to achieve in 1969. And, of course, thank you to David Bowie for continuing to be an inspiration and wonder to all of us.

Can you hear me, Major Tom?

Lee Martin

Written by

I develop websites for rock 'n' roll bands and get paid in sex and drugs. Previously Silva Artist Management, SoundCloud, and Songkick. Currently: Available
