This is the story of an adventure into the maths of music, the wonder of weather and the ups and downs of creating an original web application.
There are countless brilliant art projects out there that take the prosaic and transform it into something beguiling. The ones I’ve found a particular fondness for are those that harness something natural. A great example of this is the Sea Organ in Zadar, Croatia. Here, the Adriatic sea pushes air through a chamber of pipes to produce music. This extraordinary piece of coastal architecture provides an audience seated above with a continuous, morphing soundtrack — an analogue of the moving water.
This got me thinking about how nature might inform my next musical creation. First, I explored how my field recordings (such as those from my Frond project) might be used to seed a novel approach to composition. I considered whether naturally occurring source material could be used within a generative music system.
I also contemplated whether I could leverage one of the many data sources that are now available online, which range from crime data to flight paths. Data visualisation is interesting to me as a web developer. It is something The Guardian has a good reputation for and creatives such as Moritz Stefaner have made it an art form. For him, there is “truth and beauty” in data, just waiting to be brought to people’s attention by his skill and vision. My goal was a little different however: I wanted to create an experience rather than provide insight into a data set.
Another notable project that influenced me was James Bridle’s A Ship Adrift. This plotted the virtual journey (below) of a stationary boat on London’s Southbank Centre, using weather data (wind direction and speed).
Meteorology — an interest of mine since geography class — had proven its hold over me yet again. As a data source ripe for creative extraction, it also held the perfect balance between universality, consistency and variety: I was set to build a program that would generate music from the weather.
The starting point
Having chosen the source material and process, it was time to consider what aspects I might draw on and how I would deploy them. Since weather data is tied to a particular place, it made sense to think of each location’s data as a weathervane and therefore the project as a kind of Musical Weathervane. I began with the simple premise that by mapping wind speed (in miles per hour) to the volume of a sound and bearing (in degrees) to the pitch, I could generate a single note. The next logical step was then to make a chord, which was done by taking readings from multiple locations and playing the resulting notes simultaneously.
As the mapping simply converts one number to another (e.g. 180° = 440Hz), the chords that played were nearly always quite dissonant. Though this made for some interesting listening, it didn’t evoke the mood of the weather. Nonetheless, it was a start: I had a service (forecast.io, now darksky.net) that provided current weather data for anywhere in the world, and I had a simple program that could convert that data into music. The domains of weather and music had been successfully connected.
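That first naive mapping can be sketched as a pair of linear conversions. The function names and ranges below are illustrative; only the 180° = 440Hz example comes from the description above.

```javascript
// Map a value from one range onto another, clamping at the edges.
function scale(value, inMin, inMax, outMin, outMax) {
  const clamped = Math.min(Math.max(value, inMin), inMax);
  return outMin + ((clamped - inMin) / (inMax - inMin)) * (outMax - outMin);
}

// 0–100 mph becomes a gain of 0–1 (the upper bound is a guess).
function windSpeedToVolume(mph) {
  return scale(mph, 0, 100, 0, 1);
}

// 0–360° becomes 110–770 Hz, so a bearing of 180° lands on 440 Hz (A4).
function bearingToPitch(degrees) {
  return scale(degrees, 0, 360, 110, 770);
}

// A "chord" is simply the notes from several locations played together.
function chordFromReadings(readings) {
  return readings.map(({ windSpeed, windBearing }) => ({
    volume: windSpeedToVolume(windSpeed),
    pitch: bearingToPitch(windBearing),
  }));
}
```

Because the conversion is purely linear, nothing constrains the resulting pitches to a musical scale — which is exactly why the chords came out dissonant.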
As mentioned above, I needed to make multiple requests at once in order to play a chord from multiple locations. Requests like this often require the developer to tell the application to wait until all of them have succeeded before going ahead with the next step. I was using an open source library, but it could only handle one request at a time, so I forked it and made some enhancements. I contacted the author with my suggestions and he found them helpful. He made me a core contributor and my updates were incorporated into the library, though I’ve since published my own version on NPM. This was one of many detours I had to take on my journey.
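The original library isn’t named here, so the “wait for all requests” step is sketched below with a plain `Promise.all`. The endpoint and the mocked response are stand-ins, not the app’s real code.

```javascript
// Stand-in for the real weather request (forecast.io at the time); mocked
// here so the sketch is self-contained and runnable.
function fetchWeather({ lat, lon }) {
  // A real version might be:
  // fetch(`https://api.example.com/forecast/${lat},${lon}`).then(r => r.json())
  return Promise.resolve({ lat, lon, windSpeed: 12, windBearing: 180 });
}

// Promise.all resolves only once every request has succeeded (and rejects
// as soon as any one fails), so the chord is never built from partial data.
function fetchChordData(locations) {
  return Promise.all(locations.map(fetchWeather));
}

fetchChordData([{ lat: 44.1, lon: 15.2 }, { lat: 51.5, lon: -0.1 }])
  .then((readings) => console.log(`Got ${readings.length} readings`));
```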
The birth of the Conditional Orchestra
Before embarking on any more technical trickery, I decided to design an interface that would be simple for the user, but deliver the sensory variety necessary to reflect the weather. I felt it was important to offer a clear and enjoyable way for the user to interact, rather than it be a purely passive experience (as was the case with the Musical Weathervane).
After toying with a few ideas, I decided that the primary call to action should be “Play my weather”, which would look up the user’s location, fetch the weather data and generate the appropriate music. “Play my weather” also highlighted that the experience would be personal to the user.
Though I was happy with the idea of the central proposition, I knew that it would provide a richer and more engaging experience if users could listen to any place they wanted. As I was already using Google Places to reverse geocode the user’s location, it wasn’t a huge amount of effort to create a form that looked up a location’s coordinates from a given place name. So, “Choose a location” became the secondary call to action.
Until this point I didn’t have a way of actually generating a set of notes that used the Western musical scale. Many code libraries solve this problem, but I decided to write my own, for two reasons: the available libraries didn’t provide all the flexibility I needed, and I was keen to gain a deeper understanding of music theory in order to make better creative decisions. It was time to learn the formula behind the frequencies.
My first major discovery was that most modern music is composed using a system that doesn’t actually mirror how sound works in nature. In nature, sound harmonics are described by a system called Just Intonation, which is built on whole-number ratios. Equal Temperament, on the other hand, uses a simple formula that divides the octave into equal steps and produces scales containing very little dissonance. Though I was keen to explore Just Intonation, I chose Equal Temperament because its notes would be more familiar to listeners and because the formula would allow me to program more efficiently. So, I created a set of pure functions that allow me to input intervals, such as 0, 2, 4, 5, 7 for a major scale, and get frequencies in return.
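The Equal Temperament formula is compact: each semitone multiplies the frequency by 2^(1/12), so an interval of n semitones above a root is root × 2^(n/12). A minimal sketch of such pure functions (not the actual source of my module):

```javascript
// Equal Temperament: each semitone multiplies frequency by 2^(1/12),
// so an interval of n semitones above the root is root * 2^(n / 12).
function etFrequency(rootHz, interval) {
  return rootHz * Math.pow(2, interval / 12);
}

// Turn a scale, expressed as semitone intervals, into frequencies in Hz.
function scaleToFrequencies(rootHz, intervals) {
  return intervals.map((interval) => etFrequency(rootHz, interval));
}

// The major-scale intervals from the text, rooted on A4 (440 Hz).
const majorScale = scaleToFrequencies(440, [0, 2, 4, 5, 7]);
```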
It was crucial that my function could return relative frequencies rather than absolute frequencies in Hz. The reason is that I used audio files rather than Web Audio oscillators to generate the individual sounds, and audio files don’t use Hz; their playback rate is expressed as a number where ‘1’ represents normal speed. I wanted to be able to state the number of notes in a scale, the scale itself as intervals, how much to offset the root note by, and to produce rootless voicings. Consequently, I wrote and published Freqi, an open source module that generates frequencies from numeric intervals.
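In relative mode the same formula works without any reference pitch: the ratio 2^(n/12) is itself a valid playback rate. A sketch of the idea (Freqi’s actual API may differ; the root-offset parameter here is illustrative):

```javascript
// Relative frequency for sample playback: 1 is normal speed, 2 is an octave
// up, 0.5 an octave down. rootOffset shifts the whole scale by semitones.
function relativeRate(interval, rootOffset = 0) {
  return Math.pow(2, (interval + rootOffset) / 12);
}

// The same major-scale intervals as playback rates, shifted down 3 semitones.
const rates = [0, 2, 4, 5, 7].map((interval) => relativeRate(interval, -3));
```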
The balancing act
Once the basic application was doing what I wanted, I had to tackle the three main user experience challenges:
- Generate music that represented the weather in a given location
- Deliver sonic variety despite similar conditions
- Handle extreme or unusual weather
The first job was to get accurate minimum and maximum values for all the conditions I was using. Finding record high and low temperature and pressure readings is easy enough, but more arcane data points such as dew point and precipitation intensity were more troublesome. The application uses 16 different data points, but they weren’t enough for me to build the music engine I wanted. I needed more options in order to create the variety that I felt was critical to its success, so I decided to create a set of custom conditions, which use combinations of the available data. An example is fog, which doesn’t exist in the data, but which I can infer using a combination of visibility, temperature and dew point.
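A custom condition like fog boils down to a predicate over the raw data points. The thresholds below are illustrative guesses, not the application’s actual values:

```javascript
// Infer "fog" from data points that do exist in the feed. The thresholds
// are illustrative guesses, not the application's actual values.
function isFoggy({ visibility, temperature, dewPoint }) {
  const lowVisibility = visibility < 1; // e.g. visibility below ~1 km
  const nearSaturation = Math.abs(temperature - dewPoint) < 2.5; // air near its dew point
  return lowVisibility && nearSaturation;
}
```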
When testing the application, it became clear to me that on land, and particularly in the northern hemisphere, the weather data for a wide range of places is often very similar, which meant that, from a user’s perspective, the music would sound the same. Wind and precipitation are important factors in my application, but they occur far more frequently out at sea. The image below shows wind concentration (the orange and red areas) between Greenland, the eastern Canadian islands and Iceland. As you can see, wind speeds on land are relatively similar.
To avoid the music sounding too similar from place to place, I did two things:
- Made some parts of the music engine more sensitive
- Made the playback of some sounds unsynchronised
The latter adjustment meant that even similar conditions (and therefore similar arrangements of sounds) would produce music that could never repeat exactly.
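One way to see why unsynchronised loops resist repetition: two loops only realign after the least common multiple of their lengths, and any random jitter removes even that. The scheduler below is an illustrative sketch, not the app’s actual code:

```javascript
// Trigger times for a loop of a given length, with optional random jitter
// (jitter is the width, in seconds, of the random spread around each trigger).
function triggerTimes(loopLength, count, jitter = 0, rand = Math.random) {
  const times = [];
  let t = 0;
  for (let i = 0; i < count; i += 1) {
    times.push(t);
    t += loopLength + (rand() - 0.5) * jitter;
  }
  return times;
}

// Without jitter, loops of 3.2 s and 4.7 s only realign every 150.4 seconds;
// with jitter, they effectively never do.
const loopA = triggerTimes(3.2, 4);
const loopB = triggerTimes(4.7, 4, 0.5);
```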
To handle extreme weather I simply had to test, and test a lot, because the number of simultaneous conditions and the range of values are likely to create sonic or melodic clashes. I tested using my own custom values, for faster development, but also actual weather data, which gave me more realistic value combinations. To find the extreme weather I used windy.com, the source of the wind map above — a service I came to rely on heavily.
Enlightenment through user feedback
Once the application was suitably stable, I showed it to a small number of friends and asked them about their experience. The main takeaway was, of course, something I hadn’t thought of at all: how do you see what the app is actually doing? For some people it was about curiosity; for others, transparency. It was a fair question, but it meant refactoring much of what I had written: I had to decouple all the code that was getting and setting values from the music so that I could share those values with some kind of user display.
This was and still is a first for me. Revealing what an application is doing to users is an unusual thing to do. Normally, users just want content or to complete a task, but in this case the inner workings are the content.
After months of stolen hours during evenings and weekends, I had something that I felt was worthy of a bigger round of feedback. I emailed an array of friends and peers with a live link. The feedback ranged from the subjective to the practical, but all of it was invaluable. This was the point I realised that I hadn’t come to the end of my journey, rather I had to keep going until it was really something worthy of public attention. As one friend pointed out, it had become a labour of love.
The road to beta
I collated the user feedback by entering it all into a spreadsheet, normalising it and scoring each suggestion with two values: effort and value. Calculating my own effort in terms of complexity and time was relatively painless; I do it all the time in my job. Settling on the value to users required a little more thought. When this exercise was complete, I started work on the features which I felt were going to add the most value for the least effort: for example, adding imagery using the Google Maps satellite view, ensuring users can’t enter invalid characters in the search field, and using the spinning sun icon as a “loading” graphic rather than a “ready” one.
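That prioritisation pass can be expressed as a sort by value-to-effort ratio. The items and scores below are made up for illustration:

```javascript
// Rank feedback items by value-for-effort, highest ratio first.
// The items and scores below are made up for illustration.
function prioritise(items) {
  return [...items].sort((a, b) => b.value / b.effort - a.value / a.effort);
}

const ranked = prioritise([
  { name: 'shareable URLs', value: 8, effort: 4 },
  { name: 'satellite imagery', value: 6, effort: 2 },
  { name: 'input validation', value: 4, effort: 1 },
]);
```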
Having worked through some of the remaining higher value features, there was one of my own I was keen to add: shareable search URLs. Allowing users to share their experience would yield greater traction for the project and provide me with a means of promotion via social media. Having implemented this, I can now post a link that will load a specific location where the weather and the music are particularly interesting.
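A shareable search URL amounts to encoding the location in the query string and reading it back on page load. The parameter names below are illustrative, not necessarily the ones the app uses:

```javascript
// Build and read a shareable search URL. The parameter names are
// illustrative, not necessarily the ones the app uses.
function buildShareUrl(base, { lat, lon, name }) {
  const params = new URLSearchParams({ lat, lon, name });
  return `${base}?${params.toString()}`;
}

function parseShareUrl(url) {
  const params = new URL(url).searchParams;
  return {
    lat: Number(params.get('lat')),
    lon: Number(params.get('lon')),
    name: params.get('name'),
  };
}
```

On load, the app can check for these parameters and fetch that location’s weather instead of geolocating the user.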
The persistent issues are twofold:
- Glitches while the app plays the audio
- Some sounds stopping when the user activates another tab
The glitches on some machines come down largely to processor power, because the application does a lot of real-time processing. Though I have tried to squeeze better performance out of the app, I see only a couple of options to address this:
- Simplify the music, which I’m not prepared to do, as compromising on quality would take the project in the wrong direction.
- Render the audio on the server and stream it to the user, but this would mean a significant re-architecture and would consume some users’ mobile data quota.
The failure of some sounds to play when the browser tab is inactive has been the single biggest complaint and a real technical headache. The app plays its audio with no regular metre: files loop at differing time intervals via two different methods so as to deliver the uniqueness I believe is fundamental to the user’s enjoyment. Browser vendors have implemented features that stop certain tasks running when a tab is inactive, and despite spending a couple of days writing workarounds I’ve not found a reliable solution. Rather than have only some sounds play when the tab is inactive, I have ensured that no sounds play at all.
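The mute-everything behaviour can be driven by the browser’s Page Visibility API. Here the decision is kept as a pure function so the sketch stays self-contained; `setMasterGain` in the comment is a hypothetical function, not part of the app or any library:

```javascript
// Decide whether all sounds should be muted for a given tab visibility state.
// Muting everything avoids the half-playing state caused by background-tab
// throttling.
function shouldMuteAll(visibilityState) {
  return visibilityState === 'hidden';
}

// In the browser this would be wired up roughly as follows (setMasterGain is
// a hypothetical function that sets the app's overall volume):
// document.addEventListener('visibilitychange', () => {
//   setMasterGain(shouldMuteAll(document.visibilityState) ? 0 : 1);
// });
```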
A newer feature I feel has added extra value is theming: the colours and textures now respond to the weather conditions and, in some cases, to day and night.
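Condition-driven theming reduces to a lookup from weather state to visual properties. The condition names and colours below are illustrative, not the app’s actual palette:

```javascript
// Pick a visual theme from the current condition and time of day.
// Condition names and colours here are illustrative, not the app's palette.
function pickTheme(condition, isNight) {
  const themes = {
    clear: { background: '#bde3ff', texture: 'smooth' },
    fog: { background: '#cfcfcf', texture: 'soft' },
    storm: { background: '#4a4a5e', texture: 'grain' },
  };
  const base = themes[condition] || { background: '#e8e8e8', texture: 'plain' };
  return isNight ? { ...base, background: '#1b2430' } : base;
}
```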
Conclusion and plans
I’ve hugely enjoyed the research and development involved in producing the Conditional Orchestra, as well as the results. This project has devoured hundreds of hours of my time and tested my programming skills in ways that my day job rarely does. It’s been much more than that, though: an education in meteorology and music theory, and an opportunity both to create open source software and to contribute to it. Most of all, it’s been a test of my dedication.
I intend to build on what I’ve created by adding new features; in particular I’d like to integrate it with Windy.com so that users can simply click on a map rather than having to use a text search. A more ambitious plan is to wire it up to a physical weather station and have it generate the music in real-time.
As it stands it is something I’m really proud of despite some of its performance limitations. If you enjoy it, are inspired by it, or have any comments, I’d love to hear them. If you want to get in touch directly, please DM me on Twitter.