Welcome to the overview page for Pattern Radio: Whale Songs, a site that lets you explore thousands of hours of whale songs using AI.
Hello! A few of us at Google, in collaboration with research oceanographer Ann Allen at National Oceanic and Atmospheric Administration (NOAA) Pacific Islands Fisheries Science Center (PIFSC), created a website called Pattern Radio: Whale Songs. We put this post together to help answer questions about the project. Many thanks to our collaborators — the scientists, musicians, and educators who inspired us and taught us so much. If you have a question that’s not answered here, feel free to drop us a line at email@example.com.
What is Pattern Radio?
Pattern Radio: Whale Songs is a website that lets anyone explore thousands of hours of humpback whale songs. It’s a new kind of tool that visualizes audio at a vast scale and uses AI to make it easy to explore and make discoveries.
Why did we make it?
Back in the 1960s, scientists first discovered that humpback whales weren’t just making sounds; they were singing. And ever since, the public has been captivated by their songs. But there’s still so much we don’t understand. Why do humpback whales sing? What is the meaning of the patterns and structure within their songs? We created Pattern Radio so that anyone — not just scientists — can be part of this exciting journey of exploration and discovery.
How did the project get started?
For the past year, researchers at Google AI have been working with NOAA PIFSC to train an AI model on PIFSC’s vast collection of underwater recordings and to use it to help scientists gain a better understanding of where and when humpback whales are singing around the Hawaiian Islands. PIFSC has more than 170,000 hours of underwater audio recordings taken from spots around the Pacific Ocean, some dating as far back as 2005. We brought more than 8,000 hours of those recordings online in the Pattern Radio: Whale Songs website for anyone to explore.
What can I do on the site?
You can browse through more than a year’s worth of underwater recordings as fast as you can swipe and scroll. You can zoom all the way in to see individual sounds — not only humpback calls, but also ships, fish, and even unknown noises. And you can zoom all the way out to see months of sound at a time. An AI heat map guides you to where the whale calls most likely are, while highlight bars help you see repetitions and patterns of the sounds within the songs.
If you find something you think others should hear, you can share a link right to that sound. And if you need a bit more context around what you’re hearing, guided tours from whale song experts — like NOAA research oceanographer Ann Allen, bioacoustic scientist Christopher Clark, Cornell music professor Annie Lewandowski, and more — point out moments of interest in the data.
How much audio has PIFSC recorded?
PIFSC’s network extends to 13 different recording sites, several of which have been monitored for a decade or more. That adds up to more than 200 terabytes (about 170,000 hours’ worth) of underwater ocean recordings. If you were to sit and listen to all of that audio straight through, it would take you more than 19 years.
What audio is hosted on the site?
The Pattern Radio: Whale Songs website has about a year and a half of PIFSC’s recordings, taken off the coast of Hawaii (the Big Island) from March 2014 to August 2015, totaling more than 8,000 hours.
How is the audio recorded?
The data on the website comes from custom-built recorders called high-frequency acoustic recording packages (HARPs). The HARPs are placed on the ocean floor and contain hydrophones (underwater microphones) that record noises like whale or dolphin calls as well as human-produced noise (such as ships) and environmental noise. The scientists deploy the HARPs from a boat and leave them on the seafloor for up to a year at a time. When the batteries run out or the memory is full, they retrieve the recorders to get the data back.
Why are underwater recordings important?
HARPs are an extremely valuable research tool because the scientists don’t have to be physically present to monitor the whales and dolphins, which saves both time and resources. For example, there are certain places in the ocean where the conditions are too dangerous to sail to in the winter (when the humpbacks are in Hawaii to breed). By listening with underwater recording devices, scientists can drop a recorder in the summer months, when calmer waters prevail, and leave it to record all year.
Why is listening to humpback whales important?
Listening to the ocean is one important way scientists can monitor hard-to-study animals, like whales. Whales and dolphins spend the majority of their life underwater and use sound as their primary means of communication, relying on acoustics for survival. This makes acoustic monitoring an ideal way to study them. For whales that make complex sounds, like the humpback, a greater understanding of when, where, why, and how they sing will help us learn more about their population, migration patterns, location, habits, health and more. All of this information is important to helping scientists make decisions about how to best protect the species.
What kind of sounds do humpback whales make?
While some whales, such as blue whales and fin whales, make calls that are relatively straightforward to recognize once you’ve seen some examples, humpback whales have vocalizations that are extremely variable. These calls make up phrases and themes that form songs. The songs change over time, as the whales incorporate new sounds or swap the order of phrases, and many populations of humpbacks sing different songs. This presents a big challenge for humans (and computer algorithms!): how do you recognize — or teach an algorithm to recognize — a whale song when the song keeps changing?
Why do humpback whales sing?
Scientists don’t know, but tools like this could get us closer to an answer. For example, scientists know that only the humpback males sing, so the behavior likely has to do with breeding, but they still do not know what purpose the songs serve, or how or why the songs evolve and change.
How long have we known that humpback whales sing?
Humpback whale songs were “discovered” in the late 1960s by biologists who were given recordings of their sounds by a naval engineer. The biologists, who also had formal musical training, were the first to notice that the calls had a structure (melody and rhythm) and patterns that were not random. They were, in other words, songs.
These songs were shared with the public — most notably in the form of a record album released in 1970 called Songs of the Humpback Whale (still the best-selling natural sounds album of all time). It was one of several factors that contributed to the change in public perception of whales and other marine mammals and helped the “Save the Whales” movement gain momentum. Learn more in pieces like this radio segment about acoustic biologist Katy Payne and this newspaper article about whale songs.
Using the site
What am I looking at on the site?
In the middle of the site, you’ll see the sounds from NOAA’s underwater recordings shown as a spectrogram, a tool that helps you explore sound visually. Beneath the spectrogram is a heat map, which uses AI to help you navigate the data.
What is a spectrogram and how does it work?
A spectrogram is a picture of sound. It shows the frequencies that make up the sound, from low to high, and how they change over time, from left to right. So, when a humpback whale is, for example, making sounds that rise upward in pitch, you’ll see those as upward shapes.
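The idea can be sketched in a few lines of Python. This is a minimal illustration of the concept, not the site’s actual rendering pipeline: slice the signal into short frames, take the FFT of each frame, and stack the magnitudes so time runs left to right and frequency runs low to high.

```python
import numpy as np

def spectrogram(signal, frame_size=256, hop=128):
    """Return a 2-D array of spectral magnitudes: rows are frequency
    bins (low to high), columns are time frames (left to right)."""
    window = np.hanning(frame_size)
    frames = []
    for start in range(0, len(signal) - frame_size + 1, hop):
        frame = signal[start:start + frame_size] * window
        # rfft gives the frequency content from 0 Hz up to the Nyquist rate
        frames.append(np.abs(np.fft.rfft(frame)))
    return np.array(frames).T  # shape: (frame_size // 2 + 1, n_frames)

# A rising tone (an "upsweep") shows up as an upward-sloping ridge
sr = 8000
t = np.arange(sr) / sr
chirp = np.sin(2 * np.pi * (200 + 400 * t) * t)  # sweeps 200 Hz -> ~1000 Hz
spec = spectrogram(chirp)
print(spec.shape)
```

Feeding in a rising tone and checking where each column’s peak lands shows the upward shape the text describes: the loudest frequency bin climbs from the first frame to the last.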
How does the heat map work?
The bars below the spectrogram are generated with machine learning. This “heat map” helps you navigate the data. Brighter bars indicate places where the machine learning model is more confident that there are humpback whale songs; darker bars indicate places where the model is less confident. For example, you’ll likely see higher densities of humpback whale song — brighter bars — in the winter months (December through April), because that is when the whales come to warmer waters to breed.
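One way to picture this (an assumption for illustration, not necessarily how the site computes it): suppose the model emits a confidence score between 0 and 1 for each short audio window. To draw the heat map at a zoomed-out level, many windows collapse into one bar, and taking the maximum score per bar keeps a single confident detection visible.

```python
import numpy as np

def heatmap_bars(scores, n_bars):
    """Collapse per-window model confidences (0..1) into n_bars
    brightness values by taking the max within each bucket, so one
    confident detection still lights up its bar when zoomed out."""
    buckets = np.array_split(np.asarray(scores), n_bars)
    return [float(b.max()) for b in buckets]

# 12 windows of confidences, collapsed to 4 heat-map bars
scores = [0.1, 0.2, 0.9, 0.1, 0.0, 0.1, 0.8, 0.7, 0.2, 0.1, 0.1, 0.3]
print(heatmap_bars(scores, 4))  # -> [0.9, 0.1, 0.8, 0.3]
```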
Why are sounds being highlighted when I zoom in?
When you zoom all the way in, you’ll see sounds highlighted. When the playhead (the line at the center of your screen) passes over a sound, you’ll see similar nearby sounds highlighted. Higher confidence matches are highlighted with higher opacity. This feature helps you visualize the overall patterns and structures of the songs and find examples of repetition.
Can I use my trackpad?
Yes. If you have a trackpad, you can scroll vertically to zoom in and out, and horizontally to move left and right. You can also use the scroll bar and the + and − buttons to scroll and zoom. On touch screens, you can pinch to zoom.
What kind of sounds can I find?
There’s an ocean of sounds to discover, possibly even some that no one has listened to before. Here are some direct links to sounds that could be starting points. Click any one to jump to that moment in the recording.
- Humpback whales: During a workshop with a 7th grade class, we marveled at how humpback whales make such a wide range of sounds when they sing, like these sequences of upward bloops, sounds like this one that reminded us of an elephant, and even moments where it seems like whales sing together.
- Mysterious sounds: There are plenty of mystery sounds too, like this high-frequency glow, this low-frequency thump, and more.
- Human-made sounds: Of course there are lots of human-made sounds, like passing ships, or the whirring of mechanical recording equipment.
Can I share what I find?
Yes. If you find something interesting you’d like to share, just click “Share Link” to get a direct link to that time.
About the Tech
How does the site work?
Spectrogram tile images, at 16 different zoom levels, were pre-rendered on Google Cloud and stored on Google Cloud Storage, where they can be retrieved on demand by the website.
How are you displaying all of this in the browser?
Pattern Radio uses WebGL (specifically Pixi.js) to tile pre-rendered spectrogram images. Like the spectrogram images, the audio is split up into small segments and stored on Google Cloud Storage, where it can be fetched and played on demand. The spectrogram rendering and audio chunking were done by setting up self-contained, parallelizable tasks, which were then scaled to process the entire large dataset using Kubernetes.
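The tiling scheme works like an online map: each zoom level doubles the span of audio covered by one fixed-width tile image, so the viewer only fetches the handful of tiles visible on screen. Here is a hypothetical sketch of the lookup; the tile width, naming, and two-second base span are our assumptions, not the site’s actual values.

```python
def tiles_for_view(start_s, end_s, zoom, base_s_per_tile=2.0):
    """Return the tile names needed to draw the window [start_s, end_s)
    at a given zoom level. Each step up in `zoom` doubles the seconds
    of audio covered by one fixed-width tile image."""
    seconds_per_tile = base_s_per_tile * (2 ** zoom)
    first = int(start_s // seconds_per_tile)
    last = int((end_s - 1e-9) // seconds_per_tile)
    return [f"z{zoom}/tile_{i}.png" for i in range(first, last + 1)]

# Fully zoomed in, a 10-second view needs five two-second tiles...
print(tiles_for_view(0, 10, zoom=0))
# ...while zoomed far out, a full hour fits in just two tiles
print(tiles_for_view(0, 3600, zoom=10))
```

Because every tile is rendered ahead of time, panning and zooming never wait on a spectrogram computation, only on an image fetch.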
How was the machine learning model trained?
For the image model itself (the “humpback sound finder”), Google AI used a ResNet-50, a convolutional neural network architecture typically used for image classification that has also shown success at classifying non-speech audio. Using the spectrograms, they showed the algorithm many examples of labeled sounds (i.e., “this is a humpback; this is not a humpback”). The more examples the algorithm is shown, the better it gets at identifying those sounds.
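The training idea can be demonstrated with a tiny stand-in. The real model is a ResNet-50 trained on actual labeled spectrograms; to keep this sketch self-contained, it uses synthetic “spectrogram patches” and a simple logistic-regression classifier instead, but the loop is the same: show the model labeled examples, and each one nudges its weights toward the right answer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "spectrogram patches": whale examples carry extra energy in
# a band of frequency bins; non-whale examples are pure noise. (The
# real model is a ResNet-50 trained on real labeled spectrograms.)
def make_patch(is_whale):
    patch = rng.normal(0, 1, (16, 16))
    if is_whale:
        patch[4:8, :] += 3.0  # a bright band, like a sustained call
    return patch.ravel()

X = np.array([make_patch(i % 2 == 0) for i in range(400)])
y = np.array([1.0 if i % 2 == 0 else 0.0 for i in range(400)])

# Logistic regression by gradient descent: every labeled example
# contributes a small correction to the weights.
w = np.zeros(X.shape[1])
b = 0.0
for _ in range(300):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # predicted P(whale)
    w -= 0.01 * X.T @ (p - y) / len(y)   # gradient step on weights
    b -= 0.01 * (p - y).mean()           # gradient step on bias

preds = 1 / (1 + np.exp(-(X @ w + b))) > 0.5
print(f"training accuracy: {(preds == (y == 1)).mean():.2f}")
```

After a few hundred passes over the labeled examples, the stand-in model separates “whale” from “not whale” patches well, which is the same effect the text describes at far larger scale.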
How does the highlighting of similar sounds work?
Check out this blog post to learn about how the highlighting of similar sounds works.
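As a rough intuition (this is an assumption for illustration, not necessarily the site’s exact method): one common approach is to represent each sound as an embedding vector, score nearby sounds by cosine similarity to the sound under the playhead, and draw confident matches with opacity proportional to their score.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors
    (1 = same direction, 0 = unrelated)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def highlight_opacities(query, neighbors, threshold=0.6):
    """Opacity for each nearby sound: 0 if it isn't a confident match,
    otherwise the similarity score itself, so closer matches are
    drawn more opaquely."""
    sims = [cosine_similarity(query, n) for n in neighbors]
    return [s if s >= threshold else 0.0 for s in sims]

query = np.array([1.0, 0.0, 1.0])
neighbors = [np.array([1.0, 0.1, 0.9]),   # a very similar call
             np.array([0.0, 1.0, 0.0])]   # an unrelated sound
print(highlight_opacities(query, neighbors))
```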
About the team + what’s next?
Who is involved in the project?
The website is a collaboration between people spanning the fields of marine biology, conservation, tech, music, art, education, and more. Thank you to our collaborators, who provided input into the tool as it was developed, and whose comments throughout the data provide interesting launch points for discovery.
- Ann Allen is a research oceanographer in the Cetacean Research Program at NOAA’s Pacific Islands Fisheries Science Center. She earned her Ph.D. in Biological Oceanography from the Joint Program between MIT and Woods Hole Oceanographic Institution. Ann first reached out to Google to solve the problem of “too much data” back in 2018 and has been collaborating with Google AI ever since.
- Matt Harvey is a software engineer on Google’s AI perception team. He’s been integral in creating the AI model, and has been collaborating closely with Ann and NOAA on this project, even joining them for a research cruise on the NOAA Ship Oscar Elton Sette this past April.
- Christopher Clark is a pioneer in the field of bioacoustics. Having recently retired from the Bioacoustics Research Program at Cornell University (a program he founded 30 years ago), Chris consults with many marine biology and conservation groups and is focused on spreading scientific awareness and advocacy through films and outreach.
- Annie Lewandowski is a Senior Lecturer in the Department of Music at Cornell University and a composer/performer whose work in song and improvisation has led to explorations of the creative minds of humpback whales with pioneering bioacoustics researcher Katy Payne and the Hawai’i Marine Mammal Consortium.
- David Rothenberg is a musician, philosopher, and Distinguished Professor at the New Jersey Institute of Technology. He’s written about his efforts to make music live with whales in Thousand Mile Song, one of his many publications. He has performed or recorded with Pauline Oliveros, Peter Gabriel, Ray Phiri, Suzanne Vega, Scanner, Elliot Sharp, Iva Bittová, and the Karnataka College of Percussion. Nightingales in Berlin is his latest book, CD, and film.
And thank you to …
- The seventh graders from our one-day workshop who asked so many thoughtful questions and inspired us with their new observations
- All of our friends across Google teams that helped along the way — at Creative Lab, PAIR, Google AI and more
- Our collaborators at NOAA PIFSC for making this whole project possible
This tool is an experiment and we’re excited to see how everyone uses it, from researchers to students to teachers to musicians, and more. Stay tuned for updates, and please share the ways you’re using the site on social with #patternradio, or by dropping us a line at firstname.lastname@example.org.
Where can I learn more?
Here are some links if you want to dive in even deeper:
- To read more about the AI used in this project, check out Google software engineer Matt Harvey’s blog.
- To read more about visualization techniques used in the tool, check out Kyle McDonald’s post.
- To read about how the whole collaboration between NOAA and Google got started check out Ann Allen’s blog.
- Here’s an inspirational project visualizing humpback whale songs by our friends David Rothenberg and Mike Deal. And here’s a fun podcast interview with David.
- Listen to Annie Lewandowski’s composition “Cetus: Life After Life” for Whale Song and Chimes, a piece of music which was deeply informed by conversations with pioneering bioacoustics researcher Katy Payne.