Humpback songs, without the background noise. Much easier to analyze without the continual cacophony of the ocean clogging up every frame. This effect (separating Humpback songs from everything else) was achieved using the ML Python project known as "Spleeter" and a training set of thousands of Humpback songs from the NOAA dataset.
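Spleeter itself learns the separation from training data, but the underlying idea — keep the spectral components that stand out from the steady ocean background, discard the rest — can be sketched with simple spectral gating in plain numpy. This is a hedged illustration on synthetic audio, not the actual Spleeter pipeline; the 400 Hz "song" tone, frame sizes, and threshold factor are all made up for the demo.

```python
import numpy as np

def spectral_gate(signal, frame_len=256, noise_frames=4, factor=2.0):
    """Suppress steady background noise with simple spectral gating.

    Assumes the first `noise_frames` frames contain only background
    noise; any frequency bin whose magnitude stays below `factor`
    times that noise profile is zeroed out.
    """
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    spectra = np.fft.rfft(frames, axis=1)
    # Estimate the per-bin noise floor from the leading noise-only frames.
    noise_profile = np.abs(spectra[:noise_frames]).mean(axis=0)
    mask = np.abs(spectra) > factor * noise_profile
    cleaned = np.fft.irfft(spectra * mask, n=frame_len, axis=1)
    return cleaned.reshape(-1)

# Synthetic demo: a 400 Hz "song" tone that starts halfway through
# one second of white background noise.
rng = np.random.default_rng(0)
sr = 8000
t = np.arange(sr) / sr
mixed = 0.3 * rng.standard_normal(sr)
mixed[sr // 2 :] += np.sin(2 * np.pi * 400 * t[sr // 2 :])
cleaned = spectral_gate(mixed)
```

The gated output keeps most of the tone's energy while dropping the bulk of the noise-only bins — a crude, hand-rolled version of what the learned model does far better.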
Humpback songs: Testable hypotheses
To catch up a bit with where I am today, I've been running dialogues with GPT-4 prompts to help record the basis of theories I've developed, and am now starting to test, which might explain why Humpback males all sing their elaborate, ever-changing song each mating season. Several notions have occurred to me as hypotheses that might shed light. Each should be easily testable using the massive NOAA dataset as a giant warehouse of singing behavior going back decades in some cases.
Here is Gemini's reaction to my theories (worth the 15-minute read).
https://g.co/gemini/share/11ea249e4e4b
Below is one of the first recordings I encountered when I began to sift through the datasets included with patternradio.withgoogle.com for specific contexts in which older Humpbacks might be teaching the principles, training rituals, or simple mnemonics for perfecting this elaborate singing technique, and/or meeting its challenges as an art form.
Tools of the trade - The NOAA Dataset
In order to analyze the songs of Humpback whales, it's important to start with an industrial-strength tool kit that will allow you to do close comparisons and complex analyses between different songs, and to know exactly where and when each song you are listening to was performed by a Humpback. Think of it as an imperfect record from a large, event-driven system. Songs begin and end. Some have significant changes which will eventually propagate to all singers in the population.
When Humpbacks sing the mating song, they seem to shift their pitch in very methodical ways. For example, when breaching, the songs often seem to go as much as 1.5 octaves higher than when singing continuously at optimum depth. My theory was immediately this: They must be harmonizing.
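For anyone who wants to measure that kind of shift themselves: a 1.5-octave jump corresponds to a frequency ratio of 2^1.5 ≈ 2.83. Here's a minimal numpy sketch that estimates the dominant frequency of two signals and expresses the difference in octaves. It uses synthetic tones, not real NOAA audio — the 200 Hz "call" is an invented stand-in, and a real recording would need a sturdier pitch tracker than a single FFT peak.

```python
import numpy as np

def dominant_freq(signal, sr):
    """Frequency (Hz) of the strongest bin in the signal's spectrum."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1 / sr)
    return freqs[np.argmax(spectrum)]

def octave_shift(f_low, f_high):
    """Pitch interval between two frequencies, in octaves (log2 of ratio)."""
    return np.log2(f_high / f_low)

# Synthetic check: a 200 Hz call vs. the same call raised 1.5 octaves.
sr = 8000
t = np.arange(sr) / sr
deep = np.sin(2 * np.pi * 200 * t)
breach = np.sin(2 * np.pi * 200 * 2**1.5 * t)  # 200 * 2^1.5 ≈ 565.7 Hz
shift = octave_shift(dominant_freq(deep, sr), dominant_freq(breach, sr))
# shift comes out ≈ 1.5 octaves
```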
This can be tracked via the NOAA Dataset, a massive collection of undersea recordings dating back to the Cold War, when nuclear submarine movements were often monitored by the US using triangulated sets of sensitive underwater microphones called "hydrophones". You'll hear more about this term later. To help new analysts ramp up quickly (and in case anyone is curious), the tools are outlined below, in some cases with links to liner notes or other bodies of information.
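Triangulation from a hydrophone array boils down to time differences of arrival: the same sound reaches each hydrophone at a slightly different moment, and those offsets pin down where the source must be. Here is a hedged, brute-force sketch of the idea in numpy — the hydrophone positions, the grid size, and the singer's location are all invented for the demo, and real systems use far more refined solvers than a grid search.

```python
import numpy as np

SOUND_SPEED = 1500.0  # m/s, rough speed of sound in seawater

def localize(hydrophones, arrival_times, grid_half=5000.0, step=10.0):
    """Brute-force 2D localization from time differences of arrival.

    Scans a square grid for the point whose predicted arrival-time
    differences (relative to the first hydrophone) best match the
    observed ones, in the least-squares sense.
    """
    hydrophones = np.asarray(hydrophones, dtype=float)
    observed = np.asarray(arrival_times) - arrival_times[0]
    xs = np.arange(-grid_half, grid_half + step, step)
    gx, gy = np.meshgrid(xs, xs)
    # Distance from every grid point to each hydrophone: shape (n, gy, gx).
    d = np.hypot(gx[None] - hydrophones[:, 0, None, None],
                 gy[None] - hydrophones[:, 1, None, None])
    predicted = (d - d[0]) / SOUND_SPEED
    err = ((predicted - observed[:, None, None]) ** 2).sum(axis=0)
    iy, ix = np.unravel_index(np.argmin(err), err.shape)
    return xs[ix], xs[iy]

# Synthetic check: a singer at (1200, -800) m heard by three hydrophones.
phones = [(0.0, 0.0), (4000.0, 0.0), (0.0, 4000.0)]
source = np.array([1200.0, -800.0])
times = [np.hypot(*(source - p)) / SOUND_SPEED for p in phones]
estimate = localize(phones, times)  # lands within one grid step of the source
```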
#NOAA_Dataset The NOAA dataset.
https://gemini.google.com/app/7d5ec4e4527151b7
How did I get into this weird hobby?
It all started as I was rehearsing for a Google interview. I was interested in working for DeepMind, but Google had openings in security and I thought: "Why not?" So I engaged. The Google interview process is famously challenging, but they also have great facilities for helping you bone up. A good friend of mine was one of those patient volunteers who participate in mock interviews that give you feedback and advice to help you perform your best.
So I was sitting with my interviewer friend and talking about what I could diagram, at an architectural level, to demonstrate my command of building scalable infrastructure. I proposed the idea of doing Google Translate, but my friend countered with "We already have one of those," which, of course, made sense. "If you want to stand out, do something original," he suggested.
So I thought a bit more. Over lunch we had been discussing the NOAA dataset which had recently been released and studied by a small team of Google developers. They'd built a neural net which learned how to recognize various species by the sounds they make. (I had skimmed over the resulting research paper some time ago.) So I spitballed to my friend as I ate my Cobb salad. "What if I could make something that talks to animals? That would be different." My friend got uncharacteristically excited by the idea, telling me quickly about a project he'd heard of recently which was using deep learning in conjunction with the NOAA dataset to analyze the sounds made by Grey Whales, and had recently released some shocking results, namely that there was a fully consistent set of sounds that seemed to map symbolically to a complex language -- essentially, solid evidence that Grey Whales speak to each other in full sentences. This was mind-blowing to me. I hadn't heard about it at all before that day, but by the time we'd finished eating, I was already obsessed with the notion of building a translator for other species, and all the headaches and wonder that would likely come from such an endeavor.
I ended up not getting the job at Google. It's a long story, and not that interesting. But the pursuit of animal communication stayed with me. I decided to use Humpback Whales as my first species of focused inquiry simply because my mother had bought me a subscription to National Geographic when I was young (thanks, mom!), and when I was about 6 or 7 years old, I received in the mail the iconic issue in which a recording of Humpback songs was included as an insert. I listened to it over and over again on a toy phonograph player I'd received for my birthday. The only other records I owned were "The Osmond Family" and "The Jackson Five," to which I listened in equal measure with the Whale Songs, but my most exotic fantasies were all reflections of the wonder I experienced while listening to that Humpback recording. Humpbacks weren't as danceable, but it was clear there was a deep mystery in those resonating calls. My childhood fascination returned to me in fits and starts as I began to piece together a system that might actually tell me what was happening in those recordings.