How they made it: the 'I'll Be Seeing You' elephant episode

A savanna elephant in Malawi. (Thoko Chikondi for NPR)

Podcast: I’ll Be Seeing You, a special series “about technologies that watch us”

Episode: Elephants Under Attack Have An Unlikely Ally: Artificial Intelligence

Who we talked to: Reporter and host Dina Temple-Raston, senior producer Michael May, producer Adelina Lancianese

How they got this idea: Dina wanted to find a subject “that would allow people to have an open mind about artificial intelligence. When people talk about A.I., it seems to have this inherent sinister quality. I needed to find where it was being used in a way that just about everyone would agree was a good, positive thing.”

The answer turned out to be elephant conservation — how artificial intelligence is being employed to identify and prevent poaching.

“I found articles about how scientists were just starting to use A.I. in conservation, and from there stumbled onto Paul Allen’s Great Elephant Census and how his company, Vulcan, was starting to come up with new ideas to save the elephant,” Dina says. “It was a perfect vehicle because, well, who doesn’t love elephants?”

What mics they used: Michael, who traveled to Malawi with Dina, packed a standard recording kit that included an Audio-Technica AT822, a specialized microphone for stereo recordings. He also brought an Audio-Technica AT897 shotgun microphone, which features a line + gradient (i.e. narrow) pickup pattern. This design reduces the level of audio sources that are off-axis, aka not directly in front of the mic.

“The stereo microphone worked great for what we wanted it to do, which was getting wildlife sounds,” Michael says. “When I went from the [omnidirectional mic] or the shotgun to listening through the stereo [mic], I definitely was just hearing so much more detail from farther away.”

In this snapshot of the podcast, each division represents an audio clip; the finished product contains nearly 700 individual audio objects.

How they organized all that audio: With forest audio recorded by scientists, field audio recorded by NPR, actualities, tracks, ambi and music, keeping track of all that raw material was a project in itself. (The end result has nearly 700 distinct pieces of audio.) The secret was a spreadsheet, Adelina says.

“[NPR correspondent] Howard Berkes taught me to create a spreadsheet for all of your audio,” she says. “The left column is the file name. The columns are, is it in Google Drive? Yes? No? Is it in NewsFlex? Yes? No? Is it transcribed? Yes? No? That way, you know where everything lives and you can even organize it further.”
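For anyone who would rather script that first pass, here is a minimal sketch of the same idea in Python (our illustration, not the team's workflow; the folder path is a placeholder). It scans a directory of audio files and writes a starter tracking sheet with blank Yes/No columns to fill in by hand, just like the spreadsheet Adelina describes.

```python
# Sketch: generate a starter tracking sheet for raw audio files.
# The folder path is hypothetical; the status columns start blank
# and are meant to be filled in by hand.
import csv
from pathlib import Path

AUDIO_DIR = Path("raw_audio")  # placeholder folder of field recordings
EXTENSIONS = {".wav", ".mp3", ".aif", ".aiff"}
COLUMNS = ["file_name", "in_google_drive", "in_newsflex", "transcribed"]

rows = []
for f in sorted(AUDIO_DIR.rglob("*")):
    if f.suffix.lower() in EXTENSIONS:
        rows.append({
            "file_name": f.name,
            "in_google_drive": "",  # Yes / No
            "in_newsflex": "",      # Yes / No
            "transcribed": "",      # Yes / No
        })

with open("audio_tracking.csv", "w", newline="") as out:
    writer = csv.DictWriter(out, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerows(rows)

print(f"Wrote {len(rows)} rows to audio_tracking.csv")
```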

In this clip from the podcast, narration and music combine to explain how a neural network learns to recognize sounds.

How they explained neural networks with sound: Adelina and Dina chose to use music with narration to explain how neural networks isolate the elephant sounds from the forest audio.

“If the computer is getting 10 hours of forest sounds, it’s having to say ‘monkey, not an elephant’ or ‘gunshot, not an elephant’ until you eventually get to the elephant,” Adelina says. “That’s what we needed to recreate.”
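To make that loop concrete, here is a toy sketch of a classifier doing the same kind of sorting. This is our illustration, not the conservationists' actual model: it uses a single invented feature, each clip's dominant frequency, and leans on the fact that elephant rumbles sit very low in the spectrum while monkey calls and gunshots sit much higher.

```python
# Toy sketch of the labeling loop Adelina describes; not the real
# model, which trains on far richer spectrogram data.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic training data: log10 of each clip's dominant frequency.
# Elephant rumbles are infrasonic (roughly 10-40 Hz); other forest
# sounds sit much higher in the spectrum.
elephant = np.log10(rng.uniform(10, 40, size=(200, 1)))
other = np.log10(rng.uniform(200, 4000, size=(200, 1)))
X = np.vstack([elephant, other])
y = np.array([1] * 200 + [0] * 200)  # 1 = elephant, 0 = not

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X, y)

# "Monkey, not an elephant" ... "gunshot, not an elephant" ... elephant.
for name, freq in [("monkey call", 1200), ("gunshot", 2500), ("rumble", 22)]:
    verdict = "elephant" if clf.predict([[np.log10(freq)]])[0] else "not an elephant"
    print(f"{name}: {verdict}")
```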

In editing, they ducked (i.e. turned down) tracks to isolate different instruments in a piece of orchestral music. “What you’re left with is piano and violin, because they kind of sound similar, and then the piano eventually disappears,” Adelina says.
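The team did that by riding levels in their audio editor, but as a rough code analogy (our sketch, with placeholder file names, using the pydub library), ducking is simply a gain drop on the music bed wherever another element plays:

```python
# Sketch of ducking a music bed under narration with pydub.
# File names are placeholders; a real mix would ramp the gain
# smoothly instead of stepping it down and back up.
from pydub import AudioSegment

music = AudioSegment.from_file("orchestra.wav")
narration = AudioSegment.from_file("narration.wav")

duck_start = 2_000                      # ms into the music bed
duck_end = duck_start + len(narration)  # pydub measures in milliseconds

# Drop the bed 12 dB while the narration plays: the "-" operator
# applies gain in dB, and slicing plus "+" splits and rejoins audio.
ducked = music[:duck_start] + (music[duck_start:duck_end] - 12) + music[duck_end:]

mix = ducked.overlay(narration, position=duck_start)
mix.export("mixed.wav", format="wav")
```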


Argin Hutchins is the NPR Training team's Audio Production Trainer.