When the Ottomans conquered Constantinople in 1453, the Orthodox Christian church Hagia Sophia was converted into a mosque. As a result, the Christian choral music that had reverberated in this acoustical masterpiece for centuries fell silent. But thanks to a digital filter developed by a pair of Stanford researchers, one an art historian (Bissera Pentcheva) and the other an acoustics expert (Jonathan Abel), we are now able to hear what a choir might have sounded like in the Hagia Sophia before the mid-15th century.
When they met, Pentcheva started telling Abel about the Hagia Sophia — how we couldn’t really understand the experience of worshipers there unless we could hear the music the way they did. And as she talked, Abel started to feel a prickling of excitement. They could recreate what that music would sound like. If only they could get in the Hagia Sophia and pop a balloon.
When a balloon pops, it makes an impulse: a sharp, quick sound that takes on the character of whatever space it’s in. Pop a balloon in a room and you’re really hearing the acoustics of the space itself, says Abel.
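That impulse is the whole trick: the filter works, in essence, by convolving a dry recording with an impulse response captured in the space, a technique known as convolution reverb. Here’s a minimal sketch in Python, assuming you have a dry vocal track and a balloon-pop recording as mono WAV files at the same sample rate (the filenames are placeholders):

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

# Load a dry (studio) recording and an impulse response captured in the space.
# Placeholder filenames; both are assumed to be mono at the same sample rate.
rate, dry = wavfile.read("dry_vocal.wav")
_, impulse = wavfile.read("balloon_pop.wav")

dry = dry.astype(np.float64)
impulse = impulse.astype(np.float64)

# Convolving the dry signal with the impulse response imprints the room's
# reverberation (every reflection and decay the balloon pop captured) onto it.
wet = fftconvolve(dry, impulse)

# Normalize to avoid clipping, then write out 16-bit audio.
wet /= np.max(np.abs(wet))
wavfile.write("wet_vocal.wav", rate, (wet * 32767).astype(np.int16))
```

The ten-plus seconds of reverberation described below lives entirely in the length and shape of that impulse response.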
In this clip from 2013, the Cappella Romana choir sings a hymn passed through an early version of the Hagia Sophia filter:
The marble interior of Hagia Sophia was 70 meters long, while in height it reached 56 meters at the apex of the great dome. The vast chamber and its reflective surfaces of marble and gold resulted in unprecedented acoustics, with a reverberation time of over ten seconds. As a museum, Hagia Sophia today has lost its voice; no performances can take place in it. Using new digital technology developed at CCRMA, the second portion of Cappella Romana’s concert at Bing aims to recreate the sound of what singing in Hagia Sophia must have been like. Each singer carries a microphone that records the sound, transforming it into a digital signal, which is then imprinted with the reverberant response of Hagia Sophia. What you hear as a wet sound is the product of a digitally produced signal transmitted through loudspeakers placed strategically to create an enveloping soundfield. This digital signal may shock you with the way it relativizes speech, transforming its content into a chiaroscuro of indistinct but immersive sound. For the Byzantines, this sonic experience was associated with water: the waves of the sea.
Last year, the Cappella Romana released an entire album of choral music recorded with the filter — you can listen on Spotify, Apple Music, Amazon, Tidal, or Pandora.
Needless to say, the album sounds better with the best pair of headphones you can muster. You can find out more information about the filter and the acoustics of the Hagia Sophia at Icons of Sound.
I love echo — any kind of reverberation or atmosphere around a voice or a sound effect that tells you something about the space you are in.
That’s a quote from legendary film editor and sound designer Walter Murch. In the 70s, he pioneered a technique called worldizing: playing pristine studio-recorded sound back in a real space and re-recording it, then mixing the clean and rough tracks together to make a more immersive soundscape for theater audiences. He used it in The Godfather, Apocalypse Now, and American Graffiti:
George [Lucas] and I took the master track of the two-hour radio show with Wolfman Jack as DJ and played it back on a Nagra in a real space — a suburban backyard. I was fifty-or-so-feet away with a microphone recording that sound onto another Nagra, keeping it in sync and moving the microphone kind of at random, back and forth, as George moved the speaker through 180 degrees. There were times when microphone and speaker were pointed right at each other, and there were other times when they were pointed in completely opposite directions. So that was a separate track. Then, we did that whole thing again.
When I was mixing the film, I had three tracks to draw from. One of them was what you might call the “dry studio track” of the radio show, where the music was very clear and sharp and everything was in audio focus. Then there were the other two tracks which were staggered a couple of frames to each other, and on which the axis of the microphone and the speakers was never the same because we couldn’t remember what we had done intentionally.
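You could approximate that layered, staggered mix digitally today. Here’s a toy sketch of the idea (not Murch’s actual process; the frame offset and gain values are invented):

```python
import numpy as np

def worldize_mix(dry, wet_a, wet_b, rate, frames=2, fps=24,
                 dry_gain=0.5, wet_gain=0.35):
    """Toy mix of a dry studio track with two re-recorded 'worldized' takes
    (all mono float arrays at the same sample rate), one take staggered by
    a couple of film frames, as Murch describes."""
    offset = int(rate * frames / fps)  # 2 frames at 24 fps is ~83 ms
    wet_b = np.concatenate([np.zeros(offset), wet_b])  # stagger second take
    n = max(len(dry), len(wet_a), len(wet_b))
    pad = lambda x: np.pad(x, (0, n - len(x)))  # pad every track to length n
    mix = dry_gain * pad(dry) + wet_gain * (pad(wet_a) + pad(wet_b))
    return mix / np.max(np.abs(mix))  # normalize to avoid clipping
```

In the final mix, the balance between the in-focus studio track and the two roomy, slightly out-of-sync tracks can then be ridden from scene to scene.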
Murch appears in Making Waves: The Art of Cinematic Sound, a documentary directed by veteran Hollywood sound editor Midge Costin. The film reveals the hidden power of sound in cinema, introduces us to the unsung heroes who create it, and features insights from legendary directors with whom they collaborate.
It features the insights and stories of iconic directors such as George Lucas, Steven Spielberg, David Lynch, Barbra Streisand, Ang Lee, Sofia Coppola, and Ryan Coogler, working with sound design pioneers Walter Murch, Ben Burtt, and Gary Rydstrom, and the many women and men who followed in their footsteps.
This morning, instead of crawling straight from bed to desk and diving into the internet cesspool, I went for a walk. I went because I needed the exercise, because it was a nice sunny day out, because the changing leaves are super lovely right now. (Check out my Instagram story for some of what I saw along the way.) But I also wanted to listen to this episode of On Being with Gordon Hempton called Silence and the Presence of Everything. Hempton is an acoustic ecologist who has a lot of interesting things to say about silence and natural sounds.
Oh, grass wind. Oh, that is absolutely gorgeous, grass wind and pine wind. We can go back to the writings of John Muir, which — he turned me on to the fact that the tone, the pitch, of the wind is a function of the length of the needle or the blade of grass. So the shorter the needle on the pine, the higher the pitch; the longer, the lower the pitch. There are all kinds of things like that, but the two folders where I collected, I have, oh, over 100 different recordings which are actually silent from places, and you cannot discern a sense of space, but you can discern a sense of tonal quality, that there is a fundamental frequency for each habitat.
It sounds paradoxical, but I wanted to listen to this podcast in a setting with natural sounds, rather than in my car or on a plane. I had my AirPods in because they don’t block all outside sound, so I could hear the crunch of the road beneath my shoes as I walked and listened. The nature and animal sounds in the episode sounded like they were actually coming from all around me. I paused the episode for a minute or two to listen to a burbling brook I passed along the way. The whole experience was super relaxing and informative.1
The problem Hempton hopes to take on is gargantuan. To understand it, try a little experiment: when you reach the period at the end of this sentence, stop reading for a moment, close your eyes, and listen.
What did you hear? The churn of the refrigerator? The racing hiss of passing traffic? Even if you’re sitting outside, chances are you heard the low hum of a plane passing overhead or an 18-wheeler’s air horn shrieking down a not-so-distant highway.
If you heard only the sounds of birds and the wind in the trees, you’re one of a lucky few. But it’s likely that quiet won’t last.
For The Atlantic, Bianca Bosker writes about the growing problem of noise pollution (because of our love of technology and hands-off governments) and why so few people take it seriously (because of our love of technology and hands-off governments).
Scientists have known for decades that noise — even at the seemingly innocuous volume of car traffic — is bad for us. “Calling noise a nuisance is like calling smog an inconvenience,” former U.S. Surgeon General William Stewart said in 1978. In the years since, numerous studies have only underscored his assertion that noise “must be considered a hazard to the health of people everywhere.” Say you’re trying to fall asleep. You may think you’ve tuned out the grumble of trucks downshifting outside, but your body has not: Your adrenal glands are pumping stress hormones, your blood pressure and heart rate are rising, your digestion is slowing down. Your brain continues to process sounds while you snooze, and your blood pressure spikes in response to clatter as low as 33 decibels — slightly louder than a purring cat.
Experts say your body does not adapt to noise. Large-scale studies show that if the din keeps up — over days, months, years — noise exposure increases your risk of high blood pressure, coronary heart disease, and heart attacks, as well as strokes, diabetes, dementia, and depression. Children suffer not only physically — 18 months after a new airport opened in Munich, the blood pressure and stress-hormone levels of neighboring children soared — but also behaviorally and cognitively. A landmark study published in 1975 found that the reading scores of sixth graders whose classroom faced a clattering subway track lagged nearly a year behind those of students in quieter classrooms — a difference that disappeared once soundproofing materials were installed. Noise might also make us mean: A 1969 study suggested that test subjects exposed to noise, even the gentle fuzz of white noise, become more aggressive and more eager to zap fellow subjects with electric shocks.
Being pretty sensitive to noise, I read this piece with a great deal of interest. One of the benefits of living in the middle of nowhere in the country is that when I go outside, the sounds I hear are mostly natural: birds, streams, wind, frogs, and insects. In the winter, the quiet is sometimes so complete that you can only hear the sound of your own heart beating in your ears. But lately, some dipshit who owns a car with a deliberately loud after-market muffler has been driving through the surrounding hills, disrupting the peace. I can’t usually hear cars passing on the nearby road, but this muffler jackass you can hear literally miles away. It makes me want to smash things! I feel like a bit of a crank, but why does this person’s freedom to have a loud muffler override the freedom of the thousands of people within earshot to have quiet? (See also positive versus negative liberty and How Motorcyclists Think People React When They Drive By.)
BBC Radio 4 has done an abridged audio reading of Margaret Atwood’s The Testaments, her followup to The Handmaid’s Tale. The series is composed of 15 episodes that run 14 minutes each — a total of 3.5 hours compared to the full 13+ hour audiobook. The episodes are only going to be available online for a short time though — the first one expires Oct 15 — so get in there if you’re going to listen. I’m reading the book right now, otherwise I’d be right there with you. (via open culture)
Yellowstone National Park maintains a collection of sounds and videos taken in the park that are in the public domain and free for anyone to use. The collection includes the sights and sounds of birds, geysers, bison, bubbling mud pots, fish, wolves, falling snow, storms, and all sorts of other ambient noises and videos.
Wireless headphones are augmented reality devices.
And further down the page:
Much as phones have enabled and concretized the always-on nature of everyday life, introducing the constant interpenetration of physical and digital space to individual experience, wireless earbuds facilitate a deeper integration, an “always in” existence that we need never interrupt by looking down at a screen. Their aural interface means we don’t have to awkwardly switch attention back and forth between IRL and a screen as though the two are starkly separated. Instead, we can seem to occupy both seamlessly, an experience that other augmented-reality devices, like Google Glass, have promised with varying degrees of success.
I bought some AirPods several months ago thinking I was getting wireless headphones, but very quickly realized they were actually an augmented-reality wearable computer. In my media diet post from May, I called them “the first real VR/AR device that feels seamless”. Like regular wired earbuds or even over-the-ear Bluetooth headphones, AirPods provide an audio track layered over the real world, but they’re so light and let just the right amount of ambient sound in that you barely notice you’re wearing them — it just sounds like whatever you’re listening to is playing in your head, automagically. It feels, at least to me, like a totally different and far more immersive experience. Wearable computing still seems like a futuristic thing a few years away, but with AirPods and the Apple Watch, it’s solidly here right now.
Given current phone/camera trends (or, I should say, current camera/phone trends), the Star Trek: TNG combadge is unrealistic because by the 24th century it’d be more like 99.9998% camera and 0.0002% phone.
The natural ancestor of the combadge seems more like AirPods than the iPhone. But the likelihood of AirPods 6.0 having a tiny camera embedded in them for, say, the facial recognition of whoever you’re speaking with (a la Miranda Priestly’s assistants in The Devil Wears Prada) or text-to-speech for whatever you’re looking at (signs, books, menus) seems quite high.
YouTuber Lord Vinheteiro recently played the same pair of tunes on six different pianos, ranging from a $499 used upright to a $112,000 Steinway to a $2.5 million Steinway grand piano that’s tacky af. Which one sounds the best?
I’m not sure that you get the full effect and nuance of the super luxe pianos after the audio has passed through YouTube’s audio compression and whatever phone or computer speaker or headphones you’ve got going, but the more expensive pianos sound better than the lower-end ones for sure. I would have appreciated a medley at the end that repeatedly cycled through all six of the recordings to better hear the differences.
From a 1972 episode of Mister Rogers’ Neighborhood, Mister Rogers demonstrates how to make a record using a machine called a record cutter (also referred to as a “record lathe”). Says Rogers, apparently living his best life: “When I was a little boy, I thought the greatest thing in the world would be to be able to make records.” (via open culture)
The Apollo Flight Journal has put together a 20-minute video of the full descent and landing of the Apollo 11 Lunar Module containing Neil Armstrong and Buzz Aldrin on July 20, 1969.
The video combines data from the onboard computer for altitude and pitch angle with 16mm film that was shot throughout the descent at 6 frames per second. The audio comes from two sources: the air/ground transmissions are on the left stereo channel and the mission control flight director loop is on the right channel. Subtitles are included to aid comprehension.
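That left/right split is simple to reproduce: interleave two mono sources as the two channels of a stereo file. A quick sketch with placeholder filenames, assuming both sources share a sample rate and are already synced:

```python
import numpy as np
from scipy.io import wavfile

rate, air_ground = wavfile.read("air_ground_loop.wav")    # left channel
_, flight_director = wavfile.read("flight_director.wav")  # right channel

# Trim to the shorter source, then stack as (n_samples, 2) for stereo output.
n = min(len(air_ground), len(flight_director))
stereo = np.column_stack([air_ground[:n], flight_director[:n]])
wavfile.write("descent_stereo.wav", rate, stereo)
```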
I don’t know who needs to hear this, but if you’re in need of some relaxing sounds, a meditative moment, or a chill work soundtrack, I recommend this 71-minute video of Tibetan singing bowl music.
Listen in as “Gong Master Sven” plays a gong that’s 7 feet across. (No seriously, listen…it’s wild. Headphones recommended.)
Ok, show of hands. How many of you thought it was going to sound like that? I had no idea! He barely hits it! The whole thing sounded like a horror movie soundtrack or slowed-down pop songs. Here’s another demonstration, with some slightly harder hits:
The Memphis Gong Chamber looks like an amazing place. Watching this on YouTube, we’re missing out on a lot of the low-end sounds:
And if you were actually standing here like I am, you can feel all your internal organs being massaged by the vibrations from this. It’s really quite the experience.
This guy drags some objects over a large gong and it sounds like whale song:
Ok well, there’s a new item for the bucket list. (via @tedgioia)
The National Sound Library of Mexico says they have found the only known audio recording of Frida Kahlo’s voice. Take a listen:
The library have unearthed what they believe could be the first known voice recording of Kahlo, taken from a pilot episode of 1955 radio show El Bachiller, which aired after her death in 1954.
The episode featured a profile of Kahlo’s artist husband Diego Rivera. In it, she reads from her essay Portrait of Diego, which was taken from the catalogue of a 1949 exhibition at the Palace of Fine Arts, celebrating 50 years of Rivera’s work.
Film footage of Kahlo is difficult to come by as well; I could only find these two clips:
The first video is in color and shows Kahlo and husband Diego Rivera in her house in Mexico City. The second shows Kahlo painting, drawing, and socializing with the likes of Leon Trotsky. At ~0:56, she walks quickly and confidently down the stairs of a ship, which is a bit surprising given what I’ve read about her health problems.
Update: According to this article (and its translation by Google), the voice on the recording isn’t Kahlo but belongs instead to actress Amparo Garrido:
Yes, I recognize myself. It was a big surprise for me because so many years had passed that I really didn’t even remember. […] When I listened to this audio, I remembered some things and I got emotional, because I did recognize myself.
The National Oceanic and Atmospheric Administration (NOAA) and Google have teamed up on a project to identify the songs of humpback whales from thousands of hours of audio using AI. The AI proved to be quite good at detecting whale sounds and the team has put the files online for people to listen to at Pattern Radio: Whale Songs. Here’s a video about the project:
You can literally browse through more than a year’s worth of underwater recordings as fast as you can swipe and scroll. You can zoom all the way in to see individual sounds — not only humpback calls, but ships, fish and even unknown noises. And you can zoom all the way out to see months of sound at a time. An AI heat map guides you to where the whale calls most likely are, while highlight bars help you see repetitions and patterns of the sounds within the songs.
The audio interface is cool — you can zoom in and out of the audio wave patterns to see the different rhythms of communication. I’ve had the audio playing in the background for the past hour while I’ve been working…very relaxing.
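The zoomable view is, at heart, a giant spectrogram. As a rough illustration of the visualization step only (not Google’s detection model), here’s how you might compute one from a clip of hydrophone audio; the filename is a placeholder:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

rate, audio = wavfile.read("hydrophone_clip.wav")  # placeholder, assumed mono

# Short-time Fourier transform: frequency content over time. Humpback song
# sits low (roughly tens of Hz to a few kHz), so a long window helps there.
freqs, times, power = spectrogram(audio.astype(np.float64), fs=rate, nperseg=4096)

plt.pcolormesh(times, freqs, 10 * np.log10(power + 1e-12))  # plot in dB
plt.ylim(0, 4000)  # focus on the band where the calls live
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.show()
```

Zooming out to months of audio is then a matter of computing these in chunks and downsampling, with the AI model supplying the heat map of likely whale calls.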
Now, I’d like you to imagine you’re chatting with your conversation partner. But instead of speaking and hearing the words alone, each syllable they utter has a note, sometimes more than one. They speak in tunes and I can sing back their melody. Once I know them a little bit, I can play along to their words as they speak them, accompanying them on the piano as if they’re singing an operatic recitative. They drop a glass on the floor, it plays a particular melody as it hits the tiles. I’ll play that melody back — on a piano, on anything. I can accompany that melody with harmony, chords — or perhaps compose a variation on that melody — develop it into a stupendous symphony filled with strings, or play it back in the style of Chopin, Debussy or Bob Marley. That car horn beeps an F major chord, this kettle’s in A flat, some bedside lights get thrown out because they are out of tune with other appliances. I can play along to every song on the radio whether or not I’ve heard it before, the chord progressions as open to me as if I had the sheet music in front of me. I can play other songs with the same chords and fit them with the song being played. Those bath taps squeak in E, this person sneezes in E flat. That printer’s in D mostly. The microwave is in the same key as the washing machine.
I have a friend with perfect pitch and one of the first times we hung out together, the horn on a tugboat sounded and she said, “C sharp”. I looked puzzled so she explained, and then I peppered her with questions about all the other sounds around us. It was like watching a superhero do their thing.
LJ said she had been a “weird prodigy kid.” For her, perfect pitch had been a nightmare. The whole world seemed out of tune. But then teachers introduced her to Indian ragas, Gamelan music and compositions with quarter tones, unfamiliar modes and atonal structures. As her musical horizons expanded, her anxiety dissipated. (She remains exceedingly sensitive to pitch, though. Her refrigerator, for example, hums in A flat. Working from home, I hear my fridge running 12 hours a day. Blindfolded, I’m not sure I could pick the thing out of a lineup of three other refrigerators.)
This is neat: Peter Mayhew as Chewbacca speaking English to Harrison Ford’s Han Solo in a scene from Empire Strikes Back:
Mayhew’s dialogue provided context for Ford to play off of. Chewbacca’s more familiar voice was dubbed over the on-set dialogue in post production — listen to Star Wars sound designer Ben Burtt describe how he created Chewie’s voice in this video at ~26:18. Mayhew passed away last week at the age of 74.
Centuries of Sound is a podcast that creates mixtapes by year. So far, that’s pretty standard. The main difference is that CoS’s mixtapes begin in 1853.
That’s as early as we’ve been able to recover recorded audio, mostly from technology that did not work particularly well at the time. The technology of the 1850s recorded sound, but couldn’t reliably reproduce it.
The real start date for us is nearly a quarter of a century [before Thomas Edison], in the studio of French printer and bookseller Édouard-Léon Scott de Martinville. The year was 1853 or 1854, and he was working on engravings for a physiology textbook, in particular a diagram of the internal workings of the human ear. What if, he thought, we could photograph sounds in the way we do images? (photography was a quarter-century old at this point) He began to sketch a device, a way of mimicking the inner workings of the human ear in order to make lines on a piece of paper.
I cover a plate of glass with an exceedingly thin stratum of lampblack. Above I fix an acoustic trumpet with a membrane the diameter of a five franc coin at its small end—the physiological tympanum (eardrum). At its center I affix a stylus—a boar’s bristle a centimeter or more in length, fine but suitably rigid. I carefully adjust the trumpet so the stylus barely grazes the lampblack. Then, as the glass plate slides horizontally in a well formed groove at a speed of one meter per second, one speaks in the vicinity of the trumpet’s opening, causing the membranes to vibrate and the stylus to trace figures.
Firstsounds.org did the most work in deciphering these early paper recordings, and that story is well told by the radio show Studio 360.
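Conceptually, the recovery works like this: scan the paper, extract the wiggling trace as a series of stylus positions over time, and treat those positions as audio samples. A toy version in Python, assuming the trace has already been extracted into an array (the real work, including correcting for the hand-cranked plate’s uneven speed, is far harder):

```python
import numpy as np
from scipy.io import wavfile

def trace_to_audio(trace_y, trace_rate, out_rate=44100):
    """Turn a digitized phonautogram trace (stylus deflection over time)
    into playable audio. trace_rate is how many trace points per second
    the scan works out to, given the plate's speed."""
    # The stylus deflection roughly follows air pressure, so the trace
    # itself is the waveform; just center it and rescale.
    samples = trace_y - np.mean(trace_y)
    samples = samples / np.max(np.abs(samples))
    # Resample to a standard audio rate by linear interpolation.
    t_in = np.arange(len(samples)) / trace_rate
    duration = len(samples) / trace_rate
    t_out = np.linspace(0, duration, int(duration * out_rate), endpoint=False)
    return (np.interp(t_out, t_in, samples) * 32767).astype(np.int16)

# Demo with a synthetic 440 Hz "trace" sampled at 8000 points per second.
t = np.arange(0, 1, 1 / 8000)
wavfile.write("phonautogram_demo.wav", 44100,
              trace_to_audio(np.sin(2 * np.pi * 440 * t), trace_rate=8000))
```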
It even has a perfect name, what these people do: archeophony.
Here, then, is Centuries of Sound’s mix of all the recorded audio up to 1860 that they’ve been able to recreate from those early pre-Edison recordings, which couldn’t be reproduced at the time.
I wish I had known about this when I was still writing my dissertation (which was, in part, on paper and multimedia in the 1900s). It would have made many things much easier.
Ben Burtt was the sound designer for the original Star Wars trilogy and was responsible for coming up with many of the movies’ iconic sounds, including the lightsaber and Darth Vader’s breathing.1 In this video, Burtt talks at length about how two dozen sounds from Star Wars were developed.
The base sound for the blaster shots came from a piece of metal hitting the guy-wire of a radio tower — I have always loved the noise that high-tension cables make. And I never noticed that Vader’s use of the force was accompanied by a rumbling sound. Anyway, this is a 45-minute masterclass in scrappy sound design.
Burtt was the sound designer for the Indiana Jones trilogy, E.T. (he got the voice from an old woman he met who smoked Kool cigarettes), and did the voice for Wall-E. He’s also a big reason why you hear the Wilhelm scream in lots of movies.↩
From a data sonification project that tells the story of Richard Nixon’s downfall using sound effects from The Legend of Zelda:

The sound effects mostly represent actions the protagonist Link takes like the “sword slash”, things that happen to him like a grunt when he gets hurt, or the status of the game like the low health alarm that beeps when Link has only half a “heart container” left and can only take one or two more hits before he dies and the game is over. The goal of this project is to create a piece of audio that sounds like a typical playthrough of the game and also accurately tells the story of Nixon’s fall as represented by the data.
What a cool example of using the familiar to explain or illustrate the unfamiliar. If you’ve ever played Zelda, you can clearly hear Nixon doing more and more poorly as the track goes on — he’s taking damage, the dungeon boss sound chimes in right around when Watergate is ramping up, and he’s gaining fewer hearts. It’s like he’s a novice player armed only with the wooden sword trying to defeat the level 3 dungeon without a potion…the end comes pretty quickly.
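I don’t know the project’s exact mapping from data to sound effects, but the general shape of this kind of sonification is simple: walk the time series and fire a sound event for each meaningful change. Everything in this sketch (thresholds, event names, and the numbers themselves) is made up for illustration:

```python
# Toy sonification: map changes in a time series (say, monthly approval
# ratings) to game-style sound events. Placeholder event names throughout.
def sonify(ratings):
    events = []
    for prev, curr in zip(ratings, ratings[1:]):
        delta = curr - prev
        if delta <= -5:
            events.append("enemy_hit")        # big drop: Link takes damage
        elif delta < 0:
            events.append("low_health_beep")  # small drop: warning alarm
        elif delta >= 5:
            events.append("heart_pickup")     # big gain: recover a heart
        else:
            events.append("sword_slash")      # holding steady: normal play
    return events

print(sonify([67, 62, 60, 54, 48, 31, 27, 24]))  # a made-up downward arc
```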
I am only a casual Beastie Boys fan, but I’ve been hearing nothing but really good things about their goofball memoir, Beastie Boys Book.
With a style as distinctive and eclectic as a Beastie Boys album, Beastie Boys Book upends the typical music memoir. Alongside the band narrative you will find rare photos, original illustrations, a cookbook by chef Roy Choi, a graphic novel, a map of Beastie Boys’ New York, mixtape playlists, pieces by guest contributors, and many more surprises.
The boys also went all-out on the audiobook edition, a 13-hour version of the book that’s as much a mixtape as an audiobook, read by an all-star cast of more than three dozen readers, including Beasties Mike D and Ad-Rock as well as Steve Buscemi, Elvis Costello, Chuck D, Snoop Dogg, Will Ferrell, Kim Gordon, LL Cool J, Spike Jonze, Pat Kiernan, Talib Kweli, Bette Midler, Nas, Rosie Perez, Amy Poehler, and many more.
There are a pair of excerpts on Soundcloud, the first from the book’s introduction by Ad-Rock and the second from Mike D:
Swiss artist Zimoun makes large-scale sound sculptures out of simple materials like cardboard boxes, wires, washers, tiny motors, and sticks of wood. Here are a few of his works (sound on, obviously):
I would love to see one of these installations in person sometime.
NASA’s InSight mission recently landed on Mars and like other missions before it, the lander is equipped with a camera and has sent back some pictures of the red planet. But InSight is also carrying a couple of instruments that made it possible to record something no human has ever experienced: what Mars sounds like:
Two very sensitive sensors on the spacecraft detected these wind vibrations: an air pressure sensor inside the lander and a seismometer sitting on the lander’s deck, awaiting deployment by InSight’s robotic arm. The two instruments recorded the wind noise in different ways. The air pressure sensor, part of the Auxiliary Payload Sensor Subsystem (APSS), which will collect meteorological data, recorded these air vibrations directly. The seismometer recorded lander vibrations caused by the wind moving over the spacecraft’s solar panels, which are each 7 feet (2.2 meters) in diameter and stick out from the sides of the lander like a giant pair of ears.
The sounds are best heard with a good pair of headphones.
Smart speaker news briefings didn’t get much love from users in this research. Here are some of the complaints Newman heard:
— Overlong updates — the typical duration is around five minutes, but many wanted something much shorter.
— They are not updated often enough. News and sports bulletins are sometimes hours or days out of date.
— Some bulletins still use synthesized voices (text to speech), which many find hard to listen to.
— Some updates have low production values or poor audio quality.
— Where bulletins from different providers run together, there is often duplication of stories.
— Some updates have intrusive jingles or adverts.
— There is no opportunity to skip or select stories.
Based on my experience with these devices and general trends in news and media consumption, I have a few predictions as to how this will change in the near future:
Audio news updates will get shorter and more specialized. The New York Times using The Daily as a “flash briefing” is really the ne plus ultra of cramming content not designed for smart speakers into the space. I had to pull them as a news source because of it.
Audio news updates will move from pull to push. Unless you put it on “do not disturb,” you’ll hear a news update just after it’s posted, rather than having to ask for it.
Conserve the Sound is a project aimed at the preservation of sounds from old technologies.
»Conserve the sound« is an online museum for vanishing and endangered sounds. The sounds of a dial telephone, a walkman, an analog typewriter, a pay phone, a 56k modem, a nuclear power plant, or even a cell phone keypad are partially already gone or are about to disappear from our daily life.
Accompanying the archive, people are interviewed and give insight into the world of disappearing sounds.
If you grew up in the 80s, you might remember Bronson Pinchot as Balki Bartokomous in Perfect Strangers or Serge in Beverly Hills Cop. But Pinchot has built a second career as an award-winning audiobook narrator. I recently listened to him read A Man on the Moon and while the story of the Apollo program is engrossing all by itself, his narration is fantastic. This interview of Pinchot by Jeff VanderMeer (author of the Southern Reach trilogy) is really interesting, particularly the bits about how he approaches his work.
Q: Do you have a philosophy of how to create the perfect audiobook experience?
A: I do, though, like all philosophical resolutions, I only intermittently achieve it. The essential task facing the narrator is to identify or invent a vivid personal definition of what “narrating” ought to be. I am uncomfortable with the chilliness of the word narration. It sounds very much outside the action — the voice on a National Geographic educational film intoning, “These giraffes are just learning how to mate”; or my mother, upon Audrey Hepburn’s entrance in My Fair Lady, informing the room: “She used to have such big doe eyes; what happened to her eyes?”
Simply “reading a book” aloud in an airless audio booth is the kind of mental and physical punishment only ever glimpsed in the lower section of Michelangelo’s Last Judgment. I decided early on that I should not “read” the book but “be” the book, the way I imagine Homer, in performance, “was” the Odyssey. We know he wasn’t “reading” it. In any case, if an audiobook listener doesn’t have the time to curl up with the actual physical text, he or she still yearns for, and deserves, the experience of being carried away by the author’s vision.
What’s the best conference talk/public speech you’ve seen? Topic can be anything. Just the most engaging talk you’ve been present for?
And bonus points: Is there any one particular speaker who’s so good you make an effort to see?
I’ve been to a lot of conferences and seen some very engaging speakers, but the one that sticks out most in my mind is Eloma Simpson Barnes’ performance of a Martin Luther King Jr. speech at PopTech in 2004 (audio-only here).
And so this afternoon, I have a dream. (Go ahead) It is a dream deeply rooted in the American dream.
I have a dream that one day, right down in Georgia and Mississippi and Alabama, the sons of former slaves and the sons of former slave owners will be able to live together as brothers.
I have a dream this afternoon (I have a dream) that one day, [Applause] one day little white children and little Negro children will be able to join hands as brothers and sisters.
In the Drum Major Instinct sermon given two months to the day before his assassination, King told the congregation what he wanted to be said about him at his funeral:
I’d like somebody to mention that day that Martin Luther King, Jr., tried to give his life serving others.
I’d like for somebody to say that day that Martin Luther King, Jr., tried to love somebody.
I want you to say that day that I tried to be right on the war question.
I want you to be able to say that day that I did try to feed the hungry.
And I want you to be able to say that day that I did try in my life to clothe those who were naked.
I want you to say on that day that I did try in my life to visit those who were in prison.
I want you to say that I tried to love and serve humanity.
Some of the power of Barnes’ performance is lost in the video, particularly when audio from King’s actual speeches is available online, but sitting in the audience listening to her thundering away in that familiar cadence was thrilling. I can’t imagine how it must have felt to experience the real thing.
Using a JavaScript machine learning package called TensorFlow.js, Abhishek Singh built a program that learned how to translate sign language into verbal speech that an Amazon Alexa can understand. “If voice is the future of computing,” he signs, “what about those who cannot [hear and speak]?”
The McGurk effect pairs the same speech sounds with different mouth movements, and what you see changes what you hear.
In this video, you hear the word for whatever object is on the screen (bill, mayo, pail) even though the audio doesn’t change:
And in this one, whichever word you focus on, “green needle” or “brainstorm”, that’s what you hear:
What all of these effects demonstrate is that there are (at least) two parts to hearing something. First, there’s the mechanical process of waves moving through the air into the ear canal, which triggers a physical chain reaction involving the ear drum, three tiny bones, and cochlear fluids. But then the brain has to interpret the signal coming from the ear and, as the examples above show, it has a lot of power in determining what is heard.
My kids and I listen to music in the car quite often (here’s our playlist, suggestions welcome) and when Daft Punk’s Get Lucky comes on, my son swears up and down that he hears the mondegreen “up all Mexican lucky” instead of “up all night to get lucky”. If I concentrate really hard, I can hear “Mexican lucky” but mostly my brain knows what the “right” lyric is…as does his brain, but it’s far more convinced of his version.
Update: On the topic of misheard lyrics to Get Lucky, there is this bit of amazingness: