Earlier this month I wrote a story for Ideas on an equation that describes how people move in crowds. Afterward I received an e-mail that said, in effect, if you’re interested in things that are hard to model, look at how sound waves bounce around a room.
That e-mail came from Dinesh Manocha, a computer scientist at the University of North Carolina who works on "three-dimensional audio" — creating audio for video games, movies, and virtual reality environments that mimics the way we hear sound in the real world. It's an improvement that Manocha says has been a long time coming.
“Visual effects in movies and games are stunning, but audio is about 20-30 years behind,” he says, noting that unlike visual effects, which can be computer-generated, audio effects are still physically recorded.
There are a few reasons for the lag. One is simply that we care about visual effects more — we want to see a dazzling high-definition movie, but don't mind listening to it on tinny earbuds. A second is that true-to-life audio is harder to generate. When we stand in the middle of a concert hall, sound waves strike our left and right ears differently, and what we end up hearing depends on subtleties like the shape of our heads and the exact direction the sound waves are traveling when they reach us. Plus, there are all sorts of environmental effects that change the way a note from a trumpet moves — or propagates — through space, long before we register it as music.
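One of those left-ear-versus-right-ear subtleties can be put in concrete terms. The sketch below — my illustration, not anything from Manocha's lab — uses Woodworth's classic spherical-head approximation to estimate the interaural time difference, the tiny gap between a sound's arrival at the near ear and the far ear, for a few directions. The head radius is a textbook stand-in, not a measured value.

```python
# A rough illustration of one binaural cue: the interaural time
# difference (ITD), the gap between a sound's arrival at the near ear
# and the far ear. Uses Woodworth's spherical-head approximation;
# the head radius here is an illustrative textbook value.
import math

def itd_seconds(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Woodworth approximation: ITD = (a / c) * (theta + sin(theta)),
    for azimuths from 0 (straight ahead) to 90 degrees (due side)."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

for az in (0, 30, 60, 90):
    print(f"{az:2d} degrees -> {itd_seconds(az) * 1e6:6.1f} microseconds")
```

Even at its maximum — a sound coming from directly beside you — the difference is only about 650 millionths of a second, yet the brain uses it to locate the source, and realistic 3-D audio has to get it right.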
“Audio sounds in movies and games are pre-recorded with simple manipulations like increasing the bass, but the echoes and diffractions are not right,” Manocha says. “We need a simple, automatic technology that overcomes that barrier for propagation.”
To do this, Manocha's lab, which he runs with his colleague Ming Lin, is creating software that uses the wave equation from physics to model how sounds move in different environments. The software takes into account complexities like the materials the sound waves interact with (a shiny window reflects more sound than a rough carpet), the rate at which a sound-emitting object is moving relative to the person who hears it (the Doppler effect), and differences between how sound strikes your (virtual) left and right ears. Each of these calculations is relatively simple to perform in isolation, given plenty of time, but they're extremely difficult to perform together at the real-time pace of a video game.
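To give a flavor of the physics involved — and this is a simplified illustration, not the lab's software — the sketch below solves the one-dimensional wave equation with a standard finite-difference scheme and models a wall's material as a single reflection coefficient, crudely blending a fully reflective boundary with an absorbing one. Even this toy version has to respect a stability condition on the time step, a hint of why full three-dimensional rooms are so expensive to simulate in real time.

```python
# A minimal sketch (not Manocha's actual code): a 1-D finite-difference
# solver for the wave equation u_tt = c^2 * u_xx, with one wall whose
# "material" is summarized by a reflection coefficient R between
# 0 (absorbing, carpet-like) and 1 (hard, glass-like). All parameters
# here are illustrative assumptions.
import numpy as np

c = 343.0          # speed of sound in air, m/s
L = 10.0           # length of the 1-D "room", m
nx = 500           # number of grid points
dx = L / (nx - 1)
dt = 0.9 * dx / c  # time step chosen so the Courant number C < 1 (stability)
C = c * dt / dx

R = 0.8            # reflection coefficient of the right wall

u_prev = np.zeros(nx)   # wave field at the previous time step
u = np.zeros(nx)        # wave field at the current time step
u[nx // 4] = 1.0        # initial pressure pulse (a "clap")
u_prev[:] = u           # start from rest

for step in range(2000):
    u_next = np.zeros(nx)
    # standard second-order update for interior points
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + C**2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    # left wall: fixed end (u = 0), which reflects the pulse back inverted
    u_next[0] = 0.0
    # right wall: blend a first-order Mur absorbing boundary (which lets
    # the wave pass out of the domain) with a fixed end, weighted by R.
    # A crude stand-in for a material model, not exact physics.
    mur = u[-2] + (C - 1) / (C + 1) * (u_next[-2] - u[-1])
    u_next[-1] = (1 - R) * mur
    u_prev, u = u, u_next

print("peak amplitude after reflections:", np.abs(u).max())
```

A real room is three-dimensional, its walls have frequency-dependent properties, and the grid must be fine enough to resolve audible wavelengths — which is why doing this at the frame rate of a game is a serious research problem.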
“Physics tells us how we do it fast in huge environments like a city,” Manocha says.
Two of Manocha’s former students have started a company, Impulsonic, that provides three-dimensional audio technology to video game designers and architects (who might want to test the sound properties of building features). There’s still a lot of work to do, though, before simulated environments sound as good as they look. Researchers like Manocha are working on expanding the database of synthetic noises — the raw material behind sound propagation in virtual environments — and they have their eyes, or ears, set on one group of sounds in particular.
“Many sound effects in nature we can’t simulate,” Manocha says. “We want to figure out how these ambient sounds are generated.”
Kevin Hartnett is a writer in South Carolina. He can be reached at kshartnett18@gmail.com.