Immersive spaces can look strikingly real, but they often still sound flat. That can disrupt immersion in games and social applications, while in architecture, engineering, and construction (AEC), understanding how a space sounds can be crucial to the design process. That’s why Treble is working on realistic sound reproduction for virtual spaces.
We spoke with Treble CEO Finnur Pind about the opportunities and obstacles in believable immersive sound in enterprise and beyond.
Sound Simulation and Rendering
A conversation inside a car can sound a lot different than a conversation in your living room. A conversation in your living room can sound a lot different than a conversation in an auditorium. And if you’re trying to follow that conversation with an assistive device like hearing aids, the situation becomes even more complicated.
Right now, a conversation in any of those spaces, recreated in a virtual environment, probably sounds about the same. Designers can include ambient sounds like water, wind, or a crackling fire, as they often do for games, but the sonic profile of the environment itself is difficult to replicate.
That’s because sound is vibration traveling through the air. Each physical environment absorbs and reflects those vibrations in its own way, depending on its geometry and materials. Virtual environments, however, have no physical properties, and their sound is conveyed electronically rather than acoustically.
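To make that concrete, here is an illustrative sketch (not Treble’s method): Sabine’s classic formula estimates a room’s reverberation time, RT60, from its volume and how strongly each surface absorbs sound. It is why the same conversation decays differently in a carpeted living room than in a glass-and-concrete hall. The areas and absorption coefficients below are hypothetical.

```python
def rt60_sabine(volume_m3, surfaces):
    """Estimate reverberation time (seconds) with Sabine's formula.

    surfaces: list of (area_m2, absorption_coefficient) pairs.
    """
    total_absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / total_absorption  # 0.161 is Sabine's metric constant

# Hypothetical living room: carpet absorbs far more than drywall or glass.
living_room = rt60_sabine(
    40.0,            # room volume in cubic meters
    [(20.0, 0.30),   # carpeted floor
     (50.0, 0.10),   # drywall walls and ceiling
     (4.0, 0.05)],   # window glass
)
# Swap in harder materials (lower absorption) and the estimate grows,
# i.e. the same room sounds more reverberant.
```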
The closest we’ve come to real immersive sound is “spatial audio.” Spatial audio conveys where a sound is coming from and how far away it is by manipulating the level and timing of each channel, but it still doesn’t account for environmental factors. That doesn’t mean spatial audio isn’t good enough: it does what it does, and it plays a part in “sound simulation and rendering.”
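As a rough sketch of the volume-based part of spatial audio (simplified; production systems also use head-related transfer functions and filtering), constant-power panning splits a mono source between the left and right channels, and a simple inverse-distance gain conveys how far away it is:

```python
import math

def pan_gains(position):
    """Constant-power pan: position runs from -1.0 (left) to +1.0 (right)."""
    angle = (position + 1.0) * math.pi / 4.0   # map position to 0..pi/2
    return math.cos(angle), math.sin(angle)    # (left_gain, right_gain)

def distance_gain(distance_m, reference_m=1.0):
    """Simple inverse-distance attenuation beyond a reference distance."""
    return reference_m / max(distance_m, reference_m)

# A centered source plays equally in both ears, and total power stays
# constant as it pans, because left**2 + right**2 == 1 at any position.
left, right = pan_gains(0.0)
```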
Sound simulation and sound rendering are “two sides of the same coin,” according to Pind. The process, which has roots in academic research predating Treble’s founding in 2020, involves simulating the acoustics of an environment and rendering the sound it produces.
How Treble Rethinks Virtual Sound
“Solving the mathematics of sound has been developed for some time but it never found practice because it’s too computationally heavy,” said Pind. “What people have been doing until now is this kind of ray-tracing simulation. … It works up to a certain degree.”
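Pind is contrasting geometric ray tracing with solving the wave equation itself. As a minimal illustration of what “wave-based” means (a 1-D toy, nothing like a production 3-D solver), a finite-difference time-domain (FDTD) scheme steps a pressure field forward in time:

```python
def fdtd_1d_step(p_prev, p_curr, courant=0.5):
    """One leapfrog update of the 1-D acoustic wave equation.

    p_prev, p_curr: pressure at the two previous time steps.
    courant: c * dt / dx, which must stay <= 1 for stability.
    Endpoints are pinned to zero here, a crude reflecting boundary;
    real solvers model frequency-dependent absorption per material.
    """
    n = len(p_curr)
    p_next = [0.0] * n
    for i in range(1, n - 1):
        laplacian = p_curr[i - 1] - 2.0 * p_curr[i] + p_curr[i + 1]
        p_next[i] = 2.0 * p_curr[i] - p_prev[i] + courant**2 * laplacian
    return p_next

# An initial pressure bump spreads outward as a wave and reflects off
# the boundaries -- behavior that ray tracing only approximates.
field = [0.0] * 11
field[5] = 1.0
after_one_step = fdtd_1d_step(field[:], field[:])
```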
Treble uses a “wave-based approach” that accounts for the source of the audio, as well as the geometry of the space and the physical properties of the building material. If a virtual space includes fantastical or unspecified materials, the company assigns them the physical characteristics of a known real-world material.
So far, that situation rarely arises because, while Pind is open to Treble working with entertainment and consumer applications, the company is mainly focused on enhancing digital design models for the AEC industry.
“It’s not just seeing what your building will look like, but hearing what your building will sound like,” said Pind. “As long as you have a 3D building model … our platform connects directly, understands the geometry, building models, and sound sources.”
Pind says that the concept may one day have applications in augmented reality and mixed reality as well. In a platform like Microsoft Mesh or Varjo Reality Cloud, where users essentially share or exchange surroundings via VR, recreating one user’s real space as another user’s virtual space could greatly aid immersion and realism.
“Research has shown that having realistic sound in a VR environment improves the immersion,” said Pind. “In AR it’s more the idea of being in a real space but having sound augmented.”
Machine Learning, R&D, and Beyond
As strange as it may sound, the approach also works in reverse. Instead of recreating a physical environment, Treble can create sound profiles for physically plausible spaces that don’t exist – and may never exist. Why? To model how sound would behave in those environments. It’s an approach called “synthetic data generation.”
“AI is kind of the talk of the town these days and one of the major issues of training AI is a lack of data,” said Pind. Training AI to work with sound requires a lot of audio which, historically, had to be sourced from physical equipment transported and set up in physical environments. “Now they’re starting to come to us to synthetically generate it.”
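In outline (an assumed workflow, not Treble’s actual pipeline), synthetic audio data generation works by convolving a clean “dry” recording with a simulated room impulse response, so a single utterance becomes a labeled training example for every simulated room:

```python
import random

def convolve(dry, impulse_response):
    """Direct convolution: the dry signal smeared by the room's response."""
    out = [0.0] * (len(dry) + len(impulse_response) - 1)
    for n, x in enumerate(dry):
        for k, h in enumerate(impulse_response):
            out[n + k] += x * h
    return out

def toy_impulse_response(length=8, decay=0.5, seed=0):
    """Exponentially decaying noise: a crude stand-in for a simulated room."""
    rng = random.Random(seed)
    return [rng.uniform(-1.0, 1.0) * decay**i for i in range(length)]

# One dry clip, many virtual rooms: each response yields a new example
# without ever hauling microphones into a physical space.
dry_clip = [1.0, 0.0, -0.5]
wet_clip = convolve(dry_clip, toy_impulse_response(seed=1))
```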
This same approach is increasingly being used to test audio hardware ranging from hearing aids to XR headsets.
Sounds Pretty Good
Pind thinks the idea of using sound simulation and rendering for things like immersive concerts is interesting, even though that’s not what Treble does right now. It’s a resource already in the hands of forward-thinking companies, and one that may soon reach an XR venue in your headset.