Thursday, August 16th 2018
NVIDIA Does a TrueAudio: RT Cores Also Compute Sound Ray-tracing
Positional audio, like Socialism, follows a cycle of glamorization and investment every few years. Back in 2011-12, when AMD held a relatively stronger position in the discrete GPU market and enjoyed GPGPU superiority, it gave a lot of money to GenAudio and Tensilica to co-develop the TrueAudio technology, a GPU-accelerated positional audio DSP, which ended up with a whopping four game implementations, including and limited to "Thief," "Star Citizen," "Lichdom: Battlemage," and "Murdered: Soul Suspect." The TrueAudio Next DSP, which debuted with "Polaris," introduced GPU-accelerated "audio ray-casting" technology, which assumes that audio waves interact differently with different surfaces, much like light does, so positional audio can be made more realistic. There were a grand total of zero takers for TrueAudio Next. Riding on the presumed success of its RTX technology, NVIDIA wants to develop audio ray-tracing further.
A very curious sentence caught our eye on NVIDIA's micro-site for Turing. The description of the RT cores reads that they are specialized components that "accelerate the computation of how light and sound travel in 3D environments at up to 10 Giga Rays per second." This is an ominous sign that NVIDIA is developing a full-blown positional audio programming model as part of RTX, with an implementation through GameWorks. Such a technology, like TrueAudio Next, could improve positional audio realism by treating sound waves like light and tracing their paths from their origin (think speech from an NPC in a game) to the listener as the sound bounces off the various surfaces in the 3D scene. Real-time ray-tracing(-ish) has so captured the imagination of NVIDIA marketing that the company is allegedly willing to replace "GTX" with "RTX" in its GeForce GPU nomenclature. We don't mean to doomsay emerging technology, but 20 years of development in positional audio has shown that it's better left to game developers to create their own technology that sounds somewhat real, and that initiatives from makers of discrete sound cards (a device on the brink of extinction) and GPU makers have borne no fruit.
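For readers unfamiliar with the idea, here is a minimal sketch of the general principle, not NVIDIA's implementation: the classic image-source trick mirrors a sound source across a reflecting surface, and the mirrored path's length gives the echo's delay and distance attenuation. All positions and the wall-absorption factor below are assumed purely for illustration.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air

def path(source, listener):
    """Return (delay in seconds, simple 1/r distance falloff) for a straight path."""
    d = math.dist(source, listener)
    return d / SPEED_OF_SOUND, 1.0 / max(d, 1.0)

source, listener = (0.0, 1.5), (8.0, 1.5)           # positions in metres (assumed)
wall_x = 12.0                                        # reflecting wall at x = 12 m
image_source = (2 * wall_x - source[0], source[1])   # source mirrored across the wall

direct_delay, direct_gain = path(source, listener)
echo_delay, echo_gain = path(image_source, listener)
echo_gain *= 0.8                                     # assumed wall absorption

print(f"direct: {direct_delay*1000:.1f} ms, gain {direct_gain:.2f}")
print(f"echo:   {echo_delay*1000:.1f} ms, gain {echo_gain:.2f}")
```

A full solution would repeat this across many surfaces and reflection orders per frame, which is the kind of highly parallel ray work a GPU would be tasked with.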
39 Comments on NVIDIA Does a TrueAudio: RT Cores Also Compute Sound Ray-tracing
Flat my ass.
P.S.
I really miss the awesome 3D sound from games such as Unreal, Thief, FEAR, Stalker, Mass Effect 1, Bioshock, Battlefield 2 (the explosions and bullet sounds were incredible), etc, etc. I do hope I can experience similar environments again.
The outliers were massive dies, and those needed high prices for sweet profit. The 8800 Ultra demanded it based on performance alone. Now you're getting fractional performance increases at increased prices while their margins increase. Kinda reminds me of RAM/flash... Meanwhile, on the CPU side...
But yeah, good audio is long gone. Multiple entities are to blame for that. Microsoft took a good chunk out of it starting with Vista; XP was the last OS that allowed true hardware audio processing. I'm not even sure if having a sound card offloads sound processing from the CPU and frees up CPU clock cycles anymore. HDMI did a number on it as well. As it stands, we have good old analog, which is quite outdated and not really supported anymore; S/PDIF (coaxial or optical, usually optical), which is still hanging around but only good for uncompressed stereo, or up to 5.1 via DTS; and HDMI, which can handle a ton of uncompressed audio but must have a video signal to go with it. That means if you want HDMI PC audio, you've got to have a receiver that can handle whatever screen you want. If you like 120 Hz or better, or worse yet, 4K at 120 Hz or better, you're gonna need one hell of a receiver to handle that.
I'd rather be reading about a new kind of high-end audio connectivity that doesn't need to piggyback off anything like HDMI does. I guess these days that's called a good sound card and a receiver with plenty of analog inputs...
docs.microsoft.com/en-us/windows-hardware/drivers/audio/windows-audio-architecture
"Audio Engine" is what I'm talking about. The only thing *anything* can change is the "Audio Effects." Everything else is fundamentally a multi-channel PCM signal.
I believe ray-traced audio is applied via an "audio effect" in the "audio engine." The game engine takes information about the environment and tweaks the audio effect, which is played back in real time using GPU compute code.
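In other words, something like the minimal sketch below, assuming the GPU side hands back an impulse response for the current scene; all function names here are hypothetical, just illustrating the split between the GPU-side trace and the effect stage.

```python
import numpy as np

def trace_impulse_response(length=4800):
    """Stand-in for the GPU ray-casting step: returns a decaying impulse response."""
    t = np.arange(length)
    ir = np.zeros(length)
    ir[0] = 1.0                                                  # direct path
    ir += 0.3 * np.exp(-t / 1200.0) * np.random.randn(length)    # diffuse reflections
    return ir

def apply_audio_effect(dry_pcm, impulse_response):
    """The 'audio effect' stage: convolve the dry source with the traced IR."""
    wet = np.convolve(dry_pcm, impulse_response)[: len(dry_pcm)]
    return np.clip(wet, -1.0, 1.0)

# Per frame, the game engine would re-trace the IR for the current scene
# and hand the processed result to the OS audio engine as ordinary PCM.
dry = np.sin(2 * np.pi * 440 * np.arange(48000) / 48000)   # 1-second test tone
wet = apply_audio_effect(dry, trace_impulse_response())
```

The point is that by the time the signal leaves the effect stage it is ordinary multi-channel PCM, which is why nothing downstream needs to know where it came from.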
But a nearly $1k 8800 Ultra? That's fine. Because hey, it's old and everything was difficult back then. As if Nvidia totally didn't have 7 generations prior to it in the chart.
Your logic is awesome. Nvidia does obviously play around with pricing, but it simply isn't true that their GPUs get 'more expensive' all the time. Pascal saw a price bump; prior to that, nothing tangible.
It stands to reason that you wouldn't want *any* environmental audio effects enabled in the audio engine or in the hardware if you're using ray-traced audio, because the sounds are already pre-processed for environmental effects.
The sound card has no idea that the PCM it is receiving was ray traced unless something is added to the audio stack that flags it as such.
It needs to be duly noted that ray tracing sound is far more complex than ray tracing light, because sound is a pressure wave, not radiation. Pressure-wave calculations need to account for density (of both the air and the object) rather than just reflectivity. Where ray tracing light can work off the meshes games already use (you just need to assign a reflectivity to each mesh), ray tracing sound requires (unless they cheat, but then what's the point?) 3D modeling of spaces the player can't see. For example, the cavities in a studded, sheetrock wall will create a different sound profile than a wall of bricks. Further, the thickness of both can hugely vary what the sound is like, especially on the other side.
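As a back-of-the-envelope illustration of the density point, here is a quick sketch using the standard mass-law approximation for transmission loss through a single wall; the surface densities are assumed, not taken from any game.

```python
import math

def transmission_loss_db(frequency_hz, surface_density_kg_m2):
    # Mass-law approximation: TL ~= 20*log10(f * m) - 47 dB
    return 20 * math.log10(frequency_hz * surface_density_kg_m2) - 47

walls = {
    "studded sheetrock (~10 kg/m^2, assumed)": 10,
    "solid brick (~200 kg/m^2, assumed)": 200,
}
for name, density in walls.items():
    print(f"{name}: ~{transmission_loss_db(1000, density):.0f} dB loss at 1 kHz")
```

The brick wall blocks roughly 25 dB more at the same frequency, purely because of its mass, which is information a light-oriented mesh plus reflectivity value simply doesn't carry.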
Imagine 6 panes of glass in a line, one after the other, in a sound deadening box. You clap your hands in front of the first while recording the sound after each one. Each pane of glass makes it sound different because sound is vibrations. By the time you reach the last pane, there is nothing at all. Simply designing a house with double or triple paned glass, gas filled or not, noticeably changes the sound profile.
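To put rough numbers on that thought experiment (the 25 dB per-pane figure is purely assumed), losses through successive panes add in decibels, so the amplitude collapses quickly:

```python
per_pane_loss_db = 25            # assumed attenuation per pane of glass
level_db = 0.0                   # hand clap referenced to 0 dB
for pane in range(1, 7):
    level_db -= per_pane_loss_db
    amplitude = 10 ** (level_db / 20)
    print(f"after pane {pane}: {level_db:.0f} dB ({amplitude:.8f}x amplitude)")
```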
Said differently: ray tracing light is very expensive on hardware but relatively cheap for developers. Ray tracing sound is relatively cheap on hardware but ridiculously expensive for developers to properly implement. In both cases, this is tech that really needs to be integrated into game engines before it can be widely used. Unreal Engine 4 can take care of the sound problems via templates.