
Qualcomm Adds Bluetooth Lossless Audio Technology to Snapdragon Sound

I thought the main reason to use 24-bit or 32-bit floating point was to prevent clipping during recording, and likewise a higher sampling rate lets you downsample the audio to commonly used formats more accurately (the whole idea that more data is never a bad thing when it comes to conversion later). I had read somewhere, though I'm not sure it's true, that higher sampling rates can produce harmonic distortion outside the audible range, but I'm not sure I buy that. I do have some 96 kHz/24-bit content and the fidelity is fantastic, but in reality, if it were downsampled, I would probably not notice a difference. The real difference is that it's 5.1 (6-channel) and lossless, which makes for a great-sounding track.
There is a good point about recording and editing being done at higher bit depths and sampling rates, which has to do with clipping headroom and the ease of capturing high frequencies. For example, if you record at just 16 bits and then need to raise the gain digitally by 30 dB when editing the track, the noise floor ends up at around -66 dB, which someone could plausibly hear. If you recorded at 24 bits, the quantization noise is still inaudible, below -100 dB. This means you can set the recording gain a bit lower to prevent clipping and not have to deal with unnecessary quantization noise even if you need to push the levels up a lot later on (the sketch at the end of this post walks through the arithmetic).

For consumption though (as is the case with aptX Lossless), it does not make sense at all to deliver at higher than CD quality. Surround is of course good if you like it, and lossless is a mandatory thing IMO.
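A minimal back-of-the-envelope sketch of that arithmetic (Python; the function names are my own, and it assumes the common rule of thumb that an ideal quantizer's noise floor sits roughly 6 dB per bit below full scale):

```python
# Rule-of-thumb quantization noise floor: about -6.02 dB per bit of depth.
# (The textbook SNR figure for a full-scale sine adds another ~1.76 dB.)

def quantization_noise_floor_db(bits: int) -> float:
    """Approximate noise floor of an ideal quantizer, in dB below full scale."""
    return -6.02 * bits

def floor_after_digital_gain_db(bits: int, gain_db: float) -> float:
    """Noise floor after boosting a too-quiet recording digitally in editing."""
    return quantization_noise_floor_db(bits) + gain_db

for bits in (16, 24):
    print(f"{bits}-bit: floor {quantization_noise_floor_db(bits):.0f} dB, "
          f"after +30 dB gain {floor_after_digital_gain_db(bits, 30.0):.0f} dB")
# 16-bit: floor -96 dB, after +30 dB gain -66 dB   (potentially audible)
# 24-bit: floor -144 dB, after +30 dB gain -114 dB (still well out of earshot)
```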
 
#Facepalm
Explain? I'm intentionally not including analog, as it always loses data to noise in transmission.

I admittedly am not on the cutting edge of audio tech, though.
 
Trust me, there is a clear difference between 16-bit/44.1 kHz and 24-bit/96 kHz. I'm speaking from experience; there are a lot of music samples out there.
BUT, unless you have decent, quality speakers with a very wide range, you are correct: you won't distinguish a thing.
Which for BT audio is really out of scope, not only for bandwidth reasons but because of how it's used, outdoors. Even for people who drive their massive, energy-sucking headphones from massive DACs in their phones on the street, all the ambient noise and the nature of two insulated cans on your ears just ruin it, no matter how much they defend it.
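For a sense of the bandwidth side of that, here's a quick sketch (Python; my own illustration, not any codec's spec) of the raw PCM bitrates involved:

```python
# Uncompressed PCM bitrate = sample rate x bit depth x channels.
def pcm_bitrate_kbps(sample_rate_hz: int, bit_depth: int, channels: int = 2) -> float:
    return sample_rate_hz * bit_depth * channels / 1000.0

print(f"16-bit / 44.1 kHz stereo (CD): {pcm_bitrate_kbps(44_100, 16):.1f} kbps")  # 1411.2
print(f"24-bit / 96 kHz stereo:        {pcm_bitrate_kbps(96_000, 24):.1f} kbps")  # 4608.0
```

Even with lossless compression, which only roughly halves those numbers on typical music, 24/96 sits far beyond what a Bluetooth audio link can carry reliably, while CD quality is right at the edge of feasible, which is the territory aptX Lossless is reportedly aiming for.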

I do have some 96 kHz/24-bit content and the fidelity is fantastic, but in reality, if it were downsampled, I would probably not notice a difference. The real difference is that it's 5.1 (6-channel) and lossless, which makes for a great-sounding track.
The only case where above-16/44 music truly shines is surround in specific room setups. After years of watching people ardently defend that anything below 24/96 is rubbish, I concluded that what most people do is justify their purchases to themselves. I've done several 24/96 vs 16/44 listening tests with different albums over the years, and yes, you might find some differences, but not the life-changing ones audiophiles claim, and even those go out the window when you start doing other activities while listening to music, which is what we all do.
 
Differences in mastering, to be exact... or in how your DAC/the rest of the digital signal pipeline handles mixed bit depths and/or sampling frequencies.
Yeah, although it's very hard to notice and depends on the (high-end) setup, the use case and the person listening. As I said, I really have to concentrate to find differences on my setup, and I mostly listen to music while doing other things, so my attention to that fine detail goes out the window. My point is that CD quality, or equivalent lossless, is pretty much the ceiling for 99% of uses, and criticizing BT codecs for not going beyond 16/44 is quite pointless given how BT audio is usually used. And nobody who can appreciate 24/96 is going to try it over BT anyway. Aiming for full CD quality without reducing the bitrate, while using as little power as possible, is more than enough for now.
 
it's very hard to notice
Truly. If it were possible to notice, we'd have at least one person able to discern the difference in an A/B blind test. I've yet to see anyone succeed.

If you do heavy post-processing (room correction, etc.) it might in theory make sense to go higher, but even then you'd need to listen at absurd volumes to hear the quantization noise.
 
Truly. If it were possible to notice, we'd have at least one person able to discern the difference in an A/B blind test. I've yet to see anyone succeed.

If you do heavy post-processing (room correction, etc.) it might in theory make sense to go higher, but even then you'd need to listen at absurd volumes to hear the quantization noise.
I am about to turn 60. In my 20s I discoed like there wasn't going to be a tomorrow. I have, however, never used earphones on any regular basis except for telephony, and at low volume. I can hear a difference in the test often enough for the results to be statistically significant. My 22-year-old son also has perfect hearing and prefers really good headphones. He can also hear the difference and get it right more often than not. This is not even on audiophile speakers or at high volume. I would argue that some people have better hearing than others; this kind of variance is known to happen in visual perception. If you disagree, I challenge you to name the extensive auditory study that proves your point.
 
I am about to turn 60. In my 20s I discoed like there wasn't going to be a tomorrow. I have, however, never used earphones on any regular basis except for telephony, and at low volume. I can hear a difference in the test often enough for the results to be statistically significant. My 22-year-old son also has perfect hearing and prefers really good headphones. He can also hear the difference and get it right more often than not. This is not even on audiophile speakers or at high volume. I would argue that some people have better hearing than others; this kind of variance is known to happen in visual perception. If you disagree, I challenge you to name the extensive auditory study that proves your point.
That test is for discerning 8-bit from 16-bit, which is definitely audible. In order to benefit from 24-bit music, you'd need to pass a blind test for noise at -96 dB, as otherwise you can't hear the quantization noise, which is the only reason one could go for higher bit depth content.
Here you can test whether you can hear noise at a meager -78 dB. Don't kill your ears.

If you claim to be able to pass it, please capture your attempt on video using a mobile phone. :)
After that we can try to find/make a test for the -96 dB noise floor, which is essentially beyond human capability even in extremely good listening conditions.

As for your request for a study on the matter, this is a classic: http://drewdaniels.com/audible.pdf

While the human ear can in theory cover a range of 0-120 dB SPL, and CD audio has a quantization noise floor 'just' 96 dB below full scale, you can only hear that quantization noise if it rises above your environment's own noise floor. In a typical home environment the ambient noise floor is around 30 dB SPL, which means you'd need to set peaks to around 130 dB to even in theory be able to hear the quantization noise, and music at 130 dB is very unhealthy, causing immediate pain and possibly hearing loss.

The only thing 24-bit audio does is lower the quantization noise floor to -144 dB, nothing more and nothing less. To be able to perceive the difference, you'd have to risk losing your hearing.
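If someone wants to try that kind of noise-audibility test at home, here's a rough, hypothetical sketch (Python with numpy/scipy, not the linked test itself; the filenames and levels are made up) that writes two short clips, a bare tone and the same tone with broadband noise added at a chosen level below full scale:

```python
import numpy as np
from scipy.io import wavfile

SAMPLE_RATE = 44_100      # Hz
DURATION_S = 5.0
TONE_HZ = 440.0
TONE_LEVEL_DBFS = -20.0   # keep the tone itself well below full scale
NOISE_LEVEL_DBFS = -78.0  # level of the added noise, relative to full scale

def dbfs_to_linear(dbfs: float) -> float:
    return 10.0 ** (dbfs / 20.0)

t = np.arange(int(SAMPLE_RATE * DURATION_S)) / SAMPLE_RATE
tone = dbfs_to_linear(TONE_LEVEL_DBFS) * np.sin(2 * np.pi * TONE_HZ * t)
noise = dbfs_to_linear(NOISE_LEVEL_DBFS) * np.random.default_rng(0).standard_normal(t.size)

# Write both clips as 16-bit PCM; have someone else pick the playback order (blind).
for name, signal in (("tone_clean.wav", tone), ("tone_plus_noise.wav", tone + noise)):
    wavfile.write(name, SAMPLE_RATE, (np.clip(signal, -1.0, 1.0) * 32767).astype(np.int16))
```

Whether the noise is audible at all still depends on the playback level and the room, which is exactly the 30 dB + 96 dB arithmetic above.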
 
Trust me, you can't. Prove me wrong by posting any well-made test where the result is as you claim. You simply stating that you can hear a difference means very little.

In the test, the lower-resolution audio clip needs to be produced directly from the high-resolution clip using best practices for downsampling, and the test needs to be double-blind. The DAC needs to run at the same bit depth and sample rate for both clips, with the lower-resolution clip upsampled using best practices on the source device; otherwise the DAC may be the piece in the signal chain that produces differing outputs (a rough sketch of this prep follows below).

In many cases, music files released at higher resolutions are also mastered differently from the lower-resolution versions of the same piece, which explains most of the people who claim to hear a difference.
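As a rough illustration of that "best practices" prep (my own sketch, assuming numpy/scipy; in practice a tool like SoX would normally do this), one way to derive the 16/44.1 clip directly from a 24/96 source is to resample by the exact 147/320 ratio and requantize to 16 bits with TPDF dither, then upsample it back so the DAC sees one sample rate for both clips:

```python
import numpy as np
from scipy.signal import resample_poly

HI_RATE, LO_RATE = 96_000, 44_100   # 96 kHz -> 44.1 kHz is exactly 147/320
UP, DOWN = 147, 320

def to_16bit_with_tpdf_dither(x: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    step = 1.0 / 32768.0                                       # one 16-bit quantization step
    tpdf = (rng.random(x.size) - rng.random(x.size)) * step    # triangular dither, +/- 1 LSB
    return np.clip(np.round((x + tpdf) * 32767.0), -32768, 32767).astype(np.int16)

rng = np.random.default_rng(0)

# Stand-in for a 24/96 master: a -12 dBFS, 1 kHz tone (load the real file in practice).
t = np.arange(HI_RATE * 5) / HI_RATE
master_96k = 0.25 * np.sin(2 * np.pi * 1000.0 * t)

# Downsample (resample_poly applies the anti-aliasing filter), then dither down to 16 bits.
clip_44k_16bit = to_16bit_with_tpdf_dither(resample_poly(master_96k, UP, DOWN), rng)

# For the A/B test itself, upsample the 16/44.1 clip back to 96 kHz on the source device,
# so the DAC runs at the same rate for both clips and isn't the thing being compared.
clip_back_at_96k = resample_poly(clip_44k_16bit / 32767.0, DOWN, UP)
```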
By any chance, do you have a sound card capable of providing at least 24-bit, 96 kHz? You need this setting in Windows if you want to listen to very high quality sound files:
[Screenshot: Windows sound playback format setting]
 
By any chance, do you have a sound card capable of providing at least 24-bit, 96 kHz? You need this setting in Windows if you want to listen to very high quality sound files.
I do. What does that have to do with physics? No way in hell am I going to listen to music with peaks set to 130 dB (assuming a 30 dB room noise floor). And since my volume is set lower than that, moving up from 16-bit does _absolutely nothing_. Just link a study on the matter where the opposite is true, if you still think there is any meaning in 24-bit audio for _consumption_.
 
I do. What does that have to do with physics? No way in hell am I going to listen to music with peaks set to 130 dB (assuming a 30 dB room noise floor). And since my volume is set lower than that, moving up from 16-bit does _absolutely nothing_. Just link a study on the matter where the opposite is true, if you still think there is any meaning in 24-bit audio for _consumption_.
I've been busy with other matters; I will respond to your challenge, but not today.
 