
An "Audiophile Grade" SSD—Yes, You Heard That Right

Joined
Jan 28, 2021
Messages
853 (0.61/day)
I did read it. No group got even close to the loose p=0.05 criterion for picking out anything but the 10-year-old crappy motherboard. If you are saying otherwise, at least link to the correct place in the ’study’.
If you read the author's conclusions and look at the subsets of the test for those who work in the audio field with a technical background, are musicians, or are professional reviewers, it starts to paint a picture of the highest-end DAC (the Oppo) being the best, followed closely by the Sony CD player. You can see the same thing as the scale of the quality of the gear increases and, interestingly, between those who tested with speakers vs. headphones.



Are the results statistically meaningful enough to prove it by any scientific standard? No, but that doesn't mean there isn't anything there; it just means you need better tests if your goal is to prove it.
Async USB audio has been present for at least 10 years with the default driver (since Windows Vista). It is part of even Class 1 USB audio.
I don't think that's correct. You need USB Audio Class 2.0 to support async, and that only made its way into Windows 10 in 2017.
And you can’t prove a negative.
Exactly. This is why blind studies are the way to go.
I think you are missing the point. The conclusions you draw are only as good as the test you conduct, the data you collect, and how you interpret it. It's pretty common for well-conducted scientific tests to draw misleading or incorrect conclusions through no fault at all in how the test was run. It happens all the time in much bigger, well-funded studies where the stakes are much higher than something as trivial as audio.
Why wouldn’t you take it seriously? The audible effects are, for all we know, in the same range as the USB cable differences. Absence of proof is not proof of absence.
Displays are electrically noisy; that's not really disputed, and it's why notebooks are often not recommended for use as streaming devices.

The idea of 'high-end' wallpapers is absurd, because if the display is causing a problem you'd just turn it off. If a cable is the problem you could turn it off by unplugging it, but then, well.....
 
Joined
Oct 15, 2019
Messages
585 (0.31/day)
If a cable is the problem you could turn it off by unplugging it but then well.....
Bluetooth works ;)


I don't think that's correct. You need USB Audio Class 2.0 to support async, and that only made its way into Windows 10 in 2017.
You are incorrect. Async audio is supported by usbaudio.sys. There is no need for the 2.0 release for that feature to work. And even if it were as you state, it would still have been part of the de-facto feature set for five years already. Quite far from your original statement that it required proprietary drivers…
You also stated that they are an uncommon variety, while most if not all high-end DACs use async mode.

Are the results statistically meaningful enough to prove it by any scientific standard? No, but that doesn't mean there isn't anything there; it just means you need better tests if your goal is to prove it.
So you agree that the test does not indicate that the results you claimed are meaningful. Go ahead and provide better tests, I’m waiting.

And if electrical USB cable interference were an actual thing, it would be easy to prove by just doing a blind study comparing it to an optical input. In a device that is not faulty there is no difference in any measurements, but maybe the human ear can do what no machine can and determine which cable is used.
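Since that p=0.05 criterion keeps coming up, here is a minimal sketch of the arithmetic behind it for a forced-choice blind trial, in plain Python; the 12-out-of-16 score below is an invented example, not a figure from the study:

```python
# One-sided binomial test for an ABX-style blind trial. Under the null
# hypothesis (no audible difference), every trial is a 50/50 guess, so
# the p-value is the probability of scoring `correct` or more out of
# `trials` by luck alone.
from math import comb

def binomial_p_value(correct: int, trials: int, chance: float = 0.5) -> float:
    """P(X >= correct) for X ~ Binomial(trials, chance)."""
    return sum(
        comb(trials, k) * chance**k * (1 - chance) ** (trials - k)
        for k in range(correct, trials + 1)
    )

# Hypothetical example: 12 correct answers out of 16 trials.
print(f"p = {binomial_p_value(12, 16):.4f}")  # p = 0.0384, just under 0.05
```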
 
Joined
Jan 28, 2021
Messages
853 (0.61/day)
Bluetooth works
It works, but a wireless protocol is never going to compete with a wired one.
You are incorrect. Async audio is supported by usbaudio.sys. There is no need for the 2.0 release for that feature to work. And even if it were as you state, it would still have been part of the de-facto feature set for five years already. Quite far from your original statement that it required proprietary drivers…
You also stated that they are an uncommon variety, while most if not all high-end DACs use async mode.
You may be right. It looks like Class 1 is limited to 96 kHz, so higher sampling rates required either proprietary drivers or the recent update to Windows 10 to support Class 2 audio. Finding information on this is a bit of a rat's nest.
So you agree that the test does not indicate that the results you claimed are meaningful. Go ahead and provide better tests, I’m waiting.
I would agree that the tests don't reach a threshold to be statistically meaningful, but in my opinion the subsets of listeners and equipment indicate that the difference is there and observable when those thresholds are met. There are no better tests that I'm aware of, and given the amount of effort that had to have gone into that one, I doubt we'll be seeing a better one any time soon.
And if electrical USB cable interference were an actual thing, it would be easy to prove by just doing a blind study comparing it to an optical input. In a device that is not faulty there is no difference in any measurements, but maybe the human ear can do what no machine can and determine which cable is used.
This is just circling back on itself now, but any cable carrying an electrical signal is going to be under the influence of interference. Audio streaming works differently than file transfer or a peripheral interface; see iFi's "USB Audio Gremlins Exposed". Sure, blind tests would prove it one way or another, assuming you conduct the test properly and with a big enough sample size, but just like the other test I've been referencing, it would be a pretty large undertaking to get enough data.

I think that's the main takeaway for me at least: we can measure a lot of what we are hearing, but not everything. I come from a background of loudspeakers when it comes to audio measurements, and measurements tell you a lot about how a speaker will sound, but not everything. A ribbon and a dome tweeter can measure nearly identically in a speaker with the same woofer and crossover topology yet sound very different; clearly there is something there that we just aren't measuring.

Until measurements show us everything, you have two options: trust your ears, or rely on blind studies, which are problematic because they can lead you to the wrong conclusions.
 
Joined
Oct 15, 2019
Messages
585 (0.31/day)
Until measurements show us everything, you have two options: trust your ears, or rely on blind studies, which are problematic because they can lead you to the wrong conclusions.
Trusting one's ears while doing sighted comparisons will definitely lead to the wrong conclusions. There are literally tons of data to support this statement. Blind tests _can_ lead to wrong conclusions, but at least that is not the norm.


in my opinion the subsets of listeners and equipment indicate that the difference is there and observable when those thresholds are met.
If the indication is there, it should be possible to get some statistically relevant results from that dataset. Otherwise you are just guessing, or believing what you want to believe.
It works, but a wireless protocol is never going to compete with a wired one.
Why? Timing is not a problem, as we can use async mode.
any cable carrying an electrical signal is going to be under the influence of interference.
Yes. But if that cable is not electrically connected to the analog side, does its existence matter?
Audio streaming works differently than file transfer or a peripheral interface; see iFi's "USB Audio Gremlins Exposed".
I read that and it just repeats what I’ve been writing all along. If your cable, DAC, or source isn’t faulty, you cannot hear any improvement from more expensive cables. You can clearly hear packet loss, and you can test cable performance by using some mass storage device on it and checking the error rate. If the error rate isn’t zero, throw the cable away.

If USB data transfer errors were an actual common problem, we would use an error-correcting code in the transferred packets. We would also use radiation-hardened DAC chips in a voting lockstep configuration, an optically isolated analog domain, etc. But we don’t, not even in the most expensive audio DACs on the planet.
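For what it's worth, here is a crude sketch of that mass-storage check; the paths are hypothetical placeholders, and the caveat is that bulk transfers already retry on CRC failure, so a clean result mostly proves the link is usable rather than error-free:

```python
# Push a known payload across the cable under test and verify it
# bit-for-bit. A truly marginal cable tends to show up as retries and
# throughput drops rather than corrupted files, since USB bulk mode
# retransmits bad packets.
import hashlib, os, shutil

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

src = "testfile.bin"                   # local payload
dst = "/media/usb-drive/testfile.bin"  # hypothetical mount behind the cable

with open(src, "wb") as f:             # ~100 MB of incompressible data
    f.write(os.urandom(100 * 1024 * 1024))

shutil.copyfile(src, dst)
# In a real run, unmount and remount the drive here so the read-back
# actually crosses the cable instead of coming from the page cache.
print("match" if sha256_of(src) == sha256_of(dst) else "CORRUPTION")
```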
 
Joined
Jan 28, 2021
Messages
853 (0.61/day)
Trusting one's ears while doing sighted comparisons will definitely lead to the wrong conclusions. There are literally tons of data to support this statement. Blind tests _can_ lead to wrong conclusions, but at least that is not the norm.
Yeah, the flaws of sighted listening tests are pretty well known. My only point is that most blind tests tend to point you to a conclusion that may be wrong. The differences between good DACs, or between cheap and high-end cables, are going to be small, to say the least. That pretty much necessitates that whoever is doing the test be very familiar with everything else in the system, and that someone else do the switching between the components being tested. There are small-scale tests I've seen where people do that (someone goes into the listener's home and helps them swap components), but it's always going to be a small sample size due to the familiarity requirement, so statistically irrelevant.
If the indication is there, it should be possible to get some statistically relevant results from that dataset. Otherwise you are just guessing, or believing what you want to believe.
Yeah, if we are referencing the Archimago test, then you'd need a bigger test with people who can actually hear the differences between the various samples and who have equipment that can resolve those differences, and only test with those individuals.
Why? Timing is not a problem, as we can use async mode.
You can say timing is not a problem because of the techniques used to mitigate the issues, but they are fundamentally the same techniques used with physical connections. Bandwidth problems still exist, but now your medium is air, which is pretty much always going to be worse than a physical cable.
Yes. But if that cable is not electrically connected to the analog side, does its existence matter?
Yeah, it still matters. Even if the cable isn't directly associated with the analog side, it's not completely isolated from the circuit. You still have bandwidth considerations on the digital side that are susceptible to interference.
I read that and it just repeats what I’ve been writing all along. If your cable, DAC, or source isn’t faulty, you cannot hear any improvement from more expensive cables. You can clearly hear packet loss, and you can test cable performance by using some mass storage device on it and checking the error rate. If the error rate isn’t zero, throw the cable away.

If USB data transfer errors were an actual common problem, we would use an error-correcting code in the transferred packets. We would also use radiation-hardened DAC chips in a voting lockstep configuration, an optically isolated analog domain, etc. But we don’t, not even in the most expensive audio DACs on the planet.
Isochronous audio streaming and bulk data transfer move their data in different ways, though, so wouldn't errors be handled differently?

My understanding is that there is active error correction in audio streams.
 
Joined
Oct 15, 2019
Messages
585 (0.31/day)
Bandwidth problems still exist, but now your medium is air, which is pretty much always going to be worse than a physical cable.
So ”always” has now become ”pretty much always”. How quaint. And I’m not sure why you think that bandwidth is a problem, unless we are talking about wearables. WiFi can stream some gigabits per second, which ought to be enough for Red Book audio at roughly 1.4 Mbit/s.

And seeing how we can _completely_ get rid of this USB cable ”interference”, I would have thought that you’d prefer this over anything.
Yeah, the flaws of sighted listening tests are pretty well known.
So why promote them?

You still have bandwidth considerations on the digital side that are susceptible to interference.
Which any sane engineer knows how to mitigate, via buffering etc.
My understanding is that there is active error correction in audio streams.
That is not the case when using default USB audio drivers. There is only an error checksum, but no error correction.
This is why saying that ’a cable matters because of errors’ is absurd, as one can hear each and every one of them, but no one complains about them. Why? Because they are super rare.
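A minimal illustration of that detection-versus-correction distinction, using an ordinary CRC-32 as a stand-in for the USB checksum:

```python
# A CRC tells the receiver that a packet is bad, but carries no
# information about which bit flipped, so an isochronous stream (which
# has no retry mechanism) can only drop or conceal the bad packet.
import zlib

packet = bytes(range(64))        # stand-in for one audio packet
crc_sent = zlib.crc32(packet)

corrupted = bytearray(packet)
corrupted[10] ^= 0x01            # a single bit flip in transit

crc_received = zlib.crc32(bytes(corrupted))
print(crc_sent == crc_received)  # False: the error is *detected*...
# ...but nothing here says byte 10, bit 0 was the culprit, so it
# cannot be *corrected* from the checksum alone.
```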
Isochronous audio streaming and bulk data transfer move their data in different ways, though, so wouldn't errors be handled differently?
Yes, but you can determine the _cable quality_ from it, i.e. whether it is error prone or not.
 
Joined
Jan 28, 2021
Messages
853 (0.61/day)
So ”always” has now become ”pretty much always”. How quaint. And I’m not sure why you think that bandwidth is a problem, unless we are talking about wearables. WiFi can stream some gigabits per second, which ought to be enough for Red Book audio at roughly 1.4 Mbit/s.
Wow, OK, this is an internet forum; we aren't writing academic research papers or reference white papers here. To be clear, wireless is always going to be inferior to a physical medium.

Yeah, there is tons of bandwidth to do bulk data transfer, but that's completely different from the requirements of an isochronous audio stream.
So why promote them?
I'm not really promoting them, but you can't benchmark your way to the answer, and the way most blind tests are conducted tends to lead to the wrong conclusions, so listening impressions are what's left. If you could benchmark everything and quantify it, or statistically prove it through blind tests, would that really be that useful given how subjective audio is in terms of personal preference and perception ability?

Educate yourself, do your own listening, and make your own determinations.
Which any sane engineer knows how to mitigate, via buffering etc.
Yeah, we keep going over this. You can mitigate the problems with various techniques, but not eliminate them in a real-time audio stream.
That is not the case when using default USB audio drivers. There is only an error checksum, but no error correction.
This is why saying that ’a cable matters because of errors’ is absurd, as one can hear each and every one of them, but no one complains about them. Why? Because they are super rare.
I don't design these things, am not an EE, and information is scarce, but my understanding is that all DACs have their own internal handling of errors. Not every bit gets transferred with 100% accuracy, and there is no retry like with bulk data transfers, so it's up to the DAC to internally handle the error. You can easily hear dropouts and artifacts where the stream essentially fails, but the argument is that errors are still happening which result in a loss of quality.
Yes, but you can determine the _cable quality_ from it, i.e. whether it is error prone or not.
Right, but what I'm saying / asking is that the nature of the data is different and how it's transferred is totally different. Audio is being sampled at 44.1 kHz at CD quality, all represented by bits, converted to analog voltage and back to bits again; that's a lot going on. I don't know how the data packets are framed, and I'm not an expert on digital audio or an EE of any kind, but given the real-time nature of how the digital stream works, it seems conceivable to me that errors could be a problem.
 
Joined
Oct 15, 2019
Messages
585 (0.31/day)
argument is that errors are still happening which result in a loss of quality.
How often do they happen? You promote this cable bullshit so much that one would think you have some numbers to give. Counting transfer errors is a purely discrete and quantifiable metric.
Right, but what I'm saying / asking is that the nature of the data is different and how it's transferred is totally different. Audio is being sampled at 44.1 kHz at CD quality, all represented by bits, converted to analog voltage and back to bits again; that's a lot going on. I don't know how the data packets are framed, and I'm not an expert on digital audio or an EE of any kind, but given the real-time nature of how the digital stream works, it seems conceivable to me that errors could be a problem.
The nature of the data is different, but the physical transfer layer is the same, which is why you can test cable quality with the method that I gave.

Errors are a problem, with shitty cables and DACs placed inside microwave ovens. In other cases, not really. You can easily test it yourself.

Async USB audio does not care about any minuscule timing errors in the data transfers, only transfer errors.
Educate yourself, do your own listening, and make your own determinations.
Educate yourself, do your own blind tests and make your own determinations. I do not have the audacity to think that sighted audio tests that I could make would prove anything.

If you could benchmark everything and quantify it, or statistically prove it through blind tests, would that really be that useful given how subjective audio is in terms of personal preference and perception ability?
Of course blind tests can be used to gauge subjective preference as well! I mean why wouldn’t that be the case? In order to do that, one just has to be able to differentiate the changing components by listening alone.

If we were talking about subjective preference for how audio systems look, then things would be different.
 
Joined
Jan 28, 2021
Messages
853 (0.61/day)
How often do they happen? You promote this cable bullshit so much that one would think you have some numbers to give. Counting transfer errors is a purely discrete and quantifiable metric.
I have no clue how often they happen. And I'm not promoting anything; I'm simply stating that the cable is not infallible, even if all it is doing is transmitting 1s and 0s, so you can't go by just that one metric.
The nature of the data is different, but the physical transfer layer is the same, which is why you can test cable quality with the method that I gave.

Errors are a problem, with shitty cables and DACs placed inside microwave ovens. In other cases, not really. You can easily test it yourself.

Async USB audio does not care about any minuscule timing errors in the data transfers, only transfer errors.
Same physical layer, yes, but how that data is packaged is different; audio is sampled 44,100 times a second in an isochronous stream. This is not my area of expertise, but I have to think that the fault tolerances and error correction methods used are different than in a bulk data transfer.
Educate yourself, do your own blind tests and make your own determinations. I do not have the audacity to think that sighted audio tests that I could make would prove anything.
I plan to do my own tests, not to prove anything though.
Of course blind tests can be used to gauge subjective preference as well! I mean why wouldn’t that be the case? In order to do that, one just has to be able to differentiate the changing components by listening alone.

If we were talking about subjective preference for how audio systems look, then things would be different.
I'm not saying blind tests can't do that, but given that most people are not going to conduct their own blind tests, relying on someone else's blind test conclusions is of limited value because of the subjectivity of audio.
 
D

Deleted member 24505

Guest
I have no clue how often they happen. And I'm not promoting anything; I'm simply stating that the cable is not infallible, even if all it is doing is transmitting 1s and 0s, so you can't go by just that one metric.

Same physical layer, yes, but how that data is packaged is different; audio is sampled 44,100 times a second in an isochronous stream. This is not my area of expertise, but I have to think that the fault tolerances and error correction methods used are different than in a bulk data transfer.

I plan to do my own tests, not to prove anything though.

I'm not saying blind tests can't do that, but given that most people are not going to conduct their own blind tests, relying on someone else's blind test conclusions is of limited value because of the subjectivity of audio.

People listen to professionals who test audio stuff for a living. If they say this item is better than that, and why, then that item is better, unless you think they are biased or don't test properly. But personal listening always comes down to the individual: if you like something, then buy that, whatever the pros say.
 
Joined
Jan 28, 2021
Messages
853 (0.61/day)
People listen to professionals who test audio stuff for a living. If they say this item is better than that, and why, then that item is better, unless you think they are biased or don't test properly. But personal listening always comes down to the individual: if you like something, then buy that, whatever the pros say.
I agree, though I tend to follow reviewers who have some sort of technical background, since they tend to run better tests and are better equipped to interpret the results.

Not everyone likes the technical approach, though, and that's the tricky part about audio, particularly troublesome for those that are dead set on quantifying everything into a metric you can put into a chart.
 
D

Deleted member 24505

Guest
I agree, though I tend to follow reviewers who have some sort of technical background, since they tend to run better tests and are better equipped to interpret the results.

Not everyone likes the technical approach, though, and that's the tricky part about audio, particularly troublesome for those that are dead set on quantifying everything into a metric you can put into a chart.

Here are a few sites you might know, if not they are good indeed.

https://www.audiosciencereview.com/forum/index.php

http://audiopurist.pl/en/main-page/

https://audiokarma.org/forums/index.php

I read these a fair bit.
 
Joined
Oct 15, 2019
Messages
585 (0.31/day)
Same physical layer, yes, but how that data is packaged is different; audio is sampled 44,100 times a second in an isochronous stream. This is not my area of expertise, but I have to think that the fault tolerances and error correction methods used are different than in a bulk data transfer.
The physical layer is the same, and transfer errors happen the same way. With bulk data transfer, these errors are easily quantified, because each data retry is logged and you can look up that metric with no special tools or hardware. With audio, the errors are not logged, as they are not reported and there is no retry mechanism. The data is transmitted with the same voltages, with the same 0’s and 1’s. The cable’s error rate is very much comparable in both uses, if the receiving hardware is of the same general quality when it comes to the USB PHY used and the data transfer rate is the same.

To think that the raw error rate would somehow depend on the data packaging has no real world basis. The only difference is how it is mitigated, which does not depend on anything related to the physical transfer layer (where the errors happen).

If you somehow think that this is not the case, please describe in detail why that might be, i.e. why the transfer errors might depend on the packet length or packet contents.


I have no clue how often they happen. And I'm not promoting anything; I'm simply stating that the cable is not infallible, even if all it is doing is transmitting 1s and 0s, so you can't go by just that one metric.
No cable is truly infallible, and neither is the computing inside the DAC chips for that matter. Random bit flips are a very real thing. But if transfer errors happen once in a year, I would not spend thousands on USB cables that are not proven to work any better.
 
Joined
Jan 28, 2021
Messages
853 (0.61/day)
The physical layer is the same, and transfer errors happen the same way. With bulk data transfer, these errors are easily quantified, because each data retry is logged and you can look up that metric with no special tools or hardware. With audio, the errors are not logged, as they are not reported and there is no retry mechanism. The data is transmitted with the same voltages, with the same 0’s and 1’s. The cable’s error rate is very much comparable in both uses, if the receiving hardware is of the same general quality when it comes to the USB PHY used and the data transfer rate is the same.

To think that the raw error rate would somehow depend on the data packaging has no real world basis. The only difference is how it is mitigated, which does not depend on anything related to the physical transfer layer (where the errors happen).

If you somehow think that this is not the case, please describe in detail why that might be, i.e. why the transfer errors might depend on the packet length or packet contents.
No cable is truly infallible, and neither is the computing inside the DAC chips for that matter. Random bit flips are a very real thing. But if transfer errors happen once in a year, I would not spend thousands on USB cables that are not proven to work any better.
I really don't have anything specific to point to as to why the errors would be different in audio vs. bulk data transfer, but I am trying to understand the argument for it and remain skeptical until I do. I certainly wouldn't advocate for high-end cables either.

The argument seems to be twofold. First, the bits being transferred, which, as you said, are the same regardless at the physical layer. In regard to errors and retry on data transfer, that only occurs on chunks of data in some block of bytes as I understand it, not at the bit level, and it would be on that basis that errors are logged and retries happen? This is more of a data transmission question than anything, but is there bit over-provisioning in the transport layer that protects data integrity that would inherently not be present in an audio stream? I state it as a question because I don't know, and the argument is that there isn't enough bandwidth in the cable to represent the bits with 100% accuracy, particularly with HD audio.

The other aspect is the cable itself picking up outside interference and affecting the DAC itself. The DAC is sensitive to noise and interference; just because it's made up of ICs doesn't make it immune to the outside world. I mean, everything in the analog world is prone to interference, from turntables to tubes to solid-state MOSFETs, and half of what the DAC is doing is analog. You can say you can't hear it because blind tests don't prove it or it doesn't show up in the measurements, but without rehashing old territory, those two things don't tell the whole story. My stance is that if it actually is happening, it's happening on the fringe of the high-end spectrum, and it's almost certainly irrelevant next to other shortcomings you may have.

Here are a few sites you might know, if not they are good indeed.

https://www.audiosciencereview.com/forum/index.php

http://audiopurist.pl/en/main-page/

https://audiokarma.org/forums/index.php

I read these a fair bit.
I build my own speakers, so I mostly frequent forums focused on that. I do check in on ASR, though, to see what's passing through and getting tested.

Otherwise I mostly stay up on what's new on the electronics front from a few YouTube channels. A British Audiophile reviews a lot of high-end gear that I'll probably never buy, but he has an EE background and goes into the technical design aspects, which gives interesting context into how and why something might sound the way it does. The cheapaudioman reviews cheaper (sub-$1,000) stuff in a very non-pretentious, audiophile-y way I appreciate.
 
Joined
Oct 15, 2019
Messages
585 (0.31/day)
In regard to errors and retry on data transfer, that only occurs on chunks of data in some block of bytes as I understand it, not at the bit level, and it would be on that basis that errors are logged and retries happen?
Correct. I don’t remember the specifics, but there is just some CRC checksum at the end of each block of data, and if that does not match a retry is attempted (and logged).

This is more of a data transmission question than anything, but is there bit over-provisioning in the transport layer that protects data integrity that would inherently not be present in an audio stream?
No. The default audio driver does not implement any over-provisioning.

I state it as a question because I don't know, and the argument is that there isn't enough bandwidth in the cable to represent the bits with 100% accuracy, particularly with HD audio.
There is plenty of bandwidth. I mean, anything over 16-bit/44.1 kHz is a waste of time anyway, as far as playback is concerned, and there is enough bandwidth for 32-bit/300+ kHz, meaning that you could literally send each packet ten times and still have bandwidth to spare.
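Back-of-envelope arithmetic behind that claim, taking USB 2.0's 480 Mbit/s signalling rate as the ceiling and 384 kHz as a concrete instance of "300+ kHz" (real isochronous throughput is lower, but the conclusion is unchanged):

```python
# Raw bit rates of PCM streams vs. the USB 2.0 high-speed link.
def stream_mbps(channels: int, bits: int, rate_hz: int) -> float:
    return channels * bits * rate_hz / 1e6

redbook = stream_mbps(2, 16, 44_100)   # ~1.41 Mbit/s
hires   = stream_mbps(2, 32, 384_000)  # ~24.58 Mbit/s
usb2    = 480.0                        # Mbit/s, signalling rate

print(f"Red Book:   {redbook:5.2f} Mbit/s ({usb2 / redbook:.0f}x headroom)")
print(f"32/384 kHz: {hires:5.2f} Mbit/s ({usb2 / hires:.0f}x headroom)")
```

Even the hi-res stream leaves roughly 20x headroom, which is where the "send each packet ten times" remark comes from.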


The other aspect is the cable itself picking up outside interference and affecting the DAC itself. The DAC is sensitive to noise and interference; just because it's made up of ICs doesn't make it immune to the outside world. I mean, everything in the analog world is prone to interference, from turntables to tubes to solid-state MOSFETs, and half of what the DAC is doing is analog. You can say you can't hear it because blind tests don't prove it or it doesn't show up in the measurements, but without rehashing old territory, those two things don't tell the whole story. My stance is that if it actually is happening, it's happening on the fringe of the high-end spectrum, and it's almost certainly irrelevant next to other shortcomings you may have.
It is present in the analog domain, and mostly irrelevant. If you had audible interference, in most apartment buildings it would be 99.9% just the 50/60 Hz hum. Other frequencies in the auditory range are _very_ underrepresented. If you can’t even measure the 50/60 Hz hum in the audible decibel range, it is exceedingly unlikely that any other interference would be more pronounced.

There is a lot of other interference, but it is not in the auditory range and thus does not matter in the analog domain. It can cause a lot of problems in the digital domain, but those would be easy to hear if present, or quantifiable by other means.
 
Joined
Jan 28, 2021
Messages
853 (0.61/day)
Correct. I don’t remember the specifics, but there is just some CRC checksum at the end of each block of data, and if that does not match a retry is attempted (and logged).
No. The default audio driver does not implement any over-provisioning.
Right, and that's kinda what I'm getting at. For those reasons you can't treat it the same and say that just because it's digital it's protected from faults. There are all kinds of mechanisms at work that make data transfer appear as though the process is infallible, but those don't exist in the same way with digital audio.
There is plenty of bandwidth. I mean, anything over 16-bit/44.1 kHz is a waste of time anyway, as far as playback is concerned, and there is enough bandwidth for 32-bit/300+ kHz, meaning that you could literally send each packet ten times and still have bandwidth to spare.
The cable may not be limiting the bandwidth, but most DACs have an internal resolution of 20 bits or so before the accuracy is gone. That loss of resolution is mostly due to design constraints of internal components, but it is also probably susceptible to extra noise from the cable.
It is present in the analog domain, and mostly irrelevant. If you had audible interference, in most apartment buildings it would be 99.9% just the 50/60 Hz hum. Other frequencies in the auditory range are _very_ underrepresented. If you can’t even measure the 50/60 Hz hum in the audible decibel range, it is exceedingly unlikely that any other interference would be more pronounced.

There is a lot of other interference, but it is not in the auditory range and thus does not matter in the analog domain. It can cause a lot of problems in the digital domain, but those would be easy to hear if present, or quantifiable by other means.
It's not the noise being in the auditory range that is the problem; all EMI noise is an issue.

Look at the various approaches to negative feedback in amplification, which does operate in the auditory range: part of what negative feedback loops do is remove noise and distortion in the amplification circuit. That noise and distortion is an effect of the amp design and of external noise factors, whether it be noise introduced by the power supply or external EMI. Negative feedback with respect to a DAC isn't directly comparable (aside from its output stage), but the principles still apply.
 
Joined
Oct 15, 2019
Messages
585 (0.31/day)
all EMI noise is an issue.
Why?


That loss of resolution is mostly due to design constraints of internal components, but it is also probably susceptible to extra noise from the cable.
And a mere human can hardly make use of 16 bits of range. This you can easily test by yourself if you so wish. Again, I have been consistent all along in saying that some distortion and noise is present, but it simply does not matter.


Right, and that's kinda what I'm getting at. For those reasons you can't treat it the same and say that just because it's digital it's protected from faults. There are all kinds of mechanisms at work that make data transfer appear as though the process is infallible, but those don't exist in the same way with digital audio.
But it is quantifiable, and does not happen often enough to matter. In the same way your car is not infallible and can kill you at any moment, but you still drive it.

With custom drivers you can over-provision as much as you want and completely negate the problem. Too bad the ”high end” market instead focuses on 1000 dollar cables.
 
D

Deleted member 24505

Guest
I have my DAC set at 24/96. Is that pointless, and should I set it to 16/44?
 
Joined
Oct 15, 2019
Messages
585 (0.31/day)
I have my DAC set at 24/96. Is that pointless, and should I set it to 16/44?
It depends. If you listen to sources with both 44 kHz and 48 kHz sample rates (like CD audio at 44 and movies at 48), and your system won’t be able to automatically switch the sample rate based on content (which is typical for PCs), then a higher sample rate will diminish re-sampling artefacts, which can be audible when converting from 48 to 44 or the other way around. Re-sampling both 44 and 48 to 96 does not produce audible re-sampling artefacts.

Because of this re-sampling issue, it is usually better to use a high sample rate for the PC-DAC interconnect. The ’best’ option would be to always set the DAC to the same sample rate as the content you listen to, and let the DAC do all the upsampling internally, but my understanding is that this is difficult to accomplish on a PC. As for bit depth, it does not really matter which value you set it to, as re-sampling is not an issue. There will be no audible difference to you between 16- and 24-bit modes. There are no real downsides either. Theoretically, as there is more data being transferred if you select a higher bit depth, there will be more data corruption as well, but it is still super rare and it's not a real issue you should spend time pondering.
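A small illustration of why the conversion direction matters, assuming a conventional rational-ratio resampler (upsample by L, low-pass filter, decimate by M):

```python
# Resampling ratios for the rates discussed above.
from fractions import Fraction

for src, dst in [(48_000, 44_100), (44_100, 96_000), (48_000, 96_000)]:
    r = Fraction(dst, src)
    print(f"{src} -> {dst}: x{float(r):.4f} (L/M = {r.numerator}/{r.denominator})")

# 48000 -> 44100: x0.9188 (147/160). Downsampling: content between
#   22.05 and 24 kHz must be filtered out, right at the edge of the
#   audio band, which is where audible artefacts can creep in.
# 44100 -> 96000: x2.1769 (320/147). Upsampling: nothing is discarded
#   and the filter has plenty of headroom above 20 kHz.
# 48000 -> 96000: x2.0000 (2/1). Trivial integer upsample.
```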
 
D

Deleted member 24505

Guest
It depends. If you listen to sources with both 44 kHz and 48 kHz sample rates (like CD audio at 44 and movies at 48), and your system won’t be able to automatically switch the sample rate based on content (which is typical for PCs), then a higher sample rate will diminish re-sampling artefacts, which can be audible when converting from 48 to 44 or the other way around. Re-sampling both 44 and 48 to 96 does not produce audible re-sampling artefacts.

Because of this re-sampling issue, it is usually better to use a high sample rate for the PC-DAC interconnect. The ’best’ option would be to always set the DAC to the same sample rate as the content you listen to, and let the DAC do all the upsampling internally, but my understanding is that this is difficult to accomplish on a PC. As for bit depth, it does not really matter which value you set it to, as re-sampling is not an issue. There will be no audible difference to you between 16- and 24-bit modes. There are no real downsides either. Theoretically, as there is more data being transferred if you select a higher bit depth, there will be more data corruption as well, but it is still super rare and it's not a real issue you should spend time pondering.

Apart from games, I always try to "find" the highest quality audio files I can, usually FLAC if possible, but I will use peasant MP3 if I have to. My DAC is connected via USB-C 3.2 on the PC, though I don't suppose it makes a difference even if it were USB 2.
 
Joined
Oct 15, 2019
Messages
585 (0.31/day)
Apart from games, I always try to "find" the highest quality audio files I can, usually FLAC if possible, but I will use peasant MP3 if I have to. My DAC is connected via USB-C 3.2 on the PC, though I don't suppose it makes a difference even if it were USB 2.
FLAC and MP3 have nothing to do with the sample rate and bit depth.

If you use any music streaming service, or audio CD releases, they are going to be at 44.1 kHz; most movie streaming services and DVD stereo tracks are at 48 kHz.

The quality of any audio track is usually determined by how it was mastered, and any extra data rate beyond CD audio quality is just a waste. You can easily test it yourself.
 
Joined
Jan 28, 2021
Messages
853 (0.61/day)
It negatively affects things and I already explained why.
And a mere human can hardly make use of 16 bits of range. This you can easily test by yourself if you so wish. Again, I have been consistent all along in saying that some distortion and noise is present, but it simply does not matter.
16/44 was settled on for a very good reason, and a bit depth of 16 bits gives 96 dB of dynamic range, which is more than enough for playback, so it's very easy to dismiss any high-res formats if you understand the basic principles and what Red Book audio covers. Still, many people who understand it on a level higher than anyone on this forum claim that higher resolutions (24-bit/192 kHz) sound better, and the reason comes down to how the reconstruction filters in DACs work. It's the same reason why oversampling is a thing at all, the reasoning behind why Chord uses proprietary filters built around an FPGA in their DACs, and what the whole MQA standard was designed to address.
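For reference, the arithmetic behind those dynamic-range figures; each bit of depth doubles the representable amplitude range, i.e. adds about 6.02 dB:

```python
# Theoretical dynamic range of linear PCM by bit depth: 20*log10(2^N).
from math import log10

for bits in (16, 20, 24):
    print(f"{bits}-bit: {20 * log10(2 ** bits):.1f} dB")

# 16-bit: 96.3 dB  (Red Book)
# 20-bit: 120.4 dB (around the ~20-bit practical converter limit
#                   mentioned earlier in the thread)
# 24-bit: 144.5 dB (beyond the noise floor of any analog stage)
```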
But it is quantifiable, and does not happen often enough to matter. In the same way your car is not infallible and can kill you at any moment, but you still drive it.

With custom drivers you can over-provision as much as you want and completely negate the problem. Too bad the ”high end” market instead focuses on 1000 dollar cables.
How is it quantifiable? When you are transferring data of any kind from one component to another inside a computer, or from one computer to another, it only appears to happen without error because of protection schemes built into the process, and bandwidth is not a consideration for them to function. Those schemes don't exist and can't function in the same way for digital audio, and that says nothing about what happens in the analog domain.

Ultimately I don't think the answer for better audio is software (over-provisioning of data); I was just using it to draw a comparison. The high-end industry is doing plenty of things besides selling high-end cables; look at the research and science that goes into MQA, or the re-appearance of R2R ladder DACs.
FLAC and MP3 have nothing to do with the sample rate and bit depth.

If you use any music streaming service, or audio CD releases, they are going to be at 44.1 kHz; most movie streaming services and DVD stereo tracks are at 48 kHz.

The quality of any audio track is usually determined by how it was mastered, and any extra data rate beyond CD audio quality is just a waste. You can easily test it yourself.
FLAC vs. MP3 is lossless vs. lossy compression. Lossless compression addresses totally different issues with digital music than sampling rate and bit depth.

Lots of music streaming services offer high-res music now, Tidal being probably the most popular.

Mastering is really far, far more important than any of this, but it's like comparing the farm equipment used to plant the apple tree to the apple itself; for the purposes of discussing digital audio it makes zero sense. That said, I usually go for a high-quality vinyl FLAC vs. a CD FLAC if I can find it, because the vinyl master is oftentimes better than the CD master.
 
Joined
Oct 15, 2019
Messages
585 (0.31/day)
That said, I usually go for a high-quality vinyl FLAC
How can you be sure that it does not contain faulty bits of data? I mean, it was captured with some USB audio device, and you claim that they produce errors quite regularly.


FLAC vs. MP3 is lossless vs. lossy compression. Lossless compression addresses totally different issues with digital music than sampling rate and bit depth.
Yup


The high-end industry is doing plenty of things besides selling high-end cables; look at the research and science that goes into MQA, or the re-appearance of R2R ladder DACs.
And none of those matter for anyone, except if you are in the business of extracting money from idiots with cash to spare. Name one (peer-reviewed) study where any of the things you mentioned produced better sound. ”Research and science” my ass.


How is it quantifiable? When you are transferring data of any kind from one component to another inside a computer, or from one computer to another, it only appears to happen without error because of protection schemes built into the process, and bandwidth is not a consideration for them to function. Those schemes don't exist and can't function in the same way for digital audio, and that says nothing about what happens in the analog domain.
The DAC chip can count transfer errors, based on the checksum scheme in place; that is how it’s quantifiable.


It negatively affects things and I already explained why.
How much though, enough for someone to actually hear it?


Still, many people who understand it on a level higher than anyone on this forum claim that higher resolutions (24-bit/192 kHz) sound better, and the reason comes down to how the reconstruction filters in DACs work.
And is the ”better sound” quantifiable?
 
Joined
Jan 28, 2021
Messages
853 (0.61/day)
How can you be sure that it does not contain faulty bits of data? I mean, it was captured with some USB audio device, and you claim that they produce errors quite regularly.
I never said it wasn't going to contain faults from A/D conversion. The point is to have better source material to work with.
And none of those matter for anyone, except if you are in the business of extracting money from idiots with cash to spare. Name one (peer-reviewed) study where any of the things you mentioned produced better sound. ”Research and science” my ass.
Look into the design goals, the research to achieve them, and the subsequent patents that went into MQA, and tell me that's not real research and science. If you think new formats are just overly complex money-making schemes, that's up to you, but MQA is merely one example of what's happening in high-end digital audio. You can argue their target goals are pointless and you can't hear it, and that's your opinion, but I'm not going down the "peer reviewed", "blind listening" tests path again because that's already been addressed.
The DAC chip can count transfer errors, based on the checksum scheme in place; that is how it’s quantifiable.
Digital audio is bandwidth-limited and time-sensitive; it doesn't behave the same way file transfers work, or whatever other comparison you want to draw.
How much though, enough for someone to actually hear it?
I'm not here to tell you or anyone else what they can or can't hear, but yes; otherwise they wouldn't exist.
And is the ”better sound” quantifiable?
Probably not via the methods you are looking for, but drawing conclusions on what sounds better based on what is statistically significant in blind studies, or on what looks better to an Audio Precision device (the limitations of both of these have already been covered), is missing the point, when what the individual hears is the only end result that matters.
 