
An "Audiophile Grade" SSD—Yes, You Heard That Right

Joined
Oct 15, 2019
Messages
585 (0.31/day)
Look into the design goals, research to achieve them, and subsequent patents that went into MQA and tell me that's not real research and science.
Done.

I'm not going down the "peer reviewed", "blind listening" tests path again because that's already been addressed.
Neither did the MQA people. Sadly. If their ’research’ does not contain validation by controlled testing, I will not consider their statements valid.

Digital audio is bandwidth limited and time sensitive; it doesn't behave the same way file transfers work, or whatever other comparison you want to draw.
Not really. There is enough time and bandwidth to do whatever. It was an actual limitation with USB 1.0, but we are past that. Another thing is real time audio, but that has nothing to do with end user music listening.
I'm not here to tell you or anyone else what they can or can't hear, but yes, otherwise they wouldn't exist.
But if they can hear it, where is the research to prove it?
what the individual hears is the only end result that matters.
Exactly. Not what the end user believes, or sees, but what the individual hears. That has been my point this whole time. There are limited methods to determine that though.

Every time someone brings some new bullshit to the audio scene, people should be interested in only one thing: Can you hear it? And specifically, can you hear it without seeing it. Anything else is essentially pointless.
 
Joined
Jan 28, 2021
Messages
853 (0.61/day)
Neither did the MQA people. Sadly. If their ’research’ does not contain validation by controlled testing, I will not consider their statements valid.
I'm not pointing to MQA as a success story, just using it as an example of where a ton of R&D went into pushing digital audio. The benefits of MQA seem to be pretty mixed at best from what I've read.
Not really. There is enough time and bandwidth to do whatever. It was an actual limitation with USB 1.0, but we are past that. Another thing is real time audio, but that has nothing to do with end user music listening.
I don't think that's true. To my knowledge there isn't a cable or protocol capable of transferring 24-bit digital audio without the use of error interpolation, buffers, and other techniques, all of which are susceptible to error.

What do you mean? All audio playback happens in real time.
But if they can hear it, where is the research to prove it?
Every time someone brings some new bullshit to the audio scene, people should be interested in only one thing: Can you hear it? And specifically, can you hear it without seeing it. Anything else is essentially pointless.
All this territory has been covered already. Your stance is pretty clear and I'm not going to argue with it. The only thing I would say (which I've already said) is that anything high-end is hard to test for several reasons. The differences between anything high-end in the audio world are small, and with DACs even more so, because of all the components they really shouldn't be imparting any character of their own, unlike speakers or, to a lesser extent, amplifiers.

How the results of these tests are interpreted also has to be taken into consideration; hearing is very personal in terms of what you are sensitive to, what you prefer and what you are even capable of hearing. If you are testing digital sources (DACs), everything else in the signal path has to be good enough to resolve any differences, and even if you have gear that is good enough, that assumes in the case of speakers that they are set up properly.

Probably the biggest issue is simply arranging the proper test conditions: the scale and scope you'd need to do the proper tests to prove it to the standard you are looking for would be a huge undertaking, and there just isn't a big incentive to do it. We already covered in great detail the test done by Archimago's Musings, which, as impressive as it is, still falls short.
 
Joined
Oct 15, 2019
Messages
585 (0.31/day)
I don't think that's true. To my knowledge there isn't a cable or protocol capable of transferring 24-bit digital audio without the use of error interpolation, buffers, and other techniques, all of which are susceptible to error.
Why wouldn’t you buffer data? Are the errors audible, and happen often enough to matter?
What do you mean? All audio playback happens in real time.
I mean minimizing buffers etc. to minimize latency. It’s useful when doing live mixing, audio production etc. See real time computer systems @ wikipedia

Probably the biggest issue is simply arranging the proper test conditions: the scale and scope you'd need to do the proper tests to prove it to the standard you are looking for would be a huge undertaking, and there just isn't a big incentive to do it.
Yep. This is the problem. If you prove that it does not work, you won’t be able to sell it at a premium.

If you can’t prove something, it (probably) does not exist, and I would not bet my money on it existing.

Everyone is free to believe in anything, but _claiming_ that something is audibly better needs proof. That’s all.
 
Joined
Jan 28, 2021
Messages
853 (0.61/day)
Sorry, been away from things for awhile...

Why wouldn’t you buffer data? Are the errors audible, and happen often enough to matter?
As part of the data transport, information would be buffered, but only up until the receiving chip, and the data is only checked for errors; it doesn't tell the source to re-transmit the data (to my knowledge). What is audible is going to be highly dependent on the rest of the equipment, room, and the individual.
I mean minimizing buffers etc. to minimize latency. It’s useful when doing live mixing, audio production etc. See real time computer systems @ wikipedia
Without a doubt production is more stringent, but buffers and other techniques are just mitigation efforts to address the issues with digitizing audio.
Yep. This is the problem. If you prove that it does not work, you won’t be able to sell it at a premium.

If you can’t prove something, it (probably) does not exist, and I would not bet my money on it existing.

Everyone is free to believe in anything, but _claiming_ that something is audibly better needs proof. That’s all.
That's one way to look at it I guess. I think it's within the realm of possibility to settle some of these types of ambiguous issues, but the effort would be a massive undertaking, and existing studies and tests to date are not good enough and ultimately lead to the wrong conclusions.

Proof is a tricky thing when it comes to technical objective improvements that are open to subjective experience. Prove it to whom, and under what conditions? Massive double blind tests to prove every claim by every manufacturer? That's simply impractical and doesn't happen in any other industry, but you have to have it in audio or it's a scam? As a completely random example, if I buy a new suspension fork for my mountain bike that is 15% more responsive to small bump sensitivity or resists flexing by 10%, does Fox need to conduct a massive test with 100 riders and prove it performs better on the trail within statistical significance? 1,000 random people off the street wouldn't even know what to check for; 100 highly experienced riders, maybe, but I feel like most of the double blind audio tests that are big enough are more like the former example (a large number of randos that don't know what they are even listening for, or have gear that is not capable of resolving any difference). Aside from these tests being insanely difficult to do, it's always highly specific to the user, so how useful would any kind of proof you gather really ultimately be?
 
Joined
Oct 15, 2019
Messages
585 (0.31/day)
As part of the data transport, information would be buffered, but only up until the receiving chip, and the data is only checked for errors; it doesn't tell the source to re-transmit the data (to my knowledge). What is audible is going to be highly dependent on the rest of the equipment, room, and the individual.
Whether or not retransmit is requested depends on the protocol. If streaming over IP you can do whatever you want. If you are limited to default USB audio drivers, then there obviously isn’t any retransmitting as it’s not defined in the spec.

And errors aren’t audible if they don’t happen. If you have faulty cables then it might be an actual problem. As I wrote earlier, you can easily test your cables for error rate.
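Something like this sketch is all I mean, assuming some storage is mounted behind the cable under test (the paths and the file are hypothetical):

```python
# Push a known payload through the cable repeatedly and hash-compare.
# Note: bulk USB transfers are CRC-protected with retries, so a marginal cable
# tends to show up as reduced throughput rather than silent corruption.
import hashlib, shutil

SRC = "testfile.bin"           # hypothetical file of random data
DST = "/mnt/usb/testfile.bin"  # hypothetical mount behind the cable under test

def sha256(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

reference = sha256(SRC)
errors = 0
for _ in range(10):            # ten passes through the cable under test
    shutil.copyfile(SRC, DST)
    if sha256(DST) != reference:
        errors += 1
print(f"{errors} corrupted copies out of 10 runs")
```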
but buffers and other techniques are just mitigation efforts to address the issues with digitizing audio.
They have to do with timing accuracy, not much more. Btw, how accurate do you think analog audio is, timing wise? Cassettes, vinyl, etc. are dependent on the accuracy of electric motors, moving masses, rubber belts, bearing quality and wear level etc., all of which change over time.

Timing accuracy is now vastly better than it ever was with analog media on home audio equipment.


Proof is a tricky thing when it comes to technical objective improvements that are open to subjective experience.
Subjectivity has nothing to do with proof. All I'm asking is that an audible difference can be heard. Whether or not the experience is better, I don't really care.

Massive double blind tests to prove every claim by every manufacturer?
I'm ok with a repeatable n=1 study. The main audio "engineer" of a high end audio company as the one under study, with a sample size large enough to verify the claim. Should not take more than an hour per product.
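To put a number on "large enough": if the listener is only guessing, each trial of a forced-choice ABX run is a coin flip, so plain binomial math tells you what an hour of trials buys. (The ABX protocol and the 16-trial session are my assumptions for illustration.)

```python
# One-sided binomial test for an ABX listening session: the p-value is the
# chance of scoring at least this well by guessing alone.
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """P(scoring >= `correct` out of `trials` by pure guessing)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 16 trials fit easily into an hour; 12 or more correct beats chance at p < 0.05.
for correct in range(10, 17):
    print(f"{correct}/16 correct -> p = {abx_p_value(correct, 16):.4f}")
```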

As a completely random example, if I buy a new suspension fork for my mountain bike that is 15% more responsive to small bump sensitivity or resists flexing by 10%, does Fox need to conduct a massive test with 100 riders and prove it performs better on the trail within statistical significance?
Sounds like a bullshit claim, to be honest. How do they know that it’s 15% more responsive? If they base the numbers on some lab test (and disclose how they were made), then it’s fine by me. If they claimed that with the different parts your subjective experience would change (even if it is immeasurable by technical means), then they would need to conduct a study of sorts to back it up.

Aside from these tests being insanely difficult to do, it's always highly specific to the user, so how useful would any kind of proof you gather really ultimately be?
They are not difficult to do. All I'm asking is that at least someone who claims there is a difference be able to show it. With his setup or whatever, in a controlled blind study.

And as for usefulness, how useful do you think unproven baseless marketing claims are?
 
Joined
Jan 28, 2021
Messages
853 (0.61/day)
Whether or not retransmit is requested depends on the protocol. If streaming over IP you can do whatever you want. If you are limited to default USB audio drivers, then there obviously isn’t any retransmitting as it’s not defined in the spec.

And errors aren’t audible if they don’t happen. If you have faulty cables then it might be an actual problem. As I wrote earlier, you can easily test your cables for error rate.
You aren't testing the data transmission on the cable; you are testing the cable and whatever protocol you are using. Bulk data transfer protocols have redundancy built in that compensates for errors on the transmitting and receiving ends of the connection. Those things don't exist in real time audio streams like USB Audio or SPDIF.
They have to do with timing accuracy, not much more. Btw, how accurate do you think analog audio is, timing wise? Cassettes, vinyl, etc. are dependant on the accuracy of electric motors, moving masses, rubber strings, bearing quality and wear level etc. all of which change over time.

Timing accuracy is now vastly better than it ever was for analog media home audio equipment.
I think the difference here is that when you're dealing with something like vinyl you are never leaving analog.
I’m ok with a repeatable n=1 study. The main audio ”engineer” of a high end audio company as the one under stydy, with sample size large enough to verify the claim. Should not take more than an hour per product.
How would that work? I mean most if not all companies do that kind of testing internally when comparing new products. Lots of reviewers do the same thing, but unless someone is there to observe and document the procedure you don't really know that they did it right or at all. Most people don't care about this kind of thing; they usually fall into one of two camps: those that read / watch subjective reviews and those that care about specs.
Sounds like a bullshit claim, to be honest. How do they know that it’s 15% more responsive? If they base the numbers on some lab test (and disclose how they were made), then it’s fine by me. If they claimed that with the different parts your subjective experience would change (even if it is immeasurable by technical means), then they would need to conduct a study of sorts to back it up.
What's bullshit about it? The first few % of the travel is a pretty important aspect of performance. The more reactive it is, the better it tracks with the ground and the less fatiguing it is. As to how, put whatever is being tested on a standardized jig and measure it.

Nobody is going to conduct a huge study to prove it though, for the same reasons as with audio: crazy hard and time consuming, and there isn't enough of an audience to justify the effort. But that doesn't make it bullshit.
They are not difficult to do. All I'm asking is that at least someone who claims there is a difference be able to show it. With his setup or whatever, in a controlled blind study.

And as for usefulness, how useful do you think unproven baseless marketing claims are?
It's been done before, but what constitutes proof? If a reviewer or manufacturer conducts their own test and outlines their procedure, is that good enough? I mean you are pretty much taking their word for it, as you don't really have anything tangible to point to like you would with a medical study comparing treatment A vs. treatment B, where you can correlate the outcome with the test procedure.

More interesting and useful than marketing claims for sure, but not particularly useful in a practical sense since it comes down to the individual.
 
Joined
Oct 15, 2019
Messages
585 (0.31/day)
As to how, put whatever is being tested on a standardized jig and measure it.
And in high end audio, this is never done.

Those things don't exist in real time audio streams like USB Audio or SPDIF.
USB audio is not technically a 'real time audio stream', but a packet based one. It doesn't contain timing information at all in async mode, which is the one everyone currently uses.
And those things don't exist in the default USB audio driver, but there are no technical limitations on implementing error correcting code for audio transmission. The lack of effort from high end audio companies to implement such things simply speaks to the lack of need for them. Transmission errors are very rare.
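If anyone cared, redundancy is nearly free to add. A toy sketch of the idea (purely illustrative, not any real USB audio mechanism): one XOR parity packet per group lets the receiver rebuild a single lost packet without retransmission.

```python
# Toy forward error correction: data packets plus one XOR parity packet.
from functools import reduce

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def add_parity(packets):
    """Append an XOR-parity packet so any single lost packet is recoverable."""
    return packets + [reduce(xor, packets)]

def recover(received):
    """`received` is data+parity with at most one packet replaced by None."""
    if None not in received:
        return received[:-1]
    present = [p for p in received if p is not None]
    rebuilt = reduce(xor, present)              # XOR of the rest = missing one
    idx = received.index(None)
    return (received[:idx] + [rebuilt] + received[idx + 1:])[:-1]

group = [bytes([i] * 4) for i in range(4)]      # four tiny stand-in packets
sent = add_parity(group)
sent[2] = None                                  # one packet lost in transit
assert recover(sent) == group
```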
I think the difference here is that when you're dealing with something like vinyl you are never leaving analog.
So what? There are more errors present in a full analog setup than when using digital media.
How would that work? I mean most if not all companies do that kind of testing internally when comparing new products. Lots of reviewers do the same thing, but unless someone is there to observe and document the procedure you don't really know that they did it right or at all.
Name one high end audio company that does blind testing. From my experience, they tell people that ask that they consider it unnecessary. As for third party reviewers of high end audio gear, no one does blind testing.
As for the correctness of testing etc., they can just release papers describing the testing procedure, and welcome observers if someone wants to come see how it's done. Doesn't seem too complicated.
If a reviewer or manufacturer conducts their own test and outlines their procedure, is that good enough? I mean you are pretty much taking their word for it, as you don't really have anything tangible to point to
See above.

More interesting and useful than marketing claims for sure, but not particularly useful in a practical sense since it comes down to the individual.
Baseless claims are not simply ’less useful’ than based claims, they are harmful.
 
Joined
Jan 28, 2021
Messages
853 (0.61/day)
And in high end audio, this is never done.
It's done all the time. Speakers are measured in free space, in an anechoic chamber, or with a Klippel system (or similar). Electronics are measured by various analyzers or specific equipment like Audio Precision.
USB audio is not technically a 'real time audio stream', but a packet based one. It doesn't contain timing information at all in async mode, which is the one everyone currently uses.
And those things don't exist in the default USB audio driver, but there are no technical limitations on implementing error correcting code for audio transmission. The lack of effort from high end audio companies to implement such things simply speaks to the lack of need for them. Transmission errors are very rare.
With async USB, timing is still transmitted; you are just relying on the DAC's clock generator as the reference because PCs have horribly inaccurate clocks. USB audio is packet based, but it doesn't have the data protection schemes that file transfers or network connectivity rely on, which are the only reason those work at all. What the error rate is I don't know, but errors do happen, which is why the "it's all 1s and 0s" point of view is a fallacy and why quality matters.

I think the lack of effort on the transportation method just shows that resources are better spent elsewhere on the DAC. Async USB is pretty good, but if you look at some of the best very high-end DACs, the I2S interface would be an example of a superior interface.
So what? There are more errors present in a full analog setup than when using digital media.
Analog audio and digital audio are completely different things; you can't compare them. If you are analyzing the signal on an AP, digital is better in every regard, but that's not how we hear things. Human ears are not linear, and there are so many levels of psychoacoustics involved with sound that it makes direct comparisons between what is perceived to be better / more accurate and what measures better and more accurate very difficult, let alone drawing direct correlations between the two.
Name one high end audio company that does blind testing. From my experience, they tell people that ask that they consider it unnecessary. As for third party reviewers of high end audio gear, no one does blind testing.
As for the correctness of testing etc., they can just release papers describing the testing procedure, and welcome observers if someone wants to come see how it's done. Doesn't seem too complicated.
Schiit did a test with Audio Head comparing all four of their pre-amps. I've seen in a few interviews with Schiit that when they are internally testing different prototypes they are unlabeled and make the rounds with the team members, and the design that people like the most wins and goes to production. One of the more popular YouTube video guys did a test with four different RCA cables and was able to rank them. I've seen similar tests with different degrees of procedure and documentation, but nobody is publishing papers or anything like that.
Baseless claims are not simply ’less useful’ than based claims, they are harmful.
I don't think anyone is at risk of being harmed; it's just subjective impressions of audio gear.
 
Joined
Oct 15, 2019
Messages
585 (0.31/day)
I don't think anyone is at risk of being harmed; it's just subjective impressions of audio gear.
Someone making financial decisions based on baseless claims is being harmed.
It's done all the time. Speakers are measured in free space, in an anechoic chamber, or with a Klippel system (or similar). Electronics are measured by various analyzers or specific equipment like Audio Precision.
Really? Take, for example, the item of this topic: do you think any measurement electronics were used to determine the change in quality it makes to sound reproduction?
With async USB, timing is still transmitted
No. The USB packets contain zero timing information. It is basically an interrupt based system dictated by the DAC.

What the error rate is I don't know, but errors do happen, which is why the "it's all 1s and 0s" point of view is a fallacy and why quality matters.
How much does it matter though? I guess you don’t know that either. If there is one error per year of listening, does it make sense to spend big bucks on the cables?
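Back-of-envelope numbers, with the bit error rate as a pure assumption for illustration (not a measurement of any real cable):

```python
# How often would bit errors land in CD-quality audio at an assumed BER?
bitrate = 44_100 * 16 * 2            # CD audio: ~1.41 Mbit/s
ber = 1e-12                          # assumed raw bit error rate of a sane link
seconds_per_year = 3600 * 24 * 365

bits_per_year = bitrate * seconds_per_year
print(f"{bits_per_year:.3g} bits/year -> {bits_per_year * ber:.0f} bit errors/year")
# ~4.45e13 bits/year -> ~45 bit errors/year, and a flipped low-order bit sits
# far below audibility anyway.
```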

Analog audio and digital audio are completely different things; you can't compare them.
Both are audio, of course you can compare them. You could, for example, do a blind study to compare them. ;)
One of the more popular YouTube video guys did a test with four different RCA cables and was able to rank them.
Link please.

Schiit did a test with Audio Head comparing all four of their pre-amps.
Not really 'high end', the devices are pretty cheap. Anyway, I've never stated that you wouldn't be able to hear differences in amps.

And not a proper blind test either, btw. The listeners were always told when the equipment changed, and which of the four it was switched to (a, b, c, d). It was 'blind' only to the degree that the participants weren't told which letter denotes which equipment. This produces huge confirmation bias.
I've seen in a few interviews with Schiit that when they are internally testing different prototypes they are unlabeled and make the rounds with the team members, and the design that people like the most wins and goes to production.
If they are unlabeled, how can they select what they like? And if they are labeled, it’s not a proper blind test.
if you look at some of the best very high-end DACs, the I2S interface would be an example of a superior interface.
Very best? How is that determined? Price? :)
And how exactly is i2s superior? It has better timing characteristics than spdif, but if we compare to async USB, why is it better?
 
Joined
May 2, 2022
Messages
1,633 (1.73/day)
Location
G-City, UK
System Name AMDWeapon
Processor Ryzen 7 7800X3D
Motherboard X670E MSI Tomahawk WiFi
Cooling Thermalright Peerless Assassin 120 ARGB with Silverstone Air Blazer 2200rpm fans
Memory G-Skill Trident Z Neo RGB 6000 CL30 32GB@EXPO
Video Card(s) Powercolor 7900 GRE Red Devil
Storage Samsung 870 QVO 1TB x 2, Lexar 256 GB, TeamGroup MP44L 2TB, Crucial T700 1TB, Seagate Firecuda 2TB
Display(s) 32" LG UltraGear GN600-B
Case Montech 903 MAX AIR
Audio Device(s) Steel Series Arctis Nova Pro Wireless
Power Supply MSI MPG AGF 850 watt gold
Mouse Glorious Model D l Pad GameSir G7 SE
Keyboard Redragon Vara K551P
Software Windows 11 Pro 24H2
Benchmark Scores Some points. More than your setup!
If I can't hear any better, it's bollocks. I'm by no means an audiophile, but I LOVE my drum and bass. How is an SSD going to make that sound any better than what it already is? It's nonsensical fantasy land crap. My Lexar SSD is audiophile cause it's got my music on it. It doesn't have a fancy codec or anything like that, just my tunes, mostly from YouTube. Someone sang a real hip-hop song, his name was Flavor Flav, and it goes "Don't believe the hype!!!"
 
Joined
Jan 28, 2021
Messages
853 (0.61/day)
Someone making financial decisions based on baseless claims is being harmed.
There is a difference between knowingly making fraudulent claims and making claims that don't meet your personal standard.
Really? Take, for example, the item of this topic: do you think any measurement electronics were used to determine the change in quality it makes to sound reproduction?
On the topic of an audiophile SSD or DACs? The SSD I doubt it, but maybe; if they did it would be good for a laugh I guess. With DACs, of course; it would be blind luck if you got something like a DAC to work without the use of measurement equipment in the design process, let alone designed to the goal of a specific performance threshold.
No. The USB packets contain zero timing information. It is basically an interrupt based system dictated by the DAC.
Ok. I'll take your word for it, or if you have an easy to read link. Still, however the USB audio protocol works, timing is an intrinsic function for it to work, and while relying on the DAC's clock is better than the PC's clock, it doesn't make it immune to the problems outlined previously.
How much does it matter though? I guess you don’t know that either. If there is one error per year of listening, does it make sense to spend big bucks on the cables?
This is a data science question. I would argue it probably matters more than most people think. Network connectivity and file transfers work only because the protocols used to protect the data are in place. When you transfer a file over a CAT5 cable, the binary bits are represented by analog voltage differences. It's not the binary on / off signal a computer needs it to be to work with it; it's up to the receiver to determine what the voltage state at that exact moment in time represents. Errors are happening all the time and get corrected in these processes. It's why, for example, you can push a CAT5 cable past its specified max run and it will work without getting corrupt data; the speed will just be reduced.
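A toy model of that correct-as-you-go loop (the error rate is wildly exaggerated so the retries actually show up; the channel and CRC here are illustrative):

```python
# Detect-and-retransmit: the resilience bulk transfers have and audio streams skip.
import random, zlib

def noisy_channel(payload, ber=1e-4):
    """Flip each bit with probability `ber`."""
    out = bytearray(payload)
    for i in range(len(out)):
        for bit in range(8):
            if random.random() < ber:
                out[i] ^= 1 << bit
    return bytes(out)

def send_with_retransmit(payload, max_tries=10):
    crc = zlib.crc32(payload)
    for attempt in range(1, max_tries + 1):
        received = noisy_channel(payload)
        if zlib.crc32(received) == crc:         # receiver ACKs a clean packet
            print(f"delivered intact on attempt {attempt}")
            return received
    raise IOError("link too noisy")

send_with_retransmit(bytes(1500))               # one MTU-sized packet
```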

Digital audio streams use the same fundamental process but don't have the same resilience, and because of that are fundamentally different. That is why an audiophile SSD makes no sense but cable quality is a factor even in digital audio. I'm not advocating for big dollar high-end digital cables and never was. Personally I'm skeptical, but on the principle of how digital audio works, differences are possible.
Both are audio, of course you can compare them. You could, for example, do a blind study to compare them. ;)
You can compare them to the extent of which you prefer and why, but the reasons you would prefer one or the other are fundamentally different and not comparable.
Link please.
I'll have to follow up when I'm at home.
Not really 'high end', the devices are pretty cheap. Anyway, I've never stated that you wouldn't be able to hear differences in amps.

And not a proper blind test either, btw. The listeners were always told when the equipment changed, and which of the four it was switched to (a, b, c, d). It was 'blind' only to the degree that the participants weren't told which letter denotes which equipment. This produces huge confirmation bias.
"High-end" is subjective. For my audio budget anything in the $500-1000 range is high-end.

Yeah, it's not a perfect test, but that doesn't make it meaningless either.
If they are unlabeled, how can they select what they like? And if they are labeled, it’s not a proper blind test.
From what I remember of the interview, they pass around generic prototypes not knowing whose design they have and just live with them for a few days. They are labeled, or at least identifiable enough to differentiate the devices, but they don't know specifically what it is or who is responsible for it.

Lol, wow, I never said it was a proper blind test.
Very best? How is that determined? Price? :)
And how exactly is i2s superior? It has better timing characteristics than spdif, but if we compare to async USB, why is it better?
Reviews.

The clock and data are transmitted separately, so it's better, and better than USB for all the previously mentioned reasons that async USB isn't infallible.
How is an SSD going to make that sound any better than what it already is?
Yeah, the SSD is dumb and makes zero sense. We're not even talking about that anymore.
 
Joined
Oct 15, 2019
Messages
585 (0.31/day)
"High-end" is subjective. For my audio budget anything in the $500-1000 range is high-end.
Fair enough. For me it has more to do with moving away from technical specifications to ’subjective opinions’ when it comes to audio quality that can be achieved with the products.

The clock and data are transmitted separately, so it's better, and better than USB for all the previously mentioned reasons that async USB isn't infallible.
But i2s is more fallible than usb. It doesn’t have any error correction, or even error detection mechanisms. The signaling isn’t even differential. The only benefit is the clock signal, but it’s really unnecessary unless you have multiple digital things that you want clocked together. I don’t know of any real use for such a setup. It’s usually used inside a DAC, connecting the USB chip and the actual DAC chip together. It was never intended for connecting separate devices together, and isn’t really suited for it.

It’s better than spdif in some usages, I’ll give you that.

I would argue it probably matters more than most people think.
Why do you think that?

In file transfers the error correction is strictly mandatory; in audio it is technically not. How much it matters is a matter of how often the errors happen. If it was truly a problem, then it would get solved by data science, not expensive cables.

If it was a problem, you could test it by comparing the sound quality of two inputs on a device that supports both USB and DLNA (over TCP/IP). One has error correction, the other just error detection.

The KEF LS50 Wireless II, for example, as the device to do the tests with.
 
Joined
Apr 13, 2022
Messages
1,174 (1.22/day)
Fancy ass cables matter for analogue kinda, they don't for digital really. Even then you don't need to spend $$$$$$$$$$$ just get you a copper or silver cable and call it a day.
 
Joined
Jul 31, 2014
Messages
272 (0.07/day)
Location
Singapore
System Name Garbage / Trash
Processor Ryzen 5600X / 5600
Motherboard MSI B450M Mortar Ti / GB B550M Aorus Pro-P
Cooling Deepcool GTE / ID Cooling SE224XT
Memory Micron 32GB DDR4-3200 E-die @ 3600C16 / Ballistix Elite 16GB 3600C16
Video Card(s) MSI 2060 Super Armor OC / Zotac 3070 Twin Edge
Storage HP EX920 1TB Micron 1100 2TB, Crucial M550 1TB, Hynix P31 1TB
Display(s) Acer XB271HU @ 150Hz x2
Audio Device(s) JBL LSR305 + Topping D50S / iLoud MM + SMSL DO100
Power Supply Seasonic Focus Plus 850W / G-series 650W
Mouse Logitech G304 x2
Fancy ass cables matter for analogue kinda, they don't for digital really. Even then you don't need to spend $$$$$$$$$$$ just get you a copper or silver cable and call it a day.

Just don't tell the owner of fancy pants 4+ figure audio cables about the fact that every speaker and audio electronics manufacturer is comparatively so much more of a cheapskate when it comes to the innards.
 
Joined
Apr 13, 2022
Messages
1,174 (1.22/day)
Just don't tell the owner of fancy pants 4+ figure audio cables about the fact that every speaker and audio electronics manufacturer is comparatively so much more of a cheapskate when it comes to the innards.

Those people are idiots to start with and there is no reasoning with them. Is a $4k HDMI cable worth it? Fuck no. Nor is a $4k copper cable for speakers. Now is there a difference between a 10 buck HDMI cable and a 100 buck one? Fuck no. But is there a difference between a 10 buck cheap recycled metal speaker cable and a 100 buck pure copper speaker cable? Hell yes. However, if we are talking about "spending 10x as much for a device", that money is better spent on better speakers than cables. The issue is that 10x the cost of a cable is something people can still afford and feel good about; now look at a good pair of 2000 buck speakers and 10x that, and people shit bricks.

Or just do what I do. Buy a spool of copper wire at like 500 bucks and make your own damn cables, and if it breaks, who cares, just make another. All-copper shielded wire isn't hard to find. Strip the shielding at the ends, twist up, terminate the end and off you go! You have all pure copper wire for decades.
 
Joined
Jan 28, 2021
Messages
853 (0.61/day)
Link to the interconnect cable test I mentioned earlier. From what I remember it wasn't a good enough test that anyone could definitively point to it as proof, but unless you think he was doing something nefarious it's legit enough.

Fair enough. For me it has more to do with moving away from technical specifications to ’subjective opinions’ when it comes to audio quality that can be achieved with the products.
I'm not sure I get what you mean. Are you saying that once audio becomes "high-end" it's all about subjective and not objective results?
But i2s is more fallible than usb. It doesn’t have any error correction, or even error detection mechanisms. The signaling isn’t even differential. The only benefit is the clock signal, but it’s really unnecessary unless you have multiple digital things that you want clocked together. I don’t know of any real use for such a setup. It’s usually used inside a DAC, connecting the USB chip and the actual DAC chip together. It was never intended for connecting separate devices together, and isn’t really suited for it.

It’s better than spdif in some usages, I’ll give you that.
From my understanding, keeping the clock signals (there are multiple) separate from the data is what makes it better for digital audio.

Interesting article from Hackaday on I2S and its use cases.
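My rough mental model of the format, as a toy sketch (simplified: real I2S shifts data one bit clock after the word select change, and this skips MCLK entirely): the data line means nothing without the clock lines that frame it.

```python
# Bit-bang one stereo I2S frame as (bclk, lrclk, sdata) tuples, MSB first.
def i2s_frame(left, right, bits=16):
    for lrclk, sample in ((0, left), (1, right)):   # word select: 0 = left word
        for bit in reversed(range(bits)):
            yield (1, lrclk, (sample >> bit) & 1)   # data is valid per BCLK edge

for wires in i2s_frame(0x1234, 0xABCD):
    print(wires)
```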
Why do you think that?

In file transfers the error correction is strictly mandatory; in audio it is technically not. How much it matters is a matter of how often the errors happen. If it was truly a problem, then it would get solved by data science, not expensive cables.

If it was a problem, you could test it by comparing the sound quality of two inputs on a device that supports both USB and DLNA (over TCP/IP). One has error correction, the other just error detection.

The KEF LS50 Wireless II, for example, as the device to do the tests with.
I say that because at the end of the day these signals are still analog transmissions, which we've already covered, and when it comes to sampling high frequencies you are talking about minuscule moments in time. Digital audio, being a stream without error correction, is susceptible to minor errors where a difference in cable quality could make an impact. Data science can only go so far if you don't have the physical medium to support the type of data you are transmitting.

Again, I'm not advocating for high-end (digital) cables, but am stating the reasons why "it's just digital 1's and 0's" is wrong and how it's possible for a cable to make a difference. Or at least trying to understand how a cable could make a difference; I'm completely open to having it proven to be BS.

Yeah, that would be an interesting test. I don't know what DAC is in the KEF, but that is supposed to be a really good speaker (at least the regular LS50 is), so it should stand a chance of being good enough to pick up differences.
 
Joined
Oct 15, 2019
Messages
585 (0.31/day)
but am stating the reasons why "it's just digital 1's and 0's" is wrong and how it's possible for a cable to make a difference.
A cable can definitely make a difference, but it requires packet loss. If packet loss were a large problem, it would be handled with slightly larger buffers and packet retransmission, or packet redundancy.

In SPDIF, for example, there are no checksums in place, which means that transmission errors would be immediately audible, and I have never heard an audible crack in my living room setup. So either I'm deaf, or errors don't happen at a frequency that actually matters.
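For scale, how loud a single flipped bit in a 16-bit sample is depends entirely on which bit flips:

```python
# Click level of one flipped bit in 16-bit PCM, in dB relative to full scale.
from math import log10

FULL_SCALE = 2 ** 15          # 16-bit signed PCM

for bit in (0, 7, 14):        # LSB, a middle bit, the bit below the sign bit
    print(f"bit {bit:2d} flip -> {20 * log10(2 ** bit / FULL_SCALE):6.1f} dBFS")
# bit  0 flip ->  -90.3 dBFS (inaudible at normal listening levels)
# bit  7 flip ->  -48.2 dBFS
# bit 14 flip ->   -6.0 dBFS (a loud, obvious click)
```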

With USB audio, if an error happens, it will just be interpolated over. Maybe this is the problem here: if errors were always clearly audible, you'd believe me when I say that they don't happen often enough to matter.

As for the British audiophile and his tests, I call total bullshit on the dude. Not that he knows that what he does is bullshit, but I wouldn't buy anything based on his biased crap.

From my understanding, keeping the clock signals (there are multiple) separate from the data is what makes it better for digital audio.
But it's only needed if you do digital signal processing and want to keep the processing delays to a minimum. Use cases like that for home equipment are nonexistent.
 
Joined
Jan 28, 2021
Messages
853 (0.61/day)
A cable can definitely make a difference, but it requires packet loss. If packet loss were a large problem, it would be handled with slightly larger buffers and packet retransmission, or packet redundancy.

In SPDIF, for example, there are no checksums in place, which means that transmission errors would be immediately audible, and I have never heard an audible crack in my living room setup. So either I'm deaf, or errors don't happen at a frequency that actually matters.

With USB audio, if an error happens, it will just be interpolated over. Maybe this is the problem here: if errors were always clearly audible, you'd believe me when I say that they don't happen often enough to matter.

As for the British audiophile and his tests, I call total bullshit on the dude. Not that he knows that what he does is bullshit, but I wouldn't buy anything based on his biased crap.
Would larger buffers in a real time stream where timing is critical even be beneficial? DACs have very precise clocks to keep everything operating in the same time domain, and it's supposition on my part, but a large buffer would probably present a pretty big challenge to keep in sync.

Just because there are checksums in USB doesn't mean USB audio is immune to errors, as those checksums only apply to the word data, not the clock data. Errors in the word data should get corrected through interpolation, and if they don't, they result in an audible error; timing errors to the DAC and within the DAC, though, are not going to be corrected this way. That's why high-end DACs have precise clocks and why I2S interfaces with clocks on a different cable exist.

What's bullshit with his test? I haven't rewatched it in full, but it's a pretty simple test from what I remember. If he doesn't know which cable is being tested and he's just identifying the cable, I don't see where the problem lies.
But it's only needed if you do digital signal processing and want to keep the processing delays to a minimum. Use cases like that for home equipment are nonexistent.
It's all processing though. Things like DSP EQ and room correction are timing sensitive, but that processing is happening before the D/A conversion process.
 
Joined
Oct 15, 2019
Messages
585 (0.31/day)
Would larger buffers in a real time stream where timing is critical even be beneficial? DACs have very precise clocks to keep everything operating in the same time domain, and it's supposition on my part, but a large buffer would probably present a pretty big challenge to keep in sync.
A bigger buffer makes timing and sync simpler, because your data input from the PC doesn't need as tight timing tolerances and you can load longer parts of the audio at once. Should you buffer full songs, the PC communication would be completely unlinked from the timing of data on the DAC. Compared to having absolutely zero buffers, you'd need a 16-bit sample per channel from the PC every 1/44100th of a second, and any timing errors would be immediately noticeable in the audio output.

The only downside to increased buffering is added playback delay, but that's a problem only in enthusiast level gaming and real time audio systems (like running filter loops through your PC when playing a guitar). Many of the 'high-end' DACs feature much longer buffers compared to the basic stuff.
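The tradeoff in numbers (buffer sizes picked arbitrarily for illustration):

```python
# Buffer duration is just frames over sample rate; latency is the only cost.
RATE = 44_100                        # samples/s per channel
FRAME = 2 * 2                        # 16-bit stereo: 4 bytes per sample frame

for buf_bytes in (64, 4096, 65536):
    frames = buf_bytes // FRAME
    print(f"{buf_bytes:6d} B buffer = {frames:6d} frames = "
          f"{1000 * frames / RATE:7.2f} ms of audio")
#     64 B buffer =     16 frames =    0.36 ms of audio
#   4096 B buffer =   1024 frames =   23.22 ms of audio
#  65536 B buffer =  16384 frames =  371.52 ms of audio
```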


Just because there are checksums in USB doesn't mean USB audio is immune to errors, as those checksums only apply to the word data, not the clock data.
The clock data holds no information in async USB audio. I thought we went through this already. It only matters if the packet comes so late that the playback buffer on the DAC is already empty, producing a clearly audible gap in the music, or a click.

What's bullshit with his test?
It might be that I just don't know where to look, but he doesn't publish his test setup or anything like that. I assume that his test methodology is full of bias, based on the lack of transparency. He doesn't even tell what interconnect he is testing the cables for. PC and DAC? DAC and amp? Who knows.

It's all processing though. Things like DSP EQ and room correction are timing sensitive, but that processing is happening before the D/A conversion process.
Yup. All the i2s stuff happens before the D/A conversion. I personally don't understand what the use case is for a separate real time device connected to the DAC via i2s. All the same processing can be done on the PC, and the audio will anyway come from a non timed source like a PC (via async USB audio), a network drive or local mass media.
 
Joined
Jan 28, 2021
Messages
853 (0.61/day)
A bigger buffer makes timing and sync simpler, because your data input from the PC doesn't need as tight timing tolerances and you can load longer parts of the audio at once. Should you buffer full songs, the PC communication would be completely unlinked from the timing of data on the DAC. Compared to having absolutely zero buffers, you'd need a 16-bit sample per channel from the PC every 1/44100th of a second, and any timing errors would be immediately noticeable in the audio output.

The only downside to increased buffering is added playback delay, but that's a problem only in enthusiast level gaming and real time audio systems (like running filter loops through your PC when playing a guitar). Many of the 'high-end' DACs feature much longer buffers compared to the basic stuff.
I'm just referring to the DAC's internal buffer; that's where the timing is critical.
The clock data holds no information in async USB audio. I thought we went through this already. It only matters if the packet comes so late that the playback buffer on the DAC is already empty, producing a clearly audible gap in the music, or a click.
There are several clocks in any digital audio stream, and I'm not really sure what you mean by it holds no information.

We probably did go through this already, and I probably already said this, but the audio can have errors and playback will continue without dropouts or audible artifacts.
It might be that I just don't know where to look, but he doesn't publish his test setup or anything like that. I assume that his test methodology is full of bias, based on the lack of transparency. He doesn't even tell what interconnect he is testing the cables for. PC and DAC? DAC and amp? Who knows.
Yeah, idk, it's not at a scale, or transparent enough, to be conclusive or anything, but I don't get the impression he's making anything up. If I remember correctly he basically recommended the more affordable professional interconnects. I thought he listed the test procedure and equipment... I guess I'll have to watch it again in full.

I will say though that a test like that is no trivial thing; blind listening tests are time consuming and difficult to do. I've only done it with lossy vs lossless audio, so no hardware to swap in or out, and even that was a time consuming process that you (or at least I) can only do for so long before fatigue sets in. I would like to see someone put the effort in and do it at a bigger scale though.
Yup. All the i2s stuff happens before the D/A conversion. I personally don't understand what the use case is for a separate real time device connected to the DAC via i2s. All the same processing can be done on the PC, and the audio will anyway come from a non timed source like a PC (via async USB audio), a network drive or local mass media.
Well, everything in the context of what we're talking about happens before the D/A conversion. I2S is just a different way to stream from the host device to the DAC.
 
Joined
Oct 15, 2019
Messages
585 (0.31/day)
I'm just referring to the DAC's internal buffer; that's where the timing is critical.
Me too.

There are several clocks in any digital audio stream, and I'm not really sure what you mean by it holds no information.
I mean that the packet’s arrival time holds no information. The packet just needs to arrive before the playback buffer is empty. No timing on the DAC relies on the packets arriving with any higher precision.

the audio can have errors and playback will continue without dropouts or audible artifacts.
Yup, if there is packet loss.
I2S is just a different way to stream from the host device to the DAC.
But why is it necessary? Why not just have the host on the same device as the dac? Or use async mode?

For example, if we had an i2s interface card in a PC, wouldn't the timing issue simply move from the PC-DAC interface to the processor-to-interface-card interface (which would run in async mode because of PCI Express)? I don't understand the point.

From what I’ve seen, it’s just a way to sell people shit they don’t need, like dedicated playback devices (that use async communication to get the data from storage media anyway, because that’s the way flash storage works), or to add buffering boxes, retimers, DSP boxes, or other stuff that could either be part of the dac, or done in a non-realtime fashion to the media being played.

Basically all digital media is read in an async way. Adding an i2s connection somewhere in no way fixes this "problem".
 
Joined
Jan 28, 2021
Messages
853 (0.61/day)
Well, you can't just increase buffers without consequence. DACs use very accurate clocks to keep all their functions in sync, including the incoming data stream. Putting a bunch of data in a buffer would compound the issue of keeping everything in the stream in sync.
I mean that the packet’s arrival time holds no information. The packet just needs to arrive before the playback buffer is empty. No timing on the DAC relies on the packets arriving with any higher precision.
Right, but in a real time stream the data and the timing are integral. It doesn't matter if the data gets to the DAC correctly if the timing of any of the clocks associated with that particular piece of data is wrong.
But why is it necessary? Why not just have the host on the same device as the dac? Or use async mode?
Because it does a better job of preserving the timing and data by transmitting them over dedicated conductors. If you read or listen to DAC designers talk about interfaces, USB is pretty universally disliked due to the noise of the interface and all the other traffic on the bus. It's only used because it's convenient, and because async USB is vastly better than TOSLINK or SPDIF from a PC, which are garbage sources.
From what I’ve seen, it’s just a way to sell people shit they don’t need, like dedicated playback devices (that use async communication to get the data from storage media anyway, because that’s the way flash storage works), or to add buffering boxes, retimers, DSP boxes, or other stuff that could either be part of the dac, or done in a non-realtime fashion to the media being played.
I think that's a convenient answer given the excess and gatekeeping nature that tends to prevail in audiophile circles, because that stuff certainly does exist (the product that spawned this thread), but that doesn't make everything "high-end" overly complex "shit that people don't need" with no tangible benefit. There are so many simpler ways to con people out of their money than developing new high performance DACs and the associated controllers and proprietary interfaces that go along with them.
 
Joined
Oct 15, 2019
Messages
585 (0.31/day)
Because it does a better job of preserving the timing and data by transmitting them over dedicated conductors.
But the data being played as audio does not come from any ’timed’ source anyway. Flash memory doesn’t spin at a constant speed or anything like that.

There is a need only for a single clock source for the DAC and any reliable method for getting data to its playback buffer in time. All modern systems fetch the data from a non timed source.
Well, you can't just increase buffers without consequence. DACs use very accurate clocks to keep all their functions in sync, including the incoming data stream. Putting a bunch of data in a buffer would compound the issue of keeping everything in the stream in sync.
This view of yours doesn’t have any standing in reality. Increasing the buffer of the DAC makes timing everything much easier, as you have less and less dependency on other devices. Everything in the stream doesn’t need to be ’in sync’, as the only thing you hear is the rate at which the last piece of the pipeline processes stuff.

You won't be able to hear when you ripped the song onto your hard drive, and you won't be able to hear when that song got buffered to your DAC. What matters is that the DAC loads bits off the last buffer at a constant (and correct) rate.
 
Joined
Jan 28, 2021
Messages
853 (0.61/day)
But the data being played as audio does not come from any ’timed’ source anyway. Flash memory doesn’t spin at a constant speed or anything like that.

There is a need only for a single clock source for the DAC and any reliable method for getting data to its playback buffer in time. All modern systems fetch the data from a non timed source.
Once it's read off of whatever the storage media is, processed, and transmitted to the DAC, it's timed. And it's not just a single clock shared between the DAC and host; there are several muxed together that have to be separated and then processed by the DAC.

This view of yours doesn’t have any standing in reality. Increasing the buffer of the DAC makes timing everything much easier, as you have less and less dependency on other devices. Everything in the stream doesn’t need to be ’in sync’, as the only thing you hear is the rate at which the last piece of the pipeline processes stuff.

You won't be able to hear when you ripped the song onto your hard drive, and you won't be able to hear when that song got buffered to your DAC. What matters is that the DAC loads bits off the last buffer at a constant (and correct) rate.
Except that no DAC to my knowledge has ever leveraged a larger buffer as a technique to preserve the integrity of the stream. It would be pretty easy to add some memory or cache in a DAC, give it some kind of easily marketable audiophile name, and mark it up 200%. Instead you get complex solutions like I2S and crazy things like temperature controlled clock generators. If a larger buffer were effective, I think it would have been done before. I also don't really see how that would make things easier, as the more data you store, the more work you have to do to keep track of it and manage it.

What is stored on your filesystem is not the same thing as what is stored in the buffer of your DAC, even if it's raw PCM. How it's transmitted is interface specific, and in the case of PCM it gets segmented into tiny frames (sample rate dependent), not blocks of data like what's on your hard drive (in whatever filesystem it happens to be using), and continuously processed. If any of it doesn't get there at the right time, or a bit is misinterpreted, it still continuously processes the stream with a loss of quality, not an audible artifact; only when the process completely breaks down do you get audible glitches.
 
Joined
Oct 15, 2019
Messages
585 (0.31/day)
Once it's read off of whatever the storage media is, processed, and transmitted to the DAC, it's timed
Except that you can think of the previous (async) step as always being the storage media. The DAC doesn't know when a certain bit was read from the storage media to CPU cache, to RAM, to cache, to PCIe, to the USB controller, to the USB receiver buffer, to the i2s connecting the USB receiver to the actual DAC chip. Only the last step is actually timed.


and it's not just a single clock shared between the DAC and host; there are several muxed together that have to be separated and then processed by the DAC.
Nope. There is just a single clock on the DAC, which is split up and combined to create any other (minor) clocks that are necessary.
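For example, with a common 24.576 MHz audio master oscillator (the divider values are one typical configuration, shown for illustration):

```python
# One master clock, integer-divided into every other clock the DAC needs.
MCLK = 24_576_000            # Hz, a standard audio master clock frequency

fs   = MCLK // 512           # word clock (sample rate): 48 kHz
bclk = fs * 64               # I2S bit clock: 64 bit slots per stereo frame
print(f"fs = {fs} Hz, BCLK = {bclk} Hz, MCLK/BCLK = {MCLK // bclk}")
# fs = 48000 Hz, BCLK = 3072000 Hz, MCLK/BCLK = 8
```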


Except that no DAC to my knowledge has ever leveraged a larger buffer as a technique to preserve the integrity of the stream.
For example, the ARES II has a 10+ ms buffer. Many other "high-end" DACs have similar stuff.

Instead you get complex solutions like I2S and crazy things like temperature controlled clock generators.
A temperature controlled main clock source makes sense and can actually affect how things sound. I2S is used in even most sub-$50 DACs, just internally. It's just a basic board level interconnect, nothing crazy. What's crazy is trying to use it for something it was never designed for, and is not good for, like connecting a PC to a DAC.
I also don't really see how that would make things easier, as the more data you store, the more work you have to do to keep track of it and manage it.
The size of a buffer makes no difference in the amount of work needed to ’keep track of and manage it’. You can utilize the same exact data handling code for buffers of almost any size, by just changing one input parameter.
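A minimal ring buffer sketch to show what I mean; capacity is one constructor argument and nothing else in the logic changes:

```python
# Same push/pop code whether the buffer holds 64 bytes or a megabyte.
class RingBuffer:
    def __init__(self, capacity):
        self.buf = bytearray(capacity)
        self.capacity = capacity
        self.read = self.write = self.fill = 0

    def push(self, data):
        n = min(len(data), self.capacity - self.fill)   # accept what fits
        for i in range(n):
            self.buf[(self.write + i) % self.capacity] = data[i]
        self.write = (self.write + n) % self.capacity
        self.fill += n
        return n                                        # bytes accepted

    def pop(self, n):
        n = min(n, self.fill)
        out = bytes(self.buf[(self.read + i) % self.capacity] for i in range(n))
        self.read = (self.read + n) % self.capacity
        self.fill -= n
        return out

tiny, huge = RingBuffer(64), RingBuffer(1 << 20)        # same code, either size
```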

What is stored on your filesystem is not the same thing as what is stored in the buffer of your DAC, even if it's raw PCM. How it's transmitted is interface specific, and in the case of PCM it gets segmented into tiny frames (sample rate dependent), not blocks of data like what's on your hard drive (in whatever filesystem it happens to be using), and continuously processed. If any of it doesn't get there at the right time, or a bit is misinterpreted, it still continuously processes the stream with a loss of quality, not an audible artifact; only when the process completely breaks down do you get audible glitches.
It is the same bits; they are just repackaged into differing lengths depending on the transfer interface. For example, on USB the DAC just sends a request for the 'next n bytes of data', and the CPU then fetches them from RAM or HDD, packages them into the USB packet, and sends it off.
If the bits were different, it would sound different.
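The repackaging point in one snippet (chunk sizes are illustrative; 192 bytes is roughly 1 ms of 16-bit 48 kHz stereo):

```python
# Slicing the same PCM bytes into disk-sized blocks or USB-sized packets
# changes nothing about the bits themselves.
import hashlib

pcm = bytes(range(256)) * 1000                  # stand-in for raw PCM data

blocks  = [pcm[i:i + 4096] for i in range(0, len(pcm), 4096)]   # disk-ish
packets = [pcm[i:i + 192]  for i in range(0, len(pcm), 192)]    # USB-frame-ish

assert b"".join(blocks) == b"".join(packets) == pcm
print(hashlib.sha256(pcm).hexdigest(), "- identical bits, different packaging")
```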
 