# An "Audiophile Grade" SSD—Yes, You Heard That Right



## btarunr (Dec 20, 2021)

A company dealing in niche audiophile-grade electronics on Audiophile Style, a popular site and marketplace for the community, has conjured up an SSD that it claims offers the best possible audio. Put simply, this is an M.2-2280 NVMe SSD with a fully independent power-delivery mechanism (one that's isolated from the motherboard's power delivery) and an over-the-top discrete clock source for its controller. The drive has its own 5 V 2-pin DC input and switching hardware onboard, including (get this) a pair of Audio Note Kaisei audio-grade electrolytic capacitors in place of what should have been simple solid-state SMD capacitors that are hard to even notice on any other drive. It doesn't end there.

Most NVMe SSDs have a tiny 2 mm x 2 mm SMD oscillator that the controller uses for clock generation. This drive features a Crystek CCHD-957 high-grade Femto oscillator. These oscillators are found in some very high-grade production or scientific equipment, such as data-loggers. For the drive itself, you get a Realtek DRAM-less controller, and a single 1 TB TLC NAND flash chip that's forced to operate in SLC mode (333 GB usable). On a scale of absurdity, this drive is right up there with $10,000 HDMI cables. Digital audio is stored in ones and zeroes, and nothing is accomplished through isolated power delivery or clock generation on the storage media. It's nice of the designers to include jumpers that let you switch between the discrete power source and motherboard power, so listeners can see the snake oil for themselves.




----------



## thesmokingman (Dec 20, 2021)

In for one to go with those Wireworld $10K HDMI cables!


----------



## Pepamami (Dec 20, 2021)

No vacuum tubes? Lame


----------



## Deleted member 24505 (Dec 20, 2021)

I bet loads of Fidiots will buy this ugly thing, though.

Isn't the external supply dependent on its own quality, though? One from a pound shop ain't gonna be better than using the motherboard's 5 V, is it?


----------



## Kissamies (Dec 20, 2021)

thesmokingman said:


> In for one to go with those wire world $10K hdmi cables!


I remember reading about 200EUR SATA cables for audio usage. Must be some amazing technology to improve the quality of those bits on the drive.


----------



## johnspack (Dec 20, 2021)

Oh, c'mon.... Monster Cables. Just silly.


----------



## elghinnarisa (Dec 20, 2021)

Don't forget the SD cards as well, gotta sound good on the go! https://www.theverge.com/2015/2/19/8068465/sony-memory-card-premium-sound-sr-64hxa


----------



## Thorsthimble (Dec 20, 2021)

Do they use oxygen-free copper in the PCB? FFS, lmao....


----------



## Deleted member 202104 (Dec 20, 2021)

Pepamami said:


> No vacuum tubes? Lame



^^ This ^^

Single-ended Triode or GTFO.


----------



## noel_fs (Dec 20, 2021)

Audiophile here. This is fucking retarded.


----------



## thesmokingman (Dec 20, 2021)

Kissamies said:


> I remember reading about 200EUR SATA cables for audio usage. Must be some amazing technology to improve the quality of those bits on the drive.


This stuff has always been kind of nutty, but since they got into digital, the whole grey area these scammers operated in is gone.

Anyone remember audiophile rocks? Or them crystals?



Audiophile Rocks – Down The Rabbit Hole Once Again. » Adventures in Hifi Audio


----------



## RealKGB (Dec 20, 2021)

C to disappointment: no vacuum tubes, and they haven't even made an audiophile-grade filesystem yet.


----------



## Steevo (Dec 20, 2021)

Do they come with acoustic rocks?


----------



## fluxc0d3r (Dec 20, 2021)

I actually built an audiophile-grade PC back then with the Gigabyte H81 Amp-Up motherboard, which featured a gold-plated USB port that isolated the motherboard from the USB power. I did notice a difference in sound quality when paired with my DAC using a quality USB cable, compared to a conventional USB port. It was simply clearer sounding with less hash. Point is that it did not cost an arm and a leg; the motherboard was only $80 CAD back then.

Even using tantalum capacitors on motherboards improves the onboard audio quality. Too bad companies ignore stuff like this and instead charge an arm and a leg for a USB Type-C ESS Sabre DAC add-on in high-end motherboards.

What's the difference between getting a cheap motherboard that has USB Type-C and buying your own separate add-on DAC? Might as well incorporate USB isolators or better-quality capacitors into the board without huge markups.


----------



## Mistral (Dec 20, 2021)

So, you plug your headphones into that SSD?


----------



## Post Nut Clairvoyance (Dec 20, 2021)

fluxc0d3r said:


> I actually built an audiophile grade PC back then with the Gigabyte H81 Amp-up motherboard that featured a gold plated USB that isolated the motherboard from the USB power. I did notice a difference in sound quality when paired with my DAC using a quality USB cable, compared to a conventional USB port. It was simply clearer sounding with less hash. Point is that it did not cost an arm and a leg. The motherboard was only $80 cdn back then.
> 
> Even using tantalum capacitors on motherboards improves the onboard audio sound quality. Too bad companies ignore stuff like this and try to charge an arm and an leg for USB type-C ESS Sabre DAC add-on in high-end motherboards.
> 
> What's the difference in getting a cheap motherboard that has USB typ-C and buying your own separate add-on DAC. Might as well, incorporate USB isolators or better quality capacitors into the board without huge markups.


That is a lot of marketing nonsense on the H81 Amp-Up page. Gold-plating a USB socket does absolutely NOTHING, since one of the four USB 2.0 pins IS the ground; the socket/cover is not used for anything electrical.
I am guessing that the motherboard takes 5 V from the power supply and passes it through additional filtering circuits before outputting to that one USB port.
Other than that, if you are using your own USB DAC/amp, there are no gains to be had, because a motherboard can only offer so much for audio. There are only two changes (compared to any other mobo) on the Amp-Up: the additional 5 V filtering for that one specific USB port, and possibly more filtering for the onboard DAC/amp. If you want this "isolated USB", just buy a USB power/data splitter cable and use any good USB power adapter, like the Apple 5 W one almost everyone has, or even an analog supply, or batteries...
Buying an "audiophile" motherboard has always been misleading, since any self-respecting audiophile will have their own DAC/amp; the products are more for people who want to buy a board knowing it will not have problems with their basic music needs. The newest Sabre/USB Realtek boards from a bunch of flagship motherboards have implementation problems with audio anyway, so buying a separate good DAC/amp that will last you the lifespan of multiple PC upgrades is a much better idea.
Nowadays all but the cheapest, most cut-down boards have a decent, functional audio section, and if anything the high-end ones have problems with messing up the implementation.


----------



## Selaya (Dec 20, 2021)

I guess you could call this an idiot tax.


----------



## the54thvoid (Dec 20, 2021)

It ought to come with one of those ionising hologram bracelets. You know, to purify the air to allow more consistent sound amplification through equally spaced air molecules.


----------



## Octopuss (Dec 20, 2021)

Fuck me or what...
First thing I read after waking up is this, and I'm like hello, what dimension is this?


----------



## nguyen (Dec 20, 2021)

DJ be like


----------



## Fouquin (Dec 20, 2021)

Wow it took 17 days for this to get into the news cycle. Man I thought people would have picked this story up earlier, it's so absurd.


----------



## Tartaros (Dec 20, 2021)

I don't understand, why?


----------



## Kissamies (Dec 20, 2021)

Octopuss said:


> Fuck me or what...
> First thing I read after waking up is this, and I'm like hello, what dimension is this?


I bet you checked the calendar to make sure it's not April Fools' Day.


----------



## ipo3nk (Dec 20, 2021)

well.... 333 GB of 'forced SLC' on 1 TB of 3D TLC flash is much more interesting..... waiting for a TechPowerUp review.


----------



## INSTG8R (Dec 20, 2021)

ipo3nk said:


> well....  333GB 'forced SLC' in 1TB 3D TLC flash is much more interesting.....  waiting for techpowerup review.


That would mean someone might actually have to buy one…


----------



## Athlonite (Dec 20, 2021)

This is designed to go right along with the Audioidiot network switch that costs $2,500.


----------



## bogami (Dec 20, 2021)

Oh yeah! The numbers will definitely sound better.


----------



## TheUn4seen (Dec 20, 2021)

Tartaros said:


> I don't understand, why?


That's how you separate fools from their money. Technology is great for this purpose because “Any sufficiently advanced technology is indistinguishable from magic”. Even so-called tech enthusiasts have a very limited understanding of the technology they're so fond of, so it's surprisingly easy to fool them with meaningless but smart-sounding words.


----------



## Fry178 (Dec 20, 2021)

Some seem to forget that just because there isn't an improvement on paper doesn't mean it won't make a difference in the real world (even if it's just the user imagining things).
E.g., the "perfect" signal response (from an engineering point of view) is a flat line across all frequencies, yet that's not what we use for listening to music.

If someone has the funds, why would I care if they waste it on this? Definitely not gonna sour it for them.
If it's "cheap", I know I would get it, just for the pseudo-SLC stuff.


----------



## iO (Dec 20, 2021)

Burn-in time of the OS SSD improves the sound? RAID 0 for lowest latency and "sub-byte security"?! High-end CPUs for maximum sound clarity, like a 5950X with just one core enabled so as not to disturb the data on the SSD??!

Wow, that source link is a goldmine of grade-A+ audiophoolery.


----------



## mechtech (Dec 20, 2021)

All I have to say is

Bahahahaha


----------



## fluxc0d3r (Dec 20, 2021)

Post Nut Clairvoyance said:


> that is alot of marketing nonsense in the H81 Amp-up page. Gold plating a USB socket does absolutely NOTHING, since one of the 4 USB 2.0 pins IS the ground, the socket/cover is not used for anything electrical.
> I am guessing that they (the motherboard) takes 5V from the power supply ad pass it through additional filtering circuits before outputting to that one USB port.
> Other than that, if you are using your own USB DAC/Amp, there is no gains to be had because a motherboard can only offer so much for audio, there are only two changes(compare to any other mobo) on the Amp-up, the additional 5V filtering for that one specific USB port, and possibly more filtering for the onboard DAC/Amp. If you want this ""isolated USB"" just buy a USB power/data splitter cable and use any good usb power adapter like a apple 5w that almost everyone have, or even an analog supply, or batteries...
> Buying a ""audiophile"" motherboard has always been misleading, since any self-respecting audiophile will have their own DAC/Amp, the products are more just for people who wants to/thinks they are buy a board knowing it will not have problem with their basic music needs. The newest sabre/USB realtek boards from a bunch of flagship motherboards have audio problems with implementation anyways, so buying separate good DACAMP that will last you lifespan of multiple PC upgrades is a much better idea.
> Nowadays all but the cheapest most cut-down boards have decent, functional audio section, and if anything the high ends have problem with messing up the implementation.



I can hear differences in USB cables; that's why I use a Chord USB cable rather than a generic one. Yes, everyone will tell you a digital cable does not make a difference: it either works or it doesn't. I am a firm believer that better cables do make a difference. Even isolation feet, things you would usually ignore, actually bring out a difference in sound as well. Every little thing brings out a difference in sound; that's why hi-fi is frustrating to those who can hear a difference, and that's why you see many changing out their equipment often, much more often than someone who upgrades their PC components.

It sounds like snake oil, but even I can see and hear differences between HDMI cables on my brand-new Samsung QLED TV. I went with AOC (active optical) HDMI cables in the end, as they delivered a punchier, more vibrant, and smoother picture. You can say all of this is a waste of money; some even say it is money well spent even if the gains are small.


----------



## Octopuss (Dec 20, 2021)

fluxc0d3r said:


> I can hear differences in USB cables, that's why I use a Chord USB cable rather than a generic one. Yes, everyone will tell you a digital cable does not make difference- it either works or it doesn't. I am firm believer that better cables do make a difference. Even isolation feet, things you would usually ignore, actually bring out a difference in sound as well. Every little thing brings out a difference in sound, that's why hi-fi is frustrating to those who have can hear a difference and that's why you see many changing out their equipment often- much more often than someone who upgrades their PC components often.
> 
> It sounds like snake oil, but even I can see and hear differences between different HDMI cables on my brand new Samsung QLED TV. I went with AOC (optical) HDMI cables in the end as it delivered a punchier, more vibrant, and smoother picture. You can say all these are waste of money, some even say it is money well spent even if the gains are small.


You can't see or hear shit, my friend. It's in your head.
You obviously drank too much snake oil and there's no way to reverse the damage that has been done.


----------



## Steevo (Dec 20, 2021)

fluxc0d3r said:


> I can hear differences in USB cables, that's why I use a Chord USB cable rather than a generic one. Yes, everyone will tell you a digital cable does not make difference- it either works or it doesn't. I am firm believer that better cables do make a difference. Even isolation feet, things you would usually ignore, actually bring out a difference in sound as well. Every little thing brings out a difference in sound, that's why hi-fi is frustrating to those who have can hear a difference and that's why you see many changing out their equipment often- much more often than someone who upgrades their PC components often.
> 
> It sounds like snake oil, but even I can see and hear differences between different HDMI cables on my brand new Samsung QLED TV. I went with AOC (optical) HDMI cables in the end as it delivered a punchier, more vibrant, and smoother picture. You can say all these are waste of money, some even say it is money well spent even if the gains are small.



Digital truly does work or it doesn't. USB can carry so much more data than an audio stream that, with error correction, you could use a wire coat hanger to make the connection and it would sound as good as a $1,000 cable.


----------



## mechtech (Dec 20, 2021)

I prefer true analog myself, so I installed these in my PC.


----------



## Post Nut Clairvoyance (Dec 21, 2021)

fluxc0d3r said:


> I can hear differences in USB cables, that's why I use a Chord USB cable rather than a generic one. Yes, everyone will tell you a digital cable does not make difference- it either works or it doesn't. I am firm believer that better cables do make a difference. Even isolation feet, things you would usually ignore, actually bring out a difference in sound as well. Every little thing brings out a difference in sound, that's why hi-fi is frustrating to those who have can hear a difference and that's why you see many changing out their equipment often- much more often than someone who upgrades their PC components often.
> 
> It sounds like snake oil, but even I can see and hear differences between different HDMI cables on my brand new Samsung QLED TV. I went with AOC (optical) HDMI cables in the end as it delivered a punchier, more vibrant, and smoother picture. You can say all these are waste of money, some even say it is money well spent even if the gains are small.


I don't think everything is snake oil, but unless there is active image-processing circuitry inside the cable that requires external power (see Linus's $150 HDMI cable video), there will not be a difference in image quality; you will notice obvious visual artifacts or complete image loss instead. And this is distinct from an ordinary active cable, where the power is just used to carry the signal over a longer distance. Any lower-quality cable will either work or fail, without appreciable image differences.
A USB cable for audio is much simpler than HDMI. I still stand by the point that a gold-plated connector is purely cosmetic; the connector has nothing to do with isolation or signal integrity. A good USB cable may use copper conductors for power in high-current applications, with sufficient EMI shielding, but audiophile USB cables are just snake-oil pricing on an otherwise merely "good USB cable". I would check my own PSU for EMI shielding before suspecting the cabling, unless my setup is riddled with cables everywhere.
You can post reviews as a user, so if you want to show that difference, you can very well use cameras to make image comparisons and ask manufacturers what they did (i.e. image processing) that makes a perceived difference, if there is any. I don't think there is anything they could explain that anybody would be unable to understand.


----------



## OldAndSlowDev (Dec 21, 2021)

If only HDMI cables were properly shielded. I can hear the heater pulses in my subwoofers because the HDMI cable "follows" the heater power cable and acts as an antenna. I never got a clean signal coming from my PC, so maybe for some people this can be useful. I won't call this snake oil if it can solve some noise; a PC can generate a lot of noise because of all the high-frequency clocks.

I have to unplug the HDMI from the PC to remove the subwoofer humming.


----------



## R-T-B (Dec 21, 2021)

The only notable thing about this drive is that it runs the whole chip in SLC mode... I wouldn't mind paying a premium for that, but the audiophile stuff is absurd and only drives the price into the stratosphere.

The sad thing is, I would consider myself an audiophile (I do enjoy a good set of headphones) were it not for BS like this...



OldAndSlowDev said:


> If only hdmi cables were properly shielded. I can hear the heater pulses in my subwoofers because the hdmi cable is “following” the heater power cable and acts as an antenna.


HDMI audio is digital, i.e. it doesn't work like that. It's more likely something analog in your woofer picking it up.


----------



## INSTG8R (Dec 21, 2021)

I mean use a cheap DP cable on a high refresh monitor ¯\_(ツ)_/¯


----------



## Steevo (Dec 21, 2021)

OldAndSlowDev said:


> If only hdmi cables were properly shielded. I can hear the heater pulses in my subwoofers because the hdmi cable is “following” the heater power cable and acts as an antenna. I never got a clean signal coming from my pc so maybe for some people this can be useful. I won’t call this snake oil if it can solve some noise. A PC can generate a lot of noise because of all the high frequency clocks.
> 
> I have to unplug my hdmi from the PC to remove subwoofer humming.


Ground potential difference caused by different circuits and possibly faulty grounding. Your subwoofer/PC circuits offer an easier path to ground to balance the difference.


----------



## OldAndSlowDev (Dec 21, 2021)

Steevo said:


> Ground potential difference caused by different circuits and possibly faulty grounding. Your subwoofer/PC circuits offer a easier path to ground to balance the difference


Not a ground potential difference. AVR, subwoofer, and PC are on the same isolated ground. It's really an antenna effect. Even with the PC off, plugged in or unplugged, on the same or a different ground, I have humming. Unplugging the HDMI -> no humming.


----------



## MachineLearning (Dec 21, 2021)

It's amazing that a manufacturer has found such a creative way to harm the long-term reliability of an SSD: adding electrolytic capacitors to it. Those caps may live quite a while, but I'd bet they do not outlast SLC NAND.

Also, this will require a specialized heatsink due to the caps, crystal, and barrel plug. This is all without mentioning the exploitative and stupid idea behind the product. What a mess. I didn't know the world had enough snakes for all that oil.


----------



## RealKGB (Dec 21, 2021)

Hey, we have a price tag and a name now!





Revelation Audio SSD | Zzyzx (www.zzyzxphile.com)

Femto NVMe M.2 SSD designed for audiophiles and music lovers


----------



## claes (Dec 21, 2021)

Have to admit, that heatsink is baller.


----------



## Fry178 (Dec 21, 2021)

@Steevo
Then how can swapping out an HDMI (1.4) cable, on a device that acted up (BD player) / didn't respond to remote commands, for a 2.0 cable make a difference?
The (1.4) cables I tried were working on other things, so I know the only difference is the certification.


----------



## Octopuss (Dec 21, 2021)

R-T-B said:


> The only notable thing about this drive is it runs the whole chip in SLC mode...


What does this mean? I thought SLC was type of NAND. What's SLC *mode*?


----------



## Athlonite (Dec 21, 2021)

Octopuss said:


> What does this mean? I thought SLC was type of NAND. What's SLC *mode*?


The NAND used on this "device" is actually TLC (Triple-Level Cell), i.e. it can hold 3 bits per cell, but in this case the manufacturer has decided to limit it to just 1 bit per cell via its firmware.
NAND flash goes like this:

SLC = 1 bit per cell; the fastest for reads and writes, and also the most expensive
MLC = 2 bits per cell; not quite as fast as SLC, but close
TLC = 3 bits per cell; again, much slower than SLC
QLC = 4 bits per cell; decent read speed but quite slow write speed
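The bits-per-cell arithmetic above also explains the article's 333 GB figure. A quick sketch (illustrative round numbers, not datasheet values):

```python
# The same physical cells store 3 bits each in TLC mode but only 1 bit
# in SLC mode, so forcing SLC cuts usable capacity to a third.
TLC_CAPACITY_GB = 1000  # the drive's 1 TB of TLC NAND (illustrative)

BITS_PER_CELL = {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4}

# Number of physical cells on a 1 TB TLC chip:
cells = TLC_CAPACITY_GB * 8e9 / BITS_PER_CELL["TLC"]

# What that same chip would hold if run at each bits-per-cell level:
for mode, bits in BITS_PER_CELL.items():
    capacity_gb = cells * bits / 8e9
    print(f"{mode} mode: {capacity_gb:.0f} GB")
# SLC mode works out to 1000 / 3 ≈ 333 GB, matching the article.
```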


----------



## LabRat 891 (Dec 21, 2021)

I think I'm starting to get more irritated at the disinfo spread by derisive commentary than at any absurd marketing.
If the audiophile marketing label were removed and replaced with 'Minimal EMI Design', would y'all calm down?

At least some of the 'advertised improvements' are worthwhile in specific applications (beyond the 'audiophile').
Home/DIY lab work and device development come to mind. HAMs have had to go to extreme lengths to silence interference as well.*

*Remember Linksys getting in trouble with the FCC over the WRT54 series? It was the HAM operator community that brought the complaints. You'd think the Wi-Fi radio was the issue, right? While it was exceeding power limits, it was largely the circuits for the wired leads that were broadcasting the interference.

You buy stuff like this either as an example of having more money than mind, *or* because you have an identifiable need, which otherwise might point you towards even less available and more expensive components.




fluxc0d3r said:


> I can hear differences in USB cables, that's why I use a Chord USB cable rather than a generic one. Yes, everyone will tell you a digital cable does not make difference- it either works or it doesn't. I am firm believer that better cables do make a difference. Even isolation feet, things you would usually ignore, actually bring out a difference in sound as well. Every little thing brings out a difference in sound, that's why hi-fi is frustrating to those who have can hear a difference and that's why you see many changing out their equipment often- much more often than someone who upgrades their PC components often.
> 
> It sounds like snake oil, but even I can see and hear differences between different HDMI cables on my brand new Samsung QLED TV. I went with AOC (optical) HDMI cables in the end as it delivered a punchier, more vibrant, and smoother picture. You can say all these are waste of money, some even say it is money well spent even if the gains are small.


Everything you mentioned (except MAYBE the perceived HDMI change*) can be tested and reproduced. (You'd need test equipment and sensors across several 'disciplines'.) Even the capacitance of a shielding layer, or static buildup, can have a definable effect in seemingly unrelated or disconnected systems.

"Digital" is almost always communicated using high-frequency analog waveforms and differential signalling.
That said, expensive audiophile and professional studio media-production cabling has less to do with signal integrity and more with rejection of EMI/RFI, reducing ground loops, and damping electromechanical-source 'noise'.
Every time I hear/read the phrase "digital works, or it doesn't", I think about my experience with HDMI cables that would crash displays, and occasionally a PC, when you walked by them.
I've also experienced digital links 'malfunctioning' rather than 'working or not'.
*An HDMI link that induces EMI into the TV, or a lossy link with error correction, might theoretically cause such changes in perception. The amount of post-processing done inside 'the black box' of the scaler/image processor could make errors appear as changes in the image. I used to have an HDTV that would store a frame and slowly start to 'overwrite' the live input, altering color, etc. By all means it shouldn't have been possible, but it happened every day I used it as a PC display.


----------



## Chrispy_ (Dec 21, 2021)

The fact that audiophile snake-oil products are so prevalent just proves that there are plenty of uneducated idiots with too much money.

If snake oil makes a profit, then good for them, I say. How idiots throw their money away is none of my business.


----------



## OldAndSlowDev (Dec 21, 2021)

Chrispy_ said:


> The fact that Audiophile snake-oil products are so prevalent just proves that there are plenty of uneducated idiots with too much money.
> 
> If snake-oil makes a profit, then good for them I say. How idiots throw their money away is none of my business.


Actually, the fact that there are very expensive snake-oil products is because there are a lot of very rich people who want "the best" and don't understand the laws of physics, but think an $8,000 speaker cable will bring better sound fidelity to their $2M system. But there is also stuff that is expensive because it isn't produced for the mass market, yet is still relevant for very specific usage. I am not saying this product is or isn't something that helps; to know that, we need someone to REVIEW it using a bench and some measurement tools. Sadly, apart from a few serious outlets like Audio Science Review and Audioholics, which use a procedure and measurement tools, most of the time reviewers just listen to something and say "yes, it's a very good product".


----------



## TheoneandonlyMrK (Dec 21, 2021)

I do actually like the look of it, though; less chewing gum, more steampunky.

Not going to buy it, mind. Ever.


----------



## Dredi (Dec 21, 2021)

LabRat 891 said:


> If the audiophile marketing label were removed, and replaced with 'Minimal EMI Design' would y'all calm down?


Well, since the design makes the device incapable of using spread-spectrum clocking on the PCIe link due to the custom clock, it arguably introduces _more_ EMI, not less.




LabRat 891 said:


> Every time I hear/read the phrase "Digital works, or it doesn't", I think about my experience with HDMI cables that would crash displays and occasionally a PC when you walked by them.
> I've also experienced digital links 'malfunctioning' rather than 'work, or doesn't'.


In this case it didn't work, correct? It barely worked in ideal conditions, and when something happened it failed. The same goes for other digital interconnects: they either work, or fail in _a very obvious_ way. The way people discuss them on the "high end" forums makes it seem like the colours get punchier if you have the proper HDMI cable, which is utter nonsense.


----------



## Steevo (Dec 21, 2021)

Fry178 said:


> @Steevo
> then how can wapping out HDMI (1.4) cable on device that acted up (BD player)/not respond to remote commands,
> with a 2.0 cable make a difference?
> cables i tried (1.4) were working on other things, so i know the only difference is certification.


So one cable didn't work, but another one did? The cable could have been bad. If it just didn't work, you are proving what I said: digital will either work or not work. There could have been a break in a conductor, a bad crimp on a pin, a thousand things that a sample size of one doesn't settle.


----------



## Operandi (Dec 21, 2021)

OldAndSlowDev said:


> Actually the fact there are very expensive snake oil product is because there are a lot of very rich people who want "the best" and don't understand the physics laws but think a 8000$ speaker cable will bring a better sound fidelity to their 2m$ system. But there are also stuff that are expensive because they aren't produced to mass market but still relevant for very specific usage. I am not saying that this product is or isn't something that helps, to do so we need someone who REVIEW it using a bench and some measurement tools. Sadly appart a few serious reviews like audio science review, Audioholics who use a procedure and measurement tools, most of the time reviewers are hearing to something and saying "yes it's a very good product"


Yeah, it's really all marginal gains. With anything operating at the high end of the market, the gains are going to be small and, to most people, likely not worth it or not detectable at all. And sometimes the gains aren't there at all and it's all BS marketing, like this thing.

With cables, from an objective perspective (and of course this had to come up...), it's about rejecting EMI and keeping signal integrity. Naysayers will say that the EMI operates at frequencies beyond human hearing and therefore can't have an impact. That EMI impacts everything in the process, though. A cheap, poorly shielded interconnect will be more susceptible to interference than a quality shielded one. You could use coat hangers or whatever ridiculous example you want to come up with, but anything ferromagnetic is going to interfere with the signal; that's not really debatable. Super-fancy braided multi-conductor cables cut down on that even more, and people say they can hear a difference, but whatever the gains are, they are marginal and really the last thing you should be looking at.


----------



## Steevo (Dec 21, 2021)

Operandi said:


> Yeah, its really all marginal gains.  With anything operating at the high-end of the market the gains are going to be small and likely to most people not worth it or detectable at all.  And sometimes the gains aren't there at all and its all BS marketing like this thing.
> 
> With cables from a objective perspecitive (and of course this had to come up...) its about rejecting EMI and keeping signal integrity.  Naysayers will say that the EMI operates at frequencies beyond human hearing and therefor can't have an impact.  That EMI impacts everything in the process though.  A cheap poorly shielded interconnect will be more susceptible to interference than a quality shielded one.  You could use coat hangers or whatever ridiculous example you want to come up with but anything ferromagnetic is going to interfere with the signal, thats not really debatable.  Super fancy braided multi-conductor cables cut down on that even more and people say they can hear a difference but whatever the gains are they are marginal and really the last thing you should be looking at.


KIMBER KABLE: Do High-end USB Cables Make A Difference? (www.audiosciencereview.com)

This is a review and detailed measurements of the KIMBER KABLE B-BUS USB cable. It is on kind loan from a member. The B-BUS in 1 meter length costs US $60 from the company. Complain all you want about high-end cables. Where they usually distinguish themselves is the fancy look of their...




Read, comprehend, then post. EMI doesn't change the audio signal; it's digital, so it can be corrupted, but too much corruption results in CRC errors and things simply not working.

If you or anyone else wants to buy a nicer USB cable because it has better-looking ends, it's shiny, it matches your color scheme, or for some subconscious primal reason... feel free to do so. But don't try to sugar-coat your personal preference as scientific truth without expecting pushback.
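To make the "CRC errors, things not working" point concrete, here's a minimal sketch using Python's standard `zlib.crc32` (the 64-byte "packet" is purely illustrative): a receiver either verifies the data or rejects it; there is no in-between where the bits arrive subtly "worse-sounding".

```python
import zlib

# A "transmitted" packet: 64 bytes standing in for a chunk of PCM audio.
packet = bytes(range(64))
checksum = zlib.crc32(packet)

# Receiver side: an intact packet passes the CRC check...
assert zlib.crc32(packet) == checksum

# ...while even a single flipped bit (a simulated EMI hit) fails it.
corrupted = bytearray(packet)
corrupted[10] ^= 0x01
assert zlib.crc32(bytes(corrupted)) != checksum
print("intact packet verified; corrupted packet rejected")
```

Whether a given interface retransmits, mutes, or glitches on a failed check depends on the protocol, but the detection itself is binary.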


----------



## Operandi (Dec 21, 2021)

Steevo said:


> KIMBER KABLE: Do High-end USB Cables Make A Difference?
> 
> 
> This is a review and detailed measurements of the KIMBER KABLE B-BUS USB cable.  It is on kind loan from a member.  The B-BUS in 1 meter length costs US $60 from the company.  Complain all you want about high-end cables.  Where they usually distinguish themselves is the fancy look of their...
> ...


In terms of cables I was speaking of cables in general.  I've heard bad analog interconnects but not bad USB cables.  Not that I've really tried, but I'm skeptical there's a difference to be heard, so it's not on my list of things to spend money on.  I have two nice-ish DACs in different rooms; I think one cable is an Amazon Basics and the other is a Belkin.

As to USB: a DAC operates in both the digital and analog domains, it's not just 1s and 0s, and any noise introduced into the system can manifest as noise and jitter.  There are design techniques to largely mitigate those issues, but no system is perfect, so the best solution is to not introduce them into the system in the first place.  That would be the _theory _behind a high-end audio USB cable.

I don't own any high-end cables or advocate that anyone buy them; I don't think my system is good enough or set up well enough, nor do I think they're worth the money for the _potential _benefit. I'm trying to draw a line between what is clearly a scam (this SSD) and what can make a difference, albeit with very high diminishing returns.
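As a rough illustration of the jitter half of that theory, here's a toy Python simulation comparing a sine sampled with an ideal clock against one sampled with a jittered clock. The 1 µs RMS jitter figure is deliberately absurd (real DAC clocks are orders of magnitude better); it's just there to make the error visible.

```python
import math
import random

# Toy model: sample a 1 kHz sine at 48 kHz, once with an ideal clock
# and once with random sampling-clock jitter, then compare the error.
random.seed(0)
fs, f, n = 48_000, 1_000, 4_800
jitter_rms = 1e-6  # 1 microsecond RMS jitter: far worse than any real DAC

ideal = [math.sin(2 * math.pi * f * (i / fs)) for i in range(n)]
jittered = [math.sin(2 * math.pi * f * (i / fs + random.gauss(0, jitter_rms)))
            for i in range(n)]

err_rms = math.sqrt(sum((a - b) ** 2 for a, b in zip(ideal, jittered)) / n)
sig_rms = math.sqrt(sum(a * a for a in ideal) / n)
print(f"jitter-induced error: {20 * math.log10(err_rms / sig_rms):.1f} dB below signal")
```

Even with that exaggerated jitter the error lands tens of dB below the signal; realistic picosecond-level jitter pushes it far below any plausible audibility threshold, which is the crux of the disagreement in this thread.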


----------



## Dredi (Dec 22, 2021)

Operandi said:


> As to USB, a DAC operates in the digital and analog domain its not just 1s and 0s, any noise introduced in the system an manipulate the system in terms of noise and jitter. There are design techniques to largely mitigate those issues but no system is perfect so the best solution is to not introduce those issues into the system in the first place. That would be the _theory _behind a high-end audio USB cable.


But a better-shielded USB cable is simply more resistant to EMI along that cable run, i.e. less shit gets induced into the USB signals and ground. It does nothing, absolutely nothing, about the noise that comes from the USB source itself.

Unless you are living inside a microwave oven, the 1 m USB cable you use to connect your PC to the DAC is not long enough to introduce problems even if it were completely unshielded, regardless of the DAC.


----------



## Fry178 (Dec 23, 2021)

@Dredi/Steevo
Nope. Just because it's digital doesn't mean it either "works or doesn't".

I'm talking about switching the TYPE of HDMI cable (not brand/quality): one being 1.4, one certified for 2.0.
A customer had upgraded his media room, and the new UHD media player (running 4K/2K from disc or USB)
had issues with picture/sound, as well as not properly responding to remote input.
I updated the customer's TV and the BD player: no change.
Tried two different remotes (even one from a 2K player), nothing, while the customer's remote would work on my (demo) display.
Then I decided to run a "certified" 4K cable (2.0), and all the problems were gone.

I tested the same (1.4) cable with three different TVs (2K/4K) using a 1080p BD player without a single issue,
so I know it wasn't broken, and I actually kept it for demo use (swapped with the customer).


----------



## Dredi (Dec 23, 2021)

Fry178 said:


> @Dredi/Steevo
> nope, just because its digital, doesnt mean it "works or doesnt".
> 
> im talking about switching the TYPE of hdmi cable (not brand/quality), one being 1.4, one being certified for 2.0.
> ...


So it clearly didn't work, and then did work.
The customer's location might have had tighter bends in the cable, more EMI, or other factors that made the difference compared to the other location.

What the "high end" folk say is that changing the cable changes how things sound or look, which is utter nonsense.


----------



## Fry178 (Dec 23, 2021)

Nope. I've been doing this kind of work for 15 years, and I made sure it was not a broken cable
by using another 1.4 cable later on (with a different player of the same model, with the same outcome).
And even if it were, how would I get the same trouble/interference 15 miles away from his home?

(If you read my post again) it DID work, as image/sound was coming up now and then,
so "digital either works or doesn't" is incorrect.


----------



## Operandi (Dec 23, 2021)

Dredi said:


> But a better shielded usb cable is simply more resistant to EMI for that cable run, i.e. less shit gets induced to the USB signals and ground. It however does nothing, absolutely nothing, to the noise that comes from the USB source.


Any interference/noise picked up by the cable is going to go back into the DAC, and that has to be dealt with, so it affects the device on some level; the question is to what extent.  High-end DACs go through a lot of trouble and expense to build very high-quality power supplies and source very accurate clocks to ensure that the DAC chip operates as cleanly as possible.  A DAC is both digital and analog, so yes, the 1s and 0s get there from the source and are converted into analog sound; it "works" in the sense that you get audio without pops or clicks, but internally there is more going on than just working or not.  A poor-quality USB cable could in principle introduce enough noise to affect the operation of the circuit.  The cable has to have some effect on the circuit, physics dictates it; to the extent that you can hear it?  I don't know, I'm not really convinced I would, but on principle the idea that it _could _is sound (ahhaaha... punny).  Otherwise a lot of engineers out there have all fooled each other into building crazy expensive DACs for no reason, nor would there be any reason for optical interfaces and cables to exist.

Your viewpoint that it isn't audible is totally valid, but the viewpoints of those who claim they can hear a difference, and the purported reasons why, are equally valid.  It's not the same as this audiophile SSD or an audiophile network switch; that stuff is nonsense.

Again, I'm not promoting the idea of high-end digital audio cables. My NuForce DAC on my desktop and my Pioneer A9 are both USB DACs and are connected via Amazon and Belkin cables respectively; they're shielded and seem well made, so they're good enough for me.  At some point I'll see if I can get a deal on a "high-end" USB cable on the used market, because I'm intellectually curious about it, but I need to build some better speakers first and do some room treatments.  My expectations are low.


----------



## Athlonite (Dec 23, 2021)

Fry178 said:


> nope, as im doing this kinda stuff for 15y, and made sure it was not a broken cable,
> by using another 1.4 later on (with a different player of same model with same outcome),
> and even if it was, how do I have the same trouble/interference, 15 mls away from his home?
> 
> ...


So basically the cable wasn't capable of the bandwidth needed to run 2K/4K content, is what you're saying then?


----------



## Operandi (Dec 24, 2021)

Athlonite said:


> so basically the cable wasn't capable of the bandwidth needed to run 2/4K content is what your saying then


In this case it actually is "works or it doesn't", and here it doesn't.  There isn't enough bandwidth to push the video, audio, and control signaling through the cable.  It sort of works, but stuff gets dropped intermittently; that's just how HDMI works and handles errors.  It would be like forcing a CAT5 cable to do what only CAT6 can, if that were possible, which it isn't, because nobody wants useless corrupt data.

This is completely different from the implications of cable quality between a source and a DAC.
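The bandwidth arithmetic behind that is straightforward. A back-of-envelope check in Python, assuming 8-bit RGB 4K60 at its standard 594 MHz TMDS clock (the CTA-861 timing, including blanking):

```python
# Rough arithmetic behind "HDMI 1.4 can't carry 4K60".
pixel_clock_hz = 594e6       # 4K60 TMDS/pixel clock, incl. blanking
bits_per_pixel = 24          # 8-bit RGB
required_gbps = pixel_clock_hz * bits_per_pixel / 1e9

hdmi_14_gbps = 10.2          # max TMDS throughput, HDMI 1.4
hdmi_20_gbps = 18.0          # max TMDS throughput, HDMI 2.0

print(f"4K60 needs ~{required_gbps:.1f} Gbps")
print(f"fits HDMI 1.4: {required_gbps <= hdmi_14_gbps}")   # False
print(f"fits HDMI 2.0: {required_gbps <= hdmi_20_gbps}")   # True
```

About 14.3 Gbps required versus 10.2 Gbps available on 1.4: the link is simply oversubscribed, so intermittent dropouts with a marginal cable are exactly what you'd expect.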


----------



## Dredi (Dec 25, 2021)

Operandi said:


> Your view point that it isn't audible is totally valid view point but for those that claim they can hear a difference and purported reasons why are equally valid view points.


Yup. Although I've yet to see any studies that show a USB cable's influence to be audible. For whatever reason, the premium cable manufacturers never demonstrate in practice (i.e. with an independent ABX double-blind study) that their products are better.
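For reference, the statistics of an ABX run are easy to compute with nothing but the binomial distribution (the trial counts below are just examples): the p-value is the chance of scoring at least that well by pure guessing.

```python
from math import comb

def abx_p_value(correct, trials):
    """Probability of getting >= `correct` right out of `trials`
    ABX trials by pure guessing (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 12 of 16 correct is commonly taken as evidence of an audible difference...
print(f"12/16: p = {abx_p_value(12, 16):.3f}")   # p = 0.038
# ...while 10 of 16 is entirely consistent with guessing.
print(f"10/16: p = {abx_p_value(10, 16):.3f}")   # p = 0.227
```

Which is why a vague "I heard it once" carries no statistical weight: you need a run of trials, scored blind, before chance can be ruled out.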


----------



## cornemuse (Dec 27, 2021)

fluxc0d3r said:


> I can hear differences in USB cables, that's why I use a Chord USB cable rather than a generic one. Yes, everyone will tell you a digital cable does not make a difference: it either works or it doesn't. I am a firm believer that better cables do make a difference. Even isolation feet, things you would usually ignore, actually bring out a difference in sound as well. Every little thing brings out a difference in sound; that's why hi-fi is frustrating to those who can hear a difference, and that's why you see many changing out their equipment often, much more often than someone who upgrades their PC components.
> 
> It sounds like snake oil, but even I can see and hear differences between different HDMI cables on my brand new Samsung QLED TV. I went with AOC (optical) HDMI cables in the end as it delivered a punchier, more vibrant, and smoother picture. You can say all these are waste of money, some even say it is money well spent even if the gains are small.


Like X-files
"I want to Believe"


----------



## Shrek (Jan 6, 2022)

I would guess a large cache might help avoid stuttering.


----------



## claes (Jan 6, 2022)

You really need to get an SSD my friend


----------



## skoolsella (Jan 8, 2022)

Of course cables can make a difference, even in the digital domain; just one example off the top of my head is Nyquist rate and bandwidth.  High-frequency response can be reduced and even completely nullified by a cable with poor bandwidth.
An oscilloscope probe measuring a digital signal can only measure up to its rated frequency before its sensitivity falls off; probes carry a -3 dB rating, the frequency at which they have lost half their power sensitivity.

It works until it doesn't; not "it works or it doesn't".
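That -3 dB idea can be put in numbers by modelling a cable run as a first-order RC low-pass, a crude but standard approximation (the R and C values below are illustrative, not measurements of any real cable):

```python
import math

# First-order RC low-pass model of a cable run: f_c = 1 / (2*pi*R*C).
R = 50.0        # source impedance, ohms (illustrative)
C = 100e-12     # ~100 pF total cable capacitance (roughly 1 m of coax)

f_c = 1 / (2 * math.pi * R * C)

def attenuation_db(f):
    """Magnitude response of the RC low-pass at frequency f, in dB."""
    return -10 * math.log10(1 + (f / f_c) ** 2)

print(f"-3 dB point: {f_c / 1e6:.0f} MHz")
print(f"at 20 kHz: {attenuation_db(20e3):.6f} dB")   # audio band: negligible
```

The corner lands in the tens of MHz, which is why a genuinely bandwidth-starved cable can break a high-rate digital link while the same model predicts essentially zero attenuation anywhere in the audio band.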


----------



## Dredi (Jan 10, 2022)

skoolsella said:


> Of course cables can make a difference, even in the digital domain, just one example off the top of my head is Nyquist rate and bandwidth.  High frequency response can be reduced and even completely nullified by a cable with poor bandwidth.
> An oscilloscope probe measuring a digital signal can only measure up to its rated frequency before its sensitivity is reduced, they have a -3dB rating where the probe is rated to have lost half its sensitivity by the stated frequency.
> 
> It works until it doesn't, not, it works or it doesn't.


It either works for a given bandwidth (in a given environment), or it doesn't.

The important thing is that in reality it is extremely easy to know whether it works or not, whereas these "high end" people think that changing a digital interconnect will affect the colours on your TV, or how the music sounds.


----------



## Operandi (Jan 21, 2022)

Dredi said:


> the important thing is that in reality it is extremely easy to know if it works or not, where as these ”high end” people think that changing a digital interconnect will affect the colours on your tv, or how the music sounds.


Not trying to make this argument here, but this statement argues the point on the wrong merit.

The signal is digital; if it gets from the source to the destination without errors, then it's all the same regardless of what cable you use; that's not up for debate.  Drawing comparisons with a TV is not applicable, because that's digital all the way through, from your source to the processor driving the pixels in the panel.  An audio DAC operates in the analog domain, and that's why the cable is a factor.  The cable is really just another component of the DAC, and any cable that is susceptible to picking up noise and interference is going to have some kind of impact.  DAC chips can and will sound different (better or worse than one another) based on how the chip is implemented.  It all comes down to how well the circuit is designed, how good the power supply is, and how much noise you are or are not introducing into the system.

Is it going to make a huge difference? No, but it would be similar to the difference between one ESS Sabre powered by a cheap switching PSU and one that uses an analog toroidal PSU.  Diminishing returns and marginal gains for sure, and probably the last thing anyone should look at, but simply saying there can't be a difference because it's all 1s and 0s is also wrong.


----------



## Dredi (Jan 25, 2022)

Operandi said:


> Not trying to make this argument here but this statement is arguing the point on the wrong merit.
> 
> The signal is digital, if it gets from the source to the destination without errors then its all the same regardless of what cable you use, thats not in for debate.  Drawing comparisons with a TV is not applicable because thats digital all the way through from your source to the processor driving the pixels in the panel.  A audio DAC is operating in the analog domain and thats why the cable is a factor.  The cable is just another component of the DAC really and any cable that is susceptible to picking up noise and interference is going to have some kind of impact.  DAC chips can and will sound differently (better / worse than one or the other) based on how the chip is implemented.  It all comes down to how well the circuit is designed, how good the power supply is, and how much noise you are or are not introducing into the system.
> 
> Is it going to make a huge difference?, no but it would be similar to the differences of one ESS Sabre power with cheap switching PSU and one that uses and analog toroidal PS.  Diminishing returns, marginal gains for sure and probably the last thing anyone should look at but simply saying there can't be a difference because is all 1's and 0's is also wrong.


The colours you see on your display are also analog. A bad PSU in a display can cause the colours to flutter, and it is completely plausible that one could design a display so bad that the HDMI input would change how the picture is presented. Luckily, TV manufacturers seem to be more or less on top of circuit design when it comes to things like this. And no one has been able to differentiate even basic-quality DACs in blind studies where the signal levels have been matched. For a digital interconnect to bleed interference into the analog side in amounts that exceed the human ability to hear would just indicate that the device was not designed well.

Your claim that people can differentiate (modern) DAC chips is completely unfounded. The current ones have SNR so good that it really is impossible to differentiate them by hearing alone.

Badly designed products can be identified, but that requires a non-flat frequency response, or an SNR of less than 90 dB (and it only gets easy below 70 dB). Some DACs also do unnecessary digital processing that can be heard, and there are test methods for determining whether that is the reason people hear differences between given DACs.
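To put those SNR figures in perspective, converting decibels back to linear amplitude ratios is a one-liner:

```python
def snr_db_to_amplitude_ratio(snr_db):
    """Linear amplitude ratio implied by an SNR figure in dB
    (20 dB per factor of 10 in amplitude)."""
    return 10 ** (snr_db / 20)

for snr in (70, 90, 120):
    ratio = snr_db_to_amplitude_ratio(snr)
    print(f"{snr} dB SNR -> noise is 1/{ratio:,.0f} of full-scale amplitude")
```

At 90 dB the noise floor sits more than thirty-thousand times below full scale, which is why differences up there are so hard to hear, let alone attribute to a cable.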


----------



## Deleted member 24505 (Jan 25, 2022)

Steevo said:


> KIMBER KABLE: Do High-end USB Cables Make A Difference?
> 
> 
> This is a review and detailed measurements of the KIMBER KABLE B-BUS USB cable.  It is on kind loan from a member.  The B-BUS in 1 meter length costs US $60 from the company.  Complain all you want about high-end cables.  Where they usually distinguish themselves is the fancy look of their...
> ...



Just seen this. Wow. I remember the thing with £50 HDMI cables. I buy the cheapest DP and HDMI cables I can; I'm pretty sure they're fine. I've seen USB cables that cost $100/metre; I don't know how they justify it, or who the fuck buys them.

Incidentally, here is my favourite DAC. Not cheap, like.
https://www.whathifi.com/nagra/hd-dacmps/review


----------



## Operandi (Jan 26, 2022)

Dredi said:


> The colours you see on your display are also analog. A bad PSU on a display can cause the colours to flutter, and it is completely plausible that one could design a display so bad that the HDMI input would change how the picture is presented. Luckily tv manufacturers seem to be more or less on top of circuit design when it comes to things like this.


A modern LCD or OLED's pixels operate in the digital domain.  Maybe the backlight is an analog circuit, but if it's PWM then that's digital too.  If the display is "fluttering" then it's not working properly; this comparison doesn't make any sense.


Dredi said:


> Your claim that people can differentiate (modern) DAC chips is completely unfounded. The current ones have snr so good that it really is impossible to differentiate them by using hearing alone.
> 
> Badly designed products can be identified, but it requires non-flat frequency response, or snr of less than 90 dB (and it gets easy only after 70 dB). Some DACs also do some unnecessary digital processing that can be heard, and there are test methods for determining if that is the reason people hear differences between given DACs.


All modern DACs are pretty good, I agree.  90 dB or greater is considered roughly what you want to look for; frequency response is relatively easy to get flat, but if it's not, that's a telltale sign that it's not a good source.  The biggest distinction between DACs is the tonality and distortion they produce, and there are many forms of distortion, different ways to measure it, and more ways to interpret the results.


Dredi said:


> And no-one has been able to differentiate even basic quality DACs in blind studies where the signal levels have been matched. For the digital interconnects to bleed some interference (that exceed in amount the human ability to hear) to the analog side just indicate that the device was not designed well.


About the only thing you can safely say is that the differences are small compared to other changes you can make.  That the differences exist is not debatable; what's audible is, but what's the point of arguing that, as it varies from subject to subject and can also be subjective.

It's super hard to isolate just a DAC in a blind test and have people hear the differences between sources (or any other single component, for that matter) when they are not familiar with the rest of what they are hearing and/or what they are listening to/for.

I think the best test I've seen is from Archimago's Musings; it's super long and detailed, but I think it's about the only way to do a test like this if you want statistically meaningful results.


----------



## Dredi (Jan 26, 2022)

Operandi said:


> I think its really about the only way to do a test like this if you want statistically meaningful results.


Transparency tests are another great method: you place different DACs in the audio path, with a reference-grade ADC just after, and try to determine in a blind study whether you can spot the 'extra' DAC in the signal path or not. These can also be done remotely, as the output from the ADC can be recorded bit-perfectly.
This test is super hard to pass, even with the shittiest of DACs in the signal path. I can post some examples later if you are interested.
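The measurement side of such a test is essentially a null test: subtract the level-matched loop-back recording from the original and look at how far down the residual sits. A toy sketch (the 0.1% gain error here just stands in for whatever the extra DAC does to the signal):

```python
import math

def residual_db(original, recorded):
    """RMS level of the difference between two level-matched signals,
    in dB relative to the original: the core of a 'null test'."""
    diff = [a - b for a, b in zip(original, recorded)]
    rms = lambda xs: math.sqrt(sum(x * x for x in xs) / len(xs))
    return 20 * math.log10(rms(diff) / rms(original))

# Toy signals: a 440 Hz sine and the "same" sine after a tiny gain error.
sig = [math.sin(2 * math.pi * 440 * t / 48_000) for t in range(4_800)]
loopback = [0.999 * s for s in sig]          # 0.1% gain error
print(f"null-test residual: {residual_db(sig, loopback):.1f} dB")   # -60.0 dB
```

A residual 60 dB down is already quiet; good modern converters null far deeper than that, which is why the "spot the extra DAC" test is so hard to pass.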




Operandi said:


> A modern LCD or OLED's pixels are operating in the digital domain. Maybe the back light is an analog circuit but if its PWM then thats digital too. If the display is "fluttering" then its not working properly, this comparison dosn't make any sense.


Light isn’t digital, and modern DACs are basically just some DSP and PWM anyway.
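For the curious, the "DSP and PWM" core of such a DAC can be sketched as a first-order delta-sigma modulator. This toy Python version (real converters are higher-order, oversampled, and heavily filtered) turns samples into a 1-bit stream whose average tracks the input:

```python
def delta_sigma(samples):
    """First-order 1-bit delta-sigma modulator (toy model).
    Each output bit is +1 or -1; the accumulated error is fed back."""
    integrator, out, bits = 0.0, 0.0, []
    for x in samples:               # x in [-1, 1]
        integrator += x - out       # accumulate error vs. previous output
        out = 1.0 if integrator >= 0 else -1.0
        bits.append(out)
    return bits

# A DC input of 0.5 yields a bitstream whose average is ~0.5:
bits = delta_sigma([0.5] * 1000)
print(f"bitstream mean: {sum(bits) / len(bits):.3f}")   # 0.500
```

The analog stage then only has to low-pass filter that bitstream, which is the sense in which "the processing is essentially digital until the output".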


----------



## Operandi (Jan 26, 2022)

Dredi said:


> Transparency tests are another great method. I.e. you place different DACs in the audio path, with a reference grade ADC just after, and try to figure out if you can determine in a blind study whether you can spot the ’extra’ DAC in the signal path or not. These can also done remotely, as the output from the ADC can be recorded bit-perfectly.
> This test is super hard to pass even with the shittiest of DACs in the signal path. I can post some examples later if you are interested.


It is hard to pass, but it's also really hard to set up and conduct properly.  The test in my link above is one of the few to really do it right, and you can see the effort that went into it.

From my perspective the difference between any _good _DACs is going to be small to begin with, so if you aren't familiar with the rest of the system and the reference material it's pretty close to impossible.  You need to be familiar with the rest of the gear, the room, and the material, and that means people listening on their own stuff in their own room (or at least one they are very familiar with) and with music they know well.  There is also a quality threshold for the rest of the system that determines whether or not you'll be able to hear a difference; a $500 setup from Best Buy isn't going to be good enough to resolve any of it.

I am interested, but I'll be more interested after I've done more testing on my own.  Right now I have three DACs (all USB): a Burr-Brown in my Pioneer integrated amp in the living room, a NuForce Icon on my desktop setup, and a cheap Behringer UCA202 to move around with.  I've tested the Behringer against the built-in DAC in the Pioneer, which is powering custom DIY TriTrix MTs, and I don't really hear anything different, but I didn't spend a lot of time with it either.  The TriTrix is also a budget design, and while it sounds better than anything I've heard sub-$500, I plan on building better stuff in the near future.  I recently got some KRK monitors and a JBL sub for my desktop setup, so it might be worth trying the Behringer against the Icon in that setup.


Dredi said:


> Light isn’t digital, and modern DACs are basically just some DSP and PWM anyway.


No, the light isn't digital, but all the processing is digital until the light gets emitted from the pixels.

With audio, the DAC's output is analog, going to an analog pre-amp, an analog amplifier, and then the transducers (speakers or headphones).  The DAC operates in the analog domain, which is why all of this matters.


----------



## Dredi (Jan 27, 2022)

Operandi said:


> No the light isn't digital but all the processing is digital until the light gets emitted from the pixels.
> 
> With audio the DACs output is analog, which is going to a analog pre-amp, an analog amplifier, and then the transducer (speakers or headphones). The DAC is operating in the analog domain which is why all of this matters.


And in a modern DAC all the processing is essentially digital until the signal is output to the preamplifier.




Operandi said:


> From my perspective the difference between any _good _DAC is going to be small to begin with so if you aren't familiar with the rest of the system and reference material its pretty close impossible. You need to be familiar with the rest of the gear, the room, and material so that means people listening on their or stuff in their own room (or at least one they are very familiar with) and with music they know well.


And still, these people can’t differentiate between good quality DACs in a controlled blind study (level matched and no other clear faults in how the test is done). Even if it is conducted with their own setups, in their own rooms with music of their choice.


----------



## Steevo (Jan 27, 2022)

In all DACs the signal is digital until it hits the divider transistors, which supply the additive voltages to recreate the sample; at that point the cable that carried the digital signal there no longer matters, as CRC checks have been done and the data is either there and valid or it doesn't work.

I'm about ready to build a wire-coat-hanger USB cable and get an oscilloscope to show people, but some would still be convinced their feelings are more right than science, and post more about how it feels like it sounds better because it cost more...


----------



## Deleted member 24505 (Jan 27, 2022)

IMO, buy the cheapest digital cable you can. Only analogue cable quality matters. Anyone spending $120/metre on a USB cable is a fool.


----------



## Operandi (Jan 27, 2022)

Dredi said:


> And in a modern DAC all the processing is essentially digital until the signal is output to the preamplifier.


Right, the output is analog; that's the difference and that's why it matters.  I think I've said that several times.


Dredi said:


> And still, these people can’t differentiate between good quality DACs in a controlled blind study (level matched and no other clear faults in how the test is done). Even if it is conducted with their own setups, in their own rooms with music of their choice.


They can, though; the test I linked to shows it pretty conclusively.  It's a shit-ton of work, but when you control for only the source it's quite doable.  If you put a bunch of random people in a room they're not familiar with, with speakers and an amplifier they're not familiar with, your test is going to fail, because even the difference between a mediocre DAC and a great one is small compared to the flaws of even very good speakers, the room interactions, etc.





Steevo said:


> In all DACs the signal is digital until it hits the divider transistors which supply the additive voltage to recreate the sample, at that point the cable that carried the digital signal there no longer matters as CRC checks have been done and the data is either there and valid or it doesn’t work.
> 
> I’m about ready to build a wire coat hanger USB cable and get a oscilloscope to show people, but some still would be convinced their feelings are more right than science and post more about how it feels like it sounds better cause it cost more…..


That's wrong, because even though the cable is only transmitting digital data that gets corrected, it's still picking up noise that goes back into the (analog) circuit, which needs to be rejected but realistically never completely is.  That's why high-end DACs use toroidal-transformer-based PSUs instead of switch-mode PSUs, and put the clock generator as close to the DAC IC as possible.  If noise and interference didn't matter in a DAC, you wouldn't do any of that, because it would be a total waste of time and resources.

If your argument is that the influence isn't audible, that's fine, but a USB cable is still susceptible to picking up noise from the environment and introducing it into the signal path.


----------



## Steevo (Jan 29, 2022)

Operandi said:


> Right, the output is analog, thats the difference and thats why it matters.  I think I said that several times.
> 
> They can though, the test I linked to shows it pretty conclusively.  Its a shit ton of work to do it but when you control for only the source its pretty easy to.  If you put a bunch of random people in a room they are not familiar with, speakers they are not familiar with, an amplifier they are not familiar with your test is going to fail because even the difference between a mediocre DAC and a great one is small compared to flaws between very good speakers, the room interactions ect.
> 
> ...



There is a small buffer for the data that eventually becomes audio; corrections happen there, removing the direct coupling to the DAC.

Your ideas get more wrong the further down this rabbit hole you go.


----------



## Dredi (Jan 30, 2022)

Steevo said:


> In all DACs the signal is digital until it hits the divider transistors which supply the additive voltage to recreate the sample, at that point the cable that carried the digital signal there no longer matters as CRC checks have been done and the data is either there and valid or it doesn’t work.
> 
> I’m about ready to build a wire coat hanger  USB cable and get a oscilloscope to show people, but some still would be convinced their feelings are more right than science and post more about how it feels like it sounds better cause it cost more…..


A coat hanger probably won't work as a USB cable, but for S/PDIF it would be perfect.



Operandi said:


> They can though, the test I linked to shows it pretty conclusively. Its a shit ton of work to do it but when you control for only the source its pretty easy to. If you put a bunch of random people in a room they are not familiar with, speakers they are not familiar with, an amplifier they are not familiar with your test is going to fail because even the difference between a mediocre DAC and a great one is small compared to flaws between very good speakers, the room interactions ect.


Use headphones. Everyone can bring their own. The room does not matter, and the relevant parts of the setup stay the same (meaning the headphones).



Operandi said:


> Right, the output is analog, thats the difference and thats why it matters. I think I said that several times.


Even the TV's output is analog; light, that is.


----------



## Operandi (Jan 31, 2022)

Steevo said:


> There is a small buffer for the data that eventually becomes audio, corrections happen there removing the direct coupling to the DAC.
> 
> Your ideas are still wrong the further down this rabbit hole you go.


..... You are completely missing the point.  Noise picked up by a digital cable is going to affect the analog domain of the DAC.  Bit correction, buffers, and other things that happen in the digital domain are completely irrelevant to that.

Moreover, you assume that digital buffers and bit correction are 100% effective, which is not the case.  And the accuracy and effectiveness of those measures is itself susceptible to external noise.


Dredi said:


> Use headphones. Everyone can bring their own. Room does not matter and the relevant parts of the setup stays the same (meaning the headphones).


The test I linked to included both headphones and speakers.

The result: the two lower-quality sources (by spec) were statistically identified as sounding worse, and it became more apparent the higher up the scale the rest of the system was.  In other words, the better the rest of your gear, the more likely you were to identify the differences.

Which totally makes sense, given that most DACs are _good _and you need a system with enough resolution to reveal a difference; it goes a long way toward explaining why there is a broad consensus that they don't matter.


Dredi said:


> Even the TV’s output is analog, light that is.


Of course it is.  We live in an analog world (until the metaverse consumes us all).

The point is that with digital video the signal is bit-perfect until it hits the pixels.  If you were to draw an analogy to audio, you would need an all-digital pre-amp and a digital amplifier (class D amps are still analog), with only the transducers pushing the air being analog.  That simply doesn't exist; the closest thing we have is servo subs, and even those are driven with an analog signal.


----------



## Steevo (Jan 31, 2022)

Operandi said:


> ..... You are completely missing the point.  Noise picked up by a digital cable is going to affect the analog domain of the DAC.  Bit correction, buffers things that happen digital domain are completely irrelevant.
> 
> Moreover you assume that digital buffers and bit correction are 100% effective which is not the case.  And the accuracy and effectiveness of those measures too is susceptible to being affected by external noise.







USB (Communications) - Wikipedia (en.wikipedia.org)

Cyclic redundancy check - Wikipedia (en.wikipedia.org)

For a DAC/amplifier the power source will introduce more errors than USB will allow before faulting out.
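To make the CRC link above concrete, here is a toy sketch of how a receiver detects a corrupted packet. It uses Python's built-in CRC-32 for brevity (USB itself uses CRC-5 and CRC-16, but the principle is identical), and the payload bytes are made up:

```python
import zlib

# A USB packet carries a CRC so the receiver can detect corruption.
# Illustrative payload; real USB uses CRC-5/CRC-16, same principle.
payload = bytes([0x13, 0x37, 0xC0, 0xFF, 0xEE])
crc_sent = zlib.crc32(payload)

# Simulate interference flipping a single bit in transit.
corrupted = bytes([payload[0] ^ 0x01]) + payload[1:]

print(crc_sent == zlib.crc32(payload))    # True: intact data verifies
print(crc_sent == zlib.crc32(corrupted))  # False: single-bit error is caught
```

Detection is not correction, though: what a device does after a failed check depends on the transfer type, which is what the rest of this argument is about.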


----------



## Operandi (Jan 31, 2022)

Steevo said:


> For a DAC/amplifier the power source will introduce more errors than USB will allow before faulting out.


I don't even know what that means.  How can a power source introduce "errors"?

Again you are missing the point: the USB protocol has nothing to do with anything in the analog domain of a DAC, yet that is still going to be susceptible to noise from a digital cable.

Noise counts on the digital side as well; the nanosecond the signal is transformed from digital to analog, errors can be introduced by noise.


----------



## robot zombie (Jan 31, 2022)

I've experienced USB noise before. Nothing short of a DAC with better isolation solves it. The machine I'm using now will still get squirrely with certain ones. It sounds a lot like VRMs hard at work, only coming through the speakers loud and clear.

Try this... put a paper towel tube up to your mobo VRMs while the CPU is working. Even better, run RGB on a pulse mode too. I hear all of that through my speakers when I plug most DACs under $200 into my PC via USB.

Worth noting, I never found a cable or dongle that touches it. Got a lot of people so sure it was anything other than what it obviously was, though. The only workable solution was a better-isolated DAC, or to straight up convert to something like S/PDIF or AES. Isolation should be a natural byproduct, just by virtue of what has to happen to do that conversion.


----------



## Dredi (Feb 1, 2022)

Operandi said:


> The point is with digital video the signal is bit perfect until it hits the pixels. If you were to draw an analogy to audio you would need an all digital pre-amp, and a digital amplifier (class D amps are still analog) and only the trandcucers pushing the air would be analog. That simply doesn't exist, the closest thing we have is servo subs even those are driven with a analog signal.


But the light is amplified in our eyes through some analog means before the information is shot through the nerves into our brain. Why does it matter if the analog process is using electricity or not?

I’d understand your point of view if we were talking about amps or speakers, but modern DACs are essentially completely digital.

As for the point of ’electric interference’, it of course also applies to the voltage the pixels are driven at, meaning that the brightness of pixels can be affected in the same way as in a shitty DAC the audio output can be affected. Any decent DAC (and TV) is essentially immune to that kind of shit, and the interconnect cable does not matter.


----------



## Operandi (Feb 1, 2022)

robot zombie said:


> I've experienced USB noise before. Nothing short of a DAC with better isolation solves it. The machine I'm using now will still get squirrely with certain ones. It sounds a lot like VRMs hard at work, only coming through the speakers loud and clear.
> 
> Try this... put a paper towel tube up to your mobo VRMs while the CPU is working. Even better, run RGB on a pulse mode too. I hear all of that through my speakers when I plug most DACs under $200 into my PC via USB.
> 
> Worth noting, I never found a cable or dongle that touches it. Got a lot of people so sure it was anything other than what it obviously was, though. The only workable solution was a better-isolated DAC, or to straight up convert to something like S/PDIF or AES. Isolation should be a natural byproduct, just by virtue of what has to happen to do that conversion.


Yeah, I heard something like that once before on a Focusrite Scarlett; it shouldn't really happen if things are working properly, but clearly things can go pretty wrong.

A cable can only shield the signal from outside interference / noise.  With something like this no cable is going to solve it, because the noise is being conducted through the cable from the PC to the DAC's analog output stage.  Something is really wrong in a situation like this, as you are hearing things that are not even part of the signal.  It has nothing really to do with how (external) noise can affect things, but it illustrates how sensitive the signal path really is even though it's _"digital"_.


Dredi said:


> But the light is amplified in our eyes through some analog means before the information is shot through the nerves into our brain. Why does it matter if the analog process is using electricity or not?


Yeah, it still matters, and errors could occur at the output stage of the light source or even in the final conversion of the signal to analog; frankly, they probably do.  But our panel tech isn't that great anyway, and I'm not sure our vision has the same level of acuity as our hearing to pick up on things like that even if the panels were good enough.


Dredi said:


> I’d understand your point of view if we were talking about amps or speakers, but modern DACs are essentially completely digital.
> 
> As for the point of ’electric interference’, it of course also applies to the voltage the pixels are driven at, meaning that the brightness of pixels can be affected in the same way as in a shitty DAC the audio output can be affected. Any decent DAC (and TV) is essentially immune to that kind of shit, and the interconnect cable does not matter.


They are not completely digital, they are half analog; it's right in the name.

Look at the example above as to how wrong things can go with a DAC.  Granted that's an extreme example of noise making it all the way through to the output stage, but any kind of noise picked up (not isolated) by the cable can make its way into the path and affect the analog domain.  Even the digital domain is not immune to noise; errors happen all the time, and it isn't immune to errors just because it's digital.  Asynchronous clocks and FIFO buffers largely mitigate the issue, but it's not eliminated, that's a fact.  Now, can you hear the difference?  That's the part that depends.


----------



## Dredi (Feb 2, 2022)

Operandi said:


> Look at the example above as to how wrong things can go with a DAC. Granted that's an extreme example of noise making it all the way through to the output stage, but any kind of noise picked up (not isolated) by the cable can make its way into the path and affect the analog domain.


ONLY IN BADLY DESIGNED PRODUCTS!

Why would you even connect the USB power on the DAC to anything? As for the USB's digital signal, it is a BALANCED input, which negates essentially all interference BY DESIGN. You could also use optoisolators on the DAC's PCB to get rid of all (theoretically) possible interference from the digital signal, but that is not necessary.

The noise you hear from bad cables when listening to badly designed DACs comes from the power supply, not the USB signaling, nor the cable. Use a DAC with a separate power supply, and you won't get the ”cable interference” anymore.

Please, in the future, could you be more specific when describing things like ’the USB signal’, as you seem to mix that constantly with the USB 5 volt power output. The USB power is usually of bad quality, and shielding that does not help in most instances, as it does absolutely nothing to fix the power source itself.

Edit: and just to be clear;
There is no need to shield the power cable (unless you live inside a microwave). Go look at some Topping DAC measurements. Do you think they use shielded power supply cables to get rid of this interference you speak of? No. And they still get some of the best SNR measurements ever on a consumer product.

The Scarlett devices (in the example you quoted) use USB power, and are thus at the mercy of your power supply and motherboard. If they are shit, then you might get some interference. You can fix that by using a powered USB hub that has a stable power supply, or a USB cable with a separate power input of decent quality.


----------



## Operandi (Feb 3, 2022)

Dredi said:


> ONLY IN BADLY DESIGNED PRODUCTS!


Yeah, that's an example of something performing really poorly due to (bad) design.  I only pointed it out because it illustrates that just being a digital interface doesn't make it immune.


Dredi said:


> Why would you even connect the USB power on the DAC to anything? As for the USB’s digital signal, it is a BALANCED input, which negates essentially all interference BY DESIGN. You could also use optoisolators on the DACs PCB to get rid of all (theoretically)possible interference from the digital signal, but that is not necessary.


It mitigates it, it doesn't solve it.  A digital signal transmitted over a cable travels as an analog waveform, and there is only so much you can do after receiving the signal.  Errors happen because you have limits on bandwidth and time when it comes to realtime audio streams.


Dredi said:


> Please, in the future, could you be more specific when describing things like ’the USB signal’, as you seem to mix that constantly with the USB 5 volt power output. The USB power is usually of bad quality, and shielding that does not help in most instances, as it does absolutely nothing to fix the power source itself.


In the grand scheme of things it doesn't matter, for reasons I already mentioned.  The cable is able to pick up noise that affects both the analog and digital domains; neither is immune.


Dredi said:


> Edit: and just to be clear;
> There is no need to shield the power cable (unless you live inside a microwave). Go look at some topping DAC measurements. Do you think that they use shielded power supply cables, to get rid of this interference you speak of? No. And they still get some of the best SNR measurements ever on a consumer product.


Measurements should only be used to verify what you hear and to aid in the design process.  If you were to go by audio measurements alone we could have stopped developing DACs 10-15 years ago, yet we're still at it.  But maybe you think everything beyond a certain measurement threshold is just over-engineered snake oil?


----------



## Dredi (Feb 4, 2022)

Operandi said:


> The cable is able to pick up noise that affects both the analog and digital domains; neither is immune.


HOW???

The USB D+ and D- signals are just compared to each other, and that comparison then drives a transistor that tells the rest of the IC whether it's a 0 or a 1 at a given time. NOT A SINGLE ELECTRON FROM THE DATA LINES IS TRANSMITTED TO THE ANALOG DOMAIN.
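To sketch that with made-up numbers (illustrative voltages only): interference couples onto both wires equally, and the receiver's comparison subtracts it straight back out.

```python
# Toy model of differential (D+/D-) reception. Common-mode noise hits
# both lines the same way, so the comparison D+ > D- is unaffected.
bits = [1, 0, 1, 1, 0]
noise = [0.30, -0.12, 0.25, 0.05, -0.40]  # common-mode interference, volts

d_plus = [(3.3 if b else 0.0) + n for b, n in zip(bits, noise)]
d_minus = [(0.0 if b else 3.3) + n for b, n in zip(bits, noise)]

# The receiver only looks at the difference between the two lines.
received = [1 if p > m else 0 for p, m in zip(d_plus, d_minus)]
print(received == bits)  # True: the noise cancels
```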



Operandi said:


> Yeah, that's an example of something performing really poorly due to (bad) design. I only pointed it out because it illustrates that just being a digital interface doesn't make it immune.


I NEVER STATED THAT DIGITAL INTERFACES WOULD BE IMMUNE!!!

I even had an example of a TV picture fluttering because of a bad power supply.

What I’m trying to say is that (most) digital signaling cables either work or do not. (There are examples where the devices are given power with the signaling itself, which is IMO out of scope here)

YOU are trying to somehow conflate shitty power supplies with USB cables, which makes zero sense.


----------



## Operandi (Feb 10, 2022)

Dredi said:


> The USB D+ and D- signals are just compared to each other, and that comparison then drives a transistor that tells the rest of the IC whether it's a 0 or a 1 at a given time. NOT A SINGLE ELECTRON FROM THE DATA LINES IS TRANSMITTED TO THE ANALOG DOMAIN.


Digital 1's and 0's are represented as polarity changes on an analog signal.  Because you don't have unlimited bandwidth or unlimited time in an audio stream, errors will happen, and that is why noise of any kind is a factor.


Dredi said:


> YOU are trying to somehow conflate shitty power supplies with USB cables, which makes zero sense.


No, I'm simply saying noise from any source is an issue, it doesn't matter where it comes from.  The fact that shitty power supplies are one of the most prominent and easily identifiable sources of noise changes nothing in relation to other sources of noise.


----------



## Dredi (Feb 12, 2022)

Operandi said:


> Digital 1's and 0's are represented as polarity changes on an analog signal. Because you don't have unlimited bandwidth or unlimited time in an audio stream, errors will happen, and that is why noise of any kind is a factor.


So what kind of errors are we talking about here? Jitter? Data corruption?

EMI based shit can affect only the latter, which is clearly audible if present (USB packet corruption -> missing segments of audio. There is no ’retry’ mechanism in USB audio). I.e. It either works or doesn’t.


----------



## Operandi (Feb 14, 2022)

Dredi said:


> So what kind of errors are we talking about here? Jitter? Data corruption?
> 
> EMI based shit can affect only the latter, which is clearly audible if present (USB packet corruption -> missing segments of audio. There is no ’retry’ mechanism in USB audio). I.e. It either works or doesn’t.


Both jitter and data corruption.  Errors can happen for the reasons I already stated (limited bandwidth, and the way digital audio streaming works).  Buffers and error correction mitigate the problems associated with the conversion process but do not eliminate the issue.  The clock signal is also carried on the same analog waveform that carries the data, so that too is susceptible to EMI.

It's not _"works or it doesn't"_; that's the fallacy. You never get a perfect reproduction of the analog signal; distortion is present in the entire signal chain and presents as audible quality differences.
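For scale, the textbook formula for the SNR limit set by RMS sampling-clock jitter on a full-scale sine is SNR = -20 log10(2 pi f t_j); a quick calculation with assumed example values:

```python
import math

def jitter_snr_db(freq_hz: float, jitter_s: float) -> float:
    # SNR limit (dB) imposed by RMS clock jitter when sampling a
    # full-scale sine at the given frequency.
    return -20 * math.log10(2 * math.pi * freq_hz * jitter_s)

# A 20 kHz tone sampled with 1 ns of RMS jitter is limited to ~78 dB,
# while 10 ps of jitter pushes the limit to ~118 dB.
print(round(jitter_snr_db(20_000, 1e-9)))    # 78
print(round(jitter_snr_db(20_000, 10e-12)))  # 118
```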


----------



## Dredi (Feb 14, 2022)

Operandi said:


> you never get a perfect reproduction of the analog signal


Of course not. But it is not because of the USB cable's influence over the sound. What is perfect reproduction anyway? In order to quantify it, you need to measure it, and due to the duality problem in (electrical) measurements it is impossible to measure anything perfectly. Perfect reproduction of the original analog signal (as in sound waves) is already fucked when recording, as no-one knows how to make massless microphone diaphragms.



Operandi said:


> The clock signal is also carried on the same analog waveform that carries data so that too is susceptible to EMI.


I suspect that you don’t understand how USB audio transfer works. There is no clock signal, like in spdif/toslink.


----------



## Operandi (Feb 16, 2022)

Dredi said:


> Of course not. But it is not because of USB cables influence over the sound. What is perfect reproduction anyway? In order to quantify it, you need to measure it, and due to the duality problem in (electrical) measurements it is impossible to measure anything perfectly. Perfect reproduction of the original analog signal (as in sound waves) is already fucked when recording, as no-one knows how to make massless microphone diaphrams.


Perfect would be no measurable or audible difference.  And there is a difference between what you can hear and what you can measure.  If you are in the camp that believes things like Audio Precision are the be-all end-all in audio analysis and tell us all there is to ever know, then there probably isn't much more to really discuss.

The (USB) cable is an active component; physics dictates it's going to have an influence, for all the reasons mentioned already.  You can have the opinion that whatever differences there are aren't audible, but they exist.


Dredi said:


> I suspect that you don’t understand how USB audio transfer works. There is no clock signal, like in spdif/toslink.


Yeah, there is.  All DACs have a clock that has to be maintained with the source.  USB has different ways to go about it depending on the DAC, OS, and driver combination, though.


----------



## Dredi (Feb 17, 2022)

Operandi said:


> Yeah there is. All DACs have a clock that has to be maintained with the source. USB has different ways to go about it depending on the DAC and OS and driver combination though.


Almost all mediocre-quality and up USB DACs have an internal clock only (when listening to music anyway; computer games might be different). They request data from the source. Go read up on asynchronous USB audio.
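A toy model of the asynchronous scheme (made-up buffer sizes; the real protocol uses a feedback endpoint, but the principle is that the DAC's own clock drives consumption and it tells the host how much to send):

```python
from collections import deque

fifo = deque()  # samples buffered inside the DAC

def dac_request() -> int:
    # The DAC derives this from its internal clock: top the buffer
    # back up to a low-water mark of 8 samples.
    return max(0, 8 - len(fifo))

for _ in range(5):  # a few USB service intervals
    fifo.extend(range(dac_request()))  # host sends what was requested
    for _ in range(4):                 # DAC consumes at its own fixed rate
        if fifo:
            fifo.popleft()

# Buffer settles at a steady level: host timing never sets the output rate.
print(len(fifo))  # 4
```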




Operandi said:


> Perfect would be no measurable or audible difference. And there is a difference between what you can hear and what you can measure. If you are in the camp that believes things like Audio Precision are the be-all end-all in audio analysis and tell us all there is to ever know, then there probably isn't much more to really discuss.
> 
> The (USB) cable is an active component; physics dictates it's going to have an influence, for all the reasons mentioned already. You can have the opinion that whatever differences there are aren't audible, but they exist.


I agree that audibility is the only thing that really matters. USB cables do not partake in that.

Blind studies show that hardly any of the ’high-end’ audio crap has any meaning at all when it comes to audibility. USB cables or M.2 SSDs definitely do not.

The USB cable is not an active component, but a passive one. It has no electronics inside it.








What is Passive Component? - Definition from Techopedia (www.techopedia.com)



Physics also dictate that the way you breathe across the globe has an effect on the music I listen to right now. Is that relevant to anyone? No.

You are free to disagree and to be wrong.


----------



## Operandi (Feb 17, 2022)

Dredi said:


> Almost all mediocre quality and up USB DACs have an internal clock only (when listening to music anyway, computer games might be different). They request data from the source. Go read up on asynchronous USB audio.


There are clocks on both sides; it just depends on which side is the reference clock.  Windows doesn't even support asynchronous USB audio without a third-party driver.


Dredi said:


> I agree that audibility is the only thing that really matters. USB cables do not partake in that.
> 
> Blind studies show that hardly any of the ’high-end’ audio crap has any meaning at all when it comes to audibility. USB cables or M.2 SSDs definitely do not.
> 
> The USB cable is not an active component, but a passive one. It has no electronics inside it.


Because it's able to pick up interference, it's prone to distorting the signal and causing errors, so it's _active_.  Error correction and buffers mitigate the issue but do not eliminate it, and that makes it audible.  In that sense it's an active component in the system; I didn't mean to imply it was powered.

Blind studies with this kind of thing are hard to do with proper control.  Without proper control you are not really getting any useful data, and your conclusions are only as good as the data.  I already linked to one that shows pretty conclusively that you can pick out the differences between sources pretty easily when the rest of the system is good enough.  A cable is going to have less of an influence than changing out one source for another, but if something is prone to producing errors and affecting the sound (which it is), who are you to say that someone can't hear the difference between one cable and another?



Dredi said:


> You are free to disagree and to be wrong.


Yep, you too.


----------



## Dredi (Feb 17, 2022)

Operandi said:


> Windows doesn't even support asynchronous USB audio without a third party driver.


It has for at least ten years. Maybe think about upgrading.




Operandi said:


> Because it's able to pick up interference, it's prone to distorting the signal and causing errors, so it's _active_. Error correction and buffers mitigate the issue but do not eliminate it, and that makes it audible. In that sense it's an active component in the system; I didn't mean to imply it was powered.


So your wallpaper is also an active component? It can affect how your display draws power, which can cause ripple to the power lines in your house, which can propagate through your DAC’s power supply to the ’analog side’ of it.

Now that you know that your wallpaper affects your music experience, do you advocate for ’high-end’ wallpapers?

Just by knowing that timing inaccuracies are present does not make them audible. Audibility needs to be proven.




Operandi said:


> Blind studies with this kind of thing are hard to do with proper control. Without proper control you are not really getting any useful data, and your conclusions are only as good as the data.


Yes, and without blind studies you have no data.




Operandi said:


> I already linked to one that shows pretty conclusively you can pick out the differences between sources pretty easily when the rest of the system is good enough.


Yes, a meaningful amount of people could pick out a crappy 10 year old motherboard audio out of other options. That has nothing to do with USB cables.


----------



## Operandi (Feb 17, 2022)

Dredi said:


> It has for at least ten years. Maybe think about upgrading.


It used to be a Mac and Linux thing only unless you had a proprietary driver.  It looks like MS snuck USB Audio Class 2 support into Windows 10 with one of the Creators Updates a while back, though.

Either way it's a recent change in how Windows handles USB audio, and I don't think it's the default even if you have support for it.  Unless you know otherwise.


Dredi said:


> So your wallpaper is also an active component? It can affect how your display draws power, which can cause ripple to the power lines in your house, which can propagate through your DAC’s power supply to the ’analog side’ of it.
> 
> Now that you know that your wallpaper affects your music experience, do you advocate for ’high-end’ wallpapers?


Am I supposed to take this seriously?  A cable that is prone to interference and producing errors is a real thing.


Dredi said:


> Just by knowing that timing inaccuracies are present does not make them audible. Audibility needs to be proven.


Proven to whom, and how?  Measurements don't tell you everything, and often conflict with or show no difference from what you hear to be superior or inferior.  We have a long way to go in terms of getting a complete picture with audio measurements.  Measurements are a tool to confirm what you hear, nothing more.

Blind studies, sure, but it's an insane amount of work to do properly.  It's a ton of work to even do improperly (not enough controls) and draw incorrect conclusions from.

From my perspective, what x number of people are able to hear from one component change out of a given sample is kind of pointless when there are so many other variables.  I'm more interested in why something could make a difference, understanding why, and then hearing it for myself.  Blind studies, measurements, and spec sheets are interesting, but even if those things were completely conclusive it doesn't mean you or I are going to hear what they are purporting to prove.


Dredi said:


> Yes, and without blind studies you have no data.


To that I would just say absence of proof is not proof of absence.


Dredi said:


> Yes, a meaningful amount of people could pick out a crappy 10 year old motherboard audio out of other options. That has nothing to do with USB cables.


Most picked out the onboard audio (which had pretty decent specs) on a pretty wide range of gear.  If you read further into the test people that had higher end gear were able to hear differences between the higher-end sources as well.


----------



## Dredi (Feb 18, 2022)

Operandi said:


> To that I would just say absence of proof is not proof of absence.


And you can’t prove a negative.




Operandi said:


> If you read further into the test people that had higher end gear were able to hear differences between the higher-end sources as well.


I did read it. No group got even close to the loose p=0.05 criterion for picking out anything but the 10-year-old crappy motherboard. If you are saying otherwise, at least link to the correct place in the ’study’.
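For reference, the p-value for this kind of forced-choice trial is just a one-sided binomial test against 50% guessing; a quick sketch (the trial counts here are made-up examples):

```python
from math import comb

def p_value(correct: int, trials: int) -> float:
    # Chance of scoring `correct` or more out of `trials` by pure guessing.
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 12/16 correct clears the p = 0.05 bar; 10/16 does not.
print(round(p_value(12, 16), 3))  # 0.038
print(round(p_value(10, 16), 3))  # 0.227
```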


Operandi said:


> It used to be a Mac and Linux thing only unless you had a proprietary driver. It looks like MS snuck USB Audio Class 2 support into Windows 10 with one of the Creators Updates a while back, though.


Async USB audio has been present for at least ten years with the default driver (since Windows Vista). It is part of even class 1 USB audio.



Operandi said:


> Measurements don't tell you everything and often conflict or show no difference with what you hear to be superior or inferior. We have a long way to go in terms of getting a complete picture with audio measurements. Measurements are a tool to confirm what you hear, nothing more.


Exactly. This is why blind studies are the way to go.



Operandi said:


> Am I supposed to take this seriously? A cable that is prone to interference and producing errors is a real thing.


Why wouldn’t you take it seriously? The audible effects are, for all we know, in the same range as the USB cable differences. Absence of proof is not proof of absence.


----------



## Operandi (Mar 11, 2022)

Dredi said:


> I did read it. No group got even close to the loose p=0.05 criterion for picking out anything but the 10-year-old crappy motherboard. If you are saying otherwise, at least link to the correct place in the ’study’.


If you read the author's conclusions, and look at the subsets of the test for those that work in the audio field from a technical background, are musicians, or are professional reviewers, it starts to paint a picture: the highest-end DAC (the Oppo) came out best, followed closely by the Sony CD player.  You can see the same thing as the scale of the quality of the gear increases, and, interestingly, in those that tested with speakers vs. headphones.










Are the results statistically meaningful enough to prove it by any scientific standard? No, but that doesn't mean there isn't anything there; it just means you need better tests if your goal is to _prove_ it.


Dredi said:


> Async USB audio has been present for at least ten years with the default driver (since Windows Vista). It is part of even class 1 USB audio.


I don't think that's correct.  You need USB Audio 2.0 to support async, and that only made its way into Windows 10 in 2017.


Dredi said:


> And you can’t prove a negative.





Dredi said:


> Exactly. This is why blind studies are the way to go.


I think you are missing the point.  The conclusions you draw are only as good as the test you conduct, the data you collect, and how you interpret it.  It's pretty common for well-conducted scientific tests to draw misleading or incorrect conclusions through no fault at all in how the test was run.  It happens all the time in much bigger, well-funded studies where the stakes are much higher than something as trivial as audio.


Dredi said:


> Why wouldn’t you take it seriously? The audible effects are, for all we know, in the same range as the USB cable differences. Absence of proof is not proof of absence.


Displays are electrically noisy; that's not really disputed, and it's why notebooks are often not recommended for use as streaming devices.

The idea of 'high-end' wallpapers is absurd because if the display is causing a problem you'd just turn it off.  If a cable is the problem you could turn it off by unplugging it, but then, well.....


----------



## Dredi (Mar 11, 2022)

Operandi said:


> If a cable is the problem you could turn it off by unplugging it but then well.....


Bluetooth works 




Operandi said:


> I don't think that's correct. You need USB Audio 2.0 to support async, and that only made its way into Windows 10 in 2017.


You are incorrect. Async audio is supported by usbaudio.sys. There is no need for the 2.0 release for that feature to work. And even if it was as you state, it would still have been part of the de-facto feature set for five years already. Quite far from your original statement that it required proprietary drivers…
You also stated that they are an uncommon variety, while most if not all high-end DACs use async mode.



Operandi said:


> Are the results statistically meaningful enough to prove it by any scientific standard? No, but that doesn't mean there isn't anything there; it just means you need better tests if your goal is to _prove_ it.


So you agree that the test does not show the meaningful results you claimed were there. Go ahead and provide better tests, I’m waiting.

And if electrical USB cable interference was an actual thing, it would be easy to prove by just doing a blind study comparing it to an optical input. In a device that is not faulty there is no difference in any measurements, but maybe the human ear can do what no machine can and determine which cable is used.


----------



## Operandi (Mar 14, 2022)

Dredi said:


> Bluetooth works


It works, but a wireless protocol is never going to compete with a wired one.


Dredi said:


> You are incorrect. Async audio is supported by usbaudio.sys. There is no need for the 2.0 release for that feature to work. And even if it was as you state, it would still have been part of the de-facto feature set for five years already. Quite far from your original statement that it required proprietary drivers…
> You also stated that they are an uncommon variety, while most if not all high-end DACs use async mode.


You may be right.  It looks like class 1 is limited to 96 kHz, so higher sampling rates required either proprietary drivers or the recent update to Windows 10 that supports class 2 audio.  Finding information on this is a bit of a rat's nest.


Dredi said:


> So you agree that the test does not indicate that the results that you said there to be are meaningful. Go ahead and provide better tests, I’m waiting.


I would agree that the tests don't reach a threshold to be statistically meaningful, but the subsets of listeners and equipment indicate, in my opinion, that the difference is there and observable when those thresholds are met.  There are no better tests that I'm aware of, and given the amount of effort that had to have gone into that one, I doubt we'll be seeing a better one any time soon.


Dredi said:


> And if electrical usb cable interference was an actual thing, it would be easy to prove by just doing a blind study comparing it to an optical input. In a device that is not faulty there is no difference in any measurements, but maybe the human ear can do what no machine can and determine what cable is used.


This is just circling back on itself now, but any cable carrying an electrical signal is going to be under the influence of interference.  Audio streaming works differently than file transfer or a peripheral interface; see iFi's "USB AUDIO GREMLINS EXPOSED".  Sure, blind tests would prove it one way or another, assuming you conduct the test properly and with a big enough sample size, but just like the other test I've been referencing it would be a pretty large undertaking to get enough data.

I think that's the main takeaway for me at least: we can measure a lot of what we are hearing, but not everything.  I come from a background of loudspeakers when it comes to audio measurements, and measurements tell you a lot about how a speaker will sound, but not everything.  A ribbon and a dome tweeter can measure nearly identically in a speaker with the same woofer and crossover topology yet sound very different; clearly there is something there, we just aren't measuring it.

Until measurements show us everything you have two options: trust your ears, or rely on blind studies, which are problematic because they can lead you to the wrong conclusions.


----------



## Dredi (Mar 28, 2022)

Operandi said:


> Until measurements show us everything you have two options, trust your ears or rely on blind studies which are problematic because they can lead you to the wrong conclusions.


Trusting ones ears, while doing sighted comparisons, will definitely lead to the wrong conclusions. There is literally tons of data to support this statement. Blind tests _can_ lead to wrong conclusions, but at least it is not the norm.




Operandi said:


> the subsets of listeners and equipment indicate that the difference is there and observable when those thresholds are met, in my opinion.


If the indication is there, it should be possible to get some statistically relevant results from that dataset. Otherwise you are just guessing, or believing what you want to believe.


Operandi said:


> It works but a wireless protocol is never going to compete with a wired one.


Why? Timing is not a problem, as we can use async mode.


Operandi said:


> any cable carrying an electrical signal is going to be under the influence of interference.


Yes. But if that cable is not electrically connected to the analog side, does its existence matter?


Operandi said:


> Audio streaming works differently than file transfer or peripheral interface, see ifi USB AUDIO GREMLINS EXPOSED.


I read that and it just repeats what I’ve been writing all along. If your cable, DAC, or source isn’t faulty, you cannot hear any improvement from more expensive cables. You can clearly hear packet loss, and you can test cable performance by using some mass-storage device on it and checking the error rate. If the error rate isn’t zero, throw the cable away.

If USB data transfer errors were an actual common problem, we would use an error-correcting code in the transferred packets. We would also use radiation-hardened DAC chips in a voting lockstep configuration, an optically isolated analog domain, etc. But we don’t, not even in the most expensive audio DACs on the planet.
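The cable check described above (copy data across the link and look for mismatches) can be sketched in a few lines. A minimal sketch, assuming you have copied a test file onto a mass-storage device hanging off the cable in question; the paths are placeholders:

```python
import hashlib

def sha256_of(path: str, chunk: int = 1 << 20) -> str:
    """Hash a file in chunks so big test files don't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def cable_ok(original: str, copy_made_over_usb: str) -> bool:
    """True if the copy that travelled over the USB cable is bit-identical."""
    return sha256_of(original) == sha256_of(copy_made_over_usb)
```

Any single flipped bit changes the hash, so a non-zero error rate on the link shows up immediately.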


----------



## Operandi (Apr 12, 2022)

Dredi said:


> Trusting ones ears, while doing sighted comparisons, will definitely lead to the wrong conclusions. There is literally tons of data to support this statement. Blind tests _can_ lead to wrong conclusions, but at least it is not the norm.


Yeah, the flaws of sighted listening tests are pretty well known.  My only point is that most blind tests tend to point you to a conclusion that may be wrong.  The differences between good DACs, or between cheap and high-end cables, are going to be small to say the least.  That pretty much necessitates that whoever is doing the test be very familiar with everything else in the system, and that someone else be switching between the components being tested.  There are small-scale tests I've seen where people do that (someone goes into the listener's home and helps them swap components), but it's always going to be a small sample size due to the familiarity requirements, so statistically irrelevant.


Dredi said:


> If the indication is there, it should be possible to get some statistically relevant results from that dataset. Otherwise you are just guessing, or believing what you want to believe.


Yeah, if we are referencing the Archimago test, then you'd need a bigger test with both people that can actually hear the differences between the various samples and equipment that can resolve those differences, and only test with those individuals.


Dredi said:


> Why? Timing is not a problem, as we can use async mode.


You can say timing is not a problem because of the techniques used to mitigate the issues, but they are fundamentally the same techniques used with physical connections.  Bandwidth problems still exist, but now your medium is air, which is pretty much always going to be worse than a physical cable.


Dredi said:


> Yes. But if that cable is not electrically connected to the analog side, does its existence matter?


Yeah, it still matters.  Even if the cable isn't directly associated with the analog side, it's not completely isolated from the circuit.  You still have bandwidth considerations on the digital side that are susceptible to interference.


Dredi said:


> I read that and it just repeats what I’ve been writing all along. If your cable, dac or source isn’t faulty, you cannot hear any improvement from more expensive cables. You can clearly hear packet loss, and you can test cable performance by using some mass storage device on it and checking the error rate. If the error rate isn’t zero, throw the cable away.
> 
> If USB data transfer errors were an actual common problem, we would use an error-correcting code in the transferred packets. We would also use radiation-hardened DAC chips in a voting lockstep configuration, an optically isolated analog domain, etc. But we don’t, not even in the most expensive audio DACs on the planet.


Audio streaming (isochronous) and bulk data transfer move their data in different ways, though, so wouldn't errors be handled differently?

My understanding is that there is active error correction in audio streams.


----------



## Dredi (Apr 13, 2022)

Operandi said:


> Bandwidth problems still exist but now you medium is air which is pretty much always going to be worse than a physical cable.


So ”always” has now become ”pretty much always”. How quaint. And I’m not sure why you think that bandwidth is a problem, unless we are talking about wearables. WiFi can stream some gigabits per second, which ought to be enough for Red Book audio.

And seeing how we can _completely_ get rid of this USB cable ”interference”, I would have thought that you’d prefer this over anything.


Operandi said:


> Yeah, the flaws of sighted listening tests are pretty well known.


So why promote them?



Operandi said:


> You still have bandwidth considerations on the digital side that are susceptible to interference.


Which any sane engineer knows how to mitigate, via buffering etc. 


Operandi said:


> My understanding is that there is active error correction in audio streams.


That is not the case when using default USB audio drivers. There is only an error checksum, but no error correction.
This is why saying that ’a cable matters because of errors’ is absurd: one can hear each and every one of them, but no one complains about them. Why? Because they are super rare.
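The ”checksum but no correction” behaviour is easy to illustrate with a toy packet check. CRC-32 is used here purely for illustration (USB actually uses shorter CRC-5/CRC-16 fields), but the detect-only property is the same:

```python
import zlib

def packet_with_crc(payload: bytes) -> bytes:
    # Append a CRC-32 of the payload, standing in for USB's per-packet CRC.
    return payload + zlib.crc32(payload).to_bytes(4, "little")

def check_packet(packet: bytes):
    """Return the payload if the CRC matches, else None.
    Detection only: a corrupted packet can be flagged but never repaired."""
    payload, crc = packet[:-4], int.from_bytes(packet[-4:], "little")
    return payload if zlib.crc32(payload) == crc else None
```

For an isochronous audio stream, a failed check just means a discarded packet, i.e. an audible glitch, since there is no retry mechanism.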


Operandi said:


> Audio streaming (isochronous) and bulk data transfer move their data in different ways, though, so wouldn't errors be handled differently?


Yes, but you can determine the _cable quality_ from it, i.e. is it error prone or not.


----------



## Operandi (Apr 13, 2022)

Dredi said:


> So ”always” has now become ”pretty much always”. How quaint. And I’m not sure why you think that bandwidth is a problem, unless we are talking about wearables. WiFi can stream some gigabits per second, ought to be enough for redbook audio..


Wow, ok, this is an internet forum; we aren't writing academic research papers or reference white papers here.  To be clear, wireless is _always_ going to be inferior to a physical medium.

Yeah, there is tons of bandwidth to do bulk data transfer, but that's completely different from the requirements of an isochronous audio stream.


Dredi said:


> So why promote them?


I'm not really promoting them, but you can't benchmark your way to the answer, and the way most blind tests are conducted tends to lead to the wrong conclusions, so listening impressions are what's left.  If you could benchmark everything and quantify it, or statistically prove it through blind tests, would that really be that useful given how subjective audio is in terms of personal preference and perception ability?

Educate yourself, do your own listening, and make your own determinations.


Dredi said:


> Which any sane engineer knows how to mitigate, via buffering etc.


Yeah, we keep going over this.  You can mitigate the problems with various techniques, but not eliminate them in a real-time audio stream.


> That is not the case when using default USB audio drivers. There is only an error checksum, but no error correction.
> This is why saying that ’a cable matters because of errors’ is absurd, as one can hear each and every one of them, but no one complains about them. Why? Because they are super rare.


I don't design these things, am not an EE, and information is scarce, but my understanding is that all DACs have their own internal handling of errors.  Not every bit gets transferred with 100% accuracy, and there is no re-try like with bulk data transfers, so it's up to the DAC to internally handle the error.  You can easily _hear_ dropouts and artifacts where the stream essentially fails, but the argument is that errors are still happening which result in a loss of quality.


> Yes, but you can determine the _cable quality_ from it, i.e. is it error prone or not.


Right, but what I'm saying / asking is that the nature of the data is different and how it's transferred is totally different.  Audio is being sampled at 44 kHz at CD quality, all represented by bits, converted to analog voltage and back to bits again; that's a lot going on.  I don't know how the data packets are framed, and I'm not an expert on digital audio or an EE of any kind, but given the real-time nature of how the digital stream works it seems conceivable to me that errors could be a problem.


----------



## Dredi (Apr 14, 2022)

Operandi said:


> argument is that errors are still happening which result in a loss of quality.


How often do they happen? You promote this cable bullshit so much that one would think you have some numbers to give. Counting transfer errors is a purely discrete and quantifiable metric.


Operandi said:


> Right but what I'm saying / asking is the nature of the data is different and how its transferred is totally different. Audio is being sampled at 44Khz at CD quality all represented by bits, converted to analog voltage and back to bits again, thats a lot going on. I don't know how the data packets are framed and not being an expert on digital audio or an EE of any kind but given the real time nature of how the digital stream works it seems conceivable to me that errors could be a problem.


The nature of the data is different, but the physical transfer layer is the same, which is why you can test cable quality with the method that I gave.

Errors are a problem, with shitty cables and DACs placed inside microwave ovens. In other cases, not really. You can easily test it yourself.

Async USB audio does not care about any minuscule timing errors in the data transfers, only transfer errors.
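The way async mode sidesteps bus timing can be pictured as a FIFO sitting between the jittery USB side and the DAC's own fixed clock. A toy model of the idea, not a real driver:

```python
from collections import deque

class PlaybackBuffer:
    """Toy FIFO: the host fills it at a jittery rate, the DAC drains it
    at its own fixed clock. Arrival-time jitter never reaches the analog
    side unless the buffer actually runs empty."""

    def __init__(self) -> None:
        self.fifo = deque()
        self.underruns = 0

    def host_push(self, samples) -> None:
        self.fifo.extend(samples)  # arrives whenever the bus delivers it

    def dac_pull(self) -> int:
        if self.fifo:
            return self.fifo.popleft()  # clocked by the DAC, not the bus
        self.underruns += 1             # only an empty buffer is audible
        return 0                        # output silence on underrun
```

As long as the buffer never empties, when a packet arrived is irrelevant; only whether its contents were correct matters.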


Operandi said:


> Educate yourself do your own listening and make your own determinations.


Educate yourself, do your own blind tests and make your own determinations. I do not have the audacity to think that sighted audio tests that I could make would prove anything.



Operandi said:


> If you could benchmark everything and quantify it or statistically prove it through blind tests would that really be that useful given how subjective audio is in terms of personal preference and perception ability?


Of course blind tests can be used to gauge subjective preference as well! I mean why wouldn’t that be the case? In order to do that, one just has to be able to differentiate the changing components by listening alone.

If we were talking about subjective preference on what audio systems look like, then things would be different.


----------



## Deleted member 24505 (Apr 14, 2022)

My Cambridge DacMagic 100 is using async USB. It's not the best USB DAC, but it is good enough for me. I am using just a standard USB cable, as I think digital cable quality does not matter, unless I see proof that it does.
https://www.whathifi.com/cambridge-audio/dacmagic-100/review


----------



## Operandi (Apr 14, 2022)

Dredi said:


> How often do they happen? You promote this cable bullshit so much that one would think you have some numbers to give. Counting transfer errors is a purely discrete and quantifiable metric.


I have no clue how often they happen.  And I'm not promoting anything; I'm simply stating that the cable is not infallible, even if all it was doing is transmitting 1s and 0s, so you can't go by just that one metric.


Dredi said:


> The nature of the data is different, but the physical transfer layer is the same, which is why you can test cable quality with the method that I gave.
> 
> Errors are a problem, with shitty cables and DACs placed inside microwave ovens. In other cases, not really. You can easily test it yourself.
> 
> Async USB audio does not care about any minuscule timing errors in the data transfers, only transfer errors.


Same physical layer, yes, but how that data is packaged is different; audio is sampled at 44 kHz in an isochronous stream.  This is not my area of expertise, but I have to think that the fault tolerances and error-correction methods used are different than in a bulk data transfer.


Dredi said:


> Educate yourself, do your own blind tests and make your own determinations. I do not have the audacity to think that sighted audio tests that I could make would prove anything.


I plan to do my own tests, not to prove anything though.


Dredi said:


> Of course blind tests can be used to gauge subjective preference as well! I mean why wouldn’t that be the case? In order to do that, one just has to be able to differentiate the changing components by listening alone.
> 
> If we would be talking about subjecive preference on how audio systems look like, then things would be different.


I'm not saying blind tests can't do that, but given that most people are not going to conduct their own blind tests, relying on someone else's blind-test conclusions is of limited value because of the subjectivity of audio.


----------



## Deleted member 24505 (Apr 14, 2022)

Operandi said:


> I have no clue how often they happen.  And I'm not promoting anything I'm simply stating the cable is not infallible even if all it was doing is transmitting 1s and 0s so you can't go by just that one metric.
> 
> Same physical layer yes but how that data is packaged is different, audio is sampled at 44Khz a second in a isochronous stream.  This is not my area of expertise but I have to think that the fault tolerances and error correction methods used are different than in a bulk data transfer.
> 
> ...



Professionals who test audio stuff for a living are who people listen to. If they say this item is better than that, and why, then that item is better. Unless you think they are biased or don't test properly. But personal listening is always down to the individual; if you like something then buy that, whatever the pros say.


----------



## Operandi (Apr 14, 2022)

Tigger said:


> Professionals who test audio stuff for a living is who people listen to. If they say this item is better than that and why, then that item is better. Unless you think they are biased or don't test properly. But always personal listening is down to the individual, if you like something then buy that, whatever the pros say.


I agree, though I tend to follow reviewers who have some sort of technical background, since they tend to run better tests and are better equipped to interpret the results.

Not everyone likes the technical approach though, and that's the tricky part about audio; it's particularly troublesome for those that are dead set on quantifying everything into a metric you can put into a chart.


----------



## Deleted member 24505 (Apr 14, 2022)

Operandi said:


> I agree, though I tend to follow reviewers who have some sort of technical background since they tend to run better tests and are better equipped to interpret the results.
> 
> Not everyone likes the technical approach though and thats the tricky part about audio, particularly troublesome for those that are dead set quantifying everything into a metric you can put into a chart.



Here are a few sites; you might know them already, but if not, they are good indeed.

https://www.audiosciencereview.com/forum/index.php

http://audiopurist.pl/en/main-page/

https://audiokarma.org/forums/index.php

I read these a fair bit.


----------



## Dredi (Apr 15, 2022)

Operandi said:


> Same physical layer yes but how that data is packaged is different, audio is sampled at 44Khz a second in a isochronous stream. This is not my area of expertise but I have to think that the fault tolerances and error correction methods used are different than in a bulk data transfer.


The physical layer is the same, and transfer errors happen the same way. With bulk data transfer, these errors are easily quantified, because each data retry is logged and you can look up that metric with no special tools or hardware. With audio, the errors are not logged, as they are not reported and there is no retry mechanism. The data is transmitted at the same voltages, with the same 0’s and 1’s. The cable's error rate is very much comparable in both uses, if the receiving hardware is of the same general quality when it comes to the USB PHY used, and the data transfer rate is the same.

To think that the raw error rate would somehow depend on the data packaging has no real world basis. The only difference is how it is mitigated, which does not depend on anything related to the physical transfer layer (where the errors happen).

If you somehow think that this is not the case, please describe in detail why that might be, i.e. why the transfer errors might depend on the packet length or packet contents.
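To attach a number to ”how often”, here is a back-of-the-envelope estimate. The 1e-10 bit error rate below is an assumed figure chosen for illustration, not a measured USB specification:

```python
def errors_per_hour(bit_rate_bps: float, ber: float) -> float:
    """Expected flipped bits per hour on a link with a given bit error rate."""
    return bit_rate_bps * ber * 3600

# CD-quality stereo payload: 2 ch x 16 bit x 44,100 Hz = 1,411,200 bit/s.
cd_rate = 2 * 16 * 44_100
print(errors_per_hour(cd_rate, 1e-10))  # ~0.5 flipped bits per listening hour
```

Even under that pessimistic assumption, a single flipped sample bit per couple of listening hours is far below anything resembling an audible degradation.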




Operandi said:


> I have no clue how often they happen. And I'm not promoting anything I'm simply stating the cable is not infallible even if all it was doing is transmitting 1s and 0s so you can't go by just that one metric.


No cable is truly infallible, and neither is the computing inside the DAC chips for that matter. Random bit flips are a very real thing. But if transfer errors happen once in a year, I would not spend thousands on USB cables *that are not proven to work any better*.


----------



## Operandi (Apr 20, 2022)

Dredi said:


> The physical layer is the same, and transfer errors happen the same way. With bulk data transfer, these errors are easily quantified, because data retry is logged and you can look up that metric with no special tools or hardware. With audio, the errors are not logged, as they are not reported and there is no retry mechanism. The data is transmitted in the same voltages, with the same 0’s and 1’s. The cables error rate is very much comparable in both uses, if the receiving hardware is of the same general quality when it comes to the USB PHY used, and the data transfer rate is the same.
> 
> To think that the raw error rate would somehow depend on the data packaging has no real world basis. The only difference is how it is mitigated, which does not depend on anything related to the physical transfer layer (where the errors happen).
> 
> If you somehow think that this is not the case, please describe in detail why that might be, i.e. why the transfer errors might depend on the packet length or packet contents.





Dredi said:


> No cable is truly infallible, and neither is the computing inside the DAC chips for that matter. Random bit flips are a very real thing. But if transfer errors happen once in a year, I would not spend thousands on USB cables *that are not proven to work any better*.


I really don't have anything specific to point to as to why the errors would be different in audio vs. bulk data transfer, but I am trying to understand the argument for it and remain skeptical until I do.  I certainly wouldn't advocate for high-end cables either.

The argument seems to be twofold.  The first is the bits being transferred, which, as you said, are the same at the physical layer regardless.  In regard to errors and retry on data transfer, that only occurs on chunks of data in some block of bytes as I understand it, not at the bit level, and it would be on that basis that errors are logged and retries happen?  This is more of a data-transmission-level question than anything, but is there bit over-provisioning in the transport layer that protects data integrity that would inherently not be present in an audio stream?  I state it as a question because I don't know, and the argument is that there isn't enough bandwidth in the cable to represent the bits with 100% accuracy, particularly with HD audio.

The other aspect is the cable itself picking up outside interference and affecting the DAC itself.  The DAC is sensitive to noise and interference; just because it's made up of ICs doesn't make it immune to the outside world.  I mean, everything in the analog world is prone to interference, from turntables, tubes, and solid-state MOSFETs, and half of what the DAC is doing is analog.  You can say you can't hear it because blind tests don't prove it or it doesn't show up in the measurements, but without rehashing old territory, those two things don't tell the whole story.  My stance is that if it _actually_ is happening, it's happening on the fringe of the high-end spectrum, and it's almost certainly irrelevant compared to other shortcomings you may have.



Tigger said:


> Here are a few sites you might know, if not they are good indeed.
> 
> https://www.audiosciencereview.com/forum/index.php
> 
> ...


I build my own speakers so I mostly frequent forums focused on that.  I do check in on ASR though to see what's passing through and getting tested.

Otherwise I mostly stay up on what's new on the electronics front from a few YouTube channels. A British Audiophile reviews a lot of high-end gear that I'll probably never buy, but he has an EE background and goes into the technical design aspects, which gives interesting context into how and why something might sound the way it does. The cheapaudioman reviews cheaper (sub-$1,000) stuff in a very non-pretentious way I appreciate.


----------



## Dredi (Apr 21, 2022)

Operandi said:


> In regard to errors and retry on data transfer that only occurs as chunks of data in some block of bytes as I understand it, not in the bit level and it would be on that basis that errors are logged and retries happen?


Correct. I don’t remember the specifics, but there is just some CRC checksum at the end of each block of data, and if that does not match a retry is attempted (and logged).



Operandi said:


> This is more of a data transmission level question than anything but is there bit over provisioning in the transport layer that protects data integrity that would inherently not be present in a audio stream?


No. The default audio driver does not implement any over provisioning.



Operandi said:


> I state it as a question because I don't know and the argument is that there isn't enough bandwidth in the cable to maintain represent the bits with 100% accuracy particularly with HD audio.


There is plenty of bandwidth. I mean, anything over 16-bit/44 kHz is a waste of time anyway as far as playback is concerned, and there is enough bandwidth for 32-bit/300+ kHz, meaning that you could literally send each packet ten times and still have bandwidth to spare.




Operandi said:


> The other aspect is the cable itself picking up outside interference and affecting the DAC itself. The DAC is sensitive to noise and interference, just because its made up of ICs doesn't make it immune to the outside world. I mean everything in the analog world is prone to interference, from truntables, tubes, and solidstate MOSFETs and half of what the DAC is doing is analog. You can say you can't hear it because blind tests don't prove it or it dosn't show up in the measurements but without rehashing old territory those two things don't tell the whole story. My stance is that if it _actually _is happening its happening of the fringe high-end spectrum and that its almost certainly irreverent to other short comings you may have.


It is present in the analog domain, and mostly irrelevant. If you had audible interference, in most apartment buildings it would be 99.9% just the 50/60 Hz hum. Other frequencies in the auditory range are _very_ underrepresented. If you can’t even measure the 50/60 Hz hum in the auditory decibel range, it is exceedingly unlikely that any other interference would be more pronounced.

There is a lot of other interference, but it is not in the auditory range, and thus does not matter in the analog domain. It can cause a lot of problems in the digital domain, but those would be easy to hear if present, or quantifiable by other means.


----------



## Operandi (May 5, 2022)

Dredi said:


> Correct. I don’t remember the specifics, but there is just some CRC checksum at the end of each block of data, and if that does not match a retry is attempted (and logged).





Dredi said:


> No. The default audio driver does not implement any over provisioning.


Right, and that's kinda what I'm getting at.  For those reasons you can't treat it the same and say that just because it's digital it's protected from faults.  There are all kinds of mechanisms that make data transfer appear as though the process is infallible, but those don't exist in the same way with digital audio.


Dredi said:


> There is plenty of bandwidth. I mean anything over 16bit/44KHz is a waste of time anyway, as far as playback is concerned, and there is enough bandwidth for 32bit/300+KHz, meaning that you could literally send each packet ten times and still have bandwidth to spare.


The cable may not be limiting the bandwidth, but most DACs have an internal resolution of 20 bits or so before the accuracy is gone.  That loss of resolution is mostly due to design constraints of internal components, but it's probably also susceptible to extra noise on the cable.
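For reference, the usual rule of thumb ties bit depth to ideal dynamic range at roughly 6 dB per bit:

```python
import math

def dynamic_range_db(bits: int) -> float:
    """Ideal quantization dynamic range of an N-bit converter."""
    return 20 * math.log10(2 ** bits)  # ~6.02 dB per bit

for n in (16, 20, 24):
    print(n, round(dynamic_range_db(n), 1))
# 16 -> ~96.3 dB, 20 -> ~120.4 dB, 24 -> ~144.5 dB
```

So a real-world ~20-bit floor already implies around 120 dB of range, well past the limits of any playback chain or listening room.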


Dredi said:


> It is present in the analog domain, and mostly irrelevant. If you’d have audible interference, it would in most apartment buildings be 99,9% just the 50/60Hz hum. Other frequencies in the auditory range are _very_ under represented. If you can’t even measure the 50/60Hz hum in the auditory decibel range, it is exceedingly unlikely that any other interference would be more pronounced.
> 
> There are a lot of other interference, but it is not in the auditory range, and thus does not matter in the analog domain. It can cause a lot of problems in the digital domain, but those would be easy to hear if present, or quantifiable by other means.


It's not the noise being in the auditory range that is the problem; all EMI noise is an issue.

Look at the various approaches to negative feedback in amplification, which does happen in the auditory range; part of what negative-feedback loops do is remove noise and distortion in the amplification circuit.  That noise and distortion is an effect of the amp design and external noise factors, whether it be noise introduced by the power supply or external EMI.  Negative feedback with respect to a DAC isn't directly comparable (aside from its output stage), but the principles still apply.


----------



## Dredi (May 7, 2022)

Operandi said:


> all EMI noise is an issue.


Why?




Operandi said:


> That loss of resolution is mostly due to design constraints of internal components but also probably susceptible extra noise on the cable.


And a mere human can hardly make use of 16 bits of range. This you can easily test by yourself if you so wish. Again, I have all the time been consistent in saying that some distortion and noise is present, but it simply does not matter.




Operandi said:


> Right, and thats kinda what I'm getting at. For those reasons you can't treat it the same and say that just because its digital its protected from faults. There is all kinds of mechanisms happening that makes data transfer appear as though the process is infallible but those don't exist in the same way with digital audio.


But it is quantifiable, and does not happen often enough to matter. In the same way, your car is not infallible and can kill you at any moment, but you still drive it.

With custom drivers you can over provision as much as you want and completely negate the problem. Too bad the ”high end” market instead focuses on 1000 dollar cables.


----------



## Deleted member 24505 (May 7, 2022)

I have my DAC set at 24/96; is that pointless, and should I set it to 16/44?


----------



## Dredi (May 7, 2022)

Tigger said:


> I have my DAC set at 24/96, is that pointless and should set it to 16/44?


It depends. If you listen to sources of both 44 kHz and 48 kHz sample rates (like CD audio at 44 and movies at 48), and your system won’t be able to automatically switch the sample rate based on content (which is typical for PCs), then a higher sample rate will diminish re-sampling artefacts, which can be audible when converting from 48 to 44 or the other way around. Re-sampling both 44 and 48 to 96 does not produce audible re-sampling artefacts.

Because of this re-sampling issue, it is usually better to use a high sample rate for the PC-DAC interconnect. The ’best’ option would be to always set the DAC to the same sample rate as the content you listen to, and let the DAC do all the upsampling internally, but my understanding is that this is difficult to accomplish on a PC. As for bit depth, it does not really matter which value you set it to, as re-sampling is not an issue. There will be no audible difference to you between 16- and 24-bit modes. There are no real downsides either. Theoretically, as there is more data being transferred if you select a higher bit depth, there will be more data corruption as well, but it is still super rare and not a real issue you should spend time pondering.
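A side note on why 44.1↔48 conversion is considered the awkward case: the ratio a sample-rate converter must implement reduces to an ugly fraction, whereas within-family conversions are trivial doublings. A small illustration with exact fractions:

```python
from fractions import Fraction

def resample_ratio(src_hz: int, dst_hz: int) -> Fraction:
    """Reduced up/down ratio a sample-rate converter must implement."""
    return Fraction(dst_hz, src_hz)

print(resample_ratio(44_100, 48_000))  # 160/147: awkward cross-family ratio
print(resample_ratio(44_100, 88_200))  # 2: trivial within-family doubling
print(resample_ratio(48_000, 96_000))  # 2: likewise trivial
```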


----------



## Deleted member 24505 (May 7, 2022)

Dredi said:


> It depends. If you listen to sources of both 44 kHz and 48 kHz sample rates (like CD audio at 44 and movies at 48), and your system won’t be able to automatically switch the sample rate based on content (which is typical for PCs), then a higher sample rate will diminish re-sampling artefacts, which can be audible when converting from 48 to 44 or the other way around. Re-sampling both 44 and 48 to 96 does not produce audible re-sampling artefacts.
> 
> Because of this re-sampling issue, it is usually better to use a high sample rate for the PC-DAC interconnect. The ’best’ option would be to always set the dac to the same sample rate as the content you listen to, and let the DAC do all the upsampling internally, but my understanding is that it is difficult to accomplish on a PC. As for bit depth, it does not really matter to which value you set it to, as re-sampling is not an issue. There will be no audible difference to you between 16 and 24 bit modes. There are no real downsides either. Theoretically as there is more data being transferred if you select a higher bitdepth, there will be more data corruption as well, but it is still super rare and it’s not a real issue you should spend time pondering about.



Apart from games, I always try to "find" the highest-quality audio files I can, usually FLAC if possible, but I will use peasant MP3 if I have to. My DAC is connected via USB-C 3.2 on the PC, though I don't suppose it makes a difference even if it was USB 2.


----------



## Dredi (May 7, 2022)

Tigger said:


> Apart from games, i always try to "find" the highest quality audio files i can, usually FLAC if possible, but i will use peasant MP3 if i have to. My DAC is connected via USB C 3.2 on the PC though i don't suppose it makes a difference even if it was USB 2.


Flac and mp3 have nothing to do with the sample rate and bit depth.

If you use any music streaming service, or audio CD releases, they are going to be at 44.1 kHz; most movie streaming services and DVD stereo tracks are at 48 kHz.

The quality of any audio track is usually determined by how it was mastered, and any extra data rate beyond CD audio quality is just a waste. You can easily test it yourself.


----------



## Operandi (May 19, 2022)

Dredi said:


> Why?


It negatively affects things and I already explained why.


Dredi said:


> And a mere human can hardly make use of 16 bits of range. This you can easily test by yourself if you so wish. Again, I have all the time been consistent in saying that some distortion and noise is present, but it simply does not matter.


16/44 was settled on for very good reasons: a bit depth of 16 bits gives about 96 dB of dynamic range, which is more than enough for playback, so it's very easy to dismiss high-res formats if you understand the basic principles and what Red Book audio covers.  Still, many people who understand it at a level higher than anyone on this forum claim that higher resolutions (24-bit/192 kHz) sound better, and the reason comes down to how the reconstruction filters in the DACs work.  It's the same reason oversampling is a thing at all, the reasoning why Chord uses proprietary filters built around an FPGA in their DACs, and what the whole MQA standard was designed to address.
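For what it's worth, the ~96 dB figure falls straight out of the bit depth; a minimal Python sanity check of that arithmetic (ignoring dither and weighting):

```python
import math

def quantization_dynamic_range_db(bits: int) -> float:
    """Ratio of full scale to one quantization step, in dB."""
    return 20 * math.log10(2 ** bits)

print(round(quantization_dynamic_range_db(16), 1))  # 96.3  (the Red Book figure)
print(round(quantization_dynamic_range_db(24), 1))  # 144.5
```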


Dredi said:


> But it is quantifiable, and does not happen often enough to matter. In the same way your car is not infallible and can kill you at any moment, but you still drive it.
> 
> With custom drivers you can over provision as much as you want and completely negate the problem. Too bad the ”high end” market instead focuses on 1000 dollar cables.


How is it quantifiable?  When you are transferring data of any kind from one component to another inside a computer, or from one computer to another, it only appears to happen without error because of the protection schemes built into the process, and bandwidth is not a constraint on their functioning.  Those schemes don't exist, and can't function the same way, for digital audio, and that says nothing about what happens in the analog domain.

Ultimately I don't think the answer for better audio is software (over-provisioning of data); I was just using it to draw a comparison.  The high-end industry is doing plenty of things besides selling high-end cables; look at the research and science that goes into MQA, or the re-appearance of R2R ladder DACs.


Dredi said:


> FLAC and MP3 have nothing to do with the sample rate and bit depth.
> 
> If you use any music streaming service, or audio CD releases, they are going to be at 44.1 kHz; most movie streaming services and DVD stereo tracks are at 48 kHz.
> 
> The quality of any audio track is usually determined by how it was mastered, and any extra data rate beyond CD audio quality is just a waste. You can easily test it yourself.


FLAC vs. MP3 is lossless vs. lossy compression.  Lossless compression addresses totally different issues with digital music than sampling rate and bit depth.

Lots of music streaming services offer high-res music now, Tidal being probably the most popular.

Mastering is really far, far more important than any of this, but it's like comparing the farm equipment used to plant the apple tree to the apple itself; for the purposes of discussing digital audio it makes zero sense.  That said, I usually go for a high-quality vinyl FLAC over a CD FLAC if I can find one, because the vinyl master is oftentimes better than the CD master.


----------



## Dredi (May 20, 2022)

Operandi said:


> That said I usually go for a high-quality vinyl FLAC


How can you be sure that it does not contain faulty bits of data? I mean, it was captured with some USB audio device, and you claim that they produce errors quite regularly.




Operandi said:


> FLAC vs. MP3 is lossless vs. lossy compression. Lossless compression addresses totally different issues with digital music than sampling rate and bit depth.


Yup




Operandi said:


> The high-end industry is doing plenty of things besides selling high-end cables, look at the research and science that goes into MQA, or the re-appearance of R2R ladder DACs.


And none of those matter for anyone, except if you are in the business of extracting money from idiots with cash to spare. Name one (peer reviewed) study where any of the things you mentioned produced better sound. ”Research and science” my ass.




Operandi said:


> How is it quantifiable? When you are transferring data of any kind from one component to another inside a computer, or from one computer to another, it only appears to happen without error because of the protection schemes built into the process, and bandwidth is not a constraint on their functioning. Those schemes don't exist, and can't function the same way, for digital audio, and that says nothing about what happens in the analog domain.


The DAC chip can count transfer errors, based on the checksum scheme in place, that is how it’s quantifiable.
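For readers following along, a toy illustration of the kind of checksum counting being described: USB data packets carry a 16-bit CRC, and tallying mismatches is exactly how an error rate becomes a number. The parameters below are the commonly cited CRC-16/USB ones; treat this as a sketch, not a driver implementation:

```python
def crc16_usb(data: bytes) -> int:
    """Bitwise CRC-16 with the reflected USB polynomial (0xA001)."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return crc ^ 0xFFFF

packet = bytes(range(64))
errors = 0
corrupted = bytes([packet[0] ^ 0x01]) + packet[1:]  # one flipped bit in transit
if crc16_usb(corrupted) != crc16_usb(packet):
    errors += 1  # this counter is what makes the error rate quantifiable
print(errors)  # 1
```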




Operandi said:


> It negatively affects things and I already explained why.


How much though, enough for someone to actually hear it?




Operandi said:


> Still, many people who understand it at a level higher than anyone on this forum claim that higher resolutions (24-bit/192 kHz) sound better, and the reason comes down to how the reconstruction filters in the DACs work.


And is the ”better sound” quantifiable?


----------



## Operandi (May 20, 2022)

Dredi said:


> How can you be sure that it does not contain faulty bits of data? I mean, it was captured with some USB audio device, and you claim that they produce errors quite regularly.


I never said it wasn't going to contain faults from AD conversion.  The point is to have better source material to work with.


Dredi said:


> And none of those matter for anyone, except if you are in the business of extracting money from idiots with cash to spare. Name one (peer reviewed) study where any of the things you mentioned produced better sound. ”Research and science” my ass.


Look into the design goals, the research to achieve them, and the subsequent patents that went into MQA, and tell me that's not real research and science.  If you think new formats are just overly complex money-making schemes, that's up to you, but MQA is merely one example of what's happening in high-end digital audio.  You can argue their target goals are pointless and that you can't hear it; that's your opinion, but I'm not going down the "peer reviewed", "blind listening" tests path again because that's already been addressed.


Dredi said:


> The DAC chip can count transfer errors, based on the checksum scheme in place, that is how it’s quantifiable.


Digital audio is bandwidth-limited and time-sensitive; it doesn't behave the way file transfers, or whatever other comparison you want to draw, do.


Dredi said:


> How much though, enough for someone to actually hear it?


I'm not here to tell you or anyone else what they can or can't hear but yes, otherwise they wouldn't exist.


Dredi said:


> And is the ”better sound” quantifiable?


Probably not via the methods you are looking for, but drawing conclusions about what _sounds better_ based on what is statistically significant in blind studies, or what looks better to an Audio Precision device (the limitations of both have already been covered), misses the point when what the individual hears is the only end result that matters.


----------



## Dredi (May 24, 2022)

Operandi said:


> Look into the design goals, the research to achieve them, and the subsequent patents that went into MQA, and tell me that's not real research and science.


Done.



Operandi said:


> I'm not going down the "peer reviewed", "blind listening" tests path again because that's already been addressed.


Neither did the MQA people. Sadly. If their ’research’ does not contain validation by controlled testing, I will not consider their statements valid.



Operandi said:


> Digital audio is bandwidth limited and time sensitive, it doesn't behave the same way file transfers work or whatever other comparison you want to draw.


Not really. There is enough time and bandwidth to do whatever. It was an actual limitation with usb1.0, but we are past that. Another thing is real time audio, but that has nothing to do with end user music listening.


Operandi said:


> I'm not here to tell you or anyone else what they can or can't hear but yes, otherwise they wouldn't exist.


But if they can hear it, where is the research to prove it?


Operandi said:


> what the individual hears is the only end result that matters.


Exactly. Not what the end user believes, or sees, but what the individual hears. That has been my point this whole time. There are limited methods to determine that though.

Every time someone brings some new bullshit to the audio scene, people should be interested in only one thing: Can you hear it? And specifically, can you hear it without seeing it. Anything else is essentially pointless.


----------



## Operandi (May 31, 2022)

Dredi said:


> Neither did the MQA people. Sadly. If their ’research’ does not contain validation by controlled testing, I will not consider their statements valid.


I'm not pointing to MQA as a success story just using it as an example of where a ton of R&D went into pushing digital audio.  The benefits of MQA seem to be pretty mixed at best from what I've read.


Dredi said:


> Not really. There is enough time and bandwidth to do whatever. It was an actual limitation with usb1.0, but we are past that. Another thing is real time audio, but that has nothing to do with end user music listening.


I don't think that's true.  To my knowledge there isn't a cable or protocol capable of transferring 24-bit digital audio without the use of error interpolation, buffers, and other techniques, all of which are susceptible to error.

What do you mean?  All audio playback happens in real time.


Dredi said:


> But if they can hear it, where is the research to prove it?





Dredi said:


> Every time someone brings some new bullshit to the audio scene, people should be interested in only one thing: Can you hear it? And specifically, can you hear it without seeing it. Anything else is essentially pointless.


All this territory has been covered already.  Your stance is pretty clear and I'm not going to argue with it.  The only thing I would say (which I've already said) is that anything high-end is hard to test, for several reasons.  The differences between high-end components in the audio world are small, and with DACs even more so, because of all components they really shouldn't be imparting any character of their own, unlike speakers or, to a lesser extent, amplifiers.

How the results of these tests are interpreted also has to be taken into consideration; hearing is very personal in terms of what you are sensitive to, what you prefer, and what you are even capable of hearing.  If you are testing digital sources (DACs), everything else in the signal path has to be good enough to resolve any differences, and even if you have gear that is good enough, that assumes, in the case of speakers, that they are set up properly.

Probably the biggest issue is simply arranging the proper test conditions: the scale and scope you'd need to do the proper tests to _prove_ it to the standard you are looking for would be a huge undertaking, and there just isn't a big incentive to do it. We already covered in great detail the test done by Archimago's Musings, which, as impressive as it is, still falls short.


----------



## Dredi (Jun 6, 2022)

Operandi said:


> I don't think that's true. To my knowledge there isn't a cable or protocol capable of transferring 24-bit digital audio without the use of error interpolation, buffers, and other techniques, all of which are susceptible to error.


Why wouldn’t you buffer data? Are the errors audible, and do they happen often enough to matter?


Operandi said:


> What do you mean? All audio playback happens in real time.


I mean minimizing buffers etc. to minimize latency. It’s useful when doing live mixing, audio production etc. See real time computer systems @ wikipedia



Operandi said:


> Probably the biggest issue is simply arranging the proper test conditions: the scale and scope you'd need to do the proper tests to _prove_ it to the standard you are looking for would be a huge undertaking, and there just isn't a big incentive to do it.


Yep. This is the problem. If you prove that it does not work, you won’t be able to sell it at a premium.

If you can’t prove something, it (probably) does not exist, and I would not bet my money on it existing.

Everyone is free to believe in anything, but _claiming_ that something is audibly better needs proof. That’s all.


----------



## Operandi (Jul 28, 2022)

Sorry, been away from things for awhile...



Dredi said:


> Why wouldn’t you buffer data? Are the errors audible, and happen often enough to matter?


As part of the data transport, information would be buffered, but only up until the receiving chip, and the data is only checked for errors; it doesn't tell the source to re-transmit the data (to my knowledge).  What is audible is going to be highly dependent on the rest of the equipment, the room, and the individual.


Dredi said:


> I mean minimizing buffers etc. to minimize latency. It’s useful when doing live mixing, audio production etc. See real time computer systems @ wikipedia


Without a doubt production is more stringent but buffers and other techniques are just mitigation efforts to address the issues with digitizing audio.


Dredi said:


> Yep. This is the problem. If you prove that it does not work, you won’t be able to sell it at a premium.
> 
> If you can’t prove something, it (probably) does not exist, and I would not bet my money on it existing.
> 
> Everyone is free to believe in anything, but _claiming_ that something is audibly better needs proof. That’s all.


That's one way to look at it, I guess.  I think it's within the realm of possibility to settle some of these ambiguous issues, but the effort would be a massive undertaking, and existing studies and tests to date are not good enough and ultimately lead to the wrong conclusions.

_Proof_ is a tricky thing when it comes to technical objective improvements that are open to subjective experience. Prove it to whom, and under what conditions? Massive double-blind tests to prove every claim by every manufacturer? That's simply impractical and doesn't happen in any other industry, but you have to have it in audio or it's a scam? As a completely random example: if I buy a new suspension fork for my mountain bike that is 15% more responsive to small-bump sensitivity, or resists flexing by 10%, does Fox need to conduct a massive test with 100 riders to prove it performs better on the trail to statistical significance? 1,000 random people off the street wouldn't even know what to check for; 100 highly experienced riders, maybe. I feel like most of the double-blind audio tests that are big enough are more like the former example (a large number of randos who don't know what they are even listening for, or have gear that is not capable of resolving any difference). Aside from these tests being insanely difficult to do, the result is always highly specific to the user, so how useful would any proof you gather ultimately be?


----------



## Dredi (Jul 28, 2022)

Operandi said:


> As part of the data transport, information would be buffered, but only up until the receiving chip, and the data is only checked for errors; it doesn't tell the source to re-transmit the data (to my knowledge). What is audible is going to be highly dependent on the rest of the equipment, the room, and the individual.


Whether or not a retransmit is requested depends on the protocol. If streaming over IP, you can do whatever you want. If you are limited to the default USB audio drivers, then there obviously isn’t any retransmitting, as it’s not defined in the spec.

And errors aren’t audible if they don’t happen. If you have faulty cables then it might be an actual problem. As I wrote earlier, you can easily test your cables for error rate.


Operandi said:


> but buffers and other techniques are just mitigation efforts to address the issues with digitizing audio.


They have to do with timing accuracy, not much more. Btw, how accurate do you think analog audio is, timing-wise? Cassettes, vinyl, etc. are dependent on the accuracy of electric motors, moving masses, rubber belts, bearing quality, wear level, etc., all of which change over time.

Timing accuracy is now vastly better than it ever was for analog media home audio equipment.
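To put rough numbers on that: a garden-variety crystal oscillator is specified in parts per million, while turntable wow and flutter is specified in tenths of a percent. A back-of-the-envelope comparison in Python (the ±50 ppm and 0.1% figures are typical ballpark values I'm assuming, not measurements from the thread):

```python
import math

def pitch_deviation_cents(fractional_speed_error: float) -> float:
    """Pitch shift, in musical cents, caused by a fractional speed/clock error."""
    return 1200 * math.log2(1 + fractional_speed_error)

print(round(pitch_deviation_cents(50e-6), 3))  # ~0.087 cents: a cheap +/-50 ppm crystal
print(round(pitch_deviation_cents(1e-3), 2))   # ~1.73 cents: 0.1% turntable wow
```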




Operandi said:


> _Proof_ is a tricky thing when it comes to technical objective improvements that are open to subjective experience.


Subjectivity has nothing to do with proof. All I’m asking is that an audible difference can be heard. Whether or not the experience is better, I don’t really care.



Operandi said:


> Massive double-blind tests to prove every claim by every manufacturer?


I’m ok with a repeatable n=1 study: the main audio ”engineer” of a high-end audio company as the subject, with a sample size large enough to verify the claim. It should not take more than an hour per product.



Operandi said:


> As a completely random example: if I buy a new suspension fork for my mountain bike that is 15% more responsive to small-bump sensitivity, or resists flexing by 10%, does Fox need to conduct a massive test with 100 riders to prove it performs better on the trail to statistical significance?


Sounds like a bullshit claim, to be honest. How do they know that it’s 15% more responsive? If they base the numbers on some lab test (and disclose how they were made), then it’s fine by me. If they claimed that with the different parts your subjective experience would change (even if it is immeasurable by technical means), then they would need to conduct a study of sorts to back it up. 



Operandi said:


> Aside from these tests being insanely difficult to do, the result is always highly specific to the user, so how useful would any proof you gather ultimately be?


They are not difficult to do. All I’m asking is that at least someone who claims there to be a difference is able to show it. With his setup or whatever, in a controlled blind study.

And as for usefulness, how useful do you think unproven, baseless marketing claims are?


----------



## Operandi (Jul 29, 2022)

Dredi said:


> Whether or not retransmit is requested depends on the protocol. If streaming over IP you can do whatever you want. If you are limited to default USB audio drivers, then there obviously isn’t any retransmitting as it’s not defined in the spec.
> 
> And errors aren’t audible if they don’t happen. If you have faulty cables then it might be an actual problem. As I wrote earlier, you can easily test your cables for error rate.


You aren't testing the data transmission on the cable; you are testing the cable and whatever protocol you are using.  Bulk data-transfer protocols have redundancy built in that compensates for errors on the transmitting and receiving ends of the connection.  Those things don't exist in real-time audio streams like USB Audio or S/PDIF.


Dredi said:


> They have to do with timing accuracy, not much more. Btw, how accurate do you think analog audio is, timing-wise? Cassettes, vinyl, etc. are dependent on the accuracy of electric motors, moving masses, rubber belts, bearing quality, wear level, etc., all of which change over time.
> 
> Timing accuracy is now vastly better than it ever was for analog media home audio equipment.


I think the difference here is that when you're dealing with something like vinyl, you are never leaving analog.


Dredi said:


> I’m ok with a repeatable n=1 study. The main audio ”engineer” of a high-end audio company as the one under study, with sample size large enough to verify the claim. Should not take more than an hour per product.


How would that work?  I mean, most if not all companies do that kind of testing internally when comparing new products.  Lots of reviewers do the same thing, but unless someone is there to observe and document the procedure, you don't really know that they did it right, or at all.  Most people don't care about this kind of thing; they usually fall into one of two camps: those that read/watch subjective reviews and those that care about specs.


Dredi said:


> Sounds like a bullshit claim, to be honest. How do they know that it’s 15% more responsive? If they base the numbers on some lab test (and disclose how they were made), then it’s fine by me. If they claimed that with the different parts your subjective experience would change (even if it is immeasurable by technical means), then they would need to conduct a study of sorts to back it up.


What's bullshit about it?  The first few % of the travel is a pretty important aspect of performance; the more reactive it is, the better it tracks the ground and the less fatiguing it is.  As to how: put whatever is being tested on a standardized jig and measure it.

Nobody is going to conduct a huge study to prove it, though, for the same reasons as with audio: it's crazy hard and time-consuming, and there isn't enough of an audience to justify the effort. But that doesn't make it bullshit.


Dredi said:


> They are not difficult to do. All I’m asking is that at least someone who claims there to be a difference to be able to show it. With his setup or whatever, in a controlled blind study.
> 
> And as for usefullness, how useful do you think unproven baseless marketing claims are?


It's been done before, but what constitutes proof?  If a reviewer or manufacturer conducts their own test and outlines their procedure, is that good enough?  I mean, you are pretty much taking their word for it, as you don't really have anything tangible to point to like you would with a medical study comparing treatment A vs. treatment B and correlating the outcome with your test procedure.

More interesting and useful than marketing claims, for sure, but not particularly useful in a practical sense, since it comes down to the individual.


----------



## Dredi (Aug 7, 2022)

Operandi said:


> As to how, put whatever is being tested on a standardized jig and measure it.


And in high end audio, this is never done.



Operandi said:


> Those things don't exist in real time audio streams like USB Audio or SPDIF.


USB audio is not technically a ’real-time audio stream’, but a packet-based one. It doesn’t contain timing information at all in async mode, which is the one everyone currently uses.
And those things don’t exist in the default USB audio driver, but there are no technical limitations to implementing error-correcting codes for audio transmission. The lack of effort from high-end audio companies to implement such things simply speaks to the lack of need for them. Transmission errors are very rare.
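On the "no technical limitations" point, forward error correction over a bit stream really is textbook material; a minimal Hamming(7,4) sketch in Python, which corrects any single flipped bit per 4-bit chunk (purely illustrative; I'm not claiming any shipping USB audio driver does this):

```python
def hamming74_encode(nibble: int) -> list:
    """Encode 4 data bits into a 7-bit codeword (layout: p1 p2 d0 p3 d1 d2 d3)."""
    d = [(nibble >> i) & 1 for i in range(4)]
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_decode(bits: list) -> int:
    """Correct up to one flipped bit, then recover the 4 data bits."""
    b = list(bits)
    s1 = b[0] ^ b[2] ^ b[4] ^ b[6]
    s2 = b[1] ^ b[2] ^ b[5] ^ b[6]
    s3 = b[3] ^ b[4] ^ b[5] ^ b[6]
    syndrome = s1 | (s2 << 1) | (s3 << 2)  # 1-based position of the bad bit
    if syndrome:
        b[syndrome - 1] ^= 1
    return b[2] | (b[4] << 1) | (b[5] << 2) | (b[6] << 3)

# Any single-bit error in transit is silently repaired:
for nibble in range(16):
    for pos in range(7):
        noisy = hamming74_encode(nibble)
        noisy[pos] ^= 1
        assert hamming74_decode(noisy) == nibble
```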


Operandi said:


> I think the difference here is that when you're dealing with something like vinyl, you are never leaving analog.


So what? There are more errors present in a full analog setup than when using digital media.


Operandi said:


> How would that work? I mean most if not all companies do that kinda testing internally when comparing new products. Lots of reviewers do the same thing but unless someone is there to observe and document the procedure you don't really know that they did it right or at all.


Name one high-end audio company that does blind testing. From my experience, when asked, they say they consider it unnecessary. As for third-party reviewers of high-end audio gear, none of them do blind testing.
As for the correctness of the testing, etc.: they can just release papers describing the testing procedure, and welcome observers if someone wants to come see how it’s done. Doesn’t seem too complicated.


Operandi said:


> If a reviewer or manufacturer conducts their own test and outlines their procedure, is that good enough? I mean you are pretty much taking their word for it as you don't really have anything tangible to point to


See above.



Operandi said:


> More interesting and useful than marketing claims for sure but not particularly useful in a practical sense since it comes down the individual.


Baseless claims are not simply ’less useful’ than based claims, they are harmful.


----------



## Operandi (Aug 9, 2022)

Dredi said:


> And in high end audio, this is never done.


It's done all the time.  Speakers are measured in free space, in an anechoic chamber, or on a Klippel system (or similar).  Electronics are measured with various analyzers or specialized equipment like Audio Precision's.


Dredi said:


> USB audio is not technically a ’real time audio stream’, but a packet based one. It doesn’t contain timing information at all in async mode, which is the one everyone currently uses.
> And those things don’t exist in the default usb audio driver, but there are no technical limitations for implementing error correcting code for audio transmission. The lack of effort from high end audio companies to implement such things simply speaks for the lack of need for such things. Transmission errors are very rare.


With async USB, timing is still transmitted; you are just relying on the DAC's clock gen as the reference, because PCs have horribly inaccurate clocks.  USB audio is packet-based, but it doesn't have the data-protection schemes that file transfers or network connectivity rely on, which are the only reason those work at all.  What the error rate is I don't know, but they do happen, which is why the _"it's all 1s and 0's"_ point of view is a fallacy and why quality matters.

I think the lack of effort on the transport method just shows that resources are better spent elsewhere in the DAC.  Async USB is pretty good, but if you look at some of the very best high-end DACs, the I2S interface would be an example of a superior interface.


Dredi said:


> So what? There are more errors present in a full analog setup than when using digital media.


Analog audio and digital audio are completely different things; you can't compare them.  If you are analyzing the signal on an AP, digital is better in every regard, but that's not how we hear things.  Human ears are not linear, and there are so many levels of psychoacoustics involved with sound that comparing what is perceived to be better or more accurate with what measures better and more accurate is very difficult, let alone drawing direct correlations between the two.


Dredi said:


> Name one high end audio company that does blind testing. From my experience, they tell people that ask that they consider it unnecessary. As for third party reviewers of high end audio gear, no one does that in blind testing.
> As for the correctness of testing etc. They can just release the papers describing the testing procedure, and welcome observers if someone wants to come see how it’s done. Doesn’t seem too complicated.


Schiit did a test with Audio Head comparing all four of their pre-amps.  I've seen in a few interviews with Schiit that when they are internally testing different prototypes, they are unlabeled and make the rounds with the team members, and the design that people like the most wins and goes to production.  One of the more popular YouTube video guys did a test with four different RCA cables and was able to rank them.  I've seen similar tests with varying degrees of procedure and documentation, but nobody is publishing papers or anything like that.


Dredi said:


> Baseless claims are not simply ’less useful’ than based claims, they are harmful.


I don't think anyone is at risk of being harmed, its just subjective impressions of audio gear.


----------



## Dredi (Aug 9, 2022)

Operandi said:


> I don't think anyone is at risk of being harmed, it's just subjective impressions of audio gear.


Someone making financial decisions based on baseless claims is being harmed.


Operandi said:


> It's done all the time. Speakers are measured in free space, in an anechoic chamber, or on a Klippel system (or similar). Electronics are measured with various analyzers or specialized equipment like Audio Precision's.


Really? For example the item of this topic, do you think any measurement electronics were used to determine the change in quality it makes to sound reproduction?


Operandi said:


> With async USB timing is still transmitted


No. The USB packets contain zero timing information. It is basically an interrupt-based system dictated by the DAC.



Operandi said:


> What the error rate is I don't know, but they do happen, which is why the _"it's all 1s and 0's"_ point of view is a fallacy and why quality matters.


How much does it matter though? I guess you don’t know that either. If there is one error per year of listening, does it make sense to spend big bucks on the cables?



Operandi said:


> Analog audio and digital audio are completely different things, you can't compare them.


Both are audio, of course you can compare them. You could, for example, do a blind study to compare them. 


Operandi said:


> One of the more popular YouTube video guys did a test with four different RCA cables and was able to rank them.


Link please.



Operandi said:


> Schiit did a test with Audio Head comparing all four of their pre-amps.


Not really ’high end’, the devices are pretty cheap. Anyway, I’ve never stated that you wouldn’t be able to hear differences in amps.

And not a proper blind test either, btw. The listeners were always told if the equipment changed, and which of the four it was switched to (a, b, c, d). It was ’blind’ only to the degree that the participants weren’t told which letter denotes which equipment. This produces huge confirmation bias.


Operandi said:


> I've seen in a few interviews with Schiit that when they are internally testing different prototypes they are unlabeled and make the rounds with the team members and the design that ppl like the most wins and goes to production.


If they are unlabeled, how can they select what they like? And if they are labeled, it’s not a proper blind test.


Operandi said:


> if you look at some of the very best high-end DACs, the I2S interface would be an example of a superior interface.


Very best? How is that determined? Price?
And how exactly is I2S superior? It has better timing characteristics than S/PDIF, but if we compare it to async USB, why is it better?


----------



## Blaeza (Aug 9, 2022)

If I can't hear any better, it's bollocks.  I'm by no means an audiophile, but I LOVE my drum and bass.  How is an SSD going to make that sound any better than what it already is?  It's nonsensical fantasy land crap.  My Lexar SSD is audiophile cause it's got my music on it.  It doesn't have a fancy codec or anything like that, just my tunes, mostly from YouTube.  Someone sang a real hip-hop song, his name was Flavour Flav and it goes "Don't believe the hype!!!"


----------



## Operandi (Aug 9, 2022)

Dredi said:


> Someone making financial decisions based on baseless claims is being harmed.


There is a difference between knowingly making fraudulent claims and making claims that don't meet your personal standard.


Dredi said:


> Really? For example the item of this topic, do you think any measurement electronics were used to determine the change in quality it makes to sound reproduction?


On the topic of an audiophile SSD, or DACs?  The SSD, I doubt it, but maybe; if they did, it would be good for a laugh, I guess.  With DACs, of course; it would be blind luck if you got something like a DAC to work without the use of measurement equipment in the design process, let alone designed to a specific performance threshold.


Dredi said:


> No. The USB packets contain zero timing information. It is basically an interrupt-based system dictated by the DAC.


Ok.  I'll take your word for it, or link something easy to read if you have it.  Still, however the USB audio protocol works, timing is intrinsic to its function, and while relying on the DAC's clock is better than the PC's, it doesn't make it immune to the problems outlined previously.


Dredi said:


> How much does it matter though? I guess you don’t know that either. If there is one error per year of listening, does it make sense to spend big bucks on the cables?


This is a data-science question.  I would argue it probably matters more than most people think.  Network connectivity and file transfers work only because the protocols that protect the data are in place.  When you transfer a file over a CAT5 cable, the binary bits are represented by analog voltage differences.  It's not the clean binary on/off signal a computer needs it to be; it's up to the receiver to determine what the voltage at that exact moment in time represents.  Errors are happening all the time and get corrected by these processes.  It's why, for example, you can push a CAT5 cable past its specified maximum run and it will still work without delivering corrupt data; the speed will just be reduced.

Digital audio streams use the same fundamental process but don't have the same resilience, and because of that they are fundamentally different.  That is why an audiophile SSD makes no sense, but also why cable quality can be a factor even in digital audio.  I'm not advocating for big-dollar high-end digital cables and never was.  Personally I'm skeptical, but on the principle of how digital audio works, differences are possible.
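To make the file-transfer resilience concrete, here's a minimal Python sketch of checksum-plus-retransmission, the mechanism that lets a marginal CAT5 run still deliver correct data.  The frame contents and the one-bit-flip "channel" are invented for illustration:

```python
import zlib

def crc_frame(payload: bytes) -> tuple[bytes, int]:
    """Sender side: attach a CRC-32 checksum, as Ethernet does per frame."""
    return payload, zlib.crc32(payload)

def noisy_channel(payload: bytes, flip: bool) -> bytes:
    """Crude stand-in for a marginal cable: flip one bit when asked."""
    if not flip:
        return payload
    corrupted = bytearray(payload)
    corrupted[0] ^= 0x01
    return bytes(corrupted)

def transfer(payload: bytes, error_on_first_try: bool) -> tuple[bytes, int]:
    """Receiver side: re-request the frame until the checksum matches.
    Returns (data, attempts needed)."""
    attempts = 0
    while True:
        attempts += 1
        frame, crc = crc_frame(payload)
        received = noisy_channel(frame, flip=(error_on_first_try and attempts == 1))
        if zlib.crc32(received) == crc:
            return received, attempts

# one corrupted attempt, then a clean retransmission
clean, tries = transfer(b"flac chunk", error_on_first_try=True)
```

A real-time audio stream has no equivalent retry loop, which is the asymmetry being argued here.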


Dredi said:


> Both are audio, of course you can compare them. You could, for example, do a blind study to compare them.


You can compare them to the extent of which you prefer and why, but the reasons you would prefer one or the other are fundamentally different and not comparable.


Dredi said:


> Link please.


I'll have to follow up when I'm at home.


Dredi said:


> Not really ’high end’, the devices are pretty cheap. Anyway, I’ve never stated that you would’nt be able to hear differences in amps.
> 
> And not a proper blind test either, btw. The listeners were always told if the equipment changed, and to which of the four it was switched to (a, b, c, d). It was ’blind’ only to the degree that the participants weren’t told which letter denominates which equipment. This produces huge confirmation bias.


"High-end" is subjective.  For my audio budget anything in the $500-1000 range is high-end.

Yeah, it's not a perfect test, but that doesn't make it meaningless either.


Dredi said:


> If they are unlabeled, how can they select what they like? And if they are labeled, it’s not a proper blind test.


From what I remember of the interview, they pass around generic prototypes, not knowing whose design they have, and just live with them for a few days.  They're labeled, or at least identifiable enough to differentiate the devices, but they don't know specifically what each one is or who is responsible for it.

Lol, wow, I never said it was a proper blind test.


Dredi said:


> Very best? How is that determined? Price?
> And how exactly is i2s superior? It has better timing characteristics than spdif, but if we compare to async USB, why is it better?


Reviews.

The clock and data are transmitted separately, so it's better, and better than USB for all the previously mentioned reasons that async USB isn't infallible.


Blaeza said:


> How is an SSD going to make that sound any better than what it already is?


Yeah, the SSD is dumb and makes zero sense.  We're not even talking about that anymore.


----------



## Dredi (Aug 10, 2022)

Operandi said:


> "High-end" is subjective. For my audio budget anything in the $500-1000 range is high-end.


Fair enough. For me it has more to do with moving away from technical specifications to ’subjective opinions’ when it comes to audio quality that can be achieved with the products.



Operandi said:


> The clock and data are transmitted separately so its better and better than USB for all the reasons async USB isn't infallible which have been previously mentioned.


But i2s is more fallible than usb. It doesn’t have any error correction, or even error detection mechanisms. The signaling isn’t even differential. The only benefit is the clock signal, but it’s really unnecessary unless you have multiple digital things that you want clocked together. I don’t know of any real use for such a setup. It’s usually used inside a DAC, connecting the USB chip and the actual DAC chip together. It was never intended for connecting separate devices together, and isn’t really suited for it.

It’s better than spdif in some usages, I’ll give you that.



Operandi said:


> I would argue it probably matters more than most people think.


Why do you think that?

In file transfers, error correction is strictly mandatory; in audio it is technically not. How much it matters is a matter of how often the errors happen. If it were truly a problem, it would get solved by data science, not expensive cables.

If it were a problem, you could test it by comparing the sound quality of two inputs on a device that supports both USB and DLNA (over TCP/IP). One has error correction, the other just error detection.

The KEF LS50 Wireless II, for example, as the device to do the tests with.
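To put a rough number on "how often the errors happen", a back-of-envelope sketch. The 1e-12 bit-error rate is purely an assumed placeholder for a short, healthy cable run, not a measured figure:

```python
def expected_bit_errors(ber: float, hours: float = 1.0,
                        sample_rate: int = 44100, bits: int = 16,
                        channels: int = 2) -> float:
    """Expected raw bit errors over `hours` of playback at a given
    bit-error rate (BER). CD-quality audio moves ~1.41 Mbit/s."""
    bits_per_second = sample_rate * bits * channels  # 1,411,200 for CD audio
    return ber * bits_per_second * hours * 3600.0

# With an assumed BER of 1e-12, an hour of CD audio sees ~0.005
# expected bit errors, i.e. roughly one flipped bit per ~200 hours.
per_hour = expected_bit_errors(1e-12)
```

Whether that rate is audible at all is exactly the kind of thing the proposed USB-vs-DLNA listening test would probe.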


----------



## SOAREVERSOR (Aug 10, 2022)

Fancy ass cables matter for analogue kinda, they don't for digital really.  Even then you don't need to spend $$$$$$$$$$$ just get you a copper or silver cable and call it a day.


----------



## Palladium (Aug 10, 2022)

SOAREVERSOR said:


> Fancy ass cables matter for analogue kinda, they don't for digital really.  Even then you don't need to spend $$$$$$$$$$$ just get you a copper or silver cable and call it a day.



Just don't tell the owners of fancy-pants 4+ figure audio cables how comparatively cheapskate every speaker and audio electronics manufacturer gets when it comes to the innards.


----------



## SOAREVERSOR (Aug 10, 2022)

Palladium said:


> Just don't tell the owner of fancy pants 4+ figure audio cables about the fact that every speaker and audio electronic manufacturer is gonna be so much comparatively cheapskate when it comes to the innards.



Those people are idiots to start with and there is no reasoning with them.  Is a 4k HDMI cable worth it, fuck no.  Nor is a 4k copper cable for speakers.  Now is there a difference between a 10 buck HDMI cable and a 100 buck one, fuck no.  But is there a difference between a 10 buck cheap recycled metal speaker cable and a 100 buck pure copper speaker cable, hell yes.  However if we are talking about "spending 10x as much for device" that money is better spent on better speakers than cables.  The issue is that 10x the cost of a cable is something people can still afford and feel good about, now look at a good pair of 2000 buck speakers and 10x that and people shit bricks.

Or just do what I do.  Buy a spool of copper wire at like 500 bucks and make your own damn cables and if it breaks who cares, just make another.  All copper shielded wire isn't hard to find.  Strip shielding at ends twist up, terminate end and off you go!  You have all pure copper wire for decades.


----------



## Operandi (Aug 22, 2022)

Link to the interconnect cable test I mentioned earlier.  From what I remember, it wasn't a good enough test for anyone to definitively point to as proof, but unless you think he was doing something nefarious, it's legit enough.











Dredi said:


> Fair enough. For me it has more to do with moving away from technical specifications to ’subjective opinions’ when it comes to audio quality that can be achieved with the products.


I'm not sure I get what you mean.  Are you saying that once audio becomes _"high-end"_ it's all about the subjective and not the objective results?


Dredi said:


> But i2s is more fallible than usb. It doesn’t have any error correction, or even error detection mechanisms. The signaling isn’t even differential. The only benefit is the clock signal, but it’s really unnecessary unless you have multiple digital things that you want clocked together. I don’t know of any real use for such a setup. It’s usually used inside a DAC, connecting the USB chip and the actual DAC chip together. It was never intended for connecting separate devices together, and isn’t really suited for it.
> 
> It’s better than spdif in some usages, I’ll give you that.


From my understanding, keeping the clock signals (there are multiple) separate from the data is the key to making it better for digital audio.

Interesting article from Hackaday on I2S and its use cases.


Dredi said:


> Why do you think that?
> 
> in file transfers the error correction is strictly mandatory, in audio it is technically not. How much it matters is a matter of how often the errors happen. If it was truly a problem, then it would get solved by data science, not expensive cables.
> 
> ...


I say that because, at the end of the day, these signals are still analog transmissions, which we've already covered, and when it comes to sampling high frequencies you are talking about minuscule moments in time; digital audio, being a stream without error correction, is susceptible to minor errors where a difference in cable quality could make an impact.  Data science can only go so far if you don't have the physical medium to support the type of data you are transmitting.

Again, I'm not advocating for high-end (digital) cables, but am stating the reasons why _"it's just digital 1's and 0's"_ is wrong and how it's possible for a cable to make a difference.  Or at least trying to understand how a cable could make a difference; I'm completely open to having it proven to be BS.

Yeah, that would be an interesting test.  I don't know what DAC is in the KEF, but it's supposed to be a really good speaker (at least the regular LS50 is), so it should stand a chance of being good enough to pick up differences.


----------



## Dredi (Aug 26, 2022)

Operandi said:


> but am stating the reasons why _"its just digital 1's and 0's"_ is wrong and how its possible for a cable to make a difference.


A cable can definitely make a difference, but it requires packet loss. If packet loss were a large problem, it would be handled with slightly larger buffers and packet retransmission, or packet redundancy.

In SPDIF, for example, there are no checksums in place, which means that transmission errors would be immediately audible, and I have never heard an audible crack in my living-room setup. So either I'm deaf, or errors don't happen at a frequency that actually matters.

With USB audio, if an error happens, it will just be interpolated over. Maybe this is the problem here: if the errors were always clearly audible, you'd believe me when I say that they don't happen often enough to matter.
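A toy illustration of that interpolation. The one-bad-sample stream and the neighbour-averaging are simplifications of whatever concealment a real DAC actually applies:

```python
def conceal_dropout(samples: list[int], lost: int) -> list[int]:
    """Patch a single corrupted sample with the mean of its neighbours,
    a crude model of error concealment in a playback chain."""
    patched = list(samples)
    patched[lost] = (samples[lost - 1] + samples[lost + 1]) // 2
    return patched

# -999 marks one corrupted sample in an otherwise smooth ramp
healed = conceal_dropout([0, 100, 200, -999, 400, 500], lost=3)
```

The patched value lands on the ramp, which is why an isolated error slips by unheard instead of producing a click.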

As for the British Audiophile and his tests, I call total bullshit on the dude. Not that he knows that what he does is bullshit, but I wouldn't buy anything based on his biased crap.



Operandi said:


> From my understanding that one key benefit of keeping the clock signals (there are multiple) separate from the data is what is key to making better for digital audio.


But only needed if you do digital signal processing and want to keep the processing delays to a minimum. Use cases like that for home equipment are non existent.


----------



## Operandi (Sep 6, 2022)

Dredi said:


> A cable can definitely make a difference, but it requires packet loss. If packet loss would be a large problem, it would be handled with slightly larger buffers and packet retransmission, or packet redundancy.
> 
> In for example spfif, there are no checksums in place, which means that transmission errors would be immeaditely audible, and i have never heard an audible crack in my livingroom setup. So either I’m deaf, or errors don’t happen with frequency that actually matters.
> 
> ...


Would larger buffers in a real-time stream, where timing is critical, even be beneficial?  DACs have very precise clocks to keep everything operating in the same time domain, and it's supposition on my part, but a large buffer would probably present a pretty big challenge to keep in sync.

Just because there are checksums in USB doesn't mean USB audio is immune to errors, as those checksums only apply to the word data, not the clock data.  Errors in the word data should get corrected through interpolation and not result in an audible error; timing errors to the DAC and within the DAC, though, are not going to be corrected this way.  That's why high-end DACs have precise clocks and why I2S interfaces with clocks on separate lines exist.

What's bullshit with his test?  I haven't rewatched it in full, but it's a pretty simple test from what I remember. If he doesn't know which cable is being tested and he's just identifying the cable, I don't see where the problem lies.


Dredi said:


> But only needed if you do digital signal processing and want to keep the processing delays to a minimum. Use cases like that for home equipment are non existent.


It's all _processing_ though. Things like DSP EQ and room correction are timing sensitive, but that processing is happening before the DA conversion process.


----------



## Dredi (Sep 20, 2022)

Operandi said:


> Would larger buffers in real time stream where timing is critical even be beneficial? DACs have very precise clocks to keep everything operating in the same time domain, and its supposition on my part but a large buffer would probably present a pretty big challenge to keep in sync.


A bigger buffer makes timing and sync simpler, because your data input from the PC doesn't need as tight timing tolerances and you can load longer parts of the audio at once. Should you buffer full songs, the PC communication would be completely unlinked from the timing of data on the DAC. Compared to having absolutely zero buffering, you'd need a 16-bit frame from the PC every 1/44100th of a second, and any timing errors would be immediately noticeable in the audio output.

The only downside to increased buffering is added playback delay, but that’s a problem only in enthusiast level gaming and real time audio systems (like running filter loops through your pc when playing a guitar). Many of the ’high-end’ dacs feature much longer buffers compared to the basic stuff.
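The buffering arithmetic can be sketched in a few lines; the 4410-frame buffer size is just an example, not any particular DAC's spec:

```python
def underrun_deadline_ms(buffered_frames: int, sample_rate: int = 44100) -> float:
    """How long the DAC can keep playing from its buffer if no new
    packet arrives, i.e. the deadline the source must beat."""
    return buffered_frames / sample_rate * 1000.0

# With zero buffering, the next frame is due within one sample period:
no_buffer = underrun_deadline_ms(1)       # ~0.023 ms
# A 4410-frame buffer relaxes the delivery deadline to a leisurely 100 ms,
# at the cost of 100 ms of added playback latency:
big_buffer = underrun_deadline_ms(4410)
```

The trade is exactly as stated above: every frame of buffer buys delivery slack and costs the same amount of playback delay.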




Operandi said:


> Just because there are checksums in USB dosn't mean USB audio is immune to errors as those checkssums only apply word data, not the clock data.


The clock data holds no information in async USB audio. I thought we went through this already. It only matters if the packet comes so late that the playback buffer on the DAC is already empty, producing a clearly audible gap in the music, or a click.



Operandi said:


> Whats bullshit with his test?


It might be that I just don't know where to look, but he doesn't publish his test setup or anything like that. I assume that his test methodology is full of bias, based on the lack of transparency. He doesn't even say which interconnect he is testing the cables on. PC and DAC? DAC and amp? Who knows.



Operandi said:


> Its all _processing _though. Things like DSP EQ and room correction are timing sensitive but that processing is happening before the AD conversion process.


Yup. All the i2s stuff happens before the DA conversion. I personally don't understand what the use case is for a separate real-time device connected to the DAC via i2s. All the same processing can be done on the PC, and the audio will anyway come from a non-timed source like a PC (via async USB audio), a network drive, or local mass media.


----------



## Operandi (Sep 22, 2022)

Dredi said:


> A bigger buffer makes timing and sync simpler, because your data input from the PC doesn’t need as tight timing tolerances and you can load longer parts of the audio at once. Should you buffer full songs, the pc communication would be completely unlinked from the timing of data on the dac. Compared to having absolutely zero buffers, you’d need a 16bit packet from the pc every 1/44000th of a second, and any timing errors would be immeadiately noticeable in the audio output.
> 
> The only downside to increased buffering is added playback delay, but that’s a problem only in enthusiast level gaming and real time audio systems (like running filter loops through your pc when playing a guitar). Many of the ’high-end’ dacs feature much longer buffers compared to the basic stuff.


I'm just referring to the DAC's internal buffer; that's where the timing is critical.


Dredi said:


> The clock data holds no information in async USB audio. I thought we went through this already. It only matters if the packet comes so late, that the playback buffer on the dac is already empty, producing clearly audible lack of music, or a click.


There are several clocks in any digital audio stream, and I'm not really sure what you mean by it holding no information.

We probably did go through this already, and I probably already said this, but the audio can have errors and playback will continue without dropouts or audible artifacts.


Dredi said:


> It might be that I just dont know where to look, but he doesn’t publish how he did the test setup or anything like that. I assume that his test methology is full of bias based on the lack of transparency. He even doesn’t tell what interconnect he is testing the cables for. Pc and dac? Dac and amp? Who knows.


Yeah, idk, it's not at a scale or transparent enough to be conclusive or anything, but I don't get the impression he's making anything up.  If I remember correctly, he basically recommended the more affordable professional interconnects.  I thought he listed the test procedure and equipment... I guess I'll have to watch it again in full.

I will say, though, that a test like that is no trivial thing; blind listening tests are time consuming and difficult to do.  I've only done it with lossy vs. lossless audio, so no hardware to swap in or out, and even that was a time-consuming process that you (or at least I) can only do for so long before fatigue sets in.  I would like to see someone put the effort in and do it at a bigger scale though.


Dredi said:


> Yup. All the i2s stuff happens before the AD conversion. I personally don’t understand what the use case is for a separate real time device connected to the DAC via i2s. All the same processing can be done on the PC, and the audio somehow will anyway come from a non timed source like a PC (via async USB audio), a network drive or local mass media.


Well, everything in the context of what we're talking about happens before the DA conversion.  I2S is just a different way to stream from the host device to the DAC.


----------



## Dredi (Sep 22, 2022)

Operandi said:


> I'm just referring to the DAC's internal buffer, thats where the timing is critical.


Me too.



Operandi said:


> There are several clocks in any digital audio stream, and I'm not really sure what you mean by it holds no information.


I mean that the packet’s arrival time holds no information. The packet just needs to arrive before the playback buffer is empty. No timing on the DAC relies on the packets arriving with any higher precision.



Operandi said:


> the audio can have errors and playback will continue without dropouts or audible artifacts.


Yup, if there is packet loss.


Operandi said:


> I2S is just a different way to stream from the host device to the DAC.


But why is it necessary? Why not just have the host on the same device as the dac? Or use async mode?

For example if we had a i2s interface card on a pc, wouldn’t the timing issue simply move from the pc-dac interface to the processor-interface card -interface (which would run in async mode because of pci express)? I don’t understand the point.

From what I’ve seen, it’s just a way to sell people shit they don’t need, like dedicated playback devices (that use async communication to get the data from storage media anyway, because that’s the way flash storage works), or to add buffering boxes, retimers, DSP boxes, or other stuff that could either be part of the dac, or done in a non-realtime fashion to the media being played.

Basically all digital media is read in an async way. Adding an i2s connection somewhere in no way fixes this ”problem”.


----------



## Operandi (Sep 22, 2022)

Dredi said:


> Me too.


Well, you can't just increase buffers without consequence.  DACs use very accurate clocks to keep all their functions in sync, including the incoming data stream.  Putting a bunch of data in a buffer would compound the issue of keeping everything in the stream in sync.


Dredi said:


> I mean that the packet’s arrival time holds no information. The packet just needs to arrive before the playback buffer is empty. No timing on the DAC relies on the packets arriving with any higher precision.


Right, but in a real-time stream the data and the timing are integral.  It doesn't matter if the data gets to the DAC correctly if the timing associated with any of the clocks for that particular piece of data is wrong.


Dredi said:


> But why is it necessary? Why not just have the host on the same device as the dac? Or use async mode?


Because it does a better job of preserving the timing and data by transmitting them over dedicated conductors.  If you read or listen to DAC designers talk about interfaces, USB is pretty universally disliked due to the noise of the interface and all the other traffic on the bus.  It's only used because it's convenient, and because async USB is vastly better than TOSLINK or SPDIF from a PC, which are garbage sources.


Dredi said:


> From what I’ve seen, it’s just a way to sell people shit they don’t need, like dedicated playback devices (that use async communication to get the data from storage media anyway, because that’s the way flash storage works), or to add buffering boxes, retimers, DSP boxes, or other stuff that could either be part of the dac, or done in a non-realtime fashion to the media being played.


I think that's a convenient answer given the excess and gatekeeping that tend to prevail in audiophile circles, because that stuff certainly does exist (see the product that spawned this thread), but that doesn't make everything "high-end" overly complex "shit that people don't need" with no tangible benefit.  There are far simpler ways to con people out of their money than developing new high-performance DACs and the associated controllers and proprietary interfaces that go along with them.


----------



## Dredi (Sep 22, 2022)

Operandi said:


> Because it does a better job of preserving the timing and data by transmitting them over dedicated conductors.


But the data being played as audio does not come from any ’timed’ source anyway. Flash memory doesn’t spin at a constant speed or anything like that.

There is a need only for a single clock source for the DAC and any reliable method for getting data to its playback buffer in time. All modern systems fetch the data from a non-timed source.


Operandi said:


> Well you can't just increase buffers without consequence. DACs use very accurate clocks to keep all their functions in sync including the incoming data stream. Putting a bunch of data in buffer would be compounding the issue of keeping everything in the stream in sync.


This view of yours doesn’t have any standing in reality. Increasing the buffer of the DAC makes timing everything much easier, as you have less and less dependency on other devices. Everything in the stream doesn’t need to be ’in sync’, as the only thing you hear is the rate at which the last piece of the pipeline processes stuff. 

You won’t be able to hear when you ripped the song onto your hard drive, and you won’t be able to hear when that song got buffered to your DAC. What matters is that the DAC loads bits off the last buffer at a constant (and correct) rate.


----------



## Operandi (Sep 22, 2022)

Dredi said:


> But the data being played as audio does not come from any ’timed’ source anyway. Flash memory doesn’t spin at a constant speed or anything like that.
> 
> There is a need only for a single clock source for the DAC and any reliable method for getting data to it’s playback buffer in time. All modern systems fetch the data from a non timed source.


Once it's read off whatever the storage media is, processed, and transmitted to the DAC, it's timed.  And it's not just a single clock shared between the DAC and host; there are several muxed together that have to be separated and then processed by the DAC.



Dredi said:


> This view of yours doesn’t have any standing in reality. Increasing the buffer of the DAC makes timing everything much easier, as you have less and less dependency on other devices. Everything in the stream doesn’t need to be ’in sync’, as the only thing you hear is the rate at which the last piece of the pipeline processes stuff.
> 
> You won’t be able to hear when you ripped the song onto your hard drive, and you won’t be able to hear when that song got buffered to your DAC. what matters is that the DAC loads bits off the last buffer at a constant (and correct) rate.


Except that no DAC to my knowledge has ever leveraged a larger buffer as a technique to preserve the integrity of the stream.  It would be pretty easy to add some memory or cache to a DAC, give it some easily marketable audiophile name, and mark it up 200%.  Instead you get complex solutions like I2S and crazy things like temperature-controlled clock generators.  If a larger buffer were effective, I think it would have been done before.  I also don't really see how it would make things easier, since the more data you store, the more work you have to do to keep track of it and manage it.

What is stored on your filesystem is not the same thing as what is stored in the buffer of your DAC, even if it's raw PCM.  How it's transmitted is interface specific; in the case of PCM it gets segmented into tiny frames (sample-rate dependent), not blocks of data like what's on your hard drive (in whatever filesystem it happens to use), and continuously processed.  If any of it doesn't get there at the right time, or a bit is misinterpreted, the stream is still processed continuously with a loss of quality rather than an audible artifact; only when the process completely breaks down do you get audible glitches.


----------



## Dredi (Sep 23, 2022)

Operandi said:


> Once its read off of whatever the storage media is and process and transmitted to the DAC its timed


Except that you can think of the previous (async) step always as the storage media. The DAC doesn’t know when a certain bit was read from the storage media to cpu cache, to ram, to cache, to pcie, to usb controller, to usb receiver buffer, to i2s connecting the usb receiver to the actual DAC chip. Only the last step is actually timed.




Operandi said:


> and its not just a single clock shared between the DAC and host, there are several muxed together that have to be separated and then processed by the DAC.


Nope. There is just a single clock on the DAC, which is split up and combined to create any other (minor) clocks that are necessary.




Operandi said:


> Except that no DAC to my knowledge has ever leveraged a larger buffer as a technique to preserve the integrity of the stream.


For example, the ARES II has a 10+ ms buffer. Many other ”high-end” DACs have similar stuff.



Operandi said:


> Instead you get complex solutions like I2S and crazy things like temperature controlled clock generators.


A temperature-controlled main clock source makes sense and can actually affect how things sound. I2S is used in even most sub-$50 DACs, just internally. It's just a basic board-level interconnect, nothing crazy. What's crazy is trying to use it for something it was never designed for, and is not good for, like connecting a PC to a DAC.


Operandi said:


> I also don't really see how that would make things easier as the more data you store the more work you have to do keep track of it and manage it.


The size of a buffer makes no difference in the amount of work needed to ’keep track of and manage it’. You can utilize the same exact data handling code for buffers of almost any size, by just changing one input parameter.



Operandi said:


> What is stored on your filesystem is not the same thing as what is stored in the buffer of your DAC, even if its RAW PCM. How its transmitted is interface specific and in the case of PCM gets segmented into tiny frames (sample rate dependent), not blocks of data like whats on your hard drive (in whatever filesystem it happens to be using) and continuously processed. If any of it dosn't get there at the right time or a bit is misinterpreted its still continuous processing the stream with a loss of quality not an audible artifact, only when the process competely brakes down do you get audible gliches.


It is the same bits; they are just repackaged into differing lengths depending on the transfer interface. For example, on USB the DAC just sends a request for the ’next n bytes of data’, and the CPU then fetches them from RAM or HDD, packages them into a USB packet, and sends it off.
If the bits were different, it would sound different.
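A sketch of that pull model; the function names are invented, and real async USB moves isochronous packets rather than byte slices, but bit-identical reassembly is the point:

```python
def host_fetch(media: bytes, offset: int, n: int) -> bytes:
    """Host side: serve the DAC's request for the next n bytes,
    regardless of whether they came from RAM, an HDD, or an SSD."""
    return media[offset:offset + n]

def async_playback(media: bytes, packet_size: int) -> bytes:
    """DAC side (sketch): keep pulling packets at its own pace and
    reassemble them. The result is bit-identical to the stored file."""
    out = bytearray()
    offset = 0
    while offset < len(media):
        packet = host_fetch(media, offset, packet_size)
        out.extend(packet)
        offset += len(packet)
    return bytes(out)

track = bytes(range(16))
replayed = async_playback(track, packet_size=6)
```

However the packets are sized or when they arrive, what lands in the DAC's buffer is the same byte sequence that was on disk, which is the "same bits" claim in code form.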


----------



## Operandi (Sep 23, 2022)

Dredi said:


> Except that you can think of the previous (async) step always as the storage media. The DAC doesn’t know when a certain bit was read from the storage media to cpu cache, to ram, to cache, to pcie, to usb controller, to usb receiver buffer, to i2s connecting the usb receiver to the actual DAC chip. Only the last step is actually timed.


Errors can still happen in that last step of the stream being read by the receiving chip in the DAC and while being processed.


Dredi said:


> Nope. There is just a single clock on the DAC, which is split up and combined to create any other (minor)clocks that are necessary.


Right, they communicate via a single clock, but the stream consists of several clocks, and any of them is subject to error.


Dredi said:


> For example the ARES II has a 10+ms buffer. Many other ”high-end” DACs have similar stuff.


Cool, so it is a thing, I'll have to read up on that.


Dredi said:


> Temperature controlled main clock source makes sense and can actually affect how things sound. I2S is used in even most sub 50$ DACs, just internally. It’s just a basic board level interconnect, nothing crazy. What’s crazy is trying to use it for something it was never designed for, and is not good for, like connecting a PC to a DAC.


Yeah, I know the origins of I2S.  Using it as an external interface seems crazy, I suppose, if you think async USB is without fault, and that kinda seems like the majority of our disagreement here.  The interface itself is more robust, and putting each clock, along with the data, on its own path would have tangible benefits in my opinion.


Dredi said:


> The size of a buffer makes no difference in the amount of work needed to ’keep track of and manage it’. You can utilize the same exact data handling code for buffers of almost any size, by just changing one input parameter.


That doesn't seem right to me.  If you increase the size of a CPU's cache, latency goes up.  If a DAC has to buffer more frames of the PCM stream and keep track of them for the event in which it needs to use what's in the buffer rather than what was next in the stream, how is that not more work for it to manage and keep track of the timing of those additional frames in the buffer?  I mean, this is happening 44,100 times a second in the case of plebeian CD-quality audio.


Dredi said:


> It is the same bits, they are just repackaged to differing lenghts depending on the tranfer interface. For example on USB the DAC just sends a request for the ’next n bytes of data’ and the cpu then fetches them from RAM or HDD, packages to the USB packet and sends it off.
> If the bits would be different, it would sound different.


Well, the bits being sent are the same, but how they get there is what's in question.  We already went over how a real-time digital audio stream is different from, say, transferring a file to a USB flash drive, which is honestly beyond most people's awareness of how this works, so no need to go over that.

I get your points, but something as fundamental as the async feature of USB Audio 2 is essentially a technique that was added to USB audio to compensate for the problems encountered in a real-time digital stream.  If digital streams didn't have these problems, async DACs wouldn't be needed.  In USB Audio 1 (non-async DACs) the bits being sent would be the same, but the interface is at fault, so the sound would be different.  So either async USB DACs completely solve everything and things like I2S are a waste of time, or they're just further down the path of mitigating the issues with digital streams.


----------



## Dredi (Sep 26, 2022)

Operandi said:


> Right they communicate via a single clock but the stream consists of several clocks, any of them are subject to error though.


What different clocks? Any data transfer related clock accuracy is completely irrelevant in async transfer.



Operandi said:


> Errors can still happen in that last step of the stream being read by the receiving chip in the DAC and while being processed.


Yes! The same goes for I2S, and literally any transfer protocol.



Operandi said:


> if you think USB async is without fault and that kinda seems like the majority of our disagreement here.


For non-realtime applications, I really don’t see any major faults with it. If I had the ability to change something, I’d maybe add some error correcting bits in the packet structure, just so that we wouldn’t need to have this discussion about how likely transfer errors are over USB.
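To be clear, USB data packets do already carry a CRC16 for error *detection* (isochronous audio transfers just aren't retried on failure); what's missing is correction or retry. A rough Python sketch, purely illustrative since real controllers compute this in silicon, of how a reflected CRC-16 catches a flipped bit:

```python
def crc16_reflected(data: bytes, poly=0xA001, init=0xFFFF) -> int:
    """Bit-by-bit reflected CRC-16 (0xA001 is the reflected form of the
    0x8005 polynomial USB uses on data packets). Illustrative sketch;
    real controllers compute this in hardware."""
    crc = init
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ poly if crc & 1 else crc >> 1
    return crc

payload = bytes(range(64))            # stand-in for an isochronous audio payload
good = crc16_reflected(payload)
damaged = bytearray(payload)
damaged[10] ^= 0x04                   # flip one bit "in transit"
assert crc16_reflected(bytes(damaged)) != good   # receiver detects the error
```

A CRC of this form is guaranteed to detect any single-bit error, which is why flipping one bit always changes the checksum.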



Operandi said:


> The interface itself is more robust and putting each clock along with the dataon its own path would have tangible benefits in my opinion.


What are the benefits, in your opinion?
Comparing the two options of: ’async source -> DAC’ and ’async source -> i2s transmitter -> DAC’.
I really don’t see any, but maybe I have missed something.



Operandi said:


> That dosen't seem right to me. If you increase the size of CPUs cache latency goes up.


But cache latencies are not a problem at all in this use case; they just add consistent latency to the overall signal chain. You can get constant-latency memory chips, clocked to the DAC's master clock.



Operandi said:


> If a DAC has to buffer more frames of PCM stream and keep track of them for the event in which it needs to use whats in the buffer rather than what was next in the stream how is that not more work for it to manage and keep track of the timing of these additional frames in the buffer?


The frames can be connected together, and reading can happen sequentially based on memory address. I guess programming isn't your strong suit, if you think that the size of a data structure makes it more complex in itself.
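To make that concrete, here is a toy sketch (real DAC buffers live in silicon, but the handling logic is the same) where the exact same code drives a buffer of any size; capacity is just a parameter:

```python
from collections import deque

class SampleFIFO:
    """Toy FIFO sample buffer: capacity is just a parameter, and the
    handling code is identical no matter how large the buffer is."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.buf = deque()

    def push(self, samples):
        for s in samples:
            if len(self.buf) >= self.capacity:
                raise OverflowError("buffer full")
            self.buf.append(s)

    def pop(self):
        return self.buf.popleft()  # oldest sample first (FIFO order)

# The exact same code drives a tiny buffer and a huge one:
for cap in (64, 1_000_000):
    fifo = SampleFIFO(cap)
    fifo.push(range(10))
    assert [fifo.pop() for _ in range(10)] == list(range(10))
```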



Operandi said:


> I get your points but something as fundamental as the async feature of USB Audio 2 it is essentially a technique that was added to USB audio to compensate for the problems encountered in a real time digital stream.


It was added so that the computer's inaccurate clocks would be disconnected from the audio output. In USB Audio 1.0 that was a real problem, and it was solved in USB Audio 2.0. The only downside was the need to increase the minimum buffer size of the DAC, making it less real-time. For most uses that does not matter.
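Here is a toy model of the async mechanism: the DAC consumes samples on its own clock and feeds back how much the host should send, so the host clock's accuracy never reaches the audio output. All numbers and the simple proportional feedback below are made up for illustration; the real protocol uses a dedicated feedback endpoint.

```python
# Toy model of USB Audio 2.0 asynchronous mode. Rates, fill targets, and the
# feedback gain are illustration values, not anything from the spec.

def simulate(device_rate=44.1337, frames=10_000, target_fill=200):
    fill = target_fill     # samples sitting in the DAC's input FIFO
    request = 44.0         # host's running estimate: samples per 1 ms frame
    frac = 0.0
    lo = hi = fill
    for _ in range(frames):
        fill += round(request)   # host delivers what the device asked for
        frac += device_rate      # device clock consumes at its own rate
        take = int(frac)
        frac -= take
        fill -= take
        # feedback: steer the request toward the device's observed rate
        request = take + (target_fill - fill) * 0.01
        lo, hi = min(lo, fill), max(hi, fill)
    return lo, hi

lo, hi = simulate()
assert lo > 0     # the FIFO never underruns...
assert hi < 300   # ...and never grows without bound
```

Even though the device clock runs measurably off the host's nominal rate, the buffer level stays bounded, which is the whole point of async mode.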



Operandi said:


> So either USB async DACs completely solve everything and things like I2S are waste of time or its just further down the path to mitigate the issues with digital streams.


It solved everything for most users. Real-time users have moved to Thunderbolt. I2S as an external interface is just some audiophile marketing bullshit and ”solves” nothing.


----------



## Operandi (Sep 30, 2022)

Dredi said:


> What different clocks? Any data transfer related clock accuracy is completely irrelevant in async transfer.


All of the clocks that are present in I2S internally (the master clock, bit clock, word clock, maybe more) are part of PCM and have to be muxed and demuxed when they are transferred over USB, because there is only one data line.  Async helps keep everything in sync, but it's only mitigating the issue; it can't be perfect, therefore not irrelevant.


Dredi said:


> Yes! The same goes for i2s, and literally any transfer protocol.





Dredi said:


> What are the benefits, in your opinion?
> Comparing the two options of: ’async source -> DAC’ and ’async source -> i2s transmitter -> DAC’.
> I really don’t see any, but maybe I have missed something.


Of course nothing is ever perfect, but giving all the separate data lines and clocks dedicated conductors means more bandwidth and less interference for those specific components of the stream.


Dredi said:


> But the cache latencies are not a problem at all in this use case. They just add additional consistent latency to the overall signal chain. You can get constant latency memory chips, clocked to the DACs master clock.


Given how crucial clock accuracy is to the internal operations of a DAC (how fast data is coming in, the sampling rates involved), it would seem like there has to be some performance cost to adding more cache.  If you are holding data in a cache to compare it to data coming in, that's more work.  I still have to read up on what the Ares II DAC does in this regard.


Dredi said:


> The frames can be connected together, and reading can happen sequentally based on memory address. I guess programming isn’t your strong suite, if you think that the size of a data structure makes it more complex in itself.


I'm not a programmer by any means.  For reference, I work on the infrastructure side of datacenter IT, specifically configuring high-performance hierarchical database systems, and a bit on the SAN side of said systems.  The key difference with the type of data I'm working with (or really any traditional data handling) is that error checking is always present before any data is considered good (this is covering old ground).  When data is sent over an HBA via a 50' plastic fiber cable in one of my servers, there are no doubt errors happening all of the time; the database doesn't get corrupted because the system takes the time to compare the data to what it expects.  The principles of binary data and how it's transmitted are no doubt largely the same, but the error handling is not the same in a real-time digital audio stream.


Dredi said:


> It solved everything for most users. Real time users have moved to thunderbolt. I2s as an external interface is just some audiophile marketing bullshit and ”solves” nothing.


There honestly isn't any way to determine what it solves or doesn't from forum posts, despite having a strong knowledge of the fundamentals of transferring digital data, inferring that old interfaces (S/PDIF, USB Audio 1.0) have weaknesses versus new and improved interfaces (USB Audio 2.0), and concluding that it's completely solved.

If it were just marketing bullshit, it's still years and years of work to develop, test, and manufacture it.  Despite whatever you may think of its technological merits or goals, developing any new product from scratch is hard, let alone one that uses new and largely proprietary interfaces and protocols (in a new way).  There are far, far easier ways to market bullshit than to go through all that effort, especially in the audio world.


----------



## Dredi (Sep 30, 2022)

Operandi said:


> Of course nothing is ever perfect but giving all the separate data lines and clocks dedicated is more bandwidth and less interference for those specific components of the stream.


In the given example, could you point out how and where the different interference aspects come in? As you know, USB is more robust when it comes to interference than I2S (due to differential data lines).



Operandi said:


> All of the clocks that are present in I2S internally (the master clock, bit clock, word, clock, maybe more) are part of PCM and have to muxed and demuxed when they are transferred over USB because there is only one data line.


Umm, it's just one clock. The word clock is just the data clock divided by 8(?), and so on. Only the single master clock is a hardware-level thing; the rest is done with software and some transistors.
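To sketch the derivation with conventional numbers (the 256x master-clock multiple is just one common choice; 128x/384x/512x parts exist too):

```python
# Conventional I2S clock relationships for 16-bit stereo PCM at 44.1 kHz.
# Only the master clock is a hardware oscillator; everything else is an
# integer division of it.
fs = 44_100                  # sample rate = word clock (LRCLK), Hz
bits, channels = 16, 2
mclk = 256 * fs              # master clock: 11.2896 MHz (assumed 256x multiple)
bclk = fs * bits * channels  # bit clock: one clock cycle per transmitted bit

assert bclk == 1_411_200                        # 1.4112 MHz
assert mclk % bclk == 0 and mclk // bclk == 8   # BCLK = MCLK / 8 here
assert bclk // fs == bits * channels            # LRCLK = BCLK / 32
```

With these figures the bit clock really is the master clock divided by 8, and the word clock is the bit clock divided by 32 (16 bits times 2 channels).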



Operandi said:


> it would seem like there would have to be some performance cost adding more cache. If you are holding data in a cache to compare it to data coming in thats more work.


Just no. You don't need to compare data in the cache to data you are receiving. It's just a simple FIFO data buffer.



Operandi said:


> If it were just marketing bullshit its years and years of work to develop, test, manufacture it.


Just waiting for someone to publish the tests that prove it does something. Any second now.
If no definitive benefits have been proven to exist, then why the hell has money been pumped into this?


Operandi said:


> There honestly isn't any way to determine what it solves or doesn't from forum posts despite having a strong knowledge of the fundamentals of transferring digital data, inferring old interface (SPIDIF, USB1) have weaknesses vs new and improved interfaces (USB2) and concluding that its completely solved.


Yup. And until someone determines with credible methods that something exists, then it probably doesn’t. Or at least we should not base things on the premise that it does.

edit: but yeah, we have very fundamental differences in the way we think about stuff. You seem to think that if something isn’t proven to not exist, that it might exist. I think that if something is thought to exist, it needs to be able to be proven.

That’s all. Proving negatives doesn’t really work for anyone, so it’s maybe best to tap out.


----------



## Operandi (Oct 7, 2022)

Dredi said:


> In the given example, could you point out how and where the different interference aspects happen. As you know, USB is more robust when it comes to interference than i2s (due to differential data lines).


It's ultimately a more robust cable and connector.  1's and 0's converted to voltage and back to 1's and 0's, right?  And there is no reason to go over again how a digital audio stream is different from how traditional data is transferred.


Dredi said:


> Umm, it’s just one clock. The word clock is just the data clock divided by 8(?) and so on. only the single master clock is a hardware level thing, the rest is done with software and some transistors.


It's really hard to find information on how audio gets encoded and decoded, but it's my understanding that there are several clocks used in PCM, and I assume other formats all work in a similar way.  The master clock is what is used to send/receive from host to device, but that's irrelevant to how the audio is packaged.



Dredi said:


> Just no. You don’t need to compare data in cache to data you are receiveing. It’s just a simple FIFO data buffer.


Yeah, OK, so that's kinda what I thought anyway.

I couldn't find any information about the Ares II (or any other DAC) having anything particularly special going on with its buffer.


Dredi said:


> Just waiting for someone to publish the tests that prove it does something. Any second now.
> If no definitive benefits have been proven to exist, then why the hell has money been pumped into this?


Yeah, I mean I'm with you, but who would certify such a test, and who's the target audience?  (Digital) audio enthusiasts that have a halfway decent understanding of how data transfer works and of the differences in a real-time digital stream (I'm pretty much at my limits in being able to talk about this from a technical perspective)?  It's just such a minuscule number of people that would care or know what is being presented to them that I don't think it would make any difference one way or another.  Most people are in one of two camps: it's "just 1's and 0's, all DACs sound the same," or they're only interested in subjective impressions in the form of audiophile vernacular.


Dredi said:


> Yup. And until someone determines with credible methods that something exists, then it probably doesn’t. Or at least we should not base things on the premise that it does.
> 
> edit: but yeah, we have very fundamental differences in the way we think about stuff. You seem to think that if something isn’t proven to not exist, that it might exist. I think that if something is thought to exist, it needs to be able to be proven.
> 
> That’s all. Proving negatives doesn’t really work for anyone, so it’s maybe best to tap out.


Yeah, I think we maxed it out, and I don't think anyone else is following along anymore.  I'm not trying to be right here, just to learn.

I _gave up _on high-end audio like 8-10 years ago, when it seemed like the consensus was that a $150 DAC could be "_bit perfect" _if it was configured correctly, and it's all 1's and 0's anyway, so anything beyond that point is literally doing nothing. That notion never made sense to me, but it was insanely frustrating knowing what a $150 speaker sounds like vs. what a $1,500 speaker sounds like, yet all DACs are the same? It didn't really matter then, as I didn't have the resources to buy anything much more than a $150 DAC anyway, so it was kinda moot and no reason to dwell on it.

That's just a bit of context into the motivation behind my thought process.  As to what exists or doesn't, ideally yeah, prove it, but most people either care about "_how it sounds" _or, a much smaller subset, just want _"proof"_. It seems like most people that claim to be looking for proof are just looking for any kind of data to prove their point that there is no difference, and probably wouldn't be moved regardless of what was presented to them. I don't think there are enough people that are both genuinely interested and know enough to have a conversation like we are here to really move the needle either way. That's my cynical impression of the state of the issue, though, I guess.


----------



## Dredi (Oct 11, 2022)

Operandi said:


> Its ultimately a more robust cable and connector. 1's and 0's converted to voltage and back to 1's and 0's right?


It has no dedicated connector; most use a repurposed HDMI cable and connectors. Nothing amazing or high-end there. Specs for HDMI cables are not meaningfully different from USB. Electrically, USB is superior to I2S when it comes to interference handling.



Operandi said:


> Its really hard to find information on how audio gets encoded and decoded but its my understanding that there are several clocks used in PCM and I assume other formats all work in a similar way. The master clock is what is used to send / receive from host to device but thats irrelevant to how the audio is packaged.


Your knowledge is lacking, sadly. There is just one clock in PCM that is important, which is the sample rate. The others are either derived from it or are arbitrary to the output.



Operandi said:


> I couldn't find any information about the Ares II (or any other DAC) having anything particularly special going on with its buffer.


It’s just bigger, that’s it. You can determine the buffer size by measuring output latency.
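To put numbers on that (the 100 ms figure is hypothetical):

```python
# A FIFO of N samples drained at sample rate fs adds N / fs seconds of
# latency, so measuring the extra output latency reveals N.
fs = 44_100                  # samples per second
extra_latency_s = 0.100      # hypothetical measured latency attributable to the buffer
buffer_samples = round(extra_latency_s * fs)
assert buffer_samples == 4410   # ~4.4k samples of buffering
```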



Operandi said:


> Yeah, I mean I'm with you but who would certify such a test and who's the target audience.


There are plenty of technical journals that publish data science stuff. If that's too high a bar to clear (lol), then just publish it on your homepage for others to see. No tests means no improvement.



Operandi said:


> It seems like most people that claim to be looking for proof are just looking any kind of data to prove their point that there is no difference and probably wouldn't be moved regardless of what was presented to them.


Nice assumptions there. I, for one, would love for there to be new and improved ways for DA converters to work, and for Nyquist's theorem to be proven false. It would be extremely interesting for many fields outside of digital audio.
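And Nyquist is easy to demonstrate: at sample rate fs, a tone at f and a tone at f + fs produce identical samples, so anything above fs/2 is indistinguishable after sampling. A quick sketch (the frequencies here are arbitrary illustration values):

```python
import math

# Alias demo: sin(2*pi*(f+fs)*n/fs) = sin(2*pi*f*n/fs + 2*pi*n), and the
# extra full turns vanish, so both tones sample to the same values.
fs, f = 44_100, 5_000
tone  = [math.sin(2 * math.pi * f        * n / fs) for n in range(100)]
alias = [math.sin(2 * math.pi * (f + fs) * n / fs) for n in range(100)]
assert all(abs(a - b) < 1e-9 for a, b in zip(tone, alias))
```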



Operandi said:


> consensus was a $150 DAC could be "_bit perfect" _if it was configured correctly and its all 1's and 0's anyway so anything beyond that point literally doing nothing. That notion never made sense to me but was insanely frustrating knowing what a $150 speaker sounds like vs what a $1,500 speaker yet all DACs are the same?


What makes sense is being able to prove that something exists or not. Non proven bullshit is what doesn’t make sense.


----------



## Operandi (Oct 15, 2022)

Dredi said:


> It has no dedicated connector. Most use just a repurposed hdmi cable and connectors. Nothing amazing, or high end there. Specs for hdmi cables are not meaningfully different to usb. Electrically usb is superior to i2s, when it comes to interference handling.


I know that it's simply a repurposed HDMI cable and connector.  Whatever the spec may be, high-quality, well-built HDMI cables are plentiful and are better than USB 2.0 cables, which is what USB audio uses; how that compares to USB 3.0+ I don't know.  As to I2S, all DACs internally work with I2S, so it's specifically the separate clocks used in an external I2S interface that get dedicated conductors, whereas with USB or S/PDIF it all gets muxed together.


Dredi said:


> Your knowledge is lacking, sadly. There is just one clock in pcm that is important, which is the sample rate. The others are either derived from it, or are arbitrary to the output.


I know my knowledge is lacking, I said as much; I don't have a degree in any of this.  Having a strong knowledge of how digital data is transferred in the traditional sense, and of how PCM works, doesn't make you an authority on this subject either, however.  PCM is not the same thing as I2S, but they were developed around the same time as part of the development of CD audio (I think?... before my time).


Dredi said:


> There are plenty of techical journals that publish data science stuff. If that’s a too high bar to clear (lol), then just publish it at your homepage for others to see. No tests means no improvement.


I'd like to see that too, but the cross-section between high-end audio and outlets that publish in-depth technical studies and conduct proper blind tests is small; the target audience would be even smaller.  99% of the people buying this gear don't have the slightest idea of how any of it works and don't care.  I waste a lot of time on the internet (obvious at this point) and I can't think of anyone that would conduct and publish such a study.

No tests means just that, "no tests," not "no improvement."  The tests have to get better to show the improvement or lack thereof.


Dredi said:


> Nice assumptions there. I for one would love for there to be new and improved ways for DA converters to work, and for nyqvists theorems to be proven false. It would be extremely interesting for many fields outside of digital audio.


Sure, they are assumptions, but without the proof you require, that's all there is left.

It very well could all be marketing, but the notion that people (engineers, project managers, etc.) are going to devote their education (in the case of the engineers at least), career, and thousands of hours to develop high-performance equipment that makes no appreciable difference makes absolutely no logical sense.  It could also be a shared delusion and the world's biggest example of confirmation bias, but for the same underlying reasons that seems unlikely; regardless of what you think of their design objectives, achieving them required a lot of time and effort from people with highly skilled technical backgrounds.  Just get a job in the sales and marketing department of one of the many brands owned by Sound United if you want to make bank, or find a real religion if you need to believe in something.


Dredi said:


> What makes sense is being able to prove that something exists or not. Non proven bullshit is what doesn’t make sense.


To you.  Do you only buy wine or coffee that's gone through blind tests and been proven to be the best?  Personally, I just buy what tastes good to me.  Most people don't base their decisions on what has been proven in blind studies in general.  It's even less practical in audio due to the possible variations and the individualism of audio.

Like I already mentioned, I would like to see some really good tests in this area.  It should be doable on a smaller scale (it wouldn't be statistically meaningful, though) with a few subjects on gear they are familiar with.  Who do we talk to, lol?


----------



## Dredi (Nov 14, 2022)

Operandi said:


> No tests means just that, "no tests" not "no improvement".


I disagree. If you, as a high end audio company, make claims about the sound difference of doing things differently, and are unable to show that in tests, there is zero merit to the different approach.
There could still be a difference, sure, but to which direction is it? No one knows.



Operandi said:


> It very well could all be marketing but the notion that people (engineers, project managers, ect.) are going to devote their education (in the case of the engineers at least), career, and thousands of hours to develop high performance equipment that makes no appreciable difference, makes absolutely no logical deductive sense.


Almost as much sense as doing all the work and then ”forgetting” to test if it made a difference…




Operandi said:


> To you. Do you only buy wine or coffee thats gone through blind tests and proven to be the best? Personally I just buy what tastes good to me. Most people don't base their decisions on what has been proven in blind studies in general. Its even less practical in audio due to the possible variations and individualism of audio.


I do blind tasting of drinks to find out what I _actually_ like. It’s a fun hobby and shows how bias is king. And it’s anyway a poor comparison, as wine taste differences can actually be consistently detected by humans in blind testing, unlike differences between functioning USB cables. 




Operandi said:


> Like I already mentioned I would like to see some really good tets in this area. It should be doable on a smaller scale (it wouldn't be statistically meaningful though) with a few subjects on gear they are familiar with. Who do we talk to lol?


The burden of proof should be on the ones making claims. Harman does a bunch of testing on audio-related things, but tends to focus on actual things that could matter, like EQ, speaker element types, etc.


Operandi said:


> As to I2S all DACs internally work with with I2S so its specifically the separate clocks used in an external I2S interface that have dedicated conductors whereas with USB or SPIDIF it gets muxed together.


USB for the most part works asynchronously, treating the computer much like file storage, and no timing information is transmitted over it. S/PDIF is as you said.


Operandi said:


> PCM is not the same as thing as I2S


I2S is a physical-layer protocol for transferring data. PCM is a data format.


Operandi said:


> Whatever the spec may be high quality, well built HDMI cables are plentiful and are better than USB 2.0 cables which is what USB audio uses


How much better? Enough to get a better outcome, compared to more robust signaling and error handling of USB? Got any tests to link, or is this just an uninformed opinion?


----------



## Operandi (Nov 19, 2022)

Dredi said:


> I disagree. If you, as a high end audio company, make claims about the sound difference of doing things differently, and are unable to show that in tests, there is zero merit to the different approach.
> There could still be a difference, sure, but to which direction is it? No one knows.


Yeah, we disagree.  Audio being subjective, people perceiving it differently, and the difficulty of doing meaningful tests (already gone over several times) all make tests impractical for a manufacturer to carry out for each and every product.  People prefer different things, and it's not always based on the price of the equipment.  A manufacturer, like anyone else, could do a closed test and show that people can pick out DAC A or DAC B, but that would be purely academic.


Dredi said:


> Almost as much sense, as doing all the work and then ”forgetting” to test if it made a difference…


Forgetting?  I mentioned several posts back how Schiit does their internal testing.  I'm sure all manufacturers do something similar.


Dredi said:


> I do blind tasting of drinks to find out what I _actually_ like. It’s a fun hobby and shows how bias is king. And it’s anyway a poor comparison, as wine taste differences can actually be consistently detected by humans in blind testing, unlike differences between functioning USB cables.


Personally, I've never felt the need.  I've tried several times to get into more expensive light and medium roasts of coffee and always end up preferring the mid-range medium and dark roasts.  I didn't say it was a good comparison, but it is _a comparison. _Auditory perception and memory recall make any audio comparison uniquely difficult.


Dredi said:


> The burden of proof should be on the ones making claims. Harman audio does a bunch of testing on audio related things, but tend to focus on actual things that could matter, like eq, speaker element types etc.


All manufacturers in all industries do testing.  Harman is known for the Harman curve, but I'm not aware of them doing much more than that outside of publishing test results.


Dredi said:


> USB for the most part works asynchronously, treating the computer much like a file storage, and no timing information is transmitted over it. Spdif is as you said.


USB muxes the parts of the stream the same way S/PDIF does; USB being async doesn't have any bearing on that.  I2S is different in that it uses the different conductors and pinouts of an HDMI cable to separate those out.
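As a toy illustration of the muxing itself (not the timing), interleaving and de-interleaving a stereo stream is lossless either way:

```python
# Interleave left/right samples onto "one wire" (roughly how a single-line
# interface frames a stereo stream) and demux them back at the receiver.
# The recovered samples are bit-identical, which is why the debate is about
# timing and interference, not the bits themselves.
left  = [0, 100, -200, 32767, -32768]
right = [5, -6, 7, -8, 9]

wire = [s for pair in zip(left, right) for s in pair]  # mux: L0 R0 L1 R1 ...
rx_left, rx_right = wire[0::2], wire[1::2]             # demux at the receiver

assert (rx_left, rx_right) == (left, right)   # nothing lost in the muxing
```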


Dredi said:


> I2s is a physical layer protocol for transferring data. PCM is a data format.


Right.


Dredi said:


> How much better? Enough to get a better outcome, compared to more robust signaling and error handling of USB? Got any tests to link, or is this just an uninformed opinion?


Honestly, no clue.  I'm really only interested in it from an academic point of view, as to why it _could _impact the sound. My opinion is uninformed to the extent that I don't work in digital signal processing and analog circuits, nor am I an expert in digital audio formats. Most people that post in these forums have a fairly strong understanding of how binary information is stored and transmitted, but the few that really know how a DAC works on even a basic level are often conflating the two, which is understandable but more often than not leads to the wrong conclusion, in my opinion.


----------



## Dredi (Nov 19, 2022)

Operandi said:


> but that would be purely academic.


Well, audio research is purely academic.



Operandi said:


> Audio being subjective, people perceive it differently and the difficulty of doing meaningful tests again has already been gone over several times all make tests unpractical for a manufacture to carry out of each and every product.


At least do it for the developed technologies that supposedly work to produce better sound. If no proof exists, it’s just bullshit.



Operandi said:


> Forgetting? I mentioned several posts back how Schiit does their internal testing. I'm sure all manufactures do something similar.


Their testing, as you referenced it to be, does nothing to prove that their tech improves sound quality. They don’t control for most biases. And they don’t publish any results.



Operandi said:


> few really know how a DAC works


A DAC is an exceedingly simple device. It is given some integer value, and it then outputs a voltage that represents that value.

Sound quality is then just a matter of the timing of the values being fed in, and some basic things like noise, cross talk etc.

Timing wise, the only thing that matters is the last clock that feeds samples to the DAC, and that we don’t run out of data in the input buffer.

Any ”deeper” knowledge of DAC designs is completely unnecessary as far as this discussion's topic is concerned (async vs. sync transfer of data to the input buffer). As long as the buffer next to the DAC is not empty at any time, it is impossible to hear a difference between the data-acquisition methods, because they influence neither the last clock nor the data itself. And even then, any difference in sound would be because of bad design, not because of sync vs. async input of data next to the DAC chip.
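As a sketch of that description (bit depth and Vref here are arbitrary illustration values; real converters add noise, nonlinearity, and filtering):

```python
# An ideal N-bit DAC: take a signed integer code, output a proportional
# voltage. Everything else about sound quality is timing and analog behavior.

def ideal_dac(code: int, bits: int = 16, vref: float = 2.0) -> float:
    full_scale = 2 ** (bits - 1)          # 32768 codes per polarity at 16 bits
    assert -full_scale <= code < full_scale, "code out of range"
    return vref * code / full_scale       # 1 LSB here is ~61 microvolts

assert ideal_dac(0) == 0.0
assert ideal_dac(16_384) == 1.0                 # half of positive full scale
assert abs(ideal_dac(32_767) - 2.0) < 1e-3      # just under +Vref
```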


----------



## Operandi (Nov 22, 2022)

Dredi said:


> Well, audio research is purely academic.


So a company manufacturing audio gear is supposed to do a study/test that is 100% academic, on a subject that is 100% open to subjective impression?  Why, and what would be the point?


Dredi said:


> At least do it for the developed technologies that supposedly work to produce better sound. If no proof exists, it’s just bullshit.


I don't understand this viewpoint either.  The technology and the product are intrinsically linked, and both are open to subjective impression.


Dredi said:


> Their testing, as you referenced it to be, does nothing to prove that their tech improves sound quality. They don’t control for most biases. And they don’t publish any results.


Nobody is looking for _proof_ from manufacturers.  If third-party publications take it on, that's different, and there is an audience for that, but it's still fundamentally problematic for the reasons outlined above, and a lot of people shopping for and buying this gear don't base their decisions on those types of publications.


Dredi said:


> A DAC is an exceedingly simple device. It is given some integer value, and it then outputs a voltage that represents that value.
> 
> Sound quality is then just a matter of the timing of the values being fed in, and some basic things like noise, cross talk etc.
> 
> ...


You can describe any complex system in basic terms to explain how it works on a conceptual level; that doesn't make it simple.  There is a ton of stuff happening, and a variety of approaches for going from integer values to voltage output with varying degrees of accuracy, and that's where the sound quality of a particular DAC lies.  We're just going in circles here, so I won't go into how a real-time digital audio stream is inherently different again, but that speaks to the question of timing.


----------

