> Errors can still happen in that last step of the stream being read by the receiving chip in the DAC and while being processed.

Except that you can think of every previous (async) step as the storage media. The DAC doesn't know when a given bit was read from the storage media to CPU cache, to RAM, to cache, to PCIe, to the USB controller, to the USB receiver buffer, to the I2S bus connecting the USB receiver to the actual DAC chip. Only the last step is actually timed.
> Right, they communicate via a single clock, but the stream consists of several clocks, any of which are subject to error though.

Nope. There is just a single clock on the DAC, which is divided down and combined to create any other (minor) clocks that are necessary.
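To make the "one clock, divided down" point concrete, here is a small arithmetic sketch. The numbers assume a common configuration (a 256×Fs master clock driving 16-bit stereo I2S); actual DAC chips vary, so treat the values as illustrative, not as any specific product's clock tree.

```python
# Sketch: all the "minor" I2S clocks are integer divisions of the
# single master clock. Assumes 256*Fs master, 16-bit stereo audio.

MASTER_CLK_HZ = 11_289_600   # 256 * 44_100, a typical audio master clock

def derived_clocks(fs_hz, bits=16, channels=2):
    """Derive the I2S bit clock and word clock from the master clock."""
    divider = MASTER_CLK_HZ // (fs_hz * bits * channels)  # integer divider
    bclk_hz = MASTER_CLK_HZ // divider            # bit clock (BCLK)
    lrclk_hz = bclk_hz // (bits * channels)       # word/LR clock == sample rate
    return bclk_hz, lrclk_hz

bclk, lrclk = derived_clocks(44_100)
# bclk is 44_100 * 32 = 1_411_200 Hz; lrclk is 44_100 Hz, the sample rate
```

The point of the sketch: there is no second independent oscillator anywhere; every rate is a ratio of the one crystal.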
> Cool, so it is a thing, I'll have to read up on that.

For example, the ARES II has a 10+ ms buffer. Many other "high-end" DACs have similar stuff.
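For scale, a 10 ms buffer is tiny in sample terms. This is pure arithmetic; the exact depth of the ARES II buffer is an assumption based only on the "10+ ms" figure above.

```python
# How many PCM frames a 10 ms reclocking buffer holds at common rates.

def buffer_frames(buffer_ms, fs_hz):
    """Frames needed to cover buffer_ms of audio at sample rate fs_hz."""
    return fs_hz * buffer_ms // 1000

for fs in (44_100, 96_000, 192_000):
    print(fs, buffer_frames(10, fs))
# 44_100 Hz -> 441 frames; 96_000 -> 960; 192_000 -> 1920
```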
> Yeah I know the origins of I2S. Using it as an external interface seems crazy I suppose if you think USB async is without fault, and that kinda seems like the majority of our disagreement here. The interface itself is more robust, and putting each clock along with the data on its own path would have tangible benefits in my opinion.

A temperature-controlled main clock source makes sense and can actually affect how things sound. I2S is used in even most sub-$50 DACs, just internally. It's just a basic board-level interconnect, nothing crazy. What's crazy is trying to use it for something it was never designed for, and is not good for, like connecting a PC to a DAC.
> That doesn't seem right to me. If you increase the size of a CPU's cache, latency goes up. If a DAC has to buffer more frames of the PCM stream and keep track of them for the event in which it needs to use what's in the buffer rather than what was next in the stream, how is that not more work for it to manage and keep track of the timing of these additional frames? I mean, this is happening 44,100 times a second in the case of plebeian CD-quality audio.

The size of a buffer makes no difference to the amount of work needed to 'keep track of and manage it'. You can use the exact same data-handling code for buffers of almost any size, by just changing one input parameter.
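A minimal ring buffer illustrates why capacity doesn't change the bookkeeping. This is a hypothetical sketch, not any DAC's actual firmware: the push/pop logic below is byte-for-byte identical whether the capacity is 16 frames or 4096, since the size only appears as a constructor argument.

```python
# Minimal ring (circular) buffer. The handling code never changes with
# capacity; `size` is just one input parameter, as described above.

class RingBuffer:
    def __init__(self, size):
        self.buf = [0] * size   # capacity fixed once, at construction
        self.size = size
        self.head = 0           # next write index
        self.tail = 0           # next read index
        self.count = 0          # frames currently held

    def push(self, sample):
        if self.count == self.size:
            raise OverflowError("buffer full")
        self.buf[self.head] = sample
        self.head = (self.head + 1) % self.size  # same wrap logic, any size
        self.count += 1

    def pop(self):
        if self.count == 0:
            raise IndexError("buffer empty")
        sample = self.buf[self.tail]
        self.tail = (self.tail + 1) % self.size
        self.count -= 1
        return sample

# Identical code manages a tiny buffer and a large one:
small, large = RingBuffer(16), RingBuffer(4096)
```

Per-sample work is a couple of index increments regardless of depth, which is why a 10 ms buffer costs the DAC no more "management" than a 1 ms one.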
> Well, the bits being sent are the same, but how they get there is what's in question. We already went over how a real-time digital audio stream is different than, say, transferring a file to a USB flash drive, which is honestly beyond most people's awareness of how this works, so no need to go over that.

It is the same bits; they are just repackaged into differing lengths depending on the transfer interface. For example, on USB the DAC just sends a request for the 'next n bytes of data', and the CPU then fetches them from RAM or HDD, packages them into a USB packet, and sends it off.
If the bits were different, it would sound different.
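The request-and-fetch pattern above can be sketched as a toy pull model. All the names here are illustrative, not the real USB Audio Class API: the point is only that the receiver sets the pace, and the bytes that arrive are identical to the bytes that left, just cut into packets.

```python
# Toy model of asynchronous USB audio: the DAC pulls chunks at its own
# clock's pace; the host merely services each request from its stream.
# Hypothetical names, not the actual USB protocol.

def host_fetch(source, n):
    """Host side: hand over the next n bytes of the stream."""
    return source[:n], source[n:]

audio = bytes(range(16))   # stand-in for a PCM stream in RAM
received = b""
while audio:
    # The DAC, not the host, decides when and how much to request.
    chunk, audio = host_fetch(audio, 6)
    received += chunk

assert received == bytes(range(16))   # same bits, just repackaged
```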
I get your points, but something as fundamental as the async feature of USB Audio Class 2 is essentially a technique that was added to USB audio to compensate for the problems encountered in a real-time digital stream. If digital streams didn't have these problems, async DACs wouldn't be needed. In USB Audio Class 1 (a non-async DAC) the bits being sent would be the same, but the interface is at fault, so the sound would be different. So either USB async DACs completely solve everything and things like I2S are a waste of time, or it's just a further step down the path of mitigating the issues with digital streams.