Sunday, December 19th 2021
An "Audiophile Grade" SSD—Yes, You Heard That Right
A company dealing with niche audiophile-grade electronics on Audiophile Style, a popular site and marketplace for the community, conjured up an SSD that it feels offers the best possible audio. Put simply, this is an M.2-2280 NVMe SSD with a fully independent power delivery mechanism (one that's isolated from the motherboard's power delivery) and an over-the-top discrete clock source for its controller. The drive has its own 5 V 2-pin DC input and switching hardware onboard, including [get this] a pair of Audionote Kaisei audio-grade electrolytic capacitors in place of what should have been simple solid-state SMD capacitors that are hard to even notice on any other drive. It doesn't end there.
Most NVMe SSDs have a tiny 2 mm x 2 mm SMD oscillator that's used by the controller for clock generation. This drive instead features a Crystek CCHD-957 high-grade femtosecond-class oscillator, a part found in some very high-grade production or scientific equipment, such as data-loggers. For the drive itself, you get a Realtek DRAM-less controller and a single 1 TB TLC NAND flash chip that's forced to operate in SLC mode (333 GB usable). On a scale of absurdity, this drive is right up there with $10,000 HDMI cables. Digital audio is stored in ones and zeroes, and nothing is gained from isolated power delivery or clock generation on the storage media. It's nice of the designers to include jumpers that let you switch between the discrete power source and motherboard power, so listeners can hear the snake oil for themselves.
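The "ones and zeroes" point is easy to verify for yourself: if the drive changed the data in any way, it would fail an ordinary checksum comparison. Here is a minimal Python sketch, with hypothetical mount points standing in for the audiophile SSD and any ordinary drive.

import hashlib

def sha256_of(path: str) -> str:
    # Hash a file in 1 MiB chunks so large audio files don't need to fit in RAM.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical mount points; any two drives holding copies of the same track will do.
print(sha256_of("/mnt/audiophile_ssd/track.flac"))
print(sha256_of("/mnt/ordinary_ssd/track.flac"))
# Matching digests mean both drives hand the OS exactly the same ones and zeroes.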
Sources:
Audiophile Style, HotHardware
160 Comments on An "Audiophile Grade" SSD—Yes, You Heard That Right
You also stated that they are an uncommon variety, while most if not all high-end DACs use async mode. So you agree that the test does not show that the results you claimed are meaningful. Go ahead and provide better tests, I'm waiting.
And if electrical USB cable interference were an actual thing, it would be easy to prove by just doing a blind study comparing it to an optical input. In a device that is not faulty there is no difference in any measurement, but maybe the human ear can do what no machine can and determine which cable is used.
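For a rough illustration of what such a blind comparison looks like in practice, here is a minimal ABX sketch in Python. The play() call is a hypothetical placeholder for however you switch inputs on your own gear; the trial count and pass threshold are just the usual 16-trial convention.

import random

def run_abx(trials: int = 16) -> None:
    correct = 0
    for i in range(trials):
        x = random.choice(["usb", "optical"])   # hidden choice for this trial
        # play("usb"); play("optical"); play(x)  # hypothetical playback helper: A, B, then X
        guess = input(f"Trial {i + 1}: was X 'usb' or 'optical'? ").strip().lower()
        correct += (guess == x)
    print(f"{correct}/{trials} correct")
    # With 16 trials, 12 or more correct is roughly the p < 0.05 mark;
    # anything around 8/16 is indistinguishable from guessing.

run_abx()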
I think that's the main takeaway for me, at least: we can measure a lot of what we are hearing, but not everything. I come from a background of loudspeakers when it comes to audio measurements, and measurements tell you a lot about how a speaker will sound, but not everything. A ribbon and a dome tweeter can measure nearly identically in a speaker with the same woofer and crossover topology yet sound very different; clearly there is something there that we just aren't measuring.
Until measurements show us everything, you have two options: trust your ears, or rely on blind studies, which are problematic because they can lead you to the wrong conclusions.
If USB data transfer errors were an actual, common problem, we would use an error-correcting code in the transferred packets. We would also use radiation-hardened DAC chips in a voting lockstep configuration, an optically isolated analog domain, etc. But we don't, not even in the most expensive audio DACs on the planet.
My understanding is that there is active error correction in audio streams.
And seeing how we can _completely_ get rid of this USB cable "interference", I would have thought that you'd prefer this over anything. So why promote them? You say any sane engineer knows how to mitigate it, via buffering etc., but that is not the case when using default USB audio drivers. There is only an error checksum, no error correction.
This is why saying that 'a cable matters because of errors' is absurd: one can hear each and every one of them, yet no one complains about them. Why? Because they are super rare. Yes, but you can determine the _cable quality_ from it, i.e. whether it is error-prone or not.
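To make the checksum-versus-correction distinction concrete, here is a small Python sketch. It uses generic CRC-16 parameters chosen for illustration rather than USB's exact polynomial, but the point carries over: the receiver can tell a packet was damaged, yet has no way to know which bit to fix, and an isochronous stream has no retry.

def crc16(data: bytes, poly: int = 0xA001, init: int = 0xFFFF) -> int:
    # Bit-by-bit CRC-16 (reflected form); illustrative parameters, not USB's exact spec.
    crc = init
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ poly if crc & 1 else crc >> 1
    return crc

packet = bytes([0x10, 0x22, 0x33, 0x44])          # pretend these are audio samples
damaged = bytes([0x10, 0x22, 0x33, 0x44 ^ 0x01])  # same packet with one flipped bit

print(hex(crc16(packet)), hex(crc16(damaged)))    # mismatch: the error is detected...
# ...but nothing in the checksum says which bit flipped, hence "detection, not correction".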
Yeah, there is tons of bandwidth for bulk data transfer, but that's completely different from the requirements of an isochronous audio stream. I'm not really promoting them, but you can't benchmark your way to the answer, and the way most blind tests are conducted tends to lead to the wrong conclusions, so listening impressions are what's left. If you could benchmark everything and quantify it, or statistically prove it through blind tests, would that really be that useful given how subjective audio is in terms of personal preference and perception ability?
Educate yourself, do your own listening, and make your own determinations.

Yeah, we keep going over this. You can mitigate the problems with various techniques but not eliminate them in a real-time audio stream. I don't design these things and am not an EE, and information is scarce, but my understanding is that all DACs have their own internal handling of errors. Not every bit gets transferred with 100% accuracy, and there is no retry like with bulk data transfers, so it's up to the DAC to internally handle the error. You can easily hear dropouts and artifacts where the stream essentially fails, but the argument is that errors are still happening which result in a loss of quality.

Right, but what I'm saying / asking is that the nature of the data is different and how it's transferred is totally different. Audio is being sampled at 44.1 kHz at CD quality, all represented by bits, converted to analog voltage and back to bits again; that's a lot going on. I don't know how the data packets are framed, and I'm not an expert on digital audio or an EE of any kind, but given the real-time nature of how the digital stream works, it seems conceivable to me that errors could be a problem.
Errors are a problem, with shitty cables and DACs placed inside microwave ovens. In other cases, not really. You can easily test it yourself.
Async USB audio does not care about any minuscule timing errors in the data transfers, only transfer errors. Educate yourself, do your own blind tests, and make your own determinations. I do not have the audacity to think that sighted audio tests I could make would prove anything. Of course blind tests can be used to gauge subjective preference as well! I mean, why wouldn't that be the case? In order to do that, one just has to be able to differentiate the changing components by listening alone.
If we were talking about subjective preference for how audio systems look, then things would be different.
www.whathifi.com/cambridge-audio/dacmagic-100/review
Not everyone likes the technical approach, though, and that's the tricky part about audio; it's particularly troublesome for those who are dead set on quantifying everything into a metric you can put into a chart.
www.audiosciencereview.com/forum/index.php
audiopurist.pl/en/main-page/
audiokarma.org/forums/index.php
I read these a fair bit.
To think that the raw error rate would somehow depend on the data packaging has no real-world basis. The only difference is how it is mitigated, which does not depend on anything related to the physical transfer layer (where the errors happen).
If you somehow think that this is not the case, please describe in detail why that might be, i.e. why the transfer errors might depend on the packet length or packet contents. No cable is truly infallible, and neither is the computing inside the DAC chips, for that matter. Random bit flips are a very real thing. But if transfer errors happen once in a year, I would not spend thousands on USB cables that are not proven to work any better.
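A quick back-of-the-envelope sketch in Python of how rare that actually is. The raw bit error rate used here is an assumed figure for illustration, not a measurement of any particular cable.

bits_per_second = 44_100 * 16 * 2          # CD-quality stream: 44.1 kHz, 16-bit, stereo
assumed_ber = 1e-12                         # assumed raw bit error rate (illustrative)

seconds_per_year = 60 * 60 * 24 * 365
errors_per_year = bits_per_second * assumed_ber * seconds_per_year
print(f"~{errors_per_year:.0f} expected bit errors per year of nonstop playback")
# Roughly 45 flipped bits per year at this assumed rate, each affecting a single
# sample for about 23 microseconds; with a better BER the count drops accordingly.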
The argument seems to be twofold. First, the bits being transferred, which, as you said, are the same regardless at the physical layer. In regard to errors and retries on data transfers, those only occur on chunks of data in some block of bytes as I understand it, not at the bit level, and it would be on that basis that errors are logged and retries happen? This is more of a data-transmission question than anything, but is there bit over-provisioning in the transport layer that protects data integrity that would inherently not be present in an audio stream? I state it as a question because I don't know, and the argument is that there isn't enough bandwidth in the cable to represent the bits with 100% accuracy, particularly with HD audio.
The other aspect is the cable itself picking up outside interference and affecting the DAC itself. The DAC is sensitive to noise and interference; just because it's made up of ICs doesn't make it immune to the outside world. I mean, everything in the analog world is prone to interference, from turntables to tubes to solid-state MOSFETs, and half of what the DAC is doing is analog. You can say you can't hear it because blind tests don't prove it or it doesn't show up in the measurements, but without rehashing old territory, those two things don't tell the whole story. My stance is that if it actually is happening, it's happening at the fringe of the high-end spectrum and is almost certainly irrelevant next to other shortcomings you may have.

I build my own speakers, so I mostly frequent forums focused on that. I do check in on ASR though, to see what's passing through and getting tested.
Otherwise I mostly stay up on what's new on the electronics front from a few YouTube channels. A British Audiophile reviews a lot of high-end gear that I'll probably never buy, but he has an EE background and goes into the technical design aspects, which gives interesting context into how and why something might sound the way it does. The cheapaudioman reviews cheaper (sub-$1,000) stuff in a very non-pretentious audiophile way that I appreciate.
There is a lot of other interference, but it is not in the audible range, and thus does not matter in the analog domain. It can cause a lot of problems in the digital domain, but those would be easy to hear if present, or quantifiable by other means.
Look at the various approaches to negative feedback in amplification, which does happen in the audible range; part of what negative feedback loops do is remove noise and distortion in the amplification circuit. That noise and distortion is an effect of the amp design and of external noise factors, whether it's noise introduced by the power supply or external EMI. Negative feedback with respect to a DAC isn't directly comparable (aside from its output stage), but the principles still apply.
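For a rough sense of the mechanism, here is a minimal sketch of the textbook feedback relation, closed-loop distortion ~ open-loop distortion / (1 + loop gain). The gain, feedback fraction, and distortion numbers are made-up illustrative values, not figures for any real amplifier.

open_loop_gain = 10_000        # A: raw gain of the forward amplification stage (assumed)
feedback_fraction = 0.01       # B: portion of the output fed back to the input (assumed)
open_loop_thd_pct = 1.0        # distortion generated inside the loop, in percent (assumed)

loop_gain = open_loop_gain * feedback_fraction
closed_loop_gain = open_loop_gain / (1 + loop_gain)
closed_loop_thd_pct = open_loop_thd_pct / (1 + loop_gain)

print(f"closed-loop gain ~{closed_loop_gain:.0f}x, THD ~{closed_loop_thd_pct:.3f}%")
# ~99x gain and ~0.010% THD: the same factor (1 + loop gain) that sets the gain
# also suppresses noise and distortion originating inside the loop.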
With custom drivers you can over-provision as much as you want and completely negate the problem. Too bad the "high end" market instead focuses on $1,000 cables.
Because of this re-sampling issue, it is usually better to use a high sample rate for the PC-to-DAC interconnect. The 'best' option would be to always set the DAC to the same sample rate as the content you listen to and let the DAC do all the upsampling internally, but my understanding is that this is difficult to accomplish on a PC. As for bit depth, it does not really matter which value you set it to, as re-sampling is not an issue there. There will be no audible difference to you between 16- and 24-bit modes, and there are no real downsides either. Theoretically, since more data is being transferred if you select a higher bit depth, there will be more data corruption as well, but it is still super rare and not a real issue you should spend time pondering.
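As a sanity check on the bit-depth point, the theoretical dynamic range of an ideal quantizer is about 6.02 x N + 1.76 dB, which a couple of lines of Python can confirm. This is the textbook figure, not a claim about any specific DAC.

import math

def dynamic_range_db(bits: int) -> float:
    # Ideal N-bit quantizer with a full-scale sine: ~6.02*N + 1.76 dB SNR.
    return 20 * math.log10(2 ** bits) + 1.76

for bits in (16, 24):
    print(f"{bits}-bit: ~{dynamic_range_db(bits):.0f} dB")
# 16-bit: ~98 dB, 24-bit: ~146 dB. Real rooms and playback chains have far less
# than 98 dB of usable dynamic range, so the extra bits buy nothing audible on playback.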
If you use any music streaming service or listen to audio CD releases, they are going to be at 44.1 kHz; most movie streaming services and DVD stereo tracks are at 48 kHz.
The quality of any audio track is usually determined by how it was mastered, and any extra data rate beyond CD audio quality is just a waste. You can easily test it yourself.
Ultimately I don't think the answer for better audio is software (over-provisioning of data); I was just using it to draw a comparison. The high-end industry is doing plenty of things besides selling high-end cables; look at the research and science that goes into MQA, or the re-appearance of R2R ladder DACs. FLAC vs. MP3 is lossless vs. lossy compression. Lossless compression addresses totally different issues with digital music than sampling rate and bit depth.
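A quick way to see what "lossless" means in practice is to decode a FLAC back to raw samples and compare them to the WAV it was encoded from. A minimal sketch, assuming you have such a pair of files on hand and the soundfile and numpy packages installed; the file names are placeholders.

import numpy as np
import soundfile as sf  # pip install soundfile numpy

wav, wav_rate = sf.read("original.wav", dtype="int16")     # placeholder file names
flac, flac_rate = sf.read("original.flac", dtype="int16")  # FLAC encoded from that WAV

assert wav_rate == flac_rate
print("bit-identical:", np.array_equal(wav, flac))
# FLAC shrinks the file but decodes to exactly the same samples; an MP3 of the
# same track would fail this comparison by design, which is the lossy/lossless split.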
Lots of music streaming services offer high-res music now, Tidal being probably the most popular.
Mastering is really far, far more important than any of this, but it's like comparing the farm equipment used to plant the apple tree to the apple itself; for the purposes of discussing digital audio it makes zero sense. That said, I usually go for a high-quality vinyl-sourced FLAC over a CD FLAC if I can find it, because the vinyl master is oftentimes better than the CD master.
Every time someone brings some new bullshit to the audio scene, people should be interested in only one thing: can you hear it? And specifically, can you hear it without seeing it? Anything else is essentially pointless.