Tuesday, June 9th 2020
NVIDIA's Next-Gen Reference Cooler Costs $150 By Itself, to Feature in Three SKUs
Pictures of alleged next-generation GeForce "Ampere" graphics cards emerged over the weekend, which many of our readers found hard to believe. The card features a dual-fan cooling solution in which one of the two fans sits on the reverse side of the card, blowing air outward from the heatsink, while the PCB extends only two-thirds the length of the card. Since then, several fan-made 3D renders of the card have appeared. NVIDIA is not happy with the leak, and has started an investigation into two of its contractors responsible for manufacturing Founders Edition (reference design) GeForce graphics cards, Foxconn and BYD (Build Your Dreams), according to a report by Igor's Lab.
According to the report, the cooling solution, which looks far more overengineered than the company's RTX 20-series Founders Edition cooler, costs a hefty $150, or roughly the price of a 280 mm AIO CLC; it wouldn't surprise us if Asetek's RadCard costs less. The cooler consists of several interconnected heatsink elements with the PCB in the middle. Igor's Lab estimates the card at 21.9 cm in length. Given its cost, NVIDIA is reserving this cooler for only the top three SKUs in the lineup: the TITAN RTX successor, the RTX 2080 Ti successor, and the RTX 2080/SUPER successor. All three will use the same cooling solution and a common PCB design codenamed PG132. Further, all three cards will be based on a common ASIC, codenamed "GA102," with varying hardware specs. The "SKU10" (TITAN RTX successor) could ditch the TITAN brand to carry the model name "GeForce RTX 3090," max out the 384-bit wide memory bus of the GA102 ASIC, and feature a whopping 24 GB of GDDR6X memory, with 350 W typical board power.
The next SKU, SKU20, the RTX 2080 Ti successor, will be cut down from SKU10. It will feature 11 GB of GDDR6X memory across a 352-bit wide memory interface and carry a 320 W typical board power rating. This board will likely bear the RTX 3080 Ti branding. Lastly, there's SKU30, which is further cut down, features 10 GB of GDDR6X memory across a 320-bit wide memory interface, and bears the RTX 3080 model number, succeeding the RTX 2080 / RTX 2080 Super.
When launched, "Ampere" could be the first implementation of the new GDDR6X memory standard, which could come with data rates above even the 16 Gbps of today's fastest GDDR6, likely in the 18-20 Gbps range, if not more. Lesser SKUs could use current-gen GDDR6 memory at data rates of up to 16 Gbps.
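The rumored capacities line up with the bus widths: a 384-bit bus means twelve 32-bit channels, so 24 GB works out to 2 GB per channel (16 Gb chips, or two 8 Gb chips per channel in clamshell mode), while 352-bit and 320-bit mean eleven and ten channels of 1 GB each, for 11 GB and 10 GB. Theoretical bandwidth likewise follows directly from bus width and per-pin data rate. Here's a minimal sketch of that arithmetic, assuming an illustrative 19 Gbps for GDDR6X; only the bus widths come from the report.

# Theoretical memory bandwidth = (bus width in bits / 8) * per-pin data rate.
# The 19 Gbps GDDR6X figure is an assumption for illustration only.
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

configs = [
    ("SKU10, 384-bit GDDR6X", 384, 19.0),
    ("SKU20, 352-bit GDDR6X", 352, 19.0),
    ("SKU30, 320-bit GDDR6X", 320, 19.0),
    ("384-bit GDDR6 at 16 Gbps, for comparison", 384, 16.0),
]
for name, bus, rate in configs:
    print(f"{name}: {bandwidth_gb_s(bus, rate):.0f} GB/s")

At 19 Gbps, the 384-bit card would land around 912 GB/s, versus 768 GB/s for 16 Gbps GDDR6 on the same bus, so the rumored data rates alone would buy a sizable bandwidth uplift.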
Sources:
Igor's Lab, tor6770 (Reddit), VideoCardz, ChipHell Forums
92 Comments
$150 is so much money in manufacturing terms that they could have built some kind of water cooling instead.
As an example, I can buy the "Cooler Master MasterLiquid Lite 120" 120 mm CPU water cooler at $50 retail, and that even includes 25% VAT.
Without VAT that is $40, and then the retailer and Cooler Master both make some kind of profit on that, so maybe it costs $20-25 in manufacturing, or even less.
Now imagine nVidia paying $150 in manufacturing cost for air cooling, no f**king way. nVidia is a lot of things, but stupid is not one of them when it comes to money.
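A quick sketch of that back-of-the-envelope math (the VAT figure is the commenter's; the retail and brand margins are illustrative assumptions, not known costs):

# Back out a rough manufacturing cost from a retail price.
# Margin percentages are illustrative assumptions only.
retail_price = 50.0                      # USD, VAT included
vat_rate = 0.25                          # 25% VAT, per the comment
ex_vat = retail_price / (1 + vat_rate)   # $40.00
# Assume the retailer and Cooler Master each take a 20-30% cut.
high = ex_vat * (1 - 0.20) * (1 - 0.20)  # ~$25.60
low = ex_vat * (1 - 0.30) * (1 - 0.30)   # ~$19.60
print(f"Ex-VAT price: ${ex_vat:.2f}, estimated BOM: ${low:.0f}-${high:.0f}")

Which lands in roughly the $20-25 range the comment arrives at.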
Not complaining!
More complexity, like two PCBs, also means more fragile solder joints.
Two PCBs? More expensive than one, and now NVIDIA needs to design and test a separate reference PCB for partners.
Unnecessarily convoluted and expensive cooler? Why do they need it, given that the Turing Founders Edition cooler is perfectly capable?
I wouldn't put it past NVIDIA to do something weird and wacky, but this isn't weird and wacky, it's just stupid and expensive.
- It's very clear that the front fan needs to be a radial fan. An ordinary axial fan without the outer ring would be a better choice than the one Nvidia has picked, which effectively prevents the blade tips from acting as a pseudo-radial blower.
- With the PCIe plugs on the end of the card connected via that daughterboard, half of the effective cooling from the rear fan is blocked.
I mean, it didn't look like the best design to start off with, but if this rendition is accurate then it's even worse than I thought.
$1K for the xx80 seems on track.
And they just work!
Either the perf jump is massive, or Nvidia is running out of ideas. Surely they won't do an RTG now....?