Sunday, September 29th 2019
AMD "Navi 14" and "Navi 12" GPUs Detailed Some More
The third known implementation of AMD's "Navi" generation of GPUs with the RDNA architecture is codenamed "Navi 14." This 7 nm chip is expected to be a cut-down, mainstream chip designed to compete with a spectrum of NVIDIA GeForce GTX 16-series SKUs, according to a 3DCenter.org report. The same report sheds more light on the larger "Navi 12" GPU that could power faster SKUs competing with the likes of the GeForce RTX 2080 and RTX 2080 Super. The two follow the architecture's July debut with "Navi 10." There doesn't appear to be any guiding logic behind the numerical portion of the GPU codenames. When all three are launched, the pecking order of the Navi GPUs will be "Navi 12," followed by "Navi 10," and then "Navi 14."
"Navi 14" is expected to be the smallest of the three, with an estimated 170 mm² die-area, about 24 RDNA compute units (1,536 stream processors), and expected to feature a 128-bit wide GDDR6 memory interface. It will be interesting to see how AMD carves out an SKU that can compete with the GTX 1660 Ti, which has 6 GB of 192-bit GDDR6 memory. The company would have to wait for 16 Gbit (2 GB) GDDR6 memory chips, or piggy-back eight 8 Gbit chips to achieve 8 GB, or risk falling short of recommended system requirements of several games at 1080p, if it packs just 4 GB of memory.The 350-400 mm² "Navi 12" is a whole different beast, with an estimated 64 compute units (4,096 stream processors). The big news in the 3DCenter.org report concerns its memory interface. AMD will stick to 256-bit GDDR6 memory with the "Navi 12," and probably dial up memory clocks compared to the 14 Gbps speed the "Navi 10" uses. This design choice is influenced by NVIDIA's decision to stick to 256-bit bus width with its "TU104" silicon. AMD appears to have had enough of expensive memory solutions such as HBM2, at least in this market segment.
Source:
3DCenter.org
"Navi 14" is expected to be the smallest of the three, with an estimated 170 mm² die-area, about 24 RDNA compute units (1,536 stream processors), and expected to feature a 128-bit wide GDDR6 memory interface. It will be interesting to see how AMD carves out an SKU that can compete with the GTX 1660 Ti, which has 6 GB of 192-bit GDDR6 memory. The company would have to wait for 16 Gbit (2 GB) GDDR6 memory chips, or piggy-back eight 8 Gbit chips to achieve 8 GB, or risk falling short of recommended system requirements of several games at 1080p, if it packs just 4 GB of memory.The 350-400 mm² "Navi 12" is a whole different beast, with an estimated 64 compute units (4,096 stream processors). The big news in the 3DCenter.org report concerns its memory interface. AMD will stick to 256-bit GDDR6 memory with the "Navi 12," and probably dial up memory clocks compared to the 14 Gbps speed the "Navi 10" uses. This design choice is influenced by NVIDIA's decision to stick to 256-bit bus width with its "TU104" silicon. AMD appears to have had enough of expensive memory solutions such as HBM2, at least in this market segment.
38 Comments on AMD "Navi 14" and "Navi 12" GPUs Detailed Some More
I would have thought that by now the HBM memory used in the Fury X cards would be cheap(er), so they could slap it on everything, but I guess not.
HBM1's maximum density was 1 GiB per stack. 4 GiB ain't enough for anything these days (barely covers 1280x720, and not even in all newer games).
One advantage I can see is that brands like XFX and MSI won't be able to cheap out on VRAM cooling. Has anyone confirmed this? Does VRAM overclocking actually yield a decent performance increase?
I know GPU overclocking does very little anymore, but does VRAM overclocking quantifiably increase performance enough to say with confidence that current cards are bandwidth-starved?
And IIRC, NVIDIA (and later AMD, I think) now does some kind of memory compression, so cards aren't as bandwidth-starved.
We should filter things better and not believe everything youtubers say or appear on the internet.
I thought people had learned after the Zen 2 rumors fiasco, but apparently not!
Many reviewers and users have had problems with Navi's drivers since launch. Many people still don't have working FreeSync, or have black-screen crash issues. This is reminiscent of AMD's driver problems back in the late 2000s, you know, the ones that persisted over the course of months? Those driver issues are why Fermi sold so well despite being hot garbage; AMD was just too unstable/unpredictable.
AMD was getting a lot better, but that is largely due to them pushing the same architecture for seven years straight. They had plenty of time to iron the bugs out; now they are back to finding bugs and taking months to fix them. These drivers would undermine the launch of a hypothetical 5800/5900 XT card. AMD should absolutely be held accountable for these problems, lest they think leaving the community out to dry is acceptable (again).
Similarly, many people had problems with the 2080 Ti. Was that blown out of proportion? Likely. Welcome to the internet.
The problems AMD tends to have are nothing to do with Navi itself. These "rumors" are just speculation. Some may end up being true, some may not, but this is just random speculation from various forums etc. recycled as "news."
As you point out, there isn't much faster 256-bit memory available; more bandwidth will be required to compete at the high end.
And then there is heat: a bigger Navi would quickly land in the ~300 W range.

Wonderful, the guy who claims that AMD is supposedly "renaming" chip codenames to make "leaks" fit, and that AMD is potentially "holding big Navi back."
No one should be watching that trash.
Datamining a linux driver shows that Navi 12 has the same number of memory interfaces as the 5700-series. That is literally all they have. Everything else is pure speculation and guesswork, they even say that they're guessing when listing sources of the information.
Navi 14 is definitely a cheaper, smaller product aimed at the lower end of the market.
Navi 12 could be any number of things:
- A revision of Navi 10 that makes improvements to clockspeeds, bugfixes, or adds additional compute functions for workstations and servers
- A revision of Navi 10 that includes extra hardware for the PS5 and Xbox Scarlett (raytracing or DRM? Who knows!)
- A larger GPU than Navi 10 with more shaders but no extra VRAM interfaces.
That last option is what everyone wants it to be, but if the datamining concludes that it's still limited to 8 GB of GDDR6 on a 256-bit bus, it's unlikely to be in the next performance class up; even if it does have, say, 3,584 shaders, it's going to be hamstrung by memory bandwidth. It's good to be optimistic, but in this case there is very little information to go on, and what little there is indicates that it's not actually a larger GPU than Navi 10.

I was late to the Navi game, still doing boring compatibility and performance testing on (now returned to NVIDIA) Titan RTX and Quadro RTX cards.
As such, the first Navi driver I experienced was 19.9.1 which seemed fine, and 19.9.2 came out a couple of days later and has been flawless in a whole bunch of compute/encode/gaming on W10 1903.
Tiny sample size, but it seems better than Vega's early drivers to me at least.
14% behind the 2080 (545 mm²)
21% behind the 2080 Super (>545 mm²)
39% behind the 2080 Ti (754 mm²)
350 mm² would mean a 40% bigger die area; 400 mm² would mean +60%.
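The arithmetic behind that list can be sketched as follows. Navi 10's 251 mm² die size is public; reading "X% behind" as the 5700 XT delivering (1 − X) of the NVIDIA card's performance, and assuming performance scales linearly with die area, are the commenter's assumptions, not measurements:

```python
# Rough die-area vs. performance scaling sketch for the figures above.
# Linear scaling with area is an upper bound -- real chips scale worse.

NAVI10_AREA = 251.0  # mm^2, Radeon RX 5700 XT ("Navi 10") die

# Die-area growth for the two rumored Navi 12 sizes
for navi12_area in (350, 400):
    growth = navi12_area / NAVI10_AREA - 1
    print(f"{navi12_area} mm^2 -> {growth:+.0%} die area over Navi 10")

# Performance deficits quoted above, 5700 XT relative to each card
deficits = {"RTX 2080": 0.14, "RTX 2080 Super": 0.21, "RTX 2080 Ti": 0.39}
for card, deficit in deficits.items():
    needed = 1 / (1 - deficit) - 1
    print(f"{card}: 5700 XT needs {needed:+.0%} more performance to match")
```

Even allowing for imperfect scaling, a ~40-60% larger die against a required ~16-27% uplift to match the 2080/2080 Super leaves a lot of headroom, which is the commenter's point; only the ~64% needed for the 2080 Ti looks marginal.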
Scaling wouldn't be perfect, but it should comfortably land AMD in the "well beyond 2080/2080 Super" performance area, and in games "not intentionally crippled by green," possibly spanking the 2080 Ti.

Why would that thing need a Linux driver?
The 5800s look good and will be interesting to benchmark.
The short answer is "I don't know" much like everyone outside of AMD at this point. 99.9% speculation with no hard evidence to support a larger chip other than the echo chamber of hopes ;)