Monday, July 29th 2019
AMD Readies Larger 7nm "Navi 12" Silicon to Power Radeon RX 5800 Series?
AMD is developing a larger GPU based on its new "Navi" architecture to power a new high-end graphics card family, likely the Radeon RX 5800 series. The codename "Navi 12" is doing the rounds on social media via familiar accounts with a strong track record on pre-launch news and rumors. The "Navi 10" silicon was designed to compete with NVIDIA's "TU106," as its "XT" and "Pro" variants outperform NVIDIA's original RTX 2060 and RTX 2070, forcing NVIDIA to develop the RTX 20 Super series by moving specifications up a notch.
Refreshing its $500 price point was particularly costly for NVIDIA: it had to tap the 13.6 billion-transistor "TU104" silicon to carve out the RTX 2070 Super, while for the RTX 2060 Super it had to spend 33 percent more on memory chips (8 GB of GDDR6 versus the RTX 2060's 6 GB). With the "Navi 12" silicon, AMD is probably looking to take a swing at NVIDIA's "TU104," which the RTX 2080 Super has maxed out, disrupting the company's $500-$700 lineup once again with XT and Pro variants. There is also a remote possibility of "Navi 12" being an even bigger chip that targets the "TU102."
Source:
KOMACHI_ENSAKA (Twitter)
132 Comments on AMD Readies Larger 7nm "Navi 12" Silicon to Power Radeon RX 5800 Series?
The halo market, OTOH, is one of the only growing sectors of the PC market, along with gaming in general. Look at the cash NVIDIA rakes in with gaming GPUs and tell me that high-end gaming chips are not printing money. Sure, they sell lots of mid-range GPUs, but their high end also sells respectably well and prints cash. It's expensive to operate in, but any market with high margins will be: high risk comes with high reward. That stat also doesn't say a single thing. All you are proving is that 336,466 UserBenchmark users have 2080 Tis.
If you are insinuating that 1.8% isn't a good share of users, then by that measure every single GPU AMD makes has a tiny number of users compared to NVIDIA, as far as Steam is concerned.
We know the 2080 Ti isn't the volume seller, but to say it isn't selling well and making money is just preposterous.
Three shading engines (six prim units, 60 CU, 96 ROPS, 6 MB L2 cache) with a 384-bit bus
Four shading engines (eight prim units, 80 CU, 128 ROPS, 8 MB L2 cache) with a 512-bit bus or four-stack HBM2
Navi 10 is faster than TU106. Bravo!
TU104 (2070 Super) is faster than Navi 10.
Navi 12 would barely match the fully unlocked TU104 (2080 Super).
As for TU102, AMD has Navi 20 in November or later... NVIDIA will probably release an unlocked TU102 (2080 Ti Super).
Super is Superior to Navi
End of Story
Far Cry 5 on the RX 5700 consumes about 150 watts, hence AMD needs to scale up from the RX 5700, e.g.:
The RX 5700's dual shader engines (four prim, 36 CU, 64 ROPS, 4 MB L2 cache) with a 256-bit bus, scaled by 2, give quad shader engines (eight prim, 72 CU, 128 ROPS, 8 MB L2 cache) with a 512-bit bus.
The RX 5700's dual shader engines (four prim, 36 CU, 64 ROPS, 4 MB L2 cache) with a 256-bit bus could also scale into three shader engines (six prim, 72 CU, 96 ROPS, 6 MB L2 cache) with a 384-bit bus.
Scaling just CU count is not enough.
The R9 290X's quad shader engines (four prim, 44 CU, 64 ROPS, 1 MB L2 cache) with a 512-bit bus are a 2X scale of the Radeon HD 7870's dual shader engines (two prim, 20 CU, 32 ROPS, 512 KB L2 cache) with a 256-bit bus.
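The proportional arithmetic behind this scaling talk is easy to sanity-check. Below is a minimal sketch using the RX 5700 baseline figures quoted in the comment; the scaled configs are simple multiples, not confirmed silicon. Note that straight linear scaling of 36 CUs by 1.5 gives 54 CUs, so the 72-CU three-engine option above implies wider engines rather than pure scaling.

```python
# Hypothetical sketch: scale every per-chip resource of the RX 5700
# ("Navi 10") config by a factor. Numbers are from the comment above,
# not official AMD specifications.

def scale_config(base, factor):
    """Return a copy of `base` with every resource multiplied by `factor`."""
    return {k: int(v * factor) for k, v in base.items()}

# RX 5700 baseline, per the comment: dual shader engines, 256-bit bus.
navi10 = {
    "shader_engines": 2,
    "prim_units": 4,
    "CUs": 36,
    "ROPs": 64,
    "L2_KiB": 4096,   # 4 MB
    "bus_bits": 256,
}

print("1.5x:", scale_config(navi10, 1.5))  # three-engine, 384-bit guess
print("2.0x:", scale_config(navi10, 2.0))  # four-engine, 512-bit guess
```

The 2x result (72 CU, 128 ROPS, 8 MB L2, 512-bit) matches the quad-engine configuration quoted above exactly.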
Maybe the trick is to wait for AMD and see what will happen.
Better products come out all the time, why rage about it now? Should AMD not release a 5800 in the future because that will screw 5700 buyers?
Things evolve; if you never want to be "outdated," better not buy anything!
And let us compare.
Turing is a big die. Early on, it inevitably has more defects, so you have to disable some CUs. As manufacturing matures, the defect count goes down and you don't have to disable as many CUs to get the same number of working dies from a wafer. So what do you do? Keep disabling the same number of CUs just because, or disable only as many as you need and sell a better product for the same money?
I don't think yields are much of a reason for this.
That, and me being unable to follow your wall of text.
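The yield-and-binning argument a few comments up can be sketched numerically with the classic Poisson yield model, where the fraction of defect-free dies is exp(-D·A) for defect density D and die area A. The TU104 area below is the published ~545 mm² figure; the defect densities are purely illustrative assumptions, not real process data.

```python
# Illustrative sketch of why maturing processes let NVIDIA ship
# fuller dies: as defect density D falls, far more large dies come
# out defect-free, so fewer CUs need disabling to hit a salable bin.
import math

def poisson_yield(defect_density_per_mm2, die_area_mm2):
    """Fraction of dies with zero defects under a Poisson yield model."""
    return math.exp(-defect_density_per_mm2 * die_area_mm2)

die_area = 545.0                    # TU104 is roughly 545 mm^2
early_D, mature_D = 0.005, 0.001    # made-up early vs. mature defect rates

print(f"early process:  {poisson_yield(early_D, die_area):.1%} defect-free")
print(f"mature process: {poisson_yield(mature_D, die_area):.1%} defect-free")
```

The exact percentages depend entirely on the assumed defect densities; the point is only the direction of the effect, which matches the comment's reasoning.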