Tuesday, August 11th 2020
![AMD Radeon Graphics](https://tpucdn.com/images/news/amdradeon-v1739475473466.png)
AMD RDNA 2 "Big Navi" to Feature 12 GB and 16 GB VRAM Configurations
As we get closer to the launch of RDNA 2 based GPUs, which are supposedly coming in September this year, the number of rumors is starting to increase. Today, a new rumor comes our way from the Chinese forum Chiphell. A user called "wjm47196", known for providing rumors and various pieces of information, claims that AMD's RDNA 2 based "Big Navi" GPU will come in two configurations: 12 GB and 16 GB VRAM variants. Since this is the Navi 21 chip, which represents the top-end GPU, it is logical that AMD would equip it with a higher amount of VRAM like 12 GB or 16 GB. It is possible that AMD could separate the two variants the way NVIDIA has done with the GeForce RTX 2080 Ti and Titan RTX, so the 16 GB variant would be a bit faster, possibly featuring a higher number of streaming processors.
Sources:
TweakTown, via Chiphell
stadt-bremerhaven.de/lgs-oled-aus-dem-jahr-2019-muessen-auf-amd-freesync-verzichten/
RTX 2080 is 16nm/12nm.
Sorry for jumping on you, new German blog poster; I read a quoted message that was truncated from your original, greatly changing the meaning. "Will not function" is different from "will not support VRR universally."
Please use real sources that cite things, not blogs...
www.thefpsreview.com/2020/08/10/lgs-2019-oled-tvs-arent-getting-amd-freesync/ Notice how they link their source?
/Facepalm.
Navi10 is great, but it's about a cheap, good-enough part for AMD. At 10.3 billion transistors its closest Nvidia relative is the original 2070 (full-fat TU106) that has 10.8 billion transistors.
Here's the thing(s) though:
- AMD has the process node advantage; 7nm vs 12nm
- AMD has the clock frequency advantage; ~1905MHz vs 1620MHz
- AMD has the shader count advantage; 2560 vs 2304
- AMD needs 30% more power, despite the more efficient node; 225W vs 175W
- AMD uses all 10.3bn transistors without tensor cores or raytracing support; TU106's 10.8bn transistors include all that.
So yeah, Nvidia has the architectural advantage. If you took the exact same specs that Navi10 has and made a 7nm TU106 part with 2560 CUDA cores and let it use 225W, it would stomp all over the 5700XT. Oh, and it would still have the DLSS and hardware raytracing support that Navi10 lacks.
www.techpowerup.com/gpu-specs/geforce-rtx-2080-super.c3439
www.techpowerup.com/gpu-specs/geforce-rtx-2080-ti.c3305
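For what it's worth, the ratios in that bullet list roughly check out with some quick arithmetic. A throwaway sketch using the post's own quoted numbers (not official measurements):

```python
# Back-of-the-envelope comparison of the Navi10 (RX 5700 XT) vs TU106
# (RTX 2070) figures quoted in the bullet list above.
navi10 = {"clock_mhz": 1905, "shaders": 2560, "tdp_w": 225}
tu106 = {"clock_mhz": 1620, "shaders": 2304, "tdp_w": 175}

power_ratio = navi10["tdp_w"] / tu106["tdp_w"]          # ~1.29, i.e. roughly 30% more power
clock_ratio = navi10["clock_mhz"] / tu106["clock_mhz"]  # ~1.18
shader_ratio = navi10["shaders"] / tu106["shaders"]     # ~1.11

print(f"power: +{(power_ratio - 1) * 100:.0f}%, "
      f"clock: +{(clock_ratio - 1) * 100:.0f}%, "
      f"shaders: +{(shader_ratio - 1) * 100:.0f}%")
# prints "power: +29%, clock: +18%, shaders: +11%"
```

So Navi10 draws about 29% more power while carrying only an 18% clock and 11% shader advantage, which is the efficiency gap the post is pointing at.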
The big video of 2070 vs 5700XT was the first clue, but also all the specs in that bullet-point list are 2070 specs. There are three mentions of TU106 and two mentions of 2070.
I picked the 2070 because it's the closest price/transistor count match for navi10 and is also the fully enabled silicon, not a chopped down variant like the 5600XT or 2060S.
What?
Do you even have a clue what you're talking about? Are you suggesting that a 10900K can reach 5.3GHz so the i3-10100 should be able to as well?
Are you sober right now, even?
Not confirmed but I am willing to bet both next-gen GPUs will have HDMI 2.1 ports and VRR support.
Sure, GeForce boost is more dynamic than AMD's; there are plenty of videos from mainstream channels like Jayz/GN/HW Unboxed reviewing 5700XT AIB cards with 2GHz+ game clocks at stock, though, so the point you're trying to make falls apart even as you're pushing it. So what? Clock speeds were only one of my points, and if you're going to argue with official specs then your argument is with Nvidia, not me. You might want to take up the power consumption figures with all the AIB cards too, if you're in that sort of mood.
You've set up a straw man by introducing a 2080Ti for no reason to a 2070/5700XT discussion and I'm not buying it.
Reference RX 5700XT - average 1887MHz. The best cards average around 2000MHz; the ASUS Strix is the only one that averages above that, at 2007MHz, but a couple of others are very close.
Pretty even overall.
Performance and power consumption seem to be pretty much at the same level as well.
May be worth noting that non-super Turings are relatively modestly clocked to keep them in power-efficient range.
In terms of shader units and other resources, the RX 5700XT is equal to the RTX 2070 Super, but the latter uses a bigger, cut-down chip.
Similarly, in terms of shaders and resources, the RX 5700 (non-XT) is equal to the RTX 2070 (non-Super), but this time it's the RX 5700 that uses the bigger (more shader units and such) cut-down chip.
Those are my overall observations and impressions.
See 2.16 GHz on the RTX 2070 FE.
www.anandtech.com/show/13431/nvidia-geforce-rtx-2070-founders-edition-review/15
Besides that, no AMD GPU can reach 4K 60 FPS in modern games.
So RDNA 2's large VRAM is nonsense unless the people buying it are content creators.