Tuesday, August 11th 2020

AMD RDNA 2 "Big Navi" to Feature 12 GB and 16 GB VRAM Configurations

As we get closer to the launch of RDNA 2 based GPUs, which are supposedly coming in September this year, the number of rumors is starting to increase. Today, a new rumor comes our way from the Chinese forum Chiphell. A user called "wjm47196", known for providing rumors and various other bits of information, claims that AMD's RDNA 2 based "Big Navi" GPU will come in two configurations - 12 GB and 16 GB VRAM variants. Given that this is the Navi 21 chip, which represents the top-end GPU, it is logical for AMD to equip it with a larger amount of VRAM such as 12 GB or 16 GB. It is possible that AMD could separate the two variants the way NVIDIA did with the GeForce RTX 2080 Ti and Titan RTX, with the 16 GB variant being a bit faster, possibly featuring a higher number of streaming processors.
Sources: TweakTown, via Chiphell

104 Comments on AMD RDNA 2 "Big Navi" to Feature 12 GB and 16 GB VRAM Configurations

#52
F-man4
Large VRAM is useful for working but useless for gaming.
Posted on Reply
#53
ARF
JAB CreationsIf you think a $700 on par with a 2080 is mid-range then you're going to have to wait to get high end until the aliens visit.
The price tag itself doesn't tell the whole story. In this case, it's super overpriced because of several factors, but yes, on 7nm technology, that is mid-range performance.
RTX 2080 is 16nm/12nm.
Posted on Reply
#55
Valantar
PatriotI don't care about the language; personal blogs, yours or otherwise, are not sources, they are opinions. The blog itself appears to be lower than WCCFtech...
LG has been pretty clear on this for a while now. The G-sync implementation on these TVs was a bespoke implementation, not FS-over-HDMI, and does not work on AMD cards. They have also confirmed publicly that FreeSync support will not be added to them in the future. What isn't quite clear is whether this means that future HDMI 2.1-equipped AMD cards will also be out of luck, or if these are compatible through HDMI VRR - because it isn't clear whether these TVs fully support HDMI VRR. It could go either way depending on how interested LG is in providing this service to existing users, given that their new 2020 models are advertised as working with GPUs from both vendors.
Posted on Reply
#56
FreedomOfSpeech
Thanks Valantar. Sry to disappoint you, Patriot and medi01.
Posted on Reply
#57
kapone32
F-man4Large VRAM is useful for working but useless for gaming.
Unless you like to play at 4K.
Posted on Reply
#58
Patriot
ValantarLG has been pretty clear on this for a while now. The G-sync implementation on these TVs was a bespoke implementation, not FS-over-HDMI, and does not work on AMD cards. They have also confirmed publicly that FreeSync support will not be added to them in the future. What isn't quite clear is whether this means that future HDMI 2.1-equipped AMD cards will also be out of luck, or if these are compatible through HDMI VRR - because it isn't clear whether these TVs fully support HDMI VRR. It could go either way depending on how interested LG is in providing this service to existing users, given that their new 2020 models are advertised as working with GPUs from both vendors.
Ah more TVs that claim specs that they don't actually support, cool cool.

Sorry for jumping on you, new German blog poster; I read a quoted message which was truncated from your original, greatly changing the meaning. "Will not function" is different from "will not support VRR universally."
Please use real sources that cite things, not blogs...

www.thefpsreview.com/2020/08/10/lgs-2019-oled-tvs-arent-getting-amd-freesync/ Notice how they link their source?
Posted on Reply
#59
Chrispy_
medi01Looking at 250mm2 5700/XT vs 2070/2060 and supers, seriously?
Are you really going to use die size to equate two products on completely different process nodes? Like, really?
/Facepalm.

Navi10 is great, but it's about being a cheap, good-enough part for AMD. At 10.3 billion transistors, its closest Nvidia relative is the original 2070 (full-fat TU106), which has 10.8 billion transistors.

They trade blows, but I reckon the 5700XT wins more than it loses and is about 10% ahead. That sounds about right: if you look at all the various reviews around the web, the 5700XT is a little bit faster than the 2070, and how much faster depends on the game selection tested.

Here's the thing(s) though:
  • AMD has the process node advantage; 7nm vs 12nm
  • AMD has the clock frequency advantage; ~1905MHz vs 1620MHz
  • AMD has the shader count advantage; 2560 vs 2304
  • AMD needs 30% more power, despite the more efficient node; 225W vs 175W
  • AMD uses all 10.3bn transistors without tensor cores or raytracing support; TU106's 10.8bn transistors include all that.
So yeah, Nvidia has the architectural advantage. If you took the exact same specs that Navi10 has and made a 7nm TU106 part with 2560 CUDA cores and let it use 225W, it would stomp all over the 5700XT. Oh, and it would still have DLSS and hardware raytracing support that Navi10 lacks.
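As a rough sanity check on those numbers, here's a minimal sketch in Python using only the spec figures listed above (paper FP32 TFLOPS assumes the usual two ops per shader per clock; real-world clocks and power draw will differ):

```python
# Back-of-the-envelope comparison of RX 5700 XT (Navi10) vs RTX 2070 (full TU106)
# using the boost-clock, shader-count and board-power specs quoted above.

specs = {
    "RX 5700 XT (Navi10)": {"shaders": 2560, "clock_ghz": 1.905, "board_w": 225},
    "RTX 2070 (TU106)":    {"shaders": 2304, "clock_ghz": 1.620, "board_w": 175},
}

for name, s in specs.items():
    tflops = 2 * s["shaders"] * s["clock_ghz"] / 1000  # FMA counted as 2 ops
    print(f"{name}: ~{tflops:.2f} paper TFLOPS at {s['board_w']} W board power")

amd, nv = specs["RX 5700 XT (Navi10)"], specs["RTX 2070 (TU106)"]
print(f"Clock advantage:  {amd['clock_ghz'] / nv['clock_ghz'] - 1:+.0%}")  # ~ +18%
print(f"Shader advantage: {amd['shaders'] / nv['shaders'] - 1:+.0%}")      # ~ +11%
print(f"Power premium:    {amd['board_w'] / nv['board_w'] - 1:+.0%}")      # ~ +29%
```

The ~29% figure is where the "30% more power" point above comes from, and the TFLOPS outputs (~9.75 vs ~7.5) line up with both vendors' official throughput figures.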
Posted on Reply
#60
ARF
Chrispy_
  • AMD has the clock frequency advantage; ~1905MHz vs 1620MHz
I thought that every RTX 2080 Ti card can hit 2000-2100 MHz.
Posted on Reply
#62
Chrispy_
ARFI thought that every RTX 2080 Ti card can hit 2000-2100 MHz.
Where did I mention a 2080Ti? You need to read, then think, then comment ;)

The big video of 2070 vs 5700XT was the first clue, but also all the specs in that bullet-point list are 2070 specs. There are three mentions of TU106 and two mentions of 2070.

I picked the 2070 because it's the closest price/transistor count match for navi10 and is also the fully enabled silicon, not a chopped down variant like the 5600XT or 2060S.
Posted on Reply
#63
ARF
2080 Ti Owners, what are your overclock settings? (Core clock/memory clock)

Pretty much every card hits 2000-2100 MHz overclocked. So just boost your power to whatever maximum your card allows, add anywhere from 100-200 MHz or so depending on your card to reach the aforementioned overclock range, and you're good to go.
Pretty much every card also gets the same memory overclock of anywhere from +500-800 MHz.
FE cards just need the fan RPM turned up a bit higher but get the same clocks as the rest.
nvidia/comments/9jzbon
Chrispy_Where did I mention a 2080Ti? You need to read, then think, then comment ;)
If the largest chip with the highest power consumption can hit over 2 GHz with ease, then the smaller chips should manage at least as much.
Posted on Reply
#64
kapone32
ARFIf the largest chip with the highest power consumption can hit over 2 GHz with ease, then the smaller chips should manage at least as much.
What?
Posted on Reply
#65
JAB Creations
ARFThe price tag itself doesn't tell the whole story. In this case, it's super overpriced because of several factors, but yes, on 7nm technology, that is mid-range performance.
RTX 2080 is 16nm/12nm.
Okay, that makes much more sense relative to itself, though the 5000 series did not exist until the Radeon VII was discontinued. It was high-end for what it was: Vega at 7nm.
Posted on Reply
#66
mouacyk
kapone32Unless you like to play at 4K.
Or lots of mip-map levels or lots of texture varieties in the same scene. Photogrammetry is the new mega texture.
Posted on Reply
#67
kapone32
mouacykOr lots of mip-map levels or lots of texture varieties in the same scene. Photogrammetry is the new mega texture.
Exactly why I bought a 1440P monitor for gaming. The jump in picture quality from 1440P to 4K is not as drastic as from 1080P to 1440P, but 4K eats GPUs for breakfast and lunch.
Posted on Reply
#68
Chrispy_
ARFI thought that every RTX 2080 Ti card can hit 2000-2100 MHz.
It probably can, but WTF does that have to do with my point? I wasn't even replying to you - you've just successfully trolled the discussion with your inability to understand English.
ARFIf the largest chip with the highest power consumption can hit over 2 GHz with ease, then the smaller chips should manage at least as much.
What?
Do you even have a clue what you're talking about? Are you suggesting that a 10900K can reach 5.3GHz so the i3-10100 should be able to as well?
Are you sober right now, even?
Posted on Reply
#69
ARF
Chrispy_It probably can, but WTF does that have to do with my point? I wasn't even replying to you - you've just successfully trolled the discussion with your inability to understand English.


What?
Do you even have a clue what you're talking about? Are you suggesting that a 10900K can reach 5.3GHz so the i3-10100 should be able to as well?
Are you sober right now, even?
You are wrong from every angle with regard to your claim that Radeons achieve higher clocks. That is some very serious shifting of reality into imaginary "facts". :D
Posted on Reply
#70
londiste
ValantarLG has been pretty clear on this for a while now. The G-sync implementation on these TVs was a bespoke implementation, not FS-over-HDMI, and does not work on AMD cards. They have also confirmed publicly that FreeSync support will not be added to them in the future. What isn't quite clear is whether this means that future HDMI 2.1-equipped AMD cards will also be out of luck, or if these are compatible through HDMI VRR - because it isn't clear whether these TVs fully support HDMI VRR. It could go either way depending on how interested LG is in providing this service to existing users, given that their new 2020 models are advertised as working with GPUs from both vendors.
LG OLED TVs do not have a bespoke GSync implementation. These have HDMI 2.1 and its VRR which is a pretty standard thing (and not compatible with bespoke FS-over-HDMI). Although no cards have HDMI 2.1 ports, Nvidia added support for some HDMI 2.1 features - in this context namely VRR - to some of their cards with HDMI 2.0. Nothing really prevents AMD from doing the same. FS-over-HDMI will not be added but AMD can add VRR support in the same way Nvidia did. And it will probably be branded as Freesync something or another.

Not confirmed but I am willing to bet both next-gen GPUs will have HDMI 2.1 ports and VRR support.
Posted on Reply
#71
Chrispy_
ARFYou are wrong from every angle with regard to your claim that Radeons achieve higher clocks. That is some very serious shifting of reality into imaginary "facts". :D
My claim? It's literally the official specs.

Sure, GeForce boost is more dynamic than AMD's; there are plenty of videos from mainstream channels like Jayz/GN/HW Unboxed reviewing 5700XT AIB cards with 2GHz+ game clocks at stock, though, so the point you're trying to make falls apart even as you're pushing it. So what? Clock speeds were only one of my points, and if you're going to argue with the official specs then your argument is with Nvidia, not me. You might want to take up all the AIB cards on their power consumption figures too, if you're in that sort of mood.

You've set up a straw man by introducing a 2080Ti into a 2070/5700XT discussion for no reason, and I'm not buying it.
Posted on Reply
#72
londiste
Clock speeds? TPU has reviews.

Reference RX 5700XT - average 1887MHz. The best cards average around 2000MHz; the ASUS Strix is the only one that averages above that, at 2007MHz, but a couple of others are very close.
RTX 2070 Founders Edition - average 1862MHz. There are fewer reviews, and average speeds seem to end up somewhere in the 19xx MHz range. The best card is Zotac's AMP Extreme with an average of 1987MHz.

Pretty even overall.
Performance and power consumption seem to be pretty much at the same level as well.

It may be worth noting that non-Super Turings are relatively modestly clocked to keep them in the power-efficient range.

In terms of shader units and other resources, the RX 5700XT is equal to the RTX 2070 Super, but the latter uses a bigger cut-down chip.
Similarly, in terms of shaders and resources, the RX 5700 (non-XT) is really equal to the RTX 2070 (non-Super), but this time it's the RX 5700 that uses a bigger (more shader units and other resources) cut-down chip.
Posted on Reply
#74
F-man4
kapone32Unless you like to play at 4K.
Even at 4K it's useless. Only one or two games can consume 11GB+ of VRAM; most modern games use around 4-8GB according to TechPowerUp's reviews.
Besides that, no AMD GPU can hit 60 FPS at 4K in modern games.
So RDNA 2's large VRAM is nonsense unless the people buying it are creators.
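To put some rough arithmetic on the resolution side of this, here is a small illustrative sketch; the five-render-target, 8-bytes-per-pixel layout is a made-up example rather than any specific game's, so treat the numbers as order-of-magnitude only:

```python
# Rough framebuffer arithmetic: how much VRAM the render resolution itself costs.
# Assumes a hypothetical deferred renderer with 5 render targets at 8 bytes/pixel.

def render_target_mib(width, height, targets=5, bytes_per_pixel=8):
    """Approximate VRAM used by the render targets alone, in MiB."""
    return width * height * targets * bytes_per_pixel / 2**20

for name, (w, h) in {"1440p": (2560, 1440), "4K": (3840, 2160)}.items():
    print(f"{name}: ~{render_target_mib(w, h):.0f} MiB of render targets")

# ~141 MiB at 1440p vs ~316 MiB at 4K: the resolution itself adds a few hundred MiB,
# while the bulk of a 4-8 GB footprint is textures, geometry and other assets.
```

Which fits both sides of the argument above: 4K does cost more VRAM, but it's texture variety (mip levels, photogrammetry assets) that pushes usage toward the top of that 4-8 GB range.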
Posted on Reply
#75
Valantar
londisteLG OLED TVs do not have a bespoke GSync implementation. These have HDMI 2.1 and its VRR which is a pretty standard thing (and not compatible with bespoke FS-over-HDMI). Although no cards have HDMI 2.1 ports, Nvidia added support for some HDMI 2.1 features - in this context namely VRR - to some of their cards with HDMI 2.0. Nothing really prevents AMD from doing the same. FS-over-HDMI will not be added but AMD can add VRR support in the same way Nvidia did. And it will probably be branded as Freesync something or another.

Not confirmed but I am willing to bet both next-gen GPUs will have HDMI 2.1 ports and VRR support.
Implementing HDMI 2.1 features on an HDMI 2.0 GPU with only HDMI 2.0 hardware is by definition a bespoke implementation. It bypasses and supersedes the standard, and is therefore made especially for that (combination of) part(s) - thus it is bespoke, custom-made. Beyond that, nothing you said contradicts anything I said, and to reiterate: it is still unconfirmed from LG whether 2019 OLEDs will support HDMI 2.1 VRR universally - which would after all make sense to do given that both next-gen consoles support it, as well as upcoming GPUs. The absence of unequivocal confirmation might mean nothing at all, or it might mean that LG didn't bother to implement this part of the standard properly (which isn't unlikely given how early it arrived). And yes, I am also willing to bet both camps will have HDMI 2.1 ports with VRR support on their upcoming GPUs.
Chrispy_It probably can, but WTF does that have to do with my point? I wasn't even replying to you - you've just successfully trolled the discussion with your inability to understand English.
I'm not arguing the same as @ARF here, but using on-paper boost specs for Nvidia-vs-AMD comparisons is quite misleading. GPU Boost 3.0 means that every card exceeds its boost clock spec. Most reviews seem to place real-world boost clock speeds for FE cards in the high 1800s or low 1900s, definitely above 1620MHz. On the other hand, AMD's "boost clock" spec is a peak clock spec, with "game clock" being the expected real-world speed (yes, it's quite dumb - why does the boost spec exist at all?). Beyond that though, I agree (and frankly think it's rather preposterous that anyone would disagree) that Nvidia still has a significant architectural efficiency advantage (call it "IPC" or whatever). They still get more gaming performance per shader core and TFlop, and are on par in perf/W despite being on a much less advanced node. That being said, AMD has (partially thanks to their node advantage, but also due to RDNA's architectural improvements - just look at the VII vs. 5700 XT, both on 7nm) gained on Nvidia in a dramatic way over the past generation, with the 5700 (non-XT) and especially the 5600 outright beating Nvidia's best in perf/W for the first time in recent history. The promise of dramatically increased perf/W for RDNA 2, while Nvidia moves to a better (though not quite matched) node, makes this a very interesting launch cycle.
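To make the "per TFLOP" framing concrete, here is a rough sketch reusing only numbers already quoted in this thread - the ~10% performance edge estimated in #59 and the spec boost clocks - so treat the output as ballpark:

```python
# "Performance per paper TFLOP": relative performance (RTX 2070 = 1.00 baseline,
# RX 5700 XT ~1.10 per the estimate in #59) divided by paper FP32 throughput
# (2 ops per shader per clock at the spec boost clock).

cards = {
    "RX 5700 XT (Navi10)": {"rel_perf": 1.10, "shaders": 2560, "boost_ghz": 1.905},
    "RTX 2070 (TU106)":    {"rel_perf": 1.00, "shaders": 2304, "boost_ghz": 1.620},
}

for name, c in cards.items():
    tflops = 2 * c["shaders"] * c["boost_ghz"] / 1000
    print(f"{name}: {c['rel_perf'] / tflops:.3f} relative performance per paper TFLOP")
```

With the spec boost clocks, Turing comes out close to 20% ahead per paper TFLOP; plugging in the real-world average clocks from #72 (≈1887 MHz vs ≈1862 MHz) instead narrows that to a few percent, which is exactly the point about on-paper boost specs not being comparable across vendors.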
Posted on Reply