Tuesday, August 11th 2020

AMD RDNA 2 "Big Navi" to Feature 12 GB and 16 GB VRAM Configurations

As we get closer to the launch of RDNA 2 based GPUs, supposedly coming in September this year, the number of rumors is starting to increase. Today, a new rumor comes our way from the Chinese forum Chiphell. A user called "wjm47196", known for posting rumors and other bits of information, claims that AMD's RDNA 2 based "Big Navi" GPU will come in two configurations: 12 GB and 16 GB VRAM variants. Given that this is the Navi 21 chip, which represents the top-end GPU, it is logical that AMD would equip it with a larger amount of VRAM such as 12 GB or 16 GB. It is possible that AMD could separate the two variants the way NVIDIA has done with the GeForce RTX 2080 Ti and Titan RTX, so the 16 GB variant would be a bit faster, possibly featuring a higher number of streaming processors.
Sources: TweakTown, via Chiphell

104 Comments on AMD RDNA 2 "Big Navi" to Feature 12 GB and 16 GB VRAM Configurations

#76
londiste
ValantarImplementing HDMI 2.1 features on a HDMI 2.0 GPU with only HDMI 2.0 hardware is by definition a bespoke implementation.
On the GPU, not TV side.
Valantarit is still unconfirmed from LG whether 2019 OLEDs will support HDMI 2.1 VRR universally
What, why? Nvidia is using it, and at least the Xbox side of the consoles is using it, so why would it not be universal?

If I had to guess, AMD is playing the branding game here. There were some wins in getting Freesync TVs onto the market, and Nvidia - while branding it as G-Sync Compatible - is taking a standards-based approach behind the marketing this time around. HDMI 2.1 VRR support is not that big of a win before you actually have HDMI 2.1 outputs, because technically it is a bit of a mess. With HDMI 2.0 you are limited to either a 2160p 40-60 Hz range with no LFC, or 1440p 40-120 Hz. For a proper 2160p 40-120 Hz range, you need a card with an HDMI 2.1 output.
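As a rough illustration of where those limits come from, here is a back-of-envelope bandwidth check; the blanking overhead and link-efficiency figures are approximations, not exact CTA-861 timings or HDMI spec values:

```python
# Back-of-envelope video bandwidth check (approximate blanking overhead, 8-bit RGB).
def data_rate_gbps(h, v, refresh_hz, bits_per_pixel=24, blanking_overhead=1.12):
    """Approximate uncompressed data rate in Gbit/s for a given video mode."""
    return h * v * refresh_hz * bits_per_pixel * blanking_overhead / 1e9

HDMI_2_0_PAYLOAD = 18.0 * 8 / 10   # 18 Gbit/s TMDS with 8b/10b coding -> ~14.4 Gbit/s
HDMI_2_1_PAYLOAD = 48.0 * 16 / 18  # 48 Gbit/s FRL with 16b/18b coding -> ~42.7 Gbit/s

for h, v, hz in [(3840, 2160, 60), (3840, 2160, 120), (2560, 1440, 120)]:
    rate = data_rate_gbps(h, v, hz)
    print(f"{h}x{v} @ {hz} Hz: ~{rate:.1f} Gbit/s | "
          f"HDMI 2.0: {'ok' if rate <= HDMI_2_0_PAYLOAD else 'no'} | "
          f"HDMI 2.1: {'ok' if rate <= HDMI_2_1_PAYLOAD else 'no'}")
```

The numbers line up with the post: 2160p60 and 1440p120 squeeze into HDMI 2.0's payload, while 2160p120 does not.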
Posted on Reply
#77
kapone32
F-man4Even at 4K it's useless. Only one or two games can consume 11 GB+ of VRAM. Most modern games are in the 4-8 GB range according to TechPowerUp's reviews.
Besides that, no AMD GPU can reach 4K 60 in modern games.
So RDNA II's large VRAM is nonsense unless the people buying it are creators.
Do you have a 4K screen to validate that? I also have a very extensive game library. As it stands, there are only two cards that give you 60+ FPS (2080 Ti, Radeon VII) from what I have read in reviews. Even 4 GB is pushing it with modern AAA games. I am pretty confident that CP2077 will use a ton of VRAM at 4K. I know that games have to support a range of hardware, but 30 FPS is considered playable.
Posted on Reply
#78
Valantar
londisteOn the GPU, not TV side.
What, why? Nvidia is using it, and at least the Xbox side of the consoles is using it, so why would it not be universal?

If I had to guess, AMD is playing the branding game here. There were some wins in getting Freesync TVs onto the market, and Nvidia - while branding it as G-Sync Compatible - is taking a standards-based approach behind the marketing this time around. HDMI 2.1 VRR support is not that big of a win before you actually have HDMI 2.1 outputs, because technically it is a bit of a mess. With HDMI 2.0 you are limited to either a 2160p 40-60 Hz range with no LFC, or 1440p 40-120 Hz. For a proper 2160p 40-120 Hz range, you need a card with an HDMI 2.1 output.
For an HDMI 2.1 TV to recognize and enable HDMI 2.1 features when connected to an HDMI 2.0 source, it must by definition have some sort of non-standard implementation. If not, it would refuse to enable VRR, as it would (correctly, according to the standard) identify the source device as incompatible with HDMI 2.1 VRR. Thus this is not just a custom solution on Nvidia's GPUs, but on both pieces of hardware. If Nvidia's GPUs were masquerading as HDMI 2.1 devices, this would work on any HDMI 2.1 TV, not just these LG ones. As for branding, at this point (and going forward, most likely) that's the only real difference, as both Freesync and G-Sync are now essentially identical implementations of the same standards (VESA Adaptive-Sync and HDMI 2.1 VRR). Sure, there are proprietary versions of each, but those will only grow more rare as the standardized ones grow more common.
Posted on Reply
#79
F-man4
kapone32Do you have a 4K screen to validate that? I also have a very extensive game library. As it stands, there are only two cards that give you 60+ FPS (2080 Ti, Radeon VII) from what I have read in reviews. Even 4 GB is pushing it with modern AAA games. I am pretty confident that CP2077 will use a ton of VRAM at 4K. I know that games have to support a range of hardware, but 30 FPS is considered playable.
First, I own a 75-inch Sony 4K TV.
Second, there is no evidence that more than three games can consume over 11 GB of VRAM. At least TechPowerUp's reviews don't show it.
Third, judging by the VII's and 5700 XT's POOR 4K performance, I don't think RDNA II will be able to handle 4K gaming either.
So RDNA II's VRAM appeal is still nonsense. It's definitely for mining etc., but not for gaming.
AMD should not add cost for larger but useless VRAM; it should improve its GPU core performance instead.
Posted on Reply
#80
TheoneandonlyMrK
F-man4Even at 4K it's useless. Only one or two games can consume 11 GB+ of VRAM. Most modern games are in the 4-8 GB range according to TechPowerUp's reviews.
Besides that, no AMD GPU can reach 4K 60 in modern games.
So RDNA II's large VRAM is nonsense unless the people buying it are creators.
Who told you that shit.

I've been gaming at 4K/75 Hz for two years, and 90% of games can easily be made to run at 4K with a Vega 64, so only for about 10% of games do I absolutely have to drop resolution.

8 GB is the new minimum/old minimum for me; more would hopefully make the same GPU last a year longer with probable compromises, something most people in reality accept - like 95% of gamers. At least.
Posted on Reply
#81
mouacyk
Without knowing the internal details of a game engine implementation, no one can really know how much VRAM is required. 99% of the time, the games are just doing greedy allocation, where they load as many assets (not just textures) as possible into VRAM in order to cache them. Where a game doesn't do that but can properly stream assets into VRAM on-demand, it likely needs way less (perhaps only 2-4GB) to render the current scene.
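To make the distinction concrete, here is a toy sketch of the two strategies; the asset names and sizes are invented for illustration and bear no relation to any real engine:

```python
# Toy contrast between greedy VRAM caching and on-demand streaming.
# Asset names/sizes are made up; real engines track residency per texture mip, etc.
ASSET_SIZES_MB = {f"level_asset_{i}": 96 for i in range(80)}  # ~7.5 GB of level data

def greedy_preload(vram_budget_mb):
    """Cache as many assets as fit in VRAM, needed right now or not."""
    used = 0
    for size in ASSET_SIZES_MB.values():
        if used + size > vram_budget_mb:
            break
        used += size
    return used

def stream_on_demand(visible_assets):
    """Keep only what the current scene references (eviction policy not shown)."""
    return sum(ASSET_SIZES_MB[name] for name in visible_assets)

print("Greedy cache on an 11 GB card:", greedy_preload(11 * 1024), "MB allocated")
print("Streaming, 25 assets visible: ", stream_on_demand([f"level_asset_{i}" for i in range(25)]), "MB resident")
```

The greedy strategy happily fills most of the card, while the streaming one only holds what the current scene actually references.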
Posted on Reply
#82
medi01
mouacykWithout knowing the internal details of a game engine implementation, no one can really know how much VRAM is required. 99% of the time, the games are just doing greedy allocation, where they load as many assets (not just textures) as possible into VRAM in order to cache them. Where a game doesn't do that but can properly stream assets into VRAM on-demand, it likely needs way less (perhaps only 2-4GB) to render the current scene.
Next gen consoles going with 16 GB while targeting 4K says a lot.
Sony's engineers have pointed out that both the amount of RAM required and game installation size shrink drastically when a fast SSD is available, since there is no need to pre-load assets or keep multiple copies of the same data just so that it loads faster.
ValantarFor an HDMI 2.1 TV to recognize and enable HDMI 2.1 features when connected to an HDMI 2.0 source, it must by definition have some sort of non-standard implementation.
I've briefly dealt with HDMI-CEC (when my sat receiver didn't want to play along with... wait a sec, an LG TV).
My observations:

1) HDMI implementations are a clusterf*ck of quirks, so adding yet another one is no big deal
2) "Which vendor just connected?" is part of the standard handshake.
3) Vendor specific codes are part of the standard. ("Oh, you are LG too? Let's speak Klingon!!!!")
Posted on Reply
#83
INSTG8R
Vanguard Beta Tester
AnarchoPrimitiv12GB and 16GB seem more than enough to me; anything higher than that is just adding expense. Depending on how Navi 2 turns out, I might make the jump to a higher tier. I'll have to sell my Sapphire 5700 XT Nitro though, which I've only had since November
I'm keeping mine until the dust totally settles, or until I can't fight the "new shiny" urge. My 1440p performance is more than adequate right now. I'm sure Sapphire will suck me back in as they have the last 4 gens...
Posted on Reply
#84
Valantar
F-man4First, I own a 75-inch Sony 4K TV.
Second, there is no evidence that more than three games can consume over 11 GB of VRAM. At least TechPowerUp's reviews don't show it.
Third, judging by the VII's and 5700 XT's POOR 4K performance, I don't think RDNA II will be able to handle 4K gaming either.
So RDNA II's VRAM appeal is still nonsense. It's definitely for mining etc., but not for gaming.
AMD should not add cost for larger but useless VRAM; it should improve its GPU core performance instead.
Mining? Mining is dead. And AMD has already increased their GPUs' gaming performance quite significantly - hence the 40 CU RDNA-based 5700 XT nearly matching the 60 CU Vega-based Radeon VII (sure, the 5700 XT clocks higher, but not that much - it's still more than 4 TFLOPS behind in raw compute power, and that's calculated using the unrealistically high "boost clock" spec for the 5700 XT, not the realistic "game clock" spec). There's definitely an argument to be made about huge frame buffers being silly, but in the non-controlled system that is the PC, they are somewhat necessary: developers always need ways of mitigating or compensating for potential bottlenecks such as disk I/O, and pre-loading assets aggressively is a widely used tactic for this. And as graphical fidelity increases and texture quality goes up, we'll definitely see VRAM usage keep increasing.
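For reference, the raw compute comparison works out roughly like this (64 shaders per CU, 2 FLOPs per shader per clock; the clock figures are the advertised specs, which real cards don't sustain):

```python
# Rough FP32 throughput math behind the Radeon VII vs RX 5700 XT comparison.
def fp32_tflops(compute_units, clock_mhz, shaders_per_cu=64, flops_per_clock=2):
    return compute_units * shaders_per_cu * flops_per_clock * clock_mhz * 1e6 / 1e12

print("Radeon VII, 60 CU @ ~1800 MHz peak:", round(fp32_tflops(60, 1800), 2), "TFLOPS")
print("RX 5700 XT, 40 CU @ 1905 MHz boost:", round(fp32_tflops(40, 1905), 2), "TFLOPS")
print("RX 5700 XT, 40 CU @ 1755 MHz game: ", round(fp32_tflops(40, 1755), 2), "TFLOPS")
```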
medi01Next gen consoles going with 16 GB while targeting 4K says a lot.
Sony's engineers have pointed out that both the amount of RAM required and game installation size shrink drastically when a fast SSD is available, since there is no need to pre-load assets or keep multiple copies of the same data just so that it loads faster.
Yep, that's exactly it. PC game makers can't afford to alienate the heaps of users still on HDDs (or even SATA SSDs to some extent), and there are no systems in place to serve a different install depending on the nature of your storage device (not to mention that making 2-3 different distributions of your game depending on the storage medium would be quite work intensive), so they need to develop for the lowest common denominator. With the upcoming consoles going all NVMe, developers know exactly how fast they can get data off the drive, and thus never have to load it prematurely. That will drive down VRAM usage dramatically.

Of course, VRAM usage in games is an interesting topic in and of itself given how much it can vary due to different factors. I've seen examples of the same game at the same resolution and settings, with similar performance, using several GB more VRAM on GPUs from one vendor than on the other's (IIRC it was a case where Nvidia GPUs hit nearly 6 GB while AMD GPUs stayed around 4 GB). Whether that is due to the driver, something weird in the game code, or something else entirely is beyond me, but it's a good illustration of why this can't be discussed as a straightforward "game A at resolution X will ALWAYS need N GB of VRAM" situation.
medi01I've briefly dealt with HDMI-CEC (when my sat receiver didn't want to play along with... wait a sec, an LG TV).
My observations:

1) HDMI implementations are a clusterf*ck of quirks, so adding yet another one is no big deal
2) "Which vendor just connected?" is part of the standard handshake.
3) Vendor specific codes are part of the standard. ("Oh, you are LG too? Let's speak Klingon!!!!")
That's true, HDMI is indeed a mess - but my impression is that HDMI 2.1 is supposed to alleviate this by integrating a lot of optional things into the standard, rather than having vendors make custom extensions. Then again, can you even call something a standard if it isn't universally applicable by some reasonable definition? I would say not. Unless, of course, you're a fan of XKCD. But nonetheless, even if "which vendor just connected" is part of the handshake, "if vendor = X, treat this HDMI 2.0 device as an HDMI 2.1 device" goes quite a ways beyond that.
Posted on Reply
#85
londiste
medi01Next gen consoles going with 16GB while targeting 4k speaks a lot.
Keep in mind that this is total RAM. Compared to a PC, it is both RAM and VRAM in one. In the current gen, both consoles have 8GB, but initially the PS4 had 3.5GB available for games and the Xbox One had 5GB. Later, the available RAM for games was slightly increased on both.

Pretty sure the new generation will reserve 3-4GB for system use, if not more - operating system, caches and stuff. The remaining 12-13GB includes both RAM and VRAM for a game to use. There are some savings from not having to load textures into RAM and then transfer them to VRAM, but that does not make too big of a difference.
Posted on Reply
#86
Valantar
londisteKeep in mind that this is total RAM. Compared to a PC, it is both RAM and VRAM in one. In the current gen, both consoles have 8GB, but initially the PS4 had 3.5GB available for games and the Xbox One had 5GB. Later, the available RAM for games was slightly increased on both.

Pretty sure the new generation will reserve 3-4GB for system use, if not more - operating system, caches and stuff. The remaining 12-13GB includes both RAM and VRAM for a game to use. There are some savings from not having to load textures into RAM and then transfer them to VRAM, but that does not make too big of a difference.
2.5GB for the XSX. 13.5GB is available for software, of which 10GB is of the "GPU optimal" kind thanks to the XSX's weird dual-bandwidth RAM layout.
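Laid out as a simple budget (the figures below are the ones Microsoft has disclosed publicly; treat them as approximate):

```python
# Xbox Series X memory split as publicly disclosed by Microsoft (approximate).
XSX_POOLS = {
    "gpu_optimal": {"size_gb": 10.0, "bandwidth_gb_s": 560},  # on the wider part of the bus
    "standard":    {"size_gb": 6.0,  "bandwidth_gb_s": 336},  # on the narrower part
}
OS_RESERVED_GB = 2.5

total_gb = sum(p["size_gb"] for p in XSX_POOLS.values())
game_gb = total_gb - OS_RESERVED_GB
print(f"Total: {total_gb} GB | OS reserve: {OS_RESERVED_GB} GB | available to games: {game_gb} GB")
print(f"Of that, {XSX_POOLS['gpu_optimal']['size_gb']} GB is the fast 'GPU optimal' pool at "
      f"{XSX_POOLS['gpu_optimal']['bandwidth_gb_s']} GB/s")
```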
Posted on Reply
#87
londiste
Valantar2.5GB for the XSX. 13.5GB is available for software, of which 10GB is of the "GPU optimal" kind thanks to the XSX's weird dual-bandwidth RAM layout.
This is pretty likely what memory availability will look like on both next-gen consoles.
Posted on Reply
#88
medi01
ValantarXSX's weird dual-bandwidth RAM layout.
Cheaper that way. Every buck counts when you sell stuff in tens of millions.
Posted on Reply
#89
Valantar
medi01Cheaper that way. Every buck counts when you sell stuff in tens of millions.
Sure, that makes a bit of sense. Probably also board space (the XSX motherboard is tiny!) and a likely correct judgement that non-graphics memory doesn't actually need all of that bandwidth to begin with.
Posted on Reply
#90
modmax
[QUOTE = "FreedomOfSpeech, post: 4326708, membro: 200814"]
Questa volta volevo passare ad AMD per la prima volta nella mia vita. Lo scorso Natale ho acquistato un LG C9 per via del suo 4K @ 120Hz @VRR (GSync / Freesync). Nvidias RTX Generation può eseguire VRR con HDMI 2.0 come G-Sync Combatible. Ieri ho letto che gli OLED LG del 2019 non funzioneranno con Big Navi @Freesync. Ciò significa che devo acquistare di nuovo Nvidia ...
[/ CITAZIONE]
non ha mai avuto amd? lascia stare sono impegnative in ogni senso, non che non siano potenti.....ma..
Posted on Reply
#91
medi01
modmax[QUOTE = "FreedomOfSpeech, post: 4326708, membro: 200814"]
Questa volta volevo passare ad AMD per la prima volta nella mia vita. Lo scorso Natale ho acquistato un LG C9 per via del suo 4K @ 120Hz @VRR (GSync / Freesync). Nvidias RTX Generation può eseguire VRR con HDMI 2.0 come G-Sync Combatible. Ieri ho letto che gli OLED LG del 2019 non funzioneranno con Big Navi @Freesync. Ciò significa che devo acquistare di nuovo Nvidia ...
[/ CITAZIONE]
non ha mai avuto amd? lascia stare sono impegnative in ogni senso, non che non siano potenti.....ma..
Oh look, burnt fuse II.

By the way, why doesn't the chiplet approach work with GPUs?
Posted on Reply
#92
deu
bonehead123If past trends are any indication, there WILL be 12 and/or 16GB variants, and possibly even 24GB for the very top end....
Remember the infamous "nobody will ever need more than 16k of RAM" quote from way back, and look at where we are nowadays....

And since the current crop of 11GB cards is really, really expensive compared to the 8/6GB models, if you want one of the above, perhaps you should shore up your finances, then put your banker and HIS gold cards on retainer, hehehe ..:roll:..:eek:...:fear:..
Is this the quote?

www.computerworld.com/article/2534312/the--640k--quote-won-t-go-away----but-did-gates-really-say-it-.html

Someone else might have said it with 16k of RAM, I don't know
Posted on Reply
#93
londiste
medi01By the way, why doesn't chiplet approach work with GPUs?
It works perfectly fine for workloads that aren't time-sensitive, like HPC, AI and other compute tasks. When it comes to graphics, the problem is orchestrating all the work in time and having all the necessary information in the right place - VRAM, caches, across IF/NVLink/whichever bus. Both AMD and Nvidia have been working on this for years, but the current result seems to be that even Crossfire and SLI are being wound down.
Posted on Reply
#94
bonehead123
deuIs this the quote?

www.computerworld.com/article/2534312/the--640k--quote-won-t-go-away----but-did-gates-really-say-it-.html

Someone else might have said it with 16k of RAM, I don't know
something like that anyways.... my memory is not what it used to be, but oh well :)
londistebut the current result seems to be that even Crossfire and SLI are being wound down
This is my understanding also, and as far as I am concerned, good riddance.... it was never really worth the hassle, since the drivers never really worked the way they were supposed to anyways, although the GPUs themselves seemed to be relatively robust for that era...
Posted on Reply
#95
Super XP
JAB CreationsIf you think a $700 card on par with a 2080 is mid-range, then you're going to have to wait until the aliens visit to get high end.
All GPU prices are out of whack - low-end, mid-range, high-end and enthusiast alike. They are ALL overpriced!
Posted on Reply
#96
Valantar
londisteIt works perfectly fine for workloads that aren't time-sensitive, like HPC, AI and other compute tasks. When it comes to graphics, the problem is orchestrating all the work in time and having all the necessary information in the right place - VRAM, caches, across IF/NVLink/whichever bus. Both AMD and Nvidia have been working on this for years, but the current result seems to be that even Crossfire and SLI are being wound down.
Chiplet-based GPUs aren't supposed to be SLI/CF though, the point is precisely to avoid that can of worms. Ideally the chiplets would seamlessly add into a sort of virtual monolithic GPU, with the system not knowing there are several chips at all. As you say there have been persistent rumors about this for a while, and we know both major GPU vendors are working on it (Intel too). If performance is supposed to keep increasing as it has without costs going even more crazy than they already have, MCM GPUs will soon be a necessity - >500mm² dice on 7nm EUV or 5nm aren't going to be economically viable for mass market consumer products in the long run, so going MCM is the only logical solution. 3D stacking might help with latencies, by stacking multiple GPU compute chiplets on top of an I/O + interconnect die or active interposer. But of course that comes with its own added cost, and massive complexity. Hopefully it will all come together when it truly becomes a necessity.
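A minimal yield sketch shows why reticle-sized monolithic dice get painful; the defect density and die areas below are illustrative round numbers, not real foundry data, and packaging/interconnect costs are ignored:

```python
# Simple Poisson yield model: fraction of dice with zero defects.
import math

def yield_rate(die_area_mm2, defects_per_cm2=0.1):
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100)

big, small = 500, 125  # one monolithic die vs four chiplets of the same total area
print(f"500 mm^2 monolithic die: {yield_rate(big):.1%} of dice (and silicon) usable")
print(f"125 mm^2 chiplet:        {yield_rate(small):.1%} of dice usable; "
      "a defect scraps 125 mm^2 of silicon instead of 500 mm^2")
```

The point is economic rather than architectural: each defect wastes far less silicon when the compute is split across small dice.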
Posted on Reply
#97
londiste
SLI/Crossfire are the technologies that have shown the most promise. Splitting the GPU functionally onto different dies has not yet been very successful. Moving data around and keeping it coherent are a big problem, especially with the latency in play. We will see if contemporary packaging technologies like interposers or EMIB will make inter-die communication more viable, but that seems to be the sticking point so far. Even with data moving over silicon, going from one die to another might come with a surprisingly big efficiency hit.
Posted on Reply
#98
Valantar
londisteSLI/Crossfire are the technologies that have shown the most promise. Splitting the GPU functionally onto different dies has not yet been very successful. Moving data around and keeping it coherent are a big problem, especially with the latency in play. We will see if contemporary packaging technologies like interposers or EMIB will make inter-die communication more viable, but that seems to be the sticking point so far. Even with data moving over silicon, going from one die to another might come with a surprisingly big efficiency hit.
There are definitely challenges that need to be overcome, and they might even require a fundamental rearrangement of the current "massive grid of compute elements" GPU design paradigm. Advanced scheduling will also be much more important than it is today. But saying SLI and CF have "shown promise" is... a dubious claim. Both are dead-end technologies due to their reliance on developer support, difficulty of optimization, and poor scaling (~70% scaling as a best case, with 30-40% being normal, is nowhere near worth it). A lot of the shortcomings of multi-GPU can be overcome "simply" by drastically reducing the latency and drastically increasing the bandwidth between GPUs (NVLink beats SLI by quite a lot), but tighter integration will still be needed to remove the requirement for active developer support. Still, this is more realistic and has more of a path towards further scaling than SLI/CF - after all, those only go up to two GPUs before scaling drops off dramatically, which won't help GPU makers shrink die sizes by all that much. We need a new paradigm of GPU design to overcome this challenge, not just a shrunk-down version of what we already have.
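The scaling arithmetic behind that point, using the percentages quoted above (a simple linear model; real multi-GPU scaling degrades even further beyond two GPUs):

```python
# What extra GPUs buy you at a given scaling factor vs. the near-linear scaling
# a transparent chiplet design would need. Factors taken from the post above.
def effective_gpus(num_gpus, scaling_per_extra_gpu):
    return 1 + (num_gpus - 1) * scaling_per_extra_gpu

for label, s in [("SLI/CF typical (35%)", 0.35),
                 ("SLI/CF best case (70%)", 0.70),
                 ("chiplet target (~95%)", 0.95)]:
    print(f"{label}: 2 GPUs ~ {effective_gpus(2, s):.2f}x, "
          f"4 GPUs ~ {effective_gpus(4, s):.2f}x of one GPU")
```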
Posted on Reply
#99
londiste
SLI/CF really do not rely on developers; IHVs do most of the job there. Unfortunately, current rendering technologies do not lend themselves well to the idea. Scaling really was not (and is not) that bad in GPU-limited situations.

I would say DX12 explicit multi-GPU was not a bad idea for an approach that relies completely on developers, but that seems to be a no-go as a whole.

You are right about the required paradigm shift, but I'm not sure exactly what that will be. Right now, hardware vendors do not seem to have very good ideas for it either :(
Posted on Reply
#100
Anymal
GDDR6 is slower than the GDDR6X used on the top Ampere cards.
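The difference is in per-pin data rate; a quick comparison (the bus widths here are just illustrative examples, and the GDDR6X rates are the announced ones):

```python
# Peak memory bandwidth = bus width (bits) x per-pin data rate (Gbit/s) / 8.
def bandwidth_gb_s(bus_width_bits, gbps_per_pin):
    return bus_width_bits * gbps_per_pin / 8

print("256-bit GDDR6  @ 16 Gbps:  ", bandwidth_gb_s(256, 16), "GB/s")
print("256-bit GDDR6X @ 19 Gbps:  ", bandwidth_gb_s(256, 19), "GB/s")
print("384-bit GDDR6X @ 19.5 Gbps:", bandwidth_gb_s(384, 19.5), "GB/s")
```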
Posted on Reply