
AMD RDNA 2 "Big Navi" to Feature 12 GB and 16 GB VRAM Configurations

LG OLED TVs do not have a bespoke GSync implementation. These have HDMI 2.1 and its VRR which is a pretty standard thing (and not compatible with bespoke FS-over-HDMI). Although no cards have HDMI 2.1 ports, Nvidia added support for some HDMI 2.1 features - in this context namely VRR - to some of their cards with HDMI 2.0. Nothing really prevents AMD from doing the same. FS-over-HDMI will not be added but AMD can add VRR support in the same way Nvidia did. And it will probably be branded as Freesync something or another.

Not confirmed but I am willing to bet both next-gen GPUs will have HDMI 2.1 ports and VRR support.
Implementing HDMI 2.1 features on an HDMI 2.0 GPU with only HDMI 2.0 hardware is by definition a bespoke implementation. It bypasses and supersedes the standard, and is thus made especially for that (combination of) part(s) - thus it is bespoke, custom-made. Beyond that, nothing you said contradicts anything I said, and to reiterate: it is still unconfirmed from LG whether 2019 OLEDs will support HDMI 2.1 VRR universally - which would after all make sense to do given that both next-gen consoles support it, as well as upcoming GPUs. The absence of unequivocal confirmation might mean nothing at all, or it might mean that LG didn't bother to implement this part of the standard properly (which isn't unlikely given how early it arrived). And yes, I am also willing to bet both camps will have HDMI 2.1 ports with VRR support on their upcoming GPUs.

It probably can, but WTF does that have to do with my point? I wasn't even replying to you - you've just successfully trolled the discussion with your inability to understand English.
I'm not arguing the same as @ARF here, but using on-paper boost specs for Nvidia-vs-AMD comparisons is quite misleading. GPU Boost 3.0 means that every card exceeds its boost clock spec. Most reviews seem to place real-world boost clock speeds for FE cards in the high 1800s or low 1900s, definitely above 1620MHz. On the other hand, AMD's "boost clock" spec is a peak clock spec, with "game clock" being the expected real-world speed (yes, it's quite dumb - why does the boost spec exist at all?). Beyond that though, I agree (and frankly think it's rather preposterous that anyone would disagree) that Nvidia still has a significant architectural efficiency advantage (call it "IPC" or whatever). They still get more gaming performance per shader core and TFlop, and are on par in perf/W despite being on a much less advanced node. That being said, AMD has (partially thanks to their node advantage, but also due to RDNA's architectural improvements - just look at the VII vs. 5700 XT, both on 7nm) gained on Nvidia in a dramatic way over the past generation, with the 5700 (non-XT) and especially the 5600 outright beating Nvidia's best in perf/W for the first time in recent history. With the promise of dramatically increased perf/W for RDNA 2 too, and with Nvidia moving to a better (though still not quite matched) node, this is shaping up to be a very interesting launch cycle.
 
Implementing HDMI 2.1 features on an HDMI 2.0 GPU with only HDMI 2.0 hardware is by definition a bespoke implementation.
On the GPU, not TV side.
it is still unconfirmed from LG whether 2019 OLEDs will support HDMI 2.1 VRR universally
What, why? Nvidia is using it, and at least the Xbox side of the consoles is using it - why would it not be universal?

If I had to guess, AMD is playing the branding game here. There were some wins in getting Freesync TVs out on the market, and Nvidia - while branding it as G-Sync Compatible - is using a standard approach behind the marketing this time around. HDMI 2.1 VRR support is not that large of a win before actually having HDMI 2.1 outputs, because technically it is a bit of a mess. With HDMI 2.0 you are limited to either a 2160p 40-60Hz range with no LFC, or 1440p 40-120Hz. For a proper 2160p 40-120Hz range, you need a card with an HDMI 2.1 output.
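To put rough numbers on that (a back-of-the-envelope sketch using approximate CTA-861/CVT-RB timings with blanking, 8-bit RGB, no DSC or 4:2:0 tricks - not exact spec figures):

```python
# Rough check of which modes fit into HDMI 2.0 vs. HDMI 2.1 link payloads.
# Timings are approximate totals including blanking.

def required_gbps(h_total, v_total, refresh_hz, bits_per_pixel=24):
    """Data rate needed for a given video timing (uncompressed RGB)."""
    return h_total * v_total * refresh_hz * bits_per_pixel / 1e9

HDMI_2_0_PAYLOAD = 18.0 * 8 / 10    # 18 Gbps TMDS, 8b/10b coding -> ~14.4 Gbps
HDMI_2_1_PAYLOAD = 48.0 * 16 / 18   # 48 Gbps FRL, 16b/18b coding -> ~42.7 Gbps

modes = {
    "2160p60":  (4400, 2250, 60),    # standard 3840x2160 CTA timing
    "2160p120": (4400, 2250, 120),
    "1440p120": (2720, 1481, 120),   # approximate reduced-blanking timing
}

for name, (h, v, hz) in modes.items():
    need = required_gbps(h, v, hz)
    print(f"{name}: ~{need:.1f} Gbps needed | "
          f"fits HDMI 2.0: {need <= HDMI_2_0_PAYLOAD} | "
          f"fits HDMI 2.1: {need <= HDMI_2_1_PAYLOAD}")
```

2160p60 just barely squeezes into HDMI 2.0's ~14.4 Gbps of payload, while 2160p120 needs roughly twice that - hence proper HDMI 2.1 outputs being the real prerequisite for the full 40-120Hz range at 4K.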
 
Even at 4K it's useless. Only one or two games can consume 11GB+ of VRAM. Most modern games are around 4-8GB according to TechPowerUp's reviews.
Besides that, no AMD GPU can reach 4K 60 in modern games.
So RDNA 2's large VRAM is nonsense unless the people who buy it are creators.
Do you have a 4K screen to validate that? I also have a very extensive game library. As it stands, there are only 2 cards that give you 60+ FPS (2080 Ti, Radeon VII) from what I have read in reviews. Even 4GB is pushing it with modern AAA games. I am pretty confident that CP2077 will use a ton of VRAM at 4K. I know that games have to support a range of hardware, but 30 FPS is considered playable.
 
On the GPU, not TV side.
What, why? Nvidia is using it, and at least the Xbox side of the consoles is using it - why would it not be universal?

If I had to guess, AMD is playing the branding game here. There were some wins in getting Freesync TVs out on the market, and Nvidia - while branding it as G-Sync Compatible - is using a standard approach behind the marketing this time around. HDMI 2.1 VRR support is not that large of a win before actually having HDMI 2.1 outputs, because technically it is a bit of a mess. With HDMI 2.0 you are limited to either a 2160p 40-60Hz range with no LFC, or 1440p 40-120Hz. For a proper 2160p 40-120Hz range, you need a card with an HDMI 2.1 output.
For an HDMI 2.1 TV to recognize and enable HDMI 2.1 features when connected to an HDMI 2.0 source, it must by definition have some sort of non-standard implementation. If not, it would refuse to enable VRR, as it would (correctly, according to the standard) identify the source device as incompatible with HDMI 2.1 VRR. Thus this is not just a custom solution on Nvidia's GPUs, but on both pieces of hardware. If Nvidia's GPUs were masquerading as HDMI 2.1 devices, this would work on any HDMI 2.1 TV, not just these LG ones. As for branding, at this point (and going forward, most likely) that's the only real difference, as both Freesync and G-Sync are now essentially identical implementations of the same standards (VESA Adaptive-Sync and HDMI 2.1 VRR). Sure, there are proprietary versions of each, but those will just grow more rare as the standardized ones grow more common.
 
Do you have a 4K screen to validate that? I also have a very extensive game library. As it stands, there are only 2 cards that give you 60+ FPS (2080 Ti, Radeon VII) from what I have read in reviews. Even 4GB is pushing it with modern AAA games. I am pretty confident that CP2077 will use a ton of VRAM at 4K. I know that games have to support a range of hardware, but 30 FPS is considered playable.
First, I own a 75 inch Sony 4K TV.
Second, there is no evidence that more than 3 games can consume 11.01GB or more of VRAM. At least TechPowerUp's reviews don't show it.
Third, judging by the VII & 5700 XT's POOR 4K performance, I don't think RDNA 2 will be able to handle 4K gaming either.
So RDNA 2's VRAM appeal is still nonsense. It's definitely for mining etc., but not for gaming.
AMD should not add cost for larger but useless VRAM; they should improve their GPU core performance instead.
 
Even at 4K it's useless. Only one or two games can consume 11GB+ of VRAM. Most modern games are around 4-8GB according to TechPowerUp's reviews.
Besides that, no AMD GPU can reach 4K 60 in modern games.
So RDNA 2's large VRAM is nonsense unless the people who buy it are creators.
Who told you that shit.

I've been gaming at 4K/75Hz for two years, and 90% of games can easily be made to run at 4K with a Vega 64, so for only about 10% of games do I absolutely have to drop resolution.

8GB is the new minimum (well, the old minimum) for me; more would hopefully make the same GPU last a year longer with probable compromises, something most gamers in reality accept - like 95% of them. At least.
 
Without knowing the internal details of a game engine implementation, no one can really know how much VRAM is required. 99% of the time, the games are just doing greedy allocation, where they load as many assets (not just textures) as possible into VRAM in order to cache them. Where a game doesn't do that but can properly stream assets into VRAM on-demand, it likely needs way less (perhaps only 2-4GB) to render the current scene.
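As a purely conceptual illustration of that difference (hypothetical asset sizes and budgets, not any real engine's code), the two strategies look roughly like this:

```python
# Conceptual sketch: greedy VRAM caching vs. on-demand streaming with an LRU budget.
from collections import OrderedDict

ASSETS_MB = {f"asset_{i:03d}": 32 for i in range(512)}   # ~16 GB of game assets

def greedy_resident_mb(vram_budget_mb):
    """Greedy caching: keep loading assets until VRAM is full."""
    used = 0
    for size in ASSETS_MB.values():
        if used + size > vram_budget_mb:
            break
        used += size
    return used                       # reported "usage" ends up near the budget

class StreamingCache:
    """On-demand streaming: keep only recently used assets, evict LRU over budget."""
    def __init__(self, vram_budget_mb):
        self.budget = vram_budget_mb
        self.cache = OrderedDict()    # name -> size, ordered by last use
        self.used = 0

    def request(self, name):
        if name in self.cache:
            self.cache.move_to_end(name)          # cache hit: mark as recently used
            return
        size = ASSETS_MB[name]
        while self.used + size > self.budget:     # evict least recently used
            _, evicted_size = self.cache.popitem(last=False)
            self.used -= evicted_size
        self.cache[name] = size
        self.used += size

# A single scene only touches a small working set, so streaming needs far less VRAM.
scene = [f"asset_{i:03d}" for i in range(96)]     # ~3 GB working set
cache = StreamingCache(vram_budget_mb=4096)
for asset in scene:
    cache.request(asset)

print("greedy 'usage' on an 11 GB card   :", greedy_resident_mb(11264), "MB")
print("streaming usage for the same scene:", cache.used, "MB")
```

The greedy approach will always look like it "needs" whatever the card has, while the streaming approach only needs the current working set plus some headroom.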
 
Without knowing the internal details of a game engine implementation, no one can really know how much VRAM is required. 99% of the time, the games are just doing greedy allocation, where they load as many assets (not just textures) as possible into VRAM in order to cache them. Where a game doesn't do that but can properly stream assets into VRAM on-demand, it likely needs way less (perhaps only 2-4GB) to render the current scene.

Next-gen consoles going with 16GB while targeting 4K says a lot.
Sony's engineer pointed out that the amount of RAM required and the game installation size drop drastically when a fast SSD is available, since there is no need to pre-load or keep multiple copies of the same data around just so that it loads faster.

For an HDMI 2.1 TV to recognize and enable HDMI 2.1 features when connected to an HDMI 2.0 source, it must by definition have some sort of non-standard implementation.
I've briefly touched on HDMI-CEC (when my sat receiver didn't want to play along with... wait a sec, an LG TV).
My observations:

1) HDMI implementations are a clusterf*ck of quirks; adding yet another one is no big deal
2) "Which vendor just connected?" is part of the standard handshake.
3) Vendor specific codes are part of the standard. ("Oh, you are LG too? Let's speak Klingon!!!!")
 
12GB and 16GB seem more than enough to me; any higher than that is just adding expense. Depending on how Navi 2 is, I might make the jump into a higher tier. I'll have to sell my Sapphire 5700 XT Nitro though, which I've only had since November.
I'm keeping mine until the dust totally settles or until I can't fight the "new shiny" urge. My 1440p performance is more than adequate right now. I'm sure Sapphire will suck me in as they have the last 4 gens...
 
First, I own a 75 inch Sony 4K TV.
Second, there is no evidence that more than 3 games can consume 11.01GB or more of VRAM. At least TechPowerUp's reviews don't show it.
Third, judging by the VII & 5700 XT's POOR 4K performance, I don't think RDNA 2 will be able to handle 4K gaming either.
So RDNA 2's VRAM appeal is still nonsense. It's definitely for mining etc., but not for gaming.
AMD should not add cost for larger but useless VRAM; they should improve their GPU core performance instead.
Mining? Mining is dead. And AMD has already increased their GPUs' gaming performance quite significantly - hence the 40CU RDNA 5700 XT nearly matching the 60CU Vega-based Radeon VII (sure, the 5700 XT clocks higher, but not that much - it's still more than 4 TFLOPS behind in raw compute power, and that's calculated using the unrealistically high "boost clock" spec for the 5700 XT, not the realistic "game clock" spec). There's definitely an argument to be made about huge frame buffers being silly, but in the non-controlled system that is the world of PCs they are somewhat necessary, as developers will always need ways of mitigating or compensating for potential bottlenecks such as disk I/O, and pre-loading assets aggressively is a widely used tactic for this. And as graphical fidelity increases and texture quality goes up, we'll definitely see VRAM usage keep climbing.
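For reference, a quick calculation with the published specs (FP32 TFLOPS = 2 ops per clock per shader; actual sustained clocks will obviously differ from the paper figures):

```python
# FP32 throughput from shader count and clock: 2 FMA ops per clock per shader.
def tflops(shaders, clock_mhz):
    return 2 * shaders * clock_mhz * 1e6 / 1e12

radeon_vii    = tflops(3840, 1800)   # 60 CU, ~1800 MHz peak      -> ~13.8 TFLOPS
rx5700xt_peak = tflops(2560, 1905)   # 40 CU, "boost clock" spec  -> ~9.75 TFLOPS
rx5700xt_game = tflops(2560, 1755)   # 40 CU, "game clock" spec   -> ~8.99 TFLOPS

print(f"Radeon VII: {radeon_vii:.1f} TF | 5700 XT (boost): {rx5700xt_peak:.2f} TF | "
      f"gap: {radeon_vii - rx5700xt_peak:.1f} TF")
```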
Next-gen consoles going with 16GB while targeting 4K says a lot.
Sony's engineer pointed out that the amount of RAM required and the game installation size drop drastically when a fast SSD is available, since there is no need to pre-load or keep multiple copies of the same data around just so that it loads faster.
Yep, that's exactly it. PC game makers can't afford to alienate the heaps of users still using HDDs (or even SATA SSDs to some extent), and there aren't systems in place to determine which install you get depending on the nature of your storage device etc. (not to mention that making 2-3 different distributions of your game depending on the storage medium would be quite work intensive), so they need to develop for the lowest common denominator. With the upcoming consoles going all NVMe, they know how fast they can get data off the drive, and thus never have to do so prematurely. That will drive down VRAM usage dramatically.

Of course VRAM usage in games is an interesting topic in and of itself given how much it can vary due to different factors. I've seen examples of the same game at the same resolution and settings and similar performance use several GB more VRAM on GPUs from one vendor compared to the other, for example (IIRC it was a case where Nvidia GPUs hit near 6GB while AMD GPUs stayed around 4GB). Whether that is due to the driver, something weird in the game code, or something else entirely is beyond me, but it's a good illustration of how this can't be discussed as a straightforward "game A at resolution X will ALWAYS need *GB of VRAM" situation.
I've briefly touched on HDMI-CEC (when my sat receiver didn't want to play along with... wait a sec, an LG TV).
My observations:

1) HDMI implementations are a clusterf*ck of quirks; adding yet another one is no big deal
2) "Which vendor just connected?" is part of the standard handshake.
3) Vendor specific codes are part of the standard. ("Oh, you are LG too? Let's speak Klingon!!!!")
That's true, HDMI is indeed a mess - but my impression is that HDMI 2.1 is supposed to try to alleviate this by integrating a lot of optional things into the standard, rather than having vendors make custom extensions. Then again, can you even call something a standard if it isn't universally applicable by some reasonable definition? I would say not. Unless, of course, you're a fan of XKCD. But nonetheless, even if "which vendor just connected" is part of the handshake, "if vendor=X, treat HDMI 2.0 device as HDMI 2.1 device" goes quite a ways beyond this.
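As a side note on the "which vendor just connected" point: every display already announces its maker through the three-letter PnP ID in its EDID, so vendor-specific behavior is trivial to trigger. A minimal decoding sketch (the example bytes are hypothetical, using the "GSM" ID commonly reported by LG Electronics displays):

```python
# Decode the 3-letter PnP manufacturer ID stored in EDID bytes 8-9.
def decode_pnp_id(edid_bytes):
    word = (edid_bytes[8] << 8) | edid_bytes[9]      # big-endian, bit 15 reserved
    letters = [(word >> shift) & 0x1F for shift in (10, 5, 0)]
    return "".join(chr(ord('A') + v - 1) for v in letters)

# First 10 bytes of a hypothetical EDID: 8-byte header, then the manufacturer ID.
edid = bytes([0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00, 0x1E, 0x6D])
print(decode_pnp_id(edid))   # -> "GSM"
```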
 
Next-gen consoles going with 16GB while targeting 4K says a lot.
Keep in mind that this is total RAM. Compared to a PC, it is both RAM and VRAM in one. In the current gen, both consoles have 8GB, but initially the PS4 had 3.5GB available for games and the Xbox One had 5GB. Later, the available RAM for games was slightly increased on both.

Pretty sure the new generation will reserve 3-4GB for system use, if not more - operating system, caches and stuff. The 12-13GB that remain include both RAM and VRAM for a game to use. There are some savings in not having to load textures to RAM and then transfer them to VRAM, but that does not make too big of a difference.
 
Keep in mind that this is total RAM. Compared to a PC, it is both RAM and VRAM in one. In the current gen, both consoles have 8GB, but initially the PS4 had 3.5GB available for games and the Xbox One had 5GB. Later, the available RAM for games was slightly increased on both.

Pretty sure the new generation will reserve 3-4GB for system use, if not more - operating system, caches and stuff. The 12-13GB that remain include both RAM and VRAM for a game to use. There are some savings in not having to load textures to RAM and then transfer them to VRAM, but that does not make too big of a difference.
2.5GB for the XSX. 13.5GB is available for software, of which 10GB is of the "GPU optimal" kind thanks to the XSX's weird dual-bandwidth RAM layout.
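For anyone who wants the arithmetic behind those figures, the published Series X configuration works out like this (ten 14 Gbps GDDR6 chips on a 320-bit bus: six 2 GB chips and four 1 GB chips):

```python
# Xbox Series X memory pools from the published configuration.
GBPS_PER_PIN = 14                     # 14 Gbps GDDR6
full_bus_bits    = 10 * 32            # all ten chips            -> 320-bit
partial_bus_bits = 6 * 32             # only the six 2 GB chips  -> 192-bit

gpu_optimal_gb = 10 * 1               # first 1 GB of every chip
slow_pool_gb   = 6 * 1                # second 1 GB of the six larger chips
os_reserved_gb = 2.5                  # taken out of the slower pool

print("GPU-optimal pool:", gpu_optimal_gb, "GB @",
      full_bus_bits * GBPS_PER_PIN / 8, "GB/s")               # 10 GB @ 560 GB/s
print("Standard pool   :", slow_pool_gb - os_reserved_gb, "GB @",
      partial_bus_bits * GBPS_PER_PIN / 8, "GB/s")            # 3.5 GB @ 336 GB/s
print("Total for games :", gpu_optimal_gb + slow_pool_gb - os_reserved_gb, "GB")
```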
 
2.5GB for the XSX. 13.5GB is available for software, of which 10GB is of the "GPU optimal" kind thanks to the XSX's weird dual-bandwidth RAM layout.
This is pretty likely what both the next-gen consoles' memory availability will look like.
 
Cheaper that way. Every buck counts when you sell stuff in tens of millions.
Sure, that makes a bit of sense. Probably also board space (the XSX motherboard is tiny!) and a likely correct judgement that non-graphics memory doesn't actually need all of that bandwidth to begin with.
 
[QUOTE = "FreedomOfSpeech, post: 4326708, membro: 200814"]
Questa volta volevo passare ad AMD per la prima volta nella mia vita. Lo scorso Natale ho acquistato un LG C9 per via del suo 4K @ 120Hz @VRR (GSync / Freesync). Nvidias RTX Generation può eseguire VRR con HDMI 2.0 come G-Sync Combatible. Ieri ho letto che gli OLED LG del 2019 non funzioneranno con Big Navi @Freesync. Ciò significa che devo acquistare di nuovo Nvidia ...
[/ CITAZIONE]
non ha mai avuto amd? lascia stare sono impegnative in ogni senso, non che non siano potenti.....ma..
 
[QUOTE = "FreedomOfSpeech, post: 4326708, membro: 200814"]
Questa volta volevo passare ad AMD per la prima volta nella mia vita. Lo scorso Natale ho acquistato un LG C9 per via del suo 4K @ 120Hz @VRR (GSync / Freesync). Nvidias RTX Generation può eseguire VRR con HDMI 2.0 come G-Sync Combatible. Ieri ho letto che gli OLED LG del 2019 non funzioneranno con Big Navi @Freesync. Ciò significa che devo acquistare di nuovo Nvidia ...
[/ CITAZIONE]
non ha mai avuto amd? lascia stare sono impegnative in ogni senso, non che non siano potenti.....ma..


Oh look, burnt fuse II.

By the way, why doesn't the chiplet approach work with GPUs?
 
If past trends are any indication, there WILL be 12 and/or 16GB variants, and possibly even 24GB for the very top end....
remember the infamous "nobody will ever need more than 16k of RAM" quote from way back, and look at where we are nowadays....

And since the current crop of 11GB cards is really, really expensive compared to the 8/6GB models, if you want one of the above, perhaps you should shore up your finances, then put your banker and HIS gold cards on retainer, hehehe ..:roll:..:eek:...:fear:..

Is this the quote?


Someone else might have said it with 16k of RAM, I don't know
 
By the way, why doesn't the chiplet approach work with GPUs?
It works perfectly fine for stuff that isn't time-sensitive, like HPC, AI and other compute workloads. When it comes to graphics, the problem is orchestrating all the work in time and having all the necessary information in the right place - VRAM, caches, across IF/NVLink/whichever bus. Both AMD and Nvidia have been working on this for years, but the current result seems to be that even Crossfire and SLI are being wound down.
 
Is this the quote?


Someone else might have said it with 16k of RAM, I don't know

something like that anyways.... my memory is not what it used to be, but oh well :)

but the current result seems to be that even Crossfire and SLI are being wound down

This is my understanding also, and as far as I am concerned, good riddance.... it was never really worth the hassle, since the drivers never really worked the way they were supposed to anyway, although the GPUs themselves seemed to be relatively robust for that era...
 
If you think a $700 card on par with a 2080 is mid-range, then you're going to have to wait until the aliens visit to get high end.
All GPU prices are out of whack. Low-end, mid-range, high-end and enthusiast prices alike - they are ALL overpriced!
 
It works perfectly fine for stuff that isn't time-sensitive, like HPC, AI and other compute workloads. When it comes to graphics, the problem is orchestrating all the work in time and having all the necessary information in the right place - VRAM, caches, across IF/NVLink/whichever bus. Both AMD and Nvidia have been working on this for years, but the current result seems to be that even Crossfire and SLI are being wound down.
Chiplet-based GPUs aren't supposed to be SLI/CF though, the point is precisely to avoid that can of worms. Ideally the chiplets would seamlessly add into a sort of virtual monolithic GPU, with the system not knowing there are several chips at all. As you say there have been persistent rumors about this for a while, and we know both major GPU vendors are working on it (Intel too). If performance is supposed to keep increasing as it has without costs going even more crazy than they already have, MCM GPUs will soon be a necessity - >500mm² dice on 7nm EUV or 5nm aren't going to be economically viable for mass market consumer products in the long run, so going MCM is the only logical solution. 3D stacking might help with latencies, by stacking multiple GPU compute chiplets on top of an I/O + interconnect die or active interposer. But of course that comes with its own added cost, and massive complexity. Hopefully it will all come together when it truly becomes a necessity.
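To put some purely illustrative numbers on the die-size economics (a simple Poisson yield model with an assumed defect density - not real foundry data):

```python
import math

D0 = 0.1   # assumed defect density in defects per cm^2 (illustrative only)

def poisson_yield(area_mm2, d0=D0):
    """Fraction of dies with zero defects under a simple Poisson model."""
    return math.exp(-(area_mm2 / 100.0) * d0)    # area converted to cm^2

def silicon_per_good_gpu(die_mm2, dies_needed=1):
    """mm^2 of wafer spent per working GPU, ignoring edge and packaging losses."""
    return dies_needed * die_mm2 / poisson_yield(die_mm2)

mono = silicon_per_good_gpu(500)                   # one big 500 mm^2 die
mcm  = silicon_per_good_gpu(130, dies_needed=4)    # four 130 mm^2 chiplets, tested individually

print(f"monolithic: ~{mono:.0f} mm^2 of silicon per good GPU")
print(f"4x chiplet: ~{mcm:.0f} mm^2 of silicon per good GPU (plus packaging overhead)")
```

Even in this crude model the chiplet route spends noticeably less silicon per working GPU; the open question is whether packaging, interconnect power and latency eat that advantage.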
 
SLI/Crossfire are the technologies that have shown the most promise. Splitting the GPU functionally onto different dies has not yet been very successful. Moving the data and maintaining coherency of data are a big problem, especially with the latency in play. We will see if some contemporary packaging stuff like interposers or EMIB will make inter-die communication more viable, but that seems to be the sticking point so far. Even with data moving over silicon, going from one die to another might come with a surprisingly big efficiency hit.
 
SLI/Crossfire are the technologies that have shown the most promise. Splitting the GPU functionally onto different dies has not yet been very successful. Moving the data and maintaining coherency of data are a big problem, especially with the latency in play. We will see if some contemporary packaging stuff like interposers or EMIB will make inter-die communication more viable, but that seems to be the sticking point so far. Even with data moving over silicon, going from one die to another might come with a surprisingly big efficiency hit.
There are definitely challenges that need to be overcome, and they might even require a fundamental rearrangement of the current "massive grid of compute elements" GPU design paradigm. Advanced scheduling will also be much more important than it is today. But saying SLI and CF have "shown promise" is... a dubious claim. Both of these are dead-end technologies due to their reliance on developer support, difficulty of optimization, and poor scaling (~70% scaling as a best case with 30-40% being normal is nowhere near worth it). A lot of the shortcomings of multi-GPU can be overcome "simply" by drastically reducing the latency and drastically increasing the bandwidth between GPUs (NVLink beats SLI by quite a lot), but tighter integration will still be needed to remove the requirement for active developer support. Still, this is more realistic and has more of a path towards further scaling than SLI/CF - after all, that just goes up to two GPUs, with scaling dropping off dramatically beyond that, which won't help GPU makers shrink die sizes by all that much. We need a new paradigm of GPU design to overcome this challenge, not just a shrunk-down version of what we already have.
 
SLI/CF really do not rely on developers; IHVs do most of the job there. Unfortunately, current rendering technologies do not lend themselves well to the idea. Scaling really was not (and is not) that bad in GPU-limited situations.

I would say DX12 multi-GPU was not a bad idea when it comes to completely relying on developers but that seems to be a no-go as a whole.

You are right about the required paradigm shift, but I'm not sure exactly what that will be. Right now, hardware vendors do not seem to have very good ideas for that either :(
 