
AMD RDNA 2 "Big Navi" to Feature 12 GB and 16 GB VRAM Configurations

Joined
May 2, 2017
Messages
7,762 (2.81/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
LG OLED TVs do not have a bespoke G-Sync implementation. They have HDMI 2.1 and its VRR, which is a pretty standard thing (and not compatible with bespoke FS-over-HDMI). Although no cards have HDMI 2.1 ports, Nvidia added support for some HDMI 2.1 features - in this context namely VRR - to some of their cards with HDMI 2.0. Nothing really prevents AMD from doing the same. FS-over-HDMI will not be added, but AMD can add VRR support in the same way Nvidia did. And it will probably be branded as FreeSync something or other.

Not confirmed but I am willing to bet both next-gen GPUs will have HDMI 2.1 ports and VRR support.
Implementing HDMI 2.1 features on an HDMI 2.0 GPU with only HDMI 2.0 hardware is by definition a bespoke implementation. It bypasses and supersedes the standard, and is thus made especially for that (combination of) part(s) - in other words, bespoke, custom-made. Beyond that, nothing you said contradicts anything I said, and to reiterate: it is still unconfirmed from LG whether 2019 OLEDs will support HDMI 2.1 VRR universally - which would after all make sense to do given that both next-gen consoles support it, as well as upcoming GPUs. The absence of unequivocal confirmation might mean nothing at all, or it might mean that LG didn't bother to implement this part of the standard properly (which isn't unlikely given how early it arrived). And yes, I am also willing to bet both camps will have HDMI 2.1 ports with VRR support on their upcoming GPUs.

It probably can, but WTF does that have to do with my point? I wasn't even replying to you - you've just successfully trolled the discussion with your inability to understand English.
I'm not arguing the same as @ARF here, but using on-paper boost specs for Nvidia-vs-AMD comparisons is quite misleading. GPU Boost 3.0 means that every card exceeds its boost clock spec. Most reviews seem to place real-world boost clock speeds for FE cards in the high 1800s or low 1900s, definitely above 1620MHz. On the other hand, AMD's "boost clock" spec is a peak clock spec, with "game clock" being the expected real-world speed (yes, it's quite dumb - why does the boost spec exist at all?). Beyond that, though, I agree (and frankly think it's rather preposterous that anyone would disagree) that Nvidia still has a significant architectural efficiency advantage (call it "IPC" or whatever). They still get more gaming performance per shader core and TFLOP, and are on par in perf/W despite being on a much less advanced node. That being said, AMD has gained on Nvidia in a dramatic way over the past generation (partially thanks to their node advantage, but also due to RDNA's architectural improvements - just look at the VII vs. the 5700 XT, both on 7nm), with the 5700 (non-XT) and especially the 5600 outright beating Nvidia's best in perf/W for the first time in recent history. The promise of dramatically increased perf/W for RDNA 2, while Nvidia moves to a better (though still not quite matched) node, makes this a very interesting launch cycle.
 
Joined
Feb 3, 2017
Messages
3,747 (1.32/day)
Processor Ryzen 7800X3D
Motherboard ROG STRIX B650E-F GAMING WIFI
Memory 2x16GB G.Skill Flare X5 DDR5-6000 CL36 (F5-6000J3636F16GX2-FX5)
Video Card(s) INNO3D GeForce RTX™ 4070 Ti SUPER TWIN X2
Storage 2TB Samsung 980 PRO, 4TB WD Black SN850X
Display(s) 42" LG C2 OLED, 27" ASUS PG279Q
Case Thermaltake Core P5
Power Supply Fractal Design Ion+ Platinum 760W
Mouse Corsair Dark Core RGB Pro SE
Keyboard Corsair K100 RGB
VR HMD HTC Vive Cosmos
Implementing HDMI 2.1 features on an HDMI 2.0 GPU with only HDMI 2.0 hardware is by definition a bespoke implementation.
On the GPU, not TV side.
it is still unconfirmed from LG whether 2019 OLEDs will support HDMI 2.1 VRR universally
What, why? Nvidia is using it, and at least the Xbox side of the console market is using it - why would it not be universal?

If I had to guess, AMD is playing the branding game here. There were some wins in getting FreeSync TVs out on the market, and Nvidia - while branding it as G-Sync Compatible - is using a standard approach behind the marketing this time around. HDMI 2.1 VRR support is not that large of a win before actually having HDMI 2.1 outputs, because technically it is a bit of a mess. With HDMI 2.0 you are limited to a 2160p 40-60Hz range with no LFC, or 1440p 40-120Hz. For a proper 2160p 40-120Hz range, you need a card with an HDMI 2.1 output.
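To put rough numbers on that (a quick sketch, assuming roughly 10% blanking overhead and the approximate effective payload rates of the two link types; exact figures depend on the timing and signalling mode):

```python
# Rough check of why 2160p120 needs HDMI 2.1: uncompressed video bandwidth
# vs. approximate effective link rates. Blanking overhead (~10%) and exact
# encoding efficiency vary by mode, so treat these as ballpark figures.

HDMI_2_0_GBPS = 14.4   # 18 Gbit/s TMDS, 8b/10b encoding -> ~14.4 Gbit/s payload
HDMI_2_1_GBPS = 42.6   # 48 Gbit/s FRL, 16b/18b encoding -> ~42.6 Gbit/s payload

def needed_gbps(width, height, hz, bits_per_pixel=24, blanking=1.10):
    """Approximate uncompressed bandwidth incl. ~10% blanking overhead."""
    return width * height * hz * bits_per_pixel * blanking / 1e9

modes = {
    "2160p60  8-bit RGB": (3840, 2160, 60),
    "2160p120 8-bit RGB": (3840, 2160, 120),
    "1440p120 8-bit RGB": (2560, 1440, 120),
}

for name, mode in modes.items():
    gbps = needed_gbps(*mode)
    if gbps <= HDMI_2_0_GBPS:
        link = "fits HDMI 2.0"
    elif gbps <= HDMI_2_1_GBPS:
        link = "needs HDMI 2.1"
    else:
        link = "needs DSC"
    print(f"{name}: ~{gbps:.1f} Gbit/s -> {link}")
```

Which is exactly the split described above: 2160p60 and 1440p120 squeeze into HDMI 2.0, while 2160p120 does not.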
 
Joined
Jun 2, 2017
Messages
9,127 (3.34/day)
System Name Best AMD Computer
Processor AMD 7900X3D
Motherboard Asus X670E E Strix
Cooling In Win SR36
Memory GSKILL DDR5 32GB 5200 30
Video Card(s) Sapphire Pulse 7900XT (Watercooled)
Storage Corsair MP 700, Seagate 530 2Tb, Adata SX8200 2TBx2, Kingston 2 TBx2, Micron 8 TB, WD AN 1500
Display(s) GIGABYTE FV43U
Case Corsair 7000D Airflow
Audio Device(s) Corsair Void Pro, Logitch Z523 5.1
Power Supply Deepcool 1000M
Mouse Logitech g7 gaming mouse
Keyboard Logitech G510
Software Windows 11 Pro 64 Steam. GOG, Uplay, Origin
Benchmark Scores Firestrike: 46183 Time Spy: 25121
Even at 4K it's useless. Only one or two games can consume 11GB+ of VRAM; most modern games use around 4-8GB according to TechPowerUp's reviews.
Besides that, no AMD GPU can reach 4K 60 in modern games.
So RDNA 2's large VRAM is nonsense unless the people buying it are creators.
Do you have a 4K screen to validate that? I also have a very extensive game library. As it stands there are only two cards that give you 60+ FPS (2080 Ti, Radeon VII) from what I have read in reviews. Even 4GB is pushing it with modern AAA games. I am pretty confident that CP2077 will use a ton of VRAM at 4K. I know that games have to support a range of hardware, but 30 FPS is considered playable.
 
Joined
May 2, 2017
Messages
7,762 (2.81/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
On the GPU, not TV side.
What, why? Nvidia is using it, and at least the Xbox side of the console market is using it - why would it not be universal?

If I had to guess, AMD is playing the branding game here. There were some wins in getting FreeSync TVs out on the market, and Nvidia - while branding it as G-Sync Compatible - is using a standard approach behind the marketing this time around. HDMI 2.1 VRR support is not that large of a win before actually having HDMI 2.1 outputs, because technically it is a bit of a mess. With HDMI 2.0 you are limited to a 2160p 40-60Hz range with no LFC, or 1440p 40-120Hz. For a proper 2160p 40-120Hz range, you need a card with an HDMI 2.1 output.
For an HDMI 2.1 TV to recognize and enable HDMI 2.1 features when connected to an HDMI 2.0 source, it must by definition have some sort of non-standard implementation. If not, it would refuse to enable VRR, as it would (correctly, according to the standard) identify the source device as incompatible with HDMI 2.1 VRR. Thus this is not just a custom solution on Nvidia's GPUs, but on both pieces of hardware. If Nvidia's GPUs were masquerading as HDMI 2.1 devices, this would work on any HDMI 2.1 TV, not just these LG ones. As for branding, at this point (and going forward, most likely) that's the only real difference, as both FreeSync and G-Sync are now essentially identical implementations of the same standards (VESA Adaptive-Sync and HDMI 2.1 VRR). Sure, there are proprietary versions of each, but those will only grow rarer as the standardized ones grow more common.
 
Joined
Jan 15, 2018
Messages
55 (0.02/day)
Do you have a 4K screen to validate that? I also have a very extensive game library. As it stands there are only two cards that give you 60+ FPS (2080 Ti, Radeon VII) from what I have read in reviews. Even 4GB is pushing it with modern AAA games. I am pretty confident that CP2077 will use a ton of VRAM at 4K. I know that games have to support a range of hardware, but 30 FPS is considered playable.
First, I own a 75-inch Sony 4K TV.
Second, there is no evidence that more than three games can consume over 11GB of VRAM. At least TechPowerUp's reviews don't show it.
Third, given the VII's and 5700 XT's POOR 4K performance, I don't think RDNA 2 will be able to handle 4K gaming either.
So RDNA 2's VRAM appeal is still nonsense. It's for mining etc., but not for gaming.
AMD should not drive up cost with larger but useless VRAM; they should improve their GPU core performance instead.
 
Joined
Mar 10, 2010
Messages
11,878 (2.21/day)
Location
Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R5 5900X/ Intel 8750H
Motherboard Crosshair hero8 impact/Asus
Cooling 360EK extreme rad+ 360$EK slim all push, cpu ek suprim Gpu full cover all EK
Memory Corsair Vengeance Rgb pro 3600cas14 16Gb in four sticks./16Gb/16GB
Video Card(s) Powercolour RX7900XT Reference/Rtx 2060
Storage Silicon power 2TB nvme/8Tb external/1Tb samsung Evo nvme 2Tb sata ssd/1Tb nvme
Display(s) Samsung UAE28"850R 4k freesync.dell shiter
Case Lianli 011 dynamic/strix scar2
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi/Asus stock
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Aimo 120
VR HMD Oculus rift
Software Win 10 Pro
Benchmark Scores 8726 vega 3dmark timespy/ laptop Timespy 6506
Even at 4K it's useless. Only one or two games can consume 11GB+ of VRAM; most modern games use around 4-8GB according to TechPowerUp's reviews.
Besides that, no AMD GPU can reach 4K 60 in modern games.
So RDNA 2's large VRAM is nonsense unless the people buying it are creators.
Who told you that shit.

I've been gaming at 4K/75Hz for two years, and 90% of games can easily be made to run at 4K with a Vega 64, so only for about 10% of games do I absolutely have to drop resolution.

8GB is the new minimum/old minimum for me; more would hopefully make the same GPU last a year longer with probable compromises - something most people in reality accept, like 95% of gamers. At least.
 
Joined
Feb 1, 2013
Messages
1,265 (0.29/day)
System Name Gentoo64 /w Cold Coffee
Processor 9900K 5.2GHz @1.312v
Motherboard MXI APEX
Cooling Raystorm Pro + 1260mm Super Nova
Memory 2x16GB TridentZ 4000-14-14-28-2T @1.6v
Video Card(s) RTX 4090 LiquidX Barrow 3015MHz @1.1v
Storage 660P 1TB, 860 QVO 2TB
Display(s) LG C1 + Predator XB1 QHD
Case Open Benchtable V2
Audio Device(s) SB X-Fi
Power Supply MSI A1000G
Mouse G502
Keyboard G815
Software Gentoo/Windows 10
Benchmark Scores Always only ever very fast
Without knowing the internal details of a game engine implementation, no one can really know how much VRAM is required. 99% of the time, the games are just doing greedy allocation, where they load as many assets (not just textures) as possible into VRAM in order to cache them. Where a game doesn't do that but can properly stream assets into VRAM on-demand, it likely needs way less (perhaps only 2-4GB) to render the current scene.
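To illustrate the difference, here's a toy sketch (asset names and sizes are made up for the example, not how any particular engine actually does it): a greedy loader fills whatever VRAM budget it's given, while an on-demand streamer only keeps what the current scene references.

```python
# Toy illustration of greedy pre-caching vs. on-demand streaming of assets.
# Sizes in MB; the asset list and scene contents are invented for the example.

VRAM_BUDGET_MB = 8192

ASSETS = {f"texture_{i}": 64 for i in range(300)}   # ~19 GB of potential assets
SCENE = [f"texture_{i}" for i in range(40)]         # what the current frame needs

def greedy_load(assets, budget):
    """Cache as much as fits, whether or not it's needed right now."""
    used = 0
    for name, size in assets.items():
        if used + size > budget:
            break
        used += size
    return used

def streaming_load(scene, assets):
    """Keep only what the current scene actually references."""
    return sum(assets[name] for name in scene)

print(f"greedy:    {greedy_load(ASSETS, VRAM_BUDGET_MB)} MB resident")  # fills the budget
print(f"streaming: {streaming_load(SCENE, ASSETS)} MB resident")        # a few GB at most
```

The greedy path reports whatever budget you give it as "used", which is why VRAM usage numbers in monitoring tools say so little about what a game actually needs.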
 
Joined
Jul 9, 2015
Messages
3,413 (1.00/day)
System Name M3401 notebook
Processor 5600H
Motherboard NA
Memory 16GB
Video Card(s) 3050
Storage 500GB SSD
Display(s) 14" OLED screen of the laptop
Software Windows 10
Benchmark Scores 3050 scores good 15-20% lower than average, despite ASUS's claims that it has uber cooling.
Without knowing the internal details of a game engine implementation, no one can really know how much VRAM is required. 99% of the time, the games are just doing greedy allocation, where they load as many assets (not just textures) as possible into VRAM in order to cache them. Where a game doesn't do that but can properly stream assets into VRAM on-demand, it likely needs way less (perhaps only 2-4GB) to render the current scene.

Next-gen consoles going with 16GB while targeting 4K says a lot.
Sony's engineers explained that both the amount of RAM required and game installation size are drastically reduced if a faster SSD is available, since there is no need to pre-load assets or keep multiple copies of the same data around just so it loads faster.

For an HDMI 2.1 TV to recognize and enable HDMI 2.1 features when connected to an HDMI 2.0 source, it must by definition have some sort of non-standard implementation.
I've briefly touched on HDMI-CEC (when my sat receiver didn't want to play along with... wait a sec, LG TV).
My observations:

1) HDMI implementations are a clusterf*ck of quirks; adding yet another one is no big deal
2) "Which vendor just connected?" is part of the standard handshake.
3) Vendor specific codes are part of the standard. ("Oh, you are LG too? Let's speak Klingon!!!!")
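For what it's worth, identifying the sink's vendor really is baked into the standard handshake: the three-letter manufacturer ID sits in bytes 8-9 of the EDID base block. A minimal sketch of decoding it (the example bytes are illustrative, not a captured EDID; the richer vendor-specific data blocks live in the CTA extension and aren't shown):

```python
# Minimal sketch: decode the 3-letter manufacturer ID from an EDID base block.
# Bytes 8-9 hold three 5-bit letters ('A' = 1), big-endian, MSB reserved.

def edid_manufacturer(edid: bytes) -> str:
    word = (edid[8] << 8) | edid[9]
    letters = [(word >> shift) & 0x1F for shift in (10, 5, 0)]
    return "".join(chr(ord("A") + letter - 1) for letter in letters)

# "GSM" (LG Electronics' PnP ID) encodes to 0x1E6D. The rest of this 128-byte
# block is zero-filled here purely to make the example self-contained.
fake_edid = bytes([0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00, 0x1E, 0x6D] + [0] * 118)
print(edid_manufacturer(fake_edid))  # -> GSM
```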
 

INSTG8R

Vanguard Beta Tester
Joined
Nov 26, 2004
Messages
8,042 (1.10/day)
Location
Canuck in Norway
System Name Hellbox 5.1(same case new guts)
Processor Ryzen 7 5800X3D
Motherboard MSI X570S MAG Torpedo Max
Cooling TT Kandalf L.C.S.(Water/Air)EK Velocity CPU Block/Noctua EK Quantum DDC Pump/Res
Memory 2x16GB Gskill Trident Neo Z 3600 CL16
Video Card(s) Powercolor Hellhound 7900XTX
Storage 970 Evo Plus 500GB 2xSamsung 850 Evo 500GB RAID 0 1TB WD Blue Corsair MP600 Core 2TB
Display(s) Alienware QD-OLED 34” 3440x1440 144hz 10Bit VESA HDR 400
Case TT Kandalf L.C.S.
Audio Device(s) Soundblaster ZX/Logitech Z906 5.1
Power Supply Seasonic TX~’850 Platinum
Mouse G502 Hero
Keyboard G19s
VR HMD Oculus Quest 3
Software Win 11 Pro x64
12GB and 16GB seem more than enough to me; anything higher than that is just adding expense. Depending on how Navi 2 turns out, I might make the jump to a higher tier. I'll have to sell my Sapphire 5700 XT Nitro though, which I've only had since November.
I'm keeping mine until the dust totally settles, or until I can't fight the "new shiny" urge. My 1440p performance is more than adequate right now. I'm sure Sapphire will suck me in as they have the last 4 gens...
 
Joined
May 2, 2017
Messages
7,762 (2.81/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
First, I own a 75-inch Sony 4K TV.
Second, there is no evidence that more than three games can consume over 11GB of VRAM. At least TechPowerUp's reviews don't show it.
Third, given the VII's and 5700 XT's POOR 4K performance, I don't think RDNA 2 will be able to handle 4K gaming either.
So RDNA 2's VRAM appeal is still nonsense. It's for mining etc., but not for gaming.
AMD should not drive up cost with larger but useless VRAM; they should improve their GPU core performance instead.
Mining? Mining is dead. And AMD has already increased their GPUs' gaming performance quite significantly - hence the 40CU RDNA 5700 XT nearly matching the 60CU Vega-based Radeon VII (sure, the 5700 XT clocks higher, but not that much - it's still more than 4 TFLOPS behind in raw compute power, and that's calculated using the unrealistically high "boost clock" spec for the 5700 XT, not the realistic "game clock" spec). There's definitely an argument to be made about huge frame buffers being silly, but in the non-controlled system that is the PC gaming space, they are somewhat necessary: developers will always need ways of mitigating or compensating for potential bottlenecks such as disk I/O, and pre-loading assets aggressively is a widely used tactic for this. And as graphical fidelity increases and texture quality goes up, we'll definitely still see VRAM usage increase.
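For reference, a quick back-of-the-envelope FP32 throughput comparison using the advertised peak specs (a rough sketch; sustained in-game clocks are lower and vary per card):

```python
# Rough FP32 throughput comparison using advertised peak specs:
# Radeon VII: 3840 shaders @ up to ~1800 MHz (the 13.8 TFLOPS marketing figure),
# RX 5700 XT: 2560 shaders @ 1905 MHz "boost" / 1755 MHz "game" clock.

def fp32_tflops(shaders: int, clock_mhz: float) -> float:
    """FP32 TFLOPS = shaders x 2 FMA ops per cycle x clock."""
    return shaders * 2 * clock_mhz * 1e6 / 1e12

radeon_vii    = fp32_tflops(3840, 1800)  # ~13.8 TFLOPS
r5700xt_boost = fp32_tflops(2560, 1905)  # ~9.8 TFLOPS (peak "boost" spec)
r5700xt_game  = fp32_tflops(2560, 1755)  # ~9.0 TFLOPS (realistic "game" spec)

print(f"Radeon VII:      {radeon_vii:.1f} TFLOPS")
print(f"5700 XT (boost): {r5700xt_boost:.1f} TFLOPS (gap: {radeon_vii - r5700xt_boost:.1f})")
print(f"5700 XT (game):  {r5700xt_game:.1f} TFLOPS (gap: {radeon_vii - r5700xt_game:.1f})")
```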
Next-gen consoles going with 16GB while targeting 4K says a lot.
Sony's engineers explained that both the amount of RAM required and game installation size are drastically reduced if a faster SSD is available, since there is no need to pre-load assets or keep multiple copies of the same data around just so it loads faster.
Yep, that's exactly it. PC game makers can't afford to alienate the heaps of users still using HDDs (or even SATA SSDs, to some extent), and there aren't systems in place to determine which install you get depending on the nature of your storage device (not to mention that making 2-3 different distributions of your game depending on the storage medium would be quite work-intensive), so they need to develop for the lowest common denominator. With the upcoming consoles going all NVMe, developers know exactly how fast they can get data off the drive, and thus never have to load it prematurely. That will drive down VRAM usage dramatically.

Of course VRAM usage in games is an interesting topic in and of itself given how much it can vary due to different factors. I've seen examples of the same game at the same resolution and settings and similar performance use several GB more VRAM on GPUs from one vendor compared to the other, for example (IIRC it was a case where Nvidia GPUs hit near 6GB while AMD GPUs stayed around 4GB). Whether that is due to the driver, something weird in the game code, or something else entirely is beyond me, but it's a good illustration of how this can't be discussed as a straightforward "game A at resolution X will ALWAYS need *GB of VRAM" situation.
I've briefly touched on HDMI-CEC (when my sat receiver didn't want to play along with... wait a sec, LG TV).
My observations:

1) HDMI implementations are a clusterf*ck of quirks; adding yet another one is no big deal
2) "Which vendor just connected?" is part of the standard handshake.
3) Vendor specific codes are part of the standard. ("Oh, you are LG too? Let's speak Klingon!!!!")
That's true, HDMI is indeed a mess - but my impression is that HDMI 2.1 is supposed to try to alleviate this by integrating a lot of optional things into the standard, rather than having vendors make custom extensions. Then again, can you even call something a standard if it isn't universally applicable by some reasonable definition? I would say not. Unless, of course, you're a fan of XKCD. But nonetheless, even if "which vendor just connected" is part of the handshake, "if vendor=X, treat HDMI 2.0 device as HDMI 2.1 device" goes quite a ways beyond this.
 
Joined
Feb 3, 2017
Messages
3,747 (1.32/day)
Processor Ryzen 7800X3D
Motherboard ROG STRIX B650E-F GAMING WIFI
Memory 2x16GB G.Skill Flare X5 DDR5-6000 CL36 (F5-6000J3636F16GX2-FX5)
Video Card(s) INNO3D GeForce RTX™ 4070 Ti SUPER TWIN X2
Storage 2TB Samsung 980 PRO, 4TB WD Black SN850X
Display(s) 42" LG C2 OLED, 27" ASUS PG279Q
Case Thermaltake Core P5
Power Supply Fractal Design Ion+ Platinum 760W
Mouse Corsair Dark Core RGB Pro SE
Keyboard Corsair K100 RGB
VR HMD HTC Vive Cosmos
Next-gen consoles going with 16GB while targeting 4K says a lot.
Keep in mind that this is total RAM. When compared to a PC, it is both RAM and VRAM in one. In the current gen, both consoles have 8GB but initially, PS4 had 3.5GB available for games and Xbox One had 5GB. Later, the available RAM for games was slightly increased on both.

Pretty sure the new generation will reserve 3-4GB for system use, if not more - operating system, caches and stuff. The 12-13GB that remain include both RAM and VRAM for a game to use. There are some savings from not having to load textures to RAM and then transfer them to VRAM, but that does not make too big of a difference.
 
Joined
May 2, 2017
Messages
7,762 (2.81/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Keep in mind that this is total RAM. When compared to a PC, it is both RAM and VRAM in one. In the current gen, both consoles have 8GB but initially, PS4 had 3.5GB available for games and Xbox One had 5GB. Later, the available RAM for games was slightly increased on both.

Pretty sure the new generation will reserve 3-4GB for system use, if not more - operating system, caches and stuff. The 12-13GB that remain include both RAM and VRAM for a game to use. There are some savings from not having to load textures to RAM and then transfer them to VRAM, but that does not make too big of a difference.
2.5GB for the XSX. 13.5GB is available for software, of which 10GB is of the "GPU optimal" kind thanks to the XSX's weird dual-bandwidth RAM layout.
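Laid out as a quick sketch, using the figures Microsoft has stated publicly (the PS5 differs, with a unified 16GB pool at a single bandwidth):

```python
# Publicly stated Xbox Series X memory split, as a quick reference.
# Figures are from Microsoft's spec disclosures; treat them as nominal.

XSX_MEMORY = {
    "gpu_optimal": {"size_gb": 10.0, "bandwidth_gbps": 560},  # 10 GB @ 560 GB/s
    "standard":    {"size_gb": 6.0,  "bandwidth_gbps": 336},  #  6 GB @ 336 GB/s
}
OS_RESERVED_GB = 2.5  # carved out of the "standard" pool

available = sum(pool["size_gb"] for pool in XSX_MEMORY.values()) - OS_RESERVED_GB
print(f"Available to games: {available} GB "
      f"({XSX_MEMORY['gpu_optimal']['size_gb']} GB GPU-optimal + "
      f"{XSX_MEMORY['standard']['size_gb'] - OS_RESERVED_GB} GB standard)")
```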
 
Joined
Feb 3, 2017
Messages
3,747 (1.32/day)
Processor Ryzen 7800X3D
Motherboard ROG STRIX B650E-F GAMING WIFI
Memory 2x16GB G.Skill Flare X5 DDR5-6000 CL36 (F5-6000J3636F16GX2-FX5)
Video Card(s) INNO3D GeForce RTX™ 4070 Ti SUPER TWIN X2
Storage 2TB Samsung 980 PRO, 4TB WD Black SN850X
Display(s) 42" LG C2 OLED, 27" ASUS PG279Q
Case Thermaltake Core P5
Power Supply Fractal Design Ion+ Platinum 760W
Mouse Corsair Dark Core RGB Pro SE
Keyboard Corsair K100 RGB
VR HMD HTC Vive Cosmos
2.5GB for the XSX. 13.5GB is available for software, of which 10GB is of the "GPU optimal" kind thanks to the XSX's weird dual-bandwidth RAM layout.
This is pretty likely what both the next-gen consoles' memory availability will look like.
 
Joined
Jul 9, 2015
Messages
3,413 (1.00/day)
System Name M3401 notebook
Processor 5600H
Motherboard NA
Memory 16GB
Video Card(s) 3050
Storage 500GB SSD
Display(s) 14" OLED screen of the laptop
Software Windows 10
Benchmark Scores 3050 scores good 15-20% lower than average, despite ASUS's claims that it has uber cooling.
Joined
May 2, 2017
Messages
7,762 (2.81/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Cheaper that way. Every buck counts when you sell stuff in tens of millions.
Sure, that makes a bit of sense. Probably also board space (the XSX motherboard is tiny!) and a likely correct judgement that non-graphics memory doesn't actually need all of that bandwidth to begin with.
 
Joined
Oct 25, 2019
Messages
17 (0.01/day)
[QUOTE="FreedomOfSpeech, post: 4326708, member: 200814"]
This time I wanted to switch to AMD for the first time in my life. Last Christmas I bought an LG C9 because of its 4K @ 120Hz @ VRR (G-Sync/FreeSync). Nvidia's RTX generation can do VRR over HDMI 2.0 as G-Sync Compatible. Yesterday I read that the 2019 LG OLEDs won't work with Big Navi @ FreeSync. That means I have to buy Nvidia again...
[/QUOTE]
You've never had AMD before? Let it go - they're demanding in every sense. Not that they aren't powerful... but...
 
Joined
Jul 9, 2015
Messages
3,413 (1.00/day)
System Name M3401 notebook
Processor 5600H
Motherboard NA
Memory 16GB
Video Card(s) 3050
Storage 500GB SSD
Display(s) 14" OLED screen of the laptop
Software Windows 10
Benchmark Scores 3050 scores good 15-20% lower than average, despite ASUS's claims that it has uber cooling.
[QUOTE="FreedomOfSpeech, post: 4326708, member: 200814"]
This time I wanted to switch to AMD for the first time in my life. Last Christmas I bought an LG C9 because of its 4K @ 120Hz @ VRR (G-Sync/FreeSync). Nvidia's RTX generation can do VRR over HDMI 2.0 as G-Sync Compatible. Yesterday I read that the 2019 LG OLEDs won't work with Big Navi @ FreeSync. That means I have to buy Nvidia again...
[/QUOTE]
You've never had AMD before? Let it go - they're demanding in every sense. Not that they aren't powerful... but...


Oh look, burnt fuse II.

By the way, why doesn't the chiplet approach work with GPUs?
 

deu

Joined
Apr 24, 2016
Messages
493 (0.16/day)
System Name Bo-minator (my name is bo)
Processor AMD 3900X
Motherboard Gigabyte X570 AORUS MASTER
Cooling Noctua NH-D15
Memory G-SkiLL 2x8GB RAM 3600Mhz (CL16-16-16-16-36)
Video Card(s) ASUS STRIX 1080Ti OC
Storage Samsung EVO 850 1TB
Display(s) ACER XB271HU + DELL 2717D
Case Fractal Design Define R4
Audio Device(s) ASUS Xonar Essence STX
Power Supply Antec HCP 1000W
Mouse G403
Keyboard CM STORM Quick Fire Rapid
Software Windows 10 64-bit Pro
Benchmark Scores XX
If past trends are any indication, there WILL be 12 and/or 16GB variants, and possibly even 24GB for the very top end....
remember the infamous "nobody will ever need more than 16k of ram" quote from way back, and look at where we are nowadays....

And since the current crop of 11GB cards is really, really expensive compared to the 8/6GB models, if you want one of the above, perhaps you should shore up your finances, then put your banker and HIS gold cards on retainer, hehehe ..:roll:..:eek:...:fear:..

Is this the quote?


Someone else might have said it with 16k of RAM, I don't know
 
Joined
Feb 3, 2017
Messages
3,747 (1.32/day)
Processor Ryzen 7800X3D
Motherboard ROG STRIX B650E-F GAMING WIFI
Memory 2x16GB G.Skill Flare X5 DDR5-6000 CL36 (F5-6000J3636F16GX2-FX5)
Video Card(s) INNO3D GeForce RTX™ 4070 Ti SUPER TWIN X2
Storage 2TB Samsung 980 PRO, 4TB WD Black SN850X
Display(s) 42" LG C2 OLED, 27" ASUS PG279Q
Case Thermaltake Core P5
Power Supply Fractal Design Ion+ Platinum 760W
Mouse Corsair Dark Core RGB Pro SE
Keyboard Corsair K100 RGB
VR HMD HTC Vive Cosmos
By the way, why doesn't the chiplet approach work with GPUs?
It works perfectly fine for workloads that aren't time-sensitive, like HPC, AI and other compute tasks. When it comes to graphics, the problem is orchestrating all the work in time and having all the necessary information in the right place - VRAM, caches, across IF/NVLink/whichever bus. Both AMD and Nvidia have been working on this for years but the current result seems to be that even Crossfire and SLI are being wound down.
 
Joined
Oct 18, 2013
Messages
6,186 (1.53/day)
Location
Over here, right where you least expect me to be !
System Name The Little One
Processor i5-11320H @4.4GHZ
Motherboard AZW SEI
Cooling Fan w/heat pipes + side & rear vents
Memory 64GB Crucial DDR4-3200 (2x 32GB)
Video Card(s) Iris XE
Storage WD Black SN850X 4TB m.2, Seagate 2TB SSD + SN850 4TB x2 in an external enclosure
Display(s) 2x Samsung 43" & 2x 32"
Case Practically identical to a mac mini, just purrtier in slate blue, & with 3x usb ports on the front !
Audio Device(s) Yamaha ATS-1060 Bluetooth Soundbar & Subwoofer
Power Supply 65w brick
Mouse Logitech MX Master 2
Keyboard Logitech G613 mechanical wireless
Software Windows 10 pro 64 bit, with all the unnecessary background shitzu turned OFF !
Benchmark Scores PDQ
Is this the quote?


Someone else might have said it with 16k of RAM, I don't know

something like that anyways.... my memory is not what it used to be, but oh well :)

but the current result seems to be that even Crossfire and SLI are being wound down

This is my understanding also, and as far as I am concerned, good riddance.... it was never really worth the hassle, since the drivers never really worked the way they were supposed to anyways, although the GPUs themselves seemed to be relatively robust for that era...
 
Joined
Mar 23, 2005
Messages
4,085 (0.57/day)
Location
Ancient Greece, Acropolis (Time Lord)
System Name RiseZEN Gaming PC
Processor AMD Ryzen 7 5800X @ Auto
Motherboard Asus ROG Strix X570-E Gaming ATX Motherboard
Cooling Corsair H115i Elite Capellix AIO, 280mm Radiator, Dual RGB 140mm ML Series PWM Fans
Memory G.Skill TridentZ 64GB (4 x 16GB) DDR4 3200
Video Card(s) ASUS DUAL RX 6700 XT DUAL-RX6700XT-12G
Storage Corsair Force MP500 480GB M.2 & MP510 480GB M.2 - 2 x WD_BLACK 1TB SN850X NVMe 1TB
Display(s) ASUS ROG Strix 34” XG349C 180Hz 1440p + Asus ROG 27" MG278Q 144Hz WQHD 1440p
Case Corsair Obsidian Series 450D Gaming Case
Audio Device(s) SteelSeries 5Hv2 w/ Sound Blaster Z SE
Power Supply Corsair RM750x Power Supply
Mouse Razer Death-Adder + Viper 8K HZ Ambidextrous Gaming Mouse - Ergonomic Left Hand Edition
Keyboard Logitech G910 Orion Spectrum RGB Gaming Keyboard
Software Windows 11 Pro - 64-Bit Edition
Benchmark Scores I'm the Doctor, Doctor Who. The Definition of Gaming is PC Gaming...
If you think a $700 card on par with a 2080 is mid-range, then you're going to have to wait to get high end until the aliens visit.
All GPU prices are out of whack - low, mid, high and enthusiast alike. They are ALL overpriced!
 
Joined
May 2, 2017
Messages
7,762 (2.81/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
It works perfectly fine for workloads that aren't time-sensitive, like HPC, AI and other compute tasks. When it comes to graphics, the problem is orchestrating all the work in time and having all the necessary information in the right place - VRAM, caches, across IF/NVLink/whichever bus. Both AMD and Nvidia have been working on this for years but the current result seems to be that even Crossfire and SLI are being wound down.
Chiplet-based GPUs aren't supposed to be SLI/CF though; the point is precisely to avoid that can of worms. Ideally the chiplets would seamlessly add into a sort of virtual monolithic GPU, with the system not knowing there are several chips at all. As you say there have been persistent rumors about this for a while, and we know both major GPU vendors are working on it (Intel too). If performance is supposed to keep increasing as it has without costs going even more crazy than they already have, MCM GPUs will soon be a necessity - >500mm² dice on 7nm EUV or 5nm aren't going to be economically viable for mass market consumer products in the long run, so going MCM is the only logical solution. 3D stacking might help with latencies, by stacking multiple GPU compute chiplets on top of an I/O + interconnect die or active interposer. But of course that comes with its own added cost, and massive complexity. Hopefully it will all come together when it truly becomes a necessity.
 
Joined
Feb 3, 2017
Messages
3,747 (1.32/day)
Processor Ryzen 7800X3D
Motherboard ROG STRIX B650E-F GAMING WIFI
Memory 2x16GB G.Skill Flare X5 DDR5-6000 CL36 (F5-6000J3636F16GX2-FX5)
Video Card(s) INNO3D GeForce RTX™ 4070 Ti SUPER TWIN X2
Storage 2TB Samsung 980 PRO, 4TB WD Black SN850X
Display(s) 42" LG C2 OLED, 27" ASUS PG279Q
Case Thermaltake Core P5
Power Supply Fractal Design Ion+ Platinum 760W
Mouse Corsair Dark Core RGB Pro SE
Keyboard Corsair K100 RGB
VR HMD HTC Vive Cosmos
SLI/Crossfire are the technologies that have shown the most promise. Splitting the GPU functionally across different dies has not yet been very successful. Moving data and keeping it coherent are a big problem, especially with the latency in play. We will see if contemporary packaging technologies like interposers or EMIB will make inter-die communication more viable, but that seems to be the sticking point so far. Even with data moving over silicon, going from one die to another might come with a surprisingly big efficiency hit.
 
Joined
May 2, 2017
Messages
7,762 (2.81/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
SLI/Crossfire are the technologies that have shown the most promise. Splitting the GPU functionally across different dies has not yet been very successful. Moving data and keeping it coherent are a big problem, especially with the latency in play. We will see if contemporary packaging technologies like interposers or EMIB will make inter-die communication more viable, but that seems to be the sticking point so far. Even with data moving over silicon, going from one die to another might come with a surprisingly big efficiency hit.
There are definitely challenges that need to be overcome, and they might even require a fundamental rearrangement of the current "massive grid of compute elements" GPU design paradigm. Advanced scheduling will also be much more important than it is today. But saying SLI and CF have "shown promise" is... a dubious claim. Both are dead-end technologies due to their reliance on developer support, difficulty of optimization, and poor scaling (~70% scaling as a best case, with 30-40% being normal, is nowhere near worth it). A lot of the shortcomings of multi-GPU can be overcome "simply" by drastically reducing the latency and drastically increasing the bandwidth between GPUs (NVLink beats SLI by quite a lot), but tighter integration will still be needed to remove the requirement for active developer support. Still, this is more realistic and has more of a path towards further scaling than SLI/CF - after all, those just go up to two GPUs, with scaling dropping off dramatically beyond that, which won't help GPU makers shrink die sizes by all that much. We need a new paradigm of GPU design to overcome this challenge, not just a shrunk-down version of what we already have.
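A quick illustration of why that scaling isn't worth it (using the rough efficiency ranges mentioned above, not measurements):

```python
# Effective performance of a second GPU at various scaling efficiencies.
# The efficiency figures are the rough ranges quoted in the post, not data.

BASE_FPS = 60.0

for label, scaling in [("ideal", 1.0), ("best case (~70%)", 0.7), ("typical (30-40%)", 0.35)]:
    fps = BASE_FPS * (1 + scaling)
    print(f"{label:>18}: {fps:5.1f} FPS from 2 GPUs "
          f"(+{scaling * 100:.0f}% performance for +100% cost and power)")
```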
 
Joined
Feb 3, 2017
Messages
3,747 (1.32/day)
Processor Ryzen 7800X3D
Motherboard ROG STRIX B650E-F GAMING WIFI
Memory 2x16GB G.Skill Flare X5 DDR5-6000 CL36 (F5-6000J3636F16GX2-FX5)
Video Card(s) INNO3D GeForce RTX™ 4070 Ti SUPER TWIN X2
Storage 2TB Samsung 980 PRO, 4TB WD Black SN850X
Display(s) 42" LG C2 OLED, 27" ASUS PG279Q
Case Thermaltake Core P5
Power Supply Fractal Design Ion+ Platinum 760W
Mouse Corsair Dark Core RGB Pro SE
Keyboard Corsair K100 RGB
VR HMD HTC Vive Cosmos
SLI/CF really do not rely on developers; IHVs do most of the job there. Unfortunately, current rendering technologies do not lend themselves well to the idea. Scaling really was not (and is not) that bad in GPU-limited situations.

I would say DX12 multi-GPU was not a bad idea when it comes to relying completely on developers, but that seems to be a no-go as a whole.

You are right about the required paradigm shift, but I'm not sure exactly what that will be. Right now, hardware vendors do not seem to have very good ideas for it either :(
 