
AMD Radeon RX 6000 Series "Big Navi" GPU Features 320 W TGP, 16 Gbps GDDR6 Memory

Joined
Nov 11, 2016
Messages
3,400 (1.16/day)
System Name The de-ploughminator Mk-III
Processor 9800X3D
Motherboard Gigabyte X870E Aorus Master
Cooling DeepCool AK620
Memory 2x32GB G.SKill 6400MT Cas32
Video Card(s) Asus RTX4090 TUF
Storage 4TB Samsung 990 Pro
Display(s) 48" LG OLED C4
Case Corsair 5000D Air
Audio Device(s) KEF LSX II LT speakers + KEF KC62 Subwoofer
Power Supply Corsair HX850
Mouse Razer DeathAdder V3
Keyboard Razer Huntsman V3 Pro TKL
Software win11
We used to have two-slot GPUs; as of last gen that went up to 3 slots, and now we are seeing 4-slot GPUs, all to cool these power-hungry beasts. This trend surely has to stop. Yes, we can undervolt to bring power consumption down to where we want it, and the cards will most certainly be more efficient than last gen, but is that what 99% of users would do? I think not only about the power bill, but also about the heat output, the spinning of fans and the consequently faster deterioration of those fans and other components in the system. The noise output and the heat coming out of the case will be uncomfortable.

I'm all for AIBs equipping GPUs with 3-4 slot coolers and selling them at MSRP :D
That means with a little undervolting I can get an experience similar to custom watercooling in terms of thermals/noise.
Well, if only there were any 3080s/3090s available :roll:.
 
Joined
Apr 7, 2011
Messages
1,380 (0.28/day)
System Name Desktop
Processor Intel Xeon E5-1680v2
Motherboard ASUS Sabertooth X79
Cooling Intel AIO
Memory 8x4GB DDR3 1866MHz
Video Card(s) EVGA GTX 970 SC
Storage Crucial MX500 1TB + 2x WD RE 4TB HDD
Display(s) HP ZR24w
Case Fractal Define XL Black
Audio Device(s) Schiit Modi Uber/Sony CDP-XA20ES/Pioneer CT-656>Sony TA-F630ESD>Sennheiser HD600
Power Supply Corsair HX850
Mouse Logitech G603
Keyboard Logitech G613
Software Windows 10 Pro x64
We used to have two-slot GPUs; as of last gen that went up to 3 slots, and now we are seeing 4-slot GPUs, all to cool these power-hungry beasts. This trend surely has to stop. Yes, we can undervolt to bring power consumption down to where we want it, and the cards will most certainly be more efficient than last gen, but is that what 99% of users would do? I think not only about the power bill, but also about the heat output, the spinning of fans and the consequently faster deterioration of those fans and other components in the system. The noise output and the heat coming out of the case will be uncomfortable.

We still have a 2-slot card which can handle 4K gaming with "ease".
 
Joined
Mar 24, 2012
Messages
533 (0.12/day)
Performance per watt did go up on Ampere, but that's to be expected given that Nvidia moved from TSMC's 12 nm to Samsung's 8 nm 8LPP, a 10 nm extension node. What is not impressive is only a ~10% performance-per-watt increase over Turing while being built on a ~25% denser node. The RDNA2 architecture, being on 7nm+, looks even worse efficiency-wise given that the density of 7nm+ is much higher, but let's wait for the actual benchmarks.

The days when a smaller/improved node meant better power consumption are over.
 
Joined
Apr 26, 2008
Messages
232 (0.04/day)
System Name 3950X Workstation
Processor AMD Ryzen 9 3950X
Motherboard ASUS Crosshair VIII Impact
Cooling Cryorig C1 with Noctua NF-A12x15
Memory G.Skill F4-3600C16D-32GTZNC
Video Card(s) ASUS GTX 1650 LP OC
Storage 2 x Corsair MP510 1920GB M.2 SSD
Case Realan E-i7
Power Supply G-Unique 400W
Software Win 10 Pro
Benchmark Scores https://smallformfactor.net/forum/threads/the-saga-of-the-little-gem-continues.12877/
At this rate even a successor to the GTX 1650, which is a below-75 W GPU, will consume around 125 W.

If it's a successor to GTX 1650, it HAS TO be a 75W card :banghead:
And, if the performance/watt numbers for this generation hold, we should get a decent upgrade in performance in the same 75W envelope.
 
Joined
Apr 10, 2020
Messages
503 (0.30/day)
The days when a smaller/improved node meant better power consumption are over.
That's simply not true. Higher density = less power consumption or more transistors per mm². That's what node shrinkage is all about: a smaller node with the same transistor count (lower wattage), or the same wattage with a higher transistor count, or a compromise between performance gain and power savings.
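For reference, the trade-off being described is roughly the textbook CMOS dynamic-power relation, P ≈ α·C·V²·f per transistor. A minimal sketch, with every number made up purely for illustration (not real process data):

```python
# Toy model of CMOS dynamic power, P ~ alpha * C * V^2 * f (per transistor).
# All values are illustrative placeholders, not real process figures.
def dynamic_power(transistors, c_rel, v_rel, f_rel, alpha=0.2):
    """Relative dynamic power for a block of transistors."""
    return transistors * alpha * c_rel * (v_rel ** 2) * f_rel

baseline        = dynamic_power(transistors=1.00, c_rel=1.00, v_rel=1.00, f_rel=1.00)
shrink_same_cnt = dynamic_power(transistors=1.00, c_rel=0.75, v_rel=0.90, f_rel=1.00)  # ~40% less power
shrink_same_pwr = dynamic_power(transistors=1.65, c_rel=0.75, v_rel=0.90, f_rel=1.00)  # ~65% more transistors

print(baseline, shrink_same_cnt, shrink_same_pwr)  # 0.2, ~0.12, ~0.2
```

The point is that a shrink lets you spend the gain either on fewer watts, on more transistors, or on a split between the two.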
 

Deleted member 190774

Guest
So let me get this straight.

Some people are grumpy with AMD for supposedly not competing with the 3080, while some thought 3070 - and now speculation that one of the higher end cards looks as though it's getting a bit more juice (for whatever competitive reason) - people seem grumpy with that too...
 
Joined
Apr 26, 2008
Messages
232 (0.04/day)
System Name 3950X Workstation
Processor AMD Ryzen 9 3950X
Motherboard ASUS Crosshair VIII Impact
Cooling Cryorig C1 with Noctua NF-A12x15
Memory G.Skill F4-3600C16D-32GTZNC
Video Card(s) ASUS GTX 1650 LP OC
Storage 2 x Corsair MP510 1920GB M.2 SSD
Case Realan E-i7
Power Supply G-Unique 400W
Software Win 10 Pro
Benchmark Scores https://smallformfactor.net/forum/threads/the-saga-of-the-little-gem-continues.12877/
Thinking about these developments, I think AMD is simply following in NVIDIA's footsteps on power draw. I mean, NVIDIA opened the floodgates and AMD saw an opportunity to max out their performance within a similar power budget. I bet AMD has been tweaking their clock speeds in the several weeks since NVIDIA's launch.
 
Joined
Jun 3, 2010
Messages
2,540 (0.48/day)
is that what 99% of users would do?
Well, these GPUs are packed with unified shaders. They don't work all the time; they wait for instructions, and depending on what stage of the pipeline they are at, they can throttle according to the workload, and that is what any user should do, since that is what the consoles do anyway. Every form of development effort is directed at the consoles and, I have to say, they have gone pretty wild with the intrinsics. Let's wait and see SM6.0. I'm sure that after introducing per-lane operations to expand instruction-level fidelity four-fold, they will go for full clock control.

I mean, NVIDIA opened the floodgates and AMD saw an opportunity to max out their performance within a similar power budget.
Tis wrong.
I bet AMD has been tweaking their clock speeds in the several weeks since NVIDIA's launch.
Tis right. The way AMD and Nvidia approach GPU clock monitoring is different: Nvidia uses real-time monitoring, AMD uses emulated monitoring. Nvidia can adapt better to real changes after launch, but AMD can respond faster thanks to pre-launch approximated settings. If they simulated a scenario, the algorithm could emulate the power surge and so on.
 
Joined
Mar 24, 2012
Messages
533 (0.12/day)
That's simply not true. Higher density = less power consumption or more transistors per mm². That's what node shrinkage is all about: a smaller node with the same transistor count (lower wattage), or the same wattage with a higher transistor count, or a compromise between performance gain and power savings.

Yes, higher density means more transistors per mm², but less power? I don't think that is guaranteed. In fact, higher density leads to another problem: heat. How much power is being wasted as heat instead of going into performance? In the end we are bound by the laws of physics; we cannot keep improving forever. Even at 20 nm we already saw problems: back then TSMC decided to ditch the high-performance variant of the 20 nm node because the power savings were not much better than the enhanced 28 nm process.
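To put the heat concern in numbers: even at the same total board power, a denser, smaller die concentrates that power into less area, so the cooler has to remove a higher heat flux. A rough sketch with made-up die sizes (not real GPU specs):

```python
# Same total power, smaller die => higher heat flux for the cooler to remove.
# Die areas below are made-up illustrative figures, not real GPU specs.
def heat_flux_w_per_cm2(power_w, die_area_mm2):
    return power_w / (die_area_mm2 / 100.0)  # convert mm^2 to cm^2

print(heat_flux_w_per_cm2(300, 750))  # bigger die on an older node: 40.0 W/cm^2
print(heat_flux_w_per_cm2(300, 500))  # denser, smaller die:         60.0 W/cm^2
```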
 
Joined
Jun 3, 2010
Messages
2,540 (0.48/day)
Higher density does indeed mean lower clocks.
 

Nkd

Joined
Sep 15, 2007
Messages
364 (0.06/day)
If TGP is 320 W, then peak power draw must be north of 400 W, just like the 3080 and 3090. That's really, really bad. No single decent GPU should peak over 300 W; that's the datacenter rule of thumb, and it's being trampled by both Ampere and RDNA2. How long will an air-cooled 400 W GPU last? I'm having a hard time believing there will be many fully functioning air-cooled Big Navis/3080s/3090s around in 3-5 years' time. Maybe that's the intent; 1080 Tis are still killing new sales.

Someone didn't read the article. Seriously, come on now. The article frickin' explains exactly how much wattage goes where.
 

ThanatosPy

New Member
Joined
Jul 28, 2020
Messages
3 (0.00/day)
The problem with AMD is always going to be the drivers. Man, how can they be so bad?
 
Joined
Apr 10, 2020
Messages
503 (0.30/day)
Laws of physics always apply; it's just a matter of cost. As you go to smaller geometries it gets more and more expensive. One of the things driving Moore's Law was that the cost per transistor kept dropping. It hasn't been dropping noticeably recently (going below 5 nm), and in some cases it's going flat. So yes, you can still get more transistors and lower wattage at smaller nodes, but the cost per die goes up significantly, so those two things balance out.

I'd say 3 nm is a sweet spot for compute hardware for now, because it is great for compute density and has relatively low leakage power. Below that we probably won't see retail GPU and CPU shrinks anytime soon, as the architectures become very, very complex, which means a LOT of R&D $$$ and abysmal die yields. But hey, we're still talking about Samsung's 8 nm here (aka 10 nm in reality) and 7 nm with Ampere/RDNA2, not sub-3 nm nodes, so there is still plenty of power efficiency to gain simply by moving to a smaller node.

The problem Ampere has is that it was not built exclusively for gaming, and Samsung's 8 nm node was never meant for big dies, and it shows. It's Nvidia's "GCN 5 Vega" moment: trying to sit on two chairs at the same time and cheap out with an inferior node. Luckily for them, AMD is so far behind that they can pull it off without having to worry about the competition too much. A 3080 on TSMC's 7 nm EUV process would be a 250 W TDP GPU; that's all NVidia had to do to obliterate RDNA2, but they chose profit margins over efficiency, and maybe, just maybe, that will bite them in the ass.
 
Joined
Dec 31, 2009
Messages
19,371 (3.56/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
So let me get this straight.

Some people are grumpy with AMD for supposedly not competing with the 3080, while some thought 3070 - and now speculation that one of the higher end cards looks as though it's getting a bit more juice (for whatever competitive reason) - people seem grumpy with that too...
lol, the fickle outweigh the logical these days... especially in forums.
Only 3 known issues as of the last release. Time to move on.
I think the worry is launch day and the annual Adrenalin drivers. It took them over a year to get rid of the black screen issue, for example. I'm glad they are pulling it together, but I do understand the valid concerns.
 
Joined
Sep 17, 2014
Messages
22,424 (6.03/day)
Location
The Washing Machine
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling Thermalright Peerless Assassin
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
So let me get this straight.

Some people are grumpy with AMD for supposedly not competing with the 3080, while some thought 3070 - and now speculation that one of the higher end cards looks as though it's getting a bit more juice (for whatever competitive reason) - people seem grumpy with that too...

I think in general the fact that more performance is achieved with more power isn't exactly something to get all hyped up about.

We could practically have done that already but never did, if you think about it - without major investments and price hikes. Just drag 14 nm out a while longer and make the dies bigger?

The reality is, we're seeing the cost of RT and 4K added to the pipeline. The efficiency gains don't translate to lower resolutions due to engine or CPU constraints. We're moving into a new era, in that sense. It doesn't look good right now because we're used to an era of GPUs that were very efficient at the common resolutions. Hardly anyone plays at 4K yet, but their GPUs slaughter 1080p and sometimes 1440p. Basically these are new GPUs waiting to solve new problems we don't really have yet.
 
Joined
Nov 6, 2016
Messages
1,751 (0.60/day)
Location
NH, USA
System Name Lightbringer
Processor Ryzen 7 2700X
Motherboard Asus ROG Strix X470-F Gaming
Cooling Enermax Liqmax Iii 360mm AIO
Memory G.Skill Trident Z RGB 32GB (8GBx4) 3200Mhz CL 14
Video Card(s) Sapphire RX 5700XT Nitro+
Storage Hp EX950 2TB NVMe M.2, HP EX950 1TB NVMe M.2, Samsung 860 EVO 2TB
Display(s) LG 34BK95U-W 34" 5120 x 2160
Case Lian Li PC-O11 Dynamic (White)
Power Supply BeQuiet Straight Power 11 850w Gold Rated PSU
Mouse Glorious Model O (Matte White)
Keyboard Royal Kludge RK71
Software Windows 10
Does nobody care about electricity bills anymore, or do most not have the responsibility of paying the bills? Who would buy these cards?

First of all, the average price of electricity in America is $0.125/kWh, which is very cheap. This means, for example, that a 100 W difference between video cards equates to about $36.50/year if the card is used 8 hours per day, 365 days per year... and that's a lot of gaming.
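For anyone who wants to check the arithmetic, here is the same back-of-the-envelope calculation using the rate and usage assumptions above; it lands at roughly $36 a year:

```python
# The same back-of-the-envelope math, using the price and usage assumed above.
extra_watts   = 100      # power difference between two cards
hours_per_day = 8
days_per_year = 365
usd_per_kwh   = 0.125    # quoted average US electricity price

kwh_per_year = extra_watts / 1000 * hours_per_day * days_per_year  # 292 kWh
usd_per_year = kwh_per_year * usd_per_kwh                          # ~36.50
print(kwh_per_year, round(usd_per_year, 2))
```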

Don't get me wrong, I think efficiency should be the paramount concern considering the impending ecological collapse and all, but because Nvidia opened the door to a complete disregard for efficiency this time around, I think AMD is following suit and going all out on clocks, because they realize they don't have to care about efficiency.

That being said, I wouldn't be surprised if, once you downclock and undervolt RDNA2, it turns out to be extremely efficient, much more so than Ampere could ever be.
 
Joined
Apr 10, 2020
Messages
503 (0.30/day)
Someone didn't read the article. Seriously, come on now. The article frickin' explains exactly how much wattage goes where.
What's wrong with my numbers? Igor writes 320 W TBP for the FE NAVI 21 XT and 355 W for AIB variants ('The 6800XT is hot, up to 355 watts++'). That translates into 400 W+ peak power draw.
 
Joined
Dec 31, 2009
Messages
19,371 (3.56/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
What's wrong with my numbers? Igor writes 320 W TBP for the FE NAVI 21 XT and 355 W for AIB variants ('up to 355 watts++')... That translates into 400 W+ peak power draw.
How? Isn't TBP Total BOARD Power, which encompasses everything? How are you seeing a TBP value of XXX and coming up with YYY (more)?
 
Joined
Apr 10, 2020
Messages
503 (0.30/day)
How? Isn't TBP Total BOARD Power, which encompasses everything? How are you seeing a TBP value of XXX and coming up with YYY (more)?
Actually it doesn't. Official TBP of 3080 is 320 Watts, yet it peaks at 370W (FE) and up to 470W (AIBs). I have no reason to assume RDNA2 will be any different. That's why Igor wrote 355W++.
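To illustrate the distinction being argued about here (rated/average board power versus short transient peaks), a purely synthetic example; the real spike figures come from scope measurements like Igor's, not from anything like this sketch:

```python
import numpy as np

# Purely synthetic power trace: a card averaging ~320 W with occasional
# millisecond-scale spikes. Illustrative only; real peaks come from scope
# measurements (e.g. Igor's Lab), not from this sketch.
rng = np.random.default_rng(0)
trace_w = rng.normal(loc=320.0, scale=15.0, size=10_000)        # 1 ms samples
spikes = rng.choice(trace_w.size, size=50, replace=False)
trace_w[spikes] += rng.uniform(60.0, 120.0, size=spikes.size)   # transient excursions

print(f"average: {trace_w.mean():.0f} W, peak: {trace_w.max():.0f} W")
```

An average-based rating and a millisecond peak can easily sit 100 W apart, which is what the '355 W++' versus 400 W+ argument is about.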
 
Joined
Dec 31, 2009
Messages
19,371 (3.56/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
Actually it doesn't. Official TBP of 3080 is 320 Watts, yet it peaks at 370W (FE) and up to 470W (AIBs). I have no reason to assume RDNA2 will be any different. That's why Igor wrote 355W++.
You should link some support... As it stands, NV cards have a power limit where clocks and voltage are lowered to maintain that limit. In my experience, it doesn't go over by much... not even close. It depends on the power limit of the card. If it is set to 320 W max, that is all they get, generally. It's true there are BIOSes with higher limits, but out of the factory at stock (FE speeds) it's a 320 W card.
 
Joined
May 15, 2020
Messages
697 (0.42/day)
Location
France
System Name Home
Processor Ryzen 3600X
Motherboard MSI Tomahawk 450 MAX
Cooling Noctua NH-U14S
Memory 16GB Crucial Ballistix 3600 MHz DDR4 CAS 16
Video Card(s) MSI RX 5700XT EVOKE OC
Storage Samsung 970 PRO 512 GB
Display(s) ASUS VA326HR + MSI Optix G24C4
Case MSI - MAG Forge 100M
Power Supply Aerocool Lux RGB M 650W
So let me get this straight.

Some people are grumpy with AMD for supposedly not competing with the 3080, while some thought 3070 - and now speculation that one of the higher end cards looks as though it's getting a bit more juice (for whatever competitive reason) - people seem grumpy with that too...
In short, some people are always grumpy.

People unhappy with high-TBP GPUs? Buy a mid-tier one. Really interested in efficiency? Buy the biggest die, undervolt, underclock.

But how about waiting to see some actual numbers (performance, consumption, prices) before getting the pitchforks out?

Who am I kidding, those pitchforks are always out...

Actually it doesn't. Official TBP of 3080 is 320 Watts, yet it peaks at 370W (FE) and up to 470W (AIBs). I have no reason to assume RDNA2 will be any different. That's why Igor wrote 355W++.
You're assuming that based on Igor's assumptions about his leak, and you see no flaw in your reasoning? :)
 
Joined
Jun 3, 2010
Messages
2,540 (0.48/day)
And that's a lot of gaming.
This is not about gaming, it is about GPU behaviour. You cannot keep the GPU scheduled with work 100% of the time.
That being said, I wouldn't be surprised if, once you downclock and undervolt RDNA2, it turns out to be extremely efficient,
I just read the Timothy Lottes guide and, funny coincidence, we have a local console developer who repeated his steps verbatim, so I can say with some confidence that this is a matter of scheduling and of how the GPU can 'see' the same workload progress that developers can see using the Radeon profiler. Work isn't always parallel; in fact, most of the time it is serial. If you have 64 compute units, the instruction engine assigns work to them one by one. Even in the best circumstances* that is ~5% of time lost to idle. You don't even need to keep the shaders working up until they meet the work-to-idle requirement.
*PS: that is with 1 kernel running; when multiple kernels are running this increases roughly linearly, e.g. 4 workgroups increase idle time to ~18%.
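A toy model of that ramp-up cost, assuming strictly one-at-a-time dispatch and made-up cycle counts (nothing here is taken from an actual profiler), lands in the same ballpark as those percentages:

```python
# Toy model: an instruction engine issuing work to CUs strictly one at a time.
# dispatch_cycles and work_cycles are made-up numbers chosen only to show the
# shape of the effect, not taken from any profiler or hardware documentation.
def idle_fraction(num_cus, dispatch_cycles, work_cycles):
    # CU i starts at i * dispatch_cycles and runs for work_cycles; the window
    # from first dispatch to last completion is charged to every CU.
    window = (num_cus - 1) * dispatch_cycles + work_cycles
    busy = num_cus * work_cycles
    return 1.0 - busy / (num_cus * window)

print(idle_fraction(64, dispatch_cycles=1, work_cycles=1200))  # ~0.05 (single kernel)
print(idle_fraction(64, dispatch_cycles=4, work_cycles=1200))  # ~0.17 (more dispatch overhead)
```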
 
Joined
Oct 10, 2018
Messages
943 (0.42/day)
Of course we are going to complain when a new GPU comes with 400 W+ consumption. How some of you can be so ignorant and dismissive towards others about it is beyond belief. I said the same thing about Nvidia when Ampere released: a lovely card, a great job on performance, and no price increase over the previous gen, but all of that at the cost of power consumption, and the complications that brings with it make it a no-go for me.
 
Joined
Apr 12, 2013
Messages
7,520 (1.77/day)
I don't follow; the limited edition of the 5700 XT wasn't a different product, it was still named 5700 XT. "6900XTX" implies a different product.
The naming scheme really doesn't matter; functionally, the 3900X and 3900XT are the same product as well. AMD could, in theory, do the same with "Big" Navi.
 