# AMD Radeon RX 6000 Series "Big Navi" GPU Features 320 W TGP, 16 Gbps GDDR6 Memory



## AleksandarK (Oct 20, 2020)

AMD is preparing to launch its Radeon RX 6000 series of graphics cards, codenamed "Big Navi", and it seems like we are getting more and more leaks about the upcoming cards. Set for an October 28th launch, the Big Navi GPU is based on the Navi 21 silicon, which comes in two variants. Thanks to his sources, Igor Wallossek of Igor's Lab has published a handful of details regarding the upcoming graphics card release. More specifically, there are more details about the Total Graphics Power (TGP) of the cards and how it is used across the board (pun intended). To clarify, TDP (Thermal Design Power) is a measurement that applies only to the GPU chip, or die, and how much thermal headroom it has; it doesn't cover the whole card's power draw, as there are more heat-producing components on board.

So the breakdown of the Navi 21 XT graphics card goes as follows: 235 Watts for the GPU alone, 20 Watts for Samsung's 16 Gbps GDDR6 memory, 35 Watts for voltage regulation (MOSFETs, inductors, capacitors), 15 Watts for fans and other components, and 15 Watts lost in the PCB. This puts the combined TGP at 320 Watts, showing just how much power is used by the non-GPU elements. For custom OC AIB cards, the TGP is boosted to 355 Watts, as the GPU alone uses 270 Watts. When it comes to the Navi 21 XL variant, cards based on it use 290 Watts of TGP, as the GPU sees a reduction to 203 Watts and the GDDR6 memory uses 17 Watts; the non-GPU components on the board use the same amount of power.
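As a quick sanity check, the leaked per-component figures can be summed up (the numbers are from the leak above, not official AMD data):

```python
# Leaked per-component power figures for Navi 21 XT, in watts (per Igor's Lab).
navi21_xt = {
    "GPU core": 235,
    "16 Gbps GDDR6": 20,
    "voltage regulation": 35,
    "fans and misc": 15,
    "PCB losses": 15,
}

tgp = sum(navi21_xt.values())
non_gpu = tgp - navi21_xt["GPU core"]
print(f"TGP: {tgp} W, of which {non_gpu} W is non-GPU")  # TGP: 320 W, of which 85 W is non-GPU
```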


 


When it comes to the selection of memory, AMD uses Samsung's 16 Gbps GDDR6 modules (K4ZAF325BM-HC16). The bundle AMD ships to its AIBs contains 16 GB of this memory paired with the GPU core; however, AIBs are free to use different memory if they want to, as long as it is a 16 Gbps module. You can see the tables below for the breakdown of each card's TGP.


 

 





----------



## jesdals (Oct 20, 2020)

What about the rumors of a 6900XTX card?


----------



## SLK (Oct 20, 2020)

Looks like all the GPUs this gen are power-hungry. Efficiency is out of the window!


----------



## Rebe1 (Oct 20, 2020)

jesdals said:


> What about the rumors of a 6900XTX card?


Same as with the 5700 XT - the 6900XTX will probably be a limited edition of the 6900XT.


----------



## lemoncarbonate (Oct 20, 2020)

SLK said:


> Looks like all the GPUs this gen are power-hungry. Efficiency is out of the window!



You get more framerate with the 3080 despite the insane power draw; some say it's the most power-per-frame-efficient GPU out there.

But, I agree with you... I wish they could have made something less hungry. Imagine how amazing it would be if we could get a <200W card that can beat the 2080 Ti.


----------



## Vya Domus (Oct 20, 2020)

Rebe1 said:


> Same as with the 5700 XT - the 6900XTX will probably be a limited edition of the 6900XT.



I don't follow - the limited edition of the 5700XT wasn't a different product, it was still named 5700XT. "6900XTX" implies a different product.


----------



## okbuddy (Oct 20, 2020)

How real is that? Isn't it 2.4 GHz?


----------



## Turmania (Oct 20, 2020)

Does anybody care about electricity bills anymore, or do most of you not have the responsibility of paying the bills? Who would buy these cards?


----------



## jesdals (Oct 20, 2020)

Turmania said:


> Does anybody care about electricity bills anymore, or do most of you not have the responsibility of paying the bills? Who would buy these cards?


Gaming at 7680x1440, I could do with the upgrade.


----------



## Raevenlord (Oct 20, 2020)

SLK said:


> Looks like this gen of GPUs are all power-hungry. Efficiency is out of the window!



Efficiency is one thing, power consumption is another.

NVIDIA's RTX 3080 is a much more power-efficient design than anything that came before (at 1440p and 4K), as our review clearly demonstrates.










One other metric for discussion, however, is the power envelope. For anyone who wants to reduce overall power consumption, for environmental or other reasons, one can always just drop a few rungs down the product stack for an RTX 3070, or the (virtual) RTX 3060, or AMD equivalents, which will certainly deliver even higher power efficiency within a smaller envelope.

We'll have to wait for reviews of other cards in NVIDIA's product stack (not to mention all of AMD's offerings, such as this leaked card), but it seems clear that this generation will deliver higher performance at the same power level as older generations. You may have to drop down the product stack, yes - but if performance is higher at the same envelope, you effectively get a better-performing RTX 2080 in a 3070 at the same power envelope, a better-performing RTX 2070 in an RTX 3060 at the same power envelope, and so on.

These are two different concepts, and I can't agree with anyone talking about inefficiency. The numbers, in a frame/watt basis, don't lie.
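That frames-per-watt comparison is easy to compute yourself; a small sketch with purely illustrative numbers (not figures from any review):

```python
def frames_per_watt(avg_fps: float, board_power_w: float) -> float:
    """Efficiency metric: average frames rendered per watt of board power."""
    return avg_fps / board_power_w

# Illustrative numbers only - plug in real review data to compare cards.
last_gen = frames_per_watt(avg_fps=60.0, board_power_w=250.0)
this_gen = frames_per_watt(avg_fps=90.0, board_power_w=320.0)
print(this_gen > last_gen)  # True: more efficient despite the higher draw
```

A card can therefore draw more total power and still be the more efficient design, which is exactly the distinction being made above.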


----------



## repman244 (Oct 20, 2020)

Turmania said:


> Does anybody care about electricity bills anymore, or do most of you not have the responsibility of paying the bills? Who would buy these cards?



Electricity is cheap in my country and I don't play games every day for 12 hours, so it will hardly show.


----------



## Mussels (Oct 20, 2020)

Well, looks like I'll just wire up my PC and turn the AC on this summer.


----------



## FinneousPJ (Oct 20, 2020)

Turmania said:


> Does anybody care about electricity bills anymore, or do most of you not have the responsibility of paying the bills? Who would buy these cards?


How much more do you think a 320 W board will cost to use over a 250 W board?
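For anyone who wants to actually run those numbers, the difference is straightforward to estimate (the electricity price and daily hours below are assumptions; plug in your own):

```python
def annual_cost_delta(extra_watts: float, hours_per_day: float,
                      price_per_kwh: float) -> float:
    """Extra yearly electricity cost of the higher-power board."""
    kwh_per_year = extra_watts / 1000 * hours_per_day * 365
    return kwh_per_year * price_per_kwh

# 320 W vs 250 W board, 3 hours of gaming a day, at an assumed $0.15/kWh:
print(f"${annual_cost_delta(70, 3, 0.15):.2f} extra per year")  # $11.50 extra per year
```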


----------



## Turmania (Oct 20, 2020)

At this rate, even a successor to the GTX 1650, which is a sub-75 W GPU, will consume around 125 W.


----------



## RedelZaVedno (Oct 20, 2020)

If TGP is 320W, then peak power draw must be north of 400W, just like the 3080 and 3090. That's really, really bad. A single GPU should not peak over 300W - that's the datacenter rule of thumb, and it's being trampled by both Ampere and RDNA2. How long will an air-cooled 400W GPU last? I'm having a hard time believing there will be many fully functioning air-cooled Big Navis/3080s/3090s around in 3-5 years' time. Maybe that's the intent; 1080 Tis are still killing new sales.


----------



## kayjay010101 (Oct 20, 2020)

FinneousPJ said:


> How much more do you think a 320 W board will cost to use over a 250 W board?


28% more. 

For me it doesn't matter, as electricity is quite cheap here. 1 kWh (so four hours at 250 W) costs the equivalent of one or two US cents, so an extra 70 W? We're talking maybe one or two bucks more over the course of a year...


----------



## Turmania (Oct 20, 2020)

FinneousPJ said:


> How much more do you think a 320 W board will cost to use over a 250 W board?


Reverse your thinking: compare a newer board that consumes 200 W to a 250 W board. The trend of improving performance by increasing power consumption is, for me, not an ideal technological improvement. But that is me.


----------



## RedelZaVedno (Oct 20, 2020)

kayjay010101 said:


> 28% more.
> 
> For me it doesn't matter, as electricity is quite cheap here. 1 kWh (so four hours at 250 W) costs the equivalent of one or two US cents, so an extra 70 W? We're talking maybe one or two bucks more over the course of a year...


It's not about the bill, it's about the heat. A 400W GPU, a 150W CPU, and 50-150W for the rest of the system, and you've got yourself a 0.6-0.7 kW room heater. That's a no-go in a 16 m² or smaller room in the late spring and summer months if you live in a moderate or warm climate.
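The heater arithmetic above is simple to check (the component figures are the post's estimates, not measurements):

```python
# Rough full-system draw under load, in watts (estimates from the post above).
system_draw_w = {"GPU": 400, "CPU": 150, "rest of system": 100}

# Essentially all of that electrical power ends up as heat in the room.
heat_kw = sum(system_draw_w.values()) / 1000
print(f"~{heat_kw:.2f} kW dumped into the room as heat")  # ~0.65 kW dumped into the room as heat
```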


----------



## nguyen (Oct 20, 2020)

Raevenlord said:


> Efficiency is one thing, power consumption is another.
> 
> NVIDIA's RTX 3080 is a much more power-efficient design than anything that came before (at 1440p and 4K), as our review clearly demonstrates.
> 
> ...



Finally, someone who can explain the difference between power consumption and efficiency.
It's really tiring to see all the posts complaining about Ampere's high power consumption.
High power consumption is easy to fix: just drag the Power Limit slider down to where you want it (no undervolt/overclock needed). From TPU's numbers, one can expect the 3080 to be 20-25% faster than the 2080 Ti when limited to 260W TGP.

It seems like AMD is also clocking the shit outta Big Navi to catch up with Nvidia, so people can just forget about RDNA2 achieving higher efficiency than Ampere.


----------



## FinneousPJ (Oct 20, 2020)

kayjay010101 said:


> 28% more.
> 
> For me it doesn't matter, as electricity is quite cheap here. 1 kWh (so four hours at 250 W) costs the equivalent of one or two US cents, so an extra 70 W? We're talking maybe one or two bucks more over the course of a year...


Exactly. If you're worried about two bucks per year, maybe don't buy a GPU... try saving up a buffer first.



Turmania said:


> Reverse your thinking: compare a newer board that consumes 200 W to a 250 W board. The trend of improving performance by increasing power consumption is, for me, not an ideal technological improvement. But that is me.


Well, let's see whether these boards can offer better performance at the same power or not. If they can, what's the problem? You can undervolt & underclock to your desired power.


----------



## RedelZaVedno (Oct 20, 2020)

Performance per watt did go up with Ampere, but that's to be expected given that Nvidia moved from TSMC's 12nm to Samsung's 8nm 8LPP, a 10nm extension node. What is not impressive is only a 10% performance-per-watt increase over Turing while being built on a 25% denser node. RDNA2, being on 7nm+, looks to be even worse efficiency-wise given that the density of 7nm+ is much higher, but let's wait for the actual benchmarks.


----------



## Khonjel (Oct 20, 2020)

SLK said:


> Looks like all the GPUs this gen are power-hungry. Efficiency is out of the window!


I sincerely apologise for every build suggestion where I recommended lower-wattage PSUs because I said, and I quote, "aS tiMe pROGressEs, Pc cOmpoNenTs will coNSuMe lESs poWeR"


----------



## springs113 (Oct 20, 2020)

nguyen said:


> Finally, someone who can explain the difference between power consumption and efficiency.
> It's really tiring to see all the posts complaining about Ampere's high power consumption.
> High power consumption is easy to fix: just drag the Power Limit slider down to where you want it (no undervolt/overclock needed). From TPU's numbers, one can expect the 3080 to be 20-25% faster than the 2080 Ti when limited to 260W TGP.
> 
> It seems like AMD is also clocking the shit outta Big Navi to catch up with Nvidia, so people can just forget about RDNA2 achieving higher efficiency than Ampere.


It seems that in every GPU thread you need to defend the almighty Nvidia. Go be happy with your purchase and stop spewing garbo. I know there's no yin without the yang, but please cut it out. Nvidia's numbers are known... the Navi 2 numbers are all speculative.


----------



## Turmania (Oct 20, 2020)

We used to have two-slot GPUs; as of last gen that went up to 3 slots, and now we are seeing 4-slot GPUs - all to cool these power-hungry beasts. This trend surely has to stop. Yes, we can undervolt to bring power consumption down to our desired needs, and the cards will most certainly be more efficient than last gen. But is that what 99% of users would do? I think not only about the power bill, but also the heat output, the spinning of fans, and consequently the faster deterioration of those fans and other components in the system. The noise output and the heat that comes from the case will be uncomfortable.


----------



## EarthDog (Oct 20, 2020)

Yikes... it feels again like they are punching up with clocks, moving out of the sweet spot. Since power is seemingly similar between Ampere and RDNA2 cards, it's going to come down to performance, price, and driver stability between them. Will AMD have to go below Ampere pricing due to performance, or will they take the crown and price similarly?


----------



## nguyen (Oct 20, 2020)

Turmania said:


> We used to have two-slot GPUs; as of last gen that went up to 3 slots, and now we are seeing 4-slot GPUs - all to cool these power-hungry beasts. This trend surely has to stop. Yes, we can undervolt to bring power consumption down to our desired needs, and the cards will most certainly be more efficient than last gen. But is that what 99% of users would do? I think not only about the power bill, but also the heat output, the spinning of fans, and consequently the faster deterioration of those fans and other components in the system. The noise output and the heat that comes from the case will be uncomfortable.



I'm all for AIBs equipping GPUs with 3-4 slot coolers and selling them at MSRP.
That means with a little undervolting I can get an experience similar to custom watercooling in terms of thermals/noise.
Well, if only there were any 3080s/3090s available.


----------



## repman244 (Oct 20, 2020)

Turmania said:


> We used to have two-slot GPUs; as of last gen that went up to 3 slots, and now we are seeing 4-slot GPUs - all to cool these power-hungry beasts. This trend surely has to stop. Yes, we can undervolt to bring power consumption down to our desired needs, and the cards will most certainly be more efficient than last gen. But is that what 99% of users would do? I think not only about the power bill, but also the heat output, the spinning of fans, and consequently the faster deterioration of those fans and other components in the system. The noise output and the heat that comes from the case will be uncomfortable.



We still have a 2 slot card which can handle 4k gaming with "ease".


----------



## renz496 (Oct 20, 2020)

RedelZaVedno said:


> Performance per watt did go up with Ampere, but that's to be expected given that Nvidia moved from TSMC's 12nm to Samsung's 8nm 8LPP, a 10nm extension node. What is not impressive is only a 10% performance-per-watt increase over Turing while being built on a 25% denser node. RDNA2, being on 7nm+, looks to be even worse efficiency-wise given that the density of 7nm+ is much higher, but let's wait for the actual benchmarks.



The days of smaller/improved node = better power consumption are over.


----------



## theGryphon (Oct 20, 2020)

Turmania said:


> With this rate even a successor to gtx 1650, which is a below 75w gpu will consume around 125w.



If it's a successor to GTX 1650, it HAS TO be a 75W card 
And, if the performance/watt numbers for this generation hold, we should get a decent upgrade in performance in the same 75W envelope.


----------



## RedelZaVedno (Oct 20, 2020)

renz496 said:


> The days of smaller/improved node = better power consumption are over.


That's simply not true. Higher density = less power consumption or more transistors per mm2. That's what node shrinkage is all about. Smaller node with the same transistor count (lower wattage) or the same node with higher transistor count (same wattage or compromise between performance gain and wattage advantage).


----------



## Deleted member 190774 (Oct 20, 2020)

So let me get this straight.

*Some* people are grumpy with AMD for supposedly not competing with the 3080 (while some thought 3070) - and now there's speculation that one of the higher-end cards looks as though it's getting a bit more juice (for whatever competitive reason), and people *seem* grumpy with that too...


----------



## theGryphon (Oct 20, 2020)

Thinking about these developments, I think AMD is simply following NVIDIA's footsteps in power draw. I mean NVIDIA opened the floodgates and AMD saw an opportunity to max out their performance within similar power ratings. I bet AMD has been working on tweaking their clock speeds in the last several weeks after NVIDIA launch.


----------



## mtcn77 (Oct 20, 2020)

Turmania said:


> is that what 99% of users would do?


Well, these GPUs are packed with unified shaders. They don't work all the time - they wait for instructions, and depending on what stage of the pipeline they are at, they can throttle according to the workload. That is what any user should do, since that is what consoles do anyway. Every form of development effort is directed at the consoles, and I have to say, they have gone pretty wild with the intrinsics. Let's wait and see SM6.0. I'm sure after introducing per-lane operations to expand instructional fidelity four-fold, they will go into full clock control.



theGryphon said:


> I mean NVIDIA opened the floodgates and AMD saw an opportunity to max out their performance within similar power ratings.


Tis wrong.


theGryphon said:


> I bet AMD has been working on tweaking their clock speeds in the last several weeks after NVIDIA launch.


'Tis right. The way AMD and Nvidia approach GPU clock monitoring is different: Nvidia uses real-time monitoring, AMD uses emulated monitoring. Nvidia can adapt to real changes better post-launch, but AMD can respond faster thanks to pre-launch approximated settings. If they simulated a scenario, the algorithm could emulate the power surge and whatnot.


----------



## renz496 (Oct 20, 2020)

RedelZaVedno said:


> That's simply not true. Higher density = less power consumption or more transistors per mm2. That's what node shrinkage is all about. Smaller node with the same transistor count (lower wattage) or the same node with higher transistor count (same wattage or compromise between performance gain and wattage advantage).



Yes, higher density means more transistors per mm², but less power? I don't think that one is guaranteed. In fact, higher density leads to another problem: heat. How much power is being wasted as heat instead of increasing performance? In the end we are bound by the laws of physics; we cannot get improvements infinitely. Even at 20nm we already saw problems - back then TSMC decided to ditch the high-performance node for 20nm because the power savings were not much better than the enhanced 28nm process.


----------



## mtcn77 (Oct 20, 2020)

Higher density indeed means lower clocks.



Spoiler: SRAM density


----------



## Nkd (Oct 20, 2020)

RedelZaVedno said:


> If TGP is 320W, then peak power draw must be north of 400W, just like the 3080 and 3090. That's really, really bad. A single GPU should not peak over 300W - that's the datacenter rule of thumb, and it's being trampled by both Ampere and RDNA2. How long will an air-cooled 400W GPU last? I'm having a hard time believing there will be many fully functioning air-cooled Big Navis/3080s/3090s around in 3-5 years' time. Maybe that's the intent; 1080 Tis are still killing new sales.



Someone didn't read the article. Seriously, come on now. The article literally explains where the wattage goes.


----------



## ThanatosPy (Oct 20, 2020)

The problem with AMD is always gonna be the drivers. Man, how can they be so bad?


----------



## Nkd (Oct 20, 2020)

ThanatosPy said:


> The problem with AMD is always gonna be the drivers. Man, how can they be so bad?



only 3 known issues as of last release. Time to move on.


----------



## RedelZaVedno (Oct 20, 2020)

The laws of physics always apply; it's just a matter of cost. As you go to smaller geometries, it gets more and more expensive. One of the things driving Moore's Law was that the cost per transistor kept dropping. It hasn't been dropping noticeably recently (going below 5nm), and in some cases it's going flat. So yes, you can still get more transistors and lower wattage at smaller nodes, but the cost per die is going up significantly, so those two things balance out. I'd say 3nm is the sweet spot for compute hardware for now, because it is great for compute density and has relatively low leakage power. Below that, we probably won't see retail GPU and CPU shrinks anytime soon, as the architectures become very, very complex, which means a LOT of R&D money and abysmal die yields.

But hey, we're still talking about Samsung's 8nm (really a 10nm-class node) and 7nm with Ampere/RDNA2 here, not sub-3nm nodes, so there is still plenty of power efficiency to gain simply by moving to a smaller node. The problem Ampere has is that it was not built exclusively for gaming, and Samsung's 8nm node was never meant for big dies - and it shows. It's Nvidia's "GCN 5 Vega" moment: trying to sit on two chairs at the same time and cheap out on an inferior node. Luckily for them, AMD is so far behind that they can pull it off without having to worry about the competition too much. A 3080 on TSMC's 7nm EUV process would be a 250W TDP GPU; that's all Nvidia had to do to obliterate RDNA2, but they chose profit margins over efficiency, and maybe, just maybe, that will bite them in the ass.


----------



## EarthDog (Oct 20, 2020)

beedoo said:


> So let me get this straight.
> 
> *Some* people are grumpy with AMD for supposedly not competing with the 3080 (while some thought 3070) - and now there's speculation that one of the higher-end cards looks as though it's getting a bit more juice (for whatever competitive reason), and people *seem* grumpy with that too...


lol, the fickle outweigh the logical these days... especially in forums.


Nkd said:


> only 3 known issues as of last release. Time to move on.


I think the worry is launch-day drivers and the annual Adrenalin releases. It took them over a year to get rid of the black screen issue, for example. I'm glad they are pulling it together, but I do understand the valid concerns.


----------



## Vayra86 (Oct 20, 2020)

beedoo said:


> So let me get this straight.
> 
> *Some* people are grumpy with AMD for supposedly not competing with the 3080 (while some thought 3070) - and now there's speculation that one of the higher-end cards looks as though it's getting a bit more juice (for whatever competitive reason), and people *seem* grumpy with that too...



I think in general the fact that more performance is achieved with more power isn't exactly something to get all hyped up about.

We could practically do that already but never did, if you think about it - without major investments and price hikes. Just drag out 14nm a while longer and make it bigger?

The reality is, we're seeing the cost of RT and 4K added to the pipeline. Efficiency gains don't translate to lower res due to engine or CPU constraints. We're moving into a new era, in that sense. It doesn't look good right now because we're used to a very efficient era in resolutions. Hardly anyone plays at 4K yet, but their GPUs slaughter 1080p and sometimes 1440p. Basically these are new GPUs waiting to solve new problems we don't really have.


----------



## AnarchoPrimitiv (Oct 20, 2020)

Turmania said:


> Does anybody care about electricity bills anymore, or do most of you not have the responsibility of paying the bills? Who would buy these cards?



First of all, the average price of electricity in America is $0.125/kWh, which is very cheap. This means, for example, that a 100 watt difference between video cards equates to $36.50/year if the card is used 8 hours per day, 365 days per year... and that's a lot of gaming.
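That arithmetic checks out (the rate and usage pattern are the post's assumptions):

```python
price_per_kwh = 0.125    # average US residential rate cited above, in $/kWh
delta_kw = 0.100         # 100 W difference between cards
hours_per_year = 8 * 365 # 8 hours a day, every day

cost = price_per_kwh * delta_kw * hours_per_year
print(f"${cost:.2f} per year")  # $36.50 per year
```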

Don't get me wrong, I think efficiency should be the paramount concern, considering the impending ecological collapse and all. But because Nvidia opened the door to a complete disregard for efficiency this time around, I think AMD is following suit and going all out with clocks, because they realize they don't have to care about efficiency.

That being said, I wouldn't be surprised if you downclock and undervolt RDNA2, it'll probably be extremely efficient, much more than Ampere could or can be.


----------



## RedelZaVedno (Oct 20, 2020)

Nkd said:


> Someone didn't read the article. Seriously, come on now. The article literally explains where the wattage goes.


What's wrong with my numbers? Igor writes 320W TBP for the reference NAVI 21 XT and 355W for AIB variants ("Die 6800XT ist heiss, bis zu 355 Watt++" - "The 6800XT is hot, up to 355 watts++"). That translates into 400W+ peak power draw.


----------



## EarthDog (Oct 20, 2020)

RedelZaVedno said:


> What's wrong with my numbers? Igor writes 320W TBP for FE NAVI 21 XT and 355W for AIB variants ('bis zu 355Watt++').... That translates into +400W peak power draw.


How? Isn't TBP Total BOARD Power, which encompasses everything? How are you seeing a TBP value of XXX and coming up with YYY (more)?


----------



## RedelZaVedno (Oct 20, 2020)

EarthDog said:


> How? Isn't TBP Total BOARD Power, which encompasses everything? How are you seeing a TBP value of XXX and coming up with YYY (more)?


Actually it doesn't. Official TBP of 3080 is 320 Watts, yet it peaks at 370W (FE) and up to 470W (AIBs). I have no reason to assume RDNA2 will be any different. That's why Igor wrote 355W++.


----------



## EarthDog (Oct 20, 2020)

RedelZaVedno said:


> Actually it doesn't. Official TBP of 3080 is 320 Watts, yet it peaks at 370W (FE) and up to 470W (AIBs). I have no reason to assume RDNA2 will be any different. That's why Igor wrote 355W++.


You should link some support... As it stands, NV cards have a power limit where clocks and voltage drop to maintain that limit. In my experience, it doesn't go over by much - not even close. It depends on the power limits of the card: if it is set to 320W max, that is all they get, generally. It's true there are BIOSes with higher limits, but out of the factory at stock (FE speeds) it's a 320W card.


----------



## BoboOOZ (Oct 20, 2020)

beedoo said:


> So let me get this straight.
> 
> _Some _people are grumpy with AMD for supposedly not competing with the 3080, while some thought 3070 - and now speculation that one of the higher end cards looks as though it's getting a bit more juice (for whatever competitive reason) - people _seem _grumpy with that too...


In short, some people are always grumpy.

People unhappy with high-TBP GPUs? Buy a mid-tier one. Really interested in efficiency? Buy the biggest die, undervolt, underclock.

But how about waiting to see some actual numbers (performance, consumption, prices) before getting the pitchforks out?

Who am I kidding, those pitchforks are always out...



RedelZaVedno said:


> Actually it doesn't. Official TBP of 3080 is 320 Watts, yet it peaks at 370W (FE) and up to 470W (AIBs). I have no reason to assume RDNA2 will be any different. That's why Igor wrote 355W++.


You're assuming that based on Igor's assumptions about his leak, and you see no flaw in your reasoning?


----------



## mtcn77 (Oct 20, 2020)

AnarchoPrimitiv said:


> And that's a lot of gaming.


This is not about gaming, it is about gpu behaviour. You cannot schedule work for 100%.


AnarchoPrimitiv said:


> That being said, I wouldn't be surprised if you downclock and undervolt RDNA2, it'll probably be extremely efficient,


I just read the Timothy Lottes guide, and by funny coincidence we have a local console developer who echoed his steps verbatim, so I can say with some confidence that this is a matter of scheduling, and of how the GPU can 'see' the same workload progress that developers see using the Radeon profiler. Work isn't parallel; in fact, most of the time it is serial. If you have 64 compute units, the instruction engine assigns to each one by one. Even in the best circumstances* that is ~5% time lost to idle. You don't even need to keep the shaders working up until they meet the work-to-idle requirement.
*PS: that's with 1 kernel running; when multiple kernels are running this increases linearly - 4 workgroups increase idle time to ~18%.


----------



## Turmania (Oct 20, 2020)

Of course we are going to complain when a new GPU comes with 400W+ consumption. How some of you can be so ignorant and dismissive towards others about it is beyond belief. I said the same thing about Nvidia when it released: a lovely card, great performance, and no cost increase from the previous gen - but all that at the cost of power consumption, and the complications that brings with it make it a no-go for me.


----------



## R0H1T (Oct 20, 2020)

Vya Domus said:


> I don't follow - the limited edition of the 5700XT wasn't a different product, it was still named 5700XT. "6900XTX" implies a different product.


The naming scheme really doesn't matter, functionally the 3900X & 3900XT are the same products as well. AMD could, in theory, do the same with "big" Navi.


----------



## Chrispy_ (Oct 20, 2020)

At a rumoured 2.4GHz I'm expecting a lot of that GPU TDP to be caused by AMD's typical preference to ignore the performance/Watt sweet spot.

Underclockers and undervolters will likely be running their cards at 2GHz and sacrificing ~15% of the potential performance to get sub-200W total board power.

I am looking forward to the reviews, but even more to seeing what the undervolting potential is. Nothing says "quiet computing" more than an over-engineered cooling system for 350+ Watts that barely breaks a sweat at 200W.


----------



## RedelZaVedno (Oct 20, 2020)

BoboOOZ said:


> You're assuming that based on Igor's assumptions about his leak, and you see no flaw in your reasoning?


It's all speculation at this point. But knowing that Sapphire rated the Nitro+ 5700XT at 235W TBP _with the GPU at 170W_ while real-life peak power draw actually measured 310W, I'd say it's pretty safe to assume +50W over the stated TBP IF the leak of 230W GPU power holds water.


----------



## mtcn77 (Oct 20, 2020)

Chrispy_ said:


> Underclockers and undervolters will likely be running their cards at 2GHz and sacrificing ~15% of the potential performance to get sub-200W total board power.


You can still overclock and throttle these cards. You just drop the power limit. It is crazy how much you can do.


----------



## EarthDog (Oct 20, 2020)

I just don't get this undervolting crowd... You pay for a certain level of performance, but due to power you strip a fair amount of performance off of it to run lower. Why not pay less for a card down the stack with less power use and use it to its fullest potential? I don't buy a performance coupe and decide to chip it to lower the horsepower because I don't like the gas mileage...


----------



## R0H1T (Oct 20, 2020)

RedelZaVedno said:


> It's all speculation at this point. But knowing that Sapphire rated the Nitro+ 5700XT at 235W TBP _with the GPU at 170W_ while real-life peak power draw actually measured 310W, I'd say it's pretty safe to assume *+50W over the stated TBP* IF the leak of 230W GPU power holds water.


GPUs draw more power at higher voltages & temps - that's called physics. Of course this varies from silicon to silicon, so the cooling has to be over-engineered; not to mention AIBs can't possibly know the max power draw of the card in each and every scenario.


EarthDog said:


> I just don't get this undervolting crowd... You pay for a certain level of performance, but due to power you strip a fair amount of performance off of it to run lower. Why not pay less for a card down the stack with less power use and use it to its fullest potential? I don't buy a performance coupe and decide to chip it to lower the horsepower because I don't like the gas mileage...


Yeah, imagine buying a *9590* and undervolting or underclocking it to 4GHz


----------



## Kaleid (Oct 20, 2020)

EarthDog said:


> I just don't get this undervolting crowd... You pay for a certain level of performance, but due to power you strip a fair amount of performance off of it to run lower. Why not pay less for a card down the stack with less power use and use it to its fullest potential? I don't buy a performance coupe and decide to chip it to lower the horsepower because I don't like the gas mileage...



I haven't noticed the card clocking its MHz down because of the undervolting. And I don't have much to gain by overclocking it either; it just adds another 50MHz.
And of course some also want their cards to be quieter, which is nice.


----------



## Chrispy_ (Oct 20, 2020)

EarthDog said:


> I just don't get this undervolting crowd... You pay for a certain level of performance, but due to power you strip a fair amount of performance off of it to run lower. Why not pay less for a card down the stack with less power use and use it to its fullest potential? I don't buy a performance coupe and decide to chip it to lower the horsepower because I don't like the gas mileage...


We do it because we can afford to pay for higher-tier cards and run them quieter. A 5700XT limited to 150W is much quieter and still faster than a 5600XT running close to its power and voltage limit. The higher-tier card usually has a better cooler and better quality of manufacture too, because the margins aren't as slim higher up the product stack.

If you're running on a tight budget then you buy a lower-tier card and overclock the snot out of it. AMD's cards have always typically been very close to their overclock limits at factory stocks, much like Nvidia's 3000-series are now.

I chose to run my 5700XT at 1750MHz and 120W (probably about 150W total board power) and I could afford to leave the fans on minimum speed for silent 4K gaming. At default clocks it would initially boost to about 1950MHz, get hot, and then stabilise at about an 1850MHz game clock over a longer period. 1750MHz instead of 1850MHz is a small performance drop, basically negligible - but it was the difference between loud and silent, especially important for an HTPC in a quiet living room using an SFF case with relatively low airflow.


----------



## Vya Domus (Oct 20, 2020)

R0H1T said:


> The naming scheme really doesn't matter, functionally the 3900X & 3900XT are the same products as well. AMD could, in theory, do the same with "big" Navi.



Still, they have different clock speeds, even though the differences are minuscule.


----------



## EarthDog (Oct 20, 2020)

Chrispy_ said:


> We do it because we can afford to pay for higher-tier cards and run them quieter. A 5700XT limited to 150W is much quieter and still faster than a 5600XT running close to its power and voltage limit. The higher-tier card usually has a better cooler and better quality of manufacture too, because the margins aren't as slim higher up the product stack.
> 
> If you're running on a tight budget then you buy a lower-tier card and overclock the snot out of it. AMD's cards have always typically been very close to their overclock limits at factory stocks, much like Nvidia's 3000-series are now.


lol, it's your money... it screams a waste of cash to me.



Kaleid said:


> Haven't noticed the card clocking its MHz down because of the undervolting.


Depends on different factors. I don't practice this curious waste of money; I was going off the mention of a 15% performance loss. That's more than an entire card tier...


----------



## mtcn77 (Oct 20, 2020)

EarthDog said:


> I just don't get this undervolting crowd... You pay for a certain level of performance, but due to power you strip a fair amount of performance off of it to run lower. Why not pay less for a card down the stack with less power use and use it to its fullest potential? I don't buy a performance coupe and decide to chip it to lower the horsepower because I don't like the gas mileage...


Because it does not scale with available resources; in fact the more compute units you have, the more serial it gets to feed draw batches. Fury X, Vega 64, Navi, Radeon VII are all 16:1 CUs per rasterizer. They take twice as long to issue commands as a simple Bonaire or one of those smaller GPUs.


----------



## EarthDog (Oct 20, 2020)

mtcn77 said:


> Because it does not scale with available resources, in fact the more compute units you have, the more serial it gets to feed them. Fury X, Vega, Navi VII are all 16:1 cu per rasterizer. They take twice longer to issue commands than a simple Bonaire or one of those smaller gpus.


Cool story... but this doesn't answer the question of why people pay for XX, then lower volts and (generally) performance to WW... Get the WW card, save money, be quieter. Or, if the noise bothers you that much and money isn't an issue like the other dude asserts, buy an aftermarket heatsink... better performance for more money, and quiet!


----------



## mtcn77 (Oct 20, 2020)

EarthDog said:


> Cool story... but this doesn't answer the question of why people pay for XX, lower volts and performance to WW... Get WW card, save money, be quiet. Or, if it bothers you that much (the noise) and money isn't an issue, buy an aftermarket heatsink...better performance for more money and quiet!


It is instruction-scheduling limited. You can either wait until there is enough frontend decoding going on, or run them idle and kill any hope of overclocking gains. You aren't wasting; you are saving OC potential.
Look, people didn't question it when Nvidia started GPU Boost, or AMD started ULPS, but today nobody knows what the hell these cards are doing. Buildzoid undervolted Pascal to 0.6 V and it still kept going. I'm attributing it to voltage pumps - the GPU is buffering power to counteract vdroop.


----------



## RedelZaVedno (Oct 20, 2020)

I'm looking for a 250W GPU max. Best price/performance at this wattage gets my money. The 3070 looks promising, but I do expect RDNA2 to beat it in performance/watt and performance/dollar, given that it's on a superior node and AMD is an underdog in the GPU game. 52 CUs clocked at 2100MHz (around 14 Tflops) should match the 2080 Ti/3070 and have a favorable performance/watt ratio. It will all come down to pricing. I really hope AMD doesn't become greedy. All these 300-400W GPUs are a no-go in my eyes. I have no need for expensive room heaters.


----------



## EarthDog (Oct 20, 2020)

mtcn77 said:


> It is instruction scheduling limited. You can either wait until there is enough frontend decoding going on, or run them idle and kill any hope for overclocking gains. You aren't wasting, you are saving oc potential.
> Look, people didn't question when Nvidia started gpuboost, or AMD started ulps, but today nobody knows what the hell these cards are doing. Buildzoid undervolted Pascal to 0.6v, it still kept going. I'm attributing it to voltage pumps - the gpu is buffering power to counteract vdroops.


The story doesn't lie in the minutiae... at least I couldn't care less about it (thanks for the deets though). I get it... good to know... but that's minutiae. Look at it from a big-picture perspective.

The end result is XXX power, and people are reducing voltage, and at times clocks and performance, to get it. I don't get it (the losing-performance part), especially if it's several percent/card tier.


----------



## R0H1T (Oct 20, 2020)

Undervolting + OCing is the way to go, now if AMD's really pushed the card to the max that may not be feasible. I recall the VII could be undervolted & not lose much if at all in terms of consistent performance, though the max boost clocks might have gone down a bit.


----------



## Vayra86 (Oct 20, 2020)

EarthDog said:


> The story doesn't lie in the minutiae... at least I couldn't care less about it (thanks for the deets though). I get it... good to know... but that's minutiae. Look at it from a big-picture perspective.
> 
> The end result is XXX power, and people are reducing voltage, and at times clocks and performance, to get it. I don't get it (the losing-performance part), especially if it's several percent/card tier.



AIB cards generally get clocked out of their efficiency curve, and in this case even the FE does that. It's the same deal as with Vega: you underclock it because the gain you get is bigger than the FPS loss. Sometimes you can even get lower volts and the exact same performance, or you get better consistency.

I even see it on my Pascal 1080. A 100% power target gets me just as far as 110%, but it runs cooler, quieter, and maintains a stable boost clock better. At the same time, the FPS gain isn't very linear with the clockspeed gain, especially if that clock fluctuates all the time. Boost is great for the extra few hundred MHz it gives, and you keep letting it do that, but the minor gains above that come at a high noise/power cost. This is technically not an undervolt of course, but it illustrates the point. The bar has been pushed further with the generations past Pascal, closer to the edge of what GPUs can do - note the 3080 2 GHz clock issue, and the general power draw increase across the board. Turing was also hungrier already while boosting a bit lower.


----------



## mechtech (Oct 20, 2020)

SLK said:


> Looks like this gen of GPUs are all power-hungry. Efficiency is out of the window!



Well, they are huge chips with lots of memory. Any large engine will use a large amount of fuel.


----------



## mtcn77 (Oct 20, 2020)

EarthDog said:


> The end result is XXX power and people are reducing voltage and at times clocks and performance to do it.


That is the crazy part - we are only aware of it _since it doesn't do it automatically!_
I'm sure the next big thing they will try is full-on shader-pipeline interlocking. Look at it this way: they were just decoding serially, then they introduced per-lane intrinsics and lane-switching; now it will decide when and where to power up and down.
Yes, I think it will come to that. GPUs are getting fully customizable and there is zero benefit to leaving it to the customer. I mean the workloads are the same, the pipeline is the same, they have to do something... what better way than to split the instruction pipeline from the shader pipelines and only power the latter when it covers their cost to run. Every watt saved from static losses is one more available for faster switching.

The VRMs don't even care how much you pull; they just work until temperature kills them.


----------



## RedelZaVedno (Oct 20, 2020)

One stupid question... Could AMD just take the 52 CU GPU inside the Xbox Series X APU, clock it to, let's say, 2.23 GHz like in the PS5, and offer it as a discrete PC GPU?
That would probably be a cheap-to-produce yet powerful solution. It would be a 3070 killer if equipped with 10/12 GB of VRAM and priced at 350/400 bucks (like the 5700 series).
Is there any possibility of that happening?


----------



## FinneousPJ (Oct 20, 2020)

RedelZaVedno said:


> One stupid question... Could AMD just modify XBoX X 52CU GPU inside APU, clock it to lets say 2.23Ghz like in PS5 and offer it as discrete PC GPU?
> That would probably be cheap to produce yet powerful solution. It would be a 3070 killer if equipped with 10/12 GB of VRAM and priced at 350/400 bucks (like 5700 series).
> Is there any possibility of that happening?


Isn't the console solution a single chip? I.e., there isn't a separate GPU there to copy.


----------



## EarthDog (Oct 20, 2020)

Vayra86 said:


> AIB cards generally get clocked out of their efficiency curve, and in this case even the FE does that. It's the same deal as with Vega: you underclock it because the gain you get is bigger than the FPS loss. Sometimes you can even get lower volts and the exact same performance, or you get better consistency.
> 
> I even see it on my Pascal 1080. A 100% power target gets me just as far as 110%, but it runs cooler, quieter, and maintains a stable boost clock better. At the same time, the FPS gain isn't very linear with the clockspeed gain, especially if that clock fluctuates all the time. Boost is great for the extra few hundred MHz it gives, and you keep letting it do that, but the minor gains above that come at a high noise/power cost. This is technically not an undervolt of course, but it illustrates the point. The bar has been pushed further with the generations past Pascal, closer to the edge of what GPUs can do - note the 3080 2 GHz clock issue, and the general power draw increase across the board. Turing was also hungrier already while boosting a bit lower.


It seems like Ampere and the 5700XT are clocked out of their efficiency curves... and the rumors so far suggest the same for RDNA2. I don't think any of the AMD fanatics saw similar power envelopes coming (they are awfully quiet here... go figure), and here we are.


----------



## RedelZaVedno (Oct 20, 2020)

FinneousPJ said:


> Isn't the console solution a single chip, i.e. there isn't a GPU there to copy.


Yes, it is an APU, but it still has a CPU and GPU inside it. I don't know how much modifying one would need to do to make it a discrete GPU.


----------



## mtcn77 (Oct 20, 2020)

FinneousPJ said:


> Isn't the console solution a single chip, i.e. there isn't a GPU there to copy.


Under DX12 they are still treated as discrete chips. I don't see that changing in any way in the near future; it won't happen until they shift the GPU shaders into the CPU's FPUs.


----------



## Chrispy_ (Oct 20, 2020)

EarthDog said:


> lol, it's your money... it screams a waste of cash to me.


The entire "quiet computing" industry is a waste of cash. It doesn't add any performance at all but people pay serious money for it.
The entire "RGBLED" industry is a waste of cash. It doesn't add any performance at all but it costs quite a bit more whilst adding additional software bloat and cable spaghetti.
As you can tell from the current retail market - both of those segments are _so successful_ that they utterly dominate the market and leave almost nothing else available.

Underclocking and undervolting a graphics card is exactly what every laptop manufacturer has ever done. Nvidia went one step further with their Max-Q models and gave people the option to buy far more expensive GPUs than their laptop cooling is capable of, but dialled back to heavily-reduced clocks and TDPs. They sold in their millions; Max-Q was a huge success in the laptop world, despite the high cost.

I think we can agree to disagree because having options on the market is good and more consumer choice is always better than less. At least AMD's graphics driver is an excellent tuning tool for undervolting and underclocking.


----------



## FinneousPJ (Oct 20, 2020)

RedelZaVedno said:


> Yes it is an APU, but still it has CPU and GPU inside it. I don't know how much modifying one needs to do to make it discrete GPU.


I'd guess more modifying than is worth doing if they aren't doing it...


----------



## mtcn77 (Oct 20, 2020)

I think, if they split instruction pipelines from shader pipelines, they can do a frontend overclock until the pipelines are full - say the GPU works at not just 2.3 GHz but 3.0 GHz while the shaders are idle. How much it would help can be estimated, since they have pinpointed exactly where the bottlenecks are - 18% idle for 4 workgroups (just enough work for 1 shader of each 4096).


----------



## R0H1T (Oct 20, 2020)

Chrispy_ said:


> They sold in their millions, Max-Q was a huge success in the laptop world, despite the high cost.


High costs for whom? I get your point, but Nvidia probably made more money off their Max-Q models, especially for top-tier cards like the 2080, 2070 et al. It's the manufacturer & the buyer who've had to pay through their collective noses, even for reduced performance.


----------



## Vayra86 (Oct 20, 2020)

Chrispy_ said:


> The entire "quiet computing" industry is a waste of cash. It doesn't add any performance at all but people pay serious money for it.
> The entire "RGBLED" industry is a waste of cash. It doesn't add any performance at all but it costs quite a bit more whilst adding additional software bloat and cable spaghetti.
> As you can tell from the current retail market - both of those segments are _so successful  _that they utterly dominate the market and leave almost nothing else available.
> 
> ...



Couldn't agree more.

For me the biggest win is silence. I want a quiet rig above everything else, really. I play music and games over speakers. Noticeable fan noise from the case is the most annoying immersion breaker - much more so than the loss of single-digit FPS. Is that worth buying a bigger GPU that I then run at lower power? Probably, yes. It's that, or I can jump through a million hoops trying to dampen the noise coming out of the case... which also adds extra cost without offering the option of more performance should I want it. Because I haven't lost that when I buy a bigger GPU.


----------



## ebivan (Oct 20, 2020)

I am glad I already upgraded to a 750W PSU in anticipation of the 3080 that I never got  Now it will be powering Big Navi if AMD can deliver...


----------



## Chrispy_ (Oct 20, 2020)

RedelZaVedno said:


> I'm looking for 250W GPU max. Best price/performance at this wattage gets my money. 3070 looks promising, but I do expect RDNA2 to beat it in performance/watt and performance/dollar given that it's on superior node and AMD is an underdog in GPU game. 52CU clocked at 2100MHz (around 14 Tflops) should match 2080ti/3070 and have favorable performance/watt ratio. It will all come down to pricing. I really hope AMD doesn't become greedy. All these 300-400W GPUs are a no go in my eyes. I have no need for expensive room heaters.



I'll be trying out the RDNA2 cards for the exact same reason as you. 250W max in my HTPC, but the reason I'm back to Nvidia in the HTPC at the moment is the AMD HDMI audio driver cutting out with Navi cards. It didn't happen when I swapped to an RX480 or a 2060S, but when I tried a vanilla 5700 the exact same bug reappeared. A Microsoft update was the trigger, but AMD haven't put out a fix yet, and after 3 months I got bored of watching the thread of people complaining on AMD's forum get longer without acknowledgement, and moved on.


----------



## mtcn77 (Oct 20, 2020)

ebivan said:


> I am glad I already upgraded to a 750W PSU in anticipation of the 3080 that I never got  Now it will be powering Big Navi if AMD can deliver...


You'll have 7+ years of safe operation until mean power delivery is down to ~385 W, which is about what a 3080 system averages.
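For what it's worth, the compound arithmetic behind that figure is easy to check. A back-of-the-envelope sketch, taking the 10%-per-year capacity-loss figure as a given (it's the claim under dispute in this thread, not a verified number):

```python
# Back-of-the-envelope: compound a claimed 10%/year capacity loss
# on a 750 W PSU. The 10% figure is the poster's claim, not a datasheet value.

def remaining_capacity(rated_watts: float, years: int, annual_loss: float = 0.10) -> float:
    """Capacity left after `years` of compounding the claimed annual loss."""
    return rated_watts * (1 - annual_loss) ** years

for year in range(8):
    print(f"year {year}: {remaining_capacity(750, year):.0f} W")
# 750 * 0.9**7 is about 359 W, so under this model the unit crosses
# the ~385 W mark somewhere between year 6 and year 7.
```

Whether real, well-built PSUs actually decay that fast is exactly what the later replies push back on.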


----------



## ebivan (Oct 20, 2020)

mtcn77 said:


> You'll have +7 years of safe operation until mean power deliver is down to 385w ~ what it averages for a 3080 system.


Sorry, I dont understand what you mean.


----------



## EarthDog (Oct 20, 2020)

Chrispy_ said:


> The entire "quiet computing" industry is a waste of cash. It doesn't add any performance at all but people pay serious money for it.
> The entire "RGBLED" industry is a waste of cash. It doesn't add any performance at all but it costs quite a bit more whilst adding additional software bloat and cable spaghetti.
> As you can tell from the current retail market - both of those segments are _so successful  _that they utterly dominate the market and leave almost nothing else available.
> 
> ...


I think my talking point went over your head (seeing some of your talking points).... but we'll agree to disagree.


----------



## mtcn77 (Oct 20, 2020)

ebivan said:


> Sorry, I dont understand what you mean.


PSUs lose 10% capacity annually, on average.


----------



## Punkenjoy (Oct 20, 2020)

Well, this is not surprising.

Why would AMD leave watts on the table if they can make their cards faster? These 320 W cards exist because people are buying them. They even complain hard when they can't buy them because they are on back order.

If nobody was buying 250 W+ cards, AMD and Nvidia wouldn't produce them; it's as simple as that. That just shows how little people really care about power consumption in general.

The good thing is there will also be 200 W GPUs with a very good performance increase, for people who want a GPU that consumes less power while still performing better than the current gen.

But if people want the highest performance possible no matter the cost, why would AMD and Nvidia hold back?

If they could sell a 1000 W card with twice the performance the way they sell the 3080, they would certainly do it.



mtcn77 said:


> Psu's lose 10% capacity annually on average.


Any proof of this? 

I mean, if that were true, my PSU would just die right now with my current setup. But hey, it's still running strong and rock stable.

That is a myth, or maybe true for cheap PSUs with bad components, but it's certainly not true for good PSUs.


----------



## mtcn77 (Oct 20, 2020)

Punkenjoy said:


> Any proof of this?


I can't search for it right now; it is due to capacitor wear.

I mean warranty period + 7 years; they probably add 20% on top of this as well, not just 7.


----------



## ebivan (Oct 20, 2020)

mtcn77 said:


> Psu's lose 10% capacity annually on average.


Most hardware won't get to that age here. I've never had noticeable losses at my PSU before, maybe because I always oversized them and always bought the "better" brands. People tend to save on PSUs because it's an easy way to shave some bucks off the total system cost. I don't. This one is a pretty solid one from Seasonic. And even if I ever encounter instability during its lifetime, I am not afraid to take out the good old soldering iron and swap out those big caps that have aged. Anyway, caps mostly age in warm environments; since I'm not playing more than 15 h a week, thermal stress and therefore ageing should not be a problem during its lifetime.
I think 10% per year is a pretty worst-case scenario. That may be true for cheap PSUs with even cheaper caps running 24/7 at full load in a 50°C environment...


----------



## Chrispy_ (Oct 20, 2020)

mtcn77 said:


> Psu's lose 10% capacity annually on average.


That's not necessarily a false statement, but it doesn't represent the huge variety of PSU quality, the way different platform designs age differently, and the quality of the power grid that effectively wears PSUs out over time.

I think 10% loss per year is a safe bet for a _worst-case scenario_, but there are plenty of people and independent tests proving that decade-old PSUs are still capable of delivering all or nearly all of their rated power. PSU components are overprovisioned when new so that, as the capacitors and other components wear out, the unit still meets its rated specification during the warranty period. I forget where I read it, but I seem to recall a review of a decade-old OCZ 700W supply that had been in nearly 24/7 operation, yet still hit its rated specs without any problems. The temperature it ran at was much higher (but still in spec), the ripple was worse than when it was new (but still in spec), and it shut down when tested at 120% load, something it managed to cope with when new.

I would not be using a decade-old PSU for a new build with high-end parts, but at the same time I would expect a new 750W PSU to still deliver 750W in 7 years from now.


----------



## AusWolf (Oct 20, 2020)

Going by an electricity price of 20p per kWh (which is a relatively expensive UK price) and an average game time of 2 hours per day, a 300 W difference in total system power consumption is going to cost £43.80 *per year*, which comes to £3.65 *per month*! So everybody stop crying about bills!
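The arithmetic above checks out; a quick sanity check (the price and hours are this post's own assumptions):

```python
# Annual cost of a 300 W difference at 20p/kWh, 2 h of gaming per day
# (both figures are the post's assumptions, not universal rates).
extra_kw = 0.300              # 300 W expressed in kW
hours_per_year = 2 * 365
price_per_kwh = 0.20          # GBP

annual_cost = extra_kw * hours_per_year * price_per_kwh
print(f"per year:  £{annual_cost:.2f}")       # £43.80
print(f"per month: £{annual_cost / 12:.2f}")  # £3.65
```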

On the other hand, I'd be happy if more AIBs (other than EVGA) adopted the idea of AIO water-cooled graphics cards, especially with Ampere and RDNA2. It's not only good for these hungry GPUs, but using the radiator as exhaust helps keep other components cool as well.


----------



## medi01 (Oct 20, 2020)

Raevenlord said:


> Efficiency is one thing, power consumption is another.
> 
> NVIDIA's RTX 3080 is a much more power-efficient design than anything that came before (at 1440p and 4K), as our review clearly demonstrates.
> 
> ...



Remember that these charts are based on GPU performance in a bunch of games, but power consumption in just one game.
Actual perf/watt might be well off the claimed figure due to this uncertainty.


----------



## EarthDog (Oct 20, 2020)

medi01 said:


> Remember that these charts are based on GPU performance in a bunch of games, but power consumption in just one game.
> Actual perf/watt might be well off claimed one due to this uncertainty.


One of the first good points you've brought up in a while. I thought the power testing was at least across a few games... 

That said, when the new AMD card is released, it will be apples to apples if only across one title.


----------



## Nkd (Oct 20, 2020)

RedelZaVedno said:


> What's wrong with my numbers? Igor writes 320W TBP for FE NAVI 21 XT and 355W for AIB variants ('Die 6800XT ist heiss, bis zu 355 Watt++') That translates into +400W peak power draw.


NO! That is the entire damn board and all the power lol. Reference cards that are set to reference specs will never go above the power limit unless you manually adjust it in software. Plug and play it will maintain that power limit and be under it. That is how it works. You are comparing AIB cards that might have more power unlocked and consume more under peak load.


----------



## Jism (Oct 20, 2020)

RedelZaVedno said:


> If TGP is 320W than peak power draw must be north of 400W, just like 3080 and 3090. That's really, really bad. Any single decent GPU should not peak over 300W, that's the datacenter rule of thumb and it's getting stumbled upon with Ampere and RDNA2. How long will air cooled 400W GPU last? I'm having hard time believing that there will be many fully functioning air cooled Big Navis/3080-90s around in 3-5 years time. Maybe that's the intend, 1080TIs are still killing new sales.



Come on, you've got a slider now in your driver that limits the current your GPU can draw. If you think it's pushing out too many watts, slide it down. If you think you shouldn't be pushing for more framerates than your screen can handle, turn on Vsync or FreeSync. If you think it's consuming too much, undervolt and underclock it. You have so much freedom with cards these days. For a long time I ran an RX 580 at 1200 MHz with an undervolt, since the performance difference is only a few percent but the power reduction was huge.

The Vega, as well, is very efficient at certain clocks and voltages - until AMD decided that pushing Vega to compete with the 1080 was beyond its efficiency curve. Like a Ryzen 2700X: from 4 GHz and above you need more and more voltage to clock higher, until the point where the extra voltage needed for just a few more MHz is disproportionate. Makes no sense.






Here's an old screenshot of my RX 580 running at 300 W power consumption. Really, if your hardware is capable of it, it shouldn't cause issues. I'm running with Vsync anyway, capped at 70 Hz. It's not using 300 W sustained here, more like 160 to 190 W in gaming. The same goes for AMD cards: their envelope is set at up to xxx watts, and you can play / tweak / tune it if desired.


----------



## Nater (Oct 20, 2020)

Turmania said:


> Does anybody not care about electricity bills anymore, or most not having responsibilty to pay the bills? Who would buy these cards?



I don't think I've ever once thought of the electricity bill when it comes to computers, except when it comes to convincing the wife that upgrading will actually SAVE us money.  "Honey, it literally pays for itself!"

We have an 18k BTU mini-split that runs in our 1300 sq. ft. garage virtually 24/7.  The 3-5 PC's in the home that run at any given moment are NOTHING compared to that.


----------



## ebivan (Oct 20, 2020)

Nater said:


> I don't think I've ever once thought of the electricity bill when it comes to computers, except when it comes to convincing the wife that upgrading will actually SAVE us money.  "Honey, it literally pays for itself!"
> 
> We have an 18k BTU mini-split that runs in our 1300 sq. ft. garage virtually 24/7.  The 3-5 PC's in the home that run at any given moment are NOTHING compared to that.



Haha, you're American - power is practically free in the US. I pay €0.30 for a kWh, so at least for servers etc. I have to keep an eye on power consumption.
But for desktops etc. I don't care, since more power often means more performance, and high power is only drawn during high load, which is only minutes when working or 2 h a night when gaming...


----------



## dragontamer5788 (Oct 20, 2020)

mtcn77 said:


> I think, if they split instruction pipelines from shader pipelines, they can do a frontend overclock until the pipelines are full, say the gpu works at not just 2.3GHz, but say 3.0GHz when shaders are idle. How much it would help is relatable since they have pinpointed exactly where the bottlenecks are - 18% idle for 4 workgroups(just enough work for 1 shader of each 4096).



Shaders run instructions. I'm not entirely sure what you mean by this. 

Currently, RDNA (and GCN) split instructions into two categories: Scalar, and Vector. "Scalar" instructions handle branching and looping for the most part (booleans are often a Scalar 64-bit or 32-bit value), while "vector" instructions are replicated across 32 (64 on GCN) copies of the program.
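Loosely speaking, the scalar side makes one control-flow decision for the whole wavefront, while the vector side repeats the same operation across every lane. A toy Python model of that split, using a 32-lane wave as in RDNA's wave32 (purely illustrative, not real ISA behavior):

```python
# Toy model of the scalar/vector split in a SIMT GPU: one "scalar" decision
# steers control flow for the whole wave, while the "vector" work applies
# the same instruction across all 32 lanes (wave32, as in RDNA).
# Purely illustrative -- this is not real RDNA ISA behavior.
WAVE_SIZE = 32

def run_wave(lane_values, threshold):
    # Scalar-style work: a per-lane predicate mask (conceptually the EXEC
    # mask), plus a single branch decision shared by the whole wave.
    exec_mask = [v > threshold for v in lane_values]
    if not any(exec_mask):          # the entire wave skips the branch together
        return list(lane_values)
    # Vector-style work: the same instruction replicated across active lanes;
    # inactive lanes keep their old values (they sit out the instruction).
    return [v * 2 if active else v
            for v, active in zip(lane_values, exec_mask)]

wave = list(range(WAVE_SIZE))
result = run_wave(wave, threshold=15)
print(result[:4], result[-4:])      # low lanes untouched, high lanes doubled
```

The point the quote makes follows from this shape: divergence within a wave is handled by masking lanes, so booleans live naturally in a scalar bitmask rather than in the vector registers.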


----------



## TheoneandonlyMrK (Oct 20, 2020)

ebivan said:


> Haha, youre American, power is practically free in the US. I pay 0.30€ for a kWh, so at least for servers etc I have to have an eye on power consumption.
> But for Desktops etc i dont care since more power often means more performance and high power is only drawn during high  load which is only minutes when working or 2h a night when gaming...


It's a concern for me - UK power isn't cheap. Having said that, as you say, power use depends on load, and few cards run flat out for much of the day; even Folding@home or mining doesn't max out a card's power use in reality.
Still, some games are going to cook people while gaming - a warm winter, perhaps. Hopefully that lotto ticket's not as shit as all my last ones.


----------



## dragontamer5788 (Oct 20, 2020)

ebivan said:


> Haha, youre American, power is practically free in the US. I pay 0.30€ for a kWh, so at least for servers etc I have to have an eye on power consumption.
> But for Desktops etc i dont care since more power often means more performance and high power is only drawn during high  load which is only minutes when working or 2h a night when gaming...



I mean, if you care a lot about power consumption, you could just game at 1080p instead of 4K (as an example). All of these components draw power based on the complexity of the computation. If you lower the complexity (lowering resolution, or graphical quality in other ways), then you'll use less power.

All of these GPUs idle at levels we can pretty much ignore.





Even the 14 W idle of an RX Vega is €0.0042 per hour. It only ramps up to max power if you give it a game, or another load, that requires that kind of power draw. Cap your framerate to lower values (especially if you have VSync / GSync), etc. etc.

On the other hand, I don't think most people give power consumption much thought with their computers. But it's not like these things are running full tilt all the time. If you really cared about power, there's plenty you could do right now, today, with your current GPU to reduce power consumption.


----------



## Franzen4Real (Oct 20, 2020)

EarthDog said:


> I just don't get this undervolting crowd... You pay for a certain level of performance, but due to power you strip a fair amount of performance off of it to run lower. Why not pay less for a card down the stack with less power use and use it to its fullest potential? I don't buy a performance coupe and decide to chip it to lower the horsepower because I don't like the gas mileage...


I was under the impression from watching Optimum Tech that the point of undervolting was to achieve the lowest stable power draw at the same performance/clocks, as the stock voltage curve 'overfeeds' the cards, causing throttling etc. I was thinking of following his guide and entertaining the idea this gen for the sake of learning and getting first-hand experience, just to try to save some heat/noise, but not at the expense of performance. (Please correct me if I'm off here?)



Turmania said:


> Does anybody not care about electricity bills anymore, or most not having responsibilty to pay the bills? Who would buy these cards?


to me it has always been more about controlling noise than energy cost.


----------



## dragontamer5788 (Oct 20, 2020)

EarthDog said:


> I just don't get this undervolting crowd... You pay for a certain level of performance, but due to power you strip a fair amount of performance off of it to run lower. Why not pay less for a card down the stack with less power use and use it to its fullest potential? I don't buy a performance coupe and decide to chip it to lower the horsepower because I don't like the gas mileage...



You pay for a bigger and wider GPU. Effectively, you're paying for the silicon (as well as the yield of a large die: the larger the die, the harder it is to produce and, naturally, the more expensive it is).

Whether you run it at maximum power or minimum power is up to you. Laptop chips, such as the Laptop RTX 2070 Super, are effectively underclocked versions of the desktop chip: the same thing, just running at lower power (and greater energy efficiency) for portability reasons. Similarly, a mini-PC user may have a harder time cooling their computer, or a silent-build owner may want to reduce fan noise.

A wider GPU (ex: 3090) will still provide more power efficiency than a narrower GPU (ex: 3070), even if you downclock a 3090 to 3080 or 3070 levels. More performance at the same level of power: that's the main benefit of "more silicon".

--------

Power consumption scales with something like the voltage cubed (!!!): dynamic power goes as frequency times voltage squared, and frequency itself tracks voltage roughly linearly. Dropping 10% of your voltage costs you about 10% of your frequency, but you drop power usage by close to 30%.
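The cubic relationship is easy to sanity-check. A minimal sketch (a simplified model, not a measurement; it assumes dynamic power scales as frequency times voltage squared, with frequency tracking voltage linearly):

```python
# Rough dynamic-power model: P ~ f * V^2, and if frequency scales
# roughly linearly with voltage, P effectively scales ~ V^3.
def relative_power(voltage_scale: float) -> float:
    """Power relative to stock for a given voltage scale (0.9 = -10%)."""
    frequency_scale = voltage_scale          # assumption: f tracks V linearly
    return frequency_scale * voltage_scale ** 2

print(round(relative_power(0.90), 3))  # ~0.729 -> roughly 27% less power
print(round(relative_power(1.00), 3))  # 1.0 (stock)
```

Real voltage/frequency curves are not perfectly linear, so treat this as an order-of-magnitude illustration rather than a tuning guide.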


----------



## mtcn77 (Oct 20, 2020)

dragontamer5788 said:


> Currently, RDNA (and GCN) split instructions into two categories: Scalar, and Vector.


There are also the semi-permanent vector operations (vector-packed scalars, afaik), which are all the buzz.


dragontamer5788 said:


> Shaders run instructions. I'm not entirely sure what you mean by this.


Frontend and backend are different. The GPU has to decode first; then the shaders run the instructions. For the initial period, the shaders don't do much. The graphics command processor & workload managers (4, one per rasterizer) fetch the instructions that the shaders will use up.


----------



## TheoneandonlyMrK (Oct 20, 2020)

mtcn77 said:


> There is also the semi-permanent vector operations(vector packed scalars, afaik) which are all the buzz.
> 
> Frontend and backend are different. The gpu has to decode first, then shaders run them. For the initial period, shaders don't do much. The graphics command processor & workload managers(4 as per each rasterizer) download instructions that shaders will use up.


Wouldn't there be a flow through the shaders while the decoders work on the next batch and the batch before is returned to memory, except on startup?
I thought GPUs were made to stream data in and out, not do one job at a time.
The command processor and scheduling keep the flow going.


----------



## dragontamer5788 (Oct 20, 2020)

mtcn77 said:


> There is also the semi-permanent vector operations(vector packed scalars, afaik) which are all the buzz.



Those are just vector ops from the perspective of the assembly language.



mtcn77 said:


> Frontend and backend are different. The gpu has to decode first, then shaders run them. For the initial period, shaders don't do much. The graphics command processor & workload managers(4 as per each rasterizer) download instructions that shaders will use up.



What I'm talking about is in the compute units themselves. See page 12: https://developer.amd.com/wp-content/resources/Vega_Shader_ISA_28July2017.pdf






The sALU processes scalar instructions (loops, branching, booleans); the sGPRs hold primarily booleans, but also function pointers, the call stack, and things of that nature.

The vALUs process vector instructions, which include those "packed" instructions. If we wanted to get more specific, there are also LDS, load/store, and DPP instructions going to different units. But by and large, the instructions that make up the bulk of an AMD GPU workload are classified as either vector or scalar.

You're right in that the fixed-function pipeline (not shown in the above diagram), in particular rasterization ("ROPs"), constitutes a significant portion of the modern GPU. But you can see that the command processor is very far away from the vALUs / sALUs inside the compute units.
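The scalar/vector split is visible right in the assembly mnemonics: GCN/RDNA scalar ops carry an `s_` prefix and vector ops a `v_` prefix. A toy classifier to illustrate (the helper function is mine, not a real tool; the mnemonics are real GCN ones):

```python
# Illustrative only: GCN/RDNA assembly mnemonics mark the issuing unit
# by prefix -- "s_" ops go to the sALU, "v_" ops to the vALUs.
def issuing_unit(mnemonic: str) -> str:
    if mnemonic.startswith("s_"):
        return "sALU"    # loops, branches, booleans, shared scalars
    if mnemonic.startswith("v_"):
        return "vALU"    # per-lane vector math, incl. packed ops
    return "other"       # LDS (ds_), flat/global load-store, export, etc.

print(issuing_unit("s_cbranch_scc0"))  # sALU
print(issuing_unit("v_add_f32"))       # vALU
print(issuing_unit("v_pk_add_f16"))    # vALU -- one of the "packed" ops
print(issuing_unit("ds_read_b32"))     # other (LDS unit)
```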



theoneandonlymrk said:


> Wouldn't there be a flow through the shaders while the decoders work on the next batch and the batch before is returned to memory, except on startup.
> I thought GPU were made to stream data in and out not do one job at a time.
> The command processor and scheduling keep the flow going..



AMD's command processors are poorly documented. I can't find anything that describes their operation very well. (Well... I could read the ROCm source code, but I'm not *THAT* curious...)

But from my understanding: the command processor simply launches wavefronts. That is, it sets up the initial sGPRs for a workgroup (x, y, and z coordinates of the block), as well as VGPR0, VGPR1, and VGPR2 (for the x, y, and z coordinates of the thread). Additional parameters go into sGPRs (shared between all threads). Then it issues a jump (or function call) to send the compute unit to a location in memory. AMD command processors have a significant amount of hardware scheduling logic for events and the ordering of wavefronts: priorities and the like.

But the shader has already been converted into machine code by the OpenCL, Vulkan, or DirectX driver, and loaded somewhere. The command processor only has to set up the parameters and issue a jump command to get a compute unit to that code (once all synchronization functions, such as OpenCL Events, have proven that this particular wavefront is ready to run).
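The launch sequence described above can be sketched roughly like this (a toy Python model; every field name here is made up for illustration, and this is not how the hardware is actually programmed):

```python
# Toy model of a wavefront launch: the command processor fills the
# initial scalar/vector registers, then points the CU at shader code.
def launch_wavefront(workgroup_xyz, entry_point, kernel_args, lanes=64):
    wave = {
        # scalar registers: one copy, shared by all lanes of the wavefront
        "sgpr": [*workgroup_xyz, *kernel_args],
        # vector registers: one value per lane; VGPR0..2 would hold the
        # per-thread x/y/z coordinates (flat x ids here for simplicity)
        "vgpr0_x": list(range(lanes)),
        "vgpr1_y": [0] * lanes,
        "vgpr2_z": [0] * lanes,
        # finally, "jump" the compute unit to the pre-compiled shader
        "pc": entry_point,
    }
    return wave

wave = launch_wavefront((3, 0, 0), entry_point=0x1000, kernel_args=(42,))
print(wave["sgpr"])  # [3, 0, 0, 42] -- block coords plus kernel arguments
print(wave["pc"])    # 4096
```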


----------



## Cheeseball (Oct 20, 2020)

Chrispy_ said:


> I'll be trying out the RDNA2 cards for the exact same reason as you. 250W max in my HTPC but the reason I'm back to Nvidia in the HTPC at the moment is the AMD HDMI audio driver cutting out with Navi cards. Didn't happen when I swapped to an RX480 or a 2060S, but when I tried a vanilla 5700 the exact same bug reappeared. A microsoft update was the trigger but AMD haven't put out a fix yet and after 3 months I got bored of watching the thread of people complaining on AMD's forum get longer without acknowledgement and moved on.



The new 20.10.1 driver seems to address the HDMI audio issue with AV receivers. I have not tested this on the RX 5700 XT and Onkyo yet.


----------



## Zach_01 (Oct 20, 2020)

RedelZaVedno said:


> Performance per watt did go up on Ampere, but that's to be expected given that Nvidia moved from TSMCs 12nm to Samsung’s 8nm 8LPP, a 10nm extension node. What is not impressive is only 10% performance per watt increase over Turing while being build on 25% denser node. RDNA2 arch being on 7 nm+ looks to be even worse efficiency wise given that density of 7nm+ is much higher, but let's wait for the actual benchmarks.


You can’t actually compare nodes, nor estimate perf/W gains from a node shrink alone. Look at Zen 3: on the exact same node as Zen 2, it has 20% better perf/W purely from architectural improvements. Don’t confuse this with higher IPC at the same clocks; if you increase IPC alone, without perf/W improvements, power consumption goes up. It’s physics. It’s not only clock speed that draws power.

RDNA2 is on a better node than RDNA1 (the 7NP DUV node, not 7nm+ EUV) that (by rumor) offers 10~15% higher density, and combined with the improvements in the RDNA2 architecture it is “said” to deliver +50% better perf/W.

If true, where exactly that will place the 6900 against Ampere is yet to be seen.



EarthDog said:


> It seems like Ampere so far is and the 5700XT... clocked out of their efficiency curves... and it seems the same with RDNA2 with the rumors so far...I don't think any of the AMD fanatics saw similar power envelopes coming (they are awfully quiet here... go figure) and here we are.


I was expecting it... the 300~320 W TBP. It couldn’t have been anything else in order to offer similar 3080 performance. Fewer watts didn’t add up, and why shouldn’t AMD use all the watts up to Ampere’s level? Again, my thoughts.

—————————————

Personally I don’t care about a GPU drawing 350 or 400 W. I used to have an R9 390X OC model with a 2.5-slot cooler and it was just fine. That was rated at 375 W TBP.
The 5700XT now offers more than 2x the performance with 240 W peaks and 220 W average power draw.

Every flagship GPU is set to work (when maxed out) outside its efficiency curve, unless there is no competition.

AMD's drivers, apart from the power and performance sliders, also offer the “Chill” function. You can set a min/max FPS target. In most games, if I use this feature to cap FPS at min/max 40/60, the card's average draw is less than 100 W.
My monitor is 60 Hz, and 60 is the target while moving; if you stop moving in-game, the FPS drops to 40. I can set it to 60/60 if I like.
My monitor is a 13.5-year-old 1920x1200 16:10 panel, and I was planning to switch to ultrawide 6 months ago, but the human malware changed that, along with other aspects of my (our) life (lives).

There is no point for me to complain about the amount of power GPUs are drawing. Buy a lower-tier model. And perf/W is a continuously improving matter; we just can’t always use flagship models as examples.


----------



## dragontamer5788 (Oct 20, 2020)

Cheeseball said:


> The new 20.10.1 driver seems to address the HDMI audio issue with AV receivers. I have not tested this on the RX 5700 XT and Onkyo yet.



My real issue with the RX 5700 XT series is the lack of ROCm support.

Unofficial ROCm support is beginning to happen in ROCm 3.7 (released in August 2020). But there was over a year during which compute fans were unable to use ROCm at all on Navi. To be fair, AMD never promised ROCm support on all of their cards, but it really takes the wind out of people's sails when they're unable to "play" with their cards. Even older cards like the RX 550 never really got ROCm support (only the RX 580 got official support).

For now, my recommendation for AMD GPU-compute fans is to read the documentation carefully before buying. Wait for a Radeon Instinct card, like the MI25 (aka: Vega 64), to come out before buying that model. AMD's ROCm is clearly aimed at their MI platform and not really their consumer cards. The Polaris and Fiji generations behind the MI6 and MI8 have good support, but not necessarily other cards.

---------

ROCm suddenly gaining support for Navi in 3.7 suggests that this new Navi 2x series might have an MI card in the works, and therefore might be compatible with ROCm.


----------



## AusWolf (Oct 20, 2020)

theoneandonlymrk said:


> It's a concern for me, UK power isn't cheap, having said that ,as you say power use depends on load, and few cards use flat out power much of the day, even folding at home or mining doesn't Max a cards power use in reality.
> Still,  Some game's are going to cook people while gaming, warm winter perhaps, hopefully that looto tickets not as shit as all my last one's.


I just did the math a few posts above yours. If you pay 20p per kWh, then 2 hours of gaming (or folding, or whatever) every day on a computer that draws 300 W more than the one you currently own will increase your bills by *£3.65 a month*! If you fold 24/7, fair enough, but other than that, I wouldn't worry too much.
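For anyone who wants to redo this arithmetic with their own tariff, here is the calculation as a small sketch (the function name is mine; the 20p/kWh figure is the one quoted above):

```python
# Monthly cost of EXTRA power draw: watts -> kWh per month -> GBP.
def monthly_cost_gbp(extra_watts, hours_per_day, price_per_kwh=0.20):
    kwh_per_month = extra_watts / 1000 * hours_per_day * 365 / 12
    return kwh_per_month * price_per_kwh

print(round(monthly_cost_gbp(300, 2), 2))   # 2 h/day gaming: ~3.65
print(round(monthly_cost_gbp(300, 24), 2))  # folding 24/7: ~43.8
```

Plug in your own kWh price and usage hours; the £3.65 figure above falls straight out of the defaults.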


----------



## Vayra86 (Oct 20, 2020)

AusWolf said:


> I just made my calculations a few posts above yours. If you pay 20p per kWh, then 2 hours of gaming (or folding, or whatever) every day with a computer that eats 300 W more than the one you currently own will increase your bills by *£3.65 a month*! If you fold 24/7, fair enough, but other than that, I wouldn't worry too much.



3.5 pounds is a few pints in the pub you can't go to. Definitely worth considering, I say.


----------



## TheoneandonlyMrK (Oct 20, 2020)

dragontamer5788 said:


> Those are just vector ops from the perspective of the assembly language.
> 
> 
> 
> ...


Sooo, work does flow through then? Lol, ty.


----------



## thesmokingman (Oct 20, 2020)

Turmania said:


> Does anybody not care about electricity bills anymore, or most not having responsibilty to pay the bills? Who would buy these cards?



That's ironic... 



Turmania said:


> It is expected and rightly deserved by AMD, I expect this tike around both their CPU & GPU are both fully matured and we wont see the bios and software issues that happened last year at least not in that scale. However, when it comes to power consumption i do not believe the 65w. It will consume just as much as i5 10600k.



The same ppl who buy 10600K CPUs thinking they only use 125 W? The same ppl who think they're saving the world by consuming less power but are actually running at PL2 all the time, thus consuming way more power?


----------



## TheoneandonlyMrK (Oct 20, 2020)

AusWolf said:


> I just made my calculations a few posts above yours. If you pay 20p per kWh, then 2 hours of gaming (or folding, or whatever) every day on a computer that eats 300 W more than the one you currently own will increase your bills by *£3.65 a month*! If you fold 24/7, fair enough, but other than that, I wouldn't worry too much.


Technically I don't directly pay the bill, she does. Damn electric company.


----------



## Tomgang (Oct 20, 2020)

320 watts is a lot, whatever it's Nvidia or AMD. But some might not want their card to consume 320+ watts all the time, and I am one of them.

There are ways to keep consumption down. You can limit FPS in some games, you can activate V-Sync so you run at 60 FPS and keep GPU load down, or you can download, for example, MSI Afterburner. There is a little slider called power target; with it, you can limit the maximum power the card is allowed to use. From what I know, the RTX 3080 can be limited all the way down to only 100 watts. Undervolting can also save you some watts: again, it seems the RTX 3080 can be good for up to 100 watts of savings just by limiting the maximum voltage to the GPU, without suffering too much performance loss. I have used the power target slider for years to dial in a fitting power consumption.

I am not expecting to get an RDNA2-based card, but I do hope AMD can still provide a good amount of resistance to the RTX 3080, because we all know: competition is good for consumer pricing.


----------



## mtcn77 (Oct 20, 2020)

dragontamer5788 said:


> But you can see that the command-processor is very far away from the vALUs / sALUs inside of the compute units.


Well, far or near, on the time scale they are placed consecutively; one precedes the other, which puts the pressure on the GCN frontend.


dragontamer5788 said:


> vALUs process vector instructions, which include those "packed" instructions.


The semi-persistent stuff is scalar-timed vector ops, which save on critical timing. They consume vector memory in a scalar fashion, which saves on decode latency, since it follows the developer's instruction and allows for lane intrinsics and full memory utilization.


dragontamer5788 said:


> But from my understanding: the command processor simply launches wavefronts.


Yes. There are 2560 wavefronts in a CU, and there are 64 CUs per command processor. It takes 64 cycles for each CU to get one workgroup issued, and thereafter 64 cycles for every wave per CU. It takes a lot of time until the shaders are fully operational.


dragontamer5788 said:


> The command processor only has to issue a jump command to get a compute unit to that code.


----------



## EarthDog (Oct 20, 2020)

dragontamer5788 said:


> You pay for a bigger and wider GPU. Effectively: you're paying for the silicon (as well as the size of a successful die. The larger the die, the harder it is to produce and naturally, the more expensive it is).
> 
> Whether you run it at maximum power, or minimum power, is up to you. Laptop chips, such as the Laptop RTX 2070 Super, are effectively underclocked versions of the desktop chip. The same thing, just running at lower power (and greater energy efficiency) for portability reasons. Similarly, a mini-PC user may have a harder time cooling down their computer, or maybe a silent-build wants to reduce the fan noise.
> 
> ...


We can think of situations where it could be worthwhile. I'm not talking about shoehorning these things into tiny boxes, etc. We can all think of exceptions.


----------



## AusWolf (Oct 20, 2020)

theoneandonlymrk said:


> Technically I don't directly pay the bill  *she does*  damn electric company .


And in my case, she shares the costs of living, so by my calculations, I would only pay £1.82 more per month.  I really don't understand why some of you guys are so scared of power-hungry PC components (unless you run your PCs at full load 24/7).


----------



## Turmania (Oct 20, 2020)

Nater said:


> I don't think I've ever once thought of the electricity bill when it comes to computers, except when it comes to convincing the wife that upgrading will actually SAVE us money.  "Honey, it literally pays for itself!"
> 
> We have an 18k BTU mini-split that runs in our 1300 sq. ft. garage virtually 24/7.  The 3-5 PC's in the home that run at any given moment are NOTHING compared to that.



When I buy a new system, I have to pay wifey tax. Which in the end costs me the same as a new system and in many cases more! But at least, everyone is happy.


----------



## dragontamer5788 (Oct 20, 2020)

mtcn77 said:


> Yes. There are 2560 wavefronts in a CU and there are 64 CU's per command processor. It takes 64 cycles for each CU to get 1 operation workgroup issued and thereafter 64 cycles for every wave per CU. It takes a lot of time until shaders are fully operational.



By my tests, it takes 750 clock cycles to read a single wavefront's worth of data from VRAM (64x 32-bit reads). So on the timescale of these computations, 64 cycles isn't very much. It's certainly non-negligible, but I expect the typical shader will at least read one value from memory, then write one value to memory (1500 clocks for the pair), plus all of the math operations it has to do. If you're doing heavy math, that only increases the number of cycles per shader.

If you are shader-launch constrained, it isn't a big deal to wrap your shader code in a for(int i=0; i<16; i++){} statement. Just loop your shader 16 times before returning.
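A quick back-of-envelope with the cycle counts quoted above shows why the loop trick works (illustrative numbers only; the helper function is hypothetical):

```python
# Amortizing launch overhead: ~64 cycles of issue overhead per wavefront
# vs ~1500 cycles for one VRAM read plus one write per shader invocation.
def overhead_share(issue_cycles=64, body_cycles=1500, loop_count=1):
    """Fraction of total cycles spent on issue overhead."""
    total = issue_cycles + body_cycles * loop_count
    return issue_cycles / total

print(round(overhead_share(loop_count=1), 3))   # ~0.041 -> ~4% overhead
print(round(overhead_share(loop_count=16), 4))  # ~0.0027 -> ~0.3% overhead
```

Looping the shader body 16 times before returning shrinks the fixed issue cost from a few percent to essentially noise.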



> View attachment 172592



Yeah, I remember seeing the slide, but I couldn't remember where to find it. Thanks for the reminder. You'd think something like that would be in the ISA docs. Really, AMD needs to put out a new optimization guide containing information like this; they haven't written one since the 7950 series.


----------



## Makaveli (Oct 20, 2020)

SLK said:


> Looks like this gen of GPUs are all power-hungry. Efficiency is out of the window!



Pretty much to be expected: everyone is trying to make 4K playable with the new hardware, and that was not going to happen on a low power budget.



Turmania said:


> Does anybody not care about electricity bills anymore, or most not having responsibilty to pay the bills? Who would buy these cards?



You assume they cannot afford it and that everyone pays the same price for power.


----------



## Turmania (Oct 20, 2020)

thesmokingman said:


> That's ironic...
> 
> 
> 
> The same ppl who buy 10600K cpus who think they only use 125w?? The same ppl who think they're saving the world by consuming less power but are actually running at PL2 all the time thus consuming way more power?


Flattering to see you search my posts in other topics, but I still do not understand what you tried to say here. Perhaps I just had a very boring day at work and am not focused enough...


----------



## mtcn77 (Oct 20, 2020)

dragontamer5788 said:


> Yeah, I remember seeing the slide but I couldn't remember where to find it.


It is the 'engine optimization hot lap' by Timothy Lottes.




(Slides: "Engine Optimization Hot Lap", hosted on slideplayer.com)
				





dragontamer5788 said:


> By my tests, it takes 750 clock cycles to read a single wavefront's worth of data from VRAM (64x32 bit reads). So on the timescales of computations, 64 cycles isn't very much. Its certainly non-negligible, but I expect that the typical shader will at least read one value of memory, then write one value of memory (or take 1500 clocks), plus all of the math operations it has to do. If you're doing heavy math, that will only increase the number of cycles per shader.









There is also "AMD GPU Hardware Basics".
Basically, a hodge-podge of why we cannot keep GPUs on duty. Pretty funny stuff: an engineered list of excuses for why their hardware doesn't work.


----------



## thesmokingman (Oct 20, 2020)

Turmania said:


> Flattering to see you search my posts in other topics but I still do not understand what you tried to say here? perhaps i just had a very boring day at work and not focused enough...



Nah, I was just reading the other thread, flabbergasted at how misinformed you are, and it was ironic to see it here.



Makaveli said:


> Pretty much to be expected everyone is trying to make 4k playable with the new hardware and was not going to happen on a low power budget.



Good point, especially considering the pixel count is fourfold at 4K vs. 1080p. These new GPUs' power draws have not risen by the same factor as the pixel count.


----------



## Turmania (Oct 20, 2020)

Makaveli said:


> Pretty much to be expected everyone is trying to make 4k playable with the new hardware and was not going to happen on a low power budget.



I cannot speak about the new Radeons yet, but with Ampere, I would have settled for a 25% improvement in performance while keeping the same power envelope.
Of course, there will be many people happy with the current situation as well, so I can see and understand both sides of the argument.


----------



## Icon Charlie (Oct 20, 2020)

Well, it looks like I'll be keeping my 5700 for a while. My entire system load when running with this card is 247 watts max from the wall outlet. Do you think I'm going to buy a video card that adds another 200 watts without a 150% increase in performance???

Absolutely not. I bitched about Nvidia and their wattage vs. performance, and when this card comes out I will bitch about that one too.
This never has been, nor ever will be, Nvidia vs. AMD. This is, and always will be, about the best bang for the buck.


----------



## B-Real (Oct 20, 2020)

Here it's 255W.



lemoncarbonate said:


> You get more framerate with 3080 despite the insane power draw, some said it's the most-power-per-frame-efficient GPU out there.
> 
> But, I agree with you... I wish they could have made something that less hungrier. Imagine how amazing it would be if we could get <200W card that can beat 2080 Ti.



Of course it's the most power-per-frame-efficient GPU. But compare it against the 980-to-1080 transition: the 1080 gained nearly the same performance over the 980 as the 3080 did over its predecessor (a bit more for the 1080, actually), yet the efficiency gain there was more than 3x bigger (59% vs. 18%).


----------



## thesmokingman (Oct 20, 2020)

B-Real said:


> Here it's 255W.



Interesting. Igor is calculating what they expect it to be. The tweet source just lists TGP, not TBP, which is what Igor's revised list gives. Both are generally in line with each other.

Again, the power draw numbers are not real yet, so relax until we have actual numbers. But don't be surprised if they land in the same range as Nvidia's, because it will take MORE POWER to run realistic framerates at 4K; the jump in pixel count is really steep!


----------



## Makaveli (Oct 20, 2020)

RedelZaVedno said:


> It's not about the bill, it's about the heat. 400W GPU, 150W CPU, 50-150W for the rest of the system and you get yourself 0.6-0.7 KWh room heater. That's a no go in a 16m2 or smaller room in late spring and summer months if you live in moderate or warm climate.



I don't see that as a big problem at all. You are taking fully-loaded numbers and applying them very generally. When you are gaming, that 3080 isn't using the full 320 watts. You can see this with something like FurMark, which is considered a power virus by the GPU makers: check how much wattage the card uses during that vs. playing a game.

The same applies to a 105 W AM4 CPU and the rest of the system.

Unless you have everything running fully loaded non-stop, you won't hit the maximum power numbers you are using to make your argument. Current hardware is very good at quickly dropping into lower power states when needed, and pretty much everything out today is very good at idle power draw.


----------



## Vayra86 (Oct 20, 2020)

Power is a non issue.

Power does result in more heat, and more heat is always an issue, inside any case.

Worth considering is that CPU TDPs have been all over the place as well. The net result is you'll be taking a lot more measures than before just to keep a nice temperature equilibrium: more fans, higher fan speeds, higher airflow requirements. Current-day case design is of no real help either, in that sense. In that way, power increases directly translate to an increased purchase price for the complete setup. And that is on top of the mild increase to a monthly (!) energy bill. 3.5 pounds per month is another 42 pounds per year; three years of a high-power GPU versus the same tier of a past gen is +126 pounds sterling. 700 just became 826. It's not nothing. It's a structural increase in TCO. And that's not even counting the power/money used for the first 250 W we always drew.

Also worth considering is the fact that people desire smaller cases. ITX builds are gaining in popularity. Laptops are a growth market and a larger one than consumer desktops.

So... is power truly a non issue... not entirely then?



Makaveli said:


> Unless you have everything running fully loaded non stop you won't hit those maximum power numbers you are trying to use to make your argument. Current hardware is very good at quickly dropping in lower power states when needed. And pretty much everything out today is very good at idle power draw.



You can rest assured a common use case for a GPU is to run it at 100% utilization. Even if that doesn't always translate to 100% of the power budget, it's still going to be close.


----------



## Chrispy_ (Oct 20, 2020)

Cheeseball said:


> The new 20.10.1 driver seems to address the HDMI audio issue with AV receivers. I have not tested this on the RX 5700 XT and Onkyo yet.


Goddamnit! I waited three months, and the 2060S has only been in there for four days before AMD fixed it. I'm using Yamaha, but I suspect if they say "AV receivers" it basically means any situation where there's an intermediate device extracting audio between the GPU and the final display.

I'll have to give this a try at the weekend. The 5700XT is faster AND significantly quieter than the 2060S. 

Or rather, I should say that the 5700XT I have is quieter than the 2060S I have. I shouldn't make sweeping generalisations, since both cards vary widely in performance and acoustics depending on the exact model. Still, the 5700XT undervolts more gracefully than the 2060S; I guess that's 7 nm vs. 12 nm for you.


----------



## Cheeseball (Oct 20, 2020)

Chrispy_ said:


> Goddamnit!
> I waited three months and the 2060S has only been in there for four days before AMD fix it.
> I'm on Yamaha, but I suspect if they say "AV recievers" it basically means any situation where there's an intermediate device between the GPU and the final display.
> I'll have to give this a try at the weekend. The 5700XT is faster AND quieter than the 2060S and my highest-priority in an HTPC graphics card is silence and 4K60 performance, something I've recently realised the 2060S sucks at.



I would think that depends on which 2060S AIB card you purchased (in relation to the silence, since the cooling layout varies). No doubt the 2060S is the weaker card in terms of gaming performance, but it should be on par when it comes to using NVENC/NVDEC compared to VCE (except in real-time H.264 transcoding/streaming).


----------



## Makaveli (Oct 20, 2020)

Vayra86 said:


> You can rest assured a common use case for GPU is to run it at a100% utilization. Even if that doesn't always translate to 100% of power budget... its still going to be close.



Yes, for those using them for work, and for miners.

But for gamers, you are rarely sitting at 100% utilization.


----------



## mtcn77 (Oct 20, 2020)

Makaveli said:


> Unless you have everything running fully loaded non stop you won't hit those maximum power numbers you are trying to use to make your argument. Current hardware is very good at quickly dropping in lower power states when needed. And pretty much everything out today is very good at idle power draw.





Makaveli said:


> Yes for those that are using them for work and miners.


I have to say, after all the miner craze I have encountered, it seems very plausible that those numbers are not just real, they are vital to the operating life of the GPU. People have been cancelling factory overclocks just to make their cards last a couple of months longer. All those 20% overclocks and available power budget headroom are thrown out the window when you have a brick.


----------



## Zach_01 (Oct 20, 2020)

Makaveli said:


> But for gamers you are rarely sitting at 100% utilization.


In the 2~3 games I play lately, I see an average of 98~99% GPU usage when the GPU is unrestricted at full speed (no FPS cap), at 1920x1200 max settings.


----------



## EarthDog (Oct 20, 2020)

Makaveli said:


> But for gamers you are rarely sitting at 100% utilization.


Sorry, what? Any modern-ish game that doesn't have any limitations (CPU or V-Sync) will run a GPU at that 98/99% threshold; this is normal behavior. I can't think of a game I own, outside of GemCraft, that doesn't show ~99% use...


----------



## Mysteoa (Oct 20, 2020)

Vya Domus said:


> I don't follow, the limited edition of the 5700XT wasn't a different product, it was still named 5700XT. "6900XTX" implies a different product.



Navi 10 XTX is the 5700 XT 50th Anniversary (Lisa Su) edition, so essentially Navi 21 XTX is a higher-binned Navi 21 XT. Maybe Navi 21 XTX is a watercooled 6900 XT edition.


----------



## Makaveli (Oct 20, 2020)

EarthDog said:


> Sorry, what? Any modernish game that doesn't have any limitations (cpu or vsync) will run a gpu at that 98/99% threshold this is normal behavior. I cant think of a game I own outside of gemcraft that doesn't show ~99% use...



I mean it being pegged at 100% consistently. In most games, usage will vary with load screens, where you are on the map, how many enemies there are, etc.


----------



## dragontamer5788 (Oct 20, 2020)

Makaveli said:


> I mean it been pegged to 100% consistently. In most games usage will vary with load screens, where you are on the map, how many enemies etc.



Even at 100% utilization, that doesn't mean the GPU is using 100% of its power. Utilization is usually measured at the OS level, which is to say it reflects how full the GPU command queues are; it's not actually about power usage at all.

Different games will mostly run at high utilization (because the command queues constantly have something in them), but if you watch the power usage, it will vary.


----------



## EarthDog (Oct 21, 2020)

Makaveli said:


> I mean it been pegged to 100% consistently. In most games usage will vary with load screens, where you are on the map, how many enemies etc.


Load screens, sure... otherwise, it's a pretty consistent 98/99%. Very consistent (again, unless V-Sync or a CPU bottleneck intervenes). As was said, power can vary though.


----------



## Th3pwn3r (Oct 21, 2020)

Turmania said:


> We used to have two slot gpu's as of last gen that went upto 3 slots, and now we are seeing 4 slot gpu's and it is all about to cool the power hungry beasts. But this trend surely has to stop. Yes, we can undervolt to bring power consumption to our desired needs and will most certainly be more efficient to last gen. But is that what 99% of users would do?  I think about not only the power bill, the heat that is outputted, the spinning of fans,band consequently the faster detoriation of those fans and other components in the system. The noise output, and the heat that comes from the case will be uncomfortable.




I suggest you shut down your computer or power off your phone, because you're just wasting electricity and generating heat for what reason? I'm not serious. There's a lot of goofiness going on in your post, but if you don't want noise and heat, then just put your PC in the next room over and use extension cables for everything, OR you could vent your PC somehow. A LONG time ago I vented my PC into my attic, and while many would say small, low-pressure PC fans won't push the air up and out, I can say for sure that it worked for me.


----------



## Mussels (Oct 21, 2020)

EarthDog said:


> I just don't get this undervolting crowd... You pay for a certain level of performance, but due to power you strip a fair amount of performance off of it to run lower. Why not pay less for a card down the stack with less power use and use it to its fullest potential? I don't buy a performance coupe and decide to chip it to lower the horsepower because I don't like the gas mileage...



Freedom to run either way. Right now at 1440p 144 Hz, I doubt a 3080 will get maxed out very often - so I'd either run Vsync, a power limiter, or both to keep the heat down until I actually need the performance.

To put it another way, I don't want a 400 W screamer to run CSGO - I'd rather it become a 200 W card in that situation.

Edit: with my 1080, it's almost always at 100% load, except for the instances where I'm CPU limited. Even if it's not an issue NOW, it WILL be as the cards age.


----------



## Th3pwn3r (Oct 21, 2020)

Mussels said:


> Freedom to run either way. Right now at 1440p 144 Hz, I doubt a 3080 will get maxed out very often - so I'd either run Vsync, a power limiter, or both to keep the heat down until I actually need the performance.
> 
> To put it another way, I don't want a 400 W screamer to run CSGO - I'd rather it become a 200 W card in that situation.



Fair point, but this isn't the same scenario others describe. But what you've described is also wasteful if you're not going to use the full potential of the card you have installed. In your case you probably shouldn't have upgraded at all. Seems like a hassle to me to water down a card and then boost it back up just because I'm gonna play a different game.


----------



## mtcn77 (Oct 21, 2020)

Th3pwn3r said:


> But what you've described is also wasteful if you're not going to use the full potential of the card you have installed.


You have to consider that whatever you do, that GPU is never gonna run its workloads serially. There is an order-of-magnitude power difference between running the card at 99% and at 100%. Are you going to pursue that 1%? It is not 99p's, or 999th's either. Just 99 fps versus 100, at the cost of disrupted case internals and the CPU and PSU overheating as a result. Not cool.


----------



## Mussels (Oct 21, 2020)

Th3pwn3r said:


> Fair point, but this isn't the same scenario others describe. But what you've described is also wasteful if you're not going to use the full potential of the card you have installed. In your case you probably shouldn't have upgraded at all. Seems like a hassle to me to water down a card and then boost it back up just because I'm gonna play a different game.



You don't see the point in getting a GPU twice as fast as what I have, for future games? You may have odd views on this stuff.


----------



## Camm (Oct 21, 2020)

Discussion of boosting, with Sony and AMD continuing to separate Game clock from Boost clock, is much more interesting than the 'TDP' numbers, IMO.

Much like with CPUs, TDPs will start becoming irrelevant, and I believe this is the first move as such, with boosting becoming much more deterministic and transitory.


----------



## nguyen (Oct 21, 2020)

Th3pwn3r said:


> Fair point, but this isn't the same scenario others describe. But what you've described is also wasteful if you're not going to use the full potential of the card you have installed. In your case you probably shouldn't have upgraded at all. Seems like a hassle to me to water down a card and then boost it back up just because I'm gonna play a different game.



Have you ever seen those anime where the villains just keep on unleashing their power when the MC powers up?


----------



## EarthDog (Oct 21, 2020)

Mussels said:


> Freedom to run either way. Right now at 1440p 144 Hz, I doubt a 3080 will get maxed out very often - so I'd either run Vsync, a power limiter, or both to keep the heat down until I actually need the performance.
> 
> To put it another way, I don't want a 400 W screamer to run CSGO - I'd rather it become a 200 W card in that situation.
> 
> Edit: with my 1080, it's almost always at 100% load, except for the instances where I'm CPU limited. Even if it's not an issue NOW, it WILL be as the cards age.


Fair point. But in cases like that, put a frame limiter of some sort on in-game... This way, when you play titles like CSGO hitting 300 fps, you'll cap it to your refresh rate; power use, noise, and temps all drop, while other games where the horsepower is needed aren't left lacking.


----------



## Th3pwn3r (Oct 21, 2020)

mtcn77 said:


> You have to consider that whatever you do, that GPU is never gonna run its workloads serially. There is an order-of-magnitude power difference between running the card at 99% and at 100%. Are you going to pursue that 1%? It is not 99p's, or 999th's either. Just 99 fps versus 100, at the cost of disrupted case internals and the CPU and PSU overheating as a result. Not cool.



If your GPU is causing your CPU and PSU to overheat, then you have some serious build issues, and I suggest making the necessary modifications. Maybe your case is one of those full-glass, RGB pieces of junk with zero airflow.



Mussels said:


> You don't see the point in getting a GPU twice as fast as what I have, for future games? You may have odd views on this stuff.



No, I don't see a point in paying a premium for a premium video card now to play future games later. You'd probably be better off buying a future card when the future games are out, BUT I'm talking about games a couple of years out or so. Personally, I don't think future-proofing is really a thing. I don't always upgrade out of necessity. However, I'm also not concerned about power consumption or heat. The smallest power supply I have is a 750 watt, followed by 850s and 1200s (this laptop excluded).


----------



## mtcn77 (Oct 21, 2020)

Th3pwn3r said:


> If your GPU is causing your CPU and PSU to overheat, then you have some serious build issues, and I suggest making the necessary modifications. Maybe your case is one of those full-glass, RGB pieces of junk with zero airflow.


Yes, because everybody who isn't using a blower-type reference card is in this group together.
I don't wanna fight, since I haven't sorted out which type of internet overlord you are, but it is widely apparent that open-bench-type cases do not constitute the bulk of PC users. I agree it doesn't matter in some cases, but they aren't the majority of cases.


----------



## EarthDog (Oct 21, 2020)

mtcn77 said:


> Yes, because everybody who isn't using a blower-type reference card is in this group together.
> I don't wanna fight, since I haven't sorted out which type of internet overlord you are, but it is widely apparent that open-bench-type cases do not constitute the bulk of PC users. I agree it doesn't matter in some cases, but they aren't the majority of cases.


Nor is shoehorning a 320 W+ card into a shoebox and thinking it would be OK. That's a two-way street.


----------



## Vayra86 (Oct 21, 2020)

EarthDog said:


> Nor is shoehorning a 320 W+ card into a shoebox and thinking it would be OK. That's a two-way street.



Take off the top and cut out the DisplayPorts... external GPU enclosure for ultra cheap.


----------



## Chrispy_ (Oct 21, 2020)

Oh man, you guys are still trying to get your head around this, huh?
Here's a 5700 XT of mine, graphed for various things, but at the lowest stable OCCT voltages for each clock:

[chart not shown]
You buy the product and can run it at any clock and power level you choose, as long as it's stable. You can see that, in an ideal world, the best performance/Watt for this card was at ~1375 MHz.
AMD sold it at 1850 MHz with a much higher TDP, and subsequently higher heat/noise levels, than the 12 nm TU106 it competed against. That's taking the efficiency advantage of TSMC's 7 nm node and throwing it away, and then throwing away even more just to get fractionally higher benchmark scores.

You literally get a slider in the driver where you can undo this dumb decision. What you do with that slider is entirely up to you; it's not going to change how much you paid for the card, only how much you want to trade peace and quiet for performance. Clearly noise is a big problem, because quiet GPUs are a big selling point for all AIB vendors, all trying to compete with larger fans at lower RPMs, features like idle fan stop, etc. If you have a huge case with tons of low-noise airflow, you can afford to buy a gargantuan graphics card that'll dissipate 300 W quietly and let the case deal with that 300 W problem separately.

If you *don't* have a high-airflow case, or loads of room, such cards may not even physically fit, and the card's own fan noise is irrelevant because it'll dump so much heat into a smaller case that _all the other fans in the case ramp up_ in their attempt to compensate for the additional 300 W burden of the graphics card.

I haven't even mentioned electricity cost or the unwanted effect of heating up the room. Those are also valid arguments, but not necessary: the noise created by higher power consumption is enough to justify undervolting (and minor underclocking) all by itself.
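For anyone wondering why there's a sweet spot at all, here's a rough toy model: dynamic power scales roughly with f·V², plus a fixed baseline, while game performance scales at best linearly with f. Every voltage/frequency pair and constant below is invented purely for illustration (they are not measured 5700 XT values); they're only chosen so the curve peaks mid-range, like the graph above.

```python
# Toy efficiency curve: power = BASE_W + K * f * V^2 (all constants invented).
# Performance is proxied by clock alone, so perf/W peaks where the V/F curve
# starts bending upward -- in the mid-range, not at the top clock.

vf_curve = [  # (MHz, volts) -- hypothetical points, not real 5700 XT data
    (1200, 0.70),
    (1375, 0.75),
    (1600, 0.90),
    (1850, 1.15),
]

BASE_W = 100.0   # fixed overhead (fans, memory, leakage) -- arbitrary
K = 0.1          # dynamic-power constant -- arbitrary

def perf_per_watt(mhz, volts):
    power = BASE_W + K * mhz * volts ** 2
    return mhz / power

best = max(vf_curve, key=lambda p: perf_per_watt(*p))
print(best[0])   # with these made-up constants, the peak lands at 1375 MHz
```

The top clock costs a disproportionate amount of voltage (and therefore power) for its last few percent of performance, which is exactly what the undervolting slider lets you claw back.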


----------



## squallheart (Oct 21, 2020)

RedelZaVedno said:


> Performance per watt did go up on Ampere, but that's to be expected given that Nvidia moved from TSMC's 12 nm to Samsung's 8 nm 8LPP, a 10 nm-extension node. *What is not impressive is the mere 10% performance-per-watt increase* over Turing while being built on a 25% denser node. RDNA2, being on 7 nm+, looks to be even worse efficiency-wise given that the density of 7 nm+ is much higher, but let's wait for the actual benchmarks.



Did you literally just completely ignore the chart that was a few posts above you? 100/85 = 117.6%, so still a 17.6% improvement in performance per watt over the most efficient Turing GPU.


----------



## mtcn77 (Oct 22, 2020)

dragontamer5788 said:


> Really, AMD needs to put out a new optimization guide that contains information like this (which they haven't written since the 7950 series)


Thank you for some very valuable insight. It is all a game to me; however, it is a learning opportunity nonetheless.


dragontamer5788 said:


> If you are shader-launch constrained, it isn't a big deal to have a for(int i=0; i<16; i++){} statement wrapping your shader code. Just loop your shader 16 times before returning.


I'm intrigued; this trains up the L2 caches, I estimate?
What I find generally lacking is, to put it simply, a demonstration of what the workloads actually are compared to what they could have been.
Suppose we say there are 64 CUs - let's just say 80 CUs for the sake of the latest series. According to the 'engine optimisation hot lap' guideline, the CUs start up one by one to be issued work. That is, on average, 40.5 CUs not working over the next 80 cycles when we calculate via the Gauss method. We could take it either as 50.6% duty for 80 cycles of latency, or as a static 40.5 cycles of latency at the start of all GPU workflow. The issue is what we could do with the hardware if we directed our GPU power budget differently. If we instructed the GPU to 'load' but not do any work, we could keep loading in work not just for all 80 CUs, but for each of the 80 CUs times 40 waves per CU. If the GPU is working at 2.5 GHz, that is 2‰ of the GPU time! There is a giant window of opportunity in which the GPU can be omitted from any real shader work, just tracking the instruction flow to prepare the shaders for operation.
It is crazy, but I think Nvidia won't let AMD sit on its laurels if they don't discover buffered instruction and data flow cycling first. Imagine: the execution mask is off for the whole shader array, the GPU waits until all waves are loaded, then it releases them and off it goes! I know there are kinks. I just don't know any better.
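For what it's worth, the 40.5 figure checks out under one reading of the ramp-up (the one-CU-per-cycle issue model here is my assumption, not anything from AMD's docs):

```python
# Sketch of the Gauss-sum arithmetic above: assume work is issued to one CU
# per cycle, so CU k (k = 1..80) sits idle for k cycles before it starts.
n_cus = 80
avg_idle = sum(range(1, n_cus + 1)) / n_cus   # Gauss: (80 * 81 / 2) / 80
print(avg_idle)                               # 40.5 cycles idle on average
print(100 * avg_idle / n_cus)                 # 50.625 -- percent of the 80-cycle ramp
```

So on average each CU spends about half of the 80-cycle ramp waiting, which is where the "40.5 CUs not working" framing comes from.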


----------



## mahirzukic2 (Oct 22, 2020)

mtcn77 said:


> Thank you for some very valuable insight. It is all a game to me; however, it is a learning opportunity nonetheless.
> 
> I'm intrigued; this trains up the L2 caches, I estimate?
> What I find generally lacking is, to put it simply, a demonstration of what the workloads actually are compared to what they could have been.
> ...


If this is really possible, it would be really awesome.


----------



## mtcn77 (Oct 22, 2020)

mahirzukic2 said:


> If this is really possible, it would be really awesome.


Yes, just power-gate them until they are ready for full operation with no delay, since they note it is already an established problem to keep pipelines full rather than to empty them. If it helps, turning off the shader array could provide an overclock ceiling expansion, which also speeds up idle recovery.
Funny thing is, the RGP trace looks like a tapered trapezoid at the distal end of the timeline, so they ought to work on the retiring speed also.
I don't get it, still. All thread blocks are limited to 1024 in size. Even an AI could pattern all possible permutations of a 1024-unit workgroup. They aren't trying hard enough; have they even played any StarCraft? Build orders are everything. Just 4-pool, gg wp.


----------

