
New Specs of AMD RDNA3 GPUs Emerge

I don't think the 900W Nvidia part is a GeForce product; it's more likely a workstation or server-class GPU, since it's said to feature 48GB of VRAM, so it's definitely not a GeForce card. A 550W GeForce might happen, but the red camp is supposedly around 450W, and I don't think 100W is a big deal. By underclocking and undervolting both a 6800 XT and a 3080, I got basically the same FPS and power consumption from each, so I'd say they're pretty close in performance per watt. Nvidia offering more features than AMD is also a big plus.

The facts show that current Navi handles ray tracing even worse than RTX 2000. My best bet is that the next Navi's ray tracing lands somewhere between RTX 2000 and 3000 levels, which is still too weak. I wish AMD would just drop ray tracing, cut the price by 30%, and make a big comeback ten years later with real ray tracing.

I think the next Navi will likely beat RTX 3000 soundly in magic, cherry-picked scenarios for marketing slides, with a bunch of little asterisks and paragraphs of notes at the bottom saying something like "at 1080p Medium with FSR Performance+++ enabled!!!" I've got a 6900 XT, and it's just so disappointing to lose 100+ frames when switching RT on.
 
Chiplets, with the Infinity Cache and the memory controller on the interposer as well, would make it easier to use GDDR or HBM. Maybe with the improvements in TSVs they could do a flip-chip BGA with stacked dies.
 
maybe something like this
what's the source on that?

It would be super dumb from a business perspective to make a special 3-shader-engine GCD die just for a low-volume halo part and use 2-shader-engine GCDs for everything else.

Look at Zen/Threadripper/Epyc - there's a single CCD chiplet that serves the entire non-APU product stack, something like 35+ SKUs from the lowly R5 5500 all the way up to the ridiculous $9000 EPYC 7700-series using the exact same piece of silicon binned, harvested, and combined in many different ways. It's a relatively small die that is easy to make with extremely good yields and it's 100% built from the ground up to scale to multiple dies.

It's way more likely that AMD has a single GCD chiplet design with a scalable interconnect and will add 1, 2, 3, 4, 6, or 8 of them together as necessary.
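To make that one-chiplet-many-SKUs point concrete, here is a minimal sketch; the Zen 3 configurations listed are real examples, but the script itself is purely illustrative and not anything AMD publishes.

```python
# Illustrative sketch: one small chiplet serving a whole product stack.
# The Zen 3 configurations below are real examples; the script is only a
# toy to show how binning/harvesting and die count cover many SKUs.

# SKU name -> (CCD chiplets used, cores enabled per CCD)
sku_configs = {
    "Ryzen 5 5600X": (1, 6),   # one harvested CCD, 2 cores fused off
    "Ryzen 7 5800X": (1, 8),   # one fully enabled CCD
    "Ryzen 9 5950X": (2, 8),   # two fully enabled CCDs
    "EPYC 7763":     (8, 8),   # eight fully enabled CCDs on one package
}

for sku, (ccds, cores_per_ccd) in sku_configs.items():
    print(f"{sku:<15} {ccds} x CCD -> {ccds * cores_per_ccd} cores")
```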
 

Are stream processors locked to work groups, or can they be rearranged? I don't know how binning like that works, but I would imagine they can be rearranged (you wouldn't throw out an entire shader array because a couple of stream processors are broken), so it might align: full-fat Navi 31 from two complete dies, Navi 32 from harvested dies, Navi 33 from a single die.

All rumours have been pointing at two compute dies. It would be really cool if they could do what they're doing with EPYC and multiple CCDs, but it doesn't seem like graphics will be that modular so soon, at least not on the consumer side of things (as pointed out last week, Instinct MI300 is supposed to go up to 8 compute dies).
 
I've got a 6900 XT, and it's just so disappointing to lose 100+ frames when switching RT on.
If RT performance was so important to you, why did you spring for an AMD card?
 
Oh man can't wait for it to come out so I can't buy it.
 
I thought RDNA3 was going to be MCM, so we'd see fewer unique dies and a product range built more like Zen 2 when it first launched MCM on desktop:

Single harvested die (3600/3600X)
Single fully-enabled die (3700X/3800X)
Dual harvested dies (3900X)
Dual fully-enabled dies (3950X)

If AMD was going MCM it wouldn't need three different sizes, would it? Perhaps just a performance-class die to scale up with multiple modules and a tiny one for the entry-level and lower midrange.
Only the top SKU is going to be MCM, from all reports.
 
what's the source on that?
Google search:

Logically there is a penalty in using MCM in how well it scales (actual vs. theoretical performance gains as you go up) in relation to a monolithic design.
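A rough way to picture that penalty is a toy scaling model; the per-extra-die efficiency factor below is an arbitrary assumption for illustration, not a leaked or measured number.

```python
# Toy MCM scaling model: every die adds its full theoretical throughput,
# but each additional die costs some efficiency to cross-die traffic.
# The 0.92 per-extra-die factor is an arbitrary assumption for illustration.

def effective_performance(dies: int, per_die: float = 1.0, scaling: float = 0.92) -> float:
    """Theoretical perf is dies * per_die; apply a compounding penalty per extra die."""
    return dies * per_die * scaling ** (dies - 1)

for n in (1, 2, 4, 8):
    theoretical = n * 1.0
    actual = effective_performance(n)
    print(f"{n} dies: theoretical {theoretical:.0f}x, modeled {actual:.2f}x "
          f"({actual / theoretical:.0%} scaling efficiency)")
```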

So is the 15360-shader figure dead now, along with the race to 100 TFLOPS?
 
I've got a 6900 XT, and it's just so disappointing to lose 100+ frames when switching RT on.

What did you expect? It's their first attempt at ray tracing, and Nvidia's first attempt wasn't that much better. Bottom line: AMD's R&D budget is only $2 billion compared to Nvidia's $5.27 billion, and more than half of AMD's goes to x86, meaning AMD basically matches Nvidia on raster while spending roughly a fifth of the amount, which is seriously impressive. People need to give AMD credit for doing more than anyone else with so little. Besides, even if AMD made faster video cards, everyone would still buy Nvidia anyway for some irrational reason I can't understand.
 
Bots and third-party sellers rule the world now, so none of us will get one for a solid three months after launch, if that.

This new world we live in is pure shit.

There needs to be a law banning third-party sellers on newly launched items over a set price for a year or something.
 
Strange shader counts. You’d think they would keep it symmetrical and drop from 2560 to 1280??
 
even if AMD made faster videocards, everyone would still buy Nvidia anyway for some irrational reason I can't understand

The tide is turning, but it takes a long time to turn. What I mean is, Nvidia still controls a lot of key technologies and has a lot of influence with game developers, so their technologies get preferential treatment. That, plus features like DLSS, Nvidia Broadcast, NVENC, etc., made the hill very high for AMD to climb, but step by step they're doing it (FSR being both completely open (it supports any GPU) and very easy to use was a big step in the right direction, and being able to match raster performance was another one - I would say Nvidia's market dominance is hanging by a thread).
 
The tide is turning, but it takes a long time to turn.
It will need a few generations, a huge social media campaign (all the streamers get offered top-end Nvidia cards for free as soon as they have any kind of audience), and more mind share before the masses buy AMD over Nvidia.

If you are a hardware enthusiast, you probably know both brands, but for a lot of people, gaming on PC means Nvidia, and they don't know anything else.

As for Navi 31, I am very interested to see the outcome. AMD is trying new things to take the lead and is getting audacious, whereas Nvidia is playing it conservative and betting on what they know.

The Navi line reminds me of the Zen line.

Ryzen 1xxx: OK CPU, not good enough to beat the competition except maybe in special cases, memory compatibility problems at launch, not a super mature product.
Navi 1x: OK architecture, no high-end chip, somewhat OK rasterization performance, lacking features, stability problems at launch, not a super mature product (black screen issues for many people).

Ryzen 2xxx: better platform support, slight increase in performance, better stability, but still behind.
Navi 2x: better platform support, more stable product, good rasterization performance but bad ray tracing performance, still somewhat behind Nvidia.

Ryzen 3xxx: first MCM CPU for consumers, taking the lead in everything but low-resolution gaming. Stable platform.
Navi 3x: first MCM GPU for consumers, but ?????

We will see. And we will see what the future holds for both Intel and Nvidia, but AMD is clearly no longer a company that is slacking. They are audacious, they push technology forward, and they want it all (that includes your money too).
 
MCM wouldn't make sense on a GPU

It does, just not for anything other than the absolute highest end offerings.

An MCM approach can cut that consumption in half if done right.
MCM designs are actually going to use way more power vs monolithic because of the interconnect. Inside any chip, the thing that consumes the most power is moving data around; the power consumed by these processors to do actual "work" is almost inconsequential.

The further you have to move data the more power you need, so interconnects that need to go across chips are going to use more power.
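As a back-of-the-envelope illustration of that point, here's a hedged sketch; the bandwidth and energy-per-bit figures are assumed ballpark values for illustration, not measurements of any real GPU or of RDNA3's interconnect.

```python
# Back-of-the-envelope: power cost of moving data on-die vs. across a
# die-to-die link. Bandwidth and energy-per-bit values are assumed,
# illustrative figures, not measurements of any real GPU.

BANDWIDTH_TBPS = 1.0        # assumed sustained cross-partition traffic, TB/s
ON_DIE_PJ_PER_BIT = 0.1     # assumed cost of short on-die wires
OFF_DIE_PJ_PER_BIT = 1.5    # assumed cost of a cross-chip interconnect

def link_power_watts(bandwidth_tbps: float, pj_per_bit: float) -> float:
    bits_per_second = bandwidth_tbps * 1e12 * 8   # TB/s -> bits/s
    return bits_per_second * pj_per_bit * 1e-12   # pJ/bit -> W

print(f"on-die : {link_power_watts(BANDWIDTH_TBPS, ON_DIE_PJ_PER_BIT):.1f} W")
print(f"off-die: {link_power_watts(BANDWIDTH_TBPS, OFF_DIE_PJ_PER_BIT):.1f} W")
```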
Are Stream Processors locked to work groups or can they be rearranged?

The thing they call a work group is in fact the GPU "core"; stream processors are just execution pipelines. AMD and Nvidia insist on calling these things "processors" and "cores", but they're not. A defective shader means the entire work group has to be disabled because of the way instructions get executed; it's like having a defect in an integer pipeline in a CPU core: you can't do anything about it, it has to be disabled.
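A toy sketch of what that means for binning, assuming an RDNA2-like grouping of 128 stream processors per WGP (the group size and the helper below are illustrative assumptions, not how AMD's actual fusing works):

```python
# Toy binning sketch: a defective stream processor can't be disabled on its
# own, so the whole work-group processor (WGP) containing it is fused off.
# The 128-SP-per-WGP grouping is an assumed, RDNA2-like figure.

SPS_PER_WGP = 128

def usable_sps(total_sps: int, defective_sps: set) -> int:
    """Count stream processors left after disabling every WGP with a defect."""
    total_wgps = total_sps // SPS_PER_WGP
    bad_wgps = {sp // SPS_PER_WGP for sp in defective_sps}
    return (total_wgps - len(bad_wgps)) * SPS_PER_WGP

# Two stray defects on a 5120-SP die cost two whole WGPs (256 SPs), not 2 SPs.
print(usable_sps(5120, {37, 4091}))   # -> 4864
```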
 
MCM designs are actually going to use way more power vs monolithic because of the interconnect.
Hm, even EPYC with 8 chiplets is quite efficient. Don't you think you're exaggerating the power consumption of the interconnects?
 
Your interconnect point is countered by the massive on-die cache, but is otherwise sound.
 
Fewer than reported earlier, so either they are smaller dies with fewer WGPs, or each WGP is larger? Greymon said the performance targets are the same, so I doubt it'll be slower than was planned.
 
AMD has historically taken the route of testing new tech on smaller chips/lower end products. Maybe we will get a hint from product leaks.

Yup, Tonga was one of those.
 
even if AMD made faster videocards, everyone would still buy Nvidia anyway for some irrational reason I can't understand.
So true.
The GeForce FX series of cards was hugely inferior to the ATi Radeon 9000 series, and people bought terrible GeForce FX cards instead of Radeons.
Fermi was a dumpster fire that was a year late and barely performed faster than AMD's previous generation at twice the power draw, but people still bought it in droves.
History is proof that when AMD has a better product, people will still buy Nvidia. Goes to show that having the best product doesn't automatically make it a success.
 
I've got a 6900 XT, and it's just so disappointing to lose 100+ frames when switching RT on.
You knew this was a first-gen RT product and inferior when you bought it. If RT were important to me, I know which brand I would have bought.

RDNA3 will massively improve RT performance, but I doubt it will match Lovelace. It should be much stronger than Ampere, though. And FSR will be there to pick up the slack, with rumours it could be hardware accelerated.
 
Navi 33 is the 6600's successor, so expect 50-60-class performance. They can double the processing power, but what if it can only do FP32 or INT32, not both at the same time, just like Ampere? Then 4096 of the new shaders end up closer to the equivalent of 2560 of the old ones. Doubling the cache doesn't compensate for the narrow bus. So expectations are blown out of proportion again, and the hype is setting it up for a disappointment.
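A quick toy calculation of how that dual-issue math can play out; the 25% integer-instruction share is an assumed, illustrative figure rather than anything measured on RDNA3, and the exact old-vs-new shader equivalence depends entirely on the mix you assume.

```python
# Toy model of Ampere-style shared FP32/INT32 issue: half of the "doubled"
# lanes can run either FP32 or INT32, but not both in the same cycle.
# The 25% integer-instruction share is an assumed, illustrative mix.

def effective_fp32_lanes(total_lanes: int, int_share: float = 0.25) -> float:
    dedicated_fp32 = total_lanes / 2            # always available for FP32
    shared = total_lanes / 2                    # lost to INT work part of the time
    return dedicated_fp32 + shared * (1 - int_share)

print(effective_fp32_lanes(4096))   # ~3584 "effective" FP32 lanes, not 4096
```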
 
Nvidia's top end model will consume roughly 900W for a big monolithic chip.
No it won't. Don't believe all the clickbait leak articles you read.
people need to give AMD credit for doing more than anyone else with so little
Sorry but no I won't. It's a fun talking point, but if they want to compete on this stage, they don't get a pass because they spend less on R&D.
even if AMD made faster videocards, everyone would still buy Nvidia anyway for some irrational reason I can't understand.
The tide is turning but it takes a long time to turn.
This is why. AMD needs to be strong, very competitive, increase their feature set, and perhaps even hold the outright performance lead for multiple successive generations to reach GPU market dominance. That ship takes a long time to turn, and AMD still has a reputation among the masses in the GPU space that needs a good, clean and long track record to mend.
What I mean is, nvidia still controlls a lot of key technologies and has a lot of influence with game developers so their technologies get preferential treatment. That, plus features like DLSS, nvidia broadcast, nvenc, etc. made the hill very high for AMD to climb
This also adds to the situation. Not only were they outright on top for the better part of the last decade, they also have a much richer feature set.

Believe me, I want AMD to succeed, and I want RDNA3 to be great; if it is, I'll certainly have models on my radar to shop for. But people expecting their 75%+ market share to change overnight if RDNA3 has the fastest halo product are dreaming.
 
History is proof that when AMD has a better product, people will still buy Nvidia.

Users who know, use AMD.

That ship takes a long time to turn, and AMD still has a reputation among the masses in the GPU space that needs a good, clean and long track record to mend.

They need to advertise more and sponsor more gaming tournaments.
 
Not only the GPUs; the CPUs mustn't slip in performance against Intel either. AMD's CPU mind share is bigger than their GPU mind share, and a good CPU brings up the GPUs. You can see that whenever AMD's CPUs didn't perform, GPU sales also took a dive.
They still can't shake the bad-driver stigma. It was getting better, and then the 5000-series black screen issue happened. I think their hardware design isn't inferior to the competition's, but they need to put more resources into software, if they aren't doing so already.
 