
AMD Navi 21 XT Seemingly Confirmed to Run at ~2.3, 2.4 GHz Clock, 250 W+

Ah yes, a random twitter post from a random person == confirmed.
The info is confirmed by other, more established leakers (NAAF, RedGamingTech). Choo, choo, choo, choo!
 
'Infinity Cache'

Yeah, it looks like all the AMD fans are talking about it. Well, it doesn't help, of course; if it did, Nvidia would use it for sure, it's that simple to do. So forget it.

Well, Big Navi gets higher clocks, OK, so what? Performance is what matters, and clock speed alone has never made a top GPU before; look at the RTX 2080 Ti, for example.

The RTX 3080 is a beast, more than many realize until they use it for gaming... especially at 4K. That performance needs power!

For Big Navi, 300 W is not enough, for sure.

Some are waiting for an RX 6700 XT; I'm waiting for an RTX 3080 Super 20 GB and the RTX 3070, but I also guess the most popular will be the RTX 3060 Ti.

Also... an RTX 3080 Ti is coming, a 7 nm TSMC version with 20 GB of GDDR6X memory.

Note: AMD leaks out a lot of specs and names and so on... ehh.
But how about some real FPS results, even a few, from the AMD 'leaks'? Why not show them?
That would be much better marketing than 2400 MHz clocks. Can't AMD do it?
 
Boils and Ghouls, please keep in mind that the forums are to be a place to express opinions, share ideas and knowledge, and interact with others of the tech community, while maintaining respect for each other. Name calling and other schoolyard behavior is unacceptable. Do not take personal digs at each other, do not call names, and if you just flat can't stand someone else's opinion, put them on your ignore list and move on.
We can't have a forum without you guys and your knowledge and participation. However, we will not have a forum where people feel they are attacked for politely sharing their opinions.
Feel free to review the forum guidelines if you can't remember your behavioral rules for being in polite society, and remember the unwritten golden rule: Don't be a Dick.
 
Note: AMD leaks out a lot of specs and names and so on... ehh.
I love speck, especially on my pizzas
 
I don't get this notion that, for the same performance, they should be $50-100 cheaper than their Nvidia counterpart, though it's a fairly common presumption :confused:

I'm a fanboy of paying less, and that's only possible with Big Navi being competitive, but unfortunately I don't see it matching the RTX 3080.
Big Navi has 2x the RX 5700's CUs = 80. But the thing is that performance doesn't scale 100%, especially if you keep the same memory bus that now has to feed double the CUs. Higher clocks and the cache should help with the scaling, but it's very optimistic to expect 2x 5700 XT performance, as that's what you need to match the RTX 3080. Plus the biggest unknown: how does AMD's raytracing stack up in quality/perf vs Nvidia's? I really hope AMD pulls off a miracle.
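
To put rough numbers on the memory bus point, here's a quick back-of-envelope sketch. The 256-bit bus and 16 Gbps GDDR6 for Navi 21 are only rumors and the 5700 XT figures are approximate stock specs, so treat this as an illustration rather than a prediction:

```cpp
#include <cstdio>

// Raw memory bandwidth in GB/s for a given bus width and per-pin data rate.
static double bandwidth_gbs(double bus_bits, double gbps_per_pin) {
    return bus_bits * gbps_per_pin / 8.0;
}

int main() {
    // RX 5700 XT: 40 CUs on a 256-bit GDDR6 bus at 14 Gbps (approximate stock spec).
    // Navi 21 (rumored): 80 CUs on the same 256-bit bus, assumed 16 Gbps GDDR6.
    const double bw_5700xt = bandwidth_gbs(256, 14);  // ~448 GB/s
    const double bw_navi21 = bandwidth_gbs(256, 16);  // ~512 GB/s

    std::printf("5700 XT: %.0f GB/s / 40 CUs = %.1f GB/s per CU\n", bw_5700xt, bw_5700xt / 40);
    std::printf("Navi 21: %.0f GB/s / 80 CUs = %.1f GB/s per CU\n", bw_navi21, bw_navi21 / 80);
    // Per-CU bandwidth roughly halves, which is why a large on-die cache (or a
    // much wider bus) would be needed for anything close to 2x scaling.
    return 0;
}
```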
 
Plus the biggest unknown: how does AMD's raytracing stack up in quality/perf vs Nvidia's? I really hope AMD pulls off a miracle.
AMD does raytracing the way it will be done in all the coming next-gen games, except for a handful of games paid by Nvidia to do it RTX-style. That's more than enough for me. It's raytracing that will actually work in most games.
 
AMD does raytracing the way it will be done in all the coming next-gen games, except for a handful of games paid by Nvidia to do it RTX-style. That's more than enough for me. It's raytracing that will actually work in most games.
...you're implying RT doesn't work with Nvidia? I'm confused about what your point is here.

Don't they both use MS DXR?
 
...you're implying RT doesn't work with Nvidia? I'm confused about what your point is here.
They're two different implementations; most likely (99.99%) they require different inputs to work.

Developers need to put some things into their games for them to work with RTX, and they need to put in different things for RDNA2 RT to work. I have no idea how difficult the translation from the AMD format to the Nvidia format is, but what we do know is that most developers will make the effort for the consoles; whether they will put in further effort for the Nvidia format remains to be seen.
 
They're two different implementations; most likely (99.99%) they require different inputs to work.

Developers need to put some things into their games for them to work with RTX, and they need to put in different things for RDNA2 RT to work. I have no idea how difficult the translation from the AMD format to the Nvidia format is, but what we do know is that most developers will make the effort for the consoles; whether they will put in further effort for the Nvidia format remains to be seen.
You're overcomplicating it. RTX is still using base DXR, which is what AMD will use. Nvidia doesn't have an exclusive RT implementation, just a hardware solution to make use of it.
 
They're two different implementations; most likely (99.99%) they require different inputs to work.

Developers need to put some things into their games for them to work with RTX, and they need to put in different things for RDNA2 RT to work. I have no idea how difficult the translation from the AMD format to the Nvidia format is, but what we do know is that most developers will make the effort for the consoles; whether they will put in further effort for the Nvidia format remains to be seen.
It's true that each implementation is architecturally different, but as Earthdog says, they talk through an API: DX12 and DX12 Ultimate using DXR.
So optimizing a game's use of features will be necessary, not changing the API.

RTX is Nvidia's implementation of DXR.
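
For what it's worth, from a game's point of view "does this GPU do raytracing" is just a DXR capability query on the D3D12 device; nothing in the call asks which vendor made the card. A minimal sketch, assuming a recent Windows SDK (link against d3d12.lib; error handling trimmed):

```cpp
#include <windows.h>
#include <d3d12.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

int main() {
    // Create a D3D12 device on the default adapter.
    ID3D12Device* device = nullptr;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_0,
                                 IID_PPV_ARGS(&device)))) {
        std::puts("No D3D12 device available.");
        return 1;
    }

    // Ask for the DXR tier. A game only cares about this tier being high enough;
    // it never asks whether the hardware underneath is RTX or RDNA2.
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5, &opts5, sizeof(opts5));
    std::printf("Raytracing tier: %d (%d or higher means DXR is supported)\n",
                static_cast<int>(opts5.RaytracingTier),
                static_cast<int>(D3D12_RAYTRACING_TIER_1_0));

    device->Release();
    return 0;
}
```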
 
You're overcomplicating it. RTX is still using base DXR, which is what AMD will use. Nvidia doesn't have an exclusive RT implementation, just a hardware solution to make use of it.
True, my bad.

Still, next-gen games will be developed, optimized and tested so that they look good with RDNA2 RT implementations.
 
I'm curious, what in that statement made you two (@theoneandonlymrk, @INSTG8R) thank that post? What was correct about any of it? (I'm genuinely asking. Judging by both of your responses, it seems patently false?)

AMD and Nvidia have their own 'ways' to do things, but it's all DXR-based with hardware support. "The way it will be done in all next-gen console..." is the same way it will be done now... right?
 
I think it all depends on what AMD's answer to DLSS 2.1 will be. Look, it would be great to have competition, as we need it.
 
I think this:
It is still based on DXR... and any improvements will take time. Look at how RT has come along: slow and steady. I can see more momentum since it's in the consoles, but like the use of more than a few cores/threads that we've been waiting for since 2010, I won't hold my breath that it will make a significant difference soon/this generation.

I have to imagine this 1st-gen implementation, like NV's, will need some bugs worked out and isn't as efficient/effective as the second-gen hardware driving DXR that NV is running. I also would have imagined AMD would let loose some DXR performance metrics like they did with raster performance, but either it isn't great, or they are holding the info close to their vest... either is perfectly viable.

Only time will tell.
 
I'm curious, what in that statement made you two (@theoneandonlymrk, @INSTG8R) thank that post? What was correct about any of it? (I'm genuinely asking. Judging by both of your responses, it seems patently false?)

AMD and Nvidia have their own 'ways' to do things, but it's all DXR-based with hardware support. "The way it will be done in all next-gen console..." is the same way it will be done now... right?
It's still not clear; what is clear is that the Xbox and PC implementations will be similar.
 
I think it all depends on what AMD's answer to DLSS 2.1 will be. Look, it would be great to have competition, as we need it.
I haven't heard any rumors about a DLSS competitor coming from AMD. AMD users might have to stick with native resolutions this gen.
 
True, my bad.

Still, next-gen games will be developed, optimized and tested so that they look good with RDNA2 RT implementations.
Which will pretty much have to be based on DXR (though Sony kind of throws a parity wrench into that; pretty sure MS won't be handing over DX).
 
I have to imagine this 1st-gen implementation, like NV's, will need some bugs worked out and isn't as efficient/effective as the second-gen hardware driving DXR that NV is running. I also would have imagined AMD would let loose some DXR performance metrics like they did with raster performance, but either it isn't great, or they are holding the info close to their vest... either is perfectly viable.

Only time will tell.
Frankly, if you look at the benchmark results with RT on and off between Turing and Ampere, the performance hit seems to be the same or worse for Ampere. The only relative performance increase seems to come in fully path-traced games, which are games with very light graphical requirements. For normal AAA games, the gains seem to come from the raw performance increase.

But yeah, maybe we'll see something different in the future.
 
Well, I wonder if AMD ditching the compute cores may have been premature; they could have repurposed them as RT cores like NV has baked in. But that is just me trying to find a use for them on gaming cards, where they were just taking up die space.
 
Frankly, if you look at the benchmark results with RT on and off between Turing and Ampere, the performance hit seems to be the same or worse for Ampere.
You may want to do the math on that... at least looking at TPU's results with Metro and Control, those results do not speak to your point (assuming my math was correct). I'd stop digging the hole you're in if I were you. :p
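
To spell out the math I mean: compare the relative hit, i.e. the fraction of performance each card loses when RT is switched on. The FPS numbers below are placeholders I made up, not TPU's results; plug in the actual review figures:

```cpp
#include <cstdio>

// Fraction of performance lost when RT is switched on.
static double rt_hit(double fps_rt_off, double fps_rt_on) {
    return 1.0 - fps_rt_on / fps_rt_off;
}

int main() {
    // Placeholder FPS numbers for illustration only -- not real benchmark data.
    const double turing_hit = rt_hit(60.0, 40.0);  // a hypothetical 2080 Ti style result
    const double ampere_hit = rt_hit(85.0, 60.0);  // a hypothetical 3080 style result

    std::printf("Turing loses %.0f%% with RT on, Ampere loses %.0f%%\n",
                turing_hit * 100, ampere_hit * 100);
    // If Ampere's percentage loss is smaller, its RT throughput improved relative
    // to its raster throughput, even though both still take a visible hit.
    return 0;
}
```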
 
Ok, so, basically, don't buy into that unsupported statement. Gotcha.
There are likely to be exclusively supported features in some games on some hardware, but that's going to be a dev-time or marketing decision, IMHO.
It's also clear to me that there are going to be performance differences between the, let's say, four possible implementations: Xbox, PS5, RTX DXR and RDNA DXR.

These subtle differences will show themselves with time; for example, RTX might shine at global illumination with rays while the PS5 does reflections marginally better. Those are purely hypothetical examples, not claims about reality, just to describe the possibility of different outcomes.
 
I'm a fanboy of paying less, and that's only possible with Big Navi being competitive, but unfortunately I don't see it matching the RTX 3080.
Big Navi has 2x the RX 5700's CUs = 80. But the thing is that performance doesn't scale 100%, especially if you keep the same memory bus that now has to feed double the CUs. Higher clocks and the cache should help with the scaling, but it's very optimistic to expect 2x 5700 XT performance, as that's what you need to match the RTX 3080. Plus the biggest unknown: how does AMD's raytracing stack up in quality/perf vs Nvidia's? I really hope AMD pulls off a miracle.
This is were you’re wrong, thinking that the 80CU GPU is 2x5700XT on some kind of crossfire. Understand that RDNA2 has different architecture. An improved one... It’s not how you think it is. The architecture will have better IPC, better performance/watt and at least 10% higher boost clock. In raw numbers it’s performance is higher that 2x5700XT. In real life it will be around 2x5700XT and that is placing it matching 3080.

'Infinity Cache'

Yeah, it looks like all the AMD fans are talking about it. Well, it doesn't help, of course; if it did, Nvidia would use it for sure, it's that simple to do. So forget it.

Well, Big Navi gets higher clocks, OK, so what? Performance is what matters, and clock speed alone has never made a top GPU before; look at the RTX 2080 Ti, for example.

The RTX 3080 is a beast, more than many realize until they use it for gaming... especially at 4K. That performance needs power!

For Big Navi, 300 W is not enough, for sure.

Some are waiting for an RX 6700 XT; I'm waiting for an RTX 3080 Super 20 GB and the RTX 3070, but I also guess the most popular will be the RTX 3060 Ti.

Also... an RTX 3080 Ti is coming, a 7 nm TSMC version with 20 GB of GDDR6X memory.

Note: AMD leaks out a lot of specs and names and so on... ehh.
But how about some real FPS results, even a few, from the AMD 'leaks'? Why not show them?
That would be much better marketing than 2400 MHz clocks. Can't AMD do it?
And you are wrong in your assumptions. When nVidia doesn't use something, it doesn't mean that it's without value. nVidia is not the ruler of tech nor the god of GPUs. nVidia can't use big caches right now because the architecture in use is compute-based/oriented and not game-oriented. It happens to do well in games, particularly at 4K. If you look at 1080p/1440p, the gains over the 2080 Ti are smaller.

AMD and nVidia are choosing different paths. That does not make one or the other better or worse. It will come down to users to choose what is best for them and what they want in the end.
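
On the big-cache point, here's a toy model (my own, not AMD's numbers) of why a large on-die cache isn't worthless on a narrow bus: if some fraction of memory traffic hits the cache, the DRAM only has to serve the misses. The cache bandwidth and hit rates below are pure guesses:

```cpp
#include <cstdio>
#include <algorithm>

// Toy model: DRAM only has to serve cache misses, so apparent bandwidth grows as
// dram_bw / (1 - hit_rate), capped by what the cache itself can deliver.
static double effective_bw(double hit_rate, double dram_bw, double cache_bw) {
    if (hit_rate >= 1.0) return cache_bw;
    return std::min(dram_bw / (1.0 - hit_rate), cache_bw);
}

int main() {
    const double dram_bw  = 512.0;   // GB/s: 256-bit GDDR6 at 16 Gbps (assumed)
    const double cache_bw = 1500.0;  // GB/s: guessed on-die cache bandwidth

    const double hit_rates[] = {0.0, 0.4, 0.6};
    for (double hit_rate : hit_rates) {
        std::printf("hit rate %.0f%% -> ~%.0f GB/s effective\n",
                    hit_rate * 100, effective_bw(hit_rate, dram_bw, cache_bw));
    }
    // Even a moderate hit rate makes a 256-bit bus behave like a much wider one,
    // which is presumably the whole point of an "Infinity Cache" style design.
    return 0;
}
```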
 
Well, I wonder if AMD ditching the compute cores may have been premature; they could have repurposed them as RT cores like NV has baked in. But that is just me trying to find a use for them on gaming cards, where they were just taking up die space.
If what is required is just floating-point operations, running them on the shaders would be much more efficient. It sounds to me like Nvidia took the reverse approach: they had some compute cores lying around and needed to find them a job to do, but the problem is that in many situations they're still a waste of die space.
 