Sunday, October 18th 2020

AMD Navi 21 XT Seemingly Confirmed to Run at ~2.3, 2.4 GHz Clock, 250 W+

AMD's RDNA2-based cards are just around the corner, with the company's full debut of the secrecy-shrouded cards set for October 28th. Rumors of high clocks on AMD's new architecture - unsubstantiated until now - have seemingly been confirmed, with Patrick Schur posting on Twitter some specifications for the upcoming RDNA2-based Navi 21 XT. Navi 21 XT is carved from the Big Navi chip, but likely isn't the top performer from AMD - the company is allegedly working on a Navi 21 XTX solution, which ought to be exclusive to its reference designs, with higher clocks and possibly more CUs.

The specs outed by Patrick are promising, to say the least; that AMD's Big Navi can reach clocks in excess of 2.4 GHz with a 250 W+ TGP (quoted at around 255 W) is certainly good news. The 2.4 GHz (game clock) speeds are associated with AIB cards; AMD's own reference designs should run at a more conservative 2.3 GHz. A memory pool of 16 GB GDDR6 has also been confirmed. AMD's assault on the NVIDIA 30-series lineup should comprise three models carved from the Navi 21 chip - the higher-performance, AMD-exclusive XTX; the XT; and the lower-performance Navi 21 XL. All of these are expected to ship with the same 256-bit bus and 16 GB of GDDR6 memory, whilst taking advantage of AMD's (rumored, for now) Infinity Cache to make up for the lower memory speed and narrower bus. Hold on to your hats; the hype train is going full speed ahead, hopefully coming to a smooth stop come October 28th.
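For a rough sense of why that cache matters, here is a back-of-the-envelope bandwidth estimate; a minimal sketch, assuming a 16 Gbps GDDR6 data rate, which the leak does not specify:

```python
# Rough raw-bandwidth estimate for the rumored Navi 21 memory setup.
# The 16 Gbps per-pin rate is an assumption, not part of the leak.
bus_width_bits = 256
gddr6_rate_gbps = 16            # effective per-pin data rate (assumed)

navi21_bw = bus_width_bits * gddr6_rate_gbps / 8   # bits -> bytes
print(f"Navi 21 (rumored): {navi21_bw:.0f} GB/s")  # 512 GB/s

# For comparison, the GeForce RTX 3080's 320-bit bus at 19 Gbps GDDR6X:
print(f"RTX 3080: {320 * 19 / 8:.0f} GB/s")        # 760 GB/s
```

The sizable gap on paper is exactly what a large on-die cache would be meant to compensate for.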
Sources: Patrick Schur @ Twitter, via Videocardz

229 Comments on AMD Navi 21 XT Seemingly Confirmed to Run at ~2.3, 2.4 GHz Clock, 250 W+

#101
INSTG8R
Vanguard Beta Tester
nguyenWell, if 720p upscaled to 4K has equal IQ to native 4K, I say go for it.
Every hardware editorial seems to agree that if DLSS is available, use it. They didn't say anything about trading visual quality for performance (at least with DLSS in Quality mode).

The only thing with DLSS is that it's unfair to AMD, who has no answer yet. But it's a competition out there, not some fun race between friends.

Let's think of it this way: while AMD continues to optimize rasterization performance for their GPUs, Nvidia's effort goes into incorporating DLSS into new games. But somehow benching DLSS is too unfair to AMD; it makes no fricking sense.

Also, equal settings and equal conditions were never actually equal; some games just have optimizations for a specific brand, like Far Cry 5 featuring rapid packed math.
Okay, again, for the last time: that's not how benchmarks work! You compare every card under equal conditions or else it's not a valid benchmark, PERIOD. If AMD had a similar feature, it ALSO could not be used. No one's denying some games are optimized better for one brand, but the cards are run at DEFAULT settings and results are based on their raw performance at those settings, as equal as possible. I'm glad you really like DLSS, but it's a "feature", and nobody gets to run features in benchmarks; all cards are run stock/default, always. Why do you think W1zzard has a single test bench where the only difference is the card he's testing and the drivers installed for it? So every card is tested under EQUAL conditions.
Posted on Reply
#102
nguyen
INSTG8ROkay, again, for the last time: that's not how benchmarks work! You compare every card under equal conditions or else it's not a valid benchmark, PERIOD. If AMD had a similar feature, it ALSO could not be used. No one's denying some games are optimized better for one brand, but the cards are run at DEFAULT settings and results are based on their raw performance at those settings, as equal as possible. I'm glad you really like DLSS, but it's a "feature", and nobody gets to run features in benchmarks; all cards are run stock/default, always. Why do you think W1zzard has a single test bench where the only difference is the card he's testing and the drivers installed for it? So every card is tested under EQUAL conditions.
I have been saying this all along: the 5700 XT is being benched in a PCIe 4.0 config in HUB's testing, while the 2070S is on PCIe 3.0, get it?
Are those equal testing conditions?
Posted on Reply
#103
rtwjunkie
PC Gaming Enthusiast
nguyenI have been saying this all along: the 5700 XT is being benched in a PCIe 4.0 config in HUB's testing, while the 2070S is on PCIe 3.0, get it?
Are those equal testing conditions?
It will be tested against every other card at the same standard. The test is card to card, not card 1 under this standard and card 2 under this standard plus some hardware the other can't have.

Where you WILL see 4.0 tested with bells and whistles is in the AAA game performance reviews that W1zz does occasionally. In those he DOES test what a game can do with the special abilities or features that different cards have, because it is not a card-to-card comparison.
Posted on Reply
#104
INSTG8R
Vanguard Beta Tester
nguyenI have been saying this all along: the 5700 XT is being benched in a PCIe 4.0 config in HUB's testing, while the 2070S is on PCIe 3.0, get it?
Are those equal testing conditions?
And we're back to apples and pumpkins; it does NOT affect the results in any way. Did you read the 3080 PCIe scaling tests? I saw a 1 FPS difference across all resolutions, which is literally margin of error. W1zzard's current test bench is 4.0, but again it will make absolutely zero difference. You're trying to invent a metric that doesn't exist to justify why your favourite new pet feature should somehow make for valid benchmarks. You know where PCIe 4.0 would be an unfair advantage? Testing NVMe drives... something that can actually utilize the extra bandwidth; GPUs still don't fully saturate 3.0. It was also pointed out that in the HUB reviews, if DLSS was available the results were listed but would NEVER be included in the benchmark results.
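To put that 1 FPS gap in context, a quick back-of-the-envelope check (the baseline frame rate below is illustrative, not taken from any specific review):

```python
# Express a 1 FPS PCIe 3.0 vs 4.0 gap as a percentage of a typical result.
baseline_fps = 144.0   # illustrative figure, not from a specific review
delta_fps = 1.0        # the gap seen in the PCIe scaling tests

print(f"Relative difference: {delta_fps / baseline_fps * 100:.2f}%")  # ~0.69%
# Run-to-run variance in game benchmarks is commonly on the order of 1-2%,
# so a gap this small is indistinguishable from noise.
```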
Posted on Reply
#105
Camm
nguyenI have been saying this all along: the 5700 XT is being benched in a PCIe 4.0 config in HUB's testing, while the 2070S is on PCIe 3.0, get it?
Are those equal testing conditions?
I'm not sure why you have such a hard-on for this; testing shows negligible to no difference, and all Nvidia GPUs are now at PCIe 4.0.

Furthermore, DLSS & PCIe are not comparable. One is a lossy upscaling technology available in only a few games; the other is a ubiquitous connection standard.

But if you really want to split hairs, everyone testing on an Intel system tested at PCIe 3.0.
Posted on Reply
#106
dicktracy
2.4 GHz but slower than the 3080, especially when RT is turned on. Insta-Passss!
Posted on Reply
#107
nguyen
INSTG8RAnd we're back to apples and pumpkins; it does NOT affect the results in any way. Did you read the 3080 PCIe scaling tests? I saw a 1 FPS difference across all resolutions, which is literally margin of error. W1zzard's current test bench is 4.0, but again it will make absolutely zero difference. You're trying to invent a metric that doesn't exist to justify why your favourite new pet feature should somehow make for valid benchmarks. You know where PCIe 4.0 would be an unfair advantage? Testing NVMe drives... something that can actually utilize the extra bandwidth; GPUs still don't fully saturate 3.0. It was also pointed out that in the HUB reviews, if DLSS was available the results were listed but would NEVER be included in the benchmark results.
Holy jeez, since when was I talking about TPU benchmarks?
I was responding to a post about HUB testing, not TPU.
In the HUB testing, Steve also said that PCIe 4.0 contributes a few % net gain for the 5700 XT, which the 2070S and 2060S were left out of.

This is the vid I was responding to, not some PCIe scaling benchmark at TPU.

Take a hint, will you?
Posted on Reply
#108
Camm
dicktracy2.4 GHz but slower than the 3080, especially when RT is turned on. Insta-Passss!
Sauce? Since you have links to prerelease benchmarks across the product stack.
Posted on Reply
#109
INSTG8R
Vanguard Beta Tester
nguyenHoly jeez, since when was I talking about TPU benchmarks?
I'm just informing you that TPU is now benchmarking on PCIe 4.0 as well, so you'd better clutch your pearls at all the benches done here too then. You literally can't accept that PCIe 4.0 offers no advantage. All the Ampere cards are 4.0; I'll be sure to cry foul when they have an unfair advantage over your 2080 Ti... :rolleyes:
Posted on Reply
#110
nguyen
INSTG8RI'm just informing you that TPU is now benchmarking on PCIe 4.0 as well, so you'd better clutch your pearls at all the benches done here too then. You literally can't accept that PCIe 4.0 offers no advantage. All the Ampere cards are 4.0; I'll be sure to cry foul when they have an unfair advantage over your 2080 Ti... :rolleyes:
So you are comparing apples to pumpkins and accusing me of doing so. TPU's benchmark suite is different from HUB's: different games, different testing config.
If TPU's testing shows no difference, then it must be the same for every other testing condition?
Pretty small-minded, aren't you, and you call yourself an enthusiast?

Well, since you have an X570 and a 5700 XT, you might as well test them yourself against HUB



Yeah, sometime next month once I get a 5950X + 3090 I might do some PCIe scaling benchmarks for you :D
Posted on Reply
#111
INSTG8R
Vanguard Beta Tester
nguyenSo you are comparing apples to pumpkins and accusing me of doing so. TPU's benchmark suite is different from HUB's: different games, different testing config.
If TPU's testing shows no difference, then it must be the same for every other testing condition?
Pretty small-minded, aren't you, and you call yourself an enthusiast?

Well, since you have an X570 and a 5700 XT, you might as well test them yourself against HUB

The problem is YOU cannot let go of this literally insignificant deviation, as if 4.0 is cheating 3.0 cards somehow. Still not seeing any advantage here? W1z has always done PCIe scaling tests and he's tested this as well; I trust his methods, and he literally used the most recent flagship 4.0 card. I just picked one bench at 4K, but you can pick any title or resolution he tested and they all look like this... I don't use HUB for hardware reviews or benchmarks, so no, I don't know their methods, but I do know W1zzard's: there is no more than a 2 FPS difference in any resolution or game he tested with a 4.0 3080, and if any card could possibly use the extra bandwidth it would be that one. You're chasing a metric that has zero impact on benchmark scoring.
Posted on Reply
#112
nguyen
INSTG8RThe problem is YOU cannot let go of this literally insignificant deviation, as if 4.0 is cheating 3.0 cards somehow. Still not seeing any advantage here? W1z has always done PCIe scaling tests and he's tested this as well; I trust his methods, and he literally used the most recent flagship 4.0 card. I just picked one bench at 4K, but you can pick any title or resolution he tested and they all look like this... I don't use HUB for hardware reviews or benchmarks, so no, I don't know their methods, but I do know W1zzard's: there is no more than a 2 FPS difference in any resolution or game he tested with a 4.0 3080, and if any card could possibly use the extra bandwidth it would be that one. You're chasing a metric that has zero impact on benchmark scoring.
I literally talked about how flawed HUB's testing was, yet here you are defending TPU?
When did I even talk about TPU?
:roll:
OK sure, mister "zero impact", "equal testing conditions" :D, whatever you said
Posted on Reply
#113
Camm
Let's just pretend for a sec that PCIe 4.0 made more than a statistical anomaly of a difference.

Why should a PCIe 4.0 card be tested at 3.0 when it's a review of its performance? It's like testing USB 3 drives against a USB 2 drive on a USB 2 slot; it doesn't make any sense.

And before you utter DLSS, PCIe 4.0 is A: a standard available regardless of game, and B: not a lossy upscaling solution rendering below the benchmarked resolution.
Posted on Reply
#114
nguyen
CammLet's just pretend for a sec that PCIe 4.0 made more than a statistical anomaly of a difference.

Why should a PCIe 4.0 card be tested at 3.0 when it's a review of its performance? It's like testing USB 3 drives against a USB 2 drive on a USB 2 slot; it doesn't make any sense.

And before you utter DLSS, PCIe 4.0 is A: a standard available regardless of game, and B: not a lossy upscaling solution rendering below the benchmarked resolution.
C: should you disable DLSS where it is available to you?

It's like DX12 vs DX11: you use whichever API gives you better FPS and frametimes. Forcing DX12 benchmarks onto Nvidia cards where it is detrimental is quite unfair, no? Same as disabling DLSS.

I believe the point of benchmarking is that it must resemble real-world usage?
Posted on Reply
#115
Camm
nguyenC: should you disable DLSS where it is available to you?

It's like DX12 vs DX11: you use whichever API gives you better FPS and frametimes. Forcing DX12 benchmarks onto Nvidia cards where it is detrimental is quite unfair, no?
techreport.com/review/3089/how-atis-drivers-optimize-quake-iii/

A lossy image isn't the same as a lossless image, and suggesting as much reminds me of the Quake 3 driver shenanigans from ATi & Nvidia back in the day. Or should we benchmark AMD cards running at the same internal resolution as DLSS, apply sharpening with a native render target, and call it comparable?
Posted on Reply
#116
nguyen
Cammtechreport.com/review/3089/how-atis-drivers-optimize-quake-iii/

A lossy image isn't the same as a lossless image, and suggesting as much reminds me of the Quake 3 driver shenanigans from ATi & Nvidia back in the day. Or should we benchmark AMD cards running at the same internal resolution as DLSS, apply sharpening with a native render target, and call it comparable?

DLSS vs Fidelity FX
Posted on Reply
#117
phanbuey
CammLet's just pretend for a sec that PCIe 4.0 made more than a statistical anomaly of a difference.

Why should a PCIe 4.0 card be tested at 3.0 when it's a review of its performance? It's like testing USB 3 drives against a USB 2 drive on a USB 2 slot; it doesn't make any sense.

And before you utter DLSS, PCIe 4.0 is A: a standard available regardless of game, and B: not a lossy upscaling solution rendering below the benchmarked resolution.
Except drives are limited by the bandwidth offered by USB 2.0, whereas the 5700 XT doesn't come anywhere near tapping out the bandwidth of a PCIe 2.0 slot, never mind 3.0 or 4.0. It's more like testing a USB 1 drive in a 2 or 3 slot.

Also, PCIe devices are spec'd to be backwards compatible, or are supposed to be. So if the 5700 XT can't properly operate at 3.0 even though it has more than enough bandwidth, then there is something wrong with the design/implementation of that standard on the card.
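For reference, the theoretical bandwidth of a x16 link per generation works out roughly as below; a minimal sketch using the commonly quoted per-lane rates (after encoding overhead):

```python
# Approximate usable bandwidth of a x16 link for each PCIe generation.
per_lane_gbs = {
    "PCIe 2.0": 0.5,    # 5 GT/s with 8b/10b encoding  ~= 500 MB/s per lane
    "PCIe 3.0": 0.985,  # 8 GT/s with 128b/130b encoding ~= 985 MB/s per lane
    "PCIe 4.0": 1.969,  # 16 GT/s with 128b/130b encoding ~= 1.97 GB/s per lane
}

for gen, lane_bw in per_lane_gbs.items():
    print(f"{gen} x16: {lane_bw * 16:.1f} GB/s")
# PCIe 2.0 x16: 8.0 GB/s, 3.0 x16: ~15.8 GB/s, 4.0 x16: ~31.5 GB/s
```

Whether a given GPU actually saturates even the 2.0 figure is the crux of the argument above.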
Posted on Reply
#118
Camm
nguyen

DLSS vs Fidelity FX
So now one lossy solution isn't allowed to be benchmarked, but another one is? At least be consistent in your bullshit.
Posted on Reply
#119
INSTG8R
Vanguard Beta Tester
nguyenI literally talked about how flawed HUB's testing was, yet here you are defending TPU?
When did I even talk about TPU?
:roll:
OK sure, mister "zero impact", "equal testing conditions" :D, whatever you said
How many more charts showing zero difference do you need to see before you drop this "advantage" you keep typing about to justify DLSS, which it is not?
Posted on Reply
#120
nguyen
CammSo now one lossy solution isn't allowed to be benchmarked, but another one is? At least be consistent in your bullshit.
Sure, with equal image quality, let's benchmark DLSS Performance (or Ultra Performance) vs FidelityFX
Posted on Reply
#121
Camm
nguyenSure, with equal image quality, let's benchmark DLSS Performance (or Ultra Performance) vs FidelityFX
Bit hard to say it's equal image quality, since DLSS's processing of the image uses graphical techniques that aren't exposed by the game engine, making comparison difficult. Secondly, not all frames have equal graphical fidelity: when new data is introduced to the scene, the AI algorithm has to 'race' to gather info from multiple frames to reconstruct the image.
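As a toy illustration of that 'racing' (a generic temporal-accumulation sketch, not NVIDIA's actual algorithm): pixels that just became visible have no history to draw on, so their quality only converges as frames accumulate.

```python
import numpy as np

# Generic temporal accumulation sketch (illustrative only, not DLSS itself):
# each output pixel blends the current frame with its accumulated history,
# and pixels without valid history (newly visible detail) start from scratch.
def accumulate(frame, history, samples, valid_history):
    """Blend a new frame into a running per-pixel average."""
    samples = np.where(valid_history, samples + 1, 1)
    alpha = 1.0 / samples            # fresh pixels rely mostly on the new frame
    history = np.where(valid_history, (1 - alpha) * history + alpha * frame, frame)
    return history, samples

# A just-disoccluded pixel has samples == 1, so it carries only one frame's
# worth of information; detail there lags until more frames arrive.
```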

Is DLSS good? Yup. Will we probably play all games like this? Yup (if Nvidia bothers to make a solution that works across all games, which is another reason why it shouldn't be used in benchmarking scenarios). But it has fatal flaws that preclude it from being compared apples to apples, and benchmarking data with DLSS should be shown, but called out separately so the reader can make a value call on the limited support and performance subset...

Which both HWU & TPU do. So again, what's your problem?
Posted on Reply
#122
INSTG8R
Vanguard Beta Tester
nguyenSure, with equal image quality, let's benchmark DLSS Performance (or Ultra Performance) vs FidelityFX
Again, neither of those things would be used in any benchmark except when comparing them to each other, and you literally found the only example where it's even possible, because it's the only game that has both. It still gives zero weight to your argument for DLSS. But hey, I can actually argue for Fidelity FX being the better of the two in that game: I own it, I finished it, and I used Fidelity FX, and that comparison shows DLSS with some obvious visual issues Fidelity FX didn't have; overall it looked better. I know my performance in the game was fantastic and it looked totally amazing. Still, can't use it in benchmarks.
Posted on Reply
#123
nguyen
CammBit hard to say it's equal image quality, since DLSS's processing of the image uses graphical techniques that aren't exposed by the game engine, making comparison difficult. Secondly, not all frames have equal graphical fidelity: when new data is introduced to the scene, the AI algorithm has to 'race' to gather info from multiple frames to reconstruct the image.

Is DLSS good? Yup. Will we probably play all games like this? Yup (if Nvidia bothers to make a solution that works across all games, which is another reason why it shouldn't be used in benchmarking scenarios). But it has fatal flaws that preclude it from being compared apples to apples, and benchmarking data with DLSS should be shown, but called out separately so the reader can make a value call on the limited support and performance subset...

Which both HWU & TPU do. So again, what's your problem?
DLSS is like a specific customization for a game; that doesn't mean it should be excluded, though.
I was responding to a recent HUB test that did not include DLSS results, so there is that.
TPU did some DLSS 2.0 testing, where is that?
INSTG8RAgain, neither of those things would be used in any benchmark except when comparing them to each other, and you literally found the only example where it's even possible, because it's the only game that has both. It still gives zero weight to your argument for DLSS. But hey, I can actually argue for Fidelity FX being the better of the two in that game: I own it, I finished it, and I used Fidelity FX, and that comparison shows DLSS with some obvious visual issues Fidelity FX didn't have; overall it looked better. I know my performance in the game was fantastic and it looked totally amazing. Still, can't use it in benchmarks.
Kinda funny that your conclusion is entirely contrary to what the author was saying. But hey, I was saying HUB was pretty unfair too :D, reviewer vs normal user, yeah?
You can apply FidelityFX to any game: just create a custom resolution and use the GPU scaler to upscale it to fit the screen, a 5-second custom job.
Posted on Reply
#124
Searing
nguyenDLSS is like a specific customization for a game; that doesn't mean it should be excluded, though.
I was responding to a recent HUB test that did not include DLSS results, so there is that.
TPU did some DLSS 2.0 testing, where is that?
DLSS is terrible and doesn't work in 99 percent of the games I play. Next.
evernessinceNo, because polls from Hardware Unboxed 3 months back suggest that only 7% of card owners were having severe issues like what you were describing (and let's be honest here, a number of those users are likely Nvidia owners checking the box that makes AMD look worse). You are suggesting that 100% of 5700 / 5700 XTs had issues, and that simply isn't true. I should not have to say that, though; if 100% had issues, drivers or otherwise, AMD would have been decimated. Why do you assume that 5/5 cards having issues is "likely"? No, in no universe is it. None of the data supports that, and we haven't ever seen anywhere near 100%.

My prior statement stands: either you are extraordinarily unlucky or there is something else going on.



No one's saying AMD shouldn't improve its drivers. Navi clearly had / has issues.

The problem is when people come into threads making claims like 100% of AMD cards have issues. You can't complain about partisanship when you yourself added to it.
I didn't contribute to any partisanship. I bought 5 RX 5700 XT cards over 13 months for 5 different clients and all of them had problems. 5/5 had problems. I'm pointing out it was not rare at all. There was mass misery online, with those people knowledgeable enough about the problem trying to bring it to AMD's attention. Many people using those cards didn't even know about it if they didn't play the same games or didn't watch Netflix on Windows, for example.

Lived experiences are better than random people talking about something they haven't experienced.

AGAIN: RMA of a broken card, vs. unfixable crashing from driver issues... those two things are totally different.
Posted on Reply
#125
INSTG8R
Vanguard Beta Tester
nguyenDLSS is like a specific customization for a game; that doesn't mean it should be excluded, though.
I was responding to a recent HUB test that did not include DLSS results, so there is that.
TPU did some DLSS 2.0 testing, where is that?



Kinda funny that your conclusion is entirely contrary to what the author was saying. But hey, I was saying HUB was pretty unfair too :D, reviewer vs normal user, yeah?
You can apply FidelityFX to any game: just create a custom resolution and use the GPU scaler to upscale it to fit the screen, a 5-second custom job.
Funny you try to cast off Fidelity FX as so simple; it's a little bit more than that, but guess what, it compares well to your precious DLSS and looks better doing it, despite your crude description of what it's doing, while your tech is rendering at a lower res then upscaling, with that fancy AI working hard on the fly to hide what would otherwise look like crap. Apparently I just need to make a custom res and mine looks just as good and performs as well or better, and apparently I can apply it to any game I want... Sounds to me like Fidelity FX should be added to more games, being pretty easy to do. See, the weakness of DLSS is that it's constantly having to keep the illusion up on the fly, and you can see where it struggles to keep up with fast changes; Fidelity FX shows no such visual "artifacts".
Posted on Reply