
AMD Navi 21 XT Seemingly Confirmed to Run at ~2.3, 2.4 GHz Clock, 250 W+

Uh, again totally off track with F1, yes, pun intended. F1 has a strict set of rules that all cars must abide by, from ride height to fuel tank size, and yes, they all run the same fuel. You're asking for what is basically a cheat to be included as a valid benchmark. That would be like one driver getting an extra turbo. Benchmarking is about comparing cards under equal settings and equal conditions, and DLSS is the furthest thing from equal conditions. By your logic every card's 4K results are accurate while DLSS is actually 720p upscaled to 4K. How you can even consider that an equal bench result is truly laughable...
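For context on the "720p upscaled to 4K" figure, here is a minimal sketch (Python, with the commonly cited per-axis DLSS 2.0 scale factors taken as an assumption rather than anything stated in this thread) of what each DLSS mode's internal render resolution works out to at a 4K output:

```python
# Rough illustration: internal render resolution for each DLSS mode at a 4K output.
# The per-axis scale factors below are the commonly cited DLSS 2.0 ratios (assumption).

OUTPUT = (3840, 2160)  # 4K target resolution

DLSS_SCALE = {
    "Quality":           2 / 3,   # ~2560x1440 internal
    "Balanced":          0.58,    # ~2227x1253 internal
    "Performance":       1 / 2,   # 1920x1080 internal
    "Ultra Performance": 1 / 3,   # 1280x720 internal -- the "720p to 4K" case
}

native_pixels = OUTPUT[0] * OUTPUT[1]
for mode, s in DLSS_SCALE.items():
    w, h = round(OUTPUT[0] * s), round(OUTPUT[1] * s)
    print(f"{mode:>17}: {w}x{h} internal "
          f"({w * h / native_pixels:.0%} of the native 4K pixel count)")
```

On those assumed ratios, "720p to 4K" only describes Ultra Performance mode; Quality mode renders roughly 44% of the native pixel count, which is why the mode matters in the posts that follow.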

Well, if 720p upscaled to 4K has equal IQ to native 4K, I say go for it.
Every hardware editorial seems to agree that if DLSS is available, use it. They didn't say anything about trading visual quality for performance (at least with DLSS in Quality mode).

The only thing with DLSS is that it's unfair to AMD, who has no answer yet. But it's a competition out there, not some fun race between friends.

Let's think of it this way: while AMD continues to optimize rasterization performance for their GPUs, Nvidia's effort is going into incorporating DLSS into new games. But somehow benching DLSS is too unfair to AMD; it makes no fricking sense.

Also, equal settings and equal conditions were never actually equal; some games just have optimizations for a specific brand, like Far Cry 5 featuring rapid packed math.
 
Well, if 720p upscaled to 4K has equal IQ to native 4K, I say go for it.
Every hardware editorial seems to agree that if DLSS is available, use it. They didn't say anything about trading visual quality for performance (at least with DLSS in Quality mode).

The only thing with DLSS is that it's unfair to AMD, who has no answer yet. But it's a competition out there, not some fun race between friends.

Let's think of it this way: while AMD continues to optimize rasterization performance for their GPUs, Nvidia's effort is going into incorporating DLSS into new games. But somehow benching DLSS is too unfair to AMD; it makes no fricking sense.

Also, equal settings and equal conditions were never actually equal; some games just have optimizations for a specific brand, like Far Cry 5 featuring rapid packed math.
Okay, again, for the last time... that's not how benchmarks work! You compare every card under equal conditions or else it's not a valid benchmark, PERIOD. If AMD had a similar feature it ALSO could not be used. No one's denying some games are optimized better for one brand, but the cards are run at DEFAULT settings and results are based on their raw performance at those settings, as equal as possible. I'm glad you really like DLSS, but it's a "feature", and nobody gets to run features in benchmarks. All cards are run stock/default, always. Why do you think W1zzard has a single test bench where the only difference is the card he's testing and the drivers installed for it? So every card is tested under EQUAL conditions.
 
Okay, again, for the last time... that's not how benchmarks work! You compare every card under equal conditions or else it's not a valid benchmark, PERIOD. If AMD had a similar feature it ALSO could not be used. No one's denying some games are optimized better for one brand, but the cards are run at DEFAULT settings and results are based on their raw performance at those settings, as equal as possible. I'm glad you really like DLSS, but it's a "feature", and nobody gets to run features in benchmarks. All cards are run stock/default, always. Why do you think W1zzard has a single test bench where the only difference is the card he's testing and the drivers installed for it? So every card is tested under EQUAL conditions.

I have been saying this all along: the 5700 XT is being benched with a PCIe 4.0 config in HUB's testing, while the 2070S is PCIe 3.0, get it?
Are those equal testing conditions?
 
I have been saying this all along: the 5700 XT is being benched with a PCIe 4.0 config in HUB's testing, while the 2070S is PCIe 3.0, get it?
Are those equal testing conditions?
It will be tested against every other card at the same standard. The test is card to card, not card 1 under this standard and card 2 under this standard plus some hardware the other can't have.

Where you WILL see 4.0 tested with bells and whistles is in the AAA game performance reviews that W1zz does occasionally. In those he DOES test what a game can do with the special abilities or features that different cards have, because it is not a card-to-card comparison.
 
I have been saying this all along: the 5700 XT is being benched with a PCIe 4.0 config in HUB's testing, while the 2070S is PCIe 3.0, get it?
Are those equal testing conditions?
And we're back to apples and pumpkins; it does NOT affect the results in any way. Did you read the 3080 PCIe scaling tests? I saw a 1 FPS difference across all resolutions, which is literally margin of error. W1zzard's current test bench is 4.0, but again it will make absolutely zero difference. You're trying to make up a metric that doesn't exist to justify why your favourite new pet feature should somehow make for valid benchmarks. You know where PCIe 4.0 would be an unfair advantage? Testing NVMe drives... something that can actually utilize the extra bandwidth. GPUs still don't fully utilize 3.0. It was also pointed out that in the HUB reviews, if DLSS was available the results were listed but would NEVER be included in the benchmark results.
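As a back-of-the-envelope on the bandwidth point, here is a small sketch (Python; the x16 lane count, 8/16 GT/s transfer rates and 128b/130b encoding are the published PCIe figures, while the frame rates used for the margin-of-error comparison are made-up illustrative numbers):

```python
# PCIe x16 usable bandwidth per direction, and how large a 1 FPS delta is in percent.

LANES = 16
ENCODING = 128 / 130  # 128b/130b line encoding used by PCIe 3.0 and 4.0

def pcie_x16_gb_per_s(transfer_rate_gt_s):
    """Approximate one-way bandwidth of an x16 link in GB/s."""
    return transfer_rate_gt_s * LANES * ENCODING / 8  # Gbit/s -> GB/s

print(f"PCIe 3.0 x16: ~{pcie_x16_gb_per_s(8.0):.1f} GB/s per direction")
print(f"PCIe 4.0 x16: ~{pcie_x16_gb_per_s(16.0):.1f} GB/s per direction")

# A 1 FPS gap expressed as a percentage at typical review frame rates (illustrative only).
for fps in (60, 100, 144):
    print(f"1 FPS out of {fps} FPS = {1 / fps:.1%}")
```

A 1 FPS gap at around 100 FPS is roughly 1%, which is the sense in which the scaling results above get read as margin of error, independent of whether a given card can actually saturate the 3.0 link.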
 
I have been saying this all along: the 5700 XT is being benched with a PCIe 4.0 config in HUB's testing, while the 2070S is PCIe 3.0, get it?
Are those equal testing conditions?

I'm not sure why you have such a hard-on for this; testing shows negligible to no difference, and all Nvidia GPUs are now at PCIe 4.0.

Furthermore, DLSS & PCIe are not comparable. One is a lossy upscaling technology only available in a few games; the other is a ubiquitous connection standard.

But if you really want to split hairs, everyone testing on an Intel system tested at PCIe 3.0.
 
And we're back to apples and pumpkins; it does NOT affect the results in any way. Did you read the 3080 PCIe scaling tests? I saw a 1 FPS difference across all resolutions, which is literally margin of error. W1zzard's current test bench is 4.0, but again it will make absolutely zero difference. You're trying to make up a metric that doesn't exist to justify why your favourite new pet feature should somehow make for valid benchmarks. You know where PCIe 4.0 would be an unfair advantage? Testing NVMe drives... something that can actually utilize the extra bandwidth. GPUs still don't fully utilize 3.0. It was also pointed out that in the HUB reviews, if DLSS was available the results were listed but would NEVER be included in the benchmark results.

Holy jeez, since when was I talking about TPU benchmarks?
I was responding to a post about HUB's testing, not TPU's.
In the HUB testing, Steve also said that PCIe 4.0 contributes a few % net gain for the 5700 XT, which the 2070S and 2060S were left out of.

This is the video I was responding to, not some PCIe scaling benchmark at TPU.

Take a hint, will you.
 
2.4 GHz but slower than the 3080, especially when RT is turned on. Insta-Passss!

Sauce? Since you have links to prerelease benchmarks across the product stack.
 
Holy jeez, since when was I talking about TPU benchmarks?
I'm just informing you that TPU is now benchmarking on PCIe 4.0 as well, so you'd better clutch your pearls at all the benches done here too then. You literally can't accept that PCIe 4.0 is offering no advantage. All the Ampere cards are 4.0; I'll be sure to cry foul when they have an unfair advantage over your 2080 Ti... :rolleyes:
 
I'm just informing you that TPU is now benchmarking on PCIe 4.0 as well, so you'd better clutch your pearls at all the benches done here too then. You literally can't accept that PCIe 4.0 is offering no advantage. All the Ampere cards are 4.0; I'll be sure to cry foul when they have an unfair advantage over your 2080 Ti... :rolleyes:

So you are comparing apples to pumpkins and accusing me of doing so. TPU's benchmark suite is different from HUB's: different games, different testing config.
If TPU's testing shows no difference, then it must be the same for every other testing condition?
Pretty small-minded, aren't you, and you call yourself an enthusiast?

Well, since you have an X570 and a 5700 XT, you might as well test them yourself against HUB.



Yeah, some time next month once I get a 5950X + 3090 I might do some PCIe scaling benchmarks for you :D
 
So you are comparing apples to pumpkins and accusing me of doing so. TPU's benchmark suite is different from HUB's: different games, different testing config.
If TPU's testing shows no difference, then it must be the same for every other testing condition?
Pretty small-minded, aren't you, and you call yourself an enthusiast?

Well, since you have an X570 and a 5700 XT, you might as well test them yourself against HUB.

The problem is YOU cannot let go of this literally insignificant deviation, like 4.0 is cheating 3.0 cards somehow. Still not seeing any advantage here? W1z has always done PCIe scaling tests and he's tested this as well, and I trust his methods; he literally used the most recent flagship 4.0 card. I just picked one bench at 4K, but you can pick any title or resolution he tested and they all look like this... I don't use HUB for hardware reviews or benchmarks, so no, I don't know their methods, but I do know W1zzard's: there is no more than a 2 FPS difference in every resolution and game he tested with a 4.0 3080, and if any card could possibly use the extra bandwidth it would be that one. You're chasing a metric that has zero impact on benchmark scoring.
[attached: 4K results chart from W1zzard's RTX 3080 PCI-Express scaling test]
 
The problem is YOU cannot let go of this literally insignificant deviation, like 4.0 is cheating 3.0 cards somehow. Still not seeing any advantage here? W1z has always done PCIe scaling tests and he's tested this as well, and I trust his methods; he literally used the most recent flagship 4.0 card. I just picked one bench at 4K, but you can pick any title or resolution he tested and they all look like this... I don't use HUB for hardware reviews or benchmarks, so no, I don't know their methods, but I do know W1zzard's: there is no more than a 2 FPS difference in every resolution and game he tested with a 4.0 3080, and if any card could possibly use the extra bandwidth it would be that one. You're chasing a metric that has zero impact on benchmark scoring.
[attached: 4K results chart from W1zzard's RTX 3080 PCI-Express scaling test]

I literally talked about how flawed HUB's testing was, yet here you are defending TPU?
When did I even talk about TPU?
:roll:
OK sure, mister "zero impact", "equal testing conditions" :D, whatever you say
 
Let's just pretend for a sec that PCIe 4.0 made more than a statistical anomaly of a difference.

Why should a PCIe 4.0 card be tested at 3.0 when it's a review of its performance? It's like testing USB 3 drives against a USB 2 drive in a USB 2 slot; it doesn't make any sense.

And before you utter DLSS: PCIe 4.0 is, A: a standard available regardless of game, and B: not a lossy upscaling solution that isn't rendering at the benchmarked resolution.
 
Let's just pretend for a sec that PCIe 4.0 made more than a statistical anomaly of a difference.

Why should a PCIe 4.0 card be tested at 3.0 when it's a review of its performance? It's like testing USB 3 drives against a USB 2 drive in a USB 2 slot; it doesn't make any sense.

And before you utter DLSS: PCIe 4.0 is, A: a standard available regardless of game, and B: not a lossy upscaling solution that isn't rendering at the benchmarked resolution.

C: should you disable DLSS where it is available to you?

It's like DX12 vs DX11: you use whichever API gives you better FPS and frame times. Forcing DX12 benchmarks onto Nvidia cards where it is detrimental is quite unfair, no? Same as disabling DLSS.

I believe the point of benchmarking is that it must resemble real-world usage?
 
C: should you disable DLSS where it is available to you?

It's like DX12 vs DX11: you use whichever API gives you better FPS and frame times. Forcing DX12 benchmarks onto Nvidia cards where it is detrimental is quite unfair, no?


A lossy image isn't the same as a lossless image, and suggesting otherwise reminds me of the Quake 3 driver shenanigans from ATi & Nvidia back in the day. Or should we benchmark AMD cards running at the same internal resolution as DLSS, with sharpening applied to the native render target, and call it comparable?
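If someone actually wanted to put a number on the lossy-vs-lossless point, the usual starting move is an image-similarity metric such as PSNR against the native frame. A minimal sketch, assuming NumPy and using random placeholder arrays in place of real captured frames (the `psnr` helper and the placeholder data are mine, not anything from this thread):

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference frame."""
    ref = reference.astype(np.float64)
    tst = test.astype(np.float64)
    mse = np.mean((ref - tst) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

# Placeholder frames (1080p-sized to keep the example light): a fake "native" render
# and an "upscaled" version with some error added to stand in for upscaling loss.
rng = np.random.default_rng(0)
native = rng.integers(0, 256, size=(1080, 1920, 3), dtype=np.uint8)
upscaled = np.clip(native + rng.normal(0, 5, native.shape), 0, 255).astype(np.uint8)

print(f"PSNR of upscaled frame vs native: {psnr(native, upscaled):.1f} dB")
```

In practice you would capture matched frames from the game itself, and even then a single-frame metric misses temporal artifacts, which is part of the apples-to-apples problem being argued here.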
 

A lossy image isn't the same as a lossless image, and suggesting otherwise reminds me of the Quake 3 driver shenanigans from ATi & Nvidia back in the day. Or should we benchmark AMD cards running at the same internal resolution as DLSS, with sharpening applied to the native render target, and call it comparable?


DLSS vs Fidelity FX
 
Let's just pretend for a sec that PCIe 4.0 made more than a statistical anomaly of a difference.

Why should a PCIe 4.0 card be tested at 3.0 when it's a review of its performance? It's like testing USB 3 drives against a USB 2 drive in a USB 2 slot; it doesn't make any sense.

And before you utter DLSS: PCIe 4.0 is, A: a standard available regardless of game, and B: not a lossy upscaling solution that isn't rendering at the benchmarked resolution.

Except drives are limited by the bandwidth offered by USB 2.0, whereas the 5700 XT doesn't come anywhere near tapping out the bandwidth of a PCIe 2.0 slot, never mind 3.0 or 4.0. It's more like testing a USB 1 drive in a USB 2 or 3 slot.

Also, PCIe devices are spec'd to be backwards compatible, or are supposed to be. So if the 5700 XT can't operate properly at 3.0 even though it has more than enough bandwidth, then there is something wrong with the design/implementation of that standard on the card.
 

DLSS vs Fidelity FX

So now one lossy solution isn't allowed to be benchmarked, but another one is? At least be consistent in your bullshit.
 
I literally talked about how flawed HUB's testing was, yet here you are defending TPU?
When did I even talk about TPU?
:roll:
OK sure, mister "zero impact", "equal testing conditions" :D, whatever you say
How many more charts showing zero difference do you need to see before you drop it as an advantage, which you keep typing about to justify DLSS as one when it is not?
 
So now one lossy solution isn't allowed to be benchmarked, but another one is? At least be consistent in your bullshit.

Sure, with equal image quality let's benchmark DLSS Performance (or Ultra Performance) vs FidelityFX.
 
Sure, with equal image quality let's benchmark DLSS Performance (or Ultra Performance) vs FidelityFX.

Bit hard to say it's equal image quality, since DLSS's processing of the image uses graphical techniques that aren't exposed by the game engine, making comparison difficult. Secondly, not all frames have equal graphical fidelity: when new data is introduced to the scene, the AI algorithm has to 'race' to gain info from multiple frames to reconstruct the image.

Is DLSS good? Yup. Will we probably play all games like this? Yup (if Nvidia bothers to make a solution that works across all games, which is another reason why it shouldn't be used in benchmarking scenarios). But it has fatal flaws that preclude it from being compared apples to apples, and benchmarking data with DLSS should be shown, but called out separately so the reader can make a value call on the limited support and performance subset...

Which both HWU & TPU do. So again, what's your problem?
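On the 'race to gain info from multiple frames' point, here is a deliberately oversimplified temporal-accumulation sketch (plain exponential blending of noisy samples, NumPy only; it is not Nvidia's network, and the blend factor, noise level and array sizes are all made-up illustration values). It just shows that freshly introduced content starts out as a single noisy sample and only converges after several frames:

```python
import numpy as np

def accumulate(history, current, alpha=0.3):
    """Blend the current noisy sample into the running history (crude temporal reuse)."""
    return current if history is None else (1 - alpha) * history + alpha * current

rng = np.random.default_rng(1)
truth = rng.random((64, 64))       # stand-in for the "true" high-res detail
history = None

for frame in range(8):
    sample = truth + rng.normal(0, 0.2, truth.shape)  # jittered, aliased sample
    history = accumulate(history, sample)
    mean_err = np.abs(history - truth).mean()
    print(f"frame {frame}: mean error {mean_err:.3f}")  # error shrinks as frames accumulate
```

The first frame is just the raw sample, so anything newly disoccluded looks its worst right when it appears, which is one reason per-frame image quality under any temporal reconstruction is not constant.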
 
Sure, with equal image quality let's benchmark DLSS Performance (or Ultra Performance) vs FidelityFX.
Again, neither of those things would be used in any benchmark except to compare them to each other, and you literally found the only example where it's even possible because it's the only game that has both. It still gives zero weight to your argument for DLSS. But hey, I can actually argue for FidelityFX being the better of the two in that game: I own it, I finished it, and I used FidelityFX, but that comparison shows DLSS with some obvious visual issues FidelityFX didn't have, and overall it looked better. I know my performance in the game was fantastic and it looked totally amazing. Still, you can't use it in benchmarks.
 
Bit hard to say it's equal image quality, since DLSS's processing of the image uses graphical techniques that aren't exposed by the game engine, making comparison difficult. Secondly, not all frames have equal graphical fidelity: when new data is introduced to the scene, the AI algorithm has to 'race' to gain info from multiple frames to reconstruct the image.

Is DLSS good? Yup. Will we probably play all games like this? Yup (if Nvidia bothers to make a solution that works across all games, which is another reason why it shouldn't be used in benchmarking scenarios). But it has fatal flaws that preclude it from being compared apples to apples, and benchmarking data with DLSS should be shown, but called out separately so the reader can make a value call on the limited support and performance subset...

Which both HWU & TPU do. So again, what's your problem?

DLSS is like a game-specific customization; that doesn't mean it should be excluded though.
I was responding to a recent HUB test that did not include DLSS results, so there is that.
TPU did some DLSS 2.0 testing; where is that?

Again, neither of those things would be used in any benchmark except to compare them to each other, and you literally found the only example where it's even possible because it's the only game that has both. It still gives zero weight to your argument for DLSS. But hey, I can actually argue for FidelityFX being the better of the two in that game: I own it, I finished it, and I used FidelityFX, but that comparison shows DLSS with some obvious visual issues FidelityFX didn't have, and overall it looked better. I know my performance in the game was fantastic and it looked totally amazing. Still, you can't use it in benchmarks.

Kinda funny that your conclusion is entirely contrary to what the author was saying. But hey, I was saying HUB was pretty unfair too :D, reviewer vs normal user, yeah?
You can apply FidelityFX to any game: just create a custom resolution and use the GPU scaler to upscale it to fit the screen, a five-second custom job.
 
DLSS is like a game-specific customization; that doesn't mean it should be excluded though.
I was responding to a recent HUB test that did not include DLSS results, so there is that.
TPU did some DLSS 2.0 testing; where is that?

DLSS is terrible and doesn't work in 99 percent of the games I play. Next.

No, because polls from Hardware Unboxed 3 months back suggest that only 7% of card owners were having severe issues like what you were describing (and let's be honest here, a number of those users are likely Nvidia owners checking the box that makes AMD look worse). You are suggesting that 100% of 5700 / 5700 XTs had issues, and that simply isn't true. I should not have to say that though; if 100% had issues, drivers or otherwise, AMD would have been decimated. Why do you make the assumption that 5/5 cards having issues is "likely"? No, in no universe is it. None of the data supports that, and we haven't ever seen anywhere near 100%.

My prior statement stands: either you are extraordinarily unlucky or there is something else going on.



No one's saying AMD shouldn't improve its drivers. Navi clearly had / has issues.

The problem is when people come into threads making claims like 100% of AMD cards have issues. You can't complain about partisanship when you yourself added to it.

I didn't contribute to any partisanship. I bought 5 RX 5700 XT cards over 13 months for 5 different clients, and all of them had problems. 5/5 had problems. I'm pointing out it was not rare at all. There was mass misery online, with the people knowledgeable enough about the problem trying to bring it to AMD's attention. Many people using those cards didn't even notice it if they didn't play the same games, or didn't watch Netflix on Windows, for example.

Lived experiences are better than random people talking about something they haven't experienced.

AGAIN: RMA of a broken card, vs. unfixable crashing from driver issues... those two things are totally different.
 