Sunday, October 18th 2020
AMD Navi 21 XT Seemingly Confirmed to Run at ~2.3, 2.4 GHz Clock, 250 W+
AMD's RDNA2-based cards are just around the corner, with the company's full debut of the secrecy-shrouded cards set for October 28th. Rumors of high clocks on AMD's new architecture - nothing more than unsubstantiated whispers until now - have seemingly been confirmed, with Patrick Schur posting on Twitter some specifications for the upcoming RDNA2-based Navi 21 XT. Navi 21 XT falls under the big Navi chip, but likely isn't the top performer from AMD - the company is allegedly working on a Navi 21 XTX solution, which ought to be exclusive to its reference designs, with higher clocks and possibly more CUs.
The specs outed by Patrick are promising, to say the least; that AMD's Big Navi can reach clocks in excess of 2.4 GHz with a 250 W+ TGP (quoted at around 255 W) is certainly good news. The 2.4 GHz (game clock) speeds are being associated with AIB cards; AMD's own reference designs should be running at a more conservative 2.3 GHz. A memory pool of 16 GB GDDR6 has also been confirmed. AMD's assault on the NVIDIA 30-series lineup should embody three models carved from the Navi 21 chip - the higher performance, AMD-exclusive XTX, XT, and the lower performance Navi 21 XL. All of these are expected to ship with the same 256 bit bus and 16 GB GDDR6 memory, whilst taking advantage of AMD's (rumored, for now) Infinity Cache to make up for the lower memory speeds and bus. Hold on to your hats; the hype train is going full speed ahead, luckily stopping in a smooth manner come October 28th.
Sources:
Patrick Schur @ Twitter, via Videocardz
229 Comments on AMD Navi 21 XT Seemingly Confirmed to Run at ~2.3, 2.4 GHz Clock, 250 W+
Are those equal testing conditions?
Where you WILL see 4.0 tested with bells and whistles is in the AAA game performance reviews that W1zz does occasionally. In those he DOES test what a game can do with the special abilities or features that different cards have, because it is not a card-to-card comparison.
Furthermore, DLSS & PCIE are not comparable. One is a lossy upscaling technology only available on a few games, the other is a ubiquitous connection standard.
But if you really want to split hairs, everyone testing on an Intel system tested at PCIE3.
I was responding to a post about HUB testing, not TPU.
In the HUB testing, Steve also said that PCIe 4.0 contributed a few percent net gain for the 5700XT, a gain the 2070S and 2060S were left out of.
Take a hint will you.
If TPU's testing shows no difference, then it must be the same for every other testing condition?
Pretty small-minded, aren't you, and you call yourself an enthusiast?
Well, since you have an X570 and a 5700XT, you might as well test them yourself against HUB.
Yeah some time next month once I get 5950X + 3090 I might do some PCIe scaling benchmark for you :D
When did I even talk about TPU?
:roll:
OK sure, mister "zero impact", "equal testing conditions" :D, whatever you say.
Why should a PCIe 4.0 card be tested at 3.0 when it's a review of its performance? It's like testing USB 3 drives against a USB 2 drive in a USB 2 slot; it doesn't make any sense.
And before you utter DLSS: PCIe 4.0 is A: a standard available regardless of game, and B: not a lossy upscaling solution that isn't rendering at the benchmarked resolution.
It's like DX12 vs DX11: you use whichever API gives you better FPS and frametimes. Forcing a DX12 benchmark onto Nvidia cards where it is detrimental is quite unfair, no? Same as disabling DLSS.
I believe the point of benchmarking is that it must resemble real-world usage?
A lossy image isn't the same as a lossless image, and suggesting as much reminds me of the Quake 3 driver shenanigans from ATi & Nvidia back in the day. Or should we benchmark AMD cards running at the same internal resolution as DLSS, apply sharpening with a native render target, and call it comparable?
DLSS vs Fidelity FX
Also, PCIe devices are spec'ed to be backwards compatible, or are supposed to be. So if the 5700XT can't properly operate at 3.0 even though it has more than enough bandwidth, then there is something wrong with the design/implementation of that standard on the card.
Is DLSS good? Yup. Will we probably play all games like this? Yup (if Nvidia bothers to make a solution that works across all games, which is another reason why it shouldn't be used in benchmarking scenarios). But it has fatal flaws that preclude it from being compared apples to apples, and benchmarking data with DLSS should be shown, but called out separately so the reader can make a value call on the limited support and performance subset.
Which both HWU & TPU do. So again, what's your problem?
I was responding to a recent HUB test that did not include DLSS results, so there is that.
TPU did some DLSS 2.0 testing, where is that? Kinda funny your conclusion is entirely contrary to what the author was saying. But hey, I was saying HUB was pretty unfair too :D, reviewer vs normal user, yeah?
You can apply FidelityFX to any game: just create a custom resolution and use the GPU scaler to upscale it to fit the screen, a 5-second custom job.
Lived experiences are better than random people talking about something they haven't experienced.
AGAIN: RMA of a broken card, vs. unfixable crashing from driver issues... those two things are totally different.