Thursday, December 3rd 2020

AMD Radeon RX 6900 XT Graphics Card OpenCL Score Leaks

AMD has launched its RDNA 2 based graphics cards, codenamed Navi 21. These GPUs are set to compete with NVIDIA's Ampere offerings, with the lineup covering the Radeon RX 6800, RX 6800 XT, and RX 6900 XT graphics cards. So far, we have reviews of the first two, but not of the Radeon RX 6900 XT, because that card arrives at a later date, specifically on December 8th, just a few days from now. As a reminder, the Radeon RX 6900 XT uses the Navi 21 XTX GPU with 80 Compute Units, for a total of 5120 Stream Processors. The card connects its 16 GB of GDDR6 memory over a 256-bit bus, backed by 128 MB of on-die Infinity Cache. As for frequencies, it has a base clock of 1825 MHz and a boost clock of 2250 MHz.
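As a back-of-the-envelope check on those specifications: RDNA 2 packs 64 Stream Processors per Compute Unit, and each can issue two FP32 operations per clock via FMA. A minimal sketch of the arithmetic, noting that the 16 Gbps GDDR6 speed grade is an assumption since the article only quotes the bus width:

```python
# Back-of-the-envelope RDNA 2 math. Assumed (not stated in the article):
# 64 Stream Processors per CU, 2 FP32 ops/clock (FMA), 16 Gbps GDDR6.
compute_units = 80
sps_per_cu = 64
boost_clock_ghz = 2.250

stream_processors = compute_units * sps_per_cu                       # 5120
peak_fp32_tflops = stream_processors * 2 * boost_clock_ghz / 1000.0  # ~23.04

bus_width_bits = 256
gddr6_gbps_per_pin = 16                                              # assumed
bandwidth_gb_s = bus_width_bits * gddr6_gbps_per_pin / 8             # 512 GB/s

print(f"{stream_processors} SPs, {peak_fp32_tflops:.2f} TFLOPS FP32, "
      f"{bandwidth_gb_s:.0f} GB/s")
```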

Today, a GeekBench 5 submission gives us the first benchmark of AMD's top-end Radeon RX 6900 XT graphics card. Running the OpenCL test suite, the card was paired with AMD's Ryzen 9 5950X 16C/32T CPU and scored 169779 points. That makes it about 12% faster than the RX 6800 XT, but still slower than the competing NVIDIA GeForce RTX 3080, which scores 177724 points. However, we should wait for a few more benchmarks to appear before jumping to any conclusions, including the TechPowerUp review, which is expected to arrive once the NDA lifts. Below, you can compare the score to other GPUs in the GeekBench 5 OpenCL database.
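For context, here is how the quoted numbers relate to each other; the RX 6800 XT score is back-calculated from the article's 12% figure, so treat it as approximate:

```python
# Relative standings implied by the quoted GeekBench 5 OpenCL scores.
rx_6900_xt = 169_779
rtx_3080 = 177_724

deficit_pct = (rtx_3080 - rx_6900_xt) / rtx_3080 * 100
print(f"RX 6900 XT trails the RTX 3080 by ~{deficit_pct:.1f}%")      # ~4.5%

# "12% faster than the RX 6800 XT" implies roughly:
rx_6800_xt_implied = rx_6900_xt / 1.12
print(f"Implied RX 6800 XT score: ~{rx_6800_xt_implied:,.0f}")       # ~151,588
```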
Sources: @TUM_APISAK, via VideoCardz

35 Comments on AMD Radeon RX 6900 XT Graphics Card OpenCL Score Leaks

#26
EarthDog
There are AMD people who do the saaaaaaaaaaaame thing in NV/Intel threads, bud. This place is a cesspool for polarizing toxic fanboys. I've put many, from both sides, on ignore, but at this point, since these people (somehow) continue to exist on this site, a lot of these threads look like swiss cheese. :p
spnidel said: first/second gen tessellation was slow as fuck too, who cares. RTRT effects won't become REALLY mainstream for at least 5 years, and won't be mandatory in any game for at least 10 more years.
Just comparing apples to apples, bud. It's slower, period. And if you want to use that feature, it won't be as good as the NV offerings. I would also bet good money that if their RT was as fast, suddenly a lot of AMD people would be jocking the merits... but here we are. Let the (curiously) loyal defend their brethren at all costs.
#27
spnidel
EarthDog said: Just comparing apples to apples, bud. It's slower, period. And if you want to use that feature, it won't be as good as the NV offerings.
I won't want to use that feature for at least 5 years, just like how I didn't want to use tessellation when it first emerged back in 2010 :D. The hardware is simply not mature enough yet, and the effects themselves do not provide a large enough visual uplift to justify the framerate impact.
#28
EarthDog
spnidel said: I won't want to use that feature for at least 5 years, just like how I didn't want to use tessellation when it first emerged back in 2010 :D. The hardware is simply not mature enough yet, and the effects themselves do not provide a large enough visual uplift to justify the framerate impact.
That's you. There are PLENTY of others who use it without issue or concern. 2nd gen RT is VASTLY improved, and in many titles you don't need DLSS for 60 FPS+ at 1440p, even with a 3070.

Justify it all you want, but you know what I said is the truth. If you don't use it, that's fine, but there are plenty who do, with more every day, and that train is gaining momentum thanks to consoles.
#29
Chrispy_
spnidel said: there he is, the cute little shill, moving goalposts as usual!
I'm not particularly keen on raytracing performance and wouldn't use it as a deciding factor when judging a GPU at the moment, but I do value NVENC, DLSS is actually useful in some instances, and working from home, I'm finding RTX Voice useful too.

Given how popular streaming and WFH are at the moment, those are features that can't really be ignored, and AMD simply doesn't compete at all. Like, they haven't even bothered trying! On top of that, the number of things that are vastly superior when run using CUDA instead of OpenCL is starting to get ridiculous. I know CUDA is proprietary, but Nvidia have put the work in to make it a thing and have spent years investing in it at this point. AMD have basically said "sure, we do OpenCL" and left it to rot, with developers left to fend for themselves. Is it any wonder that CUDA is now the dominant application acceleration API?

Before anyone wonders: no, I'm not an Nvidia fanboy. If anything, I hate Nvidia for their regular unethical and shady practices, and I have a strong preference for the underdog (because that's the better option, the one that promotes healthy competition and benefits us as consumers the most), but I can't deny the facts: RDNA2 is completely uncompetitive in many ways. The only thing it is actually competitive in is traditional raster-based gaming, and that's a shrinking slice of the market.
#30
spnidel
EarthDog said: That's you. There are PLENTY of others who use it without issue.

Justify it all you want, but you know what I said is the truth. :)
fair enough :)
#31
warrior420
Fluffmeister said: Don't worry, you can't buy them anyway.

Like literally.
I have one. :)
#32
N3M3515
EarthDog said: 2nd gen RT is VASTLY improved and in many titles...
Actually, they haven't improved RT AT ALL; they just added a LOT more cores. The impact is exactly the same as on the RTX 2000 series.
#33
EarthDog
N3M3515 said: Actually, they haven't improved RT AT ALL; they just added a LOT more cores. The impact is exactly the same as on the RTX 2000 series.
Correct. Regardless of how they got there, RT performance is a lot better.
#34
wolf
Better Than Native
EarthDog said: Correct. Regardless of how they got there, RT performance is a lot better.
I also think most, if not all, current RT games were optimised for Turing's 1st gen RT cores. As time goes on, I'm optimistic that if games are optimised more heavily for Ampere (which, irrespective of RT, will certainly be the case), then the divide between Turing and Ampere in RT processing will widen.

AMD's architectures improving over time relative to the Nvidia counterparts they launched against may be due, at least in part, to their compute-heavy nature, and it might be how Ampere fares over time too.

To hammer in an old point: if RT, DLSS, voice, streaming, CUDA, etc. mean bupkis to you, and I know there would be MANY out there who fit the bill, all the power to you; why pay for features you don't want or won't use? But accept that the other side of that coin is that if you want any or all of those, AMD is either weaker or non-existent in those spaces.
#35
Chrispy_
wolf said: I also think most, if not all, current RT games were optimised for Turing's 1st gen RT cores. As time goes on, I'm optimistic that if games are optimised more heavily for Ampere (which, irrespective of RT, will certainly be the case), then the divide between Turing and Ampere in RT processing will widen.
I'm only guessing here, but I suspect that game devs will optimise for AMD's DXR implementation first and foremost: the consoles are a huge market with (relatively) fixed hardware that commands the lion's share of sales volume and profit.

As for the PC gaming market, yes, it's pretty big, but it's not all running AMD 6800-series raytracing cards or better.