Are you aware that DX12 and DX12 Ultimate are two separate and different standards?
DX12 launched in July 2015 (announced March 2014); DX12U launched in November 2020 (announced March 2020). Ultimate is treated as an extension of 12, but they are separate standards, and many, many GPUs have full DX12 support but not DX12U support. Making a "DX12" benchmark suite that includes DX12U-exclusive features would thus be misleading and unrepresentative of overall DX12 performance.

You'll also do well to note that there are both DX11 and Vulkan games in the rasterization suite. It is not, nor should it be, a DX12 suite, as that would make it unrepresentative of gaming overall. Now, you could always argue that DX11 and Vulkan benchmarks should be separated out, which would to some extent be a valid argument - but by that logic, DX12U games would also need to be separated, as they are materially different from non-Ultimate DX12 titles. Different APIs, different feature sets, different hardware requirements.
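For illustration, since the two keep getting conflated: the DX12 Ultimate feature set - DXR 1.1, mesh shaders, sampler feedback, and Tier 2 variable rate shading - consists of optional capabilities layered on top of plain DX12, which is exactly why a GPU can be fully DX12-compliant and still fail an Ultimate check. A minimal sketch of such a check, assuming an already-created ID3D12Device and a recent Windows SDK (the helper name is my own):

```cpp
#include <d3d12.h>

// Rough illustration: query the optional caps that make up the DX12 Ultimate feature set.
bool SupportsDX12Ultimate(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {}; // raytracing (DXR) tier
    D3D12_FEATURE_DATA_D3D12_OPTIONS6 opts6 = {}; // variable rate shading tier
    D3D12_FEATURE_DATA_D3D12_OPTIONS7 opts7 = {}; // mesh shaders + sampler feedback

    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5, &opts5, sizeof(opts5))) ||
        FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS6, &opts6, sizeof(opts6))) ||
        FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS7, &opts7, sizeof(opts7))))
        return false; // an older runtime/driver may not even know about these caps

    // All four must be present for the DX12 Ultimate badge; plenty of "full DX12" GPUs fail here.
    return opts5.RaytracingTier          >= D3D12_RAYTRACING_TIER_1_1
        && opts6.VariableShadingRateTier >= D3D12_VARIABLE_SHADING_RATE_TIER_2
        && opts7.MeshShaderTier          >= D3D12_MESH_SHADER_TIER_1
        && opts7.SamplerFeedbackTier     >= D3D12_SAMPLER_FEEDBACK_TIER_0_9;
}
```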
Also, for the record, at least one of the RT-enabled games ... uses Vulkan. Doom Eternal, that is. That
definitely makes it a poor fit for a DX12 or DX12U test suite ....
Like ... what? "Ignored" RT performance? They delivered a reasonably competitive solution one generation after Nvidia first launched the feature. That is a very fast turnaround for adding support for a feature that up until then didn't exist at all. It would literally not have been possible for them to respond more quickly than they did - hardware RT support would necessarily arrive with their next architecture designed after Turing, which was RDNA2. Now, Ampere's RT is significantly faster - again, nobody here is denying that in any way - but that's not what we're discussing here. Your description of events just demonstrates a massive and plainly irrational level of bias.
What you are describing here is DX12U, not DX12. Also, "faster storage"? You mean DirectStorage, right? That is not "faster storage"; it is a technology for transferring data directly from an SSD to VRAM without routing it through the CPU and system RAM, plus (once that part gets implemented) in-GPU decompression of assets. "Faster storage" is not a fitting description of DirectStorage.
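To be concrete about what it actually is - a rough sketch, not anything from the review, assuming an existing ID3D12Device, a destination ID3D12Resource already allocated in VRAM, the DirectStorage runtime installed, and error handling omitted (the function name is my own): you enqueue a read request that goes from a file straight into a GPU buffer, rather than the disk itself somehow becoming faster.

```cpp
#include <cstdint>
#include <dstorage.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Rough sketch of a DirectStorage request: file -> VRAM buffer, batched on a dedicated queue.
void LoadAssetDirect(ID3D12Device* device, ID3D12Resource* vramBuffer,
                     const wchar_t* path, uint32_t sizeBytes)
{
    ComPtr<IDStorageFactory> factory;
    DStorageGetFactory(IID_PPV_ARGS(&factory));

    ComPtr<IDStorageFile> file;
    factory->OpenFile(path, IID_PPV_ARGS(&file));

    DSTORAGE_QUEUE_DESC queueDesc = {};
    queueDesc.Capacity   = DSTORAGE_MAX_QUEUE_CAPACITY;
    queueDesc.Priority   = DSTORAGE_PRIORITY_NORMAL;
    queueDesc.SourceType = DSTORAGE_REQUEST_SOURCE_FILE;
    queueDesc.Device     = device;

    ComPtr<IDStorageQueue> queue;
    factory->CreateQueue(&queueDesc, IID_PPV_ARGS(&queue));

    DSTORAGE_REQUEST request = {};
    request.Options.SourceType          = DSTORAGE_REQUEST_SOURCE_FILE;
    request.Options.DestinationType     = DSTORAGE_REQUEST_DESTINATION_BUFFER;
    request.Source.File.Source          = file.Get();
    request.Source.File.Offset          = 0;
    request.Source.File.Size            = sizeBytes;
    request.Destination.Buffer.Resource = vramBuffer;
    request.Destination.Buffer.Offset   = 0;
    request.Destination.Buffer.Size     = sizeBytes;

    queue->EnqueueRequest(&request);
    queue->Submit(); // completion is normally tracked with an ID3D12Fence via EnqueueSignal()
}
```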
The future? I thought you were saying it was currently the norm, that rasterization was dead? Now you've got me all confused.
Yes. Yes we did. Dedicated testing for new features that stand out distinctly from others has been the norm across essentially all good benchmarking sites. This has been true for things like PhysX. This has been true for bespoke smaller features like TressFX and HairWorks (though mostly they are just explicitly disabled). This is true for essentially every comparable technology across pretty much every respectable review site out there.
Has any reviewer, ever, two years after the launch of a new API, dedicated the entirety of their test suite to that API?
There may well be a tipping point where RT-enabled games should be blended into the overall test suite, but that point is not now. IMO, that point is when RT benchmarks are relevant for all GPUs across all product stacks - i.e. when including them wouldn't break the overall performance charts because half the GPUs on the chart can't even run half the benchmarks.
Seriously, your conspiratorial logic here is outright disturbing. There are perfectly reasonable arguments for separating these two out, as have been presented to you at length over these past four pages of discussion. You are refusing to even engage in any kind of discussion, just repeating hollow non-arguments centered on the contradictory pairing of "RT is the norm now"/"RT is the future of graphics". You're welcome to disagree with people's judgements, but for that to be taken seriously you need to actually present reasonable, on-topic, impersonal arguments, and not start accusing everyone of bias and conspiracy right out of the gate. All you're achieving by that is antagonizing everyone - even the people inclined to agree with you on some or all points - and making yourself look entirely irrational and unreasonable.
- These are not DX12 features, they are DX12 Ultimate features.
- This in no way makes one manufacturer look stronger, as the "downplayed" features are tested, and the results of that testing are included in the conclusion (= the overall summary) of the review.
So ... you want a test suite that is fundamentally unrepresentative of current game development? Doesn't that seem ...
biased to you? Because while DX11 adoption is waning and Vulkan is relatively niche, both are still relevant, both for (some) new games and for legacy titles. What you are asking for is a test suite that inherently prioritizes a specific subset of features found in games, because you think those features are more important. The thing about that: your opinions are not universal, and the reviews are not written for you personally. They are meant to paint a broadly representative picture of these products. Limiting the test suite to DX12 only would make it much worse at what it is supposed to do.
The issue with this is that in quite a few titles, RT makes very little visual difference. This is entirely dependent on the implementation - and as with all new tools, learning to use them well takes time, while you might be able to do a comparable job with the tools you're already familiar with, despite them being much older and technically less capable. So in many titles, baked lighting and reflections can look very good, while a low-quality RT implementation can look worse. That obviously isn't the reality in even a majority of games, but it is a relevant issue. Far Cry 6, for example, has been near-universally criticized for its RT implementation being ... well, essentially unnoticeable outside of the performance drop.
You seem to have a rather odd view of both the relative marketshare and mindshare of these companies, as well as recent sales. Radeon GPUs have been just as sold out as Geforce GPUs across every price bracket except for ultra-premium until the past couple of months. It's true that Radeon supplies improved before Geforce supplies did, which is likely due to the same reason that flagship RTX cards have persistently been selling out: they're much better at cryptomining.
Other than that: Nvidia outselling AMD ~4:1 is a continuation of the status quo. There is nothing new about this. On top of this, AMD has been far more supply constrained than Nvidia, due to there being less pressure on Samsung's 8nm node than on TSMC's 7nm, and AMD on top of that needing to split its wafer supply between CPUs, APUs, GPUs, and console chips. AMD has, put simply, not had the wafer capacity to deliver very high volumes of GPUs since the launch of RDNA2 - which also obviously plays into them being sold out, and makes it all the more understandable if Nvidia is gaining market share. This is not due to the market prioritizing RT performance above all else; it is down to cryptomining + Nvidia's massive mindshare advantage + AMD supply constraints + Nvidia's economics and their deals with OEMs (which goes some way towards explaining why it's so hard for AMD to get a real foothold in the laptop space, for example, despite delivering better efficiency than Ampere).
Nobody here is denying that if RT is what you're looking for, Nvidia delivers the best performance. Heck, you don't even need benchmarks within this generation to tell that - it's a completely established fact, beyond any doubt, and no tweaking from AMD's side will change that. If anything, this means highlighting RT benchmarks is
less important (until the next generation from both sides comes around, as that'll make them interesting again): that side of the picture is fixed, it isn't changing, and it is well established and not subject to debate.
If RT is what you're looking for, Nvidia GPUs are clearly superior. If you don't care about RT - which is the reality for many, many people, especially those in the market for more affordable GPUs, which generally can't handle RT well today (say, the RTX 3050 or RX 6600, both of which deliver passable performance at 1080p in very lightweight RT titles, but unplayable performance in heavier ones) - then separating RT performance out from the general performance assessment lets you better judge how the GPU will perform in the games you'll actually be playing, at the settings you'll actually be using.
There is also a discussion to be had about whether high end/flagship cards should have different test routines than midrange and low end ones. I tend to think so (and, for example, TechSpot/Hardware Unboxed generally does this), but it also results in (much) more work for the reviewer, and is thus less feasible for many sites. Tradeoffs always need to be made. But I also acknowledge that testing on a level playing field is valuable in and of itself - even if it's not the choice I would make myself in an unconstrained situation. Luckily there are still quite a few GPU reviewers out there, so we can check multiple to ensure that the perspectives of one aren't skewing our impressions.
You, on the other hand, are arguing that your specific perspective - which, on top of being yours rather than universal, has quite a few logical flaws and inconsistencies, and seems to be based on a factually untrue understanding of current reality - should be the only one presented. I see no reason why such an argument should be accepted, or even allowed to stand uncontested, as it inherently makes testing less valuable for everyone else. This isn't bias; it's broad representativeness, including a broad feature set to ensure that as many scenarios as possible are tested. You are explicitly arguing for more limited and myopic testing. Remember: you're always allowed to read a review and choose which parts of the results are the most important to you. That's how reviews are supposed to work. You do not, on the other hand, have the right to dictate that only what you see as important should be tested, and nothing else. If you want that, go start your own review site.