I don't think so. You're - very generously - positing that the CEO of Intel likely made some overblown claims (presumably ones that he didn't quite mean, or that weren't intended to come out like that) because he
Which, again, is extremely generous and IMO a rather unreasonable assumption. I see no reason to give him the benefit of any doubt here - this is a PR move. A calculated, thought-through PR move, likely mostly targeting investors and finance people, as they tend to gobble up this type of silly macho posturing. I see no reason to think otherwise.
... sigh. I said they are tied at everything except 2160p, with the difference there being minor. I also specifically brought up that you can argue 2160p is of particular interest for this GPU. So yes, I follow you there - I said it before you did, after all. But I also pointed out that 1440p high refresh rate gaming is likely still more common (there are barely any proper 2160p gaming monitors on the market, after all), in which case the two would perform the same (varying between individual titles, of course).
Relevance? We're talking about overall performance here, not technical details as to why performance differs. Nobody here is disputing that the Nvidia GPUs are slightly faster at higher resolutions.
Yep, DLSS is good, and improves performance. So does FSR. And FSR being newer means less adoption - once it's been on the market a while we'll see how this plays out. My money's on FSR gaining traction faster than DLSS due to its openness, universal compatibility and ease of implementation, but I don't think DLSS will die off entirely either. But now you're suddenly adding a lot of caveats to what was previously a statement that
So, either you're changing your tune (by adding further caveats), or you're admitting that it isn't as simple as you first said. Either way, your statement strongly implies that the 3080 Ti beats the 6900 XT at lower-than-2160p resolutions (if not, then it wouldn't be "especially at 4k"), which your own source showed is just not true.
Sure. But then, all high end/flagship GPUs are terrible value. The 6800 XT, 6800, and 3070 are much better value propositions too. The point being: that logic works both ways, not just the one way you're using it. The 3080 is great value for 2160p, but otherwise relatively unremarkable (beyond being a crazy powerful GPU overall, of course). Ignoring the pricing clusterf*ck that is current reality, the best value GPUs are, in rough order, the 3060, 6600 XT, 3060 Ti (very close, essentially tied), 6800, 3070 (again, essentially tied), and then we get into a royal mess of far too many SKUs to make sense of. The argument you're making here can be made at literally every step down that ladder.
"Disrupting progress in a company" is not the same as "probably [being] the reason why Intel was stuck on 14nm for so long". You're conflating overall strategy with specific technical issues. These do of course overlap - which I exemplified in my previous post - but you're assigning a far too simplistic blame for a highly complex situation. The world's best CEO can't fix your engineering problems (unless they're also a
brilliant engineer who happens to have the correct skills and temporarily steps down from their CEO position to work as an engineer, which ... yeah, I don't think that happens often). There is a relation between executives and overall progress, but the link between that and the specific constituent parts of that progress is complicated, tenuous, and extremely difficult to pin down.
Again: yes, I said as much. But you're completely ignoring the major IPC improvements that happened at the same time! As I said (and linked to), Zen2 beat Zen+ by ~15% in independent testing using industry-standard methods. Zen3 delivered a further 19% IPC increase according to the same testing. That means that, regardless of the clock speeds afforded by the node, 5000-series CPUs outperform 2000-series CPUs by nearly 37% at the same clocks, since those gains compound (1.15 × 1.19 ≈ 1.37). That is a major performance increase. Saying that the improved overall performance is only down to clock speeds is wrong. Period. It is down to clock speeds and architectural improvements. The node enables both in some way, but this is not a 1:1 relation - the architecture also needs to be built to reach those clock speeds, and tuned to hit those IPC numbers. Saying this is all down to the node is an oversimplification.
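To make the compounding explicit, here's a quick back-of-the-envelope sketch (Python, purely illustrative - the 15% and 19% uplifts are the figures from the testing cited above; the function name and structure are just my own):

# Compound per-generation IPC uplifts into a cumulative gain.
# The 0.15 (Zen+ -> Zen2) and 0.19 (Zen2 -> Zen3) figures are the uplifts
# cited above; everything else here is just for illustration.
def cumulative_gain(uplifts):
    total = 1.0
    for uplift in uplifts:
        total *= 1.0 + uplift
    return total - 1.0

ipc_uplifts = [0.15, 0.19]  # Zen+ -> Zen2, Zen2 -> Zen3
print(f"Cumulative IPC gain: {cumulative_gain(ipc_uplifts):.1%}")
# Prints roughly 36.8% - i.e. "nearly 37%" at equal clock speeds.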
Apple is hardly the only TSMC 5nm customer at this point. Huawei was working with them even last year, and while this leaked roadmap is clearly no longer up to date (the Snapdragon 875 never materialized - instead we got the Samsung 5nm 888 - and HiSilicon got stomped down even further by trade embargoes), it shows that there are plenty of TSMC 5nm customers in the 2021-2022 time frame.
We'll see - Intel CPUs have also become more RAM speed/timing sensitive lately (mostly due to higher core counts putting more pressure on the interconnect and on keeping the cores fed). They still aren't as sensitive as AMD's, but we have no idea how this will develop in the future. I would also guess that both upcoming architectures will support both DDR4 and DDR5, with motherboards available for either. I wouldn't expect an all-out switch to DDR5 until the generation after these.
Close what gap?
Intel still has a slight IPC deficit with their 11th gen, but they clock higher, so everything mostly evens out. Until we see reviews we have no idea how these upcoming chips (from either company) will perform.
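And just to spell out why it "evens out": performance scales roughly with IPC × clock speed, so a small deficit in one can be offset by a small advantage in the other. A minimal sketch with made-up numbers (the specific IPC and clock figures below are hypothetical, not measured values for either vendor):

# Illustrative only: performance scales roughly with IPC x clock speed.
# The IPC and clock figures below are hypothetical, not measured values.
def relative_performance(ipc, clock_ghz):
    return ipc * clock_ghz

chip_a = relative_performance(ipc=1.00, clock_ghz=4.8)  # lower IPC, higher clocks
chip_b = relative_performance(ipc=1.05, clock_ghz=4.6)  # higher IPC, lower clocks
print(f"A relative to B: {chip_a / chip_b:.2f}x")
# Prints about 0.99x - i.e. roughly even overall.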