You're on that website.
You start with the performance of your CPU, because you can't "turn down" CPU-intensive settings in a game - that would mean kicking half the players out of the server or removing half the units, which isn't an option.
You can always lower GPU settings, though: a 4090 on ultra is no different from a 3090 on medium, or a 5700XT on lower settings still.
Find the latest CPU review and look at minimum FPS. The rest is up to you: pair it with any GPU you want and lower settings until you're happy with the FPS. Preferably run an FPS cap to keep things within the 1% low range of the CPU, and you'll get stutter-free happiness.
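To put that in concrete terms, here's a minimal sketch of the logic, with hypothetical numbers (nothing below is a benchmark result):

```python
# Minimal sketch of the "CPU sets the ceiling" idea; every number here is a
# made-up placeholder, not a benchmark result.

def effective_fps(cpu_min_fps, gpu_fps_at_settings):
    """Whichever side is slower limits the frame rate you actually see."""
    return min(cpu_min_fps, gpu_fps_at_settings)

def suggested_cap(cpu_one_percent_low, headroom=0.97):
    """Cap a little under the CPU's 1% lows so frame pacing stays smooth."""
    return int(cpu_one_percent_low * headroom)

# A CPU holding ~110 fps minimums paired with a GPU pushing ~140 fps at your
# chosen settings is still a ~110 fps experience; cap somewhere around ~106.
print(effective_fps(110, 140))  # 110
print(suggested_cap(110))       # 106
```

The point of the cap is just that the GPU never outruns what the CPU can feed it, which is where the stutter-free part comes from.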
Notice how nothing can really reach 144FPS, despite high refresh displays being a big deal? This is why they don't matter yet.
Intel Core i9-14900K Review - Reaching for the Performance Crown - Minimum FPS / RTX 4090 | TechPowerUp
4K results are what matter most IMO, because they show how things hold up under a more demanding load, which gives you a preview of how next-gen games will run in the coming years, even at lowered settings.
This is pretty much how I go about it. It's absolutely not a perfect science, as some people prefer seeing where CPU limitations exist at lower resolutions so they can run at higher framerates, and I can understand some other critiques of it, but I do think it's fairly sound. One really does start to see how certain CPUs/GPUs pair together. FWIW, I think Blackwell (and Navi 5?) will assume, or best be matched with, at least a 7600/5800X3D. Not a huge difference, but enough to let the GPU makers skimp a little on the expected generational uplift while still obtaining desirable framerates (60/120Hz mins at a given resolution).
I also think it's one of those things where many modern cards assume you will be pairing them with a CPU that is faster than a 12400/13400; a 'gaming'/'performance' CPU from the last couple of years. You'll notice the very delineated gap there that exists pretty much nowhere else in the (low-priced) stack. You start to notice they want something around 112-113 fps for many GPUs to hit certain thresholds, or perhaps something around a 5600X3D or an overclocked 6/8-core 5000-series. I feel like that's a pretty good bet for where the PS5 Pro will land in terms of CPU performance as well. If you figure the PS5 is somewhere around a 3700X (98.5), it's likely the 3.5 GHz (max) clock will be bumped to ~4 GHz, which in theory puts it right in line with what I'm talking about.

If you really want to get into the weeds, I think the CPU/GPU will be on a ~2/3 divider (quick sketch of the arithmetic below): if the GPU is ~2600 MHz, the CPU will be ~3900 MHz; if the GPU is ~2.67 GHz, the CPU 4 GHz; if the GPU is 2.733 GHz, the CPU ~4100 MHz; if the GPU is 2800 MHz, the CPU 4.2 GHz. I also think when someone looks at many current games at 1440p (Hogwarts, Alan Wake 2, Spiderman 2) you start to get a very good idea of where that GPU will perform (although the assumed doubling of GPU perf plus arch improvements also gets you there). Those look like games literally waiting on a patch for 1440p/60 with nice settings on a Pro, and I'd be willing to bet that if they weren't designed with that in mind, they certainly have comparable PC settings that will slot in nicely.
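For anyone who wants to check that divider math, here's a tiny sketch using the clocks I guessed at above (these are speculation on my part, not confirmed PS5 Pro specs):

```python
# Quick check of the ~2/3 GPU:CPU clock divider speculated above.
# The clocks are guesses from this post, not confirmed specs.

def cpu_clock_from_gpu(gpu_mhz, ratio=2 / 3):
    """If GPU clock = ratio * CPU clock, then CPU clock = GPU clock / ratio."""
    return gpu_mhz / ratio

for gpu_mhz in (2600, 2667, 2733, 2800):
    print(gpu_mhz, round(cpu_clock_from_gpu(gpu_mhz)))
# Prints roughly 3900, 4000, 4100, and 4200 MHz respectively.
```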
Again, is that a perfect science? No. Could it be *slightly* different, just as the PS5 is not on a perfect ratio? Absolutely; it's just a rule of thumb that should get you pretty close, within a pretty small margin of error. It's what I consider well-informed speculation, and it should give people an idea of where they'd want to be to have an adequate CPU/GPU ratio for the console-like settings (which may actually be 'high' on the Pro) for the next few years. TL;DR: They want you to buy a 12600K/7600/5800X3D or better, and (for now) a 7900 XT/4080 (soon to be Navi 4, 4070 Ti 16GB, or perhaps Battlemage), but you'll probably be able to get by with an overclocked 5800X/7800 XT. Maybe even a 5600X, given not all 16 threads are used for gaming on a PlayStation; I don't know exactly how a PS5 Pro will line up with how those overclock (say 4.7 GHz)... it should be close. It's one of those things where a stock 7800 XT may overclock to 2721 MHz, while a stock 4070 Ti is 2730 MHz. Might the console be literally 7680/2.733 GHz so they could both claim a meaningless win and differentiate the market (even if not really)? It might be exactly that. They *want* people to upgrade, but if you're savvy there's always a cheaper way.
I'm also very curious how next-gen CPUs (say Zen 5) pair with MANY GPUs. I pity W1zard for the work he'll have to do, but it should be fascinating. While there are, and certainly will be, cases of straight GPU/VRAM bottlenecks, I'm curious if cards that were very clearly intended for a certain market (NVIDIA GPUs sure appear to drift from ~60 fps at launch to 50-55 fps in newer titles, or, say, a 4090 sitting at 100-<120 fps at 4K) will be bumped up in their minimums. For instance, AMD did everything in their power to make sure the 7800 XT is not compared to the 4070 Ti, as they very clearly wanted market differentiation there between it and the 7900 XT/probably Navi 4. Will a Zen 5 + 7800 XT outperform a Zen 4 + 4070 Ti, or hit certain performance thresholds in cases it wouldn't have when both were tested on the same expected (older) platform? I think it's very possible.
In that case, a CPU upgrade *could* potentially save you a GPU upgrade (or negate price differences for a certain level of perf between the companies), which I think is rather interesting. Will most people think about that? Probably not.
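As a toy illustration of that point (the frame rates below are invented, purely to show the bottleneck logic, not measured results):

```python
# Toy illustration: a faster CPU only helps where the CPU was the limiting
# side of the min(); all frame rates here are invented.

def paired_fps(cpu_floor, gpu_ceiling):
    return min(cpu_floor, gpu_ceiling)

zen4_floor, zen5_floor = 58, 66   # hypothetical CPU-side 1440p minimums
gpu_a, gpu_b = 62, 70             # hypothetical GPU-side ceilings, same settings

print(paired_fps(zen4_floor, gpu_a))  # 58 -> the cheaper GPU is held back by the CPU
print(paired_fps(zen5_floor, gpu_a))  # 62 -> a CPU upgrade alone unlocks it
print(paired_fps(zen4_floor, gpu_b))  # 58 -> a bigger GPU alone changes nothing here
```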
I *think* this is probably why AMD aims to release GPUs around the same time as, but just after, new CPUs. They probably expect many people to bench the GPU on the new CPU, which may make the GPU look better (comparatively) than it would have on the older systems. For instance, maybe in AW2 a Navi 4 goes from being able to run 4K FSR Balanced on Zen 4 to FSR Quality on Zen 5 with 60 fps mins, but people will only notice that the GPU is capable of the higher-end performance because many reviewers switched to a new bench platform. What many won't think to realize is that the 7800 XT may have gone from 55 fps 1440p mins to 60 fps 1440p mins on the new platform as well. I don't think many people consider that reviewers like W1zard use close to the best, if not the best, counterpart hardware when testing current CPUs/GPUs, and both sides of that equation evolve over time. Similarly, they may not be running *quite* as high-end a CPU at any given time, which can and does make a variable amount of difference depending on the resolution a given person focuses on.
Now, could a lot of these differences (in gaming) be solved by overclocking the old platform to get to the new stock performance? Sure, but that's not what's on the chart, and not all people do that.
The whole thing can come across as rather precarious, but it really isn't. I generally try to assume baseline absolute (OC) performance for GPUs, say something like 2850/22000 on a 7900 XT, which can make a fairly noticeable difference in 'playable' (accepted as 60 Hz, although I know not everyone agrees) settings, and then think about the range that would perform across the most common CPUs, which generally isn't a huge gap as long as you're in that 'performance' category. It certainly can matter *just* enough, though, which is why I think it's important for each person to look at the chart you pointed to (and other resolutions) based upon their goals.

You can generally get a fair idea of what you need to pair to get what you want in *most* circumstances, as the most demanding games have a fairly consistent demand for performance and it doesn't change that often. It's mostly dictated by consoles (and then building on top of them, to an extent, to bridge the gap between GPU performance levels and expected resolutions), and while the console resolution goals shift slightly over time (what might have been a 4K goal for cross-gen may have become 1440p60 currently, and by the end of the generation that may become 1080p or a similar FSR resolution and/or 30 fps, especially after the introduction of the 'Pro'), it's fairly predictable, and it does often match up with predictable tiers of GPUs to get performance over that baseline, as long as, again, you are in that 'performance' level of CPU.
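If it helps, here's roughly how I think of that 'baseline OC plus common CPUs' exercise in code form; the clocks and frame rates are placeholders I made up, and the linear clock scaling is optimistic:

```python
# Rough sketch of the "baseline OC + common CPUs" habit described above.
# Every clock and frame rate is a placeholder, and linear clock scaling is
# optimistic (real games rarely scale 1:1 with GPU clock).

STOCK_CLOCK_MHZ = 2400   # hypothetical stock game clock for the example GPU
OC_CLOCK_MHZ = 2850      # the assumed "baseline absolute (OC)" clock

def oc_scaled(gpu_fps):
    return gpu_fps * OC_CLOCK_MHZ / STOCK_CLOCK_MHZ

cpu_floors = {"older 6-core": 62, "performance": 110, "X3D": 130}  # made-up 1% lows
gpu_stock_fps = 58                                                 # made-up GPU-bound fps

for name, floor in cpu_floors.items():
    print(name, round(min(floor, oc_scaled(gpu_stock_fps))))
# The spread across the 'performance'-and-up CPUs is basically nil here because
# the GPU, not the CPU, is the limiting side once you leave the older tier.
```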
I wish I knew how to write that more concisely; apologies for that. Hopefully it makes sense.
Perhaps I over-complicated it, but it is kind of interesting IMHO. When you dig into it, you start to see the tricks all these companies play so that certain configurations will or won't be adequate (especially over time), as it has become somewhat predictable if you really crunch the numbers. The fact is, though, most people don't, or won't.