Here's my thinking, feel free to lambaste it, as it's a little bit red-string theory.
AMD has always needed a 12400/13400 competitor on its old platform, and this should be that. The 12600(K) can currently be had for ~$200, and this would finally be a decent alternative in that range. One can obviously argue the merits of each, or who would currently buy them (I think it's a surprising number of people, for reasons I'll get into), but they should overall be good competition. One also has to consider something like a PS5 Pro, and how it will (re)shape the landscape. The current PS5 may (read: will almost certainly) be relegated to something like FSR Balanced (or in Horizon's case 3200x1800cb, which is very similar), i.e. 2259x1270, while the Pro will run 3200x1800 (or some similar ratio) in the near future. Don't fret if you're a PS main: 1270p still meets the top THX spec of a 40-degree viewing angle, which works out to 0.1 ft of seating distance per inch of TV, and that is sitting DAMN close to a TV. (TV, guys. Not monitors. Obviously that's a different use case.)
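If you want to check that seating rule yourself, here's a quick Python sketch (the 65-inch/6.5 ft example numbers are mine, not THX's):

```python
import math

# THX rule of thumb: sitting ~0.1 ft per inch of 16:9 TV diagonal
# works out to roughly a 40-degree horizontal viewing angle.
def viewing_angle_deg(diagonal_in: float, distance_ft: float) -> float:
    width_in = diagonal_in * 16 / math.hypot(16, 9)  # 16:9 screen width
    return 2 * math.degrees(math.atan(width_in / 2 / (distance_ft * 12)))

print(viewing_angle_deg(65, 6.5))  # 65" TV at 6.5 ft -> ~39.9 degrees
```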
While yes, that will be a mid-generation refresh, I think most will agree that a PS5 is not only the yardstick for what's 'min-spec' in current gaming (read: a PS5 is essentially the exact spec needed to run The Callisto Protocol at 1080p with 60 fps minimums and everything cranked, and that in turn is around the performance of a 7600), but also that the consoles are representative of where the current value/performance threshold sits. I think exactly nobody would be surprised if they used something like a 'compact' 8-core design, again leveraging 13 threads for gaming at around 4GHz (or so). This would be a good PC alternative to that, just as the previous 6-core parts were to the vanilla PS5. It probably will also have 64MB of IC/L3 (if it's leveraging something like an internal Navi 32; ex: 3584sp/7168sp @ ~2930MHz, give or take; a ~6800 XT in RDNA3 flavor). While that cache would be needed for the GPU, the CPU will probably also leverage that bandwidth in some cases. Just throwing that out there, because synergy.
Perhaps the same way that you want 16GB of both video and system memory for console ports (because some don't have the unified-memory calls pulled apart), some games MAY (perhaps?) be programmed toward a larger cache in the future. Just a random non sequitur.
It's the same way you should consider that eventually there will be a 170W APU that breaks down to a 65W CPU and a 105W GPU. Why? Because the discrete cards (6144/12288/18432sp) will probably be 225/450/675W (max). Half the bottom end would be 112.5W. Four RAM modules use 8-10W (or eight use 16-20W). Why does it matter? Because a 1536sp (3072) part @ 3400MHz is similar to a PS5 (not counting arch improvements). 3nm should yield that clock fairly easily. The only way that part makes sense, though, is with 64MB of cache (and at least 6000MHz memory), and the same could be said for an XSX-level part with a 4GHz clock and ~7000 (7200MHz?) RAM (which should also encapsulate Balanced FSR scaling from a 7900 XTX/4080 at 4K, similar to what an XSX does with Fortnite), which would demand that higher power envelope. From there the low-end discrete GPUs reach parity with the PS5 Pro (with the same amount of cache), and so on.
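Here's that napkin math laid out in Python, showing how I get to the ~105W GPU slice (every figure is my speculation from above, not a spec):

```python
# Speculative power budget; all numbers are my guesses, not specs.
discrete_max_w = {6144: 225, 12288: 450, 18432: 675}  # shaders -> max board W

gpu_slice = discrete_max_w[6144] / 2  # half the bottom-end card: 112.5W
dram_w = 8                            # ~8-10W for four RAM modules
print(gpu_slice - dram_w)             # ~104.5W, landing near the ~105W GPU share
print(65 + 105)                       # plus a 65W CPU share = the 170W APU
```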
Off the topic of console parity and more on-topic of relevant desktop usage,
Take a look at this:

[attached CPU benchmark chart not preserved]
When you look at this CPU chart, you may think you'd want a 7700X or better CPU to hit 120 fps minimums in a powerhouse system. You would be right, if you use a stock 4090 (16384sp @ 2730MHz). The reality is that the NEXT gen of GPUs, the ones that will replace Navi 31 and AD103 in the market, the ones more people will actually buy, will probably be (very) slightly stronger (and 32GB). While you can probably use deductive/market reasoning by looking at that chart to figure out how much stronger, please don't spoil my math class.
Here's my rule of thumb: 32MB of L2 (or 64MB of L3) is worth roughly 6MHz of extra effective memory speed on a 128-bit bus for every 1MHz of GPU clock. This is VERY close to accurate (according to my testing) and similar to what nVIDIA has stated: they claim the 32MB of L2 on the 4060 Ti is similar to an extra 16625MHz on the bus, which back-solves to a GPU clock of ~2770MHz. Wizard's tests show an average clock of 2767 (probably 2767.5, because nVIDIA's clock steps are 2760/2775), so it does indeed pan out (despite people chastising them for it). I think the real factor is VERY slightly less than 6, and nVIDIA may argue VERY slightly more (we're talking a minuscule margin of error that could come down to which games were tested).

You could surmise the next 'performance' parts will be 12288sp with 64MB of L2 or 128MB of L3 on a 256-bit bus. They will also likely use 36Gbps GDDR7. We can determine bandwidth limitations a myriad of ways, but an easy one I can point to again is The Callisto Protocol, where the 7600 is bandwidth-limited. Using the above formula and the 7600's minimum (58.4 fps), it's effectively running at 2448MHz. If we assume a PS5 achieves 60 fps minimums exactly, it would be running at 2445MHz. While this doesn't account for arch changes, the card can't USE those extra flops, so yeah, pretty close to accurate. Don't fret on that one either if you bought a 7600; overclock it and it'll be good enough. I wouldn't be surprised if that was part of AMD's plan (total possible perf is good enough, even if not at stock).
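If you want to play with that formula, here's a minimal Python sketch with the 4060 Ti numbers plugged in (the 6MHz-per-MHz factor is my own fit, not an official figure):

```python
# My rule of thumb: 32MB of L2 (or 64MB of L3) adds ~6MHz of effective
# memory speed on a 128-bit bus per 1MHz of GPU clock. Not official math.
CACHE_MHZ_PER_GPU_MHZ = 6.0

def effective_mem_mhz(gpu_clock_mhz: float, mem_mhz: float) -> float:
    """Actual memory speed plus the cache's bandwidth-equivalent bump."""
    return mem_mhz + CACHE_MHZ_PER_GPU_MHZ * gpu_clock_mhz

def bandwidth_gbs(mem_mhz: float, bus_bits: int) -> float:
    """Effective memory MHz times bus width in bytes, as GB/s."""
    return mem_mhz * (bus_bits / 8) / 1000

# 4060 Ti: 18000MHz (18Gbps) GDDR6 on 128-bit, ~2767.5MHz measured GPU clock.
# nVIDIA's claimed extra 16625MHz back-solves to 16625 / 6 = ~2770MHz.
eff = effective_mem_mhz(2767.5, 18000)
print(f"{eff:.0f}MHz effective = {bandwidth_gbs(eff, 128):.0f} GB/s")
# ~34605MHz effective = ~554 GB/s, matching the bandwidth that extra
# 16625MHz implies on a 128-bit bus
```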
Obviously not everything scales the same (and the 4090 has excess bandwidth, which helps perf), but quick and dirty: 16384sp @ 2730MHz = 12288sp @ 3640MHz. And 1.4x AMD's MBA 7900 XTX average of 2631MHz would be 3684MHz.
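Same quick-and-dirty math as a function, so you can swap in your own guesses (the 12288sp part is my assumption from above):

```python
def clock_for_equal_throughput(ref_shaders: int, ref_mhz: float, new_shaders: int) -> float:
    """Clock a new_shaders part needs to match the reference's raw shader throughput."""
    return ref_shaders * ref_mhz / new_shaders

print(clock_for_equal_throughput(16384, 2730, 12288))  # 3640.0 (stock 4090 equivalent)
print(2631 * 1.4)                                      # 3683.4 (~3684, 1.4x MBA 7900 XTX)
```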
If we equalize clock/bandwidth on a 256-bit bus, we come up with ~3780MHz, and nVIDIA's current clock steps would likewise imply ~3780. I'm not saying Red/Green WILL clock it there, I'm saying they COULD. Just like AMD *could* clock the 7900 XTX @ ~2720MHz, or the 7900 XT at 2591MHz, with their current stock RAM. Obviously the reality is slightly lower (to leave partner cards room to OC with stock RAM).
Roughly. Within a very small margin of error. I doubt they'll perform WORSE at stock than a 4090, or that AMD will shoot for under 1.4x, but you never know. This is just a rough-in to prove a long-ass, winded point.
Now, let's reverse-engineer 120 fps with this slightly better GPU. What would be the slowest CPU you could pair with it?
No surprise, it is a 7600. The lowest-end part on AMD's new platform. It's almost like they plan these things. I find it an amusing way to cut out the 12400/13400, but I digress.
Let's say you overclock said GPU; what CPU would you need then? Well, we don't know. Will they clock at 38xxMHz? 3900? 4GHz? I don't know. I DO think refreshes will come at 4200/40000 and/or 4800/44000 (GPU clock/RAM speed, although the '50% greater spec vs 25% higher clock' rule might come into effect at that point), so less than the former (which is probably close to the RAM/3nm limitation). I doubt they'll let them hit 100TF (if initial 3nm will even scale that high), because that will probably be a huge marketing thing when that refresh happens, same with '50% faster than a 7900 XTX/4080'. That would be 4069/4070MHz. So less than that, but anything below that appears POSSIBLE, granted less likely toward that top figure. I could see either/both limiting RAM overclocking to 38400MHz (~4030MHz GPU-normalized), and I can't really imagine the 6144/12288/18432sp parts doing much better than that at 225/450/675W.
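For reference, here's where that 4069MHz ceiling comes from, assuming (my assumption, not a spec) the 12288sp part counts 2 flops per shader per clock (FMA):

```python
def clock_for_tflops(target_tf: float, shaders: int, flops_per_clk: int = 2) -> float:
    """GPU clock (MHz) at which shaders * FMA flops per clock hits target_tf."""
    return target_tf * 1e12 / (shaders * flops_per_clk) / 1e6

print(clock_for_tflops(100, 12288))  # ~4069MHz: the 100TF marketing ceiling
```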
Point is, even then you would want something faster than a 5800X. People may not want to splurge on a 5800X3D; they want to keep their current platform.
But what about overclocking a 5000-series CPU? Shh, you. Yes, it'll probably be good enough. But we're talking about folks who want a drop-in replacement, run stock, and want to keep up with the PS5 Pro.
I absolutely think those 4090-level, 7900 XTX-market GPUs will pair fine with an overclocked 5000-series non-X3D. The 5600(X) will be the GOAT by the end of the PS5 generation, if not recognized as such already.
That, and the 2080 Ti. Running at least 1080p, if not DLSS Balanced on a 4K TV. Until the PS6 (probably).
But anyway, this isn't meant to be a post about GPUs. It wasn't meant to be a post about consoles (although their current/future relevancy constantly looms). It wasn't meant to be about visual acuity, neck strain, or other resolution/seating-distance stuff.
This was just a bloviated way of saying: You see that gap, the one between the 12400F and the 12600(/7600)? AMD NEEDED an option there. They NEED an option there. They WILL NEED an option there.
They will have an option there.
Finally.
On that we can hopefully agree.