Sunday, June 9th 2019
Sony PlayStation 5 Promises 4K 120Hz Gaming
Sony has finalized the design and specifications of its PlayStation 5 entertainment system. Unlike Microsoft, which threw around buzzwords such as "8K capable" for its "Project Scarlett" console, Sony has a slightly different design goal: 4K UHD at 120 Hz, guaranteed. The most notable absentee at E3 2019, Sony is designing the PlayStation 5 to leverage the latest hardware to guarantee 120 frames per second on your 4K display. Much like "Project Scarlett," the SoC at the heart of the PlayStation 5 is a semi-custom chip co-designed by AMD and Sony.
This unnamed SoC reportedly features an 8-core/16-thread CPU based on AMD's latest "Zen 2" microarchitecture, a massive leap from the eight low-power "Jaguar" cores powering the PS4 Pro. The GPU will implement AMD's new RDNA architecture. The SoC will use GDDR6 memory, shared between the CPU and GPU. Much like "Project Scarlett," the PS5 will include an NVMe SSD as standard equipment, and the operating system will use a portion of it as virtual memory. There will also be dedicated hardware for 3D positional audio. Sony also confirmed full backwards compatibility with PS4 titles.
Sources:
The Verge, CNet
95 Comments on Sony PlayStation 5 Promises 4K 120Hz Gaming
Edit: Actually, to be fair, the engine was based on Sunset Overdrive, which was an Xbox One original from that strange instance when Insomniac jumped ship (worth playing, by the way)... so I guess you could partly call it a neutral game.
Edit: Well, speaking of Digital Foundry, I finally remembered where I heard about the Xbox One's 120 Hz mode: it was their Sekiro console review.
I can't be bothered responding to the rest of your ... "post", as you seem utterly incapable of putting forward an informed and thought-out argument, and prefer screaming and yelling. No thanks.

Again, consoles aren't PCs, and overhead is a real thing. Then there's the fact that console settings are generally not "PC Ultra", as that's usually the "turn everything to 11, regardless of what makes sense in terms of performance impact" option, and console game designers tend to spend a lot of time optimizing this. The Xbox One X does 4k30 in quite a few titles with a 40 CU 1172MHz 12nm GCN GPU (and a terrible CPU). If RDNA delivers the IPC/"perf per FLOP" increase promised, imagine what a 60-ish (perhaps higher) CU design at slightly higher clocks could do. While Microsoft's stated "4x the power" is vague AF, and likely includes CPU perf, native 4k60 this generation is a given (a Radeon VII can do 4k60 just fine at High settings in a lot of games, and has to deal with Windows), and 4k120 in select esports titles (think Rocket League etc.) ought to be doable. And 1080p120 shouldn't be a problem whatsoever. Heck, my 4-year-old Fury X does that in esports games, and the CPU-crippled One X does 1080p60 in a bunch of games.
As for 8k, I can't imagine that being for anything but streaming, or even just saying "we have HDMI 2.1" in a "cool" way.
RDNA is basically GCN with a different skin, and the partial confirmation of that is that they didn't talk about TDP or power consumption at E3 or at Computex. So what you said is incorrect: Navi 10 is the same uarch they're going to use in consoles, and it has its limits just like Polaris 10 had. They will probably use a chip with a die size between the 5700 and the 5700 XT and lower the clocks so much that performance will fall below that of a desktop RX 5700, which is not even close to offering 4k120fps, since not even the 5700 XT can offer that under normal conditions. I never said they used the same die sizes or chips they use for PC hardware, I'm just saying that whatever they use, performance will inevitably fall behind their PC hardware solutions, for both the CPU and GPU parts.
The specs of Navi 10 are pretty well known at this point. The 5700 XT is a 225W TBP card. It has almost 2x the transistors and considerably higher clocks versus the RX 590, yet has the same TBP. How hard is AMD pushing the 5700 XT? I suspect pretty hard, since they are still playing from behind. We don't really know what the specs of Navi will be in the consoles. The Xbox One X had a 40 CU GPU with "Polaris features," which was 4 more CUs than any available retail Polaris card. It wasn't full Polaris, but we don't really know how AMD customized those chips for the consoles. Still, I suspect Sony and MS will get more CUs than the 5700 XT, but AMD will probably lop 400-500 MHz off to hit that power sweet spot. Just using last gen's history as a guide.
As for what you're saying about RDNA being "basically GCN with a different skin", that's nonsense. There are significant low-level changes to the architecture, even if it retains compatibility with the GCN ISA and previous instruction handling and wavefront widths. Do we know yet if the changes translate into improved performance? Not really, as reviews haven't arrived. But every single technically competent analyst out there says these changes should help increase the hardware utilization and alleviate memory bandwidth strain - the two biggest issues with GCN. GamersNexus has an excellent overview with David Kanter, or you could take a look at AnandTech's brief summary here.
As for die size, it's true that consoles tend to go for mid-sized chips (nothing else makes much sense given their mass-produced nature), but at this point we have zero idea how they will be laid out. CU count? No idea. Clock speeds? No idea. MCM packaging or monolithic die? No idea. It's kind of funny that you say "it'll probably use a chip with a die size between 5700 and 5700XT" though, as they use different bins of the exact same die and die size is thus identical. They're both Navi 10, the lower-binned version with 4 CUs disabled just gets a suffix added to it.
Lastly, you seem to read me as saying the 5700 XT can do 4k120FPS, for some reason. I've never said that - though I'm sure it can in lightweight esports titles. CS:GO? Very likely. Rocket League? Sure. I wouldn't be surprised. But obviously this isn't a 4k gaming card - heck, I stated this quite categorically a few posts back. Every single demonstration AMD has done has been at 1440p.

Does this translate 1:1 to console gaming (even if we make the rather significant assumption of equal hardware specs)? No. Consoles have far more optimized performance, as developers have a single (or at least no more than 3-4) hardware configuration to optimize for, rather than the ~infinite combinations in the PC space. Consoles also have lower-level hardware access for games, leading to better utilization and less overhead. The lightweight OSes also lead to less overhead. And, of course, they tend to skip very expensive rendering options that are often a part of "Ultra" settings on PC. This is how an Xbox One X can do 1800p or even native 2160p30 on a GPU with very similar power to an RX 480, with a terrible, low-speed CPU.

Not trying to convince you to upgrade ;) Heck, I'm very much a proponent of keeping hardware as long as one can, and not upgrading until it's actually necessary. I've stuck with my Fury X for quite a while now, and I'm still happy with it at 1440p, even if newer games tend to require noticeably lowered settings. I wouldn't think an RX 5700 makes much sense as an upgrade from a Vega 64 - I'd want at least a 50% performance increase to warrant that kind of expense. Besides that, isn't AC Odyssey notoriously crippling? I haven't played it, so I don't know, but I seem to remember reading that it's a hog.

You're right that a "true 4k" card doesn't exist yet if that means >60fps in "all" games at Ultra settings - but this (all games at max settings at the highest available resolution) isn't something even ultra-high-end hardware has typically been capable of. We've gotten a bit spoiled with recent generations of hardware and how developers have gotten a lot better at adjusting quality settings to match available hardware. Remember that the original Crysis was normally played at resolutions way below 1080p even on the best GPUs of the time - and still chugged! :)
How is what I said funny? When have they ever used bigger die sizes in consoles compared to PC hardware? They never did. The Xbox One X is Vega based, and the smallest Vega at 16nm on PC hardware is 495mm^2, and that's how they pulled out ~6 TFLOPS while maintaining relatively low frequency (the Xbox One X GPU die size is 359mm^2). Binning won't really change much anyway, especially since talking about binning in console chips is a bit crazy...
No, I don't read you as saying the 5700 XT is capable of 4k120fps, but then how do you think 4k120fps is possible on the PS5 or Xbox Scarlett? Whatever the consoles are equipped with can't be as powerful as the PC hardware version of it. And I don't want to hear about upscaling, because 4k is 4k (or UHD, to be more precise): if they claim 4k, that needs to be the in-game resolution, with no upscaling allowed. On the fps claim I won't even comment, because it's totally absurd.
Edit: I agree, though I don't think it's in any way feasible to get a 4k120fps APU out of console-like power budgets.
2: What you're saying about not being able to "do miracles" here has no bearing on this discussion whatsoever. You were saying that consoles use off-the-shelf GPU dice. They don't. They use custom silicon.
3: The number of games in which the Xbox One X can do native 4k30 isn't huge, but it's there nonetheless. An RX 480, which has roughly the same FLOPS, can't match its performance at equivalent quality settings in Windows. This demonstrates how the combination of lower-level hardware access and more optimization works in consoles' favor.
4: Performance can mean more than FLOPS. Case in point: RDNA. GCN does a lot of compute (FLOPS) but translates that rather poorly to gaming performance (for various reasons). RDNA aims to improve this ratio significantly - and according to the demonstrations and technical documentation provided, this makes sense. My Fury X does 8.6 TFLOPS, which is 9% more than the RX 5700, yet the 5700 is supposed to match or beat the RTX 2060, which is 34% faster than my GPU (a rough back-of-the-envelope version of this comparison is sketched at the end of this post). In other words, "6x performance" does not in any way have to mean "6x FLOPS". Also, console makers are likely factoring in CPU comparisons, where they're easily getting a 2-3x improvement with the move from Jaguar to Zen 2. Still, statements like "The PS5 will be 6x the performance of the Xbox One X" are silly and unrealistic simply because you can't summarize performance into a single number that applies across all workloads. For all we know they're factoring in load time reductions from a fast SSD into that, which would be downright dumb.

Again: NO THEY DIDN'T. You just stated - literally one paragraph ago! - that the consoles used a GPU "more Vega based than anything else", but now you're saying it's Vega 10? You're contradicting yourself within the span of five lines of text. Can you please accept that saying "Vega 10" means something different and more specific than "Vega"? Because it does. Now, "Vega 10" can even mean two different things: a code name for a die (never used officially/publicly), which is the chip in the V56/64, or the marketing name Vega 10, which is the iGPU in Ryzen APUs with 10 CUs enabled. These are entirely different pieces of silicon (one is a pure GPU die, the other a monolithic APU die), but neither is found in any console. Navi 10 (the RX 5700/XT die) will never, ever, be found in an Xbox or PlayStation console, unless something very weird happens. Consoles use semi-custom designs based on the same architecture, but semi-custom means not the same.

Yes, it will change something technically. That's the difference between an architecture and a specific rendition of/design based on said architecture. One is a general categorization, one is a specific thing. Saying "battle royale game" and saying "Apex Legends" isn't the same thing either, but they have a very similar relation to each other - the latter being a specific rendition of the broader category described by the former.

Again, you're arguing against some sort of straw man that has no basis in what I was saying. We know nothing specific about the GPU configurations of the upcoming consoles. We can make semi-educated guesses, but quite frankly that's a meaningless exercise IMO. We'll get whatever MS and Sony think is the best balance of die area, performance, features, power draw and cost. There are many ways this can play out. Your numbers are likely in the ballpark of correct, but we don't know nearly enough to say anything more specific about this - all we can make are general statements like "the GPU isn't likely to be very small or overly large" or "power draw is likely to be within what can be reasonably cooled in a mass-produced console form factor".
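Here's the back-of-the-envelope sketch referenced above: a minimal Python illustration of why peak FLOPS and gaming performance diverge. The clock and CU figures are rough approximations (64 CUs at ~1050 MHz for the Fury X, 36 CUs at ~1725 MHz boost for the RX 5700), not official spec-sheet values, and the "~34% faster" figure for the RTX 2060 is simply the number quoted in this post.

```python
# Minimal sketch: peak FP32 compute doesn't map 1:1 to gaming performance.
# All figures are approximations taken from the discussion above.

def tflops(cu_count, clock_mhz, shaders_per_cu=64, flops_per_shader=2):
    """Peak FP32 throughput in TFLOPS for a GCN/RDNA-style GPU."""
    return cu_count * shaders_per_cu * flops_per_shader * clock_mhz * 1e6 / 1e12

fury_x  = tflops(64, 1050)   # ~8.6 TFLOPS
rx_5700 = tflops(36, 1725)   # ~7.9 TFLOPS at boost clock

print(f"Fury X : {fury_x:.1f} TFLOPS")
print(f"RX 5700: {rx_5700:.1f} TFLOPS")
print(f"Fury X has ~{100 * (fury_x / rx_5700 - 1):.0f}% more raw compute,")
print("yet the RX 5700 is expected to land near an RTX 2060, ~34% faster in games.")
```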
Now, one thing we can say with relative certainty is that the new console GPUs will likely clock lower than retail PC Navi dGPUs, both for power consumption and QC reasons (fewer discarded dice due to failure to meet frequency targets, less chance of failure overall). AMD tends to push the clocks of their dGPUs high, so it's reasonable to assume that if the RX 5700 XT consumes ~225W at ~1755MHz "game clock" (average in-game boost), downclocking it by 200-300MHz is likely to result in significant power savings. After all, power scaling in silicon ICs is nowhere near linear, and pushing clocks always leads to disproportionate increases in power consumption. Just look at the gains people got from underclocking (and to some extent undervolting) Vega. If a card consumes 225W at "pushed" clocks, it's not unlikely to get it to, say, 150W (33% reduced power) with much less than 33% performance lost. And if their power target is, for example, 200W, they could go for a low-clocked (<3GHz) 8-core Zen 2 CPU at <65W and a "slow-and-wide" GPU that's ultimately faster than the 5700 XT. I'm not saying this will happen (heck, I'm not even saying I think it's likely), but I'm saying it's possible.
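As a rough illustration of the non-linear power scaling described above, here's a minimal sketch assuming dynamic power scales with f x V^2. The voltage values are made-up illustrative numbers rather than measured Navi figures; the 225 W / 1755 MHz reference point is the RX 5700 XT TBP and game clock quoted in the post.

```python
# Rough sketch of why downclocking saves a disproportionate amount of power.
# Dynamic power scales roughly with f * V^2, and voltage can drop as clocks drop,
# so power falls much faster than performance. Voltages below are assumptions.

def scaled_power(freq_mhz, voltage, ref_freq=1755, ref_voltage=1.20, ref_power=225):
    """Scale a reference board power by (f/f0) * (V/V0)^2 (dynamic power only)."""
    return ref_power * (freq_mhz / ref_freq) * (voltage / ref_voltage) ** 2

print(f"{scaled_power(1755, 1.20):.0f} W")  # ~225 W at the assumed "pushed" operating point
print(f"{scaled_power(1500, 1.05):.0f} W")  # ~147 W: ~15% lower clocks, ~35% lower power
```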
The current consoles demonstrate this quite well: An RX 580 consumes somewhere between 180 and 200W alone. An Xbox One X consumes around the same power with 4 more CUs, 8 CPU cores, 50% more VRAM, a HDD, an optical drive, and so on. In other words, the Xbox One X has more CUs than the 480, but consumes noticeably less power for the GPU itself.

Have I said that something you said is funny? The Xbox One X is a Vega-Polaris hybrid, and what I said is that it's bigger than any PC Polaris die - the biggest of which has 36 CUs. I never mentioned Vega in that context. Nor did I mention binning in relation to consoles; I mentioned binning because you were talking as if the 5700 and 5700 XT are based off different dice, which they aren't.

1: We don't know the CU count of the upcoming consoles. I don't expect it to be much more than 40, but we've been surprised before. The Xbox One X has 40 CUs, and MS is promising a significant performance uplift from that - even with, let's say, 20% higher clocks due to 7nm (still far lower than dGPUs) and 25% more work per clock, they'd need more CUs to make a real difference - after all, that only adds up to a 50% performance increase, which doesn't even hit 4k60 if the current consoles can at best hit 4k30. Increasing the CU count to, say, 50 (for ease of calculation, not that I think it's a likely number) would boost that to an 87.5% increase instead, for a relatively low power cost (<25% increase) compared to boosting clock speeds to a similar level of performance.
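For what it's worth, the arithmetic above checks out if you treat GPU throughput naively as CUs x clock x work-per-clock (a simplification that ignores bandwidth and other bottlenecks). A quick sketch:

```python
# Back-of-the-envelope scaling from the paragraph above; throughput is modeled
# naively as CUs * clock * per-clock efficiency, all normalized to the One X.

one_x    = 40 * 1.00 * 1.00   # Xbox One X baseline
same_cus = 40 * 1.20 * 1.25   # +20% clocks, +25% work per clock
wider    = 50 * 1.20 * 1.25   # the same, but with 50 CUs

print(f"40 CUs: {same_cus / one_x:.3f}x of the One X  (+50%)")
print(f"50 CUs: {wider / one_x:.3f}x of the One X  (+87.5%)")
```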
2: As I've been trying to make clear for a few posts now, frame rates depend on the (type of) game. Forza runs flawlessly at 4k30 on the One X. High-end AAA games tend not to. A lightweight esports title might even exceed this, though the X isn't capable of outputting more frames - consoles generally run strict VSYNC (and the FreeSync implementation on Xbox is ... bad). I'm saying I don't doubt they'll be able to run lightweight esports titles at 4k120. Again: My 2015-era Fury X runs Rocket League at 1440p120 flawlessly (I've limited it there, no idea how high it really goes, but likely not above 150), so a newer console with a significantly faster architecture, a similar CU count, more VRAM and other optimizations might absolutely be able to reach 4k120 in a lightweight title like that. Heck, they might even implement lower-quality high-FPS modes in some games for gamers who prefer resolution and sharpness over texture quality and lighting - it can make sense for fast-paced games. Am I saying we'll see the next CoD or BF at 4k120 on the upcoming consoles? Of course not. 4k60 is very likely the goal for games like that, but they might not be able to hit it consistently if they keep pushing expensive graphical effects for bragging rights. 1080p120 is reasonably realistic for games like that, though. The CPU is the main bottleneck for high-FPS gaming on the One X, after all, so at 1080p these consoles should really take off.

My point is: games are diverging into more distinct performance categories than just a few years ago, where esports titles prioritize FPS and response times, while a lot of other games prioritize visual quality and accept lower frame rates to achieve this. This makes it very difficult to say whether or not a console "can run 4k120" or anything of the sort, as the performance difference between different games on the same platform can be very, very significant. Backwards compatibility on consoles compounds this effect.
3: I was quite specific about whether or not I meant upscaled resolutions - please reread my posts if you missed this. When I say native 4k, I mean native 4k (yes, technically UHD, but we aren't cinematographers now, are we?).
2: It's a way of saying that they can't do much beyond what they have already developed, which is already at its limit. And yes, we do know that, because it's been like that for years, and it's not going to change.
3: Because they're different architectures. And optimizations still can't do miracles either.
4: Yes, they might have improved that; they might be approaching NVIDIA's level of optimization in that regard. But the fact stands: AMD themselves compared their 5700 XT to a 2070 and the 5700 to a 2060, and while they take the performance crown most of the time, they sometimes lose. Let's say they're probably going to battle the new "Super" cards NVIDIA is preparing, which sound like they're going to be basically a 2060 Ti and a 2070 Ti. Anyway, the 5700 XT and 5700 are roughly at that performance level, and it doesn't seem to me that they're capable of doing what Sony is saying in any way. Nothing close to 6x performance, but much closer to 2x. Well, if they're factoring other stuff into that number, I don't know, but it's not fair, because it's simply not true to state that; if you talk about performance, it should be computing performance only, unless you want to kind of scam your customers, and it wouldn't be the first time...

Again, they did: Polaris 10 includes all RX 4xx cards; Polaris 20 includes the RX 570 and 580; Polaris 30 includes the RX 590; Vega 10 includes Vega 3/6/8/10/11/20/48/56/64; and Vega 20 includes the Radeon VII.
Polaris 10/20/30, they're all Polaris; Vega 10 and 20, both Vega. The Xbox One X GPU is built based on the Vega 10 family; it's not like any of those mentioned before, but it's still Vega and very possibly part of the Vega 10 group. But even if it isn't - even if it's not Vega 10 but Vega "Xbox" - what does it matter? Same stuff, same performance per watt, same die limitations, as it's still part of the Vega family and they share everything. I agree it doesn't mean the same at 100%, but at 90% it can still be considered the same - not exact, but almost. What will it change technically? The layout inside the die? The configuration? Apart from that? Performance per watt is still the same, and won't go any higher.

Not quite an accurate analogy; I'd say one is Half-Life 2, and the other is Half-Life 2 Episode 1 or 2. Basically Half-Life 2, apart from the story.

Well, what are you arguing about then? We'll get whatever AMD is capable of doing with their chips, nothing more, and whatever Sony or Microsoft will be able to sell at a decent price without it consuming like an oven.

Alright, but a 33% reduction in clocks won't give you only a 3% performance loss, or even only 13%; it'll be something around 20-25%, which is still pretty significant. Again, no, they never did that, because if they struggle to meet the power limit by reducing clocks, widening the die will only give them back the power consumption they got rid of by lowering clocks - maybe not at the same price, but very close to it, so it doesn't make sense.

You're still comparing Polaris based chips with the Vega based chip of the Xbox One X. The Xbox One X die has 16 fewer CUs than the slower PC version of Vega, which has 56, with lower clocks of course. But if you want to keep comparing it to Polaris, have it your way. Well, you said "It's kind of funny that you say", which is not actually the same thing, and I probably misunderstood that, but it doesn't sound that much different; anyway, I don't mind, no worries. Vega, not Polaris.

I actually understood they were different dice, but we're still not sure they're the same die size though (or did I miss something official from AMD?). Anyway, it makes perfect sense; it costs less to just use lower binned dice to make a slower chip that way. I guess we'll see a different die size if they ever make a more powerful chip and name it RX 5800 or something.

1: We don't, but we can assume it'll be something smaller than the 5700 XT, maybe as big as the 5700, with slower clocks. Nothing close to 6x the performance or 4k120fps.
2: Agreed, but most games won't be able to do that, plain and simple - and by most I'm talking about a good 70%, if not more... Not on consoles, since there's no real pro gamer community or esports of any kind on consoles, and 80% of the console market is casual gaming. It's actually the CPU I'm excited about, since previous consoles had a decent graphics chip and an absolute pile of garbage as a CPU; hopefully Zen 2 will change that forever, so that freaking devs can develop proper games and not be bottlenecked by consoles' ridiculous CPUs.
3: Well, you were talking about "1500-1800p" in one of your previous posts, so that's why I said that. We're not cinematographers, and I wasn't trying to correct you when I said "UHD to be more precise"; I just don't like the "4k" term. I've come to hate it in recent years, maybe because it has been used inappropriately for years now in all sorts of advertising.
You can argue that that wasn't what you meant, but that's what I've been trying to get you to get on board with this whole time: that a GPU architecture (such as Navi, Vega or Polaris) is something different than a specific rendition of/design based on said architecture (such as Navi 10, Vega 10 or Polaris 10, 20, etc.). You consistently mix the two up - whether intentionally or not - which makes understanding what you're trying to say very difficult. I really shouldn't need to be arguing for the value of precise wording here.
It's also worth pointing out that even "Navi" is a specific rendition of something more general - the RDNA architecture. Just like Vega, Polaris, Fiji, and all the rest were different renditions of various iterations of GCN. Which just serves to underscore how important it is to be specific with what you're saying. Navi is RDNA, but in a few years, RDNA will mean more than just Navi ... which is exactly why differentiating between an architecture and its specific iterations is quite important. After all, both a Vega 64 dGPU and a Vega 3 iGPU are based off the same architecture, but are radically different products. Unless you're talking only about the architecture, then, being specific about what you're talking about becomes quite important to getting your point across.

1: The One X has been reported to be a hybrid between Vega and Polaris, though it's not entirely clear what that means (not surprising, given that there are never deep-level presentations or whitepapers published on console APUs). I'd assume it means it has Vega NCUs with some other components (such as the GDDR5 controller) being ported from Polaris. After all, there's no other rendition of Vega with GDDR5. Also, they've likely culled some of the compute-centric features of Vega to bring the die size down.
2: That argument entirely neglects the low-level changes between GCN and RDNA. We still don't know their real-world benefits, but all competent technical analysts seem to agree that the "IPC" or perf/clock and perf/TFLOP gains AMD are promoting are believable.
3: There's no real performance-per-CU-per-clock difference between Polaris and Vega, at least not for gaming. Gaming performance scaling between Polaris and Vega is very close to linear (the same goes for "performance per FLOP") when factoring in CU counts, clock speeds and, to some degree, memory bandwidth. In other words, expecting a GDDR5-equipped, low-clocked, 40-CU Vega to perform close to a GDDR5-equipped, higher-clocked, 36-CU Polaris is entirely reasonable.
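A quick sanity check of that near-linear claim, using approximate public clocks (ballpark figures, not exact spec values):

```python
# Naive CU * clock comparison: a low-clocked 40 CU GPU (One X) versus a
# higher-clocked 36 CU Polaris (RX 580). Clocks are approximate.

one_x  = 40 * 1172   # Xbox One X GPU: 40 CUs at 1172 MHz
rx_580 = 36 * 1340   # RX 580: 36 CUs at ~1340 MHz boost

print(f"One X / RX 580 throughput ratio: {one_x / rx_580:.2f}")  # ~0.97 -> roughly comparable
```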
4: The 5700/XT is a ~250mm2 die. The RTX 2080 is a 545mm2 die, and the 2080 Ti is a 754mm2 die. Of course AMD aren't aiming for the high end with this - it's a clear (upper) midrange play. Remember, AMD is no stranger to large dice - Fiji was 596mm2. Vega 10 is 495mm2. If AMD were going for the high end, they'd add more CUs. Aiming for the mid-range with a smaller die makes sense on a new node with somewhat limited availability, but there's no doubt they have plans for a (likely much) larger RDNA-based die. This might be "Navi 2" or "Arcturus" or whatever, but nonetheless it's obviously coming at some point in the future. You're arguing as if Navi 10 is the biggest/highest performance configuration possible for Navi, which ... well, we don't know, but it's one hell of an assumption. What's more likely - that AMD spent 3-4 years developing an architecture that at best matches their already existing products at a smaller die size, but with no chance of scaling higher, or that they made an architecture that scales from low to high performance? My money is definitely on the latter.
As for the "6x performance" for the PS5, I never said that, but I tried giving you an explanation as to how they might arrive at those kinds of silly numbers - essentially by adding up multiple performance increases from different components (this is of course both misleading and rather dishonest, but that's how vague PR promises work, and we have to pick them apart ourselves). There's no way whatsoever they're claiming it to have 6x the GPU performance of the One X. That, as you say, is impossible - and if it did, they would definitely say so very specifically, as that'd be a huge selling point. But 2-3x GPU performance (not in FLOPS, but in actual gaming performance)? Doesn't sound too unlikely if they're willing to pay for a large enough die.Correct.Correct.Correct.Correct. Nope. Vega 10 (the die) is Vega 56 and 64. The issue here is that "Vega 10" is also the marketing name for the highest-specced mobile Vega APU. A marketing name and an internal die code name is not the same thing whatsoever, even if the name is identical. Vega APUs are based off an entirely different die design than the Vega 10 die - otherwise they'd also be 495mm2 GPU-only dice. The same goes for the Macbook Pro lineup's Vega 20 and Vega 16 chips, which are based off the Vega 12 die. This is of course confusing as all hell - after all, the names look the same, but the marketing naming scheme is based on the number of CUs enabled on the die, while the die code names are (seemingly) arbitrary - but that's how it is. Yes, that's how architectures work. Again: no. If there is such a thing as a "Vega 10 family", it includes only the cards listed on the page linked here. I'm really sounding like a broken record here, but "Vega" and "Vega 10" are not the same thing. Again, what you're describing is, very roughly, the difference between an architecture and a specific rendition of that architecture, yet you refuse outright to acknowledge that these two concepts are different things. Again: If your analogy is that the game engine is the architecture and the story/specific use of the engine (the game) is the silicon die, then yes, absolutely. Both games based on an engine and dice based on an architecture share similar underpinnings (though often with minor variations for different reasons), but belonging within the same family. Which is why we have designations such as "architecture" and "die" - different levels of similar things. Though to be exact the engine isn't HalfLife 2, it's Source, and both HL2, HL2 EP1 and HL2 EP2 are all specific expressions of that engine - as well as a bunch of other games. I'm arguing against you bombastically claiming that consoles will have Navi 10 - a specific die which they with 100% certainty won't have - and can't possibly be larger than this (which they can, if console makers want to pay for it). I didn't say 33% reduction in clocks, i said power. The entire point was that - for example - with a 33% reduction in power, you wouldn't lose anywhere close to 33% of performance due to how voltage/clock scaling works. Similarly, a 33% reduction in clocks would likely lead to a much more than 33% reduction in power consumption. The exact specifics of this depends on both the architecture and the process node it's implemented on. Which is why widening the die is far "cheaper" in terms of power consumption than increasing clocks. Adding 25% more CUs will increase performance by close to 25% (given sufficient memory bandwidth and other required resources) while also increasing power by about 25%. 
Increasing clocks by 25% will give more or less the same performance increase (again, assuming the CUs are being fed sufficiently), but will increase power by far more than 25%. The downside of a wider die is that it's larger (duh), so it's more expensive to produce and will have lower production yields. PC GPUs and consoles make different calls on the balance between die size and speed, which was what I was trying to explain - and which is why looking at PC clocks and die sizes isn't a very good predictor of future console GPU designs. (A quick numerical illustration of the two approaches is sketched at the end of this post.)

Hybrid. A lot of Vega, but not as compute-centric. They are indeed the same die, with the 5700 being a "harvested" part with 4 CUs disabled.

1: That's a rather bombastic assumption with no real basis. There's little precedent for huge console dice, true, but we still don't know which features they'll keep and cut from their custom design, which can potentially give significant size savings compared to PC GPUs. It's entirely possible that the upcoming consoles will have GPUs with more CUs than a 5700 XT. I'm not saying that they will, but it's entirely possible.
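Here's the numerical illustration referenced above: two hypothetical routes to the same ~25% throughput increase, using the same rough f x V^2 dynamic-power model as in the earlier sketch. Every number here (baseline power, voltages) is an assumption for illustration only, not a real console or Navi spec.

```python
# Two hypothetical routes to ~1.25x GPU throughput (illustrative numbers only).
# Assumed baseline: a 40 CU GPU at 1500 MHz drawing ~150 W.

base_power_w = 150.0

# Route A: 25% more CUs at unchanged clock/voltage -> power scales ~linearly with width.
wide_power = base_power_w * 1.25                         # ~188 W for ~1.25x throughput

# Route B: 25% higher clock on the same die -> needs a voltage bump,
# so power scales ~ (f/f0) * (V/V0)^2. Assume 1.20 V -> 1.30 V for the bump.
fast_power = base_power_w * 1.25 * (1.30 / 1.20) ** 2    # ~220 W for the same ~1.25x

print(f"wide-and-slow:   ~{wide_power:.0f} W")
print(f"narrow-and-fast: ~{fast_power:.0f} W")
```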
2: Which is irrelevant, as nobody has said that all games will run at those frame rates, just that they might happen in some games. The whole point here is trying to figure out what is the truth and what is not in an intentionally vague statement.
3: Seems like we agree here. The only reason I use "4k" is that typing 2160p is a hassle, and everyone uses 4k no matter if it's technically incorrect - people don't read that and expect DCI 4k. Anyhow, this wasn't the point of the part of my post you were responding to with that statement, but rather an explanation of the reasons why consoles can get more performance out of their GPU resources than similar PC hardware. Which they definitely can. No miracles, no, but undoubtedly more fps and resolution per amount of hardware resources. That's what a lightweight OS and a single development platform allowing for specific optimizations will do for you.
2: Well, it is relevant: if you claim your console can do 4k120fps, people expect to see that in most games, not just a few. It's like saying "Hey, look, my 2600K and 1060 do 300fps in CS:GO at FHD" and then I open up most of the games launched in the last 3 years and barely reach 60fps at just high settings.
3: Only in part, and that "phenomenon" is basically dead already; I have yet to see a game running better on a console than on a PC with similar characteristics - at the very least they're on par. But what you say was kind of true some years back.
Jump to 15:42: