Guys, it's not like I ever said consoles use the 100% exact silicon used on PC hardware,
Sorry, but you did:
Navi 10 is a specific die yes, that's what they'll use on consoles, just like they used Polaris 10 on PS4 Pro, and Vega 10 on Xbox one X
"Specific die" means "100% exact [same] silicon".
You can argue that that wasn't what you meant, but that's what I've been trying to get you to get on board with this whole time: that a GPU architecture (such as Navi, Vega or Polaris) is something different than a specific rendition of/design based on said architecture (such as Navi 10, Vega 10 or Polaris 10, 20, etc.). You consistently mix the two up - whether intentionally or not - which makes understanding what you're trying to say very difficult. I really shouldn't need to be arguing for the value of precise wording here.
It's also worth pointing out that even "Navi" is a specific rendition of something more general - the RDNA architecture. Just like Vega, Polaris, Fiji, and all the rest were different renditions of various iterations of GCN. Which just serves to underscore how important it is to be specific with what you're saying. Navi is RDNA, but in a few years, RDNA will mean more than just Navi.
but the performance and the architecture are the same; they might change the configuration, they might build monolithic-die APUs, but the juice is essentially the same. Even if Microsoft pays AMD for a custom chip, they're going to use the same technology they have for PC, just adapted and customized - but again, it's the same stuff.
... which is exactly why differentiating between an architecture and its specific iterations is quite important. After all, both a Vega 64 dGPU and a Vega 3 iGPU are based off the same architecture, but are radically different products. Unless you're talking only about the architecture, being specific about what you're referring to becomes quite important to getting your point across.
1: The Xbox One X has a semi custom GPU based on Vega architecture, not Polaris.
2: It's a way of saying that they can't do much beyond what they have already developed, which is already at its limit. And yes, we do know it, because it's been like that for years, and it's not going to change
3: Because they're different architectures. And optimizations still can't do miracles either.
4: Yes, they might have improved that, they might be approaching Nvidia's level of optimization on that front, but the fact stands: AMD themselves compared their 5700 XT to a 2070, and a 5700 to a 2060, and while they take the performance crown most of the time, they sometimes lose. Let's say they're probably going to battle the new "Super" cards Nvidia is preparing, which sound like they're basically going to be a 2060 Ti and a 2070 Ti. Anyway, the 5700 XT and 5700 are roughly at that performance level, and it doesn't seem to me that they're capable of doing what Sony is saying in any way. Nothing close to 6x performance, but much closer to 2x. Whether they're factoring other stuff into that number, I don't know, but it's not fair - because it's simply not true - to state that; if you talk about performance, it's computing performance only, unless you want to kinda scam your customers, and it wouldn't be the first time...
1: The One X has been reported to be a hybrid between Vega and Polaris, though it's not entirely clear what that means (not surprising, given that there are never deep-level presentations or whitepapers published on console APUs). I'd assume it means it has Vega NCUs with some other components (such as the GDDR5 controller) being ported from Polaris. After all, there's no other rendition of Vega with GDDR5. Also, they've likely culled some of the compute-centric features of Vega to bring the die size down.
2: That argument entirely neglects the low-level changes between GCN and RDNA. We still don't know their real-world benefits, but all competent technical analysts seem to agree that the "IPC" (perf/clock, or perf/TFLOP) gains AMD are promoting are believable - see the rough numbers below this list.
3: There's no real performance-per-CU-per-clock difference between Polaris and Vega, at least not for gaming. Gaming performance scaling between Polaris and Vega is very close to linear - i.e., near-identical "performance per FLOP" - when factoring in CU counts and clock speeds (and to some degree memory bandwidth). In other words, expecting a GDDR5-equipped, low-clocked, 40-CU Vega to perform close to a GDDR5-equipped, higher-clocked, 36-CU Polaris is entirely reasonable; again, see the sketch after this list.
4: The 5700/XT is a ~250mm2 die. The RTX 2080 is a 545mm2 die, and the 2080 Ti is a 754mm2 die. Of course AMD aren't aiming for the high end with this - it's a clear (upper) midrange play. Remember, AMD is no stranger to large dice - Fiji was 596mm2, and Vega 10 is 495mm2. If AMD were going for the high end, they'd add more CUs. Aiming for the mid-range with a smaller die makes sense on a new node with somewhat limited availability, but there's no doubt they have plans for a (likely much) larger RDNA-based die. This might be "Navi 2" or "Arcturus" or whatever, but nonetheless it's obviously coming at some point in the future. You're arguing as if Navi 10 is the biggest/highest performance configuration possible for Navi, which ... well, we don't know, but it's one hell of an assumption. What's more likely - that AMD spent 3-4 years developing an architecture that at best matches their already existing products at a smaller die size, but with no chance of scaling higher, or that they made an architecture that scales from low to high performance? My money is definitely on the latter.
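To put rough numbers on points 2 and 3, here's a minimal back-of-the-envelope sketch. The CU counts and clocks are approximate public specs, and the 1.25x perf-per-FLOP factor for RDNA is AMD's own marketing claim, not a measured result - treat the whole thing as illustrative only:

```python
# Rough gaming-performance proxy: raw FP32 throughput is CUs * clock * 128
# ops/clock (64 lanes * 2 ops per FMA). "Perf per FLOP" is roughly equal
# between Polaris and Vega, but claimed ~1.25x higher for RDNA.

def perf_proxy(cus: int, clock_mhz: int, perf_per_flop: float = 1.0) -> float:
    tflops = cus * clock_mhz * 128 / 1e6  # FP32 TFLOPS
    return tflops * perf_per_flop

# Point 3: a low-clocked 40-CU Vega lands right next to a higher-clocked
# 36-CU Polaris.
print(f"RX 580 (Polaris 20):  {perf_proxy(36, 1340):.2f}")  # ~6.17
print(f"One X GPU (Vega-ish): {perf_proxy(40, 1172):.2f}")  # ~6.00

# Point 2: RDNA's claimed perf/FLOP gain lets ~9.75 TFLOPS of Navi
# approach ~12.7 TFLOPS of GCN.
print(f"Vega 64 (GCN):  {perf_proxy(64, 1546):.2f}")        # ~12.66
print(f"5700 XT (RDNA): {perf_proxy(40, 1905, 1.25):.2f}")  # ~12.19
```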
As for the "6x performance" for the PS5, I never said that, but I tried giving you an explanation as to how they might arrive at those kinds of silly numbers - essentially by adding up multiple performance increases from different components (this is of course both misleading and rather dishonest, but that's how vague PR promises work, and we have to pick them apart ourselves). There's no way whatsoever they're claiming it to have 6x the GPU performance of the One X. That, as you say, is impossible - and if it were true, they would definitely say so very specifically, as that'd be a huge selling point. But 2-3x GPU performance (not in FLOPS, but in actual gaming performance)? Doesn't sound too unlikely if they're willing to pay for a large enough die.
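For illustration, here's the kind of arithmetic such a PR claim could hide. Every factor below is invented - this is not Sony's actual math, just the general shape of how per-component gains get rolled into one headline number:

```python
# Hypothetical per-component gains for a new console generation.
# All numbers are made up purely for illustration.
component_gains = {
    "GPU (gaming perf)": 2.5,      # plausible gen-on-gen GPU uplift
    "CPU (Jaguar -> Zen 2)": 4.0,  # guess at multithreaded uplift
    "storage (HDD -> SSD)": 10.0,  # guess at sequential-read uplift
}

headline = max(component_gains.values())                        # "up to 10x!"
average = sum(component_gains.values()) / len(component_gains)  # ~5.5x
print(f"'Up to' headline:   {headline:.0f}x")
print(f"Averaged 'overall': {average:.1f}x")
print(f"Actual GPU gain:    {component_gains['GPU (gaming perf)']}x")
```

Quote whichever of those numbers sounds best, stay vague about what it measures, and you've got a "6x performance" bullet point.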
Again, they did: Polaris 10 includes all RX 4xx cards;
Correct.
Polaris 20 includes RX 570 and 580;
Correct.
Polaris 30 includes RX 590,
Correct.
and Vega 20 includes Vega VII
Correct.
Vega 10 includes Vega 3/6/8/10/11/20/48/56/64
Nope.
Vega 10 (the die) is Vega 56 and 64. The issue here is that "Vega 10" is also the marketing name for the highest-specced mobile Vega APU. A marketing name and an internal die code name are not the same thing whatsoever, even if the name is identical. Vega APUs are based off an entirely different die design than the Vega 10 die - otherwise they'd also be 495mm2 GPU-only dice. The same goes for the Macbook Pro lineup's Vega 20 and Vega 16 chips, which are based off the Vega 12 die. This is of course confusing as all hell - after all, the names look the same, but the marketing naming scheme is based on the number of CUs enabled on the die, while the die code names are (seemingly) arbitrary - but that's how it is.
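Here's that naming mess as a lookup table - internal die code names on the left, marketing names of products built on them on the right. Compiled from this thread plus public info; not official or exhaustive:

```python
# Die code name -> products marketed on that die.
die_to_products = {
    "Polaris 10": ["RX 470", "RX 480"],
    "Polaris 20": ["RX 570", "RX 580"],
    "Polaris 30": ["RX 590"],
    "Vega 10":    ["Vega 56", "Vega 64"],                    # the 495mm2 die
    "Vega 12":    ["Vega 16 (mobile)", "Vega 20 (mobile)"],  # MacBook Pro parts
    "Vega 20":    ["Radeon VII"],
    "Raven Ridge (APU)": ["Vega 3", "Vega 6", "Vega 8", "Vega 10", "Vega 11"],
}

# Marketing names count enabled CUs ("Vega 11" = 11 CUs); die code names
# are (seemingly) arbitrary - hence "Vega 10" meaning two different things.
```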
Polaris 10/20/30 are all Polaris; Vega 10 and 20 are both Vega.
Yes, that's how architectures work.
The Xbox One X GPU is built on the Vega 10 family,
Again: no.
If there is such a thing as a "Vega 10 family", it includes only the cards listed on the page linked here. I'm really sounding like a broken record here, but "Vega" and "Vega 10" are not the same thing.
it's not like any of those mentioned before, but it's still Vega and very possibly part of the Vega 10 group. But even if it isn't - even if it's not Vega 10 but Vega "Xbox" - what does it matter? Same stuff, same performance per watt, same die limitations, as it's still part of the Vega family and they share everything. I agree it doesn't mean the same at 100%, but at 90% it's still considerably the same - not exact, but almost.
Again, what you're describing is, very roughly, the difference between an architecture and a specific rendition of that architecture, yet you refuse outright to acknowledge that these two concepts are different things.
What will it change technically? The disposition inside the die? The configuration? Apart from that? Performance/watt is still the same, and won't go any higher. Not quite an accurate analogy; I'd say one is Half-Life 2, and the other is Half-Life 2 Episode 1 or 2. Basically Half-Life 2, apart from the story.
Again: if your analogy is that the game engine is the architecture and the story/specific use of the engine (the game) is the silicon die, then yes, absolutely. Both games based on an engine and dice based on an architecture share similar underpinnings (though often with minor variations for different reasons) while belonging to the same family. Which is why we have designations such as "architecture" and "die" - different levels of similar things. Though to be exact, the engine isn't Half-Life 2, it's Source, and HL2, HL2 EP1 and HL2 EP2 are all specific expressions of that engine - as are a bunch of other games.
Well, what are you arguing about then? We'll get whatever AMD is capable of doing with their chips, nothing more - and whatever Sony or Microsoft are able to sell at a decent price without it consuming power like an oven.
I'm arguing against you bombastically claiming that consoles will have Navi 10 - a specific die which they with 100% certainty won't have - and that console GPUs can't possibly be larger than it (which they can, if console makers want to pay for it).
Alright, but a 33% reduction in clocks won't give you only a 3% performance loss, or even only 13% - it'll be something around 20-25%, which is still pretty significant. And again, no, they never did that, because if they're struggling to meet the power limit by reducing clocks, widening the die will only give them back the power consumption they got rid of by lowering clocks - maybe not at the same price, but very close to it - so it doesn't make sense. You're still comparing Polaris-based chips with the Vega-based chip of the Xbox One X. The Xbox One X die has 16 fewer CUs than the slower PC version of Vega, which is 56, with lower clocks of course. But if you want to keep comparing it to Polaris, have it your way.
I didn't say a 33% reduction in clocks, I said power. The entire point was that - for example - with a 33% reduction in power, you wouldn't lose anywhere close to 33% of performance, due to how voltage/clock scaling works. Similarly, a 33% reduction in clocks would likely lead to a much more than 33% reduction in power consumption. The exact specifics of this depend on both the architecture and the process node it's implemented on. Which is why widening the die is far "cheaper" in terms of power consumption than increasing clocks. Adding 25% more CUs will increase performance by close to 25% (given sufficient memory bandwidth and other required resources) while also increasing power by about 25%. Increasing clocks by 25% will give more or less the same performance gain (again, assuming the CUs are being fed sufficiently), but will increase power by far more than 25%. The downside with a wider die is that it's larger (duh), so it's more expensive to produce and will have lower production yields. PC GPUs and consoles make different calls on the balance between die size and speed, which is what I was trying to explain - and why looking at PC clocks and die sizes isn't a very good predictor of future console GPU designs.
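A toy model of that "wider vs. faster" trade-off: dynamic power scales roughly with C * V^2 * f, and voltage has to rise with clocks. The voltage/clock relation below is a made-up linear approximation, just to show the shape of the curve - not any real chip's V/f behavior:

```python
# Relative dynamic power ~ CUs * V^2 * clock, with voltage rising as
# clocks rise. The 0.5%-voltage-per-1%-clock slope is an assumption.

def rel_power(cus: int, clock: float) -> float:
    v = 1.0 + 0.5 * (clock - 1.0)  # assumed: +0.5% voltage per +1% clock
    return cus * v**2 * clock

baseline = rel_power(40, 1.00)
wider    = rel_power(50, 1.00)  # +25% CUs at the same clock
faster   = rel_power(40, 1.25)  # +25% clock instead

print(f"+25% CUs:   {wider / baseline - 1:.0%} more power")   # 25%
print(f"+25% clock: {faster / baseline - 1:.0%} more power")  # ~58% here
```

Both routes buy roughly the same ~25% performance (assuming the CUs are fed), but the clock route pays for it more than twice over in power.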
Hybrid. A lot of Vega, but not as compute-centric.
I actually understood they were different dies, but we're still not sure they're the same die, though (or did I miss something official from AMD?). Anyway, it makes perfect sense - it costs less to just use lower-binned dies to make a slower chip that way. I guess we'll see a different die size if they ever make a more powerful chip and name it RX 5800 or something.
They are indeed the same die, with the 5700 being a "harvested" part with 4 CUs disabled.
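For concreteness, here's what disabling those 4 CUs (plus the lower boost clock) works out to, using approximate public specs and the same CU * clock * 128 proxy as earlier - a quick sketch, not official figures:

```python
# FP32 throughput of the two Navi 10 bins (approximate boost clocks).
def tflops(cus: int, boost_mhz: int) -> float:
    return cus * boost_mhz * 128 / 1e6  # 64 lanes * 2 ops (FMA) per CU

print(f"5700 XT: 40 CUs -> {tflops(40, 1905):.2f} TFLOPS")  # ~9.75
print(f"5700:    36 CUs -> {tflops(36, 1725):.2f} TFLOPS")  # ~7.95
```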
1: We don't, but we can assume it'll be something smaller than the 5700 XT, maybe as big as the 5700, with slower clocks. Nothing close to 6x the performance or 4k 120fps.
2: Agreed, but most games won't be able to do that, plain and simple - and by most I'm talking about a good 70%, if not more. Not on consoles, since there's no real pro-gamer community or esports of any kind on consoles, and 80% of the console market is casual gaming. It's actually the CPU that I'm excited about, since previous consoles had a decent graphics chip and an absolute pile of garbage as a CPU. Hopefully Zen 2 will change that forever, so that freaking devs can develop proper games without being bottlenecked by consoles' ridiculous CPUs.
3: Well, you were talking about "1500-1800p" in one of your previous posts, so that's why I stated that. We're not cinematographers, and I wasn't trying to correct you when I said "UHD to be more precise" - I just don't like the "4k" term. I've come to hate it in recent years, maybe because it has been used inappropriately for years now, in any sort of advertisement.
1: That's a rather bombastic assumption with no real basis. There's little precedent for huge console dice, true, but we still don't know which features they'll keep and cut from their custom design, which can potentially give significant size savings compared to PC GPUs. It's entirely possible that the upcoming consoles will have GPUs with more CUs than a 5700 XT. I'm not saying that they will, but it's entirely possible.
2: Which is irrelevant, as nobody has said that all games will run at those frame rates, just that they might happen in some games. The whole point here is trying to figure out what is the truth and what is not in an intentionally vague statement.
3: Seems like we agree here. The only reason I use "4k" is that typing 2160p is a hassle, and everyone uses 4k no matter if it's technically incorrect - people don't read that and expect DCI 4k. Anyhow, this wasn't the point of the part of my post you were responding to with that statement; it was rather an explanation of the reasons why consoles can get more performance out of their GPU resources than similar PC hardware. Which they definitely can. No miracles, no, but undoubtedly more fps and resolution per amount of hardware resources. That's what a lightweight OS and a single development platform allowing for specific optimizations will do for you.