
AMD 7nm "Vega" by December, Not a Die-shrink of "Vega 10"

If you're going to go that technical, please pay some attention to punctuation and to presenting your argument. I normally understand that stuff, but I can't make heads or tails of your post.
That's because the RX 580 was just going overboard with clocks to gain anything over the 1060. When I said Polaris, I meant the RX 480.

Yet Vega 56 matches the RX 480 - and isn't clocked as high as the 64. Again: Vega is pushed to its limit in terms of clocks, just like RX 5xx Polaris, and is thus very, very similar in terms of clock scaling and perf/W.

Not a hard limit, a memory chokepoint limit. People who mined with the card overclocked the memory and underclocked the core because there isn't enough bandwidth to feed 64 CUs. Fury X was starved too, and on paper Vega 64 actually has slightly less bandwidth than Fury X did (484 GB/s vs 512 GB/s).
Well, that begs the question why AMD's arch needs so much more memory bandwidth than Nvidia's for the same performance. I'm not saying you're wrong, but I don't think it's that simple. Also, if memory bandwidth was the real limitation (and there's no architectural max limit on CUs), releasing 7nm Vega 20 (with 4x HBM2) would make sense. Yet that's not happening. I'm choosing to interpret that as a sign, but you're of course welcome to disagree here. At the very least, it'll be interesting to see the CU count of Vega 20.
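For reference, a quick back-of-the-envelope sketch of the theoretical peak bandwidth figures in play here (the bus widths and data rates are the commonly published stock specs, and the Vega 20 line is based purely on the 4-stack rumor):

```python
# Theoretical peak bandwidth = bus width (bits) / 8 * effective data rate (Gbps).
# These are published stock specs; real-world achievable bandwidth is lower.
cards = {
    "Fury X (4x HBM1)":           (4096, 1.0),   # 4096-bit bus, 1.0 Gbps per pin
    "Vega 64 (2x HBM2)":          (2048, 1.89),  # 2048-bit bus, ~1.89 Gbps per pin
    "Vega 20 (4x HBM2, rumored)": (4096, 2.0),   # assumption based on the 4-stack rumor
}

for name, (bus_bits, gbps) in cards.items():
    bandwidth_gb_s = bus_bits / 8 * gbps
    print(f"{name}: ~{bandwidth_gb_s:.0f} GB/s")
# Fury X ~512 GB/s, Vega 64 ~484 GB/s, rumored Vega 20 ~1024 GB/s
```

Doubling the stack count doubles the bus width, which is where the "2x bandwidth" figure for Vega 20 comes from.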
 
Well, that begs the question why AMD's arch needs so much more memory bandwidth than Nvidia's for the same performance. I'm not saying you're wrong, but I don't think it's that simple. Also, if memory bandwidth was the real limitation (and there's no architectural max limit on CUs), releasing 7nm Vega 20 (with 4x HBM2) would make sense. Yet that's not happening. I'm choosing to interpret that as a sign, but you're of course welcome to disagree here. At the very least, it'll be interesting to see the CU count of Vega 20.
Eh? Vega 20 is 7nm w/ 4 HBM2 stacks. Look at the original post of this thread.

Vega 20 has at least 64 CUs, but it's 50/50 on having more than that. Probably depends on whether or not 64 CUs are still memory starved even with the wider bus.

Not so sure about what Navi really is. It might be the last of GCN, or not. Rumors are that Navi is made especially for Sony's next console, the PlayStation 5, which will have either a Ryzen+Navi SoC or a separate Ryzen CPU + Navi dGPU.
Zen in a console is extremely unlikely. It will definitely be an APU like XB and PS currently have. Sony clearly made demands of AMD that Vega wasn't capable of filling. What demands though, I don't know. Those demands have taken Navi off of the GCN path and put AMD on a different path. GCN may continue to live on as a compute focus card but it seems likely that gaming products have been forked to whatever Navi is.
 
Eh? Vega 20 is 7nm w/ 4 HBM2 stacks. Look at the original post of this thread.

Vega 20 has at least 64 CUs, but it's 50/50 on having more than that. Probably depends on whether or not 64 CUs are still memory starved even with the wider bus.


Zen in a console is extremely unlikely. It will definitely be an APU like XB and PS currently have. Sony clearly made demands of AMD that Vega wasn't capable of filling. What demands though, I don't know. Those demands have taken Navi off of the GCN path and put AMD on a different path. GCN may continue to live on as a compute focus card but it seems likely that gaming products have been forked to whatever Navi is.
Apparently two words slipped out of my post there. Let me correct myself:
Also, if memory bandwidth was the real limitation (and there's no architectural max limit on CUs), releasing 7nm Vega 20 (with 4x HBM2) for gaming would make sense. Yet that's not happening. I'm choosing to interpret that as a sign, but you're of course welcome to disagree here. At the very least, it'll be interesting to see the CU count of Vega 20.
There. Make sense now?

As for the latter part of your post: Raven Ridge is also Zen (just like non-APU designs such as Summit Ridge). As such, a non-Jaguar APU would be Zen in a console.
 
Well, that begs the question why AMD's arch needs so much more memory bandwidth than Nvidia's for the same performance. I'm not saying you're wrong, but I don't think it's that simple. Also, if memory bandwidth was the real limitation (and there's no architectural max limit on CUs), releasing 7nm Vega 20 (with 4x HBM2) would make sense.
Delta compression? Both have it but Nvidia's has so far been considered (and measured) to be much more effective.
Bandwidth might be a problem but not severe enough to solve with a solution as expensive as that.
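For anyone unfamiliar with what delta compression does, here's a toy illustration of the basic idea (this is emphatically not how either vendor implements DCC in hardware, just the principle that neighboring pixels tend to be similar, so storing small deltas is cheaper than storing full values):

```python
# Toy delta encoding of one row of 8-bit pixel values from a smooth gradient.
row = [118, 119, 119, 120, 122, 122, 123, 125]

base = row[0]
deltas = [b - a for a, b in zip(row, row[1:])]  # differences between neighbors

print("base:", base, "deltas:", deltas)
# base: 118 deltas: [1, 0, 1, 2, 0, 1, 2]
# Each delta fits in a couple of bits instead of 8, so the row compresses well.
# Real DCC works on 2D tiles and falls back to uncompressed data when deltas get large.
```

How aggressively the hardware can keep tiles compressed is where the vendors differ, and that difference shows up directly as effective bandwidth.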

Zen in a console is extremely unlikely. It will definitely be an APU like XB and PS currently have.
APU is just CPU and GPU on one die/package. Nothing really stops AMD from replacing Jaguar cores with Zen/+/2. AMD can easily do a Raven Ridge with larger GPU and very likely is doing exactly that.
 
They care more about lower power than they do about CPU prowess. AMD is likely working on an ultra low power fork of Zen to replace Jaguar and that fork is what will end up in the next gen consoles. It will not be Zen cores.

Raven Ridge has too much CPU and too little GPU to be used in a gaming console.

Apparently two words slipped out of my post there. Let me correct myself:

There. Make sense now?
Indeed, and as you pointed out, Vega 20 is not intended for gamers at all.
 
They care more about lower power than they do about CPU prowess. AMD is likely working on an ultra low power fork of Zen to replace Jaguar and that fork is what will end up in the next gen consoles. It will not be Zen cores.
Zen already scales very, very well to low power, sustaining 2GHz across 4c8t in 15W on 14nm for the 1st revision. Assuming the PS5 will use 7nm and either Zen+ or Zen2, they don't need a bespoke fork. Also, an AMD rep at Hot Chips recently indicated that the APUs (successors to Raven Ridge) will keep scaling to lower power over the following generations without losing performance. AMD designed Zen to fill as wide a space as Intel's Core, and by all accounts they've succeeded.
 
Eh? Vega 20 is 7nm w/ 4 HBM2 stacks. Look at the original post of this thread.

Vega 20 has at least 64 CUs, but it's 50/50 on having more than that. Probably depends on whether or not 64 CUs are still memory starved even with the wider bus.


Zen in a console is extremely unlikely. It will definitely be an APU like XB and PS currently have. Sony clearly made demands of AMD that Vega wasn't capable of filling. What demands though, I don't know. Those demands have taken Navi off of the GCN path and put AMD on a different path. GCN may continue to live on as a compute focus card but it seems likely that gaming products have been forked to whatever Navi is.

So you think that when they make a custom Zen SoC for some unknown Chinese console manufacturer, they won't do one for Sony?
https://www.anandtech.com/show/1316...han-subor-z-console-with-custom-amd-ryzen-soc

Delta compression? Both have it but Nvidia's has so far been considered (and measured) to be much more effective.
Bandwidth might be a problem but not severe enough to solve with a solution as expensive as that.

APU is just CPU and GPU on one die/package. Nothing really stops AMD from replacing Jaguar cores with Zen/+/2. AMD can easily do a Raven Ridge with larger GPU and very likely is doing exactly that.

DCC and tile-based rasterization are both superior on Nvidia's side. AMD says the ROPs are enough on GCN, but that can make a difference too.
 
Indeed, and as you pointed out, Vega 20 is not intended for gamers at all.
You know you can't disprove a point by refusing to address it, right? To reiterate: if memory bandwidth is the main performance bottleneck of Vega 10, Vega 20 fixes that. (At an additional cost, sure, but with 2x B/W they should be able to increase CU count noticeably, lower clocks, and seriously improve perf/W, which is where they lag the most today.) So they should be able to release a far more powerful gaming GPU with Vega 20. Yet they're not doing that, not even when yields improve. Doesn't that say something about B/W not being the bottleneck for gaming?
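To put rough numbers on that argument, here's a bytes-per-FLOP sketch; the Vega 64 figures are stock specs, while the Vega 20 gaming configuration is purely a hypothetical of mine:

```python
def bytes_per_flop(shaders, clock_ghz, bandwidth_gb_s):
    # FP32 throughput: shaders * 2 ops per clock (FMA) * clock, in GFLOPS
    gflops = shaders * 2 * clock_ghz
    return bandwidth_gb_s / gflops, gflops / 1000

# Vega 64 stock: 4096 shaders at ~1.55 GHz boost, ~484 GB/s
ratio, tflops = bytes_per_flop(4096, 1.55, 484)
print(f"Vega 64:              {tflops:.1f} TFLOPS, {ratio:.3f} B/FLOP")

# Hypothetical gaming Vega 20: more shaders, lower clocks, 4 HBM2 stacks (all assumed)
ratio, tflops = bytes_per_flop(5120, 1.40, 1024)
print(f"Hypothetical Vega 20: {tflops:.1f} TFLOPS, {ratio:.3f} B/FLOP")
# Roughly 0.038 B/FLOP for Vega 64 vs roughly 0.071 B/FLOP for the hypothetical config
```

Even with more shaders at lower clocks, the bandwidth available per FLOP nearly doubles, which is exactly the headroom the "Vega is bandwidth starved" argument says it's missing.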

Raven Ridge has too much CPU and too little GPU to be used in a gaming console.
Has anyone here argued that we'll see Raven Ridge in a console?
 
Zen already scales very, very well to low power, sustaining 2GHz across 4c8t in 15W on 14nm for the 1st revision.
Jaguar can go as low as 3.9 W. It will no doubt be Zen-based, but it won't be Ryzen. For one, the extra transistors SMT requires aren't worth the performance gain for Microsoft/Sony. They'll want a lean 8c/8t processor over a feature-rich 4c/8t one.

Yet they're not doing that, not even when yields improve. Doesn't that say something about B/W not being the bottleneck for gaming?
Nope, they'd rather sell these chips at $2000+ each to compute customers than at <$1000 each to gamers. Games rarely use 6 GiB of VRAM, never mind 32 GiB. They would have to sell two SKUs of the chip, one with thicker stacks than the other. It's a lot of work and a lot of money to go down that path, so Vega 20 focuses only on compute. AMD is investing their consumer resources in Navi.
 
Jaguar can go as low as 3.9 W.
And Zen, given a 2x or more IPC advantage over Jaguar, can exceed it in perf/W, even if it might bottom out a bit higher in terms of absolute wattage - especially at 7nm. Both MS and Sony will be looking for actual increases in CPU performance, and more than 8 cores is unlikely, so they'll want Zen, and they'll want it at 2 GHz or more.
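Rough numbers on why I think the math favors Zen here (the IPC factor, clocks, and power assumptions are ballpark figures for the sake of argument, not measurements):

```python
# Crude relative throughput estimate: cores * clock * relative IPC.
def rel_throughput(cores, clock_ghz, ipc_factor):
    return cores * clock_ghz * ipc_factor

jaguar = rel_throughput(8, 1.6, 1.0)  # PS4-style 8-core Jaguar at 1.6 GHz (baseline IPC)
zen    = rel_throughput(8, 2.0, 2.0)  # assumed 8-core Zen at a conservative 2 GHz, ~2x IPC

print(f"Zen / Jaguar throughput: {zen / jaguar:.1f}x")  # => 2.5x
# Even if the Zen cluster drew somewhat more power than Jaguar's minimum,
# a ~2.5x throughput gain on 7nm leaves plenty of room for perf/W to come out ahead.
```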
It will no doubt be Zen-based, but it won't be Ryzen. For one, the extra transistors SMT requires aren't worth the performance gain for Microsoft/Sony. They'll want a lean 8c/8t processor over a feature-rich 4c/8t one.
That's a rather meaningless thing to say. The cores in the PS4 and XBONE aren't "Jaguar" either (IIRC they're "Jaguar-derived"), but the difference is mainly academic. Implementing them in a custom SoC requires changes to the design, so it'll never be an identical port, but saying "it'll be Zen, but not Ryzen" is a meaningless distinction (particularly as the marketing name "Ryzen" already encompasses at least three variants of the Zen arch). Nobody here is saying we'll see a straight port of either Raven Ridge or a whole Summit Ridge Zeppelin die to a console. If you're understanding us that way, you're trying pretty hard to misunderstand.


Nope, they'd rather sell these chips at $2000+ each to compute customers than at <$1000 each to gamers. Games rarely use 6 GiB of VRAM, never mind 32 GiB. They would have to sell two SKUs of the chip, one with thicker stacks than the other. It's a lot of work and a lot of money to go down that path, so Vega 20 focuses only on compute. AMD is investing their consumer resources in Navi.
You did see that that was my point above, right? That this is a plan to get back to profitability, and they're putting off gaming until they can refresh the arch?

But honestly, I don't doubt for a second that AMD would launch a top-end Vega 20 consumer flagship GPU if they could compete with the 2080 Ti. Even if it were a limited-run $1200+ exclusive, it would help win/maintain valuable mindshare while waiting for Navi, not to mention maintain developer interest. But there's no indication that they can. And while sad, that's okay (sh*t does happen, after all): delivering overpriced or half-arsed attempts would hurt them in terms of mindshare, so they're better off not trying at all until Navi can get them back in the game. If Navi can't do that, that's another story, but considering the amount of engineering talent they have and their relationships with the console industry, I'm not particularly worried. We'll have to wait a year or so to see what AMD can bring to the table.
 
And AMD, once again, leaves an entire segment to Nvidia for a third generation in a row.

AMD should just sell Radeon at this point. They can do really well with GPUs, or CPUs, but not both. Sell Radeon to somebody who can actually produce decent GPUs. Vega was a year late and many watts too high, and is about to get eclipsed by a new generation of GPUs from Nvidia.

Don't be silly. Vega was not a year too late; delayed, yes, but it did deliver, along with Polaris. AMD just isn't focused on high-end gamers at the moment, which is perfectly fine because it would have been a wasted battle. Instead, AMD's Polaris is great on 14nm and does really well in the mid-range and in mobile. Vega was initially designed for 7nm and for datacenters. The 14nm FinFET parts still use the Vega architecture and provide performance benefits despite being produced on a less favourable GloFo process that does not like higher clock speeds. Still, Vega 64 was the best in FPU-intensive applications and is still wonderful for the Radeon Pro and Instinct products. Optimized code yields better performance compared to the competition; we've witnessed that already.

On the APU and mobile side of things, Vega is incredibly efficient. Efficiency is key because AMD is prepping for 7nm, which will play nicely with such architectures, especially once MCM becomes common practice. The ML, DL, and HPC markets are huge, and Vega 20 is going straight for them. MCM designs are the future and Vega is future-proofed. High-end gaming is a tiny piece of the pie, but that will also come once AMD has established leadership in 7nm GPUs, CPUs, and the HSA that exists between them. So to shamelessly say AMD should sell off RTG is extremely naive, uneducated, and foolish. RTG is in good hands, and has the backing of the world's most powerful CPUs and the engineering teams behind them.
 
I wish you guys would argue more with pictures and graphs, maybe some memes..... All this reading is hurting my ignorant brain.
 
Looks like they're gonna keep Vega for compute where it really shines and focus on making Navi gamer focused.
I have no doubt executives at both companies have been, and are now, looking into ways to maximize sales via crypto mining.

Gamers above the "console" level are a low priority for AMD in particular. AMD sold out of the Vega cards that gamers were mocking so heavily, even though I warned them that Vega would be considered a success by AMD once miners caused it to sell out (which it did). Most gamers couldn't see past their noses to realize that AMD doesn't care who buys their cards beyond which group will bring in the most sales. All the fake complaining from both companies about mining is pure marketing. No corporation is going to be sad to have sales that exceed supply, beyond simply wishing there were more supply.

They may not be able to do it, but I have no doubt that people at both companies are trying to create new coins that will cause another crypto boom so they can sell out their products. AMD, in particular, has a strong incentive to try this strategy, since Nvidia's bribing of game developers has given that company quite a bit of advantage. Also, Nvidia is larger and can afford to cater more to gamers. When you're smaller and poorer you have to focus more, which AMD did. Gamers haven't provided enough demand to move all of AMD/ATI's supply (even when the cards were superior to Nvidia's, as with the 5870). Crypto, although less reliable, offers a sweet target to try to hit again and again. We'll get whatever scraps are left over from that market and the prosumer/pro/supercomputing stuff.

Microsoft and Sony also don't want people to stop propping up their artificial BS duopoly in "consoles". Those are just small form factor x86 PCs. The same can be accomplished without paying taxes to MS and Sony, with Linux and Vulkan/OpenGL on the already-standard x86 hardware. And you do pay serious taxes to prop up these "console" walled gardens, like the terrible Jaguar CPU and the absurdity of having to release the same game for three platforms despite them being built from the same x86 hardware standard. So there may be some payola to AMD on the side from both MS and Sony to not make it affordable to buy into the PC gaming platform. Monopolies are even more extreme than duopolies in terms of the controlling company's ability to raise prices artificially, which is what we are seeing right now from Nvidia.

Jaguar can go as low as 3.9 W.
Jaguar is a garbage CPU that exists in the market for two reasons only:

1) Intel didn't bother to create anything decent to compete with it in time.
2) (most importantly) Sony and Microsoft have a duopoly.

Sony and MS didn't need Jaguar to go to 3.9W. Gamers would have been better off with a Piledriver clocked low than with Jaguar, especially for a second console iteration. Artificially higher prices go hand-in-hand with duopolies, monopolies, and cartels. Obsolete and inadequate products that wouldn't survive on their own merits in a competitive market are what get sold by monopolists because they make more money, even though they deliver less to consumers than what the consumers demand. The beauty, for corporations, of monopoly (and duopoly, to a lesser, but still very important extent) is consumer capture. Captured consumers have no choice beyond either paying too much or being shut out of a market.
 
You're not entirely wrong, but considering that (native) games on Linux on average perform significantly worse than the same games on Windows, which is again outperformed by consoles at equivalent hardware levels (or at least as close as you can get), it's not quite as easy as you're making it out to be. Consoles are walled gardens, but that brings with it the benefit of low-level hardware access (more so than Vulkan or DX12 seems able to provide) and the benefit of developing for a fixed hardware platform and thus learning how to best utilize its strengths and avoid its weaknesses over time - just look at how good late X360 or PS3 games look compared to the ones launched within the first year of those consoles' life cycles. Also, consoles are very cheap for what they offer (a very decent gaming experience for $250, just add any TV or monitor? Yeah, you're not getting that in a PC), and offer a level of convenience that is worth quite a lot to a lot of people. In short: consoles aren't going anywhere, and not because of unfair practices from MS or Sony. I prefer PC gaming, but it is undoubtedly both more expensive and more complicated. I don't mind. But I'm not getting rid of my consoles any time soon either, even though they don't see much use compared to my PC. Each has its distinctive strengths, and the price advantage is most definitely on the console side. So much for this being a product of a duopoly, I suppose?

As for monopolies, duopolies and cartels - there's no doubt that unfair business practices abound in the tech world (as they do in all major fields of international business, as there's no regulation or oversight to speak of), but I think you're going a bit too far here. Jaguar, while indeed being a garbage CPU, exists because it was a cheap, low silicon area, low-power multi-core x86 CPU which AMD could fit into an APU design at the right time and place. Intel's Atom designs of the same era were no slower, but couldn't be fit into an APU, so they weren't an option. The world has moved on in the 5-6 years since, but consoles are slow in terms of hardware development (the Xbox 360 still used its 2005-era PowerPC CPU in 2013 when the Xbox One launched). There's nothing inherently wrong with this, even if it's outdated tech by today's standards. A faster console hardware replacement cycle doesn't make sense economically (most people wouldn't buy new consoles that quickly, and small developers would struggle to adapt to new architectures at that pace). There's no doubt that there's a lot to be gained from Jaguar-derived cores being phased out of consoles, but on the other hand, it's amazing what developers are able to do with 8 of these garbage cores. I fully welcome the next generation's move to Zen, but every design like this necessitates tradeoffs, and for 2012-13, Jaguar was an excellent choice.

As for AMD and Nvidia trying to invent new cryptocurrencies - it might be, but that sounds pretty out there, in particular in the current climate. Nobody is going to be interested in the new, "hot" crypto that's easy to mine if nobody is willing to buy or sell it. Also, difficulty is not what's behind the bubble bursting, but the simple fact that a system based purely on gambling and BS claims of value isn't sustainable over time. In other words, inventing a new currency changes nothing. If anything, there's a glut of currencies, and they're not helping anything. There's no doubt that both Nvidia and AMD have profited nicely off the crypto boom, and no doubt enjoyed this, but they've known from the get-go that this wasn't a sustainable market, nor one that encourages long-term sales or brand loyalty.
 
After some hours reading about process nodes, I discovered that since 2012 each foundry has been defining its own process node. I concluded that TSMC is ready with its 7nm because its 7nm is simpler than Intel's 10nm. How did I get that idea? Well, according to the ITRS's official guidelines about the physical properties of transistors, the specs haven't been fulfilled by TSMC since its 16nm, which is more similar to its 20nm than to the official 16/14nm spec. Even its 12nm is more similar to its 20nm than to the official 16/14nm spec.

But TSMC hasn't been the only one cheating: Samsung's 10nm is actually 14nm according to the official specs, and Samsung's 14nm is actually more similar to its 20nm than to the official 16/14nm spec too. I feel so stupid for not knowing all this before; I really believed that each process node was the same for every foundry.
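To make the comparison concrete, these are the approximate published gate/metal pitches I was comparing (roughly what WikiChip lists; treat them as ballpark figures and check the sources below):

```python
# Approximate published pitches (nm) per node, roughly as listed on WikiChip circa 2018.
# CPP = contacted gate pitch, MMP = minimum metal pitch. Ballpark figures only.
nodes = {
    "TSMC 20nm":         (90, 64),
    "TSMC 16nm (16FF+)": (90, 64),   # reuses the 20nm-class pitches
    "TSMC 10nm":         (66, 42),
    "TSMC 7nm":          (57, 40),
    "Intel 14nm":        (70, 52),
    "Intel 10nm":        (54, 36),
}

for name, (cpp, mmp) in nodes.items():
    print(f"{name:<18} CPP {cpp} nm, MMP {mmp} nm")
# TSMC 16nm keeps the 20nm pitches, and TSMC 7nm lands close to
# (slightly looser than) Intel's 10nm, which is the point above.
```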

Sources: https://en.wikichip.org/wiki/WikiChip Nodes 22 nm to 7 nm
https://www.semiconductors.org/clientuploads/Research_Technology/ITRS/2015/0_2015 ITRS 2.0 Executive Report (1).pdf pages 38, 48
 
It's branding more than science, at least what goes on the press releases.
 
After some hours reading about process nodes, I discovered that since 2013 each foundry has been defining its own process node. I concluded that TSMC is ready with its 7nm because its 7nm is simpler than Intel's 10nm. How did I get that idea? Well, according to the ITRS's official guidelines about the physical properties of transistors, the specs haven't been fulfilled by TSMC since its 16nm, which is more similar to its 20nm than to the official 16/14nm spec. Even its 12nm is more similar to its 20nm than to the official 16/14nm spec.

But TSMC hasn't been the only one cheating: Samsung's 10nm is actually 14nm according to the official specs, and Samsung's 14nm is actually more similar to its 20nm than to the official 16/14nm spec too. I feel so stupid for not knowing all this before; I really believed that each process node was the same for every foundry.

Sources: https://en.wikichip.org/wiki/WikiChip Nodes 20 nm to 7 nm
https://www.semiconductors.org/clientuploads/Research_Technology/ITRS/2015/0_2015 ITRS 2.0 Executive Report (1).pdf pages 38, 48
Except that was never the case, ever. For instance Intel's (new) 10nm isn't the same 10nm they demonstrated previously, IIRC it's less dense than they'd hoped for.
 
Except that was never the case, ever. For instance Intel's (new) 10nm isn't the same 10nm they demonstrated previously, IIRC it's less dense than they'd hoped for.
My comparison was with the old 10 nm, the one released in the i3-8121U. So now even Intel won't meet the official 10 nm spec, as it has always done before with previous nodes.
 
It's branding more than science, at least what goes on the press releases.

Branding still has a drastic effect though. Look how the delays messed with Intel's stock, even though their 10nm isn't all that different from the others', who are also experiencing setbacks. Since they didn't "brand" their 10nm as 7nm, it makes it look worse than the others.
 
My comparison was with the old 10 nm, the one released in the i3-8121U. So now even Intel won't meet the official 10 nm spec, as it has always done before with previous nodes.
a) "Released" is a strong word in this case.

b) Process nodes have never been the same across vendors. If so, they'd have to cooperate on R&D. Not that that's a bad idea (I'd say it's a great idea, frankly), but it's not happening.

c) It's been thrown around quite a lot that Intel 10nm (at least the "old" one) was comparable in most metrics to the upcoming 7nm nodes from other fabs. Intel is generally seen as conservative in the naming of their nodes.

d) Node naming is pure marketing for all vendors, Intel as much as anyone else. It's been a long time since node name = feature size, if that ever was the case. The standards set by ITRS seem to be viewed as guidelines at best.
 
This is going to be a turning point. If the 20 series is successful and people pay up, it will likely mark the end of any effort AMD will make to compete in the high-end mainstream PC market. Should they come up with something better and much cheaper, they'll be at a disadvantage because they'll have much lower margins on their products. And if they want to have the same margins, then they'll have to ask the same prices; either way, the consumer will be screwed.

You reap what you sow, dear consumer. Anyone can ask whatever amount of money they want, but that price becomes commonplace only when it is accepted on a large scale. Nvidia keeps raising the bar; are people going to accept it? That's all it comes down to.

Nope, I have sworn I won't ever buy Nvidia or Intel again. Sold my dual Xeon 2690 v3 and dual 1080 Ti, and got myself a 32-core Epyc and two water-cooled Radeon Frontiers. Screw Nvidia and screw Intel. They won't ever get my money again. So I will go with whatever AMD brings to the table; for me, Intel and Nvidia have ceased to exist.
 
On the CPU front, if AMD wants games to be optimized for 8+ cores, they have to supply 8-core Zen to the console market; as for the GPU, Navi will be their bet.
These days only a few games utilize a full 16 threads.
 
They may not be able to do it, but I have no doubt that people at both companies are trying to create new coins that will cause another crypto boom so they can sell out their products.

LOL, what?! You clearly have no understanding of cryptocurrencies. AMD can't just "Make another Ethereum or Monero." Furthermore, they don't need to because mining is still profitable for the right people.

6144 shaders at 1700 MHz more likely.

That's my guess at this point too. Slightly higher clocks, but more cores. Vega is actually every bit as efficient as Pascal if you don't ramp up the clock speeds - just look at AMD's APUs.

I really hope they launch this card (even a cut-down version) for gamers this year. 50% more TFLOPS and double the bandwidth would make this capable of 4K @ 144 Hz.
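For what it's worth, the raw math on that guess (the shader count and clock are the speculation quoted above, not confirmed specs):

```python
# FP32 throughput = shaders * 2 (FMA) * clock in GHz, giving GFLOPS; /1000 for TFLOPS.
vega64_tflops = 4096 * 2 * 1.545 / 1000  # stock Vega 64 boost clock
vega20_tflops = 6144 * 2 * 1.700 / 1000  # speculated 6144 shaders at 1700 MHz

print(f"Vega 64:            {vega64_tflops:.1f} TFLOPS")
print(f"Speculated Vega 20: {vega20_tflops:.1f} TFLOPS "
      f"({vega20_tflops / vega64_tflops - 1:+.0%} vs Vega 64)")
# ~12.7 vs ~20.9 TFLOPS, i.e. roughly +65%, with bandwidth roughly doubling
# if it really gets 4 HBM2 stacks.
```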
 