
Told you... 7nm Vega

Joined
Apr 17, 2017
Messages
401 (0.13/day)
Processor AMD Ryzen 1700X 8-Core 3.4GHz
Motherboard ASRock X370 Taichi
Cooling Thermalright Le Grand Macho RT, 2x front Aerocool DS 140mm fans, 1x rear Aerocool DS 140mm fan
Memory 16GB G.Skill Flare X DDR4 3200MHz
Video Card(s) PowerColor Red Devil Vega 64
Storage Samsung 960 Evo 500GB NVMe SSD - Boot drive, Samsung 850 Evo 1TB SSD - Main storage
Display(s) Acer XG270HU Red 27" 1ms 144Hz WQHD 2K FreeSync Gaming Monitor
Case Fractal Design Define R6 USB C-Type Blackout Edition (Non TG)
Audio Device(s) Sennheiser GSP 301 Headset, Sennheiser Game Zero Special Edition Headset, Logitech Z623 System
Power Supply Seasonic AirTouch 850W 80 Plus Gold
Mouse SteelSeries Rival
Keyboard Razer Ornata Chroma
Software Windows 10 Professional 64bit
I was laughed at and even mocked when I said that AMD would release a 7nm Vega... you guys aren't laughing now ;)
 
Isn't releasing old architectures on new nodes kind of the industry standard? Unless you want to change the node AND the architecture at the same time for some extra risk.
 
I think they mean a gaming version? Because nobody really thought they would release a gaming version of Vega 20.
Yup, AMD never hinted at a Vega 20 consumer card. They were consistently adamant it's an HPC card. I suspect the RTX launch changed AMD's mind, and they were able to keep that change of plans from leaking.
 
Yes, you wanted a <$550 card.
 
Golf clap...good for you... though you said before 2019. :p

Also, will this best a 1080 Ti as was said? Congrats for getting a random piece of info right regardless. Now let me get out of the thread, as there isn't room due to a large head parroting "I told you so" inside. :P
 
I was laughed at and even mocked when I said that AMD would release a 7nm Vega... you guys aren't laughing now ;)
lol, you are right, I was gonna say I need to eat humble pie.

The performance/price of this card is another story though. I think everyone expected a full 4096sp die and 2GHz boost clocks right out of the box; the transition to 7nm is as big as 28nm to 14nm, and they were able to achieve a huge clock improvement over Fury X with the Vega cards. I guess we've seen the absolute limit of GCN cards with the R7.

My point kinda stands though. I said they were never gonna release a gaming Vega 7 because they weren't gonna profit from selling such a card to gamers, and it seems like they won't, because the card costs its MSRP to produce.
 
I hereby humbly concede that point :)

With one caveat: we're looking at an almost identical die with 16GB of HBM. It's a bit of a stretch to call it a 'gaming GPU'.
 
I hereby humbly concede that point :)

With one caveat: we're looking at an almost identical die with 16GB of HBM. It's a bit of a stretch to call it a 'gaming GPU'.

Multirole GPU.
 
Who is this person saying "told you so" and why should we care?
 
Who is this person saying "told you so" and why should we care?

He could have added it to another thread here, but no worries.
 
I think it's more interesting just knowing what 7nm is capable of. All things considered, AMD dropped 4 CUs, upped the clocks, and gained around 30% more performance from just a die shrink and some extra memory bandwidth.

Which makes me wonder, with 7nm and Turing, just what we can expect if NVIDIA did a die shrink. We would be looking at roughly a 30% performance boost while seeing the die size drop considerably, making a 2080 Ti-caliber part a great deal cheaper.

Considering it's about a 35% reduction in die size, that would mean a 2080 Ti would go from 754 mm2 to roughly 490 mm2, which is 50 mm2 or so smaller than an RTX 2080. Even with 7nm costing more, that could mean 2080 Ti performance at $750 rather than $1,000. So that's a huge die saving plus a lower cost; add in higher clock speeds and it looks pretty damn attractive.
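Rough sketch of that math; the 0.65 area scaling factor is my own assumption (roughly what the Vega 10 to Vega 20 shrink showed, 331 / 495 ≈ 0.67), and the prices are just speculation scaled off today's cards, not anything NVIDIA has said:

```python
# Back-of-the-envelope die-shrink math. The 0.65 area scaling factor is an
# assumption (roughly the Vega 10 -> Vega 20 ratio, 331 / 495 ~= 0.67).
SCALE_TO_7NM = 0.65

tu102_area = 754   # RTX 2080 Ti die size, mm2
tu104_area = 545   # RTX 2080 die size, mm2

shrunk = tu102_area * SCALE_TO_7NM
print(f"Hypothetical 7nm '2080 Ti' die: {shrunk:.0f} mm2")               # ~490 mm2
print(f"Delta vs today's RTX 2080:      {tu104_area - shrunk:.0f} mm2")  # ~55 mm2 smaller
```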

That said, monolithic GPUs/CPUs are dead; AMD needs chiplet designs for GPUs, and ASAP. If you look at performance scaling between AMD's parts, a GPU with a die size on par with Turing, even at lower clocks and while not power efficient, would likely rival or exceed NVIDIA's current performance while dominating in compute workloads. They just lack the newer features. As such, Navi can't come soon enough.

Still, looking at AMD's graphics pipeline, I think we can all agree Radeon VII is just a stopgap for now. With Microsoft using DirectX on their console, the next-gen console will likely feature DXR to some extent, even if it's only used for shadows etc. As such, AMD will likely reach feature parity with NVIDIA with their next architecture.

So while "told you so, it's gonna be such and such" is a nice feather in the cap, considering how easy it is to judge AMD's performance generation to generation, it's not much of an achievement. I predicted AMD's GPU performance for the last 4-5 generations with ease. Due to GCN and how they arrange the shaders and ROPs, determining performance is simple to figure out. Just as the original Fury was simple and easy to judge performance-wise, Vega, which followed, was as well. The new Radeon VII is no different. I still remember how many people thought I was nuts when Fury came out and I said it was just 2x Tonga GPUs, yet when it released that's exactly what we got: Tonga-style GCN x2, boom, I predicted it, lol. It's honestly pathetic how easy AMD's GCN performance has been to predict. Probably why NVIDIA can sandbag so much. Not like it takes a rocket scientist to figure out what they will do, after all.
 
If it's more powerful than my Vega 64, I will be happy :)
 
Well, people have no problem paying $900 for 1080 Ti performance. So why not pay $700 instead!!
 
That said, monolithic GPUs/CPUs are dead; AMD needs chiplet designs for GPUs, and ASAP.

No, I don't think they do. Chiplets would make sense for the highest-end possible GPUs, where you literally cannot fit more shaders on a single die; in other words, this would make sense in the segment AMD is no longer looking to compete in. 300-400 mm^2 monolithic Navi GPUs are the best we'll see from AMD for a long period of time. For the Instinct line-up, yeah, I think we'll see this at some point.
 
I think they mean a gaming version? Because nobody really thought they would release a gaming version of Vega 20.
I'm pretty sure AMD themselves were convinced a consumer Vega 20 wouldn't be feasible (like Volta before it). But then they saw what Nvidia did with Turing, and at that point you'd have to be pretty stupid to sit this one out.

Well, people have no problem paying $900 for 1080 Ti performance. So why not pay $700 instead!!
And which card would that be? Because there's no card that offers 1080 Ti performance; the 1080 Ti sits halfway between the 2070 and the 2080, neither of which sells for $900 (save for overpriced models).
 
No, I don't think they do. Chiplets would make sense for the highest-end possible GPUs, where you literally cannot fit more shaders on a single die; in other words, this would make sense in the segment AMD is no longer looking to compete in. 300-400 mm^2 monolithic Navi GPUs are the best we'll see from AMD for a long period of time. For the Instinct line-up, yeah, I think we'll see this at some point.

You don't seem to understand that even on smaller GPUs it would be cheaper as well, due to the increase in functional dies. 7nm costs about 2x as much as 12/14nm, or whatever they wish to call it. Smaller dies = fewer defective dies and more dies per wafer, and using the same chiplet-style package as Ryzen also means future additions, like say NVIDIA's tensor cores, can be done on the side without impacting overall yields.
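To put rough numbers behind the "fewer defective dies and more dies per wafer" point, here's a minimal sketch using the common dies-per-wafer approximation and a simple Poisson yield model. The defect density is a made-up illustrative value, not foundry data:

```python
import math

WAFER_DIAMETER_MM = 300
DEFECT_DENSITY = 0.2   # defects per cm2 -- assumed for illustration only

def dies_per_wafer(die_area_mm2):
    """Common approximation: usable wafer area minus an edge-loss term."""
    d = WAFER_DIAMETER_MM
    return math.pi * (d / 2) ** 2 / die_area_mm2 - math.pi * d / math.sqrt(2 * die_area_mm2)

def poisson_yield(die_area_mm2):
    """Probability a die lands with zero defects."""
    return math.exp(-(die_area_mm2 / 100.0) * DEFECT_DENSITY)

for area in (331, 100, 50):   # monolithic Vega 20 vs. two hypothetical chiplet sizes
    good = dies_per_wafer(area) * poisson_yield(area)
    print(f"{area:3d} mm2 die: ~{good:4.0f} good dies per 300 mm wafer")
```

The absolute numbers are only as good as the assumed defect density, but the trend is the whole argument: small dies come out way ahead in good dies per wafer.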

This means AMD has two chips to build: an I/O die / interposer and a GPU chiplet. An entry-level GPU could be 2 chiplets in a package, scaling up to 3, then 4, etc., just like they do with Ryzen / Threadripper / Epyc, and now Rome with 8 chiplets.

All one has to do is open their eyes. AMD's GCN can in many ways already operate as chiplets; just look at how their GPUs are arranged, with the various CU counts / memory bus widths etc.

Tonga x2 = Fury, for instance. In many cases AMD has already moved towards a single GPU design and then just doubled it to get the next segment.

Example: 7750/7770 vs 7850/7870 etc. The odd one out was actually the 7900 series back then, which didn't follow the concept.

Right now, using chiplet designs would actually free up their R&D; they already have a proper interposer and have figured out separate I/O dies.

With 7nm, a chiplet-based GPU would likely lose about 25% of the overall die space to an I/O chiplet; however, looking at wafer size, the number of defective dies, etc., it's actually more cost-effective for AMD. It's inevitable that this is the design they will go with. Since APUs still represent the bottom of the stack and most consumer CPUs pack an IGP, they can basically make every GPU in their lineup out of a single GPU chiplet design.

Face facts: if AMD is ever to support DXR, the only way they can do it is with a chiplet design, because they do not have enough market penetration to absorb the cost of massive dies. AMD will compete with NVIDIA's monolithic designs because the market for it is there and it's much more profitable. Or do you expect AMD's graphics division to limp along in the GPU segment like they did through the Phenom II / Bulldozer / Excavator era? As it is, the only thing helping prop them up is GPU contracts for consoles. Chiplets make sense: they're scalable to meet demand, they fit in well with their custom SoC offerings, and they leverage their strengths. Because everyone is competing for fab time, smaller chiplets on an interposer mean they can meet demand as necessary, be it for entry-level offerings or HPC offerings.

It also means future consoles can see iterative upgrades like the Xbox One X and PS4 Pro. As development continues and refinements are made, they can increase performance with a half-generation refresh by simply adding another chiplet or two to the design. This also helps bypass the issue of waiting for the next node.

Going by die shrink alone, a 28nm part at roughly 130 mm2 gives you:
Shaders 640 / TMUs 40 / ROPs 16

At 14nm that would drop to about 75 mm2.

At 7nm that's a further 35% reduction, dropping it to 49 mm2 for a 640 / 40 / 16 chiplet with its own I/O.

Meaning 4x chiplets would be 2560 / 160 / 64 at 200 mm2. Considering a Ryzen chiplet is around 80 mm2, we can figure AMD could likely push out a chiplet design with 1280 shaders / 80 TMUs / 32 ROPs.

This means a chiplet design would be equivalent to about a 400 mm2 monolithic die and deliver 5120 shaders / 320 TMUs / 128 ROPs. Granted, this is based on older GCN designs; however, considering GCN performance hasn't changed much per generation, it holds value as a worthwhile comparison.
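Restating that scaling chain as a quick sketch. The node scaling factors are assumptions picked to match the figures above (not foundry data), and it ignores the I/O die and interconnect overhead entirely:

```python
# Scaling chain from the post: a Cape Verde-class 28nm GCN block shrunk twice,
# then tiled as chiplets. The 0.58 and 0.65 factors are assumptions chosen to
# match the figures above; real designs won't scale perfectly linearly.
BASE = {"area_mm2": 130.0, "shaders": 640, "tmus": 40, "rops": 16}   # ~28nm

area_14nm = BASE["area_mm2"] * 0.58   # -> ~75 mm2
area_7nm = area_14nm * 0.65           # -> ~49 mm2

def package(n_chiplets):
    """Naive linear tiling of n 7nm chiplets (no I/O die counted)."""
    return {
        "area_mm2": round(area_7nm * n_chiplets),
        "shaders": BASE["shaders"] * n_chiplets,
        "tmus": BASE["tmus"] * n_chiplets,
        "rops": BASE["rops"] * n_chiplets,
    }

for n in (2, 4, 8):
    print(n, "chiplets ->", package(n))
# 8 chiplets -> ~390 mm2 of GPU silicon, 5120 shaders / 320 TMUs / 128 ROPs
```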

AMD could in theory use 8 smaller chiplets or 4 larger chiplets to achieve their goal. The likelihood of getting, say, 3 good chiplets out of 4 versus a single good monolithic die is entirely in their favor. It also allows for a 1280-shader entry level, a 2560-shader mid range, a 3840-shader high end and a 5120-shader extreme range of GPUs. Obviously they could sacrifice some die space for DXR tech; however, the comparison remains valid.

A chiplet GPU to cover their entire product range is the only way forward. It would cover all market segments and scale up or down depending on client need, result in lower power draw vs monolithic designs, and would likely improve AMD's performance per watt. For instance, looking at Intel vs AMD for the wattage difference, you can probably estimate that AMD's chiplet design would save around 20% in terms of power usage. This would drop the 300W TDP of the Radeon VII down to 240W. If you're keeping track, that works out to about 100W per 1280 shaders; take 20% off that and it's about 80W. Meaning at a 320W TDP they could push their shader count up to around 5000, with the ability to push TMU and ROP counts higher as well. Obviously that's still quite bad, however. At 3840 shaders and its current design it would still save about 60 watts. Obviously the I/O die consumes power, but you could estimate maybe 20 watts for that. A chiplet design would end up cheaper, more power efficient, and would cost less in R&D going forward.
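The power math in that paragraph, restated as a sketch. The 20% chiplet saving and the ~20W I/O-die figure are the post's own assumptions, not measurements:

```python
# Power estimate from the paragraph above. All inputs are assumptions from the
# post itself: 300 W for 3840 shaders, a 20% chiplet saving, ~20 W for I/O.
RADEON_VII_TDP = 300.0                # W, monolithic, 3840 shaders
per_block = RADEON_VII_TDP / 3        # ~100 W per 1280-shader block
chiplet_block = per_block * 0.80      # assumed 20% saving -> ~80 W
io_die = 20.0                         # rough guess for the I/O die

print("3840-shader chiplet GPU:", round(3 * chiplet_block), "W (vs 300 W monolithic)")
print("5120-shader chiplet GPU:", round(4 * chiplet_block), "W")
print("with the I/O die, the 5120-shader part lands around", round(4 * chiplet_block + io_die), "W")
```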

Obviously the above is just speculation based on current die sizes, GCN features that have carried forward, etc. However, it still remains a valid comparison. From a pure performance standpoint, a chiplet GPU makes far more sense at all market segments. GPUs will not be getting cheaper; in fact, with each new process node we can expect performance and price to rapidly increase, meaning the prices we see now will not change. Chiplet designs, however, would offset that somewhat.
 
I think it's more interesting just knowing what 7nm is capable of. All things considered, AMD dropped 4 CUs, upped the clocks, and gained around 30% more performance from just a die shrink and some extra memory bandwidth.

Which makes me wonder, with 7nm and Turing, just what we can expect if NVIDIA did a die shrink. We would be looking at roughly a 30% performance boost while seeing the die size drop considerably, making a 2080 Ti-caliber part a great deal cheaper.

Considering it's about a 35% reduction in die size, that would mean a 2080 Ti would go from 754 mm2 to roughly 490 mm2, which is 50 mm2 or so smaller than an RTX 2080. Even with 7nm costing more, that could mean 2080 Ti performance at $750 rather than $1,000. So that's a huge die saving plus a lower cost; add in higher clock speeds and it looks pretty damn attractive.

That said, monolithic GPUs/CPUs are dead; AMD needs chiplet designs for GPUs, and ASAP. If you look at performance scaling between AMD's parts, a GPU with a die size on par with Turing, even at lower clocks and while not power efficient, would likely rival or exceed NVIDIA's current performance while dominating in compute workloads. They just lack the newer features. As such, Navi can't come soon enough.

Still, looking at AMD's graphics pipeline, I think we can all agree Radeon VII is just a stopgap for now. With Microsoft using DirectX on their console, the next-gen console will likely feature DXR to some extent, even if it's only used for shadows etc. As such, AMD will likely reach feature parity with NVIDIA with their next architecture.

So while "told you so, it's gonna be such and such" is a nice feather in the cap, considering how easy it is to judge AMD's performance generation to generation, it's not much of an achievement. I predicted AMD's GPU performance for the last 4-5 generations with ease. Due to GCN and how they arrange the shaders and ROPs, determining performance is simple to figure out. Just as the original Fury was simple and easy to judge performance-wise, Vega, which followed, was as well. The new Radeon VII is no different. I still remember how many people thought I was nuts when Fury came out and I said it was just 2x Tonga GPUs, yet when it released that's exactly what we got: Tonga-style GCN x2, boom, I predicted it, lol. It's honestly pathetic how easy AMD's GCN performance has been to predict. Probably why NVIDIA can sandbag so much. Not like it takes a rocket scientist to figure out what they will do, after all.
I'm not sure it would be cheaper. AMD reduced Vega from 495 mm2 to 331 mm2 to hit that $699, and I think NVIDIA are going to be stuck with monolithic dies a while yet. They will also increase complexity, not keep it the same: more RT cores for sure, and more die space, plus (I admit rather cynically) I think by that time there will be a GDDR6 shortage.
NVIDIA announced research into chiplets last year; AMD started 5 years ago, and they also haven't shown the next-gen memory tie-in with post-GCN Radeon yet.
I agree on all your other points though; I've been a chiplet fan a while :)
 
I'm not sure it would be cheaper. AMD reduced Vega from 495 mm2 to 331 mm2 to hit that $699, and I think NVIDIA are going to be stuck with monolithic dies a while yet. They will also increase complexity, not keep it the same: more RT cores for sure, and more die space, plus (I admit rather cynically) I think by that time there will be a GDDR6 shortage.
NVIDIA announced research into chiplets last year; AMD started 5 years ago, and they also haven't shown the next-gen memory tie-in with post-GCN Radeon yet.
I agree on all your other points though; I've been a chiplet fan a while :)
Yes, but keep in mind they are still using 16GB of HBM2, and the other issue is economy of scale. AMD's market share and market reach for their products have essentially dried up. Radeon VII is not going to sell like hot cakes. Therefore, with limited runs and likely a larger push towards HPC, the price will remain high. That's just economy of scale for you. Thus chiplets are their only option.

As I detailed further above, at 7nm AMD can make a 1280-shader GCN part for around 50 mm2.
 
And which card would that be? Because there's no card that offers 1080 Ti performance; the 1080 Ti sits halfway between the 2070 and the 2080, neither of which sells for $900 (save for overpriced models).
Well, the RTX 2070 costs as much as or more than the GTX 1080 Ti while performing worse. And the RTX 2080 performs slightly better while costing $150 to $250 more than the 1080 Ti.
 
Yes, but keep in mind they are still using 16GB of HBM2, and the other issue is economy of scale. AMD's market share and market reach for their products have essentially dried up. Radeon VII is not going to sell like hot cakes. Therefore, with limited runs and likely a larger push towards HPC, the price will remain high. That's just economy of scale for you. Thus chiplets are their only option.

As I detailed further above, at 7nm AMD can make a 1280-shader GCN part for around 50 mm2.
Small dies have the best yields.
 