Wednesday, August 22nd 2018

AMD 7nm "Vega" by December, Not a Die-shrink of "Vega 10"

AMD is reportedly prioritizing its first 7 nanometer silicon fabrication allocations for two chips: "Rome" and "Vega 20." "Rome," as you'll recall, is the first CPU die based on the company's "Zen 2" architecture, and will build the company's 2nd generation EPYC enterprise processors. "Vega 20," on the other hand, could be the world's first 7 nm GPU.

"Vega 20" is not a mere die-shrink of the "Vega 10" GPU die to the 7 nm node. For starters, it is flanked by four HBM2 memory stacks, confirming it will feature a wider memory interface, and support for up to 32 GB of memory. AMD at its Computex event confirmed that "Vega 20" will build Radeon Instinct and Radeon Pro graphics cards, and that it has no plans to bring it to the client-segment. That distinction will be reserved for "Navi," which could only debut in 2019, if not later.
Source: VideoCardz

194 Comments on AMD 7nm "Vega" by December, Not a Die-shrink of "Vega 10"

#126
warrior420
Sounds good to me. For some reason people forget that AMD has to get this architecture into the hands of developers first, i.e. its Radeon Pro GPU line. Developers need to be able to build around it first and apply the new technologies to new software/hardware systems. And of course this is all for compatibility's sake. These things take time. The wait will be worth it. My Vega 64 is doing great for me :)
#127
JRMBelgium
I think all Vega users are very satisfied with their product. Most of them will upgrade to AMD again next year.
#128
londiste
Jelle Mees: I think all Vega users are very satisfied with their product. Most of them will upgrade to AMD again next year.
Are you sure?
130 W more power for around the same performance as a GTX 1080. Also, considering the custom cards came to market late, a lot of Vega cards are reference designs. That cooler is noisy as hell even compared to GeForce's reference blower.

Mining and GPGPU stuff is where Vega is awesome. But where gaming is concerned, Vega is simply outclassed.

Now that GTX 1080s and GTX 1070 Tis are around €50 cheaper than Vega 64 and Vega 56 respectively (at least in Europe), I honestly cannot see the appeal.

Turing is out and seems to have the usual 20-25% generational performance boost with a huge price increase. AMD might still reconsider bringing 7 nm Vega to the consumer space. We have not heard anything about the frequency potential, but 7 nm should bring a considerable uplift, so there is a real chance :)
#129
JRMBelgium
londiste: Are you sure?
130 W more power for around the same performance as a GTX 1080. Also, considering the custom cards came to market late, a lot of Vega cards are reference designs. That cooler is noisy as hell even compared to GeForce's reference blower.

Mining and GPGPU stuff is where Vega is awesome. But where gaming is concerned, Vega is simply outclassed.

Now that GTX 1080s and GTX 1070 Tis are around €50 cheaper than Vega 64 and Vega 56 respectively (at least in Europe), I honestly cannot see the appeal.

Turing is out and seems to have the usual 20-25% generational performance boost with a huge price increase. AMD might still reconsider bringing 7 nm Vega to the consumer space. We have not heard anything about the frequency potential, but 7 nm should bring a considerable uplift, so there is a real chance :)
I have a Vega 56. Performance per watt is better than on a GTX 970 or 980 Ti. Why was there no drama when Nvidia had this performance per watt? Noise is about 3 dBA higher than a 1080 Ti. A good case can compensate for this.

Vega 64 is another story. But in all honesty, anyone who did a little bit of research before purchase knew that with a simple BIOS flash you get Vega 64 performance...

And normally, the AMD cards should not be more expensive; the mining hype makes Vega 56 more expensive... Vega outclassed? Not really. People just expected it to beat Nvidia, which is not realistic with AMD's budget.
#130
londiste
Jelle Mees: I have a Vega 56. Performance per watt is better than on a GTX 970 or 980 Ti. Why was there no drama when Nvidia had this performance per watt? Noise is about 3 dBA higher than a 1080 Ti. A good case can compensate for this.

Vega 64 is another story. But in all honesty, anyone who did a little bit of research before purchase knew that with a simple BIOS flash you get Vega 64 performance...

And normally, the AMD cards should not be more expensive; the mining hype makes Vega 56 more expensive... Vega outclassed? Not really. People just expected it to beat Nvidia, which is not realistic with AMD's budget.
This is a simple question of timeline:
2014: Maxwell - GTX980/GTX970
2016: Pascal - GTX1080
2017: Vega - Vega64/Vega56

The 1080 Ti is a good 25-30% faster than Vega 64, and 30-35% compared to Vega 56 (Edit: TPU reviews' performance summary says 43% faster than Vega 56 and 28% faster than Vega 64 at 1440p).
The 1080 Ti also uses less power than Vega 64, about 50 W less.

Mining hype was not and is not what makes Vega more expensive. AMD needs to retain some profit margin for Vega. GTX1080/GTX1070Ti are cheaper to produce.

Budget is not all that relevant here. All the GPUs are manufactured in the same foundry. Just look at the objective measurements and specs of Vega compared to GP102/GP104 and you see why it was expected to perform roughly at the level of 1080Ti.

Edit:
Specs. Homework is to figure out which GPU is which.
Die size: 314 mm² vs 486 mm² (vs 471 mm²)
Transistors: 7.2 b vs 12.5 b (vs 12 b)
TDP/Power: 180 W vs 295 W (vs 250 W)
RAM bandwidth: 320 GB/s vs 484 GB/s (vs 484 GB/s)
FP32 Compute: 8.2 TFLOPS vs 10.2 TFLOPS (vs 10.6 TFLOPS)

Edit2: 7 nm would be able to change the die-size situation as well as power and hopefully clocks, but we do not know how manufacturing prices compare. The last semi-confirmed accounts said 7 nm is still more expensive. More expensive is OK when dealing with high-price/high-margin scenarios like Instinct or Radeon Pro, but is prohibitive in getting it to consumers.
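For anyone doing the homework numerically, here is a minimal Python sketch that reduces the spec list above to raw-compute efficiency. The chip labels are deliberately anonymous so the homework stays intact, and all figures are copied straight from the post, not independently measured:

```python
# Reduce the spec list above to efficiency ratios.
# Figures are copied from the post; labels are anonymous on purpose.
specs = {
    "chip A": {"die_mm2": 314, "tdp_w": 180, "fp32_tflops": 8.2},
    "chip B": {"die_mm2": 486, "tdp_w": 295, "fp32_tflops": 10.2},
    "chip C": {"die_mm2": 471, "tdp_w": 250, "fp32_tflops": 10.6},
}

for name, s in specs.items():
    gflops_per_watt = s["fp32_tflops"] * 1000 / s["tdp_w"]
    gflops_per_mm2 = s["fp32_tflops"] * 1000 / s["die_mm2"]
    print(f"{name}: {gflops_per_watt:.1f} GFLOPS/W, {gflops_per_mm2:.1f} GFLOPS/mm^2")
```

Running it shows one chip delivering noticeably less theoretical compute per watt and per mm² than the other two, which is the point being made.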
#131
pkrol
cucker tarlson: AMD at its Computex event confirmed that "Vega 20" will build Radeon Instinct and Radeon Pro graphics cards, and that it has no plans to bring it to the client segment.

Enthusiast gamers are at the very end of their scope; forget about them prioritizing or innovating in that segment.
To be fair, why would they? NVIDIA is entrenched in the minds of enthusiasts. Unless they leapfrog NVIDIA there is no way they will have great sales. May as well focus on the business segment that only cares about ROI.
#132
londiste
pkrol: NVIDIA is entrenched in the minds of enthusiasts.
The hell it is. Come out with a better card (objectively or subjectively) and there is nothing entrenched. AMD has the RX 580 competing very successfully with the GTX 1060.
#133
cucker tarlson
Jelle Mees: I have a Vega 56. Performance per watt is better than on a GTX 970 or 980 Ti.
lol, no it isn't, not at the resolution you're playing; maybe slightly better at higher resolutions, but they're basically in the same league.

OC vs OC, a 980 Ti will be better.
BTW, look where the GTX 1070 is on 16 nm and GDDR5: 1.5x over a 14 nm HBM2 AMD card....
Jelle Mees: Why was there no drama when Nvidia had this performance per watt?
Do you understand the concept of time and node size? Nvidia was light-years ahead of AMD with Maxwell.

That forced AMD to use HBM on the Fury X; that's why it had a measly 4 GB of VRAM and cost $650, and even then it was a good 20% behind the 980 Ti in perf/W.
#134
Valantar
cucker tarlson: AMD at its Computex event confirmed that "Vega 20" will build Radeon Instinct and Radeon Pro graphics cards, and that it has no plans to bring it to the client segment.

Enthusiast gamers are at the very end of their scope; forget about them prioritizing or innovating in that segment.
While it's (obviously) disappointing that AMD has yet to respond to Nvidia's performance/power gains since Pascal, and competition is desperately needed in the consumer GPU space, what they're doing makes sense in terms of a) keeping AMD alive, and b) letting them bring a truly competitive product to market in time.

Now, to emphasize: this sucks for us end-users. It really sucks. I would much rather live in a world where this wasn't the situation. But it's been pretty clear since the launch of Vega that this is the planned way forward, which makes sense in terms of AMD only recently returning to profitability and thus having to prioritize heavily what they spend their R&D money on.

But here's how I see this: AMD has a compute-centric GPU architecture, which still beats Nvidia (at least Pascal) in certain perf/w and perf/$ metrics when it comes to compute. At the very least, they're far more competitive there than they are in perf/W for gaming (which again limits their ability to compete in the high end, where cards are either power or silicon area limited). They've decided to play to their strengths with the current arch, and pitch it as an alternative to the Quadros and Teslas of the world. Which, as it looks right now, they're having reasonable success with, even with the added challenge that the vast majority of enterprise compute software is written for CUDA. Their consistent focus on promoting open-source software and open standards for writing software has obviously helped this. The key here, though, is that Vega - as it stands today - is a decently compelling product for this type of workload.

Then there's the question of what they could have done to improve gaming performance, as this is obviously where Vega lags behind Nvidia the most. This is an extremely complicated question. According to AMD around launch time, the majority of the increase in transistor count between Polaris and Vega was spent on increasing clock speeds, which ... well, didn't really do all that much. Around 200 MHz (1400-ish to 1600-ish), or ~14%. It's pretty clear they'd struggle to go further here. Now, I've also seen postings about 4096 SPs being a "hard" limit of the GCN architecture for whatever reason. I can't back that up, but at least it would seem to make sense in light of there being no increase between the Fury X and Vega 64. So, the architecture might need significant reworking to accommodate a wider layout (though I can't find any confirmation that this is actually the case). They're not starved for memory bandwidth (given that the Vega 56 and 64 match or exceed Nvidia's best). So what can they improve without investing a massive amount of money into R&D? We know multi-chip GPUs aren't ready yet, so ... there doesn't seem to be much. They'll improve power draw and possibly clock speeds by moving to new process nodes, but that's about it.

In other words: it seems very likely that AMD needs an architecture update far bigger than anything we've seen since the launch of GCN. This is a wildly expensive and massively time-consuming endeavor. Also note that AMD has about 1/10 the resources of Nvidia (if that), and have until recently been preoccupied with reducing debt and returning to profitability, all while designing a from-scratch X86 architecture. In other words: they haven't had the resources to do this. Yet.

But things are starting to come together. Zen is extremely successful with consumers, and is looking like it will be the same in the enterprise market. Which will give AMD much-needed funds to increase R&D spending. Vega was highly competitive in compute when it launched, and still seems to do quite well, even if their market share is a fraction of Nvidia's. It's still bringing in some money. All the while, this situation has essentially forced AMD to abandon the high-end gaming market. Is this a nice decision? No, as a gamer, right now, I don't like it at all. But for the future of both gaming and competition in the GPU market in general, I think they're doing the right thing. Hold off today, so that they can compete tomorrow. Investing what little R&D money they had into putting some proverbial lipstick on Vega to sell to gamers (which likely still wouldn't have let them compete in the high end) would have been crazy expensive, but not given them much back, and gained them nothing in the long run. Yet another "it can keep up with 2nd-tier Nvidia, but at 50W more at $100 less" card wouldn't have gained AMD much of an increased user base, given Nvidia's mindshare advantage. But if prioritizing high-margin compute markets for Vega for now, and focusing on Zen for really making money allows them to produce a properly improved architecture in a year? That's the right way to go, even if it leaves me using my Fury X for a while longer than I had originally planned to.

Of course, it's entirely possible that the new arch will fall flat on its face. I don't think so, but it's possible. But it's far more certain that yet another limited-resource, short-term GCN refresh would be even worse.
#135
cucker tarlson
But do Zen sales improve RTG's R&D budget at all? They split into a separate branch under a separate name. Seems not. If they cared about enthusiast gamers they'd release a bigger Polaris on GDDR5X, a card that is gaming-oriented; that'd be sure to be better than Vega, which is not gaming-oriented at all. A 1.45x difference from going 2.1x the die size and HBM2? Are you kidding me? Nvidia hit a 2.05x performance increase from the 1060 to the 1080 Ti with a 2.3x die-size increase and GDDR5X. Polaris seems like a better gaming architecture than Vega, despite slightly lower clocks. Then they release a compute-oriented Vega and do blind tests for gamers using FreeSync as a bargaining chip. Are you effin serious........
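To put numbers on that complaint, here's a back-of-the-envelope sketch using only the ratios quoted in the post above (so the usual caveats about where those ratios came from apply); the card pairs are the ones the post implies:

```python
# Performance gained per unit of extra die area; 1.0 would be
# perfect linear scaling. Ratios are the ones quoted in the post.
pairs = {
    "RX 480 -> Vega 64":       {"perf_ratio": 1.45, "area_ratio": 2.1},
    "GTX 1060 -> GTX 1080 Ti": {"perf_ratio": 2.05, "area_ratio": 2.3},
}
for name, r in pairs.items():
    print(f"{name}: {r['perf_ratio'] / r['area_ratio']:.2f}")
```

That works out to roughly 0.69 for the AMD pair versus roughly 0.89 for the Nvidia pair, which is the scaling-efficiency gap being complained about.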

#136
FordGT90Concept
"I go fast!1!11!1!"
Lisa Su is in charge of AMD and RTG. She knows that RTG has fallen behind while AMD has catapulted ahead. She also knows that APUs sell well so having a good graphics core to attach to CPUs is important to AMD's CPU business.

I don't think Navi is GCN-based. I think it's a new architecture, which is why AMD has been quiet about everything other than Vega 20. Vega 20 will likely be AMD's last compute-oriented card for a long while. Navi is focused on consumers and consoles.
#137
cucker tarlson
FordGT90Concept: Lisa Su is in charge of AMD and RTG. She knows that RTG has fallen behind while AMD has catapulted ahead. She also knows that APUs sell well so having a good graphics core to attach to CPUs is important to AMD's CPU business.

I don't think Navi is GCN-based. I think it's a new architecture, which is why AMD has been quiet about everything other than Vega 20.
That's the reason AMD only focuses on gaming in the mid-range, console, and APU segments. Enthusiast gaming has not been their focus with Vega, and will not be as long as they continue using it; and given how well Vega does at compute tasks despite the gargantuan power draw, they seem unlikely to stop. They're refining it with 7 nm; it'll be much better than Vega 10, and that can only mean one thing: they plan to sell fewer Vegas at $600 to gamers and more Vegas at $1000+ for HPC.
#138
Valantar
cucker tarlson: But do Zen sales improve RTG's R&D budget at all? They split into a separate branch under a separate name. Seems not. If they cared about enthusiast gamers they'd release a bigger Polaris on GDDR5X, a card that is gaming-oriented; that'd be sure to be better than Vega, which is not gaming-oriented at all. A 1.45x difference from going 2.1x the die size and HBM2? Are you kidding me? Nvidia hit a 2.05x performance increase with a 2.3x die-size increase and GDDR5X. Polaris seems like a better gaming architecture than Vega, despite slightly lower clocks.
So you think AMD's board and CEO would just leave RTG to die without even trying to save it if it couldn't sustain itself? Yeah, that doesn't strike me as likely. The GPU and compute markets are too big to abandon when you're the 2nd-largest player in the market, even if that's a 2-player market. Not to mention that the separation of RTG and whatever the CPU-making division is called is still only an administrative division within AMD. Both affect AMD's success and finances, both put money into (or take money out of) what is ultimately the same piggy bank. If one division is struggling and needs heavy R&D investments, and one is doing well and doesn't need as much, it would be pretty damn stupid not to shuffle that money over.

And you're entirely right: RTG could have put out a cheaper-to-produce "big Polaris" with GDDR5X, which would likely have been a very compelling gaming card - but inherently inferior to Vega in the higher-margin enterprise segment (lack of RPM/FP16 support, no HBCC). Not to mention that - even with AMD's lego-like architectures - designing and getting this chip into production (including designing a GDDR5X controller for the first time, which would likely only see use in that one product line) would have been very expensive. Not even remotely as expensive as Vega or a new arch, but enough to make a serious dent - and thus push development of a new arch back even further. Short-term gains for long-term losses, or at least postponing long-term gains? Yeah, not the best strategy.
#139
cucker tarlson
Valantar: So you think AMD's board and CEO would just leave RTG to die without even trying to save it if it couldn't sustain itself? Yeah, that doesn't strike me as likely. The GPU and compute markets are too big to abandon when you're the 2nd-largest player in the market, even if that's a 2-player market. Not to mention that the separation of RTG and whatever the CPU-making division is called is still only an administrative division within AMD. Both affect AMD's success and finances, both put money into (or take money out of) what is ultimately the same piggy bank. If one division is struggling and needs heavy R&D investments, and one is doing well and doesn't need as much, it would be pretty damn stupid not to shuffle that money over.

And you're entirely right: RTG could have put out a cheaper-to-produce "big Polaris" with GDDR5X, which would likely have been a very compelling gaming card - but inherently inferior to Vega in the higher-margin enterprise segment (lack of RPM/FP16 support, no HBCC). Not to mention that - even with AMD's lego-like architectures - designing and getting this chip into production (including designing a GDDR5X controller for the first time, which would likely only see use in that one product line) would have been very expensive. Not even remotely as expensive as Vega or a new arch, but enough to make a serious dent - and thus push development of a new arch back even further. Short-term gains for long-term losses, or at least postponing long-term gains? Yeah, not the best strategy.
Sad, but true.
#140
FordGT90Concept
"I go fast!1!11!1!"
Vega was made at GloFo. I'm not sure why they did that, but it's likely the primary reason why Vega is relatively power-hungry compared to Pascal. Vega 20 not only has architectural tweaks, it is on a process that's half the size. Depending on a number of factors, it could give the RTX 2080 Ti a run for its money. We already know that Vega 10 was memory-starved, and Vega 20 remedies that by doubling the bandwidth. That change by itself likely makes it competitive with the GTX 1080 Ti/RTX 2080.
#141
londiste
Valantar: But here's how I see this: AMD has a compute-centric GPU architecture, which still beats Nvidia (at least Pascal) in certain perf/w and perf/$ metrics when it comes to compute. At the very least, they're far more competitive there than they are in perf/W for gaming (which again limits their ability to compete in the high end, where cards are either power or silicon area limited). They've decided to play to their strengths with the current arch, and pitch it as an alternative to the Quadros and Teslas of the world. Which, as it looks right now, they're having reasonable success with, even with the added challenge that the vast majority of enterprise compute software is written for CUDA. Their consistent focus on promoting open-source software and open standards for writing software has obviously helped this. The key here, though, is that Vega - as it stands today - is a decently compelling product for this type of workload.
It looks like Instinct is their main sales vehicle for Vega. AMD very cleverly sidestepped the challenges you list and found a niche: FP16 is their key to this. Nothing in even remotely the same price range does FP16 as well. It is useful for AI training, and they capitalize heavily on it.

Some leaks for 7 nm Vega have hinted at additional specialized compute units, similar to the tensor cores in Nvidia's Volta/Turing. These are suspected to be aimed at AI (training). That would actually make a lot of sense, especially with the roughly 2x transistor density of the 7 nm process, as well as not having to alter the base architecture and its limits (yet).
#142
ppn
We need AMD to release a 7 nm VEGA 64 with GDDR6, and that is all there is to it.
#143
Valantar
cucker tarlson: That's the reason AMD only focuses on gaming in the mid-range, console, and APU segments. Enthusiast gaming has not been their focus with Vega, and will not be as long as they continue using it; and given how well Vega does at compute tasks despite the gargantuan power draw, they seem unlikely to stop. They're refining it with 7 nm; it'll be much better than Vega 10, and that can only mean one thing: they plan to sell fewer Vegas at $600 to gamers and more Vegas at $1000+ for HPC.
Which is the "smart" thing for them to do. $600 Vegas for gamers don't make sense now anyhow, even if they were 7nm Vegas. Even if they gained 20% clock speed (unlikely) at the same power, they wouldn't be competitive with Turing, and AMD would have to keep selling their chips at lower margins than Nvidia in the consumer space (although less of a disadvantage given the massive die size of TU102 and TU104 with the RT cores).

If AMD/RTG can live out this lull by selling Polaris 10/20/30/whatever-minor-tweak for ever-lower prices at roughly performance-per-$ parity with Nvidia, while putting as much effort and money as possible into making their upcoming arch as good as possible, that's a far better solution than churning out half-assed attempts at putting lipstick on Polaris by spreading their limited R&D funds thinner.
FordGT90Concept: Vega was made at GloFo. I'm not sure why they did that, but it's likely the primary reason why Vega is relatively power-hungry compared to Pascal. Vega 20 not only has architectural tweaks, it is on a process that's half the size. Depending on a number of factors, it could give the RTX 2080 Ti a run for its money.
Doubtful. If the reports I've seen of 64 CUs being a hard limit in GCN are to be trusted, that just means a smaller die with higher clocks or less power (or both). If we can trust Nvidia's numbers somewhat (and ignore their faux-AA tensor core stuff), AMD would need a 50%+ performance increase to beat the 2080, let alone the Ti. That's not happening, even with a 14-to-7nm transition.

Also, the process isn't the key issue - the GTX 1050 and 1050Ti are made by GloFo on the same process as AMD, and roughly match the other Pascal cards for perf/W. This is mainly an arch issue, not a process issue.
ppn: We need AMD to release a 7 nm VEGA 64 with GDDR6, and that is all there is to it.
Why? The lower price of the RAM would likely be offset by designing a new chip with a new memory controller and going through the ~1/2-year process of getting it fabbed. Zero performance gain, at best a $100 price drop, and that's if AMD eats the entire design-to-silicon cost. Not to mention that total board power draw would increase, forcing them to lower clocks.
#144
cucker tarlson
FordGT90Concept: Vega was made at GloFo. I'm not sure why they did that, but it's likely the primary reason why Vega is relatively power-hungry compared to Pascal.
Excuses.... Polaris and the 1050 Ti are made at GloFo too, yet they don't have such massive power-consumption issues. Vega is power-hungry primarily because they just wanted more TFLOPS.
#145
Valantar
cucker tarlson: Excuses.... Polaris and the 1050 Ti are made at GloFo too, yet they don't have such massive power-consumption issues. Vega is power-hungry primarily because they just wanted more TFLOPS.
Yes and no. Vega is power hungry because it's a big die pushed as far as it will go in terms of clocks. If you scroll up to your own post #134, you'll see the Vega 64 and 56 straddling the RX 580 and 570 in terms of perf/W. Of course, with HBM, they should have had an advantage (of around 20-30W saved), but that's likely been eaten by pushing clocks even further. Still, Vega and Polaris have very, very similar perf/W overall. There's no reason to believe a "big Polaris" would have been noticeably more efficient - it would just have been cheaper.
#146
Adam Krazispeed
Fluffmeister: A 32 GB HBM2 7 nm chip is gonna be expensive; it's no surprise they are focusing on the HPC/Pro market where the money is. Volta already has large chunks of the market sewn up and Turing-based Quadros are going to reign supreme in the pro sector... they need 7 nm to up their competitiveness.
BUT BUT BUT????? What about yields on 7 nm? They can't be 100%, not even 60% functioning dies... not at 7 nm (cough, Intel, cough, 10 nm, cough). And two of the HBM stacks could be replaced with a larger die with 20-40% more ROPs and TMUs, and AMD could maybe compete against Inte... I mean NVIDIA... OOPS..... lol

The point is AMD GPUs need MORE ROPS. TMU counts are usually already high (Fury X, Vega 64), but the theoretical fill rates need to be doubled at the same base/boost clocks for an equivalent ROP/TMU count, say 64 ROPs and 256 TMUs.

The Fury X does 67 GP/s (gigapixels per second) and roughly 267 GT/s (gigatexels per second) at a 1050 MHz GPU clock (I forget the exact texel fill rate, but it's just an example!).

Now, VEGA 20 at 7 nm: NOT 64, but more like 96 ROPs. Remember this number!!! 96 ROPS....

At 64 ROPs the fill rates would only be slightly higher than the Fury X's, purely from the higher core/boost clocks. 64 ROPs at a 1 GHz GPU clock is 64 GP/s; with doubled per-ROP throughput it would be 64 × 2 × 1 GHz = 128 GP/s. That's what we need; then take it up to 96 ROPs and AMD would have a monster card.

MOST CARDS have much higher texel fill rates, but pixel rates need to be almost as high to push all the pixels, especially at 4K and up; 8K resolutions need raw pixel fill rate up in the hundreds, 200-300 GP/s or even higher. On a card like the Fury X, with the same 267 GT/s texture rate, if you could just double the pixel fill rate I guarantee the GPU would perform 100% better, especially at 4K and higher resolutions!!!
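For reference, the arithmetic behind those figures is just units × clock. A minimal sketch follows; the 96-ROP configuration with 384 TMUs at 1800 MHz is a made-up illustration of the post's wish, not a leaked spec:

```python
# Theoretical fill rates: units multiplied by core clock.
# Assumes the textbook peak of 1 pixel/ROP and 1 texel/TMU per clock;
# real sustained rates vary by workload.
def fill_rates(rops, tmus, clock_mhz):
    gpixels = rops * clock_mhz / 1000.0  # GPixels/s
    gtexels = tmus * clock_mhz / 1000.0  # GTexels/s
    return gpixels, gtexels

# Fury X: 64 ROPs, 256 TMUs @ 1050 MHz -> ~67.2 GP/s, ~268.8 GT/s,
# matching the figures cited in the post above.
print(fill_rates(64, 256, 1050))

# The post's hypothetical 96-ROP Vega 20; clock and TMU count are
# illustration values only.
print(fill_rates(96, 384, 1800))
```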
#147
FordGT90Concept
"I go fast!1!11!1!"
Valantar: Doubtful. If the reports I've seen of 64 CUs being a hard limit in GCN are to be trusted, that just means a smaller die with higher clocks or less power (or both).
Not a hard limit, a memory-chokepoint limit. People who mined with the card overclocked the memory and underclocked the core because there's not enough bandwidth to feed 64 CUs. The Fury X was starved too, and Vega 64 has no more bandwidth than the Fury X did.
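A minimal sketch of that starvation argument, using public spec-sheet numbers rather than anything from the thread (512 GB/s and ~8.6 TFLOPS for Fury X; 484 GB/s and ~12.7 TFLOPS for Vega 64 at rated boost):

```python
# Memory bandwidth available per unit of FP32 compute.
# A lower value means the shaders are more likely to wait on memory.
cards = {
    "Fury X":  {"bw_gbs": 512, "fp32_tflops": 8.6},
    "Vega 64": {"bw_gbs": 484, "fp32_tflops": 12.7},
}
for name, c in cards.items():
    gbs_per_tflop = c["bw_gbs"] / c["fp32_tflops"]
    print(f"{name}: {gbs_per_tflop:.1f} GB/s per TFLOP")
```

That comes out to roughly 60 GB/s per TFLOP for Fury X versus roughly 38 for Vega 64, i.e. Vega 64 has noticeably less bandwidth per unit of compute, which is consistent with the chokepoint claim.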
cucker tarlson: Excuses.... Polaris and the 1050 Ti are made at GloFo too, yet they don't have such massive power-consumption issues. Vega is power-hungry primarily because they just wanted more TFLOPS.
12 billion transistors versus, what? 3.3 billion and 5.7 billion? GloFo's 14 nm was better suited to smaller chips.
#148
cucker tarlson
Valantar: Yes and no. Vega is power hungry because it's a big die pushed as far as it will go in terms of clocks. If you scroll up to your own post #134, you'll see the Vega 64 and 56 straddling the RX 580 and 570 in terms of perf/W. Of course, with HBM, they should have had an advantage (of around 20-30W saved), but that's likely been eaten by pushing clocks even further. Still, Vega and Polaris have very, very similar perf/W overall. There's no reason to believe a "big Polaris" would have been noticeably more efficient - it would just have been cheaper.
That's because the RX 580 was just going overboard with clocks to gain anything over the 1060. When I said Polaris, I meant the RX 480.


However bad the situation is for those who buy xx80 Nvidia cards, it is still a lot better than for those who stick with AMD in that segment. You can question the pricing of the new cards and how useful RTX will be in the early-adoption days, but Nvidia has a new architecture out and makes the cards available to gamers instantly. Vega 7 nm will be out this year, but it will take a friggin' year for gamers to see one. I still think it's better to shoot rays with Nvidia than shoot yourself in the foot by waiting for AMD and getting disappointed again.
#149
jabbadap
FordGT90Concept: Lisa Su is in charge of AMD and RTG. She knows that RTG has fallen behind while AMD has catapulted ahead. She also knows that APUs sell well so having a good graphics core to attach to CPUs is important to AMD's CPU business.

I don't think Navi is GCN-based. I think it's a new architecture, which is why AMD has been quiet about everything other than Vega 20. Vega 20 will likely be AMD's last compute-oriented card for a long while. Navi is focused on consumers and consoles.
Not so sure about what Navi really is. It might be the last of GCN, or not. Rumors are that Navi is being made especially for Sony's next console, the PlayStation 5, which will have a Ryzen+Navi SoC or a separate Ryzen CPU + Navi dGPU.
#150
Valantar
Adam Krazispeed: (...)
If you're going to go that technical, please pay some attention to punctuation and presenting your argument. I normally understand that stuff, but I can't make heads nor tails of your post.
cucker tarlson: That's because the RX 580 was just going overboard with clocks to gain anything over the 1060. When I said Polaris, I meant the RX 480.
Yet the Vega 56 matches the RX 480 in perf/W - and isn't clocked as high as the 64. Again: Vega is pushed to its limit in terms of clocks, just like RX 5XX Polaris, and is thus very, very similar in terms of clock scaling and perf/W.
FordGT90Concept: Not a hard limit, a memory-chokepoint limit. People who mined with the card overclocked the memory and underclocked the core because there's not enough bandwidth to feed 64 CUs. The Fury X was starved too, and Vega 64 has no more bandwidth than the Fury X did.
Well, that begs the question of why AMD's arch needs so much more memory bandwidth than Nvidia's for the same performance. I'm not saying you're wrong, but I don't think it's that simple. Also, if memory bandwidth were the real limitation (and there's no architectural max limit on CUs), releasing 7 nm Vega 20 (with 4x HBM2) to consumers would make sense. Yet that's not happening. I'm choosing to interpret that as a sign, but you're of course welcome to disagree here. At the very least, it'll be interesting to see the CU count of Vega 20.