Friday, April 6th 2018

In Aftermath of NVIDIA GPP, ASUS Creates AREZ Brand for Radeon Graphics Cards

Graphics card manufacturers are gradually starting to align their gaming brands with NVIDIA to gain admission into the exclusive GeForce Partner Program (GPP). Although there isn't any official confirmation from NVIDIA's AIB partners, small but significant changes are becoming evident. The first example comes from Gigabyte's Aorus gaming line. Gigabyte currently offers the Gaming Box external graphics enclosure with a GeForce GTX 1070, GTX 1080, or a Radeon RX 580. Looking closely at the packaging, we can clearly see that the RX 580 box lacks the Aorus branding. Gigabyte isn't alone, though. MSI is apparently in favor of GPP too, as they have removed all their Radeon Gaming X models from their global website. Take the Radeon RX 580, for instance: only the models from the Armor lineup remain. Surprisingly, the US website still carries the Gaming X models.

The latest rumor suggests that ASUS is the third AIB partner to jump on the GPP bandwagon. The Taiwanese manufacturer is allegedly creating the AREZ brand to accommodate its Radeon products. The AREZ moniker probably alludes to the Ares series of dual-GPU graphics cards, which were historically built around AMD GPUs. If the rumor is true, the Strix, Dual, Phoenix, and Expedition Radeon models will fall under the new AREZ branding. ASUS might even go as far as dropping its own name from the AREZ models entirely.
Update 17/04/2018: ASUS has officially announced the 'AREZ' brand.
Source: VideoCardz

137 Comments on In Aftermath of NVIDIA GPP, ASUS Creates AREZ Brand for Radeon Graphics Cards

#76
bug
Captain_Tom: I agree that "The Polaris Decision" sucks for enthusiast PC gaming and definitely has created some problems for Radeon in PC gaming. But this decision did not happen in a vacuum.

The question wasn't "Should we abandon the Ultra High End?" The question was "Is it worth our limited resources to take the top spot for a 4th time in a row?" It actually seems like AMD made the right decision, too: their market share and revenue are up in the GPU department. Both went DOWN during the 290X era.



P.S. Don't worry, AMD will hit back hard for the top spot by the end of 2020 now that they have money again ;).
I'm pretty sure there was no decision like that. They just ended up with a design that simply didn't scale and had to pretend that's what they were aiming for all along.
#77
Xzibit
bug: Come on, it's not that hard, really. Nvidia wants a program where they would chip in on advertising (among other things). Of course they don't want AMD cards under the same umbrella; they'd be promoting their own competition if that happened. So I really don't think they asked for any specific line of products (it's not in the GPP parts that were "leaked" so far). Just a line for their own.
It's why I said Asus (and others) should have said: "You want a line for yourselves? Fine, we'll keep what we have for AMD and you're welcome to start promoting your dedicated line from scratch."

The real problem here is AMD's inability to compete at the high end for I don't remember how many years in a row. Everybody who praised them for abandoning the high end with Polaris and going mid-range only "because that's where the real money is", you now see how wrong you were. Because at this point AMD can't do anything about GPP but go cry on Kyle's (and others') shoulders.
Don't be so sure.
KitGuru: Last week when we spoke to our source, we heard that board partners were feeling the pressure with GPP. Nvidia currently has marketshare dominance, so AiBs heavily rely on the company's support not just for marketing dollars, but for steady GPU supply too. The second point that our source raised with us is that Nvidia wants exclusivity over the most notable brand each AiB has to offer, meaning GPP members need to bump AMD cards off to a lesser-known sub-brand.
#78
Captain_Tom
bug: I'm pretty sure there was no decision like that. They just ended up with a design that simply didn't scale and had to pretend that's what they were aiming for all along.
I am not sure what you are talking about.

1) Polaris was CLEARLY a midrange card from the start, and the Xbox One X shows that it was possible to scale it up at least a bit more if they wanted to.

2) Vega, on the other hand, was blatantly meant for mobile, compute, AI, and then gaming (in that order). Small Vega GPUs are more efficient than Pascal by a decent margin, and Vega competes with Volta in compute even though it probably costs half as much to make.
#79
evernessince
bug: I'm pretty sure there was no decision like that. They just ended up with a design that simply didn't scale and had to pretend that's what they were aiming for all along.
That's not how GPUs work. You don't "scale" them up. Just like Nvidia, you make one big chip and maybe one or two other SKUs, and then parts are binned based on defects and performance. Even the 1080 Ti is simply a "defective" Titan.

Captain Tom is correct in that Polaris had a small die size in mind from the start. If AMD had wanted a larger Polaris, they could have easily started with a larger die and deactivated defective cores as needed for lower-end SKUs.

The only thing that could change this is if AMD is able to use its Infinity Fabric with its GPUs. Only then would they be able to scale up their GPUs, and at that point Nvidia would truly be screwed, because MCMs are cheaper, have better yields, and you can easily scale them up.
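To put rough numbers on the yield point, here is a minimal sketch using the classic Poisson yield model; the defect density and the die areas are illustrative assumptions, not foundry or vendor figures.

```python
import math

# Poisson yield model: Y = exp(-D0 * A), with D0 in defects/cm^2.
# D0 = 0.2 and the die areas below are assumed, illustrative values.
def die_yield(area_mm2: float, d0_per_cm2: float = 0.2) -> float:
    """Probability that a die of the given area comes out defect-free."""
    return math.exp(-d0_per_cm2 * area_mm2 / 100.0)

for name, area in [("Polaris-class die (~230 mm^2)", 230.0),
                   ("Vega-class die (~480 mm^2)", 480.0)]:
    print(f"{name}: {die_yield(area):.1%} defect-free")
# Smaller dies yield disproportionately better, which is the whole appeal
# of stitching several of them together over an interconnect.
```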
#80
Captain_Tom
evernessince: The only thing that could change this is if AMD is able to use its Infinity Fabric with its GPUs. Only then would they be able to scale up their GPUs, and at that point Nvidia would truly be screwed, because MCMs are cheaper, have better yields, and you can easily scale them up.
That's exactly what Radeon is betting on for 2019: GCN 5.0 + Infinity Fabric. They got lucky with how good the 7970 and 290X turned out, but only if you compare them to garbage Kepler. Once Maxwell came out, it was over. There are only so many times you can "cheat" by using HBM.

Overall, AMD has always had their best products with smaller dies. It has been common knowledge forever that underclocked and undervolted AMD cards gain immensely in efficiency, and it keeps getting more obvious with every successive generation.

A GPU using Infinity Fabric to make a 7 nm 4x3096-SP mega card would be fantastic! Let's hope they pull it off before Nvidia launches the GTX 1280 for $1099 lol.
#81
bug
evernessince: That's not how GPUs work. You don't "scale" them up. Just like Nvidia, you make one big chip and maybe one or two other SKUs, and then parts are binned based on defects and performance. Even the 1080 Ti is simply a "defective" Titan.

Captain Tom is correct in that Polaris had a small die size in mind from the start. If AMD had wanted a larger Polaris, they could have easily started with a larger die and deactivated defective cores as needed for lower-end SKUs.

The only thing that could change this is if AMD is able to use its Infinity Fabric with its GPUs. Only then would they be able to scale up their GPUs, and at that point Nvidia would truly be screwed, because MCMs are cheaper, have better yields, and you can easily scale them up.
Of course they scale up. With frequency. Which Polaris can't handle well. But sorry I said mean words about your precious.
#82
Fujikoma
the54thvoid: In my workplace, we're not allowed to sell Pepsi from our Coke vending machine. It's not antitrust, it's cold, callous business.
I feel sorry for your workplace. I managed a bar in college and we sold Pepsi and Coke products alongside each other. Vendors had no say in the matter.
#83
bug
Fujikoma: I feel sorry for your workplace. I managed a bar in college and we sold Pepsi and Coke products alongside each other. Vendors had no say in the matter.
"Alongside" != "from the same vending machine". Surely you can make that distinction.
#84
Vayra86
oxidized: Wow, I was actually not serious, but if you put it this way: a gamer became "a person who plays (video) games" because that term once referred to enthusiasts; now it's used for all kinds of people on all kinds of platforms, which is very stupid.
/OT
Nah, back in the day they called those gamers 'nerds'; now it's the norm, so certain groups of 'gamers' prefer to label themselves as 'enthusiasts' so they can set themselves apart again :D
Captain_Tom: I am not sure what you are talking about.

1) Polaris was CLEARLY a midrange card from the start, and the Xbox One X shows that it was possible to scale it up at least a bit more if they wanted to.

2) Vega, on the other hand, was blatantly meant for mobile, compute, AI, and then gaming (in that order). Small Vega GPUs are more efficient than Pascal by a decent margin, and Vega competes with Volta in compute even though it probably costs half as much to make.
What @bug says is true. You need to make a distinction between what AMD says it wants to do and what the products become in reality. GCN was and still is far from efficient for gaming, and Tahiti > Hawaii had the architecture inching towards a cap in terms of board TDP, heat, and stability. This is why GCN still doesn't clock as high as Pascal does. Polaris was a good step forward, but it was simply not enough; that is why it was pushed as a midrange architecture, as going bigger with it would have run into the same constraints Hawaii did.

That is why Fury and Vega saw the light of day and why they needed HBM / more efficient memory. We know today that HBM does not extract greater gaming performance, and AMD missed the boat entirely with GDDR5X. Their timing was bad, and it was bad because they were out of options for further expanding GCN. If AMD should have stepped back from the high end, it was during the Fury release. That way they could have made bank against Pascal and would have only missed the answer to the 980 Ti. Now they miss out on almost two generations instead of one, and not just on the 1080 Ti but also the 1080, which is a large chunk of the marketplace with very good margins. They lost to Maxwell because it already provided the efficiency, and Pascal was another leap forward that even Vega isn't the right answer for, even after its countless delays and adjusted promises. Polaris brought Maxwell efficiency a year too late, and it sold because it had nice all-round performance at a decent price, not because it was a superb step forward. You don't 'win' battles in the midrange; you just move lots of units. AMD could have also just re-used Hawaii for that segment; they were already rebranding and kicking GPUs down a full tier anyway to keep the product stack filled.

So yes, today's reality is that we see most of the Vega GPUs land in Frontier and MI25 cards, but it's really not because AMD wanted that all along; it's because they are forced to, just to make SOME money out of it.

Navi is just another example of the desperate search for more GCN performance without breaking out of its limitations and power/heat budgets. They can't push more out of a single die, so they use multiple dies. While the idea is in essence similar to Ryzen, the comparison doesn't really convince me, because soon AMD will have three radically different types of cards to develop and support. Hardly efficient. They are also going back to the drawing board in a big way while their competitor keeps scaling and fine-tuning the same architecture, with the MCM option still in reserve after that.

We can always hope for the best, but AMD simply hasn't got the time to keep screwing up GPU anymore.
#85
springs113
evernessince: That's not how GPUs work. You don't "scale" them up. Just like Nvidia, you make one big chip and maybe one or two other SKUs, and then parts are binned based on defects and performance. Even the 1080 Ti is simply a "defective" Titan.

Captain Tom is correct in that Polaris had a small die size in mind from the start. If AMD had wanted a larger Polaris, they could have easily started with a larger die and deactivated defective cores as needed for lower-end SKUs.

The only thing that could change this is if AMD is able to use its Infinity Fabric with its GPUs. Only then would they be able to scale up their GPUs, and at that point Nvidia would truly be screwed, because MCMs are cheaper, have better yields, and you can easily scale them up.
I thought that's exactly what Navi is... a test ground for scalability. In essence, Ryzen 1? Correct me if I'm wrong; I thought that's what their roadmap stated. Either way, AMD is in a much better position than they were before Polaris. They set a goal and surpassed it, with Ryzen essentially taking them out of the red. The only thing I hate in the industry nowadays is the hush-hush, but I understand. I'm quite certain the CPU side was the easiest to tackle and make money on; now they are back in competition with much cheaper-to-produce products (a TR4 chip costs them ~$180 to produce). Intel can't touch that right now. For a company that's fighting two giants, AMD is doing quite well for themselves. People here complain about Nvidia's greedy practices yet still support them... smdh.

Ultimately, though, AMD's goal was to get scalability into its GPU sector... it's coming, and when it does, that's when the real fight starts. Vega, being a compute card first, is still very fast as a gaming card. I think Navi will be the start of the correct push for a gaming card.

Side note, though: it still feels like Vega is being held back from a driver and software implementation standpoint. Only time will tell.
#86
Vayra86
springs113: it's coming, and when it does, that's when the real fight starts. Vega, being a compute card first, is still very fast as a gaming card. I think Navi will be the start of the correct push for a gaming card.

Side note, though: it still feels like Vega is being held back from a driver and software implementation standpoint. Only time will tell.
Rewind to 2013, 2014, 2015, 2016... people said the same things about AMD GPUs, and look where we are today. It's always too little, too late. You need the optimized performance at release, not when the cards have turned old and people have stopped caring. The reason you keep seeing that pattern is that the architecture at its core simply is not efficient, and it never will be as long as AMD pushes it as a jack-of-all-trades. DX11 performance is a perfect example of that... they just never got it right, so they played the 'low level API' card, and only in DX12 and Vulkan do you see the performance you'd expect given the shaders under the hood.
#87
bug
Captain_Tom: I am not sure what you are talking about.

1) Polaris was CLEARLY a midrange card from the start, and the Xbox One X shows that it was possible to scale it up at least a bit more if they wanted to.

2) Vega, on the other hand, was blatantly meant for mobile, compute, AI, and then gaming (in that order). Small Vega GPUs are more efficient than Pascal by a decent margin, and Vega competes with Volta in compute even though it probably costs half as much to make.
Yes, because initially Vega was launched for mobile. Only then did we get the compute and AI cards, with gaming parts following much, much later.
It's really funny (and worrisome at the same time) to watch people making stuff up and rewriting history rather than admit they're sometimes wrong.

For bonus points, here's how Vega competes with Pascal IRL: www.phoronix.com/scan.php?page=article&item=12-opencl-98&num=1
#88
kruk
bug: Of course they scale up. With frequency. Which Polaris can't handle well. But sorry I said mean words about your precious.
Um, what? Polaris was never meant to be a high-clocked part. GCN (up to Vega) was always clocked around a GHz +/- 100 MHz. The optimum for Polaris was meant to be that GHz area at ~900 mV. They (probably) misjudged future Pascal performance when they started Polaris development and later pumped up the clocks/voltages trying to close the gap (a lot of clues point to this as the most likely scenario). This tanked the efficiency. If Polaris 10 had had more Stream Processors from the start, this probably would not have been necessary. We can only guess how the 40 CU part (Xbox One X) would perform on desktop, but IMO it would be in Pascal's efficiency range...
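For intuition, here is a minimal sketch of the usual dynamic-power rule (P roughly proportional to C*V^2*f); the ~1.0 GHz / ~900 mV baseline mirrors the sweet spot described above, while the pushed clock/voltage pair is an assumed, illustrative shipping configuration rather than a measured one.

```python
# Dynamic power scales roughly as P ~ C * V^2 * f.
def relative_power(v_volts: float, f_mhz: float,
                   v0: float = 0.90, f0: float = 1000.0) -> float:
    """Dynamic power relative to the ~1.0 GHz / ~900 mV baseline."""
    return (v_volts / v0) ** 2 * (f_mhz / f0)

# Assumed pushed operating point: ~1266 MHz at ~1.15 V.
ratio = relative_power(1.15, 1266.0)
print(f"~{ratio:.2f}x the power for ~{1266.0 / 1000.0:.2f}x the clock")
```

Roughly double the power for about a quarter more clock, which is exactly how an efficient design ends up looking inefficient on the shelf.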

#89
evernessince
bug: Of course they scale up. With frequency. Which Polaris can't handle well. But sorry I said mean words about your precious.
Why can't you just admit you were wrong instead of adding details after the fact? You were proven wrong, and now you're like "oh, I meant frequency scaling, not scaling".

Regardless, frequency means nothing with the small die size, as I pointed out earlier.
Vayra86: Nah, back in the day they called those gamers 'nerds'; now it's the norm, so certain groups of 'gamers' prefer to label themselves as 'enthusiasts' so they can set themselves apart again :D

What @bug says is true. You need to make a distinction between what AMD says it wants to do and what the products become in reality. GCN was and still is far from efficient for gaming, and Tahiti > Hawaii had the architecture inching towards a cap in terms of board TDP, heat, and stability. This is why GCN still doesn't clock as high as Pascal does. Polaris was a good step forward, but it was simply not enough; that is why it was pushed as a midrange architecture, as going bigger with it would have run into the same constraints Hawaii did.

That is why Fury and Vega saw the light of day and why they needed HBM / more efficient memory. We know today that HBM does not extract greater gaming performance, and AMD missed the boat entirely with GDDR5X. Their timing was bad, and it was bad because they were out of options for further expanding GCN. If AMD should have stepped back from the high end, it was during the Fury release. That way they could have made bank against Pascal and would have only missed the answer to the 980 Ti. Now they miss out on almost two generations instead of one, and not just on the 1080 Ti but also the 1080, which is a large chunk of the marketplace with very good margins. They lost to Maxwell because it already provided the efficiency, and Pascal was another leap forward that even Vega isn't the right answer for, even after its countless delays and adjusted promises. Polaris brought Maxwell efficiency a year too late, and it sold because it had nice all-round performance at a decent price, not because it was a superb step forward. You don't 'win' battles in the midrange; you just move lots of units. AMD could have also just re-used Hawaii for that segment; they were already rebranding and kicking GPUs down a full tier anyway to keep the product stack filled.

So yes, today's reality is that we see most of the Vega GPUs land in Frontier and MI25 cards, but it's really not because AMD wanted that all along; it's because they are forced to, just to make SOME money out of it.

Navi is just another example of the desperate search for more GCN performance without breaking out of its limitations and power/heat budgets. They can't push more out of a single die, so they use multiple dies. While the idea is in essence similar to Ryzen, the comparison doesn't really convince me, because soon AMD will have three radically different types of cards to develop and support. Hardly efficient. They are also going back to the drawing board in a big way while their competitor keeps scaling and fine-tuning the same architecture, with the MCM option still in reserve after that.

We can always hope for the best, but AMD simply hasn't got the time to keep screwing up GPU anymore.
You must have missed the 15 W Vega laptop chips. Clearly Vega has excellent performance per watt when it's in its sweet spot. The same thing was found when people undervolted their Vega 64s: they were able to cut a third off the power consumption at the same clocks, putting it at around the same perf/watt as Pascal.
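A quick sanity check of that undervolting figure with the same P ~ V^2 rule used above (frequency held constant); the numbers are illustrative, not measurements.

```python
# Cutting a third of the power at fixed clocks implies roughly an 18%
# lower voltage, since dynamic power scales with V^2.
target_power_ratio = 2.0 / 3.0             # keep two thirds of stock power
voltage_ratio = target_power_ratio ** 0.5  # P ~ V^2 at constant frequency
print(f"needed voltage: {voltage_ratio:.2f}x stock "
      f"(~{1 - voltage_ratio:.0%} undervolt)")
```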

Nvidia scaling their architecture is great, but there is a limit to how big you can make a chip and how high you can push the frequency, as evidenced by Intel having trouble getting past the 5 GHz mark for some time now. Nvidia is very close to the reticle limit, and once you hit that, you literally cannot make the chip bigger.

You had better hope AMD keeps competing or else we all will be enjoying our $700 GTX 1160s.
springs113: I thought that's exactly what Navi is... a test ground for scalability. In essence, Ryzen 1? Correct me if I'm wrong; I thought that's what their roadmap stated. Either way, AMD is in a much better position than they were before Polaris. They set a goal and surpassed it, with Ryzen essentially taking them out of the red. The only thing I hate in the industry nowadays is the hush-hush, but I understand. I'm quite certain the CPU side was the easiest to tackle and make money on; now they are back in competition with much cheaper-to-produce products (a TR4 chip costs them ~$180 to produce). Intel can't touch that right now. For a company that's fighting two giants, AMD is doing quite well for themselves. People here complain about Nvidia's greedy practices yet still support them... smdh.

Ultimately, though, AMD's goal was to get scalability into its GPU sector... it's coming, and when it does, that's when the real fight starts. Vega, being a compute card first, is still very fast as a gaming card. I think Navi will be the start of the correct push for a gaming card.

Side note, though: it still feels like Vega is being held back from a driver and software implementation standpoint. Only time will tell.
That's what everyone is assuming, although there is no direct confirmation of it. I don't see any reason why those assumptions would be wrong, either. GPU dies are much bigger than CPU dies and thus benefit far more from MCM tech like Infinity Fabric, since yields decrease exponentially as die size increases. If AMD does release an MCM GPU, it will have very good yields, will be completely scalable, and will be cheaper. After all, both AMD and Nvidia still have to pay for defective dies on a wafer even if they aren't usable.
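As a hypothetical follow-up to the yield sketch earlier in the thread, here is what that does to the cost per good die; the wafer cost, wafer size, and defect density are assumed round numbers for illustration, not foundry pricing.

```python
import math

WAFER_COST = 6000.0                      # USD per 300 mm wafer, assumed
WAFER_AREA = math.pi * (300.0 / 2) ** 2  # mm^2, ignoring edge loss

def cost_per_good_die(area_mm2: float, d0_per_cm2: float = 0.2) -> float:
    gross_dies = WAFER_AREA / area_mm2                        # rough count
    good_dies = gross_dies * math.exp(-d0_per_cm2 * area_mm2 / 100.0)
    return WAFER_COST / good_dies

monolithic = cost_per_good_die(480.0)   # one big die
mcm = 4 * cost_per_good_die(480.0 / 4)  # same total silicon, four small dies
print(f"monolithic: ${monolithic:.0f} vs 4-die MCM: ${mcm:.0f} (+ packaging)")
```

Even before packaging costs, the defective silicon you pay for shrinks dramatically when the dies are small.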
Vayra86: Rewind to 2013, 2014, 2015, 2016... people said the same things about AMD GPUs, and look where we are today. It's always too little, too late. You need the optimized performance at release, not when the cards have turned old and people have stopped caring. The reason you keep seeing that pattern is that the architecture at its core simply is not efficient, and it never will be as long as AMD pushes it as a jack-of-all-trades. DX11 performance is a perfect example of that... they just never got it right, so they played the 'low level API' card, and only in DX12 and Vulkan do you see the performance you'd expect given the shaders under the hood.
Vega's DX11 performance is pretty good, especially when devs take advantage of primitive shaders. The problem in this case isn't the architecture; it's the geometry pipeline. Nvidia cards are simply able to render more polygons, that is, if the game doesn't use primitive shaders.
bug: Yes, because initially Vega was launched for mobile. Only then did we get the compute and AI cards, with gaming parts following much, much later.
It's really funny (and worrisome at the same time) to watch people making stuff up and rewriting history rather than admit they're sometimes wrong.

For bonus points, here's how Vega competes with Pascal IRL: www.phoronix.com/scan.php?page=article&item=12-opencl-98&num=1
Once again, completely wrong. Vega launched first on desktop, targeting professionals with the Frontier Edition first and then the gaming version. Mobile came last. It's ironic that in the same paragraph where you complain about people making stuff up, you are doing just that.

0/10 troll
#90
Ubersonic
eidairaman1: Sapphire is an AMD-exclusive brand, Zotac is the NV counterpart, like how XFX and EVGA are.
XFX used to be Nvidia-exclusive, but they made the mistake of branching out to make ATi cards, so Nvidia blocked them lol.
#91
Xzibit
Ubersonic: XFX used to be Nvidia-exclusive, but they made the mistake of branching out to make ATi cards, so Nvidia blocked them lol.
Fudzilla: XFX officially stops doing Nvidia
Fudzilla: XFX Europe have confirmed that they only want to associate themselves with AMD from this point onwards. The company will bring an outstanding array of AMD next-generation graphics cards to market in October and November, and it will compete aggressively against Fermi. Enough is enough, says XFX of Nvidia's recent action to de-authorise them from the approved partner list.

Many people are not aware of the fact that until last Friday, XFX was actually selling and supporting Nvidia cards, just not Fermi-based ones. The story is rather simple. Last year, XFX started selling ATI; Nvidia got mad and told them that they wouldn't get Fermi-based cards. XFX wanted to offer AMD boards, and many gamers actually saw this as a great move, because it could make some nice money and market share, of course.

Nvidia didn't cut XFX off completely, as the company continued to sell everything from Nvidia but Fermi. Over the summer, Nvidia launched the rest of the GeForce GTX 400 family, but they didn't give any to XFX.

XFX told us earlier that as of today, they officially stop selling any Nvidia cards. This is a direct response to Nvidia's decision to de-authorise XFX from the approved partner list. The decision to stop working with Nvidia doesn't surprise us: Nvidia simply didn't send any Fermi chips to XFX and was encouraging channel partners not to work with XFX, so XFX simply could not sell enough Nvidia cards.
Good thing GPP is nothing like that
HardOCP: NVIDIA will tell you that it is 100% up to its partner company to be part of GPP, and from the documents I have read, if it chooses not to be part of GPP, it will lose the benefits of GPP which include: high-effort engineering engagements -- early tech engagement -- launch partner status -- game bundling -- sales rebate programs -- social media and PR support -- marketing reports -- Marketing Development Funds (MDF). MDF is likely the standout in that list of lost benefits if the company is not a GPP partner.
Oh, nevermind...
#92
evernessince
Xzibit: Fudzilla: XFX officially stops doing Nvidia



Good thing GPP is nothing like that



Oh, nevermind...
Yeah, GPP seems to be stage one of forcing board partners to be Nvidia-only, or worse. If Nvidia can take AIB brands with no repercussions, what's stopping them from doing more?

Heck, Nvidia has a good enough online store that it could sell cards completely direct. The AIBs know this.

Now, I'm not saying Nvidia would get rid of AIBs altogether, but it will most certainly squeeze every last bit it can out of these guys, whether through more Nvidia-only cards or agreements to never take a dime of AMD marketing money.

If I've learned anything from the last 10 years of massive white collar crime, it's that companies will keep going until the law stops them.
#93
Ruru
S.T.A.R.S.
AREZ? I miss those ARES and MARS cards. AREZ ARES anyone? :D
#94
swirl09
As the current owner of a Strix card, this doesn't sit well with me at all.

This will affect my next purchase.

Not that I doubt there was pressure on those who opted to go along with it, but it doesn't change the fact that I don't like where this path goes.
#95
eidairaman1
The Exiled Airman
GCN was good in 2013; it's time to move on.
#96
evernessince
swirl09: As the current owner of a Strix card, this doesn't sit well with me at all.

This will affect my next purchase.

Not that I doubt there was pressure on those who opted to go along with it, but it doesn't change the fact that I don't like where this path goes.
I was personally about to buy an ASUS Crosshair VI Hero due to its dozen USB ports for VR. I decided to keep my ASRock board and just get a PCIe USB card instead.

That said, I wonder whether this program extends to ASUS laptops as well. Does this mean only ASUS laptops with Nvidia hardware will get the ROG branding? I guess we'll find out.
eidairaman1: GCN was good in 2013; it's time to move on.
That's what Navi should be for. If not, we're SOL.
#97
cdawall
where the hell are my stars
evernessince: That's what Navi should be for. If not, we're SOL.
It will end up being baby Vega glued together in bundles of 4-8 :roll:
#98
evernessince
cdawall: It will end up being baby Vega glued together in bundles of 4-8 :roll:
Well, I hope it's not just baby Vega (Navi should have improvements of its own, right?). If we assume AMD makes each die one third the size of Vega, that puts each die at ~170 mm², which is quite a bit smaller than the RX 480 and even Ryzen's 213 mm² die. Yields on such a die would be insanely high. It also places their basic 4-chip module (just like Zen) 33% above Vega 64 performance-wise, assuming no frequency improvements. What will likely happen, though, is that Navi brings some performance-per-watt improvements and AMD targets the sweet-spot frequency, whatever that may be for Navi. They will likely also sell lower-end products with a single die disabled. We know AMD could do an 8-die solution, but wouldn't that just be overkill? 266% of current Vega 64 performance? That's enough power for 8K gaming or a very powerful rendering chip. Just like Epyc, AMD may reserve those chips for professional use.
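Running the numbers from that paragraph as a quick sketch; the only inputs are Vega 10's ~486 mm² die and the assumption that performance scales linearly with die count, which a real MCM would not fully achieve.

```python
# Assumed: Vega 10 at ~486 mm^2; perfect linear scaling with die count.
VEGA_AREA_MM2 = 486.0
navi_die = VEGA_AREA_MM2 / 3              # ~162 mm^2, near the ~170 quoted
four_die = 4 * navi_die / VEGA_AREA_MM2   # 4-die module vs one Vega 10
eight_die = 8 * navi_die / VEGA_AREA_MM2  # hypothetical 8-die module
print(f"per die: {navi_die:.0f} mm^2 | 4-die: {four_die:.2f}x Vega | "
      f"8-die: {eight_die:.2f}x Vega")
```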

There's a reason Nvidia has also stated that MCM is the future and why they are researching it too. It's going to allow the continuation of Moore's law for a little while longer.
#99
Totally
evernessince: That's not how GPUs work. You don't "scale" them up. Just like Nvidia, you make one big chip and maybe one or two other SKUs, and then parts are binned based on defects and performance. Even the 1080 Ti is simply a "defective" Titan.

Captain Tom is correct in that Polaris had a small die size in mind from the start. If AMD had wanted a larger Polaris, they could have easily started with a larger die and deactivated defective cores as needed for lower-end SKUs.

The only thing that could change this is if AMD is able to use its Infinity Fabric with its GPUs. Only then would they be able to scale up their GPUs, and at that point Nvidia would truly be screwed, because MCMs are cheaper, have better yields, and you can easily scale them up.
Sorry to interject, but he is correct when he said that they were stuck with a chip that didn't scale. I don't know if you forgot or missed this key point of AMD's small-chip strategy. What they were shooting for was a small chip that, depending on the performance target, they could package several of on the same chip; at the time I imagined it as something like Crossfire on a chip, not too different from what they are doing with Ryzen. Then, when Polaris finally rolled around, it was too power-hungry for them to realize such an ambition, so they were technically stuck with a chip that didn't 'scale.' I'm certain that it is this effort that led to MCMs; it is just that they succeeded in implementing it on the CPU front first. The original article can be found somewhere on AnandTech: a piece about the Fury/7970 being the last monolithic die for AMD and their new focus being smaller dies.
#100
Vayra86
evernessince: Clearly Vega has excellent performance per watt when it's in its sweet spot. The same thing was found when people undervolted their Vega 64s: they were able to cut a third off the power consumption at the same clocks, putting it at around the same perf/watt as Pascal.

Nvidia scaling their architecture is great, but there is a limit to how big you can make a chip and how high you can push the frequency, as evidenced by Intel having trouble getting past the 5 GHz mark for some time now. Nvidia is very close to the reticle limit, and once you hit that, you literally cannot make the chip bigger.

You had better hope AMD keeps competing or else we all will be enjoying our $700 GTX 1160s.
- Vega's perf/watt is fine, but the overall performance per mm² of die area is not. So what we get is a high clock to extract performance equal to what the competitor can do with a smaller die, and at that high clock, Vega's perf/watt sucks. This is what happened with Vega 64, and a similar Vega 64 with GDDR5 would simply not have been possible. The bottom line is that Vega as a high-end part simply isn't enough, no matter how you twist or tweak it. Also, you handily omit the fact that the 'lower clocked, efficient Vega' is also a silicon lottery. No guarantees.
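A rough sketch of that perf-per-area comparison; the die sizes are public figures, but the ~30% relative gaming-performance gap is an assumption for illustration, not a benchmark result.

```python
# Assumed relative performance: 1080 Ti ~30% ahead of Vega 64 at stock.
cards = {
    "Vega 64 (Vega 10, ~486 mm^2)":   (486.0, 1.00),
    "GTX 1080 Ti (GP102, ~471 mm^2)": (471.0, 1.30),
}
for name, (area_mm2, rel_perf) in cards.items():
    print(f"{name}: {rel_perf / area_mm2 * 100:.2f} rel. perf per 100 mm^2")
```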

- Nvidia's way of scaling their architecture is indeed great, and what you are saying about how big a chip can be is exactly the problem GCN met with Hawaii, and it's the problem that Vega didn't manage to solve either. Meanwhile, there is a ~35% performance gap at similar die sizes, so now AMD has to resort to multiple dies, while Nvidia can postpone that for another full generation if they want to.

Or perhaps we might even see a much bigger total die area, like what AMD did with Threadripper. Especially with HBM, the board has lots of space anyway. So there are tons of options, but ONLY if you have a highly efficient architecture that doesn't blow through the TDP budget for each segment. People simply will not accept a 400 W GPU in this day and age, just as they won't accept hot and loud ones, let alone move all that hot air out of the ever-smaller form factors on the marketplace. So efficiency is king for every single use case.

- You bring up frequency limitations, but back when Maxwell was released, did you for one second consider that the next gen would pass the 2 GHz barrier for Nvidia? I sure as hell didn't.

- I am not that worried about competition in the GPU space. I will say this, as I have said often: I think RTG would be much better served in the hands of a different company that could truly focus on its GPU effort instead of the happy marriage that is APU / custom chip design, because let's face it, for a true gaming GPU those are all the wrong priorities, and it shows. AMD has not been doing anyone a service for the past few years, and there is nothing on the horizon ready to dethrone Nvidia. It's down to AMD/RTG's priorities and management that the high end has now been abandoned for nearly two years...