Friday, September 1st 2017
On AMD's Raja Koduri RX Vega Tweetstorm
In what is usually described as a tweetstorm, AMD's RTG leader Raja Koduri weighed in on AMD's RX Vega reception and perception from both the public and reviewers. There are some interesting tidbits there; namely, AMD's option of setting the RX vega parts at frequencies and voltages outside the optimal curve for power/performance ratios, in a bid to increase attractiveness towards the performance/$ crowds.
However, it can be said that had AMD done otherwise, neither gamers nor reviewers would have been impressed by cards that potentially delivered less performance than their NVIDIA counterparts while still consuming more power (even if significantly less wattage than they do now). At the rated MSRP (and that's a whole other discussion), this RTG decision was the best one towards increasing the attractiveness of the RX Vega offerings. That said, Raja Koduri does stress Vega's dynamic performance/watt ratios, achieved through specially defined power profiles.

To our forum-walkers: this piece is marked as an editorial.
Raja also touched on an interesting subject; namely, that "Folks doing perf/mm2 comparisons need to account for Vega10 features competing with 3 different competitive SOCs (GP100, GP102 and GP104)". This is one of the points we have stressed here on TPU: AMD's Vega architecture makes great strides towards being the architecture AMD needs for its server/AI/computing ambitions, but it's a far cry from what AMD gamers deserve. It's always a fine line chip designers must tread in choosing which features and workloads to implement or improve upon with new architectures, and AMD, lacking a high-performance graphics chip, was in dire need of a solution that addressed both the high-performance gaming market and the server/computing market where the highest profits are to be made.

This round, the company decided to focus its development more towards the server side of the equation - as Raja Koduri himself puts it regarding Infinity Fabric: "Infinity fabric on Vega is optimized for server. It's a very scalable fabric and you will see consumer optimized versions of it in future." Likely, this is Raja's way of telling us to expect MCMs (Multi-Chip Modules) built on AMD's Infinity Fabric, much as we see with Ryzen. This design philosophy allows companies to design smaller, more scalable, and cheaper chips that are more resistant to yield issues, improving every metric from performance to power and yields compared to a monolithic design. NVIDIA themselves have said that MCM chip design is the future, and this little tidbit from Raja could be a pointer towards AMD Radeon's future in that department.

That Infinity Fabric is optimized for servers is a reflection of the entire Vega architecture: AMD's RX Vega cards may be gaming-oriented graphics cards, but most of the architectural improvements can't be tapped by existing gaming workloads.
AMD tooled its architecture for a professional/computing push, in a bid to fight NVIDIA's ever-increasing entrenchment in that market segment, and it shows. Does this equate to disappointing (current) gaming performance? It does. And it equates to especially disappointing price/performance ratios given the current supply/pricing situation, which the company still hasn't clearly and forcefully addressed. Not that it can, anyway - it's not like AMD can point the finger at the distributors and retailers that carry its products without repercussions, now can it?

Raja Koduri also addressed the current mining/gamer arguments, if in a slightly deflective way: in truth, every sale counts as a sale for the company in terms of profit (and/or loss, since rumors abound that AMD loses up to $100 on every RX Vega graphics card sold at MSRP). An argument can be made that AMD's retreating share of the PC graphics card market would feed a never-ending cycle of diminishing developer attention towards architectures less prominent among their user bases; however, I think AMD considers itself relatively insulated from that particular issue because the company's graphics solutions are embedded in the PS4, PS4 Pro, and Xbox One S consoles, as well as the upcoming Xbox One X. Developers code for AMD hardware from the beginning in most games; AMD can certainly put some faith in that as a factor to tide its GPU performance over until it has the time to properly refine an architecture that shines in both computing and gaming environments. Perhaps with AMD's Navi, bolstered by an increased R&D budget from a (hopefully) successful wager in the professional and computing markets, we will see an AMD that can bring the competition to NVIDIA as well as it has to Intel.
Sources:
Raja Koduri's Twitter, via ETeknix
131 Comments on "On AMD's Raja Koduri RX Vega Tweetstorm"
RX Vega, on the other hand, is already maxed out in terms of thermal profile, power consumption, and clock speeds. How is the end user supposed to adjust it into an appealing, somewhat OK gaming card? It is one thing to like a brand, and another to hand out free passes when they simply failed to deliver a good enough gaming GPU for the current market.
On top of all that, RTG locked Vega's BIOS. Meaning there will be no way to implement any actual BIOS based modifications. RX Vega is a failure no matter what way you spin the story.
Also, adding onto the "compute" market for Vega: good luck with RTG's next-to-nonexistent technical support. In a market that already has widespread adoption of CUDA, it would be pretty fun to see how much RTG can carve out with their "open standard, free stuff" strategy.
If you under-volted a 1070 for a fairer comparison, the power difference would be even smaller. But claiming that in that configuration (which is really apples and oranges) it uses a meager 39 W less is really grasping. At my local electricity rates, even running 100% 24x7, it would take almost 5 years for the 39 W power savings to make up $100, and that's if you could somehow get lucky enough to find one for $499. And apples to apples, out of the box the Vega uses far more power.
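The commenter's break-even arithmetic can be sketched as follows. The ~$0.06/kWh electricity rate below is an assumption chosen to reproduce the "almost 5 years" figure; actual rates vary widely by region:

```python
def breakeven_years(watt_delta, price_gap_usd, rate_usd_per_kwh):
    """Years of 24x7 use before a power-draw difference (in watts)
    pays back a purchase-price gap (in USD)."""
    kwh_per_year = watt_delta / 1000 * 24 * 365
    yearly_savings = kwh_per_year * rate_usd_per_kwh
    return price_gap_usd / yearly_savings

# 39 W saved vs. a $100 price gap, at an assumed $0.06/kWh
print(round(breakeven_years(39, 100, 0.06), 1))  # ~4.9 years
```

At a more typical US rate of ~$0.12/kWh, the break-even point would come closer to 2.4 years, so the conclusion is sensitive to local electricity pricing.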
These price/performance marketing pushes AMD keeps attempting just seem to distract from the real benchmark numbers, availability, and pricing, which are what they are. The benchmark numbers aren't bad, but I have to call BS when they claim lower power usage than NV at stock... (huge asterisk) only if you way under-volt the Vega (lol). That's like saying a Ferrari gets good gas mileage if you never drive it over 20 mph and only go down steep hills.
Vega is pretty useless for gaming. FYI: The CUDA compiler infrastructure is open source.
The 1800X using less power than the 7700K in games is proof that with more cores available, per-core load levels drop and efficiency rises. The 7700K at stock draws 30% more power in this chart, and we know the 1700 is even better than the 1800X in performance/watt, since it doesn't use XFR, which raises voltage for very measly returns.
Is that just a fluke for some people's cards? Because if it's genuinely possible, why doesn't it ship like that in the first place?
Power save uses 165w and scores ~9550 in gpu score for Fire Strike Extreme
-50% uses 112w and scores ~8300
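Taking those two reported data points at face value, the efficiency gap between the profiles works out like this (a quick sketch; the scores and wattages are the commenter's approximate figures, not independently verified):

```python
# (Fire Strike Extreme GPU score, measured watts) per power profile
profiles = {
    "Power save (165 W)": (9550, 165),
    "-50% power (112 W)": (8300, 112),
}

for name, (score, watts) in profiles.items():
    # GPU score points per watt drawn
    print(f"{name}: {score / watts:.1f} pts/W")
```

By this rough math, the -50% profile trades about 13% of the score for roughly 28% better points-per-watt, which illustrates how far right of the efficiency sweet spot the stock configuration sits.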
@W1zzard What game did you use for this test?
I just bought a Vega 64 and it's running in power saver when I'm on VMs or playing old games. Even in some new games I'm getting decent FPS at 4K, and some I play at 1440p since I don't see much change at 4K. It's an awesome feature to have rather than want and not have.
Some brutal truths from Raja: Vega competes with the GP104 on performance, but it's impossible for them to compete with it on price with such a large, complex chip that also uses HBM2; it's no wonder there is talk of them losing money on each card sold.
GP102 just adds insult to injury. Meanwhile, NVIDIA's HBM2-equipped GP100 was never going to be seen in the consumer market anyway, and its higher costs are easily absorbed by the enormous HPC budgets the card is targeted at.
The reality is AMD achieves wonders with its R&D budgets, but realistically it needs to go the same route... if it can afford to.
So yeah, perf/$$$ isn't the best, but in the right scenario it's much better to run Vega than Pascal, especially if all your games can run at your refresh rate underclocked so the whole "power-hog" reasoning goes away :)
- Its not rocket science that 300w+ is a high number for this perf
- He's surprised at what reviewers found (wtf?! does he know his product?)
- Optimized Vega will be coming in the future. Ah, so you've had all this time, and we got a beta version. Sweet. Run some benches and post them! Looking forward to seeing it.
Second, this article seems to make the argument that AMD is insulated against poor performance because games are built for consoles first and foremost. Except that hasn't proven to be the case: several high-profile console-first games that also came to PC showed real problems on AMD cards this year. Why make an argument already proven false? Perpetuating that falsehood doesn't make it true; it's just echo-chambering the same failed logic.