Friday, September 1st 2017
On AMD's Raja Koduri RX Vega Tweetstorm
In what is usually described as a tweetstorm, AMD's RTG leader Raja Koduri weighed in on RX Vega's reception and perception among both the public and reviewers. There are some interesting tidbits there; namely, AMD's choice to set the RX Vega parts at frequencies and voltages outside the optimal curve for the power/performance ratio, in a bid to increase their attractiveness to the performance-per-dollar crowd.
However, it can be said that had AMD done otherwise, neither gamers nor reviewers would have been impressed with cards that potentially delivered less performance than their NVIDIA counterparts while still consuming more power (even if significantly less than they do now). At the rated MSRP (and that's a whole new discussion), this RTG decision was the best one for increasing the attractiveness of the RX Vega offerings. That said, Raja Koduri does stress Vega's dynamic performance-per-watt ratios, owed to the use of specially defined power profiles.

To our forum-walkers: this piece is marked as an editorial.
Raja also touched on an interesting subject; namely, that "Folks doing perf/mm2 comparisons need to account for Vega10 features competing with 3 different competitive SOCs (GP100, GP102 and GP104)". This is one of the points we have stressed here on TPU: AMD's Vega architecture makes great strides towards being the architecture AMD needs for its server/AI/compute ambitions, but it's a far cry from what AMD gamers deserve. It's always a fine line for chip designers to tread in choosing which features and workloads to implement or improve upon with a new architecture. AMD, for lack of a high-performance graphics chip, was in dire need of a solution that addressed both the high-performance gaming market and the server/compute market, where the highest profits are to be made.

This round, the company decided to focus its development more towards the server side of the equation - as Raja Koduri himself puts it regarding Infinity Fabric: "Infinity fabric on Vega is optimized for server. It's a very scalable fabric and you will see consumer optimized versions of it in future." Likely, this is Raja's way of telling us to expect MCMs (Multi-Chip Modules) built on AMD's Infinity Fabric, much as we see with Ryzen. This design philosophy allows companies to design smaller, more scalable, cheaper chips that are more resistant to yield issues - effectively improving every metric, from performance to power to yield, compared to a monolithic design. NVIDIA themselves have said that MCM chip design is the future, and this little tidbit from Raja could be a pointer towards AMD Radeon's future in that department.

That Infinity Fabric is optimized for servers is a reflection of the entire Vega architecture: AMD's RX Vega cards may be gaming-oriented graphics cards, but most of the architectural improvements can't be tapped by existing gaming workloads.
AMD tooled its architecture for a professional/compute push, in a bid to fight NVIDIA's ever-increasing entrenchment in that market segment, and it shows. Does this equate to disappointing (current) gaming performance? It does. And it equates to especially disappointing price/performance ratios given the current supply/pricing situation, which the company still hasn't clearly and forcefully addressed. Not that it can, anyway - it's not like AMD can point the finger at the distributors and retailers that carry its products without repercussions, now can it?

Raja Koduri also addressed the current mining/gamer arguments, if in a slightly deflective way: in truth, every sale counts as a sale for the company in terms of profit (and/or loss, since rumors abound that AMD is losing up to $100 on every RX Vega graphics card sold at MSRP). An argument can be made that AMD's shrinking share of the PC graphics card market could become a never-ending cycle of waning developer attention towards architectures that aren't prominent in their user bases; however, I think AMD considers itself relatively insulated from that particular issue, because the company's graphics solutions are embedded in the PS4, PS4 Pro and Xbox One S consoles, as well as the upcoming Xbox One X. Developers code for AMD hardware from the beginning in most games; AMD can certainly put some faith in that to tide its GPU performance over until it has the time to properly refine an architecture that shines in both compute and gaming environments. Perhaps when we see AMD's Navi, bolstered by an increased R&D budget from a (hopefully) successful wager on the professional and compute markets, we will see an AMD that can bring the competition to NVIDIA as well as it has done for Intel.
Sources:
Raja Koduri's Twitter, via ETeknix
131 Comments on On AMD's Raja Koduri RX Vega Tweetstorm
They are touting their 7nm node as amazing; it's supposed to be optimized for performance, unlike the current node, which is aimed at low-power-consumption parts.
I don't know, I don't have much trust in GloFo and AMD as a team to work perfectly in tandem. In my opinion, the only product designed to perfectly match the characteristics of the available node was Ryzen, and it turned out great. On the GPU side of things, it seems to me that they have been working against each other for the past 5 years or so.
But if there is something I've learned from playing with physics simulation in 3D software, it's that this stuff is a real power hog. It's a RAM eater, and even with OpenCL/CUDA acceleration you need some insane hardware to get a smooth simulation. So it's hard to imagine close-to-real-world physics running on a single GPU with a measly 8 GB of memory when said GPU is already busy pushing pixels out. So just how much of an improvement can you expect over the current state of game physics? Right now it's so abstract... are we talking about a building/cave collapsing with debris to dodge that wouldn't be scripted, but 100% simulated? Trying to escape a deadly fluid whose speed depends on the nature/tilt of the ground?
Something like this:
Then there is Space Engineers:
Deformation of objects is in there, and craploads of physics; it is a great example of how heavy CPU physics are in real time.
Space Engineers personifies what is wrong with CPU-based physics: it's way too easy to overwhelm, causing huge framerate drops. Guerilla had that issue as well.
The sad thing about both of these games is that they had to reinvent the wheel to make their pseudo-physics simulations work. It's also painfully difficult to accelerate on the GPU because the engine/middleware support fundamentally isn't there. That's the beauty of Havok/DirectPhysics: they can hybridize a CPU/GPU physics solution and get it widely implemented in game engines. DirectPhysics is really the only thing that can usurp PhysX, which has proven to be a dead end thanks to NVIDIA.
Many-core CPUs will also help solve this problem.
However, I know that some effects, like the water simulation in The Witcher 3, are not really hungry and are well made, but it only seems to work with the boat; it's less impressive when you are swimming, and they didn't mention what they are using for it.
When you blow up a wall in a game, I don't care if each brick goes flying with the exact velocity it should and lands where it should; all I care about is whether the overall movement of the objects looks natural. By "fake" I mean not dealing with this by classical means. I've looked this up, and it turns out you can replace very cumbersome calculations with statistical approximations in a typical rigid body problem, for example. The result is that, per object, the simulation is not accurate, but in the grand scheme of things it should look perfectly adequate. I think async will not solve the problem of having very robust physics engines that can simulate a wide variety of things with little impact on performance.
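To make that idea concrete, here's a toy Python sketch (entirely my own construction, not how any shipping engine actually does it): fully solve the ballistics for only a small sample of debris fragments, fit a distribution to their landing distances, then draw the remaining thousands of fragments from that fit. Per-fragment accuracy is gone, but the aggregate spread looks plausible at a fraction of the cost.

```python
import math
import random
import statistics

def simulate_debris(n, seed=None):
    """Toy per-fragment ballistic solve: random launch speed/angle,
    closed-form projectile range on flat ground."""
    rng = random.Random(seed)
    g = 9.81
    landings = []
    for _ in range(n):
        v = rng.uniform(5.0, 15.0)   # launch speed in m/s (made-up range)
        a = rng.uniform(0.2, 1.2)    # launch angle in radians (made-up range)
        landings.append(v * v * math.sin(2 * a) / g)
    return landings

def approx_debris(n_total, n_sample=64, seed=0):
    """Solve only n_sample fragments exactly, then draw the rest from
    a normal distribution fitted to the sample (clamped at zero)."""
    exact = simulate_debris(n_sample, seed)
    mu, sigma = statistics.mean(exact), statistics.stdev(exact)
    rng = random.Random(seed + 1)
    fakes = [max(0.0, rng.gauss(mu, sigma)) for _ in range(n_total - n_sample)]
    return exact + fakes

fragments = approx_debris(10_000)
print(len(fragments))  # 10000 fragments, only 64 of them solved exactly
```

The normal fit is just the simplest choice here; the point is that the per-fragment integration cost stays constant no matter how many fragments the explosion spawns.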
Oh noes! A circle jerk attendee, me next!www.reddit.com/r/Amd/comments/5nb30f/bios_signature_check_bypass_patch/
Oh noes, because bypassing hash checks is so hard. Your little broken fingers can't google for 3 seconds. It would be nice if people who obviously have no idea what they are talking about would just stop the anti-amd circle jerk. Nope, random Internet troll #5322 has to rely on semantics to feel good about himself.
Does AMD control prices or does the retailer?
I'm asking because my local Microcenter has two PowerColor RX Vega 56 cards in stock, selling for $599. I asked a store employee why the price is so high, and he informed me that it's AMD that sets the prices, not the retailer. I read here on TPU that it's the retailer that sets the prices. Is this store employee smoking crack or what?
Just think for a moment how dumb it would be if retailers essentially let manufacturers dictate what profit they make.
I'll say it again in case some are still wondering: retailers aren't charities, they seek to make the biggest profit they can.
There's no way for them to enforce it. When they listed them at MSRP, they vanished within minutes. Supply versus demand dictates they raise their prices and because all of them feel that the product is worth more than MSRP (namely because what GTX 1060s and RX 580s are going for), they all do it. In this kind of environment, MSRP has no meaning.
I wouldn't be surprised if manufacturers are overnight shipping everything because they know they can still make hundreds of dollars of profit just by getting it to market where cryptomining has taken hold at their inflated prices.
Only a Vega 56 with no extras, air cooled, *might* go for $400. No way you'll get a Vega 64 at that price. If the $600 card is in stock, it is probably because it comes with extras that are unattractive to miners.
Look here:
www.microcenter.com/search/search_results.aspx?N=4294966937+4294946165&NTX=&NTT=&NTK=all&page=1&sortby=pricehigh
Choose "Queens/Flushing" store
And no, Vega 56 isn't going to sell to gamers at $600 when GTX 1080s can be had for $570. When you look at mining, suddenly it all makes sense...
wccftech.com/ethereum-mining-gpu-performance-roundup/
Yes, yes, Vega isn't on there, but everyone is saying it does >30 MH/s, which would easily make it the highest-scoring card there.
If you can't wait it out, you're better off buying a GeForce card...which sucks at mining.
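For a rough sense of why miners will happily pay over MSRP, here's a back-of-the-envelope payback calculation in Python. Every number below (reward per MH/s per day, power draw, electricity rate) is a made-up placeholder; real mining returns swing daily with coin price and network difficulty.

```python
def payback_days(card_price, hashrate_mhs, reward_usd_per_mhs_day,
                 power_w, electricity_usd_per_kwh):
    """Days to recoup a card's price from mining, net of power cost.
    All inputs are hypothetical placeholders, not live market data."""
    daily_revenue = hashrate_mhs * reward_usd_per_mhs_day
    daily_power_cost = power_w / 1000 * 24 * electricity_usd_per_kwh
    daily_net = daily_revenue - daily_power_cost
    if daily_net <= 0:
        return float("inf")  # mining at a loss never pays the card off
    return card_price / daily_net

# Illustrative only: the same 30 MH/s card at the inflated $599 price
# versus the $399 MSRP, at placeholder reward and power numbers.
print(round(payback_days(599, 30, 0.15, 300, 0.12), 1))  # ~165 days
print(round(payback_days(399, 30, 0.15, 300, 0.12), 1))  # ~110 days
```

Plug in current numbers from a mining calculator before drawing conclusions; the point is only that a higher card price stretches the payback period, it doesn't eliminate the profit.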
Also, inflated prices of any new product, especially one that's scarce, what's new, hi market economy. Let's move on or just wait a couple months.
As for the price creep, remember there is this thing called inflation, and cards are still delivering performance at roughly the same price points, sometimes a bit lower, let's not forget, but money does devalue over time. The strategy here is to hide a price increase behind some odd marketing, and that is what we're seeing. Overall, you see this once every 4-5 years as companies rebalance their price tiers against inflation and the market; they don't do this every year. If it's ultrawide 1080p then for sure you will make that 1080 sweat, if not now then in 6 to 8 months definitely. I can max it out on my measly 1080p too :)