Friday, September 1st 2017

On AMD's Raja Koduri RX Vega Tweetstorm

In what is usually described as a tweetstorm, AMD's RTG leader Raja Koduri weighed in on RX Vega's reception and perception among both the public and reviewers. There are some interesting tidbits there; namely, AMD's decision to ship the RX Vega parts at frequencies and voltages outside the optimal power/performance curve, in a bid to increase their attractiveness to the performance-per-dollar crowd.

However, it can be said that had AMD done otherwise, neither gamers nor reviewers would have been impressed by cards that delivered even less performance than their NVIDIA counterparts while still consuming more power (even if drawing significantly less wattage than they do now). At the rated MSRP (and that's a whole other discussion), this RTG decision was the best one for increasing the attractiveness of the RX Vega offerings. However, Raja Koduri does stress Vega's dynamic performance/watt ratios, achieved through specially defined power profiles.
To our forum-walkers: this piece is marked as an editorial

Raja also touched on an interesting subject; namely, that "Folks doing perf/mm2 comparisons need to account for Vega10 features competing with 3 different competitive SOCs (GP100, GP102 and GP104)". This is interesting, and one of the points we have stressed here on TPU: AMD's Vega architecture makes great strides towards being the architecture AMD needs for its server/AI/computing needs, but it's a far cry from what AMD gamers deserve. It's always a fine line chip designers must walk in deciding which features and workloads to implement or improve upon with new architectures. AMD, due to its lack of a high-performance graphics chip, was in dire need of a solution that addressed both the high-performance gaming market and the server/computing market, where the highest profits are to be made.
This round, the company decided to focus its development more towards the server side of the equation - as Raja Koduri himself puts it regarding Infinity Fabric: "Infinity fabric on Vega is optimized for server. It's a very scalable fabric and you will see consumer optimized versions of it in future." Likely, this is Raja's way of telling us to expect MCMs (Multi-Chip Modules) stitched together by AMD's Infinity Fabric, much as we see with Ryzen. This design philosophy allows companies to design smaller, more scalable, cheaper chips that are more resistant to yield issues, effectively improving every metric - performance, power, and yield - compared to a monolithic design. NVIDIA themselves have said that MCM chip design is the future, and this little tidbit from Raja could be a pointer towards AMD Radeon's future in that department.
That Infinity Fabric is optimized for servers is a reflection of the entire Vega architecture; the RX Vega cards may be gaming-oriented, but most of the architecture's improvements can't be tapped by existing gaming workloads. AMD tooled its architecture for a professional/computing push, in a bid to fight NVIDIA's ever-increasing entrenchment in that market segment, and it shows. Does this equate to disappointing (current) gaming performance? It does. And it equates to especially disappointing price/performance ratios given the current supply/pricing situation, which the company still hasn't clearly and forcefully addressed. Not that it can, anyway - it's not like AMD can point the finger at the distributors and retailers that carry its products without repercussions, now can it?
Raja Koduri also addressed the current mining/gamer arguments, but in a slightly deflective way: in truth, every sale counts as a sale for the company in terms of profit (and/or loss, since rumors abound that AMD is losing up to $100 on every RX Vega graphics card sold at MSRP). An argument can be made that AMD's retreating share of the PC graphics card market would feed a never-ending cycle of diminishing developer attention towards architectures that aren't prominent in their user bases; however, I think AMD considers itself relatively insulated from that particular issue, because the company's graphics solutions are embedded in the PS4, PS4 Pro, and Xbox One S consoles, as well as the upcoming Xbox One X. Developers code for AMD hardware from the beginning in most games; AMD can certainly put some faith in that as a factor to tide its GPU performance over until it has the time to properly refine an architecture that shines in both computing and gaming environments. Perhaps when we see AMD's Navi, bolstered by an R&D budget increased by a (hopefully) successful wager in the professional and computing markets, we will see an AMD that can bring the competition to NVIDIA as well as it has to Intel.
Sources: Raja Koduri's Twitter, via ETeknix

131 Comments on On AMD's Raja Koduri RX Vega Tweetstorm

#101
FordGT90Concept
"I go fast!1!11!1!"
RejZoRBut I see a bright future if Havok sometime soon becomes DirectPhysics as part of DirectX; that's the moment we'll see a huge leap in game physics. It won't be just ragdolls anymore. It'll actually be able to affect gameplay.
It's not a matter of "if" but "when." Microsoft wouldn't be paying for trademarks on something they'll never release.
InVasManiVega also has less bandwidth than Fiji sadly, which is probably one of the bigger things holding it back currently. I think with a die shrink, a higher core clock speed, and a memory bus that's more like 640-bit, its successor will be in good shape, provided NVIDIA doesn't go crazy on R&D and make it look like trash comparatively, which could happen I suppose. Though I don't think there is major demand for a $2000 GPU for gaming; as it is, GPUs are too damn expensive.
You may be right. Vega on GlobalFoundries' 7 nm would be a serious contender. Other than Navi (fairly distant future), I really haven't heard of anything AMD has in the pipeline.
#102
InVasMani
Vega even on 10 nm with a Radeon Pro SSG-inspired M.2 connector setup wouldn't be a bad holdover until 7 nm Navi, which could double, triple, or quadruple down on the M.2 connections, though the triple/quadruple setups would probably be exclusive to its workstation-class cards for obvious reasons. They'd probably throw in a 10 nm Vega workstation card as well with twin M.2 connectors, just so they don't cannibalize the Pro SSG.
#103
Vya Domus
GloFo doesn't have a 10nm node.

They are touting their 7 nm node as amazing; it's supposed to be optimized for performance, unlike the current node, which is aimed at low-power-consumption parts.

I don't know, I don't have much trust in GloFo and AMD as a team to work perfectly in tandem. In my opinion, the only product designed to perfectly match the characteristics of the available node was Ryzen, and it turned out great. On the GPU side of things, it seems to me that they have been working against each other for the past 5 years or so.
#104
dyonoctis
I wish we could have some demo showing what to expect from DirectPhysics. From what I understand, async is basically allowing some units to work on compute tasks as soon as they are free.

But if there is something that I've learned from playing with physics simulation in 3D software, it's that this is a real power hog. It's a RAM eater, and even with OpenCL/CUDA acceleration, you need some insane hardware to get a smooth simulation. So it's hard to imagine close-to-real-world physics running on a single GPU with a measly 8 GB of memory, when said GPU is already busy pushing pixels out. So just how much of an improvement can you expect over the current state of game physics? Right now it's so abstract... are we talking about a building/cave collapsing with debris to dodge that wouldn't be scripted, but 100% simulated? Trying to escape a deadly fluid whose speed depends on the nature/tilt of the ground?
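To put rough numbers on the memory point - a back-of-the-envelope sketch with hypothetical grid sizes and per-cell layout, not figures from any particular engine:

```python
# Back-of-the-envelope VRAM footprint of a dense fluid-simulation grid.
# Grid sizes and per-cell layout are hypothetical, not from any engine.
BYTES_PER_FLOAT = 4
FLOATS_PER_CELL = 5          # e.g. velocity (3) + pressure + density

for n in (128, 256, 512):    # cells per axis
    cells = n ** 3
    size_gib = cells * FLOATS_PER_CELL * BYTES_PER_FLOAT / 2 ** 30
    print(f"{n}^3 grid: {size_gib:.2f} GiB")
# 128^3: 0.04 GiB, 256^3: 0.31 GiB, 512^3: 2.50 GiB - and double-buffering
# plus solver scratch memory multiplies that, before any rendering assets.
```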
#105
FordGT90Concept
"I go fast!1!11!1!"
I expect pseudo-physics like Havok games have, but more of them, with better interaction between them. I'm not talking trash flying around (like some games). I'm talking about models getting deformed from a collision, structures collapsing if they become unstable, and water surfaces that displace when a dynamic object is on them. Things that are faked now on the CPU because negotiating the physical and visual aspects of it is too demanding to do in real time. In the case of water, I'm not talking fluid dynamics like PhysX does, in that it literally simulates blobs of water. I'm talking water providing realistic resistance (as a function of surface area and thrust) and reaction to waves that, presently, is just very simple bob and resistance modifiers on movement.
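That kind of cheap-but-plausible water resistance can be as simple as the standard quadratic drag equation applied per object. A minimal sketch, with made-up mass, area, and thrust values:

```python
# Quadratic drag: F = -1/2 * rho * Cd * A * v * |v|, opposing motion.
# rho, Cd, mass, area and thrust below are made-up placeholder values.
def water_drag(v: float, area: float, rho: float = 1000.0, cd: float = 1.0) -> float:
    """Drag force (N) on an object moving at v m/s with `area` m^2 wetted."""
    return -0.5 * rho * cd * area * v * abs(v)

mass, area, thrust = 500.0, 2.0, 5000.0   # a small hypothetical boat
v, dt = 0.0, 1 / 60
for _ in range(600):                       # ten simulated seconds
    v += (thrust + water_drag(v, area)) / mass * dt
print(f"speed settles near {v:.2f} m/s")   # ~2.24 m/s, where thrust == drag
```

One multiply-heavy function per floating object is orders of magnitude cheaper than simulating the water volume itself, which is the whole point.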
#106
Vya Domus
dyonoctisI wish we could have some demo showing what to expect from DirectPhysics. From what I understand, async is basically allowing some units to work on compute tasks as soon as they are free.

But if there is something that I've learned from playing with physics simulation in 3D software, it's that this is a real power hog. It's a RAM eater, and even with OpenCL/CUDA acceleration, you need some insane hardware to get a smooth simulation. So it's hard to imagine close-to-real-world physics running on a single GPU with a measly 8 GB of memory, when said GPU is already busy pushing pixels out. So just how much of an improvement can you expect over the current state of game physics? Right now it's so abstract... are we talking about a building/cave collapsing with debris to dodge that wouldn't be scripted, but 100% simulated? Trying to escape a deadly fluid whose speed depends on the nature/tilt of the ground?
Games already use compute for physics; id Tech 6 has support for GPU-accelerated particles, and so do many modern engines. You are talking about simulations done through the usual means of solving differential equations, which get rapidly more expensive to compute with each object introduced into the simulation. That's most likely not how these engines compute physics; they more than likely use pseudorandom movements that are cheap to compute for things like particles. And you can extend these models to many other things, such as deformation/destruction. These things do not have to be computed with perfect accuracy; they just need to be close enough to reality that they don't look off, while maintaining a good level of dynamism.
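To make that concrete, here is a toy sketch of the cheap pseudorandom approach (illustrative only - not code from id Tech 6 or any shipping engine):

```python
import random

GRAVITY, DT = -9.81, 1 / 60

def step(particles, rng=random.Random(42)):
    """One update: real gravity plus a cheap pseudorandom 'turbulence' jitter.
    No particle-particle interaction, so the cost is O(N) and every particle
    is independent - exactly the kind of work a GPU compute queue soaks up."""
    for p in particles:                  # p = [x, y, z, vx, vy, vz]
        p[4] += GRAVITY * DT             # gravity on the y velocity
        for i in (3, 4, 5):              # fake forces: uniform jitter
            p[i] += rng.uniform(-0.05, 0.05)
        for i in (0, 1, 2):              # integrate position
            p[i] += p[i + 3] * DT

particles = [[0.0, 1.0, 0.0, 0.0, 0.0, 0.0] for _ in range(10_000)]
step(particles)
```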
#107
BiggieShady
FordGT90ConceptIt's going to take GPGPU to make physics mean something in games. Specifically, they need to prioritize the compute tasks and balance the GPU work with it.
Ah, tricky business... ideally, you'd first need a way to detect collisions among N rigid bodies, each against each, in parallel, in a way that keeps complexity constant (or at least not a function of N)... maybe an octree-style hierarchical voxel grid that simulates a 3D "force field" to drive physics collisions, doing something similar to what NVIDIA did with voxel global illumination in their moon demo - also a pure compute-driven voxel grid, there for bouncing light rays.
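A minimal sketch of the grid idea - uniform spatial hashing rather than a full octree, and CPU-side Python rather than a compute shader, purely to show the shape of the algorithm:

```python
from collections import defaultdict
from itertools import product

CELL = 2.0   # cell edge, chosen >= the largest object's diameter

def broad_phase(spheres):
    """spheres: list of (x, y, z, r). Yields candidate pairs (i, j), i < j,
    by binning objects into a uniform grid and only testing neighbours."""
    grid = defaultdict(list)
    for i, (x, y, z, r) in enumerate(spheres):
        grid[(int(x // CELL), int(y // CELL), int(z // CELL))].append(i)

    seen = set()
    for (cx, cy, cz), members in grid.items():
        for dx, dy, dz in product((-1, 0, 1), repeat=3):
            for j in grid.get((cx + dx, cy + dy, cz + dz), ()):
                for i in members:
                    if i < j and (i, j) not in seen:
                        seen.add((i, j))
                        yield i, j

pairs = list(broad_phase([(0, 0, 0, 0.5), (1, 0, 0, 0.5), (50, 0, 0, 0.5)]))
print(pairs)   # [(0, 1)] - the far-away sphere never gets pair-tested
```

Each cell's pair tests are independent of every other cell's, which is what lets this map onto GPU compute: one thread group per cell, with near-constant work per cell as long as objects are reasonably spread out.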
#108
Vayra86
FordGT90ConceptI expect pseudo-physics like Havok games have, but more of them, with better interaction between them. I'm not talking trash flying around (like some games). I'm talking about models getting deformed from a collision, structures collapsing if they become unstable, and water surfaces that displace when a dynamic object is on them. Things that are faked now on the CPU because negotiating the physical and visual aspects of it is too demanding to do in real time. In the case of water, I'm not talking fluid dynamics like PhysX does, in that it literally simulates blobs of water. I'm talking water providing realistic resistance (as a function of surface area and thrust) and reaction to waves that, presently, is just very simple bob and resistance modifiers on movement.
While not 100% accurate, Red Faction: Guerrilla came pretty damn close, and runs like a dream. Structural integrity is present there, things collapse, but the more you play with it, the more you can see the 'trick' behind it. Still, it manages to convince quite well. I do think these implementations are more 'playable'.

Then there is Space Engineers:


Deformation of objects is in there, and craploads of physics, and it is a great example of how heavy CPU physics is in real time.
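The 'trick' in Red Faction-style structural integrity is often little more than a connectivity test: when a block is destroyed, search outward from anchored blocks and let anything no longer connected fall. A toy sketch of that general technique (a guess at the idea, not Volition's actual implementation):

```python
from collections import deque

def falling_blocks(blocks, anchors, adjacency):
    """blocks: set of block ids still standing; anchors: ids attached to the
    ground; adjacency: id -> set of touching ids. Anything not reachable
    from an anchor has lost its support and should start falling."""
    supported = {b for b in anchors if b in blocks}
    queue = deque(supported)
    while queue:                               # BFS out from the anchors
        for n in adjacency.get(queue.popleft(), ()):
            if n in blocks and n not in supported:
                supported.add(n)
                queue.append(n)
    return blocks - supported

# A three-block tower: blow out the middle block and the top one falls.
adj = {1: {2}, 2: {1, 3}, 3: {2}}
print(falling_blocks(blocks={1, 3}, anchors={1}, adjacency=adj))  # {3}
```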
#109
FordGT90Concept
"I go fast!1!11!1!"
Red Faction: Guerrilla used Havok on the CPU. That's exactly what I'm talking about, and it should be commonplace in games today, but it's not, for two reasons: 1) it's hell on netcode, and 2) there's no infrastructure to make it easy to implement. Case in point: a lot of games have pre-scripted destructible walls now - that is what basic physics support in game engines has been built to handle. With broad support for DirectPhysics, creating buildings, vehicles, and so on becomes less about visuals and more about structure. All things look the way they do because of physics. When an engine supports that basic principle, modelers are encouraged to embrace it as well. They aren't just making something pretty to look at; they're also building something functional. PhysX has already caused that shift of thinking in some games (like Farming Simulator and Unreal Engine), but a lot of developers are still stuck in that mentality of faking it until they make it (ragdoll is fundamentally the result of inadequate body mass and way too much force delivered in the impact).

Space Engineers exemplifies what is wrong with CPU-based physics: it's way too easy to overwhelm, causing huge framerate drops. Guerrilla had that issue as well.

The sad thing about both of these games is that they had to reinvent the wheel to make their pseudo-physics simulations work. It's also painfully difficult to accelerate it on the GPU because the engine/middleware support fundamentally isn't there. That's the beauty of Havok/DirectPhysics: they can hybridize a CPU/GPU physics solution and get it widely implemented in game engines. DirectPhysics is really the only thing that can usurp PhysX which has proven to be a dead end thanks to NVIDIA.

Many-core CPUs will also help solve this problem.
#110
dyonoctis
Vya DomusGames already use compute for physics , id Tech 6 has support for GPU accelerated particles and so do many modern engines. You are talking about simulations that are done through the usual means of solving differential equations that get exponentially more difficult to compute by each object introduced in the simulation. That's most likely not how these engines compute physics , they more than likely use pseudorandom movements that are easy to compute for things like particles. And you can extend these models to many other things such as deformation/destruction. These things do not have to be computed with perfect accuracy , they just need to be close enough to reality so that they don't look off while maintaining a good level of dynamism.
Yeah, I knew about the compute part, like TressFX and HairWorks using DirectCompute, but both of those solutions have a noticeable impact on the framerate while still being "imperfect"; that's why I'm curious about the gains of async compute with DirectPhysics. Are they going to use it to lower the current strain on the framerate, or to push the effects further? And if they are doing the latter, just how much of an improvement could we see?

However, I know that some effects, like the water simulation in The Witcher 3, are not really hungry and are well made, but it only seems to work with the boat; it's less impressive when you are swimming, and they didn't mention what they are using for it.
#111
Vya Domus
FordGT90ConceptThey aren't just making something pretty to look at; they're also building something functional. PhysX has already caused that shift of thinking in some games (like Farming Simulator and Unreal Engine), but a lot of developers are still stuck in that mentality of faking it until they make it (ragdoll is fundamentally the result of inadequate body mass and way too much force delivered in the impact).

...

The sad thing about both of these games is that they had to reinvent the wheel to make their pseudo-physics simulations work. It's also painfully difficult to accelerate it on the GPU because the engine/middleware support fundamentally isn't there.
I am of a totally different opinion; I think "faking it" is the way to go. It seems to me that what you want is perfect rigid-body and soft-body simulation at large scale, and that's simply not going to happen any time soon, with or without the help of GPU acceleration. It's just too much for current hardware.

When you blow up a wall in a game, I don't care if each brick goes flying with the exact velocity it should and lands where it should; all I care about is whether the overall movement of the objects looks natural. By "fake" I mean not dealing with this matter by classical means. I've looked this up, and it turns out you can replace very cumbersome calculations with statistical approximations in a typical rigid-body problem, for example. The result is that per object the simulation is not accurate, but in the grand scheme of things it should look perfectly adequate.
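To sketch what that statistical shortcut can look like (a toy illustration of the idea, not any engine's actual solver): instead of resolving every brick-brick contact, give each fragment an impulse sampled around the analytic blast direction.

```python
import math, random

def debris_velocities(fragments, origin, speed=12.0, spread=0.25,
                      rng=random.Random(7)):
    """fragments: (x, y, z) centroids. Each fragment gets a velocity pointing
    away from the blast origin, scaled down with distance, plus Gaussian
    scatter - no pairwise contact solving, so the cost is O(N), not O(N^2)."""
    out = []
    for x, y, z in fragments:
        dx, dy, dz = x - origin[0], y - origin[1], z - origin[2]
        d = math.sqrt(dx * dx + dy * dy + dz * dz) or 1.0
        s = speed / (1.0 + d)                  # crude falloff with distance
        out.append(tuple(c / d * s + rng.gauss(0.0, spread)
                         for c in (dx, dy, dz)))
    return out

bricks = [(1.0, 0.0, 0.0), (0.0, 2.0, 0.0), (-1.0, -1.0, 1.0)]
print(debris_velocities(bricks, origin=(0.0, 0.0, 0.0)))
```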
dyonoctisYeah, I knew about the compute part, like TressFX and HairWorks using DirectCompute, but both of those solutions have a noticeable impact on the framerate while still being "imperfect"; that's why I'm curious about the gains of async compute with DirectPhysics. Are they going to use it to lower the current strain on the framerate, or to push the effects further? And if they are doing the latter, just how much of an improvement could we see?

However, I know that some effects, like the water simulation in The Witcher 3, are not really hungry and are well made, but it only seems to work with the boat; it's less impressive when you are swimming, and they didn't mention what they are using for it.
I think async will not solve the problem of having very robust physics engines that can be used to simulate a wide variety of things with little impact on performance.
#112
Vayra86
Faking it has been the way to go thus far, not just with physics, but with gaming graphics overall. Tessellation, for example; but also things like LOD are ways of 'faking it' that GPUs can handle well. No gamer is asking for scientific accuracy, and we know that realistically modelling all these interactions would be extremely costly - a cost the target market won't be paying, ever.
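LOD is a good example of the pattern: swap in cheaper meshes past hand-tuned distance thresholds, trading accuracy nobody can see for cycles. A minimal sketch, with hypothetical thresholds and mesh names:

```python
# Distance-based LOD selection; thresholds and mesh names are hypothetical.
LOD_TABLE = [(10.0, "mesh_lod0"),    # < 10 m: full-detail mesh
             (40.0, "mesh_lod1"),    # < 40 m: reduced mesh
             (120.0, "mesh_lod2")]   # < 120 m: very coarse mesh

def pick_lod(distance: float) -> str:
    for max_dist, mesh in LOD_TABLE:
        if distance < max_dist:
            return mesh
    return "culled"                  # beyond the last band: don't draw at all

assert pick_lod(5.0) == "mesh_lod0"
assert pick_lod(80.0) == "mesh_lod2"
assert pick_lod(500.0) == "culled"
```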
#113
FordGT90Concept
"I go fast!1!11!1!"
Physics is fundamentally what drives games at their core, and there aren't many cases where it isn't faked. It's faked out of necessity, not want.
#114
Hood
springs113You guys are failing to realize that this "lil" company is taking on two giants at the same time with nowhere near those companies' money at hand.
Intel was founded in 1968, AMD in 1969, and Nvidia in 1993 - 49, 48, and 24 years old. AMD has had roughly the same amount of time as Intel, and twice as much as Nvidia, to build itself into a "giant". A series of terrible business decisions, underhanded practices by competitors, and marginal marketing tactics have made them what they are today (1/34th the size of Intel and 1/3rd the size of Nvidia by total assets). They're doing alright, all things considered, but bad business and marketing decisions are still hurting them. Without better management, they are still in danger of bankruptcy.
#115
evernessince
xkm1948Flashing an unmodded BIOS is not modding the BIOS. :p Driver verifies that BIOS has valid checksum at boot. Modding Linux to make it work doesn't actually make it work...
Don't burst his imagination bubble goat!

Oh noes! A circle jerk attendee, me next!
cadavecaFlashing an unmodded BIOS is not modding the BIOS. :p Driver verifies that BIOS has valid checksum at boot. Modding Linux to make it work doesn't actually make it work...
www.reddit.com/r/Amd/comments/5nb30f/bios_signature_check_bypass_patch/

Oh noes, because bypassing hash checks is so hard. Your little broken fingers can't google for 3 seconds. It would be nice if people who obviously have no idea what they are talking about would just stop the anti-AMD circle jerk. Nope, random Internet troll #5322 has to rely on semantics to feel good about himself.
#116
SPLWF
Can someone answer a question for me?

Does AMD control prices or does the retailer?

I'm asking because my local Micro Center has two PowerColor RX Vega 56 cards in stock, selling for $599. I asked an employee why the price is so high, and he informed me that it's AMD that sets the prices, not the retailer. I read here on TPU that it's the retailer that sets the prices. Is this store employee smoking crack or what?
#117
Vya Domus
SPLWFI asked an employee why the price is so high, and he informed me that it's AMD that sets the prices, not the retailer.
That's a load of BS; no manufacturer has full control over the prices of their products unless they sell them directly (which obviously isn't the case here, as these cards are sold through said retailers). Not to mention that there are also the AIB partners in this chain.

Just think for a moment how dumb it would be if retailers essentially let manufacturers dictate what profit they make.

I'll say it again in case some are still wondering: retailers aren't charities; they seek to make the biggest profit they can.
#118
SPLWF
Vya DomusThat's a load of BS; no manufacturer has full control over the prices of their products unless they sell them directly (which obviously isn't the case here, as these cards are sold through said retailers). Not to mention that there are also the AIB partners in this chain.

Just think for a moment how dumb it would be if retailers essentially let manufacturers dictate what profit they make.

I'll say it again in case some are still wondering: retailers aren't charities; they seek to make the biggest profit they can.
Can't I show them a link to AMD's claims of sticking by MSRP? Then maybe they'll sell one to me for $399. Can't I just prove to them that I'm using it for gaming? Charge $599 to the miners, not me.
#119
FordGT90Concept
"I go fast!1!11!1!"
"Manufacture Suggested Retail Price"

There's no way for them to enforce it. When retailers listed the cards at MSRP, they vanished within minutes. Supply versus demand dictates they raise their prices, and because all of them feel the product is worth more than MSRP (namely because of what GTX 1060s and RX 580s are going for), they all do it. In this kind of environment, MSRP has no meaning.

I wouldn't be surprised if manufacturers are overnight-shipping everything, because they know they can still make hundreds of dollars of profit just by getting product to market where cryptomining has taken hold at these inflated prices.


Only a Vega 56 with no extras, air-cooled, *might* go for $400. No way you'll get a Vega 64 at that price. If the $600 card you're talking about is in stock, it is probably because it comes with extras that are unattractive to miners.
#120
SPLWF
FordGT90Concept"Manufacture Suggested Retail Price"

There's no way for them to enforce it. When they listed them at MSRP, they vanished within minutes. Supply versus demand dictates they raise their prices and because all of them feel that the product is worth more than MSRP (namely because what GTX 1060s and RX 580s are going for), they all do it. In this kind of environment, MSRP has no meaning.

I wouldn't be surprised if manufacturers are overnight shipping everything because they know they can still make hundreds of dollars of profit just by getting it to market where cryptomining has taken hold at their inflated prices.


Only Vega56 with no extras, air cooled *might* go for $400. No way you'll get a Vega64 at that price. If you're talking the $600 card is in stock, it is probably because it comes with extras that are unattractive to miners.
Vega 56, just the card itself (no AMD package BS), is $599. Their Vega 64 (also without packages) is $699. They have like two Vega 64s in stock, and I just checked that they have 12 Vega 56s in stock. WhoTF will pay this price?

Look here:

www.microcenter.com/search/search_results.aspx?N=4294966937+4294946165&NTX=&NTT=&NTK=all&page=1&sortby=pricehigh

Choose "Queens/Flushing" store
#121
FordGT90Concept
"I go fast!1!11!1!"
The $700 Vega 64 is open box.

And no, Vega 56 isn't going to sell to gamers at $600 when GTX 1080s can be had for $570. When you look at mining, suddenly it all makes sense...
wccftech.com/ethereum-mining-gpu-performance-roundup/

Yes, yes, Vega isn't on there, but everyone is saying it does >30 MH/s, which means it would easily be the highest-scoring card there.

If you can't wait it out, you're better off buying a GeForce card...which sucks at mining.
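For context on why miners shrug at these prices, the arithmetic is simple. A back-of-the-envelope sketch - every input is a placeholder except the 30 MH/s figure quoted above, since ETH price, difficulty, and electricity rates all moved constantly in 2017:

```python
# Rough mining math with hypothetical inputs (only the hashrate is
# the figure quoted above; everything else is an assumed placeholder).
hashrate_mhs    = 30.0     # MH/s
usd_per_mhs_day = 0.10     # assumed gross revenue per MH/s per day
power_watts     = 250.0    # assumed wall draw while mining
usd_per_kwh     = 0.10     # assumed electricity price

gross = hashrate_mhs * usd_per_mhs_day                 # $/day
power = power_watts / 1000 * 24 * usd_per_kwh          # $/day
net = gross - power
print(f"~${net:.2f}/day net, ~${net * 30:.0f}/month")  # ~$2.40/day, ~$72/month
```

Under those assumptions, even a $200 markup over MSRP pays for itself in roughly three months, which is why gamers keep getting outbid.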
#122
SPLWF
FordGT90ConceptThe $700 Vega 64 is open box.

And no, Vega 56 isn't going to sell to gamers at $600 when GTX 1080s can be had for $570. When you look at mining, suddenly it all makes sense...
wccftech.com/ethereum-mining-gpu-performance-roundup/

Yes, yes, Vega isn't on there, but everyone is saying it does >30 MH/s, which means it would easily be the highest-scoring card there.

If you can't wait it out, you're better off buying a GeForce card...which sucks at mining.
I have an LG 34" ultrawide FreeSync monitor; buying a 1080 is a no-brainer, but I wouldn't be utilizing the full potential of my monitor.
#123
InVasMani
The GTX 1080 is hands down the best-value upper-midrange/lower-high-end card on the market, and the 1080 Ti is decent as well, but it's the epitome of the GPU price creep we've seen over the last decade or so from one generation to the next - creep that happened due to a lack of overall competition from AMD/ATI, as a result of their merger and the debt-balancing cash flow struggle that has really crippled R&D. The whole situation is a perfect storm of bad circumstances for the gaming crowd: GPUs' increased emphasis on compute, lack of competition, and of course elevated prices. While gaming isn't shrinking - it's still growing - I'm sure the emphasis on it is waning as a result of the increased focus on GPU compute, where profit margins are higher for GPU makers. We've become somewhat forgotten, or not important enough in relative terms to worry, care, or be particularly concerned about. Essentially we need to just take what they throw our way, or that's how I'm seeing it. The demand and profit are higher elsewhere, even if gaming jump-started the party.
#124
Vayra86
Come on, people - inflated GPU prices at launch, what's new here?

Also, inflated prices for any new product, especially one that's scarce - what's new? Hi, market economy. Let's move on, or just wait a couple of months.

As for the price creep, remember there is this thing called inflation, and cards are still delivering performance at roughly the same price points - sometimes at a bit lower price point, let's not forget - but currency does devalue over time. The strategy here is to hide a price increase through some odd marketing, and that is what we're seeing. Overall, you see this once every 4-5 years as companies rebalance their price tiers with inflation and the market. They don't do this every year.
SPLWFI have an LG 34" ultrawide FreeSync monitor; buying a 1080 is a no-brainer, but I wouldn't be utilizing the full potential of my monitor.
If it's ultrawide 1080p, then for sure you will make that 1080 sweat - if not now, then in 6 to 8 months definitely. I can max it out on my measly 1080p too :)
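The inflation point above is easy to sanity-check with compound growth (illustrative price and rate, not official CPI figures):

```python
# Compound inflation on a hypothetical $500 launch price over five years,
# at an assumed 2% per year (illustrative, not an official CPI figure).
price, rate, years = 500.0, 0.02, 5
print(f"${price * (1 + rate) ** years:.0f}")   # ~$552
```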
#125
InVasMani
We've seen quite a lot of inflation, though the cards today are immensely more powerful than the ones from five to ten years ago, and at far less power relative to their performance, so you might say it balances out a bit in the end. But as consumers we naturally grow impatient - the tech struggle is real, the future is constant...