
AMD Radeon RX Vega Preview

Vega is far from perfect, but it actually meets my needs in the end.
Sure, I have to deal with the extra power draw and it is quite late, but I get GTX 1080-level performance for roughly the same price, plus FreeSync, excellent macOS compatibility, and a card from a company pushing OSS and open standards.

In the end, I'm glad I waited and didn't replace my Tonga with Pascal.
 
Now if it's true that Navi is actually Raja's "from the ground up" project, and that when he got to AMD, Polaris and Vega were already massively in development, then that gives me hope that Navi will truly be RTG's Zen and more.

Navi at this moment is probably still just a bunch of concepts thrown together; AMD has been way too occupied with other things. Raja has all the time he needs to make something from the ground up. Polaris was pretty much done when he got back to AMD, and the design for Vega was probably close to being done already too.

However, Raja can't be a one-man army; people give him too much credit. Sure, he may be capable, but unless he has talented engineers it will be for nothing.
 
For that kind of performance, a 345 W TDP is just absurdly high. Even an overclocked 1070 under water will probably stay under that while performing better. I had hoped for Ryzen-like performance in the GPU space with Vega. :(

There's some discrepancy in the numbers. We have TDP, TGP, and board power all being thrown around with number sets that do not match. The direct TDP info we got from AMD is in the article, but RX Vega 64 air-cooled supposedly has a 220 W TDP and RX Vega 56 a 165 W TDP.
 
The memory bandwidth and 210 W might make the $399 base one at least a possible mining option. A long shot, but possible. And on paper the TFLOPS look good on all three (quick math below). Will be looking forward to the third-party benchmarks.
I take any vague performance charts from AMD with a grain of salt, since they've cherry-picked those with every release.

Also, the last three AMD cards I purchased had really (really) thin OC headroom compared to my Nvidia cards. My R9 cards were lucky to get an extra 25 MHz on the core. I have two Pascal cards that run a +150 core/+600 memory OC 24x7 at 100%. The relatively high power requirements on these Vegas are a red flag that OC headroom could be minimal on them as well.
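On the "paper TFLOPS" point above, here's a quick sanity check using the shader counts and boost clocks AMD announced for the three cards (FP32 rate is shaders × clock × 2, counting an FMA as two FLOPs); a rough sketch of peak numbers, not sustained clocks:

```python
# Theoretical FP32 throughput from AMD's announced specs: shaders * boost clock * 2 (FMA).
specs = {
    "RX Vega 64 Liquid": (4096, 1677),  # shader count, boost clock in MHz
    "RX Vega 64":        (4096, 1546),
    "RX Vega 56":        (3584, 1471),
}

for card, (shaders, boost_mhz) in specs.items():
    tflops = shaders * boost_mhz * 1e6 * 2 / 1e12
    print(f"{card}: {tflops:.1f} TFLOPS FP32")
# ~13.7, ~12.7 and ~10.5 TFLOPS respectively -- peak figures, assuming the boost clock holds.
```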
 
So if I want the top-of-the-range Vega, I am not able to buy it on its own! I have to buy the liquid-cooled card as part of a bundle I don't actually want, for $699!!! That's $200 more than the stock Vega 64 with the plastic shroud. I can buy the lower-clocked stock Vega on its own.... Likely the lower clock speeds will result in sub-1080 performance!!! Not impressed.....
 
There's some discrepancy in the numbers. We have TDP, TGP, and board power all being thrown around with number sets that do not match. The direct TDP info we got from AMD is in the article, but RX Vega 64 air-cooled supposedly has a 220 W TDP and RX Vega 56 a 165 W TDP.


I'm going to guess it's AMD creatively marketing Chill and other things, averaged over a few games of their choice..... Typical of their marketing department.
 
Interesting stuff on slide 33
slides-raja-33.jpg
It's ultrawide 1440p, and according to AMD that resolution is the best-case scenario, where Vega beats the GTX 1080 in minimum frame rates ... in which game, is it Doom?
Dammit AMD, I wanna see those ranges in any game besides Doom.
 
I'm going to guess it's AMD creatively marketing Chill and other things, averaged over a few games of their choice..... Typical of their marketing department.

I think it is just core TDP (the lower numbers) vs. rated full board power (which includes power delivery losses, PCB losses, fan power consumption, etc.). So the latter would be the VGA card TDP, and the former just the GPU TDP.
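One hedged way to reconcile the two sets of numbers: treat the 220 W figure as GPU-only TDP and the 295 W board power as everything on the card. The split between memory, VRM losses, and fan below is pure guesswork, purely for illustration:

```python
# Illustrative-only breakdown of GPU TDP vs. total board power (all splits are guesses).
gpu_tdp    = 220    # W, AMD's "TDP" figure for the air-cooled RX Vega 64
hbm2_power = 25     # W, guessed draw for the HBM2 stacks
fan_power  = 10     # W, guessed draw for the blower fan
vrm_eff    = 0.87   # guessed VRM conversion efficiency at load

# GPU and memory power both pass through the VRMs; the fan does not.
board_power = (gpu_tdp + hbm2_power) / vrm_eff + fan_power
print(f"estimated board power: {board_power:.0f} W")  # ~292 W vs. the quoted 295 W
```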

Interesting stuff on slide 33
slides-raja-33.jpg
It's ultrawide 1440p, and according to AMD that resolution is the best-case scenario, where Vega beats the GTX 1080 in minimum frame rates ... in which game, is it Doom?
Dammit AMD, I wanna see those ranges in any game besides Doom.

Take a look here: https://www.techpowerup.com/reviews/AMD/Radeon_RX_Vega_Preview/images/slides-raja-51.jpg; it's the average of all the games tested, including Doom.
 
Meh. I'm just gonna buy an Xbox X and be done with this $#!T show.
 
Why did they make the shroud silver? It should've been red with an RGB Vega logo. I'm looking forward to how the 56 compares to the 64. I am leaning more towards the 64.

I hope AIB cards will be available at the time of release?
 
Meh. I'm just gonna buy an Xbox X and be done with this $#!T show.


Don't say things you really don't mean. You are scaring the children.

I have felt this way, and I'm planning on getting my son an Xbox so we can play some co-op games, since on PC only emulators really allow that anymore. Plus, between the issues with this generation of hardware and Windows 10, the PC is way more expensive and intrusive than a console.
 
Don't say things you really don't mean. You are scaring the children.

I have felt this way, and I'm planning on getting my son an Xbox so we can play some co-op games, since on PC only emulators really allow that anymore. Plus, between the issues with this generation of hardware and Windows 10, the PC is way more expensive and intrusive than a console.
It really is, man. Plus the days of the PC having way better graphics are gone. All the PC gets anymore are lame ports. Instead of upgrading my whole rig, it's way cheaper just to get an Xbox X. Hell, I played the new RE the other day and I got better FPS on my Xbox S than I did on my PC. Shame.
 
A considerable portion of the die is occupied by compute-related hardware.
There are no dedicated "compute units" in GPUs. The whole point of GPGPU is to utilize the same FPUs for rendering and for compute. Vega is simply inefficient. AMD needs to make a design with better performance per transistor.

You get about 30% more performance from 40-60% extra frequency. Not exactly the same as Fury X. The screenshot is from AMD's own video.

[screenshot from AMD's video comparing Vega and Fury X]
That's bad, and there is no excuse for it. And that's even in a game which favors AMD hardware.
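Taking the quoted numbers at face value, the per-clock scaling they imply is easy to work out (Fury X runs at 1050 MHz; the Vega clocks below simply apply the quoted +40-60%):

```python
# Per-clock performance implied by "+30% performance from +40-60% frequency".
fury_x_mhz = 1050
perf_gain  = 1.30  # Vega performance relative to Fury X, per the screenshot

for clock_gain in (1.40, 1.60):
    per_clock = perf_gain / clock_gain
    print(f"at {fury_x_mhz * clock_gain:.0f} MHz: {per_clock:.2f}x Fury X per-clock throughput")
# 0.93x and 0.81x: with the same 4096 shaders, Vega is doing less work per clock here.
```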

Navi at this moment is probably still just a bunch of concepts thrown together; AMD has been way too occupied with other things. Raja has all the time he needs to make something from the ground up. Polaris was pretty much done when he got back to AMD, and the design for Vega was probably close to being done already too.
Navi is rescheduled to be taped out by the end of 2017, which makes the release window roughly Q4 2018 to Q2 2019 (about a year after Volta).

I remember when Polaris was touted as the "stopgap" solution until the glorious Vega arrived. Vega now looks to be the greatest AMD failure in recent times.
 
Eh, don't go by manufacturer recommendations for PSUs. They overestimate a lot, especially if they sell PSUs under their own label.
 
Eh, don't go by manufacturer recommendations for PSUs. They overestimate a lot, especially if they sell PSUs under their own label.

AMD's recommended PSU for the Vega FE is 850 W.
 
It's the average of all the games tested, including Doom.
Thanks for digging that up ... also, those are similar ranges to the Pro Vega FE tests from weeks ago... I'm not seeing them bragging about the tiled rasterizer in the slides either.
 
It really is, man. Plus the days of the PC having way better graphics are gone. All the PC gets anymore are lame ports. Instead of upgrading my whole rig, it's way cheaper just to get an Xbox X. Hell, I played the new RE the other day and I got better FPS on my Xbox S than I did on my PC. Shame.


There will come a day, maybe soon or even now, when the majority of users stop buying PCs, as the performance of consoles, laptops, tablets, phones, and thin clients makes the home computer obsolete; the avalanche has already started. Why buy a faster PC at this point if nothing out there except specialized software takes advantage of more cores or even needs 4+ GHz speeds to run properly? I'm finding it a struggle to justify a system upgrade; the only way I can justify it is by purchasing a 4K TV, and few if any of those have the features I want at a reasonable price. My antique GPU still pumps out great frame rates at 1080p in the games I have, and at great-looking settings.

PCs are becoming specialized hardware, which is what they started out as, and that's the sign of their mainstream death; kind of like typewriters were specialized, then became mainstream, and are now specialty items again. Businesses will still use PCs, but it's getting cheaper and cheaper to have managed services and high-speed internet with thin clients and one or two main servers. I don't even have a PC at work, just a laptop, and there are times I can do most of my work on my phone since it's faster.

On topic.... I wonder what performance difference there will be once the tile-based rendering is turned on? Perhaps this is the sandbagging AMD was doing; most likely it will be fuxxored for at least a few months while they develop drivers to take advantage of all the new bells and whistles.
 
Why are we talking about overheating APUs in slim cases with tiny noisy fans, running a closed OS with a paid network service? Because they can sometimes run 1080p at 60 FPS?
 
There are no dedicated "compute units" in GPUs. The whole point of GPGPU is to utilize the same FPUs for rendering and for compute. Vega is simply inefficient. AMD needs to make a design with better performance per transistor.

No dedicated compute units, but extra transistors to accommodate the jump to packed FP16 and INT8; that doesn't come for free, precious die space is used. This hasn't really got much to do with efficiency, it's more of a straightforward trade-off. Fermi did this.

What they need to do is split GCN into two different versions: one stripped of all of this, with reduced hardware scheduling, for gaming, and one with all the bells and whistles. At least that's what I think would please the consumer market. That's what Nvidia is doing too.
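For anyone wondering what "packed" means here: two FP16 values share one 32-bit register lane, so a single instruction retires two half-precision operations. A minimal numpy sketch of the bit-level idea (this only illustrates the packing, it doesn't model GPU execution):

```python
import numpy as np

# Two FP16 values fit in the same 32 bits as one FP32 value.
pair = np.array([1.5, -2.25], dtype=np.float16)
packed = pair.view(np.uint32)[0]      # one 32-bit word holding both halves
print(f"packed word: 0x{packed:08x}")

# Rapid Packed Math applies one instruction to both halves at once; the
# elementwise op below stands in for that "one op, two results" behaviour.
print(pair * np.float16(2))           # [ 3.  -4.5]

# Unpacking the word recovers the original pair.
print(np.array([packed], dtype=np.uint32).view(np.float16))
```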
 
No dedicated compute units, but extra transistors to accommodate the jump to packed FP16 and INT8; that doesn't come for free, precious die space is used. This hasn't really got much to do with efficiency, it's more of a trade-off.
Sure, flexible support for FP16/FP32 costs a few transistors, but it's not a big deal. Nvidia have done the same in Tegra and GP100.

Since full FP16 support seems to be Vega's only advantage over GP104, it surprises me that AMD haven't made a big deal out of it. Just imagine announcing AotS, now with 40% more performance with FP16…
 
Sure, flexible support for FP16/FP32 costs a few transistors, but it's not a big deal. Nvidia have done the same in Tegra and GP100.

Since full FP16 support seems to be Vega's only advantage over GP104, it surprises me that AMD haven't made a big deal out of it. Just imagine announcing AotS, now with 40% more performance with FP16…

I believe support is coming; it's been mentioned that Wolfenstein II, for example, will support RPM:

[screenshot: Wolfenstein II listed as supporting Rapid Packed Math]
 
Sure, flexible support for FP16/FP32 costs a few transistors, but it's not a big deal. Nvidia have done the same in Tegra and GP100.

Since full FP16 support seems to be Vega's only advantage over GP104, it surprises me that AMD haven't made a big deal out of it. Just imagine announcing AotS, now with 40% more performance with FP16…


Vega seems to be more of a compute chip that can also play games. If anyone here thinks the big money is in consumer graphics hardware rather than compute, they are delusional. In consumer graphics you sell a card and then have to write drivers for years, supporting multiple developers who each implement dumb ideas in different ways because it's easier for them, while the driver has to help convert all that into actual machine language at a penalty. In compute you put out a library you wrote using all the tricks to make the system perform computations faster, and it's up to the developer to stay inside those parameters; they most likely never upgrade drivers, since their software doesn't change that often.

The more I read about Vega, the more concerned I am that mining clients for it will be here soon, and that it will be great at mining; then the only benefit will be for AMD to sell cards, provide one set of mediocre but stable drivers, and drop Vega once the mining boom is over.
 
There are no dedicated "compute units" in GPUs. The whole point of GPGPU is to utilize the same FPUs for rendering and for compute. Vega is simply inefficient. AMD needs to make a design with better performance per transistor.

Technically, everything that does FP64 can be considered compute hardware (though not necessarily a "compute unit").
 