Wednesday, July 19th 2017

AMD's RX Vega Low-Key Budapest Event: Vega Pitted Against GTX 1080

On the first stop of AMD's two-continent-spanning RX Vega tour (which really only counts three locations), the company pitted their upcoming RX Vega graphics card (which we expect to be their flagship offering) against NVIDIA's GTX 1080. The event itself was pretty subdued, and there was not much to see when it comes to the RX Vega graphics card - literally. Both it and the GTX 1080 were enclosed inside PC towers, with event-goers not allowed to catch even a glimpse of the piece of AMD hardware that has best approximated a unicorn in recent times.

The Vega-powered system also used a Ryzen 7 processor, and the cards were running Battlefield 1 (or Sniper Elite 4; there's been plenty of discussion about that, but the first image below does show a first-person view) on nondescript monitors, one supporting FreeSync, the other G-Sync. The monitors' model names were covered by cloth so that attendees couldn't tell which system was running which graphics card, though given ASUS' partnership in the event, both were probably of ASUS make. The resolution used was 3440 x 1440, which should mean over 60 FPS on the GTX 1080 on Ultra. Users who attended the event reported that one of the systems lagged slightly in one portion of the demo, though we can't confirm which one (and I'd say that was AMD's intention.)
All in all, I have to say, this tour doesn't inspire confidence in me. This isn't the kind of "in your face" comparison we're used to seeing from companies that know they have a winning product; were the comparison largely in AMD's favor, I posit the company would be taking full advantage of that by showcasing its performance leadership. There did seem to be an inordinate amount of smoke and mirrors here, with AMD going out of its way to prevent attendees from discerning between its offering and its competitor's.
AMD reportedly told attendees that the AMD and NVIDIA systems had a $300 price difference in AMD's favor. With all other hardware being equal, and accounting for AMD's stance that a FreeSync monitor tends to cost around $200 less than a comparable G-Sync-enabled one, that leaves around $100 in savings attributable solely to the RX Vega part of the equation. This means the RX Vega could sell in the $459-$500 bracket, if current GTX 1080 pricing is what AMD considered.
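The price arithmetic works out as follows; the GTX 1080 street prices used below are hypothetical placeholders, not AMD figures:

```python
# Back-of-the-envelope sketch of the pricing math above.
# All inputs are assumptions taken from the article, not official AMD numbers.
system_price_gap = 300   # reported gap between the two demo systems, USD
freesync_savings = 200   # AMD's claimed FreeSync vs. G-Sync monitor delta, USD

# Whatever the monitor delta doesn't account for falls to the GPU itself.
gpu_savings = system_price_gap - freesync_savings  # $100 toward the card

# Hypothetical mid-2017 street prices for the GTX 1080.
for gtx1080_price in (559, 600):
    vega_price = gtx1080_price - gpu_savings
    print(f"GTX 1080 at ${gtx1080_price} -> RX Vega around ${vega_price}")
```

With those placeholder 1080 prices, the implied Vega range lands at roughly $459-$500, matching the bracket above.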
Sources: Reddit User @ Szunyogg, RX Vega Budapest Google Photos, WCCFTech

175 Comments on AMD's RX Vega Low-Key Budapest Event: Vega Pitted Against GTX 1080

#126
justimber
yep. same here. will reserve judgement after the whole product is released.
Posted on Reply
#127
FordGT90Concept
"I go fast!1!11!1!"
cdawall: 440w. I'm sorry, but as someone who deals with that kind of heat in a case from graphics cards people don't realize the other issues it causes.
Overclocked on what is effectively pre-production silicon.
Posted on Reply
#128
RejZoR
cdawall: Name the last time the "not optimized" driver argument was made? They are never optimized they are never this never that etc. Stop making excuses for bad PR hyping up half finished products.



440w. I'm sorry, but as someone who deals with that kind of heat in a case from graphics cards people don't realize the other issues it causes.
Oh god, you're still not getting what "optimizations" we're talking about. These aren't the kind of "5% here and 3% there" optimizations that entirely come down to individual games. We were talking about optimizations in terms of giving the GAMING RX Vega a FULLY WORKING driver. There was no need to give Vega FE such features, as it performs OK as it is even without any of the fancy stuff, whereas RX Vega explicitly depends on it. Releasing it when not ready would be just straight foolish, as everyone would piss on it like they have on Vega FE. That's what we've been saying the entire bloody time. If AMD knew they couldn't get anything out of it, what would be the point in postponing it for a whole month if the end result would be exactly the same?

You're bitching about bad PR and then in the same breath you expect AMD to say out loud that they don't have the drivers finished yet. That wouldn't be bad press then, somehow... It's no secret that AMD doesn't have the resources to pull off the "magic" shit NVIDIA does, so give them some bloody slack, geez. Everyone's pissing on AMD like their life depends on it. Tired of waiting? Buy a god damn GTX 1080. Many have and many will. I've decided to wait even though I was this <> close to just hitting the BUY button for an AORUS GTX 1080 Ti. I didn't wait last time with Fury X, but I will now. RX Vega may turn out to be a good card despite power consumption, but even if it doesn't, it'll still create some sort of competition in the market, potentially making NVIDIA cards cheaper. If that means just 50€ less, then so be it. Right now is actually the worst time to be jumping on anything, with RX Vega literally around the corner. But whatever man, gotta go for a beer with the fanboys now...

Also, those 440W. Yeah, it's a lot. But that's the water-cooled AND overclocked version, where you can place the radiator on the case exhaust, meaning it won't really affect the case's internal temperature. The normal stock state is 350W, and the air-cooled version is 300W. Still not ideal, but for the right price and maybe special features, who cares? Sure, it'll heat up the room, but so does my GTX 980. I have to run AC anyway. So, what difference does it make? In winter, you'll save on heating. This is no joke; my PC alone has heated my place for 2 winters now. The central heating radiator was closed except on the coldest days.

People make way too much drama about power consumption. If it's great, excellent. If it's not, then you check the other benefits or tradeoffs and decide. But just universally taking a piss at products with a certain power draw or thermals is becoming a really annoying habit of the actual fanboys. Particularly from the green camp. My HD 7950 at 1.2 GHz was also a freaking furnace. But it was stupid fast. I didn't care. That was my decision. And it's the same with my GTX 980. I could run it ultra cool at stock, but I've decided to max it all out. It's also a furnace; I could fry eggs on the backplate, it's that hot. But that's what I wanted and willingly decided on. Who are you to say what I want or don't want? And the same applies to all potential buyers of RX Vega. They are adults for the most part; we don't need your parroting about how Vega's power draw is shit and horrible. We'll decide about that when it's actually released.
Posted on Reply
#130
cdawall
where the hell are my stars
RejZoR: Oh god, you're still not getting what "optimizations" we're talking about. These aren't the kind of "5% here and 3% there" optimizations that entirely come down to individual games. We were talking about optimizations in terms of giving the GAMING RX Vega a FULLY WORKING driver. There was no need to give Vega FE such features, as it performs OK as it is even without any of the fancy stuff, whereas RX Vega explicitly depends on it. Releasing it when not ready would be just straight foolish, as everyone would piss on it like they have on Vega FE. That's what we've been saying the entire bloody time. If AMD knew they couldn't get anything out of it, what would be the point in postponing it for a whole month if the end result would be exactly the same?
It took AMD over two years to get the R9 Fury competitive with its 980 Ti competitor. In that time NVIDIA released an entire series of cards and AMD released half of one. So I guess if your plan is to have a card that has a complete driver two years from now, buy Vega on release. We also don't know what percentage it could be improved by. No one knows if the HBCC will actually help in games like AMD has hyped. No one knows if AMD can actually get the TBR working like they hyped.
RejZoR: You're bitching about bad PR and then in the same breath you expect AMD to say out loud that they don't have the drivers finished yet. That wouldn't be bad press then, somehow... It's no secret that AMD doesn't have the resources to pull off the "magic" shit NVIDIA does, so give them some bloody slack, geez. Everyone's pissing on AMD like their life depends on it. Tired of waiting? Buy a god damn GTX 1080. Many have and many will. I've decided to wait even though I was this <> close to just hitting the BUY button for an AORUS GTX 1080 Ti. I didn't wait last time with Fury X, but I will now. RX Vega may turn out to be a good card despite power consumption, but even if it doesn't, it'll still create some sort of competition in the market, potentially making NVIDIA cards cheaper. If that means just 50€ less, then so be it. Right now is actually the worst time to be jumping on anything, with RX Vega literally around the corner. But whatever man, gotta go for a beer with the fanboys now...
Oh, so you mean you'll just buy NVIDIA because they have the better product? Is this before or after you complain about how unfair their market share is?
RejZoR: Also, those 440W. Yeah, it's a lot. But that's the water-cooled AND overclocked version, where you can place the radiator on the case exhaust, meaning it won't really affect the case's internal temperature. The normal stock state is 350W, and the air-cooled version is 300W. Still not ideal, but for the right price and maybe special features, who cares? Sure, it'll heat up the room, but so does my GTX 980. I have to run AC anyway. So, what difference does it make? In winter, you'll save on heating. This is no joke; my PC alone has heated my place for 2 winters now. The central heating radiator was closed except on the coldest days.
Couple of things: water cooling a GPU decreases power consumption at the same clocks/volts. More heat turns into more leakage, which turns into more power consumption. We have a card sucking down 440W, and not even close to all of that is being exhausted out of the case by that tiny 120mm radiator, which means it's building up in the case. The rule of thumb is 150W for every 120mm of radiator in a custom loop, and we have already seen garbage AIOs can't touch that. Even stock, that leaves 200W going somewhere.
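The rule of thumb above can be turned into a quick heat budget; the per-120mm figure and the card wattages are the thread's numbers, not measurements:

```python
# Heat-budget sketch using the rule of thumb quoted above:
# roughly 150 W dissipated per 120 mm of radiator in a custom loop.
# Card wattages are the figures discussed in the thread, not measurements.
W_PER_120MM = 150

def residual_heat(card_watts, radiator_mm=120):
    """Heat (W) left to build up in the case after the radiator's share."""
    dissipated = W_PER_120MM * (radiator_mm / 120)
    return max(card_watts - dissipated, 0)

print(residual_heat(440))  # overclocked, water-cooled figure: 290 W into the case
print(residual_heat(350))  # stock water-cooled figure: 200 W into the case
```

The stock case reproduces the "200w somewhere" figure above; the overclocked figure is worse still, and a real AIO will fall short of even the 150 W rule of thumb.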
RejZoR: People make way too much drama about power consumption. If it's great, excellent. If it's not, then you check the other benefits or tradeoffs and decide. But just universally taking a piss at products with a certain power draw or thermals is becoming a really annoying habit of the actual fanboys. Particularly from the green camp. My HD 7950 at 1.2 GHz was also a freaking furnace. But it was stupid fast. I didn't care. That was my decision. And it's the same with my GTX 980. I could run it ultra cool at stock, but I've decided to max it all out. It's also a furnace; I could fry eggs on the backplate, it's that hot. But that's what I wanted and willingly decided on. Who are you to say what I want or don't want? And the same applies to all potential buyers of RX Vega. They are adults for the most part
7950s with a heavy overclock pull half the wattage we are talking about. I don't think you realise how much heat 440W is. The good RX 480s draw 95W under load. So that's 4 RX 480s' worth of heat, 5 or 6 GTX 1060s, nearly 3 GTX 1070s, 2 GTX 1080s, nearly 2 GTX 1080 Tis. Hell, that's two FX-9590s' worth of power.

You could have an entire custom loop cooled 1080ti/7700k rig for less wattage than the Vega card shown.
RejZoR: we don't need your parroting about how Vega's power draw is shit and horrible. We'll decide about that when it's actually released.
Pot, meet kettle. You have been doing the exact same thing as a Vega proponent for months. The VRM design should have been the dead giveaway that this card would be power-hungry. AMD didn't invest money into the best reference VRM section for fun. These cards suck power like it's going out of style.
FordGT90Concept: Overclocked on what is effectively pre-production silicon.
Can you name a time when any manufacturer put the worst silicon on the most expensive cards it sells? Want to know something scary? There is a chance that is the best silicon AMD has right now.
Posted on Reply
#131
Prince Valiant
EarthDog: Court. Jester.
And the people that keep aggressively replying :laugh:?
Posted on Reply
#132
efikkan
RejZoR: Oh god, you're still not getting what "optimizations" we're talking about. These aren't the kind of "5% here and 3% there" optimizations that entirely come down to individual games. We were talking about optimizations in terms of giving the GAMING RX Vega a FULLY WORKING driver. There was no need to give Vega FE such features, as it performs OK as it is even without any of the fancy stuff, whereas RX Vega explicitly depends on it.
I'm just going to remind you what AMD says themselves:
"The Radeon™ Vega Frontier Edition graphics card is designed to simplify and accelerate game creation by providing a single GPU that is optimized for every stage of this workflow, from asset production, to playtesting, to performance optimization."

So which game features have AMD intentionally disabled on the card they call optimized for gaming?
RejZoR: you expect AMD to say out loud that they don't have the drivers finished yet. That wouldn't be bad press then, somehow... It's no secret that AMD doesn't have the resources to pull off the "magic" shit NVIDIA does…
So, how much time are they going to need? Another year? They've had more time than usual to polish the driver.
cdawall: Can you name a time when any manufacturer put the worst silicon on the most expensive cards it sells? Want to know something scary? There is a chance that is the best silicon AMD has right now.
Exactly. It would be unusual if they picked the lower binning and sold it for more…
Posted on Reply
#133
RejZoR
@efikkan
It's not about what AMD has intentionally disabled; it's what AMD hasn't implemented yet... Vega FE was released without that stuff because it was assumed it wasn't a game changer if it lacked it for now. A pure gaming card like RX Vega, however, entirely depends on those capabilities. If you don't have them yet, releasing it is somewhat pointless. It's why I keep bitching about idiots testing Vega FE as if it were a pure gaming card.

How much longer are they gonna need? RX Vega is getting released for real at the end of this month. That much.
Posted on Reply
#134
cdawall
where the hell are my stars
Bullshit
Posted on Reply
#135
FordGT90Concept
"I go fast!1!11!1!"
cdawall: Can you name a time when any manufacturer put the worst silicon on the most expensive cards it sells? Want to know something scary? There is a chance that is the best silicon AMD has right now.
Nope, but I also can't name a time where a company released their top of the line chip twice, several months apart, announcing that the chip is still coming. And yes, that's a distinct possibility too.

AMD might be starting a new trend of giving developers access to new silicon before anyone else so they have an opportunity to optimize for it before the main product launches.
Posted on Reply
#136
cdawall
where the hell are my stars
FordGT90Concept: Nope, but I also can't name a time where a company released their top of the line chip twice, several months apart, announcing that the chip is still coming. And yes, that's a distinct possibility too.

AMD might be starting a new trend of giving developers access to new silicon before anyone else so they have an opportunity to optimize for it before the main product launches.
Or they could have literally zero stock of HBM2, or this giant, expensive, hard-to-manufacture GPU could have garbage yields.
Posted on Reply
#137
FordGT90Concept
"I go fast!1!11!1!"
A likely contributing factor which hopefully they're rectifying by taping out a GDDR5X version of the chip for late 2017/early 2018.
Posted on Reply
#138
cdawall
where the hell are my stars
FordGT90Concept: A likely contributing factor which hopefully they're rectifying by taping out a GDDR5X version of the chip for late 2017/early 2018.
That isn't going to change how huge this GPU is.
Posted on Reply
#139
FordGT90Concept
"I go fast!1!11!1!"
The GPU is not huge compared to, say, Fiji. Fiji was not only built on a larger process node, it also had a 4096-bit memory bus instead of Vega's 2048-bit one. Fiji was literally at the limit of what the interposer tech could handle; Vega is not.
Posted on Reply
#140
cdawall
where the hell are my stars
FordGT90Concept: The GPU is not huge compared to, say, Fiji. Fiji was not only built on a larger process node, it also had a 4096-bit memory bus instead of Vega's 2048-bit one. Fiji was literally at the limit of what the interposer tech could handle; Vega is not.
Still has an interposer, 4096 stream processors, a 2048-bit bus, the HBCC chip, etc.
Posted on Reply
#141
Gasaraki
RejZoR: Benchmarks say otherwise. But since you own them, I guess everyone else is wrong, right?
LOL, first time I'm hearing that the Fury X beats the 1080 Ti in games.
Posted on Reply
#142
FordGT90Concept
"I go fast!1!11!1!"
cdawall: Still has an interposer, 4096 stream processors, a 2048-bit bus, the HBCC chip, etc.
Vega is only slightly larger (484 mm²) than GP102 (471 mm²). By comparison, Fiji is 596 mm².
Posted on Reply
#143
cdawall
where the hell are my stars
FordGT90Concept: Vega is only slightly larger (484 mm²) than GP102 (471 mm²). By comparison, Fiji is 596 mm².
GP102 also has at maximum a 384-bit bus width, no interposer, and is sold to most consumers as neutered/re-purposed broken dies.
Posted on Reply
#144
FordGT90Concept
"I go fast!1!11!1!"
Does that imply their GP102 yields are pretty crappy? Vega has more compute cores than GP102, but not by much. On paper, Vega is the faster GPU of the two. Vega can reasonably be expected to have fairly poor yields too. Do they really have enough inventory of binned chips for the Frontier Edition, or is it an older silicon revision (water-cooled got binned chips while air-cooled got the rest)?

We don't really know until the consumer RX Vega card launches. Frontier Edition was just weird.
Posted on Reply
#145
efikkan
RejZoR: It's not about what AMD has intentionally disabled; it's what AMD hasn't implemented yet... Vega FE was released without that stuff because it was assumed it wasn't a game changer if it lacked it for now. A pure gaming card like RX Vega, however, entirely depends on those capabilities. If you don't have them yet, releasing it is somewhat pointless. It's why I keep bitching about idiots testing Vega FE as if it were a pure gaming card.
What precisely are you talking about here? Is it the tiled rasterization again? Do you even know what it is? No such feature is implemented in a driver; it's a hardware scheduling feature.
RejZoR: How much longer are they gonna need? RX Vega is getting released for real at the end of this month. That much.
They've had working hardware to test since last November, and even demonstrated it working in December. That's about nine months of polishing the driver, which is more than they usually need. And it's not even a new architecture.
FordGT90Concept: A likely contributing factor which hopefully they're rectifying by taping out a GDDR5X version of the chip for late 2017/early 2018.
Hopefully they will, because going with HBM has been their greatest mistake with Vega. It would help with supply and cost, though it wouldn't help with power consumption, performance, etc.
Posted on Reply
#146
FordGT90Concept
"I go fast!1!11!1!"
HBM2 should be lower power and higher performance than GDDR5X.

Thing is, we're just speculating. AMD has never said why they released a Frontier Edition ahead of the main product. They also never said why Vega keeps getting kicked down the road. I would love to be a fly on the wall in RTG meetings on Vega to hear their reasoning for these things.
Posted on Reply
#147
cdawall
where the hell are my stars
FordGT90Concept: Does that imply their GP102 yields are pretty crappy? Vega has more compute cores than GP102, but not by much. On paper, Vega is the faster GPU of the two. Vega can reasonably be expected to have fairly poor yields too. Do they really have enough inventory of binned chips for the Frontier Edition, or is it an older silicon revision (water-cooled got binned chips while air-cooled got the rest)?

We don't really know until the consumer RX Vega card launches. Frontier Edition was just weird.
GP102 released in May of 2016. I would imagine that in the more than a year it has been publicly available, more than enough dies have existed that did not meet QC. The fact that they have GP104 dies going onto GP106 cards hints that yields might be less than perfect (or that GP106 cannot meet demand).
Posted on Reply
#149
efikkan
FordGT90Concept: HBM2 should be lower power and higher performance than GDDR5X.
HBM2 is a little more energy efficient than GDDR, but compared to the hot GPU it wouldn't matter much.
384-bit GDDR5X is faster than 2048-bit HBM2 BTW…
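That comparison can be checked with the peak-bandwidth formula (bus width / 8 × per-pin data rate); the per-pin rates below are assumed, typical figures for 11 Gbps GDDR5X and Vega FE-class HBM2, not quoted from this thread:

```python
# Peak-bandwidth sketch backing the GDDR5X vs. HBM2 comparison above.
# Per-pin data rates are assumptions: 11 Gbps GDDR5X (Titan Xp-class)
# and ~1.89 Gbps HBM2 (Vega FE-class).
def peak_bandwidth_gbps(bus_width_bits, data_rate_gbps_per_pin):
    """Peak memory bandwidth in GB/s: (bus width in bytes) x (rate per pin)."""
    return bus_width_bits / 8 * data_rate_gbps_per_pin

gddr5x = peak_bandwidth_gbps(384, 11.0)    # 528.0 GB/s
hbm2 = peak_bandwidth_gbps(2048, 1.89)     # ~483.8 GB/s
print(gddr5x, hbm2)
```

Under those assumed rates, the narrow-but-fast 384-bit GDDR5X bus does edge out the wide-but-slow 2048-bit HBM2 bus.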
FordGT90ConceptThing is, we're just speculating. AMD has never said why they released a Frontier Edition ahead of the main product. They also never said why Vega keeps getting kicked down the road. I would love to be a fly on the wall in RTG meetings on Vega to hear their reasoning for these things.
We do know HBM(2) supplies are a problem, even for Nvidia.
Posted on Reply
#150
S@LEM!
FordGT90Concept: HBM2 should be lower power and higher performance than GDDR5X.

Thing is, we're just speculating. AMD has never said why they released a Frontier Edition ahead of the main product. They also never said why Vega keeps getting kicked down the road. I would love to be a fly on the wall in RTG meetings on Vega to hear their reasoning for these things.
The 1080 Ti messed them up.
Posted on Reply