Wednesday, January 9th 2019

AMD Announces the Radeon VII Graphics Card: Beats GeForce RTX 2080

AMD today announced the Radeon VII (Radeon Seven) graphics card, the world's first GPU built on the 7 nm silicon fabrication process. Based on the 7 nm "Vega 20" silicon with 60 compute units (3,840 stream processors) and a 4096-bit HBM2 memory interface, the chip leverages 7 nm to push engine clock speeds to unprecedented levels (possibly above 1.80 GHz). CEO Lisa Su states that the Radeon VII performs competitively with NVIDIA's GeForce RTX 2080 graphics card. The card features a gamer-friendly triple-fan cooling solution with a design focus on low noise, and carries 16 GB of HBM2 memory on that 4096-bit interface. Available from February 7th, the Radeon VII will be priced at USD $699.
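The announcement doesn't spell out the memory data rate behind those numbers, but a quick back-of-envelope sketch shows why the 4096-bit interface is the headline spec. The 2.0 Gbps-per-pin HBM2 speed below is an assumption for illustration, not a figure from AMD:

```python
# Rough memory-bandwidth estimate for the quoted specs.
# The 4096-bit HBM2 interface is from the announcement; the 2.0 Gbps-per-pin
# data rate is an assumption, not a stated figure.
bus_width_bits = 4096
data_rate_gbps_per_pin = 2.0  # assumed HBM2 pin speed

bandwidth_gbits = bus_width_bits * data_rate_gbps_per_pin  # gigabits per second
bandwidth_gbytes = bandwidth_gbits / 8                      # gigabytes per second

print(f"{bandwidth_gbytes:.0f} GB/s")  # ~1024 GB/s at the assumed pin speed
```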

Update: We went hands-on with the Radeon VII card at CES.

157 Comments on AMD Announces the Radeon VII Graphics Card: Beats GeForce RTX 2080

#126
Gasaraki
Rowsol: HBM is expensive and 16GB is overkill. I would rather see it ship with 8GB and a lower price. Can someone point me to a benchmark of a game using more than 8GB of VRAM?
Only one game, Battlefield 5 running DX12 at 4K.
#127
INSTG8R
Vanguard Beta Tester
Gasaraki: Only one game, Battlefield 5 running DX12 at 4K.
FC5 with the texture pack as well.
#129
cucker tarlson
INSTG8R: No, I own one so I know exactly how much power it uses. I don't go by the outdated numbers you're using, and I've tested both BIOSes many times; the performance difference is non-existent. Oh, and the "HOT" BIOS tops out at 276W, so again, your 300W is outdated. I've never even made ANY attempt at undervolting. Yeah, I can easily push it over 300 (ran the "hot" BIOS this morning with the clocks at 1750/1000 and a 50% power limit and hit 347W peak), so nobody's saying it can't eat up power. But please drop your 300W assumptions; they're wrong and your opinion, not fact, at this point.
First you say it's never over 240W, now it's 280W. Measure that at 4K with no FPS limit. It's impossible that all the reviews lie.
#130
FordGT90Concept
"I go fast!1!11!1!"
R0H1T: Wasn't having fewer ROPs supposedly one of the biggest drawbacks of Fury & Vega? If you cut the ROPs in half, what would happen to the performance? Surely no one wants a Vega 64 respin?
Yes, they wouldn't have doubled the ROP count if they didn't think it was beneficial.
#131
INSTG8R
Vanguard Beta Tester
cucker tarlson: First you say it's never over 240W, now it's 280W. Measure that at 4K with no FPS limit. It's impossible that all the reviews lie.
You brought up the BIOSes; those are the numbers from both. Reading comprehension? Still, neither goes over 300W unless I make it. I also told you how high I can make it go, which is basically slamming it to the wall to do it... Anything else you need clarification on while I continue to disprove your 300W BS?

Edit: Not seeing 300 here either...
#132
Gasaraki
INSTG8R: Equating process node to performance is a fool's game... Turing is HUGE! Makes my Vega look small...

Edit: I guess me equating die size is no better, but the point still stands.
Yeah, I don't get why people care what nanometer the chip is made with. It could be one nanometer for all I care; if it's hotter, slower, and more power hungry than something that's 25nm, do I give two shits?
#133
medi01
cucker tarlson: a 550mm2 12nm card from NVIDIA runs at 230W with RT features; a 7nm 330mm2 Vega can barely match it at 300W.
They went with old arch, doubling ROPs and dropping some CUs and that alone got them basically perf/chip size parity.

Nobody knows power consumption figures at this point.
#134
Gasaraki
siluro818: Yeah that guy is just pathetic. I thought I was reading Trump quotes for a second there :D
He's just saying what everyone in here is saying. The card is not impressive at all.
#135
cucker tarlson
medi01: They went with old arch, doubling ROPs and dropping some CUs, and that alone got them basically perf/chip size parity.

Nobody knows power consumption figures at this point.
I thought I saw 300W mentioned in the OP. More ROPs is what they should've done a long time ago. It's all in the clock speeds IMO; at 7nm they should be able to clock higher than NVIDIA's 16nm.
#136
Gasaraki
newtekie1: True, but do they need 128 ROPs in a desktop graphics card at this performance level?

A redesign isn't really necessary; just remove two of the HBM2 stacks from the GPU and disable two of the memory controllers (which would also disable half of the ROPs, if I'm not mistaken).
But then you have the Vega 64...
INSTG8R: FC5 with the texture pack as well.
Well, I mean stock. You can go download 4K Skyrim textures that load up the video RAM, but that's not the norm for that game.
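For what it's worth, here is a rough sketch of what newtekie1's "drop two HBM2 stacks" idea quoted above would imply, assuming the 16 GB is split across four stacks with a 1024-bit channel each (the thread doesn't state that split, so treat it as an assumption):

```python
# Back-of-envelope for the "remove two HBM2 stacks" suggestion quoted above.
# Assumes four 4 GB stacks, each on its own 1024-bit channel (assumption, not from the thread).
total_gb, total_bus_bits, stacks, rops = 16, 4096, 4, 128

gb_per_stack = total_gb // stacks          # 4 GB per stack
bits_per_stack = total_bus_bits // stacks  # 1024-bit channel per stack

remaining_stacks = stacks - 2
print(remaining_stacks * gb_per_stack, "GB")          # 8 GB
print(remaining_stacks * bits_per_stack, "bit bus")   # 2048-bit
print(rops // 2, "ROPs, if they really track the memory controllers")  # 64
```

Which lands right back at a Vega 64-style configuration, the point Gasaraki makes above.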
#137
Assimilator
Gasaraki: But then you have the Vega 64...
Exactly. The fact that AMD straight up doubled Vega 20's ROPs compared to Vega 10, rather than going to something like 96 ROPs/12GB, strongly suggests that they are well aware Vega 10's performance is ROP-limited.
#138
prtskg
cucker tarlson: It's all in the clock speeds IMO; at 7nm they should be able to clock higher than NVIDIA's 16nm.
Clock speeds depend on architecture too and GCN has never been a good clocker.
#139
siluro818
Gasaraki: He's just saying what everyone in here is saying. The card is not impressive at all.
Pff yeah right xD
#140
gamerman
Hold your horses and wait for real reviews. If the owner and AMD CEO says it's on the same level as the RTX 2080, you should think twice... she compared it to the FE version, of course.
And if the Radeon VII needs 3 fans for the reference model, what can AIBs offer?

Looks like it's highly OC'd... let's see.

From what I heard, the RTX 2080 is easily faster than the Radeon VII, and if we compare efficiency, the RTX 2080 crushes the Radeon VII.
The Radeon VII, even built on a 7nm process, is not "engineering hurray" hardware; far from it.
AMD fans disappoint... again.

Well, wait for the tests; a few weeks and you can see for yourself.

With a 7nm process and still over 300W gaming power draw, and still losing to the RTX 2080 and clearly to the RTX 2080 Ti, it's NOT and can't be an editor's choice; that's a fact.

AMD has all the aces in their hands, but the result is average.

Well, let's see when NVIDIA releases their next RTX 300 series; it's for sure built on 7nm as well, and NOT a single model will go over 250W!

Bad work, AMD. Again.
#141
cucker tarlson
gamerman: AMD fans disappoint... again.
Yes, NVIDIA is yet again ahead in fan performance.
#142
Unregistered
Interesting bit of news (again) from Linus... he was looking at the VII and noticed it was whisper quiet EVEN while running a game. The fans were not running at high speed, and those do not look like particularly great fan blades...

Intriguing...

#143
TheinsanegamerN
Honestly, the Radeon VII sucks IMO. It has the same problem as Turing, in that the % perf. boost in no way reflects the price.

You can get a Vega 64 for $400, or $500 if you don't want to wait and just pick one up anywhere.

OR, you can get 30% more performance at $800. Which makes no sense in terms of perf/$. And much like how I think Turing sucks due to perf/$ compared to Pascal, I think the Radeon VII sucks compared to Vega. This chip should be $500, not $800.

This also doesn't look good for AMD's next generation IMO. They need 7nm to meet the performance and power consumption of 14nm Turing. When Turing is inevitably ported to 7nm, AMD will once again be several steps behind.
#144
FordGT90Concept
"I go fast!1!11!1!"
I think they're both staring at the same problem (7 nm) and reached the same conclusion (prices have to go up). It may get worse with each coming process generation, and they'll come slower. Blame physics.
#145
TheinsanegamerN
FordGT90Concept: I think they're both staring at the same problem (7 nm) and reached the same conclusion (prices have to go up). It may get worse with each coming process generation, and they'll come slower. Blame physics.
7nm isn't THAT expensive. To justify these prices, 7nm would have to be something like 4x-5x as expensive as 14nm, and I don't see that being possible, even if it is newer tech.

Sure, prices probably won't reach early-2010s levels anytime soon, but the current jacked-up prices can solely be laid at the feet of no competition. NVIDIA has had sole control of the market for 3 years now, and AMD still isn't releasing much in the way of competition, seeing as their 7nm chip manages to be less efficient than a 14nm NVIDIA chip.

The moment AMD bothers to compete with Navi, prices will magically begin to fall as team red and team green begin to undercut each other. Much like the CPU space, where the ability to economically make an 8-core Intel chip on the mainstream socket went from impossible to publicly available once Ryzen hit the scene.
#147
TheinsanegamerN
FordGT90Concept: Look at the second reply here:
www.quora.com/How-much-does-it-cost-to-tapeout-a-28-nm-14-nm-and-10-nm-chip

AMD showed a similar chart itself. Costs are growing exponentially, not linearly. 7 nm likely costs double what 16-12 nm did. Sprinkle in the fact that HBM2 and GDDR6 supply is limited, and you got a perfect storm of higher prices.
So 2x the price of silicon, given that silicon is not even 50% of the cost of a full GPU, somehow results in graphics cards being 100% more expensive? This also doesn't take into account the fact that the same-core-count GPU would have a much smaller die at 7nm, meaning more dies per wafer. Once the process matures, complex GPUs will be easier/cheaper to mass produce with limited wafers. If 7nm is a full, proper node shrink, then you can build roughly 4x the dies on the same wafer area.

HBM2 has been in "short supply" for over 18 months now; that was the excuse when Vega 64 came out. So either that excuse is complete bunk, or "short supply" is actually "normal supply" and AMD really needs to stop using it. Either way, it is a poor decision by AMD, resulting in a lack of proper competition and leading to massive price rises.

Go look at NVIDIA's quarterly earnings reports. A 47% Y/Y increase in net income for Q3 2018, for instance, despite 14nm being 3x more expensive than 28nm. If the price increases mattered as much as you say they did, that net income would not be so freaking high. NVIDIA and AMD are taking advantage of the market to pump up prices and net incomes through the roof; the higher costs would justify a marginal price increase for GPUs, not the massive doubling we have seen over the last 3 years. This is only possible because NVIDIA has no competition, and now AMD is trying to get a piece of that pie.

If we had proper GPU competition, prices would likely be 30-40% lower than they are now. The high GPU prices in the current market are a direct result of NVIDIA wanting to line their pockets with fatter margins, and their financial results show that quite blatantly with record revenues, record net profits, and higher cash dividends and stock buybacks. The "rising costs" excuse is simple gaslighting to mislead people into accepting higher prices. While it is highly unlikely we will ever see $300 x80 GPUs again, they shouldn't cost anywhere near $800.
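To make the wafer-economics argument concrete, here is a toy calculation along the lines described above. The 2x wafer-cost figure and the ideal 4x density are the assumptions argued in the thread, not real foundry pricing, and it ignores the yield/defect issue raised in the next post:

```python
# Toy numbers only -- the 2x wafer cost and ideal full-node shrink are the
# assumptions argued in the thread, not actual foundry figures.
wafer_cost_14nm = 1.0
wafer_cost_7nm = 2.0 * wafer_cost_14nm        # "7 nm likely costs double"

die_area_14nm = 1.0
die_area_7nm = die_area_14nm * (7 / 14) ** 2  # ideal shrink: 1/4 the area per die

dies_per_wafer_14nm = 100 / die_area_14nm     # ignores edge loss and defects
dies_per_wafer_7nm = 100 / die_area_7nm

cost_per_die_14nm = wafer_cost_14nm / dies_per_wafer_14nm
cost_per_die_7nm = wafer_cost_7nm / dies_per_wafer_7nm

print(cost_per_die_7nm / cost_per_die_14nm)   # 0.5 -> per-die cost halves under these ideal assumptions
```

In practice the shrink is never a full 4x and early yields are worse, which is the counterpoint below, but it illustrates why a 2x wafer cost alone doesn't force a 2x card price.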
#148
FordGT90Concept
"I go fast!1!11!1!"
The costs are higher across the board: memory, GPU, PCB (especially integration of components on to it), and HSFs (AMD and NVIDIA both went triple fan).

You're forgetting that smaller nodes also mean more defects/higher failure rates. Why do you think AMD went full chiplet design for 7nm Ryzen? It's a cost-containing measure.

The bulk of HBM2 goes into cards that sell for many thousands of dollars (MI25 apparently still goes for $4900 or more). Radeon Vega cards get whatever is left.

AMD put the squeeze on Intel because of the chiplet design that let them make processors cheaper. Chiplet isn't something that really works for GPUs. Navi's original stated goal was to do exactly that; however, Navi being chiplet in design was apparently descoped years ago. NVIDIA hasn't even dabbled in chiplet as far as I know. Without chiplet GPU designs, costs will only get worse as process shrinks.
#149
moproblems99
TheinsanegamerN: you can get 30% more performance at $800
Did I miss something? Isn't it $699?
#150
siluro818
TheinsanegamerN: Honestly, the Radeon VII sucks IMO. It has the same problem as Turing, in that the % perf. boost in no way reflects the price.

You can get a Vega 64 for $400, or $500 if you don't want to wait and just pick one up anywhere.

OR, you can get 30% more performance at $800. Which makes no sense in terms of perf/$. And much like how I think Turing sucks due to perf/$ compared to Pascal, I think the Radeon VII sucks compared to Vega. This chip should be $500, not $800.

This also doesn't look good for AMD's next generation IMO. They need 7nm to meet the performance and power consumption of 14nm Turing. When Turing is inevitably ported to 7nm, AMD will once again be several steps behind.
You get 30% more performance and double the memory for a 40% higher price. It's a fair increase considering this is not a new technology superseding a previous generation. Look at it as the late high-end model the Vega line lacked until now.
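The arithmetic behind that framing, taking a $499 Vega 64 and the ~30% uplift as given (both are the thread's figures, not measured results):

```python
# Perf-per-dollar behind "30% more performance for a 40% higher price".
# The $499 Vega 64 baseline and the 30% uplift are the thread's numbers, not benchmarks.
vega64_price, vega64_perf = 499, 1.00
radeon7_price, radeon7_perf = 699, 1.30

price_increase = radeon7_price / vega64_price - 1
value_ratio = (radeon7_perf / radeon7_price) / (vega64_perf / vega64_price)

print(f"{price_increase:.0%} higher price")  # ~40%
print(f"{value_ratio:.2f}x the perf/$")      # ~0.93x -- slightly worse value, typical for a halo part
```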