Wednesday, January 9th 2019

AMD Announces the Radeon VII Graphics Card: Beats GeForce RTX 2080

AMD today announced the Radeon VII (Radeon Seven) graphics card, the world's first GPU built on the 7 nm silicon fabrication process. Based on the 7 nm "Vega 20" silicon with 60 compute units (3,840 stream processors) and a 4096-bit HBM2 memory interface, the chip leverages 7 nm to dial up engine clock speeds to unprecedented levels (possibly above 1.80 GHz). CEO Lisa Su states that the Radeon VII performs competitively with NVIDIA's GeForce RTX 2080 graphics card. The card features a gamer-friendly triple-fan cooling solution with a design focus on low noise, and carries 16 GB of HBM2 memory. Available from February 7th, the Radeon VII will be priced at $699.
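
For readers who want to sanity-check the headline numbers, here is a minimal back-of-the-envelope sketch in Python. The 64 stream processors per compute unit is standard for GCN GPUs; the 2.0 Gbps HBM2 pin speed is an assumption rather than an AMD-confirmed figure.

```python
# Back-of-the-envelope check of the announced Radeon VII specs.
CUS = 60
SPS_PER_CU = 64            # standard for all GCN GPUs
BUS_WIDTH_BITS = 4096      # four HBM2 stacks, 1024 bits each
HBM2_GBPS_PER_PIN = 2.0    # assumed effective pin speed, not confirmed by AMD

stream_processors = CUS * SPS_PER_CU
bandwidth_gb_s = BUS_WIDTH_BITS / 8 * HBM2_GBPS_PER_PIN

print(f"Stream processors: {stream_processors}")           # 3840
print(f"Peak memory bandwidth: {bandwidth_gb_s:.0f} GB/s") # 1024 GB/s (~1 TB/s)
```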

Update: We went hands-on with the Radeon VII card at CES.

157 Comments on AMD Announces the Radeon VII Graphics Card: Beats GeForce RTX 2080

#26
IceShroom
TesterAnon: It's 2x 8-pin, you do the math.
Well, less than a 2080 Ti with 3x 8-pin.
#27
the54thvoid
Super Intoxicated Moderator
Also, to point out the obvious: isn't this AMD's most expensive consumer gfx card ever?
#28
Metroid
Mussels: she was talking monitor resolution ;)
I saw an 8-pin and a 6-pin; I might be wrong about it.
#29
INSTG8R
Vanguard Beta Tester
TesterAnon: It's 2x 8-pin, you do the math.
Derp! And the 2080 has how many? Baseless assumptions are baseless :rolleyes:
#30
GhostRyder
Hmm, it's interesting to say the least. That triple-fan cooler could be nice, but I hope it's not going to be running at high speeds at stock settings; otherwise that's going to be a drawback.

16 GB HBM??? Holy cow, that's ridiculous. But how much does it help?

Wish this was competing with the RTX 2080 Ti... Sadly, it still seems the only card worth getting in terms of an upgrade for me would be the 2080 Ti.

I read the clocks are expected to be up to 1.8 GHz; if it can overclock to the 2.4 GHz listed on here, that would be interesting to see!
#31
berniebennybernard
Arpeegee: Damn, it's looking like the RX 590 was a foreshadowing of the pricing structure AMD has decided to take against NVIDIA: keep prices comparable for comparable performance. This might hurt AMD more than make them competitive in the long term.
I know those running Hackintoshes and eGPUs with their Macs will probably love this, as NVIDIA has been neglecting drivers for macOS.
#32
delshay
FordGT90Concept: Why 60 CU instead of 64 CU? AMD giving themselves wiggle room in terms of manufacturing? Or was it designed with 60 from the start to make room for more memory controllers?
The MI60 has the full 64 CU, and it has 32 GB of HBM2. I think AMD is holding something back.
#33
FordGT90Concept
"I go fast!1!11!1!"
Jism: 16 GB seems a bit too much, but on the other hand I think it was cheaper than going the 8 GB route, as that would require a change to the interposer. IMO a future-proof video card with 16 GB of RAM!
It has 4x 4 GiB stacks versus 2x 4 GiB stacks in Vega 56/64.
delshay: The MI60 has the full 64 CU, and it has 32 GB of HBM2. I think AMD is holding something back.
Ah, makes sense. These are the rejects.
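
To make the stack math above concrete, a small sketch; the 1024-bit interface per stack is a fixed property of HBM2.

```python
# Bus width and capacity both scale with the HBM2 stack count.
STACK_BITS = 1024   # each HBM2 stack exposes a 1024-bit interface
STACK_GIB = 4       # 4 GiB per stack on these cards

for name, stacks in (("Vega 56/64", 2), ("Radeon VII", 4)):
    print(f"{name}: {stacks * STACK_BITS}-bit bus, {stacks * STACK_GIB} GiB")
# Vega 56/64: 2048-bit bus, 8 GiB
# Radeon VII: 4096-bit bus, 16 GiB
```
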
#34
ShurikN
FordGT90Concept: Why 60 CU instead of 64 CU? AMD giving themselves wiggle room in terms of manufacturing? Or was it designed with 60 from the start to make room for more memory controllers?
My initial thought is that there are a lot of working MI60 chips that have been hit by a single defect directly on the CUs. They are unusable for the MI60, but a lower-tier MI product isn't necessary, so why not launch it as an RX card and try to make some money.
#35
delshay
FordGT90Concept: It has 4x 4 GiB stacks versus 2x 4 GiB stacks in Vega 56/64. Ah, makes sense. These are the rejects.
You made me laugh, but I have to agree with you on this one. Would a datacentre want cut-down cards? Keeping an open mind here.
#36
jabbadap
INSTG8R: Derp! And the 2080 has how many? Baseless assumptions are baseless :rolleyes:
The RTX 2080 has an 8-pin + 6-pin. Not that 2x 8-pin means a 375 W TDP; the RTX 2080 Ti has two of them, and its TDP is 260/250 W (FE/reference).
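
For context on the connector math above: the PCIe specification allows up to 75 W from the slot, 75 W from a 6-pin, and 150 W from an 8-pin connector. A minimal sketch of those spec ceilings (actual board power is set by the card's TDP, not the connector count):

```python
# PCIe spec power ceilings in watts. Connectors set an upper bound;
# actual board power is determined by the card's TDP.
SLOT, SIX_PIN, EIGHT_PIN = 75, 75, 150

def power_ceiling(six_pins=0, eight_pins=0):
    return SLOT + six_pins * SIX_PIN + eight_pins * EIGHT_PIN

print(power_ceiling(eight_pins=2))              # 375 W (2x 8-pin, Radeon VII)
print(power_ceiling(six_pins=1, eight_pins=1))  # 300 W (8+6-pin, RTX 2080)
```
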
#37
HimymCZe
Awesome. I just wish we'd gotten the CPU now and the GPU later this year. :/
#38
efikkan
So AMD is repurposing their professional cards; this should be an indicator of what other options they have for the immediate future.
MrGenius: 2x the VRAM. Don't forget that. So you're getting A LOT more for your money actually.
Which is fine if you need it, but if it doesn't improve your workload, then it's not better value.
AMD would probably be better off with 8 GB and cutting the price.
the54thvoid: Also, to point out the obvious: isn't this AMD's most expensive consumer gfx card ever?
Well, the Radeon Pro Duo (Fiji) was $1,499 and the Vega 64 Liquid $699 (to the extent it actually sold).
But I guess the argument over NVIDIA overpricing is much weaker now.
#39
unikin
It looks like they basically took the AMD Radeon Instinct MI50, added active cooling, and increased core/memory frequencies. That's it.

Great value for video content creators and compute, not so great for gaming. Its FP64 performance of 6.7 TFLOPS is on par with the Titan V (costing $3K). This is not a true gaming card.

If true, power draw will be around 300 W. Vega 64 déjà vu all over again :(
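
The 6.7 TFLOPS figure above follows from the standard peak-throughput formula, FLOPS = shaders × 2 (one fused multiply-add per clock) × clock × precision rate. Note that the ~1.75 GHz clock and the 1:2 FP64 rate of the Instinct cards are assumptions here, not confirmed Radeon VII specs.

```python
# Peak throughput: FLOPS = shaders * 2 (FMA) * clock * precision rate.
# The boost clock and 1:2 FP64 rate are assumptions carried over from
# the Instinct MI50/MI60, not confirmed Radeon VII specifications.
shaders = 3840
clock_ghz = 1.75     # assumed boost clock
fp64_rate = 0.5      # assumed 1:2 rate, as on the Instinct cards

fp32_tflops = shaders * 2 * clock_ghz / 1000
fp64_tflops = fp32_tflops * fp64_rate
print(f"FP32: {fp32_tflops:.1f} TFLOPS")  # ~13.4
print(f"FP64: {fp64_tflops:.1f} TFLOPS")  # ~6.7
```
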
#40
bug
efikkan: So AMD is repurposing their professional cards; this should be an indicator of what other options they have for the immediate future.
I don't think this necessarily means Navi is late or in trouble. More likely, as suggested at AnandTech, with Turing priced so high, AMD took a second look at Vega 20 and said: why not cash in on defective chips we were throwing away, if charging an arm and a leg is a thing now?
#41
kings
So, an AMD competitor for the 1080 Ti, two years later, for $700. On the green side, we have RTX at insane prices as well.

Sad, sad times in the GPU market, absolutely depressing. I'm glad I bought a PS4 Pro last year...
#42
Rowsol
HBM is expensive and 16 GB is overkill. I would rather see it ship with 8 GB and a lower price. Can someone point me to a benchmark of a game using more than 8 GB of VRAM?
#43
phill
If this is true, I'd consider two...

I do wonder why AMD goes with HBM rather than GDDR6 or something. Meh, what do I know :D
#44
Sasqui
Now, when do we find out that they ran the 2080 with a dual-core Pentium 4 and the Radeon VII with the best Ryzen chip?

j/k of course.

This is great news for the gaming landscape.
#45
efikkan
bug: I don't think this necessarily means Navi is late or in trouble. More likely, as suggested at AnandTech, with Turing priced so high, AMD took a second look at Vega 20 and said: why not cash in on defective chips we were throwing away, if charging an arm and a leg is a thing now?
Well, it's not like they can just make more of a GPU overnight. If AMD are to have any real volume of Vega 20s in a month, then this batch of wafers would have had to be started at least three months ago. An alternative would be a "paper launch" with production ramping up later, but I don't see any reason to do that.

My best guess is that AMD realized their best-case scenario for Navi 10 would be in the fall, and there could be a risk of further delays. Still, Vega 20 with 16 GB of HBM2 is expensive to produce.
#46
btarunr
Editor & Senior Moderator
FordGT90Concept: Why 60 CU instead of 64 CU?
This is steaming hot out of my ass, but the Radeon VII could trigger an RTX 2090 and RTX 2090 Ti (full TU104, and full TU102 with 12 GB). AMD is probably saving the 64 CU ASIC for that. Right now it can compete and harvest just fine with 60 CU.
#47
Ravenas
Selling direct through AMD is a win. I'm tired of retailers gouging customers on CPUs and GPUs.
#48
bug
efikkan: Well, it's not like they can just make more of a GPU overnight. If AMD are to have any real volume of Vega 20s in a month, then this batch of wafers would have had to be started at least three months ago. An alternative would be a "paper launch" with production ramping up later, but I don't see any reason to do that.

My best guess is that AMD realized their best-case scenario for Navi 10 would be in the fall, and there could be a risk of further delays. Still, Vega 20 with 16 GB of HBM2 is expensive to produce.
Turing (and its pricing) has been known since September. AMD had plenty of time to plan this ;)
That doesn't make you wrong on the Navi front, though. I mean, AMD has announced their CPU plans through Q2 or Q3. If they said nothing about GPUs, it's likely nothing is planned.
#49
erocker
*
MrGenius: 2x the VRAM. Don't forget that. So you're getting A LOT more for your money actually.
But if it doesn't translate to performance, then meh. Disappointing that AMD is sticking with HBM; the card could have been a lot cheaper.
#50
xenocide
Rowsol: HBM is expensive and 16 GB is overkill. I would rather see it ship with 8 GB and a lower price. Can someone point me to a benchmark of a game using more than 8 GB of VRAM?
Most don't, and a lot that do basically just dump textures into VRAM in anticipation of them being needed (e.g., Black Ops III). As a frame of reference, The Witcher 3 can run great at 4K maxed out while using less than 3 GB of VRAM. So the idea that cards are crippled by VRAM just doesn't have much basis in reality. 6-8 GB is probably the sweet spot, so shipping a card with 16 GB of HBM just seems like overkill.
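
For anyone who wants to answer Rowsol's question empirically, VRAM allocation can be polled while a game runs. A minimal sketch assuming the pynvml bindings (NVIDIA cards only); note that, per the comment above, allocated memory includes speculatively cached assets, so the "used" figure overstates what a game actually needs.

```python
# Poll total VRAM allocation once per second while a game runs.
# Assumes the pynvml bindings (pip install nvidia-ml-py); NVIDIA-only.
import time
from pynvml import (nvmlInit, nvmlShutdown,
                    nvmlDeviceGetHandleByIndex, nvmlDeviceGetMemoryInfo)

nvmlInit()
handle = nvmlDeviceGetHandleByIndex(0)  # first GPU in the system
try:
    while True:
        info = nvmlDeviceGetMemoryInfo(handle)
        print(f"VRAM allocated: {info.used / 2**30:.2f} GiB")
        time.sleep(1)
finally:
    nvmlShutdown()
```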