Friday, January 11th 2019

AMD Radeon VII Detailed Some More: Die-size, Secret-sauce, Ray-tracing, and More

AMD pulled off a surprise at its CES 2019 keynote address with the announcement of the Radeon VII client-segment graphics card targeted at gamers. We went hands-on with the card earlier this week, and the company has since revealed a few more technical details in its press-deck. To begin with, AMD talks about the immediate dividends of switching from 14 nm to 7 nm: a reduction in die-size from 495 mm² on the "Vega 10" silicon to 331 mm² on the new "Vega 20" silicon. The die has also been reworked to feature a 4096-bit wide HBM2 memory interface; the "Vega 20" MCM now carries four 32 Gbit HBM2 memory stacks, which make up the card's 16 GB of memory. The memory clock has been dialed up to 1000 MHz from 945 MHz on the RX Vega 64, which, coupled with the doubled bus-width, works out to a phenomenal 1 TB/s of memory bandwidth.
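There is no magic to that bandwidth figure - it falls straight out of the published specs, since HBM2 transfers data on both clock edges. A quick back-of-the-envelope check (plain Python, simply restating the arithmetic; the variable names are ours, not AMD's):

# Radeon VII memory bandwidth from the published specs
# (HBM2 is double data rate: effective transfer rate = 2 x memory clock)
bus_width_bits = 4096                         # four 1024-bit HBM2 stacks
memory_clock_mhz = 1000                       # up from 945 MHz on RX Vega 64
data_rate_gbps_per_pin = 2 * memory_clock_mhz / 1000
bandwidth_gb_s = bus_width_bits * data_rate_gbps_per_pin / 8
print(round(bandwidth_gb_s))                  # 1024 GB/s, i.e. ~1 TB/s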

We know from AMD's late-2018 announcement of the Radeon Instinct MI60 machine-learning accelerator, based on the same silicon, that "Vega 20" features a total of 64 NGCUs (next-generation compute units). To carve out the Radeon VII, AMD disabled 4 of these for an NGCU count of 60, halfway between the RX Vega 56 and RX Vega 64, and a stream-processor count of 3,840. The reduced NGCU count could help AMD harvest the TSMC-built 7 nm GPU dies better. AMD is attempting to close the vast 44 percent performance gap between the RX Vega 64 and the GeForce RTX 2080 with a combination of factors.
First, AMD appears to be maximizing the clock-speed headroom gained from the switch to 7 nm. The Radeon VII can boost its engine clock all the way up to 1800 MHz, which may not seem significantly higher than the on-paper 1545 MHz boost frequency of the RX Vega 64, but the Radeon VII probably sustains its boost frequencies better. Second, the slide comparing the Radeon VII against the RTX 2080 pins its highest performance gains over the NVIDIA rival on the "Vulkan" title "Strange Brigade," which is known to heavily leverage asynchronous compute. AMD continues to hold a technological upper-hand over NVIDIA in this area, and it mentions "enhanced" asynchronous compute for the Radeon VII, suggesting the company may have improved the ACEs (async-compute engines) on the "Vega 20" silicon, the specialized hardware that schedules async-compute workloads among the NGCUs. With its given specs, the Radeon VII has a maximum FP32 throughput of 13.8 TFLOP/s.
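The 13.8 TFLOP/s figure is likewise simple arithmetic on the shader count and the peak boost clock, since each stream processor can retire one fused multiply-add (two FP32 operations) per clock. A minimal check of our own, not from AMD's deck:

# Peak FP32 throughput = stream processors x 2 ops per clock (FMA) x boost clock
stream_processors = 3840                      # 60 NGCUs x 64 stream processors each
boost_clock_ghz = 1.8                         # advertised peak engine clock
fp32_tflops = stream_processors * 2 * boost_clock_ghz / 1000
print(round(fp32_tflops, 1))                  # 13.8 TFLOP/s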

The third and most obvious area of improvement is memory. The "Vega 20" silicon is lavishly endowed with 16 GB of "high-bandwidth cache" memory, which, thanks to the doubled bus-width and increased memory clock, delivers 1 TB/s of memory bandwidth. Such high physical bandwidth could, in theory, let AMD's designers lean less on memory compression, which would free up some of the GPU's number-crunching resources. The sheer memory size helps, too. AMD is once again throwing brute bandwidth at any memory-management issues its architecture may have.
The Radeon VII is being extensively marketed as a competitor to the GeForce RTX 2080. NVIDIA holds a competitive edge here: its hardware is DirectX Raytracing (DXR) ready, and it has even integrated specialized components called RT cores into its "Turing" GPUs. "Vega 20" continues to lack such components; however, AMD CEO Dr. Lisa Su confirmed at her post-keynote press round-table that the company is working on ray-tracing: "I think ray tracing is important technology; it's something that we're working on as well, from both a hardware/software standpoint."

Responding to a specific question by a reporter on whether AMD has ray-tracing technology, Dr. Su said: "I'm not going to get into a tit for tat, that's just not my style. So I'll tell you that. What I will say is ray tracing is an important technology. It's one of the important technologies; there are lots of other important technologies and you will hear more about what we're doing with ray tracing. You know, we certainly have a lot going on, both hardware and software, as we bring up that entire ecosystem."

One way of reading between the lines would be - and this is speculation on our part - that AMD could be working on retrofitting DXR support onto some of its GPUs that are powerful enough to handle raytracing, through a future driver update, while also working on future generations of GPUs with hardware-acceleration for many of the tasks required to make hybrid rasterization work (adding real-time raytraced objects to rasterized 3D scenes). Just as real-time raytracing is technically possible on "Pascal," even if daunting for the hardware, some semblance of GPU-accelerated, DXR-compatible real-time ray-tracing could probably be achieved with enough work directed at getting a ray-tracing model to run on the NGCUs while leveraging async-compute. This is not part of the Radeon VII's feature-set at launch.
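To illustrate why such a fallback is plausible at all: the core of ray-tracing is ordinary arithmetic that any compute unit can execute; there is nothing inherently "RT core"-only about a ray-primitive intersection test. The toy sketch below is purely illustrative - it is not AMD's code, not the DXR API, and not shader code - but it shows the kind of per-ray math a compute-shader implementation would have to evaluate millions of times per frame, which is exactly where raw TFLOP/s and good async-compute scheduling would matter:

import math

def ray_sphere_hit(origin, direction, center, radius):
    # Distance along the ray to the nearest sphere hit, or None on a miss.
    # Dedicated RT hardware accelerates tests like this (plus BVH traversal)
    # in fixed function; a compute-based fallback runs them on the CUs instead.
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None                          # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t > 0.0 else None            # ignore hits behind the ray origin

# One camera ray pointing down +Z at a unit sphere five units away: hits at t = 4.0
print(ray_sphere_hit((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), (0.0, 0.0, 5.0), 1.0))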

The Radeon VII will be available from February 7th, priced at $699, on-par with the SEP of the RTX 2080 despite the lack of real-time raytracing (at least at launch). AMD could steer its developer-relations efforts toward future titles that lean more heavily on asynchronous compute, the "Vulkan" API, and other technologies its hardware is good at.

154 Comments on AMD Radeon VII Detailed Some More: Die-size, Secret-sauce, Ray-tracing, and More

#76
moproblems99
I don't consider that a major disadvantage. It's probably less than $20 a year. If that is the only disadvantage, then I don't see a problem. Also, throw that 215 W figure out once you start overclocking, and that 300 W comes down when you undervolt.
#77
Totally
Fluffmeister: According to AMD the cost of 7 nm is significant; with 16 GB of HBM2 I can't imagine it's cheap for them, but I assume they are at least making some money.
All that slide says is that if the die size stays the same, costs go up - but when does a die not get smaller when going from a larger process to a smaller one? With that in mind, it shows they were profiting with decreasing margins until the jump to 7 nm.
#78
lexluthermiester
Late to the party again, but I'd say this is a decent answer to RTX. Maybe not the show-stopper that Ryzen was, but damn decent nonetheless. It seems AMD has kicked it up.
fynxer: Good luck with RayTracing in software, if that was viable we would have had that already. If they do it it is just a desperate move not to look obsolete.
Raytracing has been done in software for decades, just not real-time.
fynxer: Do not expect RayTracing in hardware until end of 2020 and even then they will be years behind nVidia who will, by that time, be in the process of readying their third gen RTX cards for release.
You don't and can't know any of that.
#79
steen
[QUOTE="Kaotik, post: 3974472, member: 101367"Unless proven otherwise it should be 64, as the Vega 20 diagrams from Instinct release clearly show 4 Pixel Engines per Shader Engine.[/QUOTE]

I must admit I took the 128 ROPs report as a given. If the Instinct diagrams aren't just high-level copies of the Vega 10 slides, then it's definitely 64 ROPs for Vega 20.
#80
btarunr
Editor & Senior Moderator
AMD has confirmed that the card's ROP count is 64.
#81
Nkd
fynxer: Good luck with RayTracing in software, if that was viable we would have had that already. If they do it it is just a desperate move not to look obsolete.

Do not expect RayTracing in hardware until end of 2020 and even then they will be years behind nVidia who will, by that time, be in the process of readying their third gen RTX cards for release.

We need Intel to enter the market with RayTracing from the get go in 2020.

I also have a feeling that AMD may be working secretly with Intel on RayTracing tech to set up a unified standard against NVIDIA's RTX.
3rd gen RTX card? Not happening, lol. NVIDIA is not going to refresh until 2020; that's when they will have 7 nm. You really think NVIDIA is going to replace the RTX 20 series after less than 12 months? They don't have a process to shrink to, and they are not in a hurry to do it. Heck, they stretched Pascal for 2 years. So NVIDIA is going to have 3 RTX generations in 3 years, lol. Do you realize what you are saying?
lexluthermiester: Late to the party again, but I'd say this is a decent answer to RTX. Maybe not the show-stopper that Ryzen was, but damn decent nonetheless. It seems AMD has kicked it up.


Raytracing has been done in software for decades, just not real-time.

You don't and can't know any of that.
Yeah, he thinks NVIDIA is going to release 3 RTX generations in 3 years: 2018, 2019, and then 2020, when Pascal went for 2 years alone. Not sure about that, rofl.
#82
zo0lykas
Gasaraki: The only way they could have done this is if they priced the Radeon 7 at $649 or $599, not $699. $699 is the same price as the RTX 2080, but the 2080 doesn't have the heat or power use, has RT cores, has Tensor cores, etc. Overall the RTX 2080 is expensive because it has new tech in it. If I have to pay the same price, I will buy the one with the lower power draw, the lower heat, and the advanced tech in it.

The rumor is that it costs close to $750 to make the Radeon 7 cards. So no, they are not making money. This is just to stop the bleeding.

$699 is a good price for that performance; plus, don't forget how the stock cooler looks. Not a shitty blower-style design.
#83
ssdpro
moproblems99: What are the drawbacks? What advantages does the 2080 have? You can't be talking about RTX and DLSS, can you?
The drawback is the rumored price of $699 and the missing technology. If you can get the technology with the other product at the same price, why settle? It is like choosing between two identical cars - one has headlights and one doesn't. The salesman can say, "hey, it is light out right now, maybe you won't need those headlights." AMD's engineering has always been adequate, but it sold by undercutting the competition's pricing. If AMD intends to match the competition on GPU pricing, I can't see how they continue to improve their already dismal market share. NVIDIA's release and pricing led to a major crash in their sales and stock value - I am not sure why a strengthening AMD would want to embrace that model. AMD has a long way to go before they can price with the big boys.
#84
razaron
Camm: It should be noted that Nvidia has a huge ass achilles heel with the RTX series - that RT operations are INT based, and that the card needs to flush to switch between FP and INT operations.

Dedicated hardware acceleration for RT is a smokescreen IMO; the key is whether you can cut your FP or INT instructions down as small as possible and run as many in parallel as possible. AMD does have some FP division capability, so it's possible that some cards can be retrofitted for RT.
Source? I tried googling it and couldn't find anything.
#85
Assimilator
I just remembered how the AMD fanboys were pissing over the RTX 2080's $699 price tag at launch, but Radeon VII comes along at the same price and suddenly people are claiming it's great value.

No, great value would be if it wasn't just a Vega respin with double the memory bandwidth, double the VRAM, 250 MHz higher clocks, and an extra $200 tacked on to the price. The die-shrink to 7 nm is going to help with power and heat, but this is still Vega/GCN 5 with all its limitations, and I honestly don't expect this card to outperform the RTX 2080 in the way AMD is claiming.
#86
efikkan
For reference, Vega 20 would need roughly 40% more performance over Vega 10 to be on par with the RTX 2080. I do wonder which changes are going to make that possible.
#87
Zubasa
Assimilator: I just remembered how the AMD fanboys were pissing over the RTX 2080's $699 price tag at launch, but Radeon VII comes along at the same price and suddenly people are claiming it's great value.

No, great value would be if it wasn't just a Vega respin with double the memory bandwidth, double the VRAM, 250 MHz higher clocks, and an extra $200 tacked on to the price. The die-shrink to 7 nm is going to help with power and heat, but this is still Vega/GCN 5 with all its limitations, and I honestly don't expect this card to outperform the RTX 2080 in the way AMD is claiming.
Both are bad value; one being worse than the other doesn't mean either card is good value.
#88
Nkd
efikkan: Primarily a major difference in TDP: 215 W vs. ~300 W.

When you have competing products A and B which perform and cost the same, but one of them has a major disadvantage, why would anyone ever buy it?
The RTX 2080 is around 225 W. It remains to be seen what the actual power usage is on the Radeon 7 during gaming. For that we wait for reviews.
Gasaraki: The only way they could have done this is if they priced the Radeon 7 at $649 or $599, not $699. $699 is the same price as the RTX 2080, but the 2080 doesn't have the heat or power use, has RT cores, has Tensor cores, etc. Overall the RTX 2080 is expensive because it has new tech in it. If I have to pay the same price, I will buy the one with the lower power draw, the lower heat, and the advanced tech in it.

The rumor is that it costs close to $750 to make the Radeon 7 cards. So no, they are not making money. This is just to stop the bleeding.
I don't think that was how much it costs them to make; it was what they originally wanted to sell it at. Yeah, I have no doubt they are not making much on it.

Plus, let's hold off on the heat part. Wait for the reviews; you can't complain about heat when you haven't seen the temps yet. Will it use more power? Yeah, sure, but that doesn't mean it's going to run hot.
#89
M2B
efikkan: For reference, Vega 20 would need roughly 40% more performance over Vega 10 to be on par with the RTX 2080. I do wonder which changes are going to make that possible.

This video shows the performance of a Vega 64 clocked at 1,750 MHz against an RTX 2080 running at stock clocks. (Also, don't forget the Vega 64 has 4 more CUs than the Radeon VII, which makes up for that 50 MHz core-clock deficit.)
Even the memory on the AMD side is overclocked, and at those clocks the Vega has 580 GB/s of memory bandwidth, which is quite a lot.
This is pretty much what you would expect a Radeon VII to do, maybe a little bit better.
#90
efikkan
Nkd: The RTX 2080 is around 225 W. It remains to be seen what the actual power usage is on the Radeon 7 during gaming. For that we wait for reviews.


AMD promises "25% more performance at the same power", whatever that means.
25% is not enough to be on par with RTX 2080.

But as you say, reviews will tell the truth.
#91
moproblems99
ssdpro: If you can get the technology with the other product
I fail to see the missing technology. RTX is usable in one game...and the series is trash. DLSS looks like shit compared to the other available methods. I fail to see what benefits the 2080 has.
#92
FordGT90Concept
"I go fast!1!11!1!"
efikkan: AMD promises "25% more performance at the same power", whatever that means. 25% is not enough to be on par with RTX 2080. But as you say, reviews will tell the truth.
If you take it at face value, Radeon VII has 25% more performance for the same power consumption (295 W).
13% of that performance comes from the higher boost clock of 1800 MHz (remember, it's 4 CUs short).
12% likely comes from Radeon VII's ability to hold its boost clock longer than Vega 64 does.

You know how it goes: they're likely talking about games where Vega 64 does really well against Turing. I highly doubt they're talking about an average.
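For what it's worth, a rough back-of-the-envelope on the peak specs alone (simple arithmetic, not a measurement; how much the sustained-boost advantage is really worth is exactly what reviews will have to show):

# Peak shader throughput ratio, Radeon VII vs. RX Vega 64, from published specs
vega64_peak  = 4096 * 1545                   # stream processors x boost clock (MHz)
radeon7_peak = 3840 * 1800
print(round(radeon7_peak / vega64_peak, 2))  # ~1.09: roughly 9% on paper; the rest of
                                             # the claimed ~25% has to come from sustained
                                             # clocks and the doubled memory bandwidth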
#93
Wavetrex
I wonder why AMD is stuck with a maximum of 4096 SPs?
I mean... Fury, Vega (1), Vega II... they are almost identical.

Considering that the new chip is rather small at 331 mm², what stopped them from making, for example, a 450 mm² chip and fitting 72 CUs in it, or 96?
It would wipe the floor with the 2080 Ti with 6144 SPs (let's say you cut a few for being defective; even with 5760 SPs it would still crush it with raw compute power and that massive 1 TB/s of bandwidth, WHILE BEING A SMALLER CHIP due to 7 nm).

Instead, they just shrunk Fury, then shrunk it again without adding anything :(
#94
FordGT90Concept
"I go fast!1!11!1!"
Because bigger = lower yields. AMD is all about mass production these days.
#95
Totally
Assimilator: I just remembered how the AMD fanboys were pissing over the RTX 2080's $699 price tag at launch, but Radeon VII comes along at the same price and suddenly people are claiming it's great value.

No, great value would be if it wasn't just a Vega respin with double the memory bandwidth, double the VRAM, 250 MHz higher clocks, and an extra $200 tacked on to the price. The die-shrink to 7 nm is going to help with power and heat, but this is still Vega/GCN 5 with all its limitations, and I honestly don't expect this card to outperform the RTX 2080 in the way AMD is claiming.
I see people justifying the power consumption, but I don't see it; there's only ONE comment stating "...because the 2080 is $699." Was its 10-series counterpart also $699 at launch?
efikkan: AMD promises "25% more performance at the same power", whatever that means. 25% is not enough to be on par with RTX 2080. But as you say, reviews will tell the truth.
Good spot, but it probably means what it says it does: it's 25% more efficient. Assuming it's being compared to the Vega 64, when consuming the same amount of power it does 25% more work. We could probably figure out how much power this card really sucks down with that bit; assuming power/perf scales linearly, a little guesstimation (2080 power × [V64/2080] ratio × [V7/V64] ratio) puts the card around 400-450 W.
#96
Apocalypsee
Wavetrex: I wonder why AMD is stuck with a maximum of 4096 SPs?
I mean... Fury, Vega (1), Vega II... they are almost identical.

Considering that the new chip is rather small at 331 mm², what stopped them from making, for example, a 450 mm² chip and fitting 72 CUs in it, or 96?
It would wipe the floor with the 2080 Ti with 6144 SPs (let's say you cut a few for being defective; even with 5760 SPs it would still crush it with raw compute power and that massive 1 TB/s of bandwidth, WHILE BEING A SMALLER CHIP due to 7 nm).

Instead, they just shrunk Fury, then shrunk it again without adding anything :(
They said they have removed the 4 Shader Engine limitation on GCN5 (Vega), but I don't believe that. They instead put in things to mitigate the limitation, like DSBR, the NGG fast path, and HBCC, some of which are broken. AMD should dump GCN for gaming cards and start anew.

Even if I didn't have my Vega 56, I wouldn't buy this card at all; for one, it still carries the same limitations it has had since Fiji. They only increased the clock speed and added a tiny bit of improvement here and there. The only reason I bought my Vega 56 is that it didn't have the dreaded 4 GB limitation of the Fury, so new games won't choke, and I got it cheap after the mining crash.
#97
Manoa
Wavetrex: I wonder why AMD is stuck with a maximum of 4096 SPs?
I mean... Fury, Vega (1), Vega II... they are almost identical.

Considering that the new chip is rather small at 331 mm², what stopped them from making, for example, a 450 mm² chip and fitting 72 CUs in it, or 96?
It would wipe the floor with the 2080 Ti with 6144 SPs (let's say you cut a few for being defective; even with 5760 SPs it would still crush it with raw compute power and that massive 1 TB/s of bandwidth, WHILE BEING A SMALLER CHIP due to 7 nm).

Instead, they just shrunk Fury, then shrunk it again without adding anything :(

FordGT90Concept: Because bigger = lower yields. AMD is all about mass production these days.
So... the rules of interactive entertainment are:
think little - stay little;
think big - get BIG.

Isn't that why NVIDIA has money for drivers and AMD doesn't? The money they get from thinking big pays for their drivers, while AMD fails again and again on drivers and still hasn't learned that game-developer relations are important. Let me guess: the "solution" to developer relations is more GB/s of memory bandwidth and another 1000 MHz. They never learn. Do wrong once, you're stupid; do wrong twice, you're a retard; do wrong three times, you're insane.
#98
moproblems99
Manoa: Do wrong once, you're stupid; do wrong twice, you're a retard; do wrong three times, you're insane.
What does complaining about driver issues that don't exist make you?
#99
Manoa
Where do you see complaining? And where do you see "don't exist"?

AMD shills? I can respect that; you look like a fighter too. "Fight for your right to game on AMD, kill anyone who looks like they're against AMD"?
But you could use your brain: a game-developer relations program would benefit AMD more than a few more MHz and a few more GB/s of memory bandwidth.
You don't see the advantage of that, for your own good? What does that make you?
How does async compute enabled in all games sound to you? Should I mention how much faster Doom 4 was with async enabled? And that was just one game where it was used... WITHOUT AMD's help... and that's just the beginning. Can you imagine what it could mean if AMD was involved, in all games?
Oh wait, you're a Radeon expert; I'm sorry, you must know more than I do.
#100
efikkan
Wavetrex: I wonder why AMD is stuck with a maximum of 4096 SPs?
I mean... Fury, Vega (1), Vega II... they are almost identical.

Considering that the new chip is rather small at 331 mm², what stopped them from making, for example, a 450 mm² chip and fitting 72 CUs in it, or 96?
It would wipe the floor with the 2080 Ti with 6144 SPs (let's say you cut a few for being defective; even with 5760 SPs it would still crush it with raw compute power and that massive 1 TB/s of bandwidth, WHILE BEING A SMALLER CHIP due to 7 nm).

Instead, they just shrunk Fury, then shrunk it again without adding anything :(
FordGT90Concept said it's yields, and that's part of it, but the biggest reason is probably resource management. If AMD were to make a GPU with 50% more cores, it would need at least 50% more scheduling resources. Resource management is already the main reason why GCN is inefficient compared to NVIDIA, and the reason why the RTX 2060 (1,920 cores) manages to match Vega 64 (4,096 cores). As we all know, AMD has plenty of theoretical performance that it simply can't utilize properly. Adding 50% more cores would require rebalancing the entire design; otherwise they would risk even lower efficiency. Vega 20 is just a tweaked design with some professional features added.