Tuesday, June 11th 2019

AMD Radeon RX 5700 XT Confirmed to Feature 64 ROPs: Architecture Brief

AMD "Navi 10" is a very different GPU from the "Vega 10," or indeed the "Polaris 10." The GPU sees the introduction of the new RDNA graphics architecture, which is the first big graphics architecture change on an AMD GPU in nearly a decade. AMD had in 2011 released its Graphics CoreNext (GCN) architecture, and successive generations of GPUs since then, brought generational improvements to GCN, all the way up to "Vega." At the heart of RDNA is its brand new Compute Unit (CU), which AMD redesigned to increase IPC, or single-thread performance.

Before diving deeper, it's important to confirm two key specifications of the "Navi 10" GPU. The ROP count of the silicon is 64, double that of "Polaris 10" and the same as "Vega 10." The silicon has sixteen render-backends (RBs); these are quad-pumped, which works out to an ROP count of 64. AMD also confirmed that the chip has 160 TMUs, redesigned to feature 64-bit bi-linear filtering. The Radeon RX 5700 XT maxes out the silicon, while the RX 5700 disables four RDNA CUs, working out to 144 TMUs. The ROP count on the RX 5700 is unchanged at 64.
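As a quick sanity check, these confirmed numbers are internally consistent. A minimal sketch in Python, assuming "quad-pumped" means four pixels per render-backend per clock, and using the four-TMUs-per-CU ratio the article describes:

```python
# Back-of-the-envelope check of the confirmed "Navi 10" throughput figures.
# Assumption: "quad-pumped" = 4 pixels per render-backend (RB) per clock,
# and each CU carries 4 TMUs (160 TMUs across the full 40-CU die).

RENDER_BACKENDS = 16   # RBs on the full "Navi 10" die
PIXELS_PER_RB = 4      # quad-pumped
TMUS_PER_CU = 4

print("ROPs:", RENDER_BACKENDS * PIXELS_PER_RB)       # 64 on both SKUs
for name, cus in (("RX 5700 XT", 40), ("RX 5700", 36)):
    print(name, "->", cus * TMUS_PER_CU, "TMUs")      # 160 and 144 TMUs
```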
The RDNA Compute Unit sees the bulk of AMD's innovation. Groups of two CUs make a "Dual Compute Unit" that shares a scalar data cache, a shader instruction cache, and a local data share. Each CU is now split into two SIMD units of 32 stream processors each, with each SIMD getting its own vector register file, scalar unit, and scheduler. This way, AMD doubled the number of scalar units on the silicon to 80, twice the CU count. Each scalar unit is similar in concept to a CPU core, and is designed to handle heavy, indivisible scalar workloads. Four TMUs are part of each CU. This massive redesign of the SIMD and CU hierarchy doubles the scalar and vector instruction rates, and pools resources between every two adjacent CUs.
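To make the new split concrete, here is a minimal sketch of the per-CU resources described above and the chip-wide totals they imply for the 40-CU "Navi 10" (class and field names are ours for illustration, not AMD's):

```python
# Illustrative model of an RDNA Compute Unit as described in the article.
from dataclasses import dataclass

@dataclass
class ComputeUnit:
    simd_units: int = 2      # two SIMDs per CU...
    sps_per_simd: int = 32   # ...each 32 stream processors wide
    scalar_units: int = 2    # one scalar unit per SIMD
    tmus: int = 4

CU_COUNT = 40                # full "Navi 10" die (RX 5700 XT)
cu = ComputeUnit()
print(CU_COUNT * cu.simd_units * cu.sps_per_simd)  # 2560 stream processors
print(CU_COUNT * cu.scalar_units)                  # 80 scalar units
```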
Groups of five RDNA dual-compute units share a primitive unit, a rasterizer, 16 ROPs, and a large L1 cache. Two such groups make a Shader Engine, and the two Shader Engines meet at a centralized Graphics Command Processor, which marshals workloads between the various components, a Geometry Processor, and four Asynchronous Compute Engines (ACEs).
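The topology can be tallied the same way. A rough consistency check, using only the counts given in this paragraph:

```python
# Consistency check of the shader-engine topology described above.
SHADER_ENGINES = 2
GROUPS_PER_SE = 2        # groups of five dual-CUs per shader engine
DUAL_CUS_PER_GROUP = 5
ROPS_PER_GROUP = 16

groups = SHADER_ENGINES * GROUPS_PER_SE
print(groups * DUAL_CUS_PER_GROUP * 2)  # 40 CUs in total
print(groups * ROPS_PER_GROUP)          # 64 ROPs, matching the RB math
```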
The second major redesign "Navi" features over previous generations is the cache hierarchy. Each RDNA dual-CU has a fast local cache AMD refers to as L0 (level zero). Each 16 KB L0 unit is made up of the fastest SRAM, and cushions direct transfers between the compute units and the L1 cache, bypassing the compute unit's I-cache and K-cache. The 128 KB L1 cache shared between five dual-CUs is a 16-way block of fast SRAM cushioning transfers between the shader engines and the 4 MB of L2 cache.
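For reference, the cache hierarchy described above collected into one place, as a plain-Python summary (sizes are per instance; the scope wording follows the article):

```python
# "Navi 10" cache hierarchy as described in the article.
CACHES = [
    ("L0", "16 KB",  "local to one dual-CU; fastest SRAM"),
    ("L1", "128 KB", "shared by five dual-CUs; 16-way"),
    ("L2", "4 MB",   "shared chip-wide, behind the L1 caches"),
]
for level, size, scope in CACHES:
    print(f"{level:>2}: {size:>7} - {scope}")
```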

In all, RDNA helps AMD achieve a 2.3x gain in performance per area and a 1.5x gain in performance per Watt. The "Navi 10" silicon measures just 251 mm², compared to the 495 mm² of the "Vega 10" GPU die. A lot of these spatial gains are also attributable to the switch from the 14 nm silicon fabrication process to the new 7 nm one.
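One hedged implication of those two numbers, assuming the 2.3x figure is measured between these two specific dies: a 2.3x perf/area gain on a die roughly half the size works out to about 1.17x the absolute performance.

```python
# Rough arithmetic implied by the quoted figures; not an AMD-published
# performance claim, just the ratio the two numbers would work out to.
NAVI10_MM2, VEGA10_MM2 = 251, 495
PERF_PER_AREA_GAIN = 2.3

perf_ratio = PERF_PER_AREA_GAIN * NAVI10_MM2 / VEGA10_MM2
print(f"~{perf_ratio:.2f}x the performance in about half the area")  # ~1.17x
```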
AMD also briefly touched on its vision for real-time ray-tracing. To begin with, we can confirm that the "Navi 10" silicon has no fixed-function hardware for ray-tracing, such as the RT cores or tensor cores found in NVIDIA "Turing" RTX GPUs. For now, AMD's implementation of DXR (DirectX Ray-tracing) relies entirely on programmable shaders. At launch, the RX 5700 series won't be advertised as supporting DXR; AMD will instead release support through driver updates. The RDNA 2 architecture, scheduled for 2020-21, will pack fixed-function hardware for certain real-time ray-tracing effects. AMD also sees a future in which real-time ray-tracing is handled on the cloud: the next frontier for cloud computing is cloud-assist, where your machine can offload processing workloads to the cloud.

38 Comments on AMD Radeon RX 5700 XT Confirmed to Feature 64 ROPs: Architecture Brief

#1
kapone32
Is anyone on here going to pre-order one of these now that we have specs and pricing?
#2
neatfeatguy
I'm still holding on to my 980 Ti. I've got no reason to upgrade. My card performs as well as a 1070, which isn't too far behind a 2060, which sells for around $350. Any real cards worth upgrading to would be a 2070 at minimum, a 2080, or one of these 5700 XT cards, which MSRP puts at $449.

I can't say I'm impressed enough with the price performance to drop $450+ to upgrade to something newer. I'm patient (and kind of broke), so I'm in no rush to upgrade. I can wait and see what kind of things we can expect from the next gen of GPUs.
#3
Xaled
kapone32: Is anyone on here going to pre-order one of these now that we have specs and pricing?
I will not rush, because I think this pricing is just initial and the final price won't settle until Nvidia drops the prices of the 2070 and maybe even the 2080 (an assumption and a wish at the same time).
#4
WikiFM
For gamers on the ray tracing wagon, Navi is a no-go. AMD has lost that market. Let's see how slow it is in DXR once it's supported (if at all).
#5
Darmok N Jalad
With the size of the die and the efficiency gains, I wonder if AMD has an X2 model in the works? They made a Vega X2 that leverages Infinity Fabric for the new Mac Pro, so this should be much easier, right?
#6
kapone32
Darmok N Jalad: With the size of the die and the efficiency gains, I wonder if AMD has an X2 model in the works? They made a Vega X2 that leverages Infinity Fabric for the new Mac Pro, so this should be much easier, right?
Agreed, that would be very interesting!!!!!
#7
TesterAnon
WikiFM: For gamers on the ray tracing wagon, Navi is a no-go. AMD has lost that market. Let's see how slow it is in DXR once it's supported (if at all).
They lost the price/performance market too, which is sad because that was the only market AMD had left.
#8
phanbuey
"AMD sees a future in which real time ray tracing is handled in the cloud."

Seems counter-intuitive when the market is increasingly about lower-latency, high-FPS, high-refresh offerings, and current consumer devices with dedicated hardware are capable of real-time ray tracing.

I am on the ray tracing wagon, and while I do appreciate the slightly cooler shadows and more realistic lighting in the 2 games I play that support it, I really do see it as more of a developer-side feature. I just imagine it's way easier to say "light source here, shiny thing here, go" than do all of the lightmap work and try to fake it. I think the cloud RTRT might be true for consoles...
#9
xkm1948
So overall a nice revamp of GCN, probably thanks to the demands of next-gen consoles.

Meanwhile RTRT over the cloud? Did they write that when they were high or something? The latency will be horrible.
#10
Steevo
phanbuey"AMD sees a future in which real time ray tracing is handled in the cloud."

Seems counter-intuitive when the market is increasingly about lower-latency, high-FPS, high-refresh offerings, and current consumer devices with dedicated hardware are capable of real-time ray tracing.

I am on the ray tracing wagon, and while I do appreciate the slightly cooler shadows and more realistic lighting in the 2 games I play that support it, I really do see it as more of a developer-side feature. I just imagine it's way easier to say "light source here, shiny thing here, go" than do all of the lightmap work and try to fake it. I think the cloud RTRT might be true for consoles...
I'm going to go out on a limb here and guess they are going to real time render ray tracing cloud side and send compressed vector info from light sources and only render small ray traced parts client side. We could easily run the ray tracing with no players and then compress the vector info for "replay" just like Nvidia did with most Physx. The huge amount of data required to real time trace everything is huge, but once done the output should be simple, small files that can be used to reassemble the data needed.
#11
64K
xkm1948: So overall a nice revamp of GCN, probably thanks to the demands of next-gen consoles.

Meanwhile RTRT over the cloud? Did they write that when they were high or something? The latency will be horrible.
The way the term RTRT is thrown around is almost meaningless right now. A Developer can implement it in such a small amount that it's unnoticeable and still say that it uses RTRT without lying. It's like when a Developer says a game runs 4K with fill in the blank card. The game could be running at sub 30 FPS on lowest settings and still the claim that it's running at 4K isn't a lie. It's just not useful info.
#12
GreiverBlade
WikiFM: Let's see how slow it is in DXR once it's supported (if at all).
about as slow as a normal RT-enabled card ... RT is kinda the ultimate slowdown option (unless you're willing to go 1080p and no AA, well, not an issue for no AA tho ...)
WikiFM: For gamers on the ray tracing wagon, Navi is a no-go. AMD has lost that market.
actually, there is a market for RT games? wow ... more than 2? and with a noticeable difference between RT off and RT on?
:laugh: (funny since the next Xbox and PS will have RT and they are not using Nvidia for their GPUs ... well, not that I care more for consoles than I care about RT, tho)
TesterAnon: They lost the price/performance market too, which is sad because that was the only market AMD had left.
ah? the RX 5700 XT will be much pricier than the 2070? (well ... for me it will be ... after all, my retailers think a 2080 is worth $850 base price and a 2070 is worth $580 on average)
well ... I reckon it would be bad if these new Radeon cards were put at the same price level as the stupidly priced RTX line (well, if they aren't priced above ... it's half fine ... an RVII is just above a 2070 for me atm ... and a better alternative to me ... so if the RX 5700 XT is close to that: no issues)
Steevo: I'm going to go out on a limb here and guess they are going to real-time render ray tracing cloud-side and send compressed vector info from light sources and only render small ray-traced parts client-side. We could easily run the ray tracing with no players and then compress the vector info for "replay" just like Nvidia did with most PhysX. The amount of data required to real-time trace everything is huge, but once done, the output should be simple, small files that can be used to reassemble the data needed.
should be quite correct as a guess (although I find that fashionable way of thinking, "the cloud is the future and everyone wants it", highly annoying ... reflected in Stadia and other cloud-gaming platforms, and now in this kind of idiocy)
#13
medi01
xkm1948: Meanwhile RTRT over the cloud? Did they write that when they were high or something? The latency will be horrible.
I don't think they meant "for gamers".
Costs of that would be prohibitive.
#14
GreiverBlade
medi01: I don't think they meant "for gamers".
Costs of that would be prohibitive.
let's hope for that ... but then it would not be for developers either ... :laugh:
#15
WikiFM
GreiverBlade: about as slow as a normal RT-enabled card ... RT is kinda the ultimate slowdown option (unless you're willing to go 1080p and no AA, well, not an issue for no AA tho ...)

actually, there is a market for RT games? wow ... more than 2? and with a noticeable difference between RT off and RT on?
:laugh: (funny since the next Xbox and PS will have RT and they are not using Nvidia for their GPUs ... well, not that I care more for consoles than I care about RT, tho)
Ray tracing performance is much slower in GTX cards than in RTX ones.

If you can't spot the difference between RT off and RT on, you should see an oculist. Since both the next Xbox and the PS5 will use Navi, they would use a minimal amount of RT, which couldn't be compared to RTX cards.
Since you don't care about consoles or RT, I don't see the point of your comment.
#16
RichF
"In all, RDNA helps AMD achieve a 2.3x gain in performance per area, 1.5x gain in performance per Watt. The 'Navi 10' silicon measures just 251 mm² compared to the 495 mm² of the 'Vega 10' GPU die."

It's too bad that AMD keeps using smaller nodes as a way to sell us small dies instead of taking full advantage.

I guess this is what happens when the company cares more about designing for consoles than for us. At least "the console" won't be quite so much of a joke, though, once it has Zen cores instead of Jaguar (which shouldn't have made it past the drawing board at Sony nor MS).

We waited so long for Polaris and then for Polaris to be replaced by the massively-hyped Vega, which had the same IPC as Fiji. Then, Radeon VII comes out with a small die. Color me underwhelmed.
#17
JB_Gamer
TesterAnon: They lost the price/performance market too, which is sad because that was the only market AMD had left.
Is that so? Not according to the information that was delivered at E3.

I've got the impression that the green army is trolling on every blog and forum right at this moment; the war is on!
#18
Vya Domus
It's interesting they've done things such as doubling the scalar execution units instead of removing them, which is what I would have expected them to do. It looks like compute performance wasn't crippled in the process of improving the graphics pipeline.
TesterAnon: They lost the price/performance market too, which is sad because that was the only market AMD had left.
When that's the only market left for you while everyone else has moved into higher-margin territory, you're doing something wrong. I pointed out long ago that AMD was going to increase their prices no matter how good or crappy their next-generation GPUs are; you can thank Nvidia and the people who bought RTX cards for this wonderful advancement.
phanbuey"AMD sees a future in which real time ray tracing is handled in the cloud."

Seems counter-intuitive when the market is increasingly about lower-latency, high-FPS, high-refresh offerings, and current consumer devices with dedicated hardware are capable of real-time ray tracing.

I am on the ray tracing wagon, and while I do appreciate the slightly cooler shadows and more realistic lighting in the 2 games I play that support it, I really do see it as more of a developer-side feature. I just imagine it's way easier to say "light source here, shiny thing here, go" than do all of the lightmap work and try to fake it. I think the cloud RTRT might be true for consoles...
They were talking about Stadia, aka cloud gaming, NOT offloading RTRT to the cloud.
#19
Steevo
RichF"In all, RDNA helps AMD achieve a 2.3x gain in performance per area, 1.5x gain in performance per Watt. The 'Navi 10' silicon measures just 251 mm² compared to the 495 mm² of the 'Vega 10' GPU die."

It's too bad that AMD keeps using smaller nodes as a way to sell us small dies instead of taking full advantage.

I guess this is what happens when the company cares more about designing for consoles than for us. At least "the console" won't be quite so much of a joke, though, once it has Zen cores instead of Jaguar (which shouldn't have made it past the drawing board at Sony nor MS).

We waited so long for Polaris and then for Polaris to be replaced by the massively-hyped Vega, which had the same IPC as Fiji. Then, Radeon VII comes out with a small die. Color me underwhelmed.
This is the same typical working approach from AMD/ATI: use a midrange card on a new node to validate and learn for larger chips. It's happened numerous times.
#20
IceShroom
RichF: I guess this is what happens when the company cares more about designing for consoles than for us. At least "the console" won't be quite so much of a joke, though, once it has Zen cores instead of Jaguar (which shouldn't have made it past the drawing board at Sony nor MS).
Because that is where the money is made.
And the Jaguar core is much more powerful than you guys think; it just doesn't clock that high.
#21
RichF
IceShroom: Because that is where the money is made. And the Jaguar core is much more powerful than you guys think; it just doesn't clock that high.
It doesn't have to be that way. All consoles are x86 PCs in disguise. People should stop giving MS and Sony money to weaken the PC gaming platform by splintering it into three parts for no good reason.

As for Jaguar, my understanding is that it had worse IPC than Piledriver.
Steevo: This is the same typical working approach from AMD/ATI: use a midrange card on a new node to validate and learn for larger chips. It's happened numerous times.
It has also been the case that AMD doesn't compete, as with the 3870/3850. Same non-competitive performance as the previous generation. Without adequate competition (e.g. duopoly), sure, maybe a company thinks it's in its interest to do things this way.
#23
Darmok N Jalad
RichF: It doesn't have to be that way. All consoles are x86 PCs in disguise. People should stop giving MS and Sony money to weaken the PC gaming platform by splintering it into three parts for no good reason.

As for Jaguar, my understanding is that it had worse IPC than Piledriver.

It has also been the case that AMD doesn't compete, as with the 3870/3850. Same non-competitive performance as the previous generation. Without adequate competition (e.g. duopoly), sure, maybe a company thinks it's in its interest to do things this way.
I don’t think we’d have the huge gaming market we do without cheap consoles powering it all. It’s a simple and easy investment to buy a console, and it’s very standardized to level the advantage of any one player. Every game you buy for a console will work with minimal intervention—no driver updates, no upgrades, no quality sliders to tweak, targeted FPS. That is a price many are willing to pay to have lesser graphics, and developers will come because they know the target hardware and can provide a consistent experience for millions of customers.

Nothing against PC gaming—it offers more control and customization, but with great power comes more money and more time fiddling. I have the means to buy a gaming rig, but I don’t have the time to mess with it. Consumers are speaking though, and the developers go to where the customers are.
#24
phanbuey
Darmok N JaladI don’t think we’d have the huge gaming market we do without cheap consoles powering it all. It’s a simple and easy investment to buy a console, and it’s very standardized to level the advantage of any one player. Every game you buy for a console will work with minimal intervention—no driver updates, no upgrades, no quality sliders to tweak, targeted FPS. That is a price many are willing to pay to have lesser graphics, and developers will come because they know the target hardware and can provide a consistent experience for millions of customers.

Nothing against PC gaming—it offers more control and customization, but with great power comes more money and more time fiddling. I have the means to buy a gaming rig, but I don’t have the time to mess with it. Consumers are speaking though, and the developers go to where the customers are.
Agreed, but life is too short to play FPS's with your thumbs @ 30 fps.
#25
ValenOne
WikiFM: Ray tracing performance is much slower in GTX cards than in RTX ones.

If you can't spot the difference between RT off and RT on, you should see an oculist. Since both the next Xbox and the PS5 will use Navi, they would use a minimal amount of RT, which couldn't be compared to RTX cards.
Since you don't care about consoles or RT, I don't see the point of your comment.
Microsoft has confirmed "hardware accelerated" ray-tracing for Scarlett, hence placing its GPU with second-generation Navi.