
AMD Radeon RX 9070 and 9070 XT Official Performance Metrics Leaked, +42% 4K Performance Over Radeon RX 7900 GRE

As someone with a Radeon RX 6700 10 GB, this is very welcome news.

Though I'm waiting to see how the 9060 shakes out.
 
Did you read my post at all before replying? They have the means to own/rent large server farms, and they're in a competitive industry, so of course they're going to make the investment to have perfect raster & RT so they can be praised for the artistry of their movies. But they do that by pushing frames so complex that, even on modern hardware, the render rate is HOURS PER FRAME, not FRAMES PER SECOND like in gaming. We boot up our PCs and get gaming on the spot; animated movies take MONTHS to complete, so this is impossible to use as a comparison point.
I'm asking why they would invest in doing anything with RT if it doesn't look better. CGI from 20 years ago (even in high-budget movies, e.g. The Matrix) looks a lot worse than today's games. I can see games 20 years from now looking as good as today's CGI.
 
These are not rumours anymore; these are AMD's own numbers.
Still to be taken with a grain of salt. E.g., some games are benchmarked both with RT on and off, some aren't. Could be cherry picking, could simply be lack of time...
I try to block any supposed leaks and concentrate on day-1 reviews. Served me well so far, I believe it will continue to do so.
 
I'm asking why they would invest in doing anything with RT if it doesn't look better. CGI from 20 years ago (even in high-budget movies, e.g. The Matrix) looks a lot worse than today's games. I can see games 20 years from now looking as good as today's CGI.
Go right ahead, if you think you can find a single movie from the mid-2000s that looks better than, idfk-

*throwing darts at random Google results*

...what the fuck?

DreamWorks is working on a live-action "How To Train Your Dragon"??

Arcane Season 2, Avatar 3 (did they really need to make a third?), whatever...
 
The PS being what?
The hardware system to play 3D-rendered games. Yes, you had sprite-based games like Street Fighter, but the new bandwidth allowed for ray tracing. Think Medieval.

Go right ahead, if you think you can find a single movie from the mid-2000s that looks better than, idfk-

*throwing darts at random Google results*

...what the fuck?

DreamWorks is working on a live-action "How To Train Your Dragon"??

Arcane Season 2, Avatar 3 (did they really need to make a third?), whatever...
The first Spider-Man game blew me away the first time I saw it on my nephew's 720p TV.
 
Don't even waste your time with that kiddo.


He admitted on another thread that he buys Nvidia regardless, so he's here just to troll anyway.
Was willing to give him a chance, but not worth it.

Placed it on the lovely Ignore list.
 
Since AMD decided to make only relatively small dies this gen, in order to produce much more volume and sell for less to gain market share while keeping good margins, $600 for the XT and $500 for the other one are good price points to hit that target. They just need to make plenty of them to flood the market, not allowing scalpers and retailers to move prices up. Market share will change very fast if they do so, even more if FSR4 is close to DLSS.
 
Before the name change, it was supposed to be the 8800 XT; by definition, the -800 (XT) model is AMD's mid-tier SKU.
They confirmed they wouldn't produce a Big Navi SKU on RDNA4 (for the others: NO, they didn't say they'd stop making high-end GPUs, they just said they'd focus on the mid/low-mid SKUs for RDNA4 while they reorganize their R&D effort for UDNA).

Oh I see... I'm guessing that's why there's so little difference between RT on/off?

Ah yes, ML/AI. Of course, they have to promote what *datacenters* are after and insist on the one feature that caused them to make Blackwell 2.0 (because 1.0 was *that bad*, I'm guessing; see reports of cracked dies from heat in DC racks from big customers) with little raster improvement but SO MANY AI accelerators jam-packed onto those dies...

Of all the things I wanted AI to be used for, graphics wasn't one of them... Imagine how neat it would have been to run games' NPCs on AI from the GPUs! Now that would have been epic. Maybe in games like Stellaris, Cities: Skylines or, idk, CoD campaign enemies?
Intel and AMD are working in that direction as well. But seeing how DX12 only just started to support that tech, it's going to take a while before it sees widespread use in games. The AMD UDNA launch will probably mention it, with that arch being more optimized for ML/AI.

Forums lately are very quick to say that raster is being artificially stalled, but even AMD, who's supposedly still focusing on it, isn't exactly making big strides to bring an insane generational performance uplift, letting the AI/RT company take the top spot even in raster perf.

Then there's just how poorly the shader core count seems to scale. A 5090 is twice the GPU of a 5080, but it's not twice as fast, uses a lot of power, and that brings a lot of issues with it. Then there's TSMC asking for a newborn in exchange for a bleeding-edge wafer, so I can understand why ML is being seriously considered for improving the visuals.
 
Since AMD decided to make only relatively small dies this gen, in order to produce much more volume and sell for less to gain market share while keeping good margins, $600 for the XT and $500 for the other one are good price points to hit that target. They just need to make plenty of them to flood the market, not allowing scalpers and retailers to move prices up. Market share will change very fast if they do so, even more if FSR4 is close to DLSS.

Considering there aren’t any reference cards, AMD can do very little to enforce retailer pricing through AIBs and virtually nothing to stop scalping.

I hope to be wrong, but I don't expect pricing to be a home run. I think we're in for another gen of AMD generally offering better value with slightly cheaper prices than competing cards.
 
Why did you say RTRT so many times in the beginning? Is this real time ray tracing?

Also, in reference to CG artists. You must mean that they enjoy real time ray tracing versus having lower end scenes that they then render for the full effect. Or are you saying that having the RT cores allows for full scene rendering to be done faster?
Yep, RTRT stands for real-time ray tracing. And yeah, RT/PT really is the grail of rendering methods: more accurate, able to enable more effects, and overall more straightforward to use. It has a heavier compute requirement, but it's definitely better at producing an image. Raster is faster compute-wise, but the technical side of raster rendering is heavier; there are a lot of different rasterisation techniques, and each has its own limitations.

With RT you think more about how you're going to set up the parameters of the materials and the lights to get the desired effect without being too inefficient. With raster you really have to think about which technique is going to be suited for reflections, ambient occlusion and GI, separately, for your use case and budget/technical skill level. From what I've seen, it's not uncommon for a custom lighting algorithm to have to be developed if you're unhappy with the engine's default tools.
And that probably explains why UE is becoming more and more widely used: Epic is somewhat handling that aspect themselves, while having to develop and maintain a custom engine seems to have become a daunting task in the modern era.

The one issue I have with the current state of CG in games is that, unlike the movie industry, it seems focused on pushing the technicalities for realism, while the movie industry is also making big strides in stylized effects. Offline 3D renderers like RenderMan or Arnold are not just good at path-traced realism, they are also insane for stylized rendering. But few studios seem willing to explore that kind of aesthetic.
Stylized Presets Turntable on Vimeo
 
RT doesn't always mean better graphics; the best recent example is the Half-Life 2 RTX remaster video. Realistic lighting: check. 100% correct material properties on all objects: not so much. It's difficult to be wowed when rusted metal and wood look the same and interact with light the same way. That was the same problem with early movie CGI, when everything looked like rubber. We have PBR rendering now, but unless it is applied carefully, RT will just make material property mistakes look even more obvious...
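To make "material properties" concrete: in PBR terms, a metal and a dielectric like wood differ mainly in base reflectance (F0) and roughness. Here's a minimal sketch using Schlick's Fresnel approximation with textbook ballpark F0 values (not taken from any particular engine); give both materials the same numbers and they respond to light identically, which is exactly the mistake described above:

```python
import math

# Minimal illustration of PBR "material properties": Schlick's Fresnel approximation with
# ballpark base reflectance (F0) values. If rusted metal and wood share the same F0/roughness,
# they will respond to light the same way, which is the mistake described above.
def schlick_fresnel(f0: float, cos_theta: float) -> float:
    """Fraction of light reflected specularly at a given viewing angle."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

F0_WOOD = 0.04   # typical dielectric base reflectance
F0_IRON = 0.56   # typical metal base reflectance (averaged to a single channel here)

for angle_deg in (0, 45, 80):
    c = math.cos(math.radians(angle_deg))
    print(f"{angle_deg:>2} deg  wood: {schlick_fresnel(F0_WOOD, c):.2f}  iron: {schlick_fresnel(F0_IRON, c):.2f}")
```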
 
The 9070XT being 42% faster than a 7900GRE puts it ahead of the 5070Ti according to TPU relative performance.

42% Using RT

37% if no RT

7900 GRE 4K avg = 59.3 fps; +37% ≈ 81 fps avg
Slower than the 4080 in 4K
[TPU average FPS chart, 3840×2160]
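For anyone wanting to check that arithmetic, a quick sketch; the 59.3 fps baseline is the TPU 4K average quoted above and the uplifts are AMD's own claimed numbers, not measured results:

```python
# Sanity check of the uplift math above. Baseline and uplift figures come from the thread
# (TPU 4K average for the 7900 GRE and AMD's claimed gains); nothing here is measured.
GRE_4K_AVG_FPS = 59.3
UPLIFT_RASTER = 0.37   # claimed uplift without RT
UPLIFT_MIXED = 0.42    # claimed uplift across AMD's RT-inclusive game selection

def projected_fps(baseline_fps: float, uplift: float) -> float:
    """Project an average FPS from a baseline and a fractional uplift."""
    return baseline_fps * (1.0 + uplift)

print(f"Raster only:  {projected_fps(GRE_4K_AVG_FPS, UPLIFT_RASTER):.1f} fps")  # ~81.2 fps
print(f"RT-inclusive: {projected_fps(GRE_4K_AVG_FPS, UPLIFT_MIXED):.1f} fps")   # ~84.2 fps
```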


If the performance is around the 5070 Ti, I'll quite happily pay $650 for the XT, especially after the whole Nvidia fake-pricing farce. Heck, this whole Nvidia generation has been fake everything; by comparison (if the performance proves real), the AMD offerings look like a bargain.
$650 + VAT + AIB model extra.
You know it costs more than $650 to get one?

With 20% VAT that's $780, plus the AIB extra if you want a better model.

So, a 30-35% average uplift in 1440p over a factory-downclocked 7900 GRE, in a list of games AMD chose themselves. I bet they were testing it without any overclocking. And a factory-downclocked 7900 GRE in 1440p is only a bit above a 7800 XT, maybe a 10% difference. Well, not so bad; everything depends on price. If the 5070 Ti were available at MSRP, AMD would have no chance, but it costs more than $1,000, so the red team has a good chance.

$750 + 20% VAT = $900
So +$100 extra for an AIB model.
We have the 5070 Ti selling for €940 here, and we have 25.5% VAT.

Why do people think we can buy GPUs at MSRP without VAT?
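To spell out the street-price arithmetic (the MSRPs and VAT rates are the ones quoted here; the AIB premium is just the assumed round figure from the posts):

```python
# Street-price arithmetic from the posts above. The VAT rates and MSRPs are the quoted ones;
# the +100 AIB premium is the poster's assumption, not an official figure.
def street_price(msrp: float, vat_rate: float, aib_premium: float = 0.0) -> float:
    """MSRP with VAT applied, plus an optional AIB premium."""
    return msrp * (1.0 + vat_rate) + aib_premium

print(street_price(650, 0.20))         # 780.0  -> $650 MSRP with 20% VAT
print(street_price(750, 0.20))         # 900.0  -> $750 MSRP with 20% VAT
print(street_price(650, 0.255, 100))   # ~915.8 -> 25.5% VAT plus a 100 AIB premium
```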
 
42% Using RT

37% if no RT

7900 GRE 4K avg = 59.3 fps; +37% ≈ 81 fps avg
Slower than the 4080 in 4K


$650 + VAT + AIB model extra.
You know it costs more than $650 to get one?

With 20% VAT that's $780, plus the AIB extra if you want a better model.



$750 + 20% VAT = $900
So +$100 extra for an AIB model.
We have the 5070 Ti selling for €940 here, and we have 25.5% VAT.

Why do people think we can buy GPUs at MSRP without VAT?
Based on the performance leaks and estimates, the RX 9070 XT would likely land at an FPS value around 100 here. This estimate is based on the assumption that it would outperform the RTX 4080 Super 16 GB (85.5 FPS) and be close to the RTX 5080 16 GB (96.7 FPS).
 
we're in for another gen of AMD generally offering better value with slightly cheaper prices than competing cards.
Man, in which country is that the case?

The "slightly" part.

In Germany a freaking 12 GB piece-of-shit 4070 costs more than a 7900 XT 20 GB, a card that is 22% faster per TPU benchmarks.
 
Man, in which country is that the case?

The "slightly" part.

In Germany a freaking 12 GB piece-of-shit 4070 costs more than a 7900 XT 20 GB, a card that is 22% faster per TPU benchmarks.

With regards to MSRP: we all know Nvidia doesn't do price reductions unless it's a Super rebrand. On the other hand, AMD prices will see discounts after the initial release, improving value, as seems to be the case in your country.
 
Then there's just how poorly the shader core count seems to scale. A 5090 is twice the GPU of a 5080, but it's not twice as fast, uses a lot of power, and that brings a lot of issues with it. Then there's TSMC asking for a newborn in exchange for a bleeding-edge wafer, so I can understand why ML is being seriously considered for improving the visuals.
The ROPs are the same in the RTX 5090 as in the RTX 4090. At some point the CPU cannot push enough data to large GPUs. I really think Nvidia didn't add more ROPs because they already knew the RTX 4090 was at the limit of being CPU-bound. Adding shaders while holding the ROP count steady, and tuning the RT cores to be faster in the latency pipeline, is the only thing Nvidia has really done for Blackwell. Normally, with a 50% increase in shaders you end up with about a 33% increase in performance. All Nvidia really achieved with Blackwell was a minor increase from the usual 33% to 35%; it's not really impressive at all unless you take the ROP limit into account.

Both AMD & Nvidia have decided to limit themselves in specific areas of the GPU to drive innovation on both sides.
 
The ROPs are the same in the RTX 5090 as in the RTX 4090. At some point the CPU cannot push enough data to large GPUs. I really think Nvidia didn't add more ROPs because they already knew the RTX 4090 was at the limit of being CPU-bound. Adding shaders while holding the ROP count steady, and tuning the RT cores to be faster in the latency pipeline, is the only thing Nvidia has really done for Blackwell. Normally, with a 50% increase in shaders you end up with about a 33% increase in performance. All Nvidia really achieved with Blackwell was a minor increase from the usual 33% to 35%; it's not really impressive at all unless you take the ROP limit into account.

Both AMD & Nvidia have decided to limit themselves in specific areas of the GPU to drive innovation on both sides.
Here, it must be clarified that the RTX 5090 uses the same number of ROPs as the RTX 4090. High-end GPUs face more CPU bottlenecking, which likely played a role in Nvidia's decision not to increase the ROP count. They went for more shaders and extensive optimization of their RT cores so as to lessen latency in pipeline operations.
While a 50% increase in the number of shaders would typically result in about a 33% increase in performance, the Blackwell architecture managed a small improvement to 35%. To someone less technically savvy this may at first come off as unimpressive; however, the overall balance and efficiency of the GPU design must also be taken into account.
Both AMD and Nvidia have strategically limited certain aspects of their GPUs in order to spur innovation and give themselves a competitive edge, allowing resources to be directed where the GPU needs improvement, such as AI and generative computing, while still delivering high-performance graphics solutions.
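The 50%-more-shaders-for-33%-more-performance rule of thumb mentioned above can be written as a sub-linear power law; the following is only a toy model fitted to the numbers in these posts, not anything published by Nvidia or AMD:

```python
import math

# Toy model of the scaling rule of thumb above: +50% shaders -> roughly +33% performance.
# Fit it as perf ~ shaders**alpha; illustrative only, not vendor data.
ALPHA = math.log(1.33) / math.log(1.50)   # ~0.70

def perf_ratio(shader_ratio: float) -> float:
    """Estimated performance ratio for a given ratio of shader counts."""
    return shader_ratio ** ALPHA

print(f"alpha ~ {ALPHA:.2f}")
print(f"+50% shaders -> +{(perf_ratio(1.5) - 1) * 100:.0f}% perf")  # ~33% by construction
print(f"2x shaders   -> +{(perf_ratio(2.0) - 1) * 100:.0f}% perf")  # ~63% under this model, i.e. far from 2x
```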
 
9070 (not the XT) is faster than the 5070 and more energy efficient.
But there's no halo card this time, so no one will buy it. AMD knows this, so I wonder why they didn't bother.

Plus I always have a laugh watching poor people with $600 RTX 4070s act like $620 7900 XTs are slow. In 8/10 games you're the slowpoke.
Intel is also selling GPUs without a halo product. I think they are moving some volume.
 
Here, it must be clarified that the RTX 5090 uses the same number of ROPs as the RTX 4090. High-end GPUs face more CPU bottlenecking, which likely played a role in Nvidia's decision not to increase the ROP count. They went for more shaders and extensive optimization of their RT cores so as to lessen latency in pipeline operations.
While a 50% increase in the number of shaders would typically result in about a 33% increase in performance, the Blackwell architecture managed a small improvement to 35%. To someone less technically savvy this may at first come off as unimpressive; however, the overall balance and efficiency of the GPU design must also be taken into account.
Both AMD and Nvidia have strategically limited certain aspects of their GPUs in order to spur innovation and give themselves a competitive edge, allowing resources to be directed where the GPU needs improvement, such as AI and generative computing, while still delivering high-performance graphics solutions.

The only issue with that is that past architecture changes/redesigns have produced bigger performance increases with only slightly more changes. This really does look like mainly a change to increase AI performance, yet Blackwell is sometimes slower than Ada Lovelace in specific AI workloads.
 
AMD is set to release the Radeon RX 9070 Series shortly, but it probably won't match the performance of the RTX 5070 Ti.
I didn't think this snarky comment in the 5070 Ti reviews would age so poorly. I hope the real review data will confirm it.
 
What if 9070xt overclocks from 3.15GHz to 3.5GHz?

Likely locked down, which is fine because of bandwidth limitation (and perhaps binning for higher-end part).
This is partially to keep prices low, partially not to give away the farm. Part of it truly is to compete with nVIDIA's stack, where perhaps they would have done things differently at one point.

That doesn't make them bad, especially for the price. Just, you know, don't expect them to be great at what they currently may be able to do 'just barely', forever, for reasons I have explained before.
This is no different than a lot of nVIDIA parts, but cheaper (because sensible use of cheap RAM, etc). Some things just don't line up very well right now, but will on 3nm, as I've again said countless times.
Many things I've postulated are pretty much coming true, even looking at this. 3244 MHz (~7508 SP raster utilization bc 64 ROPs?) would be bandwidth-limited at a little less than 22 Gbps GDDR6... iow it fits a RAM OC.
Now watch as the 9070 is limited to around 3100 MHz (power-limited) or so, maybe less, which (with 7168 SP) matches the stock speed of 20 Gbps GDDR6... because something like that would make sense.
Hooray, 10% (or more) product segmentation that doesn't need to be there, but w/e! That's just how it is, IG. Marketing FTW. Also, yeah, clearly they want to make the 9070 cheap and compete with the 5070 in perf (at 5060 Ti pricing?).
I dunno; the kicker is that it's 16 GB instead of mangling the bus down to 12 GB. It's cool they gave us that, but it might hike the price up a little. Maybe they binned them some way I'm not considering; all possible.

Again, with 8192 SP, using all the compute would require 24 Gbps at ~3264 MHz (by my math; others may vary slightly). Weird where this clocks... weird. Obviously using it for raster requires less and could clock higher.
How high? Well... if using the same bandwidth calc as the 9070 XT... about the same as I expect these chips to clock without any power limits (on average). Huh. Again, very weird. Except, you know, not... bc logical.
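For what it's worth, that math boils down to assuming the memory speed needed to keep the shaders fed scales with shader count × core clock. A minimal sketch, using the 7168 SP / ~3100 MHz / 20 Gbps combination above as the "balanced" anchor (the anchor itself is an assumption from these posts, not a confirmed spec):

```python
# Sketch of the bandwidth reasoning above: assume the required memory speed scales with
# (shader count x core clock), normalised against one configuration treated as balanced.
# The anchor values are the poster's assumptions, not confirmed specifications.
ANCHOR_SHADERS = 7168       # assumed 9070 shader count
ANCHOR_CLOCK_MHZ = 3100.0   # assumed power-limited 9070 clock
ANCHOR_MEM_GBPS = 20.0      # GDDR6 speed it is assumed to be paired with

def required_mem_gbps(shaders: int, clock_mhz: float) -> float:
    """Memory speed (Gbps) needed to keep a config fed, under proportional scaling."""
    return ANCHOR_MEM_GBPS * (shaders * clock_mhz) / (ANCHOR_SHADERS * ANCHOR_CLOCK_MHZ)

# A full 8192 SP part at ~3264 MHz works out to roughly 24 Gbps, matching the post's figure.
print(f"{required_mem_gbps(8192, 3264):.1f} Gbps")
```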

They 'haven't tested the 5070 Ti' because it's not its competition, is the point. I mean, it is... but it isn't. I chuckled waiting to see if anyone else would catch that (they didn't, afaik).

C'mon, peeps, be logical.

Do you truly believe they don't have a 5070ti? Don't you think it's more likely they expected to lose (against certain cards with a certain config)...
....but are trying to match them where they can now (for blowout perf/$ if not higher asp for SKUs than planned)? I mean, I get it...I don't want to explain it, and perhaps should less in the future, but I get it.

I truly do find all this amusing, fwiw. Obviously products are fluid and chips can be sliced up different ways, but this is one of the most fascinating I've seen develop publicly before our very eyes.

Mostly bc nVIDIA is ripping people off, and AMD is finding ways to capitalize on that with their always-planned logical iterations.
I wouldn't doubt if originally they were 20/24gbps 16GB products....that lost to 5070ti/5080...but still had to be priced against 5070.

But now...we'll have to see what happens w/ 24gbps ram. 16GB or 32GB? Maybe they expected to have to compete with TI w/ higher-end 16GB product, but now can with the lower-end, and hence why so.

Will they then try to match the 5070 Ti in price, >16GB against all current 16GB GB203 products (with a 32GB model)? Hmm... makes sense, doesn't it?

What people *really* want is a 16GB 24gbps model at 5070 pricing, without clock/power limits, and they are purposely avoiding that, but with 32GB they might be able to up-sell to 5070ti pricing against a 5080.
Ofc, nVIDIA will likely release a 24GB model that more definitely wins the fight where AMD could only *almost* compete against a 16GB model (even if 16GB themselves), but that's not the point.

The point is popcorn.



You really shouldn't ignore people, that's not very polite, even if you disagree. When they say things you don't like, and then are kinda-sorta-pretty-much right, you could've been prepared for that possibility.

But hey, if you are like that, or follow people like that, you do you. I just think it's unhealthy. Sometimes you have to listen to other people, even eventually take the L, and no, it's not fun when it happens.
I just do not support people living in their bubble of affirmation predicated on marketing... Expand your horizons; it promotes greater thought and independence. Sometimes you may be right/have a good idea.
Sometimes, and I am no different, you get it wrong, but at least you gave people something to think about and/or understand that some people think/feel that way. If you know differently; have the conversation.
Sometimes, especially when math/history is involved and you're familiar, and you can prove why, you get it right.
Sometimes you still get it wrong (and/or forget things). That's human. Nobody should be ashamed of that; when people feel that way they often contribute less, and then one group controls the narrative.
And that group is not always representative of all people and/or always factually correct. Most of the time they're trying to sell you something, and/or get something from it. I am not.

Sometimes, and maybe this part is just me, one writes and/or explains too damn much. It's just...in this atmosphere...with so many people blindly following and questioning nothing...I have to do it sometimes.

Because I care; not because I need to be right. I just want the ideas out there and for the needed sentiment to be represented. That can be extremely difficult, and I'm not perfect; always learning/adapting.

I think that's okay, for me and/or anyone else, as long as they're being honest and attempting to contribute to the best of their knowledge/ability; and hopefully without any ulterior motives/preconceptions.

:)
 
They haven't tested the 5070 Ti because it's not its competition. I truly do find all this amusing, fwiw. Everything I've explained is pretty much coming true, even looking at this.
They probably didn't test against anything Nvidia to keep it professional. Testing against their own outgoing product is the right thing to do; this is what Nvidia does as well, by the way (even if with MFG-skewed data, but that's beside the point here).
 
42% Using RT

37% if no RT

7900 GRE 4K avg = 59.3 fps; +37% ≈ 81 fps avg
Slower than the 4080 in 4K
The 9070 XT isn't a high-end card, so I'm not expecting amazing 4K performance out of it; 37% faster is still decent, IMO.
AMD just needs to get the pricing right. If they price the XT at $649 and make sure it has plenty of stock, they have a chance to get lots of sales, because anything from Nvidia is overpriced or unobtainable.
 
The 9070 XT isn't a high-end card, so I'm not expecting amazing 4K performance out of it; 37% faster is still decent, IMO.
AMD just needs to get the pricing right. If they price the XT at $649 and make sure it has plenty of stock, they have a chance to get lots of sales, because anything from Nvidia is overpriced or unobtainable.
$649 for something they're comparing to the 7900 GRE from last gen, a $549 MSRP card, would not be accepted well by the market imo. They gotta price it better than that. The 7800 XT was a $499 card, and they chose "70" branding for this new one. I think $649 would just be seen as slotting into Nvidia's bracket, just not as high, as opposed to the market disruption they really need if they want to make a statement and maybe start gaining back some market share. I'm afraid all $649 will appeal to are those already inclined to buy AMD in the first place. I really don't think that will be aggressive enough. Sure, there's MSRP and then the real price of the 5070 Ti, so I kind of get it, but if the idea is to disrupt and reset what the mainstream is, I don't think $649 is gonna wow anyone.

Just my read, but I could be wrong. I have a 3080 Ti and game at 4K. My biggest gripe right now is VRAM. I realize the 9070 XT isn't "high end", but it should still be an improvement at 4K over my card, and it gets me 16 GB. If priced right, I'd get it to tide me over till next gen. I would have considered a 5080 if it were 24 GB, and the 5070 Ti at $900 can go fly a kite. If RT has improved as much as they claim and the rumors suggest, and if FSR4 ends up being good, then I don't see why not.
 
$649 for something they're comparing to the 7900 GRE from last gen, a $549 MSRP card, would not be accepted well by the market imo. They gotta price it better than that. The 7800 XT was a $499 card, and they chose "70" branding for this new one. I think $649 would just be seen as slotting into Nvidia's bracket, just not as high, as opposed to the market disruption they really need if they want to make a statement and maybe start gaining back some market share. I'm afraid all $649 will appeal to are those already inclined to buy AMD in the first place. I really don't think that will be aggressive enough. Sure, there's MSRP and then the real price of the 5070 Ti, so I kind of get it, but if the idea is to disrupt and reset what the mainstream is, I don't think $649 is gonna wow anyone.
That's an 18% higher price for 38% higher 1440p and 42% higher 4K performance. Model numbers are irrelevant.
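Putting numbers on that: if the XT really lands at $649 against the GRE's $549 MSRP with the claimed uplifts (all figures from the posts above; street prices and real reviews could obviously change the picture), perf-per-dollar still improves:

```python
# Perf-per-dollar check for the reply above. Prices and uplifts are the ones quoted in the
# thread, not measured or confirmed figures.
def perf_per_dollar_gain(perf_uplift: float, price_uplift: float) -> float:
    """Relative perf/$ improvement from fractional performance and price increases."""
    return (1.0 + perf_uplift) / (1.0 + price_uplift) - 1.0

price_uplift = 649 / 549 - 1          # ~18% higher price
print(f"1440p: +{perf_per_dollar_gain(0.38, price_uplift) * 100:.0f}% perf per dollar")  # ~17%
print(f"4K:    +{perf_per_dollar_gain(0.42, price_uplift) * 100:.0f}% perf per dollar")  # ~20%
```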
 