
RDNA4 (RX 9070XT / 9070) launch announced for (delayed to) March 2025

Joined
Nov 13, 2024
Messages
130 (1.83/day)
System Name le fish au chocolat
Processor AMD Ryzen 7 5950X
Motherboard ASRock B550 Phantom Gaming 4
Cooling Peerless Assassin 120 SE
Memory 2x 16GB (32 GB) G.Skill RipJaws V DDR4-3600 DIMM CL16-19-19-39
Video Card(s) NVIDIA GeForce RTX 3080, 10 GB GDDR6X (ASUS TUF)
Storage 2 x 1 TB NVMe & 2 x 4 TB SATA SSD in RAID 0
Display(s) MSI Optix MAG274QRF-QD
Power Supply 750 Watt EVGA SuperNOVA G5
I assume that's CPU bound, right? What does his fps look like? In that case it makes sense.
Yes, it is CPU-bound, and I think that's the point of the video: DLSS doesn't decrease input delay in every imaginable scenario you can think of, but with a bit of brain juice someone should come to that conclusion. (I meant the video had no point and the title feels clickbaity, but knowing that now, it was still interesting.)
FPS was also worse with DLSS; here is the full picture:

[Screenshot: FPS and latency readouts with DLSS on vs. off]
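Roughly why that happens: the frame rate is set by whichever of the CPU or GPU takes longer per frame, and DLSS upscaling only shortens the GPU's share. A quick sketch with made-up numbers (nothing here is from the video):

```python
# Toy model: the frame rate is set by the slower of the CPU and GPU work per
# frame, and DLSS upscaling only shortens the GPU's part. All numbers invented.

def fps(cpu_ms: float, gpu_ms: float) -> float:
    """Frames per second when CPU and GPU work overlap each frame."""
    return 1000.0 / max(cpu_ms, gpu_ms)

cpu_ms = 12.0         # simulation + draw-call submission per frame
gpu_native_ms = 8.0   # render time at native resolution
gpu_dlss_ms = 5.0     # render time with DLSS (lower internal resolution)

print(fps(cpu_ms, gpu_native_ms))  # ~83 fps, already CPU-bound
print(fps(cpu_ms, gpu_dlss_ms))    # still ~83 fps: DLSS can't lift a CPU cap
```

In practice the upscaler itself costs a little GPU and CPU time on top, which would explain the slightly lower FPS with DLSS on in that screenshot.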
 
Joined
May 17, 2021
Messages
3,262 (2.42/day)
Processor Ryzen 5 5700x
Motherboard B550 Elite
Cooling Thermalright Peerless Assassin 120 SE
Memory 32 GB Fury Beast DDR4-3200
Video Card(s) Gigabyte RTX 3060 Ti Gaming OC Pro
Storage Samsung 970 Evo 1TB, WD SN850x 1TB, plus some random HDDs
Display(s) LG 27gp850 1440p 165Hz 27''
Case Lian Li Lancool II performance
Power Supply MSI 750w
Mouse G502
Yes, it is CPU-bound, and I think that's the point of the video: DLSS doesn't decrease input delay in every imaginable scenario you can think of, but with a bit of brain juice someone should come to that conclusion.

You don't even need to use brain juice, it's clearly spelled out in the video.
 
Joined
Oct 28, 2012
Messages
1,241 (0.28/day)
Processor AMD Ryzen 3700x
Motherboard asus ROG Strix B-350I Gaming
Cooling Deepcool LS520 SE
Memory crucial ballistix 32Gb DDR4
Video Card(s) RTX 3070 FE
Storage WD sn550 1To/WD ssd sata 1To /WD black sn750 1To/Seagate 2To/WD book 4 To back-up
Display(s) LG GL850
Case Dan A4 H2O
Audio Device(s) sennheiser HD58X
Power Supply Corsair SF600
Mouse MX master 3
Keyboard Master Key Mx
Software win 11 pro
Moore's law or Jensen's law? The claim that cards have to get more expensive because hardware can no longer improve is a bunch of BS. There is a way around the limits of monolithic die improvements: if AMD can make a chiplet GPU, then I'm sure Nvidia can figure it out.
It sounds like you're already buying into the marketing that upscaling and fake frames are a performance improvement, not just a clever trick to convince gamers to keep buying the next gen, which will be required to run DLSS 4 with even more fake frames.
And I'll stop calling it fake frames when Nvidia stops marketing fake frames as a performance uplift over the previous gen, but I expect reviewers will hype it up and list it as a con on cards that don't have it.
The way AMD has been using chiplets for its gaming GPUs is different from what it did for the datacenter. In the datacenter they really fused two full Instinct GPU dies together to get more performance at a lower cost (MCM). On the gaming side the GPU is still fairly monolithic; they've only split the cache off onto a cheaper node to save cost. They decided against MCM because games don't really like that kind of architecture: it's fine for compute, but games need really high transfer speed and low latency between the two dies.
Chiplets for GPUs are not a performance enhancement but a cost-saving measure. We are still limited by how big the graphics engine itself can be on a single die, and TSMC is going to price that die to the moon.
Blackwell for the datacenter makes use of MCM as well.
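To put the bandwidth/latency point in rough numbers (every figure below is invented, purely to show the shape of the problem): if the two halves of a split gaming GPU had to share even on the order of a gigabyte of intermediate data every frame, a fast die-to-die link would already eat a noticeable slice of the frame budget, on top of the latency of each handoff, while a compute workload can be partitioned up front so each die runs almost independently.

```python
# Back-of-envelope with invented numbers: cross-die traffic as a share of a
# game's frame budget. A split gaming GPU would need on-die-class bandwidth
# and latency between its halves; a compute job can partition data up front.

frame_budget_ms = 1000.0 / 120        # targeting 120 fps
shared_per_frame_gb = 1.0             # data both dies must see each frame (assumed)
die_to_die_gbps = 900.0               # hypothetical die-to-die link, GB/s

cross_die_ms = shared_per_frame_gb / die_to_die_gbps * 1000.0
print(f"cross-die traffic: {cross_die_ms:.2f} ms per frame "
      f"({100 * cross_die_ms / frame_budget_ms:.0f}% of the {frame_budget_ms:.2f} ms budget)")
```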

 
Joined
Feb 18, 2005
Messages
5,902 (0.81/day)
Location
Ikenai borderline!
System Name Firelance.
Processor Threadripper 3960X
Motherboard ROG Strix TRX40-E Gaming
Cooling IceGem 360 + 6x Arctic Cooling P12
Memory 8x 16GB Patriot Viper DDR4-3200 CL16
Video Card(s) MSI GeForce RTX 4060 Ti Ventus 2X OC
Storage 2TB WD SN850X (boot), 4TB Crucial P3 (data)
Display(s) Dell S3221QS(A) (32" 38x21 60Hz) + 2x AOC Q32E2N (32" 25x14 75Hz)
Case Enthoo Pro II Server Edition (Closed Panel) + 6 fans
Power Supply Fractal Design Ion+ 2 Platinum 760W
Mouse Logitech G604
Keyboard Razer Pro Type Ultra
Software Windows 10 Professional x64
if AMD can make a chiplet GPU then I'm sure Nvidia can figure it out
Remind me, how has AMD's use of chiplet GPUs worked out for them in terms of performance and thus market share?
 
Joined
Nov 15, 2024
Messages
88 (1.28/day)
It's fine for compute, but games need really high transfer speed and low latency between the two dies.
Does the "sea of wires" seen in the Halo chip mitigate this?

Remind me, how has AMD's use of chiplet GPUs worked out for them in terms of performance and thus market share?
"Depending on how the Radeon RX 9000 series and RDNA 4 fare in the market, AMD could revisit the enthusiast segment with its next generation UDNA architecture that the company will make common to both graphics and compute."

A couple of people mentioned supposed comments made by AMD regarding their chiplet design:
https://www.reddit.com/r/hardware/comments/1i3cjyb
 
Joined
Oct 6, 2021
Messages
1,614 (1.34/day)
The way AMD has been using chiplets for its gaming GPUs is different from what it did for the datacenter. In the datacenter they really fused two full Instinct GPU dies together to get more performance at a lower cost (MCM). On the gaming side the GPU is still fairly monolithic; they've only split the cache off onto a cheaper node to save cost. They decided against MCM because games don't really like that kind of architecture: it's fine for compute, but games need really high transfer speed and low latency between the two dies.
Chiplets for GPUs are not a performance enhancement but a cost-saving measure. We are still limited by how big the graphics engine itself can be on a single die, and TSMC is going to price that die to the moon.
Blackwell for the datacenter makes use of MCM as well.



AMD achieved far more than simply combining two dies. The MI300X is an MCM monster, packing 8 GPUs into a single design. Its effectiveness in computing stems from the fact that such workloads are typically less sensitive to minor latency issues. On the other hand, gaming performance can be significantly affected by even the smallest hiccups, making a Multi-GPU MCM better suited for compute-heavy tasks rather than gaming scenarios.

An MCM tailored for gaming would demand a far more intricate design or a higher level of sophistication in how games are rendered.


AMD MI300 – Taming The Hype – AI Performance, Volume Ramp, Customers, Cost, IO, Networking, Software – SemiAnalysis
 
Joined
Feb 18, 2005
Messages
5,902 (0.81/day)
Location
Ikenai borderline!
System Name Firelance.
Processor Threadripper 3960X
Motherboard ROG Strix TRX40-E Gaming
Cooling IceGem 360 + 6x Arctic Cooling P12
Memory 8x 16GB Patriot Viper DDR4-3200 CL16
Video Card(s) MSI GeForce RTX 4060 Ti Ventus 2X OC
Storage 2TB WD SN850X (boot), 4TB Crucial P3 (data)
Display(s) Dell S3221QS(A) (32" 38x21 60Hz) + 2x AOC Q32E2N (32" 25x14 75Hz)
Case Enthoo Pro II Server Edition (Closed Panel) + 6 fans
Power Supply Fractal Design Ion+ 2 Platinum 760W
Mouse Logitech G604
Keyboard Razer Pro Type Ultra
Software Windows 10 Professional x64
"Depending on how the Radeon RX 9000 series and RDNA 4 fare in the market, AMD could revisit the enthusiast segment with its next generation UDNA architecture that the company will make common to both graphics and compute."

A couple of people mentioned supposed comments made by AMD regarding their chiplet design:
https://www.reddit.com/r/hardware/comments/1i3cjyb
No idea how that is relevant to what I asked.
 
Joined
Apr 2, 2011
Messages
2,870 (0.57/day)
Remind me, how has AMD's use of chiplet GPUs worked out for them in terms of performance and thus market share?

Come on man, don't come to a discussion with that weak sauce of an argument.

The best feature, or the most capable one, doesn't automatically make you win a competition for market share. All it means is that you have a feature. This is where marketing can sell anything if it's worded correctly, and it's also where the best doesn't always win. Citations needed, though, right?
VHS versus Betamax.
TressFX... because why not cite an AMD feature to cement this.
Wankel engines versus the standard four-stroke.
PhysX... because the physics co-processor was such a fun idea it got gobbled up by GPUs.

Make the argument that AMD's marketing sucks. Make the argument that chiplets have to communicate, and that the design issues go all the way back to the days of SLI versus CrossFire. Don't argue by claiming that market share is the same as the value of the technology; that sort of silliness only invites someone to keep arguing, because the retort is as obvious as can be. At least respond truthfully with "if AMD had realized any significant benefits from the chiplet design, wouldn't they have thoroughly trounced the 40x0 generation from Nvidia?" That's entirely truthful, it forces the admission that if there is a benefit it has not been realized, and it doesn't really leave any reasonable actor an argument.




As a side note, I for one do believe that we are coming to the end of Moore's law for single chips. That said, we are already finding solutions like Infinity Fabric: basically, if you can't pack transistors any more densely, distribute them and develop a proper communication network. The relatively monolithic nature of the modern GPU is leading to AI co-processors meant to interpolate and generate frames, but that misses the fundamental shift we actually require. Few people remember, but geometry processing existed before rasterization. If you can make some fundamental leap that is more efficient than rasterization, the GPU race restarts entirely. That would be like the modern multi-core CPU supplanting the single core, and the opposite of Nvidia's approach of brute-forcing ray-tracing calculations. Sometimes you need to break the mold and start from an entirely different set of assumptions rather than just refining a single idea.
Lord knows, raster performance is unlikely to improve by leaps and bounds with our current implementations.
 
Joined
Aug 3, 2006
Messages
231 (0.03/day)
Location
Austin, TX
Processor Ryzen 6900HX
Memory 32 GB DDR4LP
Video Card(s) Radeon 6800m
Display(s) LG C3 42''
Software Windows 11 home premium
That's a loser mentality. Release a fast product at a reasonable price and that's it. If Nvidia wanted to, they would have already kicked AMD out of the market; it doesn't even make sense to try to do anything about it. Price the 5090 at a $900 MSRP and the 5080 at a $500 MSRP, and there you go, AMD is back to consoles. So in what universe does Nvidia care about competing with AMD in the GPU market? They don't even know who AMD is.

Mass psychosis.
 
Joined
Jul 24, 2024
Messages
368 (2.01/day)
System Name AM4_TimeKiller
Processor AMD Ryzen 5 5600X @ all-core 4.7 GHz
Motherboard ASUS ROG Strix B550-E Gaming
Cooling Arctic Freezer II 420 rev.7 (push-pull)
Memory G.Skill TridentZ RGB, 2x16 GB DDR4, B-Die, 3800 MHz @ CL14-15-14-29-43 1T, 53.2 ns
Video Card(s) ASRock Radeon RX 7800 XT Phantom Gaming
Storage Samsung 990 PRO 1 TB, Kingston KC3000 1 TB, Kingston KC3000 2 TB
Case Corsair 7000D Airflow
Audio Device(s) Creative Sound Blaster X-Fi Titanium
Power Supply Seasonic Prime TX-850
Mouse Logitech wireless mouse
Keyboard Logitech wireless keyboard
I was specifically referring to the PCGH review that was posted on a previous page, where they show big differences between the 8 GB and 16 GB 7600 GPUs. Problem is, the 8 GB 4060 was faster than both, so it's not the VRAM per se that's the issue there.


DLSS / FSR reduces latency. You are probably referring to FG.
Today I don't care anymore, but tomorrow I might find you some videos on YouTube where the 4060 Ti 16 GB wins over the 8 GB badly. Thanks to more VRAM there is much less stuttering and far fewer fps drops. I've already searched for those videos like two times, so a third time won't be a significant problem.
Sure, I just mean, performance issues aren't the only symptoms of running out of VRAM.
Average FPS is sometimes not the best indicator of a situation where your VRAM is not enough.
Many gameplay videos show a tremendous increase in the 0.1% and 1% lows, which points to less stuttering.
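For reference, those lows are computed from per-frame frame times rather than an fps counter. A minimal sketch of one common convention (averaging the worst N% of frame times; the frame times below are made up, and a real log would come from PresentMon, CapFrameX or similar):

```python
# Minimal sketch: average fps vs. 1% / 0.1% low fps from a frametime log.
# One common convention: average the worst N% of frame times. Data is invented.

frame_times_ms = [8.3] * 980 + [40.0] * 20   # mostly smooth, with a few stutter spikes

def low_fps(frame_times, percent):
    """fps equivalent of the worst `percent` of frames (by frame time)."""
    worst = sorted(frame_times, reverse=True)
    count = max(1, int(len(worst) * percent / 100))
    return 1000.0 / (sum(worst[:count]) / count)

avg_fps = 1000.0 / (sum(frame_times_ms) / len(frame_times_ms))
print(f"average: {avg_fps:.0f} fps, "
      f"1% low: {low_fps(frame_times_ms, 1):.0f} fps, "
      f"0.1% low: {low_fps(frame_times_ms, 0.1):.0f} fps")
```

With only 2% of the frames spiking, the average barely moves while the lows collapse, which is exactly the kind of stutter running out of VRAM tends to produce.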

Sure, that sucks, but isn't that the case every gen? It's AMD's fault for manufacturing too many GPUs that people don't want to buy, at current prices at least.
I encourage you to take a look at Nvidia's SKU portfolio. Sometimes they even release a totally useless product (e.g. the 4080S). They even sell the "same" product with different dies and memory:
4060, 4060 Ti, 4070, 4070 SUPER, 4070 Ti, 4070 Ti SUPER, 4080, 4080 SUPER (basically equal to the 4080), 4090, 4090D (basically a 4090).
Now take a look at AMD's SKU portfolio: 7 gaming desktop SKUs versus roughly twice that many released by Nvidia.

I don't want it, but those things are what people talk about. "But for RT and hence DLSS Nvidia is better" is just a stated fact.
Might very well change soon. We'll see.

Hyperbolic maybe, but for AAA gaming all of those things will just get more important. Look at Black Myth: Wukong. And again, DLSS being better than FSR is just a stated fact.
Black Myth: Wukong is an unoptimized piece of shit game that you can't run without upscaling even on the mighty RTX 4090. Even W1zzard used 66% upscaling in his review. 66% upscaling, lol!
Games such as Black Myth: Wukong should, in the very first place, get PROPER optimization, because they very much suck at how they look compared to how taxing on hardware they are.
This is exactly what I have been telling people here: DLSS helps game devs neglect polishing and optimization work on games, releasing them sooner and thus earning more profit.
It's a win for devs, a win for Nvidia, and unfortunately a huge loss for us.

All of this to say, their problem is drivers, and they are unwilling to drop the pricing to compensate for that. I will easily pay $100 more for an Nvidia card just to not have to troubleshoot shit all the time, and like me there are countless others.
This argument again? How come I and many others on this forum haven't experienced a serious AMD driver-related issue for like... 10 years already?

Every GPU maker has had problems with drivers: Nvidia, AMD, Intel, Matrox... Man, I've seen such bugs with Matrox in the plant where I work. AMD had serious problems with drivers years ago; what you experience today can't really be compared to that driver disaster back then. Nvidia had problems with GPUs dying while gaming; now their biggest issue is fixing the overlay performance degradation in their new app. Intel's drivers for Arc were a total shitshow for at least a year: reviewers couldn't test some games, the damn cards just wouldn't run them. But credit to them, they've done incredible work, and now it's another story.

Not a very good comparison; a "BMW engine" is not a killer feature, but DLSS very much is. And before you say "DLSS is just upscaling", it's not - it started out as such but it's become much more, including frame generation, and that's what makes it killer. The consumer market agrees, and if you as an individual do not - you're welcome to that opinion, but please remember it is just that, unsupported by the available facts. As such it makes perfect sense for W1zz to note it in his reviews.
You know, for someone who adores and loves BMW stuff above anything else, it may actually be that killer feature, right? A fact that supports it not being a killer feature is that I and many others here at TPU (45% according to the currently ongoing poll), as well as many other gamers elsewhere, are able to game without such post-render image-processing technologies. Whether it is a killer feature or not is also a matter of personal opinion; it's not something that is absolutely necessary to play games. I have never used FSR or XeSS, and I couldn't have used DLSS because I haven't owned an Nvidia GPU since around 2016. IMHO, it's clearly not normal that game devs keep producing shitty games that can't run at even 4K max settings @ 60 fps without DLSS/FSR and FG on a $1600 GPU. That's insane. If anyone wants 2-3 times more fps, fine, go for DLSS/FSR and FG. My point is that such technologies should not be abused to compensate for piece-of-shit game development and optimization. In the first place, these technologies were introduced to enable higher framerates on lower-tier GPUs. Nowadays they are being abused so much. Being forced to turn on DLSS/FSR on a $1600 GPU is insane.

I think that listing the lack of DLSS as a con in GPU reviews (be it AMD or Intel GPU reviews) is not quite objective and suggests the reviewer is biased. It's also unfair, because AMD and Intel cannot support DLSS even if they wanted to; it's a locked, proprietary technology. On the other hand, AMD open-sourced FSR, as Intel did with XeSS, so Nvidia supports them. Maybe if AMD had never opened FSR to the public, and Intel had done the same, we would see a con in Nvidia GPU reviews about the lack of FSR/XeSS support. But I very highly doubt it.

Having DLSS noted as a feature is fine, but IMO listing it as a con isn't, and it only leads readers to question the reviewer's bias. Frame Gen and upscaling are just nice to have for those who want them; they shouldn't be pushed as something needed, but unfortunately the AAA games industry has been using DLSS as an easy way to avoid game optimization.
Exactly my thought. May the Force be with you, always.

Unfortunately you've failed to educate yourself on how the Moore's Law wall is precluding the generational advancements in graphics horsepower that we've become accustomed to. Upscaling and frame generation are required technologies if we want to see graphics fidelity continue to improve over and above what GPUs can offer. They are not hacks, they are not lazy, they are not fake frames, they are a solution to a fundamental physical constraint. Denying the facts doesn't change them.
Nvidia is pushing most of its effort and money into performance advancements related to "AI". The RTX 5070 Ti has more AI performance than the RTX 4090 and about 75% more AI performance than its predecessor. Surely there is room for improvement, but that room might be elsewhere; had those improvements been focused somewhere else, things could be different. As for generated frames, they really are fake, as you don't actually see the game reacting to your actions in them (there is no CPU-side computation involved). They make the framerate number higher, and that's the only thing. We will see more than 3 frames being inserted soon, because 600+ Hz monitors are around the corner. TVs used the same kind of technology as (M)FG in the past; I can't tell how it is now, as I've been a happy non-owner of a TV for almost 12 years. Reflex 2 is a good cheat, but still, it won't make the game any more "alive".
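Roughly how the numbers work (illustrative figures only, not measurements from any review):

```python
# Toy numbers: multi-frame generation multiplies the displayed fps, but the
# game still samples input and runs its simulation only at the rendered rate.

rendered_fps = 60          # frames the game actually simulates and renders
generated_per_real = 3     # e.g. "4x" frame generation: 3 generated per rendered

displayed_fps = rendered_fps * (1 + generated_per_real)
input_interval_ms = 1000.0 / rendered_fps

print(f"displayed: {displayed_fps} fps, "
      f"but the game still reacts to input every {input_interval_ms:.1f} ms")

# Interpolation also has to wait for the *next* rendered frame before it can
# fill the gap, so it tends to add a little latency rather than remove it.
```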
 