
RDNA4 (RX 9070XT / 9070) launch announced for (delayed to) March 2025

Joined
Nov 13, 2024
Messages
148 (2.06/day)
System Name le fish au chocolat
Processor AMD Ryzen 7 5950X
Motherboard ASRock B550 Phantom Gaming 4
Cooling Peerless Assassin 120 SE
Memory 2x 16GB (32 GB) G.Skill RipJaws V DDR4-3600 DIMM CL16-19-19-39
Video Card(s) NVIDIA GeForce RTX 3080, 10 GB GDDR6X (ASUS TUF)
Storage 2 x 1 TB NVME & 2 x 4 TB SATA SSD in Raid 0
Display(s) MSI Optix MAG274QRF-QD
Power Supply 750 Watt EVGA SuperNOVA G5
I assume that's CPU bound, right? What does his fps look like? In that case it makes sense.
Yes, it is CPU bound, and I think that's the point of the video. DLSS doesn't decrease input delay in every imaginable scenario you can think of, but with a bit of brain juice someone should come to that conclusion. (I meant the video had no point and the title feels clickbaity, but knowing that now, it was still interesting.)
FPS was also worse with DLSS, here is the full picture:

[attached screenshot: FPS and latency comparison with DLSS on and off]
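For anyone wondering why upscaling can't help in that situation, here's a minimal sketch of the bottleneck logic. The frame times are made-up illustrative numbers, not figures from the video:

```python
# Rough model of a CPU-bound frame: the frame rate is set by whichever stage
# is slower, so shrinking the GPU's render time below the CPU's frame time
# gains nothing, and the small fixed cost of the upscaling pass can even
# make things slightly worse. All times here are assumed/illustrative.

def fps(cpu_ms, gpu_ms):
    return 1000 / max(cpu_ms, gpu_ms)

cpu_ms = 12.0                   # game logic + draw submission (the bottleneck)
gpu_native_ms = 9.0             # GPU already finishes before the CPU does
gpu_dlss_ms = 9.0 * 0.6 + 1.0   # less pixel work, plus an assumed ~1 ms upscale pass

print(f"native: {fps(cpu_ms, gpu_native_ms):.0f} fps")  # ~83 fps
print(f"DLSS:   {fps(cpu_ms, gpu_dlss_ms):.0f} fps")    # still ~83 fps, latency unchanged
```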
 
Joined
May 17, 2021
Messages
3,275 (2.43/day)
Processor Ryzen 5 5700x
Motherboard B550 Elite
Cooling Thermalright Peerless Assassin 120 SE
Memory 32GB Fury Beast DDR4 3200Mhz
Video Card(s) Gigabyte 3060 ti gaming oc pro
Storage Samsung 970 Evo 1TB, WD SN850x 1TB, plus some random HDDs
Display(s) LG 27gp850 1440p 165Hz 27''
Case Lian Li Lancool II performance
Power Supply MSI 750w
Mouse G502
Yes, it is CPU bound, and I think that's the point of the video. DLSS doesn't decrease input delay in every imaginable scenario you can think of, but with a bit of brain juice someone should come to that conclusion.

You don't even need to use brain juice, it's clearly spelled out in the video.
 
Joined
Oct 28, 2012
Messages
1,244 (0.28/day)
Processor AMD Ryzen 3700x
Motherboard asus ROG Strix B-350I Gaming
Cooling Deepcool LS520 SE
Memory crucial ballistix 32Gb DDR4
Video Card(s) RTX 3070 FE
Storage WD sn550 1To/WD ssd sata 1To /WD black sn750 1To/Seagate 2To/WD book 4 To back-up
Display(s) LG GL850
Case Dan A4 H2O
Audio Device(s) sennheiser HD58X
Power Supply Corsair SF600
Mouse MX master 3
Keyboard Master Key Mx
Software win 11 pro
Moore's law or Jensen's law? The claim that cards have to get more expensive while hardware can no longer improve is a bunch of BS. There is a way around the limit of monolithic die improvements: if AMD can make a chiplet GPU, then I'm sure Nvidia can figure it out.
It sounds like you're already buying into the marketing that upscaling and fake frames are a performance improvement, not just a clever trick to convince gamers to keep buying the next gen, which will be required to run DLSS 4 with even more fake frames.
And I'll stop calling it fake frames when Nvidia stops marketing fake frames as a performance uplift over the previous gen, but I expect reviewers will hype it up and call it a con on cards that don't have it.
The way AMD has been using chiplets for their gaming GPUs is different from what they used for the datacenter. In the datacenter they really fused two full Instinct GPU dies together to get more performance at a lower cost (MCM). On the gaming side, the GPU is still fairly monolithic; they've only split the cache off onto a cheaper node to save cost. They decided against using MCM because games don't really like that kind of architecture. It's fine for compute, but games need really high transfer speed and low latency between the two dies.
Chiplets for GPUs are not a performance enhancement but a cost-saving measure. We are still limited by how big the graphics engine itself can be on a single die. And TSMC is going to price that die to the moon.
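To put rough numbers on the transfer-speed point, a back-of-the-envelope sketch. Every figure here is my own illustrative assumption, not a vendor spec:

```python
# Why splitting the render pipeline across two dies is harder than splitting
# a compute GPU: the die-to-die link would need enormous bandwidth, and a
# game's ~7 ms frame budget leaves little room for extra hop latency.
# All numbers are illustrative assumptions.

def link_gbps(width, height, bytes_per_pixel, touches_per_frame, fps):
    """GB/s the link needs if each frame's working set crosses it this often."""
    return width * height * bytes_per_pixel * touches_per_frame * fps / 1e9

print(f"{link_gbps(3840, 2160, 16, 20, 144):.0f} GB/s")  # ~382 GB/s for one 4K / 144 Hz case
print(f"{1000 / 144:.1f} ms frame budget at 144 fps")    # ~6.9 ms

# Compute workloads queue big independent batches and can hide link latency;
# a game's many dependent, latency-sensitive accesses cannot, which is why
# AMD only moved the cache and memory controllers off-die on RDNA 3 gaming parts.
```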
Blackwell for the datacenter makes use of MCM as well.

 
Joined
Feb 18, 2005
Messages
5,914 (0.81/day)
Location
Ikenai borderline!
System Name Firelance.
Processor Threadripper 3960X
Motherboard ROG Strix TRX40-E Gaming
Cooling IceGem 360 + 6x Arctic Cooling P12
Memory 8x 16GB Patriot Viper DDR4-3200 CL16
Video Card(s) MSI GeForce RTX 4060 Ti Ventus 2X OC
Storage 2TB WD SN850X (boot), 4TB Crucial P3 (data)
Display(s) Dell S3221QS(A) (32" 38x21 60Hz) + 2x AOC Q32E2N (32" 25x14 75Hz)
Case Enthoo Pro II Server Edition (Closed Panel) + 6 fans
Power Supply Fractal Design Ion+ 2 Platinum 760W
Mouse Logitech G604
Keyboard Razer Pro Type Ultra
Software Windows 10 Professional x64
if AMD can make a chiplet GPU then I'm sure Nvidia can figure it out
Remind me, how has AMD's use of chiplet GPUs worked out for them in terms of performance and thus marketshare?
 
Joined
Nov 15, 2024
Messages
94 (1.32/day)
It's fine for compute, but games need a really high transfer speed/low latency between the two die.
Does the "sea of wires" seen in the Halo chip mitigate this?

Remind me, how has AMD's use of chiplet GPUs worked out for them in terms of performance and thus marketshare?
"Depending on how the Radeon RX 9000 series and RDNA 4 fare in the market, AMD could revisit the enthusiast segment with its next generation UDNA architecture that the company will make common to both graphics and compute."

A couple of people mentioned supposed comments made by AMD regarding their chiplet design:
https://www.reddit.com/r/hardware/comments/1i3cjyb
 
Joined
Oct 6, 2021
Messages
1,619 (1.34/day)
The way AMD has been using chiplets for their gaming GPUs is different from what they used for the datacenter. In the datacenter they really fused two full Instinct GPU dies together to get more performance at a lower cost (MCM). On the gaming side, the GPU is still fairly monolithic; they've only split the cache off onto a cheaper node to save cost. They decided against using MCM because games don't really like that kind of architecture. It's fine for compute, but games need really high transfer speed and low latency between the two dies.
Chiplets for GPUs are not a performance enhancement but a cost-saving measure. We are still limited by how big the graphics engine itself can be on a single die. And TSMC is going to price that die to the moon.
Blackwell for the datacenter makes use of MCM as well.



AMD achieved far more than simply combining two dies. The MI300X is an MCM monster, packing 8 GPU dies into a single design. Its effectiveness in compute stems from the fact that such workloads are typically less sensitive to minor latency issues. Gaming performance, on the other hand, can be significantly affected by even the smallest hiccups, making a multi-GPU MCM better suited to compute-heavy tasks than to gaming scenarios.

An MCM tailored for gaming would demand a far more intricate design or a higher level of sophistication in how games are rendered.


AMD MI300 – Taming The Hype – AI Performance, Volume Ramp, Customers, Cost, IO, Networking, Software – SemiAnalysis
 
Joined
Feb 18, 2005
Messages
5,914 (0.81/day)
Location
Ikenai borderline!
System Name Firelance.
Processor Threadripper 3960X
Motherboard ROG Strix TRX40-E Gaming
Cooling IceGem 360 + 6x Arctic Cooling P12
Memory 8x 16GB Patriot Viper DDR4-3200 CL16
Video Card(s) MSI GeForce RTX 4060 Ti Ventus 2X OC
Storage 2TB WD SN850X (boot), 4TB Crucial P3 (data)
Display(s) Dell S3221QS(A) (32" 38x21 60Hz) + 2x AOC Q32E2N (32" 25x14 75Hz)
Case Enthoo Pro II Server Edition (Closed Panel) + 6 fans
Power Supply Fractal Design Ion+ 2 Platinum 760W
Mouse Logitech G604
Keyboard Razer Pro Type Ultra
Software Windows 10 Professional x64
"Depending on how the Radeon RX 9000 series and RDNA 4 fare in the market, AMD could revisit the enthusiast segment with its next generation UDNA architecture that the company will make common to both graphics and compute."

A couple of people mentioned supposed comments made by AMD regarding their chiplet design:
https://www.reddit.com/r/hardware/comments/1i3cjyb
No idea how that is relevant to what I asked.
 
Joined
Apr 2, 2011
Messages
2,872 (0.57/day)
Remind me, how has AMD's use of chiplet GPUs worked out for them in terms of performance and thus marketshare?

Come on man, don't come to a discussion with that weak sauce of an argument.

The best feature, or the most capable product, doesn't automatically win a competition for market share. All it means is that you have a feature. This is where marketing can sell anything if worded correctly. It's also where the best doesn't always win. Citations needed though, right?
VHS versus Beta.
TressFX... because why not talk about an AMD feature to cement this.
Wankel engines versus the standard four-stroke.
PhysX... because the physics co-processor was such a fun idea it got gobbled up by GPUs.

Make the argument that AMD's marketing sucks. Make the argument that chiplets have to communicate, and that the design issues go all the way back to the days of SLI versus CrossFire. Don't argue with someone by claiming that market share is the same as the value of the technology. That sort of silliness only invites them to keep arguing, because the retort is as obvious as can be. At least respond truthfully with "if AMD had realized any significant benefits from the chiplet design, wouldn't they have thoroughly trounced Nvidia's 40x0 generation?" That's entirely truthful, it forces the admission that if there is a benefit it has not been realized, and it doesn't really leave any reasonable actor an argument.




As a side note, I for one do believe that we are coming to the end of Moore's law with single chips. That said, we are already finding solutions like Infinity Fabric. Basically, if you can't pack them smaller, distribute them and develop a proper communication network. The relatively monolithic nature of the modern GPU is leading to AI co-processors meant to interpolate and generate frames, but that misses the fundamental shift we actually require. Few people remember, but geometry processing existed before rasterization. If you can make some fundamental leap that is more efficient than rasterization, you restart the entire GPU race. This is kind of like the modern multi-core CPU supplanting the single core, and the opposite of Nvidia's solution of brute-forcing ray-tracing calculations. Sometimes you need to break the mold and start from an entirely different set of assumptions rather than just refining a single idea.
Lord knows, raster performance is unlikely to make leaps and bounds based on our current implementations.
 
Joined
Aug 3, 2006
Messages
243 (0.04/day)
Location
Austin, TX
Processor Ryzen 6900HX
Memory 32 GB DDR4LP
Video Card(s) Radeon 6800m
Display(s) LG C3 42''
Software Windows 11 home premium
That's a loser mentality. Release a fast product at a reasonable price and that's it. If Nvidia wanted to, they would have already kicked AMD out of the market; it doesn't even make sense for them to bother. 5090 at a $900 MSRP, 5080 at a $500 MSRP, and there you go, AMD is back to consoles. So in what universe does Nvidia care about competing with AMD in the GPU market? They don't even know who AMD is.

Mass psychosis.
 
Joined
Jul 24, 2024
Messages
371 (2.01/day)
System Name AM4_TimeKiller
Processor AMD Ryzen 5 5600X @ all-core 4.7 GHz
Motherboard ASUS ROG Strix B550-E Gaming
Cooling Arctic Freezer II 420 rev.7 (push-pull)
Memory G.Skill TridentZ RGB, 2x16 GB DDR4, B-Die, 3800 MHz @ CL14-15-14-29-43 1T, 53.2 ns
Video Card(s) ASRock Radeon RX 7800 XT Phantom Gaming
Storage Samsung 990 PRO 1 TB, Kingston KC3000 1 TB, Kingston KC3000 2 TB
Case Corsair 7000D Airflow
Audio Device(s) Creative Sound Blaster X-Fi Titanium
Power Supply Seasonic Prime TX-850
Mouse Logitech wireless mouse
Keyboard Logitech wireless keyboard
I was specifically referring to the PCGH review posted on a previous page, where they show big differences between the 8 GB and 16 GB 7600 GPUs. Problem is, the 8 GB 4060 was faster than both, so it's not the VRAM per se that's the issue there.


DLSS/FSR reduces latency. You are probably referring to FG.
Today I don't care anymore, but tomorrow I might find you some videos on YouTube where the 4060 Ti 16 GB badly beats the 8 GB. Thanks to the extra VRAM there is much less stuttering and there are fewer FPS drops. I've already searched for those videos twice, so a third time won't be a significant problem.
Sure, I just mean performance issues aren't the only symptom of running out of VRAM.
Average FPS is sometimes not the best indicator of a situation where your VRAM is not enough.
Many gameplay videos show a tremendous improvement in 0.1% and 1% lows, which points to less stuttering.
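A tiny sketch of why that is, with invented frame times rather than data from any of those videos:

```python
# Average FPS hides VRAM hitches because a handful of very long frames barely
# move the mean, while the 1% lows are computed from exactly those frames.
# Frame times below are made-up example data.

def fps_stats(frame_times_ms):
    avg_fps = 1000 / (sum(frame_times_ms) / len(frame_times_ms))
    slowest = sorted(frame_times_ms, reverse=True)
    n = max(1, len(frame_times_ms) // 100)           # worst 1% of frames
    low_1pct_fps = 1000 / (sum(slowest[:n]) / n)
    return round(avg_fps, 1), round(low_1pct_fps, 1)

smooth   = [10.0] * 1000                 # steady 100 fps
hitching = [10.0] * 990 + [80.0] * 10    # same game with ten 80 ms VRAM stalls

print(fps_stats(smooth))    # (100.0, 100.0)
print(fps_stats(hitching))  # (93.5, 12.5): average barely drops, 1% low collapses
```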

Sure, that sucks, but isn't that the case every gen? It's AMD's fault for manufacturing too many GPUs that people don't want to buy, at current prices at least.
I encourage you to take a look at Nvidia's SKU portfolio. Sometimes they even release a totally useless product (e.g. the 4080S). They even make the "same" product with different dies and memory.
4060, 4060 Ti, 4070, 4070 SUPER, 4070 Ti, 4070 Ti SUPER, 4080, 4080 SUPER (basically equal to the 4080), 4090, 4090D (basically a 4090)
Now take a look at AMD's SKU portfolio: 7 gaming desktop SKUs vs. roughly twice that number released by Nvidia.

I don't want it, but those things are what people talk about. "But for RT and hence DLSS Nvidia is better" is just a stated fact.
That might well change soon. We'll see.

Hyperbolic maybe but for AAA gaming all of those things will just get more important. Look at Black Myth Wukong. And again, DLSS being better than FSR is just a stated fact.
Black Myth: Wukong is an unoptimized piece of shit game that you can't run without upscaling even on the mighty RTX 4090. Even W1zzard used 66% upscaling in his review. 66% upscaling, lol!
Games like Black Myth: Wukong should, first and foremost, get PROPER optimization, because they look poor relative to how taxing they are on hardware.
This is exactly what I have been telling people here: DLSS helps game devs neglect polishing and optimization work on games, releasing them sooner and thus earning more profit.
It's a win for devs, a win for Nvidia, and unfortunately a huge loss for us.
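For context on what 66% upscaling actually means in pixel work, a rough calculation (the exact scale factor depends on the preset used):

```python
# 66% render scale on a 4K target: the GPU shades well under half the pixels
# and the upscaler reconstructs the rest. Illustrative arithmetic only.
target_w, target_h = 3840, 2160
scale = 0.66
render_w, render_h = int(target_w * scale), int(target_h * scale)

print(f"render resolution: {render_w} x {render_h}")                        # 2534 x 1425
print(f"pixels shaded: {render_w * render_h / (target_w * target_h):.0%}")  # ~44% of native 4K
```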

All of this to say, their problem is drivers, and they are unwilling to drop the pricing to compensate for that. I will easily pay 100 more for a nvidia card just to not have to troubleshoot shit all the time and like me there are countless others.
This argument again? How come I and many others on this forum haven't experienced a serious AMD driver-related issue in like... 10 years?

Every GPU maker has had problems with drivers: Nvidia, AMD, Intel, Matrox... Man, I've seen such bugs with Matrox in the plant where I work. AMD had serious problems with drivers years ago; what you experience today can't really be compared to that driver disaster back then. Nvidia had problems with GPUs dying while gaming. Now their biggest issue is fixing the overlay performance degradation in their new app. Intel's Arc drivers were a total shitshow for at least a year; reviewers couldn't test some games, the damn cards just wouldn't run them. But credit to them, they've done incredible work, and now it's a different story.

Not a very good comparison; a "BMW engine" is not a killer feature, but DLSS very much is. And before you say "DLSS is just upscaling", it's not - it started out as such but it's become much more, including frame generation, and that's what makes it killer. The consumer market agrees, and if you as an individual do not - you're welcome to that opinion, but please remember it is just that, unsupported by the available facts. As such it makes perfect sense for W1zz to note it in his reviews.
You know, for someone who adores and loves BMW stuff above anything else, it may actually be that killer feature, right? A fact that supports it not being a killer feature is that I and many others here at TPU (45% according to the currently ongoing poll), as well as many other gamers elsewhere, are able to game without such post-rendering image processing technologies. Whether it is a killer feature or not is also a matter of personal opinion. It's not something that is absolutely necessary to play games. I have never used FSR or XeSS, and I could not have used DLSS because I haven't owned an Nvidia GPU since around 2016. IMHO, it's clearly not normal that game devs keep producing shitty games that are unable to run at even 4K max settings @ 60 fps without DLSS/FSR and FG on a $1600 GPU. That's insane. If anyone wants 2-3 times more fps, OK, go for DLSS/FSR and FG. My point is that such technologies should not be abused to compensate for piece-of-shit game development and optimization. In the first place, these technologies were introduced to enable higher framerates on lower-tier GPUs. Nowadays this technology is being abused so much. Being forced to turn on DLSS/FSR on a $1600 GPU is insane.

I think that listing the lack of DLSS as a con in GPU reviews (be it AMD or Intel GPU reviews) is not quite objective and suggests the reviewer is biased. It's also unfair, because AMD and Intel cannot support DLSS even if they wanted to. It's a locked, proprietary technology. On the other hand, AMD open-sourced FSR, as Intel did with XeSS, so Nvidia supports them. Maybe if AMD had never opened FSR to the public and Intel had done the same, we would see a con in Nvidia GPU reviews about the lack of FSR/XeSS support. But I very highly doubt it.

Having DLSS noted as a feature is fine, but IMO listing it as a con isn't, and it only leads to readers questioning the reviewer's bias. Frame Gen and upscaling are just nice to have for those who want them; they shouldn't be pushed as something needed, but unfortunately the AAA games industry has been using DLSS as an easy way to avoid game optimization.
Exactly my thought. May the Force be with you, always.

Unfortunately you've failed to educate yourself on how the Moore's Law wall is precluding the generational advancements in graphics horsepower that we've become accustomed to. Upscaling and frame generation are required technologies if we want to see graphics fidelity continue to improve over and above what GPUs can offer. They are not hacks, they are not lazy, they are not fake frames, they are a solution to a fundamental physical constraint. Denying the facts doesn't change them.
Nvidia puts most of its effort and money into performance advancements related to "AI". The RTX 5070 Ti has more AI performance than the RTX 4090 and about 75% more AI performance than its predecessor. Surely there is room for improvement, but that room might be elsewhere. Were these improvements focused somewhere else, things could be different. As for generated frames, they really are fake, as you don't see the game reacting to your actions in them (there is no CPU computation involved). It makes the framerate higher, and that's the only thing it does. We will see more than 3 frames being inserted soon, because 600+ Hz monitors are around the corner. TVs used the same technology as (M)FG in the past. I can't tell how it is now; I have been a happy non-owner of a TV for almost 12 years. Reflex 2 is a good cheat, but still, it won't make the game feel more "alive".
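A small sketch of the point about generated frames and responsiveness. The frame rates are example numbers, and real frame generation adds some overhead I'm ignoring here:

```python
# Frame generation multiplies the *displayed* frame rate, but the game
# simulation still advances only once per rendered frame, and interpolation
# has to hold the latest rendered frame back. So input latency can't drop
# below roughly one rendered-frame time. Numbers are illustrative.

def frame_gen(rendered_fps, inserted_frames):
    displayed_fps = rendered_fps * (1 + inserted_frames)
    latency_floor_ms = 1000 / rendered_fps   # game still reacts at the rendered rate
    return displayed_fps, round(latency_floor_ms, 1)

print(frame_gen(60, 1))  # (120, 16.7): smoother motion, same ~16.7 ms responsiveness floor
print(frame_gen(60, 3))  # (240, 16.7): 4x the displayed frames, still the same floor
```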
 
Joined
Jan 14, 2019
Messages
13,751 (6.24/day)
Location
Midlands, UK
Processor Various Intel and AMD CPUs
Motherboard Micro-ATX and mini-ITX
Cooling Yes
Memory Overclocking is overrated
Video Card(s) Various Nvidia and AMD GPUs
Storage A lot
Display(s) Monitors and TVs
Case The smaller the better
Audio Device(s) Speakers and headphones
Power Supply 300 to 750 W, bronze to gold
Mouse Wireless
Keyboard Mechanic
VR HMD Not yet
Software Linux gaming master race
I don't want it, but those things are what people talk about. "But for RT and hence DLSS Nvidia is better" is just a stated fact.
Yeah, the 4070 does 17 FPS in Cyberpunk instead of 11. Call me amazed.

An aspiring video creator, mind you. "I'm not doing it right now, but I would like to do X, and for that Nvidia is better, so..."
A bunch of grown-up children lying to themselves, if you ask me.

Go read any review w1z has ever made. DLSS being the superior tech is also a stated fact.
Well, if not having DLSS is truly a negative as it is stated, then you're locked into buying Nvidia for inflated prices the rest of your life. Not a very nice existence, imo. The narrative has to change.

The way AMD has been using chiplets for their gaming GPUs is different from what they used for the datacenter. In the datacenter they really fused two full Instinct GPU dies together to get more performance at a lower cost (MCM). On the gaming side, the GPU is still fairly monolithic; they've only split the cache off onto a cheaper node to save cost. They decided against using MCM because games don't really like that kind of architecture. It's fine for compute, but games need really high transfer speed and low latency between the two dies.
Chiplets for GPUs are not a performance enhancement but a cost-saving measure. We are still limited by how big the graphics engine itself can be on a single die. And TSMC is going to price that die to the moon.
Blackwell for the datacenter makes use of MCM as well.

I would say, chiplets are a cost saving measure, even on CPUs. Nothing more.
 

Frick

Fishfaced Nincompoop
Joined
Feb 27, 2006
Messages
19,770 (2.86/day)
Location
w
System Name Black MC in Tokyo
Processor Ryzen 5 7600
Motherboard MSI X670E Gaming Plus Wifi
Cooling Be Quiet! Pure Rock 2
Memory 2 x 16GB Corsair Vengeance @ 6000Mhz
Video Card(s) XFX 6950XT Speedster MERC 319
Storage Kingston KC3000 1TB | WD Black SN750 2TB |WD Blue 1TB x 2 | Toshiba P300 2TB | Seagate Expansion 8TB
Display(s) Samsung U32J590U 4K + BenQ GL2450HT 1080p
Case Fractal Design Define R4
Audio Device(s) Plantronics 5220, Nektar SE61 keyboard
Power Supply Corsair RM850x v3
Mouse Logitech G602
Keyboard Dell SK3205
Software Windows 10 Pro
Benchmark Scores Rimworld 4K ready!
People get all upset when I talk about AMD cards, but I will stand by this: AMD's problem is the drivers, the software, they are absolute dog shit, and you can't change my mind. They burnt way too many people with their cards. But I'm also not willing to say what I said here, so I'm not a hater, I'm a realist.

All of this to say, their problem is drivers, and they are unwilling to drop pricing to compensate for that. I will easily pay $100 more for an Nvidia card just to not have to troubleshoot shit all the time, and like me there are countless others.

And I will say that AMD Adrenalin is superior to anything Nvidia has ever done and literally the only time I've had stability issues was when I was playing around with undervolting. The point I was trying to make is that AMD cards (at least where I live) have better price/performance ratios and people still bought the Nvidia options.

Black Myth: Wukong is an unoptimized piece of shit game that you can't run without upscaling even on the mighty RTX 4090. Even W1zzard used 66% upscaling in his review. 66% upscaling, lol!
Games like Black Myth: Wukong should, first and foremost, get PROPER optimization, because they look poor relative to how taxing they are on hardware.
This is exactly what I have been telling people here: DLSS helps game devs neglect polishing and optimization work on games, releasing them sooner and thus earning more profit.
It's a win for devs, a win for Nvidia, and unfortunately a huge loss for us.

Agreed, but all of this pushes the hardware requirements further towards Nvidia because they are the ones with software and impetus.

Yeah, the 4070 does 17 FPS in Cyberpunk instead of 11. Call me amazed.

Hi amazed. I agree, but that is what people see.
A bunch of grown-up children lying to themselves if you ask me.
To be fair I think being an aspiring creative is better than not being creative at all.

Well, if not having DLSS is truly a negative as it is stated, then you're locked into buying Nvidia for inflated prices the rest of your life. Not a very nice existence, imo. The narrative has to change.

It is a negative. Again, I refer to everything literally everyone says on the subject, very much including reviewers. The best opinion one can have about FSR is "I think it's OK, but yeah, it probably doesn't look as good as DLSS". Nvidia cards can run DLSS, FSR and XeSS (and I just now realized XeSS is "Excess" and I hate it so much, what is this, the 90s??), and DLSS is universally seen as the superior one. This is what Techspot says on the subject:

And while the upscaling ecosystem has kept improving and FSR 3.1 is a step in the right direction, AMD should absolutely not take this as a final victory. AMD's upscaler is still inferior to DLSS and needs continual work to remain a competitive option. If AMD wants to convince prospective GPU buyers that Radeon cards are worth as much as GeForce cards, their upscaler needs to be just as good as DLSS. We're still waiting for that to happen, and until it does, we won't be able to recommend Radeon without a discount.

AMD is the lower class option.
 
Joined
Jun 14, 2020
Messages
4,119 (2.44/day)
System Name Mean machine
Processor AMD 6900HS
Memory 2x16 GB 4800C40
Video Card(s) AMD Radeon 6700S
Well, if not having DLSS is truly a negative as it is stated, then you're locked into buying Nvidia for inflated prices the rest of your life. Not a very nice existence, imo. The narrative has to change.
Not having it is a negative in the sense that DLSS is currently better than FSR and XeSS. If FSR 4 is equal or better, then not having DLSS is kinda irrelevant.
 
Joined
Jan 14, 2019
Messages
13,751 (6.24/day)
Location
Midlands, UK
Processor Various Intel and AMD CPUs
Motherboard Micro-ATX and mini-ITX
Cooling Yes
Memory Overclocking is overrated
Video Card(s) Various Nvidia and AMD GPUs
Storage A lot
Display(s) Monitors and TVs
Case The smaller the better
Audio Device(s) Speakers and headphones
Power Supply 300 to 750 W, bronze to gold
Mouse Wireless
Keyboard Mechanic
VR HMD Not yet
Software Linux gaming master race
And I will say that AMD Adrenalin is superior to anything Nvidia has ever done and literally the only time I've had stability issues was when I was playing around with undervolting. The point I was trying to make is that AMD cards (at least where I live) have better price/performance ratios and people still bought the Nvidia options.
That I agree with.

Hi amazed. I agree, but that is what people see.
Because reviews influence them in the wrong direction. They're all over the moon because X card does 5% better than Y card, but they conveniently forget to mention that both of them are unusable.

To be fair I think being an aspiring creative is better than not being creative at all.
If you are truly creative, then I agree. But a lot of people aren't; they just think they are. You can take a look at the results, it's called TikTok. It's better to admit that you're not the creative type than to embarrass yourself like that.

It is a negative. Again, I refer to everything literally everyone says on the subject, very much including reviewers. The best opinion one can have about FSR is "I think it's OK, but yeah, it probably doesn't look as good as DLSS". Nvidia cards can run DLSS, FSR and XeSS (and I just now realized XeSS is "Excess" and I hate it so much, what is this, the 90s??), and DLSS is universally seen as the superior one. This is what Techspot says on the subject:



AMD is the lower class option.
That just means that every non-Nvidia GPU will be an utter piece of trash forever and there's nothing any manufacturer can do about it. Let's also not forget that FSR and XeSS run on Nvidia because AMD and Intel coded them to. Only Nvidia can't be bothered to code for a wider variety of hardware. Is it really a positive thing? Is it really something that moves gaming forward? No, rather the opposite. The only thing it brings us closer to is an Nvidia monopoly.
 

Frick

Fishfaced Nincompoop
Joined
Feb 27, 2006
Messages
19,770 (2.86/day)
Location
w
System Name Black MC in Tokyo
Processor Ryzen 5 7600
Motherboard MSI X670E Gaming Plus Wifi
Cooling Be Quiet! Pure Rock 2
Memory 2 x 16GB Corsair Vengeance @ 6000Mhz
Video Card(s) XFX 6950XT Speedster MERC 319
Storage Kingston KC3000 1TB | WD Black SN750 2TB |WD Blue 1TB x 2 | Toshiba P300 2TB | Seagate Expansion 8TB
Display(s) Samsung U32J590U 4K + BenQ GL2450HT 1080p
Case Fractal Design Define R4
Audio Device(s) Plantronics 5220, Nektar SE61 keyboard
Power Supply Corsair RM850x v3
Mouse Logitech G602
Keyboard Dell SK3205
Software Windows 10 Pro
Benchmark Scores Rimworld 4K ready!
That just means that every non-Nvidia GPU will be an utter piece of trash forever and there's nothing any manufacturer can do about it. Let's also not forget that FSR and XeSS run on Nvidia because AMD and Intel coded them to. Only Nvidia can't be bothered to code for a wider variety of hardware. Is it really a positive thing? Is it really something that moves gaming forward? No, rather the opposite. The only thing it brings us closer to is an Nvidia monopoly.

Yep and it sucks but you can't undo any of it. Too much impetus.
 
Joined
Jan 14, 2019
Messages
13,751 (6.24/day)
Location
Midlands, UK
Processor Various Intel and AMD CPUs
Motherboard Micro-ATX and mini-ITX
Cooling Yes
Memory Overclocking is overrated
Video Card(s) Various Nvidia and AMD GPUs
Storage A lot
Display(s) Monitors and TVs
Case The smaller the better
Audio Device(s) Speakers and headphones
Power Supply 300 to 750 W, bronze to gold
Mouse Wireless
Keyboard Mechanic
VR HMD Not yet
Software Linux gaming master race
Yep and it sucks but you can't undo any of it. Too much impetus.
You can by not drooling over DLSS like a teenage girl over Justin Bieber.
 
Joined
Jun 14, 2020
Messages
4,119 (2.44/day)
System Name Mean machine
Processor AMD 6900HS
Memory 2x16 GB 4800C40
Video Card(s) AMD Radeon 6700S
Come on, the reason XeSS and FSR work on everything is because of very low market share. You can't make a feature exclusive when you don't have the market share AND your solution is worse than the one that does.

If FSR becomes as good, it's going to be locked to AMD GPUs only.
 
Joined
Jan 14, 2019
Messages
13,751 (6.24/day)
Location
Midlands, UK
Processor Various Intel and AMD CPUs
Motherboard Micro-ATX and mini-ITX
Cooling Yes
Memory Overclocking is overrated
Video Card(s) Various Nvidia and AMD GPUs
Storage A lot
Display(s) Monitors and TVs
Case The smaller the better
Audio Device(s) Speakers and headphones
Power Supply 300 to 750 W, bronze to gold
Mouse Wireless
Keyboard Mechanic
VR HMD Not yet
Software Linux gaming master race
Come on, the reason XeSS and FSR work on everything is because of very low market share. You can't make a feature exclusive when you don't have the market share AND your solution is worse than the one that does.

If FSR becomes as good, it's going to be locked to AMD GPUs only.
Does that excuse the scumbaggery we call exclusivity? If I have the market share, everything I do suddenly becomes morally acceptable?
 
Joined
Jun 14, 2020
Messages
4,119 (2.44/day)
System Name Mean machine
Processor AMD 6900HS
Memory 2x16 GB 4800C40
Video Card(s) AMD Radeon 6700S
Does that excuse the scumbaggery we call exclusivity? If I have the market share, everything I do suddenly becomes morally acceptable?
I think what's more morally unacceptable is pushing an open feature only because you don't have the market share to make it exclusive, and then, when that openness brings you the clientele, locking it down. Now THAT'S scumbaggery.

I don't expect, in a competitive 50-50 market, either AMD or Nvidia to let me use each other's features on my non-AMD or non-Nvidia GPU. That would be idiotic. They spend money developing that tech, and giving it to their competition is literally losing sales. Nobody spends money to lose sales.
 
Joined
Jan 14, 2019
Messages
13,751 (6.24/day)
Location
Midlands, UK
Processor Various Intel and AMD CPUs
Motherboard Micro-ATX and mini-ITX
Cooling Yes
Memory Overclocking is overrated
Video Card(s) Various Nvidia and AMD GPUs
Storage A lot
Display(s) Monitors and TVs
Case The smaller the better
Audio Device(s) Speakers and headphones
Power Supply 300 to 750 W, bronze to gold
Mouse Wireless
Keyboard Mechanic
VR HMD Not yet
Software Linux gaming master race
I think what's more morally unacceptable is pushing an open feature only because you don't have the market share to make it exclusive, and then, when that openness brings you the clientele, locking it down. Now THAT'S scumbaggery.
What clientele? AMD has 10% market share.

If they lock FSR 4, I'll call AMD out on it just the same as I do Nvidia. One wrong doesn't make another one right. Shitfuckery is shitfuckery, whichever way you look at it, so let's not shift the goalpost.

AI and upscaling have to be standardized, imo.
 
Joined
Jul 26, 2024
Messages
375 (2.05/day)
Well, if not having DLSS is truly a negative as it is stated, then you're locked into buying Nvidia for inflated prices the rest of your life. Not a very nice existence, imo. The narrative has to change.
That is actually more true than people here realize. I always thought "competitive" is not enough to achieve parity between Nvidia and AMD. I, for one, do not need a dozen reasons to pay a €50 premium on a €500-600 upper-mid-range card. DLSS/DLAA and a 20-30% RT difference was always enough. I would regret not spending that 10% more far more than I would cherish saving it on a card that doesn't have the features I like. It'd be different if it were €100-150, but it's not. Especially when most games these days have RT/PT from day one, and I'm never going to notice that extra 5% rasterization performance on Radeon cards when the fps numbers with RT off are already well above 100 on both.



AI and upscaling have to be standardized, imo.
It'll never happen while Nvidia makes AI their main selling point, even if Microsoft tried. They move too quickly for the market to adapt to new features every generation.
 
Joined
Jun 14, 2020
Messages
4,119 (2.44/day)
System Name Mean machine
Processor AMD 6900HS
Memory 2x16 GB 4800C40
Video Card(s) AMD Radeon 6700S
What clientele? AMD has 10% market share.

If they lock FSR 4, I'll call AMD out on it just the same as I do Nvidia. One wrong doesn't make another one right. Shitfuckery is shitfuckery, whichever way you look at it, so let's not shift the goalpost.

AI and upscaling have to be standardized, imo.
I won't call them out (I might call out the fans holding double standards, though). Actually, I'd rather they lock FSR 4 if that translates into making it better. Both AI and upscaling are still in their baby years; there is lots of room for improvement, and that's why they can't be standardized right now. Upscaling might eventually reach a plateau, sure. But currently neither AMD nor Nvidia is going to invest a lot of money to improve an open standard that anyone can use. It just wouldn't make sense, right?
 