Monday, January 29th 2024

Top AMD RDNA4 Part Could Offer RX 7900 XTX Performance at Half its Price and Lower Power

We've known since way back in August 2023 that AMD is rumored to be retreating from the enthusiast graphics segment with its next-generation RDNA 4 graphics architecture, which means we likely won't see successors to the RX 7900 series squaring off against the upper end of NVIDIA's fastest GeForce RTX "Blackwell" series. What we'll get instead is a product stack closely resembling that of the RDNA-based RX 5000 series, with its top part providing a highly competitive price-performance mix around the $400 mark. A more recent report by Moore's Law is Dead sheds more light on this part.

Apparently, the top Radeon RX SKU based on the next-gen RDNA4 graphics architecture will offer performance comparable to that of the current RX 7900 XTX, but at less than half its price (around the $400 mark). It is also expected to achieve this performance target using smaller, simpler silicon with a significantly lower board cost, which is what makes that price possible. What's more, there could be energy efficiency gains from the switch to a newer 4 nm-class foundry node and from the RDNA4 architecture itself, which could hit its performance target with fewer compute units than the RX 7900 XTX's 96.
When it came out, the RX 5700 XT offered an interesting performance proposition, beating the RTX 2070 and forcing NVIDIA to refresh its product stack with the RTX 20-series SUPER, including the resulting RTX 2070 SUPER. Things could go down slightly differently with RDNA4. Back in 2019, ray tracing was a novelty, and AMD could surprise NVIDIA in the performance segment even without it. There is no such advantage now: ray tracing is relevant, so AMD could instead count on timing its launch before the Q4-2024 debut of the RTX 50-series "Blackwell."
Sources: Moore's Law is Dead (YouTube), Tweaktown

517 Comments on Top AMD RDNA4 Part Could Offer RX 7900 XTX Performance at Half its Price and Lower Power

#426
AusWolf
3valatzyIf you "know" so much, tell us the truth.
Nobody knows so much. That's exactly why there is noise. Everybody is speculating.
Posted on Reply
#427
3valatzy
AusWolfNobody knows so much. That's exactly why there is noise. Everybody is speculating.
Define "noise". Define "knows". Define "facts".
Posted on Reply
#428
Chrispy_
3valatzyIt won't offer the XTX performance, while the power consumption will remain extremely high, even if around 300W. This card needed to be 215 W max.
Why did you dig this thread up again with this post, and where are your sources for this seemingly bogus and controversial information?

There have been plenty of leaks showing the 9070XT performing approximately on par with the 4080/XTX in raster or RT-light titles, and more like a 4070 Ti in RT-heavy scenarios such as path tracing in CP2077 and Indiana Jones.

Power is claimed to be 330W for the factory-OC cards being leaked, which (as claimed here in this old thread from a year ago) also comes in at lower power than the XTX's 355W.

If you want to make statements that contradict everything seen so far then at least back it up with a link, a theory, or at least an explanation of how you arrived at your conclusion. Just blurting out FUD like that makes you look like a biased fanboy with no understanding of what's going on and no ability to follow the leaks and news updates that have been sweeping across the web for the last five weeks since this old thread went cold.

AMD have a history of terrible marketing, and flaky launches - but third parties doing independent benchmarks are all over the web already, circumventing AMD's claims with embargo-breaking leaks. If you've been ignoring them then I don't know what to say to you. Maybe go and read some news, watch some coverage?
Posted on Reply
#429
3valatzy
A 256-bit chip with only 16 GB. Will never reach and compete in the long run with the 24 GB cards . .
Posted on Reply
#430
Zach_01
The definition of trolling…
Posted on Reply
#431
Dr. Dro
Macro DeviceIt's a well-known fact. If it did, they'd have already released it. AMD fans' copium levels are now off the charts.
I mean, it takes that much denial considering there are plenty of triple 8-pin input cards. They definitely won't be super power efficient, even if a nominal decrease over RDNA 3 is seen (and how could it not, given it has less memory and a smaller, far less complex core installed).
3valatzyA 256-bit chip with only 16 GB. Will never reach and compete in the long run with the 24 GB cards . .
That is not why the 9070 XT won't measure up, really. The 16 GB, 256-bit 4080 was a better experience than the 24 GB, 384-bit 3090 all around, and I had both, I'd know. It simply has fewer execution resources than the 7900 XTX, and the increased clock speeds or any architectural-level improvements aren't enough to close the gap. The existence of so many triple-input cards that are technically specified up to 525 W shows that to achieve these clock speeds they also took a page from the 6500 XT's playbook, throwing efficiency out of the window, if it ever had any to begin with and was truly a mere 7800 XT replacement, as the most ardent fans argue.

Not to mention AMD's garbage drivers don't help their case any. A pretty control panel does not and will never hide the technical mischief under the hood and this is where they fall utterly short.
kapone32Facts? Really? The lack of facts is why there is so much noise.
Why are you always defending AMD as if your life depended on it...
Posted on Reply
#432
kapone32
Dr. DroI mean, it takes that much denial considering there are plenty of triple 8-pin input cards. They definitely won't be super power efficient, even if a nominal decrease over RDNA 3 is seen (and how could it not, given it has less memory and a smaller, far less complex core installed).



That is not why the 9070 XT won't measure up, really. The 16 GB, 256-bit 4080 was a better experience than the 24 GB, 384-bit 3090 all around, and I had both, I'd know. It simply has fewer execution resources than the 7900 XTX, and the increased clock speeds or any architectural-level improvements aren't enough to close the gap. The existence of so many triple-input cards that are technically specified up to 525 W shows that to achieve these clock speeds they also took a page from the 6500 XT's playbook, throwing efficiency out of the window, if it ever had any to begin with and was truly a mere 7800 XT replacement, as the most ardent fans argue.

Not to mention AMD's garbage drivers don't help their case any. A pretty control panel does not and will never hide the technical mischief under the hood and this is where they fall utterly short.



Why are you always defending AMD as if your life depended on it...
It is funny that it is always the same people that come onto AMD threads. This is you:

Not to mention AMD's garbage drivers don't help their case any. A pretty control panel does not and will never hide the technical mischief under the hood and this is where they fall utterly short.

The question should be: why are you always attacking AMD with statements that have no basis in fact?
Posted on Reply
#433
Dr. Dro
kapone32It is funny that it is always the same people that come onto AMD threads This is you.

The question should be why are you always attacking AMD with statements that have no basis in fact.
Gaslighting doesn't work on me, and you've been fighting people in this thread long before I got here.

But I have the proposition of a lifetime: since you're highly knowledgeable about Windows KMDs and UMDs, meaning, you're able to describe software functions, gather minidumps, write detailed step by step issue reproduction instructions, and you seem to understand the inner workings of the software to make such an assertion, please join their beta tester program. I really, really want to see you there. They need highly involved, passionate individuals in the program. People with the fire I once had. Maybe it'll even help ship out ROCm faster, need to teach those nGreedia bullies and their CUDA runtime a lesson.

In the meantime, do me a favor, help prove me wrong, and share this little window for me?

Posted on Reply
#434
kapone32
Dr. DroGaslighting doesn't work on me, and you've been fighting people in this thread long before I got here.

But I have the proposition of a lifetime: since you're highly knowledgeable about Windows KMDs and UMDs, meaning, you're able to describe software functions, gather minidumps, write detailed step by step issue reproduction instructions, and you seem to understand the inner workings of the software to make such an assertion, please join their beta tester program. I really, really want to see you there. They need highly involved, passionate individuals in the program. Maybe it'll even help ship out ROCm faster, need to teach those nGreedia bullies and their CUDA runtime a lesson.

In the meantime, do me a favor, help prove me wrong, and share this little window for me?

Talk about gaslighting. Let's get back to the focus:

"Not to mention AMD's garbage drivers don't help their case any. A pretty control panel does not and will never hide the technical mischief under the hood and this is where they fall utterly short."


Explain what makes them Garbage, since you seem comfortable making that statement.

Technical mischief? What are you even talking about?

Under the hood they fall short? Is the card even released, and do you have information that the rest of the world doesn't? Because you seem to.


What are you trying to do with this paragraph? Do you realize what you are displaying?
Posted on Reply
#435
Patriot
Dr. DroGaslighting doesn't work on me, and you've been fighting people in this thread long before I got here.

But I have the proposition of a lifetime: since you're highly knowledgeable about Windows KMDs and UMDs, meaning, you're able to describe software functions, gather minidumps, write detailed step by step issue reproduction instructions, and you seem to understand the inner workings of the software to make such an assertion, please join their beta tester program. I really, really want to see you there. They need highly involved, passionate individuals in the program. People with the fire I once had. Maybe it'll even help ship out ROCm faster, need to teach those nGreedia bullies and their CUDA runtime a lesson.

In the meantime, do me a favor, help prove me wrong, and share this little window for me?

I am aware this is not aimed at me, but I had something to add on the subject of underdog support (shill-free).
I have a hive of MI100s and a 7900 XTX for ROCm dev work... and yet...

I wish for my HPC customers to have a choice away from ngreedia, but F%# is their software ecosystem good.
AMD is laser focused on a few workloads and the rest suffer. They will get there, they have forward momentum, but they are behind.
Even when their hardware is better on paper... they are usually behind.
They are doing well for inference, so well that Meta's 405B model (at launch) ran exclusively on MI300Xs. Now, that may also be a VRAM resource thing: even if a single MI300X is slower than an H100, you need fewer MI300Xs to run the model...
They are starting to win on some training, but only on very well tuned models.
This is a month old snapshot.
semianalysis.com/2024/12/22/mi300x-vs-h100-vs-h200-benchmark-part-1-training/

Things like this... bring me hope.
www.phoronix.com/news/AMD-Feedback-ROCm-Support
Posted on Reply
#436
Hecate91
Ah, yes, always the Nvidia mindshare attacking in these threads.
I would question why people are always thread crapping on AMD, when they clearly don't care, would never buy anything from AMD, and act like they don't have anything better to do with their $2000 graphics card.
And the whole control panel argument is a funny one: AMD came up with a better control panel than THE software company, who couldn't get it right; maybe they can use some of their AI power to come up with a working control panel. I haven't had one issue with AMD drivers, and I wish people would stop with that tired claim from 10 years ago, but hey, anything to gaslight people into buying from team green, I guess.
Posted on Reply
#437
Chrispy_
Dr. DroThat is not why the 9070 XT won't measure up, really. The 16 GB, 256-bit 4080 was a better experience than the 24 GB, 384-bit 3090 all around, and I had both, I'd know. It simply has fewer execution resources than the 7900 XTX, and the increased clock speeds or any architectural-level improvements aren't enough to close the gap.
True about 4080 vs 3090.

For the theorycrafting about the 384-bit XTX vs the 256-bit 9070XT, you have to remember that the XTX uses chiplets, which feel like a bit of a failed experiment at this stage. We know chiplets hampered RDNA3 to some extent, with AMD missing its originally intended clock speeds and failing to live up to its pre-launch promises, and we know from later deep dives that the latency penalties and transistor budget spent on Infinity Fabric don't help - those interconnects affect idle power draw and overall efficiency too. If GPU chiplets had worked and let us have cheap "Threadripper"-style behemoth GPUs, that would have been great - but AMD have shelved the tech for the moment and reverted to monolithic.

The 7900XTX seemed to underperform at midrange resolutions; it has 60% more compute, 60% more ray accelerators, and 100% more ROPs than the 7800XT, and it was clocked higher, too. Yet, it was only about 35% faster than a 7800XT at 1440p and never even close to 60% faster than a 7800XT at any resolution. I will wait for reviews, but I suspect the XTX will be marginally faster than the 9070XT in pure raster benchmarks, and significantly slower than the 9070XT in the most intensive RT benchmarks - the average score across many games will likely be very close if the selection of games is new enough to include the many lightweight-RT titles that are becoming more common.

So yeah, if you account for the inefficiencies of the XTX design, and throw RDNA4 raytracing improvements at a suite of tested games that include some RT, I think the 9070 closing the gap to the XTX is entirely believable.
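To put rough numbers on that scaling argument, here's a quick back-of-the-envelope sketch using the approximate figures quoted above (ballpark numbers from this thread, not exact review data):
[CODE]
# Rough scaling check for the 7900 XTX vs 7800 XT figures quoted above.
# The 60% / 35% values are the approximate ones from this post, not exact data.
extra_compute = 0.60    # ~60% more CUs/shaders than the 7800 XT
observed_gain = 0.35    # ~35% faster at 1440p

scaling_efficiency = (1 + observed_gain) / (1 + extra_compute)
print(f"Effective scaling: {scaling_efficiency:.0%} of linear")  # ~84%
[/CODE]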
Posted on Reply
#438
Dr. Dro
Hecate91Ah, yes, always the Nvidia mindshare attacking in these threads.
I would question why people are always thread crapping on AMD, when they clearly don't care, would never buy anything from AMD, and act like they don't have anything better to do with their $2000 graphics card.
And the whole control panel argument is always a funny one: AMD came up with a better control panel than THE software company, who couldn't get it right; maybe they can use some of their AI power to come up with a working control panel. I haven't had one issue with AMD drivers, and I wish people would stop with that tired claim from 10 years ago, but hey, anything to gaslight people into buying from team green, I guess.
Why is it that when anyone says anything that isn't strictly positive about AMD, it is thread crapping and hostility, but when it comes to Nvidia it's fair game to dunk on anything - prices, Jensen, calling them "nGreedia", the people who "willingly let themselves get ripped off" - and yeah, that's perfectly okay to do?
Chrispy_True about 4080 vs 3090.

For the theorycrafting about the 384-bit XTX vs the 256-bit 9070XT, you have to remember that the XTX uses chiplets which feel like a bit of a failed experiment at this stage. We know chiplets hampered RDNA3 to some extent with latency penalties and transistor budget spent on Infinity Fabric that monolithic variants don't need - those interconnects affect idle power draw and overall efficiency too.

The 7900XTX seemed to underperform at midrange resolutions; it has 60% more compute, 60% more ray accelerators, and 100% more ROPs than the 7800XT, and it was clocked higher, too. Yet, it was only about 35% faster than a 7800XT at 1440p and never even close to 60% faster than a 7800XT at any resolution. I will wait for reviews, but I suspect the XTX will be marginally faster than the 9070XT in pure raster benchmarks, and significantly slower than the 9070XT in the most intensive RT benchmarks - the average score across many games will likely be very close if the selection of games is new enough to include the many lightweight-RT titles that are becoming more common.
That is still somewhat of a presumption, though. At most I see the chiplets inducing some extra latency (which could indeed hurt performance), but it generally held that the 4080 lost steam much faster at high resolutions than the 7900 did, and that can be attributed to the significantly higher memory bandwidth available. I personally think that the removal of the chiplets and going monolithic again is primarily a cost saving measure, it's probably cheaper to develop and manufacture this smaller die than go all out on the packaging. The lower latency from the localized, fully integrated memory controller probably helps.
Posted on Reply
#439
Patriot
Hecate91Ah, yes, always the Nvidia mindshare attacking in these threads.
I would question why people are always thread crapping on AMD, when they clearly don't care, would never buy anything from AMD, and act like they don't have anything better to do with their $2000 graphics card.
And the whole control panel argument is always a funny one: AMD came up with a better control panel than THE software company, who couldn't get it right; maybe they can use some of their AI power to come up with a working control panel. I haven't had one issue with AMD drivers, and I wish people would stop with that tired claim from 10 years ago, but hey, anything to gaslight people into buying from team green, I guess.
Dude, the fact that you need GRUB tunables for stable multi-GPU in ROCm is kinda crazy - as in, if you don't have PCIe realloc on, and the IOMMU on and set to passthrough, it will randomly lock up.
That is the AMD enterprise ecosystem: workarounds.
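For anyone wanting to sanity-check their own box, here's a minimal sketch of what that means in practice - the exact kernel parameter spellings (pci=realloc, amd_iommu=on, iommu=pt) are my assumption of what those tunables map to, so adjust for your platform:
[CODE]
# Minimal sketch: check /proc/cmdline for the tunables mentioned above.
# The parameter names are an assumption (pci=realloc, amd_iommu=on, and
# iommu=pt for passthrough); adjust to whatever your platform actually needs.
EXPECTED = {"pci=realloc", "amd_iommu=on", "iommu=pt"}

with open("/proc/cmdline") as f:
    present = set(f.read().split())

missing = EXPECTED - present
if missing:
    print("Missing kernel parameters (add to GRUB_CMDLINE_LINUX in /etc/default/grub,")
    print("then run update-grub and reboot):")
    for param in sorted(missing):
        print("  " + param)
else:
    print("All expected multi-GPU tunables are present.")
[/CODE]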

Now, is Nvidia problem-free? No - even training Llama 3 they encountered a problem every 3 hours, but those could be dealt with in an automated fashion, while AMD issues more often require hard resets.
I have been using ROCm since Vega 64/MI25 and CUDA since the K80. Ecosystem-wise, AMD is behind by 1.5-2 yrs. So much bare-metal stuff for AMD is a pita, but when you build it using their container base, it works flawlessly.

Nothing hurts AMD more than this stupid hype train that the new midrange card will beat the old high end and not cost what it's worth... AMD is not a charity.
Posted on Reply
#440
Vayra86
I think this 9070 = 7900XTX story is the same kind of wishful thinking that we saw with people thinking the 5080 would beat a 4090. It ain't happening. Especially not if the card is now also doing a lot more with RT.
Posted on Reply
#441
kapone32
Dr. DroWhy is it that when anyone says anything that isn't strictly positive about AMD, it is thread crapping and hostility, but when it comes to Nvidia it's fair game to dunk on anything - prices, Jensen, calling them "nGreedia", the people who "willingly let themselves get ripped off" - and yeah, that's perfectly okay to do?



That is still somewhat of a presumption, though. At most I see the chiplets inducing some extra latency (which could indeed hurt performance), but it generally held that the 4080 lost steam much faster at high resolutions than the 7900 did, and that can be attributed to the significantly higher memory bandwidth available. I personally think that the removal of the chiplets and going monolithic again is primarily a cost saving measure, it's probably cheaper to develop and manufacture this smaller die than go all out on the packaging. The lower latency from the localized, fully integrated memory controller probably helps.
Show me where AMD has called Nvidia garbage. Do you even read what you type?
Posted on Reply
#442
Chrispy_
Dr. DroThat is still somewhat of a presumption, though. At most I see the chiplets inducing some extra latency (which could indeed hurt performance), but it generally held that the 4080 lost steam much faster at high resolutions than the 7900 did, and that can be attributed to the significantly higher memory bandwidth available. I personally think that the removal of the chiplets and going monolithic again is primarily a cost saving measure, it's probably cheaper to develop and manufacture this smaller die than go all out on the packaging. The lower latency from the localized, fully integrated memory controller probably helps.
We have ~3GHz boost clocks on the 9070XT, and faster GDDR6 to keep up with it - which is a 22% performance gain right there, even if you ignore everything else. Also, the 9070XT is 64CU, rather than the 7800XT's 60CU, which is another 7%.

So, even if there was zero architectural change between RDNA3 and RDNA4, we're talking 30% gains simply by going from the 7800XT's 60CU to 64CU and clocking the snot out of it (lol, 330W). The XTX is only 35-40% faster than the 7800XT, so architectural improvements don't have to be very significant to make the leaked 9070XT performance seem both reasonable and credible.
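Putting those figures into a quick calculation (the ~22% clock/bandwidth gain and the 35-40% XTX lead are the rough numbers from this thread, not official specs):
[CODE]
# Back-of-the-envelope uplift estimate from the leaked/rumored figures above.
clock_gain = 1.22       # ~22% from higher clocks plus faster GDDR6
cu_gain = 64 / 60       # 64 CU vs the 7800 XT's 60 CU (~6.7%)

combined = clock_gain * cu_gain
print(f"Naive combined uplift over a 7800 XT: {combined - 1:.0%}")  # ~30%

xtx_lead = 1.375        # midpoint of the 35-40% XTX lead quoted above
print(f"Remaining gap to the XTX: {xtx_lead / combined - 1:.0%}")   # ~6%
[/CODE]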
Posted on Reply
#443
Dr. Dro
PatriotDude, the fact that you need GRUB tunables for stable multi-GPU in ROCm is kinda crazy - as in, if you don't have PCIe realloc on, and the IOMMU on and set to passthrough, it will randomly lock up.
That is the AMD enterprise ecosystem: workarounds.

Now, is Nvidia problem-free? No - even training Llama 3 they encountered a problem every 3 hours, but those could be dealt with in an automated fashion, while AMD issues more often require hard resets.
I have been using ROCm since Vega 64/MI25 and CUDA since the K80. Ecosystem-wise, AMD is behind by 1.5-2 yrs. So much bare-metal stuff for AMD is a pita, but when you build it using their container base, it works flawlessly.

Nothing hurts AMD more than this stupid hype train that the new midrange card will beat the old high end and not cost what it's worth... AMD is not a charity.
It's a breath of fresh air having this type of feedback. Things must be a ton better in the enterprise world than they are in the client segment. ROCm on client can't come fast enough. The lack of a unified, standardized runtime has been a huge problem. CUDA runs on anything NV, even that laptop from 2008 you found in your attic after decades. The value of that can't be overstated, IMO.
Chrispy_We have ~3GHz boost clocks on the 9070XT, and faster GDDR6 to keep up with it - which is a 22% performance gain right there, even if you ignore everything else. Also, the 9070XT is 64CU, rather than the 7800XT's 60CU, which is another 7%.

So, even if there was zero architectural change between RDNA3 and RDNA4, we're talking 30% gains simply by going from the 7800XT's 60CU to 64CU and clocking the snot out of it (lol, 330W). The XTX is only 35-40% faster than the 7800XT, so architectural improvements don't have to be very significant to make the leaked 9070XT performance seem both reasonable and credible.
Indeed, it is an okay uplift, even if the clocks do come at the expense of efficiency. I'm looking forward to it myself; it's not a card for me, but I genuinely do want them to succeed.
kapone32Show me where an AMD has called Nvidia garbage? Do you even read what you type?
You apparently do not read what I type, either. But give it time; if they ever release something better, you can be sure Frank Azor is gonna get right to it - he already did it to Intel! :laugh: That's why you have to pay scalped prices on the 9800X3D, according to him. Unfortunately he never answered my inquiry about whether he paid that guy his $10. I asked...
Posted on Reply
#444
Hecate91
PatriotDude, the fact that you need GRUB tunables for stable multi-GPU in ROCm is kinda crazy - as in, if you don't have PCIe realloc on, and the IOMMU on and set to passthrough, it will randomly lock up.
That is the AMD enterprise ecosystem: workarounds.

Now, is Nvidia problem-free? No - even training Llama 3 they encountered a problem every 3 hours, but those could be dealt with in an automated fashion, while AMD issues more often require hard resets.
I have been using ROCm since Vega 64/MI25 and CUDA since the K80. Ecosystem-wise, AMD is behind by 1.5-2 yrs. So much bare-metal stuff for AMD is a pita, but when you build it using their container base, it works flawlessly.

Nothing hurts AMD more than this stupid hype train that the new midrange card will beat the old high end and not cost what it's worth... AMD is not a charity.
I haven't had to deal with ROCm yet, and I'm not sure when the discussion became about professional workloads on consumer gaming cards, though yes, Nvidia has a much better market for it. But everything above the xx70 tier might as well get branded as a Quadro because of its gaming price/performance, and the 50 series doesn't look to be a massive uplift from the 40 series.
I'm not expecting the mid-range card to beat the 7900XTX; all I'm expecting is that the 9070XT is maybe 10% faster than the 7900XT, but better at RT. More efficiency would be nice, and not much can be said about some cards having 3x 8-pin power connectors yet, since we don't know what power draw the AIB cards have. Also, I never claimed AMD is a charity, but they always give you the full die, they don't have a history of limiting features to the latest card (yes, you could say FSR4 is an example, but AMD claimed RDNA3 will get FSR4 later), they don't overprice their cards for the VRAM you get, and they have better support for Linux, which I've been using more as M$ is going to EOL W10 and I refuse to use W11.
Posted on Reply
#445
kapone32
Dr. DroIt's a breath of fresh air having this type of feedback. Things must be a ton better in the enterprise world than they are in the client segment. ROCm on client can't come fast enough. The lack of a unified, standardized runtime has been a huge problem. CUDA runs on anything NV, even that laptop from 2008 you found in your attic after decades. The value of that can't be overstated, IMO.



You apparently do not read what I type, either. But give it time, if they ever release something better, you can sure Frank Azor is gonna get right to it - he already did it to Intel! :laugh: That's why you have to pay scalped prices on the 9800X3D, according to him. Unfortunately he never answered regarding my inquiry if he paid that guy his $10. I asked...
You still refuse to define your Garbage statement.
Posted on Reply
#446
Krit
Vayra86I think this 9070 = 7900XTX
Who is saying that? Even the rumors consistently show RX 9070 XT raster performance between the RX 7900 XT and RX 7900 XTX, and even better results in RT.
Posted on Reply
#447
Vayra86
KritWho is saying that? Even the rumors consistently show RX 9070 XT raster performance between the RX 7900 XT and RX 7900 XTX, and even better in RT.
Yeah, same difference; I don't think it's going to get there. Better in RT, sure. Faster in raster than the 7900XT? Doubtful.
Posted on Reply
#448
Hecate91
Dr. DroWhy is it that when anyone says anything that isn't strictly positive about AMD, it is thread crapping and hostility, but when it comes to Nvidia it's fair game to dunk on anything - prices, Jensen, calling them "nGreedia", the people who "willingly let themselves get ripped off" - and yeah, that's perfectly okay to do?
It is very much the opposite; there isn't any hostility in calling out what is very obvious here. It's the Nvidia users always dunking on AMD in these threads; the hostility is in the same people always bashing AMD when they have no intention of buying anything from AMD, while looking down on anyone who doesn't own an Nvidia card.
And yes, Nvidia has become greedy. People have been willingly letting themselves get ripped off since the RTX 30 series, but most still haven't realized it and blame the AIBs for pricing compared to the very rare, purposely supply-limited FE card. If a meme like "ngreedia" really bothers anyone, then they need to consider their bias and how much they like a GPU company that doesn't care about its gaming customers.
Posted on Reply
#449
Dr. Dro
kapone32You still refuse to define your Garbage statement.
I mean, not that the burden of proof is on me; I made an assertion, and it's up to you to prove me wrong. That's how forums - a medium for the exchange of ideas - work.

So, tell me. What benefits will I get from buying a Radeon that I do not get from buying a GeForce? Can you say with the utmost certainty that their drivers are feature-complete (beyond a "doesn't happen to me, so it probably doesn't happen to anyone else")? Can I expect each and every application I could possibly want to run to do so without having to resort to driver downgrades/version swaps, workarounds, disabling features and whatnot? Can I use the latest and greatest graphics techniques without running into bugs, crashes, or at the very least a tremendous reduction in performance? Will this product sustain my expected performance target and provide the experience I want in newer titles? (Keeping in mind that I play at 4K, and if raw frame rates cannot be sustained, are the upscaling methods reliable?) Can I play older games or use mods that may require API support that is off the beaten path? Will all of my art and video editing software work with equal or superior speed? Will I be provided with day-one targeted updates for all the latest games? Will AMD have an equivalent or superior competitor to all of the next-generation features that Nvidia will offer and extend to their products over the years? Will I have drivers for this card when it is old and passed down, possibly even to my children someday, since I am now approaching my mid-30s? What can this card do that the one I plan on acquiring will not? And after all of this, can you guarantee that I will not have my work bitterly destroyed by my old friend darkness, [ICODE]amdkmdag.sys[/ICODE]? Are you able to concisely explain to me why I am wrong in my supposedly biased view?

Let me rephrase and simplify this question: Why should I give AMD my hard earned cash and faithfully opt for their product instead?
Hecate91It is very much the opposite; there isn't any hostility in calling out what is very obvious here. It's the Nvidia users always dunking on AMD in these threads; the hostility is in the same people always bashing AMD when they have no intention of buying anything from AMD, while looking down on anyone who doesn't own an Nvidia card.
And yes, Nvidia has become greedy. People have been willingly letting themselves get ripped off since the RTX 30 series, but most still haven't realized it and blame the AIBs for pricing compared to the very rare, purposely supply-limited FE card. If a meme like "ngreedia" really bothers anyone, then they need to consider their bias and how much they like a GPU company that doesn't care about its gaming customers.
That argument would maybe work if I happened to be the individual behind the "AMDGPU" account on X. No one is looking down on you people; it's never, ever been a personal thing. I don't think anyone on this forum, or anywhere else on the internet, actually has a personal problem with the choice of graphics card you prefer, mate.

I invite you to join kapone in answering the question I directed at him. Make me want to buy a 9070 XT instead of an RTX 5080 or 5090 based not only on your experiences, but on things that we have today, not on the basis of a potential future or promise by AMD. What cool things can this product do that my old Ada card cannot? I promise you, pinky promise, that I do not have my mind completely made up in the sense of "it's Nvidia or else" out of sheer brand loyalty, or I wouldn't have written this post.

That would be a great start - then we can start to debate the merits for people who, unlike me, aren't in pursuit of bleeding-edge technology, but that is for another day.
Posted on Reply
#450
Patriot
Dr. DroIt's a breath of fresh air having this type of feedback. Things must be a ton better in the enterprise world than they are in the client segment. ROCm on client can't come fast enough. The lack of a unified, standardized runtime has been a huge problem. CUDA runs on anything NV, even that laptop from 2008 you found in your attic after decades. The value of that can't be overstated, IMO.
WSL is the current way to get the full ROCm stack enabled on Windows.
rocm.docs.amd.com/projects/radeon/en/latest/docs/install/wsl/install-radeon.html
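And once that's set up, a quick way to confirm the GPU is actually visible from WSL - a minimal sketch, assuming a ROCm build of PyTorch is installed in that environment (ROCm builds reuse the torch.cuda API):
[CODE]
# Minimal sanity check for a ROCm-enabled PyTorch install under WSL.
# Assumes the ROCm wheel of PyTorch; ROCm builds reuse the torch.cuda
# namespace, so the same calls work on both vendors' GPUs.
import torch

if torch.cuda.is_available():
    print("GPU visible:", torch.cuda.get_device_name(0))
    print("HIP version:", torch.version.hip)  # None on CUDA builds of PyTorch
    x = torch.randn(1024, 1024, device="cuda")
    print("Matmul OK:", (x @ x).shape)
else:
    print("No GPU visible - check the ROCm/WSL driver installation.")
[/CODE]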
Posted on Reply