
AMD Radeon RX 7600 XT Launches on May 25

I agree, that's why I mentioned them. They were each a huge leap in graphics fidelity. But take unified shaders, for example: despite all their benefits, the average gamer clung to Windows XP and DirectX 9.0c/Shader Model 3.0 for dear life, mostly as a consequence of rejecting Windows Vista. We had AAA titles releasing that way well into 2014.

Needless to say, my point was that great new technologies often face adoption hurdles simply because people hold a prejudice against something unrelated, because they raise the cost of the hardware, or because people don't like how significantly they raise system requirements. Ray tracing is no different.
RT and AI in gaming GPUs are just a side effect of Nvidia using the same chip to serve two different demands... as the server and professional market has become the most lucrative, resources have been directed towards serving that core market while finding some use for these features in the secondary market (gamers). The current RT is a joke.
 

That's not the case; it's very much intentional, and it's the direction Jensen Huang believes the industry will go - he is placing the entire engineering weight of his company behind it. AMD's been scrambling to follow ever since.

NVIDIA actually hasn't shared the top die between its consumer, professional and enterprise segments since Maxwell; their server-grade accelerators use quite a different architecture from the GeForce line nowadays. The closest thing released at the consumer level that's based on their enterprise hardware was the Titan V, which was a one-off and is already practically 6 years old.

Look up the GA100, for example: it's solely focused on compute and doesn't have ray tracing cores or a display output engine at all.
 
I think he's smart enough to know it's not viable in games any time soon. But he manages to sell the illusion very well; I'll admit that marketing is one of Nvidia's strongest points.

Yeah, the blocks are still the same - see the Tensor Cores there - plus the chips below the GA100 are still used in gaming and professional GPUs: https://www.techpowerup.com/gpu-specs/nvidia-ga102.g930
 

Gaming and pro-viz (formerly Quadro), yes, but not the enterprise (Tesla) segment. Gaming and pro-viz have always shared the same hardware with a different driver set; this is the same for AMD and Intel as well. It wasn't a feature added because of demand from the pro-viz segment. Either way, I agree: JHH is no stranger to the fact that it'll take time for it to be perfected, but he had to start somewhere. Ada is proof that the performance can and does keep going up in this regard; I can only hope that RDNA 4 will take it more seriously.
 
It would be nice if ray tracing was removed from the GPU, and a dedicated ray-tracing-only PCIe card could be made for those who want ray tracing. A sort of dual-card setup if you want both?
 

I'm sure the idea was toyed with briefly, but I believe it's not possible due to the enormous amount of internal bandwidth required for it to work.
 
Didn't fly with PhysX; why would it fly with RT?
 
That's not the case; it's very much intentional, and it's the direction Jensen Huang believes the industry will go - he is placing the entire engineering weight of his company behind it. AMD's been scrambling to follow ever since.

Okay, take a step back and look at this from a non-biased point of view. The truth is, Nvidia and AMD/ATI have innovated for years. You can argue that Nvidia led the way most of the time, but they've both put great new technologies into their products and slowly but surely ramped those features up so that developers could use them. Before RT, they never tried to make us pay twice as much for the privilege. Before RT, they never tried to convince us that inserting fake frames was necessary for a good experience.
 
Pricing and availability in quantity. The 2 biggest questions.
 
Okay, take a step back and look at this from a non-biased point of view. The truth is, Nvidia and AMD/ATI have innovated for years. You can argue that Nvidia led the way most of the time, but they've both put great new technologies into their products and slowly but surely ramped those features up so that developers could use them. Before RT, they never tried to make us pay twice as much for the privilege. Before RT, they never tried to convince us that inserting fake frames was necessary for a good experience.

There's no bias, Jensen has been going on about raytraced graphics for as long as I can remember - the fuss about OptiX back in the GTX 200 days...

AMD's innovative days IMO were largely during the HBM era. I was an avid fan - I had all three generations, Fiji, Vega 10 and Vega 20 (2 Fury X, 1 Vega FE and 1 VII) - but these aged exceptionally poorly and I ended up doing away with them. Especially the Fury X, with its limited memory capacity (the 4 GB really weren't equivalent to 12 GB of GDDR5), and the fact that they mercilessly pulled the plug on driver support the exact day it turned 5 years old. Then came the disaster we know as RDNA, which had an amazing evolution in RDNA 2, but now? They're fumbling again.

The fake frames thing - DLSS3 FG is being gatekept because AMD's not able to fight back. I guarantee that if FSR 3 with frame interpolation comes, the tune will change - NV will bring it to Ampere and AMD fans won't be aggravated by it anymore.
 
I was referring to minimum requirements.

Games CAN STILL HAVE "all the bells and whistles", which could well require 16+ GB if need be, but by having such high minimum requirements, they are effectively cutting off a SIGNIFICANT portion of potential customers: this is the point I was trying to make.

That is again down to greed. You could have had an 8GB graphics card all the way back with an R9 390 or RX 480. The only reason games raising their minimum requirements is a problem now is that Nvidia sold a bunch of gamers cards with far too little VRAM at far too high a price for far too long. This is far from the first time this has happened; it's just the first time people are realizing that they've been had over a barrel by Nvidia for the last 3 generations.

There's no bias, Jensen has been going on about raytraced graphics for as long as I can remember - the fuss about OptiX back in the GTX 200 days...

AMD's innovative days IMO were largely during the HBM era. I was an avid fan - I had all three generations, Fiji, Vega 10 and Vega 20 (2 Fury X, 1 Vega FE and 1 VII) - but these aged exceptionally poorly and I ended up doing away with them. Especially the Fury X, with its limited memory capacity (the 4 GB really weren't equivalent to 12 GB of GDDR5), and the fact that they mercilessly pulled the plug on driver support the exact day it turned 5 years old. Then came the disaster we know as RDNA, which had an amazing evolution in RDNA 2, but now? They're fumbling again.

The fake frames thing - DLSS3 FG is being gatekept because AMD's not able to fight back. I guarantee that if FSR 3 with frame interpolation comes, the tune will change - NV will bring it to Ampere and AMD fans won't be aggravated by it anymore.

AMD's best days were absolutely not when it was experimenting with HBM in its consumer products, not by a long shot. Its HBM products were some of the worst it has ever launched.
 
Where are the 7700(XT) and 7800(XT) lines? There's a huge price and performance gap between a 7600XT and a 7900XT(X).
 
AMD's best days were absolutely not when it was experimenting with HBM in its consumer products, not by a long shot. Its HBM products were some of the worst it has ever launched.

For gamers perhaps not - GCN was already getting long in the tooth by its third generation - but they were the most innovative and highest-performing (compute-wise) products AMD had made in quite a long time. It's little wonder those designs forked into the CDNA family, with CDNA 1 being practically identical to Vega 20 in most regards.
 
Did you guys see the rumour that AMD is releasing a 7800XTX, 7800XT and 7700XT with 16 and 12 GB of VRAM respectively?

If AMD had the balls to call out nV for under-provisioning VRAM capacities (imo, they were absolutely SPOT-ON)... I'd expect them to put 16GB on all 3 of these performance-driven mid-tier variants. With today's GPU prices, mid-tier is the new higher-tier performance segment. Falling short of 16GB - the joke's on them!
 

Thing is, they did, and then proceeded to announce an 8 GB graphics card within the week.

 
It amazes me that people can look at how awful and distracting SSR and SSAO are and wonder what RT brings to the table.
 
Thing is, they did, and then proceeded to announce an 8 GB graphics card within the week.


Really? An official announcement for which 7000-series subdivision? Can you link me up (I'm in the dark here)?

I'd be offended if 8GB was even informally mentioned for the 7600XT and upwards. It would be repulsive for AMD to play the "we care" marketing manoeuvre while pooping on NV, and then turn around and drop 8GB poop-bombs on the budget-kerbed, larger consumer base.

To be frank, the desire for higher VRAM provisions has been on my mind for some time now, and AMD going public in calling out nV felt like a positive sign. It would be a shame if AMD disregarded the lower performance segment (the wider consumer count) with the lacklustre same-old BS.
 

The source of the release date is MLID, as you can see in the OP, but the word is that it's a Navi 33-based design. That is a 128-bit bus GPU, which means it's going to top out at 8 GB. Either way, you'll know more in the coming weeks.

This is certainly not Navi 32, unless AMD wants to sell a higher-tier GPU in a lower-tier placement - a bizarre move, contrary to what Nvidia has been doing.
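For anyone wondering why a 128-bit bus caps things at 8 GB: a rough back-of-the-envelope sketch, assuming the usual one GDDR6 package per 32-bit channel (no clamshell) and 16 Gbit (2 GB) chips, the densest packages in common use:

# Rough sketch: why a 128-bit GDDR6 bus normally tops out at 8 GB.
# Assumes one package per 32-bit channel (no clamshell) and
# 16 Gbit (2 GB) chips - an illustration, not a spec.

CHANNEL_WIDTH_BITS = 32   # each GDDR6 package sits on a 32-bit channel
CHIP_CAPACITY_GB = 2      # 16 Gbit = 2 GB per package

def max_vram_gb(bus_width_bits: int) -> int:
    """Maximum VRAM for a given bus width under the assumptions above."""
    return (bus_width_bits // CHANNEL_WIDTH_BITS) * CHIP_CAPACITY_GB

for bus in (128, 192, 256, 384):
    print(f"{bus}-bit bus -> up to {max_vram_gb(bus)} GB")
# 128 -> 8 GB, 192 -> 12 GB, 256 -> 16 GB, 384 -> 24 GB

Clamshell mode (two packages per channel, as on the RTX 3090) would double those figures, but it adds board cost, which is why you rarely see it on budget cards.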
 
There's no bias, Jensen has been going on about raytraced graphics for as long as I can remember - the fuss about OptiX back in the GTX 200 days...

AMD's innovative days IMO were largely during the HBM era. I was an avid fan - I had all three generations, Fiji, Vega 10 and Vega 20 (2 Fury X, 1 Vega FE and 1 VII) - but these aged exceptionally poorly and I ended up doing away with them. Especially the Fury X, with its limited memory capacity (the 4 GB really weren't equivalent to 12 GB of GDDR5), and the fact that they mercilessly pulled the plug on driver support the exact day it turned 5 years old. Then came the disaster we know as RDNA, which had an amazing evolution in RDNA 2, but now? They're fumbling again.

The fake frames thing - DLSS3 FG is being gatekept because AMD's not able to fight back. I guarantee that if FSR 3 with frame interpolation comes, the tune will change - NV will bring it to Ampere and AMD fans won't be aggravated by it anymore.
Okay, I'll believe you, but you sounded like a fanboy bubbling over about Jensen, and of course Jensen has been tooting the ray tracing horn for a long time. Who in the graphics industry hasn't? I think it was 13 or 14 years ago that he bought a company devoted to ray tracing. After 10 years, he decided the technology was good enough to sell. In my opinion, no one else at the time had the cojones to devote a significant portion of their chip's real estate to it - a technology that was honestly only good enough to add some selected lighting effects. I guess when you are the technology leader, you can take chances. Props on that; he doesn't mind taking big chances.

I'll defer commenting on DLSS3 because my response will be a rant. It has nothing to do with who is offering the technology and on what hardware. My biggest irritation is how they market it.
 

But that's because he has - it's something graphics engineers have had in their sights for a very, very long time, and he's always been quite enthusiastic about it. Their first ray tracing system was actually released in 2009, and it was far from real time, even if you had quad-SLI GTX 200-series cards like two GTX 295s to pull the workload. I even have the link to its presentation slides saved:


Regarding DLSS3: believe me, such a rant isn't necessary - it's a grievance I share. And I mean that quite literally, as I've been given nothing but a middle finger regarding that specific feature and the way it's been marketed, especially considering that I actually have an RTX 3090, the one GPU that shouldn't ever be denied new features within the very first generation after it. I've had my pitchfork at the ready towards Nvidia ever since that ridiculous insult to my intelligence during CES 2023. What a disgrace.
 
The source of the release date is MLID, as you can see in the OP, but the word is that it's a Navi 33-based design. That is a 128-bit bus GPU, which means it's going to top out at 8 GB. Either way, you'll know more in the coming weeks.

This is certainly not Navi 32, unless AMD wants to sell a higher-tier GPU in a lower-tier placement - a bizarre move, contrary to what Nvidia has been doing.

Ah OK, I fell into the assumption that you were suggesting a 7600XT or similar-level offering was "officially" slated for 8GB. I guess we'll have to wait and see.
 

At this point I'm 99% sure it's going to be an 8 GB GPU, if the Navi 33 silicon with its 128-bit GDDR6 interface is used. But indeed, anything can change.
 
I find it weird they are skipping the 7700XT like that. I don't know if this is a good move or not. Maybe they are saying the 7700XT is not as competitive at the price point they would have to put it at, or they don't have as many stocked up yet.
I would remind you that they said the same thing about PhysX (which was also pretty cool when done right - the papers flying around in the Batman games looked pretty neat).
That is my major issue, as it's a similar situation. Ray tracing is cool and can be very impressive, but at such an extreme cost. PhysX was cool when used right, and even though it was locked to Nvidia, it was at least nice after a while being able to throw a cheap Nvidia GPU in a system and get PhysX while using an AMD card, without taking performance away from your main Nvidia card.

Currently, the reality is that it's a cool tech that is going to take a long time to mature at the rate it's going. The fact that it hits performance so hard that most games are crippled unless you max out DLSS or have an RTX 4090 is going to hold it back for many more years.
 
IMO, they are placing too much focus on the wrong thing, and have done so since ray tracing was introduced.

Supposedly, the more "audience" a game COULD have, the more potential customers the game's publisher could have, so I find it VERY ODD for game publishers to PURPOSELY cripple their games with such RIDICULOUS minimum requirements, which are AGGRAVATED EVEN FURTHER with ray tracing.

Were "the penalty" for enabling ray tracing like ... say ... 15% or 20% tops, THEN it would be justified, but "the penalty" is often in the 40% range and sometimes crosses the 50% mark (AMD cards mostly, but not always), which makes the tech premature, IMO: it needs to be developed further.


Were games' minimum requirements "more down to earth" while ALSO offering "higher quality modes", they could have A LOT MORE potential buyers: the publishers are shooting themselves in the foot ... with a cannon ...
It's a mighty big cannon, too, one that will see them making losses on a lot of games as they lock out so many PC gaming rigs. Immortals of Aveum comes to mind as a ridiculously demanding game for what it is.
 
Did you see the bit where it was said the 7800XTX was most likely cancelled, as it was going to be a cut-down N31 die like the W7800?

A 7700XT with only 12GB would make AMD seem like trolls, given how much they were taunting Nvidia over lack of memory. The 7700XT brings nothing over the 4070 and is said to be slower.
I agree with this, as AMD is degrading its lineup for the sake of coin. They are no better than Ngreedia if they go the 12 GB/192-bit route.
 