Sunday, June 25th 2023

AMD Radeon RX 7600 Slides Down to $249

The AMD Radeon RX 7600 mainstream graphics card slides a little closer to its ideal price, with an online retailer price cut sending it down to $249, $20 below its $269 MSRP. The cheapest RX 7600 graphics card on the market right now is the MSI RX 7600 MECH 2X Classic, going for $249 on Amazon, followed by the XFX RX 7600 SWFT 210 at $258 and the ASRock RX 7600 Challenger at $259.99.

The sliding prices of the RX 7600 should improve its prospects against the upcoming NVIDIA GeForce RTX 4060, which leaked 3DMark benchmarks show to be around 17% faster than the previous-generation RTX 3060 (12 GB) and 30% faster than its 8 GB variant. Our real-world testing puts the RX 7600 about 15% faster than the RTX 3060 (12 GB) at 1080p, which means there could be an interesting square-off between the RTX 4060 and the RX 7600. NVIDIA has announced $299 as the baseline price for the RTX 4060, which should put pressure on AMD partners to trim prices of the RX 7600 to below the $250 mark.
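
For a rough sense of how those figures stack up, here is a back-of-the-envelope sketch in Python. The prices and percentages are the ones quoted above, with the RTX 3060 (12 GB) as a common baseline; a leaked 3DMark score and our real-world 1080p average are not strictly comparable, so treat the output as illustrative only.

```python
# Back-of-the-envelope comparison using the figures quoted above.
# Baseline: RTX 3060 (12 GB) = 1.00 relative performance.
rtx_3060 = 1.00
rtx_4060 = rtx_3060 * 1.17  # leaked 3DMark result: ~17% faster than the RTX 3060 12 GB
rx_7600 = rtx_3060 * 1.15   # our 1080p testing: ~15% faster than the RTX 3060 12 GB

cards = {"RTX 4060": (rtx_4060, 299), "RX 7600": (rx_7600, 249)}

for name, (perf, price) in cards.items():
    # Relative performance per dollar, scaled for readability.
    print(f"{name}: {perf:.2f}x perf at ${price} -> {perf / price * 1000:.2f} perf per $1000")
```

On those rough numbers the RX 7600 comes out ahead on performance per dollar, with the caveat that the two performance figures come from different kinds of testing.
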
Source: VideoCardz

61 Comments on AMD Radeon RX 7600 Slides Down to $249

#26
JustBenching
AusWolf: FSR doesn't suck. In my opinion, it's pretty equal to DLSS in its current state. DLSS 1 also sucked, by the way, so there's that. The only other Nvidia-exclusive feature is DLSS 3 FG, which you won't enjoy on a mid-range card due to the latency, and is pretty much pointless on a high-end one due to the already high framerates. It only exists for Nvidia to win on review charts.

If you think it's worth the extra money, by all means, buy into the "ecosystem" (whatever that word means here), but I really think it isn't.
Why won't you enjoy it on a midrange card? Say I get 70 fps and fg gets me up to 100 or 120 or whatever, how is that a problem?
Dr. Dro: In my 3050M's case, it's not even that the hardware's performance is too inadequate; targeting 1080p with DLSS and medium settings, you're going to have a decent time... or would, if the 4 GB VRAM didn't get in the way. Nvidia is devious like that; even their low-end hardware is designed to be like a gateway drug to get people to buy their higher-end stuff.
Good thing with laptops is, you can drop the resolution or use upscaling easily; the monitor is so small that the PPI will still be insanely high.
#27
Dr. Dro
fevgatos: Good thing with laptops is, you can drop the resolution or use upscaling easily; the monitor is so small that the PPI will still be insanely high.
If you have a high-end panel, that is. My laptop has a basic 1080p 120 Hz panel; at 15.6" it looks pretty much like any ol' entry-level monitor.
#28
Chrispy_
The price it always should have been, given what it's up against at $180-250

Finally, at $249 it's competitive with the $229 6650XT cards you can still find new on store shelves.
#29
evernessince
Dr. Dro: By that I don't mean ray tracing, but all of the Nvidia-exclusive features that they've developed over the years. Successfully, that is.

Since most of AMD's open source equivalents either flopped (weren't adopted) or suck (FSR)
Vulkan is based on AMD's Mantle.
FreeSync is an industry standard while G-Sync is niche.

Surely you jest, do you know how terrible PC gaming would be if things like PhysX were standard and not made open source? CUDA is a good example of what happens when an Nvidia standard wins, zero vendor choice.

And please with the exaggeration on FSR. FSR is not that far off from DLSS. If FSR sucks as you say, so too does DLSS. That's if your opinion is consistent. Why must internet comments always needlessly exaggerate?
#30
JustBenching
Dr. Dro: If you have a high-end panel, that is. My laptop has a basic 1080p 120 Hz panel; at 15.6" it looks pretty much like any ol' entry-level monitor.
Ah, mine is 14" 1440p. Looks extra sharp even at 1080p, so I have no issue playing on the iGPU with FSR on. The panel is so small that even FSR looks good on it :roll:
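
For reference, the pixel density the two of you are describing works out roughly as follows; the panel sizes and resolutions are the ones mentioned above, and the formula is simply the diagonal pixel count divided by the diagonal size:

```python
import math

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixels per inch of a panel with the given resolution and diagonal size."""
    return math.hypot(width_px, height_px) / diagonal_in

# 14" 1440p laptop panel vs. 15.6" 1080p laptop panel, as described above.
print(f'14.0" 2560x1440: {ppi(2560, 1440, 14.0):.0f} PPI')
print(f'15.6" 1920x1080: {ppi(1920, 1080, 15.6):.0f} PPI')
```

That works out to roughly 210 PPI versus 141 PPI, which is presumably why upscaling artifacts are much harder to spot on the smaller, denser panel.
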
#31
Dr. Dro
evernessince: Vulkan is based on AMD's Mantle.
FreeSync is an industry standard while G-Sync is niche.

Surely you jest, do you know how terrible PC gaming would be if things like PhysX were standard and not made open source? CUDA is a good example of what happens when an Nvidia standard wins, zero vendor choice.

And please with the exaggeration on FSR. FSR is not that far off from DLSS. If FSR sucks as you say, so too does DLSS. That's if your opinion is consistent. Why must internet comments always needlessly exaggerate?
1. Vulkan's only positive is that it's an open-source standard that works on operating systems other than Windows. Under Windows, DirectX 12 produces consistently better results.
2. Hardware G-Sync monitors still tend to outperform FreeSync/VESA AdaptiveSync monitors in general, though this is irrelevant in the current context, as compatibility is universal.
3. CUDA gained its foothold because AMD failed to offer a competing solution when it mattered, and it's too late to do so now.
4. There is no reason for anyone with the ability to use DLSS or XeSS to use FSR. Its greatest strength is that it is hardware agnostic, but NVIDIA can do all three and Radeons can do XeSS.

www.techpowerup.com/review/atomic-heart-fsr-2-2-vs-dlss-2-vs-dlss-3-comparison/
Speaking of FSR 2.2 image quality, the FSR 2.2 implementation comes with noticeable compromises in image quality—in favor of performance in most sequences of the game. We spotted excessive shimmering and flickering in motion on vegetation, tree leaves and thin steel objects, which might be quite distracting for some people. While the amount of shimmering is less pronounced in comparison to the average FSR 2.1 implementation, shimmering is clearly more visible than in either the in-game native TAA or DLSS image output. Also, there is quite noticeable shimmering issues on weapon scopes, which glow brightly and blink in motion, especially at lower resolutions. The second-most-noticeable difference in the FSR 2.2 implementation compared to the in-game TAA or DLSS solution is a softer and less detailed overall image quality, which is especially visible with grass and vegetation in general.
www.techpowerup.com/review/the-last-of-us-part-i-fsr-2-2-vs-dlss-comparison/
...second-most-noticeable issue in both DLSS and FSR 2.2 is the ghosting around your characters head, and it is especially visible at lower resolution such as 1080p Quality mode. Also, the FSR 2.2 implementation has shimmering in motion on vegetation and tree leaves, however, the amount of shimmering is less pronounced in comparison to the usual FSR 2.1 implementations, like in the Resident Evil 4 Remake for example, and these shimmering issues on vegetation and tree leaves are visible only in motion.

Speaking of performance, compared to DLSS, FSR 2.2 has slightly smaller performance gains across all resolutions, while also producing more image quality issues compared to other available temporal upscaling techniques.
www.techpowerup.com/review/cyberpunk-2077-xess-1-1-vs-fsr-2-1-vs-dlss-3-comparison/
The in-game TAA solution has very poor rendering of small object detail—thin steel objects and power lines, tree leaves, and vegetation in general. The in-game TAA solution also has shimmering issues on the whole image, even when standing still, and it is especially visible at lower resolutions such as 1080p, for example. All of these issues with the in-game TAA solution are resolved when DLAA, DLSS or XeSS are enabled, due to the better quality of their built-in anti-aliasing solution. Also, the sharpening filters in the DLAA, DLSS and XeSS render path can help to improve overall image quality. With DLSS and XeSS you can expect an improved level of detail rendered in vegetation and tree leaves in comparison to the in-game TAA solution. Small details in the distance, such as wires or thin steel objects, are rendered more correctly and completely in all Quality modes. With DLAA enabled, the overall image quality improvement goes even higher, rendering additional details, such as higher fidelity hair for example, compared to the in-game TAA solution, DLSS and XeSS. Also, both DLSS 3.1 and XeSS 1.1 handle ghosting quite well, even at extreme angles.

The FSR 2.1 implementation comes with noticeable compromises in image quality—in favor of performance in most sequences of the game. We spotted excessive shimmering and flickering on vegetation, tree leaves and thin steel objects; they are shimmering even when standing still and it is visible even at 4K FSR 2.1 Quality mode, which might be quite distracting for some people. Once you're switching from FSR 2.1 Quality mode to Balanced or Performance, the whole image will start to shimmer even more. The anti-aliasing quality is also inferior, as the overall image has more jagged lines in motion, especially visible behind cars while driving through the world and in vegetation. Also, in the current FSR 2.1 implementation ghosting issues are worse than both DLSS and XeSS at day time, and it is even more pronounced when there is a lack of lighting in the scene, as the FSR 2.1 image may have some black smearing behind moving objects at extreme angles.
I rest my case. It's clearly the worst upscaler of the bunch - and it's not even something I'm particularly enthusiastic about (though many feel this is the most important current-generation tech around) - I prefer a native image whenever possible.
#32
JohH
AMD is its own enemy here with RX 6600 (XT) and 6650 XT pricing.
#33
rv8000
Dr. Dro: 1. Vulkan's only positive is that it's an open-source standard that works on operating systems other than Windows. Under Windows, DirectX 12 produces consistently better results.
2. Hardware G-Sync monitors still tend to outperform FreeSync/VESA AdaptiveSync monitors in general, though this is irrelevant in the current context, as compatibility is universal.
3. CUDA gained its foothold because AMD failed to offer a competing solution when it mattered, and it's too late to do so now.
4. There is no reason for anyone with the ability to use DLSS or XeSS to use FSR. Its greatest strength is that it is hardware agnostic, but NVIDIA can do all three and Radeons can do XeSS.

www.techpowerup.com/review/atomic-heart-fsr-2-2-vs-dlss-2-vs-dlss-3-comparison/
www.techpowerup.com/review/the-last-of-us-part-i-fsr-2-2-vs-dlss-comparison/
www.techpowerup.com/review/cyberpunk-2077-xess-1-1-vs-fsr-2-1-vs-dlss-3-comparison/

I rest my case. It's clearly the worst upscaler of the bunch - and it's not even something I'm particularly enthusiastic about (though many feel this is the most important current-generation tech around) - I prefer a native image whenever possible.
Afaik, and the last time I checked, XeSS generally offers less of a performance improvement than FSR and DLSS, which is completely aside from the fact that each software technology is prone to introduce additional visual artifacts otherwise not produced via native rendering. It's interesting tech, but relatively useless if you care about image quality and stability; most notably ghosting, smearing, and flickering. There's way too much emphasis put on these software rendering techniques. Not to mention I find it odd that any tech review site can come to the conclusion that you're improving image quality when you're actively introducing additional artifacts.

If there was any specific tech to be lauded as a true improvement to gaming, it would absolutely be VRR. And while we have Nvidia to thank for that, being hardware-locked into one vendor is no longer an issue. Module-based G-Sync monitors are pretty rare, and I don't think there are many reviews doing comparisons, or that sister units of the same design exist with/without the module, so there's no way to make a definitive statement saying either is superior.
#34
Dr. Dro
rv8000: Afaik, and the last time I checked, XeSS generally offers less of a performance improvement than FSR and DLSS, which is completely aside from the fact that each software technology is prone to introduce additional visual artifacts otherwise not produced via native rendering. It's interesting tech, but relatively useless if you care about image quality and stability; most notably ghosting, smearing, and flickering. There's way too much emphasis put on these software rendering techniques. Not to mention I find it odd that any tech review site can come to the conclusion that you're improving image quality when you're actively introducing additional artifacts.

If there was any specific tech to be lauded as a true improvement to gaming, it would absolutely be VRR. And while we have Nvidia to thank for that, being hardware-locked into one vendor is no longer an issue. Module-based G-Sync monitors are pretty rare, and I don't think there are many reviews doing comparisons, or that sister units of the same design exist with/without the module, so there's no way to make a definitive statement saying either is superior.
Agreed on the monitors (and I even pointed out that it's an irrelevant thing in the present age), but it's not that XeSS offers less of a performance improvement; it's that by design it can only be the fastest if the XMX cores are available (thus, on Arc GPUs). The DP4A path is relatively high speed for an ML-based upscaler, but it will scale down to INT24 on hardware that doesn't support DP4A, such as Navi 10/5700 XT, and that's where it will get particularly slow. It retains the image quality, though.
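
Loosely, the tiering described here can be sketched as below. This is not the XeSS SDK or Intel's actual dispatch logic, just illustrative pseudologic (with made-up capability flags) for how an ML upscaler can pick the fastest execution path the hardware supports:

```python
from dataclasses import dataclass

@dataclass
class Gpu:
    name: str
    has_xmx: bool   # Intel Arc matrix engines (assumed capability flag)
    has_dp4a: bool  # packed INT8 dot-product support (assumed capability flag)

def pick_upscaler_path(gpu: Gpu) -> str:
    """Illustrative only: choose the fastest path the hardware supports."""
    if gpu.has_xmx:
        return "XMX path (fastest, Arc only)"
    if gpu.has_dp4a:
        return "DP4a path (reasonably fast on most recent GPUs)"
    return "generic fallback path (works, but noticeably slower)"

for gpu in (Gpu("Arc A770", True, True),
            Gpu("RTX 3090", False, True),
            Gpu("RX 5700 XT (Navi 10)", False, False)):
    print(f"{gpu.name}: {pick_upscaler_path(gpu)}")
```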

Remember, for all the fanfare AMD made about FSR 3, it's still a no-show, and were I a betting man, I'd say they're probably looking at a way to mitigate the performance "issues" that XeSS exhibits in some way, as earlier iterations of FSR are all about speed and compatibility.
#35
rv8000
Dr. Dro: Agreed on the monitors (and I even pointed out that it's an irrelevant thing in the present age), but it's not that XeSS offers less of a performance improvement; it's that by design it can only be the fastest if the XMX cores are available (thus, on Arc GPUs). The DP4A path is relatively high speed for an ML-based upscaler, but it will scale down to INT24 on hardware that doesn't support DP4A, such as Navi 10/5700 XT, and that's where it will get particularly slow. It retains the image quality, though.

Remember, for all the fanfare AMD made about FSR 3, it's still a no-show, and were I a betting man, I'd say they're probably looking at a way to mitigate the performance "issues" that XeSS exhibits in some way, as earlier iterations of FSR are all about speed and compatibility.
My point being: based on which vendor your GPU is from, you're always going to want to run the software implementation from your vendor. The end result will always be a mixed bag of what the upscaler "fixed" versus what new problems it introduced.

FSR 3 seems like more of a blindside than anything else: a rushed PR announcement with likely little to no development completed when it was actually announced. It will be just as useless as DLSS 3: pointless on high-end hardware, a trade-off at midrange, and bad on the lower end, where it would theoretically be most useful.
#36
HD64G
To become a bargain it needs to get closer to $200. Still, it has a good price now.
#37
TheinsanegamerN
HD64G: To become a bargain it needs to get closer to $200. Still, it has a good price now.
IMO, $150. $200 was the price of the 8 GB RX 580... 6 years ago. Today an 8 GB GPU is the same as cards like the RX 550 were back then: bare-minimum 1080p cards that can't even truly match the current-gen consoles.
JohH: AMD is its own enemy here with RX 6600 (XT) and 6650 XT pricing.
Those cards really do show what RDNA3 brings to the table: that is to say, nothing. Core for core, clock for clock, there is almost no difference. So unsurprisingly, people are not jumping on it.
#38
evernessince
Dr. Dro: 1. Vulkan's only positive is that it's an open-source standard that works on operating systems other than Windows. Under Windows, DirectX 12 produces consistently better results.
Vulkan tends to provide higher performance on average than DX12 does. DX12's closer-to-metal features were inspired by Mantle anyway.
Dr. Dro: 2. Hardware G-Sync monitors still tend to outperform FreeSync/VESA AdaptiveSync monitors in general, though this is irrelevant in the current context, as compatibility is universal.
Nonsense: www.tomshardware.com/features/gsync-vs-freesync-nvidia-amd-monitor

"So which is better: G-Sync or FreeSync? With the features being so similar there is no inherent reason to select a particular monitor. Both technologies produces similar results, so the contest is mostly a wash at this point. There are a few disclaimers, however.

If you purchase a G-Sync monitor, you will only have support for its adaptive-sync features with a GeForce graphics card. You're effectively locked into buying Nvidia GPUs as long as you want to get the most out of your monitor. With a FreeSync monitor, particularly the newer, higher quality variants that meet the FreeSync Premium Pro certification, you're often free to use AMD or Nvidia graphics cards."
Dr. Dro: 3. CUDA gained its foothold because AMD failed to offer a competing solution when it mattered, and it's too late to do so now.
Actually, AMD very well intends to compete using ROCm.

In fact they have a wrapper you can use to run CUDA native code on AMD cards.

That AMD originally didn't have a CUDA competitor was down to the fact that they were nearly broke for 8 years.
Dr. Dro: 4. There is no reason for anyone with the ability to use DLSS or XeSS to use FSR. Its greatest strength is that it is hardware agnostic, but NVIDIA can do all three and Radeons can do XeSS.

www.techpowerup.com/review/atomic-heart-fsr-2-2-vs-dlss-2-vs-dlss-3-comparison/
Dr. Dro: I rest my case. It's clearly the worst upscaler of the bunch - and it's not even something I'm particularly enthusiastic about (though many feel this is the most important current-generation tech around) - I prefer a native image whenever possible.
XeSS loses more to FSR than it wins. Your cherry-picking a single example doesn't prove otherwise. Your original comment was that FSR sucks. Nowhere in any of those linked articles, even in the worst-case scenarios, does the author imply it sucks, because it doesn't. That was you, who apparently doesn't use upscaling and has no personal experience, being overly hyperbolic.
#39
Macro Device
FSR 2.1 is borderline ruining the whole gaming experience in Cyberpunk 2077 in the later iterations of the game. That's for sure. Intel's XeSS provides a WAY more consistent and stable image, albeit at a huge performance cost (6700 XT here, so no complaints; XeSS has a right to prefer Intel GPUs). The mega ghosting effect in motion is yuck!

The positive thing about the RX 7600 which most of you decided to ignore is its energy efficiency whilst running low-demanding tasks, compared to previous-generation cards. Look at its power consumption in Cyberpunk at 1080p capped to 60 FPS (about 75 or 80 W), which is more than 1.5x lower than that of the RX 6700 XT (about 130 W). In lands of expensive electricity bills, this matters. And thus the card deserves its place on the shelves (mostly because its predecessors are insanely expensive lmao).
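
To put a rough number on the electricity argument, here is a quick sketch. The power figures are the ones quoted above; the hours of play and the electricity price are assumptions for illustration only:

```python
# Power draw figures quoted above (Cyberpunk 2077, 1080p, 60 FPS cap).
watts_rx_7600 = 80     # ~75-80 W
watts_rx_6700xt = 130  # ~130 W

# Assumptions for illustration only.
hours_per_day = 2.0
price_per_kwh = 0.40   # "lands of expensive electricity bills"

delta_kwh_per_year = (watts_rx_6700xt - watts_rx_7600) / 1000 * hours_per_day * 365
print(f"Energy saved: {delta_kwh_per_year:.0f} kWh/year, "
      f"about ${delta_kwh_per_year * price_per_kwh:.0f}/year at ${price_per_kwh}/kWh")
```

Roughly 35-40 kWh a year under those assumptions; scale the hours and tariff to your own situation.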

There is no doubt the initial $300 was a rip-off. But is there a card that is not priced way higher than most people are willing to pay for it?
#40
EatingDirt
TheinsanegamerN: IMO, $150. $200 was the price of the 8 GB RX 580... 6 years ago. Today an 8 GB GPU is the same as cards like the RX 550 were back then: bare-minimum 1080p cards that can't even truly match the current-gen consoles.

Those cards really do show what RDNA3 brings to the table: that is to say, nothing. Core for core, clock for clock, there is almost no difference. So unsurprisingly, people are not jumping on it.
The 8 GB 580 was never $200 MSRP. It may have gotten down to $200, but at release it was $230-300. $230 in 2017 is $248 in today's money. Like I said earlier, another $25 off or so and this becomes a reasonable upgrade for people on a budget with those 580s or older/lower-performing cards.

I do suggest you stop saying RDNA3 brings nothing to the table; it just makes you sound like a fanboy. The RDNA3 cards that are on 5 nm (the 7900s) are just as efficient as Nvidia's 4000 series, and they're on an inferior node (5 nm vs 4 nm). They also typically offer better value per dollar than Nvidia's cards in pure rasterization.

For example, let's say I was in the market for a card for 1440p+ and my budget for the GPU is below $900. My highest-end options are a 7900 XT and a 4070 Ti. I would almost definitely get a 7900 XT over a 4070 Ti. With RDNA3 I get more than enough RAM to not worry about lowering textures in my games, I get 8-10% better rasterization performance in most games, and I would save ~$15.

Seems like RDNA3 brought something to the table in this case, does it not?
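
As a sketch of that value comparison: only the roughly 8-10% rasterization lead and the ~$15 price gap come from the post above; the absolute prices are placeholders, not quotes.

```python
# Hypothetical prices under a sub-$900 budget; only the ~$15 gap and the
# ~8-10% rasterization lead come from the post above.
price_4070_ti = 800
price_7900_xt = price_4070_ti - 15

perf_4070_ti = 100.0         # normalized rasterization index
perf_7900_xt = 100.0 * 1.09  # ~9% faster (midpoint of the 8-10% range)

for name, perf, price in (("RTX 4070 Ti", perf_4070_ti, price_4070_ti),
                          ("RX 7900 XT", perf_7900_xt, price_7900_xt)):
    print(f"{name}: {perf / price:.3f} performance points per dollar")
```
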
#41
HD64G
TheinsanegamerN: IMO, $150. $200 was the price of the 8 GB RX 580... 6 years ago. Today an 8 GB GPU is the same as cards like the RX 550 were back then: bare-minimum 1080p cards that can't even truly match the current-gen consoles.

Those cards really do show what RDNA3 brings to the table: that is to say, nothing. Core for core, clock for clock, there is almost no difference. So unsurprisingly, people are not jumping on it.
We have to include inflation and, to be precise, the 8 GB RX 480 and later the 580 cost $250 when they launched back then. They dropped in price much later and got closer to $200. So, the 7600 for $250 is fairly priced, and anything lower than that is even better. No one has to agree; just my opinion considering all of the above.
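
For the inflation point, a quick sketch. The CPI figures below are approximate U.S. CPI-U annual averages and are an assumption for illustration, not something from the thread:

```python
# Approximate U.S. CPI-U annual averages (assumed for illustration).
cpi = {2016: 240.0, 2017: 245.1, 2023: 304.7}

launch_price = 250  # 8 GB RX 480 / RX 580 launch price cited above

for year in (2016, 2017):
    adjusted = launch_price * cpi[2023] / cpi[year]
    print(f"${launch_price} in {year} is roughly ${adjusted:.0f} in 2023 dollars")
```

On those approximate numbers, a $250 card from the RX 480/580 era lands a bit above $300 in today's money, which is the gist of the inflation argument.
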
#42
Dr. Dro
evernessince: Vulkan tends to provide higher performance on average than DX12 does. DX12's closer-to-metal features were inspired by Mantle anyway.

Nonsense: www.tomshardware.com/features/gsync-vs-freesync-nvidia-amd-monitor

"So which is better: G-Sync or FreeSync? With the features being so similar there is no inherent reason to select a particular monitor. Both technologies produces similar results, so the contest is mostly a wash at this point. There are a few disclaimers, however.

If you purchase a G-Sync monitor, you will only have support for its adaptive-sync features with a GeForce graphics card. You're effectively locked into buying Nvidia GPUs as long as you want to get the most out of your monitor. With a FreeSync monitor, particularly the newer, higher quality variants that meet the FreeSync Premium Pro certification, you're often free to use AMD or Nvidia graphics cards."

Actually, AMD very well intends to compete using ROCm.

In fact they have a wrapper you can use to run CUDA native code on AMD cards.

That AMD originally didn't have a CUDA competitor was down to the fact that they were nearly broke for 8 years.

XeSS loses more to FSR than it wins. Your cherry-picking a single example doesn't prove otherwise. Your original comment was that FSR sucks. Nowhere in any of those linked articles, even in the worst-case scenarios, does the author imply it sucks, because it doesn't. That was you, who apparently doesn't use upscaling and has no personal experience, being overly hyperbolic.
1. Vulkan games never perform better than DX12 on Windows if the game offers both code paths on Nvidia GPUs. Vulkan, in addition, has several limitations on Windows that DX12 doesn't regarding multiplane and other things that are of relevance to developers. I understand AMD's DirectX implementations have always been behind, but with the overhauled driver base that they introduced for Navi 21 and newer in May 2022, it should be much better.

2. Truly weird to double down on this, but hardware G-Sync modules have a much wider range in general. Again, it doesn't matter. Both brands of VRR technology will work with both brands of GPUs, only that Radeon doesn't really take advantage of the hardware module, making it wasteful in a sense.

3. ROCm isn't a competitor for CUDA and was never intended to be; ROCm is closer in nature to the Tesla compute-mode driver. It's also not supported on Windows, and hardware support is exceptionally limited: in fact, RDNA 3 doesn't support it yet. Calling ROCm a competitor to CUDA is a bad move, to say the least. Supposedly it's coming to Windows with that trademarked "soon", but currently the official support is limited to Pro and Instinct only, exclusively on Linux. AMD fans need to understand that you can't bank on a potential future development to justify a product. Really, you have no guarantees a potential future thing will pan out as you expect.

4. I'm not cherry-picking, what kind of denial is that? I've linked at least 4 of TPU's own reviews. Read them and see the comparison images. It's just worse. He won't say it sucks, but you can deduce it from what's being said. Shimmering, ghosting, occlusion issues, a softened image, loss of detail... How is any of that desirable? Then you wonder why AMD has 10% of the GPU market share.

I don't have this need to appease. As an AMD fan, you shouldn't defend them but DEMAND that they improve, by exposing their dirt. As a corporation they only do the bare minimum to get approval from their customers. AMD isn't Mr. Nice Guy; demand improvements, vote with your wallet, and they'll come.
#43
Metroid
This is worth $100; anybody paying more than that is helping Nvidia and AMD keep prices high.
#45
Macro Device
Metroid: This is worth $100
Whilst I agree its price is considerably high, this seems to be an exaggeration. If it cost $100, it would be the first $100 GPU to run ALL games at 1080p at high settings at 60 FPS. Even the legendary GTX 1060 and RX 480 series were twice as expensive whilst providing less smoothness in games.

$200 to $220 is completely OK. Anything more is just a one-way ticket to profits for nVidia. Anything less just doesn't make sense for AMD. Don't forget there is a no-slower RTX 4060 incoming, which also sports DLSS 3 and Fake Frames™ techniques, whether you want them or not, making for a difference worth paying a third more for. Not to mention the RTX 4060 is the first GPU in this price segment actually capable of running casual RT at reasonable speeds, whilst the RX 7600 fails to even achieve that. And the RTX 4060 is less power demanding.

"Reasonable" (probably reasonable indeed) price of anything below 270 USD in RTX 4060 would just be a complete funeral for all AMD products. Which, to be frank, haven't look good from the very start except for RX 7900 XTX which kicks 4070 Ti's butt for sure.
#46
evernessince
Dr. Dro: 1. Vulkan games never perform better than DX12 on Windows if the game offers both code paths on Nvidia GPUs. Vulkan, in addition, has several limitations on Windows that DX12 doesn't regarding multiplane and other things that are of relevance to developers. I understand AMD's DirectX implementations have always been behind, but with the overhauled driver base that they introduced for Navi 21 and newer in May 2022, it should be much better.
Key words there are "Nvidia GPU." You are confusing AMD's DX11 and DX9 implementations with their DX12 implementation. AMD's DX12 implementation has not had the same multi-threading problems that they had with DX11 and 9. That might be down to several factors aside from the game itself being more in control of threads on the newer API.
Dr. Dro: 2. Truly weird to double down on this, but hardware G-Sync modules have a much wider range in general. Again, it doesn't matter. Both brands of VRR technology will work with both brands of GPUs, only that Radeon doesn't really take advantage of the hardware module, making it wasteful in a sense.
There are a variety of variable-sync-capable display scalers, and for the most part they have gotten as good as, if not better than, the G-Sync module. Some variable refresh rate scalers are capable of a larger refresh range than the G-Sync module is. As the article I linked pointed out, they are feature-equivalent.
Dr. Dro: 3. ROCm isn't a competitor for CUDA and was never intended to be; ROCm is closer in nature to the Tesla compute-mode driver. It's also not supported on Windows, and hardware support is exceptionally limited: in fact, RDNA 3 doesn't support it yet. Calling ROCm a competitor to CUDA is a bad move, to say the least. Supposedly it's coming to Windows with that trademarked "soon", but currently the official support is limited to Pro and Instinct only, exclusively on Linux. AMD fans need to understand that you can't bank on a potential future development to justify a product. Really, you have no guarantees a potential future thing will pan out as you expect.
I never said ROCm was going to save anyone. I merely pointed it out given that you implied AMD was never going to have a CUDA competitor.
Dr. Dro: 4. I'm not cherry-picking, what kind of denial is that? I've linked at least 4 of TPU's own reviews. Read them and see the comparison images. It's just worse. He won't say it sucks, but you can deduce it from what's being said. Shimmering, ghosting, occlusion issues, a softened image, loss of detail... How is any of that desirable? Then you wonder why AMD has 10% of the GPU market share.
You linked 4 articles, of which 1 has XeSS, and based off that sample of 1 you declared XeSS was better than FSR. None of the articles you linked covers one of the better implementations of FSR either. Anyone can cherry-pick examples to make any one of the three upscaling technologies look bad. You aren't even trying to take a balanced approach.
Dr. Dro: I don't have this need to appease. As an AMD fan, you shouldn't defend them but DEMAND that they improve, by exposing their dirt. As a corporation they only do the bare minimum to get approval from their customers. AMD isn't Mr. Nice Guy; demand improvements, vote with your wallet, and they'll come.
You do realize that both you and I own Nvidia cards, right? You probably should have checked system specs before making such nonsense claims. The difference is I don't get invested in a brand and blindly defend a company. My post history here clearly demonstrates this. You don't have to be an AMD fanboy to say that FSR does not suck; that was clearly hyperbolic language on your part, which you seem determined to continue doubling down on. I'd challenge you to tell the difference in a double-blind test.
HD64G: We have to include inflation and, to be precise, the 8 GB RX 480 and later the 580 cost $250 when they launched back then. They dropped in price much later and got closer to $200. So, the 7600 for $250 is fairly priced, and anything lower than that is even better. No one has to agree; just my opinion considering all of the above.
The die size is a decent bit smaller than the 580's. At the end of the day, the GPU is priced like most other GPUs this generation: underwhelming and lacking excitement.

The only upside to the current GPU market situation is that it's giving Intel a good chunk of time to catch up with their drivers.
#47
Dr. Dro
Honestly man, I'm not looking for a fight, and the reason I'm so critical of AMD sometimes is that I know they can do better. I have an Nvidia GPU by chance; speaking for myself, I've always had a thing for Radeon cards - but so far AMD hasn't given me reasons to celebrate. Seems like they just keep self-owning like that.

The statement towards AMD fans wasn't explicitly directed at you, sorry if it came across that way; but it's a general trend I see. AMD can do better. I know it, I've seen it first hand, trust me on this.

The easiest way to spot DLSS vs. FSR is foliage and fire renditions. FSR will always be worse off. XeSS is still new; Cyberpunk is about the only implementation of XeSS 1.1 that I know of, but more are coming, and the criticisms leveled at FSR are consistent across a large variety of games. Good enough as it may be for some people, when options are available it's the last thing I'm looking at.
#48
AusWolf
Dr. Dro: Honestly man, I'm not looking for a fight, and the reason I'm so critical of AMD sometimes is that I know they can do better. I have an Nvidia GPU by chance; speaking for myself, I've always had a thing for Radeon cards - but so far AMD hasn't given me reasons to celebrate. Seems like they just keep self-owning like that.

The statement towards AMD fans wasn't explicitly directed at you, sorry if it came across that way; but it's a general trend I see. AMD can do better. I know it, I've seen it first hand, trust me on this.
I'm not entirely sure what you expect AMD to "do better" at. The arguments I see above seem more like nit-picking than actual arguments to me. At least as an owner of both AMD and Nvidia cards, I don't see any of those issues manifest anywhere. If you're a developer, and you absolutely need CUDA, I understand. Other than that, your arguments sound a bit made-up to me (no offense).
Dr. Dro: The easiest way to spot DLSS vs. FSR is foliage and fire renditions. FSR will always be worse off. XeSS is still new; Cyberpunk is about the only implementation of XeSS 1.1 that I know of, but more are coming, and the criticisms leveled at FSR are consistent across a large variety of games. Good enough as it may be for some people, when options are available it's the last thing I'm looking at.
I see you've got a 3090. In my opinion, comparing any kind of upscaling with that level of performance at hand is a bit silly. I would just run everything at native res. :)
#49
Macro Device
AusWolf: I would just run everything at native res.
Good luck running latter-day crap at native 4K on Ultra settings using this card. It will cry and beg you to drop some settings below Ultra.

It's also insufficient for native 1440p on Ultra at 144 Hz or higher. Upscalers are the necessary poison here as well. And I'm not criticising the 3090, don't get me wrong, this one is more than solid. Game developers' urge to make games as badly as possible is what's being criticised.
#50
AusWolf
Beginner Micro Device: Good luck running latter-day crap at native 4K on Ultra settings using this card. It will cry and beg you to drop some settings below Ultra.

It's also insufficient for native 1440p on Ultra at 144 Hz or higher. Upscalers are the necessary poison here as well. And I'm not criticising the 3090, don't get me wrong, this one is more than solid. Game developers' urge to make games as badly as possible is what's being criticised.
Then drop it below Ultra. :) Not that Ultra and High, or even Medium are so much different these days anyway.

Other than that, I agree. Using bad optimisation to sell upscaler tech is disgusting to say the least.