Friday, June 23rd 2023

Radeon RX 7800 XT Based on New ASIC with Navi 31 GCD on Navi 32 Package?

AMD Radeon RX 7800 XT will be a much-needed performance-segment addition to the company's Radeon RX 7000 series, which has a massive performance gap between the enthusiast-class RX 7900 series and the mainstream RX 7600. A report by "Moore's Law is Dead" makes a sensational claim that it is based on a whole new ASIC that's neither the "Navi 31" powering the RX 7900 series, nor the "Navi 32" designed for lower performance tiers, but something in between. This GPU will be AMD's answer to the "AD103." Apparently, the GPU features the same roughly 304 mm² graphics compute die (GCD) as the "Navi 31," but on a smaller package resembling that of the "Navi 32." This large GCD is surrounded by four MCDs (memory cache dies), which amount to a 256-bit wide GDDR6 memory interface and 64 MB of 2nd Gen Infinity Cache memory.

The GCD physically features 96 RDNA3 compute units, but AMD's product managers now have the ability to give the RX 7800 XT a much higher CU count than that of the "Navi 32," while staying below that of the RX 7900 XT (which is configured with 84). It's rumored that the smaller "Navi 32" GCD tops out at 60 CU (3,840 stream processors), so the new ASIC would enable the RX 7800 XT to have a CU count anywhere between 60 and 84. The resulting RX 7800 XT could have an ASIC with a lower manufacturing cost than that of a theoretical Navi 31 with two disabled MCDs (>60 mm² of wasted 6 nm dies), and even if it ends up performing within 10% of the RX 7900 XT (and matching the GeForce RTX 4070 Ti in the process), it would do so with better pricing headroom. The same ASIC could even power the mobile RX 7900 series, where the smaller package and narrower memory bus would conserve precious PCB footprint.
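To put the rumored configuration window in concrete numbers: the figures above imply 64 stream processors per CU (60 CU = 3,840) and, per MCD, a 64-bit GDDR6 controller with 16 MB of Infinity Cache (four MCDs = 256-bit, 64 MB). The minimal sketch below simply multiplies those out for the bounds of the rumored 60-84 CU window; the 72 CU midpoint is purely illustrative, not a leaked spec.

```python
# Back-of-the-envelope sketch of the rumored RX 7800 XT window.
# Ratios follow from the article: 60 CU = 3,840 SP -> 64 SP per CU;
# 4 MCDs = 256-bit / 64 MB -> 64-bit and 16 MB per MCD.
SP_PER_CU = 64
MCDS = 4
BUS_BITS_PER_MCD = 64
CACHE_MB_PER_MCD = 16

def config(cu_count: int) -> str:
    """Describe one hypothetical harvest of the 96-CU Navi 31 GCD."""
    return (f"{cu_count} CU -> {cu_count * SP_PER_CU} stream processors, "
            f"{MCDS * BUS_BITS_PER_MCD}-bit GDDR6, "
            f"{MCDS * CACHE_MB_PER_MCD} MB Infinity Cache")

# 60 and 84 bound the rumored window; 72 is an illustrative midpoint only.
for cu in (60, 72, 84):
    print(config(cu))
```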
Source: Moore's Law is Dead (YouTube)

169 Comments on Radeon RX 7800 XT Based on New ASIC with Navi 31 GCD on Navi 32 Package?

#26
Vayra86
john_Obviously we are exchanging opinions. And as Harry Callahan has said... :p

I don't know what a GPU needs to do to keep feeding a high-refresh-rate monitor or a dual-monitor setup, but close to 100W power consumption in video playback is unacceptable in 2023. While that's an opinion, I also believe it is logical to say it. The smartphone in your pocket or on your desk can probably play back the same video files at 5-10W. In fact, even AMD's own APUs play video while keeping power consumption for the whole SoC at lower than 15W. So, something has gone wrong with RDNA 3 here. And even if we consider this still an opinion, there is this chart

where a 6900XT is at half the power consumption.
AMD Radeon RX 7900 XTX Review - Disrupting the RTX 4080 - Power Consumption | TechPowerUp
It's high. But there are other examples with strange gaps. Look at the 3090 Ti compared to a 3090: a 13W difference at the same VRAM capacity. Mkay? The APU/CPU/phone comparisons are pretty much nonsense. Yes, other parts built for efficiency, and even specifically for efficient video playback over performance, can run said content efficiently. I mean... water is wet.

We're simply looking at a power-state issue here, one that has plagued every other gen of cards in either camp historically, and it's always multi-monitor or light desktop usage that's the culprit. Saying RDNA3 'went wrong' is jumping to conclusions. The 3090 Ti didn't go wrong either; it just has a different set of power states and voltages to eke out that 5% gain over its sibling.
#27
Dr. Dro
Vayra86It's obviously not for you if you're sitting on a 3090 that is already within spitting distance :) I think you're exaggerating a little. AMD's GPU stack has been in a far worse position not too long ago, arguably from (post-) Hawaii XT up to and including RDNA1. They have had a more competitive stack since RDNA2, not a worse one. The RDNA3 features that are missing are mostly in the RT performance/DLSS camp, and even despite that the cards run everything fine. It's really not relevant that a 4090 exists with a much higher performance cap; everything below it is samey in almost every way between these two camps.
I'm not exaggerating; the fact that it's not for me speaks volumes... it's been a refresh wave and a full generation since I purchased my 33-month-old graphics card. It should be ancient at this point - if you look at it objectively, 3 years is more than half of the R9 Fury X's entire lifetime as a supported product(!) - but no, AMD can't even release a product that will decisively beat it!
nguyenWell, once AMD keeps dipping into the red as it did last quarter, heads will roll; let's hope it's this guy first LOL
Scott Herkelman, Frank Azor and Sasa Marinkovic work in marketing; if they look like morons doing their job, it's because they're literally scraping the barrel to try and market these products. There's next to nothing redeemable about owning a Radeon today. You're missing out on all of the cool things if you buy one. That's not me speaking, it's the market share doing so.
#28
Vayra86
Dr. DroI'm not exaggerating; the fact that it's not for me speaks volumes... it's been a refresh wave and a full generation since I purchased my 33-month-old graphics card. It should be ancient at this point - if you look at it objectively, 3 years is more than half of the R9 Fury X's entire lifetime as a supported product(!) - but no, AMD can't even release a product that will decisively beat it!



Scott Herkelman, Frank Azor and Sasa Marinkovic work in marketing; if they look like morons doing their job, it's because they're literally scraping the barrel to try and market these products. There's next to nothing redeemable about owning a Radeon today. You're missing out on all of the cool things if you buy one. That's not me speaking, it's the market share doing so.
Dude, the entire Ampere > Ada performance gap is abysmal just the same. The age of your graphics card isn't relevant either, as both camps have spaced out their release cadence to once every two years for some time now. Ada isn't for you either, unless you are willing to up the spending cap to 1.5x what you used to pay for an x90.

Get real. Seriously. Or simply adjust your expectations. This shit has been happening since Turing and it's the new norm. No, you don't need to upgrade every gen; it was never a good idea, and it certainly isn't now.

It's funny you mention the Fury X in this context, btw :D What's happening in that head of yours, I wonder... are you gearing up to get a 4090 after all, or just expressing general disappointment?
#29
Chrispy_
I just want a release date for Navi32, and some updates on whether they've fixed the silicon issues that caused disappointing clocks in Navi31 and forced the entire AMD driver team to stop what they were doing for 4 months and try to fix Navi31 clock stability in software.
#30
john_
Vayra86It's high. But there are other examples with strange gaps. Look at the 3090 Ti compared to a 3090: a 13W difference at the same VRAM capacity. Mkay? The APU/CPU/phone comparisons are pretty much nonsense. Yes, other parts built for efficiency, and even specifically for efficient video playback over performance, can run said content efficiently. I mean... water is wet.

We're simply looking at a power-state issue here, one that has plagued every other gen of cards in either camp historically, and it's always multi-monitor or light desktop usage that's the culprit. Saying RDNA3 'went wrong' is jumping to conclusions. The 3090 Ti didn't go wrong either; it just has a different set of power states and voltages to eke out that 5% gain over its sibling.
My opinion, strictly an opinion here, let's say totally wrong, is that you choose to ignore the facts. But that's my opinion, probably wrong, and because it is a borderline insult I apologize for it in advance.
#31
Dr. Dro
Vayra86Dude, the entire Ampere > Ada performance gap is abysmal just the same. The age of your graphics card isn't relevant either, as both camps have spaced out their release cadence to once every two years for some time now. Ada isn't for you either, unless you are willing to up the spending cap to 1.5x what you used to pay for an x90.

Get real. Seriously. Or simply adjust your expectations. This shit has been happening since Turing and it's the new norm. No, you don't need to upgrade every gen; it was never a good idea, and it certainly isn't now.

It's funny you mention the Fury X in this context, btw :D What's happening in that head of yours, I wonder... are you gearing up to get a 4090 after all, or just expressing general disappointment?
The 4090 can do +80% over the 3090 in many situations, so it's not the same. Cut-down as it is, too.

Regarding Fury X... just another chapter of my prolonged love-hate relationship with AMD. It'll take quite some time for me to stop resenting them for EOL'ing it with high severity bugs I've reported going unfixed.
#32
QuietBob
Dr. Droas someone with a high-end Ampere card, it's not for me either. Its marginal gains in raster
Would you call a 33% difference "marginal"?

john_close to 100W power consumption in video playback
This has long been fixed. It's still high, but nowhere near 100 W. Same with idle:

#33
john_
QuietBobThis has long been fixed. It's still high, but nowhere near 100 W.
Ah! Thanks. Down to RDNA2 levels. Not exactly fixed, still high, but much, much better. I hope they drop it lower in future GPUs.
#34
lexluthermiester
john_I don't know what a GPU needs to do to keep feeding a high-refresh-rate monitor or a dual-monitor setup, but close to 100W power consumption in video playback is unacceptable in 2023. While that's an opinion, I also believe it is logical to say it. The smartphone in your pocket or on your desk can probably play back the same video files at 5-10W. In fact, even AMD's own APUs play video while keeping power consumption for the whole SoC at lower than 15W. So, something has gone wrong with RDNA 3 here. And even if we consider this still an opinion, there is this chart

where a 6900XT is at half the power consumption.
Likely an architectural thing. However, it's still not a serious problem. It's 88W, not 800W, and that's during use. Not a real problem. And as others have stated, that situation was fixed swiftly. People need to stop complaining about non-issues.
john_My opinion, strictly an opinion here, let's say totally wrong, is that you choose to ignore the facts. But that's my opinion, probably wrong, and because it is a borderline insult I apologize for it in advance.
There's nothing wrong with opinions and the expression thereof. It's when those opinions fly in the face of reason and logic that a problem arises.
#35
Vayra86
john_My opinion, strictly an opinion here, let's say totally wrong, is that you choose to ignore the facts. But that's my opinion, probably wrong, and because it is a borderline insult I apologize for it in advance.
No, no, I'm not saying you are wrong, I'm putting things in perspective. Some sanity checks, I feel, are necessary here. Things are being pulled way out of proportion. Yes, there are gaps in RDNA3. No, it's not optimal as a release, as a stack, or in how it was priced. At the same time, neither was Ada, and it still commands a higher net price, also per frame, for a somewhat expanded featureset and fewer gaps. But it ALSO has its gaps - notably in VRAM - and those aren't fixable.

As for insults... don't even worry, none seen or taken. We're discussing a thing; people shouldn't have to walk a tightrope doing so.
Dr. DroThe 4090 can do +80% over the 3090 in many situations, so it's not the same. Cut-down as it is, too.

Regarding Fury X... just another chapter of my prolonged love-hate relationship with AMD. It'll take quite some time for me to stop resenting them for EOL'ing it with high severity bugs I've reported going unfixed.
Fury X was arguably the worst GPU ever in terms of support... yeah. Total fail, and we can blame Raja "HBM" Koduri.
#36
Dr. Dro
QuietBobWould you call a 33% difference "marginal"?

For what it's worth, at that rate it's 10% a year - when compared to a bone stock 350W 3090 using review data from back then. Reality is closer to it being 1:1 with the 7900 XT in general. I can't say I'm impressed, especially considering its last-generation RT performance, multitude of quirks and inferior software support.

If the 7900 XTX hadn't failed, it would have been matching the 4090, just as the 6900 XT once matched the 3090.
#37
TheoneandonlyMrK
tabascosauzKeeping the GCD and easy scaling based on MCD is only, like, the entire point behind bringing RDNA3 to chiplets, what took them so long?? :confused:

Also, "answer to the AD103" is an interesting statement considering it's hoping for parity with 4070 Ti at best? 4070 Ti is AD104.



So I guess that precludes you from ever commenting on 12VHPWR's woes because you never touched a 40 series card? We're all armchair generals here :D

Though I am curious what these clock "bugs" are; even when it comes to problems, it seems like Navi31 has more relevant concerns that need to be solved first.
Like what?

Having owned Navi31 without issue since launch I often wonder where y'all get your issues from.

A bug that marginally dropped performance was rumoured but never proven to have happened, and I still ended up with a card that runs flawlessly, smooth and fast in all games.


All GPU designs end up with faults in the design and errata lists, same as with CPUs; just one company gets called out for it by some here, though. Realistic? Not.
#38
QuietBob
Dr. Drocompared to a bone stock 350W 3090 using review data from back then. Reality is closer to it being 1:1 with the 7900 XT
This is from the latest GPU review. All cards have been re-tested with the current game suite. At no resolution is the RTX3090 on par with the 7900XT in raster. The upgrade may not make sense for you (and reasonably so), but saying these two are about equal is a long stretch:

#39
ToTTenTranz
I have to admit it's a bit disheartening that AMD has to use an N31 chip to make sure the 7800XT is faster than the 6800XT. This is not the performance upgrade one would have expected 2 years after RDNA2's release.

N5 should have brought a 20% performance (i.e. clock) increase over N7; instead, the 7900XTX usually clocks only about as high as the 6900XT, and the N6 RX7600 clocks just as high as the N5 7900XTX.

My guess is AMD had really planned for the N5 RDNA3 chips to clock 20% higher, putting the N31 7900XTX and the N32 7800XT in the >2.7 GHz range, which would get the latter closer to the 6800XT in compute throughput (72 CUs on N21 @ 2250 MHz ~= 60 CUs on N32 @ 2700 MHz).
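For what it's worth, that equivalence checks out on paper if you count RDNA CUs the traditional way (64 FP32 lanes per CU, an FMA counted as two ops) and ignore RDNA3's dual-issue and any IPC differences; a quick sketch:

```python
# Rough FP32 throughput behind the "72 CU @ 2250 MHz ~= 60 CU @ 2700 MHz" remark.
# Counts 64 shaders per CU and an FMA as 2 ops; ignores dual-issue and IPC deltas.
def fp32_tflops(cus: int, clock_mhz: float) -> float:
    return cus * 64 * 2 * clock_mhz * 1e6 / 1e12

print(f"N21, 72 CU @ 2250 MHz: {fp32_tflops(72, 2250):.2f} TFLOPS")
print(f"N32, 60 CU @ 2700 MHz: {fp32_tflops(60, 2700):.2f} TFLOPS")
# Both land at ~20.7 TFLOPS, i.e. roughly the same paper throughput.
```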

Instead the new process brought them nearly zero clock advantages, and AMD now has to use the bigger N31 chip to bring some advantage over the 6800XT.


The big question now is whether RDNA4 will solve these issues, since it's now clear that AMD hasn't been able to solve them on N32.




And the most ironic thing is that we do have RDNA3 GPUs clocking at >2700 MHz at low(ish) power inside the Phoenix SoC, but those are hopelessly bandwidth-starved because AMD decided against putting L3 cache on it.
#40
tabascosauz
john_Ah! Thanks. Down to RDNA2 levels. Not exactly fixed, still high, but much, much better. I hope they drop it lower in future GPUs.
No review can properly encapsulate what multi-monitor and video playback power figures are. And it's not like Nvidia GPUs are immune to increasing power consumption the more monitors/resolutions you add, but they have more intermediate VRAM clock steps available to them to lessen the blow. If you can't make 10W, you might still make 20W. If you can't make 20W, you might still make 40W.

"Normal" RDNA video playback of ~40W isn't even remotely a problem. Yes, it's high compared to GeForce, but even the dinky 7900XT MBA cooler can manage to stay fanless at that level, most of the time.

When you see 80W video playback, that's not video playback power, that's AMD's perpetual lack of intermediate VRAM clocks overshadowing everything else and turning everything into 80-90W because VRAM is stuck at 2500MHz.

Time and time again an Adrenalin release is accompanied by reports that the multi monitor problem is "fixed". A quick trip to Reddit confirms plenty of run of the mill 2 monitor high refresh setups stay unchanged at 80-90W. All the Radeon team seem to be doing is just vetting specific display setups over time - if no major artifacting issues, allow it to drop VRAM clock. Which would actually eventually solve the problem, if AMD didn't have a habit of repeatedly regressing on the multi monitor problem and undoing all the progress they'd made every couple years.

Back to the point that GeForce cards aren't immune to variance, I get about half the idle power figures that w1zz had in reviews. But that's a difference of 10W vs. 20W, not 20W vs 80W. Wondrous what VRAM can do when it has access to more than just two modes.
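As a purely illustrative toy model of that point (made-up clock steps and bandwidth figures; real drivers also weigh vblank timing and artifacting, not just raw bandwidth): with only two VRAM states, any display setup the low state can't serve jumps straight to the full ~2500 MHz state, while intermediate steps let the clock land somewhere in between.

```python
# Toy model only: pick the lowest VRAM clock step whose bandwidth covers display
# scan-out. All numbers are made up; real drivers also consider vblank timing
# and artifacting, not just raw bandwidth.
def pick_vram_clock(required_gbs: float, steps_mhz: list[int],
                    gbs_per_mhz: float = 0.128) -> int:
    for step in sorted(steps_mhz):
        if step * gbs_per_mhz >= required_gbs:
            return step
    return max(steps_mhz)

dual_high_refresh = 40.0  # GB/s needed for scan-out (illustrative figure)
print(pick_vram_clock(dual_high_refresh, [96, 2500]))             # -> 2500, worst case
print(pick_vram_clock(dual_high_refresh, [96, 456, 1000, 2500]))  # -> 456, softer landing
```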
TheoneandonlyMrKLike what?

Having owned Navi31 without issue since launch I often wonder where y'all get your issues from.

A bug that marginally dropped performance was rumoured but never proven to have happened, and I still ended up with a card that runs flawlessly, smooth and fast in all games.


All GPU designs end up with faults in the design and errata lists, same as with CPUs; just one company gets called out for it by some here, though. Realistic? Not.
A shocking number of them come from AMD's driver relationship with multi-monitor setups. If you only have one screen, of course it runs like a top; it's the ideal scenario.

No, I don't count "not hitting 3GHz" as a "bug", it just diminishes the stuff that actually matters. The performance is there, it's everything else. You can't just slap an engine on frame rails and sell it as a car, no matter how good that motor is.
#41
Speedyblupi
kapone32You keep complaining about bug fixes, yet you have a 4090.
Progress benefits all of us, even if we already have a good enough GPU.
#42
john_
lexluthermiesterLikely an architectural thing. However, it's still not a serious problem. It's 88W, not 800W, and that's during use. Not a real problem. And as others have stated, that situation was fixed swiftly. People need to stop complaining about non-issues.
It's high. 42-48W is high (not 88W; the updated numbers from @QuietBob's post show 42-48W). It's not the 300W we can expect in gaming, but it's still high. When other hardware, and I mean AMD's own hardware, can do the same task at a fraction of the power the 7900 XT/XTX needs, then there is something wrong in the design. A decision to maybe simplify the design, or to cut costs? I don't know. But with AMD searching for advantages and not disadvantages, we should be seeing much lower power consumption in every new GPU series, not higher, and after a fix, the same as the previous gen (which unfortunately makes RDNA3 look more and more like RDNA2 again, even in video playback). Imagine AMD's cards from the lowest to the highest needing single-digit power to play back a video. That would have been a clear advantage, because gamers don't use their PCs 100% of the time to game. They could be playing videos or watching movies and series, and that's hours of PC usage. So 10W (6600 XT) vs 42-48W (7900 XT/XTX) is a serious difference. I repeat my perspective here: AMD should be searching for advantages where it can. Intel, for example, with its pathetic iGPUs before Arc, was at least trying to find advantages where it could, and it did have an advantage in the media engine, if I am not mistaken.
Vayra86No, no, I'm not saying you are wrong, I'm putting things in perspective. Some sanity checks, I feel, are necessary here. Things are being pulled way out of proportion. Yes, there are gaps in RDNA3. No, it's not optimal as a release, as a stack, or in how it was priced. At the same time, neither was Ada, and it still commands a higher net price, also per frame, for a somewhat expanded featureset and fewer gaps. But it ALSO has its gaps - notably in VRAM - and those aren't fixable.

As for insults... don't even worry, none seen or taken. We're discussing a thing; people shouldn't have to walk a tightrope doing so.
Thanks.

There are serious gaps in RDNA3. The fact that I keep calling RDNA3 a failure and nothing more than RDNA2, and no one has yet come out and thrown charts at me showing how wrong I am, does say something. I hope they manage to take advantage of RDNA3's new architectural features with RDNA 3.5 or 4. They were probably too occupied with getting this first generation of chiplets to work this time around.

As for Nvidia, it has been using VRAM as a kill switch for at least 15-20 years. All their architectures are probably built that way.
Many, many years ago I was doing some simple benchmarks, and I have kept a few. For example, a 9800GT with 512 MB of VRAM. Look what happens in the fourth test when the VRAM usage goes over 512 MB:


It becomes a slideshow. I mean a TRUE slideshow. While performance drops from test 1 to test 2 to test 3 the way you would expect, it tanks in the last test. I don't seem to have saved tests with an ATi/AMD card - they might be somewhere - but I remember that ATi/AMD cards were not dropping dead when going over VRAM capacity.

15-20 years later we see the same thing, the only difference being that this time AMD suffers the same way (but at least it usually offers more VRAM in the same price range).
#43
TheoneandonlyMrK
tabascosauzNo review can properly encapsulate what multi-monitor and video playback power figures are. And it's not like Nvidia GPUs are immune to increasing power consumption the more monitors/resolutions you add, but they have more intermediate VRAM clock steps available to them to lessen the blow. If you can't make 10W, you might still make 20W. If you can't make 20W, you might still make 40W.

"Normal" RDNA video playback of ~40W isn't even remotely a problem. Yes, it's high compared to GeForce, but even the dinky 7900XT MBA cooler can manage to stay fanless at that level, most of the time.

When you see 80W video playback, that's not video playback power, that's AMD's perpetual lack of intermediate VRAM clocks overshadowing everything else and turning everything into 80-90W because VRAM is stuck at 2500MHz.

Time and time again an Adrenalin release is accompanied by reports that the multi monitor problem is "fixed". A quick trip to Reddit confirms plenty of run of the mill 2 monitor high refresh setups stay unchanged at 80-90W. All the Radeon team seem to be doing is just vetting specific display setups over time - if no major artifacting issues, allow it to drop VRAM clock. Which would actually eventually solve the problem, if AMD didn't have a habit of repeatedly regressing on the multi monitor problem and undoing all the progress they'd made every couple years.

Back to the point that GeForce cards aren't immune to variance, I get about half the idle power figures that w1zz had in reviews. But that's a difference of 10W vs. 20W, not 20W vs 80W. Wondrous what VRAM can do when it has access to more than just two modes.



A shocking number of them come from AMD's driver relationship with multi-monitor setups. If you only have one screen, of course it runs like a top; it's the ideal scenario.

No, I don't count "not hitting 3GHz" as a "bug", it just diminishes the stuff that actually matters. The performance is there, it's everything else. You can't just slap an engine on frame rails and sell it as a car, no matter how good that motor is.
Yes, I have two monitors. A shocking number?

I know of one: high power draw. The end.

So I would be interested in knowing about a shocking number of other bugs, with proof.
#44
tabascosauz
TheoneandonlyMrKYes, I have two monitors. A shocking number?

I know of one: high power draw. The end.

So I would be interested in knowing about a shocking number of other bugs, with proof.
My dude, I said my piece months ago when I had the card. If you have only 60 Hz screens and/or a single monitor, there's nothing to note. I'm not repeating the whole writeup again. Go look back in the owners club or my project log.
#45
lexluthermiester
john_AMD should be searching for advantages where it can.
Quibbling over less than 50W of power shouldn't be one of them. Nor should it be something people get unpleasant about.
#46
Dr. Dro
john_It's high. 42-48W is high (not 88W; the updated numbers from @QuietBob's post show 42-48W). It's not the 300W we can expect in gaming, but it's still high. When other hardware, and I mean AMD's own hardware, can do the same task at a fraction of the power the 7900 XT/XTX needs, then there is something wrong in the design. A decision to maybe simplify the design, or to cut costs? I don't know. But with AMD searching for advantages and not disadvantages, we should be seeing much lower power consumption in every new GPU series, not higher, and after a fix, the same as the previous gen (which unfortunately makes RDNA3 look more and more like RDNA2 again, even in video playback). Imagine AMD's cards from the lowest to the highest needing single-digit power to play back a video. That would have been a clear advantage, because gamers don't use their PCs 100% of the time to game. They could be playing videos or watching movies and series, and that's hours of PC usage. So 10W (6600 XT) vs 42-48W (7900 XT/XTX) is a serious difference. I repeat my perspective here: AMD should be searching for advantages where it can. Intel, for example, with its pathetic iGPUs before Arc, was at least trying to find advantages where it could, and it did have an advantage in the media engine, if I am not mistaken.


Thanks.

There are serious gaps in RDNA3. The fact that I keep calling RDNA3 a failure and nothing more than RDNA2, and no one has yet come out and thrown charts at me showing how wrong I am, does say something. I hope they manage to take advantage of RDNA3's new architectural features with RDNA 3.5 or 4. They were probably too occupied with getting this first generation of chiplets to work this time around.

As for Nvidia, it has been using VRAM as a kill switch for at least 15-20 years. All their architectures are probably built that way.
Many, many years ago I was doing some simple benchmarks, and I have kept a few. For example, a 9800GT with 512 MB of VRAM. Look what happens in the fourth test when the VRAM usage goes over 512 MB:


It becomes a slideshow. I mean a TRUE slideshow. While performance drops from test 1 to test 2 to test 3 the way you would expect, it tanks in the last test. I don't seem to have saved tests with an ATi/AMD card - they might be somewhere - but I remember that ATi/AMD cards were not dropping dead when going over VRAM capacity.

15-20 years later we see the same thing, the only difference being that this time AMD suffers the same way (but at least it usually offers more VRAM in the same price range).
Oh, the old Tropics demo. Love it, I still have it in my benching suite :D


I actually ran it on my current build - of course it's not running on the UHD 770


I agree with you too, and that is also what I mean to say when I sound so harsh towards these RX 7000 series cards. So would any chart, from any reviewer. In the best cases, the 7600 does barely 10% over the 6600 XT, and is actually anywhere from dead even to 4% faster than the 6650 XT. There is something wrong big time with RDNA 3: either it was an architectural gamble that AMD hoped would pay off but didn't, the drivers are hilariously and absolutely rotten, or they have severe hardware errata. It's got to be one of these three, and either way, it doesn't look good.
#47
TriCyclops
lexluthermiesterYou mean providing a variety of products that people can buy? No, that's never a good thing, is it.. :rolleyes:
No, I meant what I said, not what you think I said.
#48
john_
lexluthermiesterQuibbling over less than 50W of power shouldn't be one of them. Nor should it be something people get unpleasant about.
Look, we see things differently. And it's not about 50W of power in gaming, for example, or in Furmark; it's about video playback. I mean, why waste an extra 30W of power while watching a movie? I am not 15 years old any more, when I was playing games all day. Today I spend more time watching YouTube videos and movies than gaming, so power consumption in video playback is important to me. Does AMD have the luxury to tell me, "If you want low power consumption in video playback, go and buy a competing card"? As I said, AMD should start looking for advantages that it CAN achieve, not trying to play catch-up with Nvidia while Nvidia dictates the rules. Where is FreeSync 3.0 with frame generation? I am throwing that out as an example. AMD should be looking at improving its hardware in various ways, not just following where Nvidia wants to drive the market.
#49
Bomby569
On par with the 4070 Ti, so 10% slower than the 7900 XT. How many cards do they plan on releasing to stack them every 10%?!

I guess no 7800 XTX then, or it would just be hilarious :roll:
#50
TheoneandonlyMrK
tabascosauzMy dude, I said my piece months ago when I had the card. If you have only 60 Hz screens and/or a single monitor, there's nothing to note. I'm not repeating the whole writeup again. Go look back in the owners club or my project log.
If you looked as hard as you did for and at issues and only came up with the three I just read, I wouldn't call that a shocking amount relative to Intel and Nvidia GPUs, but that's me. The fan stopping really seemed to bother you, plus multi-monitor idle above all else; one of those doesn't even register on my radar most of the time. As for stuttering, I don't use your setup, you do, and some of that stuttering was down to your personal setup, i.e. cables. Glad you got it sorted by removing the card entirely, but for many other silent users, and some vocal ones, few of your issues applied.

And more importantly, this is made by the same people, but it isn't your card, so I personally think your issues might not apply to this rumour.