Wednesday, April 26th 2023

AMD Radeon RX 7600 Early Sample Offers RX 6750 XT Performance at 175W: Rumor

AMD is expected to debut its performance-segment Radeon RX 7600 RDNA3 graphics card in May-June 2023, with board partners expected to show off their custom-design cards at Computex 2023 in June. Moore's Law is Dead reports having spoken to a source with access to an early graphics card sample running the 5 nm "Navi 33" silicon that powers the RX 7600. This card, with development drivers (which are sure to be riddled with performance limiters), offers an 11% performance uplift over the Radeon RX 6650 XT and a gaming power draw of 175 W (the RX 6650 XT pulls around 185-190 W).

This is still an early sample running development drivers, but an 11% performance boost puts it in the league of the Radeon RX 6700 XT. Should a production RX 7600 with launch-day drivers add another 5-7% on top of this, the RX 7600 could end up roughly matching the RX 6750 XT (a slim performance lead over the RTX 3070 in 1080p gaming). Should its power draw also hold, one can expect custom-design graphics cards to ship with single 8-pin PCIe power connectors. A couple of nifty specs of the RX 7600 also leaked out in the MLID report: firstly, that 8 GB will remain the standard memory size for the RX 7600, as it is for the current RX 6650 XT; secondly, that the RX 7600 engine clock is reported to boost "above" 2.60 GHz.
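For readers who want to follow the math, here is a minimal back-of-the-envelope sketch of how the rumored figures compound. The +11% and +5-7% values come from the report summarized above; the 6% midpoint and the framing are illustrative assumptions, not measurements.

```python
# Back-of-the-envelope compounding of the rumored uplift figures.
# +11% and +5-7% are from the MLID report; the 6% midpoint is an assumption.
early_sample_vs_6650xt = 1.11   # early sample: +11% over the RX 6650 XT
launch_driver_gain = 1.06       # assumed midpoint of the projected +5-7%

projected = early_sample_vs_6650xt * launch_driver_gain
print(f"Projected RX 7600 vs RX 6650 XT: +{(projected - 1) * 100:.0f}%")  # ~ +18%
```

An 18% or so lead over the RX 6650 XT is roughly the gap this article associates with the RX 6750 XT.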
Source: Moore's Law is Dead (YouTube)

91 Comments on AMD Radeon RX 7600 Early Sample Offers RX 6750 XT Performance at 175W: Rumor

#76
Dr. Dro
kapone32This generation did not deliver? My 7900XT is in every way faster than my 6800XT. Correspondingly, my 7900X3D also feels much faster than the 5800X3D. The narrative is strong for this generation, but once you cut through the noise, the truth is that with the 6000 and 7000 series AMD will have a full stack for people to get into. I see the 6700XT going to $299, and that would be perfect for people like me. If the 7600XT can match a 6750XT, that is good news. These chips are not one super huge chip but chiplets, so things like that monster enterprise card with 32GB of VRAM could easily be configured for desktop use.
It's not enough, especially when you factor in the cost. Also, the only reason I could possibly want more GPU horsepower is for raytracing. AMD doesn't deliver that to me. It doesn't impress me because I already had what you're currently experiencing over 2 years ago. Almost 3, at this point.

Posted on Reply
#77
Kapone33
Dr. DroIt's not enough, especially when you factor in the cost. Also, the only reason I could possibly want more GPU horsepower is for raytracing. AMD doesn't deliver that to me. It doesn't impress me because I already had what you're currently experiencing over 2 years ago. Almost 3, at this point.

My experience was nuanced. The thing is, I originally got a 7900XTX and it died. In the meantime I sold my 6800XT in a system, got a refund for the 7900XTX, and picked up a 7900XT for $400 less. I am in no way interested in ray tracing, so that did not sway me, and if I don't use FSR, why would I care about DLSS? That is just me, though; it's not like I did not thoroughly enjoy my 6800XT, it's just that the 7900XT actually drives my 4K 144Hz panel exactly how I thought it would, and that is more than enough for me. I am also a huge fan of AMD's driver/software support, which is not circa 2012 anymore and actually is great, as the RX580 8GB is still a viable card so many years later. I know that Nvidia's drivers are perceived as more stable, but when I got a 3060 laptop I shook my head when I went into the software package to see the exact same interface as my GTS450 from 2010.
Posted on Reply
#78
Dr. Dro
kapone32My experience was nuanced. The thing is, I originally got a 7900XTX and it died. In the meantime I sold my 6800XT in a system, got a refund for the 7900XTX, and picked up a 7900XT for $400 less. I am in no way interested in ray tracing, so that did not sway me, and if I don't use FSR, why would I care about DLSS? That is just me, though; it's not like I did not thoroughly enjoy my 6800XT, it's just that the 7900XT actually drives my 4K 144Hz panel exactly how I thought it would, and that is more than enough for me. I am also a huge fan of AMD's driver/software support, which is not circa 2012 anymore and actually is great, as the RX580 8GB is still a viable card so many years later. I know that Nvidia's drivers are perceived as more stable, but when I got a 3060 laptop I shook my head when I went into the software package to see the exact same interface as my GTS450 from 2010.
Perhaps you're much more easily impressed than I am, but then again, the relative baseline I am using is on another level altogether. Ray tracing performance is the only thing I really care about at this point, because there are no raster-only games which an RTX 3090 won't comfortably run on ultra high. The other auxiliary improvements that RDNA 3 may bring, such as a higher quality encoder, are all things that Nvidia had already given me all those years ago. AMD's barely catching up here.

Changing the control panel's design often is by no means a sign of quality driver support, btw. And especially not of driver stability. ;)
Posted on Reply
#79
Vayra86
Dr. DroIt's not enough, especially when you factor in the cost. Also, the only reason I could possibly want more GPU horsepower is for raytracing. AMD doesn't deliver that to me. It doesn't impress me because I already had what you're currently experiencing over 2 years ago. Almost 3, at this point.

You bought into the top of the stack, that means any further upgrades, especially with just one gen between them, are going to be hyper costly for minimal gain.

The longer you wait, the more you save. I gained almost 300% from last card to this one. Even if the price was too high, the gain made it totally worthwhile.

You expect too much from one gen to the next - you could move to a 4090 btw, that's +80%. ;) Seems substantial - any other option was off the table for you regardless.
Posted on Reply
#80
Dr. Dro
Vayra86You bought into the top of the stack, that means any further upgrades, especially with just one gen between them, are going to be hyper costly for minimal gain.

The longer you wait, the more you save. I gained almost 300% from last card to this one. Even if the price was too high, the gain made it totally worthwhile.

You expect too much from one gen to the next - you could move to a 4090 btw, that's +80%. ;) Seems substantial - any other option was off the table for you regardless.
I thought it was the more you buy the more you save :laugh:

But yeah, I agree. The low-VRAM curse of the 3080 doesn't affect me, so I'm just gonna wait for RDNA 4 and Blackwell GPUs before I make a decision, unless a miracle happens and GPU prices drop quite significantly. The next thing I will be purchasing is an OLED TV; the current display I have is alright, but it doesn't do my PC justice.
Posted on Reply
#81
londiste
Dr. Dro2nd generation G6X (introduced with 3090 Ti) greatly alleviated power consumption, and the 4070 which is the GPU in the same power consumption tier uses standard G6, doesn't it? Because if not... The performance per watt disparity is even more extreme.
RTX 4070 is using GDDR6X.

Do you have a reference, maybe a link, about the 2nd-generation GDDR6X in the RTX 3090 Ti? I do not remember anything resembling this from any coverage.
The RTX 3090 Ti did get a more efficient VRAM subsystem, but that was simply because it got 2 GB memory chips instead of double the number of 1 GB chips mounted on both the front and back of the card. This should bring a nice 30% or so of power savings by itself.
Posted on Reply
#82
Dr. Dro
londisteRTX 4070 is using GDDR6X.

Do you have a reference, maybe a link, about the 2nd-generation GDDR6X in the RTX 3090 Ti? I do not remember anything resembling this from any coverage.
The RTX 3090 Ti did get a more efficient VRAM subsystem, but that was simply because it got 2 GB memory chips instead of double the number of 1 GB chips mounted on both the front and back of the card. This should bring a nice 30% or so of power savings by itself.
Yeah, I saw, I looked it up after I posted that. That makes the 4070 even more remarkable to me.

The original 3090 also received 21 Gbps chips, specifically Micron MT61K256M32JE-21 (D8BGX); the reason the 3090 ships at 19.5 Gbps is to save power (around 40% of this GPU's power budget is chugged by the G6X alone). That, and they don't clock much above that, so there's no illusion of headroom: my personal card does *exactly* 21 Gbps and not an inch more. Well, maybe just a tiny bit - 1319 MHz according to GPU-Z, instead of 1313.
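(For context, here is a minimal sketch of the clock/data-rate arithmetic behind those GPU-Z readings. It assumes GPU-Z's reported GDDR6X memory clock maps to the per-pin data rate through a 16x multiplier, which is what the 1313 MHz ≈ 21 Gbps pairing above implies; treat it as illustration, not documentation of GPU-Z internals.)

```python
# Sketch only: converts a GPU-Z memory clock reading to a GDDR6X per-pin
# data rate (assumed 16x multiplier) and to total bandwidth on a 384-bit bus.
def data_rate_gbps(gpuz_clock_mhz: float) -> float:
    """Per-pin data rate in Gbps from the GPU-Z memory clock reading."""
    return gpuz_clock_mhz * 16 / 1000

def bandwidth_gbs(rate_gbps: float, bus_width_bits: int = 384) -> float:
    """Total memory bandwidth in GB/s."""
    return rate_gbps * bus_width_bits / 8

print(data_rate_gbps(1313))                  # ~21.0 Gbps, the chips' rated speed
print(bandwidth_gbs(19.5))                   # 936 GB/s, the 3090 at its shipping 19.5 Gbps
print(bandwidth_gbs(data_rate_gbps(1319)))   # ~1013 GB/s at the 1319 MHz readout
```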



The 3090 Ti has the updated Micron MT61K512M32KPA-21:U (D8BZC) chip, same as the 4090:

www.techpowerup.com/review/nvidia-geforce-rtx-3090-ti-founders-edition/4.html
www.techpowerup.com/review/nvidia-geforce-rtx-4090-founders-edition/4.html
Posted on Reply
#83
AnotherReader
Dr. DroThe original 3090 also received 21 Gbps chips, specifically Micron MT61K256M32JE-21 (D8BGX); the reason the 3090 ships at 19.5 Gbps is to save power (around 40% of this GPU's power budget is chugged by the G6X alone). That, and they don't clock much above that, so there's no illusion of headroom: my personal card does *exactly* 21 Gbps and not an inch more. Well, maybe just a tiny bit - 1319 MHz according to GPU-Z, instead of 1313.
I think you're mistaken. Micron claims 7.25 pJ per bit for GDDR6X. That works out to about 54 W for the 3090. The 24 memory chips should be 24 to 48 W. That is at most 30% of a 3090's power budget.
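(A quick back-of-the-envelope check of those numbers: the 7.25 pJ/bit figure and the 1-2 W per chip range are from the post above, while the 350 W board power is an assumption for a stock RTX 3090.)

```python
# Interface (I/O) energy at full bandwidth: energy per bit x bits per second.
PJ_PER_BIT = 7.25e-12      # J/bit, Micron's quoted GDDR6X figure
BUS_BITS = 384
RATE_GBPS = 19.5           # RTX 3090 shipping data rate per pin

io_power_w = BUS_BITS * RATE_GBPS * 1e9 * PJ_PER_BIT
print(f"I/O power: {io_power_w:.0f} W")                         # ~54 W

chips = 24
chip_power_high = 2.0 * chips                                   # assumed 1-2 W per device
total_high = io_power_w + chip_power_high
print(f"Upper bound vs a 350 W board: {total_high / 350:.0%}")  # ~29%
```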
Posted on Reply
#84
Dr. Dro
AnotherReaderI think you're mistaken. Micron claims 7.25 pJ per bit for GDDR6X. That works out to about 54 W for the 3090. The 24 memory chips should be 24 to 48 W. That is at most 30% of a 3090's power budget.
Trust me, I make that claim from experience. GPU-Z is capable of measuring and reporting the MVDDC rail wattage on these cards. It can easily push north of 100 W, and the memory controller load isn't even maxed out. Driving the clamshell G6X on the 3090 at high resolutions such as 4K is absurdly power demanding. Using 3DMark Speed Way (heavy raytracing workload) as an example, it will average 120 watts here.



That reminds me, I think it's about time to repaste this card. Three years of ownership without opening it, and the hotspot temps are getting a bit high for my taste :oops:
Posted on Reply
#85
Count von Schwalbe
Nocturnus Moderatus
Dr. DroTrust me, I make that claim from experience. GPU-Z is capable of measuring and reporting the MVDDC rail wattage on these cards. It can easily push north of 100 W,
Jeez, that is a lot more inefficient than I had thought it would be.

I guess that is why VRMs need at least as much cooling as the VRAM chips themselves.

I would assume MVDDC power draw reported would be the incoming side of the VRM.
Posted on Reply
#86
Dr. Dro
Count von SchwalbeJeez, that is a lot more inefficient than I had thought it would be.

I guess that is why VRMs need at least as much cooling as the VRAM chips themselves.

I would assume MVDDC power draw reported would be the incoming side of the VRM.
Yeah, you begin to understand why NVIDIA opted to install only 10 GB on the 3080 when you see the original 3090 at work. It made sense for most gamers: once you account for the lower shader count and for the fact that the GPU core itself is afforded a lot more power, it should have been a no-brainer at the resolutions gamers commonly use. Except VRAM usage began to balloon, and there are a few situations where those 10 GB can already be a bit uncomfortable. The 3090 relies on its extra shaders to do that same work, which is why the two are a lot closer than they should be. The 3090 Ti is faster because it solved the memory power consumption problem, fully enabled the GPU, and raised the power limit at the same time; that's where the 20% performance uplift comes from despite the 3090 technically being 98% enabled.

As for which side, I don't know exactly, but it makes sense to me.
Posted on Reply
#87
Sherhi
I hope it's a good card; shame it's only 8 GB though... I don't know about the technical shenanigans, but don't consoles have 10 GB? So I would honestly like that number as a bare minimum for a 60-class card that is considered mid-tier, because many studios are limited (or unchained) by current consoles' hardware. I am still using a GTX 760, and even though I play older games (mostly grand strategies like EU4), it's starting to show its age. At this point almost any new card will give me something like 450-500% better performance, but I see the modern mid-tier standard as a 60-class card with enough memory to run new games at 1440p on medium settings without any issues for the next 3-4 years, which... doesn't seem to be the case.
Posted on Reply
#88
Dr. Dro
SherhiI hope it's a good card; shame it's only 8 GB though... I don't know about the technical shenanigans, but don't consoles have 10 GB? So I would honestly like that number as a bare minimum for a 60-class card that is considered mid-tier, because many studios are limited (or unchained) by current consoles' hardware. I am still using a GTX 760, and even though I play older games (mostly grand strategies like EU4), it's starting to show its age. At this point almost any new card will give me something like 450-500% better performance, but I see the modern mid-tier standard as a 60-class card with enough memory to run new games at 1440p on medium settings without any issues for the next 3-4 years, which... doesn't seem to be the case.
"Good card" and 8 GB are mutually exclusive these days, and have been for some time now, since well before the trend caught on. People would have been mad at me for saying this not 6 months ago.

Anyway, consoles have a unified 16 GB pool and a custom OS which doesn't consume as many resources as Windows, nor the applications that you'd usually have chugging your RAM. Games are also shipped with settings tailored to the console's capabilities, so they have assets optimized for its format, unlike on PC, where assets tend to emphasize quality or performance instead of a tailored mix of both. Fortunately, a 32 GB RAM kit is affordable nowadays unless you go for high-bin, exotic performance kits with select ICs, so you should buy that instead.
Posted on Reply
#89
AnotherReader
Dr. DroTrust me, I make that claim from experience. GPU-Z is capable of measuring and reporting the MVDDC rail wattage on these cards. It can easily push north of 100 W, and the memory controller load isn't even maxed out. Driving the clamshell G6X on the 3090 at high resolutions such as 4K is absurdly power demanding. Using 3DMark Speed Way (heavy raytracing workload) as an example, it will average 120 watts here.
This suggests that the early GDDR6X devices had high power consumption, i.e. the power consumed by a single chip is probably around 3 W instead of the 1 to 2 W that has been the norm for a while. It would be nice if we could get a screenshot from a 3090 Ti in the same benchmark. I also noticed that the combined chip and VRAM power drawn is barely 75% of the board draw. It seems that the VRMs are rather inefficient.

Honestly, given the high prices of the flagships, HBM is beginning to look better for them. An additional 500 to 600 dollars won't bother the buyers of these cards. Also, for laptop GPUs, LPDDR5 would be better than GDDR6 etc. Widen the interface by 2x and you would still save power.
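(A tiny illustration of the "barely 75%" observation above; all of the numbers below are hypothetical placeholders rather than measurements, chosen only to show how the unaccounted remainder points at VRM losses and other minor rails.)

```python
# Hypothetical split of board power into reported rails vs. everything else.
board_power_w = 420.0    # assumed total board input power
gpu_chip_w = 195.0       # assumed reported GPU chip power
mvddc_w = 120.0          # assumed reported memory (MVDDC) rail power

accounted = gpu_chip_w + mvddc_w
remainder = board_power_w - accounted
print(f"Reported rails: {accounted / board_power_w:.0%} of board power")  # 75%
print(f"Left for VRM losses, fans, misc. rails: {remainder:.0f} W")       # 105 W
```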
Posted on Reply
#90
Dr. Dro
AnotherReaderThis suggests that the early GDDR6X devices had high power consumption, i.e. the power consumed by a single chip is probably around 3 W instead of the 1 to 2 W that has been the norm for a while. It would be nice if we could get a screenshot from a 3090 Ti in the same benchmark. I also noticed that the combined chip and VRAM power drawn is barely 75% of the board draw. It seems that the VRMs are rather inefficient.

Honestly, given the high prices of the flagships, HBM is beginning to look better for them. An additional 500 to 600 dollars won't bother the buyers of these cards. Also, for laptop GPUs, LPDDR5 would be better than GDDR6 etc. Widen the interface by 2x and you would still save power.
Apparently it's a lot lower; the way the 3090 Ti has been re-engineered improves the memory subsystem in many ways. The 3080 Ti does not have this improved memory: it uses a lower-bin 19 Gbps G6X chip that is also used in the RTX 3070 Ti, so it's not a valid comparison. However, it is also a newer revision than the original 3080 10 GB's chips. Basically:

The 3080 (original 10 GB model) uses 10x 8 Gbit Micron MT61K256M32JE-19:T (D8BGW), rated 19 Gbps
The 3070 Ti (8x), 3080 12 GB and 3080 Ti (12x) use 8 Gbit Micron MT61K256M32JE-19G:T (D8BWW), rated 19 Gbps
The 3090 uses 24x 8 Gbit Micron MT61K256M32JE-21 (D8BGX), rated 21 Gbps
The 3090 Ti and 4090 use 12x 16 Gbit Micron MT61K512M32KPA-21:U (D8BZC), rated 21 Gbps
As of now, other Ada cards use the same chips as the 3090 Ti and 4090, but in lower quantities appropriate for their bus widths

Which makes the RTX 3090 unique in its extreme memory power consumption: it has the first generation and first revision of the chips, at their highest speed bin, and you actually need to feed 24 of them. It's the worst-case scenario.
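(A quick sanity check on the configurations listed above; the only assumption is the usual clamshell mapping, where each chip runs in x16 mode so two devices share a 32-bit channel.)

```python
# Capacity and bus width implied by each memory configuration.
def memory_config(chips: int, density_gbit: int, bits_per_chip: int) -> str:
    capacity_gb = chips * density_gbit // 8
    bus_bits = chips * bits_per_chip
    return f"{capacity_gb} GB on a {bus_bits}-bit bus"

print(memory_config(24, 8, 16))    # RTX 3090: 24 GB, 384-bit (clamshell, x16 per chip)
print(memory_config(12, 16, 32))   # RTX 3090 Ti / 4090: 24 GB, 384-bit (x32 per chip)
print(memory_config(10, 8, 32))    # RTX 3080 10 GB: 10 GB, 320-bit
```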

From my understanding, the problem with HBM is that the silicon and the memory must be flawless and cannot be tested until packaged: if there are problems with the substrate, the GPU ASIC, or any of the active HBM stacks, the entire package has to be discarded. This greatly reduces yield and was a cause for concern for AMD with Fiji and the two Vega generations. The Titan V as well; it had a bad/disabled HBM stack (3072 out of 4096 bits enabled). It might not be feasible, especially considering that the more affordable products tend to use harvested versions of the higher-end chips, or they just disable parts of them to maximize yield and profit, as Nvidia has done with the 4090.
Posted on Reply
#91
Figus
tfdsafIf the performance numbers are true, then this is another DOA card! Hey, I'm saying this for both GPU makers, what a shock, we can be objective and not hold ANY company in our hearts!

For this to actually be good, it would need to be at least 6-7% faster, as the report suggested, not cost a penny over $300, and also be available with 16 GB of VRAM for $40 more!

This needs to be on par with the RX 6800, draw less power, and cost $300 or less to actually be good value! If AMD is smart, they will go with this strategy and offer a 16 GB model as well for $40 or $50 more!
Maybe in 2010 you could have expected that; now a two-class jump is a dream, and a sub-$300 price is even more of a dream. The 7600 will be the replacement for the 6600; it will probably perform like a 6700 with a little less power draw, hardly any more. The 6800 is in another league, and the XT version is still one of the best price/performance cards you could buy today for 1440p.
Posted on Reply