Wednesday, April 26th 2023

AMD Radeon RX 7600 Early Sample Offers RX 6750 XT Performance at 175W: Rumor
AMD is expected to debut its performance-segment Radeon RX 7600 RDNA3 graphics card in May-June 2023, with board partners expected to show off their custom-design cards at Computex 2023 in June. Moore's Law is Dead reports having spoken to a source with access to an early graphics card sample running the 5 nm "Navi 33" silicon that powers the RX 7600. This card, running development drivers (which are sure to be riddled with performance limiters), offers an 11% performance uplift over the Radeon RX 6650 XT, and a gaming power draw of 175 W (the RX 6650 XT pulls around 185-190 W).
This is still an early sample running development drivers, but an 11% performance boost puts it in the league of the Radeon RX 6700 XT. Should a production RX 7600 with launch-day drivers add another 5-7% performance over this, the RX 7600 could end up roughly matching the RX 6750 XT (which holds a slim performance lead over the RTX 3070 in 1080p gaming). Should its power draw also hold, one can expect custom-design graphics cards to ship with single 8-pin PCIe power connectors. A couple of nifty specs of the RX 7600 also leaked out in the MLID report: firstly, that 8 GB will remain the standard memory size for the RX 7600, as it is for the current RX 6650 XT; and secondly, that the RX 7600's engine clock is reported to boost "above" 2.60 GHz.
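If the rumored figures hold, the relative positioning works out roughly as follows; a minimal back-of-the-envelope sketch, assuming the reported 11% uplift, a 5-7% launch-driver gain, and the 175 W / ~187 W power figures. All values are rumored, not measured:

```python
# Back-of-the-envelope math based on the rumored figures in the MLID report.
# Every number here is an assumption taken from the article, not a measurement.

rx6650xt_perf = 1.00          # baseline: RX 6650 XT relative performance
early_sample_uplift = 0.11    # rumored +11% for the early RX 7600 sample
launch_driver_gain = 0.06     # assumed midpoint of the speculated 5-7% driver gain

early_sample_perf = rx6650xt_perf * (1 + early_sample_uplift)
launch_perf = early_sample_perf * (1 + launch_driver_gain)
print(f"Early sample vs RX 6650 XT: {early_sample_perf:.2f}x")  # ~1.11x
print(f"Hypothetical launch perf:   {launch_perf:.2f}x")        # ~1.18x, roughly RX 6750 XT territory

# Perf-per-watt comparison using the rumored 175 W draw vs ~187 W for the RX 6650 XT
rx7600_watts, rx6650xt_watts = 175, 187
ppw_gain = (early_sample_perf / rx7600_watts) / (rx6650xt_perf / rx6650xt_watts) - 1
print(f"Perf/W improvement: ~{ppw_gain:.0%}")                   # roughly +19%
```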
Source:
Moore's Law is Dead (YouTube)
Frequently changing the control panel's design is by no means a sign of quality driver support, btw. And especially not of driver stability. ;)
The longer you wait, the more you save. I gained almost 300% from last card to this one. Even if the price was too high, the gain made it totally worthwhile.
You expect too much from one gen to the next - you could move to a 4090 btw, that's +80%. ;) Seems substantial - any other option was off the table for you regardless.
But yeah, I agree. The low VRAM curse of the 3080 doesn't affect me, so I'm just gonna wait for RDNA 4 and Blackwell GPUs before I make a decision, unless a miracle happens and GPU prices drop quite significantly. The next thing I'll be purchasing is an OLED TV; the current display I have is alright, but it doesn't do my PC justice.
Do you have a reference, maybe a link, about the second-generation GDDR6X in the RTX 3090 Ti? I don't remember anything resembling this from any coverage.
The RTX 3090 Ti did get a more efficient VRAM subsystem, but simply because it got 2 GB memory chips instead of twice as many 1 GB chips mounted on both the front and back of the card. Halving the chip count should bring a nice 30% or so power saving by itself.
The original 3090 also received 21 Gbps chips, specifically Micron MT61K256M32JE-21 (D8BGX); the reason the 3090 ships at 19.5 Gbps is to save power (around 40% of this GPU's power budget is chugged by the G6X alone). That, and they don't clock much above that, so there's no illusion of headroom: my personal card does *exactly* 21 Gbps and not an inch more. Well, maybe just a tiny bit - 1319 MHz according to GPU-Z, instead of 1313.
The 3090 Ti has the updated Micron MT61K512M32KPA-21:U (D8BZC) chip, same as the 4090:
www.techpowerup.com/review/nvidia-geforce-rtx-3090-ti-founders-edition/4.html
www.techpowerup.com/review/nvidia-geforce-rtx-4090-founders-edition/4.html
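For reference, the GPU-Z memory clock maps to the effective GDDR6X data rate by a factor of 16 (PAM4 carries two bits per symbol, doubling GDDR6's usual x8 multiplier); a minimal sketch of that conversion, assuming the 3090's stock 384-bit bus:

```python
# Converting GPU-Z's reported GDDR6X memory clock to effective data rate and bandwidth.
# The x16 multiplier applies to GDDR6X; the 384-bit bus width assumes a stock RTX 3090.

def gddr6x_stats(mem_clock_mhz: float, bus_width_bits: int = 384):
    data_rate_gbps = mem_clock_mhz * 16 / 1000           # per-pin effective data rate
    bandwidth_gbs = data_rate_gbps * bus_width_bits / 8  # total bandwidth in GB/s
    return data_rate_gbps, bandwidth_gbs

for clock in (1219, 1313, 1319):  # ~19.5 Gbps stock, 21 Gbps rated, the GPU-Z reading above
    rate, bw = gddr6x_stats(clock)
    print(f"{clock} MHz -> {rate:.1f} Gbps per pin, {bw:.0f} GB/s on a 384-bit bus")
# 1219 MHz -> 19.5 Gbps, ~936 GB/s
# 1313 MHz -> 21.0 Gbps, ~1008 GB/s
# 1319 MHz -> 21.1 Gbps, ~1013 GB/s
```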
That reminds me, I think time to repaste this card is coming. 3 years of ownership without opening it, the hotspot temps are getting a bit high for my taste :oops:
I guess that is why VRMs need at least as much cooling as the VRAM chips themselves.
I would assume the MVDDC power draw reported is the incoming side of the VRM. As for which side it actually is, I don't know exactly, but that interpretation makes sense to me.
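If that assumption holds, the gap between the input and output side is just the VRM's conversion loss, which is also why the VRM needs its own cooling; a minimal sketch with assumed (not measured) figures:

```python
# Hypothetical numbers to illustrate input-side vs output-side VRM power.
# Neither the 60 W memory draw nor the 90% efficiency is a measured value.

mem_power_out_w = 60.0    # assumed power actually delivered to the memory chips
vrm_efficiency = 0.90     # assumed conversion efficiency of the memory VRM

mem_power_in_w = mem_power_out_w / vrm_efficiency  # what an input-side sensor would report
vrm_loss_w = mem_power_in_w - mem_power_out_w      # dissipated as heat in the VRM itself

print(f"Input-side (sensor) draw: {mem_power_in_w:.1f} W")  # ~66.7 W
print(f"Heat dissipated in VRM:   {vrm_loss_w:.1f} W")      # ~6.7 W
```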
Anyway, consoles have a unified 16 GB pool and a custom OS that doesn't consume as many resources as Windows, nor the applications you'd usually have chugging your RAM. Games are also shipped with settings tuned to the console's capabilities, so they have assets optimized for its format, unlike on PC, where assets tend to emphasize quality or performance instead of a tailored mix of both. Fortunately, a 32 GB RAM kit is affordable nowadays unless you go for high-bin, exotic performance kits with select ICs, so you should buy that instead.
Honestly, given the high prices of the flagships, HBM is beginning to look better for them. An additional 500 to 600 dollars won't bother the buyers of these cards. Also, for laptop GPUs, LPDDR5 would be better than GDDR6 etc. Widen the interface by 2x and you would still save power.
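As a rough illustration of that wider-but-slower trade-off, here is a minimal bandwidth comparison; the 128-bit/256-bit widths and the 16 Gbps GDDR6 vs 8533 MT/s LPDDR5X rates are assumed example figures, not any specific product:

```python
# Peak bandwidth = (bus width in bits / 8) * per-pin data rate.
# All configurations below are illustrative assumptions, not real SKUs.

def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

configs = {
    "128-bit GDDR6 @ 16 Gbps":      (128, 16.0),
    "256-bit LPDDR5X @ 8.533 Gbps": (256, 8.533),  # doubled width, much lower per-pin rate
}
for name, (width, rate) in configs.items():
    print(f"{name}: {bandwidth_gbs(width, rate):.0f} GB/s")
# 128-bit GDDR6:   256 GB/s
# 256-bit LPDDR5X: 273 GB/s -> comparable bandwidth at a much lower per-pin signalling speed
```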
3080 (original 10 GB model) uses 10x 8 Gbit Micron MT61K256M32JE-19:T (D8BGW), rated 19 Gbps
3070 Ti (8x), 3080 12 GB and 3080 Ti (12x) use 8 Gbit Micron MT61K256M32JE-19G:T (D8BWW), rated 19 Gbps
3090 uses 24x 8 Gbit Micron MT61K256M32JE-21 (D8BGX), rated 21 Gbps
3090 Ti and 4090 use 12x 16 Gbit Micron MT61K512M32KPA-21:U (D8BZC), rated 21 Gbps
As of now, other Ada cards use the same chips as the 3090 Ti and 4090, but in lower quantities appropriate to their bus widths.
Which makes the RTX 3090 unique in its extreme memory power consumption: it has the first generation and first revision of the chips, at their highest speed bin, and you actually need to feed 24 of them. It's the worst-case scenario.
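Putting the capacities together makes it clear why the 3090 is the outlier; a quick sketch based on the chip counts and densities listed above:

```python
# VRAM totals from chip count x per-chip density (values from the list above).
# 8 Gbit = 1 GB per chip, 16 Gbit = 2 GB per chip.

cards = {
    "RTX 3080 10 GB": (10, 1),   # 10x 8 Gbit
    "RTX 3090":       (24, 1),   # 24x 8 Gbit, half of them on the back of the PCB
    "RTX 3090 Ti":    (12, 2),   # 12x 16 Gbit, front side only
    "RTX 4090":       (12, 2),   # same 16 Gbit D8BZC chips as the 3090 Ti
}
for card, (chips, gb_per_chip) in cards.items():
    print(f"{card}: {chips} chips x {gb_per_chip} GB = {chips * gb_per_chip} GB")
# The 3090 and 3090 Ti end up with the same 24 GB, but the Ti feeds half as many
# chips, which is where the roughly 30% memory power saving mentioned earlier comes from.
```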
From my understanding, the problem with HBM is that the silicon and the memory must be flawless and cannot be tested until packaged; if there are problems with the substrate, the GPU ASIC, or any of the active HBM stacks, the entire package has to be discarded. This greatly reduces yield and was a cause for concern for AMD with Fiji and the two Vega generations. The Titan V as well - it had a bad/disabled HBM stack (3072 of 4096 bits enabled). It might not be feasible, especially considering that the more affordable products tend to use harvested versions of the higher-end chips, or parts of the chips are simply disabled to maximize yield and profit, as Nvidia has done with the 4090.
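To illustrate the compounding effect described above, here is a minimal yield sketch; the individual yield figures are made-up assumptions purely for illustration:

```python
# Naive model: a packaged HBM product is only good if the GPU die, every HBM stack,
# and the interposer/substrate all come out flawless (no post-package repair assumed).
# All yield percentages below are invented for illustration.

gpu_die_yield = 0.85      # assumed yield of the GPU ASIC itself
hbm_stack_yield = 0.95    # assumed yield per HBM stack
interposer_yield = 0.98   # assumed yield of the interposer/substrate and assembly
num_stacks = 4            # e.g. a 4096-bit part built from four 1024-bit stacks

package_yield = gpu_die_yield * (hbm_stack_yield ** num_stacks) * interposer_yield
print(f"Composite package yield: {package_yield:.1%}")   # ~67.8%
# Salvaging parts with a dead stack (as with Titan V's 3072-bit configuration)
# is one way to recover some of the otherwise-discarded packages.
```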