Monday, January 8th 2024
AMD Announces the Radeon RX 7600 XT 16GB Graphics Card
AMD announced the new Radeon RX 7600 XT graphics card, bolstering its mid-range lineup of 1080p-class GPUs. The RX 7600 XT is designed for maxed-out AAA gaming at 1080p, although it is very much possible to play many titles at 1440p with fairly high settings. You can also take advantage of technologies such as FSR 3 frame generation in games that support it; AMD Fluid Motion Frames in nearly all DirectX 12 and DirectX 11 games; as well as the new, expanded AMD HYPR-RX performance enhancement, which engages a host of AMD innovations such as Radeon Super Resolution, Anti-Lag, and Radeon Boost to achieve a target frame rate.
The Radeon RX 7600 XT is based on the same 6 nm "Navi 33" silicon, and the latest RDNA 3 graphics architecture, as the Radeon RX 7600. If you recall, the RX 7600 had maxed out all 32 CU on the silicon. To create the RX 7600 XT, AMD retained "Navi 33," doubled the memory size to 16 GB, and increased the clock speeds. The 16 GB of memory is deployed across the same 128-bit wide memory bus as the 8 GB on the RX 7600. The memory speed is unchanged, too, at 18 Gbps (GDDR6-effective), as is the resulting memory bandwidth of 288 GB/s. There are two key changes: GPU clock speeds and power limits. The Game Clock of the RX 7600 XT is set at 2.47 GHz, compared to 2.25 GHz on the RX 7600; and the maximum Boost Clock is set at 2.76 GHz, compared to 2.66 GHz on the RX 7600. To support these, and improve boost clock residency, AMD increased the total board power (TBP) to 190 W, up from 165 W on the RX 7600. As a result, custom-design RX 7600 XT graphics cards will feature two 8-pin PCIe power connectors, or at least a combination of 6-pin and 8-pin, whereas the RX 7600 makes do with just one 8-pin.
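For readers who want to verify the math, the 288 GB/s figure follows directly from the bus width and data rate. A minimal Python sketch (the function name is ours, purely illustrative):

```python
# Minimal sketch of GDDR6 bandwidth math (names are illustrative, not AMD's).
def gddr6_bandwidth(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s: bus width (bits) x data rate (Gbps) / 8 bits per byte."""
    return bus_width_bits * data_rate_gbps / 8

# The RX 7600 and RX 7600 XT share the same memory subsystem:
print(gddr6_bandwidth(128, 18.0))  # 288.0 GB/s, matching AMD's quoted figure
```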
Another small change with the RX 7600 XT is that board partners are mandated to wire out DisplayPort 2.1 on their custom boards (using the required clock drivers and other ancillaries); they cannot opt for DisplayPort 1.4 to save costs.
The 6 nm "Navi 33" silicon physically features 32 RDNA 3 compute units (CU), adding up to 2,048 stream processors, 64 AI accelerators, 32 Ray accelerators, 128 TMUs, and 64 ROPs. A 32 MB Infinity Cache memory cushions the 128-bit GDDR6 memory interface, which on the RX 7600 XT drives 16 GB, running at 18 Gbps.Thanks to the increase engine clocks, the RX 7600 XT is shown posting a proportionate increase in performance across popular titles at 1080p with maxed out settings, including ray tracing. The RX 7600 XT is shown posing a near doubling in performance over the GeForce RTX 2060 6 GB. The RX 7600 XT is also shown offering playable frame rates at 1440p with max settings (albeit without ray tracing). AMD is making the case for 16 GB with creator and generative AI applications, where the large video memory should come very handy.
The AMD Radeon RX 7600 XT will be available on January 24, 2024. It is exclusively a partner-driven launch; there will be no reference design in the retail market. AMD set $329 as the baseline price for the RX 7600 XT, a $60 premium over the RX 7600.
51 Comments on AMD Announces the Radeon RX 7600 XT 16GB Graphics Card
Also, $329 USD is about $440 CAD at current exchange rates.
Also, I love that they're using figures with frame interpolation on. Nvidia paved the way for this horrid practice; at least now that they both do it, consumers are equally misled.
The additional 8 GB of VRAM is cool, but we have yet to witness more than two games where it matters at this level of raw performance.
On top of that, the 4060 is:
• Cheaper than $330;
• Capable of DLSS;
• Capable of CUDA workloads;
• Much more energy efficient;
• Usually much more compact.
I don't see a reason for this 7600 XT to be interesting to anyone who is not in the market for the cheapest 16 GB GPU for their professional workloads. At $330, you can get a 6700 XT, or even a 6750 XT, both of which are much faster in gaming.
The 7600 XT is a nonsensical, gluttonously overpriced, and unbalanced release. It's even worse than the 16 GB version of the 4060 Ti, because nVidia had zero competition at the $500 price point when that card was released. Now the 4060 is competition to consider, yet AMD is behaving like the 4060 doesn't exist.
Every 8GB card that gets released, including the 7600, gets dumped on for not having enough VRAM. Then a 16GB version comes out and gets dumped on for having too much VRAM. I get that 16GB is more framebuffer than the chip can make full use of, but there's no in-between available: AMD can't just make a 12GB card without reconfiguring the memory bus. So they're stuck with "extra" memory chips on the BOM, and they're not about to leave that cost out of the price just because the chips are largely superfluous.
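For anyone wondering why 12 GB isn't an option here, a rough sketch of the constraint, assuming standard 32-bit GDDR6 channels and 2 GB (16 Gbit) chips (the helper name is ours, purely illustrative):

```python
# Illustrative sketch: valid GDDR6 capacities for a given bus width,
# assuming one 2 GB (16 Gbit) chip per 32-bit channel.
def valid_capacities_gb(bus_width_bits: int, chip_gb: int = 2) -> list[int]:
    channels = bus_width_bits // 32    # number of 32-bit memory channels
    normal = channels * chip_gb        # one chip per channel
    clamshell = 2 * normal             # two chips per channel (clamshell mode)
    return [normal, clamshell]

print(valid_capacities_gb(128))  # [8, 16] -- no 12 GB option on a 128-bit bus
print(valid_capacities_gb(192))  # [12, 24] -- 12 GB would need a 192-bit bus
```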
Pricing will sort itself out. It generally does. Eventually.
The 7600 XT has a 190 W power limit. That's 115% of the 7600's 165 W.
Performance uplift from cranking the power limit up has never been linear; you can safely halve that figure to get the best-case scenario, thus ~7.5 percent more speed, especially considering RDNA3 GPUs are already way beyond their efficiency sweet spot.
97% + 7.5% of 97% ≈ 104.3% of 4060 performance. Best case. The realistic case is 99 to 101%, depending on the exact make.
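Spelling out that back-of-the-envelope estimate (the 97% baseline and the halve-the-power-gain rule are the poster's assumptions, not measured data):

```python
# Poster's back-of-the-envelope estimate, written out.
base = 0.97                      # assumed: RX 7600 at 97% of RTX 4060 performance
power_gain = 190 / 165 - 1       # +15.2% power limit over the RX 7600
speed_gain = power_gain / 2      # assume at best half converts to speed (~7.6%)
print(base * (1 + speed_gain))   # ~1.043, i.e. ~104% of RTX 4060, best case
```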
We also don't know for sure whether this isn't an RTX 3090-like case, because if it is, it's double the VRAM chips, thus double the VRAM power consumption, and thus even less chance for the 7600 XT to be fast enough. Don't underestimate the sheer number of guys with Aerocool VX / Perdoon / KCAS / Power Man PSUs, and those with suboptimal PC cases. I personally dump on this GPU for being last-gen-level slow. 16 GB or 8 GB, this GPU will be a bad purchase at 330 USD. AMD could easily cut the 7700 XT down to 48 CUs, cut clocks a little, and call it a day. They were one and a half generations behind in 2018. Now they are THREE generations behind (lackluster RT + worse upscaler + worse power efficiency; the difference is bigger than 6 years ago), and they are acting like nVidia's products released after 2018 don't exist. If our retail prices made any sense I'd go for a 40-series GPU (prices are currently at the same much-higher-than-MSRP-plus-VAT level for both AMD and nVidia GPUs).
nVidia, by vastly overpricing, offered a hangar-sized room for AMD to slot their GPUs into. AMD made GPUs slightly faster in raster and much worse at everything else, and is pretending that selling them for the same price is completely fine.
7600 Launch: $269
7600 XT Launch: $329
4060 Ti Launch: $399
4060 Ti 16GB Launch: $499
The one thing I don't understand is who these cards are for. They're the modern-day entry to 1080p60 gaming, but the last time I had anything on a 64- or 128-bit bus fit for gaming, we were still in the DirectX 9 era, and narrow buses are where cards like this tend to suffer the most. Perhaps it's a compromise for those who enjoy both gaming and AI training.
By the time games gain significantly higher detail, the computational load and bandwidth requirements will grow even more, slowing the frame rate to a crawl long before you get to see games make sensible use of this extra memory. The RX 7600 (XT) isn't particularly powerful by today's standards, and it certainly won't be 3-5 years down the road. Slapping 16 GB of VRAM on this card isn't going to extend its useful life as a gaming card.
The Term "Future Proof" should never be considered when building a Gaming PC. Its a gimmick, pointless endeavor as you stated.
What I prefer is strategic Gaming PC custom building. Acquire a feature rich motherboard with (1) CPU upgrade pathway & (1) Ram upgrade pathway (16GB to 32GB? or 32GB to 64GB). GPUs can always be swapped in and out, M.2 + SSD drives can always be swapped in and out for newer versions.
RDNA3 looks great, BUT I am not sure what AMD is thinking here. This GPU makes no sense with 16GB; it would have been better off with 8GB or 12GB and a lower price tag. What also boggles the mind is how CLOSE the RX 7700 XT and 7800 XT are to each other. Is it because AMD is having issues squeezing higher performance out of RDNA3? I probably don't need to upgrade my RX 6700 XT, as it pounds any AAA game at 1440p with max PQ and very playable FPS. But the 7700 XT is overpriced because it's so close in performance to the 7800 XT, and both are considered 1440p GPUs. :confused: lol AMD pulling an Nvidia again?
Remember, the question is not "can a 7600-class card use 16GB of VRAM"; the question is actually "can a 7600-class card use more than 8GB of VRAM".
Also, the bandwidth and compute requirements only increase if you stick to the same high/ultra preset. If you allow yourself to use lower presets as games get more demanding, then you won't exceed your compute budget. The realistic minimum target for a game is going to be the PS5 / Series X, so if your GPU has more compute than those consoles, you should always be able to tune it to hit playable settings, at least until games no longer target the PS5 / Series X, which is a good 5-6 years away yet.
This is what made the RX480 8GB / RX580 so good: their GPU performance and VRAM were enough to offer a much better-than-console experience until pretty recently. It has only been since we started seeing fewer cross-gen games on PS5 / Series X that they have fallen behind. The 7600XT is going to be similar, although for a true sub-$300 GPU to really hit that sweet spot, we will probably be waiting for the next generation.
VRAM is heavily compressed on the fly, and its contents change constantly with dynamic tiled rendering. The way to check whether you need more is to see whether performance plummets. If it keeps scaling (e.g. with an OC), then VRAM size isn't the issue. Just look at the RTX 4060 Ti 16 GB, which has shown us how pointless slapping extra VRAM on a card really is. Its bigger brothers (4070 and up) manage to scale well with "just" 12 GB, and there isn't a time coming when the extra VRAM is going to let the 4060 Ti 16 GB outperform cards from a higher tier. So exactly when is the investment in VRAM going to pay off? :rolleyes:
If you're going to make actual use of VRAM during a frame, then you're always going to need bandwidth and computational power to go along with it. It doesn't matter if it's a current game or a game coming 5 years from now; this basic fact isn't going to change.
The 7600 XT 16 GB, with its 288 GB/s of bandwidth, could theoretically access at most 4.8 GB during a single frame at 60 FPS, or 2.4 GB at 120 FPS, assuming 100% utilization of the memory bus (which is extremely unrealistic). So in reality, the only way to make use of such a large VRAM pool is to have a lot allocated that isn't actually needed in the immediate future. By the time you need 16 GB to play games at your desired settings on this card, it's going to perform far below 60 FPS. Keep in mind, this card isn't a great performer even by today's standards.
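The per-frame numbers above follow directly from the bandwidth figure; a minimal check (the function name is illustrative):

```python
# Upper bound on VRAM touchable per frame: bandwidth divided by frame rate,
# assuming (unrealistically) 100% memory bus utilization.
def max_gb_per_frame(bandwidth_gb_s: float, fps: float) -> float:
    return bandwidth_gb_s / fps

print(max_gb_per_frame(288, 60))   # 4.8 GB per frame at 60 FPS
print(max_gb_per_frame(288, 120))  # 2.4 GB per frame at 120 FPS
```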
The "future-proofing" with VRAM argument always comes down to arguing for a theoretical utopia. Is the extra VRAM giving you extra performance? No. Will the extra VRAM mean the card will perform well for 2 years longer than without it? No. It's probably a combination of surplus GPUs of certain bins, and the need for some media attention. We know they have released "pointless" products in the past when they need something to show while the next generation gets ready.
The 7600 XT would certainly be more interesting if it were a more cut-down Navi 32 (something like 160-bit, 10 GB, and ~3,072 cores), but the reality is they probably don't have a surplus of bins matching that.
So at 1440p the 4060Ti 16GB can use RT if you can accept 30FPS. The 8GB card can't, and neither can the 3070 Ti, which is usually a much faster GPU.
Here the 4060Ti can offer path tracing at 1080p at just over 30 fps, which the objectively faster 3070 Ti can't manage, and neither can the 4060Ti 8GB.
Now, in these two cases I don't think 16 GB will help the 7600 XT much, because AMD is that much worse at RT. It would surely have helped the 3070 and 3070Ti, though. I mean, why is the 3060 12GB ahead of the 3070 in this test? Sure, both are "unplayable", but the 3060 12GB should be more unplayable, because it has less compute and less bandwidth.
Or here at 1080p in Ratchet and Clank.
The 4060Ti 16GB is ahead of the 3070Ti and offers a much better experience than the 4060Ti 8GB. In this title I would not be at all surprised if the 7600 XT managed to comfortably exceed 60fps, which the current 7600 just can't do.
Or here in RE4 at 1440p with RT.
The 3060 12GB actually offers a playable, console-like experience, which for $330 is quite okay; the 3070 crashes, when with 16GB of VRAM it would be a 60+ FPS experience. I expect the 7600 would also crash if it were tested here, but the 7600XT would probably be in and around 60 FPS in this example.
One thing these bar charts don't tell you is how bad texture swapping is on 8GB cards. Sure, the FPS bar might look fine, but does the IQ hold up while playing, or are you looking at an awful lot of low-resolution textures while the proper ones load in after a quick camera pan, because the card can't keep up?
So sure, you can take your numbers and theorycrafting about why 16GB won't matter, and I will just look at the observable evidence we have. To point it out again: a title does not need to use 16GB of VRAM for the XT to show an advantage; a title just needs to use 9GB, maybe even 8.5GB, and the advantage will show, either in smoother frame rates or in less texture pop-in and better IQ.
EDIT:
Purely academic of course, because only the 4090 is playable, but despite the massive compute shortcomings of the 4060Ti 16 GB, it is faster than the 4070 and 4070 Ti. I wasn't going to show this chart for the very reason that nothing is remotely playable, but it does show the stark difference between being compute-bound and VRAM-bound. With a VRAM bind you just hit a wall, and usually the only setting you can change to improve it is textures, or in this case turning path tracing / RT off. With compute binds there are more settings you can turn down to get where you need to be.
Additional computing power, however, helps in 100% of games in 100% of scenarios. In VRAM-bound scenarios, of course, it helps far less, but once again, there are fewer than 10 games out there exhibiting such behaviour. I agree with the XT needing more than 8 GB, but making it 6700-like (36 CUs instead of 32, and 10 GB on a 160-bit bus instead of 8 GB on 128-bit), of course with at least 10% higher frequencies than the 6700 non-XT, would solve both the 8 GB issue and the low general speed issue. At 32 CUs and 16 GB over a very slender 128-bit bus... it's an exceptionally niche product, to say the very least.
With fewer and fewer games catering to the PS4, and more and more using RT, if your intention is to buy a GPU at a good price and ride it out until the next generation of consoles, then the 7600XT is the first GPU at a reasonable MSRP with enough VRAM and enough compute power to make that possible. The 6700XT is a good alternative right now due to it being on sale, but that depends on region. The 8GB variant won't be able to manage it.
Yes, it would be better if the 7600XT were a further cut-down N32 rather than an OC'd N33 with double the RAM, but a cut-down N32 variant would very likely cost closer to $400, so it would be less appealing on that front.
Speaking of RT titles, Avatar: Frontiers of Pandora is well below 60 FPS on the RX 6700 XT, and yes, we're talking plain old 1080p resolution. At 4K, its VRAM buffer is still nowhere near maxed out, but the GPU only manages a tad above a dozen FPS. Things are even worse for the RX 7600, which is under 40 FPS at 1080p and 13 FPS at 4K. Cyberpunk 2077 is also unplayable on both GPUs if we enable RT to the point where it eats at least half their VRAM. Control? Unplayable even at 1080p. Software ray-traced titles like Resident Evil 4 Remake? Yes, there is an obvious deficit if we're talking 8 GB VRAM GPUs, yet RT is a total gimmick in this particular game.
Even the 4060 isn't bought for RT, despite handling it way better than BOTH aforementioned GPUs. Below 500-600 USD, gamers don't expect RT to be handled well. And even without RT, pure raster performance is in the gutter way before VRAM limitations kick in. Remember the GTX 1060 6 GB launch? Remember the GTX 980? These cards are tied both in scenarios where 4 GB is enough and in scenarios where it is not. The 7600 is way slower than the 3070 Ti, and it can't reasonably saturate its own 8 GB, let alone 16 GB. A couple of titles with VRAM-hogging habits don't count. Horsepower requirements always grow first. That's where the RX 7900 GRE belongs. Unfortunately, AMD isn't aware of this yet.
Sure, you will need to make compromises, but you should be able to maintain console-like IQ with more FPS, or the same FPS with higher IQ; sometimes you will even be able to turn on RT if the game offers a console-like RT setting (how worthwhile it is to turn on is up to the user).
Neither of these parts will hit VRAM walls at certain settings combinations the way the 3070/3070Ti and other more powerful 8GB cards do. Just look at the Ratchet and Clank example at 1080p: the 6700 XT is 2x faster than the 7600, and the 6700XT is nowhere near 2x the compute performance. I fully expect the 7600XT to hit more than 60 fps in that game at those settings, maybe even 70+, depending on how the VRAM vs. compute vs. bandwidth bottlenecks play out.