Sunday, August 6th 2023
AMD Retreating from Enthusiast Graphics Segment with RDNA4?
AMD is rumored to be withdrawing from the enthusiast graphics segment with its next RDNA4 graphics architecture. This means there won't be a successor to its "Navi 31" silicon that competes with NVIDIA at the high end, but rather one that competes in the performance segment and below. It's possible AMD can't justify the cost of developing high-end GPUs when it can't move enough volume over the product lifecycle. The company's "Navi 21" GPU benefited from the crypto-currency mining swell, but just like NVIDIA, AMD isn't able to move enough GPUs at the high end.
With RDNA4, the company will focus on specific segments of the market that sell the most, which would be the x700-series and below. This generation will be essentially similar to the RX 5000 series powered by RDNA1, which did enough to stir things up in NVIDIA's lineup, and trigger the introduction of the RTX 20 SUPER series. The next generation could see RDNA4 square off against NVIDIA's next-generation, and hopefully, Intel's Arc "Battlemage" family.
Source:
VideoCardz
363 Comments on AMD Retreating from Enthusiast Graphics Segment with RDNA4?
I actually think innovation, possibly photonics, will extend the viability of silicon far beyond the Angstrom era, given the vast amount of working processes already developed for silicon fabrication.
Asus has recently shown an NVMe drive attached to a GPU's PCB, transmitting data over the PCIe link. By extension, it's enough to install a USB4 controller on the GPU's PCB to inject USB and PCIe data flowing to/from the USB-C port alongside the DisplayPort video data.
The RTX 4090 is the undisputed king at the moment. An expensive king, true, and price is the only weapon left to these zealots, who deny the army of technologies that increase the value of an NVIDIA video card. Anyway, their desperation can be seen in how they use an AMD-sponsored game to hide the drama from the others.
Something funny that stuck in my mind from a well-known compatriot's review of the RTX 4060 Ti / RX 7600:
4060 Ti - driver installation and running games without problems.
7600 - driver installation and... error, error, error. It was solved after several wasted hours, including reinstalling the OS.
He also found that he had to use NVIDIA software to capture the AMD card's results in games without a built-in benchmark utility, because AMD offers nothing similar.
I extracted his results from the two reviews because he "omitted" to compare them; they were too unfavorable to AMD. How do you compare the 34 fps obtained by the RX 7600 in Cyberpunk (an iGPU-level disaster) with the 111 fps obtained by the 4060?
The video cards were tested only at 1080p.
The sources are here
The new thing would be installing a USB4 controller on the GPU's PCB, so that the USB-C port carries not only DP video data but other protocols too. We have never had this solution on a GPU.
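For a rough sense of what tunneling USB and PCIe alongside DisplayPort over one USB-C port implies, here is a minimal Python sketch of the bandwidth budget, assuming the commonly cited ~40 Gb/s raw rate of a USB4 Gen 3x2 link and ~25.92 Gb/s of effective four-lane DP 1.4 HBR3 payload; the figures and the simple subtraction are illustrative assumptions, not a description of any shipping GPU or controller.

```python
# Illustrative back-of-the-envelope budget for tunneling DisplayPort plus
# USB/PCIe traffic over a single USB-C link, as the comment above describes.
# The figures are commonly cited approximations, not measurements of any
# specific GPU or USB4 controller.

USB4_GEN3X2_GBPS = 40.0        # raw USB4 Gen 3x2 link rate (two 20 Gb/s lanes)
DP_HBR3_PAYLOAD_GBPS = 25.92   # effective 4-lane DP 1.4 HBR3 payload after 8b/10b

def leftover_for_usb_and_pcie(link_gbps: float, dp_gbps: float) -> float:
    """Bandwidth left for tunneled USB/PCIe traffic after the DP video stream."""
    return max(link_gbps - dp_gbps, 0.0)

if __name__ == "__main__":
    spare = leftover_for_usb_and_pcie(USB4_GEN3X2_GBPS, DP_HBR3_PAYLOAD_GBPS)
    print(f"Roughly {spare:.1f} Gb/s left for USB/PCIe tunnels "
          f"alongside a full HBR3 video stream.")
```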
Sadly, on the MEG Z690 Ace the USB-C port that carries a video signal doesn't work with HDMI. I bought a USB-C to HDMI dongle on Amazon to use after I sold my 3090 and had the 4080 on the way; unfortunately it did not work at all... then I read in the manual that it only supports DisplayPort. I guess even us nerds need to RTFM sometimes. Fortunately, it's not all a waste: apparently it works just fine with my laptop, and provides a way to use the integrated Radeon graphics with an external display, as the laptop's native HDMI port is wired directly to the RTX 3050.
They also really need their FG tech to be as good as DLSS 3. I just wish NVIDIA would toss us 3000-series owners a bone with FG... but I guess I'll have to wait for the AMD/Intel solutions...
At TSMC, Apple always has priority and currently uses 90% of 3nm capacity. Plus, for the first time, they don't pay for defective dies, which suggests 3nm yields are lower than expected, perhaps 65-70% at the moment.
At Samsung, yields on the GAAFET 3nm node are unknown. It's unclear whether they're at 50 or 60% currently, which is low. Nvidia really had problems with them on 8nm.
Everybody wants to be on a cutting-edge node for the most advanced products, but it takes a few years to improve yields towards 90%. It's a painfully slow process...
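To make the yield-maturation point concrete, here is a minimal Python sketch using the classic Poisson yield approximation Y = exp(-A * D0); the die area and defect densities are invented for illustration and are not published figures for any foundry's 3nm process.

```python
import math

# Classic Poisson die-yield approximation: Y = exp(-area * D0).
# The defect densities (D0) below are invented purely for illustration; they
# are not published figures for TSMC's or Samsung's 3 nm processes.

def die_yield(die_area_cm2: float, defects_per_cm2: float) -> float:
    """Expected fraction of defect-free dies under a Poisson defect model."""
    return math.exp(-die_area_cm2 * defects_per_cm2)

DIE_AREA_CM2 = 4.0  # a hypothetical ~400 mm^2 high-end die

for label, d0 in [("early ramp", 0.20), ("a year later", 0.10), ("mature", 0.03)]:
    print(f"{label:>12}: D0 = {d0:.2f}/cm^2 -> yield ~ {die_yield(DIE_AREA_CM2, d0):.0%}")
```

With these made-up numbers the same die goes from roughly 45% to nearly 90% yield as defect density falls, which is why the ramp takes years.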
It makes me wonder if there's a situation where it works fine. Like, starting fresh with AMD and then going NVIDIA, does it work fine? Or does starting fresh with NVIDIA and then going AMD work fine? I've never seen experimentation on this; maybe Guru3D has some tests. I'm pretty sure I've seen a performance summary somewhere in Adrenalin. Aren't a ton of tests made without any manufacturer software anyway?
People need to realise that Apple pays in advance, effectively funding entirely new fabs for TSMC's next-best node. No one else has that kind of money. Perhaps Nvidia in two years.
AMD has secured 3nm for several Zen 5 products, such as the Turin and Turin Dense CPUs. Server takes priority for the latest and greatest nodes.
There is currently no major disparity between AMD, Nvidia and Intel in process node. Intel is behind in server chips; Nvidia is ahead in AI chips. Yes, AMD can get a higher yield of chiplets per wafer due to their smaller size, but a 3nm wafer itself is so much more expensive at the moment that only Apple can afford the capacity it booked and paid for two years ago.
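As a rough illustration of the chiplet-yield point above, the sketch below compares a hypothetical large monolithic die against small chiplets of the same total area under the same Poisson yield model; the die sizes, defect density and gross-die estimate are assumptions for illustration, and wafer-edge and scribe-line losses are ignored.

```python
import math

# Rough comparison of good dies per wafer: one large monolithic GPU die vs.
# several small chiplets covering the same total area, under the same Poisson
# yield model. All numbers are illustrative assumptions, not foundry data.

WAFER_AREA_MM2 = math.pi * 150 ** 2     # 300 mm wafer; edge losses ignored
D0_PER_CM2 = 0.1                        # assumed defect density

def poisson_yield(die_area_mm2: float) -> float:
    """Fraction of defect-free dies for a given die area."""
    return math.exp(-(die_area_mm2 / 100.0) * D0_PER_CM2)

def good_dies_per_wafer(die_area_mm2: float) -> float:
    """Crude gross-die estimate times yield (no edge or scribe-line losses)."""
    return (WAFER_AREA_MM2 // die_area_mm2) * poisson_yield(die_area_mm2)

monolithic_gpus = good_dies_per_wafer(600.0)       # one 600 mm^2 die per GPU
chiplet_gpus = good_dies_per_wafer(75.0) / 8       # eight 75 mm^2 chiplets per GPU

print(f"Good monolithic GPUs per wafer:    ~{monolithic_gpus:.0f}")
print(f"Good chiplet-based GPUs per wafer: ~{chiplet_gpus:.0f}")
```

Under these assumptions the chiplet approach yields noticeably more usable GPUs per wafer, but it does nothing about the wafer price itself, which is the point being made above.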