Friday, June 10th 2022

AMD RDNA3 Offers Over 50% Perf/Watt Uplift Akin to RDNA2 vs. RDNA; RDNA4 Announced

AMD, in its 2022 Financial Analyst Day presentation, claimed that it will repeat the over-50% generational performance-per-Watt uplift with the upcoming RDNA3 graphics architecture. It was exactly this kind of uplift, the roughly 50% performance-per-Watt gain of RDNA2 over RDNA, that enabled AMD Radeon's unexpected return to the high-end and enthusiast market segments. The company also broadly outlined the new features of RDNA3 that should make this possible.

To begin with, RDNA3 debuts on the TSMC N5 (5 nm) silicon fabrication node, and will introduce a chiplet-based approach that's somewhat analogous to what AMD did with its 2nd Gen EPYC "Rome" and 3rd Gen Ryzen "Matisse" processors. The GPU's main number-crunching and 3D rendering machinery will be spread across chiplets, while the I/O components, such as memory controllers, display controllers, and media engines, will sit on a separate die. Scaling up the number of logic dies results in a higher-segment ASIC.
AMD also stated that it has re-architected the compute unit with RDNA3 to increase its IPC. The graphics pipeline is bound to get certain major changes, too. The company is doubling down on its Infinity Cache on-die cache memory technology, with RDNA3 featuring the next-generation Infinity Cache (which probably operates at higher bandwidths).

From the looks of it, RDNA3 will be exclusively based on 5 nm, and the company announced, for the very first time, the new RDNA4 graphics architecture. It shared no details about RDNA4, except that it will be based on a more advanced node than 5 nm.

AMD RDNA3 is expected to debut in the second half of 2022, with ramp across 2023. RDNA4 is slated for some time in 2024.
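To put that claim in concrete terms, here is a rough back-of-the-envelope sketch; the 300 W baseline is an illustrative assumption, not an AMD figure:

```python
# Rough sketch of what a >50% performance-per-Watt uplift implies.
# The baseline numbers are illustrative assumptions, not AMD's figures.

baseline_power_w = 300        # assumed Navi 21-class board power
baseline_perf = 100           # arbitrary performance index
perf_per_watt_gain = 1.5      # the ">50%" generational claim

# At the same power budget, performance scales with the perf/Watt gain.
perf_at_same_power = baseline_perf * perf_per_watt_gain
print(f"Same {baseline_power_w} W budget: ~{perf_at_same_power:.0f} (was {baseline_perf})")

# At the same performance, power drops to roughly two-thirds.
power_at_same_perf = baseline_power_w / perf_per_watt_gain
print(f"Same performance: ~{power_at_same_perf:.0f} W (was {baseline_power_w} W)")
```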

121 Comments on AMD RDNA3 Offers Over 50% Perf/Watt Uplift Akin to RDNA2 vs. RDNA; RDNA4 Announced

#51
Valantar
siluro818Obviously I cannot know this for certain, but it's gotta be all about the yields. Navi 22 is only in 6700XT because they clearly have near perfect yields on that chip. Meanwhile Navi 21 6900XTs outnumber the 6800s by a factor of [I don't know how much exactly, but it must be a lot lol] for the exact same reasons. Had they known how few failed 21s they'd be getting, they would have designed a Navi that slots in between the current 21s and 22s, but since mid-range is where AMD usually gets their sales, and with all the supply shortages that were rearing their heads at the time, they just doubled down on the smaller chips and called it a day.
That's exactly why I think the segmentation doesn't make sense. It's clear they veered pessimistic in their initial yield estimates, but even accounting for that, dropping all the way to 40 CUs for Navi 22 from the 60 CU low-end bin of Navi 21 doesn't add up to me. If I were to guess, it might be that they were planning for Navi 22 to be a massive volume push for mobile, which would have made a larger die expensive. But then that didn't materialize either - there aren't many 6700M/6800M/6850M XT laptops out there. And then there's Navi 23, which is just weirdly close - just 8 CUs less? - leaving no room for cut-down Navi 22 anywhere. I mean, TSMC 7nm yields were pretty well known - AMD had been using the process for a couple of years at that point. Were they expecting to have tons of faulty dice for every SKU? Was the plan all along for Navi 22 to be some quickly forgotten in-between thing that nobody really cares about? Seems like such a waste to me. Even just adding 8 CUs to Navi 22 would have made it a lot more flexible, allowing for a more competitive 6700 XT and a cheaper, lower-power 6700 if they had wanted it. Such a weird decision.
ARFCutting the large dice into ever smaller slices is a production-cost reduction, not an IPC- or performance-related exercise.
But it may have a slight background advantage of improving the qualities of the overall solution; my guess is not more than 5% overall.
I mean better thermals and improved management of the integrated parts.
It's only really a production cost reduction if it allows for small enough chips to noticeably increase wafer area utilization (i.e. reducing the area used on incomplete dice etc.) or if it improves yields (and at least TSMC 7nm yields are near perfect at this point). It won't meaningfully affect thermals or anything else, and of course there's always additional interconnect power with an MCM solution. The main advantage is optimization and splitting a large, complex die into smaller, simpler chiplets.
ARFThe audio processors should be cut altogether; I don't understand why GPUs must include an audio device which costs transistors on the dice.
Are you actually arguing for cutting basic functionality that GPUs have had for more than a decade? That's a bad idea. HDMI and DP both carry audio signals, and tons of people use audio outputs (either speakers or headphone connectors) on their monitors/TVs. And the transistor cost of audio processing is tiny compared to essentially everything else. This really isn't worth caring about, and cutting it would piss a lot of people off with minimal gains to show from it.
Posted on Reply
#52
ARF
ValantarAre you actually arguing for cutting basic functionality that GPUs have had for more than a decade? That's a bad idea. HDMI and DP both carry audio signals, and tons of people use audio outputs (either speakers or headphone connectors) on their monitors/TVs. And the transistor cost of audio processing is tiny compared to essentially everything else. This really isn't worth caring about, and cutting it would piss a lot of people off with minimal gains to show from it.
It causes driver-related issues, and yes, I have never used the graphics card's integrated audio. I have a normal audio processor integrated on the motherboard.
Posted on Reply
#53
konga
ARFCutting the large dice into ever smaller slices is a production-cost reduction, not an IPC- or performance-related exercise.
But it may have a slight background advantage of improving the qualities of the overall solution; my guess is not more than 5% overall.
I mean better thermals and improved management of the integrated parts.

The audio processors should be cut altogether; I don't understand why GPUs must include an audio device which costs transistors on the dice.
Well, the point I'm making is that MCM allows them to do 800+ mm2 dies in a way that's more practical for them, and it could even allow them to do GPU sizes that were previously impossible. We probably won't see it for RDNA3, but don't be surprised if you see 1000+ mm2 GPUs eventually. Again, it's just a means for them to put more silicon on the card, which has its pros when it comes to power efficiency.
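As a rough illustration of that point, here is a toy Poisson yield model; the defect density and die areas below are made-up assumptions, just to show the shape of the effect:

```python
import math

# Toy yield model: P(die has no killer defects) ~ exp(-D0 * area).
# D0 and the die areas are illustrative assumptions, not foundry data.
d0_per_mm2 = 0.1 / 100        # 0.1 defects per cm^2, expressed per mm^2

def die_yield(area_mm2: float) -> float:
    return math.exp(-d0_per_mm2 * area_mm2)

monolithic = 800    # one huge monolithic GPU die (mm^2)
chiplet = 400       # one of two smaller compute chiplets (mm^2)

print(f"{monolithic} mm^2 monolithic die: {die_yield(monolithic):.0%} good")
print(f"{chiplet} mm^2 chiplet:          {die_yield(chiplet):.0%} good")
# Each defective chiplet wastes 400 mm^2 of wafer instead of 800 mm^2,
# and the total silicon per card can exceed what a single reticle allows.
```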

And please don't cut the audio. I actually use it! Almost everyone who HDMI-outs to a TV uses GPU audio.
Posted on Reply
#54
ARF
kongaWell, the point I'm making is that MCM allows them to do 800+ mm2 dies in a way that's more practical for them, and it could even allow them to do GPU sizes that were previously impossible. We probably won't see it for RDNA3, but don't be surprised if you see 1000+ mm2 GPUs eventually. Again, it's just a means for them to put more silicon on the card, which has its pros when it comes to power efficiency.
Yeah, AMD does not want to reach or even think about the absolute production limit - the reticle size. Nvidia, on the other hand, does it almost every generation.
kongaAnd please don't cut the audio. I actually use it! Almost everyone who HDMI-outs to a TV uses GPU audio.
I have never succeeded in making it work - there is always silence.
Posted on Reply
#55
InVasMani
ValantarSorry, but what are these numbers you're working with? Are you inventing them out of thin air? And what's the relation between the different numbers? You also seem to be mixing power and performance? Remember, performance (clocks) and power do not scale linearly, and any interconnect will consume power. You're making this out to be far simpler than it is. Other than that all you're really saying here seems to be the age-old truism of wide and slow chips generally being more efficient. And, of course, you're completely ignoring the cost of using two dice to deliver the performance of one.

What? A 1/6th/16.67% area reduction from a node change will be 16.67% no matter how large your die, no matter how many of them you combine. A percentage/fractional reduction in area doesn't add up as you add parts together - that number is relative, not absolute.

It's absolutely possible that an MCM approach can allow for power savings, but only if it allows for larger total die sizes and lower clocks. Otherwise it's no different from a monolithic die, except for the added interconnect power. And, of course, larger dice are themselves a fundamental problem when per-transistor costs are no longer dropping noticeably, which is leading to rapidly rising chip prices.

Again, this isn't accurate. A GPU die has its heat very evenly spread across the entire die (unlike CPUs which are very concentrated), as most of the die is compute cores. Spreading this across two dice won't affect thermals much, as both dice will still be connected to the same cooler - it's not like you're running them independently of each other. Assuming the same power draw and area for a monolithic and MCM solution, the thermal difference between the two will be minimal. And, crucially, you want the distance between dice on package to be as small as possible to keep latencies low.

Fans generally run directly off 12V and don't rely on VRMs on the GPU, just a fan controller IC sending out PWM signals (unless the fans are for some reason controlled through voltage, which is rather unlikely).


Idk, I think the truth is somewhere in the middle. Both chips have distinct qualities and deficiencies. The 6800 is fantastically efficient; the 6700 XT gets a lot of performance out of a relatively small die. Now, the 6700 XT is indeed rather poor in terms of efficiency for an RDNA2 chip, but it still beats out the majority of Ampere GPUs, so ... meh. (The 6500XT is another matter entirely.)

I still can't wrap my head around AMD's RDNA2 segmentation though. The 16-32-40-80CU lineup just doesn't make sense IMO, and kind of forced them to tune the 6700XT the way they did. 20-32-48-80 or something like that would have made a lot more sense. It's also weird just how few SKUs Navi 22 has been used in overall.
The wattage was just an example figure to work with for illustrative purposes, to show how things tie together. It's irrelevant; AMD can scale things however they see fit, pretty linearly or non-linearly, for any given aspect of the design, up to a point, barring legitimate limitations like silicon area.

I was thinking more along the lines of a 100 W chip split into two 50 W chips and a 50% efficiency uplift on each. If you take a 50% efficiency uplift on each chip, you end up at 2x performance at the same wattage. There's approximately a 1/6 die-size reduction as well from 6 nm to 5 nm, so if that were linear (and it's not exactly that simple), you'd end up with each 50 W part consuming more like 41.66 W instead, or 83.34 W in total. That leaves about 16.66 W to account for in efficiency or performance per watt.

You also have double the Infinity Cache, and we've seen measurable gains in efficiency through that. Plus AMD indicated architectural optimization improvements over RDNA2, whatever that implies. For gaming it's hard to say; probably some stuff to do with variable rate shading/upscaling/compression and the overall configuration balance between TMUs/ROPs and so on. You've also got memory performance, and efficiency uplift isn't standing still there either. Simply put, there are plenty of possible ways AMD can extract further performance and efficiency to reach that target figure.

Heat concentration will be worse in a single square die versus that same die chopped in two and elongated into more of a rectangle, with the halves in series, which can spread heat better without contending with hot spots. The whole GPU/CPU heat-concentration thing is entirely irrelevant to any of that. The point is that hot spots are easier to tackle and deal with by cutting them in half and elongating the heat dispersion. It's like taking a bed of coals in a wood stove or camp fire: if you spread them out, they'll burn out/cool down more quickly. In this instance you're spreading heat to the IHS and the cooler's mounting surface in turn.

For power savings, a p-state could control one of the two dies to turn it off or place it into a deeper-sleep power-savings mode below a given work threshold. Additionally, it could do something like Ethernet flow control and auto-negotiate round-robin deep sleep between the GPU dies when load goes below thresholds. Basically they can modulate between the two dies for power savings and better heat dispersion, to negate throttling under heavier workloads, leading to more even sustained boost performance.
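Purely to illustrate the kind of scheme I mean, here's a toy sketch of a controller that parks one of two dies under light load and alternates which die sleeps; the threshold and behaviour are hypothetical, not anything AMD has described:

```python
# Toy sketch of the idea above: park one of two GPU dies below a load
# threshold and alternate which die sleeps (round-robin). Entirely
# hypothetical behaviour, not how any shipping GPU actually works.

SLEEP_THRESHOLD = 0.40   # fraction of total throughput needed (assumption)

class TwoDieController:
    def __init__(self) -> None:
        self.parked = None        # index of the sleeping die, or None
        self.park_next = 0        # round-robin so heat/wear is shared

    def update(self, load: float):
        if load < SLEEP_THRESHOLD and self.parked is None:
            self.parked = self.park_next      # light load: put one die to sleep
            self.park_next ^= 1
        elif load >= SLEEP_THRESHOLD and self.parked is not None:
            self.parked = None                # heavy load: wake it back up
        return self.parked

ctrl = TwoDieController()
for load in (0.2, 0.2, 0.9, 0.3):
    print(f"load {load:.1f} -> parked die: {ctrl.update(load)}")
```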

You naturally reach a saturation point at full load, like you would with water cooling and ambient temperature rise, or any cooling really, in regard to cooling/TDP effectiveness, but you get there more slowly and have more evenly distributed performance along the way.

In terms of fans, idk, but GPU power level goes up about 1% and total power level goes up about 3% if I change my GPU's fan speed from 45% to 100%, according to NVIDIA Inspector's readings. I could've sworn it can even play a role in boost clocks in some instances. The power has to come from either the PCIe bus or the PCIe power connector in either case, and will contribute somewhat to warming the PCB in turn due to higher current.

I could've sworn there were instances with Maxwell where, if you lower fan RPMs, you can sometimes end up with better boost-clock performance than cranking the fans to 100%, contrary to what you'd expect. That's in regard to fairly strict power limits, though, with an undervolt-and-overclock scenario to extract better performance per watt. The overall gist of it was that fan power eats into the power available for boost clocks within the imposed power limits.

In terms of the worst RDNA2 SKUs from a balance standpoint, considering where they are positioned on performance and cost, the 6500 XT and 6650 XT are two of the worst. That unofficial 6700 non-XT was done for a reason; the 6650 XT is a pitiful uplift over the 6600 XT, so the 6700 better fills the gap up to the 6700 XT, not that it's a heck of a lot better. From a performance standpoint it does a better job of filling the gap; from a price standpoint, I didn't check. The 6400 isn't so bad; the PCIe x4 matter is less problematic with it, given the further cut-down design doesn't suffocate it in quite the same way.
Posted on Reply
#56
konga
InVasManiI was thinking more along the lines of a 100 W chip split into two 50 W chips and a 50% efficiency uplift on each. If you take a 50% efficiency uplift on each chip, you end up at 2x performance at the same wattage.
You're adding percentages where you shouldn't be again. 50% efficiency gain is 50% efficiency gain no matter how the silicon is divided. It doesn't become a 2x perf/watt gain just because there are two chips now.
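To spell it out with the same hypothetical numbers (100 W total, arbitrary performance units):

```python
# Perf/Watt is a ratio, so it applies once to the whole design no matter
# how the silicon is split. Hypothetical numbers from the example above.
new_perf_per_watt = 1.0 * 1.5     # baseline 1.0 perf/W, "50% better"

one_big_chip   = 100 * new_perf_per_watt        # one 100 W die
two_half_chips = 2 * (50 * new_perf_per_watt)   # two 50 W dice

print(one_big_chip, two_half_chips)   # 150.0 150.0 -> same 1.5x, not 2x
```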
Posted on Reply
#57
ARF
Have we seen the actual dice configuration for the Navi 31?
I guess it will be like this:

Posted on Reply
#58
spnidel
b-but there's no way a 6800 XT would use less power than the 2080 ti AND be faster!
Posted on Reply
#59
InVasMani
kongaYou're adding percentages where you shouldn't be again. 50% efficiency gain is 50% efficiency gain no matter how the silicon is divided. It doesn't become a 2x perf/watt gain just because there are two chips now.
Yeah, I'm confusing myself on the matter, a brain-fart moment I guess, where I'm trying to wrap my head around it in a way that makes sense. So it's a bit like GDDR6X PAM4 divided between two dies, each of which can double data transfer rates and access half of the VRAM, and the I/O can micro-manage it all, in essence!?
ARFHave we seen the actual dice configuration for the Navi 31?
I guess it will be like this:

I'd like to interject: based on the V6 Navi 31 diagram above, it should've had a V8!!?
Posted on Reply
#60
80-watt Hamster
ARFI have never succeeded in making it work - there is always silence.
By contrast, GPU audio output works for me 9 times out of 10, all the way back to Fermi. Maybe it's a useless feature for you, but I like to be able to plug my PC into a TV and have sound come out with just an HDMI cable. Heck, one of my recent builds was having trouble with the onboard audio (drivers or hardware I never figured out), so I plugged my speakers into the monitor output instead. GPU audio saved my life!*

*not really
Posted on Reply
#61
chrcoluk
ARFNavi 31 will break the 1000 GTexels/s (1 TTexel/s) Texture Fillrate barrier.

Damn, that looks so sweet man; I would rather have 16 gigs of VRAM any day of the week instead of RT cores I will never use. Shame AMD don't have the equivalent of FE cards in the UK at MSRP, else I would have switched over to the red team.
Posted on Reply
#62
LupintheIII
RithsomWith the ever-so increasing difficulty of being able to source new, power-efficient nodes, this is very hard to believe. I'd love for it to be true, but I'm going to keep my expectations in check until independent reviews come out.
It was true for RDNA1 over GCN and for RDNA2 over RDNA1... all on the same 7nm node...
They did it twice over the last 3 years; I don't get what is hard to believe.
Posted on Reply
#63
InVasMani
Not only that, but people had similar takes on Zen 1 to Zen 2 and Zen 2 to Zen 3. You saw plenty of similar rhetoric; some people are just waiting to see AMD falter so they can jump all over them, or simply have low optimism.
Posted on Reply
#64
btk2k2
InVasManiNot only that, but people had similar takes on Zen 1 to Zen 2 and Zen 2 to Zen 3. You saw plenty of similar rhetoric; some people are just waiting to see AMD falter so they can jump all over them, or simply have low optimism.
AMD have been pretty reliable with their performance estimates and comparisons, to be fair to them. They did it with the 5800X3D. They could have easily pointed at the gains in ACC, MSFS, iRacing, Stellaris and made some pretty ridiculous claims, but they didn't; they kept to the sort of gaming suite reviewers use and said the gain over a 5900X was around 15%, which has now been verified by several independent reviewers.

AMD don't seem to excessively puff up their numbers, so I see no reason why this >50% perf/watt claim should not be believed.
Posted on Reply
#65
InVasMani
I can't help speculating a bit: what would happen if AMD did something akin to big.LITTLE on the GPU side, combining N5 and N5P along with the I/O die? I don't believe they've done so, but could the I/O die reserve the extra hardware performance for, say, hardware-accelerated FSR? Also, given N5P can offer +5% performance or -10% power, could one chip be tuned one way and the other the opposite!!? Sort of an interesting hypothetical thought. Similarly, what about differentiating something like FP half precision and double precision between two chips?

It seems like there are a lot of tangible ways they might extract a bit more performance or efficiency, potentially, depending on how the I/O die can sync and leverage it. It's basically a dedicated sequencer for how the GPU operates, with low-latency chips closely connected to it.
Posted on Reply
#66
Mussels
Freshwater Moderator
InVasManiTwo 100w chips reduce performance per chip 50% add together 50% + 50% = 100% you're still at 100w and double the performance. Of course it begs the question how can they get there? Well a node shrink. I'm not sure how much that accounts for on the chip die, but there could be a bit higher performance per watt on the DRAM side advancement as well. Still 50% is a lot it seems like, but then you've always got infinity cache and it makes a huge difference on performance per watt especially since benefits from compression as well and perhaps the I/O die controlling the two MCM chips handles a bit of decompression and intelligently at the same time.

The node reduction itself from 6nm down to 5nm is what, 1/6? Across two chip dies that works out to 1/3. They also shuffle logic to the I/O die, and I'm not sure how much that occupies off hand, but say it bumps up to 40% more silicon space; with that space that is pretty good. The other good aspect is heat is spread out more between two chip dies, which is better than one chip die the size of two all condensed in one spot. It's much better for the heat load to be spread apart and radiate more to the cooler. That even reduces stress on the VRMs that have to power the fans for the GPU. Something interesting is if AIBs were to ever put a fan header on the side that could be plugged into a system header instead, shifting more stress to the MB VRMs and off of the GPU's VRMs, given they can consume a few watts.

It seems pretty reasonable and plausible. Let's not forget there could be a bit more room to increase the die size to make room for more of that cache over the previous chip dies. In fact, even not taking that into account, if the cache is on the die and you pair two, you double the cache. This isn't SLI/CF either, plus it's got a dedicated I/O die as well. Just moving logic to the I/O die will free up silicon space on the chip die. It might not be 50% in all instances, but up to that in the right scenario, I can see it. Lastly, FSR is another metric in all of this and gives an uplift in efficiency per watt. You can certainly argue it's important to consider the performance-per-watt context a company, be it AMD/Intel/Nvidia or others, is talking about.

I'm going to go out on a limb on this one and say it could be 50% performance per watt or greater across the entire RDNA3 product segment under the right circumstances. You have to also consider, along with all the other parts mentioned, that voltage is squared, and smaller dies running at lower wattage require lower voltage, increasing efficiency per watt as a whole. So I'm pretty certain this can be very much realistic. I'm not going to say I'm 100% sure about 50% performance per watt across the entire SKU lineup, but AMD hints at it, you can argue, without explicitly going into detail. AMD neither indicates nor discredits that it's for a particular RDNA3 SKU, but rather lists RDNA3, which could be either, though subtly pointing out it's across the product lineup, or at least the initial launch lineup.
You aren't great at math, friend
ARFI have never succeeded in making it work - there is always silence.
With the exception of old cards like the 8800GT that had an SPDIF port, all modern GPUs just have simple built-in audio. There is nothing to do except install your drivers and change the default audio device in Windows.
Posted on Reply
#67
Jism
R0H1TThey're already doing this with CDNA based cards, I'd say an even chance they'll do so with consumer cards especially if Nvidia releases 500~600W monstrosity chips! No way AMD matches them with just 400~500W even if they lead in perf/W at the high end.

It is possible. Nvidia's approach might be fast/low latency, but at the expense of quite some power. AMD's approach is basically four tiny "GPUs" linked together as hardware CrossFire, paired with Infinity Cache and HBM. The advantage for AMD is higher yields and scalability but, more importantly, it's more efficient than throwing in one big core. The downside would be latency; however, Infinity Cache can tackle that. It works amazingly well for RDNA, cutting the cost of, for example, a larger GDDR bus width.
Posted on Reply
#68
Gica
ValantarMy thoughts exactly. It's a funny turnaround to see AMD taking the lead on perf/W when they were so far behind Nvidia for so many years, but they knocked it out of the park with RDNA2, so I'm inclined to be optimistic towards this.
Today
AMD: TSMC 7nm
nVidia: Samsung 8nm

Next
AMD and nVidia: TSMC 5nm

I expect nVidia to keep the advantage in RT and consolidate its advantage with DLSS.
Remember that an nVidia card also supports FSR. The two technologies practically keep the old Turing alive.
Posted on Reply
#69
Tremdog
ratirtI'm starting to believe that perf/watt is a dead end in the graphics and CPU industries. It no longer satisfies me when companies say that, and obviously the growing power consumption has a lot to do with it. I'm looking forward to the new tech, but if the power consumption is through the roof, I will literally skip buying and investing in graphics cards and CPUs for that matter.


Where do you get a 2x performance increase over RDNA2? AMD said a 50% increase.
AMD said a 50% increase per watt. So to get twice the performance, they increase the number of streaming processors as well as clock speed.
oxrufiioxoInteresting if the rumored 2x performance increase over RDNA2 is true the top gpu would need to use around 500W.
The RTX 3090 Ti uses 450 watts, and the 4000 series is rumoured to chug significantly more than the 3000 series.
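Roughly working that out; the 300 W baseline is just an assumed Navi 21-class board power, not an official figure:

```python
# If perf/Watt improves by ~1.5x, doubling performance still needs ~1.33x the power.
baseline_power_w = 300      # assumed Navi 21-class board power
perf_target = 2.0           # the rumoured "2x RDNA2"
perf_per_watt_gain = 1.5    # AMD's ">50%" claim

required_power = baseline_power_w * perf_target / perf_per_watt_gain
print(f"~{required_power:.0f} W for 2x performance")
# ~400 W here; the rumoured ~500 W would imply either a higher baseline
# or somewhat less than a full 1.5x perf/Watt gain at that operating point.
```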
Posted on Reply
#70
Valantar
80-watt HamsterBy contrast, GPU audio output works for me 9 times out of 10, all the way back to Fermi. Maybe it's a useless feature for you, but I like to be able to plug my PC into a TV and have sound come out with just an HDMI cable. Heck, one of my recent builds was having trouble with the onboard audio (drivers or hardware I never figured out), so I plugged my speakers into the monitor output instead. GPU audio saved my life!*

*not really
Yeah, I've literally never heard of GPU audio not working. Like... how would it not work?
ARFI have never succeeded in making it work - there is always silence.
Does your monitor have speakers? Have they been turned all the way down? There is no reason why this shouldn't work.
chrcolukDamn, that looks so sweet man; I would rather have 16 gigs of VRAM any day of the week instead of RT cores I will never use. Shame AMD don't have the equivalent of FE cards in the UK at MSRP, else I would have switched over to the red team.
Doesn't the AMD.com 'shop' site work in the UK?
InVasManiI can't help speculating a bit: what would happen if AMD did something akin to big.LITTLE on the GPU side, combining N5 and N5P along with the I/O die? I don't believe they've done so, but could the I/O die reserve the extra hardware performance for, say, hardware-accelerated FSR? Also, given N5P can offer +5% performance or -10% power, could one chip be tuned one way and the other the opposite!!? Sort of an interesting hypothetical thought. Similarly, what about differentiating something like FP half precision and double precision between two chips?

It seems like there are a lot of tangible ways they might extract a bit more performance or efficiency, potentially, depending on how the I/O die can sync and leverage it. It's basically a dedicated sequencer for how the GPU operates, with low-latency chips closely connected to it.
What would "akin to big.little" even mean in a GPU, where all the cores are incredibly tiny? Remember, that +5%perf/-10%power is mainly down to clock tuning, and can be achieved on any gpu through the boost algorithm.

As for most of the rest of what you're saying here, it would be incredibly expensive (taping out two different dice, possibly using different libraries) for very little benefit. Double precision FP doesn't matter for consumers.
GicaToday
AMD: TSMC 7nm
nVidia: Samsung 8nm

Next
AMD and nVidia: TSMC 5nm

I expect nVidia to keep the advantage in RT and consolidate its advantage with DLSS.
Remember that an nVidia card also supports FSR. The two technologies practically keep the old Turing alive.
Sure, they have a node advantage, but what you're doing here is essentially hand-waving away the massive efficiency improvements AMD has delivered over the past few generations. Remember, back when both were on 14/16nm, Nvidia was miles ahead, with a significant architectural efficiency lead. Now, AMD has a minor node advantage but also crucially has caught up in terms of architectural efficiency - TSMC 7nm is better than Samsung 8nm, but not by that much. And they're then promising another >50% perf/W increase on top of that, while Nvidia is uncharacteristically quiet.
Posted on Reply
#71
HD64G
RDNA3's MCM design for sure means that the top GPUs (Navi 31 & 32) will have two GPU core chiplets and some more dies for I/O and anything else needed. 'Chiplet' means a die containing compute cores, as the Zen architecture has shown since Zen 2; the I/O chip is called a die, not a chiplet. And chiplets worked great for CPUs, for both cost/yields and binning, so with the +50% in perf/W, Navi 31 at 5 nm could double theoretical performance over Navi 21 and consume 400-450 W, while nVidia, at 4 nm, will consume at least 550-600 W to double theoretical performance over the 3090, and cost more.
Posted on Reply
#72
AusWolf
ARFIt causes driver related issues, and yes, I have never used the graphics integrated audio. I have a normal audio processor integrated on the motherboard..
What kind of driver related issues does it cause? I've never had any problem using the GPU's audio. In fact, I find it quite useful, not just for TVs - there's one less cable in the jungle at the back of the PC.
Posted on Reply
#73
ModEl4
The 50% perf/Watt isn't something new; both Rick Bergman (interview) and David Wang (presentation) mentioned it in the past:

www.notebookcheck.net/AMD-is-expecting-the-next-gen-RDNA-3-GPUs-scheduled-for-2021-to-deliver-50-improved-performance-per-watt-over-the-new-RX-6000-series.503412.0.html

We still don't know many details (chiplets, GPU & memory frequencies, etc.), so I will update the table below sometime in the future when more info becomes known; my current hypothesis is the following:


I meant Navi32-8192 instead of Navi31-8192
Posted on Reply
#74
ARF
AusWolfWhat kind of driver related issues does it cause? I've never had any problem using the GPU's audio. In fact, I find it quite useful, not just for TVs - there's one less cable in the jungle at the back of the PC.
A mess; usually when I uninstall the Radeon Software, it deletes the Dolby Audio software panel from my tray as well. Then I need to reinstall the entire Realtek High Definition Audio Driver package again.
ValantarDoes your monitor have speakers? Have they been turned all the way down? There is no reason why this shouldn't work.
No, it doesn't.
It's when I try to connect the notebook to the UHD 4K TV using the HDMI cable; there is never sound via that cable.
ModEl4
I meant Navi32-8192 instead of Navi31-8192
Yes, for a cut-down Navi 31 that would be terrible performance, making it completely pointless to launch such a version.

Fixed it for you:

Posted on Reply
#75
ModEl4
ARFA mess; usually when I uninstall the Radeon Software, it deletes the Dolby Audio software panel from my tray as well. Then I need to reinstall the entire Realtek High Definition Audio Driver package again.



No, it doesn't.
It's when I try to connect the notebook to the UHD 4K TV using the HDMI cable; there is never sound via that cable.



Yes, for a cut-down Navi 31 that would be terrible performance, making it completely pointless to launch such a version.

Fixed it for you:

lol my bad, I edited Navi 32 but I left the 4090 Ti/4090 typo
Below is the table without the MS Paint job (thanks btw!)
Posted on Reply