
AMD Announces RDNA 3 GPU Launch Livestream

Joined
Jun 21, 2021
Messages
3,121 (2.42/day)
System Name daily driver Mac mini M2 Pro
Processor Apple proprietary M2 Pro (6 p-cores, 4 e-cores)
Motherboard Apple proprietary
Cooling Apple proprietary
Memory Apple proprietary 16GB LPDDR5 unified memory
Video Card(s) Apple proprietary M2 Pro (16-core GPU)
Storage Apple proprietary onboard 512GB SSD + various external HDDs
Display(s) LG UltraFine 27UL850W (4K@60Hz IPS)
Case Apple proprietary
Audio Device(s) Apple proprietary
Power Supply Apple proprietary
Mouse Apple Magic Trackpad 2
Keyboard Keychron K1 tenkeyless (Gateron Reds)
VR HMD Oculus Rift S (hosted on a different PC)
Software macOS Sonoma 14.7
Benchmark Scores (My Windows daily driver is a Beelink Mini S12 Pro. I'm not interested in benchmarking.)
"AMD will announce its first RDNA 3 based GPUs on the 3rd of November"

Announcement or hard launch or both??

It depends on what they say on November 3.

They could say "available now" or "available [insert future date]". About the only thing they won't say is "We started selling these yesterday."

Wait until after their event and you'll know, just like the rest of us. It's not like anyone here is privy to AMD's confidential marketing event plans.
 
Joined
Dec 22, 2011
Messages
3,890 (0.82/day)
Processor AMD Ryzen 7 3700X
Motherboard MSI MAG B550 TOMAHAWK
Cooling AMD Wraith Prism
Memory Team Group Dark Pro 8Pack Edition 3600MHz CL16
Video Card(s) NVIDIA GeForce RTX 3080 FE
Storage Kingston A2000 1TB + Seagate HDD workhorse
Display(s) Samsung 50" QN94A Neo QLED
Case Antec 1200
Power Supply Seasonic Focus GX-850
Mouse Razer Deathadder Chroma
Keyboard Logitech UltraX
Software Windows 11
It all boils down to stock levels, and with the world and their mother using TSMC, I suspect stocks will sell out on day 1.

I hope I'm wrong, but I'd place a bet on myself being right.
 
TPU's own benchmarks compare the efficiency of AMD's 6xxx series to Nvidia's 3xxx series (and now the 4090). Here's the biggest outlier:
cyberpunk-2077-rt-3840-2160.png (500×570) (tpucdn.com)

Cyberpunk shows AMD GPUs at roughly -69% and Nvidia 3xxx series GPUs at around -50%. This is the game with the largest difference, and probably the most intensive raytracing, that TPU has tested in their most recent benchmarks. So overall we have a ~20% difference, and around ~30% if you include the 4090.

In Far Cry 6 the AMD GPU is actually ~5% more efficient than the Nvidia, and in F1 Nvidia is only ~8% more efficient.
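To make those percentages concrete, here's a tiny sketch of what they mean in frame rates. The -69% / -50% figures are my eyeballed chart readings above, and the 60 fps raster baseline is purely illustrative:

```python
# The -69% / -50% figures are eyeballed chart readings, and the
# 60 fps raster baseline is purely illustrative.
def rt_fps(raster_fps, rt_loss):
    """Frame rate left after paying the RT performance hit."""
    return raster_fps * (1 - rt_loss)

print(round(rt_fps(60, 0.69), 1))  # AMD 6xxx:    18.6 fps left
print(round(rt_fps(60, 0.50), 1))  # Nvidia 3xxx: 30.0 fps left
```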

Just because Brand X increased performance by ___ percentage from one generation to the next doesn't automatically mean that Brand Y will see the same percentage bump.

We have gone through this before. Different game titles perform differently between various cards at various resolutions based on how each title was written and how it harnesses the GPU silicon and interacts with the device driver.

Radeon cards have notoriously poor DirectX 11 performance for example but they do pretty well with DX12 titles. Stuff like that.

You can speculate all you want on unannounced product but that doesn't generate any useful data points for a purchase decision. We have to wait for thoughtful third-party reviews of actual hardware running actual software (device drivers and applications).

It's foolish to extrapolate 7xxx AMD series behavior and Nvidia 4xxx series behavior based on the previous generation's comparison.

Remember that AMD had a slight silicon advantage with RDNA2 because they used TSMC while NVIDIA used Samsung for Ampere. That foundry advantage is now gone since NVIDIA picked TSMC for Ada Lovelace.

The performance gap could widen, it could shrink, it could stay the same. Or more likely there will be gains in some titles, losses in other titles. AMD could debut new machine learning cores or they could keep them out of their GPU die.

A lot of performance will rely on software quality. We just saw that with Intel's Arc launch: some titles run great, some titles suck. Intel's GPU silicon appears to be okay; it's really their driver software that's their biggest current problem.

AMD already knows how their new GPU silicon stacks up compared to RTX 4090. If they can't beat NVIDIA in benchmark scores, they will likely have to compete on price. Just like they've done for years.

We'll have a better idea on November 3rd.
 
Joined
Jun 2, 2017
Messages
9,411 (3.40/day)
System Name Best AMD Computer
Processor AMD 7900X3D
Motherboard Asus X670E E Strix
Cooling In Win SR36
Memory GSKILL DDR5 32GB 5200 30
Video Card(s) Sapphire Pulse 7900XT (Watercooled)
Storage Corsair MP 700, Seagate 530 2Tb, Adata SX8200 2TBx2, Kingston 2 TBx2, Micron 8 TB, WD AN 1500
Display(s) GIGABYTE FV43U
Case Corsair 7000D Airflow
Audio Device(s) Corsair Void Pro, Logitch Z523 5.1
Power Supply Deepcool 1000M
Mouse Logitech g7 gaming mouse
Keyboard Logitech G510
Software Windows 11 Pro 64 Steam. GOG, Uplay, Origin
Benchmark Scores Firestrike: 46183 Time Spy: 25121
I know this is highly anecdotal, but when I got my 6500XT I was blown away by the fact that it basically overclocked to 2983 MHz on the GPU and 2400 MHz on the memory. If they can achieve the clock speeds advertised on YouTube, I have no doubt that the performance will be compelling. Let's also remember these are 5nm GPUs just like Nvidia's, but one could argue that AMD has had longer to refine its designs at TSMC than Nvidia, so it may be able to extract more performance.
 
Joined
May 2, 2017
Messages
7,762 (2.77/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
It all boils down to stock levels, and with the world and their mother using TSMC, I suspect stocks will sell out on day 1.

I hope I'm wrong, but I'd place a bet on myself being right.
Yeah, but TSMC order numbers have also been falling off a cliff recently as every chipmaker is scaling back production. The question is if this came soon enough to bolster RDNA3 production - and I don't think so. But it could bode well for stock to show up in large numbers a month or two afterwards.
 
Yeah, but TSMC order numbers have also been falling off a cliff recently as every chipmaker is scaling back production. The question is if this came soon enough to bolster RDNA3 production - and I don't think so. But it could bode well for stock to show up in large numbers a month or two afterwards.

Your glass is half full, I like it!
 
Joined
Jan 17, 2018
Messages
440 (0.17/day)
Processor Ryzen 7 5800X3D
Motherboard MSI B550 Tomahawk
Cooling Noctua U12S
Memory 32GB @ 3600 CL18
Video Card(s) AMD 6800XT
Storage WD Black SN850(1TB), WD Black NVMe 2018(500GB), WD Blue SATA(2TB)
Display(s) Samsung Odyssey G9
Case Be Quiet! Silent Base 802
Power Supply Seasonic PRIME-GX-1000

I stated the raytracing efficiency of the AMD cards and you said I was wrong. I showed you evidence that I was correct, and instead of addressing my comment you went on this odd little rant.

Again. A 10-15% increase in Raytracing efficiency on the upcoming generation of AMD cards will put them roughly on par in raytracing with Nvidia cards. It's fairly reasonable to assume that the next generation AMD cards will be more efficient at raytracing, because they've literally said that was one of the focuses of the RDNA 3 architecture.

On raw performance, who knows? I never mentioned raw performance, however the chance of RDNA3 regressing in raytracing is slim-to-none.
 
Well, you can't buy an RDNA3 card now.

Regardless of how much AMD touts their ray tracing improvements in RDNA3, there's no direct evidence of it YET. Speculate all you want but until there are third-party benchmark results and reviews, there's no useful data for a purchase decision.

I would love to see AMD destroy NVIDIA in raster, RT, and image upscaling performance, at 100W TDP less and 25% cheaper at MSRP. But that's just a pipe dream right now. If AMD thinks they can do that, great, they have until November 3 to figure it out. Because November 4, they aren't gonna have anything faster.

Most likely AMD already has a pretty good idea how their halo RDNA3 card (7900 XT?) will match up against RTX 4090.

I don't think there are any games that run only on RT cores. Raster performance is part of the end result. We'll see if Radeon can catch up to GeForce on unassisted RT performance. One thing for sure, almost no gamer will turn on RT without turning on some sort of image upscaling technology. And those image upscaling technologies have some impact on image quality so even if the unassisted RT image quality is identical, what really matters to the end user is how those images appear after going through the upscaling process.

No one is going to play some unassisted ray traced game at 4K that runs 15 fps. It might be done for a graphics benchmark but not by people who are trying to game.
 
Joined
Apr 22, 2021
Messages
249 (0.18/day)
TPU's own benchmarks compare the efficiency of AMD's 6xxx series to Nvidia's 3xxx series (and now the 4090). Here's the biggest outlier:
cyberpunk-2077-rt-3840-2160.png (500×570) (tpucdn.com)

Cyberpunk shows AMD GPUs at roughly -69% and Nvidia 3xxx series GPUs at around -50%. This is the game with the largest difference, and probably the most intensive raytracing, that TPU has tested in their most recent benchmarks. So overall we have a ~20% difference, and around ~30% if you include the 4090.

In Far Cry 6 the AMD GPU is actually ~5% more efficient than the Nvidia, and in F1 Nvidia is only ~8% more efficient.
TPU also benches titles with the latest swapped-in vDLSS DLL.

Is that even ~ practical ~ for every end user's knowledge / rig's performance?!

I stated the raytracing efficiency of the AMD cards and you said I was wrong. I showed you evidence that I was correct, and instead of addressing my comment you went on this odd little rant.

Again. A 10-15% increase in Raytracing efficiency on the upcoming generation of AMD cards will put them roughly on par in raytracing with Nvidia cards. It's fairly reasonable to assume that the next generation AMD cards will be more efficient at raytracing, because they've literally said that was one of the focuses of the RDNA 3 architecture.

On raw performance, who knows? I never mentioned raw performance, however the chance of RDNA3 regressing in raytracing is slim-to-none.

You CANNOT get more power (unless the devs work with AMD exclusively... yeah, right, like that would happen) from utilizing... wait for it... modified texture shaders (in a nutshell, that's RDNA2's answer to so-called dedicated RT cores, aka CUs) until AMD decides to invest in its R&D and REIMAGINE true dedicated ray-tracing cores/CUs for its silicon. :laugh:
 
Joined
Oct 27, 2020
Messages
804 (0.53/day)
I posted earlier that a 330W 7900XT would be -7% at best vs the 4090 in 4K, but only -2% in QHD.
The basis for the QHD projection was shader engines (SEs) in Navi31 (6) vs the RTX 4090's active GPCs (11); the assumption was that at QHD the 4090 loses around 1% of performance per GPC/SE of difference, hence 5% off the 4K gap.
That was wrong. Even with the same number of SMs/GPCs the QHD difference would be at least 1-2% lower, and it doesn't matter that only 11 GPCs are active: the die has the inherent latency characteristics of a 12-GPC design, so the difference is at least 7%. Logically, if a 330W Navi31 is 7% slower in 4K, it won't be 2% slower in QHD; it should at least match the 4090 in QHD (so OC versions would be faster than the 4090 in QHD).
The latest rumors suggest 42 WGPs (10752 SPs) and 20GB on a 320-bit bus for the 7900XT; I had in mind 44 WGPs (11264 SPs) and 24GB on a 384-bit bus. Although this is good for the design (with fewer resources it can supposedly still achieve the ≥50% performance/W claim), the difference in WGPs is small anyway; maybe by cutting GDDR6 ICs and MCDs they dropped power consumption a little and raised the frequency 5-6% instead to compensate.
Regarding naming: the 6900XT was full Navi21 with the full 16GB realized, while the 7900XT is supposedly cut-down Navi31 (1 GCD with 42 WGPs + 5 MCDs) and only 20GB present, so the name is strange; it should have been 7800XT. I bet AMD doesn't want consumers comparing the performance/$ of the 7900XT with the 6800XT (assumption: nearly +75% performance at 4K) because it will be worse. It would need a $999 SRP just to barely match the current RX6800XT's ($549) performance/$, and even against the original 2020 SRP ($649), the 6800XT would have only 15% worse performance/$. So at $999, AMD would essentially give you only 15% more performance/$ after two years, despite claiming the same SRP as the 6900XT.
That's why they chose this naming, with a 7950XT as the top dog.
Still, the value will be much better than the $1199 RTX 4080 in raster!
But $999 is the best-case scenario for the 7900XT (nearly the same performance/$ as the currently priced 6800XT). With this kind of performance, I wonder where AMD will price it given the Ada competition and pricing!
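Here's the perf-per-dollar math as a quick sketch. The +75% generational uplift and all the prices are my assumptions, not measured data:

```python
# The +75% generational uplift and all prices are assumptions.
def perf_per_dollar_gain(uplift, new_price, old_price):
    """Relative perf/$ of the new card vs. the old one."""
    return (1 + uplift) / (new_price / old_price) - 1

# Hypothetical $999 7900XT, +75% over a 6800XT at 4K:
print(f"{perf_per_dollar_gain(0.75, 999, 549):+.1%}")  # vs $549 street price
print(f"{perf_per_dollar_gain(0.75, 999, 649):+.1%}")  # vs $649 launch MSRP
```

So against street pricing the hypothetical card is actually slightly worse perf/$, and against launch MSRP it gains only ~14% after two years.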
 
TPU also benches titles with the latest swapped-in vDLSS DLL.

Is that even ~ practical ~ for every end user's knowledge / rig's performance?!
What in the world are you talking about? What does a vDLSS DLL have to do with anything I posted?
You CANNOT get more power (unless the devs work with AMD exclusively... yeah, right, like that would happen) from utilizing... wait for it... modified texture shaders (in a nutshell, that's RDNA2's answer for it's so-called dedicated RT cores aka CUs) until AMD decides to invest in its R&D, and REIMAGINE, its TRUE dedicated ray-tracing cores/CUs for its silicon. :laugh:
I really hope you're a hardware & software engineer. Otherwise it seems like a foolish statement to just say 'lulz, they can't improve their raytracing efficiency because their architecture doesn't work the same way as Nvidia's'. I expect this to be an interesting post to come back to at the launch of RDNA 3.

I'll admit that I'm wrong if RDNA3 continues to perform as inefficiently as RDNA2 in raytracing titles, however I doubt that I will need to. They've been pretty clear they're dedicating more resources to raytracing in RDNA3. There's no logical reason for them to lie about it.
 
Joined
Sep 17, 2014
Messages
22,738 (6.05/day)
Location
The Washing Machine
System Name Tiny the White Yeti
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling CPU: Thermalright Peerless Assassin / Case: Phanteks T30-120 x3
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
VR HMD HD 420 - Green Edition ;)
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
Well boys. Time to go to the medieval armorer and get equipped. We got us some bots asses to kick next month. HELL YEAH BOYS!!! WE CAN DO IT!!!


RDNA3 AT MSRP WILL BE MINE!!! EAT SHIT BOTS!!! :rockout: :rockout: :rockout: :rockout: :rockout: :rockout: :rockout: :rockout: :rockout:
Inb4 the next disappointing result ;)
 
Joined
Nov 15, 2020
Messages
935 (0.62/day)
System Name 1. Glasshouse 2. Odin OneEye
Processor 1. Ryzen 9 5900X (manual PBO) 2. Ryzen 9 7900X
Motherboard 1. MSI x570 Tomahawk wifi 2. Gigabyte Aorus Extreme 670E
Cooling 1. Noctua NH D15 Chromax Black 2. Custom Loop 3x360mm (60mm) rads & T30 fans/Aquacomputer NEXT w/b
Memory 1. G Skill Neo 16GBx4 (3600MHz 16/16/16/36) 2. Kingston Fury 16GBx2 DDR5 CL36
Video Card(s) 1. Asus Strix Vega 64 2. Powercolor Liquid Devil 7900XTX
Storage 1. Corsair Force MP600 (1TB) & Sabrent Rocket 4 (2TB) 2. Kingston 3000 (1TB) and Hynix p41 (2TB)
Display(s) 1. Samsung U28E590 10bit 4K@60Hz 2. LG C2 42 inch 10bit 4K@120Hz
Case 1. Corsair Crystal 570X White 2. Cooler Master HAF 700 EVO
Audio Device(s) 1. Creative Speakers 2. Built in LG monitor speakers
Power Supply 1. Corsair RM850x 2. Superflower Titanium 1600W
Mouse 1. Microsoft IntelliMouse Pro (grey) 2. Microsoft IntelliMouse Pro (black)
Keyboard Leopold High End Mechanical
Software Windows 11
Seems legit? LOL. I suppose we're looking at RTX4090 thicccness here for this model.

15% won't put them on par... That's what, 4 FPS on most titles in 4K at the top end?

This is going to be highly game-specific stuff, at best. Nvidia has already shown us the key is a solid DLSS + RT implementation, plus a latency hit, to go further. In other words, so much for the low-hanging fruit: they've already hit a wall, and we're now paying in more ways than silicon die area + TDP increase + ray approximation instead of true accuracy :) RT is already dismissed for anything competitive with the current update on Ada.

In brief, its a massive shitshow.

I agree, RDNA3 won't regress, but I wouldn't be too eager for massive RT strides. And honestly, good riddance IMHO. Just make us proper GPUs that slay frames. We have to consider as well that the current console crop still hasn't got much in the way of RT capability either. Where is the market push? AMD won't care, by owning consoles they have Nvidia by the balls. AMD can take its sweet time and watch Nvidia trip over all the early issues before the next console gen wants something proper.
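Quick sanity check on why a 10-15% uplift doesn't close the gap, reusing the eyeballed Cyberpunk figures from earlier in the thread (-69% vs -50% behind the chart leader, i.e. 31% vs 50% of its fps retained):

```python
# 0.31 and 0.50 = fraction of the leader's fps each vendor keeps with
# RT on, per eyeballed Cyberpunk chart readings earlier in the thread.
def required_uplift(amd_retained, nv_retained):
    """RT speedup AMD would need just to match Nvidia's retained fps."""
    return nv_retained / amd_retained - 1

print(f"{required_uplift(0.31, 0.50):.0%}")  # ~61%, far beyond 10-15%
```

That's a worst-case title of course, but it shows why percentage-point gaps in these charts understate the actual ratio between the cards.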
 

AsRock

TPU addict
Joined
Jun 23, 2007
Messages
19,114 (2.99/day)
Location
UK\USA
I think everyone is going to be in for a surprise, even nVidia. :):)

Well, I don't care if that's good or bad to be honest. A 6900XT would have been more than enough for what I wanted, but with them abandoning older cards I thought I would try to get the newest release I can.

As long as it's reliable and all that, I'll be happy.
 
Seems legit? LOL. I suppose we're looking at RTX4090 thicccness here for this model.
In TPU's RTX 4090 bench, at QHD the 3090 is at 74% and the 6950XT at 77%, and at 4K they're equivalent, in raster terms.
The 335mm² RX6800M is Navi22-based and is exactly like the RX6700XT (full die) with around 7% lower boost clock (2.39GHz vs 2.58GHz) and 6% lower game clock (2.3GHz vs 2.42GHz).
According to leaks, Navi32 is in the same ballpark (around 350-352mm²), and the 5nm compute dies can hit near 4GHz if pushed, vs the 3GHz that 7nm RDNA2 could hit under the same conditions, with 30 WGPs (60 old CUs) and 7680 SPs.
At a 3GHz boost with the full die (it should land within 2.7-3GHz) it will have nearly double the RX6950XT's FP32 throughput (and nearly the same pixel fillrate), and it only needs to match it; sure, doubling the SPs won't bring 2x actual performance, but it will bring something (1.25x-1.35x).
The RX 6800M was 145W and the 3080 Ti mobile was 175W; maybe AMD will raise the TGP to 175W as well (since this time they will be much closer to Nvidia's mobile flagship).
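Rough FP32 math behind that "nearly double" claim. The Navi32 shader count and 3GHz boost are leaked/rumored figures, not confirmed specs:

```python
# Shader counts and clocks are leaked/rumored figures, not confirmed.
def tflops(shaders, clock_ghz, flops_per_clock=2):  # FMA = 2 FLOPs/clock
    """Peak FP32 throughput in TFLOPS."""
    return shaders * flops_per_clock * clock_ghz / 1000

rx6950xt = tflops(5120, 2.31)  # 6950XT at its rated boost clock
navi32 = tflops(7680, 3.0)     # rumored full Navi32 at a 3 GHz boost
print(f"{rx6950xt:.1f} -> {navi32:.1f} TFLOPS ({navi32 / rx6950xt:.2f}x)")
```

Peak TFLOPS never translates 1:1 into frames, hence the hedged 1.25x-1.35x estimate above.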
 
In TPU's RTX 4090 bench, at QHD the 3090 is at 74% and the 6950XT at 77%, and at 4K they're equivalent, in raster terms.
The 335mm² RX6800M is Navi22-based and is exactly like the RX6700XT (full die) with around 7% lower boost clock (2.39GHz vs 2.58GHz) and 6% lower game clock (2.3GHz vs 2.42GHz).
According to leaks, Navi32 is in the same ballpark (around 350-352mm²), and the 5nm compute dies can hit near 4GHz if pushed, vs the 3GHz that 7nm RDNA2 could hit under the same conditions, with 30 WGPs (60 old CUs) and 7680 SPs.
At a 3GHz boost with the full die (it should land within 2.7-3GHz) it will have nearly double the RX6950XT's FP32 throughput (and nearly the same pixel fillrate), and it only needs to match it; sure, doubling the SPs won't bring 2x actual performance, but it will bring something (1.25x-1.35x).
The RX 6800M was 145W and the 3080 Ti mobile was 175W; maybe AMD will raise the TGP to 175W as well (since this time they will be much closer to Nvidia's mobile flagship).
Exactly, 175W :D In a laptop. Without a CPU or anything else.
 
Joined
Apr 21, 2005
Messages
185 (0.03/day)

I am not sure that laptop N32 is full N32. I think it may be the 12GB 3-MCD version with a cut die (maybe around 5K shaders at a guess), because I get the same rough idea that full N32, even downclocked, should be 20-30% or so faster than the 6950XT.

Also, N32 is ~200mm² of N5 plus ~144mm² of N6 for the full 4-MCD version, or ~108mm² of N6 for the 3-MCD version, if the SkyJuice Angstronomics leak is accurate.
 
Joined
Jul 9, 2015
Messages
3,413 (0.99/day)
System Name M3401 notebook
Processor 5600H
Motherboard NA
Memory 16GB
Video Card(s) 3050
Storage 500GB SSD
Display(s) 14" OLED screen of the laptop
Software Windows 10
Benchmark Scores 3050 scores good 15-20% lower than average, despite ASUS's claims that it has uber cooling.
Here is how last gen panned out:

1666602484193.png


The 4090, which is 45-50%-ish ahead of the 3090 Ti, should be well within punching distance for AMD's top dog.

The question is the consequences of the MCM design.
Will we see Zen's chiplet story repeat, with major meltdowns in the competitor's camp?
Or did it rather fail, and the new GPU can't beat the old one by 50%-ish?


PS
DLSS3/RT is pure bazinga: the former has a quite negative impact on visual quality, while the latter is barely used even by people with uber cards.

Mind you, there were 3D TVs not so long ago.
The % of users actually using that feature was arguably higher than the number of users with RT on.
It got slashed.

For RT to take off, it must be much less problematic to develop and much less taxing in FPS terms.
 

It is pretty obvious that the 4090 is within range of RDNA3. We don't know where it will actually fall, but given AMD's public announcement of >50% perf/watt and some TBP guesses, you can easily get to 4090-level performance, and AMD's prior +50% claims have been on the conservative side. Of course we need to wait and see whether they have actually achieved that, but it is 100% in the realm of the reasonably possible.

RT is the future. I don't think RDNA3 / Ada are going to deliver that future, but I could see RDNA4 / Hopper (or whatever the consumer next-gen NV codename is) doing for RT what the 9700 Pro did for AF and AA by making it a default-on feature.
 
Benchmark Scores 3050 scores good 15-20% lower than average, despite ASUS's claims that it has uber cooling.
That said, NVIDIA may be putting more effort into improving their Tensor cores especially since ML is more important for their Datacenter business.
Huang himself said that a datacenter GPU is a no-brainer design.
More silicon => more dumb compute units can be crammed in.
(I think the context of that comment was AMD rapidly catching up.)

Yeah, but TSMC order numbers have also been falling off a cliff recently as every chipmaker is scaling back production. The question is if this came soon enough to bolster RDNA3 production - and I don't think so. But it could bode well for stock to show up in large numbers a month or two afterwards.
Shouldn't AMD have an edge, given it's using multiple nodes?
 
I am not sure that laptop N32 is full N32. I think it may be the 12GB 3-MCD version with a cut die (maybe around 5K shaders at a guess), because I get the same rough idea that full N32, even downclocked, should be 20-30% or so faster than the 6950XT.

Also, N32 is ~200mm² of N5 plus ~144mm² of N6 for the full 4-MCD version, or ~108mm² of N6 for the 3-MCD version, if the SkyJuice Angstronomics leak is accurate.
I agree, it's probably cut down; if you run the calculation I suggest in my post with the full die, it comes out even higher than the RX6950, but it depends on the base/game/boost clocks and the TGP that AMD will target.
Also, the competition will have a strong product in the form of AD103; AMD will decide depending on what performance level it achieves.
For N32 I had the impression the leak was 200 + 4×37.5mm² (other sites round the MCDs to 38mm² as well); is 4×36mm² now the latest info?
 
Benchmark Scores 3050 scores good 15-20% lower than average, despite ASUS's claims that it has uber cooling.
Hype intensifier / overhype mode on:

1666637755613.png
 