
AMD Radeon RX 7900 XTX Performance Claims Extrapolated, Performs Within Striking Distance of RTX 4090

Joined
May 31, 2016
Messages
4,340 (1.48/day)
Location
Currently Norway
System Name Bro2
Processor Ryzen 5800X
Motherboard Gigabyte X570 Aorus Elite
Cooling Corsair h115i pro rgb
Memory 16GB G.Skill Flare X 3200 CL14 @3800Mhz CL16
Video Card(s) Powercolor 6900 XT Red Devil 1.1v@2400Mhz
Storage M.2 Samsung 970 Evo Plus 500MB/ Samsung 860 Evo 1TB
Display(s) LG 27UD69 UHD / LG 27GN950
Case Fractal Design G
Audio Device(s) Realtec 5.1
Power Supply Seasonic 750W GOLD
Mouse Logitech G402
Keyboard Logitech slim
Software Windows 10 64 bit
Nvidia has every opportunity to tweak the lineup and the better half isn't even out... They always ran the risk of misfires because they release first.
I don't think that was a misfire, since that would suggest something unpredictable they were trying to tackle. NVidia's actions were intentional, both the pricing and releasing the 4090 as the first card, and it's obvious why. From what they have said so far about pricing, all the graphics cards are a joke. Then there's pulling the 4080 12GB from the release, which was literally a flying circus.
 
Joined
Aug 3, 2006
Messages
87 (0.01/day)
Location
San Antonio, TX
System Name Geil
Processor Ryzen 6900HS
Memory 16GB DDR5
Video Card(s) Radeon 6700S
Storage 1TB SSD
Display(s) 120Hz 2560x1600
Software Windows 11 home premium
It shows you've been retired for a while now, because this is absolute nonsense.
Even if what I say is untrue (which I kind of doubt, considering I follow the Linux community), doesn't it suck to have stigmas?
 
Joined
Jul 9, 2015
Messages
3,413 (1.05/day)
System Name M3401 notebook
Processor 5600H
Motherboard NA
Memory 16GB
Video Card(s) 3050
Storage 500GB SSD
Display(s) 14" OLED screen of the laptop
Software Windows 10
Benchmark Scores The 3050 scores a good 15-20% lower than average, despite ASUS's claims that it has uber cooling.
-20% slower vs
You said the 4090 was at 140% if the 4080 was at 100%, which was wrong; it was at 166%.

As for "I can grab a random leak on the internet, that shows that 4090 is only 100/73 => 37% faster than 4080", oh well.
 
Joined
Oct 27, 2020
Messages
789 (0.60/day)
You said the 4090 was at 140% if the 4080 was at 100%, which was wrong; it was at 166%.

As for "I can grab a random leak on the internet, that shows that 4090 is only 100/73 => 37% faster than 4080", oh well.
Where did you find the 166%?
My original post (the one you replied to) was about what performance an AD103-based RTX 4080 could potentially achieve if Nvidia decided to change the specs (a full die and +4% higher clocks than the current config was my proposal). For that RTX 4080 config I said the 4090 should have been +39% faster based on specs, but in reality the difference would be only around +25% on TPU's 5800X testbed with the current game selection, because in that particular setup the 4090 only realizes around 90% of its true potential.
+25% means the 4090 at 125% and the full-AD103 4080 at 100% (or the 4090 at 100% and the full-AD103 4080 at 80%; it's the same thing).
The Time Spy results that I quoted as an indication, if valid, show that even in synthetics the difference between the 4090 (100) and the current, slower 4080 config (73) is much less than what you claim.
If TPU doesn't change its testbed, the average difference in games will be even smaller (slightly different, around 74-75%).
No point arguing; reviews will come in a few weeks anyway and we'll see whose assumption proves true.
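To put rough numbers on the extrapolation above, here is a minimal sketch; the 0.90 scaling factor is the assumption stated above (the 4090 realizing only ~90% of its on-paper advantage on a 5800X testbed), not a measured value:

```python
# Minimal sketch of the spec-vs-realized extrapolation described above.
# The 0.90 scaling factor is an assumption from the post, not a measurement.

spec_advantage = 1.39      # 4090 vs. a hypothetical full-AD103 4080, on paper
scaling_efficiency = 0.90  # assumed fraction of that advantage realized in games

realized = spec_advantage * scaling_efficiency
print(f"realized advantage: +{(realized - 1) * 100:.0f}%")          # ~ +25%

# The same gap expressed with the 4090 as the 100% baseline:
print(f"full-AD103 4080 relative to 4090: {100 / realized:.0f}%")   # ~ 80%
```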
 
Joined
Jul 9, 2015
Messages
3,413 (1.05/day)
Shader distribution across NV lineups highlights how terrible things are in the green camp; it has NEVER been this bad. The "unlaunched" 4080 12GB is basically at 2060 levels in terms of its share of the halo part:

[attached chart: shader counts per model as a percentage of each generation's halo card]
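For anyone who wants to reproduce the gist of that chart, here is a quick sketch using publicly listed CUDA core counts; the exact cards and halo parts used in the original chart may differ:

```python
# Each card's shader count as a share of its generation's top consumer card,
# using publicly listed CUDA core counts.

halo = {"RTX 2080 Ti": 4352, "RTX 4090": 16384}
cards = {
    "RTX 2060":      (1920, "RTX 2080 Ti"),
    "RTX 4080 16GB": (9728, "RTX 4090"),
    "RTX 4080 12GB": (7680, "RTX 4090"),
}

for name, (cores, flagship) in cards.items():
    print(f"{name}: {cores / halo[flagship] * 100:.0f}% of the {flagship}'s shaders")
# RTX 2060: 44% | RTX 4080 16GB: 59% | RTX 4080 12GB: 47%
```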



Where did you find the 166%?
If the 4080 is a 40% cut-down from the 4090, then the 4090 is 166% of the 4080.

As for "what is faster" and magical shaders that do much better in 4080 (with slower mem and what not) than in 4090, we'll see soon enough.
 
Joined
Jun 10, 2014
Messages
2,907 (0.80/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
Shader distribution across NV lineups highlights how terrible things are in the green camp; it has NEVER been this bad. The "unlaunched" 4080 12GB is basically at 2060 levels in terms of its share of the halo part:
But does that really matter though?
Isn't it far more important how much performance you get per Dollar and how it compares vs. the competition and its predecessor?

I find it funny how the typical complaint over the years has been the opposite: too little difference between the three highest tiers. Quite often, a 60/60 Ti model has been "too close" to the 70/80 models (back before the 90 tier existed), and sometimes the 70 model has been very close to the 80 model (e.g. GTX 970).

These days the 90 model is a much bigger step up than the old Titan models used to be. But they haven't done that by making the mid-range models worse, so what's the problem then?
 
Joined
Oct 27, 2020
Messages
789 (0.60/day)
Shader distribution across NV lineups highlights how terrible things are in the green camp; it has NEVER been this bad. The "unlaunched" 4080 12GB is basically at 2060 levels in terms of its share of the halo part:

[attached chart: shader counts per model as a percentage of each generation's halo card]



If the 4080 is a 40% cut-down from the 4090, then the 4090 is 166% of the 4080.

As for "what is faster" and magical shaders that do much better in 4080 (with slower mem and what not) than in 4090, we'll see soon enough.
I didn't say the 4080 was -40% slower than the 4090 (there the base of comparison is the 4090: if 4090 = 100%, then 4080 = 100 - 40 = 60%).
I said the 4090 is +40% faster than the 4080 (there the base of comparison is the 4080: if 4080 = 100%, then 4090 = 100 + 40 = 140%).
It's very basic math really; I don't know why it confuses you...
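A tiny worked example of the two baselines being argued about here (the 60%/40% figures are only illustrative):

```python
# "The 4080 is a 40% cut-down of the 4090" -> the baseline is the 4090:
ratio_4080 = 0.60                       # 4080 = 60% of the 4090
print(f"4090 is +{(1 / ratio_4080 - 1) * 100:.0f}% faster")   # +67% (i.e. ~166-167%)

# "The 4090 is +40% faster than the 4080" -> the baseline is the 4080:
ratio_4090 = 1.40                       # 4090 = 140% of the 4080
print(f"4080 sits at {100 / ratio_4090:.0f}% of the 4090")    # ~71%
```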
 
Joined
Sep 3, 2019
Messages
3,029 (1.75/day)
Location
Thessaloniki, Greece
System Name PC on since Aug 2019, 1st CPU R5 3600 + ASUS ROG RX580 8GB >> MSI Gaming X RX5700XT (Jan 2020)
Processor Ryzen 9 5900X (July 2022), 150W PPT limit, 79C temp limit, CO -9~14
Motherboard Gigabyte X570 Aorus Pro (Rev1.0), BIOS F37h, AGESA V2 1.2.0.B
Cooling Arctic Liquid Freezer II 420mm Rev7 with off center mount for Ryzen, TIM: Kryonaut
Memory 2x16GB G.Skill Trident Z Neo GTZN (July 2022) 3600MHz 1.42V CL16-16-16-16-32-48 1T, tRFC:288, B-die
Video Card(s) Sapphire Nitro+ RX 7900XTX (Dec 2023) 314~465W (390W current) PowerLimit, 1060mV, Adrenalin v24.5.1
Storage Samsung NVMe: 980Pro 1TB(OS 2022), 970Pro 512GB(2019) / SATA-III: 850Pro 1TB(2015) 860Evo 1TB(2020)
Display(s) Dell Alienware AW3423DW 34" QD-OLED curved (1800R), 3440x1440 144Hz (max 175Hz) HDR1000, VRR on
Case None... naked on desk
Audio Device(s) Astro A50 headset
Power Supply Corsair HX750i, 80+ Platinum, 93% (250~700W), modular, single/dual rail (switch)
Mouse Logitech MX Master (Gen1)
Keyboard Logitech G15 (Gen2) w/ LCDSirReal applet
Software Windows 11 Home 64bit (v23H2, OSB 22631.3155)
All GPCs are active in the RTX 4080; they just disabled some SMs. All they have to do is re-enable them on the AD103 dies that can be fully utilized, use the rest in future cut-down AD103-based products (and also raise the clocks for the full AD103 parts).
And anyway, my point wasn't what Nvidia will do but what it could achieve based on AD103's potential...


According to the leak, even an OC'd cut-down RTX 4080 (304 TCs enabled vs. the 336 TCs of my higher-clocked full-AD103 config...) appears to be only -20% slower than the RTX 4090 in the 3DMark Time Spy Performance preset and -27% in the Extreme 4K preset...
You do your math, I will do mine!
For example, the theoretical shading performance delta alone is useless for extracting the performance difference between two models; it's much more complex than that...

[attached image: leaked 3DMark Time Spy scores]
You do realize that the CPU is involved in those synthetic scores too. Even if you assume the CPU is the same for all of them, it contributes a different percentage to each final score.
To evaluate the GPU, you should look at the GPU score only.
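A rough sketch of why that matters: a combined score compresses a GPU-only gap when both systems share the same CPU. The 85/15 harmonic-mean weighting and the scores below are illustrative assumptions, not 3DMark's exact published formula:

```python
# Illustrative composite score: weighted harmonic mean of graphics and CPU scores.
def overall(graphics: float, cpu: float, wg: float = 0.85, wc: float = 0.15) -> float:
    return (wg + wc) / (wg / graphics + wc / cpu)

cpu = 14_000                          # same (hypothetical) CPU score in both systems
fast_gpu, slow_gpu = 19_000, 14_000   # hypothetical graphics scores

print(f"graphics-only gap: +{(fast_gpu / slow_gpu - 1) * 100:.0f}%")   # +36%
combined = overall(fast_gpu, cpu) / overall(slow_gpu, cpu)
print(f"combined-score gap: +{(combined - 1) * 100:.0f}%")             # +29%
```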
 
Joined
May 31, 2016
Messages
4,340 (1.48/day)
But does that really matter though?
Isn't it far more important how much performance you get per Dollar and how it compares vs. the competition and its predecessor?

I find it funny how the typical complaint over the years has been the opposite: too little difference between the three highest tiers. Quite often, a 60/60 Ti model has been "too close" to the 70/80 models (back before the 90 tier existed), and sometimes the 70 model has been very close to the 80 model (e.g. GTX 970).

These days the 90 model is a much bigger step up than the old Titan models used to be. But they haven't done that by making the mid-range models worse, so what's the problem then?
I see a slight problem with performance per $ lately. I think the companies know how to exploit that metric. Picture this:
$300 for a card that averages 100 FPS in some game (which game doesn't matter). That's our starting point.
Then a new generation releases, and the same-tier card costs $420 and gets 150 FPS.
Another generation: $540 for 200 FPS. And another: $660 for 250 FPS. Performance per $ improves every generation, yet it's still a mid-range card, the same tier you paid $300 for merely four years ago. The other aspect: that four-year-old game has had two new releases since, and each one typically halves a card's FPS. So you don't actually get 250 FPS out of your $660 card, mid-range as it is. Don't get me wrong, you still have plenty of FPS, but the problem is that you paid $660 for a card getting around 125 FPS in the current game, compared to $300 for 100 FPS in a game four years ago.
Check the Far Cry franchise (Far Cry 4 through 6) and the 980 vs. 1080 vs. 2080, with MSRPs of $550 (dropped to $500 within 6 months), $600, and $799 (dropped to $700 a year later) respectively. This is just to illustrate the problem.
That is exactly what NV has been doing for years. Now you get a mid-range card like the 4070, advertised as a 4080 to be exact, for how many $$$ today? You can still say the performance per $ is good, but is it worth paying that much for the card?
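Running the hypothetical numbers from the scenario above makes the point clearer (all prices and FPS figures are the illustration from the post, not real cards):

```python
# (price in $, FPS in the original game) for four hypothetical generations
gens = [(300, 100), (420, 150), (540, 200), (660, 250)]

for price, fps in gens:
    print(f"${price}: {fps} FPS -> {fps / price * 100:.1f} FPS per $100")
# 33.3 -> 35.7 -> 37.0 -> 37.9 FPS per $100: perf/$ really does improve every gen...

# ...but if the newest entry in the franchise runs at roughly half the FPS, the
# $660 mid-range card lands back around 125 FPS in the current game, versus the
# 100 FPS the $300 card delivered in the original game four years earlier.
print(250 / 2)   # 125.0
```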
 

HTC

Joined
Apr 1, 2008
Messages
4,611 (0.78/day)
Location
Portugal
System Name HTC's System
Processor Ryzen 5 5800X3D
Motherboard Asrock Taichi X370
Cooling NH-C14, with the AM4 mounting kit
Memory G.Skill Kit 16GB DDR4 F4 - 3200 C16D - 16 GTZB
Video Card(s) Sapphire Pulse 6600 8 GB
Storage 1 Samsung NVMe 960 EVO 250 GB + 1 3.5" Seagate IronWolf Pro 6TB 7200RPM 256MB SATA III
Display(s) LG 27UD58
Case Fractal Design Define R6 USB-C
Audio Device(s) Onboard
Power Supply Corsair TX 850M 80+ Gold
Mouse Razer Deathadder Elite
Software Ubuntu 20.04.6 LTS
I don't remember exactly where I saw it, but someone made an interesting observation, and I haven't been able to confirm or dispute it: during the presentation, were the 7900 GPUs ever SPECIFICALLY referred to as using the N31 chip?

I didn't notice either way myself but, IF IT'S TRUE, then that would explain why the XTX isn't called a 7950. Could AMD be trolling us and nVidia, and planning to launch THE REAL N31 chip at a later date?

That would also mean higher prices for the lower cards though, and that isn't a good prospect to look forward to ...
 
Joined
Sep 3, 2019
Messages
3,029 (1.75/day)
I don't remember exactly where I saw it, but someone made an interesting observation, and I haven't been able to confirm or dispute it: during the presentation, were the 7900 GPUs ever SPECIFICALLY referred to as using the N31 chip?

I didn't notice either way myself but, IF IT'S TRUE, then that would explain why the XTX isn't called a 7950. Could AMD be trolling us and nVidia, and planning to launch THE REAL N31 chip at a later date?

That would also mean higher prices for the lower cards though, and that isn't a good prospect to look forward to ...
I don't understand what exactly suggests that the 7900XTX isn't Navi31. And what might it be? A Navi32?
And RDNA2 6900XT and 6950XT are both Navi21 chips.

On the contrary, I do believe that AMD will introduce a bigger more expensive die down the road into 2023, but I don't know what it might be called.
 

HTC

Joined
Apr 1, 2008
Messages
4,611 (0.78/day)
I don't understand what exactly suggests that the 7900XTX isn't Navi31. And what might it be? A Navi32?

That was what I understood from what the dude said.

Like I said, I didn't notice whether they specifically referred to the two 7900 cards as using N31 chips or not, but it would make SOME sense, I think.

Such a stunt would MOST CERTAINLY catch nVidia with their pants down ...

I do believe that AMD will introduce a bigger more expensive die down the road into 2023

That's the most likely scenario, I agree.
 
Joined
May 31, 2016
Messages
4,340 (1.48/day)
I don't remember exactly where I saw it, but someone made an interesting observation, and I haven't been able to confirm or dispute it: during the presentation, were the 7900 GPUs ever SPECIFICALLY referred to as using the N31 chip?

I didn't notice either way myself but, IF IT'S TRUE, then that would explain why the XTX isn't called a 7950. Could AMD be trolling us and nVidia, and planning to launch THE REAL N31 chip at a later date?

That would also mean higher prices for the lower cards though, and that isn't a good prospect to look forward to ...
Maybe a design with two core chiplets can be released later. The new AMD design is chiplet-based, but the compute core is a single chiplet and the memory/cache dies are separate chiplets.
 
Joined
Oct 15, 2010
Messages
951 (0.19/day)
System Name Little Boy / New Guy
Processor AMD Ryzen 9 5900X / Intel Core I5 10400F
Motherboard Asrock X470 Taichi Ultimate / Asus H410M Prime
Cooling ARCTIC Liquid Freezer II 280 A-RGB / ARCTIC Freezer 34 eSports DUO
Memory TeamGroup Zeus 2x16GB 3200Mhz CL16 / Teamgroup 1x16GB 3000Mhz CL18
Video Card(s) Asrock Phantom RX 6800 XT 16GB / Asus RTX 3060 Ti 8GB DUAL Mini V2
Storage Patriot Viper VPN100 Nvme 1TB / OCZ Vertex 4 256GB Sata / Ultrastar 2TB / IronWolf 4TB / WD Red 8TB
Display(s) Compumax MF32C 144Hz QHD / ViewSonic OMNI 27 144Hz QHD
Case Phanteks Eclipse P400A / Montech X3 Mesh
Power Supply Aresgame 850W 80+ Gold / Aerocool 850W Plus bronze
Mouse Gigabyte Force M7 Thor
Keyboard Gigabyte Aivia K8100
Software Windows 10 Pro 64 Bits
If it works on a similar principle, I am afraid not much can be done on latency.
They actually did say that latency will be lower on their tech.
 

redlock81

New Member
Joined
Dec 4, 2022
Messages
1 (0.00/day)
Honestly the fact they didn't compare it directly to the 4090 shows you it's beneath it. And the aggressive pricing tells the story of the bad ray tracing performance. Pretty much another Nvidia win across the board this generation. Sorry AMD.
It's so close in performance that I don't think people will mind. With inflation going up, I believe the almighty dollar will win: performance is that close and the AMD reference card is $600 cheaper, which can cover a CPU and motherboard.
RT is still in its growing stages, and it really depends on how it's implemented. There are far more games without RT than with it, and plenty of games don't look any different with the setting on, they just tank performance. Seriously, who cares about RT? Besides, Unreal Engine 5 has already shown that you don't even need the graphics card's RT hardware to do RT. Frames matter more at this point, as monitors keep pushing the limits.
Also, AIBs have more freedom making cards for AMD: they can raise the power limit to 450 W and use GDDR7 (and yes, that is a thing and available) and easily match the 4090's performance, at least in rasterization. RT is dumb; until it's a total game changer, no one cares!
 