
Palit GeForce RTX 4060 Ti GPU Specs Leaked - Boost Clocks of Up to 2685 MHz & 18 Gbps GDDR6 Memory

T0@st

News Editor
Joined
Mar 7, 2023
Messages
2,077 (3.18/day)
Location
South East, UK
More leaks are emerging from Russia regarding NVIDIA's not-yet-officially-confirmed RTX 4060 Ti GPU family - two days ago Marvel Distribution (RU) released details of four upcoming Palit custom-design cards, again confirming the standard RTX 4060 Ti configuration of 8 GB VRAM (plus a 128-bit memory bus). Earlier today hardware tipster momomo_us managed to track down some more pre-launch info (rumors point to a late-May release), courtesy of another Russian e-retailer (extremecomp.ru). The four Palit Dual and StormX custom cards from the previous leak are spotted again, but this new listing provides a few extra details.

Palit's four card offerings share the same basic memory specification of 18 Gbps GDDR6, pointing to a maximum theoretical bandwidth of 288 GB/s - derived from the GPU's confirmed 8 GB, 128-bit memory interface. The standard Dual variant appears to have a stock clock speed of 2310 MHz, the StormX and StormX OC models are faster at 2535 MHz and 2670 MHz (respectively), and the Dual OC is the group leader at 2685 MHz. The TPU database's (speculative) entry for the reference NVIDIA GeForce RTX 4060 Ti lists the base clock as 2310 MHz and the boost clock as 2535 MHz - so the former aligns with the Palit Dual model's normal mode of operation (its boost clock is unknown), and the latter lines up with the standard StormX variant's (presumed) boost mode. Therefore the leaked information likely shows only the boosted clock speeds for Palit's StormX, StormX OC and Dual OC cards.
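The 288 GB/s figure follows directly from the per-pin data rate and the bus width. A minimal back-of-the-envelope sketch of that arithmetic (Python; the helper name is purely illustrative, and the 3060 Ti line is included only for comparison):

```python
def gddr_bandwidth_gb_s(data_rate_gbps_per_pin: float, bus_width_bits: int) -> float:
    """Peak theoretical memory bandwidth in GB/s.

    Each pin transfers data_rate_gbps_per_pin gigabits per second;
    multiplying by the bus width gives total gigabits per second,
    and dividing by 8 converts to gigabytes per second.
    """
    return data_rate_gbps_per_pin * bus_width_bits / 8

# Leaked RTX 4060 Ti configuration: 18 Gbps GDDR6 on a 128-bit bus
print(gddr_bandwidth_gb_s(18, 128))   # 288.0 GB/s
# For comparison, the RTX 3060 Ti: 14 Gbps GDDR6 on a 256-bit bus
print(gddr_bandwidth_gb_s(14, 256))   # 448.0 GB/s
```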



View at TechPowerUp Main Site | Source
 
Joined
Apr 10, 2010
Messages
1,863 (0.35/day)
Location
London
System Name Jaspe
Processor Ryzen 1500X
Motherboard Asus ROG Strix X370-F Gaming
Cooling Stock
Memory 16Gb Corsair 3000mhz
Video Card(s) EVGA GTS 450
Storage Crucial M500
Display(s) Philips 1080 24'
Case NZXT
Audio Device(s) Onboard
Power Supply Enermax 425W
Software Windows 10 Pro
8 GB VRAM?

No way, :laugh:
 
Joined
Dec 22, 2011
Messages
3,890 (0.82/day)
Processor AMD Ryzen 7 3700X
Motherboard MSI MAG B550 TOMAHAWK
Cooling AMD Wraith Prism
Memory Team Group Dark Pro 8Pack Edition 3600Mhz CL16
Video Card(s) NVIDIA GeForce RTX 3080 FE
Storage Kingston A2000 1TB + Seagate HDD workhorse
Display(s) Samsung 50" QN94A Neo QLED
Case Antec 1200
Power Supply Seasonic Focus GX-850
Mouse Razer Deathadder Chroma
Keyboard Logitech UltraX
Software Windows 11
Yep, sadly this won't be able to play the shitty AAA games you're all rushing out and beta testing.
 
Joined
Mar 16, 2017
Messages
2,154 (0.76/day)
Location
Tanagra
System Name Budget Box
Processor Xeon E5-2667v2
Motherboard ASUS P9X79 Pro
Cooling Some cheap tower cooler, I dunno
Memory 32GB 1866-DDR3 ECC
Video Card(s) XFX RX 5600XT
Storage WD NVME 1GB
Display(s) ASUS Pro Art 27"
Case Antec P7 Neo
Hey that’s the same amount of memory bandwidth as my 5600XT…which launched over 3 years ago for $279. Progress in the 6-line of GPUs!
 
Joined
Dec 31, 2020
Messages
999 (0.69/day)
Processor E5-4627 v4
Motherboard VEINEDA X99
Memory 32 GB
Video Card(s) 2080 Ti
Storage NE-512
Display(s) G27Q
Case DAOTECH X9
Power Supply SF450
Memory Bandwidth

3060 Ti: 448.0 GB/s
4060 Ti: 288.0 GB/s

Good times. :shadedshu:

The 4070 is able to stand up to the 3080 with just 504 GB/s versus 760 GB/s thanks to its large L2 cache - that is 33% less bandwidth. The rest is roughly a wash: about 29 TFLOPS from 5888 CUDA cores and 64 ROPs, but running at roughly 50% higher frequency, which makes it the equivalent of an Ampere configuration of 8704 cores / 96 ROPs.

Pretty much the same way, the 4060 Ti is the equivalent of roughly 6144 cores / 72 ROPs. The problem here is the ROPs: 48 ROPs versus 96. That is absolute crap.
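For what it's worth, the TFLOPS side of that equivalence can be checked with simple arithmetic - peak FP32 throughput is 2 x shader count x clock. A minimal Python sketch (the boost clocks are approximate reference figures and the 4060 Ti core count is the rumoured one, so treat the output as ballpark):

```python
def fp32_tflops(cuda_cores: int, boost_clock_ghz: float) -> float:
    """Peak FP32 throughput: 2 FLOPs (one FMA) per core per clock.
    cores x GHz gives GFLOP/s per FLOP-per-clock; x2 for FMA, /1000 for TFLOPS."""
    return 2 * cuda_cores * boost_clock_ghz / 1000

print(fp32_tflops(5888, 2.475))  # RTX 4070  -> ~29.1 TFLOPS
print(fp32_tflops(8704, 1.710))  # RTX 3080  -> ~29.8 TFLOPS
print(fp32_tflops(4352, 2.535))  # RTX 4060 Ti (rumoured 4352 cores) -> ~22.1 TFLOPS
```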
 
Joined
Dec 25, 2020
Messages
6,978 (4.80/day)
Location
São Paulo, Brazil
System Name "Icy Resurrection"
Processor 13th Gen Intel Core i9-13900KS Special Edition
Motherboard ASUS ROG MAXIMUS Z790 APEX ENCORE
Cooling Noctua NH-D15S upgraded with 2x NF-F12 iPPC-3000 fans and Honeywell PTM7950 TIM
Memory 32 GB G.SKILL Trident Z5 RGB F5-6800J3445G16GX2-TZ5RK @ 7600 MT/s 36-44-44-52-96 1.4V
Video Card(s) ASUS ROG Strix GeForce RTX™ 4080 16GB GDDR6X White OC Edition
Storage 500 GB WD Black SN750 SE NVMe SSD + 4 TB WD Red Plus WD40EFPX HDD
Display(s) 55-inch LG G3 OLED
Case Pichau Mancer CV500 White Edition
Power Supply EVGA 1300 G2 1.3kW 80+ Gold
Mouse Microsoft Classic Intellimouse
Keyboard Generic PS/2
Software Windows 11 IoT Enterprise LTSC 24H2
Benchmark Scores I pulled a Qiqi~
Yep, sadly this won't be able to play the shitty AAA games you're all rushing out and beta testing.

and neither will the RX 7600 XT. At least with the last generation, if you were unhappy with the RTX 3070's framebuffer, you could get a 6700 XT.

Nowhere to run now.
 
Joined
May 6, 2023
Messages
59 (0.10/day)
and neither will the RX 7600 XT. At least with the last generation, if you were unhappy with the RTX 3070's framebuffer, you could get a 6700 XT.

Nowhere to run now.
But everywhere to hide (1660 super 336 GB/s). My next move will be the 3060 12GB model (360GB/s). I play them, they don't play me. Ascendance is imminent.
 
Joined
Sep 17, 2014
Messages
22,644 (6.04/day)
Location
The Washing Machine
System Name Tiny the White Yeti
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling CPU: Thermalright Peerless Assassin / Case: Phanteks T30-120 x3
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
VR HMD HD 420 - Green Edition ;)
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
Joined
Sep 27, 2008
Messages
1,210 (0.20/day)
Memory Bandwidth

3060 Ti: 448.0 GB/s
4060 Ti: 288.0 GB/s

Good times. :shadedshu:

Comparing the memory bandwidth between two different architectures doesn't usually come to any useful performance predictions. It's a "huh, neat" figure at best.

The GTX 760 has 80GB/s more memory bandwidth than the GTX 960. Guess which card performs 20% worse than the other?
The RX 480 matches the R9 390 with a 128GB/s memory bandwidth deficit.
 
Joined
Jul 9, 2022
Messages
6 (0.01/day)
Location
EU (still)
Processor 3600
Motherboard Strix B550-F Gaming /2803
Cooling DarkRock
Memory Flare X 3200MHZ 16GB 2x8 / F4-3200C14D-16GFX
Video Card(s) Palit 1080
Storage FireCuda 530
Display(s) Acer Predator / XB241H
Case freestyle
Audio Device(s) onboard
Power Supply Toughpower Grand RGB 850W Gold
Mouse Razer DeathAdder v2
Keyboard Razer Cynosa Light
Software Win11Pro 21H2
With all the fake frames, that's more than enough ^^ I'm waiting for an APU delivering 2080-3070 performance at 35 W max. Also, I'm not buying into x86-64 anymore, and I will never spend a dime on SDRAM/DDR 1/2/3/4/5... it's all old tech. Make stuff modular - let's do like with heaters and cars: ban it and pressure the industry to come up with something new. How long will this BS go on? Nothing has changed over the years, they just press the last drop out of the stone.
 
Joined
Jun 14, 2020
Messages
3,530 (2.14/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
Hey that’s the same amount of memory bandwidth as my 5600XT…which launched over 3 years ago for $279. Progress in the 6-line of GPUs!
But the 4060 Ti will probably have a cache that's 20 times bigger, though.
 
Joined
Dec 25, 2020
Messages
6,978 (4.80/day)
Location
São Paulo, Brazil
System Name "Icy Resurrection"
Processor 13th Gen Intel Core i9-13900KS Special Edition
Motherboard ASUS ROG MAXIMUS Z790 APEX ENCORE
Cooling Noctua NH-D15S upgraded with 2x NF-F12 iPPC-3000 fans and Honeywell PTM7950 TIM
Memory 32 GB G.SKILL Trident Z5 RGB F5-6800J3445G16GX2-TZ5RK @ 7600 MT/s 36-44-44-52-96 1.4V
Video Card(s) ASUS ROG Strix GeForce RTX™ 4080 16GB GDDR6X White OC Edition
Storage 500 GB WD Black SN750 SE NVMe SSD + 4 TB WD Red Plus WD40EFPX HDD
Display(s) 55-inch LG G3 OLED
Case Pichau Mancer CV500 White Edition
Power Supply EVGA 1300 G2 1.3kW 80+ Gold
Mouse Microsoft Classic Intellimouse
Keyboard Generic PS/2
Software Windows 11 IoT Enterprise LTSC 24H2
Benchmark Scores I pulled a Qiqi~
But everywhere to hide (1660 super 336 GB/s). My next move will be the 3060 12GB model (360GB/s). I play them, they don't play me. Ascendance is imminent.

To be fair, absolute memory bandwidth isn't as much of an issue as it used to be, but both companies are hedging their products' performance on large on-die caches.

Last generation, the 6950 XT (512 GB/s) vs. the 3090 Ti (1013 GB/s) already showed how desperately AMD's design relies on its cache hit rate, with performance drastically reduced in software that doesn't have a high hit rate or in situations where raw memory bandwidth is in heavy demand (ultra-high-quality textures, resolutions above 1440p and, most notably, ray tracing).
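A crude way to picture that dependence is to treat the on-die cache as serving a fraction of memory requests, with only the misses falling through to GDDR. A minimal Python sketch (the cache bandwidth and hit rates below are made-up illustrative values, not measured figures):

```python
def effective_bandwidth(dram_gb_s: float, cache_gb_s: float, hit_rate: float) -> float:
    """Naive effective-bandwidth model: hits are served by the on-die cache,
    misses fall through to GDDR. Real GPUs are far more complex, but this
    shows why performance tracks the hit rate so closely."""
    return hit_rate * cache_gb_s + (1 - hit_rate) * dram_gb_s

# 6950 XT-style setup: 512 GB/s GDDR6 plus an Infinity Cache assumed at
# ~1800 GB/s (illustrative number only)
print(effective_bandwidth(512, 1800, 0.70))  # high hit rate (e.g. 1080p): ~1414 GB/s
print(effective_bandwidth(512, 1800, 0.35))  # low hit rate (4K / RT):      ~963 GB/s
```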

The end result is that AMD's design was mediocre for 4K/Ultra gaming and had poor ray tracing performance, something the RTX 3090 and its refresh are perfectly capable of doing well. On the flip side, however, as long as you didn't enable ray tracing, AMD's design was faster at lower resolutions, especially 1080p, so in the end these turned out to be great cards. The RTX 3090 tends to fall behind even the vanilla RX 6800 in pure raster workloads at 1080p in games that are friendly to AMD's architecture.

All in all, the conclusion you can draw from this is the same: the RTX 4060 Ti and the RX 7600 XT are both strictly designed for 1080p gaming, and I suspect their performance is going to fall off a cliff at 1440p, while 4K will be completely unworkable.
 
Joined
May 6, 2023
Messages
59 (0.10/day)
Comparing the memory bandwidth between two different architectures doesn't usually come to any useful performance predictions. It's a "huh, neat" figure at best.

The GTX 760 has 80GB/s more memory bandwidth than the GTX 960. Guess which card performs 20% worse than the other?
The RX 480 matches the R9 390 with a 128GB/s memory bandwidth deficit.
If the bandwidth of the newer-generation product is lower than the previous-gen product's, you can assume there is some memory handicap between the two compared models. I only upgrade to a model that has at least the same (192-bit) or higher numbers in every facet of the card, clock speeds aside, since with an increase in shader cores the clocks (MHz) are usually lower than on a card with fewer cores - for instance, the 1660 Super has 1408 cores @ 1530 MHz and the 3060 has 3584 cores @ 1320 MHz. In conclusion, there is a lot to be said about the memory bandwidth of cards today and how it relates to game performance in different scenarios, but overall, for an upgrade, I wouldn't recommend going backwards -
3060 Ti: 448.0 GB/s
4060 Ti: 288.0 GB/s < --- going backwards like 2 steps forward one step back line dancing.

And I think the large on-die caches are clouding this issue. It sounds gimmicky - a way to make you think that low memory bandwidth is OK.
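On the cores-versus-clocks point above: multiplying core count by clock shows why the 3060's lower clock speed isn't a step back in itself. A minimal Python sketch using the base clocks quoted above (it ignores per-architecture differences in work done per core per clock, so it's only a rough relative measure):

```python
def shader_throughput(cores: int, clock_mhz: float) -> float:
    """Rough relative shader throughput: cores x clock (arbitrary units).
    Ignores Turing vs. Ampere differences in per-core work per clock."""
    return cores * clock_mhz

gtx_1660_super = shader_throughput(1408, 1530)
rtx_3060 = shader_throughput(3584, 1320)
print(rtx_3060 / gtx_1660_super)  # ~2.2x, despite the ~14% lower clock
```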
 
Joined
Dec 25, 2020
Messages
6,978 (4.80/day)
Location
São Paulo, Brazil
System Name "Icy Resurrection"
Processor 13th Gen Intel Core i9-13900KS Special Edition
Motherboard ASUS ROG MAXIMUS Z790 APEX ENCORE
Cooling Noctua NH-D15S upgraded with 2x NF-F12 iPPC-3000 fans and Honeywell PTM7950 TIM
Memory 32 GB G.SKILL Trident Z5 RGB F5-6800J3445G16GX2-TZ5RK @ 7600 MT/s 36-44-44-52-96 1.4V
Video Card(s) ASUS ROG Strix GeForce RTX™ 4080 16GB GDDR6X White OC Edition
Storage 500 GB WD Black SN750 SE NVMe SSD + 4 TB WD Red Plus WD40EFPX HDD
Display(s) 55-inch LG G3 OLED
Case Pichau Mancer CV500 White Edition
Power Supply EVGA 1300 G2 1.3kW 80+ Gold
Mouse Microsoft Classic Intellimouse
Keyboard Generic PS/2
Software Windows 11 IoT Enterprise LTSC 24H2
Benchmark Scores I pulled a Qiqi~
i thought Nvidia pulled out of Russia??

Despite economic sanctions by the West, I've read that life largely goes on as usual in Russia. They've had access to consumer electronics such as iPhones at practically the same prices as in Europe, as well as most of the goods they'd normally have, by way of grey imports.
 
Joined
Feb 15, 2018
Messages
259 (0.10/day)
Despite economic sanctions by the West, I've read that life largely goes on as usual in Russia. They've had access to consumer electronics such as iPhones at practically the same prices as in Europe, as well as most of the goods they'd normally have, by way of grey imports.
Yeah, you're absolutely right. Also, NVIDIA has lots of unsold inventory after the crypto-mining crash, plus life returning to normal after lockdown.
Selling them to Russia via China would only do them good, without caring about the other things.
 
Joined
Nov 15, 2020
Messages
929 (0.62/day)
System Name 1. Glasshouse 2. Odin OneEye
Processor 1. Ryzen 9 5900X (manual PBO) 2. Ryzen 9 7900X
Motherboard 1. MSI x570 Tomahawk wifi 2. Gigabyte Aorus Extreme 670E
Cooling 1. Noctua NH D15 Chromax Black 2. Custom Loop 3x360mm (60mm) rads & T30 fans/Aquacomputer NEXT w/b
Memory 1. G Skill Neo 16GBx4 (3600MHz 16/16/16/36) 2. Kingston Fury 16GBx2 DDR5 CL36
Video Card(s) 1. Asus Strix Vega 64 2. Powercolor Liquid Devil 7900XTX
Storage 1. Corsair Force MP600 (1TB) & Sabrent Rocket 4 (2TB) 2. Kingston 3000 (1TB) and Hynix p41 (2TB)
Display(s) 1. Samsung U28E590 10bit 4K@60Hz 2. LG C2 42 inch 10bit 4K@120Hz
Case 1. Corsair Crystal 570X White 2. Cooler Master HAF 700 EVO
Audio Device(s) 1. Creative Speakers 2. Built in LG monitor speakers
Power Supply 1. Corsair RM850x 2. Superflower Titanium 1600W
Mouse 1. Microsoft IntelliMouse Pro (grey) 2. Microsoft IntelliMouse Pro (black)
Keyboard Leopold High End Mechanical
Software Windows 11
$200 card max.
 
Joined
Sep 27, 2008
Messages
1,210 (0.20/day)
If the bandwidth of the newer-generation product is lower than the previous-gen product's, you can assume there is some memory handicap between the two compared models. I only upgrade to a model that has at least the same (192-bit) or higher numbers in every facet of the card, clock speeds aside, since with an increase in shader cores the clocks (MHz) are usually lower than on a card with fewer cores - for instance, the 1660 Super has 1408 cores @ 1530 MHz and the 3060 has 3584 cores @ 1320 MHz. In conclusion, there is a lot to be said about the memory bandwidth of cards today and how it relates to game performance in different scenarios, but overall, for an upgrade, I wouldn't recommend going backwards -
3060 Ti: 448.0 GB/s
4060 Ti: 288.0 GB/s < --- going backwards like 2 steps forward one step back line dancing.

Could you repeat what you said, in English this time?
 
Joined
Jul 20, 2020
Messages
1,149 (0.71/day)
System Name Gamey #1 / #3
Processor Ryzen 7 5800X3D / Ryzen 7 5700X3D
Motherboard Asrock B450M P4 / MSi B450 ProVDH M
Cooling IDCool SE-226-XT / IDCool SE-224-XTS
Memory 32GB 3200 CL16 / 16GB 3200 CL16
Video Card(s) PColor 6800 XT / GByte RTX 3070
Storage 4TB Team MP34 / 2TB WD SN570
Display(s) LG 32GK650F 1440p 144Hz VA
Case Corsair 4000Air / TT Versa H18
Power Supply EVGA 650 G3 / EVGA BQ 500
If the bandwidth of the newer-generation product is lower than the previous-gen product's, you can assume there is some memory handicap between the two compared models. I only upgrade to a model that has at least the same (192-bit) or higher numbers in every facet of the card, clock speeds aside, since with an increase in shader cores the clocks (MHz) are usually lower than on a card with fewer cores - for instance, the 1660 Super has 1408 cores @ 1530 MHz and the 3060 has 3584 cores @ 1320 MHz. In conclusion, there is a lot to be said about the memory bandwidth of cards today and how it relates to game performance in different scenarios, but overall, for an upgrade, I wouldn't recommend going backwards -
3060 Ti: 448.0 GB/s
4060 Ti: 288.0 GB/s < --- going backwards like 2 steps forward one step back line dancing.

And I think the large on die caches are clouding this issue. Sounds gimmicky and a way for them to have you think that low memory bandwidth is OK.

What matters is FPS for your money. Paper specs are no way to buy a video card, otherwise you'd expect Nvidia cards to be 2-3x the speed of AMD ones based on core counts.

4090 - 16384 cores
7900XT - 5376 cores

205% more cores but only 46% faster. Even in RT it's only 80% faster. But then it doesn't cost 3x the AMD card, only 2x.
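Putting rough numbers on that (a minimal Python sketch; the relative-performance figure is the one quoted above, and the launch MSRPs are my own assumption, used only to illustrate the perf-per-dollar point):

```python
# Quoted figures: 4090 has 16384 cores, 7900 XT has 5376; ~46% faster overall.
cores_4090, cores_7900xt = 16384, 5376
perf_4090, perf_7900xt = 1.46, 1.00      # relative raster performance (from the post)
msrp_4090, msrp_7900xt = 1599, 899       # assumed launch MSRPs in USD

print(cores_4090 / cores_7900xt)                                # ~3.05x the cores ("205% more")
print((perf_4090 / cores_4090) / (perf_7900xt / cores_7900xt))  # ~0.48x the performance per core
print((perf_4090 / msrp_4090) / (perf_7900xt / msrp_7900xt))    # ~0.82x the performance per dollar
```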
 
Joined
May 6, 2023
Messages
59 (0.10/day)
What matters is FPS for your money. Paper specs are no way to buy a video card, otherwise you'd expect Nvidia cards to be 2-3x the speed of AMD ones based on core counts.

4090 - 16384 cores
7900XT - 5376 cores

205% more cores but only 46% faster. Even in RT it's only 80% faster. But then it doesn't cost 3x the AMD card, only 2x.

I don't know of too many outlets that let you rent video cards, so there isn't much alternative to going by paper specs. Regardless, so far I have chosen very well for the money spent and for the end result - after some compromises in game settings, yes, but 90 percent of the time I'm able to end up with what is, IMO, an improved graphical scenario compared to the default settings, and in some cases even the highest preset, all in an effort to maintain clear visual fidelity and high framerates, i.e. over 60 FPS at a minimum. So far so good. The 1660 Super has done so well, and frankly is still doing very well even now in Hogwarts Legacy and Cyberpunk - coincidentally, Hogwarts' recommended card IS the 1660 Super, which I find odd, since the recommended spec is usually a more powerful, higher-end card. The 3060 is the next logical choice, not just for me but for a lot of people who would like 12 GB of VRAM and quite a bit more shading power without spending an insane amount, because any higher and there are certain system requirements a lot of people won't meet, like power draw and size restrictions. And yes, AM5, DDR5 and PCIe 5.0 are upon us, but really there is a lot of headroom left for people on the previous-gen platform. Here is my current system:

Amd 5600x
16gb 3600 xmp
gigabyte x570
wd sn770 nvme 1tb
evga 1660 super oc
corsair rm650 psu

This would be the final upgrade to the system - unless I wanted to double the system RAM, which wouldn't really make a difference - and that 3060 would make this system viable for another five years at most. Instead of building a whole new platform for a couple grand (because it would be kick-ass), spend around 300 to 400 now on a card.

P.S. I forgot to mention I play at 1440p only. Since I got my latest monitor - a 27" LG HDR G-Sync 1440p panel - it has been the mainstay, and really, who would want to go back? The thing is, it's key to know exactly which settings in a game to choose so the framerate stays above the almighty 60 FPS. For instance, in Hogwarts Legacy, if you turn the effects setting to High or Ultra it totally changes the reflection effect on just about everything and kills the framerate - but here's the thing, it looks better on Medium than it does on the higher settings. Why? Because the higher settings use a completely different reflection effect that doesn't even look as good but uses more power - most likely ray-marched reflections for High and Ultra and a cubemap for Medium and lower, but a very high-quality cubemap. So these "compromises", lol, are what I'm talking about. It's very specific and time-consuming for some games, but it's worth the effort, because in the end it often gives you something even better looking than the so-called Ultra setting. And finally, let me say this: this is the enthusiast aspect of all this computer stuff, this is what it's all about - tinkering, testing and retesting to find the OPTIMAL settings for your PC. I love the word optimal, btw. OPTIMIZE!
 
Last edited:
Joined
Dec 31, 2020
Messages
999 (0.69/day)
Processor E5-4627 v4
Motherboard VEINEDA X99
Memory 32 GB
Video Card(s) 2080 Ti
Storage NE-512
Display(s) G27Q
Case DAOTECH X9
Power Supply SF450
It happened before with the GTX 960 and GTX 760, where we had the same step back - 192 to 112 GB/s - but the performance was the same. So clearly, if you're looking for any improvement, this isn't for you.

After this weak generation comes GDDR7 with 576 GB/s over the same 128-bit bus. But they may decide to stick with GDDR6 for the low end - worst case, 360 GB/s.

Things like DLSS and neural compression just add latency, but the L2 cache is a good thing that probably keeps the complete frame on-die instead of in memory, so there is less burden on the interface overall.
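For reference, working those figures backwards over a 128-bit bus gives the per-pin data rates they imply. A quick Python check (treating 576 GB/s and 360 GB/s as the figures quoted above, not confirmed specs):

```python
def implied_pin_rate_gbps(bandwidth_gb_s: float, bus_width_bits: int) -> float:
    """Per-pin data rate (Gbps) implied by a total bandwidth over a given bus."""
    return bandwidth_gb_s * 8 / bus_width_bits

print(implied_pin_rate_gbps(576, 128))  # 36.0 Gbps (GDDR7 figure from the post)
print(implied_pin_rate_gbps(360, 128))  # 22.5 Gbps (GDDR6 worst case from the post)
print(implied_pin_rate_gbps(288, 128))  # 18.0 Gbps (leaked 4060 Ti)
```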
 
Joined
May 6, 2023
Messages
59 (0.10/day)
It happened before with the GTX 960 and GTX 760, where we had the same step back - 192 to 112 GB/s - but the performance was the same. So clearly, if you're looking for any improvement, this isn't for you.

After this weak generation comes GDDR7 with 576 GB/s over the same 128-bit bus. But they may decide to stick with GDDR6 for the low end - worst case, 360 GB/s.

Things like DLSS and neural compression just add latency, but the L2 cache is a good thing that probably keeps the complete frame on-die instead of in memory, so there is less burden on the interface overall.
I understand and agree with all you're saying; the only point I'm trying to make is that there is a prudent upgrade path for those who don't have as much cash, and I think I have discovered it. The 3060 is a generous upgrade over the 1660 Super - under the right settings and supporting system (my current system) I expect to double my framerates in some games, and I'm expecting at least a 25 FPS increase in all the major titles: Flight Simulator, Cyberpunk, Hogwarts. I also never use DSR or FSR upscaling of any kind, just straight rendering with no V-Sync, because the G-Sync monitor handles it all at 1440p. It's a fast IPS.
 
Last edited:
Joined
Dec 25, 2020
Messages
6,978 (4.80/day)
Location
São Paulo, Brazil
System Name "Icy Resurrection"
Processor 13th Gen Intel Core i9-13900KS Special Edition
Motherboard ASUS ROG MAXIMUS Z790 APEX ENCORE
Cooling Noctua NH-D15S upgraded with 2x NF-F12 iPPC-3000 fans and Honeywell PTM7950 TIM
Memory 32 GB G.SKILL Trident Z5 RGB F5-6800J3445G16GX2-TZ5RK @ 7600 MT/s 36-44-44-52-96 1.4V
Video Card(s) ASUS ROG Strix GeForce RTX™ 4080 16GB GDDR6X White OC Edition
Storage 500 GB WD Black SN750 SE NVMe SSD + 4 TB WD Red Plus WD40EFPX HDD
Display(s) 55-inch LG G3 OLED
Case Pichau Mancer CV500 White Edition
Power Supply EVGA 1300 G2 1.3kW 80+ Gold
Mouse Microsoft Classic Intellimouse
Keyboard Generic PS/2
Software Windows 11 IoT Enterprise LTSC 24H2
Benchmark Scores I pulled a Qiqi~
What matters is FPS for your money. Paper specs are no way to buy a video card, otherwise you'd expect Nvidia cards to be 2-3x the speed of AMD ones based on core counts.

4090 - 16384 cores
7900XT - 5376 cores

205% more cores but only 46% faster. Even in RT it's only 80% faster. But then it doesn't cost 3x the AMD card, only 2x.

Isn't Ada a dual-issue design just like Ampere? Which means the 4090 actually has 8192 units that double up for FP32 workloads (for example, the 3090 is marketed as having 10496 CUDA cores but actually contains 5248 shader units spread across its 82 SM blocks, out of the 84 present in a full die such as the 3090 Ti's)?
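The SM arithmetic behind those numbers, for anyone following along (a minimal Python sketch; the per-SM split into 64 dedicated FP32 lanes plus 64 shared FP32/INT32 lanes is how NVIDIA describes Ampere and Ada, but the framing here is simplified):

```python
def marketed_cuda_cores(sm_count: int) -> int:
    """Ampere/Ada marketing counts 128 FP32 lanes per SM
    (64 dedicated FP32 + 64 shared FP32/INT32)."""
    return sm_count * 128

def dedicated_fp32_units(sm_count: int) -> int:
    """Only 64 lanes per SM are FP32-only; the other 64 double as INT32."""
    return sm_count * 64

print(marketed_cuda_cores(82), dedicated_fp32_units(82))    # RTX 3090: 10496 / 5248
print(marketed_cuda_cores(84), dedicated_fp32_units(84))    # RTX 3090 Ti (full GA102): 10752 / 5376
print(marketed_cuda_cores(128), dedicated_fp32_units(128))  # RTX 4090: 16384 / 8192
```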

In any case, while the physical die area of Navi 31 is smaller than AD102's, it doesn't contain the humongous L2 cache its competitor has, nor any of its specialty features such as tensor cores, which should bring the die area dedicated to shader processing on Navi 31 significantly closer to the amount of area used in AD102 for the same purpose. I genuinely believe AMD designed it targeting AD102, going by their earliest narrative and claimed performance gains over the 6950 XT... except that they failed so miserably to achieve that goal that I actually pity them.

I don't understand why the RX 7900 XTX turned out to be such a disaster; it must contain very severe and potentially unfixable hardware errata, because if you look at it objectively, it's architected really well. I am no GPU engineer, but I don't really see any major problem with the way RDNA 3 is architected or how its resources are managed and positioned internally. Even its programmability seems to be at least as flexible as the competition's. At first glance it seems like a really well-thought-out architecture from programmability to execution, but it just doesn't pull its weight when put next to Ada. I refuse to believe AMD's drivers are that bad, not after witnessing first-hand the insane amount of really hard work the Radeon team has put into them, even if I sound somewhat unappreciative of said efforts sometimes (but trust me, I am not). It's a really good read, and even as a layman you should be able to more or less come away with an understanding of the hardware's inner workings:


Despite my often harsh tone towards AMD, I really think they have the potential to reverse this and make an excellent RDNA 4 that is competitive with Blackwell. Regardless, I don't think I will end up with a 5090 in my system if NVIDIA keeps their pricing scheme the way it is.
 