Nope, CPUs have gone even higher. The top desktop chip 10 years back was the 5960X(?), and this year (or next?) we'll have a Threadripper with probably up to 96 cores. And it's definitely more than 12x faster than Intel's best HEDT chips back then; even the top server chips now top out at 128c/256t for AMD. In fact, you could argue that CPUs have progressed far more, partly due to the stagnation with *dozer and Intel deciding to milk quad cores for at least half a decade!
The top Ivy Bridge Xeon chips topped out at 12 cores, so again, vastly lower.
Threadripper Pro is not a desktop processor; Threadripper as a consumer-grade CPU died with the 3990X.
But even if you account for the market niche and multi-die CPUs (which really are multiple CPUs in one package), I don't think IPC has gone up a full 10x from Haswell to Raptor Cove (2013-2023). Operating frequencies increased greatly in the interim as well.
Core counts went from 18 (Haswell-EP) to around 128, so not quite a 10x increase. IPC has probably gone up around 6x, plus an extra GHz on average, but I guess that's about it.
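For what it's worth, multiplying those rough factors out looks like this (a quick sketch; every input below is a guess from the post, not a measurement):

```python
# Rough product of the factors guessed above -- none of these are benchmarks.
ipc_gain   = 6.0   # claimed IPC uplift, Haswell -> Raptor Cove
freq_then  = 3.5   # GHz, assumed Haswell-era average clock
freq_now   = 4.5   # GHz, "an extra GHz on average"
cores_then = 18    # Haswell-EP top part
cores_now  = 128   # current top server parts

per_core  = ipc_gain * (freq_now / freq_then)    # ~7.7x per core
aggregate = per_core * (cores_now / cores_then)  # ~55x aggregate
print(f"per-core: ~{per_core:.1f}x, aggregate: ~{aggregate:.0f}x")
```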
It might, if you compare Piledriver to Zen 4, but AMD CPUs were hopeless garbage until Ryzen came out. It could be worth looking at sometime with some real data, but we all remember how 1st-gen Core i7 CPUs made sport of FX.
Still, GPUs have easily outpaced this growth. GK110 to AD102 is one hell of a leap.
The naked narcissism inside nGreedia must be absolutely awful to have to navigate through.
Ah, yes, I'm sure "nGreedia" engineers are just jumping at the opportunity to work at better companies, such as AMD, perhaps?
For gaming, GPUs have gotten about 10 times faster in the last 10 years. Ten years ago, the fastest GPUs were the 780 Ti and the 290X, and the performance improvement from the 780 Ti to the 4090 at 4K is about 10x. The table below uses the 4K results from TPU's reviews of the GTX 1060, GTX 1080 Ti, RTX 3080, and RTX 4090, respectively.
| GTX 780 Ti → GTX 970 | GTX 970 → GTX 1080 Ti | GTX 1080 Ti → RTX 3080 | RTX 3080 → RTX 4090 |
|---|---|---|---|
| 85/83 | 1/0.36 | 100/53 | 190/99 |
Multiplying all the speedups gives 10.3x, which isn't too far off the multi-threaded performance increase for CPUs in that time. AnandTech's CPU bench can be used to compare the 4770K and the 7950X. There are common applications where the 7950X is as much as 9 times faster than the 4770K, and these applications don't leverage any instructions unique to the newer processor, such as AVX-512. I haven't used the 13900K because their database doesn't have numbers for any Intel CPUs faster than the 12900K.
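As a quick sanity check of the arithmetic (the ratios are the 4K relative-performance figures from the table above; nothing else is assumed):

```python
# Chain the newer/older relative-performance ratios from the table above.
ratios = [
    (85, 83),    # GTX 780 Ti -> GTX 970
    (1, 0.36),   # GTX 970 -> GTX 1080 Ti
    (100, 53),   # GTX 1080 Ti -> RTX 3080
    (190, 99),   # RTX 3080 -> RTX 4090
]
total = 1.0
for newer, older in ratios:
    total *= newer / older
print(f"cumulative 780 Ti -> 4090 speedup: {total:.1f}x")  # ~10.3x
```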
[Attachments 316005 and 316006: CPU benchmark comparison screenshots]
Rather than blaming CPU designers, you should be asking game engine developers why their engines are unable to utilize these CPUs efficiently.
I'm saddened that Bill Dally is misrepresenting TSMC's contribution to these gains. The 28 nm to 5 nm transition isn't worth only a 2.5x increase in GPU resources. From the Titan X to AD102, clock speeds have increased by nearly 2.5x, and the GPU has 6 times more FP32 FLOPS per clock; that's a 15-fold increase in compute related solely to the process. We shouldn't ignore the work done by Nvidia's engineers, but if we take his claim at face value, then a 28 nm 4090 would be only 2.5 times slower than the actual 4090, which is patently ridiculous.
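The back-of-the-envelope math, for anyone who wants to check it (the core counts are the public full-die figures; the clocks are approximate assumptions on my part):

```python
# Titan X (full GM200, 28 nm) vs. full AD102 (4N) -- clocks are approximate.
gm200_fp32_per_clock = 3072 * 2    # 3072 FP32 lanes x 2 ops/clock (FMA)
ad102_fp32_per_clock = 18432 * 2   # 18432 FP32 lanes x 2 ops/clock (FMA)
gm200_clock_ghz = 1.1              # typical boost clock (assumed)
ad102_clock_ghz = 2.7              # typical boost clock (assumed)

per_clock = ad102_fp32_per_clock / gm200_fp32_per_clock  # 6x
clocks    = ad102_clock_ghz / gm200_clock_ghz            # ~2.5x
print(f"FP32/clock: {per_clock:.0f}x, clocks: {clocks:.1f}x, "
      f"combined: {per_clock * clocks:.0f}x")            # ~15x
```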
You're also counting shipping products (and at a relatively low weight class) to normalize for performance. IMHO, the comparison of progress should be made between fully enabled, fully endowed processors configured for their fullest performance, perhaps normalized for frequency, to accurately measure improvements at the architectural level. We don't even have such a product available to the public for Ada Lovelace yet.