
Intel's Core Ultra 9 285K Performance Claims Leaked, Doesn't Beat i9-14900K at Gaming

Joined
Jan 11, 2022
Messages
768 (0.76/day)
Man, 2024 is the year every company decided they dgaf about gamers....
CPUs have only been marginal in boosting gaming performance for many years now, in a market segment that's been guided toward getting midrange CPUs and splurging on graphics cards for two decades now.
 

SL2

Joined
Jan 27, 2006
Messages
2,305 (0.34/day)
Nothing surprising here

 
Joined
Jun 10, 2014
Messages
2,970 (0.79/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
TBH yes I agree. I think we are reaching a clock rate ceiling like we did with NetBurst. We can try to increase IPC even more, but I don't think that will be the primary shift.
Games will only scale with faster CPUs until the CPU is no longer the bottleneck. If you e.g. had a CPU 10x faster per core than current Raptor Lake CPUs, you wouldn't see much of a difference in most games, and even the few games that do scale with the CPU wouldn't scale very far. But a few years down the line, you'll probably see a larger difference as game engines become more demanding of CPUs. Hopefully some of this is used for something useful that improves the games, but unfortunately a lot of it will probably be bloat. This is already evident in those games which are very sensitive to large L3 caches, which is a symptom of bloated code.

With coding languages and game engines being more advanced than they were, I actually think we will start to see a shift in gaming where parallelism improves in either the engine, the software stack, or the underlying technology APIs. Honestly, probably all of them.
For the most part, it's mostly secondary workloads that get parallelized in games, like loading assets, audio processing or other async tasks. The two most performance-critical elements, the game simulation (what we used to call the "game loop" in the old days) and the main rendering thread, will continue to dictate the performance of games. Smaller tasks may be delegated to smaller worker threads, but scaling this greatly increases the risk of delays, which results in stutter, or worse, glitches like we see in so many games now.
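To make that concrete, here is a minimal sketch (my own illustration, not taken from any particular engine; simulate, render and load_asset are hypothetical stand-ins): the two critical-path tasks stay serial on the main thread, while a secondary job like asset loading runs on a worker via std::async and is only polled, never waited on, inside the frame.

#include <chrono>
#include <future>
#include <string>

// Illustrative sketch only: stand-ins for engine work, not real engine code.
std::string load_asset(const std::string& path) { return "data:" + path; }  // pretend slow disk I/O
void simulate(double /*dt*/) { /* update game state; must run every frame, in order */ }
void render()                { /* build and submit draw calls; also serial per frame */ }

int main() {
    // Secondary workload: kicked off once, runs concurrently with the loop below.
    auto pending = std::async(std::launch::async, load_asset, "level2.pak");

    for (int frame = 0; frame < 600; ++frame) {
        simulate(1.0 / 60.0);   // the critical path: these two bound the frame time,
        render();               // no matter how many extra cores sit idle

        // Poll the worker without blocking; having to wait here is exactly where stutter comes from.
        if (pending.valid() &&
            pending.wait_for(std::chrono::seconds(0)) == std::future_status::ready) {
            std::string asset = pending.get();  // hand the loaded asset to the game state
            (void)asset;
        }
    }
}

No matter how many worker threads you add for jobs like this, the frame time is still bounded by the serial simulate + render pair.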

If multithreading is to provide significant gains in gaming performance in the future, there would have to be different kinds of changes than we've seen so far. As latency quickly adds up when trying to synchronize an increasing number of threads, efforts to reduce latency or even "guarantee" deadlines would be required. Firstly, a much faster OS scheduler, and probably some semi-"RT"-like features so threads are undisturbed by other tasks. Secondly, graphics drivers etc. would need to behave more like in an RT system, and thirdly, possibly HW changes to streamline communication.
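For illustration only, using Linux-specific pthread/sched calls and a hypothetical critical_worker function: this is roughly what that "semi-RT" treatment of a latency-critical game thread could look like today, pinning it to a core and requesting a real-time scheduling class so the OS scheduler disturbs it less (SCHED_FIFO normally needs elevated privileges).

#include <cstdio>
#include <pthread.h>
#include <sched.h>

// Sketch of "semi-RT" thread treatment; the worker body is a placeholder.
static void* critical_worker(void*) {
    // latency-sensitive work would live here (e.g. feeding the render thread)
    return nullptr;
}

int main() {
    pthread_t t;
    pthread_create(&t, nullptr, critical_worker, nullptr);

    // Pin the worker to core 2 so the scheduler doesn't migrate it mid-frame.
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(2, &set);
    pthread_setaffinity_np(t, sizeof(set), &set);

    // Ask for a real-time scheduling class (typically needs CAP_SYS_NICE or root).
    sched_param sp{};
    sp.sched_priority = 10;
    if (pthread_setschedparam(t, SCHED_FIFO, &sp) != 0)
        std::perror("pthread_setschedparam");

    pthread_join(t, nullptr);
}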

But while multithreading often gets the most attention, optimizing for ILP is much more important for performance scaling, whether it's for gaming or user-interactive applications. For smaller work chunks which need to be synchronized, multithreading can only get you so far before overhead or latency bottlenecks it, but modern CPUs are also increasingly superscalar, which means the relative performance gains from writing clean, efficient code are larger than ever. And while CPU frontends are increasingly advanced, e.g. Meteor Lake improves branch misprediction recovery, the gains from saturating the pipeline are even greater. The bigger problem here is the software practices which are popular today, especially how OOP, abstraction and generalization are employed. It is remarkable how much having dense logic affects CPU performance. But at some point I would expect compilers and potentially ISAs to evolve in order to scale with wider CPU architectures, hopefully in a better way than Itanium. :)
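As a toy illustration of the "dense logic" point (my own example; the Entity/Coin types are made up): the same summation written with virtual dispatch over heap-allocated objects versus a plain loop over contiguous floats. The second form gives a wide superscalar core predictable branches and independent work to keep the pipeline full, and it is trivially vectorizable.

#include <memory>
#include <vector>

// Abstraction-heavy version: one virtual call and a pointer chase per element.
struct Entity {
    virtual ~Entity() = default;
    virtual float value() const = 0;
};
struct Coin : Entity {
    float v;
    explicit Coin(float x) : v(x) {}
    float value() const override { return v; }
};

float sum_oop(const std::vector<std::unique_ptr<Entity>>& es) {
    float s = 0.0f;
    for (const auto& e : es)
        s += e->value();        // indirect call + scattered heap loads
    return s;
}

// Data-oriented version: same result, contiguous and branch-predictable.
float sum_dense(const std::vector<float>& vs) {
    float s = 0.0f;
    for (float v : vs)
        s += v;
    return s;
}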

So I guess Arrow Lake is going to be like Zen 5: only slight improvements here and there, and the main focus seems to be improving efficiency.
If anything, I'm hoping for more consistent performance. Pushing clock speeds too far leads to very unstable clock speeds, and at least for some of us that may be more annoying than slightly lower but more consistent performance. That is at least my purely anecdotal and subjective impression from comparing Raptor Lake (i5-13600K) to Comet Lake (i7-10700K) at work, even though Raptor Lake has clearly higher peak performance.

Hopefully this will reduce expectations and hype about Arrow Lake's performance increases, though. Zen 5 had such a bad reception due to overhype, which led to massive disappointment at launch when people saw it did not improve as much as they were expecting.
Overhype serves no one.
Unlike most, I'm not that disappointed with Zen 5, and I'm very curious to see how it performs in upcoming Threadripper models.
 
Joined
Apr 13, 2023
Messages
43 (0.08/day)
Weird, Arrow Lake has a 3-node advantage over Raptor Lake. It should be doing far better than just using less power.

It's not even that; PL2 is rated at 250 W in the leaked materials, so that's just 3 W less than the 14900K.

With HT removed it will be slower in MT workloads. ST performance doesn't matter much these days.

The only advantages are native support for DDR5-6400 memory, up from 5600, and I think a dedicated PCIe 5.0 NVMe port, if someone plans to upgrade now.
Oh, and the theoretical lack of the Vmin Shift hardware bug.
 

tfp

Joined
Jun 14, 2023
Messages
71 (0.15/day)
It's not even that; PL2 is rated at 250 W in the leaked materials, so that's just 3 W less than the 14900K.

With HT removed it will be slower in MT workloads. ST performance doesn't matter much these days.

The only advantages are native support for DDR5-6400 memory, up from 5600, and I think a dedicated PCIe 5.0 NVMe port, if someone plans to upgrade now.
Oh, and the theoretical lack of the Vmin Shift hardware bug.

ST performance does matter, and MT workloads only matter up to a point. It really depends on how many threads your software can actually use. For gaming, 6 to 8 cores with or without HT is still enough, otherwise AMD's 12 and 16 core chips would wipe the floor regardless of some small latency issues, like they do in other heavily threaded applications. Really, that is what matters: most people don't need more than 8 cores or threads for things to be fast. If someone fits the use case for a really heavily threaded chip, well, great, buy one, but they are not the vast majority of people.

Maybe in the future only MT workloads will matter and then we will all have slower 100+ core chips, but I doubt it.
 
Joined
Nov 16, 2023
Messages
1,148 (3.48/day)
Location
Nowhere
System Name I don't name my rig
Processor 14700K
Motherboard Asus TUF Z790
Cooling Air/water/DryIce
Memory DDR5 G.Skill Z5 RGB 6000mhz C36
Video Card(s) RTX 4070 Super
Storage 980 Pro
Display(s) Some LED 1080P TV
Case Open bench
Audio Device(s) Some Old Sherwood stereo and old cabinet speakers
Power Supply Corsair 1050w HX series
Mouse Razor Mamba Tournament Edition
Keyboard Logitech G910
VR HMD Quest 2
Software Windows
Benchmark Scores Max Freq 13700K 6.7ghz DryIce Max Freq 14700K 7.0ghz DryIce Max all time Freq FX-8300 7685mhz LN2
I'm an honest guy. Gonna just tell it like it is. Let me reword the thread title for you.

The 285K is NOT appealing. It may have a great Cinebench score, but it games in Cyberpunk 2077 like shit.

FnA, that's a nail in the coffin pre-release!!! OUCH! :nutkick:
 
Joined
Jan 14, 2019
Messages
11,798 (5.62/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Windows 10 Pro
Hang on... What's that 447 W over there? Are they trying to tell me that this thing compares well to the 7950X3D while eating half a kilowatt? :wtf:

They must be out of their minds to think that this is acceptable on any level.
 

SL2

Joined
Jan 27, 2006
Messages
2,305 (0.34/day)
Hang on... What's that 447 W over there? Are they trying to tell me that this thing compares well to the 7950X3D while eating half a kilowatt? :wtf:

They must be out of their minds to think that this is acceptable on any level.
You know a 14900K alone doesn't draw 500 W in games, that's insane. I think it's the whole system including a graphics card.
 
Joined
May 7, 2023
Messages
562 (1.07/day)
Processor Ryzen 5700x
Motherboard Gigabyte Auros Elite AX V2
Cooling Thermalright Peerless Assassin SE White
Memory TeamGroup T-Force Delta RGB 32GB 3600Mhz
Video Card(s) PowerColor Red Dragon Rx 6800
Storage Fanxiang S660 1TB, Fanxiang S500 Pro 1TB, BraveEagle 240GB SSD, 2TB Seagate HDD
Case Corsair 4000D White
Power Supply Corsair RM750x SHIFT
So AMD Ryzen 9000 series shows 5-12% gains vs 7000 series, epic failure, stagnant, sidegrade, Intel, can't beat last gen in gaming, butttttttttttttttttttt power savings, now 250w vs 350w+ WINNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNN

You know a 14900K alone doesn't draw 500 W in games, that's insane. I think it's the whole system including a graphics card. (169.5 W seems to be a copy paste error from average application power draw, it's the same error in the 14700K review.)

So 145 - 80 = 65W in games
No, not just in gaming but CPU intensive tasks, rendering, editing, CPU intensive tasks, those CPUs literally sup 400 W of power, double that of Ryzen if not more, and in gaming, where yes it might not be 500 W, it's still 150-200 W vs about 80 W.

We get the same from the Nvidia apologisers, saying AMD uses 30w at idle with their Intel thermonuclear reactors, it's pretty fucking funny :laugh:
 
Joined
Jan 14, 2019
Messages
11,798 (5.62/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Windows 10 Pro
You know a 14900K alone doesn't draw 500 W in games, that's insane. I think it's the whole system including a graphics card. (169.5 W seems to be a copy paste error from average application power draw, it's the same error in the 14700K review.)

So 145 - 80 = 65W in games
Of course not in games. But I don't want my CPU anywhere near that number in any case. If it's total system power, then I guess that's fine, although with every system being different, such data doesn't say anything to me. I only care about individual component power consumption to figure out cooling needs.
 
Joined
Jan 18, 2021
Messages
139 (0.10/day)
Processor Core i7-12700
Motherboard MSI B660 MAG Mortar
Cooling Noctua NH-D15
Memory G.Skill Ripjaws V 64GB (4x16) DDR4-3600 CL16 @ 3466 MT/s
Video Card(s) AMD RX 6800
Storage Too many to list, lol
Display(s) Gigabyte M27Q
Case Fractal Design Define R5
Power Supply Corsair RM750x
Mouse Too many to list, lol
Keyboard Keychron low profile
Software Fedora, Mint
So AMD Ryzen 9000 series shows 5-12% gains vs 7000 series, epic failure, stagnant, sidegrade, Intel, can't beat last gen in gaming, butttttttttttttttttttt power savings, now 250w vs 350w+ WINNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNN
I don't think any sane person criticized Zen 5 solely for its unimpressive performance uplift over Zen 4. The criticisms, rather, were:

- AMD wildly over-promised on performance
- pricing

It has become AMD's habit to set prices too high just long enough to get crucified in initial reviews, then almost immediately drop those prices after the damage is done. It's an unforced error, and it's sad to watch if you're remotely interested in healthy competition. In this case, AMD went a step further, magnifying their error by arguing with reviewers about their benchmark results. And circumstances magnified the error even more: AMD's flailing over Zen 5 actually took pressure off Intel, which was in the process of immolating its reputation via the ongoing Raptor-Lake-degradation drama.

But sure, I agree; Zen 5 isn't bad. The product itself doesn't deserve much criticism.
 

SL2

Joined
Jan 27, 2006
Messages
2,305 (0.34/day)
So AMD Ryzen 9000 series shows 5-12% gains vs 7000 series, epic failure, stagnant, sidegrade, Intel, can't beat last gen in gaming, butttttttttttttttttttt power savings, now 250w vs 350w+
Did you miss the part where AMD claimed Zen 5 would be much faster than it is, whereas this whole topic is about Intel saying Arrow Lake won't be any faster than Raptor? Do you even understand the difference?

This is the weakest post of the day lol

No, not just in gaming but CPU intensive tasks, rendering, editing, CPU intensive tasks, those CPUs literally sup 400 W of power, double that of Ryzen if not more, and in gaming, where yes it might not be 500 W, it's still 150-200 W vs about 80 W.
Well, look again. It clearly says AVERAGE (in Chinese, but no need to know that anyway) FPS (which I bet you can read), yeah that's for games, which means including a high-end graphics card, and you know it'll be a 4090 (and not AMD or Arc), and you know how much power they draw.

It doesn't say 150-200 W anywhere. I already showed you the comparison with TPU's review. You're just making shit up, but since you mentioned "CPU intensive tasks" in the same sentence TWICE, I'd suggest you take a nap before replying.

Of course not in games. But I don't want my CPU anywhere near that number in any case. If it's total system power, then I guess that's fine, although with every system being different, such data doesn't say anything to me. I only care about individual component power consumption to figure out cooling needs.
What we want here is irrelevant. You know it must be the whole system.
 
Joined
Jan 14, 2019
Messages
11,798 (5.62/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Windows 10 Pro
What we want here is irrelevant. You know it must be the whole system.
One can never be sure these days. Anyway, if it's CPU only, then it's insane; if it's total system power, then it's useless info.
 
Joined
Nov 26, 2021
Messages
1,569 (1.49/day)
Location
Mississauga, Canada
Processor Ryzen 7 5700X
Motherboard ASUS TUF Gaming X570-PRO (WiFi 6)
Cooling Noctua NH-C14S (two fans)
Memory 2x16GB DDR4 3200
Video Card(s) Reference Vega 64
Storage Intel 665p 1TB, WD Black SN850X 2TB, Crucial MX300 1TB SATA, Samsung 830 256 GB SATA
Display(s) Nixeus NX-EDG27, and Samsung S23A700
Case Fractal Design R5
Power Supply Seasonic PRIME TITANIUM 850W
Mouse Logitech
VR HMD Oculus Rift
Software Windows 11 Pro, and Ubuntu 20.04
Joined
Jan 14, 2019
Messages
11,798 (5.62/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Windows 10 Pro

tfp

Joined
Jun 14, 2023
Messages
71 (0.15/day)
It's not useless if the graphics card is the same and it's really just the CPU/MB that is different, and it's Intel vs Intel. It shows that at a system level there are savings.

These are all rumors and leaks. All it's doing is hinting at what we will see in the coming days when real reviews are released. This debate is not going to conclude until we have real numbers from review sites.
 

SL2

Joined
Jan 27, 2006
Messages
2,305 (0.34/day)
It makes comparisons near impossible.
You're changing the subject, and I don't know why. We're probably just supposed to compare those two systems in that pic with each other. For the same reason, you don't necessarily compare system power draw between different reviewers.

Your initial point was
Hang on... What's that 447 W over there?
I tried to explain that it must be system power, as in there's nothing strange about that power draw.
What game, what resolution, what graphics card, rest of system specs, etc.
TPU doesn't show resolution in power draw tests for CPUs either. It is, however, included in game efficiency, so I'd guess it's the same resolution in power draw.

Finally, this is LEAKED INFO. There are most likely footnotes about all the settings and specifications, but they're not posted here.
There are too many unknowns, that's why I'm saying that total system power is useless info.
Maybe, but like I said, this is not CPU power draw only.

Again, what we would want from a portion of a leaked presentation under NDA at the time is irrelevant. It wasn't for the public eye to begin with, and it's not complete.

My ugly calculation says it runs at 65 W average in games, right between 12600K and 12700K. We'll see in two weeks how close it is. :roll:


I guess we'll get more info in a few hours.


________________________________________________________________________________________________________________


Edit: I guess we didn't have to wait that long.



Hotspot not in the center, of course.
 
Joined
Jan 14, 2019
Messages
11,798 (5.62/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Windows 10 Pro
You're changing the subject, and I don't know why. We're probably just supposed to compare those two systems in that pic with each other. For the same reason, you don't necessarily compare system power draw between different reviewers.
I'm not comparing anything against a 14900K because I don't have one. And since I already look at the 14900K as a power hog, I'd rather not compare anything against it in terms of power.

It's like saying that the new Ford has better fuel economy than the F-150 Raptor. But I don't have a Raptor, so why would I care? I have a Fiesta, so how does it compare to that?

I tried to explain that it must be system power, as in there's nothing strange about that power draw.

TPU doesn't show resolution in power draw tests for CPUs either. It is, however, included in game efficiency, so I'd guess it's the same resolution in power draw.
TPU shows max power consumption. Here, we don't even know if it's that or something else. It's just a random number thrown onto a presentation slide.

Finally, this is LEAKED INFO. There are most likely footnotes about all the settings and specifications, but they're not posted here.

Maybe, but like I said, this is not CPU power draw only.

Again, what we would want from a portion of a leaked presentation under NDA at the time is irrelevant. It wasn't for the public eye to begin with, and it's not complete.
Let's settle with that. Like all leaked info on any product from any company, this is just as useless.
 

SL2

Joined
Jan 27, 2006
Messages
2,305 (0.34/day)
I'm not comparing anything against a 14900K because I don't have one. And since I already look at the 14900K as a power hog, I'd rather not compare anything against it in terms of power.
It's just as strange as AMD comparing a new 9950X with a 7950X. As they're presenting a successor, it's the most natural thing to do. Since the last one is famous for actually being a power hog, it makes sense to show that the new one draws less.

This isn't any new "Ford"/Intel, it's their newest desktop CPU compared with the previous desktop CPU.

TPU shows max power consumption. Here, we don't even know if it's that or something else. It's just a random number thrown onto a presentation slide.
We weren't talking about max power consumption to begin with, it was average like I said.

You asked for resolution in power draw tests, and it's not present in TPU reviews either. But again, it's not hard to figure out what it is.


Let's settle with that.
You could have led with that. Cheers
 
Joined
Sep 5, 2023
Messages
320 (0.79/day)
Location
USA
System Name Dark Palimpsest
Processor Intel i9 13900k with Optimus Foundation Block
Motherboard EVGA z690 Classified
Cooling MO-RA3 420mm Custom Loop
Memory G.Skill 6000CL30, 64GB
Video Card(s) Nvidia 4090 FE with Heatkiller Block
Storage 3 NVMe SSDs, 2TB-each, plus a SATA SSD
Display(s) Gigabyte FO32U2P (32" QD-OLED) , Asus ProArt PA248QV (24")
Case Be quiet! Dark Base Pro 900
Audio Device(s) Logitech G Pro X
Power Supply Be quiet! Straight Power 12 1200W
Mouse Logitech G502 X
Keyboard GMMK Pro + Numpad
That's on October 24, I think.
It is such a slimy thing to hold the review embargo until launch day. They announced them; let the reviews go.
 
Joined
Jun 10, 2014
Messages
2,970 (0.79/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
With HT removed it will be slower in MT workloads. ST performance doesn't matter much these days.

The only advantages are native support for DDR5-6400 memory, up from 5600, and I think a dedicated PCIe 5.0 NVMe port, if someone plans to upgrade now.
This is the common misconception about "single threaded performance". What it actually means is performance per core/thread, and this is in fact the other multiplier for the theoretical limit of multithreaded performance, so "single threaded" performance matters whether you have 2 or 32 cores. Only in applications/workloads with large batch jobs will more, slower cores make up for lower performance per core, and in more user-interactive applications or mixed workloads*, having fast enough cores is the key factor for a good user experience, and this will continue for the foreseeable future.
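A back-of-the-envelope way to see the "other multiplier" point (illustrative numbers only, nothing measured): model throughput as per-thread performance times the parallel scaling factor from Amdahl's law. Raising per-thread performance lifts every row; adding threads runs into diminishing returns set by the serial fraction.

#include <cstdio>

// Amdahl's law: speedup over one thread, given parallel fraction p and n threads.
double amdahl_speedup(double p, int n) { return 1.0 / ((1.0 - p) + p / n); }

int main() {
    const double per_thread_perf = 1.0;  // arbitrary units; this multiplies every line below
    const double p = 0.9;                // assumed parallelizable fraction of the workload
    for (int n : {1, 2, 4, 8, 16, 32})
        std::printf("%2d threads -> relative throughput %.2f\n",
                    n, per_thread_perf * amdahl_speedup(p, n));
}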

But as you probably know, over time performance per core has become more and more unpredictable. An i9-14900KS (stock) wouldn't run at 6.2 GHz sustained in all kinds of workloads, and the more load there is on other cores the lower it will boost. This has become so unpredictable that the rated clock speeds are almost useless at this point. It started to get bad with Coffee Lake, but with Alder/Raptor Lake the variance of single core performance has gotten pretty extreme. (And I'm talking about desktop K-SKUs, low TDP SKUs and laptops are even worse) How noticeable this is to the end user depends on the workload and the user. So if Arrow Lake manages to reduce this variance while not advancing the peak performance much further, I would still consider it an improvement. If anything, with current products this might be an overlooked advantage for AMD.

*) By "mixed workloads" I mean typical "prosumer" use running multiple applications at once, typically not "high load" most of the time. The vast majority of benchmarks run one at the time, and only benchmarks peak performance.

I don't think any sane person criticized Zen 5 solely for its unimpressive performance uplift over Zen 4. The criticisms, rather, were:
- AMD wildly over-promised on performance
- pricing
When did AMD over-promise on performance for Zen 5? (I must have missed it)

The big deal-breaker for "prosumers" with Zen 5 is the chipset/motherboard offerings. With too many lanes tied up with USB4, lanes shared between some M.2s and the GPU, and only 4 lanes to the chipset, combined with "premium" motherboards which don't even maximize the platform's IO features, it becomes almost laughable. While offering very affordable and efficient 12 and even 16 cores, with beautiful AVX-512 support, the platform looks very appealing until you start looking at long-term usability. For those who don't replace their machine every 2-3 years, memory bandwidth and PCIe lanes quickly become the bottleneck. If they can't offer lanes for a GPU + 3-4 SSDs + 10G NIC + 6-8 SATA devices without significant downgrades in performance, it's really a fail. Intel (mainly W680 motherboards) seems to have an edge here, but even here flexibility for expansion should be the primary focus when picking a motherboard, and it's not easy.

But I have hopes for Threadripper though, to finally unleash the Zen 5 cores.
 
Joined
Jun 1, 2021
Messages
303 (0.25/day)
Still, 400W on avg for "gaming" is kind of absurd.

X3D won't need even a third of that to produce as many or more FPS.
400W is system load. Most of it is probably being used by the GPU.

I would suspect they used system load because they did not want to admit that the 14900K and co use 150 W or more in gaming.
 
Joined
Nov 26, 2021
Messages
1,569 (1.49/day)
Location
Mississauga, Canada
Processor Ryzen 7 5700X
Motherboard ASUS TUF Gaming X570-PRO (WiFi 6)
Cooling Noctua NH-C14S (two fans)
Memory 2x16GB DDR4 3200
Video Card(s) Reference Vega 64
Storage Intel 665p 1TB, WD Black SN850X 2TB, Crucial MX300 1TB SATA, Samsung 830 256 GB SATA
Display(s) Nixeus NX-EDG27, and Samsung S23A700
Case Fractal Design R5
Power Supply Seasonic PRIME TITANIUM 850W
Mouse Logitech
VR HMD Oculus Rift
Software Windows 11 Pro, and Ubuntu 20.04
400W is system load. Most of it is probably being used by the GPU.

I would suspect they used system load because they did not want to admit that the 14900K and co use 150 W or more in gaming.
Unfortunately for the 14900k, system load isn't 400 W. It can be as high as 609 W even after enforcing a 125 W power limit. At default settings, Techspot measured another 100 W over that for over 700 W system power draw.

One big advantage of the 125W 'performance' profile is power consumption. Although still significantly higher than the 7800X3D, it's a noteworthy improvement. For example, in Starfield, the 14900K delivered similar fps performance using either profile, but the 'performance' profile reduced total system usage by a massive 15%, shaving 100 watts off.

 