
Tom's editorial on Arrow Lake: "big gains in productivity and power efficiency, but not in gaming"

Joined
Jun 1, 2011
Messages
4,566 (0.93/day)
Location
in a van down by the river
Processor faster at instructions than yours
Motherboard more nurturing than yours
Cooling frostier than yours
Memory superior scheduling & haphazardly entry than yours
Video Card(s) better rasterization than yours
Storage more ample than yours
Display(s) increased pixels than yours
Case fancier than yours
Audio Device(s) further audible than yours
Power Supply additional amps x volts than yours
Mouse without as much gnawing as yours
Keyboard less clicky than yours
VR HMD not as odd looking as yours
Software extra mushier than yours
Benchmark Scores up yours
Joined
Jun 20, 2024
Messages
367 (2.50/day)
Maybe not for headline peak gaming performance numbers, but both AMD and Intel have often had 'best bang for buck' gaming products that sit in the middle of the product stack, e.g. the Ryzen 5600X or i5-12600K, where anything beyond that point is diminishing returns (usually because the increased power use/TDP starts imposing performance limits, or the base platform can't exploit the extra core bandwidth).
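That diminishing-returns point can be sketched numerically - every price and score below is invented purely to illustrate the shape of the curve, not taken from any real benchmark:

```python
# Hypothetical perf-per-dollar across a product stack (all numbers invented).
# The pattern this illustrates: value tends to peak mid-stack, then fall off.
stack = [
    ("budget quad-core",  130,  55),  # (name, price $, relative gaming perf)
    ("mid-range 6-core",  300, 130),
    ("high-end 8-core",   450, 140),
    ("flagship",          650, 145),
]

for name, price, perf in stack:
    print(f"{name:<17} {perf / price * 100:.1f} perf points per $100")
```

With these made-up figures the mid-range part wins on value, which is the 5600X/12600K pattern described above.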

I'm expecting the Arrow Lake platform to move that performance point forward for gamers, so it's not a massive waste of time in that sense - primarily because IPC/IPS vs TDP is going to be a lot better, thus not butting up against those limits quite as soon. Effectively, more cores maintaining a better frequency for less power use/wastage means that the average baseline gaming performance will be higher.

So basically, outside of the X3D chips, the 14900K, etc., this might make Intel more of a default mid-to-low-end gamer's choice.
 
Joined
Jun 1, 2011
Messages
4,566 (0.93/day)
this might make Intel more of a default mid-to-low-end gamer's choice.
I'm hoping for great value. I cheer on both sides - competitive markets equal lower prices (as long as they're not in collusion).
 
Joined
Jun 20, 2024
Messages
367 (2.50/day)
I'm hoping for great value. I cheer on both sides - competitive markets equal lower prices (as long as they're not in collusion).
Maybe... I expect AM5 price reductions if overlapping product segment performance favours Intel and they have chosen similar/matching/slightly lower MSRPs...
 
Joined
Sep 5, 2023
Messages
350 (0.80/day)
Location
USA
System Name Dark Palimpsest
Processor Intel i9 13900k with Optimus Foundation Block
Motherboard EVGA z690 Classified
Cooling MO-RA3 420mm Custom Loop
Memory G.Skill 6000CL30, 64GB
Video Card(s) Nvidia 4090 FE with Heatkiller Block
Storage 3 NVMe SSDs, 2TB-each, plus a SATA SSD
Display(s) Gigabyte FO32U2P (32" QD-OLED) , Asus ProArt PA248QV (24")
Case Be quiet! Dark Base Pro 900
Audio Device(s) Logitech G Pro X
Power Supply Be quiet! Straight Power 12 1200W
Mouse Logitech G502 X
Keyboard GMMK Pro + Numpad
My only question at this point on the 285K is: does it overclock? 14th gen was already pegged out and had huge power draws by default, so the fact that they've dropped the power draw (and temps) may mean there's some headroom there?

I also noticed that the 14900K was tested in the gaming benchmarks in baseline performance mode (which is a 125 W PL1 / 188 W PL2 limited profile?), even though the table they published for "Intel Recommendations: 'Intel Default Settings'" says "Intel recommends using the 'Extreme' Power Delivery Profile if supported by the voltage regulator and motherboard design", which would actually be the 253 W limit for the 14900K. It even says "Intel does not recommend Baseline power delivery profiles for 13th and 14th Gen K SKU processors unless required for compatibility".
[attachment: screenshot of Intel's "Intel Default Settings" power delivery profile table]


Maybe I'm missing something, but that certainly seems pretty shady... it may explain why the 14900K numbers they posted are lower than the review numbers for it.
Edit: Digging through https://edc.intel.com/content/www/u...marks/intel-core-ultra-processors-series-2_1/, it looks like they set the 14900K at PL1=PL2=253 W for their benchmark tests and the 285K was at 250 W... but they did their power efficiency tests with the 285K set at 125 W and the 14900K at 253 W lol
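If those settings are right, the efficiency comparison is apples to oranges. A toy calculation (the benchmark score is invented) shows how much the mismatched power caps alone can skew perf/W, before any real performance difference enters the picture:

```python
# Toy perf-per-watt comparison (the score of 100 is invented).
# Even if both chips somehow landed the same score, capping one at
# 125 W and the other at 253 W makes the first look ~2x as "efficient".
def perf_per_watt(score, watts):
    return score / watts

score = 100.0  # hypothetical identical benchmark score for both chips
ppw_285k_125w = perf_per_watt(score, 125)    # 285K capped at 125 W
ppw_14900k_253w = perf_per_watt(score, 253)  # 14900K capped at 253 W

print(f"285K   @ 125 W: {ppw_285k_125w:.3f} pts/W")
print(f"14900K @ 253 W: {ppw_14900k_253w:.3f} pts/W")
print(f"ratio: {ppw_285k_125w / ppw_14900k_253w:.2f}x")
```

In reality the 285K would score lower at 125 W than at 250 W, but the point stands: the cap mismatch builds a big efficiency advantage into the test setup itself.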
 
Joined
Jun 20, 2024
Messages
367 (2.50/day)
@R0H1T, this and many other things hopefully will be covered in reviews - especially if @W1zzard (or whoever reviews it) does the normal efficiency charts.

With Raptor Lake, performance scaled pretty poorly as the PL was raised up to and past 200 W - it may still be a similar story for Arrow Lake in the 200 W zone. However, they may have better scaling in the 100-150 W power window compared to Raptor Lake (which, locked to lower <95 W power limits, was actually pretty efficient, especially considering the Intel 7/10 nm process - which is why, in terms of bang for buck, the Core i3 Alder/Raptor Lake CPUs were not pure trash).
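The scaling behaviour described above can be sketched with a simple saturating curve - the shape and the "knee" constant are invented for illustration, not fitted to any measured data:

```python
# Rough sketch of how multithreaded performance typically scales with the
# package power limit: strong gains at low PL, flattening past ~200 W.
# The model and the constant k are made up purely to show the shape.
import math

def relative_perf(pl_watts, pl_ref=253.0):
    # Saturating model: perf ~ 1 - exp(-PL / k), normalised to pl_ref.
    k = 90.0  # invented "knee" constant, in watts
    return (1 - math.exp(-pl_watts / k)) / (1 - math.exp(-pl_ref / k))

for pl in (65, 95, 125, 150, 200, 253):
    print(f"PL {pl:>3} W -> {relative_perf(pl) * 100:5.1f}% of 253 W performance")
```

With these made-up constants, 125 W already lands around four-fifths of the 253 W performance, which is the "poor scaling past 200 W" story in miniature.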
 
Joined
Nov 4, 2005
Messages
11,966 (1.72/day)
System Name Compy 386
Processor 7800X3D
Motherboard Asus
Cooling Air for now.....
Memory 64 GB DDR5 6400Mhz
Video Card(s) 7900XTX 310 Merc
Storage Samsung 990 2TB, 2 SP 2TB SSDs, 24TB Enterprise drives
Display(s) 55" Samsung 4K HDR
Audio Device(s) ATI HDMI
Mouse Logitech MX518
Keyboard Razer
Software A lot.
Benchmark Scores Its fast. Enough.
They can call a silicon interposer whatever they like, but AMD did it first....
 
Joined
Nov 4, 2005
Messages
11,966 (1.72/day)
My only question at this point on 285K is does it overclock? …
Just like AMD, it's process-node limited. On thermal density they have the upper hand with a thinner IHS, I'm sure. Putting E-cores between P-cores to further improve the thermal delta was a good move too.
 
Joined
Nov 27, 2023
Messages
2,222 (6.29/day)
System Name The Workhorse
Processor AMD Ryzen R9 5900X
Motherboard Gigabyte Aorus B550 Pro
Cooling CPU - Noctua NH-D15S Case - 3 Noctua NF-A14 PWM at the bottom, 2 Fractal Design 180mm at the front
Memory GSkill Trident Z 3200CL14
Video Card(s) NVidia GTX 1070 MSI QuickSilver
Storage Adata SX8200Pro
Display(s) LG 32GK850G
Case Fractal Design Torrent (Solid)
Audio Device(s) FiiO E-10K DAC/Amp, Samson Meteorite USB Microphone
Power Supply Corsair RMx850 (2018)
Mouse Razer Viper (Original) on a X-Raypad Equate Plus V2
Keyboard Cooler Master QuickFire Rapid TKL keyboard (Cherry MX Black)
Software Windows 11 Pro (23H2)
Honestly, probably a good trade-off for the majority of the market. Gaming performance is really relevant to a very small portion of the market; most people will be GPU limited, and I don't think that many are going to see the 600-to-700 FPS increase in their CS matches. Just a hunch. Better MT performance and energy efficiency are universally useful, though.
 
Joined
Nov 4, 2005
Messages
11,966 (1.72/day)
You're comparing Foveros to AMD's chiplet approach, or something else?

2017 AMD was using a silicon interposer.

As much hate/dislike/disdain as anyone may have, they did a LOT first and have moved the needle for consumers.
Here are their firsts.

Tessellation
GPU compute
X64
Chiplet
Interposer
3D stacking
HBM
SSD on GPU
HDR


Intel was the pioneer of HT/SMT. They have also made huge advances in paying off companies via monopolistic practices, and been fined a pittance for trying to eliminate competitors. Nvidia has a great PR and spin team and a lot of money.
 
Joined
Jun 20, 2024
Messages
367 (2.50/day)

It goes a couple of years further back, to the original R9 GPUs which had the first HBM memory, e.g. the R9 Nano. Vega wasn't the first, but AMD were the first to do it in that way... I just wish it had been more successful in terms of the actual product. In practice the approach has many benefits, which is why Intel have gone all-in - probably because they have the resources to - a shame AMD haven't been able to leverage it in the same way, but I guess the post-Athlon64 / pre-Ryzen years prohibited such moves.

BUT.... whilst innovative and certainly pushing tech forward thanks to integrating the separate dies into one physical block (and not just separate dies sharing an interposer), it's not exactly a new general idea - the separate dies are still interconnected with their own interposer, so it's an interconnecting block on top of another interconnecting block.

As much hate/dislike/disdain as anyone may have, they did a LOT first and have moved the needle for consumers.
Here are their firsts.

Tessellation
GPU compute
X64
Chiplet
Interposer

3D stacking
HBM
SSD on GPU
HDR


Intel was the pioneer of HT/SMT. They have made huge advances in paying off companies in monopolistic practices and been fined a pittance for trying to eliminate competitive companies. Nvidia has a great PR and spin team and a lot of money.

Erm... not sure that list will stand up to scrutiny...

Tessellation:
Conceptually not an AMD idea, and on some level 'tessellation' work had been done by lots of 3D hardware/software before ATI did it, as part of some rendering pipelines is to break objects/surfaces down for processing and rendering. ATI/AMD incorporating a tessellation engine (like they did with the Radeon 8500) to try to restore lost fidelity (i.e. to make curves more round rather than polygon-stepped, etc.) was a good idea but never properly adopted - I had a play with one back in the day and it sure made things in Counter-Strike funny looking (you turned it on to test/play about with, then off again to actually play the game properly). Fortunately DX10/11 properly implemented it.
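For anyone who never played with it: the basic idea of that kind of tessellation step can be shown in a few lines. This is a toy sketch (my own example, nothing to do with ATI's actual hardware) that refines a coarse polygon towards the true circle it approximates:

```python
# Minimal illustration of one tessellation/refinement pass: take a coarse
# polygon approximating the unit circle, insert each edge's midpoint, and
# snap that midpoint out onto the true curve. Vertex count doubles and
# the silhouette gets rounder - the "make curves less polygon-stepped" idea.
import math

def tessellate_circle(vertices):
    out = []
    n = len(vertices)
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        out.append((x0, y0))
        mx, my = (x0 + x1) / 2, (y0 + y1) / 2
        length = math.hypot(mx, my)             # distance from the centre
        out.append((mx / length, my / length))  # snap midpoint onto the circle
    return out

square = [(1, 0), (0, 1), (-1, 0), (0, -1)]  # very coarse "circle"
octagon = tessellate_circle(square)
print(len(octagon))  # 8 vertices after one refinement pass
```

Run the pass again and you get a 16-gon, and so on - each pass halves the visible "stepping" without the artist having to author more polygons.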

Chiplet + Interposer:
Multi-chip modules have existed for a very long time, and in the PC space Intel were doing something like that long before AMD, e.g. the Pentium Pro (1995 - CPU and cache dies both on the same ceramic package). What do you define as a chiplet, exactly...? And for that matter, the interposer is surely the ceramic package in this case:
[attachment: photo of a Pentium Pro package]


Other 'chiplet' / 'interposer' combinations pre-AM4:
PS3 RSX: [attachment: photo of the PS3 RSX package]
Intel Pentium D:
[attachment: photo of a Pentium D package]


3D Stacking:
I'm afraid that, in the push for more NAND flash storage space for SSDs, Toshiba were ahead there:
[attachment: photo of Toshiba's stacked NAND dies]


HDR:
You'd need to provide a specific example - dynamic range (and the lack of ability to recreate it on screen) has been well known for decades, and many 'solutions' were developed. I'm not sure what AMD brought to the table, seeing as the 'modern' take on it is really more along the lines of the work done by BrightSide/Dolby in terms of actual displays and standards, while game engines (even Valve's Source engine) were tackling HDR-like capabilities through software approaches. Being first to support a standard others came up with isn't really a first in terms of developing that solution/standard.


On the other hand - things you didn't mention which people may attribute to Intel or AMD:
On-die memory controller - maybe not a first in terms of tech, as many ARM devices for example use such a thing, but in the PC space AMD were first for consumers.
If it hadn't been delayed, AMD would have had the first fully socketed on-die 'northbridge' SoC, with Llano / the FM1 socket incorporating PCIe, the IMC, and other bus connections directly on the CPU package... but it was late, so Intel technically got there first with LGA 1156.
But technically these items are really just derivatives of the 'Geode' line's features, conceptually speaking, which Cyrix (eeewwww) started.
 

Joined
May 3, 2019
Messages
2,069 (1.02/day)
System Name BigRed
Processor I7 12700k
Motherboard Asus Rog Strix z690-A WiFi D4
Cooling Noctua D15S chromax black/MX6
Memory TEAM GROUP 32GB DDR4 4000C16 B die
Video Card(s) MSI RTX 3080 Gaming Trio X 10GB
Storage M.2 drives WD SN850X 1TB 4x4 BOOT/WD SN850X 4TB 4x4 STEAM/USB3 4TB OTHER
Display(s) Dell s3422dwg 34" 3440x1440p 144hz ultrawide
Case Corsair 7000D
Audio Device(s) Logitech Z5450/KEF uniQ speakers/Bowers and Wilkins P7 Headphones
Power Supply Corsair RM850x 80% gold
Mouse Logitech G604 lightspeed wireless
Keyboard Logitech G915 TKL lightspeed wireless
Software Windows 10 Pro X64
Benchmark Scores Who cares

2017 AMD was using a silicon interposer. …

Intel's Foveros directly connects two dies that sit next to each other with no gap, unlike in the AMD pic you posted.
 