Processor | Ryzen 5600 |
---|---|
Motherboard | X570 I Aorus Pro |
Cooling | Deepcool AG400 |
Memory | HyperX Fury 2 x 8GB 3200 CL16 |
Video Card(s) | RX 6700 10GB SWFT 309 |
Storage | SX8200 Pro 512 / NV2 512 |
Display(s) | 24G2U |
Case | NR200P |
Power Supply | Ion SFX 650 |
Mouse | G703 (TTC Gold 60M) |
Keyboard | Keychron V1 (Akko Matcha Green) / Apex m500 (Gateron milky yellow) |
Software | W10 |
> Can probably be done if you're willing to manually configure your power limits to something more sensible, with some undervolting to try to regain some of that performance. It would be interesting to see where this ends up in the benchmarks.

Yeah, but at that point you might as well get a 5900X and manually configure nothing. And it'll be cheaper.
> It won't matter much, since Zen 3's Infinity Fabric can't clock much higher than 1800MHz (some lucky chips will go up to 2000MHz). You need a 1:1 Infinity Fabric to DRAM clock ratio to get the best performance. DDR4-3600 is already the sweet spot.

1900MHz is pretty much guaranteed with Zen 3, but anything higher than that is impossible to get stable. Some chips can bench at 2100-2133MHz, though.
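The 1:1 coupling rule discussed above can be sketched with a bit of arithmetic. A minimal sketch: the ~1800MHz FCLK ceiling is the figure from the post, and the function names are made up for illustration.

```python
def memclk_mhz(ddr_rate_mts: int) -> float:
    """DDR transfers twice per clock, so the real memory clock
    is half the MT/s rate (e.g. DDR4-3600 -> 1800 MHz)."""
    return ddr_rate_mts / 2

def fclk_coupled(ddr_rate_mts: int, max_fclk_mhz: int = 1800) -> bool:
    """True if the Infinity Fabric can run 1:1 with the memory clock,
    assuming the common ~1800 MHz Zen 3 FCLK ceiling from the post."""
    return memclk_mhz(ddr_rate_mts) <= max_fclk_mhz

print(fclk_coupled(3600))  # 1800 MHz memclk, 1:1 is possible -> True
print(fclk_coupled(4000))  # 2000 MHz memclk, above the typical ceiling -> False
```

This is why DDR4-3600 is called the sweet spot: it is the highest common data rate whose memory clock still fits under a typical Zen 3 FCLK ceiling.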
System Name | Hotbox |
---|---|
Processor | AMD Ryzen 7 5800X, 110/95/110, PBO +150MHz, CO -7,-7,-20(x6) |
Motherboard | ASRock Phantom Gaming B550 ITX/ax |
Cooling | LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14 |
Memory | 32GB G.Skill FlareX 3200c14 @3800c15 |
Video Card(s) | PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W |
Storage | 2TB Adata SX8200 Pro |
Display(s) | Dell U2711 main, AOC 24P2C secondary |
Case | SSUPD Meshlicious |
Audio Device(s) | Optoma Nuforce μDAC 3 |
Power Supply | Corsair SF750 Platinum |
Mouse | Logitech G603 |
Keyboard | Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps |
Software | Windows 10 Pro |
> Yeah, but at that point you might as well get a 5900X and manually configure nothing. And it'll be cheaper.

Yep. It'll be really interesting to see performance comparisons between these with some sort of power limit on the 12900K.
> Not really sure what you're getting at with this comment.

He cooled it on AIO water, so it can be done. It was also a cooler-mounting thing: the board's VRMs were in the way. You've just got to find the right cooler to fit, or the right board. It'll be hot, but so are laptops.
> Given how memory stability is, I'd say the equivalents are basically there. I mean, I get what you're saying, but then shouldn't we be seeing something like DDR5-6600-6800?

The memory should be as close in frequency as possible, otherwise the reviewer is giving the new-generation Intel CPUs an advantage. I would perform the test again with DDR5-4800 or 5200 memory, and use higher-frequency DDR4 memory.
In that little blue box? LOL.
> The memory should be as close in frequency as possible, otherwise the reviewer is giving the new-generation Intel CPUs an advantage. I would perform the test again with DDR5-4800 or 5200 memory, and use higher-frequency DDR4 memory.

Higher-frequency DDR4 means running out of sync with the Infinity Fabric on AMD, or in Gear 2 on Intel, which generally performs worse than lower speeds at 1:1/Gear 1 (outside of strictly bandwidth-bound workloads, of which there are essentially none in this test suite or any normal consumer workload). Also, remember that DDR5-6000c36 has much higher absolute latency than DDR4-3600c16: 16 cycles / 1800MHz ≈ 8.89ns, versus 36 cycles / 3000MHz = 12ns. And to be clear, most consumer workloads are far more dependent on memory latency than memory bandwidth (with iGPU gaming being the main exception).
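The latency arithmetic above can be reproduced directly. A small sketch; `first_word_latency_ns` is a made-up helper name for illustration:

```python
def first_word_latency_ns(data_rate_mts: float, cas: int) -> float:
    """CAS latency in nanoseconds = CAS cycles / memory clock (MHz) * 1000.
    The memory clock is half the MT/s data rate, since DDR transfers
    twice per clock."""
    clock_mhz = data_rate_mts / 2
    return cas / clock_mhz * 1000

# DDR4-3600 CL16: 16 cycles / 1800 MHz -> ~8.89 ns
print(round(first_word_latency_ns(3600, 16), 2))  # 8.89
# DDR5-6000 CL36: 36 cycles / 3000 MHz -> 12 ns
print(round(first_word_latency_ns(6000, 36), 2))  # 12.0
```

Note the higher data rate does not compensate for the higher cycle count here, which is the point being made: the DDR5 kit's absolute CAS latency is about 35% worse.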
> What's the reason to make E-cores? Idle consumption isn't great with them, and overall performance is about the same. Maybe Windows should be improved a lot in that regard: something like the OS and its services running on the E-cores, keeping the other cores for work applications.

In the AnandTech review, they tested with just the E-cores enabled, and they top out at 50W at full utilization. That's seriously impressive. So the problem is with the P-cores, I guess; they are just ridiculously inefficient, to the point of negating any gains from the E-cores.
> What's the reason to make E-cores? Idle consumption isn't great with them, and overall performance is about the same. Maybe Windows should be improved a lot in that regard: something like the OS and its services running on the E-cores, keeping the other cores for work applications.

It allows them to have more than 8c/16t without ballooning die size (each 4-core E-core cluster is only slightly larger than a single P core; this die is as large as the 10900K's at 208mm²), and it significantly increases MT performance in apps capable of making use of them. They're not blazing fast, but they aren't slow either, and there are eight of them, after all. They're not for idle power consumption reduction, at least not in desktops. They should do that job decently in laptops, though we'll see whether they're implemented so that all P cores can go to sleep while the E cores keep running (disabling all P cores on these desktop chips is not possible, at least).
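The die-area trade-off described above can be turned into rough arithmetic. Illustrative only: the 1.0 cluster-to-P-core area ratio is an assumption standing in for "slightly larger", and the variable names are made up.

```python
# Rough core-count arithmetic from the post: each 4-core E-core cluster
# occupies roughly the area of one P core (ratio assumed to be ~1.0 here).
p_cores = 8
e_clusters = 2                             # 8 E cores, in clusters of 4
area_units = p_cores + e_clusters * 1.0    # die budget in "P-core equivalents"
threads = p_cores * 2 + e_clusters * 4     # P cores have SMT, E cores don't

print(area_units)  # ~10 P-core equivalents of core area
print(threads)     # 24 threads
```

In other words, for roughly the core area of a 10-P-core chip, you get 16 cores and 24 threads, which is where the MT uplift comes from.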
> In the AnandTech review, they tested with just the E-cores enabled, and they top out at 50W at full utilization. That's seriously impressive. So the problem is with the P-cores, I guess; they are just ridiculously inefficient, to the point of negating any gains from the E-cores.

All the more reason to add them: if you're going for a 250W power budget, it's better to spend 50W on great efficiency and 200W on poor efficiency than 250W on poor efficiency. It'll be really interesting to see how mobile versions of these chips stack up against the M1 Pro and Max!
> ...they are just ridiculously inefficient to the point of negating any gains from the E-cores.

The bigger issue seems to be the unlimited tau and turbo limits.
> What's the reason to make E-cores? Idle consumption isn't great with them, and overall performance is about the same. Maybe Windows should be improved a lot in that regard: something like the OS and its services running on the E-cores, keeping the other cores for work applications.

Software has always been behind, but maybe this transition to big.LITTLE will change that.
> All the more reason to add them: if you're going for a 250W power budget, it's better to spend 50W on great efficiency and 200W on poor efficiency than 250W on poor efficiency. It'll be really interesting to see how mobile versions of these chips stack up against the M1 Pro and Max!

There's no way they'll be a match for the M1.
> There's no way they'll be a match for the M1.

With the M1(X) you aren't just comparing the chip, it's the entire Mac platform, so a truly apples-to-apples comparison will be hard to come by.
> There's no way they'll be a match for the M1.

Not in ST, no, as the M1 essentially ties the best ST cores from both Intel and AMD. But in MT? It could be pretty close, if Intel is able to run 8 E cores at 3.9GHz in 50W. Remember, the M1 Pro/Max has essentially no power ratings or limits at all, and can range from 40 to ~100W under MT loads depending on the load and thermals.
> With the M1(X) you aren't just comparing the chip, it's the entire Mac platform, so a truly apples-to-apples comparison will be hard to come by.

That doesn't really matter if you're controlling the workload properly, i.e. compiling a known test suite yourself like AnandTech does, or running tests in multi-platform applications like Creative Suite. Both have pros and cons, but both are valid comparisons in their own way (the former is as close to a level playing field as you'll get; the latter is as close to real-world as you'll get). The issues start arising when you run synthetic benchmarks that you have no control over (e.g. Geekbench), or use different software that "kind of does the same things," as some bad reviewers tend to do.
> It allows them to have more than 8c/16t without ballooning die size (each 4-core E-core cluster is only slightly larger than a single P core; this die is as large as the 10900K's at 208mm²), and it significantly increases MT performance in apps capable of making use of them. They're not blazing fast, but they aren't slow either, and there are eight of them, after all. They're not for idle power consumption reduction, at least not in desktops.

I honestly think Intel should spend more time extracting more performance from those E-cores. They're actually faster than Skylake cores while basically sipping power. Very impressive.
> I honestly think Intel should spend more time extracting more performance from those E-cores. They're actually faster than Skylake cores while basically sipping power. Very impressive.

Yep. I was just wondering whether we might see Intel going hard in that direction architecturally in the near future. Though it's not unlikely that those cores have a hard frequency limit much lower than the P cores', so their ST performance might suffer. It also makes me wonder what would happen if they gave the E-core clusters a massive L2 cache, like Apple's M1 P-core clusters.
> ...but both are valid comparisons in their own way (the former is as close to a level playing field as you'll get; the latter...

Not really, no; you're still bound by the OS and scheduler. Looking at some of the results, Win11 currently still needs a bit of work to handle a lot of these tasks properly. Apple probably has at least a decade of lead over MS in this, and a similar margin w.r.t. Intel. The hardware scheduler (Thread Director) on ADL is interesting, but it also raises the question of how it will work with, or maybe override, the built-in Windows scheduler in certain tasks.
> Not really, no; you're still bound by the OS and scheduler. Looking at some of the results, Win11 currently still needs a bit of work to handle a lot of these tasks properly. Apple probably has at least a decade of lead over MS in this, and a similar margin w.r.t. Intel. The hardware scheduler (Thread Director) on ADL is interesting, but it also raises the question of how it will work with, or override, the built-in Windows scheduler.

Wait, Apple has a decade's lead for their one-year-old desktop architecture? Remember, macOS isn't iOS. Also: the OS is out of the control of literally everyone except Apple and MS. Software vendors, users, Intel, AMD — it doesn't matter. It is what it is, and it is accounted for in testing. If Apple's OS and scheduler are doing a better job than Windows, does that undermine the performance or efficiency of their cores? Of course not. The integration likely helps them, but it's not what is causing their 2-3x efficiency lead. And besides, the performance you get is the performance you get in the real world. Saying "but one has an OS/scheduler advantage" doesn't change that. Performance is ultimately performance.
System Name | (2008) Dell XPS 730x H2C |
---|---|
Processor | Intel Extreme QX9770 @ 3.8GHz (No OC) |
Motherboard | Dell LGA 775 (Dell Proprietary) |
Cooling | Dell AIO Ceramic Water Cooling (Dell Proprietary) |
Memory | Corsair Dominator Platinum 16GB (4 x 4) DDR3 |
Video Card(s) | EVGA GTX 980ti 6GB (2016 ebay-used) |
Storage | (2) WD 1TB Velociraptor & (1) WD 2TB Black |
Display(s) | Alienware 34" AW3420DW (Amazon Warehouse) |
Case | Stock Dell 730x with "X" Side Panel (65 pounds fully decked out) |
Audio Device(s) | Creative X-FI Titanium & Corsair SP2500 Speakers |
Power Supply | PSU: 1000 Watt (Dell Proprietary) |
Mouse | Alienware AW610M (Amazon Warehouse) |
Keyboard | Corsair K95 XT (Amazon Warehouse) |
Software | Windows 7 Ultimate & Alienware FX Lighting |
Benchmark Scores | No Benchmarking & Overclocking |
> Reality:
> Win some, lose some
> Double the power consumption
> Double the heat
> Double the platform cost
> Windows 11

Money is my reality: I only care about Intel stock doing me a favor like AMD did last year, doubling my AMD money in less than 10 months' time. Now the big 401K money managers, their contributing clients, and those who still have real jobs are conservatively looking for $85-plus per share at Intel's fourth-quarter report. And it looks like the Intel boys are on the right track! Yes, AMD had their moment in time, but Wall Street, as we all know, has no memory: "What have you done for me lately, AMD?" keeps coming up. Win some, lose some.
> Wait, Apple has a decade's lead for their one-year-old desktop architecture?

Apple has close to a decade of experience with big.LITTLE. Yes, macOS isn't iOS, but are you telling me that their experience with the Axx chips, and with ARM over the years, won't help them here? Yes, technically MS also had Windows on ARM, but we know where that went.
> If Apple's OS and scheduler are doing a better job than Windows, does that undermine the performance or efficiency of their cores?

No, of course not, but without the actual chips out there, how can MS optimize for them? You surely don't expect Win11 to be 100% perfect right out of the gate with something that's basically releasing after the OS was RTMed? Real-world user feedback and subsequent telemetry data will be needed to tune better for ADL; that's just reality. Would you say that testing AMD with those skewed L3 results was also just as fair?
Processor | AMD Ryzen 5900X |
---|---|
Motherboard | MSI MAG X570 Tomahawk |
Cooling | Dual custom loops |
Memory | 4x8GB G.SKILL Trident Z Neo 3200C14 B-Die |
Video Card(s) | AMD Radeon RX 6800XT Reference |
Storage | ADATA SX8200 480GB, Inland Premium 2TB, various HDDs |
Display(s) | MSI MAG341CQ |
Case | Meshify 2 XL |
Audio Device(s) | Schiit Fulla 3 |
Power Supply | Super Flower Leadex Titanium SE 1000W |
Mouse | Glorious Model D |
Keyboard | Drop CTRL, lubed and filmed Halo Trues |
System Name | Silent |
---|---|
Processor | Ryzen 7800X3D @ 5.15ghz BCLK OC, TG AM5 High Performance Heatspreader |
Motherboard | ASUS ROG Strix X670E-I, chipset fans replaced with Noctua A14x25 G2 |
Cooling | Optimus Block, HWLabs Copper 240/40 + 240/30, D5/Res, 4x Noctua A12x25, 1x A14G2, Mayhems Ultra Pure |
Memory | 32 GB Dominator Platinum 6150 MT 26-36-36-48, 56.6ns AIDA, 2050 FCLK, 160 ns tRFC, active cooled |
Video Card(s) | RTX 3080 Ti Founders Edition, Conductonaut Extreme, 18 W/mK MinusPad Extreme, Corsair XG7 Waterblock |
Storage | Intel Optane DC P1600X 118 GB, Samsung 990 Pro 2 TB |
Display(s) | 32" 240 Hz 1440p Samsung G7, 31.5" 165 Hz 1440p LG NanoIPS Ultragear, MX900 dual gas VESA mount |
Case | Sliger SM570 CNC Aluminium 13-Litre, 3D printed feet, custom front, LINKUP Ultra PCIe 4.0 x16 white |
Audio Device(s) | Audeze Maxwell Ultraviolet w/upgrade pads & LCD headband, Galaxy Buds 3 Pro, Razer Nommo Pro |
Power Supply | SF750 Plat, full transparent custom cables, Sentinel Pro 1500 Online Double Conversion UPS w/Noctua |
Mouse | Razer Viper Pro V2 8 KHz Mercury White w/Tiger Ice Skates & Pulsar Supergrip tape |
Keyboard | Wooting 60HE+ module, TOFU-R CNC Alu/Brass, SS Prismcaps W+Jellykey, LekkerV2 mod, TLabs Leath/Suede |
Software | Windows 11 IoT Enterprise LTSC 24H2 |
Benchmark Scores | Legendary |
> With an SFF PC it's pretty tough to run a 5950X, but it seems like it would be impossible without undervolting/limiting the new 12900K.

Rubbish.
> It won't matter much, since Zen 3's Infinity Fabric can't clock much higher than 1800MHz (some lucky chips will go up to 2000MHz). You need a 1:1 Infinity Fabric to DRAM clock ratio to get the best performance. 3600MT/s is already the sweet spot.
System Name | Pioneer |
---|---|
Processor | Ryzen R9 9950X |
Motherboard | GIGABYTE Aorus Elite X670 AX |
Cooling | Noctua NH-D15 + A whole lotta Sunon and Corsair Maglev blower fans... |
Memory | 64GB (4x 16GB) G.Skill Flare X5 @ DDR5-6000 CL30 |
Video Card(s) | XFX RX 7900 XTX Speedster Merc 310 |
Storage | Intel 905p Optane 960GB boot, +2x Crucial P5 Plus 2TB PCIe 4.0 NVMe SSDs |
Display(s) | 55" LG B9 OLED 4K Display |
Case | Thermaltake Core X31 |
Audio Device(s) | TOSLINK->Schiit Modi MB->Asgard 2 DAC Amp->AKG Pro K712 Headphones or HDMI->B9 OLED |
Power Supply | FSP Hydro Ti Pro 850W |
Mouse | Logitech G305 Lightspeed Wireless |
Keyboard | WASD Code v3 with Cherry Green keyswitches + PBT DS keycaps |
Software | Gentoo Linux x64 / Windows 11 Enterprise IoT 2024 |
> Thumbs up:
> - 10 nanometer production process
>
> Really?

Versus 14nm before? Yes.