
Core Configurations of Intel Core Ultra 200 "Arrow Lake-S" Desktop Processors Surface

Joined
Dec 25, 2020
Messages
6,631 (4.67/day)
Location
São Paulo, Brazil
System Name "Icy Resurrection"
Processor 13th Gen Intel Core i9-13900KS Special Edition
Motherboard ASUS ROG MAXIMUS Z790 APEX ENCORE
Cooling Noctua NH-D15S upgraded with 2x NF-F12 iPPC-3000 fans and Honeywell PTM7950 TIM
Memory 32 GB G.SKILL Trident Z5 RGB F5-6800J3445G16GX2-TZ5RK @ 7600 MT/s 36-44-44-52-96 1.4V
Video Card(s) ASUS ROG Strix GeForce RTX™ 4080 16GB GDDR6X White OC Edition
Storage 500 GB WD Black SN750 SE NVMe SSD + 4 TB WD Red Plus WD40EFPX HDD
Display(s) 55-inch LG G3 OLED
Case Pichau Mancer CV500 White Edition
Power Supply EVGA 1300 G2 1.3kW 80+ Gold
Mouse Microsoft Classic Intellimouse
Keyboard Generic PS/2
Software Windows 11 IoT Enterprise LTSC 24H2
Benchmark Scores I pulled a Qiqi~
Correct, and I might add that the complexity of implementing SMT in the pipeline has grown greatly as CPU designs have become ever more superscalar. Then there is the biggest problem: all the security issues, which impose lots of constraints the designers have to work around. Thirdly, modern CPUs have much more capable front-ends, which are getting better and better at keeping the execution units saturated. That was originally one of the core motivations for SMT, so going forward the potential gain from it will keep shrinking, relatively speaking.
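For anyone who wants to see that "filling stall cycles" effect first-hand, here is a minimal sketch. It assumes Linux, g++ built with -O2 -pthread, and that logical CPUs 0 and 1 are SMT siblings of the same physical core (that numbering is an assumption; verify it in sysfs before trusting the result). A memory-latency-bound pointer chase typically gets close to 2x aggregate throughput from the sibling thread, while code that already saturates the execution ports gains far less.

Code:
// Rough SMT scaling probe (Linux/glibc; build with: g++ -O2 -pthread smt_probe.cpp).
// Logical CPUs 0 and 1 are ASSUMED to be SMT siblings of one physical core --
// check /sys/devices/system/cpu/cpu0/topology/thread_siblings_list first.
#include <pthread.h>
#include <chrono>
#include <cstdio>
#include <numeric>
#include <random>
#include <thread>
#include <utility>
#include <vector>

static void pin_to_cpu(int cpu) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

// Dependent pointer chase: the core mostly waits on memory, leaving the
// execution units idle -- exactly the gap SMT was meant to fill.
static volatile long g_sink;
static void chase(const std::vector<int>& next, int start, long steps) {
    long i = start;
    for (long s = 0; s < steps; ++s) i = next[i];
    g_sink = i;  // keep the loop from being optimized away
}

int main() {
    const int N = 1 << 24;                    // 64 MB of indices, bigger than typical L3
    std::vector<int> next(N);
    std::iota(next.begin(), next.end(), 0);
    // Sattolo's algorithm: one big random cycle, so the chase never settles
    // into a small, cache-resident loop.
    std::mt19937 rng(42);
    for (int i = N - 1; i > 0; --i)
        std::swap(next[i], next[std::uniform_int_distribution<int>(0, i - 1)(rng)]);

    auto run = [&](std::vector<int> cpus) {
        auto t0 = std::chrono::steady_clock::now();
        std::vector<std::thread> ts;
        for (size_t k = 0; k < cpus.size(); ++k)
            ts.emplace_back([&, k] { pin_to_cpu(cpus[k]);
                                     chase(next, (int)(k * (N / 2)), 30'000'000); });
        for (auto& t : ts) t.join();
        return std::chrono::duration<double>(std::chrono::steady_clock::now() - t0).count();
    };

    std::printf("1 thread          : %.2f s\n", run({0}));
    std::printf("2 threads, 1 core : %.2f s\n", run({0, 1}));  // similar time => ~2x throughput
}

If the two-thread run takes about as long as the one-thread run, the sibling thread effectively doubled throughput; on a compute-bound loop the same experiment shows a much smaller gain, which is the point about better front-ends eroding SMT's advantage.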


If you're talking about architectural engineering decisions, then I disagree. Their designs have generally been held back 2-3 years due to production issues, which are probably still causing some lingering delays. When it comes to their production, however, there have been lots of bad decisions…

As to a "clean room" design, I doubt any of the big CPU designers will start that much from scratch, but they do have to make the big design decisions at the very beginning of the design process, like how threading will work, how cores interact, etc., since all other design decisions follow from those. They probably don't have the resources to redesign and fine-tune every tiny part of the CPU on the first try. So the decision to ditch SMT was certainly made early on, but I would expect them to need a few "attempts" to fully break free from the old design constraints and unleash new levels of IPC. :)

Looking forward, there will be a lot of advancements in superscalar execution. I know Intel are looking into strategies to lessen the impact of branch mispredictions and avoid pipeline stalls and flushes. I believe some of this was supposed to show up in Meteor Lake, but I haven't studied whether it actually did, or how successful it was. Over the next few generations, though, we should expect significant gains here.
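To see why mispredictions matter so much, here is a small self-contained sketch (generic C++, not tied to any specific Intel feature): summing only the values above a threshold runs several times slower on randomly ordered data than on sorted data, purely because the branch outcome is unpredictable in the first case.

Code:
// Classic branch-misprediction demo: identical work and data, only the
// predictability of the branch changes. Build with g++ -O1 (at -O3 the
// compiler may turn the branch into branch-free code and hide the effect).
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <random>
#include <vector>

static long long sum_above(const std::vector<int>& v, int threshold) {
    long long sum = 0;
    for (int x : v)
        if (x >= threshold)      // the branch the predictor has to guess
            sum += x;
    return sum;
}

int main() {
    std::vector<int> data(1 << 24);
    std::mt19937 rng(1);
    for (int& x : data) x = rng() % 256;

    auto time_it = [&](const char* label) {
        auto t0 = std::chrono::steady_clock::now();
        long long s = sum_above(data, 128);
        double ms = std::chrono::duration<double, std::milli>(
                        std::chrono::steady_clock::now() - t0).count();
        std::printf("%-28s %7.1f ms  (sum=%lld)\n", label, ms, s);
    };

    time_it("random order (50/50 guess):");
    std::sort(data.begin(), data.end());
    time_it("sorted (predictable):");      // typically several times faster
}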


Just for the sake of being correct: Rocket Lake wasn't a regression in terms of overall performance. It offered ~19% IPC gains at similar clocks, but sacrificed 2 cores vs. Comet Lake, which is what leads people to think it was inferior. Rocket Lake, a "backport" of Ice Lake to 14 nm, was greatly held back by that older node. The whole family is based on the "Sunny Cove" core, with Ice Lake released in 2019 (mobile only, very limited availability), followed by Tiger Lake, which brought a small architectural improvement. Surprisingly, Rocket Lake seems to be a derivative of Ice Lake-S (which was never finalized) rather than Tiger Lake; I assume this is because Tiger Lake was never designed for this purpose and it was much quicker to backport Ice Lake-S instead.
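A quick back-of-envelope on those numbers (assuming equal clocks and perfectly scaling all-core work, which real workloads won't give you) shows why it looks like a wash: roughly +19% single-thread, but about -5% in ideal all-core throughput.

Code:
// Back-of-envelope only: uses the ~19% IPC figure quoted above, assumes equal
// clocks and perfect multi-thread scaling -- real workloads land in between.
#include <cstdio>

int main() {
    const double ipc_gain  = 1.19;   // Rocket Lake vs. Comet Lake, per core
    const double rkl_cores = 8.0;
    const double cml_cores = 10.0;

    double single_thread = ipc_gain;                          // ~ +19%
    double all_core      = rkl_cores * ipc_gain / cml_cores;  // 9.52 / 10 ~ -5%

    std::printf("single-thread: %+.0f%%\n", (single_thread - 1.0) * 100.0);
    std::printf("all-core     : %+.0f%%\n", (all_core      - 1.0) * 100.0);
}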

Apologies, I should have been more specific: I'm referring to gaming performance. Most games still favor the i9-10900K over the 11900K.
 
Joined
Nov 26, 2021
Messages
1,637 (1.51/day)
Location
Mississauga, Canada
Processor Ryzen 7 5700X
Motherboard ASUS TUF Gaming X570-PRO (WiFi 6)
Cooling Noctua NH-C14S (two fans)
Memory 2x16GB DDR4 3200
Video Card(s) Reference Vega 64
Storage Intel 665p 1TB, WD Black SN850X 2TB, Crucial MX300 1TB SATA, Samsung 830 256 GB SATA
Display(s) Nixeus NX-EDG27, and Samsung S23A700
Case Fractal Design R5
Power Supply Seasonic PRIME TITANIUM 850W
Mouse Logitech
VR HMD Oculus Rift
Software Windows 11 Pro, and Ubuntu 20.04
SMT's relative contribution to the die area doesn't increase with the complexity of the rest of the core; both ThunderX3 and the Pentium 4 spent about 5% of their die area on SMT. However, both validation time and, crucially, the attack surface for machines hosted in the cloud do increase because of SMT. A relatively simple fix would have been for hypervisors to avoid splitting the two logical threads of one core across multiple customers.
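As a sketch of what that fix relies on: Linux already exposes the sibling pairs in sysfs, so a hypervisor (or anyone curious) can read them and hand out both logical CPUs of a core to the same tenant. Minimal example, Linux paths only, no particular hypervisor's API assumed:

Code:
// List the SMT sibling group of each logical CPU via Linux sysfs. A hypervisor
// enforcing "never split a core across tenants" would allocate vCPUs along
// exactly these sibling sets.
#include <cstdio>
#include <fstream>
#include <string>

int main() {
    for (int cpu = 0; ; ++cpu) {
        std::string path = "/sys/devices/system/cpu/cpu" + std::to_string(cpu) +
                           "/topology/thread_siblings_list";
        std::ifstream f(path);
        if (!f) break;                     // ran past the last online CPU
        std::string siblings;
        std::getline(f, siblings);         // e.g. "0,16" or "0-1"
        std::printf("cpu%-3d siblings: %s\n", cpu, siblings.c_str());
    }
}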
