I'm not up to date with what ARM does these days, but one of RISC's advantages was that executing any instruction in the same timeframe/number of cycles dramatically simplifies scheduling. By contrast, ever since Intel went to decoded micro-ops (Pentium Pro, iirc), they've essentially had a sizeable chunk of silicon breaking complex instructions down into simple ones, emulating what RISC does. Since I'm not burdened by firsthand knowledge, I think I know an easy shortcut.
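To make that "breaking complex instructions into simple ones" idea concrete, here's a toy sketch in Python. It's not how any real decoder works and the micro-op names are made up, but it shows the basic idea of cracking a memory-operand add into load/execute/store pieces:

```python
# Toy illustration only: a pretend front-end that "cracks" a complex
# CISC-style instruction into simple load/execute/store micro-ops,
# roughly the kind of job a modern x86 decoder does in hardware.
def crack(instr: str) -> list[str]:
    op, dst, src = instr.replace(",", "").split()
    if dst.startswith("["):            # memory destination -> needs load + op + store
        return [
            f"uop_load   t0, {dst}",   # read the memory operand into a temp
            f"uop_{op}   t0, {src}",   # do the actual ALU work
            f"uop_store  {dst}, t0",   # write the result back
        ]
    return [f"uop_{op} {dst}, {src}"]  # register-only ops map 1:1

print(crack("add [rbx+8], rax"))
# -> ['uop_load   t0, [rbx+8]', 'uop_add   t0, rax', 'uop_store  [rbx+8], t0']
```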
A simple instruction set will put more pressure on the caches: it needs more instructions (and so more cycles) to do the same amount of work, which eats into the i-cache, and the extra steps shuffling intermediate values around add d-cache traffic on top.
However, everything stays aligned and flows through the pipeline at a uniform rate, with no wildly divergent instruction timings, so the control logic stays simpler.
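Rough back-of-envelope on the cache-footprint side of that. The instruction-count penalty and average instruction sizes here are made-up illustrative numbers, not measurements:

```python
# Back-of-envelope i-cache footprint comparison (all numbers are
# illustrative assumptions, not measured data).
cisc_instructions = 1_000_000        # instructions for some workload on a CISC ISA
risc_overhead     = 1.3              # assume ~30% more instructions on the simple ISA
cisc_avg_bytes    = 3.7              # variable-length encoding, averages a few bytes
risc_avg_bytes    = 4.0              # fixed 4-byte encoding, trivially aligned

cisc_footprint = cisc_instructions * cisc_avg_bytes
risc_footprint = cisc_instructions * risc_overhead * risc_avg_bytes

print(f"CISC code size: {cisc_footprint/1e6:.2f} MB")
print(f"RISC code size: {risc_footprint/1e6:.2f} MB "
      f"({risc_footprint/cisc_footprint:.0%} of CISC)")
```

The fixed size and alignment is what keeps the fetch/decode side cheap; the cost shows up as a bigger code footprint.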
My guess is it comes down to how much overhead the simple instruction set wastes in extra instructions and cycles, versus how much transistor (and therefore power) budget is saved by simplifying the instruction flow.
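Put as a very crude model (the per-instruction energy numbers and the 1.3x instruction-count penalty are pure guesses for illustration), the question is whether the extra instructions cost more than the simpler front-end saves per instruction:

```python
# Crude energy trade-off model, purely illustrative numbers.
# Total energy ~= instruction count * (front-end energy + execution energy).
def total_energy(instructions, frontend_pj, execute_pj):
    return instructions * (frontend_pj + execute_pj)

work = 1_000_000                                                     # baseline instruction count on the complex ISA
cisc = total_energy(work,       frontend_pj=6.0, execute_pj=10.0)    # heavy decode, fewer instructions
risc = total_energy(work * 1.3, frontend_pj=2.0, execute_pj=10.0)    # simple decode, more instructions

print(f"CISC: {cisc/1e6:.1f} µJ,  RISC: {risc/1e6:.1f} µJ")
# Whichever side of the break-even the real decode-cost and
# instruction-count numbers land on is what decides the winner.
```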
Like I said, I don't know whether one will prevail over the other, or whether a hybrid design will trump both.