
AMD Could Solve Memory Bottlenecks of its MCM CPUs by Disintegrating the Northbridge

btarunr

Editor & Senior Moderator
Staff member
Joined
Oct 9, 2007
Messages
47,243 (7.55/day)
Location
Hyderabad, India
System Name RBMK-1000
Processor AMD Ryzen 7 5700G
Motherboard ASUS ROG Strix B450-E Gaming
Cooling DeepCool Gammax L240 V2
Memory 2x 8GB G.Skill Sniper X
Video Card(s) Palit GeForce RTX 2080 SUPER GameRock
Storage Western Digital Black NVMe 512GB
Display(s) BenQ 1440p 60 Hz 27-inch
Case Corsair Carbide 100R
Audio Device(s) ASUS SupremeFX S1220A
Power Supply Cooler Master MWE Gold 650W
Mouse ASUS ROG Strix Impact
Keyboard Gamdias Hermes E2
Software Windows 11 Pro
AMD sprang back to competitiveness in the datacenter market with its EPYC enterprise processors, which are multi-chip modules of up to four 8-core dies. Each die has its own integrated northbridge, which controls a 2-channel DDR4 memory interface and a 32-lane PCI-Express gen 3.0 root complex. In applications that not only scale across many cores but are also memory-bandwidth intensive, this non-localized approach to memory presents design bottlenecks. The Ryzen Threadripper WX family highlights many of them: memory-intensive video-encoding benchmarks see performance drops as dies without direct access to I/O are starved of memory bandwidth. AMD's solution to this problem is to design CPU dies in which the northbridge (the part of the die housing the memory controllers and PCIe root complex) is disabled, and it could be implemented in its upcoming 2nd-generation EPYC processors, codenamed "Rome."

With its "Zen 2" generation, AMD could develop CPU dies in which the integrated northbridge can be completely disabled (much like the "compute dies" on Threadripper WX processors, which have no direct memory/PCIe access and rely entirely on InfinityFabric). These dies would talk to an external die called the "System Controller" over a wider InfinityFabric interface. AMD's next-generation MCMs could thus feature a centralized System Controller die surrounded by CPU dies, all of which could sit on a silicon interposer of the same kind found on "Vega 10" and "Fiji" GPUs. An interposer is a silicon die that facilitates high-density microscopic wiring between dies in an MCM. These explosive speculative details and more were put out by Singapore-based @chiakokhua, aka The Retired Engineer, a retired VLSI engineer, who drew the block diagrams himself.



The System Controller die serves as the town square of the entire processor, and packs a monolithic 8-channel DDR4 memory controller that can address up to 2 TB of ECC memory. Unlike in current-generation EPYC processors, this memory interface is truly monolithic, much like Intel's implementation. The System Controller also features a PCI-Express gen 4.0 x96 root complex, which can drive up to six graphics cards with x16 bandwidth each, or up to twelve at x8. The die also integrates the southbridge, known as the Server Controller Hub, which puts out common I/O interfaces such as SATA, USB, and other legacy low-bandwidth I/O, in addition to some more PCIe lanes. There could still be an external "chipset" on the platform that provides additional connectivity.
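For a rough sense of scale, the claimed interfaces add up as in the back-of-the-envelope sketch below; the DDR4-3200 data rate and the ~2 GB/s-per-lane PCIe 4.0 figure are assumptions for illustration, not numbers from the leak.

Code:
# Back-of-the-envelope math for the rumored System Controller I/O.
# Assumptions (not from the leak): DDR4-3200 DIMMs, ~2 GB/s per PCIe 4.0 lane per direction.

DDR4_MT_S = 3200          # mega-transfers per second (assumed DDR4-3200)
BYTES_PER_TRANSFER = 8    # 64-bit channel
CHANNELS = 8              # 8-channel memory controller per the leak

mem_bw_gbs = DDR4_MT_S * BYTES_PER_TRANSFER * CHANNELS / 1000
print(f"Aggregate memory bandwidth: ~{mem_bw_gbs:.0f} GB/s")   # ~205 GB/s

PCIE_LANES = 96           # PCIe gen 4.0 x96 root complex per the leak
PCIE4_GBS_PER_LANE = 2    # approximate, per lane per direction

print(f"GPUs at x16: {PCIE_LANES // 16}, at x8: {PCIE_LANES // 8}")   # 6 and 12
print(f"Total PCIe bandwidth: ~{PCIE_LANES * PCIE4_GBS_PER_LANE} GB/s per direction")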



The Retired Engineer goes on to speculate that AMD could even design its socket AM4 products as MCMs of two CPU dies sharing a System Controller die, but cautions that this should be taken with "a bowl of salt." It is unlikely, given that the client segment has wafer-thin margins compared to enterprise, and AMD would want to build single-die products - ones in which the integrated northbridge isn't disabled. Still, that doesn't completely rule out a 2-die MCM for "high-margin" SKUs that AMD could sell at around $500. In such cases, the System Controller die could be leaner, with fewer InfinityFabric links, a 2-channel memory interface, and a 32-lane PCIe gen 4.0 root complex.



AMD will debut the "Rome" MCM within 2018.

 
Joined
Nov 3, 2013
Messages
2,141 (0.53/day)
Location
Serbia
Processor Ryzen 5600
Motherboard X570 I Aorus Pro
Cooling Deepcool AG400
Memory HyperX Fury 2 x 8GB 3200 CL16
Video Card(s) RX 6700 10GB SWFT 309
Storage SX8200 Pro 512 / NV2 512
Display(s) 24G2U
Case NR200P
Power Supply Ion SFX 650
Mouse G703 (TTC Gold 60M)
Keyboard Keychron V1 (Akko Matcha Green) / Apex m500 (Gateron milky yellow)
Software W10
This could also be relevant to the article
 
Joined
Dec 10, 2015
Messages
545 (0.17/day)
Location
Here
System Name Skypas
Processor Intel Core i7-6700
Motherboard Asus H170 Pro Gaming
Cooling Cooler Master Hyper 212X Turbo
Memory Corsair Vengeance LPX 16GB
Video Card(s) MSI GTX 1060 Gaming X 6GB
Storage Corsair Neutron GTX 120GB + WD Blue 1TB
Display(s) LG 22EA63V
Case Corsair Carbide 400Q
Power Supply Seasonic SS-460FL2 w/ Deepcool XFan 120
Mouse Logitech B100
Keyboard Corsair Vengeance K70
Software Windows 10 Pro (to be replaced by 2025)
It's a solution that creates more problems :kookoo:
 

hat

Enthusiast
Joined
Nov 20, 2006
Messages
21,745 (3.30/day)
Location
Ohio
System Name Starlifter :: Dragonfly
Processor i7 2600k 4.4GHz :: i5 10400
Motherboard ASUS P8P67 Pro :: ASUS Prime H570-Plus
Cooling Cryorig M9 :: Stock
Memory 4x4GB DDR3 2133 :: 2x8GB DDR4 2400
Video Card(s) PNY GTX1070 :: Integrated UHD 630
Storage Crucial MX500 1TB, 2x1TB Seagate RAID 0 :: Mushkin Enhanced 60GB SSD, 3x4TB Seagate HDD RAID5
Display(s) Onn 165hz 1080p :: Acer 1080p
Case Antec SOHO 1030B :: Old White Full Tower
Audio Device(s) Creative X-Fi Titanium Fatal1ty Pro - Bose Companion 2 Series III :: None
Power Supply FSP Hydro GE 550w :: EVGA Supernova 550
Software Windows 10 Pro - Plex Server on Dragonfly
Benchmark Scores >9000
Hrm... I don't think we've really had multi-die chips since Core 2... and since then, the northbridge has moved off the board onto the chip. Still, creating a separate design for EPYC (or even some Threadripper chips) to work around that performance penalty kinda ruins the scalability of the Zen architecture, and may not perform all that well anyway... because now you've got X number of dies trying to communicate with the same northbridge, and thereby with the rest of the system, at the same time...
 
Joined
Sep 14, 2017
Messages
625 (0.24/day)
This is similar to IBM's approach, from what I recall seeing. The SC is the system controller chip. I really like that AMD is trying something different, especially with their Infinity Fabric. I think IBM even helped them with their hyper-threading implementation.

It also makes sense that Ryzen and EPYC were initially based on the same overall design and packaging to save cost, and will now most likely separate into their own dedicated production lines.

 
Last edited:
Joined
Apr 12, 2013
Messages
7,534 (1.77/day)
I was wondering when this rumor would end up here, and lo & behold. What's interesting is that if the system controller is moved off-die, you can basically make any number of CPU/GPU combinations as well, the only limiting factor being TDP, especially for the ULV segment. The details should be revealed sometime next month, I believe.
Hrm... I don't think we've really had multi-die chips since Core 2... and since then, the northbridge has moved off the board onto the chip. Still, creating a separate design for EPYC (or even some Threadripper chips) to work around that performance penalty kinda ruins the scalability of the Zen architecture, and may not perform all that well anyway... because now you've got X number of dies trying to communicate with the same northbridge, and thereby with the rest of the system, at the same time...
This isn't just the NB; remember, Zen is already a full-on SoC.
 
Last edited:
Joined
Jan 13, 2018
Messages
157 (0.06/day)
System Name N/A
Processor Intel Core i5 3570
Motherboard Gigabyte B75
Cooling Coolermaster Hyper TX3
Memory 12 GB DDR3 1600
Video Card(s) MSI Gaming Z RTX 2060
Storage SSD
Display(s) Samsung 4K HDR 60 Hz TV
Case Eagle Warrior Gaming
Audio Device(s) N/A
Power Supply Coolermaster Elite 460W
Mouse Vorago KM500
Keyboard Vorago KM500
Software Windows 10
Benchmark Scores N/A
As I kept reading, I realized this is not going to work on the same slow InfinityFabric. It is also going to add latency, because more hops are needed to communicate with cores on another die, and no die has direct access to memory. I can imagine this design performing very poorly in lightly threaded apps and having outrageous power consumption. I hope AMD does not take this approach (good for even more cores, bad for performance).
 
Joined
Apr 12, 2013
Messages
7,534 (1.77/day)
As I kept reading, I realized this is not going to work on the same slow InfinityFabric. It is also going to add latency, because more hops are needed to communicate with cores on another die, and no die has direct access to memory. I can imagine this design performing very poorly in lightly threaded apps and having outrageous power consumption. I hope AMD does not take this approach (good for even more cores, bad for performance).
I have a feeling you'll be surprised (disappointed?) by how well it performs.
https://www.hpcwire.com/2018/10/30/cray-unveils-shasta-lands-nersc-9-contract/

There's also a possibility that there will be more than one die, especially for desktops & notebooks.
 
Joined
Jan 13, 2018
Messages
157 (0.06/day)
System Name N/A
Processor Intel Core i5 3570
Motherboard Gigabyte B75
Cooling Coolermaster Hyper TX3
Memory 12 GB DDR3 1600
Video Card(s) MSI Gaming Z RTX 2060
Storage SSD
Display(s) Samsung 4K HDR 60 Hz TV
Case Eagle Warrior Gaming
Audio Device(s) N/A
Power Supply Coolermaster Elite 460W
Mouse Vorago KM500
Keyboard Vorago KM500
Software Windows 10
Benchmark Scores N/A
Joined
Apr 12, 2013
Messages
7,534 (1.77/day)
100 petaflops of "peak performance" powered by an undisclosed number of AMD EPYC CPUs and NVIDIA GPUs. I'm quite sure most of the performance comes from the GPUs anyway. This proves nothing.
It certainly proves that the system isn't bottlenecked in any of the ways you're thinking it would be; also, it's Rome.

You mean "theoretical peak FLOPS", unless we are to believe that CPUs in most supercomputers are just for show.
 
Joined
Sep 15, 2011
Messages
6,727 (1.39/day)
Processor Intel® Core™ i7-13700K
Motherboard Gigabyte Z790 Aorus Elite AX
Cooling Noctua NH-D15
Memory 32GB(2x16) DDR5@6600MHz G-Skill Trident Z5
Video Card(s) ZOTAC GAMING GeForce RTX 3080 AMP Holo
Storage 2TB SK Platinum P41 SSD + 4TB SanDisk Ultra SSD + 500GB Samsung 840 EVO SSD
Display(s) Acer Predator X34 3440x1440@100Hz G-Sync
Case NZXT PHANTOM410-BK
Audio Device(s) Creative X-Fi Titanium PCIe
Power Supply Corsair 850W
Mouse Logitech Hero G502 SE
Software Windows 11 Pro - 64bit
Benchmark Scores 30FPS in NFS:Rivals
So many experts here in CPU design it's amazing to be part of such a group. You should all be hired by AMD! Seriously!

For their "Clown Division"...
 
Joined
Jun 12, 2017
Messages
136 (0.05/day)
The only question now is whether the 32MB of L3 cache per CCX will be present as this leak suggests. It is entirely possible that the L3 cache all gets dumped onto the central controller chip. 32MB of cache at 7nm is a real cost to consider, and making 8 of them shared and coherent is hard AF. If that's the case (and they use it in MSDT), it's screwed.
 
Joined
Mar 6, 2012
Messages
569 (0.12/day)
Processor i5 4670K - @ 4.8GHZ core
Motherboard MSI Z87 G43
Cooling Thermalright Ultra-120 *(Modded to fit on this motherboard)
Memory 16GB 2400MHZ
Video Card(s) HD7970 GHZ edition Sapphire
Storage Samsung 120GB 850 EVO & 4X 2TB HDD (Seagate)
Display(s) 42" Panasonice LED TV @120Hz
Case Corsair 200R
Audio Device(s) Xfi Xtreme Music with Hyper X Core
Power Supply Cooler Master 700 Watts
So many experts here in CPU design it's amazing to be part of such a group. You should all be hired by AMD! Seriously!

For their "Clown Division"...

I couldn't have put it better without sounding like a troll.
 

hat

Enthusiast
Joined
Nov 20, 2006
Messages
21,745 (3.30/day)
Location
Ohio
System Name Starlifter :: Dragonfly
Processor i7 2600k 4.4GHz :: i5 10400
Motherboard ASUS P8P67 Pro :: ASUS Prime H570-Plus
Cooling Cryorig M9 :: Stock
Memory 4x4GB DDR3 2133 :: 2x8GB DDR4 2400
Video Card(s) PNY GTX1070 :: Integrated UHD 630
Storage Crucial MX500 1TB, 2x1TB Seagate RAID 0 :: Mushkin Enhanced 60GB SSD, 3x4TB Seagate HDD RAID5
Display(s) Onn 165hz 1080p :: Acer 1080p
Case Antec SOHO 1030B :: Old White Full Tower
Audio Device(s) Creative X-Fi Titanium Fatal1ty Pro - Bose Companion 2 Series III :: None
Power Supply FSP Hydro GE 550w :: EVGA Supernova 550
Software Windows 10 Pro - Plex Server on Dragonfly
Benchmark Scores >9000
I was wondering when this rumor would end up here, and lo & behold. What's interesting is that if the system controller is moved off-die, you can basically make any number of CPU/GPU combinations as well, the only limiting factor being TDP, especially for the ULV segment. The details should be revealed sometime next month, I believe. This isn't just the NB; remember, Zen is already a full-on SoC.

If the system controller/NB is moved off-die... isn't that a huge step backwards? Sure, it might be good for connecting a lot of stuff together... but it would be really slow and clunky compared to the way things have been done for the past 10 years or more.

Anyway, this design (and others that already exist, as seen in the 2990WX) kinda sounds like a multi-socket system... all in one chip. It's great for heavily threaded workloads, being wallet friendly, and cramming an obscene number of cores into one chip/board, but if you're not using it for that, it's detrimental.
 
Joined
Apr 12, 2013
Messages
7,534 (1.77/day)
If the system controller/NB is moved off-die... isn't that a huge step backwards? Sure, it might be good for connecting a lot of stuff together... but it would be really slow and clunky compared to the way things have been done for the past 10 years or more.

Anyway, this design (and others that already exist, as seen in the 2990WX) kinda sounds like a multi-socket system... all in one chip. It's great for heavily threaded workloads, being wallet friendly, and cramming an obscene number of cores into one chip/board, but if you're not using it for that, it's detrimental.
That's the biggest question for everyone right now, though if the leaks are true then AMD must have done their own tests and found it not to be such a major regression, if one at all. The part about L3 & a possible L4 also makes sense, as some of the latency trade-offs can be mitigated by increasing the L3 size and introducing an L4, thereby increasing the overall cache hit rate.
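To illustrate why a bigger L3 plus an L4 could hide extra fabric latency, here is a toy average-memory-access-time model; every hit rate and latency number in it is made up for illustration, not a Zen 2 figure.

Code:
# Toy AMAT model for accesses that already missed L1/L2.
# All hit rates and latencies below are assumptions for illustration only.

def amat(l3_hit_rate, l3_latency_ns, l4_hit_rate=0.0, l4_latency_ns=0.0, dram_latency_ns=120):
    """Average latency of an access, falling through L3 -> (optional) L4 -> DRAM."""
    miss_l3 = 1.0 - l3_hit_rate
    miss_l4 = 1.0 - l4_hit_rate
    return (l3_hit_rate * l3_latency_ns
            + miss_l3 * l4_hit_rate * l4_latency_ns
            + miss_l3 * miss_l4 * dram_latency_ns)

# Smaller L3, no L4, DRAM behind an extra fabric hop (assumed 120 ns):
print(amat(l3_hit_rate=0.50, l3_latency_ns=12))                                       # ~66 ns
# Doubled L3 (better hit rate) plus a hypothetical L4 on the hub die:
print(amat(l3_hit_rate=0.65, l3_latency_ns=14, l4_hit_rate=0.5, l4_latency_ns=45))    # ~38 ns

Even with DRAM a full fabric hop away, better cache hit rates pull the average back down; that is the trade-off being speculated about here.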

I think there will be more dies this time around; Zen had two, the second being RR (Raven Ridge) with an IGP.
 
Last edited:

hat

Enthusiast
Joined
Nov 20, 2006
Messages
21,745 (3.30/day)
Location
Ohio
System Name Starlifter :: Dragonfly
Processor i7 2600k 4.4GHz :: i5 10400
Motherboard ASUS P8P67 Pro :: ASUS Prime H570-Plus
Cooling Cryorig M9 :: Stock
Memory 4x4GB DDR3 2133 :: 2x8GB DDR4 2400
Video Card(s) PNY GTX1070 :: Integrated UHD 630
Storage Crucial MX500 1TB, 2x1TB Seagate RAID 0 :: Mushkin Enhanced 60GB SSD, 3x4TB Seagate HDD RAID5
Display(s) Onn 165hz 1080p :: Acer 1080p
Case Antec SOHO 1030B :: Old White Full Tower
Audio Device(s) Creative X-Fi Titanium Fatal1ty Pro - Bose Companion 2 Series III :: None
Power Supply FSP Hydro GE 550w :: EVGA Supernova 550
Software Windows 10 Pro - Plex Server on Dragonfly
Benchmark Scores >9000
Maybe you're on to something. When the Athlon was smacking the Pentium around, part of the reason was that it had an on-die memory controller... but then the Core 2 series came out, and though it still relied on the old, slow FSB (with the memory controller on the then on-board, not on-chip, NB), it smacked the Athlon around... and those chips had big caches. Then Intel went with much smaller caches from Nehalem onward.

AMD's CCX design is great, but even that has its limits: when you put a bunch of them together, they all have to communicate with each other in some way... but there was a reason everything moved off the board onto the CPU; it's much faster that way. AMD certainly has an issue on their hands... and this move seems like a gamble to me. Time will tell if they come out with a hit or a flop...
 
Joined
Jan 13, 2018
Messages
157 (0.06/day)
System Name N/A
Processor Intel Core i5 3570
Motherboard Gigabyte B75
Cooling Coolermaster Hyper TX3
Memory 12 GB DDR3 1600
Video Card(s) MSI Gaming Z RTX 2060
Storage SSD
Display(s) Samsung 4K HDR 60 Hz TV
Case Eagle Warrior Gaming
Audio Device(s) N/A
Power Supply Coolermaster Elite 460W
Mouse Vorago KM500
Keyboard Vorago KM500
Software Windows 10
Benchmark Scores N/A
So many experts here in CPU design it's amazing to be part of such a group. You should all be hired by AMD! Seriously!

For their "Clown Division"...
Did you notice that you included yourself? :laugh:
 
Joined
Nov 3, 2013
Messages
2,141 (0.53/day)
Location
Serbia
Processor Ryzen 5600
Motherboard X570 I Aorus Pro
Cooling Deepcool AG400
Memory HyperX Fury 2 x 8GB 3200 CL16
Video Card(s) RX 6700 10GB SWFT 309
Storage SX8200 Pro 512 / NV2 512
Display(s) 24G2U
Case NR200P
Power Supply Ion SFX 650
Mouse G703 (TTC Gold 60M)
Keyboard Keychron V1 (Akko Matcha Green) / Apex m500 (Gateron milky yellow)
Software W10
So many experts here in CPU design it's amazing to be part of such a group. You should all be hired by AMD! Seriously!

For their "Clown Division"...
Took the words out of my mouth... or fingers in this case.
 
Joined
Apr 5, 2015
Messages
31 (0.01/day)
AMD knew what it was doing with Zen and the CCX/Infinity Fabric thing (I guess we can agree that Zen is overall a good-performing architecture). Now, I guess they have learned something from this experience and want to take the idea further. Also... the multi-chip solution is the only practical one we have for high-core-count CPUs; there is no way to manufacture gigantic pieces of silicon with decent yields. And we also don't know whether AMD made improvements to the Infinity Fabric to reduce the bandwidth/latency problems of this connection, other than increasing the L3 cache for obvious reasons. I think this is something new, with good possibilities if done correctly; we need to relax and wait for the end result. As customers/consumers, we should always be positive and supportive of new and brave ideas.
 
Last edited:
Joined
Feb 13, 2012
Messages
523 (0.11/day)
But how does this affect minimum latency? Right now, with the current approach, there is a fairly wide delta between min and max latency depending on which core is communicating with what. When an app is running locally on one CCX, latency is excellent; when both CCXs are needed, latency increases slightly; and when a workload needs to reach other chips on the module, latency maxes out. This central northbridge might lower that max latency and make the gap between min and max much smaller; however, at a high level one can expect min latency to take a big hit and increase drastically.

TL;DR - Unless I am misunderstanding something, this approach will only shrink the min-max delta to give more consistent latency across all workloads, and it would achieve that by increasing min latency while decreasing max latency - counterproductive if true.
 
Joined
Oct 1, 2006
Messages
4,932 (0.74/day)
Location
Hong Kong
Processor Core i7-12700k
Motherboard Z690 Aero G D4
Cooling Custom loop water, 3x 420 Rad
Video Card(s) RX 7900 XTX Phantom Gaming
Storage Plextor M10P 2TB
Display(s) InnoCN 27M2V
Case Thermaltake Level 20 XT
Audio Device(s) Soundblaster AE-5 Plus
Power Supply FSP Aurum PT 1200W
Software Windows 11 Pro 64-bit
But how does this affect minimum latency? Right now, with the current approach, there is a fairly wide delta between min and max latency depending on which core is communicating with what. When an app is running locally on one CCX, latency is excellent; when both CCXs are needed, latency increases slightly; and when a workload needs to reach other chips on the module, latency maxes out. This central northbridge might lower that max latency and make the gap between min and max much smaller; however, at a high level one can expect min latency to take a big hit and increase drastically.

TL;DR - Unless I am misunderstanding something, this approach will only shrink the min-max delta to give more consistent latency across all workloads, and it would achieve that by increasing min latency while decreasing max latency - counterproductive if true.
Currently the Zen die is composed of 2 quad-core CCXs, both connected to the SoC/NB portion via Infinity Fabric.
So accessing L3 cache that sits on another die requires 3 hops: first from the CCX to the local SoC, second to the SoC of the other die, then from that SoC to the CCX holding the L3.
In the new layout the number of hops is 2: first to the central hub, second to the other CCX where the L3 is located.

What this does, though, is avoid the issue with the 2990WX / 2970WX where some cores need 3 hops to reach memory:
first from the CCX to the local SoC, then to the SoC of the die that actually has memory attached.
Also, the 2-die Threadripper connects its dies with 2 Infinity Fabric links each, while the 4-die version has only 1 link to each die, so half the bandwidth.
If each Zen 2 die also keeps its 2 IF links, it would always have as much, if not double, the bandwidth to memory, provided AMD can keep the IF speed the same as on Zen 1.
On Zen 2 each CCX would always be 1 hop away from memory, meaning consistent latency across all dies.
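To put rough numbers on the hop counting above, here is a quick Python sketch; the two topologies and the one-hop-within-a-die simplification are assumptions drawn from this speculation, not a confirmed design.

Code:
# Tiny model of the hop counts described above. Topologies are assumptions
# based on this speculation and on how Zen 1 EPYC is laid out.

def zen1_hops_to_l3(src_die: int, dst_die: int) -> int:
    """Zen 1 EPYC: hops from a CCX on src_die to the L3 of a CCX on dst_die."""
    if src_die == dst_die:
        return 1          # via the local SoC/NB (simplified to one hop)
    return 3              # CCX -> local SoC -> remote SoC -> remote CCX

def zen2_hops_to_l3(src_die: int, dst_die: int) -> int:
    """Rumored Zen 2: every cross-die access goes through the central System Controller."""
    if src_die == dst_die:
        return 1          # within the same die
    return 2              # CCX -> System Controller hub -> remote CCX

dies = range(4)
for name, hops in (("Zen 1 EPYC", zen1_hops_to_l3), ("Rumored Zen 2", zen2_hops_to_l3)):
    worst = max(hops(a, b) for a in dies for b in dies)
    avg = sum(hops(a, b) for a in dies for b in dies) / len(dies) ** 2
    print(f"{name}: worst-case L3 hops = {worst}, average = {avg:.2f}")

# Memory: on Zen 1 EPYC a core is 1 hop from local DRAM but more from DRAM on another die,
# while in the rumored layout every core is a constant 1 hop from the central memory controller.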

For gaming, isn't it mostly the maximum latency that causes frame-time issues?
After all, the 1% and 0.1% lows measure the longest frame times, whereas the minimum frame time (aka max FPS) isn't nearly as important.
 
Last edited:

bug

Joined
May 22, 2015
Messages
13,779 (3.96/day)
Processor Intel i5-12600k
Motherboard Asus H670 TUF
Cooling Arctic Freezer 34
Memory 2x16GB DDR4 3600 G.Skill Ripjaws V
Video Card(s) EVGA GTX 1060 SC
Storage 500GB Samsung 970 EVO, 500GB Samsung 850 EVO, 1TB Crucial MX300 and 2TB Crucial MX500
Display(s) Dell U3219Q + HP ZR24w
Case Raijintek Thetis
Audio Device(s) Audioquest Dragonfly Red :D
Power Supply Seasonic 620W M12
Mouse Logitech G502 Proteus Core
Keyboard G.Skill KM780R
Software Arch Linux + Win10
It's a solution that creates more problems :kookoo:
It's engineering. Any solution will create problems sooner or later, given the "proper" scenario ;)
 
Joined
Jan 8, 2017
Messages
9,436 (3.28/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
The only question now is whether the 32MB of L3 cache per CCX will be present as this leak suggests. It is entirely possible that the L3 cache all gets dumped onto the central controller chip. 32MB of cache at 7nm is a real cost to consider, and making 8 of them shared and coherent is hard AF. If that's the case (and they use it in MSDT), it's screwed.

The cache needs to be low-latency; therefore, it has to be on the same die.

because more hops are needed to communicate with cores on another die

It's actually going to be fewer, on average.

I can imagine this design performing very poorly in lightly threaded apps

If the communication between the cores is hampered as you say, how would that affect single-threaded performance? It's the exact opposite of what you are describing: leaving only the cores and cache on each die would allow for higher clocks, and therefore higher single-threaded performance and higher performance in general.
 
Joined
Sep 17, 2014
Messages
22,452 (6.03/day)
Location
The Washing Machine
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling Thermalright Peerless Assassin
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
So, all roads truly do lead to Rome, then.
 