
AMD "Vega 20" GPU Not Before Late Q1-2019

VEGA 7nm is on the same 2017-2020 roadmap, standing between VEGA 14nm, NAVI 7nm and NEXT GEN 7nm+. So yes, it is for gaming. This is happening.

As usual you are very crafty at pulling facts and specs out of your rear end.

Source or it didn't happen; most of what you're saying only happens in your head.

AMD has been stuck with a four-CU-lane, 64-ROP setup since the R9 290X design. Vega 64 has four CU lanes and 64 ROPs, with an L2 cache link and a clock speed range around 1536 MHz.

AMD moved from the two-CU-lane, 32-ROP 7970 in December 2011 to the four-CU-lane, 64-ROP R9 290X in 2013. From 2013 to 2018 there was no increase in CU lane or ROP count, while NVIDIA GPUs scaled up to 88-96 ROPs.
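The ROP-scaling point above can be sanity-checked with a quick pixel fill-rate calculation. This is a minimal sketch; the clocks below are approximate reference boost clocks I've assumed, not figures from the post:

```python
# Peak pixel fill rate (GPixels/s) = ROPs x core clock (GHz).
# Clocks are approximate reference boost clocks (assumed).
def fill_rate_gpps(rops: int, clock_ghz: float) -> float:
    return rops * clock_ghz

gpus = [
    ("HD 7970 (2011)",     32, 0.925),
    ("R9 290X (2013)",     64, 1.000),
    ("RX Vega 64 (2017)",  64, 1.536),
    ("GTX 1080 Ti (2017)", 88, 1.582),
]
for name, rops, clk in gpus:
    print(f"{name}: {fill_rate_gpps(rops, clk):.1f} GP/s")
```

Between 2013 and 2017 AMD's fill rate grew only through clock speed, while NVIDIA grew both ROP count and clocks.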

Raja "TFLOPS" Koduri joined AMD in 2013.

Nice find and linking of those facts. It will be interesting putting those next to what Raja's going to produce for Intel.

GCN was already due for a radical redesign in 2013, in every way, and Hawaii was already scaled up too far; it's a diminishing-returns fiesta. At that point it was rapidly losing ground in perf/watt, ran into heat problems, and lacked optimization and focus compared to the competition. It's clear AMD has done everything in the book to stretch the 'old' GCN out for several more years with minimal effort and investment (and make it viable for emerging markets too; quite a feat if you think about it, and the only way their HBM focus makes sense). Does that mean it is end of life? Not sure you can say that. It still has potency, but it fails to extract it properly for specific use cases (such as gaming). The way I see it, GCN can only survive if it gets more specialized, narrowed down and split up into branches for specific markets.
 

ppn

CES 2018. GPU roadmap.
I see now. An architecture on the roadmap doesn't mean a product for everyone.
 
From 2013 to 2018 there was no increase in CU lane or ROP count

And what do you conclude from that? Again, do you work at AMD and know something we don't with regards to what can or can't be done?
 

bug

CES 2018. GPU roadmap.
I see now. An architecture on the roadmap doesn't mean a product for everyone.
It used to. But now that we've had a Vega that never made it to consumer space, other architectures may follow that don't scale down well enough to be worth making the transition.
Until AMD/Nvidia themselves announce they will turn it into a consumer product, it's wiser not to assume they will. It's more cost-effective to have a unified architecture, but it would seem compute is starting to grow enough to make it more profitable to break with that rule.
Oh well, interesting times ahead...
 
It's more cost effective to have a unified architecture, but it would seem compute is starting to grow enough to make it more profitable to break with that rule.
That rule only really applies if the arch is more or less equally suitable for all use cases. Vega clearly isn't that, being far better at compute than gaming loads. As such, "more cost effective" kind of goes out the window when you have to use oversized dice to match the competition (which kills margins). This also forces you out of the high-end market, when your biggest chip can only match the competition's second-tier one. This is exactly what happened with Vega. While developing separate architectures is of course wildly expensive, at a certain point it starts making sense (or being necessary) when your current arch can't keep up in specific scenarios.

Still, I agree with your general approach: assume "graphics" (read: parallel compute) chips are targeted towards the high-volume, high-margin enterprise space first, and gaming second, if at all. If it's for gaming, it will be shouted from the rooftops, as that's the general approach to marketing towards gamers. AMD holding back now might give them the necessary capital to develop a competitive gaming arch for the future, while launching Vega 20 for gaming would be low-volume, low-margin, and likely not competitive with Turing - helping no one, neither gamers nor AMD. As a strategy for a resource-constrained competitor to a near-monopolist, it's the only sensible one.
 
Well, you are right; still, Arcturus will be a chip based on a new, unnamed architecture different from Navi (which is still GCN).

I think Vega 20 is too expensive to produce to be competitive in the gaming market; that is why it won't launch in any consumer product until yields improve enough for it to be mass-produced at a good price point.
What if it's just a chip based on the Navi architecture?
 
Nice find and linking of those facts. It will be interesting putting those next to what Raja's going to produce for Intel.

GCN was already due for a radical redesign in 2013, in every way, and Hawaii was already scaled up too far; it's a diminishing-returns fiesta. At that point it was rapidly losing ground in perf/watt, ran into heat problems, and lacked optimization and focus compared to the competition. It's clear AMD has done everything in the book to stretch the 'old' GCN out for several more years with minimal effort and investment (and make it viable for emerging markets too; quite a feat if you think about it, and the only way their HBM focus makes sense). Does that mean it is end of life? Not sure you can say that. It still has potency, but it fails to extract it properly for specific use cases (such as gaming). The way I see it, GCN can only survive if it gets more specialized, narrowed down and split up into branches for specific markets.
GCN can survive if its rasterization hardware improves. Workloads like cryptocurrency mining don't use the ROPs or rasterization hardware, hence AMD's TFLOPS are competitive against NVIDIA's TFLOPS there.

GPGPU compute workloads usually use the TMUs as their read/write units instead of the ROPs' read/write units. This relates to AMD's push for async-compute optimizations.

Vega 64's ROP and rasterization power is at a similar level to GP104's, hence Vega 64's inferior results when compared to the GTX 1080 Ti with its 88 ROPs.

For example, a Vega 56 at 1710 MHz with 12.2 TFLOPS can beat a Strix Vega 64 at 1590 MHz with 13 TFLOPS.
The higher clock speed speeds up Vega 56's ROP/rasterization hardware. AMD needs to reduce the CU count, move to the 7nm process and focus on higher clock speeds.
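The arithmetic behind the Vega 56 vs. Strix Vega 64 comparison can be sketched quickly. Shader and ROP counts are the public specs; the clocks are the ones quoted in the post:

```python
def tflops(shaders: int, clock_ghz: float) -> float:
    # FP32 TFLOPS = shaders x 2 FLOPs/clock x clock (GHz) / 1000
    return shaders * 2 * clock_ghz / 1000

def fill_rate_gpps(rops: int, clock_ghz: float) -> float:
    # Peak pixel fill rate in GPixels/s
    return rops * clock_ghz

# (name, shaders, ROPs, clock in GHz)
cards = [
    ("Vega 56 @ 1710 MHz",       3584, 64, 1.710),
    ("Strix Vega 64 @ 1590 MHz", 4096, 64, 1.590),
]
for name, sp, rops, clk in cards:
    print(f"{name}: {tflops(sp, clk):.2f} TFLOPS, "
          f"{fill_rate_gpps(rops, clk):.1f} GP/s")
```

The overclocked Vega 56 has about 6% less compute but about 8% more fill rate, which is consistent with the post's ROP-bound argument.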
 
GCN can survive if its rasterization hardware improves.

Vega 64's ROP and rasterization power is at a similar level to GP104's, hence Vega 64's inferior results when compared to the GTX 1080 Ti with its 88 ROPs.
If you look at the specs, you'll see that the RTX 2070/2080 have the same level of GP/s as the GTX 1070/1080 while offering a significant performance boost (~35%) over the previous generation. The RTX 2070 performs close to the GTX 1080 Ti, and the RTX 2080 way beyond it, with far less rasterization performance. The rasterization performance of Vega is not that far off. Nvidia is not bottlenecked by rasterization performance, and neither should AMD be.

Vega 56 with 12.2 TFLOPS at 1710 MHz can beat a Strix Vega 64 at 1590 MHz with 13 TFLOPS.

The higher clock speed speeds up Vega 56's ROP/rasterization hardware. AMD needs to reduce the CU count, move to the 7nm process and focus on higher clock speeds.
You do realize that a Vega 56 overclocked to 1710 MHz (12.2 TFLOPS) is close to RTX 2080 Ti-level theoretical performance (11.8-13.4 TFLOPS), while the similarly performing RTX 2070 has just 6.5-7.5 TFLOPS? No, GCN can't survive in this battle; it simply doesn't scale.

The problem with GCN is not a lack of rasterization hardware, it's not a lack of computation performance (it has plenty) and it's not a lack of memory bandwidth (plenty here too). It's simply a lack of saturation of resources; sometimes 30-40% of the CUs are simply "starving". The reason why you can sometimes see an overclocked Vega 56 scale better than an overclocked Vega 64 has to do with resource management, not total resources. Low-level scheduling is always done in the GPU, and some bins just strike a better balance of scheduling, cache utilization and register files.

You can see the same thing with the GTX 970; while the GTX 960, 980 and 980 Ti all perform nearly identically per TFLOP, the GTX 970 performs much better than it should "on paper". This is also the reason why many factory-overclocked GTX 970s were close to stock GTX 980s, offering incredible value. So why does the GTX 970, which is a cut-down GTX 980, scale so much better? It simply strikes a sweet spot in GPU resources for scheduling, cache management and register files.
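The TFLOPS ranges quoted in this post follow from the standard peak-throughput formula. A sketch; the CUDA-core and stream-processor counts are the public specs, and the base-to-boost clock ranges are my own assumptions:

```python
def tflops(cores: int, clock_ghz: float) -> float:
    # FP32 TFLOPS = cores x 2 FLOPs/clock x clock (GHz) / 1000
    return cores * 2 * clock_ghz / 1000

# (name, shader cores, low clock GHz, high clock GHz) - clocks assumed
cards = [
    ("Vega 56 OC",  3584, 1.710, 1.710),
    ("RTX 2070",    2304, 1.410, 1.620),
    ("RTX 2080 Ti", 4352, 1.350, 1.545),
]
for name, cores, lo, hi in cards:
    print(f"{name}: {tflops(cores, lo):.1f}-{tflops(cores, hi):.1f} TFLOPS")
```

This reproduces the 6.5-7.5 TFLOPS figure for the RTX 2070 and the 11.8-13.4 TFLOPS range for the RTX 2080 Ti, bracketing the overclocked Vega 56's ~12.3 TFLOPS.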
 
If you look at the specs, you'll see that the RTX 2070/2080 have the same level of GP/s as the GTX 1070/1080 while offering a significant performance boost (~35%) over the previous generation. The RTX 2070 performs close to the GTX 1080 Ti, and the RTX 2080 way beyond it, with far less rasterization performance. The rasterization performance of Vega is not that far off. Nvidia is not bottlenecked by rasterization performance, and neither should AMD be.


You do realize that a Vega 56 overclocked to 1710 MHz (12.2 TFLOPS) is close to RTX 2080 Ti-level theoretical performance (11.8-13.4 TFLOPS), while the similarly performing RTX 2070 has just 6.5-7.5 TFLOPS? No, GCN can't survive in this battle; it simply doesn't scale.

The problem with GCN is not a lack of rasterization hardware, it's not a lack of computation performance (it has plenty) and it's not a lack of memory bandwidth (plenty here too). It's simply a lack of saturation of resources; sometimes 30-40% of the CUs are simply "starving". The reason why you can sometimes see an overclocked Vega 56 scale better than an overclocked Vega 64 has to do with resource management, not total resources. Low-level scheduling is always done in the GPU, and some bins just strike a better balance of scheduling, cache utilization and register files.

You can see the same thing with the GTX 970; while the GTX 960, 980 and 980 Ti all perform nearly identically per TFLOP, the GTX 970 performs much better than it should "on paper". This is also the reason why many factory-overclocked GTX 970s were close to stock GTX 980s, offering incredible value. So why does the GTX 970, which is a cut-down GTX 980, scale so much better? It simply strikes a sweet spot in GPU resources for scheduling, cache management and register files.
You are wrong.
The RX Vega 56 with 64 ROPs at 1710 MHz and ~12.2 TFLOPS beat the Strix Vega 64 with 64 ROPs at 1590 MHz and 13 TFLOPS! Lesson: RX Vega needs ROP power.
At 1710 MHz, this Vega 56 has faster rasterization, ROP and geometry throughput.


GCNs equipped with four shader engines have four rasterizer units. The RB units contain the ROPs.
For the classic rasterizer path, compute TFLOPS mean little without the ROPs' read/write units.

The RTX 2080's 64 ROPs run at nearly 2000 MHz and are coupled with 4 MB of L2 cache (plus NVIDIA's superior delta color compression). At 1928 MHz (stealth boost mode), the RTX 2080 has 11.85 TFLOPS.

The GTX 1080 Ti's 88 ROPs range in clock speed from 1600 MHz to 1800 MHz and are coupled with 3 MB of L2 cache (again with NVIDIA's superior delta color compression). The RTX 2080 is superior with its 4 MB of L2 cache compared to the GTX 1080 Ti's 3 MB.

My MSI GTX 1080 Ti FE reference model boosts beyond the paper-spec clock speeds listed in TechPowerUp's GTX 1080 Ti FE database. My GTX 1080 Ti FE has 11.5 TFLOPS at 1600 MHz and 12.9 TFLOPS at 1800 MHz, depending on thermals. TechPowerUp's clock speed specs for Pascal GPUs are bullshit, since my stock GTX 1080 Ti FE is faster than the paper specs.
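As a sanity check, these 1080 Ti figures follow directly from its 3584 CUDA cores (a quick sketch using the standard peak-FP32 formula; the core count is the public spec):

```python
def tflops(cores: int, clock_ghz: float) -> float:
    # FP32 TFLOPS = cores x 2 FLOPs/clock x clock (GHz) / 1000
    return cores * 2 * clock_ghz / 1000

GTX_1080_TI_CORES = 3584
print(round(tflops(GTX_1080_TI_CORES, 1.6), 1))  # ~11.5 TFLOPS at 1600 MHz
print(round(tflops(GTX_1080_TI_CORES, 1.8), 1))  # ~12.9 TFLOPS at 1800 MHz
```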

The RTX 2080 Ti's 88 ROPs run at nearly 2000 MHz and are coupled with 6 MB of L2 cache (plus NVIDIA's superior delta color compression). At 1928 MHz (stealth boost mode), the RTX 2080 Ti has 16.78 TFLOPS.

NVIDIA GPUs have a stealth boost behavior that goes beyond the paper specs.


The RX Vega 64 has 64 ROPs at 1536 MHz coupled with 4 MB of L2 cache (and AMD's delta color compression is inferior).

ROPs coupled with a multi-MB L2 cache are used for tiled rendering methods.


For comparison, GP102 has six rasterization units, one for each GPC block.

GP102 parts range from 88 to 96 ROPs, with clock speeds from 1600 MHz to 1800 MHz, or sometimes 1900 MHz during a cold winter. My GTX 1080 Ti FE usually runs in the 1700 MHz to 1800 MHz range, but rarely drops to 1600 MHz.

Vega 64 can't match my GTX 1080 Ti's six rasterization units and 88 ROPs at clock speeds between 1600 and 1800 MHz.

GP104 has four GPC blocks, hence four rasterization units, just like Vega 56/64's four rasterization units.
GP104 has 64 ROPs, just like Vega 56/64's 64 ROPs.
GP104 has a higher clock speed, while Vega 56/64 has more horizontal CU compute.

It's a no-brainer why Vega 56/64 lands at GP104-level game results.


GCN lacks rasterization hardware, while cryptocurrency workloads don't use the rasterization/ROP hardware.
 
You are wrong.
No, your entire argument is fundamentally flawed.
If Vega were bottlenecked by ROPs, Nvidia should be too. And yet, the RTX 2080 manages to outperform the GTX 1080 Ti by a good margin despite having much lower rasterization performance. There is a reason why neither AMD nor Nvidia is pushing rasterization performance: there are plenty of other bottlenecks more worthy of die space.
 
No, your entire argument is fundamentally flawed.
If Vega were bottlenecked by ROPs, Nvidia should be too. And yet, the RTX 2080 manages to outperform the GTX 1080 Ti by a good margin despite having much lower rasterization performance. There is a reason why neither AMD nor Nvidia is pushing rasterization performance: there are plenty of other bottlenecks more worthy of die space.
I don't see how that works? You mean 10% or less?



Now I'm not saying that Vega's biggest bottleneck is the ROPs, but what you said sounds odd, not least because AMD and Nvidia GPU uarchs aren't directly comparable.
 
No, your entire argument is fundamentally flawed.
If Vega were bottlenecked by ROPs, Nvidia should be too. And yet, the RTX 2080 manages to outperform the GTX 1080 Ti by a good margin despite having much lower rasterization performance. There is a reason why neither AMD nor Nvidia is pushing rasterization performance: there are plenty of other bottlenecks more worthy of die space.
Your argument is wrong since you didn't factor in the following:

1. Vega 56 at 1710 MHz increased its rasterization power while having lower compute (12.2 TFLOPS) compared to the Strix Vega 64 at 1590 MHz with 13.03 TFLOPS. Vega 56 at 1710 MHz beats the Strix Vega 64.

2. The RTX 2080 has a larger 4 MB L2 cache against the GTX 1080 Ti's 3 MB L2 cache. This L2 cache is used for immediate-mode tiled rendering. A larger cache reduces accesses to external memory.
NVIDIA improved the L2 cache rasterization design with Turing!


The Vega IP (gfx9 family) gains an L2 cache link in its ROP design. AMD improved the L2 cache rasterization design with Vega, but that wouldn't solve its rasterization-power inferiority compared to the GTX 1080 Ti FE's 88 ROPs at 1600 to 1800 MHz.

3. The RTX 2080 has a higher clock speed for its 64 ROPs, with its ~1928 MHz stealth boost mode, compared to the GTX 1080 Ti's 88 ROPs at 1600 to 1800 MHz.

In terms of raw ROP power, the RTX 2080 at 1928 MHz has about 82 percent of the GTX 1080 Ti's ROP throughput at 1700 MHz, but the RTX 2080 has a 33 percent larger L2 cache than the GTX 1080 Ti.

4. The recent mid-range Vega M GH has 64 ROPs at 1190 MHz with a 3.6 TFLOPS compute ratio. AMD effectively improved their mobile rasterization power at a given TFLOPS level.
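The 82 percent and 33 percent figures in point 3 check out arithmetically. A sketch; the 1700 MHz reference clock for the 1080 Ti is the one the post uses:

```python
# Raw ROP throughput (GPixels/s) = ROPs x clock (GHz):
# RTX 2080 (64 ROPs @ 1928 MHz) vs GTX 1080 Ti (88 ROPs @ 1700 MHz),
# plus the L2 cache size comparison (4 MB vs 3 MB).
rtx2080_fill   = 64 * 1.928   # GPixels/s
gtx1080ti_fill = 88 * 1.700   # GPixels/s

print(f"fill-rate ratio:  {rtx2080_fill / gtx1080ti_fill:.0%}")  # ~82%
print(f"L2 size increase: {4 / 3 - 1:.0%}")                      # ~33%
```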
 
RTX 2070 performs close to GTX 1080 Ti, and RTX 2080 way beyond it
How can one type this crap on a tech forum with a straight face? Jesus...
 
