
Valve Announces the Steam Deck Game Console

Joined
Jun 27, 2011
Messages
6,770 (1.38/day)
Processor 7800x3d
Motherboard Gigabyte B650 Auros Elite AX
Cooling Custom Water
Memory GSKILL 2x16gb 6000mhz Cas 30 with custom timings
Video Card(s) MSI RX 6750 XT MECH 2X 12G OC
Storage Adata SX8200 1tb with Windows, Samsung 990 Pro 2tb with games
Display(s) HP Omen 27q QHD 165hz
Case ThermalTake P3
Power Supply SuperFlower Leadex Titanium
Software Windows 11 64 Bit
Benchmark Scores CB23: 1811 / 19424 CB24: 1136 / 7687
I don't know what you want.
A Steam Deck!

You said some of those games are not on Steam, by the way. That is incorrect. Each of those is on Steam. They are all in my Steam library and I have played many of them before.
 
Joined
May 8, 2021
Messages
1,978 (1.51/day)
Location
Lithuania
System Name Shizuka
Processor Intel Core i5 10400F
Motherboard Gigabyte B460M Aorus Pro
Cooling Scythe Choten
Memory 2x8GB G.Skill Aegis 2666 MHz
Video Card(s) PowerColor Red Dragon V2 RX 580 8GB ~100 watts in Wattman
Storage 512GB WD Blue + 256GB WD Green + 4TB Toshiba X300
Display(s) BenQ BL2420PT
Case Cooler Master Silencio S400
Audio Device(s) Topping D10 + AIWA NSX-V70
Power Supply Chieftec A90 550W (GDP-550C)
Mouse Steel Series Rival 100
Keyboard Hama SL 570
Software Windows 10 Enterprise
Motosis isn't on Steam; hell, I can't even find it on the net. Either it's a typo or it's really some rare game.
 
Typo. Mitosis. It's a re-skinned agar.io clone put on the Steam store with new game modes and a pay-to-win business model.
 
Dude, it's still a typo. The actual name is "Mitos.is". Can you please stop being drunk while typing? No gamepad support, low-spec friendly, not Linux native, 50% chance of it working with Proton. I wouldn't really want to run it on the Deck.
 

Durvelle27

Moderator
Staff member
Joined
Jul 10, 2012
Messages
6,795 (1.50/day)
Location
Memphis, TN
System Name Black Prometheus
Processor |AMD Ryzen 7 1700
Motherboard ASRock B550M Pro4|MSI X370 Gaming PLUS
Cooling Thermalright PA120 SE | AMD Stock Cooler
Memory G.Skill 64GB(2x32GB) 3200MHz | 32GB(4x8GB) DDR4
Video Card(s) ASUS DirectCU II R9 290 4GB
Storage Sandisk X300 512GB + WD Black 6TB+WD Black 6TB
Display(s) LG Nanocell85 49" 4K 120Hz + ACER AOPEN 34" 3440x1440 144Hz
Case DeepCool Matrexx 55 V3 w/ 6x120mm Intake + 3x120mm Exhaust
Audio Device(s) LG Dolby Atmos 5.1
Power Supply Corsair RMX850 Fully Modular| EVGA 750W G2
Mouse Logitech Trackman
Keyboard Logitech K350
Software Windows 10 EDU x64
Hard pass for me. Couldn't care less about emulation or playing AAA games in 720p on low settings.
I honestly don't understand why people dwell on this. At this screen size your eyes could never see the difference between 720p and 1080p. And it will be able to easily run medium-high settings at 60 FPS.
 
You can certainly see low settings and sub-20 fps, though.
 

Durvelle27
Except this won't run at low settings at 20 FPS.
Yeah, you will need to lower your resolution a lot more for Cyberpunk to run okay. All the way down to 960x540.
 

Durvelle27
So let me get this straight: the GPD Win 3 can play Cyberpunk at 1280x720 on low settings at 30-40 FPS while having a much inferior GPU, but you think the Deck can't. Man, you guys are comical.
 
That's what I calculated. I looked up the GPD Win 3; it's actually a 35-watt handheld, and fps highly depends on the area. In the city it was in the 20s. On top of that, it used variable resolution, so it wasn't raw 720p. So you'd still need 960x540 at lowest settings to hit a 40 fps average.
 

Durvelle27
From the review I viewed it was locked at 720p, and locking to 30 FPS was also suggested for best performance. But let's see: the Iris Xe (96 EU) GPU is on average 5% faster than the Vega 8, which has 8 CUs (512 cores) and is based on GCN. The Deck is RDNA 2 with 8 CUs (512 cores), and RDNA 2 should offer around 50% more performance per core over GCN. So I don't see the Deck having any issues whatsoever, considering that even a 5700G can power games just fine with the aged Vega 8.
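The per-core argument above can be sketched as back-of-envelope arithmetic. The 5% and 50% figures are the post's own rough estimates, not measured benchmarks:

```python
# Rough sketch of the post's reasoning: if the Win 3's Iris Xe is ~5%
# faster than a Vega 8, and RDNA 2 gains ~50% per core over GCN at the
# same core count, the Deck's GPU lands well ahead of the Win 3's.
vega8 = 1.00              # baseline: 8-CU GCN (Vega 8)
iris_xe = vega8 * 1.05    # Win 3's GPU, ~5% faster on average
deck = vega8 * 1.50       # 8-CU RDNA 2 at ~+50% per core

print(f"Deck vs Win 3 GPU: {deck / iris_xe:.0%}")  # ~143%, i.e. ~43% faster
```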
 
Joined
Aug 23, 2017
Messages
113 (0.04/day)
System Name DELL 3630
Processor I7 8700K
Memory 32 gig
Video Card(s) 4070
I honestly don't understand why people dwell on this. At this screen size your eyes could never see the difference between 720p and 1080p. And it will be able to easily run medium-high settings at 60 FPS.
I have had enough tablets in the past with 7-inch screens to see there is a noticeable difference between 720p and 1080p. For example, jaggies are much more pronounced due to the larger pixel size in 720p. And a noticeable drop in detail in mobile games even. An open-world AAA game like RDR2 would not look good on this.
 
Joined
Apr 3, 2010
Messages
800 (0.15/day)
Location
US
System Name Desktop
Processor AMD Ryzen 5 5600X [3.7GHz/4.6GHz][6C/12T]
Motherboard ASUS TUF Gaming X570-PRO [X570]
Cooling Cooler Master Hyper 212 RGB Black Edition
Memory G.SKILL Ripjaws V Series 32GB [DDR4 3600][2x16GB][16-19-19-39@1.35V]
Video Card(s) ASUS KO GeForce RTX 3060 Ti V2 OC Edition 8GB GDDR6 [511.65]
Storage [OS] Samsung 970 Evo 500GB | [Storage] 980 1TB | 860 Evo 1TB | 850 Evo 500GB | Seagate Firecuda 2TB
Display(s) LG 27GL850 [27"][2560x1440@144Hz][Nano IPS][LED][G-SYNC Compatible][DP]
Case Corsair Obsidian 750D
Audio Device(s) Realtek ALC S1200A High Definition Audio CODEC
Power Supply EVGA SuperNOVA 1000 G1+ [+12V: 83.3A 999.6W][80 Plus Gold]
Mouse Logitech M570 Trackball
Keyboard Corsair Gaming K55 RGB
Software Microsoft Windows 10 Pro [21H1][64-bit]
Well, thankfully there should be plenty of reviews and information available long before the majority of us who reserved a unit will have the opportunity to buy.
 

Durvelle27
I have had enough tablets in the past with 7-inch screens to see there is a noticeable difference between 720p and 1080p. For example, jaggies are much more pronounced due to the larger pixel size in 720p. And a noticeable drop in detail in mobile games even. An open-world AAA game like RDR2 would not look good on this.
The problem with that is whether the screens were native 720p or higher. I saw RDR2 being played on the Win 3 and it looked no different than playing on the Xbox One or PS4, where the majority of games also ran at 720-900p. RDR2 ran at 864p on the Xbox One.
 
From the review I viewed it was locked at 720P and was suggested also locking to 30FPS for best performance. But lets see the Iris Xe 96 GPU is on average 5% faster than the Vega 8 which is 8CUs (512 Cores) and based on GCN. The Deck is RDNA 2 with 8CUs (512 Cores), RDNA2 should offer around 50% more performance per core over GCN. So i don't see the deck having any issues what's so ever looking that even a 5700G can power games just fine with the aged Vega 8
Well, I did calculations based on how the RX 560 performed and then tried to compare it to an 8-CU RDNA 2 iGPU on teraflops alone. It seems that wasn't a very good idea. I looked at a different video:

This is the 3400G. It is similar to what the Deck will be. It has a 4C/8T config, but with Zen+ cores. The Deck is clocked lower but has higher IPC, so it should be similar to the 3400G, just a tiny bit weaker. The 3400G has an 11-CU Vega iGPU (704 cores). In raw teraflops, Vega 11 is a bit faster than what the Deck can achieve at its peak. So overall the 3400G is somewhat faster on CPU and quite a bit faster on GPU. It does run Cyberpunk at 720p, but there are some pretty bad frame drops and some areas just have quite low fps, and I concluded earlier that I consider 40 fps playable. The 3400G can't achieve that, and it runs the game at 1280x720, which is a bit lower than the Deck's native resolution of 1280x800. That's 10% more pixels to drive.

Realistically, I would expect the Deck's GPU to be 20% slower than the 3400G's and its CPU to be 30% slower. Cyberpunk isn't a very CPU-demanding game, but it's hard on the GPU, so it may not be bottlenecked by the Deck's CPU; still, the Deck has 20% less GPU power than the 3400G. So let's calculate. 1280x720 is 921,600 pixels; the Deck is 20% slower, so reduce the pixel count by 20% and we get 737,280 pixels. At this point we get the same 34 fps as the 3400G, but we really want 40 fps. 40 fps is 15% more taxing, so take away another 15% of the resolution, leaving 626,688 pixels. The closest resolution to that is 960x640; at that, the Deck supposedly runs Cyberpunk okay, but it's quite a bit below the native 1280x800 (1,024,000 pixels).

With my previous calculation I arrived at 500k pixels or so, so this result is more optimistic, but it still isn't great for the Deck. Depending on overall system performance, FSR may speed up a little RDNA APU, but I don't think it will make Cyberpunk run at 1280x800 at a 40 fps average. FSR doesn't work very well on low-end hardware, as its overhead cancels out a lot of the performance gains.
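The pixel-budget estimate above, as a quick sketch. The 20%-slower-GPU figure and the 34-to-40 fps scaling are the post's own assumptions, not measurements:

```python
# Pixel-budget sketch: scale the 3400G's 1280x720 result down by the
# assumed GPU deficit, then again for the higher fps target.
base_px = 1280 * 720              # 3400G benchmark resolution: 921,600 px
deck_px = base_px * 0.80          # assume the Deck's GPU is 20% slower
target_px = deck_px * (34 / 40)   # trade ~15% of pixels for 34 -> 40 fps

print(int(deck_px))    # 737280
print(int(target_px))  # 626688
print(960 * 640)       # 614400 -- nearest common resolution under budget
print(1280 * 800)      # 1024000 -- Deck's native panel, ~63% over budget
```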
 

Durvelle27
The problem with the comparison is that it is still GCN vs RDNA 2. Yes, the Vega 11 has more CUs/cores, but it is also using a much inferior architecture. Just to give you an example:

The Vega 64 was the highest-end single GPU you could get based on GCN; it has 4096 cores. The current-gen RX 6700 XT, based on RDNA 2, has almost half the cores at 2560, but based on our very own TPU review the 6700 XT is on average 36% faster than the Vega 64 at 1080p.

The architectural refinement alone gives it a boost.
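For what it's worth, the core counts and the 36% figure imply a per-core throughput ratio, sketched below. This ignores the 6700 XT's much higher clocks and Infinity Cache, so it overstates pure architectural gains:

```python
# Per-core throughput implied by the comparison above: the 6700 XT is
# ~36% faster overall with only 2560 of the Vega 64's 4096 cores.
vega64_cores, navi22_cores = 4096, 2560
speedup = 1.36                      # 6700 XT vs Vega 64 at 1080p (TPU review)

per_core = speedup * (vega64_cores / navi22_cores)
print(f"per-core throughput ratio: {per_core:.2f}x")  # ~2.18x
```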
 
I look at teraflops. The 3400G is faster than the Deck in pure teraflops. Also, I compared 11 GCN CUs with 8 RDNA 2 CUs, and the GCN CUs are undoubtedly higher clocked. I think that comparison is quite fair.

BTW, why 1080p? Those cards aren't even getting well loaded at a resolution that low.
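The "pure teraflops" comparison can be made concrete with the usual peak-FP32 formula (shaders x 2 FMA ops x clock). The clocks below are nominal boost figures, so treat the results as rough:

```python
# Peak single-precision throughput: each shader retires 2 FP32 ops per
# cycle via fused multiply-add, so peak TFLOPS = shaders * 2 * GHz / 1000.
def tflops(shaders: int, ghz: float) -> float:
    return shaders * 2 * ghz / 1000

vega11 = tflops(704, 1.4)   # Ryzen 5 3400G iGPU: ~1.97 TFLOPS
deck   = tflops(512, 1.6)   # Steam Deck GPU at peak clock: ~1.64 TFLOPS
print(f"Vega 11: {vega11:.2f} TFLOPS, Deck: {deck:.2f} TFLOPS")
```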
 
Joined
Mar 10, 2010
Messages
11,878 (2.20/day)
Location
Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R5 5900X/ Intel 8750H
Motherboard Crosshair hero8 impact/Asus
Cooling 360EK extreme rad+ 360$EK slim all push, cpu ek suprim Gpu full cover all EK
Memory Corsair Vengeance Rgb pro 3600cas14 16Gb in four sticks./16Gb/16GB
Video Card(s) Powercolour RX7900XT Reference/Rtx 2060
Storage Silicon power 2TB nvme/8Tb external/1Tb samsung Evo nvme 2Tb sata ssd/1Tb nvme
Display(s) Samsung UAE28"850R 4k freesync.dell shiter
Case Lianli 011 dynamic/strix scar2
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi/Asus stock
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Aimo 120
VR HMD Oculus rift
Software Win 10 Pro
Benchmark Scores 8726 vega 3dmark timespy/ laptop Timespy 6506
I look at teraflops. 3400G is faster than Deck in pure teraflops. Also I compared 11GCN CUs with 8 RDNA CUs and GCN CUs are undoubtedly higher clocked. I think that comparison certainly is quite fair.

BTW why 1080p? Those cards aren't even getting well loaded at resolution that low.
Well, that's where you're going wrong: RDNA was made to game, not to flop.
GCN and now CDNA are made to flop the shit out of stuff; my Vega 64 is still worthy in some tasks, just sigh.
 

Durvelle27
I look at teraflops. The 3400G is faster than the Deck in pure teraflops. Also, I compared 11 GCN CUs with 8 RDNA 2 CUs, and the GCN CUs are undoubtedly higher clocked. I think that comparison is quite fair.

BTW, why 1080p? Those cards aren't even getting well loaded at a resolution that low.
You can look at TFLOPS all you want; TFLOPS do not translate into more power.
 
Well, that's where you're going wrong: RDNA was made to game, not to flop.
GCN and now CDNA are made to flop the shit out of stuff; my Vega 64 is still worthy in some tasks, just sigh.
You can look at TFLOPS all you want; TFLOPS do not translate into more power.

Oh people, you are making TPU uncool here. The main task a graphics card is supposed to do is flop. A CPU is mostly used for barely parallel but heavily sequential code, which is mostly integer arithmetic (a good ALU). A CPU can also do floating-point operations, but due to low parallelization it's not really optimal for that. Graphics cards, like some old math co-processors, are very good at floating-point operations. Those operations are a lot rarer in general computing, but they dominate certain tasks: gaming, as well as some productivity and scientific computing.

Gaming is relatively low precision, so games often use the single- or half-precision capabilities of cards, while productivity tasks like CAD work, scientific simulations, and medical imaging require the same fundamental operations in more precise form and thus often use double precision (basically the same floating point, but with a lot more digits after the point, so less rounding, more precision, and often less speed; on consumer cards a lot less speed, due to NVIDIA and AMD wanting to milk enterprises with Quadros and Radeon Pros). Obviously other aspects of a card matter, but flopping matters a lot.

Depending on the architecture, it can be hard to achieve the maximum theoretical floating-point performance, whether because the architecture is difficult to program for or because of software overhead. A good example of a difficult architecture is Kepler: each SMX (streaming multiprocessor) had 192 cores, compared to Fermi's 32, but not much more controller logic. I won't get into details, but after a while it became clear that Kepler's SMX controller logic was insufficient to properly distribute load to every CUDA core and required software trickery to work well; without it, the CUDA cores were underutilized and a lot of performance was lost.

Still, even with this unfortunate trait, Kepler was a massive improvement over Fermi, so even less-than-ideal optimization meant it would be faster than Fermi. The problem became clear once it got old and devs stopped optimizing for it as much: Radeons that were weaker at launch started to beat faster Kepler cards. All I want to say is that floating-point performance certainly matters, but for various reasons the maximum theoretical figure may not be achieved. That doesn't make the floating-point spec useless; it's there, but how much of it is achieved in reality will inevitably vary.

Games are made under various development constraints (time, money, team size, talent, goals, etc.) and often don't extract everything from the cards. As long as a game runs well enough and a good share of the theoretical floating-point performance is achieved, there's little reason to pour more R&D into optimization. Professional software, meanwhile, is often more serious about extracting as much performance as possible, because certain tasks are so computationally heavy; those developers are far more motivated (and less limited by time and budget) to optimize for the hardware. That's why cards that top game benchmarks are sometimes beaten by supposedly "less powerful" cards. Oh, and NVIDIA historically gimps double-precision performance on consumer cards a lot more than AMD does; that's why AMD cards have long dominated MilkyWay@Home.

So there's only one question: how much easier is it to tap into all those RDNA 2 teraflops compared to GCN? Sadly, that's hard to quantify. But it seems it should be substantially easier.
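The single- vs double-precision point above is easy to demonstrate: round-tripping a value through 32-bit storage loses bits that a 64-bit double keeps, which is why precision-sensitive workloads pay for FP64 throughput.

```python
import struct

# Squeeze a Python float (64-bit double) into 32-bit float storage and
# back; the lost low-order bits show up as a larger rounding error.
x = 0.1
as_f32 = struct.unpack("f", struct.pack("f", x))[0]  # round-trip via float32
print(f"{x:.20f}")       # 0.10000000000000000555  (float64)
print(f"{as_f32:.20f}")  # 0.10000000149011611938  (float32)
```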
 
The problem with that is were the screens native 720P or higher. I saw RDR2 being played on the Win3 and it looked no different than playing on the Xbox One or PS4 which majority games also ran at 720-900P. RDR2 ran at 864P on the Xbox One
Upscaling console games from 900p and 864p to 1080p looks better than native 720p. The Win 3 has only a 5.5-inch 720p screen, so it won't be as noticeable compared to 1080p. The Deck is a 7-inch 720p screen with larger pixels. The bigger the screen, the worse 720p looks.
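The pixel-size point comes down to pixel density. A quick sketch, treating both panels as 1280x720 to match the comparison above (the Deck's panel is actually 1280x800, which lands around 216 ppi at 7 inches):

```python
import math

# Pixel density = diagonal pixel count / diagonal size in inches.
def ppi(w: int, h: int, diagonal_in: float) -> float:
    return math.hypot(w, h) / diagonal_in

print(f"Win 3, 5.5 in: {ppi(1280, 720, 5.5):.0f} ppi")  # ~267
print(f"Deck,  7.0 in: {ppi(1280, 720, 7.0):.0f} ppi")  # ~210
```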
 
Joined
Mar 10, 2010
Messages
11,878 (2.20/day)
Location
Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R5 5900X/ Intel 8750H
Motherboard Crosshair hero8 impact/Asus
Cooling 360EK extreme rad+ 360$EK slim all push, cpu ek suprim Gpu full cover all EK
Memory Corsair Vengeance Rgb pro 3600cas14 16Gb in four sticks./16Gb/16GB
Video Card(s) Powercolour RX7900XT Reference/Rtx 2060
Storage Silicon power 2TB nvme/8Tb external/1Tb samsung Evo nvme 2Tb sata ssd/1Tb nvme
Display(s) Samsung UAE28"850R 4k freesync.dell shiter
Case Lianli 011 dynamic/strix scar2
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi/Asus stock
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Aimo 120
VR HMD Oculus rift
Software Win 10 Pro
Benchmark Scores 8726 vega 3dmark timespy/ laptop Timespy 6506
Oh people, you are making TPU uncool here. The main task that graphics card is supposed to do is to flop. CPU is mostly used for barely parallel, but heavily sequential code, which is mostly arithmetic (a good ALU). CPU can also do floating point operations, but due to low parallelization, it's not really very optimal for that. Graphics cards as well as some old math co-processors, are very good at floating point operations. Those operations are a lot rarer in general computing, but they do dominate in certain tasks. Gaming is one of them, as well as some productivity and scientific computing. Gaming is mostly low precision (relatively), so games often utilize single precision or half precision computing capabilities of cards, meanwhile productivity tasks like CAD work, scientific simulations, medical screening, require same fundamental task, but in more precise form and thus they often utilize double precision (basically same floating points, but a lot more numbers after point, so less rounding, more precision and often less speed, but on consumer cards a lot less speed, due to nV and AMD wanting to milk enterprises with Quadros and Radeon Pros). Obviously other card aspects matter, but flopping also matters a lot. Depending on architecture, it can be hard to achieve maximum theoretical floating point performance, be it difficult to program architectures or be it various software overhead. A good example of difficult to program architecture for is Kepler, in each SMX (streaming multiprocessor), it had 192 cores, compared to Fermi's 32, but also the smaller controller logic. I won't get into details, but after a while it became clear, that Kepler's SMX's controller logic was insufficient to properly distribute load to each CUDA core and required some software trickery to work well, if not, it will essentially be underutilizing CUDA cores and it would lose a lot of performance. 
Still, even with this unfortunate trait, Kepler was a massive improvement over Fermi, so even less than ideal optimization meant, that it will be faster than Fermi, but the problem became clear, once it became old and devs may have started to not optimize for it as much, so Radeons that at launch were weaker, started to beat faster Kepler cards. All I want to say here, is that floating point performance certainly matters, but due to various reasons, maximum theoretical floating point operation performance may not be achieved. That doesn't make floating point spec useless, it's there, but how much in reality is achieved will inevitably vary. Games are made with various developmental constrains (time, money, team size, human talent, goals and etc) and often don't really extract everything from the cards. As long as they run good enough and as long as good degree of actual floating point performance is achieved, there's very little reason to pour more RnD into optimization. Meanwhile, professional software is often more serious about having as much performance as possible, due to how computationally heavy certain tasks are, thus they are far more motivated (also less limited by time and budget) to optimize for hardware better. That's why some game benchmark toping cards are beaten by supposedly "less" powerful cards. Oh, and nVidia historically gimps double precision floating point performance a lot more on consumer cards, than AMD, that's why AMD cards for a long time dominate in MilkyWay@Home.

So, there's only one question: how much easier is it to tap into all those RDNA 2 teraflops compared to GCN? Sadly, that's hard to quantify, but it seems it should be substantially easier.
Grow up, read up, and give your noggin a tap.

You don't define what makes a GPU good or not.

Its usefulness depends on its use case.

This IS for gaming, not simulations or supercomputer work or servers, etc. Gaming.

Get over yourself.
 
Joined
May 8, 2021
Messages
1,978 (1.51/day)
Location
Lithuania
System Name Shizuka
Processor Intel Core i5 10400F
Motherboard Gigabyte B460M Aorus Pro
Cooling Scythe Choten
Memory 2x8GB G.Skill Aegis 2666 MHz
Video Card(s) PowerColor Red Dragon V2 RX 580 8GB ~100 watts in Wattman
Storage 512GB WD Blue + 256GB WD Green + 4TB Toshiba X300
Display(s) BenQ BL2420PT
Case Cooler Master Silencio S400
Audio Device(s) Topping D10 + AIWA NSX-V70
Power Supply Chieftec A90 550W (GDP-550C)
Mouse Steel Series Rival 100
Keyboard Hama SL 570
Software Windows 10 Enterprise
Grow up, read up, and give your noggin a tap.

You don't define what makes a GPU good or not.

Its usefulness depends on its use case.

This IS for gaming, not simulations or supercomputer work or servers, etc. Gaming.

Get over yourself.
Dude, I'm saying that floating-point performance is pretty much where fps comes from.
 
Joined
Jun 11, 2020
Messages
574 (0.35/day)
Location
Florida
Processor 5800x3d
Motherboard MSI Tomahawk x570
Cooling Thermalright
Memory 32 gb 3200mhz E die
Video Card(s) 3080
Storage 2tb nvme
Display(s) 165hz 1440p
Case Fractal Define R5
Power Supply Toughpower 850 Platinum
Mouse HyperX Hyperfire Pulse
Keyboard EVGA Z15

So, there's only one question: how much easier is it to tap into all those RDNA 2 teraflops compared to GCN? Sadly, that's hard to quantify, but it seems it should be substantially easier.

How can you reconcile the RX 5700 having better frame rates in most games than Vega 64? That's 9.6 TFLOPS vs 12.5 TFLOPS. Not to mention a higher power limit, lol. TFLOPS don't tell the whole story in gaming.
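For what it's worth, those headline TFLOPS numbers are just shader count × 2 FLOPs per clock (one FMA) × clock speed — nothing about whether games can actually feed the shaders. A rough sketch; the shader counts and boost clocks below are approximate figures from memory, not guaranteed sustained speeds:

```python
def peak_fp32_tflops(shaders, clock_ghz, flops_per_clock=2):
    """Theoretical peak FP32: each shader retires one FMA (2 FLOPs) per clock."""
    return shaders * flops_per_clock * clock_ghz / 1000.0

# Approximate boost clocks; real sustained clocks (and utilization) are lower.
print(round(peak_fp32_tflops(4096, 1.536), 1))  # Vega 64:    ~12.6
print(round(peak_fp32_tflops(2560, 1.905), 1))  # RX 5700 XT: ~9.8
```

The formula explains why the spec sheet favors Vega, and why it doesn't settle anything: the number assumes every shader does useful work every cycle, which GCN rarely managed in games.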

Can you at least wait until the Deck is out before sounding so sure it sucks?
 
Last edited:
Joined
Mar 10, 2010
Messages
11,878 (2.20/day)
Location
Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R5 5900X/ Intel 8750H
Motherboard Crosshair hero8 impact/Asus
Cooling 360 EK extreme rad + 360 EK slim, all push; CPU and GPU full-cover blocks, all EK
Memory Corsair Vengeance Rgb pro 3600cas14 16Gb in four sticks./16Gb/16GB
Video Card(s) Powercolour RX7900XT Reference/Rtx 2060
Storage Silicon power 2TB nvme/8Tb external/1Tb samsung Evo nvme 2Tb sata ssd/1Tb nvme
Display(s) Samsung U28E850R 28" 4K FreeSync, Dell
Case Lianli 011 dynamic/strix scar2
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi/Asus stock
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Aimo 120
VR HMD Oculus rift
Software Win 10 Pro
Benchmark Scores 8726 vega 3dmark timespy/ laptop Timespy 6506