Thursday, July 15th 2021

Valve Announces the Steam Deck Game Console

Valve announces Steam Deck, the first in a new category of handheld PC gaming devices, starting at $399. Steam Deck is a powerful all-in-one portable PC. With a custom processor developed in cooperation with AMD, Steam Deck is comparable to a gaming laptop, with the ability to run the latest AAA games. Your Steam library will be on Deck, ready to play wherever and whenever you want. Steam Deck is also an open PC, so you can install any software or connect any hardware you like.

"We think Steam Deck gives people another way to play the games they love on a high-performance device at a great price," says Valve founder Gabe Newell. "As a gamer, this is a product I've always wanted. And as a game developer, it's the mobile device I've always wanted for our partners." Steam Deck starts at $399, with increased storage options available for $529 and $649. Reservations open July 16th at 10 AM PDT; shipping is slated to start in December 2021.
Steam Deck details:
  • Powerful, custom APU developed with AMD
  • Optimized for handheld gaming
  • Full-sized controls
  • 7" touchscreen
  • WiFi and Bluetooth ready
  • USB-C port for accessories
  • microSD slot for storage expansion
  • 3 different storage options available
For more information, visit this page.

188 Comments on Valve Announces the Steam Deck Game Console

#126
The red spirit
NordicYou said some of those games are not on steam by the way. That is incorrect. Each of those is on steam. They are all in my steam library and I have played many of them before.
Motosis isn't on Steam; hell, I can't even find it on the net. Either it's a typo or it's really some rare game.
Posted on Reply
#127
Nordic
The red spiritMotosis isn't on Steam, hell I can't even find it on net. Either it's a typo or it's really some rare game.
Typo. Mitosis. It's a re-skinned agar.io clone put on the Steam store with new game modes and a pay-to-win business model.
Posted on Reply
#128
The red spirit
NordicTypo. Mitosis. It is a re-skinned agar.io clone put on the steam store with new game modes and a pay to win business model.
Dude, it's still a typo. The actual name is "Mitos.is". Can you please stop being drunk when typing? No gamepad support, low-spec friendly, not Linux native, 50% chance of it working with Proton. I wouldn't really want to run it on Deck.
Posted on Reply
#129
Durvelle27
AvrageGamrHard pass for me. Couldn't care less about emulation or playing AAA games in 720 p on low settings.
I honestly don't understand why people dwell on this. At this screen size your eyes could never see the difference between 720p and 1080p. And it will easily be able to run medium-high settings at 60 FPS.
Posted on Reply
#130
The red spirit
Durvelle27I honestly don't understand why people dwell on this. At this screen size your eyes could never see the difference between 720P or 1080P. And it will be able to easily run med-high settings at 60FPS
You can certainly see low settings and sub-20 fps, though.
Posted on Reply
#131
Durvelle27
The red spiritYou can certainly see low settings and sub 20 fps though
Except this won't run at low settings at 20 FPS
Posted on Reply
#132
The red spirit
Durvelle27Except this won't run at low settings at 20 FPS
Yeah, you'll need to lower the resolution a lot more than that for Cyberpunk to run okay. All the way down to 960x540.
Posted on Reply
#133
Durvelle27
The red spiritYeah, you will need to lower your resolution a lot more for that Cyberpunk to run okay. All way down to 960x540.
So let me get this straight: the GPD Win 3 can play Cyberpunk at 1280x720 on low settings at 30-40 FPS while having a much inferior GPU, but you think the Deck can't. Man, you guys are comical.
Posted on Reply
#134
The red spirit
Durvelle27So let me get this straight, The GPD Win3 can play Cyberpunk at 1280x720 on low settings with 30-40FPS while having a much inferior GPU but you think the Deck can't. Man you guys are comical
That's what I calculated. I looked up the GPD Win 3; it's actually a 35-watt handheld, and fps depends heavily on the area. In the city it was in the 20s. On top of that, it used variable resolution, so it wasn't raw 720p. So you'd still need 960x540 at lowest settings to hit a 40 fps average.
Posted on Reply
#135
Durvelle27
The red spiritThat's what I calculated. I looked up GPD Win 3, it's actually 35 watt handheld and fps highly depends on area. In city it was in 20s. And on top of that, it used variable resolution, so it wasn't 720p raw. So, you still need 960x540, lowest settings to hit 40 fps average.
From the review I watched, it was locked at 720p, and locking to 30 FPS was also suggested for best performance. But let's see: the Iris Xe (96 EU) GPU is on average about 5% faster than the Vega 8, which has 8 CUs (512 cores) and is based on GCN. The Deck is RDNA 2 with 8 CUs (512 cores), and RDNA 2 should offer around 50% more performance per core over GCN. So I don't see the Deck having any issues whatsoever, considering that even a 5700G can power games just fine with the aged Vega 8.
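Here's that reasoning as a quick sketch; the 5% Iris Xe lead and the ~50% per-core RDNA 2 uplift are the assumptions stated above, not benchmark results, and clock differences are ignored entirely.

```python
# Quick sketch of the relative-performance argument above. The 5% Iris Xe lead
# and the ~50% RDNA 2 per-core uplift are assumptions from this post, not
# benchmarks, and clock differences are ignored.

vega8   = 1.00          # baseline: 8 CU GCN (Vega 8)
iris_xe = vega8 * 1.05  # assumed ~5% faster than Vega 8 (GPD Win 3's GPU)
deck    = vega8 * 1.50  # 8 CU RDNA 2, assumed ~50% more performance per core

print(f"Deck vs Vega 8:  ~{deck / vega8:.2f}x")
print(f"Deck vs Iris Xe: ~{deck / iris_xe:.2f}x")
# If the Win 3 manages 30-40 fps at 720p low in Cyberpunk, this back-of-the-envelope
# estimate puts the Deck comfortably ahead of that, which is the point being made.
```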
Posted on Reply
#136
AvrageGamr
Durvelle27I honestly don't understand why people dwell on this. At this screen size your eyes could never see the difference between 720P or 1080P. And it will be able to easily run med-high settings at 60FPS
I have had enough tablets with 7-inch screens in the past to see there is a noticeable difference between 720p and 1080p. For example, jaggies are much more pronounced due to the larger pixel size at 720p, and there's a noticeable drop in detail even in mobile games. An open-world AAA game like RDR2 would not look good on this.
Posted on Reply
#137
TheOne
Well, thankfully there should be plenty of reviews and information available long before the majority of us who reserved a unit have the opportunity to buy.
Posted on Reply
#138
Durvelle27
AvrageGamrI have had enough tablets in the past with 7-inch screens to see there is a noticeable difference between 720p and 1080p. For example, jaggies are much more pronounced due to the larger pixel size in 720p. And a noticeable drop in detail in mobile games even. An open-world AAA game like RDR2 would not look good on this.
The problem with that is whether the screens were native 720p or higher. I saw RDR2 being played on the Win 3 and it looked no different from playing on the Xbox One or PS4, where the majority of games also ran at 720-900p. RDR2 ran at 864p on the Xbox One.
Posted on Reply
#139
The red spirit
Durvelle27From the review I viewed it was locked at 720P and was suggested also locking to 30FPS for best performance. But lets see the Iris Xe 96 GPU is on average 5% faster than the Vega 8 which is 8CUs (512 Cores) and based on GCN. The Deck is RDNA 2 with 8CUs (512 Cores), RDNA2 should offer around 50% more performance per core over GCN. So i don't see the deck having any issues what's so ever looking that even a 5700G can power games just fine with the aged Vega 8
Well, I did calculations based on how the RX 560 performed and then tried to compare it to an 8 CU RDNA 2 iGPU on teraflops alone. It seems that wasn't a very good idea. I looked at a different video:

This is a 3400G. It is similar to what the Deck will be. It has a 4C/8T config, but with Zen+ cores. The Deck is clocked lower but has higher IPC, so it should be similar to the 3400G, just a tiny bit weaker. The 3400G has an 11 CU Vega iGPU (704 shaders). In raw teraflops, Vega 11 is a bit faster than what the Deck can achieve at its peak. So overall the 3400G is somewhat faster on CPU and quite a bit faster on GPU.

It does run Cyberpunk at 720p, but there are some pretty bad frame drops and some areas just have quite low fps, and I concluded earlier that I consider 40 fps playable. The 3400G can't achieve that, and it runs the game at 1280x720, which is a bit lower than the Deck's native resolution of 1280x800. That's 10% more pixels to drive. Realistically, I would expect the Deck's GPU to be 20% slower than the 3400G's and its CPU to be 30% slower. Cyberpunk isn't a very CPU-demanding game, but it's hard on the GPU, so it may not be bottlenecked by the Deck's CPU; still, the Deck has 20% less GPU power than the 3400G.

So let's calculate. 1280x720 is 921,600 pixels; the Deck is 20% slower, so reduce the pixel count by 20% and we get 737,280 pixels. At that point we get the same 34 fps as the 3400G, but we really want 40 fps. 40 fps is about 15% more taxing on the hardware, so take away another 15% of the resolution, which leaves 626,688 pixels. The closest resolution to that is 960x640, at which point the Deck supposedly runs Cyberpunk okay, and that's quite a bit lower than its native 1280x800 (1,024,000 pixels). With my previous calculation I arrived at 500k pixels or so, so this result is more optimistic, but it still isn't that great for the Deck.

Depending on overall system performance, FSR may speed up a little RDNA APU, but I don't think it will make Cyberpunk run at 1280x800 with a 40 fps average. FSR doesn't work very well on low-end hardware, as its overhead is so big that it cancels out a lot of the performance gains.
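Here is that same back-of-the-envelope math as a small script, so the assumptions are explicit: the ~34 fps baseline, the 20% GPU deficit and the 15% step to reach 40 fps are my estimates from the 3400G video, and fps is assumed to scale linearly with pixel count.

```python
# Rough pixel-budget estimate for the Deck from an assumed 3400G baseline.
# Assumptions (mine, not measurements): the Deck's GPU is ~20% slower than the
# 3400G's, the 3400G averages ~34 fps at 1280x720, and fps scales roughly
# linearly with pixel count.

BASELINE_RES = (1280, 720)   # resolution used in the 3400G video
BASELINE_FPS = 34            # average fps I read from that video
GPU_PENALTY = 0.20           # assumed Deck GPU deficit vs. the 3400G
TARGET_FPS = 40              # what I consider playable

baseline_pixels = BASELINE_RES[0] * BASELINE_RES[1]          # 921,600
deck_pixels_same_fps = baseline_pixels * (1 - GPU_PENALTY)   # ~737,280 for the same 34 fps
deck_pixels_target = deck_pixels_same_fps * (BASELINE_FPS / TARGET_FPS)  # ~626,688 for 40 fps

print(f"Pixel budget at {TARGET_FPS} fps: {deck_pixels_target:,.0f}")
print(f"Closest common resolution: 960x640 = {960 * 640:,} pixels")
print(f"Deck native 1280x800     = {1280 * 800:,} pixels")
```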
Posted on Reply
#140
Durvelle27
The red spiritWell I did calculations, based on how RX 560 performed and then tried to compare it to 8CU RDNA2 iGPU based on teraflops alone. It seems that it wasn't a very good idea. I looked at different video:

This is 3400G. It is similar to what Deck will be. It has 4C8T config, but with Zen+ cores. Deck is clocked lower, but has higher IPC, so it be similar to 3400G, but a tiny bit weaker. 3400G has 11 CU Vega cores (704 of them). In raw teraflops, Vega 11 is a bit faster than what Deck can achieve at its peak. So overall 3400G is somewhat faster at CPU and quite a bit faster at GPU. It does run Cyberpunk at 720p, but there are some pretty bad frame drops and some areas just have quite low fps and I concluded earlier that I consider 40 fps as playable. Ryzen 3400G can't achieve that and it runs game at 1280x720, which is a bit lower than native Deck resolution, which is 1280x800. That's 10% more pixels to drive. Realistically, I would expect Deck's GPU to be 20% slower than 3400G's and CPU to be 30% slower. Cyberpunk isn't very CPU demanding game, but it's hard on GPU, so it maybe won't be bottlenecked by Deck's CPU, but Deck has 20% less GPU power than 3400G. So let's calculate. 1280x720 is 921600 pixels, Deck is 20% slower, so let's reduce pixels by 20%. We get 737280 pixels. At this point we get same 34 fps as 3400G, but we really want 40 fps. 40 fps is 15% more hardware taxing, so lets take away 15% resolution. With that reduction, we are left with 626688 pixels. Closest resolution to that is 960x640 and now Deck supposedly runs Cyberpunk okay, that's quite a bit lower than 1280x800 (1024000 pixels) resolution. With my previous calculation I arrived at 500k pixels or so, so this time result is more optimistic, but it still isn't that great for Deck. Depending on overall system performance, FSR may speed up little RDNA APU, I don't think it will make Cyberpunk run at 1280x800 with 40 fps average. FSR doesn't work very great with low end hardware, as its overhead is so big that it cancels out a lot of performance gains.
The problem with the comparison is that it's still GCN vs RDNA 2. Yes, the Vega 11 has more CUs/cores, but it's also using a much inferior architecture. Just to give you an example:

The Vega 64 was the highest-end single GPU you could get based on GCN, with 4096 cores. The current-gen RX 6700 XT, based on RDNA 2, has almost half the cores at 2560, yet based on our very own TPU review the 6700 XT is on average 36% faster than the Vega 64 at 1080p.

The architectural refinement alone gives it a boost.
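To put a rough number on that refinement, here's a small sketch of "gaming performance per paper TFLOP" for the two cards; the boost clocks are approximate public figures (my assumption), real sustained clocks differ, and the 36% figure is the TPU review average quoted above.

```python
# Rough "gaming performance per theoretical TFLOP" comparison.
# Clocks are approximate public boost figures; treat outputs as illustrative only.

def tflops(shaders: int, clock_ghz: float) -> float:
    # FP32 throughput: 2 ops (FMA) per shader per clock
    return 2 * shaders * clock_ghz / 1000

vega64   = tflops(4096, 1.546)   # ~12.7 TFLOPS
rx6700xt = tflops(2560, 2.581)   # ~13.2 TFLOPS

# TPU's review puts the 6700 XT ~36% ahead of Vega 64 at 1080p.
relative_fps = 1.36
perf_per_tflop_gain = relative_fps / (rx6700xt / vega64)

print(f"Vega 64:  {vega64:.1f} TFLOPS")
print(f"6700 XT:  {rx6700xt:.1f} TFLOPS")
print(f"Per-TFLOP gaming uplift for RDNA 2: ~{(perf_per_tflop_gain - 1) * 100:.0f}%")
```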
Posted on Reply
#141
The red spirit
Durvelle27the problem with the comparison is it is still GCN vs RDNA2. Yes the Vega 11 has more CUs/Cores but it is also still using a much inferior architecture. Just to give you an example

The Vega 64 which was the highest end single GPU you could get based on GCN; It has 4096 Cores vs the Current Gen RX 6700 XT based on RDNA2; it has almost half the cores at 2560 Cores but based on our very own TPUs review the 6700XT is on average 36% faster than the Vega 64 at 1080P

The architectural refinement alone gives it a boost
I look at teraflops. The 3400G is faster than the Deck in pure teraflops. Also, I compared 11 GCN CUs against 8 RDNA CUs, and the GCN CUs are undoubtedly higher clocked. I think that comparison is quite fair.

BTW, why 1080p? Those cards aren't even getting well loaded at a resolution that low.
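For reference, here is the paper-teraflops comparison that claim rests on; the 3400G clock is the stock iGPU spec and the Deck range comes from Valve's announced 1.0-1.6 GHz GPU clock, so these are peak numbers, not sustained ones.

```python
# Theoretical FP32 throughput comparison behind the post above.
# 3400G iGPU clock is the stock spec; the Deck's GPU clock range is from Valve's
# announced specs. These are peak paper numbers, not sustained figures.

def tflops(shaders: int, clock_ghz: float) -> float:
    return 2 * shaders * clock_ghz / 1000  # 2 FP32 ops per shader per clock (FMA)

vega11_3400g = tflops(704, 1.4)   # ~1.97 TFLOPS
deck_min     = tflops(512, 1.0)   # ~1.02 TFLOPS
deck_max     = tflops(512, 1.6)   # ~1.64 TFLOPS

print(f"3400G (Vega 11): {vega11_3400g:.2f} TFLOPS")
print(f"Steam Deck GPU:  {deck_min:.2f}-{deck_max:.2f} TFLOPS")
```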
Posted on Reply
#142
TheoneandonlyMrK
The red spiritI look at teraflops. 3400G is faster than Deck in pure teraflops. Also I compared 11GCN CUs with 8 RDNA CUs and GCN CUs are undoubtedly higher clocked. I think that comparison certainly is quite fair.

BTW why 1080p? Those cards aren't even getting well loaded at resolution that low.
Well, that's where you're going wrong: RDNA was made to game, not to flop.
GCN and now CDNA are made to flop the shit out of stuff. My Vega 64 is still worthy in some tasks, just, sigh.
Posted on Reply
#143
Durvelle27
The red spiritI look at teraflops. 3400G is faster than Deck in pure teraflops. Also I compared 11GCN CUs with 8 RDNA CUs and GCN CUs are undoubtedly higher clocked. I think that comparison certainly is quite fair.

BTW why 1080p? Those cards aren't even getting well loaded at resolution that low.
You can look at TFLOPS all you want; TFLOPS don't translate into more power.
Posted on Reply
#144
The red spirit
TheoneandonlyMrKWell that's where your going wrong, rDNA was made to Game not flop.
Gcn and now cDNA are made to flop the shit out of stuff, my vega64 still is worthy in some tasks, just sigh.
Durvelle27You can look at TFlops all you want, TFlops does not translate into more power
Oh people, you are making TPU uncool here. The main task a graphics card is supposed to do is flop. A CPU is mostly used for barely parallel, heavily sequential code, which is mostly integer arithmetic (a good ALU). A CPU can also do floating-point operations, but due to low parallelization it's not really optimal for that. Graphics cards, like some old math co-processors, are very good at floating-point operations. Those operations are a lot rarer in general computing, but they dominate certain tasks: gaming is one of them, along with some productivity and scientific computing. Gaming is mostly (relatively) low precision, so games often use the single-precision or half-precision capabilities of cards, while productivity tasks like CAD work, scientific simulations and medical imaging need the same fundamental operation in a more precise form and thus often use double precision (basically the same floating point but with many more digits after the point: less rounding, more precision and often less speed, and on consumer cards a lot less speed, due to NV and AMD wanting to milk enterprises with Quadros and Radeon Pros).

Obviously other aspects of a card matter, but flopping also matters a lot. Depending on the architecture, it can be hard to achieve the maximum theoretical floating-point performance, whether because the architecture is difficult to program for or because of software overhead. A good example of a difficult-to-program architecture is Kepler: each SMX (streaming multiprocessor) had 192 cores, compared to Fermi's 32, but with proportionally less controller logic. I won't get into details, but after a while it became clear that Kepler's SMX controller logic was insufficient to properly distribute load to each CUDA core and required software trickery to work well; without it, the CUDA cores were underutilized and a lot of performance was lost. Still, even with this unfortunate trait, Kepler was a massive improvement over Fermi, so even less-than-ideal optimization meant it was faster than Fermi. The problem became clear once it got old and devs stopped optimizing for it as much, and Radeons that were weaker at launch started to beat faster Kepler cards.

All I want to say is that floating-point performance certainly matters, but for various reasons the maximum theoretical floating-point performance may not be achieved. That doesn't make the floating-point spec useless; it's there, but how much of it is achieved in reality will inevitably vary. Games are made under various development constraints (time, money, team size, talent, goals, etc.) and often don't extract everything from the cards. As long as they run well enough and a good share of the actual floating-point performance is reached, there's little reason to pour more R&D into optimization. Professional software, meanwhile, is often more serious about squeezing out as much performance as possible, because certain tasks are so computationally heavy, so those developers are far more motivated (and less limited by time and budget) to optimize for the hardware. That's why some cards that top game benchmarks are beaten by supposedly "less powerful" cards. Oh, and nVidia historically gimps double-precision performance a lot more on consumer cards than AMD does; that's why AMD cards have long dominated MilkyWay@Home.

So there's only one question: how much easier is it to tap into all those RDNA 2 teraflops compared to GCN? Sadly, that's hard to quantify, but it seems it should be substantially easier.
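To make the point concrete, here's a toy sketch: paper TFLOPS set a ceiling, and what a game actually gets out of them depends on how well the architecture can be kept fed. The utilization percentages below are made-up placeholders purely for illustration, not measurements of GCN or RDNA 2.

```python
# Illustration of the point above: paper TFLOPS only set a ceiling; what matters
# is how much of it a real game workload actually extracts. The utilization
# figures below are made-up placeholders, not measurements.

def effective_tflops(shaders: int, clock_ghz: float, utilization: float) -> float:
    peak = 2 * shaders * clock_ghz / 1000   # theoretical FP32 peak (FMA = 2 ops/clock)
    return peak * utilization

# Hypothetical: an older GCN part with high paper FLOPS but poor game utilization
# vs. a smaller RDNA 2 part that is easier to keep fed.
gcn_effective   = effective_tflops(704, 1.4, utilization=0.55)   # placeholder 55%
rdna2_effective = effective_tflops(512, 1.6, utilization=0.80)   # placeholder 80%

print(f"GCN    (paper {2 * 704 * 1.4 / 1000:.2f} TF): ~{gcn_effective:.2f} effective TF")
print(f"RDNA 2 (paper {2 * 512 * 1.6 / 1000:.2f} TF): ~{rdna2_effective:.2f} effective TF")
```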
Posted on Reply
#145
AvrageGamr
Durvelle27The problem with that is were the screens native 720P or higher. I saw RDR2 being played on the Win3 and it looked no different than playing on the Xbox One or PS4 which majority games also ran at 720-900P. RDR2 ran at 864P on the Xbox One
Upscaling console games from 900p and 864p to 1080p looks better than native 720p. The Win 3 has only a 5.5-inch 720p screen, so the difference from 1080p won't be as noticeable. The Deck has a 7-inch 720p screen with larger pixels. The bigger the screen, the worse 720p looks.
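A quick pixel-density calculation makes the size point concrete; this assumes a 5.5-inch 1280x720 panel for the Win 3 and uses the Deck's announced 7-inch 1280x800 panel.

```python
# Pixel-density comparison behind the "bigger screen, bigger pixels" point.
# The Deck figure uses its announced 1280x800 7" panel; the Win 3 figure
# assumes a 16:9 1280x720 5.5" panel.
import math

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    return math.hypot(width_px, height_px) / diagonal_in

win3_ppi = ppi(1280, 720, 5.5)   # ~267 ppi
deck_ppi = ppi(1280, 800, 7.0)   # ~216 ppi

print(f"GPD Win 3:  {win3_ppi:.0f} ppi")
print(f"Steam Deck: {deck_ppi:.0f} ppi")
```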
Posted on Reply
#146
TheoneandonlyMrK
The red spiritOh people, you are making TPU uncool here. The main task that graphics card is supposed to do is to flop. CPU is mostly used for barely parallel, but heavily sequential code, which is mostly arithmetic (a good ALU). CPU can also do floating point operations, but due to low parallelization, it's not really very optimal for that. Graphics cards as well as some old math co-processors, are very good at floating point operations. Those operations are a lot rarer in general computing, but they do dominate in certain tasks. Gaming is one of them, as well as some productivity and scientific computing. Gaming is mostly low precision (relatively), so games often utilize single precision or half precision computing capabilities of cards, meanwhile productivity tasks like CAD work, scientific simulations, medical screening, require same fundamental task, but in more precise form and thus they often utilize double precision (basically same floating points, but a lot more numbers after point, so less rounding, more precision and often less speed, but on consumer cards a lot less speed, due to nV and AMD wanting to milk enterprises with Quadros and Radeon Pros). Obviously other card aspects matter, but flopping also matters a lot. Depending on architecture, it can be hard to achieve maximum theoretical floating point performance, be it difficult to program architectures or be it various software overhead. A good example of difficult to program architecture for is Kepler, in each SMX (streaming multiprocessor), it had 192 cores, compared to Fermi's 32, but also the smaller controller logic. I won't get into details, but after a while it became clear, that Kepler's SMX's controller logic was insufficient to properly distribute load to each CUDA core and required some software trickery to work well, if not, it will essentially be underutilizing CUDA cores and it would lose a lot of performance. Still, even with this unfortunate trait, Kepler was a massive improvement over Fermi, so even less than ideal optimization meant, that it will be faster than Fermi, but the problem became clear, once it became old and devs may have started to not optimize for it as much, so Radeons that at launch were weaker, started to beat faster Kepler cards. All I want to say here, is that floating point performance certainly matters, but due to various reasons, maximum theoretical floating point operation performance may not be achieved. That doesn't make floating point spec useless, it's there, but how much in reality is achieved will inevitably vary. Games are made with various developmental constrains (time, money, team size, human talent, goals and etc) and often don't really extract everything from the cards. As long as they run good enough and as long as good degree of actual floating point performance is achieved, there's very little reason to pour more RnD into optimization. Meanwhile, professional software is often more serious about having as much performance as possible, due to how computationally heavy certain tasks are, thus they are far more motivated (also less limited by time and budget) to optimize for hardware better. That's why some game benchmark toping cards are beaten by supposedly "less" powerful cards. Oh, and nVidia historically gimps double precision floating point performance a lot more on consumer cards, than AMD, that's why AMD cards for a long time dominate in MilkyWay@Home.

So, there's only one question, how much it is easier to tap into all those RDNA 2 teraflops, compared to GCN. Sadly that's hard to quantify. But it seems that it should be substantially easier.
Grow up, read up, and give your noggin a tap.

You don't define what makes a GPU good or not.

Its use depends on its use case.

This IS for gaming, not simulations or supercomputer work or servers etc. Gaming.

Get over yourself.
Posted on Reply
#147
The red spirit
TheoneandonlyMrKGrow up ,read up, and give your nogin a tap.

You don't define what makes a GPU good or not.

It's use depends on it's use case.

This IS for gaming, not simulations or super computer work or server etc, Gaming.

Get over yourself.
Dude, I'm saying that floating point is pretty much fps.
Posted on Reply
#148
Colddecked
The red spiritOh people, you are making TPU uncool here. The main task that graphics card is supposed to do is to flop. CPU is mostly used for barely parallel, but heavily sequential code, which is mostly arithmetic (a good ALU). CPU can also do floating point operations, but due to low parallelization, it's not really very optimal for that. Graphics cards as well as some old math co-processors, are very good at floating point operations. Those operations are a lot rarer in general computing, but they do dominate in certain tasks. Gaming is one of them, as well as some productivity and scientific computing. Gaming is mostly low precision (relatively), so games often utilize single precision or half precision computing capabilities of cards, meanwhile productivity tasks like CAD work, scientific simulations, medical screening, require same fundamental task, but in more precise form and thus they often utilize double precision (basically same floating points, but a lot more numbers after point, so less rounding, more precision and often less speed, but on consumer cards a lot less speed, due to nV and AMD wanting to milk enterprises with Quadros and Radeon Pros). Obviously other card aspects matter, but flopping also matters a lot. Depending on architecture, it can be hard to achieve maximum theoretical floating point performance, be it difficult to program architectures or be it various software overhead. A good example of difficult to program architecture for is Kepler, in each SMX (streaming multiprocessor), it had 192 cores, compared to Fermi's 32, but also the smaller controller logic. I won't get into details, but after a while it became clear, that Kepler's SMX's controller logic was insufficient to properly distribute load to each CUDA core and required some software trickery to work well, if not, it will essentially be underutilizing CUDA cores and it would lose a lot of performance. Still, even with this unfortunate trait, Kepler was a massive improvement over Fermi, so even less than ideal optimization meant, that it will be faster than Fermi, but the problem became clear, once it became old and devs may have started to not optimize for it as much, so Radeons that at launch were weaker, started to beat faster Kepler cards. All I want to say here, is that floating point performance certainly matters, but due to various reasons, maximum theoretical floating point operation performance may not be achieved. That doesn't make floating point spec useless, it's there, but how much in reality is achieved will inevitably vary. Games are made with various developmental constrains (time, money, team size, human talent, goals and etc) and often don't really extract everything from the cards. As long as they run good enough and as long as good degree of actual floating point performance is achieved, there's very little reason to pour more RnD into optimization. Meanwhile, professional software is often more serious about having as much performance as possible, due to how computationally heavy certain tasks are, thus they are far more motivated (also less limited by time and budget) to optimize for hardware better. That's why some game benchmark toping cards are beaten by supposedly "less" powerful cards. Oh, and nVidia historically gimps double precision floating point performance a lot more on consumer cards, than AMD, that's why AMD cards for a long time dominate in MilkyWay@Home.

So, there's only one question, how much it is easier to tap into all those RDNA 2 teraflops, compared to GCN. Sadly that's hard to quantify. But it seems that it should be substantially easier.
How can you reconcile the RX 5700 having better frame rates in most games than the Vega 64? That's 9.6 TF vs 12.5 TF, not to mention a higher power limit, lol. TFLOPS don't tell the whole story in gaming.

Can you at least wait until the Deck is out before sounding so sure it sucks?
Posted on Reply
#149
TheoneandonlyMrK
The red spiritDude, I'm saying that floating point is pretty much fps.
And I'm saying you're wrong. ...
Posted on Reply
#150
The red spirit
ColddeckedHow can you reconcile RX5700 having better frame rates in most games than Vega64? That's 9.6 tf vs 12.5 tf. Not to mention a higher power limit lol. Tflops don't tell the whole story in gaming.

Can you at least wait until the Deck is out before sounding so sure it sucks?
And here I go bust. I don't know. I just know that flops definitely matter, since basically all geometry calculations are floating point. I said there could be limitations in achieving the maximum theoretical floating-point performance. From the spec sheet, the RX 5700 XT definitely looks overall worse, but one thing that's better on it is pixel fillrate. It seems the hardware in that one aspect is simply better on the RX 5700 XT, and maybe that one specification matters.

Here's some snack:
www.tomshardware.com/reviews/3d-benchmarking,205-3.html

It seems the Vega 64 may perform better than the RX 5700 XT at low resolutions, but maybe not.
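Putting rough paper numbers on that: shader and ROP counts are the public specs, but the boost clocks below are approximate figures from memory, so treat the outputs as ballpark only.

```python
# Paper spec comparison behind the pixel-fillrate remark. Shader and ROP counts
# are the public specs; the boost clocks are approximate, and real sustained
# clocks differ, so these are ballpark figures.

def fp32_tflops(shaders: int, clock_ghz: float) -> float:
    return 2 * shaders * clock_ghz / 1000

def pixel_fillrate_gpixels(rops: int, clock_ghz: float) -> float:
    return rops * clock_ghz  # GPixel/s

cards = {
    "Vega 64":    dict(shaders=4096, rops=64, clock_ghz=1.546),
    "RX 5700 XT": dict(shaders=2560, rops=64, clock_ghz=1.905),
}

for name, c in cards.items():
    tf = fp32_tflops(c["shaders"], c["clock_ghz"])
    fill = pixel_fillrate_gpixels(c["rops"], c["clock_ghz"])
    print(f"{name}: {tf:.1f} TFLOPS, {fill:.0f} GPixel/s")
```

So the RX 5700 XT trades raw FLOPS for a higher pixel fillrate, which is the trade-off being pointed at.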
TheoneandonlyMrKAnd I'm saying your wrong. .... .
Any technical reason why?
Posted on Reply