
Valve Announces the Steam Deck Game Console

I don't know what you want.
A Steam Deck!

You said some of those games are not on Steam, by the way. That is incorrect. Each of those is on Steam; they are all in my Steam library and I have played many of them before.
 
You said some of those games are not on Steam, by the way. That is incorrect. Each of those is on Steam; they are all in my Steam library and I have played many of them before.
Motosis isn't on Steam; hell, I can't even find it on the net. Either it's a typo or it's really some rare game.
 
Motosis isn't on Steam; hell, I can't even find it on the net. Either it's a typo or it's really some rare game.
Typo. Mitosis. It is a re-skinned agar.io clone put on the Steam store with new game modes and a pay-to-win business model.
 
Typo. Mitosis. It is a re-skinned agar.io clone put on the Steam store with new game modes and a pay-to-win business model.
Dude, it's still a typo. The actual name is "Mitos.is". Can you please stop being drunk when typing? No gamepad support, low-spec friendly, not Linux native, 50% chance of it working with Proton. I wouldn't really want to run it on the Deck.
 
Hard pass for me. Couldn't care less about emulation or playing AAA games in 720p on low settings.
I honestly don't understand why people dwell on this. At this screen size your eyes could never see the difference between 720p and 1080p. And it will easily be able to run medium-high settings at 60 FPS.
 
I honestly don't understand why people dwell on this. At this screen size your eyes could never see the difference between 720p and 1080p. And it will easily be able to run medium-high settings at 60 FPS.
You can certainly see low settings and sub-20 fps, though.
 
Except this won't run at low settings at 20 FPS
Yeah, you will need to lower your resolution a lot more for Cyberpunk to run okay. All the way down to 960x540.
 
Yeah, you will need to lower your resolution a lot more for Cyberpunk to run okay. All the way down to 960x540.
So let me get this straight: the GPD Win 3 can play Cyberpunk at 1280x720 on low settings at 30-40 FPS while having a much inferior GPU, but you think the Deck can't. Man, you guys are comical.
 
So let me get this straight: the GPD Win 3 can play Cyberpunk at 1280x720 on low settings at 30-40 FPS while having a much inferior GPU, but you think the Deck can't. Man, you guys are comical.
That's what I calculated. I looked up the GPD Win 3; it's actually a 35-watt handheld, and fps depends heavily on the area. In the city it was in the 20s. On top of that, it used variable resolution, so it wasn't raw 720p. So you still need 960x540 at the lowest settings to hit a 40 fps average.
 
That's what I calculated. I looked up the GPD Win 3; it's actually a 35-watt handheld, and fps depends heavily on the area. In the city it was in the 20s. On top of that, it used variable resolution, so it wasn't raw 720p. So you still need 960x540 at the lowest settings to hit a 40 fps average.
From the review I watched, it was locked at 720p, and locking it to 30 FPS was also suggested for best performance. But let's see: the Iris Xe (96 EU) GPU is on average 5% faster than the Vega 8, which has 8 CUs (512 cores) and is based on GCN. The Deck is RDNA 2 with 8 CUs (512 cores), and RDNA 2 should offer around 50% more performance per core over GCN. So I don't see the Deck having any issues whatsoever, considering that even a 5700G can power games just fine with the aged Vega 8.
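Just to put that reasoning into numbers, here is a minimal back-of-the-envelope sketch in Python using only the figures claimed above (the ~5% Iris Xe advantage and the ~50% per-core RDNA 2 uplift are the post's assumptions, not measured data):

```python
# Back-of-the-envelope comparison using the assumptions stated above:
# Iris Xe (96 EU) ~= Vega 8 * 1.05, and RDNA 2 ~= GCN * 1.5 per core,
# with both the Vega 8 and the Deck's GPU having 8 CUs / 512 cores.
vega8 = 1.00                # baseline: 8 CU GCN (Vega 8)
iris_xe = vega8 * 1.05      # claimed ~5% faster on average
deck_gpu = vega8 * 1.50     # claimed ~50% more performance per core

print(f"Iris Xe (96 EU) vs Vega 8: {iris_xe:.2f}x")
print(f"Deck RDNA 2 vs Vega 8:     {deck_gpu:.2f}x")
print(f"Deck vs Iris Xe (96 EU):   {deck_gpu / iris_xe:.2f}x")  # ~1.43x
```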
 
I honestly don't understand why people dwell on this. At this screen size your eyes could never see the difference between 720p and 1080p. And it will easily be able to run medium-high settings at 60 FPS.
I have had enough tablets in the past with 7-inch screens to see there is a noticeable difference between 720p and 1080p. For example, jaggies are much more pronounced due to the larger pixel size at 720p, and there is a noticeable drop in detail even in mobile games. An open-world AAA game like RDR2 would not look good on this.
 
Well, thankfully there should be plenty of reviews and information available long before the majority of us who reserved a unit will have the opportunity to buy.
 
I have had enough tablets in the past with 7-inch screens to see there is a noticeable difference between 720p and 1080p. For example, jaggies are much more pronounced due to the larger pixel size at 720p, and there is a noticeable drop in detail even in mobile games. An open-world AAA game like RDR2 would not look good on this.
The problem with that is whether those screens were native 720p or higher. I saw RDR2 being played on the Win 3 and it looked no different from playing on the Xbox One or PS4, where the majority of games also ran at 720-900p. RDR2 ran at 864p on the Xbox One.
 
From the review I watched, it was locked at 720p, and locking it to 30 FPS was also suggested for best performance. But let's see: the Iris Xe (96 EU) GPU is on average 5% faster than the Vega 8, which has 8 CUs (512 cores) and is based on GCN. The Deck is RDNA 2 with 8 CUs (512 cores), and RDNA 2 should offer around 50% more performance per core over GCN. So I don't see the Deck having any issues whatsoever, considering that even a 5700G can power games just fine with the aged Vega 8.
Well, I did calculations based on how the RX 560 performed and then tried to compare it to an 8 CU RDNA 2 iGPU based on teraflops alone. It seems that wasn't a very good idea. I looked at a different video:

This is the 3400G. It is similar to what the Deck will be. It has a 4C/8T config, but with Zen+ cores; the Deck is clocked lower but has higher IPC, so it should be similar to the 3400G, just a tiny bit weaker. The 3400G has an 11 CU Vega GPU (704 cores). In raw teraflops, Vega 11 is a bit faster than what the Deck can achieve at its peak. So overall, the 3400G is somewhat faster on the CPU side and quite a bit faster on the GPU side. It does run Cyberpunk at 720p, but there are some pretty bad frame drops and some areas just have quite low fps, and I concluded earlier that I consider 40 fps playable. The Ryzen 3400G can't achieve that, and it runs the game at 1280x720, which is a bit lower than the Deck's native resolution of 1280x800. That's about 10% more pixels to drive.

Realistically, I would expect the Deck's GPU to be 20% slower than the 3400G's and its CPU to be 30% slower. Cyberpunk isn't a very CPU-demanding game, but it's hard on the GPU, so it maybe won't be bottlenecked by the Deck's CPU; still, the Deck has 20% less GPU power than the 3400G. So let's calculate. 1280x720 is 921,600 pixels; the Deck is 20% slower, so let's reduce the pixel count by 20%. We get 737,280 pixels. At this point we get the same 34 fps as the 3400G, but we really want 40 fps. 40 fps is about 15% more taxing on the hardware, so let's take away another 15% of the resolution. With that reduction, we are left with 626,688 pixels. The closest resolution to that is 960x640, and now the Deck supposedly runs Cyberpunk okay, but that's quite a bit lower than the 1280x800 (1,024,000 pixel) native resolution.

With my previous calculation I arrived at 500k pixels or so, so this time the result is more optimistic, but it still isn't that great for the Deck. Depending on overall system performance, FSR may speed up the little RDNA APU, but I don't think it will make Cyberpunk run at 1280x800 with a 40 fps average. FSR doesn't work great with low-end hardware, as its overhead is so big that it cancels out a lot of the performance gains.
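For anyone who wants to check the arithmetic, here is the same estimate as a small Python sketch; the 20% GPU deficit and the ~15% step from 34 to 40 fps are the post's own assumptions, not benchmarks:

```python
# Pixel-budget estimate: start from the 3400G's 1280x720 / ~34 fps result,
# then shrink the pixel count for a slower GPU and a higher fps target.
PIXELS_720P = 1280 * 720        # 921,600 px driven by the 3400G
GPU_DEFICIT = 0.80              # assume the Deck's GPU is ~20% slower
FPS_HEADROOM = 0.85             # ~15% fewer pixels to go from ~34 to 40 fps

budget = PIXELS_720P * GPU_DEFICIT * FPS_HEADROOM
print(f"Pixel budget: {budget:,.0f} px")      # ~626,688 px
print(f"960x640:      {960 * 640:,} px")      # 614,400 px, closest standard size
print(f"Deck native:  {1280 * 800:,} px")     # 1,024,000 px, for comparison
```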
 
Well, I did calculations based on how the RX 560 performed and then tried to compare it to an 8 CU RDNA 2 iGPU based on teraflops alone. It seems that wasn't a very good idea. […]
The problem with the comparison is that it is still GCN vs RDNA 2. Yes, the Vega 11 has more CUs/cores, but it is also still using a much inferior architecture. Just to give you an example:

The Vega 64 was the highest-end single GPU you could get based on GCN; it has 4096 cores. The current-gen RX 6700 XT, based on RDNA 2, has almost half the cores at 2560, but based on our very own TPU's review the 6700 XT is on average 36% faster than the Vega 64 at 1080p.

The architectural refinement alone gives it a boost.
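A rough way to quantify that example, taking the ~36% average figure quoted above at face value (clock speed differences are lumped into the result rather than separated out):

```python
# If the 6700 XT is ~36% faster with 2560 cores versus the Vega 64's 4096,
# the implied per-core throughput gap is large.
vega64_cores, rx6700xt_cores = 4096, 2560
speedup_6700xt = 1.36            # ~36% average at 1080p, per the post above

per_core_ratio = speedup_6700xt / (rx6700xt_cores / vega64_cores)
print(f"Per-core throughput, RDNA 2 vs GCN: {per_core_ratio:.2f}x")  # ~2.18x
# Note: a good chunk of that comes from RDNA 2's much higher clocks, not IPC alone.
```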
 
The problem with the comparison is that it is still GCN vs RDNA 2. Yes, the Vega 11 has more CUs/cores, but it is also still using a much inferior architecture. Just to give you an example:

The Vega 64 was the highest-end single GPU you could get based on GCN; it has 4096 cores. The current-gen RX 6700 XT, based on RDNA 2, has almost half the cores at 2560, but based on our very own TPU's review the 6700 XT is on average 36% faster than the Vega 64 at 1080p.

The architectural refinement alone gives it a boost.
I look at teraflops. The 3400G is faster than the Deck in pure teraflops. Also, I compared 11 GCN CUs with 8 RDNA CUs, and the GCN CUs are undoubtedly higher clocked. I think that comparison certainly is quite fair.

BTW, why 1080p? Those cards aren't even getting well loaded at a resolution that low.
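For reference, peak FP32 throughput is just 2 × shader count × clock, so the comparison can be sketched like this (the clocks below are the commonly cited figures and should be treated as assumptions):

```python
def tflops(shaders: int, clock_ghz: float) -> float:
    """Peak FP32 TFLOPS: 2 ops per FMA x shader count x clock (GHz)."""
    return 2 * shaders * clock_ghz / 1000

# Ryzen 5 3400G: Vega 11, 704 shaders at ~1.4 GHz boost
print(f"3400G: {tflops(704, 1.4):.2f} TFLOPS")                         # ~1.97
# Steam Deck: RDNA 2, 512 shaders at 1.0-1.6 GHz (announced range)
print(f"Deck:  {tflops(512, 1.0):.2f}-{tflops(512, 1.6):.2f} TFLOPS")  # ~1.02-1.64
```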
 
I look at teraflops. The 3400G is faster than the Deck in pure teraflops. Also, I compared 11 GCN CUs with 8 RDNA CUs, and the GCN CUs are undoubtedly higher clocked. I think that comparison certainly is quite fair.

BTW, why 1080p? Those cards aren't even getting well loaded at a resolution that low.
Well, that's where you're going wrong: RDNA was made to game, not to flop.
GCN and now CDNA are made to flop the shit out of stuff; my Vega 64 is still worthy in some tasks, just sigh.
 
I look at teraflops. The 3400G is faster than the Deck in pure teraflops. Also, I compared 11 GCN CUs with 8 RDNA CUs, and the GCN CUs are undoubtedly higher clocked. I think that comparison certainly is quite fair.

BTW, why 1080p? Those cards aren't even getting well loaded at a resolution that low.
You can look at TFLOPS all you want; TFLOPS do not translate into more power.
 
Well, that's where you're going wrong: RDNA was made to game, not to flop.
GCN and now CDNA are made to flop the shit out of stuff; my Vega 64 is still worthy in some tasks, just sigh.
You can look at TFLOPS all you want; TFLOPS do not translate into more power.

Oh people, you are making TPU uncool here. The main task a graphics card is supposed to do is to flop. A CPU is mostly used for barely parallel but heavily sequential code, which is mostly integer arithmetic (a good ALU). A CPU can also do floating point operations, but due to low parallelization it's not really optimal for that. Graphics cards, as well as some old math co-processors, are very good at floating point operations. Those operations are a lot rarer in general computing, but they do dominate certain tasks: gaming is one of them, as well as some productivity and scientific computing. Gaming is (relatively) low precision, so games often use the single precision or half precision capabilities of cards. Meanwhile, productivity tasks like CAD work, scientific simulations, and medical screening require the same fundamental operations in more precise form and thus often use double precision (basically the same floating point, but with a lot more digits after the point, so less rounding, more precision and usually less speed, and on consumer cards a lot less speed, due to nVidia and AMD wanting to milk enterprises with Quadros and Radeon Pros).

Obviously other aspects of a card matter, but flopping also matters a lot. Depending on the architecture, it can be hard to achieve the maximum theoretical floating point performance, be it because the architecture is difficult to program for or because of various software overhead. A good example of an architecture that is difficult to program for is Kepler: each SMX (streaming multiprocessor) had 192 cores, compared to Fermi's 32, but relatively smaller controller logic. I won't get into details, but after a while it became clear that Kepler's SMX controller logic was insufficient to properly distribute load to each CUDA core and required some software trickery to work well; without it, the CUDA cores were essentially underutilized and a lot of performance was lost. Still, even with this unfortunate trait, Kepler was a massive improvement over Fermi, so even less-than-ideal optimization meant it was faster than Fermi. The problem only became clear once it got old and devs may have stopped optimizing for it as much, so Radeons that were weaker at launch started to beat faster Kepler cards.

All I want to say is that floating point performance certainly matters, but for various reasons the maximum theoretical floating point performance may not be achieved. That doesn't make the floating point spec useless; it's there, but how much of it is achieved in reality will inevitably vary. Games are made under various development constraints (time, money, team size, talent, goals, etc.) and often don't really extract everything from the cards. As long as they run well enough and a good degree of the actual floating point performance is achieved, there's very little reason to pour more R&D into optimization. Meanwhile, professional software is often more serious about getting as much performance as possible, because of how computationally heavy certain tasks are, so developers are far more motivated (and less limited by time and budget) to optimize for the hardware. That's why some cards that top game benchmarks are beaten by supposedly "less" powerful cards. Oh, and nVidia historically gimps double precision floating point performance a lot more on consumer cards than AMD; that's why AMD cards have dominated MilkyWay@Home for a long time.

So there's only one question: how much easier is it to tap into all those RDNA 2 teraflops compared to GCN? Sadly, that's hard to quantify, but it seems it should be substantially easier.
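As a small aside on the single vs double precision point above, this is the kind of rounding difference being referred to; a minimal NumPy illustration:

```python
import numpy as np

# The same literal stored at different precisions keeps a different number of
# correct digits; double precision simply rounds much less.
print(f"float32 0.1 -> {np.float32(0.1):.20f}")    # 0.10000000149011611938
print(f"float64 0.1 -> {np.float64(0.1):.20f}")    # 0.10000000000000000555
print(f"float32 eps: {np.finfo(np.float32).eps}")  # ~1.19e-07
print(f"float64 eps: {np.finfo(np.float64).eps}")  # ~2.22e-16
```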
 
The problem with that is whether those screens were native 720p or higher. I saw RDR2 being played on the Win 3 and it looked no different from playing on the Xbox One or PS4, where the majority of games also ran at 720-900p. RDR2 ran at 864p on the Xbox One.
Upscaling console games from 900p and 864p to 1080p looks better than native 720p. The Win 3 has only a 5.5-inch 720p screen, so it won't be as noticeable compared to 1080p. The Deck has a 7-inch 720p screen with larger pixels. The bigger the screen, the worse 720p looks.
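Pixel density makes that point concrete: ppi is just the diagonal pixel count divided by the diagonal size in inches. A quick sketch (the 7-inch 1080p entry is a hypothetical comparison panel; the Deck row uses its 1280x800 spec):

```python
import math

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixels per inch of a display: diagonal resolution / diagonal size."""
    return math.hypot(width_px, height_px) / diagonal_in

print(f"Win 3, 5.5-inch 1280x720: {ppi(1280, 720, 5.5):.0f} ppi")   # ~267
print(f"Deck, 7-inch 1280x800:    {ppi(1280, 800, 7.0):.0f} ppi")   # ~216
print(f"7-inch 1920x1080 (hypo.): {ppi(1920, 1080, 7.0):.0f} ppi")  # ~315
```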
 
Oh people, you are making TPU uncool here. The main task a graphics card is supposed to do is to flop. […]
Grow up, read up, and give your noggin a tap.

You don't define what makes a GPU good or not.

Its use depends on its use case.

This IS for gaming, not simulations or supercomputer work or servers, etc. Gaming.

Get over yourself.
 
Grow up, read up, and give your noggin a tap.

You don't define what makes a GPU good or not.

Its use depends on its use case.

This IS for gaming, not simulations or supercomputer work or servers, etc. Gaming.

Get over yourself.
Dude, I'm saying that floating point is pretty much fps.
 
Oh people, you are making TPU uncool here. The main task a graphics card is supposed to do is to flop. […]

How can you reconcile the RX 5700 having better frame rates in most games than the Vega 64? That's 9.6 TFLOPS vs 12.5 TFLOPS. Not to mention a higher power limit, lol. TFLOPS don't tell the whole story in gaming.

Can you at least wait until the Deck is out before sounding so sure it sucks?
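Putting the poster's own TFLOPS numbers next to an assumed average fps gap shows how far apart gaming throughput per TFLOP can be; the ~10% advantage used here is purely illustrative, not a measured figure:

```python
# Illustrative only: the TFLOPS figures come from the post above, the fps
# ratio is an assumed placeholder to show the shape of the calculation.
rx5700_tf, vega64_tf = 9.6, 12.5
assumed_fps_ratio = 1.10          # hypothetical RX 5700 average advantage

perf_per_tflop = assumed_fps_ratio / (rx5700_tf / vega64_tf)
print(f"RX 5700 fps per TFLOP vs Vega 64: ~{perf_per_tflop:.2f}x")  # ~1.43x
```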
 