
Valve Announces the Steam Deck Game Console

How can you reconcile the RX 5700 having better frame rates in most games than Vega 64? That's 9.6 TFLOPS vs 12.5 TFLOPS. Not to mention a higher power limit, lol. TFLOPS don't tell the whole story in gaming.

Can you at least wait until the Deck is out before sounding so sure it sucks?
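
For reference, that headline TFLOPS number is just shader count × 2 ops per clock × clock speed. A quick sketch, using approximate boost clocks, so treat the results as ballpark:

```python
# Peak FP32 TFLOPS = shaders * 2 ops per clock (FMA) * clock in GHz / 1000.
# Clocks below are approximate boost clocks; exact figures vary by listing.
def tflops(shaders: int, clock_ghz: float) -> float:
    return shaders * 2 * clock_ghz / 1000

print(f"Vega 64:    {tflops(4096, 1.546):.1f} TFLOPS")  # ~12.7
print(f"RX 5700 XT: {tflops(2560, 1.905):.1f} TFLOPS")  # ~9.8
print(f"RX 5700:    {tflops(2304, 1.725):.1f} TFLOPS")  # ~7.9
```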
And here I go bust. I don't know. I just know that flops definitely matter, since basically all geometry calculations are floating point. I said that there could be limitations in achieving maximum theoretical floating point performance. From the spec sheet, the RX 5700 XT definitely looks worse overall, but there's one thing that is better on it, and that's pixel fillrate. It seems that the hardware in that one aspect of the RX 5700 XT is just better, and maybe that one specification matters.

Here's a snack:

It seems that Vega 64 may perform better than RX 5700 XT at low resolutions, but maybe not.
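
To put rough numbers on that fillrate point: peak pixel fillrate is roughly ROPs × clock, and since both cards have 64 ROPs, the 5700 XT's higher clock is the whole advantage (clocks approximate):

```python
# Peak pixel fillrate estimate: ROPs * clock in GHz = GPixel/s.
# Both cards have 64 ROPs; approximate boost clocks.
def gpixel_s(rops: int, clock_ghz: float) -> float:
    return rops * clock_ghz

print(f"Vega 64:    {gpixel_s(64, 1.546):.0f} GPixel/s")  # ~99
print(f"RX 5700 XT: {gpixel_s(64, 1.905):.0f} GPixel/s")  # ~122
```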

And I'm saying you're wrong.
Any technical reason why?
 
And here I go bust. I don't know. I just know that flops definitely matter, since basically all geometry calculations are floating point. I said that there could be limitations in achieving maximum theoretical floating point performance. From the spec sheet, the RX 5700 XT definitely looks worse overall, but there's one thing that is better on it, and that's pixel fillrate. It seems that the hardware in that one aspect of the RX 5700 XT is just better, and maybe that one specification matters.

Here's a snack:

It seems that Vega 64 may perform better than RX 5700 XT at low resolutions, but maybe not.


Any technical reason why?
Because flops do not align well with FPS, why else?!

In flops, did a 1080 Ti beat a Vega 64? No, but in FPS it did.
 
Because flops do not align well with FPS, why else?!

In flops, did a 1080 Ti beat a Vega 64? No, but in FPS it did.
I see. You still haven't noticed that it's maximum *theoretical* floating point performance.
 
And here I go bust. I don't know. I just know that flops definitely matter, since basically all geometry calculations are floating point. I said that there could be limitations in achieving maximum theoretical floating point performance. From the spec sheet, the RX 5700 XT definitely looks worse overall, but there's one thing that is better on it, and that's pixel fillrate. It seems that the hardware in that one aspect of the RX 5700 XT is just better, and maybe that one specification matters.

Here's a snack:

It seems that Vega 64 may perform better than RX 5700 XT at low resolutions, but maybe not.


Any technical reason why?

DING DING DING.

With newer architectures it becomes easier to achieve maximum theoretical performance.

Get it?

Anyways, if you want to claim that Vega 64 is better at low res than the 5700, then post evidence. It still wouldn't explain the difference between the two cards, seeing as the Vega has a ~25% higher TF number.
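
To illustrate the idea with invented utilization figures (purely to show the mechanism, not measured numbers): a smaller theoretical number at high sustained utilization beats a bigger one at low utilization:

```python
# Toy illustration: effective throughput = peak TFLOPS * sustained utilization.
# The utilization figures are invented purely to show the mechanism.
vega64_eff   = 12.7 * 0.60   # older arch: harder to keep all shaders busy
rx5700xt_eff = 9.75 * 0.85   # newer arch: better occupancy and scheduling
print(f"Vega 64 effective:    ~{vega64_eff:.1f} TFLOPS")   # ~7.6
print(f"RX 5700 XT effective: ~{rx5700xt_eff:.1f} TFLOPS") # ~8.3
```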
 
DING DING DING.

With newer architectures it becomes easier to achieve maximum theoretical performance.

Get it?
I already knew that; that's why Kepler aged poorly. But I think the pixel fillrate advantage still matters.
 
The 5700 is a little faster than the Vega 64:

[chart: Relative Performance, 1920x1080]

Thanks for proving my point! Vega 64 has more TF, but the 5700 has better game performance.

I already knew that; that's why Kepler aged poorly. But I think the pixel fillrate advantage still matters.

Stop moving the goalposts. Even if pixel fillrate matters, it just goes to show that TFLOPS isn't everything when it comes to gaming performance, which was your original point. Just admit you don't know; it's ok to be wrong about things and learn...
 
We really need hands-on reviews. @W1zzard, will TPU be doing a Steam Deck review? It isn't in your usual suite of reviewed products.
 
Stop moving the goalposts. Even if pixel fillrate matters, it just goes to show that TFLOPS isn't everything when it comes to gaming performance, which was your original point. Just admit you don't know; it's ok to be wrong about things and learn...
I don't mind admitting something, but there's nothing to learn here. You all just keep saying that this or that doesn't matter, but what does matter? I'm pretty sure that flops, pixel fillrate and texture fillrate do indeed matter at different stages of the 3D rendering pipeline. I don't know exactly where, but I'm open to learning that. Sadly, nobody talks about that here.
 
I see. You still haven't noticed that it's maximum *theoretical* floating point performance.
What's that got to do with actual FPS performance?

Nothing.

And again you dodged my question: is a Vega 64 better than a 1080 Ti then?! Cos its flops say so.
 
I'm talking about the graphics card: what inside it does matter? You say that floating point performance doesn't matter.
It's not totally irrelevant, but ROPs, TMUs, ray tracing units, clock frequency, ACE engines, the cache structure and amount, other special-function hardware, and the attached memory frequency and bandwidth all have a part to play in how much FPS you get, whereas flops use a few, but far from all, of those functions.
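
A crude way to picture it, as a toy model only (the per-frame costs are invented; the per-second budgets are roughly 5700 XT peak figures): the frame rate is capped by whichever resource runs out first, and flops is just one candidate:

```python
# Toy bottleneck model: a frame costs some compute, pixels and bandwidth;
# whichever per-second budget runs out first caps the frame rate.
def max_fps(needs_per_frame: dict, budget_per_second: dict) -> float:
    return min(budget_per_second[k] / needs_per_frame[k] for k in needs_per_frame)

needs  = {"tflops": 0.10, "gpixels": 1.5, "gbytes": 6.0}   # invented frame cost
budget = {"tflops": 9.75, "gpixels": 122, "gbytes": 448}   # ~5700 XT peaks
print(f"~{max_fps(needs, budget):.0f} fps")  # bandwidth-bound here: ~75 fps
```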
 
I don't mind admitting something, but there's nothing to learn here. You all just keep saying that this or that doesn't matter, but what does matter? I'm pretty sure that flops, pixel fillrate and texture fillrate do indeed matter at different stages of the 3D rendering pipeline. I don't know exactly where, but I'm open to learning that. Sadly, nobody talks about that here.

Look, of course TF matters, but it's just a measure of one aspect of performance. Another problem is that you think all TF are equal. They aren't when it comes to actual game performance. That is why you can't compare AMD and Nvidia cards strictly on TFLOPS since Maxwell: Nvidia uses their TF better (more efficiency tricks) than AMD did, and got better game performance even though their TFLOP numbers aren't equal.

As MrK is pointing out, Vega 64 had more TF than the 1080 Ti, but the 1080 Ti is far and away the better gaming GPU. Just don't get so hung up on the TFLOP numbers; they'll give you a rough idea, but there's more to the equation.
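
You can even put a rough number on "uses their TF better" by dividing gaming performance by peak TFLOPS. The relative-performance figures below are ballpark from memory, purely illustrative:

```python
# Rough 'gaming performance per TFLOP'. The relative-performance numbers are
# ballpark figures for illustration, not exact chart values.
cards = {
    "GTX 1080 Ti": (100, 11.3),  # (relative fps index, peak FP32 TFLOPS)
    "Vega 64":     (78,  12.7),
}
for name, (fps, tf) in cards.items():
    print(f"{name}: {fps / tf:.1f} index points per TFLOP")
# 1080 Ti ~8.8 vs Vega 64 ~6.1: Pascal extracts far more from each TFLOP.
```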
 
It's not totally irrelevant, but ROPs, TMUs, ray tracing units, clock frequency, ACE engines, the cache structure and amount, other special-function hardware, and the attached memory frequency and bandwidth all have a part to play in how much FPS you get, whereas flops use a few, but far from all, of those functions.
ROPs and TMUs probably influence FLOPS. Ray tracing units don't matter unless you use them. Clock speed is surely directly related to FLOP output. ACE is just AMD's version of the SMX, which is a CU (Compute Unit). A CU is cores with controlling logic, but without cache, ROPs, TMUs, encoders, decoders, etc. Memory bandwidth is likely heavily related to FLOP output too.


Look, of course TF matters, but it's just a measure of one aspect of performance.
It's a general performance evaluation. It takes into account clock speed, IPC, core count and likely memory speed. That's pretty much the whole card.

Another problem is that you think all TF are equal.
They are; a floating point operation per second isn't a fuzzy metric.

They aren't when it comes to actual game performance. That is why you can't compare AMD and Nvidia cards strictly on TFLOPS since Maxwell: Nvidia uses their TF better (more efficiency tricks) than AMD did, and got better game performance even though their TFLOP numbers aren't equal.
But why?


As MrK is pointing out, Vega 64 had more TF than the 1080 Ti, but the 1080 Ti is far and away the better gaming GPU. Just don't get so hung up on the TFLOP numbers; they'll give you a rough idea, but there's more to the equation.
Well, that's the general consensus, but I wonder why exactly.
 
ROPs and TMUs probably influence FLOPS. Ray tracing units don't matter unless you use them. Clock speed is surely directly related to FLOP output. ACE is just AMD's version of the SMX, which is a CU (Compute Unit). A CU is cores with controlling logic, but without cache, ROPs, TMUs, encoders, decoders, etc. Memory bandwidth is likely heavily related to FLOP output too.



It's a general performance evaluation. It takes into account clock speed, IPC, core count and likely memory speed. That's pretty much the whole card.


They are; a floating point operation per second isn't a fuzzy metric.


But why?



Well, that's the general consensus, but I wonder why exactly.
Why? Because both manufacturers have been at this a while; they realised early on that they're just chasing the bottleneck around, and always will be.
And clearly flops were not the bottleneck, so what is?!

Also, regurgitating the same shit I just said back at me, slightly fleshed out, is you agreeing with me that flops are not the big picture?! Sooo?!
 
Also, regurgitating the same shit I just said back at me, slightly fleshed out, is you agreeing with me that flops are not the big picture?! Sooo?!
I don't care about agreeing or not. I only care about actually learning how each graphics card specification influences the way the graphics card works.
 
I don't care about agreeing or not. I only care about actually learning how each graphics card specification influences the way the graphics card works.
Funny, because this started because you "know" the Steam Deck is useless for gaming on, FFS.

You don't learn by shitposting in tangential threads you have no interest in, yet spouting crap like you're an insider who somehow knows how it performs before reviews?!
 
Funny, because this started because you "know" the Steam Deck is useless for gaming on, FFS.

You don't learn by shitposting in tangential threads you have no interest in, yet spouting crap like you're an insider who somehow knows how it performs before reviews?!
Either way, it will not run Cyberpunk or Valhalla very well. You people are expecting miracles from thin client hardware. Meanwhile, I have been interested in low end hardware for a long time, used some of it and observed what it can do. The Deck won't be a miracle today. Cyberpunk will not run at an average of 40 fps, and the Deck will certainly struggle in more and more games as time goes on. I personally don't like buying hardware that is only scraping by today, as it feels disappointing soon and makes the TCO high. The Deck doesn't do anything to make me think otherwise. lol, you already have a Zen 2 chip. Why don't you try disabling cores and setting it at the Deck's PPT and clock speeds? I wonder how fun that will be in games. You can also downclock your graphics card to loosely match the Vega 11. Now tell me how well everything runs.
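
If anyone actually wants to try that on Linux, here's a rough sketch of the CPU side using the kernel's standard sysfs knobs (the Deck's announced CPU is 4c/8t Zen 2 at 2.4-3.5 GHz; setting the actual PPT needs vendor tools, and logical CPU numbering vs. SMT siblings varies by system, so treat this as a starting point):

```python
# Rough Steam Deck CPU simulation on a desktop Zen 2 box (Linux, run as root).
# Offlines logical CPUs 8+ (the Deck is 4c/8t) and caps the max clock at
# 3.5 GHz (the Deck's announced boost). Check topology/thread_siblings_list
# first, since logical CPU numbering vs. SMT siblings varies by system;
# limiting the PPT itself needs vendor tools and is out of scope here.
import glob

DECK_THREADS = 8
DECK_MAX_KHZ = 3_500_000  # cpufreq expects kHz; 3.5 GHz boost

for path in glob.glob("/sys/devices/system/cpu/cpu[0-9]*"):
    n = int(path.rsplit("cpu", 1)[1])
    if n >= DECK_THREADS:
        with open(path + "/online", "w") as f:
            f.write("0")  # take this logical CPU offline
    else:
        with open(path + "/cpufreq/scaling_max_freq", "w") as f:
            f.write(str(DECK_MAX_KHZ))  # cap the boost clock
```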
 
Either way, it will not run Cyberpunk or Valhalla very well. You people are expecting miracles from thin client hardware. Meanwhile, I have been interested in low end hardware for a long time, used some of it and observed what it can do. The Deck won't be a miracle today. Cyberpunk will not run at an average of 40 fps, and the Deck will certainly struggle in more and more games as time goes on. I personally don't like buying hardware that is only scraping by today, as it feels disappointing soon and makes the TCO high. The Deck doesn't do anything to make me think otherwise. lol, you already have a Zen 2 chip. Why don't you try disabling cores and setting it at the Deck's PPT and clock speeds? I wonder how fun that will be in games. You can also downclock your graphics card to loosely match the Vega 11. Now tell me how well everything runs.
You keep spouting crap, but I showed you the GPD Win 3 playing Cyberpunk just fine on medium settings at above 30 FPS while it has an inferior GPU. Just stop at this point, dude.
 
You keep spouting crap, but I showed you the GPD Win 3 playing Cyberpunk just fine on medium settings at above 30 FPS while it has an inferior GPU. Just stop at this point, dude.
It was running it at low settings, and with the feature enabled that lowers resolution to maintain fps. And a 30 fps average is unplayable IMO, with that framerate variation.
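
For what it's worth, that resolution-lowering feature is just dynamic resolution scaling, which is basically a feedback loop on frame time. A toy sketch of the idea (all values invented):

```python
# Toy dynamic-resolution controller: drop the render scale when a frame
# misses its budget, creep back up when there's headroom. Values invented.
TARGET_MS = 33.3   # ~30 fps frame budget
scale = 1.0        # per-axis render scale (1.0 = native resolution)

def on_frame(frame_ms: float) -> float:
    global scale
    if frame_ms > TARGET_MS:          # over budget: render fewer pixels
        scale = max(0.5, scale * 0.95)
    elif frame_ms < TARGET_MS * 0.8:  # clear headroom: raise quality again
        scale = min(1.0, scale * 1.02)
    return scale
```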
 
It was running it at low settings, and with the feature enabled that lowers resolution to maintain fps. And a 30 fps average is unplayable IMO, with that framerate variation.
I'm not going to argue, as you've shown you have no clue as to what you are talking about.
 
You keep spouting crap, but I showed you the GPD Win 3 playing Cyberpunk just fine on medium settings at above 30 FPS while it has an inferior GPU. Just stop at this point, dude.
Hi,
Yeah, 30 fps should be fine on a device this small.

Way too small for me though.
 
I'm not going to argue, as you've shown you have no clue as to what you are talking about.
I still see no reason why you are so hyped for the Deck. I'm pretty sure that someone like you never even considered buying a 3400G, so why is the Deck an exception? It doesn't matter if it's 800p; it's still a lousy GPU. Once you need some shading, it can't really provide that without tanking fps. Many games also tend to look really bad at low settings. It will not last (in terms of performance); not sure what's so fun about having a Valve paperweight after 2 years. I'm also sure that it's heavily overpriced even for what it is. Interesting concept, poor value and terrible performance with occasional compatibility issues. Yeah, really fun.
 
Hi,
Yeah, 30 fps should be fine on a device this small.

Way too small for me though.
Just check this out. This is the GPD Win 3 shown playing various current games at decent settings with 30-60 FPS @ 720p. It's pretty stellar, especially considering its inferior specs.

 