CPU having TFLOPs number comparable to GPU, whah?
It is if the only thing you care about is how quickly individual execution units can do floating point math. On its own the number is really meaningless unless you also consider how many execution units there are, how parallel the workloads are, and how much throughput a single execution unit can deliver, which is the part that really matters to a dev. As a developer (talking about CPUs), there are only a few things I care about when it comes to performance:
- How much throughput do I get on any particular serial workload (single core/execution unit performance; what can I do in lock-step)?
- How many cores do I have (how many times can I do the things above in a purely parallel manner)?
- What kind of latency am I introducing by using multiple cores?
I don't really care what the full aggregate compute power is because, in this day and age, that doesn't tend to be the bottleneck; your serial workloads tend to limit how parallel your applications can be. The work you have to do one step after another on a single core to make sure things happen properly is always what slows applications down from this standpoint, and using multiple cores only helps when the overhead it adds doesn't leave you getting nowhere (or even degrading performance).
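To put a rough number on that "serial work is the limit" point, here's a quick back-of-the-envelope sketch. It's just a plain Amdahl's-law model with made-up numbers (the 10% serial fraction is an assumption for illustration, not anything measured on real hardware):

```python
# Rough Amdahl's-law sketch: a serial portion caps what extra cores can buy you.
# The 10% serial fraction and the core counts are made-up numbers for illustration.

def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    """Ideal speedup when only the parallel part of the work scales across cores."""
    parallel_fraction = 1.0 - serial_fraction
    return 1.0 / (serial_fraction + parallel_fraction / cores)

for cores in (1, 2, 4, 8, 16, 64):
    print(f"{cores:>2} cores -> {amdahl_speedup(0.10, cores):.2f}x speedup")
# Even with only 10% of the work stuck running one step after another,
# the speedup flattens out below 10x no matter how many cores you add,
# and that's before counting any synchronization overhead between cores.
```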
That's my rant about how FLOPS are a terrible gauge of aggregate performance and (in my opinion) mean very little. You could have two very different machines with the same "FLOPS" rating but very real differences in performance when it comes to real-world applications. Aside from that, it's not like I spend my entire day doing floating point operations; integer operations are kind of an important thing too, and once again it's per core or EU that counts.
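And just to show how hollow that aggregate number is: peak FLOPS is basically cores × clock × FLOPs-per-cycle-per-core, so two hypothetical machines can land on the exact same total with very different per-core throughput (both configs below are made up purely for the arithmetic):

```python
# Peak FLOPS is just cores * clock * FLOPs-per-cycle-per-core.
# Both machines below are hypothetical; they hit the same aggregate number
# with very different single-core throughput.

def peak_gflops(cores: int, clock_ghz: float, flops_per_cycle: int) -> float:
    return cores * clock_ghz * flops_per_cycle

many_slow_cores = peak_gflops(cores=16, clock_ghz=2.0, flops_per_cycle=8)   # 256 GFLOPS
few_fast_cores  = peak_gflops(cores=4,  clock_ghz=4.0, flops_per_cycle=16)  # 256 GFLOPS

print(many_slow_cores, few_fast_cores)
# Same "FLOPS" on paper, but the 4-core box has 4x the per-core peak, so it wins
# on serial workloads, while the 16-core box only wins when the work splits cleanly.
```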
</rant>
I love how console upgrades are now backward compatible. Thank you Sony, Microsoft and, of course, AMD.
You can thank Intel too; x86 and DirectX are what make it backwards compatible. It's a PC, didn't you know this?
They said "(4K) and (1080@60Hz)". I don't know why people try to interpret those words any other way. Not even a flagship desktop GPU can offer 4K@60, so why put that hope in a weaker console chip?
It matters if the person is thinking about video playback and not gaming. If the iGPU is going to be as powerful as something like my 390, I doubt it will play games well at 4K by itself, but it probably would have absolutely no problem playing back 4K content, or even playing some select games in 4K that aren't as graphically demanding as others.
For example, on the Xbox 360 a lot of games ran at 720p and upscaled to 1080p, but there were some games, like Geometry Wars 2 (an arcade title), that were simple enough that native 1080p was more than realistic. I wouldn't be surprised if this new Xbox worked the same way: if a game isn't graphically demanding enough to overload the GPU, it could be run at a higher resolution. To me, that makes sense. It would be like me grabbing an old game (that somehow supported 4K) and running it at that resolution. The game isn't demanding, so running it at 4K might be realistic.
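For a rough sense of scale, here's the raw pixel-count arithmetic behind that (resolutions only; the actual GPU cost per pixel obviously depends on the game):

```python
# Raw pixels per frame for the resolutions mentioned above.
resolutions = {
    "720p":  (1280, 720),
    "1080p": (1920, 1080),
    "4K":    (3840, 2160),
}

pixels = {name: w * h for name, (w, h) in resolutions.items()}
for name, count in pixels.items():
    print(f"{name}: {count:,} pixels ({count / pixels['1080p']:.2f}x 1080p)")
# 4K is 4x the pixels of 1080p and 9x the pixels of 720p, so a game that barely
# taxes the GPU at 1080p is a far more plausible 4K candidate than a demanding one.
```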