Well, look at this gameplay comparison between a PS4 and a Ryzen 5 1600 / GTX 1060 / 16 GB RAM PC.
There is no difference in quality, even though we can assume the PC is (much) faster, no?
All that shows is that Rockstar's PC ports are still garbage. There are plenty of comparisons like this showing vast differences.
But they filmed it before that.
Concerning the variable clocks, I don't mind them because a) they're not based on thermals, so everyone gets the same performance regardless, and b) they're there to keep cooling noise acceptable.
One example he gave of when the CPU clock would be lowered is when it's executing a lot of 256-bit instructions, because maintaining full clocks under that load would require a bigger power supply and fan.
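To illustrate the idea (a toy model only, not Sony's actual algorithm - every number and threshold below is invented): a deterministic clock policy derives the clock from a power figure modeled on the instruction mix, never from temperature, which is why every console behaves identically.

```python
# Toy sketch of a deterministic, activity-based clock policy: the clock is
# chosen from a *modeled* power draw that depends on the instruction mix,
# never on temperature. All numbers are invented for illustration.

POWER_BUDGET_W = 60.0             # hypothetical CPU share of the power budget
BASE_WATTS_PER_GHZ = 12.0         # modeled cost of ordinary integer/FP code
AVX256_EXTRA_WATTS_PER_GHZ = 8.0  # modeled extra cost of 256-bit vector ops

def cpu_clock_ghz(avx256_ratio: float, max_clock: float = 3.5) -> float:
    """Pick the highest clock whose modeled power draw fits the budget.

    avx256_ratio: fraction of issued instructions that are 256-bit SIMD.
    """
    watts_per_ghz = BASE_WATTS_PER_GHZ + avx256_ratio * AVX256_EXTRA_WATTS_PER_GHZ
    return min(max_clock, POWER_BUDGET_W / watts_per_ghz)

print(f"scalar-heavy code: {cpu_clock_ghz(0.05):.2f} GHz")  # stays at max (3.50)
print(f"AVX-heavy code:    {cpu_clock_ghz(0.90):.2f} GHz")  # backs off (~3.12)
```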
Congratulations on missing the most obvious joke ever, I guess? I mean, what you're saying is right there in what you quoted from me.
Even if the presentation was filmed a few days ago, that doesn't make that image any less funny IMO.
Beyond that, the fact that it doesn't thermal throttle is nice, but "it's to keep cooling noise acceptable" is BS - if that was the case, they wouldn't be pushing GPU clocks to 2.23GHz in the first place. Designing for a fixed power target instead of a fixed clock target is perfectly viable, but it is an approach that sacrifices some performance.
I also find it rather telling that they're using SmartShift - while it's a brilliant little piece of tech, it's first and foremost designed for thermally constrained laptops (where the heat dissipation capability of the cooling system is lower than what the hardware can produce if left to run free), so using it here clearly indicates that they expect there to be a need for balancing power within a fixed total budget. In other words, the hardware could do more if it just had better cooling and power delivery.
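To make the concept concrete, here's a rough sketch of the SmartShift idea (my own simplification, not AMD's implementation - the budget and floor values are made up): one shared power budget, divided between CPU and GPU in proportion to demand.

```python
# Rough sketch of fixed-total-budget power sharing: instead of giving CPU
# and GPU fixed, separately-provisioned power slices, spare power flows to
# whichever processor is busier. All values are invented for illustration.

TOTAL_BUDGET_W = 200.0
CPU_FLOOR_W, GPU_FLOOR_W = 30.0, 80.0  # guaranteed minimum allocations

def split_budget(cpu_load: float, gpu_load: float) -> tuple[float, float]:
    """Divide the shared budget in proportion to demand (loads in 0..1)."""
    spare = TOTAL_BUDGET_W - CPU_FLOOR_W - GPU_FLOOR_W
    demand = cpu_load + gpu_load or 1.0  # avoid dividing by zero when idle
    cpu_w = CPU_FLOOR_W + spare * cpu_load / demand
    gpu_w = GPU_FLOOR_W + spare * gpu_load / demand
    return cpu_w, gpu_w

print(split_budget(0.2, 1.0))  # GPU-bound scene: (45.0, 155.0)
print(split_budget(1.0, 0.5))  # CPU-heavy scene: (90.0, 110.0)
```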
He's not wrong, adding more CUs does add more cache, but it's not that simple. In a GPU, each CU has some cache dedicated to it; if you add more CUs, each one still has access to the same amount of cache, so you'll still run into the same limitations in memory-bound situations.
A GPU with fewer CUs and higher clocks will perform better in memory-bound situations. And it's not uncommon for parts of a shader to contain scalar code, which will also run better at higher clocks.
Not that it matters much in this case, because the PS5's GPU isn't equivalent, TFLOP-wise, to the one in the XSX.
Well, sure, I didn't mean to say there were no advantages, just that they are mostly tiny and more than counteracted by having a wider GPU with more total resources. Not to mention that the 256-bit PS5 is far more likely to be memory-bound than the 320-bit XSX.
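To put rough numbers on that, using the publicly stated specs (PS5: 36 CUs at up to 2.23GHz on a 256-bit bus at 448GB/s; XSX: 52 CUs at 1.825GHz with its fast 10GB pool at 560GB/s):

```python
# Back-of-the-envelope bytes-per-FLOP comparison from the announced specs.

def tflops(cus: int, clock_ghz: float) -> float:
    # 64 shaders per CU, 2 FP32 ops (one FMA) per shader per clock
    return cus * 64 * 2 * clock_ghz / 1000

for name, cus, clock, bw_gbps in [("PS5", 36, 2.23, 448), ("XSX", 52, 1.825, 560)]:
    tf = tflops(cus, clock)
    print(f"{name}: {tf:5.2f} TFLOPS, {bw_gbps} GB/s -> "
          f"{bw_gbps / (tf * 1000):.4f} bytes per FLOP")
# PS5: 10.28 TFLOPS -> ~0.0436 B/FLOP
# XSX: 12.15 TFLOPS -> ~0.0461 B/FLOP
```

Despite being wider, the XSX actually ends up with slightly more bandwidth per FLOP, which is exactly why the 256-bit PS5 is the one more likely to end up memory-bound.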
Yeah, I'm sure they all have their inside information and knew all about the other's plans months in advance. I'm not sure what you mean by "it forces developers to target a lower level of performance to make sure all PS5 games perform the same in all environments"?
The PS5 has less GPU horsepower, and that power is dependent on avoiding power spikes, so developers will (at least until they become intimately familiar with the system) need to cut down their targets a bit to maintain acceptable performance.
I would think the XOX vs. PS4 Pro is a reasonable analogy (even if the difference between them is bigger than the difference here): the PS4 Pro consistently runs cross-platform games at lower resolutions and detail levels, and often still struggles to match the frame rates of the XOX. Just check out pretty much any of Digital Foundry's excellent comparison videos (Jedi: Fallen Order springs to mind as a stand-out that I watched).
The GPU and CPU are one chip, it's an APU. No RAM, only the GDDR6 chips and a large SSD.
... GDDR6 is still RAM ...
But I agree, Sony needs healthy profit margins because it's not sustainable to sell at a loss all the time. They have other struggling divisions, too.
The PS4 was much inferior technologically, so yes, it was quite normal for it to be cheaper.
Consoles are generally sold at break-even or at a loss, with profits made on game licensing. Any game sold for PS4 or Xbox One comes with a $10 licence fee to the platform owner (though I believe this is slightly lower for cheap indies etc.). That's why console games are more expensive and the hardware is much cheaper when compared to PC.
As for the PS4 being technologically inferior ... to what? It soundly beat the XBone. PS4 Pro vs. Xbox One X is another story entirely, but you didn't say Pro.
Anyone know when Nintendo will complete the trifecta of trash that will hobble games for the next decade?
No reason to expect consoles with loads of fast memory, fast 8c16t CPUs, native flash storage and very powerful RT-enabled GPUs to hobble anything for quite a while. Jaguar was crap even back in 2012-13; Zen 2 is state of the art - and they haven't even cut clocks much! These consoles are far superior to the average gaming PC in pretty much every respect (remember, the average PC has a 4c8t CPU and a GTX 1060) and will allow for massive growth in the quality of games moving forward, including sorely missed improvements in CPU-bound tasks, audio, physics, AI, etc.
Funny thing is, AMD had 3D audio a LONG time ago... kicker is, hardly ANYBODY EVER USED IT!
It was called AMD TrueAudio. From back in 2013!
Then the newer one was called TrueAudio Next.
A new version of TrueAudio, TrueAudio Next, was released with the AMD Radeon 400 series GPUs. TrueAudio Next utilizes the GPU to simulate audio physics.
Yeah, I'm really looking forward to an increased focus on audio on both consoles. And given that the processing is done on AMD hardware it wouldn't be too big of a stretch of the imagination to see it implemented on the PC through GPU accelerated audio either. True positional and spatial audio will be a massive boon to immersion for sure.
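As a toy illustration of what positional audio means at its most basic (this has nothing to do with TrueAudio's actual implementation - the head model below is a deliberate oversimplification): it starts with interaural time and level differences, which HRTF-based spatial processing then refines.

```python
# Pan a mono source into stereo using interaural time and level differences,
# the crudest possible building blocks of positional audio.
import numpy as np

SAMPLE_RATE = 48_000
SPEED_OF_SOUND = 343.0  # m/s
HEAD_WIDTH = 0.2        # rough distance between the ears, in metres

def position_mono_source(mono: np.ndarray, azimuth_rad: float) -> np.ndarray:
    """Return an (n, 2) stereo buffer for a source at the given azimuth."""
    # Interaural time difference: sound reaches the far ear slightly later.
    itd_samples = int(abs(np.sin(azimuth_rad)) * HEAD_WIDTH
                      / SPEED_OF_SOUND * SAMPLE_RATE)
    near = mono
    far = np.concatenate([np.zeros(itd_samples), mono])[:len(mono)]
    far = far * 0.6  # crude interaural level difference (head shadowing)
    left, right = (near, far) if azimuth_rad < 0 else (far, near)
    return np.stack([left, right], axis=1)

tone = np.sin(2 * np.pi * 440 * np.arange(SAMPLE_RATE) / SAMPLE_RATE)
stereo = position_mono_source(tone, np.pi / 4)  # 45 degrees to the right
print(stereo.shape)  # (48000, 2)
```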
The variable clock also means we are open to stuttering and screen tearing. One benefit consoles have over PCs is that everything is a smooth, solid experience - until now, anyway. I'm leaning towards not buying any console and just getting an RTX 3080 Ti and going balls to the wall on PC.
Yeah, developers need to work hard to avoid power draw spikes to keep clocks consistent and thus avoid this. Hopefully there's at least some leeway for short-term power spikes in the management system.
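One plausible form that leeway could take (purely hypothetical on my part - the limit and window values below are invented) is enforcing the power limit on a rolling average rather than instantaneously, so brief spikes get absorbed and only sustained draw forces a clock drop:

```python
# Power limiting over a rolling window: one huge single-sample spike does
# not trip the limit as long as the average stays under it.
from collections import deque

class AveragedPowerLimiter:
    def __init__(self, limit_w: float, window: int):
        self.limit_w = limit_w
        self.samples: deque[float] = deque(maxlen=window)

    def must_throttle(self, instantaneous_w: float) -> bool:
        self.samples.append(instantaneous_w)
        return sum(self.samples) / len(self.samples) > self.limit_w

limiter = AveragedPowerLimiter(limit_w=180.0, window=10)
draws = [150.0] * 9 + [400.0]  # nine calm frames, then one big spike
print(any(limiter.must_throttle(w) for w in draws))  # False: spike absorbed
```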
And which one is going to kick the other in the goolies? PS5 looks like the winner on paper.
Care to expand on that? Not that I don't think it will be good, but it will definitely need to be cheaper (even if Sony has a massive mindshare advantage).
Sony can't sell this at anything below $500 (or just a cent under that) unless they're willing to take massive losses on the BOM. The SSD itself is top of the line and will cost a pretty penny. Flash isn't cheap, and the controller likely isn't either, but it won't be that much more expensive than the competition's (which has more flash).
Since it's a "custom SSD", Sony can always replace it with something cheaper. As long as it offers the same interface and performance - no one is going to complain.
And it's still almost a year until these consoles hit the shelves.
The expensive 7nm CPU/GPU is what really pushes the price up - and the impact is and will remain larger in the Xbox.
Flash is flash, it costs what it costs. Prices will come down in time, but slowly. Only way to make the controller cheaper without losing performance is moving to a smaller node, which takes time unless you want to pay a premium (which obviously negates any savings).
Do you really believe the PS5 GPU will run at a 2.23GHz core clock? I don't believe it. The PS5 is really a 9TF machine. I'm buying an XSX.
Judging by what they said, they have pushed clocks pretty much as far as they can go - a 10% drop in power for a 2-3% drop in performance was mentioned - so clocks will likely still stay quite high even when power limited. 2.1GHz/9.6TFLOPS is likely entirely sustainable.
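The arithmetic behind those figures, for what it's worth (36 CUs, 64 shaders per CU, 2 FP32 ops per clock; the cubic power approximation is the usual rule of thumb, not anything Sony stated):

```python
# TFLOPS scale linearly with clock, while dynamic power scales roughly with
# f * V^2 - and voltage itself drops with frequency, hence roughly f^3.

def tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000  # 64 shaders/CU, 2 FP32 ops/clock

print(f"36 CUs @ 2.23 GHz: {tflops(36, 2.23):.2f} TFLOPS")  # ~10.28
print(f"36 CUs @ 2.10 GHz: {tflops(36, 2.10):.2f} TFLOPS")  # ~9.68

# Under the f^3 approximation, a ~3% clock cut (2.23 -> 2.16 GHz) saves
# roughly 9-10% power - consistent with the 10%-for-2-3% figure above.
print(f"power ratio: {(2.16 / 2.23) ** 3:.3f}")  # ~0.909
```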
I wonder why Sony chose an 825GB SSD while MS has a 1TB model. Is it perhaps due to overprovisioning, with Sony wanting the SSD to have a longer life than Microsoft does?
Many of the tech details are just AMD IP: a Zen 2 chip with an RDNA 2 feature-set GPU. Nothing special.
But it's good for AMD in this as well; it's bound to sell millions of consoles with their hardware inside. The whole gaming ecosystem will be based upon AMD hardware.
The odd size is due to the SSD controller having 12 flash channels instead of the "normal" 8 (or 4 for lower-end drives). With flash chips coming in the same standard capacities, you then end up with strange total capacities.
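For the curious, the arithmetic works out neatly if you assume one common 64GiB (512Gbit) flash package per channel - a plausible configuration, though not officially confirmed:

```python
# 12 channels x 64 GiB lands almost exactly on the odd "825 GB" figure.
channels = 12
gib_per_package = 64                    # one 512 Gbit package per channel
total_gib = channels * gib_per_package  # 768 GiB
total_gb = total_gib * 2**30 / 10**9    # binary GiB -> decimal GB
print(f"{total_gib} GiB = {total_gb:.1f} GB")  # 768 GiB = 824.6 GB ~ "825 GB"
```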
The whole gaming ecosystem has been based on AMD hardware since the current console generation launched in 2013, but the difference is that it's now high-end hardware rather than low-end CPUs and mid-range GPUs. As such it'll push games further and promote propagation of advanced tech like spatial audio and RTRT. This is rather exciting, even if the underlying tech is for the most part known.
I have to say the clock rates of this chip make me rather excited for upcoming AMD GPUs, though. If a console can hit 2.23GHz, PC GPUs should be able to exceed that, at least when OC'd. If the upcoming RDNA 2 GPUs run at 2-2.1GHz stock without being terribly inefficient, that's a big improvement for sure.