Friday, May 24th 2013
Xbox One Chip Slower Than PlayStation 4
After bagging chip supply deals for all three new-generation consoles -- Xbox One, PlayStation 4, and Wii U -- things are looking up for AMD. While the Wii U uses older-generation hardware technologies, the Xbox One and PlayStation 4 use the very latest AMD has to offer: the "Jaguar" 64-bit x86 CPU micro-architecture and the Graphics Core Next GPU architecture. The chips that run the two consoles have a lot in common, but also a few less-than-subtle differences.
The PlayStation 4 chip, which came to light this February, is truly an engineer's fantasy. It combines eight "Jaguar" 64-bit x86 cores clocked at 1.60 GHz with a fairly well-specced Radeon GPU featuring 1,152 stream processors and 32 ROPs, plus a 256-bit wide unified GDDR5 memory interface clocked at an effective 5.50 GHz. At these speeds, the system gets a memory bandwidth of 176 GB/s. Memory is handled as UMA (unified memory architecture): there's no partition between system and graphics memory. The two are treated as items in the same 8 GB pool, and either can use up a majority of it.

The Xbox One chip is a slightly different beast. It uses the same eight 1.60 GHz "Jaguar" cores, but a slightly smaller Radeon GPU packing 768 stream processors, and a quad-channel DDR3-2133 memory interface holding 8 GB of memory, which offers a memory bandwidth of 68.3 GB/s. Memory between the two subsystems is shared in a similar way to the PlayStation 4, with one notable difference: the Xbox One chip adds a large 32 MB SRAM cache, which operates at 102 GB/s but at far lower latency than GDDR5, cushioning data transfers for the GPU. Microsoft engineers are spinning this as "200 GB/s of memory bandwidth," apparently by adding up the bandwidths of the various memory pools in the system.
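The headline bandwidth figures fall straight out of bus width and transfer rate: bandwidth equals bus width in bytes times the effective transfer rate. A minimal sketch of that arithmetic in Python (our own illustration; the inputs are the specs quoted above):

    # Peak memory bandwidth = bus width (bytes) * effective transfer rate (GT/s)
    def bandwidth_gbs(bus_width_bits: int, transfer_rate_gtps: float) -> float:
        """Theoretical peak bandwidth in GB/s."""
        return (bus_width_bits / 8) * transfer_rate_gtps

    # PlayStation 4: 256-bit GDDR5 at an effective 5.5 GT/s
    print(bandwidth_gbs(256, 5.5))        # 176.0 GB/s

    # Xbox One: quad-channel (4 x 64-bit) DDR3-2133, i.e. 2.133 GT/s
    print(bandwidth_gbs(4 * 64, 2.133))   # ~68.3 GB/s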
The two consoles also differ in software. While the PlayStation 4 runs a Unix-derived operating system with the OpenGL 4.2 API, the Xbox One uses software that developers are more familiar with: a 64-bit operating system based on the Windows NT 6.x kernel, running the DirectX 11 API. Despite these differences, the chips in the two consoles should greatly reduce multi-platform production costs for game studios, as both consoles have a lot in common with the PC.
Source: Heise.de
148 Comments on Xbox One Chip Slower Than PlayStation 4
Sony, M$, and AMD are all HSA buddies. These consoles will work like nothing that has come before in many ways, and it's way too early to cast doubt on the performance. Mid-tier PCs these are not.
Fast forward to now: the specs are nothing special here, and there's more of a shift towards what Sony did at the PS3's launch, with their do-everything console at an insane launch price (seems like it) that everyone made fun of thereafter. It could be a real reversal: Microsoft is attempting this do-it-all console (if you can call it that), and Sony is focusing on power again but is also looking more at games.
I have never seen anything take advantage of HSA to any really notable extent yet. Has anyone here seen that?
It's going to be very interesting to see how this turns out.
I'd best describe it in the words one game developer said to me not long ago (not his exact words; greatly shortened): "Working with OpenGL is great. OpenGL is also lighter on the CPU and helps to keep the framerate up when running on weaker CPUs. But OpenGL implementations on Windows just suck and are much slower than they could be."
Also, what midnightoil said.
Nobody forces you to upgrade at every iteration. You have an HD 7950 and a Core i7 920; components that powerful aren't even needed to run games. I've been running my Athlon II X4 for over 3 years and there ain't a single game it can't play well.
That is really odd, especially since that is an i7-3770K we are talking about.
On my FX-8320 @ 4 GHz, TF2 hardly ever goes below 100 fps, despite the fact I have 8 BOINC threads crunching while gaming. With BOINC off, I have to turn on vsync as it starts pointlessly sizzling at over 150 fps at all times; most of the time near 300.
But boooo on you, MS, for not allowing 360 games to work in the new box :slap:
We develop desktop and mobile applications, though, not console games. I can tell you, however, that game development studios will typically build a game not based on high-resolution textures at first, but will essentially get the guts of the game in first (including low-poly assets and animations).
There are enough data analytics in the industry to show that games need to accommodate low-end machines first; that way they can hope to get as many people on the game as possible. The last thing you want to do is create a game that only 15% of your market can actually play. That is bad business, and can lead to dramatically decreased revenue and community-morale issues.
In our company, for example, we are currently working on an RPG game for the desktop (a spiritual successor to a famous '90s game by Konami). We built the initial structure of the game to handle a low-end system, and then build out the rest of the textures and animations for theoretically higher-end systems.
How this normally works (at least in our company) is that, from our growing list of publishers and content providers, we are able to establish a timeline of hardware requirements and their usage. For instance, right now over 50% of our customers would not be able to run current-generation games such as Far Cry 3, Crysis 3, or Battlefield 4. For that reason, we build our games around what a standard low-end system can reasonably handle. Next, we prepare requirements for higher-level machines and build out from there.
At the end of the day (for our products), we have four tiers of graphics experience, ranging from low level / no AA to ultra level / 16x AA.
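To make that concrete, here's a toy sketch of what such tiered presets might look like; the names and numbers are made up for illustration, not our actual settings:

    # Hypothetical quality tiers (illustrative values only)
    QUALITY_PRESETS = {
        "low":    {"msaa": 0,  "texture_size": 512,  "shadows": False},
        "medium": {"msaa": 2,  "texture_size": 1024, "shadows": True},
        "high":   {"msaa": 8,  "texture_size": 2048, "shadows": True},
        "ultra":  {"msaa": 16, "texture_size": 4096, "shadows": True},
    }

    def pick_preset(vram_mb: int) -> str:
        """Pick the highest tier a machine can plausibly drive (toy heuristic)."""
        if vram_mb >= 2048:
            return "ultra"
        if vram_mb >= 1024:
            return "high"
        if vram_mb >= 512:
            return "medium"
        return "low"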
Many companies handle this differently, but being a software engineer who has been in the game industry for a little over 8 years, this is the best practice I've seen.
Thanks,
Phil
LOL I know what that would be. I think, anyway. That Konami game was for the PlayStation...
Where do I begin :wtf:
Yeah, I'm not much of a gamer even though I have a decently high-end rig :shadedshu and all I do on it is play FarmVille :twitch: Also got an Xbox 360, PS3, PS Vita, GameCube & PS2, and all I do is stare at them :shadedshu
"Higher CPU clock speeds have been proven time and time again, to have zero effect on gaming" :confused:
Well, tell that to my i7 920 at 2.66 GHz OCed to 4 GHz :twitch: or my i7 970 at 3.2 GHz OCed to 4 GHz, or even my old AMD X2 6000+ going from 3 GHz to 3.4 GHz, which all felt a damn sight smoother when playing 3D applications/games & even on the desktop, and the extra speed removed bottlenecks :rolleyes: Hell, even the PSP got a CPU speed increase, 222 MHz to 333 MHz :confused: and thus the God of War series came out on it because of that extra speed :toast:
Anyways, IMO increased CPU speeds do help, but they reach a certain point where you're not really getting anything more out of them. I do agree with you on the faster GPU, though :toast: