Friday, November 2nd 2012
Sony PlayStation 4 "Orbis" Kits Shipping to Developers, Powered by AMD A10 APU
According to a VG 24/7 report, Sony began shipping development kits of its upcoming game console, PlayStation 4, codenamed "Orbis," to developers. The kit is described as a "normal sized PC," driven by an AMD A10 "Trinity" APU and 8 or 16 GB of memory. We've known from reports dating back to April that Sony plans to use a combination of APU and discrete GPU, similar to today's Dual Graphics setups, where the APU graphics core works in tandem with a discrete mid-range GPU. The design goal is to be able to play games at 1920 x 1080 resolution with a 60 Hz refresh rate, with the ability to run stereoscopic 3D at 60 Hz. For storage, the system combines a Blu-ray drive with a 250 GB HDD. Sony's next-generation game console is expected to be unveiled "just before E3" 2013.
Source:
VG 24/7
354 Comments on Sony PlayStation 4 "Orbis" Kits Shipping to Developers, Powered by AMD A10 APU
I think you should open your console's cover and place your high-end GPU, six-core Intel CPU, high-end mobo, and 32 GB of RAM inside the console box. :rockout: ...and many PC games are ported from console games.
Trololololol
AMD sucks balls, Intel is a million times better. :rockout: They should just be burned to the ground, what a waste of human resources. Don't get me started on their shite GPUs. :mad:
:roll: :roll: :roll: :roll: :roll:
I wouldn't say average frame rate is all that matters. FPS dips will affect gameplay far more than dialing the LoD back a touch at the cost of some visual effects. Say you have two situations: in one, a certain game, say Modern Warfare 4, runs at 1080p with an average of 60 fps but drops as low as 30 fps. Now compare that to the same game running at the same resolution but averaging 55 fps and never dropping below 50 fps, the only difference being that they removed a particular lighting effect. On a console the second option will be more enjoyable because you'll get a more consistent frame rate, despite the lower average.
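To make that concrete, here's a minimal sketch with made-up per-second samples (not benchmark data) for the two hypothetical scenarios; the averages are close, but the minimums are what you actually feel:

```python
def stats(samples):
    """Return (average fps, minimum fps) for a list of fps samples."""
    return sum(samples) / len(samples), min(samples)

scenario_a = [60] * 8 + [30, 45]   # high average, but deep dips in heavy scenes
scenario_b = [55] * 9 + [50]       # one effect removed: lower average, no deep dips

for name, samples in (("A", scenario_a), ("B", scenario_b)):
    avg, low = stats(samples)
    print(f"Scenario {name}: average {avg:.1f} fps, minimum {low} fps")
# A averages 55.5 fps yet dips to 30; B averages 54.5 fps and never drops below 50.
```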
As for console optimization, you're kind of blurring the perception a bit. The most demanding and well-optimized console games render at 720p at about 29-30 fps, with the equivalent of low/medium settings and almost no anti-aliasing. They look decent by most people's standards, but hitting that bar isn't that hard. You could accomplish the same thing on the same title with a mediocre PC (with none of that optimization), akin to a C2D and an HD 4870 GPU. Hell, you could probably do a decent amount better.
You have to realize that rendering at 540p and 720p really gives a lot of wiggle room. I think the biggest bottleneck with current-gen consoles is actually in the RAM department. Imagine making a game that can only access 256 MB of system memory (the PS3) and see how well it runs. I think we have to keep things in perspective. The APU's GPU is capable of running modern PC games at medium settings at 1080p with moderate amounts of AA, and still posting 20-30 fps. That's really not bad. When you throw a 6670 into the mix it only gets better. The question will always be quality. They could use that setup and render a single textured cube spinning at 1080p at 60 fps and their statements would be accurate, but people want a game that hits those settings and still looks good.
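For a sense of scale, here's the pixel-count arithmetic (assuming standard 16:9 render targets); per-pixel shading work scales roughly with these counts:

```python
# Pixel counts for common render resolutions, relative to full 1080p.
resolutions = {"540p": (960, 540), "720p": (1280, 720), "1080p": (1920, 1080)}
baseline = 1920 * 1080
for name, (w, h) in resolutions.items():
    pixels = w * h
    print(f"{name}: {pixels:,} pixels, {pixels / baseline:.0%} of 1080p")
# 540p is 25% and 720p is about 44% of the per-pixel workload of 1080p.
```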
If they come out with a system that performs well in EVERY game at high or ultra settings, that's good, but they don't need to push the same FAKE NEW TECHNOLOGY as the PS3 to steal money from idiots with the Blu-ray excuse.
This is very sad, and we PC gamers will again get ports with shit graphics and no optimization, like GTA 4 and others.
Now we'll see.
Well, he asked for it. :laugh:
Rumours suggest the next Xbox is going the same route of using an AMD APU. So that will make both consoles boring and pretty much identical PCs-in-a-box, with only the optical drives and the company logos setting them apart.
The worst part is, they're not even going to be high end by today's standards, which they were back in 2005/2006 when the 360 and the PS3 launched, and they certainly NEED to be if they want them to last anywhere near as long as the current consoles have. It's been more than half a decade since the launch of both the 360 and the PS3, and they're already seeming more and more dated with every release.
The only good news I see in this is that it'll use AMD GPUs. This should take a tonne of weight off the shoulders of their driver developers and make their lives a whole lot easier, with first-party developers maximizing the capabilities of their hardware (and possibly encouraging more console developers to develop, or at least release, decent ports of their games).
Other than that, the whole lineup of next-gen consoles sounds like crap. At this point, I can't see why another company, other than Sony, Microsoft and Nintendo, can't do the very same thing but better. I can't see how it seems so far-fetched to you. The 360's X1950-level GPU and the PS3's crappy 7800GT can do that already. This APU is in a whole different league compared to those old dogs. With a reasonable level of anti-aliasing/anisotropic filtering, it sounds good by today's standards. The problem is, it'll be painfully slow by tomorrow's standards (even if they release it by 2014, it will already be way outdated).
Why, you ask?
It has a larger market!
A good app ecosystem (stores). The Unreal Engine kit is already working without problems, so no headaches for devs...
The darn thing isn't only usable for gaming and doesn't gather dust on the shelf while mommy won't give $$ for a new [again the same] COD :D.
Do you think it won't be able to catch up with this so-called next gen?
Well, I am currently playing this on my almost two-year-old crap phone based on Tegra 2.
Horn Game
Today I spared more time than usual to read the whole thread, register (after years), and prepare a reply in 90's Internet format. Your formula is mathematically correct for painting pixels on a 2D surface with a "simple C algorithm" and a predefined array of 2D vector images.
Today, rendering a 3D scene onto a 2D surface is much more complex than your simple calculation.
Objects are represented as 3D meshes with additional properties tied to them (materials, textures, etc.). You place these objects into 3D space by applying transformations, then output the texture coordinate(s) (2D, used to pick a color from the texture at [x, y]) and the vertex position "on screen"; usually you use not only [x, y] coordinates but also information about how deep "into the screen" the vertex is (vertex processing).
For each on-screen pixel covered by the calculated triangle (the output of three vertex position transformations), a pixel (fragment) shader is then run with interpolated values of the vertex shader outputs (here you can mix, modify, skip, etc. the on-screen pixel color).
You see? Your simple calculation covers only the pixel shader computation. The procedure described is a very simplified, totally basic projection of textured 3D objects into 2D space. Our simple "resolution-based computation power requirements" formula is now much more complex, isn't it? (Let's not even start adding basic lighting, shadowing, or God forbid animation to the calculation.)
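A deliberately simplified sketch of that vertex-processing step, in plain Python rather than shader code, with a placeholder identity matrix standing in for a real model-view-projection transform:

```python
import numpy as np

def vertex_to_screen(position, mvp, width, height):
    """Project one 3D vertex to pixel coordinates, keeping its depth."""
    clip = mvp @ np.append(position, 1.0)   # model-view-projection transform
    ndc = clip[:3] / clip[3]                # perspective divide -> [-1, 1] range
    x = (ndc[0] * 0.5 + 0.5) * width        # viewport mapping to pixel coordinates
    y = (1.0 - (ndc[1] * 0.5 + 0.5)) * height
    return x, y, ndc[2]                     # depth is kept for depth testing

mvp = np.eye(4)                             # placeholder, not a real camera
print(vertex_to_screen(np.array([0.25, -0.5, 0.1]), mvp, 1920, 1080))
```

The pixel (fragment) shader then runs once for every covered pixel, which is the part of the pipeline that actually scales with resolution.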
If you try to animate an object, you need to update its vertex positions within the mesh, because they are not only somewhere else on screen (which happens when you turn the camera) but their positions relative to each other change as well. If this is handled by the CPU, it is often responsible for the "CPU bottleneck", since it puts constant pressure on the CPU regardless of graphical settings. You can see it in multiplayer FPS games with many players, or, as a perfect example, in MMORPGs (the CPU requirements of a game like Lineage 2 in serious "mass" PvP are astronomical). If it is handled by the GPU, you again have a constant computational cost that is not affected by the render resolution.
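A toy cost model of the same point (the unit costs and overdraw factor are made up): per-vertex animation/transform work depends only on the mesh, while per-pixel work grows with the render resolution:

```python
def frame_cost(num_vertices, width, height, overdraw=1.5):
    """Return (vertex work, pixel work) in arbitrary units for one frame."""
    vertex_work = num_vertices              # unchanged by resolution
    pixel_work = width * height * overdraw  # scales with the render target
    return vertex_work, pixel_work

for res in ((1280, 720), (1920, 1080)):
    v, p = frame_cost(500_000, *res)
    print(f"{res}: vertex work {v:,}, pixel work {p:,.0f}")
```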
On topic:
Create a compiler for exactly one x86 microarchitecture, without the compromises of "universal x86" instruction selection (you can take exact memory/cache latencies and instruction latencies into account when selecting instructions), and believe me, you will see miracles.
If next-gen consoles contain some sort of x86-based APU, have you not considered that this will force a considerable number of software developers to adopt new thinking about utilizing APUs in general? And that is a great success even for future desktop development.
If you have an exact machine specification (HW, SW), you don't need statistics to determine how many operations you can execute in a given time on the (average) target hardware with the (average) target software layer (drivers, OS); you can count them exactly.
Re: the 90's Internet format (semi-OT):
In the past, reading almost any discussion thread on a site devoted to technical stuff resulted in gaining substantial knowledge (either from users writing the information directly in their posts, or from them pointing other discussants to relevant resources). After an hour of forum reading, you could take for granted that your knowledge base had expanded (not necessarily in exactly the direction you wanted).
Today, after the huge expansion of Internet users, and with a connection accessible even on the toilet, you need to watch out not to end up more stupid after an hour of reading a technical forum.
If users spent a single hour reading about how 3D rendering works (you can pick the DirectX SDK samples, the NeHe tutorials, some other introductory material, or even a completely simple "how it works" or Wikipedia [1][2] article) instead of smashing F5 for the quickest possible response to "discussion enemies", there would be real information sharing and a knowledge gain for everyone. Today the Internet is not a medium for information and knowledge sharing (I sometimes have the bad feeling that the knowledge-generation process is stagnating) but one great human-based random "BS" generator that can compete without any problems with a random number generator running on a supercomputer.
Seriously, this thread contains enough text and graphics to fill a PhD thesis or some other such work, but the posts with real information value can be counted on one's fingers...
Until some genius comes up with a "BS filter", it would be interesting to "emulate" such a feature by having moderators or even forum users manually flag information-rich posts (something like the existing "thanks" function), with a forum filter to show only the flagged posts.
EDIT: I have now checked Wikipedia's second link, and the statement "Vertex shaders are run once for each vertex given to the graphics processor" is not always true. If you use the Radeon HD 2000-HD 4000 series tessellator, the number of vertices processed by the vertex shader is actually higher, because the fixed-function tessellator sits before the vertex shader in the rendering pipeline (see Programming for Real-Time Tessellation on GPU).
Simply scaling by a coefficient isn't possible, due to the large data overhead that also runs in parallel: first of all memory bandwidth bottlenecks, then latency increases from more complex scenes, more shader-intensive work from light sources and the like, and the engine itself.
The coefficient hasn't been linear since the late '90s, I guess.
The next thing is something only consoles have! It's why they can evolve the graphics on the same platform. Take Metal Gear, for example: Hideo Kojima himself stated in an interview that development took so long because they had to rewrite many engine parts in assembly to achieve the needed performance on the PS3. It is a nightmare, you know, but it also bends the math about what we can expect on screen, because there is no recompiler software layer.
In deferred rendering, what he describes only happens in the first pass (BF3 has 8+ passes): the diffuse color pass, which carries very little information. After that, everything from lighting to shading to advanced shadowing to ambient occlusion happens on a per-pixel basis. Without all of these per-pixel calculations the end result would look like a 90's 3D game. 90% of the work is based on pixel data == buffers == frames that are afterwards mixed (by the ROPs, again pixel by pixel) into the final composition.
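A schematic, runnable sketch of that structure (the pass list and weights are illustrative, not BF3's actual renderer): after the thin G-buffer fill, every later pass touches every pixel, so the cost tracks the pixel count:

```python
passes = [("g-buffer fill", 1.0), ("lighting", 2.0), ("shadow resolve", 1.0),
          ("ambient occlusion", 1.5), ("post-process/AA", 1.0)]

def deferred_cost(width, height, per_pixel_passes):
    """Weighted full-screen passes: each is assumed to touch every pixel once."""
    pixels = width * height
    return sum(pixels * weight for _, weight in per_pixel_passes)

for res in ((1280, 720), (1920, 1080)):
    print(f"{res}: ~{deferred_cost(*res, passes):,.0f} weighted pixel operations")
```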
Now I know what you meant and I think it'll be close.
Both forward and deferred rendering require more outputs from the vertex shader (e.g. normals), not only texture coordinates and the vertex position in 2D space. It is hard to imagine how you could produce dynamic shadows with only a single projection (in the aforementioned BF).
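For illustration, the kind of per-vertex output bundle this refers to might look something like the sketch below (the field names are made up, not any specific engine's):

```python
from dataclasses import dataclass

@dataclass
class VertexOutput:
    clip_position: tuple   # (x, y, z, w) after the projection transform
    uv: tuple              # texture coordinates
    world_normal: tuple    # needed for lighting and most shadow techniques
    world_position: tuple  # needed for per-pixel lighting and shadow-map lookups
    shadow_coords: tuple   # the vertex reprojected into the shadow map's space

v = VertexOutput((0.1, 0.2, 0.5, 1.0), (0.25, 0.75),
                 (0.0, 1.0, 0.0), (1.0, 0.0, 2.0), (0.4, 0.6, 0.5))
print(v)
```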
I recently noticed AMD have changed Trinity's successor back to Piledriver cores with Radeon cores next, which to me indicates a Trinity successor with GCN2 and HSA optimisations ahead of more CPU grunt. It's this chip, with additional IP yet to be disclosed (IMHO an interposer with L4 x 126 MB cache and further DSPs), that I believe will be the basis of a PS4 anyway, not this Trinity chip directly, that's for sure, so apples to apples will count for nothing.
Add in hardware optimisations for consoles and AMD's hint at a hard-coded gaming future (API reduced/removed) and you will have a console that does 60 fps.
To the naysayers: I have a CrossFire main rig with a PhysX card (hybrid) and it will do 60 fps in EVERY game at 1080p with few settings ever needing to be eased, bar AA. So to me the games AMD doesn't do well in are NVIDIA-biased or straight-up PhysX games. An interesting point is that the Xbox and PS3 implementations of PhysX use SSE-like extensions and are better optimised, so NVIDIA are going to have to write PhysX to work well on AMD gear :eek:, of a kind :p
I'd swear some games work better sometimes just because an NVIDIA card is present (though not used by the game).
I'm sorry, but it looks like you're learning 3D programming and you're still stuck on Chapter 1.
Your argument was that there's more than pixel shading, which is true and no one said otherwise. However, you went all out describing what is basically <5% of a modern game's render pipeline and frame time, as if it represented a big proportion of it. So it's essentially true, but as I said, irrelevant for disputing the argument that 4x the resolution requires 4x the power*. Even modern shadows are much more than a projection into shadow maps and depend heavily on pixel shading.
*I said it in a previous post: that word is the biggest problem. The problem, in a way, is that people read power as == performance in reviews, and that's incredibly inaccurate. A card with 2x the SPs is undeniably 2x as powerful in that department, whether it ends up producing 2x the fps or fails to do so because it's bottlenecked elsewhere.
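A quick back-of-the-envelope check of that 4x claim, ignoring resolution-independent costs such as vertex and CPU work:

```python
low, high, fps = (960, 540), (1920, 1080), 60

def pixels_per_second(resolution, fps):
    width, height = resolution
    return width * height * fps

ratio = pixels_per_second(high, fps) / pixels_per_second(low, fps)
print(f"1080p at {fps} fps needs {ratio:.0f}x the pixel throughput of 540p")  # 4x
```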
Now the problem regarding the OT is that an A10 APU is severely limited on ALL fronts: SPs, ROPs, texture units... everything, and it is simply not going to do what a high-end GPU still has difficulty achieving even today, no matter the optimization. And that's another focus of argument, because some people are saying we don't know if it's going to be a custom APU with more GPU power, etc., but the article states it's an A10, and that's not a custom APU, is it? It's a commercially available APU, which is precisely why an APU is supposedly going to be used: because it's available and cheap to produce. The days of heavily customized chips are over; otherwise (with a custom chip) they would have continued with the PowerPC architecture and kept backwards compatibility. First of all, that claim is either a lie or very arbitrary in what "few settings needing to be eased" truly means.
Second, your CrossFire setup is at least 5x more powerful than an A10 APU, so even if it were true, the APU would do about 12 fps under the same "few settings eased" conditions. It's a console, so let's add a MASSIVE optimization boost from being a console, and you might or might not reach 30 fps (a 150% increase), but 60 fps? Not a chance.
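The arithmetic behind that estimate, spelled out (the 5x gap and the console-optimization multiplier are the post's rough assumptions, not measurements):

```python
crossfire_fps = 60
power_gap = 5                        # CrossFire rig assumed ~5x an A10 APU
apu_fps = crossfire_fps / power_gap  # about 12 fps on the same settings
console_gain = 2.5                   # a generous "console optimization" multiplier
print(f"APU estimate: {apu_fps:.0f} fps raw, ~{apu_fps * console_gain:.0f} fps optimized")
# roughly 30 fps at best, still nowhere near 60 fps
```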
Anyway, the Wii U is rumored to have a significantly more powerful GPU than the A10 APU. Is Sony truly going to release something less capable? :laugh: