Wednesday, May 11th 2011
AMD A-Series APUs Tested Against Sandy Bridge CPUs at Gaming on IGP
What happens when you pit Intel's "Visually Smart" Sandy Bridge processors against Radeon-enriched AMD Fusion A-Series accelerated processing units (APUs) at gaming on integrated graphics? The Intel chips do terribly. Surprise! And that is before considering that AMD is pitching its A-Series Fusion APUs as a lot more than CPUs with embedded GPUs: they are pitched as processors that make lower-mainstream discrete graphics pointless and that push the software ecosystem toward heavier GPGPU use, so applications can benefit from the over 500 GFLOPs of compute power the 400 stream processor DirectX 11 GPU brings to the table.
A leaked presentation slide shows AMD's performance projections for the A-Series' GPU. The tests included GPU-heavy DirectX 10 titles such as Crysis Warhead and Borderlands, as well as DirectX 11-ready titles such as DiRT 2. AMD's quad-core A8-3850, A8-3650, and A8-3450 were lined up against Intel's dual-core Sandy Bridge Core i3-2100 and quad-core Core i5-2300 and Core i5-2500K. The Atom-competitive E-350 "Zacate" dual-core was also in the comparison, perhaps to show that it is nearly as good at graphics as Intel's much higher-segment Core series processors.

30 frames per second (FPS) is considered the "playable" limit by some tech journalists, but AMD defined playability as a range between 25 and 30 FPS. Each of the three A-Series chips scored above 25 FPS in every test, while the A8-3650 and A8-3850 reached or crossed the 30 FPS mark. Going by these results, AMD achieved what it set out to do: use its considerable GPU-engineering muscle to lift its CPU business. The A-Series APUs should make a formidable option for home desktop buyers who want strong graphics for casual gaming: they cost about the same as Intel's dual-core Sandy Bridge chips while offering four x86-64 cores. AMD's new A-Series Fusion APUs will launch in early June.
Source:
DonanimHaber
50 Comments on AMD A-Series APUs Tested Against Sandy Bridge CPUs at Gaming on IGP
Hydrophobia: Prophecy is a good example right now. It is a GPU eating monster. I have to turn my 460's fan up to 100% just to play it and keep temps out of the 100C range. GPU usage is also almost always above 90%.
Not that I'd be looking to buy a budget-level laptop right now, but if I did, Fusion would be very attractive. I actually wish Apple would switch to these in their low-end MacBooks, because my old iBook is in need of an update. I only use the thing for writing and web browsing. Haven't wanted to spend the $$ to replace it yet until I see some interesting features.
An APU integrates a CPU and a GPU on the same die, improving data transfer rates between these components while reducing power consumption. APUs can also include video processing and other application-specific accelerators.
So AMD is (or will be) simply but effectively targeting Intel's heart: onboard video.
Same goes for HTPCs: who wouldn't want a Radeon HD 5570-class GPU and a Phenom II-class CPU in the same chip in their little HTPC in the living room, instead of Intel with its sucky built-in crapstics?
I can play games all nicely :)
Apple use OpenCL in the OS and Windows will use it in Win8, Office 2010 has GPU acceleration, as do all today's Internet browsers (getting towards everything on and off screen practically), media players, etc.
it's not only needed for games; it's usable for almost anything you do with your OS, and that use is increasing every year. the customer may not care about graphics processors; they just need to know the price/performance information.
to say a GPGPU is not needed for "normal" applications and casual computer usage is to show ignorance of today's applications and their abilities.
"Apple use OpenCL in the OS and Windows will use it in Win8, Office 2010 has GPU acceleration, as do all today's Internet browsers (getting towards everything on and off screen practically), media players, etc. "
Apple's "Core Image" for example uses the GPU for acceleration, as well as paid programs like "Pixelmator".
GPU's are ALREADY in use for "normal" operations.
On March 3, 2011, the Khronos Group announced the formation of the WebCL working group to explore defining a JavaScript binding to OpenCL. This creates the potential to harness GPU and multi-core CPU parallel processing from a Web browser.
On May 4, 2011, Nokia Research released an open source WebCL extension for the Firefox web browser, providing a JavaScript binding to OpenCL.
the current versions of Internet browsers such as Opera, Firefox and I.E. ALL have GPU acceleration for many aspects of browsing: off-screen web-page composition, CSS, JPEG decoding, font rendering, window scaling, scrolling, Adobe Flash video acceleration, and a growing share of the pipeline that makes up the internet surfing experience.
everything in the browser is evolving to take advantage of the CPU and GPU as efficiently as possible, using API's such as DirectX, OpenCL, etc.
Office apps, internet browsers, video players.....
between office apps and browsers alone, that's most of the planet's "normal" usage.
what definition of "normal" are you thinking of that denies GPU's are already used in today's everyday usage scenarios?
imagine all of the current companies and corporations, even datacentres and clusters, that benefit from better performance, better efficiency, or their desired balance of the two.
this is not just for casual customers on the street; these chips are going to be in everything, and the more efficient they are, the better for us, in ways that few people here seem to appreciate.
some people may ask what the point is of having the browser or Office 2010 accelerated. maybe they won't notice the performance difference, but they are not the whole equation and they shouldn't be treated as such.
even if they don't perceive the performance, application developers now have more processing potential to make their apps better. if that doesn't happen, then the processing becomes more efficient and prices drop, saving money. even if the money saved is not a consideration for these office workers, their managers and bosses WILL consider it.
it's a WIN for us any way we look at it.
more efficient processors benefit us one way or another, and to think "i don't need a more efficient processor" demonstrates ignorance of this technology and the applications you use.
As a side note, I don't feel any difference between my Mobility 4570 and my sister's Mobility 5650 in web page performance.
Edit: Currently the main usage of processing power revolves around the x86 architecture, so as long as we are primarily using the x86, I cannot see how we will be able to migrate to GPGPU.
regarding x86, that may change with ARM and Microsoft changing things with Google's and Nvidia's help (let alone the continuing investments from T.I., Qualcomm, Marvell, Samsung, etc.), and it certainly will change with languages like OpenCL that mean homogeneous acceleration on any hardware.
currently you need to program for either the GPU or the CPU, and we are only taking baby steps with languages like OpenCL that can automatically take advantage of any OpenCL-compatible hardware. with OpenCL the goal is to write code without thinking about the hardware. we're just starting, and we've already got traction and practical benefits today. this will only get better with time.
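to make that "hardware-agnostic" point concrete, here is a minimal host-side sketch in C (an illustration only, not code from any project mentioned in this thread) of how an OpenCL program can pick whichever device is present, GPU if available and otherwise the CPU, with the rest of the code not caring which it gets:

/* minimal sketch: pick any available OpenCL device (GPU preferred, CPU as
   a fallback) and set up a context and command queue. Link with -lOpenCL. */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platform;
    cl_device_id device;
    cl_int err;
    char name[256];

    /* first OpenCL platform exposed by the installed drivers */
    err = clGetPlatformIDs(1, &platform, NULL);
    if (err != CL_SUCCESS) { printf("no OpenCL platform found\n"); return 1; }

    /* prefer a GPU, fall back to the CPU; nothing below depends on which */
    err = clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);
    if (err != CL_SUCCESS)
        err = clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &device, NULL);
    if (err != CL_SUCCESS) { printf("no OpenCL device found\n"); return 1; }

    clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, NULL);
    printf("running on: %s\n", name);

    /* the same context/queue code serves an APU's integrated GPU, a
       discrete card, or a multi-core CPU driver */
    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, &err);

    /* kernels would be built and enqueued here */

    clReleaseCommandQueue(queue);
    clReleaseContext(ctx);
    return 0;
}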
as for the hardware, the traditional CPU and GPU have been on a collision course for decades, with increasing features being shared such that it's getting more difficult to differentiate them by the day.
things like the GPU becoming programmable with DirectX's programmable shaders, then later unified shaders, then DirectCompute, and OpenGL's similar parallel efforts.
we now have GPU's like Nvidia's Fermi architecture with L1 and L2 caches, ECC and a bunch of other HPC and supercomputing features that rock the previously CPU-only-driven supercomputing world. even Intel with its Sandy Bridge CPU's has evolved from the cross-bar memory bus to what GPU's have been using for years: a ring bus.
the defining lines separating CPU's and GPU's are blurring, and there will very soon be a single processor that computes everything. there will be no more GPU; there will only be an evolved new generation of processors that may still be called CPU's, but they will no longer be "general". they will be agnostic: neither specific, like a GPU used to be, nor general, like the CPU used to be.
that only leaves the question of which architecture these chips will follow: x86, ARM, or some other flavour.
these new-generation processors will no longer be "jack of all trades, yet master of none"; they will evolve into "master of all".
signs of such evolutionary processor design have been seen already.
Sony evolved their thinking along these lines when designing processors with Toshiba and IBM for their PlayStation consoles. their PlayStation 2 had a MIPS core with custom VU0 and VU1 vector units. these were similar to DSP's that could blaze through vector maths, but they could be programmed more flexibly than traditional DSP's.
games developers used them in all sorts of ways, from improving graphics functions to supplement the GPU's hard-wired feature set, to EA creating an accelerated software stack for Dolby 5.1 running solely on the VU0 unit, freeing up the rest of the CPU.
with the PS3, Sony took the idea forward again, generalizing the functionality of the VU0 and VU1 units even further from their DSP heritage, and came up with the Synergistic Processing Unit (SPU), confusingly also referred to as a Synergistic Processing Element (SPE).
these SPE's would not only be more powerful than the previous VU0 and VU1 units, they would also increase in number: whereas in the PS2 the vector units were a minority of the main CPU's processing, in the PS3's Cell BE they would be the majority of its processing potential, with eight SPE's attached to a familiar PPC core.
Sony prototyped the idea of not having a separate CPU and GPU for the PS3, toying with the idea of two identical Cell BE chips to be used by developers as they wished. the freedom was there, but the development tools to take advantage of massively parallel processors weren't, as seen with Sega's twin Hitachi SH2 processors generations ago.
we have the parallel hardware; we simply need to advance programming languages to take advantage of it. this is the idea behind OpenCL.
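as a small, purely illustrative example of that language shift (not taken from any of the software discussed above): a plain C loop that one core would walk serially can be expressed as an OpenCL C kernel, where each work-item handles one element and the runtime spreads the work across however many shader cores or CPU cores the device offers.

/* serial C: a single core walks the whole array
   for (int i = 0; i < n; ++i) c[i] = a[i] + b[i];              */

/* the same operation as a data-parallel OpenCL C kernel; the host
   enqueues it with clEnqueueNDRangeKernel and a global work size of
   exactly n, so get_global_id(0) gives each work-item its element  */
__kernel void vec_add(__global const float *a,
                      __global const float *b,
                      __global float *c)
{
    size_t i = get_global_id(0);
    c[i] = a[i] + b[i];
}

the same kernel source runs unchanged on a discrete Radeon, on Llano's integrated GPU, or on the x86 cores through a CPU OpenCL driver, which is exactly the direction being argued for here.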
to find a better balance, Sony approached Nvidia late in the development of the PS3 and finally settled on a GeForce 7800-class GPU, with its separate (non-unified) vertex and pixel shaders, to go up against ATI's more modern unified shader architecture in the Xbox 360.
it will be interesting to see Sony's plans for the PS4's architecture if they continue their commitment to massively parallel processing.
meanwhile, PC projects like Intel's "Larrabee" and AMD's "Fusion" show that the evolution of processing is heading towards homogeneous computing, with no specialty chips and all processing potential being used efficiently because there are no idle custom functions.
AMD bought ATI, and their Fusion project will eventually merge the GPU's SIMD units with the CPU's traditional floating-point units, beginning the mating of the CPU and GPU into what will eventually be a homogeneous processor.
just as smart folks like PC game-engine designers and Sony have been predicting since 2006 and beyond.
AMD, small as they are in comparison, made a massive bet buying ATI years ago, and while it may have been premature by a year or so, and nearly broke the company... it is already paying MASSIVE rewards.
Intel are at least two years away from catching up to what AMD is selling THIS YEAR with its Fusion technology. Bulldozer tech + GPU processors have great potential, and these current Fusions are based on older AMD CPU designs, so there's even more potential ahead.
Intel seem to be fumbling around with GPU technology as if they don't understand it, like it's exotic or alien. why can't they make a simple, decent GPU to start? what's with the ridiculous, sub-standard, under-performing netbook-class GPU's?
Well, that's strange, because it seems a Llano is physically smaller than a Sandy Bridge!!
My mainboard only has PCI slots; I have an AGP card, but where would I put it?
It is a pain in the ass, as I can barely watch a presentation, not to mention a movie.
Companies buy the cheapest configuration, and that is the rule everywhere, so users in this segment will benefit the most from this solution.