Friday, June 17th 2011

AMD Charts Path for Future of its GPU Architecture

The future of AMD's GPU architecture looks more open: freed from the shackles of a fixed-function, DirectX-driven evolution model, and with a role in the PC's central processing that goes far beyond merely accelerating GPGPU applications. At the Fusion Developer Summit, AMD detailed its future GPU architecture, revealing that future AMD GPUs will have full support for C, C++, and other high-level languages. Integrated into Fusion APUs, these new number-crunching components will be called "scalar co-processors".

Scalar co-processors will combine elements of MIMD (multiple-instruction multiple-data), SIMD (single-instruction multiple-data), and SMT (simultaneous multithreading). AMD will ditch the VLIW (very long instruction word) model that has been in use across several of AMD's past GPU architectures. While AMD's GPU model will break from a development cycle pegged to that of DirectX, the company doesn't believe that APIs such as DirectX and OpenGL will be discarded. Game developers can continue to develop for these APIs; C++ support is aimed more at general-purpose compute applications. It does, however, create a window for game developers to venture out of the API-based development model (specifically DirectX). With its next Fusion processors, the GPU and CPU components will share a truly common memory address space. Among other things, this eliminates the "glitching" players might sometimes experience when games load textures as they go over the crest of a hill.
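To put the shared-address-space point in concrete terms, here is a minimal C++ sketch. The gpu_launch() call is hypothetical, a stand-in for whatever dispatch interface the eventual runtime exposes, not a real AMD API:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical kernel dispatch, not a real AMD API: assume it hands the
// pointer to a GPU kernel for texture sampling.
void gpu_launch(const uint8_t* texels, size_t bytes);

void stream_in_terrain(const std::vector<uint8_t>& texture) {
    // Today: allocate a VRAM buffer, DMA-copy the texture into it, then
    // draw. That staging copy is the hitch players see cresting a hill.
    //
    // With a common address space, the GPU dereferences the same virtual
    // addresses the CPU uses, and pages are brought in on demand:
    gpu_launch(texture.data(), texture.size());   // no staging copy
}
```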
Source: TechReport

114 Comments on AMD Charts Path for Future of its GPU Architecture

#1
MxPhenom 216
ASIC Engineer
This is looking awesome. Exciting seeing new architecture from AMD. I want to see what Nvidia has going on too.
#2
dj-electric
Gawd, this architecture better be good, AMD. I've been waiting for ages.
#3
NC37
Hopefully it won't turn into another DX10.1. ATI does it, but NV says no, so the industry caves to NV.

Course this is much bigger. Saw this coming. Our CPUs are gonna be replaced by GPUs eventually. Those who laughed at AMD's purchase of ATI...heh. Nice move and I guess it makes more sense to ditch the ATI name if you are gonna eventually merge the tech even more. Oh well, I still won't ever call their discrete GPUs AMD.
#4
Benetanegia
NC37: Hopefully it won't turn into another DX10.1. ATI does it, but NV says no, so the industry caves to NV.
Fermi already does most of those things, so it's quite the opposite. Many of the new features in AMD's design were implemented in G80 five years ago, or later in GT200. AMD is way behind on this, and it's almost funny to see that they are going to follow the same architectural principle Nvidia has been using for the past five years. Of course they are going to make the jump instead of doing it gradually like Nvidia did, but that's only possible thanks to Nvidia doing the hard work and opening doors for years.
#5
Over_Lord
News Editor
Wow, and to think everybody had already written HD7000 off as HD6000 on 28nm with minor improvements. This is BIG!!
#6
Benetanegia
thunderising: Wow, and to think everybody had already written HD7000 off as HD6000 on 28nm with minor improvements. This is BIG!!
Well, this is AMD's new architecture, which does not mean it's the next chip. HD7000 is probably what it was said to be: an evolution of HD6000. Of course it could be this new architecture, but it's not very likely, since HD7000 supposedly taped out some months ago.

Also the article in TechReport says:
While he didn't talk about specific products, he did say this new core design will materialize inside all future AMD products with GPUs in them over the next few years.
Don't you think that, with less than six months left before the HD7000 release, it would already be time to talk about specific products?

Extending to discrete GPUs is the last step, which suggests that will happen in two generations. This is for Fusion only, at least for now it seems. Not for nothing is the new architecture called FSA, Fusion System Architecture.
#7
Shihab
I thought Nvidia's already covered most of these features.
I think I'll just wait for Kepler and Maxwell.
#8
techtard
ATI already had something like this for quite a while. It was called STREAM, and it was pretty bad. AMD rebranded it as AMD APP and it is a little better, but it sounds like they are finally serious about HPC.
Either that, or they have been forced to adopt the nVidia route due to entrenched CUDA and nVidia-paid de-optimizations for folding and other parallel computing.
#9
HalfAHertz
Here's the original article:

www.pcper.com/reviews/Graphics-Cards/AMD-Fusion-System-Architecture-Overview-Southern-Isle-GPUs-and-Beyond

It seems this is indeed the basis for the HD7000 Southern Islands architecture. This will be interesting...

From what I understand, it sounds very similar to the old SPARC HPC processors... What I'm worried about is that such a drastic design change may require an even more drastic change on the software side, which will distance the already limited number of developers backing AMD...
#10
Over_Lord
News Editor
Benetanegia: Well, this is AMD's new architecture, which does not mean it's the next chip. HD7000 is probably what it was said to be: an evolution of HD6000. Of course it could be this new architecture, but it's not very likely, since HD7000 supposedly taped out some months ago.
So you mean to say they'll showcase it to us before tape-out?
#11
Mistral
I blame Carmack for this!

Thanks Carmack...
#12
HalfAHertz
Benetanegia: Fermi already does most of those things, so it's quite the opposite. Many of the new features in AMD's design were implemented in G80 five years ago, or later in GT200. AMD is way behind on this, and it's almost funny to see that they are going to follow the same architectural principle Nvidia has been using for the past five years. Of course they are going to make the jump instead of doing it gradually like Nvidia did, but that's only possible thanks to Nvidia doing the hard work and opening doors for years.
I think you're underestimating AMD's efforts. I highly doubt they have been sitting idly on their thumbs all these years relying purely on Nvidia to make all the breakthroughs ;) The fact that they didn't implement it straight away into their end-products doesn't mean that they haven't been experimenting with such technologies internally. No company would invest in a product until it is financially viable to produce and there is a sufficient market for it, right?
#13
Pijoto
Benetanegia: Well, this is AMD's new architecture, which does not mean it's the next chip. HD7000 is probably what it was said to be: an evolution of HD6000. Of course it could be this new architecture, but it's not very likely, since HD7000 supposedly taped out some months ago.
I was holding out for the HD7000 series for an upgrade, but now I should probably wait for the HD8000 series instead for the new architecture changes... my Radeon 4650 barely runs at 720p in some newer games :banghead:
#14
RejZoR
Is it just me, or is this a way for AMD to run away from x86 by executing high-level languages directly on the GPU? Though I have no idea if this thing relies on x86 or is a whole thing on its own.
#15
Steevo
Benetanegia: Fermi already does most of those things, so it's quite the opposite. Many of the new features in AMD's design were implemented in G80 five years ago, or later in GT200. AMD is way behind on this, and it's almost funny to see that they are going to follow the same architectural principle Nvidia has been using for the past five years. Of course they are going to make the jump instead of doing it gradually like Nvidia did, but that's only possible thanks to Nvidia doing the hard work and opening doors for years.
And while a different approach was taken by ATI for years, they still had top performers in most fields, and still pioneered GPU compute with their early X series of cards.


I am excited to get both on a more common platform though, and as much as I like my 5870 I have been wanting a green card for better GTA performance.
#16
theeldest
HalfAHertz: Here's the original article:

www.pcper.com/reviews/Graphics-Cards/AMD-Fusion-System-Architecture-Overview-Southern-Isle-GPUs-and-Beyond

It seems this is indeed the basis for the HD7000 Southern Islands architecture. This will be interesting...

From what I understand, it sounds very similar to the old SPARC HPC processors... What I'm worried about is that such a drastic design change may require an even more drastic change on the software side, which will distance the already limited number of developers backing AMD...
As I understand it, it should be just the opposite. They're working to make using the GPU transparent to developers. Microsoft was showing off C++ AMP at the conference: you can use the same executable and run it on the CPU, an integrated GPU, or a discrete GPU with no changes.
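For illustration, here's roughly what that looks like: a minimal C++ AMP sketch modeled on Microsoft's published samples (the function and variable names are illustrative, not taken from the demo), assuming a compiler with the C++ AMP support Microsoft announced:

```cpp
// Element-wise vector add. The C++ AMP runtime picks an accelerator at
// dispatch time (a discrete GPU, an APU's integrated GPU, or a CPU
// fallback) with no change to this source or the resulting executable.
#include <amp.h>
using namespace concurrency;

void add_arrays(int n, const float* a, const float* b, float* sum) {
    array_view<const float, 1> av(n, a);  // wrap host memory for the device
    array_view<const float, 1> bv(n, b);
    array_view<float, 1> sv(n, sum);
    sv.discard_data();                    // output only: skip the copy-in
    parallel_for_each(sv.extent, [=](index<1> i) restrict(amp) {
        sv[i] = av[i] + bv[i];            // runs on whatever device was chosen
    });
    sv.synchronize();                     // copy results back to host memory
}
```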
#17
Benetanegia
HalfAHertz: I think you're underestimating AMD's efforts. I highly doubt they have been sitting idly on their thumbs all these years relying purely on Nvidia to make all the breakthroughs ;) The fact that they didn't implement it straight away into their end-products doesn't mean that they haven't been experimenting with such technologies internally.
Nvidia has been much more in contact with their GPGPU customers, asking what they needed and implementing it, and once it was implemented and tested, asking what's next and implementing that too. They have been getting the answers, and now AMD only has to implement those. Nvidia has been investing a lot in universities to teach and promote GPGPU for a very long time too, much sooner than anyone else thought about promoting the GPGPU route.

AMD has followed a totally passive approach because that's the cheaper approach. I'm not saying that's a bad strategy for them, but they have not fought for the GPGPU side until very recently.
HalfAHertz: No company would invest in a product until it is financially viable to produce and there is a sufficient market for it, right?
In fact, yes. Entrepreneurial companies constantly invest in products whose viability is still in question and whose markets are small. They create the market.

There's nothing wrong with being one of the followers; just give credit where credit is due. And IMO AMD deserves none.
Steevo: And while a different approach was taken by ATI for years, they still had top performers in most fields, and still pioneered GPU compute with their early X series of cards.
They have had top performers in gaming. Other than that, Nvidia has been way ahead in professional markets.

And AMD did not pioneer GPGPU. It was a group at Stanford who did it, and yes, they used X1900 cards, and yes, AMD collaborated, but that's far from pioneering it, and it was not really GPGPU; it mostly used DX and OpenGL for doing math. By the time that was happening, Nvidia had already been working on GPGPU in their architecture for years, as can be seen with the launch of G80 only a few months after the introduction of the X1900.
Steevo: I am excited to get both on a more common platform though, and as much as I like my 5870 I have been wanting a green card for better GTA performance.
That for sure is a good thing. My comments were just about how funny it is that after so many years of AMD promoting VLIW, telling everyone and their dog that VLIW was the way to go and a much better approach, and even downplaying and mocking Fermi, they are now going to do the same thing Nvidia has been doing for years.

I already predicted this change in direction a few years ago anyway. When Fusion was first promoted I knew they would eventually move in this direction, and I also predicted that Fusion would represent a turning point in how aggressively AMD would promote GPGPU. And that's been the case. I have no love (nor hate) for AMD for this simple reason: I understand they are the underdog and need some marketing on their side too, but they always sell themselves as the good company, while doing nothing but downplay others' strategies until they are able to follow them, and they do ultimately follow them. Just a few months ago (HD6000 introduction) VLIW was the only way to go, almost literally the godsend, while Fermi was mocked as the wrong way to go. I knew it was all marketing BS, and now it's demonstrated, but I guess people have short memories, so it works for them. Oh well, all these fancy new features are NOW the way to go. And it's true, except there's nothing new about them...
#18
cadaveca
My name is Dave
They are finally getting rid of GART addressing!!! Yippie!!!

Now to wait for IOMMU support in Windows-based OS!!
#19
W1zzard
This is basically what Intel tried with Larrabee, and failed.
#20
cadaveca
My name is Dave
Huh. You know what, W1zz, that never even occurred to me. I think you're pretty darn right there.


The question remains, though... why did Larrabee really fail? I mean, they said Larrabee wouldn't get a public launch, but that it wasn't fully dead yet either... so they must have had at least some success... or this path is inevitable.
#21
Steevo
It will be complicated to keep stacks straight with a contiguous memory address space between system RAM and VRAM, much less having the GPU make the page-fault call and look up its own data out of CPU registers or straight from disk.

If they can pull it off, my hat's off to them.
#22
cadaveca
My name is Dave
Steevo: It will be complicated to keep stacks straight with a contiguous memory address space between system RAM and VRAM.
But how, really, is it any different than, say, a multi-core CPU? Or a dual-socket system using NUMA?

I mean, they can use the IOMMU for address translation. The way I see it, the GART space right now is effectively the same thing but with a limited size, so while it would be much more work for the memory controllers, I don't really see anything standing in the way other than programming.
#23
Thatguy
What I'm gathering is that they will merge the CISC/RISC/GPU and x86 designs into a mashup resembling none of them. Imagine an FPU with the width and power of stream processors. They need integer units for many things, but they can do most of this in hardware itself. This is what AMD was working towards with the Bulldozer design.
#24
RejZoR
cadaveca: Huh. You know what, W1zz, that never even occurred to me. I think you're pretty darn right there.

The question remains, though... why did Larrabee really fail? I mean, they said Larrabee wouldn't get a public launch, but that it wasn't fully dead yet either... so they must have had at least some success... or this path is inevitable.
I can tell you why. Intel wanted to make a GPU from CPUs. AMD is trying to make a CPU from GPUs. That's the main difference, and one of the reasons why AMD could possibly succeed.
#25
AsRock
TPU addict
Benetanegia: That for sure is a good thing. My comments were just about how funny it is that after so many years of AMD promoting VLIW, telling everyone and their dog that VLIW was the way to go and a much better approach, and even downplaying and mocking Fermi, they are now going to do the same thing Nvidia has been doing for years.
Maybe AMD's way is better, but it's wiser to do what nVidia started? As we all know, companies have not really supported AMD all that well. And we also know AMD doesn't have shedloads of money to get something fully supported.

Not trying to say you're wrong, just saying we don't know both sides of the story or the reasoning behind it.