# AMD Charts Path for Future of its GPU Architecture



## btarunr (Jun 17, 2011)

The future of AMD's GPU architecture looks more open: freed from the shackles of a fixed-function, DirectX-driven evolution model, and with the GPU playing a much larger role in the PC's central processing than merely accelerating GPGPU applications. At the Fusion Developer Summit, AMD detailed its future GPU architecture, revealing that in the future, AMD's GPUs will have full support for C, C++, and other high-level languages. Integrated with Fusion APUs, these new number-crunching components will be called "scalar co-processors".

Scalar co-processors will combine elements of MIMD (multiple-instruction multiple-data), SIMD (single-instruction multiple-data), and SMT (simultaneous multithreading). AMD will ditch the VLIW (very long instruction word) model that has been in use for several of AMD's past GPU architectures. While AMD's GPU model will break from a development cycle pegged to that of DirectX, the company doesn't believe that APIs such as DirectX and OpenGL will be discarded. Game developers can continue to develop for these APIs; C++ support is aimed more at general-purpose compute applications. It does, however, create a window for game developers to venture out of the API-based development model (specifically DirectX). With its next Fusion processors, the GPU and CPU components will share a truly common memory address space. Among other things, this eliminates the "glitching" players might sometimes experience when games load textures as they go over the crest of a hill.





*View at TechPowerUp Main Site*


----------



## MxPhenom 216 (Jun 17, 2011)

This is looking awesome. Exciting seeing new architecture from AMD. I want to see what NVIDIA has going on too.


----------



## dj-electric (Jun 17, 2011)

Oh gawd, this architecture better be good, AMD. I've been waiting for ages.


----------



## NC37 (Jun 17, 2011)

Hopefully it won't turn into another DX10.1. ATI does it, but NV says no, so the industry caves to NV.

'Course, this is much bigger. Saw this coming: our CPUs are gonna be replaced by GPUs eventually. Those who laughed at AMD's purchase of ATI... heh. Nice move, and I guess it makes more sense to ditch the ATI name if you're gonna eventually merge the tech even more. Oh well, I still won't ever call their discrete GPUs AMD.


----------



## Benetanegia (Jun 17, 2011)

NC37 said:


> Hopefully it won't turn into another DX10.1. ATI does it, but NV says no so the industry caves to NV.



Fermi already does most of those things, so it's quite the opposite. Many of the new features in AMD's design were implemented in G80 five years ago, or later in GT200. AMD is way behind on this, and it's almost funny to see that they are going to follow the same architectural principles Nvidia has been using for the past five years. Of course, they are going to make the jump all at once instead of doing it gradually like Nvidia did, but that's only possible thanks to Nvidia doing the hard work and opening doors for years.


----------



## Over_Lord (Jun 17, 2011)

Wow, and to think everybody had already written off HD7000 as HD6000 on 28nm with minor improvements. This is BIG!!


----------



## Benetanegia (Jun 17, 2011)

thunderising said:


> Wow, and to think everybody had already written HD7000 as HD6000 on 28nm with minor improvements, this is BIG!!



Well, this is AMD's new architecture, which does not mean it's the next chip. HD7000 is probably what it was said to be: an evolution of HD6000. Of course it could be this new architecture, but that's not very likely since HD7000 supposedly taped out some months ago.

Also the article in TechReport says:



> *While he didn't talk about specific products*, he did say this new core design will materialize inside all future AMD products with GPUs in them *over the next few years*.



Don't you think that with less than 6 months left before the HD7000 release, it would already be time to talk about specific products?







Extending to discrete GPUs is the last step, which suggests that will happen in two generations. This is for Fusion only, at least for now it seems. It's not for nothing that the new architecture is called FSA, Fusion System Architecture.


----------



## Shihab (Jun 17, 2011)

I thought Nvidia's already covered most of these _features_. 
I think I'll just wait for Kepler and Maxwell.


----------



## techtard (Jun 17, 2011)

ATI already had something like this for quite a while. It was called Stream, and it was pretty bad. AMD rebranded it as AMD APP and it is a little better, but it sounds like they are finally serious about HPC.
Either that, or they have been forced to adopt the Nvidia route due to entrenched CUDA and Nvidia-paid de-optimizations for Folding and other parallel computing.


----------



## HalfAHertz (Jun 17, 2011)

Here's the original article: 

http://www.pcper.com/reviews/Graphi...ecture-Overview-Southern-Isle-GPUs-and-Beyond

It seems this is indeed the base for the HD7000 Southern Islands architecture. This will be interesting...

From what I understand, it sounds very similar to the old SPARC HPC processors... What I'm worried about is that such a drastic design change may require an even more drastic change on the software side, which could alienate the already limited number of developers backing AMD...


----------



## Over_Lord (Jun 17, 2011)

> Well this is AMD's new architecture which does not equal being the next chip. HD7000 is probably what was said to be, an evolution of HD6000. Of course it could be this new architecture, but it's not very likely since HD7000 supposedly taped out some months ago.



So you're saying they'll showcase it to us before tapeout?


----------



## Mistral (Jun 17, 2011)

I blame Carmack for this!

Thanks Carmack...


----------



## HalfAHertz (Jun 17, 2011)

Benetanegia said:


> Fermi already does most of those things, so it's quite the opposite. Many of the new features in AMD's design were implemented in G80 5 years ago or later in GT200. AMD is way behind on this and is almost funny to see that they are going to follow the same architectural principle as Nvidia is being using for the past 5 years. Of course they are going to make the jump instead of doing it gradually like Nvidia did, but that's only posible thanks to Nvidia doing the hard work and opening doors for years.



I think you're underestimating AMD's efforts. I highly doubt they have been sitting on their thumbs all these years, relying purely on Nvidia to make all the breakthroughs. The fact that they didn't implement it straight away in their end products doesn't mean that they haven't been experimenting with such technologies internally. No company would invest in a product until it is financially viable to produce and there is a sufficient market for it, right?


----------



## Pijoto (Jun 17, 2011)

Benetanegia said:


> Well this is AMD's new architecture which does not equal being the next chip. HD7000 is probably what was said to be, an evolution of HD6000. Of course it could be this new architecture, but it's not very likely since HD7000 supposedly taped out some months ago.



I was holding out for the HD7000 series for an upgrade, but now I should probably wait for the HD8000 series instead for the new architecture changes... my Radeon 4650 barely runs some newer games at 720p.


----------



## RejZoR (Jun 17, 2011)

Is it just me, or is this a way for AMD to run away from x86 by executing high-level languages directly on the GPU? Though I have no idea if this thing relies on x86 or is a whole thing of its own.


----------



## Steevo (Jun 17, 2011)

Benetanegia said:


> Fermi already does most of those things, so it's quite the opposite. Many of the new features in AMD's design were implemented in G80 5 years ago or later in GT200. AMD is way behind on this and is almost funny to see that they are going to follow the same architectural principle as Nvidia is being using for the past 5 years. Of course they are going to make the jump instead of doing it gradually like Nvidia did, but that's only posible thanks to Nvidia doing the hard work and opening doors for years.



And while a different approach was taken by ATI for years, they still had top performers in most fields, and still pioneered GPU compute with their early X series of cards.


I am excited to get both on a more common platform though, and as much as I like my 5870 I have been wanting a green card for better GTA performance.


----------



## theeldest (Jun 17, 2011)

HalfAHertz said:


> Here's the original article:
> 
> http://www.pcper.com/reviews/Graphi...ecture-Overview-Southern-Isle-GPUs-and-Beyond
> 
> ...




As I understand it, it should be just the opposite. They're working to make using the GPU transparent to developers. Microsoft was showing off C++ AMP at the conference, where you can use the same executable and run it on a CPU, integrated GPU, or discrete GPU with no changes.


----------



## Benetanegia (Jun 17, 2011)

HalfAHertz said:


> I think you're underestimating AMD's efforts. I highly doubt they have been sitting idly on their thumbs all these years relying purely on Nvidia to make all the breakthroughs  The fact that they didn't implement it straight away into their end-products doesn't mean that they haven't been experimenting with such technologies internally.



Nvidia has been much more in contact with their GPGPU customers, asking what they needed and implementing it. And once it was implemented and tested, asking what's next and implementing that too. They have been getting the answers, and now AMD only has to implement them. Nvidia has also been investing a lot in universities to teach and promote GPGPU for a very long time, much sooner than anyone else thought about promoting the GPGPU route.

AMD has followed a totally passive approach because that's the cheaper approach. I'm not saying that's a bad strategy for them, but they have not fought for the GPGPU side until very recently.



> No company would invest in a product until it is financially viable to produce and there is a sufficient market for it, right?



In fact, yes. Entrepreneurial companies constantly invest in products whose viability is still in question and whose markets are small. They create the market.

There's nothing wrong with being one of the followers, just give credit where credit is due. And IMO AMD deserves none.



Steevo said:


> And while a different approach has been taken bu ATI for years they still had top performers in most fields, and still pioneered the GPU compute with their early X series of cards.



They have had top performers in gaming. Other than that, Nvidia has been way ahead in professional markets.

And AMD did not pioneer GPGPU. It was a group at Stanford who did it, and yes, they used X1900 cards, and yes, AMD collaborated, but that's far from pioneering it, and it was not really GPGPU; it mostly used DX and OpenGL for doing math. By the time that was happening, Nvidia had already been working on GPGPU in their architecture for years, as can be seen with the launch of G80 only a few months after the introduction of X1900.



> *I am excited to get both on a more common platform* though, and as much as I like my 5870 I have been wanting a green card for better GTA performance.



That for sure is a good thing. My comments were just about how funny it is that, after so many years of AMD promoting VLIW and telling everyone and their dog that VLIW was the way to go and a much better approach, even downplaying and mocking Fermi, well, they are going to do the same thing Nvidia has been doing for years.

I already predicted this change in direction a few years ago anyway. When Fusion was first promoted, I knew they would eventually move in this direction, and I also predicted that Fusion would mark a turning point in how aggressively AMD would promote GPGPU. And that's been the case. I have no love (nor hate) for AMD, for this simple reason: I understand they are the underdog and need some marketing on their side too, but they always sell themselves as the good company while doing nothing but downplaying others' strategies until they are able to follow them, and they do ultimately follow them. Just a few months ago (at the HD6000 introduction) VLIW was the only way to go, almost literally a godsend, while Fermi was mocked as the wrong way to go. I knew it was all marketing BS, and now it's been demonstrated, but I guess people have short memories, so it works for them. Oh well, all these fancy new features are NOW the way to go. And it's true, except there's nothing new about them...


----------



## cadaveca (Jun 17, 2011)

They are finally getting rid of GART addressing!!! Yippie!!!

Now to wait for IOMMU support in Windows-based OS!!


----------



## W1zzard (Jun 17, 2011)

this is basically what intel tried with larrabee and failed


----------



## cadaveca (Jun 17, 2011)

Huh. You know what, W1zz, that never even occurred to me. I think you're pretty darn right there.


The question remains, though... why did Larrabee really fail? I mean, they said Larrabee wouldn't get a public launch, but it wasn't fully dead yet either... so they must have had at least some success... or this path is inevitable.


----------



## Steevo (Jun 17, 2011)

It will be complicated to keep stacks straight with a contiguous memory address space spanning system RAM and VRAM, much less having the GPU make the page-fault call and look up its own data out of CPU registers or straight from disk.

If they can pull it off my hats off to them.


----------



## cadaveca (Jun 17, 2011)

Steevo said:


> It will be complicated to keep stacks straight with a contiguous memory address space between system and vram.



But how, really, is it any different from, say, a multi-core CPU? Or a dual-socket system using NUMA?

I mean, they can use the IOMMU for address translation. The way I see it, the GART space right now is effectively the same thing, just with a limited size. So while it would be much more work for the memory controllers, I don't really see anything standing in the way other than programming.


----------



## Thatguy (Jun 17, 2011)

What I'm gathering is that they will merge the CISC/RISC/GPU and x86 designs into a mashup resembling none of them. Imagine an FPU with the width and power of stream processors. They need INT for many things, but they can do most of this in hardware itself. This is what AMD was working towards with the Bulldozer design.


----------



## RejZoR (Jun 17, 2011)

cadaveca said:


> Huh. You know what W1zz, that never even occured to me. I think you're pretty darn right there.
> 
> 
> The question remains though...why did Larrabee really fail? I mean, they said Larrabee wouldn't get a public launch, but wasn't fully dead yet either...so they must ahve had at least some success...or this path is inevitible.



I can tell you why. Intel wanted to make a GPU from CPUs. AMD is trying to make a CPU from GPUs. That's the main difference, and one of the reasons why AMD could actually succeed.


----------



## AsRock (Jun 17, 2011)

Benetanegia said:


> That for sure is a good thing. My comments were just regarding how funny it is that after so many years of AMD promoting VLIW and telling everyone and dog that VLIW was the way to go and a much better approach. Even downplaying and mocking Fermi, well they are going to do the same thing Nvidia has been doing for years.



Maybe AMD's way is better, but it's wiser to do what Nvidia started? As we all know, companies have not really supported AMD all that well. And we also know AMD doesn't have shedloads of money to get something fully supported.

Not trying to say you're wrong, just saying we don't know both sides of the story or the reasoning behind it.


----------



## Casecutter (Jun 17, 2011)

Nvidia bought a physics company... AMD bought a graphics company. So yes, it makes sense that Nvidia wanted to get, and got, a lead. They understandably kept it (as much as they could) as their proprietary intellectual property.

AMD got in on the graphics side, dusted off ATI and got them back in contention, all along wanting to achieve this. It just takes time, and the research is bearing fruit.

The best part for us is that AMD appears to be sticking to open specifications, and that will really make more developers want in.


----------



## bucketface (Jun 17, 2011)

This could potentially lead to CUDA becoming open. If AMD can get enough support from developers, Nvidia will have to open it or risk seeing it fall by the wayside in favor of something that supports both.


----------



## Benetanegia (Jun 17, 2011)

W1zzard said:


> this is basically what intel tried with larrabee and failed



I always thought they failed because they didn't have good enough graphics drivers, and as an accelerator it would not be financially viable. I remember reading that for HPC Larrabee was good enough, but you know better than me that in order for these big chips to be viable, you need the consumer market to get some volume, refine the process, bin chips, etc. Even if it's a small market like the enthusiast GPU market, with less than 1 million cards sold, that's far more than the tens of thousands of HPC cards you can sell. At least for now. Maybe in some years, with more demand, it would make sense to create a different chip for HPC, but then again the industry is moving in the opposite direction, and I think it's the right direction.



bucketface said:


> this could potentially lead to cuda becoming open, if AMD can get enough support from developers, they'll have to or risk seeing it fall to the way side in favor of somthing that supports both.



Eh? No. CUDA will most probably disappear sometime in the future, when OpenCL catches on. OpenCL is 95% similar to CUDA anyway, if you believe CUDA/OpenCL developers, and it's free, so Nvidia doesn't gain anything from the use of CUDA. It's not going anywhere right now, and probably not in 1 or 2 years either, because Nvidia keeps updating CUDA and stays way ahead on features (the advantage of not depending on standardization by a consortium). At some point it should stagnate, and OpenCL should be able to catch up, even if its evolution depends on the Khronos group.


----------



## pantherx12 (Jun 17, 2011)

Benetanegia said:


> AMD has followed a totally passive approach because that's the cheaper approach. I'm not saying that's a bad strategy for them, but they have not fought for the GPGPU side until very recently.




No they haven't, man, they just don't bang on about it.

They talk directly to developers and have had a forum running for years where people can communicate about it.

Go on the AMD developer forums to see : ]


----------



## bucketface (Jun 17, 2011)

Benetanegia said:


> Eh? No. CUDA will dissapear sometime in the future most probably, when OpenCL caches on. OpenCL is 95% similar to CUDA anyway, if you have to believe CUDA/OpenCL developers and it's free so Nvidia doesn't gain anything from the use of CUDA. It will not go anywhere now and it's not going to be in 1 or 2 years probably, because Nvidia keeps updating CUDA every now and then and stays way ahead with more features (the advantage of not depending on stardardization by a consortium). At some point it should stagnate and OpenCL should be able to catch up, even if it's evolution depends on the Khronos group.



All I was saying is that if Nvidia plans on seeing CUDA through the next 5 years or so, they'll almost certainly have to open it up. I don't know the specifics of CUDA vs. OpenCL, but my understanding was that CUDA, as it stands, is the more robust platform.


----------



## St.Alia-Of-The-Knife (Jun 17, 2011)

"Full GPU support of C, C++ and other high-level languages"

I know that the GPU is way faster than the CPU,
so does this mean the GPU will replace the CPU in common tasks too?


----------



## seronx (Jun 17, 2011)

1. The architecture explained in this diagram is the HD 7000

VLIW5 -> VLIW4 -> ACE or CU








http://www.realworldtech.com/forums/index.cfm?action=detail&id=120431&threadid=120411&roomid=2


> Name: David Kanter  6/15/11
> 
> Dan Fay  on 6/14/11 wrote:
> ---------------------------
> ...



http://www.realworldtech.com/

^ wait for the article there

2. In two years you will see this GPU in the Z-series APU (the tablet APU)



St.Alia-Of-The-Knife said:


> "Full GPU support of C, C++ and other high-level languages"
> 
> I know that the GPU is way faster than the CPU,
> so does this mean the GPU will replace the CPU in common tasks too?



In the cloud future, yes; the CPU will only need to issue commands



Benetanegia said:


> Fermi already does most of those things, so it's quite the opposite. Many of the new features in AMD's design were implemented in G80 5 years ago or later in GT200. AMD is way behind on this and is almost funny to see that they are going to follow the same architectural principle as Nvidia is being using for the past 5 years. Of course they are going to make the jump instead of doing it gradually like Nvidia did, but that's only posible thanks to Nvidia doing the hard work and opening doors for years.



AMD GPUs have been GPGPU-capable since the high-end GPUs could do DP (double precision)

This architecture just allows a bigger jump (ahead of Kepler)



NC37 said:


> Hopefully it won't turn into another DX10.1. ATI does it, but NV says no so the industry caves to NV.
> 
> Course this is much bigger. Saw this coming. Our CPUs are gonna be replaced by GPUs eventually. Those who laughed at AMD's purchase of ATI...heh. Nice move and I guess it makes more sense to ditch the ATI name if you are gonna eventually merge the tech even more. Oh well, I still won't ever call their discrete GPUs AMD.



Nvidia was very late; some late 200-series cards can do DX10.1, but not very well



Benetanegia said:


> Nvidia has been much more in contact with their GPGPU customers, asking what they needed and implementing it. And once it was inplemented and tested, by asking what's next and implementing that too. They have been getting the answers and now AMD only had to implement those. Nvidia has been investing a lot in universities to teach and promote GPGPU for a very long time too. Much sooner than anyone else thought about promoting the GPGPU route.
> 
> AMD has followed a totally passive approach because that's the cheaper approach. I'm not saying that's a bad strategy for them, but they have not fought for the GPGPU side until very recently.
> 
> ...



The reason they are changing is not the GPGPU issue; it's more the scaling issue:

Theoretical -> Realistic
Performance didn't scale correctly






It's all over the place. Well, scaling is a GPGPU issue, but this architecture will at least allow for better scaling ^


----------



## Neuromancer (Jun 17, 2011)

Wow how times are turning backwards.  

I got me a new math co-processor!


----------



## Sapientwolf (Jun 18, 2011)

St.Alia-Of-The-Knife said:


> "Full GPU support of C, C++ and other high-level languages"
> 
> i know that the GPU is way faster than the CPU,
> so does this mean that GPU will replace the CPU in common tasks also??



The GPU is faster than the CPU at arithmetic operations that can occur in parallel (like video and graphics). The CPU is much faster at sequential logic. The CPU has been tailored toward its area, and the GPU to its own as well. However, we now see the gray area between the two growing more and more. So AMD is working hard to make platforms in which the CPU can offload highly parallel arithmetic loads to their GPUs, and to make it easier for programmers to program their GPUs outside the realm of DirectX and OpenGL.

One will not replace the other; they will merge, and instructions will be executed on the hardware best suited for the job.


----------



## Hayder_Master (Jun 18, 2011)

So they're pointing to a big improvement in performance, and only benchmarks can prove it.


----------



## Disruptor4 (Jun 18, 2011)

Hayder_Master said:


> So they point to big improve in performance and only benchmarks can prove it.



Well, probably not only benchmarks. You will see a decrease in the time it takes to process certain things, similar to how decoding and encoding can be done on the GPU in certain programs.


----------



## Thatguy (Jun 18, 2011)

Sapientwolf said:


> The GPU is faster than the CPU at arithmetic operations that can occur in parallel (like video and graphics). The CPU is much faster at sequential logic. The CPU has been tailored toward its area, and the GPU to its own as well. However, we now see the gray area between the two growing more and more. So AMD is working hard to make platforms in which the CPU can offload highly parallel arithmetic loads to their GPUs, and to make it easier for programmers to program their GPUs outside the realm of DirectX and OpenGL.
> 
> One will not replace the other; they will merge, and instructions will be executed on the hardware best suited for the job.



The decoder will handle this job, more than likely.


----------



## xtremesv (Jun 18, 2011)

The future is fusion, remember? CPU and GPU becoming one. It's going to happen, I believe it, but are we really gonna "own" it?

These days cloud computing is starting to make some noise, and it makes sense: average guys/gals are not interested in FLOPS performance; they just want to listen to their music, check Facebook and play some fun games. What I'm saying is that in the future we'll only need a big touch screen with a mediocre ARM processor to play Crysis V. The processing, you know, the GPGPU, the heat and stuff, will be somewhere in China; we'll be given just what we need, the final product, through a huge broadband pipe. If you've seen the movie WALL-E, think about living on that spaceship, the Axiom. It'd be something like that... creepy, eh?


----------



## Wile E (Jun 18, 2011)

You know, AMD has always had these great GPU hardware features (the ability to crunch numbers, like in physics), then promises us great software to run on them (GPU-accelerated Havok, anyone?), but the software never materializes.

I'll get excited about this when it is actually being implemented by devs in products I can use.


----------



## seronx (Jun 18, 2011)

Well, by 2013

the APU
with

Enhanced Bulldozer + Graphics Core Next

will be in perfect unison,

and with

2013's
FX + AMD Radeon 9900 series
Next-Gen Bulldozer + Next-Gen Graphics Core Next

and DDR4 + PCI-e 3.0, it will equal MAXIMUM POWUH!!!


----------



## pantherx12 (Jun 18, 2011)

Wile E said:


> You know, AMD has always had these great gpu hardware features (the ability to crunch numbers, like in physics), then promises us great software to run on it (GPU accelerated Havok anyone?), but the software never materializes.
> 
> I'll get excited about this when it is actually being implemented by devs in products I can use.



I know it's only one thing, but 3DMark 11 does soft-body simulation on the GPU on both AMD and Nvidia cards.

Only one thing, but it does point to things to come, I think.


----------



## W1zzard (Jun 18, 2011)

Wile E said:


> You know, AMD has always had these great gpu hardware features (the ability to crunch numbers, like in physics), then promises us great software to run on it (GPU accelerated Havok anyone?), but the software never materializes.



yup.. I don't think AMD has successfully brought any software feature to market.. maybe x86_64, if you count Intel adopting it



Benetanegia said:


> that in order for these big chips to be viable, you need the consumer market in order to have some volume and refine the process, bin chips, etc. Even if it's an small market like the enthusiast GPU market, with less than 1 million cards sold, that's far more than the 10's of thousands HPC cards you can sell. At least for now. Maybe in some years, with more demand, it would make sense to create a different chip for HPC, but then again the industry is moving in the opposite direction, and I think it's the right direction.



I agree, but why does AMD waste their money on useless computation features that apparently have nowhere to go other than video encoding and some HPC apps?
If there were some killer application for GPU computing, wouldn't Nvidia/CUDA have found it by now?


----------



## RejZoR (Jun 18, 2011)

And even that hyped video encoding is mostly done on the CPU, which makes it nearly useless, as it's not much faster than pure CPU encoding anyway. They were bragging about physics as well, but they never delivered. Not with Havok, not with Bullet, not with anything. I mean, if you want to make something, endorse it, embrace it, give developers a reason to develop and build on that feature.
Instead they announce it, brag about it, and then we can all forget about it, because it'll never happen.
They should invest those resources into more productive things instead of wasting them on such useless stuff.

The only thing they pulled off properly is MLAA, which uses shaders to post-process the screen and anti-alias it. It works great in pretty much 99.9% of games, is what they promised, and I hope they won't remove it like they did most of their features (Temporal AA, TruForm, SmartShaders, etc.). Sure, some technologies became redundant, like TruForm, but others just died because AMD didn't bother to support them. SmartShaders were a good example. HDRish was awesome, giving old games a fake HDR effect that looked pretty good. But it worked only in OpenGL, and someone else had to make it. AMD never added anything useful for D3D, which is what most games use. So what's the point?! They should really get their act together and stop wasting time and resources on useless stuff, and start making cool features that can last. Like, again, MLAA.


----------



## swaaye (Jun 18, 2011)

ATI's support of GPGPU hasn't been as great as some say here. OpenCL support only goes back to HD4000 because older chips have limitations that make it basically infeasible. In other words HD3000 and 2000 are very poor GPGPU chips. X1900 isn't really even worth mentioning.

You can on the other hand run CUDA on old G80. NV has definitely been pushing GPGPU harder.

On the other, other hand however I can't say that GPGPU affects me whatsoever. I think AMD is mostly after that Tesla market and Photoshop filters. I won't be surprised if this architecture is less efficient for graphics. I sense a definite divergence from just making beefier graphics accelerators. NV's chips have proven with their size that GPGPU features don't really mesh with graphics speed.


----------



## Thatguy (Jun 18, 2011)

xtremesv said:


> The future is fusion, remember? CPU and GPU becoming one, it's going to happen, I believe it, but are we gonna really "own" it?
> 
> These days cloud computing is starting to make some noice and it makes sense, average guys/gals are not interested in FLOPS performance, they just want to listen to their music, check facebook and play some fun games. What I'm saying is that in the future we'll only need a big touch screen with a mediocre ARM processor to play Crysis V. The processing, you know GPGPU, the heat and stuff, will be somewhere in China, we'll be given just what we need, the final product through a huge broadband. If you've seen the movie Wall-E, think about living in that spaceship Axiom, it'd be something like that... creepy, eh?




Even at light speed the latencies will kill ya; there is no way around client-side power. Resist the cloud, it's bullshit anyway.


----------



## Thatguy (Jun 18, 2011)

W1zzard said:


> yup .. i dont think amd has successfully brought any software feature to market.. maybe x86_64 if you count intel adopting it
> 
> 
> 
> ...



Because soon enough the hardware will do the work anyway. It's not always about software. As for Nvidia, they painted themselves into a corner years ago.


----------



## Thatguy (Jun 18, 2011)

RejZoR said:


> And even that hyped video encoding is mostly done on the CPU, which makes it utterly useless, as it's not much faster than a pure-CPU encode anyway. They bragged about physics as well, but they never delivered. Not with Havok, not with Bullet, not with anything. I mean, if you want to make something, endorse it, embrace it, give developers a reason to develop and build on that feature.
> Instead they announce it, brag about it, and then we can all forget about it, because it'll never happen.
> They should invest those resources into more productive things instead of wasting them on such useless stuff.
> 
> The only thing they pulled off properly is MLAA, which uses shaders to post-process the screen and anti-alias it. It works great in pretty much 99.9% of games, is what they promised, and I hope they won't remove it like they did with most of their features (Temporal AA, TruForm, SmartShaders, etc.). Sure, some technologies became redundant, like TruForm, but others just died because AMD didn't bother to support them. SmartShaders were a good example: HDRish was awesome, giving old games a fake HDR effect that looked pretty good, but it worked only in OpenGL and someone else had to make it. AMD never added anything useful for D3D, which is what most games use. So what's the point?! They should really get their act together, stop wasting time and resources on useless stuff, and start making cool features that last. Like, again, MLAA.




They should call D3D "round the bend, down the street, up the alley, over 2 blocks, and in the ditch" 3D, because it sure as shit ain't direct. AMD will move away from DirectX; they see where the market is headed.
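On the MLAA point in the quote above: morphological AA is a shader post-process that first finds luminance discontinuities and then blends along the detected edge shapes. A toy 1-D sketch of just the detection step (plain C++, purely illustrative; real MLAA works on 2-D images and also does the shape classification and blending):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Toy 1-D version of MLAA's first step: mark positions where neighbouring
// pixel luminances differ by more than a threshold (an "edge"). Real MLAA
// then classifies edge shapes and blends along them; this sketch only detects.
std::vector<int> detect_edges(const std::vector<float>& luma, float threshold) {
    std::vector<int> edges;
    for (std::size_t i = 1; i < luma.size(); ++i)
        if (std::fabs(luma[i] - luma[i - 1]) > threshold)
            edges.push_back(static_cast<int>(i));  // edge between pixels i-1 and i
    return edges;
}
```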


----------



## W1zzard (Jun 18, 2011)

Thatguy said:


> They should call D3D "round the bend, down the street, up the alley, over 2 blocks, and in the ditch" 3D, because it sure as shit ain't direct. AMD will move away from DirectX; they see where the market is headed.



the market is headed toward console games that are directx (xbox360) and that get recompiled with a few clicks for pc to maximize developer $$


----------



## RejZoR (Jun 18, 2011)

Exactly. If they invent something new and don't push it properly, as has happened before, they are just plain stupid. DirectX is the way to go at the moment, mostly because of what W1z said: profit.


----------



## Thatguy (Jun 18, 2011)

W1zzard said:


> the market is headed toward console games that are directx (xbox360) and that get recompiled with a few clicks for pc to maximize developer $$



If you say so. I think you're off base here, and the Microsoft design will cause huge problems downstream. The company that's ready for tomorrow will be the winner tomorrow.


----------



## cadaveca (Jun 18, 2011)

Thatguy said:


> If you say so. I think you're off base here, and the Microsoft design will cause huge problems downstream. The company that's ready for tomorrow will be the winner tomorrow.



You MUST MUST keep in mind that all of this is business, and as such, the future of technology is heavily influenced by the businesses behind it. The least amount of work that brings in the most dollars is what WILL happen, without a doubt, as this is the nature of business.


What needs to be done is for someone to effectively show why other options make more sense, not from a technical standpoint, but from a business standpoint.

And like mentioned, none of these technologies AMD/ATI introduced over the years really seem to make much business sense, and as such, they fail hard.


AMD's board now seems to realize this. Dirk was dumped, and Bulldozer "delayed", simply because that made the MOST business sense: they met the market demand, and rightly so, as market demand for those products is so high that they have no choice but to delay the launch of Bulldozer.

Delaying a new product because an existing one is in high demand makes good business sense.


----------



## swaaye (Jun 18, 2011)

What I see is AMD selling all of their consumer CPUs under $200, even their 6 core chips. They need new CPU tech that they can get some better margins on. Intel charges 3-4x more for their 6 core chips because they have clear performance dominance.

Buying ATI was a good move because both AMD and NV are now obviously trying to bypass Intel's dominance by creating a new GPU compute sector. I'm not sure if that will ever benefit the common user though because of the limited types of computing that work well with GPUs.

Also, Llano and Brazos are redefining the low end in a way Intel never bothered to, so that's interesting too.


----------



## Wile E (Jun 19, 2011)

Thatguy said:


> Becuase soon enough the hardware will do the work anyways. Its not always about software. As to Nvidia, they painted themselves into a corner years ago.



Hardware needs software to operate. This comment doesn't even make any sense.


----------



## Thatguy (Jun 19, 2011)

Wile E said:


> Hardware needs software to operate. This comment doesn't even make any sense.



Sure it does. What if the CPU scheduler and decoder know how to break workloads across INT, FPU, VLIW-style units, etc.? If it gets smart enough, and there's no reason it can't, then the OS just sees plain x86 while the underlying microarchitecture handles a lot of the heavy lifting. If you don't see the genius behind Bulldozer, you're looking in the wrong places. How hard would it be for AMD to introduce VLIW-like elements into that modular core design? Not terrifically hard. Better believe this is the way forward; traditional x86 is dead.
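Purely as a thought experiment, the kind of front-end routing being described could be caricatured like this (hypothetical sketch: no shipping AMD decoder or driver works this way, and `route` and its string tags are made up for illustration):

```cpp
#include <string>

// Hypothetical workload classes a "smart" front end might recognise.
enum class Unit { Int, Fpu, Vector };

// Toy decoder: classify an operation tag and route it to an execution unit.
// A real decoder inspects opcode bits, not strings; everything here is made
// up purely to illustrate the routing idea, not any actual AMD design.
Unit route(const std::string& op) {
    if (op == "add" || op == "cmp") return Unit::Int;    // scalar integer work
    if (op == "fadd" || op == "fmul") return Unit::Fpu;  // scalar FP work
    return Unit::Vector;                                 // data-parallel work
}
```

The counter-argument in the replies below is that nothing at this level can tell, transparently, whether a given stream of x86 instructions is worth moving to a wide data-parallel unit; that information has to come from software.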


----------



## bucketface (Jun 19, 2011)

RejZoR said:


> And even that hyped video encoding is mostly done on CPU which makes it utterly useless as it's not much faster than pure CPU anyway. They were bragging about physics as well but they never made them. Not with Havok, not with Bullet, not with anything. I mean if you want to make something, endorse it, embrace it, give developers a reason to develop and build on that feature.
> In stead they announce it, brag about it and then we can all forget about it as it'll never happen.
> They should invest those resources into more productive things instead of wasting them on such useless stuff.
> 
> Only thing that they pulled off properly is MLAA which uses shaders to process screen and anti-alias it. It functions great in pretty much 99,9% of games, is what they promised and i hope they won't remove it like they did with most of their features (Temporal AA, TruForm, SmartShaders etc). Sure some technologies got redundant like TruForm, but others just died because AMD didn't bother to support them. SmartShaders were good example. HDRish was awesome, giving old games a fake HDR effect which looked pretty good. But it worked only in OpenGL and someone else had to make it. AMD never added anything useful for D3D which is what most of the games use. So what's the point!?!?! They should really get their stuff together and stop wasting time and resources on useless stuff and start making cool features that can last. Like again, MLAA.



Most games these days use at least parts of the Havok, Bullet, or similar libraries. Resident Evil 5 and Company of Heroes are two that mention use of Havok on the box; Bad Company 2 used parts of Havok or Bullet? Most physics come from these libraries; it's a lot easier for devs than writing their own.
(The below is in reply to someone above; I'm not sure how relevant it is, but it's true nonetheless.)
The whole "do what makes the most money now and we'll deal with the consequences later" ideology is why the American economy is in the state that it is. Companies are like children: they want the candy, and lots of it, now, but then they make themselves sick because they had too much. A responsible parent regulates them, no matter how big a tantrum they throw, because they know that cleaning up the resulting mess if they let them do as they please is much worse. Just saying companies will do what makes the biggest short-term gains regardless of the long-term consequences doesn't help you or me see better games.


----------



## Damn_Smooth (Jun 19, 2011)

Speaking of AMD's graphics future this is a long, but interesting read.

http://www.anandtech.com/show/4455/amds-graphics-core-next-preview-amd-architects-for-compute



> Graphics Core Next (GCN) is the architectural basis for AMD’s future GPUs, both for discrete products and for GPUs integrated with CPUs as part of AMD’s APU products. AMD will be instituting a major overhaul of its traditional GPU architecture for future generation products in order to meet the direction of the market and where they want to go with their GPUs in the future.


----------



## HTC (Jun 19, 2011)

Damn_Smooth said:


> Speaking of AMD's graphics future this is a long, *but interesting read*.
> 
> http://www.anandtech.com/show/4455/amds-graphics-core-next-preview-amd-architects-for-compute



Totally agree!


----------



## Wile E (Jun 20, 2011)

Thatguy said:


> Sure it does. What if the CPU scheduler and decoder know how to break workloads across INT, FPU, VLIW-style units, etc.? If it gets smart enough, and there's no reason it can't, then the OS just sees plain x86 while the underlying microarchitecture handles a lot of the heavy lifting. If you don't see the genius behind Bulldozer, you're looking in the wrong places. How hard would it be for AMD to introduce VLIW-like elements into that modular core design? Not terrifically hard. Better believe this is the way forward; traditional x86 is dead.



There is no way to do it transparently to the OS. You still need software to tell the scheduler what type of info is coming down the pipeline. It will require a driver at minimum.


----------



## Thatguy (Jun 20, 2011)

Wile E said:


> There is no way to do it transparently to the OS. You still need software to tell the scheduler what type of info is coming down the pipeline. It will require a driver at minimum.



Why? The driver makes up for the lack of logic on the chip.


----------



## Wile E (Jun 20, 2011)

If they were capable of giving a chip that kind of logic at this point, we would have things like multi-GPU graphics cards that show up to the OS as a single GPU.

We aren't anywhere near chips being able to independently determine data type and scheduling like that.


----------



## Thatguy (Jun 20, 2011)

Wile E said:


> If they were capable of giving a chip that kind of logic at this point, we would have things like multi-GPU gfx cards that show up to the OS as a single gpu.
> 
> We aren't anywhere near the chips being able to independently determine data type and scheduling like that.



What do you think all this APU nonsense is about ? Popcorn on tuesdays ?


----------



## jagd (Jun 20, 2011)

Havok is different from the others. Intel bought Havok and decided to use it as a software physics API to advertise Intel CPUs. I don't think anyone would have done anything differently from what AMD did about this; no one would use a software API when you could do it in hardware.



Wile E said:


> You know, AMD has always had these great gpu hardware features (the ability to crunch numbers, like in physics), then promises us great software to run on it (GPU accelerated Havok anyone?), but the software never materializes.
> 
> I'll get excited about this when it is actually being implemented by devs in products I can use.






W1zzard said:


> yup .. i dont think amd has successfully brought any software feature to market.. maybe x86_64 if you count intel adopting it





I see cloud computing as a renamed version of the old terminal/thin-client and server concept. With online gaming the problem is the connection more than the hardware; you'll need a rock-stable connection, which is something hard to find: http://en.wikipedia.org/wiki/Thin_client


xtremesv said:


> The future is fusion, remember? CPU and GPU becoming one, it's going to happen, I believe it, but are we gonna really "own" it?
> 
> These days cloud computing is starting to make some noise, and it makes sense: average guys/gals are not interested in FLOPS performance, they just want to listen to their music, check Facebook and play some fun games. What I'm saying is that in the future we'll only need a big touch screen with a mediocre ARM processor to play Crysis V. The processing, you know, GPGPU, the heat and stuff, will be somewhere in China; we'll be given just what we need, the final product, through a huge broadband pipe. If you've seen the movie Wall-E, think about living in that spaceship Axiom, it'd be something like that... creepy, eh?


----------



## zpnq (Jun 21, 2011)

from the amd developers conference 

http://hothardware.com/News/Microsoft-Demos-C-AMP-Heterogeneous-Computing-at-AFDS/

http://www.pcper.com/reviews/Graphi...ecture-Overview-Southern-Isle-GPUs-and-Beyond


----------



## a_ump (Jun 21, 2011)

Thatguy said:


> What do you think all this APU nonsense is about ? Popcorn on tuesdays ?



It's about getting a CPU and GPU into one package, one die, and eventually one chip, which will be way more cost-effective than two separate chips. Oh, and taking over the entry-level/low end of the market from Intel.

That's what that APU common sense is about.




Damn_Smooth said:


> Speaking of AMD's graphics future this is a long, but interesting read.
> 
> http://www.anandtech.com/show/4455/amds-graphics-core-next-preview-amd-architects-for-compute



Very nice find, sir. I want to read it all, but I might have to bookmark it.


----------



## pantherx12 (Jun 21, 2011)

zpnq said:


> from the amd developers conference
> 
> http://hothardware.com/News/Microsoft-Demos-C-AMP-Heterogeneous-Computing-at-AFDS/
> 
> http://www.pcper.com/reviews/Graphi...ecture-Overview-Southern-Isle-GPUs-and-Beyond



Cool!

They better release that demo to the public!


----------



## Thatguy (Jun 21, 2011)

a_ump said:


> its about getting a CPU and GPU into one package, once die, eventually one chip that'll be way more cost effective than 2 separate chips. Oh, and taking over the entry/low end of the market from Intel.
> 
> That's what that APU common sense is about




Long range, it's about coming to grips with serial processing and the lack of compute power you get from it.


----------



## a_ump (Jun 21, 2011)

Thatguy said:


> Long range, it's about coming to grips with serial processing and the lack of compute power you get from it.



You're talking about GCN then. I was talking short range .

Honestly, I definitely think AMD is going to take a leap in innovation over Nvidia these next 5 years or so. I really do think AMD's experience with CPU's is going to pay off when it comes to integrating compute performance in their GPU...well APU. Nvidia has the lead right now, but i can see AMD loosening that grip.


----------



## Thatguy (Jun 21, 2011)

a_ump said:


> You're talking about GCN then. I was talking short range .
> 
> Honestly, I definitely think AMD is going to take a leap in innovation over Nvidia these next 5 years or so. I really do think AMD's experience with CPU's is going to pay off when it comes to integrating compute performance in their GPU...well APU. Nvidia has the lead right now, but i can see AMD loosening that grip.



I don't think Nvidia is going to get much further than they have thus far. AMD is set to put a whooping on Intel and Nvidia in that area. Given the limits of IPC and clock speed, it's the only way to get where they need to go in the first place.


----------



## Wile E (Jun 22, 2011)

Thatguy said:


> What do you think all this APU nonsense is about ? Popcorn on tuesdays ?



A fancy name for using GPU shaders to accelerate programs. AKA: the same shit we already have for gfx cards in the way of CUDA/whatever Stream was renamed to.

I would bet money this is not hardware based at all, and requires special software/drivers to work properly.


----------



## pantherx12 (Jun 22, 2011)

Wile E said:


> A fancy name for using GPU shaders to accelerate programs. AKA: the same shit we already have for gfx cards in the way of CUDA/whatever Stream was renamed to.
> 
> I would bet money this is not hardware based at all, and requires special software/drivers to work properly.



I bet it is hardware based. It's not just a fancy name, though; it's GPU shaders in the CPU (or next to it, in this case), meaning your CPU/GPU (APU) can handle all the physics while your GPU focuses on being a graphics card.

Or, if all of AMD's CPUs go this way, it means people don't have to buy a GPU straight away, which is also nice.


----------



## Thatguy (Jun 22, 2011)

Wile E said:


> A fancy name for using GPU shaders to accelerate programs. AKA: the same shit we already have for gfx cards in the way of CUDA/whatever Stream was renamed to.
> 
> I would bet money this is not hardware based at all, and requires special software/drivers to work properly.



No, it's about compute power. These first-generation APUs are about figuring out the transistor-level basics: how to put both sets of transistors on the same piece of silicon. The next step will be more transistors on both sides, CPU and GPU, and the step beyond that an integration of x86 CPU logic and GPU parallelism, which will give AMD a massive advantage over Nvidia and Intel in compute power and heavy workloads.

AMD got it right 6 years ago when they started down this road; that's why Bulldozer is modular.


----------



## Wile E (Jun 25, 2011)

pantherx12 said:


> I bet it is* hardware based*, it's not just a fancy name though, it's gpu shaders in the cpu ( or next to in this case) meaning your cpu/gpu (apu) can handle all the physics and your gpu can focus on being a graphics card.
> 
> Or if all of AMDs cpus go this way, means people don't have to buy a gpu straight away which is also nice.



No it isn't. It's basically a GPU put on the same PCB as the CPU. The concept is exactly the same as current GPU-accelerated programs. The only difference is the location of the GPU.





Thatguy said:


> No, it's about compute power. These first-generation APUs are about figuring out the transistor-level basics: how to put both sets of transistors on the same piece of silicon. The next step will be more transistors on both sides, CPU and GPU, and the step beyond that an integration of x86 CPU logic and GPU parallelism, which will give AMD a massive advantage over Nvidia and Intel in compute power and heavy workloads.
> 
> AMD got it right 6 years ago when they started down this road; that's why Bulldozer is modular.


Will give an advantage =/= currently having an advantage.

Again, this is just GPGPU, the same thing we've had for ages. It is not transparent to the OS, and must specifically be coded for. Said coding is always where AMD ends up dropping the ball on this crap. I will not be excited until I see this actually being used extensively in the wild.


----------



## pantherx12 (Jun 25, 2011)

No, it's on the same silicon, man; there's no latency in the CPU-GPU communication (or very little).

It does have benefits.


----------



## Thatguy (Jun 25, 2011)

Wile E said:


> No it isn't. It's basically a GPU put on the same PCB as the CPU. The concept is exactly the same as current GPU-accelerated programs. The only difference is the location of the GPU. Will give an advantage =/= currently having an advantage.
> 
> Again, this is just GPGPU, the same thing we've had for ages. It is not transparent to the OS, and must specifically be coded for. Said coding is always where AMD ends up dropping the ball on this crap. I will not be excited until I see this actually being used extensively in the wild.



Imagine the power of a GPU with the programming front end of x86 or x87, which are widely supported instruction sets in compilers right now.

That's where this is headed: INT + GPU. The FPU is on borrowed time, and that's likely why they shared it.


----------



## cadaveca (Jun 25, 2011)

Wile E said:


> It is not transparent to the OS, and must specifically be coded for.



You can thank nVidia for that. Had they actually adopted DX9 properly, and DX10, all the needed software would be part of the OS now. But due to them doing their own thing, we the consumers got screwed.

I don't know why you even care if it uses software. All computing does; PCs are useless without software.


----------



## Wile E (Jun 26, 2011)

pantherx12 said:


> No, it's on the same silicon man, there's no latency between the communication of CPU-GPU ( or very little)
> 
> It does have benefits.



But you add it back by longer traces to memory. The benefits are mostly matters of convenience, marketing and packaging, not any performance benefits noticeable to end user. It makes sense from a business standpoint and may eventually lead to performance gains. I'm not arguing that. What I am arguing is that what is currently using these APUs is not hardware based, as in transparent to the OS. They are software based, just like CUDA and Stream. To use the APUs, the program must be specifically written to take advantage of them. Nothing changes that fact.
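The distinction being argued here, explicit offload versus a shared address space, can be sketched in plain C++ (CPU-only sketch; the function names are made up and the "kernel" is just a loop):

```cpp
#include <cstring>
#include <vector>

// Discrete-GPU style offload: the program explicitly stages data into a
// separate "device" buffer, runs the kernel there, and copies results back.
std::vector<int> offload_double(const std::vector<int>& host) {
    std::vector<int> device(host.size());
    std::memcpy(device.data(), host.data(), host.size() * sizeof(int));     // host -> device
    for (int& v : device) v *= 2;                                           // the "kernel"
    std::vector<int> result(device.size());
    std::memcpy(result.data(), device.data(), device.size() * sizeof(int)); // device -> host
    return result;
}

// Shared-address-space style: the "GPU" works on the same memory in place,
// so both staging copies disappear.
void unified_double(std::vector<int>& shared) {
    for (int& v : shared) v *= 2;
}
```

The staging copies in the first function are the part a truly common CPU/GPU address space removes; the programming-model question of who writes and launches the kernel remains either way, which is the point being made here.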





Thatguy said:


> Imagine the power of GPU with the programming front end of x86 or x87, which are widely supported instructions in compilers right now.
> 
> Thats where this is headed, INT + GPU the FPU is on borrowed time and thats likely why they shared it.


I don't see it happening any time soon.





cadaveca said:


> You cna thanks nVidia for that. Had they actually adopted DX9 properly, and DX10, all the needed software would be part of the OS now. But due to them doing thier own thing, we the consumer got screwed.
> 
> I don't know why you even care if it uses software. All computing does....PC's are useless without software.


I care only when people claim it's hardware based, when it isn't.

And I don't buy the nV argument either.


----------



## cadaveca (Jun 26, 2011)

Wile E said:


> And I don't buy the nV argument either.




CUDA says you have no choice. The whole point of DX10 was to provide OPEN access to features such as what CUDA offers, and nV said, quite literally, that Microsoft developed APIs, so they knew nothing about hardware design, and that their API (DX) wasn't the right approach. DX10.1 is the perfect example of this behavior continuing.

DirectX is, largely, broken because of CUDA. Should I mention the whole Batman anti-aliasing mumbo-jumbo?


I mean, I understand the business side, and CUDA, potentially, has saved nV's butt.

But its existence as a closed platform does more harm than good.

Thankfully, AMD will have their GPUs in their CPUs, which, in hardware, will provide a lot more functionality than nV can ever bring to the table.


----------



## Wile E (Jun 26, 2011)

cadaveca said:


> CUDA says you have no choice. The whole point of DX10 was to provide OPEN access to features such as what CUDA offers, and nV said, quite literally, Microsoft developed APIs, so knew nothing about hardware design, and that thier API (DX) wasn't the right approach. DX10.1 is the perfect example of this behavior continuing.
> 
> DirectX, is largely, broken, because of CUDA. Should I mention the whole Batman antialiasing mumbo-jumbo?
> 
> ...


It is not broken because of CUDA. 10.1 didn't add what CUDA added, and CUDA certainly didn't affect DX9. Granted, 10.1 is what 10 should have been, mostly due to nV, but it had nothing to do with CUDA.

More anti-CUDA BS with nothing to back it up.


----------



## pantherx12 (Jun 26, 2011)

Wile E said:


> It is not broken because of CUDA. 10.1 didn't add what CUDA added. And CUDA certainly didn't effect DX9. Granted, 10.1 is what 10 should have been, mostly due to nV, but it had nothing to do with CUDA.
> 
> More anti-CUDA bs with nothing to back it.



10.1 didn't add that stuff because Nvidia wasn't ready for the features that later became DX11:

tessellation and compute. (ATI had a tessellation unit ready a long time ago.)


----------



## cadaveca (Jun 26, 2011)

Wile E said:


> More anti-CUDA bs with nothing to back it.



Unfortunately, it is what it is, but not because I'm anti-CUDA.


> Computing is evolving from "central processing" on the CPU to "co-processing" on the CPU and GPU. To enable this new computing paradigm,* NVIDIA invented the CUDA parallel computing architecture *that is now shipping in GeForce, ION, Quadro, and Tesla GPUs, representing a significant installed base for application developers.



The bolded part is the BS, simply because it's DirectX and Windows that enable such functionality, not CUDA. In fact, it's as if they're saying they invented GPGPU.

In that regard, it's impossible for me to be "anti-CUDA". It's the wrapping of GPGPU functions into that specific term that's the issue.


----------



## Benetanegia (Jun 26, 2011)

pantherx12 said:


> 10.1 didn't add that stuff because Nvidia wasn't ready for the features that later became DX11:
> 
> tessellation and compute. (ATI had a tessellation unit ready a long time ago.)



DX10, or DX10.1, or whatever was going to be the DX after DX9, never had compute. Compute came to DX thanks to other APIs that came first, like Stream and CUDA, because those created demand. And it certainly was not Nvidia who prevented compute features from being added to DirectX. It would have been a COMPLETE win for Nvidia if DX10 had included them, for instance. Nvidia was ready for compute back then with G80, with a 6-month lead over ATI's chip, which was clearly inferior. Cayman can barely outclass Nvidia's 5-year-old G80 on compute-oriented features, let alone previous cards. HD2000/3000 and even 4000 were simply no match for G80 in compute tasks.

As for tessellation, it was not included because it didn't make sense to include it at all, not because Nvidia was not ready. ANYTHING besides a current high-end card is brought to its knees when tessellation is enabled, so tessellation on HD4000, and worse yet HD2/3000, was a waste of time that no developer really wanted, because it was futile. If they had wanted it, no one would have stopped them from implementing it in games; they don't even use it on the Xbox, which is a closed platform and much easier to target without worrying about breaking non-supporting cards.

Besides, a tessellator (especially the one ATI used before the DX11 implementation) is the simplest thing you can throw on a circuit; it's just an interpolator. Nvidia already toyed with the idea of interpolated meshes with the FX series; it even had some dedicated hardware for it, like a very archaic tessellator. Remember how that went? ATI also created something similar, much more advanced (yet nowhere near DX11 tessellation), and it was also scrapped by game developers, because it was not viable.
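On the "it's just an interpolator" point: the core of a pre-DX11-style tessellator is linear interpolation between existing vertices. A minimal sketch of subdividing one edge (plain C++; real DX11 tessellation adds hull/domain shader stages, adaptive factors, and much more):

```cpp
#include <utility>
#include <vector>

using Vec2 = std::pair<float, float>;

// Linear interpolation between two 2-D points.
Vec2 lerp(Vec2 a, Vec2 b, float t) {
    return {a.first + (b.first - a.first) * t,
            a.second + (b.second - a.second) * t};
}

// Subdivide an edge into `segments` pieces, returning the interior vertices
// a basic tessellator would insert between the two original endpoints.
std::vector<Vec2> tessellate_edge(Vec2 a, Vec2 b, int segments) {
    std::vector<Vec2> verts;
    for (int i = 1; i < segments; ++i)
        verts.push_back(lerp(a, b, static_cast<float>(i) / segments));
    return verts;
}
```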



cadaveca said:


> Unfortunately, it is what is, but not because I'm anti-CUDA.
> 
> 
> The bolded part is the BS, simply because it's DirectX and Windows that enables such fuctionality, not CUDA. In fact, it's like they are saying they invented GPGPU.
> ...



What are you talking about, man? CUDA has nothing to do with DirectX. They are two very different APIs that have hardware (ISA) correlation on the GPU and are exposed via the GPU drivers. DirectX and Windows have nothing to do with that. BTW, considering what you think about it, how do you explain CUDA (GPGPU) on Linux and Apple OSes?


----------



## cadaveca (Jun 26, 2011)

Benetanegia said:


> DX10 or DX10.1 or whatever was going to be the DX after DX9 never had compute. Compute came to DX thanks to other APIs that came first, like Stream and CUDA, because those ones created demand. And it certainly was not Nvidia the one who prevented compute features added to DirectX. It would have been a COMPLETE win for Nvidia, if DX10 had included them, for instance. Nvidia was ready for compute back then with G80 and with a 6 months lead over Ati's chip



Um, yeah.

G80 launched November 2006.

R520, which featured CTM and compute support (and as such even supported F@H on the GPU long before nVidia did), launched a year earlier, when nVidia had no such option, due to a lack of "double precision", which was the integral feature that G80 brought to market for nV. This "delay" is EXACTLY what delayed DirectCompute.


----------



## Benetanegia (Jun 26, 2011)

cadaveca said:


> Um, yeah.
> 
> G80 launched November 2006.
> 
> R520, which featured CTM, and Compute support(and as such, even supported F@H on GPU long before nVidia did), launched a year earlier, when nVidia had no such options, due to a lack of "double precision", which was the integral feature that G80 brought to the market for nV. This "delay" is EXACTLY what delayed DirectCompute.



That GPGPU implementation was not really ATI's work, but Stanford University's. It was nothing but BrookGPU, and it used DirectX instead of accessing the ISA directly like now. Of course ATI collaborated on the driver development, so they deserve part of the credit.

That has nothing to do with our discussion, though. ATI being first means nothing as to the current situation and the past 5 years. *Ati* was bought and disappeared a long time ago, and in the process the project was abandoned. AMD* was simply not ready to let GPGPU interfere with their need to sell high-end CPUs (nor is Intel), and that's why they never really pushed for GPGPU programs until now. Until Fusion, so that they can continue selling high-end CPUs AND high-end GPUs. There's nothing honorable about this Fusion thing.

* I want to be clear about a fact that not many see, apparently: ATI != AMD and never has been. I said nothing about what ATI pursued, achieved, or made before it was bought. It's after the acquisition that the GPGPU push was completely abandoned.

BTW, your last sentence holds no water. So DirectCompute was not included in DX10 because Nvidia released a DX10 card 7 months earlier than AMD, one which also happens to be compute-ready (and can be used even in today's GPGPU programs)? Makes no sense, dude. Realistically only AMD could have halted DirectCompute, but the reality is that they didn't, because DirectCompute never existed, nor was it planned, until other APIs appeared and showed that DirectX's supremacy, and Windows as a gaming platform, was in danger.


----------



## cadaveca (Jun 26, 2011)

Benetanegia said:


> It's after the acquisition that the GPGPU push was completely abandoned.



OK, if you wanna take that tack, I'll agree. 

I said, very simply, that nVidia's delayed implementations ("CUDA" hardware support) and the supporting software have greatly affected the transparency of "stream"-based computing in the end-user space.



W1zzard said:


> if there was some killer application for gpu computing wouldn't nvidia/cuda have found it by now?



Says it all.

The "software" needed is already there (there are actually very limited purposes for GPU-based computing), and has been for a long time. Hardware functionality is here, with APUs.



Benetanegia said:


> What are you talking about man? CUDA has nothing to do with DirectX. They are two very different API's that have hardware (ISA) correlation on the GPU and are exposed via the GPU drivers.



CUDA has EVERYTHING to do with DirectX, as it replaces it rather than working with it. Because the actual uses are very limited, there's no reason for a closed API such as CUDA, except to make money. And that's fine, that's business, but it does hurt the consumer in the end.


----------



## Wile E (Jun 26, 2011)

pantherx12 said:


> 10.1 Didn't add that stuff because of Nvidia not being ready for the features that later became dx11.


Wrong. All of nVidia's DX10 cards are capable of compute. nVidia did not hold back DX11 development; they did hold back some features in 10, but those were added back in 10.1, and none of those features were GPGPU. The compute features of DX11 were developed BECAUSE of the demand for compute functions like CUDA.



pantherx12 said:


> Tessellation and compute features. (ATI had a tessellation unit ready a long time ago)



The early implementation of ATI's tessellation engine is completely different from the current implementation. Their earlier version was proprietary. Exactly the same concept as CUDA vs DX compute. And guess what, that proprietary innovation led to an open standard. Also just like CUDA.

As per usual in this forum, there is a lot of CUDA/nV hate, with no real substance to back it up.





cadaveca said:


> OK, if you wanna take that tack, I'll agree.
> 
> I said, very simply, that nVidia's delayed implementations ("CUDA" hardware support), and the supporting software, have greatly affected the transparency of "stream"-based computing in the end-user space.
> 
> ...


Wrong. See above. It creates a market that open standards eventually capitalize on. Again, your disdain for CUDA is still completely unfounded.


----------



## Benetanegia (Jun 26, 2011)

cadaveca said:


> OK, if you wanna take that tack, I'll agree.
> 
> I said, very simply, that nVidia's delayed implementations ("CUDA" hardware support), and the supporting software, have greatly affected the transparency of "stream"-based computing in the end-user space.
> 
> ...



Without CUDA, GPGPU would have died. Plain and simple. After the only other company interested in GPGPU was bought by a CPU manufacturer, only CUDA remained and only Nvidia pushed for GPGPU. And please don't say AMD has also pushed for it, because that's simply not true. ATI pushed it in 2006, and it's true that AMD has been pushing a little bit, but only since 2009 or so, when it became obvious they would be left behind if they didn't. They always talked about supporting it but never actually released any software or put money behind it. That is, until now, until they released Fusion, and thanks to that they can continue milking us customers, making us buy high-end CPUs and high-end GPUs when a mainstream CPU and a high-end GPU would do just as well.

The idea of an APU for laptops and HTPCs is great, but for HPC or enthusiast use it's retarded, and I don't know why so many people are content with it. Why do I need 400 SPs on a CPU, which are not enough for modern games, just to run GPGPU code on it, when I can have 3000 on a GPU and use as many as I want? Also, when a new game is released and needs 800 SPs, oh well, I need a new CPU, not because I need a better CPU, but because I need the integrated GPU to have 800 SPs. RETARDED. And of course I would still need the 6000 SP GPU for the game to run.

It's also false that GPGPU runs better on an APU because it's close to the CPU. It varies with the task. Many tasks run much, much better on a dedicated GPU, thanks to the high bandwidth and the numerous fast local caches and registers.


----------



## cadaveca (Jun 26, 2011)

You're missing the point. I'll tend to agree that nVidia, with CUDA, has kept GPGPU going, but like I said earlier... its actual uses are so few and far between, it's almost stupid. It doesn't offer anything to the end user, really.

Like, why haven't they just sold the software to Microsoft already?

Why don't they make it work on ATI GPUs too?

I mean really...uses are so few, what's the point?


----------



## Benetanegia (Jun 27, 2011)

cadaveca said:


> You're missing the point. I'll tend to agree that nVidia, with CUDA, has kept GPGPU going, but like I said earlier... its actual uses are so few and far between, it's almost stupid. It doesn't offer anything to the end user, really.



You don't really follow the news a lot, do you? There are hundreds of uses for GPGPU.



> Like why haven't they jsut sold the software to microsoft, already?



Because Microsoft never buys something they can copy. Hello DirectCompute.

And I'm not saying they copied CUDA btw (although it's very similar), but the concept... and CUDA is in fact the evolution of Brook/Brook++/BrookGPU, made by the same people who made Brook at Stanford and who actually invented the stream processor concept. Nvidia didn't invent GPGPU, but many of the people who *did* work for Nvidia now, e.g. Bill Dally.



> Why don't they make it work on ATI GPUs too?



Because AMD doesn't want it, and they can't do it without permission. And they never wanted it, tbh, because it would have exposed their inferiority on that front. Nvidia offered CUDA and PhysX to AMD, for free, back in 2007, but AMD refused.

Also there's OpenCL which is the same thing and something both AMD and Nvidia are supporting so...



> I mean really...uses are so few, what's the point?



Uses are few, there's no point... yet AMD is promoting the same concept as the future. A hint: uses are not few. You just don't see many yet because:

1- Intel and AMD have been trying hard to delay GPGPU.
2- It takes time to implement things. e.g. how long did it take developers to adopt SSE? And the complexity of SSE in comparison to GPGPU is like... 
3- You don't read a lot. There are hundreds of implementations in the scientific arena.


----------



## jaydeejohn (Jun 27, 2011)

http://blogs.msdn.com/b/ptaylor/archive/2007/03/03/optimized-for-vista-does-not-mean-dx10.aspx
> Given the state of the NV drivers for the G80 and that ATI hasn’t released their hw yet; it’s hard to see how this is really a bad plan. We really want to see final ATI hw and production quality NV and ATI drivers before we ship our DX10 support. Early tests on ATI hw show their geometry shader unit is much more performant than the GS unit on the NV hw. That could influence our feature plan.


----------



## Benetanegia (Jun 27, 2011)

jaydeejohn said:


> http://blogs.msdn.com/b/ptaylor/archive/2007/03/03/optimized-for-vista-does-not-mean-dx10.aspx
> Given the state of the NV drivers for the G80 and that ATI hasn’t released their hw yet; it’s hard to see how this is really a bad plan. We really want to see final ATI hw and production quality NV and ATI drivers before we ship our DX10 support. Early tests on ATI hw show their geometry shader unit is much more performant than the GS unit on the NV hw. That could influence our feature plan.



How is that relevant?


----------



## cadaveca (Jun 27, 2011)

Benetanegia said:


> There's hundreds of implementations in the scientific arena.



That's one use, to me, and not one that I personally get any use out of. You're falsely inflating the possibilities.

As a home user, there's 3D browser acceleration, encoding acceleration, and game physics. Is there more than that for a HOME user? Because that's what I am, right, so that's all I care about.

Which brings me to my point...why do I care? GPGPU doesn't offer me much.


----------



## pantherx12 (Jun 27, 2011)

Benetanegia said:


> Cayman can barely outclass Nvidia's 5 year old G80 chip on compute oriented features, let alone previous cards. HD2000/3000 and even 4000 were simply no match for G80 for compute tasks.





Wish people would stop thinking of Folding@home when they think of compute.

There's actually a lot of stuff ATI's architecture is better at. 

Well, except the 580, that's built for that stuff. 

But barely outclass G80?

I've got apps where my 6870 completely smashes even top-end Nvidia cards.

May sound a bit fan-boyish here, but I'm just sharing my experience; take it as you will.


Geeks3D and the other tech blogs semi-frequently post comparisons of cards on new benchmarks or compute programs; you can find results there.

It's been a while since I've read up, though, so I can't point you in a specific direction, only that it's not so much a case of hardware vs hardware.

Cheers for clearing up about the compute, though.


----------



## jaydeejohn (Jun 27, 2011)

The rest is obviously history.
MS shifted their goals after seeing this.


----------



## Benetanegia (Jun 27, 2011)

cadaveca said:


> That's one use, to me, and not one that I personally get any use out of. You're falsely inflating the possibilities.



That is not one use. Scientists use GPGPU for physics simulations, treatment and comparison of image data (medical, satellite, military), artificial/distributed intelligence, data reorganization, stock market flow control, and many, many others. That is not one use.



> As a home user, there's 3D browser acceleration, encoding acceleration, and game physics. Is there more than that for a HOME user? Because that's what I am, right, so that's all I care about.
> 
> Which brings me to my point...why do I care? GPGPU doesn't offer me much.



There, then you finally said what you wanted to say. "It does not offer *me*" is not the same as "it has no use".


----------



## jaydeejohn (Jun 27, 2011)

I wonder what'll happen if a layer of SW is removed for GPGPU?


----------



## cadaveca (Jun 27, 2011)

Benetanegia said:


> There, then you finally said what you wanted to say. "It does not offer me" is not the same as "it has no use".



LuLz. It's your error to think I meant anything other than that.


----------



## pantherx12 (Jun 27, 2011)

Benetanegia said:


> 1- Intel and AMD have been trying hard to delay GPGPU.
> 2- It takes time to implement things. i.e. How much it took developers to implement SSE? And the complexity of SSE in comparison to GPGPU is like...
> 3- You don't read a lot. There's hundreds of implementations in the scientific arena.



You forget awesome game effects.

Physics toys! ( my favourite, I love n-body simulations and water simulations)


I believe GPGPU can help with search results too if I'm not mistaken.

Lots of stuff can benefit, it's just hard to think of it off the top of your head.


----------



## Benetanegia (Jun 27, 2011)

pantherx12 said:


> Wish people would stop thinking of Folding@home when they think of compute.
> 
> There's actually a lot of stuff ATI's architecture is better at.
> 
> ...



Feature set != performance.

There are many apps where AMD cards are faster. This is obvious: highly parallel applications which require very little CPU-like behavior will always run better on a highly parallel architecture. That's not to say that Cayman has many GPGPU-oriented hardware features that G80 didn't have 5 years ago.

And regarding that advantage, AMD is stepping away from that architecture in the future, right? They are embracing a scalar design. So which architecture was essentially right in 2006, VLIW or scalar? It really is that simple: if moving into the future for AMD means going scalar, there really are very few questions left unanswered. When AMD's design is almost a copy* of Kepler and Maxwell, which were announced a year ago, there are very few questions about what the correct direction is. And then it just becomes obvious who followed that path first...



cadaveca said:


> LuLz. It's your error to think i meant anything other than that.



Well, you said "it does not offer anything to the end user". That's not the same as saying that it does not offer anything to *you*. It offers a lot to me. Of course that's subjective, but even for the arguably few apps where it works, I feel it helps a lot. Kinda unrelated or not, but I usually hear how useless it is because "it only boosts video encoding by 50-100%". Lol, you'd need a completely new $1000 CPU + supporting MB to achieve the same improvement, but never mind.


----------



## pantherx12 (Jun 27, 2011)

Benetanegia said:


> Feature set != performance.
> 
> There are many apps where AMD cards are faster. This is obvious: highly parallel applications which require very little CPU-like behavior will always run better on a highly parallel architecture. That's not to say that Cayman has many GPGPU-oriented hardware features that G80 didn't have 5 years ago.
> 
> And regarding that advantage, AMD is stepping away from that architecture in the future, right? They are embracing a scalar design. So which architecture was essentially right in 2006, VLIW or scalar? It really is that simple: if moving into the future for AMD means going scalar, there really are very few questions left unanswered. When AMD's design is almost a copy* of Kepler and Maxwell, which were announced a year ago, there are very few questions about what the correct direction is. And then it just becomes obvious who followed that path first...



Still not sure how scalar has a performance advantage tbh, at a glance it should be weaker 

It's something I'll need to research more.


----------



## Benetanegia (Jun 27, 2011)

pantherx12 said:


> Still not sure how scalar has a performance advantage tbh, at a glance it should be weaker
> 
> It's something I'll need to research more.



CPUs are scalar (+ a vector unit), and GPGPU means running code on the GPU that typically runs on the CPU, hence scalar is an advantage for a wider range of code.

Both future architectures from AMD and Nvidia are going to be scalar + vector. For AMD it's the arch in the OP. For Nvidia I'm not sure if it was Kepler or Maxwell, but in any case by 2013 both companies will be there.
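The packing argument behind this can be sketched with a toy model (purely illustrative numbers and behavior, not a model of any real AMD or Nvidia part): a VLIW machine can only fill an issue bundle with independent instructions, so a dependent, CPU-like stream leaves slots empty, while a scalar design issues one instruction per cycle with nothing wasted.

```python
# Toy model of why scalar issue suits GPGPU better than VLIW: a VLIW
# machine can only fill an issue bundle with *independent* instructions,
# so a dependent, CPU-like instruction stream leaves slots empty.
# Numbers are purely illustrative, not a model of any real GPU.

def vliw_cycles(deps, width=4):
    """Cycles to issue a stream on a `width`-slot VLIW machine.
    deps[i] is True if instruction i depends on instruction i-1
    (and therefore cannot share a bundle with it)."""
    cycles, slots_used = 1, 1
    for dep in deps[1:]:
        if dep or slots_used == width:  # dependency or full bundle -> new cycle
            cycles += 1
            slots_used = 1
        else:
            slots_used += 1
    return cycles

def scalar_cycles(deps):
    """A scalar machine issues one instruction per cycle per lane;
    dependencies are hidden by switching between many threads."""
    return len(deps)

independent = [False] * 16  # graphics-like: fully parallel stream
dependent = [True] * 16     # CPU-like: serial dependency chain

print(vliw_cycles(independent))  # packs 4 per bundle -> 4 cycles
print(vliw_cycles(dependent))    # no packing possible -> 16 cycles
print(scalar_cycles(dependent))  # 16 cycles, with no idle slots
```

On the parallel stream VLIW is 4x denser, which is why it suited graphics; on the dependent stream it takes the same 16 cycles as the scalar machine while leaving 3 of its 4 slots idle every cycle, and that wasted silicon is what a scalar + vector design avoids.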


----------



## Wile E (Jun 27, 2011)

cadaveca said:


> That's one use, to me, and not one that I personally get any use out of. You falsely inflating the possibilities.
> 
> As a home user, there's 3D browser acceleration, encoding accelleration, and game physics. Is there more than that for a HOME user? Because that's what I am, right, so that's all I care about.
> 
> *Which brings me to my point...why do I care? GPGPU doesn't offer me much.*



Then AMD's use of APUs is just as useless. It operates on exactly the same principles.


----------



## cadaveca (Jun 27, 2011)

Wile E said:


> Then AMD's use of APUs is just as useless. It operates on exactly the same principles.



Currently, for me, it is useless. 

Until we get games that take advantage of what's offered, to me, APUs are nothing more than an Xbox 360.

I mean, what does Sandy Bridge on Z68 offer? It'll do the same acceleration that discrete cards can, but... that's it?

Unless it offers me a better gaming experience, I don't care.

Neither Intel's Sandy Bridge nor AMD's APUs can run CUDA, so I don't get any game benefits, such as PhysX; so why would I be interested?


----------



## Wile E (Jun 27, 2011)

Then why come in here spreading misinformation?


----------



## cadaveca (Jun 27, 2011)

Wile E said:


> Then why come in here spreading misinformation?



It's not misinformation. It's my opinion. If CUDA were opened to those other platforms, then there might be reason to be interested, hence it hurting the consumer.

I mean, if there was a real APU with an nVidia GPU, that'd be great, but because a lot of these chips are intended for desktops, and if you want better 3D performance than what an AMD APU or SB offers, an AMD APU paired with an AMD GPU is going to be the very best option, performance-wise.

But I can't get PhysX on that high-performance option...

We know AMD isn't going to be there on the software side; it's up to the devs to decide to implement the technologies, but at the same time, when it comes to gaming, nV is going to be pushing their options, and that doesn't help.


----------



## Wile E (Jun 27, 2011)

cadaveca said:


> It's not misinformation. It's my opinion. If CUDA were opened to those other platforms, then there might be reason to be interested, hence it hurting the consumer.
> 
> I mean, if there was a real APU with an nVidia GPU, that'd be great, but because a lot of these chips are intended for desktops, and if you want better 3D performance than what an AMD APU or SB offers, an AMD APU paired with an AMD GPU is going to be the very best option, performance-wise.
> 
> But I can't get PhysX on that high-performance option...



It is misinformation when without CUDA, we would not have these new APUs, as there would be no interest in this form of computing. CUDA did not hurt consumers, period. It drove an entire market into being, which is producing new open standards. That is the very definition of good for consumers.


----------



## cadaveca (Jun 27, 2011)

Wile E said:


> It drove an entire market into being, which is producing new open standards. That is the very definition of good for consumers.



OK, sure. But with AMD's GPUs following closer and closer to nV's solutions, it makes far less sense for nV to restrict their software to their chips alone.

THAT doesn't help anyone but them.


----------



## Wile E (Jun 27, 2011)

cadaveca said:


> OK, sure. But with AMD's GPUs following closer and closer to nV's solutions, it makes far less sense for nV to restrict their software to their chips alone.
> 
> THAT doesn't help anyone but them.



They are also developing for the open standards, so your point is irrelevant. Sure, it helps only them, but it doesn't _hurt_ anyone in the process.


----------



## cadaveca (Jun 27, 2011)

Wile E said:


> so your point is irrelevant



Thanks for your opinion.


----------



## Wile E (Jun 27, 2011)

cadaveca said:


> Thanks for your opinion.



No, it's a fact. CUDA benefiting nV is completely irrelevant to the topic at hand. The topic at hand is whether or not it hurts consumers. Benefiting nV does not automatically equal harming consumers. It does not, because nV still fully supports the open standards as well.


----------



## cadaveca (Jun 27, 2011)

Wile E said:


> The topic at hand is whether or not it hurts consumers



No, actually, the topic is what AMD is doing with their GPUs. I'd like to see them run CUDA, but it's not gonna happen. CUDA sucks.


----------



## Wile E (Jun 27, 2011)

cadaveca said:


> No, actually, the topic is what AMD is doing with thier GPUs. I'd like to see them run CUDA, but it's not gonna happen. CUDA sucks.



At least it sucks less than Stream.


----------



## Damn_Smooth (Jun 27, 2011)

cadaveca said:


> Currently, for me, it is useless.
> 
> Until we get games that take advantage of what's offered, to me, APUs are nothing more than an Xbox 360.
> 
> ...



Would this be an implementation that you would consider useful?

http://blogs.msdn.com/b/somasegar/archive/2011/06/15/targeting-heterogeneity-with-c-amp-and-ppl.aspx



> I’m excited to announce that we are introducing a new technology that helps C++ developers use the GPU for parallel programming.  Today at the AMD Fusion Developer Summit, we announced C++ Accelerated Massive Parallelism (C++ AMP). Additionally, I’m happy to say that we intend to make the C++ AMP specification an open specification.



And, forgive my ignorance here, but wouldn't this also render Cuda somewhat obsolete?


----------



## Wile E (Jun 27, 2011)

Damn_Smooth said:


> Would this be an implementation that you would consider useful?
> 
> http://blogs.msdn.com/b/somasegar/archive/2011/06/15/targeting-heterogeneity-with-c-amp-and-ppl.aspx
> 
> ...



That depends solely on what the API is capable of. CUDA as we know it right now will eventually die off for a more hardware independent approach.


----------



## Damn_Smooth (Jun 27, 2011)

Wile E said:


> That depends solely on what the API is capable of. CUDA as we know it right now will eventually die off for a more hardware independent approach.



I really don't know enough to pretend that I know what I'm talking about yet, so thanks for explaining that to me.


----------

