Sunday, January 18th 2009
GT300 A Leap Forward for NVIDIA GPU Architecture
Every once in a while, NVIDIA releases a GPU that marks a distinct evolution of its architecture; the G80 is a good example. Sources tell Hardware-Infos that the GT300 is on course to be one such GPU, bringing distinct architectural changes. To begin with, the GT300 will spearhead the company's DirectX 11 effort the way its ancestor, the G80, did for DirectX 10, an effort that turned out to be largely successful.
The GT300's architecture will be based on a new form of number-crunching machinery. While today's NVIDIA GPUs use a SIMD (single instruction, multiple data) computation mechanism, the GT300 will move the GPU to a MIMD (multiple instructions, multiple data) mechanism, which is expected to boost the GPU's computational efficiency many-fold. The ALU clusters will be organized dynamically, pooled, and driven by a crossbar switch. Once again, NVIDIA gets to lower clock speeds and power consumption while achieving greater performance than current-generation GPUs. With the GT300, NVIDIA will also introduce the next major update to CUDA. With the new GPUs built on the 40 nm silicon fabrication process, transistor counts are expected to climb sharply. The GT300 is expected to launch in Q4 2009, with its schedule largely dependent on that of Microsoft's Windows 7 operating system, which brings DirectX 11 support.
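For a rough idea of what MIMD could change, consider the kind of data-dependent branch that today's SIMD hardware handles poorly. The following is only an illustrative CUDA sketch (the kernel and its data are made up, and nothing in it is confirmed GT300 behaviour): on current GPUs, threads of a warp that disagree on the branch are serialized, each path executing while the other lanes sit masked off, whereas a MIMD-style ALU pool could in principle issue the two paths to different ALU groups at the same time.

__global__ void branchy(const float *in, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    // On a SIMD machine, a warp whose threads split between these two
    // paths runs them one after the other; a MIMD design would not have
    // to serialize them.
    if (in[i] > 0.0f)
        out[i] = sqrtf(in[i]);   // path A
    else
        out[i] = in[i] * in[i];  // path B
}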
Source:
Hardware-Infos
46 Comments on GT300 A Leap Forward for NVIDIA GPU Architecture
GT212 will be a placeholder, only used to get to 40 nm, but there are already rumors that it's being scrapped.
I hope this new architecture brings some good performance gains and I also hope AMD have something good to counter them.
-shader clock
-512 bit bus
-MIMD
-Physics
Here's to my wishful thinking, because I seriously doubt it's all going to happen.
I never liked the narrow bus bandwidth of ATI cards. And the fact that they have never implemented a shader clock sucks. If they would keep pace with NVIDIA, use the MIMD architecture, and implement physics, that would make one hell of a 5870X2. But yeah, wishful thinking. :rolleyes:
The end result is some jerky-looking games and very few that look good.
Except for the Crysis series we have some ugly games, and if we went by what ATI/NVIDIA tell us about their graphics cards, we should expect an orgasm or some life-changing experience.
The GT300 will probably run Crysis with all details even at full HD. Wow, what an achievement.
Don't expect more than this, people. You're more likely to see graphics that wow everyone on the next Xbox or PlayStation 4 than on the PC, because we lack games and good people who bother to make anything in a land where a game is pirated within hours of release.
These NVIDIA people amaze me with how stupid they think we are. Well, some are: the ones who cheer CUDA but never use it in anything they do, or the people who cheer PhysX but have never played a game with PhysX in their life, the brainwashed people.
With this, nVidia will get a lot more flexibility in math, making CUDA much more powerful for GENERAL MATH rather than the very specific SIMD math it does now.
I'm not so sure how MIMD will help GPU rendering, though. The "graphics pipeline" remains the same. However, it would allow CUDA AND graphics rendering to happen at the same time. (At the moment, IIRC, it can't: the GPU can only do ONE thing at a time, so if you mix graphics and CUDA it has to "swap" between math and graphics processing, which is incredibly inefficient.)
If someone can explain how MIMD helps GRAPHICS performance, pls post.
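To make the swap lemonadesoda describes concrete, here is a minimal sketch of how a typical CUDA+OpenGL interop loop looks today (the interop calls are from the CUDA 2.x runtime; the buffer, kernel, and frame function are hypothetical, and GLEW is assumed for the GL entry points). The kernel has to finish before the draw call runs, so each frame alternates between compute and graphics work instead of overlapping them:

#include <GL/glew.h>
#include <cuda_runtime.h>
#include <cuda_gl_interop.h>

__global__ void updateVerts(float4 *v, int n)      // hypothetical compute pass
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) v[i].y += 0.01f;                    // nudge each vertex
}

GLuint vbo;   // vertex buffer shared between CUDA and OpenGL,
              // registered once at startup with cudaGLRegisterBufferObject(vbo)

void frame(int nVerts)
{
    float4 *d_verts = 0;
    cudaGLMapBufferObject((void **)&d_verts, vbo);                // hand the VBO to CUDA
    updateVerts<<<(nVerts + 255) / 256, 256>>>(d_verts, nVerts);  // "math" pass
    cudaGLUnmapBufferObject(vbo);                                 // give it back to OpenGL

    glBindBuffer(GL_ARRAY_BUFFER, vbo);                           // graphics pass
    glVertexPointer(4, GL_FLOAT, 0, 0);
    glEnableClientState(GL_VERTEX_ARRAY);
    glDrawArrays(GL_POINTS, 0, nVerts);
    glDisableClientState(GL_VERTEX_ARRAY);
}

Whether a MIMD front end would actually let the compute and draw portions of such a loop overlap is exactly the open question in this thread.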
Let's put it this way: if one company or the other falls out of existence, I will probably stop following computer tech on that front, become a hermit, and go back to playing console games. :roll::banghead::shadedshu
As for PhysX, 3DMark Vantage? :banghead: :banghead:
:roll:
LOL, some of you are right. We won't even have a need or use for DX11 when this hits, and when the first WOW-factor DX11 games arrive, this card will be like the 8800GTX on Crysis: lots of bitching going on.
So as cool as it will be to have the first top-end DX11 GPU, it will be just like all the rest: it plays DX9/10 games maxed out, but will be hard pressed to cope with DX11 killers like "Crysis 2: The Other Island" and its expansion "Crysis: The Cave We Forgot About on the Other Side of the Island".
That's not related to DX11 or any other API, that's how the GPU works internally and it's a HUGE improvement over SIMD.
I agree with lemonadesoda that this might mostly affect GPGPU and very little graphics processing, but that assumes the load balancing is fairly efficient nowadays, which we really don't know. I don't think we know enough about how exactly they work on that front. Most people assume that Nvidia's SPs are very efficient, because they are certainly much more efficient than Ati's when load balancing ("scalar" versus VLIW and all), and IMO that makes us believe Nvidia's have to be above 90-95% efficiency. But it may still be that Nvidia's are below 75%, and if MIMD can raise that to around 90-95%, that's already a 15-20% increase for free. Add into the mix what lemonade said about graphics+CUDA at the same time, and also that the card will probably be able to perform a context change (from vertex to pixel, for example) in the same clock, and we might really be getting somewhere.
Maybe this can help answer your last question, lemonade? It's funny, because I thought of this and almost convinced myself of that possibility as I was writing... :laugh:
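Taking the utilization figures in the post above purely as assumptions (they are guesses, not measurements), the arithmetic works out like this: going from 75% to 90-95% ALU utilization is 15-20 percentage points, which for the same shader array corresponds to a relative throughput gain of roughly 1.20x to 1.27x:

\[
\frac{0.90}{0.75} = 1.20, \qquad \frac{0.95}{0.75} \approx 1.27
\]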