Wednesday, February 21st 2007
New NVIDIA compiler lets developers offload math functions to GPU
NVIDIA has announced the release of beta versions of the SDK and C compiler for their Compute Unified Device Architecture (CUDA) technology. The C compiler includes a set of C language extensions that will enable developers to write C code that targets NVIDIA's GPUs directly. These extensions are supported by software libraries and a special CUDA driver that exposes the GPU to the OS and applications as a math coprocessor.
This approach differs from that taken by AMD/ATI with their "Close to Metal" (CTM) initiative. With CTM, AMD/ATI has opened up the low-level ISA so that its graphics products can be programmed directly in assembly language. CTM relies on developers to create the libraries and higher-level tools for in-game use.
NVIDIA CUDA technology is a fundamentally new computing architecture that enables the GPU to solve complex computational problems in consumer, business, and technical applications. CUDA (Compute Unified Device Architecture) technology gives computationally intensive applications access to the tremendous processing power of NVIDIA graphics processing units (GPUs) through a revolutionary new programming interface. Providing orders of magnitude more performance and simplifying software development by using the standard C language, CUDA technology enables developers to create innovative solutions for data-intensive problems. For advanced research and language development, CUDA includes a low level assembly language layer and driver interface.
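To give a flavor of the C language extensions described above, here is a minimal sketch of a CUDA program that offloads a vector addition to the GPU. The `__global__` kernel qualifier and the `<<<blocks, threads>>>` launch syntax are the core extensions; the array sizes and values here are arbitrary, chosen only for illustration.

```cuda
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

// Kernel: runs on the GPU; each thread adds one pair of elements.
__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 1024;
    size_t bytes = n * sizeof(float);

    // Allocate and fill host (CPU) arrays.
    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = (float)i; h_b[i] = 2.0f * i; }

    // Allocate device (GPU) arrays and copy the inputs over.
    float *d_a, *d_b, *d_c;
    cudaMalloc((void **)&d_a, bytes);
    cudaMalloc((void **)&d_b, bytes);
    cudaMalloc((void **)&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // <<<blocks, threads>>> is CUDA's kernel-launch extension to C.
    vecAdd<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);

    // Copy the result back and inspect one element.
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[10] = %f\n", h_c[10]);  // 10 + 20 = 30

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}
```

The host code stays ordinary C; only the kernel function and the launch line are CUDA-specific, which is what lets developers target the GPU without writing assembly, in contrast to the CTM approach above.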
9 Comments on New NVIDIA compiler lets developers offload math functions to GPU
I saw ATI apply this type of thing to SETI@Home a couple of years back, & it made tearing thru unit processing way, WAY fast...
It would definitely seem that videocard GPUs are much faster @ that type of computation (which is largely FPU/floating-point-unit work on the mobo CPU).
APK
And, you're probably correct - it most likely was Folding@Home this was applied to, after all!
(I did both projects for a decent stretch (for this forum's team, in fact, for Folding@Home) - but I did FAR more on SETI, since it began in 1999 (took a break 2001 - 2002; didn't have "enough machine" imo back then to do units fast))
* However: I am fairly certain I saw mention of it on the SETI@Home forums RIGHT before I joined here, around a year ago. SO YOU ALSO may be "off" on the dates you mention (a few months ago), because I joined here way longer ago than that, & I saw it on their forums quite a bit before I left them!
Hey - they're both (as I am sure you know) 'distributed computing' concepts, & I was a part of them both...
Now, as to details on them? Heh, they're 'hazy' for me now, as is what I saw on forums about them. Why hazy?? Simply because they're not 'crucial to my existence'... non-essential information, for me.
I let it fade... it 'takes up space' is why, & gets a "DB reorg" (compacted out blank records).
APK
P.S.=> Am I human? Do I forget things that aren't "110% crucial to my existence"?? Heck, absolutely - this 'factoid' isn't paying the bills for me, it's merely "trivia" @ this point...
Plus, I can stand correction @ times, like anybody else - so, thanks for that, on that note! apk
@APK: The F@H GPGPU client first launched in beta at the beginning of last October (2006), but discussion had been flying about it for well over a year before :)
(Couldn't recall if it was beta or what was what... just that I knew it was going on OR going to be going on, rather...)
:)
* Yep... getting old & senile here I think... has to be this in part: I never used to forget anything, even details... not the case anymore!
(LOL - OH WELL!)
APK