Wednesday, December 14th 2011
NVIDIA Opens Up CUDA Platform by Releasing Compiler Source Code
NVIDIA today announced that it will provide the source code for the new NVIDIA CUDA LLVM-based compiler to academic researchers and software-tool vendors, enabling them to more easily add GPU support for more programming languages and support CUDA applications on alternative processor architectures.
LLVM is a widely-used open source compiler infrastructure with a modular design that makes it easy to add support for new programming languages and processor architectures. It is used for a range of programming requirements by many leading companies, including Adobe, Apple, Cray, Electronic Arts, and others. The new LLVM-based CUDA compiler, which is enhanced with architecture support for NVIDIA's parallel GPUs, is included in the latest release of the CUDA Toolkit (v4.1), now available to the public.
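For context, below is a minimal sketch of the kind of CUDA C code this compiler lowers through its LLVM-based front end: a simple SAXPY kernel launched over a grid of threads. The kernel name, launch parameters, and problem size are illustrative examples, not details from the announcement.

```cuda
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

/* SAXPY: y = a*x + y, one thread per element. */
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    /* Host buffers. */
    float *hx = (float *)malloc(bytes);
    float *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    /* Device buffers and host-to-device copies. */
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    /* Launch with 256 threads per block, enough blocks to cover n. */
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);

    /* Copy the result back and spot-check one element: 2*1 + 2 = 4. */
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]);

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}
```

The `__global__` qualifier and `<<<grid, block>>>` launch syntax are exactly the CUDA C extensions the compiler front end must translate into LLVM IR before the GPU backend generates device code.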
"Opening up the CUDA platform is a significant step," said Sudhakar Yalamanchili, professor at Georgia Institute of Technology and lead of the Ocelot project, which maps software written in CUDA C to different processor architectures. "The future of computing is heterogeneous, and the CUDA programming model provides a powerful way to maximize performance on many different types of processors, including AMD GPUs and Intel x86 CPUs."
Enabling alternative approaches to programming heterogeneous parallel systems for domain-specific problems and future programming models will help accelerate the path to exascale computing. By releasing the source code to the CUDA compiler and its intermediate representation (IR) format, NVIDIA is giving researchers more flexibility to map the CUDA programming model to other architectures, and furthering the development of next-generation high-performance computing platforms.
Software tool vendors can also access the compiler source code to build custom solutions.
"This initiative enables PGI to create native CUDA Fortran and OpenACC compilers that leverage the same device-level optimization technology used by NVIDIA CUDA C/C++," said Doug Miles, director of The Portland Group. "It will enable seamless debugging and profiling using existing tools, and allow PGI to focus on higher-level optimizations and language features."
Early access to the CUDA compiler source code is available for qualified academic researchers and software tools developers by registering here.
To learn more about the NVIDIA CUDA programming environment, visit the CUDA web site.
20 Comments on NVIDIA Opens Up CUDA Platform by Releasing Compiler Source Code
IMO it makes more sense to offer PhysX to Radeon users but still keep the PPU a GeForce product: win/win. Presently, I'm sure a number of AMD/ATi users would love to use PhysX and pick up a cheap GTX 460 (or similar), whilst still retaining their HD-whatever as primary graphics. Seems like a no-brainer.
My secondary rig (Q9400/P45) uses the SLI hack...another case of Nvidia limiting its own opportunities. It might have made sense for Nvidia to keep it proprietary when they still produced mobo chipsets...now? Not so much. Personally I'd allow SLI free on any board that could support dual cards to maximize sales potential...but maybe I'm simple, who knows?
An "unofficial" hack is going to be public once made, and obviously they'll be among the first to know about it. It's a non-possibility.
"PhysX" is a registered trademark of nVidia. It's owned by them and can't be re-applied.
You're acting no different than a stupid AMD fanboy.
edit: And I hope you don't call me an AMD fanboy too, because I don't even really like them :laugh:
In the greater scheme of things, I think Nvidia opening up CUDA is just Nvidia covering its bases. They probably see that OpenCL (Nvidia are part of the Khronos group) will gain traction over time, and that CUDA ports to OpenCL without too much difficulty. Most devs would likely know the same, so it looks as though Nvidia is trying to widen CUDA uptake while still playing the "open source" card.
All people are interested in is whether or not this will make a hack possible that improves performance in PhysX-powered games when using AMD cards. Which it more than likely will since people won't have to spend tons of time on trial and error with reverse engineering. Not sure what prompted that considering you could not be any more wrong.
If you use CUDA to simulate your weapons of mass destruction, you can now build your own processor (ASIC), port CUDA to work with it, and simulate more and better.