Wednesday, May 9th 2012

NVIDIA Contributes CUDA Compiler to Open Source Community

NVIDIA today announced that LLVM, one of the industry's most popular open source compilers, now supports NVIDIA GPUs, dramatically expanding the range of researchers, independent software vendors (ISVs) and programming languages that can take advantage of the benefits of GPU acceleration.

LLVM is a widely used open source compiler infrastructure with a modular design that makes it easy to add support for new programming languages and processor architectures. The CUDA compiler provides C, C++ and Fortran support for accelerating applications on massively parallel NVIDIA GPUs. NVIDIA has worked with LLVM developers to contribute the CUDA compiler source code changes to the LLVM core and the parallel thread execution (PTX) backend. As a result, programmers can develop applications for GPU accelerators using a broader selection of programming languages, making GPU computing more accessible and pervasive than ever before.
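
As a rough illustration of the kind of code involved (a sketch, not taken from NVIDIA's announcement), a minimal CUDA C vector-addition program might look like the following; the kernel name, data size and launch configuration are arbitrary examples.

// Hedged sketch: a minimal CUDA C program of the sort the compiler handles.
// Names and sizes are illustrative only.
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // global thread index
    if (i < n)
        c[i] = a[i] + b[i];                          // one element per thread
}

int main(void)
{
    int n = 1 << 20;                                 // arbitrary element count
    size_t bytes = n * sizeof(float);

    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    float *da, *db, *dc;                             // device buffers
    cudaMalloc((void **)&da, bytes);
    cudaMalloc((void **)&db, bytes);
    cudaMalloc((void **)&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    int threads = 256;
    int blocks  = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(da, db, dc, n);      // launch on the GPU

    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);                    // expect 3.000000

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}

With the CUDA toolkit installed, something like "nvcc -o vec_add vec_add.cu" builds it, and "nvcc -ptx vec_add.cu" emits the PTX intermediate code that the contributed LLVM backend targets (the file names here are placeholders).
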
LLVM supports a wide range of programming languages and front ends, including C/C++, Objective-C, Fortran, Ada, Haskell, Java bytecode, Python, Ruby, ActionScript, GLSL and Rust. It is also the compiler infrastructure NVIDIA uses for its own CUDA C/C++ compiler, and it has been widely adopted by leading companies such as Apple, AMD and Adobe.
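
One practical consequence of a shared PTX backend, sketched below under assumptions: any compiler front end that lowers its language to PTX can hand the result to the CUDA driver API at run time. The file name "vec_add.ptx", the kernel name "vecAdd" and the launch parameters are hypothetical, and error checking is omitted for brevity.

// Hedged sketch: loading and launching PTX with the CUDA driver API.
#include <cuda.h>                                    // CUDA driver API

int main(void)
{
    cuInit(0);

    CUdevice dev;
    cuDeviceGet(&dev, 0);

    CUcontext ctx;
    cuCtxCreate(&ctx, 0, dev);

    // Load PTX produced by any front end that lowers to PTX, for example
    // through LLVM's contributed backend. "vec_add.ptx" is a hypothetical
    // file name.
    CUmodule mod;
    cuModuleLoad(&mod, "vec_add.ptx");

    // Assumes the PTX entry point is named vecAdd (e.g. declared extern "C"
    // in the original source so the name is not mangled).
    CUfunction kernel;
    cuModuleGetFunction(&kernel, mod, "vecAdd");

    int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    CUdeviceptr a, b, c;                             // device buffers
    cuMemAlloc(&a, bytes);
    cuMemAlloc(&b, bytes);
    cuMemAlloc(&c, bytes);
    // ... fill a and b from host data with cuMemcpyHtoD ...

    void *args[] = { &a, &b, &c, &n };               // kernel parameters
    int threads = 256;
    int blocks  = (n + threads - 1) / threads;
    cuLaunchKernel(kernel, blocks, 1, 1, threads, 1, 1, 0, NULL, args, NULL);
    cuCtxSynchronize();

    cuMemFree(a); cuMemFree(b); cuMemFree(c);
    cuModuleUnload(mod);
    cuCtxDestroy(ctx);
    return 0;
}

The loading path is the same no matter which front end produced the PTX, which is what makes the shared backend useful to languages beyond CUDA C/C++.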

"Double Negative has ported their fluid dynamics solver over to use their domain-specific language, Jet, which is based on LLVM," said Dan Bailey, researcher at Double Negative and contributor to the LLVM project. "In addition to the existing architectures supported, the new open-source LLVM compiler from NVIDIA has allowed them to effortlessly compile highly optimized code for NVIDIA GPU architectures to massively speed up the computation of simulations used in film visual effects."

"MathWorks uses elements of the LLVM toolchain to add GPU support to the MATLAB language," said Silvina Grad-Freilich, senior manager, parallel computing marketing, MathWorks. "The GPU support with the open source LLVM compiler is valuable for the technical community we serve."

"The code we provided to LLVM is based on proven, mainstream CUDA products, giving programmers the assurance of reliability and full compatibility with the hundreds of millions of NVIDIA GPUs installed in PCs and servers today," said Ian Buck general manager of GPU computing software at NVIDIA. "This is truly a game-changing milestone for GPU computing, giving researchers and programmers an incredible amount of flexibility and choice in programming languages and hardware architectures for their next-generation applications."

To download the latest version of the LLVM compiler with NVIDIA GPU support, visit the LLVM site. To learn more about GPU computing, visit the NVIDIA website. To learn more about CUDA or download the latest version, visit the CUDA website.

9 Comments on NVIDIA Contributes CUDA Compiler to Open Source Community

#1
GSquadron
This is very nice news. It will be of great help, and it will make me, at least, think twice about buying NVIDIA cards.
#3
Cheeseball
Not a Potato
Wow, last year the CUDA compiler got revamped to use LLVM (which resulted in a 20% reduction in compile time), and now LLVM itself supports NVIDIA GPUs.

This basically means that almost all compilers that use the LLVM core libraries just got a huge speed boost if you target an NVIDIA card for code generation.
#4
SIGSEGV
Very interesting, since I've begun working with CUDA and OpenCL parallel processing :cool:. Ingredients: one NVIDIA card and one AMD card :rockout:
#5
RejZoR
This might be a bit of a stupid question, but does this mean anything at all to Intel and AMD GPU users?
#6
Jacez44
RejZoR: This might be a bit of a stupid question, but does this mean anything at all to Intel and AMD GPU users?
No, Intel and AMD have had LLVM compilers for a while.

This news indicates that NVIDIA has joined its competitors in offering this feature.
#7
faramir
Uh-huh, I seem to have found out why they need a performance boost (from Wikipedia's entry on Clang):

"Although Clang's overall compatibility with GCC is very good, and its compilation speed typically better than GCC's, as of early 2011 the runtime performance of clang/LLVM output is sometimes worse than GCC's."

So basically LLVM is a faster and more compact compiler that has worse optimisation? Nothing new here; this has been a trade-off since... forever?
#8
Dippyskoodlez
Hopefully Apple picks these changes up for Xcode. My MBA has a little bit of a spike when I compile :/
#9
atikkur
CUDA (kuda) in my country means horse.