The growing ranks of programmers using the open-source Python language can now take full advantage of GPU acceleration for their high-performance computing (HPC) and big data analytics applications by using the NVIDIA CUDA parallel programming model, NVIDIA today announced.
Easy to learn and use, Python is among the top 10 programming languages with more than three million users. It enables users to write high-level software code that captures their algorithmic ideas without delving deep into programming details. Python's extensive libraries and advanced features make it ideal for a broad range of HPC science, engineering and big data analytics applications. Support for NVIDIA CUDA parallel programming comes from NumbaPro, a Python compiler in the new Anaconda Accelerate product from Continuum Analytics.
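For a sense of what this looks like in practice, here is a minimal sketch in the decorator style that NumbaPro popularized, written against the open-source Numba package (NumbaPro's freely available sibling). The 'cuda' target, signature string, and array sizes are illustrative assumptions rather than details from the announcement, and running it requires a CUDA-capable GPU.

```python
# Sketch: compile a plain Python function into a GPU ufunc.
# Assumes the open-source Numba package and a CUDA-capable GPU.
import numpy as np
from numba import vectorize

@vectorize(['float32(float32, float32)'], target='cuda')
def add_gpu(a, b):
    # The function body is ordinary Python; Numba compiles it into a
    # CUDA kernel, with one GPU thread handling each array element.
    return a + b

x = np.arange(1_000_000, dtype=np.float32)
y = np.arange(1_000_000, dtype=np.float32)
print(add_gpu(x, y)[:5])  # executes on the GPU, returns a NumPy array
```

The point of the decorator approach is that the algorithm stays written as ordinary Python; only the annotation tells the compiler to target the GPU.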
"Hundreds of thousands of Python programmers will now be able to leverage GPU accelerators to improve performance on their applications," said Travis Oliphant, co-founder and CEO at Continuum Analytics. "With NumbaPro, programmers have the best of both worlds: they can take advantage of the flexibility and high productivity of Python with the high performance of NVIDIA GPUs."
Expanded Access to Accelerated Computing via LLVM
This new support for GPU-accelerated application development is the result of NVIDIA's contribution of the CUDA compiler source code into the core and parallel thread execution backend of LLVM, a widely used open source compiler infrastructure.
Continuum Analytics' Python development environment uses LLVM and the NVIDIA CUDA compiler software development kit to deliver GPU-accelerated application capabilities to Python programmers.
The modularity of LLVM makes it easy for language and library designers to add support for GPU acceleration to a wide range of general-purpose languages like Python, as well as to domain-specific programming languages. LLVM's efficient just-in-time compilation capability lets developers compile dynamic languages like Python on the fly for a variety of architectures.
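As a rough illustration of that on-the-fly compilation, the sketch below uses Numba's @jit decorator, which lowers a Python function through LLVM to native machine code the first time it is called; the function and inputs are made up for the example.

```python
# Sketch: LLVM-based just-in-time compilation of a dynamic-language function.
# Assumes the open-source Numba package; purely illustrative.
import numpy as np
from numba import jit

@jit(nopython=True)
def dot(a, b):
    # A plain Python loop; compiled to machine code via LLVM on first call,
    # so later calls run at native speed instead of being interpreted.
    total = 0.0
    for i in range(a.shape[0]):
        total += a[i] * b[i]
    return total

a = np.random.rand(10_000)
b = np.random.rand(10_000)
print(dot(a, b))  # first call triggers compilation, subsequent calls reuse it
```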
"Our research group typically prototypes and iterates new ideas and algorithms in Python and then rewrites the algorithm in C or C++ once the algorithm is proven effective," said Vijay Pande, professor of Chemistry and of Structural Biology and Computer Science at Stanford University. "CUDA support in Python enables us to write performance code while maintaining the productivity offered by Python."
Anaconda Accelerate is available for Continuum Analytics' Anaconda Python offering, and as part of the Wakari browser-based data exploration and code development environment.
View at TechPowerUp Main Site