News Posts matching #CUDA


NVIDIA RTX Voice Now Officially Supported on Non-RTX Cards

NVIDIA should probably start thinking about dropping the RTX moniker from RTX Voice, the (supposedly) AI-based audio noise-cancellation software the company launched about this time last year. At the time, NVIDIA announced it as an exclusive feature for its RTX GPUs, citing their AI-processing capabilities, which led everyone to assume RTX Voice relied on the in-chip Tensor cores. Soon enough, however, mods appeared that got the software running on GTX graphics cards - unofficially going back at least as far as the "hot-oven" Fermi generation - which pointed towards a CUDA-based processing path instead.

It appears that NVIDIA has now decided to officially extend RTX Voice support to other, non-RTX graphics cards, from the latest RTX 30-series all the way down to the GeForce 600 series (essentially any card supported by NVIDIA's 410.18 driver or newer). So if you were hoping to use the software officially on a pre-RTX 20-series graphics card, with no patches, now you can. You can check out our RTX Voice review, where our very own Inle declared it to be "like magic".

NVIDIA Enables GPU Passthrough for Virtual Machines on Consumer-Grade GeForce GPUs

Editor's note: This is not an April Fools' joke.

NVIDIA has long separated professional users and regular gamers with its graphics card offerings. The GeForce lineup is the gaming-oriented one; its main tasks are playing games, displaying graphics, and running some basic CUDA-accelerated software. But what happens if you want to experiment with your GPU? For example, if you are running Linux and want to spin up a Windows virtual machine for gaming, you would have had to fall back to your integrated GPU, as GeForce cards did not allow virtual GPU passthrough. For those purposes, NVIDIA points to its professional lineups, such as Quadro and Tesla.

However, that specific feature is now arriving in the GeForce lineup as well. NVIDIA has announced that it is finally bringing basic virtual machine passthrough functionality to its gaming GPUs. While the feature represents a step in the right direction, it is still limited: GeForce GPU passthrough supports only one virtual machine per GPU, and SR-IOV is still not supported on GeForce. "If you want to enable multiple virtual machines to have direct access to a single GPU or want the GPU to be able to assign virtual functions to multiple virtual machines, you will need to use NVIDIA Tesla, Quadro, or RTX enterprise GPUs," says the NVIDIA FAQ. GeForce virtualization, currently in beta, is supported on R465 or newer drivers.
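
For readers experimenting with passthrough on Linux, a quick sanity check is to see which kernel driver each NVIDIA GPU is currently bound to. The sketch below is only an illustration and assumes a typical VFIO-based passthrough setup on a Linux host; the sysfs paths are standard Linux, but the workflow itself is not prescribed by NVIDIA's FAQ.

```python
# Minimal sketch (assumption: a Linux host preparing VFIO passthrough).
# Lists NVIDIA PCI devices and the kernel driver each one is bound to,
# so you can confirm a GeForce card is attached to vfio-pci before
# assigning it to a single virtual machine.
import os

PCI_ROOT = "/sys/bus/pci/devices"
NVIDIA_VENDOR_ID = "0x10de"

for slot in sorted(os.listdir(PCI_ROOT)):
    dev_path = os.path.join(PCI_ROOT, slot)
    try:
        with open(os.path.join(dev_path, "vendor")) as f:
            vendor = f.read().strip()
    except OSError:
        continue
    if vendor != NVIDIA_VENDOR_ID:
        continue
    driver_link = os.path.join(dev_path, "driver")
    driver = os.path.basename(os.readlink(driver_link)) if os.path.islink(driver_link) else "none"
    print(f"{slot}: NVIDIA device bound to driver '{driver}'")
```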

NVIDIA Unveils AI Enterprise Software Suite to Help Every Industry Unlock the Power of AI

NVIDIA today announced NVIDIA AI Enterprise, a comprehensive software suite of enterprise-grade AI tools and frameworks optimized, certified and supported by NVIDIA, exclusively with VMware vSphere 7 Update 2, separately announced today.

Through a first-of-its-kind industry collaboration to develop an AI-Ready Enterprise platform, NVIDIA teamed with VMware to virtualize AI workloads on VMware vSphere with NVIDIA AI Enterprise. The offering gives enterprises the software required to develop a broad range of AI solutions, such as advanced diagnostics in healthcare, smart factories for manufacturing, and fraud detection in financial services.

NVIDIA Could Give a SUPER Overhaul to its GeForce RTX 3070 and RTX 3080 Graphics Cards

According to kopite7kimi, a well-known leaker of NVIDIA graphics card information, we have some details about NVIDIA's plans to bring back its SUPER series of graphics cards. SUPER cards first appeared in the GeForce RTX 20-series "Turing" generation with the RTX 2080 SUPER and RTX 2070 SUPER, with the RTX 2060 SUPER following. According to the source, NVIDIA plans to give its newest "Ampere" GeForce RTX 30-series a SUPER overhaul as well. Specifically, the company allegedly plans to introduce GeForce RTX 3070 SUPER and RTX 3080 SUPER SKUs to its lineup.

While there is no concrete information about the possible specifications of these cards, we can speculate that, just like with the previous SUPER refresh, the new cards would receive a higher CUDA core count and possibly a memory improvement. Last time, NVIDIA simply added more cores and clocked the GDDR6 memory higher, thereby increasing memory bandwidth. We have to wait and see how the company plans to position these alleged cards, and whether we get them at all, so take this information with a grain of salt.
NVIDIA GeForce RTX 3080 SUPER mock-up (illustrative image only, not a real product)

NVIDIA GeForce RTX 3080 Ti Graphics Card Launch Postponed to February

In the past, we heard rumors about NVIDIA's upcoming GeForce RTX 3080 Ti graphics card. With a January release scheduled, we were just a few weeks away from it. The new graphics card is designed to fill the gap between the RTX 3080 and the higher-end RTX 3090, offering the same GA102 die, the only difference being that the 3080 Ti uses the GA102-250 variant instead of the GA102-300 die found in the RTX 3090. It allegedly has the same CUDA core count of 10,496 cores, the same 82 RT cores, 328 Tensor cores, 328 texture units, and 112 ROPs. However, the RTX 3080 Ti is supposed to bring the GDDR6X memory capacity down to 20 GB, instead of the 24 GB found on the RTX 3090.

However, all of that is going to wait a little longer. Thanks to information obtained by Igor Wallossek of Igor's Lab, we have word that NVIDIA's upcoming high-end GeForce RTX 3080 Ti graphics card has been postponed to a February release. Previous rumors suggested that we would get the card in January with a price tag of $999; that has now changed, and NVIDIA has allegedly pushed the launch to February. It is not yet clear what the cause is, but we speculate that the company cannot meet the high demand the new wave of GPUs is generating.

NVIDIA GeForce RTX 3080 Ti Landing in January at $999

According to an unnamed add-in board (AIB) manufacturer based in Taiwan, NVIDIA is preparing to launch a new GeForce RTX 3000 series "Ampere" graphics card. As reported by the HKEPC website, the Santa Clara-based company is preparing to fill the gap between its top-end GeForce RTX 3090 and the slightly slower RTX 3080. The new product will be called the GeForce RTX 3080 Ti. If you are wondering what the specifications of the new graphics card will look like, you are in luck, because the source has a few pieces of information. The new product will be based on the GA102-250-KD-A1 GPU with a PG133-SKU15 PCB design, and the GPU will carry the same 10,496 CUDA core configuration as the RTX 3090.

The main difference from the RTX 3090 will be a reduced GDDR6X capacity of 20 GB, paired with a 320-bit memory bus. The TGP of the card is limited to 320 W. The sources report that the card will launch sometime in January 2021 at $999. That puts the RTX 3080 Ti in the same price bracket as AMD's recently launched Radeon RX 6900 XT, so it will be interesting to see how the two products compete.

NVIDIA and Atos Team Up to Build World's Fastest AI Supercomputer

NVIDIA today announced that the Italian inter-university consortium CINECA—one of the world's most important supercomputing centers—will use the company's accelerated computing platform to build the world's fastest AI supercomputer.

The new "Leonardo" system, built with Atos, is expected to deliver 10 exaflops of FP16 AI performance to enable advanced AI and HPC converged application use cases. Featuring nearly 14,000 NVIDIA Ampere architecture-based GPUs and NVIDIA Mellanox HDR 200 Gb/s InfiniBand networking, Leonardo will propel Italy as the global leader in AI and high performance computing research and innovation.

NVIDIA Unveils RTX A6000 "Ampere" Professional Graphics Card and A40 vGPU

NVIDIA today unveiled its RTX A6000 professional graphics card, the first professional visualization-segment product based on its "Ampere" graphics architecture. With this, the company appears to be deviating from the Quadro brand for the graphics card, while several software-side features retain the brand. The card is based on the same 8 nm "GA102" silicon as the GeForce RTX 3080, but configured differently. For starters, it gets a mammoth 48 GB of GDDR6 memory across the chip's 384-bit wide memory interface, along with ECC support.

The company did not reveal the GPU's CUDA core count, but mentioned that the card's typical board power is 300 W. The card also gets NVLink support, letting you pair up to two A6000 cards for explicit multi-GPU. It also supports GPU virtualization, including NVIDIA GRID, NVIDIA Quadro Virtual Data Center Workstation, and NVIDIA Virtual Compute Server. The card features a conventional lateral blower-type cooling solution, and its most fascinating aspect is its power input configuration: just one 8-pin EPS power input. We will update this story with more information as it trickles out.
Update 13:37 UTC: The company also unveiled the A40, a headless professional-visualization graphics card dedicated to virtual-GPU/cloud-GPU applications (deployments at scale in data centers). The card has similar specs to the RTX A6000.

Update 13:42 UTC: NVIDIA's website says that both the A40 and RTX A6000 use a 4+4 pin EPS connector (and not an 8-pin PCIe connector) for power input. An 8-pin EPS connector is capable of delivering up to 336 W (4 × 7 A at 12 V).

Folding @ Home Bakes in NVIDIA CUDA Support for Increased Performance

GPU folders make up a huge fraction of the number-crunching power of Folding@home, enabling us to help projects like the COVID Moonshot (@covid_moonshot) open-science drug discovery effort. The Moonshot is using Folding@home to evaluate thousands of molecules per week, and synthesizing hundreds of them, in its quest to develop a low-cost, patent-free COVID-19 therapy that could be taken as a simple twice-daily pill.

As of today, your folding GPUs just got a big power-up! Thanks to NVIDIA engineers, our Folding@home GPU cores—based on the open source OpenMM toolkit—are now CUDA-enabled, allowing you to run GPU projects significantly faster. Typical GPUs will see 15-30% speedups on most Folding@home projects, drastically increasing both science throughput and the points per day (PPD) these GPUs generate.
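
For the curious, the underlying OpenMM toolkit lets an application enumerate its compute platforms and pick CUDA explicitly. The snippet below is a generic OpenMM illustration, not Folding@home's actual core code; it assumes a recent OpenMM build (the `openmm` package) with the CUDA platform compiled in.

```python
# Illustrative only: enumerate OpenMM compute platforms and prefer CUDA.
# Assumes OpenMM is installed with CUDA support (older releases import the
# same classes from simtk.openmm instead). Not the Folding@home core itself.
import openmm as mm

available = [mm.Platform.getPlatform(i).getName()
             for i in range(mm.Platform.getNumPlatforms())]
print("Available platforms:", available)  # e.g. ['Reference', 'CPU', 'CUDA', 'OpenCL']

name = "CUDA" if "CUDA" in available else "OpenCL"
platform = mm.Platform.getPlatformByName(name)
print("Selected platform:", platform.getName())
```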

Editor's Note: TechPowerUp features a strong community surrounding the Folding@home project. Remember to fold under the TPU team, if you so wish: we're currently #44 in the world, but have plans for complete world domination. You just have to enter 50711 as your team ID. It's a way to contribute to research into various diseases affecting humanity for nothing more than a few clicks - and the power cost of the computations.

NVIDIA RTX IO Detailed: GPU-assisted Storage Stack Here to Stay Until CPU Core-counts Rise

NVIDIA announced its RTX IO technology at the GeForce "Ampere" launch event. Storage is the weakest link in a modern computer from a performance standpoint, and SSDs have had a transformational impact. With modern SSDs leveraging PCIe, consumer storage speeds are now bound to grow, with each new PCIe generation doubling per-lane IO bandwidth. PCI-Express Gen 4 enables 64 Gbps of bandwidth per direction on M.2 NVMe SSDs; AMD has already implemented it across its Ryzen desktop platform, and Intel has it on its latest mobile platforms and is expected to bring it to the desktop with "Rocket Lake." While more storage bandwidth is always welcome, the storage processing stack (the work of moving ones and zeroes between the application and the physical layer) is still handled by the CPU. As storage bandwidth rises, the IO load on the CPU rises proportionally, to the point where it can begin to impact performance. Microsoft sought to address this emerging challenge with the DirectStorage API, and NVIDIA wants to build on it.

According to tests by NVIDIA, reading uncompressed data from an SSD at 7 GB/s (the typical maximum sequential read speed of client-segment PCIe Gen 4 M.2 NVMe SSDs) requires the full utilization of two CPU cores. The OS typically spreads this workload across all available CPU cores/threads on a modern multi-core CPU. Things change dramatically when compressed data (such as game resources) is being read in a gaming scenario with a high number of IO requests. Modern AAA games have hundreds of thousands of individual resources crammed into compressed resource-pack files.
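
To get a feel for why CPU-side decompression becomes the bottleneck, the rough benchmark below measures single-core zlib decompression throughput. The data, compression settings, and figures are arbitrary assumptions purely for illustration, but on a typical desktop CPU a single core decompresses DEFLATE at well under the 7 GB/s a Gen 4 SSD can deliver.

```python
# Rough single-core decompression throughput test (illustrative assumptions:
# zlib/DEFLATE, highly compressible synthetic data, default compression level).
# The point is the order of magnitude, not a precise figure.
import time
import zlib

payload = (b"game asset block " * 4096) * 64   # ~4.5 MB of compressible data
compressed = zlib.compress(payload)

rounds = 200
start = time.perf_counter()
for _ in range(rounds):
    zlib.decompress(compressed)
elapsed = time.perf_counter() - start

gb_out = len(payload) * rounds / 1e9
print(f"Decompressed ~{gb_out:.2f} GB in {elapsed:.2f} s "
      f"=> ~{gb_out / elapsed:.2f} GB/s on one core")
```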

NVIDIA GeForce RTX 3090 and 3080 Specifications Leaked

Just ahead of the September launch, specifications of NVIDIA's upcoming RTX Ampere lineup have been leaked by industry sources over at VideoCardz. According to the website, three alleged GeForce SKUs are being launched in September - RTX 3090, RTX 3080, and RTX 3070. The new lineup features major improvements: 2nd generation ray-tracing cores and 3rd generation tensor cores made for AI and ML. When it comes to connectivity and I/O, the new cards use the PCIe 4.0 interface and have support for the latest display outputs like HDMI 2.1 and DisplayPort 1.4a.

The GeForce RTX 3090 comes with 24 GB of GDDR6X memory running on a 384-bit bus at 19.5 Gbps, which works out to 936 GB/s of memory bandwidth. The card features the GA102-300 GPU with 5,248 CUDA cores running at 1695 MHz, and is rated for 350 W TGP (board power). While the Founders Edition cards will use NVIDIA's new 12-pin power connector, non-Founders Edition cards from board partners like ASUS, MSI and Gigabyte will be powered by two 8-pin connectors. Next up are the specs of the GeForce RTX 3080, a GA102-200 based card with 4,352 CUDA cores running at 1710 MHz, paired with 10 GB of GDDR6X memory running at 19 Gbps. The memory sits on a 320-bit bus that achieves 760 GB/s of bandwidth. The board is rated at 320 W and is designed to be powered by dual 8-pin connectors. And finally, there is the GeForce RTX 3070, built around the GA104-300 GPU with a yet unknown number of CUDA cores. We only know that it uses older non-X GDDR6 memory running at 16 Gbps on a 256-bit bus. The GPUs are supposedly manufactured on TSMC's 7 nm process, possibly the EUV variant.
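
The quoted bandwidth figures follow directly from bus width and per-pin data rate; the short calculation below simply reproduces the leaked numbers (the formula is standard, the specific data rates are from the leak).

```python
# Memory bandwidth (GB/s) = (bus width in bits / 8) * per-pin data rate in Gbps.
# The figures below are the leaked RTX 3090 / RTX 3080 values.
def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

print(bandwidth_gbs(384, 19.5))  # RTX 3090: 936.0 GB/s
print(bandwidth_gbs(320, 19.0))  # RTX 3080: 760.0 GB/s
```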

NVIDIA Announces GTC 2020 Keynote to be Held on October 5-9

NVIDIA today announced that it will host another GTC in the coming month of October. To be held between October 5th and 9th, the newly announced event will bring updates to NVIDIA's products and technologies, as well as provide an opportunity for numerous computer-science companies and individuals to take center stage to discuss new and upcoming technologies. More than 500 sessions will form the backbone of GTC, with seven separate programming streams running across North America, Europe, Israel, India, Taiwan, Japan and Korea - each with access to live demos, specialized content, local startups and sponsors.

This GTC keynote follows the May 2020 keynote where the world was introduced to NVIDIA's Ampere-based GA100 accelerator. A gaming and consumer-oriented event is also taking place on September 1st, with expectations set high for NVIDIA's next generation of consumer graphics products. Although, if recent rumors of a $2,000 RTX 3090 graphics card are anything to go by, expectations won't be the only thing soaring by then.

Dynics Announces AI-enabled Vision System Powered by NVIDIA T4 Tensor Core GPU

Dynics, Inc., a U.S.-based manufacturer of industrial-grade computer hardware, visualization software, network security, network monitoring and software-defined networking solutions, today announced the XiT4 Inference Server, which helps industrial manufacturing companies increase their yield and provide more consistent manufacturing quality.

Artificial intelligence (AI) is increasingly being integrated into modern manufacturing to improve and automate processes, including 3D vision applications. The XiT4 Inference Server, powered by the NVIDIA T4 Tensor Core GPUs, is a fan-less hardware platform for AI, machine learning and 3D vision applications. AI technology is allowing manufacturers to increase efficiency and throughput of their production, while also providing more consistent quality due to higher accuracy and repeatability. Additional benefits are fewer false negatives (test escapes) and fewer false positives, which reduce downstream re-inspection needs, all leading to lower costs of manufacturing.

GALAX Designs a GeForce GTX 1650 "Ultra" with TU106 Silicon

NVIDIA board partners carving GeForce RTX 20-series and GTX 16-series SKUs out of ASICs they weren't originally based on is becoming more common, but GALAX has taken things a step further. The company just launched a GeForce GTX 1650 (GDDR6) graphics card based on the "TU106" silicon (ASIC code: TU106-125-A1). GALAX carved a GTX 1650 out of this chip by disabling all of its RT cores, all of its Tensor cores, and a whopping 61% of its CUDA cores, along with proportionate reductions in TMU and ROP counts. The memory bus width has been halved from 256-bit down to 128-bit.
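
The "61%" figure checks out if you assume the full TU106 configuration of 2,304 CUDA cores (as in the RTX 2070) and the GTX 1650 GDDR6's 896 cores - both assumptions on our part rather than figures from GALAX; the quick check below is just that arithmetic.

```python
# Quick check of the "61% disabled" figure under two assumed core counts:
# a full TU106 with 2,304 CUDA cores and a GTX 1650 (GDDR6) keeping 896.
full_tu106_cores = 2304   # assumption: full TU106 configuration (RTX 2070)
gtx_1650_cores = 896      # assumption: GTX 1650 (GDDR6) core count

disabled = 1 - gtx_1650_cores / full_tu106_cores
print(f"Disabled CUDA cores: {disabled:.1%}")   # ~61.1%
```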

The card, however, is only listed by GALAX's Chinese regional arm. Its marketing name is "GALAX GeForce GTX 1650 Ultra," with "Ultra" being a GALAX brand extension and not an NVIDIA SKU (i.e. the GPU isn't called "GTX 1650 Ultra"). The clock speeds are identical to those of the original, TU117-based GTX 1650: 1410 MHz base, 1590 MHz GPU Boost, and 12 Gbps (GDDR6-effective) memory.

Aetina Launches New Edge AI Computer Powered by the NVIDIA Jetson

Aetina Corp., a provider of high-performance GPGPU solutions, announced the new AN110-XNX edge AI computer leveraging the powerful capabilities of the NVIDIA Jetson Xavier NX, expanding its range of edge AI systems built on the Jetson platform for applications in smart transportation, factories, retail, healthcare, AIoT, robotics, and more.

The AN110-XNX combines the NVIDIA Jetson Xavier NX module and the Aetina AN110 carrier board in a compact form factor of 87.4 x 68.2 x 52 mm (with fan). The AN110-XNX supports the MIPI CSI-2 interface for 1x 4K or 2x FHD cameras, to handle intensive AI workloads from ultra-high-resolution cameras and enable more accurate image analysis. It is as small as Aetina's AN110-NAO based on the NVIDIA Jetson Nano platform, but delivers more powerful AI computing via the new Jetson Xavier NX. With 384 CUDA cores, 48 Tensor cores, and cloud-native capability, the Jetson Xavier NX delivers up to 21 TOPS and is an ideal platform for accelerating AI applications. Bundled with the latest NVIDIA JetPack 4.4 SDK, the energy-efficient module significantly expands the choices available to developers and customers looking for embedded edge-computing options that demand increased performance to support AI workloads, but are constrained by size, weight, power budget, or cost.

DirectX Coming to Linux...Sort of

Microsoft is preparing to add DirectX API support to WSL (Windows Subsystem for Linux). The latest Windows Subsystem for Linux 2 will virtualize DirectX for Linux applications running on top of it. WSL is a compatibility layer that lets Linux apps run on top of Windows. Unlike Wine, which attempts to translate Direct3D commands to OpenGL, what Microsoft is proposing is a real DirectX interface for apps in WSL, which can essentially talk to the hardware (the host's kernel-mode GPU driver) directly.

To this effect, Microsoft introduced a Linux edition of DXGkrnl, a new kernel-mode driver for Linux that talks to the DXGkrnl driver of the Windows host. With this, Microsoft is promising to expose the full Direct3D 12, DxCore, and DirectML stack. It will also serve as a conduit for third-party APIs such as OpenGL, OpenCL, Vulkan, and CUDA. Microsoft expects to ship this feature-packed WSL with WDDM 2.9 (so a future version of Windows 10).
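
One practical consequence: with this plumbing in place, a WSL distribution exposes the GPU through a paravirtualized /dev/dxg device node rather than the usual DRM nodes. The trivial check below is purely illustrative; the /dev/dxg path is what Microsoft has described for WSL GPU support, and nothing else here is an official API.

```python
# Quick check (illustrative): inside a WSL 2 distro with GPU support, the
# DXGkrnl plumbing surfaces as a /dev/dxg device node instead of the usual
# /dev/dri/* nodes seen on a native Linux install.
import os

if os.path.exists("/dev/dxg"):
    print("Found /dev/dxg - WSL GPU paravirtualization is available")
else:
    print("No /dev/dxg - either not WSL 2, or GPU support is not enabled")
```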

AAEON Unveils AI and Edge Computing Solutions Powered by NVIDIA

AAEON, a leading developer of embedded AI and edge-computing solutions, today announced it is unveiling several new rugged embedded platforms—augmenting an already extensive lineup of AAEON AI edge-computing solutions powered by the NVIDIA Jetson platform. The new AAEON products provide key interfaces needed for edge computing in a small form factor, making it easier to build applications for all levels of users, from makers to more advanced developers for deployments in the field.

AAEON also introduced a new version of the popular BOXER-8120AI, now featuring the Jetson TX2 4 GB module, providing an efficient and cost-effective solution for AI edge computing with 256 CUDA cores delivering processing speeds up to 1.3 TFLOPS.

"Partnering with an AI and edge computing leader like NVIDIA supports our mission to deliver more diversified embedded products and solutions at higher quality standards," said Alex Hsueh, Senior Director of AAEON's System Platform Division. "These new offerings powered by the Jetson platform complement our existing lineup of rugged embedded products, providing an optimal combination of performance and price in a smaller form factor for customers to easily deploy across a full range of applications."

NVIDIA RTX Voice Modded to Work on Non-RTX GeForce GPUs

NVIDIA made headlines with the release of its free RTX Voice software, which gives your communication apps AI-powered computational noise cancellation. The software is very effective at what it does, but requires a GeForce RTX 20-series GPU. PC enthusiast David Lake, over at the Guru3D Forums, disagrees. With fairly easy modifications to the installer payload, Lake was able to remove its system-requirements gate, install it on a machine with a TITAN V graphics card, and find that the software works as intended.

Our first instinct was to point out that the "Volta" based TITAN V features Tensor cores and hence hardware AI capabilities - until we found dozens of users across the Guru3D forums, Reddit, and Twitter reporting that the mod gets RTX Voice working on their GTX 16-series, "Pascal," "Maxwell," and even older "Fermi" hardware. So in all likelihood, RTX Voice uses a CUDA-based GPGPU codepath rather than something fancy leveraging Tensor cores. Find instructions on how to mod the RTX Voice installer in the Guru3D Forums thread here.

Three Unknown NVIDIA GPUs GeekBench Compute Score Leaked, Possibly Ampere?

(Update, March 4th: Another NVIDIA graphics card has been discovered in the Geekbench database, this one featuring a total of 124 CUs. This could amount to some 7,936 CUDA cores, should NVIDIA keep the same 64 CUDA cores per CU - though this ratio has changed in the past, as when NVIDIA halved the number of CUDA cores per CU from Pascal to Turing. The 124 CU graphics card is clocked at 1.1 GHz, features 32 GB of HBM2e, and delivers a score of 222,377 points in the Geekbench benchmark. We again stress that these could be just engineering samples with conservative clocks, and that final performance could be even higher.)
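
The core-count guess in the update is simple multiplication under an assumed cores-per-CU ratio; the snippet below just makes that assumption explicit, using 64 per CU (as on Turing) and 128 per CU (as on consumer Pascal) as the two scenarios.

```python
# Speculative arithmetic only: OpenCL reports compute units (CUs), and the
# CUDA core estimate depends entirely on the assumed cores-per-CU ratio.
compute_units = 124

for cores_per_cu, arch_note in ((64, "Turing-like"), (128, "consumer Pascal-like")):
    print(f"{compute_units} CUs x {cores_per_cu} ({arch_note}) = "
          f"{compute_units * cores_per_cu} CUDA cores")
```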

NVIDIA is expected to launch its next-generation Ampere lineup of GPUs during the GPU Technology Conference (GTC) taking place from March 22nd to March 26th. Just a few weeks before the release of these new GPUs, Geekbench 5 compute scores measuring the OpenCL performance of unknown GPUs, which we assume are part of the Ampere lineup, have appeared. Thanks to Twitter user "_rogame" (@_rogame), who obtained the Geekbench database entries, we have some information about the CUDA core configuration, memory, and performance of the upcoming cards.

NVIDIA to Reuse Pascal for Mobility-geared MX300 Series

NVIDIA will apparently still be using Pascal when it launches its next generation of low-power discrete graphics solutions for mobile systems. The MX300 series will replace the current crop of MX200 parts (segmented into three products: the MX230, the 10 W MX250, and the 25 W MX250). The new MX300 series keeps the dual-tiered approach, but ups the ante on the top-of-the-line MX350. Even though it's still Pascal on a 14 nm process, the MX350 should see an increase in CUDA cores to 640 (by using NVIDIA's Pascal GP107 chip), up from the MX250's 384. Performance, then, should be comparable to the NVIDIA GTX 1050.

The MX330, on the other hand, will keep the specifications of the MX250, which signals a tier increase from the 256 CUDA cores of the MX230 to 384. This should translate into appreciable performance gains for the new MX300 series, despite it staying on NVIDIA's Pascal architecture. The new lineup is expected to be announced in February.

Rumor: NVIDIA's Next Generation GeForce RTX 3080 and RTX 3070 "Ampere" Graphics Cards Detailed

NVIDIA's next generation of graphics cards, codenamed Ampere, is set to arrive sometime this year, presumably around GTC 2020, which takes place starting March 22nd. Before NVIDIA CEO Jensen Huang officially reveals the specifications of these new GPUs, we have the latest round of rumors coming our way. According to VideoCardz, which cites multiple sources, the die configurations of the upcoming GeForce RTX 3070 and RTX 3080 have been detailed. Built on Samsung's latest 7 nm manufacturing process, this generation of NVIDIA GPUs promises a big improvement over the previous one.

For starters, the two dies that have appeared carry the codenames GA103 and GA104, corresponding to the RTX 3080 and RTX 3070 respectively. Perhaps the biggest surprise is the Streaming Multiprocessor (SM) count. The smaller GA104 die has as many as 48 SMs, resulting in 3,072 CUDA cores, while the bigger, oddly named GA103 die has as many as 60 SMs, for 3,840 CUDA cores in total. These increases in SM count should result in a notable performance uplift across the board. Alongside the higher SM counts, there are also new memory bus widths. The smaller GA104 die that should end up in the RTX 3070 uses a 256-bit memory bus allowing for 8 or 16 GB of GDDR6 memory, while its bigger brother, the GA103, has a 320-bit bus that allows the card to be configured with either 10 or 20 GB of GDDR6. In the images below you can check out the alleged diagrams for yourself and judge whether they look fake or not; however, it is recommended to take this rumor with a grain of salt.
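
Both the core counts and the memory capacities in the rumor follow from simple configuration math: 64 CUDA cores per SM (the Turing ratio, assumed here to carry over) and one 8 Gb (1 GB) or 16 Gb (2 GB) GDDR6 chip per 32-bit memory controller. The sketch below reproduces the rumored figures under exactly those assumptions.

```python
# Rumor sanity-check under two assumptions: 64 CUDA cores per SM (as on
# Turing) and one 8 Gb (1 GB) or 16 Gb (2 GB) GDDR6 chip per 32-bit channel.
CORES_PER_SM = 64

def cuda_cores(sm_count: int) -> int:
    return sm_count * CORES_PER_SM

def memory_options_gb(bus_width_bits: int) -> tuple[int, int]:
    channels = bus_width_bits // 32          # one GDDR6 chip per 32-bit channel
    return channels * 1, channels * 2        # 1 GB or 2 GB per chip

print("GA104:", cuda_cores(48), "cores,", memory_options_gb(256), "GB")  # 3072, (8, 16)
print("GA103:", cuda_cores(60), "cores,", memory_options_gb(320), "GB")  # 3840, (10, 20)
```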

NVIDIA Introduces DRIVE AGX Orin Platform

NVIDIA today introduced NVIDIA DRIVE AGX Orin, a highly advanced software-defined platform for autonomous vehicles and robots. The platform is powered by a new system-on-a-chip (SoC) called Orin, which consists of 17 billion transistors and is the result of four years of R&D investment. The Orin SoC integrates NVIDIA's next-generation GPU architecture and Arm Hercules CPU cores, as well as new deep learning and computer vision accelerators that, in aggregate, deliver 200 trillion operations per second—nearly 7x the performance of NVIDIA's previous generation Xavier SoC.

Orin is designed to handle the large number of applications and deep neural networks that run simultaneously in autonomous vehicles and robots, while achieving systematic safety standards such as ISO 26262 ASIL-D. Built as a software-defined platform, DRIVE AGX Orin is developed to enable architecturally compatible platforms that scale from a Level 2 to full self-driving Level 5 vehicle, enabling OEMs to develop large-scale and complex families of software products. Since both Orin and Xavier are programmable through open CUDA and TensorRT APIs and libraries, developers can leverage their investments across multiple product generations.

NVIDIA and Tech Leaders Team to Build GPU-Accelerated Arm Servers

NVIDIA today introduced a reference design platform that enables companies to quickly build GPU-accelerated Arm-based servers, driving a new era of high performance computing for a growing range of applications in science and industry.

Announced by NVIDIA founder and CEO Jensen Huang at the SC19 supercomputing conference, the reference design platform — consisting of hardware and software building blocks — responds to growing demand in the HPC community to harness a broader range of CPU architectures. It allows supercomputing centers, hyperscale-cloud operators and enterprises to combine the advantage of NVIDIA's accelerated computing platform with the latest Arm-based server platforms.

New NVIDIA EGX Edge Supercomputing Platform Accelerates AI, IoT, 5G at the Edge

NVIDIA today announced the NVIDIA EGX Edge Supercomputing Platform - a high-performance, cloud-native platform that lets organizations harness rapidly streaming data from factory floors, manufacturing inspection lines and city streets to securely deliver next-generation AI, IoT and 5G-based services at scale, with low latency.

Early adopters of the platform - which combines NVIDIA CUDA-X software with NVIDIA-certified GPU servers and devices - include Walmart, BMW, Procter & Gamble, Samsung Electronics and NTT East, as well as the cities of San Francisco and Las Vegas.

Primate Labs Introduces GeekBench 5, Drops 32-bit Support

Primate Labs, developer of the ubiquitous benchmarking application GeekBench, has announced the release of version 5 of the software. The new version brings numerous changes, and one of the most important (since it affects compatibility) is that it will only be distributed as a 64-bit application. Some under-the-hood changes include additions to the CPU benchmark tests (including machine learning, augmented reality, and computational photography), as well as an increased memory footprint for tests, so as to better gauge the impact of your memory subsystem on your system's performance. Also introduced are different threading models for CPU benchmarking, allowing for changes in workload attribution and the corresponding impact on CPU performance.

On the compute side of things, GeekBench 5 now supports the Vulkan API, which joins CUDA, Metal, and OpenCL. GPU-accelerated compute for computer-vision tasks such as Stereo Matching, and augmented-reality tasks such as Feature Matching, is also available. For iOS users, there is now a Dark Mode for the results interface. GeekBench 5 is available now, 50% off, on Primate Labs' store.