News Posts matching #CUDA


NVIDIA to Unveil GeForce GTX TITAN P at Gamescom

NVIDIA is preparing to launch its flagship graphics card based on the "Pascal" architecture, the so-called GeForce GTX TITAN P, at the 2016 Gamescom, held in Cologne, Germany, between 17-21 August. The card is expected to be based on the GP100 silicon, and could likely come in two variants - 16 GB and 12 GB. The two differ by memory bus width besides memory size. The 16 GB variant could feature four HBM2 stacks over a 4096-bit memory bus; while the 12 GB variant could feature three HBM2 stacks, and a 3072-bit bus. This approach by NVIDIA is identical to the way it carved out Tesla P100-based PCIe accelerators, based on this ASIC. The cards' TDP could be rated between 300-375W, drawing power from two 8-pin PCIe power connectors.
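The two rumored configurations follow directly from how HBM2 stacks are wired up. A quick sketch, assuming standard 1024-bit-per-stack HBM2 interfaces and 4 GB (4-hi) stacks:

```python
# Hedged sketch: back-of-envelope check of the rumored HBM2 configurations.
# Each HBM2 stack exposes a 1024-bit interface; the 4 GB-per-stack capacity
# is an assumption (4-hi stacks).
HBM2_BUS_PER_STACK = 1024   # bits
STACK_CAPACITY_GB = 4       # assumed 4-hi stacks

def hbm2_config(stacks):
    """Return (total bus width in bits, total capacity in GB)."""
    return stacks * HBM2_BUS_PER_STACK, stacks * STACK_CAPACITY_GB

print(hbm2_config(4))  # 16 GB variant -> (4096, 16)
print(hbm2_config(3))  # 12 GB variant -> (3072, 12)
```

Both rumored variants check out: four stacks give the 4096-bit/16 GB configuration, three stacks the 3072-bit/12 GB one.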

The GP100-based GTX TITAN P isn't the only high-end graphics card lineup targeted at gamers and PC enthusiasts. NVIDIA is also working on the GP102 silicon, positioned between the GP104 and the GP100. This chip could lack the FP64 CUDA cores found on the GP100, and feature up to 3,840 CUDA cores of the same kind found on the GP104. The GP102 is also expected to feature a simpler 384-bit GDDR5X memory interface. NVIDIA could base the GTX 1080 Ti on this chip.

NVIDIA Announces a PCI-Express Variant of its Tesla P100 HPC Accelerator

NVIDIA announced a PCI-Express add-on card variant of its Tesla P100 HPC accelerator, at the 2016 International Supercomputing Conference, held in Frankfurt, Germany. The card is about 30 cm long, 2-slot thick, and of standard height, and is designed for PCIe multi-slot servers. The company had introduced the Tesla P100 earlier this year in April, with a dense mezzanine form-factor variant for servers with NVLink.

The PCIe variant of the P100 offers slightly lower performance than the NVLink variant because of lower clock speeds, although the core configuration of the GP100 silicon remains unchanged. It offers FP64 (double-precision floating-point) performance of 4.70 TFLOP/s, FP32 (single-precision) performance of 9.30 TFLOP/s, and FP16 performance of 18.7 TFLOP/s, compared to the NVLink variant's 5.3 TFLOP/s, 10.6 TFLOP/s, and 21 TFLOP/s, respectively. The card comes in two sub-variants based on memory: a 16 GB variant with 720 GB/s memory bandwidth and 4 MB L2 cache, and a 12 GB variant with 548 GB/s and 3 MB L2 cache. Both sub-variants feature 3,584 CUDA cores based on the "Pascal" architecture, and a core clock speed of 1300 MHz.
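The throughput figures above can be reproduced from the core count and clock speed. A sketch, assuming one fused multiply-add (2 FLOPs) per core per clock, and GP100's 2:1 FP32:FP64 and 1:2 FP32:FP16 rate ratios:

```python
# Hedged sketch: deriving the quoted Tesla P100 (PCIe) throughput figures.
# Peak FLOP/s = cores x clock x 2 (one fused multiply-add per clock).
# Core count and ~1300 MHz clock are from the article above.
def peak_tflops(cores, clock_ghz, flops_per_clock=2):
    return cores * clock_ghz * flops_per_clock / 1000.0

fp32 = peak_tflops(3584, 1.3)   # ~9.32 TFLOP/s, matching the quoted 9.30
fp64 = fp32 / 2                 # ~4.66 TFLOP/s (half-rate FP64 on GP100)
fp16 = fp32 * 2                 # ~18.6 TFLOP/s (double-rate FP16)
print(round(fp32, 2), round(fp64, 2), round(fp16, 2))
```

All three numbers land within rounding distance of the quoted figures.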

NVIDIA's Next Flagship Graphics Cards will be the GeForce X80 Series

With the GeForce GTX 900 series, NVIDIA has exhausted its GeForce GTX nomenclature, according to a sensational scoop from the rumor mill. Instead of going with the GTX 1000 series that has one digit too many, the company is turning the page on the GeForce GTX brand altogether. The company's next-generation high-end graphics card series will be the GeForce X80 series. Based on the performance-segment "GP104" and high-end "GP100" chips, the GeForce X80 series will consist of the performance-segment GeForce X80, the high-end GeForce X80 Ti, and the enthusiast-segment GeForce X80 TITAN.

Based on the "Pascal" architecture, the GP104 silicon is expected to feature as many as 4,096 CUDA cores. It will also feature 256 TMUs, 128 ROPs, and a GDDR5X memory interface, with 384 GB/s memory bandwidth. 6 GB could be the standard memory amount. Its texture- and pixel-fillrates are rated to be 33% higher than those of the GM200-based GeForce GTX TITAN X. The GP104 chip will be built on the 16 nm FinFET process. The TDP of this chip is rated at 175W.
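The "33% higher fillrate" claim is internally consistent with the rumored unit counts. Fillrate scales with unit count times clock, and the GTX TITAN X's GM200 unit counts are known (192 TMUs, 96 ROPs), so at equal clocks the gain reduces to a unit-count ratio:

```python
# Hedged sketch: checking the "33% higher fillrate than GTX TITAN X" claim.
# GP104 figures are rumored; GM200 (GTX TITAN X) unit counts are known.
gp104_tmus, gp104_rops = 256, 128    # rumored
gm200_tmus, gm200_rops = 192, 96     # GTX TITAN X

texture_gain = gp104_tmus / gm200_tmus - 1
pixel_gain = gp104_rops / gm200_rops - 1
print(f"texture: +{texture_gain:.0%}, pixel: +{pixel_gain:.0%}")
# Both work out to +33% at equal clock speeds, matching the rumor.
```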

NVIDIA Coming Around to Vulkan Support

NVIDIA is preparing to add support for Vulkan, the upcoming 3D graphics API by Khronos, and successor to OpenGL, to its feature-set. The company's upcoming GeForce 358.66 series driver will introduce support for Vulkan. What makes matters particularly interesting is the API itself. Vulkan is heavily based on AMD's Mantle API, which the company gracefully retired in favor of DirectX 12, and committed its code to Khronos. The 358 series drivers also reportedly feature function declarations in their CUDA code for upcoming NVIDIA GPU architectures, such as Pascal and Volta.

BIOSTAR Announces Gaming H170T Mainboard and GeForce GTX 980 Ti Graphics Card

BIOSTAR's latest gaming products cover both the mainboard and VGA card spaces with two products that offer a sweet spot for performance versus investment in hardware. The motherboard is the Gaming H170T, the best-value motherboard of the Gaming "Skylake" platform (other brands start at $169 and up), and the GPU is the Gaming GeForce GTX 980 Ti, a 6 GB GDDR5, 384-bit, full-size PCB, high-end 3D graphics solution supporting NVIDIA's PhysX and CUDA technologies at 4K resolutions.

BIOSTAR's latest mainboard, the Gaming H170T, is based on Intel's H170 chipset, a single-chip design that supports Intel's 6th-generation socket LGA 1151 Core processors. The Gaming H170T comes with Hi-Fi technology built in, delivering Blu-ray-grade audio. Its Puro Hi-Fi feature integrates an independent audio power design with a built-in amplifier; the technology utilizes audio components with an independent power delivery design for a significant reduction in electronic noise, producing superb sound quality. Moreover, the Gaming H170T supports USB 3.0, PCIe M.2 (32 Gb/s), and SATA Express (16 Gb/s), and has DisplayPort for monitor output.

NVIDIA Preparing a dual-GM200 Graphics Card

If NVIDIA could make a dual-GK110 graphics card (the forgettable $2,999 GTX TITAN-Z), it's only conceivable that it could launch one based on its newer and slightly more energy-efficient GM200 chip. According to a WCCFTech report, the company is doing just that. The dual-GM200 graphics card could bear the company's coveted "GTX TITAN" branding, and could be a doubling of the GTX TITAN X, with twice as many CUDA cores (6,144 in all), TMUs (384 in all), ROPs (192 in all), and memory (24 GB in all), spread across two GPUs in an SLI-on-a-stick solution. It remains to be seen whether NVIDIA gets the pricing right this second time around.

NVIDIA Doubles Performance for Deep Learning Training

NVIDIA today announced updates to its GPU-accelerated deep learning software that will double deep learning training performance. The new software will empower data scientists and researchers to supercharge their deep learning projects and product development work by creating more accurate neural networks through faster model training and more sophisticated model design.

The NVIDIA DIGITS Deep Learning GPU Training System version 2 (DIGITS 2) and NVIDIA CUDA Deep Neural Network library version 3 (cuDNN 3) provide significant performance enhancements and new capabilities. For data scientists, DIGITS 2 now delivers automatic scaling of neural network training across multiple high-performance GPUs. This can double the speed of deep neural network training for image classification compared to a single GPU.

NVIDIA GeForce GTX 980 Ti Smiles for the Camera

Here are some of the first pictures of an NVIDIA GeForce GTX 980 Ti graphics card in the flesh. As predicted, the reference-design board reuses the PCB of the GeForce GTX TITAN-X, and its cooler is a silver version of its older sibling's. According to an older report, the GTX 980 Ti will be carved out of the 28 nm GM200 silicon by disabling 2 of its 24 SMM units, resulting in a CUDA core count of 2,816. The card retains the 384-bit GDDR5 memory bus width, but holds 6 GB of memory, half that of the GTX TITAN-X. The card is expected to launch in early June 2015. NVIDIA's add-in card (AIC) partners will be free to launch custom-design boards with this SKU, so you could hold out for the MSI Lightnings, EVGA Classifieds, ASUS Strixes, Gigabyte G1s, and the like.
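The 2,816-core figure follows from Maxwell's SMM layout, where each SMM holds 128 CUDA cores:

```python
# Hedged sketch: deriving the 2,816 CUDA-core figure. Each "Maxwell" SMM
# holds 128 CUDA cores; GM200 has 24 SMMs in total.
CORES_PER_SMM = 128
GM200_SMMS = 24

def cuda_cores(enabled_smms):
    return enabled_smms * CORES_PER_SMM

print(cuda_cores(GM200_SMMS))      # GTX TITAN X (fully enabled): 3072
print(cuda_cores(GM200_SMMS - 2))  # GTX 980 Ti (2 SMMs disabled): 2816
```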

NVIDIA GeForce GTX 980 Ti Silicon Marked "GM200-310"

NVIDIA's upcoming high-end single-GPU graphics card, based on the GM200 silicon that debuted with the GTX TITAN X, will feature a chip marked "GM200-310." The SKU will be named GeForce GTX 980 Ti, and is more likely to be priced around the $600-650 mark than to replace the $550 GTX 980 off the shelves. Going by the way NVIDIA re-positioned the GTX 780 to $499 with the introduction of the GTX 780 Ti, we imagine something similar could happen to the GTX 980. From what we've gathered so far, the GTX 980 Ti will be based on the GM200 silicon. Its CUDA core count is unknown, but it wouldn't surprise us if it's unchanged from the GTX TITAN X. Its different SKU numbering needn't indicate a different CUDA core count: the GTX 780 Ti and GTX TITAN Black had different numbering, but the same 2,880 CUDA cores.

The card will feature 6 GB of GDDR5 memory across the chip's 384-bit wide memory interface. It will feature five display outputs, similar to those of the GTX 980. Unlike with the GTX TITAN X, NVIDIA partners will have the freedom to launch custom-design GTX 980 Ti products from day one. There are two theories doing the rounds on when NVIDIA plans to launch this card. One suggests it could launch mere weeks from now, possibly on the sidelines of Computex. The other suggests it will launch toward the end of summer, as NVIDIA wants to make the most cash from its existing GTX 980 inventory.

ZOTAC Unveils a Pair of 4 GB GeForce GTX 960 Graphics Cards

ZOTAC joined the 4 GB GeForce GTX 960 party with two factory-overclocked models. The ZT-90308-10M features a minor overclock of 1177 MHz core, with 1240 MHz GPU Boost (vs. reference clocks of 1126/1178 MHz); while the AMP! variant (ZT-90309-10M) offers a higher 1266 MHz core clock with 1329 MHz GPU Boost. The memory clock is untouched on both cards, at 7.00 GHz (GDDR5-effective).

The ZT-90308-10M features a more cost-effective dual-fan cooling solution, while the AMP! offers a meatier one. Both cards draw power from a single 8-pin PCIe power connector, and come with back-plates. Display outputs on both include three DisplayPort 1.2, one HDMI 2.0, and one DVI connector. Based on the 28 nm GM206 silicon, the GTX 960 features 1,024 CUDA cores, 64 TMUs, 32 ROPs, and a 128-bit wide GDDR5 memory interface. ZOTAC didn't announce pricing or availability information.

NVIDIA GeForce GTX TITAN-X Pictured Up-close

Here are some of the first close-up shots of NVIDIA's new flagship graphics card, the GeForce GTX TITAN-X, outside Jen-Hsun Huang's Rafiki moment at a GDC presentation. If we were to throw in an educated guess, NVIDIA probably coined the name "TITAN-X" as it sounds like "Titan Next," much like it chose "TITAN-Z" as it sounds like "Titans" (plural, since it's a dual-GPU card). Laid flat out on a table, the card features a matte-black reference cooling solution that looks identical to the one on the original TITAN. Other cosmetic changes include a green glow inside the fan intake, the TITAN logo, and of course, the green glow on the GeForce GTX marking on the top.

The card lacks a back-plate, giving us a peek at its memory chips. The card features 12 GB of GDDR5 memory, and looking at the twelve memory chips on the back of the PCB, with no other traces, we reckon the chip features a 384-bit wide memory interface. The 12 GB is achieved using twenty-four 4 Gb chips. The card draws power from a combination of 8-pin and 6-pin power connectors. The display I/O is identical to that of the GTX 980, with three DisplayPorts, one HDMI, and one DVI. Built on the 28 nm GM200 silicon, the GTX TITAN-X is rumored to feature 3,072 CUDA cores. NVIDIA's CEO claimed that the card will be faster than even the previous-generation dual-GPU flagship product by NVIDIA, the GeForce GTX TITAN-Z.
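The capacity and bus-width reckoning above can be sketched out, assuming standard 32-bit-per-chip GDDR5 interfaces and clamshell mode (two chips, one front and one back, sharing each 32-bit channel):

```python
# Hedged sketch: reading the specs off the bare PCB. Twelve chips per side,
# 4 Gb (0.5 GB) each; GDDR5 chips use a 32-bit interface, and in clamshell
# mode the front and back chip at each position share one channel.
chips = 24                 # twelve per side
gbit_per_chip = 4
bus_bits_per_chip = 32

capacity_gb = chips * gbit_per_chip / 8      # 8 bits per byte
bus_width = (chips // 2) * bus_bits_per_chip # one channel per chip position
print(capacity_gb, bus_width)                # -> 12.0 384
```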

NVIDIA Frees PhysX Source Code

After Epic's Unreal Engine 4 and Unity 5 game engines went "free," with their source code put up by their makers for anyone to inspect, NVIDIA decided to join the bandwagon of showering game developers with technical empowerment by putting the entire source code of PhysX 3.3.3, including its cloth and destruction physics code, on GitHub. The move to free the PhysX code appears to be linked to the liberation of the Unreal Engine 4 code.

NVIDIA PhysX has been the principal physics component of Unreal-driven game titles for several years now. There's a catch, though: NVIDIA is only freeing the CPU-based implementation of PhysX, and not its GPU-accelerated one, which leverages NVIDIA's proprietary CUDA GPU compute technology. There should still be plenty for game devs and students in the field to chew on. In another interesting development, the PhysX SDK has been expanded from its traditional Windows roots to cover more platforms, namely OS X, Linux, and Android. Find instructions on how to get your hands on the code at the source link.

NVIDIA GM206-300 Silicon Pictured

Here's the first picture of NVIDIA's next chip based on its "Maxwell" architecture, the GM206-300. Powering the upcoming GeForce GTX 960 graphics card, the silicon appears to have half the die size of the company's current flagship, the GM204, which powers the GTX 980 and GTX 970. The package itself is smaller, with a much lower pin count, owing to a memory bus half as wide as the GM204's, at 128-bit, and fewer power pins. No other specs have leaked, but we won't be surprised if its CUDA core count is about half that of the GTX 980. NVIDIA plans to launch the GeForce GTX 960 on the 22nd of January, 2015.

NVIDIA Scoops Up Computex Tradeshow Awards for Tegra K1, GRID

For the sixth year running, NVIDIA (NASDAQ: NVDA) has clinched a Best Choice Award at Computex, picking up the honor for NVIDIA GRID technology, as well as a coveted "Golden Award" for the NVIDIA Tegra K1 mobile processor. NVIDIA's honors mark the longest Best Choice Award win-streak of any international Computex exhibitor. More than 475 technology products from nearly 200 vendors competed for this year's recognition.

Tegra K1 is a 192-core super chip, built on the NVIDIA Kepler architecture -- the world's most advanced and energy-efficient GPU. Tegra K1's 192 fully programmable CUDA cores deliver the most advanced mobile graphics and performance, and its compute capabilities open up many new applications and experiences in fields such as computer vision, advanced imaging, speech recognition and video editing.

NVIDIA GPUs Power Latest Adobe Creative Cloud Enhancements

NVIDIA today announced its GPUs are powering the upcoming releases and new GPU-accelerated features of Adobe Creative Cloud video applications, such as Adobe Premiere Pro CC, Adobe After Effects CC, SpeedGrade CC, Adobe Media Encoder (AME), and Adobe Anywhere. The upcoming releases were revealed by Adobe at the National Association of Broadcasters (NAB) Show.

NVIDIA's GPUs are a "must-have" tool for Adobe video professionals -- enabling the fastest performance across the widest range of Creative Cloud applications, whether working in HD or 4K, desktop or mobile, workstation or cloud. NVIDIA and Adobe have engaged in a long-standing effort to develop unmatched GPU-accelerated features designed for professional Video Editors, VFX artists and Colorists.

Finalwire Releases AIDA64 v4.30

Finalwire released the latest update to AIDA64, the popular system information, diagnostic, and benchmarking suite. Version 4.30 builds on its predecessors by adding support for new hardware, newer versions of Windows, and new technologies, and offers a new benchmark. To begin with, it adds support for Windows 8.1 Update 1 and Windows Server 2012 R2 Update 1. It also adds support and detection for NVIDIA's new CUDA 6.0 GPGPU API. New hardware support includes the AMD socket AM1 platform and Intel "Broadwell" CPUs, along with early support for AMD "Carrizo" and "Toronto" APUs, and for Intel "Skylake," "Cherry Trail," and "Denverton" CPUs. Newly supported GPUs include the AMD Radeon R7 265, the NVIDIA GeForce GTX 745, and the 800M series. A new OpenCL SHA-1 hash benchmark is added.
DOWNLOAD: Finalwire AIDA64 v4.30 (installer) | Finalwire AIDA64 v4.30 (ZIP package)

Leadtek Launches Its GTX 750 Ti and GTX 750 Graphics Cards

Leadtek simultaneously launches the GTX 750 Ti and GTX 750, two brand-new GeForce series graphics cards. They are based on the first-generation NVIDIA "Maxwell" GPU architecture, and their main appeal is low power consumption. Their capabilities have also improved compared to products of the same grade in the previous generation.

The GTX 750 Ti and GTX 750 are equipped with 2 GB and 1 GB of GDDR5 memory, respectively, over a 128-bit wide memory bus. The GTX 750 Ti packs 640 CUDA cores with a base clock of 1020 MHz, and GPU Boost 2.0 takes it up to 1085 MHz. Leadtek will soon launch single-fan overclocked and dual-fan Hurricane overclocked graphics cards made in-house, offering better performance and superb cooling, and giving more choices to professional gamers.

NVIDIA Slides Supercomputing Technology Into the Car With Tegra K1

NVIDIA's new Tegra K1 mobile processor will help self-driving cars advance from the realm of research into the mass market with its automotive-grade version of the same GPU that powers the world's 10 most energy-efficient supercomputers. The first mobile processor to bring advanced computational capabilities to the car, the NVIDIA Tegra K1 runs a variety of auto applications that had previously not been possible with such low power consumption.

Tegra K1 features a quad-core CPU and a 192-core GPU using the NVIDIA Kepler architecture, the basis for NVIDIA's range of powerful GPUs -- including the processors that are used in the top 10 systems featured in the latest Green500 list of the world's most energy-efficient supercomputers. Tegra K1 will drive camera-based, advanced driver assistance systems (ADAS) -- such as pedestrian detection, blind-spot monitoring, lane-departure warning and street sign recognition -- and can also monitor driver alertness via a dashboard-mounted camera.

ASUS Announces GTX 780 Ti DirectCU II Graphics Card

ASUS today announced GTX 780 Ti DirectCU II, a graphics card powered by the new GeForce GTX 780 Ti graphics-processing unit (GPU) and fitted with exclusive DirectCU II technology for cooler, quieter and faster performance.

The GeForce GTX 780 Ti GPU packs 25% more CUDA (Compute Unified Device Architecture) cores than its predecessor and benefits from a boosted clock speed of 1020 MHz - both significant increases that enable the ASUS GTX 780 Ti to deliver astonishing gaming performance.

NVIDIA Launches the Tesla K40 GPU Accelerator

NVIDIA today unveiled the NVIDIA Tesla K40 GPU accelerator, the world's highest performance accelerator ever built, delivering extreme performance to a widening range of scientific, engineering, high performance computing (HPC) and enterprise applications.

Providing double the memory and up to 40 percent higher performance than its predecessor, the Tesla K20X GPU accelerator, and 10 times higher performance than today's fastest CPU, the Tesla K40 GPU is the world's first and highest-performance accelerator optimized for big data analytics and large-scale scientific workloads.

NVIDIA Dramatically Simplifies Parallel Programming With CUDA 6

NVIDIA today announced NVIDIA CUDA 6, the latest version of the world's most pervasive parallel computing platform and programming model.

The CUDA 6 platform makes parallel programming easier than ever, enabling software developers to dramatically decrease the time and effort required to accelerate their scientific, engineering, enterprise and other applications with GPUs.

GIGABYTE Unveils GeForce GTX 780 Ti Overclock Edition Graphics Card

GIGABYTE, the world leader in high-performance gaming hardware and systems, is pleased to announce its latest graphics card, the GeForce GTX 780 Ti Overclock Edition (GV-N78TOC-3GD). With 25% more CUDA cores than the GTX 780, the most powerful thermal design in the world, and exciting new technologies, the GV-N78TOC-3GD takes the gaming experience to a whole new level.

The exclusive WINDFORCE 3X cooling design can dissipate up to 450 W of heat, so enjoying gameplay on a stunningly beautiful and quiet card is no longer a dream. It also supports OC GURU II and GPU Boost 2.0 for maximum clock speeds. The GV-N78TOC-3GD features PhysX and TXAA technologies for smooth, sharp graphics, and GeForce ShadowPlay to capture all your greatest gaming moments automatically. Even when you play on 4K monitors at extreme settings, the GV-N78TOC-3GD provides the horsepower to drive all your next-gen gaming visual experiences.

GeForce GTX 780 Ti Specifications Leaked

NVIDIA's upcoming GeForce GTX 780 Ti is configured to be a notch above the GTX TITAN after all, as a leaked specifications sheet reveals it utilizes every component available on the GK110 silicon. The specifications sheet of a Galaxy-branded GTX 780 Ti was leaked to the web by @asder00, revealing the full complement of 2,880 CUDA cores on the GK110, which works out to 240 texture mapping units (TMUs). Other specifications include 48 ROPs and a 384-bit wide GDDR5 memory interface holding 3 GB of memory. Clock speeds include 876 MHz core, 928 MHz GPU Boost, and 1750 MHz (7.00 GHz GDDR5-effective) memory. With these specifications on paper, the GTX 780 Ti shouldn't have too many problems beating the GTX TITAN, and with it, AMD's Radeon R9 290X. It features everything there is on the GK110, with half the memory of the GTX TITAN, but memory that's clocked faster.
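The memory figures in the leaked sheet are self-consistent, assuming standard GDDR5 quad-data-rate signaling (four transfers per clock per pin):

```python
# Hedged sketch: cross-checking the leaked GTX 780 Ti memory figures.
# GDDR5 transfers four bits per pin per clock, so 1750 MHz works out to
# the quoted 7.00 GHz effective rate.
mem_clock_mhz = 1750
effective_mhz = mem_clock_mhz * 4               # 7000 -> "7.00 GHz"
bus_bits = 384
bandwidth_gbps = effective_mhz * 1e6 * bus_bits / 8 / 1e9
print(effective_mhz, round(bandwidth_gbps))     # -> 7000 336
```

That 336 GB/s is indeed higher than the GTX TITAN's, which runs the same 384-bit bus at a lower effective memory clock.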

CYBERPOWERPC Announces Power Mega III Graphics Workstation Series

Cyberpower Inc., a manufacturer of custom gaming machines, notebook systems, and high-performance workstations, today announced its Power Mega III series - a family of professional workstation PCs based on Intel 4th-generation "Haswell" Core or Xeon processors, with NVIDIA Quadro K-series or AMD FirePro graphics, that sets the bar for high-performance workstations.

Power Mega III workstations offer precise, professional performance to tackle the most demanding CPU and GPU-intensive applications out of the box. Offered in six configurations, the Power Mega III series is powered by the latest 4th Generation Intel Core Haswell processors with Z87 Express chipset for supreme multi-tasking and content creation. For those who demand extreme processing power, Intel Xeon processors are also offered in several models in single or dual CPU configurations. These advanced performance workstations are perfect for applications such as 3D rendering/modeling, sciences and medical imaging, engineering and earth sciences, matte painting, compositing, and CAD/CAM.

NVIDIA GeForce GTX 760 Specifications Disclosed

The information was sourced and publicized by the VideoCardz.com crew on Tuesday, confirming some previous leaks and refuting others. The new GeForce GTX 760 employs the same reference design NVIDIA used for its previous-generation cards (GTX 670, GTX 660 Ti, GTX 660, and GTX 650 Ti) and is designed to replace the GTX 660 Ti in NVIDIA's current lineup. The card employs a cut-down version of the GK104 GPU, with 1,152 CUDA cores, 96 TMUs, and 32 ROPs. With a base clock of 980 MHz and a boost offset of 53 MHz, for a maximum out-of-the-box frequency of 1033 MHz, the new card supports GPU Boost 2.0, a temperature-controlled feature (the cooler the chip, the higher the clocks). Stock memory size will be 2 GB, and reference memory clocks are set at 1502 MHz, for a slightly-over-6-GHz effective speed. A 256-bit wide memory bus offers 192 GB/s of memory bandwidth at stock clocks. TDP is set at 170W, with the card requiring two 6-pin PCIe connectors.
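The leaked clock and bandwidth numbers can be cross-checked against each other, again assuming GDDR5's four transfers per clock:

```python
# Hedged sketch: cross-checking the leaked GTX 760 numbers.
base_mhz, boost_delta = 980, 53
mem_clock_mhz, bus_bits = 1502, 256

max_boost = base_mhz + boost_delta                    # 1033 MHz, as quoted
effective_mhz = mem_clock_mhz * 4                     # GDDR5: 4 transfers/clock
bandwidth = effective_mhz * 1e6 * bus_bits / 8 / 1e9  # ~192 GB/s, as quoted
print(max_boost, effective_mhz, round(bandwidth, 1))
```

The 6008 MHz effective rate is the "slightly over 6 GHz" figure, and the resulting ~192.3 GB/s matches the quoted bandwidth.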

Also relevant is another piece of information unveiled along with the above specifications: the GeForce GTX 760 will complete NVIDIA's portfolio for the coming months, with NVIDIA presumably awaiting AMD's move before launching any more GeForce products of its own. AMD, in turn, is expected to bring out its Radeon HD 8000 "Sea Islands" cards in September.
