News Posts matching #Ampere

TSMC Allocation the Next Battleground for Intel, AMD, and Possibly NVIDIA

With its own 7 nm-class silicon fabrication node nowhere in sight for its processors, at least not until 2022-23, Intel is seeking out third-party semiconductor foundries to support its ambitious discrete GPU and scalar compute processor lineup under the Xe brand. A Taiwanese newspaper article interpreted by Chiakokhua provides fascinating insight into the new precious resource in the high-technology industry: allocation.

TSMC is one of these foundries, and will give Intel access to a refined 7 nm-class node, either the N7P or N7+, for some of its Xe scalar compute processors. The company could also seek out nodelets such as the N6. Trouble is, Intel will be locking horns with the likes of AMD for precious foundry allocation. NVIDIA too has secured a certain allocation of TSMC 7 nm for some of its upcoming "Ampere" GPUs. Sources tell China Times that TSMC will commence mass-production of Intel silicon as early as 2021, on either N7P, N7+, or N6. Business from Intel is timely for TSMC, as it is losing orders from HiSilicon (Huawei) in the wake of the prevailing geopolitical climate.

NVIDIA Ampere A100 GPU Gets Benchmarked and Takes the Crown of the Fastest GPU in the World

When NVIDIA introduced its Ampere A100 GPU, it was said to be the company's fastest creation yet. However, we didn't know exactly how fast it is. The GPU packs a whopping 6,912 CUDA cores on a 7 nm die with 54 billion transistors. Paired with 40 GB of super-fast HBM2E memory delivering 1,555 GB/s of bandwidth, the GPU is set to be a strong performer. And exactly how fast is it, you might wonder? Well, thanks to Jules Urbach, the CEO of OTOY, the software developer behind OctaneRender, we have the first benchmark of the Ampere A100 GPU.

Scoring 446 points in OctaneBench, a benchmark for OctaneRender, the Ampere GPU takes the crown of the world's fastest GPU. The GeForce RTX 2080 Ti scores 302 points, which makes the A100 about 47.7% faster than Turing. However, the fastest Turing card in the benchmark database is the Quadro RTX 8000, which scored 328 points, showing that Turing is still holding up well. The A100 result was recorded with RTX turned off; enabling it and putting that part of the silicon to work could yield additional performance.
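For anyone who wants to sanity-check those percentages, relative performance is just a ratio of OctaneBench scores. A minimal sketch in Python, using the scores cited above:

```python
# OctaneBench scores cited above (points; higher is better)
scores = {"A100": 446, "RTX 2080 Ti": 302, "Quadro RTX 8000": 328}

def speedup(a: str, b: str) -> float:
    """Percentage by which card `a` outscores card `b`."""
    return (scores[a] / scores[b] - 1) * 100

print(f"A100 vs RTX 2080 Ti:     +{speedup('A100', 'RTX 2080 Ti'):.1f}%")      # ~47.7%
print(f"A100 vs Quadro RTX 8000: +{speedup('A100', 'Quadro RTX 8000'):.1f}%")  # ~36.0%
```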

The Curious Case of the 12-pin Power Connector: It's Real and Coming with NVIDIA Ampere GPUs

Over the past few days, we've heard chatter about a new 12-pin PCIe power connector for graphics cards being introduced, particularly from Chinese-language publication FCPowerUp, which even posted a picture of the connector itself. Igor's Lab also did an in-depth technical breakdown of the connector. TechPowerUp has some new information on this from a well-placed industry source. The connector is real, and will be introduced with NVIDIA's next-generation "Ampere" graphics cards. The connector appears to be NVIDIA's brainchild, and not that of any other IP or trading group, such as the PCI-SIG, Molex, or Intel. The connector was designed in response to two market realities: that high-end graphics cards inevitably need two power connectors, and it would be neater for consumers to handle a single cable than to wrestle with two; and that lower-end (<225 W) graphics cards can make do with a single 8-pin or 6-pin connector.

The new NVIDIA 12-pin connector has six 12 V and six ground pins. Its designers specify higher-quality contacts on both the male and female ends, which can handle higher current than the pins of 8-pin/6-pin PCIe power connectors. Depending on the PSU vendor, the 12-pin connector can even split in the middle into two 6-pin halves, and could be marketed as "6+6 pin." The points of contact between the two 6-pin halves are kept level so they align seamlessly.
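To illustrate why contact quality is the crux, the connector's theoretical power ceiling scales linearly with the per-pin current rating. A rough sketch; the per-pin amperage figures are purely illustrative assumptions (in the ballpark of Mini-Fit/Micro-Fit-class contacts), not published specifications for this connector:

```python
# Hypothetical per-pin current ratings in amps -- illustrative only,
# NOT official specifications for NVIDIA's 12-pin connector.
RAIL_VOLTAGE = 12.0  # volts on the supply pins
LIVE_PINS = 6        # six 12 V pins; the other six are ground returns

for pin_current in (8.5, 9.5):
    ceiling = RAIL_VOLTAGE * LIVE_PINS * pin_current
    print(f"{pin_current} A/pin -> {ceiling:.0f} W theoretical ceiling")
# 8.5 A/pin -> 612 W;  9.5 A/pin -> 684 W
```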

NVIDIA Prepares to Stop Production of Popular RTX 20-series SKUs, Raise Prices

With its GeForce RTX 30-series "Ampere" graphics cards on the horizon, NVIDIA has reportedly taken the first steps toward discontinuing popular SKUs in its current RTX 20-series graphics cards. Chinese publication ITHome reports that several premium RTX 20-series SKUs, which include the RTX 2070, RTX 2070 Super, RTX 2080 Super, and the RTX 2080 Ti, are on the chopping block, meaning that NVIDIA partners are placing the last orders with upstream suppliers for parts that make up their graphics cards based on these GPUs.

From this point, product discontinuation is a slow process that usually takes 6-9 months, as the market is left to soak up leftover inventory. Another juicy bit of information from the ITHome report is NVIDIA allegedly guiding its partners to increase prices of its current-gen high-end graphics cards in response to renewed interest in cryptocurrency, which could drive up demand for graphics cards. NVIDIA is expected to announce its GeForce RTX 30-series on September 17, 2020.

NVIDIA GeForce RTX 3070 and RTX 3070 Ti Rumored Specifications Appear

NVIDIA is slowly preparing to launch its next-generation Ampere graphics cards for consumers, after the A100 GPU arrived for data-centric applications. The Ampere lineup is the subject of more leaks and speculation every day, so we can assume that the launch is near. In the most recent round of rumors, we have new information about the GPU SKU and memory of the upcoming GeForce RTX 3070 and RTX 3070 Ti. Thanks to Twitter user kopite7kimi, whose past speculation has repeatedly been confirmed, we have information that the GeForce RTX 3070 and RTX 3070 Ti use the GA104 GPU, paired with GDDR6 memory. The catch is that the Ti version will feature new GDDR6X memory, which is faster and can reportedly go up to 21 Gbps.

The regular RTX 3070 is supposed to have 2944 CUDA cores on the GA104-400 GPU die, while its bigger brother, the RTX 3070 Ti, is designed with 3072 CUDA cores on the GA104-300 die. Paired with the new technologies that the Ampere architecture brings, and with new GDDR6X memory, the GPUs are set to be very good performers. It is estimated that both cards would reach a memory bandwidth of 512 GB/s (the quick calculation below shows where such a figure comes from). So far, that is all we have. NVIDIA is reportedly in the Design Validation Test (DVT) phase with these cards and is preparing for mass production in August. Following those events is the official launch, which should happen before the end of this year, with some speculation pointing to September.
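Peak memory bandwidth is simply the bus width in bytes multiplied by the per-pin data rate. A minimal sketch; note that the 256-bit bus width is our assumption for GA104 (the rumor doesn't state it):

```python
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: (bus width in bytes) x (per-pin rate)."""
    return bus_width_bits / 8 * data_rate_gbps

# Assuming a 256-bit GA104 memory bus:
print(bandwidth_gb_s(256, 16))  # 512.0 GB/s with 16 Gbps GDDR6
print(bandwidth_gb_s(256, 21))  # 672.0 GB/s with the rumored 21 Gbps GDDR6X
```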

Mercedes-Benz, NVIDIA Partner to Build Advanced, Software-Defined Vehicles

Mercedes-Benz, one of the largest manufacturers of premium passenger cars, and NVIDIA, the global leader in accelerated computing, plan to enter into a cooperation to create a revolutionary in-vehicle computing system and AI computing infrastructure. Starting in 2024, this will be rolled out across the fleet of next-generation Mercedes-Benz vehicles, enabling them with upgradable automated driving functions. Working together, the companies plan to develop the most sophisticated and advanced computing architecture ever deployed in an automobile.

The new software-defined architecture will be built on the NVIDIA DRIVE platform and will be standard in Mercedes-Benz's next-generation fleet, enabling state-of-the-art automated driving functionalities. A primary feature will be the ability to automate driving of regular routes from address to address. In addition, there will be numerous future safety and convenience applications. Customers will be able to purchase and add capabilities, software applications and subscription services through over-the-air software updates during the life of the car.

Ampere Altra Family of Cloud Native Arm Processors Expands to 128 Cores with Altra Max

Ampere today announced further roadmap details of its Ampere Altra server processor family. In March the company announced Ampere Altra, the world's first cloud native processor, featuring 80 cores. Today, Ampere unveiled preliminary details of the expansion of the cloud-native processor family by adding Ampere Altra Max, which has 128 cores, providing customers with another cloud-optimized processor to maximize overall performance and cores-per-rack density.

Ampere Altra Max is ideal for applications that take advantage of scale-out and elastic cloud architectures. Compatible with the 80-core Ampere Altra and also supporting 2-socket platforms, Ampere Altra Max offers the industry's highest socket-level performance and I/O scalability. It will be sampling in the fourth quarter and additional details will be provided later this year.
Ampere Altra Processor

GIGABYTE Introduces a Broad Portfolio of G-series Servers Powered by NVIDIA A100 PCIe

GIGABYTE, an industry leader in high-performance servers and workstations, announced its G-series server validation plan. Following the NVIDIA A100 PCIe GPU announcement today, GIGABYTE has completed compatibility validation of the G481-HA0 and G292-Z40, adding the NVIDIA A100 to the support list for these two servers. The remaining G-series servers will be divided into two waves to complete their respective compatibility tests soon. At the same time, GIGABYTE also launched the new G492 series server based on the AMD EPYC 7002 processor family, which provides PCIe Gen4 support for up to 10 NVIDIA A100 PCIe GPUs. The G492 is the server with the highest computing power for AI model training on the market today. GIGABYTE will offer two SKUs for the G492: the G492-Z50 will be at a more approachable price point, whereas the G492-Z51 will be geared towards higher performance.

The G492 is GIGABYTE's second-generation 4U G-series server. Building on the first-generation G481 (Intel architecture) and G482 (AMD architecture) servers, it further optimizes user-friendly design and scalability. In addition to supporting two 280 W 2nd Gen AMD EPYC 7002 processors, its 32 DDR4 memory slots support up to 8 TB of memory and maintain data transmission at 3200 MHz. The G492 has built-in PCIe Gen4 switches, which provide more PCIe Gen4 lanes. PCIe Gen4 has twice the I/O performance of PCIe Gen3 (see the quick math below) and fully enables the computing power of the NVIDIA A100 Tensor Core GPU, or it can be applied to PCIe storage to provide a storage upgrade path that is native to the G492.
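The "twice the I/O performance" claim follows directly from the link rates: PCIe Gen4 signals at 16 GT/s per lane versus Gen3's 8 GT/s, and both generations use 128b/130b encoding. A quick sketch of per-direction throughput for an x16 slot:

```python
def pcie_gb_s(transfer_rate_gt_s: float, lanes: int = 16) -> float:
    """Per-direction PCIe throughput in GB/s, with 128b/130b line encoding."""
    return transfer_rate_gt_s * lanes * (128 / 130) / 8

print(f"PCIe Gen3 x16: {pcie_gb_s(8):.1f} GB/s")   # ~15.8 GB/s
print(f"PCIe Gen4 x16: {pcie_gb_s(16):.1f} GB/s")  # ~31.5 GB/s
```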

NVIDIA Announces A100 PCIe Tensor Core Accelerator Based on Ampere Architecture

NVIDIA and partners today announced a new way for interested users to tap the AI-training capabilities of the Ampere graphics architecture, in the form of the A100 PCIe. Diving a little deeper, and as the name implies, this solution differs from the SXM form-factor in that it can be deployed through systems' existing PCIe slots. The change in interface comes with a reduction in TDP from 400 W down to 250 W for the PCIe version - and correspondingly reduced sustained performance.

NVIDIA says peak throughput is the same across the SXM and PCIe versions of its A100 accelerator. The difference comes in sustained workloads, where NVIDIA quotes the PCIe A100 as delivering 10% less performance than its SXM brethren. The A100 PCIe comes with the same 2.4 Gbps, 40 GB HBM2 memory configuration as the SXM version, and all other chip resources are the same. We're thus looking at the same 826 mm² silicon chip and 6,912 CUDA cores across both models. The difference is that the PCIe accelerator can more easily be integrated into existing server infrastructure.

NVIDIA GeForce "Ampere" Hits 3DMark Time Spy Charts, 30% Faster than RTX 2080 Ti

An unknown NVIDIA GeForce "Ampere" GPU model surfaced in the 3DMark Time Spy online database. We don't know if this is the RTX 3080 (RTX 2080 successor) or the top-tier RTX 3090 (RTX 2080 Ti successor). Rumored specs of the two are covered in our older article. The 3DMark Time Spy score unearthed by _rogame (Hardware Leaks) is 18257 points, which is close to 31 percent faster than the RTX 2080 Ti Founders Edition, 22 percent faster than the TITAN RTX, and just a tiny bit slower than KINGPIN's record-setting EVGA RTX 2080 Ti XC. Futuremark SystemInfo reads the GPU clock speed of the "Ampere" card as 1935 MHz, and its memory clock as "6000 MHz." Normally, SystemInfo reads the actual memory clock (i.e., 1750 MHz for 14 Gbps-effective GDDR6). Perhaps SystemInfo isn't yet optimized for reading memory clocks on "Ampere" - the quick conversion below shows why the reading looks off.
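For context, GDDR6's effective per-pin data rate is eight times the actual memory clock. A minimal sketch of the conversion SystemInfo normally relies on:

```python
def gddr6_effective_gbps(actual_clock_mhz: float) -> float:
    """GDDR6 effective data rate (Gbps) is 8x the actual memory clock."""
    return actual_clock_mhz * 8 / 1000

print(gddr6_effective_gbps(1750))  # 14.0 Gbps -- the usual GDDR6 reading
print(gddr6_effective_gbps(6000))  # 48.0 Gbps -- implausible, suggesting
                                   # the 6000 MHz figure is a misreading
```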

ASUS Announces ESC4000A-E10 GPGPU Server with NVIDIA A100 Tensor Core GPUs

ASUSTeK, the leading IT company in server systems, server motherboards, and workstations, today announced its new NVIDIA A100-powered server, the ESC4000A-E10, designed to accelerate and optimize data centers for high utilization and low total cost of ownership with PCIe Gen 4 expansion, OCP 3.0 networking, faster compute, and better GPU performance. ASUS continues to build a strong partnership with NVIDIA to deliver unprecedented acceleration and flexibility to power the world's highest-performing elastic data centers for AI, data analytics, and HPC applications.

The ASUS ESC4000A-E10 is a 2U server powered by AMD EPYC 7002-series processors, which deliver up to 2x the performance and 4x the floating-point capability in a single socket versus the previous 7001 generation. Targeted at AI, HPC, and VDI applications in data-center or enterprise environments that require powerful CPU cores, support for more GPUs, and faster transmission speeds, the ESC4000A-E10 focuses on delivering GPU-optimized performance with support for up to four double-deck high-performance or eight single-deck GPUs, including the latest NVIDIA Ampere architecture as well as V100, Tesla, and Quadro cards. This also benefits virtualization, consolidating GPU resources into a shared pool that users can tap more efficiently.

Possible NVIDIA GeForce RTX 3090, RTX 3080, and "TITAN Ampere" Specs Surface

Alleged specifications of NVIDIA's upcoming GeForce RTX 3090, RTX 3080, and next-generation TITAN graphics cards, based on the "Ampere" graphics architecture, surfaced in tweets by KatCorgi, mirroring an early-June tweet by kopite7kimi - both sources with a high hit-rate on NVIDIA rumors. All three SKUs will be based on the 7 nm "GA102" silicon, but with varying memory and core configurations, targeting three vastly different price points. The RTX 3080 succeeds the current RTX 2080/Super, and allegedly features 4,352 CUDA cores. It features a 320-bit GDDR6X memory interface, with its memory ticking at 19 Gbps.

The RTX 3090 is heir apparent to the RTX 2080 Ti, and is endowed with 5,248 CUDA cores and 12 GB of GDDR6X memory across a 384-bit wide memory bus clocked at 21 Gbps. The king of the hill is the TITAN Ampere, succeeding the TITAN RTX. It probably maxes out the GA102 ASIC with 5,376 CUDA cores, and offers double the memory amount of the RTX 3090, at 24 GB, but at a lower memory clock of 17 Gbps. NVIDIA is expected to announce these cards in September 2020.

NVIDIA GeForce RTX 3090 and RTX 3080 Production Timeline Revealed

NVIDIA's next-generation GeForce "Ampere" RTX 3000-series graphics cards are heading for a September reveal, with availability shortly after. Much of the news cycle over the past couple of weeks revolved around alleged leaks of the card's cooling solution that provide insights into what the finished product could look like, with some even doubting the veracity of the picture leaks given the September launch. Igor's Lab did some digging into the production timeline of these cards. The leaks seem to align perfectly with that timeline.

The chip design, prototyping, tape-out, and testing of "Ampere" IP were completed before the mass-production timeline kicked off. That timeline begins in April/May, with NVIDIA's OEM partners and other suppliers finalizing a bill of materials (BOM). In June, the products go through the EVT (engineering validation test) and DVT (design validation test) stages. It is at these stages that NVIDIA has the opportunity to approve, reject, or change the design of the product and finalize it. By July, there are working samples of the finished products for NVIDIA and its industry partners to validate. This is also when regulators such as the FCC and CE conduct EMI tests. Production validation tests (PVT), or proofing of the production line, occur in late July/early August. The final BIOS is released to the OEMs by NVIDIA around this time. Mass production finally commences in August, and the onward march to distributors rolls on. The media event announcing the product and press reviews follow in September, with market availability shortly thereafter.

NVIDIA Ampere Cooling Solution Heatsink Pictured, Rumors of Airflow Magic Quashed

Although still a blurry-cam pic, this new picture of three GeForce RTX 3080 "Ampere" graphics card reference heatsinks on a factory floor reveals exactly how the cooling solution works. The main heat-dissipation component appears to be a vapor-chamber base, above which sit four flattened copper heat pipes that hold the cooler's four aluminium fin arrays together. The first array is directly above the GPU/memory/VRM area, and consists of a dense stack of aluminium fins that forms a cavity for the fan on the obverse side of the graphics card. This fan vents air onto the first heatsink element, and some of its airflow is guided by the heatsink to two trapezium-shaped aluminium fin-stacks that pull heat from the flattened heat pipes.

The heat pipes make their way to the card's second dense aluminium fin-stack. This fin-stack is as thick as the card itself, as there's no PCB here. This fin-stack is ventilated by the card's second fan, located on the reverse side, which pulls air through this fin-stack and vents upward. We attempted to detail the cooling solution, the card, and other SKU details in an older article. We've also added a picture of a Sapphire Radeon RX Vega 56 Pulse graphics card. This NVIDIA heatsink is essentially like that, but with the second fan on the other side of the card to make it look more complicated than it actually is.

NVIDIA's Next-Gen Reference Cooler Costs $150 By Itself, to Feature in Three SKUs

Pictures of alleged next-generation GeForce "Ampere" graphics cards emerged over the weekend, which many of our readers found hard to believe. The card features a dual-fan cooling solution, in which one of the two fans is on the reverse side of the card, blowing air outward from the cooling solution, while the PCB extends two-thirds the length of the card. Since then, there have been several fan-made 3D renders of the card. NVIDIA is not happy with the leak, and has started an investigation into two of its contractors responsible for manufacturing Founders Edition (reference design) GeForce graphics cards, Foxconn and BYD (Build Your Dreams), according to a report by Igor's Lab.

According to the report, the cooling solution, which looks a lot more over-engineered than the company's RTX 20-series Founders Edition cooler, costs a hefty $150, or roughly the price of a 280 mm AIO CLC. It wouldn't surprise us if Asetek's RadCard costs less. The cooler consists of several interconnected heatsink elements with the PCB in the middle. Igor's Lab estimates the card to be 21.9 cm in length. Given its cost, NVIDIA is reserving this cooler for only the top three SKUs in the lineup: the TITAN RTX successor, the RTX 2080 Ti successor, and the RTX 2080/SUPER successor.

NVIDIA GeForce RTX 3080 Pictured?

Here are what could be the very first pictures of a reference NVIDIA GeForce RTX 3080 "Ampere" graphics card revealing an unusual board design, which is the biggest departure in NVIDIA's design schemes since the original GeForce TITAN. It features a dual-fan aluminium fin-stack cooler, except that one of its fans is located on the obverse side, and the other on the reverse side of the card. The PCB of the card appears to extend only two-thirds the length of the card, ending in an inward cutout, beyond which there's only an extension of the cooling solution. The cooler shroud, rather than being a solid covering of the heatsink, is made of aluminium heatsink ridges. All in all, a very unusual design, which NVIDIA could implement on its top-tier SKUs, such as the RTX 3080, RTX 3080 Ti, and in a cosmetic form on lower SKUs. We get the feeling that "Cyberpunk 2077" has influenced this design.

Microsoft Builds a Supercomputer for AI

Microsoft held its Build 2020 conference for developers from all over the world, live-streaming it online. Among the announcements, Microsoft revealed a new supercomputer built for OpenAI, a company working on artificial general intelligence (AGI). The new supercomputer is part of Microsoft's Azure cloud infrastructure, and it will allow OpenAI developers to train very large-scale machine-learning models in the cloud. The supercomputer is said to be the fifth most powerful in the world, with specifications of "more than 285,000 CPU cores, 10,000 GPUs and 400 gigabits per second of network connectivity for each GPU server."

Specific information wasn't announced, and we don't know what CPUs and GPUs go into this machine, but we can speculate that the latest NVIDIA A100 "Ampere" GPU could be used. The company hasn't yet submitted its entry to the TOP500 list, so we can't verify the FLOPS count and see where it ranks.
Microsoft Azure Data Center

NVIDIA Discontinues the Tesla Brand to Avoid Confusion with Tesla Car Maker

At its ambitious "Ampere" A100 Tensor Core processor reveal, the "Tesla" brand was nowhere to be seen. Heise.de reports that the company has decided to discontinue "Tesla" as the top-level brand for its HPC, AI, and scalar compute accelerator product line. NVIDIA introduced the Tesla compute accelerator brand along with its "Tesla" graphics architecture. It was the first major GPGPU product, and saw CUDA take flight as a prominent scalar compute language.

Over the years, NVIDIA kept the Tesla moniker as a top-level brand (alongside GeForce and Quadro), with an alphabetic portion of the model numbers denoting the graphics architecture the accelerator is based on (e.g., the Tesla P100 being "Pascal"-based, the K10 "Kepler"-based, and the M40 "Maxwell"-based). The Tesla T4, based on "Turing," is the final product with the old nomenclature. Interestingly, Heise reports that NVIDIA dropped the name to avoid confusion with fellow Californian brand Tesla Inc.

Atos Launches First Supercomputer Equipped with NVIDIA A100 Tensor Core GPU

Atos, a global leader in digital transformation, today announces its new BullSequana X2415, the first supercomputer in Europe to integrate NVIDIA's next-generation Ampere graphics processing unit architecture via the NVIDIA A100 Tensor Core GPU. This new supercomputer blade will deliver unprecedented computing power to boost application performance for HPC and AI workloads, tackling the challenges of the exascale era. The BullSequana X2415 blade will increase computing power by more than 2x and optimize energy consumption thanks to Atos' patented, highly efficient water-cooled DLC (Direct Liquid Cooling) solution, which uses warm water to cool the machine.

Forschungszentrum Jülich will integrate this new blade into its booster module, extending its existing JUWELS BullSequana supercomputer and making it the first system worldwide to use this new technology. The JUWELS Booster will provide researchers across Europe with significantly increased computational resources. Some of the projects it will fuel are the European Commission's Human Brain Project and the Jülich Laboratories of "Climate Science" and "Molecular Systems". Once fully deployed this summer, the upgraded supercomputing system, operated with ParTec's ParaStation Modulo software, is expected to provide a computational peak performance of more than 70 petaflops, making it the most powerful supercomputer in Europe and a showcase for European exascale architecture.

NVIDIA "Ampere" Designed for both HPC and GeForce/Quadro

NVIDIA CEO Jensen Huang, in a pre-GTC press briefing, stressed that the upcoming "Ampere" graphics architecture will span both the company's compute-accelerator and commercial graphics product lines. The architecture makes its debut later today with the Tesla A100 HPC processor for breakthrough AI acceleration. It's unlikely that any GeForce products will be formally announced this month, with rumors pointing to a GeForce "Ampere" product launch at a gaming-focused event in September, close to the "Cyberpunk 2077" launch.

It was earlier believed that NVIDIA had forked its breadwinning IP into two lines: one focused on headless scalar compute, and the other on graphics products through the company's GeForce and Quadro product lines. To that effect, its "Volta" architecture focused on scalar compute (with the exception of the forgotten TITAN V), while the "Turing" architecture focused solely on GeForce and Quadro. It was then believed that "Ampere" would focus on compute, and that the so-called "Hopper" would be this generation's graphics-focused architecture. We now know that won't be the case. We've compiled a selection of GeForce Ampere rumors in this article.

NVIDIA Tesla A100 "Ampere" AIC (add-in card) Form-Factor Board Pictured

Here's the first picture of a Tesla A100 "Ampere" AIC (add-in card) form-factor board, hot on the heels of the morning's big A100 reveal. The AIC card is a bare PCB, onto which workstation builders will add compatible cooling solutions. The PCB features the gigantic GA100 processor with its six HBM2E stacks in the center, surrounded by VRM components, with I/O on three sides. On the bottom side, you will find a conventional PCI-Express 4.0 x16 host interface. Above it are NVLink fingers. The rear I/O has high-bandwidth network interfaces (likely 200 Gbps InfiniBand) by Mellanox. The tail end has hard points for 12 V power input. Find juicy details of the GA100 in our older article.

NVIDIA Ampere A100 Has 54 Billion Transistors, World's Largest 7nm Chip

Not long ago, Intel's Raja Koduri claimed that the Xe HP "Ponte Vecchio" silicon was the "big daddy" of Xe GPUs, and the "largest chip co-developed in India," larger than the 35 billion-transistor Xilinx VU19P FPGA co-developed in the country. It turns out that NVIDIA is in the mood for setting records. The "Ampere" A100 silicon has 54 billion transistors crammed into a single 7 nm die (not counting the transistor counts of the HBM2E memory stacks).

NVIDIA claims a 20-times boost in both AI inference and single-precision (FP32) performance over its "Volta"-based predecessor, the Tesla V100. The chip also offers a 2.5x gain in FP64 performance over "Volta." NVIDIA has also invented a new number format for AI compute, called TF32 (tensor float 32). TF32 uses the 10-bit mantissa of FP16 and the 8-bit exponent of FP32, resulting in a new, efficient format; NVIDIA attributes its 20x performance gains over "Volta" to this. The third-generation tensor cores introduced with Ampere support FP64 natively. Another key design focus for NVIDIA is to leverage the "sparsity" phenomenon in neural nets to reduce their size and improve performance.
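To make the TF32 layout concrete: keeping FP32's sign bit and 8-bit exponent but only 10 mantissa bits (19 bits in total) preserves FP32's numeric range at roughly FP16's precision. A minimal sketch that emulates the precision loss by truncating an FP32 mantissa in software; actual tensor-core hardware rounds rather than truncates:

```python
import struct

def to_tf32(x: float) -> float:
    """Emulate TF32 precision: keep the sign, the 8-bit exponent, and the
    top 10 of FP32's 23 mantissa bits by zeroing the low 13 bits."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    bits &= ~((1 << 13) - 1)  # drop the 13 least-significant mantissa bits
    (truncated,) = struct.unpack("<f", struct.pack("<I", bits))
    return truncated

print(to_tf32(3.14159265))  # 3.140625 -- roughly three decimal digits survive
```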

NVIDIA Tesla A100 GPU Pictured

Thanks to the sources of VideoCardz, we now have the first picture of the next-generation NVIDIA Tesla A100 graphics card. Designed for compute-oriented applications, the Tesla A100 is a socketed GPU built for NVIDIA's proprietary SXM socket. In a post a few days ago, we suspected that the Tesla A100 might fit the socket of the previous Volta V100 GPUs, as it uses a similar SXM socket. However, the mounting holes have been rearranged, and this one requires a new socket/motherboard. The Tesla A100 is based on the GA100 GPU die, whose specifications we don't yet know. From the picture, we can only see that there is one very big die attached to six HBM modules, most likely HBM2E. Beyond that, everything is unknown. More details are expected to be announced today at the GTC 2020 digital keynote.
NVIDIA Tesla A100

NVIDIA CEO Jensen Huang has been Cooking the World's Largest GPU - Is this Ampere?

NVIDIA is rumored to introduce its next-generation Ampere architecture very soon, at its GTC event happening on May 14th. We're expecting to see an announcement of the successor to the company's DGX lineup of pre-built compute systems - using the upcoming Ampere architecture, of course. At the heart of these machines will be a new GA100 GPU that's rumored to be very fast. A while ago, we saw NVIDIA register a trademark for "DGX A100", which seems to be a credible name for these systems featuring the new Tesla A100 graphics cards.

Today, NVIDIA's CEO was spotted in an unlisted video published on the official NVIDIA YouTube channel. It shows him pulling out of the oven what he calls the "world's largest GPU", which he has apparently been cooking all this time. Featuring eight Tesla A100 GPUs, this DGX A100 system appears to be based on a similar platform design to previous DGX systems, where the GPU is a socketed SXM2 design. This looks like a viable upgrade path for owners of previous DGX systems - just swap out the GPUs and enjoy higher performance. It's been a while since we have seen Mr. Huang appear in his leather jacket, and in the video he isn't wearing one. Is this the real Jensen? Jokes aside, you can check out the video below, if it isn't taken down soon.
NVIDIA DGX A100 System
Update May 12th, 5 pm UTC: NVIDIA has since listed the video; it is no longer unlisted.

Graphics Cards Shipments to Pick Up in 2H-2020: Cooling Solution Maker Power Logic

Power Logic, a graphics card cooling solution OEM, in an interview with Taiwan tech industry observer DigiTimes, commented that it expects graphics card shipments to rise in the second half of 2020, on the back of new product announcements from both NVIDIA and AMD, as well as HPC accelerators from the likes of Intel and NVIDIA. NVIDIA is expected to launch its "Ampere"-based GeForce RTX 30-series graphics cards, while AMD is preparing to launch its Radeon RX 6000-series "Navi 2#" graphics cards based on the RDNA2 graphics architecture. Power Logic has apparently commenced prototyping certain cooling solutions, and is expected to begin mass production at its Jiangxi-based plant towards the end of Q2 2020, so it could begin shipping coolers to graphics card manufacturers in the following quarters.