News Posts matching #192 GB


NVIDIA B100 "Blackwell" AI GPU Technical Details Leak Out

Jensen Huang's opening GTC 2024 keynote is scheduled for tomorrow afternoon (13:00 Pacific time)—many industry experts believe that the NVIDIA boss will take the stage and formally introduce his company's B100 "Blackwell" GPU architecture. An enlightened few—including Dell's CEO, Jeff Clarke—have been treated to preview (AI and HPC) units, yet pre-introduction leaks have been scarce. Team Green is likely enforcing strict confidentiality conditions on a fortunate selection of trusted evaluators within its pool of ecosystem partners and customers.

Today, a brave soul has broken that silence—tech tipster AGF/XpeaGPU, despite fearing repercussions from the leather-jacketed one, revealed a handful of technical details a day prior to Team Green's highly anticipated unveiling: "I don't want to spoil NVIDIA B100 launch tomorrow, but this thing is a monster. 2 dies on (TSMC) CoWoS-L, 8x8-Hi HBM3E stacks for 192 GB of memory." They also crystal balled an inevitable follow-up card: "one year later, B200 goes with 12-Hi stacks and will offer a beefy 288 GB. And the performance! It's... oh no Jensen is there... me run away!" Reuters has also joined in on the fun, with some predictions and insider information: "NVIDIA is unlikely to give specific pricing, but the B100 is likely to cost more than its predecessor, which sells for upwards of $20,000." Enterprise products are expected to arrive first—possibly later this year—followed by gaming variants, maybe months later.
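For readers keeping score, the leaked capacities line up with simple stack arithmetic. The sketch below is a back-of-the-envelope check in Python, assuming 24 Gb (3 GB) HBM3E DRAM dies (an assumption on our part, not a confirmed specification) and the eight stacks quoted in the leak.

# Hypothetical capacity check: 8 HBM3E stacks, assumed 3 GB per DRAM die.
GB_PER_DIE = 3   # assumed 24 Gb (3 GB) HBM3E die; not confirmed by the leak
STACKS = 8       # "8x8-Hi HBM3E stacks" per the tipster

def total_hbm_gb(dies_per_stack: int) -> int:
    """Total on-package HBM capacity for a given stack height."""
    return STACKS * dies_per_stack * GB_PER_DIE

print(total_hbm_gb(8))   # 8-Hi stacks  -> 192 GB, matching the B100 claim
print(total_hbm_gb(12))  # 12-Hi stacks -> 288 GB, matching the B200 claim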

Financial Analyst Outs AMD Instinct MI300X "Projected" Pricing

AMD's December 2023 launch of new Instinct series accelerators has generated a lot of tech news buzz and excitement within the financial world, but not many folks are privy to Team Red's MSRP for the CDNA 3.0 powered MI300X and MI300A models. A Citi report has pulled back the curtain, albeit with "projected" figures—an inside source claims that Microsoft has purchased the Instinct MI300X 192 GB model for ~$10,000 apiece. North American enterprise customers appear to have taken delivery of the latest MI300 products around mid-January—inevitably, top secret information has leaked out to news investigators. SeekingAlpha's article (based on Citi's findings) alleges that Microsoft's data center division is AMD's top buyer of MI300X hardware—GPT-4 is reportedly up and running on these brand new accelerators.

The leakers claim that businesses further down the (AI and HPC) food chain are having to shell out $15,000 per MI300X unit, but this is a bargain when compared to NVIDIA's closest competing package—the venerable H100 SXM5 80 GB professional card. Team Green, similarly, does not reveal its enterprise pricing to the wider public—Tom's Hardware has kept tabs on H100 insider info and market leaks: "over the recent quarters, we have seen NVIDIA's H100 80 GB HBM2E add-in-card available for $30,000, $40,000, and even much more at eBay. Meanwhile, the more powerful H100 80 GB SXM with 80 GB of HBM3 memory tends to cost more than an H100 80 GB AIB." Citi's projection has Team Green charging up to four times more for its H100 product, when compared to Team Red's MI300X pricing. NVIDIA's dominant AI GPU market position could be challenged by cheaper yet still very performant alternatives—additionally, chip shortages have caused Jensen & Co. to step outside their comfort zone. Tom's Hardware reached out to AMD for comment on the Citi pricing claims—a company representative declined this invitation.
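Taking the figures quoted above at face value, the "up to four times" gap is easy to reproduce. The following snippet is purely illustrative arithmetic using the reported street and projected prices; none of these numbers are official MSRPs.

# Illustrative price-gap check using only the figures cited in this article.
mi300x_microsoft = 10_000   # Citi's projected price paid by Microsoft per MI300X
mi300x_others = 15_000      # reported price for customers further down the chain
h100_street_low = 30_000    # H100 80 GB add-in-card street pricing (Tom's Hardware)
h100_street_high = 40_000

print(h100_street_high / mi300x_microsoft)  # 4.0x, Citi's "up to four times more"
print(h100_street_low / mi300x_others)      # 2.0x at the more conservative end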

V-COLOR Announces DDR5-7200 192 GB RDIMM Kit for AMD TRX50 Threadripper Platform

V-COLOR announces the launch of its DDR5 overclocking R-DIMM memory for TRX50 motherboards powered by AMD Ryzen Threadripper 7000 series processors. The 192 GB (4x 48 GB) kits are ready to hit the market at speeds ranging from 6400 MHz up to 7200 MHz, for end users who require maximum capacity and speed. This package is geared toward content creators, intensive 3D modelers, AI programmers, trading machines, and HFT (high frequency trading) firms.

With AMD EXPO support, v-color DDR5 OC R-DIMM memory is ready to reach its full potential. It is designed for a diverse user base that includes both non-overclocking users and overclocking enthusiasts, with a specific focus on content creators, intensive 3D modelers, AI programmers, trading machines, and HFT (high frequency trading) companies. This kit is meticulously crafted with the most advanced SK Hynix DDR5 chips and automated binning for greater reliability and endurance, and is continuously tested for full compatibility with all TRX50 motherboards.

GIGABYTE Unveils Next-gen HPC & AI Servers with AMD Instinct MI300 Series Accelerators

GIGABYTE Technology: Giga Computing, a subsidiary of GIGABYTE and an industry leader in high-performance servers and IT infrastructure, today announced the GIGABYTE G383-R80 for the AMD Instinct MI300A APU and two GIGABYTE G593 series servers for the AMD Instinct MI300X GPU and AMD EPYC 9004 Series processors. As a testament to the performance of the AMD Instinct MI300 Series family of products, the El Capitan supercomputer at Lawrence Livermore National Laboratory uses the MI300A APU to power exascale computing. These new GIGABYTE servers are the ideal platform to propel discoveries in HPC & AI at exascale.

Marrying a CPU & GPU: G383-R80
For incredible advancements in HPC, there is the GIGABYTE G383-R80, which houses four LGA6096 sockets for MI300A APUs. Each APU integrates a CPU with twenty-four AMD Zen 4 cores and a powerful GPU built with AMD CDNA 3 GPU cores, and its chiplet design shares 128 GB of unified HBM3 memory for impressive performance with large AI models. The G383 server has plenty of expansion slots for networking, storage, or other accelerators, with a total of twelve PCIe Gen 5 slots. In the front of the chassis are eight 2.5" Gen 5 NVMe bays to handle heavy workloads such as real-time big data analytics and latency-sensitive workloads in finance and telecom.
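The per-system totals are not spelled out in the announcement, but they follow directly from the per-APU figures above. A minimal sketch, assuming all four sockets are populated with MI300A APUs:

# Derived totals for a fully populated G383-R80; per-APU figures from the announcement.
SOCKETS = 4                    # four LGA6096 sockets
ZEN4_CORES_PER_APU = 24        # twenty-four AMD Zen 4 cores per MI300A
UNIFIED_HBM3_GB_PER_APU = 128  # 128 GB of unified HBM3 per MI300A

print(SOCKETS * ZEN4_CORES_PER_APU)        # 96 CPU cores system-wide
print(SOCKETS * UNIFIED_HBM3_GB_PER_APU)   # 512 GB of unified HBM3 in total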

CORSAIR & Intel 14th Gen Processors - A Great Match for Your Next Gen Build

CORSAIR, a world leader in enthusiast components for gamers, creators, and PC builders, today announced its range-wide compatibility and readiness for Intel's new 14th Generation Intel Core processors. With an enormous range of tested and validated components such as DDR5 memory, all-in-one liquid CPU coolers, innovative and reliable power supplies, and blazing fast M.2 SSDs, you can be confident that CORSAIR components will pair exceptionally well with the latest Intel processors to help your PC reach its full potential.

The latest Core processors utilize Intel's new Performance Hybrid Architecture, featuring more efficiency-cores for the best gaming experience—even while streaming and encoding video. Other features, such as Intel Thread Director to keep background tasks from slowing down games and Smart Cache for smoother gameplay and faster load times, make 14th Gen Core processors a great choice for PC gaming. But it's not enough to just have the CPU; you need to complete your system with components that can keep up and tap into its full potential, and that's where CORSAIR comes in.

Corsair Launches the Dominator Titanium DDR5 Memory

CORSAIR, a world leader in enthusiast components for gamers, creators, and PC builders, today launched the much anticipated latest addition to its award-winning memory line-up, DOMINATOR TITANIUM DDR5 memory. Built using some of the fastest DDR5 ICs alongside patented CORSAIR DHX cooling technology for improved overclocking potential, DOMINATOR TITANIUM continues the DOMINATOR legacy with a stunning design and blazing performance.

Sporting an elegant, fresh new aesthetic and built using premium materials and components, DOMINATOR TITANIUM DDR5 memory will be available for both Intel and AMD platforms, supporting Intel XMP 3.0 when paired with 12th and 13th-Gen Core processors or AMD EXPO for Ryzen 7000 CPUs. These technologies enable easy overclocking in just a couple of clicks on compatible platforms.

Major CSPs Aggressively Constructing AI Servers and Boosting Demand for AI Chips and HBM, Advanced Packaging Capacity Forecasted to Surge 30~40%

TrendForce reports that explosive growth in generative AI applications like chatbots has spurred significant expansion in AI server development in 2023. Major CSPs including Microsoft, Google, AWS, as well as Chinese enterprises like Baidu and ByteDance, have invested heavily in high-end AI servers to continuously train and optimize their AI models. This reliance on high-end AI servers necessitates the use of high-end AI chips, which in turn will not only drive up demand for HBM during 2023~2024, but is also expected to boost growth in advanced packaging capacity by 30~40% in 2024.

TrendForce highlights that to augment the computational efficiency of AI servers and enhance memory transmission bandwidth, leading AI chip makers such as Nvidia, AMD, and Intel have opted to incorporate HBM. Presently, Nvidia's A100 and H100 chips each boast up to 80 GB of HBM2e and HBM3. In its latest integrated CPU and GPU, the Grace Hopper Superchip, Nvidia expanded a single chip's HBM capacity by 20%, hitting a mark of 96 GB. AMD's MI300 also uses HBM3, with the MI300A capacity remaining at 128 GB like its predecessor, while the more advanced MI300X has ramped up to 192 GB, marking a 50% increase. Google is expected to broaden its partnership with Broadcom in late 2023 to produce the ASIC AI accelerator chip TPU, which will also incorporate HBM memory, in order to extend AI infrastructure.
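The percentage figures TrendForce quotes follow directly from the capacities listed above; a quick sanity check:

# Capacity deltas cited by TrendForce, recomputed from the stated HBM capacities.
def pct_increase(new_gb: float, old_gb: float) -> float:
    """Percentage increase of new_gb over old_gb."""
    return (new_gb - old_gb) / old_gb * 100

print(pct_increase(96, 80))    # Grace Hopper vs. H100: 20% more HBM
print(pct_increase(192, 128))  # MI300X vs. MI300A:     50% more HBM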

Apple Introduces M2 Ultra

Apple today announced M2 Ultra, a new system on a chip (SoC) that delivers huge performance increases to the Mac and completes the M2 family. M2 Ultra is the largest and most capable chip Apple has ever created, and it makes the new Mac Studio and Mac Pro the most powerful Mac desktops ever made. M2 Ultra is built using a second-generation 5-nanometer process and uses Apple's groundbreaking UltraFusion technology to connect the die of two M2 Max chips, doubling the performance. M2 Ultra consists of 134 billion transistors—20 billion more than M1 Ultra. Its unified memory architecture supports up to a breakthrough 192 GB of memory capacity, which is 50 percent more than M1 Ultra, and features 800 GB/s of memory bandwidth—twice that of M2 Max. M2 Ultra features a more powerful CPU that's 20 percent faster than M1 Ultra, a larger GPU that's up to 30 percent faster, and a Neural Engine that's up to 40 percent faster. It also features a media engine with twice the capabilities of M2 Max for blazing ProRes acceleration. With all these advancements, M2 Ultra takes Mac performance to a whole new level yet again.

"M2 Ultra delivers astonishing performance and capabilities for our pro users' most demanding workflows, while maintaining Apple silicon's industry-leading power efficiency," said Johny Srouji, Apple's senior vice president of Hardware Technologies. "With huge performance gains in the CPU, GPU, and Neural Engine, combined with massive memory bandwidth in a single SoC, M2 Ultra is the world's most powerful chip ever created for a personal computer."

MSI Provides Motherboard UEFI Update for AMD's Ryzen 7000X3D Series CPUs, Adds 192 GB Memory Support

MSI has been in close contact with AMD and has referred to its official technical guidance to provide users with a safer and more optimized hardware environment. To achieve this goal, MSI will release a new list of BIOS updates specifically for AMD Ryzen 7000 series CPUs.

According to AMD's design specifications, the Ryzen 7000X3D series CPU does not fully support overclocking or overvoltage adjustments, including CPU ratio and CPU Vcore voltage. However, AMD EXPO technology can be used to optimize memory performance by appropriately increasing the CPU SoC voltage to ensure system stability when operating at higher memory frequencies.