News Posts matching #GPU


Intel Arc Battlemage Could Arrive Before Black Friday, Right in Time for Holidays

According to the latest report from ComputerBase, Intel had a strong presence at the recently concluded Embedded World 2024 conference. The company officially showcased its Arc series of GPUs for the embedded market, based on the existing Alchemist chips rebranded as the "E series." However, industry whispers hint at a more significant development—the impending launch of Intel's second-generation Arc Xe² GPUs, codenamed "Battlemage," potentially before the lucrative Black Friday shopping season. While Alchemist serves as Intel's current offering for embedded applications, many companies in attendance expressed keen interest in Battlemage, the successor to Alchemist. These firms often cover a broad spectrum, from servers and desktops to notebooks and embedded systems, necessitating a hardware platform that caters to this diverse range of applications.

Officially, Intel had previously stated that Battlemage would "hopefully" arrive before CES 2025, implying a 2024 launch. However, rumors from the trade show floor suggest a more ambitious target—a release before Black Friday, which falls on November 29th this year. This timeline aligns with Intel's historical launch patterns, as the original Arc A380 and notebook GPUs debuted in early October 2022, albeit with a staggered and limited rollout. Intel's struggles with the Alchemist launch serve as a learning experience for the company. Early promises and performance claims for the first-generation Arc GPUs failed to materialize, leading to a stuttering market introduction. This time, Intel has adopted a more reserved approach, avoiding premature and grandiose proclamations about Battlemage's capabilities.

Intel Unleashes Enterprise AI with Gaudi 3, AI Open Systems Strategy and New Customer Wins

At the Intel Vision 2024 customer and partner conference, Intel introduced the Intel Gaudi 3 accelerator to bring performance, openness and choice to enterprise generative AI (GenAI), and unveiled a suite of new open scalable systems, next-gen products and strategic collaborations to accelerate GenAI adoption. With only 10% of enterprises successfully moving GenAI projects into production last year, Intel's latest offerings address the challenges businesses face in scaling AI initiatives.

"Innovation is advancing at an unprecedented pace, all enabled by silicon - and every company is quickly becoming an AI company," said Intel CEO Pat Gelsinger. "Intel is bringing AI everywhere across the enterprise, from the PC to the data center to the edge. Our latest Gaudi, Xeon and Core Ultra platforms are delivering a cohesive set of flexible solutions tailored to meet the changing needs of our customers and partners and capitalize on the immense opportunities ahead."

Acer Launches New Nitro 14 and Nitro 16 Gaming Laptops Powered by AMD Ryzen 8040 Series Processors

Acer today announced the new Nitro 14 and Nitro 16 gaming laptops, powered by AMD Ryzen 8040 Series processors with Ryzen AI[1]. With up to NVIDIA GeForce RTX 4060[2] Laptop GPUs supported by DLSS 3.5 technology, both are backed by NVIDIA's RTX AI platform, providing an array of capabilities in over 500 games and applications, enhanced by AI. Gamers are immersed in their 14- and 16-inch NVIDIA G-SYNC compatible panels with up to WQXGA (2560x1600) resolution.

Whether on a call or streaming in-game, Acer PurifiedVoice 2.0 harnesses the power of AI to block out external noises, while Acer PurifiedView keeps users front and center of all the action. Microsoft Copilot in Windows (with a dedicated Copilot key) helps accelerate everyday tasks on these AI laptops, and with one month of Xbox Game Pass Ultimate included with every device, players will enjoy hundreds of high-quality PC games. To seamlessly take command of device performance and customizations, one click of the NitroSense key directs users to the control center and the library of available AI-related functions through the new Experience Zone.

U.S. Updates Advanced Semiconductor Ban, Actual Impact on the Industry Will Be Insignificant

On March 29th, the United States announced another round of updates to its export controls, targeting advanced computing, supercomputers, semiconductor end-uses, and semiconductor manufacturing products. These new regulations, which took effect on April 4th, are designed to prevent certain countries and businesses from circumventing U.S. restrictions to access sensitive chip technologies and equipment. Despite these tighter controls, TrendForce believes the practical impact on the industry will be minimal.

The latest updates aim to refine the language and parameters of previous regulations, tightening the criteria for exports to Macau and D:5 countries (China, North Korea, Russia, Iran, etc.). They require a detailed examination of all technology products' Total Processing Performance (TPP) and Performance Density (PD). If a product exceeds certain computing power thresholds, it must undergo a case-by-case review. Nevertheless, a new provision, Advanced Computing Authorized (ACA), allows for specific exports and re-exports among selected countries, including the transshipment of particular products between Macau and D:5 countries.
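
For readers trying to map products onto these rules, the metrics can be approximated from public definitions: earlier BIS rules describe TPP as twice the peak multiply-accumulate throughput (in TOPS) multiplied by the operand bit length, and Performance Density as TPP divided by the applicable die area. The sketch below uses those definitions with purely hypothetical accelerator figures; the actual thresholds and aggregation rules in the April 4th update are more involved.

```python
# Rough sketch of how the export-control compute metrics are derived, assuming
# the definitions from earlier BIS rules: TPP = 2 x MacTOPS x bit length of the
# operation, and Performance Density (PD) = TPP / applicable die area in mm^2.
# The accelerator figures below are hypothetical, for illustration only.

def total_processing_performance(mac_tops: float, bit_length: int) -> float:
    """TPP: 2 x peak multiply-accumulate throughput (in TOPS) x operand bit length."""
    return 2 * mac_tops * bit_length

def performance_density(tpp: float, die_area_mm2: float) -> float:
    """PD: TPP divided by the applicable die area in square millimetres."""
    return tpp / die_area_mm2

# Hypothetical accelerator: 150 Tera-MACs/s at FP16 on an 800 mm^2 die.
tpp = total_processing_performance(mac_tops=150, bit_length=16)  # 4800
pd = performance_density(tpp, die_area_mm2=800)                  # 6.0

print(f"TPP = {tpp:.0f}, PD = {pd:.2f}")
# A product exceeding the regulation's TPP/PD thresholds would need a
# case-by-case license review before export to the listed destinations.
```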

Imagination's new Catapult CPU is Driving RISC-V Device Adoption

Imagination Technologies today unveils the next product in the Catapult CPU IP range, the Imagination APXM-6200 CPU: a RISC-V application processor with compelling performance density, seamless security, and the artificial intelligence capabilities needed to meet the compute and intuitive user-experience demands of next-generation consumer and industrial devices.

"The number of RISC-V based devices is skyrocketing with over 16Bn units forecast by 2030, and the consumer market is behind much of this growth" says Rich Wawrzyniak, Principal Analyst at SHD Group. "One fifth of all consumer devices will have a RISC-V based CPU by the end of this decade. Imagination is set to be a force in RISC-V with a strategy that prioritises quality and ease of adoption. Products like APXM-6200 are exactly what will help RISC-V achieve the promised success."

AIO Workstation Combines 128-Core Arm Processor and Four NVIDIA GPUs Totaling 28,416 CUDA Cores

All-in-one computers are traditionally seen as lower-powered alternatives to full-size desktop workstations. However, a new offering from Alafia AI, a startup focused on medical imaging appliances, aims to shatter that perception. The company's upcoming Alafia Aivas SuperWorkstation packs serious hardware muscle, demonstrating that all-in-one systems can match the performance of their more modular counterparts. At the heart of the Aivas SuperWorkstation lies a 128-core Ampere Altra processor running at a 3.0 GHz clock speed. This CPU is complemented by not one but three NVIDIA L4 GPUs for compute, and a single NVIDIA RTX 4000 Ada GPU for video output, delivering a combined 28,416 CUDA cores for accelerated parallel computing tasks. The system doesn't skimp on other components, either. It features a 4K touch display with up to 360 nits of brightness, an extensive 2 TB of DDR4 RAM, and storage options up to an 8 TB solid-state drive. This combination of cutting-edge CPU, GPU, memory, and storage is squarely aimed at the demands of medical imaging and AI development workloads.
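
As a sanity check, the 28,416-core total lines up with the commonly listed per-card core counts for the GPUs involved; the figures below are assumed from public spec sheets rather than stated in Alafia AI's announcement.

```python
# Back-of-envelope check of the combined CUDA core count, assuming the
# commonly listed per-card figures (7,424 for the NVIDIA L4 and 6,144 for
# the RTX 4000 Ada); treat these as assumptions, not vendor-confirmed here.
L4_CUDA_CORES = 7_424
RTX_4000_ADA_CUDA_CORES = 6_144

total = 3 * L4_CUDA_CORES + RTX_4000_ADA_CUDA_CORES
print(total)  # 28416 - matches the figure quoted for the Aivas SuperWorkstation
```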

The all-in-one form factor packs this incredible hardware into a sleek, purposefully designed clinical research appliance. While initially targeting software developers, Alafia AI hopes that institutions that optimize their applications for the Arm architecture can eventually deploy the Aivas SuperWorkstation for production medical imaging workloads. The company is aiming for application integration in Q3 2024 and full ecosystem device integration by Q4 2024. With this powerful new offering, Alafia AI is challenging long-held assumptions about the performance limitations of all-in-one systems. The Aivas SuperWorkstation demonstrates that the right hardware choices can transform these compact form factors into true powerhouse workstations. With the combined output of three NVIDIA L4 compute units alongside an RTX 4000 Ada graphics card, this AIO is more powerful than some high-end desktop workstations.

X-Silicon Startup Wants to Combine RISC-V CPU, GPU, and NPU in a Single Processor

While we are all used to systems with a CPU, GPU, and, more recently, an NPU, X-Silicon Inc. (XSi), a startup founded by Silicon Valley veterans, has unveiled an interesting RISC-V processor that can simultaneously handle CPU, GPU, and NPU workloads in a single chip. This innovative chip architecture, which will be open-source, aims to provide a flexible and efficient solution for a wide range of applications, including artificial intelligence, virtual reality, automotive systems, and IoT devices. The new microprocessor combines a RISC-V CPU core with vector capabilities and GPU acceleration into a single chip, creating a versatile all-in-one processor. By integrating the functionality of a CPU and GPU into a single core, X-Silicon's design offers several advantages over traditional architectures. The chip utilizes the open-source RISC-V instruction set architecture (ISA) for both CPU and GPU operations, running a single instruction stream. This approach promises a lower memory footprint and improved efficiency, as there is no need to copy data between separate CPU and GPU memory spaces.
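
As a purely conceptual illustration of that last point (this is not X-Silicon code), the sketch below contrasts the staging copies a discrete GPU typically needs with an in-place update on a buffer that both CPU and GPU logic can see; the numpy copies merely stand in for host-to-device and device-to-host transfers.

```python
import numpy as np

# Conceptual sketch: on a discrete GPU, input and output data typically cross
# a PCIe boundary via explicit copies; a CPU/GPU core sharing one address
# space can operate on the same buffer in place.

data = np.random.rand(1_000_000).astype(np.float32)

# Discrete-GPU style: copy in, compute, copy out (np.copy stands in for
# host-to-device / device-to-host transfers).
device_buffer = np.copy(data)             # host -> device
device_buffer *= 2.0                      # kernel runs on the device copy
result_discrete = np.copy(device_buffer)  # device -> host

# Unified single-address-space style: no staging copies or duplicate
# allocations are needed; the buffer is simply updated where it lives.
data *= 2.0
result_unified = data

assert np.allclose(result_discrete, result_unified)
```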

X-Silicon's design, called the C-GPU architecture, uses a RISC-V vector core with 16 32-bit FPUs and a scalar ALU for processing integer as well as floating-point instructions. A unified instruction decoder feeds the cores, which are connected to a thread scheduler, texture unit, rasterizer, clipping engine, neural engine, and pixel processors. Everything is fed into a frame buffer, which drives the video engine for display output. This arrangement allows users to program each core individually for HPC, AI, video, or graphics workloads. A chip is unusable without software, which is why X-Silicon is working on OpenGL ES, Vulkan, Mesa, and OpenCL support. Additionally, the company plans to release a hardware abstraction layer (HAL) for direct chip programming. According to Jon Peddie Research (JPR), the industry has been seeking an open-standard GPU that is flexible and scalable enough to support various markets. X-Silicon's CPU/GPU hybrid chip aims to address this need by providing manufacturers with a single, open chip design that can handle any desired workload. XSi gave no timeline, but it plans to distribute the IP to OEMs and hyperscalers, so first silicon is still some way off.

Apple M3 Ultra Chip Could be a Monolithic Design Without UltraFusion Interconnect

As Apple works through generational updates of its M series of chips, speculation from industry insiders is building around the yet-to-be-announced, top-of-the-line M3 Ultra chip of the third generation. The latest round of reports suggests that the M3 Ultra might step away from its predecessor's design, potentially adopting a monolithic architecture without the UltraFusion interconnect technology. In the past, Apple has relied on a dual-chip design for its Ultra variants, using the UltraFusion interconnect to combine two M series Max chips. For example, the second-generation Ultra chip, the M2 Ultra, boasts 134 billion transistors across two 510 mm² chips. However, die-shots of the M3 Max have sparked discussions about the absence of dedicated chip space for the UltraFusion interconnect.

The absence of visible interconnect space on early die-shots is not conclusive evidence; early M1 Max die-shots likewise showed no UltraFusion interconnect, yet the chip still went on to form the M1 Ultra. Even so, it has led industry speculation that the M3 Ultra may indeed feature a monolithic design. Given that the M3 Max has 92 billion transistors and an estimated die size between 600 and 700 mm², doubling it on a single die would require roughly 1,200 to 1,400 mm², far beyond the maximum die size of about 848 mm² for the TSMC N3B process used by Apple. A monolithic M3 Ultra therefore could not simply be two M3 Max designs merged onto one die, and would be pushing manufacturing limits even as a purpose-built design. The potential shift to a monolithic design for the M3 Ultra raises questions about how Apple will scale the chip's performance without the UltraFusion interconnect. Competing solutions, such as NVIDIA's Blackwell GPU, use a high-bandwidth chip-to-chip interface to connect two 104 billion transistor dies, achieving a bandwidth of 10 TB/s. In comparison, the M2 Ultra's UltraFusion interconnect provided a bandwidth of 2.5 TB/s.

US Government Wants Nuclear Plants to Offload AI Data Center Expansion

The expansion of AI technology affects not only the production and demand for graphics cards but also the electricity grid that powers them. Data centers hosting thousands of GPUs are becoming more common, and the industry has been building new facilities for GPU-enhanced servers to serve the need for more AI. However, these powerful GPUs often consume over 500 watts per card, and NVIDIA's latest Blackwell B200 GPU has a TGP of 1,000 watts, a full kilowatt. These kilowatt GPUs will be deployed in data centers housing tens of thousands of cards, resulting in multi-megawatt facilities. To ease the load on the national electricity grid, US President Joe Biden's administration has been in discussions with big tech companies about re-evaluating their power sources, possibly by using smaller nuclear plants. In an Axios interview, Energy Secretary Jennifer Granholm noted that "AI itself isn't a problem because AI could help to solve the problem." The problem, rather, is the load on the national electricity grid, which cannot sustain the rapid expansion of AI data centers.
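
For a rough sense of scale, here is an illustration using hypothetical but plausible numbers; the card count and PUE are assumptions, not figures from the administration or NVIDIA.

```python
# Rough illustration of facility-level power draw using hypothetical numbers:
# tens of thousands of ~1 kW accelerators, plus typical data-center overhead
# (cooling, networking, power delivery) expressed as a PUE factor.
gpus = 20_000        # hypothetical card count
gpu_power_kw = 1.0   # roughly a Blackwell B200-class TGP
pue = 1.3            # assumed power usage effectiveness

it_load_mw = gpus * gpu_power_kw / 1_000
facility_mw = it_load_mw * pue
print(f"IT load ≈ {it_load_mw:.0f} MW, facility draw ≈ {facility_mw:.0f} MW")
# ≈ 20 MW of GPUs alone, ≈ 26 MW at the meter - hence the grid concerns.
```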

The Department of Energy (DOE) has reportedly been talking with firms, most notably hyperscalers like Microsoft, Google, and Amazon, about considering nuclear fusion and fission power plants to satisfy the needs of AI expansion. We have already discussed Microsoft's plan to site a nuclear reactor near one of its data center facilities to help manage the load of thousands of GPUs running AI training/inference. However, this time it is not just Microsoft; other tech giants are reportedly considering nuclear as well. They all need to offload their AI expansion from the US national power grid and develop a nuclear solution. Nuclear power supplies a mere 20% of US electricity, and the DOE is currently financing the restoration and resumption of service of the Holtec Palisades 800-MW electric nuclear generating station with $1.52 billion in funds. Microsoft is investing in a small modular reactor (SMR) and microreactor energy strategy, which could be an example for other big tech companies to follow.

Intel Arc "Battlemage" Xe2-HPG BMG-10 & BMG-21 GPUs Discovered in Shipping Manifest

Speculated lower-end Intel second-generation Arc GPUs popped up via SiSoftware Sandra database entries around mid-March—evaluation samples are likely in the hands of trusted hardware partners. Yesterday, momomo_us happened upon another interesting shipping manifest, following a series of AMD-related leaks. The latest list reveals five "Battlemage" products—three utilizing the BMG-21 GPU, and the remaining two based on the BMG-10 design. These identifiers have appeared in older leaks, although the latter has only been sighted in one place—at Intel Malaysia's Failure Analysis Lab.

Previous leaks suggest that these second-generation Arc models (Xe2) reside within a "High-Performance Graphics" (HPG) discrete GPU family—the Xe2-HPG BMG-10 range is likely targeting an "enthusiast" market segment, while the Xe2-HPG BMG-21 tier is rumored to offer mid-tier performance. Intel staffers have expressed confidence about a possible late 2024 launch window. Back in January, Tom "TAP" Petersen revealed that the Arc hardware team had already moved on to third-gen "Celestial" GPU endeavors: "I'd say about 30% of our engineers are working on Battlemage, mostly on the software side because our hardware team is on the next thing." It seems the first-gen deck has not been fully cleared—the Alchemist family could be joined by two new variants in the near future.

Unannounced AMD Instinct MI388X Accelerator Pops Up in SEC Filing

AMD's Instinct family has welcomed a new addition—the MI388X AI accelerator—as discovered in a lengthy regulatory 10-K filing submitted to the SEC. The document reveals that the unannounced SKU—along with the MI250, MI300X and MI300A integrated circuits—cannot be sold to Chinese customers due to updated US trade regulations (new requirements were issued around October 2023). Versal VC2802 and VE2802 FPGA products are also mentioned in the same section. Earlier this month, AMD's China-specific Instinct MI309 package was deemed too powerful for export by the US Department of Commerce.

AMD has not published anything about the Instinct MI388X's official specification, and technical details have not emerged via leaks. The "X" tag likely implies that it has been designed for AI and HPC applications, akin to the recently launched MI300X accelerator. The designation of a higher model number could (naturally) point to a potentially more potent spec sheet, although Tom's Hardware posits that MI388X is a semi-custom spinoff of an existing model.

ASUS ROG Strix GeForce RTX 4090 D Tweaked to Match RTX 4090 FE Performance

NVIDIA's GeForce RTX 4090 D GPU was launched late last year in China—this weakened variant (of the standard RTX 4090) was designed with US trade regulations in mind. Chinese media outlets have toyed around with various custom models for several months—January 2024 evaluations indicated a 5% performance disadvantage when lined up against unrestricted models. The GeForce RTX 4090 D GPU is a potent beast despite a reduced core count and restricted TDP limit, but Chinese enthusiasts have continued to struggle with the implementation of worthwhile overclocks. HKEPC—a Hong Kong-situated PC hardware review outlet—has bucked that trend.

The mega-sized flagship ZOTAC RTX 4090 D PGF model has the technical credentials to break beyond the expected overclock increase of "2 to 5%," courtesy of a powerful 28-phase power PCB design and a 530 W max TGP limit, yet the Expreview team pulled only a paltry 3.7% of extra performance from ZOTAC China's behemoth. In contrast, HKEPC wrangled out some bigger numbers with a sampled ASUS ROG Strix RTX 4090 D Gaming OC graphics card—matching unrestricted variants: "it turns out that NVIDIA only disallows AIC manufacturers from presetting overclocks, but it does not restrict users from overclocking by themselves. After a high degree of overclocking adjustment, the ROG Strix RTX 4090 D actually has a way to achieve the performance level of the RTX 4090 FE."

PGL Investigating GeForce RTX 4080 GPU Driver Crash, Following Esports Event Disruption

The Professional Gamers League (PGL) showcased its newly upgraded tournament rig specification prior to the kick-off of its (still ongoing) CS2 Major Copenhagen 2024 esports event. As reported over a week ago, competitors have been treated to modern systems decked out with AMD's popular gaming-oriented Ryzen 7 7800X3D CPU and NVIDIA GeForce RTX 4080 graphics cards, while BenQ's ZOWIE XL2566K 24.5" 360 Hz gaming monitor delivers a superfast visual feed. A hefty chunk of change has been spent on new hardware, but expensive cutting-edge tech can falter. Virtus.pro team member Jame experienced a major software crash during a match against rival team G2.

PCGamesN noted that this frustrating incident ended the affected team's chance to grab a substantial cash reward. Their report put a spotlight on this unfortunate moment: "in the second round of a best of three, Virtus Pro were a few rounds away from qualifying for the playoffs, only for their aspirations to be squashed through no fault of their own...Jame experiences a graphics card driver crash that irrecoverably steers the round in G2's favor, culminating in Virtus Pro losing the match 11-13. Virtus Pro would then go on to lose the subsequent tie-break match as the round was not replayed. In effect, the graphics card driver crash partly cost the team their chance at winning an eventual $1.25 million prize pool." PGL revealed, via a social media post, that officials are doing some detective work: "we wish to clarify the situation involving Jame during the second map, Inferno, in the series against G2. A technical malfunction occurred due to an NVIDIA driver crash, resulting in a game crash. We are continuing our investigation into the matter." The new tournament rigs were "meticulously optimized" and tested in the weeks leading up to CS2 Major Copenhagen 2024—it is believed that the driver crash was a random anomaly. PGL and NVIDIA are currently working on a way to "identify and fix the issue."

Introspect Technology Ships World's First GDDR7 Memory Test System

Introspect Technology, a JEDEC member and a leading manufacturer of test and measurement instruments, announced today that it has shipped the M5512 GDDR7 Memory Test System, the world's first commercial solution for testing JEDEC's newly minted JESD239 Graphics Double Data Rate (GDDR7) SGRAM specification. This category-creating solution enables graphics memory engineers, GPU design engineers, product engineers in both memory and GPU spaces, and system integrators to rapidly bring up new GDDR7 memory devices, debug protocol errors, characterize signal integrity, and perform detailed memory read/write functional stress testing without requiring any other tool.

The GDDR7 specification is the latest industry standard aimed at the creation of high-bandwidth and high-capacity memory implementations for graphics processing, artificial intelligence (AI), and AI-intensive networking. Featuring pulse-amplitude modulation (PAM) with an improved signal-to-noise ratio compared to the PAM4 standards used in networking, GDDR7's PAM3 modulation technology achieves greater power efficiency while significantly increasing data transmission bandwidth over constrained electrical channels.
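
For context on why PAM3 was chosen, a rough comparison of bits carried per signalling symbol is shown below; the 3-bits-per-2-symbols mapping attributed to GDDR7 is widely reported but should be treated as an assumption here.

```python
import math

# Rough comparison of bits carried per signalling symbol (UI). PAM3's
# theoretical capacity is log2(3); GDDR7 is widely reported to map 3 bits onto
# every 2 PAM3 symbols (1.5 bits/UI) - treat that mapping as an assumption.
nrz_bits_per_ui  = 1.0           # 2 voltage levels
pam3_theoretical = math.log2(3)  # ~1.585
pam3_gddr7       = 3 / 2         # 3 bits per 2 symbols
pam4_bits_per_ui = 2.0           # 4 levels, as used in some networking PHYs

print(f"NRZ: {nrz_bits_per_ui}, PAM3 (GDDR7 mapping): {pam3_gddr7}, "
      f"PAM3 (theoretical): {pam3_theoretical:.3f}, PAM4: {pam4_bits_per_ui}")
# PAM3 moves ~50% more data per pin toggle than NRZ while keeping wider signal
# eyes (better SNR) than PAM4 - the trade-off the press material refers to.
```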

Report Suggests Naver Siding with Samsung in $752 Million "Mach-1" AI Chip Deal

Samsung debuted its Mach-1 generation of AI processors during a recent shareholder meeting—the South Korean megacorp anticipates an early 2025 launch window. Their application-specific integrated circuit (ASIC) design is expected to "excel in edge computing applications," with a focus on low-power and efficiency-oriented operating environments. Naver Corporation was a key NVIDIA high-end AI customer in South Korea (and Japan), but the leading search platform firm and creator of the HyperCLOVA X LLM reportedly deliberated on adopting alternative hardware last October. The Korea Economic Daily believes that Naver's relationship with Samsung is set to grow, courtesy of a proposed $752 million investment: "the world's top memory chipmaker, will supply its next-generation Mach-1 artificial intelligence chips to Naver Corp. by the end of this year."

Reports from last December indicated that the two companies were deep into the process of co-designing power-efficient AI accelerators—Naver's main goal is to finalize a product that will offer eight times more energy efficiency than NVIDIA's H100 AI accelerator. Naver's alleged bulk order—of roughly 150,000 to 200,000 Samsung Mach-1 AI chips—appears to be a stopgap. Industry insiders reckon that Samsung's first-gen AI accelerator is much cheaper when compared to NVIDIA H100 GPU price points—a per-unit figure of $3756 is mentioned in the KED Global article. Samsung is speculated to be shopping its fledgling AI tech to Microsoft and Meta.
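
The reported figures are roughly self-consistent; a quick back-of-envelope check using only the numbers quoted above:

```python
# Quick consistency check of the reported figures: a bulk order of roughly
# 150,000-200,000 chips at the quoted ~$3,756 per unit lands close to the
# proposed $752 million investment. Figures are as reported, not confirmed.
unit_price = 3_756
for units in (150_000, 200_000):
    print(f"{units:,} units x ${unit_price:,} ≈ ${units * unit_price / 1e6:,.0f} million")
# 150,000 -> ~$563 million; 200,000 -> ~$751 million, in line with the ~$752M deal.
```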

Baldur's Gate 3 Up and Running on Snapdragon X Elite Reference Platform

During a "Windows on Snapdragon, a Platform Ready for your PC Games" presentation at the recently concluded GDC 2024 event, Qualcomm claimed that most Windows games will run (via emulation) on its upcoming series of Arm-based Snapdragon X Elite processors. Issam Khalil (principal engineer) did not mention specific titles, but divulged that the Snapdragon Studios division has spent time working through "top games" on Steam. Qualcomm's laptop-oriented Adreno GPU design is being hyped up as demonstrating an "80% performance uplift versus AMD's Radeon 780M iGPU," but the overall Snapdragon X Elite package is often compared to Apple's M3 chipsets (also Arm-based).

Reference laptops—running early Snapdragon X Elite silicon—have been distributed to favored partners. A semi-public showcase—recorded by Devin Arthur—revealed some promising gaming performance credentials on one of these devices. The self-proclaimed "Snapdragon Insider" was invited to the company's San Diego, California headquarters—Arthur happily reported that a red "23 W Snapdragon X Elite model" was demoing: "Baldur's Gate 3 running at 1080p hovering around 30 FPS, which is perfectly playable!" Specific visual settings were not listed in Arthur's substack article, or showcased during his brief video recording—but Qualcomm's claims about the Adreno GPU beating Team Red's best iGPU could be valid. Windows Central reckons that Control and Redout 2 have been demoed on Snapdragon X Elite devices.

Intel Patch Notes Reveal Arc A750E & A580E SKUs

Phoronix has taken a short break away from monitoring the latest goings-on at AMD's software department—the site's editor-in-chief, Michael Larabel, took a moment to investigate patch notes relating to Intel's Xe and i915 Linux kernel graphics drivers. Earlier today, he noticed that "two additional PCI IDs" have been added to Team Blue's DG2/Alchemist family. This discovery prompted further sleuthing; after "some searching and turning up hits within the Intel Compute Runtime code, 0x56BE is for an Intel Arc Graphics A750E variant and 0x56BF is for an Intel Arc Graphics A580E."

The aforementioned GPU identification codes seem to exist in a gray area—the patch notes do not reveal whether these new variants are destined for desktop or mobile platforms. VideoCardz cited a remark made by "Bionic_Squash"—the reputable leaker reckons that the "IDs are linked to Intel's Arc Embedded series. This family is tailored for industrial, business, and commercial applications, ranging from edge systems to powering large interactive screens." It is highly likely that Intel is paving the way for embedded/low-power variants of its existing Arc A750 and A580 GPUs. Tom's Hardware proposes that Team Blue is clearing out its inventory of remaining Alchemist silicon ahead of a successive generation's rollout—Battlemage is a major priority in 2024.

SPARKLE Arc A380 GENIE GPU & A310 ECO Cooler Hybridized

SPARKLE unveiled its low-profile series around mid-January—this lineup included an Intel Arc A380 GENIE dual-fan/dual-slot model and an Arc A310 ECO (single-slot, single-fan config) card. Compact device expert and YouTuber ETA Prime has uploaded a fascinating video covering a modification project that hybridizes SPARKLE's latest low-profile graphics cards for use in a Minisforum MS-01 test system. SPARKLE has released various models based on Intel's "Alchemist" Arc A380 6 GB GPU, and their PCB design is shared across a range of cooling options. ETA Prime could not source an aftermarket lower-profile cooler for his SPARKLE A380 GENIE, so he resorted to cannibalizing the A310 ECO model for relevant parts.

The ECO's single-slot cooling solution was not well proportioned enough to make contact with the SPARKLE A380 GENIE's VRM, so ETA Prime had to "add an aftermarket heatsink." He sold the remaining unneeded pieces—the A310 board and GENIE cooler—to a friend for $60. The resultant hybrid—the "world's first-ever single-slot Intel Arc A380"—was installed in the SFF Minisforum MS-01 test system. Notable specs included the Intel Core i9-13900H CPU and 32 GB of DDR5-5200 RAM. ETA Prime utilized Acer's Predator BiFrost graphics card utility to set a stable 54 W power limit. 60-ish FPS performance results—with low-to-medium settings at 1080p across a selection of games—were promising, especially for a restrictive small form factor build. ETA Prime hopes that SPARKLE will launch a smaller A380 model in the future; alternatively, a specialist firm could produce a nice aftermarket copper part.

Manli Readies GeForce RTX 4070 Ti & 4070 SUPER Gallardo "Slim" Cards

Graphics card enthusiasts with a thing for earthy green hues will likely appreciate Manli's latest products—its Gallardo graphics card range has expanded with two new models. The Asian and European market-focused manufacturer has already unleashed "refreshed" models that utilize NVIDIA's GeForce RTX 40-series SUPER GPUs, but the newest entries sport a revised signature green flagship "Gallardo" design. VideoCardz has pored over the small details—it turns out that Manli has produced a slimmer profile: "perhaps something that is not obvious is that Manli has introduced the RTX 4070 Ti SUPER Gallardo already. The new version marks a second revision, featuring an entirely different aesthetic compared to its predecessor. The most important change is that it no longer occupies a 3.5-slot space." Manli has evidently put together a "much slimmer 2-slot version" under an "M3604+N693" moniker.

The non-Ti model is likely coming out soon, but Manli has not yet announced official pricing or launch date details for its newly redesigned Gallardo Ada Lovelace cards—official product pages were created last week. Despite the cards' flagship status, VideoCardz notes that Manli has not implemented any factory overclocking—the Gallardo range is often associated with "system integrators like Sycom for further customization." The spec sheet advertises integrated LED lighting with seven available color cycles, four 6 mm copper heat pipes with segmented heatsinks, and a metal backplate for reinforcement and protection.

Intel Arc "Battlemage" GPUs Appear on SiSoftware Sandra Database

Intel is quietly working on second-generation Arc GPUs—we have not heard much about Xe2 "Battlemage" since CES 2024. Back in January, Tom "TAP" Petersen—an Intel fellow and marketing guru—casually revealed during an interview conducted by PC World: "I'd say about 30% of our engineers are working on Battlemage, mostly on the software side because our hardware team is on the next thing (Celestial)...Battlemage already has its first silicon in the labs which is very exciting and there's more good news coming which I can't talk about right now." Intel appears to be targeting a loose late 2024 launch window; Petersen stated that he would like to see second-generation products arrive at retail before CES 2025's commencement. The SiSoftware Sandra database was updated around mid-March with two very intriguing new Intel GPU entries—test systems (built on an ASUS PRIME Z790-P WIFI mainboard) were spotted running graphics solutions equipped with 20 Xe-Cores (160 EU) and 24 Xe-Cores (192 EU).

Michael/miktdt commented on the freshly discovered database entries: "some smaller versions are on Sisoft...I guess they are coming. Single-float GP Compute looks quite good for just 160 VE/192 VE. Doesn't tell much about the release though, I guess anything between Q4 2024 and Q2 2025 is a possibility." Both models seem to sport 11.6 GB VRAM capacities—likely 12 GB in practice—and 8 MB of L2 cache. Wccftech has guesstimated a potential 192-bit memory bus for these speculative lower-level GPUs. Team Blue has a bit more tweaking to do—based on leaked figures—but time is on their side: "the performance per core for Alchemist currently sits an average of 16% faster than the alleged Battlemage GPU which isn't a big deal since driver-level optimizations and final silicon can give a huge boost when the retail products come out."

Vastarmor Debuts White Graphics Card Design - Radeon RX 7700 XT ALLOY

Vastarmor is a relatively young graphics card manufacturer and AMD board partner (since the RDNA 2 days)—their Alloy product range was updated with new Radeon RX 7800 XT and RX 7700 XT triple-fan "PRO" models last September. VideoCardz has discovered a new dual-fan "non-PRO" white variant—the Vastarmor RX 7700 XT Alloy White 12 GB. TPU's GPU database entry lists a release date of August 25—according to the VideoCardz report, Vastarmor has not settled on final pricing or an official launch date. The standard black (with small red accents) RX 7700 XT Alloy model did reach Chinese retailers last year—the pale variant is predicted to cost about the same, or demand a slight premium over the black version.

Specifications remain consistent across both—according to VideoCardz: "Vastarmor has verified that the card maintains a base clock of 1784 MHz, a game clock of 2276 MHz, and a boost clock that reaches up to 2600 MHz (an overclock of 2.2% for boost). Despite its compact size, measuring at 26.3 x 13.2 cm, the card demands three slots due to its thickness of 5.6 cm. Power-wise, it relies on standard 8-pin power connectors, installed in pairs." The factory-set overclocks are identical to those of Vastarmor's RX 7700 XT Alloy PRO model, although that triple-fan design is slightly slimmer; its longer board accommodates a 90 mm fan positioned between two 100 mm units.

NVIDIA GeForce RTX 4060, 4060 Ti & 4070 GPU Refreshes Spotted in Leak

NVIDIA completed its last round of GeForce RTX 40-series GPU refreshes at the very end of January—new evidence suggests that another wave is scheduled for imminent release. MEGAsizeGPU has acquired and shared a tabulated list of new Ada Lovelace GPU variants—the trusted leaker's post presents a timetable that was supposed to kick off within the second half of this month. First up is the GeForce RTX 4070 GPU, with a current designation of AD104-251—the leaked table suggests that a new variant, AD103-175-KX, is due very soon (or overdue). Wccftech pointed out that the new ID was previously linked to NVIDIA's GeForce RTX 4070 SUPER SKU. Moving into April, next up is the GeForce RTX 4060 Ti—jumping from the current AD106-351 die to a new unit, AD104-150-KX. The third adjustment (allegedly) affects the GeForce RTX 4060—going from AD107-400 to AD106-255, also timetabled for next month. MEGAsizeGPU reckons that Team Green will be swapping chips, but not rolling out broadly adjusted specifications—a best-case scenario could include higher CUDA, RT, and Tensor core counts. According to VideoCardz, the new die designations have popped up in freshly released official driver notes—it is inferred that the variants are getting an "under the radar" launch treatment.

EMTEK Launches GeForce RTX 4070 SUPER MIRACLE X3 White 12 GB Graphics Card

EMTEK products rarely pop up on TPU's news section, but the GPU database contains a smattering of the South Korean manufacturer's Ampere-based GeForce RTX graphics cards. VideoCardz has discovered an updated MIRACLE X3 White model—EMTEK's latest release is a GeForce RTX 4070 SUPER 12 GB card. The triple-fan model seems to stick with NVIDIA's reference specifications—VideoCardz also noticed a physical similarity: "under the cooler shroud, the card boasts a non-standard U-shaped PCB, reminiscent of Team Green's Founders Edition. However, it remains uncertain whether EMTEK utilizes the same PCB as NVIDIA." The asking price of ₩919,990 converts to around $680 when factoring in regional taxes. EMTEK's MIRACLE X3 cooling solution seems to be fairly robust—featuring four 6 mm heat pipes—so an adherence to stock clocks is a slight surprise. The company's GAMING PRO line includes a couple of factory-overclocked options.

Nvidia CEO Reiterates Solid Partnership with TSMC

One key takeaway from the ongoing GTC is that Nvidia's AI empire has taken shape with strong partnerships with TSMC and other Taiwanese makers, such as the major server ODMs.

According to a report from technology-focused outlet DIGITIMES Asia, Huang underscored his company's partnerships with TSMC, as well as the supply chain in Taiwan, during his keynote at GTC on March 18. Speaking to the press later, Huang said Nvidia will have very strong demand for CoWoS, the advanced packaging service TSMC offers.

Jensen Huang Discloses NVIDIA Blackwell GPU Pricing: $30,000 to $40,000

Jensen Huang has been talking to media outlets following the conclusion of his keynote presentation at NVIDIA's GTC 2024 conference—a CNBC TV "exclusive" interview with the Team Green boss has caused a stir in tech circles. Jim Cramer's long-running "Squawk on the Street" trade segment hosted Huang for just under five minutes—the presenter labelled the latest edition of GTC the "Woodstock of AI." NVIDIA's leader reckoned that around $1 trillion worth of industry was in attendance at this year's event—folks turned up to witness the unveiling of "Blackwell" B200 and GB200 AI GPUs. In the interview, Huang estimated that his company had invested around $10 billion into the research and development of its latest architecture: "we had to invent some new technology to make it possible."

Industry watchdogs have seized on a major revelation—as disclosed during the televised CNBC report—Huang revealed that his next-gen AI GPUs "will cost between $30,000 and $40,000 per unit." NVIDIA (and its rivals) are not known to publicly announce price ranges for AI and HPC chips—leaks from hardware partners and individuals within industry supply chains are the "usual" sources. An investment banking company has already delved into alleged Blackwell production costs—as shared by Tae Kim/firstadopter: "Raymond James estimates it will cost NVIDIA more than $6000 to make a B200 and they will price the GPU at a 50-60% premium to H100...(the bank) estimates it costs NVIDIA $3320 to make the H100, which is then sold to customers for $25,000 to $30,000." Huang's disclosure should be treated as an approximation, since his company (normally) deals with the supply of basic building blocks.
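
Taking the quoted figures at face value (none of them independently verified), a quick back-of-envelope calculation shows how the estimates relate:

```python
# Back-of-envelope check of the quoted figures (all taken from the report above,
# not independently verified): H100 estimated build cost vs. sale price, and the
# implied B200 price at a 50-60% premium over the H100's reported price band.
h100_cost, h100_price_low, h100_price_high = 3_320, 25_000, 30_000
b200_cost_floor = 6_000

h100_margin = 1 - h100_cost / h100_price_low
print(f"Implied H100 gross margin at the low end: {h100_margin:.0%}")  # ~87%

for premium in (1.5, 1.6):
    print(f"{premium:.1f}x the $25k H100 price -> ${h100_price_low * premium:,.0f}")
# $37,500-$40,000, which lines up with the top of Huang's $30,000-$40,000 range.
```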