News Posts matching #GPU

Sparkle Embracing Arc A380 & A310 GPUs with Low-Profile GENIE Series

Sparkle presented a pair of custom Intel Arc A380 and A310 cards at last month's Computex expo—reaffirming its commitment to presenting the full lineup of Arc GPUs. It is now reported that these "Industrial Low-Profile" cooled units will form the company's "GENIE" series. Sparkle's triple-fan TITAN series comprises Arc A770 and A750 GPUs, the dual-fan ORC consists solely of an A750, and ELF is a single-fan A380 card.

The aforementioned GENIE models are both single-slot, single-fan designs with low-profile shrouds that cover only part of the PCB (comparable to the reference card). The A380 unit offers 8 Xe-Cores with 6 GB of GDDR6 memory on a 96-bit bus, while the lesser A310 gets 6 Xe-Cores and 4 GB of GDDR6 on a 64-bit bus. The leaked presentation slide does not show any release date information, but the appearance of reasonably final-looking hardware at Computex 2023 suggests that the GENIE series is not too far off from reaching retail.

NVIDIA Espouses Generative AI for Improved Productivity Across Industries

A watershed moment on Nov. 22, 2022, was mostly virtual, yet it shook the foundations of nearly every industry on the planet. On that day, OpenAI released ChatGPT, the most advanced artificial intelligence chatbot ever developed. This set off demand for generative AI applications that help businesses become more efficient, from providing consumers with answers to their questions to accelerating the work of researchers as they seek scientific breakthroughs, and much, much more.

Businesses that previously dabbled in AI are now rushing to adopt and deploy the latest applications. Generative AI—the ability of algorithms to create new text, images, sounds, animations, 3D models and even computer code—is moving at warp speed, transforming the way people work and play. By employing large language models (LLMs) to handle queries, the technology can dramatically reduce the time people devote to manual tasks like searching for and compiling information.

Acer Prepping Radeon RX 7600 GPU for Predator BiFrost Series

According to information and images released by Xfastest, Acer seems to be preparing a new trio of Predator BiFrost custom cards. The series is currently limited to a single factory overclocked model, based on Intel's Arc A770 16 GB GPU. One of the new cards appears to be a cheaper (~$258) A750 8 GB BiFrost model, so Acer's lineup of Alchemist ACM-G10-based variants is gaining one addition.

Acer is also embracing RDNA 3 courtesy of AMD, although graphics card enthusiasts could see the introduction of two new Predator BiFrost models based on the Radeon RX 7600 8 GB GPU as a less-than-exciting prospect. The leaked photos seem to show a cooler design that lacks ARGB around the two cooling fans—budget-friendly pricing (~$290 for the overclocked model, and ~$258 for non-OC) suggests that fancy livery is not so important in the low-to-mid-range tier.

Intel Brings Gaudi2 Accelerator to China, to Fill Gap Created By NVIDIA Export Limitations

Intel has responded to the high demand for advanced chips in mainland China by bringing its processor, the Gaudi2, to the market. This move comes as the country grapples with US export restrictions, leading to a thriving market for smuggled NVIDIA GPUs. At a press conference in Beijing, Intel presented the Gaudi2 processor as an alternative to NVIDIA's A100 GPU, widely used for training AI systems. Despite US export controls, Intel recognizes the importance of the Chinese market, with 27 percent of its 2022 revenue generated from China. NVIDIA has also tried to comply with restrictions by offering modified versions of its GPUs, but limited supplies have driven the demand for smuggled GPUs. Intel's Gaudi2 aims to provide Chinese companies with various hardware options and bolster their ability to deploy AI through cloud and smart-edge technologies. By partnering with Inspur Group, a major AI server manufacturer, Intel plans to build Gaudi2-powered machines tailored explicitly for the Chinese market.

China's AI ambitions face potential challenges as the US government considers restricting Chinese companies' access to American cloud computing services. This move could impede the utilization of advanced AI chips by major players like Amazon Web Services and Microsoft for their Chinese clients. Additionally, there are reports of a potential expansion of the US export ban to include NVIDIA's A800 GPU. As China continues to push forward with its AI development projects, Intel's introduction of the Gaudi2 processor helps meet the country's demand for advanced chips. Balancing export controls and technological requirements within this complex trade landscape remains a crucial task for both companies and governments involved in the Chinese AI industry.

No Official Review Program for NVIDIA GeForce RTX 4060 Ti 16 GB Cards

NVIDIA is reported to be taking a hands-off approach prior to the launch of its GeForce RTX 4060 Ti 16 GB GPU next week—rumored to take place on July 18. Murmurs from last week posited that add-in card (AIC) partners were not all that confident in the variant's prospects, with very little promotional activity lined up. NVIDIA itself is not releasing a Founders Edition GeForce RTX 4060 Ti 16 GB model, so it will be relying on board partners to get custom design units sent out to press outlets/reviewers. According to Hardware Unboxed, as posted on Twitter earlier today, no hardware will be distributed to the media: "Now there's no official review program for this model, there will be no FE version and it seems that NVIDIA and their partners really don't want to know about it. Every NVIDIA partner I've spoken to so far has said they won't be providing review samples, and they're not even sure when their model will be available."

Their announcement continued: "So I don't know when you'll be able to view our review, but I will be buying one as soon as I can. I expect coverage will be pretty thin and that's probably the plan, the release strategy here is similar to that of the RTX 3080 12 GB." TPU can confirm that test samples have not been sent out by NVIDIA's board partners, so a retail unit will be purchased (out of pocket) for reviewing purposes. Previous reports have theorized that not many custom models will be available at launch, with the series MSRP of $499 not doing it many favors in terms of buyer interest. MSI has prepared a new white GAMING X design for the 16 GB variant, so it is good to see at least one example of an AIB putting the effort in...but it would be nice to get a press sample.

Imagination GPUs Gain OpenGL 4.6 Support

When it comes to APIs, OpenGL is something of a classic. According to the Khronos Group, OpenGL is the most widely adopted 2D and 3D graphics API. Since its launch in 1992 it has been used extensively by software developers for PCs and workstations to create high-performance, visually compelling graphics applications for markets such as CAD, content creation, entertainment, game development and virtual reality.

To date, Imagination GPUs have natively supported OpenGL up to Release 3.3, as well as OpenGL ES (the version of OpenGL for embedded systems), Vulkan (a cross-platform graphics API) and OpenCL (an API for parallel programming). However, thanks to the increasing performance of our top-end GPUs, especially the likes of the DXT-72-2304, they present a competitive offering to the data centre and desktop (DCD) market. Indeed, we have multiple customers - including the likes of Innosilicon - choosing Imagination GPUs for the flexibility of an IP solution, their scalability and their ability to offer up to 6 TFLOPS of compute.

Two-ExaFLOP El Capitan Supercomputer Starts Installation Process with AMD Instinct MI300A

When Lawrence Livermore National Laboratory (LLNL) announced the creation of a two-ExaFLOP supercomputer named El Capitan, we heard that AMD would power it with its Instinct MI300 accelerator. Today, LLNL published a Tweet that states, "We've begun receiving & installing components for El Capitan, @NNSANews' first #exascale #supercomputer. While we're still a ways from deploying it for national security purposes in 2024, it's exciting to see years of work becoming reality." As published images show, HPE racks filled with AMD Instinct MI300 accelerators are now arriving at LLNL's facility, and the supercomputer is expected to go operational in 2024. This could mean that the November 2023 TOP500 list update won't feature El Capitan, as system enablement would be very hard to achieve in the four months remaining until then.

The El Capitan supercomputer is expected to run on the AMD Instinct MI300A accelerator, which features 24 Zen 4 cores, the CDNA3 architecture, and 128 GB of HBM3 memory. Four of these accelerators are paired together inside each HPE node, which also receives water-cooling treatment. While we don't have many further details on the memory and storage of El Capitan, we know that the system will exceed two ExaFLOPS at peak and will consume close to 40 MW of power.

ASRock Adds A380 Low Profile 6 GB Graphics Card to its Arc Lineup

ASRock has added another Arc model to its small selection of Intel graphics cards—this time in low profile form. The entry-level A380 GPU is well suited for this narrow (zero dB/silent) dual fan cooling solution due to its diminutive 75 W TDP rating. ASRock has stayed in the safe zone by sticking with the default base clock of 2.0 GHz, as opposed to the sibling Challenger ITX 6 GB OC model's slightly more ambitious 2.25 GHz.

The specifications are typical A380—you get 6 GB of GDDR6 VRAM on a 96-bit memory bus, granting a bandwidth of up to 186 GB/s (the memory is clocked at 15.5 Gbps), although the selection of ports has been reduced in number due to the card's small stature: only single DisplayPort 2.0 and HDMI 2.0b connections here. ASRock's product page for their Arc A380 Low Profile model includes the usual yammering about the GPU's "next-gen gaming" capabilities thanks to Intel's Xe Super Sampling (XeSS) technology, but the card is better suited for compact budget builds and users who require a decent level of AV1 encoding (for the price—not announced at the time of writing).
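
For reference, the quoted bandwidth figure follows the usual GDDR6 arithmetic: bus width in bytes multiplied by the per-pin data rate. A minimal sketch (the function name below is ours, for illustration):

```python
# Back-of-the-envelope check of the quoted bandwidth figure (illustrative only).
def memory_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    # Peak GB/s = (bus width in bytes) x (per-pin data rate in Gbps)
    return (bus_width_bits / 8) * data_rate_gbps

# Arc A380: 96-bit bus, 15.5 Gbps effective data rate.
print(memory_bandwidth_gbs(96, 15.5))  # -> 186.0, matching the 186 GB/s spec
```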

Intel Developing Efficient Solution for Path Tracing on Integrated GPUs

Intel's software engineers are working on path-traced light simulation and conducting neural graphics research, as documented in a recent company news article, with an ambition to create a more efficient solution for integrated graphics cards. The company's Graphics Research Organization is set to present their path-traced optimizations at SIGGRAPH 2023. Their papers have been showcased at recent EGSR and HPG events. The team is aiming to get iGPUs running path-tracing in real time, by reducing the number of calculations required to simulate light bounces.

The article covers three different techniques, all designed to improve GPU performance: "Across the process of path tracing, the research presented in these papers demonstrates improvements in efficiency in path tracing's main building blocks, namely ray tracing, shading, and sampling. These are important components to make photorealistic rendering with path tracing available on more affordable GPUs, such as Intel Arc GPUs, and a step toward real-time performance on integrated GPUs." Although there is an emphasis on in-house products in the article, Intel's "open source-first mindset" hints that their R&D could be shared with others—NVIDIA and AMD are likely still struggling to make ray tracing practical on their modest graphics card models.

Adlink's Next-Gen IPC Strives to Revolutionize Industry Use Cases at the Edge

ADLINK Technology Inc., a global leader in edge computing and a Titanium member of the Intel Partner Alliance, is proud to announce the launch of its latest MVP Series fanless modular computers—the MVP-5200 Compact Modular Industrial Computers and MVP-6200 Expandable Modular Industrial Computers—powered by 12th/13th Gen Intel Core i9/i7/i5/i3 and Celeron processors. Featuring the Intel R680E chipset and supporting processors up to 65 W, the computers can also incorporate GPU cards in a rugged package suitable for AI inferencing at the edge, and can be used for, but not limited to, smart manufacturing, semiconductor equipment, and warehouse applications.

The MVP-5200/MVP-6200 series, though expandable, remains compact, with support for up to four PCI/PCIe slots that allow for performance acceleration through GPUs, accelerators, and other expansion cards. Comprehensive modularized options and ease of configuration can effectively reduce lead times for customers' diverse requirements. In addition, ADLINK also offers a broad range of pre-validated expansion cards, such as GPU, motion, vision, and I/O embedded cards, all of which can be easily deployed for your industrial applications.

Gigabyte Launches the GeForce RTX 4070 Ti Series Water-Cooled Graphics Cards

GIGABYTE TECHNOLOGY Co. Ltd, a leading manufacturer of premium gaming hardware, today launches the GeForce RTX 4070 Ti series water-cooled graphics cards powered by the NVIDIA Ada Lovelace architecture. GIGABYTE launches two AORUS WATERFORCE graphics cards - the AORUS GeForce RTX 4070 Ti XTREME WATERFORCE 12G and the AORUS GeForce RTX 4070 Ti XTREME WATERFORCE WB 12G. Both graphics cards are equipped with top-of-the-line GPU cores with overclocking capabilities, certified by GIGABYTE's GPU Gauntlet sorting technology. AORUS provides an all-around cooling solution for all key components of the graphics card. In addition to cooling the GPU, AORUS also takes care of the VRAM and MOSFETs, ensuring stable overclocking performance and enhanced durability.

The AORUS WATERFORCE graphics cards feature RGB Fusion, a protective metal backplate, aerospace-grade PCB coating for dust and moisture protection, Ultra-Durable top-grade components, and an extended warranty for registered members. These features make the AORUS WATERFORCE graphics cards the best choice for enthusiasts who desire both silent operation and high performance.

Oracle to Spend Billions on NVIDIA Data Center GPUs, Even More on Ampere & AMD CPUs

Oracle founder and Chairman Larry Ellison last week announced a substantial spending spree on new equipment as he prepares his business for a cloud computing service expansion that will be aimed at attracting a "new wave" of artificial intelligence (AI) companies. He made this announcement at a recent Ampere event: "This year, Oracle will buy GPUs and CPUs from three companies...We will buy GPUs from NVIDIA, and we're buying billions of dollars of those. We will spend three times that on CPUs from Ampere and AMD. We still spend more money on conventional compute." His cloud division is said to be gearing up to take on larger competition—namely Amazon Web Services and Microsoft Corp. Oracle is hoping to outmaneuver these major players by focusing on the construction of fast networks, capable of shifting around huge volumes of data—the end goal being the creation of its own ChatGPT-type model.

Ellison expressed that he was leaving Team Blue behind—Oracle has invested heavily in Ampere Computing, a startup founded by ex-Intel folks: "It's a major commitment to move to a new supplier. We've moved to a new architecture...We think that this is the future. The old Intel x86 architecture, after many decades in the market, is reaching its limit." Oracle's database software has been updated to run on Ampere's Arm-based chips; Ellison posits that these grant greater power efficiency when compared to AMD and NVIDIA enterprise processors. There will be some reliance on "x86-64" going forward, since Oracle's next-gen Exadata X10M platform was recently announced with the integration of Team Red's EPYC 9004 series processors—a company spokesman stated that these server CPUs offer higher core counts and "extreme scale and dramatically improved price performance" when compared to older Intel Xeon systems.

Inflection AI Builds Supercomputer with 22,000 NVIDIA H100 GPUs

The AI hype continues to push hardware shipments, especially for servers with GPUs that are in very high demand. Another example is the latest feat of AI startup Inflection AI. Building foundational AI models, the Inflection AI crew has secured an order of 22,000 NVIDIA H100 GPUs and is building a supercomputer with them. Assuming a configuration of a single Intel Xeon CPU paired with eight GPUs per node, almost 700 four-node racks should go into the supercomputer. Scaling and connecting 22,000 GPUs is easier than acquiring them, as NVIDIA's H100 GPUs are selling out everywhere due to the enormous demand for AI applications both on and off premises.
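
That rack estimate follows from simple division; here is a quick sanity check using the article's own assumptions (eight GPUs per node, four nodes per rack):

```python
# Sanity-checking the rack estimate (assumptions from the article, not Inflection AI specs).
total_gpus = 22_000
gpus_per_node = 8        # one Xeon host CPU driving eight H100s
nodes_per_rack = 4

nodes = total_gpus / gpus_per_node  # 2750 nodes
racks = nodes / nodes_per_rack      # 687.5 racks -> "almost 700"
print(nodes, racks)
```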

Getting 22,000 H100 GPUs is the biggest challenge here, and Inflection AI managed to secure them by having NVIDIA as an investor in the startup. The supercomputer is estimated to cost around one billion USD and consume 31 megawatts of power. The Inflection AI startup is valued at 1.5 billion USD at the time of writing.

NVIDIA GeForce GTX 1650 is Still the Most Popular GPU in the Steam Hardware Survey

NVIDIA GeForce GTX 1650 was released more than four years ago. With its TU117 graphics processor, it features 896 CUDA cores, 56 texture mapping units, and 32 ROPs. NVIDIA has paired 4 GB of GDDR5 memory with the GeForce GTX 1650, connected using a 128-bit memory interface. Interestingly, according to the latest Steam Hardware Survey results, this GPU still remains the most popular choice among gamers. While the exact number of monthly survey participants is unknown, it is fair to assume that a large group takes part every month. The latest numbers for June 2023 indicate that the GeForce GTX 1650 is still the number one GPU, with 5.50% of users having that GPU. The closest runner-up was the GeForce RTX 3060, at 4.60%.

Other information in the survey remains similar, with CPUs mostly ranging from 2.3 GHz to 2.69 GHz in frequency and sporting six cores and twelve threads. Storage also recorded a small bump, with capacities over 1 TB surging by 1.48%, indicating that gamers are buying larger drives as game sizes get bigger.

AI and HPC Demand Set to Boost HBM Volume by Almost 60% in 2023

High Bandwidth Memory (HBM) is emerging as the preferred solution for overcoming memory transfer speed restrictions due to the bandwidth limitations of DDR SDRAM in high-speed computation. HBM is recognized for its revolutionary transmission efficiency and plays a pivotal role in allowing core computational components to operate at their maximum capacity. Top-tier AI server GPUs have set a new industry standard by primarily using HBM. TrendForce forecasts that global demand for HBM will experience almost 60% growth annually in 2023, reaching 290 million GB, with a further 30% growth in 2024.
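
For a rough sense of scale, extrapolating TrendForce's quoted growth rate gives the implied 2024 volume (a back-of-the-envelope sketch, not TrendForce's methodology):

```python
# Rough projection from the quoted growth rates (illustrative only).
hbm_2023_gb = 290e6   # ~290 million GB forecast for 2023
growth_2024 = 0.30    # a further 30% growth expected in 2024

hbm_2024_gb = hbm_2023_gb * (1 + growth_2024)
print(f"{hbm_2024_gb / 1e6:.0f} million GB")  # -> ~377 million GB in 2024
```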

TrendForce's forecast for 2025 takes into account five large-scale AIGC products equivalent to ChatGPT, 25 mid-size AIGC products equivalent to Midjourney, and 80 small AIGC products; on that basis, the minimum computing resources required globally could range from 145,600 to 233,700 NVIDIA A100 GPUs. Emerging technologies such as supercomputers, 8K video streaming, and AR/VR, among others, are expected to simultaneously increase the workload on cloud computing systems due to escalating demands for high-speed computing.

ASUS has a GeForce RTX 4060 Ti Card with an M.2 SSD Slot

ASUS Chinese GM—Tony Yu—has shown off a graphics card concept on Bilibili that has a rather unusual feature: a slot for an M.2 NVMe SSD. The card is based on NVIDIA's GeForce RTX 4060 Ti GPU and, although not all details are clear at this point in time, ASUS is taking advantage of the unused PCIe lanes on the card; since the AD106 GPU only uses eight PCIe lanes, the card's PCIe connector has space for a further eight. In theory ASUS could have added a pair of SSDs, since there are a total of eight lanes available, but as this was just a proof of concept, they seemingly stuck with a single SSD.
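
The lane budget behind that "pair of SSDs" remark is simple arithmetic; here is a hypothetical illustration, assuming a full-length x16 connector and the usual x4 link per NVMe drive:

```python
# PCIe lane budget on the card (illustrative; the exact routing is ASUS's unpublished design).
slot_lanes = 16      # full-length PCIe x16 connector
gpu_lanes = 8        # AD106 only wires up eight lanes
lanes_per_ssd = 4    # a typical NVMe SSD uses a x4 link

spare_lanes = slot_lanes - gpu_lanes
max_ssds = spare_lanes // lanes_per_ssd
print(spare_lanes, max_ssds)  # -> 8 spare lanes, room for 2 SSDs in theory
```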

It's unclear whether ASUS relies on bifurcation or has added some kind of bridge chip, but bifurcation makes more sense, as a bridge chip would add a lot more cost. The neat thing about the NVMe drive being on the GPU is that it also connects to the heatsink of the graphics card, which means the cooling should be rather good. However, for this to work properly, the SSD has to be mounted back to front compared to how it would sit on a motherboard. Based on the test results, the SSD runs at a cool 42 degrees C, even when the GPU is being stress tested. It's likely that this product will not make it to markets outside of China, if it's ever launched into retail.

MSI Unveils its NVIDIA GeForce RTX 4060 Series Graphics Cards

As a leading brand in True Gaming hardware, MSI unveils the latest line-up of graphics cards featuring the NVIDIA GeForce RTX 4060 GPU, with the GAMING and VENTUS 2X BLACK series, which are available starting on June 29th, 2023.

The latest MSI GeForce RTX 4060 series graphics cards are designed to deliver incredible performance for mainstream gamers and creators at 1080p resolution at high frame rates with ray tracing and DLSS 3. The GeForce RTX 4060 GPU delivers all the advancements of the NVIDIA Ada Lovelace architecture—including DLSS 3 neural rendering, third-generation ray tracing technologies at high frame rates, and an eighth-generation NVIDIA Encoder (NVENC) with AV1 encoding.

GIGABYTE Launches GeForce RTX 4060 Series Graphics Cards

GIGABYTE TECHNOLOGY Co. Ltd, a leading manufacturer of premium gaming hardware, today launched the GeForce RTX 4060 series graphics cards powered by the NVIDIA Ada Lovelace architecture, including DLSS 3 neural rendering and third-generation ray tracing technologies at high frame rates. The cards will be available on June 29. GIGABYTE's GeForce RTX 4060 series includes five models with the WINDFORCE thermal solution: the AORUS GeForce RTX 4060 ELITE 8G, the GeForce RTX 4060 AERO OC 8G, the GeForce RTX 4060 GAMING OC 8G, the GeForce RTX 4060 EAGLE OC 8G and the GeForce RTX 4060 WINDFORCE OC 8G graphics cards. All the models provide gamers with diverse choices to enjoy AAA gaming experiences. All GIGABYTE graphics cards are tuned to deliver powerful performance, low temperatures, and quiet operation at once, bringing gamers the ultimate gaming experience.

AORUS GeForce RTX 4060 ELITE 8G
The AORUS GeForce RTX 4060 ELITE 8G graphics card stands out for its powerful cooling capabilities and outstanding performance; it is the top-tier gaming graphics card pursued by gamers and fans. The AORUS graphics card features the famous RGB Halo, which takes advantage of the persistence of human vision to create exclusive lighting effects via its rotating fans. The exquisite and colorful RGB illumination, favored by AORUS fans and DIY enthusiasts, allows gamers to customize its colors with ease through the GIGABYTE CONTROL CENTER software.

Tachyum Readying First Tape-out of its Prodigy SoCs

Tachyum announced today that it will cease taking orders for its Prodigy Universal Processor Field Programmable Gate Array (FPGA) emulation system boards effective immediately, as the company readies the final Prodigy build for tape-out. New partners and customers who wish to work with Prodigy FPGAs for product evaluation, performance measurements, software development, debugging and compatibility testing can arrange for private testing at Tachyum's facility. As these are shared systems, they can't be used for classified or proprietary data, or data subject to regulatory governance.

The Prodigy hardware emulator consists of multiple FPGA and IO boards connected by cables in a rack. A single board with four FPGAs emulates eight Prodigy processor cores (a small fraction of the final Prodigy design, which consists of 128 cores), including vector and matrix fixed- and floating-point processing units. Deploying more FPGAs will improve test cycles by orders of magnitude to achieve target quality, serving as a risk-reduction mechanism for early adopters.

NVIDIA Ada Lovelace Successor Set for 2025

According to the NVIDIA roadmap that was spotted in the recently published MLCommons training results, the Ada Lovelace successor is set to come in 2025. The roadmap also reveals the schedule for Hopper Next and Grace Next GPUs, as well as the BlueField-4 DPU.

While the roadmap does not provide a lot of details, it does give us a general idea of when to expect NVIDIA's next GeForce architecture. Since NVIDIA usually launches a new GeForce architecture every two years or so, the latest schedule might sound like a small delay, at least if the company plans to launch the Ada Lovelace successor in early 2025 and not later. NVIDIA Pascal was launched in May 2016, Turing in September 2018, Ampere in May 2020, and Ada Lovelace in October 2022.
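
Those launch dates make the cadence easy to check; a small sketch computing the gaps between architectures (dates from the article, the projection is ours):

```python
# Gaps between NVIDIA GeForce architecture launches (dates as cited in the article).
from datetime import date

launches = {
    "Pascal": date(2016, 5, 1),
    "Turing": date(2018, 9, 1),
    "Ampere": date(2020, 5, 1),
    "Ada Lovelace": date(2022, 10, 1),
}
dates = list(launches.values())
gaps_months = [(b.year - a.year) * 12 + (b.month - a.month)
               for a, b in zip(dates, dates[1:])]
print(gaps_months)                           # -> [28, 20, 29] months
print(sum(gaps_months) / len(gaps_months))   # ~25.7 months on average, so 2025 is a slight stretch
```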

NVIDIA Allegedly Preparing H100 GPU with 94 and 64 GB Memory

NVIDIA's compute and AI-oriented H100 GPU is supposedly getting an upgrade. The H100 GPU is NVIDIA's most powerful offering and comes in a few different flavors: H100 PCIe, H100 SXM, and H100 NVL (a duo of two GPUs). Currently, the H100 GPU comes with 80 GB of HBM2E, both in the PCIe and SXM5 versions of the card. A notable exception is the H100 NVL, which comes with 188 GB of HBM3, but that is for two cards, making it 94 GB each. However, we could see NVIDIA enable 94 and 64 GB options for the H100 accelerator soon, as the latest PCI ID Repository entries show.

According to the PCI ID Repository listing, two messages are posted: "Kindly help to add H100 SXM5 64 GB into 2337." and "Kindly help to add H100 SXM5 94 GB into 2339." These two messages indicate that NVIDIA could be preparing its H100 in more variations. In September 2022, we saw NVIDIA prepare an H100 variation with 120 GB of memory, but that still isn't official. These PCI IDs could just come from engineering samples that NVIDIA is testing in its labs, and these cards may never appear on any market. So, we have to wait and see how it plays out.

Intel & HPE Declare Aurora Supercomputer Blade Installation Complete

What's New: The Aurora supercomputer at Argonne National Laboratory is now fully equipped with all 10,624 compute blades, boasting 63,744 Intel Data Center GPU Max Series and 21,248 Intel Xeon CPU Max Series processors. "Aurora is the first deployment of Intel's Max Series GPU, the biggest Xeon Max CPU-based system, and the largest GPU cluster in the world. We're proud to be part of this historic system and excited for the groundbreaking AI, science and engineering Aurora will enable."—Jeff McVeigh, Intel corporate vice president and general manager of the Super Compute Group
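
Those headline numbers imply a consistent blade composition; quick division shows six GPUs and two CPUs per blade (our arithmetic, not an Intel statement):

```python
# Per-blade composition implied by the announced totals (illustrative).
blades = 10_624
gpus = 63_744   # Intel Data Center GPU Max Series
cpus = 21_248   # Intel Xeon CPU Max Series

print(gpus / blades, cpus / blades)  # -> 6.0 GPUs and 2.0 CPUs per blade
```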

What Aurora Is: A collaboration of Intel, Hewlett Packard Enterprise (HPE) and the Department of Energy (DOE), the Aurora supercomputer is designed to unlock the potential of the three pillars of high performance computing (HPC): simulations, data analytics and artificial intelligence (AI) on an extremely large scale. The system incorporates more than 1,024 storage nodes (using DAOS, Intel's distributed asynchronous object storage), providing 220 petabytes (PB) of capacity at 31 TB/s of total bandwidth, and leverages the HPE Slingshot high-performance fabric. Later this year, Aurora is expected to be the world's first supercomputer to achieve a theoretical peak performance of more than 2 exaflops (an exaflop is 10^18, or a billion billion, operations per second) when it enters the TOP500 list.

NVIDIA Makes GeForce RTX 4060 MSRP Official - Starting at $299

In May we announced the GeForce RTX 4060 Family, and launched the GeForce RTX 4060 Ti. On June 29th, the GeForce RTX 4060 will go on sale, with prices starting at $299. For gamers playing on previous-gen GPUs, the NVIDIA Ada Lovelace architecture at the heart of the GeForce RTX 4060 delivers a massive upgrade, multiplying your performance, and supercharging creative apps. And thanks to the Ada architecture's industry-leading efficiency, you'll use measurably less power, your graphics card will run cooler, and fans will run at quieter speeds or even idle.

Based on the May 2023 Steam Hardware Survey, 9 of the top 10 most used GPUs on Steam are 60 Class or lower, and 77% of Steam gamers play at 1080p or lower resolutions. For these gamers, the new GeForce RTX 4060 is a great upgrade, enabling them to play new, more demanding games at 1080p at excellent levels of fidelity. For gamers coming from a GeForce RTX 2060, performance is multiplied by an average of 2.3X across a suite of 18 games, and for GeForce GTX 1060 users, in addition to higher frame rates, they also get ray tracing and DLSS acceleration for the first time.

Intel Graphics Releases Arc & Iris Xe Graphics Drivers 101.4502 WHQL

Intel Graphics has released Arc GPU and Iris Xe Graphics Drivers version 101.4502 WHQL. There are no gaming highlights or brand new Game On Driver support included in this release, according to the notes. An Arc-related iTunes application crash (upon launch) has been fixed, as well as blank screen and error messages encountered in Microsoft Edge's WebView2. There are plenty of "Known Issues" listed this time—Intel and EA Sports are looking into a problem where adjustments to XeSS presets cause crashes in F1 2023. Corruption in Game Capture mode for Dota 2 (via XSplit Broadcaster) has been noted. Media playback and encoding with some versions of Adobe Premiere Pro cannot utilize GPU hardware acceleration.

There are also various in-game issues - for owners of Intel Core Processors - logged for these titles: Total War: Warhammer III (DX11), Call of Duty Warzone 2.0 (DX12), Conqueror's Blade (DX12) and A Plague Tale: Requiem. The Arc Control Performance Tuning app is still in Beta, so expect to encounter some inconsistencies when using it. Intel and Cooler Master have partnered up on the continued development of Arc's RGB Controller software—it is custom designed "to allow users to harness 90 individually addressable LEDs on Intel Arc A770 Graphics Limited Edition cards."

DOWNLOAD: Intel GPU Graphics Drivers 101.4502 WHQL

Major CSPs Aggressively Constructing AI Servers and Boosting Demand for AI Chips and HBM, Advanced Packaging Capacity Forecasted to Surge 30~40%

TrendForce reports that explosive growth in generative AI applications like chatbots has spurred significant expansion in AI server development in 2023. Major CSPs including Microsoft, Google, AWS, as well as Chinese enterprises like Baidu and ByteDance, have invested heavily in high-end AI servers to continuously train and optimize their AI models. This reliance on high-end AI servers necessitates the use of high-end AI chips, which in turn will not only drive up demand for HBM during 2023~2024, but is also expected to boost growth in advanced packaging capacity by 30~40% in 2024.

TrendForce highlights that to augment the computational efficiency of AI servers and enhance memory transmission bandwidth, leading AI chip makers such as NVIDIA, AMD, and Intel have opted to incorporate HBM. Presently, NVIDIA's A100 and H100 chips boast up to 80 GB of HBM2e and HBM3, respectively. In its latest integrated CPU and GPU, the Grace Hopper Superchip, NVIDIA expanded a single chip's HBM capacity by 20%, hitting a mark of 96 GB. AMD's MI300 also uses HBM3; the MI300A's capacity remains at 128 GB like its predecessor, while the more advanced MI300X has ramped up to 192 GB, marking a 50% increase. Google is expected to broaden its partnership with Broadcom in late 2023 to produce the TPU, an ASIC AI accelerator chip that will also incorporate HBM memory, in order to extend its AI infrastructure.
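
The percentage claims in that rundown are easy to verify with a couple of divisions (our arithmetic, for illustration):

```python
# Verifying the quoted HBM capacity increases (illustrative only).
h100_gb, grace_hopper_gb = 80, 96
mi300a_gb, mi300x_gb = 128, 192

print((grace_hopper_gb - h100_gb) / h100_gb)  # -> 0.2, the 20% uplift for Grace Hopper
print((mi300x_gb - mi300a_gb) / mi300a_gb)    # -> 0.5, the 50% jump for the MI300X
```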