News Posts matching #GPU


EK & XFX Announce XFX Speedster ZERO Radeon RX 6900XT RGB EKWB

EK has partnered with XFX to bring you a factory water-cooled Radeon RX 6900 XT GPU. The XFX Speedster ZERO Radeon RX 6900XT RGB EKWB is one of the fastest AMD Radeon-based graphics cards on the market. This new Speedster series card is equipped with a 14-phase VRM power-delivery system consisting of DrMOS power stages and high-polymer capacitors. To make sure these lightning-fast GPUs hit their maximum clocks, a unique EK water block comes pre-installed, which also prolongs the card's lifespan thanks to the superior thermals liquid cooling provides. It also means no precious gaming time is spent mounting the water block, and no questions arise regarding the warranty.

A powerful 14+2-phase power design allows more stable performance by distributing the load across more power phases in the VRM, which results in more overclocking and boosting headroom. Couple that with components kept incredibly cool by the full-cover EK water block, and you get a recipe for high performance, stability, and a long lifespan.

AMD Announces Radeon Pro W6600X GPU for Mac Pro

AMD today announced availability of the new AMD Radeon PRO W6600X GPU for Mac Pro, developed to help professional users push the limits of what is possible. Built on the award-winning AMD RDNA 2 architecture, AMD Infinity Cache and other advanced technologies, the new GPU delivers stunning visuals and exceptional performance to power a variety of today's popular professional applications and workloads.

AMD Radeon PRO W6000X Series GPUs provide several graphics options for Mac Pro, which is engineered for extreme performance, expandability and configurability. The new AMD Radeon PRO W6600X GPU delivers an outstanding combination of image quality and exceptional performance, helping enable Mac Pro users to achieve amazing levels of productivity and creativity. Users can also select from several other powerful AMD GPUs to power an extensive range of professional workloads, including the previously announced AMD Radeon PRO W6900X, AMD Radeon PRO W6800X and AMD Radeon PRO W6800X Duo GPUs.

Basemark Launches World's First Cross-Platform Raytracing Benchmark - GPUScore Relic of Life

Basemark today launched GPUScore, an all-new GPU (graphics processing unit) performance benchmarking suite for a wide range of devices, from smartphones to high-end gaming PCs. GPUScore supports all modern graphics APIs, such as Vulkan, Metal and DirectX, and operating systems including Windows, Linux, macOS, Android and iOS.

GPUScore will consist of three testing suites. The first of these, Relic of Life, launched today and is available immediately; Basemark will introduce the other two during the following months. Relic of Life is ideal for benchmarking the discrete GPUs of high-end gaming PCs. It requires hardware-accelerated ray tracing, supports Vulkan and DirectX, and is available for both Windows and Linux, making it an ideal benchmark for comparing Vulkan and DirectX accelerated ray tracing performance.

NVIDIA Confirms System Hacks, Doesn't Anticipate Any Business Disruption

Last week, NVIDIA systems were compromised in an attack by a hacking group called LAPSUS$. In the few days since the attack, source code of various software has leaked through third-party anonymous tipsters, and next-generation GPU codenames have made an appearance. Today, NVIDIA issued a statement to the German PC-enthusiast website Hardwareluxx, reproduced in full below. The key takeaway is that NVIDIA believes the compromised files will not impact the company's business in any meaningful manner, and operations continue as usual for NVIDIA's customers. The company's security team is analyzing the situation, and you can check out the complete statement below.
NVIDIA Statement
On February 23, 2022, NVIDIA became aware of a cybersecurity incident which impacted IT resources. Shortly after discovering the incident, we further hardened our network, engaged cybersecurity incident response experts, and notified law enforcement.

We have no evidence of ransomware being deployed on the NVIDIA environment or that this is related to the Russia-Ukraine conflict. However, we are aware that the threat actor took employee credentials and some NVIDIA proprietary information from our systems and has begun leaking it online. Our team is working to analyze that information. We do not anticipate any disruption to our business or our ability to serve our customers as a result of the incident.

Security is a continuous process that we take very seriously at NVIDIA - and we invest in the protection and quality of our code and products daily.

Hackers Threaten to Release NVIDIA GPU Drivers Code, Firmware, and Hash Rate Limiter Bypass

A few days ago, we found out that NVIDIA had been hacked and that attackers managed to steal around 1 TB of sensitive data from the company, including GPU driver and GPU firmware source code, and something a bit more interesting. The LAPSUS$ hacking group responsible for the attack is now threatening to "help the mining and gaming community" by releasing a bypass for the Lite Hash Rate (LHR) GPU hash rate limiter. As the group notes, the full LHR V2 workaround for everything from GA102 to GA104 is on sale and ready for further spreading.

Additionally, the hacking group is blackmailing the company, demanding that it remove the LHR limiter from its software or share the contents of the "hw folder," presumably a hardware folder with various confidential schematics and hardware information. NVIDIA did not respond to these claims and has made no official statement regarding the situation other than acknowledging that it is investigating an incident.

Update 01:01 UTC: The hackers have released part of their files to the public. It's an 18.8 GB RAR file which uncompresses to over 400,000 (!) files occupying 75 GB, mostly source code.

Intel Fails to Deliver on Promised Day-0 Elden Ring Graphics Driver

It seems that someone at Intel forgot to press "post" on the company's promised day-0 driver update for one of this year's most anticipated games - Elden Ring. The company previously announced a partnership with Elden Ring developer FromSoftware to develop an updated driver that would give Intel-based Elden Ring players streamlined performance and a (hopefully) bug-free experience when it comes to graphics rendering. But Elden Ring's launch day of February 24th has come and gone, and Intel is mum on where exactly its updated driver lies. For now, the latest available Intel graphics driver remains version 101.1121, released in November last year.

It may be that driver development hit an unexpected snag, or perhaps Intel has simply opted to delay the driver's launch until some discrete-level graphics cards are actually available for purchase - the company's initial Arc Alchemist lineup is expected to be announced and launched later this month. That would make sense, especially considering that a driver update this close to release might include interesting data on the upcoming graphics cards that could be pursued by data miners. Even so, it doesn't look like a good PR move for Intel to loudly promise an updated driver and then fail to release it, especially as Intel's uphill battle in the discrete GPU market is just beginning. Perhaps the driver developers are simply having too much fun with FromSoftware's critically and consumer-acclaimed latest installment?

Intel Targeting 2024+ for 'Ultra Enthusiast' Arc Celestial GPUs

Intel has recently unveiled its plans for its 3rd-generation Arc Celestial graphics cards to compete with NVIDIA and AMD in the "Ultra Enthusiast" GPU market. The Arc Celestial GPU series is now scheduled to launch in 2024, with the architecture currently under active development. These cards will target NVIDIA's future flagships; however, in 2023/2024 we should see the launch of 2nd-generation Arc Battlemage products that may narrow the gap. The timeline Intel shared indicates a launch date of "2024+" for Celestial GPUs, so the launch may slip into 2025. That was previously the year in which Intel was rumored to launch its 4th-generation Arc Druid graphics cards, so it remains to be seen whether this official timeline will hold.

AMD Radeon RDNA2 680M iGPU Beats NVIDIA MX450 Discrete GPU

The recently announced AMD Ryzen 6000 series mobile Zen 3+ processors feature a significant graphics improvement with up to 12 RDNA2 Compute Units available. These new graphics solutions have recently been tested and compared by an engineer working for Lenovo in China. The Radeon 680M and Radeon 660M feature 12 and 6 RDNA2 Compute Units respectively and have been tested against the NVIDIA MX450 and the Intel Iris Xe Graphics G7. The Radeon 680M represents an 85% performance improvement over the Radeon RX Vega 8 and is 24% faster than the discrete NVIDIA MX450 mobile GPU in 3DMark. This lead narrows in real-world tests where the 680M is only 1.1% faster than the MX450 and the 660M is 37% slower.

The mid-range Radeon 660M is still significantly faster than the Intel Iris Xe Graphics G7 (96 EU) found in the i7-12700H, beating it by 9% in 1080p gaming. The review also looks at power scaling for the Radeon 660M and 680M, showing that in their highest power configurations performance increases by 10% for the 660M and 42% for the 680M. The Radeon 680M remains behind the NVIDIA GTX 1650 Max-Q, which holds a 25% lead. The Ryzen 6000 mobile series will be available in laptops starting next month.

Intel Introduces Arctic Sound-M Data Center Graphics Card Based on DG2 Design and AV1 Encoding

At Intel's 2022 investor meeting, the company has presented a technology roadmap update to give its clients an insight into what is to come. Today, team blue announced one of the first discrete data-centric graphics cards in the lineup, codenamed Arctic Sound-M GPU. Based on the DG2 Xe-HPG variation of Intel Xe GPUs, Arctic Sound-M is the company's first design to enter the data center space. The DG2 GPU features 512 Execution Units (EUs), which get passive cooling from the single-slot design of Arctic Sound's heatsink, envisioned for data center enclosures with external airflow.

One of the most significant selling points Intel advertises is support for hardware-accelerated AV1 encoding. This feature lets the card cut streaming bandwidth requirements by around 30%, and it is the main differentiator between Arctic Sound-M and the consumer-oriented Arc Alchemist GPUs. The card is powered by the PCIe slot and an 8-pin EPS power connector. Arctic Sound-M is already sampling to select customers and will become available in the middle of 2022.


Intel Updates Technology Roadmap with Data Center Processors and Game Streaming Service

At Intel's 2022 Investor Meeting, Chief Executive Officer Pat Gelsinger and Intel's business leaders outlined key elements of the company's strategy and path for long-term growth. Intel's long-term plans will capitalize on transformative growth during an era of unprecedented demand for semiconductors. Among the presentations, Intel announced product roadmaps across its major business units and key execution milestones, covering Accelerated Computing Systems and Graphics, Intel Foundry Services, Software and Advanced Technology, Network and Edge, and Technology Development. For more from Intel's Investor Meeting 2022, including the presentations and news, please visit the Intel Newsroom and Intel.com's Investor Meeting site.

NVIDIA Announces Financial Results for Fourth Quarter and Fiscal 2022

NVIDIA (NASDAQ: NVDA) today reported record revenue for the fourth quarter ended January 30, 2022, of $7.64 billion, up 53 percent from a year ago and up 8 percent from the previous quarter. Gaming, Data Center and Professional Visualization market platforms each achieved record revenue for the quarter and year. GAAP earnings per diluted share for the quarter were a record $1.18, up 103 percent from a year ago and up 22 percent from the previous quarter. Non-GAAP earnings per diluted share were $1.32, up 69 percent from a year ago and up 13 percent from the previous quarter.

For fiscal 2022, revenue was a record $26.91 billion, up 61 percent from $16.68 billion a year ago. GAAP earnings per diluted share were a record $3.85, up 123 percent from $1.73 a year ago. Non-GAAP earnings per diluted share were $4.44, up 78 percent from $2.50 a year ago. "We are seeing exceptional demand for NVIDIA computing platforms," said Jensen Huang, founder and CEO of NVIDIA. "NVIDIA is propelling advances in AI, digital biology, climate sciences, gaming, creative design, autonomous vehicles and robotics - some of today's most impactful fields."

NVIDIA Provides a Statement on MIA RTX 3090 Ti GPUs

NVIDIA's RTX 3090 Ti graphics card could very well be a Spartan from 343 Industries' Halo, in that it too is missing in action. Originally announced at CES 2022 for a January 27th release, the new halo product for the RTX 30-series family even had some of its specifications announced in a livestream. However, the due date came and went more than half a month ago, and NVIDIA still hadn't said anything about the why and the how of it, or when gamers hoping to snag the best NVIDIA graphics card of this generation should ready their F5 keys (and bank accounts). Until now: in a statement to The Verge, NVIDIA spokesperson Jen Andersson said that "We don't currently have more info to share on the RTX 3090 Ti, but we'll be in touch when we do". Disappointed? So are we.

While the reasons surrounding the RTX 3090 Ti's delayed launch still aren't clear - and with NVIDIA's response, we're left wondering if they ever will be - there were some warning signs that not all the grass was green on the RTX 3090 Ti's launch. The consensus seems to be that NVIDIA found some last-minute production issues with the RTX 3090 Ti, which prompted an emergency delay on the cards' launch. The purported problems range from issues with the card's PCB, BIOS, and even GDDR6X 21 Gbps memory modules - but it's unclear which of these (or perhaps which combination) truly prompted the very real delay on the product launch.

Adobe Premiere Pro 22.2 Update Brings HEVC 10-Bit Encoding with Major Performance Increase for NVIDIA and Intel Graphics Cards

Adobe's Premiere Pro, one of the most widely used video editing tools in the industry, received a February update today with version 22.2. The new version brings a wide array of features, like Adobe Remix, an advanced audio retiming tool. Alongside that, the latest update accelerates offline text-to-speech capabilities by as much as three times. However, this is not the most significant feature. Adobe has also enabled 10-bit 4:2:0 HDR HEVC hardware encoding on Windows with Intel and NVIDIA graphics. This feature allows the software to use the dedicated encoding hardware built into NVIDIA Quadro RTX and Intel Iris Xe graphics cards.

The company ran some preliminary tests, and you can see the charts below: export times improve significantly with the 22.2 release's HEVC 10-bit hardware encoding. For Intel GPUs, no special drivers need to be installed; for NVIDIA GPUs, Adobe advises official Studio drivers in combination with Quadro RTX GPUs.

Upcoming PCIe 12VHPWR Connectors Compatible with NVIDIA RTX 30-series Founders Edition Graphics Cards

As you're most likely aware, NVIDIA introduced a new power connector with its RTX 30-series Founders Edition graphics cards, and at the time it was something of a controversy, especially as none of its AIB partners went for the connector. As it turned out, the connector was largely adopted by the PCI-SIG, with a few additions that led to the 12VHPWR connector. The main difference between the two is the addition of a small set of sense pins, for a 12+4-pin connector. It has now been confirmed that 12VHPWR cables will work with NVIDIA's Founders Edition cards. This isn't a huge surprise as such, but it is good news for those who own a Founders Edition card and are looking to invest in a new PSU.

However, what's more interesting is that the 12VHPWR connector will operate in two distinct modes. If the 4-pin sense connector isn't attached to the GPU, the PSU will only deliver 450 W, presumably as a safety precaution. On the other hand, if the sense connector is used, the same cable can deliver up to 600 W, which would allow a combined card power draw of up to 675 W for next-generation GPUs. It's possible that we'll see cards with multiple power thresholds negotiated on the fly with the PSU, and we might also see PSUs that can force the GPU into a lower power state if the overall system load gets too high. It'll be interesting to see what the new standard delivers, since so far few details have been released on how the sense function works.
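The two modes described above can be sketched as a tiny power-limit lookup. The wattages come from the report; the function names are purely illustrative, and the real spec may expose more granular levels than the two modes mentioned here.

```python
# Hypothetical sketch of the 12VHPWR sense-pin behavior described above.
# Power figures are from the article; names are illustrative only.

def connector_power_limit(sense_pins_connected: bool) -> int:
    """Max power (watts) the PSU delivers over the 12VHPWR cable."""
    # Without the 4-pin sense connector, delivery is capped at 450 W;
    # with it, the same cable may carry up to 600 W.
    return 600 if sense_pins_connected else 450

def total_board_power(sense_pins_connected: bool, slot_power: int = 75) -> int:
    """Cable power plus the 75 W a PCIe x16 slot can supply."""
    return connector_power_limit(sense_pins_connected) + slot_power
```

With the sense pins attached, `total_board_power(True)` yields the 675 W combined figure the article mentions.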

Intel Adds Experimental Mesh Shader Support in DG2 GPU Vulkan Linux Drivers

Mesh shaders are a relatively new concept for a programmable geometry pipeline, promising to simplify the organization of the whole graphics rendering pipeline. NVIDIA introduced the concept with Turing back in 2018, and AMD joined with RDNA2. Today, thanks to findings from Phoronix, we have learned that Intel's DG2 GPU will support mesh shaders under the Vulkan API. The difference between the mesh/task pipeline and the traditional graphics pipeline is that the former is much simpler and offers higher scalability, bandwidth reduction, and greater flexibility in the design of mesh topology and graphics work. In Vulkan, the current mesh shader state is NVIDIA's contribution, the VK_NV_mesh_shader extension. The docs below explain it in greater detail:
Vulkan API documentation
This extension provides a new mechanism allowing applications to generate collections of geometric primitives via programmable mesh shading. It is an alternative to the existing programmable primitive shading pipeline, which relied on generating input primitives by a fixed function assembler as well as fixed function vertex fetch.

There are new programmable shader types—the task and mesh shader—to generate these collections to be processed by fixed-function primitive assembly and rasterization logic. When task and mesh shaders are dispatched, they replace the core pre-rasterization stages, including vertex array attribute fetching, vertex shader processing, tessellation, and geometry shader processing.

Researchers Exploit GPU Fingerprinting to Track Users Online

Online tracking happens when third-party services collect information about people and use it to identify them among all other online users. Collecting such identifying information is often called "fingerprinting," and attackers usually exploit it to gain user information. Today, researchers announced that they managed to use WebGL (Web Graphics Library) to create a unique fingerprint for every GPU and thereby track users online. The exploit works because every piece of silicon has its own variations and unique characteristics from manufacturing, just as each human has a unique fingerprint. Even among units of the exact same processor model, silicon differences make each product distinct. That is why you cannot overclock every processor to the same frequency, and why binning exists.

What would happen if someone precisely measured those differences between GPUs and used them to identify online users? That is exactly what the researchers behind DrawnApart thought of. Using WebGL, they run a GPU workload that takes 176 measurements across 16 data collection points, built from vertex operations in GLSL (OpenGL Shading Language) crafted so the work is not randomly distributed across the GPU's execution units. DrawnApart can measure and record the time to complete vertex renders, record the exact route the rendering took, handle stall functions, and much more. This lets the framework produce unique combinations of data that are turned into GPU fingerprints, which can be exploited online. Below you can see the data trace recordings of two GPUs (same model) showing variations.
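The underlying idea can be illustrated with synthetic data. This is not DrawnApart's actual code, and the timing values below are invented; the point is only that repeated runs on the same GPU produce timing vectors that cluster closer together than runs from a physically different, same-model GPU.

```python
# Illustrative sketch of timing-based GPU fingerprinting (synthetic data,
# not DrawnApart's real measurement pipeline).
import math

def fingerprint_distance(a: list[float], b: list[float]) -> float:
    """Euclidean distance between two timing fingerprints."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Two "identical model" GPUs running the same workload: slightly
# different silicon yields slightly different per-batch timings (ms).
gpu1_run1 = [1.02, 0.98, 1.10, 1.05]  # GPU #1, first session
gpu1_run2 = [1.01, 0.99, 1.11, 1.04]  # GPU #1, later session
gpu2_run1 = [1.09, 0.93, 1.02, 1.12]  # GPU #2, same model

# Repeat runs of the same physical GPU cluster together, so the
# fingerprint can re-identify the device across sessions.
assert fingerprint_distance(gpu1_run1, gpu1_run2) < \
       fingerprint_distance(gpu1_run1, gpu2_run1)
```

A real tracker would replace the synthetic lists with timings harvested via WebGL and use a classifier rather than a plain distance, but the separation property is the same.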

AMD Radeon RX 6900 XT Scores Top Spot in 3DMark Fire Strike Hall of Fame with 3.1 GHz Overclock

3DMark Fire Strike Hall of Fame is where overclockers submit their best hardware benchmark runs and try to beat the very best. For years one record held, and today it was finally beaten. Thanks to the extreme overclocker "biso biso" from South Korea, a member of the EVGA OC team, the top spot now belongs to the AMD Radeon RX 6900 XT graphics card. The previous 3DMark Fire Strike world record was set on April 22nd, 2020, when Vince Lucido, also known as K|NGP|N, set a record with four-way SLI NVIDIA GeForce GTX 1080 Ti GPUs. That record became old news on January 27th, when biso biso made history with the AMD Radeon RX 6900 XT GPU.

The overclocker scored 62,389 points, just 1,183 more than the previous record. He pushed the Navi 21 XTX silicon powering the Radeon RX 6900 XT to an impressive 3,147 MHz. Paired with a GPU memory clock of 2,370 MHz, the GPU was likely LN2-cooled to achieve these results. The overclocker used EVGA's Z690 DARK KINGPIN motherboard with an Intel Core i9-12900K processor as the platform of choice. You can check out the result on the 3DMark Fire Strike Hall of Fame website for yourself.

NVIDIA "Hopper" Might Have Huge 1000 mm² Die, Monolithic Design

Renowned hardware leaker kopite7kimi on Twitter revealed some purported details of NVIDIA's next-generation HPC (High Performance Computing) architecture, Hopper. According to the leaker, Hopper still sports a classic monolithic die design despite previous rumors, and it appears NVIDIA's performance targets have led to a monstrous ~1,000 mm² die for the GH100 chip, close to the maximum complexity and size that can be manufactured on a given process. This is despite the fact that Hopper is also rumored to be manufactured on TSMC's 5 nm node, achieving higher transistor density and power efficiency than the 8 nm Samsung process NVIDIA currently contracts. At the very least, it means the final die will be bigger than the already enormous 826 mm² of NVIDIA's GA100.

If this is indeed the case and NVIDIA isn't deploying an MCM (Multi-Chip Module) design on Hopper, which serves a market with increased profit margins, it likely means that less profitable consumer-oriented products from NVIDIA won't feature the technology either. MCM designs also make more sense in NVIDIA's HPC products, as they would enable higher theoretical performance when scaling - exactly what that market demands. Of course, NVIDIA could still be looking to develop an MCM version of the GH100; if that were to happen, the company could pair two of these chips together as another HPC product (the rumored GH-102). ~2,000 mm² in a single GPU package, paired with increased density and architectural improvements, might actually be what NVIDIA requires to achieve the 3x performance jump over the Ampere-based A100 the company is reportedly targeting.

MAINGEAR Launches New NVIDIA GeForce RTX 3050 Desktops, Offering Next-Gen Gaming Features

MAINGEAR—an award-winning PC system integrator of custom gaming desktops, notebooks, and workstations—today announced that new NVIDIA GeForce RTX 3050 graphics cards are now available to configure within MAINGEAR's product line of award-winning custom gaming desktop PCs and workstations. Featuring support for real-time ray tracing effects and AI technologies, MAINGEAR PCs equipped with the NVIDIA GeForce RTX 3050 offer gamers next-generation ray-traced graphics and performance comparable to the latest consoles.

Powered by Ampere, the NVIDIA GeForce RTX 3050 features NVIDIA's 2nd-generation Ray Tracing Cores and 3rd-generation Tensor Cores. Combined with new streaming multiprocessors and high-speed G6 memory, the NVIDIA GeForce RTX 3050 can power the latest and greatest games. NVIDIA RTX on 30-series GPUs delivers real-time ray tracing effects, including shadows, reflections, and Ambient Occlusion (AO). The groundbreaking NVIDIA DLSS (Deep Learning Super Sampling) 2.0 AI technology utilizes Tensor Core AI processors to boost frame rates while producing sharp, uncompromised visual fidelity comparable to high native resolutions.

Alphacool Launches Aurora Vertical GPU Mount & Eisblock Acryl GPX for Zotac RTX 3070Ti

With the new Aurora Vertical GPU Mount, Alphacool now offers the possibility to install the graphics card vertically inside compatible PC cases, whether they're air or liquid cooled. In addition, the mount features 11 digitally addressable 5V RGB LEDs that create a unique, very classy looking illumination. The digital aRGB LED lighting can be controlled either with a digital RGB controller or a digital RGB capable motherboard.

There is more new blood within the Eisblock range! In the future, the Zotac RTX 3070 Ti AMP Holo, Trinity, and Trinity OC can also be water-cooled with Alphacool's Eisblock Aurora Acryl GPX custom cooler.

EVGA Introduces E1 Gaming Rig

With the launch of the EVGA E1, EVGA is once again taking extreme gaming to the next level and making a statement with its new gaming rig. The E1 will be extremely limited and available to EVGA Members only.

Back in 1999, when EVGA was first founded, graphics cards were basic. EVGA was the first to introduce Heat Pipe Cooling and iCX Technology, along with 4-Way SLI, bringing more benefits to gamers. In 2012, EVGA introduced its first power supply. Through quality and features, EVGA has become a top choice for power supplies when gamers build their systems. The E1 continues the tradition of top-tier hardware for gamers.

NVIDIA "GA103" GeForce RTX 3080 Ti Laptop GPU SKU Pictured

When NVIDIA announced the GeForce RTX 3080 Ti mobile graphics card, we were left wanting to see just what the GA103 silicon powering it looks like. Thanks to Chinese YouTuber Geekerwan, we now have the first pictures of the GPU. Pictured below is the GA103S/GA103M SKU with GN20-E8-A1 labeling. It features 58 SMs, making for 7,424 CUDA cores in total. The SKU carries 232 Tensor cores and 58 RT cores. NVIDIA has paired this GPU with a 256-bit memory bus and 16 GB of GDDR6 memory.

As it turns out, the full GA103 silicon has a total of 7,680 CUDA cores and a 320-bit memory bus, so this mobile version is a slightly cut-down variant. It sits neatly between the GA104 and GA102 SKUs, providing a significant improvement over GA104's core count. Power consumption of the GA103 SKU for the GeForce RTX 3080 Ti mobile is set to a variable 80-150 W range, which can be adjusted according to the system's cooling capacity. An interesting point is the die size of 496 mm², roughly a quarter larger than GA104, in line with its roughly quarter-higher CUDA core count.
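The core counts above can be sanity-checked from the SM counts, since Ampere gaming GPUs pack 128 CUDA cores per SM. The 60-SM figure for the full die is an inference from the 7,680-core total, not something stated in the leak.

```python
# Cross-checking the GA103 figures from the article.
CORES_PER_SM = 128           # CUDA cores per SM on Ampere gaming GPUs

full_ga103_sms = 60          # inferred from the 7,680-core full die
mobile_sku_sms = 58          # the RTX 3080 Ti Laptop SKU pictured

assert full_ga103_sms * CORES_PER_SM == 7680   # full GA103
assert mobile_sku_sms * CORES_PER_SM == 7424   # matches the article
```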

Tachyum Selected for Pan-European Project Enabling 1 AI Zettaflop in 2024

Tachyum today announced that it was selected by the Slovak Republic to participate in the latest submission for the Important Projects of Common European Interest (IPCEI) program, to develop Prodigy 2 for HPC/AI. Prodigy 2 for HPC/AI will enable 1 AI zettaflop and more than 10 DP exaflops of compute, supporting superhuman brain-scale computing by 2024 for under €1 billion. As part of this selection, Tachyum could receive a €49 million grant to accelerate a second generation of its Prodigy processor for HPC/AI on a 3-nanometer process.

The IPCEI program can make a very important contribution to sustainable economic growth, jobs, competitiveness and resilience for industry and the economy in the European Union. IPCEI will strengthen the EU's open strategic autonomy by enabling breakthrough innovation and infrastructure projects through cross-border cooperation and with positive spill-over effects on the internal market and society as a whole.

Intel Arc Alchemist DG2 GPU Memory Configurations Leak

Intel's upcoming Arc Alchemist lineup of discrete graphics cards is generating a lot of attention from consumers. Leaks of these cards' performance and detailed specifications appear more and more as we count down to launch day, sometime in Q1 of this year. Today we saw a slide from @9950pro on Twitter that shows the laptop memory configurations of Intel's DG2 GPUs. As the picture suggests, the top-end SKU1 with 512 EUs supports 16 GB of GDDR6 memory running at 16 Gbps. The memory sits on a 256-bit bus and delivers 512 GB/s of bandwidth from eight VRAM modules.

SKU2, the variant with 384 EUs, supports six VRAM modules on a 192-bit bus, also running at 16 Gbps, for a total capacity of 12 GB and a bandwidth of 384 GB/s. Going down the stack, the SKU3 DG2 GPU features 256 EUs, four VRAM modules on a 128-bit bus, 8 GB of capacity, and 256 GB/s of bandwidth. Last but not least, the smallest DG2 variants come as SKU4 and SKU5, featuring 128 EUs and 96 EUs, respectively. Intel envisions these lower-end SKUs with two VRAM modules on a 64-bit bus, this time with slower GDDR6 running at 14 Gbps, for 4 GB of total capacity and 112 GB/s of bandwidth.
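All of the bandwidth figures above follow from the same simple formula: bandwidth in GB/s equals the bus width in bits divided by 8, multiplied by the per-pin speed in Gbps.

```python
# GDDR6 bandwidth: GB/s = (bus width in bits / 8) * per-pin Gbps.
def gddr6_bandwidth(bus_bits: int, gbps: float) -> float:
    return bus_bits / 8 * gbps

assert gddr6_bandwidth(256, 16) == 512.0  # SKU1: 512 EUs, 16 GB
assert gddr6_bandwidth(192, 16) == 384.0  # SKU2: 384 EUs, 12 GB
assert gddr6_bandwidth(128, 16) == 256.0  # SKU3: 256 EUs, 8 GB
assert gddr6_bandwidth(64, 14) == 112.0   # SKU4/SKU5: 4 GB
```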

Intel Arc Alchemist Xe-HPG Graphics Card with 512 EUs Outperforms NVIDIA GeForce RTX 3070 Ti

Intel's Arc Alchemist discrete lineup of graphics cards is scheduled for launch this quarter, and we are getting some performance benchmarks of the DG2-512EU silicon, representing the top-end Xe-HPG configuration. Thanks to a discovery by well-known hardware leaker TUM_APISAK, we have a result in the SiSoftware database showing Intel's Arc Alchemist GPU with 4,096 cores and, according to the benchmark report, just 12.8 GB of GDDR6 VRAM. That figure is an error in the report, as this SKU should come with 16 GB of GDDR6 VRAM. The card was reportedly running at 2.1 GHz, though we don't know whether this represents base or boost speed.

When it comes to actual performance, the DG2-512EU GPU scored 9017.52 Mpix/s, while NVIDIA's GeForce RTX 3070 Ti managed 8369.51 Mpix/s in the same test group. Comparing the two cards on floating-point operations, Intel has the advantage in the half-float, double-float, and quad-float tests, while NVIDIA holds the single-float crown. Overall this represents a roughly 7% advantage for Intel's GPU, suggesting Arc Alchemist has the potential to stand up against NVIDIA's offerings.
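The quoted advantage can be reproduced directly from the two scores (the precise figure is closer to 7.7%, which the article rounds down to 7%):

```python
# Relative advantage of the DG2-512EU over the RTX 3070 Ti in the
# SiSoftware Mpix/s result quoted above.
intel_score = 9017.52
nvidia_score = 8369.51
advantage_pct = (intel_score / nvidia_score - 1) * 100  # ~7.7%
```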