News Posts matching #GPU


AMD's Pain Point is ROCm Software, NVIDIA's CUDA Software is Still Superior for AI Development: Report

The battle for AI acceleration in the data center is, as most readers are aware, insanely competitive, with NVIDIA offering a top-tier software stack. However, AMD has tried in recent years to capture a portion of the revenue that hyperscalers and OEMs are willing to spend, with its Instinct MI300X accelerator lineup for AI and HPC. Despite having decent hardware, the company is nowhere close to bridging the software gap with its competitor, NVIDIA. According to the latest report from SemiAnalysis, a research and consultancy firm, the firm ran a five-month experiment using the Instinct MI300X for training and benchmark runs, and the findings were surprising: even with better hardware on paper, AMD's software stack, including ROCm, massively degrades the accelerator's real-world performance.

"When comparing NVIDIA's GPUs to AMD's MI300X, we found that the potential on paper advantage of the MI300X was not realized due to a lack within AMD public release software stack and the lack of testing from AMD," noted SemiAnalysis, breaking down arguments in the report further, adding that "AMD's software experience is riddled with bugs rendering out of the box training with AMD is impossible. We were hopeful that AMD could emerge as a strong competitor to NVIDIA in training workloads, but, as of today, this is unfortunately not the case. The CUDA moat has yet to be crossed by AMD due to AMD's weaker-than-expected software Quality Assurance (QA) culture and its challenging out-of-the-box experience."

ASUS Reveals the V16 Gaming Laptop

Today marks the debut of the brand-new ASUS V16 (V3607), an entry-level 16-inch gaming laptop that broadens the appeal of the innovative ASUS laptop portfolio. Featuring futuristic uber-cool design details and unparalleled performance from its up to Intel Core 7 processor and NVIDIA GeForce RTX 40 Series Laptop GPU, ASUS V16 is built to win—or create—in distinctive style.

Its fast 16-inch 16:10 144 Hz FHD IPS display, with an impressive 89% screen-to-body ratio, ensures fluid gaming visuals, while Dirac audio technology and ASUS Audio Booster provide powerful and immersive sound. Offering an outstanding user experience, the laptop also includes a large touchpad and a comfortable ASUS ErgoSense keyboard, along with AI noise-cancelation technology and 3DNR for enhanced video conferencing.

NVIDIA RTX 5080 Laptop GPU Might Be Up to 60% Faster Than RTX 4080 Laptop

Moore's Law is Dead, a prominent YouTube channel specializing in computer hardware leaks, has revealed its expectations for the RTX 50-series Laptop GPUs. We have already reported on a massive product listing leak shedding light on almost every single "Blackwell" laptop GPU, but needless to say, more information is always welcome. According to Moore's Law is Dead, the RTX 5090 Laptop GPU, as hinted at by the aforementioned prior leak, will sport only 16 GB of GDDR7 VRAM - the same as the RTX 5080 Laptop.

Moreover, his sources indicate that the RTX 5080 will drop with a 175-watt TGP and 7,680 CUDA cores, which is shockingly only a hair more than the 7,424 found in its predecessor. However, the source did state that the RTX 5080 will be around 40 to 60% faster than the RTX 4080, which would be a massive generational leap in performance. It is not clear how this number was arrived at, and it does seem rather optimistic. Yet another source has indicated that an RTX 5090 Laptop variant with a whopping 24 GB of VRAM is also in the works and might launch down the line, but there is little else to be said about it. As MLID notes, NVIDIA has little to no competition in the high-end laptop segment, which inevitably makes things worse for the end user.

Imagination Technologies Reportedly Shipped GPU IP to Chinese Companies like Moore Threads and Biren Technology

According to a recent investigative report, UK-based Imagination Technologies faces allegations of transferring sensitive GPU intellectual property to Chinese companies with potential military connections. The UK-China Transparency organization claims that following its 2020 acquisition by China-controlled investment firm Canyon Bridge, Imagination provided these entities complete access to its GPU IP. The report suggests this included sharing detailed architectural documentation typically reserved for premier clients like Apple. At the center of the controversy are Chinese firms Moore Threads and Biren Technology, which have emerged as significant players in China's AI and GPU sectors. The report indicates Moore Threads maintains connections with military GPU suppliers, while Biren Technology has received partial Russian investment.

The organization argues that Canyon Bridge, which has ties to the state-owned China Reform enterprise, facilitated these technology transfers to benefit China's military-industrial complex. Imagination Technologies has defended its actions, maintaining that all licensing agreements comply with industry standards. The allegations have sparked renewed debate about foreign ownership of strategic technology assets and the effectiveness of current export controls. When Canyon Bridge acquired Imagination in 2020, security experts raised concerns about potential military applications of the firm's technology. UKCT plans to release additional findings, including information from legal disputes involving Imagination's previous management. Rising concerns over technology transfers have prompted governments to reassess export controls and corporate oversight in the semiconductor industry, as nations struggle to balance international commerce with national security priorities. We have yet to see an official government response to this situation.

VeriSilicon Unveils Next-Gen Vitality Architecture GPU IP Series

VeriSilicon today announced the launch of its latest Vitality architecture Graphics Processing Unit (GPU) IP series, designed to deliver high-performance computing across a wide range of applications, including cloud gaming, AI PC, and both discrete and integrated graphics cards.

VeriSilicon's new-generation Vitality GPU architecture delivers exceptional advancements in computational performance with scalability. It incorporates advanced features such as a configurable Tensor Core AI accelerator and a 32 MB to 64 MB Level 3 (L3) cache, offering both substantial processing power and superior energy efficiency. Additionally, the Vitality architecture supports up to 128 channels of cloud gaming per core, addressing the needs of high-concurrency, high-image-quality cloud-based entertainment, while enabling large-scale desktop gaming and applications on Windows systems. With robust support for Microsoft DirectX 12 APIs and AI acceleration libraries, this architecture is ideally suited for a wide range of performance-intensive applications and complex computing workloads.

Intel Arc B580 Selling Like Hot Cakes, Weekly Restocks Planned

It's no secret that Intel has not been having a great time lately. However, calling the company's recently announced Arc B580 gaming graphics card a smash hit would be a wild understatement. The company's previous major GPU launch, the Arc Alchemist, was met with mediocre reviews and a lukewarm reception. The Arc B580, on the other hand, has received overwhelmingly positive reviews across the board, with many even hailing the GPU as a saving grace for the borderline deserted budget-class segment.

Keeping that in mind, it is no surprise that Intel's Arc B580 is selling out nearly everywhere, with the company barely managing to keep enough inventory. As revealed to the popular YouTube channel Linus Tech Tips, Intel plans on having weekly restocks of its Arc B580 gaming GPU. We sure do look forward to that, considering that no one really likes a GPU, no matter how great, that can't be bought. The Arc B580 rocks a higher 12 GB of VRAM, more affordable pricing, and arguably better performance than its primary competitors, the RTX 4060 and the RX 7600. Of course, with Blackwell and RDNA 4 around the corner, it sure does appear that the arena of the ultimate budget GPU is about to get heated once again.

AMD Radeon RX 7900 GRE China-Edition GPU Reaches End-of-Life

According to Tweakers, AMD's Radeon RX 7900 GRE graphics card has reached end-of-life status, as confirmed by multiple AMD board partners the outlet has contacted. The announcement comes just months after the card's expansion into European markets following its initial China-exclusive launch in 2023. Tweakers reports that the supply of the RX 7900 GRE is rapidly declining across retail channels. While ASUS models remain somewhat available, the manufacturer has informed Tweakers that deliveries are currently "limited." AMD has not responded to multiple requests for comment regarding the discontinuation. The RX 7900 GRE offers compelling specifications that position it as a slightly scaled-down variant of the more premium RX 7900 XT.

Built on AMD's RDNA 3 architecture, the card features 80 CUs and 16 GB of GDDR6 memory and operates at a 260 W TDP. The timing of this discontinuation is particularly interesting as AMD prepares to unveil its next-generation RDNA 4-based Radeon RX 8000 series. Perhaps AMD is trying to flush out its remaining inventory to make room for the Radeon RX 8000 series GPUs, which should mainly target the mid-range of the next-generation GPU families, up against competition like NVIDIA's "Blackwell" and Intel's "Battlemage." With the new cards scheduled to appear during AMD's CES keynote on January 6 in Las Vegas, we will have to wait and see what products AMD puts out before analyzing why it decided to EOL the Radeon RX 7900 GRE.

NVIDIA Blackwell RTX and AI Features Leaked by Inno3D

NVIDIA's RTX 5000 series GPU hardware has been leaked repeatedly in the weeks and months leading up to CES 2025, with previous leaks tipping significant updates for the RTX 5070 Ti in the VRAM department. Now, Inno3D is apparently hinting that the RTX 5000 series will also introduce updated machine learning and AI tools to NVIDIA's GPU line-up. An official CES 2025 teaser published by Inno3D, titled "Inno3D At CES 2025, See You In Las Vegas!" makes mention of potential updates to NVIDIA's AI acceleration suite for both gaming and productivity.

The Inno3D teaser specifically points out "Advanced DLSS Technology," "Enhanced Ray Tracing" with new RT cores, "better integration of AI in gaming and content creation," "AI-Enhanced Power Efficiency," AI-powered upscaling tech for content creators, and optimizations for generative AI tasks. All of this sounds like it builds off of previous NVIDIA technology, like RTX Video Super Resolution, although the mention of content creation suggests that it will be more capable than previous efforts, which were seemingly mostly consumer-focused. Of course, improved RT cores in the new RTX 5000 GPUs are also expected, although it would seemingly be the first time NVIDIA uses AI to enhance power draw, suggesting that the CES announcement will come with new features for the NVIDIA App. The real standout features, though, are "Neural Rendering" and "Advanced DLSS," both of which are new nomenclature. Of course, Advanced DLSS may simply be Inno3D marketing copy, but Neural Rendering suggests that NVIDIA will "revolutionize how graphics are processed and displayed," which is about as vague as one could be.

NVIDIA GeForce RTX 5080 to Stand Out with 30 Gbps GDDR7 Memory, Other SKUs Remain on 28 Gbps

NVIDIA is preparing to unveil its "Blackwell" GeForce RTX 5080 graphics card, featuring cutting-edge GDDR7 memory technology. The RTX 5080 is expected to be equipped with 16 GB of GDDR7 memory running at an impressive 30 Gbps. Combined with a 256-bit memory bus, this configuration will deliver approximately 960 GB/s of bandwidth, a 34% improvement over its predecessor, the RTX 4080, which operates at 716.8 GB/s. The RTX 5080 will stand as the sole card in the lineup featuring 30 Gbps memory modules, while other models in the RTX 50 series will incorporate slightly slower 28 Gbps variants. This strategic differentiation is possibly due to the massive CUDA core gap between the rumored RTX 5080 and RTX 5090.
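The quoted bandwidth figures follow directly from bus width and per-pin data rate. A minimal sketch of the arithmetic (the 22.4 Gbps figure for the RTX 4080's GDDR6X is an assumption inferred from its published 716.8 GB/s bandwidth):

```python
# Peak memory bandwidth (GB/s) = bus width in bytes * per-pin data rate (Gbps).
def memory_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

rtx_5080 = memory_bandwidth_gbs(256, 30)    # rumored: 256-bit bus, 30 Gbps GDDR7
rtx_4080 = memory_bandwidth_gbs(256, 22.4)  # assumed 22.4 Gbps GDDR6X
print(rtx_5080)                 # 960.0 GB/s
print(rtx_5080 / rtx_4080 - 1)  # ~0.34, i.e. the quoted 34% improvement
```

The same formula gives 896 GB/s for a 256-bit bus at the 28 Gbps rumored for the rest of the RTX 50 lineup.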

The flagship RTX 5090 is set to push boundaries even further, implementing a wider 512-bit memory bus that could potentially achieve bandwidth exceeding 1.7 TB/s. NVIDIA appears to be reserving memory configurations beyond 16 GB exclusively for this top-tier model, at least until higher-capacity GDDR7 modules become available on the market. Despite these impressive specifications, the RTX 5080's bandwidth still falls approximately 5% short of the current RTX 4090, which benefits from a physically wider bus configuration. This performance gap between the 5080 and the anticipated 5090 suggests NVIDIA is maintaining a clear hierarchy within its product stack, and we will have to wait for the official launch to learn the what, how, and why of the Blackwell gaming GPUs.

Framework Laptops Announces Further Expansion for Framework 16 Gaming Laptop

Framework, the company known for making consumer-friendly, repairable, upgradeable laptops, has officially announced the first expansion bay module for the Framework 16, its AMD-powered gaming laptop. The new storage module, which slots into the Expansion Bay, has dual M.2 slots for up to 16 TB of additional storage for the Framework 16. Part of the idea behind the storage expansion seems to be turning what is essentially a gaming laptop into a capable workstation. Crucially, using the storage expansion requires removing the discrete Radeon RX 7700S GPU, and since the Framework 16 already has dual M.2 slots on the motherboard, this expansion isn't really intended for gamers, anyway.

One of the major selling points for the Framework 16 was that it offered PCIe expansion via a modular interface, and this is Framework's first real foray into expanding that ecosystem for its largest laptop. In addition to the storage expansion, Framework also announced a new Mystery Box system for its US and Canada Outlets to offload spare parts, like returned modules and components that it doesn't want to relegate to the e-waste pile but also cannot financially justify sorting through and refurbishing. These Mystery Boxes each contain at least three items and come with a warning that reads "Note that these don't come with a warranty and are non-returnable, so only get it if you want random scrap to play with!"

NVIDIA GeForce RTX 5070 Ti Leak Tips More VRAM, Cores, and Power Draw

It's an open secret by now that NVIDIA's GeForce RTX 5000 series GPUs are on the way, with an early 2025 launch on the cards. Now, preliminary details about the RTX 5070 Ti have leaked, revealing an increase in both VRAM and TBP and suggesting that the new upper mid-range GPU will finally address the increased VRAM demand from modern games. According to the leak from Wccftech, the RTX 5070 Ti will have 16 GB of GDDR7 VRAM, up from 12 GB on the RTX 4070 Ti, as we previously speculated. In line with previous leaks, the new sources confirm that the 5070 Ti will use the cut-down GB203 chip, although the new leak points to a significantly higher TBP of 350 W. The new memory configuration will supposedly run on a 256-bit memory bus at 28 Gbps for a total memory bandwidth of 896 GB/s, a significant boost over the RTX 4070 Ti.

Supposedly, the RTX 5070 Ti will also see a bump in total CUDA cores, from 7,680 in the RTX 4070 Ti to 8,960. The new card will also switch to the 12V-2x6 power connector, an updated revision of the 16-pin connector found on the 4070 Ti. NVIDIA is expected to announce the RTX 5000 series graphics cards at CES 2025 in early January, but the RTX 5070 Ti will supposedly be the third card in the 5000-series launch cycle. That said, leaks suggest that the 5070 Ti will still launch in Q1 2025, meaning we may see an indication of specs at CES 2025, although pricing is still unclear.

Update Dec 16th: Kopite7kimi, ubiquitous hardware leaker, has since responded to the RTX 5070 Ti leaks, stating that 350 W may be on the higher end for the RTX 5070 Ti: "...the latest data shows 285W. However, 350W is also one of the configs." This could mean that a TBP of 350 W is possible, although maybe only on certain graphics card models, if competition is strong, or in certain boost scenarios.

Intel Co-CEO Dampens Expectations for First-Gen "Falcon Shores" GPU

Intel's ambitious plan to challenge AMD and NVIDIA in the AI accelerator market may still be a little questionable, according to recent comments from interim co-CEO Michelle Johnston Holthaus at the Barclays 22nd Annual Global Technology Conference. The company's "Falcon Shores" project, which aims to merge Gaudi AI capabilities with Intel's data center GPU technology for HPC workloads, received surprising commentary from Holthaus. "We really need to think about how we go from Gaudi to our first generation of Falcon Shores, which is a GPU," she stated, before acknowledging potential limitations. "And I'll tell you right now, is it going to be wonderful? No, but it is a good first step."

Intel's pragmatic approach to AI hardware development was further highlighted when Holthaus addressed the company's product strategy. Rather than completely overhauling their development pipeline, she emphasized the value of iterative progress: "If you just stop everything and you go back to doing like all new product, products take a really long time to come to market. And so, you know, you're two years to three years out from having something." The co-CEO advocated for a more agile approach, stating, "I'd rather have something that I can do in smaller volume, learn, iterate, and get better so that we can get there." She acknowledged the enduring nature of AI market opportunities, particularly noting the current focus on training while highlighting the potential in other areas: "Obviously, AI is not going away. Obviously training is, you know, the focus today, but there's inference opportunities in other places where there will be different needs from a hardware perspective."

The Witcher IV Gets New Trailer at The Game Awards, Pre-Rendered on Unannounced NVIDIA RTX GPU

CD Projekt RED pulled a rabbit out of its hat, revealing the first trailer for the upcoming The Witcher IV. The new trailer is pre-rendered in a custom build of Unreal Engine 5, which CD Projekt RED is now using instead of its own in-house engine, as announced earlier. More interestingly, it is pre-rendered on an "unannounced NVIDIA GeForce RTX GPU." CD Projekt RED teamed up with Platige Image, which was responsible for the intro cinematics in the previous Witcher games.

The trailer, which is almost six minutes long, follows Ciri, confirming earlier rumors that the game might focus on her as the main character. As detailed by CD Projekt RED, the trailer shows Ciri's new abilities and tools, as well as a small part of the story, a witcher contract set in a remote village. CD Projekt RED promises that The Witcher IV will be "the most immersive and ambitious open-world Witcher game to date."

Thermal Grizzly Launches New Thermal Putty Gap Fillers in Three Different Versions

Thermal Grizzly's Thermal Putty offers a premium alternative to traditional thermal pads. It is electrically non-conductive, easy to apply, and functions as a flexible gap filler that compensates for height differences. This makes it an ideal replacement for thermal pads in graphics cards. Graphics cards are typically equipped with thermal pads of varying heights from the factory. When replacing these pads or upgrading to a GPU water cooler, matching replacement pads are necessary.

TG Thermal Putty can compensate for height differences from 0.2 to 3.0 mm, making it a versatile solution. Thermal Putty can be applied in two ways. Firstly, it can be applied over large areas using the included spatulas. Alternatively, it can be applied manually (gloves are recommended). When applied by hand, small beads can be shaped to fit the specific contact surfaces (e.g., VRAM, SMD).

JPR: Q3'24 PC Graphics AiB Shipments Decreased 14.5% Compared to the Last Quarter

According to a new research report from the analyst firm Jon Peddie Research, shipments in the global PC-based graphics add-in board (AIB) market reached 8.1 million units in Q3'24, and desktop PC CPU shipments increased to 20.1 million units. Overall, AIBs are forecast to post a compound annual growth rate of -6.0% from 2024 to 2028, reaching an installed base of 119 million units at the end of the forecast period. Over the next five years, the penetration of AIBs in desktop PCs will be 83%.
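For readers unfamiliar with the metric, a CAGR compounds multiplicatively over the forecast window; a quick sketch of the arithmetic (the four-year compounding span is an assumption based on the stated 2024-2028 period):

```python
# Total growth factor after `years` of compounding at `cagr` per year.
def compound_factor(cagr: float, years: int) -> float:
    return (1 + cagr) ** years

factor = compound_factor(-0.06, 4)  # 2024 -> 2028 at -6.0% CAGR
print(f"{factor:.3f}")  # 0.781, i.e. roughly a 22% contraction over the period
```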

As indicated in the following chart, AMD's overall AIB market share decreased 2.0% from last quarter, while NVIDIA's market share increased by 2.0%. These slight flips of market share in a down quarter don't mean much except to the winner. The overall market dynamics haven't changed.
  • The AIB overall attach rate in desktop PCs for the quarter decreased to 141%, down 26.9% from last quarter.
  • The desktop PC CPU market decreased 3.4% year over year and increased 42.2% quarter to quarter, which influenced the attach rate of AIBs.

Advantech Unveils Hailo-8 Powered AI Acceleration Modules for High-Efficiency Vision AI Applications

Advantech, a leading provider of AIoT platforms and services, proudly unveils its latest AI acceleration modules: the EAI-1200 and EAI-3300, powered by Hailo-8 AI processors. These modules deliver AI performance of up to 52 TOPS while achieving more than 12 times the power efficiency of comparable AI modules and GPU cards. Designed in standard M.2 and PCIe form factors, the EAI-1200 and EAI-3300 can be seamlessly integrated with diverse x86 and Arm-based platforms, enabling quick upgrades of existing systems and boards to incorporate AI capabilities. With these AI acceleration modules, developers can run inference efficiently on the Hailo-8 NPU while handling application processing primarily on the CPU, optimizing resource allocation. The modules are paired with user-friendly software toolkits, including the Edge AI SDK for seamless integration with HailoRT, the Dataflow Compiler for converting existing models, and TAPPAS, which offers pre-trained application examples. These features accelerate the development of edge-based vision AI applications.

EAI-1200 M.2 AI Module: Accelerating Development for Vision AI Security
The EAI-1200 is an M.2 AI module powered by a single Hailo-8 VPU, delivering up to 26 TOPS of computing performance while consuming approximately 5 watts of power. An optional heatsink supports operation in temperatures ranging from -40 to 65°C, ensuring easy integration. This cost-effective module is especially designed to bundle with Advantech's systems and boards, such as the ARK-1221L, AIR-150, and AFE-R770, enhancing AI applications including baggage screening, workforce safety, and autonomous mobile robots (AMR).

NVIDIA Shows Future AI Accelerator Design: Silicon Photonics and DRAM on Top of Compute

During the prestigious IEDM 2024 conference, NVIDIA presented its vision for future AI accelerator design, which the company plans to pursue in future accelerator iterations. Currently, the limits of chip packaging and silicon innovation are being stretched. However, future AI accelerators might need some additional verticals to gain the required performance improvement. The proposed design at IEDM 2024 introduces silicon photonics (SiPh) at center stage. NVIDIA's architecture calls for 12 SiPh connections for intrachip and interchip connections, with three connections per GPU tile across four GPU tiles per tier. This marks a significant departure from traditional interconnect technologies, which in the past have been limited by the natural properties of copper.

Perhaps the most striking aspect of NVIDIA's vision is the introduction of so-called "GPU tiers"—a novel approach that appears to stack GPU components vertically. This is complemented by an advanced 3D stacked DRAM configuration featuring six memory units per tile, enabling fine-grained memory access and substantially improved bandwidth. This stacked DRAM would have a direct electrical connection to the GPU tiles, mimicking AMD's 3D V-Cache on a larger scale. However, the timeline for implementation reflects the significant technological hurdles that must be overcome. The scale-up of silicon photonics manufacturing presents a particular challenge, with NVIDIA requiring the capacity to produce over one million SiPh connections monthly to make the design commercially viable. NVIDIA has invested in Lightmatter, which builds photonic packages for scaling compute, so some form of its technology could end up in future NVIDIA accelerators.

Sparkle Working On More Intel Arc Battlemage Graphics Card Designs, Coming Next Year

In addition to the TITAN and GUARDIAN SKUs announced earlier this month, Sparkle is working on several other SKUs. The roadmap includes the low-profile version of the Arc B570, as well as the Arc B580 ROC OC Ultra, which is expected to come with a 2,800 MHz GPU factory overclock and 210 W TBP, both coming next year.

According to the roadmap, Sparkle plans to release the B580 ROC OC Ultra in February 2025 as part of Sparkle's ROC Luna series, featuring an all-white design. As mentioned, it gets a 2,800 MHz GPU factory overclock, which is 60 MHz higher than the Sparkle Arc B580 TITAN OC, along with a slightly higher 210 W TBP. Sparkle included a small picture showing a 2.5-slot-thick design with a dual-fan cooler. The roadmap also confirms the launch of the Arc B570 Low-Profile version, which will feature a lower 170 W TBP and a three-fan cooler, similar to what we have seen from GUNNIR lately.

MAXSUN Designs Arc B580 GPU with Two M.2 SSDs, Putting Leftover PCIe Lanes to Good Use

Thanks to the discovery of VideoCardz, we get a glimpse of MAXSUN's latest Arc B580 graphics card, which packs not only a GPU but also extra room for two additional M.2 SSDs. The PCIe connector on the Intel Arc B580 has x16 physical pins but runs at PCIe 4.0 x8 speeds; Intel verified that it uses only x8 lanes instead of the full x16, leaving x8 lanes unused. However, MAXSUN thought of a clever way to put the leftover x8 lanes to good use by adding two PCIe x4 M.2 slots to the latest triple-fan iCraft B580 SKU. Power delivery for the M.2 drives comes directly from the graphics card, which is made possible by the GPU's partial PCIe lane utilization. This configuration could prove particularly valuable for compact builds or systems with limited motherboard storage options.
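The lane split works out exactly: the GPU's x8 link plus two x4 drives fill the x16 physical slot. A rough sketch of the budget and the resulting per-drive bandwidth ceiling (PCIe 4.0 signals at 16 GT/s per lane with 128b/130b line encoding):

```python
# PCIe 4.0: 16 GT/s per lane, 128b/130b line encoding -> ~1.97 GB/s per lane.
gbs_per_lane = 16 * (128 / 130) / 8

slot_lanes, gpu_lanes, lanes_per_m2, m2_drives = 16, 8, 4, 2
assert gpu_lanes + m2_drives * lanes_per_m2 == slot_lanes  # x8 + 2*x4 = x16

print(f"per-drive ceiling: {lanes_per_m2 * gbs_per_lane:.2f} GB/s")  # ~7.88 GB/s
```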

Interestingly, the SSD pair appears to have its own thermal enclosure, which acts as a heatsink. With constant airflow from the GPU's fans, the M.2 SSDs should be able to sustain their full bandwidth without thermal throttling of read/write speeds. The design follows in the footsteps of AMD's Radeon Pro SSG, which introduced integrated storage on workstation cards with PCIe 3.0 M.2 slots. Back then, it mainly targeted workstation users; MAXSUN, however, envisions gamers using it as an unusual way to expand their storage. The card's pricing and availability date remain a mystery.

Intel Xe3 "Celestial" Architecture is Complete, Hardware Team Moves on to Xe4 "Druid" Design

We have already confirmed that Intel is continuing the development of Arc gaming GPUs beyond the current Xe2 "Battlemage" series, with the new Xe3 "Celestial" architecture in the works. However, thanks to PCWorld's The Full Nerd podcast, Tom Petersen of Intel confirmed that the Xe3 IP has been finished, and the hardware teams are already working on the next Xe4 "Druid" GPU IP. "Our architects are way ahead of us, and they are already working on not the next thing but the next thing after the next thing," said Petersen, adding: "The way I would like to comment is our IP that's kind of called Xe3, which is the one after Xe2, that's pretty much baked, right. And so the software teams have a lot of work to do on Xe3. The hardware teams are off on the next thing, right. That's our cadence, that we need to keep going."

The base IP of next-generation Xe3 "Celestial" GPUs is done. That means the basic media engines, Xe cores, XMX matrix engines, ray tracing engines, and other parts of the gaming GPU are already designed and most likely awaiting trial fabrication. The software to support Xe3 is also being developed while Intel's team works on enabling more optimizations for the Xe2 "Battlemage" architecture, which we previewed recently. We assume that Intel's Xe GPUs will now follow a stricter release cadence, with SKUs getting updated much faster, given how much is already prepared for the future.

Lenovo Legion Go S Leak Details €600 MSRP, AMD Ryzen Z2 SoC, and Bigger Battery for Affordable Gaming Handheld

It's been public knowledge for a while now that Lenovo is planning an imminent successor to its Legion Go handheld that has proven rather popular among handheld gamers. Previous leaks and rumors indicated that the Legion Go S 8ARP1, as it will apparently be named, will be a more affordable version of the current Legion Go. Now, thanks to Roland Quandt, Windows Central, and WinFuture, more details about the upcoming Legion Go S have leaked, including images of the device, supposed specifications, and a potential price.

According to the leaks, the new affordable handheld gaming PC will feature some substantial hardware changes, including a slightly smaller eight-inch display, this time with a much lower 1920 × 1200 resolution and a slightly lower 120 Hz refresh rate. Gone, too, are the Nintendo Switch-style detachable controllers, with the Legion Go S instead featuring a white unibody design. What's more interesting than the leaked images or the hardware changes (detachable controllers or not, the Legion Go is still intended to be used as a handheld) is the new AMD APU that will seemingly power the Go S. The as-yet unannounced AMD Ryzen Z2G appears to have an odd configuration, featuring an AMD Radeon 680M iGPU and Zen 3+ CPU cores. Ultimately, the APU seems like it will place the Legion Go S somewhere between the current-generation Legion Go and devices featuring the AMD Ryzen Z1 (non-Extreme), which is a good place to be if Lenovo hopes to compete with the likes of the Steam Deck OLED, which will seemingly cost around the same as the Legion Go S, depending on the region.

Sparkle Introduces its Intel Arc B-Series Graphics Cards

SPARKLE Intel Arc B-Series Graphics Cards offer high-resolution gaming with Intel XeSS AI upscaling, ray tracing, and 8K media support, plus accelerated AI features for enhanced creation and editing through Intel AI Playground.

SPARKLE, an Intel official AIB partner, is announcing:

SPARKLE Intel Arc B580 TITAN OC - with Limited GPU Holder
The SPARKLE Intel Arc B580 TITAN OC debuts with 12 GB GDDR6 memory and advanced TORN Cooling 2.0 featuring triple AXL fans, a 2.2-slot design, and a full-metal backplate. With a boost clock of 2740 MHz and 200 W power consumption, it delivers top-tier gaming performance.
A blue breathing light effect adds elegance, while a limited SPARKLE GPU Holder completes this powerhouse package. The TITAN Series continues its legacy of performance and style.

Acer Boosts Gaming Lineup with New Nitro Intel Arc B-Series Graphics Cards

Acer today announced an expansion to its gaming portfolio with the new Nitro Intel Arc B-Series graphics cards, aimed at DIY gamers seeking high-performance gaming and content creation upgrades for their PC setups.

The Nitro Intel Arc B570 OC 10 GB and Nitro Intel Arc B580 OC 12 GB graphics cards, with clock speeds up to 2,740 MHz and up to 12 GB GDDR6 memory, offer gamers an immersive experience and access to the latest AI technologies via the Intel AI Playground application. These graphics cards are equipped with Acer's advanced FrostBlade cooling systems to ensure peak performance.

Intel Announces the Arc B-Series Graphics Cards

Today, Intel announced the new Intel Arc B-Series graphics cards (code-named Battlemage). The Intel Arc B580 and B570 GPUs offer best-in-class value for performance at price points that are accessible to most gamers, deliver modern gaming features, and are engineered to accelerate AI workloads. The included Intel Xe Matrix Extensions (XMX) AI engines power the newly introduced XeSS 2, comprising three technologies that together increase performance, visual fluidity, and responsiveness.

"The new Intel Arc B-Series GPUs are the perfect upgrades for gamers. They deliver leading performance-per-dollar and great 1440p gaming experiences with XeSS 2, second-generation ray tracing engines and XMX AI engines. We're delighted to be joined by more partners than ever so that gamers have more choice in finding their perfect design." -Vivian Lien, Intel vice president and general manager of Client Graphics
[Editor's note: Our preview of the Arc Battlemage Series is now live]

ASRock Launches Intel Arc B-Series Graphics Cards Born To Shine Your PC Builds

ASRock, the global leading manufacturer of motherboards, graphics cards, mini PCs, gaming monitors and power supply units, today launches the all-new Intel Arc B-Series graphics cards, including Steel Legend and Challenger products based on Intel Arc B580 and Intel Arc B570 Graphics, which are born to shine your PC builds!

Based on the latest Xe2-HPG architecture, Intel Arc B-Series GPUs are designed for high performance gaming at 1440p and 1080p with AI upscaling and ray tracing. They are equipped with cutting-edge features: Intel Xe Super Sampling technology (XeSS), which takes your gaming experience to the next level with AI-enhanced upscaling for higher performance and high image fidelity. Intel XeSS Frame Generation and Intel Xe Low Latency technologies make your games play smoother and more responsive. Intel Xe Matrix eXtensions (XMX) AI engines accelerate AI-enhanced gaming, creation, and media generation. The Advanced Media Engine accelerates your content creation with two full-featured media transcoders to speed up the media exporting across the most popular formats including AV1.