News Posts matching #Performance


NVIDIA GeForce RTX 5090 Spotted with Missing ROPs, Performance Loss Confirmed, Multiple Vendors Affected

TechPowerUp has discovered that there are NVIDIA GeForce RTX 5090 graphics cards in retail circulation that come with too few render units, which lowers performance. Zotac's GeForce RTX 5090 Solid comes with fewer ROPs than it should—168 are enabled, instead of the 176 that are part of the RTX 5090 specifications. This loss of 8 ROPs has a small but noticeable impact on performance. During recent testing, we noticed that our Zotac RTX 5090 Solid sample underperformed slightly, falling behind even the NVIDIA RTX 5090 Founders Edition. At the time we didn't pay attention to the ROP count that TechPowerUp GPU-Z was reporting, and instead looked for other causes, such as clocks, power, and cooling.

Two days ago, a reader who goes by "Wuxi Gamer" posted this thread on the TechPowerUp Forums, reporting that his retail Zotac RTX 5090 Solid was showing fewer ROPs in GPU-Z than the RTX 5090 should have. The user tried everything from driver and software re-installs to switching between the two video BIOSes the card ships with, all to no avail. By coincidence, we already had this card in our labs, so we dug out our sample. Lo and behold—our sample is missing ROPs, too! GPU-Z reads and reports these unit counts, in this case through NVIDIA's NVAPI driver interface. The 8 missing ROPs constitute a 4.54% loss in the GPU's raster hardware capability, and to illustrate what this means for performance, we've run a couple of tests.
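The percentage is simple to verify from the ROP counts alone; a quick sketch:

```python
# ROP counts from the RTX 5090 specification vs. the affected retail cards
full_rops = 176      # RTX 5090 specification
enabled_rops = 168   # reported by GPU-Z on affected cards

missing = full_rops - enabled_rops
loss_pct = missing / full_rops * 100
# Prints "8 ROPs missing -> 4.55% of raster hardware"; the article's 4.54%
# is the same value truncated rather than rounded.
print(f"{missing} ROPs missing -> {loss_pct:.2f}% of raster hardware")
```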

Apple M4 MacBook Air Gets Geekbenched, Leaked Results Suggest Near MacBook Pro-level Performance

Apple's unannounced M4 MacBook Air family is expected to reach market availability at some point next month. Last December, slimline notebook enthusiasts started hearing about an updated lineup; macOS's Sequoia 15.2 update reportedly referenced upcoming MacBook Air M4 13-inch and 15-inch models. An early sample unit—named "Mac16,12"—has run a Geekbench 6.4.0 (macOS AArch64) gauntlet; results appeared online yesterday. The alleged "MacBook Air 13" candidate posted an overall Metal score of 54,806, and an overall OpenCL tally of 36,305. The two separate Geekbench Browser entries confirm that the sampled device uses a 10-core M4 processor, with Cluster 1 containing four performance cores and Cluster 2 consisting of six efficiency-oriented cores. Base frequency is listed at 4.41 GHz; reportedly the highest recorded for an M4 SoC. The chip accessed 24 GB of unified memory during its macOS 15.2 (Build 24C2101)-based test session.

Notebookcheck and Wccftech compared the aforementioned data points with slightly older M4-equipped hardware, including a premium model. Both outlets observed a "measly" five percent performance difference. Elaborating on their findings, Notebookcheck stated: "as always, we would recommend taking early benchmark results with a healthy amount of skepticism for the time being. With that being said, the MacBook Air 13 benchmarked falls about 5% short of the median Geekbench OpenCL and Geekbench Metal results we achieved so far when benchmarking the M4 versions of Apple's Mac Mini and MacBook Pro 14." The rumored next-gen MacBook Air is expected to operate with a fan-less cooling system—press outlets reckon that the MacBook Pro's air-cooled operation puts it at a slight advantage (in benchmarks).

GIGABYTE Showcases Comprehensive AI Computing Portfolio at MWC 2025

GIGABYTE, a global leader in computing innovation and technology, will showcase its full-spectrum AI computing solutions that bridge development to deployment at MWC 2025, taking place from March 3-6.

"AI+" and "Enterprise-Reinvented" are two of the themes for MWC. As enterprises accelerate their digital transformation and intelligent upgrades, the transition of AI applications from experimental development to democratized commercial deployment has become a critical turning point in the industry. Continuing its "ACCEVOLUTION" initiative, GIGABYTE provides comprehensive infrastructure products and solutions, spanning from cloud-based supercomputing centers to edge computing terminals, aiming to accelerate the next evolution and empower industries to scale AI applications efficiently.

Moore Threads Claims 120% Gaming Performance Improvement for MTT S Series GPUs

Moore Threads has released version 290.100 of its MTT S Series Windows desktop driver; today's freshly published patch notes describe "performance and experience optimizations" for multiple modern game titles. Press coverage of the Chinese graphics card manufacturer's hardware portfolio has concentrated mostly on deficiencies, relative to Western offerings. Despite being the first desktop gaming graphics card to arrive with a PCI Express Gen 5 bus interface, Moore Threads' MTT S80 model has consistently struggled to keep up with mainstream competition. Most notably, its current 200 W TDP-rated flagship—packing 4096 "MUSA" cores—trailed behind AMD Radeon iGPUs, according to March 2024 benchmarks.

The latest Moore Threads driver improvements were tested internally, prior to public release. Patch notes claim that Infinity Nikki (DirectX 12-only) average frame rates "increased by more than 40%." Another DX12 title was benched—Hideo Kojima's Death Stranding: "average frame rate has increased by more than 50%." The largest upgrade was observed when playing A Plague Tale: Requiem; the MTT engineering team claims that average in-game frame rates climbed by more than 120%. We hope that independent outlets will publish results based on their own testing methodologies in the near future. Going back to September 2023, Moore Threads boasted about driver update 230.40.0.1 producing a 40% gaming performance uplift for MTT S80 and S70 cards. Outside the gaming sphere, Moore Threads has hinted that its MTT S80 GPU is a high achiever with DeepSeek's R1-Distill-Qwen-7B distilled model.

Radeon 8060S Early Reviews: RTX 4070 Laptop-Class Performance in an iGPU

Well, the wait is over and early reviews for AMD's Strix Halo APUs have finally dropped. For those who kept up with the leaks and rumors, the high-end RDNA 3.5 Radeon 8060S iGPU was repeatedly rumored to feature up to 40 CUs, allowing for raw performance that keeps up with several discrete-class mobile GPUs. Now that we have concrete information, it appears that the Strix Halo iGPU does indeed trade blows with mid-range mobile GPUs, which is an undeniably impressive feat for an integrated unit. Some of the fastest x86 iGPUs - the Arc 140V and the Radeon 890M - are left in the dust, although Apple's highest-end offerings are unsurprisingly well ahead.

Starting off with 3D Mark Time Spy, the 40-CU Radeon 8060S, housed in the 13-inch ROG Flow Z13, managed an impressive score of 10,200 points according to Notebookcheck. This puts the iGPU in close proximity to other RTX 4070-powered 14-inch gaming laptops, such as the Zephyrus G14 which managed to rake in around 10,300 points. Compared to the previous iteration of the ROG Flow Z13, which boasts a 65-watt RTX 4070, the Radeon 8060S-powered Z13 pulls ahead by around 5%. Laptops with more substantial power envelopes do race ahead significantly, such as the 140-watt RTX 4070 Laptop-powered Razer Blade 14 which managed over 13,000 points. In the Steel Nomad benchmark, however, the Radeon 8060S appears less impressive, trailing behind not only the RTX 4070 Laptop, but also systems with the RTX 4060 Laptop GPU (110 W).

Micron Unveils Its First PCIe Gen5 NVMe High-Performance Client SSD

Micron Technology, Inc., today announced the Micron 4600 PCIe Gen 5 NVMe SSD, an innovative client storage drive for OEMs that is designed to deliver exceptional performance and user experience for gamers, creators and professionals. Leveraging Micron G9 TLC NAND, the 4600 SSD is Micron's first Gen 5 client SSD and doubles the performance of its predecessor.

The Micron 4600 SSD showcases sequential read speeds of 14.5 GB/s and write speeds of 12.0 GB/s. These capabilities allow users to load a large language model (LLM) from the SSD to DRAM in less than one second, enhancing the user experience with AI PCs. For AI model loading, the 4600 SSD reduces load times by up to 62% compared to Gen 4 performance SSDs, ensuring rapid deployment of LLMs and other AI workloads. Additionally, the 4600 SSD provides up to 107% improved energy efficiency (MB/s per watt) compared to Gen 4 performance SSDs, enhancing battery life and overall system efficiency.
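The sub-second claim checks out on paper for a mid-sized model; a rough sketch, assuming a ~14 GB model (7B parameters at FP16) as an illustrative choice, not Micron's stated test case:

```python
# Back-of-the-envelope check on the "LLM to DRAM in under a second" claim.
# Model size is an assumption: a 7B-parameter model at FP16 is ~14 GB.
model_gb = 7e9 * 2 / 1e9          # 7B params x 2 bytes each = 14 GB
seq_read_gbps = 14.5              # Micron 4600 rated sequential read
load_seconds = model_gb / seq_read_gbps
print(f"~{load_seconds:.2f} s")   # -> ~0.97 s, just under one second
```

In practice, file-system overhead and non-ideal access patterns would push this figure up somewhat, so the claim is best read as a sequential-throughput ceiling.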

Osaka Scientists Unveil 'Living' Electrodes That Can Enhance Silicon Devices

Shrinking components was (and still is) the main way to boost the speed of all electronic devices; however, as devices get tinier, making them becomes trickier. A group of scientists from SANKEN (The Institute of Scientific and Industrial Research) at Osaka University has discovered another method to enhance performance: placing a special metal layer known as a metamaterial on top of a silicon base to make electrons move faster. This approach shows promise, but the tricky part is managing the metamaterial's structure so it can adapt to real-world needs.

To address this, the team looked into vanadium dioxide (VO₂). When heated, VO₂ changes from non-conductive to metallic, allowing it to carry electric charge like small adjustable electrodes. The researchers used this effect to create 'living' microelectrodes, which made silicon photodetectors better at spotting terahertz light. "We made a terahertz photodetector with VO₂ as a metamaterial. Using a precise method, we created a high-quality VO₂ layer on silicon. By controlling the temperature, we adjusted the size of the metallic regions—much larger than previously possible—which affected how the silicon detected terahertz light," says lead author Ai I. Osaka.

Intel Core Ultra 255H "Arrow Lake-H" Delivers 32% Single-Core Performance Improvement Over "Meteor Lake" Predecessor

Intel's Core Ultra 7 255H "Arrow Lake" processor has demonstrated impressive performance improvements in recent PassMark benchmarks, achieving a 32% higher single-core score compared to its "Meteor Lake" predecessor. The Arrow Lake-H chip recorded 4,631 points in single-threaded tests, significantly outpacing the Core Ultra 7 155H's 3,500 points while delivering a 15% overall improvement in CPU Mark ratings. The performance leap comes from Intel's architectural overhaul, implementing "Lion Cove" performance cores alongside "Skymont" efficiency cores on TSMC's N3B process node. This combination enables the 255H to achieve higher boost frequencies while maintaining the same core configuration as its predecessor—six P-cores, eight E-cores, and two Low Power Efficiency (LPE) cores.

Notable in this iteration is the absence of Hyper-Threading, resulting in 16 threads compared to the 155H's 22 threads. Arrow Lake-H maintains Intel's heterogeneous structure, incorporating up to eight Xe-LPG+ graphics cores derived from the Alchemist architecture. The neural processing unit (NPU) capabilities remain consistent with Meteor Lake, delivering 13 TOPS of INT8 performance. This positions the chip below Lunar Lake's 45 TOPS. Despite the performance improvements, market success will largely depend on system integrators' ability to deliver compelling devices at competitive price points, particularly as AMD's Strix Point platforms maintain strong positioning in the $1,000 range. The battle for laptop chip supremacy is poised to be a good one in the coming quarters, especially as more Arm-based entries force both Intel and AMD to compete harder.

UL Solutions Adds Support for DLSS 4 and DLSS Multi Frame Generation to the 3DMark NVIDIA DLSS Feature Test

We're excited to announce that in today's update to 3DMark, we're adding support for DLSS 4 and DLSS Multi Frame Generation to the NVIDIA DLSS feature test. The NVIDIA DLSS feature test and this update were developed in partnership with NVIDIA. The 3DMark NVIDIA DLSS feature test lets you compare the performance and image quality brought by enabling DLSS processing. If you have a new GeForce RTX 50 Series GPU, you'll also be able to compare performance with and without the full capabilities of DLSS 4.

You can choose to run the NVIDIA DLSS feature test using DLSS 4, DLSS 3 or DLSS 2. DLSS 4 includes the new DLSS Multi Frame Generation feature, and you can choose between several image quality modes—Quality, Balanced, Performance, Ultra Performance and DLAA. These modes are designed for different resolutions, from Full HD up to 8K. DLSS Multi Frame Generation uses AI to boost frame rates with up to three additional frames generated per traditionally rendered frame. In the 3DMark NVIDIA DLSS feature test, you are able to choose between 2x, 3x and 4x Frame Generation settings if you have an NVIDIA GeForce RTX 50 series GPU.

AMD Radeon 9070 XT Rumored to Outpace RTX 5070 Ti by Almost 15%

It would be fair to say that the GeForce RTX 5080 has been quite disappointing, being roughly 16% faster in gaming than the RTX 4080 Super. Unsurprisingly, this gives AMD a lot of opportunity to offer excellent price-to-performance with its upcoming RDNA 4 GPUs, considering that the RTX 5070 and RTX 5070 Ti aren't really expected to pull off any miracles. According to a recent tidbit shared by the renowned leaker Moore's Law is Dead, the Radeon RX 9070 XT is expected to be around 3% faster than the RTX 4080, if AMD's internal performance goals are anything to go by. MLID also notes that RDNA 4's performance is improving by roughly 1% each month, which makes it quite likely that the RDNA 4 cards will exceed these targets.

If it does turn out that way, the Radeon RX 9070 XT, according to MLID, should be roughly 15% faster than its competitor from the Green Camp, the RTX 5070 Ti, and roughly match the RTX 4080 Super in gaming performance. The Radeon RX 9070, on the other hand, is expected to be around 12% faster than the RTX 5070. Of course, these performance improvements are limited to rasterization, and when ray tracing is brought into the picture, the gains are expected to be substantially more modest, as per tradition. Citing our data for Cyberpunk 4K with RT, MLID stated that his sources indicate that the RX 9070 XT falls somewhere between the RTX 4070 Ti Super and RTX 3090 Ti, whereas the RX 9070 should likely trade blows with the RTX 4070 Super. Considering AMD's track record with ray tracing, this sure does sound quite enticing.

AMD Believes EPYC CPUs & Instinct GPUs Will Accelerate AI Advancements

If you're looking for innovative use of AI technology, look to the cloud. Gartner reports, "73% of respondents to the 2024 Gartner CIO and Tech Executive Survey have increased funding for AI." And IDC says that AI: "will have a cumulative global economic impact of $19.9 trillion through 2030." But end users aren't running most of those AI workloads on their own hardware. Instead, they are largely relying on cloud service providers and large technology companies to provide the infrastructure for their AI efforts. This approach makes sense since most organizations are already heavily reliant on the cloud. According to O'Reilly, more than 90% of companies are using public cloud services. And they aren't moving just a few workloads to the cloud. That same report shows a 175% growth in cloud-native interest, indicating that companies are committing heavily to the cloud.

As a result of this demand for infrastructure to power AI initiatives, cloud service providers are finding it necessary to rapidly scale up their data centers. IDC predicts: "the surging demand for AI workloads will lead to a significant increase in datacenter capacity, energy consumption, and carbon emissions, with AI datacenter capacity projected to have a compound annual growth rate (CAGR) of 40.5% through 2027." While this surge creates massive opportunities for service providers, it also introduces some challenges. Providing the computing power necessary to support AI initiatives at scale, reliably and cost-effectively, is difficult. Many providers have found that deploying AMD EPYC CPUs and Instinct GPUs can help them overcome those challenges. Here's a quick look at three service providers who are using AMD chips to accelerate AI advancements.

AMD Teases Ryzen AI Max+ 395 "Strix Halo" APU 1080p Gaming Performance, Claims 68% Faster than RTX 4070M

AMD has just published its "How to Sell" Ryzen AI MAX series guide—several news outlets have pored over the "claimed" gaming performance charts contained within this two-page document. Team Red appears to be in a boastful mood—their 1080p benchmark results reveal compelling numbers, as produced by their flagship Zen 5 "Strix Halo" processor (baseline 55 W TDP). According to Team Red's marketing guidelines, the Ryzen AI Max+ 395 APU "competes with a GeForce RTX 4070 Mobile GPU at similar TDP and form factor." The first-party comparison points to their Radeon 8060S integrated graphics solution being up to 68% faster—in modern gaming environments at 1080p settings—than the competing Team Green dedicated laptop-oriented GPU, limited to 65 W TGP due to form factor restrictions. Overall, the AMD test unit performs 23.2% better on average (per Wccftech's calculations).

According to the document, AMD's reference system was lined up against an ASUS ROG Flow Z13 (2023) gaming laptop specced with an Intel Core i9-13900H processor and a GeForce RTX 4070 mobile graphics card. The Ryzen AI Max+ 395's "massive iGPU" can unleash the full force of forty RDNA 3.5 compute units, paired with up to 96 GB of unified on-board memory (from a total pool of 128 GB). Non-gaming benchmarks place the flagship Team Red processor above Intel Core Ultra 9 288V and Apple M4 Pro (12-core) CPUs—as always, it is best to wait for verification from independent evaluators. That said, the "Strix Halo" APU family has generated a lot of excitement—even going back to early leaks—and the latest marketed performance figures could drum up further interest.

Ubisoft Unveils Assassin's Creed Shadows Recommended PC Specs

Hi everyone, Assassin's Creed Shadows is launching March 20, inviting you to experience the intertwined stories of Naoe, an adept shinobi Assassin, and Yasuke, a powerful African samurai. Today, you can pre-order the game on console and PC, and read up on Shadows' upcoming expansion, Claws of Awaji, which brings 10 hours of additional content free with your pre-order.

For those of you playing on PC, we've got all of Assassin's Creed Shadows' recommended PC specs listed in this article. Assassin's Creed Shadows will support ray-traced global illumination and reflections, and will feature an in-game benchmark tool for performance analysis, ultra-wide resolutions, an uncapped framerate, and more. Check out the full specs chart below.

AMD Ryzen 9 9950X3D & 9900X3D Gaming Performance Akin to Ryzen 7 9800X3D

AMD's Ryzen 9 9950X3D and Ryzen 9 9900X3D "Zen 5" processors are scheduled for launch around March, with many a hardcore PC enthusiast salivating at the prospect of an increase in core counts over already-released hardware—the ever-popular Ryzen 7 9800X3D CPU makes do with eight cores (and sixteen threads). Under normal circumstances, higher core counts do not provide a massive advantage in gaming applications—over the years, Team Red's 8-core 3D V-Cache-equipped models have reigned supreme in this so-called "sweet spot." Many have wondered whether the new-gen 12 and 16-core SKU siblings had any chance of stealing some gaming performance thunder—a recently published VideoGamer article provides a definitive answer for the "Granite Ridge" generation.

The publication managed to extract key quotes from Martijn Boonstra—a Team Red product and business development manager—providing a slightly tepid outlook for the incoming Ryzen 9 9950X3D and 9900X3D models. The company executive stated: "(our) new chips will provide similar overall gaming performance to the Ryzen 7 9800X3D. There will be some games that perform a bit better—if the game engine utilizes more cores and threads—and some games will perform a little worse (if the game engine favors a single CCD configuration), but on the whole, the experience is comparable." Boonstra did not reveal any details regarding forthcoming prices—the Ryzen 7 9800X3D has an MSRP of $479 (if you are lucky enough to find one)—but he hinted that finalized digits will be announced "closer to launch." He signed off with standard marketing spiel: "Ryzen 9000X3D Series desktop processors are perfect for gamers and content creators alike...whether you are already on the AM5 platform, on AM4 or another platform, these products are sure to impress."

NVIDIA GeForce RTX 5090 3DMark Performance Reveals Impressive Improvements

The RTX 50-series gaming GPUs have the gaming community divided. While some appreciate the DLSS 4 and MFG technologies driving impressive improvements in FPS through AI wizardry, others are left disappointed by the seemingly poor improvements in raw performance. For instance, when DLSS and MFG are taken out of the equation, the RTX 5090, RTX 5080, and RTX 5070 are around 33%, 15%, and 20% faster than their predecessors respectively in gaming performance. That said, VideoCardz has tapped into its sources, and revealed the 3DMark scores for the RTX 5090 GPU, and the results certainly do appear to exceed expectations.

In the non-ray traced Steel Nomad test at 4K, the RTX 5090 managed to score around 14,133 points, putting it roughly 53% ahead of its predecessor. In the Port Royal test, which does utilize ray tracing, the RTX 5090 raked in 36,667 points - a 40% improvement over the RTX 4090. The results are much the same in the older Time Spy and Fire Strike tests as well, indicating roughly a 31% and 38% jump in performance respectively. Moreover, according to the benchmarks, the RTX 5090 appears to be roughly twice as powerful as the RTX 4080 Super. Of course, synthetic benchmarks do not entirely dictate gaming performance, and VideoCardz clearly mentions that gaming performance (without MFG) will witness a substantially more modest improvement. There is no denying that Blackwell's vastly superior memory bandwidth is helping a lot in the synthetic tests, with the 33% extra shaders doing the rest of the work.

NVIDIA RTX 5090 Geekbench Leak: OpenCL and Vulkan Tests Reveal True Performance Uplifts

The RTX 50-series fever continues to rage on, with independent reviews for the RTX 5080 and RTX 5090 dropping towards the end of this month. That does not stop benchmarks from leaking out, unsurprisingly, and a recent lineup of Geekbench listings have revealed the raw performance uplifts that can be expected from NVIDIA's next generation GeForce flagship. A sizeable chunk of the tech community was certainly rather disappointed with NVIDIA's reliance on AI-powered frame generation for much of the claimed improvements in gaming. Now, it appears we can finally figure out how much raw improvement NVIDIA was able to squeeze out with consumer Blackwell, and the numbers, for the most part, appear decent enough.

Starting off with the OpenCL tests, the highest score that we have seen so far from the RTX 5090 puts it at around 367,000 points, which marks an acceptable jump from the RTX 4090, which manages around 317,000 points according to Geekbench's official average data. Of course, there are plenty of individual cards that easily exceed the average scores, which must be kept in mind. That said, we are not aware of the details of the RTX 5090 that was tested, so pitting it against average scores hardly seems fair. Moving to Vulkan, the performance uplift is much more satisfying, with the RTX 5090 managing a minimum of 331,000 points and a maximum of around 360,000 points, compared to the RTX 4090's 262,000 - a sizeable 37% improvement at the highest end. Once again, we are comparing the best results posted so far against last year's averages, so expect slightly more modest gains in the real world. Once more reviews start appearing after the embargo lifts, the improvement figures should become much more reliable.
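The quoted uplift is easy to reproduce from the leaked scores; a minimal sketch:

```python
# Reproducing the quoted Vulkan uplift from the leaked Geekbench numbers.
rtx_5090_best = 360_000    # highest leaked RTX 5090 Vulkan score (approx.)
rtx_4090_avg = 262_000     # Geekbench official average for the RTX 4090
uplift_pct = (rtx_5090_best / rtx_4090_avg - 1) * 100
print(f"{uplift_pct:.0f}%")   # -> 37%
```

The same best-versus-average caveat applies: dividing a cherry-picked leak by a population average overstates the typical card-to-card gain.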

NVIDIA GeForce RTX 5090 Performance in Cyberpunk 2077 With and Without DLSS 4 Detailed

It is no secret that NVIDIA's RTX 50-series launch was welcomed with a mixed reception. On one hand, DLSS 4 with Multi-Frame Generation has allowed for obscene jumps in performance, much to the dismay of purists who would rather do away with AI-powered wizardry. A recent YouTube video has detailed what the RTX 5090 is capable of in Cyberpunk 2077 with Path Tracing at 4K, both with and without the controversial AI features. With DLSS set to performance mode and 4x frame generation (three generated frames), the RTX 5090 managed around 280 FPS. Pretty good, especially when considering the perfectly acceptable latency of around 52 ms, albeit with occasional spikes.

Turning DLSS to quality, the frame rate drops to around 230 FPS, with latency continuing to hover around 50 ms. Interestingly, with frame generation set to 3x or even 2x, the difference in latency was borderline negligible between the two, right around 44 ms or so. However, the FPS takes a massive nosedive when frame generation is turned off entirely. With DLSS set to quality mode and FG turned off, the RTX 5090 barely managed around 70 FPS in the game. Taking things a step further, the presenter turned off DLSS as well, resulting in the RTX 5090 struggling to hit 30 FPS, with latency spiking to 70 ms. Clearly, DLSS 4 and MFG allow for an incredible uplift in performance with minimal artifacting in Cyberpunk 2077, at least unless one really goes looking for it.
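The reported figures are roughly consistent with MFG's frame-multiplier arithmetic, sketched below (this ignores generation overhead and the fact that the ~280 FPS run used the performance preset while the ~70 FPS run used quality):

```python
# MFG arithmetic: at "4x", each traditionally rendered frame ships with
# 3 AI-generated frames, so displayed FPS ~= rendered FPS x 4.
rendered_fps = 70          # RTX 5090, DLSS quality, frame generation off
mfg_factor = 4             # 4x Multi Frame Generation (3 generated frames)
displayed_fps = rendered_fps * mfg_factor
print(displayed_fps)       # -> 280, in line with the ~280 FPS reported
```

This is also why latency doesn't scale with the multiplier: input is still sampled at the rendered frame rate, not the displayed one.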

Nintendo Switch 2 Docked and Handheld Performance Revealed By Tipster

It is well known that the Switch 2 was never meant to be a performance beast. Nintendo's focus has always been on its ecosystem rather than raw performance, and that will continue to be the case. As such, the Switch 2 is widely expected to sport an NVIDIA Tegra SoC paired with 12 GB of LPDDR5 system memory and an Ampere-based GPU. Now, a fresh leak has detailed the docked and handheld mode performance that can be expected from the widely anticipated Switch successor, and the numbers fall right around what was initially expected.

The leak, sourced from a Nintendo forum, reveals that in docked mode, the Nintendo Switch 2's GPU will be clocked at 1000 MHz, up from 768 MHz for the soon-to-be previous-generation Switch, allowing for 3.1 TFLOPS of performance. In handheld mode, unsurprisingly, the GPU clock will be limited to 561 MHz, allowing for 1.71 TFLOPS of raw performance. These numbers are far from impressive for 2025, although Nintendo will likely make up for the lack of raw horsepower with upscaling technologies similar to DLSS, allowing for a vastly better experience than its otherwise unimpressive hardware could deliver on its own.
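The leaked TFLOPS figures line up with the GPU configuration circulating in earlier rumors. A quick check, assuming the rumored 1536 CUDA cores (an assumption from prior leaks, not confirmed by this report):

```python
# FP32 throughput = cores x 2 FLOPs per clock (one FMA) x clock speed.
cuda_cores = 1536              # rumored Ampere core count (assumption)
flops_per_core_per_clock = 2   # an FMA counts as two FP32 operations

def tflops(clock_mhz: int) -> float:
    return cuda_cores * flops_per_core_per_clock * clock_mhz * 1e6 / 1e12

print(f"docked:   {tflops(1000):.2f} TFLOPS")  # -> 3.07, reported as 3.1
print(f"handheld: {tflops(561):.2f} TFLOPS")   # -> 1.72, reported as 1.71
```

Both results round to the leaked figures, which lends the clock numbers some internal consistency without confirming them.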

Gigabyte Brix Extreme Mini PC Launched With Ryzen 7 8840U "Hawk Point" APU

The list of mini PCs available on the market has grown quite a bit in the past few weeks, with a bunch of such systems getting unveiled at CES 2025. Now, Gigabyte clearly does not wish to be left out of the party either, and has unveiled its Brix Extreme mini PC powered by a last-gen, but decently powerful AMD "Hawk Point" APU and a plethora of connectivity options in a compact package.

The system, as mentioned, boasts the 28-watt Ryzen 7 PRO 8840U APU, which sports 8 Zen 4 cores and 16 threads. Performance should be identical to its non-PRO counterpart, which should put it roughly in the same class as the Intel Core Ultra 256V "Lunar Lake" CPU. The APU is paired with up to 64 GB of DDR5-5600 memory. Dual M.2 2280 slots take care of storage requirements, both of which are user-accessible.

First Taste of Intel Arc B570: OpenCL Benchmark Reports Good Price-to-Performance

In the past few weeks, all eyes have been on NVIDIA's and AMD's next-gen GPU offerings, and rightly so. Now, it's about time to turn our attention to what appears to be the third major player in the GPU industry - Intel. This is, of course, all thanks to the Blue Camp's wildly successful Arc B580 launch, which propelled the beleaguered chip giant to the favorable side of the GPU price-to-performance line.

Now, it appears that a fresh leak has revealed how its soon-to-be sibling, the Arc B570, is about to perform. The leaked performance data, courtesy of Geekbench OpenCL, reveals that the Arc B570 is right around 11% slower than the Arc B580 in the synthetic OpenCL benchmark, which makes complete sense, because the card is also expected to be around 12% cheaper than its more powerful sibling, as noted by Wccftech. With a score of 86,716, the Arc B570 is slightly ahead of the RX 7600 XT, which manages around 84,000 points, and well behind the RTX 4060, which rakes in just above 100,000.
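The "11% slower" figure implies a B580 score in the high 90,000s; a quick sketch of that inference (the B580 score here is derived, not a reported result):

```python
# If the B570's 86,716 points sit 11% below the B580, the implied B580 score:
b570_score = 86_716
b580_score = b570_score / (1 - 0.11)
print(f"{b580_score:,.0f}")   # -> 97,434 (implied, not measured)

# With the B570 also ~12% cheaper, price-to-performance is roughly a wash:
# score ratio 0.89 / price ratio 0.88 ~= 1.01.
```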

Gigabyte Redefines Intel and AMD B800 Series Motherboards Performance with AI Technology at CES 2025

GIGABYTE, the world's leading computer brand, unveils the new generation of Intel B860 and AMD B850 series motherboards at CES 2025. These new series are designed to unleash the performance of the latest Intel Core Ultra and AMD Ryzen processors by leveraging AI-enhanced technology and user-friendly design for a seamless gaming and PC-building experience. Equipped with all-digital power and enhanced thermal design, GIGABYTE B800 series motherboards are the gateway for mainstream PC gamers.

GIGABYTE achieved the remarkable milestone of claiming the highest market share on X870 series motherboards due to fully supporting AMD Ryzen 7000 and 9000 series X3D processors. The new B800 series motherboards also adopt ultra-durable, high-end components, and the revolutionary AI suite, D5 Bionics Corsa, integrates software, hardware, and firmware to boost DDR5 memory performance up to 8600 MT/s on AMD B850 models and 9466 MT/s on Intel B860 motherboards. AI SNATCH is an exclusive AI-based software tool for enhancing DDR5 performance with just a few clicks. Meanwhile, the AI-Driven PCB Design ensures low signal reflection for peak performance across multiple layers through AI simulation. Plus, HyperTune BIOS integrates AI-driven optimizations to fine-tune the Memory Reference Code on Intel B860 series motherboards for high-demand gaming and multitasking. Specially built for AMD Ryzen 9000 series X3D processors, GIGABYTE applies X3D Turbo mode on AMD B850 series motherboards, adjusting core counts to boost gaming performance.

Razer Introduces Redesigned Blade 16 Gaming Laptop With NVIDIA GeForce RTX 50 series laptop GPUs and AMD Ryzen AI 9 processors

Razer, the leading global lifestyle brand for gamers, today announced the new Razer Blade 16, redesigned to be thinner and more mobile for on-the-go gamers but packed with the performance expected from a Razer Blade. Featuring the all-new NVIDIA GeForce RTX 50 series laptop GPUs and, for the first time ever, AMD Ryzen AI 9 processors, the Blade 16 offers incredible levels of raw power and maximum AI performance. Equipped with a fast and vibrant QHD+ 240 Hz OLED display, a new keyboard and more speakers, the Blade 16 is more fun to play on than ever before.

Travis Furst, Head of the Notebook & Accessories Division at Razer, stated, "The new Razer Blade 16 is a game-changer, blending ultra-portable design with powerhouse performance. It's tailored for gamers who demand the best in mobility without compromising on power or features, truly embodying what the future of gaming laptops looks like."

Gigabyte Expands Its Accelerated Computing Portfolio with New Servers Using the NVIDIA HGX B200 Platform

Giga Computing, a subsidiary of GIGABYTE and an industry leader in generative AI servers and advanced cooling technologies, announced new GIGABYTE G893 series servers using the NVIDIA HGX B200 platform. The launch of these flagship 8U air-cooled servers, the G893-SD1-AAX5 and G893-ZD1-AAX5, signifies a new architecture and platform change for GIGABYTE in the demanding world of high-performance computing and AI, setting new standards for speed, scalability, and versatility.

These servers join GIGABYTE's accelerated computing portfolio alongside the NVIDIA GB200 NVL72 platform, which is a rack-scale design that connects 36 NVIDIA Grace CPUs and 72 NVIDIA Blackwell GPUs. At CES 2025 (January 7-10), the GIGABYTE booth will display the NVIDIA GB200 NVL72, and attendees can engage in discussions about the benefits of GIGABYTE platforms with the NVIDIA Blackwell architecture.

Digital Enhancement Unveils First Commercial RPU (Radio Processing Unit), Marking a Leap in Wireless Performance for Consumer Electronics

Digital Enhancement (Hangzhou) Co., Ltd (hereafter referred to as "Digital Enhancement") is set to unveil the world's first commercial-grade Radio Processing Unit (RPU) designed for Wi-Fi wireless access at the CES International Consumer Electronics Show in Las Vegas, USA.

This groundbreaking RPU and solution leverage Digital Enhancement's innovative "Digital RF" technology, delivering a 10x performance boost in Wi-Fi high-speed coverage. The innovation promises to redefine the wireless connectivity experience for consumer electronics, paving the way for a new era of seamless and high-performance wireless connections.

AMD Strix Halo Radeon 8050S and 8060S iGPU Performance Look Promising - And Confusing

AMD fans are undoubtedly on their toes to witness the performance improvements that Strix Halo is ready to bring forth. Unlike Strix Point, which utilizes a combination of Zen 5c and full-fat Zen 5 cores, Strix Halo will do away with the small cores for a Zen 5 "only" setup, allowing for substantially better multicore performance. Moreover, it is also widely expected that Strix Halo will boast chunky iGPUs that will bring the heat to entry-level and even some mid-range mobile GPUs, allowing Strix Halo systems to not require discrete graphics at all, with a prime example being the upcoming ROG Flow Z13 tablet.

As per recent reports, the upcoming Ryzen AI Max+ Pro 395 APU will sport an RDNA 3.5-based iGPU with a whopping 40 CUs, and will likely be branded as the Radeon 8060S. In a leaked Geekbench Vulkan benchmark, the Radeon 8060S managed to outpace the RTX 4060 Laptop dGPU in performance. However, according to yet another leaked benchmark, this time PassMark, the Radeon 8060S and the 32-CU 8050S scored 16,454 and 16,663 points respectively - and no, that is not a typo. The 8060S with 40 CUs is marginally slower than the 8050S with 32 CUs, clearly indicating that the numbers are far from final. That said, performance in this range puts the Strix Halo APUs well below the RTX 4070 Laptop GPU, and roughly on par with the RTX 3080 Laptop. Not bad for an iGPU, although it is almost certain that actual performance of the retail units will be higher, judging by the abnormally small delta between the 8050S and the 8060S.