News Posts matching #Performance

NVIDIA RTX 5090 Geekbench Leak: OpenCL and Vulkan Tests Reveal True Performance Uplifts

The RTX 50-series fever continues to rage on, with independent reviews for the RTX 5080 and RTX 5090 dropping towards the end of this month. That does not stop benchmarks from leaking out, unsurprisingly, and a recent lineup of Geekbench listings has revealed the raw performance uplifts that can be expected from NVIDIA's next-generation GeForce flagship. A sizeable chunk of the tech community was certainly rather disappointed with NVIDIA's reliance on AI-powered frame generation for much of the claimed improvements in gaming. Now, it appears we can finally gauge how much raw improvement NVIDIA was able to squeeze out with consumer Blackwell, and the numbers, for the most part, appear decent enough.

Starting off with the OpenCL tests, the highest score that we have seen so far from the RTX 5090 puts it around 367,000 points, which marks an acceptable jump from the RTX 4090, which manages around 317,000 points according to Geekbench's official average data - roughly a 16% uplift. Of course, there is a plethora of cards that can easily exceed the average scores, which must be kept in mind. That said, we do not know the details of the specific RTX 5090 that was tested, so pitting it against average scores does not seem entirely fair. Moving to Vulkan, the performance uplift is much more satisfying, with the RTX 5090 managing a minimum of 331,000 points and a maximum of around 360,000 points, compared to the RTX 4090's 262,000 - a sizeable 37% improvement at the highest end. Once again, we are comparing the best results posted so far against last year's averages, so expect slightly more modest gains in the real world. Once more reviews start appearing after the embargo lifts, the improvement figures should become much more reliable.
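
For those curious how these percentages fall out of the raw numbers, here is a quick sketch using the approximate scores cited above:

```python
# Rough uplift math from the Geekbench figures cited above. Scores are
# approximate: the best leaked RTX 5090 runs vs. RTX 4090 averages.
def uplift(new_score: float, old_score: float) -> float:
    """Percentage improvement of new_score over old_score."""
    return (new_score / old_score - 1) * 100

print(f"OpenCL:         {uplift(367_000, 317_000):.0f}%")  # ~16%
print(f"Vulkan (best):  {uplift(360_000, 262_000):.0f}%")  # ~37%
print(f"Vulkan (worst): {uplift(331_000, 262_000):.0f}%")  # ~26%
```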

NVIDIA GeForce RTX 5090 Performance in Cyberpunk 2077 With and Without DLSS 4 Detailed

It is no secret that NVIDIA's RTX 50-series launch was met with a mixed reception. DLSS 4 with Multi-Frame Generation has allowed for obscene jumps in performance, much to the dismay of purists who would rather do away with AI-powered wizardry. A recent YouTube video has detailed what the RTX 5090 is capable of in Cyberpunk 2077 with Path Tracing at 4K, both with and without the controversial AI features. With DLSS set to performance mode and 4x frame generation (three generated frames per rendered frame), the RTX 5090 managed around 280 FPS. Pretty good, especially considering the perfectly acceptable latency of around 52 ms, albeit with occasional spikes.

Turning DLSS down to quality mode, the frame rate drops to around 230 FPS, with latency continuing to hover around 50 ms. Interestingly, with frame generation set to 3x or even 2x, the difference in latency between the two was borderline negligible, at right around 44 ms or so. However, the FPS takes a massive nosedive when frame generation is turned off entirely. With DLSS set to quality mode and FG turned off, the RTX 5090 barely managed around 70 FPS in the game. Taking things a step further, the presenter turned off DLSS as well, resulting in the RTX 5090 struggling to hit 30 FPS, with latency spiking to 70 ms. Clearly, DLSS 4 and MFG allow for an incredible uplift in performance with minimal artefacting, at least in Cyberpunk 2077, unless one really goes looking for it.
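
A rough way to interpret these figures: dividing the displayed frame rate by the frame-generation factor approximates how many frames the GPU natively renders each second. This simplification ignores frame-generation overhead, so treat it as indicative only, but it lines up reasonably with the ~70 FPS measured with FG off:

```python
# Approximate natively rendered FPS behind an MFG-boosted figure.
# With 4x MFG, one frame in four is rendered; the rest are generated.
# Ignores frame-generation overhead, so treat results as estimates.
def rendered_fps(displayed_fps: float, mfg_factor: int) -> float:
    return displayed_fps / mfg_factor

print(rendered_fps(280, 4))  # 70.0 - performance mode, 4x MFG
print(rendered_fps(230, 4))  # 57.5 - quality mode, 4x MFG
```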

Nintendo Switch 2 Docked and Handheld Performance Revealed By Tipster

It is no secret that the Switch 2 was never going to be a performance beast. Nintendo's focus has always been on its ecosystem rather than raw performance, and that will continue to be the case. As such, the Switch 2 is widely expected to sport an NVIDIA Tegra SoC paired with 12 GB of LPDDR5 system memory and an Ampere-based GPU. Now, a fresh leak has detailed the docked and handheld performance that can be expected from the widely anticipated Switch successor, and the numbers fall right around what was initially expected.

The leak, sourced from a Nintendo forum, reveals that in docked mode, the Nintendo Switch 2's GPU will be clocked at 1000 MHz, up from 768 MHz for the soon-to-be previous-generation Switch, allowing for 3.1 TFLOPS of performance. In handheld mode, unsurprisingly, the GPU clock will be limited to 561 MHz, allowing for 1.71 TFLOPS of raw performance. These numbers are far from impressive for 2025, although Nintendo will likely make up for the lack of raw horsepower with upscaling technologies similar to DLSS, allowing for a vastly better experience than what its otherwise unimpressive hardware could have afforded.
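
Those TFLOPS figures are consistent with the 1,536 CUDA cores widely rumored for the Tegra T239 (an assumption on our part; Nintendo has confirmed nothing), since peak FP32 throughput works out to two FLOPs per core per clock:

```python
# Peak FP32 throughput = 2 FLOPs/clock x CUDA cores x clock speed.
# 1,536 cores is the rumored Tegra T239 configuration, not confirmed.
RUMORED_CORES = 1536

def fp32_tflops(clock_mhz: float, cores: int = RUMORED_CORES) -> float:
    return 2 * cores * clock_mhz * 1e6 / 1e12

print(f"Docked (1000 MHz):  {fp32_tflops(1000):.2f} TFLOPS")  # ~3.07
print(f"Handheld (561 MHz): {fp32_tflops(561):.2f} TFLOPS")   # ~1.72
```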

Gigabyte Brix Extreme Mini PC Launched With Ryzen 7 8840U "Hawk Point" APU

The list of mini PCs available on the market has grown quite a bit in the past few weeks, with a bunch of such systems getting unveiled at CES 2025. Gigabyte clearly does not wish to be left out of the party either, and has unveiled its Brix Extreme mini PC, powered by a last-gen but decently powerful AMD "Hawk Point" APU and sporting a plethora of connectivity options in a compact package.

The system, as mentioned, boasts the 28-watt Ryzen 7 PRO 8840U APU, which sports 8 Zen 4 cores and 16 threads. Performance should be identical to its non-PRO counterpart, which should put it roughly in the same class as the Intel Core Ultra 7 256V "Lunar Lake" CPU. The APU is paired with up to 64 GB of DDR5-5600 memory. Dual M.2 2280 slots, both user-accessible, take care of storage requirements.

First Taste of Intel Arc B570: OpenCL Benchmark Reports Good Price-to-Performance

In the past few weeks, all eyes have been on NVIDIA's and AMD's next-gen GPU offerings, and rightly so. Now, it's about time to turn our attention to what appears to be the third major player in the GPU industry - Intel. This is, of course, all thanks to the Blue Camp's wildly successful Arc B580 launch, which propelled the beleaguered chip giant to the favorable side of the GPU price-to-performance line.

Now, it appears that a fresh leak has revealed how its soon-to-be sibling, the Arc B570, is about to perform. The leaked performance data, courtesy of Geekbench OpenCL, reveals that the Arc B570 is right around 11% slower than the Arc B580 in the synthetic OpenCL benchmark, which makes complete sense, because the card is also expected to be around 12% cheaper than its more powerful sibling, as noted by Wccftech. With a score of 86,716, the Arc B570 is well ahead of the RX 7600 XT, which manages around 84,000 points, and well behind the RTX 4060, which rakes in just above 100,000.
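
Assuming the announced US pricing of $219 for the B570 and $249 for the B580 (pricing is not part of the Geekbench leak itself), the points-per-dollar math illustrates why a roughly 11% slower card at roughly 12% less money keeps the value proposition intact:

```python
# Points-per-dollar using the leaked B570 score and a B580 score
# back-calculated from the ~11% gap. MSRPs are the announced US prices.
b570_score = 86_716
b580_score = b570_score / (1 - 0.11)  # ~97,400 implied

print(f"B580 implied score: {b580_score:,.0f}")
print(f"B570 value: {b570_score / 219:.0f} pts/$")  # ~396
print(f"B580 value: {b580_score / 249:.0f} pts/$")  # ~391
```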

Gigabyte Redefines Intel and AMD B800 Series Motherboards Performance with AI Technology at CES 2025

GIGABYTE, the world's leading computer brand, unveils the new generation of Intel B860 and AMD B850 series motherboards at CES 2025. These new series are designed to unleash the performance of the latest Intel Core Ultra and AMD Ryzen processors by leveraging AI-enhanced technology and user-friendly design for a seamless gaming and PC-building experience. Equipped with all-digital power and enhanced thermal design, GIGABYTE B800 series motherboards are the gateway for mainstream PC gamers.

GIGABYTE achieved the remarkable milestone of claiming the highest market share on X870 series motherboards thanks to fully supporting AMD Ryzen 7000 and 9000 series X3D processors. The new B800 series motherboards also adopt ultra-durable, high-end components, and the revolutionary AI suite, D5 Bionic Corsa, integrates software, hardware, and firmware to boost DDR5 memory performance up to 8600 MT/s on AMD B850 models and 9466 MT/s on Intel B860 motherboards. AI SNATCH is an exclusive AI-based software for enhancing DDR5 performance with just a few clicks. Meanwhile, the AI-Driven PCB Design ensures low signal reflection for peak performance across multiple layers through AI simulation. Plus, HyperTune BIOS integrates AI-driven optimizations to fine-tune the Memory Reference Code on Intel B860 series motherboards for high-demand gaming and multitasking. Specially built for AMD Ryzen 9000 series X3D processors, GIGABYTE applies X3D Turbo mode on AMD B850 series motherboards, adjusting core counts to boost gaming performance.

Razer Introduces Redesigned Blade 16 Gaming Laptop With NVIDIA GeForce RTX 50 series laptop GPUs and AMD Ryzen AI 9 processors

Razer, the leading global lifestyle brand for gamers, today announced the new Razer Blade 16, redesigned to be thinner and more mobile for on-the-go gamers but packed with the performance expected from a Razer Blade. Featuring the all-new NVIDIA GeForce RTX 50 series laptop GPUs and, for the first time ever, AMD Ryzen AI 9 processors, the Blade 16 offers incredible levels of raw power and maximum AI performance. Equipped with a fast and vibrant QHD+ 240 Hz OLED display, a new keyboard and more speakers, the Blade 16 is more fun to play on than ever before.

Travis Furst, Head of the Notebook & Accessories Division at Razer, stated, "The new Razer Blade 16 is a game-changer, blending ultra-portable design with powerhouse performance. It's tailored for gamers who demand the best in mobility without compromising on power or features, truly embodying what the future of gaming laptops looks like."

Gigabyte Expands Its Accelerated Computing Portfolio with New Servers Using the NVIDIA HGX B200 Platform

Giga Computing, a subsidiary of GIGABYTE and an industry leader in generative AI servers and advanced cooling technologies, announced new GIGABYTE G893 series servers using the NVIDIA HGX B200 platform. The launch of these flagship 8U air-cooled servers, the G893-SD1-AAX5 and G893-ZD1-AAX5, signifies a new architecture and platform change for GIGABYTE in the demanding world of high-performance computing and AI, setting new standards for speed, scalability, and versatility.

These servers join GIGABYTE's accelerated computing portfolio alongside the NVIDIA GB200 NVL72 platform, which is a rack-scale design that connects 36 NVIDIA Grace CPUs and 72 NVIDIA Blackwell GPUs. At CES 2025 (January 7-10), the GIGABYTE booth will display the NVIDIA GB200 NVL72, and attendees can engage in discussions about the benefits of GIGABYTE platforms with the NVIDIA Blackwell architecture.

Digital Enhancement Unveils First Commercial RPU (Radio Processing Unit), Marking a Leap in Wireless Performance for Consumer Electronics

Digital Enhancement (Hangzhou) Co., Ltd (hereafter referred to as "Digital Enhancement") is set to unveil the world's first commercial-grade Radio Processing Unit (RPU) designed for Wi-Fi wireless access at CES in Las Vegas, USA.

This groundbreaking RPU and solution leverage Digital Enhancement's innovative "Digital RF" technology, delivering a 10x performance boost in Wi-Fi high-speed coverage. The innovation promises to redefine the wireless connectivity experience for consumer electronics, paving the way for a new era of seamless and high-performance wireless connections.

AMD Strix Halo Radeon 8050S and 8060S iGPU Performance Look Promising - And Confusing

AMD fans are undoubtedly on their toes waiting to witness the performance improvements Strix Halo is ready to bring forth. Unlike Strix Point, which utilizes a combination of Zen 5c and full-fat Zen 5 cores, Strix Halo will do away with the small cores for a Zen 5-only setup, allowing for substantially better multicore performance. Moreover, it is widely expected that Strix Halo will boast chunky iGPUs that bring the heat to entry-level and even some mid-range mobile GPUs, allowing Strix Halo systems to forgo discrete graphics entirely, with a prime example being the upcoming ROG Flow Z13 tablet.

As per recent reports, the upcoming Ryzen AI Max+ Pro 395 APU will sport an RDNA 3.5-based iGPU with a whopping 40 CUs, likely branded as the Radeon 8060S. In a leaked Geekbench Vulkan benchmark, the Radeon 8060S managed to outpace the RTX 4060 Laptop dGPU. However, according to yet another leaked benchmark, this time from PassMark, the Radeon 8060S and the 32-CU 8050S scored 16,454 and 16,663 respectively - and no, that is not a typo. The 8060S with 40 CUs is marginally slower than the 8050S with 32 CUs, clearly indicating that the numbers are far from final. That said, performance in this range puts the Strix Halo APUs well below the RTX 4070 Laptop GPU, and roughly on par with the RTX 3080 Laptop. Not bad for an iGPU, although it is almost certain that retail units will perform better, judging by the abnormally small delta between the 8050S and the 8060S.
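
The oddity is easy to quantify: the 8060S carries 25% more CUs than the 8050S, yet the leaked PassMark numbers have it trailing by roughly a percent:

```python
# Sanity check on the leaked PassMark scores: the 40-CU 8060S should
# beat the 32-CU 8050S, yet the leaked numbers run the other way.
score_8060s, score_8050s = 16_454, 16_663

lead = (score_8050s - score_8060s) / score_8060s * 100
print(f"8050S leads by {lead:.1f}%")                # ~1.3%
print(f"8060S CU advantage: {(40 - 32) / 32:.0%}")  # 25%
```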

AMD Radeon RX 9070 XT Alleged Benchmark Leaks, Underwhelming Performance

Recent benchmark leaks have revealed that AMD's upcoming Radeon RX 9070 XT graphics card may not deliver the groundbreaking performance initially hoped for by enthusiasts. According to leaked 3DMark Time Spy results shared by hardware leaker @All_The_Watts, the RDNA 4-based GPU achieved a graphics score of 22,894 points. The benchmark results indicate that the RX 9070 XT performs only marginally better than AMD's current RX 7900 GRE, showing a mere 2% improvement. It falls significantly behind the RX 7900 XT, which maintains almost a 17% performance advantage over the new card. These findings contradict earlier speculation that suggested the RX 9070 XT would compete directly with NVIDIA's RTX 4080.
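
Working backwards from the leaked score and the quoted gaps gives the implied Time Spy graphics scores for the comparison cards - a rough reconstruction rather than measured results:

```python
# Implied 3DMark Time Spy graphics scores, back-calculated from the
# leaked RX 9070 XT result and the percentage gaps quoted above.
rx_9070_xt = 22_894
rx_7900_gre = rx_9070_xt / 1.02  # 9070 XT is ~2% faster than the GRE
rx_7900_xt = rx_9070_xt * 1.17   # 7900 XT is ~17% faster than 9070 XT

print(f"RX 7900 GRE: {rx_7900_gre:,.0f}")  # ~22,445
print(f"RX 7900 XT:  {rx_7900_xt:,.0f}")   # ~26,786
```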

However, synthetic benchmarks tell only part of the story. The GPU's real-world gaming performance remains to be seen, and rumors indicate that the RX 9070 XT may offer significantly improved ray tracing capabilities compared to its RX 7000 series predecessors. This could be crucial for market competitiveness, particularly given the strong ray tracing performance of NVIDIA's RTX 40 and upcoming RTX 50 series cards. The success of the RX 9070 XT depends on how well it can differentiate itself through features like ray tracing while maintaining an attractive price-to-performance ratio in an increasingly competitive GPU market. We do not expect these scores to be the final word in the AMD RDNA 4 story, as we must wait and see what AMD delivers during CES. Third-party reviews and benchmarks will give the final verdict on the RDNA 4 market launch.

Alphawave Semi Scales UCIe to 64 Gbps for 3nm Die-to-Die Chiplet Connectivity

Alphawave Semi (LSE: AWE), a global leader in high-speed connectivity and compute silicon for the world's technology infrastructure, proudly introduces the industry's first 64 Gbps Universal Chiplet Interconnect Express (UCIe) Die-to-Die (D2D) IP Subsystem, delivering unprecedented chiplet interconnect data rates and setting a new standard for ultra-high-performance D2D connectivity solutions in the industry. The third-generation 64 Gbps IP subsystem builds on the successes of the most recent Gen 2 36 Gbps IP subsystem and the silicon-proven Gen 1 24 Gbps subsystem, and is available in TSMC's 3 nm technology for both Standard and Advanced packaging. The silicon-proven success and tapeout milestones pave the way for Alphawave Semi's Gen 3 UCIe IP subsystem offering.

Alphawave Semi is set to revolutionize connectivity with its Gen 3 64 Gbps UCIe IP, delivering a bandwidth density of over 20 Tbps/mm with ultra-low power and latency. This solution is highly configurable, supporting multiple protocols including AXI-4, AXI-S, CXS, CHI and CHI-C2C, to address the growing demands for high-performance connectivity across disaggregated systems in High-Performance Computing (HPC), Data Centers, and Artificial Intelligence (AI) applications.

UEFI Forum Releases the UEFI 2.11 and the PI 1.9 Specifications

The UEFI Forum today announced the release of the Unified Extensible Firmware Interface (UEFI) 2.11 specification and the Platform Initialization (PI) 1.9 specification. The goal of these specification updates is to streamline user implementation by providing increased compatibility across hardware architectures, including security updates, algorithm support, and improved alignment implementation guidance.

"We have created a vibrant firmware community, and these specification updates provide maintenance and enhancement of fundamental capabilities in order to help increase the momentum of the UEFI specifications and add value to the ecosystem," said Mark Doran, UEFI Forum President. "The latest specifications continue the UEFI Forum's commitment to developing standards for all significant CPU architectures as underscored by additions such as the new LoongArch support in the PI 1.9 specification."

NVIDIA App Allegedly Degrades Gaming Performance by Up to 15%, But There Is a Fix

Recent testing has revealed that the latest NVIDIA App v1.0 software utility may significantly impact gaming performance, with benchmarks from Tom's Hardware showing frame rate drops of up to 15% in certain games when the new NVIDIA App is installed alongside graphics drivers. The performance issues appear to be linked to the application's overlay features, particularly its game filters and photo mode capabilities, which seem to affect system resources regardless of whether users actively engage with them. Gamers primarily interested in the app's video capture and optimization features can restore regular performance levels by disabling these problematic overlay functions. In the meantime, NVIDIA issued the following statement on its GeForce forums in the "Game Filters and Performance in NVIDIA App" thread:
NVIDIA Official Statement: We are aware of a reported performance issue related to Game Filters and are actively looking into it. You can turn off Game Filters from the NVIDIA App Settings > Features > Overlay > Game Filters and Photo Mode, and then relaunch your game.

F2P Hero Shooter Marvel Rivals Shatters Expectations With Over 400,000 Concurrent Players Less Than 24 Hours After Launch

It's no secret that 2024 hasn't been kind to live-service games, with recent months seeing games like XDefiant and Concord shut down - although not always without reason - so it's a bit refreshing to see the recently released Marvel Rivals hit what can only be described as a home run. Not only is the game already at "Mostly Positive" in terms of Steam reviews, but the free-to-play hero shooter has also managed to draw in a massive number of players in its first week on Steam. According to SteamDB, Marvel Rivals peaked at 444,286 concurrent players a mere two and a half hours after its launch on Friday, December 6.

The game launched on the same day as Path of Exile 2, which had an equally successful launch despite its Early Access fee and some serious technical hiccups along the way. These two games have not been the norm, although Marvel Rivals signals that gaming properties based on Marvel characters may have finally found their audience. In previous years, both Marvel's Midnight Suns and Marvel's Guardians of the Galaxy made their own impacts, scoring big with both reviewers and audiences. Of course, neither of the aforementioned games garnered quite the player count that Marvel Rivals has, but that is likely simply due to the free-to-play nature of Rivals.

SPEC Delivers Major SPECworkstation 4.0 Benchmark Update, Adds AI/ML Workloads

The Standard Performance Evaluation Corporation (SPEC), the trusted global leader in computing benchmarks, today announced the availability of the SPECworkstation 4.0 benchmark, a major update to SPEC's comprehensive tool designed to measure all key aspects of workstation performance. This significant upgrade from version 3.1 incorporates cutting-edge features to keep pace with the latest workstation hardware and the evolving demands of professional applications, including the increasing reliance on data analytics, AI and machine learning (ML).

The new SPECworkstation 4.0 benchmark provides a robust, real-world measure of CPU, graphics, accelerator, and disk performance, ensuring professionals have the data they need to make informed decisions about their hardware investments. The benchmark caters to the diverse needs of engineers, scientists, and developers who rely on workstation hardware for daily tasks. It includes real-world applications like Blender, HandBrake, LLVM and more, providing a comprehensive performance measure across seven different industry verticals, each focusing on specific use cases and subsystems critical to workstation users. The SPECworkstation 4.0 benchmark marks a significant milestone for measuring workstation AI performance, providing an unbiased, real-world, application-driven tool for measuring how workstations handle AI/ML workloads.

AMD Quietly Disables Zen 4's Loop Buffer Feature Without Performance Penalty

AMD has silently disabled the loop buffer feature in its Zen 4 processor architecture through an AGESA microcode update. This development, first reported by the website Chips and Cheese, affects the entire Ryzen 7000 series processors and related EPYC models. The loop buffer, a power-optimization feature capable of storing 144 entries (72 per thread with SMT enabled), was implemented for the first time in AMD's Zen 4 architecture but has been notably absent from the newer Zen 5 design. The feature's primary function was to allow the processor's front end to power down while maintaining operational efficiency. The change was discovered when testing an ASRock B650 PG Lightning motherboard paired with a Ryzen 9 7950X3D processor. Hardware performance monitoring showed the loop buffer was active in BIOS version 1.21 (AGESA 1.0.0.6) but ceased to function after updating to BIOS 3.10 with AGESA 1.2.0.2a.

In a performance test conducted by Chips and Cheese, we learned that the feature's deactivation has no significant impact, suggesting the existing op cache provides sufficient bandwidth for optimal processor operation. AMD's architectural design has historically relied on its op cache for similar functionality. The feature appeared experimental, given the lack of documentation and the absence of programming guides for loop buffer optimization. Unlike competitors Intel and Arm, who have extensively documented their loop buffer implementations, AMD's approach appeared less developed. While the exact reasoning behind the deactivation remains unclear, disabling undocumented features is a step in the right direction, especially as future Zen architecture iterations don't rely on a loop buffer, as seen with Zen 5.

NVIDIA GeForce NOW Gets Six New Games and Black Friday Promotion

Turn Black Friday into Green Thursday with a new deal on GeForce NOW Ultimate and Performance memberships this week. For a limited time, get 50% off new Ultimate or Performance memberships for the first three months to experience the power of GeForce RTX-powered gaming at a fraction of the cost. The giving continues for GeForce NOW members: SteelSeries is offering a 30% discount exclusively to all GeForce NOW members on Stratus+ or Nimbus+ controllers, perfect for gaming anytime, anywhere when paired with GeForce NOW on Android and iOS devices. To redeem the discount, opt in to GeForce NOW rewards and look out for an email with details. Enjoy this exclusive offer on its own - it can't be combined with other SteelSeries promotions. It's not a GFN Thursday without new games - this week, six are joining the over 2,000 titles in the GeForce NOW library.

Plus, the Steam Autumn Sale is happening now, featuring stellar discounts on GeForce NOW-supported games. Snag beloved publishers' top titles, including Nightingale from Inflexion Games, Remnant and Remnant II from Arc Games, and Cult of the Lamb and The Plucky Squire from Devolver - and even more from publishers Frost Giant Studios, Metric Empire, tinyBuild, Torn Banner Studios and Tripwire. The sale runs through Wednesday, Dec. 4.

Dragon Age: The Veilguard Gets New Title Update 3

BioWare has released the new Title Update 3 for Dragon Age: The Veilguard, fixing some bugs and bringing some "quality of life" changes. Unfortunately, it does not bring any major graphics or performance improvements, so those will have to wait for a future update.

According to the changelog, the new update fixes several quest issues, issues where enemies and companions could get stuck, issues seen with the Skill Tree nodes, and various other crashes. The update also fixes several "pop-in" issues, camera stutter issues, and extremely bright VFX issues, but do not expect any major graphics or performance improvements.

JEDEC Announces Enhanced NAND Flash Interface Standard With Increased Speeds and Efficiency

JEDEC Solid State Technology Association, the global leader in the development of standards for the microelectronics industry, today announced the publication of JESD230G: NAND Flash Interface Interoperability Standard. JESD230G introduces speeds of up to 4800 MT/s, as compared to 400 MT/s in the first version of JESD230 published in 2011. Also, JESD230G adds a separate Command/Address Bus Protocol (SCA), delivering enhanced throughput and efficiency by allowing hosts and NAND devices to take maximum advantage of the latest interface speeds. JESD230G is available for free download from the JEDEC website.

"JEDEC is excited to release JESD230G," said David Landsman, Distinguished Engineer at Western Digital and Chair of the JEDEC NAND TG. He added, "This version of JESD230 further advances the capabilities of NAND flash devices to meet the growing demands of their expanding range of applications and continues the JEDEC tradition of building interoperable ecosystems through open industry standards."

NVIDIA B200 "Blackwell" Records 2.2x Performance Improvement Over its "Hopper" Predecessor

We know that NVIDIA's latest "Blackwell" GPUs are fast, but how much faster are they than the previous-generation "Hopper"? Thanks to the latest MLPerf Training v4.1 results, NVIDIA's HGX B200 Blackwell platform has demonstrated massive performance gains, measuring up to a 2.2x improvement per GPU compared to the HGX H200 Hopper platform. The latest results, verified by MLCommons, reveal impressive achievements in large language model (LLM) training. The Blackwell architecture, featuring HBM3e high-bandwidth memory and fifth-generation NVLink interconnect technology, achieved double the performance per GPU for GPT-3 pre-training and a 2.2x boost for Llama 2 70B fine-tuning compared to the previous Hopper generation. Each benchmark system incorporated eight Blackwell GPUs operating at a 1,000 W TDP, connected via NVLink Switch for scale-up.

The network infrastructure utilized NVIDIA ConnectX-7 SuperNICs and Quantum-2 InfiniBand switches, enabling high-speed node-to-node communication for distributed training workloads. While previous Hopper-based systems required 256 GPUs to optimize performance for the GPT-3 175B benchmark, Blackwell accomplished the same task with just 64 GPUs, leveraging its larger HBM3e memory capacity and bandwidth. One thing to look out for is the upcoming GB200 NVL72 system, which promises even more significant gains beyond the 2.2x mark. It features expanded NVLink domains, higher memory bandwidth, and tight integration with NVIDIA Grace CPUs, complemented by ConnectX-8 SuperNIC and Quantum-X800 switch technologies. With faster switching and better data movement through Grace-Blackwell integration, we could see even more software optimization from NVIDIA to push the performance envelope.

Intel Working on Fixing "Arrow Lake" Gaming Performance with Upcoming Patches

In an exclusive interview with Hot Hardware, Intel acknowledged that its recently launched Core Ultra 200 desktop processors, codenamed "Arrow Lake," have significant performance issues. However, Intel announced that a set of fixes are being developed. As our review confirmed, the launch of these new processors fell short of both consumer expectations and Intel's own projections, particularly in gaming performance, despite showing promise in productivity, content creation, and some AI workloads. In a discussion during a recent livestream, Intel's Robert Hallock, VP and general manager of client AI and technical marketing, addressed these concerns head-on, describing the Arrow Lake launch as "disastrous" and attributing the underwhelming performance to inadequately optimized systems.
Robert Hallock: I can't go into all the details yet, but we identified a series of multifactor issues at the OS level, at the BIOS level, and I will say that the performance we saw in reviews is not what we expected and not what we intended. The launch just didn't go as planned. That has been a humbling lesson for all of us, inspiring a fairly large response internally to get to the bottom of what happened and to fix it.

ROG Maximus Z890 Apex Achieves Record-Breaking Overclocking Performance

ASUS Republic of Gamers (ROG) today announced that new ROG Maximus Z890 Apex motherboards have been used to achieve 5 world records, 19 global first-place records and 31 first-place records. In the hands of some of the world's premier professional overclockers, the Maximus Z890 Apex has coaxed dazzling performance out of the latest Intel Core Ultra processor (Series 2) lineup and the latest high-performance memory kits.

Veterans of the overclocking scene will not be surprised to learn that these records were achieved with an Apex motherboard on the bench. This series has an undeniable pedigree. Since the very first model, ASUS has designed Apex motherboards for the singular purpose of helping the world's most talented overclockers shatter barriers on their way to new records.

Gigabyte Announces AORUS Z890 Motherboards Now Available, Unlocking AI-Enhanced Performance With D5 Bionic Corsa

Gigabyte, the world's leading computer brand, proudly announces that the AORUS Z890 series motherboards are now officially available for purchase. Designed to maximize the performance of the latest Intel Core Ultra processors, these boards introduce the groundbreaking D5 Bionic Corsa technology alongside advanced thermal management and optimized power design. GIGABYTE's continuous partnership with HWiNFO further enhances the boards with real-time monitoring of CPU vCore power phase outputs and efficiency. With these powerful boards now on sale, users can enjoy unmatched performance and seamless customization, making them the ideal platform for enthusiasts and professionals.

D5 Bionic Corsa is the core technology for AORUS Z890 series motherboards, which leverages AI-enhanced innovations across software, hardware, and firmware to boost DDR5 memory speeds to an unprecedented 9500+ MT/s. The AI SNATCH Engine, powered by advanced AI overclocking models, optimizes configurations for DDR5 XMP memory and CPUs, enabling up to 20% faster speeds. With XMP AI BOOST and CPU AI BOOST, users can achieve world-class overclocking performance with just one click. The AI-driven PCB Design enhances signal integrity by reducing reflection by 28.2%, while HyperTune BIOS fine-tunes the Memory Reference Code (MRC) for peak performance. The VRM Thermal Balance mechanism ensures heat dissipation across the VRM with a heatpipe design, while optimized PWM firmware balances current output for superior stability.

Renesas Collaborates With Intel on Power Management Solution for New Intel Core Ultra 200V Series Processors

Renesas Electronics Corporation, a premier supplier of advanced semiconductor solutions, today announced a collaboration with Intel resulting in a power management solution that delivers best-in-class battery efficiency for laptops based on the new Intel Core Ultra 200V series.

Collaborating closely with Intel, Renesas has developed an innovative new customized power management IC (PMIC) that covers the full power management needs of this newest generation of Intel processors. This advanced and highly integrated PMIC, combined with a pre-regulator and a battery charger, offers a complete solution for PCs leveraging the new Intel processor. The three new devices work together to provide a purpose-built power solution targeted at client laptops, particularly those running AI applications that tend to consume a lot of power.