News Posts matching #Benchmarks


ScaleFlux SFX 5016 Will Set New Benchmarks for Enterprise SSD Efficiency and AI Workload Performance

As the IT sector continues to seek answers for scaling data processing performance while simultaneously improving efficiency - in terms of performance and density per watt, per system, per rack, and per dollar of CapEx and OpEx - ScaleFlux is answering the call with innovative design choices in its SSD controllers. The SFX 5016 promises to set new standards both for performance and for power efficiency.

In addition to carrying forward the transparent compression feature that ScaleFlux first released in 2020 and upgraded in 2022 with the SFX 3016 computational storage drive controller, the new SFX 5016 SoC includes a number of design advances.

Apple MacBook Air M3 Teardown Reveals Two NAND Chips on Basic 256 GB Config

Apple introduced its new generation of MacBook Air subcompact laptops last week—their press material focused mostly on the "powerful M3 chip" and its more efficient Neural Engine. Storage options were not discussed deeply—you had to dive into the Air M3's configuration page or specification sheet to find out more. Media outlets have highlighted a pleasing upgrade for entry-level models in the area of internal SSD transfer speeds. Apple has seemingly taken on board feedback regarding the disappointing performance of its basic MacBook Air M2 model—its 256 GB storage solution houses a lone 3D NAND package. Max Tech's Vadim Yuryev was one of the first media personalities to discover the presence of two NAND flash chips within entry-level MacBook Air M3 systems—his channel's video teardown can be watched below.

The upgrade from a single chip to a twin configuration has granted higher read and write speeds—Yuryev shared Blackmagic SSD speed test results; screengrabs from his video coverage are attached to this article. The M3 MacBook Air's 256 GB solution achieved write speeds of 2,108 MB/s, posting 33% faster performance when compared to an equivalent M2 MacBook Air configuration. The M3 model recorded read speeds of 2,880 MB/s—Wccftech was suitably impressed by this achievement: "making it a whopping 82 percent [faster] than its direct predecessor, making it quite an impressive result. The commendable part is that Apple does not require customers to upgrade to the 512 GB storage variants of the M3 MacBook Air to witness higher read and write speeds." Performance is still no match when lined up against "off-the-shelf" PCIe 3.0 x4 drives, and tech enthusiasts find the entry price point of $1099 laughable. Apple's lowest rung option nets a 13-inch model that packs non-upgradable 8 GB of RAM and 256 GB of storage. Early impressions have also put a spotlight on worrying thermal issues—Apple's fan-less cooling solution is reportedly struggling to tame the newly launched M3 mobile chipset.
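The quoted percentages can be sanity-checked by working backwards to the implied M2 Air baselines; a quick sketch (figures derived only from the numbers above, not independently measured):

```python
# Back-of-envelope check: the M2 Air 256 GB baselines implied by the quoted uplifts.
m3_write_mbs, m3_read_mbs = 2108, 2880       # Blackmagic results cited above
implied_m2_write = m3_write_mbs / 1.33       # "33% faster" write claim
implied_m2_read = m3_read_mbs / 1.82         # "82 percent faster" read claim
print(f"Implied M2 Air 256 GB write: ~{implied_m2_write:.0f} MB/s")  # ~1585 MB/s
print(f"Implied M2 Air 256 GB read:  ~{implied_m2_read:.0f} MB/s")   # ~1582 MB/s
```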

AMD Ryzen 7 8840U "Hawk Point" APU Exceeds Expectations in 10 W TDP Gaming Test

AMD Ryzen 8040 "Hawk Point" mobile processors continue to roll out in all sorts of review sample guises—mostly within laptop/notebook and handheld gaming PC segments. An example of the latter would be GPD's Hawk Point-refreshed Win Max 2 model—Cary Golomb, a tech reviewer and self-described evangelist of "PC Gaming Handhelds Since 2016," has acquired this device for benchmark comparison purposes. A Ryzen 7 8840U-powered GPD Win Max 2 model was pitted against similar devices that house older Team Red APU technologies. Golomb's collection included Valve's Steam Deck LCD model, and three "Phoenix" Ryzen 7840U-based GPD models. He did not have any top-of-the-line ASUS or Lenovo handhelds within reach, but the onboard Ryzen Z1 Extreme APU is a close relative of the 7840U.

Golomb's social media post included a screenshot of a Batman: Arkham Knight "average frames per second" comparison chart—all devices were running on a low 10 W TDP setting. The overall verdict favors AMD's new Hawk Point part: "Steam Deck low TDP performance finally dethroned...GPD continues to make the best AMD devices. 8840U shouldn't be better, but everywhere I'm testing, it is consistently better across every TDP. TSP measuring similar." Hawk Point appears to be a slight upgrade over Phoenix—most of the generational improvements reside within a more capable XDNA NPU, so it is interesting to see the 8840U outperform its predecessor. Both sport AMD's Radeon 780M integrated graphics solution (RDNA 3), while the standard/first iteration Steam Deck makes do with an RDNA 2-era "Van Gogh" iGPU. Golomb found that the "three other GPD 7840U devices behaved somewhat consistently."

MSI Claw Review Units Observed Trailing Behind ROG Ally in Benchmarks

Chinese review outlets have received MSI Claw sample units—the "Please, Xiao Fengfeng" Bilibili video channel has produced several comparison pieces detailing how the plucky Intel Meteor Lake-powered handheld stands up against its closest rival, the ASUS ROG Ally. The latter utilizes an AMD Ryzen Z1 APU—in Extreme or Standard forms—and many news outlets have pointed out that the Z1 Extreme processor is a slightly reworked Ryzen 7 7840U "Phoenix" processor. Intel and its handheld hardware partners have not dressed up Meteor Lake chips with alternative gaming monikers—simply put, the MSI Claw arrives with Core Ultra 7-155H or Ultra 5-135H processors onboard. The two rival systems both run on Windows 11, and also share the same screen size, resolution, display technology (IPS) and 16 GB LPDDR5-6400 memory configuration. The almost eight-month-old ASUS handheld seems to outperform its near-launch competition.

Xiao Fengfeng's review (Ultra 7-155H versus Z1 Extreme) focuses on different power levels and how they affect handheld performance—the Claw and Ally have user selectable TDP modes. A VideoCardz analysis piece lays out key divergences: "Both companies offer easy TDP profile switches, allowing users to adjust performance based on the game's requirements or available battery life. The Claw's larger battery could theoretically offer more gaming time or higher TDP with the same battery life. The system can work at 40 W TDP level (but in reality it's between 35 and 40 watts)...In the Shadow of the Tomb Raider test, the Claw doesn't seem to outperform the ROG Ally. According to a Bilibili creator's test, the system falls short at four different power levels: 15 W, 20 W, 25 W, and max TDP (40 W for Claw and 30 W for Ally)."

AMD Develops ROCm-based Solution to Run Unmodified NVIDIA CUDA Binaries on AMD Graphics

AMD has quietly funded an effort over the past two years to enable binary compatibility for NVIDIA CUDA applications on its ROCm stack. This allows CUDA software to run on AMD Radeon GPUs without adapting the source code. The project responsible is ZLUDA, which was initially developed to provide CUDA support on Intel graphics. The developer behind ZLUDA, Andrzej Janik, was contracted by AMD in 2022 to adapt his project for use on Radeon GPUs with HIP/ROCm. He spent two years bringing functional CUDA support to AMD's platform, allowing many real-world CUDA workloads to run without modification. AMD decided not to productize this effort for unknown reasons but did open-source it once funding ended, per their agreement. Over at Phoronix, AMD's ZLUDA implementation was put through a wide variety of benchmarks.

Benchmarks found that proprietary CUDA renderers and software worked on Radeon GPUs out-of-the-box with the drop-in ZLUDA library replacements. CUDA-optimized Blender 4.0 rendering now runs faster on AMD Radeon GPUs than the native ROCm/HIP port, reducing render times by around 10-20%, depending on the scene. The implementation is surprisingly robust, considering it was a single-developer project. However, there are some limitations—OptiX and PTX assembly code are not yet fully supported. Overall, though, testing showed very promising results. Compared to the generic OpenCL runtimes in Geekbench, CUDA-optimized binaries produce up to 75% better results. With the ZLUDA libraries handling API translation, unmodified CUDA binaries can now run directly on top of ROCm and Radeon GPUs. Strangely, the ZLUDA port targets AMD ROCm 5.7, not the newest 6.x versions. Only time will tell if AMD continues investing in this approach to simplify porting of CUDA software. However, the open-sourced project now enables anyone to contribute and help improve compatibility. For a complete review, check out Phoronix's tests.
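The "drop-in" aspect works at the dynamic-linker level: an unmodified binary loads ZLUDA's replacement CUDA driver library, which translates the calls onto HIP/ROCm. A minimal sketch of that idea on Linux, with hypothetical paths (consult the ZLUDA project's own documentation for exact usage):

```python
import os
import subprocess

# Hypothetical paths, for illustration only.
ZLUDA_DIR = "/opt/zluda"        # directory holding ZLUDA's replacement CUDA driver library
CUDA_APP = "./cuda_benchmark"   # an unmodified, CUDA-linked application binary

env = os.environ.copy()
# Prepend the ZLUDA directory so the dynamic linker resolves the CUDA driver
# library to ZLUDA's translation layer instead of NVIDIA's own libcuda.
env["LD_LIBRARY_PATH"] = ZLUDA_DIR + ":" + env.get("LD_LIBRARY_PATH", "")

# The application still "thinks" it is talking to CUDA; ZLUDA maps the calls
# onto HIP/ROCm so they execute on the Radeon GPU.
subprocess.run([CUDA_APP], env=env, check=True)
```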

NVIDIA CG100 "Grace" Server Processor Benchmarked by Academics

The Barcelona Supercomputing Center (BSC) and the State University of New York (Stony Brook and Buffalo campuses) have pitted NVIDIA's relatively new CG100 "Grace" Superchip against several rival products in a "wide variety of HPC and AI benchmarks." Team Green marketing material has focused mainly on the overall GH200 "Grace Hopper" package—so it is interesting to see technical institutes concentrate on the company's "first true" server processor (ARM-based), rather than the ever popular GPU aspect. The Next Platform's article summarized the chip's internal makeup: "(NVIDIA's) Grace CPU has a relatively high core count and a relatively low thermal footprint, and it has banks of low-power DDR5 (LPDDR5) memory—the kind used in laptops but gussied up with error correction to be server class—of sufficient capacity to be useful for HPC systems, which typically have 256 GB or 512 GB per node these days and sometimes less."

Benchmark results were revealed at last week's HPC Asia 2024 conference (in Nagoya, Japan)—Barcelona Supercomputing Center (BSC) and the State University of New York also uploaded their findings to the ACM Digital Library (link #1 & #2). BSC's MareNostrum 5 system contains an experimental cluster portion—consisting of NVIDIA Grace-Grace and Grace-Hopper superchips. We have heard plenty about the latter (in press releases), but the former is a novel concept—as outlined by The Next Platform: "Put two Grace CPUs together into a Grace-Grace superchip, a tightly coupled package using NVLink chip-to-chip interconnects that provide memory coherence across the LPDDR5 memory banks and that consumes only around 500 watts, and it gets plenty interesting for the HPC crowd. That yields a total of 144 Arm Neoverse "Demeter" V2 cores with the Armv9 architecture, and 1 TB of physical memory with 1.1 TB/sec of peak theoretical bandwidth. For some reason, probably relating to yield on the LPDDR5 memory, only 960 GB of that memory capacity and only 1 TB/sec of that memory bandwidth is actually available."
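Halving the quoted Grace-Grace figures gives a rough per-CPU picture; a trivial sketch (simple division of the numbers above, not an official NVIDIA breakdown):

```python
# Per-CPU figures implied by the Grace-Grace superchip numbers quoted above.
superchip = {"cores": 144, "usable_memory_gb": 960, "bandwidth_tb_s": 1.0, "power_w": 500}
per_cpu = {key: value / 2 for key, value in superchip.items()}
print(per_cpu)   # ~72 cores, ~480 GB of LPDDR5, ~0.5 TB/s, ~250 W per Grace CPU
```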

Intel Core i9-14900T Geekbenched - Comparable to AMD Ryzen 9 7900

Intel's Core i9-14900T processor was "officially" released last month alongside an expanded population of "Raptor Lake Refresh" products—the T-class alternative to Team Blue's flagship desktop Core i9-14900 CPU is a less glamorous prospect, hence almost zero press coverage and tech reviews. Its apparent lack of visibility is not helped by non-existent availability at retail, despite inclusion in Team Blue's second wave of 14th Generation Core processors (Marketing Status = Launched). The Core i9-14900 (non-K) is readily obtainable around the globe, as a lower-power alternative to the ever greedy Core i9-14900K, but the T-class SKU sibling takes frugality to another level. TPU's resident CPU tester, W1zzard, implemented six distinct power limit settings during an i9-14900K supplemental experiment, with the lowest being 35 W—coincidentally, matching the i9-14900T's default base power.

His simulated findings were not encouraging, to say the least, but late last week BenchLeaks noticed that a lone test system had gauged the T-class part's efficiency-oriented processing prowess. Geekbench 6.2.2 results were generated by an ASRock Z790 PG-ITX/TB4 build (with 64 GB of 5586 MT/s DDR5 SDRAM)—scoring 3019 in the overall single-core category, and 16385 in the multi-core stakes. The latter score indicates a 22% performance penalty when referenced against Tom's Hardware's Geekbenched i9-14900K sample. The publication reckons that these figures place Intel's Core i9-14900T CPU in good company—notably AMD's Ryzen 9 7900 processor, one of the company's trio of 65 W "non-X" SKUs. Last March, W1zzard was suitably impressed by his review sample's "fantastic energy efficiency"—the Geekbench 6 official scoreboard awards it 2823 (single-core) and 16750 (multi-core) based on aggregated data from multiple submissions.
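Those figures can be turned into rough comparisons with simple arithmetic; a quick sketch using only the numbers quoted above:

```python
# Back-solving the i9-14900K multi-core figure implied by the "22% penalty".
i9_14900t = {"single": 3019, "multi": 16385}
ryzen9_7900 = {"single": 2823, "multi": 16750}   # Geekbench 6 scoreboard figures cited above

implied_14900k_multi = i9_14900t["multi"] / (1 - 0.22)
print(f"Implied i9-14900K multi-core: ~{implied_14900k_multi:.0f}")   # ~21,000

gap_vs_7900 = 100 * (i9_14900t["multi"] / ryzen9_7900["multi"] - 1)
print(f"Multi-core gap vs Ryzen 9 7900: {gap_vs_7900:+.1f}%")          # ~-2.2%
```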

AMD Ryzen 7 8700G AI Performance Enhanced by Overclocked DDR5 Memory

We already know about the AMD Ryzen 7 8700G APU's enjoyment of overclocked memory—early reviews demonstrated the graphical benefits granted by fiddling with the "iGPU engine clock and the processor's memory frequency." While gamers can enjoy a boosted integrated graphics solution that is comparable, in 1080p performance stakes, to a discrete Radeon RX 6500 XT GPU, AI enthusiasts are eager to experiment with the "Hawk Point" part's Radeon 780M iGPU and Neural Processing Unit (NPU)—the first generation Ryzen XDNA inference engine can unleash up to 16 AI TOPS. One individual, chi11eddog, posted their findings through social media channels earlier today, coinciding with the official launch of Ryzen 8000G processors. The initial set of results concentrated on the Radeon 780M aspect; NPU-centric data may arrive at a later date.

They performed quick tests on AMD's freshly released Ryzen 7 8700G desktop processor, combined with an MSI B650 Gaming Plus WiFi motherboard and two sticks of 16 GB DDR5-4800 memory. The MSI-exclusive "Memory Try It" feature was deployed for the higher entries in the tables—this assisted in achieving and gauging several "higher system RAM frequency" settings. Here is chi11eddog's succinct interpretation of the benchmark results: "7600 MT/s is 15% faster than 4800 MT/s in UL Procyon AI Inference Benchmark and 4% faster in GIMP with Stable Diffusion." The processor's default memory state is capable of producing 210 Float32 TOPS, according to chi11eddog's inference chart. The 6000 MT/s setting produces a 7% improvement over baseline, while 7200 MT/s drives proceedings to 11%—the flagship APU's Radeon 780M iGPU appears to be quite dependent on bandwidth. Their GIMP w/ Stable Diffusion benchmarks also taxed the integrated RDNA 3 graphics solution—again, it was deemed to be fairly bandwidth hungry.
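For scale, the raw dual-channel bandwidth uplift of each memory setting can be set against the reported gains; a quick sketch (the benchmark percentages are chi11eddog's, the bandwidth arithmetic is simple dual-channel math):

```python
# Raw dual-channel DDR5 bandwidth vs the reported AI inference gains.
def dual_channel_gbs(mts):
    # 2 channels x 64-bit (8 bytes) per transfer
    return mts * 8 * 2 / 1000

baseline = dual_channel_gbs(4800)                      # 76.8 GB/s
for mts, reported_gain in [(6000, 7), (7200, 11), (7600, 15)]:
    bw_gain = 100 * (dual_channel_gbs(mts) / baseline - 1)
    print(f"DDR5-{mts}: +{bw_gain:.0f}% bandwidth -> +{reported_gain}% reported")
# DDR5-6000: +25% bandwidth -> +7% reported
# DDR5-7200: +50% bandwidth -> +11% reported
# DDR5-7600: +58% bandwidth -> +15% reported
```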

AMD Ryzen 7 8700G & Ryzen 5 8600G APUs Geekbenched

AMD announced its Ryzen 8000G series of Zen 4-based desktop APUs earlier this month, with an official product launch date of January 31. The top models within this range are the "Hawk Point" Ryzen 7 8700G and Ryzen 5 8600G processors—Olrak29_ took to social media after spotting pre-release examples popping up on the Geekbench Browser database. It is highly likely that evaluation samples are in the hands of reviewers, and more benchmarked results are expected to be uploaded over the next week and a half. The Ryzen 7 8700G (w/ Radeon 780M Graphics) was benched on an ASUS ROG STRIX B650-A GAMING WIFI board with 32 GB (6398 MT/s) of DDR5 system memory. Leaked figures appeared online last weekend, originating from a Ryzen 5 8600G (w/ Radeon 760M Graphics) paired with an MSI B650 GAMING PLUS WIFI (MS-7E26) motherboard and 32 GB (6400 MT/s) of DDR5 RAM.

The Geekbench 6 results reveal that the Ryzen 7 8700G and Ryzen 5 8600G APUs are slightly less performant than "Raphael" Ryzen 7000 non-X processors—not a massive revelation, given the underlying technological similarities between these AMD product lines. Evaluations could change with the publication of official review data, but the 8000G series is at a natural disadvantage here—lower core clock frequencies and smaller L3 cache designations are the likely culprits. The incoming APUs are also somewhat hobbled with PCIe support only reaching 4.0 standards. VideoCardz, Tom's Hardware and Wccftech have taken the time to compile the leaked Geekbench 6 results into handy comparison charts—very much worth checking out.

AMD Ryzen Threadripper Pro 7995WX & 7975WX Specs Leaked

A pair of Dell Precision workstations have been tested in SiSoftware's Sandra benchmark suite—database entries for two Precision 7875 Tower (Dell 00RP38) systems reveal specifications of next generation AMD Ryzen Threadripper Pro CPUs. The 32-core 7975WX model was outed a couple of weeks ago, but the Sandra benchmark database has been updated with additional scores. Its newly leaked sibling is getting a lot of attention—the recently benchmarked 7995WX sample appears to possess 96 Zen 4 cores and 192 threads (via SMT), with a 5.14 GHz maximum single-core boost clock. Tom's Hardware is intrigued by benchmark data showing that the CPU has: "a 3.2 GHz all-core turbo frequency."

There are 12 CCDs onboard, with a combined total of 384 MB of L3 cache (each CCD has access to 32 MB of L3)—therefore Wccftech believes that "this chip is based on the Genoa SP5 die and will adopt the top 8-channel and SP5 socket platform. The chip also features 96 MB of L2 cache and the top clock speed was reported at 5.14 GHz." The repeat-benchmarked Ryzen Threadripper Pro 7975WX CPU is slightly less exciting—with 32 Zen 4 cores, 64 threads, 128 MB of L3 cache, and 32 MB of L2 cache. According to older information, this model is believed to have a TDP rating of 350 W and apparent clock speeds peaking at 4.0 GHz—Wccftech reckons that this frequency reflects an all-core boost. They have produced a bunch of comparative performance charts and further analysis—well worth checking out.
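The quoted cache totals line up with Zen 4's known per-CCD and per-core allotments; a quick check:

```python
# Sanity check of the cache totals quoted for the 96-core 7995WX sample.
ccds, l3_per_ccd_mb = 12, 32
cores, l2_per_core_mb = 96, 1            # Zen 4 carries 1 MB of L2 per core
print(f"L3 total: {ccds * l3_per_ccd_mb} MB")    # 384 MB
print(f"L2 total: {cores * l2_per_core_mb} MB")  # 96 MB
print(f"Cores per CCD: {cores // ccds}")         # 8
```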

NVIDIA GH200 Superchip Aces MLPerf Inference Benchmarks

In its debut on the MLPerf industry benchmarks, the NVIDIA GH200 Grace Hopper Superchip ran all data center inference tests, extending the leading performance of NVIDIA H100 Tensor Core GPUs. The overall results showed the exceptional performance and versatility of the NVIDIA AI platform from the cloud to the network's edge. Separately, NVIDIA announced inference software that will give users leaps in performance, energy efficiency and total cost of ownership.

GH200 Superchips Shine in MLPerf
The GH200 links a Hopper GPU with a Grace CPU in one superchip. The combination provides more memory, bandwidth and the ability to automatically shift power between the CPU and GPU to optimize performance. Separately, NVIDIA HGX H100 systems that pack eight H100 GPUs delivered the highest throughput on every MLPerf Inference test in this round. Grace Hopper Superchips and H100 GPUs led across all MLPerf's data center tests, including inference for computer vision, speech recognition and medical imaging, in addition to the more demanding use cases of recommendation systems and the large language models (LLMs) used in generative AI.

Intel Arc Linux Gaming Performance Boosted by Vastly Improved Vulkan Drivers

Intel's Alchemist engineering team has been working on improving its open-source Vulkan drivers for Linux—recent coverage from Phoronix shows that Team Blue's hard work is paying off, especially in the area of gaming performance. The site's founder, Michael Larabel, approves of the latest Mesa work produced by Intel engineers, and has commended them on their efforts to better the Arc Graphics family. His mid-month testing—on a Linux 6.4-based system running an Intel Arc A770 GPU—demonstrated a "~10% speed-up for the Intel Arc Graphics on Linux." He benchmarked this system again over the past weekend, following the release of a new set of optimizations for Mesa 23.3-devel: "The latest performance boost for Intel graphics on Linux is by supporting the I915_FORMAT_MOD_4_TILED_DG2_RC_CCS modifier. Indeed it's panning out nicely for furthering the Intel Arc Graphics Vulkan performance."

He apologized for the limited selection of games, due to: "the Intel Linux graphics driver still not having sparse support in place, but at least that will hopefully be here in the coming months when the Intel Xe kernel driver is upstreamed. Another recent promising development for the Intel open-source graphics driver support is fake sparse support to at least help some games and that code will hopefully be merged soon." First up was Counter-Strike: Global Offensive—thanks to the optimized Vulkan drivers it "enjoyed another nice boost to the performance as a result of this latest code. For CS Linux gamers, it's great seeing the 21% boost just over the past month."

Intel N100 Quad E-Core Gaming Performance Assessed

Team Pandory has tested the gaming potential of an Intel Alder Lake-N SoC—not many outlets have bothered to give the N100 much coverage in this respect, since the chip's makeup is E-core only and it only offers single-channel memory support. Team Blue has emphasized power efficiency rather than raw performance with its super-low-budget 2022 successor to the old Pentium and Celeron processor product lines. The utilization of modern Gracemont CPU cores does it some favors—notably granting L3 cache support, but the chip has been designed with entry-level productivity in mind.

Naturally, in-game testing focuses attention on the N100's integrated GPU, based on Team Blue's Xe-LP architecture—it features 24 execution units (EUs), AV1 decode support, and 8K 60 FPS video playback. Arc Alchemist offers roughly double the performance when compared to the Xe-LP iGPU, so we are not expecting a big "wow factor" to be delivered by the plucky Alder Lake-N SoC (6 W TDP). Team Pandory benchmarked a laptop sporting a single stick of 8 GB DDR5 RAM and the N100 quad E-core CPU (capable of 3.4 GHz turbo boosting), with 6 MB of L3 cache. The ultra portable device was able to hit 60 FPS in a couple of older games, but the majority of tested titles ran at 20 to 30 FPS (on average). Graphics settings were universally set to minimum, with a resolution of 1280 x 720 (720p) across ten games: CS:GO, Dota 2, Forza Horizon 4, Genshin Impact, GTA V, Grid Autosport, Minecraft, Resident Evil 5, Skyrim, and Sleeping Dogs.
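For a rough sense of the iGPU's ceiling, peak FP32 throughput can be estimated from the EU count; a sketch under stated assumptions (the clock and per-EU rate are typical Xe-LP figures, not numbers from Team Pandory's test):

```python
# Rough peak-FP32 estimate for the N100's 24-EU Xe-LP iGPU.
# Assumptions (not from the article): ~750 MHz peak iGPU clock, and
# 8 FP32 lanes per EU with FMA, i.e. 16 FLOPS per EU per clock.
eus, flops_per_eu_per_clock, clock_ghz = 24, 16, 0.75
peak_gflops = eus * flops_per_eu_per_clock * clock_ghz
print(f"~{peak_gflops:.0f} GFLOPS FP32 peak")   # ~288 GFLOPS
```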

Geekbench Leak Suggests NVIDIA GeForce RTX 4060 Nearly 20% Faster than RTX 3060

NVIDIA is launching its lower end GeForce RTX 4060 graphics card series next week, but has kept schtum about the smaller Ada Lovelace AD107 GPU's performance level. This more budget-friendly offering (MSRP $299) is rumored to have 3,072 CUDA cores, 24 RT cores, 96 Tensor cores, 96 TMUs, and 32 ROPs. It will likely sport 8 GB of GDDR6 memory across a 128-bit wide memory bus. Benchleaks has discovered the first set of test results via a database leak, and posted these details on social media earlier today. Two Geekbench 6 runs were conducted on a test system comprising an Intel Core i5-13600K CPU, ASUS Z790 ROG APEX motherboard, DDR5-6000 memory and the aforementioned GeForce card.

The GPU Compute test utilizing the Vulkan API resulted in a score of 99419, and another using OpenCL achieved 105630. We are looking at a single sample here, so expect variations when other units get tested in Geekbench prior to the June 29 launch. The RTX 4060 is about 12% faster (in Vulkan) than its direct predecessor—the RTX 3060. The gap widens with its OpenCL performance, where it offers an almost 20% jump over the older card. The RTX 3060 Ti is around 3-5% faster than the RTX 4060. We hope to see actual in-game benchmarking carried out soon.
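Back-solving from those percentages gives the implied RTX 3060 baselines; a quick sketch using only the figures quoted above:

```python
# Implied RTX 3060 Geekbench 6 scores, back-solved from the quoted gaps.
rtx4060_vulkan, rtx4060_opencl = 99419, 105630
implied_3060_vulkan = rtx4060_vulkan / 1.12    # "~12% faster" in Vulkan
implied_3060_opencl = rtx4060_opencl / 1.20    # "almost 20%" in OpenCL
print(f"Implied RTX 3060 Vulkan: ~{implied_3060_vulkan:.0f}")   # ~88,800
print(f"Implied RTX 3060 OpenCL: ~{implied_3060_opencl:.0f}")   # ~88,000
```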

Moore Threads MTT S80 GPU Benchmarked by PC Watch Japan

The Moore Threads MTT S80 gaming-oriented graphics card has been tested mostly by Chinese hardware publications, but Japan's PC Watch has managed to get hold of a sample unit configured with 16 GB of GDDR6 (14 Gbps) for evaluation purposes, and soon published its findings in a "HotHot REVIEW!" The MTT S80 GPU appears to be based on PowerVR architecture (developed by Imagination Technologies), but official Moore Threads literature boasts that its own Chunxiao design is behind all proceedings, with 4096 "MUSA" cores. The GPU's clock speed is set at 1.8 GHz, and maximum compute performance has been measured at 14.2 TFLOPS. A 256-bit memory bus grants a bandwidth transfer rate of 448 GB/s. PC Watch notes that the card's support for PCIe Gen 5 x16 (offering up to 128 GB/s bandwidth) is quite surprising, given the early nature of this connection standard.
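Those headline numbers hang together under simple peak-rate arithmetic; a quick check (assuming the usual 2 FLOPS per core per clock via FMA):

```python
# Checking the headline S80 numbers against simple peak-rate arithmetic.
cores, clock_ghz = 4096, 1.8
peak_tflops = cores * 2 * clock_ghz / 1000       # 2 FLOPS (FMA) per core per clock, assumed
bus_bits, gbps_per_pin = 256, 14
bandwidth_gbs = bus_bits / 8 * gbps_per_pin      # bus width in bytes x effective data rate
print(f"Peak FP32: ~{peak_tflops:.1f} TFLOPS")        # ~14.7 TFLOPS vs the 14.2 quoted
print(f"Memory bandwidth: {bandwidth_gbs:.0f} GB/s")  # 448 GB/s, matching the spec
```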

Moore Threads has claimed in the past that its cards support DirectX, but PC Watch has discovered that the S80 does not work with DX12, and its tests also demonstrated significant compatibility issues under DX11—with plenty of system freezes and error messages logged. The reviewer(s) had to downshift in some cases to DX9 game environments, in order to gather reliable/stable data. TPU's GPU-Z utility is shown to have no registration information for the S80, and it cannot read the GPU's clock. PC Watch compared their sample unit to an NVIDIA GeForce GTX 1050 Ti graphics card—the entry-level 2016-era GPU managed to best the newer competition in terms of in-game performance and power efficiency.

NVIDIA RTX 4000 Ada Lovelace GPU Roughly Equivalent to GeForce RTX 3060 Ti, Consumes 65% Less Power

The NVIDIA RTX 4000 SFF Ada Generation graphics card was released to the public in late April, but very few reviews and benchmarks have emerged since then. Jisaku Hibi, a Japanese hardware site, has published an in-depth evaluation that focuses mostly on gaming performance. The RTX 4000 Ada SFF has been designed as a compact workstation graphics card, but its usage of an AD104 die makes it a sibling of NVIDIA's GeForce RTX 4070 and 4070 Ti gaming-oriented cards. Several PC hardware sites have posited that the 70 W RTX 4000 Ada SFF would "offer GeForce RTX 3070-like performance," but Jisaku Hibi's investigation points to the RTX 3060 Ti being the closest equivalent card (in terms of benchmark results).

According to the TPU GPU database: "NVIDIA has disabled some shading units on the RTX 4000 SFF Ada Generation to reach the product's target shader count. It features 6144 shading units, 192 texture mapping units, and 80 ROPs. Also included are 192 tensor cores which help improve the speed of machine learning applications. The card also has 48 ray tracing acceleration cores. NVIDIA has paired 20 GB GDDR6 memory with the RTX 4000 SFF Ada Generation, which are connected using a 160-bit memory interface. The GPU is operating at a frequency of 1290 MHz, which can be boosted up to 1565 MHz, memory is running at 1750 MHz (14 Gbps effective)." The SKU's 70 W TGP and limited memory interface are seen as the card's main weak points, resulting in average clock speeds and a maximum memory bandwidth of only 280 GB/s.
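The bandwidth figure follows directly from the quoted memory configuration; a one-line check:

```python
# The 280 GB/s figure follows from the quoted 160-bit bus and 14 Gbps effective memory.
bus_width_bits, effective_gbps = 160, 14
bandwidth_gbs = bus_width_bits / 8 * effective_gbps
print(f"{bandwidth_gbs:.0f} GB/s")   # 280 GB/s
```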

AMD Ryzen 7040HS and 7040H "Phoenix" Laptop CPUs Get Tested

AMD is late in releasing its Phoenix Zen 4 lineup of mobile APUs - the original April launch was missed, and laptops bearing Ryzen 7000HS & H-series chips are expected to arrive at some point this month. Preview hardware has made its way into the hands of testers, and one particular outlet - Golden Pig Upgrade, a content creator on the Chinese Bilibili video site - has performed benchmark tests. He seems to be the first reviewer to get hands-on time with AMD Ryzen 7040 Phoenix APUs, and his findings point to class-leading results in terms of graphical capabilities - the 7840HS (packing a Radeon 780M RDNA 3 iGPU) is compared to the Rembrandt-based 7735H, as well as a pair of Intel Raptor Lake CPUs - the 13700H and 13500H models.

AMD's newest Phoenix APU is the group leader in GPU performance stakes, but the jump up from the last-gen Rembrandt (RDNA 2 iGPU) chip is not all that significant. VideoCardz reckons that the Radeon 780M integrated GPU is roughly equivalent to an NVIDIA GeForce MX550 dGPU, and not far off from a GeForce GTX 1650 Max-Q graphics card (in terms of benchmark performance). According to AMD's internal documentation, the RDNA 3 core architecture utilized in Phoenix APUs is referred to as "2.5," so this perhaps explains why the 780M is not running laps around its older sibling(s).

Samsung Exynos 2400 SoC Performance Figures Leaked, Prototype Betters Next Gen Snapdragon GPU

Samsung's unannounced Exynos 2400 mobile chipset has been linked to the upcoming Galaxy S24 smartphone family, but internet tipsters have speculated that the in-house SoC will be reserved for the baseline model only. The more expensive Plus and Ultra variants could be the main targets for flagship smartphone fetishists - it is possible that Qualcomm's upper echelon Snapdragon 8 Gen 3 chipset is set to feature within these premium devices. Samsung's Exynos processors are not considered to be fan favorites, but industry insiders reckon that the latest performance figures indicate that Samsung's up-and-comer has the potential to turn some heads. Exact specifications for the Exynos 2400 are not public knowledge - one of the tipsters suggests that a 10-core layout has been settled on by Samsung, as well as a recent bump up in GPU core count - from 6 to 12. The company's own 4 nm SF4P process is the apparent choice for the production line.

A leaker has posted benchmark scores generated by an unknown device that was running an Exynos 2400 SoC - the Geekbench 5 results indicate an average single-core score of 1530 with a peak of 1711. The multi-core average score is shown to be 6210, and the highest number achieved is 6967. Therefore the Exynos 2400 is 31% faster (in multi-core performance) than the refreshed Snapdragon 8 Gen 2 variant currently found in Galaxy S23 Ultra smartphones, but the divide between the two in terms of single-core performance is not so great. The 2400 manages to outpace (by 30%) the average multi-core score of Apple's present generation A16 Bionic, although the latter beats the presumed engineering sample's single-core result by 20%. The Exynos 2400 will face a new lineup of rival mobile processors in 2024 - namely Apple's next generation A17 and Qualcomm's Snapdragon 8 Gen 3 - so it is difficult to extrapolate today's leaked figures into a future scenario.
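Back-solving the quoted percentage gaps gives the implied rival multi-core baselines; a quick sketch (Geekbench 5 figures derived from the leak's numbers, not independently sourced):

```python
# Multi-core baselines implied by the percentage gaps quoted above.
exynos_2400_multi_avg = 6210
implied_snapdragon_8g2 = exynos_2400_multi_avg / 1.31   # "31% faster" than the S23 Ultra chip
implied_a16_multi_avg = exynos_2400_multi_avg / 1.30    # "outpaces by 30%" the A16 average
print(f"Implied Snapdragon 8 Gen 2 (for Galaxy) multi-core: ~{implied_snapdragon_8g2:.0f}")  # ~4740
print(f"Implied A16 Bionic multi-core average: ~{implied_a16_multi_avg:.0f}")                # ~4780
```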

3DMark Gets AMD FidelityFX Super Resolution 2 (FSR 2) Feature Test

UL Benchmarks today released an update to 3DMark that adds a Feature Test for AMD FidelityFX Super Resolution 2 (FSR 2), the company's popular upscaling-based performance enhancement. This was long overdue, as 3DMark has had a Feature Test for DLSS for years now; and as of October 2022, it even got one for Intel XeSS. The new FSR 2 Feature Test uses a scene from the Speed Way DirectX 12 Ultimate benchmark, where it compares fine details of a vehicle and a technic droid between native resolution with TAA and FSR 2, and highlights the performance uplift. To use the feature test, you'll need any GPU that supports DirectX 12 and FSR 2 (that covers AMD, NVIDIA, and Intel Arc). Owners of 3DMark who purchased it before October 12, 2022 will need to buy the Speed Way upgrade to unlock the AMD FSR 2 feature test.

Intel Xeon W-3400/2400 "Sapphire Rapids" Processors Run First Benchmarks

Thanks to Puget Systems, we have a preview of Intel's latest Xeon W-3400 and Xeon W-2400 workstation processors based on Sapphire Rapids core technology. Delivering up to 56 cores and 112 threads, these CPUs are paired with up to eight terabytes of eight-channel DDR5-4800 memory. For expansion, they offer up to 112 PCIe 5.0 lanes and come with up to a 350 W TDP; some models are unlocked for overclocking. This interesting HEDT family for workstation usage comes at a premium, with an MSRP of $5,889 for the top-end SKU, and motherboard prices are also on the pricey side. However, all of this should come as no surprise given the performance professionals expect from these chips. Puget Systems has published test results that include: Photoshop, After Effects, Premiere Pro, DaVinci Resolve, Unreal Engine, Cinebench R23.2, Blender, and V-Ray. Note that Puget Systems said: "While this post has been an interesting preview of the new Xeon processors, there is still a TON of testing we want to do. The optimizations Intel is working on is of course at the top, but there are several other topics we are highly interested in." So we expect better numbers in the future.
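For reference, the eight-channel DDR5-4800 configuration works out to roughly 307 GB/s of theoretical peak bandwidth per socket; a one-line check:

```python
# Theoretical peak bandwidth of the eight-channel DDR5-4800 configuration.
channels, mts, bytes_per_transfer = 8, 4800, 8
peak_gbs = channels * mts * bytes_per_transfer / 1000
print(f"~{peak_gbs:.0f} GB/s")   # ~307 GB/s per socket
```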
Below, you can see the comparison with AMD's competing Threadripper Pro HEDT SKUs, along with power usage using different Windows OS power profiles:

Alleged NVIDIA AD106 GPU Tested in 3DMark and AIDA64

Benchmarks and specifications of an alleged NVIDIA AD106 GPU have turned up on Chiphell, although the original poster has since removed all the details. Thanks to @harukaze5719 on Twitter, who posted the details, we still get an insight into what we might be able to expect from NVIDIA's upcoming mid-range cards. All these details should be taken with a pinch of salt, as the original source isn't exactly what we'd call trustworthy. Based on the data in the TPU GPU database, the GPU in question should be the GeForce RTX 4070 Mobile with much higher clock speeds, or an equivalent desktop part that offers more CUDA cores than the RTX 4060 Ti. Whatever the specific AD106 GPU is, it's being compared to the GeForce RTX 2080 Super and the RTX 3070 Ti.

The GPU was tested in AIDA64 and 3DMark, and it beats the RTX 2080 Super in all of the tests, while drawing some 55 W less power at the same time. In some of the benchmarks the wins are within the margin of testing error, for example when it comes to the memory performance in AIDA64. However, we're looking at a GPU with only half the memory bus width here, as the AD106 GPU has a 128-bit memory bus, compared to 256-bit for the RTX 2080 Super; although the memory clocks are much higher, the overall memory bandwidth is still nearly 36 percent higher on the RTX 2080 Super. Yet, the AD106 GPU manages to beat the RTX 2080 Super in all of the memory benchmarks in AIDA64.
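Working backwards from that ~36 percent figure gives a rough idea of the leaked card's memory setup; a sketch (the RTX 2080 Super's 496 GB/s is its standard specification, the rest is derived from the claim above):

```python
# Working backwards from the ~36% bandwidth deficit claimed above.
rtx2080s_bandwidth_gbs = 256 / 8 * 15.5                      # 496 GB/s (15.5 Gbps GDDR6, 256-bit)
implied_ad106_bandwidth = rtx2080s_bandwidth_gbs / 1.36      # ~36% lower than the 2080 Super
implied_ad106_gbps = implied_ad106_bandwidth / (128 / 8)     # effective rate on a 128-bit bus
print(f"Implied AD106 bandwidth: ~{implied_ad106_bandwidth:.0f} GB/s")    # ~365 GB/s
print(f"Implied effective memory speed: ~{implied_ad106_gbps:.1f} Gbps")  # ~22.8 Gbps
```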

First Alleged AMD Radeon RX 7900-series Benchmarks Leaked

With only a couple of days to go until AMD RX 7900-series reviews go live, some alleged benchmarks of both the RX 7900 XTX and RX 7900 XT have leaked on Twitter. The two cards are being compared to an NVIDIA RTX 4080 card in no less than seven different game titles, all running at 4K resolution. The games are God of War, Cyberpunk 2077, Assassin's Creed Valhalla, Watch Dogs: Legion, Red Dead Redemption 2, Doom Eternal and Horizon Zero Dawn. The cards were tested on a system with a Core i9-12900K CPU, which was paired with 32 GB of RAM of unknown type.

It's too early to draw any real conclusions from this test, but in general, the RX 7900 XTX comes out on top, ahead of the RTX 4080, so no surprises here. The RX 7900 XT is either tied with the RTX 4080 or a fair bit slower, with the exception being Red Dead Redemption 2, where the RTX 4080 is the slowest card, although it also appears to have some issues, since the one percent lows are hitting 2 FPS. Soon, the reviews will be out and everything will become more clear, but it appears that AMD's RX 7900 XTX will give NVIDIA's RTX 4080 a run for its money, if these benchmarks are anything to go by.

Update Dec 11th: The original tweet has been removed, for unknown reasons. It could be because the numbers were fake, or because they were in breach of AMD's NDA.

UL Launches New 3DMark Feature Test for Intel XeSS

We're excited to release a new 3DMark feature test for Intel's new XeSS AI-enhanced upscaling technology. This new feature test is available in 3DMark Advanced and Professional Editions. 3DMark feature tests are special tests designed to highlight specific techniques, functions, or capabilities. The Intel XeSS feature test shows you how XeSS affects performance.

The 3DMark Intel XeSS frame inspector tool helps you compare image quality with an interactive side-by-side comparison of XeSS and native-resolution rendering. Check out the images below to see an example comparison of native resolution rendering and XeSS in the new 3DMark feature test.

Intel Outs First Xeon Scalable "Sapphire Rapids" Benchmarks, On-package Accelerators Help Catch Up with AMD EPYC

Intel, on the second day of its Innovation event, turned attention to its next-generation Xeon Scalable "Sapphire Rapids" server processors, and demonstrated on-package accelerators. These are fixed-function hardware components that accelerate specific kinds of popular server workloads (i.e. run them faster than a CPU core can). With these, Intel hopes to close the CPU core-count gap it has with AMD EPYC, with the upcoming "Zen 4" EPYC chips expected to launch with up to 96 cores per socket in the conventional variant, and up to 128 cores per socket in the cloud-optimized variant.

Intel's on-package accelerators include AMX (advanced matrix extensions), which accelerate recommendation-engines, natural language processing (NLP), image-recognition, etc; DLB (dynamic load-balancing), which accelerates security-gateway and load-balancing; DSA (data-streaming accelerator), which speeds up the network stack, guest OS, and migration; IAA (in-memory analysis accelerator), which speeds up big-data (Apache Hadoop), IMDB, and warehousing applications; a feature-rich implementation of the AVX-512 instruction-set for a plethora of content-creation and scientific applications; and lastly, the QAT (QuickAssist Technology), with speed-ups for data compression, OpenSSL, nginx, IPsec, etc. Unlike "Ice Lake-SP," QAT is now implemented on the processor package instead of the PCH.

Puget Systems Publishes Windows 11 Content Creation Benchmarks

Puget Systems has recently performed a variety of tests to determine whether Windows 11 or Windows 10 is faster for content-creation tasks such as photo and video editing. The tests were conducted on four systems, built around an AMD Threadripper Pro 5995WX, an AMD Threadripper Pro 5975WX, an AMD Ryzen 9 5950X, and an Intel Core i9-12900K, each paired with an RTX 3080 and 64/128 GB of memory. The benchmarks were primarily taken from the PugetBench suite of tests, with each test run multiple times.

The video editing tests were conducted using Premiere Pro, After Effects, and DaVinci Resolve, where Premiere Pro saw a small performance improvement in Windows 10, while the other programs performed similarly on both operating systems. The photo editing tests used Photoshop and Lightroom Classic, with average performance equal across Windows 10 and 11. The CPU rendering benchmarks featured Cinebench, V-Ray, and Blender, where once again the results were all within the margin of error. The GPU rendering tests using Octane, V-Ray, and Blender showed some differences, with V-Ray and Blender both performing best in Windows 11. The final section was Game Development in Unreal Engine, where a small advantage could be had by using Windows 11.