News Posts matching #3DMark


Intel Xeon W9-3495X Sets World Record, Dethrones AMD Threadripper

When Intel announced the 4th generation Xeon W processors, the company noted that the clock multiplier was left unlocked, giving overclockers the chance to push these chips even harder. It was only a matter of time before we saw the top-end Xeon W SKU take a shot at the Cinebench R23 world record. The Intel Xeon W9-3495X is now officially the world record holder with a score of 132,484 points in Cinebench R23. The overclocker OGS from Greece managed to push all 56 cores and 112 threads of the CPU to a 5.4 GHz clock frequency using a liquid nitrogen (LN2) cooling setup. The OC record was set on March 8th on an ASUS Pro WS W790E-SAGE SE motherboard with a G.SKILL Zeta R5 memory kit.

The previous record was held by AMD's Threadripper Pro 5995WX, with its 64 cores and 128 threads clocked at 5.4 GHz. Not only did the Xeon W9-3495X set the Cinebench R23 record, but the SKU also posted new records in Cinebench R20, Y-Cruncher, the 3DMark CPU test, and Geekbench 3.

Alleged NVIDIA AD106 GPU Tested in 3DMark and AIDA64

Benchmarks and specifications of an alleged NVIDIA AD106 GPU have turned up on Chiphell, although the original poster has since removed all the details. Thanks to @harukaze5719 on Twitter, who reposted the details, we still get an insight into what we might be able to expect from NVIDIA's upcoming mid-range cards. All of these details should be taken with a grain of salt, as the original source isn't exactly what we'd call trustworthy. Based on the data in the TPU GPU database, the GPU in question should be either the GeForce RTX 4070 Mobile with much higher clock speeds, or an equivalent desktop part that offers more CUDA cores than the RTX 4060 Ti. Whatever the specific AD106 GPU is, it's being compared to the GeForce RTX 2080 Super and the RTX 3070 Ti.

The GPU was tested in AIDA64 and 3DMark, and it beats the RTX 2080 Super in all of the tests while drawing some 55 W less power. Some of the wins are within the margin of testing error, for example the memory performance in AIDA64. That result is all the more notable because the AD106 GPU has far less memory bandwidth to work with: it uses a 128-bit memory bus compared to 256-bit on the RTX 2080 Super, and even though its memory clocks are much higher, the RTX 2080 Super still has nearly 36 percent more overall memory bandwidth. Yet the AD106 GPU manages to beat the RTX 2080 Super in all of the memory benchmarks in AIDA64.

NVIDIA RTX 4080 20-30% Slower than RTX 4090, Still Smokes the RTX 3090 Ti: Leaked Benchmarks

Benchmarks of NVIDIA's upcoming GeForce RTX 4080 (formerly known as the RTX 4080 16 GB) are already out, as the leaky taps in the Asian tech forumscape know no bounds. Someone on the ChipHell forums with access to an RTX 4080 sample and drivers put it through a battery of synthetic and gaming tests. The $1,200 MSRP graphics card was tested in 3DMark Time Spy, Port Royal, and games that include Forza Horizon 5, Call of Duty: Modern Warfare II, Cyberpunk 2077, Borderlands 3, and Shadow of the Tomb Raider.

The big picture: the RTX 4080 lands roughly halfway between the RTX 3090 Ti and the RTX 4090. At stock settings in 3DMark Time Spy Extreme (4K), it delivers 71% of the performance of an RTX 4090, whereas the RTX 3090 Ti manages 55%. With its power limit slider maxed out, the RTX 4080 inches 2 percentage points closer to the RTX 4090 (73%), and a bit of manual OC adds another 4 percentage points. Things change slightly in 3DMark Port Royal, where the RTX 4080 delivers 69% of the RTX 4090's performance, against 58% for the RTX 3090 Ti.
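Converting those "percent of RTX 4090" figures into head-to-head gaps makes the headline numbers easier to see. A minimal sketch of the arithmetic, using only the Time Spy Extreme percentages quoted above:

```cpp
// Relative-performance arithmetic on the quoted Time Spy Extreme figures.
#include <cstdio>

int main()
{
    const double rtx4080_stock = 71.0;  // % of RTX 4090 at stock
    const double rtx3090ti     = 55.0;  // % of RTX 4090

    // ~29% behind the RTX 4090, in line with the 20-30% range in the headline.
    std::printf("RTX 4080 deficit vs RTX 4090:   %.0f%%\n", 100.0 - rtx4080_stock);

    // ~29% ahead of the RTX 3090 Ti at stock settings.
    std::printf("RTX 4080 lead over RTX 3090 Ti: %.0f%%\n",
                (rtx4080_stock / rtx3090ti - 1.0) * 100.0);
    return 0;
}
```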

UL Benchmarks Launches 3DMark Speed Way DirectX 12 Ultimate Benchmark

UL Solutions is excited to announce that our new DirectX 12 Ultimate benchmark, 3DMark Speed Way, is now available to download and buy on Steam and on the UL Solutions website. 3DMark Speed Way is sponsored by Lenovo Legion. Developed with input from AMD, Intel, NVIDIA, and other leading technology companies, Speed Way is an ideal benchmark for comparing the DirectX 12 Ultimate performance of the latest graphics cards.

DirectX 12 Ultimate is the next-generation application programming interface (API) for gaming graphics. It adds powerful new capabilities to DirectX 12, helping game developers improve visual quality, boost frame rates, reduce loading times and create vast, detailed worlds. 3DMark Speed Way's engine demonstrates what the latest DirectX API brings to ray-traced gaming, using DirectX Raytracing tier 1.1 for real-time global illumination and real-time ray-traced reflections, coupled with new performance optimizations like mesh shaders.

3DMark Speed Way DirectX 12 Ultimate Benchmark is Launching on October 12

3DMark Speed Way is a new GPU benchmark that showcases the graphics technology that will power the next generation of gaming experiences. We're excited to announce that Speed Way, sponsored by Lenovo Legion, is releasing on October 12. Our team has been working hard to get Speed Way ready for you to use for benchmarking, stress testing, and comparing the new PC hardware coming this fall.

From October 12 onward, Speed Way will be included in the price when you buy 3DMark from Steam or our own online store. Since we released Time Spy in 2016, 3DMark users have enjoyed many free updates, including Time Spy Extreme, the 3DMark CPU Profile, 3DMark Wild Life, and multiple tests demonstrating new DirectX features. With the addition of Speed Way, the price of 3DMark on Steam and 3DMark Advanced Edition will go up from $29.99 to $34.99.

UL Launches New 3DMark Feature Test for Intel XeSS

We're excited to release a new 3DMark feature test for Intel's new XeSS AI-enhanced upscaling technology. This new feature test is available in 3DMark Advanced and Professional Editions. 3DMark feature tests are special tests designed to highlight specific techniques, functions, or capabilities. The Intel XeSS feature test shows you how XeSS affects performance.

The 3DMark Intel XeSS frame inspector tool helps you compare image quality with an interactive side-by-side comparison of XeSS and native-resolution rendering. Check out the images below to see an example comparison of native resolution rendering and XeSS in the new 3DMark feature test.

NVIDIA RTX 4090 "Ada" Scores Over 19000 in Time Spy Extreme, 66% Faster Than RTX 3090 Ti

NVIDIA's next-generation GeForce RTX 4090 "Ada" flagship graphics card allegedly scores over 19,000 points in the 3DMark Time Spy Extreme synthetic benchmark, according to kopite7kimi, a reliable source for NVIDIA leaks. This would put its score around 66 percent above that of the current RTX 3090 Ti flagship. The RTX 4090 is expected to be based on the 5 nm AD102 silicon, with a rumored CUDA core count of 16,384. The higher IPC of the new architecture, coupled with higher clock speeds and power limits, could be contributing to this feat. Time Spy Extreme is a traditional DirectX 12 raster-only benchmark, with no ray-traced elements. The Ada graphics architecture is expected to reduce the "cost" of ray tracing (versus raster-only rendering), although we're yet to see leaks of RTX performance.

Intel i9-13900K "Raptor Lake" ES Improves Gaming Minimum Framerates by 11-27% Over i9-12900KF

Intel's 13th Gen Core "Raptor Lake" is shaping up to be another leadership desktop processor lineup, with an engineering sample clocking significant increases in gaming minimum framerates over the preceding 12th Gen Core i9-12900K "Alder Lake." Extreme Player, a tech-blogger on the Chinese video streaming site Bilibili, posted a comprehensive gaming performance review of an i9-13900K engineering sample covering eight games across three resolutions, comparing it with a retail i9-12900KF. The games include CS:GO, Final Fantasy XIV: Endwalker, PUBG, Forza Horizon 5, Far Cry 6, Red Dead Redemption 2, Horizon Zero Dawn, and Monster Hunter Rise, alongside the synthetic benchmark 3DMark. Both processors were tested with a GeForce RTX 3090 Ti graphics card, 32 GB of DDR5-6400 memory, and a 1.5 kW power supply.

The i9-13900K ES is shown posting performance leads of just 1-2% in the graphics tests of 3DMark, but an incredible 36-38% gain in the CPU-intensive tests of the suite. This is explained not just by the increased per-core performance of both the P-cores and E-cores, but also by the addition of 8 more E-cores. Although "Raptor Lake" uses the same "Gracemont" E-cores, the L2 cache per E-core cluster has been doubled in size. Horizon Zero Dawn sees anywhere from a 0.7% drop to a 10.98% increase in frame rates. Red Dead Redemption 2 shows some anomalous 70% frame-rate increases; discounting those, we still see a 2-9% increase. Far Cry 6 posts modest 2.4% increases. Forza Horizon 5, PUBG, Monster Hunter Rise, and FF XIV each report significant increases in minimum framerates, well above 20%.

Intel Arc A550M & A770M 3DMark Scores Surface

The upcoming Intel Arc A550M and A770M mobile graphics cards have recently appeared on 3DMark in Time Spy and Fire Strike Extreme. The Intel Arc Alchemist A550M features an ACM-G10 GPU with 16 Xe cores paired with 8 GB of GDDR6 memory on a 128-bit bus, while the A770M features the same GPU but with 32 Xe cores and 16 GB of GDDR6 memory on a 256-bit bus.

The A550M was tested in 3DMark Time Spy, where it scored 6017 points running on an older 1726 driver with Intel Advanced Performance Optimizations (APO) enabled. The A770M was benchmarked in 3DMark Fire Strike Extreme, where it scored a respectable 13244 graphics points running on test drivers, which places it near the RTX 3070M. These scores may not correlate with real-world gaming performance, as figures provided directly by Intel show the Arc A730M being only 12% faster than the RTX 3060M.

First Intel Arc A730M Powered Laptop Goes on Sale, in China

The first benchmark result of an Intel Arc A730M laptop has made an appearance online, and the mysterious laptop used to run 3DMark turns out to be from a Chinese company called Machenike. The laptop itself appears to go under the name Dawn16 Discovery Edition and features a 16-inch display with a native resolution of 2560 x 1600 and a 165 Hz refresh rate. CPU-wise, Machenike went with a Core i7-12700H, a 6+8 core CPU with 20 threads whose performance cores top out at 4.7 GHz. The CPU has been paired with 16 GB of 4800 MHz DDR5 memory, and the system also has a PCIe 4.0 NVMe SSD of some kind with a max read speed of 3500 MB/s, which isn't particularly impressive. Other features include Thunderbolt 4 support, WiFi 6E and Bluetooth 5.2, as well as an 80 Wh battery pack.

However, none of the above is particularly unique, and what matters here is of course the Intel Arc A730M GPU. It has been paired with 12 GB of GDDR6 memory on a 192-bit interface, running at 14 Gbps according to the specs, for a stated memory bandwidth of 336 GB/s. The company also provided a couple of performance metrics: a 3DMark TimeSpy figure of 10,002 points and a 3DMark Fire Strike figure of 23,090 points. The TimeSpy score is slightly lower than the number posted earlier, but helps verify that earlier test result. Other interesting nuggets of information include support for 8K60 12-bit HDR video decoding for AV1, HEVC, AVC and VP9, as well as 8K 10-bit HDR encoding for the same formats. A figure for the Puget benchmark in what appears to be Photoshop (PS) is also provided, where the system scores 1188 points. The laptop is up for what appears to be pre-order, with a price tag of 7,499 RMB, or about US$1,130.
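The quoted 336 GB/s follows directly from the bus width and the per-pin data rate. A quick back-of-the-envelope sketch; the formula is generic, and only the A730M figures above come from the spec sheet:

```cpp
// GDDR6 bandwidth = (bus width in bits / 8 bits per byte) * per-pin data rate in Gbps.
#include <cstdio>

double gddr6_bandwidth_gbs(int bus_width_bits, double data_rate_gbps)
{
    return bus_width_bits / 8.0 * data_rate_gbps;
}

int main()
{
    // 192-bit bus at 14 Gbps -> 336 GB/s, matching the figure Machenike quotes.
    std::printf("Arc A730M: %.0f GB/s\n", gddr6_bandwidth_gbs(192, 14.0));
    return 0;
}
```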

Intel Arc A730M 3DMark TimeSpy Score Spied, in League of RTX 3070 Laptop GPU

Someone with access to a gaming notebook powered by the Intel Arc "Alchemist" A730M discrete GPU posted its alleged 3DMark TimeSpy score, and it looks pretty interesting: 10,138 points, which is somewhat higher than that of the GeForce RTX 3070 Laptop GPU, or halfway between the desktop GeForce RTX 3060 and desktop RTX 3060 Ti.

Based on the Xe-HPG graphics architecture, the Arc A730M features 24 Xe Cores, or 384 execution units, which work out to 3,072 unified shaders. This is not even Intel's most powerful mobile GPU; that title goes to the A770M, which maxes out the ACM-G10 ASIC with all 512 execution units (4,096 unified shaders) enabled. This result raises hopes for a competitive high-end GPU for gaming notebooks, one that can perform in the league of the RTX 3080 Laptop GPU or the Radeon RX 6800M.

AMD Radeon RX 6950XT Beats GeForce RTX 3090 Ti in 3DMark TimeSpy

We are nearing the arrival of AMD's Radeon RX 6x50 XT graphics card refresh, and benchmarks are starting to appear. Today, we received a 3DMark TimeSpy result for the AMD Radeon RX 6950 XT GPU and compared it to existing solutions, most notably NVIDIA's GeForce RTX 3090 Ti, which delivered a surprise. Looking at the graphics score, the Radeon RX 6950 XT scored 22,209 points in the 3DMark TimeSpy test, while the GeForce RTX 3090 Ti scored 20,855 points in the same test. Of course, we have to account for the fact that 3DMark TimeSpy is a synthetic benchmark that tends to perform very well on AMD RDNA2 hardware, so we have to wait for official independent testing like TechPowerUp's reviews.

The AMD Radeon RX 6950 XT card was tested with a Ryzen 7 5800X3D CPU paired with DDR4-3600 memory and pre-release 22.10-220411n drivers on Windows 10. We could see higher graphics scores with final drivers, and better performance from the upcoming refreshed SKUs.

UL Benchmarks Unveils the 3DMark Speed Way Benchmark

UL Benchmarks is ready with a new graphics benchmark for enthusiast-segment graphics cards, to show you whether your present hardware can deal with games coming out several years from now. The new "Speed Way" benchmark utilizes all of the features that make up the DirectX 12 Ultimate API, which include real-time ray tracing, mesh shaders, variable-rate shading, and sampler feedback. It uses ray tracing for reflections, lighting, and global illumination.

Until now, the various DirectX 12 Ultimate features were only represented in 3DMark as API feature tests. The older "Port Royal" benchmark, which released around the time of NVIDIA "Turing," was the company's first benchmark with real-time ray tracing, although it doesn't use all DirectX 12 Ultimate features. With each new benchmark, the makers of 3DMark collaborate with a new gaming-brand sponsor; for Speed Way, it's Legion, Lenovo's brand of pre-built gaming notebooks and desktops. UL Benchmarks didn't reveal an exact release date beyond "later this year."

AMD Radeon RX 6900 XT Scores Top Spot in 3DMark Fire Strike Hall of Fame with 3.1 GHz Overclock

The 3DMark Fire Strike Hall of Fame is where overclockers submit their best hardware benchmark runs and try to beat the very best. For years, one record managed to hold, and today it was finally defeated. Thanks to an extreme overclocker called "biso biso" from South Korea, part of the EVGA OC team, the top spot now belongs to the AMD Radeon RX 6900 XT graphics card. The previous 3DMark Fire Strike world record was set on April 22nd, 2020, when Vince Lucido, also known as K|NGP|N, set a record with four-way SLI of NVIDIA GeForce GTX 1080 Ti GPUs. However, that record became old news on January 27th, when biso biso made history with the AMD Radeon RX 6900 XT GPU.

The overclocker scored 62,389 points, just 1,183 more than the previous record. He pushed the Navi 21 XTX silicon that powers the Radeon RX 6900 XT card to an impressive 3,147 MHz. Paired with a GPU memory clock of 2,370 MHz, the GPU was most likely LN2-cooled to achieve these results. The overclocker used EVGA's Z690 DARK KINGPIN motherboard with an Intel Core i9-12900K processor as the platform of choice for this record. You can check it out on the 3DMark Fire Strike Hall of Fame website to see for yourself.

NVIDIA GeForce RTX 2050 & MX550 Laptop Graphics Cards Benchmarked

The recently announced NVIDIA GeForce RTX 2050, MX570, and MX550 laptop graphics cards have been benchmarked in 3DMark TimeSpy. The RTX 2050 and MX570 both feature the Ampere GA107 GPU with 2048 CUDA cores, paired with 4 GB and 2 GB of 64-bit GDDR6 memory respectively. The MX550 uses the Turing TU117 GPU with 1024 CUDA cores running at 1320 MHz, paired with 2 GB of 64-bit GDDR6 12 Gbps memory. The RTX 2050 and MX570 performed similarly in the 3DMark TimeSpy benchmark, achieving a graphics score of 3369, while the MX550 scores 2510 points. These new laptop graphics cards will officially launch in spring 2022.

EVGA RTX 3090 Kingpin & Intel Core i9-12900K Set New 3DMark Port Royal Record

The 3DMark Port Royal single-card benchmark has a new record of 20,014 points, set by South Korean overclocker biso biso for Team EVGA. The overclocker used an Intel Core i9-12900K running at 5.4 GHz on an EVGA Z690 Dark Kingpin motherboard, paired with an EVGA GeForce RTX 3090 Kingpin overclocked to 2,895 MHz. The system also featured 16 GB of SK Hynix DDR5 memory running at 6000 MHz, along with liquid nitrogen for cooling and Thermal Grizzly Kryonaut Extreme thermal paste on the CPU and GPU. This new record beats the previous record of 19,600, also from biso biso, which featured the Core i9-10900K and RTX 3090 Kingpin.
Armed with the latest hardware, including an EVGA Z690 DARK K|NGP|N motherboard and an EVGA GeForce RTX 3090 K|NGP|N running at a blistering 2,895 MHz GPU clock, extreme overclocker "biso biso" set a new single-GPU standard for 3DMark Port Royal with a score of 20,014! This marks the first ever 3DMark Port Royal (single card) score over 20,000 and a testament to the capabilities of the highest-performing EVGA products.

3DMark Receives DirectX 12 Ultimate Sampler Feedback Feature Test

DirectX 12 Ultimate is the next generation for gaming graphics. It adds powerful new capabilities to DirectX 12, including DirectX Raytracing Tier 1.1, Mesh Shaders, Sampler Feedback, and Variable Rate Shading (VRS). These features help game developers improve visual quality, boost frame rates, reduce loading times, and create vast, detailed worlds.

3DMark already has dedicated tests for DirectX Raytracing, Mesh Shaders, and Variable Rate Shading. Today, we're adding a Sampler Feedback feature test, making 3DMark the first publicly available application to include all four major DirectX 12 Ultimate features. Experience DirectX 12 Ultimate today, only with 3DMark! Measure performance and compare image quality on your PC with our DirectX 12 Ultimate feature tests. Each test also has an interactive mode that lets you change settings and visualizations in real time.
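The four DirectX 12 Ultimate features these tests cover map onto feature tiers that any application can query from a Direct3D 12 device. Below is a minimal sketch of such a capability check using the standard Windows SDK API, with error handling trimmed; it is not how 3DMark itself implements its tests:

```cpp
#include <windows.h>
#include <d3d12.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

int main()
{
    ID3D12Device* device = nullptr;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_0, IID_PPV_ARGS(&device))))
        return 1;  // no DirectX 12 capable adapter found

    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};  // DirectX Raytracing tier
    D3D12_FEATURE_DATA_D3D12_OPTIONS6 opts6 = {};  // Variable Rate Shading tier
    D3D12_FEATURE_DATA_D3D12_OPTIONS7 opts7 = {};  // Mesh Shader and Sampler Feedback tiers
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5, &opts5, sizeof(opts5));
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS6, &opts6, sizeof(opts6));
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS7, &opts7, sizeof(opts7));

    std::printf("Raytracing tier:       %d\n", (int)opts5.RaytracingTier);
    std::printf("VRS tier:              %d\n", (int)opts6.VariableShadingRateTier);
    std::printf("Mesh shader tier:      %d\n", (int)opts7.MeshShaderTier);
    std::printf("Sampler feedback tier: %d\n", (int)opts7.SamplerFeedbackTier);

    device->Release();
    return 0;
}
```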

Samsung Exynos SoC with RDNA2 Graphics Scores Highest Mobile Graphics Score

We recently reported that Samsung would be announcing their next-generation flagship Exynos processor with AMD RDNA2 graphics next month. We had heard that the RDNA2 GPU was expected to be ~30% faster than the Mali-G78 GPU in the Galaxy S21 Ultra; however, according to a new 3DMark Wild Life benchmark, the new processor appears to score 56% higher. This result would give the upcoming Exynos processor the fastest graphics available in any Android phone, even matching or beating the Apple A14 Bionic found in the iPhone 12. This early benchmark paints a very positive picture for the upcoming processor; however, we still don't know how the score will be affected under sustained load, or whether this performance will even be replicated in the final product.

3DMark Updated with New CPU Benchmarks for Gamers and Overclockers

UL Benchmarks is expanding 3DMark today by adding a set of dedicated CPU benchmarks. The 3DMark CPU Profile introduces a new approach to CPU benchmarking that shows how CPU performance scales with the number of cores and threads used. The new CPU Profile benchmark tests are available now in 3DMark Advanced Edition and 3DMark Professional Edition.

The 3DMark CPU Profile introduces a new approach to CPU benchmarking. Instead of producing a single number, the 3DMark CPU Profile shows how CPU performance scales and changes with the number of cores and threads used. The CPU Profile has six tests, each of which uses a different number of threads. The benchmark starts by using all available threads. It then repeats using 16 threads, 8 threads, 4 threads, 2 threads, and ends with a single-threaded test. These six tests help you benchmark and compare CPU performance for a range of threading levels. They also provide a better way to compare different CPU models by looking at the results from thread levels they have in common.
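As a rough illustration of the idea behind this kind of scaling test, the sketch below splits a fixed amount of synthetic work across different thread counts and times each run. The busy-work loop is an arbitrary stand-in, not UL's actual benchmark workload:

```cpp
// Minimal thread-scaling sketch in the spirit of the CPU Profile approach:
// run the same total workload at max, 16, 8, 4, 2, and 1 threads and time each pass.
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

// Busy-work loop; "volatile" keeps the compiler from optimizing it away.
static void spin(long long iterations)
{
    volatile double acc = 0.0;
    for (long long i = 0; i < iterations; ++i) acc += i * 0.5;
}

// Divide total_work across n threads and return the wall-clock time in seconds.
static double run_with_threads(unsigned n, long long total_work)
{
    auto start = std::chrono::steady_clock::now();
    std::vector<std::thread> pool;
    for (unsigned i = 0; i < n; ++i) pool.emplace_back(spin, total_work / n);
    for (auto &t : pool) t.join();
    return std::chrono::duration<double>(std::chrono::steady_clock::now() - start).count();
}

int main()
{
    unsigned max_threads = std::thread::hardware_concurrency();
    if (max_threads == 0) max_threads = 1;  // hardware_concurrency() may return 0
    const long long total_work = 400'000'000;

    for (unsigned n : {max_threads, 16u, 8u, 4u, 2u, 1u})
        if (n <= max_threads)
            std::printf("%2u threads: %.3f s\n", n, run_with_threads(n, total_work));
    return 0;
}
```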

Intel Core i9-11900KB Beast Canyon NUC 11 Extreme Benchmarked

We now have preliminary 3DMark benchmarks for the unreleased NUC 11 Extreme "Beast Canyon." We have already seen various leaks of the upcoming device detailing the available configurations and options. The NUC 11 Extreme will be offered with a choice of four processors, the highest being the upcoming Intel Core i9-11900KB, which is the star of today's benchmarks. The i9-11900KB-equipped NUC was paired with a desktop RTX 3060 and put through its paces in the Fire Strike and Time Spy 3DMark benchmarks. The i9-11900KB is a specialty processor designed for use in NUC products with the new add-in card form factor and is largely comparable to the desktop i9-11900.

The Intel Core i9-11900KB is an 8-core, 16-thread mobile processor with a base clock of 3.3 GHz, a boost clock of 4.9 GHz, and a Thermal Velocity Boost of 5.3 GHz. The chip features 24 MB of L3 cache and comes with a configurable TDP of 55 W/65 W. The base clock is higher than the 2.5 GHz of the desktop i9-11900, while the boost clock is 300 MHz lower and the Thermal Velocity Boost is 100 MHz higher. The Intel Core i9-11900KB averages 93-101% of the performance of the i9-11900, which is to be expected considering the clock speeds.

UL Benchmarks Announces 3DMark Wild Life Extreme Cross-platform Benchmark

Last year UL Benchmarks released 3DMark Wild Life, a cross-platform benchmark for Android, iOS and Windows. Today we are releasing 3DMark Wild Life Extreme, a more demanding cross-platform benchmark for comparing the graphics performance of mobile computing devices such as Windows notebooks, Always Connected PCs powered by Windows 10 on Arm, Apple Mac computers powered by the M1 chip, and the next generation of smartphones and tablets.

3DMark Wild Life Extreme is a new cross-platform benchmark for Apple, Android and Windows devices. Run Wild Life Extreme to test and compare the graphics performance of the latest Windows notebooks, Always Connected PCs powered by Windows 10 on Arm, Apple Mac computers powered by the M1 chip, and the next generation of smartphones and tablets.

DOWNLOAD: 3DMark for Windows v2.18.7181

Intel Xe HPG Graphics Card Could Compete with Big Navi & Ampere

Intel started shipping their first Xe DG1 graphics card to OEMs earlier this year, featuring 4 GB of LPDDR4X and 80 execution units. While these initial cards aren't high performance and only compete with entry-level products like the NVIDIA MX350, they demonstrated Intel's ability to produce a discrete GPU. We have recently received some more potential performance information about Intel's next card, the DG2, from chief GPU architect Raja Koduri. Koduri posted an image of himself with the team at Intel's Folsom lab working on the Intel Iris Pro 5200 iGPU back in 2012, and noted that now, 9 years later, their new GPU is over 20 times faster. The Intel Iris Pro 5200 scores 1015 on Videocard Benchmarks and ~1,400 points in 3DMark Fire Strike; if we assume the new GPU scores 20x higher, we get values of ~20,300 in Videocard Benchmarks and ~28,000 points in Fire Strike. While these values don't extrapolate perfectly, they provide a good indication of potential performance, placing the GPU in the same realm as the NVIDIA RTX 3070 and AMD RX 6800.

NVIDIA GeForce RTX 3070 Memory-Modded with 16GB

PC enthusiast and overclocker VIK-on pulled off a daring memory chip mod on his Palit GeForce RTX 3070 GamingPro OC graphics card, swapping its 8 GB of 14 Gbps GDDR6 memory for 16 GB, using eight replacement 16 Gbit chips. The modded card recognizes the 16 GB of memory, is able to utilize it like other 16 GB graphics cards (such as the Radeon RX 6800), and is fairly stable in benchmarks and stress tests, although it wasn't stable initially and threw up some black screens. VIK-on later discovered that locking the clock speeds using EVGA Precision-X stabilizes the card, so it performs as expected.

The mod involves physically replacing the card's stock 8 Gbit memory chips with 16 Gbit ones, and shorting certain straps on the PCB that let the card recognize the desired memory chip brand and density. After the mod, the GeForce driver and GPU-Z are able to read 16 GB of video memory, and the card is able to handle stress tests such as FurMark. The card was initially underperforming in 3DMark, putting out a TimeSpy score of just 8356 points, but following the clock-speed lock fix, it scores around 13,000 points. The video presentation can be watched from the source link below. Kudos to VIK-on!

UL Benchmarks Announces DirectX 12 3DMark Mesh Shader Test

DirectX 12 Ultimate adds powerful new features and capabilities to DirectX 12 including DirectX Raytracing Tier 1.1, Mesh Shaders, Sampler Feedback, and Variable Rate Shading (VRS). After DirectX 12 Ultimate was announced, we started adding new tests to 3DMark to show how games can benefit from these new features. Our latest addition is the 3DMark Mesh Shader feature test, a new test that shows how game developers can boost frame rates by using mesh shaders in the graphics pipeline.

Kosin Demonstrates RTX 3090 Running Via Ryzen Laptop's M.2 Slot

Kosin, a Chinese subsidiary of Lenovo, has recently published a video showing how they modded a Ryzen notebook to run an RTX 3090 from its NVMe M.2 slot. Kosin used their Xiaoxin Air 14 laptop with a Ryzen 5 4600U processor for the demonstration. The system's internal M.2 NVMe SSD was removed and an M.2-to-PCIe expansion cable was attached, allowing them to connect the RTX 3090. Finally, the laptop housing was modified to let the PCIe cable exit the chassis, and a desktop power supply was attached to the RTX 3090 for power.

The system booted and correctly detected and utilized the attached RTX 3090. It performed admirably, scoring 14,008 points in 3DMark TimeSpy; for comparison, the RTX 3090 scores 15,552 points paired with a desktop Ryzen 5 3600, and 17,935 points paired with a Ryzen 7 5800X. While pairing an RTX 3090 with a mid-range mobile processor is an extreme example, it goes to show how much performance is achievable over the NVMe M.2 connector. The x4 PCIe 3.0 link of the laptop's M.2 slot can handle a maximum of 4 GB/s, while the x16 PCIe 3.0 slot on previous-generation processors offers 16 GB/s, and the new x16 PCIe 4.0 connector doubles that, providing 32 GB/s of available bandwidth.
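Those bandwidth figures fall out of multiplying the lane count by the usable per-lane throughput. A quick sketch of the arithmetic; the roughly 1 GB/s (PCIe 3.0) and 2 GB/s (PCIe 4.0) per-lane values are common approximations, and real-world throughput is slightly lower due to protocol overhead:

```cpp
// Approximate PCIe bandwidth = lanes * usable per-lane throughput.
#include <cstdio>

double pcie_bandwidth_gbs(int lanes, double per_lane_gbs)
{
    return lanes * per_lane_gbs;
}

int main()
{
    std::printf("PCIe 3.0 x4  (laptop M.2 slot):  %.0f GB/s\n", pcie_bandwidth_gbs(4, 1.0));
    std::printf("PCIe 3.0 x16 (older desktops):   %.0f GB/s\n", pcie_bandwidth_gbs(16, 1.0));
    std::printf("PCIe 4.0 x16 (current desktops): %.0f GB/s\n", pcie_bandwidth_gbs(16, 2.0));
    return 0;
}
```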