News Posts matching #3DMark

GDDR6 GeForce RTX 4070 Tested, Loses 0-1% Performance Against RTX 4070 with GDDR6X

NVIDIA quietly released a variant of the GeForce RTX 4070 featuring slower 20 Gbps GDDR6 memory, replacing the 21 Gbps GDDR6X that the original RTX 4070 comes with. Wccftech obtained a GALAX-branded RTX 4070 GDDR6 and put it through benchmarks focused on comparing it to a regular RTX 4070. Memory type and speed are the only changes in the specs; neither the core configuration nor the GPU clock speeds are altered. Wccftech's testing shows that the RTX 4070 GDDR6 is 0-1% slower than the RTX 4070 (GDDR6X) at 1080p and 1440p resolutions, while the gap grows to about 2% at 4K Ultra HD.
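
As a quick sanity check (back-of-the-envelope arithmetic, not from Wccftech's article), the bandwidth gap between the two variants on the RTX 4070's 192-bit bus is small enough to explain the minor performance differences:

```python
# Memory bandwidth comparison for the two RTX 4070 variants.
# bandwidth (GB/s) = per-pin data rate (Gbps) * bus width (bits) / 8
BUS_WIDTH_BITS = 192  # RTX 4070 memory bus

def bandwidth_gb_s(data_rate_gbps: float, bus_bits: int = BUS_WIDTH_BITS) -> float:
    return data_rate_gbps * bus_bits / 8

gddr6x = bandwidth_gb_s(21.0)  # original RTX 4070 (GDDR6X): 504 GB/s
gddr6 = bandwidth_gb_s(20.0)   # new GDDR6 variant: 480 GB/s
print(f"GDDR6X: {gddr6x:.0f} GB/s, GDDR6: {gddr6:.0f} GB/s")
print(f"Bandwidth deficit: {(1 - gddr6 / gddr6x) * 100:.1f}%")  # ~4.8%
```

A roughly 5% bandwidth cut translating into a 0-2% frame-rate loss is consistent with the RTX 4070 being compute-bound at lower resolutions and only becoming bandwidth-sensitive at 4K.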

Wccftech's test bed is comprehensive, with 27 game tests, each run at 3 resolutions, plus 7 synthetic tests. The synthetic tests are mainly drawn from the 3DMark suite, including Speed Way, Fire Strike, Time Spy, Port Royal, and their presets. Here, the RTX 4070 GDDR6 is nearly identical in performance, with a 0-0.2% delta from the RTX 4070 GDDR6X. In the game tests, performance varies by resolution. At 1080p the delta is 0-1%, with the only noteworthy outliers being "Metro Exodus" (extreme preset), where the RTX 4070 GDDR6 loses 4.2%, and "Alan Wake 2," where it loses 2.3%.

Intel Ships 0x129 Microcode Update for 13th and 14th Generation Processors with Stability Issues

Intel has officially started shipping the "0x129" microcode update for its 13th and 14th Generation "Raptor Lake" and "Raptor Lake Refresh" processors. This critical update is currently being pushed to all OEM/ODM partners to address the stability issues these processors have been facing. According to Intel, the microcode fixes "incorrect voltage requests to the processor that are causing elevated operating voltage." Intel's analysis shows that the root cause of the stability problems is excessive voltage during operation, which degrades the silicon and raises the minimum voltage required for stable operation. Intel calls this "Vmin"; it's a theoretical construct rather than an actual measured voltage, akin to the minimum speed an airplane needs to stay airborne. The 0x129 microcode patch limits the processor's voltage requests to no higher than 1.55 V, which should prevent further degradation. Overclocking is still supported; enthusiasts will have to disable the eTVB setting in their BIOS to push the processor beyond the 1.55 V threshold. The company's internal testing shows that the new default settings with limited voltages have minimal performance impact within standard run-to-run variation, with only a single game (Hitman 3: Dartmoor) showing degradation. For a full statement from Intel, see the quote below.

Colorful Presents iGame Lab Project: Highest-Performance GeForce RTX 4090 GPUs Limited to 300 Pieces, OC'd to 3.8 GHz

At Computex 2024, Colorful launched an ultra-exclusive new graphics card: the iGame Lab 4090. This limited-edition GPU is squarely targeted at hardcore overclockers and performance enthusiasts willing to pay top dollar for the absolute best. With only 300 units produced globally, the iGame Lab 4090 represents the pinnacle of Colorful's engineering efforts. Each chip was hand-selected from thousands after rigorous binning to ensure premium silicon capable of extreme overclocks. The card's striking aesthetics feature a clean white shroud with silver accent armor. Beyond the intricate design, the real draw is performance. The iGame Lab 4090 has already shattered records, with professional overclocker CENs pushing it past 3.8 GHz under 3D load and setting a new world-record 3DMark Time Spy Extreme score of 24,103 points. Out of the box, the card features a base clock of 2235 MHz, a boost clock of 2520 MHz, and a turbo mode of 2625 MHz, all in a 3-slot design.

AMD Ryzen 7 8700G Loves Memory Overclocking, which Vastly Favors its iGPU Performance

Entry-level discrete GPUs are in trouble: the first reviews of the AMD Ryzen 7 8700G desktop APU show that its iGPU is capable of beating the discrete GeForce GTX 1650, which means it should also beat the Radeon RX 6500 XT that offers comparable performance. Based on the 4 nm "Hawk Point" monolithic silicon, the 8700G packs the powerful Radeon 780M iGPU based on the latest RDNA 3 graphics architecture, with as many as 12 compute units, worth 768 stream processors, 48 TMUs, and an impressive 32 ROPs, plus full support for the DirectX 12 Ultimate API requirements, including ray tracing. A review by a Chinese tech publication on Bilibili showed that it's possible for an overclocked 8700G to beat a discrete GTX 1650 in 3DMark Time Spy.

It's important to note that both the iGPU engine clock and the APU's memory frequency were increased. The reviewer set the iGPU engine clock to 3400 MHz, up from its 2900 MHz reference speed. It turns out that, much like its predecessor the 5700G "Cezanne," the new 8700G "Hawk Point" features a more capable memory controller than its chiplet-based counterpart (in this case the Ryzen 7000 "Raphael"); the reviewer achieved a DDR5-8400 memory overclock. The combination of the two resulted in a 17% increase in the Time Spy score over stock speeds, which is how the chip manages to beat the discrete GTX 1650 (comparable performance to the RX 6500 XT at 1080p).
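
For a sense of where that 17% comes from, here is the rough uplift arithmetic (assuming AMD's official DDR5-5200 memory spec as the stock baseline; treat that figure as an assumption):

```python
# Rough uplift estimate for the overclocked Ryzen 7 8700G.
# Dual-channel DDR5 transfers 16 bytes per cycle (2 channels x 64-bit).
def ddr5_bandwidth_gb_s(mt_s: int) -> float:
    return mt_s * 16 / 1000

stock_bw = ddr5_bandwidth_gb_s(5200)  # 83.2 GB/s (assumed stock spec)
oc_bw = ddr5_bandwidth_gb_s(8400)     # 134.4 GB/s
print(f"Memory bandwidth: {stock_bw:.1f} -> {oc_bw:.1f} GB/s "
      f"(+{(oc_bw / stock_bw - 1) * 100:.0f}%)")  # +62%

igpu_clock_gain = (3400 / 2900 - 1) * 100
print(f"iGPU engine clock: +{igpu_clock_gain:.0f}%")  # +17%
```

The iGPU clock bump alone matches the 17% Time Spy gain on paper, but an iGPU sharing system memory is chronically bandwidth-starved, which is why the memory overclock is needed for the extra clocks to actually pay off.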

UL Solutions Previews Upcoming 3DMark Steel Nomad Benchmark

Thank you to the 3DMark community - the gamers, overclockers, hardware reviewers, tech-heads and those in the industry using our benchmarks, who have joined us in discovering what the cutting edge of PC hardware can do over this last quarter of a century. Looking back, it's amazing how far graphics have come, and we're very excited to see what the next 25 years bring.

After looking back, it's time to share a sneak peek of what's coming next. Here are some preview screenshots for 3DMark Steel Nomad, our successor to 3DMark Time Spy. It's been more than seven years since we launched Time Spy, and after more than 42 million submitted results, we think it's time for a new heavy non-ray tracing benchmark. Steel Nomad will be our most demanding non-ray tracing benchmark and will not only support Windows using DirectX 12, but also macOS and iOS using Metal, Android using Vulkan, and Linux using Vulkan for Enterprise and reviewers. To celebrate 3DMark's 25th year, the scene will feature some callbacks to many of our previous benchmarks. We hope you have fun finding them all!

UL Solutions Launches 3DMark Solar Bay, New Cross-Platform Ray Tracing Benchmark

We're excited to announce the launch of 3DMark Solar Bay, a new cross-platform benchmark for testing ray-traced graphics performance on Windows PCs and high-end Android devices. This benchmark measures game-related graphics performance by rendering a demanding, ray-traced scene in real time. Solar Bay is available now for Android on the Google Play Store, and for Windows on Steam, the Epic Games Store, or directly from UL Solutions.

Compare ray tracing performance across platforms
Ray tracing is the showcase technology for Solar Bay, simulating real-time reflections. Compared to traditional rasterization, ray-traced scenes produce far more realistic lighting. While dedicated desktop and laptop graphics processing units (GPUs) have supported ray tracing for several years, it's only recently that integrated GPUs and Android devices have been capable of running real-time ray-traced games at frame rates acceptable to gamers.

Curious MSI GeForce RTX 3080 Ti 20 GB Card pops up on FB Marketplace

An unusual MSI RTX 3080 Ti SUPRIM X graphics card is up for sale, second hand, on Facebook Marketplace—the Sydney, Australia-based seller is advertising this component as a truly custom model with a non-standard allocation of VRAM: "Yes this is 20 GB not 12 GB." The used item is said to be in "good condition" with its product description elaborating on a bit of history: "There are some scuff marks from the previous owner, but the card works fine. It is an extremely rare collector's item, due to NVIDIA cancelling these variants a month before release. This is not an engineering sample card—this was a finished OEM product that got cancelled, unfortunately." The seller is seeking AU$1,100 (~$740 USD), after a reduction from the original asking price of AU$1,300 (~$870 USD).

MSI and Gigabyte were reportedly on the verge of launching GeForce RTX 3080 Ti 20 GB variants two years ago, but NVIDIA had a change of heart (probably due to concerns about costs and production volumes) and decided to stick with a public release of the standard 12 GB GPU. Affected AIBs chose not to destroy their stock of 20 GB cards—these were instead sold to crypto miners and shady retailers. Wccftech points out that mining-oriented units have identifying marks on their I/O ports.

Leaked AMD Radeon RX 7700 & RX 7800 GPU Benchmarks Emerge

A set of intriguing 3DMark Time Spy benchmark results has been released by hardware leaker All_The_Watts!!—these are alleged to have been produced by prototype Radeon RX 7700 and Radeon RX 7800 graphics cards (rumored to be based on variants of the Navi 32 GPU). The current RDNA 3 lineup of mainstream GPUs is severely lacking in middle-ground representation, but Team Red is reported to be working on a number of models to fill the gap. We expect more leaks to emerge as we get closer to a rumored product reveal scheduled for late August (to coincide with Gamescom).

The recently released 3DMark Time Spy scores reveal that the alleged Radeon RX 7700 candidate scored 15,465 points, while the RX 7800 achieved 18,197 points—both running on an unspecified test system. The results (refer to the Tom's Hardware-produced chart below) are not going to generate much excitement at this stage when compared to predecessors and some of the competition, but evaluation samples are not expected to be optimized to any great degree. We hope to see finalized products with decent drivers putting in a good showing later this year.

AMD Radeon RX 7600 Slides Down to $249

The AMD Radeon RX 7600 mainstream graphics card slides a little closer to its ideal price, with an online retailer price-cut sending it down to $249, about $20 less than its MSRP of $269. The cheapest RX 7600 graphics card in the market right now is the MSI RX 7600 MECH 2X Classic, going for $249 on Amazon; followed by the XFX RX 7600 SWFT 210 at $258, and the ASRock RX 7600 Challenger at $259.99.

The sliding prices of the RX 7600 should improve its prospects against the upcoming NVIDIA GeForce RTX 4060, which leaked 3DMark benchmarks show to be around 17% faster than the previous-generation RTX 3060 (12 GB) and 30% faster than its 8 GB variant. Our real-world testing puts the RX 7600 about 15% faster than the RTX 3060 (12 GB) at 1080p, which means there could be an interesting square-off between the RTX 4060 and RX 7600, as the quick arithmetic below suggests. NVIDIA has announced $299 as the baseline price for the RTX 4060, which should put pressure on AMD partners to trim prices of the RX 7600 to below the $250 mark.
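
Combining the article's figures gives a rough sense of the match-up (note the caveat that the RTX 4060 number is a leaked 3DMark result while the RX 7600 number comes from real-world testing, so this is indicative arithmetic only):

```python
# Relative performance, normalized to the RTX 3060 (12 GB) = 1.00
rtx4060_perf = 1.17  # leaked 3DMark figure
rx7600_perf = 1.15   # TechPowerUp real-world 1080p testing

print(f"RTX 4060 vs RX 7600: +{(rtx4060_perf / rx7600_perf - 1) * 100:.1f}%")  # ~1.7%

# Performance-per-dollar at $249 (RX 7600) vs. the $299 RTX 4060 baseline
value_ratio = (rx7600_perf / 249) / (rtx4060_perf / 299)
print(f"RX 7600 perf-per-dollar advantage: +{(value_ratio - 1) * 100:.0f}%")  # ~18%
```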

3DMark Now Available on Epic Games Store

We're excited to announce that 3DMark is now also available for purchase in the Epic Games Store from today, June 20, 2023. 3DMark is a computer benchmarking tool for gamers, overclockers and system builders who want to get more out of their hardware. With its wide range of benchmarks, tests and features, 3DMark has everything you need to test the performance of your gaming PC.

3DMark purchased through the Epic Games Store includes all current 3DMark GPU and CPU benchmarks released since the application's launch over a decade ago. Our latest GPU benchmark, Speed Way, tests ray-traced gaming performance using the latest DirectX 12 Ultimate API for Windows 10 and Windows 11. 3DMark offers more than just benchmarking tools. Test your system stability with stress tests, explore how new engine technologies affect visuals and performance with interactive mode, or compete for top PC performance with your friends and the 3DMark community as you chase a spot in the 3DMark Hall of Fame.

NVIDIA H100 Hopper GPU Tested for Gaming, Slower Than Integrated GPU

NVIDIA's H100 "Hopper" GPU is a device designed for pure AI and other compute workloads, with the least amount of consideration for gaming workloads that involve graphics processing. However, it is still interesting to see how this $30,000 GPU fares in comparison to other gaming GPUs, and whether it is even possible to run games on it. It turns out that it is technically feasible but makes little sense, as the Chinese YouTube channel Geekerwan notes. Based on the GH100 silicon with 14,592 CUDA cores, the H100 PCIe version tested here can achieve 204.9 TeraFLOPS at FP16, 51.22 TeraFLOPS at FP32, and 25.61 TeraFLOPS at FP64, with its natural strength lying in accelerating AI workloads.
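
Those throughput figures are internally consistent, as a quick check shows (the clock speed below is back-solved from the quoted numbers rather than taken from the article):

```python
# FP32 throughput = 2 ops per CUDA core per clock (fused multiply-add) x cores x clock
cuda_cores = 14_592
fp32_tflops = 51.22

implied_clock_ghz = fp32_tflops * 1e12 / (2 * cuda_cores) / 1e9
print(f"Implied boost clock: {implied_clock_ghz:.3f} GHz")  # ~1.755 GHz

# GH100 runs FP64 at half the FP32 rate and plain FP16 at four times:
print(f"FP64: {fp32_tflops / 2:.2f} TFLOPS")   # 25.61
print(f"FP16: {fp32_tflops * 4:.2f} TFLOPS")   # 204.88
```

The implied ~1.75 GHz lines up with the H100 PCIe's official boost clock, and the half-rate FP64 and quadruple-rate FP16 match the figures quoted above.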

However, how does it fare in gaming benchmarks? Not very well, as the testing shows. It scored 2,681 points in 3DMark Time Spy, lower than AMD's integrated Radeon 680M, which managed 2,710 points. Interestingly, the GH100 has only 24 ROPs (render output units), while the gaming-oriented GA102 (NVIDIA's highest-end gaming GPU) has 112 ROPs, which alone paints a clear picture of why the H100 is used for compute only. Since it doesn't have any display outputs, the system needed another regular GPU to provide the video output, while the computation happened on the H100.

3DMark Gets AMD FidelityFX Super Resolution 2 (FSR 2) Feature Test

UL Benchmarks today released an update to 3DMark that adds a feature test for AMD FidelityFX Super Resolution 2 (FSR 2), the company's popular upscaling-based performance enhancement. This was long overdue, as 3DMark has had a feature test for DLSS for years, and as of October 2022 it even got one for Intel XeSS. The new FSR 2 feature test uses a scene from the Speed Way DirectX 12 Ultimate benchmark, comparing fine details of a vehicle and a technic droid between native-resolution rendering with TAA and FSR 2, and highlighting the performance uplift. To use the feature test, you'll need a GPU that supports DirectX 12 and FSR 2 (that covers AMD, NVIDIA, and Intel Arc). Owners who purchased 3DMark before October 12, 2022 will need to buy the Speed Way upgrade to unlock the FSR 2 feature test.

Intel Xeon W9-3495X Can Pull up to 1,900 Watts in Extreme OC Scenarios

Intel's latest Xeon processors based on the "Sapphire Rapids" microarchitecture have arrived in the hands of overclockers. Last week, we reported that the Intel Xeon W9-3495X officially holds world records for the best scores in Cinebench R23 and R20, Y-Cruncher, the 3DMark CPU test, and Geekbench 3. Today, however, we have another extreme overclocking attempt at the world record, with a few more details about power consumption and what the new SKU is capable of. Elmor, an overclocker working with ASUS, tried to break the world record by overclocking the Intel Xeon W9-3495X to 5.5 GHz on all 56 cores. What is more impressive is the power the processor can consume.

With a system powered by two Super Flower Leadex 1,600 W power supply units, the CPU consumed almost 1,900 Watts from the wall. Liquid nitrogen was used to cool this heat output, keeping the CPU at minus 95 degrees Celsius. The motherboard of choice was the ASUS Pro WS W790E-SAGE SE, paired with eight G.SKILL Zeta R5 DDR5 R-DIMM modules. The results were impressive, with the CPU achieving 132,220 points in Cinebench R23. However, last week's world record remains intact, as Elmor's result falls just short of the 132,484-point score. Check the video below for more info.

3% of AMD Radeon Users May Experience Unusually Low 3DMark Time Spy Performance, Driver Fix Underway

About 3% of AMD Radeon graphics card users may experience lower than usual 3DMark Time Spy performance, says UL Benchmarks, developer of the 3DMark graphics benchmark suite. The issue came to light when a Google developer noticed that his RX 7900 XTX exhibited lower than expected performance in Time Spy and took it up with UL. While the 3DMark developer hasn't been able to reproduce the issue on its end, it mentions that AMD is aware of it, has had more luck reproducing it, and is working on a driver-level fix. For now, UL offers no solution other than to roll back to older driver versions and try testing again.

Intel Xeon W9-3495X Sets World Record, Dethrones AMD Threadripper

When Intel introduced its 4th Generation Xeon-W processors, the company noted that the clock multiplier was left unlocked, available for overclockers to push these chips even harder. It was only a matter of time before the top-end Xeon-W SKU took a shot at the Cinebench R23 world record. The Intel Xeon W9-3495X is now officially the world-record holder with a score of 132,484 points in Cinebench R23. Greek overclocker OGS managed to push all 56 cores and 112 threads of the CPU to 5.4 GHz using a liquid nitrogen (LN2) cooling setup. Using an ASUS Pro WS W790E-SAGE SE motherboard and a G.SKILL Zeta R5 RAM kit, the record was set on March 8th.

The previous record holder was AMD's Threadripper Pro 5995WX, with 64 cores and 128 threads clocked at 5.4 GHz. The Xeon W9-3495X didn't just set the Cinebench R23 record; it also set new records in Cinebench R20, Y-Cruncher, the 3DMark CPU test, and Geekbench 3.

Alleged NVIDIA AD106 GPU Tested in 3DMark and AIDA64

Benchmarks and specifications of an alleged NVIDIA AD106 GPU have turned up on Chiphell, although the original poster has since removed all the details. Thanks to @harukaze5719 on Twitter, who reposted them, we still get an insight into what we might expect from NVIDIA's upcoming mid-range cards. All these details should be taken with a grain of salt, as the original source isn't exactly what we'd call trustworthy. Based on data in the TPU GPU database, the GPU in question should be the GeForce RTX 4070 Mobile with much higher clock speeds, or an equivalent desktop part that offers more CUDA cores than the RTX 4060 Ti. Whatever the specific AD106 GPU is, it's being compared to the GeForce RTX 2080 Super and the RTX 3070 Ti.

The GPU was tested in AIDA64 and 3DMark, where it beats the RTX 2080 Super in all tests while drawing some 55 W less power. Some of the wins are within the margin of testing error, for example the memory performance results in AIDA64. That result is notable because the AD106 GPU has only a 128-bit memory bus, compared to 256-bit on the RTX 2080 Super; even though the AD106's memory clocks are much higher, the RTX 2080 Super's overall memory bandwidth is still nearly 36 percent higher. Yet the AD106 GPU manages to beat it in all of the AIDA64 memory benchmarks.
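
The bandwidth arithmetic behind that comparison looks like this (the RTX 2080 Super figures are its known specifications; the AD106 number is back-solved from the quoted 36 percent gap, so treat it as an estimate):

```python
# bandwidth (GB/s) = per-pin data rate (Gbps) * bus width (bits) / 8
def bandwidth_gb_s(data_rate_gbps: float, bus_bits: int) -> float:
    return data_rate_gbps * bus_bits / 8

rtx2080s = bandwidth_gb_s(15.5, 256)  # 496 GB/s (known spec)
print(f"RTX 2080 Super: {rtx2080s:.0f} GB/s")

# A ~36% deficit implies roughly this much bandwidth for the AD106 board:
ad106 = rtx2080s / 1.36
print(f"Implied AD106 bandwidth: {ad106:.0f} GB/s "
      f"(~{ad106 * 8 / 128:.1f} Gbps effective on a 128-bit bus)")  # ~365 GB/s
```

Ada's much larger L2 cache presumably offsets the raw bandwidth deficit, which would explain the AIDA64 memory-test wins despite the narrower bus.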

NVIDIA RTX 4080 20-30% Slower than RTX 4090, Still Smokes the RTX 3090 Ti: Leaked Benchmarks

Benchmarks of NVIDIA's upcoming GeForce RTX 4080 (formerly known as the RTX 4080 16 GB) are already out, as the leaky taps in the Asian tech forumscape know no bounds. Someone on the ChipHell forums with access to an RTX 4080 sample and drivers put it through a battery of synthetic and gaming tests. The $1,200 MSRP graphics card was tested in 3DMark Time Spy, Port Royal, and games that include Forza Horizon 5, Call of Duty: Modern Warfare II, Cyberpunk 2077, Borderlands 3, and Shadow of the Tomb Raider.

The big picture: the RTX 4080 lands roughly halfway between the RTX 3090 Ti and the RTX 4090. At stock settings in 3DMark Time Spy Extreme (4K), it delivers 71% of the RTX 4090's performance, whereas the RTX 3090 Ti manages 55%. With its "power limit" slider maxed out, the RTX 4080 inches 2 percentage points closer to the RTX 4090 (73%), and a bit of manual OC adds another 4 percentage points. Things change slightly in 3DMark Port Royal, where the RTX 4080 delivers 69% of the RTX 4090's performance versus 58% for the RTX 3090 Ti.
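
Re-expressing those normalized figures (simple arithmetic on the numbers above, not additional test data) shows why the headline claims the RTX 4080 still smokes the RTX 3090 Ti:

```python
# 3DMark Time Spy Extreme, stock, normalized to RTX 4090 = 100
rtx4090, rtx4080, rtx3090ti = 100, 71, 55

print(f"RTX 4080 vs RTX 3090 Ti: +{(rtx4080 / rtx3090ti - 1) * 100:.0f}%")  # +29%
print(f"RTX 4080 vs RTX 4090: -{(1 - rtx4080 / rtx4090) * 100:.0f}%")       # -29%

# Power-limit slider maxed (+2 points), then manual OC (+4 points):
print(f"OC'd RTX 4080: {rtx4080 + 2 + 4}% of RTX 4090")  # 77%
```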

UL Benchmarks Launches 3DMark Speed Way DirectX 12 Ultimate Benchmark

UL Solutions is excited to announce that our new DirectX 12 Ultimate benchmark, 3DMark Speed Way, is now available to download and buy on Steam and on the UL Solutions website. 3DMark Speed Way is sponsored by Lenovo Legion. Developed with input from AMD, Intel, NVIDIA, and other leading technology companies, Speed Way is an ideal benchmark for comparing the DirectX 12 Ultimate performance of the latest graphics cards.

DirectX 12 Ultimate is the next-generation application programming interface (API) for gaming graphics. It adds powerful new capabilities to DirectX 12, helping game developers improve visual quality, boost frame rates, reduce loading times and create vast, detailed worlds. 3DMark Speed Way's engine demonstrates what the latest DirectX API brings to ray-traced gaming, using DirectX Raytracing tier 1.1 for real-time global illumination and real-time ray-traced reflections, coupled with new performance optimizations like mesh shaders.

3DMark Speed Way DirectX 12 Ultimate Benchmark is Launching on October 12

3DMark Speed Way is a new GPU benchmark that showcases the graphics technology that will power the next generation of gaming experiences. We're excited to announce that Speed Way, sponsored by Lenovo Legion, is releasing on October 12. Our team has been working hard to get Speed Way ready for you to use for benchmarking, stress testing, and comparing the new PC hardware coming this fall.

From October 12 onward, Speed Way will be included in the price when you buy 3DMark from Steam or our own online store. Since we released Time Spy in 2016, 3DMark users have enjoyed many free updates, including Time Spy Extreme, the 3DMark CPU Profile, 3DMark Wild Life, and multiple tests demonstrating new DirectX features. With the addition of Speed Way, the price of 3DMark on Steam and 3DMark Advanced Edition will go up from $29.99 to $34.99.

UL Launches New 3DMark Feature Test for Intel XeSS

We're excited to release a new 3DMark feature test for Intel's new XeSS AI-enhanced upscaling technology. This new feature test is available in 3DMark Advanced and Professional Editions. 3DMark feature tests are special tests designed to highlight specific techniques, functions, or capabilities. The Intel XeSS feature test shows you how XeSS affects performance.

The 3DMark Intel XeSS frame inspector tool helps you compare image quality with an interactive side-by-side comparison of XeSS and native-resolution rendering. Check out the images below to see an example comparison of native resolution rendering and XeSS in the new 3DMark feature test.

NVIDIA RTX 4090 "Ada" Scores Over 19000 in Time Spy Extreme, 66% Faster Than RTX 3090 Ti

NVIDIA's next-generation GeForce RTX 4090 "Ada" flagship graphics card allegedly scores over 19,000 points in the 3DMark Time Spy Extreme synthetic benchmark, according to kopite7kimi, a reliable source for NVIDIA leaks. This would put its score around 66 percent above that of the current RTX 3090 Ti flagship. The RTX 4090 is expected to be based on the 5 nm AD102 silicon, with a rumored CUDA core count of 16,384. The higher IPC of the new architecture, coupled with higher clock speeds and power limits, could be contributing to this feat. Time Spy Extreme is a traditional DirectX 12 raster-only benchmark, with no ray-traced elements. The Ada graphics architecture is expected to reduce the "cost" of ray tracing (versus raster-only rendering), although we're yet to see leaks of RTX performance.
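
Working backward from the leak (simple arithmetic, not additional data), the claim implies an RTX 3090 Ti score of roughly 11,400, which is in the right ballpark for stock Time Spy Extreme results on that card:

```python
rtx4090_score = 19_000  # leaked Time Spy Extreme score (lower bound)
claimed_uplift = 1.66   # 66% faster than the RTX 3090 Ti

implied_3090ti = rtx4090_score / claimed_uplift
print(f"Implied RTX 3090 Ti score: ~{implied_3090ti:,.0f}")  # ~11,446
```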

Intel i9-13900K "Raptor Lake" ES Improves Gaming Minimum Framerates by 11-27% Over i9-12900KF

Intel's 13th Gen Core "Raptor Lake" is shaping up to be another leadership desktop processor lineup, with an engineering sample clocking significant increases in gaming minimum framerates over the preceding 12th Gen Core i9-12900K "Alder Lake." Extreme Player, a tech blogger on the Chinese video streaming site Bilibili, posted a comprehensive gaming performance review of an i9-13900K engineering sample covering eight games across three resolutions, comparing it with a retail i9-12900KF. The games include CS:GO, Final Fantasy XIV: Endwalker, PUBG, Forza Horizon 5, Far Cry 6, Red Dead Redemption 2, Horizon Zero Dawn, and the synthetic benchmark 3DMark. Both processors were tested with a GeForce RTX 3090 Ti graphics card, 32 GB of DDR5-6400 memory, and a 1.5 kW power supply.

The i9-13900K ES is shown posting modest 1-2% performance leads in the graphics tests of 3DMark, but an incredible 36-38% gain in the CPU-intensive tests of the suite. This is explained not just by the increased per-core performance of both the P-cores and E-cores, but also by the addition of 8 more E-cores. Although "Raptor Lake" uses the same "Gracemont" E-cores, the L2 cache per E-core cluster has been doubled in size. Horizon Zero Dawn sees a -0.7% to 10.98% change in frame rates. There are some anomalous 70% frame-rate increases in RDR2; discounting those, we still see a 2-9% increase. FC6 posts a modest 2.4% increase. Forza Horizon 5, PUBG, Monster Hunter Rise, and FF XIV each report significant increases in minimum framerates, well above 20%.
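
For context on the CPU-test jump, the core-count arithmetic of the two chips (from their published configurations) already accounts for a large share of it:

```python
# threads = P-cores x 2 (Hyper-Threading) + E-cores x 1
def threads(p_cores: int, e_cores: int) -> int:
    return p_cores * 2 + e_cores

configs = {"i9-12900KF (Alder Lake)": (8, 8),   # 8P + 8E
           "i9-13900K (Raptor Lake)": (8, 16)}  # 8P + 16E

for name, (p, e) in configs.items():
    print(f"{name}: {p + e} cores / {threads(p, e)} threads")
# 16C/24T -> 24C/32T: +33% threads before counting per-core IPC and clock gains
```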

Intel Arc A550M & A770M 3DMark Scores Surface

The upcoming Intel Arc A550M & A770M mobile graphics cards have recently appeared on 3DMark in Time Spy and Fire Strike Extreme. The Intel Arc Alchemist A550M features an ACM-G10 GPU with 16 Xe cores paired with 8 GB of GDDR6 memory on a 128-bit bus, while the A770M features the same GPU but with 32 Xe cores and 16 GB of GDDR6 memory on a 256-bit bus.

The A550M was tested in 3DMark Time Spy, where it scored 6,017 points running on an older 1726 driver with Intel Advanced Performance Optimizations (APO) enabled. The A770M was benchmarked in 3DMark Fire Strike Extreme, where it scored a respectable 13,244 points in graphics running on test drivers, which places it near the mobile RTX 3070. This score may not correlate with real-world gaming performance, as figures provided directly by Intel show the Arc A730M being only 12% faster than the mobile RTX 3060.

First Intel Arc A730M Powered Laptop Goes on Sale, in China

The first benchmark result of an Intel Arc A730M laptop has made an appearance online, and the mysterious laptop used to run 3DMark turns out to be from a Chinese company called Machenike. The laptop itself appears to go under the name of Dawn16 Discovery Edition and features a 16-inch display with a native resolution of 2560 x 1600 and a 165 Hz refresh rate. CPU-wise, Machenike went with a Core i7-12700H, a 6+8-core CPU with 20 threads whose performance cores top out at 4.7 GHz. The CPU is paired with 16 GB of DDR5-4800 memory, and the system also has a PCIe 4.0 NVMe SSD of some kind with a max read speed of 3,500 MB/s, which isn't particularly impressive. Other features include Thunderbolt 4 support, Wi-Fi 6E and Bluetooth 5.2, as well as an 80 Wh battery pack.

However, none of the above is particularly unique; what matters here is of course the Intel Arc A730M GPU. It is paired with 12 GB of GDDR6 memory on a 192-bit interface, running at 14 Gbps according to the specs, for a stated memory bandwidth of 336 GB/s. The company also provided a couple of performance metrics: a 3DMark Time Spy score of 10,002 points and a 3DMark Fire Strike score of 23,090 points. The Time Spy score is slightly lower than the number posted earlier, but helps verify that earlier test result. Other interesting nuggets include support for 8K60 12-bit HDR video decoding for AV1, HEVC, AVC, and VP9, as well as 8K 10-bit HDR encoding for the same formats. A Puget Systems benchmark figure for what appears to be Photoshop (PS) is also provided, where it scores 1,188 points. The laptop is up for what appears to be pre-order, with a price tag of 7,499 RMB, or about US$1,130.

Intel Arc A730M 3DMark Time Spy Score Spied, in League of RTX 3070 Laptop GPU

Someone with access to a gaming notebook powered by the Intel Arc "Alchemist" A730M discrete GPU posted its alleged 3DMark Time Spy score, and it looks pretty interesting: 10,138 points, which is somewhat higher than that of the GeForce RTX 3070 Laptop GPU, or halfway between the desktop GeForce RTX 3060 and desktop RTX 3060 Ti.

Based on the Xe-HPG graphics architecture, the Arc A730M features 24 Xe cores, or 384 execution units, which work out to 3,072 unified shaders. This is not even Intel's most powerful mobile GPU; that title goes to the A770M, which maxes out the ACM-G10 ASIC with all 512 execution units (4,096 unified shaders) enabled. This particularly raises hopes for a competitive high-end GPU for gaming notebooks that can perform in the league of the RTX 3080 Laptop GPU or the Radeon RX 6800M.
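
The shader count follows directly from Intel's Xe-HPG building blocks, where each Xe core contains 16 Vector Engines (execution units) and each execution unit is 8 ALUs wide; a quick check of the figures above:

```python
# Xe-HPG: shaders = Xe cores x 16 execution units x 8 ALUs per EU
def shader_count(xe_cores: int) -> int:
    execution_units = xe_cores * 16
    return execution_units * 8

print(f"Arc A730M: 24 Xe cores -> {24 * 16} EUs -> {shader_count(24):,} shaders")  # 3,072
print(f"Arc A770M: 32 Xe cores -> {32 * 16} EUs -> {shader_count(32):,} shaders")  # 4,096
```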