News Posts matching #Benchmarks


Intel Plays the Pre-Zen AMD Tune: Advocates Focus Shift from "Benchmarks to Benefits"

Intel CEO Bob Swan, in his Virtual Computex YouTube stream, advocated that the industry should focus less on benchmarks and more on the benefits of technology, a line of thought strongly advocated by rival AMD in its pre-Ryzen era, before the company became competitive with Intel again. "We should see this moment as an opportunity to shift our focus as an industry from benchmarks to the benefits and impacts of the technology we create," he said, referring to technology keeping civilization and economies afloat during the COVID-19 global pandemic, which has upended Computex along with practically every other public gathering.

"The pandemic has underscored the need for technology to be purpose-built so it can meet these evolving business and consumer needs. And this requires a customer-obsessed mindset to stay close, anticipate those needs, and develop solutions. In this mindset, the goal is to ensure we are optimizing for a stronger impact that will support and accelerate positive business and societal benefits around the globe," he added. An example of what Swan is trying to say is visible with Intel's 10th generation Core "Cascade Lake XE" and "Ice Lake" processors, which feature AVX-512 and DL-Boost, accelerating deep learning neural nets; but lose to AMD's flagship offerings on the vast majority of benchmarks. Swan also confirmed that the company's "Tiger Lake" processors will launch this Summer.
The Computex video address by CEO Bob Swan can be watched below.

Benchmarks Surface for AMD Ryzen 4700G, 4400G and 4200G Renoir APUs

Renowned leaker APISAK has dug up 3DMark benchmarks for AMD's upcoming Ryzen 4700G, 4400G, and 4200G "Renoir" APUs. These are actually for the PRO versions of the APUs, but those tend to be directly comparable to AMD's non-PRO offerings, so we can look at them to get an idea of where the 4000G series' performance lies. The 4000G series increases core counts almost across the board - the midrange 4400G now sports 6 cores and 12 threads, more than the previous-generation Ryzen 5 3400G offered (4 cores/8 threads), while the top-of-the-line 4700G doubles the 3400G's core count to 8 cores and 16 threads.

This increase in CPU cores, of course, has meant a reduction in the area of the chip dedicated to the integrated Vega GPU - compute units have been cut from the 3400G's 11 down to 8 on the Ryzen 7 4700G and 7 on the 4400G, while the 4200G makes do with just 6 Vega compute units. Clocks have been increased substantially across the board to compensate for the CU reduction, though - the aim is to achieve similar GPU performance using a smaller amount of semiconductor real estate.
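As a back-of-the-envelope illustration of that trade-off, peak FP32 throughput scales with CU count times clock speed. The sketch below uses illustrative clocks (roughly 1.4 GHz for the 3400G's iGPU and a rumored ~2.1 GHz for the 4700G's); these are assumptions, not confirmed specifications:

```python
# Rough Vega iGPU throughput estimate: CUs x 64 shaders x 2 ops/clock (FMA).
# Clock values are illustrative assumptions, not confirmed specifications.
def vega_tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000

print(f"Ryzen 5 3400G, 11 CU @ ~1.4 GHz: {vega_tflops(11, 1.4):.2f} TFLOPS")
print(f"Ryzen 7 4700G,  8 CU @ ~2.1 GHz: {vega_tflops(8, 2.1):.2f} TFLOPS")
```

If those clocks hold, the 4700G's 8 CUs would match or even out-muscle the 3400G's 11 despite the smaller GPU footprint.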

Leaked Benchmark Shows Intel Core i5-10400 Matching i7-9700F in Gaming Performance

Benchmarks for one of the recently announced 10th generation Intel Core chips have been leaked early by a Chinese reviewer on Bilibili, as reported by @Momomo_US. The mid-range Intel Core i5-10400 is a new 6-core/12-thread CPU with a base clock of 2.9 GHz, a maximum all-core boost of 4.0 GHz, and a recommended customer price of $182. The Core i5-10400 was put up against the current-generation Core i5-9400F and Core i7-9700F in a variety of games, with the i5-10400 matching or beating the i7-9700F in most tests.

The Intel Core i7-9700F is an 8-core/8-thread CPU that was released in 2019 with a recommended customer price of $310, which it still retails for today. Seeing the i5-10400 match the i7-9700F is significant news for gamers, as Intel is forced to lower prices and increase performance under the looming threat of Ryzen 3000. The CPU was tested in Grand Theft Auto V and Assassin's Creed Odyssey at 1080p, where the i5-10400 came out ahead in Assassin's Creed Odyssey and landed just behind the i7-9700F in Grand Theft Auto V. The full results can be viewed below.

Intel Iris Plus Graphics G7 iGPU Beats AMD RX Vega 10: Benchmarks

Intel is taking big strides forward with its Gen11 integrated graphics architecture. Its performance-configured variant, the Intel Iris Plus Graphics G7, featured in the Core i7-1065G7 "Ice Lake" processor, has been found to beat the AMD Radeon RX Vega 10 iGPU, found in the Ryzen 7 2700U "Raven Ridge" processor, by as much as 16 percent in 3DMark 11 and a staggering 23 percent in 3DMark Fire Strike 1080p. Notebook Check put the two iGPUs through these tests, plus a few games, to arrive at an initial verdict that Intel's iGPU has caught up with AMD's RX Vega 10. AMD has since updated its iGPU incrementally with the "Picasso" silicon, providing higher clock speeds and updated display and multimedia engines.

The machines tested here are the Lenovo Ideapad S540-14API for the AMD chip, and the Lenovo Yoga C940-14IIL with the i7-1065G7. The Iris Plus G7 packs 64 Gen11 execution units, while the Radeon RX Vega 10 has 640 stream processors based on the "Vega" architecture. On the gaming side, the Intel iGPU is 2 percent faster than the RX Vega 10 in BioShock Infinite at 1080p, 12 percent slower in Dota 2 Reborn at 1080p, and 8 percent faster in X-Plane 11.11.
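For context on the raw shader math, here is a minimal sketch. Each Gen11 EU contains two 4-wide FP32 SIMD units, so 64 EUs offer 512 FP32 lanes against Vega 10's 640 stream processors; the boost clocks used are approximate figures assumed for illustration:

```python
# Back-of-the-envelope FP32 throughput; 2 ops per lane per clock (FMA).
# Boost clocks (~1.1 GHz Iris G7, ~1.3 GHz Vega 10) are assumptions.
def gflops(fp32_lanes: int, clock_ghz: float) -> float:
    return fp32_lanes * 2 * clock_ghz

print(f"Iris Plus G7 (64 EU x 8 lanes): ~{gflops(64 * 8, 1.1):.0f} GFLOPS")
print(f"RX Vega 10 (640 SPs):           ~{gflops(640, 1.3):.0f} GFLOPS")
```

On paper, Vega 10 still holds more raw compute, which suggests the Gen11 part's wins owe plenty to drivers, memory bandwidth, and sustained clocks under load.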

GIGABYTE Smashes 11 World Records with New AMD EPYC 7002 Processors

GIGABYTE, a leading server systems builder which recently released a total of 17 new AMD EPYC 7002 Series "Rome" server platforms simultaneously with AMD's official launch of its next-generation CPU, is proud to announce that our new systems have already broken 11 different SPEC benchmark world records. These records were achieved not only against systems based on alternative processors, but also against competing vendor solutions using the same 2nd Generation AMD EPYC 7002 Series "Rome" processor platform, illustrating that GIGABYTE's system design and engineering is optimized to deliver the maximum possible performance from the 2nd Generation AMD EPYC.

UL Announces New 3DMark Benchmarks for Testing PCIe Performance Across Generations

UL Benchmarks has announced that it will be introducing a new, comprehensive 3DMark test that aims to measure PCIe bandwidth across generations. Citing the introduction of PCIe 4.0 to the masses - soon available in the consumer market via AMD's Ryzen 3000 series release - UL wants users to be able to see what difference this makes in enabling more complex games and scenarios that aren't data-constrained by PCIe 3.0.

The 3DMark PCIe Performance Test will be made available this summer, free of charge for 3DMark Advanced Edition users and for 3DMark Professional Edition customers with a valid annual license.

Intel Puts Out Benchmarks Showing Minimal Performance Impact of MDS Mitigation

On Tuesday, Intel once again shook the IT world by disclosing severe microarchitecture-level security vulnerabilities affecting its processors. The Microarchitectural Data Sampling (MDS) class of vulnerabilities affects Intel CPU architectures older than "Coffee Lake" to a greater extent. Among other mitigation measures such as software patches, Intel is recommending that users disable HyperThreading technology (HTT), Intel's simultaneous multithreading (SMT) implementation. This would significantly reduce multi-threaded performance on older processors with lower core counts, particularly 2-core/4-thread Core i3 chips.

On "safer" microarchitectures such as "Coffee Lake," though, Intel is expecting a minimal impact of software patches, and doesn't see any negative impact of disabling HTT. This may have something to do with the 50-100 percent increased core-counts with the 8th and 9th generations. The company put out a selection of benchmarks relevant to client and enterprise (data-center) use-cases. On the client use-case that's we're more interested in, a Core i9-9900K machine with software mitigation and HTT disabled is negligibly slower (within 2 percent) of a machine without mitigation and HTT enabled. Intel's selection of benchmarks include SYSMark 2014 SE, WebXprt 3, SPECInt rate base (1 copy and n copies), and 3DMark "Skydiver" with the chip's integrated UHD 630 graphics. Comparing machines with mitigations applied but toggling HTT presents a slightly different story.

UL Corporation Announces Two New Benchmarks Coming to PCMark 10

UL Corporation today announces two new benchmark tests that will soon be coming to PCMark 10. The first is our eagerly awaited PCMark 10 battery life benchmark. The second is a new benchmark test based on Microsoft Office applications.

PCMark 10 Battery Life benchmark
Battery life is one of the most important criteria for choosing a laptop, but consumers and businesses alike find it hard to compare systems fairly. The challenge, of course, is that battery life depends on how the device is used. Unfortunately, manufacturers' claims are often based on unrealistic scenarios that don't reflect typical use. Figures for practical, day-to-day battery life, which are usually much lower, are rarely available.

NVIDIA GTX 1660 Ti to Perform Roughly On-par with GTX 1070: Leaked Benchmarks

NVIDIA's upcoming "Turing" based GeForce GTX 1660 Ti graphics card could carve itself a value proposition between the $250-300 mark that lets it coexist with both the GTX 1060 6 GB and the $350 RTX 2060, according to leaked "Final Fantasy XV" benchmarks scored by VideoCardz. In these benchmarks, the GTX 1660 Ti was found to perform roughly on par with the previous-generation GTX 1070 (non-Ti), which is plausible given that the 1,536 CUDA cores based on "Turing," architecture, with their higher IPC and higher GPU clocks, are likely to catch up with the 1,920 "Pascal" CUDA cores of the GTX 1070, while 12 Gbps 192-bit GDDR6 serves up more memory bandwidth than 8 Gbps 256-bit GDDR5 (288 GB/s vs. 256 GB/s). The GTX 1070 scores in memory size, with 8 GB of it. NVIDIA is expected to launch the GTX 1660 Ti later this month at USD $279. Unlike the RTX 20-series, these chips lack NVIDIA RTX real-time raytracing technology, and DLSS (deep-learning supersampling).

UL Corporation Announces 3D Mark Port Royal Raytracing Suite is Now Available - Benchmark Mode On!

Perhaps gliding through the tech-infused CES week, UL Corporation has just announced that the much-anticipated Port Royal, the world's first dedicated real-time ray tracing benchmark for gamers, is now available. Port Royal uses DirectX Raytracing to enhance reflections, shadows, and other effects that are difficult to achieve with traditional rendering techniques. It not only enables performance benchmarking for cutthroat competition across the internet (and our own TPU forums, of course), but also serves as an example of what to expect from ray tracing in upcoming games: ray-tracing effects running in real time at reasonable frame rates at 2560 × 1440 resolution.

UL Benchmarks Unveils 3DMark "Port Royal" Ray-tracing Benchmark

Port Royal is the name of the latest component of UL Benchmarks' 3DMark. Designed to take advantage of the DirectX Raytracing (DXR) API, this benchmark features an extreme poly-count test scene with real-time ray-traced elements. Screengrabs of the benchmark depict spacecraft entering and leaving mirrored spheres suspended within a planet's atmosphere, which appear to be docks. It's also a shout-out to a number of space sims such as "Star Citizen," which could up their production values in the future by introducing ray tracing. The benchmark will debut at the GALAX GOC Grand Final on December 8, where the first public run will be powered by a GALAX GeForce RTX 2080 Ti HOF graphics card. It will go on sale in January 2019.

NVIDIA Releases Comparison Benchmarks for DLSS-Accelerated 4K Rendering

NVIDIA released comparison benchmarks for its new AI-accelerated DLSS technology, which is part of the new Turing architecture's claim to fame. Using the Infiltrator benchmark with its stunning real-time graphics, NVIDIA showcased the performance benefits of DLSS-improved 4K rendering over the usual 4K rendering with TAA (Temporal Anti-Aliasing). Using a Core i9-7900X 3.3 GHz CPU paired with 16 GB of Corsair DDR4 memory, Windows 10 (v1803) 64-bit, and version 416.25 of the NVIDIA drivers, the company showed the tremendous performance improvements that can be achieved by pairing Turing's architectural strengths with DLSS's use of Tensor cores in service of more typical graphics workloads.

The results speak for themselves: with DLSS at 4K resolution, the upcoming NVIDIA RTX 2070 convincingly beats its previous-gen counterpart by doubling its performance. Under these particular conditions, the new king of the hill, the RTX 2080 Ti, convincingly beats the previous generation's halo product, the Titan Xp, with a 41% performance lead - but so does the new RTX 2070, which is being sold at half the asking price of the original Titan Xp.
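Part of that doubling is simple pixel math. DLSS of this era reportedly renders internally at a lower resolution (around 1440p for a 4K output) and uses the Tensor cores to infer the final frame; the internal resolution below is an assumption for illustration only:

```python
# Ratio of pixels shaded at native 4K vs. an assumed 1440p internal
# render target (the exact internal resolution is an assumption).
native_4k = 3840 * 2160
internal_1440p = 2560 * 1440
print(f"Shading-work ratio: {native_4k / internal_1440p:.2f}x")  # 2.25x
```

A 2.25x reduction in shaded pixels leaves plenty of headroom for a near-2x frame-rate gain, even after the cost of the upscaling pass.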

Intel's 9th Gen Core Gaming Benchmarks Flawed and Misleading

At its 9th Generation Core processor launch extravaganza earlier this week, Intel posted benchmark numbers to show just how superior its processors are to AMD's 2nd generation Ryzen "Pinnacle Ridge." PC enthusiasts worth their salt were quick to point out that Intel's numbers are both flawed and misleading, as they misrepresent both test setups - optimizing the Intel processors beyond their out-of-the-box performance, and running the AMD processors with sub-optimal settings.

Intel paid Principled Technologies, a third-party performance testing agency, to obtain performance numbers comparing the Core i9-9900K with the Ryzen 7 2700X across a spectrum of gaming benchmarks, instead of testing the two chips internally and posting its test setup data in end-notes, as if to add a layer of credibility/deniability to the charade. The agency's numbers were almost simultaneously re-posted by PCGamesN under the headline "Up to 50% Faster than Ryzen at Gaming." You could fertilize the Sahara with this data.

UL Benchmarks Kicks Huawei Devices from its Database over Cheating

UL Benchmarks de-listed several popular Huawei devices from its database over proof of cheating in its benchmarks. Over the past month, it was found that several of Huawei's devices, such as the P20 Pro, Nova 3, and Play, overclocked their SoCs while ignoring all power and thermal limits to achieve high benchmark scores whenever they detected that a popular benchmark such as 3DMark was being run. To bust this, UL Benchmarks tested the three devices with "cloaked" benchmarks, or "private benchmarks" as it calls them. These apps are identical to 3DMark in almost every way, but lack the identification and branding that let Huawei devices know when to overclock themselves to cheat the test.

The results were startling. When a device has no clue that a popular benchmark is being run (that is, no way of telling that 3DMark is running), it chugs along at its "normal" speed, which is 35% to 36% lower. The rules that bind device manufacturers advertising UL's 3DMark scores explicitly state that a device must not detect the app and optimize its hardware on the fly to ace the test. Huawei responded to UL by stating that it will expose a new "performance mode" that lets users elevate their SoCs to the same high clocks in any application.
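Note the asymmetry of the percentages: a cloaked score that is 35-36% lower implies the detected-benchmark score was inflated by considerably more than that:

```python
# If normal = boosted * (1 - drop), then boosted = normal / (1 - drop).
for drop in (0.35, 0.36):
    boost = 1 / (1 - drop) - 1
    print(f"{drop:.0%} lower cloaked score -> ~{boost:.0%} higher detected score")
```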

NVIDIA Releases First Internal Performance Benchmarks for RTX 2080 Graphics Card

NVIDIA today released the first official performance numbers for its new generation of GeForce products - particularly, the RTX 2080. The RTX 20 series of graphics cards, according to the company, offers some 50% average performance improvement on architectural improvements alone, on a per-core basis. This number is then built upon with the added performance of Turing's new dedicated cores, which allow the RTX 2080 to increase its performance advantage over the last-generation GTX 1080 to as much as 2x - while using the new DLSS technology. PUBG, Shadow of the Tomb Raider, and Final Fantasy XV see around 75 percent or better performance improvements when using this tech.

NVIDIA is also touting the newfound ability to run games at 4K resolution at over 60 FPS, making the RTX 2080 the card to get if that's your preferred resolution (especially if paired with one of those dazzling OLED TVs). Of course, image quality settings aren't revealed in the slides, so an important piece of the puzzle is still missing. But considering NVIDIA's performance claims, and comparing them against the achievable performance on last-generation hardware, it's fair to say that these FPS scores refer to the high or highest quality settings for each game.

Futuremark Sets a Date for UL Rebranding

Futuremark has set a date for its re-branding to align with its parent company, UL. April 23, 2018 is when Futuremark products and services will be sold under the new branding scheme. Futuremark became part of UL in 2014. UL is an independent, global company with more than 10,000 professionals in 40 countries. UL offers a wide range of testing, inspection, auditing, and certification services and solutions to help customers, purchasers, and policymakers manage risk and complexity in modern markets. A set of FAQs associated with the change is answered below.

Intel Releases CPU Benchmarks with Meltdown and Spectre Mitigations

It's safe to say that there's one thing that you don't mess around with, and that's performance. Enthusiasts don't spend hundreds of dollars on a processor to watch it underperform. Given the complicated nature of the Meltdown and Spectre vulnerabilities, Microsoft's so-called mitigations were bound to have an impact on processor performance. The million dollar question was: Just how much? The initial estimate was somewhere around 30%, but Intel, being optimistic as usual, expected the performance impact to be insignificant for the average user. They recently provided some preliminary benchmark results that looked quite convincing too. Well, let's take a look at their findings, shall we?

Intel measured the mitigations' impact on CPU performance using its 6th, 7th, and 8th Generation Intel Core processors - more specifically, the i7-6700K, i7-7920HQ, i7-8650U, and i7-8700K. The operating system used in the majority of the benchmarks was Windows 10; however, Windows 7 also made a brief appearance. Intel chose four key benchmarks for its testing. SYSmark 2014 SE evaluated CPU performance on an enterprise level, simulating office productivity, data and financial analysis, and media creation. PCMark 10, on the other hand, tested performance in real-world usage, employing workloads like web browsing, video conferencing, application start-up time, spreadsheets, writing, and digital content creation. 3DMark Sky Diver assessed CPU performance in a DirectX 11 gaming scenario. Lastly, WebXPRT 2015 measured system performance using six HTML5- and JavaScript-based workloads, which include photo enhancement, organize album, stock option pricing, local notes, sales graphs, and explore DNA sequencing.

AMD EPYC 7601 Processors Set Two New World Records on SPEC CPU Benchmarks

AMD today announced that the new Hewlett Packard Enterprise ProLiant DL385 Gen10 server, powered by AMD EPYC processors, set world records in both SPECrate2017_fp_base and SPECfp_rate2006. The secure and flexible 2P 2U HPE ProLiant DL385 Gen10 server joins the HPE Cloudline CL3150 server in featuring AMD EPYC processors. With designs ranging from 8 to 32 cores, AMD EPYC delivers industry-leading memory bandwidth across the HPE line-up, with eight channels of memory and unprecedented support for integrated, high-speed I/O with 128 lanes of PCIe 3.0 on every EPYC processor.
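That eight-channel memory claim translates into roughly 170 GB/s of peak theoretical bandwidth per socket, assuming DDR4-2666 (a quick sketch, not an official AMD figure):

```python
# Peak theoretical bandwidth: eight 64-bit channels of DDR4-2666,
# each moving 2666 MT/s x 8 bytes ~= 21.3 GB/s. DDR4-2666 is assumed.
channels = 8
per_channel_gb_s = 2666 * 8 / 1000  # MT/s x bytes/transfer -> GB/s (approx.)
print(f"Per socket: ~{channels * per_channel_gb_s:.0f} GB/s")  # ~171 GB/s
```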

"HPE is joining with AMD today to extend the world's most secure industry standard server portfolio to include the AMD EPYC processor. We now give customers another option to optimize performance and security for today's virtualized workloads," said Justin Hotard, vice president and GM, Volume Global Business Unit, HPE. "The HPE ProLiant DL385 featuring the AMD EPYC processor is the result of a long-standing technology engagement with AMD and a shared belief in continuing innovation."

Intel's 18-core Core i9-7980XE Benchmarks Surface

A user on Coolenjoy has apparently gotten his hands on Intel's upcoming i9-7980XE silicon, putting it through its paces in Cinebench and a number of other benchmarks. The 18-core, 36-thread Core i9-7980XE is set to be Intel's most advanced HEDT processor of all time by a wide margin - both in number of cores and pricing. It seems that even in the face of a competitive AMD, which emphasizes value through core counts with its $999, 16-core/32-thread Threadripper 1950X, Intel still sees fit to charge an arm, a leg, and both of your kidneys for a 2-core advantage. Intel's XE processors have become more synonymous with eXtremely Expensive than eXtreme Edition over the years, and the i9-7980XE, with its $1,999 price tag, does nothing to alleviate the issue. This is a halo product, though - the most advanced HEDT processor in the world. And with it being as niche a product as it is, it actually makes some kind of sense for it to be so expensive - an immoral, "where has the world gone" kind of sense, but still, some measure of it.

Intel Core i7-8700K and i5-8400 SANDRA Benchmarks Surface

Ahead of their launch later this quarter, SiSoft SANDRA benchmarks of Intel's 8th generation Core i7-8700K and Core i5-8400 six-core processors have surfaced in benchmark databases, and were promptly compared to their predecessors by HotHardware. The results put to the test Intel's claims of "over 40 percent more performance" compared to the 7th generation Core processors, made in its 8th Generation Core launch event presentation. The bulk of these performance increases is attributed to the increased core count over the previous generation, which directly yields higher multi-threaded performance; a small but significant portion is attributed to gains in single-threaded performance. Since the "Coffee Lake" microarchitecture is essentially a refresh of "Skylake," single-threaded performance increases can be attributed to higher clock speeds.

The Core i7-8700K is the top-dog of the 8th generation Core mainstream-desktop processor family. This six-core chip was compared to the product it succeeds in Intel's MSDT product-stack, the quad-core Core i7-7700K. There is a 45 percent increase in performance, in the "processor arithmetic" test; and a 47 percent increase in the "processor multimedia" test. These two test-suites are multi-threaded, and hence benefit from the two added cores, which in turn add four additional logical CPUs, thanks to HyperThreading. "Processor cryptography" sees a 12 percent increase. The single-precision and double-precision "Scientific Analysis" tests, which again are multi-threaded, see 26 percent and 32 percent performance gains over the i7-7700K, respectively.
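Those gains track closely with raw core-count scaling: going from 4 to 6 cores is a 50 percent increase, so the 45-47 percent results imply near-linear scaling in the heavily threaded tests. A quick sanity check using the percentages quoted above:

```python
# Observed multi-threaded gains vs. ideal 4-to-6-core scaling (50%).
ideal = 6 / 4 - 1
observed = {"arithmetic": 0.45, "multimedia": 0.47,
            "scientific (SP)": 0.26, "scientific (DP)": 0.32}
for test, gain in observed.items():
    print(f"{test}: {gain:.0%} observed, {gain / ideal:.0%} of ideal scaling")
```

The scientific analysis tests land well below ideal scaling, hinting at memory-bandwidth or clock constraints rather than pure core-count limits.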

New Performance Benchmarks of AMD's Vega Frontier Edition Surface

You probably took a long, hard look at our article covering a single-minded user's experience with his new Vega Frontier Edition. Now, courtesy of PCPer and charitable soul Ekin at Linus Tech Tips, we have some more performance benchmarks of AMD's latest (non-gaming-specific) graphics card.

Starting at 2560x1440, let's begin with the good news: in what seems to be the best performance scenario we've seen so far, the Vega Frontier Edition stands extremely close to NVIDIA's GTX 1080 Ti in Fallout 4. It trails it by about 10 FPS for most of the test, and even surpasses it at some points. These numbers should be taken with a grain of salt regarding the RX Vega consumer cards: performance on those models will probably be higher than the Frontier Edition's results. And for AMD's sake, they had better be, because in all other tests the Frontier Edition somewhat disappoints. It's beaten by NVIDIA's GTX 1070 in Grand Theft Auto V, mirrors its performance in The Witcher 3, and delivers slightly higher performance than the GTX 1070 in Hitman and Dirt Rally (albeit lower than the GTX 1080).

Intel's Core i7-7740K Kaby Lake-X Benchmarks Surface

Two days, two leaks on an upcoming Intel platform (the accelerated-release-date gods are working hard with the blue giant, it would seem). This time it's Intel's own i7-7740K, a Kaby Lake-X HEDT processor that packs 4 cores and 8 threads, which is interesting when one considers that AMD's latest mainstream Ryzen processors already pack double the cores and threads on a non-HEDT platform. One interesting detail about the Kaby Lake-X processors is that they are rumored to carry 16 PCIe 3.0 lanes from the CPU, which can be configured either as a single x16 slot or across three slots as x8/x4/x4. Since these parts are reported to be based on the consumer LGA 1151 Kaby Lake processors, it would seem they eschew Intel's integrated graphics, thus saving die space. They also seem to deliver a quad-channel memory controller, though Ryzen 7 reviews have shown how much of a difference that makes for some use cases.
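For reference, here is what those rumored lane arrangements would mean in aggregate bandwidth terms, assuming the usual ~0.985 GB/s of usable PCIe 3.0 throughput per lane (the configurations are the rumored ones, not confirmed):

```python
# Rumored Kaby Lake-X CPU lane arrangements and their aggregate
# PCIe 3.0 bandwidth (~0.985 GB/s usable per lane, per direction).
PER_LANE_GB_S = 0.985
configs = {"single x16": [16], "x8/x4/x4": [8, 4, 4]}
for name, lanes in configs.items():
    total = sum(lanes) * PER_LANE_GB_S
    print(f"{name}: {sum(lanes)} lanes, ~{total:.1f} GB/s aggregate")
```

The total is the same either way; bifurcation only changes how the 16 lanes are split across slots.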

AMD Radeon RX 580 Overclocking and Benchmarks Surface

Some photos, screenshots, and benchmarks of what appears to be an XFX RX 580 graphics card are doing the rounds, courtesy of overclocker Lau Kin Lam, who shared them (alongside a three-hour-long video) on his Facebook page. Apparently, this is a special, China-only edition of the card, which is a shame, considering the great-looking waterblock that is smiling for the camera. The fact that this card uses a reference board with a single 8-pin power connector may prove relevant to its overclocking efforts (and to those of other, non-reference boards we've seen that carry both the 8-pin and an extra 6-pin power connector).

AMD's Ryzen 7 1700X Glorious Benchmarks Leak; IHS, Pin Layout Photographed

Another day, another leak: the folks at XFastest have indeed been the fastest to leak images of an actual Ryzen 7 1700X processor, with pictures of the processor's IHS and pin area running rampant throughout the Internet (the Ryzen chip is located on the right in both pictures, with a sample of AMD's previous-generation FX CPUs on the left for comparison's sake).

While revealing shots may have their appeal, it's the benchmarking portion that most of us are waiting for. Until actual reviews are out, we're left with nothing more than these leaks (which should be taken with appropriate amounts of salt). In this case, benchmarks of AMD's upcoming Ryzen 7 1700X have been released, showing how the upcoming CPU performs in 3DMark Fire Strike, CPU Mark, and Cinebench R15.

AMD Ryzen 1700X, 1600X & 1300 Benchmarks Leaked

A number of sites have been reporting on some leaked (as in, captured from Futuremark's database) scores for AMD's upcoming CPUs. Now, benchmarks seem to have surfaced not only for the company's 8-core, 16-thread monsters, but also for its sweet-spot 6-core, 12-thread CPUs and its more mundane 4-core offerings.

Taking these metrics into account (with, naturally, some grains of salt), and comparing Intel's and AMD's offerings on 3DMark Fire Strike Physics scores, we can see that the $389 Ryzen 7 1700X (8 cores, 16 threads) at its base clock of 3.4 GHz manages to surpass Intel's competing (in thread count alone, since it retails for $1,089) 6900K running at its base 3.2 GHz frequency - with the Ryzen processor scoring 17,878 points versus the 6900K's 17,100. Doing some quick math, scaling the R7 1700X down to the 6900K's 3.2 GHz would put it at roughly 16,800 points - slightly behind the 6900K, meaning the two chips are essentially neck and neck clock-for-clock, with Intel keeping a small per-clock edge. We don't know whether Turbo was disabled in these tests, for either AMD's or Intel's processor, so we have to consider that. However, if Turbo were enabled, the R7 1700X's clock speed would only be 100 MHz higher than the 6900K's (3.8 GHz max vs. 3.7 GHz max on the Intel CPU).
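Normalizing the leaked scores per GHz makes the clock-for-clock picture explicit (a simple sketch that assumes linear clock scaling and ignores Turbo, as noted above):

```python
# Per-GHz normalization of the leaked Fire Strike Physics scores.
# Assumes linear scaling with clock and ignores Turbo behavior.
scores = {"Ryzen 7 1700X": (17878, 3.4), "Core i7-6900K": (17100, 3.2)}
for cpu, (score, base_ghz) in scores.items():
    print(f"{cpu}: {score / base_ghz:,.0f} points/GHz")
# ~5,258/GHz vs. ~5,344/GHz: the 6900K keeps a slight per-clock edge.
```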