News Posts matching #Time Spy


GDDR6 GeForce RTX 4070 Tested, Loses 0-1% Performance Against RTX 4070 with GDDR6X

NVIDIA quietly released a variant of the GeForce RTX 4070 featuring slower 20 Gbps GDDR6 memory, replacing the 21 Gbps GDDR6X that the original RTX 4070 comes with. Wccftech got hold of a GALAX-branded RTX 4070 GDDR6 and put it through benchmarks focused on comparing it to a regular RTX 4070. Memory type and speed are the only changes in specs; neither the core configuration nor the GPU clock speeds are altered. Wccftech's testing shows that the RTX 4070 GDDR6 measures 0-1% slower than the RTX 4070 (GDDR6X) at 1080p and 1440p resolutions, while the difference between the two is about 2% at 4K Ultra HD.

Wccftech's test bed is comprehensive, with 27 game tests, each run at three resolutions, plus seven synthetic tests. The synthetic tests are mainly part of the 3DMark suite, including Speed Way, Fire Strike, Time Spy, Port Royal, and their presets. Here, the RTX 4070 GDDR6 is nearly identical in performance, with a 0-0.2% delta against the RTX 4070 GDDR6X. In the game tests, performance varies by resolution. At 1080p, the performance delta is 0-1%, with the only noteworthy outliers being "Metro Exodus" (extreme preset), where the RTX 4070 GDDR6 loses 4.2%, and "Alan Wake 2," where it loses 2.3%.

Colorful Presents iGame Lab Project: Highest-Performance GeForce RTX 4090 GPUs Limited to 300 Pieces, OC'd to 3.8 GHz

At Computex 2024, Colorful launched an ultra-exclusive new graphics card - the iGame Lab 4090. This limited-edition GPU is squarely targeted at hardcore overclockers and performance enthusiasts willing to pay top dollar for the absolute best. With only 300 units produced globally, the iGame Lab 4090 represents the pinnacle of Colorful's engineering efforts. Each chip was hand-selected from thousands after rigorous binning to ensure premium silicon capable of extreme overclocks. The card's striking aesthetics feature a clean white shroud with silver accent armor. Beyond the intricate design, the real draw is performance. The iGame Lab 4090 has already shattered records, with professional overclocker CENs pushing it past 3.8 GHz under 3D load. It set a new world-record 3DMark Time Spy Extreme score of 24,103 points. Out of the box, the card features a base clock of 2235 MHz, a boost clock of 2520 MHz, and a turbo mode of 2625 MHz, all in a 3-slot design.

AMD Ryzen 7 8700G Loves Memory Overclocking, which Vastly Benefits its iGPU Performance

Entry-level discrete GPUs are in trouble, as the first reviews of the AMD Ryzen 7 8700G desktop APU show that its iGPU is capable of beating the discrete GeForce GTX 1650, which means it should also beat the Radeon RX 6500 XT that offers comparable performance. Based on the 4 nm "Hawk Point" monolithic silicon, the 8700G packs the powerful Radeon 780M iGPU based on the latest RDNA3 graphics architecture, with as many as 12 compute units, worth 768 stream processors, 48 TMUs, and an impressive 32 ROPs; and full support for the DirectX 12 Ultimate API requirements, including ray tracing. A review by a Chinese tech publication on BiliBili showed that it's possible for an overclocked 8700G to beat a discrete GTX 1650 in 3DMark Time Spy.

It's important to note here that both the iGPU engine clock and the APU's memory frequency were increased. The reviewer set the iGPU engine clock to 3400 MHz, up from its 2900 MHz reference speed. It turns out that much like its predecessor, the 5700G "Cezanne," the new 8700G "Hawk Point" features a more advanced memory controller than its chiplet-based counterpart (in this case the Ryzen 7000 "Raphael"). The reviewer achieved a DDR5-8400 memory overclock. The combination of the two resulted in a 17% increase in the Time Spy score over stock speeds, which is how the chip manages to beat the discrete GTX 1650 (comparable performance to the RX 6500 XT at 1080p).
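
Since the Radeon 780M has no dedicated VRAM and shares system memory, its bandwidth scales directly with the memory overclock. A rough back-of-the-envelope sketch of the uplift, assuming a dual-channel (2x 64-bit) DDR5 configuration and the 8700G's official DDR5-5600 support as the stock baseline (the review doesn't state its baseline memory speed):

```python
# Peak DDR5 bandwidth: transfers/s x channels x 8 bytes per 64-bit channel.
# The DDR5-5600 baseline is an assumption, not stated in the review.
def ddr5_bandwidth_gbs(mt_per_s: int, channels: int = 2) -> float:
    return mt_per_s * channels * 8 / 1000

stock = ddr5_bandwidth_gbs(5600)  # ~89.6 GB/s
oc = ddr5_bandwidth_gbs(8400)     # ~134.4 GB/s
print(f"{oc / stock - 1:.0%} more bandwidth for the iGPU")  # ~50%
```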

NVIDIA H100 Hopper GPU Tested for Gaming, Slower Than Integrated GPU

NVIDIA's H100 Hopper GPU is a device designed for pure AI and other compute workloads, with the least amount of consideration for gaming workloads that involve graphics processing. However, it is still interesting to see how this 30,000 USD GPU fares in comparison to other gaming GPUs, and whether it is even possible to run games on it. It turns out that it is technically feasible but makes little sense, as the Chinese YouTube channel Geekerwan notes. Based on the GH100 GPU SKU with 14,592 CUDA cores, the H100 PCIe version tested here can achieve 204.9 TeraFLOPS at FP16, 51.22 TeraFLOPS at FP32, and 25.61 TeraFLOPS at FP64, with its natural strength lying in accelerating AI workloads.
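
Those throughput figures are internally consistent. A quick sanity-check sketch, assuming a boost clock of roughly 1.755 GHz (inferred from the quoted numbers, not stated in the article) and one FMA (two FLOPs) per CUDA core per cycle:

```python
# Back-of-the-envelope check of the quoted H100 PCIe figures.
cuda_cores = 14_592
boost_clock_ghz = 1.755  # assumed, inferred from the quoted TFLOPS

fp32 = cuda_cores * 2 * boost_clock_ghz / 1000  # one FMA = 2 FLOPs per cycle
print(f"FP32: {fp32:.2f} TFLOPS")       # ~51.22, matching the article
print(f"FP64: {fp32 / 2:.2f} TFLOPS")   # ~25.61, half the FP32 rate
print(f"FP16: {fp32 * 4:.2f} TFLOPS")   # ~204.9, four times the FP32 rate
```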

However, how does it fare in gaming benchmarks? Not very well, as the testing shows. It scored 2681 points in 3DMark Time Spy, which is lower than AMD's integrated Radeon 680M, which managed 2710 points. Interestingly, the GH100 has only 24 ROPs (render output units), while the gaming-oriented GA102 (the highest-end gaming GPU SKU) has 112 ROPs. That alone goes a long way toward explaining why the H100 GPU is used for compute only. Since it doesn't have any display outputs, the system needed another regular GPU to drive the display, while the computation happened on the H100 GPU.

NVIDIA RTX 4080 20-30% Slower than RTX 4090, Still Smokes the RTX 3090 Ti: Leaked Benchmarks

Benchmarks of NVIDIA's upcoming GeForce RTX 4080 (formerly known as the RTX 4080 16 GB) are already out, as the leaky taps in the Asian tech forumscape know no bounds. Someone on the ChipHell forums with access to an RTX 4080 sample and drivers put it through a battery of synthetic and gaming tests. The $1,200 MSRP graphics card was tested in 3DMark Time Spy, Port Royal, and games that include Forza Horizon 5, Call of Duty: Modern Warfare II, Cyberpunk 2077, Borderlands 3, and Shadow of the Tomb Raider.

The big picture: the RTX 4080 is found to be halfway between the RTX 3090 Ti and the RTX 4090. At stock settings, and in 3DMark Time Spy Extreme (4K), it has 71% the performance of an RTX 4090, whereas the RTX 3090 Ti sits at 55% of the RTX 4090. With its "power limit" slider maxed out, the RTX 4080 inches 2 percentage points closer to the RTX 4090 (73% of the RTX 4090), and with a bit of manual OC, it adds another 4 percentage points. Things change slightly with 3DMark Port Royal, where the RTX 4080 offers 69% of the RTX 4090's performance, in a test where the RTX 3090 Ti does 58% of the RTX 4090.
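
Converting those ratios into a direct comparison yields the headline numbers; a quick worked example using the stock Time Spy Extreme figures quoted above:

```python
# Performance expressed as a fraction of the RTX 4090's score.
rtx_4080, rtx_3090_ti = 0.71, 0.55

print(f"RTX 4080 vs RTX 4090:    {1 - rtx_4080:.0%} slower")                # ~29%
print(f"RTX 4080 vs RTX 3090 Ti: {rtx_4080 / rtx_3090_ti - 1:.0%} faster")  # ~29%
```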

NVIDIA RTX 4090 "Ada" Scores Over 19000 in Time Spy Extreme, 66% Faster Than RTX 3090 Ti

NVIDIA's next-generation GeForce RTX 4090 "Ada" flagship graphics card allegedly scores over 19000 points in the 3DMark Time Spy Extreme synthetic benchmark, according to kopite7kimi, a reliable source for NVIDIA leaks. This would put its score around 66 percent above that of the current RTX 3090 Ti flagship. The RTX 4090 is expected to be based on the 5 nm AD102 silicon, with a rumored CUDA core count of 16,384. The higher IPC from the new architecture, coupled with higher clock speeds and power limits, could be contributing to this feat. Time Spy Extreme is a traditional DirectX 12 raster-only benchmark, with no ray-traced elements. The Ada graphics architecture is expected to reduce the "cost" of ray tracing (versus raster-only rendering), although we're yet to see leaks of RTX performance.

Intel Arc A550M & A770M 3DMark Scores Surface

The upcoming Intel Arc A550M & A770M mobile graphics cards have recently appeared on 3DMark in Time Spy and Fire Strike Extreme. The Intel Arc Alchemist A550M features an ACM-G10 GPU with 16 Xe cores paired with 8 GB of GDDR6 memory on a 128-bit bus, while the A770M features the same GPU but with 32 Xe cores and 16 GB of GDDR6 memory on a 256-bit bus.

The A550M was tested in 3DMark Time Spy, where it scored 6017 points running on an older 1726 driver with Intel Advanced Performance Optimizations (APO) enabled. The A770M was benchmarked with 3DMark Fire Strike Extreme, where it scored a respectable 13244 graphics points running on test drivers, which places it near the RTX 3070M. This score does not necessarily correlate with real-world gaming performance; figures provided directly by Intel show the Arc A730M being only 12% faster than the RTX 3060M.

AMD Radeon Software Adrenalin 21.7.1 Released

AMD today released the latest version of Radeon Software Adrenalin. Version 21.7.1 beta introduces optimization for F1 2021, with a 6% performance improvement over the previous drivers measured at 4K UHD on an RX 6800 XT. Support is also introduced for the new Radeon RX 6700M and RX 6600M mobile GPUs.

Among the issues fixed are an Oculus service error preventing the Oculus Link setup from running on machines with RX 5000 and RX 6000 series graphics cards; lighting corruption noticed in Apex Legends with Radeon Boost enabled; the AMD User Experience Program consuming abnormally high memory; a driver version mismatch between the Windows Store and AMD Support versions; high memory usage on some systems running 3DMark Time Spy; and image corruption in Carrion with AF enabled.

DOWNLOAD: AMD Radeon Software Adrenalin 21.7.1 beta

AMD Liquid-Cooled Reference RX 6900 "XTX" Tested on 3DMark

PC enthusiasts on the BiliBili community posted the first performance benchmarks of the Made-by-AMD liquid-cooled Radeon RX 6900 XT graphics card, which has been doing the rounds on the rumor mill as an "XTX" part. This card features engine clock speeds in the league of the recent RX 6900 XT "XTXH silicon" factory-overclocked cards, but its more striking specification is the use of 18.48 Gbps-rated GDDR6 memory (a 15.5% increase in memory bandwidth), along with liquid cooling. The engine clocks are set at 2250 MHz game clock, with 2435 MHz boost.
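
The quoted bandwidth uplift checks out against the reference RX 6900 XT's 16 Gbps GDDR6 (bandwidth scales linearly with data rate on the same bus width):

```python
# 18.48 Gbps liquid-cooled card vs. 16 Gbps reference card, same bus width.
print(f"{18.48 / 16 - 1:.1%}")  # 15.5%, matching the article
```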

Tested across 3DMark Time Spy, Time Spy Extreme, Fire Strike Extreme, and Port Royal, the card turns out to be anywhere between 5-8 percent faster than an air-cooled reference RX 6900 XT card. This would put it slightly behind the custom RX 6900 XT ("XTXH silicon") cards, though a significant upgrade over the air-cooled card. Coreteks in a recent report stated that the liquid-cooled reference RX 6900 XT is being targeted exclusively at the SI (system integrator) market, and so far, the card has only been spotted in China.

ASUS ROG Zephyrus Duo 15 Owners are Applying Custom GPU vBIOS with Higher TGP Presets

With NVIDIA's GeForce RTX 30-series lineup of GPUs, laptop manufacturers are offered a wide variety of GPU SKUs that internally differ simply by having different Total Graphics Power (TGP), which in turn results in different clock speeds and thus different performance. ASUS uses NVIDIA's GeForce RTX 3080 mobile GPU inside the company's ROG Zephyrus Duo 15 (GX551QS) with a TGP of 115 W, and Dynamic Boost technology that can ramp the card up to 130 W. However, this doesn't represent the maximum for the RTX 3080 mobile GPU. The maximum TGP for the RTX 3080 mobile goes up to 150 W, a big step up that lets the GPU reach higher frequencies and deliver more performance.

Have you ever wondered what would happen if you manually applied a vBIOS that allows the card to use more power? Well, Baidu forum users are reporting a successful experiment of transforming their 115 W RTX 3080 into a 150 W TGP card. Applying the GPU vBIOS from the MSI Leopard G76, which features a 150 W power limit, to the Zephyrus Duo's power-limited RTX 3080 cards is giving results. Users have successfully used this vBIOS to squeeze more performance out of their laptops. As seen on the 3DMark Time Spy rank list, the entries are now dominated solely by modified laptops. The performance improvement is, of course, present, reaching up to a 20% increase.

NVIDIA GeForce RTX 3060 Ti Fire Strike and Time Spy Scores Surface

3DMark scores of the upcoming NVIDIA GeForce RTX 3060 Ti were leaked to the web by VideoCardz. The RTX 3060 Ti was put through standard 3DMark Fire Strike and Time Spy benchmark runs. In the DirectX 11-based Fire Strike benchmark, the card allegedly scores 30706 points, with 146.05 FPS in GT1 and 122 FPS in GT2. With the newer DirectX 12-based Time Spy test, it allegedly scores 12175 points, with 80.82 FPS in GT1, and 68.71 FPS in GT2. There are no system specs on display, but the scores put the RTX 3060 Ti slightly ahead of the previous-generation high-end GeForce RTX 2080 Super.

The GeForce RTX 3060 Ti, bound for a December 2 launch, is an upcoming performance-segment graphics card based on the "Ampere" architecture, and is carved out of the same 8 nm "GA104" silicon as the RTX 3070. It reportedly packs 4,864 "Ampere" CUDA cores, 38 second-gen RT cores, 152 third-gen Tensor cores, and the same memory configuration as the RTX 3070—8 GB of 14 Gbps GDDR6 across a 256-bit wide bus. NVIDIA is targeting a "<$399" price-point, making the card at least 43% cheaper than the RTX 2080 Super.

NVIDIA GeForce "Ampere" Hits 3DMark Time Spy Charts, 30% Faster than RTX 2080 Ti

An unknown NVIDIA GeForce "Ampere" GPU model surfaced in the 3DMark Time Spy online database. We don't know if this is the RTX 3080 (RTX 2080 successor), or the top-tier RTX 3090 (RTX 2080 Ti successor). Rumored specs of the two are covered in our older article. The 3DMark Time Spy score unearthed by _rogame (Hardware Leaks) is 18257 points, which is close to 31 percent faster than the RTX 2080 Ti Founders Edition, 22 percent faster than the TITAN RTX, and just a tiny bit slower than KINGPIN's record-setting EVGA RTX 2080 Ti XC. Futuremark SystemInfo reads the GPU clock speed of the "Ampere" card as 1935 MHz, and its memory clock as "6000 MHz." Normally, SystemInfo reads the actual memory clock (i.e. 1750 MHz for an effective 14 Gbps on GDDR6). Perhaps SystemInfo isn't yet optimized for reading memory clocks on "Ampere."
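
For reference, the GDDR6 convention mentioned above: the quoted "effective" data rate is eight times the actual memory clock. A minimal sketch of that relationship:

```python
# GDDR6: effective data rate (Gbps) = actual memory clock (MHz) x 8 / 1000.
def gddr6_effective_gbps(actual_mhz: float) -> float:
    return actual_mhz * 8 / 1000

print(gddr6_effective_gbps(1750))  # 14.0 -> the familiar 14 Gbps
# A raw "6000 MHz" reading doesn't fit this convention, supporting the
# guess that SystemInfo misreads memory clocks on "Ampere."
```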

Intel 8-core/16-thread "Rocket Lake-S" Processor Engineering Sample 3DMarked

The "Rocket Lake-S" microarchitecture by Intel sees the company back-port its next-generation "Willow Cove" CPU core to the existing 14 nm++ silicon fabrication process in the form of an 8-core die with a Gen12 Xe iGPU. An engineering sample of one such processor made it to the Futuremark database. Clocked at 3.20 GHz with 4.30 GHz boost frequency, the "Rocket Lake-S" ES was put through 3DMark "Fire Strike" and "Time Spy," with its iGPU in play, instead of a discrete graphics card.

In "Fire Strike," the "Rocket Lake-S" ES scores 18898 points in the physics test, 1895 points in the graphics tests, and an overall score of 1746 points. With "Time Spy," the overall score is 605, with a CPU score of 4963 points, and graphics score of 524. The 11th generation Core "Rocket Lake-S" processor is expected to be compatible with existing Intel 400-series chipset motherboards, and feature a PCI-Express gen 4.0 root complex. Several 400-series chipset motherboards have PCIe gen 4.0 preparation for exactly this. The increased IPC from the "Willow Cove" cores is expected to make the 8-core "Rocket Lake-S" a powerful option for gaming and productivity tasks that don't scale across too many cores.

Core i3-10100 vs. Ryzen 3 3100 Featherweight 3DMark Showdown Surfaces

AMD's timely announcement of the Ryzen 3 "Matisse" processor series could stir things up in the entry level, as Intel kitted out its 10th generation Core i3 processors as 4-core/8-thread. Last week, a head-to-head Cinebench comparison between the i3-10300 and 3300X ensued, and today we have a 3DMark Fire Strike and Time Spy comparison between their smaller siblings, the i3-10100 and the 3100, courtesy of Thai PC enthusiast TUM_APISAK. The two were benchmarked on Time Spy and Fire Strike on otherwise constant hardware: an RTX 2060 graphics card, 16 GB of memory, and a 1 TB Samsung 970 EVO SSD.

With Fire Strike, the 3100-powered machine leads in the overall 3DMark score (by 0.31%) and the CPU-dependent Physics score (by 13.7%). The i3-10100 is ahead by 1.4% in the Graphics score, thanks to a 1.6% lead in graphics test 1 and a 1.4% lead in graphics test 2. Over to the more advanced Time Spy test, which uses the DirectX 12 API that better leverages multi-core CPUs, we see the Ryzen 3 3100 post a 0.63% higher overall score and a 1.5% higher CPU score, while the i3-10100-powered machine posts a graphics score that's within 1% higher. These numbers may suggest that the i3-10100 and the 3100 are within striking distance of each other, and that either is a good pick for gamers - until you look at pricing. Intel's official pricing for the i3-10100 is $122 (per chip in 1,000-unit tray), whereas AMD lists the SEP of the Ryzen 3 3100 at $99 (the Intel chip is at least 22% pricier), giving AMD a vast price-performance advantage that's hard to ignore, more so when you take into account value additions such as an unlocked multiplier and PCIe gen 4.0.
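
The pricing math behind that conclusion, spelled out:

```python
# Relative price premium of the i3-10100 (1,000-unit tray price) over
# the Ryzen 3 3100 (SEP), per the figures quoted above.
i3_10100, r3_3100 = 122, 99
print(f"{i3_10100 / r3_3100 - 1:.1%}")  # ~23.2%, hence "at least 22% pricier"
```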

Ryzen 7 3700X Trades Blows with Core i7-10700, 3600X with i5-10600K: Early ES Review

Hong Kong-based tech publication HKEPC posted a performance review of a few 10th generation Core "Comet Lake-S" desktop processor engineering samples they scored. These include the Core i7-10700 (8-core/16-thread), the i5-10600K (6-core/12-thread), the i5-10500, and the i5-10400. The four chips were paired with a Dell-sourced OEM motherboard based on Intel's B460 chipset, 16 GB of dual-channel DDR4-4133 memory, and an RX 5700 XT graphics card to make the test bench. This bench was compared to several Intel 9th generation Core and AMD 3rd generation Ryzen processors.

Among the purely CPU-oriented benchmarks, the i7-10700 was found to be trading blows with the Ryzen 7 3700X. It's important to note here that the i7-10700 is a locked chip, possibly with a 65 W rated TDP. Its 4.60 GHz boost frequency is lower than that of the unlocked, 95 W i9-9900K, which ends up topping most of the performance charts where it's compared to the 3700X. Still, the comparison between the i7-10700 and the 3700X can't be dismissed, since the new Intel chip could launch at roughly the same price as the 3700X (if you go by i7-9700 vs. i7-9700K launch price trends).

Leaked 3DMark Time Spy Result shows Radeon RX 5700 XT matching GeForce RTX 2070

Reviewers should have received their Radeon "Navi" review samples by now, so it's only natural that the number of leaks is increasing. WCCFTech has spotted one such leak in the 3DMark Time Spy database. The card, which is just labeled "Generic VGA," achieved a final score of 8575 points, a GPU score of 8719, and a CPU score of 7843, which is almost identical to WCCFTech's own comparison benchmark for the GeForce RTX 2070 Founders Edition (8901). The Vega 64 scored 7427, which leads WCCFTech to believe this must be the Radeon RX 5700 XT. The result has since been removed from the 3DMark database, which also suggests it's for an unreleased product.
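
As a consistency check on the leaked numbers: UL documents the Time Spy overall score as a weighted harmonic mean of the graphics and CPU sub-scores (weights 0.85 and 0.15), and the leaked sub-scores reproduce the reported overall exactly:

```python
# Time Spy overall score: weighted harmonic mean of the two sub-scores.
def time_spy_overall(graphics_score: float, cpu_score: float) -> float:
    return 1 / (0.85 / graphics_score + 0.15 / cpu_score)

print(round(time_spy_overall(8719, 7843)))  # 8575, matching the leak
```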

AMD Radeon VII 3DMark, Final Fantasy XV Benchmarks Surface - Beats and Loses to RTX 2080

Benchmarks of AMD's upcoming Radeon VII graphics card have surfaced, courtesy of the one and only graphics card info and results leaker extraordinaire Tum Apisak. In these scores, and looking purely at the graphics portion of the benchmarks, AMD's solution really does seem to bring the fight to NVIDIA's RTX 2080 - no small feat, considering that it's mostly a shrunk-down version of AMD's previous-gen Vega with cranked-up memory and core clocks.

The Radeon VII scores, according to Tum Apisak (take it with a grain of salt), 27400 on the Fire Strike test; 13400 on the Fire Strike Extreme bench; 6800 on the Fire Strike Ultra test; and finally, 8700 points on Time Spy. Consulting 3DMark's database, it seems that factory-overclocked RTX 2080 graphics cards usually score around 27000 points on the Fire Strike base test and 6400 points on the Fire Strike Ultra test, which means that at least in this synthetic scenario, AMD's graphics card ekes out a win.

Alleged AMD RX 590 3DMark Time Spy Scores Surface

Benchmark scores for 3DMark's Time Spy have surfaced, purported to represent the performance level of an unidentified "Generic VGA" - which is being identified as AMD's new 12 nm Polaris revision. The RX 590 product name makes almost as much sense as it doesn't, though; for one, there's no real reason to release another entire RX 600 series, unless AMD is giving the 12 nm treatment to the entire lineup (which likely wouldn't happen, due to the investment in fabrication process redesign and node capacity required for such a move). As such, the RX 590 moniker makes sense if AMD is only looking to increase its competitiveness in the sub-$300 space as a stop-gap until they finally have a new graphics architecture up their shader sleeves.

First Time Spy Benchmark of Upcoming NVIDIA RTX 2080 Graphics Card Leaks

A Time Spy benchmark score of one of NVIDIA's upcoming RTX 20-series graphics cards has come out swinging in a new leak. We say "one of NVIDIA's" because we can't say for sure which core configuration this graphics card packs: the only effective specs we have are the 8 GB of GDDR6 memory working at 14 Gbps, which points to either NVIDIA's RTX 2070 or RTX 2080 graphics cards. If we were the betting type, we'd say these scores are likely from an NVIDIA RTX 2080, simply because the performance improvement over the last-generation GTX 1080 (which usually scores around the 7,300s) sits pretty at some 36% - more or less what NVIDIA has been delivering with its new generation introductions.

The 10,030 points scored in Time Spy by this NVIDIA RTX graphics card bring its performance up to GTX 1080 Ti levels, and within spitting distance of the behemoth Titan Xp. This should put to rest questions regarding improved performance in typical (read, non-raytracing) workloads on NVIDIA's upcoming RTX series. It remains to be seen, as it comes to die size, how much of this improvement stems from actual per-core rasterization performance gains, and how much comes merely from an increased number of execution units (NVIDIA says it isn't just the latter, by the way).

UL's Raytracing Benchmark Not Based on Time Spy, Completely New Development

After we covered news of UL's (previously known as Futuremark) move to include a raytracing benchmark mode in Time Spy, the company contacted us and other members of the press to clarify their message and intentions. As it stands, the company will not be updating its Time Spy testing suite with raytracing technologies. Part of the reason is that this would need an immense rewrite of the benchmark itself, which would be counterproductive - and this leads to the rest of the reason: such a significant change would invalidate previous results that didn't have the raytracing mode activated.

As such, UL has elected to develop a totally new benchmark, built from the ground up to use Microsoft's DirectX Raytracing (DXR). This new benchmark will be added to the 3DMark app as an update. The new test will produce its own benchmark scores, very much like Fire Strike and Time Spy do, and will provide users with yet another ladder to climb on their way to the top of the benchmarking scene. Other details are scarce - which makes sense. But the test should still be available on or around the time of NVIDIA's 20-series launch, come September 20th.

3DMark's Time Spy With Raytracing to be Launched by the End of September

(Update: UL has come forward to clarify the way they're integrating Raytracing into their benchmarking suite. You can read the follow-up article here.)

UL (which acquired Futuremark and is in the process of rebranding 3DMark under its own image) has revealed that the new, raytracing-supporting version of its Time Spy high-performance, high-quality benchmark will be arriving by the end of September.

The new version of the benchmark will be released around the launch of Microsoft's next version of its Windows 10 operating system, codenamed Redstone 5, and will thus land some time after NVIDIA's RTX 20-series launch on September 20th. Here's hoping it will be available in time for comparative reviews of NVIDIA's new family of products, and that some light can be shed on the new series' framerate delivery, and not just their GigaRays/sec capabilities.

Futuremark Releases 3DMark v2.4.4254 Update

Futuremark today released 3DMark v2.4.4254 update (includes the "Time Spy" DirectX 12 benchmark). The latest version forces hardware monitoring information to be sent to Futuremark for validation of scores (and not just a general hardware and drivers probe). It also corrects a rare crash noticed when the system returns unexpected values for video memory amounts. The splash-screen has been restored. The installer is now available in Japanese, Korean, and Spanish. Grab the update from the link below.
DOWNLOAD: 3DMark v2.4.4254

Futuremark Releases 3DMark v2.4.3819 with "Time Spy Extreme" Benchmark

Futuremark today released the latest update to the 3DMark graphics benchmark suite. Version 2.4.3819, released to the public today, introduces the new "Time Spy Extreme" benchmark for machines running Windows 10 and DirectX 12 compatible graphics cards. With a rendering resolution of 4K Ultra HD (3840 x 2160 pixels), the new benchmark applies enough stress to put today's 4K UHD gaming PCs through their paces. You don't need a 4K monitor to run the test; however, your graphics card must feature at least 4 GB of video memory.

Time Spy Extreme also comes with a new CPU benchmark that is up to 3 times more taxing than the older CPU tests. It can take advantage of practically any number of CPU cores you can throw at it, and benefits from the AVX2 instruction set. "Time Spy Extreme" isn't available on the free version of 3DMark. You will need at least 3DMark Advanced, with a license purchased after July 14, 2016, to get it as a free upgrade. The update also improves the API overhead tests.
DOWNLOAD: Futuremark 3DMark v2.4.3819


EVGA and K|NGP|N Break New World Records

Extreme overclocker Vince "K|NGP|N" Lucido has once again set new performance World Records. Armed with the latest EVGA hardware, a new Intel Core i9 7980XE CPU and Liquid Nitrogen cooling, K|NGP|N was able to overclock the EVGA hardware to new heights, setting the standard for PC enthusiast hardware. Upon obtaining these new World Records, K|NGP|N had this to say:

"Using the new Intel Core i9 7980XE CPU at over 5.7GHz on an EVGA X299 Dark and 4x EVGA GeForce GTX 1080 Ti K|NGP|N's at over 2.3GHz, allowed me to annihilate the existing 3DMark Time Spy World Record at 37,596 points! The new Intel Core i9 7980XE CPU, EVGA X299 Dark and EVGA GeForce GTX 1080 Ti K|NGP|N are incredible!"

EVGA Announces the GeForce GTX 1080 Ti K|NGP|N

The GeForce GTX 1080 Ti was designed to be the most powerful desktop GPU ever created, and indeed it was. EVGA built upon its legacy of innovative cooling solutions and powerful overclocking with its GTX 1080 Ti SC2 and FTW3 graphics cards. Despite the overclocking headroom provided by the frigid cooling of EVGA's patented iCX Technology, the potential of the GTX 1080 Ti still leaves room for one more card at the top...and man is it good to be the K|NG.