News Posts matching #Performance


NVIDIA Announces Grace CPU for Giant AI and High Performance Computing Workloads

NVIDIA today announced its first data center CPU, an Arm-based processor that will deliver 10x the performance of today's fastest servers on the most complex AI and high performance computing workloads.

The result of more than 10,000 engineering years of work, the NVIDIA Grace CPU is designed to address the computing requirements for the world's most advanced applications—including natural language processing, recommender systems and AI supercomputing—that analyze enormous datasets requiring both ultra-fast compute performance and massive memory. It combines energy-efficient Arm CPU cores with an innovative low-power memory subsystem to deliver high performance with great efficiency.

Intel Kills Extended Warranty Program for Overclocking

Some time ago, Intel introduced the Performance Tuning Protection Plan (PTPP), a warranty covering damage incurred during overclocking. PTPP customers, mainly buyers of unlocked "K"-series Intel Core processors, could get a replacement processor whenever they damaged their CPU by overclocking it. Plans typically ranged from $19.99 to $29.99, depending on the processor. However, the program is no more: Intel is discontinuing its PTPP extended overclocking warranty, and the company has updated its site to point to an End-of-Life (EOL) page displaying the quote below.

NVIDIA's Mining Performance Cap On Unreleased ZOTAC RTX 3060 Shows Results

The NVIDIA RTX 3060 isn't even released yet, but as you might have heard, cards are already doing the rounds on the secondhand market at ridiculous prices. And now, to sour things even more, one crypto enthusiast going by the name of CryptoLeo on YouTube has shown that he already has his hands on the card - and performed a quick mining test on it. The user showcases the card's serial number, so I hope NVIDIA is reading this post and can trace exactly which distributor this graphics card came from; breaking time-to-market likely isn't taken lightly by the company.

The test, done without the RTX 3060's release drivers (which are still a week away), shows the graphics card capping its own mining performance shortly after the mining algorithm begins processing. The card, tagged "1" in the screenshots below, shows a decline in performance from the initial 41.5 MH/s down to around 24 MH/s. The card tagged "2" is a GeForce GTX 1080 Ti, which (naturally) doesn't show the same decline. That the card exhibited this behavior sans release drivers goes to show that NVIDIA's solution is, at the very least, BIOS-based, and not a shoestring-budget driver-side check thrown in for good measure. And once again, it's a ZOTAC card in the mining spotlight. Is this a pattern?

ASUS ROG Zephyrus Duo 15 Owners are Applying Custom GPU vBIOS with Higher TGP Presets

With NVIDIA's GeForce RTX 30-series lineup of GPUs, laptop manufacturers are offered a wide variety of GPU SKUs that internally differ simply in their Total Graphics Power (TGP), which in turn results in different clock speeds and thus different performance. ASUS uses NVIDIA's GeForce RTX 3080 mobile GPU inside the company's ROG Zephyrus Duo (GX551QS) with a TGP of 115 watts, plus Dynamic Boost technology that can ramp the card up to 130 watts. However, this doesn't represent the maximum for the RTX 3080 mobile: its TGP can go up to 150 watts, a sizable bump that lets the GPU reach higher frequencies and deliver more performance.

Have you ever wondered what would happen if you manually applied a vBIOS that allows the card to use more power? Well, Baidu forum users are reporting success in transforming their 115 W RTX 3080 into a 150 W TGP card. Flashing the vBIOS from the MSI GP76 Leopard, which carries a 150 W power limit, onto the ROG Zephyrus Duo's power-limited RTX 3080 is giving results: users have successfully used this vBIOS to squeeze more performance from their laptops. As seen in the 3DMark Time Spy ranking, the top entries are now dominated by modified laptops, with performance improvements reaching up to 20%.
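A quick back-of-envelope on those figures (our own arithmetic, not forum-measured data) shows the trade-off: the modded vBIOS raises the power budget by roughly 30% for an up-to-20% performance gain, so efficiency takes a hit:

```python
# Back-of-envelope from the figures above (illustrative, not measured).
stock_tgp, modded_tgp = 115, 150   # watts
perf_gain = 1.20                   # "up to 20%" from the forum results

power_ratio = modded_tgp / stock_tgp      # ~1.30
perf_per_watt = perf_gain / power_ratio   # ~0.92
print(f"Power budget: +{(power_ratio - 1) * 100:.0f}%")
print(f"Perf/W vs. stock: {perf_per_watt * 100:.0f}%")  # ~92%, i.e. ~8% less efficient
```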

NVIDIA's DLSS 2.0 is Easily Integrated into Unreal Engine 4.26 and Gives Big Performance Gains

When NVIDIA launched the second iteration of its Deep Learning Super Sampling (DLSS) technique, which upscales lower resolutions using deep learning, everyone was impressed by the quality of the rendering it puts out. However, have you ever wondered how it all looks from the developer's side of things? Games typically need millions of lines of code, and even small features are not always easy to implement. Today, thanks to Tom Looman, a game developer working with Unreal Engine, we get to see what the integration process for DLSS 2.0 looks like, and how big the performance benefits coming from it are.

In the blog post, you can take a look at the example game shown by the developer. The integration with Unreal Engine 4.26 is easy: you compile your project against a special UE4 RTX branch and add your AppID, which you can apply for on NVIDIA's website. So how does performance look? The baseline for the results was the TXAA sampling technique used in the game demo. DLSS 2.0 managed to bring anywhere from a 60-180% increase in frame rate, depending on the scene. These are rather impressive numbers, and they go to show just how well NVIDIA has built its DLSS 2.0 technology. For a full overview, please refer to the blog post.
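For a sense of where such gains come from: DLSS renders internally at a reduced resolution and upscales to the target. The per-axis scale factors below are the commonly cited values for DLSS 2.0's quality modes, used here as illustrative assumptions rather than official figures:

```python
# Sketch: internal render resolution per DLSS 2.0 mode at a 4K target.
# Scale factors are commonly cited values, assumed here for illustration.
modes = {"Quality": 0.667, "Balanced": 0.58, "Performance": 0.5}
target_w, target_h = 3840, 2160

for mode, s in modes.items():
    w, h = int(target_w * s), int(target_h * s)
    share = (w * h) / (target_w * target_h)
    print(f"{mode}: {w}x{h} internal (~{share * 100:.0f}% of output pixels)")
# Shading only ~25-44% of the pixels is why frame-rate gains can be so large.
```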

Intel Core i7-11700K "Rocket Lake" CPU Outperforms AMD Ryzen 9 5950X in Single-Core Tests

Intel's Rocket Lake-S platform is scheduled to arrive at the beginning of next year, which is just a few days away. The Rocket Lake lineup of processors will be Intel's 11th generation of Core desktop CPUs, and the platform is expected to debut Intel's newest Cypress Cove core design. Thanks to a Geekbench 5 submission, we have the latest information about the performance of the upcoming Intel Core i7-11700K 8C/16T processor. Based on the Cypress Cove core, the CPU allegedly brings a double-digit IPC increase, according to Intel.

In the single-core test, the CPU managed to score 1807 points, while the multi-core score is 10673 points. The CPU ran at a base clock of 3.6 GHz, with the boost frequency fixed at 5.0 GHz. Compared to the previous-generation Intel Core i7-10700K, which scores 1349 points in single-core and 8973 points in multi-core, the Rocket Lake CPU puts out a 34% higher single-core and a 19% higher multi-core score. As for AMD's offerings, the highest-end Ryzen 9 5950X is about 7.5% slower in the single-core result, and of course much faster in the multi-core result thanks to double the number of cores.
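For those who want to verify the percentages, here is a minimal sketch using the scores quoted above (the helper function is ours; the scores come from the Geekbench submissions):

```python
# Sketch: percentage deltas from the quoted Geekbench 5 scores.
def pct_faster(a: float, b: float) -> float:
    """How much faster score a is than score b, in percent."""
    return (a / b - 1) * 100

i7_11700k = {"single": 1807, "multi": 10673}
i7_10700k = {"single": 1349, "multi": 8973}

for test in i7_11700k:
    print(f"{test}-core: +{pct_faster(i7_11700k[test], i7_10700k[test]):.0f}%")
# -> single-core: +34%, multi-core: +19%, matching the figures above
```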

AMD Ryzen 5 5600H "Cezanne" Processor Benchmarked, Crushes Renoir in Single Core and Multi Core Performance

The launch of AMD's next-generation mobile processors is just around the corner, with an expected debut at the virtual CES event in the beginning of 2021. The Cezanne lineup, as it is called, is based on AMD's latest Zen 3 core, which brings many IPC improvements along with better frequency scaling thanks to the refined architecture design. Today, we get to see just how much the new Cezanne generation brings to the table, thanks to a Geekbench 5 submission. The test system used a Ryzen 5 5600H mobile processor found inside a Xiaomi Mi Notebook, paired with 16 GB of RAM.

As a reminder, the AMD Ryzen 5 5600H is a six-core, twelve-thread processor. So how does it perform? In the single-core test, the Zen 3-based chip scored 1372 points, while the multi-core result was 5713 points. Compared to the equivalent last-generation Zen 2-based "Renoir" design, the Ryzen 5 4600H, the new design is about 37% faster in single-threaded and about 14% faster in multi-threaded workloads. We now await the official announcement to see the complete AMD Cezanne lineup and the designs it will bring.

The Ultimate Zen: AMD's Zen 3 Achieves 89% Higher Performance Than First-generation Zen

An investigative, generation-upon-generation review from golem.de paints an extremely impressive picture of AMD's efforts in iterating on their original Zen architecture. While first-generation Zen achieved a sorely needed inflection point in the red team's efforts against arch-rival Intel and its stranglehold on the high-performance CPU market, AMD couldn't lose focus on generational performance improvements, on pain of being steamrolled (pun intended) by the blue giant's sheer scale and engineering prowess. Perhaps this is one of those showcases of "small is nimble": we're now watching Intel, crushed under its own weight, slowly change its posture so as to offer actual competition to AMD's latest iteration of the Zen microarchitecture.

The golem.de review compares AMD's Zen, Zen+, Zen 2 and Zen 3 architectures, represented by the Ryzen 7 1800X, Ryzen 7 2700X, Ryzen 7 3800X and Ryzen 7 5800X CPUs. Through it, we see generational performance increases that mostly exceed 20% with every iteration of Zen, in both gaming and general computing workloads. This compounding improvement culminates in AMD's Ryzen 7 5800X delivering 89% higher general-compute performance and 84% higher gaming performance than the company's Zen-based Ryzen 7 1800X. And all of that ignores the performance-per-watt improvements that turn the blue giant green with envy.
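A quick sanity check (our own arithmetic) shows those two claims are consistent: per-generation gains of a bit over 20% compound to the reported 89% across three architectural steps:

```python
# Sketch: what per-generation gain compounds to an 89% total uplift
# across three steps (Zen -> Zen+ -> Zen 2 -> Zen 3)?
total_uplift = 1.89   # Ryzen 7 5800X vs. Ryzen 7 1800X, general compute
steps = 3
per_gen = total_uplift ** (1 / steps)
print(f"Implied average per-generation gain: {(per_gen - 1) * 100:.1f}%")
# -> ~23.6%, in line with the "mostly exceed 20%" observation above
```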

MSI Will Offer BIOS Update for all AMD 400-Series Motherboards to Optimize Performance for AMD Ryzen 5000 CPU Support

As a world-leading motherboard brand, MSI is committed to delivering genuine pleasure to gamers and creators, and will keep moving forward. A BIOS update is always exciting news for most users, so MSI keeps announcing relevant news for its users. Starting this week, MSI will release the AMD AGESA COMBO PI V2 1.1.0.0 Patch D BIOS for all AMD 400-series motherboards, with the rollout expected to be complete before the end of 2020.

All AMD 400-Series Motherboards Comprehensively Support Ryzen 5000 CPU with AMD AGESA COMBO PI V2 1.1.0.0 Patch D
The purpose of continually releasing BIOS updates is not only to increase motherboard performance but also to improve compatibility. Since AMD launched its Ryzen 5000 CPUs, many have been curious about whether they are compatible with AMD's previous platforms. MSI realizes that users are eager to pair their motherboards with the latest CPUs; therefore, we are determined to offer AGESA 1.1.0.0 Patch D for all AMD 400-series motherboards. With AGESA 1.1.0.0 Patch D, your 400-series motherboard can support Ryzen 5000 CPUs and achieve their true performance. Since there are technical issues with AGESA 1.1.8.0, it will not be released. Thus, AGESA 1.1.0.0 Patch D is the finest choice for updating your motherboard.

Intel and Argonne Developers Carve Path Toward Exascale 

Intel and Argonne National Laboratory are collaborating on the co-design and validation of exascale-class applications using graphics processing units (GPUs) based on Intel Xe-HP microarchitecture and Intel oneAPI toolkits. Developers at Argonne are tapping into Intel's latest programming environments for heterogeneous computing to ensure scientific applications are ready for the scale and architecture of the Aurora supercomputer at deployment.

"Our close collaboration with Argonne is enabling us to make tremendous progress on Aurora, as we seek to bring exascale leadership to the United States. Providing developers early access to hardware and software environments will help us jumpstart the path toward exascale so that researchers can quickly start taking advantage of the system's massive computational resources." -Trish Damkroger, Intel vice president and general manager of High Performance Computing.

AMD Radeon RX 6800 XT Raytracing Performance Leaked

It's only tomorrow that reviewers take the lids off AMD's latest and greatest Navi-powered graphics cards, but it's hard to keep a secret such as this... well... secret. Case in point: Videocardz has accessed some leaked slides from the presentation AMD gave its partners, and these shed some light on what raytracing performance users can expect from AMD's RX 6800 XT, the card meant to bring the fight to NVIDIA's RTX 3080. AMD's RDNA2 features support for hardware-accelerated raytracing from the get-go, with every CU receiving one additional hardware block: a Ray Accelerator. As such, the RX 6800 XT, with its 72 enabled CUs, features 72 Ray Accelerators; the RX 6800, with its 60 CUs, features 60 of these Ray Accelerators.

The RX 6800 XT was tested in five titles: Battlefield V, Call of Duty MW, Crysis Remastered, Metro Exodus and Shadow of the Tomb Raider. At 1440p resolution, with ultra settings and each game's DXR options enabled, AMD claims an RX 6800 XT paired with its Ryzen 9 5900X can deliver an average of 70 FPS in Battlefield V; 95 FPS in Call of Duty MW; 90 FPS in Crysis Remastered; 67 FPS in Metro Exodus; and 82 FPS in Shadow of the Tomb Raider. These results are, obviously, not comparable to our own results in previous NVIDIA RTX reviews; there are just too many variables in the system for that to be a worthwhile comparison. You'll just have to wait for our own review on our normalized test bench to see exactly where AMD's latest stands against NVIDIA.
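As a side note on digesting per-game figures like these: a plain average over-weights high-FPS titles. The sketch below (our own aggregation of the leaked numbers, not AMD's) compares the arithmetic mean with the harmonic mean, which is equivalent to averaging frame times:

```python
# Sketch: two ways to summarize the leaked per-game FPS figures.
from statistics import harmonic_mean

fps = {
    "Battlefield V": 70,
    "Call of Duty MW": 95,
    "Crysis Remastered": 90,
    "Metro Exodus": 67,
    "Shadow of the Tomb Raider": 82,
}

arith = sum(fps.values()) / len(fps)
harm = harmonic_mean(fps.values())   # equivalent to averaging frame times
print(f"arithmetic mean: {arith:.1f} FPS, harmonic mean: {harm:.1f} FPS")
# -> ~80.8 vs. ~79.3 FPS; the gap widens when one title is much slower
```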

TOP500 Expands Exaflops Capacity Amidst Low Turnover

The 56th edition of the TOP500 saw the Japanese Fugaku supercomputer solidify its number one status in a list that reflects a flattening performance growth curve. Although two new systems managed to make it into the top 10, the full list recorded the smallest number of new entries since the project began in 1993.

The entry level to the list moved up to 1.32 petaflops on the High Performance Linpack (HPL) benchmark, a small increase from 1.23 petaflops recorded in the June 2020 rankings. In a similar vein, the aggregate performance of all 500 systems grew from 2.22 exaflops in June to just 2.43 exaflops on the latest list. Likewise, average concurrency per system barely increased at all, growing from 145,363 cores six months ago to 145,465 cores in the current list.

AMD Looks to Keep Performance, Efficiency Gains Momentum With Zen 4, RDNA 3, and Commitment to Threadripper

AMD's Executive Vice President Rick Bergman, in an interview with The Street, shed some light on the company's future plans for Zen 4 and RDNA 3, even as we are still reeling from (or coming into) the Zen 3 and RDNA 2 launches. Speaking on RDNA 3, Bergman mentioned the company's commitment to achieving the same 50% performance-per-watt increase it achieved with RDNA 2, and had some interesting takes on why this is actually one of the most important metrics:
Rick Bergman: It just matters so much in many ways, because if your power is too high -- as we've seen from our competitors -- suddenly our potential users have to buy bigger power supplies, very advanced cooling solutions. And in a lot of ways, very importantly, it actually drives the [bill of materials] of the board up substantially. This is a desktop perspective. And invariably, that either means the retail price comes up, or your GPU cost has to come down. We focused on that for RDNA 2. It's a big focus on RDNA 3 as well.

AMD Radeon RX 6800 and RX 6800 XT GPU OpenCL Performance Leaks

AMD has just recently announced its next-generation Radeon RX 6000 series GPUs based on the new RDNA 2 architecture. The architecture is set to compete with NVIDIA's Ampere architecture and the competing company's highest offerings. Today, thanks to the well-known leaker TUM_APISAK, we have some Geekbench OpenCL scores. It appears that a user has gotten access to a system with the Radeon RX 6800 and RX 6800 XT GPUs, running Geekbench 4.4 OpenCL tests. In the tests, the system ran on an Intel platform with a Core i9-10900K CPU and 16 GB of DDR4 RAM at 3600 MHz. The motherboard was ASUS' top-end ROG Maximus XII Extreme Z490 board.

When it comes to results, the system with the RX 6800 GPU scored anywhere from 336367 to 347137 points across three test runs. For comparison, the NVIDIA GeForce RTX 3070 scores about 361042 points, meaning the Radeon card was not faster in any of the runs. The higher-end Radeon RX 6800 XT scored 407387 and 413121 points in two test runs; compared to the GeForce RTX 3080's roughly 470743 points, the card again trails the competition. However, a Ryzen 9 5950X test setup boosted the Radeon RX 6800 XT considerably, to 456837 points, a huge leap over the Intel-based system thanks to the Smart Access Memory (SAM) technology an all-AMD system provides.

Samsung's 5 nm Node in Production, First SoCs to Arrive Soon

During its Q3 earnings call, Samsung Electronics provided an update on its foundry and node production development. For the past year or so, Samsung's smallest production node has been 7 nm LPP (Low Power Plus). That has now changed, as Samsung has started production on its 5 nm LPE (Low Power Early) semiconductor manufacturing node. We previously reported that the company struggled with yields on its 5 nm process; however, those issues appear to have been ironed out, and the node is now in full production. Further evidence that the new node is doing well: we also recently reported that Samsung will be the sole manufacturer of the Qualcomm Snapdragon 875 5G SoC.

The new 5 nm semiconductor node is a marginal improvement over the 7 nm node before it. It offers a 10% performance improvement at the same power and chip complexity, or a 20% power reduction at the same clocks and design. When it comes to density, the company advertises a 1.33x increase in transistor density over the previous node. The 5LPE node is manufactured using Extreme Ultra-Violet (EUV) lithography, and its FinFET transistors feature new characteristics like Smart Diffusion Break isolation, flexible contact placement, and single-fin devices for low-power applications. The node is design-rule compatible with the previous 7 nm LPP node, so existing IP can be manufactured on the new process. That means this is not a brand-new process but rather an enhancement. The first products are set to arrive with the next generation of smartphone SoCs, like the aforementioned Qualcomm Snapdragon 875.
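Taken together, the quoted figures imply the following for a design ported from 7LPP to 5LPE (a simple reading of the marketing numbers, not independent measurements):

```python
# Sketch: implications of Samsung's quoted 5LPE figures vs. 7LPP.
density_gain = 1.33   # transistor density
perf_gain = 1.10      # at the same power
power_cut = 0.20      # at the same performance

area_ratio = 1 / density_gain
print(f"Same transistor count: ~{(1 - area_ratio) * 100:.0f}% smaller area")
print(f"Iso-power: +{(perf_gain - 1) * 100:.0f}% performance")
print(f"Iso-performance: {(1 - power_cut) * 100:.0f}% of the power")
```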

New Alienware Products Deliver Performance Every Gamer Deserves

Alienware has always prided itself on creating high-performance PCs that power the most immersive gaming experiences on the planet. A willingness to push the envelope is what has allowed Alienware to stand out in the gaming industry for decades. This desire to drive innovation and performance continues to be reflected in our newest Alienware devices. We're not only bringing the latest and greatest technologies to market, but we're doing it in a way that pushes the design to the limit, giving our customers the ultimate battle station. With the newly announced NVIDIA GeForce RTX 30 Series GPUs and 360 Hz NVIDIA G-SYNC - combined with our engineering ingenuity - Alienware's gaming products take another giant leap forward in performance and provide stunning visuals.

Delivering the same high-level experience at home as it does in the world's most demanding e-sports arenas, the Alienware Aurora R11 is for gamers and creators everywhere. Available with custom-engineered NVIDIA GeForce RTX 30 Series GPUs and up to 10th Gen Intel Core i9 10900KF processors, the new Aurora builds on its future-ready promise through a tool-less upgradable chassis.

Arm Highlights its Next Two Generations of CPUs, codenamed Matterhorn and Makalu, with up to a 30% Performance Uplift

Editor's Note: This is written by Arm vice president and general manager Paul Williamson.

Over the last year, I have been inspired by the innovators who are dreaming up solutions to improve and enrich our daily lives. Tomorrow's mobile applications will be even more imaginative, immersive, and intelligent. To that point, the industry has come such a long way in making this happen. Take app stores for instance - we had the choice of roughly 500 apps when smartphones first began shipping in volume in 2007 and today there are 8.9 million apps available to choose from.

Mobile has transformed from a simple utility to the most powerful, pervasive device we engage with daily, much like Arm-based chips have progressed to more powerful but still energy-efficient SoCs. Although the chip-level innovation has already evolved significantly, more is still required as use cases become more complex, with more AI and ML workloads being processed locally on our devices.

NVIDIA: RTX 3090 Performance 10-15% Higher Than RTX 3080 in 4K

NVIDIA themselves have shared performance slides for their imminent RTX 3090 graphics card, the new halo product that's been marketed as the new Titan. Previous-gen Titans achieved extremely meager performance uplifts over NVIDIA's top-of-the-line cards (see the RTX 2080 Ti vs. Titan RTX, an average performance difference of 8% in favor of the Titan). According to the company, users should expect a slightly higher uplift this time around, though 10-15% higher performance in 4K still seems meager - in pure price/performance terms - for the average consumer.

Then again, the average consumer isn't the main focus for this graphics card and its gargantuan 24 GB of GDDR6X memory anyway - it's aimed more at the semi-professional and professional crowds working with specialized software, whether in rendering or AI-based workloads. The RTX 3090 is thus not so much a product for the discerning computer enthusiast, but rather a halo product for gamers and a crucial product for professionals and academics.

NVIDIA RTX 3090 Dagger-Hashimoto Mining Performance Leaked; Ampere Likely Not on Miners' Minds

Alleged mining benchmarks of NVIDIA's upcoming RTX 3090 graphics card have leaked, and the scenario looks great for non-mining usages. The RTX 3090 is quoted as achieving 120 MH/s on the ubiquitous Dagger-Hashimoto ETHash protocol. That number in itself is impressive - but not when one considers the card's 350 W board power. Granted, a 100% power limit (PL) isn't the best scenario for mining - one would expect no knowledgeable miner to run a graphics card at the stock power curve it ships with from NVIDIA (nor from AMD, mind you).

The RTX 3080 may be a better example, as more numerous benchmarks have been done on that particular GPU. It strikes the best balance of performance and power at around 65% PL (210 W), where it achieves 79.8 MH/s. However, previous-gen AMD RX 5700 XT graphics cards have been shown to achieve around 50 MH/s whilst consuming only 110 W (with underclocking and undervolting), which, paired with that particular graphics card's pricing, makes it a much, much better bet for mining efficiency and return on investment. The point is this: reports of miners gobbling up RTX 3000 series stock are, at least for now, apparently unfounded. And this may mean us regular users of graphics cards can rest assured that we won't have to deal with miner-induced shortages. At least until AMD's Navi flounders (eh) to shore.
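Putting the quoted numbers on a common footing makes the point clear; a minimal sketch computing hash rate per watt from the figures above:

```python
# Sketch: mining efficiency (MH/s per watt) from the quoted figures.
cards = {
    "RTX 3090 (stock, 350 W)": (120.0, 350),
    "RTX 3080 (65% PL, 210 W)": (79.8, 210),
    "RX 5700 XT (tuned, 110 W)": (50.0, 110),
}
for name, (mhs, watts) in cards.items():
    print(f"{name}: {mhs / watts:.3f} MH/s per watt")
# -> ~0.343, ~0.380 and ~0.455 respectively; the tuned RX 5700 XT wins
```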

AMD Zen 3-based EPYC Milan CPUs to Usher in 20% Performance Increase Compared to Rome

According to a report courtesy of Hardwareluxx, whose contributor Andreas Schilling reportedly gained access to OEM documentation, AMD's upcoming EPYC Milan CPUs are bound to offer up to 20% performance improvements over the previous EPYC generation. The report claims a 15% IPC improvement, paired with an extra 5% added via operating-frequency optimization. The report also claims that AMD's 64-core designs will feature a lower-clocked all-core operating mode, and a 32-core alternative for less-threaded workloads where extra frequency is added to the active cores.
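Note that the two claimed gains multiply rather than add, which is how 15% and 5% land at "up to 20%"; a one-line check:

```python
# Sketch: IPC and frequency gains compound multiplicatively.
ipc_gain, freq_gain = 1.15, 1.05
print(f"Combined uplift: ~{(ipc_gain * freq_gain - 1) * 100:.1f}%")  # ~20.8%
```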

Apparently, AMD's approach with the Zen 3 architecture does away with L3 subdivisions along CCXs; now, a full 32 MB of L3 cache is available to each 8-core Core Compute Die (CCD). AMD has apparently achieved new levels of frequency optimization with Zen 3, with higher upper frequency limits than before. This will benefit lower core-count designs the most, as the amount of heat generated is necessarily lower than in more core-dense designs. Milan keeps the same 7 nm manufacturing tech, DDR4, PCIe 4.0, and 120-225 W TDP as the previous-gen Rome. It remains to be seen how these changes translate to the consumer version of Zen 3, Vermeer, later this year.

Lenovo Delivers Outstanding Q1 Performance and Strong Growth

Lenovo Group (HKSE: 992; Pink Sheets: LNVGY) today announced Group revenue in the first quarter of US$13.3 billion, up almost 7% year-on-year (up 10% year-on-year excluding currency impact). Pre-tax income grew 38% compared to the same quarter a year earlier, to US$332 million, while net income increased 31% year-on-year to US$213 million. Basic earnings per share for the first quarter were 1.80 US cents, or 13.95 HK cents.

"Our outstanding performance last quarter proves that Lenovo has quickly regained momentum from the impact of the pandemic and is capturing the new opportunities emerging from remote working, education and accelerated digitalization," said Yang Yuanqing, Lenovo Chairman and CEO. "While the world continues to face challenges, Lenovo is focused on delivering sustainable growth through our core businesses as well as the new services and solutions opportunities presented by our service-led intelligent transformation."

Linux Performance of AMD Rome vs Intel Cascade Lake, 1 Year On

Michael Larabel over at Phoronix posted an extremely comprehensive analysis of the performance differential between AMD's Rome-based EPYC and Intel's Cascade Lake Xeons, one year after release. The battery of tests, comprising more than 116 benchmark results, pits a Xeon Platinum 8280 2P system against an EPYC 7742 2P one. The tests compare both systems' performance under Ubuntu 19.04, chosen as the "one year ago" baseline, against the newer Linux software stack (Ubuntu 20.10 daily + GCC 10 + Linux 5.8).

The benchmark conclusions are interesting. For one, Intel gained more ground than AMD over the course of the year, with the Xeon platform gaining 6% performance across releases while AMD's EPYC gained just 4% over the same period. Even so, AMD's system remains an average of 14% faster across all tests than the Intel platform, which speaks to AMD's silicon superiority. Check some benchmark results below, but follow the source link for the full rundown.
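For context on how a battery of 116 results collapses into a single "average of 14%": large benchmark suites are usually summarized with a geometric mean, so no individual test dominates. A minimal sketch with made-up ratios (not Phoronix's actual per-test data):

```python
# Sketch: geometric mean of per-test performance ratios (EPYC / Xeon).
from math import prod

def geomean(ratios):
    return prod(ratios) ** (1 / len(ratios))

ratios = [1.32, 1.05, 1.21, 0.97, 1.18]   # illustrative values only
print(f"Overall: {(geomean(ratios) - 1) * 100:.1f}% faster")
```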

New AMD Radeon Pro 5600M Mobile GPU Brings Desktop-Class Graphics Performance and Enhanced Power Efficiency to 16-inch MacBook Pro

AMD today announced availability of the new AMD Radeon Pro 5600M mobile GPU for the 16-inch MacBook Pro. Designed to deliver desktop-class graphics performance in an efficient mobile form factor, this new GPU powers computationally heavy workloads, enabling pro users to maximize productivity while on the go.

The AMD Radeon Pro 5600M GPU is built upon industry-leading 7 nm process technology and advanced AMD RDNA architecture to power a diverse range of pro applications, including video editing, color grading, application development, game creation and more. With 40 compute units and 8 GB of ultra-fast, low-power High Bandwidth Memory (HBM2), the AMD Radeon Pro 5600M GPU delivers superfast performance and excellent power efficiency in a single GPU package.

Arm Announces new IP Portfolio with Cortex-A78 CPU

During this unprecedented global health crisis, we have experienced rapid societal changes in how we interact with and rely on technology to connect, aid, and support us. As a result of this we are increasingly living our lives on our smartphones, which have been essential in helping feed our families through application-based grocery or meal delivery services, as well as virtually seeing our colleagues and loved ones daily. Without question, our Arm-based smartphones are the computing hub of our lives.

However, even before this increased reliance on our smartphones, there was already growing interest among users in exploring the limits of what is possible. The combination of these factors with the convergence of 5G and AI is generating greater demand for more performance and efficiency in the palm of our hands.
Arm Cortex-A78

Intel Showcases Ice Lake iGPU Performance in Premiere Pro 14.2

As we reported earlier this week, the release of Adobe Premiere Pro 14.2 brought GPU acceleration to select NVIDIA and AMD GPUs, taking advantage of NVIDIA's NVENC encoder to boost encoding and decoding speeds. Intel has now showcased the improvements to encoding and decoding with Intel Quick Sync Video (QSV) on the 11th generation iGPUs found in mobile Ice Lake chips with Adobe Premiere Pro 14.2.

Compared to the previous 9th generation graphics found in Skylake and Kaby Lake CPUs, the new 11th generation iGPUs perform anywhere from 49-82% better. While impressive, these performance gains can only be found on a limited set of low-power 10 nm mobile chips with a maximum of four cores, and have yet to arrive on desktop platforms.