News Posts matching #GPU


AMD Said to Become TSMC's Third Largest Customer in 2023

Based on a report in the Taiwanese media, AMD is quickly becoming a key customer for TSMC and is expected to become its third largest customer in 2023. This is partially due to new orders that AMD has placed with TSMC for its 5 nm node. AMD is said to become TSMC's single largest customer for its 5 nm node in 2023, although it's not clear from the report how large of a share of the 5 nm node AMD will have.

The additional orders are said to be related to AMD's Zen 4 based processors, as well as its upcoming RDNA3 based GPUs. AMD is expected to reach a production volume of some 20,000 wafers in the fourth quarter of 2022, although there's no mention of what's expected in 2023. Considering most of AMD's products for the next year or two will be based on TSMC's 5 nm node, this shouldn't come as a huge surprise, as AMD has a wide range of new CPU and GPU products coming.

Jon Peddie Research: Q1 of 2022 Saw a Decline in GPU Shipments Quarter-to-Quarter

Jon Peddie Research reports that the global market for PC-based graphics processing units (GPUs) reached 96 million units in Q1'22, with PC GPU shipments decreasing 6.2% quarter-to-quarter due to disturbances in China and Ukraine and the pullback from lockdowns elsewhere. However, the fundamentals of the GPU and PC market are solid over the long term: JPR predicts GPUs will have a compound annual growth rate of 6.3% during 2022-2026 and reach an installed base of 3.3 billion units at the end of the forecast period. Over the next five years, the penetration of discrete GPUs (dGPUs) in the PC market will grow to reach a level of 46%.
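As a rough illustration of the compound-growth arithmetic behind forecasts like JPR's, a minimal sketch (the starting base of 1.0 is a placeholder, not a JPR figure):

```python
# Hedged sketch of compound annual growth rate (CAGR) arithmetic, as used in
# forecasts like JPR's 6.3% for 2022-2026. The starting value is illustrative.
def compound_growth(start: float, cagr: float, years: int) -> float:
    """Value after `years` of growth at `cagr` (e.g. 0.063 for 6.3%)."""
    return start * (1 + cagr) ** years

# Four years at 6.3% compounds to roughly 28% total growth.
growth_factor = compound_growth(1.0, 0.063, 4)
print(f"4-year growth factor at 6.3% CAGR: {growth_factor:.3f}")
```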

AMD's overall market share increased 0.7% from last quarter, Intel's market share decreased 2.4%, and NVIDIA's market share increased 1.69%, as indicated in the following chart.

ORNL Frontier Supercomputer Officially Becomes the First Exascale Machine

The supercomputing world has been chasing ever-higher barriers over the years: MegaFLOP, GigaFLOP, TeraFLOP, PetaFLOP, and now ExaFLOP computing. Today, we are witnessing the introduction of the first Exascale-level machine, installed at Oak Ridge National Laboratory. Called Frontier, this system is not really new; we have known about its features for months. What is new is that it has been completed and is successfully running at ORNL's facilities. Based on the HPE Cray EX235a architecture, the system uses 3rd Gen AMD EPYC 64-core processors running at 2 GHz. In total, the system has 8,730,112 cores that work in conjunction with AMD Instinct MI250X GPUs.

As of today's TOP500 list, the system overtakes Fugaku to become the fastest supercomputer on the planet. Delivering a sustained HPL (High-Performance Linpack) score of 1.102 ExaFLOP/s, it features a power efficiency rating of 52.23 GigaFLOPs/Watt. In the HPL-AI benchmark, dedicated to measuring a system's AI capabilities, the Frontier machine outputs 6.86 ExaFLOPs at reduced precision. That figure alone doesn't qualify a machine as Exascale, since HPL-AI works with INT8/FP16/FP32 formats, while the official results are measured in FP64 double precision. Fugaku, the previous number one, scores about 2 ExaFLOPs in HPL-AI while delivering "only" 442 PetaFLOP/s in the FP64 HPL benchmark.
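The two headline figures imply the machine's power draw, as a quick back-of-the-envelope check (this is derived arithmetic, not an official ORNL number):

```python
# Dividing Frontier's sustained HPL score by its reported efficiency
# implies the system's power draw during the run.
hpl_flops = 1.102e18   # 1.102 ExaFLOP/s sustained HPL
efficiency = 52.23e9   # 52.23 GigaFLOPs per Watt
implied_power_mw = hpl_flops / efficiency / 1e6
print(f"Implied power draw: {implied_power_mw:.1f} MW")  # roughly 21 MW
```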

AMD RDNA 3 GPUs to Support DisplayPort 2.0 UHBR 20 Standard

AMD's upcoming Radeon RX 7000 series of graphics cards, based on the RDNA 3 architecture, is expected to feature next-generation protocols across the board. Today, according to a patch committed to the Linux kernel, we have information about the display output options AMD will present to consumers in its upcoming products. Thanks to Twitter user @Kepler_L2, who discovered the patch, we know that AMD will bundle DisplayPort 2.0 technology with the UHBR 20 transmission mode. UHBR 20 provides a maximum of 80 Gbps of bandwidth (four lanes at 20 Gbps each), the highest of any display output connector currently available. With this technology, an RDNA 3 GPU could drive a 16K display with Display Stream Compression, a 10K display without compression, or two 8K HDR screens running at a 120 Hz refresh rate. All of this will be handled by the Display Controller Next (DCN) media engine.
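A rough sketch of why Display Stream Compression is needed for modes like 8K 120 Hz over UHBR 20. The numbers are simplified: blanking intervals are ignored and 10 bits per channel (HDR) is assumed, so treat this as an illustration, not a VESA-accurate calculation:

```python
# Compare a raw (uncompressed) video stream against the usable payload of a
# DisplayPort UHBR 20 link. Simplified: ignores blanking intervals.
def raw_bandwidth_gbps(width, height, refresh_hz, bits_per_pixel):
    return width * height * refresh_hz * bits_per_pixel / 1e9

# UHBR 20: four lanes at 20 Gbps each, minus 128b/132b encoding overhead.
uhbr20_payload_gbps = 4 * 20 * 128 / 132  # about 77.6 Gbps usable

eightk_120 = raw_bandwidth_gbps(7680, 4320, 120, 30)  # 10-bit RGB = 30 bpp
print(f"8K 120 Hz raw: {eightk_120:.1f} Gbps vs {uhbr20_payload_gbps:.1f} Gbps link")
# The raw stream exceeds the link payload, hence Display Stream Compression.
```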

The availability of DisplayPort 2.0 capable monitors is a story of its own. VESA expected them to arrive at the end of 2021; however, they were delayed due to the lack of devices supporting the output. With AMD's RDNA 3 cards among the newest products to support these monitors, the market will likely adapt to demand as the transition to the latest standard proceeds.

Intel to Present Meteor/Arrow Lake with Foveros 3D Packaging at Hot Chips 34

Hot Chips 34, the upcoming semiconductor conference running from Sunday, August 21 to Tuesday, August 23, 2022, will feature significant contributions from companies like Intel, AMD, Tesla, and NVIDIA. Today, thanks to Intel's registration at the event, we discovered that the company will present its work on Meteor Lake and Arrow Lake processors with the novel Foveros 3D packaging. The all-virtual presentation from Intel will include talks about the Ponte Vecchio GPU and its architecture, system, and software; the Meteor Lake and Arrow Lake 3D client architecture platform with Foveros; and some Xeon D and FPGA topics. A complete list of upcoming talks is available on the official website.

As a little reminder, Meteor Lake is supposed to arrive next year, replacing the upcoming Raptor Lake design, and it has already been pictured, as you can see below. The presentation will be recorded and all content posted on the Hot Chips website for non-attendees to catch up on.

AMD Robotics Starter Kit Kick-Starts the Intelligent Factory of the Future

Today AMD announced the Kria KR260 Robotics Starter Kit, the latest addition to the Kria portfolio of adaptive system-on-modules (SOMs) and developer kits. A scalable and out-of-the-box development platform for robotics, the Kria KR260 offers a seamless path to production deployment with the existing Kria K26 adaptive SOMs. With native ROS 2 support, the standard framework for robotics application development, and pre-built interfaces for robotics and industrial solutions, the new SOM starter kit enables rapid development of hardware-accelerated applications for robotics, machine vision and industrial communication and control.

"The Kria KR260 Robotics Starter Kit builds on the success of our Kria SOMs and KV260 Vision AI Starter Kit for AI and embedded developers, providing roboticists with a complete, out-of-the-box solution for this rapidly growing application space," said Chetan Khona, senior director of Industrial, Vision, Healthcare and Sciences Markets at AMD. "Roboticists will now be able to work in their standard development environment on a platform that has all the interfaces and capabilities needed to be up and running in less than an hour. The KR260 Starter Kit is an ideal platform to accelerate robotics innovation and easily take ideas to production at scale."

AMD Claims Higher FPS/$ Radeon GPU Value Over NVIDIA Offerings

Frank Azor, Chief Architect of Gaming Solutions & Marketing at AMD, has posted an interesting slide on Twitter, claiming that AMD Radeon products deliver higher FPS/$ value than NVIDIA's graphics offerings. According to the slide, AMD Radeon graphics cards are the best solutions for gamers looking at performance per dollar and performance per watt, meaning AMD claims Radeon products are inherently higher-value than NVIDIA's offerings while also being more efficient. As the chart below shows, some AMD Radeon cards offer up to 89% better FPS/$ value and up to 123% better FPS/Watt. The highest rating belongs to the Radeon RX 6400; however, the comparison includes GPUs all the way up to the latest Radeon RX 6950 XT.
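The metric itself is simple division. A minimal sketch of how such a comparison is computed; the FPS and price figures below are hypothetical placeholders, not AMD's actual data:

```python
# FPS-per-dollar comparison sketch. All numbers here are hypothetical,
# chosen only to illustrate how a percentage advantage is derived.
def fps_per_dollar(avg_fps: float, price_usd: float) -> float:
    return avg_fps / price_usd

radeon = fps_per_dollar(avg_fps=60, price_usd=160)   # hypothetical card A
geforce = fps_per_dollar(avg_fps=55, price_usd=250)  # hypothetical card B

advantage_pct = (radeon / geforce - 1) * 100
print(f"FPS/$ advantage: {advantage_pct:.0f}%")
```

Note how sensitive the ratio is to street prices: the same FPS data at different price points yields very different advantages, which is one reason marketing slides and independent reviews rarely agree.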

Compared to TechPowerUp's own testing of AMD's Radeon cards and the performance-per-dollar calculations in our reviews, we could not reproduce numbers as high as AMD's. AMD's marketing department probably used a selection of games that perform better on AMD Radeon cards than on NVIDIA GeForce RTX cards. As with any company's marketing material, you should take it with a grain of salt, so please check some of our reviews for an unbiased comparison.

NVIDIA Releases Security Update 473.47 WHQL Driver for Kepler GPUs

Ten years ago, in 2012, NVIDIA introduced its Kepler series of graphics cards based on TSMC's 28 nm node. The architecture has been supported by NVIDIA's drivers for quite a while now, and the last branch to carry support is the 470 driver series. Today, NVIDIA pushed a security update in the form of the 473.47 WHQL driver, which fixes various CVE vulnerabilities that could lead to code execution, denial of service, escalation of privileges, information disclosure, or data tampering. The driver contains no other bug fixes and brings no additional features beyond these security patches. With the CVEs rated from 4.1 to 8.5, NVIDIA has fixed major issues affecting Kepler GPU users. The 473.47 WHQL driver is another step in supporting the Kepler architecture until 2024, when NVIDIA plans to drop support for it. Supported cards are the GT 600, GT 700, GTX 600, and GTX 700 series, plus Titan, Titan Black, and Titan Z.

The updated drivers are available for installation on NVIDIA's website and for users of TechPowerUp's NVCleanstall software.

NVIDIA GeForce RTX 4090 Twice as Fast as RTX 3090, Features 16128 CUDA Cores and 450W TDP

NVIDIA's next-generation GeForce RTX 40 series of graphics cards, codenamed Ada Lovelace, is shaping up to be a powerful lineup. Allegedly, we can expect a mid-July launch of NVIDIA's newest gaming offerings with some impressive performance. According to reliable hardware leaker kopite7kimi, the NVIDIA GeForce RTX 4090 graphics card will feature the AD102-300 GPU SKU. This model is equipped with 126 Streaming Multiprocessors (SMs), which brings the total number of FP32 CUDA cores to 16128. Given that the full AD102 GPU has 144 SMs, this leads us to think that an RTX 4090 Ti model will follow later as well.
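The leaked core count follows directly from the SM count. A quick check, assuming Ada keeps Ampere's 128 FP32 CUDA cores per SM (an assumption consistent with the leaked totals, not confirmed by NVIDIA):

```python
# Assumed layout: 128 FP32 CUDA cores per Streaming Multiprocessor,
# as on Ampere; the Ada figure is inferred from the leaked totals.
FP32_CORES_PER_SM = 128

rtx_4090_cores = 126 * FP32_CORES_PER_SM    # AD102-300 with 126 SMs
full_ad102_cores = 144 * FP32_CORES_PER_SM  # full AD102 with 144 SMs

print(rtx_4090_cores, full_ad102_cores)  # 16128 and 18432
```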

Paired with 24 GB of 21 Gbps GDDR6X memory, the RTX 4090 graphics card has a TDP of 450 Watts. While this may appear to be a very power-hungry design, bear in mind that the targeted performance improvement over the previous RTX 3090 model is expected to be two-fold. Paired with TSMC's new N4 node and a new architecture design, performance scaling should follow, at the cost of higher TDPs. These claims are yet to be validated by real-world benchmarks from independent tech media, so please take all of this information with a grain of salt and wait for TechPowerUp reviews once the card arrives.

Alleged AMD Instinct MI300 Exascale APU Features Zen4 CPU and CDNA3 GPU

Today we got information that AMD's upcoming Instinct MI300 will allegedly be available as an Accelerated Processing Unit (APU). AMD APUs are processors that combine a CPU and GPU in a single package. AdoredTV managed to get ahold of a slide indicating that the AMD Instinct MI300 accelerator will also come as an APU option that combines Zen4 CPU cores and a CDNA3 GPU accelerator in a single, large package. With technologies like 3D stacking, MCM design, and HBM memory, these Instinct APUs are positioned to be a high-density compute product. At least six HBM dies are going to be placed in the package, with the APU itself being a socketed design.

The leaked slide from AdoredTV indicates that the first tapeout will be complete by the end of the month (presumably this month), with the first silicon hitting AMD's labs in Q3 of 2022. If the silicon turns out functional, we could see these APUs available sometime in the first half of 2023. Below, you can see an illustration of the AMD Instinct MI300 GPU. The APU version will potentially be the same size, with Zen4 and CDNA3 cores spread around the package. As the Instinct MI300 accelerator is supposed to use eight compute tiles, we could see different combinations of CPU/GPU tiles offered. We are yet to see what SKUs AMD will bring as we await the launch of the next-generation accelerators.

AMD's Integrated GPU in Ryzen 7000 Gets Tested in Linux

It appears that one of AMD's partners has a Ryzen 7000 CPU or APU with integrated graphics up and running in Linux. The details leaked courtesy of the partner testing the chip with the Phoronix Test Suite and submitting the results to the OpenBenchmarking database. The numbers are by no means impressive, suggesting that this engineering sample isn't running at its proper clock speeds. For example, it only scores 63.1 FPS in Enemy Territory: Quake Wars, where a Ryzen 9 6900HX manages 182.1 FPS, with both GPUs allocated 512 MB of system memory as the minimum graphics memory allocation.

The integrated GPU goes under the model name GFX1036, with AMD's older integrated RDNA2 GPUs having been part of the GFX103x series. It's reported to have clock speeds of 2000/1000 MHz, although it's presumably running at the lower of the two, if not even slower, as it's only about a third of the speed of the GPU in the Ryzen 9 6900HX. That said, the GPU in the Ryzen 7000 series is, as far as anyone's aware, not really intended for gaming: it's a stripped-down GPU meant mainly for desktop and media use, so it may never catch up with the current crop of integrated GPUs from AMD. We'll hopefully find out more in less than two weeks' time, when AMD holds its keynote at Computex.

NVIDIA Releases Open-Source GPU Kernel Modules

NVIDIA is now publishing Linux GPU kernel modules as open source with dual GPL/MIT license, starting with the R515 driver release. You can find the source code for these kernel modules in the NVIDIA Open GPU Kernel Modules repo on GitHub. This release is a significant step toward improving the experience of using NVIDIA GPUs in Linux, for tighter integration with the OS and for developers to debug, integrate, and contribute back. For Linux distribution providers, the open-source modules increase ease of use.

They also improve the out-of-the-box user experience for signing and distributing the NVIDIA GPU driver. Canonical and SUSE are able to immediately package the open kernel modules with Ubuntu and SUSE Linux Enterprise distributions. Developers can trace code paths and see how kernel event scheduling interacts with their workload for faster root-cause debugging. In addition, enterprise software developers can now integrate the driver seamlessly into the customized Linux kernel configured for their project.

Tachyum Delivers the Highest AI and HPC Performance with the Launch of the World's First Universal Processor

Tachyum today launched the world's first universal processor, Prodigy, which unifies the functionality of a CPU, GPU and TPU in a single processor, creating a homogeneous architecture, while delivering massive performance improvements at a cost many times less than competing products.

After the company undertook its mission to conquer the processor performance plateau in nanometer-class chips and the systems they power, Tachyum has succeeded by launching its first commercial product. The Prodigy Cloud/AI/HPC supercomputer processor chip offers 4x the performance of the fastest Xeon, 3x the raw performance of NVIDIA's H100 on HPC workloads, 6x the raw performance on AI training and inference workloads, and up to 10x the performance at the same power. Prodigy is poised to overcome the challenges of increasing data center power consumption, low server utilization and stalled performance scaling.

Supermicro Accelerates AI Workloads, Cloud Gaming, Media Delivery with New Systems Supporting Intel's Arctic Sound-M and Intel Habana Labs Gaudi 2

Super Micro Computer, Inc. (Nasdaq: SMCI), a global leader in enterprise computing, storage, networking, and green computing technology, supports two new Intel-based accelerators for demanding cloud gaming, media delivery, AI and ML workloads, enabling customers to deploy the latest acceleration technology from Intel and Intel Habana. "Supermicro continues to work closely with Intel and Habana Labs to deliver a range of server solutions supporting Arctic Sound-M and Gaudi 2 that address the demanding needs of organizations that require highly efficient media delivery and AI training," said Charles Liang, president and CEO. "We continue to collaborate with leading technology suppliers to deliver application-optimized total system solutions for complex workloads while also increasing system performance."

Supermicro can quickly bring new technologies to market by using a Building Block Solutions approach to designing new systems. This methodology allows new GPUs and acceleration technology to be easily placed into existing designs or, when higher-performing components require it, an existing design to be quickly adapted. "Supermicro helps deliver advanced AI and media processing with systems that leverage our latest Gaudi 2 and Arctic Sound-M accelerators," stated Sandra Rivera, executive vice president and general manager of the Datacenter and AI Group at Intel. "Supermicro's Gaudi AI Training Server will accelerate deep learning training in some of the fastest growing workloads in the datacenter."

NVIDIA H100 SXM Hopper GPU Pictured Up Close

ServeTheHome, a tech media outlet focused on everything server/enterprise, posted an exclusive set of photos of NVIDIA's latest H100 "Hopper" accelerator. Being the fastest GPU NVIDIA ever created, H100 is made on TSMC's 4 nm manufacturing process and features over 80 billion transistors on an 814 mm² CoWoS package designed by TSMC. Complementing the massive die, we have 80 GB of HBM3 memory that sits close to the die. Pictured below, we have an SXM5 H100 module packed with VRM and power regulation. Given that the rated TDP for this GPU is 700 Watts, power regulation is a serious concern and NVIDIA managed to keep it in check.

On the back of the card, we see one short and one longer mezzanine connector that act as power delivery connectors, a layout different from the previous A100 GPU. This board is labeled PG520 and is very close to the official renders that NVIDIA supplied us with on launch day.

NVIDIA GeForce RTX 3090 Ti Gets Custom 890 Watt XOC BIOS

Extreme overclocking is an enthusiast discipline where overclockers try to push their hardware to its limits. Combining powerful cooling solutions like liquid nitrogen (LN2), which reaches sub-zero temperatures, with modified hardware, the silicon can draw tremendous power. Today, we are witnessing a custom XOC (eXtreme OverClocking) BIOS for the NVIDIA GeForce RTX 3090 Ti graphics card that can push the GA102 SKU to an impressive 890 Watts of power, representing almost a two-fold increase over the stock TDP. Enthusiasts pursuing high frequencies with their RTX 3090 Ti are the likely users of this XOC BIOS, though most likely we will see GALAX HOF or EVGA KINGPIN cards with dual 16-pin power connectors utilize it.

As shown below, MEGAsizeGPU, the creator of this BIOS, managed to push his ASUS GeForce RTX 3090 Ti TUF with the XOC BIOS to 615 Watts, so KINGPIN and HOF designs will be needed to draw all the power possible. The XOC BIOS was uploaded to our VGA BIOS database; however, caution is advised, as it can break your graphics card.

Intel Buys Finnish Graphics IP Developer Siru Innovations

Intel has announced that it has bought the 11-year-old, veteran Finnish graphics IP developer Siru Innovations. You'd be forgiven if you've never heard of the company, but it has a pedigree harking back to the late 1980s and early 1990s, as at least one of its founders was part of the legendary demogroup Future Crew, which made some of the most impressive graphics and audio demo software of the BBS era. All three founders were at Bitboys when it was founded in the 1990s, and if you haven't heard of Bitboys, you might simply not be old enough. The company was hyped for its Glaze3D graphics architecture, which never actually launched because Infineon stopped manufacturing a very specific type of embedded memory that the GPUs were based on.

Bitboys was later acquired by ATI, which in turn was of course taken over by AMD. However, the story doesn't end there: AMD sold the Imageon business unit to Qualcomm in 2009, and the three founders of Siru moved to Qualcomm for a couple of years before starting Siru. Since the Intel announcement, the Siru website has been taken down, but the company was working on developing mobile graphics IP, as well as helping other companies develop their own graphics-related IP, drivers, and so on. What Intel is planning to do with the Siru team isn't entirely clear, but Balaji Kanigicherla, Intel's VP and General Manager of the AXG Custom Compute Group, posted on LinkedIn saying that Siru will be joining the AXG group. You can read the full post below.

AMD Radeon RX 6950 XT Beats GeForce RTX 3090 Ti in 3DMark TimeSpy

We are nearing the arrival of AMD's Radeon RX 6x50 XT graphics card refresh, and benchmarks are starting to appear. Today, we received a 3DMark TimeSpy result for the AMD Radeon RX 6950 XT GPU and compared it to existing solutions, most notably NVIDIA's GeForce RTX 3090 Ti, with a surprising outcome. The Radeon RX 6950 XT scored 22209 points in the TimeSpy graphics score, while the GeForce RTX 3090 Ti scored 20855 points in the same test. Of course, we have to account for the fact that 3DMark TimeSpy is a synthetic benchmark that tends to perform very well on AMD RDNA2 hardware, so we have to wait for official independent testing like TechPowerUp's reviews.
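The relative lead implied by those two leaked graphics scores works out as follows:

```python
# Percentage lead implied by the two leaked 3DMark TimeSpy graphics scores.
radeon_score = 22209   # Radeon RX 6950 XT
geforce_score = 20855  # GeForce RTX 3090 Ti

lead_pct = (radeon_score - geforce_score) / geforce_score * 100
print(f"Radeon lead: {lead_pct:.1f}%")  # about 6.5%
```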

The AMD Radeon RX 6950 XT card was tested with a Ryzen 7 5800X3D CPU paired with DDR4-3600 memory and pre-release 22.10-220411n drivers on Windows 10. We could see higher graphics scores with final drivers and better performance from the upcoming refreshed SKUs.

Sapphire Radeon RX 6950 XT TOXIC Reportedly Boosts to 2565 MHz at 346W TGP

AMD is preparing to launch a highly-anticipated refresh of the Radeon RX 6000 series, codenamed the RX 6x50 XT series. Alongside AMD, add-in board partners (AIBs) will have their say as well, and today we get to take a look at the alleged specifications of Sapphire's highest-end upcoming products. According to Chiphell member RaulMee, who claims to possess the specifications of the newest Sapphire models, we can expect somewhat higher total graphics power (TGP) with the arrival of this refresh. First and foremost, the Sapphire RX 6950 XT TOXIC is the fastest air-cooled model from Sapphire, with a boost clock of up to 2565 MHz (255 MHz over AMD's 2310 MHz reference model), carrying a TGP of 364 Watts with the OC BIOS. The regular TGP for this model is 332 Watts, with a boost speed of up to 2532 MHz. Please note that TGP includes the power draw of both the GPU and the memory.

Next up, we have Sapphire's RX 6950 XT NITRO+ SKUs. The non-SE card is a minor improvement over the AMD Radeon RX 6950 XT reference GPU and offers a Silent BIOS option. The RX 6950 XT NITRO+ Special Edition can go up to 325 Watts and 2435 MHz with the OC BIOS applied; the Silent BIOS option lowers the TGP to 303 Watts and the boost clock to 2368 MHz. The alleged specification chart also covers Sapphire's RX 6750 XT and RX 6650 XT NITRO+ GPUs, whose clock speeds and TGPs you can check below.

VESA Launches AdaptiveSync and MediaSync VRR Standards and Compliance Program

The Video Electronics Standards Association (VESA) today announced the first publicly open standard for front-of-screen performance of variable refresh rate displays. The VESA Adaptive-Sync Display Compliance Test Specification (Adaptive-Sync Display CTS) provides a comprehensive and rigorous set of more than 50 test criteria, an automated testing methodology, and performance mandates for PC monitors and laptops supporting VESA's Adaptive-Sync protocols.

The Adaptive-Sync Display CTS also establishes a product compliance logo program comprising two performance tiers: AdaptiveSync Display, which is focused on gaming with significantly higher refresh rates and low latency; and MediaSync Display, which is designed for jitter-free media playback supporting all international broadcast video formats. By establishing the VESA Certified AdaptiveSync Display and MediaSync Display logo programs, VESA will enable consumers to easily identify and compare the variable refresh rate performance of displays supporting Adaptive-Sync prior to purchase. Only displays that pass all Adaptive-Sync Display CTS and VESA DisplayPort compliance tests can qualify for the VESA Certified AdaptiveSync Display or MediaSync Display logos.

NVIDIA Allegedly Testing a 900 Watt TGP Ada Lovelace AD102 GPU

With the release of Hopper, NVIDIA's cycle of new architecture releases is not yet over. Later this year, we expect to see the next-generation gaming architecture codenamed Ada Lovelace. According to @kopite7kimi, a well-known leaker for NVIDIA products on Twitter, the green team is reportedly testing a potent variant of the upcoming AD102 SKU with a Total Graphics Power (TGP) of 900 Watts. While we don't know where this SKU is supposed to sit in the Ada Lovelace family, it could be the most powerful, Titan-like design making a comeback; alternatively, it could be a GeForce RTX 4090 Ti. It carries 48 GB of GDDR6X memory running at 24 Gbps alongside the monstrous TGP, and the card is fed by two 16-pin connectors.

Another confirmation from the leaker is that the upcoming RTX 4080 GPU uses the AD103 SKU variant, while the RTX 4090 uses AD102. For further information, we have to wait a few more months and see what NVIDIA decides to launch in the upcoming generation of gaming-oriented graphics cards.

Sapphire Radeon RX 6400 PULSE Low Profile GPU Pictured

Sapphire looks set to launch one of the first low-profile RDNA2 graphics cards with the single-slot Radeon RX 6400 PULSE, which has recently been leaked by VideoCardz. The card features a nearly identical design to the company's existing low-profile Radeon PRO W6400, offering a single HDMI 2.1 port and a DisplayPort 1.4 port, along with an optional half-height bracket. The Sapphire Radeon RX 6400 PULSE features 768 Stream Processors and 12 Ray Accelerators, along with 4 GB of GDDR6 memory running at 16 Gbps. The card doesn't require any additional power connectors thanks to its 53 W TDP, which could make it a good option for low-power builds. The Radeon RX 6400 was first announced by AMD in January for the OEM market, with DIY market products set to launch in a few days, on April 20th.

NVIDIA Launches "Restocked & Reloaded" GPU Availability Campaign

NVIDIA has recently launched a global campaign to promote the availability of RTX 30 series graphics cards, with multiple retailers and manufacturers informing customers of increased shipments. The launch of this campaign coincides with the fifth consecutive month of price drops for NVIDIA GPUs, with the average price now at 119% of MSRP according to the latest report from 3D Center. The stores participating in the campaign appear to have most cards available or restocking, with some cards receiving minor price cuts.
NVIDIA GeForce RTX 30 Series graphics cards are now available! Get the ultimate play with immersive ray tracing, a huge AI performance boost with NVIDIA DLSS, game-winning responsiveness with NVIDIA Reflex, and AI-powered voice & video with NVIDIA Broadcast.

Blackmagic Design Announces DaVinci Resolve 18

Blackmagic Design today announced DaVinci Resolve 18, a major new cloud collaboration update which allows multiple editors, colorists, VFX artists and audio engineers to work simultaneously on the same project, on the same timeline, anywhere in the world. DaVinci Resolve 18 supports the Blackmagic Cloud for hosting and sharing projects, as well as a new DaVinci proxy workflow. This update also includes new Resolve FX AI tools powered by the DaVinci Neural Engine, as well as time saving tools for editors, Fairlight legacy fixed bus to FlexBus conversion, GPU accelerated paint in Fusion, and more. The DaVinci Resolve 18 public beta is available for download now from the Blackmagic Design website.

DaVinci Resolve 18 is a major release featuring cloud based workflows for a new way to collaborate remotely. Customers can host project libraries using Blackmagic Cloud and collaborate on the same timeline, in real time, with multiple users globally. The new Blackmagic Proxy generator automatically creates proxies linked to camera originals, for a faster editing workflow. There are new Resolve FX such as ultra beauty and 3D depth map, improved subtitling for editors, GPU accelerated Fusion paint and real time title template playback, Fairlight fixed to FlexBus conversion and more. DaVinci Resolve 18 supports Blackmagic Cloud, so customers can host their project libraries on the DaVinci Resolve Project Server in the cloud. Share projects and work collaboratively with editors, colorists, VFX artists and audio engineers on the same project at the same time, anywhere in the world.

Intel Arc A350M GPU Gets Performance Boost with Dynamic Tuning Technology Disabled

Last month, Intel released its Arc Alchemist lineup for mobile/laptop configurations. As expected with the company's first discrete GPUs, there are some hiccups here and there along the way. Today, we have an interesting case of the Intel Arc A350M getting a hefty performance boost with Dynamic Tuning Technology (DTT) disabled. DTT is Intel's solution for automatically and dynamically allocating power between an Intel processor and an Intel discrete graphics card to optimize performance and improve battery life, essentially competing with AMD's SmartShift and NVIDIA's Dynamic Boost implementations. Thanks to South Korean YouTuber BullsLab, we have information that disabling DTT in the drivers helps the Arc A350M GPU reach higher performance targets.

He found that disabling DTT in the drivers improved gaming performance significantly, with the Arc A350M outputting 30-80 more frames per second. This is no slight improvement, and it shows that the drivers are still not mature. Creating a discrete graphics card is no easy task, as noted here; however, we hope to see Intel put out more fixes in the coming weeks and put an end to this strange behavior.
Below, you can see the YouTube video with benchmarks.