News Posts matching #GPU


Tianshu Zhixin Big Island GPU is a 37 TeraFLOP FP32 Computing Monster

Tianshu Zhixin, a Chinese startup dedicated to designing advanced processors for accelerating various kinds of workloads, has officially entered production of its latest GPGPU design. Called the "Big Island" GPU, it is the company's entry into a GPU market currently dominated by AMD, NVIDIA, and soon Intel. So what makes Tianshu Zhixin's Big Island GPU so important? Firstly, it represents China's attempt at independence from outside processor suppliers. Secondly, it is no small feat to enter a market controlled by big players and attempt to grab a piece of that cake. To be successful, the GPU needs a great design.

And great it is, at least on paper. The specifications indicate that Big Island is manufactured on TSMC's 7 nm node using CoWoS packaging technology, allowing the die to feature over 24 billion transistors. When it comes to performance, the company claims the GPU is capable of crunching 37 TeraFLOPs of single-precision FP32 data, and 147 TeraFLOPs at FP16/BF16 half-precision. On the integer side, it achieves 317, 147, and 295 TOPS at INT32, INT16, and INT8 respectively. No double-precision floating-point figures are quoted, suggesting the chip is optimized for single-precision workloads. There is also 32 GB of HBM2 memory on board, delivering 1.2 TB/s of bandwidth. Compared to competing offerings like the NVIDIA A100 or AMD MI100, the new Big Island GPU outperforms both at the single-precision FP32 compute tasks it is designed for.
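The quoted figures also imply fixed throughput ratios between precisions, which a quick calculation makes visible. A minimal sketch using only the numbers above; the shader count and clock in the hypothetical configuration are assumptions, since Tianshu Zhixin has not published either:

```python
# Claimed Big Island throughput figures (from the spec sheet above).
fp32_tflops = 37.0
fp16_tflops = 147.0

# Ratio of half- to single-precision throughput: ~4x hints at dedicated
# matrix units rather than simple packed-math doubling (which gives 2x).
print(fp16_tflops / fp32_tflops)  # ~3.97

# Generic theoretical FP32 formula: 2 ops (one FMA) per shader per clock.
def fp32_tflops_theoretical(shaders: int, clock_ghz: float) -> float:
    return shaders * 2 * clock_ghz / 1000.0

# One hypothetical configuration that would land near the claimed 37 TFLOPs.
print(fp32_tflops_theoretical(12288, 1.5))  # ~36.9
```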
Tianshu Zhixin Big Island
Pictures of possible solutions follow.

Razer Could Introduce Company's First AMD-Powered Laptop

Razer, the maker of various gaming peripherals and gaming PCs/laptops, has long used Intel CPUs in its laptops. However, that might be changing just about now. According to findings by @_rogame, a 3DMark benchmark run has surfaced featuring an AMD Ryzen 5000 series "Cezanne" mobile processor. More interesting is the system it was running in: called the Razer PI411, this would be Razer's first AMD-powered laptop. While we don't have many details about it, we do have the basic system configuration. For starters, the laptop carries AMD's top-tier Ryzen 9 5900HX overclockable mobile processor. With a configured TDP of 45 Watts (the maximum is 54 W), the system is likely not equipped with sufficient cooling for overclocking.

When it comes to the rest of the configuration, the laptop features NVIDIA's GeForce RTX 3060 GPU, 16 GB of RAM, and 512 GB of storage. The PI411 codename could indicate a 14-inch model. However, we still don't know if it will ever hit consumer shelves. Given that Razer has never offered an AMD CPU option, this could just be an engineering sample the company was experimenting with, so we have to wait to find out more.

GIGABYTE Launches GeForce RTX 3080 GAMING OC WATERFORCE WB 10G graphics card

GIGABYTE TECHNOLOGY Co. Ltd, a leading manufacturer of premium gaming hardware, today announced the GIGABYTE WATERFORCE graphics card - the GeForce RTX 3080 GAMING OC WATERFORCE WB 10G, powered by the NVIDIA GeForce RTX 3080 GPU. Whether users are looking to fulfill the demands of a high-end water-cooled system, or to enjoy the benefits of water-cooling both GPU and CPU, the GIGABYTE GAMING OC WATERFORCE graphics card is the best choice. GIGABYTE provides easy-to-install and quality-guaranteed water-cooled graphics cards for desktop PCs.

With CPU power consumption continuously increasing, high-end water-cooled systems are becoming more and more popular. The easy-to-install and quality-guaranteed GIGABYTE GAMING OC WATERFORCE graphics card is an easy choice for maximizing your graphics power: invest just a little more than for the air-cooled GAMING OC version and you can enjoy the benefits of water cooling for both GPU and CPU. The GAMING OC WATERFORCE graphics card is equipped with a top-of-the-line overclocked GPU and provides an all-around cooling solution for all key components of the graphics card, keeping the GPU, VRAM, and MOSFETs running cool to ensure stable overclocked operation and longer durability.

NVIDIA Enables GPU Passthrough for Virtual Machines on Consumer-Grade GeForce GPUs

Editor's note: This is not a part of April Fools.

NVIDIA has long separated professional users from regular gamers with its graphics card offerings. The GeForce lineup represents the gaming-oriented line, whose main tasks are playing games, displaying graphics, and running some basic CUDA-accelerated software. However, what would happen if you were to start experimenting with your GPU? For example, if you run Linux and want to spin up a Windows virtual machine for gaming, you had to fall back on your integrated GPU, as GeForce cards did not allow virtual GPU passthrough. For those purposes, NVIDIA pointed to its professional lineups like Quadro and Tesla.

However, this feature is now arriving in the GeForce lineup as well. NVIDIA has announced that it is finally bringing basic virtual machine passthrough functionality to its gaming GPUs. While this represents a step in the right direction, the feature is still limited: GeForce GPU passthrough supports only one virtual machine, and SR-IOV is still not supported on GeForce. "If you want to enable multiple virtual machines to have direct access to a single GPU or want the GPU to be able to assign virtual functions to multiple virtual machines, you will need to use NVIDIA Tesla, Quadro, or RTX enterprise GPUs," says the NVIDIA FAQ. GeForce virtualization, which is still in beta, is supported on R465 or higher drivers.
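NVIDIA's change only lifts the driver-side block; the host still hands the GPU to the guest through ordinary PCI passthrough, for example via libvirt/KVM. As an illustration, a sketch of how a libvirt `<hostdev>` entry for the card might be generated; the libvirt/KVM specifics and the PCI address are assumptions for illustration, not part of NVIDIA's announcement (find your card's real address with `lspci` on the host):

```python
import xml.etree.ElementTree as ET

def hostdev_xml(domain: int, bus: int, slot: int, function: int) -> str:
    """Build a libvirt <hostdev> element for PCI passthrough of a GPU.

    The PCI address passed in below (01:00.0) is hypothetical.
    """
    hostdev = ET.Element("hostdev", mode="subsystem", type="pci", managed="yes")
    source = ET.SubElement(hostdev, "source")
    ET.SubElement(source, "address",
                  domain=f"0x{domain:04x}", bus=f"0x{bus:02x}",
                  slot=f"0x{slot:02x}", function=f"0x{function:x}")
    return ET.tostring(hostdev, encoding="unicode")

# Snippet to paste into the guest's libvirt domain XML under <devices>.
snippet = hostdev_xml(0, 0x01, 0x00, 0)
print(snippet)
```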
The full content from NVIDIA's website is written below.

MonsterLabo Plays Flight Simulator with The Beast, Achieves Fully-Fanless Gaming Experience

MonsterLabo, the maker of fanless PC cases designed for zero-noise gaming, has today tested the upcoming flagship of its case lineup. Called The Beast, the case is designed to handle high-end hardware with large TDPs and dissipate all that heat without any moving parts, using only large heatsinks and heat pipes to carry the heat to the big heatsink area. In a completely fanless configuration, the case can absorb and dissipate a CPU TDP of 150 Watts and a GPU TDP of 250 Watts. However, when equipped with two 140 mm fans running below 500 RPM, it can accommodate a 250 W CPU and a 320 W GPU. MonsterLabo tested the fully fanless configuration, equipped with an AMD Ryzen 7 3800XT processor paired with NVIDIA's latest GeForce RTX 3080 Ampere graphics card.

With no fans present to help move heat away, the PC was stress-tested using Microsoft's Flight Simulator. The company posted a chart of CPU and GPU temperatures over time, showing the GPU hitting about 75 degrees Celsius at one point. The CPU remained a bit cooler, with the CPU package peaking just above the 70-degree mark. Overall, the case is more than capable of cooling the hardware it was equipped with. Adding two slow-spinning fans would lower temperatures even further, although that would no longer be a fanless system. MonsterLabo's The Beast is expected to ship in Q3 of this year, when reviewers will get their hands on it and test it for themselves. You can watch the videos in MonsterLabo's blog post here.

NVIDIA Repurposing Scrapped RTX 3080 Ti GA102-250 GPUs to GA102-300 for RTX 3090

The NVIDIA RTX 3080 Ti has experienced numerous delays, with the card's launch most recently pushed to mid-May. The unreleased RTX 3080 Ti has gone through various internal revisions: the card was expected to use the GA102-250 GPU until those plans were scrapped in late January, and it is now expected to feature the GA102-225 GPU instead when it finally releases. NVIDIA, having already produced the required processors, is now repurposing the GA102-250 GPUs slated for the RTX 3080 Ti for use in RTX 3090 Founders Edition cards. The switch makes sense, as the GA102-250 was rumored to feature the same number of cores as the RTX 3090, just with a smaller memory size. NVIDIA appears to have enabled the full 384-bit memory bus and laser-engraved the chips to mark their change to GA102-300s.

Qualcomm Extends the Leadership of its 7-Series with the Snapdragon 780G 5G Mobile Platform

Qualcomm Technologies, Inc. announced the latest addition to its 7-series portfolio, the Qualcomm Snapdragon 780G 5G Mobile Platform. Snapdragon 780G is designed to deliver powerful AI performance and brilliant camera capture backed by the Qualcomm Spectra 570 triple ISP and 6th generation Qualcomm AI Engine, allowing users to capture, enhance, and share their favorite moments seamlessly. This platform enables a selection of premium-tier features for the first time in the 7-series, making next generation experiences more broadly accessible.

"Since introducing the Snapdragon 7-series three years ago, more than 350 devices have launched based on 7-series mobile platforms. Today, we are continuing this momentum by introducing the Snapdragon 780G 5G Mobile Platform," said Kedar Kondap, vice president, product management, Qualcomm Technologies, Inc. "Snapdragon 780G was designed to bring in-demand, premium experiences to more users around the world."

Next-Generation Nintendo Switch SoC to be Powered by NVIDIA's Ada Lovelace GPU Architecture

Nintendo's Switch is one of the most successful consoles the Japanese company has ever made. It has sold millions of units and received great feedback from the gaming community. However, as the hardware inside the console becomes outdated, the company is reportedly preparing a new revision with the latest hardware and technologies. Today, we got ahold of information about the graphics side of Nintendo's upcoming console. Powered by an NVIDIA Tegra SoC, it will incorporate unknown Arm-based CPU cores, and the latest rumors suggest the CPU will be paired with NVIDIA's Ada Lovelace GPU architecture. According to @kopite7kimi, a known hardware leaker who simply replied "Ada" to VideoCardz's tweet, the Ada Lovelace GPU architecture will appear in the new SoC. Additionally, the new Switch SoC will have hardware-accelerated NVIDIA Deep Learning Super Sampling (DLSS) and 4K output.

Raja Koduri Teases "Petaflops in Your Palm" Intel Xe-HPC Ponte Vecchio GPU

Raja Koduri of Intel has today posted an interesting video on his Twitter account. Showing one of the greatest engineering marvels Intel has ever created, Mr. Koduri teased what is to come when the company launches the Xe-HPC Ponte Vecchio graphics card designed for high-performance computing workloads. Showcased today was the "petaflops in your palm" chip, designed to run AI workloads with a petaflop of computing power. Featuring over 100 billion transistors, the chip combines as many as 47 tiles using the most advanced packaging technology Intel has ever created. The company calls them "magical tiles", bringing together logic, memory, and I/O controllers, all built on different semiconductor nodes.

Mr. Koduri also pointed out that the chip arrived only two years after its concept, an impressive achievement given that new silicon typically takes years of research. The chip will be the heart of many systems that require massive computational power, especially for AI. Claimed to be capable of a quadrillion floating-point operations per second (one petaflop), the chip will be a true monster. So far we don't know other details, like the floating-point precision at which that petaflop is achieved or the total power consumption of those 47 tiles, so we have to wait for more details.
More pictures follow.

Capcom Announces Resident Evil Village PC Requirements

Capcom, the Japanese video game maker, has today announced the hardware requirements for playing its upcoming Resident Evil Village PC game at various resolutions and graphics presets. The minimum specification targets 1080p at 60 FPS. To achieve that, you need at least an Intel Core i5-7500 or AMD Ryzen 3 1200 processor paired with 8 GB of RAM, plus a DirectX 12 capable GPU with 4 GB of VRAM, such as the NVIDIA GeForce GTX 1050 Ti or AMD Radeon RX 560. The company notes that with this configuration, the framerate may drop below 60 FPS during heavy loads. If you want to use raytracing, which is now also present in the game engine, you must step up to at least an NVIDIA GeForce RTX 2060 or AMD Radeon RX 6700 XT.

The recommended specification, of course, requires much beefier hardware. For a steady 1080p 60 FPS experience without frame drops, Capcom recommends an Intel Core i7-8700 or AMD Ryzen 5 3600 processor, paired with 16 GB of RAM and a GPU like the NVIDIA GeForce GTX 1070 or AMD Radeon RX 5700. Raytracing again demands a better GPU: for 4K at 60 FPS with raytracing turned on, you need at least an NVIDIA GeForce RTX 3070 or AMD Radeon RX 6900 XT graphics card. You can check out the game requirements in greater detail below.

NVIDIA GeForce RTX 3060 Anti-Mining Feature Bypassed by HDMI Dummy Plug

When NVIDIA introduced its GeForce RTX 3060 graphics card, it also introduced a new feature to go along with it. Being well priced, the card positions itself as a very good value offer for mining, and since NVIDIA now has separate products for mining, it naturally wants to limit the number of gaming cards sold to miners. To achieve that, the company introduced an anti-mining mechanism that is essentially a handshake between the driver, the RTX 3060 silicon, and the GPU VBIOS. The handshake checks these three components to detect whether mining is going on, so the card's performance can be limited.

However, even such a scheme can be bypassed. Miners usually put their GPUs in rigs where most of the cards' video outputs go unused, and the GPU can detect whether it is connected to a monitor, triggering the anti-mining limiter. A user on the Quasar Zone forums has managed to bypass the restriction by simply installing an HDMI dummy plug. With the dummy plug, the card thinks it is connected to a monitor and thus runs normally. Using this workaround, the user set up a four-way GeForce RTX 3060 mining rig with 48 MH/s of hashing power per GPU, for a total of 192 MH/s. HDMI dummy plugs sell for as low as $5.99 on Amazon or at any other store.
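The reported rig total is simple to verify from the per-card figure; a quick sketch (electricity cost and coin price are left out, since the forum post reports only hash rates):

```python
# Figures reported in the Quasar Zone post above.
per_gpu_mhs = 48  # MH/s per RTX 3060 with the dummy-plug workaround
gpus = 4          # four-way rig

total_mhs = per_gpu_mhs * gpus
print(total_mhs)  # 192 MH/s, matching the reported rig total
```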

TrendForce: Consumer DRAM Pricing to Increase 20% in 2Q2021 Due to Increased Demand

According to TrendForce, technology enthusiasts will have other rising prices to contend with throughout 2021, adding to the already ballooning prices of discrete GPUs and latest-gen CPUs from the leading manufacturers. Increased demand due to the COVID pandemic stretched the usual stocks to their limits, and due to the tremendous, multi-month lead times between semiconductor orders and their fulfillment, the entire supply infrastructure was spread too thin for the increased worldwide needs. This leads to increased component pricing, which in turn leads to higher ASPs for DRAM. Adding to that equation, of course, is the fact that companies are now more cautious and are placing bigger orders so as to be able to weather these sudden demand changes.

TrendForce says that DRAM pricing has already increased 3-8% in 1Q2021, and that market adjustments will lead to an additional 13-18% increase in contract pricing. Server DRAM pricing is projected to increase by 20%; graphics DRAM is expected to increase 10-15% in the same time span, giving us that strange stomach churn that comes from expecting even further increases in graphics card end-user pricing; and overall consumer DRAM pricing is expected to increase by 20% due to the intensifying shortages. What a time to be a system builder.
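Note that the quarterly increases compound rather than add. A minimal sketch of the cumulative effect, using the worst-case contract-pricing figures quoted above (the $100 baseline is an arbitrary illustration):

```python
def compounded(base_price: float, quarterly_increases: list[float]) -> float:
    """Apply successive quarterly percentage increases to a base price."""
    price = base_price
    for pct in quarterly_increases:
        price *= 1 + pct / 100
    return price

# Worst case per TrendForce: +8% in 1Q2021, then +18% in 2Q2021.
print(compounded(100.0, [8, 18]))  # 127.44 -> ~27% above the late-2020 baseline
```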

GIGABYTE Launches Radeon RX 6700 XT AORUS Elite Graphics Card

GIGABYTE TECHNOLOGY Co. Ltd, a leading manufacturer of premium gaming hardware, today announced a new AMD Radeon RX 6700 XT graphics card - the AORUS Radeon RX 6700 XT ELITE 12G, powered by the AMD RDNA 2 gaming architecture. Inheriting the last generation's RGB three-ring design, the light source guides light internally to create a brighter, more natural RGB effect, achieving a fine balance between cooling and RGB lighting.

The AORUS Radeon RX 6700 XT ELITE not only keeps the design spirit of the last generation but also enjoys distinct product recognition in the hardware industry, expressing the art of gaming and once again remixing the classic style. Furthermore, gamers have more freedom to mix unique RGB lighting, customizing up to 8 color patterns in the "Dazzling" light effect via the RGB Fusion 2.0 software.

AMD's Next-Generation Van Gogh APU Shows Up with Quad-Channel DDR5 Memory Support

AMD is slowly preparing to launch its next-generation client-oriented accelerated processing unit (APU), AMD's term for a CPU+GPU combination. The future design is codenamed Van Gogh, continuing AMD's use of historic names for its products. The APU is believed to be similar to the SoC designs found in the latest PlayStation 5 and Xbox Series X/S consoles, meaning Zen 2 cores sit side by side with the latest RDNA 2 graphics in the same processor. Today, one of AMD's engineers posted a boot log of a quad-core Van Gogh APU engineering sample, revealing some very interesting information.

The boot log contains information about the memory type used by the APU. In the logs, we see a line that says "[drm] RAM width 256bits DDR5", meaning the APU has a 256-bit-wide DDR5 memory interface, which represents a quad-channel memory configuration. Such a wide memory bus is typically used for applications that need lots of bandwidth. Given that Van Gogh uses RDNA 2 graphics, the company needs sufficient memory bandwidth to keep the GPU from being starved of data. While we don't have much more information, we can expect more details soon.
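Peak bandwidth follows directly from bus width and transfer rate, which shows why the 256-bit interface matters for the integrated GPU. A sketch assuming a hypothetical DDR5-4800 speed grade, since the boot log does not reveal the actual memory clock:

```python
def peak_bandwidth_gbs(bus_width_bits: int, transfer_rate_mtps: int) -> float:
    """Theoretical peak bandwidth in GB/s: bytes per transfer x transfers per second."""
    return bus_width_bits / 8 * transfer_rate_mtps / 1000

# Hypothetical DDR5-4800 on Van Gogh's 256-bit (quad-channel) interface.
print(peak_bandwidth_gbs(256, 4800))  # 153.6 GB/s

# The same modules on a conventional 128-bit (dual-channel) laptop bus.
print(peak_bandwidth_gbs(128, 4800))  # 76.8 GB/s
```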

First NVIDIA Palit CMP 30HX Mining GPU Available at a Tentative $723

NVIDIA's recently announced CMP (Cryptocurrency Mining Processor) products seem to already be hitting the market - at least in some parts of the world. Microless, a retailer in Dubai, listed the cryptocurrency-geared graphics card for $723 - money that buys some 26 MH/s, as per NVIDIA, before any optimizations at the clock/voltage/BIOS level, as more serious miners will undoubtedly apply.

The CMP 30HX is a re-release of the TU116 chip (Turing, sans RT hardware), which powered the likes of the GeForce GTX 1660 Super in NVIDIA's previous generation of graphics cards. The card features a 1,530 MHz base clock and a 1,785 MHz boost clock, alongside 6 GB of GDDR6 memory clocked at 14 Gbps (a capacity that could soon stop being enough to hold the entire mining workload in memory). Leveraging a 192-bit memory interface, the graphics card supplies a memory bandwidth of up to 336 GB/s. It is also a "headless" GPU, meaning it has no display outputs, which would only add cost to such a specifically-geared product. It is unclear how representative Microless' pricing is of NVIDIA's MSRP for the 30HX products, but considering current graphics card pricing worldwide, it seems in line with GeForce offerings capable of achieving the same hash rates. Its ability to draw miner demand away from mainstream GeForce offerings therefore depends solely on the prices set by NVIDIA and practiced by retailers.
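The quoted 336 GB/s figure checks out against the listed memory specs; a quick verification using the numbers above:

```python
# CMP 30HX memory subsystem figures from the spec sheet above.
bus_width_bits = 192   # memory interface width
data_rate_gbps = 14    # per-pin GDDR6 data rate

# Bandwidth = (bus width in bytes) x (transfers per second per pin).
bandwidth_gbs = bus_width_bits / 8 * data_rate_gbps
print(bandwidth_gbs)  # 336.0 GB/s, matching the quoted figure
```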

TYAN Now Offers AMD EPYC 7003 Processor Powered Systems

TYAN, an industry-leading server platform design manufacturer and a MiTAC Computing Technology Corporation subsidiary, today introduced AMD EPYC 7003 Series Processor-based server platforms featuring efficiency and performance enhancements in hardware, security, and memory density for the modern data center.

"Big data has become capital today. Large amounts of data and faster answers drive better decisions. TYAN's industry-leading server platforms powered by 3rd Gen AMD EPYC processors enable businesses to make more accurate decisions with higher precision," said Danny Hsu, Vice President of MiTAC Computing Technology Corporation's Server Infrastructure BU. "Moving the bar once more for workload performance, EPYC 7003 Series processors provide the performance needed in the heart of the enterprise to help IT professionals drive faster time to results," said Ram Peddibhotla, corporate vice president, EPYC product management, AMD. "Time is the new metric for efficiency and EPYC 7003 Series processors are the perfect choice for the most diverse workloads, helping provide more and better data to drive better business outcomes."

ARCTIC Introduces New MX-5 Thermal Compound

ARCTIC, one of the leading manufacturers of low-noise PC coolers and components, officially launches its MX-5 thermal paste today. The newly developed, high-performance paste comes with an ideal set of thermal properties, guaranteeing reliable heat dissipation over long periods of time. ARCTIC MX-5 is available now in variants ranging from 2 to 50 grams, with and without a spatula.

Like all thermal pastes from ARCTIC, MX-5 is completely metal-free. Because MX-5 is neither conductive nor capacitive, it is particularly safe to use: the possibility of short circuits, corrosion damage or discharges is eliminated.

AMD to Supply Only a Few Thousand Radeon RX 6700 XT GPUs for Europe at Launch

The global supply chain for graphics cards is currently ill-equipped to handle the massive demand for the latest generation of GPUs. As we have seen with the launches of NVIDIA's GeForce RTX 3000 series Ampere and AMD's Radeon RX 6000 series Big Navi SKUs, the latest graphics cards are experiencing massive demand, and the manufacturers of these GPUs cannot keep up, resulting in a huge GPU scarcity in the global market. With AMD's recent announcement of the Radeon RX 6700 XT graphics card, things are not looking any better, and availability of this GPU could be very tight at launch.

According to information obtained by Igor's Lab, AMD could supply only a few thousand Radeon RX 6700 XT GPUs for Europe as a whole. To be precise, Igor's Lab notes that "If you condense the information of various board partners and distributors to a trend, then there are, depending on the manufacturer and model, only a few pieces (for Germany) to a few thousand for the EU as a whole." This could be a very bad indication of AMD's supply of these new GPUs globally, not just for Europe. The company is currently relying on the overbooked TSMC, which can only produce a limited number of chips at a time, and we don't know how much capacity AMD has allocated for the new chip.

Blizzard Benchmarks NVIDIA's Reflex Technology in Overwatch

Blizzard, a popular game developer, has today implemented NVIDIA's latest latency-reduction technology in its first-person shooter, Overwatch. Called NVIDIA Reflex, the technology aims to reduce system latency by combining NVIDIA GPUs with G-SYNC monitors and specially certified peripherals, all of which are listed on the company's website. NVIDIA Reflex dynamically reduces system latency through combined GPU and game optimizations, which game developers implement, leaving the gamer with a much more responsive system that can edge out a competitive advantage. Today, we get to see just how much the new technology helps, in the latest Overwatch update that ships with NVIDIA Reflex.

Blizzard tested three NVIDIA GPUs: the GeForce RTX 3080, RTX 2060 SUPER, and GTX 1660 SUPER. The three GPUs cover three different segments, so they are a good indication of what you can expect from your own system. Starting with the GeForce GTX 1660 SUPER, system latency, measured in milliseconds, was cut by over 50%. The mid-range RTX 2060 SUPER experienced a similar gain, while the RTX 3080 saw the smallest gain; however, it did achieve the lowest latency of all GPUs tested. You can check out the results for yourself below.

NVIDIA Unveils AI Enterprise Software Suite to Help Every Industry Unlock the Power of AI

NVIDIA today announced NVIDIA AI Enterprise, a comprehensive software suite of enterprise-grade AI tools and frameworks optimized, certified and supported by NVIDIA, exclusively with VMware vSphere 7 Update 2, separately announced today.

Through a first-of-its-kind industry collaboration to develop an AI-Ready Enterprise platform, NVIDIA teamed with VMware to virtualize AI workloads on VMware vSphere with NVIDIA AI Enterprise. The offering gives enterprises the software required to develop a broad range of AI solutions, such as advanced diagnostics in healthcare, smart factories for manufacturing, and fraud detection in financial services.
NVIDIA AI Enterprise Software Suite

Apple is Discontinuing Intel-based iMac Pro

According to the official company website, Apple will no longer manufacture its Intel-based iMac Pro computers. Instead, the company will carry these models in its store only while supplies last. Apple will replace them with next-generation iMac Pro devices home to custom Apple Silicon processors, combining Arm CPU cores with a custom GPU design. Starting at 4,990 USD, the Apple iMac Pro could be configured up to around 15,000 USD. The most expensive component was precisely the Intel Xeon processor inside it, alongside the AMD GPU with HBM, with storage/RAM options also driving configuration pricing. However, even the most expensive iMac Pro, with its 2017 hardware, had no chance against the regular 2020 iMac, so the product was set to be discontinued sooner or later.

When iMac Pro stock runs out, Apple will replace the model with an Apple Silicon-equipped variant. According to the current rumor mill, Apple is set to hold a keynote on March 16th, where new iMac Pro devices with custom processors could be announced. What happens next is up to Apple, so we will have to wait and see.

Intel Prepares 19 Alder Lake Processors for Laptops Ranging from 5-55 Watts

As we get closer to the launch of Intel's next-generation Alder Lake processors, more information is leaking out. Today, thanks to a leaked presentation slide, we have more details on Intel's Alder Lake offerings for the laptop sector. As a reminder, Alder Lake uses a hybrid approach to core configuration, with a mindset similar to Arm's big.LITTLE: a few small cores handle lighter tasks that don't need much power, while a few big cores take on the heavyweight processing that advanced applications require. The small cores are based on the Gracemont microarchitecture, while the big ones use the Golden Cove design.

Thanks to @9550pro on Twitter, we have a slide showcasing 19 different Alder Lake configurations for the laptop segment. At the very bottom are configurations with a TDP of just five Watts, achieved by pairing one big core with four small cores and a 48 EU Gen12 GPU; these are meant for the tablet segment. Going up, there are different ranges depending on the target device, topping out at a 55 Watt chip with eight big and eight small cores combined with 32 EUs of Gen12 graphics. All models include integrated graphics. By simply balancing the core counts, Intel can offer as many as 19 different SKUs covering every needed segment. You can check out the rest of the models below for yourself.
Intel Alder Lake Mobile Configurations

AMD is Preparing RDNA-Based Cryptomining GPU SKUs

Back in February, NVIDIA announced GPU SKUs dedicated to cryptocurrency mining, without any graphics outputs present on the cards. Today, we are getting information that AMD is rumored to introduce its own lineup of graphics cards dedicated to cryptocurrency mining. In the latest patch for AMD's Direct Rendering Manager (DRM), the Linux kernel subsystem responsible for interfacing with GPUs, we see the appearance of Navi 12. This GPU SKU was not used for anything except Apple's Mac devices, in the form of the Radeon Pro 5600M. However, it seems Navi 12 could join forces with the Navi 10 GPU SKU and become part of special "blockchain" GPUs.

Way back in November, popular hardware leaker KOMACHI noted that AMD was preparing three additional Radeon SKUs called the Radeon RX 5700 XTB, RX 5700B, and RX 5500 XTB. The "B" at the end of each name denotes a blockchain revision, made specifically for crypto-mining. As for specifications of the upcoming mining-specific AMD GPUs, we know that they use the first-generation RDNA architecture with up to 2,560 Stream Processors (40 Compute Units). Memory configurations for these cards remain unknown, though AMD surely won't fit HBM2 stacks for mining as it did with the Navi 12 GPU. All that remains is to wait and see what AMD announces in the coming months.

GALAX GeForce RTX 3090 Hall Of Fame (HOF) Edition GPU Benched with Custom 1000 W vBIOS

GALAX, maker of the popular premium Hall Of Fame (HOF) graphics cards, recently announced its GeForce RTX 3090 HOF Edition. Designed for extreme overclocking, the card features a 12-layer PCB, a 26-phase VRM power delivery configuration, and three 8-pin power connectors. Today, we got the first comprehensive review of the card, by the Chinese YouTube channel 二斤自制. However, this wasn't just regular testing on a card at factory settings: the channel applied a 1000 Watt vBIOS to the GPU and ran everything on the air cooler the card ships with.

At the default 420 Watt setting, the card ran at a GPU clock of 1845 MHz and a temperature of 69 degrees Celsius. With the 1000 Watt vBIOS applied, the GPU core managed to ramp up to 2000 MHz and consume as much as 630 W of power. If you were wondering whether the stock cooler could handle it all, the answer is yes, although the card reached a toasty 96 C. While GALAX doesn't offer a BIOS like this, the BIOS ID corresponds to a custom XOC 1000 W BIOS for the EVGA Kingpin GeForce RTX 3090, which you can find in our database. When it comes to performance, the gains were minimal at only 2-3%, likely due to insufficient cooling; the card could have done much better on water or LN2. The Fire Strike Ultra and Fire Strike Extreme results are displayed below.

AMD Confirms Radeon RX 6000 Series Laptop GPUs are "Coming Soon"

AMD has just announced its Navi 22 RDNA 2 devices, spanning the mid-range gaming sector. The Radeon RX 6700 XT, the latest addition to the Radeon 6000 series, carries the Navi 22 chip. However, AMD GPUs need to serve another sector besides the desktop market: the laptop/mobile market. With the past 5000 series of laptop GPUs, AMD had a somewhat disappointing launch. Despite the availability of first-generation RDNA GPUs for mobile devices, many gamers were unable to find 5000 series Radeon GPUs in laptops, as they were rarely offered. MSI and Dell carried a few models with the Radeon RX 5500M and RX 5600M, and availability of the highest-end Radeon RX 5700M was limited to the Dell Alienware Area-51m R2 laptop.

During the Radeon RX 6700 XT announcement, Scott Herkelman (CVP & GM, AMD Radeon) announced that AMD is preparing to launch next-generation RDNA 2 based RX 6000 series graphics cards for mobile/laptop devices. While there should be a range of models based on Navi 22, Navi 23, and Navi 24, availability is unknown for now; the only information we have so far is that they are "coming soon". The exact configurations of these chips remain a mystery until launch, so we have to wait to find out more.