News Posts matching #GPU


Apple M1 Processor Manages to Mine Ethereum

Ethereum mining has been a crazy ride over the years. It has recently surged in popularity thanks to a huge rise in the price of Ethereum, following that of the market's main coin, Bitcoin. Ethereum miners typically use custom PCs stocked with many graphics cards, since GPUs achieve high hash rates on the coin's Ethash algorithm (built on Keccak-256 hashing) and few alternatives are economically viable. But have you ever wondered whether you could mine Ethereum on your shiny new Apple M1-equipped Mac? Our guess is no; however, some people are still experimenting with the new Apple M1 processor and testing its capabilities.

Software engineer Yifan Gu, working for Zensors, has found a way to use Apple's M1 GPU to mine Ethereum. Mr. Gu ported the Ethminer utility to macOS on Apple Silicon and managed to get the GPU mining coins. While technically possible, the results were rather poor: the integrated GPU achieved only about 2 MH/s of mining throughput, which is very low compared to the alternatives (desktop GPUs). And being possible doesn't make it a good idea. The miner consumes all of the GPU's resources, limiting any other GPU work, so it is far from a profitable solution.
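
As a rough back-of-the-envelope check, a hash rate this low translates into negligible earnings. The following Python sketch estimates daily profit; the payout rate, power draw, and electricity price are illustrative assumptions, not measured values:

    # Rough Ethereum mining profitability estimate for a 2 MH/s miner.
    # All market figures below are illustrative assumptions, not live data.
    hashrate_mhs = 2.0            # Apple M1 GPU, as reported by Yifan Gu
    usd_per_mhs_per_day = 0.065   # assumed network payout rate (varies daily)
    power_draw_w = 10.0           # assumed M1 GPU package power while mining
    electricity_usd_per_kwh = 0.12

    revenue = hashrate_mhs * usd_per_mhs_per_day
    energy_cost = power_draw_w / 1000 * 24 * electricity_usd_per_kwh
    print(f"Daily revenue: ${revenue:.3f}")
    print(f"Daily power:   ${energy_cost:.3f}")
    print(f"Daily profit:  ${revenue - energy_cost:.3f}")

Even before electricity costs, earnings on this order would take years to pay off the hardware, which is the point the experiment makes.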

Samsung is Preparing Exynos SoC with Radeon GPU for Next-Generation PCs

In 2019, AMD and Samsung announced that they would join forces to develop a new class of mobile SoCs carrying the Exynos name and featuring a Radeon GPU inside. These Exynos SoCs could be used for almost anything that needs a low-power processor. While the original plan was to have these processors run inside Samsung's mobile phones, it seems there is another application for them. If the rumors coming from ZDNet Korea are correct, we are in for a surprise: according to the source, Samsung is preparing to use the Exynos SoC with Radeon graphics in the company's next-generation laptop lineup.

While there is little to no information regarding the specifications of said system, we can expect it to be a fully Samsung-made laptop. That means Samsung will provide the display, RAM, storage, battery, and other components manufactured by the company or its divisions. This laptop is expected to replace Samsung's Galaxy Book S, which currently uses Qualcomm's Snapdragon 8cx SoC. The new PC is going to be a Windows 10-based system. For more details, we have to wait for the official announcement.

Intel Alder Lake Processor Tested, Big Cores Ramp Up to 3 GHz

Intel "Alder Lake" is the first processor generation coming from the company to feature the hybrid big.LITTLE type core arrangement and we are wondering how the configurations look like and just how powerful the next-generation processors are going to be. Today, a Geekbench submission has appeared that gave us a little more information about one out of twelve Alder Lake-S configurations. This time, we are getting an 8-core, 16-threaded design with all big cores and no smaller cores present. Such design with no little cores in place is exclusive to the Alder Lake-S desktop platform, and will not come to the Alder Lake-P processors designed for mobile platforms.

Based on the LGA1700 socket, the processor was spotted running all eight of its cores at 2.99 GHz. Please note that this is only an engineering sample, and the clock speeds of the final product should be higher. It was paired with the latest DDR5 memory and an NVIDIA GeForce RTX 2080 GPU. Since Geekbench's OpenCL test is largely GPU-bound, the CPU's job is mainly to keep the GPU fed, and the Alder Lake-S sample clearly managed that. The RTX 2080 typically scores about 106101 points in Geekbench OpenCL tests; paired with the Alder Lake-S CPU, it scored as much as 108068 points, hinting at the capability of the new generation of cores. While there is still a lot of mystery surrounding the Alder Lake-S series, the big cores used are supposed to be very powerful.
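
The difference is smaller than the wording might suggest; a quick Python check using only the quoted figures:

    # Relative OpenCL score uplift of the RTX 2080 when paired with the
    # Alder Lake-S engineering sample, from the figures quoted above.
    typical_score = 106101
    alder_lake_score = 108068
    uplift = (alder_lake_score / typical_score - 1) * 100
    print(f"Uplift: {uplift:.2f}%")   # ~1.85%

A roughly 2% gain in a GPU-bound test is consistent with the CPU simply not being a bottleneck, rather than proof of raw CPU performance.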

Xilinx Revolutionizes the Modern Data Center with Software-Defined, Hardware Accelerated Alveo SmartNICs

Addressing the demands of the modern data center, Xilinx, Inc. (NASDAQ: XLNX) today announced a range of new data center products and solutions, including a new family of Alveo SmartNICs, smart world AI video analytics applications, an accelerated algorithmic trading reference design for sub-microsecond trading, and the Xilinx App Store.

Today's most demanding and complex applications, from networking and AI analytics to financial trading, require low-latency, real-time performance. Achieving this level of performance has so far required expensive and lengthy hardware development. With these new products and solutions, Xilinx is eliminating the barriers for software developers to quickly create and deploy software-defined, hardware-accelerated applications on Alveo accelerator cards.

AMD Instinct MI200 to Launch This Year with MCM Design

AMD is quietly preparing the next generation of its compute-oriented flagship graphics card, the Instinct MI200 GPU. It is the accelerator of choice for the exascale Frontier supercomputer, which is expected to make its debut later this year at the Oak Ridge Leadership Computing Facility. With the supercomputer planned for the end of this year, the AMD Instinct MI200 is going to launch a bit before it or alongside it. The Frontier exascale supercomputer is supposed to bring together AMD's next-generation Trento EPYC CPUs with Instinct MI200 GPU compute accelerators. It seems AMD will utilize some new technologies in building this supercomputer: while we do not know what the Trento EPYC CPUs will look like, the Instinct MI200 GPU is expected to feature a multi-chip module (MCM) design built on the new CDNA 2 GPU architecture. With this being the only information about the GPU so far, we have to wait a bit to find out more details.
[Image: AMD CDNA die]

GIGABYTE Releases 2U Server: G262-ZR0 with NVIDIA HGX A100 4-GPU

GIGABYTE Technology (TWSE: 2376), an industry leader in high-performance servers and workstations, today announced the G262-ZR0 for HPC, AI, and data analytics. Designed to support the highest level of performance in GPU computing, the G262-ZR0 incorporates fast PCIe 4.0 throughput in addition to NVIDIA HGX technologies and NVIDIA NVLink to provide industry-leading bandwidth performance.

NVIDIA GeForce RTX 3060 Anti-Mining Feature Goes Beyond Driver Version, Could Expand to More SKUs

Yesterday NVIDIA announced the company's first Crypto Mining Processor (CMP), a dedicated processor meant only for mining, with no video outputs. Alongside the new processors, the company also announced that the next driver update will halve the GeForce RTX 3060's Ethereum mining performance, limiting the appeal of this GPU SKU to miners. Up until now, we assumed NVIDIA was limiting the card's mining performance simply by having the driver detect crypto mining algorithms and throttle the card. However, that doesn't seem to be the case. According to Bryan Del Rizzo, director of global PR for GeForce, there is more at work than just the driver.

According to Mr. Del Rizzo: "It's not just a driver thing. There is a secure handshake between the driver, the RTX 3060 silicon, and the BIOS (firmware) that prevents removal of the hash rate limiter." This means that, essentially, the hash rate limiter cannot be stripped out simply by sticking with an older driver version. At the same time, according to Kopite7Kimi, NVIDIA may relaunch its existing SKUs under different device IDs that carry the built-in anti-crypto-mining limiter. What the company actually does remains to be seen.
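
NVIDIA has not disclosed how the handshake works. Purely as an illustration of the general technique - not NVIDIA's actual design - a challenge-response check between driver and firmware might look something like this minimal Python sketch (all names and the HMAC scheme are hypothetical):

    # Conceptual sketch of a driver/firmware challenge-response handshake.
    # NVIDIA has not published details; every name and mechanism here is a
    # hypothetical illustration of the technique, not the real design.
    import hmac, hashlib, os

    SHARED_SECRET = b"fused-into-silicon-at-manufacturing"  # hypothetical

    def firmware_sign(challenge: bytes) -> bytes:
        # The GPU firmware proves it is genuine by signing the challenge.
        return hmac.new(SHARED_SECRET, challenge, hashlib.sha256).digest()

    def driver_verify(challenge: bytes, response: bytes) -> bool:
        expected = hmac.new(SHARED_SECRET, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

    challenge = os.urandom(32)
    response = firmware_sign(challenge)
    # If any party in the chain is tampered with, verification fails and
    # the hash rate limiter stays engaged.
    print("handshake ok:", driver_verify(challenge, response))

The point of such a scheme is that no single component - driver, vBIOS, or silicon - can be swapped out in isolation to defeat the limiter.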

NVIDIA Announces New CMP Series Specifically Designed for Cryptocurrency Mining; Caps Mining Performance on RTX 3060

This is a big one: NVIDIA has officially announced a new family of products specifically designed to satiate the demand coming from cryptocurrency mining workloads and farms. At the same time, the company announced that the RTX 3060 launch driver will include a software limiter for Ethereum mining workloads, essentially halving the maximum theoretical hashrate the hardware could otherwise achieve. The new family of products, termed the CMP (Crypto Mining Processor) series, will carry the HX branding and will be available in four tiers: 30HX, 40HX, 50HX and 90HX. These products have no display outputs and are therefore not applicable to gaming scenarios.

NVIDIA's stance here is that the new products will bring some justice to the overall distribution of its GeForce graphics cards, which are marketed and meant for gaming workloads. The new cryptocurrency-geared series will be distributed by NVIDIA authorized partners in the form of ASUS, Colorful, EVGA, Gigabyte, MSI, Palit, and PC Partner (more may be added down the line). There is currently no information on what silicon actually powers these cards. Of course, the success of this enterprise depends on the driver restrictions not being limited to the RTX 3060 - it isn't clear from NVIDIA's press release whether other RTX 30-series graphics cards will see the same performance cap. Even if NVIDIA did release such drivers, cryptocurrency miners would simply opt to, well, not update them. So it is possible that NVIDIA will release revisions of the RTX 3090, RTX 3080, RTX 3070 and RTX 3060 Ti with silicon changes that only work with the latest GeForce drivers - after allowing the channel to move all of its existing, cryptocurrency-capable stock.

ZOTAC Promotes Cryptocurrency Mining Farm via Twitter

ZOTAC's official Twitter channel posted an image promoting the use of its graphics cards in mining environments. Under the caption "An army of #ZOTACGAMING GPU's hungry for coin!", the company posted an image of a cryptocurrency mining farm populated with its graphics cards. The use of a gaming hashtag in the post just adds insult to injury, with the picture showcasing at least eight of its GeForce RTX 3070 Twin Edge graphics cards on mining duty.

Of course, for ZOTAC, its cards being re-purposed for mining isn't a make-or-break affair: the company makes money selling its products to gamers and miners alike. However, considering how the global supply of the latest gaming-branded graphics cards has been constrained ever since the initial RTX 30-series launch, this Tweet looks like a misstep on the company's part. It's bound to draw attention to a seemingly unjust distribution of graphics cards in the market, with many would-be users of these gaming-branded cards left out in the cold regarding the primary purpose the cards are developed and marketed for: gaming. The company has since deleted the Tweet.

Sony Playstation 5 SoC Die Has Been Pictured

When AMD and Sony collaborated on the next-generation console chip, AMD internally codenamed it Flute, while Sony codenamed it Oberon or Ariel. This PlayStation 5 SoC die has now been pictured thanks to Fritzchens Fritz, giving us a closer look at the die internals. The SoC features eight of AMD's Zen 2 cores that can reach frequencies of up to 3.5 GHz, paired with a 36 CU GPU based on RDNA 2 technology that can run at up to 2.23 GHz, plus all the I/O needed to connect it all.
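
From those figures, the GPU's peak single-precision compute can be derived directly, assuming RDNA 2's standard 64 shaders per CU and two FP32 operations per shader per clock:

    # Peak FP32 throughput of the PlayStation 5 GPU from the quoted specs.
    cus = 36
    shaders_per_cu = 64          # standard for RDNA 2
    flops_per_clock = 2          # a fused multiply-add counts as two ops
    boost_ghz = 2.23

    tflops = cus * shaders_per_cu * flops_per_clock * boost_ghz / 1000
    print(f"{tflops:.2f} TFLOPS")  # ~10.28 TFLOPS, matching Sony's figure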

When tearing down the console, the heatsink and the SoC are found to be joined by liquid metal, used to achieve the best possible heat transfer between the two surfaces. Surrounding the die there is a small dam of material that prevents the liquid metal (an electrically conductive material) from spilling and shorting nearby components. Using a short-wave infrared (SWIR) microscope, we can take a look at what is happening under the hood without destroying the chip. Twitter user @Locuza has highlighted a few distinct areas: as you can see, the die has dedicated sectors for the CPU complex and a GPU array with plenty of workgroup processors, plus additional components for raytracing.

Sapphire Prepping a Return to Form With Toxic and Atomic Graphics Card Designs?

Sapphire, via Weibo, teased a new product launch to occur within the next few days. Details are all but absent - the only thing we have to go on is the company's teaser image, which features "TA" branding in prominent gold. It's been speculated that this teaser may refer to a Sapphire return to some of its best-known graphics card sub-brands - the TOXIC and ATOMIC series.

The TOXIC branding hasn't graced Sapphire's cards since the R9 390X days; it has historically marked the company's highest-tier, enthusiast-focused air-cooled graphics cards. The ATOMIC series, on the other hand, is Sapphire's AIO-integrated lineup of graphics cards, which the company has kept dormant since way back in 2013 and the AMD Radeon HD 7990 - a beautiful piece of engineering featuring a pair of AMD's Tahiti XT2 cores and 3 GB of GDDR5 memory mirrored for each of the GPU cores. The TA branding might also represent a new design from Sapphire rather than a return to old lineups, but we'll have to wait for Sapphire's announcement proper to dispel any lingering doubts.

HPE Develops New Spaceborne Computer-2 Computing System for the International Space Station

Hewlett Packard Enterprise (HPE) today announced it is accelerating space exploration and increasing self-sufficiency for astronauts by enabling real-time data processing with advanced commercial edge computing in space for the first time. Astronauts and space explorers aboard the International Space Station (ISS) will speed time-to-insight from months to minutes on various experiments in space, from processing medical imaging and DNA sequencing to unlocking key insights from volumes of remote sensors and satellites, using HPE's Spaceborne Computer-2 (SBC-2), an edge computing system.

Spaceborne Computer-2 is scheduled to launch into orbit on the 15th Northrop Grumman Resupply Mission to Space Station (NG-15) on February 20 and will be available for use on the International Space Station for the next 2-3 years. The NG-15 spacecraft has been named "S.S. Katherine Johnson" in honor of Katherine Johnson, a famed Black female NASA mathematician who was critical to the early success of the space program.

ASUS ROG Zephyrus Duo 15 Owners are Applying Custom GPU vBIOS with Higher TGP Presets

With NVIDIA's GeForce RTX 30-series lineup of laptop GPUs, manufacturers are offered a wide variety of GPU SKUs that differ internally simply by Total Graphics Power (TGP), which in turn results in different clock speeds and thus different performance. ASUS uses NVIDIA's GeForce RTX 3080 mobile GPU inside the company's ROG Zephyrus Duo 15 (GX551QS) with a TGP of 115 W, with Dynamic Boost technology able to ramp the card up to 130 W. However, this is not the ceiling for the RTX 3080 mobile graphics card: its maximum TGP goes up to 150 W, a big step up that lets the GPU reach higher frequencies and deliver more performance.

Have you ever wondered what would happen if you manually flashed a vBIOS that allows the card to use more power? Well, Baidu forum users are reporting successful experiments transforming their 115 W RTX 3080 into a 150 W TGP card. Taking the GPU vBIOS from the MSI GP76 Leopard, which features a 150 W power limit, and applying it to the ROG Zephyrus Duo's power-limited RTX 3080 is producing results: users have successfully used this vBIOS to squeeze more performance out of their laptops. As seen on the 3DMark Time Spy leaderboard, the top entries are now dominated by modified laptops. The performance improvement reaches up to 20%.
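
Those figures imply that performance scales sub-linearly with power, which is typical of GPU boost behavior; a quick Python check using only the numbers quoted above:

    # Power vs. performance scaling implied by the vBIOS mod figures above.
    tgp_stock, tgp_mod = 115, 150            # watts
    perf_gain = 0.20                          # up to 20%, per forum reports

    power_gain = tgp_mod / tgp_stock - 1
    print(f"Power increase:       {power_gain:.0%}")   # ~30%
    print(f"Performance increase: {perf_gain:.0%}")    # up to 20%
    print(f"Perf-per-watt change: {(1 + perf_gain) / (1 + power_gain) - 1:.0%}")

In other words, the mod trades roughly 30% more power (and heat) for up to 20% more performance - a net efficiency loss of around 8%.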

NVIDIA GeForce RTX 30-Series GPU Availability to Reportedly Worsen in Q1

The availability of NVIDIA's GeForce RTX 30-series "Ampere" graphics cards has been a problem ever since launch. High demand paired with insufficient supply has caused quite some disturbance in the supply chain and has pushed street prices of the GPUs well above MSRP. Initially, we were told that the situation would resolve around May, when NVIDIA expects supply to catch up with demand. However, according to a recent report, that might not be the case. Alternate, a European retailer operating in Belgium, the Netherlands, and Germany, has spoken to NVIDIA about the supply of GeForce RTX 30-series Ampere graphics cards.

According to the retailer, availability remains scarce. The GeForce RTX 3090 sees very few deliveries, but also only a few open orders. The RTX 3080 sees very few cards coming in against many open orders. The RTX 3070 has few cards incoming, but also few open orders. And last but not least, the RTX 3060 Ti has very few cards coming and a moderately high number of open orders. If you are aiming to buy a card, your best chances are with the RTX 3090 and RTX 3070, as demand for them is not as high. On the other hand, RTX 3080 and RTX 3060 Ti cards are almost impossible to source, as both have long waiting lists. Alternate says it fulfills orders on a "first in, first out" principle, so if you are not already on the list, you are likely to wait even longer.
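
Alternate's "first in, first out" principle is, in data-structure terms, a simple queue: the oldest open order is served first whenever stock arrives. A trivial Python sketch of the idea (the order IDs are made up):

    # Minimal sketch of first-in, first-out order fulfillment, as Alternate
    # describes it: the oldest open order ships first when stock arrives.
    from collections import deque

    backorders = deque(["order-001", "order-002", "order-003"])  # oldest left

    def stock_arrived(units: int):
        for _ in range(min(units, len(backorders))):
            print("shipping", backorders.popleft())

    stock_arrived(2)   # ships order-001 and order-002; order-003 keeps waiting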

ASUS Publishes Full GeForce RTX 3000 Series Laptop GPU Specifications Including TGP and Frequency

At the request of Tweakers, ASUS has decided to reveal full GPU specifications for its entire laptop lineup. Until now, companies shipping NVIDIA GeForce RTX 30-series GPUs in their laptops were not committed to listing the TGP, or whether the GPU inside was a Max-Q or Max-P variant. That could confuse the average consumer, and the GPU variant they got could be significantly slower than expected. To clear up the confusion, ASUS has provided a table of the GPU TGPs and frequencies found inside the company's laptops. Not only has ASUS published the table, but it has also updated its website to reflect the exact TGP and frequency of every GPU used in its laptops, to avoid any confusion and give consumers reassurance in their purchase. You can find the table of laptops with their exact GPU TGPs and clock speeds below.

Microchip Announces World's First PCI Express 5.0 Switches

Applications such as data analytics, autonomous driving and medical diagnostics are driving extraordinary demands for machine learning and hyperscale compute infrastructure. To meet these demands, Microchip Technology Inc. today announced the world's first PCI Express (PCIe) 5.0 switch solutions, the Switchtec PFX PCIe 5.0 family, doubling the interconnect performance for dense compute, high-speed networking and NVM Express (NVMe) storage. Together with the XpressConnect retimers, Microchip is the industry's only supplier of both PCIe Gen 5 switches and PCIe Gen 5 retimer products, delivering a complete portfolio of PCIe Gen 5 infrastructure solutions with proven interoperability.

"Accelerators, graphic processing units (GPUs), central processing units (CPUs) and high-speed network adapters continue to drive the need for higher performance PCIe infrastructure. Microchip's introduction of the world's first PCIe 5.0 switch doubles the PCIe Gen 4 interconnect link rates to 32 GT/s to support the most demanding next-generation machine learning platforms," said Andrew Dieckmann, associate vice president of marketing and applications engineering for Microchip's data center solutions business unit. "Coupled with our XpressConnect family of PCIe 5.0 and Compute Express Link (CXL ) 1.1/2.0 retimers, Microchip offers the industry's broadest portfolio of PCIe Gen 5 infrastructure solutions with the lowest latency and end-to-end interoperability."

GALAX GeForce RTX 3090 Hall Of Fame (HOF) Edition GPU Breaks 3 GHz Barrier and 16 World Records

GALAX just yesterday launched its top-end GeForce RTX 3090 Hall Of Fame (HOF) edition graphics card. Designed with overclocking in mind, the card sports many interesting design solutions, like a 12-layer PCB, a 26-phase VRM power delivery configuration, and three 8-pin power connectors. If you were wondering whether any application could use that much power, and whether all of that is really needed, search no longer, because GALAX has managed to break some world records with its HOF design. According to the company, the cards sent to overclockers have broken 16 world records, which you can find listed below.

Overclockers OGS from HwBox Hellas OC Team and Rauf from Alza OC have managed to push their HOF cards past the 3.0 GHz barrier, making these the first GeForce RTX 3090 graphics cards to achieve such frequencies. With the right design in hand, OGS overclocked the HOF to 3015 MHz, while Rauf managed exactly 3000 MHz. You can find their HWBOT entries at the source.

GALAX Shows Off GeForce RTX 3090 Hall Of Fame (HOF) Edition Graphics Card

GALAX has today decided to take the lid off its upcoming premium GeForce RTX 3090 Hall Of Fame (HOF) edition graphics card and show the world what the company has been working on. The HOF edition is traditionally GALAX's highest-end custom graphics card design with one simple goal: ultimate performance. Featuring all-white aesthetics, the card has a 12-layer white PCB with a white triple-fan air cooler; the middle fan measures 92 mm, while the other two measure 102 mm. The card comes paired with the HOF Panel III, a small 4.3-inch LCD screen that can stand on its own or attach to the GPU using magnets, used for software diagnostics such as temperature monitoring.

The GPU comes with a diamond-patterned aluminium backplate used for additional heat dissipation. When it comes to power delivery, there are three 8-pin connectors (also colored white to match the aesthetics) feeding a 26-phase VRM power delivery system. Such a configuration is envisioned for extreme overclocking, such as under LN2. There are two BIOS variants, P and S, tuned for maximum performance and quieter operation, respectively. The boost frequency of this GPU is 1875 MHz (using one-click OC); however, buyers of such a card are unlikely to leave it at that and will probably push for higher frequencies.

AMD Files Patent for Chiplet Machine Learning Accelerator to be Paired With GPU, Cache Chiplets

AMD has filed a patent describing an MLA (Machine Learning Accelerator) chiplet design that can be paired with a GPU chiplet (such as RDNA 3) and a cache chiplet (likely a GPU-excised version of AMD's Infinity Cache design that debuted with RDNA 2) to create what AMD calls an APD (Accelerated Processing Device). The design would enable AMD to build a chiplet-based machine learning accelerator whose sole function is to accelerate machine learning - specifically, matrix multiplication. This would enable capabilities not unlike those offered by NVIDIA's Tensor cores.

This could give AMD a modular way to add machine-learning capabilities to several of its designs through the inclusion of such a chiplet, and might be AMD's route to hardware acceleration of a DLSS-like feature. It would avoid the shortcomings of implementing the same logic in the GPU die itself - an increase in overall die area, and thus higher cost and reduced yields - while at the same time letting AMD deploy it in products other than GPU packages. The patent describes the possibility of different manufacturing processes being employed across the chiplets - harkening back to the I/O die in Ryzen CPUs, manufactured on a 12 nm process rather than the 7 nm one used for the core chiplets. The patent also describes the chiplet accelerating cache requests from the GPU die, with its memory usable on the fly either as actual cache or as directly addressable memory.
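
The division of labor the patent describes - matrix multiplication routed to dedicated hardware, everything else staying on the GPU's shader cores - can be illustrated with a toy dispatcher. This Python sketch mirrors the concept only; all names are hypothetical and nothing here reflects AMD's actual implementation:

    # Toy illustration of the patent's idea: a dispatcher routes matrix
    # multiplications to a dedicated accelerator and leaves other work on
    # the GPU. All names are hypothetical stand-ins for the real hardware.
    import numpy as np

    def mla_matmul(a: np.ndarray, b: np.ndarray) -> np.ndarray:
        # Stand-in for the machine-learning chiplet: it only does GEMM.
        return a @ b

    def gpu_fallback(op, *args):
        # Stand-in for the general-purpose shader cores.
        return op(*args)

    def dispatch(op, *args):
        if op is np.matmul:
            return mla_matmul(*args)      # offload to the "MLA chiplet"
        return gpu_fallback(op, *args)    # everything else stays on the GPU

    a, b = np.ones((4, 8)), np.ones((8, 2))
    print(dispatch(np.matmul, a, b).shape)  # (4, 2), computed by the "MLA"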

Apple Patents Multi-Level Hybrid Memory Subsystem

Apple has today patented a new approach to how memory is used in its System-on-Chip (SoC) designs. With the announcement of the M1 processor, Apple switched away from traditional Intel-supplied chips and transitioned to a fully custom SoC design called Apple Silicon. The new designs integrate every component, like the Arm CPU cores and a custom GPU. Both of these processors need good memory access, and Apple has devised a solution to the problem of the CPU and the GPU accessing the same pool of memory. The current UMA (unified memory architecture) represents a potential bottleneck, because both processors share the bandwidth and total capacity of a single memory pool, which could leave one processor starved in some scenarios.

Apple has patented a design that aims to solve this problem by combining high-bandwidth cache DRAM with high-capacity main DRAM. "With two types of DRAM forming the memory system, one of which may be optimized for bandwidth and the other of which may be optimized for capacity, the goals of bandwidth increase and capacity increase may both be realized, in some embodiments," says the patent, which also aims "to implement energy efficiency improvements, which may provide a highly energy-efficient memory solution that is also high performance and high bandwidth." The patent was filed back in 2016, which means we could start seeing this technology in future Apple Silicon designs following the M1 chip.
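
Conceptually, the arrangement behaves like a software-managed cache: a small, bandwidth-optimized DRAM sits in front of a large, capacity-optimized one. The following minimal Python sketch models that idea; the capacities, eviction policy, and names are invented for illustration and are not from the patent:

    # Conceptual model of a two-tier memory: a small, fast DRAM acting as
    # a cache in front of a large, slower main DRAM. All parameters here
    # are invented for illustration; the patent specifies none of them.
    class HybridMemory:
        def __init__(self, fast_capacity: int):
            self.fast = {}                 # bandwidth-optimized cache DRAM
            self.main = {}                 # capacity-optimized main DRAM
            self.fast_capacity = fast_capacity

        def read(self, addr):
            if addr in self.fast:          # hit: served at cache-DRAM speed
                return self.fast[addr]
            value = self.main.get(addr)    # miss: fetched from main DRAM...
            if len(self.fast) >= self.fast_capacity:
                self.fast.pop(next(iter(self.fast)))  # crude FIFO eviction
            self.fast[addr] = value        # ...and promoted to the fast tier
            return value

    mem = HybridMemory(fast_capacity=2)
    mem.main.update({0: "a", 1: "b", 2: "c"})
    print(mem.read(0), mem.read(1), mem.read(2), mem.read(0))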

Update 21:14 UTC: Mr. Kerry Creeron, an attorney with the firm of Banner & Witcoff, has reached out to us with additional insights about the patent. Mr. Creeron provided his personal commentary on it, and you can find his quote below.

Intel Starts Shipping Xe LP-based DG1 Discrete GPU to OEMs; Locks it out of Other Systems

Intel has apparently begun shipping its discrete Xe LP-based DG1 graphics card to OEMs and system integrators, which means we will soon see these graphics cards hitting the market - in a manner of speaking. Quantities aren't yet known but, considering Intel's intention of shipping it only to OEMs, volume shouldn't be significant. It remains to be seen whether DG1-toting systems will even be available to the general public, or whether these will be sold primarily to business customers. However, considering that the discrete DG1 offers only entry-level performance from its 80 execution units (fewer than the 96 available through the integrated graphics of Intel's Tiger Lake CPUs), hopes that this particular graphics card might somewhat remedy the industry's current undersupply won't materialize.

One interesting tidbit, however, is that system integrators will have to use specific hardware in the systems they build around Intel's DG1, as the blue giant has specified that these graphics cards will only work with specific firmware that enables them to function on certain chipset and processor combinations. According to Intel, speaking to Legit Reviews: "The Iris Xe discrete add-in card will be paired with 9th gen (Coffee Lake-S) and 10th gen (Comet Lake-S) Intel Core desktop processors and Intel B460, H410, B365, and H310C chipset-based motherboards and sold as part of pre-built systems. These motherboards require a special BIOS that supports Intel Iris Xe, so the cards won't be compatible with other systems."

AMD Nashira Summit GPU Gets Spotted in Ashes of the Singularity Database

AMD's mysterious Nashira Summit GPU has been spotted in the Ashes of the Singularity database. A similarly named Nashira Point GPU appeared some time ago on the USB-IF website, another mysterious entry in AMD's Radeon lineup. Nashira Summit and Nashira Point seem to be part of a common Nashira GPU family, presumably a codename for lower-end Navi 22 or Navi 23 GPU models. Today, a Nashira Summit result surfaced in the Ashes of the Singularity database: the GPU was put through a set of AotS benchmark runs, and we have the scores. Unfortunately, the tests were run using all-custom settings, so it is impossible to compare the result against another GPU as a reference. The test was probably performed by AMD or an AIB. So far it is impossible to tell whether this is a mobile or a desktop product, as both are tested in the same manner. What the mysterious Nashira Summit GPU is remains an open question, so we have to wait for more information.

GALAX GeForce RTX 3090 Hall Of Fame (HOF) PCB Pictured, Features Massive VRM Configuration

GALAX is preparing to launch its flagship graphics card based on NVIDIA's Ampere GPU lineup, specifically the GeForce RTX 3090. The company is currently developing the GeForce RTX 3090 Hall Of Fame (HOF) edition, which is set to receive the regular HOF treatment: white aesthetics (white PCB plus white cooling solution), a massive triple-fan air cooler and, of course, a PCB designed for extreme overclocking. Today, thanks to the sources over at VideoCardz, we have a first look at the PCB of GALAX's upcoming GeForce RTX 3090 HOF edition graphics card.

Featuring a massive VRM configuration consisting of 26 phases, the card has the highest number of VRM phases we have seen on any GeForce RTX 3090. It is not clear from the pictures how many of the 26 phases feed Vcore (GPU) and how many feed Vmem (memory). To power the card, there are three 8-pin power connectors. It is important to note that these specifications are not final, as this is only a prototype. Nonetheless, the card is made with LN2 extreme overclocking in mind and will probably be priced accordingly. There are even probes for measuring voltages directly from the card, avoiding the need to do it in software. NVLink fingers are present as well, meaning dual-card setups are still an option with this GPU. The finished product is expected to arrive sometime in February, according to the source; however, we don't know the exact date or pricing.

AMD is Allegedly Preparing Navi 31 GPU with Dual 80 CU Chiplet Design

AMD is about to enter the world of chiplets with its upcoming GPUs, just as it has done with its Zen generations of processors. Having launched the Radeon RX 6000 series lineup based on Navi 21 and Navi 22, the company is seemingly not stopping there. To remain competitive, it needs to innovate constantly, and that reportedly holds true once again. According to current rumors, AMD is working on an RDNA 3 GPU design based on chiplets. The chiplet design is said to feature two 80 Compute Unit (CU) dies, just like the one found inside the Radeon RX 6900 XT graphics card.

Two 80 CU dies would bring the total to exactly 10240 stream processors (two times the 5120 found on a Navi 21 die). Combined with the RDNA 3 architecture, which promises better performance-per-watt than the last-generation architecture, the Navi 31 GPU is shaping up to be a compute monster. It isn't exactly clear when we are supposed to get this graphics card; it may arrive at the end of this year or the beginning of 2022.
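
The arithmetic behind the 10240 figure is straightforward, assuming RDNA's usual 64 stream processors per compute unit:

    # Stream processor count implied by a dual 80 CU chiplet design,
    # assuming RDNA's standard 64 stream processors per compute unit.
    chiplets, cus_per_chiplet, sps_per_cu = 2, 80, 64
    print(chiplets * cus_per_chiplet * sps_per_cu)  # 10240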

Linux Gets Ported to Apple's M1-Based Devices

When Apple introduced its lineup of devices based on custom Apple Silicon, many people assumed it spelled the end of any further device customization and that Apple was effectively locking up its ecosystem even more. That turns out not to be the case. Developers working on Macs are often in need of another operating system to test their software, which usually means running virtualization software such as virtual machines to try out another OS like Linux or possibly Windows. It would be a lot easier if they could just boot that OS directly on the device, and that is exactly why we are here today.

Researchers from Corellium, a Florida-based startup working on Arm device virtualization, have pulled off an incredible feat: they have managed to get Linux running on Apple's M1 custom silicon-based devices. Corellium CTO Chris Wade has announced that Linux is now fully usable on M1 silicon. The port can take full advantage of the CPU; however, there is no GPU acceleration for now, and graphics fall back to software rendering. Corellium also promises to upstream its changes to the Linux kernel itself, under the kernel's open-source licensing. Below you can find an image of an Apple M1 Mac mini running the latest Ubuntu build.