News Posts matching #ASIC

AMD RX 6950 XT, RX 6750 XT, and RX 6650 XT Pictured, Launching on May 10

AMD's Radeon RX product stack refresh for Spring-Summer is reportedly set to launch on May 10, 2022. Here's the first picture of what the reference-design RX 6950 XT flagship, the RX 6750 XT, and the mid-range RX 6650 XT could look like. These reference board designs are essentially identical to the original RX 6000-series made-by-AMD (MBA) reference designs, but ditch the two-tone silver-and-black color scheme for an all-black one with diamond-cut edges around the fan vents and some piano-black accents.

At this point it is not known whether this refresh sees the Navi 2x ASICs optically shrunk to the TSMC N6 (6 nm) silicon fabrication node, or whether it uses the existing 7 nm ASICs with their total graphics power (TGP) values dialed up to make room for increased engine clocks and faster 18 Gbps-rated GDDR6 memory chips. It's interesting to see the RX 6750 XT now come with a triple-fan cooler that resembles the RX 6800 (non-XT) cooler in design, if not color. We're not sure if the RX 6650 XT reference design will ever make it to the real world, or if it's just a concept and the SKU is an AIB exclusive (custom designs only).

Intel Launches New Intel Blockscale Technology for Energy-Efficient Blockchain Hashing

Intel today announced details for its new Intel Blockscale ASIC. Building on years of Intel research and development (R&D), this application-specific integrated circuit (ASIC) will provide customers with energy-efficient hashing for proof-of-work consensus networks. The compute requirement of blockchains utilizing proof-of-work consensus mechanisms is growing at a rapid rate, owing to their resiliency and ability to scale without sacrificing decentralization. This growing pool of computing power requires an enormous amount of energy, necessitating new computing technologies that can provide the requisite power in a more energy-efficient manner while also being durable enough to mitigate long-term e-waste concerns.
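
For the unfamiliar, the "hashing" such an ASIC accelerates is a brute-force search for a block hash below a difficulty target. A minimal Python sketch of Bitcoin-style double-SHA256 proof-of-work, with an illustrative header and difficulty rather than real network parameters:

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    """Bitcoin hashes block headers with two rounds of SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(header: bytes, difficulty_bits: int) -> int:
    """Search nonces until the double-SHA256 of header+nonce falls below the target."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = double_sha256(header + nonce.to_bytes(4, "little"))
        if int.from_bytes(digest, "big") < target:
            return nonce  # a valid proof of work
        nonce += 1

# Illustrative header and difficulty -- not real Bitcoin network parameters.
print(mine(b"example-block-header", difficulty_bits=20))
```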

"Momentum around blockchain continues to build. It is the enabler of decentralized and distributed computing, making way for innovative business models. To power this new era of computing, Intel is delivering solutions that can offer an optimal balance of hashing throughput and energy efficiency regardless of a customer's operating environment. Intel's decades of R&D in cryptography, hashing techniques and ultra-low voltage circuits make it possible for blockchain applications to scale their computing power without compromising on sustainability," said Balaji Kanigicherla, Intel vice president and general manager of Custom Compute in the Accelerated Computing Systems and Graphics Group.

Intel Arc DG2-512 Built on TSMC 6nm, Has More Transistors than GA104 and Navi 22

Some interesting technical specifications of the two elusive GPUs behind the Intel Arc "Alchemist" series have surfaced. The larger DG2-512 silicon, which forms the base for the Arc 5 and Arc 7 series, is particularly interesting in that it is larger in every way than the performance-segment ASICs from both NVIDIA and AMD, namely the NVIDIA GA104 and the AMD Navi 22. This segment of GPUs has fairly powerful use-cases, including native 1440p gameplay, or 4K gameplay with a performance-enhancement feature, which Intel has in the form of XeSS.

The DG2-512 is built on the 6 nm TSMC N6 foundry node, the most advanced node among the three GPUs in this class. It has the highest transistor density (53.4 MTr/mm²), the largest die area (406 mm²), and the highest transistor count (21.7 billion). The Xe-HPG graphics architecture is designed for full DirectX 12 Ultimate feature support, and the DG2-512 has dedicated hardware for ray tracing as well as AI acceleration. The Arc A770M is the fastest product based on this silicon; however, it is a mobile GPU with the aggressive power management characteristic of the form factor it serves. Here, the DG2-512 has an FP32 throughput of 13.5 TFLOPs, compared to 13.2 TFLOPs for the Navi 22 on the Radeon RX 6700 XT desktop graphics card, and 21.7 TFLOPs for the GA104 maxed out on the GeForce RTX 3070 Ti desktop graphics card.
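
The three physical figures quoted above are mutually consistent; multiplying transistor density by die area recovers the transistor count:

```python
# Sanity-check the quoted DG2-512 figures: density x die area = transistor count.
density_mtr_per_mm2 = 53.4   # million transistors per mm^2
die_area_mm2 = 406

transistors_billion = density_mtr_per_mm2 * die_area_mm2 / 1000
print(f"{transistors_billion:.1f} billion transistors")  # ~21.7 billion
```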

NVIDIA GA107-based GeForce RTX 3050 is Real, Comes with 11% Lower TDP, Same Specs

When NVIDIA launched the GeForce RTX 3050 "Ampere" based on the "GA106" silicon with specifications that could be fulfilled with the smaller "GA107," we knew that the company could eventually start making RTX 3050 boards with the smaller chip, and they did. Igor's Lab reports that RTX 3050 cards based on GA107 come with a typical board power of 115 W, which is about 11 percent lower than that of the GA106-based cards (130 W).

There's no difference in specifications between the two cards. Both feature 2,560 CUDA cores across 20 streaming multiprocessors, 80 Tensor cores, 20 RT cores, and a 128-bit wide GDDR6 memory interface holding 8 GB of memory that ticks at 14 Gbps data rate (224 GB/s bandwidth). The GA106 and GA107 ASICs share a common fiberglass substrate and are hence pin-compatible, for the convenience of board partners; the GA107 simply has a smaller die, so any cooling solution designed for the launch-day RTX 3050 should work fine with GA107-based cards.
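
The quoted 224 GB/s follows directly from the bus width and data rate:

```python
# GDDR6 bandwidth: bus width (bits) / 8 x per-pin data rate (Gbps) = GB/s.
bus_width_bits = 128
data_rate_gbps = 14

bandwidth_gbs = bus_width_bits / 8 * data_rate_gbps
print(f"{bandwidth_gbs:.0f} GB/s")  # 224 GB/s
```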

Intel Announces a Roadmap of Energy-efficient Blockchain Accelerators

Digital computing continues to enrich our lives in more ways than we can imagine. We acquire, consume, and create content and services with a few clicks or taps of our fingertips. Exponential increases in compute performance, enabled by Moore's Law, play a significant role in making these experiences seamless. Moore's Law is also enabling us to democratize access to this enormous pool of processing power. Amazing things happen when a lot of compute is available to a lot of people without much friction.

Blockchain is a technology that has the potential to enable everyone to own much of the digital content and services they create. Some even call it an inflection point in computing, fundamentally disrupting the way we store, process, and transact our digital assets as we usher in the era of the metaverse and Web 3.0. No matter how the future evolves, it is certain that the availability of a lot more compute to everyone will play a central role.

Intel "Bonanza Mine" Bitcoin ASIC Secures First Big Customer, a $3.3 Billion Crypto-Mining Startup

Just a few days ago, we reported that Intel is preparing to unveil the company's first application-specific integrated circuit (ASIC) dedicated to mining cryptocurrency. To be more specific, Intel plans to show off its "Bonanza Mine" ASIC at the 2022 ISSCC Conference, describing the chip as an "ultra low-voltage energy-efficient Bitcoin mining ASIC." We have yet to see how this competes with other industry-made ASICs like the ones from Bitmain. However, it seems like the startup company GRIID, valued at around $3.3 billion, thinks that the Bonanza Mine ASIC is the right choice and has entered a definitive supply agreement with Intel.

According to the S-4 filing, GRIID has "entered into a definitive supply contract with Intel to provide ASICs that we expect to fuel our growth. The initial order will supply units to be delivered in 2022 and GRIID will have access to a significant share of Intel's future production volumes." There are a few other mentions of Intel in the document, and you can see another exciting tidbit below.

Intel "Bonanza Mine" is a Bitcoin Mining ASIC, Intel Finally Sees Where the Money is

Intel is reportedly looking to disrupt the cryptocurrency mining hardware business with fixed-function ASICs that either outperform GPUs outright, or offer enough of an advantage in performance-per-watt and performance-per-dollar to make GPUs unviable as a mining hardware option. The company is planning to unveil its first such product, codenamed "Bonanza Mine," an ASIC purpose-built for Bitcoin mining.

Since it's an ASIC, "Bonanza Mine" doesn't appear to be a re-purposed Xe-HPC processor, or even an FPGA programmed to mine Bitcoin; it's a purpose-built piece of silicon. Intel will unveil "Bonanza Mine" at the 2022 ISSCC Conference, describing the chip as an "ultra low-voltage energy-efficient Bitcoin mining ASIC," putting power-guzzling GPUs on notice. If Intel can clinch Bitcoin with "Bonanza Mine," designing ASICs for other cryptocurrencies is straightforward. With demand from crypto-miners slashed, graphics cards could see a tremendous fall in value, forcing scalpers to cut prices.

TrendForce: Annual Foundry Revenue Expected to Reach Historical High Again in 2022 with 13% YoY Increase with Chip Shortage Showing Sign of Easing

While the global electronics supply chain experienced a chip shortage, the corresponding shortage of foundry capacities also led various foundries to raise their quotes, resulting in an over 20% YoY increase in the total annual revenues of the top 10 foundries for both 2020 and 2021, according to TrendForce's latest investigations. The top 10 foundries' annual revenue for 2021 is now expected to surpass US$100 billion. As TSMC leads yet another round of price hikes across the industry, annual foundry revenue for 2022 will likely reach US$117.69 billion, a 13.3% YoY increase.

TrendForce indicates that the combined CAPEX of the top 10 foundries surpassed US$50 billion in 2021, a 43% YoY increase. As new fab constructions and equipment move-ins gradually conclude next year, their combined CAPEX for 2022 is expected to undergo a 15% YoY increase and fall within the US$50-60 billion range. In addition, now that TSMC has officially announced the establishment of a new fab in Japan, total foundry CAPEX will likely increase further next year. TrendForce expects the foundry industry's total 8-inch and 12-inch wafer capacities to increase by 6% YoY and 14% YoY next year, respectively.

Longsys Launches FORESEE DDR4 DRAM Chips

With the rapid development of advanced technologies, such as 5G, the Internet of Things (IoT), Artificial Intelligence (AI), and 8K, people are placing more stringent requirements on the convenience, intelligence, and functional integration of their electronics. This has given rise to new development opportunities in the storage industry. As we progress further into the digital revolution, intelligent electronics will require small-capacity storage products which feature an increased level of reliability and stability. High-temperature tolerance in storage products will be vital for customers in the intelligent and small-sized consumer electronics market.

Longsys recently launched the FORESEE DDR4, which utilizes 96-ball thin fine ball grid array (TFBGA) encapsulation. The product's manufacturing process, transmission speed, power consumption, and high-temperature reliability all perform at an industry-leading level.

Penetration Rate of Ice Lake CPUs in Server Market Expected to Surpass 30% by Year's End as x86 Architecture Remains Dominant, Says TrendForce

While the server industry transitions to the latest generation of processors based on the x86 platform, the Intel Ice Lake and AMD Milan CPUs entered mass production earlier this year and were shipped to certain customers, such as North American CSPs and telecommunication companies, at a low volume in 1Q21, according to TrendForce's latest investigations. These processors are expected to begin seeing widespread adoption in the server market in 3Q21. TrendForce believes that Ice Lake represents a step-up in computing performance from the previous generation due to its higher scalability and support for more memory channels. On the other hand, the new normal that emerged in the post-pandemic era is expected to drive clients in the server sector to partially migrate to the Ice Lake platform, whose share in the server market is expected to surpass 30% in 4Q21.

TrendForce: Enterprise SSD Contract Prices Likely to Increase by 15% QoQ for 3Q21 Due to High SSD Demand and Short Supply of Upstream IC Components

The ramp-up of the Intel Ice Lake and AMD Milan processors is expected to not only propel growth in server shipments for two consecutive quarters from 2Q21 to 3Q21, but also drive up the share of high-density products in North American hyperscalers' enterprise SSD purchases, according to TrendForce's latest investigations. In China, procurement activities by domestic hyperscalers Alibaba and ByteDance are expected to increase on a quarterly basis as well. With the labor force gradually returning to physical offices, enterprises are now placing an increasing number of IT equipment orders, including servers, compared to 1H21. Hence, global enterprise SSD procurement capacity is expected to increase by 7% QoQ in 3Q21. Ongoing shortages in foundry capacities, however, have led to the supply of SSD components lagging behind demand. At the same time, enterprise SSD suppliers are aggressively raising the share of large-density products in their offerings in an attempt to optimize their product lines' profitability. Taking these factors into account, TrendForce expects contract prices of enterprise SSDs to undergo a staggering 15% QoQ increase for 3Q21.

New Intel XPU Innovations Target HPC and AI

At the 2021 International Supercomputing Conference (ISC), Intel is showcasing how the company is extending its lead in high performance computing (HPC) with a range of technology disclosures, partnerships and customer adoptions. Intel processors are the most widely deployed compute architecture in the world's supercomputers, enabling global medical discoveries and scientific breakthroughs. Intel is announcing advances in its Xeon processor for HPC and AI as well as innovations in memory, software, exascale-class storage, and networking technologies for a range of HPC use cases.

"To maximize HPC performance we must leverage all the computer resources and technology advancements available to us," said Trish Damkroger, vice president and general manager of High Performance Computing at Intel. "Intel is the driving force behind the industry's move toward exascale computing, and the advancements we're delivering with our CPUs, XPUs, oneAPI Toolkits, exascale-class DAOS storage, and high-speed networking are pushing us closer toward that realization."

Seagate Introduces Groundbreaking Exos CORVAULT Hardware-Based Self-Healing Block Storage System

Seagate, a world leader in data storage infrastructure solutions, launched a uniquely intelligent category of mass-capacity storage designed to streamline data management and reduce human intervention for macro edge and data center environments. The new Exos CORVAULT high-density storage system offers SAN-level performance built on Seagate's breakthrough storage architecture that combines the sixth generation VelosCT ASIC, ADAPT erasure code data protection, and Autonomous Drive Regeneration.

Designed on the Seagate Exos 4U106 12 Gb/s platform, CORVAULT offers "five nines" availability (99.999%), helping to deliver consistently high reliability. The maximum-density 4U chassis accommodates 106 drives in only seven inches (18 cm) of rack space, and is tuned to maximize drive performance by protecting against vibrational and acoustic interference, heat, and power irregularities.
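
For context, "five nines" works out to only a few minutes of allowed downtime per year:

```python
# "Five nines" availability expressed as allowed downtime per year.
availability = 0.99999
minutes_per_year = 365.25 * 24 * 60

downtime_minutes = (1 - availability) * minutes_per_year
print(f"{downtime_minutes:.2f} minutes of downtime per year")  # ~5.26 minutes
```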

Bosch Unveils One Billion Euro Chip Manufacturing Facility in Germany

Robert Bosch GmbH, commonly known as just Bosch, has today unveiled the result of the company's biggest investment ever: a one billion Euro (roughly 1.2 billion US Dollar) manufacturing facility. The plant is located in Dresden, Germany, and aims to supply the leading self-driving automobile companies with chips that are in great demand. As the main goal for the plant is to manufacture chips for the automotive industry, the new 7,200 m² Dresden facility is supposed to provide car makers with Application-Specific Integrated Circuits (ASICs) for power management and tasks such as triggering the automatic braking system of cars.

The facility was funded partly by the European Union investment scheme, which contributed as much as 200 million Euros ($243 million). The plan is to start manufacturing chips for power tools as early as July, and to begin production of automotive chips in September. All of the chips will be manufactured on 300 mm wafers, which offer a major improvement in quantity over the 200 mm and 150 mm wafers Bosch currently uses. The opening of this facility will surely help with the global chip shortage, which has even hit the automotive sector.
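
The quantity gain from larger wafers follows from simple geometry; usable area grows with the square of the diameter (ignoring edge loss and yield):

```python
import math

def wafer_area_mm2(diameter_mm: float) -> float:
    """Gross wafer area; real die yield also depends on edge loss and defects."""
    return math.pi * (diameter_mm / 2) ** 2

print(wafer_area_mm2(300) / wafer_area_mm2(200))  # 2.25x the area of a 200 mm wafer
print(wafer_area_mm2(300) / wafer_area_mm2(150))  # 4.0x the area of a 150 mm wafer
```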

Marvell Launches Industry's First 1.6T Ethernet PHY with 100G PAM4 I/Os in 5nm

Marvell today introduced the industry's first 1.6T Ethernet PHY with 100G PAM4 electrical input/outputs (I/Os) in 5nm. The demand for increased bandwidth in the data center to support massive data growth is driving the transition to 1.6T (Terabits per second) in the Ethernet backbone. 100G serial I/Os play a critical role in the cloud infrastructure to help move data across compute, networking and storage in a power-efficient manner. The new Marvell Alaska C PHY is designed to accelerate the transition to 100G serial interconnects and doubles the bandwidth speeds of the previous generation of PHYs to bring scalability for performance-critical cloud workloads and applications such as artificial intelligence and machine learning.

Marvell's 1.6T Ethernet PHY solution, the 88X93160, enables next-generation 100G serial-based 400G and 800G Ethernet links for high-density switches. The doubling of the signaling rate creates signal-integrity challenges, driving the need for retimer devices in high port-count switch designs, and it's critical that the retimers and gearboxes used in these applications are extremely power efficient. Implemented in the latest 5nm node, the Marvell 800GbE PHY provides a 40% savings in I/O power compared to existing 50G PAM4-based I/Os.
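
As a quick sanity check on the lane math (symbol rates on real links run slightly higher to cover FEC overhead):

```python
# 100G PAM4 lanes adding up to a 1.6T PHY. PAM4 encodes 2 bits per symbol,
# so a 100 Gbps lane runs at roughly 50 GBaud before FEC overhead.
link_gbps = 1600
lane_gbps = 100

lanes = link_gbps // lane_gbps
symbol_rate_gbaud = lane_gbps / 2   # 2 bits per PAM4 symbol
print(f"{lanes} lanes at ~{symbol_rate_gbaud:.0f} GBaud each")  # 16 lanes, ~50 GBaud
```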

TSMC to Execute Bitmain's Orders for 5nm Crypto-Mining ASICs from Q3-2021

TSMC will be manufacturing next-generation 5 nm ASICs for Bitmain, which designs purpose-built crypto-currency mining machines around them. DigiTimes reports that 5 nm volume production could kick off from Q3-2021. Bitmain's latest Antminer ASIC-based mining machines, announced last month, were purported to be up to 32 times faster than a GeForce RTX 3080 at mining Ethereum. Recent history has shown that whenever ASICs catch up with or beat GPUs at mining, GPU prices tend to drop. With no 5 nm GPUs on the horizon for Q3-2021, one can reasonably expect market pressure from crypto-miners to drop off once Antminers gain traction.

Xilinx Reports Fiscal Fourth Quarter and Fiscal Year 2021 Results

Xilinx, Inc. (Nasdaq: XLNX), the leader in adaptive computing, today announced record revenues of $851 million for the fiscal fourth quarter, up 6% over the previous quarter and an increase of 13% year over year. Fiscal 2021 revenues were $3.15 billion, largely flat from the prior fiscal year. GAAP net income for the fiscal fourth quarter was $188 million, or $0.75 per diluted share. Non-GAAP net income for the quarter was $204 million, or $0.82 per diluted share. GAAP net income for fiscal year 2021 was $647 million, or $2.62 per diluted share. Non-GAAP net income for fiscal year 2021 was $762 million, or $3.08 per diluted share.

"We are pleased with our fourth quarter results as we delivered record revenues and double-digit year-over-year growth in the midst of a challenging supply chain environment," said Victor Peng, Xilinx president and CEO. "Xilinx saw further improvement in demand across a majority of our diversified end markets with key strength in our Wireless, Data Center and Automotive markets, the pillars of our growth strategy. Our teams have executed well and we remain focused on continuing to meet customers' critical needs."

Team Group Announces T-CREATE EXPERT NVMe SSD with Extreme 12,000 TBW Endurance

In recent years, the cryptocurrency market has been gaining a great deal of attention, leading to a continuous surge in global mining. Chia, which started trading in May, is one of a new breed of cryptocurrencies: its mining method differs from that of previous cryptocurrencies, which use GPUs and ASICs to complete calculations and earn profits. The highly durable EXPERT PCIe SSD, developed by TEAMGROUP's creator sub-brand T-CREATE, is the best choice for the environmentally friendly "storage capacity mining" that Chia promotes.

The Chia Network utilizes a consensus algorithm called "Proof of Space and Time." A Chia farmer's possible yield is directly proportional to their amount of storage space: to earn higher profits, you need more hard drive space. This approach ensures that no one will design special-purpose hardware (an ASIC) to mine it, and storage capacity is relatively unrelated to power consumption, which makes the Chia Network a new "green" currency system. For those who want to join the mining community under this environmentally friendly model, the T-CREATE EXPERT PCIe SSD features a spectacular TBW rating of up to 12,000 TB, making it well suited to the intense write cycles the mining process requires.
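
A farmer's expected yield under Proof of Space is easy to model; the farmer size and netspace below are illustrative, with only the 4,608 blocks-per-day target taken from Chia's documentation:

```python
# Expected Chia farming yield: your share of netspace times the block rate.
farmer_space_tib = 100
netspace_eib = 30                       # hypothetical total network space

netspace_tib = netspace_eib * 1024**2   # EiB -> TiB
win_share = farmer_space_tib / netspace_tib
blocks_per_day = 4608                   # Chia's documented target block rate
print(f"{win_share * blocks_per_day:.4f} expected blocks per day")
```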

YouTube Updates Server Infrastructure With Custom ASICs for Video Transcoding

Video streaming can look a bit like magic. The uploader sends a video to the platform in one resolution and encoding format, while each viewer requests the video in the specific resolution and encoding format suited to the device it is streamed on. YouTube knows this best, as it is the world's largest video platform, with over 2 billion users visiting each month. That places a massive load on the server infrastructure in the Google data centers that host the service. About 500 hours' worth of video content is uploaded to the platform every minute, and regular hardware is no longer enough to handle it all.

That is why YouTube has developed custom chips, ASICs called VCUs, or Video (trans)Coding Units. Transcoding is a major burden in Google's data centers: each video needs to be adapted to the streaming platform and the desired specifications, and doing that on regular hardware no longer keeps up. By using ASIC devices such as the VCU, Google can keep up with demand and deliver the best possible quality. Codenamed Argos, the chip delivers a 20-33x improvement in efficiency compared to a regular server platform. In data centers, the VCU is implemented as a regular PCIe card, with two chips under its heatsinks.
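
To get a feel for the workload a VCU offloads, here is a minimal sketch of a transcode fan-out using the stock ffmpeg CLI from Python; the encoding ladder and settings are illustrative, not YouTube's actual pipeline:

```python
import subprocess

# Illustrative encoding ladder -- not YouTube's actual settings.
RUNGS = [("1080p", "1920x1080"), ("720p", "1280x720"), ("480p", "854x480")]

for name, size in RUNGS:
    # Scale the upload and re-encode it to VP9 in constant-quality mode.
    subprocess.run(
        ["ffmpeg", "-i", "upload.mp4", "-s", size,
         "-c:v", "libvpx-vp9", "-b:v", "0", "-crf", "33",
         f"out_{name}.webm"],
        check=True,
    )
```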

Commodore 64 Modded To Mine Bitcoin

We saw the modified Nintendo Game Boy last month, which could crank out Bitcoin hashes at a blistering 0.8 hashes per second, or roughly 125 trillion times slower than a modern Bitcoin ASIC miner. If you are searching for something even more modest than the Game Boy, take a look at the Commodore 64, which has been modded to achieve a Bitcoin mining rate of 0.3 hashes per second. The Commodore 64 was released by Commodore in 1982, featuring the MOS Technology 6510 processor clocked at 1.023 MHz and paired with 64 KB of RAM and 20 KB of ROM.

While the Commodore currently falls behind the Game Boy, there is hope on the horizon: the creator of the program claims a 10x performance improvement, to over 3 hashes per second, is possible by re-writing the code in machine language. The Commodore 64 can be further upgraded with the SuperCPU add-on, which boosts mining speeds to over 60 hashes per second, comprehensively beating the Game Boy but still falling ludicrously short of the latest ASIC miners at ~18,000,000,000,000 hashes per second. Obviously, this demonstration was not meant as a practical application, but it is interesting to see how cryptocurrency mining can be implemented on older hardware, and it highlights the amazing rate of technological advancement over the last 40 years.
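
Putting the quoted hash rates on one scale makes the gap vivid:

```python
# The quoted hash rates on one scale, compared against the ASIC figure above.
asic_hs = 18e12   # ~18 trillion hashes per second

for name, hs in [("Commodore 64 (stock)", 0.3),
                 ("Nintendo Game Boy", 0.8),
                 ("C64 with SuperCPU", 60.0)]:
    print(f"{name}: {asic_hs / hs:.1e}x slower than the ASIC")
```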

Bitmain Releases Antminer E9 Ethereum ASIC With Performance of 32 RTX 3080 Cards

Bitmain has recently announced its most powerful Ethereum miner yet, the Antminer E9, with a performance of 3 GH/s, just as the price of Ethereum reaches all-time highs. The Chinese manufacturer advertises this as equivalent to 32 NVIDIA RTX 3080 cards, while drawing significantly less power and likely costing less. The Antminer E9 achieves its 3 GH/s mining speed at a power consumption of just 2,556 W, giving it an efficiency of 0.85 J/MH, which would make it one of the most efficient Ethereum miners available. While the ASIC appears to offer significant advantages, it is unlikely to meet global demand from Ethereum miners, and is therefore unlikely to ease the global GPU shortage. Bitmain did not announce specific pricing or availability information for the Antminer E9.
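
The efficiency figure checks out against the quoted power and hash rate:

```python
# Joules per megahash = watts / (hash rate in MH/s).
power_w = 2556
hashrate_ghs = 3.0

hashrate_mhs = hashrate_ghs * 1000
print(f"{power_w / hashrate_mhs:.2f} J/MH")  # 0.85 J/MH
```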

Tenstorrent Selects SiFive Intelligence X280 for Next-Generation AI Processors

SiFive, Inc., the industry leader in RISC-V processors and silicon solutions, today announced that Tenstorrent, an AI semiconductor and software start-up developing next-generation computers, will license the new SiFive Intelligence X280 processor for use in its AI training and inference processors. SiFive will share more details of its SiFive Intelligence initiative, including the SiFive Intelligence X280 processor, at the Linley Spring Processor Conference on April 23rd.

Tenstorrent's novel approach to inference and training effectively and efficiently accommodates the exponential growth in the size of machine learning models while offering best-in-class performance.

Nintendo Game Boy Modded to Mine Bitcoin

Nintendo's Game Boy handheld console launched in 1989, making it 32 years old. Widely regarded as the icon of handheld gaming, it sold in the millions and has been copied countless times. However, with some spare time and a crazy idea, the console has now been modified to mine the Bitcoin cryptocurrency. Yes, you are reading that right: an 8-bit console is mining the biggest and most valuable cryptocurrency. An electronics enthusiast named "stacksmashing" set himself the difficult task of proving that the console can mine some Bitcoin, at any rate whatsoever, and he has managed to prove it is possible, although with some modifications.

Given that the console lacks any connectivity to the outside world due to its age, the modder had to use SPI (Serial Peripheral Interface) to link the Game Boy to a Raspberry Pi, which handles connecting it to the internet to mine some Bitcoin. Using the custom 8-bit Sharp LR35902 processor running at 4.19 MHz, the console is naturally not very powerful; it cannot do any meaningful mining, and comparing it to modern mining ASICs is just silly. However, it is an interesting proof of concept and some fine engineering fun. For more information, check out stacksmashing's YouTube video.
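
As a flavor of what the Raspberry Pi side of such a bridge might look like, here is a hypothetical sketch using the spidev library; the framing bytes and protocol are invented for illustration, since the actual mod's protocol isn't detailed here:

```python
import spidev  # Raspberry Pi SPI userspace library

spi = spidev.SpiDev()
spi.open(0, 0)                # SPI bus 0, chip-select 0
spi.max_speed_hz = 500_000    # stay well within what the console link can handle

work = [0x01, 0xDE, 0xAD, 0xBE, 0xEF]   # made-up "new work" frame
reply = spi.xfer2(work)                 # full-duplex transfer: send work, read result
print("console replied:", reply)
spi.close()
```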

Google Hires Intel Veteran Uri Frank To Lead Datacenter SoC Development

Google has recently hired Intel veteran Uri Frank as VP of Engineering for its newly created server chip design division. The new division will develop custom systems-on-chip (SoCs) for use in Google data centers, to gain higher performance and use less power by integrating hardware and software. Google has considerable experience in hardware development, starting with its Tensor Processing Unit in 2015, its Video Processing Units in 2018, and, in 2019, the first open-source silicon root-of-trust project. Google has also developed custom hardware solutions for SSDs, HDDs, network switches, and network interface cards in collaboration with external partners.

Google hopes to reduce the latency and improve the bandwidth between different components by integrating them all into custom SoCs, improving power consumption and cost compared to individual ASICs on a motherboard. The development of these custom SoCs will be a long process, with Google planning to hire hundreds of SoC engineers, so it will be a few years before we begin to see them deployed. This move is consistent with rivals Amazon Web Services and Microsoft Azure, which are both also developing custom server chips for their data centers. Google will continue to purchase existing products where it is more practical to do so, and hopes to create an ecosystem that benefits the entire industry.

NVIDIA GeForce RTX 3070 Ti and RTX 3080 Ti Alleged Memory Specs and ASIC Codes Surface

An add-in card partner source shared with VideoCardz some juicy details about a pair of upcoming high-end GeForce RTX 30-series "Ampere" graphics cards. Called the GeForce RTX 3070 Ti and GeForce RTX 3080 Ti, the two aim to restore NVIDIA's competitiveness against the likes of AMD's recent Radeon RX 6000 series GPUs. It looks like NVIDIA doesn't want to play the memory-size game just yet, despite giving the RTX 3060 12 GB of it.

The GeForce RTX 3070 Ti appears to max out the GA104 silicon, and carries the ASIC code "GA104-400-A#." The current RTX 3070 enables all but one of the TPCs on the GA104, working out to 5,888 CUDA cores; the new RTX 3070 Ti probably maxes out the GA104 to its full CUDA core count of 6,144. The more substantial upgrade, however, is memory: the card ditches 14 Gbps GDDR6 for faster GDDR6X memory of an unknown speed, probably higher than 16 Gbps. The memory size remains 8 GB across a 256-bit bus.
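
The core-count math, plus a bandwidth estimate under an assumed, purely hypothetical GDDR6X data rate:

```python
# Ampere SM math (128 FP32 cores per SM; one TPC = two SMs), plus a bandwidth
# estimate under an assumed -- hypothetical -- GDDR6X data rate.
cores_per_sm = 128
full_sms = 48

print(full_sms * cores_per_sm)        # 6,144 cores: GA104 maxed out (RTX 3070 Ti)
print((full_sms - 2) * cores_per_sm)  # 5,888 cores: RTX 3070, one TPC disabled

bus_width_bits = 256
assumed_gbps = 19                     # placeholder; actual speed is unknown
print(bus_width_bits / 8 * assumed_gbps, "GB/s")  # 608.0 GB/s at 19 Gbps
```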