News Posts matching #5 nm

Rapidus Installs Japan's First ASML NXE:3800E EUV Lithography Machine

Rapidus Corporation, a manufacturer of advanced logic semiconductors, today announced the delivery and installation of ASML's EUV lithography equipment at its Innovative Integration for Manufacturing (IIM-1) foundry, an advanced semiconductor development and manufacturing fab currently under construction in Chitose, Hokkaido. To commemorate the installation, a ceremony was held at Portom Hall in the New Chitose Airport.

This is a significant milestone for Japan's semiconductor industry, marking the first time that an EUV lithography tool will be used for mass production in the country. In addition to the EUV lithography machinery, Rapidus will install complementary advanced semiconductor manufacturing equipment, as well as fully automated material handling systems, in its IIM-1 foundry to optimize 2 nm-generation gate-all-around (GAA) semiconductor manufacturing.

TSMC and NVIDIA Reportedly in Talks to Bring "Blackwell" GPU Production to Arizona

TSMC is reportedly negotiating with NVIDIA to manufacture advanced "Blackwell" GPUs at its Arizona facility. First reported by Reuters, this partnership could mark another major shift of AI chip production toward US soil. The discussion centers on TSMC's Fab 21 in Phoenix, Arizona, which specializes in 4 nm and 5 nm chip production. NVIDIA's Blackwell GPUs utilize TSMC's 4NP process technology, making the Arizona facility a technically viable production site. However, the proposed arrangement faces several logistical challenges. A key issue is the absence of advanced packaging facilities in the United States: Amkor plans to offer advanced packaging in Arizona, but it is only scheduled to begin operations in 2027, and TSMC's sophisticated CoWoS packaging technology is currently available only in Taiwan. This means that chips manufactured in Arizona would need to be shipped back to Taiwan for final assembly, potentially increasing production costs.

While alternative solutions exist, such as redesigning the chips to use Intel's packaging technology or focusing on gaming GPU production in Arizona, these options present their own complications. Intel's packaging methods would likely increase costs, and the current absence of graphics card manufacturing infrastructure in the US makes domestic gaming GPU production less practical. Both TSMC and NVIDIA have declined to comment on the ongoing negotiations. Interestingly, TSMC's Arizona facility has already attracted a few more US firms pursuing domestic manufacturing: Apple is rumored to be manufacturing its A16 Bionic chip there, and AMD is bringing high-performance designs, likely either EPYC or Instinct MI chips.

AMD Ryzen AI MAX 300 "Strix Halo" iGPU to Feature Radeon 8000S Branding

AMD Ryzen AI MAX 300-series processors, codenamed "Strix Halo," have been in the news for close to a year now. These mobile processors combine "Zen 5" CPU cores with an oversized iGPU that offers performance rivaling discrete GPUs, the idea being to rival the Apple M3 Pro and M3 Max processors powering MacBook Pros. The "Strix Halo" mobile processor is an MCM that combines one or two "Zen 5" CCDs (the same ones featured on "Granite Ridge" desktop processors and "Turin" server processors) with a large SoC die. This die is built either on the 5 nm (TSMC N5) or 4 nm (TSMC N4P) node. It packs a large iGPU based on the RDNA 3.5 graphics architecture with 40 compute units (CU), and a 50 TOPS-class XDNA 2 NPU carried over from "Strix Point." The memory interface is a 256-bit wide LPDDR5X-8000, providing sufficient memory bandwidth for the up to 16 "Zen 5" CPU cores, the 50 TOPS NPU, and the large 40 CU iGPU.

Golden Pig Upgrade leaked what looks like a company slide from a notebook OEM, which reveals the iGPU model names for the various Ryzen AI MAX 300-series SKUs. Leading the pack is the Ryzen AI MAX+ 395, a maxed-out SKU with a 16-core/32-thread "Zen 5" CPU that uses two CCDs. All 16 cores are full-sized "Zen 5." The CPU has 64 MB of L3 cache (32 MB per CCD), and each of the 16 cores has 1 MB of dedicated L2 cache. The iGPU is branded Radeon 8060S; it comes with all 40 CU (2,560 stream processors) enabled, along with 80 AI accelerators and 40 Ray accelerators. The Ryzen AI MAX 390 is the next processor SKU, with a 12-core/24-thread "Zen 5" CPU. Like the 395, the 390 is a dual-CCD processor, and all 12 cores are full-sized "Zen 5." There's 64 MB of L3 cache, and 1 MB of L2 cache per core. The Radeon 8060S graphics solution is the same as the one on the Ryzen AI MAX+ 395, with all 40 CU enabled.
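
A couple of the headline figures above are straightforward arithmetic: RDNA-class compute units carry 64 stream processors each, and peak LPDDR5X bandwidth is the per-pin transfer rate multiplied by the bus width. A quick sketch (the 64 SP/CU ratio is the standard RDNA figure; everything else is taken from the paragraphs above):

```python
# Back-of-the-envelope checks for the "Strix Halo" figures quoted above.
compute_units = 40
sp_per_cu = 64                       # standard stream-processor count per RDNA compute unit
print(compute_units * sp_per_cu)     # 2560 stream processors

bus_width_bits = 256                 # 256-bit LPDDR5X interface
transfer_rate_mtps = 8000            # LPDDR5X-8000
bandwidth_gbs = bus_width_bits * transfer_rate_mtps / 8 / 1000
print(f"Peak memory bandwidth: {bandwidth_gbs:.0f} GB/s")   # 256 GB/s
```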

M31 Launches USB4 IP for TSMC 5 nm Process

M31 Technology Corporation, a leading global provider of silicon intellectual property (IP), today announced that its cutting-edge USB4 IP has achieved silicon validation on TSMC's 5 nm (N5) process. The newly validated IP enhances data transfer capabilities for a new wave of mobile and portable devices. The announcement coincides with M31's participation in TSMC's 2024 Open Innovation Platform (OIP) Ecosystem Forum in Taiwan. This milestone underscores the close collaboration between M31 and TSMC, reflecting M31's commitment to advancing high-performance IP solutions by leveraging TSMC's innovative platform to drive next-generation connectivity.

M31's USB4 IP is built on the latest USB4 specification and represents a major leap in the evolution of USB architecture. It supports multi-protocol tunneling, enabling simultaneous transmission of multiple data types—such as USB, DisplayPort, and PCIe—over a single connection. The USB4 IP achieves 40 Gbps data transfer rates, significantly increasing bandwidth compared with previous USB standards. The IP is fully compatible with USB 3.2, USB 2.0, and Thunderbolt 3, ensuring seamless integration with existing and future devices.
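
For perspective, here is a quick comparison of the raw link rates of the USB generations named above; real-world throughput is lower once protocol overhead is accounted for, and on a USB4 link the tunneled USB, DisplayPort, and PCIe traffic all share the same 40 Gbps budget:

```python
# Raw link rates (Gbps) for the USB generations mentioned above.
link_rates_gbps = {
    "USB 2.0 (Hi-Speed)": 0.48,
    "USB 3.2 Gen 2": 10,
    "USB 3.2 Gen 2x2": 20,
    "USB4 / Thunderbolt 3": 40,
}
for name, gbps in link_rates_gbps.items():
    # Dividing by 8 gives an upper-bound GB/s figure before protocol overhead.
    print(f"{name:22s} {gbps:5.2f} Gbps  ~= {gbps / 8:5.2f} GB/s")
```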

Samsung Electronics Announces Results for Third Quarter of 2024, 7 Percent Revenue Increase

Samsung Electronics today reported financial results for the third quarter ended Sept. 30, 2024. The Company posted KRW 79.1 trillion in consolidated revenue, an increase of 7% from the previous quarter, on the back of the launch effects of new smartphone models and increased sales of high-end memory products. Operating profit declined to KRW 9.18 trillion, largely due to one-off costs, including the provision of incentives in the Device Solutions (DS) Division. The strength of the Korean won against the U.S. dollar resulted in a negative impact on company-wide operating profit of about KRW 0.5 trillion compared to the previous quarter.

In the fourth quarter, while memory demand for mobile and PC may encounter softness, growth in AI will keep demand at robust levels. Against this backdrop, the Company will concentrate on driving sales of High Bandwidth Memory (HBM) and high-density products. The Foundry Business aims to increase order volumes by enhancing advanced process technologies. Samsung Display Corporation (SDC) expects demand for flagship products from major customers to continue, while maintaining a conservative outlook on its performance. The Device eXperience (DX) Division will continue to focus on premium products, but sales are expected to decline slightly compared to the previous quarter.

Die-Shots of Intel Core Ultra "Arrow Lake-S" Surface, Thanks to ASUS

As Intel's Core Ultra "Arrow Lake-S" desktop processors near their launch, ASUS China put out a video presentation about its Z890 chipset motherboards ready for these processors, including a technical run-down of Intel's first tile-based desktop processor, complete with detailed die-shots of the various tiles. This is stuff that would require not just de-lidding the processor (removing the integrated heat-spreader), but also clearing up the top layers of the die to reveal the various components underneath.

The whole-chip die-shot gives us a bird's eye view of the four key logic tiles (Compute, Graphics, SoC, and I/O) sitting on top of the Foveros base tile. Our article from earlier this week goes into the die areas of the individual tiles and the base tile. The Compute tile is built on the most advanced foundry node among the four tiles, the 3 nm TSMC N3B. Unlike the older generation "Raptor Lake-S" and "Alder Lake-S," the P-cores and E-core clusters aren't clumped into the two ends of the CPU complex. In "Arrow Lake-S," they follow a staggered layout, with a row of P-cores, followed by a row of E-core clusters, followed by two rows of P-cores, and then another row of E-core clusters, before the final row of P-cores, to achieve the total core-count of 8P+16E. This arrangement reduces concentration of heat when the P-cores are loaded (e.g., when gaming), and ensures each E-core cluster is just one ringbus stop away from a P-core, which should improve thread-migration latencies. The central region of the tile has this ringbus, and 36 MB of L3 cache shared among the P-cores and E-core clusters.

Intel Arrow Lake-S Die Visibly Larger Than Raptor Lake-S, Die-size Estimated

As a quick follow-up to last week's "Arrow Lake-S" de-lidding by Madness727, we now have a line-up of a de-lidded Core Ultra 9 285K "Arrow Lake-S" processor placed next to a Core i9-14900K "Raptor Lake-S" and a Core i9-12900K "Alder Lake-S." The tile-based "Arrow Lake-S" is visibly larger than the two, despite being made on more advanced foundry nodes. Both the 8P+16E "Raptor Lake-S" and 8P+8E "Alder Lake-S" chips are built on the Intel 7 node (10 nm Enhanced SuperFin). The "Raptor Lake-S" monolithic chip comes with a die-area of 257 mm². The "Alder Lake-S" is physically smaller, at 215 mm². What sets the two apart isn't just the two additional E-core clusters on "Raptor Lake-S," but also larger caches—2 MB of L2 per P-core, increased from 1.25 MB/core, and 4 MB per E-core cluster, increased from 2 MB/cluster.
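
Because E-cores are grouped four to a cluster, those per-core and per-cluster figures add up to a large gap in total L2 between the two monolithic chips. A quick tally using only the numbers quoted above:

```python
# Total L2 cache for the two monolithic chips discussed above.
# E-cores are grouped four per cluster: 16 E-cores = 4 clusters, 8 E-cores = 2 clusters.
raptor_lake_l2_mb = 8 * 2.0 + 4 * 4.0    # 8 P-cores x 2 MB + 4 clusters x 4 MB = 32 MB
alder_lake_l2_mb = 8 * 1.25 + 2 * 2.0    # 8 P-cores x 1.25 MB + 2 clusters x 2 MB = 14 MB
print(f"Raptor Lake-S total L2: {raptor_lake_l2_mb:.0f} MB")
print(f"Alder Lake-S total L2:  {alder_lake_l2_mb:.0f} MB")
```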

Thanks to high quality die-shots of the "Arrow Lake-S" by Madness727, we have our first die-area estimations by A Hollow Knight on Twitter. The LGA1851 fiberglass substrate has the same dimensions as the LGA1700 substrate; this ensures the socket retains cooler compatibility. Using geometrical measurements, the base tile of the "Arrow Lake-S" is estimated to be 300.9 mm² in area. The base tile is a more suitable guideline for "die-area," since Intel uses filler tiles to ensure gaps in the arrangement of logic tiles are filled, and the chip aligns with the base tile below. The base tile, built on an Intel 22 nm foundry node, serves as a silicon interposer, facilitating high-density microscopic wiring between the various logic tiles stacked on top, and an interface to the fiberglass substrate below.
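
The estimation method is essentially photogrammetry: since the LGA1851 package shares the 45 × 37.5 mm footprint of LGA1700, the substrate provides a known physical dimension in the photo, from which a millimeter-per-pixel scale can be derived and applied to the base tile. A minimal sketch of the approach; the pixel measurements are hypothetical placeholders, not the values A Hollow Knight actually used:

```python
# Estimate a die's area from a top-down photo, using the package substrate
# as the scale reference. All pixel values below are hypothetical placeholders.
SUBSTRATE_WIDTH_MM = 45.0          # LGA1851/LGA1700 package footprint: 45 x 37.5 mm
substrate_width_px = 1800.0        # measured substrate width in the photo (placeholder)
tile_width_px, tile_height_px = 700.0, 688.0   # measured base-tile size in pixels (placeholder)

mm_per_px = SUBSTRATE_WIDTH_MM / substrate_width_px
tile_area_mm2 = (tile_width_px * mm_per_px) * (tile_height_px * mm_per_px)
print(f"Estimated base-tile area: {tile_area_mm2:.1f} mm^2")   # ~301 mm^2 with these placeholders
```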

TSMC Reports Third Quarter EPS Results, Expects Gross Profit Margin of Up to 59% in Q4 2024

TSMC today announced consolidated revenue of NT$759.69 billion (US$23.50 billion), net income of NT$325.26 billion (US$10.08 billion), and diluted earnings per share of NT$12.54 (US$1.94 per ADR unit) for the third quarter ended September 30, 2024. Year-over-year, third quarter revenue increased 39.0% while net income and diluted EPS both increased 54.2%. Compared to second quarter 2024, third quarter results represented a 12.8% increase in revenue and a 31.2% increase in net income. All figures were prepared in accordance with TIFRS on a consolidated basis.

In US dollars, third quarter revenue was $23.50 billion, which increased 36.0% year-over-year and increased 12.9% from the previous quarter. Gross margin for the quarter was 57.8%, operating margin was 47.5%, and net profit margin was 42.8%. In the third quarter, shipments of 3-nanometer accounted for 20% of total wafer revenue; 5-nanometer accounted for 32%; 7-nanometer accounted for 17%. Advanced technologies, defined as 7-nanometer and more advanced technologies, accounted for 69% of total wafer revenue.
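
The node-mix percentages are internally consistent: the 3 nm, 5 nm, and 7 nm shares add up to exactly the 69% "advanced technologies" figure. A quick sketch of that arithmetic, also translating the shares into rough dollar figures (the shares apply to wafer revenue, which is a subset of the $23.50 billion consolidated total, so the dollar values are upper-bound approximations):

```python
# Q3 2024 wafer revenue mix quoted above, as shares of total wafer revenue.
node_share = {"3 nm": 0.20, "5 nm": 0.32, "7 nm": 0.17}

# "Advanced technologies" is defined as 7 nm and more advanced nodes.
print(f"Advanced technologies share: {sum(node_share.values()):.0%}")   # 69%

# Rough dollar figures, applied to the $23.50 billion consolidated revenue
# (an upper bound, since wafer revenue is a subset of consolidated revenue).
consolidated_busd = 23.50
for node, share in node_share.items():
    print(f"{node}: ~${consolidated_busd * share:.1f} billion")
```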

Marvell Collaborates with Meta for Custom Ethernet Network Interface Controller Solution

Marvell Technology, Inc. (NASDAQ: MRVL), a leader in data infrastructure semiconductor solutions, today announced the development of FBNIC, a custom 5 nm network interface controller (NIC) ASIC in collaboration with Meta to meet the company's infrastructure and use case requirements. The FBNIC board design will also be contributed by Marvell to the Open Compute Project (OCP) community. FBNIC combines a customized network controller designed by Marvell and Meta, a co-designed board, and Meta's ASIC, firmware and software. This custom design delivers innovative capabilities, optimizes performance, increases efficiencies, and reduces the average time needed to resolve potential network and server issues.

"The future of large-scale, data center computing will increasingly revolve around optimizing semiconductors and other components for specific applications and cloud infrastructure architectures," said Raghib Hussain, President of Products and Technologies at Marvell. "It's been exciting to partner with Meta on developing their custom FBNIC on our industry-leading 5 nm accelerated infrastructure silicon platform. We look forward to the OCP community leveraging the board design for future innovations."

AMD to Become Major Customer of TSMC Arizona Facility with High-Performance Designs

After Apple, we just learned that AMD is the next company in line for US-based manufacturing at the TSMC Arizona facility. Industry analyst Tim Culpan reports that TSMC's Fab 21 in Arizona will soon be producing AMD's high-performance computing (HPC) processors, with tape-out and manufacturing expected to commence on TSMC's 5 nm node next year. This move follows the previously reported production of Apple's A16 SoC, which is already in progress at the facility and could see shipments before the end of this year, significantly ahead of the initially projected early 2025 schedule. The production of AMD's HPC chips in Arizona marks a crucial step towards establishing an AI-hardware supply chain operating entirely on American soil, which is expected to expand further with Intel Foundry and Samsung's Texas facility.

Making HPC processors domestically serves as a significant milestone in reducing dependence on overseas semiconductor manufacturing and strengthening the US's position in the global chip industry. Adding to the momentum, TSMC and Amkor recently announced a collaboration on advanced packaging technologies, including Integrated Fan-Out (InFO) and Chip-on-Wafer-on-Substrate (CoWoS), which are vital for high-performance AI chips. However, as Amkor facilities are yet to be built, these chips are going to be shipped back to Taiwan for packaging before being integrated into the final product. Once the Amkor facility is up and running, Arizona will become the birthplace of fully manufactured and packaged silicon chips.

Samsung Starts Mass Production of PM9E1, Industry's Most Powerful PC SSD for AI

Samsung Electronics, the world leader in advanced memory technology, today announced it has begun mass-producing PM9E1, a PCIe 5.0 SSD with the industry's highest performance and largest capacity. Built on its in-house 5-nanometer (nm)-based controller and eighth-generation V-NAND (V8) technology, the PM9E1 will provide powerful performance and enhanced power efficiency, making it an optimal solution for on-device AI PCs. Key attributes in SSDs, including performance, storage capacity, power efficiency and security, have all been improved compared to its predecessor (PM9A1a).

"Our PM9E1 integrated with a 5 nm controller delivers industry-leading power efficiency and utmost performance validated by our key partners," said YongCheol Bae, Executive Vice President of Memory Product Planning at Samsung Electronics. "In the rapidly growing on-device AI era, Samsung's PM9E1 will offer a robust foundation for global customers to effectively plan their AI portfolios."

Canon Delivers FPA-1200NZ2C Nanoimprint Lithography System for Semiconductor Manufacturing to the Texas Institute for Electronics

Canon Inc. announced today that it will ship its most advanced lithography platform, the FPA-1200NZ2C nanoimprint lithography (NIL) system for semiconductor manufacturing, to the Texas Institute for Electronics (TIE), a Texas-based semiconductor consortium. Canon became the first in the world to commercialize a semiconductor manufacturing system that uses NIL technology, which forms circuit patterns in a different method from conventional projection exposure technology, when it released the FPA-1200NZ2C on October 13, 2023.

In contrast to conventional photolithography equipment, which transfers a circuit pattern by projecting it onto the resist-coated wafer, the new product does so by pressing a mask imprinted with the circuit pattern into the resist on the wafer, like a stamp. Because its circuit pattern transfer process does not go through an optical mechanism, fine circuit patterns on the mask can be faithfully reproduced on the wafer. With reduced power consumption and cost, the new system enables patterning with a minimum linewidth of 14 nm, equivalent to the 5 nm node required to produce the most advanced logic semiconductors currently available.

Huawei Starts Shipping "Ascend 910C" AI Accelerator Samples to Large NVIDIA Customers

Huawei has reportedly started shipping its Ascend 910C accelerator—the company's domestic alternative to NVIDIA's H100 accelerator for AI training and inference. As a report from the South China Morning Post notes, Huawei is shipping samples of its accelerator to large NVIDIA customers. This includes companies like Alibaba, Baidu, and Tencent, which have ordered massive amounts of NVIDIA accelerators. Huawei is reportedly on track to deliver 70,000 chips, potentially worth $2 billion. With NVIDIA working on a B20 accelerator SKU that complies with US government export regulations, the Huawei Ascend 910C could, per some analyst expectations, outperform NVIDIA's B20 processor.

If the Ascend 910C receives positive results from Chinese tech giants, it could be the start of Huawei's expansion into data center accelerators, previously hindered by the company's inability to manufacture advanced chips. Now, with foundries like SMIC printing 7 nm designs and possibly 5 nm coming soon, Huawei will leverage this technology to satisfy the domestic demand for more AI processing power. Competing on a global scale, though, remains a challenge. Companies like NVIDIA, AMD, and Intel have access to advanced nodes, which gives their AI accelerators more efficiency and performance.

Microsoft Unveils New Details on Maia 100, Its First Custom AI Chip

Microsoft provided a detailed view of the Maia 100, its first specialized AI chip, at Hot Chips 2024. This new system is designed to work seamlessly from start to finish, with the goal of improving performance and reducing expenses. It includes specially made server boards, unique racks, and a software system focused on increasing the effectiveness and strength of sophisticated AI services, such as Azure OpenAI. Microsoft introduced Maia at Ignite 2023, sharing that it had created its own AI accelerator chip. More information was provided earlier this year at the Build developer event. The Maia 100 is one of the biggest processors made using TSMC's 5 nm technology, designed for handling extensive AI tasks on the Azure platform.

Maia 100 SoC architecture features:
  • A high-speed tensor unit (16xRx16) offers rapid processing for training and inferencing while supporting a wide range of data types, including low-precision data types such as the MX data format, first introduced by Microsoft through the MX Consortium in 2023 (a simplified block-scaling sketch follows this list).
  • The vector processor is a loosely coupled superscalar engine built with custom instruction set architecture (ISA) to support a wide range of data types, including FP32 and BF16.
  • A Direct Memory Access (DMA) engine supports different tensor sharding schemes.
  • Hardware semaphores enable asynchronous programming on the Maia system.
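
The MX (Microscaling) formats referenced in the list above pair small blocks of low-precision elements with a shared scale factor, which keeps tensor math cheap without giving up too much dynamic range. The Python sketch below is a rough, simplified illustration of block scaling under that idea; it is not the official OCP MX specification and not Maia's actual implementation. It quantizes a hypothetical 32-element block to INT8 with one shared power-of-two scale:

```python
import numpy as np

def quantize_block(block: np.ndarray):
    """Simplified block scaling for illustration only (not the official MX spec):
    32 values share a single power-of-two scale, elements stored as INT8."""
    assert block.size == 32
    max_abs = float(np.max(np.abs(block)))
    # Power-of-two shared scale, so hardware could apply it as an exponent adjustment.
    scale = 2.0 ** np.ceil(np.log2(max_abs / 127.0)) if max_abs > 0 else 1.0
    elements = np.clip(np.round(block / scale), -127, 127).astype(np.int8)
    return elements, scale

def dequantize_block(elements: np.ndarray, scale: float) -> np.ndarray:
    return elements.astype(np.float32) * scale

block = np.random.randn(32).astype(np.float32)       # hypothetical activations
q, s = quantize_block(block)
err = np.max(np.abs(dequantize_block(q, s) - block))
print(f"shared scale: {s}, worst-case reconstruction error: {err:.4f}")
```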

Imec Demonstrates Logic and DRAM Structures Using High NA EUV Lithography

Imec, a world-leading research and innovation hub in nanoelectronics and digital technologies, presents patterned structures obtained after exposure with the 0.55NA EUV scanner in the joint ASML-imec High NA EUV Lithography Lab in Veldhoven, the Netherlands. Random logic structures down to 9.5 nm (19 nm pitch), random vias with 30 nm center-to-center distance, 2D features at 22 nm pitch, and a DRAM-specific layout at a 32 nm pitch were printed after single exposure, using materials and baseline processes that were optimized for High NA EUV by imec and its partners in the framework of imec's Advanced Patterning Program. With these results, imec confirms the readiness of the ecosystem to enable single-exposure, high-resolution High NA EUV lithography.

Following the recent opening of the joint ASML-imec High NA EUV Lithography Lab in Veldhoven, the Netherlands, customers now have access to the TWINSCAN EXE:5000 High NA EUV scanner to develop private High NA EUV use cases leveraging their own design rules and layouts.

Ampere Announces 512-Core AmpereOne Aurora CPU for AI Computing

Ampere has announced a significant update to its product roadmap, highlighting the upcoming 512-core AmpereOne Aurora processor. This new chip is specifically designed to address the growing demands of cloud-native AI computing.

The newly announced 512-core AmpereOne Aurora processor integrates AI acceleration and on-chip High Bandwidth Memory (HBM), promising three times the performance per rack compared to current AmpereOne processors. Aurora is designed to handle both AI training and inference workloads, indicating Ampere's commitment to becoming a major player in the AI computing space.

Qualitas Semiconductor Develops First In-House PCIe 6.0 PHY IP

Qualitas Semiconductor Co., Ltd. has developed a new PCIe 6.0 PHY IP, marking a significant advance in computer interconnect technology. This new product, created using an advanced 5 nm process technology, is designed to meet the high-speed data transfer needs of the AI era. Qualitas' PCIe PHY IP, implemented in 5 nm FinFET CMOS technology, consists of a hard-macro PMA and PCS compliant with the PCIe Base 6.0 specification.

The PCIe 6.0 PHY IP can achieve transmission speeds of up to 64 GT/s per lane. When using all 16 lanes, it can transfer data at rates of up to 256 GB/s. These speeds make it well-suited for data centers and self-driving car technologies, where rapid data processing is essential. Qualitas achieved this performance by implementing 100G PAM4 signaling technology. Highlighting the importance of the new IP, Qualitas CEO Dr. Duho Kim signaled the company's intent to continue pushing boundaries in semiconductor technology.
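
Those throughput figures follow directly from the per-lane signaling rate. A small sketch of the raw-link arithmetic (FLIT encoding and FEC overhead shave a little off the usable payload, and the 256 GB/s number counts both directions of a x16 link):

```python
# Raw PCIe 6.0 link arithmetic for the figures quoted above.
per_lane_gtps = 64                 # 64 GT/s per lane; PAM4 carries 2 bits per symbol,
lanes = 16                         # so the underlying symbol rate is 32 GBaud per lane.

unidirectional_gbs = per_lane_gtps * lanes / 8       # 128 GB/s in each direction
bidirectional_gbs = unidirectional_gbs * 2           # 256 GB/s combined, as quoted above
print(f"x16 unidirectional: {unidirectional_gbs:.0f} GB/s")
print(f"x16 bidirectional:  {bidirectional_gbs:.0f} GB/s")
```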

AMD Granite Ridge and Strix Point Zen 5 Die-sizes and Transistor Counts Confirmed

AMD is about to give the new "Zen 5" microarchitecture a near-simultaneous launch across both its client segments—desktop and mobile. The desktop front is held by the Ryzen 9000 "Granite Ridge" Socket AM5 processors, while Ryzen AI 300 "Strix Point" powers the company's crucial effort to capture Microsoft Copilot+ AI PC market share. We recently did a technical deep-dive on the two. HardwareLuxx.de scored two important bits of specs for both processors in its Q&A interaction with AMD—die sizes and transistor counts.

To begin with, "Strix Point" is a monolithic silicon, which is confirmed to be built on the TSMC N4P foundry node (4 nm). This is a slight upgrade over the N4 node that the company built its previous generation "Phoenix" and "Hawk Point" processors on. The "Strix Point" silicon measures 232.5 mm² in area, which is significantly larger than the 178 mm² of "Hawk Point" and "Phoenix." The added die area comes from there being 12 CPU cores instead of 8, 16 iGPU compute units instead of 12, and a larger NPU. There are many other factors, such as the larger 24 MB CPU L3 cache, and the sizes of the "Zen 5" and "Zen 5c" cores themselves.

TSMC to Raise Wafer Prices by 10% in 2025, Customers Seemingly Agree

Taiwanese semiconductor giant TSMC is reportedly planning to increase its wafer prices by up to 10% in 2025, according to a Morgan Stanley note cited by investor Eric Jhonsa. The move comes as demand for cutting-edge processors in smartphones, PCs, AI accelerators, and HPC continues to surge. Industry insiders reveal that TSMC's state-of-the-art 4 nm and 5 nm nodes, used for AI and HPC customers such as AMD, NVIDIA, and Intel, could see up to 10% price hikes. This increase would push the cost of 4 nm-class wafers from $18,000 to approximately $20,000, representing a significant 25% rise since early 2021 for some clients and an 11% rise from the last price hike. Talks about price hikes with major smartphone manufacturers like Apple have proven challenging, but there are indications that modest price increases are being accepted across the industry. Morgan Stanley analysts project a 4% average selling price increase for 3 nm wafers in 2025, which are currently priced at $20,000 or more per wafer.
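
The percentage figures quoted above hang together arithmetically; the early-2021 baseline of roughly $16,000 is implied by the 25% cumulative rise rather than stated outright. A quick check:

```python
# Consistency check of the wafer price figures quoted above.
price_before_hike = 18_000        # 4 nm-class wafer price today (USD)
price_after_hike = 20_000         # approximate price after the reported ~10% hike

rise_from_last_level = (price_after_hike - price_before_hike) / price_before_hike
print(f"Rise from the current price: {rise_from_last_level:.0%}")      # ~11%

implied_2021_price = price_after_hike / 1.25                           # 25% cumulative rise
print(f"Implied early-2021 price: ~${implied_2021_price:,.0f}")        # ~$16,000
```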

Mature nodes like 16 nm are unlikely to see price increases due to sufficient capacity. However, TSMC is signaling potential shortages in leading-edge capacity to encourage customers to secure their allocations. Adding to the industry's challenges, advanced chip-on-wafer-on-substrate (CoWoS) packaging prices are expected to rise by 20% over the next two years, following previous increases in 2022 and 2023. TSMC aims to boost its gross margin to 53-54% by 2025, anticipating that customers will absorb these additional costs. The impact of these price hikes on end-user products remains uncertain. Competing foundries like Intel and Samsung may seize this opportunity to offer more competitive pricing, potentially prompting some chip designers to consider alternative manufacturing options. Additionally, TSMC's customers could reportedly be unable to secure their capacity allocation without "appreciating TSMC's value."

ByteDance and Broadcom to Collaborate on Advanced AI Chip

ByteDance, TikTok's parent company, is reportedly working with American chip designer Broadcom to develop a cutting-edge AI processor. This collaboration could secure a stable supply of high-performance chips for ByteDance, according to Reuters. Sources claim the joint project involves a 5 nm Application-Specific Integrated Circuit (ASIC), designed to comply with U.S. export regulations. TSMC is slated to manufacture the chip, though production is not expected to begin this year.

This partnership marks a significant development in U.S.-China tech relations, as no public announcements of such collaborations on advanced chips have been made since Washington implemented stricter export controls in 2022. For ByteDance, this move could reduce procurement costs and ensure a steady chip supply, crucial for powering its array of popular apps, including TikTok and the ChatGPT-like AI chatbot "Doubao." The company has already invested heavily in AI chips, reportedly spending $2 billion on NVIDIA processors in 2023.

GIGABYTE Intros GeForce RTX 4070 Ti SUPER MAX with Repositioned 12VHPWR Connector

GIGABYTE introduced the GeForce RTX 4070 Ti SUPER WindForce MAX graphics card. It is characterized by an oversized air cooling solution that gives it large dimensions: 33.1 cm length, 5.55 cm thickness (3 slots), and, more importantly, a height of 13.6 cm, which could pose a challenge for those with mid-tower cases, given the tight cable-bending restrictions of the 12VHPWR connector. GIGABYTE found a novel solution to this problem: the power connector points toward the tail end of the card, where the PCB terminates. The PCB is only two-thirds the length of the card, so the power cable can be routed in without any bends.

The WindForce cooling solution features a trio of 100 mm fans that ventilate a large aluminium fin-stack heatsink with nine copper heatpipes, and a direct-touch base. GIGABYTE has given the card a factory overclock of 2655 MHz GPU clocks, compared to 2610 MHz reference, while leaving the memory untouched at 21 Gbps. The card offers dual-BIOS, with the default BIOS enabling these clock speeds, and the Silent BIOS lowering them to reference speeds, while quietening the cooler. Based on the 5 nm AD103 silicon, the RTX 4070 Ti SUPER is endowed with 8,448 CUDA cores, 66 RT cores, 264 Tensor cores, 96 ROPs, and 264 TMUs. The GPU gets 16 GB of 21 Gbps GDDR6X memory across a 256-bit wide memory interface. The company didn't reveal pricing.
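
From the memory specification above, peak memory bandwidth follows from the usual data-rate-times-bus-width calculation. A quick sketch:

```python
# Peak memory bandwidth for the 21 Gbps GDDR6X / 256-bit configuration above.
data_rate_gbps_per_pin = 21
bus_width_bits = 256
bandwidth_gbs = data_rate_gbps_per_pin * bus_width_bits / 8
print(f"Peak memory bandwidth: {bandwidth_gbs:.0f} GB/s")   # 672 GB/s
```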

Marvell Expands Connectivity Portfolio With New PCIe Gen 6 Retimer Product Line

Marvell Technology, a leader in data infrastructure semiconductor solutions, today expanded its connectivity portfolio with the launch of the new Alaska P PCIe retimer product line built to scale data center compute fabrics inside accelerated servers, general-purpose servers, CXL systems and disaggregated infrastructure. The first two products, 8- and 16-lane PCIe Gen 6 retimers, connect AI accelerators, GPUs, CPUs and other components inside server systems.

Artificial intelligence (AI) and machine learning (ML) applications are driving data flows and connections inside server systems at significantly higher bandwidth, necessitating PCIe retimers to meet the required connection distances at faster speeds. PCIe is the industry standard for inside-server-system connections between AI accelerators, GPUs, CPUs and other server components. AI models are doubling their computation requirements every six months and are now the primary driver of the PCIe roadmap, with PCIe Gen 6 becoming a requirement.

AMD Introduces EPYC 4004 Series Socket AM5 Server Processors for SMB and Dedicated Webhosting Markets

AMD today introduced the EPYC 4004 line of server processors in the Socket AM5 package. These chips come with up to 16 "Zen 4" CPU cores, a 2-channel DDR5 memory interface, and 28 lanes of PCIe Gen 5 I/O, and are meant to power small-business servers, as well as cater to the dedicated web-server hosting business that generally attracts client-segment processors. This is the exact segment of the market that Intel addresses with its Xeon E-2400 series processors in the LGA1700 package. The EPYC 4004 series offers a superior support and warranty regime compared to client-segment processors, besides ECC memory support, AMD Secure Processor, and all of the security features you get with Ryzen PRO 7000 series processors for commercial desktops.

AMD's advantage over the Xeon E-2400 series is its CPU core count of up to 16, which lets you fully utilize the 16-core limit of the Windows Server 2022 base license. The EPYC 4004 series is functionally the same processor as the Ryzen 7000 "Raphael," except for its ECC memory support. This chip features up to two 5 nm "Zen 4" CCDs with up to 8 cores each, and an I/O die that puts out two DDR5 memory channels and 28 PCIe Gen 5 lanes. Besides today's processor launch, several server motherboard vendors are announcing Socket AM5 server boards that are rackmount-friendly, and with server-relevant features.

TSMC Unveils Next-Generation HBM4 Base Dies, Built on 12 nm and 5 nm Nodes

During the European Technology Symposium 2024, TSMC announced its readiness to manufacture next-generation HBM4 base dies using both 12 nm and 5 nm nodes. This significant development is expected to substantially improve the performance, power consumption, and logic density of HBM4 memory, catering to the demands of high-performance computing (HPC) and artificial intelligence (AI) applications. The shift from a traditional 1024-bit interface to an ultra-wide 2048-bit interface is a key aspect of the new HBM4 standard. This change will enable the integration of more logic and higher performance while reducing power consumption. TSMC's N12FFC+ and N5 processes will be used to produce these base dies, with the N12FFC+ process offering a cost-effective solution for achieving HBM4 performance and the N5 process providing even more logic and lower power consumption at HBM4 speeds.

The company is collaborating with major HBM memory partners, including Micron, Samsung, and SK Hynix, to integrate advanced nodes for HBM4 full-stack integration. TSMC's base die, fabricated using the N12FFC+ process, will be used to install HBM4 memory stacks on a silicon interposer alongside system-on-chips (SoCs). This setup will enable the creation of 12-Hi (48 GB) and 16-Hi (64 GB) stacks with per-stack bandwidth exceeding 2 TB/s. TSMC's collaboration with EDA partners like Cadence, Synopsys, and Ansys ensures the integrity of HBM4 channel signals, thermal accuracy, and electromagnetic interference (EMI) in the new HBM4 base dies. TSMC is also optimizing CoWoS-L and CoWoS-R for HBM4 integration, meaning that massive high-performance chips are already utilizing this technology and getting ready for volume manufacturing.
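
The capacity and bandwidth figures above can be sanity-checked with simple arithmetic; note that the derived per-pin rate is only a lower bound implied by the ">2 TB/s" per-stack figure, not a number taken from the JEDEC HBM4 specification:

```python
# Sanity checks on the HBM4 stack figures quoted above.
stack_configs = {12: 48, 16: 64}           # DRAM dies per stack -> stack capacity in GB
for dies, capacity_gb in stack_configs.items():
    per_die_gb = capacity_gb / dies
    print(f"{dies}-Hi stack: {per_die_gb:.0f} GB ({per_die_gb * 8:.0f} Gb) per DRAM die")

interface_bits = 2048                      # HBM4 widens the per-stack interface to 2048 bits
stack_bandwidth_tbs = 2.0                  # ">2 TB/s" per stack
per_pin_gbps = stack_bandwidth_tbs * 1000 * 8 / interface_bits
print(f"Implied per-pin data rate: >{per_pin_gbps:.2f} Gb/s")   # ~7.81 Gb/s lower bound
```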