News Posts matching #fabric


Google Shows Production NVIDIA "Blackwell" GB200 NVL System for Cloud

Last week, we got a preview of Microsoft's Azure production-ready NVIDIA "Blackwell" GB200 system, which showed that only a third of the rack that goes into the data center actually holds the compute elements, while the other two-thirds houses the cooling compartment needed to tame the immense heat output of tens of GB200 GPUs. Today, Google is showing off a part of its own infrastructure ahead of the Google Cloud App Dev & Infrastructure Summit, taking place digitally on October 30. Pictured are two racks standing side by side, connecting NVIDIA "Blackwell" GB200 NVL cards with the rest of the Google infrastructure. Google Cloud uses a different data center design in its facilities than Microsoft Azure.

There is one rack with power distribution units, networking switches, and cooling distribution units, all connected to the compute rack, which houses power supplies, GPUs, and CPU servers. Networking equipment is present, and it connects to Google's "global" data center network, which is Google's own data center fabric. We are not sure which fabric connects these racks; for optimal performance, NVIDIA recommends InfiniBand (from its Mellanox acquisition), but given that Google's infrastructure is set up differently, Ethernet switches may be present instead. Interestingly, Google's GB200 rack design differs from Azure's in that it uses additional rack space to distribute coolant to its local heat exchangers, i.e., coolers. We are curious to see whether Google releases more information on this infrastructure, as the company has long been regarded as the infrastructure king for its ability to scale while keeping everything organized.

Astera Labs Introduces New Portfolio of Fabric Switches Purpose-Built for AI Infrastructure at Cloud-Scale

Astera Labs, Inc., a global leader in semiconductor-based connectivity solutions for AI and cloud infrastructure, today announced a new portfolio of fabric switches, including the industry's first PCIe 6 switch, built from the ground up for demanding AI workloads in accelerated computing platforms deployed at cloud-scale. The Scorpio Smart Fabric Switch portfolio is optimized for AI dataflows to deliver maximum predictable performance per watt, high reliability, easy cloud-scale deployment, reduced time-to-market, and lower total cost of ownership.

The Scorpio Smart Fabric Switch portfolio features two application-specific product lines with a multi-generational roadmap:
  • Scorpio P-Series for GPU-to-CPU/NIC/SSD PCIe 6 connectivity - architected to support mixed traffic head-node connectivity across a diverse ecosystem of PCIe hosts and endpoints.
  • Scorpio X-Series for back-end GPU clustering - architected to deliver the highest back-end GPU-to-GPU bandwidth with platform-specific customization.

Fraunhofer IAF Researchers Work on AlYN, an Energy-Efficient Semiconductor Material

Researchers at Fraunhofer IAF have made a significant advance in semiconductor materials by successfully fabricating aluminum yttrium nitride (AlYN) using metal-organic chemical vapor deposition (MOCVD). AlYN, known for its outstanding properties and compatibility with gallium nitride (GaN), shows great potential for energy-efficient, high-frequency electronics. Previously, AlYN could only be deposited via magnetron sputtering, but this new method opens the door to diverse applications. Dr. Stefano Leone from Fraunhofer IAF highlights AlYN's ability to enhance performance while reducing energy consumption, making it vital for future electronics.

In 2023, the team achieved a 600 nm thick AlYN layer with a record 30% yttrium concentration. They have since developed AlYN/GaN heterostructures with high structural quality and promising electrical properties, particularly for high-frequency applications. These structures demonstrate optimal two-dimensional electron gas (2DEG) properties and are highly suitable for high electron mobility transistors (HEMTs).

CPU-Z Screenshot of Alleged Intel Core Ultra 9 285K "Arrow Lake" ES Surfaces, Confirms Intel 4 Process

A CPU-Z screenshot of an alleged Intel Core Ultra 9 285K "Arrow Lake-S" desktop processor engineering sample is doing the rounds on social media, thanks to wxnod. CPU-Z identifies the chip with an Intel Core Ultra case badge in the deep shade of blue associated with the Core Ultra 9 brand extension, which hints at this being the top Core Ultra 9 285K processor model. We know it is the "K" or "KF" SKU from its processor base power reading of 125 W. The chip is built for the upcoming Intel Socket LGA1851. CPU-Z displays the process node as 7 nm, which corresponds to the Intel 4 foundry node.

Intel is using the same Intel 4 foundry node for "Arrow Lake-S" as it does for the compute tile of its "Meteor Lake" processor. Intel 4 offers power efficiency and performance comparable to 4 nm nodes from TSMC, although it is physically a 7 nm node. Likewise, the Intel 3 node is physically a 5 nm node. If you recall, the main logic tile of "Lunar Lake" is being built on the TSMC N3P (3 nm) node. This means that Intel is really gunning for performance per Watt with "Lunar Lake," to get as close to the Apple M3 Pro as possible.

Panmnesia Uses CXL Protocol to Expand GPU Memory with Add-in DRAM Card or Even SSD

South Korean startup Panmnesia has unveiled an interesting solution to address the memory limitations of modern GPUs. The company has developed a low-latency Compute Express Link (CXL) IP that could help expand GPU memory with an external add-in card. Current GPU-accelerated applications in AI and HPC are constrained by the fixed amount of memory built into GPUs. With data sizes growing by 3x yearly, GPU networks must keep getting larger just to fit the application in local memory, which benefits latency and token generation. Panmnesia's proposed approach leverages the CXL protocol to expand GPU memory capacity using PCIe-connected DRAM or even SSDs. The company has overcome significant technical hurdles, including the absence of CXL logic fabric in GPUs and the limitations of existing unified virtual memory (UVM) systems.

At the heart of Panmnesia's solution is a CXL 3.1-compliant root complex with multiple root ports and a host bridge featuring a host-managed device memory (HDM) decoder. This sophisticated system effectively tricks the GPU's memory subsystem into treating PCIe-connected memory as native system memory. Extensive testing has demonstrated impressive results: Panmnesia's CXL solution, CXL-Opt, achieved double-digit-nanosecond round-trip latency, significantly outperforming both UVM and earlier CXL prototypes. In GPU kernel execution tests, CXL-Opt showed execution times up to 3.22 times faster than UVM. Older CXL memory extenders recorded around 250 nanoseconds of round-trip latency, whereas CXL-Opt can potentially achieve less than 80 nanoseconds. The usual problem with CXL is that memory pools add latency and degrade performance, while the extenders also add to the cost model. Still, Panmnesia's CXL-Opt could find a use case, and we are waiting to see whether anyone adopts it in their infrastructure.
Panmnesia has also published benchmarks of the solution, as well as a diagram of the CXL-Opt architecture.

Discover Fractal Mood, A New Streamlined PC Case

Presenting the release of the Fractal Mood, a PC case designed to streamline the gaming station. It is a low-volume case in a space-saving pillar format, which minimizes the footprint of the PC on the desk. Its outer body features breathable fabric in a clean shape and color palette to harmonize with the home.

To reach the interior, users can pop out the back panel and simply slide off the fabric-covered enclosure for full access to their build. Inside, Mood can house a graphics card up to 325 mm in length, which is easily connected through the included PCIe 4.0 riser cable.

PCI-SIG Announces CopprLink Cable Specifications for PCIe 5.0 and 6.0 Technology

PCI-SIG, the organization responsible for the widely adopted PCI Express (PCIe) standard, today announced the release of the CopprLink Internal and External Cable specifications. The CopprLink Cable specifications provide signaling at 32.0 and 64.0 GT/s and leverage well-established industry standard connector form factors maintained by SNIA.

"The CopprLink Cable specifications integrate PCIe cabling seamlessly with the PCIe electrical base specification, providing longer channel reach and topological flexibility," said Al Yanes, PCI-SIG President and Chairperson. "The CopprLink Cables are intended to evolve with the same connector form factors, scale for future PCIe technology generations and meet the demands of emerging applications. The Electrical Work Group has already begun pathfinding work on CopprLink Cables for PCIe 7.0 technology at 128.0 GT/s, showcasing PCI-SIG's commitment to the CopprLink Cable specifications."

TSMC to Introduce Location Premium for Overseas Chip Production

As part of its Q1 earnings call, TSMC, one of the largest semiconductor manufacturers, has unveiled a strategic move to charge a premium for chips manufactured at its newly established overseas fabrication plants. During the call, TSMC's CEO, C.C. Wei, announced that the company will impose higher pricing for chips produced outside Taiwan to offset the higher operational costs associated with these international locations. This move aims to maintain TSMC's target gross margin of 53% amid rising expenses such as inflation and elevated electricity costs. The decision comes as TSMC expands its global footprint with new facilities in the United States, Germany, and Japan (JASM) to meet the increasing worldwide demand for semiconductor chips. The company's new US-based Arizona facility, known as Fab 21, has faced delays due to equipment installation issues and labor negotiations.

Chips produced at this site, utilizing TSMC's advanced N5 and N4 nodes, could cost between 20% and 30% more than those manufactured in Taiwan. TSMC's approach to managing cost disparities across different geographic locations involves strategic pricing, securing government support, and leveraging its manufacturing technology leadership. This reflects the company's commitment to maintaining its competitive edge while navigating the complexities of global semiconductor manufacturing in today's fragmented market. The location premium is expected to impact American semiconductor designers, who may need to pass these costs on to specific market segments, particularly those with lower price sensitivity, such as government-related projects. Despite these challenges, TSMC's overseas expansion underscores its adaptive strategies in the face of global economic pressures and industry demands, ensuring its continued position as a leading player in the semiconductor industry.

Dell Expands Generative AI Solutions Portfolio, Selects NVIDIA Blackwell GPUs

Dell Technologies is strengthening its collaboration with NVIDIA to help enterprises adopt AI technologies. With the expanded Dell Generative AI Solutions portfolio, including the new Dell AI Factory with NVIDIA, organizations can accelerate integration of their data, AI tools, and on-premises infrastructure to maximize their generative AI (GenAI) investments. "Our enterprise customers are looking for an easy way to implement AI solutions—that is exactly what Dell Technologies and NVIDIA are delivering," said Michael Dell, founder and CEO, Dell Technologies. "Through our combined efforts, organizations can seamlessly integrate data with their own use cases and streamline the development of customized GenAI models."

"AI factories are central to creating intelligence on an industrial scale," said Jensen Huang, founder and CEO, NVIDIA. "Together, NVIDIA and Dell are helping enterprises create AI factories to turn their proprietary data into powerful insights."

TSMC Reportedly Investing $16 Billion into New CoWoS Facilities

TSMC is experiencing unprecedented demand from AI chip customers—unnamed parties have (fancifully) requested the construction of entirely new fabrication facilities. Taiwan's leading semiconductor contract manufacturer seems to be concentrating on "sensible" expansions, mainly in the area of CoWoS packaging output—according to an Economic Daily report, company leadership and local government were negotiating over the construction of four new advanced packaging plants. Insiders propose that plans have been revised—an investment in excess of NT$500 billion ($16 billion) will enable the founding of six new CoWoS-focused facilities. TSMC is expected to make an official announcement next month—industry moles reckon that construction work will start in April. Two (of the six total) advanced packaging plants could become fully operational before the conclusion of 2024.

Lately, TSMC has initiated an ambitious recruitment drive—targeting around 6,000 new workers. A touring recruitment team is tasked with attracting "talents with high enthusiasm for semiconductors." The majority of new recruits are likely heading to new or expanded Taiwan-based facilities. The Economic Daily report proposes that Chiayi City's technological hub will play host to TSMC's new CoWoS packaging plants. A DigiTimes Asia news piece (from January) posited that TSMC leadership anticipates CoWoS output reaching 44,000 units by the end of 2024. This predicted tally could grow, thanks to the (rumored) activation of additional factories. CoWoS packaging is considered a vital aspect of AI accelerators—insiders believe that TSMC's latest investment will boost production of NVIDIA H100 GPUs. The combined output of six new CoWoS plants will also assist greatly in the creation of next-gen B100 chips.

SK Hynix To Invest $1 Billion into Advanced Chip Packaging Facilities

Lee Kang-Wook, Vice President of Research and Development at SK Hynix, has discussed the increased importance of advanced chip packaging with Bloomberg News. In an interview with the media company's business section, Lee referred to a tradition of prioritizing the design and fabrication of chips: "the first 50 years of the semiconductor industry has been about the front-end." He believes that the latter half of production processes will take precedence in the future: "...but the next 50 years is going to be all about the back-end." He outlined a "more than $1 billion" investment into South Korean facilities—his department is hoping to "improve the final steps" of chip manufacturing.

SK Hynix's Head of Packaging Development pioneered a novel method of packaging the third generation of high bandwidth memory (HBM2E)—that innovation secured NVIDIA as a high-profile, long-term customer. Demand for Team Green's AI GPUs has boosted the significance of HBM technologies—Micron and Samsung are attempting to play catch-up with new designs. South Korea's leading memory supplier is hoping to stay ahead in the next-gen HBM contest—supposedly, 12-layer fifth-generation samples have been submitted to NVIDIA for approval. SK Hynix's Vice President recently revealed that HBM production volumes for 2024 have sold out—company leadership is currently considering the next steps for market dominance in 2025. According to the R&D chief, the majority of the firm's newly announced $1 billion budget will be spent on the advancement of MR-MUF and TSV technologies.

Marvell Announces Industry's First 2 nm Platform for Accelerated Infrastructure Silicon

Marvell Technology, Inc., a leader in data infrastructure semiconductor solutions, is extending its collaboration with TSMC to develop the industry's first technology platform to produce 2 nm semiconductors optimized for accelerated infrastructure.

Behind the Marvell 2 nm platform is the company's industry-leading IP portfolio that covers the full spectrum of infrastructure requirements, including high-speed long-reach SerDes at speeds beyond 200 Gbps, processor subsystems, encryption engines, system-on-chip fabrics, chip-to-chip interconnects, and a variety of high-bandwidth physical layer interfaces for compute, memory, networking and storage architectures. These technologies will serve as the foundation for producing cloud-optimized custom compute accelerators, Ethernet switches, optical and copper interconnect digital signal processors, and other devices for powering AI clusters, cloud data centers and other accelerated infrastructure.

Loongson 3A6000 CPU Reportedly Matches AMD Zen 4 and Intel Raptor Lake IPC

China's homegrown Loongson 3A6000 CPU shows promise but still needs to catch up with AMD's and Intel's latest offerings in real-world performance. According to benchmarks by Chinese tech reviewer Geekerwan, the 3A6000 has instructions per clock (IPC) on par with AMD's Zen 4 architecture and Intel's Raptor Lake. Using the SPEC CPU 2017 processor benchmark, Geekerwan clocked all the CPUs at 2.5 GHz to compare raw benchmark results against Zen 4 and Raptor Lake (Raptor Cove) processors. The Loongson 3A6000 seemingly matches the latest AMD and Intel designs in integer results, with integer IPC measured at 4.8, while Zen 4 and Raptor Cove score 5.0 and 4.9, respectively. Floating-point performance still lags considerably, though. This demonstrates that Loongson's CPU design is catching up to global leaders but still needs further development, especially in floating-point arithmetic.

However, the 3A6000 is held back by low clock speeds and limited core counts. With a maximum boost speed of just 2.5 GHz across four CPU cores, the 3A6000 cannot compete with flagship chips like AMD's 16-core Ryzen 9 7950X running at 5.7 GHz. While the 3A6000's IPC is impressive, its raw computing power is a fraction of that of leading x86 CPUs. Loongson must improve its manufacturing process technology to increase clock speeds, core counts, and cache sizes. The 3A6000's strengths highlight Loongson's ambitions: an in-house LoongArch ISA design fabricated on 12 nm achieves IPC competitive with state-of-the-art x86 chips built on more advanced TSMC 5 nm and Intel 7 nm nodes. This shows the potential behind Loongson's engineering. Reports suggest that next-generation Loongson 3A7000 CPUs will use SMIC 7 nm, allowing higher clocks and more cores to better harness the architecture's potential. We expect the next generation to set a new bar for China's homegrown CPU performance.
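As a rough illustration of why competitive IPC alone does not translate into competitive raw performance, the following minimal Python sketch multiplies the IPC, boost clock, and core-count figures quoted above into a naive aggregate-throughput estimate. It deliberately ignores SMT, memory bandwidth, caches, ISA differences, and workload scaling, so treat it as an illustration rather than a benchmark.

# Back-of-envelope throughput comparison using only the figures quoted above.
def naive_throughput(ipc, clock_ghz, cores):
    """Very rough aggregate instructions-per-second estimate."""
    return ipc * clock_ghz * 1e9 * cores

loongson_3a6000 = naive_throughput(ipc=4.8, clock_ghz=2.5, cores=4)   # SPEC-derived integer IPC
ryzen_9_7950x = naive_throughput(ipc=5.0, clock_ghz=5.7, cores=16)

print(f"3A6000: ~{loongson_3a6000 / 1e9:.0f} GIPS")
print(f"7950X:  ~{ryzen_9_7950x / 1e9:.0f} GIPS")
print(f"7950X advantage: ~{ryzen_9_7950x / loongson_3a6000:.1f}x")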

Blacklyte Shows Athena Gaming Chair and Atlas Gaming Desk at CES 2024

During CES 2024, Blacklyte, a new Canadian brand of gaming chairs, accessories, and desks, unveiled its inaugural Athena gaming chair, which grants the user complete mastery over each meticulous stitch. The Athena chair features an exclusive pattern of intertwined threads inspired by fashion-forward footwear. With plusher and more breathable Skinsense fabric, the chair keeps the user cool throughout the day. It also provides exceptional durability and smooth gliding on a reinforced base. The intricate stitching patterns, high-end fabric, and sturdy construction position this as a premium, fashionable gaming chair built for comfort during extended periods of use. It also features adjustable armrests and adjustable tilt positions. The Athena chair comes with an MSRP of $499.99.

Next, the new company also displayed its Atlas gaming desk, which optimizes space utilization while providing flexibility and convenience. Its design includes an earphone hook and a wire-wrangler tray to neatly organize accessories without infringing on the desktop area. The desk also features electric height adjustment from 600 mm to 1,250 mm for quick transitions between sitting and standing. Power runs through a single concealed column in the leg, and the built-in electrical socket can charge a few devices simultaneously. Advertised for its ability to "carry the world" of your electronics on its shoulders, Atlas is a sustainable desk built for adaptability and tidy cable management. The Atlas desk should be available soon at a yet-unknown price.

Ethernet Switch Chips are Now Infected with AI: Broadcom Announces Trident 5-X12

Artificial intelligence has been a hot topic this year, and everything from CPUs to GPUs and NPUs is now marketed as an AI processor. However, it was only a matter of time before AI processing elements made their way into networking chips as well. Today, Broadcom announced its new Ethernet switching silicon, called Trident 5-X12. The Trident 5-X12 delivers 16 Tb/s of bandwidth, double that of the previous Trident generation, while adding support for fast 800G ports for connection to Tomahawk 5 spine switch chips. The 5-X12 is software-upgradable and optimized for dense 1RU top-of-rack designs, enabling configurations with up to 48x200G downstream server ports and 8x800G upstream fabric ports. The 800G support is implemented using 100G-PAM4 SerDes, which enables up to 4 m DAC and linear optics.

However, this is not just a plain switch chip. Broadcom has added AI processing elements in an inference engine called NetGNT (Networking General-purpose Neural-network Traffic-analyzer), which can detect common traffic patterns and optimize data movement across the chip. The company cites AI/ML workloads as an example: there, NetGNT performs intelligent traffic analysis to avoid network congestion. For instance, it can detect so-called "incast" patterns in real time, where many flows converge simultaneously on the same port. By recognizing the start of an incast early, NetGNT can invoke hardware-based congestion control techniques to prevent performance degradation without adding latency.
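To make the incast idea concrete, here is a minimal Python sketch of the general detection concept (counting distinct flows that converge on one egress port within a short time window), using purely illustrative threshold and window values. It is a conceptual sketch only and does not represent how Broadcom's NetGNT is actually implemented.

# Conceptual incast detector (illustrative only; not Broadcom's NetGNT design).
from collections import defaultdict, deque
import time

WINDOW_S = 0.001         # 1 ms observation window (assumed value)
INCAST_THRESHOLD = 32    # distinct flows on one port that raise a flag (assumed value)

_recent = defaultdict(deque)  # egress_port -> deque of (timestamp, flow_id)

def observe_packet(egress_port, flow_id, now=None):
    """Record a packet arrival; return True if an incast pattern is suspected."""
    now = time.monotonic() if now is None else now
    window = _recent[egress_port]
    window.append((now, flow_id))
    # Expire observations that fall outside the time window.
    while window and now - window[0][0] > WINDOW_S:
        window.popleft()
    distinct_flows = {fid for _, fid in window}
    return len(distinct_flows) >= INCAST_THRESHOLD

# A real switch would react in hardware (pacing, ECN marking, buffer management)
# rather than returning a flag to software.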

Ayar Labs Showcases 4 Tbps Optically-enabled Intel FPGA at Supercomputing 2023

Ayar Labs, a leader in silicon photonics for chip-to-chip connectivity, will showcase its in-package optical I/O solution integrated with Intel's industry-leading Agilex Field-Programmable Gate Array (FPGA) technology. Demonstrating 5x current industry bandwidth at 5x lower power and 20x lower latency, the optical FPGA - packaged in a common PCIe card form factor - has the potential to transform the high performance computing (HPC) landscape for data-intensive workloads such as generative artificial intelligence (AI) and machine learning, and to support novel disaggregated compute and memory architectures.

"We're on the cusp of a new era in high performance computing as optical I/O becomes a 'must have' building block for meeting the exponentially growing, data-intensive demands of emerging technologies like generative AI," said Charles Wuischpard, CEO of Ayar Labs. "Showcasing the integration of Ayar Labs' silicon photonics and Intel's cutting-edge FPGA technology at Supercomputing is a concrete demonstration that optical I/O has the maturity and manufacturability needed to meet these critical demands."

Ventana Introduces Veyron V2 - World's Highest Performance Data Center-Class RISC-V Processor and Platform

Ventana Micro Systems Inc. today announced the second generation of its Veyron family of RISC-V processors. The new Veyron V2 is the highest performance RISC-V processor available today and is offered in the form of chiplets and IP. Ventana Founder and CEO Balaji Baktha will share the details of Veyron V2 today during his keynote speech at the RISC-V Summit North America 2023 in Santa Clara, California.

"Veyron V2 represents a leap forward in our quest to lead the industry in high-performance RISC-V CPUs that are ready for rapid customer adoption," said Balaji Baktha, Founder and CEO of Ventana. "It substantiates our commitment to customer innovation, workload acceleration, and overall optimization to achieve best in class performance per Watt per dollar. V2 enhancements unleash innovation across data center, automotive, 5G, AI, and client applications."

Nacon Unveils the Rig 900 Max HX Wireless Gaming Headset

NACON, a leader in premium gaming accessories and parent of the RIG audio brand, proudly unveils the RIG 900 MAX HX. Licensed for Xbox, the 900 MAX marks the pinnacle in wireless game audio, merging capabilities such as a personalized Dolby Atmos headphone experience, dual wireless connectivity with Bluetooth audio, and a seamless charging base station for the definitive gaming experience.

The 900 MAX is the first gaming headset to offer the ability to personalize your Dolby Atmos headphone listening experience when gaming in Dolby Atmos on Xbox Series X|S, Xbox One, or PC. Traditional audio experiences over headphones are designed using generic head and body types, without considering the natural diversity of the human body. Leveraging the companion Dolby Personalization mobile app, the 900 MAX unlocks an exceptional Dolby Atmos experience by providing a custom personal profile tailored to you.

Xbox Series X Starfield and Camo Console Wraps Available to Pre-order

We're excited to announce that Xbox Series X Console Wraps are launching this year, and pre-orders have already started. We know gamers want to be able to customize their consoles and show support for their favorite games, and we are delivering an option that's more affordable and more sustainable than purchasing a special edition or limited edition console. With the launch of Series X Console Wraps, you can customize the console you already have. There are three striking designs to choose from: complete your Starfield setup with a Starfield design that pairs perfectly with the recently released Starfield Controller and Headset, or pick one of two camo colors, Arctic Camo and Mineral Camo.

The wraps were designed specifically for Series X and have a custom, precision fit. Every detail was taken into consideration to ensure your console's performance is preserved: vents are all left clear, and small feet were added to the bottom of the wraps to ensure air can flow freely through the console. Made with solid core panels layered with high-tech fabric finishes, the wraps fold around your console and are secured with a hook-and-loop enclosure. The interior of the wraps is printed with silicone designs that keep the wrap in place.

Leaked Email Suggests AMD Instinct MI450 Accelerators to Feature XSwitch Interconnect Fabric

AMD is reported to be forming plans for its Instinct MI400 accelerator series, according to a leaked internal email. This information was shared by a hardware tipster (HXL/@9550Pro) on Twitter, but their post was deleted at some point today. Wccftech was quick enough to note down the details, and its report suggests that AMD is already making plans for an APU range set to succeed the unreleased Instinct MI300 lineup (expected later in 2023). Instinct MI400 accelerators are touted to drive next-generation data center and cloud platforms.

The leaked email contained information about three upcoming products: Weisshorn, MI450, and XSwitch. Kepler's recent tweet posits that Weisshorn is AMD's in-house moniker for Zen 6 "Morpheus" architecture-based Venice CPUs - these are alleged to form part of an upcoming EPYC lineup (slated for 2025 or 2026). Hardware experts reckon that AMD will introduce a new interconnect fabric with the MI400 series - "XSwitch" is speculated to be the company's main technological answer to NVIDIA's NVLink.