News Posts matching #Broadcom


ASUSTOR Launches Enterprise-Grade SAS JBOD Xpanstor 12R and AS-SAS8e SAS HBA

ASUSTOR Inc. today announced the enterprise-grade Xpanstor 12R JBOD expansion unit and the AS-SAS8e SAS HBA. ASUSTOR NAS devices equipped with the AS-SAS8e can attach the Xpanstor 12R to provide convenient and robust file sharing, remote access, and backup functionality. When more storage capacity is required, pairing a NAS with the Xpanstor 12R allows additional hard drives to be connected over a SAS link, creating a robust enterprise storage environment for managing large volumes of data.

Effortless Configuration
Storage Manager in ADM enables easy management and configuration between an ASUSTOR NAS and an attached ASUSTOR expansion unit. Storage Manager also supports the creation of multiple volume types and snapshot features, further enhancing data security and reliability.

Intel Faces Potential Breakup as TSMC and Broadcom Explore Acquisition

According to sources close to the Wall Street Journal, Intel is weighing preliminary acquisition offers that could split the company into two parts: product and foundry. TSMC and Broadcom are independently exploring deals that would divide Intel's chip design and manufacturing operations. Broadcom has initiated informal discussions regarding Intel's chip design and marketing divisions, while TSMC is considering assembling an investor consortium to acquire Intel's facilities. The latter is improbable, as Intel's fabs are strategically among the most critical pieces of the US semiconductor supply chain. Intel manufactures custom chips for the US Department of Defense, making foreign ownership of its fabs unacceptable. The acquisition news comes as Intel grapples with manufacturing setbacks, including a $13.4 billion loss in its foundry segment during 2024 and a significant erosion of market share in the AI processor market.

The acquisition talks face substantial regulatory hurdles, particularly regarding national security concerns. The US government has signaled resistance to foreign ownership of Intel's domestic manufacturing capabilities, which are deemed strategically vital to American technological sovereignty. This could particularly impact TSMC's bid for Intel's plants despite the Taiwanese company's position as the world's leading contract chipmaker. Intel's vulnerability to acquisition follows a series of strategic missteps under former leadership, including delayed manufacturing innovations and an increasing reliance on government subsidies for facility expansion. The company's share price has declined 60% from its 2021 highs amid these challenges, attracting potential buyers despite the complexity of any potential deal structure. Successful execution would require navigating both regulatory approval and the practical difficulties of disaggregating Intel's deeply integrated design and manufacturing operations.

Report Suggests OpenAI Finalizing Proprietary GPU Design

Going back a year, we started hearing about an OpenAI proprietary AI chip project—this (allegedly) highly ambitious endeavor included grand plans for a dedicated fabrication network. TSMC was reportedly in the equation, but indirectly laughed at the AI research organization's ardent requests. Fast-forward to the present day; OpenAI appears to be actively pursuing a proprietary GPU design through traditional means. A Reuters exclusive report points to 2025 being an important year for the company's aforementioned "in-house" AI chip—the publication believes that OpenAI's debut silicon design has reached the finalization stage. Insiders have divulged that the project is only months away from being submitted to TSMC for "taping out." The foundry's advanced 3-nanometer process technology is reported to be on the cards. A Reuters source reckons that the unnamed chip features: "a commonly used systolic array architecture with high-bandwidth memory (HBM)...and extensive networking capabilities."
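The "systolic array" dataflow the Reuters source describes can be made concrete with a toy model. Below is a minimal Python sketch in which operands stream through a grid of multiply-accumulate units one step per cycle; the array dimensions and data are illustrative assumptions, not details from the report.

```python
# Toy model of a systolic-array matrix multiply: at "cycle" t, the
# processing element at (i, j) consumes A[i][t] and B[t][j] and adds
# their product to a local accumulator, mimicking how operands flow
# through a real systolic array in AI accelerators.
def systolic_matmul(A, B):
    n, k, m = len(A), len(B), len(B[0])
    C = [[0] * m for _ in range(n)]  # one accumulator per PE
    for t in range(k):               # one operand wavefront per cycle
        for i in range(n):
            for j in range(m):
                C[i][j] += A[i][t] * B[t][j]
    return C

print(systolic_matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# [[19, 22], [43, 50]]
```

In hardware the loops over i and j execute in parallel across the grid and only the cycle loop is sequential, which is why such designs pair naturally with HBM: the memory must keep the whole array fed with operand tiles every cycle.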

Broadcom is reportedly assisting with the development of OpenAI's in-house design—we heard about rumored negotiations taking place last summer. Jim Keller's tempting offer—of creating an AI chip for less than $1 trillion—was ignored early last year; OpenAI has instead assembled its own internal team of industry veterans. The October 2024 news cycle posited that former Google TPU engineers were drafted in as team leaders, with a targeted mass production window scheduled for 2026. The latest Reuters article reiterates this projected timeframe, albeit dependent on the initial tape-out going "smoothly." OpenAI's chip department has grown to around forty individuals in recent months, according to industry moles—a small number relative to the headcounts at "Google or Amazon's AI chip program."

Broadcom Delivers Quantum Resistant Network Encryption for Real-time Ransomware Detection

Broadcom Inc. today announced an industry-first—the new, innovative Emulex Secure Fibre Channel Host Bus Adapters (HBAs)—a cost-effective, easy-to-manage solution that encrypts all data as it moves between servers and storage.

Encrypting mission-critical data is no longer a nice-to-have, but a must-have. The cost of ransomware attacks continues to rise, with attacks in 2024 costing $5.37 million on average per attack. Emerging generative AI and quantum computers magnify the risk if data is not encrypted at all points in the data center, including the network.

Trump Administration Plans to Impose 25-100% Tariffs on Taiwan-Sourced Chips, Including TSMC

The United States, currently led by the Trump administration, could be preparing a surprise package to its close silicon ally—Taiwan. During a House GOP issues conference in Florida, US President Donald Trump announced that he would impose 25% to 100% tariffs on Taiwan-made chips, including the world's leading silicon manufacturer, TSMC. Trump addressed the conference, saying, "In the very near future, we are going to be placing tariffs on foreign production of computer chips, semiconductors, and pharmaceuticals to return production of these essential goods to the United States. They left us and went to Taiwan; we want them to come back. We do not want to give them billions of dollars like this ridiculous program that Biden has given everybody billions of dollars. They already have billions of dollars. […] They did not need money. They needed an incentive. And the incentive is going to be they [do not want to] pay a 25%, 50% or even a 100% tax."

The issue for TSMC is its heavy reliance on US companies for revenue. The majority of its cutting-edge silicon goes to only a handful of companies, including Apple, NVIDIA, Qualcomm, and Broadcom. Tariffs would break the supply chain economics of the semiconductor industry: the US is TSMC's most significant export market, and US companies worth trillions of dollars in market capitalization rely on Taiwanese silicon. As a result, TSMC would most likely raise its wafer prices, with the increases trickling down into higher product prices from US companies. TSMC plans to bring its advanced manufacturing to American soil, but given that these tariffs might break its current economic model, that may need to happen sooner. The Taiwan-based silicon giant had planned to keep its US facilities a generation or two behind in advanced manufacturing, while its domestic facilities produce the newest nodes. If Trump goes through with the tariffs, TSMC could make additional changes to its US-based manufacturing plans.

Solidigm Extends Agreement with Broadcom on High-Capacity SSD Controllers for AI

Solidigm, a leading provider of innovative NAND flash memory solutions, today announced a multi-year extension of its agreement with Broadcom Inc. on the use of high-capacity solid-state drive (SSD) controllers to support artificial intelligence (AI) and data-intensive workloads. Solidigm is the leading provider of high-capacity storage for AI, and Broadcom's custom controllers have served as a critical component of Solidigm SSDs for more than a decade. With more than 120 million units of Solidigm SSDs shipped featuring Broadcom controllers, the partnership between the two companies has continued through key industry SSD milestones including Serial ATA (SATA), Serial-Attached SCSI (SAS) and Non-Volatile Memory Express (NVMe).

The agreement also includes collaboration on Solidigm's recently announced 122 TB (terabyte) Solidigm D5-P5336 data center SSD, the world's highest capacity PCIe SSD that delivers industry-leading storage efficiency from the core data center to the edge. "With our new 122 TB SSD, Solidigm further extends our high-capacity QLC (quad-level cell) leadership from 8 to 122 TB drives that all share the same controller from Broadcom, making our drives easier for customers to qualify," said Solidigm Co-CEO Kevin Noh. "Our relationship with Broadcom is pivotal to Solidigm as we collectively work to help our customers achieve efficiency benefits in the buildout of AI infrastructure."

New Raspberry Pi 5 With 16 GB Goes On Sale At $120

We first announced Raspberry Pi 5 back in the autumn of 2023, with just two choices of memory density: 4 GB and 8 GB. Last summer, we released the 2 GB variant, aimed at cost-sensitive applications. And today we're launching its bigger sibling, the 16 GB variant, priced at $120.

Why 16 GB, and why now?
We're continually surprised by the uses that people find for our hardware. Many of these fit into 8 GB (or even 2 GB) of SDRAM, but the threefold step up in performance between Raspberry Pi 4 and Raspberry Pi 5 opens up use cases like large language models and computational fluid dynamics, which benefit from having more memory per core. And while Raspberry Pi OS has been tuned to have low base memory requirements, heavyweight distributions like Ubuntu benefit from additional memory capacity for desktop use cases. The optimized D0 stepping of the Broadcom BCM2712 application processor includes support for memories larger than 8 GB. And our friends at Micron were able to offer us a single package containing eight of their 16Gbit LPDDR4X die, making a 16 GB product feasible for the first time.
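The capacity math behind the new Micron package works out as a quick back-of-the-envelope calculation:

```python
# Eight 16 Gbit LPDDR4X die in a single package, as described above.
die_count = 8
gbit_per_die = 16
total_gbit = die_count * gbit_per_die  # 128 Gbit in the package
total_gbyte = total_gbit // 8          # 8 bits per byte
print(f"{total_gbyte} GB")
# 16 GB
```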

HighPoint M.2/E1.S NVMe RAID AICs Deliver Unparalleled Speed and Scalability for GPU Server Applications

HighPoint Technologies, a leader in advanced PCIe Switch and RAID AIC, Adapter and Storage Enclosure solutions, has announced an extensive line of M.2 and E1.S RAID AICs engineered to accommodate high-performance GPU-centric workloads. Designed for enterprise and datacenter class computing environments, HighPoint NVMe RAID AICs deliver class-leading performance and unmatched scalability, enabling modern x86/AMD and Arm platforms to support 4 to 16 NVMe SSDs via a single PCIe Gen 4 or Gen 5 x16 slot. State-of-the-art PCIe Switching Architecture and flexible RAID technology enable administrators to custom tailor M.2 and E1.S storage configurations for a broad range of data-intensive applications, and seamlessly scale or expand existing storage configurations to meet the needs of evolving workflows.

Unprecedented Storage Density
HighPoint NVMe AICs have established a new milestone for M.2 NVMe storage. HighPoint's revolutionary Dual-Width AIC architecture enables a single PCIe Gen 4 or Gen 5 x16 slot to directly host up to 16 M.2 NVMe SSDs and 128 TB of storage capacity at speeds up to 28 GB/s; a truly unprecedented advancement in compact, single-device storage expansion solutions. State-of-the-art PCIe switching technology and advanced cooling systems maximize transfer throughput and ensure M.2 configurations operate at peak efficiency by stopping the performance-sapping threat of thermal throttling in its tracks.
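The headline figures are easy to sanity-check. A short sketch, assuming 8 TB per M.2 SSD (the announcement only states the totals) and PCIe Gen 4's roughly 2 GB/s of raw bandwidth per lane:

```python
# Capacity: 16 M.2 SSDs at an assumed 8 TB each.
drives, tb_per_drive = 16, 8
capacity_tb = drives * tb_per_drive
print(capacity_tb, "TB")        # matches the claimed 128 TB total

# Throughput ceiling: a Gen 4 x16 slot carries ~2 GB/s per lane raw,
# so the quoted 28 GB/s sits close to the slot's practical limit.
lanes, gbs_per_lane = 16, 2
slot_gbs = lanes * gbs_per_lane
print(slot_gbs, "GB/s theoretical")
```

In other words, the 28 GB/s figure implies the switch architecture is delivering most of what the host slot can physically carry, with the per-drive links oversubscribed behind it.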

Fujitsu Previews Monaka: 144-Core Arm CPU Made with Chiplets

Fujitsu has previewed its next-generation Monaka processor, a 144-core powerhouse for the data center. Satoshi Matsuoka of the RIKEN Center for Computational Science showcased the mechanical sample on social media platform X. The Monaka processor is developed in collaboration with Broadcom and employs an innovative 3.5D eXtreme Dimension System-in-Package architecture featuring four 36-core chiplets manufactured using TSMC's N2 process. These chiplets are stacked face-to-face with SRAM tiles through hybrid copper bonding, utilizing TSMC's N5 process for the cache layer. A distinguishing feature of the Monaka design is its approach to memory architecture. Rather than incorporating HBM, Fujitsu has opted for pure cache dies below the compute logic in combination with DDR5 DRAM compatibility, potentially leveraging advanced modules like MR-DIMM and MCR-DIMM.

The processor's I/O die supports cutting-edge interfaces, including DDR5 memory, PCIe 6.0, and CXL 3.0, for seamless integration with modern data center infrastructure. Security is handled through Armv9-A's Confidential Computing Architecture for enhanced workload isolation. Fujitsu has set ambitious goals for the Monaka processor: the company aims to achieve twice the energy efficiency of current x86 processors by 2027 while maintaining air cooling. The processor targets both AI and HPC workloads with Arm SVE2 support, which enables vector lengths up to 2048 bits. Scheduled for release during Fujitsu's fiscal year 2027 (April 2026 to March 2027), the Monaka processor is shaping up as a competitor to AMD's EPYC and Intel's Xeon processors.
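To put the 2048-bit SVE2 claim in concrete terms, a quick calculation shows how many elements one vector instruction can process at common element widths:

```python
# Lane counts for a 2048-bit SVE2 vector at common element widths.
vector_bits = 2048
lanes = {name: vector_bits // bits
         for name, bits in (("fp64", 64), ("fp32", 32), ("int8", 8))}
for name, count in lanes.items():
    print(f"{name}: {count} lanes per instruction")
# fp64: 32, fp32: 64, int8: 256
```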

Broadcom Delivers Industry's First 3.5D F2F Technology for AI XPUs

Broadcom Inc. today announced the availability of its 3.5D eXtreme Dimension System in Package (XDSiP) platform technology, enabling consumer AI customers to develop next-generation custom accelerators (XPUs). The 3.5D XDSiP integrates more than 6000 mm² of silicon and up to 12 high bandwidth memory (HBM) stacks in one packaged device to enable high-efficiency, low-power computing for AI at scale. Broadcom has achieved a significant milestone by developing and launching the industry's first Face-to-Face (F2F) 3.5D XPU.

The immense computational power required for training generative AI models relies on massive clusters of 100,000 growing to 1 million XPUs. These XPUs demand increasingly sophisticated integration of compute, memory, and I/O capabilities to achieve the necessary performance while minimizing power consumption and cost. Traditional methods like Moore's Law and process scaling are struggling to keep up with these demands. Therefore, advanced system-in-package (SiP) integration is becoming crucial for next-generation XPUs. Over the past decade, 2.5D integration, which involves integrating multiple chiplets up to 2500 mm² of silicon and HBM modules up to 8 HBMs on an interposer, has proven valuable for XPU development. However, as new and increasingly complex LLMs are introduced, their training necessitates 3D silicon stacking for better size, power, and cost. Consequently, 3.5D integration, which combines 3D silicon stacking with 2.5D packaging, is poised to become the technology of choice for next-generation XPUs in the coming decade.

Intel 18A Process Node Clocks an Abysmal 10% Yield: Report

In case you're wondering why Intel went with TSMC 3 nm to build the Compute tile of its "Arrow Lake" processor, and the SoC tile of "Lunar Lake," instead of Intel 3 or even Intel 20A, perhaps there's more to the recent story about Broadcom voicing its disappointment in the Intel 18A foundry node. The September 2024 report didn't put a number on the Intel 18A yields that spooked Broadcom, but we now have some idea of just how bad things are. Korean publication Chosun, which tracks developments in the electronics and ICT industries, reports that yields on the Intel 18A foundry node stand at an abysmal 10%, making it unfit for mass production. Broadcom validated Intel 18A as it was prospecting a cutting-edge node for its high-bandwidth network processors.

The report also hints that Intel's in-house foundry nodes going off the rails could be an important event leading up to the company's Board letting go of former CEO Pat Gelsinger, as huge second-order effects will be felt across the company's entire product stack in development. For example, company roadmaps put Intel's next-generation "Clearwater Forest" server processor, slated for 2025, as being designed for the Intel 18A node. Unless Intel Foundry can pull off a miracle, an effort must be underway to redesign the chip for whichever TSMC node is considered cutting-edge in 2025.

Raspberry Pi Compute Module 5 Officially Launches With Broadcom BCM2712 Quad-Core SoC

Today we're happy to announce the much-anticipated launch of Raspberry Pi Compute Module 5, the modular version of our flagship Raspberry Pi 5 single-board computer, priced from just $45.

An unexpected journey
We founded the Raspberry Pi Foundation back in 2008 with a mission to give today's young people access to the sort of approachable, programmable, affordable computing experience that I benefitted from back in the 1980s. The Raspberry Pi computer was, in our minds, a spiritual successor to the BBC Micro, itself the product of the BBC's Computer Literacy Project. But just as the initially education-focused BBC Micro quickly found a place in the wider commercial computing marketplace, so Raspberry Pi became a platform around which countless companies, from startups to multi-billion-dollar corporations, chose to innovate. Today, between seventy and eighty percent of Raspberry Pi units go into industrial and embedded applications.

OpenAI Designs its First AI Chip in Collaboration with Broadcom and TSMC

According to a recent Reuters report, OpenAI is continuing its moves in the custom silicon space, expanding beyond its reported talks with Broadcom to a broader strategy involving multiple industry leaders. Broadcom is a fabless chip designer known for a wide range of silicon solutions, spanning networking, PCIe, SSD controllers, and PHYs all the way up to custom ASICs. The company behind ChatGPT is actively working with both Broadcom and TSMC to develop its first proprietary AI chip, specifically focused on inference operations. Getting a custom chip to handle training runs is a more complex task, and OpenAI is leaving that to its current partners until it figures out all the details. Even with just an inference chip, the scale at which OpenAI serves its models makes it financially sensible to develop custom solutions tailored to its infrastructure needs.

This time, the initiative represents a more concrete and nuanced approach than previously understood. Rather than just exploratory discussions, OpenAI has assembled a dedicated chip team of approximately 20 people, led by former Google TPU engineers Thomas Norrie and Richard Ho. The company has secured manufacturing capacity with TSMC, targeting a 2026 timeline for its first custom-designed chip. While Broadcom's involvement leverages its expertise in helping companies optimize chip designs for manufacturing and manage data movement between chips—crucial for AI systems running thousands of processors in parallel—OpenAI is simultaneously diversifying its compute strategy. This includes adding AMD's Instinct MI300X chips to its infrastructure alongside its existing NVIDIA deployments. Meta has taken a similar approach, training its models on NVIDIA GPUs while serving them to the public (inference) on AMD Instinct MI300X.

Intel's Silver Lining is $8.5 Billion CHIPS Act Funding, Possibly by the End of the Year

Intel's recent financial woes have brought the company into severe cost-cutting measures, including job cuts and project delays. However, a silver lining remains—Intel is reportedly in the final stages of securing $8.5 billion in direct funding from the US government under the CHIPS Act, delivered by the end of the year. The potential financing comes at a crucial time for Intel, which has been grappling with financial challenges. The company reported a $1.6 billion loss in the second quarter of 2024, leading to short-term setbacks. However, thanks to sources close to the Financial Times, we learn that Intel's funding target will represent the CHIPS Act's largest share, leading to a massive boost to US-based semiconductor manufacturing.

Looking ahead, the potential CHIPS Act funding could serve as a catalyst for Intel's resurgence, reassuring both investors and customers about the company's future. A key element of Intel's recovery strategy lies in the ramp-up of production for its advanced 18A node, which should become the primary revenue driver for its foundry unit. This advancement, coupled with the anticipated government backing, positions Intel to potentially capture market share from established players like TSMC and Samsung. The company has already secured high-profile customers such as Amazon and (allegedly) Broadcom, hinting at its growing appeal in the foundry space. Moreover, Intel's enhanced domestic manufacturing capabilities align well with potential US government mandates for companies like NVIDIA and Apple to produce processors locally, a consideration driven by escalating geopolitical tensions.

Intel 20A Node Cancelled for Foundry Customers, "Arrow Lake" Mainly Manufactured Externally

Intel has announced the cancellation of its 20A node for Foundry customers, as well as shifting the majority of Arrow Lake production to external foundries. The tech giant will instead focus its resources on the more advanced 18A node while relying on external partners for Arrow Lake production, likely tapping TSMC or Samsung for their 2 nm nodes. The decision follows Intel's successful release of the 18A Process Design Kit (PDK) 1.0 in July, which garnered positive feedback from the ecosystem, according to the company. Intel reports that the 18A node is already operational, booting operating systems and yielding well, keeping the company on track for a 2025 launch. This early success has enabled Intel to reallocate engineering resources from 20A to 18A sooner than anticipated. As a result, the "Arrow Lake processor family will be built primarily using external partners and packaged by Intel Foundry".

The 20A node, while now cancelled for Arrow Lake, has played a crucial role in Intel's journey towards 18A. It served as a testbed for new techniques, materials, and transistor architectures essential for advancing Moore's Law. The 20A node successfully integrated both RibbonFET gate-all-around transistor architecture and PowerVia backside power delivery for the first time, providing valuable insights that directly informed the development of 18A. Intel's decision to focus on 18A is also driven by economic factors. With the current 18A defect density already at D0 <0.40, the company sees an opportunity to optimize its engineering investments by transitioning now. However, challenges remain, as evidenced by recent reports of Broadcom's disappointment in the 18A node. Despite these hurdles, Intel remains optimistic about the future of its foundry services and the potential of its advanced manufacturing processes. The coming months will be crucial as the company works to demonstrate the capabilities of its 18A node and secure more partners for its foundry business.

Broadcom's Testing of Intel 18A Node Signals Disappointment, Still Not Ready for High-Volume Production

According to a recent Reuters report, Intel's 18A node doesn't seem to be production-ready. As the sources indicate, Broadcom has reportedly been testing Intel's 18A node on its internal company designs, which include an extensive range of products from AI accelerators to networking switches. However, when Broadcom received the initial production run from Intel, the 18A node turned out to be in a worse state than expected. After testing the wafers and powering them on, Broadcom reportedly concluded that the 18A process is not yet ready for high-volume production. Given that Broadcom's comments concern high-volume production, this signals that the 18A node is not yet producing yields that would satisfy external customers.

While this is not a good sign for Intel's Foundry contract business, it suggests the node is presumably in a good state in terms of power and performance. Intel's CEO Pat Gelsinger confirmed that 18A is now at a 0.4 d0 defect density and is a "healthy process." However, alternatives exist at TSMC, which is proving a very challenging competitor to take on: its N7 and N5 nodes had a defect density of 0.33 during development and 0.1 during high-volume production, leading to better yields and lower costs for the contracting party, and thus higher profits. Ultimately, it is up to Intel to improve its production process further to satisfy customers. Gelsinger wants Intel Foundry to be "manufacturing ready" by the end of the year, with the first designs reaching volume production in 2025. There are still a few months left to improve the node, and we expect to see changes implemented by the end of the year.
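The defect-density figures above translate into yield through the standard Poisson yield model, Y = exp(-D0 × A). A short sketch (the 120 mm² die area is an illustrative assumption; actual customer die sizes are not public):

```python
import math

def poisson_yield(d0_per_cm2, die_area_mm2):
    """Classic Poisson yield model: Y = exp(-D0 * A), A in cm^2."""
    return math.exp(-d0_per_cm2 * die_area_mm2 / 100.0)

# Compare the quoted defect densities on a hypothetical 120 mm^2 die.
for label, d0 in (("Intel 18A (current)", 0.4),
                  ("TSMC N7/N5 in development", 0.33),
                  ("TSMC N7/N5 at high volume", 0.1)):
    print(f"{label}: D0={d0} -> yield ~{poisson_yield(d0, 120):.0%}")
```

Note that at a fixed D0, yield falls exponentially with die area, which is why a 0.4 defect density can look tolerable for small mobile tiles yet painful for large data center silicon like Broadcom's network processors.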

OpenAI in Talks with Broadcom About Developing Custom AI Chips to Power Next Generation Models

According to The Information, OpenAI is reportedly in talks with Broadcom about developing a custom AI accelerator to power OpenAI's growing demand for high-performance solutions. Broadcom is a fabless chip designer known for a wide range of silicon solutions, spanning networking, PCIe, SSD controllers, and PHYs all the way up to custom ASICs. The latter is what OpenAI wants to focus on, but all of the aforementioned IP developed by Broadcom is of use in a data center. Should OpenAI decide to use Broadcom's solutions, the fabless silicon designer offers a complete vertical stack of products: inter-system communication using protocols such as PCIe, system-to-system communication using Ethernet networking with Broadcom Tomahawk 6 and future revisions, alongside storage solutions and many other complementary elements of a data center.

Broadcom is skilled in making various IPs and also builds ASIC solutions for other companies; it assisted Google in creating its Tensor Processing Unit (TPU), now in its sixth generation. Google's TPUs are massively successful: Google deploys millions of them and provides AI solutions to billions of users across the globe. Now, OpenAI wants to be part of the AI chip game, and Broadcom could come to the rescue with its established AI success and its various other data center componentry, helping build a custom AI accelerator to power the infrastructure needed for OpenAI's next generation of AI models. With each new AI model released by OpenAI, compute demand spikes by several orders of magnitude, and having an AI accelerator that exactly matches its needs will help the company move faster and run even bigger AI models.

ByteDance and Broadcom to Collaborate on Advanced AI Chip

ByteDance, TikTok's parent company, is reportedly working with American chip designer Broadcom to develop a cutting-edge AI processor. This collaboration could secure a stable supply of high-performance chips for ByteDance, according to Reuters. Sources claim the joint project involves a 5 nm Application-Specific Integrated Circuit (ASIC), designed to comply with U.S. export regulations. TSMC is slated to manufacture the chip, though production is not expected to begin this year.

This partnership marks a significant development in U.S.-China tech relations, as no public announcements of such collaborations on advanced chips have been made since Washington implemented stricter export controls in 2022. For ByteDance, this move could reduce procurement costs and ensure a steady chip supply, crucial for powering its array of popular apps, including TikTok and the ChatGPT-like AI chatbot "Doubao." The company has already invested heavily in AI chips, reportedly spending $2 billion on NVIDIA processors in 2023.

Broadcom Unveils Newest Innovations for VMware Cloud Foundation

Broadcom Inc. today unveiled the latest updates to VMware Cloud Foundation (VCF), the company's flagship private cloud platform. The latest advancements in VCF support customers' digital innovation with faster infrastructure modernization, improved developer productivity, and better cyber resiliency and security with low total cost of ownership.

"VMware Cloud Foundation is the industry's first private-cloud platform to offer the combined power of public and private clouds with unmatched operational simplicity and proven total cost of ownership value," said Paul Turner, Vice President of Products, VMware Cloud Foundation Division, Broadcom. "With our latest release, VCF is delivering on key requirements driven by customer input. The new VCF Import functionality will be a game changer in accelerating VCF adoption and improving time to value. We are also delivering a set of new capabilities that helps IT more quickly meet the needs of developers without increasing business risk. This latest release of VCF puts us squarely on the path to delivering on the full promise of VCF for our customers."

AMD, Broadcom, Cisco, Google, HPE, Intel, Meta and Microsoft Form Ultra Accelerator Link (UALink) Promoter Group to Combat NVIDIA NVLink

AMD, Broadcom, Cisco, Google, Hewlett Packard Enterprise (HPE), Intel, Meta and Microsoft today announced they have aligned to develop a new industry standard dedicated to advancing high-speed and low-latency communication for scale-up AI systems in data centers.

Called the Ultra Accelerator Link (UALink), this initial group will define and establish an open industry standard that will enable AI accelerators to communicate more effectively. By creating an interconnect based upon open standards, UALink will enable system OEMs, IT professionals and system integrators to create a pathway for easier integration, greater flexibility and scalability of their AI-connected data centers.

The Wi-Fi Alliance Introduces Wi-Fi CERTIFIED 7

Wi-Fi CERTIFIED 7 is here, introducing powerful new features that boost Wi-Fi performance and improve connectivity across a variety of environments. Cutting-edge capabilities in Wi-Fi CERTIFIED 7 enable innovations that rely on high throughput, deterministic latency, and greater reliability for critical traffic. New use cases - including multi-user AR/VR/XR, immersive 3-D training, electronic gaming, hybrid work, industrial IoT, and automotive - will advance as a result of the latest Wi-Fi generation. Wi-Fi CERTIFIED 7 represents the culmination of extensive collaboration and innovation within Wi-Fi Alliance, facilitating worldwide product interoperability and a robust, sophisticated device ecosystem.

Wi-Fi 7 will see rapid adoption across a broad ecosystem with more than 233 million devices expected to enter the market in 2024, growing to 2.1 billion devices by 2028. Smartphones, PCs, tablets, and access points (APs) will be the earliest adopters of Wi-Fi 7, and customer premises equipment (CPE) and augmented and virtual reality (AR/VR) equipment will continue to gain early market traction. Wi-Fi CERTIFIED 7 pushes the boundaries of today's wireless connectivity, and Wi-Fi CERTIFIED helps ensure advanced features are deployed in a consistent way to deliver high-quality user experiences.

Top Ten IC Design Houses Ride Wave of Seasonal Consumer Demand and Continued AI Boom to See 17.8% Increase in Quarterly Revenue in 3Q23

TrendForce reports that 3Q23 has been a historic quarter for the world's leading IC design houses as total revenue soared 17.8% to reach a record-breaking US$44.7 billion. This remarkable growth is fueled by a robust season of stockpiling for smartphones and laptops, combined with a rapid acceleration in the shipment of generative AI chips and components. NVIDIA, capitalizing on the AI boom, emerged as the top performer in revenue and market share. Notably, analog IC supplier Cirrus Logic overtook US PMIC manufacturer MPS to snatch the tenth spot, driven by strong demand for smartphone stockpiling.

NVIDIA's revenue soared 45.7% to US$16.5 billion in the third quarter, bolstered by sustained demand for generative AI and LLMs. Its data center business—accounting for nearly 80% of its revenue—was a key driver in this exceptional growth.

AMD Showcases Growing Momentum for AMD Powered AI Solutions from the Data Center to PCs

Today at the "Advancing AI" event, AMD was joined by industry leaders including Microsoft, Meta, Oracle, Dell Technologies, HPE, Lenovo, Supermicro, Arista, Broadcom and Cisco to showcase how these companies are working with AMD to deliver advanced AI solutions spanning from cloud to enterprise and PCs. AMD launched multiple new products at the event, including the AMD Instinct MI300 Series data center AI accelerators, ROCm 6 open software stack with significant optimizations and new features supporting Large Language Models (LLMs) and Ryzen 8040 Series processors with Ryzen AI.

"AI is the future of computing and AMD is uniquely positioned to power the end-to-end infrastructure that will define this AI era, from massive cloud installations to enterprise clusters and AI-enabled intelligent embedded devices and PCs," said AMD Chair and CEO Dr. Lisa Su. "We are seeing very strong demand for our new Instinct MI300 GPUs, which are the highest-performance accelerators in the world for generative AI. We are also building significant momentum for our data center AI solutions with the largest cloud companies, the industry's top server providers, and the most innovative AI startups, who we are working closely with to rapidly bring Instinct MI300 solutions to market that will dramatically accelerate the pace of innovation across the entire AI ecosystem."

Ethernet Switch Chips are Now Infected with AI: Broadcom Announces Trident 5-X12

Artificial intelligence has been a hot topic this year, and everything from CPUs to GPUs and NPUs is now marketed as an AI processor. It was only a matter of time before AI processing elements made their way into networking silicon as well. Today, Broadcom announced its new Ethernet switching silicon, the Trident 5-X12. The chip delivers 16 Tb/s of bandwidth, double that of the previous Trident generation, and adds support for fast 800G ports for connecting to Tomahawk 5 spine switch chips. The 5-X12 is software-upgradable and optimized for dense 1RU top-of-rack designs, enabling configurations with up to 48x200G downstream server ports and 8x800G upstream fabric ports. The 800G support is implemented with 100G-PAM4 SerDes, which enables up to 4 m DAC cables and linear optics.
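The quoted port configuration lines up exactly with the chip's aggregate bandwidth; a quick back-of-the-envelope check (illustrative arithmetic, not from Broadcom's materials):

```python
# Sanity-check the Trident 5-X12 port math: 48x200G downstream plus
# 8x800G upstream should sum to the chip's quoted 16 Tb/s.
downstream_gbps = 48 * 200   # server-facing ports
upstream_gbps = 8 * 800      # fabric-facing ports toward Tomahawk 5 spines

total_tbps = (downstream_gbps + upstream_gbps) / 1000
print(total_tbps)  # 16.0
```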

The Trident 5-X12 is not just a switch chip, however. Broadcom has added an on-chip inference engine called NetGNT (Networking General-purpose Neural-network Traffic-analyzer), which detects common traffic patterns and optimizes data movement across the chip. The company cites AI/ML workloads as an example: there, NetGNT performs intelligent traffic analysis to avoid network congestion. It can detect so-called "incast" patterns in real time, where many flows converge simultaneously on the same port. By recognizing the start of an incast event early, NetGNT can invoke hardware-based congestion control techniques to prevent performance degradation without adding latency.
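Broadcom has not published NetGNT's internals, but the incast pattern it targets is easy to illustrate: many concurrent flows converging on one egress port. A minimal sketch of that detection idea, with a purely hypothetical threshold and data layout:

```python
from collections import Counter

# Hypothetical sketch of incast detection: flag any egress port on which
# the number of concurrent flows exceeds a threshold. The threshold and
# flow representation are assumptions for illustration only.
INCAST_THRESHOLD = 8

def detect_incast(active_flows):
    """active_flows: iterable of (flow_id, egress_port) tuples.
    Returns the set of egress ports showing an incast pattern."""
    flows_per_port = Counter(port for _, port in active_flows)
    # A real switch would trigger hardware congestion control for these
    # ports; here we simply report them.
    return {port for port, n in flows_per_port.items() if n >= INCAST_THRESHOLD}

# Ten flows converge on port 7, one flow targets port 3.
flows = [(i, 7) for i in range(10)] + [(100, 3)]
print(detect_incast(flows))  # {7}
```

In hardware this bookkeeping happens per-packet at line rate; the sketch only shows the converging-flows criterion, not the timing or the congestion-control response.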

Broadcom Announces Successful Acquisition of VMware

Broadcom is thrilled to announce its successful acquisition of VMware, and the start of a new and exciting era for all of us at the company. VMware joins our engineering-first, innovation-centric team, which is another important step forward in building the world's leading infrastructure technology company.

While an important moment for Broadcom, it's also an exciting milestone for our customers around the world. And as I said when we first announced the acquisition, we can now come together and have the scale to help global enterprises address their complex IT infrastructure challenges by enabling private and hybrid cloud environments and helping them deploy an "apps anywhere" strategy. Our goal is to help customers optimize their private, hybrid and multi-cloud environments, allowing them to run applications and services anywhere.