News Posts matching #Broadcom

HighPoint M.2/E1.S NVMe RAID AICs Deliver Unparalleled Speed and Scalability for GPU Server Applications

HighPoint Technologies, a leader in advanced PCIe switch and RAID AIC, adapter, and storage enclosure solutions, has announced an extensive line of M.2 and E1.S RAID AICs engineered to accommodate high-performance, GPU-centric workloads. Designed for enterprise and datacenter-class computing environments, HighPoint NVMe RAID AICs deliver class-leading performance and unmatched scalability, enabling modern x86 and Arm platforms to support 4 to 16 NVMe SSDs via a single PCIe Gen 4 or Gen 5 x16 slot. State-of-the-art PCIe switching architecture and flexible RAID technology enable administrators to custom-tailor M.2 and E1.S storage configurations for a broad range of data-intensive applications, and to seamlessly scale or expand existing storage configurations to meet the needs of evolving workflows.

Unprecedented Storage Density
HighPoint NVMe AICs have established a new milestone for M.2 NVMe storage. HighPoint's revolutionary Dual-Width AIC architecture enables a single PCIe Gen 4 or Gen 5 x16 slot to directly host up to 16 M.2 NVMe SSDs and 128 TB of storage capacity at speeds of up to 28 GB/s; a truly unprecedented advancement in compact, single-device storage expansion solutions. State-of-the-art PCIe switching technology and advanced cooling systems maximize transfer throughput and ensure M.2 configurations operate at peak efficiency by stopping the performance-sapping threat of thermal throttling in its tracks.
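
As a rough sanity check on those headline figures: the capacity works out to 16 modules of 8 TB each (the per-module figure is inferred from the quoted totals, not stated by HighPoint), and the quoted 28 GB/s sits just under the usable bandwidth of a PCIe Gen 4 x16 link. A minimal sketch of the arithmetic:

```c
#include <stdio.h>

int main(void) {
    /* Per-module capacity is inferred from the quoted totals (128 TB / 16 SSDs),
     * not stated by HighPoint. */
    const int    modules       = 16;
    const double tb_per_module = 8.0;

    /* PCIe Gen 4: 16 GT/s per lane with 128b/130b encoding. */
    const double gen4_lane_gbps = 16.0 * 128.0 / 130.0;        /* ~15.75 Gb/s usable per lane */
    const double gen4_x16_gbs   = gen4_lane_gbps * 16.0 / 8.0; /* ~31.5 GB/s for an x16 link  */

    printf("Total capacity: %.0f TB\n", modules * tb_per_module);          /* 128 TB */
    printf("PCIe Gen 4 x16 usable bandwidth: ~%.1f GB/s\n", gen4_x16_gbs); /* ~31.5  */
    printf("Quoted array throughput: 28.0 GB/s\n");
    return 0;
}
```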

Fujitsu Previews Monaka: 144-Core Arm CPU Made with Chiplets

Fujitsu has previewed its next-generation Monaka processor, a 144-core powerhouse for the data center. Satoshi Matsuoka of the RIKEN Center for Computational Science showcased a mechanical sample on the social media platform X. The Monaka processor is being developed in collaboration with Broadcom and employs an innovative 3.5D eXtreme Dimension System-in-Package architecture featuring four 36-core chiplets manufactured on TSMC's N2 process. These chiplets are stacked face-to-face with SRAM tiles through hybrid copper bonding, with the cache layer built on TSMC's N5 process. A distinguishing feature of the Monaka design is its approach to memory architecture: rather than incorporating HBM, Fujitsu has opted for pure cache dies beneath the compute logic, combined with DDR5 DRAM compatibility, potentially leveraging advanced modules such as MR-DIMM and MCR-DIMM.

The processor's I/O die supports cutting-edge interfaces, including DDR5 memory, PCIe 6.0, and CXL 3.0, for seamless integration with modern data center infrastructure. Security is addressed through Armv9-A's Confidential Computing Architecture for enhanced workload isolation. Fujitsu has set ambitious goals for the Monaka processor: the company aims to achieve twice the energy efficiency of current x86 processors by 2027 while maintaining air cooling. The processor targets AI and HPC workloads through Arm SVE2 support, which allows vector lengths of up to 2,048 bits. Scheduled for release in Fujitsu's fiscal year 2027 (April 2027 to March 2028), the Monaka processor is shaping up as a competitor to AMD's EPYC and Intel's Xeon processors.
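
SVE2 code is vector-length agnostic, so the same binary runs on implementations anywhere from 128-bit to 2,048-bit vectors and simply processes more elements per iteration on wider hardware. Below is a minimal illustrative sketch using the Arm C Language Extensions; it is not Fujitsu code, and the function name is purely hypothetical:

```c
#include <arm_sve.h>   /* Arm C Language Extensions for SVE; compile with SVE enabled */
#include <stdint.h>

/* Vector-length-agnostic element-wise add: a[i] = b[i] + c[i].
 * On a 2,048-bit SVE2 implementation each iteration processes 64 floats;
 * on a 128-bit one it processes 4. The source code is identical. */
void vla_add(float *a, const float *b, const float *c, int64_t n) {
    for (int64_t i = 0; i < n; i += svcntw()) {   /* svcntw(): 32-bit lanes per vector */
        svbool_t    pg = svwhilelt_b32(i, n);     /* predicate masks the loop tail     */
        svfloat32_t vb = svld1(pg, b + i);
        svfloat32_t vc = svld1(pg, c + i);
        svst1(pg, a + i, svadd_x(pg, vb, vc));
    }
}
```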

Broadcom Delivers Industry's First 3.5D F2F Technology for AI XPUs

Broadcom Inc. today announced the availability of its 3.5D eXtreme Dimension System in Package (XDSiP) platform technology, enabling consumer AI customers to develop next-generation custom accelerators (XPUs). The 3.5D XDSiP integrates more than 6000 mm² of silicon and up to 12 high bandwidth memory (HBM) stacks in one packaged device to enable high-efficiency, low-power computing for AI at scale. Broadcom has achieved a significant milestone by developing and launching the industry's first Face-to-Face (F2F) 3.5D XPU.

The immense computational power required for training generative AI models relies on massive clusters of 100,000 XPUs, growing toward 1 million. These XPUs demand increasingly sophisticated integration of compute, memory, and I/O capabilities to achieve the necessary performance while minimizing power consumption and cost. Traditional approaches built on Moore's Law and process scaling are struggling to keep up with these demands, so advanced system-in-package (SiP) integration is becoming crucial for next-generation XPUs. Over the past decade, 2.5D integration, which places multiple chiplets (up to 2,500 mm² of silicon) and up to eight HBM stacks on an interposer, has proven valuable for XPU development. However, as new and increasingly complex LLMs are introduced, their training necessitates 3D silicon stacking for better size, power, and cost. Consequently, 3.5D integration, which combines 3D silicon stacking with 2.5D packaging, is poised to become the technology of choice for next-generation XPUs in the coming decade.

Intel 18A Process Node Clocks an Abysmal 10% Yield: Report

In case you're wondering why Intel went with TSMC 3 nm to build the Compute tile of its "Arrow Lake" processor and the SoC tile of "Lunar Lake," instead of Intel 3 or even Intel 20A, perhaps there's more to the recent story about Broadcom voicing its disappointment in the Intel 18A foundry node. The September 2024 report didn't put a number on the 18A yields that spooked Broadcom, but we now have some idea of just how bad things are. Korean publication Chosun, which tracks developments in the electronics and ICT industries, reports that yields on the Intel 18A foundry node stand at an abysmal 10%, making it unfit for mass production. Broadcom had been evaluating Intel 18A while prospecting for a cutting-edge node for its high-bandwidth network processors.

The report also hints that Intel's in-house foundry nodes going off the rails could be an important event leading up to the company's board letting go of former CEO Pat Gelsinger, as huge second-order effects will be felt across the company's entire product stack in development. For example, company roadmaps put Intel's next-generation "Clearwater Forest" server processor, slated for 2025, on the Intel 18A node. Unless Intel Foundry can pull off a miracle, an effort must be underway to redesign the chip for whichever TSMC node is considered cutting-edge in 2025.

Raspberry Pi Compute Module 5 Officially Launches With Broadcom BCM2712 Quad-Core SoC

Today we're happy to announce the much-anticipated launch of Raspberry Pi Compute Module 5, the modular version of our flagship Raspberry Pi 5 single-board computer, priced from just $45.

An unexpected journey
We founded the Raspberry Pi Foundation back in 2008 with a mission to give today's young people access to the sort of approachable, programmable, affordable computing experience that I benefitted from back in the 1980s. The Raspberry Pi computer was, in our minds, a spiritual successor to the BBC Micro, itself the product of the BBC's Computer Literacy Project. But just as the initially education-focused BBC Micro quickly found a place in the wider commercial computing marketplace, so Raspberry Pi became a platform around which countless companies, from startups to multi-billion-dollar corporations, chose to innovate. Today, between seventy and eighty percent of Raspberry Pi units go into industrial and embedded applications.

OpenAI Designs its First AI Chip in Collaboration with Broadcom and TSMC

According to a recent Reuters report, OpenAI is continuing its moves in the custom silicon space, expanding beyond its reported talks with Broadcom to a broader strategy involving multiple industry leaders. Broadcom is a fabless chip designer known for a wide range of silicon solutions spanning networking, PCIe, SSD controllers, and PHYs all the way up to custom ASICs. The company behind ChatGPT is actively working with both Broadcom and TSMC to develop its first proprietary AI chip, specifically focused on inference operations. Getting a custom chip to handle training runs is a more complex task, and OpenAI is leaving that to its current partners until it works out the details. Even for an inference chip, the scale at which OpenAI serves its models makes it financially sensible for the company to develop custom solutions tailored to its infrastructure needs.

This time, the initiative represents a more concrete and nuanced approach than previously understood. Rather than just exploratory discussions, OpenAI has assembled a dedicated chip team of approximately 20 people, led by former Google TPU engineers Thomas Norrie and Richard Ho. The company has secured manufacturing capacity with TSMC, targeting a 2026 timeline for its first custom-designed chip. Broadcom's involvement leverages its expertise in helping companies optimize chip designs for manufacturing and in managing data movement between chips, which is crucial for AI systems running thousands of processors in parallel. At the same time, OpenAI is diversifying its compute strategy by adding AMD's Instinct MI300X accelerators to its infrastructure alongside its existing NVIDIA deployments. Meta takes a similar approach, training its models on NVIDIA GPUs and serving them to the public (inference) on AMD Instinct MI300X.

Intel's Silver Lining is $8.5 Billion CHIPS Act Funding, Possibly by the End of the Year

Intel's recent financial woes have pushed the company into severe cost-cutting measures, including job cuts and project delays. However, a silver lining remains: Intel is reportedly in the final stages of securing $8.5 billion in direct funding from the US government under the CHIPS Act, to be delivered by the end of the year. The potential financing comes at a crucial time for Intel, which has been grappling with financial challenges; the company reported a $1.6 billion loss in the second quarter of 2024, leading to short-term setbacks. According to sources cited by the Financial Times, Intel's award would represent the largest single share of CHIPS Act funding, delivering a massive boost to US-based semiconductor manufacturing.

Looking ahead, the potential CHIPS Act funding could serve as a catalyst for Intel's resurgence, reassuring both investors and customers about the company's future. A key element of Intel's recovery strategy lies in the ramp-up of production for its advanced 18A node, which should become the primary revenue driver for its foundry unit. This advancement, coupled with the anticipated government backing, positions Intel to potentially capture market share from established players like TSMC and Samsung. The company has already secured high-profile customers such as Amazon and (allegedly) Broadcom, hinting at its growing appeal in the foundry space. Moreover, Intel's enhanced domestic manufacturing capabilities align well with potential US government mandates for companies like NVIDIA and Apple to produce processors locally, a consideration driven by escalating geopolitical tensions.

Intel 20A Node Cancelled for Foundry Customers, "Arrow Lake" Mainly Manufactured Externally

Intel has announced the cancellation of its 20A node for Foundry customers, as well as the shift of the majority of Arrow Lake production to external foundries. The tech giant will instead focus its resources on the more advanced 18A node while relying on external partners for Arrow Lake production, likely tapping TSMC or Samsung for their 2 nm nodes. The decision follows Intel's release of the 18A Process Design Kit (PDK) 1.0 in July, which garnered positive feedback from the ecosystem, according to the company. Intel reports that the 18A node is already operational, booting operating systems and yielding well, keeping the company on track for a 2025 launch. This early success has enabled Intel to reallocate engineering resources from 20A to 18A sooner than anticipated. As a result, the "Arrow Lake processor family will be built primarily using external partners and packaged by Intel Foundry".

The 20A node, while now cancelled for Arrow Lake, has played a crucial role in Intel's journey toward 18A. It served as a testbed for new techniques, materials, and transistor architectures essential for advancing Moore's Law. The 20A node successfully integrated both the RibbonFET gate-all-around transistor architecture and PowerVia backside power delivery for the first time, providing valuable insights that directly informed the development of 18A. Intel's decision to focus on 18A is also driven by economic factors: with the current 18A defect density already below 0.40 defects per square centimeter (D0 < 0.40), the company sees an opportunity to optimize its engineering investments by transitioning now. However, challenges remain, as evidenced by recent reports of Broadcom's disappointment in the 18A node. Despite these hurdles, Intel remains optimistic about the future of its foundry services and the potential of its advanced manufacturing processes. The coming months will be crucial as the company works to demonstrate the capabilities of its 18A node and secure more partners for its foundry business.

Broadcom's Testing of Intel 18A Node Signals Disappointment, Still Not Ready for High-Volume Production

According to a recent Reuters report, Intel's 18A node doesn't appear to be production-ready. Sources indicate that Broadcom has been testing Intel's 18A node on internal company designs, which span an extensive range of products from AI accelerators to networking switches. However, once Broadcom received the initial production run from Intel, the 18A node turned out to be in a worse state than expected. After testing and powering on the wafers, Broadcom reportedly concluded that the 18A process is not yet ready for high-volume production. Because Broadcom's comments specifically concern high-volume production, they signal that the 18A node is not yet producing yields that would satisfy external customers.

While this is not a good sign for the development of Intel's Foundry contract business, the node is presumably in a reasonable state in terms of power and performance. Intel's CEO Pat Gelsinger confirmed that 18A is now at a defect density (D0) of 0.4 and called it a "healthy process." However, alternatives exist at TSMC, which is proving a very challenging competitor to take on: its N7 and N5 nodes had a defect density of 0.33 during development and 0.1 during high-volume production, which translates into better yields and lower costs for the contracting party, and ultimately higher profits. It is up to Intel to improve its production process further to satisfy customers. Gelsinger wants Intel Foundry to be "manufacturing ready" by the end of the year, with the first designs reaching volume production in 2025. There are still a few months left to improve the node, and we expect to see changes implemented by the end of the year.
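
To put those defect density figures in perspective, a common first-order estimate is the Poisson yield model, in which yield falls off exponentially with die area times defect density. The sketch below assumes a 1 cm² die purely for illustration and reads the quoted numbers as defects per square centimeter (the usual convention); it is not an Intel or TSMC figure, and production yield models are considerably more elaborate:

```c
#include <math.h>
#include <stdio.h>

/* First-order Poisson yield model: Y = exp(-A * D0),
 * where A is die area in cm^2 and D0 is defects per cm^2. */
static double poisson_yield(double area_cm2, double d0) {
    return exp(-area_cm2 * d0);
}

int main(void) {
    const double area = 1.0;                  /* assumed 1 cm^2 die, illustration only   */
    const double d0[] = { 0.40, 0.33, 0.10 }; /* 18A today; N7/N5 in dev; N7/N5 at HVM   */

    for (size_t i = 0; i < sizeof d0 / sizeof d0[0]; i++)
        printf("D0 = %.2f /cm^2  ->  estimated yield %.0f%%\n",
               d0[i], 100.0 * poisson_yield(area, d0[i]));
    /* Prints roughly 67%, 72% and 90%; larger dice amplify the gap,
     * which is why defect density matters so much at scale. */
    return 0;
}
```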

OpenAI in Talks with Broadcom About Developing Custom AI Chips to Power Next Generation Models

According to The Information, OpenAI is reportedly in talks with Broadcom about developing a custom AI accelerator to power OpenAI's growing demand for high-performance solutions. Broadcom is a fabless chip designer known for a wide range of silicon solutions spanning networking, PCIe, SSD controllers, and PHYs all the way up to custom ASICs. The latter is what OpenAI wants to focus on, but all of the aforementioned IP developed by Broadcom is of use in a data center. Should OpenAI decide to use Broadcom's solutions, the fabless silicon designer offers a complete vertical stack of products: inter-system communication using protocols such as PCIe, system-to-system communication over Ethernet networking with Broadcom Tomahawk 6 and future revisions, alongside storage solutions and many other complementary elements of a data center.

As a company skilled in building a wide range of IP, Broadcom also develops ASIC solutions for other companies and assisted Google with its Tensor Processing Unit (TPU), which is now in its sixth generation. Google's TPUs are massively successful: Google deploys millions of them and uses them to provide AI services to billions of users across the globe. Now, OpenAI wants to be part of the AI chip game, and Broadcom could come to the rescue with its established AI silicon track record and broad data center portfolio, helping build a custom AI accelerator to power the infrastructure OpenAI needs for the next generation of AI models. With each new AI model released by OpenAI, compute demand spikes by several orders of magnitude, and having an AI accelerator that exactly matches its needs will help the company move faster and run even bigger AI models.

ByteDance and Broadcom to Collaborate on Advanced AI Chip

ByteDance, TikTok's parent company, is reportedly working with American chip designer Broadcom to develop a cutting-edge AI processor. This collaboration could secure a stable supply of high-performance chips for ByteDance, according to Reuters. Sources claim the joint project involves a 5 nm Application-Specific Integrated Circuit (ASIC), designed to comply with U.S. export regulations. TSMC is slated to manufacture the chip, though production is not expected to begin this year.

This partnership marks a significant development in U.S.-China tech relations, as no public announcements of such collaborations on advanced chips have been made since Washington implemented stricter export controls in 2022. For ByteDance, this move could reduce procurement costs and ensure a steady chip supply, crucial for powering its array of popular apps, including TikTok and the ChatGPT-like AI chatbot "Doubao." The company has already invested heavily in AI chips, reportedly spending $2 billion on NVIDIA processors in 2023.

Broadcom Unveils Newest Innovations for VMware Cloud Foundation

Broadcom Inc. today unveiled the latest updates to VMware Cloud Foundation (VCF), the company's flagship private cloud platform. The latest advancements in VCF support customers' digital innovation with faster infrastructure modernization, improved developer productivity, and better cyber resiliency and security with low total cost of ownership.

"VMware Cloud Foundation is the industry's first private-cloud platform to offer the combined power of public and private clouds with unmatched operational simplicity and proven total cost of ownership value," said Paul Turner, Vice President of Products, VMware Cloud Foundation Division, Broadcom. "With our latest release, VCF is delivering on key requirements driven by customer input. The new VCF Import functionality will be a game changer in accelerating VCF adoption and improving time to value. We are also delivering a set of new capabilities that helps IT more quickly meet the needs of developers without increasing business risk. This latest release of VCF puts us squarely on the path to delivering on the full promise of VCF for our customers."

AMD, Broadcom, Cisco, Google, HPE, Intel, Meta and Microsoft Form Ultra Accelerator Link (UALink) Promoter Group to Combat NVIDIA NVLink

AMD, Broadcom, Cisco, Google, Hewlett Packard Enterprise (HPE), Intel, Meta and Microsoft today announced they have aligned to develop a new industry standard dedicated to advancing high-speed, low-latency communication for scale-up AI systems in data centers.

Called the Ultra Accelerator Link (UALink), this initial group will define and establish an open industry standard that will enable AI accelerators to communicate more effectively. By creating an interconnect based upon open standards, UALink will enable system OEMs, IT professionals and system integrators to create a pathway for easier integration, greater flexibility and scalability of their AI-connected data centers.

The Wi-Fi Alliance Introduces Wi-Fi CERTIFIED 7

Wi-Fi CERTIFIED 7 is here, introducing powerful new features that boost Wi-Fi performance and improve connectivity across a variety of environments. Cutting-edge capabilities in Wi-Fi CERTIFIED 7 enable innovations that rely on high throughput, deterministic latency, and greater reliability for critical traffic. New use cases - including multi-user AR/VR/XR, immersive 3-D training, electronic gaming, hybrid work, industrial IoT, and automotive - will advance as a result of the latest Wi-Fi generation. Wi-Fi CERTIFIED 7 represents the culmination of extensive collaboration and innovation within Wi-Fi Alliance, facilitating worldwide product interoperability and a robust, sophisticated device ecosystem.

Wi-Fi 7 will see rapid adoption across a broad ecosystem with more than 233 million devices expected to enter the market in 2024, growing to 2.1 billion devices by 2028. Smartphones, PCs, tablets, and access points (APs) will be the earliest adopters of Wi-Fi 7, and customer premises equipment (CPE) and augmented and virtual reality (AR/VR) equipment will continue to gain early market traction. Wi-Fi CERTIFIED 7 pushes the boundaries of today's wireless connectivity, and Wi-Fi CERTIFIED helps ensure advanced features are deployed in a consistent way to deliver high-quality user experiences.

Top Ten IC Design Houses Ride Wave of Seasonal Consumer Demand and Continued AI Boom to See 17.8% Increase in Quarterly Revenue in 3Q23

TrendForce reports that 3Q23 has been a historic quarter for the world's leading IC design houses as total revenue soared 17.8% to reach a record-breaking US$44.7 billion. This remarkable growth is fueled by a robust season of stockpiling for smartphones and laptops, combined with a rapid acceleration in the shipment of generative AI chips and components. NVIDIA, capitalizing on the AI boom, emerged as the top performer in revenue and market share. Notably, analog IC supplier Cirrus Logic overtook US PMIC manufacturer MPS to snatch the tenth spot, driven by strong demand for smartphone stockpiling.

NVIDIA's revenue soared 45.7% to US$16.5 billion in the third quarter, bolstered by sustained demand for generative AI and LLMs. Its data center business—accounting for nearly 80% of its revenue—was a key driver in this exceptional growth.

AMD Showcases Growing Momentum for AMD Powered AI Solutions from the Data Center to PCs

Today at the "Advancing AI" event, AMD was joined by industry leaders including Microsoft, Meta, Oracle, Dell Technologies, HPE, Lenovo, Supermicro, Arista, Broadcom and Cisco to showcase how these companies are working with AMD to deliver advanced AI solutions spanning from cloud to enterprise and PCs. AMD launched multiple new products at the event, including the AMD Instinct MI300 Series data center AI accelerators, ROCm 6 open software stack with significant optimizations and new features supporting Large Language Models (LLMs) and Ryzen 8040 Series processors with Ryzen AI.

"AI is the future of computing and AMD is uniquely positioned to power the end-to-end infrastructure that will define this AI era, from massive cloud installations to enterprise clusters and AI-enabled intelligent embedded devices and PCs," said AMD Chair and CEO Dr. Lisa Su. "We are seeing very strong demand for our new Instinct MI300 GPUs, which are the highest-performance accelerators in the world for generative AI. We are also building significant momentum for our data center AI solutions with the largest cloud companies, the industry's top server providers, and the most innovative AI startups ꟷ who we are working closely with to rapidly bring Instinct MI300 solutions to market that will dramatically accelerate the pace of innovation across the entire AI ecosystem."

Ethernet Switch Chips are Now Infected with AI: Broadcom Announces Trident 5-X12

Artificial intelligence has been a hot topic this year, and everything is now an AI processor, from CPUs to GPUs, NPUs, and beyond. However, it was only a matter of time before AI processing elements were integrated into networking chips. Today, Broadcom announced its new Ethernet switching silicon, the Trident 5-X12. The Trident 5-X12 delivers 16 Tb/s of bandwidth, double that of the previous Trident generation, while adding support for fast 800G ports for connection to Tomahawk 5 spine switch chips. The 5-X12 is software-upgradable and optimized for dense 1RU top-of-rack designs, enabling configurations with up to 48x200G downstream server ports and 8x800G upstream fabric ports. The 800G support comes from 100G-PAM4 SerDes, which enable up to 4 m DAC cables and linear optics.
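
Those port counts line up exactly with the headline figure: the downstream and upstream ports together consume the full 16 Tb/s of switching bandwidth. A quick arithmetic check:

```c
#include <stdio.h>

int main(void) {
    /* Port configuration quoted for a dense 1RU top-of-rack design. */
    const int downstream_ports = 48, downstream_gbps = 200;
    const int upstream_ports   = 8,  upstream_gbps   = 800;

    const int total_gbps = downstream_ports * downstream_gbps   /*  9,600 Gb/s */
                         + upstream_ports   * upstream_gbps;    /* +6,400 Gb/s */

    printf("Total switching bandwidth: %d Gb/s (%.0f Tb/s)\n",
           total_gbps, total_gbps / 1000.0);                    /* 16,000 Gb/s = 16 Tb/s */
    return 0;
}
```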

This is not just a standalone switch chip, however. Broadcom has added AI processing elements in the form of an on-chip inference engine called NetGNT (Networking General-purpose Neural-network Traffic-analyzer), which can detect common traffic patterns and optimize data movement across the chip. The company gives AI/ML workloads as an example: NetGNT performs intelligent traffic analysis to avoid network congestion in these workloads. It can, for instance, detect so-called "incast" patterns in real time, where many flows converge simultaneously on the same port. By recognizing the onset of incast early, NetGNT can invoke hardware-based congestion control techniques to prevent performance degradation without adding latency.
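
As a conceptual illustration only (this is not Broadcom's implementation, and every name and threshold below is invented for the example), incast detection can be thought of as counting how many distinct flows target the same egress port within a short sampling window and triggering congestion control once a threshold is crossed:

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_PORTS        64   /* hypothetical switch port count           */
#define INCAST_THRESHOLD 16   /* hypothetical "many flows" trigger point  */

/* Distinct flows observed per egress port in the current sampling window. */
static uint32_t flows_per_port[NUM_PORTS];

/* Called once per newly observed flow. Returns true when the egress port
 * looks like the target of an incast event, i.e. when hardware congestion
 * control (marking, pausing senders, etc.) should be invoked. */
bool note_flow_and_check_incast(uint32_t egress_port) {
    if (egress_port >= NUM_PORTS)
        return false;
    return ++flows_per_port[egress_port] >= INCAST_THRESHOLD;
}

/* Reset the counters at the end of each sampling window. */
void end_window(void) {
    for (int p = 0; p < NUM_PORTS; p++)
        flows_per_port[p] = 0;
}
```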

Broadcom Announces Successful Acquisition of VMware

Broadcom is thrilled to announce its successful acquisition of VMware, and the start of a new and exciting era for all of us at the company. VMware joins our engineering-first, innovation-centric team, which is another important step forward in building the world's leading infrastructure technology company.

While an important moment for Broadcom, it's also an exciting milestone for our customers around the world. And as I said when we first announced the acquisition, we can now come together and have the scale to help global enterprises address their complex IT infrastructure challenges by enabling private and hybrid cloud environments and helping them deploy an "apps anywhere" strategy. Our goal is to help customers optimize their private, hybrid and multi-cloud environments, allowing them to run applications and services anywhere.

Comcast and Broadcom to Develop the World's First AI-Powered Access Network With Pioneering New Chipset

Comcast and Broadcom today announced joint efforts to develop the world's first AI-powered access network with a new chipset that embeds artificial intelligence (AI) and machine learning (ML) within the nodes, amps and modems that comprise the last few miles of Comcast's network. With these new capabilities broadly deployed throughout the network, Comcast will be able to transform its operations by automating more network functions and deliver an improved customer experience through better and more actionable intelligence.

Additionally, the new chipset will be the first in the world to incorporate DOCSIS 4.0 Full Duplex (FDX), Extended Spectrum DOCSIS (ESD), and the ability to run both simultaneously, enabling Internet service providers across the globe to deliver DOCSIS 4.0 services using a toolkit with technology options to meet their business needs. DOCSIS 4.0 is the next-generation network technology that will introduce symmetrical multi-gigabit Internet speeds, lower latency, and even better security and reliability to hundreds of millions of people and businesses over their existing connections, without the need for major construction of new network infrastructure.

ASUS Showcases Cutting-Edge Cloud Solutions at OCP Global Summit 2023

ASUS, a global infrastructure solution provider, is excited to announce its participation in the 2023 OCP Global Summit, which is taking place from October 17-19, 2023, at the San Jose McEnery Convention Center. The prestigious annual event brings together industry leaders, innovators and decision-makers from around the world to explore and discuss the latest advancements in open infrastructure and cloud technologies, providing a perfect stage for ASUS to unveil its latest cutting-edge products.

The ASUS theme for the OCP Global Summit is "Solutions beyond limits—ASUS empowers AI, cloud, telco and more," and the company will showcase an array of products.

Raspberry Pi Foundation Launches Raspberry Pi 5

It has been over four years since the release of the Raspberry Pi 4, and in that time a lot has changed in the maker board and single-board computer landscape. The Raspberry Pi Foundation struggled with worldwide demand and production capacity during the global pandemic starting in 2020, and plenty of new competitors arrived to offer ready-to-order alternatives to the venerable RPi 4. Today, however, the production woes have been assuaged and a new generation of Raspberry Pi is here: the Raspberry Pi 5.

Unlike every prior RPi device launch, the Raspberry Pi 5 is being announced in advance of availability. Pre-orders are open with many of the Approved Resellers listed on RPi's website starting today, but unit shipments aren't expected until near the end of October 2023. As part of this pre-order scheme, the RPi Foundation is withholding pre-orders from bulk customers and will deal only in single-unit sales for individuals until at least the end of the year, as well as running promotions with The MagPi and HackSpace magazines to give their subscribers priority access. That is genuinely nice to see, considering how hard it was for the average buyer to obtain a Pi 4 over the last couple of years. The two announced prices for the RPi 5 are $60 USD for the 4 GB variant and $80 USD for the 8 GB variant, or about $5 USD more than current reseller pricing on comparable configurations of the Raspberry Pi 4.

Broadcom Partners with Google Cloud to Strengthen Gen AI-Powered Cybersecurity

Symantec, a division of Broadcom Inc., is partnering with Google Cloud to embed generative AI (gen AI) into the Symantec Security platform in a phased rollout that will give customers a significant technical edge for detecting, understanding, and remediating sophisticated cyber attacks.

Symantec is leveraging the Google Cloud Security AI Workbench and its security-specific large language model (LLM), Sec-PaLM 2, across its portfolio to enable natural language interfaces and generate more comprehensive and easier-to-understand threat analyses. With Security AI Workbench-powered summarization of complex incidents and alignment to MITRE ATT&CK context, security operations center (SOC) analysts of all levels can better understand threats and respond faster. That, in turn, translates into greater security and higher SOC productivity.

TSMC Ramps Up CoWoS Advanced Packaging Production to Meet Soaring AI Chip Demand

The burgeoning AI market is significantly impacting TSMC's CoWoS (Chip on Wafer on Substrate) advanced packaging production capacity, causing it to overflow due to high demand from major companies like NVIDIA, AMD, and Amazon. To accommodate this, TSMC is in the process of expanding its production capacity by acquiring additional CoWoS machines from equipment manufacturers like Xinyun, Wanrun, Hongsu, Titanium, and Qunyi. These expansions are expected to be operational in the first half of the next year, leading to an increased monthly production capacity, potentially close to 30,000 pieces, enabling TSMC to cater to more AI-related orders. These endeavors to increase capacity are in response to the amplified demand for AI chips from their applications in various domains, including autonomous vehicles and smart factories.

Despite TSMC's active steps to enlarge its CoWoS advanced packaging production, the overwhelming client demand is driving the company to place additional orders with equipment suppliers. It has been indicated that NVIDIA is currently TSMC's largest CoWoS advanced packaging customer, accounting for 60% of its production capacity. Due to the surge in demand, companies like AMD, Amazon, and Broadcom are also placing urgent orders, leading to a substantial increase in TSMC's advanced process capacity utilization. The overall situation indicates a thriving scenario for equipment manufacturers with clear visibility of orders extending into the following year, even as they navigate the challenges of fulfilling the rapidly growing and immediate demand in the AI market.

Q2 Revenue for Top 10 Global IC Houses Surges by 12.5% as Q3 on Pace to Set New Record

Fueled by an AI-driven inventory stocking frenzy across the supply chain, TrendForce reveals that Q2 revenue for the top 10 global IC design powerhouses soared to US $38.1 billion, marking a 12.5% quarterly increase. In this rising tide, NVIDIA seized the crown, officially dethroning Qualcomm as the world's premier IC design house, while the remainder of the leaderboard remained stable.

AI charges ahead, buoying IC design performance amid a seasonal stocking slump
NVIDIA is reaping the rewards of a global transformation. Bolstered by global demand from CSPs, internet behemoths, and enterprises diving into generative AI and large language models, NVIDIA's data center revenue skyrocketed by a whopping 105%. A deluge of shipments, including its advanced Hopper and Ampere architecture HGX systems and high-performing InfiniBand networking, played a pivotal role. Beyond that, both the gaming and professional visualization sectors thrived on the allure of fresh product launches. Clocking Q2 revenue of US$11.33 billion (a 68.3% surge), NVIDIA has vaulted over both Qualcomm and Broadcom to seize the IC design throne.

TSMC, Broadcom & NVIDIA Alliance Reportedly Set to Advance Silicon Photonics R&D

Taiwan's Economic Daily reckons that a freshly formed partnership between TSMC, Broadcom, and NVIDIA will result in the development of cutting-edge silicon photonics. The likes of IBM, Intel, and various academic institutes are already deep into their own research and development efforts, but the alleged new alliance is said to focus on advancing AI computing hardware. The report cites a significant allocation of roughly 200 TSMC staffers to R&D on the integration of silicon photonics technologies into high-performance computing (HPC) solutions. The partners are very likely hoping that the use of optical interconnects (on a silicon medium) will result in greater data transfer rates between and within microchips. Other benefits include longer transmission distances and lower power consumption.

TSMC vice president Yu Zhenhua, much like his boss, has placed emphasis on innovation across the industry-wide development process: "If we can provide a good silicon photonics integrated system, we can solve the two key issues of energy efficiency and AI computing power. This will be a new... paradigm shift. We may be at the beginning of a new era." The firm is facing unprecedented demand from its clients and hopes to further expand its advanced chip packaging capacity to address it by late 2024. A shift away from the limitations of "conventional electric" data transmissions could bring next-generation AI compute GPUs onto the market by 2025.