News Posts matching #Broadcom

OpenAI Designs its First AI Chip in Collaboration with Broadcom and TSMC

According to a recent Reuters report, OpenAI is pressing ahead with its custom silicon plans, expanding beyond its reported talks with Broadcom into a broader strategy involving multiple industry leaders. Broadcom is a fabless chip designer known for a wide range of silicon solutions spanning networking, PCIe, SSD controllers, and PHYs all the way up to custom ASICs. The company behind ChatGPT is actively working with both Broadcom and TSMC to develop its first proprietary AI chip, focused specifically on inference. Designing a custom chip for training runs is a more complex task, and OpenAI is leaving that to its current partners until it works out the details. Even for inference alone, the scale at which OpenAI serves its models makes it financially sensible to develop custom silicon tailored to its infrastructure needs.

This time, the initiative represents a more concrete and nuanced approach than previously understood. Rather than just exploratory discussions, OpenAI has assembled a dedicated chip team of approximately 20 people, led by former Google TPU engineers Thomas Norrie and Richard Ho. The company has secured manufacturing capacity with TSMC, targeting a 2026 timeline for its first custom-designed chip. While Broadcom's involvement leverages its expertise in helping companies optimize chip designs for manufacturing and manage data movement between chips—crucial for AI systems running thousands of processors in parallel—OpenAI is simultaneously diversifying its compute strategy. This includes adding AMD's Instinct MI300X chips to its infrastructure alongside its existing NVIDIA deployments. Meta is taking a similar approach, training its models on NVIDIA GPUs and serving them to the public (inference) on AMD Instinct MI300X accelerators.

Intel's Silver Lining is $8.5 Billion CHIPS Act Funding, Possibly by the End of the Year

Intel's recent financial woes have pushed the company into severe cost-cutting measures, including job cuts and project delays. However, a silver lining remains—Intel is reportedly in the final stages of securing $8.5 billion in direct funding from the US government under the CHIPS Act, to be delivered by the end of the year. The potential financing comes at a crucial time for Intel, which has been grappling with financial challenges; the company reported a $1.6 billion loss in the second quarter of 2024, leading to short-term setbacks. According to sources cited by the Financial Times, Intel's award would represent the CHIPS Act's largest share, delivering a massive boost to US-based semiconductor manufacturing.

Looking ahead, the potential CHIPS Act funding could serve as a catalyst for Intel's resurgence, reassuring both investors and customers about the company's future. A key element of Intel's recovery strategy lies in the ramp-up of production for its advanced 18A node, which should become the primary revenue driver for its foundry unit. This advancement, coupled with the anticipated government backing, positions Intel to potentially capture market share from established players like TSMC and Samsung. The company has already secured high-profile customers such as Amazon and (allegedly) Broadcom, hinting at its growing appeal in the foundry space. Moreover, Intel's enhanced domestic manufacturing capabilities align well with potential US government mandates for companies like NVIDIA and Apple to produce processors locally, a consideration driven by escalating geopolitical tensions.

Intel 20A Node Cancelled for Foundry Customers, "Arrow Lake" Mainly Manufactured Externally

Intel has announced the cancellation of its 20A node for Foundry customers and will shift the majority of "Arrow Lake" production to external foundries. The tech giant will instead focus its resources on the more advanced 18A node while relying on external partners for Arrow Lake production, likely tapping TSMC or Samsung and their 2 nm-class nodes. The decision follows Intel's release of the 18A Process Design Kit (PDK) 1.0 in July, which garnered positive feedback from the ecosystem, according to the company. Intel reports that the 18A node is already operational, booting operating systems and yielding well, keeping the company on track for a 2025 launch. This early success has enabled Intel to reallocate engineering resources from 20A to 18A sooner than anticipated. As a result, the "Arrow Lake processor family will be built primarily using external partners and packaged by Intel Foundry".

The 20A node, while now cancelled for Arrow Lake, has played a crucial role in Intel's journey towards 18A. It served as a testbed for new techniques, materials, and transistor architectures essential for advancing Moore's Law. The 20A node successfully integrated both the RibbonFET gate-all-around transistor architecture and PowerVia backside power delivery for the first time, providing valuable insights that directly informed the development of 18A. Intel's decision to focus on 18A is also driven by economic factors. With the current 18A defect density already below D0 = 0.40 defects per square centimeter, the company sees an opportunity to optimize its engineering investments by transitioning now. However, challenges remain, as evidenced by recent reports of Broadcom's disappointment with the 18A node. Despite these hurdles, Intel remains optimistic about the future of its foundry services and the potential of its advanced manufacturing processes. The coming months will be crucial as the company works to demonstrate the capabilities of its 18A node and secure more partners for its foundry business.

Broadcom's Testing of Intel 18A Node Signals Disappointment, Still Not Ready for High-Volume Production

According to a recent Reuters report, Intel's 18A node doesn't seem to be production-ready. As the sources indicate, Broadcom has reportedly been testing Intel's 18A node on its internal designs, which span an extensive range of products from AI accelerators to networking switches. However, after receiving the initial production run from Intel, Broadcom found the 18A node to be in a worse state than expected. After powering on and testing the wafers, Broadcom reportedly concluded that the 18A process is not yet ready for high-volume production. Since Broadcom's assessment concerns high-volume production, it signals that the 18A node is not yet producing yields good enough to satisfy external customers.

While this is not a good sign for Intel Foundry's contract business, it suggests the node is presumably in a good state in terms of power and performance. Intel's CEO Pat Gelsinger confirmed that 18A is now at a defect density (D0) of 0.4 and called it a "healthy process." However, alternatives exist at TSMC, which is proving a very challenging competitor to take on: its N7 and N5 nodes had a defect density of 0.33 during development and 0.1 in high-volume production. Lower defect density leads to better yields and lower costs for the contracting party, resulting in higher profits. Ultimately, it is up to Intel to improve its production process further to satisfy customers. Gelsinger wants Intel Foundry to be "manufacturing ready" by the end of the year, with the first designs reaching volume production in 2025. There are still a few months left to improve the node, and we expect to see changes implemented by the end of the year.
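
To put those defect-density numbers in perspective, a common first-order approximation is the Poisson yield model, where die yield is Y = exp(-A * D0), with A the die area in cm² and D0 the defect density in defects per cm². The sketch below applies it to the figures quoted above; the 1 cm² die area is an assumed, purely illustrative value, not anything disclosed by Intel or TSMC.

```python
import math

def poisson_yield(die_area_cm2: float, d0_per_cm2: float) -> float:
    """First-order Poisson yield estimate: Y = exp(-A * D0)."""
    return math.exp(-die_area_cm2 * d0_per_cm2)

DIE_AREA = 1.0  # assumed 1 cm^2 (100 mm^2) die, illustrative only

for label, d0 in [("Intel 18A (reported)", 0.40),
                  ("TSMC N7/N5, development", 0.33),
                  ("TSMC N7/N5, high volume", 0.10)]:
    print(f"{label}: D0 = {d0:.2f}/cm^2 -> ~{poisson_yield(DIE_AREA, d0):.0%} yield")
```

Under those assumptions, moving from D0 = 0.40 to 0.10 lifts the estimated yield of a 1 cm² die from roughly 67% to about 90%, which is the yield and cost gap the comparison above alludes to.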

OpenAI in Talks with Broadcom About Developing Custom AI Chips to Power Next Generation Models

According to The Information, OpenAI is reportedly in talks with Broadcom about developing a custom AI accelerator to power OpenAI's growing demand for high-performance solutions. Broadcom is a fabless chip designer known for a wide range of silicon solutions spanning networking, PCIe, SSD controllers, and PHYs all the way up to custom ASICs. The latter is what OpenAI wants to focus on, but all of the aforementioned IP developed by Broadcom is useful in a data center. Should OpenAI decide to use Broadcom's solutions, the fabless silicon designer offers a complete vertical stack of products: inter-system communication using various protocols such as PCIe, system-to-system communication over Ethernet with Broadcom Tomahawk 6 and future revisions, alongside storage solutions and many other complementary elements of a data center.

As a company skilled in a wide variety of IP, Broadcom also builds ASIC solutions for other companies, and it assisted Google in developing its Tensor Processing Unit (TPU), now in its sixth generation. Google's TPUs are massively successful: Google deploys millions of them and uses them to provide AI services to billions of users across the globe. Now OpenAI wants to be part of the AI chip game, and Broadcom could come to the rescue with its established AI silicon track record and broad data center portfolio, helping build a custom AI accelerator to power the infrastructure OpenAI needs for its next generation of AI models. With each new AI model released by OpenAI, compute demand spikes by several orders of magnitude, and an AI accelerator that exactly matches its needs would help the company move faster and run even bigger models.

ByteDance and Broadcom to Collaborate on Advanced AI Chip

ByteDance, TikTok's parent company, is reportedly working with American chip designer Broadcom to develop a cutting-edge AI processor. This collaboration could secure a stable supply of high-performance chips for ByteDance, according to Reuters. Sources claim the joint project involves a 5 nm Application-Specific Integrated Circuit (ASIC), designed to comply with U.S. export regulations. TSMC is slated to manufacture the chip, though production is not expected to begin this year.

This partnership marks a significant development in U.S.-China tech relations, as no public announcements of such collaborations on advanced chips have been made since Washington implemented stricter export controls in 2022. For ByteDance, this move could reduce procurement costs and ensure a steady chip supply, crucial for powering its array of popular apps, including TikTok and the ChatGPT-like AI chatbot "Doubao." The company has already invested heavily in AI chips, reportedly spending $2 billion on NVIDIA processors in 2023.

Broadcom Unveils Newest Innovations for VMware Cloud Foundation

Broadcom Inc. today unveiled the latest updates to VMware Cloud Foundation (VCF), the company's flagship private cloud platform. The latest advancements in VCF support customers' digital innovation with faster infrastructure modernization, improved developer productivity, and better cyber resiliency and security with low total cost of ownership.

"VMware Cloud Foundation is the industry's first private-cloud platform to offer the combined power of public and private clouds with unmatched operational simplicity and proven total cost of ownership value," said Paul Turner, Vice President of Products, VMware Cloud Foundation Division, Broadcom. "With our latest release, VCF is delivering on key requirements driven by customer input. The new VCF Import functionality will be a game changer in accelerating VCF adoption and improving time to value. We are also delivering a set of new capabilities that helps IT more quickly meet the needs of developers without increasing business risk. This latest release of VCF puts us squarely on the path to delivering on the full promise of VCF for our customers."

AMD, Broadcom, Cisco, Google, HPE, Intel, Meta and Microsoft Form Ultra Accelerator Link (UALink) Promoter Group to Combat NVIDIA NVLink

AMD, Broadcom, Cisco, Google, Hewlett Packard Enterprise (HPE), Intel, Meta and Microsoft today announced they have aligned to develop a new industry standard dedicated to advancing high-speed, low-latency communication for linking scale-up AI systems in data centers.

Called the Ultra Accelerator Link (UALink), this initial group will define and establish an open industry standard that will enable AI accelerators to communicate more effectively. By creating an interconnect based upon open standards, UALink will enable system OEMs, IT professionals and system integrators to create a pathway for easier integration, greater flexibility and scalability of their AI-connected data centers.

The Wi-Fi Alliance Introduces Wi-Fi CERTIFIED 7

Wi-Fi CERTIFIED 7 is here, introducing powerful new features that boost Wi-Fi performance and improve connectivity across a variety of environments. Cutting-edge capabilities in Wi-Fi CERTIFIED 7 enable innovations that rely on high throughput, deterministic latency, and greater reliability for critical traffic. New use cases - including multi-user AR/VR/XR, immersive 3-D training, electronic gaming, hybrid work, industrial IoT, and automotive - will advance as a result of the latest Wi-Fi generation. Wi-Fi CERTIFIED 7 represents the culmination of extensive collaboration and innovation within Wi-Fi Alliance, facilitating worldwide product interoperability and a robust, sophisticated device ecosystem.

Wi-Fi 7 will see rapid adoption across a broad ecosystem with more than 233 million devices expected to enter the market in 2024, growing to 2.1 billion devices by 2028. Smartphones, PCs, tablets, and access points (APs) will be the earliest adopters of Wi-Fi 7, and customer premises equipment (CPE) and augmented and virtual reality (AR/VR) equipment will continue to gain early market traction. Wi-Fi CERTIFIED 7 pushes the boundaries of today's wireless connectivity, and Wi-Fi CERTIFIED helps ensure advanced features are deployed in a consistent way to deliver high-quality user experiences.

Top Ten IC Design Houses Ride Wave of Seasonal Consumer Demand and Continued AI Boom to See 17.8% Increase in Quarterly Revenue in 3Q23

TrendForce reports that 3Q23 has been a historic quarter for the world's leading IC design houses as total revenue soared 17.8% to reach a record-breaking US$44.7 billion. This remarkable growth is fueled by a robust season of stockpiling for smartphones and laptops, combined with a rapid acceleration in the shipment of generative AI chips and components. NVIDIA, capitalizing on the AI boom, emerged as the top performer in revenue and market share. Notably, analog IC supplier Cirrus Logic overtook US PMIC manufacturer MPS to snatch the tenth spot, driven by strong demand for smartphone stockpiling.

NVIDIA's revenue soared 45.7% to US$16.5 billion in the third quarter, bolstered by sustained demand for generative AI and LLMs. Its data center business—accounting for nearly 80% of its revenue—was a key driver in this exceptional growth.

AMD Showcases Growing Momentum for AMD Powered AI Solutions from the Data Center to PCs

Today at the "Advancing AI" event, AMD was joined by industry leaders including Microsoft, Meta, Oracle, Dell Technologies, HPE, Lenovo, Supermicro, Arista, Broadcom and Cisco to showcase how these companies are working with AMD to deliver advanced AI solutions spanning from cloud to enterprise and PCs. AMD launched multiple new products at the event, including the AMD Instinct MI300 Series data center AI accelerators, ROCm 6 open software stack with significant optimizations and new features supporting Large Language Models (LLMs) and Ryzen 8040 Series processors with Ryzen AI.

"AI is the future of computing and AMD is uniquely positioned to power the end-to-end infrastructure that will define this AI era, from massive cloud installations to enterprise clusters and AI-enabled intelligent embedded devices and PCs," said AMD Chair and CEO Dr. Lisa Su. "We are seeing very strong demand for our new Instinct MI300 GPUs, which are the highest-performance accelerators in the world for generative AI. We are also building significant momentum for our data center AI solutions with the largest cloud companies, the industry's top server providers, and the most innovative AI startups ꟷ who we are working closely with to rapidly bring Instinct MI300 solutions to market that will dramatically accelerate the pace of innovation across the entire AI ecosystem."

Ethernet Switch Chips are Now Infected with AI: Broadcom Announces Trident 5-X12

Artificial intelligence has been a hot topic this year, and everything is now an AI processor, from CPUs to GPUs, NPUs, and many others. However, it was only a matter of time before we saw an integration of AI processing elements into the networking chips. Today, Broadcom announced its new Ethernet switching silicon called Trident 5-X12. The Trident 5-X12 delivers 16 Tb/s of bandwidth, double that of the previous Trident generation while adding support for fast 800G ports for connection to Tomahawk 5 spine switch chips. The 5-X12 is software-upgradable and optimized for dense 1RU top-of-rack designs, enabling configurations with up to 48x200G downstream server ports and 8x800G upstream fabric ports. The 800G support is added using 100G-PAM4 SerDes, which enables up to 4 m DAC and linear optics.
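
As a quick sanity check, the quoted port configuration does add up to the headline switching bandwidth. The snippet below uses only the port counts and speeds cited above; the lane count per 800G port is implied by the 100G-PAM4 SerDes mentioned.

```python
# Sanity check of the Trident 5-X12 figures quoted above (all values in Gb/s).
downstream = 48 * 200              # 48 x 200G server-facing ports -> 9,600 Gb/s
upstream = 8 * 800                 # 8 x 800G fabric-facing ports  -> 6,400 Gb/s
serdes_per_800g_port = 800 // 100  # eight 100G-PAM4 SerDes lanes per 800G port

total = downstream + upstream
print(f"{total} Gb/s = {total / 1000:.0f} Tb/s")  # prints "16000 Gb/s = 16 Tb/s"
```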

However, this is not just a standalone switch chip. Broadcom has added AI processing elements in the form of an inference engine called NetGNT (Networking General-purpose Neural-network Traffic-analyzer), which can detect common traffic patterns and optimize data movement across the chip. The company cites AI/ML workloads as an example, where NetGNT performs intelligent traffic analysis to avoid network congestion. For instance, it can detect so-called "incast" patterns in real time, where many flows converge simultaneously on the same port. By recognizing the start of incast early, NetGNT can invoke hardware-based congestion control techniques to prevent performance degradation without adding latency.
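
Broadcom has not published how NetGNT is implemented, so the following is only a conceptual sketch of what "recognizing the start of incast" can look like in software terms: count how many distinct flows converge on the same egress port within a short window, and hand off to a congestion-control hook once a threshold is crossed. All names, thresholds, and the window length here are hypothetical.

```python
from collections import defaultdict

# Hypothetical sketch only; NetGNT's internals are not public. The idea: flag an
# egress port as a likely incast victim when many distinct flows converge on it
# within a short window, then invoke a congestion-control hook (a stub here).

WINDOW_US = 50        # assumed observation window, in microseconds
FLOW_THRESHOLD = 32   # assumed number of converging flows that signals incast

def detect_incast(packets, on_incast):
    """packets: iterable of (timestamp_us, flow_id, egress_port) tuples."""
    window_start = {}
    flows_per_port = defaultdict(set)
    for ts, flow_id, port in packets:
        if port not in window_start or ts - window_start[port] > WINDOW_US:
            window_start[port] = ts        # open a fresh window for this port
            flows_per_port[port].clear()
        flows_per_port[port].add(flow_id)
        if len(flows_per_port[port]) >= FLOW_THRESHOLD:
            on_incast(port, ts)            # e.g. trigger ECN marking or pausing
            flows_per_port[port].clear()

# Example: 40 distinct flows hitting egress port 7 within a few microseconds.
trace = [(i, f"flow-{i}", 7) for i in range(40)]
detect_incast(trace, lambda port, ts: print(f"incast suspected on port {port} at t={ts} us"))
```

In a real switch this detection and the resulting congestion response run in hardware at line rate; the sketch only illustrates the pattern being recognized.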

Broadcom Announces Successful Acquisition of VMware

Broadcom is thrilled to announce its successful acquisition of VMware, and the start of a new and exciting era for all of us at the company. VMware joins our engineering-first, innovation-centric team, which is another important step forward in building the world's leading infrastructure technology company.

While an important moment for Broadcom, it's also an exciting milestone for our customers around the world. And as I said when we first announced the acquisition, we can now come together and have the scale to help global enterprises address their complex IT infrastructure challenges by enabling private and hybrid cloud environments and helping them deploy an "apps anywhere" strategy. Our goal is to help customers optimize their private, hybrid and multi-cloud environments, allowing them to run applications and services anywhere.

Comcast and Broadcom to Develop the World's First AI-Powered Access Network With Pioneering New Chipset

Comcast and Broadcom today announced joint efforts to develop the world's first AI-powered access network with a new chipset that embeds artificial intelligence (AI) and machine learning (ML) within the nodes, amps and modems that comprise the last few miles of Comcast's network. With these new capabilities broadly deployed throughout the network, Comcast will be able to transform its operations by automating more network functions and deliver an improved customer experience through better and more actionable intelligence.

Additionally, the new chipset will be the first in the world to incorporate DOCSIS 4.0 Full Duplex (FDX), Extended Spectrum DOCSIS (ESD) and the ability to run both simultaneously, enabling Internet service providers across the globe to deliver DOCSIS 4.0 services using a toolkit with technology options to meet their business needs. DOCSIS 4.0 is the next-generation network technology that will introduce symmetrical multi-gigabit Internet speeds, lower latency, and even better security and reliability to hundreds of millions of people and businesses over their existing connections without the need for major construction of new network infrastructure.

ASUS Showcases Cutting-Edge Cloud Solutions at OCP Global Summit 2023

ASUS, a global infrastructure solution provider, is excited to announce its participation in the 2023 OCP Global Summit, which is taking place from October 17-19, 2023, at the San Jose McEnery Convention Center. The prestigious annual event brings together industry leaders, innovators and decision-makers from around the world to explore and discuss the latest advancements in open infrastructure and cloud technologies, providing a perfect stage for ASUS to unveil its latest cutting-edge products.

The ASUS theme for the OCP Global Summit is "Solutions beyond limits—ASUS empowers AI, cloud, telco and more." We will showcase an array of products.

Raspberry Pi Foundation Launches Raspberry Pi 5

It has been over four years since the release of the Raspberry Pi 4, and in that time a lot has changed in the maker board and single-board computer landscape. The Raspberry Pi Foundation struggled with worldwide demand and production capacity constraints brought on by the global pandemic starting in 2020, and plenty of new competitors arrived on the scene offering ready-to-order alternatives to the venerable RPi 4. Today, however, the production woes have been assuaged and a new generation of Raspberry Pi is here: the Raspberry Pi 5.

Unlike every prior RPi device launch, the Raspberry Pi 5 is being announced in advance of availability. Pre-orders are open with many of the Approved Resellers listed on RPi's website starting today, but unit shipments aren't expected until near the end of October 2023. As part of this pre-order scheme, the RPi Foundation is withholding pre-orders from bulk customers and will be dealing in single-unit sales for individuals until at least the end of the year, as well as running promotions with The MagPi and HackSpace magazines to give priority access to their subscribers. That's genuinely nice to see, considering how hard it was for the average Joe to obtain a Pi 4 over the last couple of years. The two announced prices for the RPi 5 are $60 USD for the 4 GB variant and $80 USD for the 8 GB variant, or about $5 USD more than current reseller pricing on comparable configurations of the Raspberry Pi 4.

Broadcom Partners with Google Cloud to Strengthen Gen AI-Powered Cybersecurity

Symantec, a division of Broadcom Inc., is partnering with Google Cloud to embed generative AI (gen AI) into the Symantec Security platform in a phased rollout that will give customers a significant technical edge for detecting, understanding, and remediating sophisticated cyber attacks.

Symantec is leveraging the Google Cloud Security AI Workbench and its security-specific large language model (LLM), Sec-PaLM 2, across its portfolio to enable natural language interfaces and generate more comprehensive and easy-to-understand threat analyses. With Security AI Workbench-powered summarization of complex incidents and alignment to MITRE ATT&CK context, security operations center (SOC) analysts of all levels can better understand threats and respond faster. That, in turn, translates into greater security and higher SOC productivity.

TSMC Ramps Up CoWoS Advanced Packaging Production to Meet Soaring AI Chip Demand

The burgeoning AI market is significantly straining TSMC's CoWoS (Chip on Wafer on Substrate) advanced packaging capacity, which is overflowing with high demand from major companies like NVIDIA, AMD, and Amazon. To accommodate this, TSMC is expanding its production capacity by acquiring additional CoWoS machines from equipment manufacturers like Xinyun, Wanrun, Hongsu, Titanium, and Qunyi. These expansions are expected to be operational in the first half of next year, raising monthly production capacity to potentially close to 30,000 units and enabling TSMC to cater to more AI-related orders. The capacity increases are a response to amplified demand for AI chips across various domains, including autonomous vehicles and smart factories.

Despite TSMC's active steps to enlarge its CoWoS advanced packaging production, the overwhelming client demand is driving the company to place additional orders with equipment suppliers. It has been indicated that NVIDIA is currently TSMC's largest CoWoS advanced packaging customer, accounting for 60% of its production capacity. Due to the surge in demand, companies like AMD, Amazon, and Broadcom are also placing urgent orders, leading to a substantial increase in TSMC's advanced process capacity utilization. The overall situation indicates a thriving scenario for equipment manufacturers with clear visibility of orders extending into the following year, even as they navigate the challenges of fulfilling the rapidly growing and immediate demand in the AI market.

Q2 Revenue for Top 10 Global IC Houses Surges by 12.5% as Q3 on Pace to Set New Record

Fueled by an AI-driven inventory stocking frenzy across the supply chain, TrendForce reveals that Q2 revenue for the top 10 global IC design powerhouses soared to US $38.1 billion, marking a 12.5% quarterly increase. In this rising tide, NVIDIA seized the crown, officially dethroning Qualcomm as the world's premier IC design house, while the remainder of the leaderboard remained stable.

AI charges ahead, buoying IC design performance amid a seasonal stocking slump
NVIDIA is reaping the rewards of a global transformation. Bolstered by the global demand from CSPs, internet behemoths, and enterprises diving into generative AI and large language models, NVIDIA's data center revenue skyrocketed by a whopping 105%. A deluge of shipments, including the likes of their advanced Hopper and Ampere architecture HGX systems and the high-performing InfiniBand, played a pivotal role. Beyond that, both gaming and professional visualization sectors thrived under the allure of fresh product launches. Clocking a Q2 revenue of US$11.33 billion (a 68.3% surge), NVIDIA has vaulted over both Qualcomm and Broadcom to seize the IC design throne.

TSMC, Broadcom & NVIDIA Alliance Reportedly Set to Advance Silicon Photonics R&D

Taiwan's Economic Daily reckons that a freshly formed partnership between TSMC, Broadcom, and NVIDIA will result in the development of cutting-edge silicon photonics. The likes of IBM, Intel and various academic institutes are already deep into their own research and development efforts, but the alleged new alliance is said to focus on advancing AI computing hardware. The report cites a significant allocation of roughly 200 TSMC staffers to R&D on integrating silicon photonics into high-performance computing (HPC) solutions. They are very likely hoping that the use of optical interconnects (on a silicon medium) will result in greater data transfer rates between and within microchips. Other benefits include longer transmission distances and lower power consumption.

TSMC vice president Yu Zhenhua has, much like his boss, placed emphasis on innovation within the industry-wide development process: "If we can provide a good silicon photonics integrated system, we can solve the two key issues of energy efficiency and AI computing power. This will be a new... paradigm shift. We may be at the beginning of a new era." The firm is facing unprecedented demand from its clients—it hopes to further expand its advanced chip packaging capacity to address these issues by late 2024. A shift away from the limitations of "conventional electric" data transmissions could bring next-generation AI compute GPUs onto the market by 2025.

Broadcom Provides Regulatory Update on VMware Transaction

Broadcom Inc. (NASDAQ: AVGO), a global technology leader that designs, develops and supplies semiconductor and infrastructure software solutions, today affirmed its expectation that its acquisition of VMware, Inc. (NYSE: VMW) will close on October 30, 2023, and provided an update on its progress with various regulatory agencies.

On August 21, 2023, Broadcom received final transaction approval from the United Kingdom's Competition and Markets Authority. This follows legal merger clearance in the European Union, as well as in Australia, Brazil, Canada, Israel, South Africa, and Taiwan, and foreign investment control clearance in all necessary jurisdictions. In the U.S., the Hart-Scott-Rodino pre-merger waiting periods have expired, and there is no legal impediment to closing under U.S. merger regulations.

Leading Cloud Service, Semiconductor, and System Providers Unite to Form Ultra Ethernet Consortium

Announced today, Ultra Ethernet Consortium (UEC) is bringing together leading companies for industry-wide cooperation to build a complete Ethernet-based communication stack architecture for high-performance networking. Artificial Intelligence (AI) and High-Performance Computing (HPC) workloads are rapidly evolving and require best-in-class functionality, performance, interoperability and total cost of ownership, without sacrificing developer and end-user friendliness. The Ultra Ethernet solution stack will capitalize on Ethernet's ubiquity and flexibility for handling a wide variety of workloads while being scalable and cost-effective.

Ultra Ethernet Consortium is founded by companies with a long-standing history and experience in high-performance solutions. Each member contributes significantly to the broader high-performance ecosystem in an egalitarian manner. The founding members include AMD, Arista, Broadcom, Cisco, Eviden (an Atos Business), HPE, Intel, Meta and Microsoft, who collectively have decades of experience in networking, AI, cloud, and high-performance computing deployments at scale.

EU Commission Conditionally Approves Broadcom's $61 Billion Acquisition of VMware

The European Commission has today announced that it has granted conditional approval—under EU Merger Regulation—for the $61 billion acquisition of VMware by Broadcom: "The approval is conditional upon full compliance with the commitments offered by Broadcom. Today's decision follows an in-depth investigation of the proposed acquisition. Broadcom is a hardware company that offers, among other products, Fibre Channel Host-Bus Adapters ('FC HBAs'), storage adapters and Network Interface Cards ('NICs'), which are hardware components that connect servers to storage or network. Broadcom has recently started expanding into software markets, mainly for security and mainframe applications. VMware is a software supplier offering mainly virtualization software that interoperates with a wide range of hardware, including FC HBAs, storage adapters and NICs."

A company spokesperson commented on the EU administration's conditional approval of the deal: "Broadcom provided the European Commission with a technology access remedy that preserves interoperability, a core principle that would not have changed as a result of this transaction...Broadcom did this to fully address the concerns expressed by the European Commission, and Broadcom welcomes the Commission's decision to accept this access remedy." The aforementioned "concerns" relate to the acquisition resulting in a possible restriction of "competition in the market for certain hardware components which interoperate with VMware's software." Broadcom is aiming to finalize its purchase of VMware by November 1, but it still has to contend with forthcoming judgements from US and UK regulators before that date.

Semiconductor Market Extends Record Decline Into Fifth Quarter

New research from Omdia reveals that the semiconductor market declined in revenue for a fifth straight quarter in the first quarter of 2023. This is the longest recorded period of decline since Omdia began tracking the market in 2002. Revenue in 1Q23 settled at $120.5B, down 9% from 4Q22. The semiconductor market is cyclical, and this prolonged decline follows the upsurge in which the market grew to record revenues in each quarter from 4Q20 through 4Q21, driven by increased demand during the global pandemic.

The memory and MPU markets are the major areas of the semiconductor market contributing to the decline. The MPU market in 1Q23 was $13.1B, just 65% of its size in 1Q22, when it was $20B. The memory market fared worse, with 1Q23 coming in at $19.3B, just 44% of the $43.6B market in 1Q22. The combined MPU and memory markets declined 19% in 1Q23, dragging the overall market down to its 9% quarter-over-quarter (QoQ) decline.
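
A back-of-the-envelope check, using only the figures quoted above (and assuming the 19% combined decline is also quarter-over-quarter, as the sentence implies), shows how much of the overall 9% drop those two segments account for. The 4Q22 values are derived from the stated percentages, not reported directly.

```python
# All inputs are the US$-billion figures quoted in the article.
total_1q23 = 120.5
total_4q22 = total_1q23 / (1 - 0.09)        # ~132.4B, implied by the 9% QoQ drop

combined_1q23 = 13.1 + 19.3                 # MPU + memory in 1Q23 = 32.4B
combined_4q22 = combined_1q23 / (1 - 0.19)  # ~40.0B, implied by the 19% decline

drop = combined_4q22 - combined_1q23        # ~7.6B lost quarter-over-quarter
share = drop / total_4q22 * 100             # in points of the total QoQ decline
print(f"MPU + memory shed ~${drop:.1f}B, roughly {share:.1f} of the 9 "
      f"percentage points of market-wide decline")
```

In other words, by the article's own numbers these two segments account for roughly two-thirds of the quarter's decline.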

ASUS Issues Router Product Security Advisory

If you own one of several recent ASUS router models, then you're being urged by ASUS to upgrade your firmware to the latest release as soon as possible, due to a few serious security flaws. The two most severe are CVE-2022-26376 and CVE-2018-1160, both of which are rated 9.8 out of 10 in severity. However, if you're running the third-party Asuswrt-Merlin firmware, you're apparently safe, as its author has already patched all the known security issues that ASUS has announced patches for.

The affected models are the GT6, GT-AXE16000, GT-AX11000 PRO, GT-AX6000, GT-AX11000, GS-AX5400, GS-AX3000, XT9, XT8, XT8 V2, RT-AX86U PRO, RT-AX86U, RT-AX86S, RT-AX82U, RT-AX58U, RT-AX3000, TUF-AX6000, and TUF-AX5400. That's 18 different models in total, all of which should be built around Broadcom hardware. It's unclear if more models are affected or not, but these are the ones ASUS has issued updates for. The security flaws in question could allow someone to take over an unpatched router and make it a part of a botnet or similar. ASUS has suggested turning off features like DDNS and VPN servers, as well as more obvious things like WAN access, port forwarding, port triggers and DMZ until the firmware has been updated on the affected models.