News Posts matching #chip


China Bans AMD and Intel CPUs from Government Systems

According to a report by the Financial Times, China has banned the use of Intel and AMD chips in government computers. The decision, which aims to reduce reliance on foreign technology and boost domestic semiconductor production, is expected to have far-reaching implications for the global tech industry and geopolitical relations. The Chinese government has instructed PC suppliers to replace foreign-made CPUs with domestic alternatives in all government computers within the next two years. This directive is part of China's broader strategy to achieve self-sufficiency in critical technologies and reduce its vulnerability to potential supply chain disruptions or geopolitical tensions. The ban on Intel and AMD chips is likely to significantly impact the two companies, as China represents a substantial market for their products.

However, the move also presents an opportunity for Chinese semiconductor manufacturers like Loongson and Sunway to expand their market share and accelerate the development of their next-generation chip technologies. By reducing its dependence on foreign technology, China aims to strengthen its position in the global tech landscape and mitigate the risks associated with potential sanctions or export controls. As China pushes for self-sufficiency in semiconductors, the global technology industry will likely experience a shift in supply chains and increased competition from Chinese manufacturers. This development may also prompt other countries to reevaluate their reliance on foreign technology and invest in domestic production capabilities, potentially leading to a more fragmented and competitive global tech market.

Alibaba Unveils Plans for Server-Grade RISC-V Processor and RISC-V Laptop

Chinese e-commerce and cloud giant Alibaba announced its plans to launch a server-grade RISC-V processor later this year, and it showcased a RISC-V-powered laptop running an open-source operating system. The announcements were made by Alibaba's research division, the Damo Academy, at the recent Xuantie RISC-V Ecological Conference in Shenzhen. The upcoming server-class processor, called the Xuantie C930, is expected to launch by the end of 2024. While specific details about the chip have not been disclosed, it is anticipated to cater to AI and server workloads. This development is part of Alibaba's ongoing efforts to expand its RISC-V portfolio and reduce reliance on foreign chip technologies amidst US export restrictions. To complement the C930, Alibaba is also preparing a Xuantie 907 matrix processing unit for AI, which could be an IP block inside an SoC like the C930 or an SoC of its own.

In addition to the C930, Alibaba showcased the RuyiBOOK, a laptop powered by the company's existing T-Head C910 processor. The C910, previously designed for edge servers, AI, and telecommunications applications, has been adapted for use in laptops. Notably, the RuyiBOOK laptop runs on the openEuler operating system, an open-source version of Huawei's EulerOS, which is based on Red Hat Linux. The laptop also features Alibaba's collaboration suite, DingTalk, and the open-source office software LibreOffice, demonstrating its potential to cater to the needs of Chinese knowledge workers and consumers without relying on foreign software. Zhang Jianfeng, president of the Damo Academy, emphasized the increasing demand for new computing power and the potential for RISC-V to enter a period of "application explosion." Alibaba plans to continue investing in RISC-V research and development and fostering collaboration within the industry to promote innovation and growth in the RISC-V ecosystem, lessening reliance on US-sourced technology.

Altair SimSolid Transforms Simulation for Electronics Industry

Altair, a global leader in computational intelligence, announced the upcoming release of Altair SimSolid for electronics, bringing game-changing fast, easy, and precise multi-physics scenario exploration for electronics, from chips, PCBs, and ICs to full system design. "As the electronics industry pushes the boundaries of complexity and miniaturization, engineers have struggled with simulations that often compromise on detail for expediency. Altair SimSolid will empower engineers to capture the intricate complexities of PCBs and ICs without simplification," said James R. Scapa, founder and chief executive officer, Altair. "Traditional simulation methods often require approximations when analyzing PCB structures due to their complexity. Altair SimSolid eliminates these approximations to run more accurate simulations for complex problems with vast dimensional disparities."

Altair SimSolid has revolutionized conventional analysis with its ability to deliver accurate predictions for complex structural problems at blazing-fast speed while eliminating laborious hours of modeling. It eliminates geometry simplification and meshing, the two most time-consuming and expertise-intensive tasks in traditional finite element analysis. As a result, it delivers results in seconds to minutes—up to 25x faster than traditional finite element solvers—and effortlessly handles complex assemblies. Having experienced fast adoption in the aerospace and automotive industries, two sectors that typically deal with challenges associated with massive structures, Altair SimSolid is poised to play a significant role in the electronics market. The initial release, expected in Q2 2024, will support structural and thermal analysis for PCBs and ICs, with full electromagnetics analysis coming in a future release.

Arizona State University and Deca Technologies to Pioneer North America's First R&D Center for Advanced Fan-Out Wafer-Level Packaging

Arizona State University (ASU) and Deca Technologies (Deca), a premier provider of advanced wafer- and panel-level packaging technology, today announced a groundbreaking collaboration to create North America's first fan-out wafer-level packaging (FOWLP) research and development center.

The new Center for Advanced Wafer-Level Packaging Applications and Development is set to catalyze innovation in the United States, expanding domestic semiconductor manufacturing capabilities and driving advancements in cutting-edge fields such as artificial intelligence, machine learning, automotive electronics and high-performance computing.

Introducing the Next-Generation Blink Mini 2—A New Compact Plug-In Camera That Works Both Indoors and Outdoors

Blink, an Amazon company, today announced the next-generation Blink Mini 2. The new Blink camera packs a punch in a compact, weather-resistant design that can now be used indoors or outdoors with the purchase of the new Blink Weather Resistant Power Adapter (sold as part of a bundle or separately). Blink Mini 2 offers enhanced image quality with improved low light performance, a wider field of view, and a built-in LED spotlight for night view in color. Powered by the company's custom-built chip, Blink Mini 2 utilizes on-device computer vision (CV) to support smart notifications, including person detection, which is available with a Blink Subscription Plan (sold separately).

"It is clear customers love Blink—in fact, the Blink business has grown 5x over the last four years," said Liz Hamren, chief executive officer at Blink. "We are building on this momentum with the addition of Mini 2 to Blink's affordable and easy-to-use suite of devices. Mini 2 was rebuilt from the inside out, keeping everything customers expect from Blink while adding even more utility through features like person detection, all at an incredible price point."

Global Top 10 Foundries Q4 Revenue Up 7.9%, Annual Total Hits US$111.54 Billion in 2023

The latest TrendForce report reveals a notable 7.9% jump in 4Q23 revenue for the world's top ten semiconductor foundries, reaching $30.49 billion. This growth is primarily driven by sustained demand for smartphone components, such as mid and low-end smartphone APs and peripheral PMICs. The launch season for Apple's latest devices also significantly contributed, fueling shipments for the A17 chipset and associated peripheral ICs, including OLED DDIs, CIS, and PMICs. TSMC's premium 3 nm process notably enhanced its revenue contribution, pushing its global market share past the 60% threshold this quarter.

TrendForce remarks that 2023 was a challenging year for foundries, marked by high inventory levels across the supply chain, a weak global economy, and a slow recovery in the Chinese market. These factors led to a downward cycle in the industry, with the top ten foundries experiencing a 13.6% annual drop as revenue reached just $111.54 billion. Nevertheless, 2024 promises a brighter outlook, with AI-driven demand expected to boost annual revenue by 12% to $125.24 billion. TSMC, benefiting from steady advanced process orders, is poised to far exceed the industry average in growth.

Marvell Announces Industry's First 2 nm Platform for Accelerated Infrastructure Silicon

Marvell Technology, Inc., a leader in data infrastructure semiconductor solutions, is extending its collaboration with TSMC to develop the industry's first technology platform to produce 2 nm semiconductors optimized for accelerated infrastructure.

Behind the Marvell 2 nm platform is the company's industry-leading IP portfolio that covers the full spectrum of infrastructure requirements, including high-speed long-reach SerDes at speeds beyond 200 Gbps, processor subsystems, encryption engines, system-on-chip fabrics, chip-to-chip interconnects, and a variety of high-bandwidth physical layer interfaces for compute, memory, networking and storage architectures. These technologies will serve as the foundation for producing cloud-optimized custom compute accelerators, Ethernet switches, optical and copper interconnect digital signal processors, and other devices for powering AI clusters, cloud data centers and other accelerated infrastructure.

Global Server Shipments Expected to Increase by 2.05% in 2024, with AI Servers Accounting For Around 12.1%

TrendForce underscores that the primary momentum for server shipments this year remains with American CSPs. However, due to persistently high inflation and elevated corporate financing costs curtailing capital expenditures, overall demand has not yet returned to pre-pandemic growth levels. Global server shipments are estimated to reach approximately 13.654 million units in 2024, an increase of about 2.05% YoY. Meanwhile, the market continues to focus on the deployment of AI servers, with their shipment share estimated at around 12.1%.

Foxconn is expected to see the highest growth rate, with an estimated annual increase of about 5-7%. This growth includes significant orders such as Dell's 16G platform, AWS Graviton 3 and 4, Google Genoa, and Microsoft Gen9. In terms of AI server orders, Foxconn has made notable inroads with Oracle and has also secured some AWS ASIC orders.

GlobalFoundries and Biden-Harris Administration Announce CHIPS and Science Act Funding for Essential Chip Manufacturing

The U.S. Department of Commerce today announced $1.5 billion in planned direct funding for GlobalFoundries (Nasdaq: GFS) (GF) as part of the U.S. CHIPS and Science Act. This investment will enable GF to expand and create new manufacturing capacity and capabilities to securely produce more essential chips for automotive, IoT, aerospace, defense, and other vital markets.

New York-headquartered GF, celebrating its 15th year of operations, is the only U.S.-based pure play foundry with a global manufacturing footprint including facilities in the U.S., Europe, and Singapore. GF is the first semiconductor pure play foundry to receive a major award (over $1.5 billion) from the CHIPS and Science Act, designed to strengthen American semiconductor manufacturing, supply chains and national security. The proposed funding will support three GF projects:

SoftBank Founder Wants $100 Billion to Compete with NVIDIA's AI

Japanese tech billionaire and founder of the SoftBank Group, Masayoshi Son, is embarking on a hugely ambitious new project to build an AI chip company that aims to rival NVIDIA, the current leader in AI semiconductor solutions. The project, codenamed "Izanagi" after the Japanese god of creation, could see Son raise up to $100 billion in funding for the new venture. With his company SoftBank having recently scaled back investments in startups, Son is now setting his sights on the red-hot AI chip sector. Izanagi would leverage SoftBank's existing chip design firm, Arm, to develop advanced semiconductors tailored for artificial intelligence computing. The startup would use Arm's instruction set for the chip's processing elements. This could pit Izanagi directly against NVIDIA's leadership position in AI chips. Son has a war chest of $41 billion in cash at SoftBank that he can deploy for Izanagi.

Additionally, he is courting sovereign wealth funds in the Middle East to contribute up to $70 billion in additional capital. In total, Son may be seeking up to $100 billion to bankroll Izanagi into a chip powerhouse. AI chips are seeing surging demand as machine learning and neural networks require specialized semiconductors that can process massive datasets. NVIDIA and other names like Intel, AMD, and select startups have capitalized on this trend. However, Son believes the market has room for another major player. Izanagi would focus squarely on developing bleeding-edge AI chip architectures to power the next generation of artificial intelligence applications. It is still unclear whether this would be an AI training or AI inference project, but since the training market is currently the larger of the two while AI infrastructure is still in its early buildout phase, the focus will likely settle on training. With his track record of bold bets, Son is aiming very high with Izanagi. It's a hugely ambitious goal, but Son has defied expectations before. Project Izanagi will test the limits of even his vision and financial firepower.

Samsung Lands Significant 2 nm AI Chip Order from Unnamed Hyperscaler

This week in its earnings call, Samsung announced that its foundry business has received a significant order for 2 nm AI chips, marking a major win for its advanced fabrication technology. The unnamed customer has contracted Samsung to produce AI accelerators using its upcoming 2 nm process node, which promises significant gains in performance and efficiency over today's leading-edge chips. Along with the AI chips, the deal includes supporting HBM and advanced packaging - indicating a large-scale and complex project. Industry sources speculate the order may be from a major hyperscaler like Google, Microsoft, or Alibaba, all of which are aggressively expanding their AI capabilities. Competition for AI chip contracts has heated up as the field becomes crucial for data centers, autonomous vehicles, and other emerging applications. Samsung said a recovery in demand across smartphones, PCs, and enterprise hardware will fuel growth for its broader foundry business. It's forging ahead with 3 nm production while eyeing 2 nm for launch around 2025.

Compared to its 3 nm process, 2 nm aims to increase power efficiency by 25% and boost performance by 12% while reducing chip area by 5%. The new order provides validation for Samsung's billion-dollar investments in next-generation manufacturing. It also bolsters Samsung's position against Taiwan-based TSMC, which holds a large portion of the foundry market share. TSMC landed Apple as its first 2 nm customer, while Intel announced 5G infrastructure chip orders from Ericsson and Faraday Technology using its "Intel 18A" node. With rivals securing major customers, Samsung is aggressively pricing 2 nm to attract clients. Reports indicate Qualcomm may shift some flagship mobile chips to Samsung's foundry at the 2 nm node, so if yields are good, the node has great potential to attract customers.

AI Power Consumption Surge Strains US Electricity Grid, Coal-Powered Plants Make a Comeback

The artificial intelligence boom is driving a sharp rise in electricity use across the United States, catching utilities and regulators off guard. In northern Virginia's "data center alley," demand is so high that the local utility temporarily halted new data center connections in 2022. Nationwide, electricity consumption at data centers alone could triple by 2030 to 390 terawatt-hours (TWh). Add in new electric vehicle battery factories, chip plants, and other clean tech manufacturing spurred by federal incentives, and demand over the next five years is forecast to rise at 1.5% annually—the fastest rate since the 1990s. Unable to keep pace, some utilities are scrambling to revise projections and reconsider previous plans to close fossil fuel plants even as the Biden administration pushes for more renewable energy. Some older coal power plants will stay online until the grid adds more generation capacity. The result could be increased emissions in the near term and a risk of rolling blackouts if infrastructure continues to lag behind demand.
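
For scale, that 390 TWh annual figure can be converted into an average continuous power draw; the quick calculation below is illustrative and not part of the cited forecast.

```python
# Rough scale check: convert a projected annual US data center consumption
# of 390 TWh (the 2030 figure quoted above) into average continuous power.
ANNUAL_TWH = 390
HOURS_PER_YEAR = 24 * 365  # 8,760

avg_power_gw = ANNUAL_TWH * 1e12 / HOURS_PER_YEAR / 1e9  # Wh/year -> W -> GW
print(f"{ANNUAL_TWH} TWh/year ~= {avg_power_gw:.1f} GW of continuous demand")
# ~= 44.5 GW, roughly the output of dozens of large power plants running flat out
```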

The situation is especially dire in Virginia, the world's largest data center hub. The state's largest utility, Dominion Energy, was forced to pause new data center connections for three months last year due to surging demand in Loudoun County. Though connections have resumed, Dominion expects load growth to almost double over the next 15 years. With data centers, EV factories, and other power-hungry tech continuing rapid expansion, experts warn the US national electricity grid is poorly equipped to handle the spike. Substantial investments in new transmission lines and generation are urgently needed to avoid businesses being turned away or blackouts in some regions. Though many tech companies aim to power operations with clean energy, factories are increasingly open to any available power source.

China's Chip Imports See Record 15.4% Plunge in 2023

According to new data from Chinese Customs, China's imports of integrated circuits suffered their steepest annual drop on record in 2023, falling 15.4% to $349.4 billion. The decline marks the second straight year of falling chip imports and can be attributed to economic uncertainty and US export controls on advanced semiconductors. Shipment volumes of imported chips also saw a substantial 10.8% year-over-year decrease as demand within China stagnated. The country's important tech manufacturing sector has struggled under strict zero-Covid policies and a lackluster recovery post-pandemic. Flagship manufacturing companies like TSMC recorded modest declines in 2023 sales, though TSMC still forecasts overall growth this year.

Sentiment plunged further when the Biden administration heightened restrictions on China's access to cutting-edge AI-capable chips from NVIDIA and other top American suppliers. The escalating US export controls have choked off China's pipeline to advanced semiconductors needed for AI and supercomputing applications. However, early positive signs for global semiconductor demand have emerged, with worldwide chip sales rising for the first time in over a year this past November. The increase was driven by growing demand for AI and other emerging technologies that rely on advanced computing chips. While the US seeks to limit China's progress in this key strategic area, an inflection point for the battered global chip sector may be nearing.

The Wi-Fi Alliance Introduces Wi-Fi CERTIFIED 7

Wi-Fi CERTIFIED 7 is here, introducing powerful new features that boost Wi-Fi performance and improve connectivity across a variety of environments. Cutting-edge capabilities in Wi-Fi CERTIFIED 7 enable innovations that rely on high throughput, deterministic latency, and greater reliability for critical traffic. New use cases - including multi-user AR/VR/XR, immersive 3-D training, electronic gaming, hybrid work, industrial IoT, and automotive - will advance as a result of the latest Wi-Fi generation. Wi-Fi CERTIFIED 7 represents the culmination of extensive collaboration and innovation within Wi-Fi Alliance, facilitating worldwide product interoperability and a robust, sophisticated device ecosystem.

Wi-Fi 7 will see rapid adoption across a broad ecosystem with more than 233 million devices expected to enter the market in 2024, growing to 2.1 billion devices by 2028. Smartphones, PCs, tablets, and access points (APs) will be the earliest adopters of Wi-Fi 7, and customer premises equipment (CPE) and augmented and virtual reality (AR/VR) equipment will continue to gain early market traction. Wi-Fi CERTIFIED 7 pushes the boundaries of today's wireless connectivity, and Wi-Fi CERTIFIED helps ensure advanced features are deployed in a consistent way to deliver high-quality user experiences.
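
As a quick sanity check on those adoption figures, the implied compound annual growth rate from 233 million devices in 2024 to 2.1 billion in 2028 works out as follows (a back-of-the-envelope calculation, not a Wi-Fi Alliance figure):

```python
# Implied compound annual growth rate (CAGR) of annual Wi-Fi 7 device
# shipments, using the 2024 and 2028 figures quoted above.
devices_2024 = 233e6
devices_2028 = 2.1e9
years = 2028 - 2024

cagr = (devices_2028 / devices_2024) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.0%}")  # roughly 73% per year
```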

MaxLinear Announces World's First Wi-Fi CERTIFIED 7 Tri-band Single Chip Solutions and Wi-Fi CERTIFIED 7 Tri-band Access Point

MaxLinear, Inc., a global leader in broadband access and gateway solutions, today announced its Wi-Fi 7 chipsets and Tri-band Access Point were certified by Wi-Fi Alliance and selected as one of the Wi-Fi CERTIFIED 7 test bed devices. Wi-Fi Alliance's Wi-Fi 7 certification helps verify that upcoming Wi-Fi 7 devices can seamlessly interoperate and deliver the high-performance promises of the new standard. The inclusion of MaxLinear's innovative, single-chip Wi-Fi 7 solution in the verification process exemplifies its continuing commitment to pushing the boundaries of in-home communications and reshaping the way service providers build access and connectivity networks.

"At MaxLinear, we're not just leading the pack with Wi-Fi 7 technology; we're redefining it. Our single-chip tri-band device is an industry first, symbolizing a giant leap in wireless communication," said Will Torgerson, VP/GM Broadband Group. "With our focus on power efficiency and peak performance, we're at the forefront of the Wi-Fi 7 revolution, offering blazing-fast speeds, reduced latency, and robust connectivity. As part of the Wi-Fi Alliance, we celebrate 25 years of Wi-Fi innovation and remain committed to advancing a connected world where our groundbreaking solutions enable seamless, multi-gigabit broadband access."

Chinese Researchers Want to Make Wafer-Scale RISC-V Processors with up to 1,600 Cores

According to a report published in the journal Fundamental Research, researchers from the Institute of Computing Technology at the Chinese Academy of Sciences have developed a 256-core multi-chiplet processor called Zhejiang Big Chip, with plans to scale up to 1,600 cores by utilizing an entire wafer. As transistor density gains slow, alternatives like multi-chiplet architectures become crucial for continued performance growth. The Zhejiang chip combines 16 chiplets, each holding 16 RISC-V cores, interconnected via a network-on-chip. This design can theoretically expand to 100 chiplets and 1,600 cores on an advanced 2.5D packaging interposer. While multi-chiplet designs are common today, using a whole wafer for one system would match Cerebras' breakthrough approach. The chip is built on a 22 nm process, and the researchers cite exascale supercomputing as an ideal application for massively parallel multi-chiplet architectures.
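
A toy sketch of that hierarchical scaling, using only the chiplet and core counts stated in the paper (purely illustrative):

```python
# Zhejiang Big Chip core-count scaling: total cores are the number of
# chiplets on the 2.5D interposer times the RISC-V cores per chiplet.
CORES_PER_CHIPLET = 16

def total_cores(num_chiplets: int) -> int:
    return num_chiplets * CORES_PER_CHIPLET

print(total_cores(16))   # 256  -> the demonstrated 16-chiplet configuration
print(total_cores(100))  # 1600 -> the proposed wafer-scale 100-chiplet target
```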

Careful software optimization is required to balance workloads across the system hierarchy. Integrating near-memory processing and 3D stacking could further optimize efficiency. The paper explores lithography and packaging limits, proposing hierarchical chiplet systems as a flexible path to future computing scale. While yield and cooling challenges need further work, the 256-core foundation demonstrates the potential of modular designs as an alternative to monolithic integration. China's focus mirrors multiple initiatives from American giants like AMD and Intel for data center CPUs. But national semiconductor ambitions add urgency to prove domestically designed solutions can rival foreign innovation. Although performance details are unclear, the rapid progress shows promise in mastering modular chip integration. Combined with improving domestic nodes like SMIC's 7 nm process, China could build a viable exascale system in-house.

Report: Global Semiconductor Capacity Projected to Reach Record High 30 Million Wafers Per Month in 2024

Global semiconductor capacity is expected to increase 6.4% in 2024 to top the 30 million wafers per month (wpm) mark for the first time, after rising 5.5% to 29.6 million wpm in 2023, SEMI announced today in its latest quarterly World Fab Forecast report.

The 2024 growth will be driven by capacity increases in leading-edge logic and foundry, applications including generative AI and high-performance computing (HPC), and the recovery in end-demand for chips. The capacity expansion slowed in 2023 due to softening semiconductor market demand and the resulting inventory correction.

Chinese x86 CPU Maker Zhaoxin Adds Support for "Preferred Cores" to Modernize its Processor Ecosystem

Chinese x86 CPU developer Zhaoxin is working on adding support in the Linux kernel for scheduling optimization on its processors featuring "preferred cores." Similar to asymmetric core designs from Intel and AMD, Zhaoxin's chips may have specific higher-performance cores the OS scheduler should target for critical workloads. To enable this, Zhaoxin has proposed Linux patches leveraging existing ACPI functionality to indicate per-core differences in maximum frequency or capability. The CPUfreq driver is updated to reflect this, allowing the scheduler to favor the designated high-performance cores when assigning threads and processes. This ensures tasks can dynamically take advantage of the faster cores to maximize performance. The approach resembles the topology- and heterogeneity-aware scheduling already found on Intel and AMD processors.
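
The underlying mechanism is not unique to Zhaoxin: on Linux systems that expose ACPI CPPC data, per-core performance capabilities can be read from sysfs, and preferred-core scheduling ranks cores by exactly this kind of hint. The sketch below is a generic illustration of inspecting that data on a CPPC-capable machine, not Zhaoxin's actual patch set.

```python
# Rank CPUs by their ACPI CPPC "highest_perf" value, the per-core capability
# hint that preferred-core scheduling builds on. Requires a platform that
# populates /sys/devices/system/cpu/cpu*/acpi_cppc/.
import glob
import re

def read_highest_perf() -> dict[int, int]:
    perf = {}
    for path in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/acpi_cppc/highest_perf"):
        cpu = int(re.search(r"cpu(\d+)/", path).group(1))
        with open(path) as f:
            perf[cpu] = int(f.read().strip())
    return perf

perf = read_highest_perf()
if not perf:
    print("No ACPI CPPC data exposed on this system.")
else:
    # Cores with the largest highest_perf are the "preferred" cores the
    # scheduler would favor for performance-critical tasks.
    for cpu, value in sorted(perf.items(), key=lambda kv: kv[1], reverse=True):
        print(f"cpu{cpu}: highest_perf={value}")
```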

Zhaoxin's patches don't specify which existing or upcoming CPUs will expose preferred core hints. The company likely wants the functionality in place for future server-class products where asymmetric designs make sense for efficiency. The new code contribution reflects Zhaoxin's broader upstreaming effort around Linux kernel support for its Yongfeng server CPU family. Robust open-source foundations are crucial for gaining developer mindshare and data center adoption. Adding sophisticated features like preferred core scheduling indicates that Zhaoxin's chips are maturing from essential x86 compatibility to more refined performance optimization. While still trailing Intel and AMD in cores and clocks, closing the software ecosystem and efficiency gap remains key to competitiveness. Ongoing Linux enablement work is laying the groundwork for more capable Zhaoxin silicon.

TSMC Plans to Put a Trillion Transistors on a Single Package by 2030

During the recent IEDM conference, TSMC previewed its process roadmap for delivering next-generation chip packages packing over one trillion transistors by 2030. This aligns with similar long-term visions from Intel. Such enormous transistor counts will come through advanced 3D packaging of multiple chiplets. But TSMC also aims to push monolithic chip complexity higher, ultimately enabling 200 billion transistor designs on a single die. This requires steady enhancement of TSMC's planned N2, N2P, N1.4, and N1 nodes, which are slated to arrive between now and the end of the decade. While multi-chiplet architectures are currently gaining favor, TSMC asserts both packaging density and raw transistor density must scale up in tandem. For perspective on the magnitude of TSMC's goals, NVIDIA's 80 billion transistor GH100 GPU is among today's largest chips, excluding wafer-scale designs from Cerebras.

Yet TSMC's roadmap calls for more than doubling that, first with over 100 billion transistor monolithic designs, then eventually 200 billion. Of course, yields become more challenging as die sizes grow, which is where advanced packaging of smaller chiplets becomes crucial. Multi-chip module offerings like AMD's MI300X and Intel's Ponte Vecchio already integrate dozens of tiles, with Ponte Vecchio packing 47. TSMC envisions expanding this to chip packages housing more than a trillion transistors via its CoWoS, InFO, 3D stacking, and other technologies. While the scaling cadence has recently slowed, TSMC remains confident in achieving both packaging and process breakthroughs to meet future density demands. The foundry's continuous investment ensures progress in unlocking next-generation semiconductor capabilities. But physics ultimately dictates timelines, no matter how aggressive the roadmap.

Samsung Electronics and Red Hat Partnership To Lead Expansion of CXL Memory Ecosystem With Key Milestone

Samsung Electronics, a world leader in advanced memory technology, today announced that for the first time in the industry, it has successfully verified Compute Express Link (CXL) memory operations in a real user environment with open-source software provider Red Hat, leading the expansion of its CXL ecosystem.

Due to the exponential growth of data throughput and memory requirements for emerging fields like generative AI, autonomous driving and in-memory databases (IMDBs), the demand for systems with greater memory bandwidth and capacity is also increasing. CXL is a unified interface standard that connects various processors, such as CPUs and GPUs, with memory devices over a PCIe-based interface, and it can serve as a solution to the speed, latency and expandability limitations of existing systems.
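
In practice, a CXL Type 3 memory expander typically appears to Linux as a CPU-less NUMA node that software can place data on like any other memory tier. The sketch below is a generic way to spot such nodes on a Linux host; it is an illustration of the mechanism, not the specific Samsung and Red Hat verification setup.

```python
# List NUMA nodes and flag CPU-less ones, which is how CXL memory expanders
# commonly surface under Linux once they are onlined as system RAM.
import glob
import os

for node_path in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
    node = os.path.basename(node_path)
    with open(os.path.join(node_path, "cpulist")) as f:
        cpulist = f.read().strip()
    # An empty cpulist means no CPUs belong to this node -- a likely
    # candidate for CXL-attached (or otherwise far) memory.
    kind = "CPU-less (possible CXL memory)" if not cpulist else f"CPUs {cpulist}"
    print(f"{node}: {kind}")
```

A workload could then be bound to such a node with tools like numactl --membind, or left to the kernel's memory-tiering logic to manage as a slower capacity tier.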

Five Leading Semiconductor Industry Players Incorporate New Company, Quintauris, to Drive RISC-V Ecosystem Forward

Semiconductor industry players Robert Bosch GmbH, Infineon Technologies AG, Nordic Semiconductor ASA, NXP Semiconductors, and Qualcomm Technologies, Inc., have formally established Quintauris GmbH. Headquartered in Munich, Germany, the company aims to advance the adoption of RISC-V globally by enabling next-generation hardware development.

The formation of Quintauris was formally announced in August, with the aim to be a single source to enable compatible RISC-V-based products, provide reference architectures, and help establish solutions to be widely used across various industries. The initial application focus will be automotive, but with an eventual expansion to include mobile and IoT.

RISC-V Breaks Into Handheld Console Market with Sipeed Lichee Pocket 4A

Chinese company Sipeed has introduced the Lichee Pocket 4A, one of the first handheld gaming devices based on the RISC-V open-source instruction set architecture (ISA). Sipeed positions the device as a retro gaming platform capable of running simple titles via software rendering or GPU acceleration. At its core is Alibaba's T-Head TH1520 processor featuring four 2.50 GHz Xuantie C910 RISC-V general-purpose CPU cores and an unnamed Imagination GPU. The chip was originally aimed at laptop designs. Memory options include 8 GB or 16 GB LPDDR4X RAM and 32 GB or 128 GB of storage. The Lichee Pocket 4A has a 7-inch 1280x800 LCD touchscreen, Wi-Fi/Bluetooth connectivity, and an array of wired ports like USB and Ethernet. It weighs under 500 grams. The device can run Android or Linux distributions like Debian, Ubuntu, and others.

As an early RISC-V gaming entrant, performance expectations should be modest—the focus is retro gaming and small indie titles, not modern AAA games. Specific gaming capabilities remain to be fully tested. However, the release helps showcase RISC-V's potential for consumer electronics and competitive positioning against proprietary ISAs like ARM. Pricing is still undefined, but another Sipeed handheld console currently retails for around $250. Reception from enthusiasts and developers will demonstrate whether there's a viable market for RISC-V gaming devices. Success could encourage additional hardware experimentation efforts across emerging open architectures. With a 6000 mAh battery, battery life should be decent. Other specifications can be seen in the table below, and the device is open for pre-order.

Top Ten IC Design Houses Ride Wave of Seasonal Consumer Demand and Continued AI Boom to See 17.8% Increase in Quarterly Revenue in 3Q23

TrendForce reports that 3Q23 has been a historic quarter for the world's leading IC design houses as total revenue soared 17.8% to reach a record-breaking US$44.7 billion. This remarkable growth is fueled by a robust season of stockpiling for smartphones and laptops, combined with a rapid acceleration in the shipment of generative AI chips and components. NVIDIA, capitalizing on the AI boom, emerged as the top performer in revenue and market share. Notably, analog IC supplier Cirrus Logic overtook US PMIC manufacturer MPS to snatch the tenth spot, driven by strong demand for smartphone stockpiling.

NVIDIA's revenue soared 45.7% to US$16.5 billion in the third quarter, bolstered by sustained demand for generative AI and LLMs. Its data center business—accounting for nearly 80% of its revenue—was a key driver in this exceptional growth.

Moore Threads Launches MTT S4000 48 GB GPU for AI Training/Inference and Presents 1000-GPU Cluster

Chinese chipmaker Moore Threads has launched its first domestically produced 1,000-card AI training cluster, dubbed the KUAE Intelligent Computing Center. A central part of the KUAE cluster is Moore Threads' new MTT S4000 accelerator card, with 48 GB of VRAM, the company's third-generation MUSA GPU architecture, and 768 GB/s of memory bandwidth. In FP32, the card can output 25 TeraFLOPS; in TF32, it can achieve 50 TeraFLOPS; and in FP16/BF16, up to 200 TeraFLOPS. INT8 is also supported at 200 TOPS. The MTT S4000 targets both training and inference, leveraging Moore Threads' high-speed MTLink 1.0 intra-system interconnect to scale cards for distributed model-parallel training of models with hundreds of billions of parameters. The card also provides graphics, video encoding/decoding, and 8K display capabilities for graphics workloads. Moore Threads' KUAE cluster combines the S4000 GPU hardware with RDMA networking, distributed storage, and integrated cluster management software. The KUAE Platform oversees multi-datacenter resource allocation and monitoring, while KUAE ModelStudio hosts training frameworks and model repositories to streamline development.

With integrated solutions now proven at the scale of thousands of GPUs, Moore Threads is positioned to power ubiquitous intelligent applications, from scientific computing to the metaverse. The KUAE cluster reportedly achieves near-linear scaling at 91% efficiency. Taking 200 billion training data as an example, Zhiyuan Research Institute's 70-billion-parameter Aquila2 can complete training in 33 days; a model with 130 billion parameters can complete training in 56 days on the KUAE cluster. In addition, the Moore Threads KUAE kilocard cluster supports long-term continuous and stable operation, supports resuming training from breakpoints, and offers asynchronous checkpointing that completes in under two minutes. On the software side, Moore Threads also boasts full compatibility with NVIDIA's CUDA framework, where its MUSIFY tool translates CUDA code to the MUSA GPU architecture at supposedly zero migration cost, i.e., with no performance penalty.
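
Those training-time claims can be loosely sanity-checked with the common FLOPs ≈ 6 x parameters x tokens rule of thumb. The sketch below assumes the "200 billion training data" figure refers to tokens and that the cluster sustains roughly 15% of its aggregate peak BF16 throughput; both are assumptions made for illustration, not Moore Threads' numbers.

```python
# Back-of-the-envelope training-time estimate for Aquila2-70B on the KUAE
# cluster, using the rough rule training_FLOPs ~= 6 * parameters * tokens.
# Assumptions (not from Moore Threads): the "200 billion training data"
# figure is tokens, and sustained throughput is ~15% of peak BF16.
PARAMS = 70e9           # Aquila2 parameter count
TOKENS = 200e9          # assumed training tokens
GPUS = 1000             # KUAE cluster size
PEAK_BF16 = 200e12      # MTT S4000 peak BF16 FLOPS per card
UTILIZATION = 0.15      # assumed sustained fraction of peak

train_flops = 6 * PARAMS * TOKENS
cluster_flops = GPUS * PEAK_BF16 * UTILIZATION
days = train_flops / cluster_flops / 86400
print(f"Estimated training time: {days:.0f} days")  # ~32 days vs. the quoted 33
```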

Chinese Firm Montage Repackages Intel's 5th Generation Emerald Rapids Xeon Processor into Domestic Product Lineup

Chinese chipmaker Montage Technology has unveiled new data center processors under its Jintide brand based on Intel's latest Emerald Rapids Xeon architecture. The 5th generation Jintide lineup offers anywhere from 16-core to 48-core options for enterprise customers needing advanced security specific to China's government and enterprise requirements. Leveraging a long-running joint venture with Intel, Jintide combines standard high-performance Xeon microarchitectures with added on-die monitoring and encryption blocks, PrC (Pre-check) and DSC (Dynamic Security Check), which are security-hardened for sensitive Chinese use cases. The processors retain all core performance attributes of Intel's vanilla offerings thanks to IP access, only with extra protections mandated by national security interests. While missing the very highest core counts, the new Jintide chips otherwise deliver similar Emerald Rapids features like 8-channel DDR5-5600 memory, 80 lanes of speedy PCIe 5.0, and elevated clock speeds over 4.0 GHz at peak. The Jintide processors have 2S scaling, which allows for dual-socket systems with up to 96 cores and 192 threads.

Pricing remains unpublished but likely carries a premium over Intel's list prices due to the localized security customization required. However, with Jintide uniquely meeting strict Chinese government and data regulations, cost becomes secondary for target customers needing compliant data center hardware. Jintide has matched Intel's last several leading Xeon generations in lockstep, and its continued iteration highlights its strategic value in enabling high-performance domestic infrastructure as China eyes IT supply chain autonomy. Intel gets expanded access to the growing Chinese server market, while Chinese partners utilize Intel IP to strengthen localized offerings without foreign dependency. The arrangement exemplifies the delicate balance of advanced chip joint ventures between global tech giants and rising challengers. More details about the SKUs are listed in the table below.