News Posts matching #AI


NVIDIA CEO Jensen Huang to Deliver Keynote Ahead of COMPUTEX 2024

Amid an AI revolution sweeping through trillion-dollar industries worldwide, NVIDIA founder and CEO Jensen Huang will deliver a keynote address ahead of COMPUTEX 2024, in Taipei, outlining what's next for the AI ecosystem. Slated for June 2 at the National Taiwan University Sports Center, the address kicks off before the COMPUTEX trade show scheduled to run from June 3-6 at the Taipei Nangang Exhibition Center. The keynote will be livestreamed at 7 p.m. Taiwan time (4 a.m. PT) on Sunday, June 2, with a replay available at NVIDIA.com.

With over 1,500 exhibitors from 26 countries and an expected crowd of 50,000 attendees, COMPUTEX is one of the world's premier technology events. It has long showcased the vibrant technology ecosystem anchored by Taiwan and has become a launching pad for the cutting-edge systems required to scale AI globally. As a leader in AI, NVIDIA continues to nurture and expand the AI ecosystem. Last year, Huang's keynote and appearances in partner press conferences exemplified NVIDIA's role in helping advance partners across the technology industry.

SpiNNcloud Systems Announces First Commercially Available Neuromorphic Supercomputer

Today, in advance of ISC High Performance 2024, SpiNNcloud Systems announced the commercial availability of its SpiNNaker2 platform, a supercomputer-level hybrid AI high-performance computer system based on principles of the human brain. Pioneered by Steve Furber, designer of the original ARM and SpiNNaker1 architectures, the SpiNNaker2 supercomputing platform uses a large number of low-power processors for efficiently computing AI and other workloads.

The first-generation SpiNNaker1 architecture is currently used by dozens of research groups across 23 countries. Sandia National Laboratories, the Technical University of Munich, and the University of Göttingen are among the first customers placing orders for SpiNNaker2, which was developed around commercialized IP invented in the Human Brain Project, a billion-euro research project funded by the European Union to design intelligent, efficient artificial systems.

Core Configurations of Intel Core Ultra 200 "Arrow Lake-S" Desktop Processors Surface

Intel is giving its next-generation desktop processor lineup the Core Ultra 200 series model numbering, which we detailed in an older report. The Core Ultra 200 series would be the company's first desktop processors with AI capabilities, thanks to an integrated 50 TOPS-class NPU. At the heart of these processors is the "Arrow Lake" microarchitecture; its development is the reason the company had to refresh "Raptor Lake" to cover its 2023-24 processor lineup. The "Meteor Lake" microarchitecture topped out at a 6P+8E CPU core count, which would have been a generational regression in multithreaded application performance from "Raptor Lake." The new "Arrow Lake-S" desktop processor has a maximum CPU core configuration of 8P+16E, so consumers can expect at least the same core counts at given price points to carry over.

According to a report by Chinese tech publication Benchlife.info, the introduction of "Arrow Lake" would see Intel's desktop processor model numbering align with its mobile processor numbering, incorporating the Core Ultra brand to denote the latest microarchitecture for a given processor generation. Since "Arrow Lake" is a generation ahead of "Meteor Lake," processor models in the series are numbered under the Core Ultra 200 series.

Report: 3 Out of 4 Laptop PCs Sold in 2027 will be AI Laptop PCs

Personal computers (PCs) have been used as the major productivity device for several decades. But now we are entering a new era of PCs based on artificial intelligence (AI), thanks to the boom witnessed in generative AI (GenAI). We believe the inventory correction and demand weakness in the global PC market have already normalized, with the impacts from COVID-19 largely being factored in. All this has created a comparatively healthy backdrop for reshaping the PC industry. Counterpoint estimates that almost half a billion AI laptop PCs will be sold during the 2023-2027 period, with AI PCs reviving the replacement demand.

Counterpoint separates GenAI laptop PCs into three categories - AI basic laptop, AI-advanced laptop and AI-capable laptop - based on their levels of computational performance, corresponding use cases, and computational efficiency. We believe AI basic laptops, which are already on the market, can perform basic AI tasks but not full GenAI tasks. Starting this year, they will be supplanted by AI-advanced and AI-capable models with enough TOPS (tera operations per second), powered by an NPU (neural processing unit) or GPU (graphics processing unit), to perform advanced GenAI tasks well.

AMD "Ryzen AI 9 HX170" Surfaces, Suggests New Naming Scheme for Ultraportable Processors

AMD is preparing a new naming scheme for its next-generation processors targeting the ultraportable segment, according to a report by Chinese tech publication ITHome, citing sources at ASUS. The new scheme purports to make it easier for customers to identify processors with AI capabilities (an integrated NPU) and the processor class (U, H, or HX), followed by a numerical component that lets customers know the product grade. This runs contrary to yesterday's report, which cited a Lenovo product flyer referencing a "Ryzen 8050 series." It remains to be seen whether the 8050 series is a class of mainstream processors without AI capabilities, which is unlikely, given that Lenovo is using them in its premium ThinkPad T-series.

MINISFORUM Announces AtomMan X7 Ti, the World's First Intel Core Ultra 9 AI Mini PC

MINISFORUM has officially begun showing the AtomMan X7 Ti on its website, the world's first Intel Core Ultra 9 AI Mini PC equipped with a dynamic screen. The X7 Ti pre-sale will begin at 19:00 PST on May 20 in the MINISFORUM official store. AtomMan is MINISFORUM's new high-end brand dedicated to developing cutting-edge, high-performance tech products. Currently, the AtomMan brand consists of two sub-series: the X Series (Exploration/AI) and the G Series (Gaming).

The AtomMan X7 Ti features an Intel Core Ultra 9 processor built on the Intel 4 process. It offers 16 cores and 22 threads, a maximum frequency of 5.1 GHz, 24 MB of L3 cache, and a 65 W TDP. The integrated Intel Arc graphics come with 8 Xe-cores and 8 ray tracing units, and support AV1 encoding and decoding. With 128 vector engines (1,024 FP32 lanes), the GPU also supports XeSS upscaling, significantly enhancing 3D rendering, video editing, and live-streaming workflows, and making it well suited for AAA games.

Razer Introduces Razer Cortex: Add-Ons

Today we are thrilled to announce the latest innovation that promises to transform your gaming experience: Razer Cortex: Add-Ons. Our commitment at Razer has always been to push the boundaries of gaming technology and provide our users with the tools they need to succeed, whether they're casual gamers or competitive athletes. With the introduction of Cortex: Add-Ons - a new feature crafted to maximize your gaming experience - we're taking another giant leap forward in fulfilling that promise.

What is Razer Cortex: Add-Ons?
Cortex: Add-Ons is your one-stop destination for all your gaming plug-in needs. We understand that gaming is not one-size-fits-all, which is why we're bringing you a platform that caters to your individual needs. Whether it's optimizing your performance, sharing your gaming highlights, or enriching your gaming sessions with unique tools, we've got you covered.

Apple Unveils Stunning New iPad Pro With the World's Most Advanced Display, M4 Chip and Apple Pencil Pro

Apple today unveiled the groundbreaking new iPad Pro in a stunningly thin and light design, taking portability and performance to the next level. Available in silver and space black finishes, the new iPad Pro comes in two sizes: an expansive 13-inch model and a super-portable 11-inch model. Both sizes feature the world's most advanced display—a new breakthrough Ultra Retina XDR display with state-of-the-art tandem OLED technology—providing a remarkable visual experience. The new iPad Pro is made possible with the new M4 chip, the next generation of Apple silicon, which delivers a huge leap in performance and capabilities. M4 features an entirely new display engine to enable the precision, color, and brightness of the Ultra Retina XDR display. With a new CPU, a next-generation GPU that builds upon the GPU architecture debuted on M3, and the most powerful Neural Engine yet, the new iPad Pro is an outrageously powerful device for artificial intelligence. The versatility and advanced capabilities of iPad Pro are also enhanced with all-new accessories. Apple Pencil Pro brings powerful new interactions that take the pencil experience even further, and a new thinner, lighter Magic Keyboard is packed with incredible features. The new iPad Pro, Apple Pencil Pro, and Magic Keyboard are available to order starting today, with availability in stores beginning Wednesday, May 15.

"iPad Pro empowers a broad set of pros and is perfect for anyone who wants the ultimate iPad experience—with its combination of the world's best displays, extraordinary performance of our latest M-series chips, and advanced accessories—all in a portable design. Today, we're taking it even further with the new, stunningly thin and light iPad Pro, our biggest update ever to iPad Pro," said John Ternus, Apple's senior vice president of Hardware Engineering. "With the breakthrough Ultra Retina XDR display, the next-level performance of M4, incredible AI capabilities, and support for the all-new Apple Pencil Pro and Magic Keyboard, there's no device like the new iPad Pro."

Apple Introduces the M4 Chip

Apple today announced M4, the latest chip delivering phenomenal performance to the all-new iPad Pro. Built using second-generation 3-nanometer technology, M4 is a system on a chip (SoC) that advances the industry-leading power efficiency of Apple silicon and enables the incredibly thin design of iPad Pro. It also features an entirely new display engine to drive the stunning precision, color, and brightness of the breakthrough Ultra Retina XDR display on iPad Pro. A new CPU has up to 10 cores, while the new 10-core GPU builds on the next-generation GPU architecture introduced in M3, and brings Dynamic Caching, hardware-accelerated ray tracing, and hardware-accelerated mesh shading to iPad for the first time. M4 has Apple's fastest Neural Engine ever, capable of up to 38 trillion operations per second, which is faster than the neural processing unit of any AI PC today. Combined with faster memory bandwidth, along with next-generation machine learning (ML) accelerators in the CPU, and a high-performance GPU, M4 makes the new iPad Pro an outrageously powerful device for artificial intelligence.

"The new iPad Pro with M4 is a great example of how building best-in-class custom silicon enables breakthrough products," said Johny Srouji, Apple's senior vice president of Hardware Technologies. "The power-efficient performance of M4, along with its new display engine, makes the thin design and game-changing display of iPad Pro possible, while fundamental improvements to the CPU, GPU, Neural Engine, and memory system make M4 extremely well suited for the latest applications leveraging AI. Altogether, this new chip makes iPad Pro the most powerful device of its kind."

Apple Reportedly Developing Custom Data Center Processors with Focus on AI Inference

Apple is reportedly working on in-house chips designed explicitly for its data centers. The news comes from a recent report by the Wall Street Journal, which highlights the company's efforts to enhance its data processing capabilities and reduce its dependency on third-party infrastructure suppliers. Under the internal project called Apple Chips in Data Center (ACDC), which started in 2018, Apple set out to design data center processors to handle its massive user base and expand its service offerings. Given recent advances in AI, Apple will probably serve an LLM processed in its own data centers, and the chip will most likely focus on inference of AI models rather than training.

The AI chips are expected to play a crucial role in improving the efficiency and speed of Apple's data centers, which handle vast amounts of data generated by the company's various services and products. By developing these custom chips, Apple aims to optimize its data processing and storage capabilities, ultimately leading to better user experiences across its ecosystem. The move to develop AI-enhanced chips for data centers is seen as a strategic step in the company's efforts to stay ahead in the competitive tech landscape. Almost all major tech companies, often called the "Magnificent Seven," have products that use AI in silicon and in software processing; Apple, however, has seemingly lacked such products. Now, the company is integrating AI across the entire vertical, from the upcoming iPhone integration to M4 chips for Mac devices and ACDC chips for data centers.

Microsoft Prepares MAI-1 In-House AI Model with 500B Parameters

According to The Information, Microsoft is developing a new AI model, internally named MAI-1, designed to compete with the leading models from Google, Anthropic, and OpenAI. This significant step forward in the tech giant's AI capabilities is boosted by Mustafa Suleyman, the former Google AI leader who previously served as CEO of Inflection AI before Microsoft acquired the majority of its staff and intellectual property for $650 million in March. MAI-1 is a custom Microsoft creation that utilizes training data and technology from Inflection but is not a transferred model. It is also distinct from Inflection's previously released Pi models, as confirmed by two Microsoft insiders familiar with the project. With approximately 500 billion parameters, MAI-1 will be significantly larger than its predecessors, surpassing the capabilities of Microsoft's smaller, open-source models.

For comparison, OpenAI's GPT-4 reportedly has 1.8 trillion parameters in a sparse Mixture of Experts design, while open-source dense models from Meta and Mistral top out around 70 billion parameters. Microsoft's investment in MAI-1 highlights its commitment to staying competitive in the rapidly evolving AI landscape. The development of this large-scale model represents a significant step forward for the tech giant as it seeks to challenge industry leaders in the field. The increased computing power, training data, and financial resources required for MAI-1 demonstrate Microsoft's dedication to pushing the boundaries of AI capabilities and its intention to compete on its own. With the involvement of Mustafa Suleyman, a renowned expert in AI, the company is well positioned to make significant strides in this field.
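
To put these parameter counts in perspective, a back-of-envelope calculation shows why a 500-billion-parameter model demands data-center-class hardware. This is a rough sketch, not from the report: it counts weights only (no KV cache, activations, or runtime overhead) and assumes 2 bytes per parameter for FP16/BF16 storage.

```python
# Rough memory footprint of model weights at a given precision.
# Assumptions (not from the article): weights only, no KV cache or
# serving overhead; FP16/BF16 = 2 bytes per parameter.
def weights_gib(params_billions: float, bytes_per_param: float) -> float:
    """Weight memory in GiB for a parameter count and byte width."""
    return params_billions * 1e9 * bytes_per_param / 2**30

for name, params in [("MAI-1 (reported)", 500), ("70B-class dense model", 70)]:
    print(f"{name}: {weights_gib(params, 2):,.0f} GiB at FP16")
```

At FP16, roughly 931 GiB for 500B parameters versus about 130 GiB for a 70B model, which illustrates why a model of this scale cannot fit on a single accelerator.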

NVIDIA Advertises "Premium AI PC" Mocking the Compute Capability of Regular AI PCs

According to the report from BenchLife, NVIDIA has started the marketing campaign push for "Premium AI PC," squarely aimed at the industry's latest trend pushed by Intel, AMD, and Qualcomm for an "AI PC" system, which features a dedicated NPU for processing smaller models locally. NVIDIA's approach comes from a different point of view: every PC with an RTX GPU is a "Premium AI PC," which holds a lot of truth. Generally, GPUs (regardless of the manufacturer) hold more computing potential than the CPU and NPU combined. With NVIDIA's push to include Tensor cores in its GPUs, the company is preparing for next-generation software from vendors and OS providers that will harness the power of these powerful silicon pieces and embed more functionality in the PC.

At the Computex event in Taiwan, more details about Premium AI PCs and general AI PCs should emerge. In its marketing materials, NVIDIA compares AI PCs to its Premium AI PCs, which have enhanced capabilities across applications like image/video editing and upscaling, productivity, gaming, and developer tools. Another relevant selling point is the user base for these Premium AI PCs, which NVIDIA touts at 100 million users. Those PCs support over 500 AI applications out of the box, highlighting the importance of proper software support. NVIDIA's systems are usually more powerful, with GeForce RTX GPUs reaching anywhere from 100 to 1,300+ TOPS, compared to the roughly 40 TOPS of NPU-based AI PCs. How other AI PC makers plan to compete in the AI PC era remains to be seen, but there is a high chance this will be the spotlight of the upcoming Computex show.

SK hynix Presents CXL Memory Solutions Set to Power the AI Era at CXL DevCon 2024

SK hynix participated in the first-ever Compute Express Link Consortium Developers Conference (CXL DevCon) held in Santa Clara, California from April 30-May 1. Organized by a group of more than 240 global semiconductor companies known as the CXL Consortium, CXL DevCon 2024 welcomed a majority of the consortium's members to showcase their latest technologies and research results.

CXL is a technology that unifies the interfaces of different devices in a system such as semiconductor memory, storage, and logic chips. As it can increase system bandwidth and processing capacity, CXL is receiving attention as a key technology for the AI era in which high performance and capacity are essential. Under the slogan "Memory, The Power of AI," SK hynix showcased a range of CXL products at the conference that are set to strengthen the company's leadership in AI memory technology.

SK hynix CEO Says HBM from 2025 Production Almost Sold Out

SK hynix held a press conference unveiling its vision and strategy for the AI era today at its headquarters in Icheon, Gyeonggi Province, to share the details of its investment plans for the M15X fab in Cheongju and the Yongin Semiconductor Cluster in Korea and the advanced packaging facilities in Indiana, U.S.

The event, hosted by Chief Executive Officer Kwak Noh-Jung three years before the May 2027 completion of the first fab in the Yongin Cluster, was attended by key executives including Head of AI Infra Justin (Ju-Seon) Kim, Head of DRAM Development Kim Jonghwan, Head of the N-S Committee Ahn Hyun, Head of Manufacturing Technology Kim Yeongsik, Head of Package & Test Choi Woojin, Head of Corporate Strategy & Planning Ryu Byung Hoon, and Chief Financial Officer Kim Woo Hyun.

More than 500 AI Models Run Optimized on Intel Core Ultra Processors

Today, Intel announced it surpassed 500 AI models running optimized on new Intel Core Ultra processors - the industry's premier AI PC processor available in the market today, featuring new AI experiences, immersive graphics and optimal battery life. This significant milestone is a result of Intel's investment in client AI, the AI PC transformation, framework optimizations and AI tools including OpenVINO toolkit. The 500 models, which can be deployed across the central processing unit (CPU), graphics processing unit (GPU) and neural processing unit (NPU), are available across popular industry sources, including OpenVINO Model Zoo, Hugging Face, ONNX Model Zoo and PyTorch. The models draw from categories of local AI inferencing, including large language, diffusion, super resolution, object detection, image classification/segmentation, computer vision and others.

"Intel has a rich history of working with the ecosystem to bring AI applications to client devices, and today we celebrate another strong chapter in the heritage of client AI by surpassing 500 pre-trained AI models running optimized on Intel Core Ultra processors. This unmatched selection reflects our commitment to building not only the PC industry's most robust toolchain for AI developers, but a rock-solid foundation AI software users can implicitly trust."
-Robert Hallock, Intel vice president and general manager of AI and technical marketing in the Client Computing Group

We Tested NVIDIA's new ChatRTX: Your Own GPU-accelerated AI Assistant with Photo Recognition, Speech Input, Updated Models

NVIDIA today unveiled ChatRTX, an AI assistant that runs locally on your machine, accelerated by your GeForce RTX GPU. NVIDIA originally launched this as "Chat with RTX" back in February 2024; at the time it was regarded more as a public tech demo, and we reviewed the application in our feature article. The ChatRTX rebranding is probably aimed at making the name sound more like ChatGPT, which is what the application aims to be - except it runs completely on your machine and is exhaustively customizable. The most obvious advantage of a locally run AI assistant is privacy: you are interacting with an assistant that processes your prompt locally, accelerated by your GPU. The second is that you're not held back by the performance bottlenecks of cloud-based assistants.

ChatRTX is a major update over the Chat with RTX tech demo from February. To begin with, the application has several stability refinements over Chat with RTX, which felt a little rough around the edges. NVIDIA has significantly updated the LLMs included with the application, including Mistral 7B INT4 and Llama 2 7B INT4. Support is also added for additional LLMs, including Gemma, a local LLM trained by Google based on the same technology used to make Google's flagship Gemini model. ChatRTX now also supports ChatGLM3, for both English and Chinese prompts. Perhaps the biggest upgrade in ChatRTX is its ability to recognize images on your machine, as it incorporates CLIP (contrastive language-image pre-training) from OpenAI. CLIP is a vision-language model that recognizes what it's seeing in image collections; using this feature, you can interact with your image library without the need for metadata. ChatRTX doesn't just take text input - you can speak to it. It now accepts natural voice input, as it integrates the Whisper speech-recognition model.
DOWNLOAD: NVIDIA ChatRTX

Micron First to Ship Critical Memory for AI Data Centers

Micron Technology, Inc. (Nasdaq: MU), today announced it is leading the industry by validating and shipping its high-capacity monolithic 32Gb DRAM die-based 128 GB DDR5 RDIMM memory in speeds up to 5,600 MT/s on all leading server platforms. Powered by Micron's industry-leading 1β (1-beta) technology, the 128 GB DDR5 RDIMM memory delivers more than 45% improved bit density, up to 22% improved energy efficiency and up to 16% lower latency over competitive 3DS through-silicon via (TSV) products.
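
The quoted 5,600 MT/s figure translates directly into per-module bandwidth. The sketch below is a simple back-of-envelope calculation, assuming the standard DDR5 64-bit data path (excluding ECC bits) and decimal GB/s, as is conventional for memory bandwidth; none of these assumptions come from Micron's announcement.

```python
# Peak theoretical bandwidth of one DDR5 RDIMM.
# Assumptions (not from the announcement): 64-bit data path excluding
# ECC bits; decimal GB/s as conventional for bandwidth figures.
def dimm_bandwidth_gbs(mts: int, bus_width_bits: int = 64) -> float:
    """Peak GB/s = megatransfers/s * bytes moved per transfer."""
    return mts * 1e6 * (bus_width_bits / 8) / 1e9

print(dimm_bandwidth_gbs(5600))  # one 5,600 MT/s module
```

That works out to 44.8 GB/s per module, before accounting for the multiple channels a server platform populates.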

Micron's collaboration with industry leaders and customers has yielded broad adoption of these new high-performance, large-capacity modules across high-volume server CPUs. These high-speed memory modules were engineered to meet the performance needs of a wide range of mission-critical applications in data centers, including artificial intelligence (AI) and machine learning (ML), high-performance computing (HPC), in-memory databases (IMDBs) and efficient processing for multithreaded, multicore count general compute workloads. Micron's 128 GB DDR5 RDIMM memory will be supported by a robust ecosystem including AMD, Hewlett Packard Enterprise (HPE), Intel, Supermicro, along with many others.

Samsung Electronics Announces First Quarter 2024 Results

Samsung Electronics today reported financial results for the first quarter ended March 31, 2024. The Company posted KRW 71.92 trillion in consolidated revenue on the back of strong sales of flagship Galaxy S24 smartphones and higher prices for memory semiconductors. Operating profit increased to KRW 6.61 trillion as the Memory Business returned to profit by addressing demand for high value-added products. The Mobile eXperience (MX) Business posted higher earnings and the Visual Display and Digital Appliances businesses also recorded increased profitability.

The weakness of the Korean won against major currencies resulted in a positive impact on company-wide operating profit of about KRW 0.3 trillion compared to the previous quarter. The Company's total capital expenditures in the first quarter stood at KRW 11.3 trillion, including KRW 9.7 trillion for the Device Solutions (DS) Division and KRW 1.1 trillion on Samsung Display Corporation (SDC). Spending on memory was focused on facilities and packaging technologies to address demand for High Bandwidth Memory (HBM), DDR5 and other advanced products, while foundry investments were concentrated on establishing infrastructure to meet medium- to long-term demand. Display investments were mainly made in IT OLED products and flexible display technologies.

Huawei Aims to Develop Homegrown HBM Memory Amidst US Sanctions

According to The Information, in a strategic maneuver to circumvent the constraints imposed by US sanctions, Huawei is accelerating efforts to establish domestic production capabilities for High Bandwidth Memory (HBM) within China. This move addresses the limitations that have hampered the company's advancements in AI and high-performance computing (HPC) sectors. HBM technology plays a pivotal role in enhancing the performance of AI and HPC processors by mitigating memory bandwidth bottlenecks. Recognizing its significance, Huawei has assembled a consortium comprising memory manufacturers backed by the Chinese government and prominent semiconductor companies like Fujian Jinhua Integrated Circuit. This consortium is focused on advancing HBM2 memory technology, which is crucial for Huawei's Ascend-series processors for AI applications.

Huawei's initiative comes at a time when the company faces challenges in accessing HBM from external sources, impacting the availability of its AI processors in the market. Despite facing obstacles such as international regulations restricting the sale of advanced chipmaking equipment to China, Huawei's efforts underscore China's broader push for self-sufficiency in critical technologies essential for AI and supercomputing. By investing in domestic HBM production, Huawei aims to secure a stable supply chain for these vital components, reducing reliance on external suppliers. This strategic shift not only demonstrates Huawei's resilience in navigating geopolitical challenges but also highlights China's determination to strengthen its technological independence in the face of external pressures. As the global tech landscape continues to evolve, Huawei's move to develop homegrown HBM memory could have far-reaching implications for China's AI and HPC capabilities, positioning the country as a significant player in the memory field.

TSMC Celebrates 30th North America Technology Symposium with Innovations Powering AI with Silicon Leadership

TSMC today unveiled its newest semiconductor process, advanced packaging, and 3D IC technologies for powering the next generation of AI innovations with silicon leadership at the Company's 2024 North America Technology Symposium. TSMC debuted the TSMC A16 technology, featuring leading nanosheet transistors with innovative backside power rail solution for production in 2026, bringing greatly improved logic density and performance. TSMC also introduced its System-on-Wafer (TSMC-SoW) technology, an innovative solution to bring revolutionary performance to the wafer level in addressing the future AI requirements for hyperscaler datacenters.

This year marks the 30th anniversary of TSMC's North America Technology Symposium, and more than 2,000 people attended the event, up from fewer than 100 attendees 30 years ago. The North America Technology Symposium in Santa Clara, California kicks off TSMC's Technology Symposiums around the world in the coming months. The symposium also features an "Innovation Zone," designed to highlight the technology achievements of TSMC's emerging start-up customers.

Aetina Accelerates Embedded AI with High-performance, Small Form-factor Aetina IA380E-QUFL Graphics Card

Aetina, a leading Edge AI solution provider, announced the launch of the Aetina IA380E-QUFL at Embedded World 2024 in Nuremberg, Germany. This groundbreaking product is a small form factor PCIe graphics card powered by the high-performance Intel Arc A380E GPU.

Unmatched Power in a Compact Design
The Aetina IA380E-QUFL delivers workstation-level performance packed into a low-profile, single-slot form factor. This innovative solution consumes only 50 W, making it ideal for space and power-constrained edge computing environments. Embedded system manufacturers and integrators can leverage the power of 4.096 TFLOPs peak FP32 performance delivered by the Intel Arc A380E GPU.
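
The quoted 4.096 TFLOPS peak FP32 figure can be reproduced from the GPU's shader configuration. The numbers below are assumptions, not from Aetina's announcement: 1,024 FP32 lanes (128 vector engines of 8 lanes each), a fused multiply-add counted as 2 FLOPs per lane per clock, and a 2.0 GHz clock.

```python
# Reproducing the quoted 4.096 TFLOPS peak FP32 figure for the Arc A380E.
# Assumed configuration (not stated in the announcement):
fp32_lanes = 1024               # 128 vector engines x 8 FP32 lanes
flops_per_lane_per_clock = 2    # fused multiply-add = 2 FLOPs
clock_hz = 2.0e9                # assumed 2.0 GHz shader clock

tflops = fp32_lanes * flops_per_lane_per_clock * clock_hz / 1e12
print(tflops)  # peak FP32 TFLOPS
```

The product comes out to exactly 4.096 TFLOPS, matching the spec-sheet figure under these assumptions.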

US Weighs National Security Risks of China's RISC-V Chip Development Involvement

The US government is investigating the potential national security risks associated with China's involvement in the development of open-source RISC-V chip technology. According to a letter obtained by Reuters, the Department of Commerce has informed US lawmakers that it is actively reviewing the implications of China's work in this area. RISC-V, an open instruction set architecture (ISA) created in 2014 at the University of California, Berkeley, offers an alternative to proprietary and licensed ISAs like those developed by Arm. This open-source ISA can be utilized in a wide range of applications, from AI chips and general-purpose CPUs to high-performance computing applications. Major Chinese tech giants, including Alibaba and Huawei, have already embraced RISC-V, positioning it as a new battleground in the ongoing technological rivalry between the United States and China over cutting-edge semiconductor capabilities.

In November, a group of 18 US lawmakers from both chambers of Congress urged the Biden administration to outline its strategy for preventing China from gaining a dominant position in RISC-V technology, expressing concerns about the potential impact on US national and economic security. While acknowledging the need to address potential risks, the Commerce Department noted in its letter that it must proceed cautiously to avoid unintentionally harming American companies actively participating in international RISC-V development groups. Previous attempts to restrict the transfer of 5G technology to China have created obstacles for US firms involved in global standards bodies where China is also a participant, potentially jeopardizing American leadership in the field. As the review process continues, the Commerce Department faces the delicate task of balancing national security interests with the need to maintain the competitiveness of US companies in the rapidly evolving landscape of open-source chip technologies.

Qualcomm Continues to Disrupt the PC Industry with the Addition of Snapdragon X Plus Platform

Qualcomm Technologies, Inc. today expands the leading Snapdragon X Series platform portfolio with Snapdragon X Plus. Snapdragon X Plus features the state-of-the-art Qualcomm Oryon CPU, a custom-integrated processor that delivers up to 37% faster CPU performance compared to competitors, while consuming up to 54% less power. This remarkable advancement in CPU performance sets a new standard in mobile computing, enabling users to accomplish more with greater efficiency. Snapdragon X Plus is also designed to meet the demands of on-device AI-driven applications, powered by the Qualcomm Hexagon NPU capable of 45 TOPS, making it the world's fastest NPU for laptops. This platform is a significant leap in computing innovation and is set to transform the PC industry.

"Snapdragon X Series platforms deliver leading experiences and are positioned to revolutionize the PC industry. Snapdragon X Plus will power AI-Supercharged PCs that enable even more users to excel as radical new AI experiences emerge in this period of rapid development and deployment," said Kedar Kondap, senior vice president and general manager of compute and gaming, Qualcomm Technologies, Inc. "By delivering leading CPU performance, AI capabilities, and power efficiency, we are once again pushing the boundaries of what is possible in mobile computing."

Dynabook Releases Hyperlight 14-inch Portégé X40L-M Laptop with Intel Core Ultra Processors and Powerful AI Integration

Dynabook Americas, Inc., the gold standard for long-lasting, professional-grade laptops, today unveiled the latest generation of its hyperlight 14-inch premium business laptop - the Portégé X40L-M. Now engineered with cutting-edge Intel Core Ultra (Series 1) processors and packing advanced AI capabilities, this powerful laptop redefines productivity, performance, and security for today's on-the-go professionals, while meeting Intel EVO platform and Windows 11 Secured-core PC standards.

"The Portégé X40L-M is a testament to Dynabook's commitment to delivering premium, cutting-edge solutions that empowers professionals to achieve more in their work," said James Robbins, General Manager, Dynabook Americas, Inc. "With the integration of Intel's latest Core Ultra processors, advanced AI capabilities, and seamless Windows 11 with Copilot integration, the Portégé X40L-M sets a new standard for productivity, performance, and innovation in the business laptop market."

AI Demand Drives Rapid Growth in QLC Enterprise SSD Shipments for 2024

North American customers are increasing their orders for storage products as energy efficiency becomes a key priority for AI inference servers. This, in turn, is driving up demand for QLC enterprise SSDs. Currently, only Solidigm and Samsung have certified QLC products, with Solidigm actively promoting its QLC products and standing to benefit the most from this surge in demand. TrendForce predicts shipments of QLC enterprise SSD bits to reach 30 exabytes in 2024—increasing fourfold in volume from 2023.

TrendForce identifies two main reasons for the increasing use of QLC SSDs in AI applications: the products' fast read speeds and TCO advantages. AI inference servers primarily perform read operations, which occur less frequently than the data writing required by AI training servers. In comparison to HDDs, QLC enterprise SSDs offer superior read speeds and have capacities that have expanded up to 64 TB.