Apr 17th, 2025 18:42 EDT

News Posts matching #Technology

Micron Announces Business Unit Reorganization to Capitalize on AI Growth Across All Market Segments

Micron Technology, Inc. (Nasdaq: MU), a leader in innovative memory and storage solutions, today announced a market segment-based reorganization of its business units to capitalize on the transformative growth driven by AI, from data centers to edge devices.

Micron has maintained multiple generations of industry leadership in DRAM and NAND technology and has the strongest competitive positioning in its history. Micron's industry-leading product portfolio, combined with world-class manufacturing execution enables the development of differentiated solutions for its customers across end markets. As high-performance memory and storage become increasingly vital to drive the growth of AI, this Business Unit reorganization will allow Micron to stay at the forefront of innovation in each market segment through deeper customer engagement to address the dynamic needs of the industry.

JEDEC and Industry Leaders Collaborate to Release JESD270-4 HBM4 Standard

JEDEC Solid State Technology Association, the global leader in the development of standards for the microelectronics industry, today announced the publication of its highly anticipated High Bandwidth Memory (HBM) DRAM standard: HBM4. Designed as an evolutionary step beyond the previous HBM3 standard, JESD270-4 HBM4 increases bandwidth, and with it data processing rates, while improving power efficiency and capacity per die and/or stack.

The advancements introduced by HBM4 are vital for applications that require efficient handling of large datasets and complex calculations, including generative artificial intelligence (AI), high-performance computing, high-end graphics cards, and servers. HBM4 introduces numerous improvements over the prior version of the standard.
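
The bandwidth step the new standard represents can be sketched with simple arithmetic. The interface widths and per-pin rates below (2048-bit at 8 Gb/s for HBM4, 1024-bit at 6.4 Gb/s for HBM3) are assumptions drawn from public HBM reporting, not figures from this announcement:

```python
# Peak per-stack bandwidth from interface width and per-pin data rate.
# Width/rate figures are assumptions from public HBM reporting.

def peak_bandwidth_gbs(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s = (width * per-pin rate in Gb/s) / 8 bits per byte."""
    return bus_width_bits * pin_rate_gbps / 8

hbm3 = peak_bandwidth_gbs(1024, 6.4)   # ~819 GB/s per stack
hbm4 = peak_bandwidth_gbs(2048, 8.0)   # ~2048 GB/s (~2 TB/s) per stack
print(f"HBM3: {hbm3:.0f} GB/s, HBM4: {hbm4:.0f} GB/s")
```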

TDK Demonstrates the World's First "Spin Photo Detector" Capable of 10X Data Transmission Speeds

TDK Corporation announces that it has developed the world's first "Spin Photo Detector," a photo-spintronic conversion element combining optical, electronic, and magnetic elements that can respond at an ultra-high speed of 20 picoseconds (20 × 10⁻¹² s) using light with a wavelength of 800 nm - more than 10X faster than conventional semiconductor-based photo detectors. This new device is expected to be a key driver for implementing photoelectric conversion technology that boosts data transmission and data processing speed, particularly in AI applications, while simultaneously reducing power consumption.
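
To put the 20 ps figure in context, a common rule of thumb converts a rise time into an approximate 3 dB bandwidth. Treating the quoted response time as a rise time is an assumption made here for illustration only:

```python
# Approximate detector bandwidth via the classic f_3dB ~= 0.35 / t_rise
# rule of thumb. Treating the 20 ps response time as a rise time is an
# illustrative assumption, not a claim from TDK.

def bandwidth_ghz(rise_time_ps: float) -> float:
    """3 dB bandwidth in GHz implied by a rise time given in picoseconds."""
    return 0.35 / (rise_time_ps * 1e-12) / 1e9

print(f"{bandwidth_ghz(20):.1f} GHz")  # 20 ps response -> ~17.5 GHz
```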

Transferring mass amounts of data at higher speeds and with lower power consumption is an inevitable need as AI evolves. To process data and make calculations, data is currently transferred between CPU/GPU chips as well as from and to memory by electrical signals. Therefore, there is an increasing need for optical communication and optical interconnects, which offer high speeds that do not decrease with interconnect distance. Photoelectronic conversion technology is also gaining global interest as a very compact fusion of both optical and electronic elements.

LG Makes Impressive Debut at Data Center World 2025

LG Electronics (LG) is unveiling a comprehensive lineup of cooling solutions at the Data Center World (DCW) 2025 conference in Washington, D.C., from April 14-17. This marks LG's debut at DCW, where the company is presenting an array of advanced data center cooling solutions, including high-performance chillers, air- and liquid-based server cooling products, and the LG Building Energy Control (BECON) solution. With its high-efficiency direct-to-chip (D2C) cooling technologies and integrated AI-powered control software, LG aims to make significant inroads into North America's rapidly growing data center market.

AI data centers generate more heat and consume more electricity than conventional data centers due to higher server-rack density and the increased usage of resource-intensive computer chips, such as graphics processing units and high-bandwidth memory. LG's hybrid solution, which combines chip cooling and room cooling, is designed to meet the thermal management needs of these next-gen facilities, delivering outstanding performance and top-tier energy efficiency. Visitors to the LG booth at DCW can learn all about the company's tailored cooling solutions for AI data centers and witness the commitment to innovation that has propelled LG to the forefront of the global HVAC industry.

IBM Announces z17, The First Mainframe Fully Engineered for the AI Age

IBM today announced the IBM z17, the next generation of the company's iconic mainframe, fully engineered with AI capabilities across hardware, software, and systems operations. Powered by the new IBM Telum II processor, IBM z17 expands the system's AI capabilities beyond transactional workloads to enable entirely new ones.

IBM Z is built to redefine AI at scale, positioning enterprises to score 100% of their transactions in real time. z17 enables businesses to drive innovation and do more, including the ability to process 50 percent more AI inference operations per day than z16. The new IBM z17 is built to drive business value across industries with a wide range of more than 250 AI use cases, such as mitigating loan risk, managing chatbot services, supporting medical image analysis, and deterring retail crime, among others.

Trump Tariffs to Hike PC Costs at Least 20%, System Integrators Take the Biggest Blow

While semiconductors are exempt (for now at least) from Trump's tariffs, other components going into our PCs are not. According to Tom's Hardware, which spoke to multiple system integrators, tariffs are about to hike PC costs by at least 20%, with system integrators hurt the most. The tariff package imposes a 54% rate on Chinese goods, 34% on top of earlier tariffs, and significant duties on Taiwan, South Korea, and Vietnam products. These countries supply essential PC components such as SSDs, RAM, cases, and graphics cards. Wallace Santos, CEO of Maingear, highlighted the immediate effects on production: "Tariffs have a direct impact on our cost structure… which we have to pass down to our customers." He further explained that some suppliers have halted production in China, leading to scarcity and escalating costs. Santos estimates that prices for his PCs will rise "20 to 25% as a result of the tariffs."
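
A rough sketch of how component-level tariffs compound into a system price: the component mix and the per-country rates below (other than the 54% China rate quoted above) are hypothetical illustration values, not figures from the article:

```python
# Full 1:1 pass-through of per-component tariffs to a build price.
# Component costs and non-China tariff rates are hypothetical examples;
# only the 54% rate on Chinese goods comes from the article.

def price_after_tariff(components: dict) -> tuple:
    """components: name -> (cost_usd, tariff_rate). Returns (old_total, new_total)."""
    old = sum(cost for cost, _ in components.values())
    new = sum(cost * (1 + rate) for cost, rate in components.values())
    return old, new

build = {
    "GPU":  (800.0, 0.32),  # hypothetical Taiwan rate
    "CPU":  (300.0, 0.00),  # semiconductors exempt, for now
    "RAM":  (150.0, 0.25),  # hypothetical South Korea rate
    "SSD":  (120.0, 0.25),
    "case": (120.0, 0.54),  # China rate from the article
    "PSU":  (110.0, 0.54),
}
old, new = price_after_tariff(build)
print(f"${old:.0f} -> ${new:.0f} (+{(new / old - 1) * 100:.1f}%)")
```

Even with the CPU exempt, this hypothetical mix lands close to the 20-25%+ increases the system integrators describe.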

Other company leaders express concern over the limited alternatives available. Kelt Reeves, CEO of Falcon Northwest, stated, "Sadly the overwhelming majority of PC component manufacturing is not done in the US and never has been. There's no US alternative supplier for most PC parts." Reeves added that even US-based system integrators are "facing skyrocketing costs" due to the tariffs, which are set to worsen an already challenging market situation caused by ongoing GPU shortages. Jon Bach, CEO of Puget Systems, shared his perspective in a recent blog post, noting that his company might absorb some costs to minimize consumer price increases. However, even before the latest tariff updates, Bach predicted a price rise of "20 to 45 percent by June." Critics of the tariffs warn of broader economic issues. Gary Shapiro, CEO of the Consumer Technology Association, condemned the policy as "massive tax hikes on Americans that will drive inflation, kill jobs on Main Street, and may cause a recession for the US economy." With these tariffs taking effect, the PC industry faces a period of adjustment marked by increased costs and significant supply chain challenges.

Ayar Labs Unveils World's First UCIe Optical Chiplet for AI Scale-Up Architectures

Ayar Labs, the leader in optical interconnect solutions for large-scale AI workloads, today announced the industry's first Universal Chiplet Interconnect Express (UCIe) optical interconnect chiplet to maximize AI infrastructure performance and efficiency while reducing latency and power consumption. By incorporating a UCIe electrical interface, this solution is designed to eliminate data bottlenecks and integrate easily into customer chip designs.

Capable of achieving 8 Tbps bandwidth, the TeraPHY optical I/O chiplet is powered by Ayar Labs' 16-wavelength SuperNova light source. The integration of a UCIe interface means this solution not only delivers high performance and efficiency but also enables interoperability among chiplets from different vendors. This compatibility with the UCIe standard creates a more accessible, cost-effective ecosystem, which streamlines the adoption of advanced optical technologies necessary for scaling AI workloads and overcoming the limitations of traditional copper interconnects.
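
If the 8 Tbps aggregate were spread evenly across the SuperNova's 16 wavelengths, the per-wavelength rate falls out directly. The even split is a simplification for illustration; the article does not give the actual lane and port mapping:

```python
# Per-wavelength data rate under an assumed even split of the chiplet's
# aggregate bandwidth across the light source's 16 wavelengths.

def per_lambda_gbps(total_tbps: float, wavelengths: int) -> float:
    """Convert aggregate Tbps to Gb/s per wavelength, assuming an even split."""
    return total_tbps * 1000 / wavelengths

print(f"{per_lambda_gbps(8, 16):.0f} Gb/s per wavelength")
```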

Kioxia Shows Next-Generation Flash Memory and SSD Solutions at CFMS 2025

Kioxia Corporation, a world leader in memory solutions, highlighted the critical role of Kioxia's next-generation high-performance storage solutions for AI applications at the China Flash Market Summit/MemoryS 2025 ("CFMS 2025") held on March 12 in Shenzhen, China.

At CFMS 2025, themed "MEMORY LANDSCAPE, VALUE RE-SHAPE", Kioxia exhibited its eighth-generation BiCS FLASH 3D flash memory technology, built for the efficient and reliable storage solutions required by evolving cloud computing and large-scale AI models. Kioxia also presented its SSD product lineup, which delivers the high performance, high efficiency, and high scalability required for AI applications. The exhibition included the recently announced KIOXIA LC9 Series, the company's first high-capacity 122.88-terabyte (TB) NVMe enterprise SSD, which incorporates 2-terabit (Tb) QLC BiCS FLASH.
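
The die count implied by the LC9's headline capacity can be estimated with decimal units throughout. This ignores over-provisioning, ECC overhead, and binary/decimal capacity conventions, so it is an illustration rather than Kioxia's actual configuration:

```python
import math

# Minimum raw NAND die count to reach a given usable capacity, using
# decimal units and ignoring over-provisioning/ECC -- an illustration,
# not Kioxia's actual die configuration.

def min_dies(capacity_tb: float, die_tbit: float) -> int:
    """Dies needed: capacity in TB * 8 bits/byte, divided by die size in Tbit."""
    return math.ceil(capacity_tb * 8 / die_tbit)

print(min_dies(122.88, 2))  # at least 492 raw 2 Tb dies for 122.88 TB
```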

Quantum Machines OPX+ Platform Enables Breaking of the Qubit Entanglement Bottleneck via Multiplexing

Quantum networks—where entanglement is distributed across distant nodes—promise to revolutionize quantum computing, communication, and sensing. However, a major bottleneck has been scalability, as the entanglement rate in most existing systems is limited by a network design of a single qubit per node. A new study, led by Prof. A. Faraon at Caltech and conducted by A. Ruskuc et al., recently published in Nature (ref: 1-2), presents a groundbreaking solution: multiplexed entanglement using multiple emitters in quantum network nodes. By harnessing rare-earth ions coupled to nanophotonic cavities, researchers at Caltech and Stanford have demonstrated a scalable platform that significantly enhances entanglement rates and network efficiency. Let's take a closer look at the two key challenges they tackled—multiplexing to boost entanglement rates and dynamic control strategies to ensure qubit indistinguishability—and how they overcame them.

Breaking the Entanglement Bottleneck via Multiplexing
One of the biggest challenges in scaling quantum networks is the entanglement rate bottleneck, which arises due to the fundamental constraints of long-distance quantum communication. When two distant qubits are entangled via photon interference, the rate of entanglement distribution is typically limited by the speed of light and the node separation distance. In typical systems with a single qubit per node, this rate scales as c/L (where c is the speed of light and L is the distance between nodes), leading to long waiting times between successful entanglement events. This severely limits the scalability of quantum networks.
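
The c/L scaling and the multiplexing speed-up can be sketched numerically. The fiber index, node separation, and per-attempt success probability below are illustrative assumptions, not parameters from the study:

```python
# Entanglement rate limited by classical signalling delay (~ c/L per
# attempt) and the roughly linear speed-up from N multiplexed emitters.
# Fiber index, distance, and success probability are illustrative.

C_VACUUM = 3.0e8     # speed of light in vacuum, m/s
FIBER_INDEX = 1.5    # assumed refractive index: light travels at c/n in fiber

def attempt_rate_hz(distance_m: float) -> float:
    """One heralded attempt per signalling-limited interval, ~ (c/n)/L."""
    return (C_VACUUM / FIBER_INDEX) / distance_m

def entanglement_rate_hz(distance_m: float, success_prob: float,
                         emitters: int = 1) -> float:
    """Multiplexing N emitters per node multiplies the success rate ~N-fold."""
    return attempt_rate_hz(distance_m) * success_prob * emitters

base = entanglement_rate_hz(10_000, 1e-4)        # single qubit per node, 10 km
multi = entanglement_rate_hz(10_000, 1e-4, 20)   # 20 multiplexed emitters
print(f"{base:.1f} Hz -> {multi:.1f} Hz with 20 emitters")
```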

Marvell Demonstrates Industry's First End-to-End PCIe Gen 6 Over Optics at OFC 2025

Marvell Technology, Inc., a leader in data infrastructure semiconductor solutions, today announced in collaboration with TeraHop, a global optical solutions provider for AI driven data centers, the demonstration of the industry's first end-to-end PCIe Gen 6 over optics in the Marvell booth #2129 at OFC 2025. The demonstration will showcase the extension of PCIe reach beyond traditional electrical limits to enable low-latency, standards-based AI scale-up infrastructure.

As AI workloads drive exponential data growth, PCIe connectivity must evolve to support higher bandwidth and longer reach. The Marvell Alaska P PCIe Gen 6 retimer and its PCIe Gen 7 SerDes technology enable low-latency, low bit-error-rate transmission over optical fiber, delivering the scalability, power efficiency, and high performance required for next-generation accelerated infrastructure. With PCIe over optics, system designers will be able to take advantage of longer links between devices that feature the low latency of PCIe technology.

Sarcina Technology Launches AI Chiplet Platform Enabling Systems Up to 100x100 mm in a Single Package

Sarcina Technology, a global semiconductor packaging specialist, is excited to announce the launch of its innovative AI platform to enable advanced AI packaging solutions that can be tailored to meet specific customer requirements. Leveraging ASE's FOCoS-CL (Fan-Out Chip-on-Substrate-Chip Last) assembly technology, this platform includes an interposer which supports chiplets using UCIe-A for die-to-die interconnects, allowing for the delivery of cost-effective, customizable, cutting-edge solutions.

Sarcina Technology is on a mission to push the boundaries of AI computing system development by providing a unique platform that enables efficient, scalable, configurable and cost-effective semiconductor packaging solutions for AI applications. As AI workloads continue to evolve, there is a need for increasingly sophisticated packaging solutions capable of supporting higher computational demands. Sarcina's novel interposer packaging technology integrates leading memory solutions with high-efficiency interconnects. Whether prioritizing cost, performance or power-efficiency, Sarcina's new AI platform can deliver.

NVMe Hard Drives: Seagate's Answer to Growing AI Storage Demands

Seagate is advancing NVMe technology for large-capacity hard drives as the storage needs of AI systems and applications continue to rise. Legacy and current storage setups struggle to handle the huge datasets needed for machine learning, which now reach petabyte and even exabyte sizes. Today's storage options have significant drawbacks for AI tasks: SSD-based systems offer the needed speed but are not cost-effective for storing large AI training datasets, while SAS/SATA hard drives are cheaper but rely on proprietary silicon, host bus adapters (HBAs), and controller architectures that struggle with the high-throughput, low-latency needs of AI's data flow.

Seagate's "new idea" is to bring NVMe to large-capacity hard drives, creating a solution that keeps hard drives cost-effective and dense while boosting performance for AI applications. NVMe hard drives eliminate the need for HBAs, protocol bridges, and additional SAS infrastructure, allowing seamless scalability by integrating high-density hard drive storage with high-speed SSD caching in a unified NVMe architecture.

Micron Technology Reports Results for the Second Quarter of Fiscal 2025

Micron Technology, Inc. (Nasdaq: MU) today announced results for its second quarter of fiscal 2025, which ended February 27, 2025.

Fiscal Q2 2025 highlights
  • Revenue of $8.05 billion versus $8.71 billion for the prior quarter and $5.82 billion for the same period last year
  • GAAP net income of $1.58 billion, or $1.41 per diluted share
  • Non-GAAP net income of $1.78 billion, or $1.56 per diluted share
  • Operating cash flow of $3.94 billion versus $3.24 billion for the prior quarter and $1.22 billion for the same period last year
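
The quarter-over-quarter and year-over-year changes behind the quoted revenue and cash flow figures (all in $ billions) work out as follows:

```python
# Percentage changes for the Micron fiscal Q2 2025 figures quoted above.

def pct_change(current: float, previous: float) -> float:
    """Percentage change from previous to current."""
    return (current / previous - 1) * 100

revenue_qoq = pct_change(8.05, 8.71)  # revenue vs prior quarter
revenue_yoy = pct_change(8.05, 5.82)  # revenue vs same period last year
ocf_yoy = pct_change(3.94, 1.22)      # operating cash flow, year over year
print(f"Revenue: {revenue_qoq:+.1f}% QoQ, {revenue_yoy:+.1f}% YoY; "
      f"operating cash flow {ocf_yoy:+.1f}% YoY")
```
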
"Micron delivered fiscal Q2 EPS above guidance and data center revenue tripled from a year ago," said Sanjay Mehrotra, Chairman, President and CEO of Micron Technology. "We are extending our technology leadership with the launch of our 1-gamma DRAM node. We expect record quarterly revenue in fiscal Q3, with DRAM and NAND demand growth in both data center and consumer-oriented markets, and we are on track for record revenue and significantly improved profitability in fiscal 2025."

NVIDIA to Build Accelerated Quantum Computing Research Center

NVIDIA today announced it is building a Boston-based research center to provide cutting-edge technologies to advance quantum computing. The NVIDIA Accelerated Quantum Research Center, or NVAQC, will integrate leading quantum hardware with AI supercomputers, enabling what is known as accelerated quantum supercomputing. The NVAQC will help solve quantum computing's most challenging problems, ranging from qubit noise to transforming experimental quantum processors into practical devices.

Leading quantum computing innovators, including Quantinuum, Quantum Machines and QuEra Computing, will tap into the NVAQC to drive advancements through collaborations with researchers from leading universities, such as the Harvard Quantum Initiative in Science and Engineering (HQI) and the Engineering Quantum Systems (EQuS) group at the Massachusetts Institute of Technology (MIT).

China Dedicates $55 Billion for Semiconductor, AI, and Quantum Computing Development in 2025

China's Ministry of Finance has allocated $55 billion (¥398.12 billion) for science and technology funding in 2025, marking a 10% increase from the previous year's $50 billion (¥361.9 billion). This expenditure now stands as the nation's third-largest budget item, following only national defense and debt interest payments. The 2024 allocation achieved a 97.6% implementation rate, indicating effective deployment of resources in the technology sector. The funding prioritizes initiatives under the "Science and Technology Innovation 2030" program, with significant investments targeting semiconductors, artificial intelligence, and quantum computing research. Rather than stimulating immediate breakthroughs, the incremental funding increase aims to strengthen existing projects and enhance technological self-reliance amid global competition.
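
A quick sanity check of the quoted figures, deriving the year-over-year growth and the yuan-dollar conversion rate implied by the article's numbers:

```python
# Cross-check the budget figures quoted above: YoY growth in yuan terms
# and the CNY/USD rate implied by the quoted pair of values.

growth = (398.12 / 361.9 - 1) * 100  # yuan-denominated YoY growth, %
implied_rate = 398.12 / 55           # yuan per dollar implied by the article
print(f"+{growth:.1f}% YoY, ~¥{implied_rate:.2f} per USD")
```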

This strategy reflects the fiscal constraints imposed by China's economic slowdown while preserving the country's long-term technological objectives. Supplementary measures bolster direct R&D investment, including enhanced support for fundamental research and specialized financing mechanisms for technology-focused enterprises. Tax reductions and targeted subsidies form part of a comprehensive policy framework designed to foster domestic innovation capabilities. While the funding increase shows commitment to technological advancement, effective project management and efficient resource allocation will be critical success factors, particularly as China competes globally. Perhaps the most important milestone for this package will be supporting the development of advanced lithography tools to ensure that domestic companies can manufacture cutting-edge silicon.

CoolIT Showcases AI-Ready Liquid Cooling Technology at NVIDIA GTC 2025

CoolIT Systems (CoolIT), the world's leader in liquid cooling for computing, will showcase its latest liquid cooling technologies at NVIDIA GTC 2025. In addition to displaying CoolIT's highest-density coolant distribution units (CDUs), CoolIT's booth will feature its innovative cold plate cooling systems for next-generation AI systems.

"At GTC we are excited to reveal our next-generation AI cold plate systems that NVIDIA's server manufacturing partners can leverage to fast-track the development of their direct liquid-cooled servers," said Brandon Peterson, SVP of Business Development.

Chinese Researchers Develop No-Silicon 2D GAAFET Transistor Technology

Scientists from Peking University have developed the world's first two-dimensional gate-all-around field-effect transistor (GAAFET), establishing a new performance benchmark in domestic semiconductor design. The design, documented in Nature, represents a shift in transistor architecture that could reshape the future of Chinese microelectronics. The reported characteristics, 40% higher performance and 10% improved efficiency compared to TSMC's 3 nm N3 node, look promising. The research team, headed by Professors Peng Hailin and Qiu Chenguang, engineered a "wafer-scale multi-layer-stacked single-crystalline 2D GAA configuration" that demonstrated superior performance metrics when benchmarked against current industry leaders. The innovation leverages bismuth oxyselenide (Bi₂O₂Se), a novel semiconductor material that maintains exceptional carrier mobility at sub-nanometer dimensions, a critical advantage as the industry pushes into angstrom-era semiconductor nodes.

"Traditional silicon-based transistors face fundamental physical limitations at extreme scales," explained Professor Peng, who characterized the technology as "the fastest, most efficient transistor ever developed." The 2D GAAFET architecture circumvents the mobility degradation that plagues silicon in ultra-small geometries, allowing continued performance scaling beyond current nodes. The development comes amid China's intensified efforts to achieve semiconductor self-sufficiency, as trade restrictions have limited access to advanced lithography equipment and other critical manufacturing technologies. Even as China develops domestic EUV technology, that tooling remains unproven in production. Rather than competing directly with established fabrication processes, where China cannot compete in the near term, the Beijing team has pioneered an entirely different technological approach, what Professor Peng described as "changing lanes entirely" instead of seeking incremental improvements.

Silicon Motion Announces PCIe Gen5 Enterprise SSD Reference Design Kit Supporting up to 128TB

Silicon Motion Technology Corporation ("Silicon Motion"), a global leader in designing and marketing NAND flash controllers for solid-state storage devices, today announced the sampling of its groundbreaking MonTitan SSD Reference Design Kit (RDK), which supports up to 128 TB with QLC NAND and is built on the advanced MonTitan PCIe Gen 5 SSD Development Platform. This new offering aims to accelerate enterprise and data center AI SSD solutions by providing a robust and efficient RDK for OEMs and partners.

The SSD RDK incorporates Silicon Motion's dual-ported, enterprise-grade SM8366 controller, which supports the PCIe Gen 5 x4, NVMe 2.0, and OCP 2.5 data center specifications, offering unmatched performance, QoS, and capacity for next-generation large data lake storage needs.

AMD's David McAfee Celebrates 25th Anniversary of Radeon Graphics Technology

This month, we at AMD celebrate two significant milestones in the Radeon story. First, the 25th anniversary of Radeon, a journey that began in 2000 with the ATI Radeon DDR card. Back then, 32 MB of VRAM, a 143 MHz clock, and 30M transistors were cutting-edge tools that sparked your early adventures. Today, those specs are a nostalgic memory, dwarfed by the leaps we've made together, culminating in the 24 GB of memory, multi-GHz clocks, and nearly 60B transistors of RDNA 3 cards driving the immersive worlds you now explore. But we're not stopping there. We're proud to continue that innovation journey with the RDNA 4-based Radeon RX 9070 XT and Radeon RX 9070, available starting today. This is more than a new chapter for us; it's a promise to you, the gamers who fuel our passion. We know what matters when you choose your next GPU: raw performance to conquer your favorite titles, tech that's ready for tomorrow's blockbusters, and value that respects your investment. That's precisely what RDNA 4 delivers.

Our goal with RDNA 4 wasn't to chase an elite crown few can reach. Instead, we focused on you, the heart of gaming, crafting cards that bring exceptional power to the setups most of you run. Compared to our last gen, RDNA 4 boosts raster performance for crisper, smoother visuals. Ray tracing throughput doubles, letting you soak in lifelike lighting and reflections without compromise. And with an 8x uplift in machine learning performance, we're unlocking new possibilities - like FSR 4, our latest leap in ML-based upscaling.

SK keyfoundry Launches 3D Hall-effect Sensor Technology Capable of Measuring Speed and Direction

SK keyfoundry, an 8-inch pure-play foundry in Korea, announced today that it offers a new 3D Hall-effect sensor technology that can measure speed and direction through three-dimensional magnetic field detection for its foundry customers.

The Hall-effect sensor is a device that measures the strength of a magnetic field using the Hall-effect, which detects the voltage difference generated when a conductor or semiconductor passes through a magnetic field. The measured magnetic field is utilized in applications that leverage the position, speed, rotation, direction, and current of devices.
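
The relationship described above is commonly written V_H = I·B / (n·q·t) for a thin conducting plate. The carrier density and geometry below are illustrative textbook values, not SK keyfoundry process parameters:

```python
# Hall voltage for a thin semiconductor plate: V_H = I*B / (n*q*t).
# Carrier density and plate geometry are illustrative values only.

ELECTRON_CHARGE = 1.602e-19  # coulombs

def hall_voltage(current_a: float, field_t: float,
                 carrier_density_m3: float, thickness_m: float) -> float:
    """Hall voltage in volts across a plate of the given thickness."""
    return current_a * field_t / (carrier_density_m3 * ELECTRON_CHARGE * thickness_m)

# 1 mA through a 1 um-thick n-type plate (n = 1e22 m^-3) in a 0.1 T field:
v = hall_voltage(1e-3, 0.1, 1e22, 1e-6)
print(f"{v * 1000:.1f} mV")
```

The inverse dependence on carrier density is why semiconductors, not metals, make practical Hall sensors.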

Giga Computing, SK Telecom, and SK Enmove to Collaborate on AI Data Center Liquid Cooling Technology

Giga Computing, a subsidiary of GIGABYTE Technology, has signed a Memorandum of Understanding (MoU) with SK Telecom and SK Enmove to collaborate on advancing AI Data Center (AIDC) and high-performance computing (HPC) while accelerating the adoption of liquid cooling technology in next-generation data centers.
This strategic partnership sets the stage to nurture and develop high-performance, energy-efficient, and sustainable data center solutions.

Driving AI and Cooling Technology Innovation Together
The collaboration will focus on high-performance AI servers, liquid cooling technologies, and modular AI clusters to support SK's various business units, including:
  • SK Telecom: Strengthening AIDC infrastructure to support next-generation data centers
  • SK Enmove: Advancing liquid cooling technologies to improve energy efficiency and sustainability in data centers

China Doubles Down on Semiconductor Research, Outpacing US with High-Impact Papers

When the US imposed sanctions on Chinese semiconductor makers, China began the push for sovereign chipmaking tools. According to a study conducted by the Emerging Technology Observatory (ETO), Chinese institutions have dramatically outpaced their US counterparts in next-generation chipmaking research. Between 2018 and 2023, nearly 475,000 scholarly articles on chip design and fabrication were published worldwide. Chinese research groups contributed 34% of the output—compared to just 15% from the United States and 18% from Europe. The study further emphasizes the quality of China's contributions. Focusing on the top 10% of the most-cited articles, Chinese researchers were responsible for 50% of this high-impact work, while American and European research accounted for only 22% and 17%, respectively.

This trend shows China's lead isn't about volume alone; it suggests its work is resonating strongly within the global academic community. Key research areas include neuromorphic and optoelectronic computing and, of course, lithography tools. China is operating mainly outside the scope of US export restrictions that have, since 2022, limited access to advanced chipmaking equipment, specifically the tools necessary for fabricating chips below the 14 nm process node. Although US sanctions were intended to limit China's access to cutting-edge manufacturing technology, the massive body of Chinese research suggests that these measures may ultimately prove less effective, with Chinese institutions continuing to publish influential, high-citation studies. However, Chinese theoretical work has yet to be proven in the fab: only a single company, SMIC, currently manufactures at 7 nm and 5 nm nodes, and Chinese semiconductor makers still need more advanced lithography solutions to reach high-volume manufacturing on nodes like 3 nm and 2 nm for more powerful domestic AI and HPC chips.

Credo's PCIe Retimer Successfully Passes PCI-SIG Compliance

Credo Technology Group Holding Ltd (Credo), an innovator in providing secure, high-speed connectivity solutions that deliver improved reliability and energy efficiency, today announced that its PCI Express (PCIe) 5.0-capable "Toucan" retimer has successfully passed testing at the PCI-SIG Compliance Workshop #133 in Taipei. This milestone confirms the retimer's compliance with the rigorous standards required for PCIe 5.0 technology integrations, and it will now be officially listed on the PCI-SIG Integrators List.

"We are excited to announce that our Toucan retimer has passed the PCI-SIG Compliance Workshop, a critical step in ensuring Credo's PCIe technology solutions seamlessly integrate into the evolving high-performance AI infrastructure," said Phil Kumin, AVP of Product for PCIe/CXL at Credo. "Considering that retimers are required to undergo extensive testing, this achievement not only reinforces our leadership in high-speed connectivity but also provides our customers with the confidence that Credo's products meet the highest standards of interoperability and performance."

Marvell Demonstrates Industry's Leading 2nm Silicon for Accelerated Infrastructure

Marvell Technology, Inc., a leader in data infrastructure semiconductor solutions, has demonstrated its first 2 nm silicon IP for next-generation AI and cloud infrastructure. Produced on TSMC's 2 nm process, the working silicon is part of the Marvell platform for developing custom XPUs, switches and other technology to help cloud service providers elevate the performance, efficiency, and economic potential of their worldwide operations.

Given a projected 45% TAM growth annually, custom silicon is expected to account for approximately 25% of the market for accelerated compute by 2028.
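
Compounding 45% annually over the four years to 2028 gives the growth multiple behind that projection. The unit baseline is a placeholder to show relative growth, not a market-size figure:

```python
# Compound growth at a fixed annual rate; the baseline of 1.0 is a
# placeholder, so the result is a growth multiple rather than a TAM.

def compound(base: float, rate: float, years: int) -> float:
    """Value after compounding `base` at `rate` per year for `years` years."""
    return base * (1 + rate) ** years

multiple = compound(1.0, 0.45, 4)  # e.g. 2024 -> 2028 at 45%/yr
print(f"~{multiple:.2f}x over 4 years")
```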

SOPHGO Unveils New Products at the 2025 China RISC-V Ecosystem Conference

On February 27-28, the 2025 China RISC-V Ecosystem Conference was held at the Zhongguancun International Innovation Center in Beijing. As a core promoter in the RISC-V field, SOPHGO was invited to deliver a speech and launched a series of new products based on the SG2044 chip, sharing the company's cutting-edge practices in the heterogeneous fusion of AI and RISC-V and contributing to the development of the global open-source instruction-set ecosystem. During the conference, SOPHGO's distinctive exhibition area drew many industry attendees.

Focusing on AI Integration, Leading Breakthroughs in RISC-V Technology
At the main forum of the conference, the Vice President of SOPHGO RISC-V delivered a speech titled "RISC-V Breakthroughs Driven by AI: Integration + Heterogeneous Innovation," elaborating on SOPHGO's achievements in the deep integration of the RISC-V architecture with artificial intelligence technology. He pointed out that current AI innovations are driving market changes: the emergence of DeepSeek has ignited a trillion-scale computing power market, and as large models penetrate various sectors, inference demand will grow explosively, changing the structure of computing power demand. This will reshape the landscape of the computing power market and bring significant business opportunities to domestic computing enterprises, while RISC-V high-performance computing enters a fast track of development driven by AI.