News Posts matching #Research

PC Market Returns to Growth in Q1 2024 with AI PCs to Drive Further 2024 Expansion

Global PC shipments grew around 3% YoY in Q1 2024 after eight consecutive quarters of declines due to demand slowdown and inventory correction, according to the latest data from Counterpoint Research. The shipment growth in Q1 2024 came on a relatively low base in Q1 2023. The coming quarters of 2024 will see sequential shipment growth, resulting in 3% YoY growth for the full year, largely driven by AI PC momentum, shipment recovery across different sectors, and a fresh replacement cycle.

Lenovo's PC shipments were up 8% in Q1 2024 off an easy comparison from last year. The brand managed to reclaim its 24% share in the market, compared to 23% in Q1 2023. HP and Dell, with market shares of 21% and 16% respectively, remained flattish, waiting for North America to drive shipment growth in the coming quarters. Apple's shipment performance was also resilient, with the 2% growth mainly supported by M3 base models.

X-Silicon Startup Wants to Combine RISC-V CPU, GPU, and NPU in a Single Processor

While we are all used to having a system with a CPU, GPU, and, more recently, an NPU, X-Silicon Inc. (XSi), a startup founded by Silicon Valley veterans, has unveiled an interesting RISC-V processor that can simultaneously handle CPU, GPU, and NPU workloads in a single chip. This innovative chip architecture, which will be open-source, aims to provide a flexible and efficient solution for a wide range of applications, including artificial intelligence, virtual reality, automotive systems, and IoT devices. The new microprocessor combines a RISC-V CPU core with vector capabilities and GPU acceleration into a single chip, creating a versatile all-in-one processor. By integrating the functionality of a CPU and GPU into a single core, X-Silicon's design offers several advantages over traditional architectures. The chip utilizes the open-source RISC-V instruction set architecture (ISA) for both CPU and GPU operations, running a single instruction stream. This approach promises a lower memory footprint and improved efficiency, as there is no need to copy data between separate CPU and GPU memory spaces.

Called the C-GPU architecture, X-Silicon's design uses a RISC-V Vector Core with 16 32-bit FPUs and a scalar ALU for processing regular integer as well as floating-point instructions. A unified instruction decoder feeds the cores, which are connected to a thread scheduler, texture unit, rasterizer, clipping engine, neural engine, and pixel processors. Everything is fed into a frame buffer, which in turn feeds the video engine for video output. The setup of the cores allows users to program each core individually for HPC, AI, video, or graphics workloads. Since a chip is of little use without software, X-Silicon is working on OpenGL ES, Vulkan, Mesa, and OpenCL API support. Additionally, the company plans to release a hardware abstraction layer (HAL) for direct chip programming. According to Jon Peddie Research (JPR), the industry has been seeking an open-standard GPU that is flexible and scalable enough to support various markets. X-Silicon's CPU/GPU hybrid chip aims to address this need by providing manufacturers with a single, open chip design that can handle any desired workload. XSi gave no timeline, but it plans to distribute the IP to OEMs and hyperscalers, so first silicon is still some way off.
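To put a rough number on the data-copy savings X-Silicon is claiming, the sketch below models the transfer overhead a conventional split CPU/GPU pipeline pays over a PCIe link; the workload size and link bandwidth are illustrative assumptions, not figures from XSi.

```python
# Back-of-envelope model of host<->device copy overhead in a discrete CPU+GPU
# setup; a single-address-space design like the C-GPU would avoid these copies.
# Workload size and PCIe bandwidth below are assumptions for illustration only.
workload_bytes = 512 * 1024**2        # assume 512 MB of working data per pass
pcie_bytes_per_s = 32e9               # assume ~32 GB/s (PCIe 4.0 x16 class link)

copy_in = workload_bytes / pcie_bytes_per_s    # host -> device transfer time
copy_out = workload_bytes / pcie_bytes_per_s   # device -> host transfer time
print(f"Copy overhead per pass: {(copy_in + copy_out) * 1e3:.1f} ms")
```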

Chinese Research Institute Utilizing "Banned" NVIDIA H100 AI GPUs

NVIDIA's freshly unveiled "Blackwell" B200 and GB200 AI GPUs will be getting plenty of coverage this year, but many organizations will be sticking with current or prior generation hardware. Team Green is in the process of shipping out compromised "Hopper" designs to customers in China, but the region's appetite for powerful AI-crunching hardware is growing. Last year's China-specific H800 design and the older "Ampere" A800 chip were deemed too potent; new regulations prevented further sales. Recently, AMD's Instinct MI309 AI accelerator was considered "too powerful to gain unconditional approval from the US Department of Commerce." Natively-developed solutions are catching up with Western designs, but some institutions are not prepared to queue up for emerging technologies.

NVIDIA's new H20 AI GPU as well as Ada Lovelace-based L20 PCIe and L2 PCIe models are weakened enough to get a thumbs up from trade regulators, but likely not compelling enough for discerning clients. The Telegraph believes that NVIDIA's uncompromised H100 AI GPU is currently in use at several Chinese establishments—the report cites information presented within four academic papers published on ArXiv, an open access science website. The Telegraph's news piece highlights one of the studies—it was: "co-authored by a researcher at 4paradigm, an AI company that was last year placed on an export control list by the US Commerce Department for attempting to acquire US technology to support China's military." Additionally, the Chinese Academy of Sciences appears to have conducted several AI-accelerated experiments, involving the solving of complex mathematical and logical problems. The article suggests that this research organization has acquired a very small batch of NVIDIA H100 GPUs (up to eight units). A "thriving black market" for high-end NVIDIA processors has emerged in the region—last Autumn, the Center for a New American Security (CNAS) published an in-depth article about ongoing smuggling activities.

Arizona State University and Deca Technologies to Pioneer North America's First R&D Center for Advanced Fan-Out Wafer-Level Packaging

Arizona State University (ASU) and Deca Technologies (Deca), a premier provider of advanced wafer- and panel-level packaging technology, today announced a groundbreaking collaboration to create North America's first fan-out wafer-level packaging (FOWLP) research and development center.

The new Center for Advanced Wafer-Level Packaging Applications and Development is set to catalyze innovation in the United States, expanding domestic semiconductor manufacturing capabilities and driving advancements in cutting-edge fields such as artificial intelligence, machine learning, automotive electronics and high-performance computing.

Extropic Intends to Accelerate AI through Thermodynamic Computing

Extropic, a pioneer in physics-based computing, this week emerged from stealth mode and announced the release of its Litepaper, which outlines the company's revolutionary approach to AI acceleration through thermodynamic computing. Founded in 2022 by Guillaume Verdon, Extropic has been developing novel chips and algorithms that leverage the natural properties of out-of-equilibrium thermodynamic systems to perform probabilistic computations for generative AI applications in a highly efficient manner. The Litepaper delves into Extropic's groundbreaking computational paradigm, which aims to address the limitations of current digital hardware in handling the complex probability distributions required for generative AI.

Today's algorithms spend around 25% of their time moving numbers around in memory, limiting the speedup achievable by accelerating specific operations. In contrast, Extropic's chips natively accelerate a broad class of probabilistic algorithms by running them physically as a rapid and energy-efficient, physics-based process in their entirety, unlocking a new regime of AI acceleration well beyond what was previously thought achievable. In coming out of stealth, the company has announced the fabrication of a superconducting prototype processor and developments surrounding room-temperature semiconductor-based devices for the broader market, with the goal of revolutionizing the field of AI acceleration and enabling new possibilities in generative AI.
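For a sense of the workload class being described, here is a minimal software Gibbs sampler for a tiny Ising-style energy model; it is a generic illustration of probabilistic sampling, not Extropic's hardware or algorithm, and the coupling and temperature values are arbitrary assumptions.

```python
# Plain-software Gibbs sampling over a 4-spin ring with coupling J, the kind
# of probabilistic routine that thermodynamic hardware aims to run natively.
import math, random

J = 1.0                     # assumed coupling strength
spins = [1, -1, 1, -1]      # a tiny 4-spin ring

def gibbs_step(spins, beta=1.0):
    for i in range(len(spins)):
        h = J * (spins[i - 1] + spins[(i + 1) % len(spins)])   # local field
        p_up = 1.0 / (1.0 + math.exp(-2 * beta * h))           # P(spin = +1)
        spins[i] = 1 if random.random() < p_up else -1
    return spins

for _ in range(1000):
    gibbs_step(spins)
print(spins)   # samples now approximately follow the model's distribution
```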

The SEA Projects Prepare Europe for Exascale Supercomputing

The HPC research projects DEEP-SEA, IO-SEA and RED-SEA are wrapping up this month after a three-year project term. The three projects worked together to develop key technologies for European Exascale supercomputers, based on the Modular Supercomputing Architecture (MSA), a blueprint architecture for highly efficient and scalable heterogeneous Exascale HPC systems. To achieve this, the three projects collaborated on system software and programming environments, data management and storage, as well as interconnects adapted to this architecture. The results of their joint work will be presented at a co-design workshop and poster session at the EuroHPC Summit (Antwerp, 18-21 March, www.eurohpcsummit.eu).

SK Hynix To Invest $1 Billion into Advanced Chip Packaging Facilities

Lee Kang-Wook, Vice President of Research and Development at SK Hynix, has discussed the increased importance of advanced chip packaging with Bloomberg News. In an interview with the media company's business section, Lee referred to a tradition of prioritizing the design and fabrication of chips: "the first 50 years of the semiconductor industry has been about the front-end." He believes that the latter half of production processes will take precedence in the future: "...but the next 50 years is going to be all about the back-end." He outlined a "more than $1 billion" investment into South Korean facilities—his department is hoping to "improve the final steps" of chip manufacturing.

SK Hynix's Head of Packaging Development pioneered a novel method of packaging the third generation of high bandwidth memory technology (HBM2E)—that innovation secured NVIDIA as a high-profile, long-term customer. Demand for Team Green's AI GPUs has boosted the significance of HBM technologies—Micron and Samsung are attempting to play catch-up with new designs. South Korea's leading memory supplier is hoping to stay ahead in the next-gen HBM contest—supposedly 12-layer fifth-generation samples have been submitted to NVIDIA for approval. SK Hynix's Vice President recently revealed that HBM production volumes for 2024 have sold out—currently company leadership is considering the next steps for market dominance in 2025. The majority of the firm's newly announced $1 billion budget will be spent on the advancement of MR-MUF and TSV technologies, according to their R&D chief.

Helldivers 2 Warbond System Previewed Ahead of March 14 Launch

Helldivers, get the Cutting Edge advantage on the battlefield! Greetings, fearless heroes of galactic democracy! Steel yourself for the next big push against the disgraceful enemies of freedom with our brand-new Warbond—Cutting Edge! Packed with high-voltage vibes, Cutting Edge gives you the chance to enhance your loadout of liberty with ultra-futuristic armour, guns that spit lightning, super stylish capes and epic emotes.

Super Earth R&D Experiments
Helldivers… we need your help. The brainiacs in Super Earth Research & Development have some cool experimental armour ready to be field-tested. This is where you come in, you're just the right people for the job.

IBM Intros AI-enhanced Data Resilience Solution - a Cyberattack Countermeasure

Cyberattacks are an existential risk, with 89% of organizations ranking ransomware as one of the top five threats to their viability, according to a November 2023 report from TechTarget's Enterprise Strategy Group, a leading analyst firm. And this is just one of many risks to corporate data—insider threats, data exfiltration, hardware failures, and natural disasters also pose significant danger. Moreover, as the just-released 2024 IBM X-Force Threat Intelligence Index states, as the generative AI market becomes more established, it could trigger the maturity of AI as an attack surface, mobilizing even further investment in new tools from cybercriminals. The report notes that enterprises should also recognize that their existing underlying infrastructure is a gateway to their AI models that doesn't require novel tactics from attackers to target.

To help clients counter these threats with earlier and more accurate detection, we're announcing new AI-enhanced versions of the IBM FlashCore Module technology available inside new IBM Storage FlashSystem products and a new version of IBM Storage Defender software to help organizations improve their ability to detect and respond to ransomware and other cyberattacks that threaten their data. The newly available fourth generation of FlashCore Module (FCM) technology enables artificial intelligence capabilities within the IBM Storage FlashSystem family. FCM works with Storage Defender to provide end-to-end data resilience across primary and secondary workloads with AI-powered sensors designed for earlier notification of cyber threats to help enterprises recover faster.
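As a concrete example of the kind of signal such sensors can use (a generic heuristic, not IBM's actual FlashCore Module implementation), the Shannon entropy of written blocks is a common ransomware tell, since freshly encrypted data looks nearly random:

```python
# Generic entropy heuristic: encrypted/ransomed blocks approach 8 bits/byte,
# while typical structured data sits far lower. Thresholds would be tuned in
# practice; this is an illustration only.
import math
from collections import Counter

def shannon_entropy(block: bytes) -> float:
    counts = Counter(block)
    total = len(block)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

structured = b"accounts_2024.csv," * 256        # stand-in for normal file data
random_like = bytes(range(256)) * 16            # stand-in for encrypted output
print(f"structured:  {shannon_entropy(structured):.2f} bits/byte")
print(f"random-like: {shannon_entropy(random_like):.2f} bits/byte")
```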

Qualcomm AI Hub Introduced at MWC 2024

Qualcomm Technologies, Inc. unveiled its latest advancements in artificial intelligence (AI) at Mobile World Congress (MWC) Barcelona. From the new Qualcomm AI Hub, to cutting-edge research breakthroughs and a display of commercial AI-enabled devices, Qualcomm Technologies is empowering developers and revolutionizing user experiences across a wide range of devices powered by Snapdragon and Qualcomm platforms.

"With Snapdragon 8 Gen 3 for smartphones and Snapdragon X Elite for PCs, we sparked commercialization of on-device AI at scale. Now with the Qualcomm AI Hub, we will empower developers to fully harness the potential of these cutting-edge technologies and create captivating AI-enabled apps," said Durga Malladi, senior vice president and general manager, technology planning and edge solutions, Qualcomm Technologies, Inc. "The Qualcomm AI Hub provides developers with a comprehensive AI model library to quickly and easily integrate pre-optimized AI models into their applications, leading to faster, more reliable and private user experiences."

3D Nanoscale Petabit Capacity Optical Disk Format Proposed by Chinese R&D Teams

The University of Shanghai for Science and Technology (USST), Peking University and the Shanghai Institute of Optics and Fine Mechanics (SIOM) are collaborating on new Optical Data Storage (ODS) technologies—a recently published paper reveals that scientists are attempting to create 3D nanoscale optical disk memory that reaches petabit capacities. Society's ever-growing demand for data requires the development of improved high-capacity storage technologies, and the R&D teams believe that ODS presents a viable alternative to today's mainstream solutions: "data centers based on major storage technologies such as semiconductor flash devices and hard disk drives have high energy burdens, high operation costs and short lifespans."

The proposed ODS format could be a "promising solution for cost-effective long-term archival data storage." The researchers note that current (e.g., Blu-ray) and previous-generation ODS technologies have been: "limited by low capacities and the challenge of increasing areal density." In order to get ODS up to petabit capacity levels, several innovations are required—the Nature.com abstract stated: "extending the planar recording architecture to three dimensions with hundreds of layers, meanwhile breaking the optical diffraction limit barrier of the recorded spots. We develop an optical recording medium based on a photoresist film doped with aggregation-induced emission dye, which can be optically stimulated by femtosecond laser beams. This film is highly transparent and uniform, and the aggregation-induced emission phenomenon provides the storage mechanism. It can also be inhibited by another deactivating beam, resulting in a recording spot with a super-resolution scale." The novel optical storage medium relies on dye-doped photoresist (DDPR) with aggregation-induced emission luminogens (AIE-DDPR)—a 515 nm femtosecond Gaussian laser beam takes care of optical writing tasks, while a doughnut-shaped 639 nm continuous-wave beam acts as the deactivating beam that confines each recorded spot to a super-resolution scale. A 480 nm pulsed laser and a 592 nm continuous-wave laser work in tandem to read data back.
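A rough capacity estimate shows why the layer count matters; every parameter below is an assumption for illustration rather than a figure taken from the paper.

```python
# Back-of-envelope: bits per layer from an assumed spot pitch over a 120 mm
# disc's recordable annulus, then scaled by an assumed number of layers.
import math

inner_r_mm, outer_r_mm = 24, 58     # assumed recordable annulus of a 120 mm disc
spot_pitch_nm = 70                  # assumed super-resolution spot spacing
layers = 600                        # "hundreds of layers", per the abstract

area_nm2 = math.pi * (outer_r_mm**2 - inner_r_mm**2) * 1e12   # mm^2 -> nm^2
bits_per_layer = area_nm2 / spot_pitch_nm**2
print(f"~{bits_per_layer * layers / 1e15:.2f} petabits")      # petabit territory
```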

GlobalFoundries and Biden-Harris Administration Announce CHIPS and Science Act Funding for Essential Chip Manufacturing

The U.S. Department of Commerce today announced $1.5 billion in planned direct funding for GlobalFoundries (Nasdaq: GFS) (GF) as part of the U.S. CHIPS and Science Act. This investment will enable GF to expand and create new manufacturing capacity and capabilities to securely produce more essential chips for automotive, IoT, aerospace, defense, and other vital markets.

New York-headquartered GF, celebrating its 15th year of operations, is the only U.S.-based pure play foundry with a global manufacturing footprint including facilities in the U.S., Europe, and Singapore. GF is the first semiconductor pure play foundry to receive a major award (over $1.5 billion) from the CHIPS and Science Act, designed to strengthen American semiconductor manufacturing, supply chains and national security. The proposed funding will support three GF projects:

NVIDIA Joins US Artificial Intelligence Safety Institute Consortium

NVIDIA has joined the National Institute of Standards and Technology's new U.S. Artificial Intelligence Safety Institute Consortium as part of the company's effort to advance safe, secure and trustworthy AI. AISIC will work to create tools, methodologies and standards to promote the safe and trustworthy development and deployment of AI. As a member, NVIDIA will work with NIST—an agency of the U.S. Department of Commerce—and fellow consortium members to advance the consortium's mandate. NVIDIA's participation builds on a record of working with governments, researchers and industries of all sizes to help ensure AI is developed and deployed safely and responsibly.

Through a broad range of development initiatives, including NeMo Guardrails, open-source software for ensuring large language model responses are accurate, appropriate, on topic and secure, NVIDIA actively works to make AI safety a reality. In 2023, NVIDIA endorsed the Biden Administration's voluntary AI safety commitments. Last month, the company announced a $30 million contribution to the U.S. National Science Foundation's National Artificial Intelligence Research Resource pilot program, which aims to broaden access to the tools needed to power responsible AI discovery and innovation.
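For readers unfamiliar with NeMo Guardrails, a minimal usage sketch looks roughly like the following; the config directory and its contents are placeholders, and a real deployment would define Colang rails and an LLM provider there.

```python
# Minimal NeMo Guardrails wiring: load a rails configuration and generate a
# guarded response. "./guardrails_config" is a hypothetical directory that
# would hold the YAML/Colang files defining the rails.
from nemoguardrails import LLMRails, RailsConfig

config = RailsConfig.from_path("./guardrails_config")
rails = LLMRails(config)

response = rails.generate(messages=[
    {"role": "user", "content": "Summarize our AI safety policy."}
])
print(response["content"])
```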

NUDT MT-3000 Hybrid CPU Reportedly Utilized by Tianhe-3 Supercomputer

China's National Supercomputer Center introduced its Tianhe-3 system as a prototype back in early 2019—at the time it had been tested by thirty local organizations. Notable assessors included the Chinese Academy of Sciences and the China Aerodynamics Research and Development Center. The (previous generation) Tianhe-2 system currently sits at number seven among world-ranked supercomputers, offering a measured performance of 33.86 petaFLOPS. The internal makeup of its fully formed successor has remained a mystery...until now. The Next Platform believes that the "Xingyi"-monikered third-generation supercomputer houses the Guangzhou-based lab's MT-3000 processor design. Author Timothy Prickett Morgan boasted about acquiring exclusive inside knowledge ahead of international intelligence agencies—many will be keeping an eye on the center, since it is administered by the National University of Defence Technology (NUDT), itself owned by the Chinese government.

The Next Platform has a track record of outing intimate details relating to Chinese-developed scientific breakthroughs—the semi-related "Oceanlight" system installed at their National Supercomputer Center (Wuxi) was "figured out" two years ago. Tianhe-3 and Oceanlight face significant competition in the form of "El Capitan"—this is the USA's prime: "supercomputer being built right now at Lawrence Livermore National Laboratory by Hewlett Packard Enterprise in conjunction with compute engine supplier AMD. We need to know because we want to understand the very different—and yet, in some ways similar—architectural path that China seems to have taken with the Xingyi architecture to break through the exascale barrier."

Hafnia Material Breakthrough Paves Way for Ferroelectric Computer Memory

Scientists and engineers have been experimenting with hafnium oxide over the past decade—many believe that this "elusive ferroelectric material" is best leveraged in next-generation computing memory (due to its non-volatile properties), although a major scientific breakthrough is required to get it working in a practical manner. Hafnia's natural state is inherently non-ferroelectric, so it takes some effort to get it into a suitable state—a SciTechDaily article explores past efforts: "Scientists could only get hafnia to its metastable ferroelectric state when straining it as a thin, two-dimensional film of nanometer thickness." Research teams at the University of Rochester, New York and the University of Tennessee, Knoxville have presented evidence of an exciting landmark development. Sobhit Singh, assistant professor in UoR's Department of Mechanical Engineering, believes that the joint effort has opened a path to the creation of bulk ferroelectric and antiferroelectric hafnia.

His "Proceedings of the National Academy of Sciences" study proposes an alternative material path: "Hafnia is a very exciting material because of its practical applications in computer technology, especially for data storage. Currently, to store data we use magnetic forms of memory that are slow, require a lot of energy to operate, and are not very efficient. Ferroelectric forms of memory are robust, ultra-fast, cheaper to produce, and more energy-efficient." Professor Janice Musfeldt's team at the University of Tennessee have managed to produce a ferroelectric form of hafnia—through an experimental high pressure process, based on Singh's exact calculations. The material remained in a metastable phase post-experiment, even in a pressure-relieved state. Musfeldt commented on the pleasing results: "This is as an excellent example of experimental-theoretical collaboration." Memory manufacturers are likely keeping an eye on Hafnia's breakthrough potential, but material costs are dampening expectations—Tom's Hardware cites shortages (going back to early 2023): "Hafnium (the key component in Hafnia) has seen a nearly fivefold price increase due to increased demand since 2021, raising its cost from about $1,000 per kilogram to about $5,000. Even at $1000 a kilogram, though, hafnium is by far more expensive than silicon, which measures in the tens of dollars per kilogram."

U.S. CHIPS Act Outlines $500 Million Fund for Research Institutes & Packaging Tech Development

Yesterday, the U.S. Department of Commerce publicly announced two new notices of intent—as reported by Tom's Hardware, this involves the latest distributions from the CHIPS Act's $11 billion R&D budget: "$300 million is to be made available across multiple awards of up to $100 million (not including voluntary co-investment) for research on advanced packaging, while another $200 million (or more) is set aside to create the CHIPS Manufacturing USA Institute. Companies will have to compete for the funds by filing an application." The Act's primary $39 billion tranche is designated to new construction endeavors, e.g. the founding of manufacturing facilities.

A grand total of $52 billion was set aside for the CHIPS Act in 2022, which immediately attracted the attention of several semiconductor industry giants. Companies with headquarters outside of North America were allowed to send in applications. Last year, Intel CEO Pat Gelsinger made some controversial statements regarding his company's worthiness of government funding. In his opinion, Team Blue is due the "lion's share" because it is a US firm, while the likes of TSMC and Samsung are far less deserving of subsidies.

Microsoft Announces Participation in National AI Research Resource Pilot

We are delighted to announce our support for the National AI Research Resource (NAIRR) pilot, a vital initiative highlighted in the President's Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. This initiative aligns with our commitment to broaden AI research and spur innovation by providing greater computing resources to AI researchers and engineers in academia and non-profit sectors. We look forward to contributing to the pilot and sharing insights that can help inform the envisioned full-scale NAIRR.

The NAIRR's objective is to democratize access to the computational tools essential for advancing AI in critical areas such as safety, reliability, security, privacy, environmental challenges, infrastructure, health care, and education. Advocating for such a resource has been a longstanding goal of ours, one that promises to equalize the field of AI research and stimulate innovation across diverse sectors. As a commissioner on the National Security Commission on AI (NSCAI), I worked with colleagues on the committee to propose an early conception of the NAIRR, underlining our nation's need for this resource as detailed in the NSCAI Final Report. Concurrently, we enthusiastically supported a university-led initiative pursuing a national computing resource. It's rewarding to see these early ideas and endeavors now materialize into a tangible entity.

Nintendo "Switch 2" with 8-inch LCD Screen Reportedly Prepped for 2024

Earlier today, Bloomberg published a report that covers expert analysis of the Nintendo Switch successor's alleged display credentials. The media outlet cites claims made by Hiroshi Hayase, Research Manager (Small and Medium Displays) at Omdia. The analyst proposes that Nintendo's hardware design team has selected an eight-inch LCD screen for the "Switch 2" games console; he also believes that the launch model is due at some point this year. Hayase-san has gleaned information from supply chain insiders—the Switch successor could double shipments of entertainment-oriented "small displays." Sharp Corporation is believed to be Nintendo's main supplier, according to interpretations of deliberately vague company statements.

Nintendo's 2017 launch model sported a 6.2-inch LCD display, a more portable Lite version arrived in 2019 with a 5.5-inch display, and a larger 7-inch OLED iteration was released back in 2021. Gaming communities have long speculated about an abandoned "Switch Pro" model—many believe that the project was dropped due to ongoing supply chain problems during lockdown periods. The Switch OLED (plus its modernized docking station) is believed to be an interim stopgap. Nintendo has revealed little about their next-generation gaming console, but development partners have been making some noise lately. According to a 4Gamer.net interview article, workers at Japanese studios (CAPCOM, Koei Tecmo, and Spike Chunsoft) have expressed major excitement about the upcoming model's prospects. GDC's 2024 State of the Game Industry report revealed that 240 respondents admitted that they are actively working on Switch 2 games software.

NVIDIA Contributes $30 Million of Tech to NAIRR Pilot Program

In a major stride toward building a shared national research infrastructure, the U.S. National Science Foundation has launched the National Artificial Intelligence Research Resource pilot program with significant support from NVIDIA. The initiative aims to broaden access to the tools needed to power responsible AI discovery and innovation. It was announced Wednesday in partnership with 10 other federal agencies as well as private-sector, nonprofit and philanthropic organizations. "The breadth of partners that have come together for this pilot underscores the urgency of developing a National AI Research Resource for the future of AI in America," said NSF Director Sethuraman Panchanathan. "By investing in AI research through the NAIRR pilot, the United States unleashes discovery and impact and bolsters its global competitiveness."

NVIDIA's commitment of $30 million in technology contributions over two years is a key factor in enlarging the scale of the pilot, fueling the potential for broader achievements and accelerating the momentum toward full-scale implementation. "The NAIRR is a vision of a national research infrastructure that will provide access to computing, data, models and software to empower researchers and communities," said Katie Antypas, director of the Office of Advanced Cyberinfrastructure at the NSF. "Our primary goals for the NAIRR pilot are to support fundamental AI research and domain-specific research applying AI, reach broader communities, particularly those currently unable to participate in the AI innovation ecosystem, and refine the design for the future full NAIRR," Antypas added.

Chinese Researchers Develop FlexRAM Liquid Metal RAM Using Biomimicry

Researchers from Tsinghua University in Beijing have developed FlexRAM, the first fully flexible resistive RAM built using liquid metal. The innovative approach suspends droplets of gallium-based liquid metal in a soft biopolymer material. Applying voltage pulses oxidizes or reduces the metal, mimicking neuron polarization. This allows reversible switching between high- and low-resistance states, corresponding to binary 1s and 0s for data storage. Even when powered off, data persists in the inert liquid for 43,200 seconds (12 hours). The current FlexRAM prototype consists of 8 independent 1-bit memory units, storing a total of 1 byte. It has demonstrated over 3,500 write cycles, though further endurance improvements are needed for practical use. Commercial RAM is rated for millions of read/write cycles. The millimeter-scale metal droplets could eventually reach nanometer sizes, dramatically increasing memory density.
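Functionally, the prototype behaves like eight one-bit cells addressed by resistance; the toy model below mirrors that behavior in software (the resistance values and read threshold are made-up illustrations, not measurements from the paper).

```python
# Toy byte store: each bit maps to a high- or low-resistance state, and a read
# recovers the byte by thresholding. Values are illustrative, not device data.
LOW_R, HIGH_R = 1e3, 1e6      # assumed "on"/"off" resistances in ohms
THRESHOLD = 1e4               # assumed read threshold between the two states

def write_byte(value: int) -> list[float]:
    return [LOW_R if (value >> i) & 1 else HIGH_R for i in range(8)]

def read_byte(cells: list[float]) -> int:
    return sum(1 << i for i, r in enumerate(cells) if r < THRESHOLD)

cells = write_byte(0b10110001)
assert read_byte(cells) == 0b10110001
```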

FlexRAM represents a breakthrough in circuits and electronics that can freely bend and flex. The researchers envision applications ranging from soft robotics and medical implants to flexible wearable devices. Compatibility with stretchable substrates unlocks enormous potential for emerging technologies. While still in the early conceptual stages, FlexRAM proves that computing and memory innovations once thought impossible or fanciful can become real through relentless scientific creativity. It joins a wave of pioneering flexible electronics research attaining more flexibility than rigid silicon allows. There are still challenges to solve before FlexRAM and liquid electronics can transform computing. But by proving a fluid-state memory device possible, the technology flows toward a radically different future for electronics and computation.

Quantum Breakthrough: Stable Qubits Generated at Room Temperature

Quantum coherence at room temperature has been achieved, thanks to the efforts of Associate Professor Nobuhiro Yanai and his research team at Kyushu University's Faculty of Engineering. Additional credit goes to Associate Professor Kiyoshi Miyata (also of Kyushu University) and Professor Yasuhiro Kobori of Kobe University, all in Japan. Their scientific experiments have led to an ideal set of conditions where it is "crucial to generate quantum spin coherence in the quintet sublevels by microwave manipulation at room temperature." A quantum system requires operation in a stable state over a certain period of time, free of environmental interference.

Kobori-san has disclosed the multi-department research results in a very elaborate document: "This is the first room-temperature quantum coherence of entangled quintets." The certain period of time mentioned above was measured only in nanoseconds, so more experimental work and further refinement will be carried out to prolong harmonious conditions. Professor Yanai outlined some goals: "It will be possible to generate quintet multiexciton state qubits more efficiently in the future by searching for guest molecules that can induce more such suppressed motions and by developing suitable MOF structures...This can open doors to room-temperature molecular quantum computing based on multiple quantum gate control and quantum sensing of various target compounds."

Chinese Researchers Want to Make Wafer-Scale RISC-V Processors with up to 1,600 Cores

According to a report in the journal Fundamental Research, researchers from the Institute of Computing Technology at the Chinese Academy of Sciences have developed a 256-core multi-chiplet processor called Zhejiang Big Chip, with plans to scale up to 1,600 cores by utilizing an entire wafer. As transistor density gains slow, alternatives like multi-chiplet architectures become crucial for continued performance growth. The Zhejiang chip combines 16 chiplets, each holding 16 RISC-V cores, interconnected via a network-on-chip. This design can theoretically expand to 100 chiplets and 1,600 cores on an advanced 2.5D packaging interposer. While multi-chiplet designs are common today, using the whole wafer for one system would match Cerebras' breakthrough approach. The current chip is built on 22 nm process technology, and the researchers cite exascale supercomputing as an ideal application for massively parallel multi-chiplet architectures.

Careful software optimization is required to balance workloads across the system hierarchy. Integrating near-memory processing and 3D stacking could further optimize efficiency. The paper explores lithography and packaging limits, proposing hierarchical chiplet systems as a flexible path to future computing scale. While yield and cooling challenges need further work, the 256-core foundation demonstrates the potential of modular designs as an alternative to monolithic integration. China's focus mirrors multiple initiatives from American giants like AMD and Intel for data center CPUs. But national semiconductor ambitions add urgency to prove domestically designed solutions can rival foreign innovation. Although performance details are unclear, the rapid progress shows promise in mastering modular chip integration. Combined with improving domestic nodes like the 7 nm one from SMIC, China could easily create a viable Exascale system in-house.
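The core-count scaling described above is straightforward arithmetic, and a simple mesh model hints at why the interconnect and workload balancing become the hard part; the mesh topology and uniform-traffic assumptions below are illustrative, since the network-on-chip details are not specified here.

```python
# Core-count scaling from the article (16 cores per chiplet), plus the average
# hop count of an assumed k x k chiplet mesh under uniform random traffic.
import math

cores_per_chiplet = 16
for chiplets in (16, 100):
    cores = chiplets * cores_per_chiplet
    k = math.isqrt(chiplets)                  # assume a square k x k mesh
    avg_hops = 2 * (k * k - 1) / (3 * k)      # mean Manhattan distance on the mesh
    print(f"{chiplets:>3} chiplets -> {cores:>4} cores, ~{avg_hops:.1f} average hops")
```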

Intel, Dell Technologies and University of Cambridge Announce Deployment of Dawn Supercomputer

Dell Technologies, Intel and the University of Cambridge announce the deployment of the co-designed Dawn Phase 1 supercomputer. Leading technical teams built the U.K.'s fastest AI supercomputer that harnesses the power of both artificial intelligence (AI) and high performance computing (HPC) to solve some of the world's most pressing challenges. This sets a clear way forward for future U.K. technology leadership and inward investment into the U.K. technology sector. Dawn kickstarts the recently launched U.K. AI Research Resource (AIRR), which will explore the viability of associated systems and architectures. Dawn brings the U.K. closer to reaching the compute threshold of a quintillion (10¹⁸) floating point operations per second - one exaflop, better known as exascale. For perspective: Every person on earth would have to make calculations 24 hours a day for more than four years to equal a second's worth of processing power in an exascale system.
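That comparison checks out as rough arithmetic if each person performs one calculation per second; the population figure below is an assumption.

```python
# Sanity check of the exascale comparison: people x seconds over four years.
population = 8.1e9                     # assumed world population
seconds_in_four_years = 4 * 365 * 24 * 3600
total_calculations = population * seconds_in_four_years   # at 1 calc/second each
print(f"{total_calculations:.2e}")     # ~1.0e18, about one exaflop-second of work
```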

"Dawn considerably strengthens the scientific and AI compute capability available in the U.K., and it's on the ground, operational today at the Cambridge Open Zettascale Lab. Dell PowerEdge XE9640 servers offer a no-compromises platform to host the Intel Data Center GPU Max Series accelerator, which opens up the ecosystem to choice through oneAPI. I'm very excited to see the sorts of early science this machine can deliver and continue to strengthen the Open Zettascale Lab partnership between Dell Technologies, Intel and the University of Cambridge, and further broaden that to the U.K. scientific and AI community," said Adam Roe, EMEA HPC technical director at Intel.

NVIDIA NeMo: Designers Tap Generative AI for a Chip Assist

A research paper released this week describes ways generative AI can assist one of the most complex engineering efforts: designing semiconductors. The work demonstrates how companies in highly specialized fields can train large language models (LLMs) on their internal data to build assistants that increase productivity.
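In practice, that usually means domain-adaptive fine-tuning of an existing model on internal documents. The sketch below shows one common recipe (LoRA adapters via Hugging Face transformers and peft); it is not NVIDIA's actual pipeline, and the base model, corpus, and hyperparameters are stand-ins.

```python
# A minimal domain-adaptation sketch: attach LoRA adapters to a small causal LM
# and fine-tune on a handful of "internal" documents. Everything named here
# (model, corpus, hyperparameters) is an illustrative assumption.
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "gpt2"                                   # stand-in for a much larger LLM
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Train only small low-rank adapter matrices instead of all weights.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, target_modules=["c_attn"], task_type="CAUSAL_LM"))

corpus = Dataset.from_dict({"text": [            # hypothetical internal docs
    "Timing closure checklist for the top-level clock tree ...",
    "Lint waiver policy: only waive W240 with reviewer sign-off ...",
]})
tokenized = corpus.map(lambda b: tok(b["text"], truncation=True, max_length=256),
                       batched=True, remove_columns=["text"])

Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=1,
                           num_train_epochs=1, logging_steps=1),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
).train()
```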

Few pursuits are as challenging as semiconductor design. Under a microscope, a state-of-the-art chip like an NVIDIA H100 Tensor Core GPU (above) looks like a well-planned metropolis, built with tens of billions of transistors, connected on streets 10,000x thinner than a human hair. Multiple engineering teams coordinate for as long as two years to construct one of these digital mega cities. Some groups define the chip's overall architecture, some craft and place a variety of ultra-small circuits, and others test their work. Each job requires specialized methods, software programs and computer languages.

IDC Forecasts Spending on GenAI Solutions Will Reach $143 Billion in 2027 with a Five-Year Compound Annual Growth Rate of 73.3%

A new forecast from International Data Corporation (IDC) shows that enterprises will invest nearly $16 billion worldwide on GenAI solutions in 2023. This spending, which includes GenAI software as well as related infrastructure hardware and IT/business services, is expected to reach $143 billion in 2027 with a compound annual growth rate (CAGR) of 73.3% over the 2023-2027 forecast period. This is more than twice the rate of growth in overall AI spending and almost 13 times greater than the CAGR for worldwide IT spending over the same period.
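The quoted growth rate is consistent with those endpoints; a quick check (not IDC's methodology) using the standard CAGR formula:

```python
# CAGR implied by the forecast endpoints: ~$16B (2023) to $143B (2027).
start, end, years = 16e9, 143e9, 4
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")   # ~72.9%, in line with the quoted 73.3%
```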

"Generative AI is more than a fleeting trend or mere hype. It is a transformative technology with far-reaching implications and business impact," says Ritu Jyoti, group vice president, Worldwide Artificial Intelligence and Automation market research and advisory services at IDC. "With ethical and responsible implementation, GenAI is poised to reshape industries, changing the way we work, play, and interact with the world."