News Posts matching #Science


China Unveils Xiaohong-504: a 504-Qubit Quantum Computing Processor

China has announced the development of its latest quantum system, combining the Xiaohong-504, a 504-qubit superconducting quantum chip, with the Tianyan-504 quantum computer. The breakthrough comes from China Telecom Quantum Group (CTQG), which will use the new system to boost national telecommunications security. The Xiaohong-504 chip reportedly demonstrates impressive specifications in critical areas including qubit lifetime, gate fidelity, and circuit depth, comparable with established quantum platforms such as IBM's. The first Xiaohong-504 processor is scheduled for delivery to QuantumCTek, a quantum technology company based in Anhui Province, where it will begin extensive testing of kilo-qubit measurement and control systems.

While the Tianyan-504 represents a major achievement, it currently ranks behind some international competitors in terms of qubit count. Atom Computing's 1,180-qubit prototype was revealed in late 2023, and IBM's 1,121-qubit Condor processor maintains the lead in raw qubit numbers. The development of the Tianyan-504 was a collaborative effort between CTQG, the Chinese Academy of Sciences, and QuantumCTek. The system will be integrated into the Tianyan quantum cloud platform, which has already demonstrated significant international reach since its launch in November 2023, attracting more than 12 million visits from users across over 50 countries. Rather than focusing solely on achieving quantum supremacy, the Tianyan-504 project aims to develop infrastructure for large-scale quantum systems.

SPEC Delivers Major SPECworkstation 4.0 Benchmark Update, Adds AI/ML Workloads

The Standard Performance Evaluation Corporation (SPEC), the trusted global leader in computing benchmarks, today announced the availability of the SPECworkstation 4.0 benchmark, a major update to SPEC's comprehensive tool designed to measure all key aspects of workstation performance. This significant upgrade from version 3.1 incorporates cutting-edge features to keep pace with the latest workstation hardware and the evolving demands of professional applications, including the increasing reliance on data analytics, AI and machine learning (ML).

The new SPECworkstation 4.0 benchmark provides a robust, real-world measure of CPU, graphics, accelerator, and disk performance, ensuring professionals have the data they need to make informed decisions about their hardware investments. The benchmark caters to the diverse needs of engineers, scientists, and developers who rely on workstation hardware for daily tasks. It includes real-world applications like Blender, Handbrake, LLVM and more, providing a comprehensive performance measure across seven different industry verticals, each focusing on specific use cases and subsystems critical to workstation users. The SPECworkstation 4.0 benchmark also marks a significant milestone for measuring workstation AI performance, providing an unbiased, real-world, application-driven tool for measuring how workstations handle AI/ML workloads.

TSMC Could Bring 2 nm Production Overseas, Taiwanese Minister Confirms

Taiwanese political officials have agreed to discuss transferring TSMC's advanced 2 nm chip technology to allied democratic nations, but only after mass production of the node launches in Taiwan in late 2025. This new stance comes amid growing international pressure and recent comments from incoming US President Donald Trump about semiconductor manufacturing. The announcement by National Science and Technology Council Minister Cheng-Wen Wu marks a notable departure from earlier statements by Economic Affairs Minister J.W. Kuo, who had previously emphasized legal restrictions on transferring leading-edge process technology overseas. Interestingly, the two positions converge on one point: the timeline of node deployments. As TSMC ramps its latest nodes in Taiwan first, overseas production will lag by a generation or two.

TSMC plans to implement its 2 nm technology in US facilities by 2030. The company's Arizona facility, Fab 21, will begin with less advanced N4 and N5 processes in early 2025 and progress to 3 nm technology by 2028. However, this timeline could come under pressure to accelerate, particularly if new trade policies are implemented. Industry analyst Dan Nystedt points out significant challenges in transferring advanced chip production. Integrating research and development with manufacturing processes in Taiwan provides crucial advantages for initial production ramps, making simultaneous mass production launches in multiple locations technically challenging. Simply put, outside Taiwan there aren't enough engineers, scientists, and factory workers capable of doing what TSMC accomplishes at home.

IBM Launches Its Most Advanced Quantum Computers, Fueling New Scientific Value and Progress towards Quantum Advantage

Today at its inaugural IBM Quantum Developer Conference, IBM announced quantum hardware and software advancements to execute complex algorithms on IBM quantum computers with record levels of scale, speed, and accuracy.

IBM Quantum Heron, the company's most performant quantum processor to date and available in IBM's global quantum data centers, can now leverage Qiskit to accurately run certain classes of quantum circuits with up to 5,000 two-qubit gate operations. Users can leverage these capabilities to expand explorations of how quantum computers can tackle scientific problems across materials, chemistry, life sciences, high-energy physics, and more.
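For a sense of scale, the sketch below uses the open-source Qiskit SDK (referenced in the announcement) to build a brick-work-style layered circuit and tally its two-qubit gate operations. The qubit count and layer count are placeholder values chosen purely for illustration, not IBM Heron's published configuration.

```python
# Minimal sketch: build a layered entangling circuit with Qiskit and count
# its two-qubit (CX) gate operations. Sizes are illustrative placeholders,
# not the actual Heron processor configuration.
from qiskit import QuantumCircuit

n_qubits = 50   # placeholder; Heron-class chips have well over 100 qubits
n_layers = 100  # placeholder circuit depth

qc = QuantumCircuit(n_qubits)
for layer in range(n_layers):
    # single-qubit rotations on every qubit
    for q in range(n_qubits):
        qc.rx(0.1 * (layer + 1), q)
    # brick-work pattern of CX gates, alternating offsets per layer
    start = layer % 2
    for q in range(start, n_qubits - 1, 2):
        qc.cx(q, q + 1)

two_qubit_gates = qc.count_ops().get("cx", 0)
print(f"Circuit depth: {qc.depth()}, two-qubit gates: {two_qubit_gates}")
```

Counting two-qubit gates this way gives a rough feel for how quickly a utility-scale circuit approaches the 5,000-gate figure IBM cites.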

Japanese Scientists Develop Less Complex EUV Scanners, Significantly Cutting Costs of Chip Development

Japanese professor Tsumoru Shintake of the Okinawa Institute of Science and Technology (OIST) has unveiled a revolutionary extreme ultraviolet (EUV) lithography technology that promises to significantly push down semiconductor manufacturing costs. The new technology tackles two previously insurmountable issues in EUV lithography. First, it introduces a streamlined optical projection system using only two mirrors, a dramatic simplification from the conventional six or more. Second, it employs a novel "dual line field" method to efficiently direct EUV light onto the photomask without obstructing the optical path. Prof. Shintake's design offers substantial advantages over current EUV lithography machines. It can operate with smaller EUV light sources, consuming less than one-tenth of the power required by conventional systems. This reduction in energy consumption also reduces operating expenses (OpEx), which are usually high in semiconductor manufacturing facilities.

The simplified two-mirror design also promises improved stability and maintainability. While traditional EUV systems often require over 1 megawatt of power, the OIST model can achieve comparable results with just 100 kilowatts. Despite its simplicity, the system maintains high contrast and reduces mask 3D effects, which is crucial for attaining nanometer-scale precision in semiconductor production. OIST has filed a patent application for this technology, with plans for practical implementation through demonstration experiments. The global EUV lithography market is projected to grow from $8.9 billion in 2024 to $17.4 billion by 2030, when most nodes are expected to use EUV scanners. By comparison, a single ASML EUV scanner can cost up to $380 million, excluding operating expenses, which are very high due to the power consumption of its high-energy EUV light source. Conventional EUV scanners also lose roughly 40% of the light at each mirror reflection, so only about 1% of the light from the source reaches the silicon wafer, all while the machine consumes over one megawatt of power. With the proposed low-cost EUV system, more than 10% of the energy makes it to the wafer, and the new system is expected to use less than 100 kilowatts of power while costing under $100 million, roughly a third of the price of ASML's flagship.
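As a rough back-of-the-envelope check of the figures quoted above, the snippet below compounds the article's ~40% per-mirror loss across different mirror counts; both the loss figure and the mirror counts are the article's round numbers, not ASML or OIST specifications.

```python
# Back-of-the-envelope: cumulative EUV transmission through a mirror train,
# using the article's ~40% loss per mirror reflection as a round figure.
loss_per_mirror = 0.40          # article's figure; real multilayer mirrors vary
transmission = 1 - loss_per_mirror

for mirrors in (2, 6, 10):
    remaining = transmission ** mirrors
    print(f"{mirrors:2d} mirrors -> {remaining:6.1%} of source light remains")

# A 6-10 mirror train plus mask/pellicle losses is consistent with the ~1%
# figure for conventional scanners; a two-mirror train keeps well over 10%.
```

Run as-is, this prints roughly 36% for two mirrors, under 5% for six, and under 1% for ten, which is why cutting the optical path to two mirrors allows a far weaker (and cheaper) light source.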

NVIDIA Accelerates Quantum Computing Centers Worldwide With CUDA-Q Platform

NVIDIA today announced that it will accelerate quantum computing efforts at national supercomputing centers around the world with the open-source NVIDIA CUDA-Q platform. Supercomputing sites in Germany, Japan and Poland will use the platform to power the quantum processing units (QPUs) inside their NVIDIA-accelerated high-performance computing systems.

QPUs are the brains of quantum computers that use the behavior of particles like electrons or photons to calculate differently than traditional processors, with the potential to make certain types of calculations faster. Germany's Jülich Supercomputing Centre (JSC) at Forschungszentrum Jülich is installing a QPU built by IQM Quantum Computers as a complement to its JUPITER supercomputer, supercharged by the NVIDIA GH200 Grace Hopper Superchip. The ABCI-Q supercomputer, located at the National Institute of Advanced Industrial Science and Technology (AIST) in Japan, is designed to advance the nation's quantum computing initiative. Powered by the NVIDIA Hopper architecture, the system will add a QPU from QuEra. Poland's Poznan Supercomputing and Networking Center (PSNC) has recently installed two photonic QPUs, built by ORCA Computing, connected to a new supercomputer partition accelerated by NVIDIA Hopper.

Micron to Receive US$6.1 Billion in CHIPS and Science Act Funding

Micron Technology, Inc., one of the world's largest semiconductor companies and the only U.S.-based manufacturer of memory, and the Biden-Harris Administration today announced that they have signed a non-binding Preliminary Memorandum of Terms (PMT) for $6.1 billion in funding under the CHIPS and Science Act to support planned leading-edge memory manufacturing in Idaho and New York.

The CHIPS and Science Act grants of $6.1 billion will support Micron's plans to invest approximately $50 billion in gross capex for U.S. domestic leading-edge memory manufacturing through 2030. These grants and additional state and local incentives will support the construction of one leading-edge memory manufacturing fab to be co-located with the company's existing leading-edge R&D facility in Boise, Idaho and the construction of two leading-edge memory fabs in Clay, New York.

NVIDIA Modulus & Omniverse Drive Physics-informed Models and Simulations

A manufacturing plant near Hsinchu, Taiwan's Silicon Valley, is among facilities worldwide boosting energy efficiency with AI-enabled digital twins. A virtual model can help streamline operations, maximizing throughput for its physical counterpart, say engineers at Wistron, a global designer and manufacturer of computers and electronics systems. In the first of several use cases, the company built a digital copy of a room where NVIDIA DGX systems undergo thermal stress tests. Early results were impressive.

Making Smart Simulations
Using NVIDIA Modulus, a framework for building AI models that understand the laws of physics, Wistron created digital twins that let them accurately predict the airflow and temperature in test facilities that must remain between 27 and 32 degrees C. A simulation that would've taken nearly 15 hours with traditional methods on a CPU took just 3.3 seconds on an NVIDIA GPU running inference with an AI model developed using Modulus, a whopping 15,000x speedup. The results were fed into tools and applications built by Wistron developers with NVIDIA Omniverse, a platform for creating 3D workflows and applications based on OpenUSD.
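For reference, the quoted speedup follows directly from the two runtimes mentioned above; the quick sanity check below assumes the CPU run took the full 15 hours.

```python
# Sanity-check the quoted speedup from the two runtimes in the article.
cpu_seconds = 15 * 3600   # "nearly 15 hours" of traditional CPU simulation
gpu_seconds = 3.3         # 3.3 seconds for AI-model inference on a GPU
print(f"Speedup: ~{cpu_seconds / gpu_seconds:,.0f}x")  # ~16,000x, in line with the ~15,000x quoted
```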

Extropic Intends to Accelerate AI through Thermodynamic Computing

Extropic, a pioneer in physics-based computing, this week emerged from stealth mode and announced the release of its Litepaper, which outlines the company's revolutionary approach to AI acceleration through thermodynamic computing. Founded in 2022 by Guillaume Verdon, Extropic has been developing novel chips and algorithms that leverage the natural properties of out-of-equilibrium thermodynamic systems to perform probabilistic computations for generative AI applications in a highly efficient manner. The Litepaper delves into Extropic's groundbreaking computational paradigm, which aims to address the limitations of current digital hardware in handling the complex probability distributions required for generative AI.

Today's algorithms spend around 25% of their time moving numbers around in memory, limiting the speedup achievable by accelerating specific operations. In contrast, Extropic's chips natively accelerate a broad class of probabilistic algorithms by running them physically as a rapid and energy-efficient, physics-based process in their entirety, unlocking a new regime of AI acceleration well beyond what was previously thought achievable. In coming out of stealth, the company has announced the fabrication of a superconducting prototype processor and developments surrounding room-temperature semiconductor-based devices for the broader market, with the goal of revolutionizing the field of AI acceleration and enabling new possibilities in generative AI.

Samsung Anticipates 2027-2028 Entry into Micro OLED AR/VR Market

Choi Joo-sun, CEO of Samsung Display, spoke to journalists following a March 6 lecture at the Korea Advanced Institute of Science and Technology (KAIST). A Chosun Daily Business reporter pulled some quotes regarding Samsung's outlook for new generation micro OLED technologies. Choi and his colleagues are likely taking their time on this development front—Sony Semiconductor Solutions (SSS) has already mass-produced OLED Microdisplay products. The Japanese technology giant is the main supplier of display panels for Apple's Vision Pro mixed reality headset—a recent iFixit teardown revealed a possible custom-designed unit. Leaked "Bill of Materials" figures indicate an eye-watering total cost of $456 for a pair of SSS 4K panels—Apple is reportedly engaged in negotiations with SeeYa and BOE regarding the supply of cheaper alternatives.

The Samsung Display boss is monitoring current industry trends, but his team is not rushing out competing solutions: "The market potential of micro OLED, which is used in augmented reality (AR) and virtual reality (VR), is significant, but I believe the market will begin in earnest around 2027-2028...there are many technical aspects to overcome and cost considerations." Choi believes that Samsung is better off with plenty of preparation time before an anticipated bloom in the micro OLED market—in his opinion, domination can be achieved with careful investment in research and development (R&D) efforts. He stated: "During the remaining 2 to 3 years, we will deploy manpower to ensure that Samsung Display does not fall behind in the micro OLED market and introduce solutions that are competitive compared to competitors...The acquisition of eMagin, an American display company, is also part of this effort."

3D Nanoscale Petabit Capacity Optical Disk Format Proposed by Chinese R&D Teams

The University of Shanghai for Science and Technology (USST), Peking University and the Shanghai Institute of Optics and Fine Mechanics (SIOM) are collaborating on new Optical Data Storage (ODS) technologies—a recently published paper reveals that scientists are attempting to create 3D nanoscale optical disk memory that breaks into petabit capacities. Society's ever-growing demand for data requires the development of improved high-capacity storage technologies, and the R&D teams believe that ODS presents a viable alternative to present-day mainstream solutions: "data centers based on major storage technologies such as semiconductor flash devices and hard disk drives have high energy burdens, high operation costs and short lifespans."

The proposed ODS format could be a "promising solution for cost-effective long-term archival data storage." The researchers note that current (e.g. Blu-ray) and previous generation ODS technologies have been "limited by low capacities and the challenge of increasing areal density." In order to get ODS up to petabit capacity levels, several innovations are required—the Nature.com abstract states: "extending the planar recording architecture to three dimensions with hundreds of layers, meanwhile breaking the optical diffraction limit barrier of the recorded spots. We develop an optical recording medium based on a photoresist film doped with aggregation-induced emission dye, which can be optically stimulated by femtosecond laser beams. This film is highly transparent and uniform, and the aggregation-induced emission phenomenon provides the storage mechanism. It can also be inhibited by another deactivating beam, resulting in a recording spot with a super-resolution scale." The novel optical storage medium relies on a dye-doped photoresist (DDPR) with aggregation-induced emission luminogens (AIE-DDPR)—a 515 nm femtosecond Gaussian laser beam takes care of optical writing tasks, while a doughnut-shaped 639 nm continuous wave laser beam deactivates recording around the focal spot, shrinking written marks below the diffraction limit. A 480 nm pulsed laser and a 592 nm continuous wave laser work in tandem to read data back.

NVIDIA Joins US Artificial Intelligence Safety Institute Consortium

NVIDIA has joined the National Institute of Standards and Technology's new U.S. Artificial Intelligence Safety Institute Consortium as part of the company's effort to advance safe, secure and trustworthy AI. AISIC will work to create tools, methodologies and standards to promote the safe and trustworthy development and deployment of AI. As a member, NVIDIA will work with NIST—an agency of the U.S. Department of Commerce—and fellow consortium members to advance the consortium's mandate. NVIDIA's participation builds on a record of working with governments, researchers and industries of all sizes to help ensure AI is developed and deployed safely and responsibly.

Through a broad range of development initiatives, including NeMo Guardrails, open-source software for ensuring large language model responses are accurate, appropriate, on topic and secure, NVIDIA actively works to make AI safety a reality. In 2023, NVIDIA endorsed the Biden Administration's voluntary AI safety commitments. Last month, the company announced a $30 million contribution to the U.S. National Science Foundation's National Artificial Intelligence Research Resource pilot program, which aims to broaden access to the tools needed to power responsible AI discovery and innovation.

NUDT MT-3000 Hybrid CPU Reportedly Utilized by Tianhe-3 Supercomputer

China's National Supercomputer Center introduced the Tianhe-3 system as a prototype back in early 2019—at the time it had been tested by thirty local organizations. Notable assessors included the Chinese Academy of Sciences and the China Aerodynamics Research and Development Center. The (previous generation) Tianhe-2 system currently sits at number seven in the world supercomputer rankings, offering a measured performance of 33.86 petaFLOPS. The internal makeup of its fully formed successor has remained a mystery...until now. The Next Platform believes that the "Xingyi"-monikered third-generation supercomputer houses the Guangzhou-based lab's MT-3000 processor design. Author Timothy Prickett Morgan boasted about acquiring exclusive inside knowledge ahead of international intelligence agencies—many will be keeping an eye on the center, since it is administered by the National University of Defence Technology (NUDT, itself owned by the Chinese government).

The Next Platform has a track record of outing intimate details relating to Chinese-developed scientific breakthroughs—the semi-related "Oceanlight" system installed at the National Supercomputer Center in Wuxi was "figured out" two years ago. Tianhe-3 and Oceanlight face significant competition in the form of "El Capitan," the USA's premier "supercomputer being built right now at Lawrence Livermore National Laboratory by Hewlett Packard Enterprise in conjunction with compute engine supplier AMD. We need to know because we want to understand the very different—and yet, in some ways similar—architectural path that China seems to have taken with the Xingyi architecture to break through the exascale barrier."

Hafnia Material Breakthrough Paves Way for Ferroelectric Computer Memory

Scientists and engineers have been experimenting with hafnium oxide over the past decade—many believe that this "elusive ferroelectric material" is best leveraged in next generation computing memory (due to its non-volatile properties), although a major scientific breakthrough is required to get it working in a practical manner. Hafnia's natural state is inherently non-ferroelectric, so it takes some effort to get it into a suitable state—a SciTechDaily article explores past efforts: "Scientists could only get hafnia to its metastable ferroelectric state when straining it as a thin, two-dimensional film of nanometer thickness." Research teams at the University of Rochester, New York and the University of Tennessee, Knoxville have presented evidence of an exciting landmark development. Sobhit Singh, assistant professor at UoR's Department of Mechanical Engineering, believes that the joint effort has opened a path to creating bulk ferroelectric and antiferroelectric hafnia.

His "Proceedings of the National Academy of Sciences" study proposes an alternative material path: "Hafnia is a very exciting material because of its practical applications in computer technology, especially for data storage. Currently, to store data we use magnetic forms of memory that are slow, require a lot of energy to operate, and are not very efficient. Ferroelectric forms of memory are robust, ultra-fast, cheaper to produce, and more energy-efficient." Professor Janice Musfeldt's team at the University of Tennessee have managed to produce a ferroelectric form of hafnia—through an experimental high pressure process, based on Singh's exact calculations. The material remained in a metastable phase post-experiment, even in a pressure-relieved state. Musfeldt commented on the pleasing results: "This is as an excellent example of experimental-theoretical collaboration." Memory manufacturers are likely keeping an eye on Hafnia's breakthrough potential, but material costs are dampening expectations—Tom's Hardware cites shortages (going back to early 2023): "Hafnium (the key component in Hafnia) has seen a nearly fivefold price increase due to increased demand since 2021, raising its cost from about $1,000 per kilogram to about $5,000. Even at $1000 a kilogram, though, hafnium is by far more expensive than silicon, which measures in the tens of dollars per kilogram."

NVIDIA Contributes $30 Million of Tech to NAIRR Pilot Program

In a major stride toward building a shared national research infrastructure, the U.S. National Science Foundation has launched the National Artificial Intelligence Research Resource pilot program with significant support from NVIDIA. The initiative aims to broaden access to the tools needed to power responsible AI discovery and innovation. It was announced Wednesday in partnership with 10 other federal agencies as well as private-sector, nonprofit and philanthropic organizations. "The breadth of partners that have come together for this pilot underscores the urgency of developing a National AI Research Resource for the future of AI in America," said NSF Director Sethuraman Panchanathan. "By investing in AI research through the NAIRR pilot, the United States unleashes discovery and impact and bolsters its global competitiveness."

NVIDIA's commitment of $30 million in technology contributions over two years is a key factor in enlarging the scale of the pilot, fueling the potential for broader achievements and accelerating the momentum toward full-scale implementation. "The NAIRR is a vision of a national research infrastructure that will provide access to computing, data, models and software to empower researchers and communities," said Katie Antypas, director of the Office of Advanced Cyberinfrastructure at the NSF. "Our primary goals for the NAIRR pilot are to support fundamental AI research and domain-specific research applying AI, reach broader communities, particularly those currently unable to participate in the AI innovation ecosystem, and refine the design for the future full NAIRR," Antypas added.

Quantum Breakthrough: Stable Qubits Generated at Room Temperature

Quantum coherence at room temperature has been achieved, thanks to the efforts of Associate Professor Nobuhiro Yanai and his research team at Kyushu University's Faculty of Engineering. Additional credit goes to Associate Professor Kiyoshi Miyata (also of Kyushu University) and Professor Yasuhiro Kobori of Kobe University, all in Japan. Their scientific experiments have led to an ideal set of conditions where it is "crucial to generate quantum spin coherence in the quintet sublevels by microwave manipulation at room temperature." A quantum system requires operation in a stable state over a certain period of time, free of environmental interference.

Professor Kobori has disclosed the multi-department research results in a very elaborate document: "This is the first room-temperature quantum coherence of entangled quintets." The coherence time mentioned above was measured only in nanoseconds, so more experimental work and further refinement will be carried out to prolong these harmonious conditions. Head honcho Professor Yanai outlined some goals: "It will be possible to generate quintet multiexciton state qubits more efficiently in the future by searching for guest molecules that can induce more such suppressed motions and by developing suitable MOF structures...This can open doors to room-temperature molecular quantum computing based on multiple quantum gate control and quantum sensing of various target compounds."

You Can Now Create a Digital Clone of Yourself with Eternity.AC, an AI Startup Paving a Path to Immortality

Science fiction is coming to life with eternity.ac, a new startup offering personal digital cloning where anyone can challenge the boundaries of physical limitations with an affordable artificial intelligence that looks, talks, and converses just like you. The new venture empowers individuals to preserve their unique appearance, thoughts, experiences, and memories with a simple 3-step clone creation process.

The innovation opens up a new spectrum of meaningful AI uses, such as allowing future generations to interact with loved ones, enabling fans and followers to engage with their favorite public figures, and helping people understand the viewpoints and experiences of others. Once created, people can interact with the clone via written chat or through vocal conversations.

NVIDIA CEO Meets with India Prime Minister Narendra Modi

Underscoring NVIDIA's growing relationship with the global technology superpower, Indian Prime Minister Narendra Modi met with NVIDIA founder and CEO Jensen Huang Monday evening. The meeting at 7 Lok Kalyan Marg—as the Prime Minister's official residence in New Delhi is known—comes as Modi prepares to host a gathering of leaders from the G20 group of the world's largest economies, including U.S. President Joe Biden, later this week.

"Had an excellent meeting with Mr. Jensen Huang, the CEO of NVIDIA," Modi said in a social media post. "We talked at length about the rich potential India offers in the world of AI." The event marks the second meeting between Modi and Huang, highlighting NVIDIA's role in the country's fast-growing technology industry.

TSMC Inaugurates Global R&D Center, Celebrating Its Newest Hub for Technology Innovation

TSMC today held an inauguration ceremony for its global Research and Development Center in Hsinchu, Taiwan, celebrating the Company's newest hub for bringing the next generations of semiconductor technology into reality with customers, R&D partners in industry and academia, design ecosystem partners, and senior government leaders.

The R&D Center will serve as the new home for TSMC's R&D Organization, including the researchers who will develop TSMC's leading-edge process technology at the 2-nanometer generation and beyond, as well as scientists and scholars blazing the trail with exploratory research into fields such as novel materials and transistor structures. With R&D employees already relocating to their workplaces in the new building, it will be ready for its full complement of more than 7,000 staff by September 2023.

NVIDIA Espouses Generative AI for Improved Productivity Across Industries

A watershed moment on Nov. 30, 2022, was mostly virtual, yet it shook the foundations of nearly every industry on the planet. On that day, OpenAI released ChatGPT, the most advanced artificial intelligence chatbot ever developed. This set off demand for generative AI applications that help businesses become more efficient, from providing consumers with answers to their questions to accelerating the work of researchers as they seek scientific breakthroughs, and much, much more.

Businesses that previously dabbled in AI are now rushing to adopt and deploy the latest applications. Generative AI—the ability of algorithms to create new text, images, sounds, animations, 3D models and even computer code—is moving at warp speed, transforming the way people work and play. By employing large language models (LLMs) to handle queries, the technology can dramatically reduce the time people devote to manual tasks like searching for and compiling information.

Assassin's Creed Mirage Showcases the History of Baghdad

When Assassin's Creed Mirage launches on October 12, it will continue the series' tradition of bringing players closer to history with History of Baghdad, a feature that adds historical context to the game's simulation of the past. Part of an in-game Codex that also includes tutorials and a Database with lore, History of Baghdad will deliver expertly curated information on the history, art, and culture of Baghdad and the Abbasid Caliphate circa the ninth century, accompanied by images provided by museum partners.

In keeping with Assassin's Creed Mirage being a tribute to early Assassin's Creed games, History of Baghdad will be integrated into the main game, similarly to the Database of earlier games, and is tied to player progression, with an in-game reward for Basim once completed. As Basim visits 66 historical sites throughout Baghdad, players will unlock research-driven articles that dig into information across five topics: Economy; Belief & Daily Life; Government; Art & Science; and Court Life.

IBM and UC Berkeley Collaborate on Practical Quantum Computing

For weeks, researchers at IBM Quantum and UC Berkeley were taking turns running increasingly complex physical simulations. Youngseok Kim and Andrew Eddins, scientists with IBM Quantum, would test them on the 127-qubit IBM Quantum Eagle processor. UC Berkeley's Sajant Anand would attempt the same calculation using state-of-the-art classical approximation methods on supercomputers located at Lawrence Berkeley National Lab and Purdue University. They'd check each method against an exact brute-force classical calculation.

Eagle returned accurate answers every time. And watching how both computational paradigms performed as the simulations grew increasingly complex made both teams feel confident the quantum computer was still returning answers more accurate than the classical approximation methods, even in the regime beyond the capabilities of the brute force methods. "The level of agreement between the quantum and classical computations on such large problems was pretty surprising to me personally," said Eddins. "Hopefully it's impressive to everyone."

ITRI Set to Strengthen Taiwan-UK Collaboration on Semiconductors

The newly established Department for Science, Innovation and Technology (DSIT) in the UK has recently released the UK's National Semiconductor Strategy. Dr. Shih-Chieh Chang, General Director of Electronic and Optoelectronic System Research Laboratories at the Industrial Technology Research Institute (ITRI) of Taiwan, had an initial exchange with DSIT. During the exchange, Dr. Chang suggested that Taiwan can become a trusted partner for the UK and that the partnership can leverage collective strengths to create mutually beneficial developments. According to the Strategy, the British government plans to invest £1 billion over the next decade to support the semiconductor industry. This funding will improve access to infrastructure, power more research and development and facilitate greater international cooperation.

Dr. Chang stressed that ITRI looks forward to more collaboration with the UK on semiconductors to enhance the resilience of the supply chain. While the UK possesses cutting-edge capabilities in semiconductor IP design and compound semiconductor technology, ITRI has extensive expertise in semiconductor technology R&D and trial production. As a result, ITRI is well-positioned to offer consultation services for advanced packaging pilot lines, facilitate pre-production evaluation, and link British semiconductor IP design companies with Taiwan's semiconductor industry chain. "The expansion of British manufacturers' service capacity in Taiwan would create a mutually beneficial outcome for both Taiwan and the UK," said Dr. Chang.

U.S. Government to Allow Chipmakers to Expand Facilities in China

The United States government has imposed sanctions on companies exporting their goods to China with the aim of limiting the country's technological advancements. This forced many companies to reduce their shipments of the latest technologies; however, according to the latest information from The Wall Street Journal, the Biden administration will allow companies to keep expanding their production capacities in China. As the source notes, quoting statements from government officials, the top semiconductor makers such as Samsung, SK Hynix, and TSMC, all of which have a chip production facility in China, will be allowed to expand the production capacity without any US backlash.

Of course, this does not contradict the US export-control policy, which the administration plans to continue. Alan Estevez, undersecretary of commerce for industry and security, noted last week at an industry gathering that the US plans to continue these restrictions for another year. Reportedly, all manufacturers of wafer fab equipment (WFE) from the US must acquire an export license from the Department of Commerce before exporting any tools for making either logic or memory chips intended for customers in China. Chipmakers Samsung, SK Hynix, and TSMC all received licenses to export from October 2022 to October 2023. However, the US government will now allow these companies to continue upgrading their Chinese plants beyond the renewed license expiry date of October 2024.

NVIDIA Touts A100 GPU Energy Efficiency, Tensor Cores Drive "Perlmutter" Supercomputer

People agree: accelerated computing is energy-efficient computing. The National Energy Research Scientific Computing Center (NERSC), the U.S. Department of Energy's lead facility for open science, measured results across four of its key high performance computing and AI applications.

They clocked how fast the applications ran and how much energy they consumed on CPU-only and GPU-accelerated nodes on Perlmutter, one of the world's largest supercomputers using NVIDIA GPUs. The results were clear. Accelerated with NVIDIA A100 Tensor Core GPUs, energy efficiency rose 5x on average. An application for weather forecasting logged gains of 9.8x.