News Posts matching #Research


GlobalFoundries and Biden-Harris Administration Announce CHIPS and Science Act Funding for Essential Chip Manufacturing

The U.S. Department of Commerce today announced $1.5 billion in planned direct funding for GlobalFoundries (Nasdaq: GFS) (GF) as part of the U.S. CHIPS and Science Act. This investment will enable GF to expand and create new manufacturing capacity and capabilities to securely produce more essential chips for automotive, IoT, aerospace, defense, and other vital markets.

New York-headquartered GF, celebrating its 15th year of operations, is the only U.S.-based pure play foundry with a global manufacturing footprint including facilities in the U.S., Europe, and Singapore. GF is the first semiconductor pure play foundry to receive a major award (over $1.5 billion) from the CHIPS and Science Act, designed to strengthen American semiconductor manufacturing, supply chains and national security. The proposed funding will support three GF projects:

NVIDIA Joins US Artificial Intelligence Safety Institute Consortium

NVIDIA has joined the National Institute of Standards and Technology's new U.S. Artificial Intelligence Safety Institute Consortium as part of the company's effort to advance safe, secure and trustworthy AI. AISIC will work to create tools, methodologies and standards to promote the safe and trustworthy development and deployment of AI. As a member, NVIDIA will work with NIST—an agency of the U.S. Department of Commerce—and fellow consortium members to advance the consortium's mandate. NVIDIA's participation builds on a record of working with governments, researchers and industries of all sizes to help ensure AI is developed and deployed safely and responsibly.

Through a broad range of development initiatives, including NeMo Guardrails, open-source software for ensuring large language model responses are accurate, appropriate, on topic and secure, NVIDIA actively works to make AI safety a reality. In 2023, NVIDIA endorsed the Biden Administration's voluntary AI safety commitments. Last month, the company announced a $30 million contribution to the U.S. National Science Foundation's National Artificial Intelligence Research Resource pilot program, which aims to broaden access to the tools needed to power responsible AI discovery and innovation.

NUDT MT-3000 Hybrid CPU Reportedly Utilized by Tianhe-3 Supercomputer

China's National Supercomputer Center in Guangzhou introduced its Tianhe-3 system as a prototype back in early 2019—at the time it had been tested by thirty local organizations. Notable assessors included the Chinese Academy of Sciences and the China Aerodynamics Research and Development Center. The previous-generation Tianhe-2 system currently sits in seventh position among the world's ranked supercomputers, offering a measured performance of 33.86 petaFLOPS. The internal makeup of its fully formed successor has remained a mystery...until now. The Next Platform believes that the "Xingyi"-monikered third-generation supercomputer houses the Guangzhou-based lab's MT-3000 processor design. Author Timothy Prickett Morgan boasted about acquiring exclusive inside knowledge ahead of international intelligence agencies—many will be keeping an eye on the center, since it is administered by the National University of Defence Technology (NUDT, itself owned by the Chinese government).

The Next Platform has a track record of outing intimate details relating to Chinese-developed scientific breakthroughs—the semi-related "Oceanlight" system installed at the National Supercomputer Center in Wuxi was "figured out" two years ago. Tianhe-3 and Oceanlight face significant competition in the form of "El Capitan," the USA's prime "supercomputer being built right now at Lawrence Livermore National Laboratory by Hewlett Packard Enterprise in conjunction with compute engine supplier AMD. We need to know because we want to understand the very different—and yet, in some ways similar—architectural path that China seems to have taken with the Xingyi architecture to break through the exascale barrier."

Hafnia Material Breakthrough Paves Way for Ferroelectric Computer Memory

Scientists and engineers have been experimenting with hafnium oxide over the past decade—many believe that this "elusive ferroelectric material" is best leveraged in next-generation computing memory (due to its non-volatile properties), although a major scientific breakthrough is needed to get it working in a practical manner. Hafnia's natural state is inherently non-ferroelectric, so it takes some effort to get it into a suitable state—a SciTechDaily article explores past efforts: "Scientists could only get hafnia to its metastable ferroelectric state when straining it as a thin, two-dimensional film of nanometer thickness." Research teams at the University of Rochester, New York and the University of Tennessee, Knoxville have presented evidence of an exciting landmark development. Sobhit Singh, assistant professor at UoR's Department of Mechanical Engineering, believes that the joint effort has opened a path to the creation of bulk ferroelectric and antiferroelectric hafnia.

His "Proceedings of the National Academy of Sciences" study proposes an alternative material path: "Hafnia is a very exciting material because of its practical applications in computer technology, especially for data storage. Currently, to store data we use magnetic forms of memory that are slow, require a lot of energy to operate, and are not very efficient. Ferroelectric forms of memory are robust, ultra-fast, cheaper to produce, and more energy-efficient." Professor Janice Musfeldt's team at the University of Tennessee have managed to produce a ferroelectric form of hafnia—through an experimental high pressure process, based on Singh's exact calculations. The material remained in a metastable phase post-experiment, even in a pressure-relieved state. Musfeldt commented on the pleasing results: "This is as an excellent example of experimental-theoretical collaboration." Memory manufacturers are likely keeping an eye on Hafnia's breakthrough potential, but material costs are dampening expectations—Tom's Hardware cites shortages (going back to early 2023): "Hafnium (the key component in Hafnia) has seen a nearly fivefold price increase due to increased demand since 2021, raising its cost from about $1,000 per kilogram to about $5,000. Even at $1000 a kilogram, though, hafnium is by far more expensive than silicon, which measures in the tens of dollars per kilogram."

U.S. CHIPS Act Outlines $500 Million Fund for Research Institutes & Packaging Tech Development

Yesterday, the U.S. Department of Commerce publicly announced two new notices of intent—as reported by Tom's Hardware, this involves the latest distributions from the CHIPS Act's $11 billion R&D budget: "$300 million is to be made available across multiple awards of up to $100 million (not including voluntary co-investment) for research on advanced packaging, while another $200 million (or more) is set aside to create the CHIPS Manufacturing USA Institute. Companies will have to compete for the funds by filing an application." The Act's primary $39 billion tranche is designated to new construction endeavors, e.g. the founding of manufacturing facilities.

A grand total of $52 billion was set aside for the CHIPS Act in 2022, which immediately attracted the attention of several semiconductor industry giants. Companies with headquarters outside of North America were allowed to send in applications. Last year, Intel CEO Pat Gelsinger made some controversial statements regarding his company's worthiness of government funding. In his opinion, Team Blue deserves the "lion's share" because it is a U.S.-based firm—the likes of TSMC and Samsung are, he argued, far less deserving of subsidies.

Microsoft Announces Participation in National AI Research Resource Pilot

We are delighted to announce our support for the National AI Research Resource (NAIRR) pilot, a vital initiative highlighted in the President's Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. This initiative aligns with our commitment to broaden AI research and spur innovation by providing greater computing resources to AI researchers and engineers in academia and non-profit sectors. We look forward to contributing to the pilot and sharing insights that can help inform the envisioned full-scale NAIRR.

The NAIRR's objective is to democratize access to the computational tools essential for advancing AI in critical areas such as safety, reliability, security, privacy, environmental challenges, infrastructure, health care, and education. Advocating for such a resource has been a longstanding goal of ours, one that promises to equalize the field of AI research and stimulate innovation across diverse sectors. As a commissioner on the National Security Commission on AI (NSCAI), I worked with colleagues on the committee to propose an early conception of the NAIRR, underlining our nation's need for this resource as detailed in the NSCAI Final Report. Concurrently, we enthusiastically supported a university-led initiative pursuing a national computing resource. It's rewarding to see these early ideas and endeavors now materialize into a tangible entity.

Nintendo "Switch 2" with 8-inch LCD Screen Reportedly Prepped for 2024

Earlier today, Bloomberg published a report that covers expert analysis of the Nintendo Switch successor's alleged display credentials. The media outlet cites claims made by Hiroshi Hayase—Research Manager (of Small Medium Displays) at Omdia. The analyst proposes that Nintendo's hardware design team has selected an eight-inch LCD screen for their "Switch 2" games console; he also believes that the launch model is due at some point this year. Hayase-san has gleaned information from supply chain insiders—the Switch successor could double shipments of entertainment-oriented "small displays." Sharp Corporation is believed to be Nintendo's main supplier, according to interpretations of deliberately vague company statements.

Nintendo's 2017 launch model sported a 6.2-inch LCD display, a more portable Lite version arrived in 2019 with a 5.5-inch display, and a larger 7-inch OLED iteration was released back in 2021. Gaming communities have long speculated about an abandoned "Switch Pro" model—many believe that the project was dropped due to ongoing supply chain problems during lockdown periods. The Switch OLED (plus its modernized docking station) is believed to be an interim stopgap. Nintendo has revealed little about their next-generation gaming console, but development partners have been making some noise lately. According to a 4Gamer.net interview article, workers at Japanese studios (CAPCOM, Koei Tecmo, and Spike Chunsoft) have expressed major excitement about the upcoming model's prospects. GDC's 2024 State of the Game Industry report revealed that 240 respondents admitted that they are actively working on Switch 2 games software.

NVIDIA Contributes $30 Million of Tech to NAIRR Pilot Program

In a major stride toward building a shared national research infrastructure, the U.S. National Science Foundation has launched the National Artificial Intelligence Research Resource pilot program with significant support from NVIDIA. The initiative aims to broaden access to the tools needed to power responsible AI discovery and innovation. It was announced Wednesday in partnership with 10 other federal agencies as well as private-sector, nonprofit and philanthropic organizations. "The breadth of partners that have come together for this pilot underscores the urgency of developing a National AI Research Resource for the future of AI in America," said NSF Director Sethuraman Panchanathan. "By investing in AI research through the NAIRR pilot, the United States unleashes discovery and impact and bolsters its global competitiveness."

NVIDIA's commitment of $30 million in technology contributions over two years is a key factor in enlarging the scale of the pilot, fueling the potential for broader achievements and accelerating the momentum toward full-scale implementation. "The NAIRR is a vision of a national research infrastructure that will provide access to computing, data, models and software to empower researchers and communities," said Katie Antypas, director of the Office of Advanced Cyberinfrastructure at the NSF. "Our primary goals for the NAIRR pilot are to support fundamental AI research and domain-specific research applying AI, reach broader communities, particularly those currently unable to participate in the AI innovation ecosystem, and refine the design for the future full NAIRR," Antypas added.

Chinese Researchers Develop FlexRAM Liquid Metal RAM Using Biomimicry

Researchers from Tsinghua University in Beijing have developed FlexRAM, the first fully flexible resistive RAM built using liquid metal. The innovative approach suspends droplets of gallium-based liquid metal in a soft biopolymer material. Applying voltage pulses oxidizes or reduces the metal, mimicking neuron polarization. This allows reversible switching between high- and low-resistance states, corresponding to binary 1s and 0s for data storage. Even when powered off, data persists in the inert liquid for 43,200 seconds (12 hours). The current FlexRAM prototype consists of eight independent 1-bit memory units, storing a total of one byte. It has demonstrated over 3,500 write cycles, though further endurance improvements are needed for practical use; commercial RAM is rated for millions of read/write cycles. The millimeter-scale metal droplets could eventually reach nanometer sizes, dramatically increasing memory density.
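For readers who want a concrete mental model of that storage scheme, here is a minimal software sketch (purely hypothetical and not derived from the paper) that treats eight independent 1-bit cells as high- or low-resistance states and assembles them into a single stored byte:

# Toy software model of the scheme described above: eight independent
# 1-bit cells, each encoding a bit as a high- or low-resistance state.
# Purely illustrative; the real device switches states electrochemically.
LOW_RES, HIGH_RES = "low", "high"   # resistance states standing in for 1 and 0

class FlexRAMToy:
    def __init__(self, cells: int = 8):
        self.states = [HIGH_RES] * cells          # all cells start as logical 0

    def write_byte(self, value: int) -> None:
        for i in range(len(self.states)):         # one voltage "pulse" per cell
            bit = (value >> i) & 1
            self.states[i] = LOW_RES if bit else HIGH_RES

    def read_byte(self) -> int:
        return sum(1 << i for i, s in enumerate(self.states) if s == LOW_RES)

mem = FlexRAMToy()
mem.write_byte(0b10110010)
assert mem.read_byte() == 0b10110010              # eight cells store one byte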

FlexRAM represents a breakthrough in circuits and electronics that can freely bend and flex. The researchers envision applications ranging from soft robotics and medical implants to flexible wearable devices. Compatibility with stretchable substrates unlocks enormous potential for emerging technologies. While still in the early conceptual stages, FlexRAM proves that computing and memory innovations once thought impossible or fanciful can become real through relentless scientific creativity. It joins a wave of pioneering flexible-electronics research attaining more flexibility than rigid silicon allows. There are still challenges to solve before FlexRAM and liquid electronics can transform computing, but by proving a fluid-state memory device possible, the technology flows toward a radically different future for electronics and computation.

Quantum Breakthrough: Stable Qubits Generated at Room Temperature

Quantum coherence at room temperature has been achieved, thanks to the efforts of Associate Professor Nobuhiro Yanai and his research team at Kyushu University's Faculty of Engineering. Additional credit goes to Associate Professor Kiyoshi Miyata (also of Kyushu University) and Professor Yasuhiro Kobori of Kobe University, all in Japan. Their scientific experiments have led to an ideal set of conditions where it is "crucial to generate quantum spin coherence in the quintet sublevels by microwave manipulation at room temperature." A quantum system requires operation in a stable state over a certain period of time, free of environmental interference.

Kobori-san has disclosed the multi-department research results in a very elaborate document: "This is the first room-temperature quantum coherence of entangled quintets." The certain period of time mentioned above was only measured in nanoseconds, so more experimental work and further refinement will be carried out to prolong harmonious conditions. Head honcho Professor Yanai outlined some goals: "It will be possible to generate quintet multiexciton state qubits more efficiently in the future by searching for guest molecules that can induce more such suppressed motions and by developing suitable MOF structures...This can open doors to room-temperature molecular quantum computing based on multiple quantum gate control and quantum sensing of various target compounds."

Chinese Researchers Want to Make Wafer-Scale RISC-V Processors with up to 1,600 Cores

According to a report in the journal Fundamental Research, researchers from the Institute of Computing Technology at the Chinese Academy of Sciences have developed a 256-core multi-chiplet processor called Zhejiang Big Chip, with plans to scale up to 1,600 cores by utilizing an entire wafer. As transistor density gains slow, alternatives like multi-chiplet architectures become crucial for continued performance growth. The Zhejiang chip combines 16 chiplets, each holding 16 RISC-V cores, interconnected via a network-on-chip. This design can theoretically expand to 100 chiplets and 1,600 cores on an advanced 2.5D packaging interposer. While multi-chiplet designs are common today, using a whole wafer for one system would match Cerebras' breakthrough approach. The chip is built on 22 nm process technology, and the researchers cite exascale supercomputing as an ideal application for massively parallel multi-chiplet architectures.

Careful software optimization is required to balance workloads across the system hierarchy. Integrating near-memory processing and 3D stacking could further optimize efficiency. The paper explores lithography and packaging limits, proposing hierarchical chiplet systems as a flexible path to future computing scale. While yield and cooling challenges need further work, the 256-core foundation demonstrates the potential of modular designs as an alternative to monolithic integration. China's focus mirrors multiple initiatives from American giants like AMD and Intel for data center CPUs, but national semiconductor ambitions add urgency to prove domestically designed solutions can rival foreign innovation. Although performance details are unclear, the rapid progress shows promise in mastering modular chip integration. Combined with improving domestic nodes like SMIC's 7 nm process, China could plausibly create a viable exascale system in-house.

Intel, Dell Technologies and University of Cambridge Announce Deployment of Dawn Supercomputer

Dell Technologies, Intel and the University of Cambridge announce the deployment of the co-designed Dawn Phase 1 supercomputer. Leading technical teams built the U.K.'s fastest AI supercomputer that harnesses the power of both artificial intelligence (AI) and high performance computing (HPC) to solve some of the world's most pressing challenges. This sets a clear way forward for future U.K. technology leadership and inward investment into the U.K. technology sector. Dawn kickstarts the recently launched U.K. AI Research Resource (AIRR), which will explore the viability of associated systems and architectures. Dawn brings the U.K. closer to reaching the compute threshold of a quintillion (10^18) floating point operations per second—one exaflop, better known as exascale. For perspective: Every person on earth would have to make calculations 24 hours a day for more than four years to equal a second's worth of processing power in an exascale system.
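That comparison checks out as rough arithmetic. A minimal sanity check (assuming a world population of about eight billion and one calculation per person per second, figures not taken from the announcement):

# Back-of-the-envelope check of the exascale analogy (assumed inputs:
# ~8 billion people, one calculation per person per second).
EXAFLOP = 1e18                        # operations in one exascale-second
population = 8.0e9                    # approximate world population
seconds_needed = EXAFLOP / population # seconds of all-hands arithmetic
years_needed = seconds_needed / (365.25 * 24 * 3600)
print(f"{years_needed:.1f} years of non-stop calculating")   # roughly 4 years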

"Dawn considerably strengthens the scientific and AI compute capability available in the U.K., and it's on the ground, operational today at the Cambridge Open Zettascale Lab. Dell PowerEdge XE9640 servers offer a no-compromises platform to host the Intel Data Center GPU Max Series accelerator, which opens up the ecosystem to choice through oneAPI. I'm very excited to see the sorts of early science this machine can deliver and continue to strengthen the Open Zettascale Lab partnership between Dell Technologies, Intel and the University of Cambridge, and further broaden that to the U.K. scientific and AI community," said Adam Roe, EMEA HPC technical director at Intel.

NVIDIA NeMo: Designers Tap Generative AI for a Chip Assist

A research paper released this week describes ways generative AI can assist one of the most complex engineering efforts: designing semiconductors. The work demonstrates how companies in highly specialized fields can train large language models (LLMs) on their internal data to build assistants that increase productivity.

Few pursuits are as challenging as semiconductor design. Under a microscope, a state-of-the-art chip like an NVIDIA H100 Tensor Core GPU (above) looks like a well-planned metropolis, built with tens of billions of transistors, connected on streets 10,000x thinner than a human hair. Multiple engineering teams coordinate for as long as two years to construct one of these digital mega cities. Some groups define the chip's overall architecture, some craft and place a variety of ultra-small circuits, and others test their work. Each job requires specialized methods, software programs and computer languages.

IDC Forecasts Spending on GenAI Solutions Will Reach $143 Billion in 2027 with a Five-Year Compound Annual Growth Rate of 73.3%

A new forecast from International Data Corporation (IDC) shows that enterprises will invest nearly $16 billion worldwide on GenAI solutions in 2023. This spending, which includes GenAI software as well as related infrastructure hardware and IT/business services, is expected to reach $143 billion in 2027 with a compound annual growth rate (CAGR) of 73.3% over the 2023-2027 forecast period. This is more than twice the rate of growth in overall AI spending and almost 13 times greater than the CAGR for worldwide IT spending over the same period.
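The headline figures hang together arithmetically. A quick check, taking the "nearly $16 billion" 2023 base at face value and compounding at IDC's stated CAGR for four years:

# Compound the 2023 GenAI spending estimate at IDC's stated CAGR.
base_2023 = 16.0          # billions of dollars, "nearly $16 billion"
cagr = 0.733              # 73.3% compound annual growth rate
projected_2027 = base_2023 * (1 + cagr) ** (2027 - 2023)
print(f"${projected_2027:.0f} billion in 2027")   # ~$144 billion, near the $143 billion forecast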

"Generative AI is more than a fleeting trend or mere hype. It is a transformative technology with far-reaching implications and business impact," says Ritu Jyoti, group vice president, Worldwide Artificial Intelligence and Automation market research and advisory services at IDC. "With ethical and responsible implementation, GenAI is poised to reshape industries, changing the way we work, play, and interact with the world."

Dell Technologies Expands Generative AI Portfolio

Dell Technologies expands its Dell Generative AI Solutions portfolio, helping businesses transform how they work along every step of their generative AI (GenAI) journeys. "To maximize AI efforts and support workloads across public clouds, on-premises environments and at the edge, companies need a robust data foundation with the right infrastructure, software and services," said Jeff Boudreau, chief AI officer, Dell Technologies. "That's what we are building with our expanded validated designs, professional services, modern data lakehouse and the world's broadest GenAI solutions portfolio."

Customizing GenAI models to maximize proprietary data
The Dell Validated Design for Generative AI with NVIDIA for Model Customization offers pre-trained models that extract intelligence from data without building models from scratch. This solution provides best practices for customizing and fine-tuning GenAI models based on desired outcomes while helping keep information secure and on-premises. With a scalable blueprint for customization, organizations now have multiple ways to tailor GenAI models to accomplish specific tasks with their proprietary data. Its modular and flexible design supports a wide range of computational requirements and use cases, spanning diffusion model training, transfer learning and prompt tuning.

Analyst Forecasts TSMC Raking in $100 Billion by 2025

Pierre Ferragu, the Global Technology Infrastructure chief at New Street Research, has predicted a very positive 2025 financial outcome for Taiwan Semiconductor Manufacturing Company Limited (TSMC). A global slowdown in consumer purchasing of personal computers and smartphones has affected a number of companies, including the likes of NVIDIA and AMD—their financial reports have projected a 10% annual revenue drop for 2023. TSMC has similarly forecast that its full-year revenue for 2023 will settle at $68.31 billion, after an approximate 10% fall. Ferragu did not contest these figures; per his team's analysis, TSMC is expected to pull in $68 billion in net sales for this financial year.

The rumor mill has TSMC revising its revenue guidance for a third time this year—but company leadership has denied that this will occur. New Street Research estimates that conditions will improve next year, with an uptick in client orders placed at TSMC's foundries. Ferragu reckons that TSMC could hit an all-time revenue high of $100 billion by 2025. His hunch is based on the upcoming spending habits of VIP foundry patrons, drawing on "a bottom-up perspective, looking at how TSMC's top customers, which we all know very well, will contribute to such growth." The Taiwanese foundry's order books are reported to be filling up for next year, with Apple and NVIDIA seizing the moment to stand firmly at the front of the 3 nm process queue.
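For context, getting from roughly $68 billion in 2023 to $100 billion by 2025 implies annual growth in the low twenties. A short calculation using the figures quoted above makes that explicit:

# Implied annual growth rate behind the $100 billion call for 2025,
# using the ~$68 billion 2023 net sales figure quoted above.
sales_2023 = 68.0                     # billions of dollars
target_2025 = 100.0                   # billions of dollars
implied_cagr = (target_2025 / sales_2023) ** (1 / 2) - 1
print(f"Implied growth: {implied_cagr:.1%} per year")   # ~21% annually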

IBM Quantum System One Quantum Computer Installed at PINQ²

The Platform for Digital and Quantum Innovation of Quebec (PINQ²), a non-profit organization (NPO) founded by the Ministry of Economy, Innovation and Energy of Quebec (MEIE - ministère de l'Économie, de l'Innovation et de l'Énergie du Québec) and the Université de Sherbrooke, along with IBM, are proud to announce the historic inauguration of an IBM Quantum System One at IBM Bromont. This event marks a major turning point in the field of information technology and all sectors of innovation in Quebec, making PINQ² the sole administrator to inaugurate and operate an IBM Quantum System One in Canada. To date, this is one of the most advanced quantum computers in IBM's global fleet of quantum computers.

This new quantum computer in Quebec reinforces Quebec's and Canada's position as a force in the rapidly advancing field of quantum computing, opening new prospects for the technological future of the province and the country. Access to this technology is a considerable asset not only for the ecosystem of DistriQ, the quantum innovation zone for Quebec, but also for the Technum Québec innovation zone, the new "Energy Transition Valley" innovation zone and other strategic sectors for Quebec.

TSMC, Broadcom & NVIDIA Alliance Reportedly Set to Advance Silicon Photonics R&D

Taiwan's Economic Daily reckons that a freshly formed partnership between TSMC, Broadcom, and NVIDIA will result in the development of cutting-edge silicon photonics. The likes of IBM, Intel and various academic institutes are already deep into their own research and development processes, but the alleged new alliance is said to focus on advancing AI computer hardware. The report cites a significant allocation of roughly 200 TSMC staffers to R&D involving the integration of silicon photonic technologies into high performance computing (HPC) solutions. They are very likely hoping that the use of optical interconnects (on a silicon medium) will result in greater data transfer rates between and within microchips. Other benefits include longer transmission distances and lower power consumption.

TSMC vice president Yu Zhenhua, much like his boss, has placed emphasis on innovation within the industry-wide development process: "If we can provide a good silicon photonics integrated system, we can solve the two key issues of energy efficiency and AI computing power. This will be a new...paradigm shift. We may be at the beginning of a new era." The firm is facing unprecedented demand from its clients—it hopes to further expand its advanced chip packaging capacity to address these issues by late 2024. A shift away from the limitations of "conventional electric" data transmissions could bring next-generation AI compute GPUs onto the market by 2025.

AIB Shipments Climb in Q2 2023, with Unit Sales Increasing Q2Q

According to a new research report from the analyst firm Jon Peddie Research (JPR), unit shipments in the add-in board (AIB) market increased in Q2'23 from last quarter, while AMD gained market share. Quarter to quarter, graphics AIB shipments increased modestly, by 2.9%; however, shipments decreased by 36% year to year.

Since Q1 2000, over 2.10 billion graphics cards, worth about $476 billion, have been sold. The market shares for the desktop discrete GPU suppliers shifted in the quarter, as AMD's market share increased from last quarter and Nvidia's share increased from last year. Intel, which entered the AIB market in Q3'22 with the Arc A770 and A750, will start to increase market share in 2024.

JPR: PC GPU Shipments Increased by 11.6% Sequentially from Last Quarter and Decreased by 27% Year-to-Year

Jon Peddie Research reports that the global PC-based graphics processor unit (GPU) market reached 61.6 million units in Q2'23, while PC CPU shipments decreased by 23% year over year. Overall, GPUs will have a compound annual growth rate of 3.70% during 2022-2026 and reach an installed base of 2,998 million units at the end of the forecast period. Over the next five years, the penetration of discrete GPUs (dGPUs) in the PC will grow to reach a level of 32%.

Year to year, total GPU shipments, which include all platforms and all types of GPUs, decreased by 27%, desktop graphics decreased by 36%, and notebooks decreased by 23%.

Jon Peddie Research: Client CPU Shipments up 17% From Last Quarter

Jon Peddie Research reports that the global PC client-based CPU market reached 53.6 million units in Q2'23, up 17%, while iGPU shipments increased by 14% to 49 million units. Year over year, iGPU shipments declined 29%.

Integrated GPUs will have a compound annual growth rate of 2.5% during 2022-2026 and reach an installed base of 4.8 billion units at the end of the forecast period. Over the next five years, the penetration of iGPUs in the PC will grow to reach a level of 98%.

PCI-SIG Exploring an Optical Interconnect to Enable Higher PCIe Technology Performance

PCI-SIG today announced the formation of a new workgroup to deliver PCI Express (PCIe) technology over optical connections. The PCI-SIG Optical Workgroup intends to be optical technology-agnostic, supporting a wide range of optical technologies, while potentially developing technology-specific form factors.

"Optical connections will be an important advancement for PCIe architecture as they will allow for higher performance, lower power consumption, extended reach and reduced latency," said Nathan Brookwood, Research Fellow at Insight 64. "Many data-demanding markets and applications such as Cloud and Quantum Computing, Hyperscale Data Centers and High-Performance Computing will benefit from PCIe architecture leveraging optical connections."

TSMC Inaugurates Global R&D Center, Celebrating Its Newest Hub for Technology Innovation

TSMC today held an inauguration ceremony for its global Research and Development Center in Hsinchu, Taiwan, celebrating the Company's newest hub for bringing the next generations of semiconductor technology into reality with customers, R&D partners in industry and academia, design ecosystem partners, and senior government leaders.

The R&D Center will serve as the new home for TSMC's R&D Organization, including the researchers who will develop TSMC's leading-edge process technology at the 2-nanometer generation and beyond, as well as scientists and scholars blazing the trail with exploratory research into fields such as novel materials and transistor structures. With R&D employees already relocating to their workplaces in the new building, it will be ready for its full complement of more than 7,000 staff by September 2023.

IBM Launches AI-informed Cloud Carbon Calculator

IBM has launched a new tool to help enterprises track greenhouse gas (GHG) emissions across cloud services and advance their sustainability performance throughout their hybrid, multicloud journeys. Now generally available, the IBM Cloud Carbon Calculator - an AI-informed dashboard - can help clients access emissions data across a variety of IBM Cloud workloads such as AI, high performance computing (HPC) and financial services.

Across industries, enterprises are embracing modernization by leveraging hybrid cloud and AI to digitally transform with resiliency, performance, security, and compliance at the forefront, all while remaining focused on delivering value and driving more sustainable business practices. According to a recent study by IBM, 42% of CEOs surveyed pinpoint environmental sustainability as their top challenge over the next three years. At the same time, the study reports that CEOs are facing pressure to adopt generative AI while also weighing the data management needs to make AI successful. The increase in data processing required for AI workloads can present new challenges for organizations that are looking to reduce their GHG emissions. With more than 43% of CEOs surveyed already using generative AI to inform strategic decisions, organizations should prepare to balance executing high performance workloads with sustainability.

PlayStation VR2 Product Manager Goes Deep into Design Process

When PlayStation VR2 released earlier this year, it offered players a chance to experience virtual game worlds bristling with detail and immersive features. PS VR2 was the culmination of several years of development, which included multiple prototypes and testing approaches. To learn more, we asked PS VR2's Product Manager Yasuo Takahashi about the development process of the innovative headset and PlayStation VR2 Sense Controller, and also gained insight into the various prototypes that were created as part of this process.

PlayStation Blog: When did development for the PS VR2 headset start?
Yasuo Takahashi: Research on future VR technology was being conducted even prior to the launch of the original PlayStation VR as part of our R&D efforts. After PS VR's launch in 2016, discussion around what the next generation of VR would look like began in earnest. We went back and reviewed those R&D findings and we started prototyping various technologies at the beginning of 2017. Early that same year, we began detailed conversations on what features should be implemented in the new product, and which specific technologies we should explore further.