News Posts matching #Research

NVIDIA Espouses Generative AI for Improved Productivity Across Industries

A watershed moment on Nov. 30, 2022, was mostly virtual, yet it shook the foundations of nearly every industry on the planet. On that day, OpenAI released ChatGPT, the most advanced artificial intelligence chatbot ever developed. This set off a wave of demand for generative AI applications that help businesses become more efficient, from providing consumers with answers to their questions to accelerating the work of researchers as they seek scientific breakthroughs, and much, much more.

Businesses that previously dabbled in AI are now rushing to adopt and deploy the latest applications. Generative AI—the ability of algorithms to create new text, images, sounds, animations, 3D models and even computer code—is moving at warp speed, transforming the way people work and play. By employing large language models (LLMs) to handle queries, the technology can dramatically reduce the time people devote to manual tasks like searching for and compiling information.

Age of Wonders 4 Watcher Update Available via Open Beta Preview

Hello everyone! Today I'm happy to announce that we're putting the next update for Age of Wonders 4 into Open Beta! This previews some of the improvements that are part of the Watcher update, due later this summer. This coming update focuses on what we feel are the issues which are most important to you, and we'd love to get your feedback on what we've managed to do so far.

It's important to remember that this is a work-in-progress patch. This means that it may be unstable or imbalanced, and that the features we've added may not work entirely as we want them to. It also means that we may revert certain changes later if we feel they aren't achieving what we want, or if we're inspired to replace them with something better! Instructions, Patch Notes and F.A.Q. are provided below...

Tour de France Bike Designs Developed with NVIDIA RTX GPU Technologies

NVIDIA RTX is spinning new cycles for designs. Trek Bicycle is using GPUs to bring design concepts to life. The Wisconsin-based company, one of the largest bicycle manufacturers in the world, aims to create bikes with the highest-quality craftsmanship. With its new partner Lidl, an international retailer chain, Trek Bicycle also owns a cycling team, now called Lidl-Trek. The team is competing in the annual Tour de France stage race on Trek Bicycle's flagship lineup, which includes the Emonda, Madone and Speed Concept. Many of the team's accessories and equipment, such as the wheels and road race helmets, were also designed at Trek.

Bicycle design involves complex physics—and a key challenge is balancing aerodynamic efficiency with comfort and ride quality. To address this, the team at Trek is using NVIDIA A100 Tensor Core GPUs to run high-fidelity computational fluid dynamics (CFD) simulations, setting new benchmarks for aerodynamics in a bicycle that's also comfortable to ride and handles smoothly. The designers and engineers are further enhancing their workflows using NVIDIA RTX technology in Dell Precision workstations, including the NVIDIA RTX A5500 GPU, as well as a Dell Precision 7920 running dual RTX A6000 GPUs.

NVIDIA Proposes that AI Will Accelerate Climate Research Innovation

AI and accelerated computing will help climate researchers achieve the miracles they need for breakthroughs in climate research, NVIDIA founder and CEO Jensen Huang said during a keynote Monday at the Berlin Summit for the Earth Virtualization Engines initiative. "Richard Feynman once said that 'what I can't create, I don't understand,' and that's the reason why climate modeling is so important," Huang told 180 attendees at the Harnack House in Berlin, a storied gathering place for the region's scientific and research community. "And so the work that you do is vitally important to policymakers, to researchers, to the industry," he added.

To advance this work, the Berlin Summit brings together participants from around the globe to harness AI and high-performance computing for climate prediction. In his talk, Huang outlined three miracles that will have to happen for climate researchers to achieve their goals, and touched on NVIDIA's own Earth-2 efforts to collaborate with climate researchers and policymakers. The first miracle required will be to simulate the climate fast enough, and at a high enough resolution - on the order of just a couple of square kilometers.

Chinese Research Team Uses AI to Design a Processor in 5 Hours

A group of researchers in China has used a new AI approach to create a full RISC-V processor from scratch. The team set out to answer the question of whether an AI could design an entire processor on its own, without human intervention. While AI design tools already exist and are used for complex circuit design and validation today, they are generally limited in use and scope. The key improvements of this approach over traditional or AI-assisted logic design are its automated capabilities and its speed. Traditional assistive tools for designing circuits still require many hours of manual programming and validation to produce a functional circuit. Even for a processor as simple as the one created by the AI, the team claims a human team would have taken roughly 1,000 times longer to complete the design. The AI was trained by observing specific inputs and outputs of existing CPU designs, with the paper summarizing the approach as follows:
(...) a new AI approach, which generates large-scale Boolean function with almost 100% validation accuracy (e.g., > 99.99999999999% as Intel) from only external input-output examples rather than formal programs written by the human. This approach generates the Boolean function represented by a graph structure called Binary Speculation Diagram (BSD), with a theoretical accuracy lower bound by using the Monte Carlo based expansion, and the distance of Boolean functions is used to tackle the intractability.
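To make the idea concrete, here is a minimal toy sketch (not the paper's BSD algorithm) of inferring a Boolean function purely from input-output examples, "speculating" a constant output wherever the observed examples agree. The real approach adds Monte Carlo-based expansion and theoretical accuracy bounds, and scales to CPU-sized functions.

```python
import itertools

# Toy illustration: learn a Boolean function from I/O examples by recursively
# splitting on input bits and speculating a leaf once observed outputs agree.
def build_speculation_tree(examples, bit=0, n_bits=3):
    """examples: list of (input_tuple, output_bit) pairs."""
    outputs = {out for _, out in examples}
    if len(outputs) == 1:              # all observed examples agree: speculate
        return outputs.pop()
    if bit == n_bits:                  # no bits left: fall back to majority vote
        ones = sum(out for _, out in examples)
        return 1 if 2 * ones >= len(examples) else 0
    zeros = [(x, y) for x, y in examples if x[bit] == 0]
    ones_ = [(x, y) for x, y in examples if x[bit] == 1]
    # An empty branch speculates from the sibling's examples.
    return (bit,
            build_speculation_tree(zeros or ones_, bit + 1, n_bits),
            build_speculation_tree(ones_ or zeros, bit + 1, n_bits))

def evaluate(node, x):
    while isinstance(node, tuple):     # walk the tree until a 0/1 leaf
        bit, lo, hi = node
        node = hi if x[bit] else lo
    return node

# Target: carry-out of a one-bit full adder, learned only from I/O pairs.
target = lambda x: (x[0] & x[1]) | ((x[0] ^ x[1]) & x[2])
examples = [(x, target(x)) for x in itertools.product((0, 1), repeat=3)]
tree = build_speculation_tree(examples)
assert all(evaluate(tree, x) == y for x, y in examples)
```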

RPI Announced as the First University to House IBM's Quantum System One

Today, it was announced that Rensselaer Polytechnic Institute (RPI) will become the first university in the world to house an IBM Quantum System One. The IBM quantum computer, intended to be operational by January 2024, will serve as the foundation of a new IBM Quantum Computational Center in partnership with RPI. Through the partnership, RPI aims to greatly enhance the educational experiences and research capabilities of students and researchers at RPI and other institutions, propel the Capital Region into a top location for talent, and accelerate New York's growth as a technology epicenter.

RPI's push into research on applications of quantum computing will represent a more than $150 million investment once fully realized, aided by philanthropic support from Curtis R. Priem '82, vice chair of RPI's Board of Trustees. The new quantum computer will be part of RPI's new Curtis Priem Quantum Constellation, an endowed center for collaborative faculty research that will prioritize the hiring of additional faculty leaders to leverage the quantum computing system.

IBM Study Finds That CEOs are Embracing Generative AI

A new global study by the IBM Institute for Business Value found that nearly half of CEOs surveyed identify productivity as their highest business priority—up from sixth place in 2022. They recognize that technology modernization is key to achieving their productivity goals, ranking it as their second-highest priority. Yet CEOs can face key barriers as they race to modernize and adopt new technologies like generative AI.

The annual CEO study, "CEO decision-making in the age of AI: Act with intention," found that three-quarters of CEO respondents believe competitive advantage will depend on who has the most advanced generative AI. However, executives are also weighing potential risks and barriers of the technology, such as bias, ethics, and security. More than half (57%) of CEOs surveyed are concerned about data security, and 48% worry about bias or data accuracy.

IBM and UC Berkeley Collaborate on Practical Quantum Computing

For weeks, researchers at IBM Quantum and UC Berkeley were taking turns running increasingly complex physical simulations. Youngseok Kim and Andrew Eddins, scientists with IBM Quantum, would test them on the 127-qubit IBM Quantum Eagle processor. UC Berkeley's Sajant Anand would attempt the same calculation using state-of-the-art classical approximation methods on supercomputers located at Lawrence Berkeley National Lab and Purdue University. They'd check each method against an exact brute-force classical calculation.

Eagle returned accurate answers every time. Watching how both computational paradigms performed as the simulations grew increasingly complex made both teams confident that the quantum computer was still returning more accurate answers than the classical approximation methods, even in the regime beyond the reach of the brute-force methods. "The level of agreement between the quantum and classical computations on such large problems was pretty surprising to me personally," said Eddins. "Hopefully it's impressive to everyone."

ITRI Set to Strengthen Taiwan-UK Collaboration on Semiconductors

The newly established Department for Science, Innovation and Technology (DSIT) in the UK has recently released the UK's National Semiconductor Strategy. Dr. Shih-Chieh Chang, General Director of Electronic and Optoelectronic System Research Laboratories at the Industrial Technology Research Institute (ITRI) of Taiwan, had an initial exchange with DSIT. During the exchange, Dr. Chang suggested that Taiwan can become a trusted partner for the UK and that the partnership can leverage collective strengths to create mutually beneficial developments. According to the Strategy, the British government plans to invest 1 billion pounds over the next decade to support the semiconductor industry. This funding will improve access to infrastructure, power more research and development, and facilitate greater international cooperation.

Dr. Chang stressed that ITRI looks forward to more collaboration with the UK on semiconductors to enhance the resilience of the supply chain. While the UK possesses cutting-edge capabilities in semiconductor IP design and compound semiconductor technology, ITRI has extensive expertise in semiconductor technology R&D and trial production. As a result, ITRI is well-positioned to offer consultation services for advanced packaging pilot lines, facilitate pre-production evaluation, and link British semiconductor IP design companies with Taiwan's semiconductor industry chain. "The expansion of British manufacturers' service capacity in Taiwan would create a mutually beneficial outcome for both Taiwan and the UK," said Dr. Chang.

JPR: Graphics Add-in Board Market Continued its Correction in Q1 2023

According to a new research report from the analyst firm Jon Peddie Research, unit shipments in the add-in board (AIB) market decreased by 12.6% quarter to quarter and by 38.2% year to year in Q1 2023. Intel increased its add-in board market share by 2% during the first quarter.

The percentage of desktop PCs that ship with an AIB is referred to as the attach rate. The attach rate grew 8% from last quarter but was down 21% year to year. Approximately 6.3 million add-in boards shipped in Q1 2023. Market shares among the desktop discrete GPU suppliers shifted in the quarter: AMD's share remained flat from last quarter; Intel, which entered the AIB market in Q3'22 with the Arc A770 and A750, gained 2% in market share; and NVIDIA retains its dominant position in the add-in board space with an 84% market share.
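Spelled out as a formula (a straightforward reading of the definition above, not necessarily JPR's exact methodology):

\[ \text{attach rate} = \frac{\text{AIB units shipped}}{\text{desktop PCs shipped}} \times 100\% \]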

NVIDIA Cambridge-1 AI Supercomputer Hooked up to DGX Cloud Platform

Scientific researchers need massive computational resources that can support exploration wherever it happens. Whether they're conducting groundbreaking pharmaceutical research, exploring alternative energy sources or discovering new ways to prevent financial fraud, accessible state-of-the-art AI computing resources are key to driving innovation. This new model of computing can solve the challenges of generative AI and power the next wave of innovation. Cambridge-1, a supercomputer NVIDIA launched in the U.K. during the pandemic, has powered discoveries from some of the country's top healthcare researchers. The system is now becoming part of NVIDIA DGX Cloud to accelerate the pace of scientific innovation and discovery - across almost every industry.

As a cloud-based resource, it will broaden access to AI supercomputing for researchers in climate science, autonomous machines, worker safety and other areas, delivered with the simplicity and speed of the cloud and ideally located for U.K. and European access. DGX Cloud is a multinode AI training service that makes it possible for any enterprise to access leading-edge supercomputing resources from a browser. The original Cambridge-1 infrastructure included 80 NVIDIA DGX systems; it will now join DGX Cloud, giving customers access to world-class infrastructure.

Google Expands Flood Hub Platform's Global Reach

Natural disasters, like flooding, are increasing in frequency and intensity due to climate change, threatening people's safety and livelihood. It's estimated that flooding affects more than 250 million people globally each year and causes around $10 billion in economic damages.

As part of our work to use AI to address the climate crisis, today we're expanding our flood forecasting capabilities to 80 countries. With the addition of 60 new countries across Africa, the Asia-Pacific region, Europe, and South and Central America, our platform Flood Hub now includes some of the territories with the highest percentages of population exposed to flood risk and experiencing more extreme weather, covering 460 million people globally.

RIKEN and Intel Collaborate on "Road to Exascale"

RIKEN and Intel Corporation (hereafter referred to as Intel) have signed a memorandum of understanding on collaboration and cooperation to accelerate joint research in next-generation computing fields such as AI (artificial intelligence), high-performance computing, and quantum computing. The signing ceremony took place on May 18, 2023. As part of this MOU, RIKEN will work with Intel Foundry Services (IFS) to prototype new solutions in these fields.

Artificial Intelligence Helped Tape Out More than 200 Chips

In its recent second-quarter fiscal 2023 earnings call, Synopsys shared interesting information about the recent moves of chip developers and their usage of artificial intelligence. As the call notes, more than 200 chips have been taped out using the Synopsys DSO.ai place-and-route (PnR) tool, making it a commercially proven AI chip design tool. DSO.ai uses AI to optimize the placement and routing of the chip's transistors so that the layout is compact and efficient with regard to the strict timing constraints of a modern chip. According to Aart J. de Geus, CEO of Synopsys, "By the end of 2022, adoption, including 9 of the top 10 semiconductor vendors have moved forward at great speed with 100 AI-driven commercial tape-outs. Today, the tally is well over 200 and continues to increase at a very fast clip as the industry broadly adopts AI for design from Synopsys."

This suggests that customers are seeing real benefits from AI-assisted tools like DSO.ai. The company is not stopping there, however, and a whole suite of tools is getting an AI makeover. "We unveiled the industry's first full-stack AI-driven EDA suite, Synopsys.ai," noted the CEO, adding: "Specifically, in parallel to second-generation advances in DSO.ai, we announced VSO.ai, which stands for verification space optimization; and TSO.ai, test space optimization. In addition, we are extending AI across the design stack to include analog design and manufacturing." Synopsys' partners in this include NVIDIA, TSMC, MediaTek, Renesas, and IBM Research, all of which have used AI-assisted tools in chip design efforts. A much wider range of industry players is expected to adopt these tools as chip design costs continue to soar with shrinking nodes. With a future 3 nm GPU design estimated to cost $1.5 billion, about 40% of it attributable to software, Synopsys plans to take a cut of that spending.

IonQ Aria Now Available on Amazon Braket Cloud Quantum Computing Service

Today at Commercialising Quantum Global 2023, IonQ (NYSE: IONQ), an industry leader in quantum computing, announced the availability of IonQ Aria on Amazon Braket, AWS's quantum computing service. This expands upon IonQ's existing presence on Amazon Braket, following the debut of IonQ's Harmony system on the platform in 2020. With broader access to IonQ Aria, IonQ's flagship system with 25 algorithmic qubits (#AQ)—more than 65,000 times more powerful than IonQ Harmony—users can now explore, design, and run more complex quantum algorithms to tackle some of the most challenging problems of today.
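The "65,000 times" figure follows if, per IonQ's framing of the #AQ benchmark, each additional algorithmic qubit doubles the usable computational space. Assuming Harmony measures #AQ 9 (an assumption here, not stated in the announcement), the ratio works out as:

\[ \frac{2^{\#AQ_{\text{Aria}}}}{2^{\#AQ_{\text{Harmony}}}} = \frac{2^{25}}{2^{9}} = 2^{16} = 65{,}536. \]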

"We are excited for IonQ Aria to become available on Amazon Braket, as we expand the ways users can access our leading quantum computer on the most broadly adopted cloud service provider," said Peter Chapman, CEO and President, IonQ. "Amazon Braket has been instrumental in commercializing quantum, and we look forward to seeing what new approaches will come from the brightest, most curious, minds in the space."

Sharp Working on LCD Screen for Next Generation Gaming Console

Sharp Corporation's Chief Executive Officer, Robert Wu, has revealed that the Japanese company's display division is involved in the research and development of an LCD screen intended for use in an "upcoming" gaming console. This detail emerged during a May 11 earnings briefing with analysts, and Wu was careful not to name the key client: "I can't comment on any details regarding specific customers. But as to a new gaming console, we've been involved in its R&D stage." Bloomberg reports that information about a gaming device was removed from the presentation material soon after the conclusion of the conference call. Sharp expects to launch pilot LCD-panel production lines by the end of this fiscal year.

Given the secrecy surrounding this business partnership and the projected time frame for the new line of gaming-oriented LCD screens, games industry analysts point to Nintendo as the "unnamed" client. Sharp is the current supplier of the 6.2-inch LCD panels featured on the standard Nintendo Switch (2017) handheld console. Samsung provides the 7-inch screen fitted to the premium Switch OLED (2021), and InnoLux makes the 5.5-inch LCD display sported by the entry-level Switch Lite (2019). A next-gen Switch successor is rumored to be deep into development but is not expected to arrive this year - Nintendo's CEO, Shuntaro Furukawa, informed investors (last week) that new hardware is due at some point after April 2024.

AMD faulTPM Exploit Targets Zen 2 and Zen 3 Processors

Researchers at the Technical University of Berlin have published a paper called "faulTPM: Exposing AMD fTPMs' Deepest Secrets," showing that AMD's firmware-based Trusted Platform Module (fTPM) is susceptible to a new exploit targeting Zen 2 and Zen 3 processors. The faulTPM attack exploits the AMD Secure Processor's (SP) vulnerability to voltage fault injection attacks. This allows the attacker to extract a chip-unique secret from the targeted CPU, which is then used to derive the storage and integrity keys protecting the fTPM's non-volatile data stored on the BIOS flash chip. The attack consists of a manual parameter determination phase and a brute-force search for a final delay parameter. The first step requires around 30 minutes of manual attention, but it can potentially be automated. The second phase consists of repeated attack attempts to search for the last-to-be-determined parameter and execute the attack's payload.

Once these steps are completed, the attacker can extract any cryptographic material stored or sealed by the fTPM, regardless of authentication mechanisms such as Platform Configuration Register (PCR) validation or passphrases with anti-hammering protection. Notably, BitLocker relies on the TPM as a security measure, so faulTPM compromises BitLocker-protected systems. The researchers found Zen 2 and Zen 3 CPUs to be vulnerable, while Zen 4 wasn't mentioned. The attack requires several hours of physical access, so it cannot be exploited remotely. Below, you can see the $200 system used for this attack and an illustration of the physical connections necessary.
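As a rough illustration of the automated second phase, here is a minimal sketch of a brute-force sweep over the one remaining delay parameter. All names and the simulated "glitch" are hypothetical stand-ins; the real attack drives a voltage fault injector against the Secure Processor.

```python
# Hypothetical sketch: brute-force the last undetermined glitch-delay value.
# attempt_glitch() is a simulation, not real fault-injection tooling.

TRUE_DELAY_NS = 4_237          # stand-in for the unknown working delay

def attempt_glitch(delay_ns: int) -> bool:
    """Simulated attempt; succeeds only inside a narrow timing window."""
    return abs(delay_ns - TRUE_DELAY_NS) <= 2

def brute_force_delay(lo_ns: int, hi_ns: int, step_ns: int = 1):
    for delay in range(lo_ns, hi_ns, step_ns):
        if attempt_glitch(delay):  # did the payload execute on the SP?
            return delay           # glitch window found
    return None                    # no working delay in the searched range

print(brute_force_delay(0, 10_000))  # prints a delay near TRUE_DELAY_NS
```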

NVIDIA DGX H100 Systems are Now Shipping

Customers from Japan to Ecuador and Sweden are using NVIDIA DGX H100 systems like AI factories to manufacture intelligence. They're creating services that offer AI-driven insights in finance, healthcare, law, IT and telecom—and working to transform their industries in the process. Among the dozens of use cases, one aims to predict how factory equipment will age, so tomorrow's plants can be more efficient.

Called Green Physics AI, it adds information like an object's CO2 footprint, age and energy consumption to SORDI.ai, which claims to be the largest synthetic dataset in manufacturing.

MIT Researchers Grow Transistors on Top of Silicon Wafers

MIT researchers have developed a groundbreaking technology that allows for the growth of 2D transition metal dichalcogenide (TMD) materials directly on fully fabricated silicon chips, enabling denser integrations. Conventional methods require temperatures of about 600°C, which can damage silicon transistors and circuits as they break down above 400°C. The MIT team overcame this challenge by creating a low-temperature growth process that preserves the chip's integrity, allowing 2D semiconductor transistors to be directly integrated on top of standard silicon circuits. The new approach grows a smooth, highly uniform layer across an entire 8-inch wafer, unlike previous methods that involved growing 2D materials elsewhere before transferring them to a chip or wafer. This process often led to imperfections that negatively impacted device and chip performance.

Additionally, the novel technology can grow a uniform layer of TMD material in less than an hour over 8-inch wafers, a significant improvement from previous methods that required over a day for a single layer. The enhanced speed and uniformity of this technology make it suitable for commercial applications, where 8-inch or larger wafers are essential. The researchers focused on molybdenum disulfide, a flexible, transparent 2D material with powerful electronic and photonic properties ideal for semiconductor transistors. They designed a new furnace for the metal-organic chemical vapor deposition process, which has separate low and high-temperature regions. The silicon wafer is placed in the low-temperature region while vaporized molybdenum and sulfur precursors flow into the furnace. Molybdenum remains in the low-temperature region, while the sulfur precursor decomposes in the high-temperature region before flowing back into the low-temperature region to grow molybdenum disulfide on the wafer surface.

43rd Symposium on VLSI Technology & Circuits to Focus on Multi-chiplet Devices and Packaging Innovations as Moore's Law Buckles

The 43rd edition of the Symposium on VLSI Technology & Circuits, held annually in Kyoto, Japan, is charting the way forward for the devices of the future. Held from June 11 to 16, 2023, this year's symposium will see structured presentations, Q&A, and discussions on some of the biggest technological developments in the logic chip world. The lead (plenary) sessions drop a major hint on the way the wind is blowing. Leading from the front is an address by Suraya Bhattacharya, Director, System-in-Package, A*STAR IME, on "Multi-Chiplet Heterogeneous Integration Packaging for Semiconductor System Scaling."

Companies such as AMD and Intel have read the tea leaves: Moore's Law is buckling, and it's no longer economically feasible to build large monolithic processors at the kind of prices they commanded a decade ago. This has caused companies to ration the latest foundry node for only the specific components of a chip design that benefit the most from it, identify components that don't benefit as much, and disintegrate those into separate dies built on older foundry nodes, connected through innovative packaging technologies.

YMTC Using Locally Sourced Equipment for Advanced 3D NAND Manufacturing

According to South China Morning Post (SCMP) sources, Yangtze Memory Technologies Corp (YMTC) has been planning to manufacture its advanced 3D NAND flash using locally sourced equipment. As the source notes, YMTC has placed big orders with local equipment makers in a secret project codenamed Wudangshan, named after the Taoist mountain in the company's home province of Hubei. Last year, YMTC announced significant progress toward creating 200+ layer 3D NAND flash ahead of other 3D NAND makers like Micron and SK Hynix. Called X3-9070, the chip is a 232-layer 3D NAND based on the company's advanced Xtacking 3.0 architecture.

As the SCMP finds, YMTC has placed big orders with Beijing-based Naura Technology Group, a maker of etching tools and competitor to Lam Research, to manufacture its advanced flash memory. Additionally, YMTC has reportedly asked all its tool suppliers to remove all logos and other marks from equipment to avoid additional US sanctions holding back development. This significant order block comes after the state invested 7 billion US dollars into YMTC to boost its production capacity, and we see the company utilizing those resources right away. However, industry analysts have identified a few "choke points" in YMTC's path to independent manufacturing, as there are still no viable domestic alternatives to US-based tool makers in areas such as metrology tools, where KLA is the dominant player, and lithography tools, where ASML, Nikon, and Canon are the noteworthy players. How the Wudangshan project, based in Wuhan, will deal with those choke points remains a secret.

Google Merges its AI Subsidiaries into Google DeepMind

Google has announced that the company is officially merging its artificial intelligence subsidiaries into a single group. More specifically, the Google Brain and DeepMind teams are now joining forces as a single unit called Google DeepMind. As Google CEO Sundar Pichai notes: "This group, called Google DeepMind, will bring together two leading research groups in the AI field: the Brain team from Google Research, and DeepMind. Their collective accomplishments in AI over the last decade span AlphaGo, Transformers, word2vec, WaveNet, AlphaFold, sequence to sequence models, distillation, deep reinforcement learning, and distributed systems and software frameworks like TensorFlow and JAX for expressing, training and deploying large scale ML models."

Demis Hassabis, previously CEO of DeepMind, will lead the new group as its CEO. He will work together with Jeff Dean, who has been promoted to Google's Chief Scientist and will report to Pichai. In his new role, Jeff Dean will serve as Chief Scientist for both Google Research and Google DeepMind, setting the direction of AI research at both units. This corporate restructuring should help the two previously separate teams work to a single plan and advance AI capabilities faster. We are eager to see what the combined teams accomplish.

University of Chicago Molecular Engineering Team Experimenting With Stretchable OLED Display

A research team operating out of the Pritzker School of Molecular Engineering (PME) at the University of Chicago is developing a special type of material that is simultaneously capable of emitting a fluorescent pattern and undergoing deformation via forced stretches or bends. This thin piece of experimental elastic can function as a digital display, even under conditions of great force - its creators claim that their screen material can be stretched to twice its original length without any deterioration or failure.

Sihong Wang (assistant professor of molecular engineering) has led this research project, with Juan de Pablo (Liew Family Professor of Molecular Engineering) providing senior supervision. The team predicts that the polymer-based display will offer a wide range of applications, including use in foldable computer screens, UI-driven wearables, and health monitoring equipment. Solid OLED displays are featured in many modern devices that we use on a daily basis, but the traditional form of that technology is not suitable for flexible materials due to its inherent "tight chemical bonds and stiff structures." Wang hopes to address these problems with his new polymer: "The materials currently used in these state-of-the-art OLED displays are very brittle; they don't have any stretchability. Our goal was to create something that maintained the electroluminescence of OLED but with stretchable polymers."

Arm-based PCs to Nearly Double Market Share by 2027, Says Report

Personal computers (PCs) based on the Arm architecture will grow in popularity, with their market share almost doubling from 14% now to 25% by 2027, according to Counterpoint Research's latest projections. The ability of Arm-based hardware to run macOS has allowed Apple to capture 90% of the Arm-based notebook computer market. However, full support for Windows and Office 365 and the speed of native Arm-based app adoption are also critical factors in determining the Arm SoC penetration rate in PCs. Once these factors are addressed, Arm-based PCs will become a viable option for both daily users and businesses.

As more existing PC OEMs/ODMs and smartphone manufacturers enter the market, they will bring their expertise in Arm-based hardware and software, which will further boost the popularity of Arm-based PCs. The availability of more native Arm-based apps will also increase user comfort and familiarity with the platform. Overall, the trend towards Arm-based PCs is expected to continue and their market share will likely increase significantly in the coming years.

Mitsui and NVIDIA Announce World's First Generative AI Supercomputer for Pharmaceutical Industry

Mitsui & Co., Ltd., one of Japan's largest business conglomerates, is collaborating with NVIDIA on Tokyo-1—an initiative to supercharge the nation's pharmaceutical leaders with technology, including high-resolution molecular dynamics simulations and generative AI models for drug discovery.

Announced today at the NVIDIA GTC global AI conference, the Tokyo-1 project features an NVIDIA DGX AI supercomputer that will be accessible to Japan's pharma companies and startups. The effort is poised to accelerate Japan's $100 billion pharma industry, the world's third largest following the U.S. and China.