News Posts matching #simulations

IBM Launches Its Most Advanced Quantum Computers, Fueling New Scientific Value and Progress towards Quantum Advantage

Today at its inaugural IBM Quantum Developer Conference, IBM announced quantum hardware and software advancements to execute complex algorithms on IBM quantum computers with record levels of scale, speed, and accuracy.

IBM Quantum Heron, the company's most performant quantum processor to date and available in IBM's global quantum data centers, can now leverage Qiskit to accurately run certain classes of quantum circuits with up to 5,000 two-qubit gate operations. Users can apply these capabilities to expand explorations of how quantum computers can tackle scientific problems across materials, chemistry, life sciences, high-energy physics, and more.
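
As a rough illustration of what running such a workload looks like from the user's side, here is a minimal Qiskit sketch that submits a small circuit to an IBM Quantum backend through the Runtime Estimator primitive. The backend selection, observable, and two-qubit circuit are illustrative assumptions; real utility-scale experiments involve far deeper circuits:

```python
# Minimal sketch of running a circuit on IBM Quantum hardware via Qiskit.
# Backend, circuit, and observable are illustrative, not from the announcement.
from qiskit import QuantumCircuit
from qiskit.quantum_info import SparsePauliOp
from qiskit.transpiler.preset_passmanagers import generate_preset_pass_manager
from qiskit_ibm_runtime import QiskitRuntimeService, EstimatorV2 as Estimator

service = QiskitRuntimeService()   # assumes a saved IBM Quantum account
backend = service.least_busy(operational=True, simulator=False)

qc = QuantumCircuit(2)             # tiny entangling circuit for illustration
qc.h(0)
qc.cx(0, 1)

# Transpile to the backend's native gate set and qubit layout.
pm = generate_preset_pass_manager(optimization_level=3, backend=backend)
isa_circuit = pm.run(qc)
observable = SparsePauliOp("ZZ").apply_layout(isa_circuit.layout)

estimator = Estimator(mode=backend)  # recent qiskit-ibm-runtime versions
result = estimator.run([(isa_circuit, observable)]).result()
print(result[0].data.evs)            # estimated <ZZ> expectation value
```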

Etched Introduces AI-Powered Games Without GPUs, Displays Minecraft Replica

The gaming industry is about to get massively disrupted. Instead of using game engines to power games, we are now witnessing an entirely new concept. A startup specializing in designing ASICs specifically for the Transformer architecture, the foundation behind generative AI models like GPT/Claude/Stable Diffusion, has showcased a demo in partnership with Decart of a Minecraft clone being generated and operated entirely by AI instead of a traditional game engine. While we already use AI to create fairly realistic images and videos from text descriptions, having an AI model produce an entire playable game is something different. Oasis is the first playable, real-time, open-world AI model: it takes user input and generates gameplay on the fly, including physics, game rules, and graphics.

An interesting aspect of this demo is the hardware powering it. On a single NVIDIA H100 GPU, the 500-million-parameter Oasis model runs at 720p and 20 generated frames per second. Given the limitations of accelerators like NVIDIA's H100/B200, gameplay at 4K is nearly impossible. However, Etched has its own accelerator, Sohu, specialized for accelerating the Transformer architecture. Eight NVIDIA H100 GPUs can serve the Oasis model to five concurrent users, while eight Sohu cards can serve 65 users, a more than 10x increase in inference capacity on this single use case. The accelerator is designed to run much larger models, such as future 100-billion-parameter generative AI video game models outputting 4K at 30 FPS, thanks to 144 GB of HBM3E memory per card, or 1,152 GB in an eight-accelerator server configuration.
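
A quick back-of-the-envelope check of those claims, using only the figures quoted above:

```python
# Sanity check of the throughput and memory figures reported above.
h100_users = 5        # users served by eight H100 GPUs
sohu_users = 65       # users served by eight Sohu cards

print(f"Capacity ratio: {sohu_users / h100_users:.0f}x")  # 13x, i.e. "more than 10x"

hbm3e_per_card_gb = 144  # HBM3E per Sohu accelerator
print(f"Server memory: {8 * hbm3e_per_card_gb} GB")       # 1,152 GB per 8-card server
```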

NVIDIA Modulus & Omniverse Drive Physics-informed Models and Simulations

A manufacturing plant near Hsinchu, Taiwan's Silicon Valley, is among facilities worldwide boosting energy efficiency with AI-enabled digital twins. A virtual model can help streamline operations, maximizing throughput for its physical counterpart, say engineers at Wistron, a global designer and manufacturer of computers and electronics systems. In the first of several use cases, the company built a digital copy of a room where NVIDIA DGX systems undergo thermal stress tests. Early results were impressive.

Making Smart Simulations
Using NVIDIA Modulus, a framework for building AI models that understand the laws of physics, Wistron created digital twins that accurately predict airflow and temperature in test facilities that must remain between 27 and 32 degrees C. A simulation that would have taken nearly 15 hours with traditional methods on a CPU took just 3.3 seconds with an NVIDIA GPU running inference on an AI model developed using Modulus, a roughly 15,000x speedup. The results were fed into tools and applications built by Wistron developers with NVIDIA Omniverse, a platform for creating 3D workflows and applications based on OpenUSD.
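
Modulus wraps this pattern in its own APIs, but the core idea, training a network whose loss penalizes the residual of a governing physics equation alongside boundary data, can be sketched in a few lines of framework-agnostic PyTorch. Everything below (the 1-D steady heat equation, network size, and training points) is an illustrative assumption, not Wistron's actual model:

```python
# Minimal physics-informed training sketch (illustrative, not the Modulus API).
import torch
import torch.nn as nn

# Small network mapping position x -> temperature T(x).
model = nn.Sequential(nn.Linear(1, 64), nn.Tanh(),
                      nn.Linear(64, 64), nn.Tanh(),
                      nn.Linear(64, 1))

def pde_residual(x):
    """Residual of 1-D steady heat conduction, d2T/dx2 = 0."""
    x = x.requires_grad_(True)
    T = model(x)
    dT = torch.autograd.grad(T, x, torch.ones_like(T), create_graph=True)[0]
    d2T = torch.autograd.grad(dT, x, torch.ones_like(dT), create_graph=True)[0]
    return d2T

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x_interior = torch.rand(256, 1)              # collocation points inside the domain
x_bc = torch.tensor([[0.0], [1.0]])          # boundary locations
T_bc = torch.tensor([[27.0], [32.0]])        # boundary temps from the 27-32 C band

for step in range(2000):
    opt.zero_grad()
    loss = (pde_residual(x_interior).pow(2).mean()      # physics term
            + (model(x_bc) - T_bc).pow(2).mean())       # boundary-data term
    loss.backward()
    opt.step()
```

Once trained, inference is a single forward pass, which is why it can be orders of magnitude faster than re-running a traditional solver.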

Altair SimSolid Transforms Simulation for Electronics Industry

Altair, a global leader in computational intelligence, announced the upcoming release of Altair SimSolid for electronics, bringing fast, easy, and precise multiphysics scenario exploration to electronics, from chips, PCBs, and ICs to full system designs. "As the electronics industry pushes the boundaries of complexity and miniaturization, engineers have struggled with simulations that often compromise on detail for expediency. Altair SimSolid will empower engineers to capture the intricate complexities of PCBs and ICs without simplification," said James R. Scapa, founder and chief executive officer, Altair. "Traditional simulation methods often require approximations when analyzing PCB structures due to their complexity. Altair SimSolid eliminates these approximations to run more accurate simulations for complex problems with vast dimensional disparities."

Altair SimSolid has revolutionized conventional analysis by accurately predicting complex structural behavior with blazing speed while eliminating hours of laborious modeling. It removes geometry simplification and meshing, the two most time-consuming and expertise-intensive tasks in traditional finite element analysis. As a result, it delivers results in seconds to minutes, up to 25x faster than traditional finite element solvers, and effortlessly handles complex assemblies. Having seen rapid adoption in the aerospace and automotive industries, two sectors that routinely deal with massive structures, Altair SimSolid is poised to play a significant role in the electronics market. The initial release, expected in Q2 2024, will support structural and thermal analysis for PCBs and ICs, with full electromagnetics analysis coming in a future release.

IBM Opens State-of-the-Art "X-Force Cyber Range" in Washington DC

IBM has announced the official opening of the new IBM X-Force Cyber Range in Washington, DC. The range includes new custom training exercises specifically designed to help U.S. federal agencies, their suppliers, and critical infrastructure organizations respond more effectively to persistent and disruptive cyberattacks and to threats posed by AI. The state-of-the-art facility is designed to help everyone from legal and mission-critical leaders to the C-suite and technical security leaders prepare for a real-world cyber incident. According to IBM's 2023 Cost of a Data Breach report, the global average cost of a data breach reached $4.45 million, with the U.S. facing the highest breach costs of any region. Organizations that formed an incident response (IR) team and tested their IR plan experienced faster incident response times and lower costs than organizations that did neither. In fact, the report found that high levels of IR planning and testing saved industry and government nearly $1.5 million in breach costs and shortened the data breach lifecycle by 54 days.

"From national security threats to supply chain disruptions impacting the goods and services we rely on every day, cyberattacks on government and critical infrastructure can have ramifications that go far beyond the balance sheet," said Alice Fakir, Partner, Lead of Cybersecurity Services, US Federal Market for IBM Consulting. "The elite and highly customizable cyber response training we provide at our new DC range helps organizations and federal agencies better defend against existing and emerging threats, and also addresses federal mandates like those in the Biden Administration's Executive Order 14028 focused on improving the nation's cybersecurity."

GIGABYTE Highlights its GPU Server Portfolio Ahead of World AI Festival

The World AI Cannes Festival (WAICF) is set to be the epicenter of artificial intelligence innovation, where the globe's top 200 decision-makers and AI innovators will converge for three days of intense discussions on groundbreaking AI strategies and use cases. Against the backdrop of this premier event, GIGABYTE is participating to showcase its rapid growth in the AI and high-performance computing (HPC) market segments.

The AI industry has witnessed unprecedented growth, with cloud service providers (CSPs) and data center operators spearheading supercomputing projects. GIGABYTE's decision to promote its GPU server portfolio of more than 70 models at WAICF is a testament to increasing demand from the French market for sovereign AI cloud solutions. The spotlight will be on GIGABYTE's success stories in enabling GPU cloud infrastructure powered by NVIDIA GPU technologies, as GIGABYTE aims to engage in meaningful conversations with end users and firms dependent on GPU computing.

AWS and NVIDIA Partner to Deliver 65 ExaFLOP AI Supercomputer, Other Solutions

Amazon Web Services, Inc. (AWS), an Amazon.com, Inc. company (NASDAQ: AMZN), and NVIDIA (NASDAQ: NVDA) today announced an expansion of their strategic collaboration to deliver the most advanced infrastructure, software, and services to power customers' generative artificial intelligence (AI) innovations. The companies will bring together the best of NVIDIA and AWS technologies, from NVIDIA's newest multi-node systems featuring next-generation GPUs, CPUs, and AI software to AWS Nitro System advanced virtualization and security, Elastic Fabric Adapter (EFA) interconnect, and UltraCluster scalability, all ideal for training foundation models and building generative AI applications.

The expanded collaboration builds on a longstanding relationship that has fueled the generative AI era by offering early machine learning (ML) pioneers the compute performance required to advance the state of the art in these technologies.

Ansys Collaborates with TSMC and Microsoft to Accelerate Mechanical Stress Simulation for 3D-IC Reliability in the Cloud

Ansys has collaborated with TSMC and Microsoft to validate a joint solution for analyzing mechanical stresses in multi-die 3D-IC systems manufactured with TSMC's 3DFabric, a comprehensive family of 3D silicon stacking and advanced packaging technologies. This joint solution gives customers added confidence to address novel multiphysics requirements that improve the functional reliability of advanced designs built on 3DFabric.

Ansys Mechanical is the industry-leading finite element analysis software used to simulate mechanical stresses caused by thermal gradients in 3D-ICs. The solution flow has been proven to run efficiently on Microsoft Azure, helping to ensure fast turnaround times with today's very large and complex 2.5D/3D-IC systems.

Cooler Master Launches the Dyn X: A New Frontier in Racing Simulation

Cooler Master, a leading innovator in PC hardware, today announced the launch of the Dyn X, a breakthrough product for virtual racing and flying simulations. The Dyn X is more than a simulator; it's a gateway to a whole new world of adventures.

"Cooler Master has always been at the forefront of innovation, and the Dyn X is a testament to our unwavering commitment to elevating gaming experiences," said Jimmy Sha, CEO of Cooler Master. "Whether you're taking on a new racetrack or taking flight, the Dyn X offers unprecedented customization, comfort, and immersive gameplay." The Dyn X consists of two main components: a racing seat and a cockpit. Both are designed using high-quality materials, feature universal compatibility, and adopt modern race-inspired aesthetics. The Dyn X provides users with the opportunity to craft a simulation environment that aligns perfectly with their preferences.

NVIDIA Key Player in Creation of OpenUSD Standard for 3D Worlds

NVIDIA joined Pixar, Adobe, Apple and Autodesk today to found the Alliance for OpenUSD, a major leap toward unlocking the next era of 3D graphics, design and simulation. The group will standardize and extend OpenUSD, the open-source Universal Scene Description framework that's the foundation of interoperable 3D applications and projects ranging from visual effects to industrial digital twins.

Several leading companies in the 3D ecosystem have already signed on as the alliance's first general members: Cesium, Epic Games, Foundry, Hexagon, IKEA, SideFX, and Unity. Standardizing OpenUSD will accelerate its adoption, creating a foundational technology that will help today's 2D internet evolve into a 3D web. Many companies are already working with NVIDIA to pioneer this future.
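
For readers new to OpenUSD, the sketch below authors a tiny scene with the open-source pxr Python bindings (available on PyPI as usd-core); the file and prim names are illustrative:

```python
# Minimal OpenUSD example using the open-source pxr bindings (pip install usd-core).
# File and prim names are illustrative.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("scene.usda")      # human-readable USD layer
UsdGeom.SetStageUpAxis(stage, UsdGeom.Tokens.z)

xform = UsdGeom.Xform.Define(stage, "/World")           # transform prim
sphere = UsdGeom.Sphere.Define(stage, "/World/Sphere")  # geometry beneath it
sphere.GetRadiusAttr().Set(2.0)

stage.GetRootLayer().Save()                    # writes scene.usda to disk
```

Because every participating tool reads and writes this same scene description, assets authored in one application can be opened and extended in another without lossy conversion.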

Tour de France Bike Designs Developed with NVIDIA RTX GPU Technologies

NVIDIA RTX is spinning new cycles for design. Trek Bicycle is using GPUs to bring design concepts to life. The Wisconsin-based company, one of the largest bicycle manufacturers in the world, aims to create bikes with the highest-quality craftsmanship. Trek Bicycle also co-owns a cycling team with its new partner Lidl, an international retail chain; the team, now called Lidl-Trek, is competing in the annual Tour de France stage race on Trek Bicycle's flagship lineup, which includes the Emonda, Madone, and Speed Concept. Many of the team's accessories and equipment, such as the wheels and road race helmets, were also designed at Trek.

Bicycle design involves complex physics—and a key challenge is balancing aerodynamic efficiency with comfort and ride quality. To address this, the team at Trek is using NVIDIA A100 Tensor Core GPUs to run high-fidelity computational fluid dynamics (CFD) simulations, setting new benchmarks for aerodynamics in a bicycle that's also comfortable to ride and handles smoothly. The designers and engineers are further enhancing their workflows using NVIDIA RTX technology in Dell Precision workstations, including the NVIDIA RTX A5500 GPU, as well as a Dell Precision 7920 running dual RTX A6000 GPUs.

Intel Tech Helping Design Prototype Fusion Power Plant

What's New: As part of a collaboration with Intel and Dell Technologies, the United Kingdom Atomic Energy Authority (UKAEA) and the Cambridge Open Zettascale Lab plan to build a "digital twin" of the Spherical Tokamak for Energy Production (STEP) prototype fusion power plant. The UKAEA will utilize the lab's supercomputer based on Intel technologies, including 4th Gen Intel Xeon Scalable processors, distributed asynchronous object storage (DAOS) and oneAPI tools to streamline the development and delivery of fusion energy to the grid in the 2040s.

"Planning for the commercialization of fusion power requires organizations like UKAEA to utilize extreme amounts of computational resources and artificial intelligence for simulations. These HPC workloads may be performed using a variety of different architectures, which is why open software solutions that optimize performance needs can lend portability to code that isn't available in closed, proprietary systems. Overall, advanced hardware and software can make the journey to commercial fusion power lower risk and accelerated - a key benefit on the path to sustainable energy."—Adam Roe, Intel EMEA HPC technical director

NVIDIA Cambridge-1 AI Supercomputer Hooked up to DGX Cloud Platform

Scientific researchers need massive computational resources that can support exploration wherever it happens. Whether they're conducting groundbreaking pharmaceutical research, exploring alternative energy sources or discovering new ways to prevent financial fraud, accessible state-of-the-art AI computing resources are key to driving innovation. This new model of computing can solve the challenges of generative AI and power the next wave of innovation. Cambridge-1, a supercomputer NVIDIA launched in the U.K. during the pandemic, has powered discoveries from some of the country's top healthcare researchers. The system is now becoming part of NVIDIA DGX Cloud to accelerate the pace of scientific innovation and discovery - across almost every industry.

As a cloud-based resource, it will broaden access to AI supercomputing for researchers in climate science, autonomous machines, worker safety, and other areas, delivered with the simplicity and speed of the cloud and ideally located for U.K. and European access. DGX Cloud is a multi-node AI training service that makes it possible for any enterprise to access leading-edge supercomputing resources from a browser. The original Cambridge-1 infrastructure included 80 NVIDIA DGX systems; it will now join DGX Cloud, giving customers access to world-class infrastructure.

University of Chicago Molecular Engineering Team Experimenting With Stretchable OLED Display

A research team at the Pritzker School of Molecular Engineering (PME) at the University of Chicago is developing a special type of material that can emit a fluorescent pattern while being stretched or bent. This thin piece of experimental elastic can function as a digital display even under great force; its creators claim the material can be stretched to twice its original length without any deterioration or failure.

Sihong Wang (assistant professor of molecular engineering) has led the research project, with Juan de Pablo (Liew Family Professor of Molecular Engineering) providing senior supervision. The team predicts that the polymer-based display will find a wide range of applications, including foldable computer screens, UI-driven wearables, and health monitoring equipment. Solid OLED displays feature in many devices we use daily, but that traditional technology is unsuited to flexible materials because of its inherently "tight chemical bonds and stiff structures." Wang hopes to address these problems with his new polymer: "The materials currently used in these state-of-the-art OLED displays are very brittle; they don't have any stretchability. Our goal was to create something that maintained the electroluminescence of OLED but with stretchable polymers."