News Posts matching #API


Forspoken Simply Doesn't Work with AMD Radeon RX 400 and RX 500 "Polaris" GPUs

AMD Radeon RX 400 series and RX 500 series graphics cards based on the "Polaris" graphics architecture are simply unable to run "Forspoken," as users on Reddit report. The game requires DirectX 12 feature level 12_1, which the architecture does not support. Interestingly, NVIDIA's "Maxwell" graphics architecture, which predates AMD "Polaris" by almost a year, supports FL 12_1 and is able to play the game. Popular GPUs from the "Maxwell" generation include the GeForce GTX 970 and GTX 960. Making matters worse, AMD has yet to release an update to its Adrenalin graphics drivers with "Forspoken" optimizations for the RX Vega, RX 5000, and RX 6000 series. Its latest 23.1.2 beta drivers, which do include these optimizations, only support RX 7000 series RDNA3 graphics cards. It has now been over 50 days since the vast majority of AMD's discrete GPUs last received a driver update.
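For context, a game can hard-gate itself on feature level 12_1 with a single device-creation call at startup. A minimal sketch of such a check (illustrative only, not Forspoken's actual code):

```cpp
// Probe for Direct3D 12 feature level 12_1, the requirement "Polaris"
// GPUs reportedly fail. Link against d3d12.lib; error handling trimmed.
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

int main()
{
    Microsoft::WRL::ComPtr<ID3D12Device> device;
    // nullptr selects the default adapter. If the GPU cannot satisfy
    // FL 12_1, device creation fails, and a game that hard-requires it
    // refuses to start.
    HRESULT hr = D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_1,
                                   IID_PPV_ARGS(&device));
    std::printf(SUCCEEDED(hr) ? "FL 12_1 supported\n"
                              : "FL 12_1 NOT supported\n");
    return 0;
}
```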

Microsoft and OpenAI Extend Partnership with Additional Investment

Today, we are announcing the third phase of our long-term partnership with OpenAI through a multiyear, multibillion dollar investment to accelerate AI breakthroughs to ensure these benefits are broadly shared with the world.

This agreement follows our previous investments in 2019 and 2021. It extends our ongoing collaboration across AI supercomputing and research and enables each of us to independently commercialize the resulting advanced AI technologies.

HaptX Introduces Industry's Most Advanced Haptic Gloves, Priced for Scalable Deployment

HaptX Inc., the leading provider of realistic haptic technology, today announced that pre-orders are open for the company's new HaptX Gloves G1, a ground-breaking haptic device optimized for the enterprise metaverse. HaptX has engineered HaptX Gloves G1 with the features most requested by HaptX customers, including improved ergonomics, multiple glove sizes, wireless mobility, new and improved haptic functionality, and multiplayer collaboration, all priced as low as $4,500 per pair - a fraction of the cost of the award-winning HaptX Gloves DK2.

"With HaptX Gloves G1, we're making it possible for all organizations to leverage our lifelike haptics," said Jake Rubin, Founder and CEO of HaptX. "Touch is the cornerstone of the next generation of human-machine interface technologies, and the opportunities are endless." HaptX Gloves G1 leverages advances in materials science and the latest manufacturing techniques to deliver the first haptic gloves that fit like a conventional glove. The Gloves' digits, palm, and wrist are soft and flexible for uninhibited dexterity and comfort. Available in four sizes (Small, Medium, Large, and Extra Large), these Gloves offer the best fit and performance for all adult hands. Inside the Gloves are hundreds of microfluidic actuators that physically displace your skin, so when you touch and interact with virtual objects, the objects feel real.

Intel Accelerates Developer Innovation with Open, Software-First Approach

On Day 2 of Intel Innovation, Intel illustrated how its efforts and investments to foster an open ecosystem catalyze community innovation, from silicon to systems to apps and across all levels of the software stack. Through an expanding array of platforms, tools and solutions, Intel is focused on helping developers become more productive and more capable of realizing their potential for positive social good. The company introduced new tools to support developers in artificial intelligence, security and quantum computing, and announced the first customers of its new Project Amber attestation service.

"We are making good on our software-first strategy by empowering an open ecosystem that will enable us to collectively and continuously innovate," said Intel Chief Technology Officer Greg Lavender. "We are committed members of the developer community and our breadth and depth of hardware and software assets facilitate the scaling of opportunities for all through co-innovation and collaboration."

Intel Data-Center GPU Flex Series "Arctic Sound-M" Launched: Visual Processing, Media, and Inference Top Applications

Intel today launched its Arctic Sound-M line of data-center GPUs. These are not positioned as HPC processors like "Ponte Vecchio," but as GPUs targeting cloud-compute providers, with their main applications being in the realm of visual processing, media, and AI inferencing. Their most interesting aspect has to be the silicon: the same 6 nm "ACM-G11" and "ACM-G10" chips powering the Arc "Alchemist" client graphics cards, based on the Xe-HPG architecture. Even more interesting are their typical board power values, ranging from 75 W to 150 W. The cards are built in the PCI-Express add-on card form-factor, with their cooling solutions optimized for rack airflow.

The marketing name for these cards is simply Intel Data Center GPU Flex, with two models on offer: the Data Center GPU Flex 140 and Flex 170. The Flex 170 is a full-sized add-on card based on the larger ACM-G10 silicon, which has 32 Xe Cores (4,096 unified shaders), whereas the Flex 140, interestingly, is a low-profile dual-GPU card with two smaller ACM-G11 chips, each with 8 Xe Cores (1,024 unified shaders). The two chips appear to share a PCIe bridge chip in the renders. Both models come with four Xe Media Engines packing AV1 hardware-accelerated encode, along with XMX AI acceleration, real-time ray tracing, and GDDR6 memory.

Intel Xe iGPUs and Arc Graphics Lack DirectX 9 Support, Rely on API Translation to Play Older Games

So you thought your Arc A380 graphics card, or the Gen12 Xe iGPU in your 12th Gen Core processor, was good enough to munch through your older games from the 2000s and early 2010s? Not so fast. Intel Graphics states that the Xe-LP and Xe-HPG graphics architectures, which power the Gen12 Iris Xe iGPUs and the new Arc "Alchemist" graphics cards, lack native support for the DirectX 9 graphics API. The two rely on API translation layers such as Microsoft's D3D9On12, which translates D3D9 API commands into D3D12 ones the driver can recognize.

Older graphics architectures, such as the Gen11 powering "Ice Lake" and the Gen9.5 found in all "Skylake" derivatives, feature native support for DirectX 9; however, when paired with Arc "Alchemist" graphics cards, the drivers are designed to engage D3D9On12 to accommodate the discrete GPU, unless the dGPU is disabled. API translation can be unreliable and buggy, and Intel points you to Microsoft and the game developers for support; Intel Graphics won't be providing any.
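Applications can also opt into the same translation layer explicitly. A minimal sketch, assuming a Windows SDK recent enough to ship d3d9on12.h:

```cpp
// Explicitly request the D3D9On12 translation layer, the same mechanism
// Intel's drivers engage implicitly on Xe/Arc. Error handling omitted.
#include <d3d9.h>
#include <d3d9on12.h>

IDirect3D9* CreateD3D9ViaTranslation()
{
    D3D9ON12_ARGS args = {};
    args.Enable9On12 = TRUE;  // route all D3D9 calls through D3D12
    // pD3D12Device stays null, so the layer creates its own D3D12 device
    // on the default adapter.
    return Direct3DCreate9On12(D3D_SDK_VERSION, &args, 1);
}
```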

ÆPIC Leak is an Architectural CPU Bug Affecting 10th, 11th, and 12th Gen Intel Core Processors

The x86 CPU family has been vulnerable to many attacks in recent years. With the arrival of Spectre and Meltdown, we have seen side-channel attacks overtake both AMD and Intel designs. However, today we find out that researchers are capable of exploiting Intel's latest 10th, 11th, and 12th generation Core processors with a new CPU bug called ÆPIC Leak. Named after the Advanced Programmable Interrupt Controller (APIC) that handles interrupt requests to regulate multiprocessing, the leak is claimed to be the first "CPU bug able to architecturally disclose sensitive data." Researchers Pietro Borrello (Sapienza University of Rome), Andreas Kogler (Graz University of Technology), Martin Schwarzl (Graz University of Technology), Moritz Lipp (Amazon Web Services), Daniel Gruss (Graz University of Technology), and Michael Schwarz (CISPA Helmholtz Center for Information Security) discovered this flaw in Intel processors.
ÆPIC Leak is the first CPU bug able to architecturally disclose sensitive data. It leverages a vulnerability in recent Intel CPUs to leak secrets from the processor itself: on most 10th, 11th and 12th generation Intel CPUs the APIC MMIO undefined range incorrectly returns stale data from the cache hierarchy. In contrast to transient execution attacks like Meltdown and Spectre, ÆPIC Leak is an architectural bug: the sensitive data gets directly disclosed without relying on any (noisy) side channel. ÆPIC Leak is like an uninitialized memory read in the CPU itself.

A privileged attacker (Administrator or root) is required to access APIC MMIO. Thus, most systems are safe from ÆPIC Leak. However, systems relying on SGX to protect data from privileged attackers would be at risk and thus have to be patched.
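To illustrate the access pattern involved, here is a conceptual sketch of how a root-privileged process on Linux could map the xAPIC MMIO page and read an architecturally undefined offset; the offset used is a hypothetical example, and the actual leaking ranges are documented in the researchers' paper. On many kernels, CONFIG_STRICT_DEVMEM blocks this mapping outright.

```cpp
// Conceptual sketch: privileged read of an undefined xAPIC MMIO offset.
// 0xFEE00000 is the xAPIC default base address.
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>

int main()
{
    int fd = open("/dev/mem", O_RDONLY);
    if (fd < 0) { perror("open /dev/mem (need root)"); return 1; }

    // Map the 4 KiB xAPIC register page.
    void* page = mmap(nullptr, 4096, PROT_READ, MAP_SHARED, fd, 0xFEE00000);
    if (page == MAP_FAILED) { perror("mmap"); return 1; }
    auto* apic = static_cast<volatile const uint8_t*>(page);

    // Offsets not architecturally defined as APIC registers should read
    // as zero; on affected CPUs they return stale data instead.
    // 0x3A0 is a reserved offset chosen purely for illustration.
    uint32_t value = *reinterpret_cast<volatile const uint32_t*>(apic + 0x3A0);
    std::printf("undefined APIC offset read: 0x%08x\n", value);

    munmap(page, 4096);
    close(fd);
    return 0;
}
```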

Intel Teams Up with Aible to Fast-Track Enterprise Analytics and AI

Intel's collaboration with Aible enables teams across key industries to leverage artificial intelligence and deliver rapid and measurable business impact. This deep collaboration, which includes engineering optimizations and an innovative benchmarking program, enhances Aible's ability to deliver rapid results to its enterprise customers. When paired with Intel processors, Aible's technology provides a serverless-first approach, allowing developers to build and run modern applications without having to manage servers, with increased agility and lower total cost of ownership (TCO).

"Today's enterprise IT infrastructure leaders face significant challenges building a foundation that is designed to help business teams drive value from AI initiatives in the data center. We've moved past talking about the potential of AI, as business teams across key industries are experiencing measurable business impact within days, using Intel Xeon Scalable processors with built-in Intel software optimizations with Aible," said Kavitha Prasad, Intel vice president and general manager of Datacenter, AI and Cloud Execution and Strategy.

Intel Unveils Arc Pro Graphics Cards for Workstations and Professional Software

Intel has today unveiled another addition to its discrete Arc Alchemist graphics card lineup, this time aimed at the professional market. Intel has prepared three models for creators and entry-level pro-vis solutions, called Intel Arc Pro graphics cards. All three GPUs feature AV1 hardware acceleration and ray tracing support, and are designed to handle AI acceleration inside applications like Adobe Premiere Pro. First up is the small Arc Pro A30M mobile GPU aimed at laptop designs. It offers 3.5 TeraFLOPs of FP32 compute inside a configurable 35-50 Watt TDP envelope, has eight ray tracing cores, and carries 4 GB of GDDR6 memory. Its display output connectors depend on the OEM's laptop design.

Next, we have the Arc Pro A40, a discrete single-slot GPU. It delivers 3.5 TeraFLOPs of FP32 single-precision performance and comes with eight ray tracing cores and 6 GB of GDDR6 memory. The listed maximum TDP for this model is 50 Watts. It has four mini-DP ports for video output and can drive two monitors at 8K 60 Hz, one at 5K 240 Hz, two at 5K 120 Hz, or four at 4K 60 Hz. Its bigger brother, the Arc Pro A50, is a dual-slot design with 4.8 TeraFLOPs of single-precision FP32 compute, eight ray tracing cores, and 6 GB of GDDR6 memory as well. It has the same video output capability as the Arc Pro A40, with a beefier cooling setup to handle the 75 Watt TDP. All software developed using the oneAPI toolkit can be accelerated using these GPUs, and Intel is working with the industry to adapt professional software for Arc Pro graphics.

KIOXIA Introduces Sample PCIe NVMe Technology-Based Flash Hardware for SEF

Supporting the Linux Foundation's Software-Enabled Flash open-source project, KIOXIA America, Inc. today announced innovative new software-defined technology and sample hardware based on PCIe and NVMe technology. This technology fully uncouples flash storage from legacy HDD protocols, allowing flash to realize its full capability and potential as a storage medium. KIOXIA will highlight Software-Enabled Flash at this week's Flash Memory Summit Conference & Expo at its booth #307 on the show floor and present the session, "NVMe Software-Enabled Flash Storage for Hyperscale Data Centers," at the Santa Clara Convention Center.

To reach efficiency at scale, hyperscale cloud storage needs more from flash storage devices, which are currently based on hard disk drive protocols created decades ago. To resolve this, the Linux Foundation's Software-Enabled Flash Community Project will enable industry adoption of a software-defined flash API, giving developers the ability to customize flash storage to specific data center, application, and workload requirements. The project was created to benefit the storage developer community with a vendor-agnostic, flexible solution that meets the evolving requirements of the modern data center.

Supermicro Launches Multi-GPU Cloud Gaming Solutions Based on Intel Arctic Sound-M

Super Micro Computer, Inc., a global leader in enterprise computing, storage, networking, and green computing technology, is announcing upcoming Total IT Solutions for Android cloud gaming and media processing & delivery. These new solutions will incorporate the Intel Data Center GPU, codenamed Arctic Sound-M, and will be supported on several Supermicro servers. Supermicro solutions that will contain the new GPU include the 4U 10x GPU server for transcoding and media delivery; the Supermicro BigTwin system, with up to eight of the GPUs in 2U, for media processing applications; the Supermicro CloudDC server for edge AI inferencing; and the Supermicro 2U 2-Node server, with three GPUs per node, optimized for cloud gaming. Additional systems will be made available later this year.

"Supermicro will extend our media processing solutions by incorporating the Intel Data Center GPU," said Charles Liang, President, and CEO, Supermicro. "The new solutions will increase video stream rates and enable lower latency Android cloud gaming. As a result, Android cloud gaming performance and interactivity will increase dramatically with the Supermicro BigTwin systems, while media delivery and transcoding will show dramatic improvements with the new Intel Data Center GPUs. The solutions will expand our market-leading accelerated computing offerings, including everything from Media Processing & Delivery to Collaboration, and HPC."

AMD Software Adrenalin 22.7.1 Released, Includes OpenGL Performance Boost and AI Noise-Suppression

AMD on Tuesday released the AMD Software Adrenalin 22.7.1 drivers, which include several major updates to the feature-set. To begin with, AMD has significantly updated its OpenGL ICD (installable client driver), which can deliver up to a 79 percent increase in frame rates at 4K with "Fabulous" settings, as measured on the flagship RX 6950 XT, and up to 75 percent, as measured on the entry-level RX 6400. Also debuting is AMD Noise Suppression, a new feature that cleans up your voice calls and in-game voice chat. The software leverages AI to filter out background noise that isn't identified as the prominent foreground speech. Radeon Super Resolution support has been extended to RX 5000 series and RX 6000 series GPUs running on Ryzen processor notebooks with Hybrid graphics setups.

Besides these, Adrenalin 22.7.1 adds optimization for "Swordsman Remake," and support for Radeon Boost plus VRS with "Elden Ring," "Resident Evil VIII," and "Valorant." The drivers improve support for the Windows 11 22H2 Update, and Agility SDK 1.602 and 1.607. A few more Vulkan API extensions are added with this release. Among the handful of issues fixed are lower-than-expected F@H performance on the RX 6000 series; Auto Undervolt disabling idle-fan-stop; "Hitman 3" freezing when switching between windows in exclusive fullscreen mode; blurry web video upscaling on certain RX 6000 series cards; and Enhanced Sync locking frame rates to 15 FPS during video playback on extended monitors.

DOWNLOAD: AMD Software Adrenalin 22.7.1

Intel Releases Open Source AI Reference Kits

Intel has released the first set of open source AI reference kits specifically designed to make AI more accessible to organizations in on-prem, cloud and edge environments. First introduced at Intel Vision, the reference kits include AI model code, end-to-end machine learning pipeline instructions, libraries and Intel oneAPI components for cross-architecture performance. These kits enable data scientists and developers to learn how to deploy AI faster and more easily across healthcare, manufacturing, retail and other industries with higher accuracy, better performance and lower total cost of implementation.

"Innovation thrives in an open, democratized environment. The Intel accelerated open AI software ecosystem including optimized popular frameworks and Intel's AI tools are built on the foundation of an open, standards-based, unified oneAPI programming model. These reference kits, built with components of Intel's end-to-end AI software portfolio, will enable millions of developers and data scientists to introduce AI quickly and easily into their applications or boost their existing intelligent solutions."

AMD WMMA Instruction is Direct Response to NVIDIA Tensor Cores

AMD's RDNA3 graphics IP is just around the corner, and we are hearing more information about the upcoming architecture. Historically, as GPUs advance, it is not unusual for companies to add dedicated hardware blocks to accelerate specific tasks. Today, AMD engineers updated the backend of the LLVM compiler to include a new instruction called Wave Matrix Multiply-Accumulate (WMMA). This instruction will be present on GFX11, which is the RDNA3 GPU architecture. With WMMA, AMD will offer support for processing 16x16x16 tensors in FP16 and BF16 precision formats. With these instructions, AMD is adding new arrangements to support matrix multiply-accumulate operations, closely mimicking what NVIDIA does with its Tensor Cores.

The AMD ROCm 5.2 API update lists the use case for this type of instruction, which you can see below:
rocWMMA provides a C++ API to facilitate breaking down matrix multiply accumulate problems into fragments and using them in block-wise operations that are distributed in parallel across GPU wavefronts. The API is a header library of GPU device code, meaning matrix core acceleration may be compiled directly into your kernel device code. This can benefit from compiler optimization in the generation of kernel assembly and does not incur additional overhead costs of linking to external runtime libraries or having to launch separate kernels.

rocWMMA is released as a header library and includes test and sample projects to validate and illustrate example usages of the C++ API. GEMM matrix multiplication is used as primary validation given the heavy precedent for the library. However, the usage portfolio is growing significantly and demonstrates different ways rocWMMA may be consumed.
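To make the quoted description concrete, here is a minimal HIP C++ sketch of a single 16x16x16 tile multiply using the rocWMMA fragment API, following the patterns in rocWMMA's public samples; host-side allocation and kernel-launch boilerplate are omitted, and the kernel assumes one wavefront processes the tile.

```cpp
// One 16x16x16 FP16 matrix multiply-accumulate tile via rocWMMA fragments.
#include <hip/hip_runtime.h>
#include <rocwmma/rocwmma.hpp>

using rocwmma::float16_t;

__global__ void wmma_tile(const float16_t* a, const float16_t* b, float* c)
{
    // Per-wavefront fragments for A, B, and the FP32 accumulator.
    rocwmma::fragment<rocwmma::matrix_a, 16, 16, 16, float16_t,
                      rocwmma::row_major> fragA;
    rocwmma::fragment<rocwmma::matrix_b, 16, 16, 16, float16_t,
                      rocwmma::col_major> fragB;
    rocwmma::fragment<rocwmma::accumulator, 16, 16, 16, float> fragAcc;

    rocwmma::fill_fragment(fragAcc, 0.0f);
    rocwmma::load_matrix_sync(fragA, a, 16);   // leading dimension 16
    rocwmma::load_matrix_sync(fragB, b, 16);
    rocwmma::mma_sync(fragAcc, fragA, fragB, fragAcc);  // maps to WMMA
    rocwmma::store_matrix_sync(c, fragAcc, 16, rocwmma::mem_row_major);
}
```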

Intel Arc A370M Graphics Card Tested in Various Graphics Rendering Scenarios

Intel's Arc Alchemist graphics cards launched in the laptop/mobile space, and everyone is wondering just how well the first generation of discrete graphics performs in actual, GPU-accelerated workloads. Tellusim Technologies, a software company located in San Diego, has managed to get ahold of a laptop featuring an Intel Arc A370M mobile graphics card and benchmark it against competing solutions. Instead of the Vulkan API, the team decided to use the D3D12 API for the tests, as Vulkan usually produces lower results on the new Gen12 graphics. With driver version 30.0.101.1736, the GPU was tested with standard geometry workloads measured in triangles and batches. Meshlet size is set to 69/169, and the job is as big as 262K meshlets. The total amount of geometry is 20 million vertices and 40 million triangles per frame.

Using tests such as Single DIP (drawing 81 instances with u32 indices without going down to the meshlet level), Mesh Indexing (Mesh Shader emulation), MDI/ICB (Multi-Draw Indirect or Indirect Command Buffer), Mesh Shader (Mesh Shaders rendering mode), and Compute Shader (Compute Shader rasterization), the Arc GPU produced some exciting numbers, measured in millions or billions of triangles. Below, you can see the results of these tests.
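As a side note, whether a D3D12 device can run the "Mesh Shader" mode at all is a one-call capability query. A minimal sketch, assuming an already-created device:

```cpp
// Query D3D12 mesh shader support via the OPTIONS7 feature structure.
#include <d3d12.h>

bool SupportsMeshShaders(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS7 opts7 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS7,
                                           &opts7, sizeof(opts7))))
        return false;  // older runtime: no OPTIONS7, no mesh shaders
    return opts7.MeshShaderTier != D3D12_MESH_SHADER_TIER_NOT_SUPPORTED;
}
```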

Intel Arc Alchemist GPUs Get Vulkan 1.3 Compatibility

Part of the process of building a graphics card is designing compatibility with the latest graphics APIs like DirectX, OpenGL, and Vulkan. Today, we have confirmation that Intel's Arc Alchemist discrete graphics cards will be compatible with Vulkan's latest iteration, version 1.3. In January, the Khronos Group, the consortium behind the Vulkan API, released its regular two-year update to the standard. Graphics card vendors like NVIDIA and AMD announced driver support immediately. Today, the Khronos website officially lists Intel Arc Alchemist mobile graphics cards as compatible with Vulkan 1.3, covering the Intel Arc A770M, A730M, A550M, A370M, and A350M GPUs.

At the time of writing, there is no official announcement for the desktop cards yet. However, given that the mobile SKUs are supporting the latest standard, it is extremely likely that the desktop variants will also carry the same level of support.
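Checking what an installed driver actually reports is straightforward; a minimal sketch that prints the loader version and each physical device's advertised Vulkan version:

```cpp
// Enumerate Vulkan devices and print their advertised API versions.
#include <vulkan/vulkan.h>
#include <cstdio>
#include <vector>

int main()
{
    uint32_t loaderVersion = 0;
    vkEnumerateInstanceVersion(&loaderVersion);
    std::printf("loader: %u.%u\n", VK_API_VERSION_MAJOR(loaderVersion),
                VK_API_VERSION_MINOR(loaderVersion));

    VkApplicationInfo app{VK_STRUCTURE_TYPE_APPLICATION_INFO};
    app.apiVersion = VK_API_VERSION_1_3;
    VkInstanceCreateInfo ici{VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO};
    ici.pApplicationInfo = &app;
    VkInstance instance;
    if (vkCreateInstance(&ici, nullptr, &instance) != VK_SUCCESS) return 1;

    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, nullptr);
    std::vector<VkPhysicalDevice> gpus(count);
    vkEnumeratePhysicalDevices(instance, &count, gpus.data());
    for (VkPhysicalDevice gpu : gpus) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(gpu, &props);
        std::printf("%s: Vulkan %u.%u\n", props.deviceName,
                    VK_API_VERSION_MAJOR(props.apiVersion),
                    VK_API_VERSION_MINOR(props.apiVersion));
    }
    vkDestroyInstance(instance, nullptr);
    return 0;
}
```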

Intel Releases OpenVINO 2022.1 to Advance AI Inferencing for Developers

Since OpenVINO launched in 2018, Intel has enabled hundreds of thousands of developers to dramatically accelerate AI inferencing performance, starting at the edge and extending to the enterprise and the client. Today, ahead of MWC Barcelona 2022, the company launched a new version of the Intel Distribution of OpenVINO Toolkit. New features are built upon three-and-a-half years of developer feedback and include a greater selection of deep learning models, more device portability choices and higher inferencing performance with fewer code changes.

"The latest release of OpenVINO 2022.1 builds on more than three years of learnings from hundreds of thousands of developers to simplify and automate optimizations. The latest upgrade adds hardware auto-discovery and automatic optimization, so software developers can achieve optimal performance on every platform. This software plus Intel silicon enables a significant AI ROI advantage and is deployed easily into the Intel-based solutions in your network," said Adam Burns, vice president, OpenVINO Developer Tools in the Network and Edge Group.

Intel Updates Technology Roadmap with Data Center Processors and Game Streaming Service

At Intel's 2022 Investor Meeting, Chief Executive Officer Pat Gelsinger and Intel's business leaders outlined key elements of the company's strategy and path for long-term growth. Intel's long-term plans will capitalize on transformative growth during an era of unprecedented demand for semiconductors. Among the presentations, Intel announced product roadmaps and key execution milestones across its major business units: Accelerated Computing Systems and Graphics; Intel Foundry Services; Software and Advanced Technology; Network and Edge; and Technology Development. For more from Intel's Investor Meeting 2022, including the presentations and news, visit the Intel Newsroom and Intel.com's Investor Meeting site.

Intel Adds Experimental Mesh Shader Support in DG2 GPU Vulkan Linux Drivers

Mesh shaders are a relatively new concept for a programmable geometric shading pipeline, one that promises to simplify the whole graphics rendering pipeline organization. NVIDIA introduced the concept with Turing back in 2018, and AMD joined with RDNA2. Today, thanks to findings reported by Phoronix, we have learned that Intel's DG2 GPU will carry support for mesh shaders and bring it to the Vulkan API. For starters, the mesh/task pipeline is much simpler than the traditional graphics rendering pipeline and offers higher scalability, reduced bandwidth, and greater flexibility in the design of mesh topology and graphics work. In Vulkan, the current mesh shader support is NVIDIA's contribution, the VK_NV_mesh_shader extension. The docs below explain it in greater detail:
Vulkan API documentation: This extension provides a new mechanism allowing applications to generate collections of geometric primitives via programmable mesh shading. It is an alternative to the existing programmable primitive shading pipeline, which relied on generating input primitives by a fixed function assembler as well as fixed function vertex fetch.

There are new programmable shader types—the task and mesh shader—to generate these collections to be processed by fixed-function primitive assembly and rasterization logic. When task and mesh shaders are dispatched, they replace the core pre-rasterization stages, including vertex array attribute fetching, vertex shader processing, tessellation, and geometry shader processing.
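An application can detect this extension and its feature bits at device-selection time. A minimal sketch, assuming a valid VkPhysicalDevice from an existing instance:

```cpp
// Detect VK_NV_mesh_shader and query its task/mesh feature bits.
#include <vulkan/vulkan.h>
#include <cstring>
#include <vector>

bool HasMeshShaders(VkPhysicalDevice gpu)
{
    uint32_t count = 0;
    vkEnumerateDeviceExtensionProperties(gpu, nullptr, &count, nullptr);
    std::vector<VkExtensionProperties> exts(count);
    vkEnumerateDeviceExtensionProperties(gpu, nullptr, &count, exts.data());

    bool found = false;
    for (const VkExtensionProperties& e : exts)
        if (std::strcmp(e.extensionName,
                        VK_NV_MESH_SHADER_EXTENSION_NAME) == 0)
            found = true;
    if (!found) return false;

    // The extension may be present while individual stages stay disabled,
    // so also check the reported feature bits.
    VkPhysicalDeviceMeshShaderFeaturesNV meshFeatures{
        VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_MESH_SHADER_FEATURES_NV};
    VkPhysicalDeviceFeatures2 features2{
        VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_FEATURES_2};
    features2.pNext = &meshFeatures;
    vkGetPhysicalDeviceFeatures2(gpu, &features2);
    return meshFeatures.taskShader && meshFeatures.meshShader;
}
```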

Researchers Exploit GPU Fingerprinting to Track Users Online

Online tracking of users happens when third-party services collect information about various people and use it to help identify them among the sea of other online users. This collection of specific information is often called "fingerprinting," and attackers usually exploit it to gain user information. Today, researchers announced that they managed to use WebGL (Web Graphics Library) to their advantage and create a unique fingerprint for every GPU out there to track users online. The exploit works because every piece of silicon has its own variations and unique characteristics from manufacturing, just like each human has a unique fingerprint. Even among identical processor models, silicon differences make each product distinct. That is why not every processor can be overclocked to the same frequency, and why binning exists.

What would happen if someone were to precisely explore the differences between GPUs and use those differences to identify online users? This is exactly what the researchers who created DrawnApart thought of. Using WebGL, they run a GPU workload that takes more than 176 measurements across 16 data-collection points. This is done using vertex operations in GLSL (OpenGL Shading Language), where workloads are kept from being randomly distributed across the GPU's execution units. DrawnApart can measure and record the time to complete vertex renders, record the exact route that the rendering took, handle stall functions, and much more. This enables the framework to produce unique data combinations that are turned into GPU fingerprints, which can be exploited online. Below you can see the data trace recordings of two GPUs (same model) showing variations.
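DrawnApart itself runs inside the browser, but the timing idea translates directly to native code. A conceptual C++/OpenGL sketch of the measurement loop, assuming an existing OpenGL 3.3+ context with a shader program and VAO already bound (not shown); the sample count of 176 simply mirrors the figure above:

```cpp
// Time repeated small vertex workloads with GL timer queries; the raw
// nanosecond trace varies per physical chip and feeds the fingerprint.
#include <GL/glew.h>
#include <array>

std::array<GLuint64, 176> CollectTimingTrace()
{
    std::array<GLuint64, 176> trace{};
    GLuint query;
    glGenQueries(1, &query);
    for (GLuint64& sample : trace) {
        glBeginQuery(GL_TIME_ELAPSED, query);
        glDrawArrays(GL_POINTS, 0, 1024);  // tiny fixed vertex workload
        glEndQuery(GL_TIME_ELAPSED);
        // Blocks until the GPU result is ready, then stores nanoseconds.
        glGetQueryObjectui64v(query, GL_QUERY_RESULT, &sample);
    }
    glDeleteQueries(1, &query);
    return trace;
}
```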

Intel Releases oneAPI 2022 Toolkits to Developers

Intel today released oneAPI 2022 toolkits. Newly enhanced toolkits expand cross-architecture features to provide developers greater utility and architectural choice to accelerate computing. "I am impressed by the breadth of more than 900 technical improvements that the oneAPI software engineering team has done to accelerate development time and performance for critical application workloads across Intel's client and server CPUs and GPUs. The rich set of oneAPI technologies conforms to key industry standards, with deep technical innovations that enable applications developers to obtain the best possible run-time performance from the cloud to the edge. Multi-language support and cross-architecture performance acceleration are ready today in our oneAPI 2022 release to further enable programmer productivity on Intel platforms," said Greg Lavender, Intel chief technology officer, senior vice president and general manager of the Software and Advanced Technology Group.

New capabilities include the world's first unified compiler implementing C++, SYCL and Fortran, data parallel Python for CPUs and GPUs, advanced accelerator performance modeling and tuning, and performance acceleration for AI and ray tracing visualization workloads. The oneAPI cross-architecture programming model provides developers with tools that aim to improve the productivity and velocity of code development when building cross-architecture applications.
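The cross-architecture model at the heart of oneAPI is DPC++/SYCL: one kernel source targeting whatever device is available. A minimal sketch (compiled with icpx -fsycl from the oneAPI toolkit):

```cpp
// One SYCL kernel, dispatched to whichever device the default selector
// finds: CPU, GPU, or another accelerator.
#include <sycl/sycl.hpp>
#include <cstdio>

int main()
{
    sycl::queue q;  // default selector picks the preferred device
    std::printf("device: %s\n",
                q.get_device().get_info<sycl::info::device::name>().c_str());

    constexpr size_t N = 1024;
    float* data = sycl::malloc_shared<float>(N, q);  // USM shared memory
    q.parallel_for(sycl::range<1>(N), [=](sycl::id<1> i) {
        data[i] = static_cast<float>(i) * 2.0f;
    }).wait();

    std::printf("data[42] = %f\n", data[42]);
    sycl::free(data, q);
    return 0;
}
```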

Intel Disables DirectX 12 API Loading on Haswell Processors

Intel's fourth-generation Core processors, codenamed Haswell, are subject to a new security exploit. According to the company, a vulnerability exists inside the graphics controller of 4th generation Haswell processors that is triggered when the DirectX 12 API is loaded. Intel's fix is to disable the API outright. Starting with Intel graphics driver 15.40.44.5107, applications that run exclusively on the DirectX 12 API no longer work with the following Intel Graphics Controllers: Intel Iris Pro Graphics 5200/5100, HD Graphics 5000/4600/4400/4200, and Intel Pentium and Celeron Processors with Intel HD Graphics based on 4th Generation Intel Core.

"A potential security vulnerability in Intel Graphics may allow escalation of privilege on 4th Generation Intel Core processors. Intel has released a software update to mitigate this potential vulnerability. In order to mitigate the vulnerability, DirectX 12 capabilities were deprecated." says the Intel page. If a user with a Haswell processor has a specific need to run the DirectX 12 application, they can downgrade their graphics driver to version 15.40.42.5063 or older.

SiPearl Partners With Intel to Deliver Exascale Supercomputer in Europe

SiPearl, the designer of the high-performance, low-power microprocessor that will be the heart of European supercomputers, has entered into a partnership with Intel to create a joint offering dedicated to the first exascale supercomputers in Europe. This partnership will offer their European customers the possibility of combining Rhea, the high-performance, low-power microprocessor developed by SiPearl, with Intel's Ponte Vecchio accelerator, creating a high-performance computing node that will promote the deployment of exascale supercomputing in Europe.

To enable this powerful combination, SiPearl plans to adopt oneAPI, the open, unified programming interface created by Intel, and optimize it for its Rhea microprocessor. Using this single solution across the entire heterogeneous compute node, consisting of Rhea and Ponte Vecchio, will increase developer productivity and application performance.

Linux Foundation to Form New Open 3D Foundation

The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced an intent to form the Open 3D Foundation to accelerate developer collaboration on 3D game and simulation technology. The Open 3D Foundation will support open source projects that advance capabilities related to 3D graphics, rendering, authoring, and development. As the first project governed by the new foundation, Amazon Web Services, Inc. (AWS) is contributing an updated version of the Amazon Lumberyard game engine as the Open 3D Engine (O3DE), under the permissive Apache 2.0 license. The Open 3D Engine enables developers and content creators to build 3D experiences unencumbered by commercial terms and will provide the support and infrastructure of an open source community through forums, code repositories, and developer events. A developer preview of O3DE is available on GitHub today. For more information and/or to contribute, please visit: https://o3de.org

3D engines are used to create a range of virtual experiences, including games and simulations, by providing capabilities such as 3D rendering, content authoring tools, animation, physics systems, and asset processing. Many developers are seeking ways to build their intellectual property on top of an open source engine whose roadmap is highly visible, openly governed, and collaborative with the community as a whole. More developers want to create or augment their current technological foundations with highly collaborative solutions that can be used in any development environment. O3DE introduces a new ecosystem for developers and content creators to innovate, build, share, and distribute immersive 3D worlds that will inspire their users with rich experiences that bring the imaginations of their creators to life.

Intel Ponte Vecchio GPU Scores Another Win at Leibniz Supercomputing Centre

Today, Lenovo, in partnership with Intel, announced that Leibniz Supercomputing Centre (LRZ) is building a supercomputer powered by Intel's next-generation technologies. Specifically, the supercomputer will use Intel's Sapphire Rapids CPUs in combination with the highly-teased Ponte Vecchio GPUs to power the applications running at Leibniz Supercomputing Centre. Alongside the processors, LRZ will also deploy Intel Optane persistent memory to process the huge amounts of data LRZ has produced and continues to produce. The integration of HPC and AI processing will be enabled by the expansion of LRZ's current supercomputer, SuperMUC-NG, which will receive an upgrade in 2022 featuring both Sapphire Rapids and Ponte Vecchio.

Raja Koduri, Intel's graphics chief, teased on Twitter that this installation will combine Sapphire Rapids, Ponte Vecchio, Optane, and oneAPI in one machine. The system will use over one petabyte of Distributed Asynchronous Object Storage (DAOS) based on Optane technologies. Koduri also shared some Ponte Vecchio eye candy: a GIF of tiles combining to form a GPU, which you can check out here. You can also see some pictures of Ponte Vecchio below.