News Posts matching #Linux


IBM z16 and LinuxONE 4 Get Single Frame and Rack Mount Options

IBM today unveiled new single frame and rack mount configurations of IBM z16 and IBM LinuxONE 4, expanding their capabilities to a broader range of data center environments. Based on IBM's Telum processor, the new options are designed with sustainability in mind for highly efficient data centers, helping clients adapt to a digitized economy and ongoing global uncertainty.

Introduced in April 2022, the IBM z16 multi frame has helped transform industries with real-time AI inferencing at scale and quantum-safe cryptography. IBM LinuxONE Emperor 4, launched in September 2022, features capabilities that can reduce both energy consumption and data center floor space while delivering the scale, performance and security that clients need. The new single frame and rack mount configurations expand client infrastructure choices and help bring these benefits to data center environments where space, sustainability and standardization are paramount.

Kubuntu Focus Team Announces New XE Gen 2 Linux Laptop

The Kubuntu Focus Team announces the second generation of the powerful Focus XE laptop. This ultra-portable and affordable laptop is a great choice for developers, creators, and those who are looking for the best out-of-the-box Linux experience but don't need the power, complexity, or expense of a dedicated GPU.

This generation features the i7-1260P CPU, which provides a 16% boost in single-core and a 60% boost in multi-core Geekbench 5 scores. In real life, this translates into very snappy performance and the ability to handle large, multi-process tasks with speed and ease. Other highlights of this laptop are the numerous high-speed audio and data ports, including Thunderbolt 4, and the capacity to attach multiple 4K displays. Customers can tailor their system with up to 64 GB of high-speed 3200 MHz dual-channel RAM, up to 2 TB of 7,450 MB/s NVMe storage, and optional no-cost disk encryption. The laptops are shipping now, and the base model starts at $895.

Qualcomm Expands Connected Intelligent Edge Ecosystem Through Groundbreaking IoT and Robotics Products

Qualcomm Technologies, Inc. today announced the world's first integrated 5G IoT processors that are designed to support four major operating systems, in addition to two new robotics platforms, and an accelerator program for IoT ecosystem partners. These new innovations will empower manufacturers participating in the rapidly expanding world of devices at the connected intelligent edge.

The need for connected, intelligent, and autonomous devices is growing rapidly, and it is expected to hit $116 billion by 2030 according to Precedence Research. Businesses attempting to compete in this fast-moving economy need a reliable source of control and connectivity technology for their IoT and robotic devices. Qualcomm Technologies, which has shipped over 350 million dedicated IoT chipsets, is uniquely capable of providing manufacturers with the platforms needed to address this expanding segment.

OnLogic Launches Helix 401 Compact Fanless Computer

In response to the increasing demand for powerful computing that can be relied on in even the most challenging installation environments, leading industrial computing manufacturer and solution provider OnLogic (www.onlogic.com) has unveiled its Helix 401 fanless industrial computer. The compact device is designed for use in edge computing, Industry 4.0, Internet of Things (IoT), and many other emerging applications, and will make its public debut at Embedded World 2023.

"You may never see them, but industrial computers are everywhere, working diligently to power technology solutions of every shape and size. These systems need to be small and reliable while still being just as powerful as high-end desktop machines," says Mike Walsh, Senior Product Manager at OnLogic. "The Helix 401 balances size and performance while providing a wide range of configuration possibilities to help users tailor it to their specific application. It's small enough to fit in your hand, similar in size to Intel's popular NUC, but capable enough to drive advanced automation solutions and power the next great smart agriculture, building automation or energy management innovation."

SAM/ReBAR Stripped Out of AMD Open-Source OpenGL Driver RadeonSI Gallium3D

Support for AMD's Smart Access Memory and the overarching Resizable BAR technology has been removed from the RadeonSI Gallium3D OpenGL driver as of today's Mesa 22.3.7 release. The comment in the announcement simply reads, "Disable Smart Access Memory because CPU access has large overhead." The nail in the coffin seems to have been a bug ticket submitted last month for the game Hyperdimension Neptunia Re;Birth1, in which the user reported the game running oddly slow on their RX 6600, while previously they had no issues on the much older R9 380. The solution provided was to simply disable ReBAR/SAM, either with radeonsi_disable_sam=true or via UEFI. In the comments of the ticket, lead RadeonSI developer Marek Olšák states, "We've never tested SAM with radeonsi, and it's not necessary there."
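As a sketch of the workaround mentioned in the ticket, the driconf option can be set in an application's environment before launch. The option name is taken from the article; the launched binary is a placeholder, and this assumes a Linux system running Mesa's RadeonSI driver:

```python
import os
import subprocess

# Copy of the current environment with the Mesa driconf option set.
# "radeonsi_disable_sam" is the option named in the bug ticket.
env = dict(os.environ, radeonsi_disable_sam="true")

def launch_without_sam(argv):
    """Run an OpenGL application with SAM/ReBAR disabled in RadeonSI."""
    return subprocess.run(argv, env=env)

# Usage (on a system with Mesa and an OpenGL app such as glxgears installed):
# launch_without_sam(["glxgears"])
```

The same effect can be had system-wide by exporting the variable in a shell profile, or per-game via a launcher's environment settings.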

Apparently the performance advantages weren't panning out for RadeonSI, and since direct optimization of these features was not a primary goal, the decision was made to cut them out. Attempts to optimize SAM with RadeonSI date as far back as December 2020 and Mesa 21.0, but support for SAM under Linux goes back further still. None of the changes affect other drivers such as RADV, the open-source Radeon Vulkan driver; this code change is limited to the RadeonSI OpenGL driver.

Magewell Expands Eco Capture Family of Ultra-Compact, Power-Efficient M.2 Capture Cards

Magewell has unveiled a new model in its Eco Capture family of ultra-compact, power-efficient, M.2 video capture cards. The new single-channel Eco Capture AIO M.2 provides both HDMI and SDI interfaces with embedded audio support for flexible input connectivity. Magewell will highlight the Eco Capture AIO M.2 and other new innovations in booth C5031 at the 2023 NAB Show in Las Vegas from April 16 to 19.

Magewell's Eco Capture cards offer systems integrators and OEM developers a high-performance video capture solution with low power consumption in a space-efficient form factor. The cost-effective, low-latency devices feature a high-speed PCIe 2.0 bus interface with an M.2 connector and measure just 22x80mm (0.87x3.15 in), making them ideal for incorporation into small, portable or embedded systems where full-sized PCIe slots are not available.

Primate Labs Launches Geekbench 6 with Modern Data Sets

Geekbench 6, the latest version of the best cross-platform benchmark, has arrived and is loaded with new and improved workloads to measure the performance of your CPUs and GPUs. Geekbench 6 is available for download today for Android, iOS, Windows, macOS, and Linux.

A lot has changed in the tech world in the past three years. Smartphone cameras take bigger and better pictures. Artificial intelligence, especially machine learning, has become ubiquitous in general and mobile applications. The number of cores in computers and mobile devices continues to rise. And how we interact with our computers and mobile devices has changed dramatically - who would have guessed that video conferencing would suddenly surge in 2020?

MediaTek Expands IoT Platform with Genio 700 for Industrial and Smart Home Products

Ahead of CES 2023, MediaTek today announced the latest chipset in the Genio platform for IoT devices, the octa-core Genio 700, designed for smart home, smart retail, and industrial IoT products. The new chipset will be featured as part of a demo at MediaTek's booth at CES 2023. With a focus on power efficiency, the MediaTek Genio 700 is an N6 (6 nm) IoT chipset that boasts two Arm Cortex-A78 cores running at 2.2 GHz and six Arm Cortex-A55 cores at 2.0 GHz, while providing a 4 TOPS AI accelerator. It comes with support for FHD 60p and 4K 60p displays, as well as an ISP for better images.

"When we launched the Genio family of IoT products last year, we designed the platform with the scalability and development support that brands need, paving the way for opportunities to continue expanding," said Richard Lu, Vice President of MediaTek's IoT Business Unit. "With a focus on industrial and smart home products, the Genio 700 is a natural addition to the lineup, ensuring we can provide the widest range of support possible to our customers."

ASUS Announces AMD EPYC 9004-Powered Rack Servers and Liquid-Cooling Solutions

ASUS, a leading provider of server systems, server motherboards and workstations, today announced new best-in-class server solutions powered by the latest AMD EPYC 9004 Series processors. ASUS also launched superior liquid-cooling solutions that dramatically improve the data-center power-usage effectiveness (PUE).

The breakthrough thermal design in this new generation delivers superior power and thermal capabilities to support class-leading features, including up to 400-watt CPUs, up to 350-watt GPUs, and 400 Gbps networking. All ASUS liquid-cooling solutions will be demonstrated in the ASUS booth (number 3816) at SC22 from November 14-17, 2022, at Kay Bailey Hutchison Convention Center in Dallas, Texas.

Andes Technology Unveils The AndesCore AX60 Series, An Out-Of-Order Superscalar Multicore RISC-V Processor Family

Today, at the Linley Fall Processor Conference 2022, Andes Technology, a leading provider of high-efficiency, low-power 32/64-bit RISC-V processor cores and a founding premier member of RISC-V International, revealed its top-of-the-line AndesCore AX60 series of power- and area-efficient out-of-order 64-bit processors. The family of processors is intended to run heavy-duty OSes and applications with compute-intensive requirements, such as advanced driver-assistance systems (ADAS), artificial intelligence (AI), augmented/virtual reality (AR/VR), datacenter accelerators, 5G infrastructure, high-speed networking, and enterprise storage.

The first member of the AX60 series, the AX65, supports the latest RISC-V architecture extensions, such as the scalar cryptography and bit manipulation extensions. It is a 4-way superscalar design with out-of-order (OoO) execution in a 13-stage pipeline. It fetches 4 to 8 instructions per cycle, guided by a highly accurate TAGE branch predictor with loop prediction to ensure fetch efficiency. It then decodes, renames, and dispatches up to 4 instructions into 8 execution units, including 4 integer units, 2 full load/store units, and 2 floating-point units. Beyond the load/store units, the AX65's aggressive memory subsystem also includes split 2-level TLBs with multiple concurrent table walkers and support for up to 64 outstanding load/store instructions.

Axiomtek Launches New DIN-rail Cybersecurity Gateway for OT Cybersecurity and Secured Edge - iNA200

Axiomtek - a world-renowned leader devoted to the research, development, and manufacture of innovative, reliable, and highly efficient industrial computer products - is pleased to announce the iNA200, a DIN-rail cybersecurity gateway for operational technology (OT) network security. The iNA200 is powered by the Intel Atom x6212RE or x6414RE processor (Elkhart Lake) and has one DDR4-3200 SO-DIMM slot for up to 32 GB of system memory. For demanding rugged environments, this fanless IIoT edge gateway offers a wide operating temperature range of -40°C to 70°C and supports a wide 9 to 36 VDC power input with dual power inputs. The iNA200 also has two 2.5G LAN ports, sufficient storage, and high expandability for various industrial application needs.

"OT cybersecurity is essential for Industry 4.0. Axiomtek's iNA200 is designed to safeguard your OT assets and avoid network threats for critical infrastructure," said Kevin Hsiao, a product manager of the Network Computing Platform Division at Axiomtek. "Additionally, our iNA200 features an M.2 Key B slot to enable 5G connectivity for next-generation industrial use cases. With Trusted Platform Module 2.0 (TPM 2.0) support, this cybersecurity gateway increases security, offering hardware-level protection against malware and sophisticated cyber-attacks."

Basemark Debuts a Unique Benchmark for Comparisons Between Android, iOS, Linux, MacOS and Windows Devices

Basemark today launched GPUScore Sacred Path, the world's only cross-platform GPU benchmark that includes the latest GPU technologies, such as Variable Rate Shading (VRS). Sacred Path supports all the relevant device categories - ranging from premium mobile phones to high-end gaming PCs and discrete graphics cards - with full support for the major operating systems: Android, iOS, Linux, macOS, and Windows.

This benchmark is of great importance for application vendors, device manufacturers, GPU vendors and IT Media. Game developers need a thorough understanding of performance across the device range to optimize the use of the same assets across a maximum device range. GPU vendors and device manufacturers can compare their products with competitor products, which allows them to develop new product ranges with the correct targeting. In addition, Sacred Path is a true asset for media reviewing any GPU-equipped devices.

Intel Accelerates Developer Innovation with Open, Software-First Approach

On Day 2 of Intel Innovation, Intel illustrated how its efforts and investments to foster an open ecosystem catalyze community innovation, from silicon to systems to apps and across all levels of the software stack. Through an expanding array of platforms, tools and solutions, Intel is focused on helping developers become more productive and more capable of realizing their potential for positive social good. The company introduced new tools to support developers in artificial intelligence, security and quantum computing, and announced the first customers of its new Project Amber attestation service.

"We are making good on our software-first strategy by empowering an open ecosystem that will enable us to collectively and continuously innovate," said Intel Chief Technology Officer Greg Lavender. "We are committed members of the developer community and our breadth and depth of hardware and software assets facilitate the scaling of opportunities for all through co-innovation and collaboration."

ASUS IoT and Canonical Partner on Ubuntu Certification for IoT Applications

ASUS IoT, a global AIoT solution provider, today announced a partnership agreement with Canonical to provide pre-installed and certified versions of the Ubuntu Linux operating system for embedded boards and systems in diverse edge computing applications. New ASUS IoT devices such as the PE100A will be guaranteed by Canonical for optimized performance with Ubuntu Linux, in the interest of improving development time, configuration and installation.

This collaboration between ASUS IoT and Canonical ensures that individual hardware I/O functions conform to industrial-grade standards and to the version of Ubuntu running on the device. Moreover, it provides up to 10 years of Linux security (with five years of bundled ESM service) along with updated capabilities for users in industrial manufacturing, smart retail, smart transportation, surveillance and many other application sectors.

Intel Meteor Lake Can Play Videos Without a GPU, Thanks to the new Standalone Media Unit

Intel's upcoming Meteor Lake (MTL) processor is set to deliver a wide range of exciting solutions, the first being the Intel 4 manufacturing node. However, today we have some interesting Linux kernel patches indicating that Meteor Lake will have a dedicated "Standalone Media" Graphics Technology (GT) block to process video/audio. Moving encoding and decoding off the GPU to a dedicated media engine will allow MTL to play back video without the GPU, freeing the GPU to be used as a parallel processing powerhouse. Features like Intel QuickSync will be built into this unit. What is interesting is that this unit will be made on a separate tile, fused with the rest using the tile-based manufacturing found in Ponte Vecchio (which has 47 tiles).
Intel Linux patches: Starting with [Meteor Lake], media functionality has moved into a new, second GT at the hardware level. This new GT, referred to as "standalone media" in the spec, has its own GuC, power management/forcewake, etc. The general non-engine GT registers for standalone media start at 0x380000, but otherwise use the same MMIO offsets as the primary GT.

Standalone media has a lot of similarity to the remote tiles present on platforms like [Xe HP Software Development Vehicle] and [Ponte Vecchio], and our i915 [kernel graphics driver] implementation can share much of the general "multi GT" infrastructure between the two types of platforms.
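The patch notes imply a simple address relationship: a standalone-media register lives at the same offset as its primary-GT counterpart, rebased to 0x380000. A minimal sketch of that mapping (the example offset used below is hypothetical, purely for illustration):

```python
MEDIA_GT_BASE = 0x380000  # base of standalone-media GT registers, per the patch notes

def media_gt_reg(primary_offset: int) -> int:
    """Map a primary-GT MMIO offset to its standalone-media equivalent.

    Per the patches, standalone media reuses the primary GT's register
    layout, shifted to start at 0x380000.
    """
    return MEDIA_GT_BASE + primary_offset

# A hypothetical primary-GT register at offset 0x2030 would thus appear
# at 0x382030 in the standalone-media GT.
```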

Microsoft Brings Ampere Altra Arm Processors to Azure Cloud Offerings

Microsoft is announcing the general availability of the latest Azure Virtual Machines featuring the Ampere Altra Arm-based processor. The new virtual machines will be generally available on September 1, and customers can now launch them in 10 Azure regions and multiple availability zones around the world. In addition, the Arm-based virtual machines can be included in Kubernetes clusters managed using Azure Kubernetes Service (AKS). This ability has been in preview and will be generally available over the coming weeks in all the regions that offer the new virtual machines.

Earlier this year, we launched the preview of the new general-purpose Dpsv5 and Dplsv5 and memory-optimized Epsv5 Azure Virtual Machine series, built on the Ampere Altra processor. These new virtual machines have been engineered to efficiently run scale-out, cloud-native workloads. Since then, hundreds of customers have tested and experienced firsthand the excellent price-performance that the Arm architecture can provide for web and application servers, open-source databases, microservices, Java and .NET applications, gaming, media servers, and more. Starting today, all Azure customers can deploy these new virtual machines using the Azure portal, SDKs, API, PowerShell, and the command-line interface (CLI).

Ansys and AMD Collaborate to Speed Simulation of Large Structural Mechanical Models Up to 6x Faster

Ansys announced that Ansys Mechanical is one of the first commercial finite element analysis (FEA) programs supporting AMD Instinct accelerators, the newest data center GPUs from AMD. The AMD Instinct accelerators are designed to provide exceptional performance for data centers and supercomputers to help solve the world's most complex problems. To support the AMD Instinct accelerators, Ansys developed APDL code in Ansys Mechanical to interface with AMD ROCm libraries on Linux, which will support performance and scaling on the AMD accelerators.

Ansys' latest collaboration with AMD resulted in a solution that, according to Ansys' tests, significantly speeds up simulation of large structural mechanical models—between three and six times faster for Ansys Mechanical applications using the sparse direct solver. Adding support for AMD Instinct accelerators in Ansys Mechanical gives customers greater flexibility in their choice of high-performance computing (HPC) hardware.

Intel Driver Update Confirms VPU Integration in Meteor Lake for AI Workload Acceleration

Intel yesterday confirmed its plans to extend its Meteor Lake architecture toward shores other than general processing. According to Phoronix, Intel posted a new driver that lays the foundations for VPU (Versatile Processing Unit) support under Linux. The idea is that Intel will integrate this VPU within its 14th Gen Meteor Lake architecture, adding AI inferencing acceleration capabilities to its silicon - a sure-fire way to achieve enormous gains in AI processing, especially in performance per watt. Interestingly, Intel is somewhat following in Apple's footsteps here, as Apple has included AI-dedicated processing cores in its desktop/laptop Apple Silicon processors since the M1 days.

Intel's VPU architecture will surely be derived from Movidius' designs, which Intel acquired back in 2016 for a cool $400 million. It's unclear which parts of Movidius/Intel IP will be included in the VPU units paired with Meteor Lake: whether a full-blown, SoC (System on Chip)-like VPU design such as the Myriad X VPU, or select bits of the architecture (plus the equivalent of five additional years of research and development) sprinkled on top of the upcoming architecture. We do know the VPU itself will include a memory management unit, a RISC-based microcontroller, a Neural Compute System (what exactly this compute system and its slices entail is the mysterious part), and network-on-chip capabilities.

AMD Introduces Radeon Raytracing Analyzer 1.0

Today, AMD GPUOpen announced that AMD has developed a new tool to help game developers using ray tracing technologies organize the model geometries in their scenes. Called Radeon Raytracing Analyzer (RRA) 1.0, it is officially available for download for Linux and Windows, released as part of the Radeon Developer Tool Suite. With rendered geometry slowly switching from rasterization to ray tracing, developers need a tool that points out performance issues and possible workarounds in the process. With RRA, AMD gives all Radeon developers a tool that answers questions such as: how much memory is the acceleration structure using, how complex is the implemented BVH, how many acceleration structures are used, and is the geometry in the BLAS sufficiently axis-aligned. Developers will find it very appealing for their ray tracing workloads.
AMD: RRA is able to work because our Radeon Software driver engineers have been hard at work adding raytracing support to our Developer Driver technology. This means that once your application is running in developer mode - using the Radeon Developer Panel, which ships with RRA - the driver can log all of the acceleration structures in a scene with a single button click. The Radeon Raytracing Analyzer tool can then load and interrogate the data generated by the driver, presenting it in an easy-to-understand way.

ORICO Launches High-Performing Portable USB4 SSD Inspired by Mondrian

ORICO - a Shenzhen-based innovative enterprise focusing on high-performance solutions for USB data transmission and charging - is proud to unveil the ORICO USB4 High Speed Portable SSD Montage 40 Gbps series, with a striking and durable design inspired by Dutch painter Piet Mondrian. The bold and bright aesthetic draws from Mondrian's famous work Composition with Red, Blue and Yellow, incorporating the thick black lines and blocks of color that immediately distinguish the device from the monochrome alternatives on the market. Loud, but not lurid, the design is applied with the durable in-mold labeling technique also found in automobile manufacturing, chosen for its resistance to corrosion.

However, the product engineers at ORICO do not pursue form over function and have invested in the right technology to make the Montage 40 Gbps series one of the best-performing SSDs available. During performance testing, the drive achieved a 3,126 MB/s read speed and a 2,832 MB/s write speed, transferring 3 GB files in just one second - matching, and even surpassing, many leading products currently on the market. Accompanied by a versatile 2-in-1 data cable for USB Type-A and Type-C connections, the drive is widely compatible and can be used with macOS, Windows, Android, and Linux operating systems without requiring a driver. Depending on user requirements, the Montage series offers capacity options ranging from 512 GB to 2 TB. "We are so excited to launch the eye-catching Montage series, serving superior performance and carrying a timeless aesthetic that really transcends style trends," commented Xu Yeyou, CEO of ORICO. "We had in mind on-the-go creatives, such as photographers and video editors, when designing the product."
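The quoted figures are internally consistent: at the measured read speed, a 3 GB file transfers in about a second. A quick back-of-the-envelope check (decimal units assumed, since storage vendors typically quote them):

```python
read_speed_mb_s = 3126  # measured read speed from the testing, MB/s
file_size_mb = 3000     # a "3 GB" file in decimal megabytes

# Time to read the whole file at the measured speed: about 0.96 s,
# consistent with the "3 GB files in just one second" claim.
transfer_time_s = file_size_mb / read_speed_mb_s
```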

RISC-V development platform ROMA features forthcoming quad-core RISC-V processor

DeepComputing and Xcalibyte today opened pre-orders for the industry's first native RISC-V development laptop. The hotly anticipated ROMA development platform features an unannounced quad-core RISC-V processor with a companion NPU/GPU for fast, seamless native RISC-V software development.

"Native RISC-V compile is a major milestone," said Mark Himelstein, Chief Technology Officer for RISC-V International. "The ROMA platform will benefit developers who want to test their software running natively on RISC-V. And it should be easy to transfer code developed on this platform to embedded systems."

AMD Instinct MI300 APU to Power El Capitan Exascale Supercomputer

The exascale supercomputing race is now well underway: the US-based Frontier supercomputer has been delivered, and we now wait for the remaining systems to join the race. Today, during the 79th HPC User Forum at Oak Ridge National Laboratory (ORNL), Terri Quinn of Lawrence Livermore National Laboratory (LLNL) delivered a few insights into what the El Capitan exascale machine will look like. It seems the new powerhouse will be based on AMD's Instinct MI300 APU. LLNL targets peak performance of over two exaFLOPS and sustained performance of more than one exaFLOP, all under 40 megawatts of power. This requires a very dense and efficient computing solution - exactly what the MI300 APU is designed to be.
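Those targets imply a rough power-efficiency requirement for the machine: dividing the peak performance target by the power budget gives the GFLOPS-per-watt figure the hardware must reach. This is a back-of-the-envelope estimate from the numbers above, not an official specification:

```python
peak_flops = 2e18   # LLNL's peak target: over two exaFLOPS
power_watts = 40e6  # 40-megawatt power envelope

# Required efficiency: 2e18 FLOPS / 4e7 W = 5e10 FLOPS/W = 50 GFLOPS/W
gflops_per_watt = peak_flops / power_watts / 1e9
```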

As a reminder, the AMD Instinct MI300 is an APU that combines Zen 4 x86-64 CPU cores, CDNA3 compute-oriented graphics, large cache structures, and HBM memory used as DRAM on a single package. This is achieved using a multi-chip module design with 2.5D and 3D chiplet integration using Infinity architecture. The system will essentially utilize thousands of these APUs to become one large Linux cluster. It is slated for installation in 2023, with an operating lifespan from 2024 to 2030.

MediaTek Unveils New AIoT Platform Stack and Introduces the Genio 1200 AIoT Chip

MediaTek today unveiled its new Genio platform for AIoT devices and introduced the first chip in the Genio family, the Genio 1200 designed for premium AIoT products. MediaTek Genio is a complete platform stack for the AIoT with powerful and ultra-efficient chipsets, open platform software development kits (SDKs) and a developer portal with comprehensive resources and tools. This all-in-one platform makes it easy for brands to develop innovative consumer, enterprise and industrial smart applications at the premium, mid-range and entry levels, and bring these devices to market faster. With MediaTek Genio, customers have access to all the hardware, software and resources needed to go from concept to design and manufacturing.

Customers can choose from a range of Genio chips to suit their product needs, and then use MediaTek's developer resources and the Yocto Linux open platform SDK to customize their designs. MediaTek also makes it easy for customers to access its partners' system hardware and software, and leverage partners' networks and sales channels. By offering an integrated, easy-to-use platform, MediaTek Genio reduces development costs and speeds up time to market, while providing long-term support for operating system updates and security patches that extend the product lifecycle. "Today MediaTek powers the most popular AIoT devices on the market. As the industry enters the next era of innovation, MediaTek's Genio platform delivers flexibility, scalability and development support brands need to cater to the latest market demands," said Jerry Yu, MediaTek Corporate Senior Vice President and General Manager of MediaTek's Computing, Connectivity and Metaverse Business Group. "We look forward to seeing the new user experiences brands bring to life with the Genio 1200 and its powerful AI capability, support for 4K displays and advanced imaging features."

AMD's Integrated GPU in Ryzen 7000 Gets Tested in Linux

It appears that one of AMD's partners has a Ryzen 7000 CPU or APU with integrated graphics up and running in Linux, based on details leaked courtesy of the partner testing the chip with the Phoronix Test Suite and submitting the results to the OpenBenchmarking database. The numbers are by no means impressive, suggesting this engineering sample isn't running at its proper clock speeds. For example, it scores only 63.1 FPS in Enemy Territory: Quake Wars, where a Ryzen 9 6900HX manages 182.1 FPS, with both GPUs allocated 512 MB of system memory as the minimum graphics memory allocation.

The integrated GPU carries the model name GFX1036, with older integrated RDNA2 GPUs from AMD having been part of the GFX103x series. It's reported to have clock speeds of 2000/1000 MHz, although it's presumably running at the lower of the two, if not slower, as it's only about a third of the speed of the GPU in the Ryzen 9 6900HX. That said, the GPU in the Ryzen 7000 series is, as far as anyone's aware, not really intended for gaming: it's a heavily stripped-down GPU meant mainly for desktop and media use, so it's possible it will never catch up with the current crop of integrated GPUs from AMD. We'll hopefully find out more in less than two weeks' time, when AMD holds its keynote at Computex.
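The "about a third of the speed" estimate follows directly from the two leaked scores quoted above (figures taken from the OpenBenchmarking results as reported):

```python
gfx1036_fps = 63.1     # Ryzen 7000 iGPU engineering sample, ET: Quake Wars
r9_6900hx_fps = 182.1  # Ryzen 9 6900HX iGPU, same benchmark

# Roughly 0.35, i.e. about one third of the 6900HX's performance
ratio = gfx1036_fps / r9_6900hx_fps
```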

NVIDIA Releases Open-Source GPU Kernel Modules

NVIDIA is now publishing Linux GPU kernel modules as open source with dual GPL/MIT license, starting with the R515 driver release. You can find the source code for these kernel modules in the NVIDIA Open GPU Kernel Modules repo on GitHub. This release is a significant step toward improving the experience of using NVIDIA GPUs in Linux, for tighter integration with the OS and for developers to debug, integrate, and contribute back. For Linux distribution providers, the open-source modules increase ease of use.

They also improve the out-of-the-box user experience to sign and distribute the NVIDIA GPU driver. Canonical and SUSE are able to immediately package the open kernel modules with Ubuntu and SUSE Linux Enterprise Distributions. Developers can trace into code paths and see how kernel event scheduling is interacting with their workload for faster root cause debugging. In addition, enterprise software developers can now integrate the driver seamlessly into the customized Linux kernel configured for their project.