News Posts matching #32-bit


Saramonic Ultra Launched: A Pro Wireless Microphone with Timecode for Pro Creators

Saramonic launches its latest wireless lavalier microphone—Saramonic Ultra—for professional creators. With timecode, 130 dB SPL, 32-bit float onboard recording, IPX5-rated transmitters, and an external antenna, the Saramonic Ultra is packed with pro features that deliver top-notch performance in demanding environments, providing creators with more freedom for their creativity and more editing room in post-production.

Designed with professional creators in mind, the Saramonic Ultra features Timecode, a function that synchronizes timecodes across the microphone system and cameras, regardless of when each device starts recording. In post-production, this enables editors to easily synchronize audio and video down to the frame using the auto-sync functions offered by editing software, saving them valuable time.

Akeana Exits Stealth Mode with Comprehensive RISC-V Processor Portfolio

Akeana, the company committed to driving dramatic change in semiconductor IP innovation and performance, has announced its official company launch approximately three years after its foundation, having raised over $100 million in capital, with support from A-list investors including Kleiner Perkins, Mayfield, and Fidelity. Today's launch marks the formal availability of the company's expansive line of IP solutions that are uniquely customizable for any workload or application.

Formed by the same team that designed Marvell's ThunderX2 server chips, Akeana offers a variety of IP solutions, including microcontrollers, Android clusters, AI vector cores and subsystems, and compute clusters for networking and data centers. Akeana moves the industry beyond the status quo of legacy vendors and architectures, like Arm, with equitable licensing options and processors that fill and exceed current performance gaps.

X-Silicon Startup Wants to Combine RISC-V CPU, GPU, and NPU in a Single Processor

While we are all used to having a system with a CPU, GPU, and, more recently, an NPU, X-Silicon Inc. (XSi), a startup founded by Silicon Valley veterans, has unveiled an interesting RISC-V processor that can simultaneously handle CPU, GPU, and NPU workloads on a single chip. This innovative chip architecture, which will be open source, aims to provide a flexible and efficient solution for a wide range of applications, including artificial intelligence, virtual reality, automotive systems, and IoT devices. The new microprocessor combines a RISC-V CPU core with vector capabilities and GPU acceleration into a single chip, creating a versatile all-in-one processor. By integrating the functionality of a CPU and GPU into a single core, X-Silicon's design offers several advantages over traditional architectures. The chip utilizes the open-source RISC-V instruction set architecture (ISA) for both CPU and GPU operations, running a single instruction stream. This approach promises a lower memory footprint and improved efficiency, as there is no need to copy data between separate CPU and GPU memory spaces.

Called the C-GPU architecture, X-Silicon's design uses a RISC-V vector core with 16 32-bit FPUs and a scalar ALU for processing regular integer as well as floating-point instructions. A unified instruction decoder feeds the cores, which are connected to a thread scheduler, texture unit, rasterizer, clipping engine, neural engine, and pixel processors. Everything is written to a frame buffer, which feeds the video engine for video output. The setup of the cores allows users to program each core individually for HPC, AI, video, or graphics workloads. Without software there is no usable chip, which is why X-Silicon is working on OpenGL ES, Vulkan, Mesa, and OpenCL APIs. Additionally, the company plans to release a hardware abstraction layer (HAL) for direct chip programming. According to Jon Peddie Research (JPR), the industry has been seeking an open-standard GPU that is flexible and scalable enough to support various markets. X-Silicon's CPU/GPU hybrid chip aims to address this need by providing manufacturers with a single, open chip design that can handle any desired workload. XSi gave no timeline, but it plans to distribute the IP to OEMs and hyperscalers, so first silicon is still some way off.

SSD Overclocking? It can be Done, with Serious Performance Gains

The PC master race has yielded many interesting activities for enthusiasts, with perhaps the pinnacle being overclocking. The usual subjects for overclocking are CPUs, GPUs, and RAM, with other components generally not being overclockable at all. However, the enthusiast force never seems to settle, and today we have proof of overclocking an off-the-shelf 2.5-inch SATA III NAND Flash SSD, thanks to Gabriel Ferraz, a computer engineering graduate and TechPowerUp's SSD database maintainer. In the video he uses the RZX Pro 256 GB SSD, a generic NAND Flash drive. The RZX Pro uses the Silicon Motion SM2259XT2, a single-core, 32-bit ARC CPU running at up to 550 MHz. It has two channels at 400 MHz, each with eight chip-enable interconnects, allowing up to 16 NAND Flash dies to operate. The SSD doesn't feature a DRAM cache or support a host memory buffer. It has only one NAND Flash memory chip, from Kioxia, which uses the BiCS FLASH 4 architecture with 96 layers and a 256 GB capacity.

While this NAND Flash die is rated for up to 400 MHz or 800 MT/s, it ran at less than half that speed, 193.75 MHz or 387.5 MT/s, at default settings. Gabriel acquired a SATA III to USB 3.0 adapter with a JMS578 bridge chip to perform the overclock; this adapter allows hot-swapping SSDs without turning off the PC. He shorted two terminals on the drive's PCB to get the SSD to operate outside its default safe mode. Mass Production Tools (MPTools), which OEMs use to flash SSDs, were used to change the firmware settings; each NAND Flash architecture has its own special version of MPTools. The software directly exposes control of the Flash clock, CPU clock, and output drive strength; however, additional tweaks like Flash I/O drive strength with subdivisions require modifications. Controller and Flash On-Die Termination (ODT) and the Schmitt trigger window (referring to the Schmitt trigger comparator circuit) also needed a few modifications to make it all work.
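For readers who want to put those interface numbers in context, here is a rough back-of-the-envelope calculation (not from the video) of the raw NAND bus throughput, assuming an 8-bit-wide bus per channel, double data rate signalling, both channels active, and no protocol overhead:

# Raw NAND interface throughput for the RZX Pro, a rough sketch.
# Assumptions (not from the article): 8-bit bus per channel, DDR signalling,
# both channels active, no protocol overhead.
def nand_throughput_mb_s(clock_mhz, channels=2, bus_bits=8):
    transfers_per_s = clock_mhz * 1e6 * 2          # DDR: two transfers per clock
    bytes_per_transfer = channels * bus_bits / 8   # all channels in parallel
    return transfers_per_s * bytes_per_transfer / 1e6

print(nand_throughput_mb_s(193.75))  # stock clock: ~775 MB/s
print(nand_throughput_mb_s(400.00))  # rated clock: ~1600 MB/s
# SATA III tops out at roughly 600 MB/s after encoding overhead, so the host
# link, not the NAND bus, quickly becomes the ceiling once the clock is raised.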

Nuvoton Unveils New Production-Ready Endpoint AI Platform for Machine Learning

Nuvoton is pleased to announce its new Endpoint AI Platform to accelerate the development of fully-featured microcontroller (MCU) AI products. These solutions are enabled by Nuvoton's powerful new MCU and MPU silicon, including the NuMicro M55M1 equipped with Ethos U55 NPU, NuMicro MA35D1, and NuMicro M467 series. These MCUs are a valuable addition to the modern AI-centric computing toolkit and demonstrate how Nuvoton continues to work closely with Arm and other companies to develop a user-friendly and complete Endpoint AI Ecosystem.

Development on these platforms is made easy by Nuvoton's NuEdgeWise: a well-rounded, simple-to-adopt tool for machine learning (ML) development that is nonetheless suitable for cutting-edge tasks. This powerful core hardware, combined with uniquely rich development tools, cements Nuvoton's reputation as a leading microcontroller platform provider. These new single-chip-based platforms are ideal for applications including smart home appliances and security, smart city services, industry, agriculture, entertainment, environmental protection, education, highly accurate voice-control tasks, and sports, health, and fitness.

Synopsys Expands Its ARC Processor IP Portfolio with New RISC-V Family

Synopsys, Inc. (Nasdaq: SNPS) today announced it has extended its ARC Processor IP portfolio to include new RISC-V ARC-V Processor IP, enabling customers to choose from a broad range of flexible, extensible processor options that deliver optimal power-performance efficiency for their target applications. Synopsys leveraged decades of processor IP and software development toolkit experience to develop the new ARC-V Processor IP that is built on the proven microarchitecture of Synopsys' existing ARC Processors, with the added benefit of the expanding RISC-V software ecosystem.

Synopsys ARC-V Processor IP includes high-performance, mid-range, and ultra-low power options, as well as functional safety versions, to address a broad range of application workloads. To accelerate software development, the Synopsys ARC-V Processor IP is supported by the robust and proven Synopsys MetaWare Development Toolkit that generates highly efficient code. In addition, the Synopsys.ai full-stack AI-driven EDA suite is co-optimized with ARC-V Processor IP to provide an out-of-the-box development and verification environment that helps boost productivity and quality-of-results for ARC-V-based SoCs.

AMD, Arm, Intel, Meta, Microsoft, NVIDIA, and Qualcomm Standardize Next-Generation Narrow Precision Data Formats for AI

Realizing the full potential of next-generation deep learning requires highly efficient AI infrastructure. For a computing platform to be scalable and cost efficient, optimizing every layer of the AI stack, from algorithms to hardware, is essential. Advances in narrow-precision AI data formats and associated optimized algorithms have been pivotal to this journey, allowing the industry to transition from traditional 32-bit floating point precision to formats as narrow as 8 bits (i.e., OCP FP8).

Narrower formats allow silicon to execute more efficient AI calculations per clock cycle, which accelerates model training and inference times. AI models take up less space, which means they require fewer data fetches from memory and can run with better performance and efficiency. Additionally, transferring fewer bits reduces data movement over the interconnect, which can enhance application performance or cut network costs.
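To make the size argument concrete, here is a minimal Python sketch of dropping 32-bit weights to 8 bits using plain symmetric int8 quantization; this is a generic illustration of the memory savings, not the OCP FP8/MX format itself:

import numpy as np

# Symmetric int8 quantization of FP32 weights: a generic stand-in for
# narrow formats, not the OCP FP8 specification.
def quantize_int8(x):
    scale = np.max(np.abs(x)) / 127.0                       # one scale per tensor
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(1024, 1024).astype(np.float32)
q, s = quantize_int8(w)
print(w.nbytes / 1e6, "MB ->", q.nbytes / 1e6, "MB")         # 4x smaller
print("mean abs round-trip error:", np.abs(w - dequantize(q, s)).mean())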

Chinese Exascale Sunway Supercomputer has Over 40 Million Cores, 5 ExaFLOPS Mixed-Precision Performance

The exascale supercomputer arms race is making everyone invest resources into trying to achieve the number one spot. Some countries, like China, actively participate in the race with little public proof of their work, leaving the high-performance computing (HPC) community wondering about Chinese efforts on exascale systems. Today, we have some information regarding the next-generation Sunway system, which is supposed to be China's first exascale supercomputer. Replacing the Sunway TaihuLight, the next-generation Sunway will reportedly boast over 40 million cores. The information comes from an upcoming presentation for the Supercomputing 2023 show in Denver, taking place from November 12 to November 17.

The presentation talks about 5 ExaFLOPS in the HPL-MxP benchmark with linear scalability on the 40-million-core Sunway supercomputer. HPL-MxP is a mixed-precision HPC benchmark made to test a system's capability in regular HPC workloads that require 64-bit precision and AI workloads that can use 32-bit precision. What are those 40 million cores? We are not sure. The last-generation Sunway TaihuLight used SW26010 manycore 64-bit RISC processors based on the Sunway architecture, each with 260 cores. There were 40,960 SW26010 CPUs in the system for a total of 10,649,600 cores, which means the next-generation Sunway system has nearly four times as many cores. We expect some uArch and semiconductor node improvements as well.
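For reference, the trick HPL-MxP rewards is mixed-precision iterative refinement: solve the system in low precision, then polish the answer in FP64. Below is a minimal NumPy sketch of the idea, using FP32 as the low precision; real runs use FP16/FP32 on accelerators and reuse the low-precision factorization rather than re-solving each step.

import numpy as np

rng = np.random.default_rng(0)
n = 512
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test system
b = rng.standard_normal(n)

A32 = A.astype(np.float32)                        # "low precision" copy
x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
for _ in range(5):
    r = b - A @ x                                 # residual computed in FP64
    x += np.linalg.solve(A32, r.astype(np.float32))

print("residual norm:", np.linalg.norm(b - A @ x))  # approaches FP64-level accuracy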

Steam Deck Gets 32 GB LPDDR5 Memory Upgrade by Modder

Valve's Steam Deck handheld gaming console launched with 16 GB of LPDDR5 memory running at 5500 MT/s, distributed over four 32-bit channels for 88 GB/s of total memory bandwidth. While the storage can be upgraded, the memory is limited to 16 GB, and the memory chips are soldered. However, it turns out that this limitation can also be overcome, provided you are a professional and can solder well. Thanks to Balázs Triszka on Twitter/X, we have now seen a Steam Deck mod where the memory gets upgraded from 16 GB to 32 GB.
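The 88 GB/s figure follows directly from the quoted bus configuration; a quick check:

# Four 32-bit LPDDR5 channels at 5500 MT/s:
channels, bits_per_channel, mt_s = 4, 32, 5500e6
print(channels * bits_per_channel / 8 * mt_s / 1e9, "GB/s")  # 88.0 GB/s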

The modder successfully bumped up the system memory using Samsung's LPDDR5 K3LKCKC0BM-MGCP memory chips. All that was needed was some experience with ball grid array (BGA) resoldering. There was no glue under the chips, so they were easy to remove. You can see the pictures below, and the system indeed reports the higher memory count.

Debian 12 Bookworm Released

After 1 year, 9 months, and 28 days of development, the Debian project is proud to present its new stable version 12 (code name bookworm). bookworm will be supported for the next 5 years thanks to the combined work of the Debian Security team and the Debian Long Term Support team.

Following the 2022 General Resolution about non-free firmware, we have introduced a new archive area making it possible to separate non-free firmware from the other non-free packages:
  • non-free-firmware
  • Most non-free firmware packages have been moved from non-free to non-free-firmware. This separation makes it possible to build a variety of official installation images.
Debian 12 bookworm ships with several desktop environments, such as:
  • Gnome 43,
  • KDE Plasma 5.27,
  • LXDE 11,
  • LXQt 1.2.0,
  • MATE 1.26,
  • Xfce 4.18

Intel Publishes Sorting Library Powered by AVX-512, Offers 10-17x Speed Up

Intel has recently updated its open-source C++ header-file library for high-performance SIMD-based sorting to support the AVX-512 SIMD instruction set. Extending the capability of regular AVX2 support, the sorting functions now implement 512-bit extensions to offer greater performance. According to Phoronix, the NumPy Python library for mathematics, which underpins a lot of software, has updated its code base to use the AVX-512-boosted sorting functionality, which yields a fantastic uplift in performance. The library uses AVX-512 to vectorize quicksort for 16-bit and 64-bit data types using the extended instruction set. Benchmarked on an Intel Tiger Lake system, NumPy sorting saw a 10-17x increase in performance.

Intel engineer Raghuveer Devulapalli made the changes, which were merged into the NumPy codebase on Wednesday. Regarding individual data types, the new implementation speeds up 16-bit integer sorting by 17x and 32-bit data type sorting by 12-13x, while 64-bit float sorting of random arrays sees a 10x speed-up. Using the x86-simd-sort code, this speed-up shows the power of AVX-512 and its capability to enhance the performance of various libraries. We hope to see more implementations of AVX-512, as AMD has joined the party by placing AVX-512 processing elements on Zen 4.
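From a user's point of view nothing changes: a NumPy build that includes x86-simd-sort simply dispatches np.sort to the AVX-512 path for supported data types. A rough timing sketch is below; the exact speed-up depends on the CPU and on how NumPy was built.

import time
import numpy as np

rng = np.random.default_rng(0)
for dtype in (np.int16, np.int32, np.float64):
    data = rng.integers(0, 1 << 14, size=10_000_000).astype(dtype)
    start = time.perf_counter()
    np.sort(data, kind="quicksort")   # the vectorized path is picked automatically
    print(np.dtype(dtype).name, round(time.perf_counter() - start, 3), "s")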

IBM Artificial Intelligence Unit (AIU) Arrives with 23 Billion Transistors

IBM Research has published information about the company's latest development in processors for accelerating Artificial Intelligence (AI). The latest IBM processor, called the Artificial Intelligence Unit (AIU), tackles the problem of creating an enterprise solution for AI deployment that fits in a PCIe slot. The IBM AIU is a half-height PCIe card with a processor powered by 23 billion transistors manufactured on a 5 nm node (presumably TSMC's). While IBM has not provided many details initially, we know that the AIU uses the AI processor found in the Telum chip, the core of the IBM z16 mainframe. The AIU takes Telum's AI engine and scales it up to 32 cores to achieve high efficiency.

The company has highlighted two main paths for enterprise AI adoption. The first is to embrace lower precision and use approximate computing to drop from 32-bit formats to formats holding a quarter as many bits while still delivering similar results. The other is, as IBM touts, that an "AI chip should be laid out to streamline AI workflows. Because most AI calculations involve matrix and vector multiplication, our chip architecture features a simpler layout than a multi-purpose CPU. The IBM AIU has also been designed to send data directly from one compute engine to the next, creating enormous energy savings."

Imagination launches IMG RTXM-2200 - its first real-time embedded RISC-V CPU

Imagination Technologies announces IMG RTXM-2200, its first real-time embedded RISC-V CPU, a highly scalable, feature-rich, 32-bit embedded solution with a flexible design for a wide range of high-volume devices. IMG RTXM-2200 is one of the first commercial cores in Imagination's Catapult CPU family, previously announced in December 2021. Accelerating the expansion of its RISC-V offering, Imagination's IMG RTXM-2200 can be integrated into complex SoCs for a range of applications including networking solutions, packet management, storage controllers, and sensor management for AI cameras and smart metering. Together with its market-leading GPU and AI accelerator IP, Imagination's new CPU cores offer customers access to innovative heterogeneous solutions.

This real-time embedded core features up to 128 KB of tightly coupled memories (both instruction and data) for deterministic response and Level 1 cache sizes of up to 128 KB for robust performance. The new CPU offers a range of floating-point formats including single-precision and bfloat16. The latter enables manufacturers to deploy AI applications through this core without the need for an additional chip. This reduces silicon area, for a cost-effective and optimised design in AI cameras and smart metering applications.
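The reason bfloat16 drops so easily into FP32 pipelines is that it is simply the upper 16 bits of an IEEE-754 float32: all 8 exponent bits are kept and the mantissa is truncated to 7 bits. A small illustrative conversion (round-to-nearest-even) follows; it is a generic sketch of the format, not tied to Imagination's implementation.

import numpy as np

def to_bfloat16_bits(x):
    bits = np.asarray(x, dtype=np.float32).view(np.uint32)
    rounding = np.uint32(0x7FFF) + ((bits >> 16) & 1)   # round to nearest, ties to even
    return ((bits + rounding) >> 16).astype(np.uint16)

def from_bfloat16_bits(b):
    return (b.astype(np.uint32) << 16).view(np.float32)

x = np.array([3.14159, 1e-3, 65504.0], dtype=np.float32)
print(from_bfloat16_bits(to_bfloat16_bits(x)))   # ~[3.140625, 9.9945e-04, 65536.0]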

LG Debuts UltraGear GP9 Gaming Speaker, Matches its Displays

LG has gained a pretty good reputation for its UltraGear monitors, and the company has now launched its first matching accessory, the GP9 gaming speaker. At first glance it just looks like a compact soundbar, but looks can be deceiving, and that's very much the case here: not only is it a portable, battery-powered speaker, it also hides quite a few features that aren't immediately apparent.

For starters, LG has incorporated what they call a "Quad DAC" with some help from ESS in the shape of the 9038Pro, which is ESS' flagship 32-bit DAC. LG uses this to deliver virtual 7.1-channel audio and the GP9 is Hi-Res Audio certified. The GP9 also has a built-in noise cancelling microphone, so you can use it for voice chat or online meetings if so inclined.

HiSilicon Develops RISC-V Processor to Move Away from Arm Restrictions

Huawei's HiSilicon subsidiary, which specializes in the design and development of semiconductor devices like processors, has made a big announcement today. A while back, the US government blacklisted Huawei from using any US-made technology. This rendered HiSilicon's efforts to build processors based on the Arm instruction set architecture (ISA) practically useless, as the US sanctions applied to those as well. So the company had to turn to alternative technologies. Today, HiSilicon has announced the new HiSilicon Hi3861 development board, based on the RISC-V architecture. This represents an important step toward Huawei's silicon independence, as RISC-V is a free and open-source ISA designed for all kinds of workloads.

While the HiSilicon Hi3861 development board features a low-power Hi3861 chip, it is the company's first attempt at building a RISC-V design. It features a "high-performance 32-bit microprocessor with a maximum operating frequency of 160 MHz". While this may pale in comparison to traditional HiSilicon products, the chip is aimed at IoT applications, which don't require much processing power. For tasks that need more performance, HiSilicon will surely develop more powerful designs. This is simply an important starting point, where Huawei's HiSilicon moves away from the Arm ISA and steps into design and development on another ISA. This time, with RISC-V, the US government has no control over the ISA, as it is free for anyone to use, with the added benefit of no licensing costs. It will be interesting to see where this leads HiSilicon and what products the company plans to release on the new ISA.

OpenFive Tapes Out SoC for Advanced HPC/AI Solutions on TSMC 5 nm Technology

OpenFive, a leading provider of customizable, silicon-focused solutions with differentiated IP, today announced the successful tape out of a high-performance SoC on TSMC's N5 process, with integrated IP solutions targeted for cutting edge High Performance Computing (HPC)/AI, networking, and storage solutions.

The SoC features an OpenFive High Bandwidth Memory (HBM3) IP subsystem and D2D I/Os, as well as a SiFive E76 32-bit CPU core. The HBM3 interface supports 7.2 Gbps speeds allowing high throughput memories to feed domain-specific accelerators in compute-intensive applications including HPC, AI, Networking, and Storage. OpenFive's low-power, low-latency, and highly scalable D2D interface technology allows for expanding compute performance by connecting multiple dice together using an organic substrate or a silicon interposer in a 2.5D package.
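For context on the throughput side, 7.2 Gbps per pin translates into sizeable per-stack bandwidth; the short calculation below assumes the standard 1024-bit HBM3 stack interface, a detail the announcement itself doesn't spell out:

# Assuming a standard 1024-bit HBM3 stack interface (not stated in the release):
pins, gbps_per_pin = 1024, 7.2
print(pins * gbps_per_pin / 8, "GB/s per stack")   # ~921.6 GB/s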

Arm Highlights its Next Two Generations of CPUs, codenamed Matterhorn and Makalu, with up to a 30% Performance Uplift

Editor's Note: This is written by Arm vice president and general manager Paul Williamson.

Over the last year, I have been inspired by the innovators who are dreaming up solutions to improve and enrich our daily lives. Tomorrow's mobile applications will be even more imaginative, immersive, and intelligent. To that point, the industry has come such a long way in making this happen. Take app stores for instance - we had the choice of roughly 500 apps when smartphones first began shipping in volume in 2007 and today there are 8.9 million apps available to choose from.

Mobile has transformed from a simple utility to the most powerful, pervasive device we engage with daily, much like Arm-based chips have progressed to more powerful but still energy-efficient SoCs. Although the chip-level innovation has already evolved significantly, more is still required as use cases become more complex, with more AI and ML workloads being processed locally on our devices.

Raspberry Pi 4 Gets Upgraded to 8 GB, Priced at $75

Raspberry Pi 4 is almost a year old, and it's been a busy year. We've sold nearly 3 million units, shipped a couple of minor board revisions, and reduced the price of the 2 GB variant from $45 to $35. On the software side, we've done enormous amounts of work to reduce the idle and loaded power consumption of the device, passed OpenGL ES 3.1 conformance, started work on a Vulkan driver, and shipped PXE network boot mode and a prototype of USB mass storage boot mode - all this alongside the usual round of bug fixes, feature additions, and kernel version bumps.

While we launched with 1 GB, 2 GB and 4 GB variants, even at that point we had our eye on the possibility of an 8 GB Raspberry Pi 4. We were so enthusiastic about the idea that the non-existent product made its way into both the Beginner's Guide and the compliance leaflet. The BCM2711 chip that we use on Raspberry Pi 4 can address up to 16 GB of LPDDR4 SDRAM, so the real barrier to our offering a larger-memory variant was the lack of an 8 GB LPDDR4 package. These didn't exist (at least in a form that we could address) in 2019, but happily our partners at Micron stepped up earlier this year with a suitable part. And so, today, we're delighted to announce the immediate availability of the 8 GB Raspberry Pi 4, priced at just $75.

Microsoft Begins Phasing Out 32-Bit Support for Windows 10

It seems Microsoft has begun the long process of phasing out 32-bit support for Windows 10. Beginning with version 2004, all new OEM Windows 10 systems will be required to use 64-bit builds, and Microsoft will no longer release 32-bit builds for OEM distribution. This will not affect those of you running 32-bit versions of Windows 10, who will continue to receive updates, and Microsoft plans to continue selling 32-bit versions of Windows 10 through retail channels for the foreseeable future. This is likely just the first step in what will probably be a multi-year project to gradually phase out 32-bit support as more consumers and businesses switch to 64-bit systems.

Primate Labs Introduces GeekBench 5, Drops 32-bit Support

Primate Labs, developers of the ubiquitous benchmarking application GeekBench, have announced the release of version 5 of the software. The new version brings numerous changes, and one of the most important (since it affects compatibility) is that it will only be distributed as a 64-bit version. Some under-the-hood changes include additions to the CPU benchmark tests (including machine learning, augmented reality, and computational photography) as well as increases in the memory footprint of tests so as to better gauge the impact of your memory subsystem on your system's performance. Also introduced are different threading models for CPU benchmarking, allowing for changes in workload attribution and the corresponding impact on CPU performance.

On the Compute side of things, GeekBench 5 now supports the Vulkan API, which joins CUDA, Metal, and OpenCL. GPU-accelerated compute for computer vision tasks such as Stereo Matching, and augmented reality tasks such as Feature Matching are also available. For iOS users, there is now a Dark Mode for the results interface. GeekBench 5 is available now, 50% off, on Primate Labs' store.

Creative Formally Launches the Sound Blaster AE-7 and AE-9 Audiophile Sound Cards

Creative Technology Ltd continues its legacy of revolutionizing audio with the launch of its most advanced PCI-e sound cards ever - the Sound Blaster AE-9 and Sound Blaster AE-7. Built with only the most premium components, and complemented with the latest technologies from Creative, these sound cards are designed to define a new performance standard in this class for the ultimate PC entertainment experience.

Up till 1989, the only sounds coming out of the PC were mere beeps. The same year, Sound Blaster was born, and PC audio was transformed forever. Since then, over 400 million Sound Blasters have been sold; and the Sound Blaster brand has become synonymous with the term sound card and high-quality PC audio - first for gaming, and then movies and music. With experience and expertise refined over three decades of audio innovation, Sound Blaster has continued to reinvent itself with the development of digital audio processing technologies. Each new innovation served to redefine what the ultimate audio experience really means, such as when it evolved beyond the PC in the form of external sound cards for platforms like gaming and entertainment consoles.

Logitech launches the G502 Lightspeed Mouse, Bringing Wireless to the Family

Logitech has announced an upgraded, improved model of their legendary G502 mouse, in a Lightspeed edition. Ignoring the quantum implications of having a mouse that moves at light speed on your palm, the updated G502 features Logitech's latest in-house HERO 16K sensor: up to 16,000 DPI, processed by a 32-bit ARM Cortex-M-based SoC. Logitech says the wireless performance of its proprietary Lightspeed technology is comparable to that of a wired solution, offering 1 ms latency through special, purpose-designed features of the mouse.

Magnetic charging technology available in Logitech's mouse mats means you'll never be out of juice, with the G502's battery supporting 800 charge cycles before degradation starts to set in. Eleven programmable buttons, adjustable weight (from 114 g up to 130 g), and quality-of-life additions to the mouse's structure, such as rubberized compounds and the G502 Lightspeed's RGB lighting, bring this particular rodent up to a $149 price tag.

Logitech G Announces the 2019 MX518 Gaming Mouse

You asked for it, we did it. Over the years, the Logitech G community has consistently asked us to bring back the legendary Logitech G MX518, which many consider to be the finest gaming mouse of all time. Today Logitech G is excited to announce that the new MX518 gaming mouse is now available to fans around the world.

The reborn MX518 retains the same shape and feel of the original that made it famous, but is updated with the very latest, next-generation technologies, including the HERO 16K sensor and a 32-bit ARM processor for a super-fast 1 ms report rate. The MX518 also features eight programmable buttons so you can bind custom commands. With onboard memory, you can also save your preferences directly to the mouse, so you can use it on different systems without the need to install custom software or reconfigure your settings.

Apacer Launches World's first 32-bit DDR4 SODIMM for ARM Processors

In view of the current trend in which ARM technology drives the development of Internet of Things (IoT), mobile computing and automotive electronics, and the demands for economical, efficient and compact smart devices, Apacer, the world's leading industrial memory brand, launches the world's first 32-bit DDR4 SODIMM which supports industrial embedded systems using ARM/RISC processors or the latest RISC-V 32-bit processors. Apacer's 32-bit DDR4 SODIMM achieves an ideal balance between performance, power consumption and cost. Compared with existing onboard memory, it offers significant advantages of flexibility in capacity and space arrangement, making Apacer poised to upturn the market of ARM processor memory and ride this rapidly growing IoT wave.

AMD Releases Radeon Software Adrenalin 18.10.2 Beta

AMD has released the Radeon Software Adrenalin 18.10.2 beta drivers today. These drivers focus on a few key fixes, the first of which solves an issue where Vulkan API titles could crash when launching the game. Next is a specific fix for Assassin's Creed Odyssey, which keeps the game from randomly exiting when it is restarted after applying Adaptive Anti-Aliasing on multi-GPU systems.

That said, a few known issues are specifically noted. Strange Brigade can still experience application hangs when using the DirectX 12 API. Radeon Overlay does not play nice with the latest Windows 10 October 2018 Update and can cause intermittent instability or game crashes for the time being. Finally, RX Vega series graphics cards may experience elevated memory clocks when the system is idle. Other than that, nothing else is mentioned by AMD in regard to possible driver performance improvements; this latest beta focuses on a few key fixes and nothing more. It should also be noted that it is available in 64-bit only, as AMD confirmed earlier today that they will not be supporting 32-bit operating systems going forward.

DOWNLOAD: AMD Radeon Software Adrenalin 18.10.2 Beta