
AMD Vega Microarchitecture Technical Overview


Memory System


Vega is a big die, and AMD has helpfully marked out the sectors where the memory system resides. The headline memory feature is the introduction of HBM2 to the retail consumer market - 8 GB of HBM2, which was previously only available on high-end professional solutions costing an order of magnitude more. HBM2 offers higher capacity per stack than HBM1, which in turn raises the maximum possible memory capacity. AMD also provides a comparison against GDDR5 (rather than the GDDR5X NVIDIA uses on competing GeForce cards) to highlight HBM2's higher efficiency and smaller footprint. The smaller footprint holds true even against GDDR5X; as our RX Vega Preview indicates, however, that advantage appears to have gone unused, since the card's PCB is longer than that of the AMD R9 Fury series, which first introduced HBM.


In order to best make use of the higher bandwidth available with HBM2, AMD's Radeon Technology Group (RTG) devised a brand new High-Bandwidth Cache Controller (HBCC) to help maximize GPU VRAM utilization. Here, the VRAM acts as a cache for system memory and/or disk storage, with the HBCC controlling data movement in an intelligent manner. As a quick visualization, AMD contrasts conventional memory management with the HBCC's page-based approach, in which data segments are handled individually rather than as complete chunks: active pages reside in the high-bandwidth cache, while inactive pages stay in slower memory.
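To make the page-based idea concrete, here is a minimal sketch in Python of how a controller could track page residency: touched pages are promoted into a fast HBM-like cache, and the least recently used page is demoted to slower backing memory when the cache is full. The names (PageCache, touch) and the LRU policy are our own illustrative assumptions, not AMD's actual hardware logic.

from collections import OrderedDict

class PageCache:
    """Illustrative page-residency tracker: fast HBM-like cache in front of slower memory."""
    def __init__(self, capacity_pages):
        self.capacity = capacity_pages          # how many pages fit in the fast cache
        self.resident = OrderedDict()           # page_id -> True, ordered by recency of use
        self.backing = set()                    # pages currently held only in slow memory

    def touch(self, page_id):
        """Access a page: promote it into the fast cache, evicting the LRU page if needed."""
        if page_id in self.resident:
            self.resident.move_to_end(page_id)  # mark as most recently used
            return "hit"
        self.backing.discard(page_id)
        if len(self.resident) >= self.capacity:
            victim, _ = self.resident.popitem(last=False)  # least recently used page
            self.backing.add(victim)            # demote it to slower memory
        self.resident[page_id] = True
        return "miss"

# Example: a 4-page cache servicing accesses to 6 distinct pages.
cache = PageCache(capacity_pages=4)
for page in [0, 1, 2, 3, 0, 4, 5, 0]:
    print(page, cache.touch(page))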

This can be especially handy because a program typically loads into memory every resource it considers relevant to the 3D scene being rendered, even though it does not need to access all of them for every single frame. This disparity wastes the otherwise high memory bandwidth and consumes resources moving data that is never used. With large working sets, it also raises the chance that physical GPU memory overflows, triggering expensive swapping operations in an unorganized manner. With Vega's high-bandwidth cache, AMD tackles this directly in hardware, and this is where the HBCC comes in.
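As a rough illustration of that disparity, the short Python snippet below (with made-up sizes and resource names, not measurements from any real game) compares the total amount of data an application might allocate against the subset actually touched in one frame; only the touched portion needs to sit in the fast high-bandwidth cache.

# Hypothetical per-resource sizes in MB for assets an application has allocated.
allocated = {"terrain": 2048, "characters": 1024, "buildings": 1536,
             "particles": 256, "shadow_maps": 512, "distant_lod": 3072}

# Resources actually sampled while rendering one frame (assumed, for illustration).
touched_this_frame = {"terrain", "characters", "shadow_maps"}

total_mb = sum(allocated.values())
working_set_mb = sum(size for name, size in allocated.items() if name in touched_this_frame)

print(f"Allocated: {total_mb} MB, touched this frame: {working_set_mb} MB "
      f"({working_set_mb / total_mb:.0%} of what resides in VRAM)")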

The example above shows uniformly sized pages, but the high-bandwidth cache controller is designed to handle irregularly sized memory pages as well; typical page sizes lie between 4 KB and 128 KB. The HBCC can also address not just system memory and storage, but non-volatile RAM too, such as Intel's new Optane-based SSDs. If you have ever used a small SSD as a scratch/cache drive in front of a spinning hard drive, think of the practical benefits you saw there.
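Variable page sizes mainly change the bookkeeping: capacity has to be accounted in bytes rather than in page counts, and demoted pages can land on any of several backing tiers. The sketch below extends the earlier idea along those lines; the tier names, sizes, and promotion policy are invented for illustration, as the real HBCC policy is not public at this level of detail.

TIERS = ["hbm", "system_ram", "nvram", "ssd"]      # fastest to slowest backing stores

class Page:
    def __init__(self, page_id, size_kb):
        self.page_id = page_id
        self.size_kb = size_kb                      # anywhere from 4 KB to 128 KB
        self.tier = "ssd"                           # assume cold data starts on storage

def promote(page, hbm_used_kb, hbm_capacity_kb):
    """Move a page into HBM if it fits; otherwise park it in system RAM."""
    if hbm_used_kb + page.size_kb <= hbm_capacity_kb:
        page.tier = "hbm"
        return hbm_used_kb + page.size_kb
    page.tier = "system_ram"
    return hbm_used_kb

# Example: try to promote a mix of small and large pages into a tiny 256 KB "HBM".
pages = [Page(i, size_kb) for i, size_kb in enumerate([4, 128, 64, 128, 16])]
used = 0
for p in pages:
    used = promote(p, used, hbm_capacity_kb=256)
    print(p.page_id, p.size_kb, "KB ->", p.tier)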

The design of the high-bandwidth cache controller is also handy in that AMD now has a platform on which to reuse this concept with new microarchitectures or scaled-up silicon and expand upon the same functionality. As it stands, it allows as much as 27 GB worth of assets to be put to use, enabling real-time OpenGL rendering of ~500 million triangles, and AMD states that the addressable virtual space can be expanded to as much as 512 TB.
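For context, the 512 TB figure lines up with a 49-bit virtual address space; the trivial check below shows the arithmetic, with the 49-bit assumption being ours rather than stated in this section.

# Quick sanity check of the 512 TB figure, assuming it corresponds to a
# 49-bit virtual address space (2^49 addressable bytes).
address_bits = 49
virtual_bytes = 2 ** address_bits
print(virtual_bytes / 2 ** 40)   # 512.0 tebibytes, i.e. the quoted 512 TB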