Wednesday, December 6th 2017
AMD Develops GDDR6 Controller for Next-generation Graphics Cards, Accelerators
This may not really come as news; it's more a statement of logical, albeit unconfirmed, facts than something unexpected. AMD is (naturally) working on a GDDR6 memory controller, which it looks to leverage in its next generations of graphics cards. This is an expected move: AMD is expected to continue using the more exotic HBM memory implementations on its top-tier products, but that leaves a large part of its product stack that still needs to be fed by high-speed memory solutions. With GDDR6 nearing widespread production and availability, it's only natural that AMD is upgrading its controllers for the less expensive, easier-to-implement memory solution on its future products.
The confirmation is still worth mentioning, though, as it comes straight from a principal engineer on AMD's technical team, Daehyun Jun. A LinkedIn entry (since removed) stated that he has been working on a DRAM controller for GDDR6 memory since September 2016. GDDR6 memory brings the advantages of higher operating frequencies and lower power consumption compared to GDDR5, and should deliver higher potential top frequencies than GDDR5X, which is already employed in top-tier NVIDIA cards. At launch, GDDR6 will start by delivering today's GDDR5X top speed of roughly 14 Gbps, with a current maximum of 16 Gbps achievable on the technology. This means more bandwidth (up to double that of current 8 Gbps GDDR5) and higher memory clock frequencies. GDDR6 will be rated at 1.35 V, the same as GDDR5X. SK Hynix, Samsung, and Micron have all announced their GDDR6 processes, so availability should be enough to fill NVIDIA's lineup as well as AMD's budget and mainstream graphics cards, should the company choose to use it. Simpler packaging and PCB integration should also help avoid the yield penalties that come with more complex memory subsystems.
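The bandwidth arithmetic behind these figures is straightforward: peak bandwidth is the per-pin data rate multiplied by the bus width, converted to bytes. A minimal sketch in Python (the 256-bit bus is illustrative, not tied to any announced product):

```python
def peak_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: per-pin data rate (Gbps) times bus width, in bytes."""
    return data_rate_gbps * bus_width_bits / 8

# Same hypothetical 256-bit bus, different memory standards:
print(peak_bandwidth_gbs(8, 256))   # GDDR5 at 8 Gbps   -> 256.0 GB/s
print(peak_bandwidth_gbs(14, 256))  # GDDR6 at launch   -> 448.0 GB/s
print(peak_bandwidth_gbs(16, 256))  # GDDR6 at 16 Gbps  -> 512.0 GB/s, double GDDR5
```

At 16 Gbps, GDDR6 indeed delivers exactly twice the bandwidth of 8 Gbps GDDR5 on the same bus width.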
Sources:
Tweakers.net, Guru3D; thanks @P4-630!
25 Comments on AMD Develops GDDR6 Controller for Next-generation Graphics Cards, Accelerators
I reserve the right to be wrong, though :)
And some further reasons.
This is more about compute, but it shows the general direction.
images.nvidia.com/events/sc15/pdfs/SC_15_Keckler_distribute.pdf
Plus the slide speculates about future HBM standards we have not even seen yet. I highly doubt future HBM standards up the operating voltage. At worst they keep it at the same level. HBM is also highly scalable in terms of density and die area.
The main difference between the two that is very useful in compute is HBM's lower latency.
Bandwidth alone is already there with GDDR5X. GDDR6 beats that.
The maximum you can achieve on a 384-bit bus (typical of today's cards) with GDDR6 is 768 GB/s, according to the currently predicted spec.
V100 with HBM2 gets close to 1 TB/s, so I don't know how you figured out that it "beats that".
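The figures in these two comments follow from the same data-rate-times-bus-width arithmetic; a quick sketch (V100's 4096-bit HBM2 interface at roughly 1.75 Gbps per pin is taken from NVIDIA's published ~900 GB/s figure):

```python
def peak_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    # Peak bandwidth in GB/s: per-pin data rate times bus width, converted to bytes
    return data_rate_gbps * bus_width_bits / 8

gddr6 = peak_bandwidth_gbs(16, 384)     # 16 Gbps GDDR6 on a 384-bit bus
hbm2 = peak_bandwidth_gbs(1.75, 4096)   # V100: four HBM2 stacks, 1024 bits each
print(gddr6, hbm2)  # 768.0 vs 896.0 -> HBM2's much wider bus still wins
```

The per-pin rate of HBM2 is far lower, but the 4096-bit interface more than makes up for it.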
Interposer is expensive.
AMD has shown that even with HBM, the power-delivery area on the die side is still huge, so saving 20 W where cooling density isn't an issue... isn't the issue.
But make no mistake, it's not going away.
You have Intel buying up HBM2. You have IBM buying up HBM2. And a lot more than just AMD or NVIDIA. They can only produce a certain amount of chips every month, and AMD gets a percentage of that.
Nvidia's P100 HBM2 chip hit the market long ago, but I guess they were sensible (or as some would say... shady).
GDDR6 has more bandwidth than that.
The only metric on which HBM2 in Vega 64 beats GDDR5X in the 1080 Ti is latency. Which, you know, is not really that important in gaming (unlike compute, where NVIDIA uses it too).
Come back when you have a better understanding of these things. Just some advice.
There surely are various configurations possible.
All other things the same, using GDDR6 instead of GDDR5X would lead to higher bandwidth.
GDDR6 and GDDR5X both have the same data rate. So if the clock speed and bus width are also the same, meaning all other things are equal, GDDR6 and GDDR5X will produce exactly the same bandwidth. Period.