Thursday, May 12th 2016
Micron GDDR5X Memory Chip Pictured Up Close
Here are some of the first pictures of a Micron-made GDDR5X memory chip, up close. The pictures reveal the 8-gigabit chip's package number, "6HA77Z9TXT-2N3Y"; the company's part number is "MT58K256M32." The GeForce GTX 1080 is the first production graphics card with GDDR5X memory, and Micron is the first to market with these chips. The GTX 1080 uses eight such 8-gigabit chips across its 256-bit wide memory interface to make up its standard 8 GB of memory. The reference-design GTX 1080 features a memory clock speed of 2.50 GHz (actual), or 10 Gbps (effective), giving the card a staggering 320 GB/s of memory bandwidth.
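For reference, that 320 GB/s figure falls straight out of the bus width and per-pin data rate. Here is a minimal Python sketch of the arithmetic, using only the numbers quoted above (the function name is ours, purely for illustration):

def memory_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    # Peak bandwidth in GB/s: (bus width in bits / 8 bits per byte) * per-pin data rate in Gbps
    return bus_width_bits / 8 * data_rate_gbps

# GTX 1080: 256-bit interface at 10 Gbps effective -> 320.0 GB/s
print(memory_bandwidth_gbs(256, 10.0))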
In its company blog post, Micron states: "Designed by our specialized team of Graphics memory engineers in Munich, GDDR5X provides NVIDIA with an unprecedented level of memory bandwidth for their new GeForce GTX 1080. The bandwidth delivered by Micron's GDDR5X memory is the result of thousands of hours of teamwork by some of the most brilliant minds in our two companies." Not only is GDDR5X faster, but it is also more energy-efficient than the 7 Gbps GDDR5 chips of the previous generation. This particular chip has a module voltage (VDDQ) of 1.35 V, and its package measures 14 mm x 10 mm x 1.1 mm (LxWxH).
Source: VideoCardz
31 Comments on Micron GDDR5X Memory Chip Pictured Up Close
So the latest and greatest Nvidia card, with the latest and greatest type of GDDR memory, has practically the same bandwidth as the last 3+ generations of cards.
So let's talk numbers: the bandwidth (B/W) of the GTX 1080 is 320 GB/sec, while the 980 Ti had 336 GB/sec with only 7 Gbps memory but a 384-bit bus. The 780 Ti also had 336 GB/sec with the same memory specs as the 980 Ti.
When Micron announced GDDR5X last year, they said the data rate would go up to 14 Gbps, but instead they give us this lower-performance 10 Gbps product (most likely due to low yields, meaning they couldn't produce enough high-frequency chips).
So let's assume you are the product manager for this product, the new GPU from Nvidia, and you know you can make a better product even using the 10 Gbps memory (unfortunately the 11 and 12 Gbps chips are only now being sampled, as I read yesterday). But in order to do so, you would need a 384-bit bus.
But no, you opt to milk the market for a longer period of time and put out a product with the same bandwidth as before, instead of going for 384-bit at 10 Gbps = 480 GB/sec (almost the same as HBM1, which reaches 512 GB/sec).
This memory has a potential of up to 512-bit at 14 Gbps = ~900 GB/sec. Of course Nvidia would never go for a 512-bit bus, but even at 384 bits you could have 672 GB/sec of memory bandwidth, which is exactly twice last year's 336 GB/sec.
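For what it's worth, the commenter's numbers check out; here is a quick, self-contained sketch of the same bus-width arithmetic (variable names are ours):

bw = lambda bits, gbps: bits / 8 * gbps  # GB/s = (bus bits / 8) * per-pin Gbps
print(bw(384, 7))   # 980 Ti / 780 Ti: 336.0 GB/sec
print(bw(384, 10))  # hypothetical 384-bit at 10 Gbps: 480.0 GB/sec
print(bw(384, 14))  # hypothetical 384-bit at 14 Gbps: 672.0 GB/sec
print(bw(512, 14))  # hypothetical 512-bit at 14 Gbps: 896.0 GB/sec (~900)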
So let's wait and see how many years it will take them to give us that kind of product. I assume that by then HBM2 might be much cheaper, so the whole thing might not be worth it after all.
Selling two cards one after the other for 600 dollars each is better than selling one card.
And we know it's business, they are out to make money, but again, that does not help us, and as far as I am concerned we should not support it.
It really only entered the common knowledge base like 2-3 months ago, and already there are hard specs and sampling. Now it's fully ramped up and Nvidia has a product based on it? We know GP104 probably first taped out around October '15, and A1 production was around Feb-March '16. So you're telling me Nvidia took a chance on 5X without making any optimizations to their memory controller? Or, yes, they got early access.
While I grasp the differences, this "real" implementation, as said above, isn't pushing the full potential; it's just convenient. Micron probably first pitched this (just) to Nvidia many months (18+) back. As AMD was in with Hynix and HBM, Micron figured Nvidia would wring their hands over having first crack, and then become a monetary partner on the low-down, keeping all the released info late in coming. Like just now hearing "oh," Micron is in full production. Not long ago it sounded like production might happen in Q3. Now it's hardly the end of Q2, and Nvidia already has the chips in its factories and cards built with them!
I'd bet AMD/RTG didn't get a heads-up on this until like the end of 2015, and by then it was too late to take it off the drawing board and implement it. I'm not saying this isn't business, or that it's truly unethical, but again, Nvidia doesn't like working within open standards. That might be a reason behind the whole "AMD gets first shot at Hynix HBM2 production" situation, and why Samsung and Nvidia had to make up in the courts.
I think it will turn out that GDDR5X is more a means of keeping the bus on a less costly 256-bit PCB, all while keeping the bandwidth in the sweet spot for 2K. It's probably still enough to hold its own in 4K, given the chip has only 25% more CUDA cores, not the 40% more the 980 Ti packed just to sneak into 4K.
But hey, the secret could be in the sauce.
@Casecutter actually, I recall hearing about GDDR5X right after hearing about HBM. So both manufacturers have known of it for quite a while. That means Micron started work on this at least a couple of years ago.
I guess it would be much more on an AMD card with a 512-bit bus.
The card has delta compression; it doesn't need more than 320 GB/s. Also, this 320 GB/s with delta compression is worth a lot more than the 780 Ti's 336 GB/s of 1750 MHz GDDR5 without compression.
The effective bandwidth of the 980 Ti and 1080 is way higher than that of the 780 Ti. The 780 Ti's bandwidth is comparable to that of the GTX 980 (224 GB/s plus the compression speed-up, which nearly matches it or more).
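That "worth more" argument amounts to dividing raw bandwidth by the fraction of traffic that still reaches memory after compression. A minimal sketch, with made-up savings percentages chosen purely to illustrate the effect (not official NVIDIA figures):

def effective_bandwidth(raw_gbs, traffic_saved):
    # Effective GB/s if delta compression removes the `traffic_saved` fraction of memory traffic
    return raw_gbs / (1.0 - traffic_saved)

print(effective_bandwidth(224, 0.25))  # GTX 980 at a hypothetical 25% saving -> ~298.7 GB/s
print(effective_bandwidth(320, 0.30))  # GTX 1080 at a hypothetical 30% saving -> ~457.1 GB/s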
So there's no "conspiracy" here. ;)
GDDR5X is a very welcome alternative; HBM2 would make the cards even more expensive. I wonder if GP100 could be made with GDDR5X; I think yes. High-clocked (14 Gbps) GDDR5X on a 384-bit bus should easily have enough bandwidth compared to HBM2, but it would draw more power, I guess.
In the long run, the only disadvantage of GDDR5X compared to HBM2 is, again, power consumption; nothing has really changed compared to GDDR5, it's just a bit better.
But AMD cards do come with a 512-bit bus.
Since Micron wrote the specification, JEDEC ratification was basically just a rubber-stamp job. AFAIK, Micron began a pilot manufacturing line for GDDR5X even before the January ratification happened. Because people at AMD have reading and learning disabilities? Or because they can't afford an internet connection? :roll: You're telling me the entire industry, and people who might only browse mainstream tech sites, are more informed than AMD? Now that would explain a few things. That's exactly what you appear to be saying. You think this situation is more of a monopoly than AMD and Hynix working on HBM out of the public eye at least four years before the first product arrived? :rolleyes:
Keeping stuff secret for years is not worthy of mention, but Micron announcing GDDR5X on 1 September 2015, and stating their immediate intention to make it an open standard (which happened 4 months later), is somehow shady?
I keep everything stored in my memory-blank too!
I find it amazing how many people go rabid over this. "BAH, NVIDIA'S NEW HIGH-END CARD IS FASTER THAN THE ENTHUSIAST CARD, THEY ARE RIPPING US OFF BECAUSE IT ISN'T TWICE AS FAST AS A TITAN X AND DOESN'T HAVE A 9000-BIT BUS!!!!1!!1!!1" Every generation is like this: what once took enthusiast-class hardware now only takes mid-high to high-range hardware, high end is now mid range, etc. The way I see it, the 1080 is twice as fast as the card it replaces, the 980, and is faster than the Titan X, while being a much smaller and more energy-efficient chip. At 180 watts, it blows the 980 out of the water, and rightly so.
Interesting that you completely ignore the fact that AMD and Hynix worked on HBM in secret for years, but that's OK in your book.
Hypocrite. I'd continue to debate the point, but you'll just forget this thread ever took place in a few days... Don't worry though, your highly amusing post won't fade into obscurity - I'll see to it. ;)
EDIT for grammatical error since Casecutter sees this as having major importance.
HMC was designed almost exclusively for Xeon Phi. When Nvidia evinced interest, Intel crushed its development and came up with MCDRAM instead.
Nvidia wasn't working with Intel on HMC. They never had anything to do with HMC (they weren't even a member of the HMC Consortium), never used HMC, and only ever showed a mock-up of what an HMC-equipped module would look like. That is NOT the same thing as actively working on a technology for years. The fact that you are still defending AMD despite this - and almost certainly not knowing anything about HMC (or much else, it would seem) - makes your posting just look like knee-jerk rants interspersed with AMD shilling. :rolleyes: