Monday, October 26th 2015

GDDR5X Puts Up a Fight Against HBM, AMD and NVIDIA Mulling Implementations

There's still a little bit of fight left in the GDDR5 ecosystem against the faster and more energy-efficient HBM standard, which has a vast and unexplored performance growth curve. The new GDDR5X standard offers double the bandwidth per-pin compared to current generation GDDR5, without any major design or electrical changes, letting GPU makers make a seamless and cost-effective transition to it.

In a presentation by a DRAM maker that leaked to the web, GDDR5X is touted as offering double the data per memory access, at 64 bytes/access, compared to 32 bytes/access for today's fastest GDDR5, which is currently saturating its clock/voltage curve at 7 Gbps. GDDR5X gives the ageing DRAM standard a new lease of life, offering 10-12 Gbps initially, with a goal of 16 Gbps in the long term. GDDR5X chips will have pin layouts identical to their predecessors', so it should cost GPU makers barely any R&D to implement them.
When mass-produced by companies like Micron, GDDR5X is touted to be extremely cost-effective compared to upcoming HBM standards, such as HBM2. According to a Golem.de report, both AMD and NVIDIA are mulling GPUs that support GDDR5X, so it's likely that the two could reserve expensive HBM2 solutions for only their most premium GPUs, and implement GDDR5X on their mainstream/performance solutions, to keep costs competitive.
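As a rough illustration of those figures (a sketch assuming the standard 32-bit chip interface and the 8n/16n prefetch that the 32/64 byte-per-access numbers imply; the 256-bit bus width is an arbitrary example, not a figure from the presentation):

```python
# Back-of-envelope illustration of the figures quoted above. The 8n/16n
# prefetch and 32-bit chip I/O are standard GDDR5/GDDR5X assumptions; the
# 256-bit bus width is an arbitrary example, not a figure from the slides.

def access_bytes(prefetch_n: int, chip_io_bits: int = 32) -> int:
    """Bytes fetched per memory access for a chip with the given prefetch."""
    return prefetch_n * chip_io_bits // 8

def card_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Aggregate card bandwidth in GB/s from bus width and per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps

print(access_bytes(8))    # GDDR5:  8n prefetch  -> 32 bytes/access
print(access_bytes(16))   # GDDR5X: 16n prefetch -> 64 bytes/access

for rate in (7, 10, 12, 16):  # GDDR5 today vs. GDDR5X initial and long-term targets
    print(f"256-bit bus @ {rate} Gbps -> {card_bandwidth_gbs(256, rate):.0f} GB/s")
```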
Source: Golem.de

69 Comments on GDDR5X Puts Up a Fight Against HBM, AMD and NVIDIA Mulling Implementations

#1
dinmaster
Don't see why they wouldn't use it. No wonder AMD will make a new series instead of just rebranding - this is an easy jump for their products.
Posted on Reply
#2
Musaab
Does anyone know how much HBM and GDDR5 cost? Because if it's something like $20 to $50 for 8 GB of HBM, then GDDR5X makes sense in the sub-$300 market but not above that. Hope to see hybrid cards that use both in the sub-$300 range.
Posted on Reply
#3
RejZoR
Though if it's the same as GDDR5, it's easy to implement, but it carries the same size footprint, which is something HBM has an advantage over by being very compact.
Posted on Reply
#4
Xajel
Bandwidth is only half of the story. HBM uses much less power than GDDR5, and we know nothing about GDDR5X's power usage either, which I think will be even higher.

In a high-end graphics card, a less demanding memory system means more power headroom for the GPU. AMD used this fact with the Fury X, so I think high-end graphics cards will benefit more from HBM, especially since HBM is still in its first generation. Second-generation HBM2, which is faster and allows more memory than the first generation, is coming with next-gen AMD and NVIDIA parts, making even more room for the GPU.

Plus it's not just power: HBM also doesn't run as hot as GDDR5, especially since GDDR5 requires its own power MOSFETs, which also consume power and generate heat.

So HBM (especially HBM2) is a winner over GDDR5 on every front except cost, so I think it will stay in the high-end market, and especially dual-GPU cards, where even PCB space is a challenge of its own!
Posted on Reply
#5
NC37
Makes sense they'd use GDDR5 or GDDR5X for the sub $300 area. New tech always = higher intro price.
Posted on Reply
#6
ArdWar
"Easier implementation" isn't gonna be the big deal here. Manufacturing-wise, once you use a specific technology in your product, it's easier to implement that technology across all of your similar product. Even more if the tech are significantly different, which is the case in HBM vs GDDR case.

Another factor like supply chain/avability or massively different cost will force manufacturer to use second choice, which made easier by the already familiar technology. That in turn made HBM the niche one, with corresponding cost overhead.
Posted on Reply
#7
FordGT90Concept
"I go fast!1!11!1!"
Seems pointless. There is really no more room for growth until they get off 28nm. AMD is going to stick with HBM, so the only potential client is NVIDIA, and I doubt NVIDIA wants to re-release cards again before 14/16nm.

On the other hand, I could see GDDR5X being used on cheaper 14/16nm cards.
Posted on Reply
#8
HumanSmoke
Xajel: Bandwidth is only half of the story. HBM uses much less power than GDDR5, and we know nothing about GDDR5X's power usage either, which I think will be even higher.
Bandwidth is less than half the story. GDDR5X likely costs the same to implement as standard 20nm GDDR5, which is a commodity product. HBM, while relatively cheap to manufacture, costs more to implement due to the microbumping required.
FordGT90Concept: On the other hand, I could see GDDR5X being used on cheaper 14/16nm cards.
That would be the market: small GPUs where size/power isn't much of an issue and bandwidth isn't paramount. A GDDR5X-equipped board could still use a narrow memory bus, gain appreciably in bandwidth, and very likely remain GPU-constrained rather than bandwidth-constrained.
Posted on Reply
#9
Steevo
So, double the bit fetch length for high demand, with no mention of what it does to timings, which are of high importance as well, or of how much more logic, termination, and many other things need to be built into expensive die space. Whereas HBM, I believe, is going to allow for termination on the substrate, which is cheap by comparison, may well also allow for substrate-based power-management logic, and allows for shorter, less attenuating signal paths, allowing scalable timing expectations.
Posted on Reply
#10
Musaab
HumanSmoke: Bandwidth is less than half the story. GDDR5X likely costs the same to implement as standard 20nm GDDR5, which is a commodity product. HBM, while relatively cheap to manufacture, costs more to implement due to the microbumping required.

That would be the market: small GPUs where size/power isn't much of an issue and bandwidth isn't paramount. A GDDR5X-equipped board could still use a narrow memory bus, gain appreciably in bandwidth, and very likely remain GPU-constrained rather than bandwidth-constrained.
As for NVIDIA, they usually replace their whole product range each time they launch a new core design. AMD announced that they will build their new generation around Arctic Islands.
Posted on Reply
#11
lilhasselhoffer
...?

Where is the fight?


GDDR5X would functionally be on lower end hardware, taking the place of retread architectures. This is similar to how GDDR3 existed for quite some time after GDDR5 was implemented, no?

If that was the case GDDR5X wouldn't compete with HBM, but be a lower cost implementation for lower end cards. HBM would be shuffled from the highest end cards downward, making the competition (if the HBM2 hype is to be believed) only on the brackets which HBM production could not fully serve with cheap chips.



As far as Nvidia versus AMD, why even ask about that? Most assuredly, Pascal and Arctic Islands are shooting for lower temperatures and power draw. Upgrading the RAM to something more power hungry opposes that goal, so the only place it'll be viable is where the GPUs are smaller, which is just the lower-end cards. Again, there is no competition.

I guess I'm agreeing with most other people here. GDDR5X is RDRAM. It's better than what we've got right now, but not better than pursuing something else entirely.
Posted on Reply
#12
Casecutter
If we're hearing of this just now, one would reasonably believe a company like Micron has had this for a while now; face it, this didn't come as a lightning bolt a month ago. Let's say it's something they had an inkling of a year ago. At that time they would've white-papered it to Nvidia/AMD engineering.

For assurance I'll ask: can this be implemented without changes to the chip's memory controller, meaning existing parts could've gotten GDDR5X as pretty much plug-and-play?

Now, for Maxwell a year ago would've been late for sure, and/or with its memory compression and overall efficiency Maxwell could forgo the supposed benefits at the price/margins they intended to hit. Though one would've thought AMD could've made use of this on Hawaii (390/390X), so why didn't they? Perhaps 8 GB was more of a "marketing boon" than the extra bandwidth for the power GDDR5X dictates. Probably 4 GB of new GDDR5X was more costly versus 8 GB of GDDR5, used more power, and with a 512-bit bus it probably didn't offer the performance jump to justify it.

I think we might see it with 14/16nm FinFET parts that hold to a 192/256-bit bus and suffice fine with just 4 GB. I think they need to offset the work of the new process while staying in markets where 4 GB is all that's required. If it's a case of resolutions that truly can use more than 4 GB, HBM2 has it beat.
Posted on Reply
#13
arbiter
Casecutter: Now, for Maxwell a year ago would've been late for sure, and/or with its memory compression and overall efficiency Maxwell could forgo the supposed benefits at the price/margins they intended to hit. Though one would've thought AMD could've made use of this on Hawaii (390/390X), so why didn't they? Perhaps 8 GB was more of a "marketing boon" than the extra bandwidth for the power GDDR5X dictates. Probably 4 GB of new GDDR5X was more costly versus 8 GB of GDDR5, used more power, and with a 512-bit bus it probably didn't offer the performance jump to justify it.
Not knowing everything about GDDR5X, I would assume it needs a change on the controller side of the chip to manage it properly, and AMD likely didn't want to put money into that for a rebranded GPU.
Posted on Reply
#14
Kanan
Tech Enthusiast & Gamer
Can you all not read? It says right in the presentation that it has reduced voltage, so power usage still goes down, not up - the exact opposite.

This is an alternative to GDDR5 for mainstream/lower cards of the upcoming generation, nothing more. Cards can have a smaller or cheaper memory controller compared to normal GDDR5 because of the higher bandwidth of these chips.
Posted on Reply
#15
xorbe
lilhasselhoffer: GDDR5X would functionally be on lower end hardware, taking the place of retread architectures. This is similar to how GDDR3 existed for quite some time after GDDR5 was implemented, no?
Except that there isn't a card with HBM that's showing a clean-sweep advantage due to having used HBM. GDDR5 was a big step over GDDR3, no questions asked.
Posted on Reply
#16
HumanSmoke
Kanan: Can you all not read? It says right in the presentation that it has reduced voltage, so power usage still goes down, not up - the exact opposite.
This is an alternative to GDDR5 for mainstream/lower cards of the upcoming generation, nothing more. Cards can have a smaller or cheaper memory controller compared to normal GDDR5 because of the higher bandwidth of these chips.
QFT, but I doubt many actually even glanced at the presentation - which also states that the I/O pin count stays the same and that only a "limited effort" is required to update an existing GDDR5 IMC.
Maybe people are too pressed for time to actually read the info and do some basic research. It's much easier to skim-read (if that) and spend most of the available time writing preconceived nonsense.
Posted on Reply
#17
lilhasselhoffer
Kanan: Can you all not read? It says right in the presentation that it has reduced voltage, so power usage still goes down, not up - the exact opposite.

This is an alternative to GDDR5 for mainstream/lower cards of the upcoming generation, nothing more. Cards can have a smaller or cheaper memory controller compared to normal GDDR5 because of the higher bandwidth of these chips.
I think you've forgotten reality somewhere in the mix, so let's review it. You can read an article, and process the facts without detailing all the logical steps you've taken. Let's review:

1) GDDR5X is lower voltage (1.5 V to 1.35 V, or a 10% decrease). Fact.
2) GDDR5X targets higher capacities. Extrapolated fact, based upon the prefetch focus of the slides.
3) Generally speaking, GPU RAM is going to be better utilized, so low-end cards are going to keep getting more RAM as a matter of course. Demonstrable reality.
4) Assume that RAM quantities only double. You're looking at a 10% decrease in voltage with a 100% increase in chip count. That's a net power increase of roughly 80% (insanely ballparked based off variable voltage and constant leakage; more likely a 50-80% increase in reality - see the sketch below). Reasonable assumption of net influence.
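A quick sketch of that back-of-envelope arithmetic (assuming per-chip power scales with voltage squared and linearly with chip count, ignoring leakage and I/O; it lands inside the 50-80% range above):

```python
# Back-of-envelope only: assumes per-chip power scales with V^2 and total
# memory power scales linearly with chip count; leakage, I/O, and data-rate
# effects are ignored, so treat the result as a rough lower bound.
v_gddr5, v_gddr5x = 1.50, 1.35   # volts, per the leaked slides
count_scale = 2.0                # assume the amount of RAM (chip count) doubles

per_chip_scale = (v_gddr5x / v_gddr5) ** 2     # ~0.81x power per chip
total_scale = per_chip_scale * count_scale     # ~1.62x memory power overall
print(f"Net memory power vs. today: ~{(total_scale - 1) * 100:.0f}% higher")
```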

Therefore, GDDR5X is a net improvement per memory cell, but as more cells are required it's a net increase in power overall. HBM has yet to show real improvements in performance, but you've placed the cart before the horse there. Fury managed to take less RAM and make it not only fit in a small thermal envelope, but actually perform well despite basically being strapped to a GPU otherwise not really meant for it (yay interposer).

While I respect you pointing out the obvious, perhaps you should ask what underlying issues you might be missing before asking if everyone is a dullard. I can't speak for everyone, but if I explained 100% of my mental processes nobody would ever listen. To be fair though, I can't say this is everyone's line of reasoning.
Posted on Reply
#18
HumanSmoke
lilhasselhoffer: While I respect you pointing out the obvious, perhaps you should ask what underlying issues you might be missing before asking if everyone is a dullard.
I took Kanan's choice of words to mean that less of the GPU die need be devoted to its IMC's and memory pinout. For example, Tahiti's uncore accounts for more than half the GPU die area, and GDDR5 memory controllers and their I/O account for the greater portion of that.


Using the current Hawaii GPU* as a further example (since it is architecturally similar):

512-bit bus width / 8 * 6 GHz effective memory speed = 384 GB/sec of bandwidth.
Using GDDR5X on a future product with half the IMC's:
256-bit bus width / 8 * 10-16 GHz effective memory speed = 320-512 GB/sec of bandwidth, using half the number of IMC's and half the memory pinout die space.

So you have effectively saved die space and lowered the power envelope on parts whose prime consideration is production cost and sale price (which Kanan referenced). The only way this doesn't make sense is if HBM + interposer + microbump packaging is less expensive than traditional BGA assembly - something I have not seen mentioned, nor believe to be the case.

* I don't believe Hawaii-type performance is where the product is aimed, but merely used as an example. A current 256-bit level of performance being able to utilize a 128-bit bus width is a more likely scenario.
Posted on Reply
#19
lilhasselhoffer
HumanSmoke: I took Kanan's choice of words to mean that less of the GPU die need be devoted to its IMC's and memory pinout. For example, Tahiti's uncore accounts for more than half the GPU die area, and GDDR5 memory controllers and their I/O account for the greater portion of that.


Using the current Hawaii GPU* as a further example (since it is architecturally similar):

512-bit bus width / 8 * 6 GHz effective memory speed = 384 GB/sec of bandwidth.
Using GDDR5X on a future product with half the IMC's:
256-bit bus width / 8 * 10-16 GHz effective memory speed = 320-512 GB/sec of bandwidth, using half the number of IMC's and half the memory pinout die space.

So you have effectively saved die space and lowered the power envelope on parts whose prime consideration is production cost and sale price (which Kanan referenced). The only way this doesn't make sense is if HBM + interposer + microbump packaging is less expensive than traditional BGA assembly - something I have not seen mentioned, nor believe to be the case.

* I don't believe Hawaii-type performance is where the product is aimed, but merely used as an example. A current 256-bit level of performance being able to utilize a 128-bit bus width is a more likely scenario.
I don't contest this point; other components may well be able to be made cheaper. My issue is (bolded below):
Kanan: Can you all not read? It says right in the presentation that it has reduced voltage, so power usage still goes down, not up - the exact opposite.

This is an alternative to GDDR5 for mainstream/lower cards of the upcoming generation, nothing more. Cards can have a smaller or cheaper memory controller compared to normal GDDR5 because of the higher bandwidth of these chips.
I don't argue that it might be more cost-effective to run GDDR5X (I believe I made that point myself). My point of contention is the suggestion that GDDR5X will have a lower voltage and thus will run cooler. While strictly correct, the reality is that the three generations where 0.5-2 GB of VRAM was enough are coming to an end. Thus the decrease in GDDR5X voltage will still represent a net real-world increase in temperatures, due to needing to pack more in.
Posted on Reply
#20
Kanan
Tech Enthusiast & Gamer
Still, GDDR5X needs less power for the same work, and that's what I meant - and I'm pretty sure the others said the opposite, so I disagreed. What you're basically doing is speculating that RAM is doubled again on mainstream cards, which I'm not quite convinced will happen. 8 GB of RAM is a lot, and I don't see cards weaker than a 390 running 8 GB of (GDDR5X) RAM. And even if they did, this has nothing to do with what I was talking about - doubling the amount and therefore increasing total power has nothing to do with the RAM type itself being more efficient than GDDR5; that's a moot point.
HBM has yet to show real improvements in performance, but you've placed the cart before the horse there.
I didn't talk about HBM one bit. But you're still wrong: HBM has already proven itself on Fiji cards - it's a fact that they would be even more bandwidth-limited without HBM, as overclocking the HBM increased the performance of every Fiji card further. This has proven that Fiji does not have too much bandwidth; it has proven that it can't have enough bandwidth. Besides, HBM made Fiji possible - there would be no Fiji without HBM. The same chip with GDDR5 would've taken more than 300 W TDP, a no-go, or would have needed a lower clock speed, which is a no-go too, because AMD wanted to achieve Titan X-like performance. The 275 W TDP of the Fury X was only possible with HBM on it. So not only DID the increased bandwidth help (a lot), it helped make the whole card possible at all.
Posted on Reply
#21
lilhasselhoffer
Kanan: Still, GDDR5X needs less power for the same work, and that's what I meant - and I'm pretty sure the others said the opposite, so I disagreed. What you're basically doing is speculating that RAM is doubled again on mainstream cards, which I'm not quite convinced will happen. 8 GB of RAM is a lot, and I don't see cards weaker than a 390 running 8 GB of (GDDR5X) RAM. And even if they did, this has nothing to do with what I was talking about - doubling the amount and therefore increasing total power has nothing to do with the RAM type itself being more efficient than GDDR5; that's a moot point.



I didn't talk about HBM one bit. But you're still wrong: HBM has already proven itself on Fiji cards - it's a fact that they would be even more bandwidth-limited without HBM, as overclocking the HBM increased the performance of every Fiji card further. This has proven that Fiji does not have too much bandwidth; it has proven that it can't have enough bandwidth. Besides, HBM made Fiji possible - there would be no Fiji without HBM. The same chip with GDDR5 would've taken more than 300 W TDP, a no-go, or would have needed a lower clock speed, which is a no-go too, because AMD wanted to achieve Titan X-like performance. The 275 W TDP of the Fury X was only possible with HBM on it. So not only DID the increased bandwidth help (a lot), it helped make the whole card possible at all.
Allow me a little exercise, and an apology. First, the apology. I conflated the quote from @xorbe with what you said. It was incorrect, and my apologies for that.

Next, let's review the quotes.
Kanan: Can you all not read? It says right in the presentation that it has reduced voltage, so power usage still goes down, not up - the exact opposite...
A very true point, yet somewhat backwards. Let's review the slides you criticize others for not reading - in particular, 94a. The way the memory supposedly doubles bandwidth is a much larger prefetch. That's all fine and dandy, but if I'm reading this correctly, the goal is to have more RAM to cover the increasing prefetch. This would generally imply that more RAM would produce better results, as more data can be stored, with prefetching accounting for an effectively increased bandwidth.

Likewise, everyone wants more RAM. Five years ago a 1 GB card was high end, while today an 8 GB card is high end. Do you really expect that trend to not continue? Do you think somebody out there is going to decide that 2 GB (what we've got now on middle ground cards) is enough? I'd say that was insane, but I'd prefer not to have it taken as an insult. For GDDR5X to perform better than GDDR5 you'd have to have it be comparable, but that isn't what sells cards. You get somebody to upgrade cards by offering more and better, which is easily demonstrable when you can say we doubled the RAM. While in engineering land that doesn't mean squat, it only matters that people want to fork their money over for what is perceived to be better.
Kanan... But you're still wrong, HBM has already proven itself on Fiji cards - it's a fact that they would be even more bandwidth limited without HBM, as overclocking HBM increased performance of every Fiji card further. This has proven that Fiji has not too much bandwidth, it has proven that it can't have enough bandwidth. Besides HBM made Fiji possible - there would be no Fiji without HBM. The same chip with GDDR5 would've taken more than 300 W TDP, a no-go, or would have needed a lower clock speed, which is a no-go too, because AMD wanted to achieve Titan X-like performance. The 275W TDP of Fury X was only possible with HBM on it. So not only DID the increased bandwidth help (a lot), it helped making the whole card possible at all.
To the former point: prove it. The statement that overclocking increases performance is... I'm going to call it a statement so obvious as to be useless. The reality is that any data I can find on Fiji suggests that overclocking the core vastly outweighs the benefits of clocking the HBM alone (this thread might be of use: www.techpowerup.com/forums/threads/confirmed-overclocking-hbm-performance-test.214022/ ). If you can demonstrate that HBM overclocking substantially improves performance (let's say 5:4 scaling, instead of a better 1:1 ratio) I'll eat my words. What I've seen is sub 2:1, which in my book says that the HBM bandwidth is sufficient for the cards, despite it being the more limited HBM1.
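For clarity, the scaling metric meant here is just performance gain divided by memory-clock gain - a quick sketch, using the 8% overclock / 4% gain figures cited further down purely as an example:

```python
# Hypothetical illustration of the scaling metric discussed above; a ratio
# near 1.0 suggests the card is bandwidth-limited, near 0 suggests it isn't.
mem_clock_gain = 0.08   # e.g. an 8% HBM overclock (figure cited below)
fps_gain = 0.04         # e.g. the ~4% performance gain it returned

scaling = fps_gain / mem_clock_gain
print(f"Scaling ratio: {scaling:.2f}")   # 0.50, i.e. roughly 2:1 clock gain to perf gain
```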

To the latter point: it's not addressing the issue. AMD wanting overall performance similar to that of the Titan didn't require HBM, as demonstrated by the Titan itself. What it required was an investment in hardware design, rather than re-releasing the same architecture with minor improvements, a higher clock, and more thermal output. I love AMD, but they're seriously behind the ball here. The 7xxx, 2xx, and 3xx cards follow that path to a T. While the 7xxx series was great, Nvidia invested money back into R&D to produce a genuinely cooler, less power-hungry, and better-overclocking 9xx series of cards. Nvidia proved that Titan-level performance could easily be had with GDDR5. Your argument is just silly.


As to the final point, we agree that this is a low-end card thing. Between being perfect for retread architectures, cheap to redesign, and more efficient in direct comparison, it's a dream for the mid-range 1080p crowd. It'll be cheap enough to give people 4+ GB (double the 2 GB that people are currently dealing with on mid-range cards) of RAM, but that's all it's doing better.

Consider me unimpressed with GDDR5X, but hopeful that it continues to exist. AMD has basically developed HBM from the ground up, so they aren't going to use GDDR5X on anything but cheap cards. Nvidia hopefully will make it a staple of their low-end cards, so HBM always has enough competition to keep it honest. Do I believe we'll see GDDR5X in any real capacity? No. 94a says that GDDR5X will be coming out around the same time as HBM2. That means it'll be fighting for a seat at the Pascal table, when almost everything is supposedly already set for HBM2. That's a heck of a fight, even for a much cheaper product.
Posted on Reply
#22
Casecutter
Kanan: Can you all not read? It says right in the presentation that it has reduced voltage, so power usage still goes down, not up - the exact opposite.
Man, a guy writes three words and the venom erupts!
Casecutter: ...than the extra bandwidth for the power GDDR5X dictates. Probably 4 GB of new GDDR5X was more costly versus 8 GB of GDDR5, used more power
My thinking was more about the overall gain in perf/watt... Regrettably, those three words weren't chosen with deliberate vigilance, but then again I'm not here for this to be my "life's work", or to be the "keeper" of all things regarding correctness.

The slide said:
Lower VDD, VDDQ for reduced power
• VDD, VDDQ = 1.35V for reduced energy/bit
• GDDR5 cannot abandon VDD, VDDQ = 1.5V due to legacy

Now, that would imply being more energy-efficient, but it can't tell us whether the amount saved offers a reasonable gain in real-world performance.
btarunr: more energy-efficient
As for the memory controller, it said:
• The GDDR5 command protocol remains preserved as much as possible
• GDDR5 ecosystem is untouched

Though the next slide states:
Targeting a limited effort to upgrade a GDDR5 memory controller.

That sounds like existing memory controllers would need some tweaking, and, correct, no one would see the value in re-working a chip's controller when all they need is for the chip to hold out 6-8 months.
Posted on Reply
#23
HumanSmoke
Casecutter: That sounds like existing memory controllers would need some tweaking, and, correct, no one would see the value in re-working a chip's controller when all they need is for the chip to hold out 6-8 months.
You are making a couple of assumptions.
Firstly, who's to say that a chip has a lifetime of 6-8 months? Most GPUs in production have lifetimes much longer than that, and with process node cycles likely to get longer, I doubt GPUs will suddenly have their lifetimes shortened.
Secondly, memory controllers and the GDDR5 PHY (the physical layer between the IMC's and the memory chips) are routinely revised - and indeed reused - across generations of architectures. One of the more recent high-profile/high-sales cases was the implementation initiated on Kepler (GTX 680's GK104-400 to GTX 770's GK104-425), which is presently used in Maxwell. AMD undoubtedly reuses its memory controllers and PHY as well. It is quite conceivable (and very likely) that future architectures such as Arctic Islands and Pascal will be mated with GDDR5/GDDR5X logic blocks in addition to HBM, since the increased costs associated with the latter (interposer, micro-bump packaging, assembly, and verification) would make a $100-175 card much less viable even if the GPU were combined with a single stack of HBM memory.
Posted on Reply
#24
Kanan
Tech Enthusiast & Gamer
lilhasselhoffer: Allow me a little exercise, and an apology. First, the apology. I conflated the quote from @xorbe with what you said. It was incorrect, and my apologies for that.
Np.
Do you really expect that trend to not continue? Do you think somebody out there is going to decide that 2 GB (what we've got now on middle ground cards) is enough? I'd say that was insane, but I'd prefer not to have it taken as an insult.
No, sorry for not clarifying it correctly - my answer was only about the next gen of cards coming in 2016. What I think is that 4 GB for the lower cards and 8 GB for the middle cards will suffice, so it's hardly an increase. The top-end cards can have anything between 8 and 16 GB - that's not entirely clear at the moment, because 8 GB is a lot and will still be a lot in 2016, I'm pretty sure of that. But other than that I expect the trend (more need of VRAM) to continue, just that it won't be in 2016, rather 2017.
1 GB cards were good for 3-5 years, or until now, depending on the game, but I'd say at least 3 years. I used an HD 5970 for a long time and it only has 1 GB of VRAM - so don't overestimate how much VRAM is needed. That said, 4 or 8 GB will be perfectly fine in 2016, as long as you don't play 4K with highest details on a 4 GB card - and even that will depend on the game. But 6 GB (with compression, like the 980 Ti) or 8 GB (like the R9 390X) will suffice.
For GDDR5X to perform better than GDDR5 you'd have to have it be comparable, but that isn't what sells cards. You get somebody to upgrade cards by offering more and better, which is easily demonstrable when you can say we doubled the RAM. While in engineering land that doesn't mean squat, it only matters that people want to fork their money over for what is perceived to be better.
Sadly, this is true. But I don't care about it much - we are here to talk about what is true, or what is really needed, not what the average users out there think is better. They think a lot, and 50-90% of it is wrong...
To the former point: prove it. The statement that overclocking increases performance is... I'm going to call it a statement so obvious as to be useless. The reality is that any data I can find on Fiji suggests that overclocking the core vastly outweighs the benefits of clocking the HBM alone (this thread might be of use: www.techpowerup.com/forums/threads/confirmed-overclocking-hbm-performance-test.214022/ ). If you can demonstrate that HBM overclocking substantially improves performance (let's say 5:4 scaling, instead of a better 1:1 ratio) I'll eat my words. What I've seen is sub 2:1, which in my book says that the HBM bandwidth is sufficient for the cards, despite it being the more limited HBM1.
I never said HBM overclocking brings a lot of performance. I just said it DOES, and that is the important thing - because a card that already had more bandwidth than it needed would never gain any FPS from overclocking the VRAM. So HBM is good, it helps - and don't forget AMD's compression is a lot worse than what NV has on Maxwell, so they NEED lots of bandwidth on their cards. This is why Hawaii has a 512-bit bus, at first with RAM clocked at only 1250 MHz and now at 1500 MHz - because it never has enough, and this is why Fiji needs even more, because it is a more powerful chip - so it got HBM. The alternative would have been 1750 MHz GDDR5 on a 512-bit bus like Hawaii's, but that would've drawn too much power and still been inferior in terms of maximum bandwidth.
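As a rough sketch of the bandwidth arithmetic behind that (using the standard GDDR5 quad-data-rate relationship, effective rate = 4 x memory clock; the HBM figure is Fiji's stock 512 GB/s spec):

```python
# Illustrative bandwidth math for the configurations mentioned above.
def gddr5_gbs(bus_bits: int, mem_clock_mhz: int) -> float:
    """GDDR5 bandwidth in GB/s: quad data rate, i.e. 4 transfers per clock."""
    return bus_bits / 8 * (mem_clock_mhz * 4) / 1000

print(gddr5_gbs(512, 1250))   # Hawaii at launch (290X):       320 GB/s
print(gddr5_gbs(512, 1500))   # Hawaii refresh (390/390X):     384 GB/s
print(gddr5_gbs(512, 1750))   # hypothetical GDDR5-based Fiji: 448 GB/s
print(4096 / 8 * 1.0)         # Fiji's actual HBM: 4096-bit at 1 Gbps/pin = 512 GB/s
```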
To the latter point: it's not addressing the issue. AMD wanting overall performance similar to that of the Titan didn't require HBM, as demonstrated by the Titan itself. What it required was an investment in hardware design, rather than re-releasing the same architecture with minor improvements, a higher clock, and more thermal output. I love AMD, but they're seriously behind the ball here.
This is not about what AMD should've had; it is about what AMD has (only the GCN 1.2 architecture) and what they could do with it. HBM made a big, big GCN chip possible - without it, never possible. And this is what I meant. From my perspective it is addressing the issue, even if not directly. It helped an older architecture go big and gave AMD a strong performer they otherwise would've missed, because Fiji with GDDR5 would have been impossible - too high a TDP; they couldn't have done that, it would've been awkward.
And by the way, I think they lacked the money to develop a new architecture like Nvidia did.
The 7xxx, 2xx, and 3xx cards follow that path to a T. While the 7xxx series was great, Nvidia invested money back into R&D to produce a genuinely cooler, less power-hungry, and better-overclocking 9xx series of cards. Nvidia proved that Titan-level performance could easily be had with GDDR5. Your argument is just silly.
And therefore my argument is the opposite of silly. First try to understand before you run out and call anyone "silly". You are very quick to judge or harass people; it's not the first time you've done that - you do it way too frequently, and shouldn't do it at all.
As to the final point, we agree that this is a low-end card thing. Between being perfect for retread architectures, cheap to redesign, and more efficient in direct comparison, it's a dream for the mid-range 1080p crowd. It'll be cheap enough to give people 4+ GB (double the 2 GB that people are currently dealing with on mid-range cards) of RAM, but that's all it's doing better.
I only see HBM1/2 for high-end cards now or in 2016 because it's too expensive for lower cards - therefore GDDR5X is good for mid-range to semi-high-end cards, I'd say. I'd even bet on that. Do you really expect a premier technology to be used on mid-range to semi-high-end cards? I don't. GDDR5X has a nice gap there to fill, I think.
Do I believe we'll see GDDR5X in any real capacity? No. 94a says that GDDR5X will be coming out around the same time as HBM2. That means it'll be fighting for a seat at the Pascal table, when almost everything is supposedly already set for HBM2. That's a heck of a fight, even for a much cheaper product.
Well, that's no problem. If it arrives with HBM2, they can plan for it and produce cards with it (GDDR5X). I don't see why this would be a problem. And as I said, I only see HBM2 on high-end or highest-end ($650) cards. I think the $400 (if any)/$500/$650 cards will have it - so everything cheaper than that will be GDDR5X, and that's not only "cheap cards". I don't see $200-400 as "cheap". Really cheap is $150 or less, and that is GDDR5 land (not even GDDR5X), I think.
Posted on Reply
#25
lilhasselhoffer
Kanan: ...
And therefore my argument is the opposite of silly. First try to understand before you run out and call anyone "silly". You are very quick to judge or harass people; it's not the first time you've done that - you do it way too frequently, and shouldn't do it at all.

I only see HBM1/2 for high-end cards now or in 2016 because it's too expensive for lower cards - therefore GDDR5X is good for mid-range to semi-high-end cards, I'd say. I'd even bet on that. Do you really expect a premier technology to be used on mid-range to semi-high-end cards? I don't. GDDR5X has a nice gap there to fill, I think.

Well, that's no problem. If it arrives with HBM2, they can plan for it and produce cards with it (GDDR5X). I don't see why this would be a problem. And as I said, I only see HBM2 on high-end or highest-end ($650) cards. I think the $400 (if any)/$500/$650 cards will have it - so everything cheaper than that will be GDDR5X, and that's not only "cheap cards". I don't see $200-400 as "cheap". Really cheap is $150 or less, and that is GDDR5 land (not even GDDR5X), I think.
I contest these three points. Everything else I may not agree with, but there isn't any reason to believe it isn't possible.


Here's the post I quoted, paring out the unnecessary bits. I've highlighted the silliness. If I weren't being generous, I'd call them garbage statements which are either factually incorrect or useless.
Kanan: ...I didn't talk about HBM one bit. But you're still wrong: HBM has already proven itself on Fiji cards - it's a fact that they would be even more bandwidth-limited without HBM, as overclocking the HBM increased the performance of every Fiji card further. This has proven that Fiji does not have too much bandwidth; it has proven that it can't have enough bandwidth. Besides, HBM made Fiji possible - there would be no Fiji without HBM. The same chip with GDDR5 would've taken more than 300 W TDP, a no-go, or would have needed a lower clock speed, which is a no-go too, because AMD wanted to achieve Titan X-like performance. The 275 W TDP of the Fury X was only possible with HBM on it. So not only DID the increased bandwidth help (a lot), it helped make the whole card possible at all.
As you can see, most of this statement is silly. Allow me to tear into it though.
1) it's a fact that they would be even more bandwidth limited without HBM,
Really? Is it a fact? How then does Nvidia get the performance it gets with the Titan? You're making a statement of false equivalence based upon a faulty premise: because HBM is designed to have higher bandwidth, it must therefore be responsible for the performance. Where are your facts?
2) overclocking HBM increased performance of every Fiji card further
Again, facts. What I've seen is HBM overclocking giving 50% or less return on the improvement. For example, the cited 8% overclock only returns a 4% increase in numerical results. Technically overclocking does increase performance, but you're weaseling out of this argument by saying any increase is an increase. When 50% of your added effort is wasted without real improvement, it isn't really a reasonable improvement.
3) Fiji has not too much bandwidth
This goes back to point two. If the Fury X (not Fiji in general) were bandwidth-limited, an 8% increase in memory clocks would yield somewhere near 8% improved performance. It doesn't; therefore your point is invalid. Rather than dismissing it, I call it a silly and unsubstantiated point.
4) proven that it can't have enough bandwidth
Same as 3.
5) there would be no Fiji without HBM
Except you're wrong. There would be no Fury inside the form factor and thermal envelope they chose; that doesn't mean Fiji wouldn't exist. You're equating two entirely separate and unrelated topics without any factual basis here.
6) 275W TDP of Fury X
An artificially chosen value by AMD. This is irrelevant to the implementation of HBM (as demonstrated by Nvidia).
7) So not only DID the increased bandwidth help (a lot), it helped make the whole card possible at all
You reiterated all of your previous points in a single sentence. Congratulations, but a house built without a solid foundation is going to collapse rather easily. You've drawn all of these conclusions in the face of existing data that proves you wrong. My teachers called it cute when I did this in school. My bosses fired me for incompetence. You are denying reality, and therefore are either an idiot or silly. I choose to give you the benefit of the doubt and assume silliness. We've all been guilty of that at some point in time.



As to HBM2 not being available/cost-effective, may I ask what exactly you expect of GDDR5X? It's an as-yet-unmanufactured standard, without substantial testing behind it. It may be largely plug-and-play with older controllers, but it still has to be made by somebody. That means added costs as the process is proven out, additional costs for redesigns of controllers to actually see the benefits of GDDR5X (why would you switch to what has to be a more expensive type of memory while cheaper stuff is more readily available?), and supply issues all its own.

What you're arguing is that in the midst of pushing out HBM, both AMD and Nvidia will push out another new standard. Why? Why would you ever completely retool everything, with less than 12 months to design, test, rework, prototype, and write manufacturing specifications for a product line? It'd be insane to do so.

Let's offer the benefit of the doubt again, and acquiesce to your theory (based on nothing) that 90% of cards will not be HBM-based. In order to accept that, we have to assume that HBM2 is not being produced right now and will in fact only see production late next year. Why would AMD and Nvidia let that happen? They know that their new processes will finally be out next year. They know that the shrink will produce more performance gains than the last two redesigns, because of its huge magnitude. They know that Pascal and Arctic Islands will be the moment everybody re-evaluates their 3-4 year old cards and decides it's time to consider an upgrade. Knowing all of this, how do you come to the conclusion that they aren't already starting on HBM2 chip orders (yes, I know it was the interposer that was the issue with HBM1)?


I refer to your statements as silly because they make no logical sense. I don't directly call your opinions idiotic, because you've demonstrated a grasp on reality. Our opinions may differ, but you deserve the respect of being proven wrong rather than being dismissed out of hand for saying things that are unconnected to observable and demonstrable reality. It would help your argument to bring facts to the table, though. It's hard to argue that a point like "overclocking makes things faster" is inaccurate when you don't go out and at least try to have factual support for your statements.

As to your personal opinions of me, I don't care. You are more than welcome to call me an ass, and there's no reason to defend myself against it. In the last month I've been misinterpreted so as to call everyone from West Virginia victims of a lack of genetic diversity (further being stretched into a personal insult of a person I don't know), I've been accused of calling southerners all hicks and rednecks (despite never using the terms), and I've more than once been proven wrong. None of us, in the US at least, have the right to not be offended. The point of a forum is to raise a substantive argument based upon facts, and have poorer arguments torn apart by reality. It's the only way we can make sure our beliefs are rooted in reality, and not fantasy.
Posted on Reply