
Samsung Bets on GDDR6 for 2018 Rollout

You are wrong. By rearranging the CUDA cores into more modules (fewer cores per module), they improved the amount of cache per core. Pascal's dispatchers are also twice as beefy. Clocks are a major part of the performance boost, though.

Too bad it didn't really do anything compared to clocks.
 
Why doesn't Nvidia use HBM?

Because of their highly aggressive delta color compression implementation. By aggressive I mean that the algorithms they use to compress pixel data probably use a lot of approximation. Meaning, if the delta between pixel (x, y) and an adjacent pixel is "sufficiently small", both pixels are compressed into the same delta value.
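Purely as an illustration of that idea (and not NVIDIA's actual, undisclosed algorithm), a toy threshold-based delta encoder could look like the sketch below; the threshold value and the 1D row layout are made-up simplifications.

```python
# Toy sketch of the threshold-based delta compression described above.
# This is NOT NVIDIA's actual algorithm (which is not public); it only shows
# the idea of keeping a base pixel plus deltas, and snapping "sufficiently
# small" deltas to zero so neighbouring pixels collapse onto the same value.

def compress_row(pixels, threshold=2):
    """Encode a row of 8-bit channel values as (base, list_of_deltas)."""
    base, prev, deltas = pixels[0], pixels[0], []
    for p in pixels[1:]:
        d = p - prev
        if abs(d) < threshold:   # the "sufficiently small" approximation
            d = 0
        deltas.append(d)
        prev += d
    return base, deltas

def decompress_row(base, deltas):
    out = [base]
    for d in deltas:
        out.append(out[-1] + d)
    return out

row = [118, 119, 119, 121, 200, 201, 203]
print(compress_row(row))                    # small deltas become 0
print(decompress_row(*compress_row(row)))   # approximate reconstruction
```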
 
Well, this is pathetic. I understand the usage scenario for this type of memory, and I also understand that companies will do anything to keep the status quo, but I still find it pathetic. After all, HBM in great quantities would be cheap, like everything that comes in great quantities.
 
Well, this is pathetic. I understand the usage scenario for this type of memory, and I also understand that companies will do anything to keep the status quo, but I still find it pathetic. After all, HBM in great quantities would be cheap, like everything that comes in great quantities.

I guess it just depends on how expensive it is. I think HBM1 was like $100 - $150 for 4GB. Cheaper HBM2 about to be produced is likely $75 for 4GB, and it will at least beat the FUTURE 384-bit GDDR6 in both performance and power usage. Unless they can sell GDDR6 for like 1/3rd the price I just don't see it being useful for anything but $150 and lower cards.
 
I guess it just depends on how expensive it is. I think HBM1 was like $100 - $150 for 4GB. Cheaper HBM2 about to be produced is likely $75 for 4GB, and it will at least beat the FUTURE 384-bit GDDR6 in both performance and power usage. Unless they can sell GDDR6 for like 1/3rd the price I just don't see it being useful for anything but $150 and lower cards.
To use HBM you have to mount both the GPU and the memory stacks on an interposer, which adds another step where things can go wrong. So it's not just about the cost of the chips; it's also the interposer and the yields you get once the chips are mounted on it. Some packages will fail after that process. The other question is how much bandwidth is actually needed: as you could see with the Fury X, insane memory bandwidth is just a paper stat that doesn't help if it can't be utilized to its full ability.
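To put a rough number on that compounding risk (all the percentages below are made-up, illustrative yields, not actual figures from AMD or any supplier):

```python
# Illustrative (made-up) yields to show how interposer assembly compounds risk.
gpu_yield = 0.80        # chance a GPU die is good
hbm_stack_yield = 0.90  # chance a single HBM stack is good
stacks = 4              # e.g. four stacks on a Fiji/Fury-class package
assembly_yield = 0.95   # chance the interposer mounting step succeeds

package_yield = gpu_yield * (hbm_stack_yield ** stacks) * assembly_yield
print(f"Good packages per 100 assembled: {package_yield * 100:.1f}")
# With these numbers, only about half the assembled packages are sellable, and
# one bad mounting step can scrap an otherwise-good GPU plus four good stacks.
```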
 
To use HBM you have to mount both the GPU and the memory stacks on an interposer, which adds another step where things can go wrong. So it's not just about the cost of the chips; it's also the interposer and the yields you get once the chips are mounted on it. Some packages will fail after that process. The other question is how much bandwidth is actually needed: as you could see with the Fury X, insane memory bandwidth is just a paper stat that doesn't help if it can't be utilized to its full ability.
I think we all understand how the production process works: when something is done at large scale it can be done more efficiently and at reduced cost. If that process is used only for halo products, you cannot expect the costs or the efficiency to change drastically, or even at all.
 
Yep, and sadly that's all they are after: not actually pushing the envelope or propelling humanity forward with cutting-edge tech, just tiny mouse steps, just enough to beat the competition for easy maximum profit.

I thought Nvidia was a corporation? What are corporations after? Oh yeah, money.

Pascal is also supposed to be using tile-based rendering, something not found on Maxwell 1/2.

If tiled resources is the same as tile based rendering, even Kepler had that.
 
To use HBM you have to mount both the GPU and the memory stacks on an interposer, which adds another step where things can go wrong. So it's not just about the cost of the chips; it's also the interposer and the yields you get once the chips are mounted on it. Some packages will fail after that process. The other question is how much bandwidth is actually needed: as you could see with the Fury X, insane memory bandwidth is just a paper stat that doesn't help if it can't be utilized to its full ability.

I'm not saying the Fury X doesn't have more bandwidth than "needed", but I will say the extra bandwidth did add to the performance. Every single card (both AMD and Nvidia) that I have overclocked gained massively from increased memory speeds. The fact is that cards have been relatively memory starved for the past 5 years. Compare the bandwidth and TFLOP increases that have happened over recent years: the Fury X has 2.5x the computational power while having less than double the bandwidth. Even with memory compression it really isn't enough.

Cards. Need. More. Bandwidth. NOW. Look at the pathetic gains received from overclocking the 1080's core by upwards of 20%, and the massive gains that come from overclocking just the memory on the RX 480.
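As a rough illustration of that compute-versus-bandwidth point, here is a quick bytes-per-FLOP calculation using commonly cited peak specs (approximate figures; caches and compression are ignored entirely):

```python
# Bytes of memory bandwidth available per FLOP of compute, using commonly
# cited peak specs (approximate; real workloads and compression will differ).
cards = {
    "HD 7970": {"tflops": 3.79, "bandwidth_gbs": 264},
    "Fury X":  {"tflops": 8.60, "bandwidth_gbs": 512},
}
for name, c in cards.items():
    bytes_per_flop = (c["bandwidth_gbs"] * 1e9) / (c["tflops"] * 1e12)
    print(f"{name}: {bytes_per_flop:.3f} bytes/FLOP")
# ~0.070 bytes/FLOP on the 7970 vs ~0.060 on the Fury X: peak compute has
# grown faster than raw bandwidth, which is the "memory starved" argument.
```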
 
Because they are cheapskates and don't care about the consumer, only profits!

What's the point of being in business if you're not in it to make profit?

Screw off, Sammy. HBM is clearly the future. I bet they're just mad they turned down AMD for HBM lolz (I'm assuming that AMD approached them b/c why wouldn't you).

There will be a need for alternatives to HBM for a long time to come yet. It's nothing to do with Nvidia refusing to use it in consumer products, but more about yields, costs, economics, technical limitations, form factors, and other variables. There is no point strapping HBM onto a low/mid tier, power-sipping, notebook GPU at this stage, is there? There is a clear market here, and Samsung is looking to fill the need. You don't want low/mid tier GPUs to be stuck on GDDR5/X for the next 5 to 10 years because there was no viable alternative to HBM to keep costs down. You want the best performance you can get for your money, and GDDR6 should be just that.

Do people just cry their fan boy opinions without actually stopping and thinking? Never mind, the answer to this is obvious - it's the internet after all.
 
I run a full-time eBay selling business by myself; without profits I couldn't pay the rent every month or go on 8-10 weeks of vacation every year.

So yea, every business cares about profits, but they also have to care about the customer(s). I care about both.
 
AMD are basically crowdfunded at this stage, so they will be fine.

With that said, I'm waiting for HBM6 before I jump.
 
There will be a need for alternatives to HBM for a long time to come yet. It's nothing to do with Nvidia refusing to use it in consumer products, but more about yields, costs, economics, technical limitations, form factors, and other variables. There is no point strapping HBM onto a low/mid tier, power-sipping, notebook GPU at this stage, is there? There is a clear market here, and Samsung is looking to fill the need. You don't want low/mid tier GPUs to be stuck on GDDR5/X for the next 5 to 10 years because there was no viable alternative to HBM to keep costs down. You want the best performance you can get for your money, and GDDR6 should be just that.

Do people just cry their fan boy opinions without actually stopping and thinking? Never mind, the answer to this is obvious - it's the internet after all.

You are right, but only if GDDR6 is priced accordingly (it probably will be). Let's say HBM3 costs $50 for 4GB; then, IMO, GDDR6 should cost $15. There is a place for something besides HBM, but only if it is much stronger or dirt cheap.
 
I think we all understand how the production process works: when something is done at large scale it can be done more efficiently and at reduced cost. If that process is used only for halo products, you cannot expect the costs or the efficiency to change drastically, or even at all.
Most people don't really understand it. Most people don't realize that to use HBM you have to get a GPU that passes testing and HBM chips that pass testing, and then you have to put them all on an interposer, which could ruin it all if it doesn't go perfectly. All of that adds to the cost. You say we all understand it, but I don't think as many understand it as you think.

If they can get that performance out of GDDR6, I can see Nvidia skipping HBM, as it would remove one possible step where waste can be created.
 
Why doesn't Nvidia use HBM?

HBM1 was limited to only 4GB; that's why Fury only had 4GB.

HBM2 isn't limited to 4GB, but it's still in limited supply and costs more, so NV has only used it in their pro compute products... HBM2 supply is supposed to improve at the end of this year or early next year, which is why both NV and AMD postponed its use until next year.
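The 4GB ceiling falls straight out of the first-generation HBM stack specs; a quick back-of-the-envelope check using the commonly cited per-stack figures:

```python
# Why first-generation HBM topped out at 4GB on Fiji/Fury:
stacks = 4            # stacks placed around the GPU on the interposer
gb_per_stack = 1      # HBM1 stack: four 2Gb dies = 1GB
gbs_per_stack = 128   # per-stack bandwidth (1024-bit interface at 1 Gbps/pin)

print(f"Capacity:  {stacks * gb_per_stack} GB")      # -> 4 GB
print(f"Bandwidth: {stacks * gbs_per_stack} GB/s")   # -> 512 GB/s
# HBM2 raises both per-stack capacity (up to 8GB) and per-stack bandwidth,
# which removes the 4GB ceiling mentioned above.
```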
 
All in all, guys, I can't quite feel the same joy you seem to be feeling in this thread.

nVidia seems to be able to develop 3 different chips in parallel, and with serious architectural changes from project to project.
AMD is limited to rolling out one segment at a time and is making rather small changes to the existing architecture.

Volta is expected in 2017 and it might be to Pascal what Maxwell was to Kepler.
Meanwhile, AMD is merely competitive in the low range, but even that might evaporate in 2017.

We might end up with AMD not being able to compete in any segment in 2017, and if so, it will be game over.

Why doesn't Nvidia use HBM?

Except it DOES use HBM2 with the GP100 chip.

And if you were wondering about something else, namely:
Q: Why did AMD bother with HBM in Fury?
A: Because, being an underdog in a rather desperate position, they needed to gamble on new tech.

Q: Why do nVidia cards normally need less bandwidth than AMD cards?
A: Compression on nVidia cards is said to be more effective (although AMD should be closing the gap with Polaris). Architectural differences (yeah, a vague statement, I know) and more effective use of cache might also play a role.


36% to 42% faster in overall gaming than Maxwell
I guess you are comparing a $450 card (1070) to a $330 card (970), makes a lot of sense.

Pascal is also supposed to be using tile-based rendering, something not found on Maxwell 1/2.
I doubt the "not found on Maxwell" part.


I thought Nvidia was a corporation? What are corporations after? Oh yeah, money.

Yeah, "companies make money" = "all company get as low as nVidia, after all, they also make money", very logical statement.
 
Most people don't really understand it. Most people don't realize that to use HBM you have to get a GPU that passes testing and HBM chips that pass testing, and then you have to put them all on an interposer, which could ruin it all if it doesn't go perfectly. All of that adds to the cost. You say we all understand it, but I don't think as many understand it as you think.

If they can get that performance out of GDDR6, I can see Nvidia skipping HBM, as it would remove one possible step where waste can be created.
Obviously GDDR cannot match HBM performance-wise; its only advantage is cost, and that is only because HBM production is in its infancy.
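For a rough sense of what "performance wise" means in raw numbers: peak bandwidth is just bus width times per-pin data rate. The 14 Gbps GDDR6 figure below is an assumption about future parts, not a spec-sheet quote:

```python
# Peak bandwidth (GB/s) = bus width in bits / 8 * per-pin data rate in Gbps.
def peak_gbs(bus_bits, gbps_per_pin):
    return bus_bits / 8 * gbps_per_pin

print(peak_gbs(384, 8))    # 384-bit GDDR5  @  8 Gbps -> 384 GB/s
print(peak_gbs(384, 14))   # 384-bit GDDR6  @ 14 Gbps -> 672 GB/s (assumed rate)
print(peak_gbs(4096, 2))   # 4 HBM2 stacks  @  2 Gbps -> 1024 GB/s
# Even a fast 384-bit GDDR6 setup sits below a full four-stack HBM2 config on
# raw bandwidth, which is the point being made above.
```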
 
I guess you are comparing a $450 card (1070) to a $330 card (970), makes a lot of sense.

My statement was aimed at a quote that said Pascal only got higher clocks and no major improvement otherwise. Prices are a different story. I think the prices are too high, but that's just speaking for myself, and in reality, where can you turn for something competitive with the 1070, 1080, and Pascal Titan X? AMD? No, at least not for a while. Markets need competition to function in a healthy way.
 
My statement was aimed at a quote that said Pascal only got higher clocks and no major improvement otherwise. Prices are a different story.
No, not really.
Bump was about 20%, you claimed twice that.

AIB 980 Ti's are more than competitive vs the 1070.
 
No, not really.
Bump was about 20%, you claimed twice that.

AIB 980 Ti's are more than competitive vs the 1070.

I was speaking about overall performance gain over a test suite of games

GTX 1070 over GTX 970 gain
at 1440p 38% faster
at 4K 40% faster

https://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_1070/24.html

GTX 1080 over GTX 980 gain
at 1440p 40% faster
at 4K 41% faster

https://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_1080/26.html

Pascal Titan X over Maxwell Titan X gain
at 1440p 40% faster
at 4K 42% faster

https://www.techpowerup.com/reviews/NVIDIA/Titan_X_Pascal/24.html
 
I was speaking about overall performance gain over a test suite of games
I was speaking about the similarly priced card, the 980.

Comparing a 450 Euro 1070 to a 330 Euro 970 is ridiculous.

Same goes for the 1080, which is even more expensive than the 980 Ti was.

Oh, and here is a site where they know how %'s work, hover at will:
https://www.computerbase.de/2016-08...x-480/2/#diagramm-performancerating-1920-1080

AIB 1070 is 23-24% faster than 980 (1440p/1080p)
AIB 1080 is 19-20% faster than 980Ti (1440p/1080p), stock is only 10% faster
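Part of why the percentages in this thread don't line up is simply which card is the baseline; the same gap reads differently depending on direction (toy numbers below, not figures from TPU or ComputerBase):

```python
# The same performance gap reads differently depending on the baseline.
# Toy relative-performance scores, NOT measurements from either site.
old_card, new_card = 100.0, 140.0

faster = (new_card / old_card - 1) * 100   # "new is X% faster than old"
slower = (1 - old_card / new_card) * 100   # "old is Y% slower than new"
print(f"New card is {faster:.0f}% faster")   # 40% faster
print(f"Old card is {slower:.0f}% slower")   # ~29% slower
# Quoting the ~29% figure as "the gain" understates the same gap, so the
# percentages only line up when the baseline card is stated.
```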
 
Comparing a 450 Euro 1070 to a 330 Euro 970 is ridiculous.
Weren't you already told that
Prices are a different story.
By your logic, any comparable product cannot be compared if it falls outside a price range :shadedshu:

Pascal is also supposed to be using tile-based rendering, something not found on Maxwell 1/2.
I doubt the "not found on Maxwell" part.
Kepler doesn't use tile-based rendering; Maxwell does. Pascal uses the same approach as Maxwell but with differently sized tiles (adjusted for Pascal's larger cache per core).
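For anyone unsure what tile-based (tiled) rasterization actually involves, here is a minimal binning sketch: triangles are sorted into screen-space tiles so each tile's pixels can be worked on out of on-chip cache. The tile size and data layout are purely illustrative, not Maxwell's or Pascal's actual parameters:

```python
# Minimal illustration of binning triangles into screen-space tiles.
TILE = 32  # pixels per tile side (hypothetical value)

def bin_triangles(triangles, width, height):
    """triangles: list of ((x0,y0),(x1,y1),(x2,y2)) in pixel coordinates."""
    tiles = {}
    for tri in triangles:
        xs = [v[0] for v in tri]
        ys = [v[1] for v in tri]
        # Triangle bounding box, clamped to the screen, converted to tile units.
        tx0, tx1 = max(0, min(xs)) // TILE, min(width - 1, max(xs)) // TILE
        ty0, ty1 = max(0, min(ys)) // TILE, min(height - 1, max(ys)) // TILE
        for ty in range(int(ty0), int(ty1) + 1):
            for tx in range(int(tx0), int(tx1) + 1):
                tiles.setdefault((tx, ty), []).append(tri)
    return tiles  # each tile is then rasterized/shaded with its data on-chip

bins = bin_triangles([((5, 5), (60, 10), (20, 70))], 1920, 1080)
print(sorted(bins))  # tiles that the triangle's bounding box overlaps
```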
 