Friday, March 11th 2016

NVIDIA "GP104" Silicon to Feature GDDR5X Memory Interface

It looks like NVIDIA's next GPU architecture launch will play out much like its previous two generations - launching the second-biggest chip first, as a well-priced "enthusiast" SKU that outperforms the previous generation's enthusiast product, and launching the biggest chip later as the high-end enthusiast product. The second-biggest chip based on NVIDIA's upcoming "Pascal" architecture, the "GP104," which could let NVIDIA win the crucial $550 and $350 price points, will be a lean machine. NVIDIA will design the chip to keep manufacturing costs low enough to score big on price-performance, and to weather a potential price war with AMD.

As part of its efforts to keep the GP104 as cost-effective as possible, NVIDIA could give exotic new tech such as HBM2 memory a skip and go with GDDR5X. Implementing GDDR5X should be straightforward and cost-effective for NVIDIA, given that it has implemented the nearly identical GDDR5 standard across three previous generations. The new standard doubles densities, so one could expect NVIDIA to ship its GP104-based products with 8 GB as the standard memory amount. GDDR5X breathes new life into GDDR5, whose clock speeds had plateaued around 7 Gbps per pin. The new standard could come in speeds of up to 10 Gbps per pin at first, and eventually 12 Gbps and 14 Gbps. NVIDIA could reserve HBM2 for its biggest "Pascal" chip, on which it could launch its next TITAN product.

The GP104 will be built on TSMC's 16 nm FinFET process. NVIDIA is hoping to unveil the first GP104-based products in April, at its annual GPU Technology Conference (GTC), with possible market availability by late May or early June 2016.
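For context on the per-pin speeds above, peak memory bandwidth follows directly from bus width and data rate. A quick sketch of the arithmetic; note the 256-bit bus for GP104 and HBM2's 2 Gbps-per-pin rate on a 4096-bit bus are assumptions for illustration, not figures confirmed by NVIDIA:

```python
def peak_bandwidth_gbps(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak theoretical bandwidth in GB/s: (bus width / 8 bits per byte) * per-pin rate."""
    return bus_width_bits / 8 * data_rate_gbps

# GDDR5 at its ~7 Gbps/pin plateau, on an assumed 256-bit bus:
print(peak_bandwidth_gbps(256, 7))    # 224.0 GB/s
# GDDR5X at the initial 10 Gbps/pin on the same bus:
print(peak_bandwidth_gbps(256, 10))   # 320.0 GB/s
# GDDR5X at the eventual 14 Gbps/pin:
print(peak_bandwidth_gbps(256, 14))   # 448.0 GB/s
# HBM2, assuming a 4096-bit bus at 2 Gbps/pin:
print(peak_bandwidth_gbps(4096, 2))   # 1024.0 GB/s
```

Even at launch speeds, GDDR5X on an unchanged bus width would give a healthy uplift over GDDR5, which is presumably why it looks attractive for a cost-focused chip.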
Source: Benchlife.info

135 Comments on NVIDIA "GP104" Silicon to Feature GDDR5X Memory Interface

#26
Jism
Yes, for now. But HBM is still the better product, with lower power, lower latency, and even future generations of HBM can still outperform GDDR5X easily.

You wouldn't have had a Fury X in a small form factor if it wasn't for HBM. There wouldn't be room to place 8 GDDR5X chips around the chip.
Posted on Reply
#27
bug
JismSince AMD was involved in the development of HBM and HBM2, it is said to have first priority on HBM1 and HBM2 chips. The reason NVIDIA is choosing GDDR5X is simply that it can't get HBM2 chips yet.

HBM is still superior to GDDR: fewer chips, less power, much tighter latencies, and much greater bandwidth.
Actually, the latency of HBM is worse than GDDR's. It makes up for it with its additional bandwidth. It's a little bit like the old RAMBUS.
Posted on Reply
#28
rtwjunkie
PC Gaming Enthusiast
JismYes, for now. But HBM is still the better product, with lower power, lower latency, and even future generations of HBM can still outperform GDDR5X easily.

You wouldn't have had a Fury X in a small form factor if it wasn't for HBM. There wouldn't be room to place 8 GDDR5X chips around the chip.
And?... we get it, you'd rather see HBM2 GPUs. You will, on the high-end cards. The GP104 occupies the same placement as the current 980 and 970, which is the upper end of the mid-tier. They are not high-end flagships, and neither will the first Pascal chip, the GP104, be one.

GP104 should be affordable. GDDR5X, though expensive, should be a lot cheaper to produce in volume than HBM2, translating into keeping GP104 GPUs somewhat affordable.
Posted on Reply
#29
PP Mguire
JismSince AMD was involved in the development of HBM and HBM2, it is said to have first priority on HBM1 and HBM2 chips. The reason NVIDIA is choosing GDDR5X is simply that it can't get HBM2 chips yet.

HBM is still superior to GDDR: fewer chips, less power, much tighter latencies, and much greater bandwidth.
HBM is still expensive, HBM2 will be more expensive than GDDR5X, and Nvidia and AMD are getting their chips from different companies. Nvidia will have their HBM2 chips as soon as they're available in Q3, and are releasing GP104 first anyway, like they have for the past three generations. That being said, HBM2 will be left for their big chip, just like HBM was for AMD and Fury. Nothing new here. GDDR5X will be perfectly fine for products that aren't top tier.
Posted on Reply
#30
the54thvoid
Super Intoxicated Moderator
ZoneDymoOh well...guess MOAR profits rule again and the blind will all gladly bend over.
I know. Isn't it just awful. Imagine if AMD released something smaller, less powerful than Fury X, with lower circuit componentry and then took the AIO unit off. Imagine they did that and charged more for it...

Shock horror, both teams try to make a profit. Deal with it.
Posted on Reply
#33
rtwjunkie
PC Gaming Enthusiast
anubis44Uh Oh:

pokde.net/blog/amd-r9-290x-beats-the-gtx-980-ti-in-dx12/
pcpartpicker.com/forums/topic/105376-do-not-buy-gtx980-or-gtx980ti-they-are-obsolete

Maybe you're still within the return period for that obsolete 980Ti? Best of luck.
So, as most members here know, those stories are based on performance in Ashes of the Singularity, the only real DX12 game out there half a year after the premiere of DX12.

So, there is absolutely zero credence to be given to either article, especially the second, hysterical "your Maxwells are obsolete" one.
Posted on Reply
#34
medi01
Read the article twice and missed which part of it was news (or what the title was based on).

It says NV MIGHT do something? Is what a company MIGHT do considered news these days? ^^
Posted on Reply
#35
midnightoil
rtwjunkieI'm not sure what your "um" is for, sounding like I'm an idiot or something. It's obvious you didn't read the whole thread. Read my post #8: www.techpowerup.com/forums/threads/nvidia-gp104-silicon-to-feature-gddr5x-memory-interface.220823/#post-3429647

This isn't my first rodeo. :)
That would likely mean a paper launch 4 or even 5 months before any real availability ... they may be late and AMD earlier, but showing nothing at all would be far less damaging to their brand than that kind of paper launch.
Posted on Reply
#36
midnightoil
bugActually, the latency of HBM is worse than GDDR's. It makes up for it with its additional bandwidth. It's a little bit like the old RAMBUS.
GDDR5(X) has a minute advantage in tRC and a disadvantage in tCCD. Neither is really enough to matter.

However a 256bit GDDR5X 8GB card will only have 448GB/s of bandwidth. An HBM2 8GB card will have 1024GB/s of bandwidth. It'll also likely have well under half the power consumption and consume about one quarter of the board area. There are also further advantages from being a 4096bit bus vs 256bit.

It's not even close.
Posted on Reply
#37
PP Mguire
midnightoilGDDR5(X) has a minute advantage in tRC and a disadvantage in tCCD. Neither is really enough to matter.

However a 256bit GDDR5X 8GB card will only have 448GB/s of bandwidth. An HBM2 8GB card will have 1024GB/s of bandwidth. It'll also likely have well under half the power consumption and consume about one quarter of the board area. There are also further advantages from being a 4096bit bus vs 256bit.

It's not even close.
At this point I can safely say it really doesn't matter much [for now]. We get more of an advantage from raw GPU power than from memory bandwidth. I can raise the clocks on my VRAM and hardly get any performance benefit, while raising my core 300MHz gives me a good FPS boost. Sure, a boost in memory bandwidth coupled with the architectural changes from Maxwell to Pascal will be nice, but the majority of the performance benefit will come from Pascal itself being faster, with the faster memory just a complement. The main benefit HBM will bring to the table in large quantities is the ability to store large amounts of big texture data.
Posted on Reply
#38
the54thvoid
Super Intoxicated Moderator
anubis44Uh Oh:

pokde.net/blog/amd-r9-290x-beats-the-gtx-980-ti-in-dx12/
pcpartpicker.com/forums/topic/105376-do-not-buy-gtx980-or-gtx980ti-they-are-obsolete

Maybe you're still within the return period for that obsolete 980Ti? Best of luck.
Lol. I'll gladly pitch my 980ti against any Fury X at 1440p or 4k on any current or upcoming game. As far as working DX12 titles, well, scientifically speaking you have nothing to bring to the table bar one unreleased, arguably AMD favouring title.
Love the avatar by the way, nice to see you wear your heart on your sleeve.
Posted on Reply
#39
anubis44
the54thvoidLol. I'll gladly pitch my 980ti against any Fury X at 1440p or 4k on any current or upcoming game. As far as working DX12 titles, well, scientifically speaking you have nothing to bring to the table bar one unreleased, arguably AMD favouring title.
Love the avatar by the way, nice to see you wear your heart on your sleeve.
Nothing to bring to the table, eh? AOtS the 'only' example of async compute being enabled on a DX12 game, eh?

What's this?
wccftech.com/hitman-pc-directx-12-benchmarks/

The Radeons are beating the crap out of your beloved Green Goblin, and a 390X is beating the Titan X.

nVidia GPUs do not have hardware schedulers, so they can't perform asynchronous compute. This is like not having 'hyperthreading' on a GPU. And it's quite probable that Pascal won't have them either.

As for 'wearing my heart on my sleeve', well, what can I say? I hate nVidia with a burning passion, so why be coy about it? They screwed me over with a burned out GPU in my Toshiba Tecra M3, for which I never received compensation, their drivers were so crappy with my GTX670 on a 3 monitor setup I gave up and sold the card, they're STILL selling a 3.5GB (useable) graphics card as a 4GB graphics card (so they're a pack of liars as far as I'm concerned), they stripped out asynchronous compute engines in order to save power, and now that they've been caught with their pants down, they're using their money to try to insert GimpWorks into every title they can to deliberately sabotage Radeon cards, so they're a pack of desperate cheaters:

wccftech.com/nvidia-gameworks-visual-corruption-gears-war-ultimate-edition/

But the biggest joke is on nVidia owners, because of course, Kepler owners are the ones getting Gimped the most by GimpWorks:


If you think all that is irrelevant, be my guest and keep bending over for them.
Posted on Reply
#40
AsRock
TPU addict
JismYes, for now. But HBM is still the better product, with lower power, lower latency, and even future generations of HBM can still outperform GDDR5X easily.

You wouldn't have had a Fury X in a small form factor if it wasn't for HBM. There wouldn't be room to place 8 GDDR5X chips around the chip.
But if not much difference will be noticed, it's not worth it due to cost. People don't like spending $600+ on video cards, and to tell you the truth, I don't need HBM. I like to be able to cool my memory chips too, and not have them in a hot zone.
Posted on Reply
#41
Frick
Fishfaced Nincompoop
rtwjunkieI'm not sure what your "um" is for, sounding like I'm an idiot or something.
He must be from Arstechnica.

And wow, TWO games in which AMD are faster? Need moar data.
Posted on Reply
#42
rtwjunkie
PC Gaming Enthusiast
anubis44they're STILL selling a 3.5GB (useable) graphics card as a 4GB graphics card (so they're a pack of liars as far as I'm concerned)
Why do you effin care? It's always the people that haven't used a 970 that get angry.

And angry you are. Really dude, you're going to die an early death of a stroke getting all worked up about things THAT DON'T MATTER AT ALL in the grand scheme of LIFE. There's enough real crap in this world to focus on.
Posted on Reply
#43
ZoneDymo
the54thvoidI know. Isn't it just awful. Imagine if AMD released something smaller, less powerful than Fury X, with lower circuit componentry and then took the AIO unit off. Imagine they did that and charged more for it...

Shock horror, both teams try to make a profit. Deal with it.
yeah, and with AMD being a much smaller company, it makes sense.
Remember that massive marketshare difference; it would certainly speak to Nvidia's image if they just said,
"ya know what, let's not go for insane profit margins, just make a healthy amount of profit, and still move technology forward by putting HBM2 memory on all our GTX cards".

But no, procrastinate for some more years pls, we have the time, let it all go as slow as possible and release as many incremental jumps in performance as possible so as to make as much profit as possible off the backs of people.

I for one think we as a people should give the finger to those practices, but again, plenty will gladly bend over and even praise these tactics that they themselves are ultimately the victims of.

And yeah, I am dealing with it by pointing out these problems.
I feel all people that use terms like "deal with it" just mean "accept it, bend over and let it happen" instead of actually dealing with it (which, again, is what I'm doing by protesting).
Posted on Reply
#44
ZoneDymo
rtwjunkieWhy do you effin care? It's always the people that haven't used a 970 that get angry.

And angry you are. Really dude, you're going to die an early death of a stroke getting all worked up about things THAT DON'T MATTER AT ALLin the grand scheme of LIFE. There's enough real crap in this world to focus on.
Always love these silly "arguments".
If you go that far, then why does life even matter? Might as well kill yourself right now, man. And no, I don't mean that as an insult or to offend, but if your mind really wanders that far, then yeah, life itself is pointless as well, so why go on?
Posted on Reply
#45
rtwjunkie
PC Gaming Enthusiast
ZoneDymoAlways love these silly "arguments".
If you go that far, then why does life even matter? Might as well kill yourself right now, man. And no, I don't mean that as an insult or to offend, but if your mind really wanders that far, then yeah, life itself is pointless as well, so why go on?
That's just it. I don't care. These things people get mad about pale in comparison to almost everything.

I usually find that once people have dealt with death and disease, aging, raising children, etc, anger at a gaming hardware company takes a back seat and really means nothing.

That's a luxury that only those who have not had to actually live life have.
Posted on Reply
#46
ZoneDymo
rtwjunkieThat's just it. I don't care. But these issues that people actually get mad about are pointless.
again, everything is ultimately pointless, life itself is pointless, if you go that far nothing matters ever which makes it a rather redundant argument.
Posted on Reply
#47
rtwjunkie
PC Gaming Enthusiast
ZoneDymoagain, everything is ultimately pointless, life itself is pointless, if you go that far nothing matters ever which makes it a rather redundant argument.
Read the rest of what I wrote. There are real things to worry about.
Posted on Reply
#48
HumanSmoke
ZoneDymoAlways love these silly "arguments".
If you go that far, then why does life even matter? Might as well kill yourself right now, man. And no, I don't mean that as an insult or to offend, but if your mind really wanders that far, then yeah, life itself is pointless as well, so why go on?
10/10 for hyperbole.
Life can, and often does, stretch over decades with a near-infinite variation of experiences. A graphics card for an enthusiast typically measures its lifespan in a year or less. During that time, the options for decent gaming might be counted on the fingers of one hand, between immature drivers, broken games, and short games stretched over months thanks to artificially partitioning a game into bite-sized DLCs.
I have one life, but I've lost count of the cards and generations of cards that have passed through my hands. Once upon a time, my 9700 PRO was manna from heaven - now it's just a fading memory, as are my 8800s, GTX 280s, HD 4890s, HD 5970, triple HD 5850s, GTX 580s, GTX 670s, and a host of other cards I've owned since the Mach 64 and S3-based Diamond Stealth days.

Apples and Oranges dude.
Posted on Reply
#49
64K
ZoneDymoAlways love these silly "arguments".
If you go that far, then why does life even matter? Might as well kill yourself right now, man. And no, I don't mean that as an insult or to offend, but if your mind really wanders that far, then yeah, life itself is pointless as well, so why go on?
Posted on Reply
#50
PP Mguire
anubis44Nothing to bring to the table, eh? AOtS the 'only' example of async compute being enabled on a DX12 game, eh?

What's this?
wccftech.com/hitman-pc-directx-12-benchmarks/

The Radeons are beating the crap out of your beloved Green Goblin, and a 390X is beating the Titan X.

nVidia GPUs do not have hardware schedulers, so they can't perform asynchronous compute. This is like not having 'hyperthreading' on a GPU. And it's quite probable that Pascal won't have them either.

As for 'wearing my heart on my sleeve', well, what can I say? I hate nVidia with a burning passion, so why be coy about it? They screwed me over with a burned out GPU in my Toshiba Tecra M3, for which I never received compensation, their drivers were so crappy with my GTX670 on a 3 monitor setup I gave up and sold the card, they're STILL selling a 3.5GB (useable) graphics card as a 4GB graphics card (so they're a pack of liars as far as I'm concerned), they stripped out asynchronous compute engines in order to save power, and now that they've been caught with their pants down, they're using their money to try to insert GimpWorks into every title they can to deliberately sabotage Radeon cards, so they're a pack of desperate cheaters:

wccftech.com/nvidia-gameworks-visual-corruption-gears-war-ultimate-edition/

But the biggest joke is on nVidia owners, because of course, Kepler owners are the ones getting Gimped the most by GimpWorks:


If you think all that is irrelevant, be my guest and keep bending over for them.
No, Maxwell has 32 queues compared to the 128 from AMD. Maxwell supports Async Compute but only chokes when too many commands are sent to it as has been explained in detail many times by renowned sources on the net.

Funny you link a site known for FUD and rumors. Even further, the benchmarks they show don't even match. How is it that the first set they showed has a 980 Ti (which beats the Fury X) running 76 fps in DX12 @ 1080p (it wins at all 3 resolutions, too) but the other site shows 68? Furthermore, the gap at 1440p for the Fury X differs between the two sites. Who to trust? I'd say nobody. Also, it's been mentioned extensively that DX12 in the game is buggy anyway, as the ComputerBase site notes. Finally, the game is an AMD title and I'd expect it to run better on AMD cards anyway. Of course, one set of benchmarks seems to show otherwise. As to the 390X beating the crap out of the Titan X: yeah, and I'd be willing to bet that, like every other site, they aren't letting the card boost. An eVGA Titan X SC will easily boost to 1350 by itself, which is close to the 980 Ti clock speed they list, which means the Titan X and 980 Ti would be at the top of the charts in the first set of benches shown. So no, anybody worth their salt that owns a Titan X will actually have performance higher than a 980 Ti's, which would handily beat the 390X if the top set of charts is to be believed.

So you had an issue with 2 Nvidia setups and you want to spread BS? What happens when your precious AMD card dies? Are you going to make excuses to continue hating on Nvidia? :rolleyes: Get a grip dude. Your Toshiba laptop came out 11 years ago and you're still whining about it? And who the hell runs Surround on a 670 of all cards? Oh man, my 5770 Crossfire setup was screwed up even after 3 replacement cards because XFX kept giving me Rev B boards.
AMD drivers keep crashing when I test my 295x2 for performance figures for friends. Snap, I might as well hate on AMD and not consider their lineup this year against Pascal and cry about it a decade later. :shadedshu:
The 970 has 4GB of usable VRAM. If you actually OWNED one of these cards you would know that all 4GB is addressable and actually doesn't hamper performance like all the BS misinformation on the net suggests. Want to know how I know? I have one sitting right behind me and have actually tested it for this very reason. Maybe if you took off your green glasses and stopped the fanboyism you'd see that your post here just sounds ridiculous.

Nvidia didn't bother putting a lot of effort into Maxwell regarding DX12 and DX12 features because they are meant to be DX11 1080p/1440p beasts, and that they are. Wait until Pascal is here, you'll see that very statement is beyond true.

The only thing that's a joke here is you and your post bud.
Posted on Reply