Friday, May 27th 2016

NVIDIA GeForce GTX 1070 Reference PCB Pictured

Here's the first picture of an NVIDIA reference-design PCB for the GeForce GTX 1070. The PCB (PG411) is similar to that of the GTX 1080 (PG413), except for two major differences: the VRM and the memory. The two PCBs are pictured below in that order. The GTX 1070 PCB features one fewer VRM phase than the GTX 1080. The other major difference is that it carries GDDR5 memory chips in place of the GDDR5X chips found on the GTX 1080. These are 8 Gbps chips and, according to an older article, they run at their full rated speed; over the card's 256-bit memory bus, that works out to 256 GB/s of bandwidth. The GeForce GTX 1070 will be available by June 10th.
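As a sanity check on the quoted figure, peak GDDR5 bandwidth follows directly from the per-pin data rate and the bus width. A minimal sketch (the 256-bit bus is taken from the GTX 1070's published specs; the 10 Gbps GDDR5X line for the GTX 1080 is included for comparison):

```python
def memory_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: per-pin rate (Gbps) x bus width (bits) / 8 bits per byte."""
    return data_rate_gbps * bus_width_bits / 8

# GTX 1070: 8 Gbps GDDR5 on a 256-bit bus
print(memory_bandwidth_gbs(8, 256))   # 256.0 GB/s

# GTX 1080: 10 Gbps GDDR5X on a 256-bit bus
print(memory_bandwidth_gbs(10, 256))  # 320.0 GB/s
```

The same formula explains why GDDR5X matters on the GTX 1080: at the same 256-bit bus width, only the higher per-pin rate raises total bandwidth.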
Source: VideoCardz

37 Comments on NVIDIA GeForce GTX 1070 Reference PCB Pictured

#1
ZoneDymo
I will say I'm severely disappointed by the choice of going with GDDR5 on this one, but alas.
Posted on Reply
#2
dozenfury
Very notable that the TDP on GTX 1070 FE cards is BIOS-locked at 112%, per the article. Surprised a bigger deal hasn't been made of that, if confirmed to be true. Granted, I'm sure it will be worked around in AIB or custom BIOSes... but not nice, NV.
Posted on Reply
#3
bug
ZoneDymoI will say I'm severely disappointed by the choice of going with GDDR5 on this one, but alas.
Wait for reviews and see whether that extra bandwidth makes any difference to your use case.
Posted on Reply
#4
Fluffmeister
bugWait for reviews and see whether that extra bandwidth makes any difference to your use case.
Yeah it's funny, everyone is keen to point out how fast the 980 Ti is.

And yes it is fast, to the point that the HBM1 dream died.

Meh
Posted on Reply
#5
ZoneDymo
bugWait for reviews and see whether that extra bandwidth makes any difference to your use case.
If it does not make a difference, then why have it on the GTX 1080? Purely marketing?
Posted on Reply
#6
Fluffmeister
Fury X was all about the HBM dream, but it got beat by lowly GDDR5.

Purely marketing?
Posted on Reply
#7
ZoneDymo
FluffmeisterFury X was all about the HBM dream, but it got beat by lowly GDDR5.

Purely marketing?
On HBM1 we can indeed all agree that it was little more than pure marketing.
It made the card small and nice looking, it felt new and fresh, and on the X it had a water cooler to boot.
Good marketing, that, plus a pretty solid card (that was a tad too pricey, sure).
But yeah, that was pure marketing, soooo GDDR5X on the GTX 1080 is as well, I guess?
Posted on Reply
#8
newtekie1
Semi-Retired Folder
I kind of figured it would be basically the same as the GTX 1080 PCB, but with a few (VRM) components removed, going from a 5-phase to a 4-phase design.
ZoneDymoI will say I'm severely disappointed by the choice of going with GDDR5 on this one, but alas.
The thing is, the amount of memory bandwidth per CUDA core is actually higher on the GTX 1070 than on the GTX 1080.

GTX 1080 = 320 GB/s / 2560 = 125 MB/s per CUDA core
GTX 1070 = 256 GB/s / 1920 ≈ 133.3 MB/s per CUDA core

Just food for thought.

Simply throwing more memory bandwidth at a GPU is not always going to yield better performance. AMD has been trying that strategy for generations, and it obviously isn't working. If the GPU itself can't process any faster, the extra memory bandwidth goes to waste. So the GTX 1080, with its roughly 25% faster GPU, can probably benefit from the higher memory bandwidth, while the slower GTX 1070 probably wouldn't benefit much from it.
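The per-core comparison above can be recomputed with a quick sketch (bandwidths and CUDA core counts as given in the post; note the division yields 125 and ~133.3 MB/s per core):

```python
# Bandwidth per CUDA core: total bandwidth (GB/s) converted to MB/s, divided by core count
cards = {
    "GTX 1080": (320, 2560),  # 320 GB/s, 2560 CUDA cores
    "GTX 1070": (256, 1920),  # 256 GB/s, 1920 CUDA cores
}
for name, (bw_gbs, cores) in cards.items():
    per_core_mbs = bw_gbs * 1000 / cores  # GB/s -> MB/s, per core
    print(f"{name}: {per_core_mbs:.1f} MB/s per CUDA core")
```

So despite the plain-GDDR5 downgrade, the GTX 1070 actually has slightly more bandwidth available per core than the GTX 1080.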
Posted on Reply
#9
Fluffmeister
ZoneDymoOn HBM1 we can indeed all agree that it was little more than pure marketing.
It made the card small and nice looking, it felt new and fresh, and on the X it had a water cooler to boot.
Good marketing, that, plus a pretty solid card (that was a tad too pricey, sure).
But yeah, that was pure marketing, soooo GDDR5X on the GTX 1080 is as well, I guess?
Indeed, a small card, no cheaper and ultimately slower than the competition.

The GTX 1080 on the other hand is the fastest thing on the market right now.

Hopefully a larger chip with HBM2 will be cheaper. ;)
Posted on Reply
#10
Caring1
FluffmeisterFury X was all about the HBM dream, but it got beat by lowly GDDR5.

Purely marketing?
Pure economics: HBM cost more, and the Fury X wasn't as fast as it should have been; AMD were relying on HBM to make up the difference.
On two identical cards limited to 4 GB, I think HBM would come out on top. But that test will never happen.
Posted on Reply
#11
Kanan
Tech Enthusiast & Gamer
Caring1Pure economics: HBM cost more, and the Fury X wasn't as fast as it should have been; AMD were relying on HBM to make up the difference.
On two identical cards limited to 4 GB, I think HBM would come out on top. But that test will never happen.
It already did. Just compare 2x 380X to a Fury X; it's ~100% the same, just GDDR5 vs. HBM. The bandwidth of 2x 380X is obviously less, but there's no bottleneck.
Posted on Reply
#12
ppn
Funny card. I'd pay $299 for this.
Posted on Reply
#13
Fluffmeister
Caring1Pure economics: HBM cost more, and the Fury X wasn't as fast as it should have been; AMD were relying on HBM to make up the difference.
On two identical cards limited to 4 GB, I think HBM would come out on top. But that test will never happen.
Hmm, dunno, the GTX 980 does fine here, but it's HardOCP, so people will scream bias:

www.hardocp.com/article/2016/05/24/xfx_radeon_r9_fury_triple_dissipation_video_card_review#.V0jv1hUrIuU

I guess you guys are right, after all it's nvidia that actually makes a profit.
Posted on Reply
#14
Kanan
Tech Enthusiast & Gamer
The reason AMD went with HBM was more that they needed to cut power consumption; the 4096-shader chip equipped with GDDR5 and a 512-bit bus would simply have consumed too much power. The Nano would not have been possible either.
Posted on Reply
#15
newtekie1
Semi-Retired Folder
KananIt already did. Just compare 2x 380X to Fury X, it's ~100% the same just GDDR5 vs HBM. Bandwidth of 2x 380X is obviously less, but there's not a bottleneck.
As long as you assume 100% Crossfire scaling. o_O
Posted on Reply
#16
Kanan
Tech Enthusiast & Gamer
newtekie1As long as you assume 100% Crossfire scaling. o_O
Yep, I assumed that (some games have that, or nearly so). The Fury X was faster, but I don't think it was the HBM; it was the 2x 380X being hindered by CrossFire. I think this is the review I'm talking about: www.eteknix.com/amd-r9-380x-4gb-graphics-card-crossfire-review/4/
Still, it goes a long way toward telling whether HBM's added bandwidth is really useful or not.
Posted on Reply
#17
Fluffmeister
KananThe reason AMD went with HBM was more that they needed to cut power consumption; the 4096-shader chip equipped with GDDR5 and a 512-bit bus would simply have consumed too much power. The Nano would not have been possible either.
Maybe, unfortunately Maxwell was still more efficient.
Posted on Reply
#18
newtekie1
Semi-Retired Folder
KananYep, I assumed that (some games have that, or nearly so). The Fury X was faster, but I don't think it was the HBM; it was the 2x 380X being hindered by CrossFire. I think this is the review I'm talking about: www.eteknix.com/amd-r9-380x-4gb-graphics-card-crossfire-review/4/
Still, it goes a long way toward telling whether HBM's added bandwidth is really useful or not.
Yeah, none of those show a 100% Crossfire scaling. And you see those horribly low minimums for the CF setup? Welcome to memory bandwidth bottleneck town.
Posted on Reply
#19
Kanan
Tech Enthusiast & Gamer
newtekie1Yeah, none of those show a 100% Crossfire scaling. And you see those horribly low minimums for the CF setup? Welcome to memory bandwidth bottleneck town.
See the other benchmarks too; I think the minimum FPS is that bad for other reasons (driver or game issues) in that game. Not 100% scaling, no, but sometimes the 380X CF setup is even faster.
Posted on Reply
#20
Enterprise24
The PCB looks very cheap again. $449 for the Founders Edition is overpriced.
Posted on Reply
#21
newtekie1
Semi-Retired Folder
KananSee the other benchmarks too; I think the minimum FPS is that bad for other reasons (driver or game issues) in that game. Not 100% scaling, no, but sometimes the 380X CF setup is even faster.
And I think it is because of memory bandwidth. Since there is no way to be sure, we'll just have to move on.
Posted on Reply
#22
Kanan
Tech Enthusiast & Gamer
newtekie1And I think it is because of memory bandwidth. Since there is no way to be sure, we'll just have to move on.
It's obviously not because of memory bandwidth. That would mean a single R9 380X is limited too, which clearly isn't the case, so two obviously aren't either. The CrossFire factor even supports this point: since it decreases performance, the setup doesn't even need as much bandwidth as the two cards have combined.
Posted on Reply
#23
ensabrenoir
....once again, everything NVIDIA (or Intel) makes is overpriced. Because as a company, they understand that their profit margins have to include R&D and future growth and development, so they can continue producing performance-leading tech? It's mind-boggling to me..... Granted, a halo product will be overpriced because it's a halo, but do you want NVIDIA to just give stuff away? Struggle to survive... repackage old tech as new... we already know how this story ends.....
Posted on Reply
#24
newtekie1
Semi-Retired Folder
KananIt's obviously not because of memory bandwidth. That would mean a single R9 380X is limited too, which clearly isn't the case, so two obviously aren't either. The CrossFire factor even supports this point: since it decreases performance, the setup doesn't even need as much bandwidth as the two cards have combined.
For all we know, a single 380X is limited by its memory bandwidth. But this is far enough off topic; time to move back on topic.
Posted on Reply
#25
Kanan
Tech Enthusiast & Gamer
newtekie1For all we know a single 380X is limited by its memory bandwidth. But this is far enough off topic, time to move back on topic.
Source? I'd like to "know" too.
Posted on Reply