# GeForce GTX 880 ES Intercepted En Route Testing Lab, Features 8 GB Memory?



## btarunr (May 5, 2014)

An engineering sample (ES) of the GeForce GTX 880 was intercepted on its way from a factory in China to NVIDIA's development center in India, where it will probably undergo testing and further development. The shipping manifest of the courier ferrying NVIDIA's precious package was sniffed out by the Chinese press. NVIDIA was rather descriptive about the ES in its shipping declaration. Buzzwords include "GM204" and "8 GB GDDR5," hinting at what could be two of the most important items on its specs sheet. GM204 is a successor to GK104, and is rumored to feature 3,200 CUDA cores, among other things, including a 256-bit wide memory bus. If NVIDIA is cramming 8 GB onto the card, it must be using some very high-density memory chips. The manifest also declares its market value at around 47,000 Indian Rupees. That may convert to US $780, but after taxes and local markups, 47,000 INR is usually where $500-ish graphics cards end up in the Indian market. The R9 290X, for example, is going for that much.
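As a rough sanity check on the 8 GB claim, here's a sketch of the chip-count arithmetic, assuming standard 32-bit-wide GDDR5 devices on the rumored 256-bit bus (the layout is my assumption, not something in the manifest):

```python
# Rough arithmetic: what per-chip density does 8 GB on a 256-bit bus imply?
# Assumes standard GDDR5 devices with a 32-bit data interface.

BUS_WIDTH_BITS = 256
CHIP_IO_BITS = 32                    # one GDDR5 chip drives 32 data lines
TOTAL_GBYTE = 8

chips = BUS_WIDTH_BITS // CHIP_IO_BITS           # 8 chips, one per channel
density_gbit = TOTAL_GBYTE * 8 // chips          # gigabits per chip
print(chips, density_gbit)                       # 8 chips x 8 Gb each

# In clamshell (x16) mode, two chips share each 32-bit channel,
# halving the required per-chip density:
clamshell_chips = chips * 2
clamshell_gbit = TOTAL_GBYTE * 8 // clamshell_chips
print(clamshell_chips, clamshell_gbit)           # 16 chips x 4 Gb each
```

Single-chip 8 Gb GDDR5 was not common in 2014, so if the 8 GB figure is real, a 16-chip clamshell layout with 4 Gb devices seems the more plausible reading.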





*View at TechPowerUp Main Site*


----------



## NationsAnarchy (May 5, 2014)

256-bit memory interface only ? Can someone convince me why not 512 ?


----------



## LAN_deRf_HA (May 5, 2014)

I figure retail cards will likely be 4 GB only for the first few months, unless Nvidia specifically decides to market for 4K.


----------



## renz496 (May 5, 2014)

NationsAnarchy said:


> 256-bit memory interface only ? Can someone convince me why not 512 ?



...to reduce cost...


----------



## NationsAnarchy (May 5, 2014)

renz496 said:


> ...to reduce cost...



Yeah ok that's a good reason.


----------



## newtekie1 (May 5, 2014)

NationsAnarchy said:


> 256-bit memory interface only ? Can someone convince me why not 512 ?



Obviously nVidia didn't think it was worth the extra cost.  GM204 is a mid-range GPU after all, so 512-Bit memory bus would be kind of ridiculous.


----------



## MxPhenom 216 (May 5, 2014)

renz496 said:


> ...to reduce cost...



That and the GPU die would be huge for what should be mid range.


----------



## NationsAnarchy (May 5, 2014)

newtekie1 said:


> Obviously nVidia didn't think it was worth the extra cost.  GM204 is a mid-range GPU after all, so 512-Bit memory bus would be kind of ridiculous.





MxPhenom 216 said:


> That and the GPU die would be huge for what should be mid range.



Gotcha. Thanks for that, didn't quite realize that GM204 is just mid-range.


----------



## MxPhenom 216 (May 5, 2014)

NationsAnarchy said:


> Gotcha. Thanks for that, didn't quite realize that GM204 is just mid-range.



It's the GPU that should be the successor to the GTX 660 Ti/GTX 760, but like first-gen Kepler: the GTX 680 was on GK104 and beat AMD's high end, then with the 700 series they used the actual high-end GPU, GK110, for the 780 and higher. I expect that to be the same for the 800 and 900 series.


----------



## Sasqui (May 5, 2014)

NationsAnarchy said:


> Gotcha. Thanks for that, didn't quite realize that GM204 is just mid-range.



What boggles the mind is 8 GB on a "mid-range" card.


----------



## librin.so.1 (May 5, 2014)

"FUNTIONAL"? As in... made to produce fun? 

(lel 'DAT TYPO in the shipping manifest)


----------



## NationsAnarchy (May 5, 2014)

Sasqui said:


> What boggles the mind is 8 GB on a "mid-range" card.



Dude, it's just an ES, really, judging by the GPU codename GM204.


----------



## NationsAnarchy (May 5, 2014)

MxPhenom 216 said:


> It's the GPU that should be the successor to the GTX 660 Ti/GTX 760, but like first-gen Kepler: the GTX 680 was on GK104 and beat AMD's high end, then with the 700 series they used the actual high-end GPU, GK110, for the 780 and higher. I expect that to be the same for the 800 and 900 series.



Yeah, you're right. My memory just isn't working correctly for me. It's late at night here anyway.


----------



## GhostRyder (May 5, 2014)

Cool, I like the idea of that much RAM on a reference gaming card, because it will keep it much more stable in the coming years!


----------



## jabbadap (May 5, 2014)

8 GB? Probably a Quadro or Tesla, then. I kind of doubt there will be an 8 GB GeForce card; maybe the same PCB, with 4 GB GDDR5 for the GeForce and 8 GB for the Quadro.


----------



## MxPhenom 216 (May 5, 2014)

jabbadap said:


> 8GB, probably quadro or tesla  then. Kind of doubt there will be 8GB geforce card, same pcb maybe and 4GB gddr5 for geforce and 8GB for quadro.



I do not think the Quadro and Tesla cards will be running on the GM204 die. They save those cards for the big die GM210.


----------



## jabbadap (May 5, 2014)

MxPhenom 216 said:


> I do not think the Quadro and Tesla cards will be running on the GM204 die. They save those cards for the big die GM210.



The Quadro K5000 is GK104 with 4 GB VRAM, while the GeForce GTX 680 has GK104 with 2 GB VRAM.

It's true that there's no Tesla with a single GK104, but the Tesla K10 has two of them.


----------



## Selene (May 5, 2014)

The 680 and 770 were mid-range cards on GK104. AMD was weak, and Nvidia took full advantage of the market, charging $500 a pop for what used to be a $250-$300 range card.

Then you have the 680s and 770s with double the memory, 4 GB on the 256-bit bus. It simply does not work: by the time the resolution gets high enough to use that extra memory, the GPU is not strong enough to matter.
The only time it's even remotely worthwhile is if you're going two- or three-way SLI at very high resolution.


----------



## Assimilator (May 5, 2014)

There is zero reason to put 8GB of memory on a mid-range product, especially a 28nm mid-range product. I call BS on this, anyone can fake up a table in MS Word.


----------



## jabbadap (May 5, 2014)

Assimilator said:


> There is zero reason to put 8GB of memory on a mid-range product, especially a 28nm mid-range product. I call BS on this, anyone can fake up a table in MS Word.



The table is not some random Excel spreadsheet. It's from https://www.zauba.com/


> *Welcome to Zauba*
> Home to India's import and export data. Zauba helps businesses reach new heights. Gain unparalleled insight into trade, access daily import and export shipment records, discover new markets and opportunities.



You can search it yourself:
https://www.zauba.com/import-gm204-hs-code.html


----------



## erocker (May 5, 2014)

renz496 said:


> ...to reduce cost...


...or to make more profit. Considering Nvidia's pricing strategy as of late, I'd go with profit.


----------



## Hilux SSRG (May 5, 2014)

3,200 cuda cores seems high for the midrange.


----------



## cadaveca (May 5, 2014)

two words:


Dual GPU.



Everything in that list hints at it.


 I dunno WTF I am talking about.


----------



## Fluffmeister (May 5, 2014)

Selene said:


> 680 and 770 where mid range cards on GK104, AMD was weak and Nvidia took full advantage of the market charging $500 a pop for what used to be the $250-$300 range card.



Just because GK104 was mid-range in the Kepler hierarchy it hardly warranted mid range prices.

People seem to be quick to forget it launched being both cheaper and faster than the competition:
http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_680/1.html


----------



## Hilux SSRG (May 5, 2014)

Fluffmeister said:


> Just because GK104 was mid-range in the Kepler hierarchy it hardly warranted mid range prices.
> 
> People seem to be quick to forget it launched being both cheaper and faster than the competition:
> http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_680/1.html



7970 launched 12/2011 at $550
680 launched 03/2012 at $500

Nvidia saw the gk104 was enough to battle the 7970 and decided to overcharge for mid range.  Then came the 780, 780ti, 780 Black, arguably the high range/flagships.


----------



## Fluffmeister (May 5, 2014)

Hilux SSRG said:


> 7970 launched 12/2011 at $550
> 680 launched 03/2012 at $500
> 
> Nvidia saw the gk104 was enough to battle the 7970 and decided to overcharge for mid range.  Then came the 780, 780ti, 780 Black, arguably the high range/flagships.



It was more than enough to compete; not sure why they should have sold it for any cheaper. Its performance fit. Maybe AMD should have charged less for their high-end chip that apparently only performed like a mid-range one.

/shrug


----------



## W1zzard (May 5, 2014)

It makes perfect sense to build an early ES board with more than enough memory. So, using a single card, you can gather performance data for 1 GB, 2 GB, 3 GB, 4 GB, 6 GB, 8 GB configurations. Data that will later be used to decide what memory size the final shipping cards will have.

It also helps to uncover hardware and software design flaws. Imagine you build a new GPU, then a year later decide you'll make a 6 GB Titan card, and oops, the memory controller in the GPU doesn't work with that much memory.


----------



## Hilux SSRG (May 5, 2014)

Fluffmeister said:


> It was more than enough to compete, not sure why they should sell it for any cheaper. It's performance fit, maybe AMD should have charged less for their high-end chip that only performed like a mid-range one apparently.
> 
> /shrug



Nvidia saw AMD's weakness and ran with it, to the detriment of consumers.  It's just shocking how much they overcharged.  I wonder if their failed Tegra 3, with its lack of LTE support, caused Nvidia to seek higher profits in gfx cards?


----------



## Fluffmeister (May 5, 2014)

Hilux SSRG said:


> Nvidia saw Amd's weakness and ran with it, to the detriment of consumers.  It's just shocking how much they overcharged.  I wonder if their failed Tegra3 with lack of LTE support caused Nvidia to seek higher profits in gfx cards?



Welcome to the world of multi-billion dollar corps.

As always, people are free to vote with their wallets. nVidia have a stronger brand and people are clearly happy to pay; I'm sorry, but that is just how it is.


----------



## haswrong (May 5, 2014)

Since there's the cheapest workforce in India and China, it is very likely NVIDIA is finally going to price these cards very cheaply (like $300 for the best model) and will finally focus on affordability for end customers, which has been severely lacking for the last decade.


----------



## D3LTA09 (May 5, 2014)

This doesn't make sense to me. How can an 880 be the successor to a GK104 board? Wouldn't it have to be the successor to GK110?


----------



## OneCool (May 5, 2014)

Who gives a damn about the video card........... I want the Water Cooled Plunger!!!!!!!!


----------



## 64K (May 5, 2014)

D3LTA09 said:


> This doesnt make sense to me, how can an 880 be the successor to a GK104 board? it would have to be the successor to GK110?



When Nvidia launched the GTX 680 there were yield problems with the GK110 at TSMC and AMD's flagship performance was such that Nvidia could label the GK104 as a GTX 680 when in the past this would have been a midrange GPU. It was a success and they probably will repeat this with the Maxwell series.


----------



## john_ (May 5, 2014)

A 256-bit mid-range card hoping to cost ***ONLY*** $500. 
How nice.


Ah! Yes... 8 GB RAM,
for example.


----------



## KainXS (May 5, 2014)

It looks like a dual GPU to me. Am I the only one thinking that?


----------



## NC37 (May 5, 2014)

Oh, that's not a good sign if an x04 chip is taking the high end. This is Kepler all over again, where we went from GF104 in Fermi being the mid-high chip to GK104 being dumped in the high end, with the original high-end 110 coming later.

It wrecked prices. Fermi had the 104s in a beautiful pricing segment, and then Kepler saw them jump because suddenly 104s were used for all the high ends. NV then stuffed 106s into the mid-high bracket when 106s were more mid-low. So they literally found a way to make people pay more for weaker GPUs.


----------



## matar (May 6, 2014)

So now NVIDIA will give us the GTX 880 in two flavors, 4 GB and 8 GB, but still based on 28 nm.
Then we will get the same but refined architecture for the GTX 990 on 20 nm. It seems to me NVIDIA and Intel (like Sandy Bridge to Ivy Bridge) are both going the same route: make more money and not give us the latest technology right away when it's available. Remember, we saw this with the 8800 to the 9800 (same but refined architecture), then 400 to 500, then 600 to 700, and now we will get the 800 series, and ONLY IN THE 900 series will we get the true, full Maxwell chip. That's what NVIDIA used to stand for: "NVIDIA, the way it's meant to be played." I guess those days are going away. Don't get me wrong, I'd still only buy NVIDIA GPUs.


----------



## GhostRyder (May 6, 2014)

Nvidia needs to put more RAM on their cards at better prices (at least on the gaming GPUs).  It's annoying to have to buy the EVGA, Asus, or MSI cards that add more RAM to make up for the detriment to high-resolution gaming.  I love the fact that there's an 8 GB version coming, because that will be sweet (so long as the price is right).


----------



## Relayer (May 6, 2014)

cadaveca said:


> two words:
> 
> 
> Dual GPU.
> ...



You actually make sense though. Either that or I'm Tweedle Dee to your Tweedle Dumb.


----------



## Relayer (May 6, 2014)

haswrong said:


> since theres the cheapest workforce in india and china, it is very likely nvidia is finally going to price these cards very cheaply (like 300$ for the best model) and will finally focus on the affordability for end customers, which has been severely lacking the last decade.




I have a bridge I'd like to sell you.


----------



## HalfAHertz (May 6, 2014)

I don't really see the 680 as a purely mid-range card. I think that Nvidia was indeed ahead of AMD with the design of the so-called high-end GPU, but only by a bit, and their CEO (like every good CEO should) tried to misrepresent the scale of the situation to put Nvidia in a better light.

I think the timeline went something along those lines:

- TSMC's 28nm process had problems, as usual.

- There weren't that many 7970s at first.

- The 680 came in a bit later than the 7970.

- GK104 had some supply issues as well.

- The 7970 GHz Edition came out.

- Non-competitive high pricing from both AMD and Nvidia.

- The 780 came out some time later.

- The 290X came out a few months after that.

Frankly, I think that the 680 was simply the best that Nvidia could get out of TSMC at that time. It was a new architecture with a complicated design on a new manufacturing node. The last time they tried this, they came up with the FX 5000 series, which was a bit of a flop. So they made the calculated choice not to rush things like last time and to work through the issues to deliver a more solid solution.

Unsurprisingly, AMD was in a similar position. They were getting sub-par results that delivered good performance but in a higher power envelope. They made the choice not to optimize power consumption as much and to rush the design so that they could deliver before Nvidia and conquer the market in the meantime. AMD as a whole was in a financially fragile situation at that time and was really in need of a few months of good sales. Not long after the initial release, AMD tried to mitigate their unoptimized design with the refined 7970 GHz Edition, somewhat successfully.

Both products had high pricing for a long time due to the difficulty of manufacturing them at sufficiently high rates to satisfy demand.

Once Nvidia sorted things out with the manufacturing process, they could finally bring out the Fermi successor they envisioned from the start, and the 780 came out.

The 290X came out a few months later and managed to deliver comparable performance, but it had the same high power usage issues as the 7970. My conclusion is that the 290 had the same silicon substrate as the 7970 and the two were designed in tandem from the start. Most likely AMD tried to solve the high power consumption but failed, and decided not to delay the release any further to prevent further loss of sales to the 780.


----------



## xenocide (May 6, 2014)

NC37 said:


> Oh, that's not a good sign if an x04 chip is taking the high end. This is Kepler all over again, where we went from GF104 in Fermi being the mid-high chip to GK104 being dumped in the high end, with the original high-end 110 coming later.
> 
> It wrecked prices. Fermi had the 104s in a beautiful pricing segment, and then Kepler saw them jump because suddenly 104s were used for all the high ends. NV then stuffed 106s into the mid-high bracket when 106s were more mid-low. So they literally found a way to make people pay more for weaker GPUs.


 
Alright, let's get this over with.  *Stop acting like the product code immediately determines the value*.  You wanna know _why_ the GTX 680 was the GK104?  Because it was better than AMD's best offering at the time, substantially better than the GTX 580, and absolutely destroyed the GTX 560.  Everyone keeps acting like just because it says GK104 instead of GK100 or GK110 it's suddenly crap; it's not.  If the GTX 880 comes out as a GM204 part on 28nm, and offers performance that exceeds the GTX 780 Ti while using less power for ~$500, you cannot tell me you would consider it _crap_.  Being 20nm or GM100/110 doesn't matter as long as the performance is there.  Don't like Nvidia's system?  Then encourage AMD to put out a GPU that isn't a miniature heater with a leafblower attached to it that can compete with Nvidia at the high end.

The GTX 680 came out and did screw with prices, because it was so *good*.  AMD had to drop the price of their HD 7970 to compete with Nvidia's smaller, cooler, more energy-efficient mid-range part that Nvidia was getting better profit margins on.  The people who control the upper bracket of performance set the price bar.  AMD didn't hesitate to throw the HD 7970 out there at $550 when the HD 6970 launched at under $400, just like they didn't hesitate to throw the FX Series Socket 939 CPUs out there for $1000+ when they were substantially better than Intel's offerings.  It's capitalism.  If they are offering a superior product, they are going to charge a premium.  Corvettes are more expensive than Camaros; it doesn't mean Camaros are a rip-off or crappy products.



GhostRyder said:


> Nvidia needs to put more RAM on their cards at better prices (at least on the gaming GPUs).  It's annoying to have to buy the EVGA, Asus, or MSI cards that add more RAM to make up for the detriment to high-resolution gaming.  I love the fact that there's an 8 GB version coming, because that will be sweet (so long as the price is right).


 
Show me a single benchmark where a 4GB variation of a normally 2GB card significantly outperforms at higher resolutions.  I'll wait.  I haven't found a single one.


----------



## Relayer (May 6, 2014)

xenocide said:


> Alright, let's get this over with.  *Stop acting like the product code immediately determines the value*.  You wanna know _why_ the GTX 680 was the GK104?  Because it was better than AMD's best offering at the time, and substantially better than the GTX 580, and absolutely destroyed the GTX 560.  Everyone keeps acting like just because it says GK104 instead of GK100 or GK110 it's suddenly crap--it's not.  If the GTX880 comes out as a GM204 part on 28nm, and offers performance that exceeds the GTX 780 Ti while using less power for ~$500, you cannot tell me you would consider it _crap_.  Being 20nm or GM100/110 doesn't matter as long as the performance is there.  Don't like Nvidia's system?  Then encourage AMD to put out a GPU that isn't a miniature heater with a leafblower attached to it that can compete with Nvidia at the high end.
> 
> The GTX 680 came out and did screw with prices, because it was so *good*.  AMD had to drop the price of their HD7970 to compete with Nvidia's smaller, cooler, more energy efficient mid-range part that they were getting better profit margins on.  The people who control the upper bracket of performance set the price bar.  AMD didn't hesitate to throw the HD7970 out there at $550 when the HD6970 launched at under $400.  Just like they didn't hesitate to throw the FX Series Socket 939 CPU's out there for $1000+ when they were substantially better than Intel's offerings.  It's capitalism.  If they are offering a superior product, they are going to charge a premium.  Corvette's are more expensive than Camaro's, it doesn't mean Camaro's are a rip off or crappy products.
> 
> ...


GK104 was better than Tahiti in perf/W in gaming, and that's it (once both are overclocked there is no perf advantage, and Tahiti is typically a little faster). Tahiti is a much better chip in any compute metric (excluding CUDA, of course): stronger in DP, and it destroys GK104 in OpenCL and scrypt (what mining uses). For some reason the Titan and Titan Z (especially) are worth huge premiums because of their DP prowess, but nobody takes that into account when comparing GK104 to Tahiti.


----------



## buggalugs (May 6, 2014)

Relayer said:


> GK104 was better than Tahiti perf/W in Gaming and that's it (Once both are O/C'd there is no perf advantage where Tahiti is typically a little faster.). Tahiti is a much better chip in any compute metric (excluding CUDA, of course). Stronger in DP and destroys it in OpenCL and scrypt (what mining uses). for some reason the Titan and Titan-Z (especially) are worth huge premiums because of the DP prowess. Nobody takes that into acct. though when comparing GK104 to Tahiti.




I agree, the 680 wasn't that great. It only just beat the 7970 (under 10%, and the 7970 won plenty of games), and that was with a highly overclocked, boost-enabled 680. The GHz Edition changed things again, then Nvidia rushed out the 780. Then they had the cheek to release the Titan for $1,000, knowing the 780 was just weeks away and would demolish the Titan for much less money.

I look at it very negatively, in that Nvidia decided to screw consumers and milk them for another upgrade.


----------



## HumanSmoke (May 6, 2014)

xenocide said:


> Show me a single benchmark where a 4GB variation of a normally 2GB card significantly outperforms at higher resolutions.  I'll wait.  I haven't found a single one.


Very true. It's almost as if some people believe that adding memory makes a cumulative difference, when the fact is that the chip architects ACTUALLY know what they are doing in making a balanced design. For a 4 GB card to show a marked improvement over a 2 GB version, the original design must not have been balanced in the first place, and that simply doesn't happen in modern GPU design. The other scenario would have to be that the lower framebuffer was allied with a less functional die.


xenocide said:


> Alright, let's get this over with.  *Stop acting like the product code immediately determines the value*.  You wanna know _why_ the GTX 680 was the GK104?  Because it was better than AMD's best offering at the time, and substantially better than the GTX 580, and absolutely destroyed the GTX 560.  Everyone keeps acting like just because it says GK104 instead of GK100 or GK110 it's suddenly crap--it's not.


Actually, probably a case of both vendors looking over the fence. AMD's die size has been steadily increasing as they see the value of compute, traditionally (at least since G80) something that Nvidia incorporated wholesale. Nvidia noted that not every GPU needs to check every feature box. AMD's Barts GPU sold very well despite having a complete lack of double precision, but Nvidia also commands the lion's share of the pro market and needs compute, hence the bifurcation of the product lines. The top part gets all the bells and whistles; the second-tier parts and down are stripped to the bare minimum to save die space and power.


xenocide said:


> If the GTX880 comes out as a GM204 part on 28nm, and offers performance that exceeds the GTX 780 Ti while using less power for ~$500, you cannot tell me you would consider it _crap_.


I don't think you can judge any performance and price in a vacuum. How the card is viewed will be as much about how it stands against its competition as about its own features. R600 might have been judged a pretty fair GPU had G80 and G92 not bracketed its release.
AMD have a record of playing their GPU cards close to their chest (something the CPU division could learn from), so I'll reserve judgement for the time being. I wouldn't be at all surprised to see a re-run of the previous few generations with the GM204 vs. Pirate Islands episode. When people argue about superiority while the difference is a few percentage points, and an optimized game bench or two can swing the result one way or the other, it seems that neither side will put daylight between their product and the competition.


----------



## 64K (May 6, 2014)

I think it's fair to judge the GTX 680 as upper midrange. I certainly had no complaints about performance when I got mine nor do I have any complaints about performance now.

http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_680/32.html

When you compare the true Kepler flagship GTX 780Ti to the GTX 680 it's clear where the GTX 680 fits on the Kepler scale.


----------



## Aditya (May 6, 2014)

NationsAnarchy said:


> 256-bit memory interface only ? Can someone convince me why not 512 ?


Yeah, I wonder the same. Didn't the old all-powerful GTX 295 have an 896-bit memory interface? Maybe higher clock speeds compensate for the reduced width.
http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-295/specifications


----------



## NationsAnarchy (May 6, 2014)

Aditya said:


> Yeah I wonder the same,didnt the old all powerful GTX-295 have 896bit memory interface? Maybe the higher clock speeds must be compensating for the reduced width.



Well, if you pay attention to the topic, a lot of reasons have been given lol


----------



## cadaveca (May 6, 2014)

Relayer said:


> You actually make sense though. Either that or I'm Tweedle Dee to your Tweedle Dumb.




Just because it makes sense doesn't make it right, though. The listed memory and shader counts don't make sense to me and, to me, point to a dual GPU. W1zz is probably bang-on as to why there might be an 8 GB listing, however. I hadn't considered that it might have to do with memory controller testing, and that makes even more sense to me. 3,000+ shaders tops the 780 Ti, though.


----------



## rtwjunkie (May 6, 2014)

I totally agree with W1zz, in that it makes sense to send an ES with all kinds of goodies on it.  It allows you to test many different variations on one card.  Plus, it keeps all of us in a guessing frenzy as to what the specs will be on release!


----------



## arbiter (May 7, 2014)

Fluffmeister said:


> Just because GK104 was mid-range in the Kepler hierarchy it hardly warranted mid range prices.
> 
> People seem to be quick to forget it launched being both cheaper and faster than the competition:
> http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_680/1.html



The GTX 680 was the top-of-the-line card for that series, and it was a GK104 chip. It wasn't mid-range at the time it was released.


I don't see why everyone assumes it's a 256-bit memory bus, as that would be a bit of a downgrade performance-wise. 256-bit was nothing but a rumor, so best to leave it as one until proper specs are released.


----------



## Relayer (May 7, 2014)

cadaveca said:


> Just because it makes sense, doesn't make it right, though. Listed memory and shader counts don't make sense to me, and to me point to that dual-GPU... W1zz is probably bang-on as to why there might be an 8 GB listing, however. I hadn't considered that it might have to do with memory controller testing, and that makes even more sense to me. 3000 shaders tops 780 TI though.


I didn't say I'd bet the house on it.


----------



## cadaveca (May 7, 2014)

Relayer said:


> I didn't say I'd bet the house on it.


Heh. I feel ya. The shader numbers given are still weird, so whateva.


----------



## EarthDog (May 7, 2014)

NationsAnarchy said:


> 256-bit memory interface only ? Can someone convince me why not 512 ?


Sure... what is the bandwidth of a 512-bit bus running at 1250 MHz versus a 256-bit bus running at 1750 MHz?
(Answer: in the same ballpark)

It is, for the most part, two different ways of getting to the same thing: a 512-bit bus is more expensive to make but can use cheaper 1250 MHz-rated GDDR5, versus a cheaper-to-make 256-bit bus paired with more expensive RAM ICs.
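The trade-off above can be sketched with the standard peak-bandwidth formula; this is a rough sanity check under the common GDDR5 assumption of 4 bits per pin per command-clock cycle, and the function name is mine:

```python
# Peak GDDR5 bandwidth: bytes per transfer (bus width / 8) times the
# effective transfer rate. GDDR5 moves 4 bits per pin per command-clock
# cycle, so a 1250 MHz command clock is a 5 GT/s effective data rate.

def gddr5_bandwidth_gbps(bus_bits: int, clock_mhz: float) -> float:
    """Peak bandwidth in GB/s for a given bus width and command clock."""
    transfers_per_sec = clock_mhz * 1e6 * 4   # quad data rate
    return (bus_bits / 8) * transfers_per_sec / 1e9

print(gddr5_bandwidth_gbps(512, 1250))   # 320.0 GB/s
print(gddr5_bandwidth_gbps(256, 1750))   # 224.0 GB/s
# Halving the bus width needs roughly double the clock to break even:
print(gddr5_bandwidth_gbps(256, 2500))   # 320.0 GB/s
```

Note that with the exact clocks quoted, the wider bus still comes out ahead; the "same ballpark" holds only when the narrower bus roughly doubles its data rate.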


----------



## NationsAnarchy (May 7, 2014)

EarthDog said:


> Sure... what is the bandwidth of a 512bit bus running at 1250Mhz versus a 256bit bus running at 1750 Mhz?
> (Answer: in the same ballpark)
> 
> It is, for the most part, two different ways of getting to the same thing. making a 512bit bus is more expensive, but using 1250Mhz rated DDR5 is cheaper. Versus a cheaper to make bus and more expensive ram IC's.



I'm down with this one. Thanks, man! I hope I can learn more from you.


----------



## 3dhotshot (May 18, 2014)

Nvidia knows a 512-bit memory bus will destroy any and all games at ultra settings, even with just a 2 GB gfx card; this is just the reason they will only keep the 512-bit bus for their flagship products.

512-bit = more heat; this is why I think 8 GB on a 256-bit bus is the route they are going.

8 GB of GDDR5 on a graphics card has been long overdue, and these cards will equip the gamer for 3K and 4K gaming. Let's not forget that custom GPU resolution scaling on existing drivers lets you play at resolutions much higher than your current monitor supports. Other people, like 3D artists who use software such as Lumion, LightWave 3D, Unreal Engine 4, the Adobe CS suite, etc., will benefit greatly from these 8-gig cards!

I vote for 8 gigs of RAM anytime. They are preparing for smooth FPS on next-generation game engines.



NationsAnarchy said:


> 256-bit memory interface only ? Can someone convince me why not 512 ?



Yes, I also vote for a 512-bit, 8-gig gfx card that is affordable; however, Nvidia may not be willing to come to the table. Instead... they continue to drag the market, and gamers along with it.



----------



## HumanSmoke (May 18, 2014)

3dhotshot said:


> Nvidia knows a 512 bit memory bus will destroy any and all games at ultra settings even with just a 2GB Gfx card ~ this is just the reason they will only keep the 512bit bus for their flagship products.


No, it's because memory controllers add die complexity and size, and the memory bus needs to balance the core components. That is why you don't see a 512-bit (or 384-bit, for that matter) bus used in *ANY* GPU except the largest die of an architecture.
Care to name _ANY_ GPU, regardless of vendor, that wasn't the flagship of its architecture and had a high bus width?


3dhotshot said:


> 512 bit = more heat.


No. Transistor density is actually lower in the uncore (memory controllers, cache, I/O, etc.) than in the core. The only reason high-bus-width GPUs use more power is that they are large pieces of silicon with more cores than the mainstream/entry-level GPUs.


----------



## GhostRyder (May 19, 2014)

3dhotshot said:


> Yes I also vote for a 512bit memory bus 8gig Gfx card which is affordable, however Nvidia may not be willing to come to the table instead.... they continue to drag the market and gamers along with it.
> 
> Nvidia knows a 512 bit memory bus will destroy any and all games at ultra settings even with just a 2GB Gfx card ~ this is just the reason they will only keep the 512bit bus for their flagship products.
> 
> ...


You can make up for a narrower bus with higher clocks, which may be what Nvidia is going for here; that's how they have done it before, and in fact both AMD and Nvidia make that trade every now and then. Plus, these are only rumors, and rumors change with time. The 8 GB itself is what will be king if this turns into the real GTX 880, because where Nvidia has been falling behind is having enough RAM to run ultra-HD setups; 3 GB was not cutting it this round.


HumanSmoke said:


> No, it's because memory controllers add die complexity and size. The memory bus also needs to balance the core components. It is why you don't see a 512-bit (or 384 for that matter) used in *ANY* GPU except the largest die of an architecture.
> Care to name _ANY_ GPU regardless of vendor that wasn't a flagship of the architecture that had a high bus width?


Pfft, ok.  Try the HD 6850-6870, both had a 256bit bus just like the HD 6950 and 6970.  LINK1, link2, link3.


----------



## xenocide (May 19, 2014)

GhostRyder said:


> The 8 GB itself is what will be king if this turns into the real GTX 880, because where Nvidia has been falling behind is having enough RAM to run ultra-HD setups; 3 GB was not cutting it this round.


 
Except that it was.  Ultra HD is defined as 4K+, do you seriously think any current GPU will run out of VRAM before it runs out of processing power at that high of a resolution?



GhostRyder said:


> Pfft, ok.  Try the HD 6850-6870, both had a 256bit bus just like the HD 6950 and 6970.  LINK1, link2, link3.


 
HD6850 and HD6870 were basically revised versions of HD5850 and HD5870--AMD's former flagship.  Plus, AMD switched from VLIW5 to VLIW4 between Barts and Cayman.


----------



## GhostRyder (May 19, 2014)

xenocide said:


> HD6850 and HD6870 were basically revised versions of HD5850 and HD5870--AMD's former flagship.  Plus, AMD switched from VLIW5 to VLIW4 between Barts and Cayman.


But Barts was not the flagship of that generation; Cayman was, and it still had a 256-bit bus just like Barts.



xenocide said:


> Except that it was.  Ultra HD is defined as 4K+, do you seriously think any current GPU will run out of VRAM before it runs out of processing power at that high of a resolution?


4K is what I was referring to; that resolution has already seen 3 GB maxing out, which shrinks the lead the GTX 780 Ti had over the 290X compared to lower resolutions. In multi-GPU setups, the 290X even takes the lead in many situations, or keeps the average difference within a few FPS. The new EVGA GTX 780 Ti 6 GB edition solves that problem outright, or this next-gen Maxwell will.
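To put the 4K numbers in perspective: the render targets themselves are small, and it's textures that push a card past 3 GB. A rough sketch of the arithmetic (the buffer count is an illustrative assumption, not any particular game's setup):

```python
# Rough VRAM footprint of the 4K render targets alone. Textures, not
# framebuffers, are what actually blow past a 3 GB card.

def render_target_mb(width: int, height: int, bytes_per_pixel: int = 4) -> float:
    """Size of one full-screen buffer in MiB (4 bytes/pixel = 32-bit RGBA)."""
    return width * height * bytes_per_pixel / 2**20

frame = render_target_mb(3840, 2160)   # one 32-bit buffer at 4K
print(round(frame, 1))                 # ~31.6 MB

# Even a generous deferred setup (4 G-buffer targets + depth + a
# double-buffered swap chain = 7 full-size buffers) stays well under 3 GB:
print(round(7 * frame, 1))             # ~221.5 MB
```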


----------



## xenocide (May 19, 2014)

GhostRyder said:


> But Barts was not the flagship of that generation; Cayman was, and it still had a 256-bit bus just like Barts.


 
Because it was a repackaged flagship product.  Using that same logic a GTX 760 or 770 also somewhat proves the point.



GhostRyder said:


> 4K is what I was referring to; that resolution has already seen 3 GB maxing out, which shrinks the lead the GTX 780 Ti had over the 290X compared to lower resolutions. In multi-GPU setups, the 290X even takes the lead in many situations, or keeps the average difference within a few FPS. The new EVGA GTX 780 Ti 6 GB edition solves that problem outright, or this next-gen Maxwell will.


 
Prove it.  Show me a single benchmark of the 780Ti with 6GB vastly outperforming the 3GB version.  I cannot find a single one.  For that matter, find me a good example of a card offering significant performance gains from doubling the VRAM period.  Because I can show you dozens of benchmarks saying it makes no difference.


----------



## GhostRyder (May 19, 2014)

xenocide said:


> Because it was a repackaged flagship product.  Using that same logic a GTX 760 or 770 also somewhat proves the point.


I'm a bit confused by your wording. It was repackaged, yes, but the highest-performing card of that generation was Cayman, which at the end of the day was still the VLIW architecture. Hawaii vs. Tahiti could be viewed the same way, in that they are both GCN, yet the revision number still seems to be a point of confusion, being considered either GCN 1.1 or 2.0 depending on where you look. But at the end of the day, they are all still GCN.




xenocide said:


> Prove it.  Show me a single benchmark of the 780Ti with 6GB vastly outperforming the 3GB version.  I cannot find a single one.  For that matter, find me a good example of a card offering significant performance gains from doubling the VRAM period.  Because I can show you dozens of benchmarks saying it makes no difference.


OK, link. It's very game-dependent, but you can see the 290X does use beyond 3 GB of memory in games like Crysis 3. As for a 6 GB 780, it's kind of hard to show since they are still pretty new to the market.


----------



## HumanSmoke (May 19, 2014)

GhostRyder said:


> HumanSmoke said:
> 
> 
> > No, it's because memory controllers add die complexity and size. The memory bus also needs to balance the core components. *It is why you don't see a 512-bit (or 384 for that matter) used in ANY GPU except the largest die of an architecture.*
> ...


Pfft. Try again. This time take your time reading what's written. I've added a subtle hint to help you.

--------------------------------------------------------------------------------------------------------


BTW, if you consider 256-bit some kind of pinnacle of bus width, then I'm sure GT 230 owners, amongst others, would be truly surprised.
...and why prattle on about Cayman? Every man and his dog knows that the HD 6970 was bandwidth-starved, and that was a principal reason why Tahiti added IMCs. From Anand's Tahiti review:


> As it turns out, there’s a very good reason that AMD went this route. ROP operations are extremely bandwidth intensive, so much so that even when pairing up ROPs with memory controllers, the ROPs are often still starved of memory bandwidth. *With Cayman AMD was not able to reach their peak theoretical ROP throughput even in synthetic tests, never mind in real-world usage*.





xenocide said:


> Except that it was.  Ultra HD is defined as 4K+, do you seriously think any current GPU will run out of VRAM before it runs out of processing power at that high of a resolution?


Some people on the internet said it is true, so some other people believe it. The main differences between AMD's and Nvidia's architectures re: 4K are raster throughput and its ratio to the number of compute units (or SMXs in Nvidia's case), instruction scheduling, latency, and cache setup.
Raw bandwidth and framebuffer numbers don't take into account the fundamental difference in architectures, which is why a 256-bit GK104 with 192 GB/s of bandwidth can basically live in the same performance neighbourhood as a 384-bit Tahiti with 288 GB/s, within the confines of a narrow gaming focus.
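Both bandwidth figures fall straight out of bus width times effective transfer rate; a quick sanity check (assuming the round 6 GT/s effective GDDR5 data rate those numbers imply):

```python
# Peak memory bandwidth = (bus width in bytes) * effective transfer rate.
# 6.0 GT/s is the effective GDDR5 data rate implied by the figures above.

def bandwidth_gbs(bus_bits: int, transfer_gtps: float) -> float:
    """Peak bandwidth in GB/s for a given bus width and data rate."""
    return bus_bits / 8 * transfer_gtps

print(bandwidth_gbs(256, 6.0))  # GK104 (256-bit):  192.0 GB/s
print(bandwidth_gbs(384, 6.0))  # Tahiti (384-bit): 288.0 GB/s
```

Same formula shows how clocks trade against width: a 256-bit bus at 7 GT/s already beats a 384-bit bus at 4.5 GT/s.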


----------



## GhostRyder (May 19, 2014)

HumanSmoke said:


> Pfft. Try again. This time take your time reading what's written. I've added a subtle hint to help you.
> 
> BTW, if you consider 256-bit some kind of pinnacle of bus width, then I'm sure GT 230 owners amongst others would be truly surprised.
> ...and why prattle on about Cayman? Every man and his dog knows that the R970 was bandwidth starved, and was a principle reason why Tahiti added IMC's. From Anand's Tahiti review:


Pfft, it was the highest of that architecture at that time, your quote said:


HumanSmoke said:


> Care to name *ANY GPU* regardless of vendor that *wasn't a flagship of the architecture *that had a *high bus width*?


It's old, get over it. You just said that, and it was the HIGHEST of the VLIW architecture, meaning you're wrong. Trying to change that by saying it's not high by today's standards doesn't mean anything when we're talking about a card from a couple of years ago, especially since a 256-bit bus is still being used on many mid-range cards these days. The highest from Nvidia at that generational point was the 384-bit bus on the GTX 580 for a single GPU.


----------



## HumanSmoke (May 19, 2014)

GhostRyder said:


> Pfft, it was the highest of that architecture at that time, your quote said:
> 
> 
> HumanSmoke said:
> ...


My god, did you fail learning at school. Why prattle on about the HD 6970 *when I was speaking of second tier and lower GPUs*. My quote you yourself quoted makes that abundantly clear.
 

You can hone your "refuting points that aren't being put forward" skills with someone else. I see it as lazy, boring, and counterproductive.


----------



## GhostRyder (May 19, 2014)

HumanSmoke said:


> My god, did you fail learning at school. Why prattle on about the HD 6970 *when I was speaking of second tier and lower GPUs*. My quote you yourself quoted makes that abundantly clear.
> 
> 
> You can hone your "refuting points that aren't being put forward" skills with someone else. I see it as lazy, boring, and counterproductive.


The HD 6870 is a second-tier GPU and had a 256-bit bus, same as the HD 6970, which was the flagship of that generation.  Nice try changing the subject again...


HumanSmoke said:


> Care to name *ANY GPU* regardless of vendor that *wasn't a flagship of the architecture *that had a *high bus width*?





GhostRyder said:


> Try the HD 6850-6870, both had a 256bit bus just like the HD 6950 and 6970.  LINK1, link2, link3.


Apparently reading is not your strong suit.
6870 = 256bit
6970 = 256bit
The 6870 is not the highest of that generation, the 6970 is, and that was high back then... The only card with a wider bus at that generational point was its competitor, the GTX 580.


----------



## xenocide (May 19, 2014)

GhostRyder said:


> OK, link. It's very game-dependent, but you can see the 290X does use beyond 3 GB of memory in games like Crysis 3. As for a 6 GB 780, it's kind of hard to show since they are still pretty new to the market.


 
There are plenty of games that use more than 3 GB of VRAM; it doesn't mean you will see any real performance boost from adding more of it.  Case in point: http://hexus.net/tech/reviews/graphics/43109-evga-geforce-gtx-680-classified-4gb/?page=7 .  BF3 can use more than 3 GB of VRAM at 5760x1080, and the difference between a 4 GB and a 2 GB GTX 680 is nonexistent (the 4 GB version is also clocked a bit higher, so take any gains with a grain of salt).  Here's exactly what they said about Crysis 2 during their review:


> Of more interest is the 2,204MB framebuffer usage when running the EVGA card, suggesting that the game, set to Ultra quality, is stifled by the standard GTX 680's 2GB. We ran the game on both GTX 680s directly after one another and didn't _feel_ the extra smoothness implied by the results of the 4GB-totin' card.


Just to discredit any notion that I'm cherry-picking, here's Anandtech reaching a similar conclusion.  And Guru3D.  And oh look, TPU's own review.  The only review I found with results in favor of a 4 GB variant of the GTX 680 was LegionHardware testing at 7680x1600, and it was still only getting around 12 FPS; in other words, the GPU ran out of power before memory really became a factor.


----------

