Friday, August 1st 2014

NVIDIA to Launch GeForce GTX 880 in September
NVIDIA is expected to unveil its next-generation high-end graphics card, the GeForce GTX 880, in September 2014. The company could tease its upcoming products at Gamescom, and is reportedly holding a huge media event in California this September, where it's widely expected to discuss high-end graphics cards based on the "Maxwell" architecture. Much like AMD's Hawaii press event, which predated the actual launch of its R9 290 series by several weeks, NVIDIA's event is expected to be a paper launch of one or more graphics cards based on its GM204 silicon, with market availability expected in time for Holiday 2014 sales.
The GM204 is expected to be NVIDIA's next workhorse chip, which will be marketed as high-end in the GeForce GTX 800 series and performance-segment in the following GTX 900 series, much like how the company milked its "Kepler" based GK104 across two series. It's expected to be built on the existing 28 nm process, although one cannot rule out an optical shrink to 20 nm later (as NVIDIA did when it shrunk the G92 from 65 nm to 55 nm). The GTX 880 reportedly features around 3,200 CUDA cores and 4 GB of GDDR5 memory.
Source:
VideoCardz
96 Comments on NVIDIA to Launch GeForce GTX 880 in September
Even by current basic logic: the 750 Ti outperformed the older-generation 650 Ti with fewer cores and lower power draw, so if the GTX 880 has the same number of cores, or even ~15% fewer, we can assume its performance would still be above its predecessor's. Of course, that assumes core clocks remain roughly the same; depending on the design, both core and memory clocks could end up higher (again, just an assumption).
We're also basing so much of this on rumors, speculation, and possible release dates.
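The core-count comparison above can be put into rough numbers with a back-of-envelope throughput estimate (shaders × clock × 2 FLOPs per FMA). This is a sketch, not a prediction: the GTX 880's core count is a rumor, the 1.0 GHz clock here is purely an assumption, and the calculation ignores the per-core efficiency gains Maxwell showed with the 750 Ti.

```python
def tflops(cores, clock_mhz):
    # Theoretical single-precision throughput:
    # each shader retires one FMA (2 FLOPs) per clock.
    return cores * clock_mhz * 2 / 1e6

# GTX 780 Ti (known spec): 2,880 CUDA cores, ~928 MHz boost
gtx_780_ti = tflops(2880, 928)    # ~5.3 TFLOPS

# Rumored GTX 880: ~3,200 cores; 1.0 GHz is an assumed clock
gtx_880_est = tflops(3200, 1000)  # ~6.4 TFLOPS
```

Even on raw shader math alone, the rumored configuration would sit comfortably above the 780 Ti; any Maxwell per-core improvement would only widen the gap.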
Exactly what this guy said. They could easily have released GK110, but instead they were selling the 680 at a high-end price. We should know by now what NVIDIA does from its past.
Edit: I'm not defending their price structure though. It's gone balls up.
The GTX 880 should hammer on the GTX 780 and I think it will. It may roll right over the GTX 780 Ti in performance. Time will tell.
If anyone needs to upgrade their GPU in the next couple of months and wants to go Nvidia then the GTX 880 priced at around $425 will probably be a good deal. Otherwise wait for the 20nm Maxwells.
I guess it would have been funny to see a GK110-powered 680 vs. the underclocked Tahiti 7970. Embarrassing... but funny.
This is the same strategy that has been used before, so I don't get why people are so shocked. Compare the Fermi architecture to Kepler in terms of release cadence and the chips used, and you'll see the strategy remains the same. Each cycle follows a similar pattern for both companies; you could call it a tick-tock cycle: release the introduction to a new architecture, show off how well it performs, gather data, and then release the full-powered version of the architecture in the next cycle.
VLIW (the TeraScale series) from ATI also followed a similar pattern. This is no different from the strategies we're all used to (well, not much at least), and we can compare the GCN architecture in the same way.
Also, anyone assuming the GTX 880 is going to be weaker than the 780 Ti is, I feel, setting themselves up to be either disappointed or impressed (depending on your outlook). It would not make much sense to release a less powerful GPU as your next-gen GPU...
Fact is when the GK110 was ready, it was in the form of the K20X for Oak Ridge, low yield, high returns, much more sense than appeasing forum warriors at TPU.
If NVIDIA had intended GK110 for desktop from the outset - which they could have managed as a paper/soft launch with basically no availability but plenty of PR (i.e. a green scenario mirroring the HD 7970's 22nd December 2011 "launch") - they in all likelihood could have had parts out in time. GK110 taped out in early January 2012 (even noted NVIDIA-haters tend to agree on this point). Fabrication, testing/debug, die packaging, board assembly, and product shipping to distributors takes 8-12 weeks for a consumer GPU - production GK110s are A1 silicon, so no revision was required - which means early-to-mid March 2012 as a possible launch date IF the GTX 680 hadn't proved sufficient... and the launch date of the GTX 680? March 22nd, 2012.
Oak Ridge National Laboratory started receiving its first Tesla K20s in September 2012 (1,000 or so in the first tranche), which tallies with the more stringent runtime validation process required for professional boards in general and mission-critical HPC in particular.
Unbelievable that so much FUD exists about this, considering most of the facts are actually well documented by third parties. History tells us that the GTX 680 was sufficient. The competition (the 7970) was a known factor, so there was zero need to hastily put together a GK110 card. I doubt a GK110 GTX card would have been any more than a PR stunt in any case, since Oak Ridge's contract superseded any consumer pissing contest.
True enough. ORNL's Titan was the high-profile large-order customer, but more than a few people forget that NVIDIA was also contracted to supply the Swiss supercomputing institute's Todi system, and the Blue Waters system for the National Center for Supercomputing Applications, so around 22,000 boards were required without taking replacements into consideration.
But no, it should have been $400 bucks and called the 680, wonders never cease. :P
Still, since I bought the 780 before the price drop, I prefer to keep the top of the chip line in my main rig. For me it just makes sense to wait till GM210, whenever that is (GTX 980?). Gotta get my money's worth!!
So anyway, I take back some of my false-advertising statements about the 680 and its corollary, the 880, with neither top-of-the-line card having the top-of-the-line chip in the lineup. It all relates to readiness as well as business commitments by NVIDIA.
No new news on the hybrid board, GTX 880, or anything else going on then, I guess.
I guess you're right.
GTX 680 Released: March 22, 2012
GTX Titan Released: February 19, 2013
Yeah, they released Titan as a $1K card almost 11 months later; obviously they had no problem releasing a $1K desktop-grade video card. If they had wanted to get that card out sooner, they would have been happy to, and would have charged accordingly, but they had enough trouble even getting the GTX 680 out, which was out of stock and basically required camping at your computer night and day to get one. I am getting just as tired as you are of people dragging these threads out into off-topic fanboy arguments.
But then what is going to be the excuse this time with the 880? Since everyone is convinced an unreleased card with very little known about it is going to be inferior to the current lineup... Exactly. I'm at a loss how certain people keep claiming this chip sucks before we have even seen anything...
The 680 was good enough for them, and they saw a $ benefit. The 580 was FP64 = 1/8, and since then all GeForces have gone to FP64 = 1/24, while AMD stuck to FP64 = 1/4 on Tahiti until Hawaii, where they lowered it to FP64 = 1/8.
Tahiti had FP64 = 1/4, so it was AMD's "Titan", the successor to the 580 if you don't take sides, released a year after the 580. Not to mention the prices.
11/2010 - GTX 580 = $500
1/2012 - HD 7970 = $550
2/2013 - GTX Titan = $1000
The whole "TITAN" argument applies to Tahiti within that same time frame, with the notable exception of CUDA, of course.
Now both companies are further cutting FP64 in their gaming lines, where, had NVIDIA stuck to its old ways, TITAN would have been the 580's successor, not the 680 or 780.
I hope Maxwell goes back to the old ways but I highly doubt it.
GeForce GTX Titan: FP64 1:3 rate (with boost disabled - which stands to reason, since overclocking and double precision aren't mutually beneficial from either an error or a power standpoint)
GeForce GTX Titan Black: FP64 1:3 rate with boost disabled
GeForce GTX Titan Z: FP64 1:3 rate with boost disabled
Thanks for reminding me that AMD halved the double-precision ratio for the desktop high end in the current series - though I was already aware of the fact. How about not offering double precision at all on GPUs other than the top one for the Evergreen and Northern Islands series, after offering FP64 on the HD 4000 series' RV770? Crazy shit, huh? Or limiting Pitcairn and Curacao to 1:16 FP64 to save die space and keep power demand in check? It's called tailoring the feature set to the segment.
Horses for courses. FP64 is a die-space luxury largely unneeded in gaming GPUs.
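The ratios being argued over translate directly into double-precision throughput: FP64 GFLOPS is just the theoretical FP32 rate (shaders × clock × 2 FLOPs per FMA) multiplied by the FP64:FP32 ratio. A minimal sketch, using the HD 7970 and GTX Titan figures cited in this thread (Titan at base clock, since the 1:3 rate applies with boost disabled):

```python
from fractions import Fraction

def fp32_gflops(cores, clock_mhz):
    # Theoretical single-precision rate: one FMA (2 FLOPs) per shader per clock.
    return cores * clock_mhz * 2 / 1000.0

def fp64_gflops(fp32, ratio):
    # ratio is the FP64:FP32 rate, e.g. Fraction(1, 24) for most Keplers.
    return fp32 * float(ratio)

# HD 7970 "Tahiti": 2048 shaders @ 925 MHz, 1/4 FP64 rate
tahiti_fp64 = fp64_gflops(fp32_gflops(2048, 925), Fraction(1, 4))  # ~947 GFLOPS

# GTX Titan "GK110": 2688 cores @ 837 MHz base, 1/3 FP64 (boost off)
titan_fp64 = fp64_gflops(fp32_gflops(2688, 837), Fraction(1, 3))   # ~1500 GFLOPS
```

Plugging in a 1:24 ratio instead shows why the GeForce cuts sting for compute users: a GTX 780's FP64 rate lands in the hundreds, not thousands, of GFLOPS despite similar silicon.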
NVIDIA figured out a while ago that a monolithic big die really isn't that economical when sold at consumer prices, which is why the line was bifurcated after the Fermi architecture - who would have thought that selling a 520 mm² GPU for $290 (GTX 560 Ti 448) and $350 (GTX 570) wouldn't result in a financial windfall! AMD will likely do the same, since they will need a big die for pro/HSA applications (and Fiji sounds like 500 mm²+ by all accounts), and keep the second tier and lower die areas ruled by gaming considerations (just as Barts, Pitcairn, and Curacao are now).
The old ways of reverting back to the 1:8 FP64 rate of Fermi, or the 1:3 rate of the current GTX Titan range? :confused:
Even when I'm not arguing with you, you still come off as a jerk.
I didn't include TITAN because that was the exception on their top-series card, even though it has different "branding". I figured you would know the difference. Sheesh. Didn't think crossing T's and dotting I's was needed for you to understand.
The old way was not to change FP64 within a chip in the gaming series. GK110 was their first to do that; TITAN and the 780 differ. They saw an opportunity to make $ off so many chips that didn't meet standards. It was a smart business move, but not so good for the consumer.
P.S.
I need to stay away from culinary school. Apparently it turns you into an even greater ass.
So when you said... ...what you actually meant was "all GeForces have gone to 1:24 except the ones that are 1:3"
Makes sense. Might have been apropos to include that....but then it would make the rest of your post redundant.
Still not sure why you actually brought up double precision in any case, since GM204 likely won't be compute/pro-focused any more than any other sub-300 mm² GPU is, and it isn't actually apropos to anything anyone, including myself, was talking about - so why quote my post, which wasn't in any way related to what you're talking about? Still can't hold a discussion without resorting to name-calling? Some things never change.
GEEKS3D - AMD Radeon and NVIDIA GeForce FP32/FP64 GFLOPS Table
Really? I thought most of that post I quoted you from was referring to GK110. Silly me. :rolleyes:
Name-calling? More like an observation. It's not like I'm the only one in this thread with such an observation.
I'll leave you to your HPD