Friday, January 2nd 2015
Possible NVIDIA GM200 Specs Surface
Somebody sent our GPU-Z validation database a curious looking entry. Labeled "NVIDIA Quadro M6000" (not to be confused with AMD FirePro M6000), with a device ID of 10DE - 17F0, this card is running on existing Forceware 347.09 drivers, and features a BIOS string that's unlike anything we've seen. Could this be the fabled GM200/GM210 silicon?
The specs certainly look plausible - 3,072 CUDA cores, 50 percent more than those on the GM204; a staggering 96 ROPs, and a 384-bit wide GDDR5 memory interface, holding 12 GB of memory. The memory is clocked at 6.60 GHz (GDDR5-effective), belting out 317 GB/s of bandwidth. The usable bandwidth is higher than that, due to NVIDIA's new lossless texture compression algorithms. The core is running at gigahertz-scraping 988 MHz. The process node and die-size are values we manually program GPU-Z to show, since they're not things the drivers report (to GPU-Z). NVIDIA is planning to hold a presser on the 8th of January, along the sidelines of the 2015 International CES. We're expecting a big announcement (pun intended).
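The quoted ~317 GB/s follows directly from the bus width and the effective memory clock. As a quick sanity check (a minimal sketch; the function name is ours, not NVIDIA's):

```python
def gddr5_bandwidth_gbs(bus_width_bits: int, effective_clock_ghz: float) -> float:
    """Peak memory bandwidth in GB/s: bytes transferred per clock
    (bus width / 8) times the GDDR5-effective transfer rate."""
    return (bus_width_bits / 8) * effective_clock_ghz

# Rumored GM200 figures: 384-bit bus, 6.60 GHz GDDR5-effective
print(gddr5_bandwidth_gbs(384, 6.60))  # → 316.8, matching the quoted ~317 GB/s
```

Note this is the theoretical peak; the "usable" figure the article mentions is higher only in the sense that delta color compression reduces the traffic needed for the same work.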
80 Comments on Possible NVIDIA GM200 Specs Surface
So much for things being different around here. Not even two days into the new year and we have the same character acting the same way as last year. People are supposed to get more mature with age, not immature. I'd hate to be in my 50s and be acting like that.
Rather than accept that the information the "leaker" posted is fake (at least as far as the unreleased parts are concerned), you then move on to some senseless and unfounded assertion that I have bias against the site that hosted the "leaked" charts. If you claim that you're the one being wronged, why keep the accusations flying and invite reply? "The lady doth protest too much, methinks" - Hamlet, Act III, Scene II.
I guess if confrontation isn't your thing, this will be the last word on the matter. Anyhow, I assume we can now get back to the subject at hand - GM200 specifications and Quadro.
I'm not sure where you get the idea I'm asking you to believe them or take them as fact. I'm just asking you to provide proof of what you said. What did you do? You clung to the 20nm statement. How is that debunking the results? The sites you linked just think it's highly unlikely to have them all.
Just in case you forget, and before you accuse me of (or imagine) saying something totally different, here is a recap of the conversation, since you seem to have trouble with it. I guess asking you to prove what you said is too much.
The question now is... When?
I miss the old TPU :( Had less trolls & fanboys :banghead:
Writing styles always get 'frosty' but there's not that many 'fuck you' posts. Yet!
I understand why some nvidia fanboys could be unhappy about introducing discussions in the direction that despite those "so great" specs of GM200, they won't be enough and the company will soon be lagging behind.
Don't underestimate and ignore all possibilities.
Actually, I would even put my money on a bet that those scores from CH are plausible.
TILL ALL ARE ONE!!!!!!!!!!!!!!!!!!!!
But hey maybe they have achieved wonders, now all they need is to get the bloody things on the shelves, paper tigers don't pay the bills after all.
The specs could, given the hardware configuration and architecture maturation, plausibly translate to a 40-50% increase over the GTX 980.
We require R9 290X perf to be bested by 50-60% to compete on that basis. I can't call that, I don't have the chips in my hand.
But your statement is pure troll buddy. You may not mean it but without some form of tech in there to back up your assertion, it is pure flame...
What isn't very plausible is that these new parts are supposedly scaling perfectly in relation to mature products months out from release using what are undoubtedly very immature drivers. Unless you believe that both Nvidia and AMD have already perfected the drivers for these parts well ahead of launch. How likely does that sound?

Like the fortunes of a company are predicted upon a halo part sold in limited numbers (didn't seem to do much for AMD when Hawaii ruled the roost for both single and dual GPU cards)? Who are they supposed to be lagging behind, and why? Last time I checked, the company held ~80% of the workstation market, 85% of the HPC GPGPU market (+ a few high profile additions to come), is gobbling up mobile discrete graphics market share as fast as AMD is losing it, and is carving out a growing market for auto-based SoCs.

How is this all supposed to come tumbling down, and what kind of timeframe are you expecting? You made the prediction, so you must have some supporting theory and evidence, right?
So, no, it shouldn't be. Likely enough. Nvidia behind AMD because of lower gaming performance from top-tier new cards. Yes.
Oh, and I didn't say GM200 would be a fail, just that it would be inferior.
You can live the dream(world) for AMD all you like, but the facts are pretty clear. Nvidia has outsold ATI/AMD in discrete graphics for every quarter for more than a dozen years and is presently outselling AMD two-to-one - at higher prices I might add, and that ratio is historically increasing... on that note, Q4 2014's figures should make some interesting reading in a couple of weeks time. I'd suggest you direct that question to people who bought the GTX Titan. Even if you discounted the benchmarking/gaming fraternity, the card sold well amongst the CG rendering crowd. While undoubtedly true in some instances, there are also many instances where it boils down to buying the best tool for the job. Where CUDA outstrips OpenCL in rendering applications and time to completion is a priority, people choose the system best tailored to their needs. As for how many buy because of penis issues, I'll leave you to initiate a straw poll.
@ensabrenoir
I think I'll join you in un-subbing. When a graphics card thread devolves into Hitler, penises, and full-on trolling (Hi Sony), it's time to pull the pin
/ SMH and exits stage left
also, "When a graphics card thread devolves into Hitler, penises, and full-on trolling..." my job here is done.
Just FYI for anyone that cares. I hope this 980 Ti super-kraken-eating Titan 2 and the 3>9000xtxsxrisriinxs+ are both monsters of GPUs which muller 4K and are ready for 8K, as I don't like multi-GPU setups myself and can't wait till one GPU can do 4K, as I'll be upgrading then :) And they'll have a price war, even better!
Are you claiming that everything they stated back in November last year is correct?
So, you think 3dcenter's info is plausible, while Chiphell's is not? :eek:
In theory, it would be fairly easy. Most people should realize that a large-die performance/enthusiast GPU devotes ~50% of its die area to cores and TMUs. The remaining 50% comprises the uncore (memory controllers, memory interfaces, command processor, transcode engine, raster ops etc.)
The green areas are the cores; everything else is the uncore.
The uncore is relatively fixed in size if the memory interfaces (bus width) remain static. Hawaii at 2816 cores is 438 mm², half of which is cores and texture address units (~220 mm²). If the core count is increased by 45% (to 4096), then the area devoted to it increases to ~319 mm². Add the ~220 mm² for the uncore and the resultant die area becomes ~539 mm² - or just slightly smaller than GK110.
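The arithmetic above can be sketched as a small estimator (the function and its parameters are our own illustration of the post's back-of-the-envelope method, not an official sizing model):

```python
def scaled_die_area(base_area_mm2: float, core_fraction: float, core_scale: float) -> float:
    """Estimate die area after scaling only the core (shader + TMU) portion.

    Assumes the uncore (memory controllers, ROPs, etc.) stays fixed in size
    when the bus width is unchanged, per the post's reasoning.
    """
    core_area = base_area_mm2 * core_fraction
    uncore_area = base_area_mm2 - core_area
    return core_area * core_scale + uncore_area

# Hawaii: 438 mm^2, ~50% core; scale cores 2816 -> 4096 (~1.45x)
area = scaled_die_area(438, 0.5, 4096 / 2816)
print(round(area))  # ≈ 538 mm^2, close to the post's ~539 mm^2 figure
```

The small discrepancy versus the post's 539 mm² comes only from rounding 438/2 up to 220 mm² before scaling.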
That is how TSMC is capable of manufacturing a 4096-shader Fiji. Whether they are the foundry involved depends on when AMD decided to use GloFo's 28nm SHP process for GPUs in addition to Kaveri APUs. One of these two processes will almost certainly be the manufacturing node involved.

If you'd bothered to read what I wrote, it would be obvious that what I was pointing out was that 3DC attributed the name Fiji to the 4096-shader part. I might also point out that many other sources do the same, including a well known AMD brown-noser who claims intimate knowledge of AMD's business (although you'll have to stump up a fee to breach the paywall). Have AMD swapped the names around? Were they in the right order to begin with? Who knows, although I'd note that the other parts in the hierarchy don't seem affected.

3DC don't release leaks; they gather information and extrapolate from it. Their membership includes a number of industry insiders, coders, and architects. Chiphell, on the other hand, is a conglomeration like any forum-based site. The validity of their information depends upon the individuals posting there. Some is legitimate, some is quasi-legitimate (access to samples but results/info massaged for PR spin ***cough**Coolaler**cough***), some is estimation/guesstimation, and some is outright bullshit. Chiphell posts should be taken on a case-by-case basis - especially from posters with little or no previous track record of providing reliable information.
In this particular instance, we have a poster with no previous record for releasing reliable leaks, quoting a manufacturing process wholly unsuited for large GPUs, using a naming convention at odds with the rest of the tech world, and showing results that would indicate perfect scaling for both vendors which supposes mature drivers for both AMD and Nvidia months out from launch....all this plus a single source having access to not just one unreleased top-tier card, nor two, nor three, but four - access that includes both AMD and Nvidia.
I also find it difficult to accept that this guy benchmarked four unreleased cards (along with comparisons with many released cards) across 20 games, yet can't provide any shred of photographic evidence, no standard benchmark validations (Heaven, 3DMark), nor power figures for AMD's top part, nor any single game numbers. All a bit convenient.
CES 2015 Nvidia Press Conference is at 8:00pm PST (Nvidia Live Stream)
*Don't think they are offering children discounts, even if you act like one.
Good effort though.
Maybe initially Fiji had indeed been scheduled for production on 28nm with 4096 shaders, but afterwards it could have been forward-ported to a more advanced manufacturing process, 20nm at GloFo.
In theory it would be OK, but in practice, to me, releasing anything on 28nm (even GM200) is a purely short-sighted decision.
The longer the delay, the more likely those parts will use either 20nm or 16nm. :rolleyes:
wccftech.com/nvidia-gm200-titan-2-amd-fiji-380x-bermuda-390x-benchmarked/