
GeForce GTX 680 Specifications Sheet Leaked

Joined
Feb 19, 2009
Messages
1,162 (0.20/day)
Location
I live in Norway
Processor R9 5800x3d | R7 3900X | 4800H | 2x Xeon gold 6142
Motherboard Asrock X570M | AB350M Pro 4 | Asus Tuf A15
Cooling Air | Air | duh laptop
Memory 64gb G.skill SniperX @3600 CL16 | 128gb | 32GB | 192gb
Video Card(s) RTX 4080 |Quadro P5000 | RTX2060M
Storage Many drives
Display(s) AW3423dwf.
Case Jonsbo D41
Power Supply Corsair RM850x
Mouse g502 Lightspeed
Keyboard G913 tkl
Software win11, proxmox
Yes it does. If you want an electronic device to clock higher, you have to shorten the path between input and output, and that means going parallel (duplicating at the transistor level) with a lot of things that would otherwise be serial, which means investing many more transistors. That takes up much more space, and it also means more complicated control and logic, which once again means more transistors. More active transistors for the same job means a higher TDP, which means higher temps, which in turn means lower possible clocks, which means you have to invest even more transistors to achieve a certain clock, which means a higher TDP, and the process keeps going on and on.
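That feedback loop can be sketched with a toy numeric model. Every constant below (the gate delay, the transistor counts, the 60% duplication overhead) is an illustrative assumption, not a real process figure:

```python
# Toy model of the clock-vs-transistor feedback loop described above.
# All constants are illustrative assumptions, not real process figures.

GATE_DELAY_PS = 15  # assumed delay of one gate stage, in picoseconds

def max_clock_ghz(logic_depth):
    """Cycle time is set by the longest gate chain between registers."""
    cycle_ps = logic_depth * GATE_DELAY_PS
    return 1000.0 / cycle_ps  # 1000 ps per ns -> GHz

def switching_power(transistors, clock_ghz, scale=1e-6):
    """Dynamic power grows with active transistors AND clock (arbitrary units)."""
    return transistors * clock_ghz * scale

# Serial design: 30 gate levels per pipeline stage, 1.0M transistors.
serial_f = max_clock_ghz(30)
serial_p = switching_power(1_000_000, serial_f)

# Parallelized design: halve the critical path to 15 levels, but the
# duplicated logic plus extra control costs an assumed 60% more transistors.
parallel_f = max_clock_ghz(15)
parallel_p = switching_power(1_600_000, parallel_f)

print(f"serial:   {serial_f:.2f} GHz, power {serial_p:.2f}")
print(f"parallel: {parallel_f:.2f} GHz, power {parallel_p:.2f}")
```

Under these assumptions, doubling the clock more than triples the switching power, which is exactly the TDP spiral the post describes.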

IBM shows that it's possible, and there are other ways to design around the problem you state, but I agree with you: Nvidia is doing the right thing.
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.48/day)
Location
Reaching your left retina.
IBM shows that it's possible, and there are other ways to design around the problem you state, but I agree with you: Nvidia is doing the right thing.

There are ways to circumvent and alleviate that problem a little, but in most cases it requires doing it "by hand", which is fine for CPUs and long design cycles but is rare in the GPU world, where most of the design is automated. I've never heard of hand-crafted GPUs, tbh.

Which may actually be the case with Kepler, to an extent. Cadaveca suggested it somewhere and it may very well be true for Kepler after all. According to all the data we can collect, Nvidia's shaders have not changed much since G80; apart from added functionality, an expanded ISA, etc., on the most basic level they're almost the same. For a company like Nvidia, wanting to enter HPC so badly, it may make a lot of sense to take their single most important, yet small, element and completely hand-craft it. Considering it's going to be used for several years, and that when packed together in the thousands these shaders easily take up 60-70% of die size, it does make sense.

And there were rumors about Nvidia changing the SPs for project Echelon (I think that was the name of the DARPA-funded project) and that the change would possibly make it into Maxwell. But release dates have been pushed back by 28 nm, so maybe some of the changes made it into Kepler?
 
Joined
Apr 21, 2008
Messages
5,250 (0.86/day)
Location
IRAQ-Baghdad
System Name MASTER
Processor Core i7 3930k run at 4.4ghz
Motherboard Asus Rampage IV extreme
Cooling Corsair H100i
Memory 4x4G kingston hyperx beast 2400mhz
Video Card(s) 2X EVGA GTX680
Storage 2x Crucial M4 256g raid0, 1TbWD g, 2x500 WD B
Display(s) Samsung 27" 1080p LED 3D monitor 2ms
Case Cooler Master Cosmos II
Audio Device(s) Creative Sound Blaster X-Fi Titanium Champion, Creative 7.1 speakers T7900
Power Supply Corsair 1200i, Logitech G500 mouse, Corsair Vengeance 1500 headset
Software Win7 64bit Ultimate
Benchmark Scores 3d mark 2011: testing
benches tell the truth
 

cadaveca

My name is Dave
Joined
Apr 10, 2006
Messages
17,232 (2.52/day)
And there were rumors about Nvidia changing the SPs for project Echelon (I think that was the name of the DARPA-funded project) and that the change would possibly make it into Maxwell. But release dates have been pushed back by 28 nm, so maybe some of the changes made it into Kepler?

I look at it this way:

Any company, no matter the industry, will always cater to their largest customer, and then adapt what they can to meet the needs of other smaller customers...but that big customer is always priority #1.


So, who is nVidia's largest paying customer? Answer that question, and I think any questions about potential changes in architectural design will be answered, as well as targeted performance for said designs.


Of course, you do have to figure in issues like sourcing components and such...
 
Joined
Mar 10, 2010
Messages
11,878 (2.20/day)
Location
Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R5 5900X/ Intel 8750H
Motherboard Crosshair hero8 impact/Asus
Cooling 360 EK extreme rad + 360 EK slim all push, cpu EK Supremacy, GPU full cover all EK
Memory Corsair Vengeance Rgb pro 3600cas14 16Gb in four sticks./16Gb/16GB
Video Card(s) Powercolour RX7900XT Reference/Rtx 2060
Storage Silicon power 2TB nvme/8Tb external/1Tb samsung Evo nvme 2Tb sata ssd/1Tb nvme
Display(s) Samsung UAE28"850R 4k freesync.dell shiter
Case Lianli 011 dynamic/strix scar2
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi/Asus stock
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Aimo 120
VR HMD Oculus rift
Software Win 10 Pro
Benchmark Scores 8726 vega 3dmark timespy/ laptop Timespy 6506
this GPU does feature 1,536 CUDA cores

I'm thinking now, going off previous rumours of a split shader architecture (with some shaders assigned solely to graphics workloads), that we may be in for something we're not expecting. To me, all the reports are overstating the 1,536 CUDA cores; could NV have thrown additional special graphics-use shaders in there as well ;) not countable as CUDA cores? They did say to expect to be blown away, and something different.

Seems such a sodding odd number too.
 
Joined
Oct 29, 2010
Messages
2,972 (0.57/day)
System Name Old Fart / Young Dude
Processor 2500K / 6600K
Motherboard ASRock P67Extreme4 / Gigabyte GA-Z170-HD3 DDR3
Cooling CM Hyper TX3 / CM Hyper 212 EVO
Memory 16 GB Kingston HyperX / 16 GB G.Skill Ripjaws X
Video Card(s) Gigabyte GTX 1050 Ti / INNO3D RTX 2060
Storage SSD, some WD and lots of Samsungs
Display(s) BenQ GW2470 / LG UHD 43" TV
Case Cooler Master CM690 II Advanced / Thermaltake Core v31
Audio Device(s) Asus Xonar D1/Denon PMA500AE/Wharfedale D 10.1/ FiiO D03K/ JBL LSR 305
Power Supply Corsair TX650 / Corsair TX650M
Mouse Steelseries Rival 100 / Rival 110
Keyboard Sidewinder/ Steelseries Apex 150
Software Windows 10 / Windows 10 Pro

MxPhenom 216

ASIC Engineer
Joined
Aug 31, 2010
Messages
13,012 (2.49/day)
Location
Loveland, CO
System Name Ryzen Reflection
Processor AMD Ryzen 9 5900x
Motherboard Gigabyte X570S Aorus Master
Cooling 2x EK PE360 | TechN AM4 AMD Block Black | EK Quantum Vector Trinity GPU Nickel + Plexi
Memory Teamgroup T-Force Xtreem 2x16GB B-Die 3600 @ 14-14-14-28-42-288-2T 1.45v
Video Card(s) Zotac AMP HoloBlack RTX 3080Ti 12G | 950mV 1950Mhz
Storage WD SN850 500GB (OS) | Samsung 980 Pro 1TB (Games_1) | Samsung 970 Evo 1TB (Games_2)
Display(s) Asus XG27AQM 240Hz G-Sync Fast-IPS | Gigabyte M27Q-P 165Hz 1440P IPS | LG 24" IPS 1440p
Case Lian Li PC-011D XL | Custom cables by Cablemodz
Audio Device(s) FiiO K7 | Sennheiser HD650 + Beyerdynamic FOX Mic
Power Supply Seasonic Prime Ultra Platinum 850
Mouse Razer Viper v2 Pro
Keyboard Corsair K65 Plus 75% Wireless - USB Mode
Software Windows 11 Pro 64-Bit
It's not. Four times CUDA cores of the GF114.

Yet that won't mean four times the performance either, since the cores are significantly weaker than Fermi's cores.

According to the recent spec slide, it appears Kepler is going to have insanely fast memory. A 6 GHz clock!
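As a rough sanity check on the "four times the cores won't mean four times the performance" point: peak throughput is cores x shader clock x 2 (one FMA counts as two FLOPs). The GF114 figures below are the GTX 560 Ti's; the GK104 clock is the ~1 GHz from the leaked slide, assuming Kepler drops the Fermi-style hot clock:

```python
def peak_gflops(cores, shader_clock_mhz, flops_per_clock=2):
    # An FMA counts as two floating-point ops per core per clock.
    return cores * shader_clock_mhz * flops_per_clock / 1000.0

gf114 = peak_gflops(384, 1645)    # GTX 560 Ti, hot-clocked shaders
gk104 = peak_gflops(1536, 1006)   # leaked GTX 680 spec, no hot clock

print(f"GF114: {gf114:.0f} GFLOPS")
print(f"GK104: {gk104:.0f} GFLOPS ({gk104 / gf114:.1f}x, not 4x)")
```

Four times the cores at roughly 60% of the shader clock nets out to about 2.4x the peak FLOPS, before any per-core efficiency differences.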
 
Joined
Oct 29, 2010
Messages
2,972 (0.57/day)
System Name Old Fart / Young Dude
Processor 2500K / 6600K
Motherboard ASRock P67Extreme4 / Gigabyte GA-Z170-HD3 DDR3
Cooling CM Hyper TX3 / CM Hyper 212 EVO
Memory 16 GB Kingston HyperX / 16 GB G.Skill Ripjaws X
Video Card(s) Gigabyte GTX 1050 Ti / INNO3D RTX 2060
Storage SSD, some WD and lots of Samsungs
Display(s) BenQ GW2470 / LG UHD 43" TV
Case Cooler Master CM690 II Advanced / Thermaltake Core v31
Audio Device(s) Asus Xonar D1/Denon PMA500AE/Wharfedale D 10.1/ FiiO D03K/ JBL LSR 305
Power Supply Corsair TX650 / Corsair TX650M
Mouse Steelseries Rival 100 / Rival 110
Keyboard Sidewinder/ Steelseries Apex 150
Software Windows 10 / Windows 10 Pro
According to the recent spec slide, it appears Kepler is going to have insanely fast memory. A 6 GHz clock!

Yep, another surprise. If it's true, then it's a spectacular way to mend the memory controller and beat AMD at its own game.
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.48/day)
Location
Reaching your left retina.
I look at it this way:

Any company, no matter the industry, will always cater to their largest customer, and then adapt what they can to meet the needs of other smaller customers...but that big customer is always priority #1.


So, who is nVidia's largest paying customer? Answer that question, and I think any questions about potential changes in architectural design will be answered, as well as targeted performance for said designs.


Of course, you do have to figure in issues like sourcing components and such...

I don't know who their largest customer is now; a little help would be appreciated instead of the mystery. Consumer GPU revenues have been declining and the professional market has been growing; is that what you mean? The last time I saw a breakdown, by revenue the consumer GPU business was usually 2x as big as the professional market, by gross margin it was the opposite, and profits were more or less the same. I don't know how it stands now.

In any case, I don't see it as relevant. A more efficient shader architecture is good for both HPC and GPUs, so IMO it's irrelevant which target customer fueled the change. That they were changing the shaders for Maxwell is pretty much a fact. It was not expected for Kepler, but maybe...
 
Joined
Mar 13, 2012
Messages
396 (0.08/day)
Location
USA
I don't know who their largest customer is now; a little help would be appreciated instead of the mystery. Consumer GPU revenues have been declining and the professional market has been growing; is that what you mean? The last time I saw a breakdown, by revenue the consumer GPU business was usually 2x as big as the professional market, by gross margin it was the opposite, and profits were more or less the same. I don't know how it stands now.

In any case, I don't see it as relevant. A more efficient shader architecture is good for both HPC and GPUs, so IMO it's irrelevant which target customer fueled the change. That they were changing the shaders for Maxwell is pretty much a fact. It was not expected for Kepler, but maybe...

Nvidia's largest customers right now are at the enterprise and smartphone levels.
And it's funny you mention market share of GPUs, seeing as AMD has nearly a 10% lead on Nvidia in GPU market share right now.
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.48/day)
Location
Reaching your left retina.
And it's funny you mention market share of GPUs, seeing as AMD has nearly a 10% lead on Nvidia in GPU market share right now.

Yes and no. Those figures include APUs and other kinds of integrated GPUs. They only reflect the fact that Nvidia no longer sells integrated GPUs, and the fact that every Intel CPU and most (by sales) AMD CPUs are sold with an integrated GPU. So unless you now claim that Intel is the largest (almost 3x bigger than AMD) graphics card manufacturer, your point is moot. When it comes to discrete GPUs, Nvidia's share is almost twice AMD's. I don't know why you bring this into this thread instead of the other one.

As for who their biggest customer is: smartphone companies definitely are not. Enterprise, by revenue? I don't think so. If it is, please provide proof.
 
Joined
Mar 13, 2012
Messages
396 (0.08/day)
Location
USA
Yes and no. Those figures include APUs and other kinds of integrated GPUs. They only reflect the fact that Nvidia no longer sells integrated GPUs, and the fact that every Intel CPU and most (by sales) AMD CPUs are sold with an integrated GPU. So unless you now claim that Intel is the largest (almost 3x bigger than AMD) graphics card manufacturer, your point is moot. When it comes to discrete GPUs, Nvidia's share is almost twice AMD's. I don't know why you bring this into this thread instead of the other one.

As for who their biggest customer is: smartphone companies definitely are not. Enterprise, by revenue? I don't think so. If it is, please provide proof.
I remembered reading that; however, I'm unable to find a link to the numbers, unfortunately. I'll fully concede that to you.
 
Joined
Jun 24, 2011
Messages
571 (0.12/day)
Location
Islamabad
System Name Hhumas-PC
Processor Intel(R) Core(TM)2 Extreme CPU X9650 @ 3.00GHz (4 CPUs), ~3.0GHz
Motherboard Asus P5W DH Deluxe
Cooling ZALMAN CNPS 9700 NT 110mm 2 Ball Ultra Quiet
Memory OCZ Platinum 2x2GB
Video Card(s) ZOTAC GeForce GTX 680 2GB 256-bit GDDR5
Storage WD Green 500 GB , WD 500 GB
Display(s) Dell U2410F
Case Casecom
Audio Device(s) Creative Sound Blaster X-Fi Titanium Fatal1ty Pro
Power Supply Corsair HX1000
Software Windows 7 Ultimate 32 Bit
Awesome... that is why I waited and didn't go for ATI.
 
Joined
Nov 27, 2005
Messages
1,080 (0.15/day)
Location
Look behind you!!
System Name NEW
Processor Intel 4770 non-K
Motherboard Gigabyte H81M-DS2V
Cooling CM Hyper 212 plus
Memory 16gb Muskin
Video Card(s) XFX 380X 4gb
Storage Sandisk 120gb plus WD blue 1tb
Display(s) AOC 23.5 LED bl
Case XIGMATEK
Audio Device(s) motherboard
Power Supply Cooler Master 500
lol same comment gen after gen after gen.

Its like clock work every time. :rolleyes: EVERY FREAKING TIME!!!!!!!!!!!!!

AMD/ATI releases their best stuff first (blindly, I may add) and nVidia holds out to make sure they can beat it, whether it takes 2 or 4 months from AMD/ATI's release.

rinse and repeat :D

NVIDIA - "THE WAY CHICKEN SHIT IS MEANT TO BE PLAYED"
 
Joined
Jan 28, 2009
Messages
1,742 (0.30/day)
Location
on top of that big mountain on mars(Romania)
System Name ( . Y . )
Its like clock work every time. :rolleyes: EVERY FREAKING TIME!!!!!!!!!!!!!

AMD/ATI releases their best stuff first (blindly, I may add) and nVidia holds out to make sure they can beat it, whether it takes 2 or 4 months from AMD/ATI's release.

rinse and repeat :D

NVIDIA - "THE WAY CHICKEN SHIT IS MEANT TO BE PLAYED"

This is how a duopoly works.
 

Zerono

New Member
Joined
Mar 7, 2012
Messages
3 (0.00/day)
Location
Tamworth, Australia
I still don't get why it has a 256-bit memory bus. Shouldn't it be higher?
 
Joined
Mar 24, 2011
Messages
2,356 (0.47/day)
Location
VT
Processor Intel i7-10700k
Motherboard Gigabyte Aurorus Ultra z490
Cooling Corsair H100i RGB
Memory 32GB (4x8GB) Corsair Vengeance DDR4-3200MHz
Video Card(s) MSI Gaming Trio X 3070 LHR
Display(s) ASUS MG278Q / AOC G2590FX
Case Corsair X4000 iCue
Audio Device(s) Onboard
Power Supply Corsair RM650x 650W Fully Modular
Software Windows 10
I still don't get why it has a 256-bit memory bus. Shouldn't it be higher?

The GK104 was originally intended to be midrange, so they designed it accordingly, hence no 384-bit or higher memory bus. I assume they decided they didn't need it.
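For context, peak memory bandwidth is just bus width times per-pin data rate, which shows how the rumored 6 Gbps memory partially offsets the narrower bus (the 384-bit / 5.5 Gbps comparison figures are the HD 7970's):

```python
def bandwidth_gb_s(bus_bits, data_rate_gbps):
    """Peak bandwidth = bus width in bytes x per-pin data rate (Gbps)."""
    return (bus_bits / 8) * data_rate_gbps

gtx680 = bandwidth_gb_s(256, 6.0)   # rumored GTX 680: 192 GB/s
hd7970 = bandwidth_gb_s(384, 5.5)   # HD 7970: 264 GB/s

print(f"GTX 680: {gtx680:.0f} GB/s, HD 7970: {hd7970:.0f} GB/s")
```

So even at 6 Gbps, a 256-bit bus still trails a 384-bit one, just by less than the bus widths alone would suggest.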
 
Joined
Nov 4, 2005
Messages
12,015 (1.72/day)
System Name Compy 386
Processor 7800X3D
Motherboard Asus
Cooling Air for now.....
Memory 64 GB DDR5 6400Mhz
Video Card(s) 7900XTX 310 Merc
Storage Samsung 990 2TB, 2 SP 2TB SSDs, 24TB Enterprise drives
Display(s) 55" Samsung 4K HDR
Audio Device(s) ATI HDMI
Mouse Logitech MX518
Keyboard Razer
Software A lot.
Benchmark Scores Its fast. Enough.
MHz and die size have nothing in common....

Yes it does. If you want an electronic device to clock higher, you have to shorten the path between input and output, and that means going parallel (duplicating at the transistor level) with a lot of things that would otherwise be serial, which means investing many more transistors. That takes up much more space, and it also means more complicated control and logic, which once again means more transistors. More active transistors for the same job means a higher TDP, which means higher temps, which in turn means lower possible clocks, which means you have to invest even more transistors to achieve a certain clock, which means a higher TDP, and the process keeps going on and on.

This is true.^


Decoupling capacitors on the die, and termination of high-drive-strength signals, mean more drains and more power to run the circuits at the higher speed.
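The relation behind this is the classic CMOS dynamic-power equation, P = alpha * C * V^2 * f. The capacitance and activity-factor numbers below are purely illustrative:

```python
def dynamic_power_w(activity, cap_farads, volts, freq_hz):
    """Classic CMOS switching power: P = alpha * C * V^2 * f."""
    return activity * cap_farads * volts ** 2 * freq_hz

# Illustrative only: 1 nF effective switched capacitance, 20% activity.
base = dynamic_power_w(0.2, 1e-9, 1.0, 1.0e9)
fast = dynamic_power_w(0.2, 1e-9, 1.1, 1.5e9)  # +10% voltage, +50% clock

print(f"{fast / base:.2f}x the switching power")
```

A modest voltage bump plus a 50% clock increase raises switching power by over 80%, since voltage enters squared.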
 

Wiselnvestor

Guest
 
Joined
Oct 7, 2006
Messages
1,338 (0.20/day)
Processor e8200 3.93GHz@1.264v
Motherboard P5E3 Pro
Cooling Scythe Infinity
Memory 4gb of G.Skill Ripjaw 6-7-7-18@1404 and 1.62v
Video Card(s) HIS 5770 v2 940/1275mhz stock volts
Storage 1TB Hitachi
Display(s) Acer 22" Widescreen LCD
Case Blue Cooler Master Centurion
Audio Device(s) Onboard audio :(, and Klipsch 5.1 Pro Media's
Power Supply 650 Watt BFG
Software Vista 64 Ultimate
That'd be awesome, as I generally have a good understanding of hardware, but this stuff blows my mind.:laugh:



Funny how it needs to be repeated. Personally, because I don't get what's going on with these cards, I reserve all judgement until after I get to read W1zz's review, which I guess is incoming at some point.



I'm still laughing at the fact that the "chocolate" was in fact a cookie. That misconception alone, based on appearances, says quite a bit.

Nvidia will never try to take out AMD, and AMD will never try to take out Nvidia. If the other company failed it would hurt them, because they would likely get split up due to the monopoly. We might see times when one card is remarkably faster than the other company's card, but I doubt we will ever see a grand slam.
 