
NVIDIA GeForce Kepler Packs Radically Different Number Crunching Machinery

jamsbong

New Member
Joined
Mar 17, 2010
Messages
83 (0.02/day)
System Name 2500Kjamsbong
Processor Core i5 2500K @ 4.6Ghz
Motherboard Asrock Extreme 4 Z68
Cooling Zalman Reserator (CPU and GPU)
Memory DDR3 8GB
Video Card(s) EVGA Nvidia 560Ti 1GB
Storage 60GB Kingston SSD
Display(s) 24" Dell IPS
Case CoolerMaster 690 Advanced II
Audio Device(s) on-board
Power Supply Zalman ZM-600HP modular 600watt
Software Windows 7
@Benetanegia I could continue this pointless argument with an NV fanboy, such as by pointing out all the mistakes you've made in the last post alone, but it is time to move on.

If NV has created something fantastic (i.e. a card 50% faster than the GTX 580) and it is stable enough to work on non-TWIMTBP titles, I won't mind buying one for myself. If not, then Tahiti. A simple wait-and-see situation. Cheers.
 
Joined
Mar 24, 2011
Messages
2,356 (0.47/day)
Location
VT
Processor Intel i7-10700k
Motherboard Gigabyte Aurorus Ultra z490
Cooling Corsair H100i RGB
Memory 32GB (4x8GB) Corsair Vengeance DDR4-3200MHz
Video Card(s) MSI Gaming Trio X 3070 LHR
Display(s) ASUS MG278Q / AOC G2590FX
Case Corsair X4000 iCue
Audio Device(s) Onboard
Power Supply Corsair RM650x 650W Fully Modular
Software Windows 10
I actually explicitly said I was not counting cards like the GTX 285 and 8800 Ultra, because they technically came out after the initial lineup launched. They were usually just super high-end offerings made to address performance deficits, or simply because they could. In the case of the GTX 580 3GB, it was because super high-end users needed more VRAM; this only really affected people using 3-display setups, so it was an incredibly niche product.

If we wanted to go crazy, there are all sorts of products released that are technically better; the HD 5970 is to this day ridiculously powerful, and surprisingly cost-efficient. I also omitted the HD 4890, because it was launched months after the rest of the 4xxx series.

My listings are still accurate. There are outliers, but for the most part all of those cards were the original high-end GPU of their corresponding series.
 

crazyeyesreaper

Not a Moderator
Staff member
Joined
Mar 25, 2009
Messages
9,816 (1.71/day)
Location
04578
System Name Old reliable
Processor Intel 8700K @ 4.8 GHz
Motherboard MSI Z370 Gaming Pro Carbon AC
Cooling Custom Water
Memory 32 GB Crucial Ballistix 3666 MHz
Video Card(s) MSI RTX 3080 10GB Suprim X
Storage 3x SSDs 2x HDDs
Display(s) ASUS VG27AQL1A x2 2560x1440 8bit IPS
Case Thermaltake Core P3 TG
Audio Device(s) Samson Meteor Mic / Generic 2.1 / KRK KNS 6400 headset
Power Supply Zalman EBT-1000
Mouse Mionix NAOS 7000
Keyboard Mionix
Doesn't change the fact the 680 will be priced at $600+, most likely in the $650-675 range, with aftermarket-cooled cards hitting $700.

But you're free to believe what you wish. :roll:
 
Joined
Oct 29, 2010
Messages
2,972 (0.58/day)
System Name Old Fart / Young Dude
Processor 2500K / 6600K
Motherboard ASRock P67Extreme4 / Gigabyte GA-Z170-HD3 DDR3
Cooling CM Hyper TX3 / CM Hyper 212 EVO
Memory 16 GB Kingston HyperX / 16 GB G.Skill Ripjaws X
Video Card(s) Gigabyte GTX 1050 Ti / INNO3D RTX 2060
Storage SSD, some WD and lots of Samsungs
Display(s) BenQ GW2470 / LG UHD 43" TV
Case Cooler Master CM690 II Advanced / Thermaltake Core v31
Audio Device(s) Asus Xonar D1/Denon PMA500AE/Wharfedale D 10.1/ FiiO D03K/ JBL LSR 305
Power Supply Corsair TX650 / Corsair TX650M
Mouse Steelseries Rival 100 / Rival 110
Keyboard Sidewinder/ Steelseries Apex 150
Software Windows 10 / Windows 10 Pro
Doesn't change the fact the 680 will be priced at $600+, most likely in the $650-675 range, with aftermarket-cooled cards hitting $700.

But you're free to believe what you wish. :roll:

What you call "680" at 600$ + will probably get another name. All that we see now is the GK104 which will probably be faster by a hair than the 7970 (but enough to claim its the fastest card) with some disadvantages (lower mem bandwidth and probably already very high clocked at stock to meet the target of being faster than the 7970) and some say this will be the 680. Now this card will not cost 600$ but neither 300$ as it was reported so I would expect somewhere between 450-500. As it was reported, same chip with some disabled stuff and proly clocked lower will make the 670 part, perf between 580/7950 and 7970 for 350-400$. The big boy will be out later and there we can expect 600$ plus.
 
Joined
Oct 26, 2011
Messages
3,145 (0.66/day)
Processor 8700k Intel
Motherboard z370 MSI Godlike Gaming
Cooling Triple Aquacomputer AMS Copper 840 with D5
Memory TridentZ RGB G.Skill C16 3600MHz
Video Card(s) GTX 1080 Ti
Storage Crucial MX SSDs
Display(s) Dell U3011 2560x1600 + Dell 2408WFP 1200x1920 (Portrait)
Case Core P5 Thermaltake
Audio Device(s) Essence STX
Power Supply AX 1500i
Mouse Logitech
Keyboard Corsair
Software Win10
Joined
May 22, 2010
Messages
2,516 (0.47/day)
Location
Canada
System Name m1dg3t | DeathBox | HairPi 3
Processor 3570k @ 4.0 1.15v BIOS | q9550 @ 3.77 1.325v BIOS
Motherboard Asrock z77e iTX | p5q Dlx 2301 BIOS
Cooling Custom Water | D-14 & HR-03gt | Passive HSF
Memory Samsung MV-3V4G3D 4g x 2 @ 1866 1.35v | OcZ RpR 2g x 4 @ 1067 2.2v
Video Card(s) MSi 7950 tf3 @1000 / 1350 | Asus 5870 V2 @ 900 / 1275
Storage Adata sx900 256Gb / WD 2500 HHTZ | WD 1001 FALS x 2
Display(s) BenQ gw2750hm | 46" Sharp Quatron
Case BitFenix Prodigy - m0dd3d | Antec Fusion Remote MAX
Audio Device(s) Onboard Toslink > Yamaha HTR 6290 | Xonar HDAV1.3 > Yamaha DSP z7
Power Supply Ocz mXp700w | Ocz zx850w | Cannakit 5v 2.5a
Mouse Logitech G700s | Logitech G9x - Cable Repaired
Keyboard TT Meka G1 - Black w Cherry Blacks| Logitech G11
Software Win7 Home | Xp sp3 & Vista ultimate | Raspbian
Benchmark Scores Epeen!! Who needs epeen??
Doesn't change the fact the 680 will be priced at $600+, most likely in the $650-675 range, with aftermarket-cooled cards hitting $700.

But you're free to believe what you wish. :roll:

Hell, the 580 is still more expensive than the 7970! At most places near me, anyway :eek: Can't wait to see the new pricing.
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.48/day)
Location
Reaching your left retina.
@Benetanegia I could continue this pointless argument with an NV fanboy, such as by pointing out all the mistakes you've made in the last post alone, but it is time to move on.

If NV has created something fantastic (i.e. a card 50% faster than the GTX 580) and it is stable enough to work on non-TWIMTBP titles, I won't mind buying one for myself. If not, then Tahiti. A simple wait-and-see situation. Cheers.

Giving up in time is good practice when you are so wrong, so well played. lol
 
Joined
Jan 31, 2011
Messages
2,211 (0.44/day)
System Name Ultima
Processor AMD Ryzen 7 5800X
Motherboard MSI Mag B550M Mortar
Cooling Arctic Liquid Freezer II 240 rev4 w/ Ryzen offset mount
Memory G.SKill Ripjaws V 2x16GB DDR4 3600
Video Card(s) Palit GeForce RTX 4070 12GB Dual
Storage WD Black SN850X 2TB Gen4, Samsung 970 Evo Plus 500GB , 1TB Crucial MX500 SSD sata,
Display(s) ASUS TUF VG249Q3A 24" 1080p 165-180Hz VRR
Case DarkFlash DLM21 Mesh
Audio Device(s) Onboard Realtek ALC1200 Audio/Nvidia HD Audio
Power Supply Corsair RM650
Mouse Rog Strix Impact 3 Wireless | Wacom Intuos CTH-480
Keyboard A4Tech B314 Keyboard
Software Windows 10 Pro
I like Nvidia's FXAA too, but the biggest problem is that it's NOT implemented in the drivers' control panel like ATI's. And there are so many games out there that don't have any AA support.

Is it that difficult, Nvidia, to implement FXAA in the drivers as well????

FXAA is already included in the recent Nvidia drivers:

1. Download Nvidia Inspector
2. Open the advanced driver settings
3. Look at the advanced configs (scroll down)
4. Set FXAA to 1 (default 0/off)

There are also some hidden settings there, like a frame cap/framerate limit, SLI and/or AA flags, etc.

also, some moar rumour tablez


http://forum.beyond3d.com/showthread.php?p=1619912
 
Joined
Mar 24, 2011
Messages
2,356 (0.47/day)
Location
VT
Processor Intel i7-10700k
Motherboard Gigabyte Aurorus Ultra z490
Cooling Corsair H100i RGB
Memory 32GB (4x8GB) Corsair Vengeance DDR4-3200MHz
Video Card(s) MSI Gaming Trio X 3070 LHR
Display(s) ASUS MG278Q / AOC G2590FX
Case Corsair X4000 iCue
Audio Device(s) Onboard
Power Supply Corsair RM650x 650W Fully Modular
Software Windows 10
Interesting chart. I wonder why the AA never gets put above 4x...
 

crazyeyesreaper

Not a Moderator
Staff member
Joined
Mar 25, 2009
Messages
9,816 (1.71/day)
Location
04578
System Name Old reliable
Processor Intel 8700K @ 4.8 GHz
Motherboard MSI Z370 Gaming Pro Carbon AC
Cooling Custom Water
Memory 32 GB Crucial Ballistix 3666 MHz
Video Card(s) MSI RTX 3080 10GB Suprim X
Storage 3x SSDs 2x HDDs
Display(s) ASUS VG27AQL1A x2 2560x1440 8bit IPS
Case Thermaltake Core P3 TG
Audio Device(s) Samson Meteor Mic / Generic 2.1 / KRK KNS 6400 headset
Power Supply Zalman EBT-1000
Mouse Mionix NAOS 7000
Keyboard Mionix
So according to that chart.... 3DMark 11 is a 7% difference :roll:


Total average is a 12% difference across all those tests.
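For anyone wondering how a single "total average" figure like that is usually derived from a chart of per-test deltas, here is a minimal sketch; the per-test percentages below are placeholders chosen for illustration, not the leaked numbers:

```python
# Hypothetical per-test performance deltas (new card vs. GTX 580), in percent.
# These figures are placeholders for illustration, not the leaked results.
deltas = {
    "3DMark 11": 7.0,
    "Game A": 15.0,
    "Game B": 10.0,
    "Game C": 16.0,
}

# The "total average" is just the arithmetic mean of the per-test deltas.
average = sum(deltas.values()) / len(deltas)
print(f"Average delta across {len(deltas)} tests: {average:.1f}%")  # -> 12.0%
```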
 
Joined
Oct 19, 2007
Messages
8,258 (1.32/day)
Processor Intel i9 9900K @5GHz w/ Corsair H150i Pro CPU AiO w/Corsair HD120 RBG fan
Motherboard Asus Z390 Maximus XI Code
Cooling 6x120mm Corsair HD120 RBG fans
Memory Corsair Vengeance RBG 2x8GB 3600MHz
Video Card(s) Asus RTX 3080Ti STRIX OC
Storage Samsung 970 EVO Plus 500GB , 970 EVO 1TB, Samsung 850 EVO 1TB SSD, 10TB Synology DS1621+ RAID5
Display(s) Corsair Xeneon 32" 32UHD144 4K
Case Corsair 570x RBG Tempered Glass
Audio Device(s) Onboard / Corsair Virtuoso XT Wireless RGB
Power Supply Corsair HX850w Platinum Series
Mouse Logitech G604s
Keyboard Corsair K70 Rapidfire
Software Windows 11 x64 Professional
Benchmark Scores Firestrike - 23520 Heaven - 3670
I just want benchmarks already so I know what to buy.
 
Joined
Jul 10, 2011
Messages
797 (0.16/day)
Processor Intel
Motherboard MSI
Cooling Cooler Master
Memory Corsair
Video Card(s) Nvidia
Storage Western Digital/Kingston
Display(s) Samsung
Case Thermaltake
Audio Device(s) On Board
Power Supply Seasonic
Mouse Glorious
Keyboard UniKey
Software Windows 10 x64


Borderlands 2 or the new Brothers in Arms running on Kepler? :D
 
Joined
Oct 29, 2010
Messages
2,972 (0.58/day)
System Name Old Fart / Young Dude
Processor 2500K / 6600K
Motherboard ASRock P67Extreme4 / Gigabyte GA-Z170-HD3 DDR3
Cooling CM Hyper TX3 / CM Hyper 212 EVO
Memory 16 GB Kingston HyperX / 16 GB G.Skill Ripjaws X
Video Card(s) Gigabyte GTX 1050 Ti / INNO3D RTX 2060
Storage SSD, some WD and lots of Samsungs
Display(s) BenQ GW2470 / LG UHD 43" TV
Case Cooler Master CM690 II Advanced / Thermaltake Core v31
Audio Device(s) Asus Xonar D1/Denon PMA500AE/Wharfedale D 10.1/ FiiO D03K/ JBL LSR 305
Power Supply Corsair TX650 / Corsair TX650M
Mouse Steelseries Rival 100 / Rival 110
Keyboard Sidewinder/ Steelseries Apex 150
Software Windows 10 / Windows 10 Pro
Aliens: Colonial Marines? PhysX? GTX 680?

As for that suspicious table: based on the specs, which I think we can agree are more or less accurate, this table was done by somebody who has done his homework. 30% plus on average above the GTX 580, which brings us to that 10% over the 7970. If you look carefully you'll see the clocks - 1050 and 1425 - very high for a stock card and above the reported 950 MHz for the GPU. It is also done at 1080p, where the memory bandwidth disadvantage is less pronounced.

So what I'm saying is that if this is close to real, then NV will launch the GK104 under the name GTX 680: a card slightly faster than the 7970, with certain weak points, since the chip was initially designed for the performance segment but after AMD's launch it can fulfil other expectations. Price? Neither $300 nor $550.
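To put the bandwidth point in rough numbers, here is a minimal sketch comparing peak memory bandwidth for the rumored 256-bit GK104 bus against Tahiti's 384-bit bus; the effective memory clocks are assumptions, not confirmed specs:

```python
def mem_bandwidth_gb_s(bus_width_bits, effective_clock_mhz):
    """Peak memory bandwidth in GB/s: (bytes per transfer) x (transfers per second)."""
    return (bus_width_bits / 8) * effective_clock_mhz * 1e6 / 1e9

# Assumed effective GDDR5 memory clocks, purely illustrative.
gk104_rumored = mem_bandwidth_gb_s(256, 5500)   # rumored 256-bit bus
tahiti_7970 = mem_bandwidth_gb_s(384, 5500)     # HD 7970's 384-bit bus

print(f"GK104 (rumored): {gk104_rumored:.0f} GB/s")  # ~176 GB/s
print(f"HD 7970: {tahiti_7970:.0f} GB/s")            # ~264 GB/s
```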
 
Joined
Feb 13, 2012
Messages
523 (0.11/day)
I doubt these rumors are true. I heard about Nvidia dropping their hot clocks, but changing the structure of the GPU this much? I don't think it's possible in such a short amount of time; as far as I knew, Kepler is a Fermi die shrink with some tweaks.
On another note, this article claims GK104 is a 340 mm² die, which is Nvidia's mid-range, while the HD 7970 has a die size of 375 mm², so much for the "we expected more from AMD" talk.
Not to mention Nvidia's high end is said to have a 550 mm² die size. Well, AMD could easily build a GPU that big and pack in more transistors, but that is usually a very bad business choice, and Nvidia suffers from it almost every time.
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.48/day)
Location
Reaching your left retina.
I don't think it's possible in such a short amount of time; as far as I knew, Kepler is a Fermi die shrink with some tweaks.

AMD/Nvidia do not start working on their chips only after releasing the previous one. They work for years on every chip, sometimes as much as 5 years depending on how different it is. Nvidia is already working on Maxwell and whatever comes next. AMD is already working on their next 2 architectures too, Sea Islands and Canary Islands. The work on Kepler started many years ago, maybe even before GTX 200 was released, or shortly after.

As far as Kepler goes, yes, it's a tweaked Fermi in 99% of cases; you can see it in the specs and schematics. The only difference is that they dropped the hot clocks, which makes the SPs substantially smaller, and doubled the number of them per SM to compensate.

No one knows exactly how much smaller the SPs are, but just as an example of how much clocks can affect the size of some units, AMD Barts' memory controller is half as big as Cypress/Cayman's because it's designed to work at ~1000 MHz instead of >1200 MHz. Those extra 200 MHz make the memory controller in Cypress/Cayman twice as big. So in the case of Kepler, looking at the specs and the 340 mm², we can assume that the non-hot-clocked SPs are around half the size.
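The throughput side of that trade-off is easy to sanity-check: dropping the hot clock halves the per-SP rate, and doubling the SP count gets it back. A minimal sketch, using illustrative GTX 580-class numbers rather than confirmed Kepler clocks:

```python
def fp32_tflops(sp_count, shader_clock_mhz):
    """Peak single-precision throughput: 2 FLOPs (one fused multiply-add) per SP per clock."""
    return 2 * sp_count * shader_clock_mhz * 1e6 / 1e12

# Hot-clocked, Fermi-style layout (GTX 580-like figures).
hot_clocked = fp32_tflops(512, 1544)

# Hypothetical non-hot-clocked layout: twice the SPs at half the shader clock.
no_hot_clock = fp32_tflops(1024, 772)

print(hot_clocked, no_hot_clock)  # both ~1.58 TFLOPS: same throughput, smaller SPs
```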
 
Joined
Feb 13, 2012
Messages
523 (0.11/day)
AMD/Nvidia do not start working on their chips only after releasing the previous one. They work for years on every chip, sometimes as much as 5 years depending on how different it is. Nvidia is already working on Maxwell and whatever comes next. AMD is already working on their next 2 architectures too, Sea Islands and Canary Islands. The work on Kepler started many years ago, maybe even before GTX 200 was released, or shortly after.

As far as Kepler goes, yes, it's a tweaked Fermi in 99% of cases; you can see it in the specs and schematics. The only difference is that they dropped the hot clocks, which makes the SPs substantially smaller, and doubled the number of them per SM to compensate.

No one knows exactly how much smaller the SPs are, but just as an example of how much clocks can affect the size of some units, AMD Barts' memory controller is half as big as Cypress/Cayman's because it's designed to work at ~1000 MHz instead of >1200 MHz. Those extra 200 MHz make the memory controller in Cypress/Cayman twice as big. So in the case of Kepler, looking at the specs and the 340 mm², we can assume that the non-hot-clocked SPs are around half the size.

Yes, but Fermi was supposed to be Nvidia's architecture for the years to come; Kepler is a descendant, kinda like Piledriver will be for Bulldozer.
But well, I guess that makes sense if it was done in order to scale at high clocks, kinda like CPUs having longer pipelines to scale at high frequency. But there is no way it will make that much difference (especially since the whole point of an architecture that aims for high frequency is to make smaller chips with less hardware and lower IPC but more throughput, but that's in CPUs, I'm not sure about GPUs). Maybe the 1536 is referring to the bigger GTX 680/780, which would have a 550 mm² die size (read that in previous leaks/rumors),
because even considering the die size, which is much smaller than the 580's, it triples the core count.
Even with 28 nm that's only 40% smaller, and it's near impossible to get perfect scaling.
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.48/day)
Location
Reaching your left retina.
Yes, but Fermi was supposed to be Nvidia's architecture for the years to come; Kepler is a descendant, kinda like Piledriver will be for Bulldozer.
But well, I guess that makes sense if it was done in order to scale at high clocks, kinda like CPUs having longer pipelines to scale at high frequency. But there is no way it will make that much difference (especially since the whole point of an architecture that aims for high frequency is to make smaller chips with less hardware and lower IPC but more throughput, but that's in CPUs, I'm not sure about GPUs). Maybe the 1536 is referring to the bigger GTX 680/780, which would have a 550 mm² die size (read that in previous leaks/rumors),
because even considering the die size, which is much smaller than the 580's, it triples the core count.
Even with 28 nm that's only 40% smaller, and it's near impossible to get perfect scaling.

Don't let the number of SPs blind you; they didn't really triple the number of cores. Like I said, dropping the hot clocks probably allows them to fit 2x as many SPs as Fermi cores in the same space, but they are only half as fast. They are trading 2x shader clock for 2x the number of SPs.

Based on die area, GK104 has to have around 3.6-4.0 billion transistors; that's twice as much as GF104/114, the chip it's based on. Would you have doubted so much if Nvidia had made a 768 SP Fermi(ish) part with a 256-bit memory interface? Twice the SPs at twice the number of transistors, while keeping a 256-bit MC. It's 100% expected, don't you think? And now take this 768 SP "GF124", and it's here where they drop the hot clocks, thus making the SPs much smaller and allowing them to fit 2x as many of them: GK104 is born.

Also remember that doubling the SPs per SM is a lot more area/transistor efficient than doubling the number of SMs.

And to finish, never look at die size for comparing; look at transistor count. Scaling varies a lot from one node to another, and transistor density can change a lot as a node matures, e.g. look at Cypress vs Cayman. GK104 has twice as many transistors as GF104 and that's all you should look at. It's pointless to even compare it to GF100/110, because GF100 is a compute-oriented chip with far more GPGPU features than GF104/114 and GK104. GF104 is only 60% as big as GF100, yet it has 75% of its gaming performance.
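The 3.6-4.0 billion figure can be reproduced as a back-of-envelope estimate from GF104's published density and the rumored die size; the 28 nm density gain and the 340 mm² area below are assumptions taken from this thread's rumors, not confirmed specs:

```python
# Back-of-envelope transistor estimate from die area, along the lines of the post.
# GF104's published figures serve as the 40 nm baseline; the 340 mm^2 GK104 die
# and the fraction of ideal density scaling achieved at 28 nm are assumptions.
gf104_transistors = 1.95e9          # GF104, 40 nm
gf104_area_mm2 = 332.0              # GF104 die size

density_40nm = gf104_transistors / gf104_area_mm2   # ~5.9M transistors per mm^2
ideal_gain = (40 / 28) ** 2                          # ~2.04x ideal density gain at 28 nm
density_28nm = density_40nm * ideal_gain * 0.95      # assume ~95% of ideal scaling

gk104_area_mm2 = 340.0                               # rumored GK104 die size
estimate = density_28nm * gk104_area_mm2
print(f"Estimated GK104 transistor count: {estimate / 1e9:.1f} billion")  # ~3.9 billion
```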
 
Joined
Feb 13, 2012
Messages
523 (0.11/day)
Don't let the number of SPs blind you; they didn't really triple the number of cores. Like I said, dropping the hot clocks probably allows them to fit 2x as many SPs as Fermi cores in the same space, but they are only half as fast. They are trading 2x shader clock for 2x the number of SPs.

Based on die area, GK104 has to have around 3.6-4.0 billion transistors; that's twice as much as GF104/114, the chip it's based on. Would you have doubted so much if Nvidia had made a 768 SP Fermi(ish) part with a 256-bit memory interface? Twice the SPs at twice the number of transistors, while keeping a 256-bit MC. It's 100% expected, don't you think? And now take this 768 SP "GF124", and it's here where they drop the hot clocks, thus making the SPs much smaller and allowing them to fit 2x as many of them: GK104 is born.

Also remember that doubling the SPs per SM is a lot more area/transistor efficient than doubling the number of SMs.

And to finish, never look at die size for comparing; look at transistor count. Scaling varies a lot from one node to another, and transistor density can change a lot as a node matures, e.g. look at Cypress vs Cayman. GK104 has twice as many transistors as GF104 and that's all you should look at. It's pointless to even compare it to GF100/110, because GF100 is a compute-oriented chip with far more GPGPU features than GF104/114 and GK104. GF104 is only 60% as big as GF100, yet it has 75% of its gaming performance.

Yes, I believe you, man, it was just pretty shocking, that's all. Now we might be able to compare AMD vs Nvidia a bit more closely based on shader count.
As for Cypress and Cayman, it seems like it happened from the other extreme, isn't it? As far as I remember, it was pretty much getting rid of the SPs that weren't being utilized and changing VLIW5 to VLIW4, ending up with smaller SMs that performed the same as their predecessors but at a smaller size, allowing them to fit more SMs into the 6970; so even though the shader count was lower, it performed like 20% better.

Though I still think there is more behind this. Having hot clocks has its benefits, but it has its limitations too; like I heard they don't scale well as frequency increases, while AMD could keep raising clocks while increasing performance at a constant rate (I could be wrong though, I don't know much about the nitty-gritty details in GPUs).
 
Joined
Mar 10, 2010
Messages
11,878 (2.21/day)
Location
Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R5 5900X/ Intel 8750H
Motherboard Crosshair hero8 impact/Asus
Cooling 360EK extreme rad+ 360$EK slim all push, cpu ek suprim Gpu full cover all EK
Memory Corsair Vengeance Rgb pro 3600cas14 16Gb in four sticks./16Gb/16GB
Video Card(s) Powercolour RX7900XT Reference/Rtx 2060
Storage Silicon power 2TB nvme/8Tb external/1Tb samsung Evo nvme 2Tb sata ssd/1Tb nvme
Display(s) Samsung UAE28"850R 4k freesync.dell shiter
Case Lianli 011 dynamic/strix scar2
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi/Asus stock
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Aimo 120
VR HMD Oculus rift
Software Win 10 Pro
Benchmark Scores 8726 vega 3dmark timespy/ laptop Timespy 6506
This is not going to be 50% faster than the 7970. Judging by the specs it should fall between the 7950 and 7970 at a rumored $300.
GK110 will probably be the Tahiti killer. At a price...

Yeah, that was sarcasm from me, so I agree with you, dude :toast:

But in all honesty I'm betting these will arrive cheap and be below a 7950 in performance.
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.48/day)
Location
Reaching your left retina.
Though I still think there is more behind this. Having hot clocks has its benefits, but it has its limitations too; like I heard they don't scale well as frequency increases, while AMD could keep raising clocks while increasing performance at a constant rate (I could be wrong though, I don't know much about the nitty-gritty details in GPUs).

Yes, that's correct, and it's the reason that Nvidia stopped using hot clocks with Kepler.

The reason they used hot clocks before was apparently to have lower latencies and better single-threaded/lightly threaded performance, so that compute apps would benefit. Remember the first chips to have hot clocks on the shaders were running at 600 MHz core clocks and below, so the shaders ran at <1200 MHz. Now, even without hot clocks, they will be running at 1000 MHz, so that's probably enough. Latencies are further reduced by a shorter pipeline (due to lower clocks) and other means that are required for GPGPU anyway.

Fermi shaders running at 2000 MHz would have been overkill for what's really needed, and would consume more than two 1000 MHz shaders. A compute GPU needs first and foremost multi-threaded performance; as long as single-threaded performance is not crap, it is only required up to a certain level, so that minor tasks don't become a bottleneck.
 

jamsbong

New Member
Joined
Mar 17, 2010
Messages
83 (0.02/day)
System Name 2500Kjamsbong
Processor Core i5 2500K @ 4.6Ghz
Motherboard Asrock Extreme 4 Z68
Cooling Zalman Reserator (CPU and GPU)
Memory DDR3 8GB
Video Card(s) EVGA Nvidia 560Ti 1GB
Storage 60GB Kingston SSD
Display(s) 24" Dell IPS
Case CoolerMaster 690 Advanced II
Audio Device(s) on-board
Power Supply Zalman ZM-600HP modular 600watt
Software Windows 7
Giving up in time is good practice when you are so wrong, so well played. lol

I'm not aware that I'm in any way wrong, nor am I giving up on anything. All I did was be rational and put things on hold. You're mistaken again.......
I guess it is always going to be difficult for me to have a logical debate with someone who is not.
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.48/day)
Location
Reaching your left retina.
I'm not aware that I'm in any way wrong, nor am I giving up on anything. All I did was be rational and put things on hold. You're mistaken again.......
I guess it is always going to be difficult for me to have a logical debate with someone who is not.

Maybe you should start by explaining why, if it's only going to be almost as fast as the GTX 580, they put 96 SPs per SM (double) instead of, say, 64 SPs, or more importantly why they doubled the number of TMUs, when 64 TMUs were perfectly fine for the GTX 580 and GK104 will have 25% higher clocks (thus 25% higher texture fillrate had it kept 64 TMUs instead of 128). I'm sorry, but you just don't increase die size like that if it's not coming with a substantial (read: justified) performance increase.

You have produced ZERO proof (I didn't expect that, since nothing is fact), but you have also explained nothing (which I do expect) about why such a massive increase in computational power, one that didn't come for free and meant a 100% increase in transistor count, is not going to produce any performance gain.

You have not explained why a 2.9 TFLOPS card will not be able to beat the 1.5 TFLOPS card, and, if that were the case, why they didn't just create a 1.5 TFLOPS (768 SP) card in the first place. That would have been easy: same architecture, half the SPs, 48 per SM. If going with 96 SPs is going to make the block half as efficient as Fermi with 48 SPs, you just don't make it 96 SPs!!

So start by explaining something, anything, and stop calling me a fanboy as if that were any kind of argument in your favor, because it is not; it only makes you look like a 12-year-old kid and an idiot. "It's going to be so, because (you think) it's going to be so, and if you think differently you are a fanboy" is not an argument.

Logic: the study of the principles and criteria of valid inference and demonstration.

More logic:

GK104 is 340 mm², so close to 4 billion transistors, twice as many as GF104 and 33% more than GF110. Logic dictates that Nvidia did not suddenly create an architecture that is at least 33% less efficient than Fermi (70% compared to GF104), 25% higher clocks notwithstanding. Especially when they have been claiming better efficiency for almost 2 years now.
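For reference, the 1.5 vs 2.9 TFLOPS and the texture fillrate figures above follow directly from the published GTX 580 specs and the rumored GK104 specs; a minimal sketch, with the GK104 numbers (1536 SPs, 128 TMUs, ~950 MHz, no hot clock) treated as rumor rather than fact:

```python
def texture_fillrate_gtexel_s(tmus, core_clock_mhz):
    """Peak texture fillrate in GTexels/s: one texel per TMU per clock."""
    return tmus * core_clock_mhz * 1e6 / 1e9

def fp32_tflops(sp_count, shader_clock_mhz):
    """Peak FP32 throughput: 2 FLOPs (one FMA) per SP per clock."""
    return 2 * sp_count * shader_clock_mhz * 1e6 / 1e12

# GTX 580: published specs (64 TMUs @ 772 MHz core, 512 SPs @ 1544 MHz hot clock).
print(texture_fillrate_gtexel_s(64, 772), fp32_tflops(512, 1544))    # ~49 GT/s, ~1.58 TFLOPS

# GK104: rumored specs from this thread (128 TMUs, 1536 SPs, ~950 MHz, no hot clock).
print(texture_fillrate_gtexel_s(128, 950), fp32_tflops(1536, 950))   # ~122 GT/s, ~2.92 TFLOPS
```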
 
Joined
May 15, 2007
Messages
777 (0.12/day)
System Name Daedalus | ZPM Hive |
Processor M3 Pro (11/14) | i7 12700KF |
Motherboard Apple M3 Pro | MSI Z790 |
Cooling Pure Silence | Freezer 36 |
Memory 18GB Unified | 32GB DDR5 6400MT/s C32|
Video Card(s) M3 Pro | Radeon RX7900 GRE |
Storage 512GB NVME | 1TB NVME (Boot) + 4 x 1TB RAID0 NVME Games |
Display(s) 14" 3024x1964 | 1440p UW 144Hz |
Case Macbook Pro 14" | H510 Flow |
Audio Device(s) Onboard | None | Onboard |
Power Supply ~ 77w Magsafe | EVGA 750w G3 |
Mouse Razer Basilisk
Keyboard Logitech G915 TKL
Software MacOS Sonoma | Win 11 x64 |
I'm not aware that I'm in any way wrong, nor am I giving up on anything. All I did was be rational and put things on hold. You're mistaken again.......
I guess it is always going to be difficult for me to have a logical debate with someone who is not.

I am struggling to see the point of your argument here. You keep stating that Benetanegia is a fanboy and "wrong" all the time, yet so far I have seen nothing but rational, well-thought-out posts from him. I may not agree with everything in his posts (actually I do agree with most of it), but I am struggling to see the "fanboy" stance you keep going on about.

No doubt I will get called an Nvidia fanboy now despite running an HD 7970 and Eyefinity.... :wtf:

One thing that does interest me about Kepler being a die-shrunk and "tweaked" Fermi is how much of a performance increase we can expect from future driver improvements. Driver improvements are a given with GCN as the architecture is relatively immature, but what about Kepler? Could we end up with a case where Kepler comes out of the gate faster than Tahiti but ends up slower in the long run due to a lack of driver improvements?

Obviously this is still conjecture, but it is an interesting avenue to investigate, as I have seen some pretty big boosts in BF3 (@3560*1920) with the latest HD 79xx RC driver (25/01/2012).
 

jamsbong

New Member
Joined
Mar 17, 2010
Messages
83 (0.02/day)
System Name 2500Kjamsbong
Processor Core i5 2500K @ 4.6Ghz
Motherboard Asrock Extreme 4 Z68
Cooling Zalman Reserator (CPU and GPU)
Memory DDR3 8GB
Video Card(s) EVGA Nvidia 560Ti 1GB
Storage 60GB Kingston SSD
Display(s) 24" Dell IPS
Case CoolerMaster 690 Advanced II
Audio Device(s) on-board
Power Supply Zalman ZM-600HP modular 600watt
Software Windows 7
@Benetanegia "but also explained nothing (which I do expect)". I've discussed this with you before, since there is no facts whatever you built on is full on nothing. No point getting into explanation mode on speculative information.

"GK104 is 340 mm2, so close to 4 billion transistor" I am not aware of this information, where did you get 4 billion transistor? Did you estimate it off the 340mm2? in other words, building a case off speculative information?

@Xaser04 no need to struggle. Just read what I've posted thoroughly and comprehend it before venting off more steam.
 