
ATI Believes GeForce GTX 200 Will Be NVIDIA's Last Monolithic GPU.

Joined
May 31, 2005
Messages
284 (0.04/day)
What's wrong with the GTX 280 again? It looks like it's 30% faster than an 8800 GTX, and that seems right in line with where it should be.
 
Joined
Feb 18, 2006
Messages
5,147 (0.75/day)
Location
AZ
System Name Thought I'd be done with this by now
Processor i7 11700k 8/16
Motherboard MSI Z590 Pro Wifi
Cooling Be Quiet Dark Rock Pro 4, 9x aigo AR12
Memory 32GB GSkill TridentZ Neo DDR4-4000 CL18-22-22-42
Video Card(s) MSI Ventus 2x Geforce RTX 3070
Storage 1TB MX300 M.2 OS + Games, + cloud mostly
Display(s) Samsung 40" 4k (TV)
Case Lian Li PC-011 Dynamic EVO Black
Audio Device(s) onboard HD -> Yamaha 5.1
Power Supply EVGA 850 GQ
Mouse Logitech wireless
Keyboard same
VR HMD nah
Software Windows 10
Benchmark Scores no one cares anymore lols
What's wrong with the GTX 280 again? It looks like it's 30% faster than an 8800 GTX, and that seems right in line with where it should be.

Nothing would be wrong with the GTX 280 if the 9800 GX2 didn't precede it. Being that it did, the GTX 280's price vs. performance doesn't seem that impressive against its $400 predecessor, which performs the same in many situations.

But as for the discussion at hand, the GTX 280 is like my 2900 XT in that it puts out a lot of heat, uses a lot of energy, is expensive to produce, and has to have a big cooler on it.

But as for the specs, I said it before: the GTX 280 is exactly what we all hoped it would be, spec-wise.
 
Joined
Sep 26, 2006
Messages
6,959 (1.05/day)
Location
Australia, Sydney
What's wrong with the GTX 280 again? It looks like it's 30% faster than an 8800 GTX, and that seems right in line with where it should be.

Nothing wrong with it. It just saps a lot of power for a graphics card, and it costs more than the 9800 GX2, which comes pretty close to it.

It's powerful, that's for sure. But AMD are saying that Nvidia are being suicidal by keeping everything in one core, and I have to agree with that logic. According to Tweaktown, two HD 4850s spank a GTX 280, and those are the mid-range HD 4850s, not the higher-mid HD 4870. The HD 4850 is already faster than a 9800 GTX.

Now, if you consider AMD putting the performance of two HD 4850s/HD 4870s into ONE card, what AMD is saying suddenly makes sense.
 
Joined
Jul 19, 2006
Messages
43,606 (6.50/day)
Processor AMD Ryzen 7 7800X3D
Motherboard ASUS TUF x670e
Cooling EK AIO 360. Phantek T30 fans.
Memory 32GB G.Skill 6000Mhz
Video Card(s) Asus RTX 4090
Storage WD m.2
Display(s) LG C2 Evo OLED 42"
Case Lian Li PC 011 Dynamic Evo
Audio Device(s) Topping E70 DAC, SMSL SP200 Headphone Amp.
Power Supply FSP Hydro Ti PRO 1000W
Mouse Razer Basilisk V3 Pro
Keyboard Tester84
Software Windows 11
If this architecture were produced on a 45 nm or 32 nm process, a single chip would be a bit more efficient. But that's a lot of chip to shrink!
 

Polaris573

Senior Moderator
Joined
Feb 26, 2005
Messages
4,268 (0.59/day)
Location
Little Rock, USA
Processor LGA 775 Intel Q9550 2.8 Ghz
Motherboard MSI P7N Diamond - 780i Chipset
Cooling Arctic Freezer
Memory 6GB G.Skill DDRII 800 4-4-3-5
Video Card(s) Sapphire HD 7850 2 GB PCI-E
Storage 1 TB Seagate 32MB Cache, 250 GB Seagate 16MB Cache
Display(s) Acer X203w
Case Coolermaster Centurion 5
Audio Device(s) Creative Sound Blaster X-Fi Xtreme Music
Power Supply OCZ StealthXStream 600 Watt
Software Windows 7 Ultimate x64
Do not rickroll people outside of General Nonsense. This is not 4chan. TechPowerUp is not for spamming useless junk. This is becoming more and more of a problem, and I am going to have to start handing out infractions for it in the future if it does not stop.
 

btarunr

Editor & Senior Moderator
Staff member
Joined
Oct 9, 2007
Messages
47,244 (7.55/day)
Location
Hyderabad, India
System Name RBMK-1000
Processor AMD Ryzen 7 5700G
Motherboard ASUS ROG Strix B450-E Gaming
Cooling DeepCool Gammax L240 V2
Memory 2x 8GB G.Skill Sniper X
Video Card(s) Palit GeForce RTX 2080 SUPER GameRock
Storage Western Digital Black NVMe 512GB
Display(s) BenQ 1440p 60 Hz 27-inch
Case Corsair Carbide 100R
Audio Device(s) ASUS SupremeFX S1220A
Power Supply Cooler Master MWE Gold 650W
Mouse ASUS ROG Strix Impact
Keyboard Gamdias Hermes E2
Software Windows 11 Pro
If this architecture were produced on a 45 nm or 32 nm process, a single chip would be a bit more efficient. But that's a lot of chip to shrink!

45 nm itself was unthinkable just three years ago. Remember how technologists the world over celebrated the introduction of Prescott just because it broke into sub-100 nm process territory? Unfortunately, the die-shrink didn't give it any edge over its 130 nm cousin (Northwood), although more L2 cache could be accommodated. Likewise, the shrink from Prescott to Cedar Mill (and Smithfield to Presler), 90 nm to 65 nm, didn't benefit the thermal/power properties of the chip; the miniaturisation just helped squeeze in more L2 cache. In the same way, I doubt this transition from 65 nm to 55 nm will help NVIDIA in any way. If you want a live example from GPUs, compare the Radeon HD 2600 XT to the HD 3650 (65 nm to 55 nm, nothing (much) changed).
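For context, the textbook first-order CMOS power model captures both the hope behind a die shrink and the disappointment described above. This is a simplified sketch, not a full account of any real chip:

```latex
% First-order CMOS power model (simplified; ignores many real-world effects).
%   alpha = activity factor, C = switched capacitance,
%   V_dd  = supply voltage,  f = clock frequency.
P_{\text{total}} \approx \underbrace{\alpha\, C\, V_{dd}^{2}\, f}_{\text{switching}}
                 + \underbrace{V_{dd}\, I_{\text{leak}}}_{\text{leakage}}
```

An ideal shrink cuts C and allows a lower V_dd, so switching power falls sharply. But at 90 nm and below, the leakage term grew quickly, which is commonly cited as the reason Prescott and Cedar Mill gained so little from their shrinks.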
 

hat

Enthusiast
Joined
Nov 20, 2006
Messages
21,745 (3.30/day)
Location
Ohio
System Name Starlifter :: Dragonfly
Processor i7 2600k 4.4GHz :: i5 10400
Motherboard ASUS P8P67 Pro :: ASUS Prime H570-Plus
Cooling Cryorig M9 :: Stock
Memory 4x4GB DDR3 2133 :: 2x8GB DDR4 2400
Video Card(s) PNY GTX1070 :: Integrated UHD 630
Storage Crucial MX500 1TB, 2x1TB Seagate RAID 0 :: Mushkin Enhanced 60GB SSD, 3x4TB Seagate HDD RAID5
Display(s) Onn 165hz 1080p :: Acer 1080p
Case Antec SOHO 1030B :: Old White Full Tower
Audio Device(s) Creative X-Fi Titanium Fatal1ty Pro - Bose Companion 2 Series III :: None
Power Supply FSP Hydro GE 550w :: EVGA Supernova 550
Software Windows 10 Pro - Plex Server on Dragonfly
Benchmark Scores >9000
Die shrinks will just allow Nvidia to cram more transistors into the same package size. Nvidia's battle plan seems to be something like this:
 

DarkMatter

New Member
Joined
Oct 5, 2007
Messages
1,714 (0.27/day)
Processor Intel C2Q Q6600 @ Stock (for now)
Motherboard Asus P5Q-E
Cooling Proc: Scythe Mine, Graphics: Zalman VF900 Cu
Memory 4 GB (2x2GB) DDR2 Corsair Dominator 1066Mhz 5-5-5-15
Video Card(s) GigaByte 8800GT Stock Clocks: 700Mhz Core, 1700 Shader, 1940 Memory
Storage 74 GB WD Raptor 10000rpm, 2x250 GB Seagate Raid 0
Display(s) HP p1130, 21" Trinitron
Case Antec p180
Audio Device(s) Creative X-Fi PLatinum
Power Supply 700W FSP Group 85% Efficiency
Software Windows XP
Nothing would be wrong with the GTX 280 if the 9800 GX2 didn't precede it. Being that it did, the GTX 280's price vs. performance doesn't seem that impressive against its $400 predecessor, which performs the same in many situations.

But as for the discussion at hand, the GTX 280 is like my 2900 XT in that it puts out a lot of heat, uses a lot of energy, is expensive to produce, and has to have a big cooler on it.

But as for the specs, I said it before: the GTX 280 is exactly what we all hoped it would be, spec-wise.

I would say it has even "better" specs than what we thought. At least this is true in my case, because it effectively has an additional PhysX processor slapped onto the core. Those additional 30 FP64 units, with all the added registers and cache, don't help with rendering at all, nor can they be used by the graphics APIs, only by CUDA. That's why I put "better" in quotes: they have added a lot of silicon that is not useful at all NOW. It could be very useful in the future. That FP64 unit really is powerful and unique, as no other commercial chip has ever implemented a unit with such capabilities, so when CUDA programs start to be something more than a showcase, or games start to implement Ageia's physics, we could call these enhancements something good. Until then we can only look at them as a kind of silicon waste.
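To make "only by CUDA" concrete: graphics APIs of the time (Direct3D 9/10, OpenGL 2.x) exposed no double-precision type at all, so a CUDA kernel, compiled for GT200 with -arch=sm_13, was the only way to exercise those FP64 units. A minimal sketch, with hypothetical names and sizes:

```cuda
#include <cuda_runtime.h>

// Hypothetical example: double-precision AXPY. Nothing in the
// rendering pipeline could issue FP64 work on GT200; only a CUDA
// kernel like this one reached the 30 dedicated FP64 units.
__global__ void daxpy(int n, double a, const double* x, double* y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];  // runs on the FP64 unit
}

int main()
{
    const int n = 1 << 20;
    double *x, *y;
    cudaMalloc(&x, n * sizeof(double));
    cudaMalloc(&y, n * sizeof(double));
    // (device buffers left uninitialised; this only sketches the launch)
    daxpy<<<(n + 255) / 256, 256>>>(n, 2.0, x, y);
    cudaDeviceSynchronize();
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```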

45 nm itself was unthinkable just three years ago. Remember how technologists the world over celebrated the introduction of Prescott just because it broke into sub-100 nm process territory? Unfortunately, the die-shrink didn't give it any edge over its 130 nm cousin (Northwood), although more L2 cache could be accommodated. Likewise, the shrink from Prescott to Cedar Mill (and Smithfield to Presler), 90 nm to 65 nm, didn't benefit the thermal/power properties of the chip; the miniaturisation just helped squeeze in more L2 cache. In the same way, I doubt this transition from 65 nm to 55 nm will help NVIDIA in any way. If you want a live example from GPUs, compare the Radeon HD 2600 XT to the HD 3650 (65 nm to 55 nm, nothing (much) changed).

You seem to overlook that more cache means more power and heat, especially when caches are half the size of the chip. Even though caches don't consume nearly as much as other parts, they make a difference, a big one.
 
Joined
Dec 28, 2006
Messages
4,378 (0.67/day)
Location
Hurst, Texas
System Name The86
Processor Ryzen 5 3600
Motherboard ASROCKS B450 Steel Legend
Cooling AMD Stealth
Memory 2x8gb DDR4 3200 Corsair
Video Card(s) EVGA RTX 3060 Ti
Storage WD Black 512gb, WD Blue 1TB
Display(s) AOC 24in
Case Raidmax Alpha Prime
Power Supply 700W Thermaltake Smart
Mouse Logitech Mx510
Keyboard Razer BlackWidow 2012
Software Windows 10 Professional
Depending on how well CUDA is adopted for games in the next six months, Nvidia could very well win round 10 of the GPU wars even with the price. If CUDA is worked into games to offload a lot of the calculations, then Nvidia just won, and I'm betting money this is their gamble.
 
Joined
Jan 2, 2008
Messages
3,296 (0.53/day)
System Name Thakk
Processor i7 6700k @ 4.5Ghz
Motherboard Gigabyte G1 Z170N ITX
Cooling H55 AIO
Memory 32GB DDR4 3100 c16
Video Card(s) Zotac RTX3080 Trinity
Storage Corsair Force GT 120GB SSD / Intel 250GB SSD / Samsung Pro 512 SSD / 3TB Seagate SV32
Display(s) Acer Predator X34 100hz IPS Gsync / HTC Vive
Case QBX
Audio Device(s) Realtek ALC1150 > Creative Gigaworks T40 > AKG Q701
Power Supply Corsair SF600
Mouse Logitech G900
Keyboard Ducky Shine TKL MX Blue + Vortex PBT Doubleshots
Software Windows 10 64bit
Benchmark Scores http://www.3dmark.com/fs/12108888
delusions of hope
 
Joined
Dec 28, 2006
Messages
4,378 (0.67/day)
Location
Hurst, Texas
System Name The86
Processor Ryzen 5 3600
Motherboard ASROCKS B450 Steel Legend
Cooling AMD Stealth
Memory 2x8gb DDR4 3200 Corsair
Video Card(s) EVGA RTX 3060 Ti
Storage WD Black 512gb, WD Blue 1TB
Display(s) AOC 24in
Case Raidmax Alpha Prime
Power Supply 700W Thermaltake Smart
Mouse Logitech Mx510
Keyboard Razer BlackWidow 2012
Software Windows 10 Professional
Not really; look at how Nvidia helps devs to ensure compatibility with NVIDIA GPUs.

If physics and lighting were moved from the CPU to the GPU, that bottleneck would be gone from the CPU, and the GPU can handle the work at least 200x faster than the fastest quad core, even while running the game at the same time. This in turn allows better, more realistic things to be done. Remember the Alan Wake demo at IDF with those great physics? Here's the thing: it stuttered. If CUDA were used instead, it would get a lot more FPS. The reason we don't see heavier, more realistic physics is the lack of raw horsepower. If CUDA is used the way Nvidia hopes it will be, games may not run any faster, but the level of realism can increase greatly, which would sway more than one consumer.

If it gets 100 FPS and uses large transparent textures for dust, that's great.

If it gets 100 FPS but draws each grain of dirt as its own pixel, that's even better.

Which would you get? Even with the price difference, I'd go for the real pixel dirt.
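The offload being described is easy to sketch in CUDA: give every grain of dirt its own thread instead of looping over them on the CPU. This is purely illustrative; the struct, kernel, and simple Euler integration are made up for this post, not from any shipping game:

```cuda
#include <cuda_runtime.h>

// One thread per particle ("grain of dirt"). A CPU would walk this
// array serially; the GPU advances thousands of grains concurrently.
struct Particle { float3 pos; float3 vel; };

__global__ void stepParticles(Particle* p, int n, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    p[i].vel.y -= 9.81f * dt;        // gravity
    p[i].pos.x += p[i].vel.x * dt;   // Euler position update
    p[i].pos.y += p[i].vel.y * dt;
    p[i].pos.z += p[i].vel.z * dt;

    if (p[i].pos.y < 0.0f) {         // bounce off the ground plane
        p[i].pos.y = 0.0f;
        p[i].vel.y *= -0.5f;
    }
}

// Host side: advance n grains once per frame.
void stepFrame(Particle* d_particles, int n, float dt)
{
    stepParticles<<<(n + 255) / 256, 256>>>(d_particles, n, dt);
}
```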
 

btarunr

Editor & Senior Moderator
Staff member
Joined
Oct 9, 2007
Messages
47,244 (7.55/day)
Location
Hyderabad, India
System Name RBMK-1000
Processor AMD Ryzen 7 5700G
Motherboard ASUS ROG Strix B450-E Gaming
Cooling DeepCool Gammax L240 V2
Memory 2x 8GB G.Skill Sniper X
Video Card(s) Palit GeForce RTX 2080 SUPER GameRock
Storage Western Digital Black NVMe 512GB
Display(s) BenQ 1440p 60 Hz 27-inch
Case Corsair Carbide 100R
Audio Device(s) ASUS SupremeFX S1220A
Power Supply Cooler Master MWE Gold 650W
Mouse ASUS ROG Strix Impact
Keyboard Gamdias Hermes E2
Software Windows 11 Pro
You seem to overlook that more cache means more power and heat, especially when caches are half the size of the chip. Even though caches don't consume nearly as much as other parts, they make a difference, a big one.

Cache size's relation to heat is close to insignificant. The Windsor 5000+ (2x 512 KB L2) differed very little from the Windsor 5200+ (2x 1 MB L2); both had the same clock speeds and other parameters, and I've used both. But when Prescott was shrunk, despite the doubled cache, there should have been a significant fall in power consumption, like the one the Windsor (2x 512 KB L2 variants) to Brisbane shrink had.
 
Joined
Dec 28, 2006
Messages
4,378 (0.67/day)
Location
Hurst, Texas
System Name The86
Processor Ryzen 5 3600
Motherboard ASROCKS B450 Steel Legend
Cooling AMD Stealth
Memory 2x8gb DDR4 3200 Corsair
Video Card(s) EVGA RTX 3060 Ti
Storage WD Black 512gb, WD Blue 1TB
Display(s) AOC 24in
Case Raidmax Alpha Prime
Power Supply 700W Thermaltake Smart
Mouse Logitech Mx510
Keyboard Razer BlackWidow 2012
Software Windows 10 Professional
Agreed, it's not the cache, it's the overall design of the processing unit. The reason Prescott had so many problems with heat is very simple: the extra 512 KB of cache was tacked on next to the old cache, making a longer distance than before for the CPU to read the cache, and that causes friction, which creates heat; the shorter the distance, the better. Intel was just lazy back then.
 

btarunr

Editor & Senior Moderator
Staff member
Joined
Oct 9, 2007
Messages
47,244 (7.55/day)
Location
Hyderabad, India
System Name RBMK-1000
Processor AMD Ryzen 7 5700G
Motherboard ASUS ROG Strix B450-E Gaming
Cooling DeepCool Gammax L240 V2
Memory 2x 8GB G.Skill Sniper X
Video Card(s) Palit GeForce RTX 2080 SUPER GameRock
Storage Western Digital Black NVMe 512GB
Display(s) BenQ 1440p 60 Hz 27-inch
Case Corsair Carbide 100R
Audio Device(s) ASUS SupremeFX S1220A
Power Supply Cooler Master MWE Gold 650W
Mouse ASUS ROG Strix Impact
Keyboard Gamdias Hermes E2
Software Windows 11 Pro
Agreed, it's not the cache, it's the overall design of the processing unit. The reason Prescott had so many problems with heat is very simple: the extra 512 KB of cache was tacked on next to the old cache, making a longer distance than before for the CPU to read the cache, and that causes friction, which creates heat; the shorter the distance, the better. Intel was just lazy back then.

You're being sarcastic, right? Even if you weren't,

 

DarkMatter

New Member
Joined
Oct 5, 2007
Messages
1,714 (0.27/day)
Processor Intel C2Q Q6600 @ Stock (for now)
Motherboard Asus P5Q-E
Cooling Proc: Scythe Mine, Graphics: Zalman VF900 Cu
Memory 4 GB (2x2GB) DDR2 Corsair Dominator 1066Mhz 5-5-5-15
Video Card(s) GigaByte 8800GT Stock Clocks: 700Mhz Core, 1700 Shader, 1940 Memory
Storage 74 GB WD Raptor 10000rpm, 2x250 GB Seagate Raid 0
Display(s) HP p1130, 21" Trinitron
Case Antec p180
Audio Device(s) Creative X-Fi PLatinum
Power Supply 700W FSP Group 85% Efficiency
Software Windows XP
Cache size's relation to heat is close to insignificant. The Windsor 5000+ (2x 512 KB L2) differed very little from the Windsor 5200+ (2x 1 MB L2); both had the same clock speeds and other parameters, and I've used both. But when Prescott was shrunk, despite the doubled cache, there should have been a significant fall in power consumption, like the one the Windsor (2x 512 KB L2 variants) to Brisbane shrink had.

Huh! :eek: Now I'm impressed. You have the tools required to measure power consumption and heat at home?!

Because otherwise, just because temperatures are not higher doesn't mean the chip isn't putting out more heat and consuming more. Heat is about energy transfer; in the case of a CPU, energy transfer between surfaces. More cache = more surface = more energy transfer = lower temperatures at the same heat output.

That was one reason. The other, a lot simpler one, is this: wasn't the 5000+ a 5200+ with half the cache "disabled"? In quotes because most of the time they can't cut all the power to the disabled part.
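That temperature-versus-heat distinction fits in one line of the standard lumped thermal model; this formulation is a simplification, not from either poster:

```latex
% Junction temperature vs. dissipated power P.
%   theta_ja = junction-to-ambient thermal resistance (K/W);
%   a larger die spreads the same power over more area, lowering theta_ja.
T_{\text{junction}} = T_{\text{ambient}} + P \cdot \theta_{ja}
```

Two chips dissipating the same P can read different temperatures if their theta_ja differs, so a lower temperature alone doesn't prove lower power draw.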
 

btarunr

Editor & Senior Moderator
Staff member
Joined
Oct 9, 2007
Messages
47,244 (7.55/day)
Location
Hyderabad, India
System Name RBMK-1000
Processor AMD Ryzen 7 5700G
Motherboard ASUS ROG Strix B450-E Gaming
Cooling DeepCool Gammax L240 V2
Memory 2x 8GB G.Skill Sniper X
Video Card(s) Palit GeForce RTX 2080 SUPER GameRock
Storage Western Digital Black NVMe 512GB
Display(s) BenQ 1440p 60 Hz 27-inch
Case Corsair Carbide 100R
Audio Device(s) ASUS SupremeFX S1220A
Power Supply Cooler Master MWE Gold 650W
Mouse ASUS ROG Strix Impact
Keyboard Gamdias Hermes E2
Software Windows 11 Pro
Huh! :eek: Now I'm impressed. You have the tools required to measure power consumption and heat at home?!

Because otherwise, just because temperatures are not higher doesn't mean the chip isn't putting out more heat and consuming more. Heat is about energy transfer; in the case of a CPU, energy transfer between surfaces. More cache = more surface = more energy transfer = lower temperatures at the same heat output.

That was one reason. The other, a lot simpler one, is this: wasn't the 5000+ a 5200+ with half the cache "disabled"? In quotes because most of the time they can't cut all the power to the disabled part.

No, it's charts that I follow, and I don't mean charts from AMD showing a fixed 89 W or 65 W across all models of a core. It's more than common sense that while the die-shrink from 90 nm to 65 nm sent AMD's rated wattage down from roughly 89 W to 65 W, Prescott to Cedar Mill didn't show a similar reduction. That's what I'm basing it on.
 

DarkMatter

New Member
Joined
Oct 5, 2007
Messages
1,714 (0.27/day)
Processor Intel C2Q Q6600 @ Stock (for now)
Motherboard Asus P5Q-E
Cooling Proc: Scythe Mine, Graphics: Zalman VF900 Cu
Memory 4 GB (2x2GB) DDR2 Corsair Dominator 1066Mhz 5-5-5-15
Video Card(s) GigaByte 8800GT Stock Clocks: 700Mhz Core, 1700 Shader, 1940 Memory
Storage 74 GB WD Raptor 10000rpm, 2x250 GB Seagate Raid 0
Display(s) HP p1130, 21" Trinitron
Case Antec p180
Audio Device(s) Creative X-Fi PLatinum
Power Supply 700W FSP Group 85% Efficiency
Software Windows XP
No, it's charts that I follow, and I don't mean charts from AMD showing a fixed 89 W or 65 W across all models of a core. It's more than common sense that while the die-shrink from 90 nm to 65 nm sent AMD's rated wattage down from roughly 89 W to 65 W, Prescott to Cedar Mill didn't show a similar reduction. That's what I'm basing it on.

Well, I can just as easily base my point on the fact that CPUs with L3 caches have a much higher TDP. Which of the two arguments do you think is better?
 