
ATI Believes GeForce GTX 200 Will be NVIDIA's Last Monolithic GPU.

Megasty

New Member
Joined
Mar 18, 2008
Messages
1,263 (0.21/day)
Location
The Kingdom of Au
Processor i7 920 @ 3.6 GHz (4.0 when gaming)
Motherboard Asus Rampage II Extreme - Yeah I Bought It...
Cooling Swiftech.
Memory 12 GB Crucial Ballistix Tracer - I Love Red
Video Card(s) ASUS EAH4870X2 - That Fan Is...!?
Storage 4 WD 1.5 TB
Display(s) 24" Sceptre
Case TT Xaser VI - Fugly, Red, & Huge...
Audio Device(s) The ASUS Thingy
Power Supply Ultra X3 1000W
Software Vista Ultimate SP1 64bit
I completely disagree with the single-die multi-core GPU thing. The whole idea of using multiple GPUs is to reduce die size. Doing a dual-core GPU on a single die is exactly the same as doing a double-sized chip, but even worse IMO. Take into account that GPUs are already multi-processor devices, in which the cores are tied to a crossbar bus for communication. Look at the GT200 diagram:

http://techreport.com/articles.x/14934

In the image the connections are missing, but it suffices to say they are all connected to the "same bus". A dual-core GPU would be exactly the same, because GPUs are already a bunch of parallel processors, but with two separate buses, so it'd need an external one and that would only add latency. What's the point of doing that? Yields are not going to be higher, as in both cases you have the same number of processors and the same silicon that needs to go (and work) together. In a single "core" GPU, if one unit fails you can just disable it and sell it as a lower model (8800 GT, G80 GTS, HD2900GT, GTX 260...), but in a dual "core" GPU the whole core would need to be disabled, or you would need to disable another unit in the other "core" (most probably) to keep symmetry. In any case you lose more than with the single "core" approach, and you don't gain anything because the chip is the same size. In the case of CPUs, multi-core does make sense because you can't cut down/disable parts of them except the cache; if one unit is broken you have to throw away the whole core, and if one of them is "defective" (it's slower, only half the cache works...) you just cut them apart and sell them separately. With CPUs it's a matter of "does it work, and if so, at what speed?"; with GPUs it's "how many units work?".

I was thinking the same thing. Given the size of the present ATi chips, they could be combined & still retain a reasonably sized die, but the latency between the 'main' cache & 'sub' cache would be so high that they might as well leave them apart. It would be fine if they increase the bus, but then you would end up with a power-hungry monster. If the R800 is a multi-core then so be it, but we're gonna need a power plant for the thing if it's not going to be just another experiment like the R600.
 

imperialreign

New Member
Joined
Jul 19, 2007
Messages
7,043 (1.11/day)
Location
Sector ZZ₉ Plural Z Alpha
System Name УльтраФиолет
Processor Intel Kentsfield Q9650 @ 3.8GHz (4.2GHz highest achieved)
Motherboard ASUS P5E3 Deluxe/WiFi; X38 NSB, ICH9R SSB
Cooling Delta V3 block, XPSC res, 120x3 rad, ST 1/2" pump - 10 fans, SYSTRIN HDD cooler, Antec HDD cooler
Memory Dual channel 8GB OCZ Platinum DDR3 @ 1800MHz @ 7-7-7-20 1T
Video Card(s) Quadfire: (2) Sapphire HD5970
Storage (2) WD VelociRaptor 300GB SATA-300; WD 320GB SATA-300; WD 200GB UATA + WD 160GB UATA
Display(s) Samsung Syncmaster T240 24" (16:10)
Case Cooler Master Stacker 830
Audio Device(s) Creative X-Fi Titanium Fatal1ty Pro PCI-E x1
Power Supply Kingwin Mach1 1200W modular
Software Windows XP Home SP3; Vista Ultimate x64 SP2
Benchmark Scores 3m06: 20270 here: http://hwbot.org/user.do?userId=12313
I completely disagree with the single-die multi-core GPU thing. The whole idea of using multiple GPUs is to reduce die size. Doing a dual-core GPU on a single die is exactly the same as doing a double-sized chip, but even worse IMO. Take into account that GPUs are already multi-processor devices, in which the cores are tied to a crossbar bus for communication. Look at the GT200 diagram:

http://techreport.com/articles.x/14934

In the image the connections are missing, but it suffices to say they are all connected to the "same bus". A dual-core GPU would be exactly the same, because GPUs are already a bunch of parallel processors, but with two separate buses, so it'd need an external one and that would only add latency. What's the point of doing that? Yields are not going to be higher, as in both cases you have the same number of processors and the same silicon that needs to go (and work) together. In a single "core" GPU, if one unit fails you can just disable it and sell it as a lower model (8800 GT, G80 GTS, HD2900GT, GTX 260...), but in a dual "core" GPU the whole core would need to be disabled, or you would need to disable another unit in the other "core" (most probably) to keep symmetry. In any case you lose more than with the single "core" approach, and you don't gain anything because the chip is the same size. In the case of CPUs, multi-core does make sense because you can't cut down/disable parts of them except the cache; if one unit is broken you have to throw away the whole core, and if one of them is "defective" (it's slower, only half the cache works...) you just cut them apart and sell them separately. With CPUs it's a matter of "does it work, and if so, at what speed?"; with GPUs it's "how many units work?".


I see your point, and I partly agree as well . . . but that's looking at things with current technology and current fabrication means.

If AMD/ATI can develop a more sound fabrication process, or reduce the number of dead cores, it would make it viable, IMO.

I'm just keeping in mind that over the last 6+ months, AMD has been making contact with some reputable companies who've helped them before, and has also taken on quite a few new personnel who are very well respected and amongst the best in their fields.

The Fusion itself is, IMO, a good starting point, and proof to themselves that AMD can do it. Integrating a GPU core like that wouldn't be resource-friendly if their fabrication process left them with a lot of dead fish in the barrel - they would be losing money just in designing such an architecture if fabrication were to shoot them in the foot.

Perhaps they've come up with a way to stitch two cores together such that if one is dead from fabrication it doesn't cripple the chip, and the GPU can be slapped on a lower-end card and shipped. Can't really be sure right now, as AMD keeps throwing out one surprise after another . . . perhaps this will be the one they hit the home run with?
 
Joined
May 19, 2007
Messages
7,662 (1.20/day)
Location
c:\programs\kitteh.exe
Processor C2Q6600 @ 1.6 GHz
Motherboard Anus PQ5
Cooling ACFPro
Memory GEiL2 x 1 GB PC2 6400
Video Card(s) MSi 4830 (RIP)
Storage Seagate Barracuda 7200.10 320 GB Perpendicular Recording
Display(s) Dell 17'
Case El Cheepo
Audio Device(s) 7.1 Onboard
Power Supply Corsair TX750
Software MCE2K5
you guys are making the green giant seem like a green dwarf.
 

DarkMatter

New Member
Joined
Oct 5, 2007
Messages
1,714 (0.27/day)
Processor Intel C2Q Q6600 @ Stock (for now)
Motherboard Asus P5Q-E
Cooling Proc: Scythe Mine, Graphics: Zalman VF900 Cu
Memory 4 GB (2x2GB) DDR2 Corsair Dominator 1066Mhz 5-5-5-15
Video Card(s) GigaByte 8800GT Stock Clocks: 700Mhz Core, 1700 Shader, 1940 Memory
Storage 74 GB WD Raptor 10000rpm, 2x250 GB Seagate Raid 0
Display(s) HP p1130, 21" Trinitron
Case Antec p180
Audio Device(s) Creative X-Fi PLatinum
Power Supply 700W FSP Group 85% Efficiency
Software Windows XP
I was thinking the same thing. Given the size of the present ATi chips, they could be combined & still retain a reasonably sized die, but the latency between the 'main' cache & 'sub' cache would be so high that they might as well leave them apart. It would be fine if they increase the bus, but then you would end up with a power-hungry monster. If the R800 is a multi-core then so be it, but we're gonna need a power plant for the thing if it's not going to be just another experiment like the R600.

Well, we don't know the transistor count of RV770 with certainty, but it's above 800 million, so a dual core would be more than 1,600 million. That's more than the GT200, but I don't think it would be a big problem.

On the other hand, the problem with GT200 is not transistor count but die size, the fact that they have done it in 65 nm. In 55 nm the chip would probably be around 400 cm2, which is not that high really.

Another problem when we compare GT200's size with the performance it delivers is that they have added those 16 KB caches in the shader processors, which are not needed for any released game or benchmark. Applications will need to be programmed to use them. As it stands now, GT200 has almost 0.5 MB of cache with zero benefit. 4 MB of cache in Core 2 is pretty much half the die size; in GT200 it's a lot less than that, but still a lot from a die size/gaming performance point of view. And to that you have to add the L1 caches, which are probably double the size of those on G92, with zero benefit again. It's here and in the FP64 shaders that Nvidia has spent a lot of silicon future-proofing the architecture, but we don't see the fruits yet.

I think that on GPUs, bigger single-core chips are the key to performance and multi-GPU is the key to profitability once a certain point in the fab process is reached. The best result is probably something in the middle: I mean not going with more than two GPUs, and keeping the chips as big as the fab process allows. As I explained above, I don't think multi-core GPUs have any advantage over bigger chips.
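To put a toy number on that symmetry point, here's a minimal Monte Carlo sketch with made-up figures (10 shader clusters per die, an assumed 5% chance each cluster is defective, nothing like real yields). A monolithic GPU only fuses off what is broken; a symmetric "dual core" has to cut both cores down to match the weaker one, throwing working silicon away:

```python
# Toy model: how much working silicon each approach gets to ship.
# All numbers are assumptions for illustration, not real yield data.
import random

random.seed(1)
P_DEFECT, HALF, TRIALS = 0.05, 5, 100_000   # 2 * HALF clusters per die

shipped_mono = shipped_dual = 0
for _ in range(TRIALS):
    bad = [random.random() < P_DEFECT for _ in range(2 * HALF)]
    good_a = HALF - sum(bad[:HALF])
    good_b = HALF - sum(bad[HALF:])
    shipped_mono += good_a + good_b          # disable only what is broken
    shipped_dual += 2 * min(good_a, good_b)  # keep the two "cores" symmetric

print(f"avg working clusters shipped, monolithic: {shipped_mono / TRIALS:.2f}")
print(f"avg working clusters shipped, dual core:  {shipped_dual / TRIALS:.2f}")
```

Under these assumed numbers the monolithic die ships a bit more usable silicon per wafer, which is the whole point about symmetry costing you good units.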

I see your point, and I partly agree as well . . . but that's looking at things with current technology and current fabrication means.

If AMD/ATI can develop a more sound fabrication process, or reduce the number of dead cores, it would make it viable, IMO.

I'm just keeping in mind that over the last 6+ months, AMD has been making contact with some reputable companies who've helped them before, and has also taken on quite a few new personnel who are very well respected and amongst the best in their fields.

The Fusion itself is, IMO, a good starting point, and proof to themselves that AMD can do it. Integrating a GPU core like that wouldn't be resource-friendly if their fabrication process left them with a lot of dead fish in the barrel - they would be losing money just in designing such an architecture if fabrication were to shoot them in the foot.

Perhaps they've come up with a way to stitch two cores together such that if one is dead from fabrication it doesn't cripple the chip, and the GPU can be slapped on a lower-end card and shipped. Can't really be sure right now, as AMD keeps throwing out one surprise after another . . . perhaps this will be the one they hit the home run with?

That would open the door to both bigger chips and, as you say, multi-core chips. Again, I don't see any advantage in multi-core GPUs.


And what's the difference between that and what they do today? Well, what Nvidia does today, at least; ATI isn't doing that with RV670 and RV770, though they did in the past.
 
Joined
Dec 28, 2006
Messages
4,378 (0.67/day)
Location
Hurst, Texas
System Name The86
Processor Ryzen 5 3600
Motherboard ASROCKS B450 Steel Legend
Cooling AMD Stealth
Memory 2x8gb DDR4 3200 Corsair
Video Card(s) EVGA RTX 3060 Ti
Storage WD Black 512gb, WD Blue 1TB
Display(s) AOC 24in
Case Raidmax Alpha Prime
Power Supply 700W Thermaltake Smart
Mouse Logitech Mx510
Keyboard Razer BlackWidow 2012
Software Windows 10 Professional
nvidia has many workarounds, like the PCI freq. trick that they used, and their architecture has been the same since like GeForce 4... and on top of that, they get hurt here because, in order to keep up with AMD's R700 core, they basically slapped 2 G92 cores into a new core and released it. It's like Intel putting two dual-core dies into one package to make a quad core in order to keep up with AMD's Phenom "true" quad core, you know?

How do you get that? It has 40 ROPs and the G92 has 16; even that discredits the idea of a dual G92 under there.
 
Joined
Sep 26, 2006
Messages
6,959 (1.05/day)
Location
Australia, Sydney
If nVidia can do a fab shrink to reduce die size and to reduce power they have a clear winner.

THEREFORE, AMD are creating this "nVidia is a dinosaur" hype, because, truth be told, AMD cannot compete with nVidia unless they go x2. And x2? Oh, that's the same total chip size as GTX200 (+/- 15%). But with a fab shrink (to the same fab scale as AMD), nVidia would be smaller. Really? Can that really be true? Smaller and same performance = nVidia architecture must be better.

So long as nVidia can manufacture with high yield, they are AOK.

Even the CEO of nvidia admitted that die shrinking will do shit all in terms of cooling; the effectiveness of a die shrink from 65nm to 45nm is not that big for that many transistors.

AMD creating this "nvidia is a dinosaur" hype is viable.

If you have that much heat output on one single core, the cooling would be expensive to manufacture. With 200W on one core, the cooling system would have to transfer the heat away ASAP, while 2x100W cores would fare better, with the heat output being spread out.
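Roughly speaking, a back-of-the-envelope sketch with the simple T_junction = T_ambient + P × theta model shows the shape of that argument; every figure below is an assumption for illustration, not a measured thermal resistance:

```python
# Two smaller hotspots, each with its own slice of cooler, can sit at lower
# temperatures than one concentrated 200 W hotspot. All values are assumed.
t_ambient = 45.0                      # assumed degrees C inside the case

theta_single = 0.25                   # assumed C/W for one big cooler on one die
t_single = t_ambient + 200.0 * theta_single

theta_each = 0.35                     # assumed C/W per die when the fin area is split
t_dual = t_ambient + 100.0 * theta_each

print(f"one 200 W die: ~{t_single:.0f} C, each of two 100 W dies: ~{t_dual:.0f} C")
```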

Realise that a larger core means a far more delicate card, with the chip itself requiring more BGA solder balls; it means the card cannot take much stress before the BGA solder joints falter.

AMD is saying that, if they keep doing what they are doing now, they will not need to completely redesign an architecture. It doesn't matter if they barely spend anything on R&D; in the end the consumer benefits from lower prices, and we are the consumer, remember.

AMD can decide to stack two or even three cores, provided they make the whole card function as one GPU (instead of the HD3870X2-style 2 cards at the software/hardware level), if the performance and price are good.

My thoughts:
GPUs will eventually end up kinda like dual/quad-core CPUs. You'll have 2 on one die. When? Who knows, but it seems that AMD is kinda working in that direction. However, people complained when the 7950GX2 came out because "it took 2 cards to beat ATI's 1 (1950XTX)". They did it again, but to a lesser degree, for the 3870X2, and it'll become more accepted as it goes on, especially since AMD has said "no more mega GPUs". Part of that is they don't wanna f up with another 2900 and they don't quite have the cash, but they are also thinking $$$. Sell more high-performing midrange parts. That's where all the money is made. And we all know AMD needs cash.

Just correcting you: 2 on one die is what we have atm anyway. GPUs are effectively a collection of processors on one die. AMD is trying not to put dies together, as they know that die shrinks under 65~45nm do not really help in terms of heat output, and are therefore splitting the heat output. As I mentioned before, a larger die will mean more R&D effort, and will be more expensive to manufacture.
 

btarunr

Editor & Senior Moderator
Staff member
Joined
Oct 9, 2007
Messages
47,244 (7.55/day)
Location
Hyderabad, India
System Name RBMK-1000
Processor AMD Ryzen 7 5700G
Motherboard ASUS ROG Strix B450-E Gaming
Cooling DeepCool Gammax L240 V2
Memory 2x 8GB G.Skill Sniper X
Video Card(s) Palit GeForce RTX 2080 SUPER GameRock
Storage Western Digital Black NVMe 512GB
Display(s) BenQ 1440p 60 Hz 27-inch
Case Corsair Carbide 100R
Audio Device(s) ASUS SupremeFX S1220A
Power Supply Cooler Master MWE Gold 650W
Mouse ASUS ROG Strix Impact
Keyboard Gamdias Hermes E2
Software Windows 11 Pro
At least NVIDIA came this far. ATI hit its limit way back with the R580+. The X1950 XTX was the last "mega chip" ATI made. Of course, the R600 was their next megachip but ended up being a cheeseburger.
 
Joined
Sep 26, 2006
Messages
6,959 (1.05/day)
Location
Australia, Sydney
At least NVIDIA came this far. ATI hit its limit way back with the R580+. The X1950 XTX was the last "mega chip" ATI made. Of course, the R600 was their next megachip but ended up being a cheeseburger.

Instead of the word cheeseburger I think you should use something that tastes vile. Cheeseburgers are successful.
 

btarunr

Editor & Senior Moderator
Staff member
Joined
Oct 9, 2007
Messages
47,244 (7.55/day)
Location
Hyderabad, India
System Name RBMK-1000
Processor AMD Ryzen 7 5700G
Motherboard ASUS ROG Strix B450-E Gaming
Cooling DeepCool Gammax L240 V2
Memory 2x 8GB G.Skill Sniper X
Video Card(s) Palit GeForce RTX 2080 SUPER GameRock
Storage Western Digital Black NVMe 512GB
Display(s) BenQ 1440p 60 Hz 27-inch
Case Corsair Carbide 100R
Audio Device(s) ASUS SupremeFX S1220A
Power Supply Cooler Master MWE Gold 650W
Mouse ASUS ROG Strix Impact
Keyboard Gamdias Hermes E2
Software Windows 11 Pro
Instead of the word cheeseburger I think you should use something that tastes vile. Cheeseburgers are successful.

By cheeseburger I was highlighting 'fattening', 'not as nutritious as it should be', 'unhealthy diet'. Popularity isn't indicative of a better product. ATI fans will continue to buy just about anything they put up. Though I'm now beginning to admire the HD3870 X2.
 

Nyte

New Member
Joined
Jan 11, 2005
Messages
181 (0.02/day)
Location
Toronto ON
Processor i7 x980
Motherboard Asus SuperComputer
Memory 3x 2GB Triple Channel
Video Card(s) 2x Tahiti
Storage 2x OCZ SSD
Display(s) 23 inch
Power Supply 1 kW
Software Win 7 Ultimate
Benchmark Scores Very high!
One still has to wonder though if NVIDIA has already thought ahead and designed a next-gen GPU with a next-gen architecture... just waiting for the right moment to unleash it.
 
Joined
Jan 11, 2005
Messages
1,491 (0.21/day)
Location
66 feet from the ground
System Name 2nd AMD puppy
Processor FX-8350 vishera
Motherboard Gigabyte GA-970A-UD3
Cooling Cooler Master Hyper TX2
Memory 16 Gb DDR3:8GB Kingston HyperX Beast + 8Gb G.Skill Sniper(by courtesy of tabascosauz &TPU)
Video Card(s) Sapphire RX 580 Nitro+;1450/2000 Mhz
Storage SSD :840 pro 128 Gb;Iridium pro 240Gb ; HDD 2xWD-1Tb
Display(s) Benq XL2730Z 144 Hz freesync
Case NZXT 820 PHANTOM
Audio Device(s) Audigy SE with Logitech Z-5500
Power Supply Riotoro Enigma G2 850W
Mouse Razer copperhead / Gamdias zeus (by courtesy of sneekypeet & TPU)
Keyboard MS Sidewinder x4
Software win10 64bit ltsc
Benchmark Scores irrelevant for me
On the other hand, the problem with GT200 is not transistor count but die size, the fact that they have done it in 65 nm. In 55 nm the chip would probably be around 400 cm2, which is not that high really.



The die size of GT200 is 576 mm² on 65 nm, so in 55 nm it's 40,000 mm² (400 cm²)? :slap:
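For reference, a quick back-of-the-envelope shrink estimate (a sketch only, assuming area scales with the square of the feature size, which real shrinks only approximate) lands around 410 mm², i.e. roughly 4 sq. cm, so the earlier "400" figure reads right if taken as mm²:

```python
# Straight optical-shrink estimate for the numbers being argued about here.
# Assumes die area scales with the square of the process node, which is an
# idealisation; GT200 at 65 nm is ~576 mm^2.
area_65nm_mm2 = 576.0
area_55nm_mm2 = area_65nm_mm2 * (55.0 / 65.0) ** 2

print(f"estimated 55 nm GT200: ~{area_55nm_mm2:.0f} mm^2 (~{area_55nm_mm2 / 100:.1f} cm^2)")
```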
 
Joined
Jun 18, 2008
Messages
356 (0.06/day)
Processor AMD Ryzen 3 1200 @ 3.7 GHz
Motherboard MSI B350M Gaming PRO
Cooling 2x Dynamic X2 GP-12
Memory 2x4GB GeIL EVO POTENZA AMD PC4-17000
Video Card(s) GIGABYTE Radeon RX 560 2GB
Storage Samsung SSD 840 Series (250GB)
Display(s) Asus VP239H-P (23")
Case Fractal Design Define Mini C TG
Audio Device(s) ASUS Xonar U3
Power Supply CORSAIR CX450
Mouse Logitech G500
Keyboard Corsair Vengeance K65
Software Windows 10 Pro (x64)
This titanic GPU may not fare that well now, but it falls right into the category of future-proofing. It, like the G80 GTX/Ultra, will stand the test of time, especially when the 55nm GT200b comes out with better yields/higher clocks.

Not saying anything but umm... From my understanding anyway, die shrinks generally cause worse yields and a whole mess of manufacturing issues in the short run, depending of course upon the core being shrunk. Again, not an engineer or anything, but shrinking the GT200, being the behemoth that it is, will not likely be an easy task. Hell, if it were easy we'd have 45nm Phenoms by now, and Intel wouldn't bother with their 65nm line either now that they've already got the tech pretty well down. Correct me if I'm wrong...
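For a feel of why a behemoth die is touchy on a process, here's the usual first-order yield model (yield of defect-free dice ≈ exp(−area × defect density)); the defect density is an assumed, illustrative number, and salvaged cut-down parts rescue many of the "failed" dice, so treat this as a sketch only:

```python
# First-order yield vs die area. The defect density is assumed; the die
# areas are the ballpark figures discussed in this thread (~576 mm^2 for
# GT200) and the commonly quoted ~260 mm^2 for RV770.
from math import exp

defects_per_mm2 = 0.4 / 100   # assumed: 0.4 defects per square centimetre

for name, area_mm2 in [("RV770-sized die (~260 mm^2)", 260.0),
                       ("GT200-sized die (~576 mm^2)", 576.0)]:
    yield_frac = exp(-area_mm2 * defects_per_mm2)
    print(f"{name}: ~{yield_frac * 100:.0f}% defect-free")
```

A fresh node typically starts with a higher defect density than a mature one, which is exactly the short-run pain being described above.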
 

DarkMatter

New Member
Joined
Oct 5, 2007
Messages
1,714 (0.27/day)
Processor Intel C2Q Q6600 @ Stock (for now)
Motherboard Asus P5Q-E
Cooling Proc: Scythe Mine, Graphics: Zalman VF900 Cu
Memory 4 GB (2x2GB) DDR2 Corsair Dominator 1066Mhz 5-5-5-15
Video Card(s) GigaByte 8800GT Stock Clocks: 700Mhz Core, 1700 Shader, 1940 Memory
Storage 74 GB WD Raptor 10000rpm, 2x250 GB Seagate Raid 0
Display(s) HP p1130, 21" Trinitron
Case Antec p180
Audio Device(s) Creative X-Fi PLatinum
Power Supply 700W FSP Group 85% Efficiency
Software Windows XP

btarunr

Editor & Senior Moderator
Staff member
Joined
Oct 9, 2007
Messages
47,244 (7.55/day)
Location
Hyderabad, India
System Name RBMK-1000
Processor AMD Ryzen 7 5700G
Motherboard ASUS ROG Strix B450-E Gaming
Cooling DeepCool Gammax L240 V2
Memory 2x 8GB G.Skill Sniper X
Video Card(s) Palit GeForce RTX 2080 SUPER GameRock
Storage Western Digital Black NVMe 512GB
Display(s) BenQ 1440p 60 Hz 27-inch
Case Corsair Carbide 100R
Audio Device(s) ASUS SupremeFX S1220A
Power Supply Cooler Master MWE Gold 650W
Mouse ASUS ROG Strix Impact
Keyboard Gamdias Hermes E2
Software Windows 11 Pro

The news is as stale as...
...that.
 
Joined
Dec 28, 2006
Messages
4,378 (0.67/day)
Location
Hurst, Texas
System Name The86
Processor Ryzen 5 3600
Motherboard ASROCKS B450 Steel Legend
Cooling AMD Stealth
Memory 2x8gb DDR4 3200 Corsair
Video Card(s) EVGA RTX 3060 Ti
Storage WD Black 512gb, WD Blue 1TB
Display(s) AOC 24in
Case Raidmax Alpha Prime
Power Supply 700W Thermaltake Smart
Mouse Logitech Mx510
Keyboard Razer BlackWidow 2012
Software Windows 10 Professional
Yea, but raw power means diddly; the R600 had twice the computational units yet lagged behind. I still await benchmarks.
 

Easy Rhino

Linux Advocate
Staff member
Joined
Nov 13, 2006
Messages
15,587 (2.36/day)
Location
Mid-Atlantic
System Name Desktop
Processor i5 13600KF
Motherboard AsRock B760M Steel Legend Wifi
Cooling Noctua NH-U9S
Memory 4x 16 Gb Gskill S5 DDR5 @6000
Video Card(s) Gigabyte Gaming OC 6750 XT 12GB
Storage WD_BLACK 4TB SN850x
Display(s) Gigabye M32U
Case Corsair Carbide 400C
Audio Device(s) On Board
Power Supply EVGA Supernova 650 P2
Mouse MX Master 3s
Keyboard Logitech G915 Wireless Clicky
Software The Matrix
I love AMD, but come on. Why would they go and say something like that? Nvidia has proven time and again that they can put out awesome cards and make a ton of money doing it. Meanwhile AMD's stock is in the toilet and they aren't doing anything special to keep up with Nvidia. Given the past 2 years' history between the 2 groups, who would you put your money on in this situation? The answer is Nvidia.
 
Joined
Jun 20, 2007
Messages
3,942 (0.62/day)
System Name Widow
Processor Ryzen 7600x
Motherboard AsRock B650 HDVM.2
Cooling CPU : Corsair Hydro XC7 }{ GPU: EK FC 1080 via Magicool 360 III PRO > Photon 170 (D5)
Memory 32GB Gskill Flare X5
Video Card(s) GTX 1080 TI
Storage Samsung 9series NVM 2TB and Rust
Display(s) Predator X34P/Tempest X270OC @ 120hz / LG W3000h
Case Fractal Define S [Antec Skeleton hanging in hall of fame]
Audio Device(s) Asus Xonar Xense with AKG K612 cans on Monacor SA-100
Power Supply Seasonic X-850
Mouse Razer Naga 2014
Software Windows 11 Pro
Benchmark Scores FFXIV ARR Benchmark 12,883 on i7 2600k 15,098 on AM5 7600x
Even if the statement is true it still falls in Nvidia's favor either way.

They have the resources to go 'smaller' if need be. ATi has less flexibility.
 
Joined
Oct 6, 2005
Messages
10,242 (1.46/day)
Location
Granite Bay, CA
System Name Big Devil
Processor Intel Core i5-2500K
Motherboard ECS P67H2-A2
Cooling XSPC Rasa | Black Ice GT Stealth 240 | XSPC X2O 750 | 2x ACF12PWM | PrimoChill White 7/16"
Memory 2x4GB Corsair Vengeance LP Arctic White 1600MHz CL9
Video Card(s) EVGA GTX 780 ACX SC
Storage Intel 520 Series 180GB + WD 1TB Blue
Display(s) HP ZR30W 30" 2650x1600 IPS
Case Corsair 600T SE
Audio Device(s) Xonar Essence STX | Sennheisser PC350 "Hero" Modded | Corsair SP2500
Power Supply ABS SL 1050W (Enermax Revolution Rebadge)
Software Windows 8.1 x64 Pro w/ Media Center
Benchmark Scores Ducky Year of the Snake w/ Cherry MX Browns & Year of the Tiger PBT Keycaps | Razer Deathadder Black
The news is as stale as...
...that.

I just woke up my entire family because I fell out of my chair and knocked over my lamp at 3AM when I read that :laugh:
 
Joined
Jul 18, 2007
Messages
2,693 (0.42/day)
System Name panda
Processor 6700k
Motherboard sabertooth s
Cooling raystorm block<black ice stealth 240 rad<ek dcc 18w 140 xres
Memory 32gb ripjaw v
Video Card(s) 290x gamer<ntzx g10<antec 920
Storage 950 pro 250gb boot 850 evo pr0n
Display(s) QX2710LED@110hz lg 27ud68p
Case 540 Air
Audio Device(s) nope
Power Supply 750w superflower
Mouse g502
Keyboard shine 3 with grey, black and red caps
Software win 10
Benchmark Scores http://hwbot.org/user/marsey99/
ATI claims nvidia is using dinosaur tech, love it.

It's the most powerful single GPU ever; of course ATI will try and dull the shine on it.

I recall all the ATI fanbois crying foul when NV did the 7950GX2, but now it's cool to do 2 GPUs on 1 card to compete?

Wait till the GTX 280 gets a die shrink and they slap 2 on 1 card; can you say 4870X4 needed to compete?
 

btarunr

Editor & Senior Moderator
Staff member
Joined
Oct 9, 2007
Messages
47,244 (7.55/day)
Location
Hyderabad, India
System Name RBMK-1000
Processor AMD Ryzen 7 5700G
Motherboard ASUS ROG Strix B450-E Gaming
Cooling DeepCool Gammax L240 V2
Memory 2x 8GB G.Skill Sniper X
Video Card(s) Palit GeForce RTX 2080 SUPER GameRock
Storage Western Digital Black NVMe 512GB
Display(s) BenQ 1440p 60 Hz 27-inch
Case Corsair Carbide 100R
Audio Device(s) ASUS SupremeFX S1220A
Power Supply Cooler Master MWE Gold 650W
Mouse ASUS ROG Strix Impact
Keyboard Gamdias Hermes E2
Software Windows 11 Pro
ATI claims nvidia is using dinosaur tech, love it.

It's the most powerful single GPU ever; of course ATI will try and dull the shine on it.

I recall all the ATI fanbois crying foul when NV did the 7950GX2, but now it's cool to do 2 GPUs on 1 card to compete?

Wait till the GTX 280 gets a die shrink and they slap 2 on 1 card; can you say 4870X4 needed to compete?

Even if you do shrink the G200 to 55nm (and get a 4 sq. cm die), its power and thermal properties won't allow an X2. Too much power consumption (peak) compared to a G92 (128 SP, 600 MHz), which allowed it. Watch how the GTX 280 uses a 6 + 8 pin input. How far do you think a die shrink would go toward reducing it? Not to forget, there's something funny as to why NV isn't adopting newer memory standards (that are touted to be energy efficient). (1st guess: stick with GDDR3 to cut mfg costs, since it takes $120 to make the GPU alone.) Ceiling Cat knows what... but I don't understand what "meow" actually means... it means a lot of things :(
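A rough tally of the power budget shows how tight an X2 would be. The GTX 280's ~236 W board power and the PCIe connector limits (75 W slot, 75 W 6-pin, 150 W 8-pin) are published figures; how much a 55 nm shrink would actually save is an assumption here:

```python
# Back-of-the-envelope check: does 2x a shrunk GT200 fit under the usual
# slot + 6-pin + 8-pin ceiling? The 30% shrink saving is an assumed best case.
BOARD_LIMIT_W = 75 + 75 + 150        # slot + 6-pin + 8-pin
GTX280_TDP_W  = 236
SHRINK_SAVING = 0.30                 # assumed best case for 65 nm -> 55 nm

x2_estimate = 2 * GTX280_TDP_W * (1 - SHRINK_SAVING)
print(f"two shrunk GT200s: ~{x2_estimate:.0f} W against a {BOARD_LIMIT_W} W budget")
```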
 
Joined
Sep 26, 2006
Messages
6,959 (1.05/day)
Location
Australia, Sydney
In the end it DOES NOT MATTER how AMD achieves their performance.

The 7950GX2 is an invalid comparison, as it could not function on every system, due to it being seen at the driver level as two cards; an SLI board was needed. You can't compare the 4870X2 to a 7950GX2; it's like comparing apples and oranges. To the system the 4870X2 is only ONE card, not two, and CF is not enabled (therefore the performance problems with multi-GPU go out the window). Moreover, the way the card uses memory is just the same as the C2Ds: two cores, shared L2.
 

Megasty

New Member
Joined
Mar 18, 2008
Messages
1,263 (0.21/day)
Location
The Kingdom of Au
Processor i7 920 @ 3.6 GHz (4.0 when gaming)
Motherboard Asus Rampage II Extreme - Yeah I Bought It...
Cooling Swiftech.
Memory 12 GB Crucial Ballistix Tracer - I Love Red
Video Card(s) ASUS EAH4870X2 - That Fan Is...!?
Storage 4 WD 1.5 TB
Display(s) 24" Sceptre
Case TT Xaser VI - Fugly, Red, & Huge...
Audio Device(s) The ASUS Thingy
Power Supply Ultra X3 1000W
Software Vista Ultimate SP1 64bit
The sheer size of the G200 won't allow for a GX2 or whatever. The heat that 2 of those things produce would burn each other out. Why in the hell would NV put 2 of them on a card when it costs an arm & a leg just to make one? The PP ratio for this card is BS too, when $400 worth of cards, whether it be the 9800GX2 or 2 4850s, are not only in the same league as the beast but allegedly beat it. The G200b won't be any different either. NV may be putting all their cash into this giant chip ATM, but that doesn't mean they're going to do anything stupid with it.

If the 4870X2 & the 4850X2 are both faster than the GTX 280 & cost a whole lot less, then I don't see what the problem is, except for people crying about the 2-GPU mess. As long as it's fast & DOESN'T cost a bagillion bucks, I'm game.
 

DarkMatter

New Member
Joined
Oct 5, 2007
Messages
1,714 (0.27/day)
Processor Intel C2Q Q6600 @ Stock (for now)
Motherboard Asus P5Q-E
Cooling Proc: Scythe Mine, Graphics: Zalman VF900 Cu
Memory 4 GB (2x2GB) DDR2 Corsair Dominator 1066Mhz 5-5-5-15
Video Card(s) GigaByte 8800GT Stock Clocks: 700Mhz Core, 1700 Shader, 1940 Memory
Storage 74 GB WD Raptor 10000rpm, 2x250 GB Seagate Raid 0
Display(s) HP p1130, 21" Trinitron
Case Antec p180
Audio Device(s) Creative X-Fi PLatinum
Power Supply 700W FSP Group 85% Efficiency
Software Windows XP
I would like to know on which facts you guys are basing your claims that a die shrink won't do anything to help lower heat output and power consumption. It has always helped A LOT. It is helping ATI and will surely help Nvidia. Thinking that the lower power consumption of RV670 and RV770 is based on architecture enhancements alone is naive. I'm talking about peak power, compared to where R600 stood against the competition; idle power WAS indeed improved, and so it has been on GT200.
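For what it's worth, the standard dynamic-power relation shows why a shrink has always helped; this is a sketch with assumed capacitance and voltage scaling factors, not measured figures for any real 65 nm to 55 nm part:

```python
# Dynamic power scales roughly with C * V^2 * f. The relative capacitance
# and voltage numbers below are assumptions chosen to show the shape of the
# effect of a shrink at the same clock speed.
def dynamic_power(cap, volts, freq_hz):
    return cap * volts ** 2 * freq_hz

p_65nm = dynamic_power(cap=1.00, volts=1.18, freq_hz=600e6)
p_55nm = dynamic_power(cap=0.85, volts=1.10, freq_hz=600e6)  # assumed shrink gains

print(f"relative dynamic power after the shrink: {p_55nm / p_65nm:.2f}x")
```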
 