newtekie1
I had a G80 8800GTS, which I promptly returned to CompUSA and replaced with an 8800GT. The 8800GT, despite having 112 shaders to the GTX's 128, regularly outperformed the 8800GTX thanks to its much higher clock speeds.
That was the crux of why the 8800GT was so amazing: for $300 it performed the same as or better than the $599-$649 8800GTX.
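For a rough sense of that shaders-versus-clocks tradeoff, here's a back-of-the-envelope sketch using the commonly cited reference shader clocks (1.50 GHz for the 8800GT, 1.35 GHz for the 8800GTX); those figures are assumptions for illustration, and real performance also hinged on memory bandwidth, ROPs, and drivers:

```python
# Crude shader-throughput proxy: unit count x shader clock.
# Clocks are the commonly cited reference specs (an assumption here);
# this ignores memory bandwidth, ROPs, and driver differences.

cards = {
    "8800GT":  {"shaders": 112, "shader_ghz": 1.50},
    "8800GTX": {"shaders": 128, "shader_ghz": 1.35},
}

for name, spec in cards.items():
    throughput = spec["shaders"] * spec["shader_ghz"]
    print(f"{name}: {throughput:.1f} shader-GHz")

# 8800GT:  168.0 shader-GHz
# 8800GTX: 172.8 shader-GHz
```

On this crude metric the two cards land within about 3% of each other, which squares with the GT trading blows with the GTX at half the price.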
Yeah, I'm pretty sure that's why they significantly upped the clock speeds on the 9800GTX (not to mention actually using faster RAM).
It was the 8800 GTS 512 that had too many rebrands (9800 GTX, 9800 GTX+, GTS 150 OEM, GTS 250), and you couldn't tell whether a given card had the original 65nm chip or the 55nm die shrink.
The + in the 9800GTX+ was there specifically to designate the G92b 55nm version. The GTS 150 was always G92, and the GTS 250 was always G92b. So, yes, you could always tell whether they had the 65nm or the 55nm chip.