# New love for old cards - [GPU restoration]



## Dinnercore (Aug 22, 2018)

Hello dear people out there!

I´m Dinnercore, on this board for a few days now and enjoying myself around here. I thought of opening this thread to celebrate my newfound love for 'older' GPUs.
They once were the latest tech and stayed in use for a long time, decades in some cases. Now they land on my doorstep, beaten, dusty, still trying with all they have left in them, but in the tech world time moves fast. Consuming too much power while not being suitable for any modern game, they desperately look for a purpose, and I want to give them one in my machines.

But before they can live in a happy retirement under my watch they need some serious attention. I put a bit more time and effort into them than just spraying air and throwing new paste on. I want to restore them as best as I can. I find the work very relaxing, and I can relive all the memories that come up with the dream hardware from my childhood.
On top of that, I want to try different aftermarket coolers, watercooling etc., multi-GPU setups and some OC adventures.

What´s in it for you? A closer look at a lot of PCBs and coolers from different eras, and before/after comparisons if you´re into that. Every now and then I plan to show them running in a system (not necessarily a perfectly period-correct system for now, I lack the time and space for that much hardware).
The rough time period you can expect is anything from the late 90s up to 2010.

NOTE: If you like any of the photos I took and want a high-res version, just ask and I´ll see where I can upload it for you. I can´t upload all photos in their original 4K resolution because of the file sizes.
Same goes if you want to know or see a specific detail of one of my cards.

The following table contains my collection and results so far:
Please note that temps are given as delta T over ambient due to fluctuations in ambient temperature. Power draw is measured by my PSU's 'power-in' reading.



| GPU | Cooler Type | Idle before | Load test before | Idle after | Load test after | Temp improvement | Power draw Idle | Power draw Load | Stock Core | Stock Memory |
|---|---|---|---|---|---|---|---|---|---|---|
| GeForce GTX 295 Zotac 1792MB sPCB | Stock Cooler | 20 °C | n/a | 17 °C | 48 °C | -3 °C | 130 W | 400 W | 576 MHz | 1008 MHz |
| GeForce 9800 GX2 Leadtek 1024MB | Stock Cooler | 39 °C | OTP | 33 °C | 70 °C | -6 °C | 175 W | 370 W | 602 MHz | 999 MHz |
| GeForce 9800 GTX+ XFX Black Edition 512MB | Arctic Accelero Xtreme 9800 | 12.9 °C | n/a | 11 °C | 36 °C | -1.9 °C | 125 W | 275 W | 785 MHz | 1150 MHz |
| GeForce 9800 GTX+ EVGA 512MB | Stock Cooler | 12 °C | 55 °C | 11.7 °C | 51.7 °C | -3.3 °C L / -0.3 °C i | 98 W | 255 W | 738 MHz | 1100 MHz |
| GeForce 7900 GTO MSI 512MB | Stock Cooler | 15 °C | 36 °C | 16 °C | 43 °C | +7 °C L / +1 °C i | 98 W | 176 W | 650 MHz | 660 MHz |
| ATi Radeon HD4870 512MB | Stock Cooler | 46 °C | 56 °C | 46 °C | 56 °C | 0 °C L / i | 161 W | 271 W | 750 MHz | 900 MHz |
| ATi Radeon HD4850 X2 1024MB | 2x Zalman Quiet VGA Cooler | 13 °C | 28 °C | tbd | tbd | tbd | 172 W | 294 W | 625 MHz | 993 MHz |
| ATi Radeon X1950 XTX 512MB | Stock Cooler | 41.4 °C | 61 °C | 39.9 °C | 61 °C | -1.4 °C i | 117 W | 185 W | 648 MHz | 999 MHz |
| ATi All-In-Wonder X1900 256MB | Stock Cooler | 31.3 °C | 59.1 °C | 31.6 °C | 56.9 °C | -2.2 °C L / +0.3 °C i | 125 W | 175 W | 500 MHz | 477 MHz |

Additional information for this table: n/a usually means that I could not take this measurement, e.g. because the load test crashed or cooler parts were missing. The Furmark load test will always be 10 minutes on air coolers, since they don´t take that much time to saturate. Note that for some of the more delicate dual-GPU cards I will try to avoid Furmark in the future. For future watercooling I plan to use a Mo-Ra3 loop just for the GPU(s). I will run some tests to see when the water temp has settled and plan a duration accordingly.
Ambient for calculating delta T is measured right at the front intake fan of the case.
All my repasting is done with Arctic MX-4.
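For anyone who wants to reproduce the numbers: the delta-T figures are just the measured temperature minus the ambient reading at the intake, and the improvement column is the after-delta minus the before-delta. A minimal sketch of that bookkeeping (the function names are my own, not from any monitoring tool):

```python
def delta_t(gpu_temp_c: float, ambient_c: float) -> float:
    """Temperature above ambient, as reported in the table."""
    return gpu_temp_c - ambient_c

def improvement(delta_before_c: float, delta_after_c: float) -> float:
    """Change in delta T after the restoration (negative = cooler)."""
    return delta_after_c - delta_before_c

# Example with made-up readings: 57 °C under load at 21 °C ambient
# is a delta T of 36 °C; going from 20 °C to 17 °C over ambient at
# idle is a -3 °C improvement.
```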

Anyway, let´s start this off with my latest puppy:

An XFX 9800 GTX+ Black Edition.







It already came with an Arctic Accelero Xtreme, but the voltage regulator cooling must have fallen off. Only the glue residue is left. Oh well, I´ll see what I can do.






Interesting mod (not by me) on the cooler for the GPU-status LED cable.






This is nasty, but could be worse.






At least the thermal pad could tell me what memory this card has:







The card is tested and runs fine: it gets picked up by the driver and GPU-Z, and it can run a 3D load. However, I did not run an endurance test, and I´m very happy with that decision now that I see the VRM cooling is no longer present. It may be fine like this, but I don´t want to risk it.
My next post will show the result, once I get some thermal glue.


----------



## dorsetknob (Aug 22, 2018)

Keep them coming


----------



## phill (Aug 23, 2018)

Subbed


----------



## Mr.Scott (Aug 23, 2018)

Subbed also.


----------



## DeathtoGnomes (Aug 23, 2018)

Subbed too!


----------



## rtwjunkie (Aug 23, 2018)

I’m in! Almost like pr0n.


----------



## DeathtoGnomes (Aug 23, 2018)

rtwjunkie said:


> I’m in! Almost like pr0n.


just dont suggest a midget be added.


----------



## Urlyin (Aug 23, 2018)

Take some pics when they're all cleaned up afterwards... perhaps some GPU temps before and after?  Welcome to TPU, Dinnercore


----------



## Dinnercore (Aug 23, 2018)

Oh wow, thanks! I wouldn´t have thought so many would like to see these cards.
The suggestion with temps is nice, I´ll see how I work that into the whole thing. Maybe I could create a spreadsheet in the first post or something. But I´m worried about fluctuations in the data from my ambient temp. It´ll be all over the place in the next weeks: today it´s 26°C in here, and if I open a window tomorrow it´ll be more like 20°C.

Anyway, some progress on the XFX card:
I´m not happy with those stains from the paste that someone left there (e.g. in the middle of the four top-left memory modules). They barely come off even with alcohol.

EDIT: these noisy smartphone cam pictures don´t do it justice; from now on I´ll bust out the DSLR no matter what. (I already replaced the old pics in this post.)






This part is looking a little bit better now.






And do you prefer the dark side....






Or the light side?






And with my CSI super zoom I tried to catch some strange marking on the die:






There are two of those squares, and they seem perfectly aligned at the same height. Is this where the tool grabbed the die after it was cut from the wafer? I have no idea.

Full assembly will happen tomorrow when I get the glue for the VRM cooling. I have sourced some fitting heatsinks from broken plasma TV PCBs for free, but forgot that I will actually need something to hold them in place...
I´ve got 3M double-sided thermal tape, some thermal glue and some Xbox-branded thermal glue-pad thingy incoming; I want to test which of these works better.

And a teaser for what´s coming up next:


----------



## Urlyin (Aug 23, 2018)

Not sure if you're aware, but I found acetone to work better than alcohol... pics look fine.


----------



## qubit (Aug 23, 2018)

Hey, I look forward to your restorations, dinnercore. This is properly nerdy so the qubit approvez.


----------



## Dinnercore (Aug 24, 2018)

Time for the next update. As life is, I got some good news and some bad results with one of my thermal glue solutions.

And thanks @Urlyin for the hint on acetone, did not try that yet and will give it a go in the future.


Cleaned cooler.






Next up, my try at the VRM heatsinks. I tried the cheapest solution to stick them on first: a double-sided 'thermal conductive tape' from some Chinese no-name reseller. 10€ for a 25m roll is a bit expensive for just tape, but much too cheap for anything that can actually transfer heat. It is, however, really thin, so I hoped it would be OK. It isn´t. I interrupted the long-term load test after 5 minutes, because the backside of the PCB was burning to the touch while the heatsinks I could reach felt cold, or lukewarm at best.
Lesson learned. I now have to take it apart again and try the second option, from a company that does stuff for modding consoles. It´s a 100x100mm sheet, only 0.15mm thick and rated at ~3W/mK... The instructions suggest that the glue on it is a bit tricky to work with; I hope it will do. The two-component glue I ordered takes 2-3 weeks of shipping.






Fully assembled, and by the time I´m typing this it´s already on its back and losing screws again 











Temps!:
Ambient 24°C / idle temperature now 35°C. Before taking it apart, idle was at 38-39°C with an ambient temp of 25.6°C.
I did not win much there, but it´s the feeling that counts.






And right after, I cut the first endurance test short due to slight worry about the VRM:
(Looks like it would have settled at 49°C anyway.)
The full load test with Furmark has to wait until I get those damn heatsinks to do their job. I have no idea if the card can run without them; the reference cooler for this card seems to be lacking there, but every other 9800 I saw online had some.






Another thing I want to monitor for this one and all the following cards is power draw. They will, if possible, all run on the same machine with the same config. I will measure power draw at the wall in idle and under Furmark.
For now this one sits at: 124W @ idle // 234W @ the GPU-Z render thingy.
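Since a wall reading includes the whole system plus the PSU's conversion losses, one rough way to estimate what the GPU itself adds under load is to take the load-minus-idle difference and scale it by an assumed PSU efficiency. A sketch of that idea (the ~90% efficiency figure and the helper name are my own assumptions, not measured values):

```python
def gpu_load_delta_watts(idle_wall_w: float, load_wall_w: float,
                         psu_efficiency: float = 0.9) -> float:
    """Rough DC-side power the GPU adds under load.

    Subtracts the idle wall reading from the load reading, then scales
    by an assumed PSU efficiency to strip out conversion losses.
    """
    return (load_wall_w - idle_wall_w) * psu_efficiency

# With readings like 124 W idle and 234 W load, the GPU would be
# adding roughly 100 W on the DC side.
```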

Finally, look what the previous owner sent me without a word:






Aww yiss, DHE-stock cooler. I love those. They are my absolute favorite for looks. Especially the ones from around 2008-2010 with rendered artwork on them.






That´s it for today, I´m going back to testing these xbox thermal sticky pads.


----------



## theFOoL (Aug 25, 2018)

That Light-Side photo should be in a Comp-Por¦ Magazine


----------



## Mr.Scott (Aug 25, 2018)

For your heatsinks, no tape. 
Your favorite non conductive TIM in the middle, and a dot of superglue on opposite corners.
Works titties.


----------



## Dinnercore (Aug 25, 2018)

Final numbers for the XFX 9800 GTX+ Black Edition  (@21°C ambient):

10min Furmark: 57°C // 275W Power in
Idle after Furmark: 32°C // 125W Power in











And @Mr.Scott, that is another great idea. I was a bit paranoid about using superglue; I always have 100 what-ifs in mind (superglue and the stuff inside the coils fighting each other, glue too rigid for temperature cycles = cracking and falling off, etc.). But if it works, that´s great news. 
For now though, the adhesive worked.

A question for anyone who knows how to do things properly in a forum: what would be the best way to add a chart for temps and power draw to this thread? I can´t edit my first post, so no adding there. I could just make another post, but I would lose the ability to edit that one too. I could create one somewhere and link to it, but who would find that link between all the other posts? 

PS: If anyone wants to see a specific test or benchmark, just ask and I´ll see what I can do.


----------



## hat (Aug 25, 2018)

Not sure why you can't edit your post. Under your own posts, there should be an Edit button, near where your signature would be (if you had one) and where the Report button is on others' posts.

@FordGT90Concept posts tables more than anyone else I know of on the forums


----------



## rtwjunkie (Aug 25, 2018)

Dinnercore said:


> A question for anyone who knows how to do things properly in a forum, what would be the best way to implement a chart for temps and power draw in this thread? I can´t edit my first post, so no adding there. I could just make another post, but will loose the ability to edit that one too. I could create one somwhere and link to it, but who would find that link between all the other posts?


There is a time limit on editing.  PM a Supermod and see if they can extend the editing period for posts in your own thread.  Or it may be on an as-needed basis.  I also seem to recall that W1zzard keeps that locked down himself.  In any case, he doesn't usually have a problem with it if you explain the need.


----------



## Jetster (Aug 25, 2018)

Had this for years. Never been taken apart. Should I?
I tested it a couple of years ago. It only scores about 400 on Heaven 4.0; my GTX150 ti scores 800.











Someone want to get in on the shoot


----------



## hat (Aug 25, 2018)

Heh, I've got a *BFG* GTX 260 Maxcore 55. It served me well for a while, then I got a 5870 and tried to sell it, but nobody bought it, so in the parts bin it sits.


----------



## FordGT90Concept (Aug 25, 2018)

The first ATI Radeon (yes, that's a picture of my card):
https://www.techpowerup.com/gpudb/3024/radeon-sdr-pci

Doesn't need to be restored because of how little it has been used.


----------



## biffzinker (Aug 25, 2018)

hat said:


> Heh, I've got a *BFG* GTX 260 Maxcore 55.


Still have my *BFG GeForce GTS 250 *(later version of the 8800 GTX after the 9800 GTX).
(Though in a roundabout way I lent it to my brother for a Xeon build (Ivy Bridge) I donated to help him move on from his older Athlon 5200+ build.)


----------



## Fouquin (Aug 25, 2018)

biffzinker said:


> Still have my *BFG Geforce GTS 250 *(later version of the 8800 GTX after the 9800 GTX)



Eh, close. The GTS 250 (G92B) was a rebadged 9800 GTX+ with slightly more reserved clocks and double the VRAM. G80 gave its last with the 8800 Ultra and the GTS Core 112; G92 lived on for a couple of generations beyond.


----------



## Dinnercore (Aug 25, 2018)

Time for the next card. And the chart is on the way, I hope; I contacted a mod via PM.

This card is giving me a real headache:
Zotac GTX 295, a dual GPU card.





It looks fine on the outside. But already gave me trouble just getting it to boot.





This one will be a longer story for sure...





Got it running at idle, but I´m having trouble getting it to work under load in SLI. 
Idle Temp @ 22°C ambient: 41°C / 42°C
Power draw: ~130W







For those interested in my troubleshooting journey with it (and it´s not over yet):

My testsystem for reference:
Ryzen 7 1800x
MSI X370 Gaming Pro Carbon (I hate this thing, never buy cheap)
16GB (2x8) DDR4 G.Skill Flare X @ 3200 14-14-14-34
Corsair RMI 850W PSU
120GB SSD

Got the card as usual. Stupid me did NOT use DDU before swapping from the 9800 GTX+ to this one, thinking 'oh, it´s the same driver version, it may be fine for now'...
Well, before it even got to the driver stage it didn´t POST. No signal out. PC stuck; every third power-on it initialized keyboard and mouse, but still no signal. Fan spinning at 100% on the GPU, green LED on.
Alrighty, off to a great start. Next thing I did: instead of using two separate PSU cables, I used a single Y-cable for the 6+8-pin.
Oh wow, look at that, it POSTs and boots. Wait a minute, why does my memory initialisation fail now?! The mainboard boot-looped 3 times and went into the standard config. OK. Deep breath. This will be a long day.

Now it did boot. I got into the OS (Win7 64-bit) and checked that the card was picked up by the driver and GPU-Z. Went just fine. Still scratching my head about why my system can´t hold OC settings or my RAM profile now. So I rebooted, went into the BIOS, tried some higher voltages all around. Nothing; as soon as the PC starts it completely shuts down, goes back up 3x (mainboard retry setting) and then just proceeds with stock settings. Alright then. Time for a BIOS update.

BIOS update done, and oh my god, it finally starts without the delay at startup it always had before. Wow, thanks MSI, I can finally set subtimings! Just about a year late. Went into Windows; everything still OK? Nope. BSOD within 2 minutes, related to me removing a USB stick, blaming pci.sys as the fault. Well, I´m not done anyway. Back into the BIOS, got all my settings back in order after the update, then updated my chipset drivers too. Time for some coffee.
Finally it seemed stable. Took the idle temp measurement and went ahead into Furmark. Single GPU runs fine. SLI activated: 20 seconds in, @64°C, it died again with a BSOD. This time nvlddmkm.sys was stated as the cause.
Since it crashed pretty much in relation to the load, I hope the card has not reached EOL. The error code hints more towards the driver though; I hope I can fix this with DDU and a clean reinstall after all the changes I made. However, due to the possibility of it being a heat/stress issue on the card in its current state, I´ll now take a closer look at it before I do more testing.






Jetster said:


> Had this for years. Never been taken apart. Should I?



It doesn´t look bad from the outside, but since I see a cat around, there could be a lot of hairs inside. Not the Nvidia HairWorks you would want.
Maybe plug it in, check if it works and if temperatures seem reasonable. Does the fan spin, what noise does it make, etc.?

I have great memories of my GTX 260; I had to RMA the first one I got due to the cooler not making good contact. It was the first upgrade for my first PC!
And I need a GTS 250, I would like to have the whole 200-series family. Got a 260 and a 295 now.


----------



## Mr.Scott (Aug 25, 2018)

Driver issue. 295 was always driver picky.
332.21 I know works.
No longer have a 295. Just a couple 260's (65nm and 55nm), 275FTW, and 280.
Really want to find another.


----------



## Tatty_One (Aug 25, 2018)

Op..... I have unlocked your first post so you can add/edit there should you wish.


----------



## hat (Aug 25, 2018)

Good luck with the 295. A friend shipped me a 9800GX2 years ago for free because it was broken. I thought maybe I could get it to work... nope. It wound up in the dumpster, unfortunately.


----------



## Dinnercore (Aug 25, 2018)

Tatty_One said:


> Op..... I have unlocked your first post so you can add/edit there should you wish.



Thank you very much! 

And here is my weekend plan:

Those tiny sheets of metal they call a 'backplate' are actually the memory heatsinks...












So many pads... I will reuse them; there´s no chance I´ll get replacements. That would cost twice the current price of this card. 
The discoloration on the shroud around the communication chipsets hints at a lot of heat, and those grey pads feel solid as a brick. One of them didn´t even have proper contact; this may have been the primary issue. And that paste, eww. It crackled as I loosened the screws. 






Let´s see if this will run like it should afterwards, and with the different driver version that @Mr.Scott suggested. 
It may take me a day or two.


----------



## Jetster (Aug 25, 2018)

Dinnercore said:


> It doesn´t look bad from the outside, but since I see a cat around it there could be a lot of hairs inside. Not the nvidia hairworks you would want.
> Maybe plug it in, check if it works and if temperatures seem reasonable. Does the fan spin, what noise does it make etc..
> 
> I have great memories of my GTX260, had to rma the first one I got due to cooler not making good contact. It was the first upgrade for my first pc!
> And I need a GTS250, I would like to have the whole 200 series family. Got a 260 and 295 now.



I tested it about 4 years ago; it works good, no issues


----------



## Muaadib (Aug 25, 2018)

I wish I could find my broken 7900GX2 (not to be confused with the 7950GX2). That thing still holds a special place in my heart.


----------



## Dinnercore (Aug 26, 2018)

@Muaadib I would like to get my hands on a 7900GX2, and a 9800GX2 for that matter, someday. We will see what I can grab. 

Again I have some mixed news for everyone. So I´ll post the eye candy first, OK?





I think I might be developing feelings for this card that I should not have. Not much bare silicon, but those two IHS facing each other... kind of like the UT map 'Facing Worlds'.

A more SFW shot:





And some silicon for good measure:









Now for what is troubling me. I wanted to reuse those pads, so instead of going brute force on the plate they stick to, I took out my tweezers and carefully removed the larger lumps of dust around them without disturbing the fine grain. I don´t want any of that close to the pad surface. I could not remove these pads in one piece, so I left them sticking. The white ones are still very soft but brittle. 
The problem is the grey ones on those SLI chipsets that split the 16x over Y into 8x/8x. They are hard as rock and crumble into dust as soon as something hard touches them. 






I poked the side of one with my tweezers and it simply broke apart. If I remount them under pressure I´m pretty certain they´ll break, and on top of that I think they´ve lost their performance anyway. So forget what I said earlier, I´m buying new ones for those. Quality new ones. I just have to do this one right. 

Now for something completely different! If you ever want to store itty-bitty parts and screws, like the ones from the cable cover of the fan cable, get micro-mount boxes for mineral collectors. They are perfect for holding small batches of things, you can easily spot what´s inside, they stack, and you can get 100 for less than $10 or so. 






And since the result of this one will now be delayed again, we will have a short intermission:






Managed to grab an EK waterblock for ~$10. It may be dirty, but it fits GTX 260s and 280s. I will definitely use it at some point. Now let´s see if I can get this clean again. Maybe some toothpaste?


----------



## theFOoL (Aug 26, 2018)

So So Sexy


----------



## phill (Aug 26, 2018)

I think I have a 7900 GX2 or a 7950 GX2 with a water block still here. I did have a dual-PCB GTX 295 but sold it ages ago. Oh, how I wish I kept more stuff.


----------



## blobster21 (Aug 26, 2018)

@Robert B 's Old Hardware Emporium is always nice, but this thread is very cool too. Excellent picture quality btw, I only wish there was a clickable version that would show them in all their HD glory.


----------



## Dinnercore (Aug 27, 2018)

I took a short detour. For 6€ I couldn´t really argue. BTW, @Robert B ´s thread and posts are very impressive; I hope to get some soldering work in here too, if I get a case that needs it and I figure it all out. And his cleaning jobs are next level.

Here we got a partner in crime for the XFX card and the first stock DHE-cooler:






An EVGA 9800 GTX+.
Not much to see on the outside; it´s looking decent already. Please note the screw right at the sticker. I will get back to it later.






Let´s jump into its current idle and load temps + power consumption figures.









We have 23°C ambient (measured at the front case intake fan btw) and get:
Idle 35°C // 98 W power-in
Load 78°C // 255W power-in

That is quite impressive to me. These blower-style coolers can get pretty hot, and I think these numbers are still OK. I don´t think it would reach 78°C in any gaming scenario.

Now let´s take a look inside...






Oh. The previous owner sold me the card and some free noise insulation material. Let´s go with that.






This is one furry card; looking at the current full moon, I start to wonder. Oh well, those pads look fresh. Like, really fresh. And the paste was still soft, not dried out.
Coming back to that screw next to the sticker: the spring on it was still UNDER the sticker, stuck to its glue. This and the paste tell me the card was never actually opened before. We are looking at 9 years and about 6 months of lifetime, and it still performed like it did.
VERY nice, EVGA. If my PSU from you guys has the same kind of lifespan design behind it, thumbs up.

However, I don´t completely agree with the placement of this thermal pad:






How did I estimate the age of this card? Well, the 'Magic' fan had a crystal-ball-bearing... production date on it (20.11.2008):






Now that thing was loud. I know blower fans make noise, but this one had an unhealthy, almost rattle-like tune. I decided to try and remove the sticker, and on top of that let some oil onto the bearing by building an oil bridge:






It may be sealed, I´m not sure, but if there is a way in, this oil will find it.






I use this stuff, intended for open yo-yo bearings. Not many would have it around, but if you do, it´s great, and you will never empty that bottle with your yo-yos no matter how much you play.

Now let´s get some close-ups while cleaning and then back into Furmark.





















And it´s ready to rock:














This time we have an ambient of 21.3 °C
Idle: 33°C
Load: 73°C (actually some improvement)

And the biggest benefit: the fan stopped making the annoying noise. It´s still loud, but it doesn´t spin up as much, and the rattle-like sound is gone.
I honestly did not think the temps would improve at all after seeing how good the paste and pads looked. But looking at the load temp, with a 5°C drop and lower fan speed, I think I did make a difference. Yes, ambient is 1.7°C lower too, and the lower the temperature, the smaller delta T gets because of density and transfer rates, but I still think it improved beyond that effect.

To finish it off, I have these pictures and some bonus material packed in the spoiler. WARNING though: NSFW screenshots from gameplay on this card, with blood and gore.


Spoiler: Potentially NSFW, Ingame-screenshots with blood and gore





 

 

 

 

 

 

 

 

 

 

 

 

 

 

 





I did test UT3 on it, because I had it installed. Well, this 2007 title was absolutely no match, even in HD. 60fps and I got bored with it. So I threw a more recent but not very demanding title at it: PoE.
It struggled to hit 30fps, and the drops made my hardcore char really anxious, so I decided not to do an intensive test. I did notice that the DX11 mode ran a lot faster on this old card than the DX9 setting.

Now for @blobster21 ´s wish/suggestion, and thank you for the kind words on my pictures. I thought about the picture resolution, and my problem is this: I take them all with my DSLR in RAW. This means big file sizes, and I need to convert them before uploading. Right now I crop every picture if necessary and then convert it into a smaller JPG. I skip any post-processing to speed things up. That file is still at the original resolution of 4928 x 3264 and about 5MB per picture. I want to use bigger pictures and not just thumbnails in my posts, so you don´t have to open every specific one I write about manually. That means resizing them to a manageable size, with the benefit that this page loads even on slower internet connections and I don´t have to upload 3GB of photo material. My current ad-free pic hoster wouldn´t allow that anyway, and I don´t think TPU's servers would want to be piled up with my 200GB photo history after a year or so.
The only option I see would be to upload either the full-size JPGs, which would take forever with my 'quality' German internet, or to do double the work for every photo and resize it twice: one version for this thread and one version in FullHD, the maximum my pic hoster allows if the file is below 1MB.
If anyone has a suggestion for easily uploading high-res without the downsides, that would be great.
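For the two-size workflow, the only math involved is fitting 4928 x 3264 into a bounding box while keeping the aspect ratio; the actual resampling would then be done by whatever image tool you use. A quick sketch of that calculation (a helper of my own, not from any specific program):

```python
def fit_within(width: int, height: int, max_edge: int) -> tuple[int, int]:
    """Scale (width, height) down proportionally so the longer edge is
    at most max_edge; images already small enough are left untouched."""
    longest = max(width, height)
    if longest <= max_edge:
        return (width, height)
    scale = max_edge / longest
    return (round(width * scale), round(height * scale))

# A 4928 x 3264 frame fit into a 1920-pixel FullHD box
# comes out at 1920 x 1272.
```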


----------



## hat (Aug 27, 2018)

Try imgur?

Ah, the 9800GTX+. You made me remember my poor 9800GT which I lost in a horrible accident involving overvolting a quad core processor, a cheap motherboard, and unfortunately a critical amount of negligence 

It was the first time I had a system that could respectably tear through anything I threw at it. Then I swapped whatever chip I had in there for a Phenom 9500 (I think it might have been my old 5200+), but it was, as OG Phenoms were, slow, and I was desperate to push the clocks... I'm sure we've all had those "oops" moments...


----------



## theFOoL (Aug 27, 2018)

What camera you use? Good lord


----------



## Jetster (Aug 27, 2018)

rk3066 said:


> What camera you use? Good lord


It's a Nikon D5100


----------



## Dinnercore (Aug 27, 2018)

Yes, I use a Nikon D5100 with a 50mm 1:1.8 lens. I´ve had it for a long time, and I´m afraid the mirror shutter might die soon, as it´s fast approaching the 100,000-shot mark. 
For my extreme close-ups I use close-up spacer rings between the camera and the lens. Works great, but it really demands a lot of light. I don´t do this professionally and I don´t own any studio lighting, so I make use of my desk lamp as well as I can. It can be a PITA to hold the cam steady enough in your hand for a close-up shot at 1/5th of a second exposure time...



hat said:


> [...] I'm sure we've all had those "oops" moments...



I´d say when it comes to PC hardware I was trained by my granddad to be very careful about what to do and what not to do. He taught me this stuff: overclocking, installing an OS, getting games to run, troubleshooting, building a PC. I was about 6 or 7 at the time and loved it, most of all the gaming stuff. Ahh, good old times. Now it´s more than 15 years later, and the only 'oops' moment I can recall would be how I treated my first PC.
At one point it got so bloated with software (good old WinXP times) that I had random bluescreens and couldn´t trace them down to a source. They only ever happened every 1-2 hours, and that was really annoying. That constant feeling of 'it IS going to happen'. One day I was already a bit angered by something, and after the third BSOD that day I started to kick my PC with my foot. Until it shut down. Then instantly the regret hit me. It booted back up, but I broke the front fan and the MB has some scars left: the audio ports no longer work and 2 RAM slots are broken. The other two only boot stable if I put in max voltage.
I still have that board, with my original CPU and harddrive, in a case, and sometimes turn it on just to see it run again. I kept it as a reminder to never use violence as a vent for anger. So far it has worked. If I rage now, like ingame, I only get vocal, and if it gets too much I just leave.


----------



## Jetster (Aug 27, 2018)

We have a DSLR thread if you're interested. https://www.techpowerup.com/forums/...gital-slr-and-photography-club.76565/page-108


----------



## Fouquin (Aug 28, 2018)

Dinnercore said:


> How did I estimate the age of this card? Well the 'Magic'-Fan had a crystal-ball-bearing-... production date on it (20.11.2008):



You can also use whatever is the later date between the PCB and the ASIC, since both are dated. The ASIC on that particular card is 0841, i.e. the 41st week of 2008. The PCB is 0847, the 47th week of 2008. So all three of your production dates indicate it was built in late November or early December, with the ASIC rolling off the line first and waiting quite a while for a board to sit on.
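The YYWW convention decodes mechanically; a small sketch (assuming 20xx years and ISO week numbering, which is close enough for a rough build date):

```python
from datetime import date

def decode_yyww(code: str) -> date:
    """Turn a YYWW date code like '0841' into the Monday of that week,
    assuming a 20xx year and ISO week numbering."""
    year = 2000 + int(code[:2])
    week = int(code[2:])
    return date.fromisocalendar(year, week, 1)

# '0841' -> 2008-10-06 and '0847' -> 2008-11-17, consistent with the
# fan's 20.11.2008 sticker.
```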


----------



## Dinnercore (Aug 28, 2018)

Fouquin said:


> You can also use whatever is the latest date between the PCB and ASIC, since they are both dated. The ASIC on that particular card is 0841, or 41st week of 2008. The PCB is 0847, 47th week of 2008. So all three of your production dates indicate it was built in late November or early December, with the ASIC rolling off the line first and waiting quite awhile for a board to sit on.



Thank you, that is very useful to know for future reference. I guess this practice is commonly used on other parts as well? Like ATI-Cards e.g.?


----------



## Fouquin (Aug 28, 2018)

Dinnercore said:


> Thank you, that is very useful to know for future reference. I guess this practice is commonly used on other parts as well? Like ATI-Cards e.g.?



Yep. ASICs aren't always dated, but boards generally are. If you can't find a direct date on a card, you can check the BIOS revision info for a build date with GPU-Z, or find a date on any supporting chip the card may have, like a PLX bridge or sometimes a VRM controller. It's not guaranteed those parts will match up well with the card's actual production date, but they'll give you a nice 'no older than X' indication.


----------



## Liquid Cool (Aug 28, 2018)

Late to the party, but I'm subbing as well...

Jetster... I owned that same card... so yes, I think you should.  If I remember correctly, I was using it when I played COD 2.  Made for a nice setup.

Best,

Liquid Cool


----------



## Dinnercore (Aug 29, 2018)

Well, thank you for joining, @Liquid Cool ! What a perfect name for my next post, because soon it´s time for some liquid cooling too. 

Update on my EK-FC 280 waterblock. 






Insides look a bit corroded, backside is ok.






My tip for getting the rubber gaskets out: use something pointy but dull. You could use your average knife or a needle, but that just invites an accident. This plastic toothpick served me well:






I gave the copper two baths in citric acid, not at an extremely high concentration, for about 30 minutes each time. Then a quick brush with toothpaste. For the acrylic, make sure NOT to use any alcohol or aggressive chemicals! I had to remove the spots of corrosion from it and did so with a pencil eraser. You can also wipe it off with warm soapy water and a towel. IF something happens to the acrylic and you see it turn white/milky, don't use it on your block any longer. It gets brittle and can crack very easily in that state, especially under pressure from the screws and during heat cycles.

Results:






Btw I did not grease the gasket. It's typically not necessary for waterblocks I think, and on top of that, different types of grease are meant for different applications. In the worst case the grease could ruin the rubber = leakage.






Now I need a card for this. I´m already preparing a loop just for testing watercooling stuff, more on it when it´s done. With these different cooling solutions I wonder if I should add another chart with overclocking results on these cards. Do you want to see them run benchmarks? 

Another update on my GTX 295 and the SLI issue: I have ordered some thermal pads, I´ve gone for several sheets in varying thickness to have some supply ready for future cards. This is kinda new for me and I was not thinking about everything I would need. I hope the 295 is not mad at me for letting it sit here another couple of days. 
I did however make progress on the SLI issue. Huge thanks to @Mr.Scott for hinting at the driver version. I had the same BSOD escapade with my 9800 GTX+ cards too when trying to boot with both. So I went for driver version 332.21 and it booted no problem. However, not in SLI. This was the moment I finally noticed that the EVGA is 55nm while the XFX is 65nm! Different device IDs. Both cards are recognized by the nvidia control panel, but only PhysX mode is possible. Grrr...
I googled myself a whole encyclopedia of troubleshooting SLI configs and whether it's possible to use those two with the standard nvidia drivers. Everyone on the internet back in the day reported it should not be an issue, and that this case with the 9800 GTX cards is the only exception to the rule that only matching device IDs work.
Well, it no longer works. I tried many different drivers; on about 50% of them I get a BSOD with pci.sys as soon as those two cards initialize on the nvidia driver. (They have no issue booting up with the windows driver tho.)
I'm starting to believe that the chipset drivers for Ryzen on Windows 7 are a bit confused about what the hell I'm trying to do too. So I'm currently looking into options for a system around LGA 775.


----------



## Dinnercore (Aug 31, 2018)

Today is the day we get back to that beautiful GTX 295. The pads arrived and I got to cover it with these squishy blue squares. Time to Go Down.






My first cut-outs are a bit large, but I got better the more I tried.

EDIT: I initially went for 0.5mm thickness on the memory and VRM, but that does NOT make proper contact! Use 1mm all around.

I noticed the grey pads it had on before were a bit thicker than the white ones. It was all a bit sketchy because I had no clue what thickness to go with, but I figured the less the better, as long as they connect with the plate.






The tricky part is that the plate on top is held in place by screwing it together with the two separate backplates, which each cover another set of memory on the back. So I had to cut and place all the pads over every module and hope for the best. I really didn't want to redo the whole thing. This card, with its massive IHS surface and all the memory, chipsets and VRM, needs about 1/3 of its total pcb surface, front and back, covered in TIM...

Well, next up is placing the fan back on, taking a quick snap of its sticker:





Then I just had to re-paste it, attach the fan, put the little fan cable protection thing back in place and it should be done. Hopefully. As easy as one...






...two...






...and three.






You may have already noticed from the first pictures of it that there is quite a gap between the plastic shell and the card. This is because some of the clips that hold it down are broken, and I noticed I broke another one on removal, so now only two are left holding onto the outer edges... It gives it a really beaten up and wonky look.






Maybe I can find a broken one with a decent cover for cheap. But it is time for the moment I was a bit anxious about. Will it run? Are temps ok? Did these damn pads make contact or will the VRM fail on me in furmark?

Boot went as normal, no beeps, LED on the back of the card working. First sigh of relief. Driver install went fine, it boots back up and tells me an SLI-compatible system is recognized. Well, it said that before too and then crashed, but at least I didn't break anything further!
New idle temps @ 20°C ambient this time (-2°C from initial test).






That is a small 2°C improvement compared to before, I´m ok with that.
But does it survive Furmark now with the new driver?

Yes it does!






It's alive and feeling as good as or even better than new!
68°C max while the system is inhaling 400W. I was really anxious during the first minutes of the stress test, it probably hurt me more than the card. Afterwards I googled around, because even tho my ambient is low, these numbers seemed very low for this card. And indeed, these cards could get really hot, mostly due to lacking airflow. We should be thankful for modern airflow-oriented cases with several 140mm fans. I remember my case back in the day had a single 120 as intake and a single 120 as exhaust...

I could have seen something like this: (NOTE not my card ofc., got this one from someone asking if this is a normal temp for the 295...)





This beast of a card was something I looked at back when I bought my GTX260, thinking to myself that I would never ever even get to see one of those. And now it´s sitting in my pc and I can play Sacred 2 on it with physX enabled, something that my 260 couldn´t handle. Oh I will enjoy this weekend 

And I already have more cards lined up, moving slowly backwards through time.


----------



## blobster21 (Aug 31, 2018)

GTX 295, the name itself instilled fear in mortal men. Glad that the last temp graph is not yours!


----------



## theFOoL (Aug 31, 2018)

My Wife


----------



## Fouquin (Aug 31, 2018)

GTX 295 restoration photo time? I'd like to include mine. 







Dinnercore said:


> You may have already noticed from the first pictures of it that there is quite a gap between the plastic shell and the card. This is because some of the clips that hold it down are broken, and I noticed I broke another one on removal, so now only two are left holding onto the outer edges... It gives it a really beaten up and wonky look.



That happens on pretty much every card with those stupid clips. The plastic gets brittle and the clips snap. The GTX 295 doesn't suffer alone though, AMD used a nearly identical clip mount for the Radeon HD 6000 series reference coolers, and I'm always sweating bullets if I have to tear one of those down for cleaning. Worst design choice ever, screws work just fine.


----------



## Dinnercore (Aug 31, 2018)

Fouquin said:


> GTX 295 restoration photo time? I'd like to include mine.
> 
> View attachment 106189
> 
> ...



Now that's cool. Engineering samples always have that special snowflake (in a good way) feeling to them. Love it! 
And I agree on those clips, I'd be ok with anything else that doesn't involve brittle plastic that can break. Make me do a raindance while yelling the lyrics of Golden Brown, but please no more of those clips. Even with a 3D printer ready they would be tough to replace...


----------



## jaggerwild (Sep 1, 2018)

For a test use half life 2 if you have it. Great hardware porn!!!


----------



## Dinnercore (Sep 3, 2018)

Another update, did you miss _her_ yet? 







Oh she is a demanding lady. Remember how she passed Furmark? Yeah, but as soon as I tried any game I ran into trouble. First I wanted to go for Sacred 2 and it only took 20 seconds for the menu to freeze up, come back with stutter and then freeze for good. Okaaay....
Well. Could be the driver and SLI being stupid. Fired up an older title (Gothic, which ran fine on the same driver and the 9800 GTX+) and deactivated SLI. This time it lasted 3 minutes and ended in a hard lock-up with me having to reset the pc. This does not look good.

I now had 3 options instantly in my head. First, the driver is still wonky. Not very likely after it ran Furmark in SLI without a hiccup. Second, well, I was not feeling confident about those 0.5mm pads... Third, this card has trouble beyond my grasp; since Furmark did not stress the memory, maybe I have a bad solder joint on a memory chip or it degraded for good.

I'm not having any of option three, thanks. I will test option one, but first I have to take a look at those pads and make sure they make contact.






At least the paste looked decent. Sad I had to open it up again. I'm getting quicker at taking these cards apart. 

So I looked at the pads, and forgot to take a picture because I finally wanted to move on. Well, they did not really connect well with the plate. They had some loose contact, and looking at them I saw that anywhere from 20% to 70% of their surfaces had left a mark on the plate. The SLI chipsets looked very good; those pads took most of the mounting pressure and were quite dented in. So I decided to just put 1mm thickness all around and it should work. Absolutely perfect would be something like 0.9mm on the memory, 1mm on the chipsets and 0.8mm on the backside memory. But I only have 0.5 / 1 and 1.5. 
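The pad-picking logic here boils down to: never go thinner than the gap (a too-thin pad makes no contact, as the 0.5mm attempt showed), and from the thicknesses you actually stock, take the thinnest one that still reaches, since a slightly too-thick pad just compresses. A sketch of that rule, using my estimated gaps above and my 0.5/1/1.5mm stock (the gap figures are my estimates, not measurements):

```python
def pick_pad(gap_mm: float, stock=(0.5, 1.0, 1.5)) -> float:
    """Thinnest stocked pad that is at least as thick as the gap.
    Too-thin pads make no contact; slightly too thick just compresses."""
    candidates = [t for t in stock if t >= gap_mm]
    if not candidates:
        raise ValueError(f"no stocked pad reaches a {gap_mm} mm gap")
    return min(candidates)

# Estimated gaps on the GTX 295, from the teardown above:
gaps_mm = {"front memory": 0.9, "SLI chipsets": 1.0, "back memory": 0.8}
for spot, gap in gaps_mm.items():
    print(spot, "->", pick_pad(gap), "mm")  # every spot resolves to 1.0 mm
```

Which is exactly why 1mm all around works out with this stock.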






Looking a lot better than before, and it felt a lot better while tightening the screws down. I could feel the soft resistance of the pads this time. 

Now, I did not waste the other thermal pads I removed; I put them flat on a piece of plastic and stored them in a sealed container for later use. Sorry to bother you all so much with this one, but I want to do things right and I stand by my mistakes. Now I just have to ask a mod for permission to edit my earlier post so no one who googles this gets the idea that they can order 0.5mm pads for this card! 

Man it looks so beaten up and scratched. Reminds me of my old skateboard.






But she works now. I did not want to go full force today and went for an hour in Flatout 2. Finally. Fingers crossed. 







Our next candidate will be from team red, before I get a bad reputation as an nvidia shill. 






An all-in-wonder multimedia solution, GPU and TV card in one. It sat in someone's basement for years until I took it out of its premature grave. But that is a story I will get into later. The 295 really clings to me.


----------



## droopyRO (Sep 3, 2018)

Iirc those cards were loud.


----------



## hat (Sep 4, 2018)

Maybe not after Dinnercore works his magic. Clean it up, fresh paste... 

But damn, where have all the single slot cards gone? I'm sure a single slot cooler would work at least up to GTX1050Ti...


----------



## Dinnercore (Sep 5, 2018)

droopyRO said:


> Iirc those cards were loud.



Oh, I've heard it now. If that thing spins up to 100% you hear it, and not just in the room the PC is in. It's not just that I measured 54dB next to the open case; the high pitch of that fan is not pleasant.



hat said:


> Maybe not after Dinnercore works his magic. Clean it up, fresh paste...
> 
> But damn, where have all the single slot cards gone? I'm sure a single slot cooler would work at least up to GTX1050Ti...



Heh, I could do a lot of things to silence that cooler, but that goes a bit against my intention of restoring them. If a card comes with a stock cooler I want to do everything I can to make that stock cooler work like it did on day 1. And those were just loud. Cleaning it up will not help this one much, I'm afraid. We will see. I hit another brick wall anyway...
I think the noise is one of the main reasons those single-slot solutions are gone. At that low profile you can't waste any cooler mass, and a radial fan is pretty much your best option there. However, to make it work with fins that can't have any significant height, you need high rpm = a lot of high-pitched whiny noise. 



Well, next update: bad news. I think my MSI X370 Gaming Pro Carbon has already given up on me. The one they advertise with PCIe 'Armor' to withstand large GPUs and multiple swaps. Yeah, everything from marketing is always a lie. If they have to say it, it can't be true. 

I did my usual routine: use DDU, shut down, unplug power, take out the previous GPU, plug the new one in. Boot back up, so far so good. Got into Windows and installed the legacy driver I found for this card on the AMD website. Then I just had to restart....
It started with the issue that the PC would not restart, i.e. not shut down. It got to the point of signing out of Windows only to stop there with peripherals already powered off. And from there it just refuses to POST. The X1900 screams at 100% fan speed, everything powers on except for the RGB strips. The debug LEDs on the board show nothing, it just stays in this state forever. Waited 30 minutes, nothing. Double checked every power connector, checked my AiO for a leak, everything ok. Tried another GPU that worked before, nope. 
It just won't POST, and doesn't even get to any debug LED state. No idea what killed it.
Maybe my 1.40V OC on the CPU was too much, maybe the PCIe slot died. I now have the fun task of troubleshooting my fairly new motherboard; of course the part that is not 10+ years old craps itself when being stressed.

I´ll clear CMOS, use another PCIe slot, switch PSU, I guess you all know the drill. But project is on hold until I sort it out. I have an LGA775 board ready, but nothing to use it with yet.


----------



## hat (Sep 5, 2018)

I never had an issue with "loud" stock coolers, though. Sure they'll get loud at 100% (most fans do) but it shouldn't reach 100% anyway.

Sorry to hear about your board, though. It's odd that it would just die on reboot. I've killed an AGP slot before, and the system would have run fine otherwise, just no AGP... so I feel like something else is amiss. Not sure what could have possibly happened though, swapping GPUs shouldn't kill a motherboard, unless there was a mighty ESD.


----------



## theFOoL (Sep 5, 2018)

From my past use of these single-slot cards (please bring them back!) the fans would go to 70-80% or even 90%


----------



## Dinnercore (Sep 5, 2018)

hat said:


> I never had an issue with "loud" stock coolers, though. Sure they'll get loud at 100% (most fans do) but it shouldn't reach 100% anyway.
> 
> Sorry to hear about your board, though. It's odd that it would just die on reboot. I've killed an AGP slot before, and the system would have run fine otherwise, just no AGP... so I feel like something else is amiss. Not sure what could have possibly happened though, swapping GPUs shouldn't kill a motherboard, unless there was a mighty ESD.



That one time the card booted up for driver installation, the fan did drop to idle and it was ok in that state. Not louder than anything else in my system. 

I'm a bit puzzled too about this issue. What on earth can happen on a reboot that bricks the whole board? I watched it closely and I do get confirmation on the RAM slots from the board; they work and get detected. I get the CPU light, and on the debug readout it checks the CPU and passes that, but as soon as it should move on to the GPU it just stops. No light, nothing. But the PCIe slot LED is on, confirming the board has detected something plugged in there. I dusted everything off now, tried the 2nd slot, but nothing so far. 
I tried to switch the PSU cable, I tried a different plug on the PSU for PCIe power. I decided to just leave the power connector unconnected on the card and just try to get past GPU detection but nope.

I unplugged all drives except boot drive, nope. I confirmed the AiO pump is running. No leak. 
I had the power off completely and cleared CMOS, nope. I tried brute force on/off/2x reset, nope. Something is not quite right. I remember having the same issue before on the 9800 and once on the 295, but every time resolving it by switching the PSU cabling from 2 separate cables to a single one, and on the 295 the other way around... But now this no longer works. 
Could be the PSU but wouldn´t it be more likely that the whole 12V supply would be faulty then and not just the PCI connectors? Ahh man.


----------



## Mr.Scott (Sep 5, 2018)

Dinnercore said:


> Could be the PSU but wouldn´t it be more likely that the whole 12V supply would be faulty then and not just the PCI connectors? Ahh man.


Depends on how many 12v rails it has.
If it's not a single rail PSU, the PCIe connectors are usually on their own rail. So it's entirely possible for just one rail to be bad.


----------



## Dinnercore (Sep 7, 2018)

One for sorrow // Two for mirth, // Three for a funeral // And four for birth.

System is working again. The BIOS was the issue. I had taken it apart (and gave it a proper clean-up in the process) and rebuilt the whole thing. It was still not working, so I put my tweezers between the battery jumper pins for a good 10 minutes instead of just 10 seconds and guess what, it's as if nothing ever happened. MSI please learn how to BIOS, it takes more than talent.

Now for that lovely all-in-wonder card.






The most interesting feature set out of all the ones I have so far. Performance is not its strength, but it was intended to be a multimedia powerhouse back in the day, and oh boy, this card can do things that no other solution can. It came bundled with software for DVD playback and creation. Well, not a big deal. But you can hook up cable TV to it, PAL/SECAM, and it supported DVB-T. It has its own TV tuner and a co-processor, the 'Theater 200', for video and audio de-interlacing. Hardware acceleration for H.264, and you could hook up a radio antenna and use it to listen to the radio over your pc back in the day!
Want to watch TV while working? Well you can set the TV-Screen as your bloody desktop background!
It even had a crude time-shift function. I´m astonished by this thing.











Very dirty on the upper side, but the back looks clean. Must have been sitting in an open box face up.






Getting it to work was simple (only MSI-bios messed up), AMD still provides the legacy ATI-driver software. Thank you AMD!

Right from the start the idle temp had me worried. The fan was chilling out tho. Ambient this time 21.5 °C.






A bit warm, but they were designed with that in mind. Instead of making noise with that screaming fan they let the card heat up. I don't agree with that completely, now that it has aged and electronics do degrade faster with heat, but eh.

I wanted to see if I can control the fan with afterburner and noticed an interesting thing:






It picks up 2 processors, the R580 GPU and something unlabeled. No fan control for either, but I think the 2nd one could be the Theater 200.
Since it idled that high I didn´t want to throw Furmark at it instantly. So some medium load first.






Guild Wars should run fine on this. But 70°C and the fan is still not bothered. Well, if they don't let the fan spin anyway, I don't hope to achieve much with my fresh TIM. A heat-soaked heatsink is a heat-soaked heatsink, I will only get it there faster.

Time for Furmark:
Ambient still 21.5°C






It does get hot, but not because the fan and heatsink can't handle it; they simply don't let the fan spin up, to keep the noise low. It was audible at around 40% but not really a bother. However, it did drop back down to 36% at the end of the test, because it had cooled down to 75°C at 40% fan speed.
From what I see, this thing has a temperature soft target of 80°C and tries to get there and hold it with the lowest fan speed possible. I'm not okay with 80°C, looking at how easily the temp went down to the low 70s on just 40% fan speed, but I can't control it atm.
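Whatever the BIOS actually does is its own secret, but the behaviour described (a floor around 36%, ramping only once the core pushes past the 80°C soft target, settling back to the floor below it) matches a simple proportional controller. A guess at that logic, with the gain value made up for illustration:

```python
def fan_duty(core_temp_c: float,
             target_c: float = 80.0,    # observed soft target
             floor_pct: float = 36.0,   # observed minimum duty
             gain_pct_per_deg: float = 4.0) -> float:  # assumed gain
    """Proportional fan curve: sit at the floor below the soft target,
    ramp with the overshoot above it, clamp at 100%."""
    duty = floor_pct + gain_pct_per_deg * (core_temp_c - target_c)
    return max(floor_pct, min(100.0, duty))

print(fan_duty(75.0))   # 36.0 - back below target, fan drops to its floor
print(fan_duty(82.0))   # 44.0 - mild overshoot, mild ramp
print(fan_duty(110.0))  # 100.0 - clamped flat out
```

A controller like this explains why the fan happily lets the core sit at 80°C instead of chasing the low 70s it could clearly reach.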

Power consumption idle: 125W // load: 175W.
Temps idle: 52.8 °C // load 80.6 °C

Now taking it apart:






Looking good. Only had one screw on the cooler that wouldn´t turn and I rounded the head. Got it out by careful use of pliers and will find a replacement for it.

I had to look closely at the red comb to get it off. There are plastic pins in the knobs that hold it in place; you pull the pins out and then squeeze the knobs to release them:






That´s it for today, next update will be cleaning and close-up pcb shots with the new thermals and testing afterwards.


----------



## theFOoL (Sep 8, 2018)

Dinnercore said:


> That's it for today, next update will be cleaning and close-up pcb shots with the new thermals and testing afterwards.


Take off the white thermal strip so we can see those things 

What are those white thermal strips for BTW?


----------



## hat (Sep 8, 2018)

VRMs?

If you really want to tweak that card in particular, try out TPU's very own ATiTool. It's pretty well antiquated now, but it should work for that card. Or maybe you can edit the BIOS with a higher default fan speed, but modding the BIOS probably goes against the whole restoring vintage stuff thing.


----------



## basco (Sep 8, 2018)

and I think GPUTool loves the old volt controllers too - it's worth a shot
https://www.techpowerup.com/download/gputool-community-technology-preview/

and is ATITool 0.27 beta 4 the latest?


----------



## dorsetknob (Sep 8, 2018)

Dinnercore said:


> Now for that lovely all-in-wonder card.


great job

Being a fan of the AIW series of ATI graphics cards, it's a shame they stopped making them.
I started off with a 3DFX 3500TV (AGP 2x).
Forced to upgrade because it could not handle AGP 4x or 8x, I looked at NV offerings (crap) and settled for the 9800SE AIW.
Great card, lasted years (the bugger is still working but boxed for storage).
I only gave them up because the ones that supported DVB-T were not available locally and mine was analog only.
Ended up with a PCI TV card (Hauppauge).

Still looking for available AIW cards on Fleabay (they're in short supply for the later models)


----------



## hat (Sep 8, 2018)

These days any card can do what an AIW did. Just run an HDMI to your TV. Back in the day though, they sure were useful for connecting a PC to a TV.


----------



## Dinnercore (Sep 8, 2018)

hat said:


> These days any card can do what an AIW did. Just run an HDMI to your TV. Back in the day though, they sure were useful for connecting a PC to a TV.



Those were not just for connecting a PC to a TV like you suggest. It does more than any new card can: it makes your pc into a TV, with the built-in tuner. Or can you use your 1080 to watch 2 TV channels on your pc while browsing the internet and having a 3rd channel record for time-shift playback? I know of no card that can take a direct coax cable, and the AIW can take 2. And I can't listen to the radio over my Vega 64. Sadly.

If I run an HDMI from my pc to my tv, why use the pc in the first place? With the AIW I can save the cost of the TV.


----------



## Caring1 (Sep 9, 2018)

I used to have an A.I.W. in a home theatre box in the lounge room just for that purpose; sadly the advent of D.T.V. saw it retired.


----------



## Dinnercore (Sep 12, 2018)

Hope you didn't fall into kernel panic (low freq. warning on that link, please keep the volume low at night if you like your neighbours) in the past days. The AiW X1900 is done; the messed-up screw was more difficult to replace than I thought. In the end I found a matching one in my big general collection box.

But that is boring stuff, you want to see the freaky VRM under that thermal-pad band?






Look at those sweet colors!






I guess it must be some anodizing, but I don't really know the reason behind it. Maybe someone here knows more about this and why it is colored like that? Does it have a meaning or is it just a side effect of the manufacturing process?

Moving on to the die:






Beautiful R580. I think gold suits it very well:






Then we have the Theater-200.






And the NXT6000 as the COFDM decoder chip for the TV signal.






Cleaning it up was easy, not much to do. Some wipes, some brush strokes for the dust that sits in tight places. The copper part of the cooler could be taken apart by undoing the plate with the friendly lady and her friendly sword.

Note: I re-used the pads on the memory, because they looked ok and had no dust on them. And they are 2mm+ thick; my new pads only go up to 1.5mm. I did however replace the strip on the VRM cooler with some of the leftover 0.5mm ones that did not fit the GTX295. That turned out to be a good decision!






I did not like the look of that old copper, even tho copper patina has its own appeal.






A quick brush under citric acid, kept very short to prevent copper citrate from forming.






And it´s all back together!












The results on thermal performance made me scratch my head this time. With an ambient of 23 °C, we are just barely above the 21.5 °C from the first test. Still I got this:






Idle pcb temp went up by 5°C, while the GPU temp is within the same range as before. The VRM however dropped a full 6 °C despite the higher ambient temp. That is some serious improvement from the new pads.






The new load temps are interesting too; the fan had to spin at 45% the whole time to maintain the 80°C on the GPU core. The pcb temp is now 6°C higher than before, but the VRMs are a full 16°C cooler, and remember the slightly higher ambient.

Not sure what to make of this result; I'm happy for the VRM but worried about the pcb and core temps. Since the VRM cooling comb has no contact with the main cooler, the increase in GPU temp can't be from better transfer there. Maybe the old pads on the memory have to cure again, or did not re-shape very well. My paste application was generous and I always spread it manually to cover the whole die.
I will work with this card for a few days now and see if anything changes.
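Since ambient moved between the two runs (21.5 °C before, 23 °C after), the honest comparison is in delta T over ambient, as in the results table at the top. A minimal sketch; the 90 °C 'before' VRM reading is a placeholder, only the 16 °C raw drop and the two ambients come from the tests above:

```python
def delta_t(temp_c: float, ambient_c: float) -> float:
    """Temperature rise over ambient, the metric used in the results table."""
    return temp_c - ambient_c

def improvement(before: tuple, after: tuple) -> float:
    """Change in delta T between runs; negative means cooler after the rebuild.
    Each argument is a (temp_c, ambient_c) pair."""
    return delta_t(*after) - delta_t(*before)

# VRM under load: raw reading 16 degC lower despite 1.5 degC warmer ambient.
# 90.0 is a hypothetical 'before' value for illustration.
print(improvement(before=(90.0, 21.5), after=(74.0, 23.0)))  # -17.5
```

So the ambient-corrected VRM improvement is actually 17.5 °C, a bit better than the raw 16 °C suggests.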

I wanted to give ATITool a try, but I could not run it because Win7 'saved' me every time from the missing driver signature for this program and prevented execution. I could not bypass this yet.


----------



## hat (Sep 13, 2018)

There's a way around that... advanced boot menu or something, should be an option to disable driver signature enforcement.


----------



## theFOoL (Sep 13, 2018)

That "Disable Driver Signature Enforcement" option should be in the Advanced Boot Options menu, which one should be able to access by pressing the F8 key before the boot logo appears


----------



## Dinnercore (Sep 13, 2018)

I will give that driver enforcement setting a try in the future, thanks. But today I wanted to move on and didn´t fiddle around much longer.

Last thing I did with the AiW X1900 was play some Half Life 2 Lost Coast. I don´t have the full game, but this tech-demo level will do:
(Thank you @jaggerwild for the suggestion)






I like to blow those boats into pieces and watch them float on the water. Stuff breaking apart and floating on water in a game blew my mind as a kid. I imagined all the computational horsepower it would take to really simulate such detail; of course I did not have enough understanding and knowledge to see that physics were, and still are, NOT simulated but rather approximated with shortcuts and simplifications that I consider cheating. But that blissful ignorance back then made it all the more enjoyable. The more you learn and know, the less magic you find in the world...

Anyone want to guess what happened right after?






Average fps were around 35-40 at 1080p! Not bad for 256MB of memory. And additional stuff like texture filtering was enabled too. Only texture detail and shaders were on medium.

The reason I wanted to move on was a patient I received today:






A 7950 GX2. Sold for the equivalent of a loaf of bread and described as not working. Hence all the missing parts; this poor thing was already stripped for the bin. Standoffs missing, slot bracket missing.
That's how it presented in the ER this morning, no further history known.

If it's sold as not working and someone already gave up on it, I probably won't have much success either. But I have to try, and I like a challenge. No burn marks or browned stickers, so at least this thing has not been treated with oven necromancy yet.

Quickly grabbed some standoffs and estimated the length:






Looks alright eh? A bit scratched:






Now how do you support a card in your pc when it no longer has a slot bracket? I tried to fit one from an 8800 GTS; they have the same ports, but it is spaced differently, so I could not get the card to fit with that one.
Well, I just stuck something under there for now:






And the moment of truth, does it do anything?

Well, it is still breathing. I have no idea where to measure a pulse on a GPU, but there is no display out. Red light of doom on the mainboard debug LED. Tried a bios reset just to be sure; nope, the card will not run like this. Funnily enough, on the second try it got stuck again between the CPU self test and the VGA test, with something detected on the PCIe slot, but it did not boot any further.

The fans spin on both the top and bottom pcb, full 100% like they should. On the pcb with the PCIe connector the VRM is heating up and getting lukewarm; on the other pcb I could not feel any heat, just the fan spinning. Could be a clue.

I will take it apart now and scan the pcb and SMDs for any visible damage. I really hope I find some; if not, I've got a real problem.
Has anyone got a source for pcb blueprints of cards like this one? So I can check where each component should be. I will google it myself, but maybe someone knows of a collection somewhere that I wouldn't come across.


----------



## hat (Sep 13, 2018)

Seems to me the card was dropped or something. Could be physically broken somewhere...


----------



## jaggerwild (Sep 13, 2018)

Hard to find a 7950 GX2 that does work; I think heat was an issue, not certain. Might contact a GPU seller like EVGA, they might sponsor a GPU flashback (worth a try)!


----------



## Dinnercore (Sep 17, 2018)

hat said:


> Seems to me the card was dropped or something. Could be physically broken somewhere...



The corners of the pcbs closest to the memory chips seem a bit scuffed. It could have been dropped, but I could not make out any other physical damage.



jaggerwild said:


> Hard to find a 7950 GX2 that does work; I think heat was an issue, not certain. Might contact a GPU seller like EVGA, they might sponsor a GPU flashback (worth a try)!



Yeah, I read now that they ran hot all the time. With the stacked pcbs there are few options for improvement with aftermarket cooling, except special waterblocks designed for this card...
I hope I find a working one, and if I do I will not use it until I've figured out a way to run these without cooking them. But rather than contact EVGA, I'd sooner buy some more random cards. For the price of shipping to the USA I'd get about 3-4 of these cards. 


I did spend some hours with the 7950 GX2. There was no dust or anything to clean, really. I first visually inspected it as well as I could with my trusty 10x folding magnifier that accompanied me during my geosciences studies.
Except for scratches on the cooler I found nothing. Solder joints looked bad in some places, but I could not spot a loose contact.

I did not find specific information on the pcb layout, but I did find some datasheets, like the one for the ISL6568 two-phase buck controller. So I went ahead and stabbed the thing with my multimeter, first powered off on my desk, since that is the only way I could reach most of the contact points anyway. You can't check voltages that way, but I could check if everything is still connected. 
Well, it all seems fine. Just to make sure, I checked every pin of the connector that bridges the stack together. Nothing wrong.

Took my close-up shots:











The Intersil:











Then I put everything back together. Plugged it in one more time, but no: the mainboard reports a VGA error on the debug display. Both fans spin, and this time both boards got warm. I measured voltages and everything seemed like it should work; the 12V supply was stable and present on both PCBs. Just for fun I switched the DVI port, but of course that was useless.
For now I´d say this is beyond what I can repair, since it can not be fixed by replacing components on the PCB, short of new cores and/or memory.

However I refuse to ever give up. This stays on my to-do list, but down at the bottom. I will try to get any signal out of it with a board that features a chipset actually listed as compatible with it. And if this fails too, well, I do have a hot-air station; might try some heat necromancy. And if that brings nothing, well, I always wanted to do a crazy core transplant. Why not go full maniac and do a reball. But for now, I´m afraid we have to move on.

The question is what comes next. Another dual-GPU card, or are you already tired of them?


----------



## hat (Sep 17, 2018)

How about a PS1?


----------



## jaggerwild (Sep 17, 2018)

Can you use a longer stack to space out the GPUs from each other and extend the fan wires? Clearly with the short stack the cooling isn't there for the underside GPU. Longer ones would work.


----------



## Dinnercore (Sep 17, 2018)

jaggerwild said:


> Can you use a longer stack to space out the GPUs from each other and extend the fan wires? Clearly with the short stack the cooling isn't there for the underside GPU. Longer ones would work.



That doesn´t really work because the cards need to talk to each other; only one of them has the connection to the mainboard. This is where the bridge PCB comes in:






I can´t extend that easily. There was a version that was taller, or even adjustable via ribbon cable (not sure), for some aftermarket cooler. But finding one of these today will be difficult.


----------



## Dinnercore (Sep 20, 2018)

New update. I don´t know if I can do this anymore... You open a box that got sent to you and unwrap the two 8800 GTX cards you bought, just to hear the high-pitched clacking of small metal fragments falling onto your floor. You know, I have no problem seeing gore, but this moment shook me.
The previous owner didn´t care about them; he told me only one works, the other sometimes picks up and sometimes doesn´t. The auction ended very cheap, so he just carelessly threw them in the box...
I guess this will happen frequently. Warning, shocking images ahead.











There was some serious force involved to rip the top layer off of the PCB and do things like this:











And this is from the better-looking one. The other has even more damage. Despite how it looks I did pull out my soldering stuff and tried to get at least the parts that were just loose back in place. The lead-free solder gave me some trouble reaching the necessary temperature, because the kit with the fine tip for small electronics is only the cheap sidekick to my main iron, which is way too big for these small parts. In the end it was no use anyway: both cards are definitely dead, and I´m missing more SMDs than I could find in the packaging.

On the plus side there was some more stuff in that box: I now have 3 more Arctic Accelero aftermarket GPU coolers (unused) and a smaller Alpenföhn Klara (also unused) that fits mid-tier cards from ATi and Nvidia (e.g. the 6600 or the ATi X1900). And a waterblock from Aqua Computer for 9800 GTX(+) cards. Oh, and another 7950 GX2. This one will be worth a closer look, because it does boot: with the driver I had I got into the Windows login screen at a correct 1080p, and then it dropped the signal. It went on and off and on and off in a cycle, indicating overheating. The fans also never ramped up; they were spinning, but not audibly. But that´s for later.

Actually I have had this guy planned for a while now, another card I got cheap as it was sold as 'untested and stored on display for years, can´t test if working'. I guess most people thought 'oh yeah, sure you can´t just plug it into your pc and do a quick test boot, it must be broken'.

We will see:






A 9800 GX2, with the original box and even the game it came with (Neverwinter Nights 2). Only the outer shell/cover is missing its screws; I guess this card was running without it for thermal improvement and the screws ended up somewhere else.

Top:






The sideview with the ribbon cable to connect both cards:






And the backside. I had to photograph it standing up because the top side has small metal contacts on very thin and bendy metal springs that tend to break off. These must be there to route ground to the metal cover.






The fan is covered in dust, and since the card hasn´t run in ages my instinct told me not to plug it in right away and to clean it up first. But I gotta do what I gotta do for data. Let´s see if it runs at all before I invest the time.

Put it in the slot, flipped the power switch, pressed the button aaaand... Black screen. Nothing. Boot stuck at the red VGA-error LED. ARGFDSGSF

Okay. Wait a minute. The card shows green LEDs on the power connectors. The fan is spinning, and not just spinning but ramping down from 100% to idle after starting the PC. This smells like a working card and me being dumb.
Looking at the back of the card I find 2 more LEDs, one for each PCB. I had put the monitor's DVI cable into the top port, which showed green. The bottom one, with the empty DVI and HDMI ports, had a blue LED. So I switched my cable down to where the blue light was and powered off and on again.

Now it´s working!






There we go, detection works fine.






Horrible idle temps, but I read in the original review of this card that 60+ °C is the normal idle temp with this cooler, since the card has no 2D downclock mode. It runs at full clocks 24/7.
22°C ambient today. 175W power consumption // 61°C on the hottest GPU.
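Since the table in the first post tracks everything as delta T over ambient, here is that bookkeeping as a trivial helper (my own illustration, nothing from the card's tooling; the numbers are today's readings):

```python
def delta_t(measured_c, ambient_c):
    """Temperature rise over ambient in °C, so readings from different days stay comparable."""
    return measured_c - ambient_c

# Today's idle reading: hottest GPU at 61 °C with 22 °C ambient
idle_delta = delta_t(61, 22)  # 39 °C over ambient
```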

I do not like this. Especially that 3 °C difference; it could be a hint that one side has less contact than the other. Well, I can always quit the Furmark test if I get uncomfortable. They should shut themselves down at 105°C, but I don´t want to heat a card like this up for no reason, considering the age of the parts.






And this is where I cut the test short. I think everyone can plot the curve a bit ahead and see that this one climbs steadily into oblivion. 100°C at the 5 minute mark I would guess, and maybe even 105°C before 10 minutes.
The fan was already at its limit. We reached a thirsty 370W max and hit 92°C on the hot core after just 2 minutes 11 seconds.
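To put the 'plot the curve ahead' remark into numbers: a naive straight-line extrapolation through the two readings from this test run. This is only a sketch; real heating curves flatten out toward an equilibrium, so the true time to the cut-off is longer than the line predicts.

```python
def linear_eta_s(t0_s, temp0_c, t1_s, temp1_c, target_c):
    """Time (seconds) at which a straight line through two readings reaches target_c."""
    rate = (temp1_c - temp0_c) / (t1_s - t0_s)  # °C per second
    return t1_s + (target_c - temp1_c) / rate

# 61 °C when the test started, 92 °C after 2 min 11 s (131 s),
# extrapolated to the 105 °C shutdown threshold:
eta = linear_eta_s(0, 61, 131, 92, 105)  # ~186 s, i.e. roughly 3 minutes
```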

But I´m happy that it ran stable. I had it working for over an hour in Windows and played FlatOut 2 for 20 minutes (reaching 72°C max). No crashes or trouble so far.

Next is teardown.


----------



## hat (Sep 21, 2018)

Too bad about the 8800GTX... but nice score on the 9800GX2 at least.


----------



## Dinnercore (Oct 20, 2018)

Sorry for the long wait, I could only get back to it this weekend. The new semester began and I gotta split my time between study and work, which leaves not much room for other things.

Well then, we left off at the Furmark test of the 9800 GX2, and now it´s time to crack it open and see if there is any room for improvement. Side A:






Very little dust got in there, looks like the cooler does a decent job at keeping dust away from the pcb. Here is side B:






The paste was still wet in some spots and looked fine. The pads were a bit crumbly but not bad. Not much cleaning to do for me this time either. There is also not much I can do about the high temps here; the limiting factor seems to be the cooler design itself. It has very large fins, a low fin count, and is all in all more mass than surface area.

Here are some shots of the silicon:
















Looking around the board I found an interesting name on the sockets for the ribbon cable connectors that link the two pcbs together:






Not sure if that is a part from THE Honda automotive company, but might be.

After the photo session I got my MX-4 and the Arctic Cooling thermal pads, read online that the pads should be rather thick, and put everything back together. Again, it was a bad idea to just assume the thickness, and I made the same mistake as on the GTX 295. The correct thickness is again 1mm all around! (I need to order a huge amount of these; they seem to be the most commonly used.)

This is what the contact of the die looked like:





NOT good. I´m lucky it didn´t turn out worse. I even tried to check by looking at it sideways from an angle, using a laser pointer to see if the die had contact, but I couldn´t tell.

So I had it running (the pic is from after I noticed!) and checked temps, seeing that one GPU climbed to 70°C and more at idle, telling me that I had made the pad mistake again... Which meant undoing every screw again, including the I/O cover, because in order to separate the PCBs you need to take that off too. That is a pain, because you have to reach the screws through very small holes in one of the boards.
After those pleasant 30 minutes of undoing and re-tightening dozens of tiny spring-loaded screws that love to bounce everywhere, it is now finally done and back together.
Even found three screws for the cover to hold that in place again:






Man this card is looking nice. I like it.






A little damage around the power connector area, adds some character 






Now my time is already running out for today, so new temperature figures will come tomorrow. I don´t expect a big gain however, these cards are known for load temps in the mid 90s.

To finish it for today, here is the card in action right now:


----------



## Radical_Edward (Oct 20, 2018)

This thread inspired me to take apart my old ASUS HD5850. 



All cleaned up. There was a full-blown floof ball in the heatsink too. The TIM on the chip was hard.


----------



## Dinnercore (Oct 22, 2018)

Radical_Edward said:


> This thread inspired me to take apart my old ASUS HD5850.
> 
> View attachment 109070View attachment 109071
> 
> ...



Wow, that looks like some years' worth of floof. Good thing you took it apart. Did you put it back together yet?


My next update:

It seems I did this card no good. First off, the new idle temps are kind of OK:






19.7°C intake air today, but the numbers did not really change compared to last time. The curve in Afterburner shows how the temp climbs after start-up.

At least it doesn´t exceed 75°C now like it did with bad die contact, and it needs less RPM at the same temp. From the load test I expected to see something like 90°C again, but not much more, and that it would hold there.
BUT:






It just shoots up faster than ever, especially on the GPU that has the PCIe connector on its PCB. I aborted the test. It did not crash, and since it does take a minute to get to 90°C I think the cooler-to-die contact should be OK; if some part of the die still had no contact it would be dead by now.
And it seems to be only one side; the other was still at 80°C, a huge difference of 14°C. What could cause this?

When taking it apart I inspected the cooler and cleaned it with distilled water. It had some screws that seemed to hold a plate on top of the fin stack; I unscrewed them, but the plate didn´t move one tiny bit, so I put them back and left it at that.
Did I damage it by doing this? My guess now is that I loosened the connection between some of the fins and one of the baseplates, so that the heat from one side can now only sink into a few fins, if at all. Since I can not open the cooler I have no idea how to fix this. I might need a new cooler now.
One hint that the heat source is connected to less mass than before is the quick rise and drop-off of the temperature:






EDIT: btw, I also have no clue why GPU-Z reports the Windows driver here, while the other tab clearly states it is using Nvidia 337.88 and SLI is working...

From 94 to 78 it dropped within 2 seconds.
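As a back-of-the-envelope check on the 'less mass' theory: with Newton cooling, T(t) = base + (T0 - base) * exp(-t/tau), the 94 -> 78 °C drop in 2 seconds gives a time constant. The 70 °C baseline is my assumption (roughly where the curve levels off); a tau below 2 seconds would mean the sensed hot spot is coupled to very little thermal mass.

```python
import math

def time_constant_s(t0_c, t1_c, dt_s, base_c):
    """Solve T(t) = base + (T0 - base) * exp(-t / tau) for tau."""
    return -dt_s / math.log((t1_c - base_c) / (t0_c - base_c))

tau = time_constant_s(94.0, 78.0, 2.0, base_c=70.0)  # ~1.8 s
```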

This card really wants to bother me it seems.


----------



## Mr.Scott (Oct 22, 2018)

Dinnercore said:


> It just shoots up faster than ever, especially on the GPU that has the PCIe connector on its PCB. I aborted the test. It did not crash, and since it does take a minute to get to 90°C I think the cooler-to-die contact should be OK; if some part of the die still had no contact it would be dead by now.
> And it seems to be only one side; the other was still at 80°C, a huge difference of 14°C. What could cause this?


That side will always be hotter. Less airflow, and it is sandwiched between the heat from the other card and its own.


----------



## Radical_Edward (Oct 22, 2018)

Dinnercore said:


> Wow that looks like some years worth of floof. Good thing you took it apart, did you put it back together yet?



I did. It's lived in filtered cases all its life; however, it was in use until just last fall in my wife's system. Guess it wasn't getting dusted out as much as it should have been.

All put back together and ready to go in my backup system once I get a mobo.


----------



## Salty_sandwich (Oct 23, 2018)

Radical_Edward said:


> I did, it's lived in filtered cases all its life, however it was in use until just this last fall in my wife's system. Guess it wasn't getting dusted out as much as it should have been.
> 
> All put back together and ready to go in my backup system once I get a mobo.



I have a graphics card given to me by an old friend, an old Palit GF9800GTX+ / 512MB DDR3 / 256-bit. He must have been quite a heavy smoker.

After I had taken the card apart to remove years of fluff, the card was nicotine-stained! I said to him, crikey, did you blow ya smoke into ya PC directly?

Everything, the motherboard, CPU heatsink, PSU, the lot, was covered in dark tar-like crap lol

I'll see if I can find it (I think I still have it) and stick up a pic lol


----------



## Dinnercore (Jan 24, 2019)

This thread is not dead, and neither am I, I think. 

I have trouble finding time for all my hobbies between life and its responsibilities. Now that the semester break is starting once again I can get back to this. 
The 9800 GX2 is still alive, but the cooler seems definitely busted in some way. I´m looking for a replacement from a dead card, and while I scan for a cheap score I will take a look at this guy I found recently:





Dusty, a bit yellow, and judging from the smell this card was owned by a smoker at some point.





7900 GTO from MSI. Rusty screws, 12 years old. Received it today, will test it next.


----------



## Tatty_One (Jan 24, 2019)

Dinnercore said:


> This thread is not dead, me neither I think.
> 
> I have trouble finding time for all my hobbies between life and its responsibilities. Now that semester break is starting once again I can get back to this.
> The 9800 GX2 is still alive, but the cooler seems definitely busted in some way. I´m looking for a replacement from a dead card, and while I scan for a cheap score I will take a look at this guy I found recently:
> ...


I used to have one of those... awesome card!


----------



## Dinnercore (Jan 27, 2019)

I wonder how the previous owner got it into his PC, or if that bent slot cover was a result of shipping (top right in the bottom picture from before). It did not fit like that; I had to bend it back into shape.
Once it was in, it just came to life like it was at home in my setup. No long boot sequence or boot-loop like with the 295 or the 9800 GX2. It just works!

Latest driver for this card is the 309.08 for Win7 64bit.






Today we have 19°C ambient temperature. Idle power draw is sitting at just around 100W, and the idle temp looks fine at 34°C.
One thing I noticed was how silent this card is compared to the other ones I had so far. If the case is closed it really does not stand out from any of the other case fans. I really like that!

But lets put some load on it...






And now I´m even more impressed. Given the age of this card and the slight 'patina' on the cooler I had expected worse. On top of that, the fan seems to be fixed-speed and did not get any louder. I would have loved to have this one back in my old PC instead of the 8600 GTS I had back then.

Load temps look perfect; 55°C is actually too good to be true. I suspect that someone already took good care of this 7900 GTO before it got to me. Whoever that was, thank you, but I´ll still take it apart. The pads look like the stock ones from what I can tell.
Power draw is reasonable under load at ~175W.

After it ran for a while I could tell by the smell that this was definitely owned by a smoker, but it is not unbearable for me. I will clean it anyway. If I can´t improve thermals, I may at least be able to make it shine again


----------



## Mr.Scott (Jan 27, 2019)

Nice card.
Here are mine.




http://3dmark.com/3dm06/16869829


----------



## E-Bear (Jan 28, 2019)

DeathtoGnomes said:


> just dont suggest a midget be added.



Add a mini itx and it might be.


----------



## Dinnercore (Feb 15, 2019)

Mr.Scott said:


> Nice card.
> Here are mine.
> 
> View attachment 115227View attachment 115228
> http://3dmark.com/3dm06/16869829



Nice, I wish I had the box for mine... And it seems like I need a 2nd one to get into SLI testing too 

So many things I want to do, so little time...

Well, back to my card. I took some time to build a new test system. I had plenty of older hardware around to use, and using my 1800X just as a testbench for 10-year-old GPUs was a bit overkill. So I sold that system and bought an open benchtable. The rest of the money will fund future cards, thermal pads etc. I have already used a full 15cm x 15cm sheet of 1mm thick thermal pad on these cards so far.

Some pics of the new system:













As you can see I kept the PSU, I really like this unit. That USB-link is very useful for me. Specs:

EVGA 780i board - Supports 3-way SLI
Intel E8500 Dual Core - Running @3.7GHz with stock cooler
Intel Stock CPU cooler
4x2GB OCZ DDR2 800
PSU Corsair RM850i

OS: Windows 7 64bit

Surprisingly, the power draw figures come close to what they were on my Ryzen system; it´s just consuming ~5-10W more at idle and under GPU load.
Of course the board got the same treatment as my cards: after confirming that it works I cleaned the heatsinks and replaced the thermal paste and pads on the chipset / VRM.

Now returning to the 7900GTO:

I pulled it apart and took a look underneath the heatsink.





Nothing special here, paste was old but looking very good still. Pads a bit yellow on the parts that had been exposed to smoke.





There is a little VRM heatsink that came loose after pulling out some spring-loaded pins:





The fan-cover and heatsink itself looked a bit nasty with brownish dust all over:





So I gave it all a good bath and cleaned it as best as I could. The pads and paste I wiped off the card:





Eww. Well it is what it is, cigarette smoke is not nice and the smell clings to everything it touches.

I got rid of all the material and removed the fan + cover from the heatsink too. Put it all in a bath of warm soapy water. It looks better now and the smell is weaker, but still not gone.
Before I forget to mention it: by pure coincidence I found out that touching the card close to the PCIe contacts while it is running produced heavy flickering and weird distortion on screen. I took a look at the card's PCIe contacts and cleaned them with contact/tuner spray; when wiping them off there was a lot of brown and yellow residue on the towel... That cigarette smoke stuff really does get everywhere.
Anyway, time for the more pleasant pictures:





The die had a really deep violet color that the camera did not pick up that well.
Right behind it, on the backside of the card, I found the usual SMD city; compared to modern SMDs these are still fairly large:





I noticed something about the VRM components: there is an obvious height difference between the one on the right and the other two on the left. The original pads did not account for this height difference and were just compressed far enough. While that works, I don´t find it an ideal solution.





Why not ideal? Take a look:





After wiping the old pads off you can see the stains where the pad made good contact under pressure, and these stains are only over the two tall components. It might also be due to higher heat from those two, but I went ahead and tried to improve the situation by mixing 0.5mm and 1mm pads to even it out, instead of hoping that pressure alone fixes it (my pads are harder than the stock ones and don´t get squished as easily):





I did not get a good picture of it, but it makes a decently flat surface for the heatsink to sit on now.
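The pad-mixing idea in pseudo-numbers (all heights here are made-up examples, not measurements from this card): pick each pad so that component height plus pad thickness lands on the same plane.

```python
GAP_MM = 2.0  # assumed distance from PCB to heatsink base

def pad_for(component_h_mm, stock=(0.5, 1.0, 1.5)):
    """Thinnest available pad that still fills the gap above a component."""
    needed = GAP_MM - component_h_mm
    for t in sorted(stock):
        if t >= needed:
            return t
    return max(stock)

# the tall VRM part gets the thin pad, the shorter ones get the thick pad,
# so the heatsink sees one flat surface instead of crushing the tall component
tall_pad = pad_for(1.5)   # 0.5 mm
short_pad = pad_for(1.0)  # 1.0 mm
```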

Back together:





Oh those sexy heatpipes...






Back to testing, I found somewhat higher temps this time. Partly due to a very slightly higher ambient and the mainboard blowing warm air onto the back of it, but I´d say also because the old paste was still in perfect condition and had a good spread. My new paste probably has not cured yet either; I did not heat-cycle it before the test because my time is limited.





Idle sitting slightly higher at 36°C with the ambient today at 20°C.





Under load I´m now at 63°C max, which is ~8°C higher than before. Again, the paste might take some time to cure and spread. I´m still good with this result; 63°C leaves plenty of headroom.
One thing I can do now is take shots of the cards in action on my benchtable:





I really enjoy having it up in front of me. It´s not really quicker this way, since swapping a GPU in and out of a case is no big deal either, but it feels so different to see it up close.
Oh and I have a place for all my case badges now 

Let´s see what card we will have next. Might be ATi time again.


----------



## Dinnercore (Feb 20, 2019)

Why can´t anything ever go the easy way in life. Well complaining doesn´t help, but I have some minor bad news.

My camera broke. I was an idiot: tired, I forgot that it was still sitting on the floor connected to the PC via USB cable, tripped over it and kicked it across the floor. It did not fall from any height, nor did it come to a sudden stop; it just slid across the floor. No visible damage, but it no longer turns on. The battery symbol and the SD-card LED are the only two things lighting up when I turn it on, indicating that the battery is empty even though it is definitely charged. I tested another battery, same problem.
I´m now in a really bad mood. I did not expect this thing to fail because of my stupidity; it still had so much lifetime left on the shutter and all.
This puts me in a spot where I can´t continue with my card collection, because my only replacement would be the cam on my Galaxy S4, and that thing is not suitable for quality pictures. I don´t have any money put aside just to replace my trusty DSLR, so I can´t just buy another one either... Gonna have to try and fix it, or work double shifts for another two months... I´m so angry at myself. Why why why.

Well, I got some last pictures out of it. Our next candidate for this program:





The mighty ATi X1950 XTX. A powerful DirectX 9 flagship card, the most powerful single GPU for something like 3 months back in 2006. This time I even got the original box with it! It looks cleaner than I expected.

I found this funny sticker on the bag it came in; I have never seen that on one of my more recent GPUs as far as I remember:





Back from a time when most people were not as familiar with PC-hardware as they may be today and when additional power connectors for a GPU were still kinda new.

I already put it on my testbench and fiddled around with the driver install; it refused to install the Catalyst 10.2 version from Guru3D for 64-bit. I had to install some optional Windows updates, which started failing to install; after some restarts it figured itself out, and then the Windows driver did not want to go away. Even DDU could only remove it for a few seconds.
I then went to the AMD site and got the 10.2 version from the legacy section there. I went into Device Manager, forced the Windows driver to stop bothering me, and installed the CCC and the driver for this card. Finally it worked. The card is running, but my head was so stuck on the dead camera on my desk that I forgot to take my screenshots.
Will take them later and add them like usual.

For now I need to get this thing back:





Another night with no sleep it seems -.- But I can´t live without a decent camera and all my equipment is for this exact model. Wish me luck.


----------



## hat (Feb 20, 2019)

That sucks. I know all too well the feeling of breaking stuff due to my own clumsiness... though, nothing quite that expensive...


----------



## phill (Feb 20, 2019)

What's happened to the camera??


----------



## hat (Feb 20, 2019)

It's literally in the second line of his post...


----------



## biffzinker (Feb 20, 2019)

phill said:


> What's happened to the camera??


Didn't like being kicked across the room? I'm sure @Dinnercore was kicking himself for the fatal mistake.


----------



## Steevo (Feb 20, 2019)

If I had access to all the old machines I had built I could give you work for a year. Good on you, I remember my old cards and the work that went into them.


----------



## king of swag187 (Feb 21, 2019)

Bought a box of parts locally; unfortunately most of them were bent to all hell, pins dented, capacitors missing, but I did get a working R7 260X, a GTS 250, and a nice HD 5770


----------



## phill (Feb 21, 2019)

hat said:


> It's literally in the second line of his post...



Many apologies, I just looked at the pictures as I was trying to do something else at the time... Guy thing, tired and not looking at what I'm doing


----------



## Dinnercore (Feb 21, 2019)

phill said:


> Many apologies, I just looked at the pictures as I was trying to do something else at the time... Guy thing, tired and not looking at what I'm doing


No worries, as my camera can tell you I´m sometimes in the same state. 



king of swag187 said:


> Bought a box of local parts, unfortunately most of them were bent to all hell/pins dented/capacitors missing, but I did get a working R7 260X, GTS 250, and a nice HD 5770


Argh, dead hardware is never nice to unbox. Like the 8800s I got from someone who shipped them in a way that let me pour the broken-off SMDs out of the box 
Those 3 working cards are a score tho! Grats on that. I would love to have a GTS 250; they are not hard to find, but somehow I have not yet come across an offer that got me interested. As a mid-to-low-power card, that thing still seems to be sought after for internet and office PC use.



Steevo said:


> If I had access to all the old machines I had built I could give you work for a year. Good on you, I remember my old cards and the work that went into them.


Oh, I think I´ll find enough cards to keep myself busy for a year or more. So many models were released, and the era I´m focusing on atm is highly available on the second-hand market. It seems like 10 years is roughly the timespan people stay attached to their old stuff; then they notice that it is just sitting in a box somewhere catching dust and taking up space, and suddenly they want it gone. 



biffzinker said:


> Didn't like being kicked across the room? I'm sure @Dinnercore was kicking himself for the fatal mistake.


I´m still busy kicking myself mentally. I think I´ll not get over it for about a week.


Some news tho: I took the camera apart up to the point where I would have to de-solder tiny wires to dig further. This is a step I´m not 100% confident I can reverse, so instead I tried my best to find anything that could be broken, but nothing was visible. I took out all the ribbon connectors (there are so many) and put them back, cleaned the board and tried to get some life out of it. It did power on this time, and the display turned on too. But the joy only lasted so long, as the display is broken and the camera is still stuck. I can enter the menu now, but it does not detect lenses correctly and still claims the voltage is too low. Due to the low battery voltage it locks the shutter release. 
I think this camera is beyond my capabilities, but I´ve got nothing to lose at this point, so I will try to dig further into it. For now tho I need a replacement, which means I might look into buying the same body used; I found some reasonable offers below $200.

Sorry for off-topic, back to my lovely X1950 XTX. The idle values:






Once again I´m at 20°C ambient; due to the low stock fan speed it sits at 61°C at idle. Power draw is reasonable as well at ~117W; the card does run on lower clocks and voltages in 2D mode. 
And this power-save state does not detect Furmark as a full-speed 3D application:






I do get my load numbers, but the card did not go full speed. I want to stick with Furmark as the load test for now, so I will use these numbers, but they are not accurate for these ATi cards. I might have to use some game or a 3D benchmark like 3DMark 03 or 05.
In Furmark, however, we see up to 80°C, shortly spiking over that until the fan curve kicks in and drops the temp back down to ~76°C. And there we have one of the points of criticism for this card: the not-very-clever fan curve has it ramping the fan up and down under load, which causes audible noise and tone changes. It´s not loud, but the really noticeable change every 20 seconds is annoying. I´d rather have it constantly high, as even then it is still just as loud as the stock Intel cooler on my CPU and that mainboard fan.

To read the stock fan-curve I used ATITool, glad this thing exists. Afterburner does not support these cards at all.






Seems like ATI wanted to be as silent as possible, letting the card reach 80°C+ before even starting to bother the fan. But the fan speed is only updated every few seconds: it sits at 13% at idle, load hits, the temp peaks in the mid 80s, the fan is set to 70%, the temp instantly drops back down, but the fan keeps going until the next check after ~10 seconds. By that time the temp is already back at 75°C or less, and the fan decides it´s time to idle again. Another 10 seconds of heating back up to 86°C and the loop starts again.
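That polling loop can be sketched as a toy simulation: the controller only re-reads the temperature every ~10 seconds, so under load the fan see-saws between its idle and load steps instead of settling. All the constants below are made up for illustration; nothing is read from the card.

```python
POLL_S = 10                    # assumed polling interval in seconds
CURVE = [(0, 13), (80, 70)]    # simplified two-step fan curve: (threshold °C, fan %)

def fan_for(temp_c):
    """Highest curve step whose threshold the temperature has reached."""
    speed = CURVE[0][1]
    for threshold, pct in CURVE:
        if temp_c >= threshold:
            speed = pct
    return speed

def simulate(seconds, heat=1.1, cool=2.0, start=75.0):
    """Crude 1-second-step thermal model under a constant load."""
    temp, fan, log = start, fan_for(start), []
    for t in range(seconds):
        if t % POLL_S == 0:          # fan speed only updated at poll ticks
            fan = fan_for(temp)
        temp += heat if fan == 13 else -cool
        log.append((t, round(temp, 1), fan))
    return log

# Running this shows the temperature see-sawing roughly between the mid 60s and
# mid 80s while the fan flips between 13% and 70%, like the behaviour described above.
```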

I was not happy with that, mostly with the high temps. I set my own curve to start ~5-7°C earlier on each step and launched Half-Life 2: Lost Coast. I ran the benchmark scene at full 1080p with settings on high, 4x AA, texture filtering maxed and HDR on. 






74 FPS average is a really solid result for full HD; back in the day my monitor was something like 1024 x 768.






With my own fan-curve the temp peaked at 79°C, fan speed hitting up to 56%. 






And idle changed too: a 6°C drop just by running it at 25% instead of 13%. And at that speed it is still not audible over my other fans. So I wonder why they chose to set it up like they did; I can´t imagine that 25% fan speed would hurt the lifespan in a significant manner.


----------



## ST.o.CH (Feb 21, 2019)

Great old stuff here, subbed for more.
Also, thanks for sharing.


----------



## king of swag187 (Feb 22, 2019)

I would 10/10 not get a GTS 250; here they go for a little less than the R5 240, which can go in basically any system with a PCIe slot and has all the newest AMD support, as well as DisplayPort and half-decent gaming performance. It's still a dandy old card; I might stick it in a project a friend and I are doing at school, an old Minecraft PC for as cheap as possible, but fun times. The 260X is in my brother's rig until I can figure out how to fit a Fury X in it, and the 5770 has a special place in my HP 8200, hanging out the side with a SATA-to-PCIe 6-pin adapter. It streams stuff decently well, which I like and need.


----------



## Dinnercore (Feb 23, 2019)

king of swag187 said:


> I would 10/10 not get a GTS 250; here they go for a little less than the R5 240, which can go in basically any system with a PCIe slot and has all the newest AMD support, as well as DisplayPort and half-decent gaming performance. [...]


But I want one. I started collecting cards and I want to have at least one of each, maybe two for SLI / Crossfire tests.


I could not fix my camera but got a replacement body. I can take photos again and spend some hours with the 1950XTX:



 



Initial look was perfect from the outside, pcb looked clean and the cooler performed like it should.

Upon taking it apart tho I finally found something for me to do: mainly, the cooler itself had some dust build-up.
Take a look at that fan, 0.75 amps at 12 V -> that is a 9W fan at full speed. Holy moly, this can move some air. But I think the paddlewheel-style design of the fan also fights air resistance more than most other designs do.





The pads look clean; they are still the stock ones and I´m going to re-use them. No need to waste stuff here, in contrast to the brittle white stuff that I find on nvidia cards, which has to be replaced most of the time because it breaks into small pieces.
The whole cooler came off easily: the main unit with the fan is held on only by the metal X-plate, while the copper heatsink on the memory modules is held on with some screws.





The comb-like heatsink for the VRMs has two screws, but after taking them out it was still glued to the components. It felt really solid and I could spot some white pad material under it, so I decided not to forcefully break it off and just leave it as is.

Thermal paste on the die was still mostly wet with a dry spot right in the middle. I can´t tell if this was replaced by someone in the past or if it is still the stock application.

On the back of it I found the self-destruct button along with the switch to re-boot the matrix. Not sure which is which tho.






The die and some additional photos:







 

 



The only dust I could find on the card itself was where air exits the I/O bracket.


 



Again I have to compliment the cooler for being designed in a proper way, as in keeping dust away from the pcb and collecting it, if at all, only on the heatsink parts. Well, it could be that someone cleaned it before I got it, but then they did a bad job on the heatsink:







I no longer fool around with the small syringes of MX4, I got a big boy now. The last two 4g ones lasted for all the cards I have in this thread, let´s see how far I can get with these 20g.





And it´s back together, still a bit wet, but I will deal with that right now.





I DO NOT RECOMMEND THIS! I sometimes kinda think I should know what I´m doing, so I dared to do this, but you definitely shouldn´t. Let cleaned parts dry before mounting them and plugging everything back in!





I just let it warm up and evaporate the water trapped between the fins. Looked really cool to see the condensate building at the exhaust. And final shot of it back up and running:






Next up, the new temps.

EDIT: Here they are:






For the idle temp I waited until the card was dry and did my usual routine: heated it up just a bit (~75°C) with Furmark and waited a while for the temp to settle back down. Same ambient of 20°C; this time we got just under 60°C. A small 1°C improvement, confirmed by the 1°C drop on the pcb temp too. BUT still well within margin of error.






Load numbers did not change because the fan curve manages the card´s temperature; it wants to maintain something around 80°C.
This concludes the ATi X1950 XTX!


----------



## Dinnercore (Feb 26, 2019)

Next card! Hello Ruby!






ATi / AMDs HD4870, the 512 MB version. This will likely be the most recent series from ATI/AMD that I´d like to collect. It was competing with the Nvidia 200 series, and this model was reviewed as slightly ahead of a GTX 260 while much lower in price.

Again I got it without the box; it shipped together with another card, just loose in a simple box with some air packaging bags between them. The box was beaten and I was worried the card might not have survived.

I can already spot more dust inside the fan and cooler than with the X1950 XTX before. The pcb has some minor damage on the corners opposite the slot cover.





The back is looking clean with no damage. The whole board is bent just slightly; not sure if this comes from the mounting pressure of the cooler or if it has to do with the scratched and beaten corners...





Doesn´t seem to bother it in any way tho, it boots fine and is working. This time I´m using Catalyst 13.9, again from AMD. Upon starting the system the fan goes crazy at ~80% fan speed; it is really loud but moves a lot of air at the same time. It spins up to 80% again for a very short moment when Windows boots and the driver takes over.






In idle we can see what the two power connectors already suggested: serious idle power consumption, around 160 W. The temps however hold where the driver or bios wants them to be, with a low-noise fan speed of 21%. I like how many sensors show up in GPU-Z; I´m a big fan of monitoring EVERYTHING, be it useful or not.
Again an ambient temp of 20°C; the card is idling at ~66°C.






The load test shows that the card is still in good working condition. Core temp is fine at 75°C, and the other temps we can see are ok-ish as far as I know about these things. The VRMs do get hot, and they are rated to tolerate that. The memory in the low 80s is still ok too; I´d say on some nvidia cards it runs the same or even hotter (the stuff on the back of single-pcb GTX 295s does get toasty quick) but I could never find a sensor read-out for those.
Healthy power draw peaking at 271 W. The fan is just audible at 33% speed but not noisy. The fact that it doesn´t need to spool up the fan further shows me that the heatsink is still doing a good job.


----------



## ST.o.CH (Feb 26, 2019)

Is there any chance you may test a Radeon HD2900XT?
I had one from 2007 until 2011, it was my first flagship gpu.
I still have it but unfortunately it´s dead.


----------



## Dinnercore (Feb 26, 2019)

ST.o.CH said:


> Is there any chance you may test a Radeon HD2900XT?
> I had one from 2007 until 2011, it was my first flagship gpu.
> I still have it but unfortunately it´s dead.



There is a big chance I will get one and post it here.
Currently I don´t have one yet and I will first go through the cards I have ready to work on. But if I go hunting again I can keep an eye open for this exact model. I want to collect so many different cards and usually pick them at random so it doesn´t matter to me if I get the HD2900XT first or for example the FX6800.


----------



## Kovoet (Feb 26, 2019)

I have a old 5870 lying around. You've given me an idea. I might just give this a go


----------



## ST.o.CH (Feb 28, 2019)

Dinnercore said:


> There is a big chance I will get one and post it here.
> Currently I don´t have one yet and I will first go through the cards I have ready to work on. But if I go hunting again I can keep an eye open for this exact model. I want to collect so many different cards and usually pick them at random so it doesn´t matter to me if I get the HD2900XT first or for example the FX6800.


That would be great, nowadays it´s hard to get an old flagship graphics card.


----------



## DOM (Feb 28, 2019)

I have a lot of old cards sitting in storage ☹


----------



## Mr.Scott (Feb 28, 2019)

DOM said:


> I have a lot of old cards sitting in storage ☹


What, next to the boxes of baby clothes?


----------



## Dinnercore (Feb 28, 2019)

DOM said:


> I have a lot of old cards sitting in storage ☹



Then get them out and let them breathe electrons again! Well, if you have a system to run them in... If you´ve got watercooling in your current build it might not be that simple to just switch the new for the old for a day.
But I can tell you it may bring back a lot of memories and fun to run them again with your favorite game titles from back in the day. The familiar sound of those coolers alone does it for me...


----------



## Dinnercore (Mar 1, 2019)

Please excuse the following short interruption of the main program:




I found a 9800GX2 for 15€! It was sold as dead and I can confirm that it is, but I was mostly interested in using the heatsink anyway. Now I can test my theory on the working 9800GX2 that I got, which is still waiting for an update. The dead card here has some major issue with power delivery. I did not take it apart yet, BUT the lower board does not power up: the LED on the back stays off, there is no picture from any display out, the mainboard reports a GPU error, and just touching the cable made the power indicator LED flicker from green to red. Will take a closer look at that later.


The 4870 got stripped and yep, much more dust in this one.





Enough to feed a small dust bunny family. This time I decided to test my own thermal pads against these thick stock ones. I replaced all of the pads, VRM + memory, with my Arctic pads: 1.5mm on the memory, 1mm on the VRMs.
More screws this time, and I don´t think the bent pcb came from mounting pressure; in fact, without the cooler in place it was slightly worse.

Some of the stuff I pulled from the card:




Before I did anything else tho I took care of the beaten corners by dabbing them with CA glue.




The damage kept getting worse; every time these damaged edges touched something, more fibers came out of the pcb. With this glue it is now sealed again. I chose CA glue just because I had it around; I would guess any kind of glue that is not too aggressive would do, hot glue for example.





^These are the things that manage power for the core while this:



Should be memory supply.

Does your card have VITEC?




Yo this new VITEC on my power stages kicks in at around 70 W and pushes the core to insane speeds, your boost V2.0 aint nothing compared to this! (sorry for the car-related joke)

I´m blue abedi abedei.








Back to cleaning the heatsink, there was a lot of floof inside too. Before and after:


 



And the dust-plagued fan, this time a 12W power blower. Getting close to hair-dryer level here.




I did not bathe the heatsink unit itself this time and again just used a brush to carefully remove most of the dust from the fan. The fins felt really loose and got bent pretty easily; the heatsink on the X1950 XTX was much more solid in its construction.

Upon putting things back together I had some trouble aligning the baseplate with the copper core for the die. The problem is that the big heatsink for the die is completely separate from the baseplate with the fan and has ~3-4mm of room for movement. So when I put the card on the cooler and aligned the holes for the core, the baseplate holes were out of place, and moving that plate into position meant potentially sliding the thermal pads out of place too!
The best thing you can do in that case is to lift the card up again, check if the pads are still good, and carefully align EVERY screw hole before you drop the card onto the cooler.





Now that it was properly clean and on new pads + paste, I was curious to see if anything had changed in thermals.






It does not look like it. Pretty much the same numbers as before, again within margin of error. The only thing I noticed is that the fan RPM is a bit more consistent and rises less frequently. All the dust did not seem to bother it yet.






And my thermal pads work perfectly as a replacement for the stock stuff. That is good news.
Again the temp is decided by the preset fan curve from AMD; it wants to hold these temps and does so with ease at 33% fan speed. I did a little experiment with my own fan curve this time, using Afterburner.

This is what the idle temps looked like after I started my own fan-curve (letting it cool off from the Furmark test):





Just a single step higher in fan speed, 27% instead of 22%, gives a whopping 15°C drop on the core and memory controller. Even the VRMs dropped 7°C across the board. When I´m using this card I will now set a custom fan profile; taking some °C off 10-year-old components seems worth the slightly higher noise, which is still on par with what my other system fans produce.

BTW I have upgraded the testbench with a new cooler, the Asus V60! It does wonders compared to the intel stock one and fits perfectly on this very size-restrictive board.
On top of that I added one of my old HDDs to make room for some games and potential benchmarks. The small SSD was just enough for the system + the basic things I need.


----------



## Dinnercore (Mar 2, 2019)

The 9800GX2 is a big pain to work with. So many screws and the really sketchy ribbon cables that easily detach on assembly.

I have opened the one that I received as broken with the wonky 8-pin power but after taking a close look at it I can´t find anything wrong. 




Someone did replace the thermal paste and decided to paste that SLI-ASIC too, which did seem to work more or less. The 8-pin itself has no weak solder connections as far as I can measure after probing it with my DMM. On both pcbs the VRM resistances are fine and I see no visible damage. IF something is broken, it is again somewhere I can´t check right now. I might try to check some voltages on the side that does power up, but my guess right now is that one of the ribbon cables was loose.
Either way, the cooler is not the fix for the problem I have on my working card; it is again the thermal pad situation.

You see, the original 9800GX2 pads were really squishy and were compressed to a height of ~0.72mm. I only have 1mm or 0.5mm. The 0.5mm does not make contact with the cooler, while the 1mm is too firm to get squished enough by the weak mounting pressure that the tiny springs on those screws can provide...
So using 1mm pads I have contact between VRMs, ASIC and memory modules -> heatsink, BUT the paste on the core does not get squished enough, so the distance between die and heatsink ends up somewhere around 0.3-0.4mm, which is too much for efficient heat transfer. This made the card/pcb soak up heat, as seen by the temps slowly rising to 70-75°C in idle.

To fix this I have to improvise; there are no 0.72mm pads on the consumer market as far as I can tell. I can either try to squish some pads by hand, or try to increase the mounting pressure by adding washers to the screws. But I have to be careful not to apply too much and break something.
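To put rough numbers on why those 0.3-0.4 mm matter: the thermal resistance of a flat layer is R = t / (k * A). Treating the un-bridged gap as still air is a worst case, and the die area and conductivity figures below are ballpark assumptions, not datasheet values:

```python
# Why a 0.3-0.4 mm die-to-heatsink gap is fatal: R = t / (k * A).
# Conductivities and die area are ballpark assumptions for illustration.
K_AIR, K_PASTE = 0.026, 8.5   # W/(m*K): still air vs. a good paste (MX-4 class)
DIE_AREA = 230e-6             # m^2, roughly a G92b-sized die (~230 mm^2)

def layer_resistance(thickness_m: float, k: float, area_m2: float) -> float:
    """Thermal resistance in K/W of a uniform flat layer."""
    return thickness_m / (k * area_m2)

gap   = layer_resistance(0.35e-3, K_AIR, DIE_AREA)    # paste can't bridge it -> air
paste = layer_resistance(0.05e-3, K_PASTE, DIE_AREA)  # proper ~0.05 mm bond line

print(f"0.35 mm air gap: {gap:.1f} K/W, 0.05 mm paste: {paste:.3f} K/W")
```

The gap comes out three orders of magnitude worse than a proper bond line, so even a few watts leaking across it produce a huge temperature delta, which matches the board slowly soaking up heat instead of the cooler ever seeing it.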


----------



## silentbogo (Mar 2, 2019)

I guess I'll join the retro party )))
Found this puppy in one of the drawers in my office. Can't remember where I got it, but I'm guessing it ended up there cause someone thought it was broken...
Apparently all it needed was some dusting, pasting and lubing. Running like a champ at under 65C at full load. WD40 and no-name silicone lubricant worked some miracles on that fan (had to scrub the magnetic ring and the rotor free of dust). Now it's clean like my cat's balls and it's nearly inaudible even at full speed!

Also dug up a reference Gainward GTX285, which is due for some cleaning tomorrow (and possibly vRAM replacement, cause as far as I remember it had some nasty non-GPU-related artifacts).


----------



## Jism (Mar 2, 2019)

Hacking the stock heatsink to fit the VRM's, Kryonaut straight onto the VRM's and a set of washers to tighten it up. Result? Barely 40 degrees in a 15 min Furmark run. When going 1450Mhz @ 1.175V and Furmark "extreme" with PostFX enabled, the VRM's hit 60 degrees while the GPU power is over 260W.






I think a lot of the one-heatsink-fits-all designs unnecessarily heat up the VRM's and memory modules. In stock config this would easily rise to 65 degrees or so. Couldn't think of having PostFX turned on at all.


----------



## Dinnercore (Mar 3, 2019)

I´ve finally done it! The 9800GX2 is ALIVE and KICKING. Have to celebrate that with some special music, straight from my heart. Dancing across my room atm. 4:52am and I don´t even care.



 



You have no idea how many hours went into this card and getting it to this point. I took this thing apart and reassembled it 5 times just today.

Let´s start where we left off. I initially opened it again to switch the heatsink and test if the one from the donor card worked better. Nope. After that I switched back to the original cooler, but tried to squish the 1mm pads a bit further; nope, and on top of that it now did not boot and the mainboard threw GPU init. errors at me.
I took it apart again, checked if I broke anything or if the ribbon cable was loose; nope, nothing to see. I put it back together with 0.5mm pads on one side and 1mm on the other, to test the difference between the two and to see if 0.5mm might just be enough height to make contact with the heatsink. Oh, and fresh paste every time too. It booted and worked, no error and no trouble this time. I saw that the side with 0.5mm pads looked decent in temps, while the other one was ~10°C higher under load. So I took it apart again, went for 0.5mm on both sides, put it back together aaand...
Again it threw errors, did not boot and one side appeared dead.

So, you guessed it, I took it apart again... (this process includes 37 tiny screws, 16 memory modules to cover, 2 chips to cover, plus the VRMs and the ASIC) and saw nothing that would cause this. The ribbon cable was tight and in place. I then decided to put it together without the beaten cover on the outside; maybe that had shorted something. And guess what, it worked fine without the cover on. Now I don´t know if that was really a short somewhere or some EMC problem, since the cover is connected to ground and they taped over the place where the ribbon bridge connects the pcbs, possibly to prevent induced noise. So to fix this and still keep the original cover (it´s all about the restoration here) I placed two non-conductive thermal pads between that cover and each side, and carefully adjusted the tape behind the ribbon connection so that it sits right over that spot.

Maybe it helped, maybe it didn´t but I have a working card! And finally new temps for my chart:





Oh how sweet these numbers are compared to the condition I received the card in last year. For idle I used the same fixed fan speed as last time, since it now idles at a much lower speed. 53 / 54 °C, that is 5 / 7 °C lower than before, while the ambient today is just a touch lower at 20-21°C. But it idled fine before too; let´s throw Furmark at it and see if it starts a fire.





Before, I aborted the test at 2:11 / 92 °C because temps kept climbing while the fan was maxed out at 100%. This time it was the temperature that gave up, maxing out at 91°C on the hottest side. The fan was switching between 94% and 96%, so it had some room left. Given the age of the card and Furmark being a power-virus test, I´ll say this is decent.

Now I have a second card split in half on my table that has working VRMs and showed the same trouble as my card did with the wonky cover... Time to try to get a second working 9800GX2! But first some rest.


----------



## Apocalypsee (Mar 3, 2019)

For the love of God please don't use Furmark. Every time I see people using it I die a little inside, especially with these old and delicate cards. It's nothing but a heat and power virus. Run Unigine or whatever, please, anything but Furmark.


----------



## Mr.Scott (Mar 3, 2019)

Apocalypsee said:


> For the love of God please don't use Furmark. Every time I see people using it I die a little inside, especially with these old and delicate cards. It's nothing but a heat and power virus. Run Unigine or whatever, please, anything but Furmark.



Thank you.


----------



## Dinnercore (Mar 3, 2019)

Apocalypsee said:


> For the love of God please don't use Furmark. Every time I see people using it I die a little inside, especially with these old and delicate cards. It's nothing but a heat and power virus. Run Unigine or whatever, please, anything but Furmark.



I know that this is more pain than necessary and that I´ll never see this much heat in any real usage scenario, but it does one thing very well: testing the function of the cooler.
And it does a good job of being a reproducible load. With the Unigine benchmarks e.g. I can´t test these dual-GPU cards, because SLI doesn´t always work reliably, with stutters and load varying from test run to test run. This results in 2 different max. temps every time I run these.

For single GPUs this may be ok, but it is still more difficult to use. Furmark is a dirty but quick and easy test. It is even used professionally by companies like Alternate (a pc-system and parts dealer) for testing RMAs (EDIT: If I remember correctly some manufacturers even demanded this be run first by us before they accepted any returns). I used this test on cards like this 9800GX2 way back when I was working there during a school internship.

I can totally understand you all, and believe me I sweat more than all of you; after all, I´m the one sitting next to the cards screaming in agony for 10 minutes. But after these 20 minutes total (which is not a long period; some people run this thing for hours) the cards will never have to do this again, and I carefully use them only in games for a while, monitoring them so that they may last a little while longer. The ones I´m not using I store in special boxes with dehumidifiers that I check every month. Believe me, I love my cards and I care for them.
But for once I will defend Furmark: if the cooler does not perform to spec, or if there is a weak spot somewhere and the card is close to death, it will most likely show up during Furmark. If a card dies like this it might not have lasted much longer either way.


EDIT: I made some progress with the 'dead' 9800GX2. It is up and running again, but only half of it.










Posted a separate thread for this and got the suggestion to try flashing the bios, so I´ll try that next. It does run like this tho; I ran the Half-Life 2 - Lost Coast benchmark (found that title fitting for the situation ;P)





Solid 146 fps in full HD with some AA and texture filtering. If half of it does turn out to be dead I can at least try 9800GX2 3-way SLI


----------



## phill (Mar 4, 2019)

Apocalypsee said:


> For the love of God please don't use Furmark. Every time I see people using it I die a little inside, especially with these old and delicate cards. It's nothing but a heat and power virus. Run Unigine or whatever, please, anything but Furmark.





Mr.Scott said:


> Thank you.



I prefer Heaven or even Catzilla....  Just something different to watch other than 3D Mark..  That said 06 is pretty ok, same as Vantage I guess....


----------



## Jism (Mar 4, 2019)

Apocalypsee said:


> For the love of God please don't use Furmark. Every time I see people using it I die a little inside, especially with these old and delicate cards. It's nothing but a heat and power virus. Run Unigine or whatever, please, anything but Furmark.



It's a stress test tool, just as AtiTool once was. It's a very quick and effective way of testing clocks, voltages, cooling and all that. Similar to running Intel Burn Test on an OC'ed CPU just to test the worst possible scenario. I'd prefer doing this, and any decently designed card should be able to handle it very well.

My RX580 did a whopping 260W core usage and the VRM's were pulling a good 20 amps over one single 8-pin connector. That's the testing I wanna see when going for a max 24/7 OC. When the cooling is proper and the supply of power is sufficient, there's not much that could go wrong. Just don't select POST-FX, because THAT'S the power virus you are talking about. In normal Furmark we're talking 140 to 180W of load.


----------



## Dinnercore (Mar 4, 2019)

Jism said:


> It's a stress test tool, just as AtiTool once was. It's a very quick and effective way of testing clocks, voltages, cooling and all that. Similar to running Intel Burn Test on an OC'ed CPU just to test the worst possible scenario. I'd prefer doing this, and any decently designed card should be able to handle it very well.
> 
> My RX580 did a whopping 260W core usage and the VRM's were pulling a good 20 amps over one single 8-pin connector. That's the testing I wanna see when going for a max 24/7 OC. When the cooling is proper and the supply of power is sufficient, there's not much that could go wrong. Just don't select POST-FX, because THAT'S the power virus you are talking about. In normal Furmark we're talking 140 to 180W of load.



Ok, I kind of agree with you, but I have to say that for OC _stability_ this test is not good. Furmark runs stable with clocks that would crash ingame. Just like you can run insane speeds in the GPU-Pi benchmark that would crash anywhere else. It is good for testing the cooling and whether the cooler is up to the task under all circumstances. Or to simulate the additional heat on the components that you might see in higher ambients during summer when it´s currently winter.

But like the others said, please don´t use this as a 24/7 stability tester. To test an OC for stability, run a short Furmark test under full supervision to see if the temps get critical, and then run your 3D benchmarks and games to test.
Don´t run Furmark for hours on end or even overnight while you sleep. That kills VRMs by pushing them slightly out of spec, reducing their lifespan significantly, if you aren´t there to stop the test when something gets too hot.


----------



## Jism (Mar 4, 2019)

A properly designed card wouldn't run out of spec, that is the whole point. They have OCP and probably a bunch of SMD-based fuses on the card itself. The core actually shuts down at a certain threshold as well. I'd agree with you that Furmark isn't the tool to be left alone for 8 hours or more, but running it for a bare 5 minutes isn't going to cut it either.

I live in Portugal, and I had a terribly hot summer with 46.5 degrees outside lol. With an office at over 35 degrees ambient it's running into its 90's easily. So yeah, you want to have a 24/7 sustained OC that will definitely pass under any circumstances. I can't have a crash and the driver resetting the GPU in the middle of a game, if you know what I mean.


----------



## Dinnercore (Mar 13, 2019)

As some may have read already, the EVGA 9800GX2 is gone for good now. Last thing it did was playing some Half Life 2, I´d say that is a nice way to go. One day I might want to go like that too...

Now it may be apparent that I´m into dual GPU (VPU in this case) cards. Which is why I had to take a look at ATi / AMDs take on mGPU. Apologies to @ST.o.CH , the HD2900XT will have to wait until I come across one. You are right, they are not very common to find these days. Even less than the 1950 XTX. 

I found this next card in the 'for free / as gift' section. I was a bit too far away to drive over and pick it up, so I made a deal with the owner for shipping.






It came with the original box and all the stuff in it. You can instantly spot that the only original part missing is the cooler. Instead, this card was already fitted with an aftermarket solution from Zalman: two Zalman VGA coolers for the dies, and some extra little heatsinks in blue on the original plate that covers the memory modules. Suits the card quite nicely. All the other heatsinks, like the ones for the VRMs, are stock.

I´ll see when I find time to take care of it, currently using my benchtable to test out another mainboard and some CPUs.


----------



## ST.o.CH (Mar 17, 2019)

Dinnercore said:


> Now it may be apparent that I´m into dual GPU (VPU in this case) cards. Which is why I had to take a look at ATi / AMDs take on mGPU. Apologies to @ST.o.CH , the HD2900XT will have to wait until I come across one. You are right, they are not very common to find these days. Even less than the 1950 XTX.


No need for apologies. A while ago I looked for an HD2900XTX and didn´t find any, but I came across one GTX 9800 GX2 for 40€, one HD 4870 X2 for 35€ and one GTX 295 single pcb whose price I can´t recall; I didn´t take any because I wasn´t interested.
BTW that HD 4850 X2 is very nice.


----------



## Dinnercore (Mar 18, 2019)

Between mainboard swaps I found some time to test the 4850X2:





The fans plug into the mainboard and as such are controlled by it. This means I have to set them to a fixed speed and can´t make any load-based adjustments, so I set them to 80% all the time.




33°C idle temp @ 20°C ambient. High fan speed plus aftermarket coolers = low temps. I have to add 20W to the power draw figures, because this board is DDR3 instead of the DDR2 one before and its idle draw was lower. Same CPU.
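The 20 W correction is just baseline normalization: the PSU "power-in" reading covers the whole testbench, so swapping the board shifts every figure. A minimal sketch of the idea; the 20 W offset is from the post, the helper name and the 274 W raw reading (derived from 294 - 20) are my own:

```python
# Normalize wall-power readings taken on different testbench platforms.
# offset_w = how much lower the new platform's baseline draw is compared
# to the reference platform (here: the DDR3 board idles 20 W below the DDR2 one).
def comparable_draw(psu_reading_w: int, offset_w: int = 20) -> int:
    """Adjust a PSU 'power-in' reading so platform baselines line up."""
    return psu_reading_w + offset_w

print(comparable_draw(274))  # 294, matching the load figure quoted below
```

Without a correction like this, the chart in the first post would mix numbers from two different system baselines.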





For this load test I used the GPU-Z render test in fullscreen to generate some heat. I wanted to try the suggestion of running a game instead of Furmark, but had trouble getting the Half-Life 2 benchmark to run in Crossfire. It only used one GPU even tho Crossfire was activated in the Catalyst software.

Again decent temps: 48°C max on one core and the highest overall sensor at 53.5 °C. Power draw with the adjustment = 294 W.

Will take it apart next.


----------



## Dinnercore (Aug 1, 2019)

Just reporting back, thread is not dead. It will never be, but time is often short and I´ve added CPU-OC to the list of my hobbies.







Just yesterday I took the 4850X2 apart and as you can see there is a lot of dust that has built up over time. Thanks to the heatsink design it did not block any airflow, but I still prefer a clean pcb over THIS any day.





The thermal pads were very brittle, I will have to replace them. It seems like 1mm thickness on the memory and on the VRM heatsink on the back of the card, and 0.5mm for the large silver VRM heatsink on the front.





I have decided not to try to pull the ASIC heatsink off; it is glued in place and I think it should stay that way. There is no alternative mounting mechanism, and tearing at the glue would most likely destroy it.





Backside looking fine, stickers still in place. That thin yellow foil stuff looks like thermal and electric isolating tape, I can replace that with my own. However I just ran out of 0.5mm pads, so I will have to re-order those before moving on.





I like these coolers, they use thumb-screws. Really simple to attach and you get good control over the mounting pressure.


----------



## biffzinker (Aug 1, 2019)

Dinnercore said:


> That thin yellow foil stuff looks like thermal and electric isolating tape, I can replace that with my own.


It's called kapton tape.






1 Mil Kapton Tape (Polyimide) - 1/2" X 36 Yds: Electrical Tape (www.amazon.com)

----------



## Dinnercore (Aug 1, 2019)

biffzinker said:


> It's called kapton tape.
> 
> 
> 
> ...


That´s the name I was looking for! Thanks

Yeah I got a lot of this stuff around. Very useful. 

Oh and btw I fixed the chart in my first post to a more forum-friendly format. Seems it got broken during some update recently.


----------



## kapone32 (Aug 1, 2019)

Whenever I come on your thread it makes me wax nostalgic for my 8800 GTS (first GPU ever bought).


----------



## Grog6 (Aug 1, 2019)

I just looked at your table; I ran 3x 4870 video cards in Crossfire in an i7-920/X58 setup a few years ago, and I never realized they drew that much power!

I must have been pushing 900w on video cards alone, lol.

Nice thread!

BTW, the kapton tape is the same kind of stuff they make flex circuits out of; it's polyimide, and good to ~400C for soldering on, and can then be dipped in LN2 to operate, lol.

Also, If you want to get the HS off that's glued on, you can use acetone, carefully applied with a syringe, with the board standing on edge.
I'd find a good thermal 'glue' to put it back with before I considered that. 
Arctic Ceramique is an epoxy I've used, but it can pop off if hit just so.


----------



## Mr.Scott (Aug 2, 2019)

Grog6 said:


> I'd find a good thermal 'glue' to put it back with before I considered that.
> Arctic Ceramique is an epoxy I've used, but it can pop off if hit just so.


Use the paste of your choice and a dot of super glue on the opposite corners. Works tits.


----------



## Grog6 (Aug 2, 2019)

Nice tip; I hadn't considered that.  

That should even work with liquid metal, if the superglue is compatible; the surface tension should hold the LM in place.

I had problems with a Peltier-based dehumidifier I made for drying rocket chemicals; the cold side kept popping off.
A freon leak can be Bad in that use case, and is to be avoided.


----------



## Dinnercore (Aug 2, 2019)

Grog6 said:


> Nice tip; I hadn't considered that.
> 
> That should even work with liquid metal, if the superglue is compatible; the surface tension should hold the LM in place.
> 
> ...



Please don't use liquid metal: the heatsinks on VRMs, and in this case on the bridge chip, are usually made of aluminum and will be destroyed by the gallium in LM.
It's also completely unnecessary to use LM on a part that was fine with cheap paste or even a pad before.


----------



## Grog6 (Aug 3, 2019)

I know all about aluminum and Gallium, lol.
The specific one he shows above IS aluminum, so I get your point.

There are a lot of places I use heatsinks where something that doesn't outgas is an advantage, like in a vacuum chamber.
You can really only move heat to the walls, and stainless is a terrible heat conductor.
A piece of copper laid up beside a piece of stainless with a thin gap of vacuum between them is WAY worse, so a film of LM between them would be ideal.
The vapor pressure of LM, IDK, but it's got to be better than silicone-based products, or even epoxies.
(Most hivac is stainless; copper tubing is considered porous at hivac conditions. Yes, I know about the copper gaskets.)
Swaging the metals together is possible, but then it's not reconfigurable. And there can still be vacuum gaps between them...

On the fun side:
You can dissolve aluminum into a chunk of a certain metal, and when it starts to solidify, the aluminum will come to the surface and burst into flame as it's not covered with an oxide layer; and it's too hot for it to form in a protective way. 

I won't mention the composition, because it's been used for nefarious purposes in the past.


----------



## Dinnercore (Aug 6, 2019)

Time for the pretty photos. Got rid of the worst stuff and started my usual photo-routine.

Looks like a pretty tidy layout.

All the little SMDs and everything labeled and traced. Just beautiful to look at up close. I wonder why this one has so many big diodes.

Here we have a battle-scarred die:

And some more die-shots:

Time to dress it up again.

Like I mentioned before, the memory and backside VRM take 1 mm pads; the front-side VRM heatsink (the part on the right below the PCIe power connector) would want thin 0.5 mm pads, but I decided to use paste instead this time.

Gonna throw it on my testbench some time soon, but at the moment it is still busy benching a CPU and I don't want to mess up my data with a sudden GPU and driver switch.


----------



## r9 (Aug 6, 2019)

rtwjunkie said:


> I’m in! Almost like pr0n.



Correct term: vintage pr0n.


----------



## R00kie (Aug 6, 2019)

aaaahhh, this thread is giving me all the nostalgia feels! 

I should get my X1600 Pro, 2 HD4850's, a GTX 295 (sandwich edition), and an HD5970 out for some photoshoots


----------



## biffzinker (Aug 6, 2019)

gdallsk said:


> I should get my X1600 Pro, 2 HD4850's, a GTX 295 (sandwich edition), and an HD5970 out for some photoshoots


I'm always up for some naked card photo shoots.


----------



## Dinnercore (Aug 7, 2019)

gdallsk said:


> aaaahhh, this thread is giving me all the nostalgia feels!
> 
> I should get my X1600 Pro, 2 HD4850's, a GTX 295 (sandwich edition), and an HD5970 out for some photoshoots


Do it!  Especially the 295, I love those. Got 5 of them now, 2 dual-PCB and 3 single-PCB, and even waterblocks for every single one.


----------

