# NVIDIA's shady trick to boost the GeForce 9600GT



## W1zzard (Feb 29, 2008)

When we first reviewed NVIDIA's new GeForce 9600 GT, we noticed a discrepancy between the advertised core speed and the frequency reported by the clock generator. After further investigation, we can now explain what causes this.



----------



## newtekie1 (Feb 29, 2008)

I wouldn't call it a shady trick.

Is it not entirely possible that nVidia's engineers simply used the PCI-e bus speed as the base because it would mean the card would be slightly cheaper to manufacture? I mean, the standard PCI-e bus speed is 100MHz, and only very few people actually change that speed. So perhaps they figured they could use that to derive the base 25MHz frequency instead of actually putting a crystal on the board, which would increase costs (slightly, but every little bit counts).

Linkboost does increase the PCI-e frequency, but ONLY when a compatible graphics card is installed. AFAIK the 9600GT isn't a compatible graphics card, so it won't be affected. However, I might be wrong here.

I don't think it was really an attempt by nVidia to pull the wool over our eyes; I just think it was a move to make the product cheaper and easier to produce that wasn't fully thought through or properly explained to reviewers/end-users.


----------



## DaMulta (Feb 29, 2008)

Now if you could change the voltage in the BIOS for the video card, this would really rock. No software needed for a simple core clock change.

Now, when you do this, the shaders stay the same, correct? They are not linked on this card the same way the core is, correct?


----------



## PaulieG (Feb 29, 2008)

newtekie1 said:


> I wouldn't call it a shady trick.
> 
> Is it not entirely possible that nVidia's engineers simply used the PCI-e bus speed as the base because it would mean the card would be slightly cheaper to manufacture? I mean, the standard PCI-e bus speed is 100MHz, and only very few people actually change that speed. So perhaps they figured they could use that to derive the base 25MHz frequency instead of actually putting a crystal on the board, which would increase costs (slightly, but every little bit counts).
> 
> ...



Well, I would agree with you if they didn't flat out deny or refuse to answer questions about it. I mean, once it becomes an issue/concern, why not give an accurate explanation?


----------



## POGE (Feb 29, 2008)

Don't know why they want to hide it... but the fact that they do is wrong. I mean, is it okay to lie about clock speeds in drivers?


----------



## W1zzard (Feb 29, 2008)

newtekie1 said:


> mean the card would be slightly cheaper to manufacture?



a 27 mhz crystal is already on the board for the memory clock, so why not use that like we did for the last 25 or so years?


----------



## regan1985 (Feb 29, 2008)

Something tells me there is more to it! Although I think it's a very good find; someone has a lot of time to spare. Well done, W1zzard. Why they didn't tell you, I don't know! I wonder if other new NVIDIA cards will end up the same?!


----------



## [I.R.A]_FBi (Feb 29, 2008)

will the same apply to a 9800GT?


----------



## ShinyG (Feb 29, 2008)

So that's how they did it!
I was going crazy on another thread wondering why the 9600GT is so close to the 8800GT even though it has half the shaders. It just didn't add up.
Well, now it does!

Thanks, W1zzard, for clearing that up.


----------



## SK-1 (Feb 29, 2008)

Good Tech,...bad execution. I agree with Wizz.


----------



## rhythmeister (Feb 29, 2008)

Nice find and good digging up of subject matter!


----------



## W1zzard (Feb 29, 2008)

[I.R.A]_FBi said:


> will the same apply to a 9800GT?



depends on how they will implement the pcb design


----------



## [I.R.A]_FBi (Feb 29, 2008)

W1zzard said:


> depends on how they will implement the pcb design


If they do, that may be the only reason to get a 9800GT. To the 1337, it makes no difference, I guess.


----------



## DaMulta (Feb 29, 2008)

W1zzard said:


> depends on how they will implement the pcb design



With that note, do you think this will only affect reference design 9600GT cards?


----------



## cowie (Feb 29, 2008)

I would think they would boast about something like that, or at least make users aware?
you guys are pure genius.


----------



## newtekie1 (Feb 29, 2008)

W1zzard said:


> a 27 mhz crystal is already on the board for the memory clock, so why not use that like we did for the last 25 or so years?



I don't know, why use 25MHz as the base at all?  All very good questions.


----------



## cdawall (Feb 29, 2008)

what if someone just pressed the wrong key when coding the drivers


----------



## EastCoasthandle (Feb 29, 2008)

Wow, just wow. This is an excellent find. This brings up 2 questions:
#1 Will people with Intel chipsets lose out on this added boost (without LinkBoost)?
#2 What is the probability of the card failing/malfunctioning because of this?


----------



## W1zzard (Feb 29, 2008)

1) if you crank up the pcie clock on intel you will see the same clock increase
2) thats a good question. if a card can fail from overclocking (i have never seen evidence for that) and a manufacturer can detect it (no evidence for that either) and you didn't even know that the funky nvidia chipset overclocked your card, and made it go boom, you should get a new card from the mfgr or find a lawyer


----------



## AddSub (Feb 29, 2008)

That's what I call a professionally done investigative report. Great stuff W1zzard.


----------



## niko084 (Feb 29, 2008)

That's a bit odd, that they would mislead like that...

A bit shady, I would say...

Definitely very odd.


----------



## Tatty_One (Feb 29, 2008)

Paulieg said:


> Well, I would agree with you if they didn't flat out deny or refuse to answer questions about it. I mean, once it becomes an issue/concern, why not give an accurate explanation?



Because perhaps they don't want to give the competition ideas... this has got to be a good thing; the method, that is, not the secretiveness. I am all for performance enhancements, I just don't like it being shady.


----------



## WarEagleAU (Feb 29, 2008)

I agree, Tatty. If they fudged the numbers or something, I might understand, but a flat-out refusal or secretiveness is just wrong.


----------



## tkpenalty (Feb 29, 2008)

Kudos w1zz... and that proves that the 8800GT is still > the 9600GT, however.

It's a great boost for 9600GT users, and I agree: why did they have to be so shady with this? Why didn't they just tell us in the first place?


----------



## warhammer (Feb 29, 2008)

Nice read Wiz

Can the crystal be changed on the G92 (8800GTS), and would it give a big boost in performance?


----------



## W1zzard (Feb 29, 2008)

the crystal affects only memory clock .. better to leave it alone and do normal overclocking


----------



## panchoman (Feb 29, 2008)

very nice investigation man!


----------



## theonetruewill (Feb 29, 2008)

Very interesting - thanks W1z. I too was puzzled by your earlier findings so it's pleasing to see that you found the reason behind it. I certainly am impressed by your deduction skills!


----------



## iop3u2 (Feb 29, 2008)

Can someone test whether LinkBoost is enabled when using a 9600GT? Just plug it into a 680i chipset board and check if the PCI-e frequency is above 100MHz. If it is, it would mean that pretty much every 9600GT SLI benchmark on the net was actually run with the cards 0-25% overclocked.


----------



## OnBoard (Mar 1, 2008)

ShinyG said:


> So that's how they did it!
> I was going crazy on another thread wondering why the 9600GT is so close to the 8800GT even though it has half the shaders. It just didn't add up.
> Well, now it does!
> 
> Thanks, W1zzard, for clearing that up.



Yeah, I read your posts and wondered about it myself too; I wasn't expecting it to be so close to the 8800GT. It being a factory-OC card that gets overclocked further adds up nicely to the performance.

btw, does the 9600GT have more core voltage than the 8800GT, if it can run so high on stock voltage?


----------



## JacKz5o (Mar 1, 2008)

I wonder why NVIDIA hid the 27MHz crystal instead of marketing it as a.. "feature" like some other companies would do? Seems a bit shady.


----------



## cbunting (Mar 1, 2008)

Hello,

Nvidia's nTune and every other software program I've seen shows the correct value. Only Riva Tuner, which doesn't support the 9600 GT even though it works with it, shows two different values for the core clock speed. Isn't this actually a bug, in a sense, if only one program is giving false readings?

I would think that nTune would show the same thing Riva Tuner does, but no other software does. Just RivaTuner, so I don't think it's accurate; otherwise the core clock changes would be reflected in my 3DMark05/06 scores, GPU-Z, etc.

I just think Riva Tuner is wrong. Maybe RT 2.07, with support for the 9600, will give correct readings.

I could be wrong, but I don't think RT is accurate.
Chris


----------



## PuMA (Mar 1, 2008)

warhammer said:


> Nice read Wiz
> 
> Can the crystal be changed on the G92 (8800GTS) and would it give a big boost in performance.



I think it's not about the 9600 performing well without a crystal, it's about how you OC the card. With the 9600GT and an nvidia chipset, you just enable LinkBoost and your card is overclocked. On the G92 and other cards, you have to dig up OC programs to OC the card. Just two different methods.


----------



## W1zzard (Mar 1, 2008)

cbunting said:


> Hello,
> 
> Nvidia's Ntune and every other software program I've seen shows the correct value.



you are telling me that if you raise pcie clock the new core clock is reflected by other programs?


----------



## cbunting (Mar 1, 2008)

Hello,

Well, if the crystal thing is really legit, I'd like to know where the memory clock crystal is as well.

http://i29.tinypic.com/1zzjl3a.jpg

I get an inaccurate reading on both the core clock AND the memory clock. Look at my OC settings in the pic; both are incorrect. The drivers have changed for the 9600, so I still don't see how the plugins are reading the drivers correctly, since Riva isn't updated.

I still could be wrong, but the memory clock isn't right either.
Chris


----------



## cbunting (Mar 1, 2008)

W1zzard said:


> you are telling me that if you raise pcie clock the new core clock is reflected by other programs?



No, I was saying that all other software shows my factory default clocks. The non-updated Riva Tuner is the only software that shows clock speeds different from what I have set.

If I set my card to a 650MHz core clock, nTune, Expertool, 3DMark06 and Everest all show a 650MHz core clock. Riva Tuner is the only one showing a different value, and it doesn't support my drivers, so I just don't see how it's an accurate reading, is all.

Chris


----------



## W1zzard (Mar 1, 2008)

cbunting said:


> No, I was saying that all other software shows my factory default clocks. The non-updated Riva Tuner is the only software that shows clock speeds different from what I have set.
> 
> If I set my card to a 650MHz core clock, nTune, Expertool, 3DMark06 and Everest all show a 650MHz core clock. Riva Tuner is the only one showing a different value, and it doesn't support my drivers, so I just don't see how it's an accurate reading, is all.
> 
> Chris



yes you are correct. rivatuner sensor readings only pointed out that something strange is going on here that required more investigation. and as i mentioned in the article rt's sensor readings are wrong. its just something that makes you ask "whats going on here?" and then you dig and find your answer.


----------



## cbunting (Mar 1, 2008)

Hello,

I'm digging, but so far I see that almost all reviewers use Riva Tuner. RT does not have support for G94 cards. It also doesn't support the 174.20 drivers. So I think the readings are wrong because Riva doesn't know how to read the drivers.

The weird Riva Tuner output:
http://i30.tinypic.com/2dqjdd4.jpg

GPU-Z output for Stock Factory OC;
http://i26.tinypic.com/2e4btbo.jpg

Re-applied OC Settings with Expertool showing with GPU-Z;
http://i30.tinypic.com/33capgz.jpg

This is why I honestly think Riva Tuner is inaccurate, more so than anything.
Chris


----------



## Wile E (Mar 1, 2008)

cbunting said:


> Hello,
> 
> I'm digging, but so far I see that almost all reviewers use Riva Tuner. RT does not have support for G94 cards. It also doesn't support the 174.20 drivers. So I think the readings are wrong because Riva doesn't know how to read the drivers.
> 
> ...


He's not disputing that RT is inaccurate, only that its different readings led him to investigate further.


----------



## cbunting (Mar 1, 2008)

I just think in the end it's odd, because I found this post by searching for "9600 rivatuner" on google.com to see what line needed to be added to the RivaTuner config, and that search now shows this topic being posted everywhere. But a simple comparison with Expertool, nTune or similar alongside GPU-Z would yield the correct results. Instead, we now have some magical 9600 GT cards, all due to a version of Riva Tuner that doesn't even support the G94 chipset or its drivers.

It's a shame Riva Tuner 2.07 isn't out with updated plugins, I guess.
Chris


----------



## Wile E (Mar 1, 2008)

cbunting said:


> I just think in the end it's odd, because I found this post by searching for "9600 rivatuner" on google.com to see what line needed to be added to the RivaTuner config, and that search now shows this topic being posted everywhere. But a simple comparison with Expertool, nTune or similar alongside GPU-Z would yield the correct results. Instead, we now have some magical 9600 GT cards, all due to a version of Riva Tuner that doesn't even support the G94 chipset or its drivers.
> 
> It's a shame Riva Tuner 2.07 isn't out with updated plugins, I guess.
> Chris


Whether it's reading or not isn't the issue. W1zzard proved that upping the PCIe bus overclocked the card by using benchmarks, not by using anything that reads the card's clocks.


----------



## btarunr (Mar 1, 2008)

Dugg! 

Ah, NVidia can give Desperate Housewives a run for their money.....in the shady business that is.


----------



## cbunting (Mar 1, 2008)

Wile E said:


> Whether it's reading or not isn't the issue. W1zzard proved that upping the PCIe bus overclocked the card by using benchmarks, not by using anything that reads the card's clocks.



How do you get benchmarks "without" reading the card's clocks?

I'm aware of this article, http://www.nbsgaming.com/PCIEBus.html

If overclocking the PCIe bus has really increased the core with the stats not coming from Riva Tuner, then the 9600 cards are one of the first to be able to show this from what I've seen.

Chris


----------



## Wile E (Mar 1, 2008)

cbunting said:


> How do you get benchmarks "without" reading the card's clocks?
> 
> I'm aware of this article, http://www.nbsgaming.com/PCIEBus.html
> 
> ...


That's his point exactly. Do you have a 9600? If so, run 3dmark 06 at 100MHz PCIe, then set your PCIe to something like 105MHz, and run it again.


----------



## cbunting (Mar 1, 2008)

Hello,

I still don't see where the info is accurate. Looking back at some of the older cards, which already had the 27MHz crystals on them...



> Almost all GT/GTX cards used to set core frequencies at 27 MHz steps, frequency of the geometry unit was always higher than those of the other units by 40 MHz. But this 27 MHz step was removed in the G71. In the 7900 series, all the three frequencies can be changed at 1-2 MHz steps. A difference between the geometry unit frequency and frequencies of the other units in the 7900 GTX has grown to 50 MHz. This difference in the 7900 GT is 20 MHz, there is no 27 MHz step either.



So it would seem, from opening up my 9600, that Nvidia has brought something back for some reason. But I don't see how that enables a change in core settings through the PCIe bus, unless something is different because these 2.0 cards are backwards compatible with x16. Since I am using x16, an OC of my PCIe bus may show some differences, but I would assume it would change the data rate, not overclock the card. I just can't see how it would be possible, although it very well could be.

Chris


----------



## Wile E (Mar 1, 2008)

cbunting said:


> Hello,
> 
> I still don't see where the info is accurate. Looking back at some of the older cards which already had the 27mhz chips on it..
> 
> ...


Just run 3DMark06 at the settings I said above. You should see a roughly 5% increase in performance. 5MHz on the PCIe bus on any other card makes no such difference.


----------



## cbunting (Mar 1, 2008)

Hello,

I think I've found the details. 

http://www.digit-life.com/articles2/video/g70-2.html

Look halfway down that review until you see the NVIDIA control panel with the OC configuration. The text below it explains how the reviewer and the author of Riva Tuner figured out exactly why this was happening.

Chris

Edit:

Just search that page at the above link for "A story about the triple core frequencies" and you'll find the start of the info.


----------



## Wile E (Mar 1, 2008)

You're missing the point entirely. The readings from programs don't matter here. The fact of the matter is, the card gets faster when you increase the PCIe Frequency. Instead of trying to explain the readings, just do as I suggested, and run benchmarks at various PCIe frequencies, and see for yourself.


----------



## AlexUnwinder (Mar 1, 2008)

cbunting said:


> The weird Riva Tuner output:
> http://i30.tinypic.com/2dqjdd4.jpg
> 
> GPU-Z output for Stock Factory OC;
> ...



You're mistaken. All the tools you've mentioned, including nTune, GPU-Z and ExpertTool, show just the *target* clocks, which you "ask" to be set. So you'll always see "correct" clocks there regardless of the real PLL state, thermal throttling conditions etc.
The real clocks generated by the hardware must be, and normally are, different compared to the target ones. And there are only two tools that allow you to monitor real PLL clocks: RivaTuner and Everest. The rest will give you target clocks only.


----------



## XodiloS (Mar 1, 2008)

Does anybody know if its the same with the 8800gs cards?


----------



## cbunting (Mar 1, 2008)

Hello,

In the article, was the 9600 GT card used during the test plugged into a PCIe 2.0 slot, or a PCIe 1.x x16 slot?

Chris


----------



## cbunting (Mar 1, 2008)

I've searched and searched and searched, and I see nothing supporting these articles.

The benchmark tests in the techPowerUp article use an NVIDIA nForce 590 SLI motherboard, which only has PCIe 1.x x16 slots, not PCIe 2.0.

I'd like to see someone who has a motherboard with a PCIe 2.0 slot and a 9600 GT check whether overclocking the PCIe bus makes any difference with the hardware lock.

It just looks to me like you have a 9600 that supports PCIe 2.0, which can handle twice the data rate of PCIe 1.x x16. So an x16/2.0 card running in an x16 slot will be able to handle additional data from an overclocked PCIe bus, but only because it already supports twice the data rate, as listed above.

It's not actually overclocking, it's just passing more data, isn't it?
Chris


----------



## btarunr (Mar 1, 2008)

How does that matter? Does the 9600 GT utilize the full bandwidth of PCI-E 1.1 in the first place? Other than the bi-directional data rate and the power, there's no difference in the architectures.

I wonder how many reviewers used a PCI-E 2.0 board to test this in the first place. The TPU evaluation methodology is very realistic: we don't use an nForce 780i SLI board with a QX9650 to test a 9600 GT, because people with such CPU/platform configurations wouldn't even use a 9600 GT. In the same way, a rather moderate system is used to evaluate the board. AMD 7xx chipsets don't run Intel chips, X38 would seem high-end, and nForce 7xx is not yet widespread with its mainstream chipsets. PCI-E 1.1 is sufficient for even an 8800 GT.


----------



## W1zzard (Mar 1, 2008)

XodiloS said:


> Does anybody know if its the same with the 8800gs cards?



no, the only cards affected at the moment are 9600 gt

pcie 1.1 or 2.0 does not matter at all for this. the pcie base frequency does, which is 100 mhz by default on both


----------



## WhiteLotus (Mar 1, 2008)

Wow, I never knew this would cause such an outcry. Even the developer of RivaTuner has stepped in!


----------



## ghost101 (Mar 1, 2008)

WhiteLotus said:


> wow I never knew this would cause such an outcry. Even the developer of rivatuner has stepped in!



Well, there should be. Nvidia misled reviewers into thinking that the performance they got on LinkBoost-capable boards applied to all motherboards.

Therefore, if I think the 9600GT outperforms the HD 3870, use this information to buy one for my P35 board, and then find that my stock performance is worse than the HD3870's, I will be annoyed.


----------



## WhiteLotus (Mar 1, 2008)

Yeah, I guess you are correct, and this should be clearly shown on all specifications. But did they honestly think that no one would realise? If they thought that no one like w1zzard would be curious enough to find this, then they are living in their own bubble.


----------



## [I.R.A]_FBi (Mar 1, 2008)

ghost101 said:


> Well, there should be. Nvidia misled reviewers into thinking that the performance they got on LinkBoost-capable boards applied to all motherboards.
> 
> Therefore, if I think the 9600GT outperforms the HD 3870, use this information to buy one for my P35 board, and then find that my stock performance is worse than the HD3870's, I will be annoyed.




and that is one of the few burning issues.


----------



## xfire (Mar 1, 2008)

I couldn't get most of the article, but from what I understood, the PCIe bus determines the clock of this card, and it brings the clock much above stock, resulting in a performance boost. Could someone put it in plain English?
W1z, can we expect a review of the 9600GT at various clocks?


----------



## ViperJohn (Mar 1, 2008)

W1zzard said:


> a 27 mhz crystal is already on the board for the memory clock, so why not use that like we did for the last 25 or so years?



I have to agree with using the crystal control method.

There is also another twist here. A 100MHz PCIe bus frequency is no longer the across-the-board standard. The reference NV 780i motherboards run a 125MHz PCIe bus frequency on the PCIe 2.0 GFX card slots by default, and it is not adjustable in the system BIOS either.

Viper


----------



## ghost101 (Mar 1, 2008)

xfire said:


> I couldn't get most of the article, but from what I understood, the PCIe bus determines the clock of this card, and it brings the clock much above stock, resulting in a performance boost. Could someone put it in plain English?
> W1z, can we expect a review of the 9600GT at various clocks?



Only if the PCI-Express frequency is above 100MHz. It's like how CPUs are clocked now: it uses the PCI-Express bus speed instead of the front side bus. The multiplier is 26 for the 9600GT.

So 100/4 = 25MHz

25 x 26 = 650MHz

With a 110MHz PCI-Express bus speed:

110/4 = 27.5

27.5 x 26 = 715MHz

With 125MHz: 812.5MHz

That last one applies to LinkBoost.
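The arithmetic above can be sketched as a tiny calculation (a hypothetical helper; the divide-by-4 base and x26 multiplier are the values described in this thread and the article):

```python
def core_clock_mhz(pcie_bus_mhz, multiplier=26):
    """Model of the 9600 GT core clock per this thread:
    base = PCIe bus frequency / 4 (25 MHz at the stock 100 MHz bus),
    core = base * PLL multiplier (26 for the 9600 GT)."""
    return pcie_bus_mhz / 4.0 * multiplier

print(core_clock_mhz(100))  # 650.0 MHz at the stock 100 MHz bus
print(core_clock_mhz(110))  # 715.0 MHz
print(core_clock_mhz(125))  # 812.5 MHz (LinkBoost-style 125 MHz bus)
```

So any board that quietly raises the PCIe frequency raises the core clock in the same proportion.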


----------



## Tatty_One (Mar 1, 2008)

AlexUnwinder said:


> You're mistaken. All the tools you've mentioned, including nTune, GPU-Z and ExpertTool, show just the *target* clocks, which you "ask" to be set. So you'll always see "correct" clocks there regardless of the real PLL state, thermal throttling conditions etc.
> The real clocks generated by the hardware must be, and normally are, different compared to the target ones. And there are only two tools that allow you to monitor real PLL clocks: RivaTuner and Everest. The rest will give you target clocks only.



Now there is a useful piece of info!


----------



## Tatty_One (Mar 1, 2008)

cbunting said:


> I've searched and searched and searched, and I see nothing supporting these articles.
> 
> The benchmark tests in the techPowerUp article use an NVIDIA nForce 590 SLI motherboard, which only has PCIe 1.x x16 slots, not PCIe 2.0.
> 
> ...



As Wile E suggests, run a bench such as 3DMark 2006 with identical system settings and identical GFX core/shader and memory speeds; just ensure that on one run your PCI-E bus speed is at 105 or 110MHz, and on the other run keep it at the default 100MHz... you will see the difference for yourself! The proof of the pudding will be in the eating, so to speak.

I understand your points (and frustrations), but liken this to the following: you have a car whose top speed on the flat is 150 miles an hour, and no matter what you try, you cannot get it to go faster. Then I come along with a mod for your engine and fit it. You cannot find the mod, and looking in all the manuals for the engine you can find no difference, so you don't believe it exists. However, you get into your car, drive it flat out on the flat, and find it does 175 miles an hour. Do you still not believe I fitted the mod?

Now I know that's a fairly random dreamt-up story, but give it a try (the gfx card, that is, not the car!)...

One other thing: a 9600 could not possibly use all the bandwidth that PCI-E 2.0 has to offer (5 GT/s per lane), so that cannot be a factor; it won't even saturate PCI-E 1.1's 2.5 GT/s per lane. It's not a case of "handling" it; the throughput is not there in the first place to utilise the bandwidth. Even an 8800 Ultra cannot saturate PCI-E 1.1 bandwidth, let alone PCI-E 2.0's.
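For context on those link-bandwidth figures: per-direction slot bandwidth follows from the per-lane signaling rate minus the 8b/10b encoding overhead. A minimal sketch (hypothetical helper; standard PCIe 1.x/2.0 per-lane rates assumed):

```python
def pcie_bandwidth_gb_s(gt_per_s_per_lane, lanes=16):
    """Usable PCIe bandwidth per direction in GB/s:
    raw transfer rate minus 8b/10b encoding overhead
    (8 data bits carried per 10 line bits), summed over all lanes."""
    usable_gbit = gt_per_s_per_lane * (8 / 10) * lanes
    return usable_gbit / 8  # Gbit -> GB

print(pcie_bandwidth_gb_s(2.5))  # 4.0 GB/s for a PCIe 1.1 x16 slot
print(pcie_bandwidth_gb_s(5.0))  # 8.0 GB/s for a PCIe 2.0 x16 slot
```

Either figure is far more than a 9600 GT actually pushes across the bus, which is why the generation of the slot is irrelevant to the clock issue.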


----------



## MrMilli (Mar 1, 2008)

*Oh boy*

Ok i just read the report and also the complete thread in this forum.
And i'm wondering about something: how many of you have read the complete report????? The questions being asked are just ridiculous. Guys like 'cbunting' must be pulling our leg ... it can't be he's that stupid. I hope those of you that are asking these crazy questions did see that the report is FOUR pages. For everything that's being asked here, you'll find an answer for in the report (IF you read the article, you wouldn't have the questions in the first place). Wizzard i really wonder why you even bother answering something that can be found in the report. Even a 14 yo kid has harder stuff to learn in school than this report.


----------



## Steevo (Mar 1, 2008)

True. But there are avid supporters who believe large companies of their choice can do no wrong, or who want to know in detail the steps taken to test.

Either way, W1zz is reputable and has not misled or lied before. He uses some of the best benchmark mixes to test hardware, and top-notch setups. Some might actually believe that the PCI-e 2.0 standard will provide instant benefits, due to reading the whitepapers and taking them at face value. However, users with more experience know that the current PCI-e bus is not near saturation, and increasing the bus speed does little to nothing other than eliminate a fractional latency in reads and writes. Now that we have an add-in board basing its timing off the PCI-e bus, we have a real reason to experiment, but on boards that have stability issues with higher frequencies, the card in question will underperform what the manufacturer has stated.


----------



## ViperJohn (Mar 1, 2008)

MrMilli said:


> Ok, I just read the report and also the complete thread in this forum. And I'm wondering about something: how many of you have read the complete report????? The questions being asked are just ridiculous. Guys like 'cbunting' must be pulling our leg... it can't be that he's that stupid. I hope those of you asking these crazy questions did see that the report is FOUR pages. Everything being asked here is answered in the report (IF you had read the article, you wouldn't have the questions in the first place). Wizzard, I really wonder why you even bother answering things that can be found in the report. Even a 14-year-old kid has harder stuff to learn in school than this report.



I have to agree. Bringing up the PCIe 1.x/2.0 spec bandwidth difference, and any performance difference due to it, is irrelevant to the scope of the report.

What is relevant is that NV is apparently using the PCIe bus frequency divided by 4 as the *core* clock generator's master/base frequency on the 9600GT, instead of a 27MHz crystal as on all other cards.

Simply plugging the card into a motherboard that runs a higher default PCIe bus frequency, such as a reference 780i (125MHz default on the PCIe 2.0 slots), will cause the 9600GT to run higher core clock speeds and give higher benchmark scores than plugging it into a motherboard that runs a 100MHz default PCIe bus frequency. The user/tester doesn't have to do anything else and may not even notice the core clock speed difference... just the higher benchmark scores.

I cannot help but think this artificial performance increase the 9600GT gets when plugged into a 780i, with its higher default PCIe bus frequency, was somehow accidentally overlooked by NV!!!

Viper


----------



## cdawall (Mar 1, 2008)

I doubt the 780i PCI-e bus speed was overlooked; hell, they made the chipset.


----------



## divinebaboon (Mar 2, 2008)

So what do you guys suggest as a good method of overclocking my 9600GT with this knowledge? Should I set the bus speed to 100 first, OC the core, mem and shader to the highest stable settings, then slowly bring the bus speed up? Or do the opposite: get the bus speed as high as I can while stable, then start OCing the core, mem and shaders?


----------



## cbunting (Mar 2, 2008)

Hello All,

I've not been trying to start a debate over the article. But I do see that none of the software used in the article is accurate. The 27MHz frequency is nothing new; I listed an article covering it in another review, which the author of Riva Tuner also discussed when it was found, what, 3 years ago?

In regard to what I have been trying to say about all of this unsupported software: I was simply trying to see how this article came about and what PROOF there was to support the theory. But the whole report is based on results shown in screenshots of GPU-Z, Riva Tuner, 3DMark06... all programs that are fairly outdated compared to today's hardware.

---------

Going back over all of my posts, this is all I can find to give/take away from the article. But as in my original reply, I do not see anything to support this article or the hundreds of others. If these cards could change clock frequencies based on the PCIe bus frequency, we would have some very unstable cards, as they already come OC'd from the factory and some cannot be pushed much further without getting hot, causing black screens, monitors shutting off, etc.

Stock 9600 GT / Bandwidth

675/1800/1700 - Bandwidth: 57.6GB/s

OC'd 9600 GT / Bandwidth

750/2000/1918 - Bandwidth: 64.0 GB/s
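Those bandwidth figures can be sanity-checked against the effective memory clock (the 1800 and 2000 values in the triples above) and the 9600 GT's 256-bit memory bus. A minimal sketch, assuming bandwidth = effective memory clock x bus width in bytes:

```python
def mem_bandwidth_gb_s(effective_mem_mhz, bus_width_bits=256):
    """Memory bandwidth = effective memory clock * bus width in bytes.
    The 9600 GT uses a 256-bit memory bus (32 bytes per transfer)."""
    return effective_mem_mhz * 1e6 * (bus_width_bits / 8) / 1e9

print(mem_bandwidth_gb_s(1800))  # 57.6 GB/s, matching the stock figure
print(mem_bandwidth_gb_s(2000))  # 64.0 GB/s, matching the OC'd figure
```

Note this depends only on the memory clock, not on the PCIe bus frequency.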







3DMark06 reports the differences with the overclocked settings as normal. However, if you set the card back to the factory clocks listed above, the bandwidth also changes back to normal. So while the card is set back to stock clocks, if I OC the PCIe bus, I am increasing the bandwidth not by OC'ing the card but by OC'ing the bus itself. The problem is that 3DMark06 seems to think this increase in bandwidth is coming from an OC'ed card, so the results/card readings appear as if the card is overclocked, and 3DMark shows a change in the core/mem/shader clocks based on the increased bandwidth of the OC'ed PCIe bus.

What does it mean? That 3DMark06 isn't accurate, much like Riva Tuner. nVidia won't comment because they can't. Nothing has changed on these cards other than the fact that no software out right now actually supports them. We don't even have decent drivers at this point, so all in all, no software is giving factual readings of these cards.

These are just my own opinions from my own research. Again, I am not saying that I am right. I am saying, however, that I find it a bit odd that only one review has been posted, whereas no other 9600 GT owner can produce similar results.

Chris

BTW: 
If someone knows where 3DMark06 takes into account changes to the PCIe bus frequency and calculates these increased bandwidth changes there, instead of showing them as OC'd settings within the card, please let me know, because I have not found this anywhere in 3DMark06.


----------



## cbunting (Mar 2, 2008)

Please Note:

I missed something from the main article. For all of my tests and research, I am doing all of this on a custom-built computer. Looking back at the original article, in which I too asked about the nForce 590 MB, it seems that the only people who may gain anything from OC'ing the PCIe bus are those with a 590 MB or possibly better.



> The marchitecture is currently code-named Trinity and denotes a combination of Trinity-enabled motherboard (the 590s) and Trinity-enabled graphics cards. Once that combination is set in motion, the available bandwidth for the cards will go from 4GB/s to 5.2 GB/s, essentially a legal overclock of the PCIe bus, going from x16 to "x20" or "x22", depending on the final stability tests the company conducted recently.



http://www.theinquirer.net/en/inquirer/news/2006/04/27/nvidia-set-to-overclock-the-pcie-bus

I do not have an nForce board; therefore, I also don't have LinkBoost technology when I OC my PCIe bus. So for those of us who don't have an nForce board, you do need to be careful OC'ing the PCIe bus, as you can burn up your video card and cause corrupted data on your HD, among other harmful things. On most motherboards, overclocking the bus can cause instability, so unless you have an nForce board, just be careful.

Again, this was something I missed and didn't understand, because the article pointed out something with the 9600 card, specifically in the title. But it has everything to do with nForce boards.

Chris

*BTW*
The reason this article is confusing is because of the title!

*NVIDIA's shady trick to boost the GeForce 9600GT*

That is wrong! If anything, the article should have been called:

*nVidia's advancement of Technology in the nForce Trinity-enabled motherboards*

There isn't anything shady about nVidia creating video cards that support the latest advancements of their nForce boards. nVidia is the creator of both, so I don't see how anyone considers that shady.


----------



## btarunr (Mar 2, 2008)

The 'shady' part is that they've not communicated this 'advancement' to the reviewers. Reviewers take the card and evaluate it on par with other competing hardware. That's the gripe; there's a reasoning for it in the article's conclusion.


----------



## Betty (Kung Pow) (Mar 2, 2008)

cbunting said:


> ...
> 
> 3DMark06 reports the differences with the overclock settings as normal. However, if you set the card back to the factory clocks as listed above, the bandwidth also changes back to normal. SO while the card is set back to the stock clocks, If I OC the PCIe bus, I am increasing the Bandwidth not by OC'ing the card but by OC'ing the bus itself. But the problem is that 3DMark06 seems to think that this increase in bandwidth is coming from an OC'ed card. So the results/card readings appear as if the card is overclocked and 3DMark shows a change in the core/mem/shader clocks based on the increased bandwidth of the OC'ed PCIe bus.
> 
> ...



First of all, you are pointing out the main problem with this feature here:
the 9600GT gets a higher core speed simply by increasing the PCI-E bus. That gives a great boost in fill rates and such, as shown on the second page: http://www.techpowerup.com/reviews/NVIDIA/Shady_9600_GT/2.html

That gives motherboards with an automatic PCI-E overclock, or a higher PCI-E clock as standard, a performance boost of between 5 and 25%. But that doesn't apply to every motherboard out there, and therefore people buying the card don't know what performance to expect on their Intel, AMD or nVidia chipset.

And yes, the extra bandwidth comes from an overclocked card.



cbunting said:


> ...
> 
> BTW
> The reason this article is confusing is because of the title!
> ...



There you are wrong again, as the same performance boost would apply to a manually overclocked Intel board as well. This article isn't about whether nVidia invented a good thing or not; it's about why they kept quiet about it, even when asked directly.
It's a shady way to increase performance in reviews, so it might mismatch the performance regular buyers would get out of the card in their almost identical systems at home.


----------



## trog100 (Mar 2, 2008)

i think nvidia attach great importance to early reviews.. if they have deliberately kept quiet about this "change" its to make the odd review make the card seem better than it really is.. deliberate or incompetence.. neither can be considered a merit mark..

its also quite clear that many contributors to this thread are posting nonsense because they haven't properly read the article or the thread..

trog


----------



## cbunting (Mar 2, 2008)

Hello,

I've talked to a friend who stopped by. He said the main reason those chasing high scores in 3DMark06 and so forth overclock the PCIe bus is that it DOES raise the core clock on the cards.

What I find confusing about the article, as well as all of the replies to my own, is that there are tons of articles dating back to 2006 stating that the nForce boards have LinkBoost and/or allow changes to the PCIe frequency. So if these features have been available for over 2 years, and the G80+ chipsets have also supported this for the past year or more, why would it just now be noticed with a 9600 GT card, when there are reviews and tech sites that say it's been available for over a year, maybe two?

I don't know when nVidia first offered the nForce 590 board or the G80 chipset. But there seems to be enough info stating that this shady feature has been around for quite some time.

Other sites were first to mention it in 2006. But some people are just now finding out while testing all of these new cards since there seem to be so much hype around them.

Chris


----------



## cbunting (Mar 2, 2008)

This will be my last post on this subject.

There are many reasons why I don't understand the whole concept behind the article and what exactly it has to do with the 9600 GT card. The article is 4 pages long and talks about 2-year-old features that are now defunct for the most part.

Here is why an article written in Feb 2008 doesn't make sense to me.



> According to nVidia's website, LinkBoost has been removed from the 590 and 680i series chipsets. Only older boards with the 590 support it still.



That happened some time ago.



> Were there cards other than the 9600 that supported LinkBoost and PCIe overclocking?



The GeForce 79xx series and above featured these additions.

So all in all, everything listed in the article has been around since 2006. But like I said in the above reply, I don't see how a 2008 review can call nVidia shady because of the 9600 when it's obvious that the article was written about features that have been around since at least 2006.

Chris


----------



## Betty (Kung Pow) (Mar 2, 2008)

cbunting said:


> Hello,
> 
> I've talked to a friend who stopped by. He said that the main reason why those who overclock for high scores on 3DMark06 and so forth do overclock the PCIe bus as it "Does" raise the core clock on the cards.
> 
> ...



Take a look at the chart on page 2, because it seems that you haven't: http://www.techpowerup.com/reviews/NVIDIA/Shady_9600_GT/2.html

There you'll find the theoretical benchmarks comparing PCI-E bus speeds on the 9600 and the older 8800, and you will see that the shady feature isn't present on the latter.


----------



## cbunting (Mar 2, 2008)

How many people reading this thread overclocked their PCIe bus speed and noticed changes in their 3DMark06 scores using a 9600 card?

I didn't so lets see if anyone else does?

Chris


----------



## TmkGod (Mar 2, 2008)

*"Too good to be true" is the right phrase*

It seems like they (not accusing) were trying to sell a low/mid-priced card that acts like a higher-end card.
That's probably the reason for the initial delay, when they were having "trouble" with the vcore voltage.

Higher than normal voltage + higher frequency than manufactured = lots of people buy OC'd products without knowing that they're OC'd.

P.S.: I'm an owner of a G92 8800GT (65mm fan) and I now have a clue as to why the cooling is so faulty and lame; that is, to cut OC potential and encourage those who didn't buy an 8800GT to buy a lower-priced "equally good" 9600GT.


Don't flame me because I'm sarcastic, I'm nice most of the time.


----------



## trog100 (Mar 2, 2008)

your 8800gt is lame cos it was a rushed out (extra cheap) to beat ati torpedo job..

in other words not a proper product release.. the 9600 is the real one..

another nvidia shady trick

trog


----------



## Tatty_One (Mar 2, 2008)

trog100 said:


> your 8800gt is lame cos it was a rushed out (extra cheap) to beat ati torpedo job..
> 
> in other words not a proper product release.. the 9600 is the real one..
> 
> ...



I'll take the 8800....it's faster.  If it takes 'em longer to bring out slower....I'll take the rushed job anytime.


----------



## cdawall (Mar 2, 2008)

Tatty_One said:


> I'll take the 8800....it's faster   if it takes em longer to bring out slower....i'll take rushed job anytime



what is it faster in?


----------



## Tatty_One (Mar 2, 2008)

cdawall said:


> what is it faster in?



pretty much everything mefinks


----------



## cdawall (Mar 2, 2008)

Tatty_One said:


> pretty much everything mefinks



well i guess that excludes benchmarks?

http://hwbot.org/hardware.compare.do?type=gpu&id=1278_1&id=1233_1&id=1291_1

cause other than 3dm06 the 9600GT takes the cake in everything


----------



## Tatty_One (Mar 2, 2008)

cdawall said:


> well i guess that excludes benchmarks?
> 
> http://hwbot.org/hardware.compare.do?type=gpu&id=1278_1&id=1233_1&id=1291_1
> 
> cause other than 3dm06 the 9600GT takes the cake in everything



Obviously we are reading different reviews; I have just read 2 and they certainly didn't say that!

Take out the Palit and EVGA because they are overclocked and the 8800GT is reference, so just look at the Asus figures for the 9600GT, then go through the gaming benches. It does well, I'll give it that, but in this review.....not quite well enough.

http://www.hothardware.com/printarticle.aspx?articleid=1112


----------



## cdawall (Mar 3, 2008)

Tatty_One said:


> Obviously we are reading different reviews, I have just read 2 and they certainly didnt say that!



i wasn't reading reviews, i was looking at the avg OCs on the cards and the scores that were achieved with them. apparently the reviews didn't cover that, those crazy reviewers


----------



## Tatty_One (Mar 3, 2008)

cdawall said:


> i wasn't reading reviews i was looking at the avg oc's on the cards and the score that was achieved with them. apparently the reviews didn't cover that those crazy reviewers



Ahhhhh right, I was talking gaming performance at stock.....I made that about 11-3 in the 8800's favour with one tie but TBH I like the 9600GT very much, there is so little in it in most things gaming and with the 9600 being cheaper.....well.

Ohhhhh and synthetic benching??  Pfffttt


----------



## cdawall (Mar 3, 2008)

Tatty_One said:


> Ahhhhh right, I was talking gaming performance at stock.....I made that about 11-3 in the 8800's favour with one tie but TBH I like the 9600GT very much, there is so little in it in most things gaming and with the 9600 being cheaper.....well.
> 
> Ohhhhh and synthetic benching??  Pfffttt



i have the same view on synths

but we both know that our card is going to end up OC'd, so why not look at the results of that and go from there


----------



## Richteralan (Mar 3, 2008)

1. I don't think this is "cheating". It doesn't get more confusing for Joe Six-pack than: "We are no longer using a 27MHz oscillator to generate our clock rate; we now use 1/4 of the PCIe frequency as our base frequency. So any change to the PCIe frequency changes our GPU frequency, too!"
2. The same thing happens on my Geforce 8800M GTS.
3. I guess techpowerup needs more clicks for their website.


----------



## cbunting (Mar 3, 2008)

Hello,

I think what has been seen with the PCIe bus frequency changes is not the same as what is being referred to in the article. I've also seen the mention that Alex, AKA Unwinder, will not change anything to support the 27MHz chip.

However, one of the 9600 GT's features is PureVideo.



> A Powerful Entertainment Hub
> Experience the GPU’s power while enjoying HD movies or having a premium 3D user experience with Windows Vista and Windows Media Center. PureVideo® HD technology provides lifelike pictures and vibrant color while CPU-offload capabilities enable you to manage your photos and videos with ease.



It would seem to me that it was the reviewer who first attributed the PCIe frequency changes to the 27MHz chip. However:



> A method for managing an asynchronous data buffer to provide an output data stream comprises the steps of receiving asynchronous data at a nominal data rate and writing at least a portion of the received asynchronous data into the buffer. A fullness level of the buffer is monitored to determine whether the fullness falls within a first, nominal range, or into second or third higher ranges. For example, the buffer may have a capacity of 1024 bytes, the first range may be from 0 to 648 bytes, the second range may be from 648 to 836 bytes, and the third range may be from 836 to 1024 bytes.
> 
> A target data output rate is then determined, which may be 19,200 bits per second, or 19,200/2n bits per second, where n is a non-negative integer (e.g., n=0, 1, 2, . . . ). A fixed reference clock signal having an associated rate, for example, 27 MHz is also provided. 27 MHz is selected as an example since it is used in the MPEG system. However, virtually any system clock frequency may be used. A clocking signal is provided for outputting the asynchronous data from the buffer at a rate which corresponds to a ratio of the associated rate and the divisor. A divisor is selected to provide the clocking signal at a first rate to minimize a difference between the target data output rate and the first rate when the buffer fullness falls within the first range. Optionally, a direct digital synthesis (DDS) circuit may be used to provide the clocking signal at the desired level by providing a fractional divisor.



Full page where text came from. http://www.patentstorm.us/patents/5949795-description.html

The above example is not related to nVidia cards, but 27MHz seems to be pretty standard in applications supporting video encoding/decoding for data output. And as I mentioned above, the 9600 GT offers PureVideo, HD and so on, so I think that chip has more to do with encoding/decoding than with the PCIe changes.

Again, I am not saying that I am right about this info. I am just looking to see exactly what would cause this to happen and what exactly is causing it.
Chris

BTW:

Here is a full list of nVidia patents. I'm sure all of the chips, designs or whatever are listed here somewhere.
http://www.patentstorm.us/patents/search-results.html?search=nvidia&imageField2.x=14&imageField2.y=8


----------



## Scyphe (Mar 3, 2008)

OMG, cbunting is hilarious. 

cbunting: you STILL haven't understood what's going on? seriously? 

Let me give it a shot:

Most other monitoring/overclocking tools use the driver's settings to calculate the frequency. RivaTuner does not read from the driver but instead reads directly off the hardware. The fact that there's a discrepancy here is the very key to understanding the article and what's happening.

RivaTuner is using the standard clock of 27MHz (which indeed sits on the 9600GT), which doesn't correspond to the new way the 9600GT derives its clock frequency (PCIe bus / 4). This is very interesting for understanding what's going on.

What happens is that the 9600GT partly uses the PCIe bus to set its frequency. This is new. With modern nForce-based boards (LinkBoost), the PCIe bus may be automatically overclocked, overclocking the 9600GT at the same time without you or any other 9600GT owner knowing it.

If you don't have LinkBoost but have overclocked your PCIe bus even a little (5MHz), your 9600GT is overclocked quite a bit. This goes for all PCIe buses, whether PCIe 1.1 or 2.0, AMD, Intel or nForce; it doesn't matter.

The REALLY shady thing here is that the drivers don't seem to reflect the increase in speed. They can still say the core is running a default "650MHz" when it's actually running at 687.5MHz. This inflates reviewers' benchmarks, creates possible instability and confuses people like you.
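A rough sketch of the clock scheme being described, with an illustrative PLL multiplier chosen so a standard 100MHz bus gives 650MHz:

```python
def core_clock_mhz(pll_multiplier, pcie_bus_mhz=100.0):
    """Core clock when the PLL reference is PCIe bus / 4, as described for the
    9600 GT. The driver keeps reporting the 100 MHz-bus figure regardless."""
    return pll_multiplier * (pcie_bus_mhz / 4.0)

print(core_clock_mhz(26, 100.0))  # 650.0 -> what the driver always reports
print(core_clock_mhz(26, 105.0))  # 682.5 -> actual speed with a +5 MHz bus OC
print(core_clock_mhz(26, 110.0))  # 715.0 -> a 10% overclock nothing reports
```

The point of the sketch: the multiplier is fixed, so any percentage change to the PCIe bus becomes the same percentage change to the core clock, invisibly.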

So in the end it's very simple.

From www.nordichardware.com:



> In other cases, people have had to downclock the card to make it work. Some partners have launched quite heavily overclocked cards and for natural reasons these were unstable to a higher degree, but now to a higher degree than normal. Downclocking the cards have been the only solution, which is a pity when you've paid for an overclocked card.
> 
> This called for further investigation and the people over at techPowerUp! noticed a discrepancy between the frequency reported by the driver and the clock generator. Most tools will read the frequency from the driver, but the RivaTuner monitor reads directly from the clock generator. However, the monitoring reads the information incorrectly. It multiplies by 27 instead of 25.
> 
> ...


----------



## Steevo (Mar 3, 2008)

Reality-------------------------------------------------------------------------Most of you.


While the idea of the patent is all right I believe it deals more with the Memory timings as stated in the article than the core clock.




We have established the following.

1) Nvidia manipulated a situation to show a supposedly superior product when it fails in most cases to reach said potential.

http://www.pcstats.com/articleview.cfm?articleid=2253&page=6
http://www.techpowerup.com/reviews/Zotac/GeForce_9600_GT_Amp_Edition/20.html

When you place the card in a standard board you get substandard performance. My system kicks the card to the curb if you compare 3DMark06 setups, and I have an old system.



However, if you included a performance boost with a previously unmentioned twist, namely that you have to overclock your PCI-e bus, many people would turn away. Instead, Nvidia's clever marketing department has twisted it to seem that this is a killer card for a great price, instead of the potential turd it is.

2) On the majority of setups the card will not perform to the expected level due to the lower core clock.

3) PCI-e bus is not saturated when using a 1.1 with a 2.0 card.

http://www.tomshardware.com/2004/11/22/sli_is_coming/


After reading this, sit and smoke a cigar and have a drink. Let the full test sink in, then post something intelligent, like how easy it is to be a backseat driver or a professional reviewer without all the hassle of doing it for a living, instead of injecting your presupposed ideas onto the internet for people everywhere who know better to laugh at. Tell us about your plants and your dog Scruffy. But please refrain from showing your stupidity on the internet; unlike at your mom's house, we will laugh at you.


----------



## Unwinder (Mar 3, 2008)

Scyphe said:


> The REALLY shady thing here is that the drivers don't seem to reflect the increase in speed. They can still say the core is running a default "650Mhz" when it's actually running @ 687.5Mhz. This inflates reviewers benchmarks, creates possible instability and confuses people like you.



Most likely the NVIDIA driver will never show the real clock speed in the case of PCI-E bus overclocking; I've explained why here:

http://forums.guru3d.com/showpost.php?p=2618787&postcount=3

The same will apply to new versions of diagnostic tools reading the PLL clock directly (RivaTuner and Everest). Both will also use a fixed 25MHz reference clock.
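In other words, the misread comes down to which reference clock a tool multiplies the PLL multiplier by. A minimal illustration, where the multiplier of 26 is a hypothetical value for a 650MHz part:

```python
PLL_MULTIPLIER = 26  # hypothetical: a part programmed for 650 MHz core

actual_mhz  = PLL_MULTIPLIER * 25  # true reference: PCIe 100 MHz / 4 = 25 MHz
misread_mhz = PLL_MULTIPLIER * 27  # old assumption: the 27 MHz crystal

print(actual_mhz, misread_mhz)  # 650 702: the old assumption over-reports by 8%
```

Reading the same multiplier against the wrong reference always inflates the result by 27/25, i.e. 8%, which is why the fix is simply to switch the tools to the 25MHz reference.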


----------



## Betty (Kung Pow) (Mar 3, 2008)

cbunting said:


> ...
> CRAP
> ...



You, sir, seem to have no clue what thread you are answering in. I doubt more and more that you get what the 4 pages by W1zzard are trying to tell you.

Another explanation is that you are an employee of NVidia and are trying to make W1zzard's article look bad by derailing it from its purpose.

No flame or anything, but I can't figure out why you would otherwise do this.


----------



## MrMilli (Mar 3, 2008)

Betty (Kung Pow) said:


> You sir, seems to have no clue of what thread you are answeing in. I doubt more and more that you get what the 4 pages by W1zzard tries to tell you.
> 
> Another explanation is that you are a employe of NVidia, and tries to make W1zzard's article look bad by de-railing from its purpose.
> 
> No flame or anything, but i cant figure out why you would otherwise do this.



Well, it's obvious to me that he doesn't have a clue. Since he's a noob in computer hardware, he doesn't understand everything in the article, and that leaves him confused. I don't think he means any harm, but he really should stop spamming this thread.


----------



## cbunting (Mar 3, 2008)

I apologize for not understanding the article then.

So if RivaTuner reads the real clock value, and in my case the stock core clock is 650MHz but RivaTuner shows 729MHz as the real clock speed of the card, then this makes no sense, because ANY clock speed over 650MHz for the 9600 GT OC VOIDS THE WARRANTY.

So what now? We have a card that sets its core clock speed higher than the factory default. So if the 729MHz core clock shown by RivaTuner causes my card to burn up or malfunction, am I out $239.00 because Nvidia added some new feature or clock?
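For what it's worth, if the 27-versus-25 misread quoted earlier in the thread applies here, a 729MHz reading wouldn't mean the card is actually running out of spec. A back-of-the-envelope check, not a claim about this particular card:

```python
reported_mhz = 729                 # what RivaTuner showed
multiplier   = reported_mhz / 27   # 27.0, if the tool assumed a 27 MHz reference
actual_mhz   = multiplier * 25     # 675.0 MHz with the true 25 MHz reference

# 675 MHz happens to be the stock clock of the factory-OC'd model quoted earlier
# in this thread, so the reading may be a display artifact, not a real overclock.
print(actual_mhz)  # 675.0
```

Under that assumption the warranty question would be moot: the silicon never saw 729MHz; only the monitoring tool did.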



> I'm very sorry but we do not have any fan controller or overclocking utility
> as any overclocking of the card beyond the factory settings voids your warranty.
> 
> Thank you,
> ...



So a stock OC card installed in a PC with the PCIe OC'd to 125MHz automatically voids my warranty, correct?

Chris


----------



## cbunting (Mar 3, 2008)

Scyphe said:


> OMG, cbunting is hilarious.
> *Rivatuner does not read from the driver but instead reads directly off the hardware. The fact that there's a discrepancy here is the very key to understanding the article and what's happening. *



I just caught this part of the reply. And yes, This is where I get confused.

#1. Alex, AKA Unwinder, has said himself that RivaTuner does not support the G94 chipset. Therefore, it is not able to correctly read the frequencies. At least that was my understanding.



> I'd like to add that v2.06 "knows" nothing about G94 core has no internal G94 specific codepath. So adding G94 support to 2.06 this way may cause unpredictable results and several thing may function improperly or not work at all. For example, RAM type will be detected improperly, bus width will not be detected at all, core clock can be monitored improperly in hardware monitoring module etc.
> The only case when it is safe to add new card support by means of editing GPU database in .cfg file is when you're adding support for new display adapter model based on supported GPU family (e.g. you can safely add 8800GTS 512 support this way, because 2.06 fully supports G92 core). G94 is a bit different story. So please use it at your own risk.



See Alex/Unwinder's reply directly under mine in the full thread.
http://www.evga.com/forums/tm.asp?m=266477

What I have been trying to get at all along is that the software mentioned is NOT an accurate method to base this theory on. Are there any specifics that state the 9600 really changes its clock speeds based on the PCIe bus speed?

The article was written based on the core clock changes found using RivaTuner and the benchmark software. But how does that prove anything?

If no software currently supports reading the G94 chipset correctly, then how can anyone know that what the article is written about and based on is actually true?

Chris

*BTW: *

Based on all of the info that I have found and/or have been given. What was the exact basis for the article?

I've been told by various overclockers that overclocking the PCIe bus to a different frequency changes the core clock speed on ANY graphics card; that is why people started OC'ing the PCIe bus to begin with. But again, the article is about the 9600 GT and Nvidia's shady trick because of what? Possibly because the 9600 is the only card that will actually show the core clock changes based on the OC'd PCIe bus?


----------



## Solaris17 (Mar 3, 2008)

Unwinder, in the new revision of RT have you taken out hardware reading from the PLL altogether and just stuck to measuring multis? Now the clock slider and monitor display the same clocks....as opposed to the 27+ MHz difference...


----------



## Saakki (Mar 3, 2008)

i think Cbunting is a shady nVidia mindblender..after this..and hello all im new face in your nice forum..after this..should i go 9600 GT or 8800 GT for my upcoming rig..?


----------



## Solaris17 (Mar 3, 2008)

WELCOME TO THE FORUMS! Go 8800GT, it performs better; the 9600 is on par with the 8800GS.


----------



## imperialreign (Mar 3, 2008)

cbunting said:


> Based on all of the info that I have found and/or have been given. What was the exact basis for the article?
> 
> I've been told by various Overclockers that overclocking the PCIe Bus to a different frequency changes the core clock speed on ANY graphics card. That is why ppl started oc'ing the PCIe bus to begin with. But again, the article is about the 9600 GT and Nvidia's shady trick because of what? Possibly because the 9600 is the only card that will actually show the core clock changes based on the OC'd PCIe bus?



The basis of the article is that the majority of people who would potentially buy this card don't really mess around with OC'ing their systems. Seeing as it's a 9600 card, I'd expect those to slide into the mid-range bracket ere long, and the mid-range is "typically" the highest cost average consumers are willing to pay for a new card.

Even still, there aren't too many users who really start digging into a system BIOS for graphics OC'ing, and remember, there are only a few brands of motherboards whose BIOS allows adjusting the PCIE frequency. If you have an OEM system from Dell, HP, eMachines, etc., you wouldn't have access to that setting in the BIOS at all. And being such, if the setting is left on [AUTO] or there isn't a means to adjust it, the PCIE frequency can change while running applications (someone correct me on that if I'm wrong).

What it all boils down to is that for the average consumer purchasing a 9600 for use in a Dell, Gateway, etc. setup, or users who don't OC and just run everything at stock speeds, your card will effectively OC itself without your knowing about it.

I'm personally not too keen on the idea, as it makes the card appear better than it truly is, and effectively throws scoring for reviews because the card is OC'ing itself. IMO, this is like steroid use amongst athletes: until there's hard evidence that something is amiss, no one's the wiser about it, but that doesn't make it right.


----------



## Saakki (Mar 3, 2008)

thanks for warm welcome Solaris + for the info..my ye olde AGP rig is gettin a bit cheesy


----------



## hat (Mar 3, 2008)

Saakki said:


> i think Cbunting is a shady nVidia mindblender..after this..and hello all im new face in your nice forum..after this..should i go 9600 GT or 8800 GT for my upcoming rig..?



THE AVATAR... IT BURNS!! =/


----------



## Saakki (Mar 3, 2008)

no..it stares at u..BEWARE


----------



## trt740 (Mar 3, 2008)

*now Wizzy, what's the deal with the 8800gs*



W1zzard said:


> a 27 mhz crystal is already on the board for the memory clock, so why not use that like we did for the last 25 or so years?



I can tell it should go higher. Any news on why the nvidia driver locked it? The GPU won't go much higher than 700MHz when tested with the ATI artifacting tool. I see people reporting higher clocks, but they just aren't stressing their cards hard enough.


----------



## Tatty_One (Mar 3, 2008)

trt740 said:


> I can tell it should go higher any news why nvidia driver locked it the Gpu won't go much higher than 700Mgz when tested with with ati artifacting tool. I see people reporting higher clock but they just aren't stressing there cards hard enough.



NVidia didn't want it to compete with the 8800GT....or 9600GT????


----------



## Saakki (Mar 3, 2008)

lold, i misread earlier...ofc the 9600 won't beat the 8800 GT...it was about the GS..apologies


----------



## trt740 (Mar 3, 2008)

Tatty_One said:


> NVidia didnt want it to compete with the 8800GT....or 9600GT????



I can tell just by overclocking it that there is no way it should artifact at 705MHz or anywhere near that. Its max temps in my system under load, with a dual or, are 43C. With an 8800GT, which uses basically the same GPU, you would get 780/825MHz, and this card has fewer shaders = less heat, so it should go higher. I'm scratching my head here. I cannot believe someone hasn't unlocked it; it reminds me of when ATi first did it with the 3870 cards. My memory won't go higher, but this GPU should, big time.


----------



## trt740 (Mar 3, 2008)

Saakki said:


> lold i misread earlier...ofc 9600 wont beat 8800 gt ...it was bout gs..appologies



not by much, and only because the 8800gs is bios locked or driver locked.


----------



## Tatty_One (Mar 3, 2008)

trt740 said:


> I can tell just by overclocking it there is no way it should artifact at 705 mghz or anywhere near that. it's max temps in my system under load, with a dual or are 43c. With a 8800gt which uses basically the same Gpu you would get 780/825mghz and this card has less shader=less heat so it should go higher. I'm scratching my head here. I cannot believe someone hasn't unlocked it it reminds me when ATi first did it to the 3870 cards.



Unfortunately, you cannot unlock what is laser-cut; sadly, I think the days of simply re-enabling disabled hardware have well and truly gone. The next thing to look at is voltage; they may have undervolted the card. Dump the BIOS and have a look at it in NiBiTor; you should be able to read it at least, but of course NiBiTor won't let you write to the BIOS just yet, as there is no support for the GS yet.
Then google around for what the board's max voltage is....I am willing to guess that the board can take/give a lot more than is regulated in the BIOS. If that's the case, once NiBiTor supports it, there may be some headroom for BIOS modifications.

Also, I heard that the max core clock is driver-invoked? If that's the case, there could be a way to get around that as well, though not an easy one. For example, if/when RivaTuner "officially" supports the card, provided you set RivaTuner to start up with Windows, you can then disable the NV driver in "processes" in Task Manager; that may allow you to overclock the card further.......just a couple of what might be "fruitless" ideas


----------



## trt740 (Mar 3, 2008)

Tatty_One said:


> Unfortunately, you cannot unlock what is laser-cut; sadly, I think the days of simply re-enabling disabled hardware have well and truly gone.  The next thing you need to look at is voltage. They may have undervolted the card: dump the BIOS and have a look at it in NiBiTor. You should be able to read it at least, but of course NiBiTor won't let you write to the BIOS just yet, as there is no support for the GS yet.
> Then Google around for what the board's max voltage is. I am willing to guess that the boards can take/give a lot more than is regulated in the BIOS; if that's the case, once NiBiTor supports it, there may be some headroom for BIOS modifications.
> 
> Also, I heard that the max core clock is driver-enforced? If that's the case there could be a way to get around that as well, but it's not easy. For example, if/when RivaTuner "officially" supports the card, provided you set RivaTuner to start up with Windows, you could then disable the NV driver process in Task Manager; that may allow you to overclock the card further. Just a couple of what might be "fruitless" ideas.



Already flashed it and added 1.1V extra; it made zero difference. It is BIOS-locked: it will artifact as soon as you set it within ten MHz of 700MHz core when you test it with ATITool. The new version of NiBiTor will let you edit the BIOS. I have since flashed it back. I can bench a lot higher, but it's not stable.


----------



## Tatty_One (Mar 3, 2008)

trt740 said:


> Already flashed it and added 1.1V extra; it made zero difference. It is BIOS-locked: it will artifact as soon as you set it within ten MHz of 700MHz core when you test it with ATITool. The new version of NiBiTor will let you edit the BIOS. I have since flashed it back. I can bench a lot higher, but it's not stable.



OK, disable the NV driver in Task Manager then, overclock the card higher in RivaTuner, and see if that does the job. If that doesn't work, ensure that RivaTuner starts up with Windows, then go into.....

Start > Run > type "msconfig", then go to the Startup tab and temporarily disable the NV driver from starting up.


----------



## Saakki (Mar 3, 2008)

All this NVIDIA shadiness is actually getting pretty interesting... thanks to W1zzard for sharing the knowledge. A Finnish site reported this and it's a hot topic now.


----------



## trt740 (Mar 3, 2008)

Tatty_One said:


> OK, disable the NV driver in Task Manager then, overclock the card higher in RivaTuner, and see if that does the job. If that doesn't work, ensure that RivaTuner starts up with Windows, then go into.....
> 
> Start > Run > type "msconfig", then go to the Startup tab and temporarily disable the NV driver from starting up.



Will try this weekend; I have the flu, so I'm just barely hanging on.


----------



## Scyphe (Mar 3, 2008)

cbunting said:


> I apologize for not understanding the article then.
> 
> So if Riva Tuner reads the real clock value, and in my case my stock core clock is 650MHz but shows as 729MHz in Riva Tuner, then that is the real clock speed of the card. But this makes no sense, because ANY clock speed over 650MHz for the 9600 GT OC VOIDS THE WARRANTY.


RivaTuner does not show the real clock value, since the 9600GT is NOT using the standard 27MHz crystal to calculate its frequency. However, and this is very important, the fact that it shows WRONG values makes you question WHY they are wrong, since it's using the standard way of determining frequencies. And here's the deal: NVIDIA has changed the way the 9600GT sets its frequency. Instead of using the crystal on the card, it uses the PCIe bus clock divided by 4. That's why the card becomes overclocked when you increase the PCIe bus beyond 100MHz, and that's why RivaTuner's reading is a symptom of this change. The 9600GT uses a multiplier of 26 with the PCIe bus frequency/4. This means that at a 100MHz PCIe bus the core frequency will be 26*25 = 650MHz. It also means that if the PCIe bus is increased to, for instance, 110MHz, the core will run at 26*27.5 (PCIe/4) = 715MHz.

WHY NVIDIA decided to change the way the 9600GT sets its frequency is the shady part. They haven't said a word, and when asked they've denied any knowledge. The result is that a lot of overclocked 9600GTs are being reviewed as stock 9600GTs, which will inflate the results and pretty much muddle any possible comparisons.



> So what now? We have a card that sets its core clock speed higher than the factory default. So if the 729MHz core clock shown by Riva Tuner causes my card to burn up or malfunction, am I out $239.00 because NVIDIA added some new feature or clock?


Pretty much, except that RivaTuner isn't showing the true speed. To get the REAL speed you'd have to take 729, divide it by 27, then multiply it by PCIe/4. The fact that you paid $239 for a 9600GT instead of getting an 8800GT is something you'll have to live with.



> So a stock OC card installed in a PC with the PCIe bus OC'd to 125MHz automatically voids my warranty, correct?


Depends on the warranty and the company policy of the vendor.

And no, overclocking the PCIe-bus doesn't overclock other cards, it ONLY overclocks the PCIe-bus.
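The arithmetic in the explanation above can be sketched in a few lines. This is only an illustration of the relationship described in the post (multiplier 26, reference clock = PCIe bus / 4); the function name is made up for the example:

```python
def geforce_9600gt_core_mhz(pcie_bus_mhz: float, multiplier: int = 26) -> float:
    """Actual core clock when the 25MHz reference is derived from the PCIe bus.

    At the standard 100MHz bus, 100/4 = 25MHz reference, so 26 * 25 = 650MHz.
    Raising the bus raises the reference, and the core scales with it.
    """
    return multiplier * (pcie_bus_mhz / 4.0)

if __name__ == "__main__":
    for pcie in (100, 105, 110, 115):
        print(f"PCIe {pcie} MHz -> core {geforce_9600gt_core_mhz(pcie):.1f} MHz")
```

Plugging in the numbers from the post: 100MHz gives 650MHz, and 110MHz gives 715MHz, exactly the "hidden" overclock being discussed.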


----------



## Steevo (Mar 3, 2008)

imperialreign said:


> the basis of the article is that the majority of people who would potentially buy this card don't really mess around with OCing their systems.  Seeing as it's a 9600 card, and I'd expect those to slide into the mid-range bracket ere long, the mid-range market is "typically" the highest price average consumers are willing to pay for a new card.
> 
> Even still, there aren't too many users who really start digging into a system BIOS for graphics OCing, and remember, only a few motherboard brands have a BIOS that allows adjusting the PCIe frequency.  If you have an OEM system from Dell, HP, eMachines, etc., you wouldn't have access to that setting in the BIOS at all. And as such, if the setting is left on [AUTO], or there isn't a means to adjust it, the PCIe frequency can change while applications are running (someone correct me on that if I'm wrong).
> 
> ...



180 degrees wrong. In most systems the card sucks for the $$$$$ spent compared to competing products. Most systems will leave the PCIe bus locked at 100MHz, and the card will run at stock, where it fails to meet the foretold performance.


NVIDIA compared it to an HD3870 in http://www.nordichardware.com/news,7357.html when they showed it to NordicHardware and announced it would trounce the 3870. But when a first stock-to-stock comparison is made,
http://www.tweaktown.com/reviews/1293/4/page_4_test_system_setup_and_3dmark06/index.html it manages to edge ahead a small amount at lower resolutions and fall behind at higher resolutions.


But when placed in a LinkBoost board that overclocks the core without the user knowing, it would edge out the HD3870.


Again, if we compare stock to stock, http://www.techpowerup.com/reviews/Biostar/GeForce_9600_GT/20.html the HD3850 512 kicks its ass at stock, making it the better buy for high-res gaming. But if we compare an overclocked version, http://www.techpowerup.com/reviews/VVIKOO/GeForce_9600_GT_Turbo/20.html the card suddenly looks better.



So what NVIDIA has done is show a card that, when overclocked unbeknownst to reviewers and owners, has better performance than a competing card, but that in most owners' and reviewers' boards fails to reach said performance. Now we have to wonder if they underclocked the card because of core binning problems, issues with heat, or some other form of deception.


One thing remains however, this card is not what it was or is marketed to be in stock form.


----------



## imperialreign (Mar 4, 2008)

Steevo said:


> 180 degrees wrong. In most systems the card sucks for the $$$$$ spent compared to competing products. Most systems will leave the PCIe bus locked at 100MHz, and the card will run at stock, where it fails to meet the foretold performance.
> 
> 
> NVIDIA compared it to an HD3870 in http://www.nordichardware.com/news,7357.html when they showed it to NordicHardware and announced it would trounce the 3870. But when a first stock-to-stock comparison is made,
> ...



I guess I must have misunderstood the report a bit... so, the 9600's auto-OCing only happens on NVIDIA-chipset-based boards, and we wouldn't see that same increase in clocks on boards from other manufacturers with different chipsets that utilize automatic PCIe frequency clocking (i.e., ASUS boards with PEG Link controls)?

Even still, it's a bit of an underhanded practice... it's too bad how fanatical the fanboyish base has become, as a few years ago practices like this would hurt a company for quite a long while.


----------



## cbunting (Mar 4, 2008)

On May 23rd, 2006, Nvidia released a press release about the linkboost technology.
http://www.nvidia.com/object/IO_31319.html

The Techpowerup article was written Feb 29th, 2008.

All in all, it's not a new feature; it has been out for over a year and a half. I mean, what is the real point of the article? Simply that it took a year and a half for a reviewer to notice?

You guys have not once understood any of the replies I've made since the very first one. Either that, or you all must not have known about any of these features either. I don't see how you consider this new, shady, or having anything to do with the 9600.
Chris

*BTW:* From what I can find online. Both the nForce 590i and Linkboost as mentioned in the article have been discontinued.


----------



## Solaris17 (Mar 4, 2008)

Um... I don't understand what that post has to do with this article... LinkBoost isn't the issue being discussed here. Of course this article wouldn't come up until a year and a half later, because it was at the release of the 9 series that the PCIe frequency started being used to determine the core clock...


----------



## cbunting (Mar 4, 2008)

Solaris17 said:


> it was at the release of the 9 series that the PCIe frequency started being used to determine the core clock...



But that is the problem. It doesn't.


----------



## Scyphe (Mar 4, 2008)

cbunting said:


> On May 23rd, 2006, Nvidia released a press release about the linkboost technology.
> http://www.nvidia.com/object/IO_31319.html
> 
> The Techpowerup article was written Feb 29th, 2008.
> ...



Damn, you're daft. 

Ask yourself: every single person in here (especially W1zzard, developer and author of ATITool, and Unwinder, developer and author of RivaTuner) EXCEPT you grasps the simple, explained (times 10) and proven issue of the 9600GT calculating its core frequencies differently from earlier generations. Somehow you just don't get it; you're not even on the same planet as the rest of us. Do us all a favor: sell your PC and buy a Wii. You clearly can't grasp even the simplest of concepts, since you're spamming all sorts of totally unrelated and useless links that have nothing to do with the issue we're discussing. OR you're a troll trying to defend the 9600GT like a knight (although in this case it's more like Don Quixote, since you don't even know what you're defending it for or against; you're off somewhere else while we look on in amazement as you fight imaginary monsters).



> You guys have not once understood any of the replies I've made since the very first one.


You didn't even understand what we're discussing, hence your replies made absolutely no sense to anyone. It's like trying to explain quantum physics to a brick wall.


----------



## Solaris17 (Mar 4, 2008)

cbunting said:


> But that is the problem. It doesn't.



Are you serious? I might know...

<---------------------



And seeing as I got my 9600 before release, you'd think I'd have talked to the programmers?


----------



## cbunting (Mar 4, 2008)

OK, then let's post real, true stats to see if the article is true. Anyone else, post yours!

BFG 9600 GT OC 512MB.
http://www.bfgtech.com/bfgr96512gtoce.aspx

100Mhz PCIe Bus;
http://www.techpowerup.com/gpuz/3h8eb/

115Mhz PCIe Bus;
http://www.techpowerup.com/gpuz/2gu7p/

Oh wait... nothing changed. Notice I didn't use Riva Tuner for statistics. 

I used GPU-Z, which DOES support the 9600 GT...



> Changes in GPU-Z version 0.1.7:
> Corrected several GeForce 9600 GT readings



Chris


----------



## Solaris17 (Mar 4, 2008)

RivaTuner supports it; you have to add a line. Hold on, I'll try some tests in a bit.


----------



## cbunting (Mar 4, 2008)

It doesn't, as I already posted...

From Alex, AKA Unwinder, author of Riva Tuner:



> I'd like to add that v2.06 "knows" nothing about the G94 core and has no internal G94-specific codepath. So adding G94 support to 2.06 this way may cause unpredictable results, and several things may function improperly or not work at all. For example, the RAM type will be detected improperly, the bus width will not be detected at all, the core clock can be monitored improperly in the hardware monitoring module, etc.
> The only case when it is safe to add new card support by editing the GPU database in the .cfg file is when you're adding support for a new display adapter model based on a supported GPU family (e.g. you can safely add 8800GTS 512 support this way, because 2.06 fully supports the G92 core). G94 is a bit of a different story. So please use it at your own risk.



http://i30.tinypic.com/ivd2mo.jpg

The article was based on results from software that doesn't support the card.

I cannot reproduce the results. This is why I have asked all of those who have a 9600 to post theirs. We have one author who posted a review about this behavior of the 9600, one person who has noticed it, and now I can't reproduce the result. So either more people reproduce the effect from the article for proof, or no one else can reproduce it either. For myself, I can't. It makes no difference what I OC; nothing changes the clock speeds.


----------



## Solaris17 (Mar 4, 2008)

well i guess im lucky it worked huh?


----------



## Scyphe (Mar 4, 2008)

cbunting said:


> Ok, Then let's prove real true stats to see if the article is true.. Anyone else, Post yours!
> 
> BFG 9600 GT OC 512MB.
> http://www.bfgtech.com/bfgr96512gtoce.aspx
> ...



Hahaha, you must be pulling our legs, or you have a serious cognitive problem. 

Did you benchmark your card at 100MHz PCIe and at 115MHz PCIe? 

Did you read the posts that said the drivers do NOT reflect any core changes when overclocking the 9600GT via the PCIe bus? 

This is all like a black hole to you, right? You seriously don't understand a word we're saying; that much is clear.




> For myself, I can't. It makes no difference what I OC as nothing changes the clock speeds.


And how do you determine the clock speeds? Since the drivers don't reflect the overclocking done by increasing the PCIe bus clock (and thus neither does GPU-Z or any other tool that uses the drivers for its numbers), you have to find another way to find out what's going on.


----------



## cbunting (Mar 4, 2008)

C'mon man..

Overclocking the PCIe bus, as page 2 of the article shows, isn't right...

When you overclock the PCIe bus, that only allows the core clock of the video card to be OC'd higher. Overclocking the PCIe bus doesn't change anything by itself.

Page 2 of that article drew its conclusions from bus frequency changes.

http://www.techpowerup.com/reviews/NVIDIA/Shady_9600_GT/2.html


> We ran the fillrate test with four different PCI-E clocks set in the BIOS: 100 MHz, 105 MHz, 110 MHz and 115 MHz.
> 
> As you can see, the diagram above is pretty conclusive and shows exactly the behaviour we expected.



My card runs stable at a 740MHz core clock. It gets glitchy above 740MHz. Unless I OC my PCIe bus, I am not able to push the card higher. With a PCIe bus change to 110-125MHz, I can OC the card to around a 762MHz core clock. But changes to the PCIe bus don't change anything in regards to the core clocks, other than allowing you to push the card's OC a bit higher.

So again, what software provided the stats in that graph? Riva Tuner?

What part of what the author of Riva Tuner said can't anyone understand?


> *core clock can be monitored improperly in hardware monitoring module etc*.



Furthermore, the 9600 would not even run stably enough at 834 MHz to finish a 3DMark06 benchmark. If you think it will, try it yourself.


----------



## Scyphe (Mar 4, 2008)

What the hell, which part of the following do you NOT understand? Forget Rivatuner, forget GPU-Z, forget nTune!!!!!!!!!



> In order to test our theory *we ran 3DMark06 Multitexture Fillrate tests* on both the GeForce 9600 GT (G94) and the GeForce 8800 GT (G92). Both use the same underlying GPU architecture, with the G92 being the faster part. We always tested at the stock frequencies of our two cards, we never adjusted the GPU frequency manually.



The 8800GT did NOT gain anything by increasing the PCIe-bus, as expected. The 9600GT DID gain more each time the PCIe-bus was increased, with NO other overclocking going on whatsoever. 

Set your PCIe-bus to 100Mhz. Run the 3DMark06 Multitexture Fillrate test. Then set your PCIe-bus to 115Mhz and run the same test again.


----------



## cbunting (Mar 4, 2008)

Neither 3DMark06, Futuremark, nor YouGamers.com supports the 9600 GT with any of their software.

Results inconclusive?


Wow, YouGamers by Futuremark doesn't even know what card I have... and to think the outdated 3DMark06 would...

God, I don't believe every article.
Chris


----------



## Solaris17 (Mar 4, 2008)

Um, they still give you a score... and when I OC my card the score goes up, I know it sounds crazy... but set your PCIe bus to 100MHz, run the fillrate test, and post up the numbers. Then set it to 115MHz, run it again, and post up those numbers.


----------



## Scyphe (Mar 4, 2008)

cbunting said:


> 3DMark06, FutureMark nor YouGamers.com support the 9600 GT with any of thier software.
> 
> Results Inconclusive?
> 
> ...



Well, with that you've just exposed yourself as a spreader of disinformation. You failed, though; nobody seems to have picked up the confusion you wanted to spread.


----------



## imperialreign (Mar 4, 2008)

this thread has become quite hilarious


----------



## Steevo (Mar 4, 2008)

^ True.


If I had the money to waste on a 9600GT, I wouldn't buy one. What is true has already been proven by multiple respected people, and this joker thinks he can come in with a backseat driver's license and change everyone's mind. Fat chance.



Besides, your Pentium D sucks.


----------



## Solaris17 (Mar 4, 2008)

Steevo said:


> Besides, your Pentium D sucks.



LOL!





Steevo said:


> It has already been proven by multiple respected people what is true



thank you


----------



## Wile E (Mar 4, 2008)

cbunting:

you are not listening to a single word anybody is saying.

I'll make it very simple to understand:

NO SOFTWARE READS THE CORE CLOCKS OF THE 9600 PROPERLY!!! THE 9600 DRIVER DOESN'T EVEN READ THEM PROPERLY. NO SOFTWARE CAN BE USED TO PROVE THE OVERCLOCK BASED ON CLOCK READINGS. YOU MUST COMPARE PERFORMANCE VIA BENCHMARK SCORE DIFFERENCES AT BOTH 100MHZ PCIE AND THEN AT OCED PCIE SPEEDS.

Now, instead of trying to dig up articles and trying to show clock speeds in a program (which is wrong no matter what program you choose, PERIOD), just do what the hell I said. Run the 3DMark06 benchmark. Don't just open it to see what it says about your clock speeds; actually run the damn benchmark test. Do this with PCIe at 100MHz and post your score. Then do it again with PCIe set to 110MHz and post that score. Until then, your words mean absolutely nothing.
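As a back-of-the-envelope check of the benchmark method described above: if the core really is derived from the PCIe bus, the fillrate score should scale roughly in proportion to the bus clock, while an independently clocked card stays flat. A small sketch (the function names, the 3% tolerance, and the sample scores are illustrative assumptions, not measured results):

```python
def expected_scaling(pcie_before_mhz: float, pcie_after_mhz: float) -> float:
    """Fillrate ratio to expect IF the core clock is PCIe-derived."""
    return pcie_after_mhz / pcie_before_mhz

def looks_pcie_derived(score_before: float, score_after: float,
                       pcie_before_mhz: float, pcie_after_mhz: float,
                       tolerance: float = 0.03) -> bool:
    """True when the measured score ratio tracks the PCIe bus clock ratio."""
    measured = score_after / score_before
    expected = expected_scaling(pcie_before_mhz, pcie_after_mhz)
    return abs(measured - expected) < tolerance

# Illustrative numbers only: a card whose fillrate rises ~10% when the bus
# goes 100 -> 110 MHz behaves like the 9600GT described in the article,
# while a card that barely moves behaves like the 8800GT control.
```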


----------



## Saakki (Mar 4, 2008)

I already told you earlier that cbunting is a hired NVIDIA mind-bender sent to confuse OC guys and reviewers even more... why bother buying a 9600 GT when it's priced almost the same as an 8800 GT around here...


----------



## MrMilli (Mar 4, 2008)

Saakki said:


> I already told you earlier that cbunting is a hired NVIDIA mind-bender sent to confuse OC guys and reviewers even more... why bother buying a 9600 GT when it's priced almost the same as an 8800 GT around here...



He's not. He's just plain old-school stupid, nothing more, nothing less. OK, maybe less.

But seriously, if you read all his posts, you will see that he doesn't know a thing about this stuff. I wonder if he even knows what fill rate means. Maybe his English is just bad...


----------



## Saakki (Mar 4, 2008)

MrMilli said:


> He's not. He's just plain old-school stupid, nothing more, nothing less. OK, maybe less.
> 
> But seriously, if you read all his posts, you will see that he doesn't know a thing about this stuff. I wonder if he even knows what fill rate means. Maybe his English is just bad...


Hope so... pretty hilarious though.


----------



## Tatty_One (Mar 4, 2008)

Lol, this thread has become my daily read on my laptop at work when I go for a cr*p each morning; in fact, it's so good it has taken over from FHM.


----------



## OnBoard (Mar 5, 2008)

imperialreign said:


> this thread has become quite hilarious



That was some fun arguing (maybe not for those who took part, but for us readers). I was tired when I started four pages back; somehow I'm not anymore.


----------



## dtdw (Mar 6, 2008)

If that's the case, then the fps count would be even higher?

No wonder my Oblivion's lowest fps jumped from 45 (lowest) to 55 (lowest).

I didn't dare OC my 9600GTs in SLI since they're still new, but I used nTune to 'tune' my system, so the PCIe is tuned as well, and it went up to 2925mhz... Woot!

Went and tested Oblivion and it ran 10fps higher...

Now that only means one thing... OC the card to 725MHz, increase the PCIe bus to 120MHz.

So it's 725MHz + 130MHz = 855MHz.

Edit: Gee, now I need to get active cooling for my southbridge =.= or do I? Hahaha

Insane


----------



## Wile E (Mar 6, 2008)

Well, considering he hasn't responded, I'm going to guess he ran the tests, saw the increases, and felt like an ass.


----------



## jaydeejohn (Mar 6, 2008)

I don't post that often here, but maaaaaan, was that daft or what???? LOL. Fellas, this has been one great thread, and TY W1z for a great insight into what may be a great deception, especially if I were a writer/reviewer.


----------



## Tatty_One (Mar 6, 2008)

Wile E said:


> Well, considering he hasn't responded, I'm going to guess he ran the tests, saw the increases, and felt like an ass.



Amen to that


----------



## dtdw (Mar 7, 2008)

OK, I just need to clarify this again: the author managed to check the 'actual' core speed in RivaTuner, but I can't. Why is that?

In his example, his clock is at 725MHz but reported in RivaTuner as 783MHz.

But if that's the case, a 650MHz original clock would report as 708MHz in RivaTuner...

Or did I NOT get the calculation correct?


----------



## Solaris17 (Mar 7, 2008)

The new version of RivaTuner doesn't do that; I think Unwinder stopped it from reading from the PLL.


----------



## dtdw (Mar 7, 2008)

Solaris17 said:


> The new version of RivaTuner doesn't do that; I think Unwinder stopped it from reading from the PLL.



Ohhh... so that means I need to use the old version...

Or is there a way to edit the config so that 2.07 reads the actual one?

Hehehe... I am really, really curious to know what the actual clock is after raising my PCIe bus.

My card is unstable at 700MHz with the PCIe bus at 117MHz... but I thought raising the PCIe bus 'increases' stability?


----------



## ghost101 (Mar 7, 2008)

dtdw said:


> okay i just need to clarify this again, the author manage to check the 'actual' core speed in rivatuner but i cant. Why is that ?
> 
> for his example his clock @ 725mhz bt reported @ riva is 783mhz
> 
> ...



RivaTuner doesn't report the actual core speed; it assumes a 27MHz reference crystal. For a 650MHz core with the PCIe bus running at 100MHz, the multiplier is 650/25 = 26.

Therefore, RivaTuner will read 27*26 = 702MHz.

On a PCIe bus of 125MHz, the actual core clock is 26*(125/4) = 812.5MHz. RivaTuner will still read 702MHz, and the NVIDIA driver will still report 650MHz.
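The three numbers in that explanation (what the driver reports, what RivaTuner shows, and what the core actually runs at) can be tabulated with a short sketch. The 26 multiplier and the 25/27MHz reference clocks come from the posts above; the function names are just for illustration:

```python
# RivaTuner reads the PLL multiplier but assumes the usual 27MHz crystal;
# the driver always reports the nominal clock; the real core clock follows
# the PCIe bus (reference = PCIe/4). Multiplier 26 models a stock 650MHz card.
MULTIPLIER = 26

def actual_mhz(pcie_bus_mhz: float) -> float:
    """Real core clock, derived from the PCIe bus."""
    return MULTIPLIER * (pcie_bus_mhz / 4.0)

def rivatuner_mhz() -> float:
    """What RivaTuner shows: PLL multiplier times an assumed 27MHz crystal."""
    return MULTIPLIER * 27.0

def driver_mhz() -> float:
    """What the NVIDIA driver reports: the nominal 25MHz-based clock."""
    return MULTIPLIER * 25.0

if __name__ == "__main__":
    for pcie in (100, 125):
        print(f"PCIe {pcie}: driver {driver_mhz():.0f}, "
              f"RivaTuner {rivatuner_mhz():.0f}, actual {actual_mhz(pcie):.1f}")
```

Note that neither reported number moves when the bus changes; only the actual clock does, which is why the thread keeps insisting on benchmarks rather than monitoring tools.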


----------



## Unwinder (Mar 7, 2008)

Solaris17 said:


> the new version of riva tuner doesnt do that i think unwinder stopped it from reading from the pll



Wrong guess. I still read the clocks from the PLL. I just use hardcoded 25MHz crystal clock for G94 based display adapters now.


----------



## AddSub (Mar 7, 2008)

> Wrong guess. I still read the clocks from the PLL. I just use hardcoded 25MHz crystal clock for G94 based display adapters now.



How honorable. I'm sure nVidia appreciates it. :shadedshu


----------



## Solaris17 (Mar 7, 2008)

Unwinder said:


> Wrong guess. I still read the clocks from the PLL. I just use hardcoded 25MHz crystal clock for G94 based display adapters now.



How did you manage to get it to read correctly with a 27MHz crystal?


----------



## AlexUnwinder (Mar 7, 2008)

AddSub said:


> How honorable. I'm sure nVidia appreciates it. :shadedshu



Thanks for "pleasant" comments, mate. I guess that next time I'll simply never share results of my findings with anyone and simply won't allow such reviews to appear. It is damn sad to kill 2 weeks on investigating G94 PLL clock internals then to read such comments from ATI fans. Community doesn't deserve sharing technical details with it.


----------



## erocker (Mar 7, 2008)

Why are there two Unwinders here?  Don't let one guy get you down, mate; 99% of the people here love RivaTuner, definitely including myself. I would be nowhere without it. Thank you very much for your hard work and dedication to this wonderful program.


----------



## Saakki (Mar 7, 2008)

=d wow share it man..


----------



## Tatty_One (Mar 7, 2008)

erocker said:


> Why are there two Unwinders here?  Don't let one guy get you down, mate; 99% of the people here love RivaTuner, definitely including myself. I would be nowhere without it. Thank you very much for your hard work and dedication to this wonderful program.



I second that; RivaTuner has probably been the single most important piece of overclocking software I have ever used.


----------



## trt740 (Mar 7, 2008)

*Please don't listen to a few AZZholes*



AlexUnwinder said:


> Thanks for "pleasant" comments, mate. I guess that next time I'll simply never share results of my findings with anyone and simply won't allow such reviews to appear. It is damn sad to kill 2 weeks on investigating G94 PLL clock internals then to read such comments from ATI fans. Community doesn't deserve sharing technical details with it.



RivaTuner is a fantastic program. I for one would pay for it, and I want to thank you very, very much for making it. It and ATITool are the best video card overclocking tools made; thanks very, very much. I'm sorry some nimrod insulted you. *You are truly one of the video card overclocking GODs.* I cannot believe anyone would do such a dumb thing!


----------



## AlexUnwinder (Mar 7, 2008)

erocker said:


> Why are there two Unwinders here?



My bad, I use different logins when posting here from home and from the office. And I do not mean RivaTuner, I mean publishing technical details about hardware internals like this G94 clocking issue. After discovering that the PCIe bus is used as the source of the 25MHz reference clock, I discussed it with the Guru3D staff, and we decided to post the investigation results just in our forum; W1zzard decided to create a review about it. And even with his ideal and newbie-friendly review, after seeing the reaction of the community, after seeing how many people misunderstand it, treat it wrongly, or start seeing conspiracy theories in it, I'm more and more certain that such info shouldn't be provided to the public. The less people know about it, the less headache the developers have.


----------



## trt740 (Mar 7, 2008)

AlexUnwinder said:


> My bad, I use different logins when posting here from home and from the office. And I do not mean RivaTuner, I mean publishing technical details about hardware internals like this G94 clocking issue. After discovering that the PCIe bus is used as the source of the 25MHz reference clock, I discussed it with the Guru3D staff, and we decided to post the investigation results just in our forum; W1zzard decided to create a review about it. And even with his ideal and newbie-friendly review, after seeing the reaction of the community, after seeing how many people misunderstand it, treat it wrongly, or start seeing conspiracy theories in it, I'm more and more certain that such info shouldn't be provided to the public. The less people know about it, the less headache the developers have.



No, you were right to tell us, because I for one couldn't understand how it was beating an 8800GS until your info came out.


----------



## Solaris17 (Mar 7, 2008)

I don't agree; I personally think such things should be released to the public. Not to totally dismiss your idea, but things like this help the people who understand them, and those people can teach the ones who are interested. As for everyone else... well, Unwinder, as you know, there are always going to be a couple of @zzholes and people trying to disprove you; I'm sure you get enough of that as a developer. I know this info helped me. After I got my 9600GT, W1zzard and I talked quite often, for long periods of time, about the card. Seeing as I got it before release, I wanted a working copy of GPU-Z, and that's when I noticed the discrepancy, so W1zzard and I talked about it for hours, going over theories on why this was happening, dual oscillators, the whole works. And I'm thankful for the article; it covered or cleared up a few things W1zzard and I didn't finish discussing.


----------



## Midnightknight (Mar 8, 2008)

Hi, I'm new on this forum too and just want to post an opinion on some things said here.

First, when I read:

> Thanks for the "pleasant" comments, mate. I guess next time I'll simply never share the results of my findings with anyone and simply won't allow such reviews to appear. It is damn sad to kill two weeks investigating G94 PLL clock internals and then read *such comments from ATI fans*. The community doesn't deserve having technical details shared with it.

I think there is something wrong. I'm what you could call an ATI fan: I really love ATI, not because they always make the fastest cards, but because I like the way they make their cards. I personally liked my Rage 128 Fury because it was great for video and had better image quality, and I loved my 8500, my 9800, and my X1950. Those are good cards. They did a crappy thing with the 2900, OK, I won't deny it, and I have a GeForce2 at home.
The true issue is just "stupid fans" who don't even know why they are fans. They always expect to beat the other side, and again, there is no point in that here, because I use RivaTuner on my old Radeon. I don't even think he is an ATI fan at all... So what's the meaning of this? Take a break; you can't make stupid guys see reason.

Overall, this thread is really great and has been reported on many reviewer websites, even in my country (France), and it truly helps buyers. Even if I'm not planning to buy NVIDIA cards because of their politics, just like this one (ATI has started doing the same shady things :/), I will change my state of mind when building computers for friends or even at work, recommending this card for what it is and not for the tricky way it gets its boost.

So again, thanks for it, and keep seeking out such things; you're the ones that "stop" the marketing world from destroying all the rest ^^


----------



## jaydeejohn (Mar 8, 2008)

Unwinder, though I'm not a writer/reviewer, this sort of thing you've done is ESSENTIAL. No one should shoot the message nor the messenger. How many people in the community would still be scratching their heads without this knowledge? And what kind of firestorm would come of THAT, of not knowing what's going on? You've done us all a great service with this, and of course with RivaTuner. Thank you.    John


----------



## pepsi71ocean (Mar 8, 2008)

This explains why there are reviews on Newegg saying the 9600 series will BSOD on them when the 8800 won't. And I think this might answer a lot of other questions. *goes to EVGA forums for more info*

Plus, AlexUnwinder: you are a god. RivaTuner is a great program; it's way better than nTune, which IMO is crap.


----------



## imperialreign (Mar 8, 2008)

AlexUnwinder said:


> My bad, I use different logins when posting here from home and from the office. And I do not mean RivaTuner, I mean publishing technical details about hardware internals like this G94 clocking issue. After discovering that the PCIe bus is used as the source of the 25MHz reference clock, I discussed it with the Guru3D staff, and we decided to post the investigation results just in our forum; W1zzard decided to create a review about it. And even with his ideal and newbie-friendly review, after seeing the reaction of the community, after seeing how many people misunderstand it, treat it wrongly, or start seeing conspiracy theories in it, I'm more and more certain that such info shouldn't be provided to the public. The less people know about it, the less headache the developers have.



I completely understand where you're coming from on this, and looking back over the last x number of pages in this thread, it entirely validates and supports your reasoning without a doubt - but I also feel that the public has a right to know, to an extent.  Those that understand the technical aspects of it and want to know, versus the "general public" . . . and it's typically the "general public" that f* things up for everyone in all walks of life :shadedshu

TBH, though, I'm entirely appreciative of the info.  As much as I'm ATI loyal, and have been for years, I _try_ to stay as up to date as possible on what nVidia is up to.  Even as a red camp loyalist, I'll be the first to admit ATI has been beaten senseless over the last few years, and my claims to loyalty don't stop me from recommending nVidia hardware now and then, either.


----------



## btarunr (Mar 8, 2008)

AlexUnwinder said:


> My bad, I use different logins when posting here from home and from the office. And I do not mean RivaTuner, I mean publishing technical details about hardware internals like this G94 clocking issue. After discovering that the PCIE bus is used as the source of the 25MHz reference clock, I discussed it with the Guru3D staff and we decided to post the investigation results just in our forum, and w1zzard decided to create a review about it. And even with his ideal and newbie-friendly review, after seeing the reaction of the community, after seeing how many people misunderstand it, treat it wrongly or start seeing conspiracy theories inside, I'm more and more certain that such info shouldn't be provided to the public. Less people know about it = less headache the developers have.




The more we know, the better informed our purchase decisions are. The last people we can bank on are neutral reviewers, of whom very few are left. Hence we need to read such articles from W1z and you. It's a dying breed in a world of cash.....neutral technologists.


----------



## warhammer (Mar 8, 2008)

Putting faster crystals on electronic devices to boost performance is not a new practice.
Maybe there was a shortage of 25MHz crystals....


----------



## Steevo (Mar 9, 2008)

You don't get it. ^

Try again, this time read it all.


----------



## akin (Mar 21, 2008)

After reading all of the statements above, I have a small question.

For the 9600 GT's core clock (650 MHz), the clock is generated as PCIE/4 * 26. But according to this link http://www.bfgtech.com/bfgr96512gtoce.aspx I'm a little confused about how they can generate a 675 MHz GPU clock. Does BFG change the multiplier (26) of the clock generator?

Can anyone explain this?


----------



## newtekie1 (Mar 21, 2008)

AlexUnwinder said:


> Thanks for "pleasant" comments, mate. I guess that next time I'll simply never share results of my findings with anyone and simply won't allow such reviews to appear. It is damn sad to kill 2 weeks on investigating G94 PLL clock internals then to read such comments from ATI fans. Community doesn't deserve sharing technical details with it.



Don't let one idiot fanboy throw you off, and yes, he is an idiot fanboy; he has been on my ignore list for as long as I can remember because of his idiotic comments.  Don't judge the whole community by one idiot's comments.  It would be like judging the entire human race by one idiot racist's comments; it just isn't fair.  There are those of us in the community who love to know the technical details behind what is going on.  And it can prove helpful, too.  We might see this information become useful later on down the road when people are having problems with their G94-based cards and can't figure out why they are unstable when they aren't overclocking them, and it could come down to the fact that their PCI-E bus is running too fast and they didn't even realize that could be a problem.



akin said:


> After reading all of the statements above, I have a small question.
> 
> For the 9600 GT's core clock (650 MHz), the clock is generated as PCIE/4 * 26. But according to this link http://www.bfgtech.com/bfgr96512gtoce.aspx I'm a little confused about how they can generate a 675 MHz GPU clock. Does BFG change the multiplier (26) of the clock generator?
> 
> Can anyone explain this?



That is essentially how all overclocking is achieved: you are just changing the integers that the reference clock is divided and multiplied by.

So to get 675MHz, the multiplier 26 is changed to 27.
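The divider/multiplier arithmetic discussed in this thread can be sketched as follows. This is a minimal illustration, not NVIDIA's actual PLL logic; the function name and default values are just for this example:

```python
# Core-clock derivation as described in this thread: the PCI-E bus clock
# (nominally 100 MHz) is divided by 4 to produce the 25 MHz reference,
# which the PLL then multiplies by an integer.
def core_clock_mhz(pcie_mhz=100.0, divider=4, multiplier=26):
    """Return the resulting GPU core clock in MHz."""
    return pcie_mhz / divider * multiplier

print(core_clock_mhz())                  # stock:  100/4 * 26 = 650.0 MHz
print(core_clock_mhz(multiplier=27))     # BFG OC: 100/4 * 27 = 675.0 MHz
print(core_clock_mhz(pcie_mhz=125.0))    # 25% PCI-E overclock -> 812.5 MHz
```

This also shows why raising the PCI-E bus frequency overclocks the card as a side effect: the 25 MHz reference scales along with it.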


----------



## akin (Mar 24, 2008)

> So to get 675MHz, the number 26 is changed to 27.


Does this mean that the integer (26) value can be edited in the BIOS?


----------



## Scyphe (Mar 24, 2008)

akin said:


> Does this mean that the integer (26) value can be edited in the BIOS?



No. When you set the desired frequency in an overclocking utility, it changes the numbers the drivers use to set the frequency; you don't change the multipliers manually.


----------



## akin (Mar 25, 2008)

Scyphe said:


> No. When you set the wanted frequency in an overclocking utility it will change the numbers the drivers use to set the frequency, you don't change the multipliers manually.



ok thanks a lot bro...


----------



## funboy6942 (Mar 26, 2008)

So, without having to read so many pages on this, I would just like to know: is there any way to "disable" this "feature", rerun the card against the pack, and get some real, true performance scores for it?

I'd like to know how well it would stand on its own if they hadn't done this behind everyone's back to influence buyers and sell more units, because they lied about the "stock" performance of the card.

This has me all upset and thinking back to the FX days, when they used a driver trick to get performance gains. If ATI knew about this and implemented a thing of their own, how far apart would the scores be then?

Please find a way to disable the feature and retest them all again.


----------



## akin (Mar 26, 2008)

What feature do you mean?


----------



## Fahim (May 13, 2008)

Sounds like a shady trick to me. People will see a review done on a LinkBoost board without knowing it, and then when they buy the card for their own system, which may not have a LinkBoost mobo, the performance isn't there. Kind of cheating to win the benchmarks, I guess.


----------



## greybeard (May 21, 2008)

*G92 overclocking*

Riva Tuner, and increasing the PCI-e bus speed, both work.  But the best option in my case (with two eVGA brand 8800 GTs) was to replace the video card BIOS on both cards with the BIOS from a higher-clocked model.

I turned two ordinary cards into "Super Super Clocked" models running at 700-1725-1000, with no problems and no other form of overclocking needed.  (My cards were able to go a little higher from there with Riva Tuner, but there wasn't much of a performance gain.)


----------

