Thursday, June 19th 2008

NVIDIA Gently Intros GeForce 9800 GTX+

AMD today scored a major point for the red team by positioning its brand new ATI Radeon HD 4850 cards between NVIDIA's GeForce 9 series and GTX 200 series cards. The all-new HD 4850 beats NVIDIA's GeForce 9800 GTX while also maintaining a very reasonable MSRP of $199. Currently NVIDIA has no card that can compete in this category, but that's going to change in mid-July, when the company will announce a new mid-range video card dubbed GeForce 9800 GTX+. The card will be identical to the GeForce 9800 GTX from the outside, but on the inside it will use a smaller and more efficient 55 nanometer GPU with increased default core/shader speeds: from 675MHz to 738MHz and from 1688MHz to 1836MHz, respectively. Memory speed will be dropped slightly to 1GHz (from 1100MHz on the GeForce 9800 GTX). Other than that, the card is virtually the same as the GeForce 9800 GTX, and three-way SLI support also remains untouched. NVIDIA expects to start offering the GeForce 9800 GTX+ at an MSRP of $229. The company also plans to drop the price of the 65nm GeForce 9800 GTX to $199.
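For reference, the clock changes quoted above work out to roughly a 9% bump on core and shader and a 9% cut on memory. A minimal sketch of that arithmetic, using only the figures from the article:

```python
# Percentage changes implied by the clock figures quoted in the article
# (GeForce 9800 GTX -> 9800 GTX+).
clocks = {
    "core":   (675, 738),    # MHz
    "shader": (1688, 1836),  # MHz
    "memory": (1100, 1000),  # MHz; slight drop, per the article
}
for domain, (old, new) in clocks.items():
    change = (new - old) / old * 100
    print(f"{domain}: {old} MHz -> {new} MHz ({change:+.1f}%)")
```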

The first card is a Leadtek 9800 GTX, the second one is the GeForce 9800 GTX+
Source: bit-tech.net

137 Comments on NVIDIA Gently Intros GeForce 9800 GTX+

#51
wolf
Better Than Native
And improvements they are :D I can't believe I'm getting 20% more FPS, and these are GTX 280/260 drivers hacked for my 9800. I hope there's more to come :D
Posted on Reply
#52
tkpenalty
-Tkpenalty yawns-

And they leave us 8800GT users in the dark. BRAVO NVIDIA.
Posted on Reply
#53
btarunr
Editor & Senior Moderator
Dark_Webster: But still... is it worth it, with the new HD4850s?
It's absolutely not.

If the 9800GTX (old) is sold for $199, take a look at these:

[price/performance comparison charts]

Even at $199 it's not worth it. Add to that the much higher power requirements of the 9800GTX (and the need for 2x 6-pin, or 6-pin + 2x Molex, power inputs). Add to that fewer scalability options compared to the HD4850: you can use up to four HD4850s in tandem; you can't with the 9800GTX. If $400 were all I had, I'd buy a 790FX board and an HD4850 now, enjoy all the games, and keep buying an HD4850 each month, so by the end of four months I'd have a fairly powerful graphics sub-system.
Posted on Reply
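As a side note on the staggered-upgrade plan btarunr describes above, here is a quick back-of-the-envelope sketch. The $199 card price comes from the post; the board price is a hypothetical placeholder, not a quote:

```python
# Cumulative cost of buying a 790FX board plus one HD4850 per month.
# CARD is the $199 MSRP cited in the thread; BOARD is an assumed figure.
BOARD = 200   # hypothetical 790FX board price
CARD = 199    # HD4850 MSRP from the thread

total = BOARD + CARD  # month 0: board + first card
print(f"Month 0: 1 card, ${total} total")
for month in range(1, 4):  # one extra HD4850 each month
    total += CARD
    print(f"Month {month}: {month + 1} cards, ${total} total")
```

Under these assumptions the initial outlay is about $400, and the four-card setup lands just under $1,000 spread over four months.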
#54
Bundy
wolf: And improvements they are :D I can't believe I'm getting 20% more FPS, and these are GTX 280/260 drivers hacked for my 9800. I hope there's more to come :D
I can't decide whether to install the modded drivers or wait until NVIDIA officially releases them. How long do they normally hold these back for? Geez.

lol 20% for free is the best value for money I've seen in a while. It makes this price drop worth even more than it seems given that most benchmarks were using the older drivers.
Posted on Reply
#55
Mussels
Freshwater Moderator
Well, going to 55nm and slightly changing the name is a good move. I think it makes better sense than merely updating the core and having the two get mixed in together.
Posted on Reply
#56
TooFast
NVIDIA is screwing up big time... all the people that paid $350 for the 9800GTX must feel stupid.
Posted on Reply
#57
Mussels
Freshwater Moderator
TooFast: NVIDIA is screwing up big time... all the people that paid $350 for the 9800GTX must feel stupid.
By that logic, everyone who ever bought a video card feels stupid.

OMG you bought a 9700PRO for $300 in 2002? I got an 8800GT for $200 this year! OMG you must feel stupid!

Sorry if this comes off as petty, but those kinds of comments are very short-sighted... a cheaper, better-value card ALWAYS comes out, in time.
Posted on Reply
#58
candle_86
Megasty: That's a lot of people's opinion on sandbagging. The only cure for it is for it to backfire in their faces. I've been waiting for that for 2 yrs now. I want the good old days of close competition back. GW NV, now your best single G92 can hardly keep up with ATi's mid-range card :respect:
G92 is 2007 tech, and the fact that it can tie the new core honestly speaks volumes about its longevity. It makes you wonder what happens with the refreshed GT200 cores that will come: likely smaller, a little cut down, but overall faster.
Posted on Reply
#59
TooFast
Mussels: By that logic, everyone who ever bought a video card feels stupid.

OMG you bought a 9700PRO for $300 in 2002? I got an 8800GT for $200 this year! OMG you must feel stupid!

Sorry if this comes off as petty, but those kinds of comments are very short-sighted... a cheaper, better-value card ALWAYS comes out, in time.
Ya... but not a month later!
Posted on Reply
#60
candle_86
Mussels: By that logic, everyone who ever bought a video card feels stupid.

OMG you bought a 9700PRO for $300 in 2002? I got an 8800GT for $200 this year! OMG you must feel stupid!

Sorry if this comes off as petty, but those kinds of comments are very short-sighted... a cheaper, better-value card ALWAYS comes out, in time.
Agreed. Look what the 6600GT did to the 9800XT, or what the 7600GT did to the X800XT/6800 Ultra, or the 8800GT/3870 to the 8800GTS (G80)/2900XT. It happens every year. I paid 250 bucks for a 5900XT in late January 2004; does that make me stupid? Back then, no. If I paid that same price today, yes. The same logic applies to anyone who bought, say, an 8800GTX in 2006 for 600 bucks: they made a smart move, but would they be stupid to pay even half that price today? Yes, they would. Use logic: the 9800GTX came out so NVIDIA could appear to be moving forward; that's why it appeared, because to the average moron NVIDIA looked stagnant. It's the cost of being on the bleeding edge. In two years you can probably find a GTX 280 on eBay for 100-150 bucks, but not today; the fastest always costs the most in its time. If you want to wait two years to get a really good deal on a card, be my guest, but by then it's merely midrange, if that.
Posted on Reply
#61
Mussels
Freshwater Moderator
candle_86: Agreed. Look what the 6600GT did to the 9800XT, or what the 7600GT did to the X800XT/6800 Ultra, or the 8800GT/3870 to the 8800GTS (G80)/2900XT. It happens every year. I paid 250 bucks for a 5900XT in late January 2004; does that make me stupid? Back then, no. If I paid that same price today, yes. The same logic applies to anyone who bought, say, an 8800GTX in 2006 for 600 bucks: they made a smart move, but would they be stupid to pay even half that price today? Yes, they would. Use logic: the 9800GTX came out so NVIDIA could appear to be moving forward; that's why it appeared, because to the average moron NVIDIA looked stagnant. It's the cost of being on the bleeding edge. In two years you can probably find a GTX 280 on eBay for 100-150 bucks, but not today; the fastest always costs the most in its time. If you want to wait two years to get a really good deal on a card, be my guest, but by then it's merely midrange, if that.
On the 8800GTX: I was one of the lucky ones who did... considering its power even today, I think that was one hell of an investment. Anyone who got a 9800GTX in the last week or so can try to return it for a GTX+ anyway... I would, in that situation.
Posted on Reply
#62
PedoBearApproves
Mussels: By that logic, everyone who ever bought a video card feels stupid.

OMG you bought a 9700PRO for $300 in 2002? I got an 8800GT for $200 this year! OMG you must feel stupid!

Sorry if this comes off as petty, but those kinds of comments are very short-sighted... a cheaper, better-value card ALWAYS comes out, in time.
Your analogy is very flawed: in this case you're comparing a first-gen DX9 card from how many years ago with a card from THIS YEAR.

And I really think anybody who bought a 9800GTX was pretty stupid if they didn't get some kind of uber deal, since the 8800GTS G92 gives the same performance. Hell, mine hits 788/1900/2020 no problem; that's faster than the new 9800GTX+...
Posted on Reply
#63
Mussels
Freshwater Moderator
Timing all comes down to education. How many people on here have advised others to wait for the new cards?

If someone didn't wait and rushed out, they didn't get a bad deal... they paid the extra money to get the card sooner because they didn't want to wait.

Speed, quality, price: pick two.
It's a universal law for buying things.
(Speed being the speed at which you get the item, not the speed of the hardware.)
Posted on Reply
#64
Megasty
candle_86: G92 is 2007 tech, and the fact that it can tie the new core honestly speaks volumes about its longevity. It makes you wonder what happens with the refreshed GT200 cores that will come: likely smaller, a little cut down, but overall faster.
The only problem is that NV keeps switching between 55nm and 65nm cores, when 55nm cores have every advantage over 65nm ones. The G92 is great and fast, but all this switching around for more costly, huge, and slower cores just doesn't make any sense. NV and ATi have cores produced by the same companies, so the tech to produce all cores at 55nm is there, but NV only uses it for rehashes. Then the rehash makes the original look like a POS :( I can't delve into their 65nm dinosaurs when I know for sure that the 55nm part will blow all over them, but for some reason they refuse to move completely to 55nm :shadedshu
Posted on Reply
#65
DarkMatter
DaMulta: There was a performance increase in the 8.6 drivers as well. Is AMD holding back too?
You still don't know this? Nvidia is the reincarnation of evil, while Ati is Christ resurrected in graphics-company form, who out of humility came with very low resources and works beyond his possibilities every day to deliver some little improvements. Nvidia, on the other hand, has the ability to make cards 10x faster, but because owning the market well beyond its actual marketshare and taking Ati out of the equation would be bad for business, they hold back all that potential. In fact, letting Ati have more than 30% market share while being inferior (if full potential were used), and having to lower most cards' prices to compete with inferior products*, is a far better way of improving profits than improving the cards' performance, you can't compare. /sarcasm mode off

*Like the 8800 GT, 9600 GT... always taking into account that yours are better, but you don't want to unleash their power because of the drivers you are holding back :)
Posted on Reply
#66
DarkMatter
Megasty: The only problem is that NV keeps switching between 55nm and 65nm cores, when 55nm cores have every advantage over 65nm ones. The G92 is great and fast, but all this switching around for more costly, huge, and slower cores just doesn't make any sense. NV and ATi have cores produced by the same companies, so the tech to produce all cores at 55nm is there, but NV only uses it for rehashes. Then the rehash makes the original look like a POS :( I can't delve into their 65nm dinosaurs when I know for sure that the 55nm part will blow all over them, but for some reason they refuse to move completely to 55nm :shadedshu
Switching to a lower fab process is not always beneficial if the process and your architecture don't fit well at that moment. It can increase costs a lot, sometimes fail to deliver higher clocks or better power consumption, and can even mean severely lower yields (Prescott, the first 45nm chips...). Producing on 55nm costs more per wafer than on 65nm; the trick is whether the higher chip count and other benefits make up for that difference. A clear example of this is RV670, especially when compared to RV770. RV670 loses to G92 in performance-per-watt (under load), even though the latter is on 65nm and has considerably more transistors. How can that be? The classic answer was more SPs, or some other exotic explanations. But to invalidate those, here comes RV770, with many more SPs, TMUs, and transistors, yet without much higher power consumption. We can talk about architecture improvements, but there is an undeniable improvement in the fab process too. If you want my opinion, I think it's more the latter than the former: this is simply what a 55nm chip should consume.
Posted on Reply
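To make the performance-per-watt comparison above concrete, here is a minimal sketch of the metric itself. The FPS and wattage figures are purely illustrative placeholders chosen only to mirror the thread's claims, not measurements from any review:

```python
# Performance-per-watt sketch for the RV670 / G92 / RV770 argument above.
# All figures below are made-up placeholders, NOT benchmark results.
def perf_per_watt(avg_fps: float, load_watts: float) -> float:
    """Frames delivered per watt drawn under load; higher is better."""
    return avg_fps / load_watts

cards = {
    "RV670 (HD3870, 55nm)": (50.0, 105.0),  # placeholder fps, watts
    "G92 (9800GTX, 65nm)":  (65.0, 130.0),  # placeholder: edges out RV670 per the post
    "RV770 (HD4850, 55nm)": (80.0, 115.0),  # placeholder: ~10W over HD3870, much faster
}
for name, (fps, watts) in cards.items():
    print(f"{name}: {perf_per_watt(fps, watts):.2f} fps/W")
```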
#67
candle_86
Actually, those 800 SPs aren't all being used. It's still the same old R600 design if you actually look at it, except that instead of 64 shaders used for complex work, now 160 of the 800 are. It's still a 5-part shader design that has been retooled for better efficiency but is still crippled by the ROPs and TMUs. The reason for the numbers we see is quite simple: 64 complex shaders vs 160 complex shaders, and 128 simple shaders vs 320 simple shaders, with the rest being integer units. They more than doubled their shader output on this card, but again a lot of the card is still not used, and with PowerPlay those parts are switched off. That explains your power figures: that, and the lack of ROPs and TMUs.
Posted on Reply
#68
DarkMatter
candle_86: Actually, those 800 SPs aren't all being used. It's still the same old R600 design if you actually look at it, except that instead of 64 shaders used for complex work, now 160 of the 800 are. It's still a 5-part shader design that has been retooled for better efficiency but is still crippled by the ROPs and TMUs. The reason for the numbers we see is quite simple: 64 complex shaders vs 160 complex shaders, and 128 simple shaders vs 320 simple shaders, with the rest being integer units. They more than doubled their shader output on this card, but again a lot of the card is still not used, and with PowerPlay those parts are switched off. That explains your power figures: that, and the lack of ROPs and TMUs.
The architecture being similar in both cases, no matter how you look at it, it has 2.5x more shaders, both complex and simple. I would also bet it's maintaining the SP-to-TMU ratio and has 40 TMUs, again 2.5x. Now, the rest of the chip is the "same" size, so 2.5x higher power consumption is out of the question, but I think 25-50% more seems reasonable; the actual card consumes just 10W, or 5%, more than the HD3870 under load while being much, much faster. If you strongly believe this comes from architectural enhancements only, explain to me how, and show me some precedents, please.

Of course, I'm basing my numbers on the belief that the card has 40 TMUs and thus has "enough" of them, or at least as many, proportionally, as the HD3870 has, so that under-load usage is similar. And note that I said similar, as I DO believe all the new cards are being underused, as are the fastest older cards: GX2 > X2 > 9800GTX, in order of lesser usage of real power.

If the TMU count is smaller you could be right and many SPs might not be used, though we'll have to wait a week and see; but there are enough "hints" out there to make me believe it has 40.
Posted on Reply
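The 2.5x scaling claim in the exchange above is easy to check as arithmetic. The RV670 unit counts were public at the time; the 40-TMU figure for RV770 was still the poster's guess, so it is treated as an assumption here:

```python
# Unit scaling from RV670 (HD3870) to RV770 (HD4850), per the thread.
rv670 = {"sps": 320, "tmus": 16}
rv770 = {"sps": 800, "tmus": 40}  # 40 TMUs assumed in the post, not yet confirmed

sp_scale = rv770["sps"] / rv670["sps"]     # 800 / 320 = 2.5
tmu_scale = rv770["tmus"] / rv670["tmus"]  # 40 / 16  = 2.5
print(f"SP scale: {sp_scale}x, TMU scale: {tmu_scale}x")

# Equal scaling on both sides preserves the SP:TMU balance (20:1 either way).
print(f"SP:TMU ratio: {rv670['sps'] // rv670['tmus']}:1 vs "
      f"{rv770['sps'] // rv770['tmus']}:1")
```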
#69
candle_86
It's not that the TMUs are slowed down. Also, look at the 4850 reviews: the core hasn't been crippled from what we know, so the TMUs are in line with what we expect. But you have to consider a few things about optimization and die shrinks. First of all, as stated, a shrink doesn't automatically save power, and 65nm to 55nm isn't that big a step, honestly. The number of shaders went up, and so did the TMUs, granted, but look at how the core was arranged, and at the memory subsystem: GDDR5 runs a lot cooler and uses less power than even GDDR4 does, and with 8 chips that helps a lot. Second, the core was laid out better this round, likely to cut heat output, which adds the bonus of less energy wasted as heat; power dissipated by resistance in the interconnects is simply power the card loses. All of this plays a massive role in power usage, along with other things such as the quality of the components used; even the caps can affect power usage. I honestly believe the power envelope we see has more to do with design than process.
Posted on Reply
#70
DarkMatter
candle_86: It's not that the TMUs are slowed down. Also, look at the 4850 reviews: the core hasn't been crippled from what we know, so the TMUs are in line with what we expect. But you have to consider a few things about optimization and die shrinks. First of all, as stated, a shrink doesn't automatically save power, and 65nm to 55nm isn't that big a step, honestly. The number of shaders went up, and so did the TMUs, granted, but look at how the core was arranged, and at the memory subsystem: GDDR5 runs a lot cooler and uses less power than even GDDR4 does, and with 8 chips that helps a lot. Second, the core was laid out better this round, likely to cut heat output, which adds the bonus of less energy wasted as heat; power dissipated by resistance in the interconnects is simply power the card loses. All of this plays a massive role in power usage, along with other things such as the quality of the components used; even the caps can affect power usage. I honestly believe the power envelope we see has more to do with design than process.
Look, it's a fact that power efficiency and many other things improve as a fab process matures: finding the right type and amount of doping, better ways of getting rid of the residues, and the time and way the chemicals are applied are only some examples of the things that advance and that have to be tuned for every process (EDIT: and chip), as no two are equal.

Don't bring GDDR5 into this, because this card doesn't use it.

There's no doubt that all the things you mentioned are true, but IMO the process is far more relevant. Ati is always the first to jump to new processes, so they are simply not mature when Ati first uses them. They take a risk, and sometimes it pays off and sometimes it doesn't. Not that what you say isn't also true; in fact, many of the enhancements you mentioned have a lot to do with the fab process, because when the chip is smaller the arrangement has to change. That is true, but if Ati wasn't able to come up with the right arrangement in the first place, Nvidia would probably not do it better. That alone is one reason not to change. Let's look at another one:

Price: what I'm going to say applies to GDDR5 too. When weighing the price of a fab process against the benefits you are going to obtain, you can't simply take the price as it is now (or as it was when Ati started using it) and say the change pays off. Nvidia has 60-70% of the discrete market share; that means it has to produce twice as much as Ati to maintain it. Current 55nm manufacturing costs (remember GDDR5 too) are based on demand from pretty much only Ati's needs, and suppliers struggle to meet even that. Mix Nvidia in and you need to multiply the supply by 3x. They can't do that, so prices would skyrocket. In fact, TSMC already raised all of its prices for this same reason, alongside rising costs (which are in turn related to demand). If prices went up it would be bad for both companies, not to mention consumers; but because Ati targets a lower-priced market, that situation would be beneficial for them, though in no way for Nvidia. For Nvidia it's simply better not to enter that price/supply war and to lower prices for as long as they remain competitive. Do ask if this point is not well explained.
Posted on Reply
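The supply argument in the post above also reduces to simple arithmetic, using the post's own figures (Nvidia at roughly twice Ati's volume, with 55nm capacity currently sized for Ati alone):

```python
# Demand on 55nm capacity if Nvidia joined Ati, per the post's own figures.
ati_volume = 1.0                   # current 55nm demand ~= Ati's needs alone
nvidia_volume = 2.0 * ati_volume   # post: NV must produce about twice as much

total_demand = ati_volume + nvidia_volume
print(f"Required 55nm supply: {total_demand:.0f}x current capacity")  # 3x
```

Which is the 3x multiplier the post arrives at, and the basis for the claim that wafer prices would spike.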
#71
MilkyWay
So according to the chart, a 4850 is the same as a 3870 X2.

So one GPU is as powerful as the top dual-GPU ATI solution?

That's good.

Only idiots would buy this updated 9800, because there are new cards around the corner and the relative performance increase over a normal 9800GTX is minute: probably slightly higher clocks due to the smaller die, and a little less power usage.
Posted on Reply
#72
candle_86
Well, if the regular 9800GTX goes to $199, and if the new numbers the 177 betas give are any indication, I'd say the 9800GTX is a damn good competitor to the 4850.
Posted on Reply
#73
MilkyWay
The 9800GTX is last gen's card, and if a last-gen card can equal a next-gen card in performance, then that proves the new gen is shit.

The problem is that the 4850/70 is more powerful than the 9800GTX.

You're only taking the price into account; even then, the 4850 is the card of choice this gen due to the price. Either way, I'll have to wait and see whether the next NVIDIA cards perform well enough to justify the price.
Posted on Reply
#74
btarunr
Editor & Senior Moderator
MilkyWay: The 9800GTX is last gen's card, and if a last-gen card can equal a next-gen card in performance, then that proves the new gen is shit.
If it takes a $199 card to beat a 9800GTX that is just three months old and had a $350 launch price, it proves otherwise. "This is new-gen, that wasn't" is a flawed argument. The G92 was built to compete with ATI in this very generation; it was just rushed in because of RV670. Otherwise, G92 was supposed to be the GPU that shat all over the GeForce 9 series.
Posted on Reply
#75
DarkMatter
btarunr: If it takes a $199 card to beat a 9800GTX that is just three months old and had a $350 launch price, it proves otherwise. "This is new-gen, that wasn't" is a flawed argument. The G92 was built to compete with ATI in this very generation; it was just rushed in because of RV670. Otherwise, G92 was supposed to be the GPU that shat all over the GeForce 9 series.
Not at all. Nvidia has been postponing GT200 for more than a year. GT200 was supposed to be the 9 series.
Posted on Reply