Tuesday, December 2nd 2008

Dual GeForce GTX 260 to be Officially Named GeForce GTX 295

NVIDIA will be giving its flagship consumer graphics processor, the G200, a refresh on the newer 55 nm silicon fabrication process. With it, the company plans to carve out new SKUs that take advantage of the enhanced thermal and electrical properties of the updated core. In the pipeline is a dual-GPU card based on two GeForce GTX 260 GPUs.

Expreview learned that the new graphics card is to be named GeForce GTX 295. NVIDIA is creating the card to regain the performance crown from the ATI Radeon HD 4870 X2, currently the fastest single graphics card on the market. The card will sport two G200b cores in the 216-SP configuration, although not much is known about the memory configuration and clock speeds at this point. The card has already passed the design phase and is awaiting trial production and testing. It is expected to be released in January 2009.
Source: Expreview

98 Comments on Dual GeForce GTX 260 to be Officially Named GeForce GTX 295

#76
DarkMatter
wolf: amen to that, RV770 has been a godsend even if you didn't buy one.

Having said that, 45 nm processes aren't far away; what's the bet that GT200 AND G9X are both re-released in this form too?
+1 to RV770. Although I don't agree with the strategy and strongly believe it isn't self-sustainable, this particular implementation of it was like a godsend.

About the rehashes, I don't have the smallest issue with them as long as they are still competitive in the segment they are released in. When I buy a graphics card I look for much more than just the name; more and more people have started to do the same, and everyone SHOULD. I have worked in stores on and off, and TBH I have yet to see anyone buy a dishwasher, a microwave, a drill or anything similar without asking about every single aspect of that machine. Same with cars, MP3 players, cellphones and many other things: people take the time to learn and ask about them.

BUT when it comes to hardware, they come to the store and ask for "a powerful gaming card for 50 euro", "-the Nvidia GeForce -yeah, but which one - huh?", etc. Personally I couldn't care less if that kind of person is confused by the name changes: the kind who spends 10 minutes asking about every property of a $50 drill and then shells out $300 for a graphics card without caring one bit about what he is getting except the name (though they always find the time to come back later claiming it was not what they needed, or that it doesn't fit in their slot, etc.). Sorry, but I don't care. Just as I don't care if the 8800 GT keeps selling under that name, or as the 9800 GT or GT 150; it's the same card, and it has demonstrated it's still a worthwhile buy in the current market.

\end of rant lol.
Posted on Reply
#77
wolf
Better Than Native
Agreed, it's really not that important what the name is; just know what you are buying.

All my gfx card purchases are based on the following (in no particular order):

1. My brand preference at the time; for example, I know my next card will be NVIDIA.

2. What the card has on offer in terms of tech specs, i.e. shaders, ROPs, memory, memory bus, etc. Very rarely will power consumption or heat be a swaying factor, as I have expendable income for the power bill and I aftermarket-cool just about every card I get.

3. The all-important benchmarks: how well it stacks up against what I own now, and what else is on offer in the same and other segments.

4. Price/performance ratio. Not the biggest factor but quite important nonetheless; that's how I justified buying an 8800 GTX two years after release, since I paid less than its performance would normally command (see the quick sketch after this list).

The name really doesn't matter jack to me, though I do enjoy anything with GT in it, Ultra in it, or any amount of X's :) but that's just wank factor really.
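As an illustration of that last criterion, here is a minimal sketch of a price/performance comparison. The card names, frame rates and prices are hypothetical placeholders, not benchmark results.

```python
# Hypothetical price/performance comparison; all numbers are made up for illustration.
cards = {
    "Card A": {"avg_fps": 60.0, "price_usd": 300.0},
    "Card B": {"avg_fps": 45.0, "price_usd": 180.0},
}

for name, c in cards.items():
    fps_per_dollar = c["avg_fps"] / c["price_usd"]
    print(f"{name}: {fps_per_dollar:.3f} fps per dollar")

# Card A: 0.200 fps per dollar
# Card B: 0.250 fps per dollar  (the cheaper card wins on pure value here)
```

The same arithmetic explains the 8800 GTX point above: a card bought well below its launch price can come out ahead on fps per dollar even against newer parts.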
Posted on Reply
#78
Darkrealms
DarkMatter, I agree with you and wolf on that. I see people all the time asking everything about a product, but when it comes to computer components they just ask "what's good" or "what's best for $XX" and get whatever they get.
I have a friend who thinks that if it costs more it must be that much better... He has a 9600 GT now because I told him to shut up and take it, because that's what was best for him and his usage.

I bought my GTX 260 because I wanted more than I could use and I wanted to play. The GTX 280 wouldn't make a difference for me because I couldn't make use of it (a 22" only goes to 1680x1050).

Although I may SLI it with another one to get my F@H score higher . . . ; )
Posted on Reply
#79
Skillz
They need to stop this stupid cycle of adding more to the equation and instead make the equation better. It will only get more and more complicated with all this chip-adding: now it's two chips on a card, pretty soon we'll see four, and a move from 10" to 12" in card length. They're getting tunnel vision when it comes to new designs.
Not only is this a waste, it's also going to get more difficult for developers to write code that uses these dual chips at 100% efficiency. Take the 4870 X2 for example: I know that card hasn't reached its full potential, and by the time it does it will be at its EOL, with faster cards out at half the original retail price.
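To put the efficiency point in concrete terms, here is a minimal sketch of how multi-GPU scaling limits a dual-GPU card, assuming a simple alternate-frame-rendering model; the frame rates and scaling factors are hypothetical, not measured numbers.

```python
# Minimal sketch: how imperfect scaling erodes the benefit of a second GPU.
# All numbers below are hypothetical placeholders, not benchmark results.

def dual_gpu_fps(single_gpu_fps: float, scaling: float) -> float:
    """Effective frame rate of a two-GPU card under a simple
    alternate-frame-rendering model, where `scaling` is the fraction of the
    second GPU's work that actually turns into extra frames."""
    return single_gpu_fps * (1.0 + scaling)

single = 40.0  # hypothetical single-GPU frame rate in some game
for scaling in (1.0, 0.8, 0.5, 0.0):
    print(f"scaling {scaling:.0%}: {dual_gpu_fps(single, scaling):.0f} fps")

# scaling 100%: 80 fps  (the ideal that driver profiles rarely reach)
# scaling 80%:  72 fps
# scaling 50%:  60 fps
# scaling 0%:   40 fps  (a game with no multi-GPU profile at all)
```

The gap between the 100% row and the rest is the "unused potential" the comment above describes.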
Posted on Reply
#80
OzzmanFloyd120
Skillz: They need to stop this stupid cycle of adding more to the equation and instead make the equation better. It will only get more and more complicated with all this chip-adding: now it's two chips on a card, pretty soon we'll see four, and a move from 10" to 12" in card length. They're getting tunnel vision when it comes to new designs.
Not only is this a waste, it's also going to get more difficult for developers to write code that uses these dual chips at 100% efficiency. Take the 4870 X2 for example: I know that card hasn't reached its full potential, and by the time it does it will be at its EOL, with faster cards out at half the original retail price.
I disagree; by that logic they'd just end up making chips on 100 nm fabs that burn red hot when they're running.
I'm not a silicon engineer, but I'll bet it's pretty hard to keep cramming transistors, SPs, and ROPs onto those dies.
Posted on Reply
#81
Binge
Overclocking Surrealism
wolf: in your sig it says GTX280 192, is that a 260?
U get an award for correcting my spEELing error.



@Ozzy: SPs and ROPs etc. can always be refined; look at how far we've come from vacuum tubes. Stuff can always get smaller and use less energy. AMD did very little except make a fast chip, fast RAM, and so on... I've said it before and I'll say it again: there's a huge IQ difference in 3D between the nV and ATi cards, and at the moment nV holds the crown for the best 3D picture.
Posted on Reply
#82
Bjorn_Of_Iceland
DarkMatter: Part of the problem for SLI in the past was that no Nvidia chipset (nor ATI's, anyway) was as fast as Intel chipsets to begin with, so SLI already started at a disadvantage against CrossFire on Intel chipsets. X58 has evened the field, and that's why SLI is doing better now, IMHO.
I do agree. In fact, the benchies on various sites led me to think that SLI is somewhat crippled on NVIDIA chipset architecture (single-card performance on the new platform differs little or not at all from the previous one, whilst multi-GPU setups are REALLY scaling well). This only shows that NVIDIA SLI really was held back by its parents... now that it has moved out of the house, it's time for it to party like a rockstar.

Intel, with a more antiseptic grasp on the whole platform itself, has unleashed SLI to its full rabid potential.
Posted on Reply
#83
eidairaman1
The Exiled Airman
What's funny is that since Intel is selling the X58, that means that Nvidia can't use SLI as their main selling point on Intel motherboards; now with AMD it's a different story, until the same fate happens to NV on SLI again.
Posted on Reply
#84
DarkMatter
eidairaman1: What's funny is that since Intel is selling the X58, that means that Nvidia can't use SLI as their main selling point on Intel motherboards; now with AMD it's a different story, until the same fate happens to NV on SLI again.
I don't understand what you mean. :confused:
Posted on Reply
#85
eidairaman1
The Exiled Airman
For quite some years NV has been using SLI as the main selling point for motherboards with their chipsets; you could only get SLI if you had an NV chipset motherboard, and now that has changed. That basically opens up the market to users who switch video cards out when one set is better than the other, etc.
Posted on Reply
#86
DarkMatter
eidairaman1: For quite some years NV has been using SLI as the main selling point for motherboards with their chipsets; you could only get SLI if you had an NV chipset motherboard, and now that has changed. That basically opens up the market to users who switch video cards out when one set is better than the other, etc.
Ah, that's what I thought, but I didn't quite understand the sentence "that means that Nvidia can't use SLI as their main selling point on Intel motherboards". You meant on Nvidia motherboards for Intel processors, and that's what I didn't get.

You are partially right, but IMHO the thing has lately worked in completely the opposite way. Intel chipsets were much better, so anyone even considering multi-GPU in the future took an Intel board and an ATI graphics card. The number of people on TPU deciding between Nvidia/ATI who went ATI just because of that is extremely high. If anything, the change works to the benefit of Nvidia's GPU division. The chipset team is another matter; as you said, they still have AMD, and they could make good mainstream chipsets for Nehalem in the future, who knows. The 750i wasn't so bad after all (stability problems aside).
Posted on Reply
#87
eidairaman1
The Exiled Airman
Considering it was a Biostar T-Power that hit the 6 GHz barrier with a C2E8400, and it took Asus with a higher-dollar board and higher-dollar processor to reach a little over the 6 GHz barrier.

But yeah, my personal preference: I'm unsure I will ever use SLI or Crossfire, and I always think there should be excellent non-SLI/Crossfire chipsets out there. Look at my current combo; it works flawlessly (I know it's old tech but it can hold its own in many games, and there's room for future tweaking; I want to reach into the 2.4-3.0 GHz arena and run legacy 16-bit games).

For me, I wouldn't mind running an NV chipset/AMD CPU/ATI graphics card, as that is what my current machine is and it has served me well for the last 5-6 years (one year I couldn't use it because I was overseas).
Posted on Reply
#88
wolf
Better Than Native
Binge: U get an award for correcting my spEELing error.
So you'll send me the GTX280? :rolleyes:

plz plz plz :respect: heheheheh
Posted on Reply
#89
Skillz
OzzmanFloyd120: I disagree; by that logic they'd just end up making chips on 100 nm fabs that burn red hot when they're running.
I'm not a silicon engineer, but I'll bet it's pretty hard to keep cramming transistors, SPs, and ROPs onto those dies.
No one said anything about cramming in more transistors; I'm talking about a complete overhaul, just like they did with the Core i7. One major innovation was putting the memory controller directly on the chip... a new design. GPUs have had the same architecture for years; even you, sir, seem to be thinking with tunnel vision.
At one point the only way to have multithreading in a computer was to have two processors; the problem was solved by putting multiple cores on a single chip. The CPU industry has made more leaps in innovation; all I'm saying is maybe the GPU industry should take notes and do the same.
Posted on Reply
#90
Tatty_Two
Gone Fishing
eidairaman1: For quite some years NV has been using SLI as the main selling point for motherboards with their chipsets; you could only get SLI if you had an NV chipset motherboard, and now that has changed. That basically opens up the market to users who switch video cards out when one set is better than the other, etc.
It just strengthens Intel's positioning against AMD in the CPU wars. I am guessing that unless things change, AMD will lose on both fronts to a certain degree. On the CPU front, even more people will go for Intel chipset motherboards now that they will have both GPU vendors to play with, mainly of course if they are seeking a multi-GPU platform. And on the GPU front, those people who only bought ATI cards because they wanted the multi-GPU option on an Intel chipset board can go back to NVIDIA... Intel wins, NVIDIA wins, so who loses? Obviously loss is subjective, but it's all good for the consumer.
Posted on Reply
#91
OzzmanFloyd120
Skillz: You are a fool for thinking so; no one said anything about cramming in more transistors, I'm talking about a complete overhaul, just like they did with the Core i7. One major innovation was putting the memory controller directly on the chip... a new design. GPUs have had the same architecture for years; even you, sir, seem to be thinking with tunnel vision.
At one point the only way to have multithreading in a computer was to have two processors; the problem was solved by putting multiple cores on a single chip. The CPU industry has made more leaps in innovation; all I'm saying is maybe the GPU industry should take notes and do the same.
They are working on exactly that. It's called "ray tracing".
Also, you couldn't be more wrong about the architecture not having changed in all this time. It used to be that a GPU could only do one operation per SP; then they doubled that, and ATi doubled that again with some crazy change they made to the architecture (I'm too lazy to look up what it was called).
Another point is that a complete overhaul of the GPU architecture would wreak havoc on older games, because they were designed to run on the current architecture. This problem is the reason that ray tracing hasn't hit the market full-force.
Finally, don't call names. It's not nice.
Posted on Reply
#92
Skillz
OzzmanFloyd120: They are working on exactly that. It's called "ray tracing".
Also, you couldn't be more wrong about the architecture not having changed in all this time. It used to be that a GPU could only do one operation per SP; then they doubled that, and ATi doubled that again with some crazy change they made to the architecture (I'm too lazy to look up what it was called).
Another point is that a complete overhaul of the GPU architecture would wreak havoc on older games, because they were designed to run on the current architecture. This problem is the reason that ray tracing hasn't hit the market full-force.
Finally, don't call names. It's not nice.
"then they doubled that, and ATi doubled that again with some crazy change they made to the architecture"

There you go again with your tunnel vision; my point is now clarified: adding more to the equation does NOT overhaul an architecture. Second, my Core i7 is doing a great job running older programs. Third, where in the world does all that money go when you upgrade to a new $500 GPU? Because I sure as hell haven't seen anything innovative since then, not to mention DX10 is a fat joke. Pointless to argue with someone of your intellect. :shadedshu
Posted on Reply
#93
OzzmanFloyd120
Skillz"then they doubled that and ATi again doubled that with some crazy change they made in architecture"

Their you go again with your tunnel vision, my point is now clarified, adding more to the equation does NOT over haul an architecture...second my core i7 is doing a great job running older programs, third where in the world does all that money go when you upgrade to a new 500 dollar GPU cause i sure as hell haven't seen anything innovative since not to mention DX10 is a fat joke, pointless to argue with someone of your intellect:shadedshu.
For a start comparing a GPU to a CPU is worlds different. However, if I MUST explain. Back when the x800 series came out ATi released a major architecture change where each SP (or pixel pipeline as they called it back in those days) was able to double the amount of ROPs that each SP was able to produce, effectively creating four times pixel grunt per operation. They called it The Quad Dispatch System. This is why the x800 series of GPUs were legendary.
But if that's not enough of an "architecture change" for you then in essence the last GPU to use that was the NV2 core which would generate a single polygon and then it would round the edge of the polygon to make it appear 3D.... Care to take a guess what happened to that "architecture"? It died, and it took the Sega Saturn along with it, and I'll tell you why. It was because every single game had to be re-programmed to work with the way the GPU worked. This is the same reason that Ray Tracing hasn't taken a huge market share yet, because they can't get it to work in conjunction with with the current texture/shader system.
IN ADDITION we don't need huge architectural changes thanks to software like OpenGL and DirectX, which actually in many cases makes just as much difference as the grunt of the GPU.
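Since ray tracing keeps coming up in this thread, here is a minimal, purely illustrative sketch of the kind of per-pixel work a ray tracer does: fire a ray and test it against scene geometry (a single sphere here). The scene and numbers are hypothetical, and this is not tied to any vendor's implementation.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance along the ray to the nearest sphere intersection,
    or None if the ray misses. Solves the quadratic |o + t*d - c|^2 = r^2."""
    oc = tuple(o - c for o, c in zip(origin, center))   # vector from sphere center to ray origin
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * a * c                           # discriminant of the quadratic
    if disc < 0.0:
        return None                                      # ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2.0 * a)               # nearest root
    return t if t > 0.0 else None                        # only count hits in front of the camera

# One ray fired straight down -Z at a hypothetical sphere 5 units away.
print(ray_sphere_hit(origin=(0, 0, 0), direction=(0, 0, -1),
                     center=(0, 0, -5), radius=1.0))     # prints 4.0
```

A rasterizer instead loops over triangles and projects them onto the screen, which is why the two approaches (and the games written for one of them) do not mix trivially, as the post above argues.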
Posted on Reply
#94
btarunr
Editor & Senior Moderator
Calm down people. Please stick to the topic.
Posted on Reply
#95
erocker
*
Everyone please tone down your language and do not insult other members while trying to get your point across. Behaving like children doesn't make you sound very credible, nor does it help get your point across any better. Be good. :slap:
Posted on Reply
#96
Wile E
Power User
Binge: There's a huge IQ difference in 3D between the nV and ATi cards, and at the moment nV holds the crown for the best 3D picture.
I 100% disagree. I just came from an 8800 GT to this 4850, and IQ is the same on both. A little different? Yes. One better than the other? No. Just different.
Posted on Reply
#97
Tatty_Two
Gone Fishing
Wile E: I 100% disagree. I just came from an 8800 GT to this 4850, and IQ is the same on both. A little different? Yes. One better than the other? No. Just different.
Agreed, although at 48 my eyes are rather old. As you know, I just came from a GTX 260 to two HD 4850s, and my weary eyes can't see any difference... maybe I need specs.
Posted on Reply
#98
Analog_Manner
Skillz"then they doubled that and ATi again doubled that with some crazy change they made in architecture"

Their you go again with your tunnel vision, my point is now clarified, adding more to the equation does NOT over haul an architecture...second my core i7 is doing a great job running older programs, third where in the world does all that money go when you upgrade to a new 500 dollar GPU cause i sure as hell haven't seen anything innovative since not to mention DX10 is a fat joke, pointless to argue with someone of your intellect:shadedshu.
If you think DX10 is a joke then I don't think you have every played DX10 World In Conflict and compared it to DX9.
Posted on Reply