Tuesday, December 2nd 2008
Dual GeForce GTX 260 to be Officially Named GeForce GTX 295
NVIDIA is giving its flagship consumer graphics processor, the G200, a refresh using the newer 55nm silicon fabrication process. With it, the company plans to carve out new SKUs that take advantage of the updated core's improved thermal and electrical properties. In the pipeline is a dual-GPU card based on two GeForce GTX 260 GPUs.
Expreview has learned that the new graphics card is to be named GeForce GTX 295. NVIDIA is creating the card to regain the performance crown from the ATI Radeon HD 4870 X2, currently the fastest single graphics card on the market. The card will sport two G200b cores in the 216 SP configuration, although not much is known about the memory configuration and clock speeds at this point. The card has already passed the design phase and is awaiting trial production and testing. It is expected to be released in January 2009.
Source:
Expreview
98 Comments on Dual GeForce GTX 260 to be Officially Named GeForce GTX 295
About the rehashes: I don't have the smallest issue with them as long as they are still competitive in the segment they are released in. When I buy a graphics card I look for much more than just the name; more and more people have started to do so, and everyone SHOULD. I have worked in stores temporarily now and then, and TBH I have yet to see anyone buy a dishwasher, a microwave, a drill or anything similar without asking about every single aspect of that machine. Same with cars, MP3 players, cellphones and many other things: people care to learn and ask about them. BUT when it comes to hardware they come to the store and ask for "a powerful gaming card for 50 euro", "-the Nvidia GeForce -yeah, but which one -huh?" etc, etc. Personally I couldn't care less if that kind of person is confused by the name changes: the person who spends 10 minutes asking about every property of a $50 drill and then shells out $300 for a graphics card without caring one bit about what he is getting, except the name (they would always have the time to come back later claiming that was not what they needed, or that it doesn't fit in their slot, etc.). Sorry, but I don't care. Just as I don't care if the 8800 GT keeps selling under that name, or as the 9800 GT or GT 150; it's the same card, and it has demonstrated it's still a worthwhile buy in the current market.
\end of rant lol.
all my gfx card purchases are based on (in no particular order):
1. my brand preference at the time; for example, I know my next card will be nvidia.
2. what the card has on offer in terms of tech specs, i.e. shaders, ROPs, memory, memory bus, etc. Very rarely will power consumption or heat be a swaying factor, as I have expendable income for the power bill and I aftermarket-cool just about every card I get.
3. the all-important benchmarks: how well does it stack up against what I own now, and what else is on offer in the same and other segments?
4. price/performance ratio. Not the biggest factor, but quite important nonetheless; that's how I justified buying an 8800GTX two years after release, since I paid LESS than the going price/performance rate for it (see the rough sketch after this post).
Name really doesn't matter jack to me, however I do enjoy anything with GT in it, Ultra in it, or any amount of X's :) but that's just wank factor really.
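As a rough illustration of the price/performance idea in point 4, here is a minimal sketch; the card names, prices and frame rates are made up purely to show the "FPS per dollar" arithmetic, not to rank any real products.

```python
# Toy price/performance comparison; card names, prices and FPS figures
# are hypothetical, purely to illustrate the "FPS per dollar" idea.
cards = {
    "Card A": {"price_usd": 300, "avg_fps": 60},
    "Card B": {"price_usd": 450, "avg_fps": 75},
}

for name, specs in cards.items():
    fps_per_dollar = specs["avg_fps"] / specs["price_usd"]
    print(f"{name}: {fps_per_dollar:.3f} FPS per dollar")
```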
I have a friend who thinks that if it costs more it must be that much better . . . He has a 9600GT now because I told him to shut up and take it, because that's what was best for him and his usage.
I bought my GTX260 because I wanted more than I could use and I wanted to play. The GTX280 wouldn't make a difference for me because I couldn't make use of it (a 22" only goes to 1680x1050).
Although I may SLI it with another one to get my F@H score higher . . . ; )
Not only will this just be a waste, it's also going to get more difficult for developers to write code that keeps these dual chips utilized at 100% efficiency. Take the 4870X2 for example: I know that card hasn't reached its full potential, and by the time it does it will be at its EOL, with faster cards out at half the original retail price.
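As a back-of-the-envelope illustration of why two GPUs on one card rarely reach 100% efficiency, here is a small Amdahl's-law-style sketch; the fraction of a frame that actually scales across both GPUs is an assumed figure, not a measurement of any real card.

```python
# Rough multi-GPU scaling estimate (Amdahl's-law style, assumed numbers).
# parallel_fraction is the share of frame time that scales across both GPUs;
# the rest (driver overhead, synchronization, non-scaling work) does not.
def effective_speedup(num_gpus: int, parallel_fraction: float) -> float:
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / num_gpus)

for frac in (0.70, 0.85, 0.95):
    speedup = effective_speedup(2, frac)
    print(f"{frac:.0%} of the frame scales -> {speedup:.2f}x over one GPU "
          f"({speedup / 2:.0%} efficiency)")
```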
I'm not a silicon engineer, but I'll bet it's pretty hard to keep cramming transistors, SPs, and ROPs onto those dies.
@Ozzy: SPs and ROPs etc. can always be refined, and look at how far we've come from vacuum tubes. Stuff can always get smaller and use less energy. AMD did very little except make a fast chip, fast RAM, and so on... I've said it before and I'll say it again: there's a huge image quality difference in 3D between the nV and ATi cards, and at the moment nV holds the crown for the best 3D picture.
Intel, with a more antiseptic grasp on the whole platform itself has unleashed SLi to its full rabid potential.
You are partially right, but IMHO the thing has lately worked in the completely opposite way. Intel chipsets were much better, so anyone even considering using multi-GPU in the future took an Intel board and an Ati graphics card. Examples of people on TPU deciding between Nvidia/Ati who went Ati just because of that are extremely common. If anything, the change works to the benefit of Nvidia's GPU division. The chipset team is another thing; as you said, they still have AMD, and they could make good mainstream chipsets for Nehalem in the future, who knows. The 750i wasn't so bad after all (stability problems aside).
But yeah, my personal preference: I'm unsure I will ever use SLI or Crossfire, and I always think there should be excellent non-SLI/Crossfire chipsets out there. Just look at my current combo, it works flawlessly (I know it's old tech, but it can hold its own in many games, and it leaves room for future tweaking; I want to reach into the 2.4-3.0GHz arena and keep running legacy 16-bit games).
For me, I wouldn't mind running an NV chipset/AMD CPU/ATI graphics card, as that is what my current machine is, and it has served me well for the last 5-6 years (one year I couldn't use it because I was overseas).
plz plz plz :respect: heheheheh
At one point in time, the only way to get multithreading in a computer was to have two processors; that problem was solved by putting multiple cores on a single chip. The CPU industry has made more leaps in innovation; all I'm saying is maybe the GPU industry should take notes and do the same.
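To make that CPU analogy concrete, here is a minimal sketch of splitting one job across the cores of a single chip, using Python's multiprocessing module as a stand-in; the four-way split is an arbitrary choice for illustration.

```python
# Minimal illustration of the multi-core point: one chip, several cores,
# one job split across them (Python's multiprocessing as a stand-in).
from multiprocessing import Pool

def partial_sum_of_squares(chunk):
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::4] for i in range(4)]   # split the work four ways
    with Pool(processes=4) as pool:           # e.g. one worker per core
        total = sum(pool.map(partial_sum_of_squares, chunks))
    print(total)
```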
Also, you couldn't be more wrong about the architecture not having changed in all this time. It used to be that a GPU could only do one operation per SP; then they doubled that, and ATi doubled it again with some crazy change they made to the architecture (I'm too lazy to look up what it was called).
Another point is that a complete overhaul of the GPU architecture would cause havoc with older games, because they were designed to run on the current architecture. This problem is the reason Ray Tracing hasn't hit the market full-force.
Finally, don't call names. It's not nice.
There you go again with your tunnel vision; my point is now clarified: adding more to the equation does NOT overhaul an architecture. Second, my Core i7 is doing a great job running older programs. Third, where in the world does all that money go when you upgrade to a new 500 dollar GPU? Because I sure as hell haven't seen anything innovative since, not to mention DX10 is a fat joke. Pointless to argue with someone of your intellect :shadedshu.
But if that's not enough of an "architecture change" for you, then in essence the last GPU to use that approach was the NV2 core, which would generate a single polygon and then round the edges of the polygon to make it appear 3D... Care to take a guess what happened to that "architecture"? It died, and it took the Sega Saturn along with it, and I'll tell you why: it was because every single game had to be re-programmed to work with the way the GPU worked. This is the same reason Ray Tracing hasn't taken a huge market share yet: they can't get it to work in conjunction with the current texture/shader system.
IN ADDITION, we don't need huge architectural changes thanks to software like OpenGL and DirectX, which in many cases actually makes just as much difference as the raw grunt of the GPU.
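As a toy sketch of that abstraction-layer point (this is not real OpenGL or DirectX code, and the vendor backends are hypothetical): the game only talks to one drawing interface, and each driver maps the same call onto its own hardware, which is why games don't need rewriting every time the underlying architecture changes.

```python
# Toy abstraction-layer illustration (not real OpenGL/DirectX; backends are hypothetical).
class GraphicsAPI:
    def draw_triangles(self, vertices):
        raise NotImplementedError

class VendorABackend(GraphicsAPI):
    def draw_triangles(self, vertices):
        print(f"Vendor A: {len(vertices) // 3} triangles via unified shaders")

class VendorBBackend(GraphicsAPI):
    def draw_triangles(self, vertices):
        print(f"Vendor B: {len(vertices) // 3} triangles via VLIW shader clusters")

def render_frame(api: GraphicsAPI):
    # The "game" code stays the same whichever backend sits underneath.
    api.draw_triangles([(0, 0), (1, 0), (0, 1), (1, 1), (0, 1), (1, 0)])

render_frame(VendorABackend())
render_frame(VendorBBackend())
```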