Saturday, January 5th 2008
NVIDIA GeForce 9800GX2 New Pictures
Source: tomshardware.tw
Tags: GeForce 9800 GX2, NVIDIA
104 Comments on NVIDIA GeForce 9800GX2 New Pictures
Although ATI GPUs at the moment aren't as fast as nVidia's, they easily reclaim that performance difference in Crossfire; their GPUs communicate and work together much more efficiently. Now that the motherboard and chipset capability is there, they need to start going the "performance through sheer firepower" route.
I also think that having a GPU core in a CPU will do wonders for ATI. They'd have to clear up a lot of communication bottlenecks between a GPU core and a CPU core for it to work optimally, but you can bet that ATI cards would see a decent performance increase from the improved system communication.
:banghead:
You'd have a PCI-E card for outputs (HDMI/DisplayPort) and RAM, while the CPU/GPU could have the memory controller and processing power. It's really up to AMD how they want to play it out; they have a lot of options with HyperTransport, PCI-E 2.0, and integrated memory controllers.
Gotta laugh though... didn't he realise they were one company already? Or did he mean something else?
And since it looks like the two cores face each other and will share the same heatsink, I'm having a real problem seeing any critical flaws in this design (given the current information at hand). IDK, I guess in other words I'm willing (for the time being at least) to give Nvidia's engineers the benefit of the doubt; they obviously can't be the idiots some would like to take them for, given that they're the same ones that gave us the already critically acclaimed G80.
The only issue that really scares me about this product is whether or not the support will be there long term. The 7950GX2 debacle still leaves a bad taste in a lot of mouths. If the driver support is there, the price point truly lands in the $450 to $530 range, and the memory bus is large enough, with the other specs being decent... this could be a very interesting proposition. But for now... BRING ON THE BENCHIES!!!
All this talk and speculation is really just mental masturbation, especially on my part :roll:
I'm also sure the cards will be cheaper! 55 nm on one PCB.
Pffft!
ATi needs to get the drivers sorted out, and developers really need to add far better support for multiple GPUs.
I used the G80/G92 architecture for this example, since it's the more parallelized one from a manufacturing point of view, which is quite evident if you look at its block diagram. R600, even though more parallel in execution, seems more rigid in its architecture than G80.
www.techreport.com/articles.x/11211
www.techreport.com/articles.x/13603/2
Of course dual core GPUs would make sense sooner or later, for different reasons, the most important ones invisible to consumers, but I don't think this will happen too early. Maybe R800 or G110?
AMD is currently planning to integrate the PCI Express bridge chip into its future GPUs so that it does not need to adopt third-party chips. This design is expected to appear in AMD's next-generation R700 series.
At this price the green team is in some trouble.
Anyway, one thing is what they want to do, and another is what they can do. Complex architectures like Phenom come with very poor yields and a wide spread of different "workable products", and that wouldn't work very well on GPUs. They can use dual-core chips for the high end and single-core ones for mainstream, but what about the others? And how would they use defective cores? And two differently defective cores on the same die?
That's what happened with Phenoms. "One of the four cores is darn slow, do we make a slow quad or do we make a fast tricore?"
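To put that quad-vs-tri-core trade-off in concrete terms, here's a toy sketch of the kind of binning decision I mean. The bin_die function, the clock numbers, and the 300 MHz cut-off are all made up for illustration; this isn't anything AMD actually publishes.

```python
# Toy illustration of the binning trade-off: given per-core test results,
# does a die ship as a slow quad-core or as a faster tri-core part?
# All numbers and the 300 MHz threshold are invented for the example.

def bin_die(core_max_clocks_mhz):
    """core_max_clocks_mhz: highest stable clock per core; 0 means defective."""
    working = [c for c in core_max_clocks_mhz if c > 0]
    if len(working) < 3:
        return "scrap"  # not enough good cores to sell as anything
    if len(working) == 3:
        return f"tri-core @ {min(working)} MHz"
    # All four cores work, but a quad has to run at the slowest core's clock.
    quad_clock = min(working)
    tri_clock = min(sorted(working, reverse=True)[:3])  # drop the slowest core
    if tri_clock - quad_clock > 300:  # the slow core drags the quad down too much
        return f"tri-core @ {tri_clock} MHz"
    return f"quad-core @ {quad_clock} MHz"

print(bin_die([2400, 2500, 2300, 1800]))  # tri-core @ 2300 MHz
print(bin_die([2400, 2500, 2300, 2200]))  # quad-core @ 2200 MHz
```

The same question only gets harder on a dual-die GPU: two differently defective dies on one package leave far fewer salvage options than four CPU cores do.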
But, as you've mentioned, it could go the other way, and the whole project (which looks good on paper and in theory seems stellar in the R&D department) might go belly-up once it's actually out in the hands of consumers and faced with all sorts of hardware and software setups.
Perhaps that's why we've seen very few rumors about the new GPU, and perhaps why AMD/ATI is taking their time with it. But they've been put into a position where they've got very little left to lose, and that can equate to a company willing to go back over already-trodden ground and take a risk that a more solid company wouldn't even consider. Hopefully, though, they won't go the way 3DFX did when they started shooting for extreme solutions :ohwell: