Thursday, July 3rd 2008
NVIDIA Roadmap Suggests DirectX 10.1 GPU in Q4 2008, GDDR5 in 2008
TG Daily stumbled across NVIDIA roadmap information that provides some details of the green giant's DirectX 10.1 implementation plans. Currently, ATI and S3 Graphics make graphics processors (GPUs) compliant with the DirectX 10.1 specification. A presentation slide shows NVIDIA planning a brand-new fleet of mobile (notebook) DirectX 10.1 graphics processors slated for spring 2009, and desktop GPUs for either late Q4 2008 or early 2009, with a possible ramp throughout Q1 and Q2 of 2009.
This gives competitors a head start of at least six months in which to build developer relations and aid the development of games based on the DirectX 10.1 API. It is now all but certain that the API will become an industry-wide standard, with the biggest player in the discrete-graphics industry planning to embrace it.
The second revelation the slide brings is that NVIDIA will implement GDDR5 memory with its upcoming products within 2008.
Source:
TG Daily
44 Comments on NVIDIA Roadmap Suggests DirectX 10.1 GPU in Q4 2008, GDDR5 in 2008
At least after the amount of pressure Ubi came under with patch 1.02, they might reinstate DX10.1 in the near future.
And the info we have today is:
1- A data sheet and company line from Ati that touts the benefits of DX10.1, but which states clearly (if you read it carefully) that the benefits apply only to some specific features.
2- ONE game that benefits from DX10.1.
2.1 - According to the developers it was broken, so this point is not very clear anyway.
3- Many developers saying DX10.1 isn't an upgrade at all for them, because the way they use the API gains nothing from moving to DX10.1.
4- The maker of the API itself saying it has few benefits over DX10, except for some specific features.
5- Nvidia saying it's not that important for them because it's not important for developers.
In the end we have two fronts in this story: one that believes points 1-2 are true and 3-4-5 are lies, and one that believes 1-2 are true in some cases, and that so many developers and MS wouldn't lie just to help Nvidia, when in fact they know MS may be annoyed with them (it was not that long ago) because of what happened with the Xbox.
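For what it's worth, supporting both paths costs developers very little. Here is a minimal sketch (mine, not from any of the sources above) of the feature-level mechanism Direct3D 10.1 introduced: the application asks for a 10.1 device first and falls back to plain 10.0 on hardware that lacks it. The helper name CreateBestDevice is made up for illustration; the D3D10CreateDevice1 call and the enums are the real SDK ones.

#include <d3d10_1.h>
#pragma comment(lib, "d3d10_1.lib")

// Sketch: request a DX10.1 device first, fall back to a DX10.0
// feature level if the GPU (e.g. a current Nvidia part) lacks it.
ID3D10Device1* CreateBestDevice()
{
    const D3D10_FEATURE_LEVEL1 levels[] = {
        D3D10_FEATURE_LEVEL_10_1,   // preferred: full DX10.1
        D3D10_FEATURE_LEVEL_10_0    // fallback: plain DX10.0
    };
    ID3D10Device1* device = NULL;
    for (int i = 0; i < 2; ++i) {
        HRESULT hr = D3D10CreateDevice1(
            NULL,                        // default adapter
            D3D10_DRIVER_TYPE_HARDWARE,  // hardware device
            NULL,                        // no software rasterizer module
            0,                           // no creation flags
            levels[i],
            D3D10_1_SDK_VERSION,
            &device);
        if (SUCCEEDED(hr))
            return device;  // highest feature level the GPU supports
    }
    return NULL;            // no D3D10-class hardware/driver present
}

So a DX10.1 renderer doesn't lock anyone out; it just runs its extra path where the hardware allows it.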
www.techpowerup.com/reviews/Palit/Revolution_R700/25.html
Forget about the Palit R700 card and the X2 altogether; they are both dual-GPU cards and it would be very easy for me to make a point out of them. No, look at the HD4870 and GTX 260. Same performance, and the Ati card has the fab-process advantage and a smaller chip. Yet on average it consumes more. And the average shown by Wizz's reviews is not like other averages where the card spends hours idling; it's the average of a 3DMark run. Never forget that Nvidia has always had a fab-process disadvantage, and that's a big one, yet they are often well ahead in performance per watt.
Now look at this article:
forums.techpowerup.com/showthread.php?t=76104
That's what G200b will consume. That's what a TRUE Nvidia 55nm card consumes. I said plenty of times that G92b was NOT a true 55nm chip, but people prefer to dismiss that fact.
Now, I am not comparing Nvidia's power draw to Ati's to bash or praise either of them; we can only compare Nvidia to Ati because those are the only two players in the field. And you can't just say that if they continue this way they will consume a lot, because reality contradicts that claim. We have two vendors and two strategies, and as we can see, Nvidia has the better one in this respect: power consumption. Just compare their 55nm high-ends with comparable performance: almost 300W (HD4870X2) versus 160W...
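To put a number on that last comparison, here's the arithmetic as a quick sketch. The wattages are the figures quoted above; the equal performance value (1.0 for both) is my assumption that the two high-ends deliver comparable frame rates, not a measurement.

#include <cstdio>

// Performance-per-watt sketch for the comparison above.
int main()
{
    const double perf = 1.0;              // assumed comparable performance
    const double watts_hd4870x2 = 300.0;  // ~300W, HD4870X2 (quoted above)
    const double watts_g200b    = 160.0;  // ~160W, 55nm G200b (linked thread)

    const double ppw_ati    = perf / watts_hd4870x2;
    const double ppw_nvidia = perf / watts_g200b;

    std::printf("perf/W advantage: %.2fx\n", ppw_nvidia / ppw_ati);
    // Under these assumptions this prints ~1.88x in Nvidia's favor.
    return 0;
}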
The final word about this will come with 40nm cards, so be patient.