Saturday, November 22nd 2008
Microsoft Confirms DirectX 11 to Accompany Windows 7
Microsoft's Ben Basaric, product marketing manager for Windows products, confirmed to PC Games Hardware that DirectX 11, the next major update to the DirectX API, will accompany Windows 7, Microsoft's next major consumer operating system. This overwrites his earlier statement to the website that he wasn't sure whether DirectX 11 would be ready to ship with the OS at launch. He also indicated that Windows Vista will get access to the updated API, although he did not say when.
Source:
PCGH
28 Comments on Microsoft Confirms DirectX 11 to Accompany Windows 7
Saying that - if Windows 7 is still two years away, I suppose that's more or less enough time for ATI & NVIDIA to start throwing out DX11 cards, but I suppose they're gonna be screaming at M$ for making them support yet another platform.
That means all DX10 hardware can run DX11 under Vista. Nice.
uxevangelist.blogspot.com/2008/11/november-2008-directx-sdk-contains-dx11.html
Haven't checked, so I don't know myself...have fun:)
Ben Basaric doesn't seem to know very much, does he? Doesn't know if, doesn't know when, isn't sure if, changes his blog and does u-turns...
If he is the product marketing manager for Windows products and he is so clueless, then he's not very good at his job. I suggest MS save itself some time and money and reinvest his salary, office space, and expense account in something more productive, even MAKING WINDOWS 7 CHEAPER! ;) Or invest in another debugger so that there isn't a need to patch the code so quickly ;)
Well, it will be interesting to see what happens. If DX10 cards can run DX11, it will be too slow anyway, just like the first SM3.0 cards were too slow to run HDR and the first SM4.0 cards were too slow to run DX10.
But, never fear, I had read somewhere not too long ago that ATI does plan on having DX11 compliant cards on the market sometime next year. More than likely, although I can't say for sure, the HD5000 series will be the first to market with DX11 support. ATI is good about getting on the ball with new tech support.
Hopefully this means that current DX10 and DX10.1 cards will be able to run the DX11 version of a game, just with certain things like the tessellator disabled. AFAIK the tessellator is the only significant hardware change between current cards and DX11 cards; as has been said above, DX11 is aiming at doing what DX10 does, only more efficiently, as well as adding compute shaders, tessellation and finally a multi-threaded render path.
bit-tech did a nice little summary of what DX11 was adding back in September, but I think that was before the multi-threaded render path was announced.
Server OSes are more refined than their desktop counterparts.
I say
2003 > XP
2008 > Vista
Windows 7, I swear, had better be well refined over Vista.
From what they say in some posts at GameDev.net, there are many, many other features like that (e.g. cube map arrays) that are apparently present in DX10 hardware but were not properly exposed. I don't know how to explain why they are not well implemented; roughly, it would be something like this:
where in DX10.1 you have to write: C = A+B
in DX10 you would have to do: ADD [content_of (A), content_of (B)] -> Store_in variable(C)
Note that the above is just a representation and has nothing to do with any real syntax, but you get the idea. Despite the DX10 statement being much more verbose, for all purposes the hardware has to do the exact same thing.
All of the above concerns the DX10 API. There's one more thing: DX10 hardware (what is sold as DX10) can do many more things than the DX10 API exposes, and it can do them in the proper manner, the one DX10.1 uses (i.e. C = A + B). BUT everybody has to remember that Microsoft decided that for DX10 and newer APIs, the hardware has to perform 100% of the features exactly as DX10.1 specifies them. If you don't support even one feature out of hundreds, you can't sell your card as DX10.1, although in practice and for all purposes the card can do everything in DX10.1 except that single thing. It doesn't matter if that feature is unimportant, or if it is a future-proofing feature that can't be implemented in current hardware, or if the hardware can do the same thing in a different (better for that hardware) manner.
Previous DX versions were plagued by missing or changed/optimized features depending on the GPU brand, and yet those cards could still obtain DX certification; DX10 and up can't, but that doesn't mean the cards can't do those things. Probably Microsoft has decided to step back a bit in DX11 and allow GPU manufacturers some flexibility. In the end, the old way of doing things was only worse for developers in theory; the truth is that many of them, including the most important ones, don't care too much about how easy the API is to use. It does help them, but it's not something they want as much as being able to use a feature on as much different hardware as possible.
A clear example of what I'm saying is Far Cry 2. Ubisoft was criticized for saying that DX10 and DX10.1 did the same for them, feature- and performance-wise. That's because they went through the hassle of also creating the DX10 path (remember: ADD [content_of (A), content_of (B)] -> Store_in variable(C) ) for every feature they used, something no other developer has done AFAIK.
How much more work it is to write that code for both paths separately, I don't know; it must be quite a bit, as FC2 is the first game to do it. It seems to have paid off, as it was also the first game that runs faster with AA enabled in DX10 than in DX9, at least on some systems.
Furthermore, by supporting both, the hardware or the driver model would be much more complex and expensive to make and maintain.
The better solution for clarity and ease of use (apart from every GPU using the same implementation, which was MS's intention) is for the API to support both paths behind one single function. That function would decide which path to use, while the developer stays agnostic to that choice. I honestly don't think it would be so difficult for the API to decide which to choose based on the GPU ID; I think that's how it worked prior to DX10 anyway.