That doesn't make sense: since DX10 cards can't run DX10.1, DX11 would be less of an upgrade than DX10.1 was. It would make DX10.1 kinda pointless if its features aren't included in DX11.
Well, it will be interesting to see what happens. Even if DX10 cards can run DX11 it will be too slow anyway, just like the first SM3.0 cards were too slow for HDR and the first SM4.0 cards were too slow for DX10.
DX10 hardware can do pretty much everything (if not everything) DX10.1 does. For instance, the most famous DX10.1 feature, MSAA buffer readback, can be done through DX10 with NO PERFORMANCE PENALTY, contrary to what was suggested. The problem is that the feature was not a requirement in DX10 and, to make things worse, it was not properly explained in the documentation, so very few developers noticed it or wanted to go through the hassle.
From what people comment in some posts at GameDev.net, there are many other features like that (i.e. cube map arrays) that are apparently present in DX10, but not as cleanly exposed. I don't know how better to explain why they aren't well exposed; roughly, it's something like this:
where in DX10.1 you would write: C = A + B
in DX10 you would have to do: ADD [content_of(A), content_of(B)] -> Store_in variable(C)
Note that the above is just a representation and has nothing to do with any real API, but you get the idea. Even though the DX10 version is much more convoluted, for all purposes the hardware would end up doing the exact same thing.
All of the above concerns the DX10 API. There's one more thing: DX10 hardware (what is sold as DX10) can do many more things than the DX10 API exposes, and it can do them in the proper manner, the one DX10.1 uses (i.e. C = A + B). BUT everybody has to remember that Microsoft decided that for DX10 and newer APIs the hardware has to perform 100% of the features exactly the way the spec defines them. Fail to support 1 feature out of hundreds and you can't sell your card as DX10.1, even though in practice and for all purposes the card can do everything else in DX10.1. It doesn't matter if that feature is unimportant, or if it's a future-proofing feature that can't be implemented in current hardware, or if the hardware can do the same thing in a different (better for that hardware) manner.
Previous DX versions were plagued by missing or changed/optimized features depending on the GPU brand, and yet cards could still obtain the DX certification; DX10 and up don't allow that, but that doesn't mean the cards can't do those things. Microsoft has probably decided to step back a bit in DX11 and give GPU manufacturers some flexibility. In the end, the old way of doing things was only worse for developers in theory; the truth is that many of them, including the most important ones, don't care that much about how easy the API is to use. It does help, but it's not something they want as much as being able to use a feature on as much different hardware as possible.
A clear example of what I'm saying is Far Cry 2. Ubisoft has been criticized because they said that DX10 and DX10.1 did the same for them, feature- and performance-wise. That's because they went through the hassle of also creating the DX10 path (remember: ADD [content_of(A), content_of(B)] -> Store_in variable(C)) for every feature they used, something no other developer has done AFAIK.