Tuesday, June 30th 2020
Microsoft Rumored To Release Budget Xbox Series S Console
According to a recent report by The Verge, Microsoft is planning a budget console, known as the Xbox Series S, to go along with the Xbox Series X. The Xbox Series S will be a successor to the popular Xbox One S. Codenamed Lockhart, it is set to deliver the key next-generation improvements found in the Xbox Series X at a lower price point.
Sources say that the Xbox Series S will target 1080p 60 FPS and 1440p 30 FPS gaming performance. This will be achieved with roughly one third of the GPU power, at 4 TeraFLOPS, and 10 GB of GDDR6 RAM. The Xbox Series S is rumored to share the same 7 nm AMD Zen 2 SoC and the same ultra-fast PCIe 4.0 SSD. The console may not feature a disc drive, much like the Xbox One S All-Digital Edition. Microsoft has yet to say anything publicly about the console, so take these rumors with a grain of salt.
Update Jun 30th: Sources tell Eurogamer that Microsoft is planning to unveil the console at an event in August.
Sources:
The Verge, @tomwarren, Eurogamer
57 Comments on Microsoft Rumored To Release Budget Xbox Series S Console
I'm sure you, and not Microsoft, have managed to spot a critical flaw that means everything will be bad. It's not like Microsoft does any kind of R&D, testing, or thinking about how their products work. Not one person at that trillion-dollar company has even thought about that until you did.
If there was some option to play PC games on Xbox, it would be a killer feature, but then most of the PC gaming hardware market would crumble, so it won't happen.
I'm aware manufacturers can make massive mistakes. My point is that this would be quite a trivial oversight if it ends up being the case (which I'm quite confident it won't), and not something on the same level as some of those other mistakes.
The render target resolution is in fact independent, but overall memory usage is still lower at lower resolutions. Most PC games today will struggle with 4GB of VRAM at 4K, but with the same settings at 1080p you'd see ~1GB of VRAM usage. If the Series S ships with 12GB like the One X, and we assume VRAM usage of 2-3GB at most at 1080p (and the SSD will help get that number much lower, as lower-priority assets can be stored on the SSD) and 4-5GB at 1440p, then that leaves at least 7GB for the rest of the system.
See? Are you all still not convinced that games don't actually use that much more memory at 4K versus 1080p? I know what's probably coming next: "yeah, but all those games just fill up all the available memory even if they won't use it all". No, it's not that; it's just not that big of a difference.
That being said, I stand by my conclusion that 6GB is a huge chunk of memory, and losing it will act as a big constraint, forcing developers to simplify game logic and assets. And I wonder how much cheaper this console would really be: if the SoC is the same, MS will pay the same for every wafer they get from AMD, irrespective of where it will be used.
I don't know about your country, but in my country the Xbox One X with one included game costs €399. So if the big one costs €499 and the smaller one only €299, that wouldn't be such a bad deal in my opinion, but I would rather pay more for the stronger one. I don't think the SoC is the same.
For 4 TFLOPS you need a GPU with only 20 CUs at 1560 MHz or 16 CUs at 1950 MHz, and we know the SoC has 56 CUs in total. They won't disable that many CUs just to put it in a much cheaper console.
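Quick sanity check on that math (a rough sketch assuming an RDNA-style layout of 64 shaders per CU and 2 FP32 ops per shader per clock; the CU counts and clocks are just the illustrative ones above, not confirmed specs):

```python
# Rough FP32 throughput estimate for an RDNA-style GPU.
# Assumption: 64 shaders per CU, 2 FLOPs per shader per clock (FMA).
def tflops(cus: int, clock_mhz: float) -> float:
    shaders = cus * 64
    return shaders * 2 * clock_mhz * 1e6 / 1e12

print(tflops(20, 1560))  # ~3.99 TFLOPS
print(tflops(16, 1950))  # ~3.99 TFLOPS
print(tflops(52, 1825))  # ~12.15 TFLOPS (Series X-class, for comparison)
```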
If you have two different SoCs, then you will pay two different licence fees to AMD, and even if the fee were the same, the production cost for a smaller SoC is still lower; with millions of SoCs produced, that's a lot of saved money.
MS gets the chips from AMD, not TSMC; I don't know why you insist on this. TSMC just makes the wafers with whatever is printed on them; you could draw stick figures with transistors on them for all they care. I don't think you quite get this: no matter the chip, you pay the same amount of money for one wafer, and the cost per die goes up and down because of yields. That's why it's probably too expensive to design another chip when you can just use the defective dies you would have gotten anyway. This of course works for relatively small chips; if you had something like a 700 mm^2 monstrosity, then yes, it would probably be cheaper to just make another chip.
For MS it should be better to have 2 SoCs rather than just 1. If the Project Scarlett SoC is 360 mm^2 but the Lockhart SoC is <=300 mm^2, then you can make more Lockhart SoCs from one wafer even with the same yield (failure) rate. With a smaller SoC you don't need to order as many wafers as you would with a bigger one, so MS won't need to pay AMD for as many wafers.
With millions of SoCs needed for the cheaper console, you won't have enough faulty SoCs to cover the demand and would need to partially disable fully functional Scarlett SoCs. It's not out of the question that it would be cheaper to use the Scarlett SoC instead of a smaller SoC, but only if the weaker console sells fewer units than the stronger one.
I found a wafer yield calculator. Here
I found that defect density is 0.09 per cm^2 for 7nm.
Let's say 300mm 7nm wafer costs $10,000.
Scarlett SoC - 360 mm^2 -> 112 good ones and 41 faulty ones.
Cost per good SoC is 10,000/112 = $89.3
"Lockhart" SoC - 285 mm^2 -> 152 good ones and 44 faulty ones.
Cost per good SoC is 10,000/152 = $65.8
If you wanted to make 10,000,000 Scarlett SoCs for the Lockhart console, you would need 89,286 wafers, which would cost $892,860,000.
If you wanted to make 10,000,000 Lockhart SoCs for the Lockhart console, you would need 65,790 wafers, which would cost $657,900,000.
You would save $234,960,000 on wafer cost. That is with the faulty Scarlett SoCs excluded, some of which can certainly be reused.
If, let's say, 30 of those can be reused, then 142 usable SoCs come out of one wafer in total, and it would be 10,000/142 = $70.4 per SoC.
If you wanted to make 10,000,000 Scarlett SoCs for the Lockhart console, you would need 70,423 wafers, which would cost $704,230,000, and that's only $46,330,000 more.
Of course, this difference can shrink or grow depending on how many Lockhart consoles are made.
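If anyone wants to reproduce the rough math, here's a minimal sketch. It uses a common dies-per-wafer approximation and a Poisson yield model, so it lands close to, but not exactly on, the calculator's 112/152 figures (that calculator has its own scribe-line and edge-exclusion assumptions); the $10,000 wafer price is just the assumption from above.

```python
import math

WAFER_DIAMETER_MM = 300
DEFECT_DENSITY_PER_CM2 = 0.09   # assumed 7 nm defect density, as above
WAFER_COST_USD = 10_000         # assumed 7 nm wafer price, as above

def dies_per_wafer(die_area_mm2: float) -> float:
    # Common approximation: usable wafer area divided by die area, minus edge losses.
    d = WAFER_DIAMETER_MM
    return math.pi * (d / 2) ** 2 / die_area_mm2 - math.pi * d / math.sqrt(2 * die_area_mm2)

def poisson_yield(die_area_mm2: float) -> float:
    # Fraction of dies with zero defects under a Poisson defect model.
    return math.exp(-DEFECT_DENSITY_PER_CM2 * die_area_mm2 / 100)

for name, area in [("Scarlett ~360 mm^2", 360), ("Lockhart ~285 mm^2", 285)]:
    candidates = dies_per_wafer(area)
    good = candidates * poisson_yield(area)
    print(f"{name}: ~{candidates:.0f} dies/wafer, ~{good:.0f} good, "
          f"~${WAFER_COST_USD / good:.0f} per good die")
```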
b) A lot of that data in VRAM is loaded "just in case", with a relatively small portion ever actually being used. Given that the new consoles are built around NVMe SSDs and superior-to-Windows storage architectures, it's not that much of a stretch to assume that they can do this less, relying on the SSD to stream in necessary data on the fly (to a certain extent, of course).
c) Those tests are run on Ultra settings. No console runs PC Ultra settings, as Ultra inevitably activates all the stupidly expensive stuff that you barely even notice. Most games show significant drops in VRAM usage from Ultra to High.
d) It's entirely within reason to expect a lower-end console to use a lower tier of graphics settings too - likely something akin to PC "medium" settings, further lowering VRAM usage.
a) It makes no sense at all that an AMD card would use more VRAM. Does a character model gain extra vertices on an AMD GPU? Please explain what exactly the driver is doing.
b) Of course not all stored data is used in rendering every single frame; that holds true whether you run a game at the lowest possible settings or at 4K with everything turned all the way up. VRAM usage is VRAM usage; it makes no sense to argue that some of the data is not used. No program ever uses every single byte at once, come on.
c) Why does it matter if it's Ultra or not? You do realize that if the settings were to be turned down, the difference between lower and higher resolutions would be even smaller, right?
d) Of course it is, but that still can't justify how all that cut-down memory somehow only has to do with graphical settings. I'll remind you that you can currently fit the entire frame buffers of modern games into something like 6GB. How much more memory do you think higher-resolution shadow maps use, for example? Rough numbers below, for illustration.
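Here's a quick back-of-the-envelope sketch; the buffer list is a made-up but plausible deferred-rendering setup, not any specific engine's:

```python
# Back-of-the-envelope render-target memory at 1080p vs 4K.
def target_mb(width: int, height: int, bytes_per_pixel: int) -> float:
    return width * height * bytes_per_pixel / (1024 ** 2)

for name, (w, h) in [("1080p", (1920, 1080)), ("4K", (3840, 2160))]:
    # 4x RGBA8 G-buffer targets + 1x FP16 RGBA HDR target + 32-bit depth
    total = 4 * target_mb(w, h, 4) + target_mb(w, h, 8) + target_mb(w, h, 4)
    print(f"{name}: ~{total:.0f} MB of render targets")
# ~55 MB at 1080p vs ~221 MB at 4K: a real difference, but small next to
# texture and geometry data, which doesn't scale with output resolution.
```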
As for your responses:
a) As I said, it's quite well documented that VRAM usage differs between Nvidia and AMD in the same game at the same settings. But you misread me; AMD typically has less memory flagged as in use than Nvidia. I would guess this is down to various driver-related processes, such as the previously discussed pre-loading of data, compression/decompression, etc. I definitely didn't say this means either OEM's cards use more or less VRAM than the other while gaming; in fact, I explicitly underscored that this is likely not what is happening.
b) So... if not all data in RAM is needed, and the system and the game both know the only storage present is fast NVMe, perhaps some of that unnecessary data can be kept out of RAM and on the SSD?
c) What? No. Resolution is one of dozens, if not hundreds, of variables in graphics settings. Reducing rendering quality does not equate to lowering the resolution. 1080p with amazing shadows and lighting is still less sharp than 4K with poor shadows and lighting. As to whether sharpness or rendering quality is the bigger determinant of visual quality? Both, in various situations and ways. But then again, here we are talking about a lower-end SKU that will likely run at BOTH a lower resolution AND lower settings. Makes sense, no? And, again, this would lower VRAM needs for the lower-end SKU.
As for d), you really ought to have pointed out earlier that you were talking about RAM usage outside of graphics, as that is what the entire debate has been about up until now. Being vague and overly general doesn't get your point across. Beyond that, I've covered this above.
Thinking about cost savings, the lower price point should allow for a much cheaper PSU, a smaller die for the processor (or maybe harvested Series X chips with too many defects), lower RAM costs, and potentially no disc drive either. I hope that adds up to a $300 launch, as anything more would likely not be worth releasing separately.
Microsoft has definitely done some market research before deciding that this CPU will be the same.