
Microsoft Rumored To Release Budget Xbox Series S Console

Joined
May 28, 2020
Messages
752 (0.49/day)
System Name Main PC
Processor AMD Ryzen 9 5950X
Motherboard ASUS X570 Crosshair VIII Hero (Wi-Fi)
Cooling EKWB X570 VIII Hero Monoblock, 2x XD5, Heatkiller IV SB block for chipset,Alphacool 3090 Strix block
Memory 4x16GB 3200-14-14-14-34 G.Skill Trident RGB (OC: 3600-14-14-14-28)
Video Card(s) ASUS RTX 3090 Strix OC
Storage 500GB+500GB SSD RAID0, Fusion IoDrive2 1.2TB, Huawei HSSD 2TB, 11TB on server used for steam
Display(s) Dell LG CX48 (custom res: 3840x1620@120Hz) + Acer XB271HU 2560x1440@144Hz
Case Corsair 1000D
Audio Device(s) Sennheiser HD599, Blue Yeti
Power Supply Corsair RM1000i
Mouse Logitech G502 Lightspeed
Keyboard Corsair Strafe RGB MK2
Software Windows 10 Pro 20H2
Even if so it will have less memory.
As @Valantar mentions, the memory is a shared pool between GPU and CPU. Going from 16GB to like 8GB wouldn't be as bad since the GPU is 1/3rd the performance anyway, and therefore will use much less memory, opening up more for the CPU. Also going from 4K to 1080p will drastically reduce texture sizes and therefore memory usage. I imagine both console versions still have the same "budget" of memory that the CPU can use.

I'm sure you, and not Microsoft, have managed to spot a critical flaw that means everything will be bad. It's not like Microsoft does any kind of R&D, testing or thinking about how their products work. Not one person at that trillion dollar company has even thought about that until you did.
 
Joined
Jan 8, 2017
Messages
9,247 (3.35/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
But these consoles allocate memory dynamically between the CPU and GPU, and the high-end consoles will use a lot of it as VRAM due to their high resolutions. Running at 1080p or 1440p with lower texture quality will easily overcome this, and it won't affect development for the higher-end consoles whatsoever.

6GB is a lot even when you take lower textures and resolutions into consideration. As a developer you will be forced to make sure the game fits in those 10GB; there's no getting around it. You can fill the rest with trivial stuff like textures and whatnot on the higher-end console, but it's a waste of potential.

I'm sure you, and not Microsoft, have managed to spot a critical flaw that means everything will be bad. It's not like Microsoft does any kind of R&D, testing or thinking about how their products work. Not one person at that trillion dollar company has even thought about that until you did.

Don't get cheeky, it's not a critical flaw, it's an extra constraint developers will have to deal with. And yes, these trillion-dollar companies have designed countless pieces of hardware with terrible constraints: look at Cell and its separate GPU/system RAM, look at the ESRAM on the Xbox One, etc.

Also going from 4K to 1080p will drastically reduce texture sizes and therefore memory usage.

The render target resolution is independent of the texture resolution.
 
Joined
Jan 24, 2011
Messages
168 (0.03/day)
There's no way the price difference will be only $100 with a GPU ⅓ the performance and less memory. If this is true it is clearly a value play, likely aiming for $300 or possibly even less.
$100 was a bit of a conservative estimate on my part, so I'll change it to $150, but I don't think it will be $200.
If there were some option to play PC games on Xbox it would be a killer feature, but then most of the PC gaming (hardware) market would crumble, so it won't happen.
 
Joined
May 2, 2017
Messages
7,762 (2.93/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
6GB is a lot even when you take lower textures and resolutions into consideration. As a developer you will be forced to make sure the game fits in those 10GB; there's no getting around it. You can fill the rest with trivial stuff like textures and whatnot on the higher-end console, but it's a waste of potential.



Don't get cheeky, it's not a critical flaw, it's an extra constraint developers will have to deal with. And yes, these trillion-dollar companies have designed countless pieces of hardware with terrible constraints: look at Cell and its separate GPU/system RAM, look at the ESRAM on the Xbox One, etc.



The render target resolution is independent of the texture resolution.
Put it this way: it's entirely possible the XSX is aiming for (an equivalent of) 4K "High" or even borderline "Ultra" settings levels in most games. The amount of extra work needed to implement a "Medium" level that focuses on running smoothly at 1080p60 or 1440p30 while staying within a 7.5GB memory footprint (2.5GB is reserved for the OS) rather than the 13.5GB of the XSX? Not that much, frankly. Especially if the game is cross-platform, as most of the work would then be done for the PC port no matter what. It's also worth pointing out that consoles with a unified memory architecture need (often significantly) less total memory than an equivalent PC due to the lack of duplicated assets between RAM and VRAM.
$100 was a bit of a conservative estimate on my part, so I'll change it to $150, but I don't think it will be $200.
If there were some option to play PC games on Xbox it would be a killer feature, but then most of the PC gaming (hardware) market would crumble, so it won't happen.
They would struggle to sell this at those kinds of prices, given that it is after all an explicit low-end console. "Get ⅓ the performance for ⅔ the price" is a dirt-poor sales pitch. No optical drive, less RAM, smaller and cheaper motherboard, less expensive VRM components, smaller PSU, either a smaller and cheaper die or harvested chips that would otherwise go in the trash... $150 savings for this sounds like a relatively bad deal.
 
Joined
May 28, 2020
Messages
752 (0.49/day)
Don't get cheeky, it's not a critical flaw, it's an extra constraint developers will have to deal with. And yes, these trillion-dollar companies have designed countless pieces of hardware with terrible constraints: look at Cell and its separate GPU/system RAM, look at the ESRAM on the Xbox One, etc.
The render target resolution is independent of the texture resolution.

Right, I see my hyperbole didn't quite stick its landing. Oh well.

I'm aware manufacturers can make massive mistakes. My point is that this would be a fairly trivial oversight if it turned out to be the case (which I'm quite confident in saying it won't), and not something on the same level as some of those other mistakes.

The render target res is in fact independent, but overall memory usage is still lower at lower resolutions. Most PC games today will struggle with 4GB of VRAM at 4K, but at the same settings at 1080p you'd see ~1GB of VRAM usage. If the Series S ships with 12GB like the One X, and we assume VRAM usage of 2-3GB at most at 1080p (the SSD will help get that number much lower, as lower-priority assets can be kept on the SSD) and 4-5GB at 1440p, then that leaves at least 7GB for the rest of the system.
 
Joined
Jan 8, 2017
Messages
9,247 (3.35/day)
The render target res is in fact independent, but overall memory usage is still lower at lower resolutions. Most PC games today will struggle with 4GB of VRAM at 4K, but at the same settings at 1080p you'd see ~1GB of VRAM usage. If the Series S ships with 12GB like the One X, and we assume VRAM usage of 2-3GB at most at 1080p (the SSD will help get that number much lower, as lower-priority assets can be kept on the SSD) and 4-5GB at 1440p, then that leaves at least 7GB for the rest of the system.

I know of no game that uses 1GB at 1080p and over 4GB at 4K; the difference in memory usage is much less stark than people believe. Everyone makes the trivial error of assuming that 4 times the pixels means 4 times the overall VRAM usage. It doesn't work like that: a lot of the data stored in VRAM doesn't need to scale up and remains the same size. All of these are modern games:

[Attached screenshots: measured VRAM usage at 1080p vs 4K in four modern games]

See? Are you all still not convinced that games don't actually use that much more memory at 4K versus 1080p? I know what's probably coming next: "yeah, but all those games just fill up all the available memory even if they won't use it all". No, it's not that; the difference just isn't that big.
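To see why, here's a back-of-the-envelope sketch in Python: only the render targets (color, depth, G-buffer and so on) scale with pixel count, while the texture/geometry pool stays fixed. The target count of 5 and the 3000MiB asset pool are made-up illustrative numbers, not measurements from any game:

```python
def render_target_mib(width: int, height: int, targets: int = 5, bytes_per_px: int = 4) -> float:
    """Memory for the resolution-dependent buffers (color, depth, G-buffer...)."""
    return width * height * bytes_per_px * targets / 2**20

fixed_assets_mib = 3000  # hypothetical texture/geometry pool, identical at any resolution

for w, h in [(1920, 1080), (3840, 2160)]:
    rt = render_target_mib(w, h)
    print(w, h, round(rt), round(rt + fixed_assets_mib))
```

With these numbers the render targets do quadruple (roughly 40MiB to 158MiB), but the totals differ by only a few percent, consistent with the small 1080p-vs-4K differences shown above.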

That being said, I stand by my conclusion that 6GB is a huge chunk of memory, and losing it will act as a big constraint, forcing developers to simplify game logic and assets. And I wonder how much cheaper this console would really be: the SoC is the same, and MS will pay the same for every wafer they get from AMD irrespective of where the chips end up.
 
Joined
Jan 24, 2011
Messages
168 (0.03/day)
They would struggle to sell this at those kinds of prices, given that it is after all an explicit low-end console. "Get ⅓ the performance for ⅔ the price" is a dirt-poor sales pitch. No optical drive, less RAM, smaller and cheaper motherboard, less expensive VRM components, smaller PSU, either a smaller and cheaper die or harvested chips that would otherwise go in the trash... $150 savings for this sounds like a relatively bad deal.
I forgot it probably won't have an optical drive, so maybe $200 less in total.
I don't know about your country, but in mine the Xbox One X with one included game costs €399. So if the big one costs €499 and the smaller one only €299, that wouldn't be such a bad deal in my opinion, though I would rather pay more for the stronger one.

That being said, I stand by my conclusion that 6GB is a huge chunk of memory, and losing it will act as a big constraint, forcing developers to simplify game logic and assets. And I wonder how much cheaper this console would really be: the SoC is the same, and MS will pay the same for every wafer they get from AMD irrespective of where the chips end up.
I don't think the SoC is the same.
For 4TFlops you only need a GPU with 20 CUs at 1560MHz or 16 CUs at 1950MHz, and we know the SoC has 56 CUs in total. They won't disable that many CUs just to put it in a much cheaper console.
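For reference, those figures follow from the standard peak-FP32 formula: CUs x 64 shaders per CU x 2 ops per clock x clock speed. A quick sketch (assuming RDNA's 64 shaders per CU):

```python
def tflops(cus: int, clock_mhz: float) -> float:
    """Peak FP32 throughput in TFLOPS: CUs x 64 shaders x 2 ops/clock x clock."""
    return cus * 64 * 2 * clock_mhz * 1e6 / 1e12

print(round(tflops(20, 1560), 2))  # 3.99
print(round(tflops(16, 1950), 2))  # 3.99
print(round(tflops(52, 1825), 2))  # 12.15
```

Both of the smaller configurations land at ~4TF, and the full 52 CU / 1825MHz configuration gives the Series X's quoted 12.15TF.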
 
Joined
Jan 8, 2017
Messages
9,247 (3.35/day)
I don't think the SoC is the same.
For 4TFlops you only need a GPU with 20 CUs at 1560MHz or 16 CUs at 1950MHz, and we know the SoC has 56 CUs in total. They won't disable that many CUs just to put it in a much cheaper console.

I can't see them commissioning another custom die from AMD, these custom designs are expensive as hell.
 
Joined
Jan 24, 2011
Messages
168 (0.03/day)
And disabling 32-36 CUs plus other parts of the GPU isn't? That's a lot of disabled area on the whole SoC, and said SoC will be produced in the millions.
 

hat

Enthusiast
Joined
Nov 20, 2006
Messages
21,734 (3.36/day)
Location
Ohio
System Name Starlifter :: Dragonfly
Processor i7 2600k 4.4GHz :: i5 10400
Motherboard ASUS P8P67 Pro :: ASUS Prime H570-Plus
Cooling Cryorig M9 :: Stock
Memory 4x4GB DDR3 2133 :: 2x8GB DDR4 2400
Video Card(s) PNY GTX1070 :: Integrated UHD 630
Storage Crucial MX500 1TB, 2x1TB Seagate RAID 0 :: Mushkin Enhanced 60GB SSD, 3x4TB Seagate HDD RAID5
Display(s) Onn 165hz 1080p :: Acer 1080p
Case Antec SOHO 1030B :: Old White Full Tower
Audio Device(s) Creative X-Fi Titanium Fatal1ty Pro - Bose Companion 2 Series III :: None
Power Supply FSP Hydro GE 550w :: EVGA Supernova 550
Software Windows 10 Pro - Plex Server on Dragonfly
Benchmark Scores >9000
This is a horrible idea and it will cripple game development across all platforms. Developers will be forced to adhere to the CPU/RAM limits of the weaker console, meaning games will only be as complex as the weaker console allows. The GPU situation can be dealt with easily, but having a lower-clocked CPU, potentially with fewer cores, and less RAM will ruin progress this generation.
I'm inclined to agree. I thought consoles were all about offering a simple, hassle-free device for your average Joe to play games on. Having multiple models at different performance levels muddies the waters a bit. Sure, there were like 9 versions of the PS2, but they were all relatively minor changes with little impact on the end user. Some competitive people I know with original PS4s are put off from playing competitively because their PS4 isn't as good as some of the later ones, so they're already at a flat disadvantage. It looks like it's only going to get worse.
 
Joined
Jan 8, 2017
Messages
9,247 (3.35/day)
And disabling 32-36 CUs plus other parts of the GPU isn't? We are talking about millions of these SoCs used for the weaker brother.

The wafers they get from AMD will have defective CUs; AMD doesn't care about that, as MS will pay for them anyway. So they might as well use them in the cheaper console. By the way, I don't actually think it will be a 4TF GPU; that seems way too weak. 6TF seems more reasonable.
 
Joined
Jan 24, 2011
Messages
168 (0.03/day)
The wafers they get from AMD will have defective CUs; AMD doesn't care about that, as MS will pay for them anyway. So they might as well use them in the cheaper console. By the way, I don't actually think it will be a 4TF GPU; that seems way too weak. 6TF seems more reasonable.
Wafers from AMD? Are you sure? In my opinion MS just pays a fixed licence fee per produced SoC to AMD, but the production is commissioned by MS to Samsung or TSMC.
 
Joined
Jan 8, 2017
Messages
9,247 (3.35/day)
Wafers from AMD? Are you sure? In my opinion MS just pays a fixed licence fee for produced SoCs to AMD, but the production is commissioned by MS to Samsung or TSMC.

It never works that way; you pay for the entire wafer. These SoCs will be produced in the millions, with agreed-upon yields/bins and whatnot. AMD designs the chip you want, and you do whatever you like with the wafers the chips are printed on.
 
Joined
Jan 24, 2011
Messages
168 (0.03/day)
My point was that AMD just designed it, so MS can hardly pay AMD for wafers. MS pays TSMC or Samsung for the wafers, because they make them, and also pays AMD the agreed licence fee for each SoC.
If you have two different SoCs, then you pay two different licence fees to AMD, and even if the fee were the same, the production cost for a smaller SoC is still lower; with millions of SoCs produced, that's a lot of saved money.
 
Joined
Jan 8, 2017
Messages
9,247 (3.35/day)
MS -> AMD -> TSMC/Samsung or whoever else. That's how it works: you can't really design a chip independently of the manufacturing process, so AMD has to be the one that deals with TSMC, not MS. It would make absolutely no sense to get a design from AMD and then try to get TSMC to make your chips when AMD already has the know-how on 7nm. That's the whole idea of AMD's semi-custom business: AMD handles all of the logistics, and you just say what you want them to make.

MS gets the chips from AMD, not TSMC; I don't know why you insist on this. TSMC just makes the wafers with whatever is printed on them; you could draw stick figures with transistors on them for all they care.

then you will pay two different licence fees to AMD and even If It was the same the production cost for a smaller SoC is still lower and with millions of produced SoCs It's a lot of saved money.

I don't think you quite get this: no matter the chip, you pay the same amount of money for one wafer; the cost per die goes up and down because of yields. That's why it's probably too expensive to design another chip when you can just use the defective dies that you would have gotten anyway. This of course only works for relatively small chips; if you had something like a 700mm^2 monstrosity, then yes, it would probably be cheaper to just make another chip.
 
Joined
Jan 24, 2011
Messages
168 (0.03/day)
Naturally, if AMD designed it, then AMD needs to work with TSMC, but I thought the deal was between MS and AMD (design) and between MS and TSMC (production), not that AMD commissions production from TSMC on MS's behalf.
For MS it should be better to have two SoCs rather than just one.

I don't think you quite get this: no matter the chip, you pay the same amount of money for one wafer; the cost per die goes up and down because of yields. That's why it's probably too expensive to design another chip when you can just use the defective dies that you would have gotten anyway. This of course only works for relatively small chips; if you had something like a 700mm^2 monstrosity, then yes, it would probably be cheaper to just make another chip.
If the Project Scarlett SoC is 360mm^2 but the Lockhart SoC is <=300mm^2, then you can make more Lockhart SoCs from one wafer even with the same defect rate. With a smaller SoC you don't need to order as many wafers as you would with a bigger one, so MS won't need to pay AMD for as many wafers.
With millions of SoCs needed for the cheaper console, you won't have enough faulty SoCs to cover the demand and would need to partially disable fully functional Scarlett SoCs. It's not out of the question that using the Scarlett SoC would be cheaper than a smaller SoC, but only if the weaker console sells fewer units than the stronger one.
 
Joined
May 27, 2019
Messages
147 (0.08/day)
Location
Greece
System Name Odyssey
Processor AMD Ryzen 7 3700x
Motherboard MSI MEG X570 UNIFY
Cooling EKWB EK-MLC Phoenix 240
Memory Crucial Ballistix Sport AT 3200MHz 32GB
Video Card(s) Sapphire Pulse RX 5700XT 8 GB
Storage ADATA XPG SX8200 Pro 1TBx2
Display(s) LG 32GK850F-B
Case Phanteks Enthoo Pro M Tempered Glass
Power Supply SeaSonic PRIME 650W Gold
I can't see them commissioning another custom die from AMD, these custom designs are expensive as hell.
The reality is that we don't know if they have two SoC designs or will just use the defective dies for the XSS. It's not like they lack the money for two custom SoC designs; either way they can easily absorb the extra cost.
 
Joined
Jan 24, 2011
Messages
168 (0.03/day)
Out of curiosity I made a comparison.
I found a wafer yield calculator. Here
I found that defect density is 0.09 per cm^2 for 7nm.
Let's say 300mm 7nm wafer costs $10,000.

Scarlett SoC - 360mm^2 -> 112 good dies and 41 faulty ones.
Cost per good SoC: $10,000/112 = $89.3

"Lockhart" SoC - 285mm^2 -> 152 Good ones and 44 faulty ones.
Cost per good SoC is 10,000/152 = $65.8

If you wanted to make 10,000,000 Scarlett SoCs for the Lockhart console, you would need 89,286 wafers, which would cost $892,860,000.
If you wanted to make 10,000,000 Lockhart SoCs for the Lockhart console, you would need 65,790 wafers, which would cost $657,900,000.
You would save $234,960,000 on wafer cost. This excludes the faulty Scarlett SoCs, some of which can certainly be reused.

If, let's say, 30 of those can be reused, then 142 dies per wafer are usable in total, which works out to $10,000/142 = $70.4 per SoC.
To make 10,000,000 Scarlett SoCs for the Lockhart console you would then need 70,423 wafers, which would cost $704,230,000, only $46,330,000 more.
Of course this difference can shrink or grow depending on how many Lockhart consoles are made.
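The cost arithmetic above can be reproduced in a few lines of Python; the $10,000 wafer price and the good-die counts (112, 152, and 142 once partials are reused) are the assumed figures from the calculator above, not official numbers:

```python
import math

WAFER_COST = 10_000   # assumed cost of one 300mm 7nm wafer, per the estimate above
UNITS = 10_000_000    # hypothetical production run for the cheaper console

def wafer_bill(good_dies_per_wafer: int) -> tuple[int, int]:
    """Return (wafers needed, total wafer cost in $) for the production run."""
    wafers = math.ceil(UNITS / good_dies_per_wafer)
    return wafers, wafers * WAFER_COST

print(wafer_bill(112))  # Scarlett die (360mm^2): (89286, 892860000)
print(wafer_bill(152))  # smaller "Lockhart" die (285mm^2): (65790, 657900000)
print(wafer_bill(142))  # Scarlett incl. 30 reusable partials: (70423, 704230000)
```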
 
Joined
May 2, 2017
Messages
7,762 (2.93/day)
Remember that the Xbox Series X SoC has more CUs in silicon than MS enables (56 in silicon, 52 enabled), which, given the share of die area dedicated to the GPU, means a huge proportion of faulty dies will be perfectly usable for the full XSX by disabling whichever CU the defect lands in. The chance of a die having more than four defective CUs yet no faults that make it useless otherwise (memory controllers, CPU cores, etc.) sounds mind-bogglingly small.

I'm inclined to agree. I thought consoles were all about offering a simple, hassle-free device for your average Joe to play games on. Having multiple models at different performance levels muddies the waters a bit. Sure, there were like 9 versions of the PS2, but they were all relatively minor changes with little impact on the end user. Some competitive people I know with original PS4s are put off from playing competitively because their PS4 isn't as good as some of the later ones, so they're already at a flat disadvantage. It looks like it's only going to get worse.
IMO, "pay X for full performance and all the features, or pay 3/5X for a lower performance version that still does all the same things and plays all the same games" is still a relatively simple proposition for end users, even if it adds a single layer of complexity (that is frankly already there with the X1X/X1S and PS4s/PS4p). Keeping the CPU the same across the board also means that (outside of draw calls increasing CPU load for the higher end SKU) the only relevant difference will be in graphical fidelity. Same responsiveness, same framerates (given that developers adjust for the less powerful GPU), same games, same disks, same UI, though likely smaller installations/downloads on the lower end SKU, and likely no RTRT.
The wafers they get from AMD will have defective CUs; AMD doesn't care about that, as MS will pay for them anyway. So they might as well use them in the cheaper console. By the way, I don't actually think it will be a 4TF GPU; that seems way too weak. 6TF seems more reasonable.
See above. The chance of one of these 360mm^2 dies being unusable for the XSX is very, very small. Which leads me to think any potential Lockhart would be a different die - after all, to reach 4 TFlops at the same clock as the XSX (1825MHz) they would just need 18 CUs. If a smaller die allowed them to clock higher they could get away with 16 or even 15. Make that into a 20 CU die for some redundancy, reduce the number of memory controllers and interfaces to match the 10GB of RAM, and you've got a relatively small die. Not Renoir-sized (150mm^2), but likely ~200mm^2. That would allow for dramatic increases in chip yields - a 13x16mm die (208mm^2) run through the calculator @THANATOS used above results in 244 good dies and 46 partials, nearly 2x the die count of the full Scarlett SoC. Also, as discussed above, 4TF of RDNA compute won't lag much behind 6TF of GCN compute in gaming performance - and unlike the X1X this would be targeting resolutions lower than 4K, meaning 4TF ought to be perfectly fine as long as other parts of the SoC don't choke. Speaking of which:

I know of no game that uses 1GB at 1080p and over 4GB at 4K; the difference in memory usage is much less stark than people believe. Everyone makes the trivial error of assuming that 4 times the pixels means 4 times the overall VRAM usage. It doesn't work like that: a lot of the data stored in VRAM doesn't need to scale up and remains the same size. All of these are modern games:

[Attached screenshots: measured VRAM usage at 1080p vs 4K in four modern games]

See? Are you all still not convinced that games don't actually use that much more memory at 4K versus 1080p? I know what's probably coming next: "yeah, but all those games just fill up all the available memory even if they won't use it all". No, it's not that; the difference just isn't that big.

That being said, I stand by my conclusion that 6GB is a huge chunk of memory, and losing it will act as a big constraint, forcing developers to simplify game logic and assets. And I wonder how much cheaper this console would really be: the SoC is the same, and MS will pay the same for every wafer they get from AMD irrespective of where the chips end up.
a) Those tests are all run on Nvidia GPUs with Nvidia drivers, which are known to (on average, though it depends on the title) have more VRAM labeled "in use" while gaming than equivalent AMD GPUs. Whether they actually use or need more VRAM or if this is just a difference in how the drivers work is another issue entirely, but the point nonetheless stands.
b) A lot of that data in VRAM is loaded "just in case", with a relatively small portion ever actually being used. Given that the new consoles are built around NVMe SSDs and superior-to-Windows storage architectures, it's not that much of a stretch to assume that they can do this less, relying on the SSD to stream in necessary data on the fly (to a certain extent, of course).
c) Those tests are run on Ultra settings. No console runs PC Ultra settings, as Ultra inevitably activates all the stupidly expensive stuff that you barely even notice. Most games have significant drops from Ultra to High.
d) It's entirely within reason to expect a lower-end console to use a lower tier of graphics settings too - likely something akin to PC "medium" settings, further lowering VRAM usage.
 
Joined
Jan 8, 2017
Messages
9,247 (3.35/day)
a) Those tests are all run on Nvidia GPUs with Nvidia drivers, which are known to (on average, though it depends on the title) have more VRAM labeled "in use" while gaming than equivalent AMD GPUs. Whether they actually use or need more VRAM or if this is just a difference in how the drivers work is another issue entirely, but the point nonetheless stands.
b) A lot of that data in VRAM is loaded "just in case", with a relatively small portion ever actually being used. Given that the new consoles are built around NVMe SSDs and superior-to-Windows storage architectures, it's not that much of a stretch to assume that they can do this less, relying on the SSD to stream in necessary data on the fly (to a certain extent, of course).
c) Those tests are run on Ultra settings. No console runs PC Ultra settings, as Ultra inevitably activates all the stupidly expensive stuff that you barely even notice. Most games have significant drops from Ultra to High.
d) It's entirely within reason to expect a lower-end console to use a lower tier of graphics settings too - likely something akin to PC "medium" settings, further lowering VRAM usage.

Why not just show me instances of games using a lot more memory at higher resolutions? None of your explanations has a real foundation.

a) It makes no sense at all why an AMD card would use more VRAM. Does a character model gain extra vertices on an AMD GPU? Please explain what exactly the driver is doing.

b) Of course not all stored data is used in rendering every single frame; that holds true whether you run a game at the lowest possible settings or at 4K with everything turned all the way up. VRAM usage is VRAM usage; it makes no sense to argue that some of the data is not used. No program ever uses every single byte at once, come on.

c) Why does it matter whether it's Ultra or not? You do realize that if the settings were turned down, the difference between lower and higher resolutions would be even smaller, right?

d) Of course it is, but that still can't justify the claim that all that cut-down memory only has to do with graphical settings. I'll remind you that you can currently fit the entire frame buffers of modern games in something like 6GB. How much more memory do you think higher-resolution shadow maps use, for example?
 
Joined
May 2, 2017
Messages
7,762 (2.93/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Why not just show me instances of games using a lot more memory at higher resolution? None of your explanations have a real foundation.

a) It makes no sense at all why an AMD card would use more VRAM. Does a character model gain extra vertices on an AMD GPU? Please explain what exactly the driver is doing.

b) Of course not all stored data is used in rendering every single frame; that holds true whether you run a game at the lowest possible settings or at 4K with everything turned all the way up. VRAM usage is VRAM usage; it makes no sense to argue that some of the data is not used. No program ever uses every single byte simultaneously, come on.

c) Why does it matter if it's ultra or not? You do realize that if the settings were turned down, the difference between lower and higher resolutions would be even smaller, right?

d) Of course it is, but that still can't justify how all that cut-down memory somehow relates only to graphical settings. I'll remind you that you can currently fit the entire frame buffers of modern games in something like 6GB. How much more memory do you think higher-resolution shadow maps use, for example?
So what you are worried about is the non-graphics portion of games needing that memory? Such as what, precisely? Most PC games run perfectly fine on 8GB of RAM, which under Windows 10 means at best 6GB free, and more likely around 4. And again, those games aren't built around a storage architecture delivering near-instant loads (no, it doesn't matter if you have an NVMe SSD in your PC; the game is programmed to pre-load anything it might need at the rate necessary when running from a 2.5" HDD, so all your SSD is doing is speeding up the pre-load). That means a lot of "maybe we'll need this too" data can be kept on the SSD, with the game knowing exactly how quickly it can load it, reducing actual RAM usage.
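The pre-load argument comes down to guaranteed bandwidth: how much data a game can promise itself it will fetch within its reaction window, and therefore leave on disk rather than resident in RAM. A rough sketch, with ballpark bandwidth figures and an assumed two-second window:

```python
# If a game knows the minimum rate it can pull data from storage, speculative
# assets can stay on disk instead of resident in memory. Bandwidth numbers
# are ballpark assumptions: ~100 MiB/s for a 2.5" HDD, ~2400 MiB/s for a
# console-class NVMe drive.
def streamable_mib(bandwidth_mib_s, window_s):
    """Data that can be brought in within a given reaction window."""
    return bandwidth_mib_s * window_s

window = 2.0  # seconds of warning before an asset is needed (assumed)
for name, bw in [('2.5" HDD', 100), ("SATA SSD", 500), ("NVMe (console)", 2400)]:
    print(f"{name}: {streamable_mib(bw, window):.0f} MiB streamable in {window}s")
```

On a guaranteed NVMe baseline the streamable pool per window is more than an order of magnitude larger than on an HDD, which is exactly the headroom that lets "just in case" data live on the SSD instead of in RAM.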

As for your responses:
a) As I said, it's quite well documented that VRAM usage in the same game at the same settings differs between Nvidia and AMD. But you misread me: AMD typically has less memory flagged as in use than Nvidia. I would guess this comes down to various driver-related processes, such as the previously discussed pre-loading of data, compression/decompression, etc. I definitely didn't say this means either vendor's cards use more or less VRAM than the other while gaming; in fact I explicitly underscored that this is likely not what is happening.

b) So... if not all data in RAM is needed, and the system and the game both know the only storage present is a fast NVMe drive, perhaps some of that unnecessary data can be kept out of RAM and on the SSD?

c) What? No. Resolution is one of dozens if not hundreds of variables in graphics settings. Reducing rendering quality does not equate to lowering the resolution: 1080p with amazing shadows and lighting is still less sharp than 4K with poor shadows and lighting. Is sharpness or rendering quality the bigger determinant of visual quality? Both, in various situations and ways. But here we are talking about a lower-end SKU that will likely run at BOTH a lower resolution AND lower settings. Makes sense, no? And, again, this would lower VRAM needs for the lower-end SKU.

As for d), you really ought to have pointed out earlier that you were talking about RAM usage outside of graphics, as graphics memory is what the entire debate has been about until now. Being vague and overly general doesn't get your point across. Beyond that, I've covered this above.
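The settings-vs-resolution point in c) can be made concrete with shadow maps, since they are a quality knob completely independent of output resolution. The cascade count and per-tier map sizes below are an assumed, typical-looking configuration for illustration only:

```python
# A 4-cascade directional shadow setup with 32-bit depth texels (assumed
# configuration). Shadow map size scales with the quality setting, not the
# output resolution.
def shadow_atlas_mib(map_size, cascades=4, bytes_per_texel=4):
    return cascades * map_size * map_size * bytes_per_texel / 2**20

for quality, size in [("medium", 1024), ("high", 2048), ("ultra", 4096)]:
    print(f"{quality} ({size}^2 x4): {shadow_atlas_mib(size):.0f} MiB")
```

Stepping shadow quality from ultra to medium cuts this one buffer by a factor of 16 under these assumptions, all without touching the render resolution, which is the sense in which a lower settings tier trims VRAM on its own.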
 
Joined
Jun 16, 2016
Messages
409 (0.14/day)
System Name Baxter
Processor Intel i7-5775C @ 4.2 GHz 1.35 V
Motherboard ASRock Z97-E ITX/AC
Cooling Scythe Big Shuriken 3 with Noctua NF-A12 fan
Memory 16 GB 2400 MHz CL11 HyperX Savage DDR3
Video Card(s) EVGA RTX 2070 Super Black @ 1950 MHz
Storage 1 TB Sabrent Rocket 2242 NVMe SSD (boot), 500 GB Samsung 850 EVO, and 4TB Toshiba X300 7200 RPM HDD
Display(s) Vizio P65-F1 4KTV (4k60 with HDR or 1080p120)
Case Raijintek Ophion
Audio Device(s) HDMI PCM 5.1, Vizio 5.1 surround sound
Power Supply Corsair SF600 Platinum 600 W SFX PSU
Mouse Logitech MX Master 2S
Keyboard Logitech G613 and Microsoft Media Keyboard
I think this is a lot smarter than Sony's attempt at a lower price point by removing the disc drive. Microsoft will be able to squeeze Sony by having the "cheapest next-gen console", even though it will be limited to 1080p. Also, by baking a lower performance point into the equation, we could see something like a portable Xbox down the line. That's wishful thinking, of course, but at 5 nm or 3 nm maybe the Xbox APU for the Series S could be shoved into a handheld.

Thinking about cost savings, the lower price point should allow for a much cheaper PSU, a smaller die for the processor (or maybe harvested Series X chips with too many defects), lower RAM costs, and potentially a missing disc drive too. I hope that adds up to a $300 launch, as anything more would likely not be worth releasing separately.
 
Joined
May 27, 2019
Messages
147 (0.08/day)
Location
Greece
System Name Odyssey
Processor AMD Ryzen 7 3700x
Motherboard MSI MEG X570 UNIFY
Cooling EKWB EK-MLC Phoenix 240
Memory Crucial Ballistix Sport AT 3200MHz 32GB
Video Card(s) Sapphire Pulse RX 5700XT 8 GB
Storage ADATA XPG SX8200 Pro 1TBx2
Display(s) LG 32GK850F-B
Case Phanteks Enthoo Pro M Tempered Glass
Power Supply SeaSonic PRIME 650W Gold
The Switch Lite showed that there's a market for cheaper versions of consoles. Not everyone has a 4K TV, and someone who can't afford one would likely prefer a cheaper console.
Microsoft has definitely done some market research before deciding this.

Thinking about cost savings, the lower price point should allow for a much cheaper PSU, smaller die size for the processor or maybe harvested Series X chips with too many defects, lower RAM costs, and potentially a missing disc drive too. I hope that adds up to a $300 launch as anything more would likely not be worth releasing separately.
The CPU will be the same.
 
Joined
May 2, 2017
Messages
7,762 (2.93/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
The Switch Lite showed that there's a market for cheaper versions of consoles. Not everyone has a 4K TV, and someone who can't afford one would likely prefer a cheaper console.
Microsoft has definitely done some market research before deciding this.


The CPU will be the same.
The CPU being the same does not mean the SoC is the same, and it's highly unlikely that there will be enough defective XSX dies to support this being based on cut-down chips. If this is real, it's likely a second die design with the same CPU but fewer GPU CUs and RAM channels.
 