Processor | Ryzen 7 5700X |
---|---|
Memory | 48 GB |
Video Card(s) | RTX 4080 |
Storage | 2x HDD RAID 1, 3x M.2 NVMe |
Display(s) | 30" 2560x1600 + 19" 1280x1024 |
Software | Windows 10 64-bit |
It only needs U.

System Name | Main PC |
---|---|
Processor | AMD Ryzen 9 5950X |
Motherboard | ASUS X570 Crosshair VIII Hero (Wi-Fi) |
Cooling | EKWB X570 VIII Hero Monoblock, 2x XD5, Heatkiller IV SB block for chipset, Alphacool 3090 Strix block |
Memory | 4x16GB 3200-14-14-14-34 G.Skill Trident RGB (OC: 3600-14-14-14-28) |
Video Card(s) | ASUS RTX 3090 Strix OC |
Storage | 500GB+500GB SSD RAID0, Fusion IoDrive2 1.2TB, Huawei HSSD 2TB, 11TB on server used for steam |
Display(s) | Dell LG CX48 (custom res: 3840x1620@120Hz) + Acer XB271HU 2560x1440@144Hz |
Case | Corsair 1000D |
Audio Device(s) | Sennheiser HD599, Blue Yeti |
Power Supply | Corsair RM1000i |
Mouse | Logitech G502 Lightspeed |
Keyboard | Corsair Strafe RGB MK2 |
Software | Windows 10 Pro 20H2 |
As @Valantar mentions, the memory is a shared pool between the GPU and CPU. Going from 16GB to something like 8GB wouldn't be as bad, since the GPU is a third of the performance anyway and will therefore use much less memory, opening up more for the CPU. Also, going from 4K to 1080p will drastically reduce texture sizes and therefore memory usage. I imagine both console versions still have the same "budget" of memory that the CPU can use.

Even if so, it will have less memory.
System Name | Good enough |
---|---|
Processor | AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge |
Motherboard | ASRock B650 Pro RS |
Cooling | 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30 |
Memory | 32GB - FURY Beast RGB 5600 MHz |
Video Card(s) | Sapphire RX 7900 XT - Alphacool Eisblock Aurora |
Storage | 1x Kingston KC3000 1TB, 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB, 1x Samsung 860 EVO 500GB |
Display(s) | LG UltraGear 32GN650-B + 4K Samsung TV |
Case | Phanteks NV7 |
Power Supply | GPS-750C |
But these consoles allocate memory dynamically between the CPU and GPU, and the high-end consoles will use a lot of it as VRAM due to their high resolutions. Running at 1080p or 1440p with lower texture quality will easily make up for this, and won't affect development for the higher-end consoles whatsoever.
I'm sure you, and not Microsoft, have managed to spot a critical flaw that means everything will be bad. It's not like Microsoft does any kind of R&D, testing or thinking about how their products work. Not one person at that trillion dollar company has even thought about that until you did.
Also going from 4K to 1080p will drastically reduce texture sizes and therefore memory usage.
$100 was a bit of a conservative estimate from me, so I will change it to $150, but I don't think it will be $200.

There's no way the price difference will be only $100 with a GPU ⅓ the performance and less memory. If this is true, it is clearly a value play, likely aiming for $300 or possibly even less.
System Name | Hotbox |
---|---|
Processor | AMD Ryzen 7 5800X, 110/95/110, PBO +150MHz, CO -7,-7,-20(x6) |
Motherboard | ASRock Phantom Gaming B550 ITX/ax |
Cooling | LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14 |
Memory | 32GB G.Skill FlareX 3200c14 @3800c15 |
Video Card(s) | PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W |
Storage | 2TB Adata SX8200 Pro |
Display(s) | Dell U2711 main, AOC 24P2C secondary |
Case | SSUPD Meshlicious |
Audio Device(s) | Optoma Nuforce μDAC 3 |
Power Supply | Corsair SF750 Platinum |
Mouse | Logitech G603 |
Keyboard | Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps |
Software | Windows 10 Pro |
Put it this way: it's entirely possible the XSX is aiming for (an equivalent of) 4K "High" or even borderline "Ultra" settings in most games. The amount of extra work needed to implement a "Medium" level that focuses on running smoothly at 1080p60 or 1440p30 while staying within a 7.5GB memory footprint (2.5GB is reserved for the OS) rather than the 13.5GB of the XSX? Not that much, frankly. Especially if the game is cross-platform, as most of the work would then be done for the PC port no matter what. It's also worth pointing out that consoles with a unified memory architecture need (often significantly) less total memory than an equivalent PC due to the lack of duplicated assets between RAM and VRAM.

6 GB is a lot even when you take lower textures and resolutions into consideration. As a developer you will be forced to make sure the game fits in those 10GB; there is no getting around it. You can fill the rest with trivial stuff like textures and whatnot on the higher-end console, but it's a waste of potential.
Don't get cheeky; it's not a critical flaw, it's an extra constraint developers will have to deal with. And yes, these trillion-dollar companies have designed countless pieces of hardware with terrible constraints: look at Cell and its separate GPU/system RAM, or the ESRAM on the Xbox One.
The render target resolution is independent from texture resolution.
They would struggle to sell this at those kinds of prices, given that it is, after all, an explicitly low-end console. "Get ⅓ the performance for ⅔ the price" is a dirt-poor sales pitch. No optical drive, less RAM, a smaller and cheaper motherboard, less expensive VRM components, a smaller PSU, and either a smaller, cheaper die or harvested chips that would otherwise go in the trash... $150 in savings for all this sounds like a relatively bad deal.

$100 was a bit of a conservative estimate from me, so I will change it to $150, but I don't think it will be $200.
If there were some option to play PC games on Xbox, it would be a killer feature, but then most of the PC gaming hardware market would crumble, so it won't happen.
The render target resolution is indeed independent of texture resolution, but overall memory usage is still lower at lower resolutions. Most PC games today will struggle with 4GB of VRAM at 4K, yet at the same settings at 1080p you'd see ~1GB of VRAM usage. If the Series S ships with 12GB like the One X, and we assume VRAM usage of 2-3GB at most at 1080p (the SSD will help get that number much lower, as lower-priority assets can be stored on the SSD) and 4-5GB at 1440p, that leaves at least 7GB for the rest of the system.
I forgot it probably won't have an optical drive, so maybe $200 less in total.

They would struggle to sell this at those kinds of prices, given that it is, after all, an explicitly low-end console. "Get ⅓ the performance for ⅔ the price" is a dirt-poor sales pitch. No optical drive, less RAM, a smaller and cheaper motherboard, less expensive VRM components, a smaller PSU, and either a smaller, cheaper die or harvested chips that would otherwise go in the trash... $150 in savings for all this sounds like a relatively bad deal.
I don't think the SoC is the same.

That being said, I stand by my conclusion that 6GB is a huge chunk of memory which will act as a big constraint, forcing developers to simplify game logic and assets. And I wonder how much cheaper this console would really be; the SoC is the same, and MS will pay the same for every wafer they get from AMD irrespective of where it is used.
I don't think the SoC is the same.
For 4 TFLOPS you need a GPU with only 20 CUs at 1560 MHz, or 16 CUs at 1950 MHz, and we know the SoC has 56 CUs in total. They won't disable that many CUs just to put it in a much cheaper console.
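The CU figures above can be sanity-checked with the standard peak-FP32 formula: each RDNA CU has 64 shader ALUs doing 2 FLOPs per clock (fused multiply-add), so TFLOPS = CUs × 64 × 2 × clock. A quick sketch using the numbers from the post (the formula is generic GPU arithmetic, not anything confirmed about Lockhart):

```python
def tflops(cus: int, clock_mhz: float) -> float:
    """Peak FP32 throughput: CUs x 64 ALUs x 2 FLOPs/clock (FMA)."""
    return cus * 64 * 2 * clock_mhz * 1e6 / 1e12

# Configurations mentioned in the post:
print(tflops(20, 1560))  # ~4.0 TFLOPS
print(tflops(16, 1950))  # ~4.0 TFLOPS
print(tflops(56, 1825))  # full 56-CU chip at the XSX clock: ~13.1 TFLOPS
```

Both 20 CUs at 1560 MHz and 16 CUs at 1950 MHz land almost exactly on 4 TFLOPS, which is presumably where those two configurations come from.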
System Name | Starlifter :: Dragonfly |
---|---|
Processor | i7 2600k 4.4GHz :: i5 10400 |
Motherboard | ASUS P8P67 Pro :: ASUS Prime H570-Plus |
Cooling | Cryorig M9 :: Stock |
Memory | 4x4GB DDR3 2133 :: 2x8GB DDR4 2400 |
Video Card(s) | PNY GTX1070 :: Integrated UHD 630 |
Storage | Crucial MX500 1TB, 2x1TB Seagate RAID 0 :: Mushkin Enhanced 60GB SSD, 3x4TB Seagate HDD RAID5 |
Display(s) | Onn 165hz 1080p :: Acer 1080p |
Case | Antec SOHO 1030B :: Old White Full Tower |
Audio Device(s) | Creative X-Fi Titanium Fatal1ty Pro - Bose Companion 2 Series III :: None |
Power Supply | FSP Hydro GE 550w :: EVGA Supernova 550 |
Software | Windows 10 Pro - Plex Server on Dragonfly |
Benchmark Scores | >9000 |
I'm inclined to agree. I thought consoles were all about offering a simple, hassle-free device for your average Joe to play games on. Having multiple models at different performance levels muddies the waters a bit. Sure, there were something like 9 versions of the PS2, but they were all relatively minor changes with little impact on the end user. Some competitive people I know with original PS4s are put off from playing competitively because their PS4 isn't as good as some of the later ones, so they're already at a flat disadvantage. It looks like it's only going to get worse.

This is a horrible idea and it will cripple game development across all platforms. Developers will be forced to adhere to the CPU/RAM limits of the weaker console, meaning games will only be as complex as the weaker console allows. The GPU situation can be dealt with easily, but having a lower-clocked CPU, potentially with fewer cores, and less RAM will ruin progress this generation.
And disabling 32-36 CUs plus other parts of the GPU isn't? We are talking about millions of these SoCs being used for the weaker sibling.
Wafers from AMD? Are you sure? In my opinion MS just pays AMD a fixed licence fee per produced SoC, while the production is commissioned by MS to Samsung or TSMC.

The wafers they get from AMD will have defective CUs, which AMD doesn't care about; MS will pay for those anyway. So they might as well use them in the cheaper console. And by the way, I don't actually think it will be a 4TF GPU, that seems way too weak. 6TF seems more reasonable.
Wafers from AMD? Are you sure? In my opinion MS just pays a fixed licence fee for produced SoCs to AMD, but the production is commissioned by MS to Samsung or TSMC.
Then you will pay two different licence fees to AMD, and even if the fee were the same, the production cost for a smaller SoC is still lower; with millions of SoCs produced, that is a lot of saved money.
If the Project Scarlett SoC is 360mm² but the Lockhart SoC is <=300mm², then you can make more Lockhart SoCs from one wafer even with the same yield (failure) rate. With a smaller SoC you don't need to order as many wafers as you would with a bigger one, so MS won't need to pay AMD for as many wafers.

I don't think you quite get this: no matter the chip, you pay the same amount of money for one wafer; the cost per die goes up and down because of yields. That's why it's probably too expensive to design another chip when you can just take the defective dies that you would have gotten anyway. This of course works for relatively small chips; if you had something like a 700mm² monstrosity, then yes, it would probably be cheaper to just make another chip.
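The wafer economics both posters are arguing about can be sketched with the standard dies-per-wafer approximation plus a simple Poisson yield model. Every number below (wafer price, defect density) is an illustrative assumption, not a known figure for 7nm or for this deal:

```python
import math

WAFER_DIAMETER_MM = 300.0
WAFER_COST = 9000.0    # assumed cost per wafer, purely illustrative
DEFECT_DENSITY = 0.1   # assumed defects per cm^2, purely illustrative

def dies_per_wafer(die_area_mm2: float) -> int:
    """Standard approximation: wafer area / die area, minus edge loss."""
    d = WAFER_DIAMETER_MM
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

def yield_rate(die_area_mm2: float) -> float:
    """Poisson model: fraction of dies that catch zero defects."""
    return math.exp(-DEFECT_DENSITY * die_area_mm2 / 100)

def cost_per_good_die(die_area_mm2: float) -> float:
    """Fixed wafer price spread over the fully working dies only."""
    return WAFER_COST / (dies_per_wafer(die_area_mm2) * yield_rate(die_area_mm2))

# Scarlett-sized die vs two hypothetical smaller Lockhart dies:
for area in (360, 300, 200):
    print(area, dies_per_wafer(area), round(yield_rate(area), 2),
          round(cost_per_good_die(area)))
```

Note that this illustrates both sides of the argument: the wafer price is fixed regardless of what is printed on it, but a smaller die yields more candidates per wafer and a higher zero-defect fraction, so the cost per good die drops; whether that beats harvesting defective large dies depends on the design cost of the second chip.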
System Name | Odyssey |
---|---|
Processor | AMD Ryzen 7 3700x |
Motherboard | MSI MEG X570 UNIFY |
Cooling | EKWB EK-MLC Phoenix 240 |
Memory | Crucial Ballistix Sport AT 3200MHz 32GB |
Video Card(s) | Sapphire Pulse RX 5700XT 8 GB |
Storage | ADATA XPG SX8200 Pro 1TBx2 |
Display(s) | LG 32GK850F-B |
Case | Phanteks Enthoo Pro M Tempered Glass |
Power Supply | SeaSonic PRIME 650W Gold |
The reality is we don't know whether they have two SoC designs or will just use the defective dies for the XSS. It's not like they lack the money for two custom SoC designs; either way they can easily absorb the extra cost.

I can't see them commissioning another custom die from AMD; these custom designs are expensive as hell.
IMO, "pay X for full performance and all the features, or pay 3/5 X for a lower-performance version that still does all the same things and plays all the same games" is still a relatively simple proposition for end users, even if it adds a single layer of complexity (one that is frankly already there with the X1X/X1S and PS4s/PS4p). Keeping the CPU the same across the board also means that (outside of draw calls increasing CPU load on the higher-end SKU) the only relevant difference will be in graphical fidelity. Same responsiveness, same framerates (given that developers adjust for the less powerful GPU), same games, same discs, same UI, though likely smaller installations/downloads on the lower-end SKU, and likely no RTRT.

I'm inclined to agree. I thought consoles were all about offering a simple, hassle-free device for your average Joe to play games on. Having multiple models at different performance levels muddies the waters a bit. Sure, there were something like 9 versions of the PS2, but they were all relatively minor changes with little impact on the end user. Some competitive people I know with original PS4s are put off from playing competitively because their PS4 isn't as good as some of the later ones, so they're already at a flat disadvantage. It looks like it's only going to get worse.
See above. The chance of one of these 360mm² dice being unusable for the XSX is very, very small, which leads me to think any potential Lockhart would be a different die. After all, to reach 4 TFLOPS at the same clock as the XSX (1825MHz) they would only need 18 CUs; if a smaller die allowed them to clock higher, they could get away with 16 or even 15. Make that into a 20 CU die for some redundancy, reduce the number of memory controllers and interfaces to match the 10GB of RAM, and you've got a relatively small die. Not Renoir-sized (150mm²), but likely ~200mm². That would allow for dramatic increases in chip yields: a 13x16mm die (208mm²) run through the calculator @THANATOS used above results in 244 good dice and 46 partials, nearly 2x the die count of the full Scarlett SoC. Also, as discussed above, 4TF of RDNA compute won't lag much behind 6TF of GCN compute in gaming performance, and unlike the X1X this would be targeting resolutions lower than 4K, meaning 4TF ought to be perfectly fine as long as other parts of the SoC don't choke. Speaking of which:

The wafers they get from AMD will have defective CUs, which AMD doesn't care about; MS will pay for those anyway. So they might as well use them in the cheaper console. And by the way, I don't actually think it will be a 4TF GPU, that seems way too weak. 6TF seems more reasonable.
a) Those tests are all run on Nvidia GPUs with Nvidia drivers, which are known (on average, though it depends on the title) to have more VRAM labeled "in use" while gaming than equivalent AMD GPUs. Whether they actually use or need more VRAM, or whether this is just a difference in how the drivers work, is another issue entirely, but the point nonetheless stands.

I know of no game that uses 1GB at 1080p and over 4GB at 4K; the difference in memory usage is much less stark than people believe. Everyone makes the trivial error of assuming that 4 times the pixels means 4 times the overall VRAM usage. It doesn't work like that: a lot of the data stored in VRAM doesn't need to scale up and remains the same size. All of these are modern games:
[Attached screenshots: VRAM usage figures for four modern games]
See? Are you all still not convinced that games don't actually use that much more memory at 4K versus 1080p? I know what's probably coming next: "yeah, but all those games just fill up all the available memory even if they won't use it all". No, it's not that; the difference is just not that big.
That being said, I stand by my conclusion that 6GB is a huge chunk of memory which will act as a big constraint, forcing developers to simplify game logic and assets. And I wonder how much cheaper this console would really be; the SoC is the same, and MS will pay the same for every wafer they get from AMD irrespective of where it is used.
Those tests are all run on Nvidia GPUs with Nvidia drivers, which are known (on average, though it depends on the title) to have more VRAM labeled "in use" while gaming than equivalent AMD GPUs. Whether they actually use or need more VRAM, or whether this is just a difference in how the drivers work, is another issue entirely, but the point nonetheless stands.
b) A lot of that data in VRAM is loaded "just in case", with a relatively small portion ever actually being used. Given that the new consoles are built around NVMe SSDs and superior-to-Windows storage architectures, it's not that much of a stretch to assume that they can do this less, relying on the SSD to stream in necessary data on the fly (to a certain extent, of course).
c) Those tests are run on Ultra settings. No console runs PC Ultra settings, as Ultra inevitably activates all the stupidly expensive stuff that you barely even notice. Most games have significant drops from Ultra to High.
d) It's entirely within reason to expect a lower-end console to use a lower tier of graphics settings too - likely something akin to PC "medium" settings, further lowering VRAM usage.
So what you are worried about is the non-graphics portions of games needing that memory? Such as what, precisely? Most PC games run perfectly fine on 8GB of RAM, which under Windows 10 means at best 6GB free, though more likely 4. And again, those games aren't built around a storage architecture delivering near-instant loads (no, it doesn't matter if you have an NVMe SSD in your PC; the game is programmed to pre-load anything it might need at the rate necessary when running from a 2.5" HDD, and all your SSD is doing is speeding up the pre-load), meaning a lot of "maybe we'll need this too" data can be kept on the SSD, with the game knowing exactly how quickly it can load it, reducing actual RAM usage.

Why not just show me instances of games using a lot more memory at higher resolutions? None of your explanations have a real foundation.
a) It makes no sense at all why an AMD card would use more VRAM. Does a character model gain extra vertices on an AMD GPU? Please explain what exactly the driver is doing.
b) Of course not all stored data is used in rendering every single frame; that holds true whether you run a game at the lowest possible settings or at 4K with everything turned all the way up. VRAM usage is VRAM usage; it makes no sense to argue that some of the data is not used. No program ever uses every single byte simultaneously, come on.
c) Why does it matter whether it's Ultra or not? You do realize that if the settings were turned down, the difference between lower and higher resolutions would be even smaller, right?
d) Of course it is, but that still can't justify how all that cut-down memory somehow only has to do with graphical settings. I'll remind you that you can currently fit the entire frame buffers of modern games in something like 6GB. How much more memory do you think higher-resolution shadow maps use, for example?
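The resolution-dependent slice of VRAM can be estimated directly: render targets scale linearly with pixel count, while textures, geometry, and shadow maps (which have their own fixed resolution) do not. A rough sketch with an assumed deferred-rendering buffer layout (the target count and byte sizes are illustrative, not from any specific engine):

```python
def render_target_mb(width: int, height: int,
                     bytes_per_pixel: int = 4, num_targets: int = 6) -> float:
    """Memory for resolution-dependent buffers: e.g. four G-buffer targets
    plus depth plus final color, at 4 bytes/pixel each (illustrative)."""
    return width * height * bytes_per_pixel * num_targets / 2**20

print(render_target_mb(1920, 1080))  # ~47 MB
print(render_target_mb(3840, 2160))  # ~190 MB

# A 4096x4096 32-bit shadow map is independent of output resolution:
print(4096 * 4096 * 4 / 2**20)       # 64 MB at 1080p and at 4K alike
```

Under these assumptions, quadrupling the pixel count adds only ~140 MB of buffer memory, which is consistent with measured total VRAM rising far less than 4x between 1080p and 4K: most of the footprint is resolution-independent assets.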
System Name | Baxter |
---|---|
Processor | Intel i7-5775C @ 4.2 GHz 1.35 V |
Motherboard | ASRock Z97-E ITX/AC |
Cooling | Scythe Big Shuriken 3 with Noctua NF-A12 fan |
Memory | 16 GB 2400 MHz CL11 HyperX Savage DDR3 |
Video Card(s) | EVGA RTX 2070 Super Black @ 1950 MHz |
Storage | 1 TB Sabrent Rocket 2242 NVMe SSD (boot), 500 GB Samsung 850 EVO, and 4TB Toshiba X300 7200 RPM HDD |
Display(s) | Vizio P65-F1 4KTV (4k60 with HDR or 1080p120) |
Case | Raijintek Ophion |
Audio Device(s) | HDMI PCM 5.1, Vizio 5.1 surround sound |
Power Supply | Corsair SF600 Platinum 600 W SFX PSU |
Mouse | Logitech MX Master 2S |
Keyboard | Logitech G613 and Microsoft Media Keyboard |
CPU will be the same.

Thinking about cost savings: the lower price point should allow for a much cheaper PSU, a smaller processor die (or maybe harvested Series X chips with too many defects), lower RAM costs, and potentially a missing disc drive too. I hope that adds up to a $300 launch, as anything more would likely not be worth releasing separately.
The CPU being the same does not mean the SoC is the same, and it's highly unlikely that there will be enough defective XSX dice to support this being based on cut-down chips. If this is real, it's likely a second die design with the same CPU but fewer GPU CUs and RAM channels.

The Switch Lite showed that there's a market for cheaper versions of consoles. Not everyone has a 4K TV, and someone who can't afford one would likely prefer a cheaper console. Microsoft has definitely done some market research before deciding this.
CPU will be the same.