The beauty of Unreal Engine 5 is that you don't need to run it at native resolution for it to look good; it's designed that way. The built-in resolution scaler (not DLSS or FSR) works great and looks as good as native down to about 60% resolution at 4K. I haven't tried DLSS on my 3080 12GB, but FSR has issues with particles, so I decided to use the built-in scaler. I'm playing on a 6950 XT at 4K with everything except global illumination on ultra at 65% resolution, and I stay above 60fps, with the only drops in the home base area while in Umbral. I think UE5 needs to market its resolution scaling far more; it's quite good.
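For anyone curious what that 65% slider actually renders, here's some quick napkin math. Unreal applies screen percentage per axis, and the built-in scaler (presumably TSR in a UE5 title) reconstructs back up to the output resolution; the function name below is just for illustration:

```python
# What a given screen percentage renders internally at 4K.
# Unreal's screen percentage is applied per axis, so 65% means roughly
# 42% of the pixels, which the scaler then reconstructs to the output res.

def internal_resolution(out_w, out_h, screen_percentage):
    scale = screen_percentage / 100.0
    return round(out_w * scale), round(out_h * scale)

print(internal_resolution(3840, 2160, 65))  # (2496, 1404) -- roughly 1440p-class
print(internal_resolution(3840, 2160, 60))  # (2304, 1296)
```

So 65% at 4K is still a roughly 1440p-class internal image, which is why it holds up so well.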
It's kinda the point I've been trying to make for a good long while: we're headed into the era of FSR/DLSS 'Balanced' (~58-59% scale) for a lot of the more intensive/UE5 games. I haven't played this title personally, but I agree the setting generally does not look bad at all IMHO regardless of upscaling tech, and given how Starfield and UE5 Fortnite were received (I haven't seen anyone complaining about the resolution on XSX), I don't think most people have a problem with it either. It makes sense when you think about it: the internal resolutions are 2227×1253 (DLSS) and 2259×1270 (FSR), which sit essentially at the limit of the THX viewing-distance/visual-acuity recommendation for a 40-degree arc. Or, as I like to put it: when you sit close enough to a TV that it fills your whole vision without needing to move your head, that resolution should still be fine for the vast majority of people. I also think it's weird that reviewers, if they test FSR/DLSS at all, generally jump straight from 'Quality' (66%, which is still often overkill IMHO) to 'Performance' (50%), which can start to look a little questionable. It's almost like the modes match what they say on the tin, but people don't appear to care. Thanks for being a voice of reason.
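If anyone wants to sanity-check those numbers, here's where they come from (give or take a pixel of rounding), plus average pixels per degree across a 40-degree arc against the common ~60 px/deg rule of thumb for 20/20 acuity. It's very rough, and temporal upscalers reconstruct detail beyond the base resolution anyway:

```python
# Internal render resolutions for the common upscaler presets at a 4K output,
# plus average pixels per degree across THX's 40-degree viewing arc.
# Scale factors are the published DLSS / FSR 2 preset ratios.

OUT_W, OUT_H = 3840, 2160

presets = {
    "DLSS Quality":     2 / 3,      # ~0.667
    "DLSS Balanced":    0.58,
    "DLSS Performance": 0.50,
    "FSR Quality":      1 / 1.5,
    "FSR Balanced":     1 / 1.7,
    "FSR Performance":  1 / 2.0,
}

for name, scale in presets.items():
    w, h = round(OUT_W * scale), round(OUT_H * scale)
    ppd = w / 40  # ~60 px/deg is the usual 20/20-acuity rule of thumb
    print(f"{name:17s} {w}x{h}  (~{ppd:.0f} px/deg)")
```

Balanced landing at roughly 56 px/deg, just under that ~60 line, is basically the point: at that distance the base image is already bumping into what most eyes can resolve, before the upscaler adds anything back.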
I find it amusing that people are using this game as an example of why nobody needs more than 8GB of VRAM. First of all, the texture quality in this title isn't particularly good. Second, let's not act like every game successfully separates its VRAM and system RAM allocations when porting from the shared memory pool of a console. Ideally, yes, the ~14GB available to games on a console should translate to roughly an 8/6GB VRAM/system split on PC, but that just doesn't always happen; some PC ports currently require 10-12GB, occasionally 12+GB, of one or the other if not both. If you want to criticize those developers for their PC ports, that's fine, but it doesn't change the reality that it has happened and will keep happening, especially as current-gen consoles move to lower native resolutions (which will happen not only because UE5 is heavy and/or to create the perception of games doing more and looking better on the same hardware, but to capitalize on the potential of a PS5 Pro) and as PC settings get more intensive. The one thing that truly might save you is that the XSS has 10GB of RAM (really 8GB; the other 2GB sits on a 32-bit slice of the bus) and developers are forced to ship a port for it if they join the Xbox ecosystem. That doesn't mean that console won't eventually fall into 720p territory, though; it's already sub-900p in a lot of instances, even if you argue 1080p is the general aim.

You can fight it all you want, but the reality is that not only is 8GB questionable for 1080p, 12GB is questionable for 1440p once you match GPU performance to its buffer. There already are, and will be more, instances where a 4070 Ti falls below 1440p/60 simply because of the buffer; that's a card that could seriously use 13-14GB but doesn't get 16GB because of product segmentation and the art of the upsell to the 4080. We can agree to disagree, but let's revisit this topic after the release of the PS5 Pro, Blackwell, and 3nm Navi. Heck, maybe after Battlemage/Navi 4x, which will likely match their performance to 16GB exactly and aim for 1440p, relegating the 4080 to the same market eventually. Those cards will almost certainly be 16GB, and the generation after likely 16-32GB. When you realize the next consoles themselves will probably be 32GB, with the then-older consoles relegated to (1080p-)1440p at most, it all starts to make sense.

I'm not saying you're completely wrong at the moment, but with respect to the fairly immediate and especially the longer-term future (especially if you're spending $400+ on a video card), I would certainly opt for the larger buffer. People can choose to believe the 4070 Ti isn't planned obsolescence with its 12GB, or that the 4070 isn't just the performance of a card that only needs ~10-11GB, but those are the same people nVIDIA will target with FOMO, using settings unplayable on those cards, when Blackwell releases. That's just (how nVIDIA runs the) business of selling new video cards, which is incredibly savvy but also incredibly shitty for customers. They know people will forget about this conversation when the next generation releases and 10-12GB cards are old news and/or relegated to 1080p, just as has happened several times before with 3/6GB cards. To bring it full circle, I and likely other people will then recommend running those cards at 4K DLSS Balanced, or roughly 58% scale (as Colossus recommends), if not 50%/'Performance'/1080p.
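And to be clear, that 8/6 split is napkin math, not something engines or SDKs guarantee. Here's the arithmetic spelled out; the OS reservation and the VRAM-vs-system ratio are illustrative assumptions, and real ports routinely land outside them, which is the whole point:

```python
# Napkin math for mapping a console's unified memory budget onto a rough
# PC VRAM / system RAM split. The OS reservation and split ratio are
# illustrative assumptions, not figures from any engine or SDK.

def console_split(total_gb=16.0, os_reserved_gb=2.0, vram_share=0.57):
    game_budget = total_gb - os_reserved_gb   # ~14GB left for the game
    vram_like = game_budget * vram_share      # textures, render targets, etc.
    system_like = game_budget - vram_like     # game logic, streaming pools, etc.
    return round(vram_like, 1), round(system_like, 1)

print(console_split())                  # (8.0, 6.0) -- the idealized 8/6 split
print(console_split(vram_share=0.75))   # (10.5, 3.5) -- a port that leans on VRAM
```

The second line is the scenario I'm describing: nothing stops a port from leaning much harder on the VRAM side of that budget, and then an 8GB card is simply out of headroom.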