| Component | System 1 | System 2 |
|---|---|---|
| Processor | AMD Ryzen 9 5900X | Intel Core i7-3930K |
| Motherboard | ASUS ProArt B550-CREATOR | Asus P9X79 WS |
| Cooling | Noctua NH-U14S | Be Quiet Pure Rock |
| Memory | Crucial 2 x 16 GB 3200 MHz | Corsair 8 x 8 GB 1333 MHz |
| Video Card(s) | MSI GTX 1060 3GB | MSI GTX 680 4GB |
| Storage | Samsung 970 PRO 512 GB + 1 TB | Intel 545s 512 GB + 256 GB |
| Display(s) | Asus ROG Swift PG278QR 27" | Eizo EV2416W 24" |
| Case | Fractal Design Define 7 XL x 2 | |
| Audio Device(s) | Cambridge Audio DacMagic Plus | |
| Power Supply | Seasonic Focus PX-850 x 2 | |
| Mouse | Razer Abyssus | |
| Keyboard | CM Storm QuickFire XT | |
| Software | Ubuntu | |
Try reading my post again. Oh, but you're wrong: in the last 7 years all game engines have adopted deferred rendering as the norm. They render the scene compositely in layers using G-buffers and build the frame buffer from that. With a higher-resolution frame buffer come equally high-resolution G-buffers, multiplied by the number of layers, so both bandwidth and fill-rate requirements rise at 4K. Case in point: a texture can be as small as 2x2 pixels, and those 4 texels can cover your entire 4K screen, yet you still need a certain texture fill rate in a deferred renderer, even if your shaders compute nothing and the texture samplers only interpolate between those 4 samples for each pixel of your 4K screen.
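To put rough numbers on that scaling, here is a back-of-envelope sketch; the G-buffer layout (four 32-bit colour render targets plus a 32-bit depth buffer) and the 60 FPS figure are illustrative assumptions, not details taken from either post.

```python
# Back-of-envelope G-buffer write traffic for a deferred renderer.
# Assumed layout: four 32-bit colour render targets plus a 32-bit depth
# buffer; real engines vary, this is only for a sense of scale.
BYTES_PER_PIXEL = 4 * 4 + 4

def gbuffer_bytes_per_frame(width: int, height: int) -> int:
    """Bytes written for one full G-buffer fill pass at this resolution."""
    return width * height * BYTES_PER_PIXEL

for name, (w, h) in {"1440p": (2560, 1440), "4K": (3840, 2160)}.items():
    mb_per_frame = gbuffer_bytes_per_frame(w, h) / 2**20
    gbs = mb_per_frame * 60 / 1024  # assumed 60 FPS
    print(f"{name}: {mb_per_frame:.0f} MB written per frame, ~{gbs:.1f} GB/s")
```

4K has 2.25x the pixels of 1440p, so under these assumptions G-buffer fill and write traffic grows by the same factor.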
Frame buffers scale with screen resolution, but going from 1440p to 4K, even with AA, only increases memory consumption by megabytes. With tiled rendering the frame buffers mostly stay cache-local, so bandwidth requirements change only marginally with resolution.
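A quick footprint calculation makes the "megabytes" claim concrete; the assumed setup (two RGBA8 swap-chain images plus one 32-bit depth buffer, no MSAA) is chosen only to make the arithmetic simple.

```python
# Resident frame-buffer memory at two resolutions.
# Assumed: 2 swap-chain colour images (RGBA8, 4 bytes/pixel) + 1 depth
# buffer (4 bytes/pixel); no MSAA.
def framebuffer_mb(width: int, height: int, colour_images: int = 2) -> float:
    colour = width * height * 4 * colour_images
    depth = width * height * 4
    return (colour + depth) / 2**20

mb_1440p = framebuffer_mb(2560, 1440)
mb_4k = framebuffer_mb(3840, 2160)
print(f"1440p: {mb_1440p:.0f} MB  4K: {mb_4k:.0f} MB  delta: {mb_4k - mb_1440p:.0f} MB")
```

Under these assumptions the 1440p-to-4K difference lands in the tens of megabytes, small next to a multi-gigabyte texture pool.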
Texture resources, which use most of the bandwidth, are not proportional to screen resolution; they are proportional to detail levels.
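The texture side can be illustrated the same way: the memory for a texture and its full mip chain depends on the asset's own resolution (its detail level), not on the screen it is sampled onto. The texture sizes and the ~1 byte/texel block-compressed rate below are assumptions for illustration.

```python
# Memory for one texture with a complete mip chain, assuming a
# block-compressed format at ~1 byte per texel (e.g. BC7 is 8 bits/texel).
def mip_chain_mb(size: int, bytes_per_texel: float = 1.0) -> float:
    total = 0.0
    while size >= 1:
        total += size * size * bytes_per_texel
        size //= 2
    return total / 2**20

for texture_size in (1024, 2048, 4096):
    print(f"{texture_size}x{texture_size}: ~{mip_chain_mb(texture_size):.1f} MB")
```

The same texture costs the same memory whether it is displayed at 1080p or 4K; only raising the asset's detail level (say from 2048 to 4096) changes the footprint.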