What already exists is a series of numbers that get sent to the GPU. It's up to the GPU to translate those into colors on the screen. Plus, an interpolated frame doesn't exist on top of anything. It's made up from its two neighbors instead of rendering everything from scratch. That's an optimization (if you can get it to resemble the frame you would get by rendering it through the usual pipeline).

The mental gymnastics seem sharp. Fake frames, blurred scaling and the like are a layer above what already exists; they're not something imperceptible like the rendering shortcuts you mentioned.
This is very, very far from being an optimization.
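To make the "made up from its two neighbors" point concrete, here's a toy sketch of the simplest possible interpolated frame: a per-pixel blend of the two rendered frames on either side of it. This is only an illustration of the idea being argued about; real frame generation (DLSS 3, FSR 3, etc.) uses motion vectors and optical flow rather than a plain blend, and the function and variable names here are made up for the example.

```python
import numpy as np

def interpolate_frame(prev_frame: np.ndarray, next_frame: np.ndarray, t: float = 0.5) -> np.ndarray:
    """Naive 'generated' frame: blend the two neighboring rendered frames
    per pixel instead of running the full render pipeline again."""
    # Frames are H x W x 3 arrays of uint8 color values.
    blended = (1.0 - t) * prev_frame.astype(np.float32) + t * next_frame.astype(np.float32)
    return np.clip(blended, 0, 255).astype(np.uint8)

# Toy usage: two solid-color 1080p "frames" and the frame invented between them.
prev_frame = np.full((1080, 1920, 3), (255, 0, 0), dtype=np.uint8)   # red
next_frame = np.full((1080, 1920, 3), (0, 0, 255), dtype=np.uint8)   # blue
middle = interpolate_frame(prev_frame, next_frame)                   # sits "between" the two

```

The whole disagreement is visible even in this toy: the middle frame never went through the game's pipeline at all, it's fabricated from the two frames around it, which is why one side calls it an optimization and the other calls it a fake frame.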