Even with a Noctua it's a problem. It's still too much heat flowing through the case, heating up the other components.
I've been hearing this for almost 10 years now, and still they keep introducing smaller and smaller nodes. It's more that we're hitting heat dissipation problems due to ever-rising transistor density.
They need a more advanced node to make a bigger jump in raster performance. Shrinking from 5 nm to 4 nm is not enough. Maybe with 2 nm they will finally be able to ramp up the compute unit count the way they did with the RTX 4090. It's also not the best they could do, since they dedicate part of the units to everything but rasterizing. Now imagine if those units were doing native rasterization work instead of interpolating, upscaling, etc.
They will praise DLSS4 and MFG while listing the lack of DLSS and MFG support as a drawback of any non-Nvidia GPU.
Quite the contrary to the currently ongoing TPU poll:
View attachment 380178
He can't really tell. You need to see it in motion/in a sequence to notice artifacts or ghosting, or you need a comparison image. If I let you play any game with DLSS turned on without telling you, you probably wouldn't notice. Then I'd switch DLSS off and you would notice immediately. This is basically the issue in games where upscaling is enabled by default: some users never realize that the game may look even better at native resolution. Users unknowingly playing with upscaling turned on (by default) count towards Nvidia's statistic of 80% of users using DLSS. And as was already said here, some games won't allow turning DLSS off. I have a very bad feeling about this forced upscaling strategy. Hopefully it won't spread any further, so GPU makers aren't able to obfuscate poor generational performance uplifts.
DLSS without (M)FG increases framerate (and thus reduces frametime) by rendering scenes at a lower resolution and upscaling them to a higher one, using a model based on a continuously trained neural network (so-called "AI"). DLSS also introduces distortions into the upscaled scenes. Though, I must say, the new DLSS4 transformer model looks very promising.
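Just to put rough numbers on the frametime gain, here's a quick back-of-the-envelope sketch. All figures are made-up assumptions (not measurements), and scaling render cost linearly with shaded pixel count is a simplification:

```python
# Illustrative sketch only, not Nvidia's implementation: why rendering at a
# lower internal resolution cuts frametime before the upscale pass.

def pixels(width, height):
    return width * height

native_4k  = pixels(3840, 2160)   # target output resolution
quality_in = pixels(2560, 1440)   # assumed internal render resolution ("Quality" preset)

native_frametime_ms = 20.0        # hypothetical native 4K frametime
render_ms  = native_frametime_ms * quality_in / native_4k   # simplification: cost ~ pixel count
upscale_ms = 1.5                  # hypothetical fixed cost of the upscale pass

print(f"native 4K:           {native_frametime_ms:.1f} ms (~{1000 / native_frametime_ms:.0f} fps)")
print(f"1440p -> 4K upscale: {render_ms + upscale_ms:.1f} ms (~{1000 / (render_ms + upscale_ms):.0f} fps)")
```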
FG increases latency, as its calculations require compute time in between rendering native/upscaled frames.
View attachment 380181 (Source: https://www.techspot.com/news/106265-early-dlss-4-test-showcases-cleaner-images-multiplied.html)
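To illustrate that latency point, here's a rough sketch under a simplified model I'm assuming (the native frame gets held back so the generated frame can be shown in between it and the previous one, plus some generation cost; every number below is hypothetical):

```python
# Rough latency sketch for interpolation-based frame generation.
# Assumed model, not Nvidia's pipeline: the freshly rendered native frame is
# delayed so the generated frame can be displayed first, and generating that
# frame costs some compute time on top.

native_frametime_ms = 16.7   # hypothetical 60 fps native render
fg_compute_ms       = 2.0    # hypothetical cost of generating one frame

latency_without_fg = native_frametime_ms
latency_with_fg    = native_frametime_ms + native_frametime_ms / 2 + fg_compute_ms

print(f"displayed fps:  {1000 / native_frametime_ms:.0f} -> {2000 / native_frametime_ms:.0f}")
print(f"added latency:  ~{latency_with_fg - latency_without_fg:.1f} ms per frame")
```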
I did some quick research, and Nvidia's key to reducing latency is the Reflex 2 Frame Warp technology.
Frame Generation interpolates between two rendered frames, whether those are DLSS-upscaled or native. Right now it can ONLY interpolate, not extrapolate. Jensen was quite incorrect when he said that DLSS 4 predicts the future. The algorithm simply cannot predict when and where you'll move your character with the keyboard or which way you'll move your mouse, and even if it could, the error/miss rate would be enormous. DLSS4 inserts up to three generated frames between every two native frames in the sequence. Unless the composition of the rendered scene changes (i.e. the performance requirements change), the amount of time between two natively rendered frames does not change, whether there are 1, 2, 3, or X generated frames injected in between. When moving your character in a game, you only see the results of your keyboard presses and/or mouse movement in the natively rendered frames (not in the interpolated ones), because to show user interaction, keyboard and mouse data must come from the CPU, which only happens with native frames. Responsiveness improves only when the native framerate increases, so interpolated frames cannot reduce latency. Good explanation (source, source):
(M)FG improves gameplay smoothness (increases overall framerate) but does not improve responsiveness (increases latency).
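A small sketch of that pacing argument (my own assumptions, not Nvidia's code): displayed fps scales with the number of generated frames, but the interval at which your input actually shows up on screen stays tied to the native framerate.

```python
# With N generated frames inserted between every pair of native frames, the
# displayed fps rises, but input is still only sampled/reflected on native frames.

def with_mfg(native_fps, generated_between):
    displayed_fps = native_fps * (1 + generated_between)
    native_interval_ms = 1000.0 / native_fps   # time between frames that reflect your input
    return displayed_fps, native_interval_ms

for n in (0, 1, 2, 3):                         # DLSS4 MFG: up to 3 generated frames
    fps, interval = with_mfg(native_fps=40, generated_between=n)
    print(f"{n} generated: {fps:>3.0f} fps displayed, input reflected every {interval:.1f} ms")
```

The input-to-screen interval stays at 25 ms in every row, which is exactly the "smoother but not more responsive" point above.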
With the Reflex 2 Frame Warp technology, the GPU requests the most recent mouse input from the CPU and only re-renders parts of already rendered frames, by shifting objects in the scene and in-painting the blank areas created by the shift. This effectively eliminates the latency issue introduced by (M)FG. It virtually improves responsiveness as long as the mouse polling rate is higher than the overall fps (native, generated, or both combined) and as long as the CPU is fast enough. This new technology is built on top of the existing Reflex technology, which relied on CPU-GPU synchronization. Am I correct?
View attachment 380190
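If I understand it right, the general idea is classic reprojection/warping. Here's a minimal sketch of that concept (my assumption of how it roughly works, definitely not Nvidia's actual implementation: a pure sideways shift for a small camera rotation, plus marking the uncovered strip for in-painting):

```python
import numpy as np

# Sketch of frame warping / reprojection in its simplest form.
# Assumption: a small horizontal camera rotation is approximated by shifting the
# finished frame sideways by the latest mouse delta, then flagging the
# disoccluded strip so a later pass can fill it in.

def warp_frame(frame, mouse_dx_pixels):
    """Shift a rendered frame by the newest mouse input and mark the holes."""
    warped = np.roll(frame, shift=mouse_dx_pixels, axis=1)
    hole_mask = np.zeros(frame.shape[:2], dtype=bool)
    if mouse_dx_pixels > 0:
        warped[:, :mouse_dx_pixels] = 0        # uncovered area, no rendered data yet
        hole_mask[:, :mouse_dx_pixels] = True
    elif mouse_dx_pixels < 0:
        warped[:, mouse_dx_pixels:] = 0
        hole_mask[:, mouse_dx_pixels:] = True
    return warped, hole_mask                   # hole_mask would feed an in-painting pass

frame = np.random.rand(1080, 1920, 3).astype(np.float32)  # stand-in for a rendered frame
warped, holes = warp_frame(frame, mouse_dx_pixels=12)
print(f"warped frame shape: {warped.shape}, pixels to in-paint: {holes.sum()}")
```

The real thing presumably uses depth/motion data and a trained in-painting model rather than zero-filling, but shift-then-fill based on the freshest input is the core idea as I read it.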