And that also proves my point: you talk about resolution and graphics. Like I said, things that can be easily parallelized are being run on accelerators like GPUs, or at least people are trying to run them on GPUs.
Most movie renderers remain CPU renderers today, because the hundreds of GB of texture + vertex data do not fit in a GPU's measly ~20 GB of VRAM. Ex: the Moana scene is about 93 GB base + 130 GB for animations (https://www.disneyanimation.com/resources/moana-island-scene/). We all know raytracing is fastest on raytracing-accelerated GPUs, but all that special hardware doesn't matter if the scene literally doesn't even fit in the GPU's RAM.
And Moana was rendered over 5 years ago. Today's movies are bigger and more detailed.
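Just to put the numbers side by side, here's a quick back-of-envelope sketch (the 93 GB + 130 GB figures are from the Disney page above; the 16 GB usable-VRAM budget is my own assumption):

```python
# Rough arithmetic: how many VRAM-sized pieces would the Moana scene need?
import math

scene_gb = 93 + 130      # base geometry/textures + animation data (Disney's published figures)
vram_budget_gb = 16      # assumed usable VRAM per GPU after framebuffers, BVH, etc.

chunks = math.ceil(scene_gb / vram_budget_gb)
print(f"{scene_gb} GB scene / {vram_budget_gb} GB VRAM -> at least {chunks} uploads per full traversal")
```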
And before you say it: yeah, I know about the NVidia DGX + NVSwitch. But that would still require "remote access" to RAM if you distributed the scene across that architecture. It'd probably work, but I don't think any such renderer exists yet. There are some fun blog posts about people trying to make the CPU+GPU team work across movie-scale datasets like the Moana scene (the CPU acts as a glorified RAM-box, passing the needed data to the GPU, while the GPU renders the scene in small pieces that fit inside 8 GB or 16 GB chunks; see the sketch below). But that's the stuff of research, not practice.
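Roughly what those research renderers are doing, as I understand it, looks like the sketch below. All the names and the eviction policy are made up to illustrate the "CPU as RAM-box, GPU renders VRAM-sized pieces" idea; it's not any real renderer's API:

```python
# Toy out-of-core rendering loop: the host keeps the full scene,
# the GPU only ever holds one VRAM budget's worth of chunks at a time.
from dataclasses import dataclass

VRAM_BUDGET_GB = 16.0  # assumed usable VRAM on the card

@dataclass
class SceneChunk:
    name: str
    size_gb: float

def load_chunk_to_gpu(chunk: SceneChunk) -> None:
    # Stand-in for a real host->device transfer (cudaMemcpy, OptiX upload, etc.)
    print(f"  upload {chunk.name} ({chunk.size_gb} GB)")

def render_chunk(chunk: SceneChunk) -> None:
    # Stand-in for tracing the rays that touch this chunk's geometry
    print(f"  trace rays against {chunk.name}")

def render_out_of_core(chunks: list[SceneChunk]) -> None:
    resident: list[SceneChunk] = []
    used_gb = 0.0
    for chunk in chunks:
        # Evict resident chunks until the next one fits in the VRAM budget
        while resident and used_gb + chunk.size_gb > VRAM_BUDGET_GB:
            evicted = resident.pop(0)
            used_gb -= evicted.size_gb
            print(f"  evict {evicted.name}")
        load_chunk_to_gpu(chunk)
        resident.append(chunk)
        used_gb += chunk.size_gb
        render_chunk(chunk)

if __name__ == "__main__":
    # Toy partition of a ~223 GB scene into GPU-sized pieces
    scene = [SceneChunk(f"chunk_{i:02d}", 14.0) for i in range(16)]
    render_out_of_core(scene)
```

The painful part is that rays don't respect chunk boundaries, so the same data ends up getting re-uploaded over and over, which is a big reason this stays in research territory instead of production pipelines.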