The gaming industry is about to get massively disrupted. Instead of using game engines to power games, we are now witnessing an entirely new concept. Etched, a startup specializing in designing ASICs specifically for the Transformer architecture, the foundation behind generative AI models like GPT/Claude/Stable Diffusion, has showcased a demo in partnership with Decart of a Minecraft clone being entirely generated and operated by AI instead of a traditional game engine. While we already use AI to create fairly realistic images and videos from text descriptions, having an AI model output an entire playable game is something different. Oasis is billed as the first playable, real-time, open-world AI model: it takes user input and generates gameplay on the fly, including physics, game rules, and graphics.
An interesting thing to point out is the hardware that powers this setup. On a single NVIDIA H100 GPU, the 500-million-parameter Oasis model runs at 720p resolution at 20 generated frames per second. Due to the limitations of accelerators like NVIDIA's H100/B200, 4K gameplay is practically out of reach. However, Etched has its own accelerator, Sohu, which is specialized in accelerating transformer architectures. Where eight NVIDIA H100 GPUs can serve five concurrent Oasis users, eight Sohu cards can reportedly serve 65 Oasis runs to 65 users, more than a 10x increase in inference capability on this single use case alone. The accelerator is also designed to run much larger models, like future 100-billion-parameter generative AI video game models that can output 4K at 30 FPS, all thanks to 144 GB of HBM3E memory per card, yielding 1,152 GB in an eight-accelerator server configuration.
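To put the quoted throughput figures in perspective, a quick back-of-the-envelope calculation (using only the numbers stated above; server sizes and user counts are as reported, not independently verified) shows the per-server ratio works out to 13x:

```python
# Figures quoted in the article: both servers have 8 accelerator cards
h100_cards, h100_users = 8, 5    # 8x H100 serves 5 concurrent Oasis users
sohu_cards, sohu_users = 8, 65   # 8x Sohu reportedly serves 65 users

users_per_h100 = h100_users / h100_cards  # ~0.6 users per H100
users_per_sohu = sohu_users / sohu_cards  # ~8.1 users per Sohu

# Same card count on both sides, so the server-level ratio is 65/5 = 13x,
# consistent with the article's "more than a 10x increase" claim.
speedup = sohu_users / h100_users
print(f"Sohu server serves {speedup:.0f}x more concurrent users")  # → 13x
```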
Regarding the Oasis design (shown below), it is vastly different from something like OpenAI's Sora. While most current AI video generators create pre-rendered segments, this newly developed system enables real-time interaction by generating each frame individually. At its core, the technology leverages advanced machine learning architectures, including specialized neural networks and attention mechanisms that process spatial and temporal data simultaneously. This approach allows the AI to maintain consistency while responding to user input in real time, demonstrating enough of an understanding of physics to enable natural object manipulation and construction within virtual spaces. The research team implemented several technical enhancements to ensure smooth performance, such as adaptive noise processing and specialized computational optimizations. Looking ahead, the developers plan to expand the system's capabilities to handle longer sequences and more complex simulations, allowing for much smoother gameplay, just in time for the 4K model. Oasis is available for users to try here with a queue system (as AI nerds play games, too).
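The key architectural difference described above, generating one frame at a time conditioned on recent frames plus the latest user input, rather than pre-rendering a whole clip, can be sketched roughly as follows. All names here (`FrameByFrameGenerator`, `step`, the context length) are illustrative assumptions, not Decart's or Etched's actual API:

```python
# Hypothetical sketch of a real-time, frame-by-frame generative game loop.
# Not Oasis's real implementation; names and structure are illustrative only.
from collections import deque

class FrameByFrameGenerator:
    """Autoregressive video model loop: each new frame is conditioned on a
    sliding window of recent frames (temporal context) plus the latest user
    input. This is what distinguishes an interactive model from clip-based
    generators, which render a fixed segment with no mid-clip input."""

    def __init__(self, context_len=8):
        # Bounded history keeps per-frame cost constant, which matters
        # when the target is a steady 20 FPS.
        self.history = deque(maxlen=context_len)

    def step(self, user_input):
        # A real model would run spatial/temporal attention and an
        # (adaptive) denoising pass here; this stub only records the
        # conditioning that such a pass would consume.
        frame = {"input": user_input, "context_frames": len(self.history)}
        self.history.append(frame)
        return frame

gen = FrameByFrameGenerator()
for action in ["forward", "jump", "place_block"]:
    frame = gen.step(action)  # one frame generated per input, in real time
```

The deciding design point is that inputs arrive between frames, so the model cannot batch a long sequence ahead of time; every frame is a fresh forward pass over the bounded context window.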
View at TechPowerUp Main Site | Source