Sam Altman of OpenAI just unveiled Sora, the all-new text-to-video AI model that works exactly the way science fiction would want such a thing to work—imagine fluid, photorealistic, true-color video clips generated entirely from text prompts. Sora is generative AI on an exponentially higher scale than DALL-E, and presumably requires an enormously higher amount of compute power. But to those who can afford to rent out a large hardware instance, this means the power to create a video of just about anything. Everything democratizes with time, and in a few years, Sora could become the greatest tool for independent content creators, as they could conjure entire worlds using just prompts and green screens. Sora strapped to a mixed reality headset such as the Apple Vision Pro is basically a Holodeck.
View at TechPowerUp Main Site | Source