Arm, Intel and NVIDIA have jointly authored a paper describing an 8-bit floating-point (FP8) specification and its two variants, E5M2 and E4M3, to provide a common interchangeable format that works for both artificial intelligence (AI) training and inference. This cross-industry specification alignment will allow AI models to operate and perform consistently across hardware platforms, accelerating AI software development.
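For concreteness, here is a minimal decoding sketch in Python. The bit widths follow directly from the variant names (E5M2: 1 sign, 5 exponent, 2 mantissa bits; E4M3: 1 sign, 4 exponent, 3 mantissa bits); the special-value conventions (E5M2 keeping IEEE-style infinities and NaNs, E4M3 giving up infinities for extra dynamic range) come from the published paper, not from this post.

```python
def decode_fp8(byte: int, exp_bits: int, man_bits: int) -> float:
    """Decode one FP8 code point (0-255) into a Python float."""
    bias = (1 << (exp_bits - 1)) - 1          # 15 for E5M2, 7 for E4M3
    sign = -1.0 if (byte >> 7) & 1 else 1.0
    exp = (byte >> man_bits) & ((1 << exp_bits) - 1)
    man = byte & ((1 << man_bits) - 1)
    exp_max = (1 << exp_bits) - 1

    if exp_bits == 5:
        # E5M2 follows IEEE 754 conventions: an all-ones exponent encodes
        # infinity (mantissa zero) or NaN (mantissa non-zero).
        if exp == exp_max:
            return sign * float("inf") if man == 0 else float("nan")
    else:
        # E4M3 reclaims the infinity encodings for extra range; only the
        # all-ones exponent AND mantissa pattern is NaN (per the paper).
        if exp == exp_max and man == (1 << man_bits) - 1:
            return float("nan")

    if exp == 0:
        # Subnormal: no implicit leading 1, fixed exponent of 1 - bias.
        return sign * (man / (1 << man_bits)) * 2.0 ** (1 - bias)
    return sign * (1.0 + man / (1 << man_bits)) * 2.0 ** (exp - bias)


# Largest finite values of each variant:
print(decode_fp8(0b0_1111_110, exp_bits=4, man_bits=3))  # 448.0 (E4M3)
print(decode_fp8(0b0_11110_11, exp_bits=5, man_bits=2))  # 57344.0 (E5M2)
```

The two maxima illustrate the trade-off between the variants: E4M3 spends a bit on mantissa precision, while E5M2 mirrors FP16's exponent range.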
Computational requirements for AI have been growing at an exponential rate. Innovation is required across hardware and software to deliver the computational throughput needed to advance AI. One of the promising areas of research to address this growing compute gap is to reduce the numeric precision requirements for deep learning to improve memory and computational efficiency. Reduced-precision methods exploit the inherent noise-resilient properties of deep neural networks to improve compute efficiency.
Intel plans to support this format specification across its AI product roadmap for CPUs, GPUs and other AI accelerators, including Habana Gaudi deep learning accelerators.
FP8 minimizes deviations from existing IEEE 754 floating-point formats, striking a balance between hardware and software needs so that existing implementations can be leveraged, adoption accelerated, and developer productivity improved.
The guiding principle of this format proposal from Arm, Intel and NVIDIA is to leverage conventions, concepts and algorithms built on IEEE standardization. This enables the greatest latitude for future AI innovation while still adhering to current industry conventions.
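As a rough illustration of what consistent cross-platform behavior means in practice, the sketch below quantizes a value onto the E4M3 grid by round-to-nearest: any conforming implementation must land on the same set of representable values. Exhaustive search over the grid is slow but unambiguous; real hardware uses direct bit manipulation instead, and typically rounds ties to even, whereas this sketch picks the first (smaller-magnitude) candidate. The bias of 7 and the no-infinity convention are taken from the published paper, not this post.

```python
import math

def e4m3_values():
    """Enumerate all finite non-negative E4M3 values (bias 7, no infinities;
    the all-ones code S.1111.111 encodes NaN and is skipped)."""
    vals = [(m / 8) * 2.0 ** (1 - 7) for m in range(8)]            # subnormals
    for e in range(1, 16):                                         # normals
        vals += [(1 + m / 8) * 2.0 ** (e - 7) for m in range(8)]
    vals.pop()  # drop S.1111.111, which is NaN rather than a number
    return vals

def quantize_e4m3(x: float) -> float:
    """Round-to-nearest onto the E4M3 grid (sign handled separately)."""
    grid = e4m3_values()
    mag = min(grid, key=lambda v: abs(abs(x) - v))
    return math.copysign(mag, x)

# Any implementation of the spec must agree on these rounded values,
# which is what makes FP8 model weights portable across vendors.
print(quantize_e4m3(0.1))    # 0.1015625, the nearest E4M3 neighbour
print(quantize_e4m3(500.0))  # 448.0, the largest finite E4M3 value
```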
View at TechPowerUp Main Site