Today, MLCommons published results of its industry-standard AI performance benchmark suite, MLPerf Inference v4.1. Intel submitted results across six MLPerf benchmarks for 5th Gen Intel Xeon Scalable processors and, for the first time, Intel Xeon 6 processors with Performance-cores (P-cores). Intel Xeon 6 processors with P-cores achieved a geomean AI performance improvement of about 1.9x over 5th Gen Xeon processors.
"The newest MLPerf results show how continued investment and resourcing is critical for improving AI performance. Over the past four years, we have raised the bar for AI performance on Intel Xeon processors by up to 17x based on MLPerf. As we near general availability later this year, we look forward to ramping Xeon 6 with our customers and partners," said Pallavi Mahajan, Intel corporate vice president and general manager of Data Center and AI Software.
CPUs remain a critical component for deploying AI solutions across a variety of scenarios, and Intel Xeon is well suited to AI inference workloads, including classical machine learning and vector search embedding.
With MLPerf Inference v4.1, Intel submitted 5th Gen Intel Xeon processors and Xeon 6 processors with P-cores on ResNet50, RetinaNet, 3D-UNet, BERT, DLRM v2 and GPT-J. Compared with 5th Gen Intel Xeon, Xeon 6 provides an average of about 1.9x better AI inference performance across these six benchmarks. Intel continues to be the only server processor vendor to submit CPU results to MLPerf.
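For readers unfamiliar with how a "geomean" figure like the 1.9x claim is derived, it is the geometric mean of the per-benchmark speedup ratios, which is the standard way MLPerf-style results are aggregated. A minimal sketch, using hypothetical per-benchmark speedups (the article does not publish the individual ratios):

```python
import math

# Hypothetical speedups (Xeon 6 vs. 5th Gen Xeon) for the six workloads
# named above -- illustrative values only, not actual MLPerf results.
speedups = {
    "ResNet50": 1.8,
    "RetinaNet": 2.0,
    "3D-UNet": 1.9,
    "BERT": 2.1,
    "DLRM v2": 1.7,
    "GPT-J": 1.9,
}

# Geometric mean: the nth root of the product of the n ratios.
# Unlike an arithmetic mean, it is not skewed by one outsized speedup.
geomean = math.prod(speedups.values()) ** (1 / len(speedups))
print(f"geomean speedup: {geomean:.2f}x")
```

The geometric mean is preferred for aggregating ratios because the result is independent of which system is treated as the baseline.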
Over the past four years, Intel has made significant gains in AI performance with CPUs since it first submitted MLPerf results. Compared with 3rd Gen Intel Xeon Scalable processors in 2021, Xeon 6 performs up to 17x better on natural language processing (BERT) and up to 15x better on computer vision (ResNet50) workloads. Intel continues to invest in AI for its CPU roadmap. As an example, it continues to innovate with Intel Advanced Matrix Extensions (AMX) through new data types and increased efficiency.
The latest MLCommons benchmarks highlight how Xeon processors deliver strong CPU AI server solutions to original equipment manufacturers (OEMs). As the need for AI compute grows and many customers run AI workloads alongside their enterprise workloads, OEMs are prioritizing MLPerf submissions to ensure they deliver highly performant Xeon systems optimized for AI workloads to customers.
Intel supported five OEM partners - Cisco, Dell Technologies, HPE, Quanta and Supermicro - with their MLPerf submissions in this round. Each customer submitted MLPerf results with 5th Gen Xeon Scalable processors, displaying their systems' support for a variety of AI workloads and deployments.
Intel will deliver more information about Xeon 6 processors with P-cores during a launch event in September.
View at TechPowerUp Main Site