TheLostSwede
News Editor
As the world experiences a generational shift to artificial intelligence, each of us is participating in a new era of global expansion enabled by silicon. It's the "Siliconomy," where systems powered by AI are imbued with autonomy and agency, assisting us across both knowledge-based and physical-based tasks as part of our everyday environments.
At Intel Innovation, the company unveiled technologies to bring AI everywhere and to make it more accessible across all workloads - from client and edge to network and cloud. These include easy access to AI solutions in the cloud, better price-performance from Intel's data center AI accelerators than the competition offers, tens of millions of new AI-enabled Intel PCs shipping in 2024, and tools for securely powering AI deployments at the edge.
AI requires a broad range of solutions developed with openness and security in mind to speed innovation. Intel's portfolio of AI-enabling hardware and software - from CPUs, GPUs and accelerators to the oneAPI programming model, OpenVINO developer toolkit and libraries that empower the AI ecosystem - provides competitive, high-performance, open-standards solutions for customers to quickly deploy AI at scale.
Intel Developer Cloud Reaches General Availability
Intel announced general availability of the Intel Developer Cloud, which gives developers an easy path to test and deploy AI and high-performance computing applications and solutions across the latest Intel CPUs, GPUs and AI accelerators. Developers can also take advantage of cutting-edge tools to enable advanced AI and performance. The details:
The Intel Developer Cloud is built on a foundation of advanced central processing units (CPUs) purpose-built for AI, graphics processing units (GPUs), and Intel Gaudi2 processors for deep learning, along with open software and tools. The cloud development environment also provides access to the latest Intel hardware platforms, such as 5th Gen Intel Xeon Scalable processors (code-named Emerald Rapids), which will become available in the Intel Developer Cloud in the next few weeks ahead of their Dec. 14 launch, and the Intel Data Center GPU Max Series 1100 and 1550.
Developers can use the Intel Developer Cloud to build, test and optimize AI and high-performance computing applications. They can also run small- to large-scale AI training, model optimization and inference workloads that deploy with performance and efficiency. Built on an open software foundation with oneAPI - the open, multiarchitecture, multivendor programming model - the Intel Developer Cloud provides hardware choice and freedom from proprietary programming models, supporting accelerated computing, code reuse and portability.
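As a rough illustration of the kind of workload the Developer Cloud targets, the sketch below runs a toy PyTorch inference pass on a Gaudi accelerator via Habana's PyTorch bridge. This is a minimal sketch under assumptions: the `habana_frameworks` package, the `hpu` device name and `mark_step()` come from Habana's SynapseAI software stack, not from anything stated in the announcement.
```python
# Minimal sketch (not Intel sample code): a toy PyTorch inference pass on a
# Gaudi accelerator, assuming Habana's SynapseAI PyTorch bridge is installed.
import torch
import torch.nn as nn
import habana_frameworks.torch.core as htcore  # Gaudi PyTorch bridge (assumed)

device = torch.device("hpu")  # Gaudi devices are exposed to PyTorch as "hpu"

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))
model = model.to(device).eval()

x = torch.randn(8, 512, device=device)
with torch.no_grad():
    logits = model(x)
    htcore.mark_step()  # in lazy mode, flushes the accumulated graph for execution

print(logits.shape)  # torch.Size([8, 10])
```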
Customer and Performance Momentum in the Data Center
Intel announced AI performance updates and industry momentum for its data center and artificial intelligence product portfolio, including Intel Gaudi2 and 3; 4th Gen Intel Xeon; 5th Gen Intel Xeon; and future-generation Xeon processors code-named Sierra Forest and Granite Rapids. The details:
- Intel announced a large AI supercomputer will be built entirely on Intel Xeon processors and 4,000 Intel Gaudi2 AI hardware accelerators, with Stability AI as the anchor customer.
- Dell Technologies and Intel are collaborating to offer AI solutions to meet customers wherever they are on their AI journey. PowerEdge systems with Xeon and Gaudi will support AI workloads ranging from large-scale training to base-level inferencing.
- Alibaba Cloud has reported 4th Gen Xeon as a viable solution for real-time large language model (LLM) inference in its model-serving platform DashScope, with 4th Gen Xeon delivering a 3x acceleration in response time thanks to its built-in Intel Advanced Matrix Extensions (Intel AMX) accelerators and other software optimizations (see the inference sketch after this list).
- Granite Rapids will include industry-leading Performance-cores (P-cores), offering better AI performance than any other CPU, and a 2x to 3x boost over 4th Gen Xeon for AI workloads.
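For context on the AMX point above: the speed-up comes from running inference in a low-precision format (bf16 or int8) so the matrix units in 4th Gen Xeon do the heavy lifting. Below is a minimal, hedged sketch using Intel Extension for PyTorch; the model is a small stand-in, not Alibaba's DashScope stack.
```python
# Hedged sketch: bf16 CPU inference where 4th Gen Xeon's Intel AMX units handle
# the matrix math. The model is a small placeholder, not Alibaba's DashScope LLM.
import torch
import intel_extension_for_pytorch as ipex
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2"  # placeholder model for illustration
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16).eval()

# ipex.optimize applies CPU-side operator and weight-layout optimizations;
# with bfloat16, the matrix multiplications can dispatch to AMX on 4th Gen Xeon.
model = ipex.optimize(model, dtype=torch.bfloat16)

prompt = "Intel AMX accelerates large language model inference by"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad(), torch.autocast("cpu", dtype=torch.bfloat16):
    output_ids = model.generate(**inputs, max_new_tokens=32)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```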
New AI Experiences Powered by Intel Core Ultra Processors
Intel will usher in the age of the AI PC with the upcoming Intel Core Ultra processors, code-named Meteor Lake, featuring Intel's first integrated neural processing unit, or NPU, for power-efficient AI acceleration and local inference on the PC. Intel confirmed Core Ultra will launch Dec. 14. The details:
- Core Ultra delivers low-latency AI compute that is independent of network connectivity, with stronger data privacy.
- Core Ultra integrates an NPU into client silicon for the first time. The NPU is built for low-power, high-quality AI processing and enables entirely new PC experiences. It is ideal for workloads migrating from the CPU that need higher quality or efficiency, or for workloads that would typically run in the cloud because efficient client compute has been lacking.
- Core Ultra represents an inflection point in Intel's client processor roadmap: It's the first client chiplet design enabled by Foveros packaging technology. In addition to the NPU and major advances in power-efficient performance thanks to Intel 4 process technology, the new processor brings discrete-level graphics performance with onboard Intel Arc graphics.
- Core Ultra's disaggregated architecture delivers a balance of performance and power across AI-driven tasks (a device-targeting sketch follows this list):
- The GPU offers parallelism and throughput, ideal for AI infused in media, 3D applications and the render pipeline.
- The NPU is a dedicated low-power AI engine for sustained AI and AI offload.
- The CPU offers fast response, ideal for lightweight, single-inference, low-latency AI tasks.
- Intel highlighted a collaboration with Acer to bring AI to its upcoming Core Ultra-based systems, showcasing how the new "Acer Parallax" software feature uses the NPU to add a 3D look and feel to user images.
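In practice, the CPU/GPU/NPU split described above maps onto explicit device targets in a runtime such as OpenVINO. A minimal sketch, assuming an OpenVINO 2023.x install and a placeholder IR model; whether an "NPU" device shows up depends on the platform, drivers and OpenVINO release.
```python
# Minimal sketch: targeting the CPU, GPU or NPU on a Core Ultra-class system
# with OpenVINO. "model.xml" is a placeholder IR file; NPU visibility depends
# on the platform, driver and OpenVINO release.
from openvino.runtime import Core

core = Core()
print("Available devices:", core.available_devices)  # e.g. ['CPU', 'GPU', 'NPU']

model = core.read_model("model.xml")  # placeholder model

# CPU: fast response for lightweight, single-inference tasks
cpu_compiled = core.compile_model(model, "CPU")
# GPU: parallelism and throughput for media/3D-adjacent AI
gpu_compiled = core.compile_model(model, "GPU")
# NPU: sustained, power-efficient AI offload (only if exposed on this system)
if "NPU" in core.available_devices:
    npu_compiled = core.compile_model(model, "NPU")
```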
Powering AI at the Edge
The opportunity for edge computing is immense, fueled by the demand for automating systems and analyzing data through AI. OpenVINO is Intel's AI inferencing and deployment runtime of choice for developers on client and edge platforms. With the OpenVINO developer toolkit, Intel is making AI at the edge even more accessible. Developer downloads of the OpenVINO toolkit have increased 90% year over year. The details:
- OpenVINO 2023.1, powered by oneAPI, makes generative AI more accessible for real-world scenarios, enabling developers to write once and deploy across a broad range of devices and AI applications (see the conversion sketch after this list).
- The newest release - available for download on OpenVINO.ai - brings Intel closer to the vision of any model on any hardware anywhere.
- OpenVINO 2023.1 enables developers to optimize standard PyTorch, TensorFlow or ONNX models and offers full support for the forthcoming Core Ultra processors. It also provides more model compression techniques, improved GPU support, lower memory consumption for dynamic shapes, and more portability and performance to run across the entire compute continuum: cloud, client and edge.
- During the Innovation Day 1 keynote, Intel demonstrated Fit:match, an AI solution improving today's retail fitting-room experience. Fit:match's 3D Concierge experience uses Intel RealSense Depth Cameras with lidar sensors, Intel Core processors and OpenVINO. With a focus on security and privacy, the solution can scan and match thousands of products to ensure an optimal fit for the customer, which increases purchasing conversions and reduces return rates.
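To make the "write once, deploy anywhere" claim concrete, here is a minimal sketch of an OpenVINO 2023.1-style flow: convert a stock PyTorch model directly with `convert_model`, then compile it with the AUTO device plugin, which picks among the available CPU, GPU and NPU. The torchvision model is an arbitrary stand-in, not part of the announcement.
```python
# Minimal sketch of a "write once, deploy anywhere" flow with OpenVINO 2023.x:
# direct PyTorch conversion, then AUTO device selection at load time.
# The torchvision model is an arbitrary stand-in.
import numpy as np
import openvino as ov
import torch
from torchvision.models import resnet50, ResNet50_Weights

torch_model = resnet50(weights=ResNet50_Weights.DEFAULT).eval()

# Convert the PyTorch model directly, without an intermediate ONNX export.
ov_model = ov.convert_model(torch_model, example_input=torch.randn(1, 3, 224, 224))

core = ov.Core()
compiled = core.compile_model(ov_model, "AUTO")  # AUTO picks CPU/GPU/NPU as available

image = np.random.rand(1, 3, 224, 224).astype(np.float32)
result = compiled(image)[compiled.output(0)]
print("Predicted class index:", int(result.argmax()))
```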
View at TechPowerUp Main Site | Source