
Intel Updates "AI Playground" Application for Local AI Models with "Lunar Lake" Support

AleksandarK

News Editor
Staff member
Joined
Aug 19, 2017
Messages
2,538 (0.96/day)
Intel has announced the release of an updated version of its AI Playground application, now optimized for the new Intel Core Ultra 200V "Lunar Lake" series of processors. This latest iteration, version 1.21b, brings a host of new features and improvements designed to make AI more accessible to users of Intel's AI-enabled PCs. AI Playground, first launched earlier this year, offers a user-friendly interface for various AI functions, including image generation, enhancement, and natural language processing.

The new version introduces several key enhancements. These include a fresh, exclusive theme for 200V series processor users, an expanded LLM picker now featuring Phi3, Qwen2, and Mistral models, and a conversation manager for saving and revisiting chat discussions. Additionally, users will find adjustable font sizes for improved readability and a simplified aspect ratio tool for image creation and enhancement.

One of the most significant aspects of AI Playground is its ability to run entirely locally on the user's machine. This approach ensures that all computations, prompts, and outputs remain on the device, addressing privacy concerns often associated with cloud-based AI services. The application is optimized to take advantage of the Xe Cores and XMX AI engines found in the Intel Core Ultra 200V series processors, allowing even lightweight devices to perform complex AI tasks efficiently. Intel has also improved the installation process, addressing potential conflicts and providing better error handling. The company encourages user engagement through its Intel Insiders Discord channel, fostering a community around AI Playground's development and use. Although the models users can run locally are relatively small, typically up to 7 billion parameters with 8-bit or 4-bit quantization, having a centralized application to help run them locally is a significant step toward embedding AI in all aspects of personal computing.
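To see why the ~7 billion parameter ceiling pairs with 8-bit and 4-bit quantization, a rough back-of-envelope memory estimate helps. The sketch below is not from Intel or AI Playground; it is a generic weights-only calculation with an assumed overhead multiplier (for KV cache and activations) to show why a 7B model only fits comfortably in a thin-and-light laptop's memory once quantized:

```python
def model_memory_gb(params_billions: float, bits_per_weight: int,
                    overhead: float = 1.2) -> float:
    """Rough memory footprint of an LLM's weights in GB.

    overhead is an assumed fudge factor for KV cache and activations;
    real usage varies with context length and runtime.
    """
    bytes_total = params_billions * 1e9 * bits_per_weight / 8 * overhead
    return bytes_total / 1e9


# A 7B-parameter model at different precisions:
print(model_memory_gb(7, 16))  # FP16: ~16.8 GB -- too big for most laptops
print(model_memory_gb(7, 8))   # INT8: ~8.4 GB
print(model_memory_gb(7, 4))   # INT4: ~4.2 GB -- fits on lightweight devices
```

At 4-bit precision the same model needs roughly a quarter of the FP16 footprint, which is what makes local inference on a Lunar Lake laptop with 16-32 GB of shared memory practical.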



View at TechPowerUp Main Site | Source
 
Joined
Nov 3, 2014
Messages
264 (0.07/day)
Sounds like they're accessing the iGPU?

How long until the NPUs which they're pushing so hard become useful for LLM/Diffusion models?
 

AleksandarK

Sounds like they're accessing the iGPU?

How long until the NPUs which they're pushing so hard become useful for LLM/Diffusion models?
iGPU and I assume NPU as well; every accelerator on board will be used.
 
Joined
May 1, 2020
Messages
109 (0.07/day)
Oh, thank you, but ComfyUI, Forge, and AUTOMATIC1111 are way better.
 