T0@st
News Editor
AMD has caught up with NVIDIA and Intel in the race to get a locally run AI chatbot working on its respective hardware. Team Red's community hub welcomed a new blog entry on Wednesday: AI staffers published a handy step-by-step guide, "How to run a Large Language Model (LLM) on your AMD Ryzen AI PC or Radeon Graphics Card." It recommends that interested parties start by downloading the correct version of LM Studio. With the CPU-bound Windows variant, designed for higher-end Phoenix and Hawk Point chips, compatible Ryzen AI PCs can deploy a GPT-based, LLM-powered AI chatbot. The LM Studio ROCm technical preview works similarly, but requires ownership of a Radeon RX 7000-series graphics card. Supported GPU targets include gfx1100, gfx1101 and gfx1102.
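The gfx names above are the LLVM target identifiers that ROCm reports for RDNA 3 GPUs. As a rough illustration (not part of AMD's guide), one could check whether a system's GPU is on the supported list by parsing the output of ROCm's `rocminfo` tool, which prints a `Name: gfx1100`-style line for each agent:

```python
import re
import subprocess

# gfx targets named in the LM Studio ROCm technical preview notes
SUPPORTED_TARGETS = {"gfx1100", "gfx1101", "gfx1102"}

def find_gfx_targets(rocminfo_output: str) -> set[str]:
    """Extract gfx target names from rocminfo's agent listing."""
    return set(re.findall(r"\bgfx\d{3,4}\b", rocminfo_output))

def gpu_is_supported(rocminfo_output: str) -> bool:
    """True if any detected gfx target is on the supported list."""
    return bool(find_gfx_targets(rocminfo_output) & SUPPORTED_TARGETS)

if __name__ == "__main__":
    # Requires a working ROCm install; rocminfo ships with ROCm.
    output = subprocess.run(["rocminfo"], capture_output=True, text=True).stdout
    print("Supported GPU found:", gpu_is_supported(output))
```
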
AMD believes that "AI assistants are quickly becoming essential resources to help increase productivity, efficiency or even brainstorm for ideas." The blog also puts a spotlight on LM Studio's offline functionality: "Not only does the local AI chatbot on your machine not require an internet connection—but your conversations stay on your local machine." The six-step guide invites curious readers to experiment with a handful of large language models, most notably Mistral 7b and LLAMA v2 7b, and strongly recommends selecting options tagged "Q4 K M" (i.e., 4-bit quantization). You can learn about spooling up "your very own AI chatbot" here.
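Beyond the built-in chat window, LM Studio can also expose a loaded model through a local OpenAI-compatible server. As a minimal sketch, assuming that server is running on its default `localhost:1234` port with a model loaded (the model name below is a placeholder, not from AMD's guide), a chat request could look like this:

```python
import json
import urllib.request

# LM Studio's local server speaks the OpenAI chat-completions format.
# The port and model name here are illustrative assumptions.
LM_STUDIO_URL = "http://localhost:1234/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "local-model") -> dict:
    """Build an OpenAI-style chat-completions payload for the local server."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

def ask_local_chatbot(prompt: str) -> str:
    """Send the prompt to the local server; the conversation never leaves the machine."""
    payload = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        LM_STUDIO_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_local_chatbot("Summarise 4-bit quantization in one sentence."))
```

Because everything runs against localhost, this keeps the offline/privacy property the blog highlights: no prompt or response traffic ever leaves the machine.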
View at TechPowerUp Main Site | Source