
AMD Publishes User Guide for LM Studio - a Local AI Chatbot

T0@st

News Editor
Joined
Mar 7, 2023
Messages
2,077 (3.17/day)
Location
South East, UK
AMD has caught up with NVIDIA and Intel in the race to get a locally run AI chatbot up and running on its respective hardware. Team Red's community hub welcomed a new blog entry on Wednesday—AI staffers published a handy "How to run a Large Language Model (LLM) on your AMD Ryzen AI PC or Radeon Graphics Card" step-by-step guide. They recommend that interested parties download the correct version of LM Studio. With the CPU-bound Windows variant—designed for higher-end Phoenix and Hawk Point chips—compatible Ryzen AI PCs can deploy instances of a GPT-based, LLM-powered AI chatbot. The LM Studio ROCm technical preview functions similarly, but relies on ownership of a Radeon RX 7000 graphics card. Supported GPU targets include: gfx1100, gfx1101 and gfx1102.

AMD believes that: "AI assistants are quickly becoming essential resources to help increase productivity, efficiency or even brainstorm for ideas." Their blog also puts a spotlight on LM Studio's offline functionality: "Not only does the local AI chatbot on your machine not require an internet connection—but your conversations stay on your local machine." The six-step guide invites curious members to experiment with a handful of large language models—most notably Mistral 7b and LLAMA v2 7b. They thoroughly recommend that you select options with "Q4 K M" (AKA 4-bit quantization). You can learn about spooling up "your very own AI chatbot" here.
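Beyond the chat window, LM Studio can also expose the loaded model through its built-in local server, which speaks an OpenAI-compatible API (port 1234 by default). The snippet below is a minimal sketch of how that setup could be queried from Python; it is not part of AMD's guide, and the port, prompt and model name are just typical defaults and placeholders.

```python
# Minimal sketch: querying LM Studio's local OpenAI-compatible server.
# Assumes the "Local Server" has been started inside LM Studio (default port 1234)
# and a model such as Mistral 7b Q4_K_M is already loaded.
import requests

payload = {
    "model": "local-model",  # placeholder; LM Studio serves whichever model is loaded
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarise what 4-bit (Q4_K_M) quantization does."},
    ],
    "temperature": 0.7,
}

resp = requests.post("http://localhost:1234/v1/chat/completions", json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```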



 
Joined
May 3, 2018
Messages
2,881 (1.19/day)
AMD believes that: "AI assistants are quickly becoming essential resources to help increase productivity, efficiency or even brainstorm for ideas."

Yeah nah.
 
Joined
Nov 3, 2014
Messages
267 (0.07/day)
This is really such an insane f**king joke. They've "caught up" by just publicly admitting how incompatible all their products are with LLMs?
 
Joined
Aug 20, 2007
Messages
21,541 (3.40/day)
System Name Pioneer
Processor Ryzen R9 9950X
Motherboard GIGABYTE Aorus Elite X670 AX
Cooling Noctua NH-D15 + A whole lotta Sunon and Corsair Maglev blower fans...
Memory 64GB (4x 16GB) G.Skill Flare X5 @ DDR5-6000 CL30
Video Card(s) XFX RX 7900 XTX Speedster Merc 310
Storage Intel 905p Optane 960GB boot, +2x Crucial P5 Plus 2TB PCIe 4.0 NVMe SSDs
Display(s) 55" LG 55" B9 OLED 4K Display
Case Thermaltake Core X31
Audio Device(s) TOSLINK->Schiit Modi MB->Asgard 2 DAC Amp->AKG Pro K712 Headphones or HDMI->B9 OLED
Power Supply FSP Hydro Ti Pro 850W
Mouse Logitech G305 Lightspeed Wireless
Keyboard WASD Code v3 with Cherry Green keyswitches + PBT DS keycaps
Software Gentoo Linux x64 / Windows 11 Enterprise IoT 2024
AMD believes that: "AI assistants are quickly becoming essential resources to help increase productivity, efficiency or even brainstorm for ideas."

Yeah nah.
It can be useful for research, provided you curate the data it's searching well. It's a pretty select use case and it is certainly overblown, but even our site's founder, W1zzard, has admitted this is a handy trait.

tl;dr: Is AI overhyped? Yes. Is it useless? No.
 
Joined
May 3, 2018
Messages
2,881 (1.19/day)
It can be useful for research, provided you curate the data it's searching well. It's a pretty select use case and it is certainly overblown, but even our site's founder, W1zzard, has admitted this is a handy trait.

tl;dr: Is AI overhyped? Yes. Is it useless? No.
In the context of a home PC it largely is useless. I know full well how powerful AI can be in science, engineering and medicine to name a few, but you can bet they aren't relying on a BS copilot.
 
Joined
Jan 18, 2020
Messages
834 (0.46/day)
In the context of a home PC it largely is useless. I know full well how powerful AI can be in science, engineering and medicine to name a few, but you can bet they aren't relying on a BS copilot.

Got to keep hyping it whilst they wait for some killer, non-niche use case for LLMs to be found.
 
Joined
Jun 18, 2021
Messages
2,569 (2.00/day)
In the context of a home PC it largely is useless. I know full well how powerful AI can be in science, engineering and medicine to name a few, but you can bet they aren't relying on a BS copilot.

It can still be useful to bounce simple ideas off. It's useless in the context companies like Microsoft want to push it in, integrating it into every corner of the OS. But having a chatbot quickly accessible and offline is pretty cool, not to mention that the free party we've had with ChatGPT and Bard/Gemini will certainly end in the not-so-distant future, as it's simply not a sustainable service. OpenAI is literally bleeding money, as is Google, though certainly much less, because they're less used and were much better prepared with accelerators built in-house and ready to go.
 
Joined
Aug 20, 2007
Messages
21,541 (3.40/day)
System Name Pioneer
Processor Ryzen R9 9950X
Motherboard GIGABYTE Aorus Elite X670 AX
Cooling Noctua NH-D15 + A whole lotta Sunon and Corsair Maglev blower fans...
Memory 64GB (4x 16GB) G.Skill Flare X5 @ DDR5-6000 CL30
Video Card(s) XFX RX 7900 XTX Speedster Merc 310
Storage Intel 905p Optane 960GB boot, +2x Crucial P5 Plus 2TB PCIe 4.0 NVMe SSDs
Display(s) 55" LG 55" B9 OLED 4K Display
Case Thermaltake Core X31
Audio Device(s) TOSLINK->Schiit Modi MB->Asgard 2 DAC Amp->AKG Pro K712 Headphones or HDMI->B9 OLED
Power Supply FSP Hydro Ti Pro 850W
Mouse Logitech G305 Lightspeed Wireless
Keyboard WASD Code v3 with Cherry Green keyswitches + PBT DS keycaps
Software Gentoo Linux x64 / Windows 11 Enterprise IoT 2024
In the context of a home PC it largely is useless. I know full well how powerful AI can be in science, engineering and medicine to name a few, but you can bet they aren't relying on a BS copilot.
That's a fair assessment.
 
Joined
Dec 25, 2020
Messages
7,013 (4.81/day)
Location
São Paulo, Brazil
System Name "Icy Resurrection"
Processor 13th Gen Intel Core i9-13900KS Special Edition
Motherboard ASUS ROG Maximus Z790 Apex Encore
Cooling Noctua NH-D15S upgraded with 2x NF-F12 iPPC-3000 fans and Honeywell PTM7950 TIM
Memory 32 GB G.SKILL Trident Z5 RGB F5-6800J3445G16GX2-TZ5RK @ 7600 MT/s 36-44-44-52-96 1.4V
Video Card(s) ASUS ROG Strix GeForce RTX™ 4080 16GB GDDR6X White OC Edition
Storage 500 GB WD Black SN750 SE NVMe SSD + 4 TB WD Red Plus WD40EFPX HDD
Display(s) 55-inch LG G3 OLED
Case Pichau Mancer CV500 White Edition
Audio Device(s) Apple USB-C + Sony MDR-V7 headphones
Power Supply EVGA 1300 G2 1.3kW 80+ Gold
Mouse Microsoft Classic Intellimouse
Keyboard IBM Model M type 1391405 (distribución española)
Software Windows 11 IoT Enterprise LTSC 24H2
Benchmark Scores I pulled a Qiqi~
AMD, always chasing after NVIDIA with subpar implementations of the cool things that NVIDIA does, without regard for the obvious elephant in the room, which they absolutely refuse to even acknowledge, let alone do anything about.

For an answer to "Chat with RTX", that even someone with zero knowledge of computers could use, this is sorely lacking.
 
Joined
Jan 12, 2023
Messages
221 (0.31/day)
System Name IZALITH (or just "Lith")
Processor AMD Ryzen 7 7800X3D (4.2Ghz base, 5.0Ghz boost, -30 PBO offset)
Motherboard Gigabyte X670E Aorus Master Rev 1.0
Cooling Deepcool Gammaxx AG400 Single Tower
Memory Corsair Vengeance 64GB (2x32GB) 6000MHz CL40 DDR5 XMP (XMP enabled)
Video Card(s) PowerColor Radeon RX 7900 XTX Red Devil OC 24GB (2.39Ghz base, 2.56Ghz boost)
Storage 2x1TB SSD, 2x2TB SSD, 2x 8TB HDD
Display(s) Samsung Odyssey G51C 27" QHD (1440p 165Hz) + Samsung Odyssey G3 24" FHD (1080p 165Hz)
Case Corsair 7000D Airflow Full Tower
Audio Device(s) Corsair HS55 Surround Wired Headset/LG Z407 Speaker Set
Power Supply Corsair HX1000 Platinum Modular (1000W)
Mouse Logitech G502 X LIGHTSPEED Wireless Gaming Mouse
Keyboard Keychron K4 Wireless Mechanical Keyboard
Software Arch Linux
Funnily enough, their guide didn't work for me. I couldn't get any models to run on my 7900 XTX. But it did point me in the right direction, which was KoboldCPP: https://github.com/YellowRoseCx/koboldcpp-rocm

This was VERY easy to set up and get running, and it has ROCm support on Windows too.
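For anyone who wants to script against it: once KoboldCPP is up, it also serves a small HTTP API (port 5001 by default, KoboldAI-style endpoints), so the same offline setup can be driven from Python. A rough sketch assuming default settings; the exact ROCm launch options are best taken from the repo's README.

```python
# Rough sketch: sending a prompt to a running KoboldCPP (ROCm build) instance.
# Assumes the server was started with default settings, listening on port 5001.
import requests

payload = {
    "prompt": "Explain what ROCm is in one paragraph.",
    "max_length": 200,   # number of tokens to generate
    "temperature": 0.7,
}

resp = requests.post("http://localhost:5001/api/v1/generate", json=payload, timeout=300)
resp.raise_for_status()
print(resp.json()["results"][0]["text"])
```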
 
Joined
Aug 5, 2020
Messages
28 (0.02/day)
Location
PL
Processor AMD Ryzen 7 PRO 4750U
Motherboard Yes
Display(s) 13.3"
Mouse Logitech M570
Software btw... I use Arch
hilarious. at the same time, one can run ollama on linux across multiple (mixed) GPUs automagically, starting with the rocm 6.0 release from december 2023. i'm running it on a 780m igpu + 7600m xt egpu (via usb4).
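for anyone wanting to reproduce this: once the ollama service is running (default port 11434) and a model has been pulled, e.g. `ollama pull mistral`, it can be queried over its REST API. a minimal sketch in python; the model name and prompt are just examples.

```python
# Minimal sketch: querying a local Ollama instance (ROCm backend on Linux).
# Assumes the ollama service is running on the default port 11434 and that
# a model has already been pulled, e.g. `ollama pull mistral`.
import requests

payload = {
    "model": "mistral",          # example model; any pulled model name works
    "prompt": "Give me three uses for a local, offline LLM.",
    "stream": False,             # return the full response as one JSON object
}

resp = requests.post("http://localhost:11434/api/generate", json=payload, timeout=300)
resp.raise_for_status()
print(resp.json()["response"])
```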
 

ShaneTeks

New Member
Joined
Mar 27, 2024
Messages
2 (0.01/day)
Yeah, it's been a difficult time for me with my RX 7800 XT. It feels like I can only use it for gaming now because of the lack of official ROCm support. Only the RX 7900 series is getting the goodies. I hope LM Studio succeeds. They are the only ones trying to get this to work on AMD. I'd rather not talk about DirectML.
 
Joined
Aug 5, 2020
Messages
28 (0.02/day)
Location
PL
Processor AMD Ryzen 7 PRO 4750U
Motherboard Yes
Display(s) 13.3"
Mouse Logitech M570
Software btw... I use Arch
Yeah, it's been a difficult time for me with my RX 7800 XT. It feels like I can only use it for gaming now because of the lack of official ROCm support. Only the RX 7900 series is getting the goodies. I hope LM Studio succeeds. They are the only ones trying to get this to work on AMD. I'd rather not talk about DirectML.
wrong. it works great under linux. both images (sd.next) and LLMs (ollama)
 
Joined
Dec 25, 2020
Messages
7,013 (4.81/day)
Location
São Paulo, Brazil
System Name "Icy Resurrection"
Processor 13th Gen Intel Core i9-13900KS Special Edition
Motherboard ASUS ROG Maximus Z790 Apex Encore
Cooling Noctua NH-D15S upgraded with 2x NF-F12 iPPC-3000 fans and Honeywell PTM7950 TIM
Memory 32 GB G.SKILL Trident Z5 RGB F5-6800J3445G16GX2-TZ5RK @ 7600 MT/s 36-44-44-52-96 1.4V
Video Card(s) ASUS ROG Strix GeForce RTX™ 4080 16GB GDDR6X White OC Edition
Storage 500 GB WD Black SN750 SE NVMe SSD + 4 TB WD Red Plus WD40EFPX HDD
Display(s) 55-inch LG G3 OLED
Case Pichau Mancer CV500 White Edition
Audio Device(s) Apple USB-C + Sony MDR-V7 headphones
Power Supply EVGA 1300 G2 1.3kW 80+ Gold
Mouse Microsoft Classic Intellimouse
Keyboard IBM Model M type 1391405 (distribución española)
Software Windows 11 IoT Enterprise LTSC 24H2
Benchmark Scores I pulled a Qiqi~
wrong. it works great under linux. both images (sd.next) and LLMs (ollama)

Well, this has always been the problem with AMD GPUs, hasn't it? "It works, if you install A, configure B, set environment for C, compile D, adjust E, and make sure to use Linux because otherwise you ain't getting anything done."
 
Joined
Aug 5, 2020
Messages
28 (0.02/day)
Location
PL
Processor AMD Ryzen 7 PRO 4750U
Motherboard Yes
Display(s) 13.3"
Mouse Logitech M570
Software btw... I use Arch
Well, this has always been the problem with AMD GPUs, hasn't it? "It works, if you install A, configure B, set environment for C, compile D, adjust E, and make sure to use Linux because otherwise you ain't getting anything done."
it just shows how little you know about the thing. the whole market for AI, both NV and AMD, is almost purely Linux. things just work there, unlike on windows.

so again: it works great, if you follow the industry standard.
 

ShaneTeks

New Member
Joined
Mar 27, 2024
Messages
2 (0.01/day)
wrong. it works great under linux. both images (sd.next) and LLMs (ollama)
Have you perhaps got ComfyUI to work with the above gfx1100, 1101, 1102? And have you tested and compared the performance of running SD with DirectML versus ROCm?
 