Dear AMD, NVIDIA, INTEL and others, we need cheap (192-bit to 384-bit), high-VRAM consumer GPUs to locally self-host/inference AI/LLMs
Members who reacted to message #42
All (6) · Like (3) · Love (3)
vigor · Today at 9:28 PM · Messages: 79 · Reaction score: 56 · Points: 58
Dr. Dro · Today at 7:38 PM · 31 · From São Paulo, Brazil · Messages: 7,472 · Solutions: 1 · Reaction score: 10,975 · Points: 163
DirtyDingusMcgee · Today at 7:32 PM · 43 · From Texas, USA · Messages: 473 · Reaction score: 802 · Points: 103
damric · Today at 7:31 PM · From Azalea City · Messages: 1,775 · Reaction score: 1,720 · Points: 163
Bomby569 · Today at 7:23 PM · Messages: 3,385 · Reaction score: 2,991 · Points: 163
Assimilator · Today at 7:17 PM · From Ikenai borderline! · Messages: 5,993 · Solutions: 2 · Reaction score: 7,857 · Points: 163