Dear AMD, NVIDIA, Intel and others, we need cheap (192-bit to 384-bit), high-VRAM consumer GPUs to locally self-host/run inference on AI/LLMs
Members who reacted to message #12
All (7) · Like (7)
mate123 · 27 · From Hungary · Messages: 33 · Reaction score: 38 · Points: 28 · Reacted Yesterday at 9:12 PM
Wirko · 53 · From Slovenia · Messages: 3,784 · Reaction score: 2,544 · Points: 163 · Reacted Yesterday at 5:44 PM
AnotherReader · From Mississauga, Canada · Messages: 1,815 · Reaction score: 2,232 · Points: 163 · Reacted Yesterday at 5:12 PM
londiste · Messages: 3,893 · Reaction score: 2,684 · Points: 163 · Reacted Yesterday at 5:12 PM
Dragam1337 · Messages: 1,194 · Reaction score: 1,002 · Points: 123 · Reacted Yesterday at 1:11 PM
Assimilator · From Ikenai borderline! · Messages: 5,993 · Solutions: 2 · Reaction score: 7,858 · Points: 163 · Reacted Yesterday at 1:09 PM
Carillon · Messages: 153 · Reaction score: 145 · Points: 93 · Reacted Yesterday at 8:39 AM