Forums › Other › AI, Machine Learning & Crypto

Thread: Dear AMD, NVIDIA, INTEL and others, we need cheap (192-bit to 384-bit), high VRAM, consumer, GPUs to locally self-host/inference AI/LLMs
Members who reacted to message #2

All (12): Haha (5), Like (4), Love (2), Sad (1)
| Member | Location | Reacted | Messages | Solutions | Reaction score | Points |
|---|---|---|---|---|---|---|
| TheinsanegamerN | — | 19 minutes ago | 4,153 | — | 4,623 | 163 |
| x4it3n | Los Angeles, CA | Yesterday at 10:06 PM | 302 | — | 184 | 53 |
| 720p low | — | Yesterday at 7:10 PM | 58 | — | 68 | 68 |
| AnotherReader | Mississauga, Canada | Yesterday at 5:06 PM | 1,815 | — | 2,232 | 163 |
| AVATARAT | — | Yesterday at 3:11 PM | 541 | — | 516 | 143 |
| windwhirl | — | Yesterday at 2:03 PM | 2,990 | — | 4,849 | 163 |
| Dristun | Moscow, Russia | Yesterday at 1:58 PM | 685 | — | 1,132 | 143 |
| Dragam1337 | — | Yesterday at 1:09 PM | 1,194 | — | 1,002 | 123 |
| Assimilator | Ikenai borderline! | Yesterday at 1:08 PM | 5,993 | 2 | 7,858 | 163 |
| eidairaman1 (The Exiled Airman) | Republic of Texas (True Patriot) | Yesterday at 5:15 AM | 43,454 | 2 | 21,492 | 163 |
| freeagent (Moderator) | Winnipeg, Canada | Monday at 10:42 PM | 9,631 | 1 | 16,411 | 163 |
| TPUnique | — | Monday at 8:51 PM | 46 | — | 69 | 18 |