
NVIDIA GeForce RTX 50 Series AI PCs Accelerate DeepSeek Reasoning Models

T0@st

News Editor
Joined
Mar 7, 2023
Messages
2,315 (3.31/day)
Location
South East, UK
The recently released DeepSeek-R1 model family has brought a new wave of excitement to the AI community, allowing enthusiasts and developers to run state-of-the-art reasoning models with problem-solving, math and code capabilities, all from the privacy of local PCs. With up to 3,352 trillion operations per second of AI horsepower, NVIDIA GeForce RTX 50 Series GPUs can run the DeepSeek family of distilled models faster than anything on the PC market.

A New Class of Models That Reason
Reasoning models are a new class of large language models (LLMs) that spend more time on "thinking" and "reflecting" to work through complex problems, while describing the steps required to solve a task. The fundamental principle is that any problem can be solved with deep thought, reasoning and time, just as humans tackle problems. By spending more time, and thus compute, on a problem, the LLM can yield better results. This phenomenon is known as test-time scaling, where a model dynamically allocates compute resources during inference to reason through problems.

Reasoning models can enhance user experiences on PCs by deeply understanding a user's needs, taking actions on their behalf and allowing them to provide feedback on the model's thought process. This unlocks agentic workflows for solving complex, multi-step tasks such as analyzing market research, performing complicated math problems, debugging code and more.
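The test-time scaling idea, that spending more inference compute yields better answers, can be illustrated with a toy estimator in plain Python. This is just an analogy, not anything tied to DeepSeek's actual inference: more samples play the role of more "thinking" tokens.

```python
import random

def estimate_pi(samples: int, seed: int = 0) -> float:
    """Estimate pi by sampling points in the unit square.

    More samples = more 'inference-time compute' = a more accurate
    answer, mirroring how reasoning models improve by thinking longer.
    """
    rng = random.Random(seed)
    inside = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:  # point falls inside the quarter circle
            inside += 1
    return 4.0 * inside / samples

if __name__ == "__main__":
    # Increasing the compute budget tightens the estimate.
    for budget in (100, 10_000, 1_000_000):
        print(budget, estimate_pi(budget))
```

The same trade-off drives reasoning models: a fixed model, given a larger token budget at inference time, tends to produce better answers.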




The DeepSeek Difference
The DeepSeek-R1 family of distilled models is based on a large 671-billion-parameter mixture-of-experts (MoE) model. MoE models consist of multiple smaller expert models for solving complex problems, and DeepSeek models further divide the work, assigning subtasks to smaller sets of experts. DeepSeek employed a technique called distillation to build a family of six smaller student models, ranging from 1.5 billion to 70 billion parameters, from the large 671-billion-parameter model. The reasoning capabilities of the larger DeepSeek-R1 model were taught to smaller Llama and Qwen student models, resulting in powerful, compact reasoning models that run locally on RTX AI PCs with fast performance.

Peak Performance on RTX
Inference speed is critical for this new class of reasoning models. GeForce RTX 50 Series GPUs, built with dedicated fifth-generation Tensor Cores, are based on the same NVIDIA Blackwell GPU architecture that fuels world-leading AI innovation in the data center. RTX fully accelerates DeepSeek, offering maximum inference performance on PCs.

Throughput performance of the DeepSeek-R1 distilled family of models across GPUs on the PC:



Experience DeepSeek on RTX in Popular Tools
NVIDIA's RTX AI platform offers the broadest selection of AI tools, software development kits and models, opening access to the capabilities of DeepSeek-R1 on over 100 million NVIDIA RTX AI PCs worldwide, including those powered by GeForce RTX 50 Series GPUs. High-performance RTX GPUs make AI capabilities always available—even without an internet connection—and offer low latency and increased privacy because users don't have to upload sensitive materials or expose their queries to an online service.

Experience the power of DeepSeek-R1 on RTX AI PCs through a vast ecosystem of inference software, including Llama.cpp, Ollama, LM Studio, AnythingLLM, Jan.AI, GPT4All and OpenWebUI. Plus, use Unsloth to fine-tune the models with custom data.
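Several of these tools (llama.cpp's server, LM Studio, and others) expose an OpenAI-compatible HTTP endpoint on localhost. A minimal sketch of building such a request with only the Python standard library follows; the model name, port, and endpoint path are assumptions that depend on how the tool is configured, so check the docs of whichever tool you run:

```python
import json
import urllib.request

def build_chat_request(prompt: str,
                       model: str = "deepseek-r1-distill-qwen-7b",
                       base_url: str = "http://localhost:8080"):
    """Build (but don't send) an OpenAI-style chat completion request.

    The model name, port, and path are hypothetical placeholders;
    they vary by tool (llama.cpp server, LM Studio, etc.).
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.6,
    }
    body = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )

if __name__ == "__main__":
    req = build_chat_request("Why is the sky blue? Think step by step.")
    # urllib.request.urlopen(req) would send it to the local server.
    print(req.get_method(), req.full_url)
```

Because inference stays on the local endpoint, no prompt data leaves the machine, which is the privacy point the article makes.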

View at TechPowerUp Main Site | Source
 
NVIDIA GeForce RTX 50 Series AI PCs Accelerate DeepSeek Reasoning Models...and so does about every other accelerator provider apparently.
 
Wasn't it shown that it runs fine on lower specced systems?
Aren't the 50 series vaporware right now, why promote this if there is no way to (independently) do it?
 
AMD has shown different results, with the 7900 XTX beating the RTX 4090.
Now, in the above benchmarks from Nvidia, the Radeon card is running Vulkan. Is this optimal, or is Nvidia sabotaging the 7900 here?

Also, even with the above results from Nvidia, the 7900 wins on performance per dollar easily.
 
I saw an article about 5090s bricking after a driver update. Does anyone know if this happens with the 5080s too?
 

Wasn't it shown that it runs fine on lower specced systems?
Aren't the 50 series vaporware right now, why promote this if there is no way to (independently) do it?
The distilled models, yeah. Smaller models can run better on lower specced systems.
The bigger ones require more hardware.

No consumer hardware is going to run the actual big MoE model tho.
AMD has shown different results, with the 7900 XTX beating the RTX 4090.
Now, in the above benchmarks from Nvidia, the Radeon card is running Vulkan. Is this optimal, or is Nvidia sabotaging the 7900 here?

Also, even with the above results from Nvidia, the 7900 wins on performance per dollar easily.
Vulkan is quite a bit slower, but it's way easier to get up and running than ROCm.
Nonetheless, those results from AMD were really weird, as even a 3090 usually beats a 7900XTX:
[attached screenshot: benchmark chart, 2025-02-03]

source
 
Great, so now the Chinese will be grabbing these by the truck load. At this rate they'll never be in stock.
 
Great, so now the Chinese will be grabbing these by the truck load. At this rate they'll never be in stock.
Chinese? Why Chinese? Why not everyone else?
DeepSeek is open-source software that can be run anywhere in the world.
 
True, but the current leaders in AI for now seem to be the US first and then China, so if someone is going to buy hardware that accelerates this, then I'd bet on China buying in bulk first.
 
I'm running the 14B distill of DeepSeek on my 7735HS laptop with just the integrated 680M. Inference doesn't require a lot of compute, it just needs a lot of memory (I have 32GB). It isn't the fastest, but it is still usable. 7B is much faster but doesn't perform well enough as an AI in my experience.

So many people still think that AI has to run in the "cloud" because of the compute requirements, or that it requires high-end NVIDIA cards. Training an AI does, but most people aren't training AIs; they are just running a pre-trained model (inferencing). Any semi-recent GPU or CPU can do that for small to mid-sized AI models. If you want to run the full 671B model then yeah, you will need a high-end workstation (mainly because of the RAM requirements), but a 14B distill can meet the majority of people's LLM needs and will run on consumer hardware.
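The memory point above can be put in rough numbers: weight memory is approximately parameter count times bytes per weight, so a 4-bit quantized 14B model needs on the order of 7 GB just for weights. A back-of-the-envelope sketch (real model files add overhead for the KV cache, embeddings and quantization metadata):

```python
def weight_memory_gb(params_billions: float, bits_per_weight: float) -> float:
    """Rough weight-only memory footprint in decimal GB.

    Ignores KV cache, activations and file-format overhead, so treat
    the result as a lower bound on what you actually need.
    """
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

for params, bits, label in [
    (7,   4,  "7B @ 4-bit"),
    (14,  4,  "14B @ 4-bit"),
    (14,  16, "14B @ FP16"),
    (671, 8,  "671B @ 8-bit"),
]:
    print(f"{label:>12}: ~{weight_memory_gb(params, bits):.1f} GB")
```

This is why a 14B distill fits comfortably in 32 GB of system RAM or a 16 GB GPU at 4-bit, while the full 671B model is out of reach for consumer hardware.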
 
This used to be called Damage Control. The narrative is so precious.

Chinese? Why Chinese? Why not everyone else?
DeepSeek is open-source software that can be run anywhere in the world.
If you looked at how many 4090s were sold to China vs the rest of the world, you'd see. The rest of the world is not locked out of 5090s, only of 5090Ds. Aren't those China-only variants? People act like China is inert after North America gave China its manufacturing all those years ago.
 
Well, the Chinese threw a wrench into the US AI business by making it free. And if that wasn't enough: I don't know how much money AI makes right now, but I know it eats up a lot of billions. Nvidia is certainly hurting now that some investors question the billions they threw at this, just for DeepSeek to make it free.
If you were a billionaire, would you invest billions into a business that has the potential to be free for all?
 