
Do you use your GPU for Compute?

Besides Gaming, what do you use GPU Compute for?

  • AI

    Votes: 2,504 9.9%
  • Encoding

    Votes: 3,134 12.4%
  • Rendering

    Votes: 3,008 11.9%
  • Mining Crypto

    Votes: 791 3.1%
  • Folding/BOINC

    Votes: 762 3.0%
  • No compute, just gaming

    Votes: 14,986 59.5%

  • Total voters
    25,185
  • Poll closed.
No compute, just gaming : 59.9% :kookoo:

The only explanation is that most people have no idea how a PC works. Quite shameful, I say, because they are also the loudest on the forum.
I voted for encoding, but rendering is not foreign to me either. And as the screenshot below indicates, the video card helps in many other ways. PCMark only uses OpenCL because it is the only API supported by all video cards. In reality, owners of nVidia video cards get much better results thanks to CUDA, NVENC and OptiX.

Before you come at me with axes, try to understand the message in the screenshot below.
[Attached screenshot: comp pcmark 10.jpg]
 
I miss an option for content consumption: decoding / video & YouTube playback / stream decoding, and maybe streaming (OK, that is encoding, but not everybody who uses the GPU to encode (export) an edited video also streams).

[Even my old GPU (GTX 1060) has better power efficiency watching YouTube than an undervolted, highly efficient i5-10400F doing light-load CPU decoding of a VP9 YouTube video.]

....
I voted for encoding, but rendering is not foreign to me either....
I use CPU rendering because of the better quality and the better quality/bitrate efficiency.
There are countless parameters to set, and there is the option for 2-pass encoding. But I do not encode much, so I don't care about speed; that is why I prefer CPU rendering for a better quality/compression ratio.
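
To make the 2-pass idea concrete, here is a minimal sketch of what such a CPU encode looks like when scripted, assuming ffmpeg with libx265 is on PATH; the file names and the bitrate are placeholders, not my actual settings:

```python
# Minimal sketch: 2-pass CPU (libx265) encode driven from Python.
# Assumes ffmpeg with libx265 on PATH; paths and bitrate are placeholders.
import subprocess

SRC = "input.mp4"        # hypothetical source clip
OUT = "output_cpu.mp4"   # hypothetical output
BITRATE = "8000k"        # target average bitrate for the 2-pass run

# Pass 1: analysis only, write the stats file, discard the video output.
subprocess.run([
    "ffmpeg", "-y", "-i", SRC,
    "-c:v", "libx265", "-preset", "slow", "-b:v", BITRATE,
    "-x265-params", "pass=1",
    "-an", "-f", "null", "-",
], check=True)

# Pass 2: the real encode, reusing the stats from pass 1.
subprocess.run([
    "ffmpeg", "-y", "-i", SRC,
    "-c:v", "libx265", "-preset", "slow", "-b:v", BITRATE,
    "-x265-params", "pass=2",
    "-c:a", "copy", OUT,
], check=True)
```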
 
I believe Mathematica uses the GPU for some calculations, but I don't know if this is by default.
 
Sometimes I play games on it...
And that's it.
 
There's one missing category. It's a bit hard to describe, but there's this software that will use compute behind the scenes. Things like video/photo editors or even spreadsheet manipulators will offload things, given a chance. I expect that is the more widespread usage of compute among home users.
The obvious problem being that, even if you added the category, users may still be oblivious to their using compute.
 
Well, back when I had a tower with a somewhat powerful GPU (3080), I accidentally reconfigured the entire universe to my liking, so I'm not sure what category that would fall under... but since I don't do gamz, I felt like I had to give it something useful to do, hehehe :)

And don't worry, that was a temporary, one-time project, and I've since moved on to other, more significant things.....like watching the world get destroyed by constant wars, disease, hunger, poverty, and greed ! /s
 
I use CPU rendering because of the better quality and the better quality/bitrate efficiency.
There are countless parameters to set, and there is the option for 2-pass encoding. But I do not encode much, so I don't care about speed; that is why I prefer CPU rendering for a better quality/compression ratio.
Artifacts and quality were a problem 15 years ago, not now. The differences are so small that even professional software has integrated GPU hardware encoding as an export option. And regardless of whether you use it or not, before the export comes the editing, and there GPU hardware decoding is a godsend.
File size? That's a joke, right? You can buy 4 TB for $100 or 8 TB for $160, which takes care of the extra 5-7% of space occupied by a video file generated with GPU encoding enabled. How about doubling, even tripling, the performance in return?
Here are comparisons from Puget Systems for AMD, nVidia and software (CPU-only) encoding. Honestly, only AMD is debatable, but I would still use their encoder for home use if there were nothing else.

You can use the video card for streaming and multimedia viewing, and browsers, antivirus software and many other programs have hardware acceleration enabled by default. The video card even helps open a folder. That's why you can't say that you're buying a video card "only for gaming".
A dual-core Sempron from 2010 has big problems decoding 1080p@30fps YouTube using integrated graphics from its era. Playback is fluid, but you can't do anything else with the computer (80% CPU usage). At 1440p it is overwhelmed. The same Sempron + a 1050/1050 Ti can play back 4K@60fps without breaking a sweat.
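
For anyone who wants to see how little is involved in switching the hardware path on, here is a minimal sketch of a GPU decode + NVENC export, assuming an ffmpeg build with CUDA/NVENC support and an nVidia card; the file names and settings are placeholders, not the Puget test setup:

```python
# Minimal sketch: hardware-accelerated decode plus NVENC HEVC encode via ffmpeg.
# Assumes ffmpeg with CUDA/NVENC support and an nVidia GPU; names are placeholders.
import subprocess

SRC = "input_4k.mp4"      # hypothetical 4K source
OUT = "output_nvenc.mp4"  # hypothetical output

subprocess.run([
    "ffmpeg", "-y",
    "-hwaccel", "cuda",            # let the GPU handle decoding too
    "-i", SRC,
    "-vf", "scale=1920:1080",      # downscale 4K -> 1080p
    "-c:v", "hevc_nvenc",          # NVENC H.265 encoder
    "-preset", "p7",               # slowest / highest-quality NVENC preset
    "-b:v", "20000k",
    "-c:a", "copy",
    OUT,
], check=True)
```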

 
You can use the video card for streaming and multimedia viewing, and browsers, antivirus software and many other programs have hardware acceleration enabled by default. The video card even helps open a folder. That's why you can't say that you're buying a video card "only for gaming".

That dGPU does not exist in a vacuum. Most processors have integrated graphics which handle all those video acceleration tasks with ease while using less power than a dGPU. So using the basic mechanics of video display and playback to argue that people on this forum, of all places, are somehow ignorant of what their GPUs are doing is reading the room wrong, and reading the intent of the poll wrong by nitpicking.

It's clear that the poll is about using the *power* of a dGPU for things where that power can actually make a difference: AI/folding/rendering/crypto (in the past). Not displaying a spreadsheet or a YT video that an N100's iGPU can handle with ease.
 
Artifacts and quality were a problem 15 years ago, not now. The differences are so small that even professional software has integrated GPU hardware encoding as an export option. And regardless of whether you use it or not, before the export comes the editing, and there GPU hardware decoding is a godsend.
File size? That's a joke, right? You can buy 4 TB for $100 or 8 TB for $160, which takes care of the extra 5-7% of space occupied by a video file generated with GPU encoding enabled. How about doubling, even tripling, the performance in return?
Here are comparisons from Puget Systems for AMD, nVidia and software (CPU-only) encoding. Honestly, only AMD is debatable, but I would still use their encoder for home use if there were nothing else.

You can use the video card for streaming and multimedia viewing, and browsers, antivirus software and many other programs have hardware acceleration enabled by default. The video card even helps open a folder. That's why you can't say that you're buying a video card "only for gaming".
A dual-core Sempron from 2010 has big problems decoding 1080p@30fps YouTube using integrated graphics from its era. Playback is fluid, but you can't do anything else with the computer (80% CPU usage). At 1440p it is overwhelmed. The same Sempron + a 1050/1050 Ti can play back 4K@60fps without breaking a sweat.


GPU encoding is nice for its speed, but CPU encoding is definitely still king of the castle when it comes to quality. Even in scenarios where quality loss is acceptable, like streaming, a dual-PC setup with the second PC acting as a dedicated CPU encoding system is the general recommendation if you want a step up from GPU-encoded streaming. The quality difference between a dual-rig setup like that and a regular GPU-encoded stream is significant, even at the same bitrate. A 5-7% increase in size for GPU encodes is exceedingly conservative; in many cases it's 35% larger or more.

Professional software includes GPU encoding to speed up previews, timeline scrubbing and prototyping, and for users for whom quality isn't a top factor. It puts the faster, lower-quality nature of GPU encoders to good use.
 
Spending most of my free time on Stable Diffusion and playing games. Which option should I choose?
 
GPU encoding is nice for its speed, but CPU encoding is definitely still king of the castle when it comes to quality. Even in scenarios where quality loss is acceptable, like streaming, a dual-PC setup with the second PC acting as a dedicated CPU encoding system is the general recommendation if you want a step up from GPU-encoded streaming. The quality difference between a dual-rig setup like that and a regular GPU-encoded stream is significant, even at the same bitrate. A 5-7% increase in size for GPU encodes is exceedingly conservative; in many cases it's 35% larger or more.

Professional software includes GPU encoding to speed up previews, timeline scrubbing and prototyping, and for users for whom quality isn't a top factor. It puts the faster, lower-quality nature of GPU encoders to good use.
You are confused. The editing features you mention fall under decoding and have been present in professional software for decades. Maybe you don't know, or they forgot, that there was an unorthodox way to get the 8800GT/GTS/9xxx cards recognized by Premiere (a simple edit of a file). They had nothing to do with exporting the video material. In the meantime, naturally, the inherent early deficiencies were eliminated, and Premiere (and many others) introduced hardware acceleration for export starting in 2020.
In parallel, the importance of hardware decoding increased and, at least for the moment, nVidia has doubled the number of encoding units on the 4070 Ti and more powerful cards. For decoding they are all about the same; for encoding, performance scales with the power of the graphics chip and the choice of dedicated video card.
A Puget review of the introduction of this feature in Premiere shows quite clearly the iGPU limits for export (see the capture). They mention that they used an earlier version of Premiere for QuickSync because it is the last one that allows iGPU activation when the program detects a dedicated video card in the system.
As for quality, I provided a link where you can find video material exported with SW and HW encoding. Without the details about each file, something tells me you wouldn't have bet a substantial amount on guessing which is SW and which is HW. And 7% extra file size was really generous: it is below 5% for large files and almost negligible for small ones.

I think you should have voted for Content Creation, which includes everything: rendering, encoding and many other things that are not related to games... Now I see the same desperation as in the case of X3D and the 1% boost that saves the planet... when you have it. Everyone uses the CPU for TikTok editing because GPU hardware acceleration doesn't cover wrinkles and poverty.

I repeat: if you say that you are buying a video card "only for games", it is clear that you have no idea of its importance beyond 3D rendering.

[Attached screenshot: Clipboard01.jpg]
 
What's that 8% on AI about? Did people confuse this option with DLSS, which is gaming, not compute? Or do people run ChatGPT from home or something? :laugh:

As for me, I occasionally run Einstein@Home on BOINC to help scientists find black holes and pulsars and other interesting space phenomena. Not very often, though, as electricity is expensive in the UK.
 
If multiple apply, pick the most important one and maybe let us know in the comments
Kinda need the multi-choice option, as I do Blender compute and transcoding almost daily.
 
So far, only to AV1-encode my 2 TB "Homework" folder.
 
I've run some GPT4ALL on my 3080 Ti (when it doesn't run out of VRAM and swap to the CPU), and I've tried Stable Diffusion, only to realize that I don't know how to make people who don't look goofy. In the past (before the AI craze) I did a lot of AI upscaling with Video2X and Waifu2xGUI. I guess the upscaling would count as rendering? I also occasionally clip gameplay, so I do some encoding.
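
For anyone curious how the VRAM fallback bites, here is a rough sketch of the kind of check I mean, assuming PyTorch built with CUDA; the model-size figure is made up for illustration:

```python
# Rough sketch: decide whether a model will fit in VRAM before loading it on the GPU.
# Assumes PyTorch built with CUDA; the model-size figure is a made-up placeholder.
import torch

MODEL_BYTES = 9 * 1024**3  # hypothetical ~9 GiB model footprint

def pick_device() -> torch.device:
    if not torch.cuda.is_available():
        return torch.device("cpu")
    free_bytes, total_bytes = torch.cuda.mem_get_info()  # free/total VRAM on current device
    print(f"VRAM: {free_bytes / 1024**3:.1f} GiB free of {total_bytes / 1024**3:.1f} GiB")
    # Leave some headroom for activations; otherwise fall back to CPU instead of
    # letting the runtime spill to system memory and crawl.
    if free_bytes > MODEL_BYTES * 1.2:
        return torch.device("cuda")
    return torch.device("cpu")

print("Running inference on:", pick_device())
```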

So far, only to AV1-encode my 2 TB "Homework" folder.
My preferred homework device doesn't have AV1 decode :( and my card only decodes and doesn't encode AV1.
 
A little test.
Source: Samsung Wonderland Demo (you can download it for free)
CPU: 14700KF @ max 104 W (roughly equal to a 7900X in encoding)
GPU: RTX 3070 Ti
Target: 4K -> 1080p; 20,000 kbps; H.265 10-bit; slow preset (CPU) and slowest preset (GPU). The GPU's render time allowed me the most aggressive preset; with the CPU alone, it is a disaster.
Output file size:
GPU: 255 MB
CPU: 386 MB
Render time:
GPU: 66 s
CPU: 380 s
Note: NVENC doesn't just mean GPU. It works in tandem: CPU + GPU.

Under the spoiler you have 10 captures made with ICAT. If you look with a magnifying glass, you will see the differences. Anyway, with NVENC, in this case, you have a reserve of 131 MB to shut everyone's mouth. For home use, it's just a waste of time, because the video material generated by NVENC looks pretty good.

[Attached screenshots: sizecompare.jpg; CPU 1080p H.265 10-bit 2-pass slow; NVENC 1080p H.265 10-bit slowest]


[10 ICAT side-by-side captures: Samsung Wonderland Demo, CPU vs GPU, frames 01-10]
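
If anyone wants to reproduce the comparison, here is a rough sketch of the harness I would use, assuming ffmpeg with both libx265 and hevc_nvenc on PATH; the source file name is a placeholder, and the presets only approximate what my encoder GUI calls "slow" and "slowest":

```python
# Rough sketch: time a CPU (libx265) and a GPU (hevc_nvenc) encode of the same
# source and report output sizes. Assumes ffmpeg with libx265 + hevc_nvenc on PATH;
# the source file name is a placeholder.
import os
import subprocess
import time

SRC = "samsung_wonderland_4k.mp4"  # hypothetical local copy of the demo

JOBS = {
    "cpu_x265.mp4":  ["-pix_fmt", "yuv420p10le", "-c:v", "libx265",
                      "-preset", "slow", "-b:v", "20000k"],
    "gpu_nvenc.mp4": ["-pix_fmt", "p010le", "-c:v", "hevc_nvenc",
                      "-preset", "p7", "-b:v", "20000k"],
}

for out, codec_args in JOBS.items():
    start = time.perf_counter()
    subprocess.run(
        ["ffmpeg", "-y", "-i", SRC, "-vf", "scale=1920:1080", *codec_args, "-an", out],
        check=True,
    )
    elapsed = time.perf_counter() - start
    size_mb = os.path.getsize(out) / 1e6
    print(f"{out}: {elapsed:.0f} s, {size_mb:.0f} MB")
```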
 
With all the hype around AI, we are wondering.. besides gaming, what do you use your GPU for?

If multiple apply, pick the most important one and maybe let us know in the comments
I mainly game, but everything else is a tiebreaker in choosing a card.
I use encoding both for live screen recording and for remastering in DaVinci. I don't do much rendering anymore, but it's nice to have the support.
I used to do F@H, but the politics of it got tiring. I have done a significant amount of BOINC, AI and crypto, but I tend to have dedicated boxen for that, so my main rig is unburdened.
 
The responses to this thread are making me question the reading comprehension of our forum members :kookoo:
 
No compute, just gaming : 59.9% :kookoo:

The only explanation is that most people have no idea how a PC works. Quite shameful, I say, because they are also the loudest on the forum.
Clearly the scope of the question was non-gaming applications. Obviously, when you're playing a game, computations are involved. The answer was clearly aimed at people who don't do anything on their PC but gaming, based on the allowed responses, the number of responses permitted (i.e. one), and the implied meaning of the words used, even if that usage could be considered technically incorrect.

On that note, anything the GPU does could be considered rendering anyway, even if it doesn't necessarily fall under that specific category.
 
A little test.
Source: Samsung Wonderland Demo (you can download it for free)
CPU: 14700KF @ max 104 W (roughly equal to a 7900X in encoding)
GPU: RTX 3070 Ti
Target: 4K -> 1080p; 20,000 kbps; H.265 10-bit; slow preset (CPU) and slowest preset (GPU). The GPU's render time allowed me the most aggressive preset; with the CPU alone, it is a disaster.
Output file size:
GPU: 255 MB
CPU: 386 MB
Render time:
GPU: 66 s
CPU: 380 s
Note: NVENC doesn't just mean GPU. It works in tandem: CPU + GPU.

Under the spoiler you have 10 captures made with ICAT. If you look with a magnifying glass, you will see the differences. Anyway, with NVENC, in this case, you have a reserve of 131 MB to shut everyone's mouth. For home use, it's just a waste of time, because the video material generated by NVENC looks pretty good.



There are a couple of problems with your comparison:

1) The Samsung Wonderland demo is perhaps the worst possible video for testing encode quality. Only small portions of the screen consist of moving elements, the rest is fixed in place, and those movements are slow. It's not representative of any movie or video I can think of; even most nature tours have vastly more complex video to encode.

2) You have the GPU set to the highest-quality preset and the CPU to the third highest, so it isn't an apples-to-apples comparison.

3) Variable frame rate and average kbps are just not used anymore. A variable frame rate can cause playback issues on many devices, in addition to a choppy frame rate in some instances during playback. Constant frame rate takes a bit more space but ensures smooth playback. You also want to be using CQ/RF (constant quality / rate factor) instead of average bitrate. A constant quality setting means the encoder targets a certain quality level, which especially makes sense here, given you are trying to compare CPU and GPU encoding. I haven't seen anyone encode using average bitrate in over a decade, and for good reason: constant quality provides a faster, smaller encode with no downsides: https://handbrake.fr/docs/en/latest/technical/video-cq-vs-abr.html

NVENC is GPU-only. Handbrake still handles filters, subtitles, etc. on the CPU, but the video itself is encoded entirely on the GPU when NVENC is enabled: https://handbrake.fr/docs/en/latest/technical/video-nvenc.html

I voted AI because I use my GPU all the time for AI inference. I use my CPU far more for encoding videos, where I'll typically split a 5-minute section out of a video and encode it multiple times at different CQ values to compare the quality and size differences. My GPU could encode videos very fast, but it's not worth it to me given the quality trade-off I've observed.
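
In case it's useful to anyone, here is a rough sketch of that test-clip workflow, assuming ffmpeg with libx265 on PATH; the file names, timestamps and CRF values are placeholders rather than my exact settings:

```python
# Rough sketch: cut a short test section and encode it at several constant-quality
# (CRF) values to compare size vs quality. Assumes ffmpeg with libx265 on PATH;
# file names, timestamps and CRF values are placeholders.
import os
import subprocess

SRC = "movie.mkv"        # hypothetical source
CLIP = "test_clip.mkv"   # 5-minute excerpt used for the comparison

# Cut a 5-minute section without re-encoding (stream copy).
subprocess.run(
    ["ffmpeg", "-y", "-ss", "00:10:00", "-t", "300", "-i", SRC, "-c", "copy", CLIP],
    check=True,
)

# Encode the excerpt at a few CRF values; lower CRF = higher quality, bigger file.
for crf in (18, 22, 26):
    out = f"test_crf{crf}.mkv"
    subprocess.run(
        ["ffmpeg", "-y", "-i", CLIP, "-c:v", "libx265",
         "-preset", "slow", "-crf", str(crf), "-c:a", "copy", out],
        check=True,
    )
    print(f"CRF {crf}: {os.path.getsize(out) / 1e6:.0f} MB")
```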
 
Rendering, 3D modeling, video editing. The whole deal. It's extremely important for me to get these done quickly enough, as I'm running even more GPU-intensive work at the same time.
 
Clearly the scope of the question was non-gaming applications. Obviously, when you're playing a game, computations are involved. The answer was clearly aimed at people who don't do anything on their PC but gaming, based on the allowed responses, the number of responses permitted (i.e. one), and the implied meaning of the words used, even if that usage could be considered technically incorrect.

On that note, anything the GPU does could be considered rendering anyway, even if it doesn't necessarily fall under that specific category.
Exactly this! DLSS is gaming, regardless of what code it runs on your GPU.
 
I put down rendering, as it's the closest to the photogrammetry I do alongside gaming, and it certainly works the GPU, CPU and RAM hard for long periods of time.
 
There are a couple of problems with your comparison:

1) The Samsung Wonderland demo is perhaps the worst possible video for testing encode quality. Only small portions of the screen consist of moving elements, the rest is fixed in place, and those movements are slow. It's not representative of any movie or video I can think of; even most nature tours have vastly more complex video to encode.

2) You have the GPU set to the highest-quality preset and the CPU to the third highest, so it isn't an apples-to-apples comparison.

3) Variable frame rate and average kbps are just not used anymore. A variable frame rate can cause playback issues on many devices, in addition to a choppy frame rate in some instances during playback. Constant frame rate takes a bit more space but ensures smooth playback. You also want to be using CQ/RF (constant quality / rate factor) instead of average bitrate. A constant quality setting means the encoder targets a certain quality level, which especially makes sense here, given you are trying to compare CPU and GPU encoding. I haven't seen anyone encode using average bitrate in over a decade, and for good reason: constant quality provides a faster, smaller encode with no downsides: https://handbrake.fr/docs/en/latest/technical/video-cq-vs-abr.html

NVENC is GPU-only. Handbrake still handles filters, subtitles, etc. on the CPU, but the video itself is encoded entirely on the GPU when NVENC is enabled: https://handbrake.fr/docs/en/latest/technical/video-nvenc.html

I voted AI because I use my GPU all the time for AI inference. I use my CPU far more for encoding videos, where I'll typically split a 5-minute section out of a video and encode it multiple times at different CQ values to compare the quality and size differences. My GPU could encode videos very fast, but it's not worth it to me given the quality trade-off I've observed.

1. I picked that one by chance, but I have tested many more over time.
2. As I explained, GPU encoding allows the most aggressive preset. For the CPU, placebo means 3-4 frames per second. Imagine what 1-2 hours of working material at 60 fps means if even 3 minutes at 24 fps needs almost 45 minutes. Just imagine.
3. I assure you that nothing exploded and the planet is safe. The same settings were used in both variants; I don't see where the problem is.
Someone complained that GPU encoding lacks 2-pass encoding. Maybe you agree.
4. And where did I go wrong when I said that it is a GPU+CPU tandem?

Am I to understand that the planet has turned upside down and the CPU now beats the GPU at encoding speed?
 
Just gaming, like the vast majority of people with gaming rigs (that percentage is going to be way higher than among the people who frequent TechPowerUp and vote in polls here).
 