This poll should be multiple choice.
I'm probably doing the first three in late 2023. I don't mine crypto (who does on general-purpose silicon anymore?), nor have I done any of the folding stuff in the past twenty years.
It also doesn't account for people with multiple systems. My primary gaming build is really just for that. My secondary gaming build is also used for video upscaling, currently with waifu2x, probably with something else in the near future. I've dabbled a couple of times with the Topaz software.
Even when I do a waifu2x upscale, there are clearly periods when the CPU is doing most of the work and other times when both the CPU and GPU are heavily loaded. But I don't really know which silicon inside the GPU is busy at any given moment: shader cores, RT cores, ML cores, media encoding cores, or just a lot of VRAM I/O.
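If you're curious, Nvidia's NVML library exposes at least part of that split. Here's a minimal Python sketch, assuming an Nvidia card and the nvidia-ml-py package; note that NVML reports core, VRAM-bus, and encode/decode engine utilization but doesn't break out Tensor (ML) or RT core usage, which is exactly why it's hard to know:

```python
# Sketch: sample how busy each block of an Nvidia GPU is, via NVML.
# Assumes an Nvidia GPU and `pip install nvidia-ml-py` (imports as pynvml).
# NVML reports shader-core, VRAM-bus, and NVENC/NVDEC utilization, but it
# does NOT break out Tensor (ML) or RT core usage separately.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU

for _ in range(10):  # sample once a second for ten seconds
    util = pynvml.nvmlDeviceGetUtilizationRates(handle)
    enc, _ = pynvml.nvmlDeviceGetEncoderUtilization(handle)
    dec, _ = pynvml.nvmlDeviceGetDecoderUtilization(handle)
    print(f"cores {util.gpu:3d}%  vram bus {util.memory:3d}%  "
          f"encode {enc:3d}%  decode {dec:3d}%")
    time.sleep(1)

pynvml.nvmlShutdown()
```

Run that while an upscale is going and you can at least watch the load shift between the shader cores and the media engines.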
For my daily driver Mac, I'm sure the GPU is used for multiple tasks including ML as well as hardware media encoding and decoding. I'm also using the Upscayl AI image upscaling tool. I don't really game on the Mac since I have a better-suited Windows PC for that (RTX 3080 Ti).
I've also used Nvidia Broadcast and Nvidia Canvas on a couple of my lesser RTX builds. The former does real-time video processing (like background replacement); the latter is just a silly little AI paint program.
I know Apple does some image/video processing on dedicated silicon (Neural Engine cores or GPU cores, probably both) on the Mac, just like on my iPhone and iPad. I'm pretty sure DaVinci Resolve also takes advantage of the differentiated silicon. Same with Pixelmator Pro.
And with each major macOS and iOS/iPadOS release, there are more functions that harness the GPU and ML cores. When I upgrade to the latest macOS and iOS/iPadOS in spring 2024, my GPUs will be doing even more work.
Hell, even web browsers will use hardware acceleration when available, including for video playback. I know RTX Video Super Resolution is an available setting in the Nvidia Control Panel starting with the Ampere generation. And hardware video decoders have been included in CPUs and GPUs for years. So even if you're just watching a YouTube video, your machine is probably using differentiated GPU silicon to decode the stream (H.264, HEVC, etc.).
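One easy way to see what decode hardware your machine exposes: if you have ffmpeg installed, it can list the acceleration backends it found. A quick Python sketch, assuming ffmpeg is on your PATH:

```python
# Sketch: ask ffmpeg which hardware acceleration backends this machine exposes.
# Assumes ffmpeg is installed and on your PATH.
import subprocess

out = subprocess.run(
    ["ffmpeg", "-hide_banner", "-hwaccels"],
    capture_output=True, text=True, check=True,
).stdout

# First line is the "Hardware acceleration methods:" header; the rest are
# backends such as cuda, qsv, d3d11va, vaapi, or videotoolbox depending on
# the OS, GPU, and drivers.
backends = [line.strip() for line in out.splitlines()[1:] if line.strip()]
print("hardware acceleration backends:", ", ".join(backends) or "none")
```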
At least one of the video conferencing systems uses ML processing to fake proper eye contact. I know I've tried this out but I don't remember which system I used. My Mac? A Windows PC?
There's also some sort of text recognition feature when analyzing photos (Live Text, in Apple's terminology). My guess is this is also being offloaded to available GPU/ML cores in 2023, whether it's on my phone or my computers.
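On the Mac at least, that OCR is exposed through Apple's Vision framework, which decides on its own whether the work runs on the CPU, GPU, or Neural Engine. A sketch in Python via the pyobjc bindings, assuming macOS, `pip install pyobjc`, and a placeholder photo.png:

```python
# Sketch: run Apple's Vision text recognition from Python via pyobjc.
# Assumes macOS and `pip install pyobjc`; "photo.png" is a placeholder path.
# Vision decides internally whether the work lands on the CPU, GPU, or
# Neural Engine -- the caller never has to pick.
import Vision
from Foundation import NSURL

url = NSURL.fileURLWithPath_("photo.png")
handler = Vision.VNImageRequestHandler.alloc().initWithURL_options_(url, {})
request = Vision.VNRecognizeTextRequest.alloc().init()

ok, error = handler.performRequests_error_([request], None)
if not ok:
    raise RuntimeError(f"Vision request failed: {error}")

for observation in request.results() or []:
    # Each observation is one detected line; take the top candidate string.
    print(observation.topCandidates_(1)[0].string())
```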
Really, the main moment I'm conscious of any of this is when I'm using HandBrake. If the requisite hardware exists, hardware encoding options show up (Intel Quick Sync, AMD VCN, Nvidia NVENC, or VideoToolbox, which is Apple's). Most of the time I don't really know what calculations are being done on which transistors.
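You can see that detection outside the GUI too. A small Python sketch that scrapes HandBrakeCLI's help output for the hardware encoder identifiers it registered on this machine, assuming HandBrakeCLI is on your PATH:

```python
# Sketch: check which hardware encoders this HandBrakeCLI build detected.
# Assumes HandBrakeCLI is on your PATH. The prefixes are HandBrake's real
# encoder identifiers (e.g. nvenc_h264, qsv_h265, vce_h264, vt_h265).
import subprocess

proc = subprocess.run(["HandBrakeCLI", "--help"], capture_output=True, text=True)
help_text = proc.stdout + proc.stderr  # HandBrake logs most output to stderr

for prefix, vendor in [("nvenc_", "Nvidia NVENC"),
                       ("qsv_", "Intel Quick Sync"),
                       ("vce_", "AMD VCN"),
                       ("vt_", "Apple VideoToolbox")]:
    found = sorted({tok for tok in help_text.split() if tok.startswith(prefix)})
    print(f"{vendor}: {', '.join(found) or 'not detected'}")
```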
When I fire up something like Photos or iMovie on my Mac, iPhone, or iPad, different tasks are using different transistors. As long as the task is executed successfully, I don't really care if they're CPU cores, GPU cores, ML cores, RT cores, media engine cores, whatever. It's up to the software developer to harness whatever resources are available to make it work. I think Apple first shipped Neural Engine cores with the A11 Bionic in the iPhone 8/X generation. And the Neural Engine cores are being used far more in 2023 than in 2017.
And some day, someone will add another type of differentiated transistor into a system. And I'll probably use that as well without even knowing.
I'm pretty sure there are 6-8+ other tasks harnessing GPU/ML cores that I'm not even aware of. And there will be more in 2024, whether it's something built into an operating system API or a standalone feature in a specific application.
Unless you only game on your system, it is highly unlikely in late 2023 that your GPU is being used solely for gaming. If you do anything else on your machine, you are probably using the GPU for non-gaming tasks. You just don't realize that software developers have moved some functionality onto these transistors while you were playing your game.