Monday, March 3rd 2025

NVIDIA GeForce RTX 50 Series Faces Compute Performance Issues Due to Dropped 32-bit Support

PassMark Software has identified the root cause behind unexpectedly low compute performance in NVIDIA's new GeForce RTX 5090, RTX 5080, and RTX 5070 Ti GPUs. The culprit: NVIDIA has silently discontinued support for 32-bit OpenCL and CUDA in its "Blackwell" architecture, causing compatibility issues with existing benchmarking tools and applications.

The issue manifested when PassMark's DirectCompute benchmark returned the error code "CL_OUT_OF_RESOURCES (-5)" on RTX 5000 series cards. After investigation, developers confirmed that while the benchmark's primary application has been 64-bit for years, several compute sub-benchmarks still utilize 32-bit code that previously functioned correctly on RTX 4000 and earlier GPUs. This architectural change wasn't clearly documented by NVIDIA, whose developer website continues to display 32-bit code samples and documentation despite the removal of actual support.
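For readers unfamiliar with how such a failure surfaces, here is a minimal, illustrative OpenCL host sketch (not PassMark's actual code; the device selection and kernel steps are placeholders) showing where a legacy build would report the -5 error:

#include <cstdio>
#include <CL/cl.h>

// Wrap an OpenCL call and report its raw error code on failure.
static bool check(cl_int err, const char* what) {
    if (err != CL_SUCCESS) {
        // CL_OUT_OF_RESOURCES is defined as -5 in CL/cl.h.
        std::fprintf(stderr, "%s failed: error %d\n", what, err);
        return false;
    }
    return true;
}

int main() {
    cl_platform_id platform;
    cl_device_id device;
    if (!check(clGetPlatformIDs(1, &platform, nullptr), "clGetPlatformIDs")) return 1;
    if (!check(clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr),
               "clGetDeviceIDs")) return 1;

    cl_int err;
    cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, &err);
    if (!check(err, "clCreateContext")) return 1;
    // ... build the program, create the kernel, enqueue work; per PassMark's
    // report, a legacy 32-bit build fails around this point on Blackwell
    // with CL_OUT_OF_RESOURCES (-5).
    clReleaseContext(ctx);
    return 0;
}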

The impact extends beyond benchmarking software. Applications built on legacy CUDA infrastructure, including technologies like PhysX, will experience significant performance degradation as computational tasks fall back to CPU processing rather than utilizing the GPU's parallel architecture. While this fallback mechanism allows older applications to run on the RTX 40 series and prior hardware, the RTX 5000 series handles these tasks exclusively through the CPU, resulting in substantially lower performance.

PassMark is currently working to port the affected OpenCL code to 64-bit, allowing proper testing of the new GPUs' compute capabilities. However, they warn that many existing applications containing 32-bit OpenCL components may never function properly on RTX 5000 series cards without source code modifications. The benchmark developer also notes this change doesn't fully explain poor DirectX 9 performance, suggesting additional architectural changes may affect legacy rendering pathways. PassMark updated its software today, but legacy benchmarks could still suffer; an older benchmark run without the latest PassMark V11.1 build 1004 patches showed just how much the newest generation suffers without proper software support.
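To make the CPU-fallback behavior concrete, here is a minimal, illustrative OpenCL host sketch (an assumption-level illustration, not how PhysX or PassMark actually implement their fallback): prefer a GPU device, and drop back to a CPU device when no usable GPU path exists.

#include <cstdio>
#include <CL/cl.h>

int main() {
    cl_platform_id platform;
    if (clGetPlatformIDs(1, &platform, nullptr) != CL_SUCCESS) return 1;

    cl_device_id device;
    cl_device_type used = CL_DEVICE_TYPE_GPU;
    if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr) != CL_SUCCESS) {
        // No usable GPU device for this build: fall back to the CPU,
        // trading the GPU's parallelism for plain compatibility.
        used = CL_DEVICE_TYPE_CPU;
        if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &device, nullptr) != CL_SUCCESS)
            return 1;
    }

    cl_uint bits = 0; // the device's address width: 32 or 64
    clGetDeviceInfo(device, CL_DEVICE_ADDRESS_BITS, sizeof(bits), &bits, nullptr);
    std::printf("using %s device, %u-bit addressing\n",
                used == CL_DEVICE_TYPE_GPU ? "GPU" : "CPU", bits);
    return 0;
}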
Sources: PassMark on X, via Tom's Hardware

52 Comments on NVIDIA GeForce RTX 50 Series Faces Compute Performance Issues Due to Dropped 32-bit Support

#1
mb194dc
Oh dear, the problems continue to pile up for 5 series...
#2
MxPhenom 216
ASIC Engineer
Nvidia really didn't think this generation through. If you are going to drop support for this stuff, at least make it known several months prior to launch so software devs can adapt, or just don't do it at all. Ffs
#4
mrnagant
Why did Nvidia skip deprecation and a timeline for removing support? It seems not to have been thought out well. If the 5000 series was going to kill 32-bit, they should have announced the deprecation back in the 3000 or 4000 series days to give developers time to migrate, and to let people avoid buying a 5000 series if they still need 32-bit support for a while.

They need to backpedal on this. Offer at least two drivers, one containing 32-bit support, even if they don't update it as often as the 64-bit one, and support that driver for at least two years or something.

Hell, they could likely even make some money like MS does with ESU on Windows. Have an ESU driver companies can pay for if they need to maintain 32-bit support.
#5
Assimilator
MxPhenom 216: Nvidia really didn't think this generation through. If you are going to drop support for this stuff, at least make it known several months prior to launch so software devs can adapt, or just don't do it at all. Ffs
Or write a compatibility/translation layer to transparently handle 32-bit as 64-bit.

I agree with you that the 5000-series has been a disaster. I really get the impression NVIDIA's A-team is all working on "AI" chips and the B-team was left to do consumer graphics, with the inevitable result that we got a B-grade product and a lot of unnecessary foot-shooting. Huang is not doing his job very well.
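On the translation-layer idea above: one classic ingredient of such shims is to stop handing 32-bit clients raw pointers and instead give them opaque handles into a table kept on the 64-bit side. A toy sketch of that idea, purely illustrative (this is not an actual NVIDIA mechanism, and a real shim would also need cross-process marshalling):

#include <cstdint>
#include <cstdio>
#include <vector>

// 64-bit side: owns the real pointers; 32-bit clients only ever see handles.
class HandleTable {
    std::vector<void*> slots_;
public:
    uint32_t put(void* p) {
        slots_.push_back(p);
        return static_cast<uint32_t>(slots_.size() - 1);
    }
    void* get(uint32_t h) const {
        return h < slots_.size() ? slots_[h] : nullptr;
    }
};

int main() {
    HandleTable table;
    int payload = 42;                  // lives at a 64-bit address
    uint32_t h = table.put(&payload);  // a handle that fits in 32 bits
    std::printf("handle %u -> %d\n", h, *static_cast<int*>(table.get(h)));
    return 0;
}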
#6
_roman_
The customer buys graphics card hardware and graphics card software, which usually works in Windows. Since when has 32-bit support been obsolete in Windows? In the GNU/Linux world, 32-bit was obsolete years ago.

At some point you have to transition away from a technology. With MS-DOS we were lucky to have DOSBox.

I wonder how long Nvidia will support these obsolete features in their driver packages, especially since a driver package is only valid for certain graphics card generations. Obsolete features: 32-bit OpenCL? PhysX? 32-bit CUDA?

--

The whole benchmark is faulty - P*ssmark

www.passmark.com/products/performancetest/history.php

x.com/PassMarkInc/status/1894279961374330896
#7
Niceumemu
There was no real reason to remove this support this gen other than penny-pinching on cards which are already much more expensive than their predecessors, truly earning the "Ngreedia" nickname this time.

The architecture is nearly identical to the previous gen, judging by the performance "improvements" that scale pretty much linearly with transistors and power consumption, so there was no real reason to have done this.
#8
_roman_
mrnagant: If the 5000 series was going to kill 32-bit, they should have announced the deprecation back in the 3000 or 4000 series days to give developers time to migrate.
Quoting the article: This architectural change wasn't clearly documented by NVIDIA, whose developer website continues to display 32-bit code samples and documentation despite the removal of actual support.

I think I read recently that 32-bit CUDA has already been obsolete for 7 years.
#9
MxPhenom 216
ASIC Engineer
Assimilator: Or write a compatibility/translation layer to transparently handle 32-bit as 64-bit.

I agree with you that the 5000-series has been a disaster. I really get the impression NVIDIA's A-team is all working on "AI" chips and the B-team was left to do consumer graphics, with the inevitable result that we got a B-grade product and a lot of unnecessary foot-shooting. Huang is not doing his job very well.
Come on, AMD. The door is wide open for you this time.
#10
Visible Noise
This says more about PassMark than Nvidia. 32-bit code was deprecated by Nvidia seven years ago, and apparently PassMark didn't know their own code base well enough to realize they were still running 32-bit code.

Did they say when they are going to fix their software?

Edit: OpenCL? Even more irrelevant. Is anyone updating their OpenCL drivers? OpenCL was dead a decade ago.
#11
Vya Domus
Niceumemu: There was no real reason to remove this support this gen
Assimilator: Or write a compatibility/translation layer to transparently handle 32-bit as 64-bit.
I suspect there may be something about Blackwell's ISA that hinders 32-bit software from running natively. GPUs don't work like CPUs: you can't really write native software for them, because GPU makers change the ISA often, a lot of the time from architecture to architecture, and everything that runs on the GPU has to be compiled first. That's why I suspect there might be a hardware limitation somewhere; perhaps some 32-bit instructions can't be issued, or something like that.

On top of this, Nvidia has the shitty habit of disclosing absolutely nothing about their ISA, like it's some sort of national secret.
#12
stickleback123
I needed a GPU and was lucky enough to pick up an RTX 5080 for RRP on launch day, as there had been no 4080 Super cards to buy for weeks before and nothing from team red, so I was lucky to get even that and wouldn't have paid more than FE RRP. All the same, this must be the most disappointing generation of cards I can remember, and I can remember back to EGA launching in '84...
#13
valicu2000
The gaming segment means less and less for nGreedya, so why bother? Fewer chips for consumers, more chips for data centers, and more profit...
#14
john_
I can understand Nvidia dropping 32-bit support. With chips designed for servers and AI, Nvidia doesn't really care that much about compatibility with old software. In any case, the solution for anyone wanting to run DirectX 9 games at 4K with 500 fps, or games supporting hardware PhysX, is getting a 3000/4000 series Nvidia GPU and adding it to a second PCIe x16 slot. Someone paying $200-$500 over MSRP can probably pay a little more for 32-bit support.

PS
....and in a year from now people will keep saying how bad AMD hardware, drivers, features, support, everything/you name it, are.......
mrnagant: And to let people avoid buying a 5000 series if they still need 32-bit support for a while.
Aaaaaaaa...........that's why......
Assimilator: Huang is not doing his job very well.
Huang is having a huge problem filling all those AI orders, and he is sending all the chips that somewhat fail quality checks, but can still be called functional, to gamers.
#16
igormp
Vya Domus: I suspect there may be something about Blackwell's ISA that hinders 32-bit software from running natively. GPUs don't work like CPUs: you can't really write native software for them, because GPU makers change the ISA often, a lot of the time from architecture to architecture, and everything that runs on the GPU has to be compiled first. That's why I suspect there might be a hardware limitation somewhere; perhaps some 32-bit instructions can't be issued, or something like that.

On top of this, Nvidia has the shitty habit of disclosing absolutely nothing about their ISA, like it's some sort of national secret.
That's not about the GPU not supporting "32-bit": it's about nvidia's compiler dropping 32-bit support for building new stuff (i.e., targeting 32-bit x86), and about their newest runtime (which runs on the CPU to dispatch work to the GPU) no longer running such 32-bit software on their newest µarch.

Anyhow, as others have already said, nvidia has had 32-bit support deprecated since CUDA 9 (almost 8 years ago):
CUDA Tools
  • 32-bit Linux CUDA Applications. CUDA Toolkit support for 32-bit Linux CUDA applications has been dropped. Existing 32-bit applications will continue to work with the 64-bit driver, but support is deprecated.
Nvidia also never had proper backwards compatibility. Newer products always required an update to the CUDA version, so you'd have to rebuild your application to support the newest hardware nonetheless.
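To illustrate the host-side consequence, a minimal sketch (illustrative only, not anyone's actual guard code): assert at compile time that the build is 64-bit, and query the installed CUDA runtime and driver versions before dispatching work, so a stale toolchain fails loudly rather than mysteriously at launch time.

#include <cstdio>
#include <cuda_runtime.h>

// A 32-bit host build has no supported runtime on current toolkits.
static_assert(sizeof(void*) == 8, "build the host application as 64-bit");

int main() {
    int runtimeVersion = 0, driverVersion = 0;
    if (cudaRuntimeGetVersion(&runtimeVersion) != cudaSuccess ||
        cudaDriverGetVersion(&driverVersion) != cudaSuccess) {
        std::fprintf(stderr, "CUDA runtime/driver query failed\n");
        return 1;
    }
    // Versions are encoded as major*1000 + minor*10, e.g. 12080 for 12.8.
    std::printf("runtime %d.%d, driver %d.%d\n",
                runtimeVersion / 1000, (runtimeVersion % 1000) / 10,
                driverVersion / 1000, (driverVersion % 1000) / 10);
    return 0;
}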
#17
ScaLibBDP
>>...Returned the error code "CL_OUT_OF_RESOURCES (-5)"...

PassMark developers should Not accuse NVIDIA of any wrongdoing, and they simply should Not allocate any blocks of memory greater than 2 GB, or significantly exceeding 2 GB, in 32-bit applications.

Many companies have already dropped full support for 32-bit applications.

PassMark developers stated that some tests are 32-bit, and it is Not clear why they did Not port these tests to 64-bit.

PS: All my internal HPC-related OpenCL 32-bit tests do Not allocate more than 2 GB of memory, and actually I have Not paid attention to these 32-bit verifications in a long time.

Do Not blame somebody else; look at what is wrong on your side first.

Also, in a 64-bit world it is Not possible to mix 64-bit and 32-bit code. Microsoft had a similar problem many years ago when trying to mix 32-bit and 16-bit code in Windows 3.1 with the Win32s extension, Windows 95, Windows for Workgroups, and other 32-bit operating systems of the time. Microsoft's solution was very complex, unreliable, and based on a thunking technique.

In the mid-'90s we had a problem on a financial project with mixing 32-bit and 16-bit code, and a Microsoft DDE (Dynamic Data Exchange Win32 API) client-server solution was used to "execute" 16-bit code from a client DLL in a 32-bit server application. That was very complex, I personally worked on it, and the solution was abandoned after the provider of the 16-bit cryptography DLL finally released a 32-bit version.

I would Not worry about PassMark's problems, and I would Not blame NVIDIA for Not clearly informing PassMark developers about the dropped support.

NVIDIA always makes critical notes in its Release Notes, and companies involved in software development should read them on the NVIDIA website, for example for a driver ABC for a GPU card XYZ.

PassMark developers: just port all these 32-bit tests to the 64-bit world and forget about it.
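As an illustration of the allocation guard being described (a sketch under assumptions, not PassMark's code): query the device's real per-buffer ceiling via CL_DEVICE_MAX_MEM_ALLOC_SIZE and clamp the request, instead of assuming a >2 GB allocation will succeed.

#include <algorithm>
#include <cstdio>
#include <CL/cl.h>

int main() {
    cl_platform_id platform;
    cl_device_id device;
    if (clGetPlatformIDs(1, &platform, nullptr) != CL_SUCCESS ||
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr) != CL_SUCCESS)
        return 1;

    cl_ulong maxAlloc = 0; // largest single buffer the device allows, in bytes
    clGetDeviceInfo(device, CL_DEVICE_MAX_MEM_ALLOC_SIZE,
                    sizeof(maxAlloc), &maxAlloc, nullptr);

    cl_ulong wanted = 3ull << 30;               // a 3 GB request...
    cl_ulong size = std::min(wanted, maxAlloc); // ...clamped to the ceiling

    cl_int err;
    cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, &err);
    if (err != CL_SUCCESS) return 1;
    // On a 32-bit host, size_t is 32 bits, so the clamp above matters twice.
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE,
                                static_cast<size_t>(size), nullptr, &err);
    std::printf("asked for %llu MiB: %s\n",
                static_cast<unsigned long long>(size >> 20),
                err == CL_SUCCESS ? "ok" : "failed");
    if (err == CL_SUCCESS) clReleaseMemObject(buf);
    clReleaseContext(ctx);
    return 0;
}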
#18
Visible Noise
Vya Domus: I suspect there may be something about Blackwell's ISA that hinders 32-bit software from running natively. GPUs don't work like CPUs: you can't really write native software for them, because GPU makers change the ISA often, a lot of the time from architecture to architecture, and everything that runs on the GPU has to be compiled first. That's why I suspect there might be a hardware limitation somewhere; perhaps some 32-bit instructions can't be issued, or something like that.

On top of this, Nvidia has the shitty habit of disclosing absolutely nothing about their ISA, like it's some sort of national secret.
I suspect you haven’t ever coded in your life. Do you even know what OpenCL is?

Hell, it was created by Apple and even they don’t support it anymore.
#19
Vya Domus
igormp: That's not about the GPU not supporting "32-bit": it's about nvidia's compiler dropping 32-bit support for building new stuff
One of the reasons for dropping support for something in a compiler is that the hardware itself no longer supports it. Without knowing what's actually happening under the hood, it's hard to tell why they did this.

GPUs work in a regime where almost everything is compiled right before running anyway, so it's very bizarre to drop support for something unless there was a technical reason to do so. It's one thing to drop support for development and a different thing to remove the ability to run certain software entirely.
#21
KLMR
The current PassMark implementation probably depicts a more realistic picture of the performance the user will obtain from the product.

Things must move on? Sure. This way? Not at all.

Imagine Intel or AMD dropping support for some 32-bit instructions or instruction sets without prior warning... any warning.
#22
Visible Noise
MxPhenom 216: That's not what I'm referring to. With all the shit popping up this gen for Nvidia, the bar is extremely low for AMD to capitalize on.
Capitalize on what? Software they haven’t supported in a decade?
#23
Vya Domus
ScaLibBDP: >>...Returned the error code "CL_OUT_OF_RESOURCES (-5)"...

PassMark developers should Not accuse NVIDIA of any wrongdoing, and they simply should Not allocate any blocks of memory greater than 2 GB, or significantly exceeding 2 GB, in 32-bit applications.
That error can come up for many different reasons, not to mention that if this were the case it would also crash on other cards which do have 32-bit support.
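A bare -5 is indeed just one of many codes an OpenCL call can hand back, and on its own it says little about the cause. A small illustrative helper (names taken from CL/cl.h) for decoding the common ones:

#include <cstdio>
#include <CL/cl.h>

// Translate a handful of frequent OpenCL error codes into their names.
static const char* clErrorName(cl_int err) {
    switch (err) {
        case CL_SUCCESS:                       return "CL_SUCCESS";                       //   0
        case CL_DEVICE_NOT_FOUND:              return "CL_DEVICE_NOT_FOUND";              //  -1
        case CL_DEVICE_NOT_AVAILABLE:          return "CL_DEVICE_NOT_AVAILABLE";          //  -2
        case CL_MEM_OBJECT_ALLOCATION_FAILURE: return "CL_MEM_OBJECT_ALLOCATION_FAILURE"; //  -4
        case CL_OUT_OF_RESOURCES:              return "CL_OUT_OF_RESOURCES";              //  -5
        case CL_OUT_OF_HOST_MEMORY:            return "CL_OUT_OF_HOST_MEMORY";            //  -6
        case CL_INVALID_KERNEL_ARGS:           return "CL_INVALID_KERNEL_ARGS";           // -52
        default:                               return "unrecognized OpenCL error";
    }
}

int main() {
    std::printf("%d -> %s\n", -5, clErrorName(-5));
    return 0;
}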
#24
LastDudeALive
Honestly, if the only lasting impact of the 50 series is forcing software devs to finally get off their asses and update all 32-bit code still in common programs, it will have been a good generation.
#25
_roman_
Please, no "Ngreedia" posts. The topic is about PassMark, the Nvidia software stack, Nvidia graphics cards, and maybe Windows operating systems.

We should learn from this topic. PassMark is not really decent benchmark software. Someone there is unable to read the release notes. They do not have proper coders, or enough coders, to keep their benchmarks up to date.

If you do not know which libraries you use in a project, you have an issue.

From my point of view, 32-bit is dead for personal computers.