
AMD Radeon RX 9070 XT Alleged Benchmark Leaks, Underwhelming Performance

Joined
May 10, 2023
Messages
385 (0.64/day)
Location
Brazil
Processor 5950x
Motherboard B550 ProArt
Cooling Fuma 2
Memory 4x32GB 3200MHz Corsair LPX
Video Card(s) 2x RTX 3090
Display(s) LG 42" C2 4k OLED
Power Supply XPG Core Reactor 850W
Software I use Arch btw
SAM and Rebar are the same thing?
Yes, but not quite; there are some minor specifics to SAM (SAM is basically ReBAR with some extra optimizations on top).
The way you said it made it sound like you could enable ReBAR, get 10~17% more performance, and then add SAM to get another 10~17% on top of that, which is not the case.
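For what it's worth, the easiest way to see what ReBAR/SAM actually changes is to query the GPU's memory heaps through Vulkan: without it the device-local, host-visible window is typically capped around 256 MB, while with it that heap usually covers (almost) the whole VRAM. A minimal sketch of that check, assuming the Vulkan SDK headers and a single GPU:

```cpp
// Sketch: look for a DEVICE_LOCAL + HOST_VISIBLE heap larger than the classic
// 256 MB BAR window, which is the usual sign that Resizable BAR / SAM is active.
#include <vulkan/vulkan.h>
#include <cstdio>

int main() {
    VkApplicationInfo app{VK_STRUCTURE_TYPE_APPLICATION_INFO};
    app.apiVersion = VK_API_VERSION_1_1;
    VkInstanceCreateInfo ici{VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO};
    ici.pApplicationInfo = &app;

    VkInstance instance;
    if (vkCreateInstance(&ici, nullptr, &instance) != VK_SUCCESS) return 1;

    uint32_t count = 1;                       // only inspect the first GPU here
    VkPhysicalDevice dev;
    vkEnumeratePhysicalDevices(instance, &count, &dev);
    if (count == 0) return 1;

    VkPhysicalDeviceMemoryProperties mem;
    vkGetPhysicalDeviceMemoryProperties(dev, &mem);
    for (uint32_t i = 0; i < mem.memoryTypeCount; ++i) {
        const VkMemoryPropertyFlags f = mem.memoryTypes[i].propertyFlags;
        if ((f & VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT) &&
            (f & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT)) {
            const VkDeviceSize heap = mem.memoryHeaps[mem.memoryTypes[i].heapIndex].size;
            std::printf("Device-local, host-visible heap: %llu MiB\n",
                        static_cast<unsigned long long>(heap >> 20));
            // ~256 MiB -> ReBAR/SAM likely off; roughly the full VRAM -> likely on.
        }
    }
    vkDestroyInstance(instance, nullptr);
    return 0;
}
```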
 
Joined
Jun 2, 2017
Messages
9,411 (3.40/day)
System Name Best AMD Computer
Processor AMD 7900X3D
Motherboard Asus X670E E Strix
Cooling In Win SR36
Memory GSKILL DDR5 32GB 5200 30
Video Card(s) Sapphire Pulse 7900XT (Watercooled)
Storage Corsair MP 700, Seagate 530 2Tb, Adata SX8200 2TBx2, Kingston 2 TBx2, Micron 8 TB, WD AN 1500
Display(s) GIGABYTE FV43U
Case Corsair 7000D Airflow
Audio Device(s) Corsair Void Pro, Logitch Z523 5.1
Power Supply Deepcool 1000M
Mouse Logitech g7 gaming mouse
Keyboard Logitech G510
Software Windows 11 Pro 64 Steam. GOG, Uplay, Origin
Benchmark Scores Firestrike: 46183 Time Spy: 25121
Yes, but not quite; there are some minor specifics to SAM (SAM is basically ReBAR with some extra optimizations on top).
The way you said it made it sound like you could enable ReBAR, get 10~17% more performance, and then add SAM to get another 10~17% on top of that, which is not the case.

Thanks for that. I read up on it.
 

njshah

New Member
Joined
Dec 27, 2024
Messages
1 (0.50/day)
I don't get why AMD would go this route; RT and upscaling are just Nvidia snake oil. Just call them out as such. The tech press not saying what's in front of them is the other problem. Is pressure being applied?

The performance hit from RT for what amounts to slightly better shadows and reflections isn't worth it at all.

Upscaling is another step backwards, reducing image quality and introducing shimmering and other artifacting.

AMD shouldn't bother with either, and should say why they're not. They'll never get anywhere copying Nvidia's sales tactics while doing it much worse.

Good for AMD, then, that you aren't leading it; you'd run it into the ground in 6 months lol

RT is the single biggest leap in photorealism in real-time rendering, EVER. Either you use AMD and can't enjoy the feature properly without FSR Performance, or you are simply trolling. It's here to stay. Games are starting to use RT features that can't even be disabled anymore. AMD is lagging behind sorely because of RT, upscaling and other bells and whistles; RDNA2 was very competitive in raster and often faster, yet it still got beaten because the vast majority wants these new features. A very tiny minority on the Internet moans about RT being a gimmick.

Upscaling has surpassed native in more and more ways ever since DLSS 2.0 launched; if you still get artifacting or shimmering, then you are an FSR user.

And anyone who calls it a gimmick has his brain shoved where the sun doesn't shine.

RT, upscaling and AI are the future, and any company that does not implement better solutions will be left in the dust.
 
Joined
Jan 18, 2020
Messages
843 (0.47/day)
Good for AMD, then, that you aren't leading it; you'd run it into the ground in 6 months lol

RT is the single biggest leap in photorealism in real-time rendering, EVER. Either you use AMD and can't enjoy the feature properly without FSR Performance, or you are simply trolling. It's here to stay. Games are starting to use RT features that can't even be disabled anymore. AMD is lagging behind sorely because of RT, upscaling and other bells and whistles; RDNA2 was very competitive in raster and often faster, yet it still got beaten because the vast majority wants these new features. A very tiny minority on the Internet moans about RT being a gimmick.

Upscaling has surpassed native in more and more ways ever since DLSS 2.0 launched; if you still get artifacting or shimmering, then you are an FSR user.

And anyone who calls it a gimmick has his brain shoved where the sun doesn't shine.

RT, upscaling and AI are the future, and any company that does not implement better solutions will be left in the dust.

Don't agree at all; it's amazing to me how Nvidia has been able to take image quality backwards while imposing a huge performance penalty for barely noticeable reflection and lighting improvements.

Unfortunately it's very hard to do image quality comparisons between engines and different technologies. It's more subjective than fps... There are games from 5+ years ago that look and run better than what's coming out now, imo.

It's not a gimmick, it's snake oil that exists so Nvidia can sell more GPUs. It's as simple as this: if a 4-year-old card can run every game at 4K 120+ fps, no one will have any reason to upgrade.

So they came up with RT and added upscaling to the mix.
 
Joined
Dec 1, 2022
Messages
272 (0.36/day)
Don't agree at all; it's amazing to me how Nvidia has been able to take image quality backwards while imposing a huge performance penalty for barely noticeable reflection and lighting improvements.

Unfortunately it's very hard to do image quality comparisons between engines and different technologies. It's more subjective than fps... There are games from 5+ years ago that look and run better than what's coming out now, imo.

It's not a gimmick, it's snake oil that exists so Nvidia can sell more GPUs. It's as simple as this: if a 4-year-old card can run every game at 4K 120+ fps, no one will have any reason to upgrade.

So they came up with RT and added upscaling to the mix.
Agreed, real-time ray tracing is impressive tech, but it's not worth the performance trade-off for more reflections and more realistic lighting.
Then there's the fact that upscaling reduces image quality, combined with games coming out now with reduced texture quality that looks like Vaseline smeared all over the screen, in order to push the RT marketing even on those with low-end cards. Also, some games are coming out with RT on by default, or with no option to turn RT off, which makes AMD cards look worse than they really are. I think Nvidia realized they made a mistake with the GTX 10 series; those cards lasted for years without needing an upgrade, so now Nvidia sells software features, some of which are only available if you buy the latest GPU.
 
Joined
Sep 6, 2013
Messages
3,427 (0.83/day)
Location
Athens, Greece
System Name 3 desktop systems: Gaming / Internet / HTPC
Processor Ryzen 5 7600 / Ryzen 5 4600G / Ryzen 5 5500
Motherboard X670E Gaming Plus WiFi / MSI X470 Gaming Plus Max (1) / MSI X470 Gaming Plus Max (2)
Cooling Aigo ICE 400SE / Segotep T4 / Νoctua U12S
Memory Kingston FURY Beast 32GB DDR5 6000 / 16GB JUHOR / 32GB G.Skill RIPJAWS 3600 + Aegis 3200
Video Card(s) ASRock RX 6600 / Vega 7 integrated / Radeon RX 580
Storage NVMes, ONLY NVMes / NVMes, SATA Storage / NVMe, SATA, external storage
Display(s) Philips 43PUS8857/12 UHD TV (120Hz, HDR, FreeSync Premium) / 19'' HP monitor + BlitzWolf BW-V5
Case Sharkoon Rebel 12 / CoolerMaster Elite 361 / Xigmatek Midguard
Audio Device(s) onboard
Power Supply Chieftec 850W / Silver Power 400W / Sharkoon 650W
Mouse CoolerMaster Devastator III Plus / CoolerMaster Devastator / Logitech
Keyboard CoolerMaster Devastator III Plus / CoolerMaster Devastator / Logitech
Software Windows 10 / Windows 10&Windows 11 / Windows 10
Do you have all your points saved in notepad so you can just copy-paste them multiple times a day? Or do you use Onenote?
I see experience here. Which one is better for the job you describe, Notepad or OneNote?
I am only asking out of curiosity. I prefer enriching my ignore list to doing this as a part-time job.

Agreed, real-time ray tracing is impressive tech, but it's not worth the performance trade-off for more reflections and more realistic lighting.
RT is going to become a necessity really soon, and even people who insist on raster will be forced to reconsider. The way to do it is the same as with hardware PhysX: when hardware PhysX came out, programmers suddenly forgot how to program physics effects on CPUs. You either had physics effects with hardware PhysX, or almost nothing at all without it.
Alice was one of my favorite games back then, and I totally refused to play it without PhysX at high. So a second mid-range Nvidia GPU was used in combination with an AMD primary card to play the game. Of course, it was also necessary to crack Nvidia's driver lock for this to work. Hardware PhysX in the end failed to gain traction, so now we again enjoy high-quality physics effects without needing to pay Nvidia for what was completely free in the past.

Now about RT. In that latest video from Tim of HUB, where he spotted full-screen noise when enabling RT (maybe they rejected his application to work at Nvidia? Strange that no other site, including TPU, investigated these findings), there was at least one comparison in Indiana Jones that completely shocked me. In both images RT was on; the difference was that one used RT NORMAL and the other RT FULL.
[Screenshot: Indiana Jones lighting comparison, RT NORMAL vs RT FULL]

The difference in lighting is shocking, to say the least. In NORMAL it is like the graphics engine is malfunctioning, not working as expected, or like there is a huge bug somewhere in the code. In the past, games that targeted both audiences who wanted RT and audiences who were happier with high-fps raster graphics had lighting differences between RT and raster modes, but you had to pause the image and start investigating the lighting to really see them.
Here, in a game where RT is the only option, the difference can be spotted with your eyes closed. Lighting is totally broken when leaving RT at the NORMAL setting, the same as having PhysX on low 15 years ago.

I think Nvidia will try to push programmers into using libraries where the gamer only gets correct lighting with FULL RT. With medium, normal, low or whatever other option the gamer chooses in settings, lighting will be completely broken. That will drive people to start paying 4-digit prices just to get the lighting they enjoy for free today.

Maybe TPU would like to investigate it... or maybe not?
 
Joined
Apr 30, 2020
Messages
1,007 (0.59/day)
System Name S.L.I + RTX research rig
Processor Ryzen 7 5800X 3D.
Motherboard MSI MEG ACE X570
Cooling Corsair H150i Cappellx
Memory Corsair Vengeance pro RGB 3200mhz 32Gbs
Video Card(s) 2x Dell RTX 2080 Ti in S.L.I
Storage Western digital Sata 6.0 SDD 500gb + fanxiang S660 4TB PCIe 4.0 NVMe M.2
Display(s) HP X24i
Case Corsair 7000D Airflow
Power Supply EVGA G+1600watts
Mouse Corsair Scimitar
Keyboard Cosair K55 Pro RGB
I see experience here. Which one is better for the job you describe, Notepad or OneNote?
I am only asking out of curiosity. I prefer enriching my ignore list to doing this as a part-time job.


RT is going to become a necessity really soon, and even people who insist on raster will be forced to reconsider. The way to do it is the same as with hardware PhysX: when hardware PhysX came out, programmers suddenly forgot how to program physics effects on CPUs. You either had physics effects with hardware PhysX, or almost nothing at all without it.
Alice was one of my favorite games back then, and I totally refused to play it without PhysX at high. So a second mid-range Nvidia GPU was used in combination with an AMD primary card to play the game. Of course, it was also necessary to crack Nvidia's driver lock for this to work. Hardware PhysX in the end failed to gain traction, so now we again enjoy high-quality physics effects without needing to pay Nvidia for what was completely free in the past.

Now about RT. In that latest video from Tim of HUB, where he spotted full-screen noise when enabling RT (maybe they rejected his application to work at Nvidia? Strange that no other site, including TPU, investigated these findings), there was at least one comparison in Indiana Jones that completely shocked me. In both images RT was on; the difference was that one used RT NORMAL and the other RT FULL.
[Screenshot: Indiana Jones lighting comparison, RT NORMAL vs RT FULL]
The difference in lighting is shocking, to say the least. In NORMAL it is like the graphics engine is malfunctioning, not working as expected, or like there is a huge bug somewhere in the code. In the past, games that targeted both audiences who wanted RT and audiences who were happier with high-fps raster graphics had lighting differences between RT and raster modes, but you had to pause the image and start investigating the lighting to really see them.
Here, in a game where RT is the only option, the difference can be spotted with your eyes closed. Lighting is totally broken when leaving RT at the NORMAL setting, the same as having PhysX on low 15 years ago.

I think Nvidia will try to push programmers into using libraries where the gamer only gets correct lighting with FULL RT. With medium, normal, low or whatever other option the gamer chooses in settings, lighting will be completely broken. That will drive people to start paying 4-digit prices just to get the lighting they enjoy for free today.

Maybe TPU would like to investigate it... or maybe not?
I have a problem with that.
Every single game I've seen with CPU PhysX has the same problem on DX12: a STUTTER FEST from synchronization problems. It's using the same base main thread to sync up. GPUs are still many magnitudes faster than CPUs in PhysX; my 2080 Ti is literally 4 times faster than my 5800X3D in PhysX. The problem is that Nvidia took GPU physics away from the 4000 series. No one has noticed until they try to use it in an older game that has GPU PhysX which can be enabled, and it runs like crap on an RTX 4090.
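To illustrate the main-thread sync point I'm talking about: the typical frame step with the PhysX SDK (a rough sketch of the documented simulate/fetchResults pattern; the frame-loop framing is just an example, not from any particular game) blocks the calling thread until the physics step is finished, and that is exactly where a long physics step turns into stutter:

```cpp
// Rough sketch of the classic blocking PhysX step inside a game's frame loop.
#include <PxPhysicsAPI.h>

using namespace physx;

void gameFrame(PxScene* scene, float dt)   // hypothetical per-frame function
{
    // ... input, gameplay and animation work on the main thread ...

    scene->simulate(dt);        // kick off the physics step (worker threads may run it)
    scene->fetchResults(true);  // 'true' = block this thread until the step completes

    // ... read back rigid-body transforms and issue draw calls ...
    // If simulate() takes longer than a frame, fetchResults(true) is where
    // the main thread stalls and the frame-time spike shows up.
}
```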
 
Joined
Sep 6, 2013
Messages
3,427 (0.83/day)
Location
Athens, Greece
System Name 3 desktop systems: Gaming / Internet / HTPC
Processor Ryzen 5 7600 / Ryzen 5 4600G / Ryzen 5 5500
Motherboard X670E Gaming Plus WiFi / MSI X470 Gaming Plus Max (1) / MSI X470 Gaming Plus Max (2)
Cooling Aigo ICE 400SE / Segotep T4 / Νoctua U12S
Memory Kingston FURY Beast 32GB DDR5 6000 / 16GB JUHOR / 32GB G.Skill RIPJAWS 3600 + Aegis 3200
Video Card(s) ASRock RX 6600 / Vega 7 integrated / Radeon RX 580
Storage NVMes, ONLY NVMes / NVMes, SATA Storage / NVMe, SATA, external storage
Display(s) Philips 43PUS8857/12 UHD TV (120Hz, HDR, FreeSync Premium) / 19'' HP monitor + BlitzWolf BW-V5
Case Sharkoon Rebel 12 / CoolerMaster Elite 361 / Xigmatek Midguard
Audio Device(s) onboard
Power Supply Chieftec 850W / Silver Power 400W / Sharkoon 650W
Mouse CoolerMaster Devastator III Plus / CoolerMaster Devastator / Logitech
Keyboard CoolerMaster Devastator III Plus / CoolerMaster Devastator / Logitech
Software Windows 10 / Windows 10&Windows 11 / Windows 10
My 2080 Ti is literally 4 times faster than my 5800X3D in PhysX.
Because the software version was always crap: slow, single-threaded and using MMX, at least in its first versions.
GPU PhysX which can be enabled, and it runs like crap on an RTX 4090.
Didn't know that. Why do that? Maybe they didn't want to spend any more time supporting it? The only option would be a secondary GPU in a second PCIe x16 slot. Maybe even a 1-lane PCIe slot would be enough for PhysX.
 
Joined
Apr 30, 2020
Messages
1,007 (0.59/day)
System Name S.L.I + RTX research rig
Processor Ryzen 7 5800X 3D.
Motherboard MSI MEG ACE X570
Cooling Corsair H150i Cappellx
Memory Corsair Vengeance pro RGB 3200mhz 32Gbs
Video Card(s) 2x Dell RTX 2080 Ti in S.L.I
Storage Western digital Sata 6.0 SDD 500gb + fanxiang S660 4TB PCIe 4.0 NVMe M.2
Display(s) HP X24i
Case Corsair 7000D Airflow
Power Supply EVGA G+1600watts
Mouse Corsair Scimitar
Keyboard Cosair K55 Pro RGB
Because the software version was always crap: slow, single-threaded and using MMX, at least in its first versions.

Didn't know that. Why do that? Maybe they didn't want to spend any more time supporting it? The only option would be a secondary GPU in a second PCIe x16 slot. Maybe even a 1-lane PCIe slot would be enough for PhysX.
Did you not read that my RTX 2080 is 4 times faster than a CPU in PhysX? A GPU on PCI Express x1 would have no hope of coming close.
 
Joined
Apr 1, 2022
Messages
22 (0.02/day)
Nvidia doesn't dictate anything. The company will have problems once TSMC stops releasing new processes, which will inevitably happen because Moore's law has been dead for a while already.
Nvidia relies on the older 6 nm and 4 nm nodes, and this is a disaster for them.
First of all...

Acknowledging Moore's Law as anything more than a tactic to convince investors is simply uninformed.

And second, the slowed rate of transistor density doubling is something that affects ALL companies that design chip architectures. If anything, NVIDIA has traditionally been one of the companies that have not relied on a superior process node to make their products competitive.

And finally, let's remember that AMD, Intel, and NVIDIA are all American companies. They aren't each other's enemies, and none of them is any less of a greedy corporation than the others.
 
Joined
Jun 2, 2017
Messages
9,411 (3.40/day)
System Name Best AMD Computer
Processor AMD 7900X3D
Motherboard Asus X670E E Strix
Cooling In Win SR36
Memory GSKILL DDR5 32GB 5200 30
Video Card(s) Sapphire Pulse 7900XT (Watercooled)
Storage Corsair MP 700, Seagate 530 2Tb, Adata SX8200 2TBx2, Kingston 2 TBx2, Micron 8 TB, WD AN 1500
Display(s) GIGABYTE FV43U
Case Corsair 7000D Airflow
Audio Device(s) Corsair Void Pro, Logitch Z523 5.1
Power Supply Deepcool 1000M
Mouse Logitech g7 gaming mouse
Keyboard Logitech G510
Software Windows 11 Pro 64 Steam. GOG, Uplay, Origin
Benchmark Scores Firestrike: 46183 Time Spy: 25121
I have a problem with that.
Every single game I've seen with CPU PhysX has the same problem on DX12: a STUTTER FEST from synchronization problems. It's using the same base main thread to sync up. GPUs are still many magnitudes faster than CPUs in PhysX; my 2080 Ti is literally 4 times faster than my 5800X3D in PhysX. The problem is that Nvidia took GPU physics away from the 4000 series. No one has noticed until they try to use it in an older game that has GPU PhysX which can be enabled, and it runs like crap on an RTX 4090.
Once again Nvidia removes a feature and does not tell anyone.
 
Joined
Jun 19, 2024
Messages
163 (0.84/day)
System Name XPS, Lenovo and HP Laptops, HP Xeon Mobile Workstation, HP Servers, Dell Desktops
Processor Everything from Turion to 13900kf
Motherboard MSI - they own the OEM market
Cooling Air on laptops, lots of air on servers, AIO on desktops
Memory I think one of the laptops is 2GB, to 64GB on gamer, to 128GB on ZFS Filer
Video Card(s) A pile up to my knee, with a RTX 4090 teetering on top
Storage Rust in the closet, solid state everywhere else
Display(s) Laptop crap, LG UltraGear of various vintages
Case OEM and a 42U rack
Audio Device(s) Headphones
Power Supply Whole home UPS w/Generac Standby Generator
Software ZFS, UniFi Network Application, Entra, AWS IoT Core, Splunk
Benchmark Scores 1.21 GigaBungholioMarks
Joined
Apr 30, 2020
Messages
1,007 (0.59/day)
System Name S.L.I + RTX research rig
Processor Ryzen 7 5800X 3D.
Motherboard MSI MEG ACE X570
Cooling Corsair H150i Cappellx
Memory Corsair Vengeance pro RGB 3200mhz 32Gbs
Video Card(s) 2x Dell RTX 2080 Ti in S.L.I
Storage Western digital Sata 6.0 SDD 500gb + fanxiang S660 4TB PCIe 4.0 NVMe M.2
Display(s) HP X24i
Case Corsair 7000D Airflow
Power Supply EVGA G+1600watts
Mouse Corsair Scimitar
Keyboard Cosair K55 Pro RGB
Dude, PhysX has been open source for a decade. They didn’t take anything away, they gave it to everyone.



From my research, it's only "open source" for the CPU and nothing else.
Once again Nvidia removes a feature and does not tell anyone.

I noticed this when the RTX 4090 got reviewed; I even mentioned it to W1zzard in relation to his GPU-Z. Seems more like no one cared to say anything about it.
 
Joined
Jun 19, 2024
Messages
163 (0.84/day)
System Name XPS, Lenovo and HP Laptops, HP Xeon Mobile Workstation, HP Servers, Dell Desktops
Processor Everything from Turion to 13900kf
Motherboard MSI - they own the OEM market
Cooling Air on laptops, lots of air on servers, AIO on desktops
Memory I think one of the laptops is 2GB, to 64GB on gamer, to 128GB on ZFS Filer
Video Card(s) A pile up to my knee, with a RTX 4090 teetering on top
Storage Rust in the closet, solid state everywhere else
Display(s) Laptop crap, LG UltraGear of various vintages
Case OEM and a 42U rack
Audio Device(s) Headphones
Power Supply Whole home UPS w/Generac Standby Generator
Software ZFS, UniFi Network Application, Entra, AWS IoT Core, Splunk
Benchmark Scores 1.21 GigaBungholioMarks
From my research, it's only "open source" for the CPU and nothing else.
Once again Nvidia removes a feature and does not tell anyone.


I noticed this when the RTX 4090 got reviewed; I even mentioned it to W1zzard in relation to his GPU-Z. Seems more like no one cared to say anything about it.

GPU support was removed around the same time. When CPUs became fast enough to run physics, there was no point in running it on the GPU and incurring all the communication overhead.

Edit: I looked it up; Nvidia stopped GPU development of PhysX in 2018 because nobody was using it anymore.
 
Joined
May 10, 2023
Messages
385 (0.64/day)
Location
Brazil
Processor 5950x
Motherboard B550 ProArt
Cooling Fuma 2
Memory 4x32GB 3200MHz Corsair LPX
Video Card(s) 2x RTX 3090
Display(s) LG 42" C2 4k OLED
Power Supply XPG Core Reactor 850W
Software I use Arch btw
From my research, it's only "open source" for the CPU and nothing else.
Once again Nvidia removes a feature and does not tell anyone.
No, that repo has the GPU code as well (reliant on CUDA), although I don't think anyone uses that.
The O3DE fork also has support for the GPU via CUDA.
FWIW, there are many other physics engines nowadays that are really good, and doing those physics calculations on the CPU has become way better given the advancements in SIMD and whatnot.
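For reference, this is roughly what turning on the CUDA path looks like in the open PhysX 5 SDK (a sketch based on the public API as I recall it; error handling and the rest of the scene setup are left out), which is also why the GPU part only ever runs on Nvidia hardware:

```cpp
// Sketch: creating a PhysX scene with GPU rigid-body dynamics (CUDA) enabled,
// falling back to the normal CPU path when no valid CUDA context is available.
#include <PxPhysicsAPI.h>

using namespace physx;

PxScene* createScene(PxPhysics& physics, PxFoundation& foundation)
{
    PxCudaContextManagerDesc cudaDesc;
    PxCudaContextManager* cuda = PxCreateCudaContextManager(foundation, cudaDesc);

    PxSceneDesc desc(physics.getTolerancesScale());
    desc.gravity       = PxVec3(0.0f, -9.81f, 0.0f);
    desc.cpuDispatcher = PxDefaultCpuDispatcherCreate(2);
    desc.filterShader  = PxDefaultSimulationFilterShader;

    if (cuda && cuda->contextIsValid()) {
        desc.cudaContextManager = cuda;
        desc.flags            |= PxSceneFlag::eENABLE_GPU_DYNAMICS;  // solve bodies on the GPU
        desc.broadPhaseType    = PxBroadPhaseType::eGPU;             // GPU broad phase
    }
    // Without CUDA the same scene description simply runs the CPU solver.
    return physics.createScene(desc);
}
```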
 
Joined
Apr 14, 2022
Messages
768 (0.78/day)
Location
London, UK
Processor AMD Ryzen 7 5800X3D
Motherboard ASUS B550M-Plus WiFi II
Cooling Noctua U12A chromax.black
Memory Corsair Vengeance 32GB 3600Mhz
Video Card(s) Palit RTX 4080 GameRock OC
Storage Samsung 970 Evo Plus 1TB + 980 Pro 2TB
Display(s) Acer Nitro XV271UM3B IPS 180Hz
Case Asus Prime AP201
Audio Device(s) Creative Gigaworks - Razer Blackshark V2 Pro
Power Supply Corsair SF750
Mouse Razer Viper
Keyboard Asus ROG Falchion
Software Windows 11 64bit
I noticed that in the 4080 and 4090 reviews PhysX is missing, but in my case it is ticked.
What am I missing?

[Screenshot: PhysX shown as ticked on the poster's RTX 4080]
 
Joined
Apr 30, 2020
Messages
1,007 (0.59/day)
System Name S.L.I + RTX research rig
Processor Ryzen 7 5800X 3D.
Motherboard MSI MEG ACE X570
Cooling Corsair H150i Cappellx
Memory Corsair Vengeance pro RGB 3200mhz 32Gbs
Video Card(s) 2x Dell RTX 2080 Ti in S.L.I
Storage Western digital Sata 6.0 SDD 500gb + fanxiang S660 4TB PCIe 4.0 NVMe M.2
Display(s) HP X24i
Case Corsair 7000D Airflow
Power Supply EVGA G+1600watts
Mouse Corsair Scimitar
Keyboard Cosair K55 Pro RGB
GPU support was removed around the same time. When CPUs became fast enough to run physics, there was no point in running it on the GPU and incurring all the communication overhead.

Edit: I looked it up; Nvidia stopped GPU development of PhysX in 2018 because nobody was using it anymore.
There is more overhead on the CPU compared to the GPU, so that statement contradicts the actual facts of how it works in games when it's applied. It is still locked to the main thread for anything done in gaming, and it's also synced to that same thread.

That edit is funny, because I can play a bunch of games where PhysX will show up when I enable it to show up in the Nvidia Control Panel, and there are recent games I have, from 2020 on up, that show it.

No, that repo has the GPU code as well (reliant on CUDA), although I don't think anyone uses that.
The O3DE fork also has support for the GPU via CUDA.
FWIW, there are many other physics engines nowadays that are really good, and doing those physics calculations on the CPU has become way better given the advancements in SIMD and whatnot.

If it's reliant on CUDA, then it won't work on anything but an Nvidia GPU. So why bother calling it open source when it's not?

That's why I said it's only "open source" for the CPU part, as that isn't limited to CUDA.
 
Joined
May 10, 2023
Messages
385 (0.64/day)
Location
Brazil
Processor 5950x
Motherboard B550 ProArt
Cooling Fuma 2
Memory 4x32GB 3200MHz Corsair LPX
Video Card(s) 2x RTX 3090
Display(s) LG 42" C2 4k OLED
Power Supply XPG Core Reactor 850W
Software I use Arch btw
Why bother calling it open source when it's not?
It is open source; you're free to use it anywhere you want and modify it however you wish.
You're even free to port the CUDA-specific parts to Vulkan, OpenCL or whatever else you may want.

Many machine learning frameworks are also open source but mostly work with CUDA on the GPU side, because that's what most users use and it's the best API to work with. No one stops calling those "open source" because of that.
 
Joined
Sep 6, 2013
Messages
3,427 (0.83/day)
Location
Athens, Greece
System Name 3 desktop systems: Gaming / Internet / HTPC
Processor Ryzen 5 7600 / Ryzen 5 4600G / Ryzen 5 5500
Motherboard X670E Gaming Plus WiFi / MSI X470 Gaming Plus Max (1) / MSI X470 Gaming Plus Max (2)
Cooling Aigo ICE 400SE / Segotep T4 / Νoctua U12S
Memory Kingston FURY Beast 32GB DDR5 6000 / 16GB JUHOR / 32GB G.Skill RIPJAWS 3600 + Aegis 3200
Video Card(s) ASRock RX 6600 / Vega 7 integrated / Radeon RX 580
Storage NVMes, ONLY NVMes / NVMes, SATA Storage / NVMe, SATA, external storage
Display(s) Philips 43PUS8857/12 UHD TV (120Hz, HDR, FreeSync Premium) / 19'' HP monitor + BlitzWolf BW-V5
Case Sharkoon Rebel 12 / CoolerMaster Elite 361 / Xigmatek Midguard
Audio Device(s) onboard
Power Supply Chieftec 850W / Silver Power 400W / Sharkoon 650W
Mouse CoolerMaster Devastator III Plus / CoolerMaster Devastator / Logitech
Keyboard CoolerMaster Devastator III Plus / CoolerMaster Devastator / Logitech
Software Windows 10 / Windows 10&Windows 11 / Windows 10
Did you not read that my RTX 2080 is 4 times faster than a CPU in PhysX? A GPU on PCI Express x1 would have no hope of coming close.
Did you not read what I wrote? I gave you an explanation of why the CPU version is so slow. And please try the 2080 for physics first, then come tell me it is slow when you limit the PCIe lanes. It's not processing graphics, and it comes with a huge amount of VRAM for just physics. I haven't checked it, but I doubt it becomes so slow that you have to avoid it completely. Just run a couple of benchmarks and see what happens. It wouldn't harm you.
 
Joined
Apr 30, 2020
Messages
1,007 (0.59/day)
System Name S.L.I + RTX research rig
Processor Ryzen 7 5800X 3D.
Motherboard MSI MEG ACE X570
Cooling Corsair H150i Cappellx
Memory Corsair Vengeance pro RGB 3200mhz 32Gbs
Video Card(s) 2x Dell RTX 2080 Ti in S.L.I
Storage Western digital Sata 6.0 SDD 500gb + fanxiang S660 4TB PCIe 4.0 NVMe M.2
Display(s) HP X24i
Case Corsair 7000D Airflow
Power Supply EVGA G+1600watts
Mouse Corsair Scimitar
Keyboard Cosair K55 Pro RGB
Did you not read what I wrote? I gave you an explanation of why the CPU version is so slow. And please try the 2080 for physics first, then come tell me it is slow when you limit the PCIe lanes. It's not processing graphics, and it comes with a huge amount of VRAM for just physics. I haven't checked it, but I doubt it becomes so slow that you have to avoid it completely. Just run a couple of benchmarks and see what happens. It wouldn't harm you.
My 5800X3D is still 4 times slower than a single RTX 2080 Ti in PhysX calculations.
 
Joined
Jun 19, 2024
Messages
163 (0.84/day)
System Name XPS, Lenovo and HP Laptops, HP Xeon Mobile Workstation, HP Servers, Dell Desktops
Processor Everything from Turion to 13900kf
Motherboard MSI - they own the OEM market
Cooling Air on laptops, lots of air on servers, AIO on desktops
Memory I think one of the laptops is 2GB, to 64GB on gamer, to 128GB on ZFS Filer
Video Card(s) A pile up to my knee, with a RTX 4090 teetering on top
Storage Rust in the closet, solid state everywhere else
Display(s) Laptop crap, LG UltraGear of various vintages
Case OEM and a 42U rack
Audio Device(s) Headphones
Power Supply Whole home UPS w/Generac Standby Generator
Software ZFS, UniFi Network Application, Entra, AWS IoT Core, Splunk
Benchmark Scores 1.21 GigaBungholioMarks
My 5800X3D is still 4 times slower than a single RTX 2080 Ti in PhysX calculations.

How does it affect real-world game performance? Which gives better performance: running a current version of PhysX on a CPU, or a 3.x (or earlier) version on a GPU?
 
Joined
Sep 6, 2013
Messages
3,427 (0.83/day)
Location
Athens, Greece
System Name 3 desktop systems: Gaming / Internet / HTPC
Processor Ryzen 5 7600 / Ryzen 5 4600G / Ryzen 5 5500
Motherboard X670E Gaming Plus WiFi / MSI X470 Gaming Plus Max (1) / MSI X470 Gaming Plus Max (2)
Cooling Aigo ICE 400SE / Segotep T4 / Νoctua U12S
Memory Kingston FURY Beast 32GB DDR5 6000 / 16GB JUHOR / 32GB G.Skill RIPJAWS 3600 + Aegis 3200
Video Card(s) ASRock RX 6600 / Vega 7 integrated / Radeon RX 580
Storage NVMes, ONLY NVMes / NVMes, SATA Storage / NVMe, SATA, external storage
Display(s) Philips 43PUS8857/12 UHD TV (120Hz, HDR, FreeSync Premium) / 19'' HP monitor + BlitzWolf BW-V5
Case Sharkoon Rebel 12 / CoolerMaster Elite 361 / Xigmatek Midguard
Audio Device(s) onboard
Power Supply Chieftec 850W / Silver Power 400W / Sharkoon 650W
Mouse CoolerMaster Devastator III Plus / CoolerMaster Devastator / Logitech
Keyboard CoolerMaster Devastator III Plus / CoolerMaster Devastator / Logitech
Software Windows 10 / Windows 10&Windows 11 / Windows 10
Joined
Apr 24, 2020
Messages
2,738 (1.60/day)
GPU support was removed around the same time. When CPUs became fast enough to run physics, there was no point in running it on the GPU and incurring all the communication overhead.

Edit: I looked it up; Nvidia stopped GPU development of PhysX in 2018 because nobody was using it anymore.

CPU-to-GPU communication goes over the comparatively slow PCIe lines. Putting physics on the GPU only makes sense if it has zero effect on the gameplay or game engine (e.g. emulating wind, cloth physics or hair movement). But if you wanted those physics changes to actually affect the engine, the data needed to traverse the PCIe lanes and come back to CPU-land.

The GPU might be faster than the CPU, but PCIe is slower than both of them. GPUs are only fast because all the graphics data lives on the GPU side and never leaves. If data needs to travel backwards over PCIe, it slows down to the point of not being worth it at all.

CPUs were never that slow at physics (even if GPUs are better at it). It's a matter of PCIe more than anything else.
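To put a very rough number on that round trip (purely illustrative assumptions: 100k rigid bodies, 64 bytes per transform, about 32 GB/s of usable PCIe 4.0 x16 bandwidth):

```cpp
// Back-of-envelope estimate of moving physics state CPU -> GPU -> CPU each frame.
#include <cstdio>

int main()
{
    const double bodies          = 100000.0;  // rigid bodies the engine needs results for
    const double bytesPerBody    = 64.0;      // e.g. a 4x4 float transform
    const double pcieBytesPerSec = 32.0e9;    // rough usable PCIe 4.0 x16 throughput
    const double latencyPerCopy  = 10.0e-6;   // rough per-transfer launch/sync latency

    const double payload   = bodies * bytesPerBody;                  // ~6.4 MB each way
    const double roundTrip = 2.0 * (payload / pcieBytesPerSec + latencyPerCopy);

    std::printf("~%.2f ms per frame spent shuttling physics state over PCIe\n",
                roundTrip * 1000.0);          // ~0.4 ms, before any sync stalls
    return 0;
}
```

And that's before the synchronization stalls, which in practice tend to be the bigger cost.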
 