
Ratchet & Clank Rift Apart Benchmark Test & Performance Analysis

AMD themselves are working on a driver update to fix the crashing issue, so it's obvious Nvidia is behind it and Nvidia fanboys are having double standards. The AMD crusade is at it again
Not sure if the crashing issue is even related to RT not being enabled.

AMD: "Application crash or driver timeout may be observed while playing Ratchet & Clank: Rift Apart with Ray-Tracing and Dynamic Resolution Scaling enabled on some AMD Graphics Products, such as the Radeon RX 7900 XTX."

Actually, you cannot enable RT in the game with AMD, resolution scaling or not. If Nixxes were fixing the issue above, they would turn off resolution scaling when RT is enabled on AMD, yet they disabled RT completely.

However, logically, DS will surely consume VRAM.
Logically? Please explain.

IMO if you have a fast path from storage to VRAM, then you don't need to preload stuff into VRAM, because you can just load it shortly before it's needed -> lower VRAM usage
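To make that concrete, here's a minimal sketch of the idea (all names and numbers are hypothetical, nothing here is from the actual game or engine): with just-in-time streaming, peak VRAM tracks the current working set instead of the sum of everything in the level.

```cpp
// Hypothetical illustration only: compares "preload everything" with
// "stream just before use". With a fast storage->VRAM path, only the
// assets needed soon have to be resident, so peak VRAM stays near the
// working set instead of the whole level.
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

struct Asset { std::string name; uint64_t bytes; };

// Stand-in for a GPU upload; a real engine would issue a copy or a
// DirectStorage request here instead of printing.
void uploadToVram(const Asset& a) {
    std::cout << "resident: " << a.name << " (" << (a.bytes >> 20) << " MiB)\n";
}

int main() {
    std::vector<Asset> level = {
        {"area_a_textures", 900ull << 20},   // made-up sizes
        {"area_b_textures", 800ull << 20},
        {"area_c_textures", 700ull << 20},
    };

    // Preload: everything resident up front -> peak ~= sum of all assets.
    uint64_t preloadPeak = 0;
    for (const auto& a : level) preloadPeak += a.bytes;

    // Streaming: load each area's assets shortly before they're needed and
    // evict them afterwards -> peak ~= largest single working set.
    uint64_t streamPeak = 0;
    for (const auto& a : level) {
        uploadToVram(a);
        streamPeak = std::max(streamPeak, a.bytes);
        // ...assets evicted when the player moves on...
    }

    std::cout << "preload peak:   " << (preloadPeak >> 20) << " MiB\n";
    std::cout << "streaming peak: " << (streamPeak >> 20) << " MiB\n";
}
```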
 
Well, I assumed a separate DS cache, but you are right, it can just be loaded straight in as a texture. Maybe just a small I/O buffer, so yeah, I think I got that wrong.
 
It's 1.1. 1.2 was just released in April; no way for anyone to integrate that so soon.
So you're saying that pretty much every article written on this port is wrong? Almost all of them mention it using DS 1.2, and even Nixxes themselves claim that they are using 1.2 on Steam (post from July 18 if you want to check it), so in this case I think you are wrong.
 
Game does have RT, only AMD hardware can't enable RT due to driver bugs
I'm certain you knew what I was talking about... Driver bug - convenient isn't it?

Didn't realize AMD GPU owners cared about RT....... Hopefully they get their drivers in order.

Also just looking at the last couple years Nvidia has done a much better Job in sponsored titles supporting competing technologies unlike AMD and why there was a collective groan when it was announced Starfield was an AMD sponsored game.
I am one of those AMD owners who cares a lot about RT, and I'm also one of those who stated they will be going Nvidia for my next card UNLESS AMD comes exceptionally close in RT or matches Nvidia's offerings. RT is the future and I'm all in for it. The second part of your comment is rather bogus, as one of the main remarks made by the media and fanboys alike is that Nvidia doesn't do this. Folks and their selective memory loss.

Could be that AMD released a driver update that broke ray tracing in R&C. They borked RL as well recently (very annoying).
As AMD hasn't released a fix yet, there's probably a reason for it (reverting breaks something more important, for example), so they had two options:
- Delay launch of R&C for AMD to release an updated driver
- Just launch R&C but disable raytracing on AMD

As most don't really care about ray tracing on AMD, and considering it's an Nvidia-sponsored title, I can't really fault Nixxes for just releasing it.

In short, I don't think Nvidia has anything to do with it, aside from the fact that if AMD had sponsored it, they maybe would've delayed the launch.
Too much speculation, I don't have any of the answers for those. I already own this on PS5 so I won't be testing it on PC. Lastly, I'm the AMD unicorn owner - I love RT and I can't wait for it to be implemented in a far more efficient manner.
 
Ummm... thankfully, modders have been more competent at implementing DLSS than the developers have been at implementing FSR...
 
So you're saying that pretty much every article written on this port is wrong? Almost all of them mention it using DS 1.2, and even Nixxes themselves claim that they are using 1.2 on Steam (post from July 18 if you want to check it), so in this case I think you are wrong.
Guess I am wrong and it's 1.2
 
Nvidia's limitation of 8 lanes for the 4060 and the 4060 Ti is just as lamentable given that the 1050 Ti had 16 PCIe lanes with a smaller die size than any of these GPUs. This is why I don't buy the argument about limited space at the edge of the die. The die sizes are:
It's rather telling that the oldest and smallest die is the only one with 16 PCIe lanes.

The smallest die also only has PCIe 3.0, not 4.0. 4.0 uses way more space, and those things no longer scale much with process. Also, the 1050 Ti uses GDDR5, which takes less space than GDDR6.
 
PCIe is a serial link; 4.0 vs. 3.0 won't add much die space. 16 lanes of PCIe 3.0 are estimated to take only about 3 mm² in TSMC's 28 nm process.
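To put that figure in perspective, a back-of-the-envelope fraction (the ~3 mm² is the estimate above; the 130-190 mm² die-area range is only an assumed ballpark for entry/mid-range dies, not a number from this thread):

\[
\frac{A_{\text{x16 PHY}}}{A_{\text{die}}} \approx \frac{3\ \text{mm}^2}{130\text{–}190\ \text{mm}^2} \approx 1.6\%\text{–}2.3\%
\]

So even if a PCIe 4.0 PHY needed a few times that area, it would still be a low single-digit share of the die.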
 
Too much speculation, I don't have any of the answers for those. I already own this on PS5 so I won't be testing it on PC. Lastly, I'm the AMD unicorn owner - I love RT and I can't wait for it to be implemented in a far more efficient manner.
So we're going to speculate that Nvidia is intentionally doing this to AMD instead? Because that's what my comment is a reply to.
I'd like to err to innocent until proven guilty.

RTX 4060 Ti 16 GB has been added
Another game to add to the list of games that are "problematic" for 8GB cards. Though I expected some more pain.
I wonder if we can see the difference between 8GB and 16GB visually in realistic conditions.
 
Shouldn't you be blaming the developer of the game?
Apparently not: https://www.amd.com/en/support/kb/release-notes/rn-rad-win-23-7-2
Application crash or driver timeout may be observed while playing Ratchet & Clank™: Rift Apart with Ray-Tracing and Dynamic Resolution Scaling enabled on some AMD Graphics Products, such as the Radeon™ RX 7900 XTX.

Also, I'm not sure how this qualifies as a Nvidia-sponsored title, since it's a console port. It literally started on AMD hardware.
 
Well, as usual, I won't even bother to try it until it gets patched. I am sick and tired of people complaining about beta releases... early-adopter builds that are obviously incomplete and poorly optimized. If you guys vote with your wallets, they will listen.

Repeat after me: "no buy until the game runs smooth..." Feel free to add any expletives at the end. :roll:
 
AMD GPUs are doing surprisingly well in the 1% lows here, much better than their Nvidia counterparts. These results correlate directly with their half-precision/FP16 processing power:

GPU      | 1% low FPS @ 4K | FP16 TFLOPS
7900 XTX | 75              | 122.9
7900 XT  | 63              | 103.2
4090     | 61              | 82.6
4080     | 54              | 48.8
6900 XT  | 48              | 46.1
3090 Ti  | 46              | 40.0
4070 Ti  | 45              | 40.1
6800 XT  | 45              | 41.5
3090     | 40              | 35.7
6800     | 38              | 32.3

Here's my theory:

Ratchet & Clank is the first PC title to take advantage of GPU Decompression. This feature of DirectStorage 1.1+ allows the decompression of game assets to take place on the GPU. This approach is way faster than traditional decompression on the CPU because it avoids a number of bottlenecks, frees up the CPU for other game-related tasks, and takes advantage of the massive parallel processing capabilities of modern GPUs. It also leverages the much higher bandwidth of the card's VRAM for decompressing and copying game data. Since GPGPU workloads execute faster at reduced (half) floating-point precision, I would presume that the GDeflate compression stream format used by GPU Decompression performs better on cards with a higher FP16 rating.

Since the whole idea of GPU Decompression (besides reducing load times) is to improve asset streaming -- notably in open world games -- GPUs that can do it faster should also show better 1% and 0.1% low figures, allowing for smoother gameplay.
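For anyone curious what that looks like from the engine side, here's a rough sketch of a GPU-decompressed read using the DirectStorage API (1.1+). It's only an illustration of the mechanism described above, not Nixxes' actual code: it assumes the DirectStorage SDK headers/libs, and the device, fence, destination buffer and file layout are assumed to exist elsewhere, with error handling trimmed.

```cpp
// Hedged sketch of DirectStorage GPU decompression: read a GDeflate-compressed
// blob from disk and let the runtime decompress it on the GPU directly into a
// D3D12 buffer in VRAM.
#include <dstorage.h>
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

HRESULT LoadGDeflateAsset(ID3D12Device* device,
                          ID3D12Resource* destBuffer,   // VRAM destination
                          const wchar_t* path,
                          UINT64 fileOffset,
                          UINT32 compressedSize,
                          UINT32 uncompressedSize,
                          ID3D12Fence* fence,
                          UINT64 fenceValue)
{
    ComPtr<IDStorageFactory> factory;
    HRESULT hr = DStorageGetFactory(IID_PPV_ARGS(&factory));
    if (FAILED(hr)) return hr;

    ComPtr<IDStorageFile> file;
    hr = factory->OpenFile(path, IID_PPV_ARGS(&file));
    if (FAILED(hr)) return hr;

    // One GPU-destination queue; requests on it can be decompressed on the GPU.
    DSTORAGE_QUEUE_DESC queueDesc{};
    queueDesc.SourceType = DSTORAGE_REQUEST_SOURCE_FILE;
    queueDesc.Capacity   = DSTORAGE_MAX_QUEUE_CAPACITY;
    queueDesc.Priority   = DSTORAGE_PRIORITY_NORMAL;
    queueDesc.Device     = device;

    ComPtr<IDStorageQueue> queue;
    hr = factory->CreateQueue(&queueDesc, IID_PPV_ARGS(&queue));
    if (FAILED(hr)) return hr;

    // The request: compressed bytes come from the file, the runtime
    // decompresses them (GDeflate) and writes the result into destBuffer.
    DSTORAGE_REQUEST request{};
    request.Options.SourceType        = DSTORAGE_REQUEST_SOURCE_FILE;
    request.Options.DestinationType   = DSTORAGE_REQUEST_DESTINATION_BUFFER;
    request.Options.CompressionFormat = DSTORAGE_COMPRESSION_FORMAT_GDEFLATE;
    request.Source.File.Source        = file.Get();
    request.Source.File.Offset        = fileOffset;
    request.Source.File.Size          = compressedSize;
    request.UncompressedSize          = uncompressedSize;
    request.Destination.Buffer.Resource = destBuffer;
    request.Destination.Buffer.Offset   = 0;
    request.Destination.Buffer.Size     = uncompressedSize;

    queue->EnqueueRequest(&request);
    queue->EnqueueSignal(fence, fenceValue);  // signaled once the data is in VRAM
    queue->Submit();
    return S_OK;
}
```

The key bit is the CompressionFormat field: with GDEFLATE set, the runtime hands decompression to the GPU instead of the CPU.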

You're right in saying that DirectStorage increases VRAM usage, but it does so by a negligible amount. It places two additional staging buffers in VRAM whose size can be defined. It is assumed that 128-256 MB per buffer is optimal:

[attached images: ds.png, ds1.png]

Images taken from here. And here's a good article on the subject.
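For completeness, the staging buffer size mentioned above is a one-line, per-factory setting (same headers as the sketch earlier in the thread; 256 MiB only mirrors the upper end of the range discussed, it is not a value confirmed for this port):

```cpp
// Hedged sketch: the DirectStorage staging buffer size is set on the factory
// (the default is 32 MiB).
Microsoft::WRL::ComPtr<IDStorageFactory> factory;
DStorageGetFactory(IID_PPV_ARGS(&factory));
factory->SetStagingBufferSize(256u * 1024u * 1024u);
```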
 

I always thought it was odd that Nvidia has their own version of this... Guessing, if true, this is why.

Edit: Apparently RTX I/O is being used in this...
 

Yeah, it's Nvidia's fault that AMD has a poor track record of producing proper Radeon drivers.

Either way, Radeon owners can enable RT in R&C later with a driver/game update, but DLSS will never come to AMD-sponsored games like Jedi Survivor, RE 4 Remake, Callisto Protocol, or even Starfield.
 
Man, AMD is totally pissing all over Nvidia in the 1% lows. :eek: In a "Nvidia sponsored" game, lol. Pretty embarrassing.


"AMD is working with the game developers of Ratchet & Clank™: Rift Apart to resolve some stability issues when Ray-Tracing is enabled."

By the wording they pretty much blame the game developers for the problems. :cool: Guess they are not very experienced yet with the implementation of RT for PC games.


Well, if you check the (negative) Steam reviews, you'd notice that Nvidia users also have tons of problems with the game. :oops: From constant crashing to bad framerates, memory leaks, visual glitches, stuttering, freezing, black screens, etc. The game has quite some early-adopter issues, even if you're in the green boat. Some people even say the game looks better on the PlayStation.


I'd like to see the impact of DirectStorage tested with a few drives from different performance segments.

ComputerBase recently did a (member supported) benchmark test with the DirectStorage 1.2 BulkLoadDemo Benchmark. :) Quite a long list of tested drives.

https://www.computerbase.de/2023-07/directstorage-bulkloaddemo-benchmark-community-test/
What I find interesting is that the Crucial T700 4TB is wiping the floor with all other drives, coming out as the fastest single drive, while the 2TB version is just on par with the competitors. Not sure how this affects game performance. But I guess reviewers don't know yet either. Just new waters.
 
Can't see any meaningful "holy sh*t" RT-on vs. RT-off differences in the comparisons (max+RT vs. max), just some slight enhancements to shadows. Is that because I'm watching the comparisons on an iPad?

But, gawd dayumn, it looks GREAT and runs even better on RDNA3 (those minimum FPS are *chef's kiss*). I'm waiting for some patches, but my 7900XT is ready for it.
 
NVIDIA sponsored title that has the latest DLSS and XeSS versions, but not the latest version of FSR... and RT does not work on AMD.
 
You are missing some obvious reflections, I guess, for example in the 2nd shot up, right...
But I gotta say, while playing, none of this matters much. That vibe the game gives off at the highest settings - that you're playing some Pixar CGI film - remains untouched ^^
 
Game does have RT, only AMD hardware can't enable RT due to driver bugs
>Game has horrible stutters on Nvidia cards, resulting in the 4090 having worse 1% lows than the 6800 XT
>Developer's fault

>Game doesn't support RT on Radeon cards, even though the original game was built for the PS5 and its RDNA2 architecture
>Fault of the drivers, totally not the developers
 

Lol, who here ever said anything about low 1% FPS being the devs' fault? Obviously Nvidia has to improve their driver for this game, just like AMD needs to fix RT.

But hey, Radeon owners can enable FSR just fine.
 
I always thought it was odd that Nvidia has their own version of this... Guessing, if true, this is why.

Edit: Apparently RTX I/O is being used in this...
They don't have their "own version"; it's just branding of their implementation of the DirectStorage standard.
 