
Hogwarts Legacy Benchmark Test & Performance Analysis

I don't get the hate. The first 'next gen' games are always like this: devs eventually learn to optimize better, and cards get more powerful. This has always happened in gaming.
Visuals are getting harder to improve meaningfully without a big performance cost; that's how diminishing returns work. I also wouldn't expect AAA, story-driven games to err on the side of performance rather than visuals.

There are two problems people have:

1) The performance, and the cost of that performance. I remember my Radeon HD 4850 Vapor-X doing a really good job of running Crysis, and that card was only $230 USD brand new. Of course people are going to complain: you can buy an entire console for less than what it costs to buy just a GPU capable of running this game as well as the console does. Forget about the cost of the rest of the PC given this game's CPU requirements; you're talking $1,300+.

2) The visual benefit. There are comparably good-looking open-world games that run much better.


Thank you, I was waiting for someone to make this joke :)
 
@W1zzard are you playing it with DLSS3 on? If so, does it work well in this title to make it playable or not really?
 
Yes, AMD cards suck at RT
Not a valid explanation when it's at 3090 (Ti) / 4080 level in every other game.
1675969668220.png
 
Can someone provide a logical reason for the 7900 XTX dropping to the level of a mid-range 3060 when RT is enabled? lol
Probably some kind of bug. Do note this is at Ultra RT; as mentioned in the conclusion, dialing RT down to lower levels helps AMD a lot.
 
I feel like we've hit diminishing returns in visuals relative to hardware requirements.
For ages I haven't felt a real sense of awe in any game.
The last one was probably The Witcher 3, which ran fine on a 980 Ti.
Nowadays we have many times the processing power, yet I don't feel visuals have improved much; in some cases they've even gone backwards.
Is it just me?
 
I feel like we've hit diminishing returns in visuals relative to hardware requirements.
For ages I haven't felt a real sense of awe in any game.
The last one was probably The Witcher 3, which ran fine on a 980 Ti.
Nowadays we have many times the processing power, yet I don't feel visuals have improved much; in some cases they've even gone backwards.
Is it just me?
No, I feel the same.
 
Arc A770 results have been added. The RT performance in particular looks pretty good. I managed a benchmark run without crashes at 4K Ultra + RT Ultra by repeatedly rebooting and just not giving up. It's still suboptimal; I'll ping Intel about it.
 
I feel like we've hit diminishing returns in visuals relative to hardware requirements.
For ages I haven't felt a real sense of awe in any game.
The last one was probably The Witcher 3, which ran fine on a 980 Ti.
Nowadays we have many times the processing power, yet I don't feel visuals have improved much; in some cases they've even gone backwards.
Is it just me?
good graphics do not a good game make
more polygons do not good graphics make
 
I guess all the people who swear they can't enjoy a game without playing at 4K Ultra with RT are going to have to pass on this game! Good lord, I've never seen such a staggering performance hit for what amounts to barely any gain in image quality. However, there's a big elephant in the room that has me puzzled.

There's something in these results that isn't making sense to me. One chart shows how much VRAM the game was measured to need:
vram.png

As we can see, the game uses 14,576 MB at 2160p Ultra settings with RT, which is mind-boggling, but that's what was measured.

However, then we see this:
performance-rt-3840-2160.png

The RTX 3090, which has more than enough VRAM, is only 2.6 FPS faster than the RTX 3080, which has nowhere near enough. The difference is about 17%, which is close enough to their 13% average performance delta to infer that it has nothing to do with VRAM. I say this because at 2160p the delta between them in Rainbow Six Siege is 16%, and Rainbow Six Siege does NOT use much VRAM.

This is the TL;DR of the mystery:
  1. The game needs 14,576MB at 2160p Ultra with RT
  2. The RTX 3080 10GB has, at most, 10,240MB of VRAM
  3. The performance delta between the RTX 3080 and RTX 3090 in a game that doesn't use much VRAM is 15.5%.
  4. At 4K Ultra with RT, the RTX 3090 only has a 17% FPS lead on the RTX 3080, despite the 3080 having 4GB+ less VRAM than the game needs.
What could be the cause of this?
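A quick back-of-the-envelope check of those numbers (a sketch only; the 2.6 FPS and ~17% figures are the ones quoted above, and the 13-16% range is the "normal" gap between the two cards mentioned in the post):

Code:
# Back-of-the-envelope check of the RTX 3080 vs RTX 3090 gap quoted above.
gap_fps = 2.6           # 3090's lead over the 3080 at 4K Ultra + RT (from the chart)
gap_rel = 0.17          # the same lead expressed relatively (~17%)

fps_3080 = gap_fps / gap_rel        # implied RTX 3080 frame rate
fps_3090 = fps_3080 + gap_fps       # implied RTX 3090 frame rate
print(f"implied frame rates: 3080 ~{fps_3080:.1f} FPS, 3090 ~{fps_3090:.1f} FPS")

# How much of the gap is NOT explained by the cards' usual 13-16% spacing?
for baseline in (0.13, 0.155, 0.16):
    unexplained = (gap_rel - baseline) * fps_3080
    print(f"vs a {baseline:.1%} baseline gap: ~{unexplained:.2f} FPS left over")

At these frame rates, the part of the gap not covered by the cards' usual spacing works out to well under 1 FPS, i.e. within run-to-run noise, which fits the observation that the 10 GB card isn't hitting a hard VRAM wall in this scene.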
 
Arc A770 results have been added. The RT performance in particular looks pretty good. I managed a benchmark run without crashes at 4K Ultra + RT Ultra by repeatedly rebooting and just not giving up. It's still suboptimal; I'll ping Intel about it.
Thanks for that! :) RT results look promising. Crashes do not, unfortunately.

I guess all the people who swear they can't enjoy a game without playing at 4K Ultra with RT are going to have to pass on this game! Good lord, I've never seen such a staggering performance hit for what amounts to barely any gain in image quality. However, there's a big elephant in the room that has me puzzled.

There's something in these results that isn't making sense to me. One chart shows how much VRAM the game was measured to need:
vram.png

As we can see, the game uses 14,576 MB at 2160p Ultra settings with RT, which is mind-boggling, but that's what was measured.

However, then we see this:
performance-rt-3840-2160.png

The RTX 3090, which has more than enough VRAM, is only 2.6 FPS faster than the RTX 3080, which has nowhere near enough. The difference is about 17%, which is close enough to their 13% average performance delta to infer that it has nothing to do with VRAM. I say this because at 2160p the delta between them in Rainbow Six Siege is 16%, and Rainbow Six Siege does NOT use much VRAM.
4K_RSS-p.webp

This is the TL;DR of the mystery:
  1. The game needs 14,576MB at 2160p Ultra with RT
  2. The RTX 3080 10GB has, at most, 10,240MB of VRAM
  3. The performance delta between the RTX 3080 and RTX 3090 in a game that doesn't use much VRAM is 15.5%.
  4. At 4K Ultra with RT, the RTX 3090 only has a 17% FPS lead on the RTX 3080, despite the 3080 having 4GB+ less VRAM than the game needs.
What could be the cause of this?
I think this only proves once again that VRAM allocation and VRAM usage aren't the same thing.
 
seems like there's a CPU limit in this game
 
Great game, but man, it seems games are taking more and more power to run. I have a 4090 and it runs 4K at over 60 FPS without ray tracing, but with it, nope, I have to use DLSS. I don't see how people with GPUs like a 1060, RX 580, or 1650 play recent games even at 1080p. Your average person who just wants to game should just get a PS5 or a Series S/X. Plus it seems like 75% of the PC ports from last year and this year have problems. It's sad: I have a 7950X and a 4090, but I just find it easier and less maddening to play on consoles. If this were last gen I'd say no, since most games were 30 FPS and the consoles had horrible CPUs and mechanical HDDs. But this gen we get a decent CPU, pretty fast GPUs, and NVMe SSDs, and most games have a 60 FPS mode or even 120.
 
The RTX 3090, which has more than enough VRAM, is only 2.6 FPS faster than the RTX 3080, which has nowhere near enough.
"Allocated" is not "used". You can see the cards with 8 GB dropping down in their relative positioning; that's when they are actually running out of VRAM (enough to make a difference in FPS).
 
Great game, but man, it seems games are taking more and more power to run. I have a 4090 and it runs 4K at over 60 FPS without ray tracing, but with it, nope, I have to use DLSS. I don't see how people with GPUs like a 1060, RX 580, or 1650 play recent games even at 1080p. Your average person who just wants to game should just get a PS5 or a Series S/X. Plus it seems like 75% of the PC ports from last year and this year have problems. It's sad: I have a 7950X and a 4090, but I just find it easier and less maddening to play on consoles. If this were last gen I'd say no, since most games were 30 FPS and the consoles had horrible CPUs and mechanical HDDs. But this gen we get a decent CPU, pretty fast GPUs, and NVMe SSDs, and most games have a 60 FPS mode or even 120.


A 3080-class card can run this at 4K with DLSS Quality/Balanced and no RT pretty easily, and a 6700 XT ($300) with FSR is good at 1440p.

Sure, if you crank all the settings and turn off resolution scaling you get a slideshow, but those aren't the settings the consoles run either. The PS5 uses upscaling on top of dynamic resolution, so of course it's cheaper, but I'm not sure why having the option to make the game look much better is infuriating.
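Rough pixel-count math behind that (a sketch; the per-axis render scales are the commonly published DLSS/FSR 2 quality-mode factors, and real performance doesn't scale perfectly linearly with pixel count):

Code:
# Rough pixel-count math for upscaling at a 4K output resolution.
# Per-axis render scales commonly published for DLSS / FSR 2 quality modes.
modes = {"Native": 1.0, "Quality": 2 / 3, "Balanced": 0.58, "Performance": 0.50}

out_w, out_h = 3840, 2160
native_px = out_w * out_h

for name, scale in modes.items():
    w, h = int(out_w * scale), int(out_h * scale)
    print(f"{name:11s}: {w}x{h} internal  ({w * h / native_px:.0%} of native pixels)")

So "DLSS Quality at 4K" is really rendering roughly 44% of the output pixels and reconstructing the rest, which is why the frame-rate gain is so large; as noted above, the console sits in similar territory with its own scaling plus dynamic resolution.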
 
Pretty sure the GPU drivers could do with some optimization too... Having said that, it pisses me off that AMD hasn't delivered a new driver since December 2022 for anything other than the 7000-series cards!
 
Based on the article's data, I would say all GPUs are running terribly, especially if you compare the cost in FPS against the graphical result you get for it.

Honestly, this runs like Cyberpunk: if you crank everything and turn RT on, you get 38-45 FPS on a 4090, but the game looks identical with DLSS Quality/3.0, and that runs at 120+ FPS.

I think with some optimization plus a good scaling implementation, the result will be very good.
 