
GameTechBench GPU benchmark is already out!

Joined
Sep 29, 2020
Messages
155 (0.09/day)
Did I mention there is an easter egg in one of the leaderboards? Will you find it? o_O



Yep, tried both options. I've never been able to complete the PT test on the 6600XT.

The only test that actually crashes for me (with an allocation error) on the 6600XT is raster 4320p. Miraculously, even 4320p + RT finished normally.
Some AMD cards seem to have problems with the PT benchmark, others don't. This makes me think it's an AMD issue, maybe the drivers, maybe the hardware. Let's see if they reply.

About the second, that was indeed a miracle :p It would be more normal for it to always crash at that resolution :p

First of all, this is a very cool bench and I thank the creator for making it. It's an easy way to show these limitations to people and I appreciate it; especially the 1%, .1%, and minimums. Awesome!
I honestly wish W1z would include something exactly like this in his suite. You're doing great work and it helps simplify things for a lot of people, I think! :rockout:
Thank you very very much for your kind words! :love:

BTW, it might be better to @mention someone like this, so they actually read your interesting message!
 
Joined
Sep 20, 2021
Messages
644 (0.50/day)
Processor Ryzen 7 9700x
Motherboard Asrock B650E PG Riptide WiFi
Cooling Underfloor CPU cooling
Memory 2x32GB 6200MT/s
Video Card(s) ASUS Prime Radeon RX 9070 XT OC Edition
Storage Kingston Fury Renegade 1TB, Seagate Exos 12TB
Display(s) MSI Optix MAG301RF 2560x1080@200Hz
Case Phanteks Enthoo Pro
Power Supply NZXT C850 850W Gold
Mouse Bloody W95 Max Naraka
Did I mention there is an easter egg in one of the leaderboards? Will you find it? o_O
Probably this?
1741985533432.png
 
Joined
Mar 2, 2011
Messages
171 (0.03/day)
Thank you @Arctucas , Steam hate understood! :p I will try my best.
Thanks for the update; I paid on Steam. If you release a version outside Steam at some point, please let me know.

It seems I can't change the rendering/benchmark mode; it's greyed out.

I'm assuming the 1080 Ti doesn't work in HW Lumen or Path Tracing.



RasterSW.jpg


It's not an Easter egg, just a typo: RX 6600 with 96 TMUs ;)

It seems that since old version 0.997 some extra load was added; my score back then was 1296, now it's 1142. Both tests on the same driver and the same 1440p. :nutkick: :clap: A subtle whisper... upgrade now... :slap:

No sound cuts:rockout:


Old vers gametech.jpg
 

Attachments

  • Gametechgreyed.jpg
Joined
Sep 20, 2021
Messages
644 (0.50/day)
Processor Ryzen 7 9700x
Motherboard Asrock B650E PG Riptide WiFi
Cooling Underfloor CPU cooling
Memory 2x32GB 6200MT/s
Video Card(s) ASUS Prime Radeon RX 9070 XT OC Edition
Storage Kingston Fury Renegade 1TB, Seagate Exos 12TB
Display(s) MSI Optix MAG301RF 2560x1080@200Hz
Case Phanteks Enthoo Pro
Power Supply NZXT C850 850W Gold
Mouse Bloody W95 Max Naraka
Ryzen 9700x PBO
2x32GB DDR5 @ 6200MHz, 28-36-32-48-80
RX 9070 XT +10% PL
Win 11 24H2 (26100)

Some updates.

GameTechBench_TopScores_720p.png
GameTechBench_720p__Raster_6633.png
GameTechBench_720p_RT_6281.png

GameTechBench_FHD_Raster_4933.png

GameTechBench_2k_Raster_3622.png

GameTechBench_4k_Raster_2056.png

GameTechBench_8k_Raster_210.png


Aaand fail :D
GameTechBench_8k_RT_fail.png
 
Joined
Sep 29, 2020
Messages
155 (0.09/day)
Thank you @rusty caterpillar !

Is the 1080 Ti RT capable? The benchmark detects RT support at a low level, in C++, by asking DirectX directly. For some reason your card isn't detected as RT capable, and in that case the test shouldn't even fail (it should just be unavailable). Maybe the card isn't 100% compatible with all RT features, so it isn't detected.
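For reference, the detection is basically the standard DXR capability query against Direct3D 12. A simplified sketch (not the exact code in the benchmark, and it needs a recent Windows SDK) looks like this:

#include <d3d12.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

// Ask the default adapter's D3D12 driver which DXR tier it exposes.
bool HasHardwareRaytracing()
{
    ID3D12Device* device = nullptr;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_0,
                                 __uuidof(ID3D12Device),
                                 reinterpret_cast<void**>(&device))))
        return false;

    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    const bool supported =
        SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                              &options5, sizeof(options5))) &&
        // Tier 1.0 is the base DXR feature set; 1.1 adds inline ray tracing.
        options5.RaytracingTier != D3D12_RAYTRACING_TIER_NOT_SUPPORTED;

    device->Release();
    return supported;
}

int main()
{
    std::printf("Hardware ray tracing: %s\n",
                HasHardwareRaytracing() ? "reported" : "not reported");
}

If the driver reports D3D12_RAYTRACING_TIER_NOT_SUPPORTED here, the RT/PT modes simply aren't offered, which would explain a greyed-out option rather than a crash.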

PS: a lot has changed since you were last here! Performance probably changed because the whole environment changed too :p
 
Joined
Mar 2, 2011
Messages
171 (0.03/day)
Joined
May 13, 2008
Messages
1,096 (0.18/day)
System Name HTPC whhaaaat?
Processor 2600k @ 4500mhz
Motherboard Asus Maximus IV gene-z gen3
Cooling Noctua NH-C14
Memory Gskill Ripjaw 2x4gb
Video Card(s) EVGA 1080 FTW @ 2037/11016
Storage 2x512GB MX100/1x Agility 3 128gb ssds, Seagate 3TB HDD
Display(s) Vizio P 65'' 4k tv
Case Lian Li pc-c50b
Audio Device(s) Denon 3311
Power Supply Corsair 620HX

Hardly a fail; again pretty perfect IMHO. I will tell you, though, my OCD wants those mins to be 60/45 so badly, and my real question is whether this product (in whatever power variation, with tweaking) can do that.
Is it slightly anal-retentive when things like VRR exist? Sure, but it's the principle of the thing...especially if using FG...I think when you boil the 9070 XT down to its core as a product it should (be able to) do that.
If nothing else, within driver improvements perhaps? I'm not blaming the benchmark, as this appears to track with many other games as well (~58-59fps mins before tweaking it).

I kind of just want to buy one and loop this bench until I could keep it that way (60 mins in software, if not preferably hardware; whichever is possible). I find things like that oddly satisfying. :p
I keep telling myself to wait until there is a somewhat reasonably priced card that can keep 1440p60 with up-scaling to 4k, but for many and/or perhaps most, that is perfectly good, certainly for right now imho.
Thank you very very much for your kind words! :love:

@miguel1900 (;))

You are very welcome.

Thank you again for your work, as I think it will come in handy as an easily digestible way for more people to begin to understand the limitations of cards wrt Lumen, and RT/PT in general wrt bandwidth/buffers, etc.

I am curious, how difficult would it be to implement FSR1-4 and DLSS2-4 support into this (even with an Optiscaler-like hack)? Perhaps also the path tracer renderer of UE 5.5 as an option across resolutions?
IMHO, this would be the literal perfect bench for the future of gaming if those were there, given their general use-case. Being able to test essentially everything, even if some things (right now) appear a little nuts.
Or sometimes a scaler may appear outdated. You never know when the perf/quality trade-off might be worth it, and this might be a quick way to check (and maybe see how far we've gone in each direction).

1080p RT -> 1440p 'quality' FSR3/4 on a 9070 XT, for example, as that is the intended market. 4090 and 1440p->4k RT or 1080p->4k PT using DLSS3/4 (which may slip more with new DLSS versions).
Maybe 1080p->4k in some combination for others, etc.
It could not only show the difference in performance between the companies' up-scaling in an RT load, but also perhaps the change in performance between versions, a la FSR3/4 or CNN/transformer (and future iterations).
(That's something I'm always attempting to explain and would be great if I could show it in a ~2.5min bench, especially if people could run it themselves).
It would be cool if this was easily 'solvable' (by looking at something like general 1%,.1% lows; minimums) and I could point someone towards a chart for each in those scenarios.
Maybe factoring in OCing, etc, and the version of up-scaler used. So many people are new to these scenarios that I think it would help them to understand them and each product's general capability.

As I say, I'm one of those guys that's really waiting for 4K60 raster, 1440p60 RT, and 1080p60 PT mins to line up with up-scaling of the latter two to 4k. I'm trying to figure out exactly what that would take.
I'm sure there are people that are targeting one step up, while one step down may or may not continue to be a ~9070 XT (could trend slightly downward over time, similar to the 4090 and its general goals).
Or perhaps some slightly less orthodox combination.
It would appear they are generally intended to be mostly proportional (I don't know general sample ratio; you probably could figure it out fairly easily), granted that's not always exactly the case in practice.
Ofc scaling from 1440p->4k needs a little extra perf and 1080p->4k quite a bit more, but it could give people an idea of what they need if scaling from your current native Lumen setup (which I think is great).
To me, at least, being able to find some kind of bench to test limits for all those things would be wonderful, as I think that could help people find their 'perfect card' for themselves given the increase of such scenarios.

I don't mean to ask you to do extra work, and I think this bench is wonderful as it is for native rez software and hardware lumen, only that this could make it truly shine above any other I've seen as a one-stop shop.

IMO we need those scenarios to be tested, scrutinized, and understood (by both reviewers and consumers, including myself as the latter, not just developers), and the way you display the info is fantastic.

It already perfectly showcases the 5080's 1440p limitations (I attempt to explain lows and why stability is important, especially as a product ages and/or DLSS requirements rise over time), and I appreciate it greatly.

Honestly, if that package existed, I would scream bloody murder until reviewers put it into their suites and ran the full gamut of native and scaling settings on every card. I think all of those things are that important.

Truly, perhaps the most important factors and/or limitations of any card right now and certainly going forward into the future (especially wrt the next generation of consoles and scaling from their innate capabilities).

Does what I'm saying make sense and/or do you agree? Is something like this possible and/or is it something you could/would consider implementing?
 
Joined
May 13, 2008
Messages
1,096 (0.18/day)
System Name HTPC whhaaaat?
Processor 2600k @ 4500mhz
Motherboard Asus Maximus IV gene-z gen3
Cooling Noctua NH-C14
Memory Gskill Ripjaw 2x4gb
Video Card(s) EVGA 1080 FTW @ 2037/11016
Storage 2x512GB MX100/1x Agility 3 128gb ssds, Seagate 3TB HDD
Display(s) Vizio P 65'' 4k tv
Case Lian Li pc-c50b
Audio Device(s) Denon 3311
Power Supply Corsair 620HX
For the science.
Between the default driver cache and disabled Nvidia cache (in my case the Nvidia cache is still written to an M.2 Gen3 drive), there is a 0.7% difference.


View attachment 389924

Say what you will about 1080Ti and RT...it has lived a damn good life in the world of raster.
Just remember when you see benches like this that RT game implementations were literally created so people would upgrade from that card, or perhaps many wouldn't have until the next gen of consoles.
No joke. Same thing for 2080 Ti and upgrading DLSS (perf hit) and pushing higher ratios of RT (above what it and older AMD cards targeted).
Otherwise people likely would be using them for 1080p/1440p and be completely happy for a very long time. If you ever wonder why the 7900 XTX is *barely* a 1080p RT card, that's why. Pushing it up at an unnatural ratio.

I appreciate you running it, for the science! :lovetpu:
 
Joined
Mar 2, 2011
Messages
171 (0.03/day)
Say what you will about 1080Ti and RT...it has lived a damn good life in the world of raster.
Just remember when you see benches like this that RT game implementations were literally created so people would upgrade from that card, or perhaps many wouldn't have until the next gen of consoles.
No joke. Same thing for 2080 Ti and upgrading DLSS (perf hit) and pushing higher ratios of RT (above what it and older AMD cards targeted).
Otherwise people likely would be using them for 1080p/1440p and be completely happy for a very long time. If you ever wonder why the 7900 XTX is *barely* a 1080p RT card, that's why. Pushing it up at an unnatural ratio.

I appreciate you running it, for the science! :lovetpu:
You already know my only issue with the 1080 Ti: power consumption. It constantly draws an average of 230 W (FPS capped at 60, but it only delivers around 50) in HD 2 at 1440p native in-game settings, and in Last Train Home I see an average of 250 W.
I'm not planning to reduce in-game quality settings on my 1440p screen.
With a 1080p screen I maybe wouldn't even watch any GPU launch, even though I'm always curious about new implementations and cooling designs.
 
Joined
May 13, 2008
Messages
1,096 (0.18/day)
System Name HTPC whhaaaat?
Processor 2600k @ 4500mhz
Motherboard Asus Maximus IV gene-z gen3
Cooling Noctua NH-C14
Memory Gskill Ripjaw 2x4gb
Video Card(s) EVGA 1080 FTW @ 2037/11016
Storage 2x512GB MX100/1x Agility 3 128gb ssds, Seagate 3TB HDD
Display(s) Vizio P 65'' 4k tv
Case Lian Li pc-c50b
Audio Device(s) Denon 3311
Power Supply Corsair 620HX
You already know my only issue with the 1080 Ti: power consumption. It constantly draws an average of 230 W (FPS capped at 60, but it only delivers around 50) in HD 2 at 1440p native in-game settings, and in Last Train Home I see an average of 250 W.
I'm not planning to reduce in-game quality settings on my 1440p screen.
With a 1080p screen I maybe wouldn't even watch any GPU launch, even though I'm always curious about new implementations and cooling designs.

I hear that (my card's also toasty). I think next-gen will be the one for you. I think there will be a power-efficient 1440p-aimed design to replace the 5080 (or essentially 1/3 faster than a 9070 XT), probably a '6070' or Ti.
I personally think it will be 192-bit/18GB 9216 @ 3780/36gbps.
(Although it could be up to 4200/40000, as that is what I think AMD will do with their designs...either the former first and then the latter, or the latter potentially first [depending on if nVIDIA see AMD as a threat]).

It's one of those things where it probably won't scale 1440p->4k very well, but for someone like you it wouldn't matter. In your situation it would be the perfect GPU bc tight design and power-efficient for 1440p.

I really don't know what AMD is going to do to compete in that segment. I understand competing at 1080p (replacing N48), I understand 1440p and 4k up-scaling (essentially double). 4k native (triple).

It may come down to how their configurations of chiplets work. If they are 2048sp, can they be oriented in something like a 4x design rather than 3/6? If 3072sp chiplets, can it be 3 instead of 2, etc?
Those are the interesting questions right now; chiplet capabilities and possible configs along with nVIDIA's clock/ram aim out the gate (it'll eventually be latter spec though, imo)...and I don't have the answers.

I have a feeling for AMD it might come down to doing something like 6144sp/16GB on the low-end (similar to nVIDIA), but with the next design up splitting their 256-bit bus different ways.
For instance, perhaps a full-fat 2048x6 (essentially a 7900xtx with higher clocks and RT/FSR improvements that more-or-less competes with similar from nVIDIA and replacing 4090) with 16x2GB ram chips.
Perhaps a cut-down version (1920x6? Maybe even less in another SKU?) could have orientations of something like 8x3GB or 8x2GB. 16GB @ 40gbps should be similar to 18GB @ 36gbps, maybe slower because of the bigger bus.
So for that sku maybe they only enable 1536-1792sp (x6) to compete with nVIDIA's native design, with either similar or lower clocks (in the latter example they *could* theoretically be more power-efficient).

It may *not* be as efficient though. So for your market (and concerns) it will be very interesting to watch. For you, nVIDIA may still be the ticket, but for many others, perhaps AMD. There is no 'one answer'.

Good luck on your journey (as I share the same but with hopes to upscale to 4k; probably around 100TF or so and hopefully 32GB), and may we both find ways to serve our performance and/or power requirements.

:toast:
 
Joined
Sep 29, 2020
Messages
155 (0.09/day)
@miguel1900 (;))

You are very welcome.

Thank you again for your work, as I think it will come in handy as an easily digestible way for more people to begin to understand the limitations of cards wrt Lumen, and RT/PT in general wrt bandwidth/buffers, etc.

I am curious, how difficult would it be to implement FSR1-4 and DLSS2-4 support into this (even with an Optiscaler-like hack)? Perhaps also the path tracer renderer of UE 5.5 as an option across resolutions?
IMHO, this would be the literal perfect bench for the future of gaming if those were there, given their general use-case. Being able to test essentially everything, even if some things (right now) appear a little nuts.
Or sometimes a scaler may appear outdated. You never know when the perf/quality trade-off might be worth it, and this might be a quick way to check (and maybe see how far we've gone in each direction).

1080p RT -> 1440p 'quality' FSR3/4 on a 9070 XT, for example, as that is the intended market. 4090 and 1440p->4k RT or 1080p->4k PT using DLSS3/4 (which may slip more with new DLSS versions).
Maybe 1080p->4k in some combination for others, etc.
It could not only show the difference in performance between the companies' up-scaling in an RT load, but also perhaps the change in performance between versions, a la FSR3/4 or CNN/transformer (and future iterations).
(That's something I'm always attempting to explain and would be great if I could show it in a ~2.5min bench, especially if people could run it themselves).
It would be cool if this was easily 'solvable' (by looking at something like general 1%,.1% lows; minimums) and I could point someone towards a chart for each in those scenarios.
Maybe factoring in OCing, etc, and the version of up-scaler used. So many people are new to these scenarios that I think it would help them to understand them and each product's general capability.

As I say, I'm one of those guys that's really waiting for 4K60 raster, 1440p60 RT, and 1080p60 PT mins to line up with up-scaling of the latter two to 4k. I'm trying to figure out exactly what that would take.
I'm sure there are people that are targeting one step up, while one step down may or may not continue to be a ~9070 XT (could trend slightly downward over time, similar to the 4090 and its general goals).
Or perhaps some slightly less orthodox combination.
It would appear they are generally intended to be mostly proportional (I don't know general sample ratio; you probably could figure it out fairly easily), granted that's not always exactly the case in practice.
Ofc scaling from 1440p->4k needs a little extra perf and 1080p->4k quite a bit more, but it could give people an idea of what they need if scaling from your current native Lumen setup (which I think is great).
To me, at least, being able to find some kind of bench to test limits for all those things would be wonderful, as I think that could help people find their 'perfect card' for themselves given the increase of such scenarios.

I don't mean to ask you to do extra work, and I think this bench is wonderful as it is for native rez software and hardware lumen, only that this could make it truly shine above any other I've seen as a one-stop shop.

IMO we need those scenarios to be tested, scrutinized, and understood (by both reviewers and consumers, including myself as the latter, not just developers), and the way you display the info is fantastic.

It already perfectly showcases the 5080's 1440p limitations (I attempt to explain lows and why stability is important, especially as a product ages and/or DLSS requirements rise over time), and I appreciate it greatly.

Honestly, if that package existed, I would scream bloody murder until reviewers put it into their suites and ran the full gamut of native and scaling settings on every card. I think all of those things are that important.

Truly, perhaps the most important factors and/or limitations of any card right now and certainly going forward into the future (especially wrt the next generation of consoles and scaling from their innate capabilities).

Does what I'm saying make sense and/or do you agree? Is something like this possible and/or is it something you could/would consider implementing?

Thank you for your feedback, @alwayssts!

Indeed, adding upscalers is something I have on my TODO, but lately I’ve been doubting it because I’ve noticed (I think) that upscalers might be "forced" externally in games, right? (Please tell me more if you know about this.) That could be a problem if people force the upscaler externally while selecting to run the test natively, in order to cheat.

Right now, AFAIK, this can’t be forced in my benchmark because it isn’t included within it (or at least I think that’s the reason), but I’m afraid that if I add it internally, it could then be forced externally, even if you disable it in the test's options.
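Just to illustrate one possible mitigation (only a sketch; the module names below are examples, not a complete list, and an injector could rename its DLLs), the idea would be to spot-check whether an upscaler runtime has been loaded into the process even though the benchmark never loads one itself:

#include <windows.h>
#include <cstdio>

// Hypothetical spot-check: has some external tool loaded an upscaler runtime
// into this process? The list is illustrative only and would never be exhaustive.
bool ForeignUpscalerLoaded()
{
    const char* suspects[] = {
        "nvngx_dlss.dll",           // DLSS super resolution runtime
        "amd_fidelityfx_dx12.dll",  // FSR 3.x DX12 runtime
        "libxess.dll",              // XeSS runtime
    };
    for (const char* name : suspects)
        if (GetModuleHandleA(name) != nullptr)
            return true;
    return false;
}

int main()
{
    std::printf("External upscaler runtime loaded: %s\n",
                ForeignUpscalerLoaded() ? "yes" : "no");
}

It would only be a deterrent, not a guarantee (a renamed DLL or a driver-level override would slip past it), but combined with other checks it might at least flag the obvious cases.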

In any case, these would be comparisons that reviewers would need to make, because there wouldn't be leaderboards for tests with upscalers, as there would be tons of leaderboards to include and store.

Regards!
 
Joined
May 13, 2008
Messages
1,096 (0.18/day)
System Name HTPC whhaaaat?
Processor 2600k @ 4500mhz
Motherboard Asus Maximus IV gene-z gen3
Cooling Noctua NH-C14
Memory Gskill Ripjaw 2x4gb
Video Card(s) EVGA 1080 FTW @ 2037/11016
Storage 2x512GB MX100/1x Agility 3 128gb ssds, Seagate 3TB HDD
Display(s) Vizio P 65'' 4k tv
Case Lian Li pc-c50b
Audio Device(s) Denon 3311
Power Supply Corsair 620HX
Thank you for your feedback, @alwayssts!

Indeed, adding upscalers is something I have on my TODO, but lately I’ve been doubting it because I’ve noticed (I think) that upscalers might be "forced" externally in games, right? (Please tell me more if you know about this.) That could be a problem if people force the upscaler externally while selecting to run the test natively, in order to cheat.

Right now, AFAIK, this can’t be forced in my benchmark because it isn’t included within it (or at least I think that’s the reason), but I’m afraid that if I add it internally, it could then be forced externally, even if you disable it in the test's options.

In any case, these would be comparisons that reviewers would need to make, because there wouldn't be leaderboards for tests with upscalers, as there would be tons of leaderboards to include and store.

Regards!

I honestly wasn't aware that people did that! Perhaps someone that reads this thread knows the answer if it can be tricked without it being flagged in your situation.
I wouldn't think it could be, but then I don't live to cheat at benchmarks; I care more about the honest data. It's sad that those types of things have to be considered, although I understand the concern.

The most I was aware of are the ways to change the implementation through things like OptiScaler (FSR3 to FSR4/XeSS, or replacing FSR with DLSS hooks/vectors).
It's not unlike changing a DLSS implementation by forcing it in the driver or swapping the .dll. So indeed, with any single implementation that variance could occur (using a different version for more perf).
But I was not aware of people actually tricking it to run without being enabled in the actual software.

This was why I was curious if it was possible to add all of them individually, although that might be a big ask. At least DLSS3/4, FSR 3.1/4, and XeSS. Even FG could be helpful (to determine diff from native).
Because otherwise there is a possibility someone could change the implementation for either a savory reason (to test older/newer versions of the scaler and relative performance) or unsavory (higher score).
If each were labeled, there wouldn't be a need to do that.

I could absolutely understand not wanting to make leaderboards for up-scalers, as if you do use one implementation there could be those variances for those reasons.

I suppose all that *really* matters is DLSS3/FSR 3.1, with DLSS4/FSR4 able to be forced for academics, but then it may not be indicative of the actual perf of native DLSS4/FSR4 in games (as it's more costly but looks better).

And, of course, only RDNA4 can run FSR4 right now.

It is in fact a very strange situation all around that I don't think has been catalogued very well over the course of their lifecycles. This is why having both versions of each may be beneficial, if possible.

As far as forcing it without the flag, perhaps if you implement it you could beta test with someone who's aware of the ways people generally cheat the system and see if they find it possible (and/or have a solution).

Is this a scenario that could be remedied by forcing fullscreen dedicated acceleration and not allowing a window? Again, it's not something I have personally tried.
 
Joined
Sep 29, 2020
Messages
155 (0.09/day)
Exactly @alwayssts , some people just want to be the best at any cost, even if it means cheating. And in a benchmark, this is a crucial issue.

Let's see if others who know more about this can provide additional insights. If not, in the end, I'll integrate some measures and conduct in-depth testing myself, although it will take time. I need to be absolutely sure that I'm not leaving any loopholes that could ruin the leaderboards.

Regarding fullscreen, what do you think would actually change? That said, the benchmark was already running in fullscreen, but some users complained about the blink when switching resolutions. Nowadays, people no longer want that 'old-school' screen blink, so I switched to the standard mode used today.

Anyway, going back to some of my earlier thoughts, I've always believed that the most important thing is to measure the RAW performance of graphics cards. I would say that upscalers are just a multiplier of that performance (not exactly, but with negligible differences).

For example, if a GPU runs at 60 FPS and, with DLSS, it reaches 90 FPS (+50% or x1.5), then a GPU that originally runs at 30 FPS could reach 45 FPS (also +50% or x1.5), right?
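In other words, the working assumption is just a constant multiplier on the native result; a trivial sketch (the 1.5x is only the example figure above, not a measured value):

#include <cstdio>

int main()
{
    const double upscalerGain = 1.5;          // example multiplier from above
    const double nativeFps[] = {60.0, 30.0};  // two hypothetical cards
    for (double fps : nativeFps)
        std::printf("%.0f fps native -> ~%.0f fps upscaled (x%.1f)\n",
                    fps, fps * upscalerGain, upscalerGain);
}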
 
Joined
Jul 31, 2023
Messages
59 (0.10/day)
IMO there's no reason to have upscaling in a benchmark. All that will do is potentially make it CPU dependent if fps gets high enough.
 
Joined
Sep 21, 2020
Messages
1,800 (1.09/day)
Processor 5800X3D -30 CO
Motherboard MSI B550 Tomahawk
Cooling DeepCool Assassin III
Memory 32GB G.SKILL Ripjaws V @ 3800 CL14
Video Card(s) ASRock MBA 7900XTX
Storage 1TB WD SN850X + 1TB ADATA SX8200 Pro
Display(s) Dell S2721QS 4K60
Case Cooler Master CM690 II Advanced USB 3.0
Audio Device(s) Audiotrak Prodigy Cube Black (JRC MUSES 8820D) + CAL (recabled)
Power Supply Seasonic Prime TX-750
Mouse Logitech Cordless Desktop Wave
Keyboard Logitech Cordless Desktop Wave
Software Windows 10 Pro
With the latest change to the main menu, the background now displays properly. Thanks for the fix!

gtb.jpg
 
Joined
Sep 29, 2020
Messages
155 (0.09/day)
IMO there's no reason to have upscaling in a benchmark. All that will do is potentially make it CPU dependent if fps gets high enough.
Thank you @yzonker, I'm more in line with that way of thinking too, but all feedback is welcome of course! And the more options, the better, though maybe only from an in-game menu during the free walkthrough mode (keeping the main options simple and clean is a positive too, I think).

With the latest change to the main menu, the background now displays properly. Thanks for the fix!

View attachment 391136
Great Bob! I updated it mainly with AMD users in mind, since you couldn’t render the previous animated background. I hope you like it!
 
Joined
Jun 25, 2018
Messages
28 (0.01/day)
Location
Russian Federation
System Name Mini pc
Processor 8845hs
Motherboard GMK K8
Cooling Thermalright Macho Rev.B
Memory 5600 cl40-40-40
Video Card(s) 780m 3.2-3.3GHz
Storage WD sn740 1tb, Netac 7000t 2 tb
Display(s) LG oled65c4
Audio Device(s) daart aurora
Software Win11
amd 8845hs 780m
1080p = 420 (7fps)
срамота-бенч 2.jpg
 
Joined
May 13, 2008
Messages
1,096 (0.18/day)
System Name HTPC whhaaaat?
Processor 2600k @ 4500mhz
Motherboard Asus Maximus IV gene-z gen3
Cooling Noctua NH-C14
Memory Gskill Ripjaw 2x4gb
Video Card(s) EVGA 1080 FTW @ 2037/11016
Storage 2x512GB MX100/1x Agility 3 128gb ssds, Seagate 3TB HDD
Display(s) Vizio P 65'' 4k tv
Case Lian Li pc-c50b
Audio Device(s) Denon 3311
Power Supply Corsair 620HX
Exactly @alwayssts ,

Anyway, going back to some of my earlier thoughts, I've always believed that the most important thing is to measure the RAW performance of graphics cards. I would say that upscalers are just a multiplier of that performance (not exactly, but with negligible differences).

For example, if a GPU runs at 60 FPS and, with DLSS, it reaches 90 FPS (+50% or x1.5), then a GPU that originally runs at 30 FPS could reach 45 FPS (also +50% or x1.5), right?
IMO there's no reason to have upscaling in a benchmark. All that will do is potentially make it CPU dependent if fps gets high enough.

I understand wanting to test raw performance, which I would argue up-scaling is actually a part of at this point, but I also think many people find up-scaling confusing even though they do/will need to use it.
Especially during instances of Lumen and other hardware RT.
For instance, while 1440p at .67x scaling is similar to 1080p native in many scenarios (RT included), the same is not true of other combinations.
Upscaling 1440p->4k used to cost something like 8% at most over native 1440p, but with DLSS4/FSR4 that has grown to sometimes around 15%...although it varies. So when RT is the limitation, it should give a consistent result that many (including myself) haven't really been able to decipher beyond 'quality' 1440p (960p internal) being roughly 1080p native, especially as the up-scalers evolve (as they already have).
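Rough pixel math behind that comparison; a back-of-the-envelope sketch that ignores the cost of the upscaler pass itself (which is roughly where the remaining difference goes):

#include <cstdio>

int main()
{
    struct Mode { const char* label; int w, h; double scale; };
    const Mode modes[] = {
        {"1440p native",          2560, 1440, 1.0},
        {"1440p 'quality' 0.67x", 2560, 1440, 0.667},  // ~960p internal
        {"1080p native",          1920, 1080, 1.0},
        {"4k 'quality' 0.67x",    3840, 2160, 0.667},  // ~1440p internal
    };
    for (const Mode& m : modes) {
        const int iw = static_cast<int>(m.w * m.scale);  // internal width
        const int ih = static_cast<int>(m.h * m.scale);  // internal height
        std::printf("%-22s renders %4d x %4d = %.2f MP\n",
                    m.label, iw, ih, iw * ih / 1e6);
    }
}

So 'quality' 1440p actually renders fewer pixels than native 1080p, and the upscaler pass eats some of that advantage back, which is roughly why the two land so close in practice.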

Trying to hit two birds with one response here: it is important because in many instances of Lumen (and/or other RT-limited scenarios) people are using hardware up-scaling; CPUs won't be the limitation.
It's already been shown by people with varying CPUs getting very similar results based upon the RT capability of the GPU.

This benchmark accurately represents a real-world instance limited by Lumen (hardware ray-tracing), and there are a few games out there that show identical, if not near-identical, results, even with different RT.
This 'benchmark' not only gives very real results, similar to games right now; it shows the actual frame rate (and the important metrics) rather than relying on a 'score' you have to decipher for yourself; not unlike a game.
The difference being game results come from a tester running around a game looking for demanding sections to find that same limitation; this benchmark takes 2.5min to run (hence a much better/easier solution).

I hope people understand where I'm coming from wrt this. It's about people knowing what they need, and whether their current setup can meet the level at which they wish to play. This includes RT, but also up-scaling (as an evolving metric).
Perhaps even developers having an easy understanding of each card's relative limitations in this regard before working on their own implementations/settings, with RT/up-scaling then being less of a variable.

I also think it would be extremely handy for reviews, as they'd know that if a game behaves differently from this bench, either the game has problems and/or a certain card is limited in some other way.
Also, if testing many cards, this would be fairly quick way to give people the answer to the question I feel is most important right now (native and up-scaled performance of RT across any current or future card).

It's important because not only do you see reviews testing fewer games because of this phenomenon, but it has also become a ridiculous amount of work for reviewers who test a lot of games (like W1zzard).
The latter of whom can't currently find time to do that testing (which is actually extremely important) because of how many games they run (often with similar results to this bench).

If I can look at one bench and see that something like a 9070 XT will generally keep ~60fps in 1080p or 'quality' 1440p up-scaling (and it does/would), that should be true of many games. Other cards would give similar answers.
Without testing many games that all show the same limitation at their weakest point, which is often RT. As I said, this bench literally proves my point about the 5080 and 1440p RT; to me that stuff is important.
That way people do not get the wrong idea and/or expectation from the hardware. Whereas game results can/do vary based on how a reviewer benches the game, this bench should stay consistent.
Which is to say, it would show the lowest common denominator that some reviews may not find in their run-through of part of a level, nor would they take the time to find that same limitation.
 