Monday, September 17th 2018

NVIDIA RTX 2080 / 2080 Ti Results Appear For Final Fantasy XV

The online results database for the Final Fantasy XV Benchmark has been partially updated to include NVIDIA's RTX 2080 and 2080 Ti. Scores for both standard and high quality settings at 2560x1440 and 3840x2160 are available, while the data for 1920x1080 and the lite quality tests is not yet listed.

Taking a look at the RTX 2080 Ti results shows it beating out the GTX 1080 Ti by 26% and 28% in the standard and high quality tests, respectively, at 2560x1440. Increasing the resolution to 3840x2160 again shows the RTX 2080 Ti ahead, this time by 20% and 31% respectively. The RTX 2080 offers a similar improvement over the GTX 1080 at 2560x1440, delivering 28% and 33% more performance in the same standard and high quality tests. Once again, increasing the resolution to 3840x2160 results in performance 33% and 36% better than the GTX 1080. Overall, both graphics cards are shaping up to be around 30% faster than the previous generation before any of their new features come into play. With Final Fantasy XV getting DLSS support in the near future, the relative performance of the RTX series is likely to improve further over the previous generation.
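For clarity on how figures like these are read: each percentage is simply the ratio of the two cards' benchmark scores. A minimal sketch in Python, using made-up scores rather than the actual database entries:

```python
def uplift(new_score: float, old_score: float) -> float:
    """Relative improvement of new_score over old_score, in percent."""
    return (new_score / old_score - 1) * 100

# e.g. a hypothetical 6300 vs 5000 points works out to a 26% uplift
print(f"{uplift(6300, 5000):.0f}%")  # -> 26%
```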
Source: Final Fantasy XV

39 Comments on NVIDIA RTX 2080 / 2080 Ti Results Appear For Final Fantasy XV

#26
crazyeyesreaper
Not a Moderator
coonbroAs demanding as Final Fantasy XV is, I was worried that these cards would struggle under it. Now I would feel I got my money's worth in a new 20 series [that's if I was wanting or willing to buy one to start with].
Keep in mind FF XV is getting DLSS, so with the tensor cores supersampling is essentially free. In theory that means fewer jaggies and a sharper, cleaner image without a performance loss, so the performance gap in an apples-to-apples comparison will likely grow far larger. However, I myself won't be buying an RTX series card; I'm sticking with my 1080 Ti for the time being.
#27
Batailleuse
lynx29It would be nice to see some modern benchmarks with the latest drivers in modern AAA games with SLI and CF @W1zzard

It's been a while since anyone has shown any, honestly.
That's because SLI is basically dead.

uk.hardware.info/reviews/8113/crossfire-a-sli-anno-2018-is-it-still-worth-it

Less and less game support, DX12 doesn't help, poor performance gains.

You basically get 5-15% better total FPS by doubling the GPU (1080 Ti x2) in most games, except for a few outliers. 5-15% more FPS for 100% more money; I don't think it's worth it. Plus, I quote them:
"Furthermore, micro-stuttering remains an unresolved problem. It varies from game to game, but in some cases 60 fps with CrossFire or SLI feels a lot less smooth than the same frame rate on a single GPU. What's more, you're more likely to encounter artefacts and small visual bugs when you're using multiple GPUs as sometimes the distribution of the graphic workload doesn't seem to be entirely flawless."

So you get 5-15%, but a frame rate that doesn't feel smooth. Better to just spend the extra cash on the best single GPU in the line-up to get the best performance.
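To put some rough numbers on that trade-off, here is a small sketch; the price and frame rates are illustrative assumptions, not measurements:

```python
single_price, single_fps = 700.0, 100.0   # one high-end card (hypothetical figures)
sli_price = 2 * single_price              # two cards: 100% more money

for scaling in (0.05, 0.15):              # the 5-15% range quoted above
    sli_fps = single_fps * (1 + scaling)
    print(f"+{scaling:.0%} scaling: "
          f"{single_fps / single_price:.3f} fps/$ single vs "
          f"{sli_fps / sli_price:.3f} fps/$ SLI")
# SLI ends up at roughly half the frames per dollar of a single card.
```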
#28
TheGuruStud
bugWell, I can understand you're used to the huge strides going from GCN 1.0 to 1.1 to 1.2 and so on. But Nvidia can't possibly keep up with that.
Though rumor has it these were actually meant for 10nm, but there's no 10nm capacity available atm.

I'm also curious how you derive performance from specs without knowing the frequency these chips run at, because that alone can swing things by ±25%.
The chips are maxed out on boost, which I wager is about the same as current cards (~1,900 MHz). Clocks don't do anything when they're already this high.
#29
Enzarch
For more reference, attached are results from a reasonably overclocked rig (1700X @ 4 GHz / 2800 MHz CL14 & 1080 Ti @ 2113 MHz core / 12200 MHz memory).
#30
medi01
crazyeyesreaperThe RTX 2080 offers a similar improvement over the GTX 1080 at 2560x1440, delivering 28% and 33% more performance
Too bad it costs as much as a 1080 Ti.
#31
laszlo
I'm sure these new cards will be the best on the market, holding the crown, but prices are too high due to the lack of competition.

Hope Intel develops their GPU faster and AMD throws something onto the market... so previous-gen card prices drop; then I'll upgrade, but not sooner.

Even now the cheapest used 1080 I can find is ~$400 and an RX 580 is ~$300 (those used in mining rigs are slightly cheaper, but maybe worn out...).
#32
Th3pwn3r
xkm1948Most salty comments are coming from 1080Ti owners. We get it. You need to justify your purchase.
I'm trying to make sense of that but it's impossible. Can you explain what the heck you were saying?
#33
BrainCruser
BatailleuseThat's because SLI is basically dead.

uk.hardware.info/reviews/8113/crossfire-a-sli-anno-2018-is-it-still-worth-it

Less and less game support, DX12 doesn't help, poor performance gains.

You basically get 5-15% better total FPS by doubling the GPU (1080 Ti x2) in most games, except for a few outliers. 5-15% more FPS for 100% more money; I don't think it's worth it. Plus, I quote them:
"Furthermore, micro-stuttering remains an unresolved problem. It varies from game to game, but in some cases 60 fps with CrossFire or SLI feels a lot less smooth than the same frame rate on a single GPU. What's more, you're more likely to encounter artefacts and small visual bugs when you're using multiple GPUs as sometimes the distribution of the graphic workload doesn't seem to be entirely flawless."

So you get 5-15%, but a frame rate that doesn't feel smooth. Better to just spend the extra cash on the best single GPU in the line-up to get the best performance.
I am pretty sure that for multi-GPU Nvidia is moving to NVLink, which, if done properly, will enable better multi-GPU scaling that is transparent to games. NVLink is similar to multi-CPU motherboards with a NUMA memory arrangement. (It will add the GPU memories together, and the processors will work in parallel on a single frame.)
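A tiny illustration of the memory-pooling point, under the assumption that pooling works as described (classic SLI/AFR keeps a full copy of the data on each card, so capacities don't add); the capacity figure is an assumption, not a spec:

```python
vram_per_card_gb = 11       # e.g. a hypothetical 11 GB card
num_gpus = 2

mirrored_pool = vram_per_card_gb            # AFR: data duplicated on both cards
nvlink_pool = vram_per_card_gb * num_gpus   # pooled: memories summed

print(f"mirrored: {mirrored_pool} GB usable, pooled: {nvlink_pool} GB usable")
```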
#34
Unregistered
lynx29I'm not impressed at all. I'll wait for the 2019 cards, especially since the 1080 Ti overclocks like a beast, as does the regular 1080, and they come very close to those stock scores of the 2080 and 2080 Ti.
Right there with ya. Absolutely terrible performance when considering the price bump per model. I had been looking forward to the upgrade, but at these price points / performance increases... absolutely not. I'd like to think they'll sell terribly so the pricing will adjust itself into a more reasonable range given the performance increase, but there's no way of knowing how they'll do. If they sell well, the pricing will obviously stay up in the ludicrous range and I'll just wait until the 3000 series.
#35
N3M3515
phanbueyLol. Yes... we are salty because we need to justify our purchase. You definitely got it.

www.forbes.com/sites/jasonevangelho/2018/09/17/new-rtx-2080-benchmarks-final-fantasy-xv-results-reveal-pricing-problem/#7732da865124

"The GTX 1080 boasts an average 50% better performance than the GTX 980 at 1440p across 7 synthetic and in-game benchmarks. And it rises to the 4K challenge, kicking out an average 65% higher framerates across those same tests compared to the GTX 980. And its entry price is only 9% higher than its 900 series predecessor."
This
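A rough way to express the value comparison the quoted article is making; the Pascal figures come from the quote above, while the Turing inputs below are hypothetical placeholders, not measured prices or scores:

```python
def perf_per_dollar_gain(perf_gain_pct: float, price_increase_pct: float) -> float:
    """How the new card's performance-per-dollar compares to its predecessor's."""
    return (1 + perf_gain_pct / 100) / (1 + price_increase_pct / 100)

# GTX 980 -> GTX 1080, using the figures from the quote: +50% perf, +9% price
print(f"Pascal:  {perf_per_dollar_gain(50, 9):.2f}x perf/$")   # ~1.38x
# GTX 1080 -> RTX 2080, hypothetical inputs: +30% perf, +40% price
print(f"Turing?: {perf_per_dollar_gain(30, 40):.2f}x perf/$")  # ~0.93x
```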
#36
Bjorn_Of_Iceland
neatfeatguyNeither. I'll sit on my 980Ti. Still gives me great performance for my gaming needs. 3 years out of it so far and it looks like I'll easily get another 2-3 years out of my card (as long as it doesn't die on me). If I sorely feel the need to upgrade, I'll just steal back the second 980Ti I have in my HTPC/Plex/Gaming computer and run SLI again... curious how well that would work a few years from now...
If you are running at 1080p, that is. Some people have already moved past 1080p, just like we moved past 720p in 2008.
#37
HD64G
bugWell, I can understand you're used to the huge strides going from GCN 1.0 to 1.1 to 1.2 and so on. But Nvidia can't possibly keep up with that.
Though rumor has it these were actually meant for 10nm, but there's no 10nm capacity available atm.

I'm also curious how you derive performance from specs without knowing the frequency these chips run at, because that alone can swing things by ±25%.
And here we are with the review results that came out a few hours ago. Comparing custom 1080s and custom 1080 Tis vs. the pre-overclocked FE 2080s and 2080 Tis, we see less than a 25% difference. I predicted that sort of difference simply because the bigger dies cannot clock enough higher than Pascal to gain much more performance from clocks alone, and GDDR6 was obligatory so the higher core count would not be constrained from delivering the intended performance. So, with the core counts known, that increase was the almost certain outcome; the efficiency and IPC increase is close to 0%, after all.

So, a much higher price for much worse VFM cannot become a market success if customers are rational, eh?
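A back-of-the-envelope version of that estimate; the shader counts are the published specs, while the identical clock is an assumption made purely for the comparison:

```python
def fp32_tflops(shaders: int, boost_mhz: float) -> float:
    """Theoretical FP32 throughput: shaders x 2 ops/clock (FMA) x clock."""
    return shaders * 2 * boost_mhz * 1e6 / 1e12

gtx_1080_ti = fp32_tflops(3584, 1600)   # assuming ~1.6 GHz sustained for both
rtx_2080_ti = fp32_tflops(4352, 1600)
print(f"raw FP32 ratio: {rtx_2080_ti / gtx_1080_ti:.2f}x")  # ~1.21x
```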
#38
bug
HD64GAnd here we are with the review results that came out a few hours ago. Comparing custom 1080s and custom 1080 Tis vs. the pre-overclocked FE 2080s and 2080 Tis, we see less than a 25% difference. I predicted that sort of difference simply because the bigger dies cannot clock enough higher than Pascal to gain much more performance from clocks alone, and GDDR6 was obligatory so the higher core count would not be constrained from delivering the intended performance. So, with the core counts known, that increase was the almost certain outcome; the efficiency and IPC increase is close to 0%, after all.

So, a much higher price for much worse VFM cannot become a market success if customers are rational, eh?
You are still so wrong...

For a chip to be 25% faster than another on average, it must be way more than 25% faster when fully loaded, simply because it cannot be any faster while neither is fully loaded. I'm just stating the obvious here; if you were truly interested in Turing (as opposed to the "I know without reading" attitude), you would have read Anand's in-depth analysis.

And if the above seems overly complicated to you, think about cars: can you shorten the travel time between town A and B by 25% by using a car that's only 25% faster than the reference?
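A worked example of that point; the GPU-bound fraction and peak speedup below are assumptions picked purely to illustrate the dilution effect:

```python
gpu_bound_fraction = 0.6   # share of frame time where the GPU is the bottleneck
peak_speedup = 1.5         # advantage of the new GPU when it is fully loaded

# Amdahl-style: only the GPU-bound portion of the frame gets faster
new_frame_time = (1 - gpu_bound_fraction) + gpu_bound_fraction / peak_speedup
avg_speedup = 1 / new_frame_time
print(f"peak {peak_speedup:.2f}x -> average {avg_speedup:.2f}x")  # ~1.25x
```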
#39
HD64G
bugYou are still so wrong...

For a chip to be 25% faster than another on average, it must be way more than 25% faster when fully loaded, simply because it cannot be any faster while neither is fully loaded. I'm just stating the obvious here; if you were truly interested in Turing (as opposed to the "I know without reading" attitude), you would have read Anand's in-depth analysis.

And if the above seems overly complicated to you, think about cars: can you shorten the travel time between town A and B by 25% by using a car that's only 25% faster than the reference?
Only results matter for performance when making a buy-or-not decision, and that is the main way to compare hardware. Only someone who uses just a program or two that gain more than the average needs to check those specifically. And "on average" means that some games show smaller than 25% gains (mainly DX11 ones) and others bigger than that (mainly DX12 or Vulkan ones, as hardware async compute is there now). Conclusion: the VFM of Turing GPUs at their current prices is awful, and your arguments only make it worse for nVidia. It is simply a new generation on almost the same manufacturing process that cannot give more without making a bigger die. Simple as that. No need for anyone to spend so much now, apart from those indifferent to cost who just want the fastest PC hardware possible at any time. They will lose much money though, as the Turing GPUs will, imho, see big price cuts in a few (2-3) months, and until then RTX features might still be missing.