Monday, July 23rd 2018
The Mill Crunches Away: Alleged NVIDIA GTX 1170 Benchmarks Surface
"History turned to legend, legend turned to myth", is part of one of the opening lines on the first Lord of the Rings movie. And it just so happens it applies pretty well to the overall rumor mill context and expectation: we'll see if this is history in the making or not. Case in point: leaked benchmarks point towards NVIDIA's next-gen 1100 series of graphics cards to bring tangible performance improvements, with the 1170 tier delivering GTX 1080 Ti levels of performance. This is conveyed through a 3D Mark Firestrike score of 22,989 - of which true authenticity can't be ascertained, due to the old "photo of a screenshot" trick. The 2.5 GHz core clock also seems too good to be true - and the 16 GB memory pool tends towards that end of the spectrum as well. Still, it wets the appetite, doesn't it? Just another rumor that we'll eventually see either confirmed or dismissed - like the expected launch date.
Source:
WCCFTech
Certainly happy to keep mine for a while longer yet.
Don't be too focused on theoretical specs. If Nvidia puts 8 GB on "GTX 1180", then it will probably be enough. Memory is still expensive, and GDDR6 is slightly more expensive than GDDR5/X.
The GTX 980 is the most forgotten card on TPU; don't take it that seriously. I'm quite happy with my old 980; I'm playing a bit less and focused on other projects. Thanks for the advice, kid.
Regards,
Maxwell's 970/980 offered both delta compression (more efficient use of memory bandwidth) and additional VRAM over the 'late Kepler' cards. The 7970 suffers a similar fate. Another important aspect is the increased use of tessellation over time; Maxwell features improved tessellation over Kepler as well.
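To illustrate the basic idea behind delta compression, here's a toy sketch in C: store one base value per block plus small signed deltas, falling back to raw data when the deltas don't fit. This is purely conceptual; the block size and delta width are assumptions, not NVIDIA's actual hardware scheme.

```c
#include <stdint.h>
#include <stdio.h>

/* Toy illustration of delta compression: one base value per block plus
 * small signed deltas. If every delta in a block fits in 8 bits, the block
 * can be stored as 1 base + (N-1) 1-byte deltas instead of N 4-byte values,
 * cutting bandwidth roughly 4x for slowly varying data. Block size and
 * delta width are assumptions for illustration only. */

#define BLOCK 8

/* Returns bytes needed for a block: compressed if deltas fit, raw otherwise. */
size_t block_cost(const uint32_t *px, size_t n) {
    uint32_t base = px[0];
    for (size_t i = 1; i < n; i++) {
        int64_t delta = (int64_t)px[i] - (int64_t)base;
        if (delta < -128 || delta > 127)
            return n * sizeof(uint32_t);   /* incompressible: store raw */
    }
    return sizeof(uint32_t) + (n - 1);     /* base + 1-byte deltas */
}

int main(void) {
    uint32_t smooth[BLOCK] = {100, 101, 99, 100, 102, 98, 100, 101};
    uint32_t noisy[BLOCK]  = {100, 90000, 3, 70000, 5, 60000, 7, 50000};
    printf("smooth block: %zu bytes (raw: %zu)\n",
           block_cost(smooth, BLOCK), (size_t)(BLOCK * sizeof(uint32_t)));
    printf("noisy block:  %zu bytes (raw: %zu)\n",
           block_cost(noisy, BLOCK), (size_t)(BLOCK * sizeof(uint32_t)));
    return 0;
}
```

The smooth block compresses to 11 bytes against 32 raw, while the noisy one stays at 32; that's the gist of why compression helps most with gradients and flat regions.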
Contrary to popular belief, cards never 'age well'; rather, cards at a VRAM limit will not last as long as those that aren't.
And if you're wondering why TPU's relative performance database shows 'no change' across the years, check these out: we didn't use 4K yet at the time, and the games tested are different:
www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_780/26.html
www.techpowerup.com/reviews/MSI/GTX_970_Gaming/27.html
Some cards do age well, or rather, it's that some age poorly. Kepler was a radical redesign with many deficiencies at the hardware level which were mitigated through software, software that is no longer maintained as well through driver updates. It's no accident that "Game Ready drivers" became a thing around that time. Exactly, but these happen all the time, and they're taken care of through driver updates. There are API tools to inform the driver which engine an application uses; why do you think such a feature exists? So that specific optimizations may be applied.
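One concrete example of such a hook is Vulkan's VkApplicationInfo, which lets an application report its name and engine to the driver, so engine-specific optimizations can be keyed off those strings. A minimal sketch in C; the application and engine names below are placeholders, not anything from a real title:

```c
#include <vulkan/vulkan.h>
#include <stdio.h>

int main(void) {
    /* The driver sees these strings at instance creation and can apply
     * per-application or per-engine optimizations based on them.
     * "MyGame" and "MyEngine" are hypothetical names. */
    VkApplicationInfo appInfo = {0};
    appInfo.sType              = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    appInfo.pApplicationName   = "MyGame";
    appInfo.applicationVersion = VK_MAKE_VERSION(1, 0, 0);
    appInfo.pEngineName        = "MyEngine";
    appInfo.engineVersion      = VK_MAKE_VERSION(4, 2, 0);
    appInfo.apiVersion         = VK_API_VERSION_1_0;

    VkInstanceCreateInfo createInfo = {0};
    createInfo.sType            = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    createInfo.pApplicationInfo = &appInfo;

    VkInstance instance;
    if (vkCreateInstance(&createInfo, NULL, &instance) != VK_SUCCESS) {
        fprintf(stderr, "failed to create Vulkan instance\n");
        return 1;
    }
    vkDestroyInstance(instance, NULL);
    return 0;
}
```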
Tessellation isn't even used much these days?! Are you blind, sir? It's everywhere.
And the 970 has 4 GB; it just has 0.5 GB that is slower, but that still exists on the same PCB and the driver can still utilize it. So we're not talking about '0.5 GB', and 1 GB is quite huge, a whopping 25% more. Sure, another bit can be explained by driver optimizations, but you can still do just fine in every game without the Game Ready drivers, just getting the major ones. In fact, since late Maxwell / early Pascal, most of the 'Game Ready' drivers have been bug fixes and fixes for the driver before them.
But if you want to believe it's all about driver TLC, be my guest.
You think this is as cut and dried as shader counts and RAM capacity; it's not.
I said this:
The gap did widen, you're right, but that is mostly attributable to VRAM and to changes in game/engine design catering to the current console crop. It has nothing to do with Nvidia or drivers, and everything to do with cards running into bottlenecks they didn't hit before in newer benchmark suites and games.
You say: It's not a problem to agree on something :p And coming back to the driver TLC post-Kepler, I think we can put that to rest as well; you literally confirmed yourself that other factors are in play.
You also said this: you were the one who claimed there was just one factor, namely VRAM, not me. I never once said there aren't other factors in play; on the contrary, I actually pointed out other things. What I did say, though, and perhaps I wasn't clear enough, is essentially that the factors I mentioned aren't independent: hardware deficiencies => more work required on the software side => lack of software optimizations => a gap in performance.
VRAM, on the other hand, is a factor independent of the above: not enough VRAM => bad performance. That doesn't change over time; push a GPU over its VRAM limit and you get bad performance, back then and now, with or without driver support. It's a constant of sorts, and a consistently widening performance differential over time cannot be explained by it in every circumstance.