Monday, June 22nd 2020

NVIDIA GeForce "Ampere" Hits 3DMark Time Spy Charts, 30% Faster than RTX 2080 Ti

An unknown NVIDIA GeForce "Ampere" GPU model has surfaced in the 3DMark Time Spy online database. We don't know whether this is the RTX 3080 (RTX 2080 successor) or the top-tier RTX 3090 (RTX 2080 Ti successor). Rumored specs of the two are covered in our older article. The 3DMark Time Spy score unearthed by _rogame (Hardware Leaks) is 18,257 points, which is close to 31 percent faster than the RTX 2080 Ti Founders Edition, 22 percent faster than the TITAN RTX, and just a tiny bit slower than KINGPIN's record-setting EVGA RTX 2080 Ti XC. Futuremark SystemInfo reads the GPU clock speed of the "Ampere" card as 1935 MHz, and its memory clock as "6000 MHz." Normally, SystemInfo reports the actual memory clock (i.e. 1750 MHz for 14 Gbps-effective GDDR6). Perhaps SystemInfo isn't yet optimized for reading memory clocks on "Ampere."
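For anyone who wants to sanity-check the figures, here is a minimal Python sketch of the two calculations above. The RTX 2080 Ti FE reference score used below is an assumption back-calculated from the quoted ~31 percent lead, not a published result.

# Rough sanity check of the figures quoted above.
ampere_score = 18257              # leaked Time Spy score
rtx_2080ti_fe_score = 13940       # assumed reference score (illustration only)

lead_pct = (ampere_score / rtx_2080ti_fe_score - 1) * 100
print(f"Lead over RTX 2080 Ti FE: {lead_pct:.1f}%")   # ~31%

# GDDR6 convention: effective data rate = actual memory clock x 8
actual_clock_mhz = 1750
effective_gbps = actual_clock_mhz * 8 / 1000
print(f"{actual_clock_mhz} MHz actual = {effective_gbps:.0f} Gbps effective")   # 14 Gbps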
Source: HardwareLeaks

67 Comments on NVIDIA GeForce "Ampere" Hits 3DMark Time Spy Charts, 30% Faster than RTX 2080 Ti

#26
cucker tarlson
you mean sharpening is an alternative to image recreation :laugh: and a better one :laugh:
and if you claim otherwise you're a fanboy :roll:


Sweetie, image sharpening is hardly a feature; calling it that is a stretch in 2020. It's the easiest trick a GPU can do to produce an image that may look better to the untrained eye.
NVIDIA has had that feature via Freestyle for years, but "folks from the known camp" tend to forget about it, until AMD has it two years later, that is.

And Hardware Unboxed has their own video on DLSS 2.0; I guess you missed that one. I advise you not to watch it, because it'll just make you sad how good they say it is.

Let me inform you, since you're slow to learn new facts:
Provided we get the same excellent image quality in future DLSS titles, the situation could be that Nvidia is able to provide an additional 30 to 40 percent extra performance when leveraging those tensor cores. We'd have no problem recommending gamers to use DLSS 2.0 in all enabled titles because with this version it’s basically a free performance button.

The visual quality is impressive enough that we’d have to start benchmarking games with DLSS enabled -- provided the image quality we’re seeing today holds up in other DLSS games -- similar to how we have benchmarked some games with different DirectX modes based on which API performs better on AMD or Nvidia GPUs. It’s also apparent from the Youngblood results that the AI network tensor core version is superior to the shader core version in Control.
Posted on Reply
#27
medi01
cucker tarlson: image recreation
I hope, this was sarcasm.
cucker tarlson: that feature
No, not "that feature": NVIDIA's upscaling tech called DLSS 1.0 was beaten, according to the review linked above.
Nobody directly compared it to the image upscaler called DLSS 2.0.

If you are too concerned about the naming:



It is notable that using neural-network-like processing to upscale images is not something either of the GPU manufacturers pioneered:
towardsdatascience.com/deep-learning-based-super-resolution-without-using-a-gan-11c9bb5b6cd5
cucker tarlson: it'll just make you sad how good they say it is.
Sad is the reality-distortion thing; there is nothing sad about upscaling doing its job.
Posted on Reply
#28
cucker tarlson
medi01: Nobody directly compared it to the image upscaler called DLSS 2.0.
What are you talking about? There have been a dozen reviews of DLSS 2.0 from every major site. FOR MONTHS.
Here, even your favorite AMD-leaning, data-manipulating channel has to admit that:
Provided we get the same excellent image quality in future DLSS titles, the situation could be that Nvidia is able to provide an additional 30 to 40 percent extra performance when leveraging those tensor cores. We'd have no problem recommending gamers to use DLSS 2.0 in all enabled titles because with this version it’s basically a free performance button.

The visual quality is impressive enough that we’d have to start benchmarking games with DLSS enabled -- provided the image quality we’re seeing today holds up in other DLSS games -- similar to how we have benchmarked some games with different DirectX modes based on which API performs better on AMD or Nvidia GPUs. It’s also apparent from the Youngblood results that the AI network tensor core version is superior to the shader core version in Control.
Posted on Reply
#29
Metroid
This image is a good indicator of what the 3080 will be vs. the 2080 Ti: in it we can clearly see that the GTX 1080 was 26% (stock) to 34% (slightly overclocked) faster than the GTX 980 Ti, and this Ampere news leak shows the RTX 3080 is 31% faster than the 2080 Ti.

Posted on Reply
#30
cucker tarlson
Metroid: This image is a good indicator of what the 3080 will be vs. the 2080 Ti: in it we can clearly see that the GTX 1080 was 34% faster than the GTX 980 Ti, and this Ampere news leak shows the RTX 3080 is 31% faster than the 2080 Ti.

Mostly due to VRAM and bandwidth at 4K.
At 1440p that was maybe 25%.
Posted on Reply
#31
Blue4130
Metroid: this Ampere news leak shows the RTX 3080 is 31% faster than the 2080 Ti.
No, it shows that an unknown Ampere card is 31% faster. Could be a 3080, could be a 3090, heck, could be a 3070.
Posted on Reply
#32
cucker tarlson
Blue4130: No, it shows that an unknown Ampere card is 31% faster. Could be a 3080, could be a 3090, heck, could be a 3070.
Exactly.
"RTX whangdoodle" is what the thread should say.
Posted on Reply
#33
r.h.p
medi01: Your RDNA of which size/price is shaking, chuckle?


AMD already has its own alternative to DLSS; it was compared to 1.0 and the conclusion was that it clearly beats it (no comparisons to 2.0 yet), which went unnoticed by the folks from the known camp, as, ultimately, it's about fanboyism.
They promised me Navi 22 RDNA 2 with a wicked cooler, now I'm fraked ;)
Posted on Reply
#34
R0H1T
cucker tarlson: let me inform you since you're slow to learn new facts
Let's be clear about your "facts" ~ we'll need a lot more data & a lot more (in game) comparisons to see if there's a discernible loss in IQ or not, as compared to when DLSS is off. So when you're quoting them, remember to highlight this very important part ~
The visual quality is impressive enough that we’d have to start benchmarking games with DLSS enabled -- provided the image quality we’re seeing today holds up in other DLSS games
And I'll add again ~ we need a lot more data not just one off comparisons from youtubers.
Posted on Reply
#35
cucker tarlson
R0H1T: Let's be clear about your "facts" ~ we'll need a lot more data & a lot more (in game) comparisons to see if there's a discernible loss in IQ or not, as compared to when DLSS is off. So when you're quoting them, remember to highlight this very important part ~
And I'll add again ~ we need a lot more data not just one off comparisons from youtubers.
I quoted one source.
That does not mean there is just one DLSS 2.0 review out there.
And DLSS 2.0 is not trained on a per-game basis.
But how would you know that? There have been reviews for four months and you don't even know.
Posted on Reply
#36
R0H1T
Yes, and that's why I said a lot more data than "just one-off comparisons from youtubers," so I know there are multiple reviews out there. I'd also like more print-media comparisons featuring DLSS 2.0, including yours truly, TPU. Again, more games and a lot more data.
cucker tarlson: but how would you know that? There have been reviews for four months and you don't even know
You don't wanna go down that rabbit hole with me, trust me :rolleyes:
Posted on Reply
#37
medi01
cucker tarlson: What are you talking about? There have been a dozen reviews of DLSS 2.0 from every major site. FOR MONTHS.
Comprehending written statements is hard, I guess. What was your interpretation of "compared it", what is that "it", insightful one?
Posted on Reply
#38
cucker tarlson
medi01: Comprehending written statements is hard, I guess. What was your interpretation of "compared it", what is that "it", insightful one?
What? Can you start making sense, please? Quote the part you're referring to, maybe?

Why would anyone compare image sharpening to image reconstruction, really? Make sense much?

NVIDIA vs. AMD image sharpening in the driver makes sense.

DLSS vs. image sharpening? Why?

To get a 40% performance uplift from a resolution drop + sharpening, you have to drop the resolution by roughly 40% and apply tons of sharpening that may look good to an untrained eye but really bad on closer inspection.
You're not getting the same image quality as native resolution vs. the DLSS quality preset with said performance uplift.
Maybe DLSS performance vs. a resolution-scale drop + sharpening would be comparable in terms of quality, but then again, with the performance preset you're getting double the framerate (rough pixel-count sketch at the end of this post).
www.purepc.pl/nvidia-dlss-20-test-wydajnosci-i-porownanie-jakosci-obrazu?page=0,8

Sorry, but what you're arguing here is just irrelevant.
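To put rough numbers on the render-cost side of that argument: the sketch below assumes shading cost scales roughly with rendered pixel count (a simplification) and uses the commonly quoted DLSS per-axis render scales (~0.667 for Quality, 0.5 for Performance).

# Back-of-the-envelope pixel-count sketch (assumes cost ~ pixels rendered).
native_w, native_h = 2560, 1440
presets = {
    "Native": 1.0,
    "DLSS Quality (~0.667/axis)": 2 / 3,
    "DLSS Performance (0.5/axis)": 0.5,
}
native_pixels = native_w * native_h
for name, s in presets.items():
    pixels = int(native_w * s) * int(native_h * s)
    print(f"{name:28s} renders {pixels / native_pixels:4.0%} of native pixels")
# Quality renders roughly 44% of native pixels, Performance about 25%;
# the reconstruction/sharpening pass then has to recover the perceived detail.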
Posted on Reply
#39
TheoneandonlyMrK
I think a few of you are going off topic.

This is an Ampere performance rumour thread.


Soo on topic.
D'ya see the PCIe version of the A100?

250 watts, not 400 like the SXM version, and only a 10% performance loss.

To me this indicates they really are pushing the silicon past its efficiency curve: 150 watts for that last 10%.

We're expecting up to 300 watts TDP on a 3080 Ti, so it seems like they're pushing that curve.
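A quick perf-per-watt sketch using the figures as quoted above (250 W PCIe vs. 400 W SXM, roughly a 10% performance gap; illustrative, not independently verified):

# Perf-per-watt with the quoted figures (illustrative only).
sxm_watts, sxm_perf = 400, 1.00    # SXM4 A100, baseline performance
pcie_watts, pcie_perf = 250, 0.90  # PCIe A100, ~10% slower per the comment above

sxm_ppw = sxm_perf / sxm_watts
pcie_ppw = pcie_perf / pcie_watts
print(f"PCIe perf/W is about {pcie_ppw / sxm_ppw:.2f}x the SXM part's")  # ~1.44x
print(f"The last ~10% of performance costs an extra {sxm_watts - pcie_watts} W")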
Posted on Reply
#40
R0H1T
We'll have to wait on that one; pretty sure NVIDIA can dial down clocks if they see RDNA2's top tiers unable to compete with them on price or performance. Clocks can literally change after a launch, as AMD showed us o_O
Posted on Reply
#41
cucker tarlson
theoneandonlymrk: I think a few of you are going off topic.

This is an Ampere performance rumour thread.


Soo on topic.
D'ya see the PCIe version of the A100?

250 watts, not 400 like the SXM version, and only a 10% performance loss.

To me this indicates they really are pushing the silicon past its efficiency curve: 150 watts for that last 10%.

We're expecting up to 300 watts TDP on a 3080 Ti, so it seems like they're pushing that curve.
TBP.
TDP is unknown.
Posted on Reply
#42
TheoneandonlyMrK
cucker tarlson: TBP.
TDP is unknown.
I didn't say it was, I said up to.

Based on cooling and hypothetical common sense.
Posted on Reply
#43
cucker tarlson
theoneandonlymrk: I didn't say it was, I said up to.

Based on cooling and hypothetical common sense.
The 2080 Ti is 285 W and it's the most power-efficient Turing.



So yeah, 400 W and up to 300 W are kinda different, right?
Posted on Reply
#44
medi01
cucker tarlson: quote the part you're referring to, maybe?
Radeon Susan, dude, it ain't hard:

cucker tarlson: why would anyone compare image sharpening to image reconstruction, really?
Why would anyone call upscaling different names?
Because marketing.
theoneandonlymrk: D'ya see the PCIe version of the A100?

250 watts, not 400 like the SXM version, and only a 10% performance loss.

To me this indicates they really are pushing the silicon past its efficiency curve: 150 watts for that last 10%.
Puzzled where you've seen the perf/consumption figures.
Posted on Reply
#45
cucker tarlson
medi01: Radeon Susan, dude, it ain't hard:



Why would anyone call upscaling different names?
Because marketing.


Puzzled where you've seen the perf/consumption figures.
Didn't answer me.
Why compare a resolution drop w/ image reconstruction to just a pure, simple resolution-scale drop?
Is resolution dropping a new feature now? :laugh:
Posted on Reply
#47
95Viper
Quit the insults and drama
Stay on topic and keep it civil

Thank You and Have a Good Day
Posted on Reply
#48
Unregistered
Turing was very disappointing (overpriced) for the performance uplift compared to Pascal; hopefully Ampere and RDNA2 will bring a real generational jump.
#49
cucker tarlson
Xex360: Turing was very disappointing (overpriced) for the performance uplift compared to Pascal; hopefully Ampere and RDNA2 will bring a real generational jump.
It was both.
I mean, I could understand a $1000 2080 Ti because the competition wasn't there (still isn't 2 years later, might not be this year), but for +30-35% over the 1080 Ti, not really.
Posted on Reply
#50
Lucas_
I expected more... oh well.
Posted on Reply