
Anthem VIP Demo Benchmarked on all GeForce RTX & Vega Cards

30 fps more from driver updates? Are you insane? He even has a 9900K against an 8700K in a CPU-bound game, LMAO.
Are you implying @W1zzard doesn't know what he's doing when benchmarking? Then I suggest you might want to depart this site rather quickly and rather quietly.
 
Are you implying @W1zzard doesn't know what he's doing when benchmarking? Then I suggest you might want to depart this site rather quickly and rather quietly.
I would rather trust VIDEO evidence than some numbers. Maybe you want to explain how he got 108 while it's ~80 in the video, with a superior CPU?
 
I would rather trust VIDEO evidence than some numbers. Maybe you want to explain how he got 108 while it's ~80 in the video, with a superior CPU?

W1zzard is probably using a light and static scene to measure relative performance.
I agree, during heavy scenes this game does require a ton of GPU power to maintain the targeted framerate.
 
RTX 2060's 1900 MHz stealth-overclock ROP clock / Vega 64's 1536 MHz ROP clock ≈ 1.237

At 1080p, Vega 64's 88 fps × 1.237 ≈ 108.8 fps

Conclusion: Vega 64 is ROP bound. TFLOPS means little since it doesn't include ROP read/write factors.

AMD should improve its raster engines and ROPs.
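Spelled out, the argument above is just clock ratio times frame rate; a rough sketch, assuming the ROP clock is the only relevant difference (which is exactly the claim being made):

```python
# Rough sketch of the scaling claim above: if Vega 64 were purely ROP-clock
# bound, its fps should scale with the ratio of ROP clocks.
# Clock figures are the ones quoted in this thread, not official specs.
rtx2060_rop_clock_mhz = 1900.0   # claimed "stealth overclock" boost
vega64_rop_clock_mhz = 1536.0
vega64_fps_1080p = 88.0

clock_ratio = rtx2060_rop_clock_mhz / vega64_rop_clock_mhz   # ~1.237
projected_fps = vega64_fps_1080p * clock_ratio               # ~108.9

print(f"clock ratio: {clock_ratio:.3f}, projected RTX 2060 fps: {projected_fps:.1f}")
```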
 
RTX 2060's 1900 MHz stealth-overclock ROP clock / Vega 64's 1536 MHz ROP clock ≈ 1.237

At 1080p, Vega 64's 88 fps × 1.237 ≈ 108.8 fps

Conclusion: Vega 64 is ROP bound. TFLOPS means little since it doesn't include ROP read/write factors.
That's BS.
The RTX 2060 has only 48 ROPs vs Vega 64's 64. Even with the RTX 2060's max boost, Vega 64 still has the higher fillrate.
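For reference, peak pixel fillrate is just ROPs times clock; with the clocks being thrown around in this thread (not official boost specs), the numbers come out roughly like this:

```python
# Peak pixel fillrate = ROP count x clock, in Gpixels/s.
# Clocks are the figures used in this thread; real boost clocks vary per card.
def fillrate_gpix_s(rops: int, clock_mhz: float) -> float:
    return rops * clock_mhz / 1000.0

rtx2060 = fillrate_gpix_s(48, 1900)   # ~91.2 Gpix/s even at the claimed 1900 MHz
vega64 = fillrate_gpix_s(64, 1536)    # ~98.3 Gpix/s

print(f"RTX 2060: {rtx2060:.1f} Gpix/s, Vega 64: {vega64:.1f} Gpix/s")
```

So on raw ROP throughput alone, Vega 64 still comes out ahead.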
 
May the Main-Thread Limitation be with you, young Game-Engine-Constructor.
 
W1zzard is probably using a light and static scene to measure relative performance.
I agree, during heavy scenes this game does require a ton of GPU power to maintain the targeted framerate.
OK, maybe, but the guy in the video got 105 while staring at a wall, still not 108, though. Is looking at walls how you benchmark games in 2019? I expect actual gameplay fps to be shown in such benchmarks, not some random best-case wall-staring scenario, or at least provide info on what scene you were benchmarking.
 
That's BS.
The RTX 2060 has only 48 ROPs vs Vega 64's 64. Even with the RTX 2060's max boost, Vega 64 still has the higher fillrate.
Swap the RTX 2060 (TU106, 48 ROPs, 3 MB L2 cache before DCC is applied) for its highest version, the RTX 2070 (also TU106), which has 64 ROPs and 4 MB of L2 cache.

Even with 48 ROPs at 1900 MHz, the fill rates are already similar to Vega 64's at 1536 MHz, and that's not even factoring in delta color compression (DCC) differences.
 
I'm sick and tired of lazy devs and greedy publishers pushing unfinished products down our throats. Most of today's game releases would have been called betas 20 years ago. Just look at the Assassin's Creed Odyssey joke: 30 fps (0.1% lows of 16 fps) at 4K with a GTX 1080 Ti, and that is a $60 game! Lousy code optimization is a performance killer, no matter what kind of monster hardware you own. Most publishers don't give a f... about PC gaming anymore. 90% of the games are criminally badly optimized console ports.
 
I remember it being claimed at the launch of both Polaris and Vega: don't judge it yet, it will get much better over time!

Gee, sounds just like DXR. But hey, buy it now because future!
 
Swap the RTX 2060 (TU106, 48 ROPs, 3 MB L2 cache before DCC is applied) for its highest version, the RTX 2070 (also TU106), which has 64 ROPs and 4 MB of L2 cache.
What is your point? The RTX 2070 is far beyond Vega 64, with a nearly comparable fillrate.
Your claim is still incorrect: Vega 64 is not ROP limited, nor is it bandwidth or GFLOP limited.

GCN's problem is saturating its resources, which is why it struggles more at lower resolutions than at higher ones.

Even with 48 ROPs at 1900 MHz, the fill rates are already similar to Vega 64's at 1536 MHz, and that's not even factoring in delta color compression (DCC) differences.
Don't mix in compression; that's not relevant here.
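To put rough numbers behind "not bandwidth or GFLOP limited", here is a sketch using paper specs (reference memory configurations; shader clocks as discussed in this thread, not measured values):

```python
# Rough paper specs. FP32 TFLOPS = shaders x 2 ops/clock x clock;
# bandwidth = bus width x effective data rate. Not measured numbers.
def fp32_tflops(shaders: int, clock_ghz: float) -> float:
    return shaders * 2 * clock_ghz / 1000.0

vega64_tflops = fp32_tflops(4096, 1.536)    # ~12.6 TFLOPS
rtx2060_tflops = fp32_tflops(1920, 1.900)   # ~7.3 TFLOPS at the claimed boost

vega64_bw_gb_s = 2048 * 1.89 / 8    # HBM2: 2048-bit bus x ~1.89 Gbps ~ 484 GB/s
rtx2060_bw_gb_s = 192 * 14.0 / 8    # GDDR6: 192-bit bus x 14 Gbps   = 336 GB/s

print(f"Vega 64:  {vega64_tflops:.1f} TFLOPS, {vega64_bw_gb_s:.0f} GB/s")
print(f"RTX 2060: {rtx2060_tflops:.1f} TFLOPS, {rtx2060_bw_gb_s:.0f} GB/s")
```

Vega 64 leads on fillrate, bandwidth, and TFLOPS on paper, which is consistent with the bottleneck sitting elsewhere in the pipeline.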
 
The game is a meh Warframe/Destiny rip anyway.

The map is tiny (even assuming the full retail map will be 50% bigger, which may or may not be true, it's still tiny).

Loading screens are frequent, which completely trashes immersion.

Controls are between meh and horrible.

Gunplay is uninspiring.

And the UI is the worst I have experienced in several years.

What is your point? The RTX 2070 is far beyond Vega 64, with a nearly comparable fillrate.
Your claim is still incorrect: Vega 64 is not ROP limited, nor is it bandwidth or GFLOP limited.

GCN's problem is saturating its resources, which is why it struggles more at lower resolutions than at higher ones.

Don't mix in compression; that's not relevant here.

It actually is very relevant; Vega's fill rate is garbage.

To the people that don't get it, I'll make it as clear as I can:

AMD does not make gaming cards, they make workstation cards that happen to play games.
And no, I don't care how they are marketed; they are workstation cards, because the only thing GCN is good at is compute.

Huge difference. There is absolutely no point in comparing AMD to NVIDIA anymore when it comes to gaming; they can't and do not compete, so just STOP, just STOP IT.
 
So now even demos/benchmarks require an online connection and feature-limited activations, and we still flock to them? This world is going down the drain.

And congrats on the article. Aside from not mentioning whether this runs in DX11 or DX12 mode, or whether it uses any Turing-specific technologies (and at what level), it's all there. :wtf:
 
RTX 2060's 1900 MHz stealth-overclock ROP clock / Vega 64's 1536 MHz ROP clock ≈ 1.237

At 1080p, Vega 64's 88 fps × 1.237 ≈ 108.8 fps

Conclusion: Vega 64 is ROP bound. TFLOPS means little since it doesn't include ROP read/write factors.

AMD should improve its raster engines and ROPs.
GCN so far is geometry bound; GCN has been stuck at 4 geometry engines since Hawaii / the 290X.
Vega was supposed to have new features to bypass that limit, but RTG never managed to get them working, so it ended up just being a higher-clocked Fiji.
That's one of the reasons Polaris performs as well as Hawaii with half the ROPs etc.: it is still limited to 4 geometry engines (other optimizations aside), which is much less of an issue on a mid-range GPU than on a high-end one.
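As a rough illustration of that front-end cap (assuming the commonly cited figure of one primitive per geometry engine per clock; clocks are approximate reference boosts):

```python
# Peak primitive throughput = geometry engines x 1 primitive/clock x clock.
# All three chips have 4 geometry engines, so only clock speed moves the needle,
# regardless of how many extra shaders or ROPs the bigger chips carry.
def peak_gprims_s(geometry_engines: int, clock_ghz: float) -> float:
    return geometry_engines * clock_ghz

print(f"Hawaii (R9 290X): {peak_gprims_s(4, 1.000):.1f} Gprims/s")
print(f"Polaris (RX 480): {peak_gprims_s(4, 1.266):.1f} Gprims/s")
print(f"Vega 64:          {peak_gprims_s(4, 1.536):.1f} Gprims/s")
```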
 
The loading screens kill it for me. If they were quick I wouldn't care, but they are lengthy; I even had it hang twice on loading screens. Curious what improvements a driver can make here, and on top of that, what improvements DLSS will bring.
 
This benchmark is obvious bullshit. This guy can't even get 100 fps ONCE at 1080p with a 2060 and a 9900K, while TPU managed to get 108 AVERAGE, yeah sure.

Well, you forgot about something.

The guy is using maxed settings and TPU is using the Ultra preset. The difference is HBAO on Ultra versus HBAO Full on maxed.
 
Very impressed with how the RTX 2060 performs at 1440p Ultra. Beating the 1070, Vega 56 & 64 at the same time is just... beautiful. A $350 card is more worthwhile right now than a "brand new" Pascal GPU or the hard-to-get Vega cards.
 
Very impressed with how the RTX 2060 performs at 1440p Ultra. Beating the 1070, Vega 56 & 64 at the same time is just... beautiful. A $350 card is more worthwhile right now than a "brand new" Pascal GPU or the hard-to-get Vega cards.
It really is nothing special; in past generations there were plenty of x60-class GPUs that matched or beat the former high-end GPU at about half the price.
Turing offers one of the smallest performance increases for a new-gen card in a long time.
I mean, the 1060 was equal to or faster than the 980 and sold for $250, which was already an increased price over previous-gen 60-class cards.
 
It really is nothing special; in past generations there were plenty of x60-class GPUs that matched or beat the former high-end GPU at about half the price.
Turing offers one of the smallest performance increases for a new-gen card in a long time.
I mean, the 1060 was equal to or faster than the 980 and sold for $250, which was already an increased price over previous-gen 60-class cards.
Is that so? Name such a card that isn't the GTX 1060.
 
At least the 2060 keeps up with the 1070 Ti and beats it with some OCing. Not gonna listen to a person who claims to own a 2080 Ti but never even posts benchmarks from a variety of games & benching suites to show that Turing is "nothing special" & "offers one of the least perf gains for a new gen".
 
At least the 2060 keeps up with the 1070 Ti and beats it with some OCing. Not gonna listen to a person who claims to own a 2080 Ti but never even posts benchmarks from a variety of games & benching suites to show that Turing is "nothing special" & "offers one of the least perf gains for a new gen".
You don't need me to do benchmarks for you when this site offers them.
Is that so? Name such a card that isn't the GTX 1060.
The 960 was similar to the 680 / 770 and sold for $200.
The 660 Ti was a bit faster than the GTX 580.
[Attached chart: perfrel_1920.gif]
 
Back to topic: at least I know that the 2060 beats two of AMD's top-of-the-line GPUs at 1440p Ultra.
 
@W1zzard it would be awesome if, for every game you benchmarked, you could release a video showing your benchmark path, so the community could do their own testing.

I bet donations would go a long way toward helping him burn his entire life on benching games.
 