Friday, September 9th 2016

AMD GPUs See Lesser Performance Drop on "Deus Ex: Mankind Divided" DirectX 12

Deus Ex: Mankind Divided is the latest AAA title to support DirectX 12, with developer Eidos deploying a DirectX 12 renderer through a patch, weeks after the game's release. Guru3D put the DirectX 12 version of the game through five GPU architectures, AMD "Polaris," GCN 1.1, GCN 1.2, NVIDIA "Pascal," and NVIDIA "Maxwell," represented by the Radeon RX 480, Radeon R9 Fury X, Radeon R9 390X, GeForce GTX 1080, GeForce GTX 1060, and GeForce GTX 980. The AMD GPUs were driven by RSCE 16.9.1 drivers, the NVIDIA GPUs by GeForce 372.70.

Looking at the graphs, when switching from DirectX 11 to DirectX 12 mode, the AMD GPUs not only lose no frame-rate but in some cases gain some, while the NVIDIA GPUs lose frame-rate significantly. The AMD GPUs hold their frame-rates at 4K Ultra HD, gain marginally at 2560 x 1440, and gain further at 1080p; the NVIDIA GPUs either barely hold their frame-rates or lose them significantly. AMD has claimed on multiple occasions that its Graphics Core Next architecture, combined with its purist approach to asynchronous compute, makes Radeon GPUs a better choice for DirectX 12 and Vulkan. Find more of Guru3D's findings here.
More graphs follow.


114 Comments on AMD GPUs See Lesser Performance Drop on "Deus Ex: Mankind Divided" DirectX 12

#76
RejZoR
You do know a standard named DirectCompute has existed for a very long time, right? It can eliminate CPU involvement entirely if you decide to use it.
Posted on Reply
#77
ZeDestructor
RejZoRYou do know a standard named DirectCompute has existed for a very long time, right? It can eliminate CPU involvement entirely if you decide to use it.
Clearly not, given the number of games that use DirectCompute and still have meaningful CPU load. The fact is, as of right now, you can't run everything on GPUs (as much as AMD and nV would like that to be the case), and in many cases it's more efficient to use the CPU even for things that can be done on GPUs.

And FYI, CUDA and Stream have existed for longer, and OpenCL for about the same amount of time.
Posted on Reply
#78
RejZoR
Doing AI and basic physics on the CPU is perfectly reasonable. For anything beyond path-finding and basic rigid-body physics, you'll need a GPU. That's a fact.
Posted on Reply
#79
Primey_
Chaitanyaso this trend of ngreedia gpus lagging in dx12 titles continues. how long before a lawsuit against maxwell and pascal gpus for false advertisement of dx12 capabilities?
You're bloody hilarious. Do you honestly think Nvidia can be sued for such a thing? They advertised DX12 compatibility, and last time I checked, DX12-capable Nvidia cards run DX12.
Posted on Reply
#80
rtwjunkie
PC Gaming Enthusiast
I won't be updating my copy to DX12. With my 980 Ti I already have super-smooth performance. If I'm only going to see a drop in fps with DX12, then I'll just stick with the CPU utilization on DX11, which is going very well.
Posted on Reply
#81
Yorgos
the54thvoidIt is bizarre, but then, DX12 isn't needed for Nvidia cards, while for AMD it gives better fps.
This reply also goes to the guy saying there is no visual difference.

Just imagine this:
We have card A; card A gives us X fps with Y visual effects in a game.
When card A and every next generation of that card can draw more fps from the same game, or scene, what do we (us programmers) do? WE MAKE BETTER VISUAL EFFECTS.
That's what has been going on since the '80s, and not only in graphics: processors were able to handle more, and the software got better and better for the end user in every aspect.
If you make ZERO, that's 0, progress in your APIs, μArch, OS, etc., then you get 0 better visuals back.

What is going to happen in the future? Well, that's easy to predict: game companies will see that they cannot squeeze more complicated graphics into their games, and they will keep them the same for every generation.
...and that's what nVidia has been doing for the last decade. Nvidia shits on every gamer's face with their libraries, because that's what their hardware can handle, and they don't let studios' innovations reach the market.
Do you remember what happened with the Crysis series? It was marvelous to have a studio push the hardware that much and push the companies to make beastlier GPUs.
Do you remember what happened with every GameWorks title? Same visuals; it only forced you to jump to a newer generation of nVidia cards, because they crippled their older GPUs.

...but consumers are stupid, and they deserve to get robbed by A-holes like nVidia.

That, my "friends," is called evolution and a step forward in technology, but some people think it's not much, because they have to justify their $500 or €500 purchase, not only by BSing themselves but by spreading the S*** all over the world.
Posted on Reply
#82
rtwjunkie
PC Gaming Enthusiast
YorgosThis reply also goes to the guy saying there is no visual difference.
You are aware that Square Enix themselves said there would be no difference in visuals with this DX12 patch?
Yorgos...but consumers are stupid, and they deserve to get robbed by A-holes like nVidia.

That, my "friends," is called evolution and a step forward in technology, but some people think it's not much, because they have to justify their $500 or €500 purchase, not only by BSing themselves but by spreading the S*** all over the world
And you are innocent of spreading shit and hate, perpetuating this stupid red-vs-green war? :shadedshu:
Posted on Reply
#83
RejZoR
No, you/they don't improve graphics when there are performance gains. They've proven that consistently over the years. The worst waste of potential is tessellation, which I've bitched about many times. Engines waste huge numbers of polygons on things developers decided are "important," while the rest is the same blocky mess. So we have elements with 50 times more polygons than they actually need because of tessellation, and objects that aren't affected by it at all. The result: games that often run worse and don't even look any better than games without any tessellation.
Posted on Reply
#84
the54thvoid
Super Intoxicated Moderator
YorgosThis reply also goes to the guy saying there is no visual difference.

Just imagine this:
We have card A; card A gives us X fps with Y visual effects in a game.
When card A and every next generation of that card can draw more fps from the same game, or scene, what do we (us programmers) do? WE MAKE BETTER VISUAL EFFECTS.
That's what has been going on since the '80s, and not only in graphics: processors were able to handle more, and the software got better and better for the end user in every aspect.
If you make ZERO, that's 0, progress in your APIs, μArch, OS, etc., then you get 0 better visuals back.

What is going to happen in the future? Well, that's easy to predict: game companies will see that they cannot squeeze more complicated graphics into their games, and they will keep them the same for every generation.
...and that's what nVidia has been doing for the last decade. Nvidia shits on every gamer's face with their libraries, because that's what their hardware can handle, and they don't let studios' innovations reach the market.
Do you remember what happened with the Crysis series? It was marvelous to have a studio push the hardware that much and push the companies to make beastlier GPUs.
Do you remember what happened with every GameWorks title? Same visuals; it only forced you to jump to a newer generation of nVidia cards, because they crippled their older GPUs.

...but consumers are stupid, and they deserve to get robbed by A-holes like nVidia.

That, my "friends," is called evolution and a step forward in technology, but some people think it's not much, because they have to justify their $500 or €500 purchase, not only by BSing themselves but by spreading the S*** all over the world.
Not sure what the angle of your reply to me is. Deus Ex is punishing on cards (relative to other games). The dev said DX12 may give better performance but not better visuals.
The huge memory use may be a factor, in that the monstrous textures take an awful lot of processing grunt to shift. API or not, I think running near 6 GB at 1440p shows how much is being rendered. That takes power from a hefty chip. New FX may have to wait, as devs keep creating more 'on screen' visuals which consume the chip's capacity.

I don't know, though; it seems that if we want photo-realism, processing power with far higher TFLOPS is required. Maybe the Pascal Titan X points in that direction with its 'rubbish' async but enormous power.
Posted on Reply
#85
dyonoctis
Now it's hard to see just how much of a game changer DX12 will actually be.
I still remember the moment when Square Enix showed the DX12 demo:
granted, it was running on four GTX Titan Xs, but it was glorious; it showed what could be done with a low-level API on PC.
The irony is that Nvidia doesn't seem to get any benefit from DX12: they see either the same or negative performance. Even the gain on AMD GPUs isn't really astonishing:
www.techspot.com/review/1081-dx11-vs-dx12-ashes/page3.html
Getting low-level optimization on a compute monster like the Fury X is only worth 3 fps?

I remember a time when GPU makers had developers make publicly released technical demos to show what their GPUs could do. They keep blabbering about DX12, showing numbers on slides that look impressive, but right now we don't have a single example of what you can expect of DX12 on a single high-end GPU.

Vulkan is, at the moment, the API with the most impressive results, but if the story of OpenGL repeats itself, I fear this isn't going to matter that much, as studios will keep giving DX12 priority.

So yeah, DX12 at least gives you a great bump if you've got a weak CPU, but is that really it? Was it really worth making so much noise? Right now game developers aren't giving me any reason to get hyped about DX12's benefits. Will there be any project ambitious and crazy enough to make people ask "Can it run XXX, though?" while using the benefits of a low-level API?
Deus Ex: MD looks nice, but it's not "out of this world" nice. I'm not seeing a huge gap between it and Tomb Raider 2015, DOOM, or even TW3.
Posted on Reply
#86
$ReaPeR$
Vayra86I wouldn't go so far as to say dominance, it's just that AMD is finally getting their money's worth out of their 'metal'.

- AMD uses a wider bus
- AMD uses more shaders
- AMD runs at lower clocks
- Polaris provides about similar (or slightly higher) perf/clock to Pascal
- Polaris still has a lower perf/watt than Pascal
- GCN has not radically changed since HD7xxx.

AMD just runs a wider GPU across the board, as they have done for a long time. GCN is geared to be an extremely balanced arch that has some overcapacity on the VRAM end. It is built to let the core do all the work it can do, whereas Nvidia's arch is always focused at 'efficiency gains through tight GPU balance' - Nvidia obtains that balance by cutting heavily into bus width and removing everything from the GPU core that isn't required for gaming. They've tried several things, of which DP was the first thing they dropped with Kepler, then delta compression enabled them to further reduce bus width. This is also why Nvidia's cards don't stretch their legs at higher resolutions, but rather lose performance. Only the GDDR5X-supported 1080 avoids that fate.

On DX11, AMD GPU's were just fine and they excelled only at higher resolutions. Why? Not just because of VRAM, but because of the fact that higher res = lower CPU load. In DX12, GCN gets to stretch its legs even earlier and also at lower resolutions, in part also because of the better CPU usage of that API. Vulkan is similar. That CPU usage was the last hurdle for GCN to really come to fruition. Say what you want, but AMD has really made a smart move here, even though we can doubt how conscious that move has really been. They have effectively gained architectural advantage by letting the market do most of the work.

The irony is that the market for gaming has moved towards GCN, and GCN has seen very minimal architectural changes, while the market is moving away from Nvidia's cost/efficiency improvement-focused GPU architecture. At the same time, Nvidia can almost eclipse that change through a much higher perf/watt, but that only hides so much of the underlying issue, an issue of Nvidia GPU's having to clock really high to gain solid performance, because they lack not only a wide bus right now, but also raw shader counts.

I think it is inevitable, and safe to predict, that Nvidia has now reached a new cap with regards to clock speeds on the core. The only way forward is for them to once again start building bigger and wider GPUs. AMD, on the flip side, has more wiggle room and a lot of things left to improve - clocks, efficiency, and judging the RX480, they also have space left on the die.
This is an actually informative opinion, thank you mate!
I totally agree with what you said; I had the same feeling when I first heard about Vulkan/DX12. If Nvidia is not careful they could lose a big chunk of the market in the next one to two generations of cards, which IMO would be positive for the consumer, since I think the GPU market has been lacking serious competition for some years now.
Posted on Reply
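As an aside, the "wider bus" point in the quoted post can be made concrete with simple bandwidth arithmetic. A sketch (the bus widths and memory data rates below are the cards' published specs; peak bandwidth is just bus width in bytes times per-pin data rate):

```python
# Peak memory bandwidth in GB/s: (bus width / 8 bits per byte) * per-pin data rate in GT/s.
def peak_bandwidth_gbs(bus_width_bits, data_rate_gtps):
    return (bus_width_bits / 8) * data_rate_gtps

# Published specs for cards discussed in the thread.
cards = {
    "RX 480 (256-bit GDDR5 @ 8 GT/s)":     peak_bandwidth_gbs(256, 8.0),   # 256 GB/s
    "GTX 1060 (192-bit GDDR5 @ 8 GT/s)":   peak_bandwidth_gbs(192, 8.0),   # 192 GB/s
    "GTX 1080 (256-bit GDDR5X @ 10 GT/s)": peak_bandwidth_gbs(256, 10.0),  # 320 GB/s
}

for name, bw in cards.items():
    print(f"{name}: {bw:.0f} GB/s")
```

The RX 480 carries a third more raw bandwidth than the GTX 1060 it trades blows with, which is the "overcapacity on the VRAM end" argument; Nvidia's delta colour compression is what lets the narrower buses keep up at common resolutions.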
#87
FierceRed
$ReaPeR$If Nvidia is not careful they could lose a big chunk of the market in the next one to two generations of cards, which IMO would be positive for the consumer, since I think the GPU market has been lacking serious competition for some years now.
While that's of course a possibility, I wouldn't put any money/stock/holding of breath in it happening. Call me cynical if you must, but Nvidia won't lose significant market share in the next two gens even if they deserve to, for the same reason Apple doesn't lose market share even though they deserve to: they have a refined and sadly effective hype and marketing machine that constantly builds and stokes the consumer notion that they are the superior choice.

Though I hope you're right about the price drops that need to happen for all consumers, in the age of Likes, Views, and Trending, mindshare is the real metric that maintains the market-share status quo, and Nvidia is throwing too much TWIMTBP money around for that to change any time soon.
Posted on Reply
#88
NGreediaOrAMSlow
The graphs and numbers posted by Guru3D reflect what has been seen so far between the new cards and the old ones: all AMD cards see an increase, NVIDIA's newer 10xx cards see a smaller, nearly negligible increase, and the older 9xx cards see a penalty due to incomplete DX12 support.

The 1080 issue was acknowledged and is under investigation. Once that is done, it will probably behave like the 1060 numbers.

Whether or not the game is poorly optimized, or uses more complex scenery than other games, matters less than the actual card performance curve, especially if you educate yourself and look not only at the APIs but at the real source, which is explained in detail here
Posted on Reply
#89
Prima.Vera
RejZoRAMD beats NVIDIA in every single DX12 game (except games where both suck hard compared to DX11 for no logical reason). Must be "AMD-biased games." Right. It's not because the rendering engine in Radeons has clearly been superior for such tasks since the HD 7000 series, when they introduced GCN; it has to be "bias." C'mon, people, can you be less of fanboys?...
No, AMD definitely DOES NOT beat nVidia in every single D3D12 game. The right way to put it: AMD has a better performance gain in D3D12 than nVidia. That's all.
And please stop putting words like "fanboy" in your comments. It makes you look more and more like one. ;)
Posted on Reply
#90
Malabooga
Prima.VeraNo, AMD definitely DOES NOT beat nVidia in every single D3D12 game. The right way to put it: AMD has a better performance gain in D3D12 than nVidia. That's all.
And please stop putting words like "fanboy" in your comments. It makes you look more and more like one. ;)
Yes, it does. 2016 is definitely not a good year for NVidia; unfortunately, DX12/Vulkan came much faster than the "3-4 years away" that those who don't know better kept saying. Guess what: Microsoft > Intel, NVidia, and AMD combined, and what Microsoft wants, Microsoft gets, lol.

And currently Microsoft wants W10 adoption, and DX12 is part of that (and has been since its release a year ago). The majority of gamers will be on W10 within a year.
Posted on Reply
#91
Malabooga
Vayra86I wouldn't go so far as to say dominance, it's just that AMD is finally getting their money's worth out of their 'metal'.

- AMD uses a wider bus
- AMD uses more shaders
- AMD runs at lower clocks
- Polaris provides about similar (or slightly higher) perf/clock to Pascal
- Polaris still has a lower perf/watt than Pascal
- GCN has not radically changed since HD7xxx.

AMD just runs a wider GPU across the board, as they have done for a long time. GCN is geared to be an extremely balanced arch that has some overcapacity on the VRAM end. It is built to let the core do all the work it can do, whereas Nvidia's arch is always focused at 'efficiency gains through tight GPU balance' - Nvidia obtains that balance by cutting heavily into bus width and removing everything from the GPU core that isn't required for gaming. They've tried several things, of which DP was the first thing they dropped with Kepler, then delta compression enabled them to further reduce bus width. This is also why Nvidia's cards don't stretch their legs at higher resolutions, but rather lose performance. Only the GDDR5X-supported 1080 avoids that fate.

On DX11, AMD GPU's were just fine and they excelled only at higher resolutions. Why? Not just because of VRAM, but because of the fact that higher res = lower CPU load. In DX12, GCN gets to stretch its legs even earlier and also at lower resolutions, in part also because of the better CPU usage of that API. Vulkan is similar. That CPU usage was the last hurdle for GCN to really come to fruition. Say what you want, but AMD has really made a smart move here, even though we can doubt how conscious that move has really been. They have effectively gained architectural advantage by letting the market do most of the work.

The irony is that the market for gaming has moved towards GCN, and GCN has seen very minimal architectural changes, while the market is moving away from Nvidia's cost/efficiency improvement-focused GPU architecture. At the same time, Nvidia can almost eclipse that change through a much higher perf/watt, but that only hides so much of the underlying issue, an issue of Nvidia GPU's having to clock really high to gain solid performance, because they lack not only a wide bus right now, but also raw shader counts.

I think it is inevitable, and safe to predict, that Nvidia has now reached a new cap with regards to clock speeds on the core. The only way forward is for them to once again start building bigger and wider GPUs. AMD, on the flip side, has more wiggle room and a lot of things left to improve - clocks, efficiency, and judging the RX480, they also have space left on the die.
Nice to see someone who gets it; kudos to you. NVidia has squeezed every MHz from TSMC's 16 nm node, and their only option is to make a more GCN-like architecture, because they are at the end of the road with their MHz chase. And the last time they tried that, we got Fermi.
Posted on Reply
#92
NGreediaOrAMSlow
MalaboogaYes, it does. 2016 is definitely not a good year for NVidia; unfortunately, DX12/Vulkan came much faster than the "3-4 years away" that those who don't know better kept saying. Guess what: Microsoft > Intel, NVidia, and AMD combined, and what Microsoft wants, Microsoft gets, lol.

And currently Microsoft wants W10 adoption, and DX12 is part of that (and has been since its release a year ago). The majority of gamers will be on W10 within a year.
Actually, currently it does. While AMD's current architecture is better placed and optimized for DX12, in terms of raw clocks and max TFLOPS Nvidia dominates.

And no matter how well you optimize, there is a limit to how much the hardware can do. Like cars, those numbers can be taken with a grain of salt, in that manufacturers may inflate them, but if you look at the published numbers, Nvidia's are higher. Period.
Posted on Reply
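The "max TFLOPS" figures being argued over come from a simple convention: peak FP32 throughput = 2 FLOPs (one fused multiply-add) per shader per clock. A sketch using the cards' reference shader counts and boost clocks:

```python
# Peak FP32 throughput: each shader retires one FMA (2 FLOPs) per clock cycle.
def peak_tflops(shaders, clock_ghz):
    return 2 * shaders * clock_ghz / 1000.0  # GFLOPS -> TFLOPS

print(f"GTX 1080  (2560 sh @ 1.733 GHz): {peak_tflops(2560, 1.733):.2f} TFLOPS")  # ~8.87
print(f"R9 Fury X (4096 sh @ 1.050 GHz): {peak_tflops(4096, 1.050):.2f} TFLOPS")  # ~8.60
print(f"RX 480    (2304 sh @ 1.266 GHz): {peak_tflops(2304, 1.266):.2f} TFLOPS")  # ~5.83
```

Note that the Fury X lands close to the 1080's peak figure at a roughly 700 MHz lower clock, which is the wide-and-slow versus narrow-and-fast trade-off the thread keeps circling; peak numbers say nothing about how much of that throughput a given API lets a card sustain.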
#93
Prima.Vera
MalaboogaYes, it does. 2016 is definitely not a good year for NVidia; unfortunately, DX12/Vulkan came much faster than the "3-4 years away" that those who don't know better kept saying. Guess what: Microsoft > Intel, NVidia, and AMD combined, and what Microsoft wants, Microsoft gets, lol.

And currently Microsoft wants W10 adoption, and DX12 is part of that (and has been since its release a year ago). The majority of gamers will be on W10 within a year.
Sorry, you wrote a bunch of nonsense there that isn't even worth replying to. This post was just informational, btw...
Posted on Reply
#94
ViperXTR
TechReport's DX12 Deus Ex: Mankind Divided benchmarks, including frame-time analysis:

techreport.com/review/30639/examining-early-directx-12-performance-in-deus-ex-mankind-divided/3


So that's a thing. Switching over to DXMD's DirectX 12 renderer doesn't improve performance on any of our cards, and it actually makes life much worse for the Radeons. The R9 Fury X turns in an average FPS result that might make you think its performance is on par with the GTX 1070 once again, but don't be fooled—that card's 99th-percentile frame time number is no better than even the GTX 1060's. Playing DXMD on the Fury X and RX 480 was a hitchy, stuttery experience, and our frame-time plots confirm that impression.

In the green corner, the GTX 1070 leads the 99th-percentile frame-time pack by a wide margin, and that translates into noticeably smoother gameplay than any other card here can provide while running under DirectX 12.

I hate to toot TR's horn here, but tests like these demonstrate why one simply can't take average FPS numbers at face value when measuring graphics-card performance. We've been saying so for years. From our results and our subjective experience, it's clear that the developers behind Deus Ex: Mankind Divided have a lot of optimizing to do for Radeons before the game's DirectX 12 mode goes gold in a week and change. AMD's driver team may also have a few long nights ahead, though in theory, DX12 puts much more responsibility on the shoulders of the developer.

It's also clear that it's too early to call a winner between the green and red teams for DirectX 12 performance in this beta build of Deus Ex, even if AMD seems to feel confident in doing so. The Radeon cards we tested perform poorly in our latency-sensitive frame-time metrics in DX12 mode, meaning that the Fury X's hitchy gameplay stands in stark contrast to its respectable average-FPS result. Even if Nvidia isn't shouting from the rooftops about Pascal's performance in DXMD's DX12 mode right now, the green team has some kind of smoothness advantage despite the game's beta tag. To be fair, we used different settings than AMD did while gathering its performance numbers, but we don't feel like the choices we made would be much different than those the average enthusiast would have with this hardware.
Posted on Reply
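TR's point that average FPS can hide stutter is easy to demonstrate numerically. A sketch with made-up frame-time traces (illustrative numbers, not TR's data): a trace that hitches periodically can post a higher average FPS than a perfectly steady one, while its 99th-percentile frame time gives the game away:

```python
import math

# Two synthetic frame-time traces in milliseconds with comparable averages:
# one perfectly steady, one mostly fast but with occasional 80 ms hitches.
steady = [17.0] * 100                # ~59 FPS, every frame identical
hitchy = [12.0] * 95 + [80.0] * 5    # higher average FPS, but visible stutter

def avg_fps(frame_times_ms):
    # Average FPS = total frames / total seconds elapsed.
    return 1000.0 * len(frame_times_ms) / sum(frame_times_ms)

def percentile(frame_times_ms, pct):
    # Nearest-rank percentile: 99% of frames finish at or under this time.
    s = sorted(frame_times_ms)
    rank = max(1, math.ceil(pct / 100 * len(s)))
    return s[rank - 1]

for name, trace in (("steady", steady), ("hitchy", hitchy)):
    print(f"{name}: {avg_fps(trace):.1f} avg FPS, "
          f"99th-pct frame time {percentile(trace, 99):.1f} ms")
```

The hitchy trace "wins" on average FPS while spending its worst frames at 80 ms (12.5 FPS territory), which is exactly the Fury X pattern TR describes: a respectable average masking hitchy gameplay.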
#95
Prima.Vera
A lot of discussion seems to focus only on comparing D3D12 renderer performance, AMD vs. nVidia. But seeing how both companies perform below expectations, isn't there a chance this is actually Microsoft's fault for providing a crappy renderer? Or just lazy and buggy programming by the developer?
Posted on Reply
#96
Xzibit
Prima.VeraA lot of discussion seems to focus only on comparing D3D12 renderer performance, AMD vs. nVidia. But seeing how both companies perform below expectations, isn't there a chance this is actually Microsoft's fault for providing a crappy renderer? Or just lazy and buggy programming by the developer?
Or, most likely:
NixxesNixxes Software is proud to announce that we're working on Deus Ex: Mankind Divided™ for PC.
It's a DX12 beta patch on a port, like every other port. Expect a handful of patches after it's out of beta before it's ironed out.
Posted on Reply
#97
Shambles1980
NGreediaOrAMSlowThe graphs and numbers posted by Guru3D reflect what has been seen so far between the new cards and the old ones: all AMD cards see an increase, NVIDIA's newer 10xx cards see a smaller, nearly negligible increase, and the older 9xx cards see a penalty due to incomplete DX12 support.

The 1080 issue was acknowledged and is under investigation. Once that is done, it will probably behave like the 1060 numbers.

Whether or not the game is poorly optimized, or uses more complex scenery than other games, matters less than the actual card performance curve, especially if you educate yourself and look not only at the APIs but at the real source, which is explained in detail here
That guy talks a load of bull, though...

I agree Nvidia's DX11 drivers are very efficient, but AMD still outperforms them in some cases; for example, 2x 480s can outperform a 1080 in DX12 in more than one game. There is more to it than drivers alone: the AMD cards are faster. Although I do agree that they gain more in DX12 percentage-wise, mostly due to drivers. The hardware architecture of AMD cards has been aimed at a low-level API since the HD 7750; Nvidia still hasn't bothered with it yet. And given we don't expect DX12 to become mainstream for games for at least another year, I think Nvidia made the right call, because they can have a new generation of GPUs out just as DX12 becomes mainstream. Obviously this makes the 10-series GPUs utterly pointless, but that's not an issue for Nvidia; they've sold them now, and next gen they can convince people they really need to upgrade to fully benefit from a low-level API.
Posted on Reply
#98
INSTG8R
Vanguard Beta Tester
Vayra86I wouldn't go so far as to say dominance, it's just that AMD is finally getting their money's worth out of their 'metal'.

- AMD uses a wider bus
- AMD uses more shaders
- AMD runs at lower clocks
- Polaris provides about similar (or slightly higher) perf/clock to Pascal
- Polaris still has a lower perf/watt than Pascal
- GCN has not radically changed since HD7xxx.

AMD just runs a wider GPU across the board, as they have done for a long time. GCN is geared to be an extremely balanced arch that has some overcapacity on the VRAM end. It is built to let the core do all the work it can do, whereas Nvidia's arch is always focused at 'efficiency gains through tight GPU balance' - Nvidia obtains that balance by cutting heavily into bus width and removing everything from the GPU core that isn't required for gaming. They've tried several things, of which DP was the first thing they dropped with Kepler, then delta compression enabled them to further reduce bus width. This is also why Nvidia's cards don't stretch their legs at higher resolutions, but rather lose performance. Only the GDDR5X-supported 1080 avoids that fate.

On DX11, AMD GPU's were just fine and they excelled only at higher resolutions. Why? Not just because of VRAM, but because of the fact that higher res = lower CPU load. In DX12, GCN gets to stretch its legs even earlier and also at lower resolutions, in part also because of the better CPU usage of that API. Vulkan is similar. That CPU usage was the last hurdle for GCN to really come to fruition. Say what you want, but AMD has really made a smart move here, even though we can doubt how conscious that move has really been. They have effectively gained architectural advantage by letting the market do most of the work.

The irony is that the market for gaming has moved towards GCN, and GCN has seen very minimal architectural changes, while the market is moving away from Nvidia's cost/efficiency improvement-focused GPU architecture. At the same time, Nvidia can almost eclipse that change through a much higher perf/watt, but that only hides so much of the underlying issue, an issue of Nvidia GPU's having to clock really high to gain solid performance, because they lack not only a wide bus right now, but also raw shader counts.

I think it is inevitable, and safe to predict, that Nvidia has now reached a new cap with regards to clock speeds on the core. The only way forward is for them to once again start building bigger and wider GPUs. AMD, on the flip side, has more wiggle room and a lot of things left to improve - clocks, efficiency, and judging the RX480, they also have space left on the die.
That has been my thinking with AMD, and it's why I've stuck with them.
Posted on Reply
#100
ViperXTR
G33k2Fr34kBoth the frame-time and average-FPS results in the TechReport review are different from what Hilbert measured at Guru3D. Perhaps TechPowerUp can run these tests and give us the final word.
TechReport and ComputerBase used actual gameplay scenarios in testing, while Guru3D used the built-in benchmark.
Posted on Reply