Wednesday, August 29th 2018
AMD Brings Faster Performance and Advanced Features to Strange Brigade
Today, gamers around the world will face off against an ancient, forgotten evil power in the highly anticipated Strange Brigade. AMD and Rebellion have worked closely to ensure smooth, immersive gameplay on Radeon RX Graphics in Strange Brigade.
- FreeSync 2 HDR: Brings low-latency, high-brightness pixels and a wide color gamut to High Dynamic Range (HDR) content for PC displays, enabling Strange Brigade to preserve details in scenes that may otherwise be lost due to limited contrast ratios. Ultimately, it allows bright scenes to appear much brighter and dark scenes to be truly dark - all while keeping details visible.
- Asynchronous Compute: Strange Brigade has asynchronous compute enabled by default, improving GPU utilization, input latency, efficiency and performance by tapping into GPU resources that would otherwise be underutilized - for example, by running various screen-space effects during shadow map rendering (see the sketch below).
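To illustrate what that overlap means in practice, here is a minimal D3D12-style sketch - not Rebellion's actual code; the queue, command list and fence names are assumptions for the example - showing how a renderer can submit screen-space compute work on a separate compute queue so it runs alongside shadow map rendering on the graphics queue:

```cpp
// Illustrative sketch only: overlapping compute work (e.g. screen-space
// effects) with graphics work (e.g. shadow-map rendering) via a second,
// compute-only D3D12 queue. All names below are placeholders.
#include <d3d12.h>

void SubmitFrame(ID3D12CommandQueue* graphicsQueue,           // D3D12_COMMAND_LIST_TYPE_DIRECT
                 ID3D12CommandQueue* computeQueue,             // D3D12_COMMAND_LIST_TYPE_COMPUTE
                 ID3D12GraphicsCommandList* shadowCmdList,     // recorded shadow-map passes
                 ID3D12GraphicsCommandList* ssEffectsCmdList,  // recorded screen-space effects
                 ID3D12Fence* fence, UINT64 fenceValue)
{
    // Kick off shadow-map rendering on the graphics queue...
    ID3D12CommandList* gfxLists[] = { shadowCmdList };
    graphicsQueue->ExecuteCommandLists(1, gfxLists);

    // ...and, in parallel, screen-space compute work on the compute queue.
    // While the graphics queue is largely rasterization-bound, the compute
    // queue can fill shader ALUs that would otherwise sit idle.
    ID3D12CommandList* computeLists[] = { ssEffectsCmdList };
    computeQueue->ExecuteCommandLists(1, computeLists);

    // Later graphics passes that consume the compute results wait on a
    // fence signaled by the compute queue (cross-queue synchronization).
    computeQueue->Signal(fence, fenceValue);
    graphicsQueue->Wait(fence, fenceValue);
}
```

On GCN hardware the asynchronous compute engines can execute both queues concurrently, which is why AMD cards tend to benefit the most from this kind of submission pattern.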
39 Comments on AMD Brings Faster Performance and Advanced Features to Strange Brigade
Serious question. Seems like their "gimmicky" stuff is fairly neutral (freesync, chill, etc).
AMD pushes AMD's strong points - well, duuuhhhh
nVidia pushes AMD's low points - ummm... because they're competitors? BTW, GameWorks is pretty easily disabled and AMD has a driver trick to reduce tessellation.
Also, it wouldn't hurt you to do some reading once in a while rather than base the entirety of your knowledge on a video that red-leaning AdoredTV did years ago. GW performance is fine on Radeon.
AMD sure would love to push "nVidia's low points" too. But what low points do they have? DX12 and async performance was improved a long time ago, to the point that the 1080 outperforms V64 in Hitman and matches it in DOOM. You can see it here: a new DX12 game with async comes out, the 1080 matches the Vega LC right from launch, and the 980 Ti is faster than the Fury X and even ahead of the 1070.
Here's a thought: has it occurred to you that, if nVidia focused itself on their own strong points, they might become even better than they already are?
What I meant earlier with AMD pushing their own strong points is that they are pushing both DX12 and asynchronous compute: both of which they're better at compared to DX11, due to the way their arch works.
What I meant earlier with nVidia pushing AMD's low points is, for example, tessellation. nVidia is much better at tessellation than AMD is, so they choose to force their sponsored games to have ridiculous amounts of it, knowing full well their latest generation is going to take a hit in performance, but they'll gladly do it for two reasons:
1 - their previous generation will be hit harder, meaning there will be "an improvement" for going to the newer generation, thus they sell more cards of the newer generation
2 - AMD will be hit even harder, thus "showing" nVidia cards are superior so they sell more cards
By artificially exacerbating the difference (being better at tessellation), they are "showing" their cards are better. I just wish they showed their cards were better without resorting to this sort of crap.
Maybe I should have said "perks".
AMD's perks are mostly transparent and work through global settings... They don't require anything from others. In that sense, they're not robust additions like Nvidia's, i.e. "strong features" that make them stand out (ahem... and rarely get used). I feel like an AMD card is going to be roughly the same from game to game. There is no RTX or HairWorks feature that drastically changes some games from others.
Even FreeSync is based off the VESA standard. It's not some big addition to displays that requires an extra couple hundred dollars.
There is simply no data that backs up what he says, just some silly examples dragged way out of context. Go take a long look at his Turing prediction and even you must see the problems. He just spouts utter nonsense and knows literally nothing. The poor man can't even do math. And your blabber about strong and weak points... a generalization that makes no sense in any way, shape or form - you're already starting to sound like him.
Referring to the video again really doesn't get your point across either, quite the opposite. Never mind the fact that this entire discussion is grossly off-topic.
If you want to dive deeper into this, google for performance comparisons between driver versions for Kepler > Maxwell versus AMD's GCN from 7970 > Fury. They exist, and they show nothing of what you speak of. On the contrary. Furthermore, it has usually been AMD who was late to react to performance problems or left them unattended forever, and they are keen to point a finger at Nvidia to play the underdog card. In the end, it's about AMD lacking control of their driver/developer communication versus Nvidia being much better at that - and investing much more into it. GameWorks is only a tiny sliver of this and usually presents a win-win scenario where you CAN use the feature but never really have to.
Here's what it looked like in the original review:
And here's after patch 1.3:
Here's the original review in full (in Russian): gamegpu.com/action-/-fps-/-tps/fallout-4-test-gpu-2015.html
And here's the full review of the 1.3 patch (in Russian): gamegpu.com/rpg/rollevye/fallout-4-beta-patch-1-3-test-gpu.html
As I said, out of context. FO4 barely rewards fast GPUs.
Then, as Adored pointed out, how come the 960, which is miles behind the 780 Ti before the patch, suddenly gets on par with it after the patch?
According to TPU's GTX 960 OC review, the 780 Ti is 57% faster than this particular OCed 960 @ this resolution:
With the 1.3 patch and AMD cards, the opposite happened, with all gaining FPS. Why? Because the patch broke the functionality of GameWorks and the game was "forced" to render without it. As such, and "suddenly", what was crippling AMD's cards disappeared while nVidia's were "hurt" by the loss of it, since the game's drivers were optimized for its use. You are right, and I hereby apologize for my role in derailing this topic. If anyone else wishes to further discuss this off-topic part, feel free to use conversations.
Anyone knows that using FO4 as a reference for any analysis is not only flawed but dumb and a waste of time for people who approach the topic seriously. Anyone except adorkedtv, because the only game that plays out according to his narrative ran, and still runs, like garbage.
If you want to get the truth, I suggest you start using actual tech reviewers and journalists known as trusted sources, not a tool. Crippling Kepler is a myth; it just didn't get better with time since it was already optimized at launch - it was the second iteration of the Kepler architecture, after the GTX 600 series. Maxwell was a new and changed architecture; it got better over time because it was designed that way. No GPU manufacturer will design an architecture without looking into the future.