Wednesday, August 29th 2018

AMD Brings Faster Performance and Advanced Features to Strange Brigade

Today, gamers around the world will face off against an ancient, forgotten evil power in the highly anticipated Strange Brigade. AMD and Rebellion have worked closely to ensure smooth, immersive gameplay on Radeon RX Graphics in Strange Brigade.
  • FreeSync 2 HDR: Brings low-latency, high-brightness pixels and a wide color gamut to High Dynamic Range (HDR) content for PC displays, enabling Strange Brigade to preserve details in scenes that may otherwise be lost due to limited contrast ratios. Ultimately, it lets bright scenes appear much brighter and dark scenes be truly dark - all while keeping details visible.
  • Asynchronous Compute: Strange Brigade enables asynchronous compute by default, improving GPU utilization, input latency, efficiency and performance by tapping into GPU resources that would otherwise be underutilized - for example, by running various screen-space effects during shadow map rendering (see the sketch below).
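To make the asynchronous compute point concrete: the technique submits work to a second, compute-only GPU queue so that compute shaders (such as screen-space effects) can execute while the graphics queue is busy with passes that leave shader cores underused (such as shadow map rendering). Below is a minimal, hypothetical D3D12 sketch of that queue setup - purely illustrative and not Rebellion's engine code; the device creation, variable names and per-frame flow are assumptions made for the example.

// Minimal async compute queue setup in D3D12 (illustrative sketch, not Strange Brigade's code).
// Build on Windows and link against d3d12.lib; error handling omitted for brevity.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

int main() {
    // Create a device on the default adapter.
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_0, IID_PPV_ARGS(&device));

    // Direct (graphics) queue: renders the shadow map and the main scene.
    D3D12_COMMAND_QUEUE_DESC gfxDesc{};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> gfxQueue;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    // Separate compute queue: screen-space effects can overlap the shadow pass here.
    D3D12_COMMAND_QUEUE_DESC computeDesc{};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

    // A fence lets the graphics queue wait for the async compute results
    // before the pass that consumes them.
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));

    // Per frame (command list recording omitted):
    //   gfxQueue->ExecuteCommandLists(...);      // shadow map rendering
    //   computeQueue->ExecuteCommandLists(...);  // screen-space effects, running in parallel
    //   computeQueue->Signal(fence.Get(), frameIndex);
    //   gfxQueue->Wait(fence.Get(), frameIndex); // sync before the pass that uses the results
    return 0;
}

The same pattern exists in Vulkan via a dedicated compute queue family; the key point is the fence that stops the pass consuming the compute results from starting before they are ready. AMD's GCN GPUs include dedicated hardware schedulers (ACEs) for such compute queues, which is why AMD highlights the feature here.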
Gamers can get the PC version of Strange Brigade for free when purchasing an AMD Radeon RX Vega, RX 580 or RX 570 graphics card, along with free copies of Assassin's Creed Odyssey and Star Control: Origins as part of AMD's latest game bundle. For more information, please check out the website here.

39 Comments on AMD Brings Faster Performance and Advanced Features to Strange Brigade

#26
StrayKAT
What are AMD's strong points? :D

Serious question. Seems like their "gimmicky" stuff is fairly neutral (freesync, chill, etc).
#27
cucker tarlson
HTC: Actually, the video is referring to Maxwell vs Kepler and not Pascal vs Maxwell, but the point stands.
No, the point does not stand if the video is showing Maxwell vs Kepler, not Pascal vs Maxwell. I told you, lay off the AdoredTV stuff; it makes you see things that are not there.
Fluffmeister: So what is wrong with nVidia pushing their strong points too? Or does the world revolve around AMD?
Exactly, I don't get it.

AMD pushes AMD's strong points - well, duuuhhhh.

nVidia pushes AMD's low points - ummm... because they're competitors? Btw, GameWorks is pretty easily disabled and AMD has a driver trick to reduce tessellation.

Also, it wouldn't hurt you to do some reading once in a while rather than base the entirety of your knowledge on a video that the red-leaning AdoredTV did years ago. GameWorks performance is fine on Radeon.

AMD sure would love to push "nVidia's low points" too. But what low points do they have? DX12 and async performance were improved a long time ago, to the point that the 1080 outperforms the V64 in Hitman and matches it in DOOM. You can see it here: a new DX12 game with async comes out, the 1080 matches the Vega LC right from launch, and the 980 Ti is faster than the Fury X and even ahead of the 1070.
#28
HTC
cucker tarlson: No, the point does not stand if the video is showing Maxwell vs Kepler, not Pascal vs Maxwell. I told you, lay off the AdoredTV stuff; it makes you see things that are not there.
My whole point is shown in the video, which you refuse to see: how nVidia is hurting their own previous arch to showcase their newer arch.

Here's a thought: has it occurred to you that, if nVidia focused on their own strong points, they might become even better than they already are?
#29
nemesis.ie
StrayKAT: What are AMD's strong points? :D

Serious question. Seems like their "gimmicky" stuff is fairly neutral (freesync, chill, etc).
How are Freesync and Chill "gimmicky"? They both provide very useful benefits.
#30
StrayKAT
nemesis.ie: How are Freesync and Chill "gimmicky"? They both provide very useful benefits.
I never said they weren't useful (I'm a Vega owner myself). I just meant unique features outside raw number crunching (like Nvidia's HairWorks, etc. AMD's "gimmicks" are neutral in that they don't require third parties/game developers to be in on the deal).
#31
HTC
nemesis.ie: How are Freesync and Chill "gimmicky"? They both provide very useful benefits.
Tbh, Chill actually is, because only some games benefit from it. FreeSync, on the other hand, is far from "gimmicky".

What I meant earlier with AMD pushing their own strong points is that they are pushing both DX12 and asynchronous compute, both of which suit them better than DX11 does, due to the way their arch works.

What I meant earlier with nVidia pushing AMD's low points is, for example, tessellation. nVidia is much better at tessellation than AMD, so they choose to force their sponsored games to use ridiculous amounts of it, knowing full well their latest generation is going to take a hit in performance, but they'll gladly do it for two reasons:

1 - their previous generation will be hit harder, meaning there will be "an improvement" from going to the newer generation, so they sell more cards of the newer generation
2 - AMD will be hit even harder, thus "showing" nVidia cards are superior, so they sell more cards

By artificially exaggerating the difference (being better at tessellation), they are "showing" their cards are better. I just wish they showed their cards were better without resorting to this sort of crap.
#32
StrayKAT
Dudes.. I didn't mean gimmicky as a bad thing. lol

Maybe I should have said "perks".

AMD's perks are mostly transparent and work through global settings... They don't require anything from others. In that sense, they're not robust additions like Nvidia's, i.e. "strong features" that make them stand out (ahem.. and rarely get used). I feel like an AMD card is going to be roughly the same from game to game. There is no RTX or HairWorks feature that drastically changes some games from others.

Even FreeSync is based on the VESA standard. It's not some big addition to displays that requires an extra couple hundred dollars.
#33
Vayra86
HTC: My whole point is shown in the video, which you refuse to see: how nVidia is hurting their own previous arch to showcase their newer arch.

Here's a thought: has it occurred to you that, if nVidia focused on their own strong points, they might become even better than they already are?
I'll say it clearly because you seem to have a fat plank in front of your head - LAY OFF THE ADOREDTV drugs. He's a fucking tool. Or don't and join him in his misery.

There is simply no data that backs up what he says, just some silly examples dragged way out of context. Go take a long look at his Turing prediction real quick and even you must see the problems. He just spouts utter nonsense and knows literally nothing. The poor man can't even do math. And your blabber about strong and weak points... generalization that makes no sense in any way shape or form - you're already starting to sound like him.

Referring to the video again really doesn't get your point across either, quite the opposite. Never mind the fact that this entire discussion is grossly offtopic?

If you want to dive deeper into this, google for performance comparisons between driver versions for Kepler > Maxwell versus AMD's GCN from 7970 > Fury. They exist, and they show nothing you speak of. On the contrary. Furthermore, it has usually been AMD who was late to react to performance problems or left them unattended forever, and they are keen to point a finger at Nvidia to play the underdog card. In the end, it's about AMD lacking control of their driver / developer communication versus Nvidia being much better at that - and investing much more into it. GameWorks is only a tiny sliver of this and usually presents a win-win scenario where you CAN use the feature but never really have to.
#35
HTC
Vayra86: I'll say it clearly because you seem to have a fat plank in front of your head - LAY OFF THE ADOREDTV drugs. He's a fucking tool. Or don't and join him in his misery.

There is simply no data that backs up what he says, just some silly examples dragged way out of context. Go take a long look at his Turing prediction real quick and even you must see the problems. He just spouts utter nonsense and knows literally nothing. The poor man can't even do math. And your blabber about strong and weak points... generalization that makes no sense in any way shape or form - you're already starting to sound like him.
Have you checked the source of his claims? That GameGPU site? It took a bit (link no longer works) but I managed to track it down:

Here's what it looked like in the original review:

And here's after patch 1.3:

Here's the original review in full (in Russian): gamegpu.com/action-/-fps-/-tps/fallout-4-test-gpu-2015.html

And here's the full review of the 1.3 patch (in Russian): gamegpu.com/rpg/rollevye/fallout-4-beta-patch-1-3-test-gpu.html
#36
Vayra86
HTC: Have you checked the source of his claims? That GameGPU site? It took a bit (link no longer works) but I managed to track it down:

Here's what it looked like in the original review:

And here's after patch 1.3:

Here's the original review in full (in Russian): gamegpu.com/action-/-fps-/-tps/fallout-4-test-gpu-2015.html

And here's the full review of the 1.3 patch (in Russian): gamegpu.com/rpg/rollevye/fallout-4-beta-patch-1-3-test-gpu.html
All I see here is a notoriously badly optimized game on a dated engine that runs into CPU bottlenecks a lot.

As I said, out of context. FO4 barely rewards fast GPUs.
#37
HTC
Vayra86: All I see here is a notoriously badly optimized game on a dated engine that runs into CPU bottlenecks a lot.

As I said, out of context. FO4 barely rewards fast GPUs.
Really?

Then, as Adored pointed out, how come the 960, which is miles behind the 780 Ti before the patch, suddenly gets on par with it after the patch?

According to TPU's GTX 960 OC review, the 780 Ti is 57% faster than this particular OCed 960 at this resolution:

With the 1.3 patch and AMD cards, the opposite happened, with all of them gaining FPS. Why? Because the patch broke the functionality of GameWorks and the game was "forced" to render without it. As such, and "suddenly", what was crippling AMD's cards disappeared while nVidia's were "hurt" by the loss of it, since the drivers were optimized for its use.
Vayra86: Referring to the video again really doesn't get your point across either, quite the opposite. Never mind the fact that this entire discussion is grossly offtopic?
You are right, and I hereby apologize for my role in derailing this topic. If anyone else wishes to further discuss this off-topic part, feel free to use conversations.
#38
cucker tarlson
HTC: If anyone else wishes to further discuss this off-topic part, feel free to use conversations.
:roll: No one wants that, with the exception of you forcing us to see some outdated videos.

Anyone knows using FO4 as a reference for any analysis is not only flawed but dumb and a waste of time for people who approach the topic seriously - anyone except AdoredTV, because the only game that plays out according to his narrative ran and still runs like garbage.

If you want to get the truth, I suggest you start using actual tech reviewers and journalists known as trusted sources, not a tool. Crippling Kepler is a myth: it just didn't get better with time since it was already optimized at launch - it was the second iteration of the Kepler architecture after the GTX 600 series. Maxwell was a new and changed architecture; it got better over time because it was designed that way. No GPU manufacturer will design an architecture without looking into the future.

#39
Vayra86
cucker tarlson: :roll: No one wants that, with the exception of you forcing us to see some outdated videos.

Anyone knows using FO4 as a reference for any analysis is not only flawed but dumb and a waste of time for people who approach the topic seriously - anyone except AdoredTV, because the only game that plays out according to his narrative ran and still runs like garbage.

If you want to get the truth, I suggest you start using actual tech reviewers and journalists known as trusted sources, not a tool. Crippling Kepler is a myth: it just didn't get better with time since it was already optimized at launch - it was the second iteration of the Kepler architecture after the GTX 600 series. Maxwell was a new and changed architecture; it got better over time because it was designed that way. No GPU manufacturer will design an architecture without looking into the future.

That graph is precisely one of the sources I was getting at earlier. Thx