
Wolfenstein: Youngblood Benchmark Test & Performance Analysis

Joined
Aug 6, 2017
Messages
7,412 (2.77/day)
Location
Poland
System Name Purple rain
Processor 10.5 thousand 4.2G 1.1v
Motherboard Zee 490 Aorus Elite
Cooling Noctua D15S
Memory 16GB 4133 CL16-16-16-31 Viper Steel
Video Card(s) RTX 2070 Super Gaming X Trio
Storage SU900 128,8200Pro 1TB,850 Pro 512+256+256,860 Evo 500,XPG950 480, Skyhawk 2TB
Display(s) Acer XB241YU+Dell S2716DG
Case P600S Silent w. Alpenfohn wing boost 3 ARGBT+ fans
Audio Device(s) K612 Pro w. FiiO E10k DAC,W830BT wireless
Power Supply Superflower Leadex Gold 850W
Mouse G903 lightspeed+powerplay,G403 wireless + Steelseries DeX + Roccat rest
Keyboard HyperX Alloy SilverSpeed (w.HyperX wrist rest),Razer Deathstalker
Software Windows 10
Benchmark Scores A LOT
Could we maybe not talk about "SJW" stuff one way or another in a technical forum? If you didn't like the game just say so, and stop at that. Thanks.

I got this game for free with my 2070 Super. Honestly, it doesn't look that good, but I'll give it a shot since I already have it. I am interested in playing with raytracing once they add that in. And the framerates that W1zzard got across the upper end of the RTX cards are reassuring that maybe I won't have to turn everything down to mud just to get it running at 60 fps.

Honestly, I'm glad I waited to get a new card, because RTRT is kind of underwhelming. Until this game I didn't even own a game that could exercise the RTX hardware, and if I hadn't gotten a bump from 1080 to 1080 Ti levels of performance, I would have been pretty pissed. Also, there's not much daylight between 2070 Super and 2080 Super performance in this game, so I'm feeling better about saving $200 there.

What's going on with Nvidia where they can't help these companies get RTX out in a reasonable fashion? If you miss the first month of a game, 90% of the people who are excited to play it have already done so. Youngblood's RTX is going to be relegated to an RTX demo at this point. It's sad, but I doubt we're going to see day-one RTX until the new Xbox and PlayStation come out and it's pushed as THE "next gen" feature. It's so backwards that we PC folk have had this for a year now and game devs are still slow-walking even bad implementations of the new tech.
a fellow 5775c/2070S owner waiting for YB RTX :)

Frankly I never noticed "SJW stuff" in Colossus; maybe people are just too sensitive. In that case I suggest either growing a thicker skin or adopting a more relaxed attitude. It was a pretty awesome shooter; despite the story being so-so you never got bored. Who cares about story in FPS games anyway? I'm playing BioShock Infinite now and it's a yawnfest.
 
Last edited:
Joined
Jun 16, 2016
Messages
409 (0.13/day)
System Name Baxter
Processor Intel i7-5775C @ 4.2 GHz 1.35 V
Motherboard ASRock Z97-E ITX/AC
Cooling Scythe Big Shuriken 3 with Noctua NF-A12 fan
Memory 16 GB 2400 MHz CL11 HyperX Savage DDR3
Video Card(s) EVGA RTX 2070 Super Black @ 1950 MHz
Storage 1 TB Sabrent Rocket 2242 NVMe SSD (boot), 500 GB Samsung 850 EVO, and 4TB Toshiba X300 7200 RPM HDD
Display(s) Vizio P65-F1 4KTV (4k60 with HDR or 1080p120)
Case Raijintek Ophion
Audio Device(s) HDMI PCM 5.1, Vizio 5.1 surround sound
Power Supply Corsair SF600 Platinum 600 W SFX PSU
Mouse Logitech MX Master 2S
Keyboard Logitech G613 and Microsoft Media Keyboard
a fellow 5775c/2070S owner waiting for YB RTX :)

Frankly I never noticed "SJW stuff" in Colossus; maybe people are just too sensitive. In that case I suggest either growing a thicker skin or adopting a more relaxed attitude. It was a pretty awesome shooter; despite the story being so-so you never got bored. Who cares about story in FPS games anyway? I'm playing BioShock Infinite now and it's a yawnfest.

4.3 GHz is higher than I can handle in my case, good on you! Yeah, I figured my 5775c could last a bit longer with the 2070S. Very happy so far, no obvious bottlenecking, though I do play at 4K. I did have to upgrade my PSU, which was a bit surprising, but it's better in the long run.

I'm torn about whether these high framerates at max quality in Youngblood across the board at 4K are good for PC gaming, because more people will be able to play at the details "imagined" by the developers, or bad, because this game essentially will not get any better with age. I really think it's just the fact that the DOOM engine is so optimized for 60 fps on consoles. Where do you go but up into the 100s when you have to hit 60 fps on the Xbox One's ~1.2 TFLOPS GCN GPU?

I'm guessing DOOM Eternal is going to be similarly biased towards high framerates across the board. It'll be interesting to see if next-gen DOOM and Wolfenstein return to actually pushing our systems.
 
Joined
Aug 6, 2017
Messages
7,412 (2.77/day)
Location
Poland
System Name Purple rain
Processor 10.5 thousand 4.2G 1.1v
Motherboard Zee 490 Aorus Elite
Cooling Noctua D15S
Memory 16GB 4133 CL16-16-16-31 Viper Steel
Video Card(s) RTX 2070 Super Gaming X Trio
Storage SU900 128,8200Pro 1TB,850 Pro 512+256+256,860 Evo 500,XPG950 480, Skyhawk 2TB
Display(s) Acer XB241YU+Dell S2716DG
Case P600S Silent w. Alpenfohn wing boost 3 ARGBT+ fans
Audio Device(s) K612 Pro w. FiiO E10k DAC,W830BT wireless
Power Supply Superflower Leadex Gold 850W
Mouse G903 lightspeed+powerplay,G403 wireless + Steelseries DeX + Roccat rest
Keyboard HyperX Alloy SilverSpeed (w.HyperX wrist rest),Razer Deathstalker
Software Windows 10
Benchmark Scores A LOT
4.3 GHz is higher than I can handle in my case, good on you! Yeah, I figured my 5775c could last a bit longer with the 2070S. Very happy so far, no obvious bottlenecking, though I do play at 4K. I did have to upgrade my PSU, which was a bit surprising, but it's better in the long run.

I'm torn about whether these high framerates at max quality in Youngblood across the board at 4K are good for PC gaming, because more people will be able to play at the details "imagined" by the developers, or bad, because this game essentially will not get any better with age. I really think it's just the fact that the DOOM engine is so optimized for 60 fps on consoles. Where do you go but up into the 100s when you have to hit 60 fps on the Xbox One's ~1.2 TFLOPS GCN GPU?

I'm guessing DOOM Eternal is going to be similarly biased towards high framerates across the board. It'll be interesting to see if next-gen DOOM and Wolfenstein return to actually pushing our systems.
It handles a 165 Hz monitor fine too, I assure you. In BF1 I'm getting 140-160 fps with GPU usage staying at that 95%+ sweet spot, with rare drops to the low 90s.
I like how Nvidia is trying to make Vulkan RTX a PC-exclusive thing. No need to water it down so that crappy consoles can run it. VRS is a much better way to get around the performance barrier too; it should've been used in Exodus, though DLSS has improved significantly as well.
 
Last edited:
Joined
Sep 10, 2015
Messages
529 (0.16/day)
System Name My Addiction
Processor AMD Ryzen 7950X3D
Motherboard ASRock B650E PG-ITX WiFi
Cooling Alphacool Core Ocean T38 AIO 240mm
Memory G.Skill 32GB 6000MHz
Video Card(s) Sapphire Pulse 7900XTX
Storage Some SSDs
Display(s) 42" Samsung TV + 22" Dell monitor vertically
Case Lian Li A4-H2O
Audio Device(s) Denon + Bose
Power Supply Corsair SF750
Mouse Logitech
Keyboard Glorious
VR HMD None
Software Win 10
Benchmark Scores None taken
"I seriously doubt people will replay games just for the RTX experience. "

Well, if you bought that overpriced stuff for the "feature of the future" (something AMD was literally bashed for when they did it), you have to if you want to put any of those cents to good use. Otherwise you just have to accept that you were screwed by the one that "just works". That could be a serious pain in the ass for someone who doesn't like being taken for a ride...
 
Joined
Jul 24, 2009
Messages
1,002 (0.18/day)
Seems I'm fine with my selection of GPUs, but I must say those results are a bit weird. Almost as if AMD rigged them.

Also, why does the id engine look so much like Unreal? Based on the screenshots only.
 
Joined
Apr 3, 2013
Messages
105 (0.02/day)
Processor Intel Xeon E5-1650 v2
Motherboard ASUS P9X79
Video Card(s) NVIDIA GTX 1080 FE
Display(s) ASUS PG43UQ
VR HMD Valve Index
Software Windows 7
I think Adaptive Shading needs to be ON in the options. Having it OFF is like running AMD with packed math (FP16) off too.
 

FordGT90Concept

"I go fast!1!11!1!"
Joined
Oct 13, 2008
Messages
26,259 (4.46/day)
Location
IA, USA
System Name BY-2021
Processor AMD Ryzen 7 5800X (65w eco profile)
Motherboard MSI B550 Gaming Plus
Cooling Scythe Mugen (rev 5)
Memory 2 x Kingston HyperX DDR4-3200 32 GiB
Video Card(s) AMD Radeon RX 7900 XT
Storage Samsung 980 Pro, Seagate Exos X20 TB 7200 RPM
Display(s) Nixeus NX-EDG274K (3840x2160@144 DP) + Samsung SyncMaster 906BW (1440x900@60 HDMI-DVI)
Case Coolermaster HAF 932 w/ USB 3.0 5.25" bay + USB 3.2 (A+C) 3.5" bay
Audio Device(s) Realtek ALC1150, Micca OriGen+
Power Supply Enermax Platimax 850w
Mouse Nixeus REVEL-X
Keyboard Tesoro Excalibur
Software Windows 10 Home 64-bit
Benchmark Scores Faster than the tortoise; slower than the hare.
The biggest "WHAT?!" for me is 1080Ti performance. And that one is definitely not constrained with VRAM. Interesting results all around.
What is interesting about it? The 1080 Ti is slightly slower than the RTX 2070, and the 5700 XT is about on par with the RTX 2070. The 1080 Ti is exactly where it should be.

I really only see two interesting things: Turing performs better in Vulkan than Pascal and 4 GiB VRAM is becoming legacy.
 
Joined
Nov 3, 2013
Messages
2,141 (0.53/day)
Location
Serbia
Processor Ryzen 5600
Motherboard X570 I Aorus Pro
Cooling Deepcool AG400
Memory HyperX Fury 2 x 8GB 3200 CL16
Video Card(s) RX 6700 10GB SWFT 309
Storage SX8200 Pro 512 / NV2 512
Display(s) 24G2U
Case NR200P
Power Supply Ion SFX 650
Mouse G703 (TTC Gold 60M)
Keyboard Keychron V1 (Akko Matcha Green) / Apex m500 (Gateron milky yellow)
Software W10
What is interesting about it? The 1080 Ti is slightly slower than the RTX 2070, and the 5700 XT is about on par with the RTX 2070. The 1080 Ti is exactly where it should be.

I really only see two interesting things: Turing performs better in Vulkan than Pascal and 4 GiB VRAM is becoming legacy.
Dude, that's the 2070 Super, which is basically a 2080.
In this game the 1080 Ti loses to a regular 2070, against which it was generally 10+% faster.
 
Joined
Oct 1, 2006
Messages
4,932 (0.74/day)
Location
Hong Kong
Processor Core i7-12700k
Motherboard Z690 Aero G D4
Cooling Custom loop water, 3x 420 Rad
Video Card(s) RX 7900 XTX Phantom Gaming
Storage Plextor M10P 2TB
Display(s) InnoCN 27M2V
Case Thermaltake Level 20 XT
Audio Device(s) Soundblaster AE-5 Plus
Power Supply FSP Aurum PT 1200W
Software Windows 11 Pro 64-bit
Seems I'm fine with my selection of GPUs, but I must say those results are a bit weird. Almost as if AMD rigged them.

Also, why does the id engine look so much like Unreal? Based on the screenshots only.
This game and the previous one are among the few games that run on Vulkan, and the only ones that support all the new Turing features.
If anything, this game is "rigged" in favor of Turing.

I think Adaptive Shading needs to be ON in the options. Having it OFF is like running AMD with packed math (FP16) off too.
The reason Adaptive Shading is off is that it would make the game run at a different quality compared to all the other cards.
Turing can run FP16 as well. In fact, this is what the Tensor Cores do.
https://www.anandtech.com/show/13973/nvidia-gtx-1660-ti-review-feat-evga-xc-gaming/2
The Curious Case of FP16: Tensor Cores vs. Dedicated Cores

Even though Turing-based video cards have been out for over 5 months now, every now and then I’m still learning something new about the architecture. And today is one of those days.


Something that escaped my attention with the original TU102 GPU and the RTX 2080 Ti was that for Turing, NVIDIA changed how standard FP16 operations were handled. Rather than processing it through their FP32 CUDA cores, as was the case for GP100 Pascal and GV100 Volta, NVIDIA instead started routing FP16 operations through their tensor cores.


The tensor cores are of course FP16 specialists, and while sending standard (non-tensor) FP16 operations through them is major overkill, it’s certainly a valid route to take with the architecture. In the case of the Turing architecture, this route offers a very specific perk: it means that NVIDIA can dual-issue FP16 operations with either FP32 operations or INT32 operations, essentially giving the warp scheduler a third option for keeping the SM partition busy. Note that this doesn’t really do anything extra for FP16 performance – it’s still 2x FP32 performance – but it gives NVIDIA some additional flexibility.


Of course, as we just discussed, the Turing Minor does away with the tensor cores in order to allow for a leaner GPU. So what happens to FP16 operations? As it turns out, NVIDIA has introduced dedicated FP16 cores!


These FP16 cores are brand new to Turing Minor, and have not appeared in any past NVIDIA GPU architecture. Their purpose is functionally the same as running FP16 operations through the tensor cores on Turing Major: to allow NVIDIA to dual-issue FP16 operations alongside FP32 or INT32 operations within each SM partition. And because they are just FP16 cores, they are quite small. NVIDIA isn’t giving specifics, but going by throughput alone they should be a fraction of the size of the tensor cores they replace.


To users and developers this shouldn’t make a difference – CUDA and other APIs abstract this and FP16 operations are simply executed wherever the GPU architecture intends for them to go – so this is all very transparent. But it’s a neat insight into how NVIDIA has optimized Turing Minor for die size while retaining the basic execution flow of the architecture.


Now the bigger question in my mind: why is it so important to NVIDIA to be able to dual-issue FP32 and FP16 operations, such that they’re willing to dedicate die space to fixed FP16 cores? Are they expecting these operations to be frequently used together within a thread? Or is it just a matter of execution ports and routing? But that is a question we’ll have to save for another day.
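
If anyone's curious what "standard FP16 operations" look like from the software side, here's a minimal CUDA sketch of packed FP16 (my own illustration only, not code from the game, the review, or the article; the kernel name and buffers are made up):

```cuda
// Minimal illustration of packed FP16 in CUDA. Two half-precision values share
// one 32-bit register (__half2), so a single instruction performs two
// multiply-adds; that packing is where the "2x FP32 rate" figure comes from.
#include <cuda_fp16.h>

__global__ void axpy_half2(int n2, __half2 a, const __half2 *x, __half2 *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n2) {
        // __hfma2: fused multiply-add on both packed halves at once,
        // i.e. y = a * x + y for the low and high FP16 lanes.
        y[i] = __hfma2(a, x[i], y[i]);
    }
}

// Launch sketch (n2 = number of __half2 pairs, d_x/d_y = device buffers):
// axpy_half2<<<(n2 + 255) / 256, 256>>>(n2, __float2half2_rn(2.0f), d_x, d_y);
```

The point is simply that two FP16 values ride in one 32-bit register, so each FMA does double duty; whether the hardware routes that through tensor cores (Turing Major), dedicated FP16 units (Turing Minor), or the regular shader ALUs (Vega/Navi) is invisible to the code above.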
 
Last edited:

FordGT90Concept

"I go fast!1!11!1!"
Joined
Oct 13, 2008
Messages
26,259 (4.46/day)
Location
IA, USA
System Name BY-2021
Processor AMD Ryzen 7 5800X (65w eco profile)
Motherboard MSI B550 Gaming Plus
Cooling Scythe Mugen (rev 5)
Memory 2 x Kingston HyperX DDR4-3200 32 GiB
Video Card(s) AMD Radeon RX 7900 XT
Storage Samsung 980 Pro, Seagate Exos X20 TB 7200 RPM
Display(s) Nixeus NX-EDG274K (3840x2160@144 DP) + Samsung SyncMaster 906BW (1440x900@60 HDMI-DVI)
Case Coolermaster HAF 932 w/ USB 3.0 5.25" bay + USB 3.2 (A+C) 3.5" bay
Audio Device(s) Realtek ALC1150, Micca OriGen+
Power Supply Enermax Platimax 850w
Mouse Nixeus REVEL-X
Keyboard Tesoro Excalibur
Software Windows 10 Home 64-bit
Benchmark Scores Faster than the tortoise; slower than the hare.
Dude, that's the 2070 Super, which is basically a 2080.
In this game the 1080 Ti loses to a regular 2070, against which it was generally 10+% faster.
Ah, I see, so Turing has a fairly significant advantage against Pascal in Vulkan.
 

W1zzard

Administrator
Staff member
Joined
May 14, 2004
Messages
27,849 (3.71/day)
Processor Ryzen 7 5700X
Memory 48 GB
Video Card(s) RTX 4080
Storage 2x HDD RAID 1, 3x M.2 NVMe
Display(s) 30" 2560x1600 + 19" 1280x1024
Software Windows 10 64-bit
The reason Adaptive Shading is off is that it would make the game run at a different quality compared to all the other cards.
That was exactly my thinking
 
Joined
Apr 3, 2013
Messages
105 (0.02/day)
Processor Intel Xeon E5-1650 v2
Motherboard ASUS P9X79
Video Card(s) NVIDIA GTX 1080 FE
Display(s) ASUS PG43UQ
VR HMD Valve Index
Software Windows 7
But then adaptive shading is off because the picture is degraded (although the review says it looks perceptually the same).
RTX is off (not here, obviously) because the picture is prettier.
Pascal has no FP16, so why not turn it off for the other cards too?

I think we should foster innovation, not take the car on the horse track.
 
Joined
Oct 1, 2006
Messages
4,932 (0.74/day)
Location
Hong Kong
Processor Core i7-12700k
Motherboard Z690 Aero G D4
Cooling Custom loop water, 3x 420 Rad
Video Card(s) RX 7900 XTX Phantom Gaming
Storage Plextor M10P 2TB
Display(s) InnoCN 27M2V
Case Thermaltake Level 20 XT
Audio Device(s) Soundblaster AE-5 Plus
Power Supply FSP Aurum PT 1200W
Software Windows 11 Pro 64-bit
Pascal has no FP16, so why not turn it off for the other cards too?

I think we should foster innovation, not take the car on the horse track.
Both current-generation nVidia and AMD GPUs support FP16, so what is the problem?
Pascal is pretty much an improved Maxwell.
It was a rather brute-force approach to performance, with nVidia knowing that DX12 and Vulkan were not yet common.
Pascal is the very opposite of fostering innovation.
 
Last edited:
Joined
Mar 10, 2014
Messages
1,793 (0.46/day)
Ah, I see, so Turing has a fairly significant advantage against Pascal in Vulkan.

One could have that advantage in D3D12 too, especially if using advanced Turing features through NVAPI. With Vulkan those can be used through extensions. E.g., look at RAGE 2; the only available renderer for that is Vulkan, but the RTX 2070S and GTX 1080 Ti are still quite equal in that game.

Both current-generation nVidia and AMD GPUs support FP16, so what is the problem?
Pascal is pretty much an improved Maxwell.
It was a rather brute-force approach to performance, with nVidia knowing that DX12 and Vulkan were not yet common.
Pascal is the very opposite of fostering innovation.

Consumer Pascals did not have full-speed FP16, mainly because of product segmentation at the time. AI/ML made use of it, and they wanted to sell you the high-priced Tesla P100 (or, later on, the Pascal inside the Tegra X2). But since Tensor cores came along, there's no need to cripple full-speed FP16. They do cripple tensor performance on consumer cards though (FP16 with FP32 accumulate runs at half rate on GeForce).
 
Joined
Jan 17, 2006
Messages
932 (0.14/day)
Location
Ireland
System Name "Run of the mill" (except GPU)
Processor R9 3900X
Motherboard ASRock X470 Taich Ultimate
Cooling Cryorig (not recommended)
Memory 32GB (2 x 16GB) Team 3200 MT/s, CL14
Video Card(s) Radeon RX6900XT
Storage Samsung 970 Evo plus 1TB NVMe
Display(s) Samsung Q95T
Case Define R5
Audio Device(s) On board
Power Supply Seasonic Prime 1000W
Mouse Roccat Leadr
Keyboard K95 RGB
Software Windows 11 Pro x64, insider preview dev channel
Benchmark Scores #1 worldwide on 3D Mark 99, back in the (P133) days. :)
@W1zzard How would you rate the game? How long does it take to play through? My rule of thumb is that a game needs to be really fun and not cost more than €1/hour at most. Ideally less.

With things like Xbox on PC at a reasonable price, "full price" games need to either be a lot cheaper or offer a longer playing experience IMO.

I'm SOOOO glad I'm playing Metro Exodus on game pass as I wouldn't pay more than €5 for it, the full price is horrendous for what it is IMO. 2033 was massively better.
 
Last edited:

64K

Joined
Mar 13, 2014
Messages
6,773 (1.73/day)
Processor i7 7700k
Motherboard MSI Z270 SLI Plus
Cooling CM Hyper 212 EVO
Memory 2 x 8 GB Corsair Vengeance
Video Card(s) Temporary MSI RTX 4070 Super
Storage Samsung 850 EVO 250 GB and WD Black 4TB
Display(s) Temporary Viewsonic 4K 60 Hz
Case Corsair Obsidian 750D Airflow Edition
Audio Device(s) Onboard
Power Supply EVGA SuperNova 850 W Gold
Mouse Logitech G502
Keyboard Logitech G105
Software Windows 10
@W1zzard How would you rate the game? How long does it take to play through? My rule of thumb is that a game needs to be really fun and not cost more than €1/hour at most. Ideally less.

With things like Xbox on PC at a reasonable price, "full price" games need to either be a lot cheaper or offer a longer playing experience IMO.

I'm SOOOO glad I'm playing Metro Exodus on game pass as I wouldn't pay more than €5 for it, the full price is horrendous for what it is IMO. 2033 was massively better.

What I've been doing for many years now is wait until the game has been out for a couple of years and then pick it up on a really good sale for $10 or $15 (I will probably make an exception on Cyberpunk 2077 and Squadron 42 and pay full price).

This game is releasing at $30 so you can probably get it for $7.50 or maybe even $5 doing it that way. Also you will gain the benefit of playing the game after it's been patched and polished for the best gaming experience. If a game supports mods then there should be plenty to pick from at that time as well.
 
Joined
Jan 17, 2006
Messages
932 (0.14/day)
Location
Ireland
System Name "Run of the mill" (except GPU)
Processor R9 3900X
Motherboard ASRock X470 Taich Ultimate
Cooling Cryorig (not recommended)
Memory 32GB (2 x 16GB) Team 3200 MT/s, CL14
Video Card(s) Radeon RX6900XT
Storage Samsung 970 Evo plus 1TB NVMe
Display(s) Samsung Q95T
Case Define R5
Audio Device(s) On board
Power Supply Seasonic Prime 1000W
Mouse Roccat Leadr
Keyboard K95 RGB
Software Windows 11 Pro x64, insider preview dev channel
Benchmark Scores #1 worldwide on 3D Mark 99, back in the (P133) days. :)
Likewise unless it's something I really want to play and then I usually wait at least a month for a sale.

Ditto with SC. ;)

The patching is key. I was really, really POed with TW3 on release with the bugs and downgraded graphics. No more pre-ordering for me! (other than SC).
 
Joined
Apr 21, 2010
Messages
578 (0.11/day)
System Name Home PC
Processor Ryzen 5900X
Motherboard Asus Prime X370 Pro
Cooling Thermaltake Contac Silent 12
Memory 2x8gb F4-3200C16-8GVKB - 2x16gb F4-3200C16-16GVK
Video Card(s) XFX RX480 GTR
Storage Samsung SSD Evo 120GB -WD SN580 1TB - Toshiba 2TB HDWT720 - 1TB GIGABYTE GP-GSTFS31100TNTD
Display(s) Cooler Master GA271 and AoC 931wx (19in, 1680x1050)
Case Green Magnum Evo
Power Supply Green 650UK Plus
Mouse Green GM602-RGB ( copy of Aula F810 )
Keyboard Old 12 years FOCUS FK-8100
I'm not impressed by the screenshots. I was expecting something like Forza 4 or Crysis 3.
 
Joined
Oct 21, 2005
Messages
7,061 (1.01/day)
Location
USA
System Name Computer of Theseus
Processor Intel i9-12900KS: 50x Pcore multi @ 1.18Vcore (target 1.275V -100mv offset)
Motherboard EVGA Z690 Classified
Cooling Noctua NH-D15S, 2xThermalRight TY-143, 4xNoctua NF-A12x25,3xNF-A12x15, 2xAquacomputer Splitty9Active
Memory G-Skill Trident Z5 (32GB) DDR5-6000 C36 F5-6000J3636F16GX2-TZ5RK
Video Card(s) ASUS PROART RTX 4070 Ti-Super OC 16GB, 2670MHz, 0.93V
Storage 1x Samsung 970 Pro 512GB NVMe (OS), 2x Samsung 970 Evo Plus 2TB (data), ASUS BW-16D1HT (BluRay)
Display(s) Dell S3220DGF 32" 2560x1440 165Hz Primary, Dell P2017H 19.5" 1600x900 Secondary, Ergotron LX arms.
Case Lian Li O11 Air Mini
Audio Device(s) Audiotechnica ATR2100X-USB, El Gato Wave XLR Mic Preamp, ATH M50X Headphones, Behringer 302USB Mixer
Power Supply Super Flower Leadex Platinum SE 1000W 80+ Platinum White, MODDIY 12VHPWR Cable
Mouse Zowie EC3-C
Keyboard Vortex Multix 87 Winter TKL (Gateron G Pro Yellow)
Software Win 10 LTSC 21H2
What I've been doing for many years now is wait until the game has been out for a couple of years and then pick it up on a really good sale for $10 or $15 (I will probably make an exception on Cyberpunk 2077 and Squadron 42 and pay full price).

This game is releasing at $30 so you can probably get it for $7.50 or maybe even $5 doing it that way. Also you will gain the benefit of playing the game after it's been patched and polished for the best gaming experience. If a game supports mods then there should be plenty to pick from at that time as well.
I bought Prey that way for $8 this past week. I've been on the fence about buying Dishonored as that's also on a deep sale. Buying things 3 years late gives you a stack of games to play; you're just encountering them on a different schedule from others.
 

64K

Joined
Mar 13, 2014
Messages
6,773 (1.73/day)
Processor i7 7700k
Motherboard MSI Z270 SLI Plus
Cooling CM Hyper 212 EVO
Memory 2 x 8 GB Corsair Vengeance
Video Card(s) Temporary MSI RTX 4070 Super
Storage Samsung 850 EVO 250 GB and WD Black 4TB
Display(s) Temporary Viewsonic 4K 60 Hz
Case Corsair Obsidian 750D Airflow Edition
Audio Device(s) Onboard
Power Supply EVGA SuperNova 850 W Gold
Mouse Logitech G502
Keyboard Logitech G105
Software Windows 10
I bought Prey that way for $8 this past week. I've been on the fence about buying Dishonored as that's also on a deep sale. Buying things 3 years late gives you a stack of games to play; you're just encountering them on a different schedule from others.

Dishonored is a really good game and a must-play imo. The Definitive Edition for $4 comes with the game and the 3 DLCs. Probably around 30 hours of gaming but longer if you do everything.

The only downside, aside from waiting and missing the window when most people are playing and discussing the game and how much fun it is, is avoiding spoilers, but most gamers are considerate and don't post them.
 

rtwjunkie

PC Gaming Enthusiast
Supporter
Joined
Jul 25, 2008
Messages
13,995 (2.34/day)
Location
Louisiana
Processor Core i9-9900k
Motherboard ASRock Z390 Phantom Gaming 6
Cooling All air: 2x140mm Fractal exhaust; 3x 140mm Cougar Intake; Enermax ETS-T50 Black CPU cooler
Memory 32GB (2x16) Mushkin Redline DDR-4 3200
Video Card(s) ASUS RTX 4070 Ti Super OC 16GB
Storage 1x 1TB MX500 (OS); 2x 6TB WD Black; 1x 2TB MX500; 1x 1TB BX500 SSD; 1x 6TB WD Blue storage (eSATA)
Display(s) Infievo 27" 165Hz @ 2560 x 1440
Case Fractal Design Define R4 Black -windowed
Audio Device(s) Soundblaster Z
Power Supply Seasonic Focus GX-1000 Gold
Mouse Coolermaster Sentinel III (large palm grip!)
Keyboard Logitech G610 Orion mechanical (Cherry Brown switches)
Software Windows 10 Pro 64-bit (Start10 & Fences 3.0 installed)
@W1zzard How would you rate the game? How long does it take to play through? My rule of thumb is that a game needs to be really fun and not cost more than €1/hour at most. Ideally less
I’m over the ten-hour mark and have left most of the main missions alone, as most were above my level of ability. Trust me, that’s actually good advice they give. So I’ve used the supporting missions to improve abilities. Yeah, they introduced some RPG-lite elements which IMO slow down your fun and progression a little. The combat is just as wild and fast-paced as ever though.
I'm SOOOO glad I'm playing Metro Exodus on game pass as I wouldn't pay more than €5 for it, the full price is horrendous for what it is IMO. 2033 was massively better.
I find this amazing. I bought Metro: Exodus on release and felt that at $49.99 I got a tremendous deal! It is by far one of the greats (my Top 5) and I have played it 3 times already; the fun and enjoyment hasn’t diminished.

The attention to detail that makes it feel so alive is almost OCD on the part of the devs. The first two were like museums, even when walking among the living. Honestly, I would have paid $80 for it on release if I had known how good it was.
 
Joined
Nov 3, 2011
Messages
695 (0.15/day)
Location
Australia
System Name Eula
Processor AMD Ryzen 9 7900X PBO
Motherboard ASUS TUF Gaming X670E Plus Wifi
Cooling Corsair H150i Elite LCD XT White
Memory Trident Z5 Neo RGB DDR5-6000 64GB (4x16GB F5-6000J3038F16GX2-TZ5NR) EXPO II, OCCT Tested
Video Card(s) Gigabyte GeForce RTX 4080 GAMING OC
Storage Corsair MP600 XT NVMe 2TB, Samsung 980 Pro NVMe 2TB, Toshiba N300 10TB HDD, Seagate Ironwolf 4T HDD
Display(s) Acer Predator X32FP 32in 160Hz 4K FreeSync/GSync DP, LG 32UL950 32in 4K HDR FreeSync/G-Sync DP
Case Phanteks Eclipse P500A D-RGB White
Audio Device(s) Creative Sound Blaster Z
Power Supply Corsair HX1000 Platinum 1000W
Mouse SteelSeries Prime Pro Gaming Mouse
Keyboard SteelSeries Apex 5
Software MS Windows 11 Pro
This game and the previous one are among the few games that run on Vulkan, and the only ones that support all the new Turing features.
If anything, this game is "rigged" in favor of Turing.


The reason Adaptive Shading is off is that it would make the game run at a different quality compared to all the other cards.
Turing can run FP16 as well. In fact, this is what the Tensor Cores do.
https://www.anandtech.com/show/13973/nvidia-gtx-1660-ti-review-feat-evga-xc-gaming/2
The Curious Case of FP16: Tensor Cores vs. Dedicated Cores

Even though Turing-based video cards have been out for over 5 months now, every now and then I’m still learning something new about the architecture. And today is one of those days.


Something that escaped my attention with the original TU102 GPU and the RTX 2080 Ti was that for Turing, NVIDIA changed how standard FP16 operations were handled. Rather than processing it through their FP32 CUDA cores, as was the case for GP100 Pascal and GV100 Volta, NVIDIA instead started routing FP16 operations through their tensor cores.


The tensor cores are of course FP16 specialists, and while sending standard (non-tensor) FP16 operations through them is major overkill, it’s certainly a valid route to take with the architecture. In the case of the Turing architecture, this route offers a very specific perk: it means that NVIDIA can dual-issue FP16 operations with either FP32 operations or INT32 operations, essentially giving the warp scheduler a third option for keeping the SM partition busy. Note that this doesn’t really do anything extra for FP16 performance – it’s still 2x FP32 performance – but it gives NVIDIA some additional flexibility.


Of course, as we just discussed, the Turing Minor does away with the tensor cores in order to allow for a leaner GPU. So what happens to FP16 operations? As it turns out, NVIDIA has introduced dedicated FP16 cores!


These FP16 cores are brand new to Turing Minor, and have not appeared in any past NVIDIA GPU architecture. Their purpose is functionally the same as running FP16 operations through the tensor cores on Turing Major: to allow NVIDIA to dual-issue FP16 operations alongside FP32 or INT32 operations within each SM partition. And because they are just FP16 cores, they are quite small. NVIDIA isn’t giving specifics, but going by throughput alone they should be a fraction of the size of the tensor cores they replace.


To users and developers this shouldn’t make a difference – CUDA and other APIs abstract this and FP16 operations are simply executed wherever the GPU architecture intends for them to go – so this is all very transparent. But it’s a neat insight into how NVIDIA has optimized Turing Minor for die size while retaining the basic execution flow of the architecture.


Now the bigger question in my mind: why is it so important to NVIDIA to be able to dual-issue FP32 and FP16 operations, such that they’re willing to dedicate die space to fixed FP16 cores? Are they expecting these operations to be frequently used together within a thread? Or is it just a matter of execution ports and routing? But that is a question we’ll have to save for another day.
Turing has the rapid packed math feature on the normal CUDA FP/INT cores, not just the Tensor cores.
Read https://en.wikipedia.org/wiki/GeForce_20_series
The Tensor TFLOPS are listed separately from the normal CUDA cores' TFLOPS.
 
Last edited:
Joined
Oct 1, 2006
Messages
4,932 (0.74/day)
Location
Hong Kong
Processor Core i7-12700k
Motherboard Z690 Aero G D4
Cooling Custom loop water, 3x 420 Rad
Video Card(s) RX 7900 XTX Phantom Gaming
Storage Plextor M10P 2TB
Display(s) InnoCN 27M2V
Case Thermaltake Level 20 XT
Audio Device(s) Soundblaster AE-5 Plus
Power Supply FSP Aurum PT 1200W
Software Windows 11 Pro 64-bit
Turing has the rapid packed math feature on the normal CUDA FP/INT cores, not just the Tensor cores.
Read https://en.wikipedia.org/wiki/GeForce_20_series
The Tensor TFLOPS are listed separately from the normal CUDA cores' TFLOPS.
"Rapid Packed Math" is just another AMD marketing name for their FP16 capabilities.
The point stands that both current AMD and nVidia GPUs can run FP16 at double rate compare to FP32, there is no trickery in the benchmark.
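
To put a rough number on "double rate" (a back-of-the-envelope figure using the 2070 Super's public specs of 2560 CUDA cores and a ~1770 MHz boost clock, not data from this benchmark):

$$
\text{FP32 peak} \approx 2560 \times 1.77\,\text{GHz} \times 2\,\tfrac{\text{FLOP}}{\text{core}\cdot\text{clock}} \approx 9.1\ \text{TFLOPS},
\qquad
\text{FP16 peak} \approx 2 \times 9.1 \approx 18.1\ \text{TFLOPS}.
$$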
 
Last edited: