
Battlefield V with GeForce RTX DirectX Raytracing

Joined
Dec 31, 2009
Messages
19,371 (3.57/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
So, going by that logic and working backwards, the fastest single-core CPUs before dual cores arrived were around $63? 'Cause that's where you end up if you think the price should follow the number of cores, and divide the 9900K's ~$500 by 8 (disregarding IPC entirely, of course).

In other words: this is not how things work, this has never been how things work, and shouldn't be how things work.
LOL, no.. just no.

That isn't how I think it works; I am just throwing in a different perspective. The fact IS that there are 2x the cores and threads on this CPU versus the other one. There are MANY factors that go into pricing it. But disparaging the pricing while ignoring the actual differences isn't looking at the big picture.

I'm glad you brought that up, since it lets me point out that we've gone a bit off the topic of RTX, and to state that the doubling of Nvidia's pricing for the RTX 2080 Ti is completely unjustified. I thought Intel was greedy; Nvidia is the worst. The government says my wages have to go up by a minimum of 2.7% (the inflation rate for 2018), but these companies are charging me double because there is no competition in the market. That should not be legal (it is in fact illegal to run a monopoly, but they get away with it somehow).
It is legal... NVIDIA isn't a monopoly. Not with RTG popping out frighteningly mediocre-performing GPUs. Here's hoping Intel's discrete entry can shake things up. :)





I digress... RTX thread. My apologies.
 
Joined
May 2, 2017
Messages
7,762 (2.83/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150MHz, CO -7,-7,-20(x6)
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
LOL, no.. just no.

That isn't how I think it works; I am just throwing in a different perspective. The fact IS that there are 2x the cores and threads on this CPU versus the other one. There are MANY factors that go into pricing it. But disparaging the pricing while ignoring the actual differences isn't looking at the big picture.
Let's go big picture then: Intel gave us four cores (with HT most of the time) with minimal IPC improvements for a decade, and prices never dropped. They essentially flat out refused to increase core counts in the MSDT market, despite customers asking for it, instead taking every opportunity to sell ever-cheaper silicon (minimal increases in transistor count, per-transistor prices constantly dropping, relatively flat R&D costs) at the same price point. Then they're suddenly faced with competition, double the number of cores in a year, change nothing else, and they raise prices by 50%. Your justification is short-sighted and cuts Intel far too much slack. They've made it obvious over the last decade that they're only interested in padding margins.

And Nvidia is exactly the same. As are most corporations, really, but the ones with near-monopolistic market positions make it far more obvious.
 
Joined
Dec 31, 2009
Messages
19,371 (3.57/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
Who asked for it? The general PC-using population didn't ask for, want, or need it (more cores and threads)... quads have been out for nearly a decade, and only now are people saying a quad with HT (for a gamer) would be the low end.

But this is what happens when there is little competition. AMD lulled Intel to sleep for the better part of a decade with their sub-par-performing architectures until Ryzen. This let Intel put out merely incremental updates in IPC and clock-speed performance. Since AMD went wide because they can't compete in clock speeds or overclocking headroom, this forced big blue to react and throw more cores/threads onto their incremental updates.

My justification and POV include more than just that talking point (but again, this isn't the time and place for a deep dive). What is myopic is seemingly ignoring the fact that it doubled the number of cores/threads while adding faster boost speeds and overclocking headroom. AMD shines in situations where it can use more threads for the same price, but falls to second place (of two) in situations outside of that. Both processors (and GPUs, LOL) have a place in the market. Just be sure the measuring stick is the same size for each thing that is measured.

Cheers. :)
 
Joined
May 2, 2017
Messages
7,762 (2.83/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150MHz, CO -7,-7,-20(x6)
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Who asked for it? The general PC-using population didn't ask for, want, or need it (more cores and threads)... quads have been out for nearly a decade, and only now are people saying a quad with HT (for a gamer) would be the low end.

But this is what happens when there is little competition. AMD lulled Intel to sleep for the better part of a decade with their sub-par-performing architectures until Ryzen. This let Intel put out merely incremental updates in IPC and clock-speed performance. Since AMD went wide because they can't compete in clock speeds or overclocking headroom, this forced big blue to react and throw more cores/threads onto their incremental updates.

My justification and POV include more than just that talking point (but again, this isn't the time and place for a deep dive). What is myopic is seemingly ignoring the fact that it doubled the number of cores/threads while adding faster boost speeds and overclocking headroom. AMD shines in situations where it can use more threads for the same price, but falls to second place (of two) in situations outside of that. Both processors (and GPUs, LOL) have a place in the market. Just be sure the measuring stick is the same size for each thing that is measured.

Cheers. :)
This is getting too OT even for me, but I'll give one last reply: I can't answer for anyone else here, but I'm certainly not ignoring the doubling of cores/threads. My view on this is simple: it's about d**n time. Intel deserves zero credit for this, given that they've been dragging their feet on increasing this for a full decade. As for who has been asking for it, I'd say most of the enthusiast community for the past 3-4 years? People have been begging Intel to increase core/thread counts outside of HEDT for ages, as the increases in IPC/per-thread perf have been minimal, giving people no reason to upgrade and forcing game developers to hold back any CPU-demanding new features, as there wouldn't be an install base capable of running them. Heck, we still have people running overclocked Sandy Bridge chips and doing fine, as the de-facto standard of 4c8t being the high end and 4c4t being common has caused utter stagnation in game CPU loads.

Where the core count increase does matter is in the doubling of silicon area required by the cores, which of course costs money - but CFL-R is still smaller than SB or IVB. Due to per-area cost increasing on denser nodes, the total die cost is likely higher than these, but nowhere near enough to justify a 50% price increase. Why? Because prices up until then had remained static, despite production becoming cheaper. In other words, Intel came into this with already padded margins, and decided to maintain these rather than do the sensible thing and bring price and production cost back into relation. That is quite explicitly screwing over end-users. Personally, I don't like that.
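To put rough numbers on that die-cost point, here is a back-of-the-envelope sketch. The wafer prices are pure assumptions, yield and edge losses are ignored, and the die areas are only approximate public figures (~174 mm² for an 8-core Coffee Lake Refresh die, ~216 mm² for quad-core Sandy Bridge); the only point is that plausible per-die cost differences are nowhere near a 50% price hike:

```python
import math

# Back-of-the-envelope die-cost comparison. Wafer prices are assumptions,
# and yield, edge loss, packaging and test costs are all ignored.

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Crude upper bound: usable wafer area divided by die area."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    return wafer_area / die_area_mm2

def die_cost(wafer_cost_usd, die_area_mm2):
    return wafer_cost_usd / dies_per_wafer(die_area_mm2)

cfl_r = die_cost(5000, 174)   # assumed ~$5,000 per 14 nm wafer, ~174 mm^2 die
snb = die_cost(4000, 216)     # assumed ~$4,000 per 32 nm wafer, ~216 mm^2 die

print(f"CFL-R die: ~${cfl_r:.0f}, SNB die: ~${snb:.0f}, "
      f"difference: ~${cfl_r - snb:.0f} per chip")
```

Even with much more pessimistic wafer-cost assumptions, the per-die difference stays in the tens of dollars at most, which is the point being made above.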

Also, what I find myopic is how you seemingly treat corporate greed as a law of nature rather than what it is: corporate greed. There is no necessity whatsoever in Intel ceasing innovation and padding margins when competition disappeared. Heck, you go one step further and blame Intel's greed on AMD, which is quite absurd. Here's a shocker: Intel could very well have kept innovating (i.e. maintained the status quo) or dropped prices as production got cheaper, no matter whether they had competition or not. That would sure have increased sales, at least. Instead, they chose to be greedy, and you're acting like that's not a choice. It is. And please don't come dragging the "fiduciary duty" crap, as there's nothing in that saying that you have to put your customers' wallets through a blender to fulfill that.

Nobody here is judging different products by different standards; quite the opposite, we're taking the whole context into account. AMD's recent rebound is more impressive due to how far behind they are - but in pure numbers, they're still slightly behind. However, they crush Intel on price/perf across loads, and roughly match them for price/perf in gaming. That's not bad. I judge Intel more harshly, as they're in a massively advantageous position, and yet have managed to completely screw this up. They're barely clinging to their lead despite having nearly a decade to cement it. That's disappointing, to say the least, but not surprising when they've prioritized squeezing money out of their customers rather than innovation. And again, that's their choice, and they'll have to live with it.

And I suppose that's how this all ties back into the RTX debacle - hiking up prices for barely-tangible performance increases and the (so far quite empty) promise of future gains, seemingly mostly to pad out corporate margins as much as possible.
 
Joined
Dec 31, 2009
Messages
19,371 (3.57/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
LOL, there is so much to reply to... but I said I was leaving it alone (in this thread), and I will.

One thing I do appreciate is a mature conversation without personal barbs. That is hard to find on TPU these days. :toast:
 

HTC

Joined
Apr 1, 2008
Messages
4,661 (0.77/day)
Location
Portugal
System Name HTC's System
Processor Ryzen 5 5800X3D
Motherboard Asrock Taichi X370
Cooling NH-C14, with the AM4 mounting kit
Memory G.Skill Kit 16GB DDR4 F4 - 3200 C16D - 16 GTZB
Video Card(s) Sapphire Pulse 6600 8 GB
Storage 1 Samsung NVMe 960 EVO 250 GB + 1 3.5" Seagate IronWolf Pro 6TB 7200RPM 256MB SATA III
Display(s) LG 27UD58
Case Fractal Design Define R6 USB-C
Audio Device(s) Onboard
Power Supply Corsair TX 850M 80+ Gold
Mouse Razer Deathadder Elite
Software Ubuntu 20.04.6 LTS
Not for $1300+ they wouldn't.
Nvidia needed a way to raise prices (to over double those of the previous generation). Top-end laptop prices have doubled over the last 2 years, top-end phone prices have doubled too, even CPUs (the Intel 7700K cost $300 on release, the new 9900K costs $600). Nvidia felt they were missing out, so they used "real-time raytracing" to double their prices. Consumers be boned.

You may have missed a big part of my post, dude: I was referring to a potential add-on card with RT capabilities, and I said nothing about pricing.

Nvidia could have made 2000 series cards with quite a bit more "horsepower" if they did NOT have the RT capabilities built in, and they could quite possibly have sold them for current 2000 series prices: they could actually "get away" with it because the performance uplift vs the 1000 series would justify it.

Instead, they "gave us" RT capabilities that are evidently insufficient for today's high-end gaming, unless you find acceptable the requirement of going from 4K @ 50-80 FPS to 1080p @ 50-70 FPS in order to have partial RT (just reflections, for now).
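For a rough sense of what that resolution drop means in raw shading work, here is a quick back-of-the-envelope calculation; the FPS values are just midpoints of the ranges quoted above, not measurements:

```python
# Rough pixel-throughput comparison for the resolution drop described above.
# FPS values are midpoints of the quoted ranges, purely for illustration.

pixels_4k = 3840 * 2160        # 8,294,400 pixels per frame
pixels_1080p = 1920 * 1080     # 2,073,600 pixels per frame

fps_no_rt_4k = 65              # midpoint of the quoted 50-80 FPS (no RT, 4K)
fps_rt_1080p = 60              # midpoint of the quoted 50-70 FPS (RT, 1080p)

throughput_no_rt = pixels_4k * fps_no_rt_4k        # pixels shaded per second
throughput_rt = pixels_1080p * fps_rt_1080p

print(f"RT path delivers {throughput_rt / throughput_no_rt:.0%} of the non-RT "
      f"pixel throughput (~{throughput_no_rt / throughput_rt:.1f}x less).")
```

In other words, the partial-RT path ends up pushing roughly a quarter of the pixel throughput.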
 
Joined
Dec 27, 2013
Messages
887 (0.22/day)
Location
somewhere
Oh dear. That doesn't even look that good. RTX 20 series is a joke. The hardware isn't ready for Real-time RT yet (it shows) and you're footing the bill for the huge die sizes because they are built on a node that will be outdated in 6 months. Worth it, though, right?

Right?
 
Joined
Nov 24, 2018
Messages
1 (0.00/day)
So that's it... the revolution... back to the 90s! Unreal Engine already did this with DX6 or DX7.
 

Aquinus

Resident Wat-man
Joined
Jan 28, 2012
Messages
13,162 (2.82/day)
Location
Concord, NH, USA
System Name Apollo
Processor Intel Core i9 9880H
Motherboard Some proprietary Apple thing.
Memory 64GB DDR4-2667
Video Card(s) AMD Radeon Pro 5600M, 8GB HBM2
Storage 1TB Apple NVMe, 4TB External
Display(s) Laptop @ 3072x1920 + 2x LG 5k Ultrafine TB3 displays
Case MacBook Pro (16", 2019)
Audio Device(s) AirPods Pro, Sennheiser HD 380s w/ FIIO Alpen 2, or Logitech 2.1 Speakers
Power Supply 96w Power Adapter
Mouse Logitech MX Master 3
Keyboard Logitech G915, GL Clicky
Software MacOS 12.1
All I have to say is that the moment DXR is turned on with a 2080 Ti, even on low, performance is lower than a Vega 56's at 4K. For a 2080, performance at low is practically the same as a 580's. At 4K that makes the game practically unplayable. If you want to crank it up, you'd better have a 2080 Ti and be running at 1080p. To me, that's unacceptable for a graphics card that costs over $1,000 USD. Even more so if you consider this little quote from the review:
It is very important to realize that DXR will not take over rendering of the whole scene. Everything in Battlefield V, with DXR enabled, is rendered exactly the same way, looking exactly the same as with the regular DirectX 12 renderer. The only exception are surfaces that are marked as "reflective" by the developer during model/level design. When one of these surfaces is rendered, it will be fed with raytraced scene data to visually present accurate reflections, that do look amazing and more detailed than anything we've ever seen before.

So remember, you're losing 50-60% of your performance so that some elements of the world look better. That's no good, no good at all.
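To make the quoted "only surfaces marked reflective" point concrete, here is a minimal conceptual sketch of a hybrid renderer's per-surface decision. This is illustrative Python pseudocode under assumptions, not DICE's engine or the actual DXR API; all names (Surface, trace_reflection_rays, etc.) are made up for the example:

```python
from dataclasses import dataclass

@dataclass
class Surface:
    name: str
    reflective: bool    # flag set by the developer during model/level design

def render_frame(surfaces):
    """Hybrid rendering as described in the quoted review: everything is
    rasterized as usual; only surfaces explicitly marked 'reflective' are
    additionally fed raytraced reflection data."""
    for s in surfaces:
        rasterize(s)                                  # normal DX12 path, unchanged
        if s.reflective:
            reflections = trace_reflection_rays(s)    # the expensive DXR part
            composite_reflections(s, reflections)

# Placeholder stubs so the sketch runs; a real engine does the heavy lifting here.
def rasterize(s): print(f"raster: {s.name}")
def trace_reflection_rays(s): return f"rays({s.name})"
def composite_reflections(s, r): print(f"composite {r} onto {s.name}")

render_frame([Surface("concrete wall", False), Surface("puddle", True)])
```

The expensive raytracing only runs on the flagged surfaces, yet as the numbers above show, even that partial pass is enough to roughly halve overall performance.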
 
Joined
May 2, 2017
Messages
7,762 (2.83/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150MHz, CO -7,-7,-20(x6)
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
One thing we need to remember here is that hybrid RT is in its very infancy, and there is no doubt it will improve dramatically over time. There are undoubtedly undiscovered or untried tricks and workarounds to make this perform better, like rendering the RT scene at a lower resolution and upscaling with DLSS, or hitherto unknown ways of compensating for reduced ray counts. However, it seems unlikely that we'll see >100% performance improvements with current hardware, at least in the next couple of years, which is more or less what is needed to bring this to acceptable levels of price/performance. The previous generation brought us to solid 4K at 60 Hz or better, and now suddenly we're back down to half that, and worse performance than even two generations back even at lower resolutions. Add in the dramatic price increases, and we're looking at something that simply isn't acceptable. Making RTRT feasible at all is great, but what Turing really shows us is that it isn't really feasible yet, and given the necessary die area and the bleak outlook for future node shrinks on silicon, it might not be for a long, long time.
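As a minimal sketch of the "render the RT pass at lower resolution and upscale" idea mentioned above: the snippet below uses plain nearest-neighbour upscaling in NumPy as a stand-in for DLSS (which is a learned upscaler, not reproduced here), and all function names are illustrative assumptions rather than any real engine API:

```python
import numpy as np

def trace_reflections_low_res(height, width, scale=2):
    """Stand-in for an expensive raytraced reflection pass, run at
    (height/scale) x (width/scale) to cut the ray count by scale^2."""
    low_h, low_w = height // scale, width // scale
    # Fake 'raytraced' output: random RGB where a real pass would shoot rays.
    return np.random.rand(low_h, low_w, 3).astype(np.float32)

def upscale_nearest(image, scale=2):
    """Cheap nearest-neighbour upscale back to full resolution; DLSS would
    do this step with a learned model instead."""
    return image.repeat(scale, axis=0).repeat(scale, axis=1)

full_h, full_w, scale = 2160, 3840, 2
low_res_rt = trace_reflections_low_res(full_h, full_w, scale)   # 1/4 of the rays
full_res_rt = upscale_nearest(low_res_rt, scale)

print(low_res_rt.shape, "->", full_res_rt.shape)   # (1080, 1920, 3) -> (2160, 3840, 3)
```

Halving the reflection-pass resolution in each dimension cuts the ray count to a quarter, which is the kind of trade-off that could claw back a chunk of the performance deficit, at some cost in reflection detail.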
 
Joined
Feb 1, 2013
Messages
1,264 (0.29/day)
System Name Gentoo64 /w Cold Coffee
Processor 9900K 5.2GHz @1.312v
Motherboard MXI APEX
Cooling Raystorm Pro + 1260mm Super Nova
Memory 2x16GB TridentZ 4000-14-14-28-2T @1.6v
Video Card(s) RTX 4090 LiquidX Barrow 3015MHz @1.1v
Storage 660P 1TB, 860 QVO 2TB
Display(s) LG C1 + Predator XB1 QHD
Case Open Benchtable V2
Audio Device(s) SB X-Fi
Power Supply MSI A1000G
Mouse G502
Keyboard G815
Software Gentoo/Windows 10
Benchmark Scores Always only ever very fast
Using […] to sell […]. Well done.
 
Joined
May 12, 2006
Messages
1,572 (0.23/day)
Location
The Gulag Casino
System Name ROG 7900X3d By purecain
Processor AMD Ryzen 7 7900X3D
Motherboard ASUS Crosshair X670E Hero
Cooling Noctua NH U12A
Memory 64GB G.Skill Trident Z5 Neo RGB 6400 @ 6000MHz @ 1.41V
Video Card(s) Aorus RTX4090 Extreme Waterforce
Storage 990 Pro 2TB / 1TB Samsung Evo M.2 / 2TB Samsung QVO / 1TB Samsung Evo 780 / 120GB Kingston Now
Display(s) LG 65UN85006LA 65" Smart 4K Ultra HD HDR LED TV
Case Thermaltake CoreX71 Limited Edition Etched Tempered Glass Door
Audio Device(s) Onboard / NI Komplete Audio 6
Power Supply Seasonic FOCUS 1000w 80+
Mouse M65 RGB Elite
Keyboard K95 RGB Platinum
Software Windows11pro
Benchmark Scores https://valid.x86.fr/gtle1y
lol, I'm sticking with my non-RTX card for now... :toast:
 