
Battlefield V with GeForce RTX DirectX Raytracing

ksdomino

New Member
Joined
Nov 16, 2018
Messages
6 (0.00/day)
Wait, what? It's been clear from long before cards were available that RTX would entail a performance loss. Some of the first commentary on Nvidia's promo material - across plenty of web sites - was "this is choppy, clearly below 60fps". Did you not look up any information whatsoever before buying?

"REAL-TIME"
Everything that I read said "realtime". Even the Nvidia website today still has multiple instances of "realtime raytracing" performed by dedicated cores.
Nobody expected real-time to mean lower resolution (1080p is not acceptable at even half of this card's price point) or below 30 fps (which is also unacceptable for people who buy higher-end cards).
Choppy is not equal to real-time.

"DEDICATED"
There's nothing dedicated about those cores unless by "dedicated" they mean they are dedicating themselves to harming performance!

1080p is a step backwards.
30FPS is a step backwards.
A 50% performance hit, making a card that costs over $1,200 effectively perform like a $300 card, is unacceptable.
Charging people more than twice as much as the previous generation is capitalism at its worst. If they keep driving prices up like this, we'll all be screwed (even those who can currently afford it).
I've never been a fanboy of anything, as I like to buy the best regardless of who made it, but this is my last time buying anything Nvidia.
Just because no one has competed with them, that shouldn't allow them to deceive, overcharge and exploit their customers and their position that much! I feel really ripped off! Sorry for the rant.
 
Last edited:
Joined
May 2, 2017
Messages
7,762 (2.85/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
"REAL-TIME"
Everything that I read said "realtime". Even the Nvidia website today still has multiple instances of "realtime raytracing" performed by dedicated cores.
Nobody expected real-time to mean lower resolution (1080p is not acceptable at even half of this card's price point) or below 30 fps (which is also unacceptable for people who buy higher-end cards).
Choppy is not equal to real-time.

"DEDICATED"
There's nothing dedicated about those cores unless by "dedicated" they mean they are dedicating themselves to harming performance!

1080p is a step backwards.
30FPS is a step backwards.
A 50% performance hit, making a card that costs over $1,200 effectively perform like a $300 card, is unacceptable.
Charging people more than twice as much as the previous generation is capitalism at its worst. If they keep driving prices up like this, we'll all be screwed (even those who can currently afford it).
I've never been a fanboy of anything, as I like to buy the best regardless of who made it, but this is my last time buying anything Nvidia.
Just because no one has competed with them, that shouldn't allow them to deceive, overcharge and exploit their customers and their position that much! I feel really ripped off! Sorry for the rant.
I entirely agree with parts of your rant (these prices are beyond absurd, and are frankly an insult to customers - as was 10XX series pricing above the 1070), but you seem desperately in need of a "read before you buy" lesson.

-There was plenty of press coverage going into the choppiness of Nvidia's demos and how this didn't bode well for actual performance of a first-generation product like this - particularly given how real-time ray tracing (regardless of resolution) hasn't been possible outside of massive render farms up until now. There were analyses of resolution and video frame rate demonstrating clearly that the video demos were 1080p at below 60fps. You seem to have made your purchase decision based solely on Nvidia's advertising. A hint: advertisers aren't neutral arbiters of truth. They're (only) interested in selling you stuff. Period. So, if you want to avoid being screwed over, wait for third-party reviews, and stick to trustworthy sources for said reviews, not "influencers" and other paid shills. And if you keep reading breathless articles praising this new and revolutionary technology that's going to enhance your games 9999999x, revive the dog you had when you were a kid and make you a millionaire just by existing, stop reading, go somewhere else, and scratch the source in question off the "trustworthy sources" list. Trustworthy journalists don't write like that.

-"Real time (rendering/ray-tracing)" means nothing more than frames being displayed immediately as they're rendered, as opposed to pre-rendering and displaying in sequence later. Of course, in practice, any form of effective real-time rendering requires a rendering output rate quick enough to maintain visual fluidity - for which there are many standards; 15fps for old cartoons, 24fps for film, 30fps for console games (usually) and 60fps (or more) for anything fast-paced. Nvidia are not wrong when saying that they're doing real-time ray traced rendering - it just doesn't live up to the standards of current gamers in terms of frame rate or resolution. This means that the tech isn't ready for prime time, but not that it isn't real-time. And what do sensible people do with tech that isn't ready for prime time? Either let it mature before buying, or buy it knowing that it's going to suck, but suck in a new and innovative way, and one that you can afford to waste money on.

-"Dedicated hardware" means the hardware is meant for a single (or single group of) job(s). Are you suggesting the RT cores either don't exist, or are used for other things than processing ray tracing? 'Cause there's nothing to indicate either of the two. Having dedicated hardware doesn't automatically imply that said hardware is good enough. You seem to think "dedicated hardware" means something entirely different than what it actually means.

Were you ripped off? Price-wise? Sure. Absolutely. Nvidia is price gouging; this is beyond any doubt. In terms of Nvidia overpromising? Possibly. This is first-gen tech, with one single real-life implementation. It may well improve, but it's pretty much a given that it'll suck to begin with. The thing is, nobody ever claimed this would improve performance, nor was the necessary drop in resolution a surprise to anyone paying attention.

Don't be gullible. Wait for reviews. Spend your money wisely, and get proven, not promised, value for money. Don't pay people for making grandiose promises. That's how all the failed Kickstarted games, game consoles, and other gaming-related vaporware were funded, after all.
 

ksdomino

New Member
Joined
Nov 16, 2018
Messages
6 (0.00/day)
I agree; with companies like Nvidia and these shark practices, the world is now a "buyer beware" marketplace.
Your post is entirely correct - and these guys know it too:
Totally worth wasting all that die space! If only people were warned XD
As expected, utter shat...
OMFG! This is even worse than I thought. RIP ray tracing, see you resurrected in a decade.
If I'd pay $1,300 for a GPU, I'd expect flawless RTX gaming at 4K, not a slideshow. Maybe an RTX 4080 Ti can deliver.
I'll wait next time and read more before buying.
 
Joined
May 12, 2006
Messages
1,572 (0.23/day)
Location
The Gulag Casino
System Name ROG 7900X3d By purecain
Processor AMD Ryzen 7 7900X3D
Motherboard ASUS Crosshair X670E Hero
Cooling Noctua NH U12A
Memory 64Gb G.Skill Trident Z5 neo RGB 6400@6000mhz@1.41v
Video Card(s) Aorus RTX4090 Extreme Waterforce
Storage 990Pro2Tb-1TbSamsung Evo M.2/ 2TbSamsung QVO/ 1TbSamsung Evo780/ 120gbKingston Now
Display(s) LG 65UN85006LA 65" Smart 4K Ultra HD HDR LED TV
Case Thermaltake CoreX71 Limited Edition Etched Tempered Glass Door
Audio Device(s) On board/NIcomplete audio 6
Power Supply Seasonic FOCUS 1000w 80+
Mouse M65 RGB Elite
Keyboard K95 RGB Platinum
Software Windows11pro
Benchmark Scores https://valid.x86.fr/gtle1y
It's just an early adopter thing... I wouldn't feel too sore. @ksdomino you still have a blazing fast GPU without RTX :toast:
 

GamersGen

New Member
Joined
Nov 17, 2018
Messages
1 (0.00/day)
Can you somehow enable DXR on a 1080 Ti, or is it only available on RTX hardware?
 
Joined
Apr 30, 2012
Messages
3,881 (0.85/day)
Can you somehow enable DXR on a 1080 Ti, or is it only available on RTX hardware?

DXR has a Fallback Layer for developer purposes.

Not sure you'd want to even if it could be done. A 1080 Ti would be somewhere between 3x and 4x slower.
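For the curious, this is roughly how a D3D12 app asks whether the GPU exposes native DXR (a sketch using the public CheckFeatureSupport query; the Fallback Layer mentioned above is a separate compute-shader-based library for developers, not something a shipping game uses):

[CODE]
#include <windows.h>
#include <d3d12.h>

// On a 1080 Ti this reports D3D12_RAYTRACING_TIER_NOT_SUPPORTED; running
// DXR there anyway means building against the Fallback Layer library,
// which emulates the API in compute shaders.
bool SupportsNativeDXR(ID3D12Device* device) {
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &opts5, sizeof(opts5))))
        return false;  // older runtime: no OPTIONS5 query, no DXR
    return opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
}
[/CODE]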
 
Joined
May 2, 2017
Messages
7,762 (2.85/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
DXR has a Fallback Layer for developer purposes.

Not sure you'd want to even if it could be done. A 1080 Ti would be somewhere between 3x and 4x slower.
3-4x? Really? If that's all the RT cores can do, that's not much. Usually the difference between dedicated hardware and not is far higher than that, but of course I haven't seen anyone actually try this. Still, I'd expect an absolute slideshow, way below 1 fps. After all, if it were just 3-4x, they could have seen about the same gains just by filling the die area of TU104 with CUDA cores instead of RT cores.
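Back-of-envelope, just to put numbers on that - the 35 fps baseline below is a hypothetical 2080 Ti figure for hybrid RT at 1080p, so substitute whatever number you trust:

[CODE]
#include <cstdio>
#include <initializer_list>

int main() {
    const double rtxFps = 35.0;  // assumed 2080 Ti hybrid-RT baseline
    for (double slowdown : {3.0, 4.0})
        std::printf("1080 Ti at %.0fx slower: ~%.1f fps\n",
                    slowdown, rtxFps / slowdown);
    // ~9-12 fps: unplayable, but nowhere near the sub-1 fps slideshow
    // you'd expect if dedicated hardware were an order of magnitude faster.
    return 0;
}
[/CODE]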
 
Joined
Dec 10, 2017
Messages
266 (0.11/day)
Processor Intel core i5 4590s
Motherboard Asus Z97 Pro Gamer
Cooling Evercool EC115A 915SP Cpu cooler,Coolermaster [200mm (front and top)+140mm rear]
Memory Corsair 16GB(4x4) ddr3 CMZ16GX3M4X1600C9(Ver8.16)(XMP)
Video Card(s) MSI GTX 970 GAMING 4G
Storage Western Digital WDC WD2001FAS 2TB Black, Toshiba DT01ACA100 1TB
Display(s) LG Flatron L177WSB
Case Coolermaster CM Storm Enforcer
Audio Device(s) Creative A550 Speakers 5.1 channel
Power Supply SuperFlower Leadex 2 Gold 650W SF-650F14EG
Mouse PLNK M-740 Optical Mouse
Keyboard ibuypower GKB100 Gaming Keyboard
Software Windows 7 Sp1 64 bit
Jensen Huang is an excellent salesman... the end.
 

HTC

Joined
Apr 1, 2008
Messages
4,661 (0.77/day)
Location
Portugal
System Name HTC's System
Processor Ryzen 5 5800X3D
Motherboard Asrock Taichi X370
Cooling NH-C14, with the AM4 mounting kit
Memory G.Skill Kit 16GB DDR4 F4 - 3200 C16D - 16 GTZB
Video Card(s) Sapphire Pulse 6600 8 GB
Storage 1 Samsung NVMe 960 EVO 250 GB + 1 3.5" Seagate IronWolf Pro 6TB 7200RPM 256MB SATA III
Display(s) LG 27UD58
Case Fractal Design Define R6 USB-C
Audio Device(s) Onboard
Power Supply Corsair TX 850M 80+ Gold
Mouse Razer Deathadder Elite
Software Ubuntu 20.04.6 LTS
This round of cards (in this case, RTX cards) simply doesn't have enough "horsepower" to fully utilize RT: perhaps they can achieve this with the next gen of cards? Dunno, really.

What nVidia should have done was make a stronger card for non-RT workloads and leave RT to a dedicated add-on card, which could also feature much stronger RT capabilities. There would be several pros / cons with this approach:

Pros

- stronger non RT performance
- much smaller chips without the RT area
- possibly cheaper than current RTX cards
- stronger RT performance (????) due to a dedicated card with a bigger RT core (than that of current RTX cards)
- possibility of using RT @ greater than 1080p with 60+ FPS (minimums)

Cons

- necessity of a dedicated RT add-on card
- won't sell as much as a card that also has RT, like RTX cards do
- add-on card likely won't work with AMD GPU(s) present in the system
- much more expensive than current RTX cards due to being essentially two cards instead of one

Perhaps other pros / cons that don't occur to me atm.
 
Joined
May 2, 2017
Messages
7,762 (2.85/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
This round of cards (in this case, RTX cards) simply doesn't have enough "horsepower" to fully utilize RT: perhaps they can achieve this with the next gen of cards? Dunno, really.

What nVidia should have done was make a stronger card for non-RT workloads and leave RT to a dedicated add-on card, which could also feature much stronger RT capabilities. There would be several pros / cons with this approach:

Pros

- stronger non RT performance
- much smaller chips without the RT area
- possibly cheaper than current RTX cards
- stronger RT performance (????) due to a dedicated card with a bigger RT core (than that of current RTX cards)
- possibility of using RT @ greater than 1080p with 60+ FPS (minimums)

Cons

- necessity of a dedicated RT add-on card
- won't sell as much as a card that also has RT, like RTX cards do
- add-on card likely won't work with AMD GPU(s) present in the system
- much more expensive than current RTX cards due to being essentially two cards instead of one

Perhaps other pros / cons that don't occur to me atm.
I agree that that's how it should have gone, the problem is that nobody would have bought an expensive add-on card that didn't do anything at all in current games, and no developers would waste resources developing features with a 0-person install base for required hardware. One solution would have been bundling the two together, though that would increase the cost and exclude ITX builds, and you'd risk that people simply didn't install the RT card.
 

HTC

Joined
Apr 1, 2008
Messages
4,661 (0.77/day)
Location
Portugal
System Name HTC's System
Processor Ryzen 5 5800X3D
Motherboard Asrock Taichi X370
Cooling NH-C14, with the AM4 mounting kit
Memory G.Skill Kit 16GB DDR4 F4 - 3200 C16D - 16 GTZB
Video Card(s) Sapphire Pulse 6600 8 GB
Storage 1 Samsung NVMe 960 EVO 250 GB + 1 3.5" Seagate IronWolf Pro 6TB 7200RPM 256MB SATA III
Display(s) LG 27UD58
Case Fractal Design Define R6 USB-C
Audio Device(s) Onboard
Power Supply Corsair TX 850M 80+ Gold
Mouse Razer Deathadder Elite
Software Ubuntu 20.04.6 LTS
I agree that that's how it should have gone, the problem is that nobody would have bought an expensive add-on card that didn't do anything at all in current games, and no developers would waste resources developing features with a 0-person install base for required hardware. One solution would have been bundling the two together, though that would increase the cost and exclude ITX builds, and you'd risk that people simply didn't install the RT card.

Agreed, to some extent: these cards would sell a heck of a lot less than nVidia would like, but they'd still sell:

- enable RT at far lower detail on existing cards, including 1080-generation cards (think 1/3 or less of current low RT, so it's achievable even without RT cores): this would give developers the "excuse" to have the technology available despite the lack of dedicated hardware (if you can call current RTX cards that)
- wait for games that have RT enabled and then launch the add-on card
- showcase the superior performance / quality with the add-on card

Played right, I think it would sell quite a bit.

It could sell a lot more if nVidia allowed it to be used with an AMD GPU, but I seriously doubt they'd do that.
 
Joined
May 2, 2017
Messages
7,762 (2.85/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Agreed, to some extent: these cards would sell a heck of a lot less than nVidia would like, but they'd still sell:

- enable RT at far lower detail on existing cards, including 1080-generation cards (think 1/3 or less of current low RT, so it's achievable even without RT cores): this would give developers the "excuse" to have the technology available despite the lack of dedicated hardware (if you can call current RTX cards that)
- wait for games that have RT enabled and then launch the add-on card
- showcase the superior performance / quality with the add-on card

Played right, I think it would sell quite a bit.

It could sell a lot more if nVidia allowed it to be used with an AMD GPU, but I seriously doubt they'd do that.
Such an add-on card would likely benefit quite a lot from NVLink (syncing up RT rendering with what the GPU is doing, possibly using the GPU's VRAM), so I doubt it would work with older cards or anything from AMD. Of course, they could make two SKUs, one with NVLink and one with old-style SLI, or even one working over PCIe - that could at least serve as an argument for people to upgrade to newer GPUs in time.
 

HTC

Joined
Apr 1, 2008
Messages
4,661 (0.77/day)
Location
Portugal
System Name HTC's System
Processor Ryzen 5 5800X3D
Motherboard Asrock Taichi X370
Cooling NH-C14, with the AM4 mounting kit
Memory G.Skill Kit 16GB DDR4 F4 - 3200 C16D - 16 GTZB
Video Card(s) Sapphire Pulse 6600 8 GB
Storage 1 Samsung NVMe 960 EVO 250 GB + 1 3.5" Seagate IronWolf Pro 6TB 7200RPM 256MB SATA III
Display(s) LG 27UD58
Case Fractal Design Define R6 USB-C
Audio Device(s) Onboard
Power Supply Corsair TX 850M 80+ Gold
Mouse Razer Deathadder Elite
Software Ubuntu 20.04.6 LTS
Such an add-on card would likely benefit quite a lot from NVLink (syncing up RT rendering with what the GPU is doing, possibly using the GPU's VRAM), so I doubt it would work with older cards or anything from AMD. Of course, they could make two SKUs, one with NVLink and one with old-style SLI, or even one working over PCIe - that could at least serve as an argument for people to upgrade to newer GPUs in time.

If that was the case, then they'd have a legitimate excuse for not allowing an AMD GPU present in the system, but that would also restrict 1080-generation cards from using it, so dunno if that is a good plan, unless it's not doable otherwise due to lack of bandwidth or something like that.

Imagine if this could be used even with a 1080: how many users already have this card, or even the 1080 Ti? All of a sudden, the user base could grow substantially, so long as the card is reasonably priced.

For it to work, the add-on card must NOT help with anything other than RT capabilities, meaning it wouldn't boost non-RT frames, for example, or it would eat into 2080-generation card adoption.
 

W1zzard

Administrator
Staff member
Joined
May 14, 2004
Messages
27,624 (3.70/day)
Processor Ryzen 7 5700X
Memory 48 GB
Video Card(s) RTX 4080
Storage 2x HDD RAID 1, 3x M.2 NVMe
Display(s) 30" 2560x1600 + 19" 1280x1024
Software Windows 10 64-bit
It's not "Ultra Settings" when texture filtering is set to low...
That's just a mistake in the settings screenshot, I clicked around too quickly. Congrats for being the first to notice :)
 
Joined
Dec 31, 2009
Messages
19,371 (3.58/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
That's just a mistake in the settings screenshot, I clicked around too quickly. Congrats for being the first to notice :)
Can you kindly take the time to share how this was tested? Like, what scene? Apologies for chasing you around in threads, but I've asked a couple of times over the past several days without a response - sorry if I missed one. :)

@W1zzard
 

hat

Enthusiast
Joined
Nov 20, 2006
Messages
21,745 (3.32/day)
Location
Ohio
System Name Starlifter :: Dragonfly
Processor i7 2600k 4.4GHz :: i5 10400
Motherboard ASUS P8P67 Pro :: ASUS Prime H570-Plus
Cooling Cryorig M9 :: Stock
Memory 4x4GB DDR3 2133 :: 2x8GB DDR4 2400
Video Card(s) PNY GTX1070 :: Integrated UHD 630
Storage Crucial MX500 1TB, 2x1TB Seagate RAID 0 :: Mushkin Enhanced 60GB SSD, 3x4TB Seagate HDD RAID5
Display(s) Onn 165hz 1080p :: Acer 1080p
Case Antec SOHO 1030B :: Old White Full Tower
Audio Device(s) Creative X-Fi Titanium Fatal1ty Pro - Bose Companion 2 Series III :: None
Power Supply FSP Hydro GE 550w :: EVGA Supernova 550
Software Windows 10 Pro - Plex Server on Dragonfly
Benchmark Scores >9000
This is an absolute nightmare!! :(:(
I'm hoping it's all a mistake and those extra RTX cores that I paid for (double what the card is worth) are somehow not turned on. I expected a performance gain; now it's come to light that these new features cause a performance loss.
COMPLETELY UNACCEPTABLE!
No wonder they didn't show real game performance at launch.
I can't believe Nvidia did this to us.
I mean, I'm not here to defend RTX or anything... but literally everyone has been telling everyone it was gonna run like shit since day 1. Even nVidia's own marketing couldn't pretty it up very much.
 

ksdomino

New Member
Joined
Nov 16, 2018
Messages
6 (0.00/day)
Agreed, to some extent: these cards would sell a heck of a lot less than nVidia would like, but they'd still sell:
Not for $1,300+ they wouldn't.
Nvidia needed a way to raise prices (to over double that of the previous generation). Top-end laptop prices have doubled over the last two years, top-end phone prices have doubled too, even CPUs (the Intel 7700K cost $300 on release; the new 9900K costs $600). Nvidia felt they were missing out, so they used "real-time raytracing" to double their prices. Consumers be boned.
 
Joined
Dec 31, 2009
Messages
19,371 (3.58/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
The 7700K, with 4c/8t, is half the cost of an 8c/16t CPU... makes sense to me in that light. Also, MSRP to MSRP it's $305 and $488* (where $488 is the 1K tray price; I'd expect MSRP is a bit over $500).

While pricing has gone up with 'flagship' pieces, one shouldn't ignore the differences between the two parts either, like having double the cores/threads - which is worth it to those who can use more than 4c/8t (more people than one would think).
 
Last edited:

ksdomino

New Member
Joined
Nov 16, 2018
Messages
6 (0.00/day)
The 7700K, with 4c/8t, is half the cost of an 8c/16t CPU... makes sense to me in that light. Also, MSRP to MSRP it's $305 and $549.

While pricing has gone up with 'flagship' pieces, one shouldn't ignore the differences between the two parts either, like having double the cores/threads - which is worth it to those who can use more than 4c/8t (more people than one would think).

Couldn't disagree more, for two reasons:
1. MSRP is a myth. Items never sell for MSRP.
2. Inflation.
Inflation means that, on average, you would have to spend 2.70% more money in 2018 than in 2017 for the same item. In other words, $100 in 2017 is equivalent in purchasing power to $102.70 in 2018.
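The arithmetic, as a trivial sketch (using the 2.70% figure above):

[CODE]
#include <cstdio>

int main() {
    const double rate = 0.027;       // 2017 -> 2018 inflation, as cited
    const double price2017 = 100.0;
    const double price2018 = price2017 * (1.0 + rate);
    std::printf("$%.2f in 2017 ~= $%.2f in 2018\n", price2017, price2018);
    // Prints: $100.00 in 2017 ~= $102.70 in 2018 -- nowhere near the
    // roughly 2x generational price jumps being defended here.
    return 0;
}
[/CODE]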

Technological advances (however limited or hindered by greedy companies) have occurred since the beginning of time. "Cores", "hyperthreading", "real-time raytracing", [insert next big marketing phrase here] = at best a real-world increase of 10-15% in performance. There is no way I'll accept that as justification for a doubling of prices. Personally, I can't believe you're defending these practices, but to each their own; you're as entitled to an opinion as everyone else.
 
Joined
May 2, 2017
Messages
7,762 (2.85/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
7700K, with 4c/8t is half the cost of a 8c/16t CPU... makes sense to me in that light...Also, MSRP to MSRP is $305 and $488*. (where $488 is 1K tray price, I'd expect a bit over $500 is MSRP).

While pricing has gone up with 'flagship' pieces, should't ignore the differences between the two parts either like having double the cores/threads isn't worth it (to those who can use more than 4c/8t - which is more than one would think).
It used to work like this: technology made progress, and things got better at the same price, or cheaper for the same performance. This is mostly logical: normally, R&D costs are large but stable-ish (they tend to rise, but not by 2x in one generation); fab costs decrease drastically per transistor with node changes (and more as the node matures); and other costs are negligible. This generation has some anomalies: no new process, so similar cost per transistor as 7th gen. Twice the cores, so twice the transistors for that. No new arch, so no more IPC, but also no real R&D cost compared to a node shrink or new arch. The size increase and lack of R&D ought to even out in terms of total cost. Yet instead, sales prices have increased dramatically. Something is off here, and it sure looks like Intel is padding its margins.
 
Joined
Dec 31, 2009
Messages
19,371 (3.58/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
One can't compare one CPU's MSRP with another's current pricing (7700K and 9900K) either. The 7700K was also found for $350-$400 after initial release, mind you ;)... a similar markup by % (that new-processor-not-in-stock price), in fact. These prices will come back down closer to MSRP as stock improves and the shiny new product luster wears off. :)

That said, everyone can justify, or not, the addition of new technology towards higher pricing. You don't have to accept it as justification, but you should at least consider the fact that it has double+ the processing power when all cores/threads are used. And like it or not, more cores/threads is the way the market is going (thank AMD for that one).

Now, I'm not happy about it either, don't misunderstand me, but the difference in performance between those two processors when using all the threads IS double.

It used to work like this: technology made progress, and things got better at the same price, or cheaper for the same performance. This is mostly logical: normally, R&D costs are large but stable-ish (they tend to rise, but not by 2x in one generation); fab costs decrease drastically per transistor with node changes (and more as the node matures); and other costs are negligible. This generation has some anomalies: no new process, so similar cost per transistor as 7th gen. Twice the cores, so twice the transistors for that. No new arch, so no more IPC, but also no real R&D cost compared to a node shrink or new arch. The size increase and lack of R&D ought to even out in terms of total cost. Yet instead, sales prices have increased dramatically. Something is off here, and it sure looks like Intel is padding its margins.
It's a twist on 14nm++. There were some changes here. There are R&D costs. These new cores aren't just 'glued' onto the processor - and they work. So while it clearly isn't the same as a whole new arch, there are several factors involved.
 
Last edited:
Joined
May 2, 2017
Messages
7,762 (2.85/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
One can't compare one CPU's MSRP with another's current pricing (7700K and 9900K). The 7700K was also found for $350+ after initial release, mind you ;)... a similar markup by %, in fact.

That said, everyone can justify, or not, the addition of new technology towards higher pricing. You don't have to accept it as justification, but you should at least consider the fact that it has double+ the processing power when all cores/threads are used. And like it or not, more cores/threads is the way the market is going (thank AMD for that one).

Now, I'm not happy about it either, don't misunderstand me, but the difference in performance between those two processors when using all the threads IS double.
So, going by that logic and working backwards, the fastest single-core CPUs before dual cores arrived should have cost around $63? 'Cause that's where you end up if you think the price should follow the number of cores, and divide the 9900K's ~$500 by 8 (disregarding IPC entirely, of course).

In other words: this is not how things work, this has never been how things work, and shouldn't be how things work.
 

ksdomino

New Member
Joined
Nov 16, 2018
Messages
6 (0.00/day)
Now, I'm not happy about it either, don't misunderstand me, but the difference in performance between those two processors when using all the threads IS double.
I'm glad you brought that up, since it allows me to point out that we seem to have gone a bit off the topic of RTX, and to state that the doubling of Nvidia's pricing for the RTX 2080 Ti is completely unjustified. I thought Intel was greedy; Nvidia is the worst. The government says my wages have to go up by a minimum of 2.7% (the inflation rate for 2018), but these companies are charging me double (because there is no competition in the market). That should not be legal. It is in fact illegal to "price fix" ( https://www.classlawgroup.com/antitrust/unlawful-practices/price-fixing/ ), but they get away with it somehow.
 
Last edited: