
Tom's Hardware Editor-in-Chief's stance on RTX 20 Series: JUST BUY IT

Joined
Feb 14, 2012
Messages
2,355 (0.50/day)
System Name msdos
Processor 8086
Motherboard mainboard
Cooling passive
Memory 640KB + 384KB extended
Video Card(s) EGA
Storage 5.25"
Display(s) 80x25
Case plastic
Audio Device(s) modchip
Power Supply 45 watts
Mouse serial
Keyboard yes
Software disk commander
Benchmark Scores still running
I wonder what prebuilts are going to sell for.

Have you ever signed an NDA before buying a pre-built PC? The way it's meant to be paid! :cool:
 
Joined
Nov 4, 2005
Messages
11,984 (1.72/day)
System Name Compy 386
Processor 7800X3D
Motherboard Asus
Cooling Air for now.....
Memory 64 GB DDR5 6400Mhz
Video Card(s) 7900XTX 310 Merc
Storage Samsung 990 2TB, 2 SP 2TB SSDs, 24TB Enterprise drives
Display(s) 55" Samsung 4K HDR
Audio Device(s) ATI HDMI
Mouse Logitech MX518
Keyboard Razer
Software A lot.
Benchmark Scores Its fast. Enough.
Well, that doesn’t spell very good news for the 2070 then! In fact, on top of the horrible increase in prices, we now have worse price-to-performance for the 2070 compared to the 1070 and most previous generations.


The thermals implied by the quoted power figures suggest the 2080 Ti is already squeezing almost every ounce of performance the chip has at its maximum power specification.

I bet this comes down to Nvidia's process node not being ready, much like AMD's situation with Hawaii and Vega: low yields and more power consumption than expected.
 
Joined
Feb 18, 2012
Messages
2,715 (0.58/day)
System Name MSI GP76
Processor intel i7 11800h
Cooling 2 laptop fans
Memory 32gb of 3000mhz DDR4
Video Card(s) Nvidia 3070
Storage x2 PNY 8tb cs2130 m.2 SSD--16tb of space
Display(s) 17.3" IPS 1920x1080 240Hz
Power Supply 280w laptop power supply
Mouse Logitech m705
Keyboard laptop keyboard
Software lots of movies and Windows 10 with win 7 shell
Benchmark Scores Good enough for me
Exactly...and they occupy different places in the families.


You can sort however you like, but it’s not how the company that makes them sorts them, so, yeah....THAT is why no reviewer or Nvidia or any AIB partner will say or market the 2080 as the 1080Ti’s replacement.

Basically, if you only look at price as far as replacement decisions go, you’re gonna price yourself down and out of playing pc games in just a few generations. You’ll be buying a 5030 and wondering why the only games you can play maxed out are your games older than 4 years. I would hate to see that happen to anyone. :)
When people buy such expensive items they do have to look at prices. This generation's cards are much more expensive than last gen's.

The price of a 2080 is what a 1080 Ti cost, and not a lot of people bought the 1080 Ti because they didn't have the budget for it. This generation's prices are over 30% higher, and I don't think inflation rose 30% in only two years. So the 1080 Ti is the new 2080 and the 1080 is the new 2070, price-wise.
I'll give myself as an example: my wallet can afford thirty 2080 Tis, but my eyes have the budget of a 2060, whenever it comes out. The price gap between the 2080 Ti and the 2080 is only a few hundred dollars, but most people don't have those few hundred.
 
Joined
Oct 5, 2017
Messages
595 (0.23/day)
Financial transactions, database lookups, out of order execution with branch dependencies.

So, like 70% of real-world business use. For example, I built new machines for a workspace on the basis that 15 minutes saved per machine per work day at peak times resulted in one extra customer transaction; they paid for themselves within a few days of peak production, plus employees saved time and could go home sooner.

Banks all still batch process at night, and sometimes, due to account locks and sheer volume, batch activity may be spread over multiple days after a weekend or holiday. Every second of machine time gained can mean hundreds, thousands, or even millions of dollars the bank either saves or spends borrowing funds from the Federal Reserve to cover transactions, and that borrowing costs the bank interest.
I'm not saying 15% is "a big deal". It isn't, most of the time. But if it's possible without much fuss or sacrifice, then why not?
BTW: by ignoring those 15% you're basically undermining the point of overclocking, which isn't exactly in line with this community. But don't worry - I'm with you on this one! :-D Overclocking is the least cost-effective way of improving performance and, hence, fairly pointless - apart from the top CPUs available, obviously. :)

I'm doing a lot of stuff that can't be split between cores. It's mostly stemming from the way I work or the tasks I'm given. That's why I care.

But you're doing it as well, sometimes unconsciously. Browsing the web is a basic example. Ten years ago it was limited by our internet connections. Today you're not waiting for the data, but for the rendering engine. :)

What are "production tasks"?
Editing? Depends on what and how you're editing. :) Quite a few popular photo algorithms are serial (sequential) and utilize just one thread. This is why programs like Photoshop struggle to utilize more than 3-4 threads during non-batch operations.
Principal component analysis (e.g. used for face recognition) is iterative as well - people are making very decent money and building careers by finding ways to make it faster on multi-threaded PCs. :)
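
To see why "iterative" resists parallelism, here's a minimal power-iteration sketch in Python (my own toy example, not taken from any particular library): each pass needs the vector produced by the previous pass, so the outer loop cannot be split across cores, even though the matrix-vector product inside a single pass can be.

Code:
import numpy as np

def power_iteration(A, steps=100):
    # Find the dominant principal direction of the covariance matrix A.
    # Each iteration needs the vector produced by the previous one,
    # so this loop is strictly sequential...
    v = np.random.rand(A.shape[0])
    for _ in range(steps):
        v = A @ v                   # ...although the matrix-vector product
        v /= np.linalg.norm(v)      # inside one step can be parallelised.
    return v

X = np.random.rand(1000, 50)        # toy data: 1000 samples, 50 features
cov = np.cov(X, rowvar=False)       # 50x50 covariance matrix
print(power_iteration(cov)[:5])     # first principal direction (truncated)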

Nope. A parallel algorithm is one that can be run on many elements independently with no impact on the result. For example summing vectors is perfectly parallel. Monte Carlo simulations are great as well, since all runs are independent by definition.
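
A quick toy sketch of that (my own example, nothing authoritative): Monte Carlo runs are completely independent, so farming them out to a process pool and averaging the per-run estimates gives the same answer as one big run.

Code:
import random
from multiprocessing import Pool

def one_run(samples):
    # One independent Monte Carlo run: estimate pi from random points
    # in the unit square.
    hits = sum(1 for _ in range(samples)
               if random.random() ** 2 + random.random() ** 2 <= 1.0)
    return 4.0 * hits / samples

if __name__ == "__main__":
    with Pool() as pool:                               # one worker per core by default
        estimates = pool.map(one_run, [200_000] * 8)   # 8 fully independent runs
    # Averaging equal-sized independent runs is equivalent to one big run.
    print(sum(estimates) / len(estimates))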

But many problems aren't that easy. We modify them, make compromises and sacrifice a bit of precision to make them work on HPC.
Example: training neural networks (easily one of the most important problems of our times) is sequential by definition. :) You can't run it on many cores.
So we partition the data, run training on each sample independently and then average the results. It isn't equivalent to running it properly.
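
A tiny sketch of that compromise (a toy linear model of my own, purely illustrative): train the "proper" sequential way, then train on four shards independently and average the weights - the two answers end up close, but not identical.

Code:
import numpy as np

def sgd(X, y, epochs=5, lr=0.01):
    # Plain sequential SGD: every update depends on the previous weights.
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            w -= lr * (xi @ w - yi) * xi
    return w

rng = np.random.default_rng(0)
X, true_w = rng.normal(size=(4000, 10)), rng.normal(size=10)
y = X @ true_w + rng.normal(scale=0.1, size=4000)

full = sgd(X, y)                                    # the "proper" sequential run
shards = [sgd(X[i::4], y[i::4]) for i in range(4)]  # 4 shards, could run on 4 cores
averaged = np.mean(shards, axis=0)                  # combine by averaging
print(np.linalg.norm(full - averaged))              # small, but not zero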

And forced parallelisation doesn't just affect the results. It's problematic both theoretically and practically. What I mean is: for some algorithms, parallelisation requires more advanced math than the algorithm itself...

"Here's something that can't be parallelised"

Ok, with you so far.

"And here's how we parallelise it regardless, while accepting a small drawback"

Does this not render your post somewhat contradictory?


I'd understand if, by nature, these things literally did not function in any practical sense when parallelised, but what you're describing is more cores being available and those cores being utilised for practical benefit rather than fitting into an inefficient paradigm for comparatively smaller reasons. Which is exactly what I'm saying will happen more and more as mainstream core counts grow.

As for the internet browser argument - with only 2 tabs open right now, Chrome is currently using 9 processes. You can argue that these are not "run on many elements independently with no impact on the result", since some of those processes rely on the results of others, but even so, it is splitting the work between cores for the sake of doing the work more efficiently.

I think you're relying on a strict definition of parallel here, when it doesn't cover the whole breadth of why core-count increases benefit the consumer.
 
Joined
Sep 15, 2007
Messages
3,946 (0.63/day)
Location
Police/Nanny State of America
Processor OCed 5800X3D
Motherboard Asucks C6H
Cooling Air
Memory 32GB
Video Card(s) OCed 6800XT
Storage NVMees
Display(s) 32" Dull curved 1440
Case Freebie glass idk
Audio Device(s) Sennheiser
Power Supply Don't even remember
And it's true. Stop pulling words out of context.

And I can't understand why people can't believe that RTX 2000 will be faster than GTX 1000 in non-ray-tracing games. Pascal was Maxwell on 16 nm and was faster; Turing is a brand-new architecture.

Also noticed another Tom's Hardware article "Why You Shouldn’t Buy Nvidia’s RTX 20-Series Graphics Cards (Yet)". Looks like they just wanted to troll and succeeded.

Paxwell is Volta and Volta is Turing. The only difference is the addition of scam cores. Clocks didn't even change this time lol

Oh, they're not trolling. Editor in Chief/Toms just got paid. I missed an update
 
Joined
Sep 17, 2014
Messages
22,458 (6.03/day)
Location
The Washing Machine
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling Thermalright Peerless Assassin
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
Paxwell is Volta and Volta is Turing. The only difference is the addition of scam cores. Clocks didn't even change this time lol

Oh, they're not trolling. Editor in Chief/Toms just got paid. I missed an update

That's my take on this as well. On all counts. As predicted...
 

rtwjunkie

PC Gaming Enthusiast
Supporter
Joined
Jul 25, 2008
Messages
13,995 (2.34/day)
Location
Louisiana
Processor Core i9-9900k
Motherboard ASRock Z390 Phantom Gaming 6
Cooling All air: 2x140mm Fractal exhaust; 3x 140mm Cougar Intake; Enermax ETS-T50 Black CPU cooler
Memory 32GB (2x16) Mushkin Redline DDR-4 3200
Video Card(s) ASUS RTX 4070 Ti Super OC 16GB
Storage 1x 1TB MX500 (OS); 2x 6TB WD Black; 1x 2TB MX500; 1x 1TB BX500 SSD; 1x 6TB WD Blue storage (eSATA)
Display(s) Infievo 27" 165Hz @ 2560 x 1440
Case Fractal Design Define R4 Black -windowed
Audio Device(s) Soundblaster Z
Power Supply Seasonic Focus GX-1000 Gold
Mouse Coolermaster Sentinel III (large palm grip!)
Keyboard Logitech G610 Orion mechanical (Cherry Brown switches)
Software Windows 10 Pro 64-bit (Start10 & Fences 3.0 installed)
The price between the 2080ti and 2080 is only a few hundred dollars but most people dont have those few hundred.
You’re confusing what can be afforded with what is a replacement. Just because it is all YOU (or I as well) can afford doesn’t make it the replacement. It makes it YOUR replacement. Keep doing that, and like I said, you’ll be down to a xx30 in a few gens.

But the next level down almost always costs more than the previous-gen card that occupied the slot one level higher. We don’t have to like it, but that’s how it is.
 
Joined
Feb 18, 2012
Messages
2,715 (0.58/day)
System Name MSI GP76
Processor intel i7 11800h
Cooling 2 laptop fans
Memory 32gb of 3000mhz DDR4
Video Card(s) Nvidia 3070
Storage x2 PNY 8tb cs2130 m.2 SSD--16tb of space
Display(s) 17.3" IPS 1920x1080 240Hz
Power Supply 280w laptop power supply
Mouse Logitech m705
Keyboard laptop keyboard
Software lots of movies and Windows 10 with win 7 shell
Benchmark Scores Good enough for me
You’re confusing what can be afforded with what is a replacement. Just because it is all YOU (or I as well) can afford doesn’t make it the replacement. It makes it YOUR replacement. Keep doing that, and like I said, you’ll be down to a xx30 in a few gens.

But the next level down almost always costs more than the previous-gen card that occupied the slot one level higher. We don’t have to like it, but that’s how it is.
You didn't read the entire post. I know I said that you and I can afford it, but most other people can't afford that huge price increase, adjusted for inflation, so they have to get something lower priced.
It could be that the GPU die size this gen is huge, that Nvidia has no competition this gen, or any number of other factors. It might be that Nvidia wants to keep a certain profit margin, and the only way to keep that margin was to raise prices that high.
 

rtwjunkie

PC Gaming Enthusiast
Supporter
Joined
Jul 25, 2008
Messages
13,995 (2.34/day)
Location
Louisiana
Processor Core i9-9900k
Motherboard ASRock Z390 Phantom Gaming 6
Cooling All air: 2x140mm Fractal exhaust; 3x 140mm Cougar Intake; Enermax ETS-T50 Black CPU cooler
Memory 32GB (2x16) Mushkin Redline DDR-4 3200
Video Card(s) ASUS RTX 4070 Ti Super OC 16GB
Storage 1x 1TB MX500 (OS); 2x 6TB WD Black; 1x 2TB MX500; 1x 1TB BX500 SSD; 1x 6TB WD Blue storage (eSATA)
Display(s) Infievo 27" 165Hz @ 2560 x 1440
Case Fractal Design Define R4 Black -windowed
Audio Device(s) Soundblaster Z
Power Supply Seasonic Focus GX-1000 Gold
Mouse Coolermaster Sentinel III (large palm grip!)
Keyboard Logitech G610 Orion mechanical (Cherry Brown switches)
Software Windows 10 Pro 64-bit (Start10 & Fences 3.0 installed)
You didn't read the entire post. I know I said that you and I can afford it, but most other people can't afford that huge price increase, adjusted for inflation, so they have to get something lower priced.
It could be that the GPU die size this gen is huge, that Nvidia has no competition this gen, or any number of other factors. It might be that Nvidia wants to keep a certain profit margin, and the only way to keep that margin was to raise prices that high.
Fair enough on those points. I only used you and I as a way of making it less impersonal, so more people felt included. :)
 

hat

Enthusiast
Joined
Nov 20, 2006
Messages
21,745 (3.30/day)
Location
Ohio
System Name Starlifter :: Dragonfly
Processor i7 2600k 4.4GHz :: i5 10400
Motherboard ASUS P8P67 Pro :: ASUS Prime H570-Plus
Cooling Cryorig M9 :: Stock
Memory 4x4GB DDR3 2133 :: 2x8GB DDR4 2400
Video Card(s) PNY GTX1070 :: Integrated UHD 630
Storage Crucial MX500 1TB, 2x1TB Seagate RAID 0 :: Mushkin Enhanced 60GB SSD, 3x4TB Seagate HDD RAID5
Display(s) Onn 165hz 1080p :: Acer 1080p
Case Antec SOHO 1030B :: Old White Full Tower
Audio Device(s) Creative X-Fi Titanium Fatal1ty Pro - Bose Companion 2 Series III :: None
Power Supply FSP Hydro GE 550w :: EVGA Supernova 550
Software Windows 10 Pro - Plex Server on Dragonfly
Benchmark Scores >9000
You’re confusing what can be afforded with what is a replacement. Just because it is all YOU (or I as well) can afford doesn’t make it the replacement. It makes it YOUR replacement. Keep doing that, and like I said, you’ll be down to a xx30 in a few gens.

But the next level down almost always costs more than the previous-gen card that occupied the slot one level higher. We don’t have to like it, but that’s how it is.

I don't see anyone spending that kind of money on a low-end card. Anyone who tries to sell me a low-end xx30 card for 1080 prices, or even 1060 prices, is either crazy or from 200 years in the future (inflation). People might still buy these expensive cards, but a lot of people won't. Everyone has a line that can be crossed. The more they push it, the more lines they're crossing. For those of us who don't like it, there are always other options... AMD, second-hand markets... in fact, a thought occurs:

I wonder how many people who would not have bought a used mining card would now buy one, given these 2xxx series prices? There are a lot of people who really hate mining, but there are also a lot of people who really hate getting jerked around by nVidia. I wonder where their lines are.
 
Joined
Sep 7, 2017
Messages
3,244 (1.23/day)
System Name Grunt
Processor Ryzen 5800x
Motherboard Gigabyte x570 Gaming X
Cooling Noctua NH-U12A
Memory Corsair LPX 3600 4x8GB
Video Card(s) Gigabyte 6800 XT (reference)
Storage Samsung 980 Pro 2TB
Display(s) Samsung CFG70, Samsung NU8000 TV
Case Corsair C70
Power Supply Corsair HX750
Software Win 10 Pro
I don't see anyone spending that kind of money on a low-end card. Anyone who tries to sell me a low-end xx30 card for 1080 prices, or even 1060 prices, is either crazy or from 200 years in the future (inflation). People might still buy these expensive cards, but a lot of people won't. Everyone has a line that can be crossed. The more they push it, the more lines they're crossing. For those of us who don't like it, there are always other options... AMD, second-hand markets... in fact, a thought occurs:

I wonder how many people who would not have bought a used mining card would now buy one, given these 2xxx series prices? There are a lot of people who really hate mining, but there are also a lot of people who really hate getting jerked around by nVidia. I wonder where their lines are.

You couldn't pay me to buy a mining card. I specifically waited for Vega prices to drop a bit and find a deal I could live with.
 
Joined
Jun 28, 2016
Messages
3,595 (1.17/day)
"And here's how we parallelise it regardless, while accepting a small drawback"

Does this not render your post somewhat contradicting?
Sorry, maybe I messed that explanation up a bit.

In short:
At the lowest level of any program are basic processor instructions, which run on a single thread. That's obvious.
We call an algorithm serial if it has to run on one thread, i.e. each instruction sent to the CPU needs the previous one to be completed.
We call an algorithm parallel if it can be run as many independent serial parts.

Basic example: adding vectors. You can add the elements at each position at the same time, so it is a parallel problem (and the reason why GPUs have thousands of cores).
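
A rough sketch of my own to show what "independent" means here (note that in CPython the GIL means you'd need processes or NumPy for a real speedup - the point is only that the chunks have no ordering dependency):

Code:
from concurrent.futures import ThreadPoolExecutor

a = list(range(1_000_000))
b = list(range(1_000_000))

def add_chunk(lo, hi):
    # Each chunk only touches its own slice, so chunks are fully independent.
    return [a[i] + b[i] for i in range(lo, hi)]

# Split the work into 4 pieces; the result is identical no matter how many
# pieces you use or in which order they finish - that is what makes it parallel.
los = [0, 250_000, 500_000, 750_000]
his = [250_000, 500_000, 750_000, 1_000_000]
with ThreadPoolExecutor(max_workers=4) as pool:
    parts = pool.map(add_chunk, los, his)
c = [x for part in parts for x in part]
print(c[0], c[123_456], c[-1])   # 0 246912 1999998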

The other thing I wrote about is forcing "parallelism" (I should have used the quotation marks earlier), i.e. making a program use as many cores as it can even if the underlying problem is serial by definition. This is also happening, but it has drawbacks: it's expensive, it complicates the code, and it often affects the result (it introduces additional error).
It would be better if we could make computers with just a few powerful cores. But we can't for now, so we're doing the next best thing: making many weaker cores and utilizing them as much as we can.

Keep in mind this will change in the future.
A quantum computer is basically a single-core machine - just very fast for a particular class of problems.
But more traditional superconducting computers are also being developed. These will have cores and instructions much like x86, but they will run at 300 GHz (and more) instead of 3 GHz.
Drawback: superconductivity, of course, requires temperatures near absolute zero...
This means you'll be able to replace an HPC with hundreds of CPUs that looks like an enormous fridge... with an actual enormous fridge containing just a few cores. :)

As for the internet browser argument - With only 2 tabs open right now, chrome is currently using 9 processes.
Yes! It's divided into tasks that can be run separately. But it won't keep 9 processors busy, because those tasks are tiny. Some of them do simple things, like drawing the browser's GUI or checking for updates.
HTML parsing in general and JavaScript are single-threaded.

You can check for yourself how the number of Chrome threads depends on the number of active tabs. :)

I think you're relying on a perfect definition of parallel here when it doesn't constitute the whole breadth of why corecount increases benefit the consumer.
But it is important to understand why computing is basically single-threaded at its core, why programs utilize 2-4 cores instead of everything you have, and why making them utilize 16 is either impossible (usually) or very, very difficult.
A lot of people here think that games don't go past 4 cores because of an Intel conspiracy. :)
 

hat

Enthusiast
Joined
Nov 20, 2006
Messages
21,745 (3.30/day)
Location
Ohio
System Name Starlifter :: Dragonfly
Processor i7 2600k 4.4GHz :: i5 10400
Motherboard ASUS P8P67 Pro :: ASUS Prime H570-Plus
Cooling Cryorig M9 :: Stock
Memory 4x4GB DDR3 2133 :: 2x8GB DDR4 2400
Video Card(s) PNY GTX1070 :: Integrated UHD 630
Storage Crucial MX500 1TB, 2x1TB Seagate RAID 0 :: Mushkin Enhanced 60GB SSD, 3x4TB Seagate HDD RAID5
Display(s) Onn 165hz 1080p :: Acer 1080p
Case Antec SOHO 1030B :: Old White Full Tower
Audio Device(s) Creative X-Fi Titanium Fatal1ty Pro - Bose Companion 2 Series III :: None
Power Supply FSP Hydro GE 550w :: EVGA Supernova 550
Software Windows 10 Pro - Plex Server on Dragonfly
Benchmark Scores >9000
Well, if you are a game dev, and the vast majority of your target audience is using machines with <4 cores, are you going to design a game to run on 16? Now that would be just silly. I don't know what they're doing now to make it work, but I've seen plenty of examples of poorly threaded games. Supreme Commander 2 will still slow to a crawl, even with an overclocked 8700k... because all the AI shit (tracking many thousands of units) happens on one thread. Plenty of other games had really weak threading, where one main thread still did all the heavy lifting, but another thread would handle the audio or something (I think Crysis was like that). Stalker, one of my favorite games, had dual-core support hacked in with a patch after release. Today, though, the later Battlefield games are a pretty good example of games that seem to be threaded pretty well. They see improvements beyond 4 cores... but it's also not uncommon for gamers to have >4 core chips. The PS4 has an 8-core chip, and so does the xBone. Know what would be really silly? Developing a game console with a bunch of cores nobody will ever use...
 
Joined
Sep 7, 2017
Messages
3,244 (1.23/day)
System Name Grunt
Processor Ryzen 5800x
Motherboard Gigabyte x570 Gaming X
Cooling Noctua NH-U12A
Memory Corsair LPX 3600 4x8GB
Video Card(s) Gigabyte 6800 XT (reference)
Storage Samsung 980 Pro 2TB
Display(s) Samsung CFG70, Samsung NU8000 TV
Case Corsair C70
Power Supply Corsair HX750
Software Win 10 Pro
Well, if you are a game dev, and the vast majority of your target audience is using machines with <4 cores, are you going to design a game to run on 16? Now that would be just silly. I don't know what they're doing now to make it work, but I've seen plenty of examples of poorly threaded games. Supreme Commander 2 will still slow to a crawl, even with an overclocked 8700k... because all the AI shit (tracking many thousands of units) happens on one thread. Plenty of other games had really weak threading, where one main thread still did all the heavy lifting, but another thread would handle the audio or something (I think Crysis was like that). Stalker, one of my favorite games, had dual-core support hacked in with a patch after release. Today, though, the later Battlefield games are a pretty good example of games that seem to be threaded pretty well. They see improvements beyond 4 cores... but it's also not uncommon for gamers to have >4 core chips. The PS4 has an 8-core chip, and so does the xBone. Know what would be really silly? Developing a game console with a bunch of cores nobody will ever use...

The consoles should have been a good sign to move in that direction. I guess not :\
 

hat

Enthusiast
Joined
Nov 20, 2006
Messages
21,745 (3.30/day)
Location
Ohio
System Name Starlifter :: Dragonfly
Processor i7 2600k 4.4GHz :: i5 10400
Motherboard ASUS P8P67 Pro :: ASUS Prime H570-Plus
Cooling Cryorig M9 :: Stock
Memory 4x4GB DDR3 2133 :: 2x8GB DDR4 2400
Video Card(s) PNY GTX1070 :: Integrated UHD 630
Storage Crucial MX500 1TB, 2x1TB Seagate RAID 0 :: Mushkin Enhanced 60GB SSD, 3x4TB Seagate HDD RAID5
Display(s) Onn 165hz 1080p :: Acer 1080p
Case Antec SOHO 1030B :: Old White Full Tower
Audio Device(s) Creative X-Fi Titanium Fatal1ty Pro - Bose Companion 2 Series III :: None
Power Supply FSP Hydro GE 550w :: EVGA Supernova 550
Software Windows 10 Pro - Plex Server on Dragonfly
Benchmark Scores >9000
Admittedly, I remember the consoles are a little weird when it comes to that. I remember reading something about how not all 8 cores run the game; there's some rigmarole involved, with some core(s) set aside for the OS and/or other functions that aren't the game. But still, it's not like every other process dies except the game when you start one on your quad-core computer, so that's kind of a moot point anyway.
 
Joined
Jan 8, 2017
Messages
9,438 (3.28/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
Well, if you are a game dev, and the vast majority of your target audience is using machines with <4 cores, are you going to design a game to run on 16?

This is such a common misconception. You don't develop software to run on 2, 4, or 16 cores. You simply add multi-threading that scales better or worse with an increasing number of cores.
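
A rough sketch of what I mean (my own illustration, not any particular engine's code): the code below never mentions 2, 4 or 16 cores - it just cuts the work into many small chunks and hands them to a pool that sizes itself to whatever machine it lands on.

Code:
from concurrent.futures import ProcessPoolExecutor

def work(chunk):
    # Some self-contained unit of work (here: a dummy sum of squares).
    return sum(i * i for i in chunk)

def run(items, chunks=64):
    # Far more chunks than any realistic core count, so the pool
    # (one worker per CPU core by default) always has something to chew on.
    size = max(1, len(items) // chunks)
    pieces = [items[i:i + size] for i in range(0, len(items), size)]
    with ProcessPoolExecutor() as pool:
        return sum(pool.map(work, pieces))

if __name__ == "__main__":
    print(run(list(range(1_000_000))))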

Know what would be really silly? Developing a game console with a bunch of cores nobody will ever use...

They are heavily used, though; otherwise they wouldn't be able to get anything to run at an acceptable speed.
 
Joined
Jun 28, 2016
Messages
3,595 (1.17/day)
Well, if you are a game dev, and the vast majority of your target audience is using machines with <4 cores, are you going to design a game to run on 16?
This is what I'm talking about all the time. If a problem isn't parallel (it doesn't run on as many cores as possible just like that), you'll have to struggle to utilize more cores.
And of course you're right! Since 4 cores dominated the market, game developers stopped there. Why would they spend more money and time optimizing for CPUs their players don't own?
Important question: did you really feel limited by those 4 cores?
Now you want games to utilize 8 or 16 threads, but what would a game do with all that grunt? Graphics complexity goes up, but computation-wise games don't evolve that much, because there isn't much potential. Even today the majority of the CPU work during gaming is just feeding the GPU. :)
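
To put a number on "not much potential", here's a back-of-the-envelope Amdahl's law sketch (the 60% figure is made up for illustration, not a measurement):

Code:
def amdahl_speedup(parallel_fraction, cores):
    # Amdahl's law: the serial part of the work caps the total speedup.
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Hypothetical game loop where 60% of the CPU work parallelises cleanly:
for cores in (2, 4, 8, 16):
    print(cores, "cores ->", round(amdahl_speedup(0.60, cores), 2), "x")
# 2 -> 1.43x, 4 -> 1.82x, 8 -> 2.11x, 16 -> 2.29x
# Going from 4 to 16 cores buys roughly 25% - hence "not much potential".
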
because all the AI shit (tracking many thousands of units) happens on one thread.
Game AI is a great example of a sequential problem. :)
Know what would be really silly? Developing a game console with a bunch of cores nobody will ever use...
Which is true. Two cores on both the Xbox One and PS4 are reserved for the console OS itself; limited access to the 7th core was added at some point. There's also some headroom reserved for split-screen gaming.
This is such a common misconception. You don't develop software to run on 2, 4, or 16 cores. You simply add multi-threading that scales better or worse with an increasing number of cores.
The opposite. Seriously. :)
 

hat

Enthusiast
Joined
Nov 20, 2006
Messages
21,745 (3.30/day)
Location
Ohio
System Name Starlifter :: Dragonfly
Processor i7 2600k 4.4GHz :: i5 10400
Motherboard ASUS P8P67 Pro :: ASUS Prime H570-Plus
Cooling Cryorig M9 :: Stock
Memory 4x4GB DDR3 2133 :: 2x8GB DDR4 2400
Video Card(s) PNY GTX1070 :: Integrated UHD 630
Storage Crucial MX500 1TB, 2x1TB Seagate RAID 0 :: Mushkin Enhanced 60GB SSD, 3x4TB Seagate HDD RAID5
Display(s) Onn 165hz 1080p :: Acer 1080p
Case Antec SOHO 1030B :: Old White Full Tower
Audio Device(s) Creative X-Fi Titanium Fatal1ty Pro - Bose Companion 2 Series III :: None
Power Supply FSP Hydro GE 550w :: EVGA Supernova 550
Software Windows 10 Pro - Plex Server on Dragonfly
Benchmark Scores >9000
This is such a common misconception. You don't develop software to run on 2, 4, or 16 cores. You simply add multi-threading that scales better or worse with an increasing number of cores.

If you could simply scale processing like that, there wouldn't be thousands of posts on this site alone about "game X is multithreaded, but it sucks". There's some fancy trickery involved that developers have only recently (within the last handful of years) started to get right.

This is what I'm talking about all the time. If a problem isn't parallel (it doesn't run on as many cores as possible just like that), you'll have to struggle to utilize more cores.
And of course you're right! Since 4 cores dominated the market, game developers stopped there. Why would they spend more money and time optimizing for CPUs their players don't own?
Important question: did you really feel limited by those 4 cores?

As far as gaming goes, not by the cores, but by per thread performance. Two games (SUPCOM2 and 7 Days to Die) need a lot of CPU grunt to run. I could likely double (or close to it) my CPU performance by upgrading to an overclocked 8350k.

Now once in a while, when I do DVDrips (especially if they're interlaced) I'm definitely limited by 4 cores. Such a task leaves me wanting something like Threadripper... but even if it encodes at 10fps, it'll get done eventually, and realistically I'd still choose a 9600k or 9700k for superior per-thread performance (mostly for gaming).
 

Frick

Fishfaced Nincompoop
Joined
Feb 27, 2006
Messages
19,581 (2.86/day)
Location
Piteå
System Name White DJ in Detroit
Processor Ryzen 5 5600
Motherboard Asrock B450M-HDV
Cooling Be Quiet! Pure Rock 2
Memory 2 x 16GB Kingston Fury 3400mhz
Video Card(s) XFX 6950XT Speedster MERC 319
Storage Kingston A400 240GB | WD Black SN750 2TB |WD Blue 1TB x 2 | Toshiba P300 2TB | Seagate Expansion 8TB
Display(s) Samsung U32J590U 4K + BenQ GL2450HT 1080p
Case Fractal Design Define R4
Audio Device(s) Plantronics 5220, Nektar SE61 keyboard
Power Supply Corsair RM850x v3
Mouse Logitech G602
Keyboard Cherry MX Board 1.0 TKL Brown
Software Windows 10 Pro
Benchmark Scores Rimworld 4K ready!
Joined
Sep 17, 2014
Messages
22,458 (6.03/day)
Location
The Washing Machine
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling Thermalright Peerless Assassin
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000

Because a good AI is responsive and takes player activity into account. You cannot do that in parallel; it is conditional. The best you can do with many threads is predict every possible outcome beforehand.

As for multi-threading in games in a broad sense, the real problem never really gets fixed even with good multithreading, and that is the real-time element of gaming. The weakest link determines performance (FPS) and will create idle time on everything else, rendering your many threads rather useless. This is why single-threaded performance will always remain the primary factor, and it's why clocks always matter. And it explains why you can see lots of activity on your 6c/12t CPU but it won't get you higher FPS.
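
A crude sketch of that weakest-link effect, with made-up per-frame numbers (nothing measured, purely to show the shape of the problem):

Code:
# Hypothetical per-frame CPU work, in milliseconds:
serial_ms   = 6.0                   # input handling and game logic that must run in order
parallel_ms = [3.0, 2.5, 2.0, 1.5]  # jobs that can run side by side (audio, culling, ...)

def frame_time(worker_threads):
    # With enough threads the parallel jobs overlap, but the frame still can't
    # finish before the serial part plus the longest single job.
    if worker_threads >= len(parallel_ms):
        return serial_ms + max(parallel_ms)
    # With fewer threads, jobs queue up; rough greedy estimate.
    lanes = [0.0] * worker_threads
    for job in sorted(parallel_ms, reverse=True):
        lanes[lanes.index(min(lanes))] += job
    return serial_ms + max(lanes)

for threads in (1, 2, 4, 8, 12):
    print(threads, "threads ->", frame_time(threads), "ms")
# Prints 15.0, 10.5, then 9.0 ms for 4 threads and beyond: extra threads just idle.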
 
Joined
Jan 8, 2017
Messages
9,438 (3.28/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
If you could simply scale processing like that, there wouldn't be thousands of posts on this site alone about "game X is multithreaded, but it sucks".

Yes, some things can be scaled just like that, but that doesn't mean it will be effective. Most multithreading methods and APIs abstract away the hardware they run on and leave the task of scheduling and core affinity to the OS. That being said, you can write plenty of software that offers a more or less linear speed increase on an ideal machine. However, we don't have ideal machines, so you can easily run into bottlenecks such as a lack of memory bandwidth, even though, algorithmically speaking, the software you wrote can scale indefinitely.

Example: a simple window function over a huge matrix, like a filter of sorts. Say the matrix has N elements; you can then launch N threads, which should technically speed up the computation by a factor of N. But if you gradually increase the number of threads from 1 towards a very large N, you will notice that at some point the speedup gain drops massively. That's because a computation like that is extremely memory-dependent: you need to read and write a lot of data, and there is only so much memory bandwidth available on a machine. And so there you go, an example of an easily scalable process where you can still run into an "it's multithreaded and it still sucks" scenario.
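
A quick sketch of that effect (my own toy benchmark; the exact numbers will obviously depend on the machine): a trivial element-wise "filter" over a big array, chopped into independent slices, still stops scaling once RAM bandwidth becomes the limit. NumPy releases the GIL inside these operations, so the threads genuinely run side by side.

Code:
import time
import numpy as np
from concurrent.futures import ThreadPoolExecutor

data = np.random.rand(50_000_000)   # the "huge matrix", flattened
out = np.empty_like(data)

def filter_slice(lo, hi):
    # A trivial window-like filter: every output element depends only on its
    # own input element, so the slices are completely independent.
    np.multiply(data[lo:hi], 0.5, out=out[lo:hi])

def run(threads):
    edges = np.linspace(0, data.size, threads + 1, dtype=int)
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=threads) as pool:
        list(pool.map(filter_slice, edges[:-1], edges[1:]))
    return time.perf_counter() - start

for t in (1, 2, 4, 8, 16):
    print(t, "threads:", round(run(t), 3), "s")
# Typically you see decent gains for the first few threads, then the curve
# flattens: the cores end up waiting on RAM instead of doing arithmetic.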
 
Joined
Jun 28, 2016
Messages
3,595 (1.17/day)
Because for it to be parallel, you would have to be able to write a function getNextStep(agent) and run it on all "intelligent" subjects. But that's not possible because of interactions - like the obvious collision detection. And there is the issue of real time, so ideally you'd want to execute batches, not individual procedures at random...
But of course people try to utilize more cores with their AI engines and it is possible. Just don't expect miracles.

Another thing is what @Vayra86 mentioned: there's a human playing as well - it's an interactive program. This means you can't really simulate ahead of time, because the player has many possible moves.
And of course there's a new performance issue: you have to react to players' actions as quickly as possible, while all the "multi-threading" effort introduces a lot of lag (on top of what many-core CPUs have already...).
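
A toy sketch of the batching idea (getNextStep is the hypothetical function from above; none of this is real engine code): every agent decides against a frozen snapshot of the world, then all moves are applied at once. The decisions within one tick could be farmed out to threads, but tick N+1 still can't start before tick N has been applied - and that dependency is the serial part.

Code:
# Hypothetical world: three agents walking right along a corridor.
world = {"positions": {1: (0, 0), 2: (1, 0), 3: (2, 0)}}

def get_next_step(agent_id):
    # Each agent's decision reads the positions of every other agent
    # (a poor man's collision check), so the answer depends on whether
    # the others have already moved this tick or not.
    others = [p for a, p in world["positions"].items() if a != agent_id]
    x, y = world["positions"][agent_id]
    blocked = any(ox == x + 1 for ox, _ in others)
    return (x, y) if blocked else (x + 1, y)

def tick():
    # Decide for everyone against a frozen snapshot, then apply all moves at
    # once. Deterministic, and the per-agent decisions are independent within
    # the tick - but the next tick can't start until this one has been applied.
    decisions = {a: get_next_step(a) for a in world["positions"]}
    world["positions"].update(decisions)

for _ in range(3):
    tick()
print(world["positions"])   # {1: (1, 0), 2: (3, 0), 3: (5, 0)}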
 
Joined
Oct 19, 2007
Messages
8,259 (1.32/day)
Processor Intel i9 9900K @5GHz w/ Corsair H150i Pro CPU AiO w/Corsair HD120 RBG fan
Motherboard Asus Z390 Maximus XI Code
Cooling 6x120mm Corsair HD120 RBG fans
Memory Corsair Vengeance RBG 2x8GB 3600MHz
Video Card(s) Asus RTX 3080Ti STRIX OC
Storage Samsung 970 EVO Plus 500GB , 970 EVO 1TB, Samsung 850 EVO 1TB SSD, 10TB Synology DS1621+ RAID5
Display(s) Corsair Xeneon 32" 32UHD144 4K
Case Corsair 570x RBG Tempered Glass
Audio Device(s) Onboard / Corsair Virtuoso XT Wireless RGB
Power Supply Corsair HX850w Platinum Series
Mouse Logitech G604s
Keyboard Corsair K70 Rapidfire
Software Windows 11 x64 Professional
Benchmark Scores Firestrike - 23520 Heaven - 3670
Just wanted to post this here. Video starts with screenshots at 3:37

 
Joined
Aug 20, 2012
Messages
4 (0.00/day)
There aren't even games worth playing; game releases have been stagnating since 2015, quality never rises above "meh...", and it's even lower in the handful of titles per year - literally only one or two - that would actually use this technology. So, when there are no games worth playing, or those games are the same as ever, how dumb must one be to justify such an expense?

Then, the price is itself a testament to stagnation. Pascal is refitted, shrunk Maxwell, nothing else (unless you're a "cool" VR-head). And Pascal is obviously still sold as "current gen" even with the "new gen" out at the same time, with its price anchored to it. How stupid is that? So the new generation will always rise in price while the old generation is still essentially state of the art, with very minor changes to justify the new-generation tag (while not really committing to it). One has to be really, really stupid not to see that this is just marketing bullshit, partly predatorily exploiting the mining situation - which has also been responsible for the stagnation. This is all a mess. But unfortunately, people do indeed get stupider, even as hardware and games stagnate...

A new generation is supposed to replace the old generation, because the old one is not good enough anymore. That means the price should be roughly the same, at least after a certain adjustment period, because the value is essentially the same, with the value of the old generation lowered. What Nvidia has been doing for the last four years or so is essentially building its business on the equivalent of Trumptards who will defend any big-pants dick move for their ego's sake (and because they can't think two moves ahead or backwards). This is all self-serving, without purpose (except for Nvidia's pockets). Well, one can observe it from a distance and get a few years older - probably without any change to any of it. Which is clearly not good, but is still better than throwing money away for this (lack of) difference without thinking.

There is no possible discussion that takes it seriously, so I'm basically meandering around the external factors of it (which is all this has become about).
 
Joined
Mar 10, 2010
Messages
11,878 (2.21/day)
Location
Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R5 5900X/ Intel 8750H
Motherboard Crosshair hero8 impact/Asus
Cooling 360EK extreme rad+ 360$EK slim all push, cpu ek suprim Gpu full cover all EK
Memory Corsair Vengeance Rgb pro 3600cas14 16Gb in four sticks./16Gb/16GB
Video Card(s) Powercolour RX7900XT Reference/Rtx 2060
Storage Silicon power 2TB nvme/8Tb external/1Tb samsung Evo nvme 2Tb sata ssd/1Tb nvme
Display(s) Samsung UAE28"850R 4k freesync.dell shiter
Case Lianli 011 dynamic/strix scar2
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi/Asus stock
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Aimo 120
VR HMD Oculus rift
Software Win 10 Pro
Benchmark Scores 8726 vega 3dmark timespy/ laptop Timespy 6506
Because for it to be parallel, you would have to be able to write a function getNextStep(agent) and run it on all "intelligent" subjects. But that's not possible because of interactions - like the obvious collision detection. And there is the issue of real time, so ideally you'd want to execute batches, not individual procedures at random...
But of course people try to utilize more cores with their AI engines and it is possible. Just don't expect miracles.

Another thing is what @Vayra86 mentioned: there's a human playing as well - it's an interactive program. This means you can't really simulate ahead of time, because the player has many possible moves.
And of course there's a new performance issue: you have to react to players' actions as quickly as possible, while all the "multi-threading" effort introduces a lot of lag (on top of what many-core CPUs have already...).
You are clearly bright.

But on the whole you continue to argue against progress and change (multithreading). It's probably got some way to go, yes, but where would four cores get us in ten years? Nowhere.

Change takes time but we are on the core wars path.

Maybe a paradigm shift in coding is required to mitigate the narrow-minded issues you're creating, but I think we will get there.

Case in point: you talk of AI as it is done now. No one likes or wants that shit; we, the users and gamers, want real, non-scripted, thinking AI, neural nets and all, and that's not gonna run on a single core (x86).

Hardware and software need to progress, not stagnate.
 