
Tom's Hardware Editor-in-Chief's stance on RTX 20 Series: JUST BUY IT

I wonder what prebuilts are going to sell for.

Have you ever signed an NDA before buying a pre-built PC? The way it's meant to be paid! :cool:
 
Well that doesn’t spell very good news for the 2070 then! In fact, in addition to the horrible increase in prices, we now have decreased price-to-performance value for the 2070 compared to the 1070 and most previous generations.


The thermals implied by the quoted power figures suggest the 2080 Ti is already squeezing almost every ounce of performance the chip has at its maximum power specification.

I bet this is Nvidia's process node not being ready, much like AMD had with Hawaii and Vega: low yields and more power consumption than expected.
 
Exactly...and they occupy different places in the families.


You can sort however you like, but it’s not how the company that makes them sorts them, so, yeah....THAT is why no reviewer or Nvidia or any AIB partner will say or market the 2080 as the 1080Ti’s replacement.

Basically, if you only look at price as far as replacement decisions go, you’re gonna price yourself down and out of playing pc games in just a few generations. You’ll be buying a 5030 and wondering why the only games you can play maxed out are ones more than 4 years old. I would hate to see that happen to anyone. :)
When people buy such expensive items they do have to look at prices. This generation's cards are much more expensive than last gen's.

The price of a 2080 is the price of a 1080 Ti, and not a lot of people bought the 1080 Ti because they didn't have the budget for it. This generation's prices are over 30% higher, and I don't think inflation increased 30% in only 2 years. So the 1080 Ti is the new 2080, and the 1080 is the 2070, price-wise.
I will give myself as an example: my wallet can afford 30 2080 Tis, but my eyes have a budget of a 2060, whenever it comes out. The price difference between the 2080 Ti and 2080 is only a few hundred dollars, but most people don't have those few hundred.
 
Financial transactions, database lookups, out of order execution with branch dependencies.

So like 70% of real-world use in business. For example, I built new machines for a workplace based on the fact that, at peak times, saving 15 minutes per machine per work day resulted in one extra customer transaction. They paid for themselves in a few days of peak production, plus employees saved time and could go home sooner.

Banks all still batch-process at night, and sometimes, due to account locks and sheer volume, batch activity may be spread over multiple days after a weekend or holiday. Every second of machine time gained can translate into hundreds, thousands, or even millions of dollars the bank either saves or doesn't have to borrow from the Federal Reserve to cover transactions, and borrowing costs interest the bank has to pay.
I'm not saying 15% is "a big deal". It isn't most of the time. But if it is possible and without much fuss or sacrifices, then why not?
BTW: by ignoring those 15% you're basically undermining the point of overclocking, which isn't exactly in line with this community. But don't worry - I'm with you on this one! :-D Overclocking is the least cost-effective way of improving performance and, hence, fairly pointless - apart from the top CPUs available, obviously. :)

I'm doing a lot of stuff that can't be split between cores. It's mostly stemming from the way I work or the tasks I'm given. That's why I care.

But you're doing it as well, sometimes unconsciously. Browsing the web is a basic example. 10 years ago it was limited by our internet connections. Today you're not waiting for the data, but for the rendering engine. :)

What are "production tasks"?
Editing? Depends on what and how you're editing. :) Quite a few popular photo algorithms are serial (sequential) and utilize just 1 thread. This is why programs like Photoshop struggle to utilize more than 3-4 threads during non-batch operations.
Principal component analysis (e.g. used for face recognition) is iterative as well - people are making very decent money and careers by finding ways to make it faster on multi-thread PCs. :)
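For illustration, here is a minimal Python sketch of power iteration, one common way to get the leading principal component (the function name and the toy covariance matrix are made up for this example). Each step needs the vector from the previous step, so the loop itself is inherently sequential; only the matrix-vector product inside a step can be spread across cores.

    import random

    def power_iteration(matrix, steps=100):
        """Leading eigenvector via power iteration. Each step needs the vector
        from the previous step, so the loop is inherently sequential."""
        n = len(matrix)
        v = [random.random() for _ in range(n)]
        for _ in range(steps):
            # The matrix-vector product below is the only part that could be
            # parallelised; the iterations themselves must run one after another.
            w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
            norm = sum(x * x for x in w) ** 0.5
            v = [x / norm for x in w]
        return v

    if __name__ == "__main__":
        cov = [[2.0, 0.5], [0.5, 1.0]]  # toy covariance matrix
        print(power_iteration(cov))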

Nope. A parallel algorithm is one that can be run on many elements independently with no impact on the result. For example summing vectors is perfectly parallel. Monte Carlo simulations are great as well, since all runs are independent by definition.
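As a minimal Python sketch of that (using the standard multiprocessing module; estimate_pi and the sample counts are just placeholders for any independent Monte Carlo run):

    import random
    from multiprocessing import Pool

    def estimate_pi(n_samples):
        """One independent Monte Carlo run: estimate pi by random sampling."""
        hits = 0
        for _ in range(n_samples):
            x, y = random.random(), random.random()
            if x * x + y * y <= 1.0:
                hits += 1
        return 4.0 * hits / n_samples

    if __name__ == "__main__":
        # Each run is independent, so the runs can execute on separate cores
        # with no impact on the result - the definition of a parallel problem.
        with Pool(processes=4) as pool:
            estimates = pool.map(estimate_pi, [1_000_000] * 4)
        print(sum(estimates) / len(estimates))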

But many problems aren't that easy. We modify them, make compromises and sacrifice a bit of precision to make them work on HPC.
Example: training neural networks (easily one of the most important problems of our times) is sequential by definition. :) You can't run it on many cores.
So we partition the data, run training on each partition independently and then average the results. It isn't equivalent to running it properly.
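Roughly, that compromise looks like the toy Python sketch below (the one-parameter model and helper names are invented for illustration, not taken from any real framework): train on each shard independently, average the parameters, and accept that the result differs from true sequential training.

    import random

    def sgd_train(data, epochs=5, lr=0.01):
        """Toy sequential training: fit y = w*x with plain SGD on one shard."""
        w = 0.0
        for _ in range(epochs):
            for x, y in data:
                w -= lr * (w * x - y) * x  # each step depends on the previous one
        return w

    def data_parallel_train(data, workers=4):
        """The compromise: split the data, train each shard independently
        (could be on separate cores or machines), then average the results."""
        shards = [data[i::workers] for i in range(workers)]
        weights = [sgd_train(shard) for shard in shards]  # independent -> parallelisable
        return sum(weights) / len(weights)                # not equivalent to sequential SGD

    if __name__ == "__main__":
        xs = [random.random() for _ in range(1000)]
        data = [(x, 3.0 * x + random.gauss(0, 0.1)) for x in xs]
        print(sgd_train(data), data_parallel_train(data))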

And forced parallelisation isn't just affecting the results. It's very problematic both theoretically and practically. What I mean is: in case of some algorithms parallelisation requires more advanced math than the algorithm itself...

"Here's something that can't be parallelised"

Ok, with you so far.

"And here's how we parallelise it regardless, while accepting a small drawback"

Does this not render your post somewhat contradicting?


I'd understand if, by nature, these things literally did not function in any practical sense if parallelised, but what you're describing is more cores being available and those cores being utilised for practical benefit rather than fitting into an inefficient paradigm for comparatively smaller reasons. Which is exactly what I'm saying will happen more and more as mainstream core counts grow.

As for the internet browser argument - With only 2 tabs open right now, chrome is currently using 9 processes. You can argue that these are not "run on many elements independently with no impact on the result" since some of those processes will rely on the results of others, but even so, it is splitting the work between cores for the sake of doing the work more efficiently.

I think you're relying on a perfect definition of parallel here when it doesn't constitute the whole breadth of why core count increases benefit the consumer.
 
And it's true. Stop pulling words out of context.

And I can't understand why people can't believe that RTX 2000 will be faster than GTX 1000 in non-ray-tracing games. Pascal was Maxwell on 16nm and was faster; Turing is a brand new architecture.

Also noticed another Tom's Hardware article "Why You Shouldn’t Buy Nvidia’s RTX 20-Series Graphics Cards (Yet)". Looks like they just wanted to troll and succeeded.

Paxwell is Volta and Volta is Turing. The only difference is the addition of scam cores. Clocks didn't even change this time lol

Oh, they're not trolling. Editor in Chief/Toms just got paid. I missed an update
 
Paxwell is Volta and Volta is Turing. The only difference is the addition of scam cores. Clocks didn't even change this time lol

Oh, they're not trolling. Editor in Chief/Toms just got paid. I missed an update

That's my take on this as well. On all counts. As predicted...
 
The price difference between the 2080 Ti and 2080 is only a few hundred dollars, but most people don't have those few hundred.
You’re confusing what can be afforded with what is a replacement. Just because it is all YOU (or I as well) can afford doesn’t make it the replacement. It makes it YOUR replacement. Keep doing that, and like I said, you’ll be down to a xx30 in a few gens.

But next level down almost always costs more than the previous gen occupying a slot one level higher. We don’t have to like it, but that’s how it is.
 
You’re confusing what can be afforded with what is a replacement. Just because it is all YOU (or I as well) can afford doesn’t make it the replacement. It makes it YOUR replacement. Keep doing that, and like I said, you’ll be down to a xx30 in a few gens.

But next level down almost always costs more than the previous gen occupying a slot one level higher. We don’t have to like it, but that’s how it is.
You didn't read the entire post. I know I said that you and I can afford it, but most other people can't afford that huge price increase, adjusted for inflation, so they have to get something lower priced.
It could be that the GPU die size this gen is huge, or that Nvidia has no competition this gen, or other factors. It might be that Nvidia likes to keep a certain profit margin, and the only way to keep that margin was by raising prices that high.
 
You didn't read the entire post. I know I said that you and I can afford it, but most other people can't afford that huge price increase, adjusted for inflation, so they have to get something lower priced.
It could be that the GPU die size this gen is huge, or that Nvidia has no competition this gen, or other factors. It might be that Nvidia likes to keep a certain profit margin, and the only way to keep that margin was by raising prices that high.
Fair enough on those points. I only used you and me as a way of making it less impersonal, so that more people felt included. :)
 
You’re confusing what can be afforded with what is a replacement. Just because it is all YOU (or I as well) can afford doesn’t make it the replacement. It makes it YOUR replacement. Keep doing that, and like I said, you’ll be down to a xx30 in a few gens.

But next level down almost always costs more than the previous gen occupying a slot one level higher. We don’t have to like it, but that’s how it is.

I don't see anyone spending that kind of money on a low end card. Anyone who tries to sell me a low end xx30 card for 1080 prices, or even 1060 prices, is either crazy or from 200 years in the future (inflation). People might still buy these expensive cards, but a lot of people won't. Everyone has a line that can be crossed. The more they push it, the more lines they're crossing. For those of us who don't like it, there's always other options... AMD, second hand markets... in fact, a thought occurs:

I wonder how many people who would not have bought a used mining card, now would buy one, given these 2xxx series prices? There's a lot of people that really hate mining, but there's also a lot of people who really hate getting jerked around by nVidia. I wonder where their lines are.
 
I don't see anyone spending that kind of money on a low end card. Anyone who tries to sell me a low end xx30 card for 1080 prices, or even 1060 prices, is either crazy or from 200 years in the future (inflation). People might still buy these expensive cards, but a lot of people won't. Everyone has a line that can be crossed. The more they push it, the more lines they're crossing. For those of us who don't like it, there's always other options... AMD, second hand markets... in fact, a thought occurs:

I wonder how many people who would not have bought a used mining card, now would buy one, given these 2xxx series prices? There's a lot of people that really hate mining, but there's also a lot of people who really hate getting jerked around by nVidia. I wonder where their lines are.

You couldn't pay me to buy a mining card. I specifically waited for Vega prices to drop a bit and find a deal I could live with.
 
"And here's how we parallelise it regardless, while accepting a small drawback"

Does this not render your post somewhat contradicting?
Sorry, maybe I messed that explanation up a bit.

In short:
At the lowest level of any program are basic processor instructions, which only use 1 thread. That's obvious.
We call an algorithm serial if it runs on one thread, i.e. each instruction sent to the CPU needs the previous one to be completed.
We call an algorithm parallel if it can be run as many independent serial parts.

Basic example: adding vectors. You can add elements by position at the same time, so it is a parallel program (and the reason why GPUs have thousands of cores).
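A quick Python sketch of that, using the standard concurrent.futures module (chunk sizes and worker counts are arbitrary): each output element depends only on the matching input elements, so the chunks can be computed in any order on any core.

    from concurrent.futures import ProcessPoolExecutor

    def add_chunk(chunk):
        """Add two equally sized slices element by element."""
        a, b = chunk
        return [x + y for x, y in zip(a, b)]

    def parallel_add(a, b, workers=4):
        # result[i] depends only on a[i] and b[i], so the work can be cut into
        # independent chunks and handed to separate cores in any order.
        size = len(a) // workers + 1
        chunks = [(a[i:i + size], b[i:i + size]) for i in range(0, len(a), size)]
        with ProcessPoolExecutor(max_workers=workers) as pool:
            parts = pool.map(add_chunk, chunks)
        return [x for part in parts for x in part]

    if __name__ == "__main__":
        a, b = list(range(100_000)), list(range(100_000))
        print(parallel_add(a, b)[:5])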

The other thing I wrote about is forcing "parallelism" (I should have used the quotation marks before), i.e. making a program use as many cores as it can - even if the underlying problem is serial by definition. This is also happening, but it has some drawbacks. It's very expensive, it complicates the code and it often impacts the result (it introduces an additional error).
It would be better if we could make computers with just a few powerful cores. But we can't for now, so we're doing the next best thing: making many weaker cores and utilizing them as much as we can.

Keep in mind this will change in the future.
A quantum computer is basically a single-core machine - just very fast for a particular class of problems.
But more traditional superconducting computers are also being developed. These will have cores and instructions much like x86, but they will run at 300 GHz (and more) instead of 3 GHz.
Drawback: of course, superconductivity requires near-zero temperatures...
This means you'll eventually be able to replace an HPC with hundreds of CPUs that looks like an enormous fridge... with an actual enormous fridge holding just a few cores. :)

As for the internet browser argument - With only 2 tabs open right now, chrome is currently using 9 processes.
Yes! It's divided into tasks that can be run separately. But it won't keep 9 processors busy, because these tasks are tiny. Some of them are doing simple things, like generating the browser's GUI or checking for updates.
HTML parsing in general and JavaScript are single-threaded.

You can check for yourself how the number of Chrome threads depends on the number of active tabs. :)
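If you want to try it, here is a tiny Python sketch along those lines, assuming the third-party psutil package is installed (pip install psutil); matching process names against "chrome" is a simplification.

    import psutil  # third-party package, assumed installed

    # Count Chrome's processes and threads, then open or close tabs and rerun.
    chrome = [p for p in psutil.process_iter(['name'])
              if p.info['name'] and 'chrome' in p.info['name'].lower()]
    threads = 0
    for p in chrome:
        try:
            threads += p.num_threads()
        except psutil.NoSuchProcess:
            pass  # the process exited between listing and querying it
    print('processes:', len(chrome))
    print('threads:  ', threads)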

I think you're relying on a perfect definition of parallel here when it doesn't constitute the whole breadth of why core count increases benefit the consumer.
But it is important to understand why computing is basically single-threaded, why programs utilize 2-4 cores instead of everything you have, and why making them utilize 16 is either impossible (usually) or very, very difficult.
A lot of people here think that games don't go past 4 cores because of Intel's conspiracy. :)
 
Well, if you are a game dev, and the vast majority of your target audience is using machines with <4 cores, are you going to design a game to run on 16? Now that would be just silly. I don't know what they're doing now to make it work, but I've seen plenty examples of poorly threaded games. Supreme Commander 2 will still slow to a crawl, even with an overclocked 8700k... because all the AI shit (tracking many thousands of units) happens on one thread. Plenty of other games had really weak threading, where one main thread still did all the heavy lifting, but another thread would handle the audio or something (I think Crysis was like that). Stalker, one of my favorite games, had dual core support hacked in with a patch after release. Today, though, the later Battlefield games are a pretty good example of games that seem to be threaded pretty well. They see improvements beyond 4 cores... but it's also not uncommon for gamers to have >4 core chips. The PS4 has an 8 core chip, and so does the xBone. Know what would be really silly? Developing a game console with a bunch of cores nobody will ever use...
 
Well, if you are a game dev, and the vast majority of your target audience is using machines with <4 cores, are you going to design a game to run on 16? Now that would be just silly. I don't know what they're doing now to make it work, but I've seen plenty examples of poorly threaded games. Supreme Commander 2 will still slow to a crawl, even with an overclocked 8700k... because all the AI shit (tracking many thousands of units) happens on one thread. Plenty of other games had really weak threading, where one main thread still did all the heavy lifting, but another thread would handle the audio or something (I think Crysis was like that). Stalker, one of my favorite games, had dual core support hacked in with a patch after release. Today, though, the later Battlefield games are a pretty good example of games that seem to be threaded pretty well. They see improvements beyond 4 cores... but it's also not uncommon for gamers to have >4 core chips. The PS4 has an 8 core chip, and so does the xBone. Know what would be really silly? Developing a game console with a bunch of cores nobody will ever use...

The consoles should have been a good sign to move in that direction. I guess not :\
 
Admittedly, I remember the consoles being a little weird when it comes to that. I remember reading something about how the whole 8 cores aren't running the game; there's some rigmarole involved with some of the cores being set aside for the OS and/or other functions that aren't the game. But still, it's not like every other process dies except the game when you start one on your quad-core computer, so that's kind of a moot point anyway.
 
Well, if you are a game dev, and the vast majority of your target audience is using machines with <4 cores, are you going to design a game to run on 16?

This is such a common misconception. You don't develop software to run on 2, 4, or 16 cores. You simply add multi-threading that scales better or worse with an increasing number of cores.

Know what would be really silly? Developing a game console with a bunch of cores nobody will ever use...

They are heavily used though; otherwise they wouldn't be able to get anything to run at an acceptable speed.
 
Well, if you are a game dev, and the vast majority of your target audience is using machines with <4 cores, are you going to design a game to run on 16?
This is what I'm talking about all the time. If a problem isn't parallel (it doesn't run on as many cores as possible just like that), you'll have to struggle to utilize more cores.
And of course you're right! Since 4 cores were dominating the market, game developers stopped there. Why would they spend more money and time on optimizing for CPUs that don't exist?
Important question: did you really feel limited by those 4 cores?
Now you want games to utilize 8 or 16 threads, but what would a game do with all that grunt? Graphics complexity goes up, but computation-wise games don't evolve that much, because there isn't much potential. Even today, the majority of CPU work during gaming is just operating the GPU. :-)
because all the AI shit (tracking many thousands of units) happens on one thread.
Game AI is a great example of a sequential problem. :-)
Know what would be really silly? Developing a game console with a bunch of cores nobody will ever use...
Which is true. Two cores on both the Xbox One and PS4 are reserved for the console itself. Limited access to the 7th core was added at some point. There's also some headroom for split-screen gaming.
This is such a common misconception. You don't develop software to run on 2, 4, or 16 cores. You simply add multi-threading that scales better or worse with an increasing number of cores.
The opposite. Seriously. :-)
 
This is such a common misconception. You don't develop software to run on 2, 4, or 16 cores. You simply add multi-threading that scales better or worse with an increasing number of cores.

If you could simply scale processing like that, there wouldn't be thousands of posts on this site alone about "game X is multithreaded, but it sucks". There's some fancy trickery involved that they're just recently (within the last handful of years) starting to get right.

This is what I'm talking about all the time. If a problem isn't parallel (it doesn't run on as many cores as possible just like that), you'll have to struggle to utilize more cores.
And of course you're right! Since 4 cores were dominating the market, game developers stopped there. Why would they spend more money and time on optimizing for CPUs that don't exist?
Important question: did you really feel limited by those 4 cores?

As far as gaming goes, not by the cores, but by per thread performance. Two games (SUPCOM2 and 7 Days to Die) need a lot of CPU grunt to run. I could likely double (or close to it) my CPU performance by upgrading to an overclocked 8350k.

Now once in a while, when I do DVDrips (especially if they're interlaced) I'm definitely limited by 4 cores. Such a task leaves me wanting something like Threadripper... but even if it encodes at 10fps, it'll get done eventually, and realistically I'd still choose a 9600k or 9700k for superior per-thread performance (mostly for gaming).
 

Because a good AI is responsive and takes into account player activity. You cannot do that in parallel; it is conditional. The best you can do with many threads is predict every possible outcome beforehand.

As for multi-threading in games in a broad sense, the real problem never really gets fixed even with good multithreading, and that is the real-time element of gaming. The weakest link determines performance (FPS) and will create idle time on everything else, rendering your many threads rather useless. This is why single-thread performance will always remain the primary factor, and it's why clocks always matter. And it explains why you can see lots of activity on your 6c/12t CPU but it won't get you higher FPS.
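That "weakest link" effect is essentially Amdahl's law. A tiny Python sketch with a made-up serial fraction shows how quickly the ceiling is reached:

    def amdahl_speedup(serial_fraction, cores):
        """Theoretical speedup when `serial_fraction` of each frame's work
        cannot be split across cores (Amdahl's law)."""
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

    # Hypothetical numbers: 40% of each frame is stuck on one main thread.
    for cores in (2, 4, 6, 8, 16):
        print(cores, 'cores ->', round(amdahl_speedup(0.4, cores), 2), 'x')
    # The output flattens out quickly: more busy threads, barely more FPS.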
 
If you could simply scale processing like that, there wouldn't be thousands of posts on this site alone about "game X is multithreaded, but it sucks".

Yes, some things can be easily scaled just like that, but that doesn't mean it will be effective. Most multithreading methods and APIs abstract away the hardware they run on and leave the task of scheduling and core affinity to the OS. That being said, you can write plenty of software that would offer a more or less linear increase in speed on an ideal machine. However, we don't have ideal machines, so you can easily run into bottlenecks such as a lack of memory bandwidth, even though, algorithmically speaking, the software you wrote can scale indefinitely.

Example: a simple window function over a huge matrix, like a filter of sorts. Say the matrix has N elements; then you can launch N threads, and that should technically speed up the computation by N times. If you were to gradually increase the number of threads from 1 to a very large N, you would notice that at some point the speed-up gain drops off massively. That's because a computation like that is extremely memory-dependent: you need to read and write a lot of data, and there is only so much memory bandwidth available on a machine. So there you go, an example of an easily scalable process where you can still run into an "it's multithreaded and it still sucks" scenario.
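A rough Python sketch of that scenario (assuming NumPy is installed; matrix size, filter width and worker counts are arbitrary): the row bands are independent, so on paper the filter scales with the thread count, yet the timings flatten out once memory bandwidth is saturated.

    import numpy as np  # third-party package, assumed installed
    from concurrent.futures import ThreadPoolExecutor
    from time import perf_counter

    def filter_rows(args):
        """3-tap horizontal mean filter over a band of rows - trivially parallel."""
        m, lo, hi = args
        band = m[lo:hi]
        return (band[:, :-2] + band[:, 1:-1] + band[:, 2:]) / 3.0

    def run(m, workers):
        step = m.shape[0] // workers
        jobs = [(m, i, i + step) for i in range(0, m.shape[0], step)]
        with ThreadPoolExecutor(max_workers=workers) as pool:
            list(pool.map(filter_rows, jobs))

    if __name__ == "__main__":
        m = np.random.rand(8000, 8000)  # ~512 MB: the work is mostly memory traffic
        for workers in (1, 2, 4, 8, 16):
            t = perf_counter()
            run(m, workers)
            # Algorithmically this scales with `workers`, but past some point the
            # threads mostly queue on memory bandwidth and the timings level off.
            print(workers, 'threads:', round(perf_counter() - t, 3), 's')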
 
Because for it to be parallel, you would have to be able to write a function getNextStep(agent) and run it on all "intelligent" subjects. But that's not true because of interactions - like the obvious collision detection. And there is an issue of real time, so ideally you'd want to execute batches, not individual procedures randomly...
But of course people try to utilize more cores with their AI engines and it is possible. Just don't expect miracles.
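A toy Python sketch of why a naive parallel map over getNextStep(agent) breaks down (the grid world, field names and movement rule here are invented for the example): each agent's move changes the occupancy that the next agent's decision depends on, so the updates have to be applied in order, or batched carefully.

    def get_next_step(agent, occupied):
        """Move one tile toward the agent's target, refusing occupied tiles."""
        x, y = agent['pos']
        tx, ty = agent['target']
        step = (x + (tx > x) - (tx < x), y + (ty > y) - (ty < y))
        return step if step not in occupied else (x, y)

    def update_agents(agents):
        # Sequential by nature: each move changes `occupied`, which the next
        # agent's decision depends on. A naive parallel map over the agents
        # could let two of them claim the same tile in the same tick.
        occupied = {a['pos'] for a in agents}
        for a in agents:
            occupied.discard(a['pos'])
            a['pos'] = get_next_step(a, occupied)
            occupied.add(a['pos'])

    if __name__ == "__main__":
        agents = [{'pos': (i, 0), 'target': (5, 5)} for i in range(4)]
        update_agents(agents)
        print([a['pos'] for a in agents])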

Another thing is what @Vayra86 mentioned: there's a human playing as well - it's an interactive program. This means you can't really simulate in advance if the player has many possible moves.
And of course there's a new performance issue, because you have to react to players' actions as quickly as possible, while all the "multi-threading" effort introduces a lot of lag (on top of what many-core CPUs have already...).
 
Just wanted to post this here. Video starts with screenshots at 3:37

 
There aren't even games worth playing; game releases have been stagnating since 2015, quality never rises above "meh...", and it's even lower in the - literally only one or two per year - titles that would use this technology. So, when there are no games worth playing, or those games are the same as ever, how dumb must one be to justify such an expense?

Then, the price itself is a testament to stagnation. Pascal is refitted and shrunk Maxwell, nothing else (unless you're a "cool" VR-head). And Pascal is obviously still sold as "current gen" even with the "new gen" out at the same time, with its price anchored to it. How stupid is that. So the new generation will always rise in price, whereas the old generation is still essentially state of the art, with very minor changes to justify the new-generation tag (while not really committing to it). One has to be really, really stupid not to see that this is just marketing bullshit, partly predatorily exploiting the mining situation - which has also been responsible for the stagnation. This is all a mess. But unfortunately, people do indeed get stupider, even if hardware and games stagnate...

A new generation is supposed to replace the old generation, because it is not good enough anymore. That means the price should be roughly the same, at least within a certain adjustment period, because the value is essentially the same, with the value of the old generation lowered. What Nvidia has been doing is essentially building its business for the last 4 years or so on the equivalent of Trumptards who will defend any big-pants dick-move for their ego's sake (and because they can't think two moves ahead or backwards). This is all self-serving, without purpose (except for Nvidia's pockets). Well, one can observe it from a distance and get a few years older - probably without any change to any of it. Which is clearly not good, but it is still better than throwing money away for this (lack of difference) without thinking.

There is no possible discussion that takes it seriously, so I'm basically meandering around the external factors of it (which is all this has become about).
 
Because for it to be parallel, you would have to be able to write a function getNextStep(agent) and run it on all "intelligent" subjects. But that's not true because of interactions - like the obvious collision detection. And there is an issue of real time, so ideally you'd want to execute batches, not individual procedures randomly...
But of course people try to utilize more cores with their AI engines and it is possible. Just don't expect miracles.

Another thing is what @Vayra86 mentioned: there's a human playing as well - it's an interactive program. This means you can't really simulate in advance if the player has many possible moves.
And of course there's a new performance issue, because you have to react to players' actions as quickly as possible, while all the "multi-threading" effort introduces a lot of lag (on top of what many-core CPUs have already...).
You are clearly bright.

But on the whole you continue to argue against progress and change (multithreading). It's probably got some way to go, yes, but where would four cores get us in ten years? Nowhere.

Change takes time but we are on the core wars path.

Maybe a paradigm shift in coding is required to mitigate the narrow-minded issues you're creating, but I think we will get there.

Case in point: you talk of AI as it is done now. No one likes or wants that shit; we the users and gamers want real, non-scripted, thinking AI, neural nets and all, and that's not gonna run on a single core (x86).

Hardware and software need to progress, not stagnate.
 