
Tom's Hardware Editor-in-Chief's stance on RTX 20 Series: JUST BUY IT

I don't buy this "they were just clickbaiting" excuse. Bollocks, it's shilling; whether it's for something in return now or perhaps for something in the future is another matter.
 
Man, you should work in marketing at Nvidia; your claims are as vague and meaningless as theirs.



It was written by a different editor a day prior. You can spin it whichever way you want; it was a load of braindead bullshit.
More than that, the article Steve is hammering at directly insults that editor.
 
I divide tasks like that because they take different hardware to work efficiently. A computer with an i5 8400 and a GTX 1080 (may as well throw in the good old 16GB of RAM too) will do great at gaming, but not so well at anything that takes a lot of CPU grunt to run.
So you call everything else "productivity" and assume it needs many cores?
Because there are countless tasks (even multi-threaded ones) where this i5 8400 will beat a 2-socket Xeon server. Not to mention that for pure single-thread tasks the 8400 is faster than any AMD CPU available...
 
So you call everything else "productivity" and assume it needs many cores?
Because there are countless tasks (even multi-threaded ones) where this i5 8400 will beat a 2-socket Xeon server. Not to mention that for pure single-thread tasks the 8400 is faster than any AMD CPU available...
Honestly, I'm getting really tired of seeing this argument trotted out.

Can anyone in this thread name anything they do on a daily basis that meets both of the following criteria?

1 - Is an extremely demanding task where being executed 15% faster by one machine than another is a big deal
2 - Absolutely cannot, under any circumstances, be split between multiple cores to make it faster?

Gaming isn't that. It can be parallelised, and we're nowhere near the limits of how far that can go. Production tasks, rendering, editing, working in a DAW? Those are massively parallel already.

Almost *all* high performance computing, including gaming, became parallelised long ago. The only reason it isn't already more parallelised is that, thanks to Intel's stagnation and AMD's irrelevance until the last two years, the "target machine" for every company producing games had to be a quad-core.

As recently as three years ago it would have been suicide for a games studio to make a game whose engine was optimised for even 6 cores, let alone 8. Such a game would have chugged and spluttered on 4-core machines. The experience would have been terrible for anyone not on HEDT - although it would certainly have closed the gap between an FX processor and an Intel one - but financially there was no reason to invest the time in doing that, as very few gamers were using AMD.

Single-threading isn't a magic bullet for gaming performance. Games can be, and in some cases already are, parallelised beyond 4 threads. RPCS3 already saturates many more cores. PUBG sees a noticeable performance uplift on 6 cores rather than 4.

Rambling on about Intel's single-threaded performance isn't going to cut the mustard forever. Games will expand to 8 cores, and 6700K and 7700K owners will feel the walls close in on them as their clock-speed advantage is overtaken by greater parallelisation. Those CPUs won't be like the 2500K, which stayed relevant for years due to stagnation; they're going to be eclipsed relatively quickly.

Intel knows this; that's why they're finally expanding core counts. AMD banked on it years ago and it's paying dividends now - Ryzen is more energy efficient than the Core architecture for the same number of cores. Why? Because AMD have been doing 8 cores for years longer than Intel have, and so had a head start on making it efficient while Intel rested on their laurels.
 
TLDR version of my article:
• RTX cards *look* amazing
• The *promised* experience is worth the high price.
• Don’t buy an older card (if at all possible)

https://twitter.com/geekinchief
 
So you call everything else "productivity" and assume it needs many cores?
Because there are countless tasks (even multi-threaded ones) where this i5 8400 will beat a 2-socket Xeon server. Not to mention that for pure single-thread tasks the 8400 is faster than any AMD CPU available...
shill much?
 
Can anyone in this thread name anything they do on a daily basis that meets both of the following criteria?

1 - Is an extremely demanding task where being executed 15% faster by one machine than another is a big deal
2 - Absolutely cannot, under any circumstances, be split between multiple cores to make it faster?
I'm not saying 15% is "a big deal". It isn't, most of the time. But if it's possible without much fuss or sacrifice, then why not?
BTW: by ignoring those 15% you're basically undermining the point of overclocking, which isn't exactly in line with this community. But don't worry - I'm with you on this one! :-D Overclocking is the least cost-effective way of improving performance and, hence, fairly pointless - apart from on the top CPUs available, obviously. :)

I do a lot of stuff that can't be split between cores. It mostly stems from the way I work or the tasks I'm given. That's why I care.

But you're doing it as well, sometimes unconsciously. Browsing the web is a basic example. Ten years ago it was limited by our internet connections. Today you're not waiting for the data, but for the rendering engine. :)
Production tasks, rendering, editing, working in a DAW? Those are massively parallel already.
What are "production tasks"?
Editing? Depends on what and how you're editing. :) Quite a few popular photo algorithms are serial (sequential) and utilize just one thread. This is why programs like Photoshop struggle to utilize more than 3-4 threads during non-batch operations.
Principal component analysis (e.g. as used for face recognition) is iterative as well - people make very decent money and careers by finding ways to make it faster on multi-threaded PCs. :)
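To show what "iterative" means here, a rough Python sketch of power iteration - the basic loop behind many PCA implementations (toy code, not from any real photo or face-recognition package): each step needs the previous step's vector, so the outer loop can't be split across cores, even though the matrix-vector product inside it can be.

```python
import numpy as np

# Toy power-iteration sketch: finds the leading principal component of X.
# The outer loop is inherently sequential (v_{k+1} depends on v_k);
# only the matrix-vector product inside each step parallelises.
def leading_component(X, iters=100):
    C = np.cov(X, rowvar=False)          # covariance matrix of the features
    v = np.random.rand(C.shape[0])       # random starting vector
    for _ in range(iters):               # sequential dependency across iterations
        v = C @ v                        # this product can use many cores
        v /= np.linalg.norm(v)           # normalise to keep the vector bounded
    return v

X = np.random.rand(1000, 10)             # made-up data: 1000 samples, 10 features
print(leading_component(X))
```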
Almost *all* high performance computing, including gaming, became parallelised long ago.
Nope. A parallel algorithm is one that can be run on many elements independently with no impact on the result. For example summing vectors is perfectly parallel. Monte Carlo simulations are great as well, since all runs are independent by definition.
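A rough sketch of that independence, using Python's standard multiprocessing (toy code of mine, nothing from the thread): each worker estimates pi from its own random samples, no worker needs another's result, so the runs can be spread over as many cores as you have.

```python
import random
from multiprocessing import Pool

# Each run is independent: estimate pi by sampling random points in a unit square.
def estimate_pi(samples):
    hits = sum(1 for _ in range(samples)
               if random.random() ** 2 + random.random() ** 2 <= 1.0)
    return 4.0 * hits / samples

if __name__ == "__main__":
    with Pool(4) as pool:                      # 4 workers, no shared state needed
        results = pool.map(estimate_pi, [250_000] * 4)
    print(sum(results) / len(results))         # average the independent estimates
```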

But many problems aren't that easy. We modify them, make compromises and sacrifice a bit of precision to make them work on HPC.
Example: training neural networks (easily one of the most important problems of our times) is sequential by definition. :) You can't run it on many cores.
So we partition the data, run training on each partition independently and then average the results. It isn't equivalent to running it properly.
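A toy illustration of that compromise (made-up data, and a simple least-squares fit standing in for a neural network): fitting each partition separately and averaging the coefficients is close to, but not the same as, one fit over all the data.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=1000)

# "Proper" run: one fit on the full data set.
full_fit, *_ = np.linalg.lstsq(X, y, rcond=None)

# Forced-parallel compromise: fit each partition independently, then average.
parts = np.array_split(np.arange(1000), 4)
avg_fit = np.mean([np.linalg.lstsq(X[p], y[p], rcond=None)[0] for p in parts], axis=0)

print(full_fit)   # the two results are close on this easy problem,
print(avg_fit)    # but not identical - the averaged fit is a compromise.
```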

And forced parallelisation doesn't just affect the results. It's very problematic both theoretically and practically. What I mean is: for some algorithms, parallelisation requires more advanced math than the algorithm itself...
 
I'm not saying 15% is "a big deal". It isn't, most of the time. But if it's possible without much fuss or sacrifice, then why not?
BTW: by ignoring those 15% you're basically undermining the point of overclocking, which isn't exactly in line with this community. But don't worry - I'm with you on this one! :-D Overclocking is the least cost-effective way of improving performance and, hence, fairly pointless - apart from on the top CPUs available, obviously. :)

I do a lot of stuff that can't be split between cores. It mostly stems from the way I work or the tasks I'm given. That's why I care.

But you're doing it as well, sometimes unconsciously. Browsing the web is a basic example. Ten years ago it was limited by our internet connections. Today you're not waiting for the data, but for the rendering engine. :)

What are "production tasks"?
Editing? Depends on what and how you're editing. :) Quite a few popular photo algorithms are serial (sequential) and utilize just one thread. This is why programs like Photoshop struggle to utilize more than 3-4 threads during non-batch operations.
Principal component analysis (e.g. as used for face recognition) is iterative as well - people make very decent money and careers by finding ways to make it faster on multi-threaded PCs. :)

Nope. A parallel algorithm is one that can be run on many elements independently with no impact on the result. For example summing vectors is perfectly parallel. Monte Carlo simulations are great as well, since all runs are independent by definition.

But many problems aren't that easy. We modify them, make compromises and sacrifice a bit of precision to make them work on HPC.
Example: training neural networks (easily one of the most important problems of our times) is sequential by definition. :) You can't run it on many cores.
So we partition the data, run training on each partition independently and then average the results. It isn't equivalent to running it properly.

And forced parallelisation doesn't just affect the results. It's very problematic both theoretically and practically. What I mean is: for some algorithms, parallelisation requires more advanced math than the algorithm itself...
One of your actual problems appears to be reading and context; yet again a thread turns into "AMD is shit" and IPC talk. Really, tut.

Get out of your own headspace.
 
You can't compare the 2080 vs the 1080. The 2080 has the release price of a 1080 Ti, more or less. How well does it compare vs the 1080 Ti? Oh yeah, almost the same.

Turing is copy-pasted Pascal with RTX fancy stuff you can't even experience at your native resolutions. Have fun, folks, the ones who ordered. xD
 
You can't compare the 2080 vs the 1080. The 2080 has the release price of a 1080 Ti, more or less. How well does it compare vs the 1080 Ti? Oh yeah, almost the same.

Turing is copy-pasted Pascal with RTX fancy stuff you can't even experience at your native resolutions. Have fun, folks, the ones who ordered. xD
Once again I have to explain. Of course the 2080 will outperform and be more expensive than a 1080 Ti. Prices will always go up. But that is not the comparison. The 2080 occupies THE SAME PLACE in the new series as the 1080 did in the prior series.

It is compared because the 2080 replaces the 1080. I am still amazed some people don't get this. All you have to do is look at which chip is used in each model and you have your answer as to what replaces what.
 
Once again I have to explain. Of course the 2080 will outperform and be more expensive than a 1080 Ti.
I'm not so sure about that, for the reasons AdoredTV pointed out. TL;DR: the 2080 will probably beat the 1080 Ti at 1080p (if you're spending $500+ to play at 1080p, you're mad), the 1080 Ti will be slightly faster than the 2080 at 1440p, and the 1080 Ti will be significantly ahead of the 2080 at 2160p.

NVIDIA is going to ask for a $50-200 premium on RTX at the start to capitalize on their marketing campaign.
 
I'm not so sure about that, for the reasons AdoredTV pointed out. TL;DR: the 2080 will probably beat the 1080 Ti at 1080p (if you're spending $500+ to play at 1080p, you're mad), the 1080 Ti will be slightly faster than the 2080 at 1440p, and the 1080 Ti will be significantly ahead of the 2080 at 2160p.

NVIDIA is going to ask for a $50-200 premium on RTX at the start to capitalize on their marketing campaign.
The point is which chip goes into which model. That will always tell you which card replaces which.

And again, your explanation is also premature, since we don't yet have any reviews.
 
Once again I have to explain. Of course the 2080 will outperform and be more expensive than a 1080 Ti. Prices will always go up. But that is not the comparison. The 2080 occupies THE SAME PLACE in the new series as the 1080 did in the prior series.

It is compared because the 2080 replaces the 1080. I am still amazed some people don't get this. All you have to do is look at which chip is used in each model and you have your answer as to what replaces what.

Mate, I couldn't care less what ''place'' it occupies in the Nvidia GPU tree. How I sort GPUs is by the PRICE they're asking, comparing it to the previous generation and its performance. You still don't get it?
 
Honestly, I'm getting really tired of seeing this argument trotted out.

Can anyone in this thread name anything they do on a daily basis that meets both of the following criteria?

1 - Is an extremely demanding task where being executed 15% faster by one machine than another is a big deal
2 - Absolutely cannot, under any circumstances, be split between multiple cores to make it faster?


Financial transactions, database lookups, out-of-order execution with branch dependencies.

So like 70% of real-world business use. For example, I built new machines for a workplace based on the fact that, at peak times, saving 15 minutes per machine per work day resulted in one extra customer transaction. They paid for themselves in a few days of peak production, plus employees saved time and could go home sooner.

Banks all still batch-process at night; sometimes, due to account locks and the sheer volume, batch activity may be spread over multiple days after a weekend or holiday. Every second of machine time gained can translate to hundreds, thousands or even millions of dollars the bank either saves or spends on borrowing funds from the Federal Reserve to cover transactions, and that borrowing costs interest the bank has to pay.
 
And again, your explanation is also premature, since we don't yet have any reviews.
We know enough from NVIDIA's promotional slides, the TSMC 12 nm process, and the tech specs (of both Pascal and Turing) to make fairly accurate guesstimates. 1080/2080 is still a small chip compared to 1080 Ti/2080 Ti.
 
1080/2080 is still a small chip compared to 1080 Ti/2080 Ti.
Exactly...and they occupy different places in the families.

Mate, I couldn't care less what ''place'' it occupies in the Nvidia GPU tree. How I sort GPUs is by the PRICE they're asking, comparing it to the previous generation and its performance. You still don't get it?
You can sort however you like, but it's not how the company that makes them sorts them, so, yeah.... THAT is why no reviewer, nor Nvidia, nor any AIB partner will describe or market the 2080 as the 1080 Ti's replacement.

Basically, if you only look at price as far as replacement decisions go, you're gonna price yourself down and out of playing PC games in just a few generations. You'll be buying a 5030 and wondering why the only games you can play maxed out are ones more than four years old. I would hate to see that happen to anyone. :)
 
I wonder what prebuilts are going to sell for.
 
....how I sort GPUs is by the PRICE they're asking, comparing it to the previous generation and its performance. You still don't get it?

You say you can't compare a 2080 to a 1080 because the former is more expensive? Well, obviously they differ in price, but by model and, as @rtwjunkie says, by family architecture, you clearly can.

The Pascal chip family was:
GP102 Titan X and Titan Xp
GP102 1080 Ti
GP104 1080
GP104 1070

The Turing family is:
TU102 2080 Ti
TU104 2080
TU106 2070

That the new xx80 is more expensive than ever before doesn't define its place in the product stack. All Nvidia have done is move the whole family into quite a shitty price zone.

With your statement, you're confusing perceived value (the price they're asking) with a model's place in the lineup. In essence, that's your subjective opinion, not a fact. Fact is, the 2080 replaces the 1080 in the product line, and the 2080 Ti replaces the 1080 Ti.

The price sucks, that's the truth.
 