
Tom's Hardware Editor-in-Chief's stance on the RTX 20 Series: JUST BUY IT

 
OMG that GN reaction was funny! :laugh: What a great video.

I read that Tom's article first and thought, wtf dude, NVIDIA clearly paid you handsomely to write this "buy it now!" puff-piece advertorial drivel. Was the author trolling for clicks, perhaps? Pure clickbait? If so, it worked... at the expense of Tom's credibility.
 
He wasn't completely right when he said the argument being made is just "first". That's one aspect of it, but the larger point the Tom's writer was making is that the person who paid through the nose got years of enjoyment out of the product, while other (presumably poorer) people's lives were literally poorer for having gone without.

So the argument was more rich = awesome

The sales pitch is about making ordinary people feel that they can and should try to indulge in an extravagant purchase because their quality of life will be improved. Touch the lifestyles of the rich and famous. Live a little before you drop dead. Who needs to worry about that mortgage. Spend the grocery bills for the next six months on pure ray tracing pleasure.

(I am 3:40 into the video so I'll see what else he talks about.)

update 1: He points out the self-defeating problem of a vendor undercutting the middleman (the reviewer) while still needing the middleman's business. However, he should remember that propaganda and marketing (payola) are also part of the review-site process. Some sites don't take bribes, but plenty are on the take to some degree. One of the biggest examples is reliance on early access to parts and info, to beat other review sites to the ad revenue. Another is not having to pay for the hardware, getting free samples instead.

update 2: Around 14:00, he misses that the comparison was against the 1080, not the 1080 Ti. Notably, AnandTech's announcement hype article led with a chart (the first one on page 1) that specifically excluded the 1080 Ti, in line with the Nvidia marketing agenda.
 
I read the Tom's opinion piece the day it came out on their site; pure garbage from them.
 
This is exactly the Tom's Hardware that I've known for more than 15 years.
 
How boring. Grasping at straws, just like how they abandoned other "great ideas" of theirs...
 
Damn, the only reason to even visit that site anymore is basically their PSU and case reviews. What a shame.
 
I'm aware we already have "fake" shadows and lighting, because true ray tracing costs too much... what I was saying is, I wonder if there's a way to make real ray tracing more efficient in a way that isn't "fake". Maybe not, but if not, ray tracing should probably never happen, at least in real time, like in games. That's a lot of power just to produce such a subtle effect.
Not much more than what RTX does. They use fewer rays and then use matrices to fill in the blanks (aka, faking it).

There's no way to do RTRT using little power. Rasterization will always win in that regard.
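Some rough back-of-the-envelope numbers on why (mine, not from the video; the spp and bounce counts are illustrative assumptions):

```python
# Hypothetical back-of-the-envelope ray budget, not official figures.
width, height = 1920, 1080
fps = 60
spp = 1        # samples per pixel (what RTX reportedly targets)
bounces = 1    # secondary rays per sample; offline renderers use far more

rays_per_frame = width * height * spp * (1 + bounces)
print(f"game: {rays_per_frame * fps / 1e9:.2f} Grays/s")   # ~0.25 Grays/s

# An offline, film-style path trace at 256 spp and 4 bounces, same fps:
film = width * height * 256 * (1 + 4) * fps
print(f"film: {film / 1e9:.0f} Grays/s")                   # ~159 Grays/s
```

Even at 1 spp the ray count is in the hundreds of millions per second, and real path-traced quality is orders of magnitude beyond that.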
 
I must've had around 50,000 posts on THG over 10+ years, but since THG was purchased by Purch (along with AnandTech), I have no use for them. Purch is a product-placement entity, not a media company. No doubt RTRT is an incredible new development, but geez... can we at least wait till the cards actually come out before generating four pages of comments on what it is or isn't? Perhaps even wait till there are a few games capable of utilizing its capabilities.

Of course we've been here before... Mantle was gonna change everything (it didn't). But I think it's best to refrain from writing the restaurant review until the place actually opens and we've sampled its wares a few times.
 
At the end of the day, these are the kinds of visuals that matter and "change" gaming (although I don't see one Blizzard character here... but they definitely deserve a place. Tracer isn't even 5 years old, but I bet people could spot her in this form). This goes back to before gaming, really... to Walt Disney and the Mickey Mouse ears. That's the kind of visual that sticks with people. Not seeing muzzle flash in a Battlefield enemy's eyes.
 

Attachment: 3ed885e40d48106d85e9730c2119d208.png (143.3 KB)
Not much more than what RTX does. They use fewer rays and then use matrices to fill in the blanks (aka, faking it).

There's no way to do RTRT using little power. Rasterization will always win in that regard.

RTX is 1 spp (one sample per pixel) + a denoiser, for these three effects (toy sketch of the idea after the list):
  • Ray Traced Area Shadows
    • Spherical Rect. Directional Lights
    • Soft Shadows
  • Ray Traced Glossy Reflections
    • Inter-Object Reflections
    • Mirror to Glossy
  • Ray Traced Ambient Occlusion
    • High Quality Contact Hardening
    • Support for off-screen objects
From Nvidia's RTX presentation at GDC 2018:
  • The aim is to reach a denoising budget of ~1 ms or less for 1080p target resolution on gaming class GPUs.
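For anyone wondering what "1 spp + denoiser" means in practice, here's a minimal toy sketch in Python. Everything in it is a stand-in: the "trace" is just synthetic noise, and a box blur plays the denoiser (the real one is a trained filter that also uses normals, depth, and motion vectors):

```python
import numpy as np

H, W = 270, 480  # small for speed; the idea is resolution-independent

def trace_1spp(h, w, rng):
    """Stand-in for firing one ray per pixel: the 'true' image plus
    heavy Monte Carlo noise, which is what 1 spp actually gives you."""
    true_image = np.tile(np.linspace(0.0, 1.0, w), (h, 1))
    return np.clip(true_image + rng.normal(0.0, 0.3, size=(h, w)), 0.0, 1.0)

def denoise(img, radius=4):
    """Toy spatial denoiser: a plain box blur."""
    k = 2 * radius + 1
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

rng = np.random.default_rng(0)
noisy = trace_1spp(H, W, rng)    # 1 sample per pixel -> very noisy
clean = denoise(noisy)           # "fill in the blanks" afterwards
print(noisy.std(), clean.std())  # noise drops sharply after denoising
```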

Should have seen it coming
 
  • The aim is to reach a denoising budget of ~1 ms or less for 1080p target resolution on gaming class GPUs.

An entire frame is 16 to 33 ms, and only ~1 ms of that is the denoising pass? That clears things up: the bottleneck is still, by far, the rays themselves. If SOTTR wasn't even reaching 60 fps at 1080p, there must have been a ton of idle shaders; I wonder if they couldn't just run part of the ray tracing on those idle resources.
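The arithmetic behind those numbers, for anyone following along (frame time is just 1000/fps):

```python
# Frame-time budget arithmetic; the 1 ms denoise target is from the GDC slide.
for fps in (30, 60):
    frame_ms = 1000.0 / fps
    denoise_ms = 1.0
    rest_ms = frame_ms - denoise_ms  # rays + raster + everything else
    print(f"{fps} fps: {frame_ms:.1f} ms/frame, "
          f"denoise {denoise_ms / frame_ms:.0%}, {rest_ms:.1f} ms left")
# 30 fps: 33.3 ms/frame, denoise 3%, 32.3 ms left
# 60 fps: 16.7 ms/frame, denoise 6%, 15.7 ms left
```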
 

Ouch... as if the missing kidney wasn't enough, it also looks like they drilled holes in the spine! It also looks like the remaining kidney was weirdly cut and pasted in there... so he's got a pretty painful looking lack of lumbar support and I guess a picture of a single kidney, rather than an actual one... should be enough to get by on.
 
From my perspective (as just a viewer/gamer), lights matter to me most when it comes to simultaneous bulbs.. and things flickering out if I move my perspective quickly. Which is more an engine issue, isn't it? Or is this going to improve that too?

This is partly what I meant earlier when I said a well-designed game is one that tricks you about, or hides, its own limitations. I would even say these limitations are good, in that they encourage/engage the designer to think artistically... rather than relying on the beauty of realistic accidents (as realistic lighting may do).
 
An entire frame is 16 to 33 ms, and only ~1 ms of that is the denoising pass? That clears things up: the bottleneck is still, by far, the rays themselves. If SOTTR wasn't even reaching 60 fps at 1080p, there must have been a ton of idle shaders; I wonder if they couldn't just run part of the ray tracing on those idle resources.
Guarantee you the GPU is pegged at 100%. Not enough shaders.
 
From my perspective (as just a viewer/gamer), lights matter to me most when it comes to simultaneous bulbs.. and things flickering out if I move my perspective quickly. Which is more an engine issue, isn't it? Or is this going to improve that too?

This is partly what I meant earlier when I said a well-designed game is one that tricks you about, or hides, its own limitations. I would even say these limitations are good, in that they encourage/engage the designer to think artistically... rather than relying on the beauty of realistic accidents (as realistic lighting may do).

It depends on how well the game is made. 1 spp + denoiser may run into issues at edges: artifact flickering.

They did introduce ATAA and DLSS; those might be a way to combat that. Then again, I just remembered how taxing ATAA is.

To minimize the artifact flicker you'd have to use more samples per pixel, which makes it slower. A no-no for real time.
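That tradeoff is plain Monte Carlo math: noise (standard deviation) falls as 1/sqrt(spp), so halving the flicker costs 4x the samples. A quick numpy check (illustrative numbers only):

```python
import numpy as np

rng = np.random.default_rng(42)
true_value, sigma = 0.5, 0.3  # made-up per-ray noise level

for spp in (1, 4, 16, 64):
    # Each pixel averages `spp` noisy ray samples; simulate many pixels.
    samples = rng.normal(true_value, sigma, size=(100_000, spp))
    print(f"{spp:3d} spp -> pixel noise std {samples.mean(axis=1).std():.4f}")
# std halves for every 4x spp: ~0.30, ~0.15, ~0.075, ~0.0375
```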
 
From my perspective (as just a viewer/gamer), lights matter to me most when it comes to simultaneous bulbs.. and things flickering out if I move my perspective quickly. Which is more an engine issue, isn't it? Or is this going to improve that too?

This is partly what I meant earlier when I said a well-designed game is one that tricks you about, or hides, its own limitations. I would even say these limitations are good, in that they encourage/engage the designer to think artistically... rather than relying on the beauty of realistic accidents (as realistic lighting may do).

Lighting can matter a lot. By some extraordinary stroke of luck, I found this comparison of Quake (yes, that Quake, from 1996) with and without colored lights in a modern engine. That's just a nifty effect though, and that was even possible on the PS1 (Doom and Quake 2, at least, did it, not sure about others). Here's a better comparison of different lighting modes.

Stalker's static lighting looks pretty crappy compared to dynamic, but dynamic looks pretty good. I don't really think we need ray tracing, at least not yet. Maybe years later when it can be added relatively simply, like PhysX is/was. Today we're throwing loads of hardware meant explicitly for ray tracing, and it doesn't even run well. o_O
 
Guarantee you the GPU is pegged at 100%. Not enough shaders.

It's not clear what sort of impact the RT and Tensor cores have when they execute instructions alongside the FP32 and integer shaders. The register file space and cache are probably nowhere near enough, and memory operations end up hitting global memory much more often on Turing than on Pascal, so a lot of ALU cycles are lost. The fact that these demos can't reach 60 fps at 1080p is an indicator that the FP32 shaders are massively underutilized in those specific workloads. I really doubt they're offloading ray-tracing ops onto traditional shaders; there is simply not enough cache and memory bandwidth to do all of this at the same time.

Nvidia shoved so many ALUs in this thing, HBM could have really helped.
 
They're using a 13 TFLOP chip to try to do 1000+ TFLOPs' worth of work. The fact that they're able to fake it with so few hardware resources is a good thing. The specific bottlenecks don't really matter; they'd need to increase performance exponentially to see a significant improvement in rays/sec.

The ALUs are likely used to address memory, the RT cores, and the Tensor cores. A Tensor core in particular works on 4x4x4 matrices, and to use a matrix of any kind you need ALU performance. The FPUs are doing ray bounces while the ALUs keep tabs on the color information collected from each bounce.
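For reference, the Tensor core primitive is a fused multiply-accumulate on 4x4 matrices, D = A·B + C, with FP16 inputs and FP32 accumulation. A numpy emulation of one such op (just to show the shape of the work, not the hardware path):

```python
import numpy as np

# One Tensor-core-style op, emulated: D = A @ B + C on 4x4 matrices.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)).astype(np.float16)  # FP16 input
B = rng.standard_normal((4, 4)).astype(np.float16)  # FP16 input
C = rng.standard_normal((4, 4)).astype(np.float32)  # FP32 accumulator

D = A.astype(np.float32) @ B.astype(np.float32) + C
print(D.shape)  # (4, 4): 64 multiply-adds folded into one hardware op
```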

Pretty sure HBM2 isn't much faster than GDDR6 but it is significantly more costly and limits die size. HBM is lower latency but NVIDIA addressed that by adding more cache to the processor. AMD has an unhealthy obsession with HBM.

RTRT theoretically shouldn't change the memory footprint much. It's compute-heavy more so than memory-heavy.
 
Uuu... so many people beating the drum for "Tom's Hardware published an article of herculean stupidity that pertained to the idea of pre-ordering based on nothing more than promises. ".
How funny it is considering what was going on before Ryzen and Vega came out.
I bet most of the other company fanboys here would already order Navi if someone wanted to take their money:-)

It's just sad that the same people who praise the other company for "fine wine", "future proofing" and "innovation" are now criticizing NV for pushing RTRT. :-D
This is the most advanced card available today. Just live with it. NV got so far ahead in performance that they finally had a moment to do something interesting.

And I simply knew HBM will appear here (mentioned by the usual members). :-D
 
Uuu... so many people beating the drum for "Tom's Hardware published an article of herculean stupidity that pertained to the idea of pre-ordering based on nothing more than promises. ".
How funny it is considering what was going on before Ryzen and Vega came out.
I bet most of the other company fanboys here would already order Navi if someone wanted to take their money:)

It's just sad that the same people who praise the other company for "fine wine", "future proofing" and "innovation" are now criticizing NV for pushing RTRT. :-D
This is the most advanced card available today. Just live with it. NV got so far ahead in performance that they finally had a moment to do something interesting.

And I simply knew HBM will appear here (mentioned by the usual members). :-D
Very few people are criticizing Nvidia about anything; most people are simply saying that pre-ordering is a mistake.
 
The ALUs are likely used to address memory, the RT cores, and the Tensor cores. ... The FPUs are doing ray bounces while the ALUs keep tabs on the color information collected from each bounce.

Those are just assumptions. What's certain is that a lot of ALUs have been added that can now operate concurrently, and that requires a ton of memory bandwidth, registers, and cache to hide the latencies. The higher memory and cache bandwidth surely helps mitigate some of that, but I doubt it's anywhere near enough to use all the shaders effectively. GPUs do not work like CPUs; memory limitations become much more apparent since there aren't as many sophisticated mechanisms to keep the ALUs occupied.
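A crude way to put numbers on that is a roofline-style ratio of compute throughput to memory bandwidth. The specs below are ballpark 2080 Ti-class figures, used purely for illustration:

```python
# Ballpark roofline arithmetic; both specs are approximate, for illustration.
fp32_tflops = 13.4    # ~RTX 2080 Ti class FP32 throughput
mem_bw_gbs = 616.0    # ~GDDR6 memory bandwidth, GB/s

flops_per_byte = (fp32_tflops * 1e12) / (mem_bw_gbs * 1e9)
print(f"~{flops_per_byte:.0f} FLOPs per byte to stay compute-bound")
print(f"~{flops_per_byte * 4:.0f} FLOPs per 4-byte float loaded")
# Any workload below that ratio stalls on memory, and idle ALUs follow.
```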
 