
Should I return to nVidia?

Well then, I'll just point at the 7900 XT in my avatar with a -10% power limit.

It barely reaches 2300 MHz, but sure, it keeps stock performance when you take power away...
Yes, my 7800 XT loses about 100 MHz with the -10% power limit; however, the performance decrease (that is, your FPS) is unnoticeable unless you're running a benchmark.
 
It seems like you simply don't get it.
There is a term you threw around foolishly earlier: Dunning-Kruger.

One does not need to be intelligent to read something and wield it as a defence for a point one wishes to push.

Likewise, you don't need to know the intricate details of how the GPU works; just slot it into the PCI-E slot and get numbers.

You failed even at that.
 
Thank God it regrows with a lot of beer consumption ;)
Please stop making fun of our health coverage in the United States; everyone knows beer is not covered under our insurance plans and costs $800 per pint out of pocket at the hospital.
 
Yes, my 7800 XT loses about 100 MHz with the -10% power limit; however, the performance decrease (that is, your FPS) is unnoticeable unless you're running a benchmark.
I don't even actively use the power limiter, just clocks. It doesn't matter much whether I set +15%, -10% or 0% unless the core clock is capped hard; that's where the real watts are saved, because RDNA3 is STILL not on its optimal efficiency curve at -10%. You can also push the VRAM clock up another 100-200 MHz; that's where you regain some performance for a small power investment.

Either way, I think this debunks the BS statement that you can't get RDNA3 efficient or control its efficiency.
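For anyone who'd rather measure than argue, here's a rough sketch of how to verify this kind of tuning on Linux: the amdgpu driver exposes average board power through hwmon, so you can log it once at stock and once with the clock cap + VRAM bump, then compare the averages against your FPS overlay. The sysfs paths below are assumptions; the card index and hwmon number differ per system.

Code:
# Rough sketch: log average board power from the amdgpu hwmon interface
# while a game or benchmark runs, so a stock run and a clock-limited run
# can be compared. Paths are assumptions -- check your own
# /sys/class/drm/card*/device/hwmon/ layout.
import glob
import time

def find_power_sensor():
    # power1_average reports board power in microwatts on amdgpu
    candidates = glob.glob(
        "/sys/class/drm/card*/device/hwmon/hwmon*/power1_average")
    if not candidates:
        raise FileNotFoundError("no amdgpu power sensor found")
    return candidates[0]

def log_power(seconds=60, interval=1.0):
    sensor = find_power_sensor()
    samples = []
    for _ in range(int(seconds / interval)):
        with open(sensor) as f:
            samples.append(int(f.read()) / 1_000_000)  # uW -> W
        time.sleep(interval)
    print(f"avg {sum(samples) / len(samples):.1f} W, "
          f"min {min(samples):.1f} W, max {max(samples):.1f} W")

if __name__ == "__main__":
    log_power()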
 
High-end buyers simply pay for the privilege of using it years in advance, but it has to start somewhere. I mean, a 4090 today has about as much power as a PS6 will have; maybe the PS6 will have 20% more at best. But we'll get a 4090 Ti in 1-2 months, and 6 months after that the 5090 will hit. ;) But pssssttt
When RT becomes a must-have, the 40-series (4090 included) will be long obsolete.

As a home user, it is counterproductive to invest in a technology that gets adopted at a slower rate than the hardware that supports it ages.
 
@Chomiq
Hey, I'm part German and a Virgo, so arguing is "built in" :D
 
I have an RTX 4090. I have never played an entire game using ray tracing or DLSS.
Very strange. Why buy a 4090 then? That's like its singular selling point to justify the crazy price.
 
@Vya Domus
A good friend of mine doesn't do much outside work/family and PvP gaming at 4K 120+ Hz, and he didn't go for the RTX 30 series, so when he found an open box at Micro Center, he got it, as the wife wanted the big screen for herself.

A lot of folks do "save up" and get something above what's "needed" to do the job, so they can keep it longer and/or have some spare power left, and don't need to max out the hardware to get stuff done.
 
Very strange. Why buy a 4090 then? That's like its singular selling point to justify the crazy price.
Raster performance. Simple as.
 
Raster performance. Simple as.
Plus efficiency, CUDA, Tensor cores for tasks other than RT, DLSS/DLAA, hardware framegen, 24 GB that's actually usable because the entire professional software market is optimized for NVIDIA. The list goes on.

Check out the GPU compute performance here.

I mean, the performance contrast between NVIDIA/AMD/Intel is so stark that a 2080 Ti is roughly as fast as a 7900 XTX in two of the three tasks.

An Arc A770 is competitive with a 7900XTX in one of the three.

But then, that's still only on the level of an RTX 3050 so...
 
Raster performance. Simple as.
I still find it strange people would pay that much. A 4090 is only like 15% faster in raster than a 7900 XTX; that's a crazy premium for not much in exchange. To each their own, I guess.
 
Can our resident 4070 Ti owner throw me some data on Ada Lovelace wattage in lower power states? Stuff like 1440p locked to 60 Hz (any games) or 4K locked to 60 (mostly DCS/MSFS/race sims if you have them). I would like to compare with my GRE.

Most of the games I play are CPU-bound (Squad, several RTS/MMOs), so the GPU often isn't maxed. I got a GRE and I'm fairly happy with it, but I'm still considering the 4070S, mostly due to high power prices and AMD still using quite a bit. I'm wondering how the power scaling is in lower states on Nvidia. Most likely better, but how much better?
 
Can our resident 4070 Ti owner throw me some data on Ada Lovelace wattage in lower power states? Stuff like 1440p locked to 60 Hz (any games) or 4K locked to 60 (mostly DCS/MSFS/race sims if you have them). I would like to compare with my GRE.

Most of the games I play are CPU-bound (Squad, several RTS/MMOs), so the GPU often isn't maxed. I got a GRE and I'm fairly happy with it, but I'm still considering the 4070S, mostly due to high power prices and AMD still using quite a bit. I'm wondering how the power scaling is in lower states on Nvidia. Most likely better, but how much better?
[Chart: TPU V-Sync power consumption testing (power-vsync.png)]
 
Yup, I saw that chart and it got my interest.

But I did my own test with the Mafia remaster and compared it to a YouTuber showing it on a 4070S at 1440p/60, and the power use was very similar: 100-130 W for both.

So I was looking for more samples ideally :p
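If anyone with a 40-series card wants to grab comparable samples, polling nvidia-smi while the game runs with the 60 Hz cap is the easiest way; a minimal sketch is below. It's software telemetry, so treat the numbers as ballpark rather than WireView-accurate, and nvidia-smi has to be on PATH.

Code:
# Minimal sketch: sample GPU board power via nvidia-smi once per second
# while a frame-capped game is running, then print the average.
# Software telemetry only -- ballpark numbers, not shunt-measured.
import subprocess
import time

def sample_power_watts():
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=power.draw",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True)
    return float(out.stdout.strip().splitlines()[0])

def log(seconds=120, interval=1.0):
    samples = []
    for _ in range(int(seconds / interval)):
        samples.append(sample_power_watts())
        time.sleep(interval)
    print(f"avg {sum(samples) / len(samples):.0f} W "
          f"over {len(samples)} samples")

if __name__ == "__main__":
    log()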
 
Yup, I saw that chart and it got my interest.

But I did my own test with the Mafia remaster and compared it to a YouTuber showing it on a 4070S at 1440p/60, and the power use was very similar: 100-130 W for both.

So I was looking for more samples ideally :p
TPU tests with a WireView, so it's generally more accurate than software readings.
 
The lowest power draw in 3D is usually around 56-80 W on my 7900 XT, and that's just because the card kicks into 3D mode; games that draw this little barely use the GPU. RDNA3 does not do well here.

At idle I rarely go above 25 watts (YouTube mostly hits this).

This is at 1440p 170 Hz.
 
TPU tests with a WireView, so it's generally more accurate than software readings.
And also not CPU limited.

Yup, I saw that chart and it got my interest.

But I did my own test with the Mafia remaster and compared it to a YouTuber showing it on a 4070S at 1440p/60, and the power use was very similar: 100-130 W for both.

So I was looking for more samples ideally :p
Comparisons to YouTubers aren't recommended, because you don't know what setup they really use; the trust factor on YouTube is just about zero. With written reviews, you can check and cross-check configurations, etc.

In general, you can consider a review to at least give you the best relative look at the gaps between GPUs, since the numbers are averages across a large benchmark suite on a single baseline system. So the gap you see up there in the charts really is the gap you will see, perhaps a bit less pronounced; and if you don't see it, you're limiting your GRE harder in some way.
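To make the "relative gap" point concrete, this is roughly how a review-style average is built: every game's result is normalized to the same baseline on one test system and then averaged, usually with a geometric mean, so the chart shows a relative gap rather than an absolute FPS promise. A toy sketch with made-up FPS numbers:

Code:
# Toy illustration of review-style relative performance:
# normalize each game's FPS to a baseline card on the same system,
# then take the geometric mean. FPS numbers below are made up.
from math import prod

fps = {
    # game: (baseline_card_fps, other_card_fps)
    "Game A": (92.0, 118.0),
    "Game B": (61.0, 70.0),
    "Game C": (144.0, 169.0),
}

ratios = [other / base for base, other in fps.values()]
geo_mean = prod(ratios) ** (1 / len(ratios))
print(f"relative performance: {geo_mean * 100:.0f}% of baseline")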
 
I still find it strange people would pay that much. A 4090 is only like 15% faster in raster than a 7900 XTX; that's a crazy premium for not much in exchange. To each their own, I guess.
I don't know about that. I have a 7900 XTX too, and I'm pretty sure the difference is bigger, around 1.2x typically, but yes, to each their own. 90% of the time, what is my 4090 doing? Raster. Not CUDA. Not INT8 tensor inference. Not optical flow analysis. I bought based on my actual use, and yes, at the high end pricing goes off the wall (but it's been that way since the Titan).
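For what it's worth, the premium is easy to put rough numbers on. A back-of-the-envelope sketch using US launch MSRPs and the ~1.2x raster gap mentioned above (street prices and your own measured gap will obviously shift the result):

Code:
# Back-of-the-envelope cost per unit of raster performance.
# Launch MSRPs in USD; the 1.2x relative raster figure is the rough
# gap discussed above, not a measured number.
cards = {
    "RX 7900 XTX": {"price": 999, "relative_raster": 1.00},
    "RTX 4090": {"price": 1599, "relative_raster": 1.20},
}

for name, c in cards.items():
    per_unit = c["price"] / c["relative_raster"]
    print(f"{name}: ${per_unit:.0f} per unit of raster performance")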
 
@JohH is CUDA not part of the raster pipeline?
Not really. CUDA workloads also use the shader units, but not the raster engines or texture mapping units and so on. They scale with memory bandwidth and compute. If raster were only compute-bound, then the 4090 would be much faster than it is relative to a 4080, for example.
 
is CUDA not part of the raster pipeline?
Nvidia uses "CUDA" as the name of both their compute API and the "CUDA cores" in a GPU, which aren't really cores; they started doing that back in 2008 to show that their GPUs can do compute. Basically, one is the name of an API and the other is just vaguely related nomenclature for the hardware, the same way AMD calls theirs "stream processors", or used to.
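To illustrate the API side of that: a compute kernel like the toy SAXPY below runs entirely on the shader/SIMT units, with no raster engines, ROPs or texture units involved. Sketched with Numba's CUDA bindings purely as an example, assuming the numba package and an Nvidia GPU with a working CUDA driver are available.

Code:
# Toy CUDA compute kernel via Numba: pure number crunching on the
# shader/SIMT hardware, nothing from the raster pipeline is used.
# Requires numba and an Nvidia GPU with a working CUDA driver.
import numpy as np
from numba import cuda

@cuda.jit
def saxpy(a, x, y, out):
    i = cuda.grid(1)          # global thread index
    if i < out.size:
        out[i] = a * x[i] + y[i]

n = 1 << 20
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)

d_x = cuda.to_device(x)
d_y = cuda.to_device(y)
d_out = cuda.device_array_like(x)

threads = 256
blocks = (n + threads - 1) // threads
saxpy[blocks, threads](np.float32(2.0), d_x, d_y, d_out)

print(d_out.copy_to_host()[:4])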
 
Can I just say: people who claim ray tracing will replace rasterization, please stop with that nonsense. The clue's in the name, for God's sake. It will replace the lighting and how light reacts to objects, materials and textures, but that is all. It's a common mistake people make, but rasterization in some form will probably be around forever.

Use Unreal Engine 4/5 or Godot or Unity and make me a pure ray tracing game. Oh, is that a sphere or cube you used? Raster. That material you put on it? Raster. Everything in a game, including lighting (unless it's an RT-lit game) and the physics/AI, is raster. Buildings, player characters, the ground, etc.: all raster. RT can't replace those. (See the sketch below for how the split actually looks in a hybrid frame.)

In other words, as games get bigger (which they are), there will still be a need for a lot of rasterization power; don't dismiss that fact.
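That hybrid split is literally how current "RT" games are put together. A stripped-down structural sketch of a frame (function names are made up, just to show where raster ends and RT begins):

Code:
# Structural sketch of a typical hybrid frame: geometry still goes
# through the raster pipeline, and ray tracing only replaces or
# augments the lighting passes. Function names are made up.
def render_frame(scene, use_rt_lighting):
    gbuffer = rasterize_geometry(scene)   # meshes, materials, depth: raster
    if use_rt_lighting:
        lighting = trace_lighting(scene, gbuffer)            # shadows/GI/reflections: RT
    else:
        lighting = shade_with_shadow_maps(scene, gbuffer)    # classic raster lighting
    return compose(gbuffer, lighting)     # post-processing, UI, etc.

def rasterize_geometry(scene):
    return f"G-buffer({scene})"

def trace_lighting(scene, gbuffer):
    return f"RT lighting over {gbuffer}"

def shade_with_shadow_maps(scene, gbuffer):
    return f"raster lighting over {gbuffer}"

def compose(gbuffer, lighting):
    return f"frame = {lighting}"

print(render_frame("demo scene", use_rt_lighting=True))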
 