
NVIDIA GeForce GTX 780 Ti 3 GB

I don't know if this has been discussed, but it would be nice to include some BF4 benchmarks.

Or at least change the description for BF3 as it's no longer "Arguably one of the most anticipated online shooters of recent times" :)

W1zzard mentioned that he'll add BF4 along with other games next month.
 
AMD should just sell the bare boards at this point. :laugh:

That being said, I really dig the reference cooler's look with the red highlights. I wonder if it would fit onto a 7970 :confused: (unlikely)

A 280X cooler should... maybe. Not sure if the VRM section would line up "perfectly".
 
No one cares about 4k performance. None of you are playing games on 4k monitors. If you think 4k is "the future" then you're probably the same people who bought into 1080p for 4x the cost with ZERO content coming for 5 years. There isn't even a significant gaming population above 1080p, which is why reviewers generally look at 1080p and 1440p or 1600p results, where the 780ti smokes the 290x without taking overclocking into consideration.

You AMD fans need to stop trying to skew results, it's ridiculous. Wait until aftermarket 290X and aftermarket 780 Ti cards are out and reviewed, and then let's see what the 1440p results say. I can tell you this right now: no factory card (not on water) from the 290X line is going to beat the 780 Ti, and I highly doubt that even on water any of them will beat a 780 Ti on water. The 290X has major heat issues and major noise issues (780 Ti SLI is quieter than a 290X in uber mode).

I will gladly pay the $150 to avoid AMD's software, drivers, the heat, and the noise. When you're talking about two GPUs at $1100 or $1400, who cares? If you can afford that system in the first place, then $300 is pretty much nothing when considering the total system cost...

I agree with you on 4K. Worthless to even bring up right now. 1440p is what many people can afford.

But talk about skewing things. Maybe $300 is nothing to you, but that is skewed financial sense to me.

Particularly when W1zzard's performance numbers at 2560x1600 show the 780 Ti at 100% and the 290 (non-X) at 87%.

Again, maybe that 13% performance gap is worth 75% more in cost to you, but I would say that is skewed reasoning.
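To make that arithmetic concrete, here's a quick sketch; the ~$399 and ~$699 prices are my own assumption based on the $300 gap being discussed, not numbers from the review:

# Rough value math; assumed street prices of ~$399 (R9 290) and ~$699 (780 Ti),
# with W1zzard's 2560x1600 summary of 87% vs 100% relative performance.
price_290, price_780ti = 399.0, 699.0
perf_290, perf_780ti = 87.0, 100.0

extra_cost = (price_780ti / price_290 - 1) * 100   # ~75% more money
perf_gap = (1 - perf_290 / perf_780ti) * 100       # the 290 is ~13% slower

print(f"{extra_cost:.0f}% more cash for a {perf_gap:.0f}% performance gap")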

...
 
It was never intended to do so. IMHO it is a great card; considering it has more transistors than the competitor's card yet produces less heat, that is impressive. If nothing else it may force AMD to start thinking about bundling more games with the R9 series graphics cards, so it will be a win-win situation for us customers.
But the NVIDIA chips are also on a much bigger die.
 
I agree with you on 4K. Worthless to even bring up right now. 1440p is what many people can afford.

But talk about skewing things. Maybe $300 is nothing to you, but that is skewed financial sense to me.

Particularly when W1zzard's performance numbers at 2560x1600 show the 780 Ti at 100% and the 290 (non-X) at 87%.

Again, maybe that 13% performance gap is worth 75% more in cost to you, but I would say that is skewed reasoning.

...

The problem is people have been bringing up 4K benchmarks in order to show the benefits of the 290(x) over the 780 Ti.

Kinda flies in the face of their "good value" argument.
 
Is there any game that can max out 2GB of VRAM at 1080p?

Just wondering, because the 690 can now be had for around the same price as a 780 Ti in some places and it's generally faster, if you don't mind SLI (I don't). However, it's only got 2GB of usable RAM (2x2GB config) whereas the 780 Ti has 3GB. Obviously, if the RAM maxes out, performance will tank, or the game will crash if it's badly written, so that's to be avoided at all costs.

By the time 4K performance actually matters, with affordable single-panel 4K monitors, all of today's cards will be obsolete anyway, hence I don't care about 4K performance (sorry, AMD).
 
The problem is people have been bringing up 4K benchmarks in order to show the benefits of the 290(x) over the 780 Ti.

Kinda flies in the face of their "good value" argument.

I'm sure with their 512-bit bus they will do fine at higher resolutions.

Personally just think that talk about 4k is still very premature.

In making their decision, people should stick to what matters now, and realize that $300 is a lot of money and they are not using 4K.

I find it pointless when people bring up unimportant points just for the sake of arguing.

...
 
A 280X cooler should... maybe. Not sure if the VRM section would line up "perfectly".

What do you say, would it fit?

1st: Gigabyte WindForce 280X, 2nd: ASUS DirectCU 280X, 3rd: 290X
 

Attachment: comparison.jpg
@W1zzard: any chance you could show this chart for this card?

http://tpucdn.com/reviews/AMD/R9_290X/images/analysis_quiet.gif

I'd like a better sense of how much this card throttles, if at all.

Has anybody ever done this test or review with Ultra Low Power State (ULPS) set to 0 in the registry? Personally, I think the down-throttling is due to the fact that the card doesn't need to push higher core frequencies or GPU loads to do the same amount of work.

In CrossFireX, in a lot of current games, the first GPU won't push full GPU load or core frequency because it isn't necessary in order to produce the output needed to finish drawing frames. In other games that are more intensive, both GPUs will push 100% at full stock frequencies, at 95°C tops.

Disabling ULPS would probably prove whether people are making a big deal out of the throttling anomalies for nothing, or whether there is merit to back up the claims. I'm leaning towards the possibility that it's just NVIDIA consumers blowing something simple and insignificant out of proportion...
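For anyone who wants to try it, here's a rough sketch of the usual tweak: flipping the EnableUlps value under the display-class registry keys. Treat it as an assumption-laden example (run elevated, back up the registry, and reboot afterwards), not a supported procedure:

# Minimal sketch: set AMD's EnableUlps DWORD to 0 wherever it already exists.
# Assumes the standard display adapter class GUID; requires admin rights.
import winreg

CLASS_KEY = r"SYSTEM\CurrentControlSet\Control\Class\{4D36E968-E325-11CE-BFC1-08002BE10318}"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, CLASS_KEY) as class_key:
    i = 0
    while True:
        try:
            sub_name = winreg.EnumKey(class_key, i)   # "0000", "0001", ...
        except OSError:
            break                                     # no more subkeys
        i += 1
        try:
            with winreg.OpenKey(class_key, sub_name, 0,
                                winreg.KEY_QUERY_VALUE | winreg.KEY_SET_VALUE) as sub:
                winreg.QueryValueEx(sub, "EnableUlps")             # only touch keys that already have it
                winreg.SetValueEx(sub, "EnableUlps", 0, winreg.REG_DWORD, 0)
                print(f"EnableUlps set to 0 under subkey {sub_name}")
        except OSError:
            continue                                  # not an AMD key, or access denied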

About the GTX 780 Ti: nice card, but in some ways I feel as if NVIDIA kicked its consumers in the balls, the ones who bought a GTX Titan for $1,069 or $1,099 with only a partially enabled set of the 2,880 CUDA cores, or a K6000 with a whopping price tag of almost $4,000.00, when the 780 Ti only gives up 6 GB of VRAM, 64-bit floating-point precision, and the small, minor additions that come with workstation cards.

The R9 290X still has a higher maximum power consumption, over 300 watts, but the GTX 780 Ti was only about 10 watts apart at full load. Max temps are less than 10°C apart: the R9 290X runs at 95°C under full load and the GTX 780 Ti at 89°C? I only took a glimpse at the numbers. For games optimized for AMD there's only a 2 to 7 FPS difference, and for games optimized for NVIDIA there's a 10 to 20 FPS difference. Still, the GTX 780 Ti is supposed to be a theoretical 15% performance increase, yet the GTX Titan inches closer to the GTX 780 Ti at resolutions above 1600p.

$699.99, though I'm thinking more like $749.99 on Newegg because they need to make a profit, is probably what you're looking at paying on the first day of release. I remember when the GTX 680 first hit the shelves, Newegg jacked the price up from $599.99 to $699.99, or at least to somewhere in between those figures. Also, to take into account, with that price tag NVIDIA users are getting full DX11.2 support. Yeah... if you look at the nitty-gritty, NVIDIA consumers aren't getting a whole lot more back: just a GTX 780 refresh with the full set of Titan cores and some additional perks.

Take the frame-time variance graphs for the GTX 780 Ti: the curve band looks tighter, and the minimum extreme is a little lower than the R9 290X's in a single-card setup. This is something that should have been seen back with the GTX 680, 690, Titan, and 780. Lower frame times equate to higher FPS, and a tight frame-time band equates to less deviation and stalling.

AMD is now better at multi-GPU setups because scaling with the new PCIe-based CrossFireX is roughly 1.8 to 2.0x the FPS, while NVIDIA is, once again, the better single-card solution. It seems like AMD and NVIDIA are playing musical chairs, switching between these two factors.
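On the frame-time point, a purely illustrative example (made-up numbers, not anything from the review) of why a tight band matters even when the average FPS is identical:

# Illustrative only: two runs with the same average frame time, different smoothness.
smooth = [16.7] * 6                              # ms per frame, ~60 FPS every frame
spiky = [10.0, 10.0, 30.0, 10.0, 10.0, 30.0]     # ms per frame, same average, visible stutter

for name, times in (("smooth", smooth), ("spiky", spiky)):
    avg_fps = 1000.0 / (sum(times) / len(times))     # lower frame time = higher FPS
    worst_fps = 1000.0 / max(times)                  # the hitch you actually feel
    print(f"{name}: average {avg_fps:.0f} FPS, worst frame {worst_fps:.0f} FPS")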

One other thing: the GTX 780 Ti OCs past 1100 MHz. The ASUS ROG Ares II, a dual-GPU solution, has a turbo clock of 1100 MHz, and it can actually OC past 1200 MHz on the core and 1750 MHz on the memory with an ASIC quality of 71%... I expected more from the GTX 780 Ti. I'll be expecting more from the R9 290X with a better cooling solution, to go past the 1250 to 1300 MHz mark with a higher power envelope than its competitors.

@Btarunr and @W1zzard,

You should provide screenshots of GPU-Z for your testing setup. The reason is this: you don't really state it in your write-ups, but a lot of readers assume you're testing at PCIe 3.0 x16, while some other sites don't. There won't be a difference between PCIe 3.0 x16 and PCIe 2.0 x16 except for the bandwidth, but for your readers I think you should just add the screenshot to show which graphics card you're using and that it's running on that PCIe interface in the test. Just something minor to consider. The only site that uses a GPU-Z screenshot during its benches is Legitreviews.com.
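For what it's worth, readers can also sanity-check the link themselves. A small sketch using nvidia-smi's query fields (assuming a driver recent enough to expose them; on AMD, GPU-Z's Bus Interface readout serves the same purpose):

# Hedged sketch: ask nvidia-smi which PCIe generation/width the card is running at.
# Assumes nvidia-smi is on the PATH and the driver supports these query fields.
import subprocess

fields = "name,pcie.link.gen.current,pcie.link.width.current,pcie.link.gen.max,pcie.link.width.max"
result = subprocess.run(
    ["nvidia-smi", f"--query-gpu={fields}", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())   # e.g. "GeForce GTX 780 Ti, 3, 16, 3, 16" -> PCIe 3.0 x16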
 
One other thing: the GTX 780 Ti OCs past 1100 MHz. The ASUS ROG Ares II, a dual-GPU solution, has a turbo clock of 1100 MHz, and it can actually OC past 1200 MHz on the core and 1750 MHz on the memory with an ASIC quality of 71%... I expected more from the GTX 780 Ti. I'll be expecting more from the R9 290X with a better cooling solution, to go past the 1250 to 1300 MHz mark with a higher power envelope than its competitors.

Usually w1z does not increase the voltage in his reviews.

Be sure that this card, provided you get a good bin like the sample he reviewed, will hit 1400 MHz with 1.35v on the core.

But that comparison is trivial anyway, because you are looking at frequencies alone without taking architecture and core count into the equation.

On a side note, the 780 Ti 3GB just showed up here at our retailers. I'm kinda on the fence about waiting for the 6GB models; I'd only use more than 3GB with Skyrim, TBH.
 
You should provide screenshots of GPU-Z for your testing setup. The reason is this: you don't really state it in your write-ups, but a lot of readers assume you're testing at PCIe 3.0 x16, while some other sites don't. There won't be a difference between PCIe 3.0 x16 and PCIe 2.0 x16 except for the bandwidth, but for your readers I think you should just add the screenshot to show which graphics card you're using and that it's running on that PCIe interface in the test. Just something minor to consider. The only site that uses a GPU-Z screenshot during its benches is Legitreviews.com.

the oc page shows a gpuz screenshot, and we do test at x16 3.0, of course
 
I'm sure with their 512-bit bus they will do fine at higher resolutions.

Personally just think that talk about 4k is still very premature.

In making their decision, people should stick to what matters now, and realize that $300 is a lot of money and they are not using 4K.

I find it pointless when people bring up unimportant points just for the sake of arguing.

...

Oh I absolutely agree, I'm just a little bemused that on one hand people argue $300 savings, whilst using benchmarks done on $2800+ monitors to prove their point.
 
Point #1
NVIDIA shitting in the face of its customers again... I don't know. If NVIDIA had drawn the right conclusions, it could actually have launched the big Kepler Titan with all 2880 CUDA cores unlocked, but it didn't, so as not to compete with the Quadro K6000 (fully unlocked). For those who bought the Titan as the best NVIDIA could offer, that's a disappointment... pfff ¬ ¬


Point #2
Kepler won't improve much more, but the 290X's performance will go up with better drivers, so it might end up a technical tie.

In power consumption the 290X and the 780 Ti are almost equal:

techpowerup.com/reviews/NVIDIA/GeForce_GTX_780_Ti/25.html


Point #3
Still no actual stock... it hasn't even gone on sale yet XD

noticias3d.com/articulo.asp?idarticulo=1873&pag=30


Point #4
Mantle has yet to show what it can do; what if a simple R9 290 Pro gives the GTX 780 Ti a thrashing?

Point #5
Kepler is CPU-dependent, so if you don't have an i7-4770, you'd better forget about the FPS you see in the reviews XD
 
Oh I absolutely agree, I'm just a little bemused that on one hand people argue $300 savings, whilst using benchmarks done on $2800+ monitors to prove their point.

It's pretty obvious that they come here not to actually argue a sensible point but to criticize by any means necessary; it doesn't matter if it actually makes sense or not.
 
Oh I absolutely agree, I'm just a little bemused that on one hand people argue $300 savings, whilst using benchmarks done on $2800+ monitors to prove their point.

Also taking the price tag out of the equation, 4K just isn't worth it as a purchase yet.

The current monitors being offered suck overall; I would never pick up the ASUS monitor, not even for 1K€, considering what it offers.

Go take a look at the AnandTech review: it doesn't deliver on color (I admit I'm a bit picky on the subject) or responsiveness (that much latency could cause issues even in single-player games), and it's a bloody tiled display.

What 4K needs is a native panel that doesn't use MST over DisplayPort and does 4K at 60 Hz with a single stream; then I'd be tempted to purchase one.

Oh I forgot the new HDMI, that's important, too.
 
Has anybody ever done this test or review with Ultra Low Power State (ULPS) set to 0 in the registry? Personally, I think the down-throttling is due to the fact that the card doesn't need to push higher core frequencies or GPU loads to do the same amount of work.

In CrossFireX, in a lot of current games, the first GPU won't push full GPU load or core frequency because it isn't necessary in order to produce the output needed to finish drawing frames. In other games that are more intensive, both GPUs will push 100% at full stock frequencies, at 95°C tops.

Disabling ULPS would probably prove whether people are making a big deal out of the throttling anomalies for nothing, or whether there is merit to back up the claims. I'm leaning towards the possibility that it's just NVIDIA consumers blowing something simple and insignificant out of proportion...

About the GTX 780 Ti: nice card, but in some ways I feel as if NVIDIA kicked its consumers in the balls, the ones who bought a GTX Titan for $1,069 or $1,099 with only a partially enabled set of the 2,880 CUDA cores, or a K6000 with a whopping price tag of almost $4,000.00, when the 780 Ti only gives up 6 GB of VRAM, 64-bit floating-point precision, and the small, minor additions that come with workstation cards.

The R9 290X still has a higher maximum power consumption, over 300 watts, but the GTX 780 Ti was only about 10 watts apart at full load. Max temps are less than 10°C apart: the R9 290X runs at 95°C under full load and the GTX 780 Ti at 89°C? I only took a glimpse at the numbers. For games optimized for AMD there's only a 2 to 7 FPS difference, and for games optimized for NVIDIA there's a 10 to 20 FPS difference. Still, the GTX 780 Ti is supposed to be a theoretical 15% performance increase, yet the GTX Titan inches closer to the GTX 780 Ti at resolutions above 1600p.

$699.99, though I'm thinking more like $749.99 on Newegg because they need to make a profit, is probably what you're looking at paying on the first day of release. I remember when the GTX 680 first hit the shelves, Newegg jacked the price up from $599.99 to $699.99, or at least to somewhere in between those figures. Also, to take into account, with that price tag NVIDIA users are getting full DX11.2 support. Yeah... if you look at the nitty-gritty, NVIDIA consumers aren't getting a whole lot more back: just a GTX 780 refresh with the full set of Titan cores and some additional perks.

Take the frame-time variance graphs for the GTX 780 Ti: the curve band looks tighter, and the minimum extreme is a little lower than the R9 290X's in a single-card setup. This is something that should have been seen back with the GTX 680, 690, Titan, and 780. Lower frame times equate to higher FPS, and a tight frame-time band equates to less deviation and stalling.

AMD is now better at multi-GPU setups because scaling with the new PCIe-based CrossFireX is roughly 1.8 to 2.0x the FPS, while NVIDIA is, once again, the better single-card solution. It seems like AMD and NVIDIA are playing musical chairs, switching between these two factors.

I think you missed what I'm trying to find out.

With the R9 290x:

1 - take any benchmark you like that's able to push the card so that it throttles a lot @ stock settings (everything, fan included) and try to get its average clock speed throughout the test
2 - set the card's default clock to the value you found in point #1
3 - the card will now throttle far less and, in theory, it should produce the same result as with everything @ stock

If throttling lots of times makes it slower than throttling a few times in the above scenario, then the fact that it throttles so much due to the shoddy cooler is actually hampering performance: that's what I want to know.
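For step 1, a minimal sketch of the averaging, assuming you've captured a GPU-Z sensor log saved as CSV; the column name and file name here are guesses, so check your own log's header:

# Hedged sketch: average the core clock from a GPU-Z sensor log.
# The "GPU Core Clock [MHz]" header and the file name are assumptions.
import csv

def average_core_clock(log_path, column="GPU Core Clock [MHz]"):
    with open(log_path, newline="") as f:
        rows = list(csv.reader(f))
    header = [h.strip() for h in rows[0]]
    idx = header.index(column)                       # raises if your log uses a different header
    clocks = [float(r[idx]) for r in rows[1:] if len(r) > idx and r[idx].strip()]
    return sum(clocks) / len(clocks)

print(average_core_clock("gpuz_sensor_log.txt"))     # value to use as the new default clock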


NVIDIA's approach is different and I just don't know if it's possible to test :(

I sure hope it is possible: I'm very curious to know how the different technologies compare, efficiency-wise!
 
I think you missed what I'm trying to find out.

With the R9 290x:

1 - take any benchmark you like that's able to push the card so that it throttles a lot @ stock settings (everything, fan included) and try to get its average clock speed throughout the test
2 - set the card's default clock to the value you found in point #1
3 - the card will now throttle far less and, in theory, it should produce the same result as with everything @ stock

If throttling lots of times makes it slower than throttling a few times in the above scenario, then the fact that it throttles so much due to the shoddy cooler is actually hampering performance: that's what I want to know.


NVIDIA's approach is different and I just don't know if it's possible to test :(

I sure hope it is possible: I'm very curious to know how the different technologies compare, efficiency-wise!

1 and 2 are the same thing; you are taking an average of something, so why should the result be any different? 4+8 is the same as 6+6 [(4+8)/2 is the same as (6+6)/2].
 
But the NVIDIA chips are also on a much bigger die.
Although the count is one thing, it's really about transistor density. AMD's is much greater, and packing those transistors that much closer together while keeping them cool is a big achievement.

The problem is people have been bringing up 4K benchmarks in order to show the benefits of the 290(x) over the 780 Ti.
I'm sure with their 512-bit bus they will do fine at higher resolutions.

Personally just think that talk about 4k is still very premature.

In making their decision, people should stick to what matters now, and realize that $300 is a lot of money and they are not using 4K.

I find it pointless when people bring up unimportant points just for the sake of arguing.

Exactly. I'm not saying 4K is relevant in the market yet, but the engineering and processing power needed to push all those pixels is something to revel in, and the technical hurdle being cleared for that size of die deserves applause! ;)
 
1 and 2 are the same thing; you are taking an average of something, so why should the result be any different? 4+8 is the same as 6+6

If the throttling is efficient, then yes. If not, then no.

That's the very thing I want to know: the efficiency!
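To put the question in toy-model terms (the 2% per-transition penalty is completely made up, just to illustrate why matching averages don't automatically mean matching performance):

# Toy model only: two clock traces with the same 900 MHz average. If each up/down
# transition wastes a (made-up) 2% of its sample, the spiky trace does less work
# even though its average clock matches the steady one.
steady = [900] * 10            # MHz, constant
spiky = [1000, 800] * 5        # MHz, same average, constant spiking

def useful_work(trace, transition_penalty=0.02):
    work = 0.0
    for i, clock in enumerate(trace):
        changed = i > 0 and clock != trace[i - 1]
        work += clock * (1.0 - (transition_penalty if changed else 0.0))
    return work

print(useful_work(steady), useful_work(spiky))   # steady wins if transitions cost anything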
 
If the throttling is efficient, then yes. If not, then no.

That's the very thing I want to know: the efficiency!

OK, I'm starting to see your point.

What you are saying is that the card running at a constant (slightly) lower temperature is more efficient than doing a cycle of low-to-maximum spikes.
 
OK, I'm starting to see your point.

What you are saying is that the card running at a constant (slightly) lower temperature is more efficient than doing a cycle of low-to-high spikes.

Not temperature but speed, but yes: low-to-high spikes in speed.
 
It's a good question, but the switching is so fast that I don't think it would make a difference. Not to mention, you lose the peaks too by starting out lower.

Turn the fan up. :)
 