Wednesday, May 21st 2014
ASUS Unveils the GeForce GTX 780 STRIX 6 GB Graphics Card
ASUS today took the wraps off a new graphics card powered by NVIDIA's GK110 GPU: a custom GeForce GTX 780 that packs 6 GB of memory (double the amount found on 'regular' GTX 780s) and uses a DirectCU II cooling solution that can be completely silent in certain situations. Named the GeForce GTX 780 STRIX, the card will not spin its two fans when idle or under light workloads (as long as the GPU temperature stays below a threshold), but will power them up as soon as GPU load increases.
The GeForce GTX 780 STRIX has 2304 CUDA cores, a 384-bit memory interface, SLI support, dual-DVI, HDMI and DisplayPort outputs, and will be available in two versions: one with stock clocks and one with an overclocked GPU. ASUS is promising more information 'soon'.

Update [22/5]: ASUS has officially announced the STRIX GTX 780 and revealed the specifications of the OC version of the card. Carrying the model number STRIX-GTX780-OC-6GD5, the card will have GPU Base/Boost clocks of 889/941 MHz and a memory clock of 6008 MHz.
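The announced specifications let one sanity-check the card's peak memory bandwidth. A minimal sketch, assuming the quoted 6008 MHz figure is the effective GDDR5 data rate (as is conventional in spec sheets):

```python
def memory_bandwidth_gbs(effective_clock_mhz: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s from effective data rate and bus width."""
    # bytes/second = (transfers/second) * (bits per transfer) / 8
    return effective_clock_mhz * 1e6 * (bus_width_bits / 8) / 1e9

# STRIX GTX 780 OC: 6008 MHz effective GDDR5 on a 384-bit bus
print(round(memory_bandwidth_gbs(6008, 384), 1))  # → 288.4
```

That 288.4 GB/s matches the reference GTX 780, since the memory subsystem is unchanged; only the GPU clocks and framebuffer size differ on this card.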
35 Comments on ASUS Unveils the GeForce GTX 780 STRIX 6 GB Graphics Card
Second: to run 4K at ultra with decent frame rates you have to run SLI or CFX, and when you compare the two you see the VRAM limits come into play very easily. In every game in that review at high resolutions (4K+), the AMD cards pull way ahead, especially in some cases like BF4 (albeit they used Mantle, which is an advantage, so you can skip that one if it so pleases). So this is anything but an "artificial construct", because it's pretty apparent there's a limit somewhere when the 780 Ti was clearly supposed to be the faster card compared to an R9 290X. An SLI configuration versus a CFX configuration should result in the SLI being faster, since two-GPU scaling remains roughly on par between the two; however, in these scenarios the NVIDIA cards are completely limited and lose by up to a 44% performance difference. Even in TomsHardware's review of the 295X2 in CFX, with the results being closer (and I'm not comparing quadfire right now, only 2-card setups), the 295 or 290X setup is still ahead in almost all cases.
******************** AN ASIDE: NOT GERMANE TO THE ACTUAL DISCUSSION ********************
Actual Crysis 3 scaling: CrossfireX: 82%... SLI: 54%, at the same level of framerate shown in the HardOCP review
...yet even with a 50+% scaling advantage and a 33% larger framebuffer, the dual 290Xs are only 12.5%-22.7% faster in Crysis 3 according to the article you just linked to... yet single cards are within a few percentage points of each other. Now why would that be? If your supposition about the value of the AMD cards' larger framebuffer holds water, then the only other possible answer is that AMD's driver isn't working very well. Peachy. If clocks aren't indicative, why bother asking the question in the first place?
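To put rough numbers on the scaling argument above: only the 82%/54% scaling figures come from the posts; the single-card framerate below is a hypothetical stand-in, since the two single cards were said to be within a few percent of each other.

```python
def dual_gpu_fps(single_fps: float, scaling: float) -> float:
    """Framerate of a two-card setup given single-card fps and a scaling factor."""
    return single_fps * (1 + scaling)

# Hypothetical: both single cards at ~40 fps, scaling values from the thread
cfx = dual_gpu_fps(40, 0.82)   # 72.8 fps with 82% CrossFireX scaling
sli = dual_gpu_fps(40, 0.54)   # 61.6 fps with 54% SLI scaling
print(round(cfx / sli - 1, 3))  # relative CFX advantage ≈ 0.182 (~18%)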
In all actuality, the GTX Titan should not be ahead, stock for stock, of a GTX 780 Ti in SLI in Crysis 3, yet it is... That's a weaker card from the same company, with lower core clocks, that is ahead. Not by much, but it's a difference, and on a weaker card. In fact, the picture you yourself posted shows that as well... Odd that you would still make this claim with obvious proof right there in your face, especially because that's a one-card comparison.
Not all games or settings cause VRAM to be a limiting factor, but some do, which is very apparent at 4K. Even if only a handful of games right now are limited by VRAM, the future will only bring more demanding games that need more of it. On top of that, every review site shows different results for each game, which of course depends on the settings. But if the limit is being hit, then it's causing at least SOME performance loss somewhere, even if it's not much. Or it could be that one card hits its ceiling before that really comes into play... As I've said, no one is going to game at 4K with one GPU because, as every review site says, it's pretty much not feasible without dropping quality in most games. So the realistic options are a 780 Ti SLI setup or a 290X CFX setup/295X2 for the gamer crowd, unless you want to splurge on Titan Blacks, which cost significantly more. The other alternative is to wait and buy 780 Ti or 780 6 GB cards, which, shockingly, will fix the issue...
Get over trying to bring a fanboy argument into this discussion; I've said that the 6 GB alleviates the issue multiple times in multiple posts on this same forum. I could not care less in this instance about an AMD vs. NVIDIA debate; I care more about the necessity of more than 3 GB on GPUs in this new 4K trend, which will be alleviated thanks to EVGA, Palit, ASUS, and the others making the 6 GB cards. They obviously saw the need for it, so they are going to release it.
The cards aren't "stock to stock"; they were all overclocked to their highest stable frequencies, as was stated in numerous forums. If you're looking at taking clocks out of the equation, then this is the chart you should be looking at, since it compares framebuffer and core count.
And we've come full circle. I never said that you couldn't find a situation where the larger framebuffer would provide better numbers - a juggling of full-screen AA and texture settings could easily manufacture that scenario. My point is that the GPU runs out of processing power before the vRAM limitation becomes the limiting factor - unless you see sub-30 fps as indicative of real-world usage in Crysis 3. Do you? And you see this future arriving before the next series of cards, which will undoubtedly be better suited to this exact scenario? Given that Pirate Islands and GM204 are slated to arrive in around six months, and GK110's successor will be taped out in the next couple of weeks or so, that seems like a very optimistic viewpoint - more so given that a 4K adopter probably won't have any qualms about upgrading to the newest and most powerful boards. And I've yet to see any actual proof to back up the assertion.
What needs to be shown is the same GPU with two different framebuffer capacities (say 3 GB and 6 GB) being benchmarked at playable framerates - at least for the larger-framebuffer card... and in more than a single benchmark. I doubt very many people buy a 4K screen and multiple high-end GPUs for a single title.
At the moment it isn't really anything more than the occasional outlier result... if that. Well, marketing saw the need for it, if nothing else. Strange that NVIDIA OK'd 6 GB 780s the moment that Sapphire's 8 GB 290X showed up at CeBIT, don't you think? Sapphire announced an 8 GB 290X on the 13th of March; EVGA announced their 6 GB 780 eight days later. Odd that 4K gaming has been a widespread talking point for some time, yet it suddenly became imperative to have 3 GB+ from both IHVs and premiere single-vendor AIBs within days of each other.
So, feel free to post links to any gaming benchmark that highlights the difference in framebuffer only (say, 3 GB vs 6 GB, or 4 GB vs 8 GB) using the same GPU at the same clocks. A comparison should eliminate as many variables as possible. Anything else comes under the heading of opinion - and while you're welcome to air yours, as is everyone else, it hardly constitutes proof.
Which I knew would happen; AMD has been pro-4K for a while and NVIDIA jumped on the same bandwagon. The only difference is that 3 GB is right on the edge and not enough for all games, which makes 4K gaming bad on the 3 GB counterparts. Multiple NVIDIA partners have announced 6 GB edition cards, yet only Sapphire has announced an 8 GB R9 290X. That probably has something to do with the fact that 4 GB has not been a limit nearly as often as 3 GB has. Opinion??? I just showed games using up the 3 GB framebuffer in my first posted video, which means it's not enough... Whatever, I'm done here and won't read whatever is posted next. I have already made my point...
In the video (Digital Storm Titan Black 4K), three games and a benchmark are used to show the relative performance of the three cards in both single- and multi-card systems.
The performance order for all single-GPU configs in the games is as follows:
Titan Black
780ti
Titan
Now, the order for the multi-GPU configs is as follows, until Crysis 3:
Titan Black
780ti
Titan
In Crysis 3, the order changes on the multi-GPU side to:
Titan Black
Titan
780ti
The Titan overtakes the 780 Ti in a multi-GPU setup when it was behind in a single-GPU config. This is obviously indicative of something either going horribly wrong or something holding the 780 Ti back. The logical conclusion is that the game exceeded something the 780 Ti does not have; and since the GPU and core clocks are generally lower on a Titan (unless they overclocked it a lot further - but it still shows lower FPS in everything else), the only major contributing factor left is the 3 GB difference on the card.
This indicates a need for more VRAM...
1. The video just shows a GPU-Z screen for the stock card; it doesn't follow that stock clocks were used - especially when the Digital Storm reviewer actually posted the overclocks used.
2. You're also referencing the wrong Digital Storm review from the chart I posted. No, I mean what I referenced: simply adding a combination of enough MSAA (or SSAA) AND texture settings (or full dynamic lighting or similar) to fill the smaller framebuffer without choking the GPU, which would stall out both versions of the card. You still have to make the distinction between usage and allocation. This subject has been reiterated more times than I can remember (Here's one.... Here's another one) in response to the "running out of vRAM" doom mongers. vRAM usage monitors don't report actual usage; they report vRAM allocation - that is, actual usage plus whatever the application wants to cache. Typically, if the framebuffer is larger, the app takes the opportunity to cache more resources. It isn't uncommon for an app to max out whatever framebuffer it finds.
I'd also note that Rome II's engine actively adjusts quality as well as caching to tailor itself to the available framebuffer, so I wouldn't take any 3 GB usage scenarios as gospel.
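The usage-versus-allocation distinction can be illustrated with a toy model. All numbers and the caching policy here are illustrative assumptions, not how any real driver or monitoring tool works:

```python
def reported_vram_mb(working_set_mb: int, cache_wanted_mb: int, framebuffer_mb: int) -> int:
    """Toy model: monitors report the working set PLUS whatever the app caches,
    and the app caches as much as the framebuffer allows."""
    return min(framebuffer_mb, working_set_mb + cache_wanted_mb)

# Same game, same settings: 2.5 GB genuinely needed, 3 GB of cacheable assets
print(reported_vram_mb(2500, 3000, 3072))  # 3 GB card: reads as "maxed out" (3072)
print(reported_vram_mb(2500, 3000, 6144))  # 6 GB card: reports 5.5 GB "used" (5500)
```

In this model, both cards run the same workload, yet a monitor would show the 3 GB card pegged at its limit and the 6 GB card "using" 5.5 GB - neither number tells you the game actually needs more than 2.5 GB.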
As for the rest, I still don't see any benchmarks comparing 3 GB and 6 GB versions of the same GPU-based card in a multi-GPU configuration, showing an advantage at a playable framerate.
Even if you look at a stock 780 in 2-way SLI compared to the Titan, the Titan fails.
In terms of overall gaming performance, the graphical capabilities of the Nvidia GeForce GTX 780 SLI are significantly better than the Nvidia GeForce GTX Titan.
www.game-debate.com/gpu/index.php?gid=1760&gid2=1582&compare=geforce-gtx-780-sli-vs-geforce-gtx-titan
The GTX 780 SLI has 288.4 GB/sec greater memory bandwidth than the GeForce GTX Titan, which means that the memory performance of the GTX 780 SLI setup is massively better than the GeForce GTX Titan. (This is the stock 3 GB card, NOT the new 6 GB variation.)
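That "288.4 GB/sec greater" figure only makes sense as the site naively summing the bandwidth of the two 780s in SLI, since a single GTX 780 and a Titan have the identical 288.4 GB/s memory subsystem. A quick check (noting that in practice each GPU in SLI only accesses its own memory, so the aggregate is a marketing number):

```python
GTX_780_BW = 288.4   # GB/s: 6008 MHz effective GDDR5 on a 384-bit bus
TITAN_BW = 288.4     # GB/s: same memory configuration

sli_aggregate = 2 * GTX_780_BW           # naive sum across two cards
print(round(sli_aggregate - TITAN_BW, 1))  # → 288.4, the quoted difference
```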
I am in the camp that believes we are going to see a greater number of applications and games that will require, and thrive with, more vRAM.
I am also an advocate of VGA Hotwire, which allows for some great overclocking in areas that NVIDIA has locked down for the average user.
SLI is not for everyone; however, in this array it would be greater than the sum of its parts. The link I posted is, I think, self-explanatory. However, if you care to dispute any or all of it, I'm all ears.
James
For some reason, some people seem to think that these new vRAM-taxing games and apps will become critical within the time frame of current architectures (Kepler, Maxwell, Volcanic Islands, Pirate Islands). I am not in that camp. I believe that exposing vRAM limitations will require a greater lead-in time. 4K is niche, made all the worse by Windows font issues. Game image quality levels aren't dictated by the high end; they are dictated by the graphics ability of the cards that represent the bulk of sales.
By the time 4K and higher image quality levels (path/ray tracing, voxel-based global illumination, etc.) become more widely accepted (look at the time it took for 1080p to become mainstream), we will have a whole new series of architectures based upon high-bandwidth memory (HBM) that make these current top cards look like entry level. Note that these current cards will be largely consigned to the history books - and to budget gamers buying second hand - when GPUs built on 20nm/16nm packing wide-I/O memory (which is available to OEMs/ODMs now) arrive in a year or so.

If you are of the opinion that vRAM limitation will become critical before then, then you are right - be aware, though, that the vast majority of GPUs being sold are 1 GB and 2 GB boards. Do you really think developers are going to alienate 90+% of the user base without allowing the hardware to mature?

Thanks, I'm well aware of the advantages and otherwise of SLI (I've used dual/triple cards since the GeForce 6800 series), and of CrossFireX for that matter, and I'm well acquainted with the GTX 780 in particular, considering it is what I'm presently using.