After NVIDIA scored a big technological victory over AMD in performance-per-watt and raw performance with its "Kepler" GPU architecture, and with AMD's lukewarm response to the GeForce GTX 600 series coupled with its intention not to launch its next GPU generation until much later this year (think Christmas), it was only natural for NVIDIA to milk its existing GK104 silicon for another generation of GeForce GTX products, with a few superficial additions. The GeForce GTX 770 we have with us today is the first of many such products in NVIDIA's pipeline over the next few months.
The 2880-core GK110 was always going to be NVIDIA's most flexible chip. With GK104 beating AMD's "Tahiti" in single-GPU performance and efficiency, the GK110 never had to feature in the GeForce GTX 600 series; it made its consumer debut with the GeForce GTX TITAN. With only 2688 of its cores enabled and a $1,000 price tag, the TITAN went on to make the $650 commanded by the 2304-core GeForce GTX 780 look good. The rest of NVIDIA's GeForce GTX 700 series product stack is in for a pseudo-upgrade. The GeForce GTX 770 looks a lot like the GeForce GTX 680 on paper, and it is rumored that the GeForce GTX 760 Ti could bear a similar resemblance to the GeForce GTX 670, the GTX 760 to the GTX 660 Ti, and so on. I call this card a pseudo-upgrade because its modest specification increases come at roughly the same price: the GTX 770 is priced on par with the GTX 680, and other models in the series could follow similar pricing trends.
To be fair to NVIDIA, the GeForce GTX 770 isn't a complete and utter rebranding of the GeForce GTX 680 (à la GeForce 8800 GT to 9800 GT). Sure, it is driven by the same GK104 silicon with the exact same core configuration of 1536 cores, 128 TMUs, 32 ROPs, and a 256-bit wide memory interface; but it features a different reference-design PCB with a stronger VRM to support higher clock speeds, as well as the new GPU Boost 2.0 technology. The similarities the GTX 770 bears to the GTX 680 are, in that sense, more along the lines of those between the GeForce 8800 GTS-512 and the GeForce 9800 GTX.
The GeForce GTX 770 ships with the highest reference clock speeds of any NVIDIA GPU to date. Its core runs at 1046 MHz with a GPU Boost frequency of 1085 MHz, and its blisteringly fast 7.00 GHz memory churns out 224 GB/s of bandwidth. The card carries 2 GB of memory, but 4 GB variants could arrive pretty soon.
In this review, we have with us Palit's premium GeForce GTX 770 offering, the JetStream OC. Featuring factory-overclocked speeds, the card uses a custom-design PCB with a stronger VRM and a new cooling solution. The cooler appears meatier than the one on the GeForce GTX 680 JetStream and should handle the GTX 770's increased thermal load better; a better cooling solution helps sustain GPU Boost states.
GTX 770 Market Segment Analysis

| Card | Shader Units | ROPs | Graphics Processor | Transistors | Memory Size | Memory Bus Width | Core Clock | Memory Clock | Price |
|---|---|---|---|---|---|---|---|---|---|
| GeForce GTX 570 | 480 | 40 | GF110 | 3000M | 1280 MB | 320 bit | 732 MHz | 950 MHz | $250 |
| GeForce GTX 660 Ti | 1344 | 24 | GK104 | 3500M | 2048 MB | 192 bit | 915 MHz+ | 1502 MHz | $280 |
| GeForce GTX 670 | 1344 | 32 | GK104 | 3500M | 2048 MB | 256 bit | 915 MHz+ | 1502 MHz | $370 |
| Radeon HD 7970 | 2048 | 32 | Tahiti | 4310M | 3072 MB | 384 bit | 925 MHz | 1375 MHz | $380 |
| GeForce GTX 770 | 1536 | 32 | GK104 | 3500M | 2048 MB | 256 bit | 1046 MHz+ | 1753 MHz | $399 |
| Palit GTX 770 JetStream | 1536 | 32 | GK104 | 3500M | 2048 MB | 256 bit | 1150 MHz+ | 1753 MHz | $425 |
| HD 7970 GHz Ed. | 2048 | 32 | Tahiti | 4310M | 3072 MB | 384 bit | 1050 MHz | 1500 MHz | $450 |
| GeForce GTX 680 | 1536 | 32 | GK104 | 3500M | 2048 MB | 256 bit | 1006 MHz+ | 1502 MHz | $430 |
| GeForce GTX 780 | 2304 | 48 | GK110 | 7100M | 3072 MB | 384 bit | 863 MHz+ | 1502 MHz | $650 |
| GeForce GTX Titan | 2688 | 48 | GK110 | 7100M | 6144 MB | 384 bit | 837 MHz+ | 1502 MHz | $1020 |
Architecture
As mentioned in the introduction, the GeForce GTX 770 has a lot in common with the GeForce GTX 680. Both are based on the same 28 nm GK104 silicon, circa March 2012. The chip implements the "Kepler" microarchitecture, with four independent graphics processing clusters (GPCs), each holding two streaming multiprocessors (SMXs) with 192 CUDA cores apiece. Each SMX also carries 16 texture memory units (TMUs), for a chip-wide total of 128 TMUs. The GK104 features a 256-bit wide GDDR5 memory interface, which raised the bar for memory clock speed with the GTX 680, the first card to ship with a 6 GHz memory clock. With a slightly stronger VRM, the GTX 770 raises that bar again with a 7 GHz out-of-the-box memory clock, good for 224 GB/s of bandwidth.
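The quoted bandwidth figure follows directly from the bus width and the effective data rate; a quick sketch (the helper name is ours, not from any vendor tool):

```python
def peak_bandwidth_gbps(bus_width_bits: int, effective_clock_ghz: float) -> float:
    """Peak memory bandwidth in GB/s: bus width in bytes times effective transfers/s.

    GDDR5's "effective" clock already counts its quad-pumped data rate,
    which is how a 1753 MHz real clock is marketed as 7.0 GHz effective.
    """
    return (bus_width_bits / 8) * effective_clock_ghz

# GTX 770: 256-bit bus at 7.0 GHz effective -> 32 bytes x 7.0 GT/s
print(peak_bandwidth_gbps(256, 7.0))  # 224.0
# GTX 680: same bus at 6.0 GHz effective
print(peak_bandwidth_gbps(256, 6.0))  # 192.0
```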
The new reference-design PCB allowed NVIDIA to give the GK104 a VRM strong enough to maintain those high clock speeds, and to deploy its brand-new GPU Boost 2.0 technology, which makes higher GPU core clock speeds available to demanding applications by taking into account not only power draw but also GPU temperature. Lower operating temperatures are rewarded with better boosting opportunities, which creates a real incentive to buy cards with cooling solutions that outperform NVIDIA's reference design.
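To illustrate the idea only (NVIDIA's actual boost tables, thresholds, and step sizes are proprietary; every name and number below is a made-up illustration), a toy model of a power-and-temperature-governed boost might look like this:

```python
def boost_clock_mhz(base, max_boost, power_w, power_limit_w, temp_c, temp_target_c):
    """Toy sketch of GPU Boost 2.0-style behavior, NOT NVIDIA's algorithm.

    Each limit (power, temperature) yields a headroom fraction in [0, 1];
    the key idea of Boost 2.0 is that the *tighter* of the two governs
    how far above base clock the GPU may run.
    """
    power_headroom = min(max(1.0 - power_w / power_limit_w, 0.0), 1.0)
    temp_headroom = min(max(1.0 - temp_c / temp_target_c, 0.0), 1.0)
    headroom = min(power_headroom, temp_headroom)  # tighter constraint wins
    return base + (max_boost - base) * headroom

# A cooler-running card boosts higher at the same power draw:
print(boost_clock_mhz(1046, 1085, 115, 230, 40, 80))  # more headroom
print(boost_clock_mhz(1046, 1085, 115, 230, 70, 80))  # less headroom
```

This is why a beefier cooler such as Palit's JetStream design translates into sustained boost clocks: it keeps the temperature term from becoming the binding constraint.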
GeForce Experience
With last week's GeForce 320.18 WHQL drivers, NVIDIA released the first stable version of GeForce Experience. The application simplifies game configuration for PC gamers who aren't well-versed in the technobabble required to get a game running at the best possible settings on their hardware. GeForce Experience is aptly named because it completes the experience of owning a GeForce graphics card; the PC, being the best possible way to play video games, should not be any harder to use than a gaming console.
With your permission, the software scans your system for installed games and recommends optimal settings that give you the highest possible visual detail at consistent, playable frame rates. It is also tuned to dial back settings that carry a big performance cost for little visual gain. You could make these changes yourself in-game, probably through trial and error, but you can trust GeForce Experience to pick reasonably good settings if you'd rather not. I imagine the software will be particularly useful for gamers who aren't familiar with the intricacies of game configuration yet want the best possible levels of detail.
The simplicity of inserting a disc or cartridge and turning on the device is what attracts gamers to consoles. Gamers who pick the PC platform should hence never be faulted for a lack of knowledge of graphics settings, and that is what GeForce Experience addresses. Price is a non-argument: $300 gets you a console, but the same $300 can also get you a graphics card that turns your parents' Dell desktop into a gaming machine that eats consoles for breakfast. GeForce Experience keeps itself up to date by fetching settings data from NVIDIA each time you run it, and it keeps your GeForce drivers up to date as well.
I gave GeForce Experience a quick try with Battlefield 3, and it picked a higher anti-aliasing mode that was still playable, so it does value image quality. It also takes the rest of the system into account, not just the GPU.