Monday, January 16th 2012
NVIDIA Kepler Inbound for March-April
NVIDIA's next high-performance GPU, part of the GeForce Kepler family, which will attempt to restore the company's performance leadership in the consumer graphics segment, is slated for a March-April launch, according to a VR-Zone report. At CES 2012, NVIDIA focused on its Tegra product line, demonstrating its applications in smartphones, tablets, and even automobiles, but chose to avoid talking about its GeForce family.
According to the report, NVIDIA wants to avoid a paper launch like AMD's, which launched its Radeon HD 7970 on December 22, 2011, but whose market availability was non-existent until two weeks later, on January 9, 2012. NVIDIA wants to ensure that the GeForce product based on its new high-performance GPU is available in the market on launch day, which is pinned somewhere between late March and early April. On April 8, Intel will launch its third-generation Core processor family.
Source:
VR-Zone
82 Comments on NVIDIA Kepler Inbound for March-April
I just wish they'd offer more software goodies, like the FXAA and SMAA algorithms alongside the already-available MLAA. And I wish NVIDIA would do the same. I know these features can be injected into any game, but doing so risks a VAC ban, and, well, I don't want that.
Alas, it won't happen, as they are greedy and want profit here and now.
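For the curious, the common core of these post-process filters (MLAA, FXAA, SMAA) is a luma-contrast edge detect over the finished frame, followed by a blend along the detected edges. Here's a rough sketch in Python/NumPy, purely illustrative: the real filters run as GPU pixel shaders and do directional edge searches rather than a plain neighborhood blend.

```python
# Minimal sketch of the luma edge-detect step shared by FXAA/MLAA-style
# post-process AA. Illustrative only, not the real shader.
import numpy as np

def luma(rgb):
    # Perceptual luminance weights, as used for edge detection
    return rgb @ np.array([0.299, 0.587, 0.114])

def edge_aa(frame, contrast_threshold=0.0625):
    """frame: float array of shape (H, W, 3) with values in [0, 1]."""
    l = luma(frame)
    # Local contrast: max minus min luma over the 4-neighborhood
    n = np.roll(l, 1, 0); s = np.roll(l, -1, 0)
    w = np.roll(l, 1, 1); e = np.roll(l, -1, 1)
    hi = np.maximum.reduce([l, n, s, w, e])
    lo = np.minimum.reduce([l, n, s, w, e])
    edges = (hi - lo) > contrast_threshold
    # Where an edge is found, blend the pixel toward its neighborhood average
    blurred = (frame + np.roll(frame, 1, 0) + np.roll(frame, -1, 0)
               + np.roll(frame, 1, 1) + np.roll(frame, -1, 1)) / 5.0
    out = frame.copy()
    out[edges] = 0.5 * frame[edges] + 0.5 * blurred[edges]
    return out
```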
And before you bash me and say "oh no, 27" is not mainstream, 22" is," remember this: a few years ago a 19" monitor cost you 350 euro and that was mainstream; now a 27" monitor costs 270 euro.
You don't use monitor size to talk about performance; it makes no sense. A 23.6" panel can have the same resolution as a 24" or a 27" one, and a system performs the same on different-sized monitors at the same resolution.
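A quick sanity check of that point: the GPU renders pixels, not inches. Comparing the two resolutions mentioned in this thread:

```python
# Pixel count is what the GPU actually pushes; panel size is irrelevant.
resolutions = {"1920x1080": 1920 * 1080, "2560x1440": 2560 * 1440}
base = resolutions["1920x1080"]
for name, pixels in resolutions.items():
    print(f"{name}: {pixels:,} pixels ({pixels / base:.2f}x the 1080p load)")
# 1920x1080: 2,073,600 pixels (1.00x the 1080p load)
# 2560x1440: 3,686,400 pixels (1.78x the 1080p load)
```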
As for the news, it's not too late, and I'm glad NVIDIA announced it; I really wanna jump ship this time :)
This www.overclockers.co.uk/showproduct.php?prodid=MO-011-HO&groupid=17&catid=1120&subcat= is the cheapest 2560x1440 27" monitor I can find. After that you're heading towards £500.
And the reason 1080p is the standard is HD TV, so there's not much we can do there.
Looking for a non-1080p 27" is just stupid; 27" 1080p monitors start at 216 euro here.
On topic: bring on the leaked benchmarks!!! :rockout:
or "The Kepler" GK100 the mother of all nVidia gpu ???
I do believe the mobile market is huge, and it will only continue to grow, but I'm on board with what Bill said once, yes, that Bill. It went something like: "A device on the go, a tablet/bigger device at work, and a huge device at home (referring to desktops/TVs+consoles)." I like that idea, since on the desktop you have no restrictions on battery life or power, aside from your PSU, of course. You get the full glory of your CPU/GPU, unlike on a laptop, plus the keyboard and mouse, a detailed monitor, and far better graphics than a console.
I still think the PC market is HUGE, massive. I would love it if NVIDIA had it right about the PC gaining market share over the consoles once again. While I liked and enjoyed consoles back in the day, they are just too damn restrictive nowadays.
So far NVIDIA's GPUs have had a bigger die size and also more memory chips due to the wider bus.
So in a price war AMD has the upper hand. This has been their strategy since the HD 3000 series.
Just think about Tahiti: it has 4.3 billion transistors, much more than Cayman's 2.7 billion and also much more than GF100/GF110's 3 billion. GPGPU is expensive (64-bit support, memory management, etc.), something nobody ever noticed or cared about. Well, AMD has finally matched NVIDIA on GPGPU features and capabilities, and the result is a chip with 1.3 billion more transistors (+40%) than GF110 that is only 15% faster, with 15% faster clocks; clock for clock they are roughly equal. With a little help from 28 nm and its good clock scaling, NVIDIA could in theory release a hypothetical GF111 at 900 MHz that would be as fast as the HD 7970. That's 3 billion transistors vs 4.3 billion; now imagine a 3.6 billion or a 4.5 billion transistor chip.
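Spelling out the arithmetic in that argument (the transistor counts and the ~15% figures are the ones quoted above, not independently verified):

```python
# Rough check of the transistor/clock argument above; all figures are the
# commenter's claims, not official numbers.
gf110_transistors = 3.0e9   # GeForce GTX 580 (GF110)
tahiti_transistors = 4.3e9  # Radeon HD 7970 (Tahiti)

extra = tahiti_transistors - gf110_transistors
print(f"Tahiti carries {extra / 1e9:.1f}B more transistors "
      f"(+{extra / gf110_transistors:.0%}) than GF110")

# Claim: the HD 7970 is ~15% faster overall with ~15% higher clocks,
# so per clock the two chips come out roughly even.
perf_gain = 1.15
clock_gain = 1.15
print(f"Per-clock ratio: {perf_gain / clock_gain:.2f} (~parity)")

# Hypothetical GF111: a GF110-class die at 900 MHz instead of the
# GTX 580's 772 MHz would gain roughly that same margin from clocks alone.
print(f"900 / 772 = {900 / 772:.2f}x clock speed")
```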
I know it's not something many people here want to hear, and that it may even hurt, but I think NVIDIA is in for an easy win this time around. They'd have to screw up badly not to score this one.
Although, to be quite fair, he does work for NVIDIA, and he has stated many times over that FXAA is entirely open source for anyone to use and adapt.
Just to clarify, I run a 6870 and inject FXAA into almost every game I play, as MLAA is trash in comparison.
(BTW I own a lot of cards from both teams, red and green)
Price to performance was big a generation ago. Now it's going to be very hard for AMD to keep that crown, as they have increased pricing; look at the 7970 (which I have). I paid $600+, which is what I paid for each of my 580 3 GBs, so I expect high-end Kepler to be around $650 and faster than the 7970. Where's the price to performance now? Also, it's comical that you blame your purchasing ignorance on the company. Anyone who did ANY reading about the 6870 could have told you to keep your 5xxx card. Want new tech? You'll have to pay for it, as always.
The core configuration and the actual performance of the 7970 don't impress me at all, TBH.
Kepler will be an improvement over the 7970 in performance, but honestly not in power or production costs, unless they're really able to pull a rabbit out of the hat. So it still means complex power sections and coolers to engineer and package onto a PCB.
Per chip, I can't see TSMC giving any price break to NVIDIA. The only way this works is if NVIDIA purchases entire wafers, its architecture sorts much better, and it gets better yields. I don't know how either company contracts its purchasing, or whether either side has an architectural advantage in production at 28 nm. They're both probably struggling with TSMC; the only upside is that NVIDIA will be reaping production improvements for arriving later, while AMD is five months ahead in sales. It probably just evens out on the bottom line by year's end.
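To see why die size and yields dominate that calculus, here's a toy cost model using the classic dies-per-wafer approximation. The wafer price and defect density below are made-up placeholders, not actual TSMC figures:

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Classic approximation for gross die candidates on a round wafer."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def cost_per_good_die(wafer_cost, die_area_mm2, defect_density_per_cm2):
    # Simple Poisson yield model: larger dies catch more defects
    yield_rate = math.exp(-defect_density_per_cm2 * die_area_mm2 / 100)
    good_dies = dies_per_wafer(die_area_mm2) * yield_rate
    return wafer_cost / good_dies

# Illustrative only: $5,000/wafer and 0.5 defects/cm^2 are invented figures
for name, area in [("~365 mm^2 die (Tahiti-class)", 365),
                   ("~500 mm^2 die (big-GPU-class)", 500)]:
    print(f"{name}: ${cost_per_good_die(5000, area, 0.5):.0f} per good die")
```

Even at identical wafer pricing, the bigger die pays twice: fewer candidates fit on the wafer, and each one has a lower chance of coming out defect-free.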