Friday, August 23rd 2013
NVIDIA Working on GK110-based Dual-GPU Graphics Card?
The GeForce GTX 295 showed that it's possible to place two GPUs with ludicrously high pin counts next to each other on a single PCB and, if you get a handle on their thermals, even deploy a 2-slot cooling solution. NVIDIA might be motivated to create such a dual-GPU graphics card based on its top-end GK110 chip to counter AMD's upcoming "Volcanic Islands" GPU family, or so claims a VideoCardz report, citing sources.
The chips on the card needn't be configured, or even clocked like a GTX Titan. The GTX 780 features just 2,304 of the chip's 2,880 CUDA cores, for example. Speaking of 2,880 CUDA cores, the prospect of NVIDIA developing a single-GPU GeForce product with all streaming multiprocessors on the GK110 enabled, the so-called "Titan Ultra," isn't dead. NVIDIA could turn its attention to such a card if it finds AMD's R9 2xxx within its grasp.
Source:
VideoCardz
43 Comments on NVIDIA Working on GK110-based Dual-GPU Graphics Card?
Now a dual GPU GK110 card would be crazy! Price would be right around ~$1300 I would think.
Not that AMD is doing any different. Year after year of rehashes for both. Still have very minimal reason to upgrade from my 460s.
Those were great days in some ways. Of course, the driver situation was abhorrent, especially on the ATI side, and things were a lot more iffy about games working properly. Plus, SLI (where it existed) was incredibly iffy.
Still, those were fun times. Now we have cards sitting on the same tech with little updates for years. Years. Not hard to imagine a future where nVidia doesn't even bother to release a rename/rebranding and in that year, I doubt AMD would even feel compelled to release something new at the end of the year like they do right now.
I suspect AMD is going to try and ride the wave of gaming interest from new consoles with their new hardware. If that's the case, then expect cards to hit the $299, $399 and/or $499 price points as "counterpoints" to the upcoming next gen consoles. They'll make a big, big push to argue they're in all the consoles and they're in all the important ports (except Watch Dogs or AC4) from next gen's launch. It's a fair argument. I'm sure they'll have a great bundle of games for their coupon system they've constructed by November, too.
That's assuming the rumors are accurate and not just smoke 'n mirrors to unnerve nVidia/customers who might go 7xx series. I find it hard to imagine AMD releasing a $550 card when a $400 console is being released and called, "next gen." It seems like that'd be the price point they'd target aggressively. "Why spend that $400 on a PS4 when you can buy a Radeon with 6GB and Titan-like performance? Plus, you get access to three new games we've just lined up in our bundle. Including Battlefield 4."
That'd be a compelling argument and it would light a fire under nVidia's butt.
Die shrinks in the past were mostly painless, whereas nowadays not only do you have to develop new architectures, you also have to take into account all the intricacies of new nodes - I hope you haven't forgotten how painful it has recently been for both NVIDIA and AMD to move to new nodes.
Then maybe you'll recall that modern GPUs are extremely complex beasts containing up to 6 billion transistors. How many of you could design such things? How many of you even have knowledge of the high-level APIs used to access them? Direct3D, OpenCL, CUDA, OpenGL, etc.
Modern GPUs have almost surpassed the x86/x86-64 architecture in terms of complexity, and have definitely surpassed it in transistor count.
Then what about TDP? Do you really think NVIDIA and AMD want to create beasts consuming over 500 W of electricity? Yeah, there'd be a market for them - they'd probably move 10K units and go bankrupt.
Still, you'll be screaming, "give us moar!" (sic)
Would probably be absurdly crippled by a "barely enough" PWM section, like my Titans.
There's no fun in having a powerhouse if you can't tweak it, especially because these cards are rarely purchased by people who do not tweak.
So it's barely enough to get the card through stock ;)
videocardz.com/45403/nvidia-to-launch-more-cards-this-year-maxwell-in-q1-2014
I think it's interesting that it is taking two generations for AMD to catch up to the performance of the GK110 chip. As most are aware, this chip was supposed to be the GTX 680, but GK104 was cheap and competitive enough to keep GK110 on the shelf. All the same, the chips in the current crop of GTX 780s have a manufacturing date of Summer 2012, so they could have come out at any time over the past year.
Imagine the leap over GK110...
Absolutely no way in hell will that happen unless it's in minute volumes / paper launch. They have major problems with it. I'd be amazed if it hits the channel in competitive volume before Q3.
That's if it even makes it ... they've cancelled a hell of a lot of stuff in the last 18 months.
P.S. If that Videocardz article is right about first Maxwell cards being 28nm, it'll be hugely watered down. Maxwell was always intended to be 20nm. I simply can't imagine this is true. Everything on the grapevine says yields will be horrific for Maxwell ... but if 28nm is true it probably means it's totally unviable in its 20nm form - rejigging it for 28nm would probably mean a complete redesign, which would be really expensive and unlikely to be very competitive.
It should come as no surprise to anyone they have options up their sleeve to rain on AMD's impending parade. Equally it's unlikely they have been sitting on their hands the whole time counting their money and waiting for AMD to respond.
Things are finally going to get interesting again rather than yet another discount and an ever expanding bundle of games. :rolleyes:
Maxwell isn't coming for a long time, and is reportedly suffering even worse yields than ever before ... and I don't believe the VC article about 28nm.
As for Maxwell, I've yet to see any definitive information regarding yields, performance, or cancellations - with the notable exception of Charlie D, who has a habit of declaring new GPUs (for page views) followed by the imaginary GPUs then being cancelled (also for page views). Old ploy. Most people with any sense don't buy into it. Feel free to share with the group any actual information.
As for yields in general, if AMD are supposedly delivering "Titan-beating-performance" then they are almost certainly going to have to do it with the largest die yet used on an ATI/AMD GPU product - it certainly should eclipse the 420mm^2 of the R600, so I wouldn't bank on any miracles in pricing. AMD's initial pricing of the HD 7990 (and HD 7970 before it) and the FX-9590 should be seen as an indicator of what to expect vis-à-vis the given performance.
I've seen it drop as low as 1.261 V when set to 1.32 V.
That's not a good VRM. A good one, for me, was the reference Volterra on my two 6990s.
People seem to forget that GPU designs are developed YEARS before release. Both NVIDIA and AMD have products in varying stages that won't come out until 2016. Along the way, processes get refined and advancements occur, but the bottom line is that these products take a long time to go from drawing board to consumer. It's funny when I read comments suggesting the poster thinks, "oh, a product came out from company A, now company B will immediately do this to respond." That's not how it works.
As far as a dual GK110 goes, I think it would be an incredible performer.