# NVIDIA Readies GeForce GTX 750 Ti Based on "Maxwell"



## btarunr (Jan 17, 2014)

NVIDIA's next-generation GPU architecture, codenamed "Maxwell," will debut this February with the unexpectedly positioned GeForce GTX 750 Ti. The card will launch on February 18, to be specific. Maxwell will introduce a host of new features for NVIDIA, beginning with Unified Virtual Memory, which lets the GPU and CPU share the same memory. Such a feature is already implemented in current CUDA, but Maxwell could be designed to reduce the overhead involved in making it work. The next big feature is that Maxwell GPUs will embed a 64-bit ARM CPU core based on NVIDIA's "Project Denver." This CPU core will allow the GPU to reduce its dependency on the system's main processor in certain GPGPU scenarios. Pole-vaulting the CPU's authority in those scenarios could work to improve performance.

Getting back to the GeForce GTX 750 Ti, NVIDIA's aim is simple: to see how "Maxwell" performs on the existing, proven 28 nanometer silicon fab process before scaling it up on future 20 nm nodes with bigger chips. Given its name, we expect it to be positioned between the GTX 760 and the GTX 660 in terms of gaming performance, but we won't be surprised if it falls into an entirely different league with GPGPU. There are no specifications at hand.





*View at TechPowerUp Main Site*


----------



## Assimilator (Jan 17, 2014)

Maxwell in production in February already, even though there was nothing about it at CES? Not sure how much I believe this.


----------



## RCoon (Jan 17, 2014)

After reading that I thought I was reading an AMD article. It all sounds so very familiar.


----------



## BiggieShady (Jan 17, 2014)

btarunr said:


> we won't be surprised if it falls into an entirely different league with GPGPU



If it sits between the GTX 660 and GTX 760 in pricing and turns out to be much better in compute, coin miners are going to be all over it ... aaand it's gone


----------



## Fiery (Jan 17, 2014)

I'm sorry, but the GTX 750 Ti is not based on Maxwell, but Kepler:

http://www.techpowerup.com/gpudb/2462/geforce-gtx-750-ti.html


----------



## Cheeseball (Jan 17, 2014)

Whether it's based on Kepler or Maxwell, it looks like a worthy successor to the GeForce GTX 650 Ti BOOST, which gave the HD 7850 a run for its money.


----------



## Ghost (Jan 17, 2014)

Fiery said:


> I'm sorry, but the GTX 750 Ti is not based on Maxwell, but Kepler:
> 
> http://www.techpowerup.com/gpudb/2462/geforce-gtx-750-ti.html


That's older information.


----------



## LAN_deRf_HA (Jan 17, 2014)

What's the current state of 20 nm nodes? I was expecting the real deal to show up not far into 2014. This makes me think they might be aiming for later in the year.


----------



## Fiery (Jan 17, 2014)

Ghost said:


> That's older information.



You mean older than a rumour?   The TPU GPU database entry at least has a PCI device ID backing up the information that it is based on Kepler (GK106), while the Maxwell-related rumours have nothing to back them up. Not even any public beta ForceWare includes an .INF file that lists any Maxwell GPUs.


----------



## ensabrenoir (Jan 17, 2014)

Fiery said:


> I'm sorry, but the GTX 750 Ti is not based on Maxwell, but Kepler:
> 
> http://www.techpowerup.com/gpudb/2462/geforce-gtx-750-ti.html



There is a difference between "based on" and "built on".


----------



## Casecutter (Jan 17, 2014)

BiggieShady said:


> and turns out to be much better in compute, coin miners are going to be all over it ... aaand it's gone


Interesting thinking, you might have something… A rushed 28 nm product that hashes close to, say, a 280X, but much cheaper and more efficient, would stand the current situation on its ear. It would be a good move for NVIDIA: basically switch all the underutilized GK106 and GK104 starts to this and hope they can maintain a light sprinkle on what's still like a parched desert. But does such thinking take into account that NVIDIA would've needed to know, even before November's "gold rush," that there could be such a surge in demand for compute MH/s, and schedule the "starts" to anticipate it? I mean, it's been like 6-8 weeks since the frenzy started; that's not enough time to build even a drop of what will be sucked up in hours. This might not be a paper launch, but it might as well be, because yes, they'll be… gone! And then they're in the same boat as AMD, which they really, really want to be, even if it doesn't advance their "gaming initiatives." I will say the MSRP will tell us how good a miner it'll be; if it's expensive, like $250, we'll have an inkling of what's to come.


----------



## btarunr (Jan 17, 2014)

Fiery said:


> I'm sorry, but the GTX 750 Ti is not based on Maxwell, but Kepler:
> 
> http://www.techpowerup.com/gpudb/2462/geforce-gtx-750-ti.html



That's based on old/OEM info. It was around the time of our older GTX 750 Ti article: http://www.techpowerup.com/190588/nvidia-geforce-gtx-750-ti-detailed.html.


----------



## lemonadesoda (Jan 17, 2014)

Obvious by omission: where is DP 1.2 support for QHD?


----------



## Nordic (Jan 17, 2014)

BiggieShady said:


> If it sits between the GTX 660 and GTX 760 in pricing and turns out to be much better in compute, coin miners are going to be all over it ... aaand it's gone


A GTX 780 Ti, the top-end card, produces ~430 KH/s, and an AMD 7870 produces about ~425 KH/s. I would be surprised if this card was able to mine better than a 780 Ti, honestly. It may be Maxwell, but it is mid-range.


----------



## BiggieShady (Jan 17, 2014)

james888 said:


> A GTX 780 Ti, the top-end card, produces ~430 KH/s, and an AMD 7870 produces about ~425 KH/s. I would be surprised if this card was able to mine better than a 780 Ti, honestly. It may be Maxwell, but it is mid-range.



That is assuming that Maxwell doesn't bring any architectural changes. The whole mining performance gap is due to some operations that GCN does in a single cycle but that a Kepler CUDA core needs several cycles for. It would be foolish of NVIDIA not to address these kinds of optimizations with a new GPU architecture.
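For anyone curious what those operations are: scrypt's inner loop is Salsa20/8, which hammers on 32-bit adds, XORs, and rotates. GCN can do a rotate in a single instruction, while Kepler generally has to emulate it with two shifts plus an OR. A rough Python sketch of the work each core is being asked to do (illustrative only, not actual mining code):

```python
MASK32 = 0xFFFFFFFF

def rotl32(x, n):
    """32-bit rotate-left: a single instruction on GCN,
    typically two shifts plus an OR on Kepler."""
    return ((x << n) | (x >> (32 - n))) & MASK32

def salsa_quarter(a, b, c, d):
    """One quarter-round of Salsa20, the hot loop inside scrypt.
    Every line is an add, a rotate, and an XOR."""
    b ^= rotl32((a + d) & MASK32, 7)
    c ^= rotl32((b + a) & MASK32, 9)
    d ^= rotl32((c + b) & MASK32, 13)
    a ^= rotl32((d + c) & MASK32, 18)
    return a, b, c, d
```

Multiply that quarter-round by thousands of iterations per hash and the per-rotate cycle count starts to dominate.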


----------



## Casecutter (Jan 17, 2014)

james888 said:


> I would be surprised if this card was able to mine better than a 780 Ti, honestly. It may be Maxwell, but it is mid-range.


Isn't it more an equation of MH/s vs. efficiency vs. cost? It's also a question of CPUs and mobos; you don't want that equipment to give less hashrate when loaded up. There's a premium for C-F mobos if you can find them, while I figure there are still plenty of cheaper 4-slot SLI-capable mobos, which makes the equivalent system more attractive since you can use the cheap Intel CPUs that are more abundant. Go look for a low-watt Sempron; they're gone too!


----------



## 15th Warlock (Jan 17, 2014)

james888 said:


> A GTX 780 Ti, the top-end card, produces ~430 KH/s, and an AMD 7870 produces about ~425 KH/s. I would be surprised if this card was able to mine better than a 780 Ti, honestly. It may be Maxwell, but it is mid-range.



The days when a Titan mined 350~400 KH/s and a 780ti mined 400+ KH/s are long in the past, with Cudaminer my Titans easily mine 512~535 KH/s each:







The GT430 is used to drive my display while the Titans mine.

And yes, I know this is still a far cry from my 290X's performance (900+ KH/s) at a much lower price, but with each successive release CudaMiner increases the mining efficiency of CUDA-based cards, so don't dismiss Maxwell so easily. Chances are NVIDIA is going to catch up to AMD when it comes to mining efficiency, and Maxwell could well be the right architecture to do so.


----------



## xorbe (Jan 17, 2014)

Must every graphics card thread turn into a mining thread?


----------



## newtekie1 (Jan 17, 2014)

15th Warlock said:


> And yes, I know this is still a far cry from my 290X's performance (900+ KH/s) at a much lower price, but with each successive release CudaMiner increases the mining efficiency of CUDA-based cards, so don't dismiss Maxwell so easily. Chances are NVIDIA is going to catch up to AMD when it comes to mining efficiency, and Maxwell could well be the right architecture to do so.



And remember, the whole point of Maxwell is that it includes a 64-bit CPU on die to handle some of the workload that would normally be offloaded to the main system CPU. This is supposed to improve CUDA performance quite a bit (how much is yet to be seen, of course).


----------



## Nordic (Jan 17, 2014)

Casecutter said:


> Isn't it more an equation of MH/s vs. efficiency vs. cost?


Yes. Since it is a mid-range card, even with better hardware I personally would not expect it to surpass NVIDIA's current top end in compute. I brought up the 7870 as an example, as I think that is the card this one would compete with.

@15th Warlock, thanks for that. I did not know CudaMiner had made such gains, as I am already invested in AMD hardware.




If this card can put out ~450 KH/s for ~$225 at ~160 W, it would be a good alternative to AMD, comparable to the 7870. If it can do any better than that on any of those three fronts, this thing will be quite a special card.
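Folding those three numbers into a quick back-of-the-envelope comparison (the 750 Ti figures are the speculation above; the 7870 TDP and price here are my own rough assumptions for illustration):

```python
# Speculative 750 Ti numbers from the post above vs. rough 7870
# figures (the 7870 TDP and price are assumptions, not quotes).
cards = {
    "750 Ti (speculative)": {"khs": 450, "watts": 160, "usd": 225},
    "7870 (assumed)":       {"khs": 425, "watts": 175, "usd": 230},
}

for name, c in cards.items():
    per_watt = c["khs"] / c["watts"]
    per_usd = c["khs"] / c["usd"]
    print(f"{name}: {per_watt:.2f} KH/s per watt, {per_usd:.2f} KH/s per dollar")
```

At those numbers the speculated card would edge out the 7870 on both hashrate-per-watt and hashrate-per-dollar, which is exactly the "special card" scenario.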


----------



## Casecutter (Jan 17, 2014)

james888 said:


> If this card can put out ~450 KH/s for ~$225 at ~160 W, it would be a good alternative to AMD, comparable to the 7870. If it can do any better than that on any of those three fronts, this thing will be quite a special card.


Exactly, then factor in the ease and abundance of low-cost Intel CPUs and 4-slot PCIe mobos, and the cost of putting a quad-GPU setup together for less. Let's say that whole setup hashes somewhere in the range of a tri-280X build, and does it with less power while being easier to source parts for and less expensive to throw together. Although I suppose the actual cards aren't in "SLI or C-F," so you just need a cheap mobo with four slots.


----------



## Eagleye (Jan 17, 2014)

> Basically the performance gap is a product of AMD’s focus on integer compute performance, and Nvidia’s relative lack of interest in that aspect of GPU performance. To be clear this is not a software issue, but rather an architectural design trade-off that Nvidia made to de-emphasize integer compute in order to meet their other design goals.



https://semiaccurate.com/2014/01/15/amd-gpu-good-mining/


----------



## T4C Fantasy (Jan 17, 2014)

There may be two 750 Tis; one is definitely a GK106...


----------



## RyneSmith (Jan 17, 2014)

Casecutter said:


> Exactly, then factor in the ease and abundance of low-cost Intel CPUs and 4-slot PCIe mobos, and the cost of putting a quad-GPU setup together for less. Let's say that whole setup hashes somewhere in the range of a tri-280X build, and does it with less power while being easier to source parts for and less expensive to throw together. Although I suppose the actual cards aren't in "SLI or C-F," so you just need a cheap mobo with four slots.



Would be some nice cost savings... haha

Considering I've just bought four 280Xs to mine with and have a setup worth probably around $2,200 or so... four 750 Tis would be a nice bit of savings if they are priced at $250-300...


----------



## TheoneandonlyMrK (Jan 17, 2014)

Slightly chuckle-worthy dreaming going on: so one enabled 64-bit ARM core is now going to allow highly parallel algorithms to run better than AMD's arch?
I doubt it, as it's aimed at gamers; like GK104, its compute will be compromised, or NVIDIA are daft.


----------



## Nordic (Jan 18, 2014)

Casecutter said:


> Exactly, then factor in the ease and abundance of low-cost Intel CPUs and 4-slot PCIe mobos, and the cost of putting a quad-GPU setup together for less. Let's say that whole setup hashes somewhere in the range of a tri-280X build, and does it with less power while being easier to source parts for and less expensive to throw together. Although I suppose the actual cards aren't in "SLI or C-F," so you just need a cheap mobo with four slots.


What makes you think this will perform the same as a 280X?

I base my KH/s assumption on a mid-range card, even a Maxwell one, not performing better than the top-end Kepler card.


----------



## acekombatkiwi1 (Jan 18, 2014)

I went with 4x R9 270s last week because they only pull 138 W each and give me 445 KH/s.


----------



## Death Star (Jan 18, 2014)

In the world of GPGPU, having access to a CPU through a very low-latency channel is a very appealing prospect. First off, it eliminates some of the normally required PCIe transfers, which are slow and costly. That's obviously always a good thing. Unlike GPUs, CPUs are very good at general-purpose computation and at making decisions. If you can offload a decent portion of the decision-making to the CPU, that's a few percent of extra performance on *basic* GPGPU kernels, and potentially much more on sophisticated kernels.

About a month ago I restructured my finite-difference time-domain code by moving as many decision-making tasks (mostly if-statements) CPU-side as possible. In doing so I removed about 15 if-statements, but added 10-50 extra arithmetic operations per if-statement removed. The end result was a 4% performance increase, which, given the simplicity of the kernels to begin with, is pretty damn good. It will be interesting to see how the addition of the on-board ARM affects all of this.
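The transformation described here, trading branches for arithmetic, looks roughly like this in Python (the real thing would be a CUDA kernel; this just shows the pattern, it is not the actual FDTD code):

```python
def select_branchy(x, a, b):
    # On a GPU, threads that take different sides of this branch
    # diverge and serialize.
    if x > 0:
        return a
    return b

def select_branchless(x, a, b):
    # Same result via arithmetic: the comparison becomes a 0/1
    # multiplier, so every thread runs the same instruction stream.
    m = int(x > 0)
    return m * a + (1 - m) * b
```

Each removed branch costs a comparison, two multiplies, and an add, which is the same kind of "extra arithmetic per if-statement" trade-off mentioned above.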


----------



## TheGuruStud (Jan 18, 2014)

Translation: 20nm is so bad that TSMC told us to go F ourselves for production. We're desperately trying to get yields up, but every wafer super, extra, mega sucks. And we gotta paper launch a card (maxwell in general) really far ahead this time LOLZ.


----------



## NC37 (Jan 18, 2014)

Man invents CPUs
Man invents GPUs
CPUs become GPUs
GPUs become CPUs
Computers become self aware!!

Man invents Chobits


----------



## TheoneandonlyMrK (Jan 18, 2014)

Death Star said:


> In the world of GPGPU, having access to a CPU through a very low-latency channel is a very appealing prospect. First off, it eliminates some of the normally required PCIe transfers, which are slow and costly. That's obviously always a good thing. Unlike GPUs, CPUs are very good at general-purpose computation and at making decisions. If you can offload a decent portion of the decision-making to the CPU, that's a few percent of extra performance on *basic* GPGPU kernels, and potentially much more on sophisticated kernels.
> 
> About a month ago I restructured my finite-difference time-domain code by moving as many decision-making tasks (mostly if-statements) CPU-side as possible. In doing so I removed about 15 if-statements, but added 10-50 extra arithmetic operations per if-statement removed. The end result was a 4% performance increase, which, given the simplicity of the kernels to begin with, is pretty damn good. It will be interesting to see how the addition of the on-board ARM affects all of this.


So 4%, eh? Add 10-20% and this poorly named card is not getting anywhere near a 7870 in compute, never mind a 280X.
It should however play games really well, and with NVIDIA's recent cross-licence with Intel they might have something brewing. Then again, Intel did keep x86 off the licence, so it may yet be tricky, especially since Intel seems to have access to NVIDIA's patents and much more leverage in foundry tech than NVIDIA.
This is panic-status releasing by NVIDIA. Like I said ages ago, the big proper Maxwell (with 1 TB/s-bandwidth stacked memory) is Q4 2014 at the earliest, likely 1H 2015, on 20 nm, unless they have fully ditched TSV-attached memory plans, which would make for a lot of BS in NVIDIA's earlier PR slides.


----------



## Death Star (Jan 18, 2014)

theoneandonlymrk said:


> So 4%, eh? Add 10-20% and this poorly named card is not getting anywhere near a 7870 in compute, never mind a 280X.



The key to the 4% was just that it was gained by getting rid of a few if-statements. There are plenty of other operations used in more sophisticated kernels that would glean higher performance gains from offloading to a CPU.

It will be interesting to see how much it actually helps with more complicated GPGPU kernels, but yeah, I doubt it will approach anything like a 7870 or 280X in the vast majority of circumstances. Seems like more of a niche card.


----------



## JDG1980 (Jan 18, 2014)

acekombatkiwi1 said:


> I went with 4x R9 270s last week because they only pull 138w each and give me 445KH/s.



What settings did you use for that? My 7870 (with the same number of SPs) tops out at about 350 KH/sec.


----------



## Crap Daddy (Jan 18, 2014)

Funny how this thread went from maxwell to mining.


----------



## Nordic (Jan 18, 2014)

Crap Daddy said:


> Funny how this thread went from maxwell to mining.


I would rather it be about maxwell as that is far more interesting.


----------



## TheoneandonlyMrK (Jan 19, 2014)

james888 said:


> I would rather it be about maxwell as that is far more interesting.


Indeed


----------



## Nordic (Jan 19, 2014)

I've known about maxwell for quite a long while now and what it brings to the table. Anyone know what amd's R300 series cards are going to bring?


----------



## Bjorn_Of_Iceland (Jan 19, 2014)

I want the whole thing, not some half assed derivative lol


----------



## NeoXF (Jan 19, 2014)

15th Warlock said:


> The days when a Titan mined 350~400 KH/s and a 780ti mined 400+ KH/s are long in the past, with Cudaminer my Titans easily mine 512~535 KH/s each:
> 
> 
> 
> ...



So we'll have mining-induced GPU price inflation on top of nVidia's typical one? Nice!


----------



## Frick (Jan 19, 2014)

How do you mine inductance? Mining is so confusing.


----------



## 15th Warlock (Jan 19, 2014)

NeoXF said:


> So we'll have mining-induced GPU price inflation on top of nVidia's typical one? Nice!



You know it's coming... Newegg is already inflating prices of AMD cards, and hoarding their most sought after cards, artificially restricting supply so they can create little gems like this beauty here:


*Mining Kit UKO-K140K: AMD Sempron 2.8GHz Single Core, 6 X Radeon R9 290X 4GB, 4GB RAM, 2 X 1500W 80 PLUS Gold Power Supply*
All this can be yours for only: $4,119.99!!

It's only a matter of time before Nvidia steps into the ring, and then it's off to the races so to speak, dunno yet what part Maxwell will play in this game... 

I'm all for gaming- and computing-inspired innovation, and I really hope NVIDIA is focusing on that with Maxwell, but Pandora's box has been opened, and the fact is there's a big market in the mining business right now; sooner rather than later we'll have two contenders in this arena.

I'm particularly looking forward to what Maxwell will bring in terms of power efficiency; not very excited about the desktop parts, but mostly about the mobile parts!


----------



## Xzibit (Jan 19, 2014)

Benchmark leaks from Coolaler.com


----------



## BiggieShady (Jan 20, 2014)

15th Warlock said:


> You know it's coming... Newegg is already inflating prices of AMD cards, and hoarding their most sought after cards, artificially restricting supply so they can create little gems like this beauty here:
> 
> 
> *Mining Kit UKO-K140K: AMD Sempron 2.8GHz Single Core, 6 X Radeon R9 290X 4GB, 4GB RAM, 2 X 1500W 80 PLUS Gold Power Supply*
> ...



Those bastards, they did it ... save $250 LOL
At least they could throw in 6 PCIE risers and a piece of a PC gamer's soul


----------



## xorbe (Jan 20, 2014)

Xzibit said:


> Benchmark leaks from Coolaler.com



If true, that's a big gap from 750 Ti to 760 yeah?


----------



## TheoneandonlyMrK (Jan 21, 2014)

Is this looking like the other side of the Tegra K1 binning tree to anyone else?
Like a poor-CPU-core bin but a max-GPU-array bin.


----------

