Tuesday, May 27th 2014
GeForce GT 740 Pictured, Arrives Later This Week
NVIDIA is planning to drag its entry-level GPU lineup through 2014 with the 'new' GeForce GT 740. Reports suggest that the card is a re-branding of the 28 nm GK107-based GeForce GT 640, with higher clock speeds. The chip will feature 384 CUDA cores, 32 TMUs, 16 ROPs, and a 128-bit wide GDDR5 memory interface holding 1 GB of memory; variants with 2 GB will also be available. The card draws all its power from its PCI-Express 3.0 x16 bus interface; however, some custom designs could feature a single 6-pin power connector. NVIDIA AIC partners will have their custom-design cards out on day one. Expreview revealed pictures of two such cards, by Galaxy and Gainward.
Source: Expreview
Comments on GeForce GT 740 Pictured, Arrives Later This Week
You can guess why...
Because the GK208 version of the GT 640 has half the ROPs and TMUs. I hope it won't get much hotter, because I'm after a decent low-profile card that isn't too loud.
Talking about ROP performance impact, I can share some personal experience (though I highly recommend contacting game developers if you want more advanced knowledge of this subject).

If you use your GPU for general-purpose computing (let's say a simple C++ AMP application that multiplies two 512×512 square matrices; a rough sketch follows below), you don't explicitly direct your compiler to make any use of ROPs. If we take a look at the whole graphics pipeline, we'll see there's no atomic task that we can force to execute on ROP logic with GPGPU instruments. Keep in mind that this doesn't mean the ROPs aren't being used when you run your GPU-accelerated app; we just don't say anywhere in our code that we care about them.

In specific tasks (like programming game engines), we generally use the HLSL programming language, which is not that different in terms of its compile-time product. Basically, it allows us to go to a "lower" level while writing code, so we can directly manipulate every single piece of GPU logic available (yes, ROPs too). Once the code is compiled, we can use "profiler" tools provided by the GPU manufacturers, or by Microsoft, to see which parts of our code run slower than expected, and optimize from there.

The exact same thing goes for TMUs: they're just a specific part of GPU logic that does its job the way it was designed to. We can use them explicitly if we need to, or just forget about them when writing more general-purpose code. The impact will differ: I'd say we won't benefit from extra ROPs/TMUs in GPGPU apps, but in games there will be a small difference.
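To make that C++ AMP case concrete, here's a minimal sketch, assuming 512×512 float matrices stored row-major in std::vectors (matmul_amp and the variable names are just illustration, not anyone's actual code). Notice that nothing in it mentions ROPs or TMUs; the runtime simply schedules the kernel across the shader cores:

```cpp
#include <amp.h>
#include <vector>
using namespace concurrency;

// Multiplies two n x n matrices on the GPU: c = a * b.
void matmul_amp(const std::vector<float>& a,
                const std::vector<float>& b,
                std::vector<float>& c, int n)
{
    array_view<const float, 2> av(n, n, a);
    array_view<const float, 2> bv(n, n, b);
    array_view<float, 2> cv(n, n, c);
    cv.discard_data(); // c is overwritten entirely, so skip copying it in

    // One thread per output element; this runs on the CUDA cores /
    // shader ALUs. There is no way to ask for ROPs or TMUs here.
    parallel_for_each(cv.extent, [=](index<2> idx) restrict(amp) {
        float sum = 0.0f;
        for (int k = 0; k < n; ++k)
            sum += av(idx[0], k) * bv(k, idx[1]);
        cv[idx] = sum;
    });
    cv.synchronize(); // copy the result back to host memory
}
```

Compile it with a C++ AMP-capable toolchain (Visual C++ 2012 or later) and it should run on any DirectX 11-class accelerator, GT 640/740 included.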
www.techpowerup.com/gpudb/894/geforce-gtx-650.html
That's worse than the GTX 750, which is often an excellent purchase at $110-120; we all realize it's the smarter buy over the GTX 650, which still has an abundance of SKUs in the channel! What can they do with all that inventory?
The main bulk of 640s is still falling between $75-100, with many 640s making do with DDR3. So if they down-badge/rebrand the 650, they aren't moving the market unless they can be price-competitive with the R7 250 / 250X, which spar fairly competitively performance-wise with the GTX 650. AMD has Oland pricing between $75-110 and hasn't had any need to move those prices from MSRP, as they have been excellently competitive with the GTX 640-650. Then there are the 250X and R7 260, which do have 6-pins but still sit in that $80-110 bracket, so let's see how good NVIDIA's 28 nm pricing can get... I'd say MSRP has to be $80 just to have any serious contention.