Monday, July 7th 2008
NVIDIA Plans to Nuke R700?
Let's face it, the ATI RV770 and its derivatives have become all the rage. Everybody loves this chip and wants a card based on it, be it the HD4850, the card that made NVIDIA slash its prices, or the HD4870, which rivals the GeForce GTX 260 at a decent price. In surveys conducted by several websites, be it TweakTown or Hexus.net, a majority of community members chose ATI as a brand over NVIDIA, roughly indicating that the HD4000 series has done an excellent job of repairing ATI's brand value.
Nothing (exciting) is going NVIDIA's way these days: its notebook graphics division has taken a beating over the recent faulty-parts issue, and NVDA stock has been a little volatile since the company announced it expects weaker earnings this financial quarter. Here's something to ponder: if NVIDIA predicts weaker earnings, how is it that talk suggests they have something to counter the R700, the part AMD has already claimed will "overwhelm the GeForce GTX 280"?
NordicHardware reports that something is in the making at NVIDIA. While nothing is certain, it could be a GPU built on a 55 nm fabrication process, perhaps the G200b (a 55 nm die-shrunk GeForce GTX 280?). Unreliable sources have long suggested that NVIDIA has a very shallow roadmap for the time being and that nothing revolutionary should be expected anytime soon, but contradictory reports followed, again from NordicHardware, saying NVIDIA could release DirectX 10.1 GPUs between late Q4 2008 and spring 2009. A point we would have ignored then, but which holds the key to this news, is that "NVIDIA could implement GDDR5 memory within 2008". Why didn't they tie the GDDR5 bit to the late-Q4 '08 / early-Q1 '09 DirectX 10.1 GPU part? Does it imply that in the very near future we could see a current-generation NVIDIA GPU with GDDR5 memory? Could the new product NVIDIA reveals sometime in September be a die-shrunk G200 with GDDR5 memory? Time will tell. What can be said for sure is that NVIDIA is not in a comfortable position right now, definitely not with the R700 dressing up to go to the office.

With inputs from NordicHardware
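To put the GDDR5 angle in perspective, here is a rough back-of-the-envelope comparison, sketched in Python using approximate retail specifications: the GTX 280's wide 512-bit GDDR3 bus versus the HD4870's narrower 256-bit GDDR5 bus.

def bandwidth_gbs(bus_width_bits, effective_rate_gtps):
    # Peak theoretical bandwidth: bytes per transfer * transfers per second.
    return bus_width_bits / 8 * effective_rate_gtps

# GeForce GTX 280: 512-bit bus, GDDR3 at ~1107 MHz (~2.214 GT/s effective).
gtx280 = bandwidth_gbs(512, 2.214)
# Radeon HD4870: 256-bit bus, GDDR5 at ~900 MHz (~3.6 GT/s effective).
hd4870 = bandwidth_gbs(256, 3.6)

print(f"GTX 280: {gtx280:.1f} GB/s")   # ~141.7 GB/s
print(f"HD4870:  {hd4870:.1f} GB/s")   # ~115.2 GB/s

Even with these rough numbers, a half-width GDDR5 bus lands within about 20% of a 512-bit GDDR3 bus, with a far cheaper board and smaller die perimeter, which is exactly what would make GDDR5 attractive on a die-shrunk, cost-reduced G200.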
60 Comments on NVIDIA Plans to Nuke R700
But I was joking. I prefer Nvidia, but I feel the best card for the money I ever bought was the 9800 XT back in the day. I miss that groundbreaking engineering :( (it was a big jump in technology)
With what shall we nuke it, dear NV, dear NV
With what shall we nuke it, dear NV, dear NV, with what?
...
G200b
...
R780 (Super RV700 X2)
...
D12U
saga continues. woops!
It will be tougher for them to regain the price/perf lead without losing profits, though. But it's too early to speculate on that as well. They are struggling on this front right now because of low yields, and it's impossible to predict whether GT200b will bring much better ones.
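To illustrate why die size matters so much for yields, here's a toy calculation with the classic Poisson yield model. The defect density is purely an assumption of mine; only GT200's ~576 mm^2 die area is public, and the 55 nm area is a naive optical-shrink estimate.

import math

# Classic Poisson yield model: yield = exp(-D0 * A).
# D0 (defects per cm^2) is an illustrative guess, NOT a known TSMC figure.
D0 = 0.4

def die_yield(area_mm2):
    return math.exp(-D0 * area_mm2 / 100.0)   # convert mm^2 -> cm^2

gt200_area = 576.0                        # ~576 mm^2 at 65 nm (public figure)
gt200b_area = gt200_area * (55 / 65)**2   # naive optical-shrink estimate, ~412 mm^2

print(f"GT200  (~{gt200_area:.0f} mm^2): ~{die_yield(gt200_area):.0%} yield")
print(f"GT200b (~{gt200b_area:.0f} mm^2): ~{die_yield(gt200b_area):.0%} yield")

Under these made-up assumptions the shrink roughly doubles the yield, which is why a 55 nm GT200b could change the economics even if nothing else about the chip changed.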
You have to compare the GTX260 to the HD4870. The higher-performance model will always draw more power, just as the HD4850 is more efficient than the HD4870, and the 8800 GT more efficient than the cards above it, despite G92 not being as efficient as newer chips. An HD4870 pushed to compete with the GTX280 would not increase its power consumption linearly but closer to exponentially; not by much, nothing exaggerated, but enough to make it clearly lose to the GTX280 in perf/watt.
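The non-linear part is easy to show with the usual dynamic-power relation, P ~ C * V^2 * f: higher clocks typically need higher voltage, so power rises much faster than performance. A toy sketch with made-up voltage/frequency pairs, purely to show the shape of the curve:

# Dynamic power scales roughly as P ~ C * V^2 * f, while performance scales
# roughly with f alone. The (clock, voltage) pairs below are invented purely
# for illustration.
points = [(1.00, 1.00), (1.10, 1.05), (1.20, 1.12), (1.30, 1.22)]

for f, v in points:
    power = v**2 * f              # relative dynamic power
    print(f"clock x{f:.2f} -> power x{power:.2f}, perf/W x{f / power:.2f}")

With these illustrative numbers, 30% more performance costs roughly 90% more power, so perf/watt degrades the higher up the curve a chip is pushed.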
Very little has been said about GT200b (confirmed by Nv; I couldn't care less about the rumors). We only know it will be 55 nm. A die-shrunk GT200, they call it. I remember a time when G92 and RV670 were also called just die-shrinks. And that was true, to an extent, but they were a lot more. This "news" tells us that GT200b might have GDDR5 and DX10.1, and that would mean it will be a lot more than just a die-shrink. Much more than what G92 was to G80.
As to why they didn't make G92 in the first place, and why they have repeated the "mistake" with GT200, I do have my theory. If you want my opinion, G80 was as it was and GT200 is as it is (same concept, more extreme) not because they were looking for gaming performance alone, but for other applications' performance too: CUDA and (IMO to a greater extent than most may think) Tesla. I'd bet Tesla is as important for Nvidia as the workstation market is for Intel and AMD. Enthusiasts here (and elsewhere) have a tendency to overestimate the desktop market and underestimate the business market. They will base their next Tesla on GT200 as they did with G80 (there was no G92 Tesla), and it requires some things that graphics don't require.
Those things (common claim: "you don't need a 512-bit interface, it doesn't affect gaming performance") are what made G80, R600 (which was also made with GPGPU in mind) and GT200 very big and expensive. A chip better balanced for gaming is coming soon to fix that, just as G92 came in the past. Until then you have what you have, and you can buy it or not. They don't have to please you all the time; they sell their product so you can buy it, but they don't owe you anything. Sincerely, people need to understand that Nvidia and Ati (and Intel, AMD, etc.) are companies doing business; they don't owe us anything.

In the case of Nvidia, GT200 is the product, which they made to implement in GTX cards as well as in Quadro and Tesla solutions; GT200b will be desktop only and fit that (and only that) role better. But GT200 on its own is a good product, better than the competition in many ways except the price. If you don't like it, don't buy it, but in no way is it comparable to the FX series. In fact, contrary to FX, the GTX cards are faster than the competition while being better in perf/watt, heat output and overclocking. Even by only die-shrinking it (without the aforementioned optimization that IMO is inevitable and was on Nv's mind from the start) they could fix the perf/price, because it will probably allow both lower prices and better clocks.
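To put a (very rough) number on "big and expensive", here's a naive dies-per-wafer and cost-per-good-die sketch. The wafer price and defect density are assumptions of mine; only the ~576 mm^2 die area is public, and the 55 nm area is the same optical-shrink estimate as above.

import math

WAFER_AREA = math.pi * (300 / 2)**2   # 300 mm wafer; edge loss ignored
WAFER_COST = 5000.0                   # assumed wafer price in USD (a guess)
D0 = 0.4                              # same illustrative defect density

def cost_per_good_die(area_mm2):
    dies_per_wafer = WAFER_AREA / area_mm2      # naive packing, no scribe lines
    yld = math.exp(-D0 * area_mm2 / 100.0)      # Poisson yield model
    return WAFER_COST / (dies_per_wafer * yld)

print(f"GT200 at 65 nm (~576 mm^2):  ~${cost_per_good_die(576):.0f} per good die")
print(f"GT200b at 55 nm (~412 mm^2): ~${cost_per_good_die(412):.0f} per good die")

Under these invented inputs the cost per good die drops by more than half, which is the sense in which even a plain die-shrink could fix perf/price.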
In the end, all that I said is speculation, as we don't know anything, and I don't necessarily believe it. I just wrote it to counter your speculation. That way we stay neutral: your message is that it will never be viable, and mine is that it could kinda own again. The real thing will be somewhere in the middle. Unlike you, I'm comparing the right cards. You have to compare cards on the same performance level, no matter which brand they are from or where their place in the lineup is. Comparing the GTX260 to the HD4850 is like comparing a sports car to a utility vehicle, and the 280/4870 like comparing a Ferrari Modena to a Maranello. Of course they consume more; they are faster, and it's a lot more expensive (in $ and in consumption) to increase performance the higher you go. Same with GPUs. There are physical limits and constraints on the perf/watt front, and because of that, the higher you aim the worse it gets. Nvidia aimed higher AND used a bigger fab process; it would be natural if they had much worse perf/watt, but in reality they have the better one. 55 nm will only increase that advantage. BTW, that advantage seems to be architectural, as it was present in G92/RV670 too. Look at the charts: G92 owns.