Sunday, May 17th 2009
NVIDIA GT300 Already Taped Out
NVIDIA's upcoming next-generation graphics processor, codenamed GT300, is on course for launch later this year. Its development appears to have crossed an important milestone, with news emerging that the company has already taped out some of the first engineering samples of the GPU, under the A1 batch. The development is significant since this is the first high-end GPU to be designed on the 40 nm silicon process. Both NVIDIA and AMD, however, are facing issues with the 40 nm manufacturing node of TSMC, the principal foundry partner for the two. For this reason, the chip might be built by another, yet-unnamed foundry partner the two are reaching out to. UMC could be a possibility, as it has recently announced that its 40 nm node is ready for "real, high-performance" designs.
The GT300 comes in three basic forms, which are perhaps differentiated by batch quality processing: G300 (chips that make it to consumer graphics, the GeForce series), GT300 (chips that make it to high-performance computing products, the Tesla series), and G200GL (chips that make it to professional/enterprise graphics, the Quadro series). From what we know so far, the core features 512 shader processors, a revamped data processing model in the form of MIMD, and a 512-bit wide GDDR5 memory interface that churns out around 256 GB/s of memory bandwidth. The GPU is compliant with DirectX 11, which makes its entry with Microsoft Windows 7 later this year and can already be found in release candidate versions of the OS.
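As a quick sanity check on the quoted figure, the sketch below shows how a 512-bit GDDR5 interface lands at roughly 256 GB/s; the 4 GT/s effective data rate (a 1000 MHz GDDR5 command clock) is an assumption for illustration, not a confirmed specification.

```python
# Rough sketch: how a 512-bit GDDR5 bus yields "around 256 GB/s".
# The 4 GT/s effective data rate is an assumed figure, not a confirmed spec.
bus_width_bits = 512
effective_data_rate_gtps = 4.0   # GDDR5 transfers 4 bits per pin per command clock (assumed 1000 MHz)
bandwidth_gbps = (bus_width_bits / 8) * effective_data_rate_gtps
print(f"{bandwidth_gbps:.1f} GB/s")   # 256.0 GB/s
```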
Source:
Bright Side of News
96 Comments on NVIDIA GT300 Already Taped Out
Power: NVIDIA might end up with a more powerful chip than ATI.
$$$$: ATI might have the advantage here, and may use an X2 card to fight NVIDIA's monster chip.
Watt: Due to ATI's higher clocks, it might end up ATI > NVIDIA here.
The wave: ATI is getting stronger these days, which makes NVIDIA uneasy.
:P
Please people, don't start a debate on GT300 vs RV870, especially here, when the products have only been taped out and not even released on paper yet. At this point there are no performance figures, and due to the non-linear increase in performance with architectural upgrades (i.e. a shader count boost), you can't really expect a card to perform the way you think it would. Think of other factors such as retiming of the core, etc. (like AMD's case of the HD 4870 to HD 4890).
In the end it comes down not to theory, but to how the card actually works. I find arguments such as "AMD used two cores to beat one" stupid, as there is no actual "beating" when you, the customer, can make your own decision to buy something; besides, how they achieved the performance in the end means squat for you, the consumer.
I'm not really suggesting anything, but some members here need to learn how to reason and be a bit more rational with their decisions.
To senninex: in case you haven't realised, the market is very unpredictable. If the whole market went by patterns like the one you mentioned, everyone would be free of their financial woes. Note a very curious thing: the GTX 295 is cheaper in Australia than the HD 4870 X2 (well, at least in Sydney, where prices are damn high), and the GTX 260 is cheaper than the HD 4890. (Ironically, most people go for 4890s/4850s/4870s instead of the green camp's stuff... unpredictable.)
So debating who's got the best offering is useless for now. Even though I always use NVIDIA graphics cards, I still want ATI to come up with something big. Competition makes both NVIDIA and ATI cards more affordable, and at the same time it pushes both camps to develop their products even further. Just my 2 cents :)
www.theinquirer.net/inquirer/news/1137331/a-look-nvidia-gt300-architecture
You're not buying the same kind of power.
All of this fanboyism is really rubbish, as is all the speculation in this thread.
There is NO use in speculating what they are going to come out with... just wait and see when it comes out, and go with what you know, as well as what works for you.
And arguing that ATI has a better price/performance ratio is kind of a moot point, as it depends on how the BUYER perceives the price. To some people who are on an extreme budget, a cheaper card with decent performance makes more sense... To someone who is not on a shoestring budget and does not want to have to upgrade the card later, something in the bigger price bracket would fit the bill.
I go with what's fast, as for the most part I don't really care about how much it costs... While you all complain about $400 CPUs, $600 video cards, and $200 HDDs, I just think back to when I bought my first CD-ROM drive for $800. So $800 for a video card that beats everything else on the market is OK in my books.
All Charlie Demerjian is, is a speculative writer whose viewpoints always seem to slam NVIDIA. He takes every point and expands on it to a huge extent, with nothing but pure speculation as the basis for each factor he discusses about NVIDIA. He seems to think that if something happened before, it will happen again, and doesn't realise that what matters is not how something got there but what the actual thing does. It's like some fanboy bitching about how AMD got the 4850 to kill a GTX 280. He says something like:
Nvidia chipmaking of late has been laughably bad. GT200 was slated for November of 2007 and came out in May or so in 2008, two quarters late. We are still waiting for the derivative parts. The shrink, GT206/GT200b is technically a no-brainer, but instead of arriving in August of 2008, it trickled out in January, 2009. The shrink of that to 40nm, the GT212/GT200c was flat out canceled, Nvidia couldn't do it.
AMD was even worse when it came to the HD 2000 series, yet he doesn't mention anything about that. He doesn't even bother to mention the actual product itself, the final product. Why would the fucking consumer give a shit about how something was developed or how long it took? Does it matter if Larrabee is so "similar" to the GT300 when AMD also has a similar product? If it does, then it's like saying car companies should be ashamed for copying Mercedes-Benz in the first place; wait, no, most of the human race should be ashamed for copying the wheel off the original inventors! Biased, rampant journalists like these should really step down. If there were no "copying" we'd find that every company had a different graphics socket, different memory chips, and a multitude of display outputs. All graphics cards are based on similar architectures: cache (RAM, on-board cache), a core, and input and output. I don't see other journalists slamming companies for following a standard. Yet we see Charlie using a completely invalid argument that the standard affair of DX11 shader SIMD/MIMD units is "copying", and that it's "ironic".
He doesn't seem to realise that the consumer is more concerned about the product being usable. You are not the average consumer if you obsess about how something got there.
However, if it's something like violating RoHS requirements, then by all means, please complain. It's these people who are indirect and side with one camp who cause so many problems for us.
Hope he reads this and rethinks his position.
It's idiotic to bitch about the "arrogance" of a company. All that is, is image; it's not substance. I do not give a fucking shit if the CEO has a bad attitude. Why? Because we the consumers only get the END product and judge off the END product, what you receive (note that customer support counts as well). See, I will STILL buy Intel's products despite their bad corporate profile; if you ask why, I'd suggest rereading this line. It's not up to the consumer to condemn a company unless they've caused you pain or something; actual, tangible pain, not "zomfg Intel uses fake cores, I'm buying AMD" (disregarding the fact that the "fake core" CPUs perform better anyway). For some reason consumers prefer to go for the non-tangible assets of a product instead of what it can actually do. I think a lot of firms pride themselves on this idiocy, as well as on "fanboyism", one notable example being Gibson Guitars.
rumored specs of the 5870
www.hardware-infos.com/news.php?news=2908
Personally, I'm not interested in the high end as much as the $200-300 range, as that's where I'll be buying. If we have another bout like the 4870 vs GTX 260, it's good news for teh yogurt.
NVIDIA doesn't use "fewer but more powerful" shaders; apparently the shaders on ATI cards are just counted differently. From what I've heard, ATI's shaders are arranged in 5-wide units and ATI counts every lane of each unit individually (NVIDIA doesn't do this, apparently, and just counts its scalar units directly).
So, for a more even comparison, just divide the number of shaders on an ATI card by 5.
HD 4870 = 160 shaders.
GTX 280 = 240 shaders.
Make more sense now? The GTX 280 is faster, of course, but for what it has, the RV770 isn't bad, either.
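For illustration, here is the poster's divide-by-five rule of thumb as a short Python sketch; the 800 and 240 shader counts are the marketed figures for the HD 4870 and GTX 280.

```python
# The divide-by-five rule of thumb from the comment above:
# ATI counts every lane of its 5-wide shader units, NVIDIA counts scalar units.
ati_marketed_shaders = 800      # HD 4870 as marketed
nvidia_scalar_shaders = 240     # GTX 280

ati_comparable_units = ati_marketed_shaders // 5
print(ati_comparable_units, "vs", nvidia_scalar_shaders)   # 160 vs 240
```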
I'm sure I read at least a page after this post...
Just yesterday we reported that NVIDIA's upcoming high-end desktop chip, the G300, has successfully completed its tape-out and is currently at the A1 stepping, which may already be the final one.
Now we can present the clock frequencies of the samples, with which NVIDIA is already very satisfied, so they could also end up being the final ones.
The running A1-stepping G300 samples operate at a 700 MHz core clock, a 1600 MHz shader clock and an 1100 MHz memory clock; the latter was already suggested in our previous news report.
Together with the already-revealed shader count and memory interface width, we can make the first concrete quantitative comparisons.
As we already reported, the G300 will have 512 shader units instead of 240. The basic structure appears to be retained, that is, 1D shader units that can compute one MADD and one MUL per clock, so we can already conclude that the running samples would achieve a theoretical performance of, believe it or not, 2457 gigaflops.
Although the comparison to the G200, as represented by the GTX 280, is difficult, because the G300 no longer has classical SIMD units but rather MIMD-like units, the purely quantitative comparison shows a difference of 163 per cent in performance.
With the memory frequency now known, the memory bandwidth can be considered too: at 1100 MHz, NVIDIA reaches a remarkable 281.6 GB/s, which quantitatively works out to exactly 100 per cent more memory bandwidth than the GTX 280.
Statements about theoretical TMU and ROP performance cannot be made yet, despite the known chip frequency, because their counts are not yet known and, in the case of the ROPs, it is not even certain whether they will still be fixed-function units.
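The flops and bandwidth figures quoted in the excerpt above can be reproduced with some back-of-the-envelope arithmetic. The sketch below assumes the "one MADD plus one MUL per clock" (3 flops) model the excerpt describes, a 1296 MHz shader clock for the GTX 280, and the excerpt's round 1100 MHz memory clock for both cards; treat it as an illustration of the quoted numbers, not an official specification.

```python
# Back-of-the-envelope reproduction of the figures quoted above.
# Assumptions: 3 flops per shader per clock (one MADD + one MUL),
# GTX 280 shader clock of 1296 MHz, and a round 1100 MHz memory clock for both cards.

def madd_mul_gflops(shader_units, shader_clock_mhz):
    """Theoretical single-precision throughput for 1D shader units."""
    return shader_units * 3 * shader_clock_mhz / 1000.0

def memory_bandwidth_gbps(bus_width_bits, mem_clock_mhz, transfers_per_clock):
    """transfers_per_clock: 2 for GDDR3 (DDR), 4 for GDDR5."""
    return (bus_width_bits / 8) * mem_clock_mhz * transfers_per_clock / 1000.0

g300_flops  = madd_mul_gflops(512, 1600)    # ~2457.6 GFLOPS
gt200_flops = madd_mul_gflops(240, 1296)    # ~933.1 GFLOPS (GTX 280)
print(f"{g300_flops:.0f} vs {gt200_flops:.0f} GFLOPS "
      f"(+{(g300_flops / gt200_flops - 1) * 100:.0f}%)")     # +163%

g300_bw  = memory_bandwidth_gbps(512, 1100, 4)   # 281.6 GB/s (GDDR5)
gt200_bw = memory_bandwidth_gbps(512, 1100, 2)   # 140.8 GB/s (GDDR3)
print(f"{g300_bw:.1f} vs {gt200_bw:.1f} GB/s "
      f"(+{(g300_bw / gt200_bw - 1) * 100:.0f}%)")           # +100%
```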