Sunday, May 17th 2009

NVIDIA GT300 Already Taped Out

NVIDIA's upcoming next-generation graphics processor, codenamed GT300, is on course for launch later this year. Its development has crossed an important milestone, with news emerging that the company has already taped out the GPU, with the first engineering samples belonging to the A1 batch. The development is significant since this is the first high-end GPU to be designed on the 40 nm silicon process. Both NVIDIA and AMD, however, are facing issues with the 40 nm manufacturing node of TSMC, the principal foundry partner for the two. For this reason, the chip might be built by another, yet-to-be-named foundry partner the two companies are reaching out to. UMC could be a possibility, as it recently announced that its 40 nm node is ready for "real, high-performance" designs.

The GT300 comes in three basic forms, which are perhaps differentiated by batch quality (binning): G300 (chips that make it to consumer graphics, the GeForce series), GT300 (high-performance computing products, the Tesla series), and G200GL (professional/enterprise graphics, the Quadro series). From what we know so far, the core features 512 shader processors, a revamped data-processing model in the form of MIMD, and a 512-bit wide GDDR5 memory interface that should churn out around 256 GB/s of memory bandwidth. The GPU is compliant with DirectX 11, which makes its entry with Microsoft Windows 7 later this year and can already be found in release-candidate versions of the OS.
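For context, the ~256 GB/s figure follows directly from the bus width and the effective per-pin data rate of GDDR5. A minimal sketch of the arithmetic in Python, assuming a 4 Gbps effective rate per pin (the actual memory clock has not been disclosed):

# Peak memory bandwidth = bus width in bytes * effective data rate per pin.
# The 4 Gbps per-pin rate is an assumption; the real memory clock isn't known yet.
bus_width_bits = 512
effective_gbps_per_pin = 4.0           # assumed GDDR5 effective rate

bandwidth_gb_s = (bus_width_bits / 8) * effective_gbps_per_pin
print(f"~{bandwidth_gb_s:.0f} GB/s")   # -> ~256 GB/s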
Source: Bright Side of News

96 Comments on NVIDIA GT300 Already Taped Out

#76
a_ump
icon1: I always use NVIDIA cards, but nevertheless I still want ATI to come up with something strong for their next-gen graphics cards. Healthy competition on both sides is always nice to keep the price of the next-gen cards more affordable; if ATI doesn't come up with anything strong to compete with this beast, then the price of NVIDIA's GT300 will be sky high...

With the next-gen cards just around the corner, this is getting more exciting, lol.
True, but look at it this way: we know that if ATI isn't equal or almost on par with NV's performance, then their prices will be much lower, still driving NV down some. And also, look at the current cards: the HD 5870, according to specs, should be at least 40% faster than the HD 4890, and what game is out that the HD 4890 can't handle? Granted, GTA4, Crysis, and STALKER are all that come to mind for me, so even with +30% performance, what won't the HD 5870 handle? I know DX11 is coming, but I'm hoping it's more about enhanced performance than enhanced visuals. Honestly, the next time you're out in the country a little, check out the forestry and leaves and shit, then look at Crysis. It's not far off at all from a realistic appearance, so if developers use DX11 as more of a streamlined, performance-oriented DX10, there shouldn't be a problem running games. I look for DX11 to bring physics to a more realistic level rather than making textures and whatnot more appealing.
Posted on Reply
#77
senninex
nVidia Vs ATI

Power: Nvidia might have a more powerful chip than ATI.
$$$$: ATI might have the advantage here & may use an X2 to fight Nvidia's monster chip.
Watt: Due to ATI's higher clocks... it might be ATI > Nvidia.
The wave: ATI is becoming stronger nowadays, making Nvidia sick.


:P
Posted on Reply
#78
[I.R.A]_FBi
senninex: Power: Nvidia might have a more powerful chip than ATI.
$$$$: ATI might have the advantage here & may use an X2 to fight Nvidia's monster chip.
Watt: Due to ATI's higher clocks... it might be ATI > Nvidia.
The wave: ATI is becoming stronger nowadays, making Nvidia sick.


:P
good analysis
Posted on Reply
#79
tkpenalty
I have to admit, fanboyism of any sort, for any camp, or developing brand loyalty as such, is an entirely foolish practice that leads to no gains in the future. Joining sides with a GPU company, or any company, religion, etcetera, basically means you will root for them and thus have a somewhat biased judgement of the truth, since you won't want to say anything disadvantageous about what you root for. True, if everyone thought like me AMD would have had no customers during the Phenom I era, but it's advice for your personal benefit.

Please, people, don't start a debate on GT300 vs. RV870, especially here, when the products have only been taped out and not even released on paper yet. At this point there are no performance figures, and due to the non-linear increases in performance with architectural upgrades (e.g., a shader-count boost), you can't really expect one card to perform the way you think it would. Think of other factors such as retiming of the core, etc. (as in AMD's case going from the 4870 to the 4890).

In the end it comes down not to theory, but to how the card actually works. I find arguments such as "AMD used two cores to beat one" stupid, as there is no actual "beating" when you, the customer, can make your own decision to buy something; plus, how they achieved the performance in the end means squat for you, the consumer.

I'm not really suggesting anything, but some members here need to learn how to reason and be a bit more rational with their decisions.

To senninex: in case you haven't realised, the market is very unpredictable. If the whole market went by patterns like the one you mentioned, everyone would be free of their financial woes. Note a very curious thing: the GTX 295 is cheaper in Australia than the HD 4870 X2 (well, at least in Sydney, where prices are damn high), and the GTX 260 is cheaper than the HD 4890. (Ironically, most people go for 4890s/4850s/4870s instead of the green camp's stuff... unpredictable.)
Posted on Reply
#80
Hayder_Master
When I see the GT300 specifications now, they look to me like an HD 5870 X2.
Posted on Reply
#81
icon1
None of us knows how the next series of graphics cards will really perform, so debating who's got the best offering is useless for now...

Even though I always use NVIDIA graphics cards, I still want ATI to come up with something big... competition makes NVIDIA & ATI cards more affordable, and at the same time it pushes both camps to develop their products even further... just my 2 cents :)
Posted on Reply
#83
Tau
Darren: Would you pay 21% more for a car that runs 5% faster? NO

So why would anyone with a brain do it for a GPU or a CPU?
Unfortunately, that's what it costs when you're talking about that kind of speed. In the car world, say it costs $8,000 to build a 400 HP 4-cylinder; it then costs $12,000 to take that same motor to 500 HP. The costs get huge while the performance increase is minimal.
Valdez: I hope it's clear that ATI doesn't want to make a big GPU to compete with Nvidia. This is where ATI's and Nvidia's strategies differ: Nvidia wants one very powerful GPU, while ATI wants to go multi-core. So they make a small but very efficient GPU and release 1-, 2- or 4-GPU cards. One RV870 can't compete with the GT300, but it doesn't have to; that's the X2's task.
And we haven't even spoken about the X4 variant (if the rumours of an MCM are true) :D.
It doesn't matter how many cores it takes to compete with each other... the only thing that matters at the end of the day is price, performance, power draw, and heat, really...
Steevo: How is a 1 GHz or 950 MHz product from an AIB partner any different from an AIB partner for Nvidia releasing an overclocked product?

For the record, 1 GHz is within spec for these chips, and not an "overclocked" part.

What I care about is performance at the end of the day. How many cried when GTA4 came out and they couldn't play it at high settings, when it just required more memory? So in all actuality my lowly 4850 beat a 768 MB card from the green camp. I overclocked my card, spent less and played more. Same for a 4890, and hell, even the 4770 in my parents' build: $99 that kicks the shit out of a GTS 250.

The difference here is we are all talking performance per dollar, performance, maximum availability, and drivers.

In almost every segment of the market ATI has Nvidia beat on price and/or performance. For an actual mid-range card the 4770/4850 rapes Nvidia, the 4890 rapes everything but the 295, but for the money of a 295 you could CrossFire two 4890s and still rape it; hell, a 4870 X2 is 14% more efficient per dollar.
Sure, ATI has them beat on price... want to know why?

You're not buying the same kind of power.

All of this fanboyism is really rubbish, as well as all the speculation in this thread.

There is NO use in speculating about what they are going to come out with... just wait and see when it comes out, and go with what you know, as well as what works for you.

And arguing that ATI has a better price/performance ratio is kind of a moot point, as it depends on how the BUYER perceives the price. To some people who are on an extreme budget, a cheaper card with decent performance makes more sense... To someone who is not on a shoestring budget and does not want to have to upgrade the card later, something in the bigger price bracket would fit the bill....

I go with what's fast, as for the most part I don't really care how much it costs... While you all complain about $400 CPUs, $600 video cards, and $200 HDDs... I just think back to when I bought my first CD-ROM drive for $800. So $800 for an entire video card that beats everything else on the market is OK in my books.
Posted on Reply
#84
tkpenalty
Please do not draw conclusions from pure speculation, guys... you can say all you want about how a GPU will perform based on its specifications, but that's like saying a car with a larger engine should be faster.


All Charlie Demerjian is, is a speculative writer whose viewpoints always seem to slam Nvidia. He picks up and expands on every point, to a huge extent, with the basis being absolutely nothing apart from pure speculation for each factor he discusses about Nvidia. He seems to think that if something happened before, the same will happen again, and doesn't realise that what matters is not how something gets there, it's what the actual thing does. It's like some fanboy bitching about how AMD got the 4850 to kill a GTX 280; he says something like:

Nvidia chipmaking of late has been laughably bad. GT200 was slated for November of 2007 and came out in May or so in 2008, two quarters late. We are still waiting for the derivative parts. The shrink, GT206/GT200b is technically a no-brainer, but instead of arriving in August of 2008, it trickled out in January, 2009. The shrink of that to 40nm, the GT212/GT200c was flat out canceled, Nvidia couldn't do it.

AMD was even worse when it came to the HD 2000 series, and he doesn't even mention anything about that. He doesn't even bother to mention the actual product itself, the final product. Why would the fucking consumer give a shit about how something was developed, or how long it took?

Does it matter if Larrabee is so "similar" to the GT300 when AMD also has a similar product? Because if it does, then it's like saying that car companies should be ashamed for copying Mercedes-Benz in the first place; wait, no, most of the human race should be ashamed for copying the wheel off the original inventors! Bloody idiots, and biased, rampant journalists like these should really step down. If there was no "copying" we'd find that every company has a different graphics socket, different memory chips, and a multitude of display outputs. All graphics cards are based on similar architectures: cache (RAM, onboard cache), a core (CPU), input and output. I don't see other journalists slamming companies over this; it's a standard. Yet we see Charlie using a completely invalid argument that the standard affair of DX11 shader SIMD/MIMD units is "copying", and that it's "ironic".

He doesn't seem to realise that the consumer is more concerned about the product being usable. You are not the average consumer if you obsess about how something got there.
However, if it's something like invalidating RoHS ratification, then by all means, please complain. It's these indirect people, who side with one camp, who cause so many problems for us.


Hope he reads this and rethinks his position.

It's idiotic bitching about the "arrogance" of a company. All that is is the image; it's not the substance. I do not give a fucking shit if the CEO has a bad attitude. Why? Because we the consumers only get the END product and judge off the END product, what you receive (note that customer support counts as well). See, I will STILL buy Intel's products despite their bad corporate profile. If you ask why, I'd suggest rereading this line. It's not up to the consumer to condemn a company unless they've caused you pain or something. Actual, tangible pain. Not "Zomfg Intel uses fake cores, I'm buying AMD" (disregarding the fact that the "fake core" CPUs perform better anyway). For some reason consumers prefer to go for the non-tangible assets of a product instead of what it can actually do. I think a lot of firms pride themselves on this idiocy, as well as on "fanboyism"; one notable example being Gibson guitars.
Posted on Reply
#86
a_ump
All I've heard is to ignore what that Charlie dude says on the Inq. Though he makes it sound very true and argues it well, Nvidia aren't idiots. I don't see them pulling a GT300 launch if all those risk factors are true, and though, as I said, Charlie argued them very well and convincingly, it just doesn't make sense. I've already pondered the whole "ATI will be better at DX11 due to DX10.1 being very close to it" idea, but Nvidia isn't dumb enough to pull a flop when ATI is all over their ass. I had also contemplated the whole newly speculated design behind the supposed 512 shaders. Doubling shaders doesn't happen often or easily, and with how large a die area the GT200 took to incorporate 240 shaders along with the other good stuff, moving to 512 would have to change these shaders dramatically in size (I would think), which, being something that new, could, as Charlie said, suffer in performance and yields. If what Charlie says is true (I doubt it), then I'm very interested to see how Nvidia handles it. Ah, so much speculation :p
Posted on Reply
#87
Blacksniper87
Yep, there is a lot of speculation, which is why wankers like Charlie need to stop pretending they know everything about Nvidia's upcoming cards.
Posted on Reply
#88
yogurt_21
fuel for the fire

rumored specs of the 5870
www.hardware-infos.com/news.php?news=2908

Personally I'm not as interested in the high end as in the $200-300 range, since that's where I'll be buying. If we have another bout like the 4870 vs. GTX 260, it's good news for teh yogurt.
Posted on Reply
#89
WarEagleAU
Bird of Prey
If ATI truly wants to stay ahead of what looks phenomenal on paper with the GT300, they need to unlock their 800 stream processors from the core. I keep posting this hoping someone from there will see it. 1200-2400 SPs clocked around 1100 MHz??? Shoot, I can't even begin describing how that would kick ass.
Posted on Reply
#90
Melvis
Quick question: is the 4870 X2's sideport working yet? Did ATI release drivers for the sideport to be activated and gain more performance? I never heard whether they did or not.
Posted on Reply
#91
mdm-adph
DrPepper: The number of SPs isn't half as important as the way the shaders handle instructions. For example, Nvidia uses a different method to ATI, and that's why Nvidia can use fewer, more powerful shaders than ATI. Who's to say ATI will continue with their current shader model, or change it so they can use 1200 SPs and get the same performance as 2400? Anyway, back to the point: ATI engineers prefer to find cheap methods of adding performance, compared to Nvidia's tactic of simply adding more.
Well, that's different from what I've heard.

Nvidia doesn't use "fewer and more powerful" shaders, but apparently on ATI cards the shaders are just counted differently. For instance, I've heard that ATI's shaders are grouped five-wide -- each shader unit gets counted as five stream processors or something (Nvidia doesn't do this, apparently).

So, for a more even comparison, just divide the number of shaders on an ATI card by 5.

HD 4870 = 160 shaders.
GTX 280 = 240 shaders.

Make more sense now? The GTX 280 is faster, of course, but for what it has, the RV770 isn't bad, either.
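For what it's worth, the normalization described above is just a division. A tiny sketch in Python (the divide-by-five factor reflects each ATI shader unit being counted as five stream processors; treat it as a rough comparison, not a benchmark):

# Rough "per-unit" shader comparison as described above.
ATI_COUNT_FACTOR = 5   # assumption from the post: each ATI unit counted as 5 SPs

cards = {
    "HD 4870 (RV770)": (800, ATI_COUNT_FACTOR),
    "GTX 280 (GT200)": (240, 1),   # Nvidia already counts individual units
}
for name, (sp, factor) in cards.items():
    print(f"{name}: {sp} SPs -> {sp // factor} shader units")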
Posted on Reply
#92
crazy pyro
Wah, where'd the extra two pages go?
I'm sure I read at least a page after this post...
Posted on Reply
#93
Blacksniper87
Hey, so the specs are in on the taped-out GT300: 700 MHz core, 1600 MHz shaders, and 1100 MHz memory, apparently with room to move.
Posted on Reply
#94
a_ump
Source? And I'm suspicious of whether Nvidia would get GDDR5 to work at that frequency the first time they use it on a card. ATI only managed 900 MHz (3,600 MHz effective) with their first implementation of it, or was that because the chips themselves were inefficient?
Posted on Reply
#95
Blacksniper87
a_ump: Source? And I'm suspicious of whether Nvidia would get GDDR5 to work at that frequency the first time they use it on a card. ATI only managed 900 MHz (3,600 MHz effective) with their first implementation of it, or was that because the chips themselves were inefficient?
www.hardware-infos.com/news.p...2954&sprache=1

Just yesterday we reported that Nvidia's upcoming high-end desktop chip, the G300, had successfully completed its tape-out and is currently in the A1 stepping, which may already be the final one.
Now we can present the clock frequencies of the samples, with which Nvidia is reportedly already very satisfied, so that they could also turn out to be the final ones.

Thus the running G300 samples in the A1 stepping operate at a 700 MHz chip frequency, a 1600 MHz shader frequency, and an 1100 MHz memory frequency - the latter was already suggested in our previous news report.
With the already revealed shader count and memory interface width, we can make the first quantitative comparisons.

As we already told you, the G300 will have 512 instead of 240 shader units. The basic structure, that is, 1D shader units that can compute one MADD and one MUL per clock, seems to have been kept, so we can already conclude that the running samples will achieve a theoretical performance of, believe it or not, 2457 gigaflops.
Although a comparison to the G200, as represented by the GTX 280, is difficult because the G300 no longer has classical SIMD units but MIMD-like units, the purely quantitative comparison shows a difference of 163 per cent in performance.

The memory bandwidth can now also be worked out, given the memory frequency: Nvidia will reach a remarkable 281.6 GB/s at 1100 MHz. Quantitatively, that amounts to exactly 100 per cent more memory bandwidth than the GTX 280.

Statements about theoretical TMU and ROP performance cannot be made yet, despite the known chip frequency, because their counts are not yet known and, in the case of the ROPs, it is not even certain whether they will still be fixed-function units.
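For anyone curious where the 2457-gigaflop headline comes from, a quick back-of-the-envelope check in Python (the GTX 280 shader clock of 1296 MHz used for comparison is its published reference spec, not something stated in the quoted article):

# Theoretical shader throughput: units * clock * flops per clock
# (1 MADD = 2 flops plus 1 MUL = 1 flop, i.e. 3 flops per unit per clock).
def shader_gflops(units, shader_mhz, flops_per_clock=3):
    return units * shader_mhz * flops_per_clock / 1000.0

gt300  = shader_gflops(512, 1600)   # rumored G300 sample: ~2457.6 GFLOPS
gtx280 = shader_gflops(240, 1296)   # GTX 280 reference:   ~933.1 GFLOPS

print(f"G300 ~{gt300:.0f} GFLOPS vs GTX 280 ~{gtx280:.0f} GFLOPS "
      f"(+{(gt300 / gtx280 - 1) * 100:.0f}%)")   # roughly the quoted 163 per cent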
Posted on Reply
#96
a_ump
I didn't really know the difference between MIMD and SIMD, but I googled a little and I think I understand. MIMD has each processor handle its own program or whatever, while SIMD has all of them work together to quickly finish each program; at least that's the gist of what I got. So it would make sense for Nvidia to increase from 240 to 512 if these cores are going to be slower, with each processing data of its own instead of all of them working together to compute that data. Is that correct?
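Roughly, that is the distinction. A toy sketch in Python, purely conceptual and nothing to do with how the actual GT300 units are built: SIMD applies one instruction to many data elements in lockstep, while MIMD lets each unit run its own instruction stream on its own data.

# Toy illustration of SIMD vs. MIMD (conceptual only, not real GPU code).
data = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]

# SIMD: a single instruction ("multiply by 2") applied to every lane in lockstep.
def simd(lanes, op):
    return [op(x) for x in lanes]

print(simd(data[0], lambda x: x * 2))         # all lanes do the same thing

# MIMD: each unit runs its own instruction on its own data, independently.
programs = [lambda x: x * 2, lambda x: x + 100, lambda x: x ** 2]
print([op(lane[0]) for op, lane in zip(programs, data)])   # each unit differs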
Posted on Reply