Tuesday, September 19th 2006

NVIDIA G80 specs

VR-Zone has specs of the upcoming GeForce 8800GTX/GT. Here goes:
  • Unified Shader Architecture
  • Supports FP16 HDR+MSAA
  • Supports GDDR4 memory
  • Close to 700M transistors (G71 - 278M / G70 - 302M)
  • New AA mode : VCAA
  • Core clock scalable up to 1.5GHz
  • Shader Performance : 2x Pixel / 12x Vertex over G71
  • 8 TCPs & 128 stream processors
  • Much more efficient than traditional architecture
  • 384-bit memory interface (256-bit+128-bit)
  • 768MB memory size (512MB+256MB)
  • Two models at launch : GeForce 8800GTX and GeForce 8800GT
  • GeForce 8800GTX : 7 TCPs chip, 384-bit memory interface, hybrid water/fan cooler, water cooling for overclocking. US$649
  • GeForce 8800GT : 6 TCPs chip, 320-bit memory interface, fan cooler. US$449-499
Source: VR-Zone
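As a back-of-the-envelope check on those bus widths: peak memory bandwidth is just the bus width in bytes times the effective data rate. A minimal sketch — the 1.8 GHz effective data rate here is an assumption for illustration only, since the leaked specs give no memory clock:

```python
def peak_bandwidth_gbs(bus_bits: int, data_rate_ghz: float) -> float:
    """Peak memory bandwidth in GB/s: bytes per transfer x effective data rate."""
    return (bus_bits / 8) * data_rate_ghz

# 1.8 GHz effective data rate is an assumed figure, not from the article.
for bus in (384, 320, 256):
    print(f"{bus}-bit bus: {peak_bandwidth_gbs(bus, 1.8):.1f} GB/s")
# 384-bit bus: 86.4 GB/s
# 320-bit bus: 72.0 GB/s
# 256-bit bus: 57.6 GB/s
```

Whatever the real clocks turn out to be, the ratio between the 8800GTX's 384-bit bus and a conventional 256-bit card holds at the same data rate.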

22 Comments on NVIDIA G80 specs

#1
magibeg
Wow, that sounds really huge and insane... I'll bet it's a little exaggerated. Core clock up to 1.5GHz??? And with 700M transistors? I think that would consume the 250 watts the Inq was talking about :P. If it's all true though, that's one beast of a card!
#2
Judas
384-bit memory interface (256-bit+128-bit)
768MB memory size (512MB+256MB)
Can someone explain this to me: why the plus?
Yeah, it is rather nice :D
Does it support DX10?
#3
RickyG512
Now I've always preferred ATI for life because of better image quality and so on, but this just seems to kick R600.
#4
EastCoasthandle
Hmm, will we need a separate PSU for this? More doesn't necessarily mean efficient or compatible. And isn't a die shrink supposed to require less power... not more??? What is going on here? CPU die shrinks mean less power and more performance, i.e. more efficiency. However, ATI's and Nvidia's next-gen die shrinks are going in the opposite direction, which makes no sense at all. Even worse, Nvidia's 7900 series actually proves that you can create a video card that uses less power and still gets better performance, so none of this makes sense.

It's been posted before and I'll rehash what was said: it appears the first-gen DX10-compatible cards are not up to what they should be. I bet the second-gen cards will be a lot more efficient... This all looks like marketing to me...
#5
Homeless
Oh my god, that sounds insane... I hope it also has DX10 support.
#6
HaZe303
R600 will still kick NV's ass. Nvidia is just feeling the pressure from ATI, so they have to release some G80 specs to lure people the green way!? :D
#7
tofu
RickyG512: Now I've always preferred ATI for life because of better image quality and so on, but this just seems to kick R600.
Really?
It seems the G80 has 24 pipes (2 x G71 = 48, haha) LESS than the R600...
HaZe303: R600 will still kick NV's ass. Nvidia is just feeling the pressure from ATI, so they have to release some G80 specs to lure people the green way!? :D
Very true, m8!
#8
deftonesmx17
tofu: Really?
It seems the G80 has 24 pipes (2 x G71 = 48, haha) LESS than the R600...
Shader Performance : 2x Pixel / 12x Vertex over G71

I take it you didn't read the word right next to "shader": it says PERFORMANCE, not a shader unit count. This means it has 2x the pixel shader performance of a G71, not twice the pixel shader unit count. :rolleyes:
#9
POGE
Twice the performance for a next-gen card compared to the last gen is a very easy feat. Look at the 6800s and 7800s.
#10
dracolnyte
I swear Nvidia wasn't going the unified shader way; that was what ATI was going for. Looks like they have changed their mind and decided to go the ATI way with the shaders, instead of their 32 pixel shaders and 32 vertex shaders?? This all seems like a bluff to me.
#11
tvdang7
Wth.......... I still haven't bought a 512MB card yet and now they want 768MB?
#12
jocksteeluk
Any idea what the power usage will be? And is this based on the 65nm process yet?
#13
Tomcat81970
Sweet, I wonder when it's going to be released... I wonder if this might speed up ATI's launch of their R600s. Also, with the base price of $450, hopefully it should help keep ATI's prices down when they launch.
#14
GSG-9
I hope they are out soon.
tvdang7: Wth.......... I still haven't bought a 512MB card yet and now they want 768MB?
They have 1GB cards, you know.
#15
randomperson21
Wow, I'm impressed. But I'm gonna wait to make a judgement on whether the R600 or the G80 is better until solid benchies come out.
#16
saeed_violinist
Finally: is this champion D3D10 compatible or not? I really love the DX10-rendered scenes of Flight Simulator X, if they are not fake though.
#17
selway89
saeed_violinist: Finally: is this champion D3D10 compatible or not? I really love the DX10-rendered scenes of Flight Simulator X, if they are not fake though.
Yes, this should support DX10. The R600 from ATI though has my vote on performance, and with ATI's attractive prices at the moment, I can see this being good value.

I like how the R600 will have, erm, its number of pipes (forgotten how many) that will dynamically change whether it processes data through the pipes for vertex or shader stuff. This will be really efficient and very, very fast.

Nvidia's 768MB config etc. seems somewhat odd. But again, as others have said, I'll have to wait until benchmarks are available. Then again, I'm still using a Raddy 9800 Pro; I think I'll upgrade by the time these hit the stores =P

Selway89
#18
ryboto
selway89: I like how the R600 will have, erm, its number of pipes (forgotten how many) that will dynamically change whether it processes data through the pipes for vertex or shader stuff. This will be really efficient and very, very fast.
It's called a unified architecture, and according to this news, the G80 will have one too.
selway89: Nvidia's 768MB config etc. seems somewhat odd. But again, as others have said, I'll have to wait until benchmarks are available. Then again, I'm still using a Raddy 9800 Pro; I think I'll upgrade by the time these hit the stores =P

Selway89
The memory configuration is such that the card will have a 384-bit memory interface, as opposed to the 256-bit on the high-end cards we see today. It should give the memory higher bandwidth, though I'm not sure that will really be the case, as the news shows (256+128), so I'm not sure what it really means.
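The wider-bus point scales linearly: at the same memory clock, a 384-bit bus moves 1.5x the data of a 256-bit bus per transfer. A tiny sketch (the function name is mine, for illustration):

```python
def bandwidth_ratio(new_bus_bits: int, old_bus_bits: int) -> float:
    # At a fixed memory clock, peak bandwidth scales with bus width.
    return new_bus_bits / old_bus_bits

print(bandwidth_ratio(384, 256))  # -> 1.5
```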
#19
Pinchy
Can't wait till the benchies come out :D

I go for ATI, but these Nvidia cards look promising.
#20
Tonyjack
This is pure speculation on my part, but it sounds like there is one memory controller using 256MB via a 128-bit bus and one using 512MB via a 256-bit bus.
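That guess at least checks out arithmetically: the two hypothetical partitions sum to exactly the quoted 768MB and 384-bit totals. A sketch of the speculated layout — not a confirmed hardware detail:

```python
# Hypothetical split-controller layout from the speculation above.
partitions = [
    {"size_mb": 256, "bus_bits": 128},
    {"size_mb": 512, "bus_bits": 256},
]

total_mb = sum(p["size_mb"] for p in partitions)
total_bits = sum(p["bus_bits"] for p in partitions)
print(total_mb, total_bits)  # -> 768 384
```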
#21
Pinchy
Yeah, it looks like something like that...