# AMD Radeon HD 7900 ''Tahiti'' Pictured, 384-bit Memory Bus Confirmed?



## btarunr (Dec 8, 2011)

A Beyond3D forum member posted a mysterious picture of two graphics cards that could very well be engineering samples of AMD's true next-generation Radeon HD 7900 "Tahiti" graphics cards. The final products most probably won't look like these, with a bare red PCB, but it does look like the reference cooler design is ready. More important, the picture reveals traces for at least 11 memory chips; the 12th (not highlighted) is apparently near the PCIe slot interface. The presence of 12 memory chips gives rumors of Tahiti featuring a 384-bit wide memory interface a shot in the arm. This will be the first AMD GPU in over 5 years to feature a memory bus wider than 256-bit; the R600-based Radeon HD 2900 featured a 512-bit GDDR4-capable memory interface.
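For those following the bus-width math: GDDR5 chips each present a 32-bit data interface, so the chip count maps straight onto bus width. A quick sketch of that arithmetic (the 32-bit-per-chip figure is the GDDR5 norm, not anything AMD has confirmed for Tahiti):

```python
# Each GDDR5 chip exposes a 32-bit data interface, so the total bus
# width is just 32 bits multiplied by the number of chips on the board.
BITS_PER_GDDR5_CHIP = 32

def bus_width(chip_count: int) -> int:
    """Total memory bus width in bits for a given GDDR5 chip count."""
    return chip_count * BITS_PER_GDDR5_CHIP

print(bus_width(8))   # 8 chips  -> 256-bit (HD 6970 style)
print(bus_width(12))  # 12 chips -> 384-bit (what this photo suggests)
print(bus_width(16))  # 16 chips -> 512-bit (R600 / HD 2900)
```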





*View at TechPowerUp Main Site*


----------



## radrok (Dec 8, 2011)

Oh god can't wait, give me 2 no no wait 3


----------



## robal (Dec 8, 2011)

At last !  Some picture p0rn !


----------



## Volkszorn88 (Dec 8, 2011)

stfu already and take my money


----------



## TheoneandonlyMrK (Dec 8, 2011)

no disclosure on what its use is though damn


----------



## Aetherius (Dec 8, 2011)

Cool! Im ready for these to release!!


----------



## btarunr (Dec 8, 2011)

Let's play _spot the 12th memory chip_.

This?






BTW, that is a huge GPU package.


----------



## Zubasa (Dec 8, 2011)

btarunr said:


> BTW, that is a huge GPU package.


So is the GTX 580, and we know what a beast that is


----------



## LAN_deRf_HA (Dec 8, 2011)

Is there anything about the solder points on the back of those chips that indicates whether or not it's GDDR5/XDR?


----------



## radrok (Dec 8, 2011)

Also found this magnification


----------



## Super XP (Dec 8, 2011)

From what I've found out they are going with GDDR5. Also, it seems AMD has been pushing for DDR memory for some time now, though XDR would have been interesting indeed. Now for the question: do I wait for another HD 6970 and do CrossFireX, or do I sell my HD 6970, add a few more $$, and go for the upcoming HD 7970? It all depends on price/performance.


----------



## radrok (Dec 8, 2011)

Super XP said:


> From what I've found out they are going with GDDR5. Also, it seems AMD has been pushing for DDR memory for some time now, though XDR would have been interesting indeed. Now for the question: do I wait for another HD 6970 and do CrossFireX, or do I sell my HD 6970, add a few more $$, and go for the upcoming HD 7970? It all depends on price/performance.



Just from experience, go for the newer series... AMD/ATI drivers have always been best with the latest-gen cards


----------



## btarunr (Dec 8, 2011)

LAN_deRf_HA said:


> Is there anything about the solder points on the back of those chips that indicates whether or not it's GDDR5/XDR?



Looking at the picture radrok posted, those traces look like GDDR5. That said, it is rumored that Tahiti will feature support for both GDDR5 and XDR2, but they'll need two different PCB designs for GDDR5 boards and XDR2 ones; the two are nowhere near pin-compatible. So this has to be the GDDR5 board. 

Although there's only a small possibility that Tahiti will support XDR2, these pictures are no way of dismissing that possibility. We'll have to wait for the products to actually launch, or for some pre-release info, to kill that "Tahiti-XDR2" rumor.


----------



## robal (Dec 8, 2011)

LAN_deRf_HA said:


> Is there anything about the solder points on the back of those chips that indicates whether or not it's GDDR5/XDR?



I doubt it.

Especially since the XDR2 controller is compatible with GDDR5.
That means, even if it is integrated into the Tahiti die, the memory interfaces will look identical. Probably even the same memory packages.

Cheers,


----------



## radrok (Dec 8, 2011)

If I were AMD I would honestly fear having anything to do with Rambus, even though XDR2 would surely have been intriguing


----------



## Hayder_Master (Dec 8, 2011)

At laaaaaaaaaast


----------



## cadaveca (Dec 8, 2011)

Shit...maybe I'm totally off on what I'm expecting. How very interesting.

Sucks to see a red PCB, and dual 8-pin connectors though. 375W monster?



btarunr said:


> Let's play _spot the 12th memory chip_.
> 
> This?
> http://img.techpowerup.org/111208/bta4098dc.jpg
> ...



What about the IOMMU that has been confirmed already? That could be used as the "12th chip".


Shit. I was really expecting a 128-bit IOMMU and a 256-bit memory bus. Does GCN really need that much bandwidth? I'm so confused!!!


----------



## Super XP (Dec 8, 2011)

radrok said:


> If I were from AMD I would honestly fear to have something to do with Rambus even though XDR2 would have been surely intriguing


AMD could really benefit from using XDR2 memory. They just need to get some sort of agreement in place so they can't screw each other over. I mean, Rambus could surely use the money and exposure from working with AMD.


----------



## Zubasa (Dec 8, 2011)

radrok said:


> If I were from AMD I would honestly fear to have something to do with Rambus even though XDR2 would have been surely intriguing


Guess what? Rambus has a hand in just about anything memory related.
Good luck avoiding them.


----------



## Super XP (Dec 8, 2011)

cadaveca said:


> Sucks to see a red PCB,


 Let's keep *ATI* alive


----------



## btarunr (Dec 8, 2011)

cadaveca said:


> Sucks to see a red PCB



AMD high end card engineering samples (the ones made in Canada) have no backplate, and have red PCB. The mass production cards over the past two generations had black PCB and backplate. 



cadaveca said:


> What about the IOMMU that has been confirmed already? That could be used as the "12th chip".
> 
> 
> Shit. I was really expecting 128-bit IOMMU and 256-bit memory bus. Does GCN really need that much bandwidth? I'm so confused!!!



IOMMU can be integrated into the GPU die. It doesn't have to be a separate chip.


----------



## cadaveca (Dec 8, 2011)

I understand that the IOMMU doesn't need a chip; that was my point.

I've been talking about the IOMMU for much longer than nearly anyone else. This has to be the most exciting thing about GCN for me. Most don't even have a clue what it'd be used for.


----------



## pantherx12 (Dec 8, 2011)

cadaveca said:


> Shit...maybe I'm totally off on what I'm expecting. How very interesting.
> 
> Sucks to see a red PCB, and dual 8-pin connectors though. 375W monster?
> 
> ...



It's 2 x 6 pin dude, although having said that the card underneath is 8 and 6.


----------



## X800 (Dec 8, 2011)

I found this picture.


----------



## btarunr (Dec 8, 2011)

X800 said:


> I found this picture.



Nice, so the missing chip (marked 1 in that picture) is right where we thought it would be.


----------



## cadaveca (Dec 8, 2011)

pantherx12 said:


> It's 2 x 6 pin dude, although having said that the card underneath is 8 and 6.



Um, that's not what I see (I see 8+6 on both cards, and a pinout for 8+8):


----------



## pantherx12 (Dec 8, 2011)

cadaveca said:


> Um, that's not what I see (I see 8+6 on both cards, and a pinout for 8+8):
> 
> http://www.techpowerup.com/forums/attachment.php?attachmentid=44623&stc=1&d=1323369311



I blame the blurriness, I can't see the edges 


I can see now that the one closest to the edge could be an 8-pin; on the other card it looks like it's the other way round XD


----------



## mastrdrver (Dec 8, 2011)

btarunr said:


> BTW, that is a huge GPU package.



Doesn't look any larger than the backside of my 5870s.


----------



## cadaveca (Dec 8, 2011)

Whatever it is, it's damn exciting. Seeing cards now... does that mean we get a launch within weeks? One can hope...


----------



## Casecutter (Dec 8, 2011)

btarunr said:


> that "Tahiti-XDR2" rumor


Methinks AMD will hold that back and refine it until the top-shelf Keplers come, either giving the XDR2 designation to the existing 7970 or to a new model. Consider: if they wait, could they release both a single- and a dual-GPU XDR2 card?
That might send Kepler packing...


----------



## Solaris17 (Dec 8, 2011)

btarunr said:


> Maybe we could discuss memory mapping in another thread?
> 
> If you two like, I can spin related posts from this thread off into a new thread in the GPU forum.



So basically you're going to change the address space?


----------



## btarunr (Dec 8, 2011)

Solaris17 said:


> So basically you're going to change the address space?



No, pressing buttons that change the "t=" value of those posts in the database.

Back to topic.


----------



## Delta6326 (Dec 8, 2011)

It is 2x 8-pin, you can tell from the solder points. Who knows, though, you may only need 1x 8-pin and 1x 6-pin to be able to run them.

How long do you think these are? They look slightly longer than that motherboard, so maybe 9.5-10"?







Let's play a game: name those other parts. We have 
ASUS Crosshair V Formula AM3+ AMD 990FX SATA 6Gb/s...
SanDisk Ultra SDSSDH-120G-G25 2.5" 120GB SATA II I...
CORSAIR Enthusiast Series TX850 V2 850W ATX12V v2....
Just can't figure out the RAM or that case. Maybe a Corsair Obsidian Series 650D (CC650DW-1) Black Ste... hard to tell, I'm basing it off a couple of screws I can see.


----------



## GSquadron (Dec 8, 2011)

From what I know they will make the high-end parts with XDR2 VRAM,
so this is not a high-end part. If it is, then there will be no XDR2 VRAM.


----------



## erocker (Dec 8, 2011)

I doubt the final card will have two 8-pin connectors. Both Cayman and Cypress had two 8-pins on their "engineering samples".


----------



## Solaris17 (Dec 8, 2011)

erocker said:


> I doubt the final card will have two 8-pin connectors. Both Cayman and Cypress had two 8-pins on their "engineering samples".



I agree. If anything, they did it so they could beat on the ES cards and get an idea of theoretical performance instead of basing it off computer models.


----------



## D4S4 (Dec 8, 2011)

just wanted to say that that's a f*cking awkward place for a 12th memory chip, especially since it's right between the gpu and pcie connector and there seems to be plenty of room on the other end of the "ring".


----------



## btarunr (Dec 8, 2011)

D4S4 said:


> just wanted to say that that's a f*cking awkward place for a 12th memory chip, especially since it's right between the gpu and pcie connector and there seems to be plenty of room on the other end of the "ring".



As if placing VRM near display outputs wasn't awkward enough (on HD 6800 series reference boards). It doesn't matter.


----------



## BrooksyX (Dec 8, 2011)

Very nice. Do want!


----------



## D4S4 (Dec 8, 2011)

btarunr said:


> As if placing VRM near display outputs wasn't awkward enough (on HD 6800 series reference boards). It doesn't matter.



tbh I never even saw that. Seems crosstalk and other interference are no longer an issue, which is nice, but one chip that close must have lower latency than the others; I wonder how they've handled that. I know there are a lot of wavy traces going from the memory chips on my X1800.


----------



## WarraWarra (Dec 8, 2011)

So VRAM only on one side of the card?? Surely they have VRAM on both sides of the card, or pads for 2x 12 RAM chips??
It would be a waste to have RAM on the back side, with no cooling for it, and only the GPU on the front under the cooling system.

If they follow the same pattern as with the mGPUs, then the top-of-the-range gaming ones will have XDR memory in either 2GB/4GB versions.


----------



## radrok (Dec 8, 2011)

GDDR5 doesn't need that much cooling; my 6990s have some RAM on the back of the PCB and they've never had any problem, even at 1450 MHz.
What's really toasty is the GPU and the VRMs


----------



## Marineborn (Dec 8, 2011)

I've seen the supposed specs on these cards, apparently extremely nice. I'll pick up 2 when they come out, to replace my 2 6870s, which will be sold at that time.


----------



## R_1 (Dec 8, 2011)

Looks and feels cheap! Where is the back plate?


----------



## erocker (Dec 8, 2011)

R_1 said:


> Looks and feels cheap! Where is the back plate?



Since there are no components on the backside, a backplate isn't needed. I'd rather not have a backplate trapping heat on the PCB anyways. It's also an engineering sample; Cayman's ES didn't have a backplate either, while the retail 6970 did.


----------



## Lionheart (Dec 8, 2011)

XDR2 memory could be used for an eyefinity HD7970 version ^_^


----------



## radrok (Dec 8, 2011)

Also as an exclusive on the 7990 to gain more bandwidth?


----------



## badtaylorx (Dec 8, 2011)

R_1 said:


> Looks and feels cheap! Where is the back plate?



what are  you smokin???

i think it looks like they're goin back to a 5XXX style design.....LOVE IT


----------



## DarkOCean (Dec 9, 2011)

badtaylorx said:


> what are  you smokin???
> 
> i think it looks like they're goin back to a 5XXX style design.....LOVE IT



my thoughts exactly.


----------



## Cruise51 (Dec 9, 2011)

*drooling on myself*

8-pin might be for pushing clocks? Looks power hungry but that won't stop me from buying it.


----------



## alexsubri (Dec 9, 2011)

[yt]Vh78T--ZUxY[/yt]


----------



## Animalpak (Dec 9, 2011)

back to red PCB ?


----------



## LAN_deRf_HA (Dec 9, 2011)

badtaylorx said:


> what are  you smokin???
> 
> i think it looks like they're goin back to a 5XXX style design.....LOVE IT



The batmobile was awful and the plastic accents looked like they belonged on a McDonalds happy meal toy. 6xxx was a radical improvement over that POS. What you see here isn't really indicative of anything. They very likely will change the PCB color and give it a back plate, and may possibly even change the cooler accents before this thing rolls out.


----------



## btarunr (Dec 9, 2011)

R_1 said:


> Looks and feels cheap! Where is the back plate?



Where GTX 580's backplate went. Backplate heaven.


----------



## makwy2 (Dec 9, 2011)

I'll be needing a couple of those please.


----------



## Hayder_Master (Dec 9, 2011)

OK, now let's see: first they said it would be XDR2, and now it has 384-bit.
Expect it to have at least a 1200 MHz memory frequency, so how much will the total bandwidth be??
No way it's only PCI-E 3.0, right!


----------



## Mussels (Dec 9, 2011)

384-bit on GDDR5/whatever comes after that sounds quite nice.


----------



## Frizz (Dec 9, 2011)

I never liked their red PCB design, but I am sure there will be plenty to choose from. Keen to see how these go against the current 6xxx series.


----------



## General Lee (Dec 9, 2011)

It's an engineering sample, the final product most likely won't have a red pcb.


----------



## laszlo (Dec 9, 2011)

I'd like to see a green PCB, you know why ......hhhhhh

Just wondering why they need 384-bit, as the currently used 256-bit provided enough bandwidth. Is the GPU not as good as expected, or too powerful?


----------



## 1nf3rn0x (Dec 9, 2011)

I'm throwing money at the screen but  nothings happening?!?!?


----------



## Goodman (Dec 9, 2011)

Found this...

REAL final specs for AMD's GCN lineup. Release date set for December, with shipping products in January.

HD7970 (Tahiti XT)
2048 cores at a 1 GHz clock
3 GB of GDDR5 memory at 5.5 GHz
Memory bandwidth: 264 GB/s
Memory bus width: 384-bit
Texture units: 128
ROPs: 64
Manufacturing process: TSMC 28nm
Price: $449

HD7950 (Tahiti Pro)
1920 cores at a 900 MHz clock
3 GB of GDDR5 memory at 5.0 GHz
Memory bandwidth: 240 GB/s
Memory bus width: 384-bit
Texture units: 120
ROPs: 60
Manufacturing process: TSMC 28nm
Price: $349

HD7990 (New Zealand): HD7970 X2, double all specs, "March release"
6 GB of GDDR5 memory at 5.5 GHz
Price: $699

LINK--> http://www.brightsideofnews.com/new...-mix-gcn-with-vliw4--vliw5-architectures.aspx

Another site says this...







LINK--> http://www.asrotech.com/2011/09/07/radeon-hd-7900-series-will-be-released-in-q1-of-2012/
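Those bandwidth figures are at least internally consistent: peak GDDR5 bandwidth is the bus width in bytes times the effective data rate. A quick check against the rumored numbers above (nothing here is official):

```python
def peak_bandwidth_gbs(bus_width_bits: int, effective_rate_gtps: float) -> float:
    """Peak memory bandwidth in GB/s: bytes per transfer x transfers per second."""
    return bus_width_bits / 8 * effective_rate_gtps

print(peak_bandwidth_gbs(384, 5.5))  # rumored HD7970: 264.0 GB/s
print(peak_bandwidth_gbs(384, 5.0))  # rumored HD7950: 240.0 GB/s
```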


----------



## btarunr (Dec 9, 2011)

Goodman said:


> LINK--> http://www.brightsideofnews.com/new...-mix-gcn-with-vliw4--vliw5-architectures.aspx
> 
> Other Site say this...
> 
> ...



That's an old table from Nordichardware.


----------



## SK-1 (Dec 9, 2011)

I always like the odd-numbered ATI series for some strange reason... So it sounds like time to upgrade soon (budget permitting)


----------



## AsRock (Dec 9, 2011)

If the screw holes in the VRM area stay like that (say I was in the market for a new card), it would be the first one I get with a 3rd-party cooler on it, as I like being able to screw the VRM heatsinks on and not rely on sticky tape.

So if this is the 7970, I really hope they change it back to how they normally are. I think there is a good chance of them changing it, as the 6970's VRMs have much less of a heat issue than the cards before.


----------



## Lionheart (Dec 9, 2011)

LAN_deRf_HA said:


> The batmobile was awful and the plastic accents looked like they belonged on a McDonalds happy meal toy. 6xxx was a radical improvement over that POS. What you see here isn't really indicative of anything. They very likely will change the PCB color and give it a back plate, and may possibly even change the cooler accents before this thing rolls out.



That's your negative opinion, they are beast!


----------



## radrok (Dec 9, 2011)

I agree with Lionheart, I liked the design on my old 5970 a lot, it was awesomely nasty if you get what I mean 
The HD6990's box cooler is just, well... a box cooler with a sticker and a NOISY fan :| (not that the 5970 was quiet, eh)


----------



## LAN_deRf_HA (Dec 9, 2011)

Do these cores have any IPC improvements or can they be directly compared to the 69xx cards?


----------



## leonard_222003 (Dec 9, 2011)

SK-1, and everyone that says upgrade: FOR WHAT? Except for 1-2 games with an absurd level of detail that you can't even spot, there is no game that will need all that power.
PCs now have like over xxx times the power of a console and they still lack games that truly use all that power, again except for 1-2 games.
We are in for a shock when a new generation of consoles arrives with an HD 6850 or GTX 460 and blows away everything PCs had until that day; we will probably need a CrossFire of these things to play those console games on medium.


----------



## cdawall (Dec 9, 2011)

leonard_222003 said:


> SK-1, and everyone that says upgrade: FOR WHAT? Except for 1-2 games with an absurd level of detail that you can't even spot, there is no game that will need all that power.
> PCs now have like over xxx times the power of a console and they still lack games that truly use all that power, again except for 1-2 games.
> We are in for a shock when a new generation of consoles arrives with an HD 6850 or GTX 460 and blows away everything PCs had until that day; we will probably need a CrossFire of these things to play those console games on medium.



I hope they finally offer a performance upgrade from my pair of 3870X2s or my GTX470. If they do, I will upgrade, as the 3870X2s are finally showing their age and being let down by a lack of memory. As for consoles blowing anything away: the PS3 has an NV "7800GTX", and on release that wasn't the top dog anymore. I doubt next-gen consoles will be any different. They will have good video, but it will never be better than PC, and it never has been.


----------



## robal (Dec 9, 2011)

leonard_222003 said:


> We are in for a shock when a new generation of consoles arrives with an HD 6850 or GTX 460 and blows away everything PCs had until that day; we will probably need a CrossFire of these things to play those console games on medium.



I'm looking forward to it.

Maybe I'm naive, but I think next-gen consoles will have more in common with PCs (graphics API, e.g. Xbox and Windows) and thus PC ports will have better relative performance.


----------



## Mussels (Dec 9, 2011)

laszlo said:


> I'd like to see a green PCB, you know why ......hhhhhh
> 
> Just wondering why they need 384-bit, as the currently used 256-bit provided enough bandwidth. Is the GPU not as good as expected, or too powerful?



Because the width of the bus and the amount of RAM available are linked.

With the same density memory modules, on a 256-bit bus you'd get 1GB or 2GB of RAM,

yet with that same density on a 384-bit bus, you'd have 50% more RAM:
1.5GB/3GB.


While they may not need the bandwidth, the 50% greater RAM is far more cost-effective than doubling to 2GB on all the cards.
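Mussels' point can be made concrete: with the standard 32-bit interface per GDDR5 chip, the bus width fixes the chip count, and the chip count times per-chip density fixes the capacity options. A small sketch (the 1 Gbit / 2 Gbit densities are just the common parts of the day, purely illustrative):

```python
BITS_PER_CHIP = 32  # standard GDDR5 data interface width per chip

def capacity_options_gb(bus_width_bits: int, densities_gbit=(1, 2)):
    """Possible VRAM sizes in GB, assuming one chip per 32-bit channel."""
    chips = bus_width_bits // BITS_PER_CHIP
    return [chips * density / 8 for density in densities_gbit]  # Gbit -> GB

print(capacity_options_gb(256))  # [1.0, 2.0] -> 1 GB or 2 GB
print(capacity_options_gb(384))  # [1.5, 3.0] -> 1.5 GB or 3 GB
```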


----------



## catnipkiller (Dec 9, 2011)

Do i see made in Canada or am i just high?


radrok said:


> Also found this magnification
> http://img839.imageshack.us/img839/5559/pc070155.jpg


----------



## Assimilator (Dec 9, 2011)

What's the bet they're running those cards in a Faildozer system...


----------



## cadaveca (Dec 9, 2011)

catnipkiller said:


> Do i see made in Canada or am i just high?



AMD has a small fab line in Markham, Ontario.


----------



## dir_d (Dec 9, 2011)

Fab it out at home, then send the blueprints to TSMC.


----------



## cadaveca (Dec 9, 2011)

dir_d said:


> Fab it out at home, then send the blueprints to TSMC.



Yeah. In the past, I think that might have been the reason why rumours had ATI cards doing really well... then they didn't. Perhaps AMD's own line had great results, but when they moved over to TSMC, things changed. Makes a little bit of sense, but who knows.

All I know is that I have no idea what to expect out of these cards any more. The good thing about that, though, is that it makes them all that much more exciting. I can't wait to get one or two!


----------



## pantherx12 (Dec 9, 2011)

LAN_deRf_HA said:


> Do these cores have any IPC improvements or can they be directly compared to the 69xx cards?



Well, supposedly the new highest-end cards use "Graphics Core Next", whatever that means 


So I expect an IPC improvement, especially if you consider the stream processor count hasn't increased by the usual amount you would see from a smaller fab process.

Or they've got some other goodies planned. They always wanted to put sideport memory* on the GPU, for example, but could never justify the die space requirement on larger fab processes; maybe these are the chips that finally have it.

*Might have got the name mixed up with something else AMD does, but it's essentially on-die memory, which would really help with math performance amongst other things.

(or so AMD thinks)



By the by, just to +1 all the ES comments: I doubt the PCB will be red, and I doubt it will need 2x 8-pin. They just do that so they can try a wide variety of voltages etc. with one ES and find the sweet spot.


----------



## techtard (Dec 9, 2011)

I vote we hype the shit out of the 7xxx series so the internet explodes into rage again when the product fails to live up to the hype.

Or, we can wait for reviews. Shouldn't be too long now.


----------



## dir_d (Dec 9, 2011)

cadaveca said:


> Yeah. In the past, I think that might have been the reason why rumours had ATI cards doing really well... then they didn't. Perhaps AMD's own line had great results, but when they moved over to TSMC, things changed. Makes a little bit of sense, but who knows.
> 
> All I know is that I have no idea what to expect out of these cards any more. The good thing about that, though, is that it makes them all that much more exciting. I can't wait to get one or two!



I don't know what to expect either; I was truly convinced it would have taken them until at least the end of the 2nd quarter to get these cards out with the new architecture. I have no clue how they will perform.


----------



## alexsubri (Dec 9, 2011)

I thought XDR2 was on hold


----------



## SK-1 (Dec 9, 2011)

leonard_222003 said:


> SK-1 and everyone that says upgrade , FOR WHAT ? except 1-2 games with absurd level of detail that you can't even spot  ,  there is no game that will need all that power.



I understand where you're coming from. I was mostly thinking about the added Tessellation power and better AA in these newer cards.


----------



## Horrux (Dec 10, 2011)

cadaveca said:


> I understand that iommu doesn't need a chip. that was my point.
> 
> I've been talking about iommu for much longer than nearly anyone else. This has to be the most exciting thing about GCN for me. Most don't even have a clue what it'd be used for.



Please do explain, I am ignorant of this.


----------



## SonDa5 (Dec 10, 2011)

I hope AMD does a good job with these.


----------



## btarunr (Dec 10, 2011)

cadaveca said:


> AMD has a small fab line in Markham, Ontario.



They have a small facility with PCB printers and placers, only they're very small-scale. ES GPUs are still sourced from TSMC, and other components are still bought from Asia. Those cards you see could very well have cost AMD $20,000~$50,000 a piece to make (I'm obviously not including the R&D costs of Tahiti).

Once the ES card designs are tested stable in some of the most atrocious conditions (overheating, high humidity, a sucky PSU, Furmark, etc.), they become qualification samples, and are produced in slightly larger numbers to send to ODMs. PCPartner and TUL are the main upstream ODMs; AIB partners buy from them, place their stickers, handle all the shipping/regulation/warranty stuff, and resell. 

Think of AMD and the component makers as crude sourcing and shipping companies, the ODMs as oil refineries, and the AIBs as oil marketing companies.


----------



## cdawall (Dec 10, 2011)

Assimilator said:


> What's the bet they're running those cards in a Faildozer system...



"Faildozer" is still faster than what you run. It is not a bad chip, it is just not the "fastest." It is no more of a fail than a Phenom II, Athlon II, Core i5, or anything else that isn't the fastest on the market. As for running it on a Dozer-based system: it's on a Crosshair board, look at the pictures, it's kind of hard to miss, not to mention the OEM AMD heatpipe cooler. Way to flamebait completely off-topic, though; thanks for the vast insight you have given us into AMD's new series of GPUs.


----------



## Mussels (Dec 10, 2011)

cdawall said:


> "Faildozer" is still faster than what you run. It is not a bad chip, it is just not the "fastest." It is no more of a fail than a Phenom II, Athlon II, Core i5, or anything else that isn't the fastest on the market. As for running it on a Dozer-based system: it's on a Crosshair board, look at the pictures, it's kind of hard to miss, not to mention the OEM AMD heatpipe cooler. Way to flamebait completely off-topic, though; thanks for the vast insight you have given us into AMD's new series of GPUs.



not to mention that if you look at DX11 gaming reviews (gaming future as opposed to gaming present) the extra multi threading makes bulldozer quite powerful, especially in multi GPU setups.


----------



## TheoneandonlyMrK (Dec 10, 2011)

Some people see just green, blue or red though, eh, and not all three. Does 12 memory chips definitely mean 1.5 or 3 gig, people?

I'm thinking: could they not use 256 bits of it for GPU vmem and 128 bits for the IOMMU, as in 128 bits just for a direct link to memory or something? Dunno, just thinking out loud.


----------



## Mussels (Dec 10, 2011)

theoneandonlymrk said:


> Some people see just green, blue or red though, eh, and not all three. Does 12 memory chips definitely mean 1.5 or 3 gig, people?
> 
> I'm thinking: could they not use 256 bits of it for GPU vmem and 128 bits for the IOMMU, as in 128 bits just for a direct link to memory or something? Dunno, just thinking out loud.



I have no idea what an IOMMU is, but current memory design for motherboards and GPUs definitely shows that bus width and memory amounts are tied together.


----------



## Horrux (Dec 10, 2011)

Mussels said:


> i have no idea what IOMMU is, but current memory design for motherboards and GPU's definitely shows that bus width and memory amounts are tied together.



Yeah, can someone explain this IOMMU business?


----------



## TheoneandonlyMrK (Dec 10, 2011)

Horrux said:


> Yeah can someone explain this iommu businesss?



I don't remember much of what I've read right now, but IMHO it's increased memory compatibility between the graphics card and the mobo/OS, with the graphics card able to control memory as the CPU does, to some extent. And as far as I'm thinking, they already have dual DMA buses on Cayman GPUs; if they added a third, that might increase the bus width without increasing the memory footprint. I may be getting confused though, eh.


----------



## cadaveca (Dec 10, 2011)

Horrux said:


> Yeah can someone explain this iommu businesss?



http://www.techpowerup.com/forums/showthread.php?t=156426


----------



## Horrux (Dec 10, 2011)

I posted the question in that thread, but maybe it is more appropriate here, come to think of it:

What's the (new) IOMMU gonna do for us?


----------



## cadaveca (Dec 10, 2011)

It can provide the ability for a large shared system cache for GPGPU and gaming, once drivers are sorted out, as well as for the OS.

Many users, myself included, have found AMD inadequate for multi-GPU use. Investigating the issue reveals that the AMD CPUs lack enough PCIe-to-system-RAM bandwidth via AMD's onboard memory controller. At the same time, AMD's CPUs only use about 65% of the bandwidth the DIMMs they use support. The IOMMU could perhaps take advantage of that 35% of bandwidth left over, and make 3D performance much better on AMD platforms with multiple VGAs.

As well, it could allow VGAs to share pooled resources in local data caches. In other words, you could have one GPU access data in the other card's memory space, making the onboard VGA RAM on multiple cards a total space rather than a duplicated space. For example, the HD6990 has 4 GB of RAM, but only 2 GB usable effectively; the IOMMU could make it 4 GB of usable space.

Those are possibilities. Until the cards come out, and AMD starts talking more about them, we'll not know for sure what the IOMMU will truly offer. What I can say is that no "consumer" OS other than Linux actually supports IOMMUs, so I remain hesitant to guess what will happen.


----------



## Horrux (Dec 10, 2011)

cadaveca said:


> It can provide the ability for a large shared system cache for GPGPU and gaming, once drivers are sorted out, as well as for the OS.
> 
> Many users, myself included, have found AMD inadequate for multi-GPU use. Investigating the issue reveals that the AMD CPUs lack enough PCIe-to-system-RAM bandwidth via AMD's onboard memory controller. At the same time, AMD's CPUs only use about 65% of the bandwidth the DIMMs they use support. The IOMMU could perhaps take advantage of that 35% of bandwidth left over, and make 3D performance much better on AMD platforms with multiple VGAs.
> 
> ...



OK now that is clearer and indeed this new IOMMU seems to hold much promise. Exciting times.


----------



## Benetanegia (Dec 10, 2011)

To the extent of my (limited) knowledge, GART is a common space for both GPUs, so nothing changes in that regard (other than allowing the GPUs to access more memory); it's the VRAM that needs to be replicated in a multi-GPU situation, not system RAM, whether in GART space or otherwise. VRAM access is a lot faster than main memory access (plus PCIe access), so using/relying on a common pool like that in main memory could possibly degrade performance rather than improve it.



cadaveca said:


> Investigating the issue reveals that the AMD CPUs lack enough PCIe-to-System Ram bandwidth via AMD's onboard memory controller.



Then the answer is right there. New IOMMU or GART, both will communicate through PCIe so that's a dead end, like I said above.

And of course the whole thing becomes even more irrelevant for graphics when you consider that the new cards will have 3 GB of memory. With so much memory, and memory bandwidth to boot, the last thing you want to do is move data to and from main memory.



> For example, the HD6990 has 4 GB of RAM, but only 2 GB usable effectively; the IOMMU could make it 4 GB of usable space.



Where did they say that one GPU can read vram of the other one? Plus why would you want to do that in the first place? It would NOT help graphics performance at all (GPGPU that's another thing). Graphics performance entirely depends on the bandwidth/availability/lag between the GPU and its own vram, controlled by its own memory controller. As long as you move anything from vram to any other memory pool performance can and most probably will degrade.


----------



## cadaveca (Dec 10, 2011)

Benetanegia said:


> Then the answer is right there. New IOMMU or GART, both will communicate through PCIe so that's a dead end, like I said above.



That's not exactly the issue with AMD CPUs though. It's not really the PCIe that is the problem, nor is HTT (what goes from the PCIe controller on the chipset to the CPU NB). 

Although, you may be right, just not on where the bottleneck occurs (probably due to my poor explanation). It depends on how the IOMMU interfaces with the CPU memory controller. The bottleneck could simply be occurring because of how GART is dealt with. You only have 256 MB of GART space in system ram, which means the controller is constantly writing to the GART space from system ram due to its limited size. Allowing for a larger buffer size would mean fewer writes to GART, which can boost performance as the CPU doesn't have to copy from system RAM to GART. Same thing with sharing VGA ram... maybe you should check last year's Fusion Summit presentation and it might give you a better idea.
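The point about the fixed 256 MB GART window can be put in rough numbers with a toy model (the refill-count function and the 2 GB working set below are purely illustrative assumptions, not how the driver actually manages GART):

```python
import math

def gart_refills(working_set_mb, gart_window_mb):
    """Toy model: how many times a fixed-size GART window must be
    refilled to stream a working set through it (ignores data reuse)."""
    return math.ceil(working_set_mb / gart_window_mb)

# Streaming a hypothetical 2 GB working set through the window:
print(gart_refills(2048, 256))   # 8 refills with the 256 MB GART space
print(gart_refills(2048, 1024))  # 2 refills with a larger buffer
```

Fewer refills means fewer CPU copies from system RAM into the GART space, which is the kind of saving a larger buffer (or a smarter IOMMU) could provide.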

Anyway, we should be discussing this in the thread created for it.


----------



## TheoneandonlyMrK (Dec 10, 2011)

12x128 Mb chips would equal 2gig im only speculating from rumours i heard and only sayin it cos no one else is

http://www.xbitlabs.com/news/memory...sung_Begins_to_Produce_7GHz_GDDR5_Memory.html

Samsung's new GDDR5, 7 GHz capable.

and with rumours, imho, I see a 256-bit memory bus between the gfx GPU and mem, plus a separate sideband sort of 128-bit IOMMU bus, making what they will call a 384-bit bus, and more system-to-GPU bandwidth utilised

like I say, a 3rd on-die DMA, maybe 4 in total



Mussels said:


> i have no idea what IOMMU is, but current memory design for motherboards and GPU's definitely shows that bus width and memory amounts are tied together.



AMD have not been shy about making drastic, marked hardware changes lately; take APUs, BD and GCN for example, all marked changes from what went before.

no facts support this, just speculating


----------



## Benetanegia (Dec 10, 2011)

cadaveca said:


> That's not exactly the issue with AMD CPUs though. It's not really the PCIe that is the problem, nor is HTT(what goes from PCIe controller on chipset to CPU NB).
> 
> Although, you may be right, just not on where the bottleneck occurs(probably due to my poor explanation ). It depends on how the IOMMU interfaces with the CPU memory controller. The bottleneck could simply be occuring beucase of how GART is dealt with. You only have 256 MB of GART space in system ram, which means the contorller is constantly writing to the GART space from system ram due to it's limited size. Allowing for a large buffer size would mean less writes to GART, which can boost performance as the CPU doesn't have to copy from System RAM to GART. Same thing with sharing VGA ram...maybe you should check last year's Fusion Summit presentation and it might give you a better idea.
> 
> Anyway, we should be discussing this in the thread created for it.



You're probably right there, but the benefit would not make the vram replication issue disappear though, and the benefit would be there for single GPU too. Also that wouldn't make the multi-GPU situation you described any more appealing either. You want everything on local vram, as much as possible. Remember, just because the GPU can access any *virtual* address thanks to the IOMMU, the physical memory path still exists, and you don't want anything graphics-related to be read "directly" from main memory or, in case it's possible, from the other GPU's vram.



theoneandonlymrk said:


> 12x128 Mb chips would equal 2gig im only speculating from rumours i heard and only sayin it cos no one else is



12x128 == 1536
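The arithmetic behind that correction, as a quick sketch (the chip count and densities are the rumoured figures, not confirmed specs):

```python
# 12 memory chips at 1 Gbit (128 MB) each -- the density assumed above.
chips = 12
mb_per_chip_1gbit = 128

print(chips * mb_per_chip_1gbit)  # 1536 MB, i.e. 1.5 GB -- not 2 GB

# With 2 Gbit (256 MB) chips instead, the same 12 pads would give 3 GB.
print(chips * 256)  # 3072 MB
```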


----------



## TheoneandonlyMrK (Dec 10, 2011)

Benetanegia said:


> 12x128 == 1536




damn, too much. I'm still pondering the other bit though


----------



## Goodman (Dec 11, 2011)

Have your salt ready...







Link--> http://www.geeks3d.com/20111209/amd-radeon-hd-7970-tahiti-xt-pictures-benchmarks/

True or not, it seems believable.


----------



## crazyeyesreaper (Dec 11, 2011)

gotta say that's FUD lol. That article is still referencing PCIe 3.0 and XDR2. I doubt the 7000 series will be PCIe 3.0; I say this because AMD's own chipset coming out NEXT year is still PCIe 2.0.


----------



## pantherx12 (Dec 11, 2011)

crazyeyesreaper said:


> gotta say that's FUD lol. That article is still referencing PCIe 3.0 and XDR2. I doubt the 7000 series will be PCIe 3.0; I say this because AMD's own chipset coming out NEXT year is still PCIe 2.0.




At least they've gone with more reasonable results than usual 


If it is true they must have given tessellation a big boost, as Metro is over double the performance.


----------



## mastrdrver (Dec 11, 2011)

theoneandonlymrk said:


> 12x128 Mb chips would equal 2gig im only speculating from rumours i heard and only sayin it cos no one else is
> 
> http://www.xbitlabs.com/news/memory...sung_Begins_to_Produce_7GHz_GDDR5_Memory.html
> 
> ...



AMD is not going to use 7 GHz chips. They take more voltage than the ones that are used on the 6970. Also realize that Tahiti, with its 384-bit bus, will not be clocked as high as previous AMD cards, since the wider bus puts a larger strain on the controller. It would require more voltage to run the wider bus at previous AMD GPU memory speeds.

Besides, with the wider bus you do not need the higher clock just to get the bandwidth up.
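That trade-off can be put into numbers; a rough sketch using the standard peak-bandwidth arithmetic (the per-pin rates below are illustrative, not confirmed Tahiti clocks):

```python
def peak_bandwidth_gbs(bus_width_bits, gbps_per_pin):
    """Peak memory bandwidth in GB/s: bus width in bytes x per-pin data rate."""
    return bus_width_bits / 8 * gbps_per_pin

# A 384-bit bus at a modest 5.5 Gbps already beats 256-bit at 7 Gbps:
print(peak_bandwidth_gbs(384, 5.5))  # 264.0 GB/s
print(peak_bandwidth_gbs(256, 7.0))  # 224.0 GB/s
```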


----------



## crazyeyesreaper (Dec 11, 2011)

pantherx12 said:


> At least they've gone with more reasonable results than usual
> 
> 
> If it is true they must have given tessellation a big boost, as Metro is over double the performance.



Metro doesn't use tessellation that much; it uses less tessellation than every other DX11 game that uses it. What hits GPUs hard in Metro is lighting, shadows and Direct Compute.
Lighting hits it hard because it uses subsurface scattering. Think of it this way: put a flashlight under your fingers, and the red you see is light passing through the upper layers of skin. Metro uses that technique among others.


----------



## Benetanegia (Dec 11, 2011)

The thing that hits my GTX 460 hardest in Metro is the DX11 depth of field, by far. Without tessellation and DOF I get around 40 fps. Enabling tessellation brings it down to 35 fps or so. DOF (w/o tessellation) brings it down to 25. With both enabled it's 20 fps or so.

I don't know if DOF is a real D3D11 feature or if it uses compute.

There are many ways in which Metro could see a huge improvement from Tahiti, but I agree that tessellation doesn't seem like one. Even the much, much higher memory bandwidth makes more sense to me.


----------



## crazyeyesreaper (Dec 11, 2011)

Depth of Field in Metro 2033 uses Direct Compute. 

It's effective, but as you can tell, no current GPUs are really ready for full-on Direct Compute tasks in real time while running a game.  

DOF via Direct Compute has no improvement in quality vs current methods but has an insane performance impact. Essentially though, Direct Compute does allow them to add more to the render pipeline without making it longer; that's the real benefit, so eventually this will become a good thing. Currently GPUs aren't powerful enough for it.


----------



## pantherx12 (Dec 11, 2011)

crazyeyesreaper said:


> Metro doesn't use tessellation that much; it uses less tessellation than every other DX11 game that uses it. What hits GPUs hard in Metro is lighting, shadows and Direct Compute.
> Lighting hits it hard because it uses subsurface scattering. Think of it this way: put a flashlight under your fingers, and the red you see is light passing through the upper layers of skin. Metro uses that technique among others.



I stand corrected. Well, in that case, if this is real, math performance has gone up?


----------



## crazyeyesreaper (Dec 11, 2011)

Correct. If they're benching Metro with DOF on, then it is possible that a 7970 could see that kind of performance gain.

And since a stock 6970 with all settings turned on gets me around that same mark performance-wise, then yes, at least the 6970 numbers on that chart look correct. And should GCN be targeted, like CUDA cores are, towards improved mathematical performance and supercomputing-type tasks, then yes, Direct Compute effects would no longer stress the architecture. But again, I still call FUD on that graph in its entirety.


----------



## pantherx12 (Dec 11, 2011)

crazyeyesreaper said:


> Correct. If they're benching Metro with DOF on, then it is possible that a 7970 could see that kind of performance gain.
> 
> And since a stock 6970 with all settings turned on gets me around that same mark performance-wise, then yes, at least the 6970 numbers on that chart look correct. And should GCN be targeted, like CUDA cores are, towards improved mathematical performance and supercomputing-type tasks, then yes, Direct Compute effects would no longer stress the architecture. But again, I still call FUD on that graph in its entirety.



Well after bulldozer I know to never get overly excited again that's for sure 

Assuming everything does go well and they come out with decent performance I'll probably grab a 7970.

Last top end card I had was 4870 and well, it was only really midrange XD


----------



## techtard (Dec 12, 2011)

The 4870 was a beast. Mid range prices, top end grunt. I have a friend who still uses a pair in crossfire. I sold him mine and went 5850 for an inexpensive upgrade.

I haven't upgraded yet, surprisingly my ddr2 based AM2+ rig is still getting the job done so far. 
Granted, I don't play the most demanding of the newer games.

Depending on how my rig runs SW:TOR I may just keep this beast until it dies.

Waiting to see what AMD has up its sleeve for the 7xxx series, and nVidia for their next GTX series.


----------



## mediasorcerer (Dec 12, 2011)

Good to see you here again tard. I'm waiting too, why not?


----------



## pantherx12 (Dec 12, 2011)

techtard said:


> The 4870 was a beast. Mid range prices, top end grunt. I have a friend who still uses a pair in crossfire. I sold him mine and went 5850 for an inexpensive upgrade.
> 
> I haven't upgraded yet, surprisingly my ddr2 based AM2+ rig is still getting the job done so far.
> Granted, I don't play the most demanding of the newer games.
> ...



I disliked mine, I had it for a week and sold the entire system that it was with 

Had more fun with a 9800 gt, and then a 3800 series crossfire set up XD


----------



## amadzack (Dec 21, 2011)

I don't like the cheap red PCB... wait, where's the backplate?


----------



## Nick89 (Dec 21, 2011)

So how much memory will it have? 1536 MB? It has 12 memory chips, and if each chip is 128 megabytes, that equals 1536 MB of RAM.


----------



## cadaveca (Dec 21, 2011)

3 GB the rumours state, not 1536 MB. Apparent launch is tomorrow, so we'll find out the details then.
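Reconciling the 12 visible chip positions with a rumoured 3 GB only works out if each chip is 2 Gbit rather than 1 Gbit; a quick sketch of that assumption:

```python
# 12 chips x 2 Gbit each, converted from Gbit to MB (1 Gbit = 128 MB).
chips = 12
gbit_per_chip = 2

total_mb = chips * gbit_per_chip * 128
print(total_mb)  # 3072 MB = 3 GB
```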


----------



## Delta6326 (Dec 22, 2011)

amadzack said:


> I don't like the cheap red PCB... wait, where's the backplate?



Don't worry, it should have a black PCB; the red is just for the sample. No need for a backplate.


----------



## cadaveca (Dec 22, 2011)

Delta6326 said:


> No need for backplate.



I think probably 80% of people don't care if the backplate is functional or not. Bare PCBs suck, and backplates are cheap.


----------

