# AMD Staring at 140W Barrier with Phenom II X4 965?



## btarunr (Jul 10, 2009)

Two of AMD's biggest setbacks with the 65 nm Phenom X4 series were 1. the TLB erratum fiasco with the B2 revision of the chip, and 2. the virtual TDP wall it hit with the 2.60 GHz Phenom X4 9950, at 140W. At that wattage, several motherboards were rendered incompatible with the processor because they lacked power circuitry that could handle it. The company eventually worked out a lower-wattage 125W variant of that chip, and never released a higher-clocked processor based on the core. 

MSI published the complete CPU support list of a new BIOS for its 790GX-G65 motherboard a little early, revealing quite a bit about unreleased AMD processors. At the bottom of the list sits the Phenom II X4 965. This 3.40 GHz quad-core chip will succeed the Phenom II X4 955 as AMD's next flagship desktop offering. Its TDP is an alarming 140W. Alarming, because this is a chip with a mere 2-unit bus multiplier increment over the Phenom II X4 940, the launch vehicle for AMD's 45 nm client processor lineup. There are, however, two things to cheer about. RB-C2 is not going to be the only revision of this core: future revisions could bring the TDP down, or at least ensure that clock speeds of future models keep escalating while respecting the 140W mark, and a future variant of the Phenom II X4 965 could come with a reduced TDP rating. The list interestingly also goes on to reveal that AMD will have a 95W version of the 3.00 GHz Phenom II X4 945. 
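For context, these clock speeds fall directly out of the 200 MHz HT reference clock times the CPU multiplier; a quick sketch (the multipliers below are inferred from the clock speeds in the story, not from any AMD document):

```python
# K10 core clock = 200 MHz reference clock x CPU multiplier.
# Multipliers here are inferred from the clock speeds in the story.
REF_CLOCK_MHZ = 200

multipliers = {
    "Phenom II X4 940": 15,  # 3.00 GHz launch part
    "Phenom II X4 955": 16,  # 3.20 GHz current flagship
    "Phenom II X4 965": 17,  # 3.40 GHz, per MSI's CPU support list
}

for model, mult in multipliers.items():
    ghz = mult * REF_CLOCK_MHZ / 1000
    print(f"{model}: {mult} x {REF_CLOCK_MHZ} MHz = {ghz:.2f} GHz")
```

Note that the 965's multiplier sits only two steps above the 940's, which is why the jump to 140W looks so steep.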





*View at TechPowerUp Main Site*


----------



## FordGT90Concept (Jul 11, 2009)

Jesus, that's more than a Core i7 965 with HT and Turbo on. 

I hope they don't start a TDP war.


----------



## TheGuruStud (Jul 11, 2009)

They need to bin (I don't really mean bin, but w/e) these chips better. You can run 3.4 GHz on what, 1.25 V? 

If they'd drop the volts on the BEs, then they wouldn't have to worry about high TDP at stock clock. Noobs.


----------



## ShadowFold (Jul 11, 2009)

Bring on the HEEEEATTTT


----------



## D4S4 (Jul 11, 2009)

AMD's Prescott


----------



## Kitkat (Jul 11, 2009)

Yeah, see, I knew they already had (from a previous announcement) a lowered (required) TDP; I was unsure if the 965 would have it, and I guess not, but my 955 is OK for now. Like I said in a previous post, I'd like to see a 975; I thought they'd skip the 965 anyway. But good info on the revision, sounds nice. I also hear that half the info that's out about it is false, even what mobo manufacturers are posting (from an interview I read on AMDZone, I believe), but even that was weeks ago. I think the 975 will have all the upgrades most are looking for. As far as it being incompatible with some mobos, most who buy this chip won't care; BEs were never meant for lower-end boards anyway (even the lower-TDP ones), and those people know what they get themselves into when they buy a low-end board and a high-end chip (at least I hope they do). It only means we can keep our 955s for another 3-12 months, lol; that's usually the time it takes.


----------



## snakeoil (Jul 11, 2009)

Phenom II is a power-efficient architecture; Intel's Core i7, on the other hand, is a certified power hog. Temps under load are 80°C for a Core i7 at stock speed with the stock cooler, while a Phenom II is just 45°C under load at stock speed with the stock cooler. Everybody that has a Core i7 has to suffer the heat and the price (like in hell), while Phenom II users are cool and have money in the wallet.


----------



## TheGuruStud (Jul 11, 2009)

D4S4 said:


> AMD's Prescott



C'mon now. It's nowhere near a 200W TDP. (I'm serious, Intel lied that badly back then.)


----------



## a_ump (Jul 11, 2009)

snakeoil said:


> Phenom II is a power-efficient architecture; Intel's Core i7, on the other hand, is a certified power hog. Temps under load are 80°C for a Core i7 at stock speed with the stock cooler, while a Phenom II is just 45°C under load at stock speed with the stock cooler. Everybody that has a Core i7 has to suffer the heat and the price (like in hell), while Phenom II users are cool and have money in the wallet.



Power efficiency-wise, the Phenom II wins. But money-wise, a Phenom II build and an i7 920 build are very close now, close enough to make the price-difference argument negligible. 









$50 difference, and the Intel build has 2GB more RAM, so if they went with 3GB the builds would be even closer in price. The 1GB difference isn't important, as anyone getting 4GB or less usually goes with a 32-bit OS, so the usable RAM for the AMD build with 4GB would still be 3GB-3.5GB.


----------



## tkpenalty (Jul 11, 2009)

It's almost as bad as a Prescott, minus the diminishing performance returns (anything over 3.6 GHz = no perf increase whatsoever for Prescotts). AMD really needs to work on a new architecture instead of a die shrink, because die shrinks don't really help with high TDPs, especially if the TDP is down to the way the architecture works. Die shrinks are basically a stopgap in this case.



a_ump said:


> Power efficiency-wise, the Phenom II wins. But money-wise, a Phenom II build and an i7 920 build are very close now, close enough to make the price-difference argument negligible.
> http://img.techpowerup.org/090710/Intel174.jpg
> http://img.techpowerup.org/090710/AMD139.jpg
> $50 difference, and the Intel build has 2GB more RAM, so if they went with 3GB the builds would be even closer in price. The 1GB difference isn't important, as anyone getting 4GB or less usually goes with a 32-bit OS, so the usable RAM for the AMD build with 4GB would still be 3GB-3.5GB.



Power efficiency doesn't mean lower power usage; it means how much performance you get for the power you use. The Phenom is less efficient than the i7, which also has an integrated memory controller.


----------



## snakeoil (Jul 11, 2009)

a_ump said:


> Power efficiency-wise, the Phenom II wins. But money-wise, a Phenom II build and an i7 920 build are very close now, close enough to make the price-difference argument negligible.
> http://img.techpowerup.org/090710/Intel.jpg
> http://img.techpowerup.org/090710/AMD.jpg
> 
> $50 difference, and the Intel build has 2GB more RAM, so if they went with 3GB the builds would be even closer in price. The 1GB difference isn't important, as anyone getting 4GB or less usually goes with a 32-bit OS, so the usable RAM for the AMD build with 4GB would still be 3GB-3.5GB.



Maybe if you use the crappiest parts available, and not everywhere. You can't deny that Core i7 is more expensive if you use quality parts, and because it runs very hot you need a good cooler and a well-ventilated case, which makes it more expensive. Could you reduce the size of your post, please?


----------



## ShadowFold (Jul 11, 2009)

a_ump said:


> Power efficiency-wise, the Phenom II wins. But money-wise, a Phenom II build and an i7 920 build are very close now, close enough to make the price-difference argument negligible.
> http://img.techpowerup.org/090710/Intel174.jpg
> http://img.techpowerup.org/090710/AMD139.jpg
> $50 difference, and the Intel build has 2GB more RAM, so if they went with 3GB the builds would be even closer in price. The 1GB difference isn't important, as anyone getting 4GB or less usually goes with a 32-bit OS, so the usable RAM for the AMD build with 4GB would still be 3GB-3.5GB.








$100 cheaper and the same overclocking performance.


----------



## tkpenalty (Jul 11, 2009)

snakeoil said:


> Maybe if you use the crappiest parts available, and not everywhere. You can't deny that Core i7 is more expensive if you use quality parts, and because it runs very hot you need a good cooler and a well-ventilated case, which makes it more expensive. Could you reduce the size of your post, please?





snakeoil said:


> Phenom II is a power-efficient architecture; Intel's Core i7, on the other hand, is a certified power hog. Temps under load are 80°C for a Core i7 at stock speed with the stock cooler, while a Phenom II is just 45°C under load at stock speed with the stock cooler. Everybody that has a Core i7 has to suffer the heat and the price (like in hell), while Phenom II users are cool and have money in the wallet.



Intel's CPUs only read so "warm" because of incorrect temperature readings from programs such as CoreTemp, which almost never address the Tjunction temps being 15 (or 25) or so degrees off from the real readings. But yeah, it's slightly warm, though nothing to fret over (80°C? BS, the CPU can't even run at that temperature without shutting itself down). Secondly, the stock cooler is pure CRAP. But comparably, an i7 doesn't have to resort to high TDPs just to blow any Phenom II out of the water (slight OC). It's only because AMD supplies a slightly better CPU that they don't run so warm. 

You don't seem to mention the performance difference between the i7 and the PII.

Okay people, note that we're less than one percent of this market's consumers. From what I can see, AMD is mainly used in the value segment, not really performance, while the higher-end offerings are typically Intel (OEMs).


----------



## snakeoil (Jul 11, 2009)

There are a few things that Core i7 users can't deny:

1. they have to suffer the heat
2. they have to suffer the price, which is higher than Phenom II
3. they can't deny that they need a high-end cooler if they want to overclock
4. they can't deny that they need a well-ventilated case, which is expensive
5. they can't deny that the Dragon platform is superior to the Intel platform
6. they can't deny that Intel graphics are a disgrace and a shame, and they're not getting any better
7. they can't deny that Core 2 is an end-of-life, old architecture with a socket soon to be discontinued, and Core i7 is too expensive to replace it

etc.


----------



## a_ump (Jul 11, 2009)

snakeoil said:


> Maybe if you use the crappiest parts available, and not everywhere. You can't deny that Core i7 is more expensive if you use quality parts, and because it runs very hot you need a good cooler and a well-ventilated case, which makes it more expensive. Could you reduce the size of your post, please?



Crappiest parts available? I believe you simply looked at the price and refused to believe they were that close. I picked the best-rated parts for each build on Newegg that weren't ridiculous. Everything on the AMD build is 5 eggs with plenty of reviews; the same goes for the Intel build, excluding the motherboard, which has a 4-egg rating. That whole "need a well-ventilated case and CPU cooler" excuse is pointless to mention. Who buys a top-of-the-line CPU (either build) without purchasing a decent case anyway, or an aftermarket cooler to add to it? If you're between an AMD and an i7 purchase, is changing the CPU going to change the case you'd pick? No, you're going to pick the same case no matter the build.

I simply tried to back up my point with screenies of the 3 different parts between an i7 and an AMD build. You only typed up words. However, ShadowFold actually backed up his claims with a screeny of an AMD build that was cheaper, instead of a post that only contained text. I stand corrected price-wise, but not on efficiency per dollar.


----------



## Dippyskoodlez (Jul 11, 2009)

TheGuruStud said:


> They need to bin (I don't really mean bin, but w/e) these chips better. You can run 3.4 GHz on what, 1.25 V?
> 
> If they'd drop the volts on the BEs, then they wouldn't have to worry about high TDP at stock clock. Noobs.



It's so easy, the solution is right here guys!


Crisis averted!


----------



## a_ump (Jul 11, 2009)

snakeoil said:


> There are a few things that Core i7 users can't deny:
> 
> 1. they have to suffer the heat
> 2. they have to suffer the price, which is higher than Phenom II
> ...



1: I agree, the i7 is indeed hotter than the Ph II.
2: Agree again, but an i7 owner can retort that a Ph II owner suffers lower performance, depending on the build and use of the PC.
3: I agree, but so would anyone choosing to overclock a Phenom II BE; "high-end cooler" I disagree with, as the Xiggy S1283 Dark Knight is 40 bucks and performs just fine, best 2009 coolers review.
4: I don't know anyone that would purchase either high-end build without a good case, as someone paying for the 955 Black Edition is more than likely going to overclock, and the desire for lower temps, even when the stock ones would be acceptable, is still there.
5+6: I wasn't talking integrated graphics; nobody in their right mind purchasing either build is going to go with integrated graphics... seriously.
7: Core 2 and its socket will be dead, absolutely, but then Core 2 matches Phenom II performance. So that statement is a moot point.


----------



## Darren (Jul 11, 2009)

a_ump said:


> Power efficiency-wise, the Phenom II wins. But money-wise, a Phenom II build and an i7 920 build are very close now, close enough to make the price-difference argument negligible.
> http://img.techpowerup.org/090710/Intel174.jpg
> http://img.techpowerup.org/090710/AMD139.jpg
> $50 difference, and the Intel build has 2GB more RAM, so if they went with 3GB the builds would be even closer in price. The 1GB difference isn't important, as anyone getting 4GB or less usually goes with a 32-bit OS, so the usable RAM for the AMD build with 4GB would still be 3GB-3.5GB.



So what happens to the prices once you put a 780 chipset and DDR2 PC2-8500 in the Phenom build instead? More than a $50 difference; more than a $100 difference, I assure you. 


And you guys have it easy, in the UK there is about a £200 difference.


----------



## wolf2009 (Jul 11, 2009)

Bring on my heater for next winter! It gets quite cold here, and I didn't know AMD was getting into the heating business. 

On topic: 140W alarming, for whom? Somebody running a top-of-the-line processor would do so for OCing and would surely get a good board with adequate power circuitry. With a slew of good boards from MSI and Gigabyte, I don't think 140W should be a problem anymore. 

The people with 780 and 770 boards might have a problem, though, but they would be stupid to think they could save money by going with a cheap board and a top-of-the-line CPU. In the end it will cost them more.


----------



## Kei (Jul 11, 2009)

I've been curious for a long time now whether any of the Intel guys out there are able to knock their voltages down significantly at stock speeds (or nearly stock). I understand the TDP ratings of all of the AMD and Intel processors, but I've never had an AMD processor that I actually needed to run at... heck, even near, the stock voltage.

I've run my PII 920 at the stock 2.8 GHz on only 1.184 V with no problems since day one. That's down from the stock 1.30 V; I did the same thing with my PI 9850 and PI 9500 processors, which both undervolted like champs. The 4850 I used to run does the same thing, and the Kuma I just set up does the same thing (1.13 V so far, from 1.30 V).

Does Intel do the same thing with their processors, in being able to drop the voltage far lower than the stock level without reducing performance at all?

Kei

(BTW, I don't care about the super-overclock voltages, only stock or very close to stock.)


----------



## wolf2009 (Jul 11, 2009)

Kei said:


> I've been curious for a long time now if any of the Intel guys out there are able to knock their voltages down significantly at stock speeds (or nearly stock). I understand the TDP ratings of all of the AMD and Intel processors, but I've never had an AMD processor that I actually needed to run at....heck or near the stock voltage.
> 
> I've run my PII 920 at the stock 2.8Ghz on only 1.184v with no problems since day one. That's down from 1.30v stock, I did the same thing with my PI 9850 and P1 9500 processors which both undervolted like champs. The 4850 I used to run does the same thing, the Kuma I just setup does the same thing (1.13v so far from 1.30v).
> 
> ...



Yes, a Core i7 can run at 0.88 V idle.

http://www.anandtech.com/mb/showdoc.aspx?i=3593&p=2


> The ASRock X58 Extreme passed our full test suite at 21x160 for a final 3.37GHz core speed. We enabled the BIOS with full power management options and Core Vid at 1.15V (with offset) resulting in an idle voltage of 0.880V and full load voltage at 1.016V. VTT was set to 1.2V and VDimm at 1.60V with memory timings at 7-8-7-20 1T for DDR3-1600 speeds.


----------



## ShadowFold (Jul 11, 2009)

They have the voltage higher like that so it's 101% stable.


----------



## Flyordie (Jul 11, 2009)

Different dies run at different TDPs.
The PII 920's TDP = 125W; however, some dies were capable of running @ 1.0V at stock speeds, giving a TDP rating as low as 65W. 

I have estimated that the TDP of my PII 920 @ 3.4GHz is 95-100W, judging from the APC's wattage reading and my PSU's efficiency factor of 83%.

What I am trying to say is this: they ramped the voltage to get more usable dies.
-edit-
I have run my PII @ 3,086MHz @ 1.0V and it was 100% stable, but when I get into the 3.2-3.4GHz range the voltage needed to keep that speed goes up a large amount... I need 1.375V to stay stable at that speed. @ 4GHz I was able to get it to POST and load Windows @ 1.485V, but the NB died on the board before I could get a CPU-Z... ;-(
So it all depends on the die, imho. Luck of the draw.
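For anyone wanting to repeat that estimate: wall draw times PSU efficiency gives DC-side power, and subtracting an idle reading roughly isolates the CPU. A back-of-the-envelope sketch (the 83% efficiency is the figure from this post; the wall readings and the idle CPU draw are placeholder assumptions):

```python
def estimate_cpu_power(wall_load_w, wall_idle_w, psu_efficiency, idle_cpu_w=10.0):
    """Rough CPU power estimate from wall-socket (e.g. APC) readings.

    DC power = wall draw * PSU efficiency. Subtracting the idle reading
    cancels out the rest of the system, then an assumed idle CPU draw
    is added back. This is an estimate, not a measurement.
    """
    delta_dc = (wall_load_w - wall_idle_w) * psu_efficiency
    return delta_dc + idle_cpu_w

# Placeholder readings: 310 W at full CPU load, 200 W idle, 83% efficient PSU.
print(round(estimate_cpu_power(310, 200, 0.83), 1))  # lands in the ~100 W ballpark
```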


----------



## 3xploit (Jul 11, 2009)

snakeoil said:


> There are a few things that Core i7 users can't deny:
> 
> 1. they have to suffer the heat
> 2. they have to suffer the price, which is higher than Phenom II
> ...



1. my 920 runs at 3.9 GHz on air and loads in the mid-60s, which is perfectly fine for any chip (AMD or Intel)
2. true, but I get much more performance
3. most people on these forums buy high-end air coolers or water anyway, regardless of whether they use Intel or AMD
4. I run my whole setup in a $50 Antec 300
5. LOL, ok there
6. no i7 X58 boards even use integrated graphics, so wtf are you saying
7. Core 2 Quads still hold their own against Phenom IIs and keep up with i7s in gaming

8. you are an AMD fanboy lol


----------



## erocker (Jul 11, 2009)

This thread has nothing to do with i7 so stop while you're ahead. There's plenty of other threads to take your i7 discussion to.


----------



## a_ump (Jul 11, 2009)

Can I say no...?  lol, yeah, but you know how it is when you get into that heated debate with another member . Eh, a 140W TDP, who cares? Nobody that is interested in buying high-end PCs.


----------



## [I.R.A]_FBi (Jul 11, 2009)

real men use real (hot) cores


----------



## Baam (Jul 11, 2009)

Cheapest Phenom II 955 combo
http://www.newegg.com/Product/ComboDealDetails.aspx?ItemList=Combo.212169

Cheapest i7 combo
http://www.newegg.com/Product/ComboDealDetails.aspx?ItemList=Combo.213702

Almost $200 difference.


----------



## TheMailMan78 (Jul 11, 2009)

Honestly, I was expecting more out of a 140W chip. Kind of a letdown from AMD.


----------



## [I.R.A]_FBi (Jul 11, 2009)

erocker said:


> This thread has nothing to do with i7 so stop while you're ahead. There's plenty of other threads to take your i7 discussion to.




split plz?


----------



## TheMailMan78 (Jul 11, 2009)

[I.R.A]_FBi said:


> real men use real (hot) cores



We AMD boys like our CPU's like we like our women. Hot and fast.


----------



## Steevo (Jul 11, 2009)

And four real ones at a time.


----------



## Dippyskoodlez (Jul 11, 2009)

Steevo said:


> And four real ones at a time.





I only have one... 


CPU....


----------



## btarunr (Jul 11, 2009)

snakeoil said:


> There are a few things that Phenom II users can't deny:
> 
> 1. they have to suffer the heat
> 2. they have to suffer the price, which is higher than Core 2 (PII 955 @ $249, while C2Q Q9550 @ $219)
> ...



Fixed a few things. As promised, you won't be posting here anymore.



ShadowFold said:


> http://img.techpowerup.org/090710/Capture027286.jpg
> 
> $100 cheaper and the same overclocking performance.









$18 cheaper, faster.


----------



## Kei (Jul 11, 2009)

wolf2009 said:


> Yes, a Core i7 can run at 0.88 V idle.
> 
> http://www.anandtech.com/mb/showdoc.aspx?i=3593&p=2



Thanks Wolf, but I mean the normal-use voltage. I don't know the stock voltage of the processor you're talking about, but are they able to enjoy the same ability to drop the voltage and just leave it there (not just idle voltages)?

I like overclocking, but I tend to have more fun undervolting a system, making it run as efficiently as it possibly can while using as close to no energy as possible. I wonder what the stable voltage is for an Intel quad in comparison to their stock voltage. For sure they can go lower, but I wonder how low without having to slow the system down or risk instability.

I really think I can get the AMD X2 4050e system I just built for my sister to run on almost no voltage at all and still be stable. It should be fun to try out. I'm still working on the X2 7750 setup I just put together for a friend to see if we can get the voltage lower. With my old 9850 Phenom I was able to get down to 1.088V @ 2.7GHz still using all four cores, so I hope that with only 2 cores in use we can get lower on the 7750, since it's also a Phenom I family member. 

Kei


----------



## [I.R.A]_FBi (Jul 11, 2009)

btarunr said:


> Fixed a few things. As promised, you won't be posting here anymore.



Have a brew on me


----------



## Flyordie (Jul 11, 2009)

Who will be the first to 4 GHz?  Take your pick...


----------



## btarunr (Jul 11, 2009)

It wouldn't matter, just as Intel reaching 3.80 GHz with its Cedarmill Pentium 4 didn't matter as far as competition went.


----------



## Mussels (Jul 11, 2009)

AMD NetBurst, eh?


----------



## Flyordie (Jul 11, 2009)

Yeah, but it's a quad, not a single core, and it chucks out WAY less heat.
If you bin a Deneb right, you should be able to wiggle down to a 125W TDP @ 4GHz.


----------



## eidairaman1 (Jul 11, 2009)

I believe AMD is reaching the thermal barrier of the current arch, sort of like what happened with the Athlon XP at 3200+.


----------



## Flyordie (Jul 11, 2009)

eidairaman1 said:


> I believe AMD is reaching the thermal barrier of the current arch, sort of like what happened with the Athlon XP at 3200+.



Yeah, 4 to 4.2 GHz is the best they could really do with this with 4 cores...
3 cores could get 4.3-4.35
2 cores could get 4.5
1 core should get 5.0


----------



## Mussels (Jul 11, 2009)

Both AMD and Intel always have their chips run a good percentage above the required voltage. The reason is that when they go on shit OEM motherboards, they get vdroop, and the extra voltage needs to counter that.

They can't just release chips that work at EXACTLY the needed voltage to save power and heat without knocking out their biggest buyers.

Oh noes, the i7 runs warm. You remember the Athlon XP days? Before Barton, they were hot as hell. Intel has die shrinks and the i5 due real soon, just like AMD had Barton. Intel has 32 nm chips very close to release, while on the AMD side... they have 140W chips close to release.

AMD fanboys just need to realise that this isn't the P4 days: Intel is ahead.


----------



## eidairaman1 (Jul 11, 2009)

Flyordie said:


> Yeah, 4 to 4.2 GHz is the best they could really do with this with 4 cores...
> 3 cores could get 4.3-4.35
> 2 cores could get 4.5
> 1 core should get 5.0



It's not the clocks I'm worried about, it's the TDP now.


----------



## Flyordie (Jul 11, 2009)

Mussels said:


> Both AMD and Intel always have their chips run a good percentage above the required voltage. The reason is that when they go on shit OEM motherboards, they get vdroop, and the extra voltage needs to counter that.
> 
> They can't just release chips that work at EXACTLY the needed voltage to save power and heat without knocking out their biggest buyers.
> 
> ...



I'm not denying that... I'm just saying it within the realm of AMD only...



eidairaman1 said:


> It's not the clocks I'm worried about, it's the TDP now.


Those 4GHz+ clocks would all be under a 140W TDP... (or should be, if binned right).
Remember, my PII is at 3.4GHz and only kicking out a TDP of about 95-100W.


----------



## Mussels (Jul 11, 2009)

Flyordie said:


> Those 4GHz+ clocks would all be under a 140W TDP... (or should be, if binned right).
> Remember, my PII is at 3.4GHz and only kicking out a TDP of about 95-100W.



Are you running at the minimum voltage for that, or are you 3-4 notches higher? That's the point I made.


----------



## Flyordie (Jul 11, 2009)

Mussels said:


> Are you running at the minimum voltage for that, or are you 3-4 notches higher? That's the point I made.



I am actually running 1.375V @ 3.4GHz. Some VERY minor vdroop takes it to 1.374V, though...


----------



## btarunr (Jul 11, 2009)

Flyordie said:


> If you bin a Deneb right, you should be able to wiggle down to a 125W TDP @ 4GHz.



And you think they're going to come across even 2500 of such dies (2500 is the standard wholesale stock quantity)?


----------



## Flyordie (Jul 11, 2009)

btarunr said:


> And you think they're going to come across even 2500 of such dies (2500 is the standard wholesale stock quantity)?



With the rumored C3, yes. They may have to disable a memory controller to do it, but it's possible. They could also keep it below 140W and still be fine.


----------



## btarunr (Jul 11, 2009)

Flyordie said:


> They may have to disable a memory controller to do it, but it's possible.



== memory sub-system/bandwidth fail.


----------



## Dippyskoodlez (Jul 11, 2009)

btarunr said:


> == memory sub-system/bandwidth fail.



Exactly.

Disabling part of the CPU to remain within TDP is not a viable option with a high end part.


----------



## 1Kurgan1 (Jul 11, 2009)

I don't really see the issue, as most people who buy these chips are OC'ing, since AMDs don't come in many factory-built computers. Odds are most people are pushing this kind of wattage already.


----------



## Mussels (Jul 11, 2009)

1Kurgan1 said:


> I don't really see the issue, as most people who buy these chips are OC'ing, since AMDs don't come in many factory-built computers. Odds are most people are pushing this kind of wattage already.



Overclockers and enthusiasts are probably 5% of the market. Lots, and I mean lots, of people just buy the high-end parts and leave them at stock, because it's the fastest they can buy.


----------



## BrooksyX (Jul 11, 2009)

Dang, 140W, that's pretty high.


----------



## eidairaman1 (Jul 11, 2009)

1Kurgan1 said:


> I don't really see the issue, as most people who buy these chips are OC'ing, since AMDs don't come in many factory-built computers. Odds are most people are pushing this kind of wattage already.



so what about your machine?


----------



## mtosev (Jul 11, 2009)

AMD did it again. 

The CPU has a higher TDP than an i7, and it also has more MHz, but it's still slower than an i7 with fewer MHz.

AMD FAILED.


----------



## Darren (Jul 11, 2009)

mtosev, 

The CPU has not been released yet; all this information is based on leaked information. Wait to see when, or if, the CPU gets released at 140W before we call it a fail. Secondly, what has MHz got to do with anything?

Has anyone ever noticed it's always the guys with ancient processors, such as a Pentium 4 *cough cough* or a Celeron or something stupidly slow and old, that bash AMD's flagship processors? They would cream their pants to swap their Pentium 4 for a Phenom II.


----------



## eidairaman1 (Jul 11, 2009)

I'm on an old machine myself and will be upgrading at the end of the year; the Core i7 seems to be out of my range, and I don't need all that processing power for what I do anyway.

BTW TheMailMan78, you have another +1 for your wisecracks.


----------



## Mussels (Jul 11, 2009)

Darren: please don't double post, and try not to post inflammatory material.

If you respond to someone baiting you, it gets both people in trouble. If you ignore them, only they do.


----------



## wiak (Jul 11, 2009)

TheGuruStud said:


> C'mon now. It's nowhere near a 200W TDP. (I'm serious, Intel lied that badly back then.)


They still do.
Heck, an Intel 95W CPU is an AMD 125W CPU; it's just how Intel calculates *load*.
BTW, the Phenom II 965 isn't even released yet. The last thing I heard, it's a month away from release, and MSI could have gotten a 125W part and forgotten to update the CPU support list.


----------



## hat (Jul 11, 2009)

I wish they would just multiply voltage by whatever amperage the processor sucks up...


----------



## Mussels (Jul 11, 2009)

AMD doesn't use real watts either; both companies use TDP.

I don't have a huge argument against it either. Looking at my video cards, FurMark uses 100W more than any game, no matter the settings I use, so if AMD wanted to label a card with a TDP lower than its max, that makes sense to me.

The same applies to CPUs; the odds of 100% power usage outside of stress testing are pretty much nil.


----------



## Deleted member 24505 (Jul 11, 2009)

How exactly is TDP calculated?


----------



## mtosev (Jul 11, 2009)

tigger said:


> How exactly is TDP calculated?



I think it's based on estimates.


----------



## FordGT90Concept (Jul 11, 2009)

snakeoil said:


> Phenom II is a power-efficient architecture; Intel's Core i7, on the other hand, is a certified power hog. Temps under load are 80°C for a Core i7 at stock speed with the stock cooler, while a Phenom II is just 45°C under load at stock speed with the stock cooler. Everybody that has a Core i7 has to suffer the heat and the price (like in hell), while Phenom II users are cool and have money in the wallet.


I think the Core i7 is more efficient, for several reasons:
-The Core i7 965 (130W TDP) is lower power than the Phenom II 965 (140W TDP).
-The Core i7 965 (~1.1V) does more work at a lower voltage than the Phenom II 965 (~1.2V).
-The Core i7 965 (80-90C) runs hot with HT enabled because more of the chip is being used (signifying architectural efficiency).
-The Phenom II 965 can't hold a candle to the Core i7 965 in terms of performance (the only exception being high-resolution gaming).




tigger said:


> How exactly is TDP calculated?


It's more of a specification (Thermal Design Power). AMD figures out the most this processor could safely draw, and that establishes the TDP. From there, they decide how heavy an HSF is needed to dissipate as much heat as the TDP suggests, and motherboards have to have enough voltage regulators to handle that high a load. TDP is determined by AMD/Intel to signify to the rest of the industry what it will take to run and cool that processor (or chipset).
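The voltage discussion earlier in the thread is the other half of this: dynamic (switching) power scales roughly with C·V²·f, which is why small undervolts pay off disproportionately. A sketch under that textbook approximation (the 1.30 V to 1.184 V figures are Kei's from earlier in the thread; this is not AMD's or Intel's actual TDP method):

```python
def relative_dynamic_power(v, f, v_ref, f_ref):
    """Relative CMOS switching power under P ~ C * V^2 * f (C cancels out)."""
    return (v / v_ref) ** 2 * (f / f_ref)

# Kei's undervolt: 1.30 V -> 1.184 V at the same 2.8 GHz clock.
saving = (1 - relative_dynamic_power(1.184, 2.8, 1.30, 2.8)) * 100
print(f"~{saving:.0f}% less dynamic power")  # roughly 17%
```

The quadratic voltage term is also why the last few hundred MHz of an overclock, which need big voltage bumps, blow the power budget so quickly.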


----------



## Mussels (Jul 11, 2009)

tigger said:


> How exactly is TDP calculated?



No one knows. They say it's an 'average' load number.


----------



## TheLaughingMan (Jul 11, 2009)

*Hey*



ShadowFold said:


> http://img.techpowerup.org/090710/Capture027286.jpg
> 
> $100 cheaper and the same overclocking performance.



Wow, that build is my computer, but I got 1333 RAM because it was on sale from G.Skill for only $55. I will be trying to OC it to 1600 and see how it holds up.


----------



## Dippyskoodlez (Jul 11, 2009)

Mussels said:


> No one knows. They say it's an 'average' load number.



TDP is supposed to be a worst-case scenario, hence why we see much lower numbers on average.

Intel and AMD also calculate their TDPs differently, so they are not directly comparable.


----------



## TheMailMan78 (Jul 11, 2009)

Dippyskoodlez said:


> TDP is supposed to be a worst-case scenario, hence why we see much lower numbers on average.
> 
> Intel and AMD also calculate their TDPs differently, so they are not directly comparable.



Unless you turn "Cool'n'Quiet" off. Then it stays at max all the time.


----------



## Dippyskoodlez (Jul 11, 2009)

TheMailMan78 said:


> Unless you turn "Cool'n'Quiet" off. Then it stays at max all the time.



Max should not be the "worst-case scenario"; not all CPUs are exactly the same.

i.e., the TWKR.


----------



## TheMailMan78 (Jul 11, 2009)

Dippyskoodlez said:


> Max should not be the "worst-case scenario"; not all CPUs are exactly the same.



Well, of course, but every chip is rated within a certain power spectrum. This is why the chip we are talking about is 140W. Of course, I could be wrong; I'm a noob making educated guesses.


----------



## Flyordie (Jul 11, 2009)

Dippyskoodlez said:


> Exactly.
> 
> Disabling part of the CPU to remain within TDP is not a viable option with a high end part.



Making a pure AM3 part wouldn't hurt...
Disabling the DDR2 controller would save 10-15W as seen in the comparison of the same CPU on the 2 different platforms.


----------



## Dippyskoodlez (Jul 11, 2009)

TheMailMan78 said:


> Well of course but every chip is rated in a certain power spectrum. This is why the chip we are talking about is 140w. Of course I could be wrong. I'm a noob making educated guesses



Well, you have to consider that AXPs were always rated the same.

An AXP 2600+ was rated at the same TDP as the 3200+.

There's a baseline safety margin built in for an absolutely piss-poor-quality CPU, added to reduce the number of deaths/DOA chips. Then there is the real power draw of individual chips, which varies vastly too; TWKR chips were reported as "high leakage" parts.  TDP only refers to the absolute maximum a given grade of CPU should -ever- put out, and a properly designed chip should never _actually_ hit that threshold. (The exact method of calculation is not given by AMD -OR- Intel.)

Desktop CPUs are often rated as 85W chips, but hardly any have ever actually held 85W of heat output before overclocking. Otherwise I should be able to use my AXP cooler on my A64. 

AMD's climbing TDP does give a good indication that it is struggling with power consumption in CPU production.


----------



## TheMailMan78 (Jul 11, 2009)

Dippyskoodlez said:


> Well you have to consider AXP's were always rated the same.
> 
> An AXP 2600+ was rated at the same TDP as the 3200+.
> 
> ...


Thanks for the education.


----------



## btarunr (Jul 11, 2009)

Flyordie said:


> Making a pure AM3 part wouldn't hurt...
> Disabling the DDR2 controller would save 10-15W as seen in the comparison of the same CPU on the 2 different platforms.



Uh, what? _All_ K10 CPUs have two memory controllers, but those are two independent 64-bit memory controllers (hence the ganged/unganged DCT modes). It's not that one is DDR2 and the other DDR3; the IMCs on Phenom II AM3 chips support both standards, rather than carrying separate sets of memory controllers per standard.


----------



## Dippyskoodlez (Jul 11, 2009)

TheMailMan78 said:


> Thanks for the education.



http://en.wikipedia.org/wiki/Average_CPU_Power


----------



## FordGT90Concept (Jul 11, 2009)

Dippyskoodlez said:


> An AXP 2600+ was rated at the same TDP as the 3200+.


TDPs are generally measured and labeled for an entire line, not individually.  To be safe, they usually use the highest spec'd processor to set it.  In your example, the TDP on the 2600+ was probably what was tested on the 3200+.  They then design the HSF and motherboards to power/cool the 3200+ knowing that any lower model of processor will be perfectly safe.

The same can be said of modern CPUs.  X58 motherboards and HSFs are designed to power and cool a Core i7 965.  That way, the same motherboards and HSFs can run 950s, 940s, and 920s.  The specification may have to be changed for the Core i7 975, and then again it might not; it depends on the changes between the two and whether or not they overrated the TDP on the Core i7 965.


----------



## Dippyskoodlez (Jul 11, 2009)

FordGT90Concept said:


> TDPs are generally measured and labeled for an entire line, not individually.  To be safe, they usually use the highest spec'd processor to set it.  In your example, the TDP on the 2600+ was probably what was tested on the 3200+.  They then design the HSF and motherboards to power/cool the 3200+ knowing that any lower model of processor will be perfectly safe.



Rewording my post?


----------



## FordGT90Concept (Jul 11, 2009)

Just clarifying.


----------



## Kitkat (Jul 12, 2009)

Darren said:


> mtosev,
> 
> The CPU has not been released yet, all this information is based on sniffing information, wait for to see when or if the CPU gets released at 140w before we call them a fail, secondly what has what has MHz got to do with anything?
> 
> Has anyone ever noticed, its always the guys with ancient processors such as a the Pentium 4 *cough cough* or Celeron or something stupidly slow and old that always bash AMDs flagship processor. They would cream in their pants to swap their Pentium 4 for a Phenom II.



rofl yep Mr Wtosev Presscott (lmao woops)



TheMailMan78 said:


> Honestly I was expecting more out of a 140w chip. Kinda a let down from AMD.



That's why I say the 975 will have the lower TDP; they already lowered it, but I think it'll be in that one. The 965 looks like a bump, going by this info. A while back there was a story about it, but I think they're still testing it?


----------



## cdawall (Jul 12, 2009)

Just a curiosity of mine: why are these chips being compared to Prescotts? Prescotts were known for being absurdly hot while not improving at all over previous-generation chips. Phenom II, for one, is stock-clocked at 3.4 GHz, which is higher than damn near every 65 nm K10 chip will clock, and it runs very cool; even at 3.8 GHz these chips put out less heat than a single-core Prescott.


As for the 140W thing, that is a max TDP; most of these chips will never hit it. Hell, my 65 nm 9150e downstairs is rated at 65W TDP and it doesn't even pull 40W right now.


----------



## Kitkat (Jul 12, 2009)

cdawall said:


> just a curiosity of mine why are these chips being compared to prescotts? prescotts were known for being absurdly hot while not improving at all over previous generation chips. phenom II is for one stock clocked@3.4ghz which is higher than damn near every 65nm K10 chip will clock and runs very cool even at 3.8ghz these chips put out less heat than a prescott single core.
> 
> 
> as for the 140W thing that is a max TDP most of these chips will never hit that. hell my 65nm 9150e downstairs is rated at 65w TDP it doesn't even pull 40w right now.



Meh, a quote of a quote of a quote. The guy arguing "AMD's fail" ironically has a Prescott. And the original Prescott comment was wayyy off. Hope that clears it up for u lol


----------



## cdawall (Jul 12, 2009)

Kitkat said:


> meh quote of a quote of a quote ,the guy arguing "amds fail" ironicly has presscott. And the original presscot comment was wayyy off  hope that clears it up for u lol



No idea why this is huge. 140W is meh; most people overclock the 955s to 3.8 GHz @ 1.4 V, which pulls like 200W+.


----------



## TheMailMan78 (Jul 12, 2009)

cdawall said:


> no idea why this is huge 140w is meh most people overclock to 3.8ghz @1.4v on the 955's which pulls like 200w+



I'm on 1.44v right now


----------



## Flyordie (Jul 12, 2009)

btarunr said:


> Uh what? _All_ K10 CPUs have two memory controllers, but those are two independent 64-bit memory controllers (hence the ganged/unganged DCT modes). It's not that one is DDR2 and the other DDR3. The IMCs on Phenom II AM3 chips support both DDR3 and DDR2, it's not that there are two sets of memory controllers based on the standard.



After looking at the engineering notes, my idea was correct but backwards.  
It's a DDR2 controller with DDR3 "extensions"... so maybe it won't work; they used a lot of shared silicon in the PII's IMC.
Nothing that can't be changed through some creative engineering from the guys over in India (the engineering team that developed Phenom II).
Which also leads me to say this: I just bought 2x Istanbuls for $18 each (plus shipping) direct from AMD as ESes with unlocked multis, to test out their maximum thermal load limits. Will be fun, I guess, destroying them... *cries*


----------



## cdawall (Jul 12, 2009)

TheMailMan78 said:


> I'm on 1.44v right now



That puts you at ~170W, which is about what my 550BE pulls @ 1.55 V and 4 GHz. For perspective, at the same settings an Athlon II X2 250 pulls ~145W.
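Estimates like the ~170W above come from the usual enthusiast rule of thumb that dynamic power scales roughly linearly with clock and quadratically with voltage. A minimal sketch, assuming hypothetical baseline figures (the function name and the 125 W / 3.2 GHz / 1.35 V stock values are illustrative, not how any poster here actually measured, and not an AMD formula):

```python
# Rough dynamic-power scaling estimate: P scales roughly with f * V^2.
# This is a rule of thumb, not AMD's TDP methodology; the baseline
# numbers below are hypothetical stock values for illustration.
def scaled_power(base_watts, base_mhz, base_volts, new_mhz, new_volts):
    """Estimate power at a new clock/voltage from a known baseline."""
    return base_watts * (new_mhz / base_mhz) * (new_volts / base_volts) ** 2

# e.g. a 125 W, 3.2 GHz chip at 1.35 V stock, pushed to 3.8 GHz @ 1.44 V:
print(round(scaled_power(125, 3200, 1.35, 3800, 1.44)))  # -> 169
```

The quadratic voltage term is why a modest Vcore bump dominates the power increase from the clock bump itself.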


----------



## TheMailMan78 (Jul 12, 2009)

cdawall said:


> that puts you at ~170w which is about what my 550BE pulls@1.55v and 4ghz in perspective at the same setting a athlon II X2 250 pulls ~145w



1.5v seems a little high for 24/7 use. I would be careful with that.


----------



## Flyordie (Jul 12, 2009)

TheMailMan78 said:


> 1.5v seems a little high for 24/7 use. I would be careful with that.



I think 3.4Ghz is the sweetspot for Deneb.  3.4 @ 1.35V is pretty good performance/watt.


----------



## Kitkat (Jul 12, 2009)

cdawall said:


> no idea why this is huge 140w is meh most people overclock to 3.8ghz @1.4v on the 955's which pulls like 200w+



Me neither; it was so far off base. My 955 is def over 140W, rofl.


----------



## eidairaman1 (Jul 12, 2009)

TheMailMan78 said:


> 1.5v seems a little high for 24/7 use. I would be careful with that.



I don't see what the problem is; most people ran Athlon XPs at 1.9/2.0 Vcore when overclocked to 2.7-3.0 GHz.


----------



## Kitkat (Jul 12, 2009)

3xploit said:


> 1. my 920 runs at 3.9ghz on air and loads at mid 60s - which is perfectly fine for any chip (amd or intel)
> 2. true, but i get much more performance
> 3. most people buy high end air coolers or water on these forums anyways regardless if they use intel or amd
> 4. i run my whole setup in a $50 antec 300
> ...





Phenom II doesn't need to keep up with any of the i7's "gaming"; the only game I see is paying more for nothing. This isn't a video card discussion.


----------



## Steevo (Jul 12, 2009)

My 940 is running 1.55 Vcore, .05 higher than it is rated for, and AMD shows 125W at 1.5 V.


Considering that at stock I can run just under a volt (.975 including my vdroop), it should be labeled an 80W TDP chip, and I only need a couple extra tenths to make stress testing stable at this speed; if I back it off 100 MHz I can run 1.45 Vcore and it will be stable.


Personally, I can run GTA4 at max res and high settings with no issues, and it didn't cost through the nose like a comparable Intel platform would have. The 940 was a drop-in replacement for my 9850BE, I can run newer chips on this board just fine, and the supposed loss from not having DDR3 is almost nonexistent. Plus the option to Crossfire, with four cards...


----------



## mtosev (Jul 12, 2009)

Kitkat said:


> rofl yep Mr Wtosev Presscott (lmao woops)
> 
> 
> 
> Thats why say 975 will have the lower twp they already lowered it but i think itll be in that one. 965 looks like bump from this info. A while back there was a story about it but i think there still testing it?



Strange, since that is my older PC, for which I don't have any DDR RAM anymore; hence it hasn't been used in the past 2 years. Too lazy to update the configuration tab. Anyway, my system now is a Core 2 Duo E6600, Asus P5W DH Deluxe, 2GB G.Skill RAM,...


----------



## Steevo (Jul 12, 2009)

mtosev said:


> strange that that is my older pc. for which dont have any DDR ram anymore. hence it wasnt used in the past 2 years. to lazy to update the configuration tab. anyway my system now is a Core 2 Duo E6600, Asus P5w DH Deluxe, 2GB Gskill ram,...



$$$$ processor, and slower on average than a chip that costs 25% less


----------



## btarunr (Jul 12, 2009)

cdawall said:


> as for the 140W thing that is a max TDP most of these chips will never hit that. hell my 65nm 9150e downstairs is rated at 65w TDP it doesn't even pull 40w right now.



TDP is a company rating, not a measurement.



Flyordie said:


> After looking at the engineering notes, my idea was correct but it was backwards.
> Its a DDR2 controller with DDR3 "Extensions"...  so maybe it won't work... they used alot of shared silicon on the PII's IMC.
> Nothing that can't be changed through some creative engineering from the guys over in India.  (the engineering team that developed Phenom II)
> Which also leads me to say this- I just bought 2x Istanbuls for $18/each (shipping) direct from AMD as ES's with unlocked multi's to test out their maximum thermal load limits.  Will be fun I guess destroying them... *cries*



No, your idea was wrong all along. You said you could disable the DDR2 controllers and save power (insert absurd amount here), while I said there are two memory controllers which each support both DDR2 and DDR3. You're getting technical about how I am right. The memory controller is designed that way; making it DDR3-exclusive is not going to cut its energy draw. Learn how things work. 

Besides, AMD's market share will plummet if they come up with AM3-exclusive chips. Nobody with a decent AM2+ board will continue using AMD if the upgrade path ends and requires buying a new board and memory. AMD retained a lot of market share banking on the backwards compatibility of these processors. 

Yes, Istanbuls are worthless, at least for the client platform. No wonder they're giving away their ESes for scrap prices.


----------



## mtosev (Jul 12, 2009)

Steevo said:


> $$$$ processor, and slower on average than a chip that costs 25% less



yep but mine is now almost 3 years old.


----------



## Wile E (Jul 13, 2009)

cdawall said:


> no idea why this is huge 140w is meh most people overclock to 3.8ghz @1.4v on the 955's which pulls like 200w+



Because most people don't OC. 140w is most definitely an issue when it comes to OEMs.



eidairaman1 said:


> i dont see what the problem is, most had Athlon XPs at 1.9/2.0VCore when overclocked at 2.7-3.0GHz



You can't compare cpus of different architectures, or even cpus of the same architecture, but on a different process. 90nm K8 maxed out at around 1.55V safely, 65nm K8 was only good to about 1.5V safely. These are modified k8's built on 45nm. I wouldn't go above 1.45V for 24/7, personally.
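The per-process ceilings quoted above can be captured in a tiny lookup. To be clear, these are forum rules of thumb from this very post, not official AMD voltage specifications, and the function name is made up for illustration:

```python
# "Safe 24/7 Vcore" ceilings for K8/K10 as quoted in this thread,
# keyed by process node in nm. Forum folklore, not official AMD specs.
SAFE_VCORE = {90: 1.55, 65: 1.50, 45: 1.45}

def is_safe_24_7(process_nm, vcore):
    """True if vcore is at or below the quoted ceiling for that node."""
    return vcore <= SAFE_VCORE[process_nm]

print(is_safe_24_7(45, 1.44))  # -> True
print(is_safe_24_7(45, 1.52))  # -> False
```

Note that later posts in the thread dispute the 45 nm figure, citing AMD's own 1.5 V stock spec, so treat the table as one poster's opinion.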


----------



## cdawall (Jul 13, 2009)

Wile E said:


> Because most people don't OC. 140w is most definitely an issue when it comes to OEMs.
> 
> 
> 
> You can't compare cpus of different architectures, or even cpus of the same architecture, but on a different process. 90nm K8 maxed out at around 1.55V safely, 65nm K8 was only good to about 1.5V safely. These are modified k8's built on 45nm. I wouldn't go above 1.45V for 24/7, personally.



OEMs will never use this chip.

Outside of Falcon Northwest, Alienware, and VoodooPC, who uses Black Edition or Extreme Edition CPUs? This chip will never be in the mainstream market. The companies that do use these chips will put them in higher-end boards, such as Alienware, who will use an ASUS M4A79-series board.

And AMD has officially spec'd 45 nm K10 to run at 1.5 V stock, and unofficially announced that 1.55 V is safe on good air cooling.

Just check Newegg under the Phenom 955.


----------



## ShadowFold (Jul 13, 2009)

I run my 720 at 1.52v 24/7. Stays nice and cool


----------



## Wile E (Jul 13, 2009)

ShadowFold said:


> I run my 720 at 1.52v 24/7. Stays nice and cool



Heat has absolutely nothing to do with it. It has to do with electromigration. If you are past a "safe" 24/7 voltage, even sub-zero temps won't stop degradation.

But if AMD says 1.5V, who am I to argue?


----------



## cdawall (Jul 13, 2009)

Wile E said:


> Heat has absolutely nothing to do with it. It has to do with electromigration. If you are past a "safe" 24/7 voltage, even sub-zero temps won't stop degradation.
> 
> But if AMD says 1.5V, who am I to argue?



lol, no kidding. If they say 1.5 V is truly safe, 1.55 V should be OK for about 5 years.


----------



## Wile E (Jul 13, 2009)

cdawall said:


> OEM's will never use this chip.
> 
> outside of falcon northwest, alienware and voodoo who uses black edition or extreme edition cpu's? this chip will never be in the mainstream market. the companies that will use these chips will put them in higher end boards such as alienware who will use an asus M4A79 series board.
> 
> ...


And OEMs will use this chip. Dell hasn't updated their lines to include Phenom II, but their AMD computers all offer the top-of-the-line Phenom I CPUs as options.

HP currently offers up to the 945 on their site, etc., etc.

Why do you think AMD makes these cpus? To appease us enthusiasts? Not really. They make much more money on the OEM sector. For an oem to consider these, it has to be able to fit into their lineup as seamlessly as possible, meaning they need to be able to use their existing cooling solutions and mobos.


----------



## cdawall (Jul 13, 2009)

Wile E said:


> ANd OEMs will use this chip. Dell hasn't updated their lines to include Phenom II, but their AMD computers all offer the top of the line Phenom I cpus as options.
> 
> Hp currently offers up to the 945 on their site, etc., etc.
> 
> Why do you think AMD makes these cpus? To appease us enthusiasts? Not really. They make much more money on the OEM sector. For an oem to consider these, it has to be able to fit into their lineup as seamlessly as possible, meaning they need to be able to use their existing cooling solutions and mobos.



No, they won't. Like you said, HP offers the 945, which is a non-Black Edition CPU, and it's more than likely the 95W version of the chip.

The top-of-the-line CPUs every OEM has used were vanilla chips: the 9850 non-BE and the lower-watt versions. 

No manufacturer like Dell, HP, or Gateway has used an unlocked, just-released chip in quite some time (since the FX-series days).

Find me an OEM with a Phenom II 955 in it, or a Phenom I 9950. For now, some chips are made just for enthusiasts to play with; other chips are for mainstream PCs.


The max in an HP is the Phenom 945.

The max in a Dell is a Phenom 9650.

The max in a Gateway is a Phenom 810.


----------



## Wile E (Jul 13, 2009)

cdawall said:


> no they wont like you said HP offers the 945 which is a non black edition cpu and its more than likely the 95w version of the cpu.
> 
> the top of the line cpu's every OEM has used were vanilla chips 9850 none BE and the lower watt versions
> 
> ...


This is about a 965, which is not a BE


----------



## cdawall (Jul 13, 2009)

Wile E said:


> This is about a 965, which is not a BE



No, it is. Look at the CPU support lists from ASUS and MSI; they published that this will be a Black Edition CPU. Not to mention people already have them.


----------



## Wile E (Jul 13, 2009)

Ahh, I see. The last news I read about it said it was not a BE.

And off the top of my head, Alienware has offered the BE's. I'm gonna have to say that they still represent more in terms of sales than the DIY enthusiast community, otherwise they wouldn't still be around.


----------



## eidairaman1 (Jul 13, 2009)

Isn't this processor 3.4 GHz stock?


----------



## Wile E (Jul 13, 2009)

eidairaman1 said:


> isnt this Processor 3.4GHz stock?



Yeah.


----------



## cdawall (Jul 13, 2009)

cdawall said:


> OEM's will never use this chip.
> 
> * outside of falcon northwest, alienware and voodoo who uses black edition or extreme edition cpu's?* this chip will never be in the mainstream market. the companies that will use these chips will put them in higher end boards such as alienware who will use an asus M4A79 series board.
> 
> ...





Wile E said:


> Ahh, I see. The last news I read about it said it was not a BE.
> 
> And off the top of my head, Alienware has offered the BE's. I'm gonna have to say that they still represent more in terms of sales than the DIY enthusiast community, otherwise they wouldn't still be around.



lol i so said that already


----------



## eidairaman1 (Jul 13, 2009)

Wile E said:


> Yeah.



Well, this is now the fastest AMD CPU in clock speed, and it has 2 more cores than the X2 6400+ did.


----------



## Imsochobo (Jul 13, 2009)

First off, the i7 is expensive, but not thaat expensive.
2. AMD can give you loads of fun: a tri-core, a 4890, a cheap mobo and mem.
3. 140W means it can go over 125W at peak, as can happen, but it doesn't normally use 140W.
4. AMD has a platform and Intel doesn't. A gaming platform, that is.

5. And the most important one:
Many here will buy an i7, 965, or 955, but the random guy you pass on the street would want a tri-core, an 800-series quad, or a 600-series quad.
AMD really just cares about bringing us the value products that most people want, and giving overclockers fun and records to set.
That's what sells, no denying it, and they do it very, very well.
And there's no denying that the i7 is a masterpiece as well and has its place in the market; it's just not meant for the average user like the PhII is.

The i5 will be thaat product.

There is no problem playing whatever you want on a freaking cheap system. Reason: AMD tri-core and 4850 FTW.
I really can't say anything other than that AMD has most of the market ATM until the i5 comes, which will probably be awesome. Looking forward to it!


----------



## eidairaman1 (Jul 13, 2009)

Be careful what you say around here; there are Intel users in this topic that will say you're wrong.


----------



## Steevo (Jul 13, 2009)

eidairaman1 said:


> be careful what you say around here, there are intel users in this topic that will say your wrong.



YOU ARE WRONG!!!!!


Must be the heat of summer getting to all the i7 owners


----------



## eidairaman1 (Jul 13, 2009)

Steevo said:


> YOU ARE WRONG!!!!!
> 
> 
> Must be the heat of summer getting to all the i7 owners



Pardon, Steevo, I'm not running an i7; I'm running an Athlon XP, and FYI my next machine will be a top-end Phenom II.


----------



## tastegw (Jul 13, 2009)

btarunr said:


> Fixed a few things. As promised, you won't be posting here anymore.
> 
> 
> 
> .



good job!


----------



## FordGT90Concept (Jul 13, 2009)

Imsochobo said:


> 3 140 w means it is over 125 at peak like can happen but doesn't use 140 w


When it does, your voltage regulators are going to pop like popcorn if they aren't ready for it, or your PSU will put on a light show.  140W means your system had better be ready to deliver 140W as well as dissipate 140W of thermal energy.




Imsochobo said:


> 4 amd got a platform and intel doesn't. Gaming platform that is.


All the X## chipsets are gaming platforms.  X58 especially is offering SLI and Crossfire with enough PCIE lanes to power three cards.  You can play games on the P##, Q##, and G## series of chipsets too, but you won't get all the shiny bells and whistles.  AMD really has no "gaming platform" that can quite measure up to Core i7 + X58.  Remember, AMD is committed to AMD graphics cards and naturally prefers a Crossfire-only platform.  Intel, on the other hand, doesn't have a dog in that race yet, so they'll offer what people want (both).




Imsochobo said:


> Many here will buy i7 and 965 955 but the random guy you pass on the street would want a tricore 800 series quad or a 600 series quad.





Imsochobo said:


> I really can't say anything else than amd has the most of the market ATM till i5 comes which will be probaly awesome! And looking forward to it !


They want a computer for $x and don't particularly care what's in it.  Need I remind you, Intel is still selling more processors than AMD to the tune of three to one.


----------



## cdawall (Jul 13, 2009)

FordGT90Concept said:


> All the X## chipsets are gaming platforms.  X58 especially is offering SLI and Crossfire with enough PCIE lanes to power three of them.  You can play games on the P##, Q##, and G## series of chipsets too but you won't get all the shiny bells and whistles.  AMD really has no "gaming platform" that can quite measure up to Core i7 + X58.




What are you talking about? 790FX is AMD's gaming platform, and it has the same number of lanes that X58 has and the same options, add-ons, bells and whistles, etc. Whoop-dee-fricken-do, X58 offers SLI and Crossfire; both companies offer good cards around the same price, so either buy a 980a/780a and get SLI, or buy a 790FX and get ATI cards.


----------



## Mussels (Jul 13, 2009)

this really isnt the thread to have an AMD vs intel war.

Perhaps generalnonsense.net would be a good place to hash it out?


----------



## Meecrob (Jul 13, 2009)

Mussels said:


> both AMD and intel always have the chips run a good percentage above required volts. The reason is that when they go on shit OEM motherboards, they get vdroop and it needs to counter that.
> 
> They cant just release chips that work at EXACTLY the needed voltage to save power and heat, without knocking out their biggest buyers.
> 
> ...



Um, dude, the Barton ran hotter than the tbred-B due to the extra cache, so WTF are you on about?

I owned EVERY Socket A K7 core, and the only truly hot ones were the T-bird and the Palomino. The tbred-A wasn't hot at stock (didn't clock for shit, tho), and the tbred-Bs were killer clockers that didn't produce a lot of heat for their clocks.

As to the TDP: Intel uses what they call "AVERAGE" numbers, whereas AMD rates their chips at MAX numbers. This is why at times Intel's numbers have looked FAR FAR better than other makers' chips, yet the chips have produced more heat (Preshott, anybody?).

A good example is the Atom. Go take a look at the PLATFORM power use of the Atom vs. the VIA Nano: the Atom platform uses more power, in large part due to the HORRIBLE chipset used, and despite the higher power use, its overall performance (gfx + CPU perf, not just CPU benches) is worse.


----------



## cdawall (Jul 13, 2009)

Mussels said:


> both AMD and intel always have the chips run a good percentage above required volts. The reason is that when they go on shit OEM motherboards, they get vdroop and it needs to counter that.
> 
> They cant just release chips that work at EXACTLY the needed voltage to save power and heat, without knocking out their biggest buyers.
> 
> ...



I remember the K7 days. I had an AXP 2000+ that ran about 40C idle and 55C load on a copper-cored cooler, nothing special, and the chip was a tbred. No hotter than a P4 Willamette, which performed worse, cost more, and ran hotter. 

No one said AMD was ahead; in fact, I believe it was the Intel fanboys who started slinging shit about this chip being 140W. Maybe I missed the memo, but Intel rates that wonderful i7 920 as a 130W chip. No one crapped their pants, no one ran home screaming, yet it's running a huge 10W less than this chip will? 

Not to mention that at stock, where the vast majority of BOTH of these chips will run, the AMD chip *will* outperform the i7 920; no ifs, ands, or buts about it. With the clock speed ramped up to 3.4 GHz, this chip will have an advantage over a stock Core i7 920 in just about every task. Now, when OC'd, the 920 takes the lead; I understand that, everyone understands that. However, with that lead it puts out more heat and consumes more power than these chips, so performance per watt will be in AMD's favor. The Phenom 965BE shows promise, with pre-release retail-branded chips hitting around 4-4.1 GHz on air alone at a Vcore of 1.45-1.5 V; at those clock speeds you are looking at 170-200W from these chips. An i7 920 D0 will hit around 4.3-4.4 GHz on 1.4-1.45 V, where you are looking at 290-320W. 


That's for the CPU alone. Now why don't we compare the power consumption of X58 vs. 790FX: the 790FX consumes 3W idle and 10W on load, giving it an 8W TDP, while Intel's X58 carries a 24.1W TDP. Wow, 3x as much power just to talk to the peripherals....

So this gives you 178-208W from CPU + mobo on AMD's side and 314-344W on Intel's side, roughly 40% lower for AMD's solution. You could put a Phenom II X4 905e rig together and run it on the energy you save going with a 965BE over a 920 and OC'ing both. That should say something.
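Taking the posted figures at face value (and only those figures; the CPU wattages above are the poster's own overclocked estimates, not measured numbers), the platform totals and the gap between them work out as follows:

```python
# Re-running the platform arithmetic above, using the posted figures
# verbatim: overclocked CPU estimate + chipset TDP for each side.
amd_low, amd_high = 170 + 8, 200 + 8            # Phenom II 965BE OC + 790FX
intel_low, intel_high = 290 + 24.1, 320 + 24.1  # i7 920 D0 OC + X58

print(amd_low, amd_high)        # -> 178 208
print(intel_low, intel_high)    # -> 314.1 344.1

# How much lower the AMD range is, at each end of the range:
print(round((1 - amd_low / intel_low) * 100))    # -> 43
print(round((1 - amd_high / intel_high) * 100))  # -> 40
```

So on these numbers the AMD platform lands roughly 40-43% below the Intel one, which is where the "~40% lower" figure comes from.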


----------



## TheMailMan78 (Jul 13, 2009)

Mussels said:


> AMD fanboys just need to realise that this isnt the P4 days - Intel are ahead.


 I take offense to that, Mussels. I'm an AMD fanboy, but I'm no dummy. The i7 is faster than AMD's current lineup; that's a fact, period. The only thing I ever argued is that you get better bang for your buck with AMD. However, even that is changing in Intel's favor. AMD had better pull a fucking rabbit out of their asses soon. This "Intel i7 is faster" crap is getting old. :shadedshu

I can honestly say my next rig might be an Intel/Nvidia combo. 

I haven't received a shareholders newsletter in months.


----------



## FordGT90Concept (Jul 13, 2009)

Meecrob said:


> I owned EVERY socket A k7 core, and the only truely hot ones where the tbird and the palomino the tbred-a where not hot at stock(didnt clock for shit tho) and the tbred-b where killer clockers and didn't produce alot of heat for their clocks.


Thoroughbred-A had one fewer metal layer.  At stock, idle temps were often above 60C.  Thoroughbred-B and Barton cores added an extra layer, and their temperatures are comparatively much lower.  AMD screwed up on Thoroughbred-A.


----------



## Meecrob (Jul 13, 2009)

FordGT90Concept said:


> Thoroughbred-A had only one layer of insulation.  At stock, idle temps were often above 60C.  Thoroughbred-B and Barton cores had two layers of insulation.  Their temperatures are comparatively much lower.  AMD screwed up on Thoroughbred-A.



I had a few tbred-As; they wouldn't run above around 65C, they'd just crash, and at stock I never saw them go above 57C on the retail AMD cooler (far from the best), not counting the times people's coolers were clogged with dust.

The Palominos were the hottest chips I have seen AMD produce; those suckers would run 54C idle and 60C+ load on the stock cooler. With 3rd-party cooling you could stabilize them, but they never clocked for shit :/

The tbred-As, some overclocked a little, but never enough to be worth buying for the overclock, and they didn't run nearly as hot as the Pallys in my experience. 

http://www.cpu-world.com/CPUs/K7/TYPE-Athlon XP.html



> All Athlon XP Palomino CPUs had 266 MHz bus speed, and were manufactured using 0.18 micron technology.
> 
> Next revision of Athlon XP core, called Thoroughbred, was manufactured on newer 0.13 micron technology, and as a result, had smaller die size and lower power dissipation than the Palomino core. Bus speed of some Thoroughbred processors was increased to 333 MHz.
> 
> ...



Dunno if this will help, but the tbred chips were 0.13 micron vs. the Pally at 0.18, and the first run of any new process from AMD sucks ballz for clocking. Look back at the K8s, for example: I saw a lot of 90 nm chips that clocked worse than the 130 nm ones.

Examples being the Winchester cores (the first 90 nm cores): they for the most part clocked WORSE than the 130 nm parts they replaced, BUT the Venice cores clocked very well and ran nice and cool (for their day). 

All I know is that the Barton wasn't the first cool-running AXP; the tbred-B was. And the tbred-B 1700+ was a STEAL: every single one I got my hands on clocked to 2600+ speeds or higher with ease, even the locked ones!!!


----------



## Wile E (Jul 13, 2009)

Meecrob said:


> um, dude the Barton ran hotter then the tbred-b due to extra cache, so WTF are you on about?
> 
> I owned EVERY socket A k7 core, and the only truely hot ones where the tbird and the palomino the tbred-a where not hot at stock(didnt clock for shit tho) and the tbred-b where killer clockers and didn't produce alot of heat for their clocks.
> 
> ...


No, AMD no longer rates their chips at max. They changed their system at the introduction of the original Phenom. I believe there was a news post, or article posted, around here about it at some point.



cdawall said:


> i remember K7 days a i had a AXP 2000+ ran about 40C idle and 55C load on a copper cored cooler nothing special and the chip was a tbred. no hotter than a P4 willie which performed worse, cost more, and ran hotter.
> 
> *no one said AMD was ahead infact i believe it was the intel fanboys who went and started slinging shit about this chip being 140w. maybe i missed the memo but intel rates that wonderful i7 920 to be a 130w chip. no one crapped their pants no one ran home screaming yet its running a huge 10w less than this chip will? *
> 
> ...


The big, important part you are missing is, the i7 boards were built with these power draws in mind. Not all of the Phenom II boards were built with 140w tdp in mind. That's the reason people look on it as a con. Not because you have to worry about just the heat of the chip, but because the average joe, and the oem are now gonna have to worry whether their boards will handle it as well.

Now, if AMD had outlined 140W TDPs from the beginning, and all AMD boards had been built with that in mind, then yeah, it would be no problem. In short, it's a product-planning issue.



cdawall said:


> lol i so said that already


Right, but they still need to make sure they can run a 140W CPU in their machines. They now have to double-check that their coolers and VRM cooling are up to the task. It still doesn't change my original point that the higher TDP will affect OEMs.



cdawall said:


> what are you talking about 790FX is AMD's gaming platform and it is the same number of lanes that X58 has and the same other options addons bells and whistles etc. woopdee fricken do X58 offers SLi and Xfire both companies offer good cards around the same price so either buy 980A/780A and get SLi or buy 790FX and get ATi cards.



Yeah, but that leaves you switching boards if you want to try a multi-card setup from the other camp. So it's not "woopdee fricken do", it's a very legitimate advantage i7 has over Phenom II. That also helps OEMs, as they now don't have to stock two different boards for SLI and Crossfire setups. That's very significant.

And who the hell brought up K7 vs P4 and Atom vs Nano? Those arguments are silly, and just need to stop. They have no bearing on the current topic at all.


----------



## Meecrob (Jul 13, 2009)

Huh, thought I read an Opteron article recently that said they still rate at max, but changed how they calculate the figures for max... meh. The main thing to remember is that the two companies DO NOT COME UP WITH THEIR NUMBERS THE SAME WAY, so you can't compare them 1:1.

An example is the Prescott that's rated at 89 watts but puts out far, far more heat than CPUs I've had rated at over 100.

Again, you can't go by the ratings the companies put on their own chips; it's like comparing chips based on their clock speeds. You could have a P4 at 4.6 GHz, but it would still be slower than a Core 2 or K8/K10 chip at 3 GHz.


----------



## Wile E (Jul 13, 2009)

Meecrob said:


> huh, thought I read an opteron article recently that said they still rated at max, but changed how they calculated the figurers for max....meh, main thing to remember is that the 2 companies DO NOT COME UP WITH THEIR NUMBERS THE SAME WAY, as such you cant compare them 1:1
> 
> an example is that prescott thats rated at 89watts, but puts out far far more heat then my cpu's i have had rated at over 100....
> 
> again, u cant go by the ratings the companies put on their own chips, its like compairing chips based on their clocks speeds..... u could have a p4 a 4.6gz but it would still be slower then a core2 or k8/k10 chip at 3gz,



I know. I'm not one of the ones saying that it draws more than i7. That's not the important factor at all. Read my edited post above.


----------



## FordGT90Concept (Jul 13, 2009)

cdawall said:


> not to mention at stock were the vast majority of BOTH of these chips will run the AMD chip *will* outperform the i7 920. no if, ands, or butts about it with the clock speed ramped up to 3.4ghz this chip will have an advantage over a stock core i7 920 in just about every task.


3.2 GHz to 3.4 GHz only represents a 6% increase in clockspeed.  Core i7 920 beat Phenom II 955 in most benchmarks by a margin wider than 6%.  The numbers suggest the Phenom II 965 will tighten the gap but not "outperform" the Core i7 920.
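The back-of-the-envelope math above can be sketched out. The benchmark scores below are hypothetical placeholders (NOT real review numbers), assuming perfectly linear scaling with clock speed, which real workloads rarely achieve:

```python
# Rough clock-scaling estimate. If performance scaled perfectly with
# frequency (it rarely does), a 3.2 -> 3.4 GHz bump buys about 6.25%.
base_clock, new_clock = 3.2, 3.4
scaling = new_clock / base_clock - 1  # ~0.0625

# Hypothetical scores: give the 955 a score of 100 and the i7 920 a
# 10% lead at 110. Even with ideal scaling, the 965 lands near 106,
# narrowing but not closing the gap.
phenom_955_score = 100.0
core_i7_920_score = 110.0
phenom_965_estimate = phenom_955_score * (1 + scaling)

print(f"clock increase: {scaling:.2%}")
print(f"estimated 965 score: {phenom_965_estimate:.1f} vs i7 920 at {core_i7_920_score:.1f}")
```

So unless the lead was under ~6% to begin with, a clock bump alone doesn't flip the result.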


----------



## cdawall (Jul 13, 2009)

Wile E said:


> The big, important part you are missing is, the i7 boards were built with these power draws in mind. Not all of the Phenom II boards were built with 140w tdp in mind. That's the reason people look on it as a con. Not because you have to worry about just the heat of the chip, but because the average joe, and the oem are now gonna have to worry whether their boards will handle it as well.



I doubt this 140W chip will end up in any mainstream OEM PC, and the companies that do use Phenoms, like HP and Gateway, are already set up to use 140W chips; I have personally dissected all of the new builds. Gateway uses an ECS-rebranded 780G board with a 4+1 phase design, and it matches the design of one of the ECS 780G boards already made for 140W chips. HP uses a GP8200-based Asus board that is commonly used with 140W chip support. The coolers in both machines are made by AVC, who makes the OEM coolers for AMD and happens to ship them with the very CPUs that AMD sells.




Wile E said:


> Now, if AMD would've outlined 140w TDPs from the beginning, and all AMD boards were built with that in mind, then yeah, it would be no problem. In short, it's a product planning issue.



Intel never outlines its highest-wattage chips in advance either. This chip is an "oh shit, we need more oomph" part, and that's what it's here for.



Wile E said:


> Right, but they still need to make sure they an run a 140w cpu in their machines. They now have to double check if their coolers, and vreg cooling are up to the task. It still doesn't change my original point that the higher TDP will effect OEM's.



Oddly enough, the 9950 140W chip didn't cause an OEM panic. Oh wait, they didn't use it until it became a better-binned 125W part.




Wile E said:


> Yeah, but that leaves you switching boards if you want to try a multi-card setup from the other camp. So it's not "woopdee fricken do", it's a very legitimate advantage i7 has over Phenom II. That also helps OEMs, as they now don't have to stock out 2 different boards for SLI or Crossfire setups. That's very significant.



ASRock has a 750a board that can do SLI and Crossfire; they use it in Falcon Northwest machines.




FordGT90Concept said:


> 3.2 GHz to 3.4 GHz only represents a 6% increase in clockspeed.  Core i7 920 beat Phenom II 955 in most benchmarks by a margin wider than 6%.  The numbers suggest the Phenom II 965 will tighten the gap but not "outperform" the Core i7 920.



Not in most benchmarks. In some benchmarks, mainly those that deal in multitasking (you know, something that takes advantage of 8 threads). In gaming benchmarks they were within spitting distance of each other.


----------



## Wile E (Jul 13, 2009)

cdawall said:


> i doubt that this 140w chip will end up in any mainstream OEM PC and the companies that do use phenoms like HP and gateway are already set up to use 140w chips i have personally dissected all of the new builds. Gateway uses a ECS rebranded 780G board with a 4+1 phase design and it matches the design of one of the ECS 780G boards already made for 140w chips. HP uses a GP8200 based asus board that is commonly used with 140w chip support. the coolers in both machines are made by AVC who makes the OEM coolers for AMD and just so happens to ship them with those very cpu's that AMD sells.


Doesn't matter what the boards are based on. You are still looking in too narrow a field. Case cooling also comes into play.



cdawall said:


> intel never outlines its highest wattage chips this chip is an oh shit we need more umph and thats what it is here for.


Ummm, what? I don't know exactly what you were trying to say here, but regardless, Intel setups don't have to worry about additional wattage.



cdawall said:


> oddly enough the 9950 140w chip didn't cause an OEM panic oh wait they didn't use it until it became a more binned 125w part


Exactly my point. They couldn't use 140w cpus. This is what I've been getting at the whole time.



cdawall said:


> asrock has a 750A board that can do SLI and Xfire they use it in falcon northwest machines.


So, one AMD board vs how many Intel boards? Not to mention ASRock is a POS brand compared to other options available. Your point is moot. i7 still has the multi-card platform advantage.


----------



## TheMailMan78 (Jul 13, 2009)

Wile E said:


> So, one AMD board vs how many Intel boards? Not to mention ASRock is a POS brand compared to other options available. Your point is moot. i7 still has the multi-card platform advantage.


 (Picks up his teddy bear and interrupts the adults) So an i7 platform can do both crossfire and SLI?


----------



## Wile E (Jul 13, 2009)

TheMailMan78 said:


> (Picks up his teddy bear and interrupts the adults) So an i7 platform can do both crossfire and SLI?



Yeah, on a good many x58 boards. Some of the low end ones don't offer SLI support, but most X58 boards do both.


----------



## TheMailMan78 (Jul 13, 2009)

Wile E said:


> Yeah, on a good many x58 boards. Some of the low end ones don't offer SLI support, but most X58 boards do both.



What chipset supports both? Sorry, I'm learning this Intel stuff. Bear with me.


----------



## Wile E (Jul 13, 2009)

TheMailMan78 said:


> What chipset supports both? Sorry I'm learning this Intel stuff. Bare with me.



X58, but low-end X58 boards may opt out of SLI support to make the board cheaper to buy, while keeping Crossfire support. It's a licensing thing. But most X58 boards do both SLI and Crossfire; you just have to double-check the SLI support.


----------



## cdawall (Jul 13, 2009)

Wile E said:


> Doesn't matter what the boards are based on. You are still looking in too narrow a field. Case cooling also comes into play.
> 
> Ummm, what? Don't know exactly what you were trying to say here, but regardless, Intel setups don't have to worry about additional wattage.
> 
> ...



You're right, I mean the fact that damn near every OEM PC just has a 120 mm fan in the back, regardless of the CPU or video card or anything else in the case, has never really been proven to cool anything.

Most new AMD boards can handle 140W just fine. It was a group of MSI, Asus, and Gigabyte boards that came out when AMD first released 125W and 140W parts that couldn't cope, and they couldn't cope with the older 125W 6000+ and 6400+ either.

My point was that no OEM has used that high-wattage a chip when it first came out. None of them. Do you see any OEM that had the 6400+ in a PC? Or one with the FX-57? Or, wait, how about a QX9650? No, you see ones with a 5600+ in it, or a 4000+, or a Q9550. Most OEMs do not use high-bin CPUs at all, regardless of TDP.

And go ahead on your high horse about the multi-card stuff. I honestly don't care that Intel threatened and cheated NVIDIA out of the SLI license, but whatever; if that business practice makes you happy, go buy some stock in Intel.


----------



## TheMailMan78 (Jul 13, 2009)

Wile E said:


> X58, but low end X58 boards may opt out of SLI support, to make it cheaper to buy, but still have Crossfire support. It's a licensing thing. But most x58 boards do both SLI and Crossfire. Just have to double check the SLI support.



I had no idea. No matter how much I think I know, I'm still a noob in big-boy pants. Ya know, I really think I'm going Intel VERY soon. Anyone want to buy a beast of an AMD rig? See specs.


----------



## erocker (Jul 13, 2009)

TheMailMan78 said:


> Ya know I really think I'm going Intel VERY soon.



:shadedshu I hear the sounds of a thousand Canadians crying. Does anyone know if P55 chipsets will be CrossFire and SLI compatible?


----------



## Wile E (Jul 13, 2009)

cdawall said:


> you right i mine the fact that damn near every OEM PC just has a 120MM fan in the back regardless to cpu or video card or anything else in the case has never really been proven to cool anything.
> 
> most new AMD boards can handle 140w just fine it was a group of MSI, asus and GB boards that came out when AMD first released 125w and 140w parts that couldn't cope and they couldn't cope with the older 125w 6000+ and 6400+ either.
> 
> ...


lol. That had the biggest AMD fanboy overtone I have heard from you. 

Look, I don't give a shit what system I buy. I buy the best for my money at the time of purchase. Last time I built an entire rig, my budget was high, and the QX got the nod, as it was faster than anything else out there.

Next year, I plan to upgrade again, and probably on a rather large budget again. Whoever has the fastest setup, with the features I need in my price range will get my money.

I don't deal in fanboy emotions, I deal in facts. It is a fact that 140w is a problem for many OEMs that offer AMD, or else they would actually offer 140w cpus instead of waiting for 125w variants to release months later.

It is a fact that case airflow affects cooling, and that a single low-RPM 120 mm exhaust fan in an mATX case is not adequate for a good many high-heat-output systems.

And yes, high-performance OEMs do offer the top-tier CPUs, and higher TDPs do affect their designs, whether you like it or not.

So, go ahead on your anti-intel tirade, it still doesn't change facts, cd.


----------



## Wile E (Jul 13, 2009)

erocker said:


> :shadedshu I hear the sounds of a thousand Canadians crying. Does anyone know if P55 chipsets will be CrossFire and SLI compatible?



Rumor has it that it will.


----------



## ShadowFold (Jul 13, 2009)

SLI support isn't important to me at all. I kind of want to build an i7 rig just to play with. If AMD doesn't announce anything cool by September, I'll probably build an Intel rig just to tide me over until they can get their stuff back together.  My Crosshair III is so lonely with only a triple-core.  I need a 6-core or something that does crazy clocks.


----------



## TheMailMan78 (Jul 13, 2009)

erocker said:


> :shadedshu I hear the sounds of a thousand Canadians crying. Does anyone know if P55 chipsets will be CrossFire and SLI compatible?



Ya know, I really wish AMD supported both platforms, Crossfire and SLI. That's what tempts me the most. I love EVGA cards. If I could get two 295s with my Phenom II 720, I would be in hog heaven.


----------



## Meecrob (Jul 13, 2009)

TheMailMan78 said:


> Ya know I really wish AMD supported both platforms. Crossfire and SLI. Thats what temps me the most. I love EVGA cards. If I could get two 295s with my Phenom II 720 I would be in hog heaven.


AMD CAN'T: NVIDIA won't let them run SLI on AMD chipsets, and won't allow CF on NVIDIA chipsets.

This is NOT AMD's choice/fault, it's NVIDIA's. They are too worried about losing a few mobo or video card sales to allow cross-compatibility between CF and SLI.

It's childish and STUPID, but it's how some companies think.

Yes, NV would lose some sales in one category, but it would more than be made up for by sales in the other. There are a lot of people I know who would buy NVIDIA chipsets IF they could run CF and SLI on them. Same goes for NVIDIA video cards: IF they could run 2+ NVIDIA video cards in SLI on a 790 board, they would jump on it. But thanks to NVIDIA's stupidity they lose out on sales, because those same people just say "screw it" and go get a couple of ATI cards.


----------



## Meecrob (Jul 13, 2009)

Wile E said:


> lol. That had the biggest AMD fanboy overtone I have heard from you.
> 
> Look, I don't give a shit what system I buy. I buy the best for my money at the time of purchase. Last time I built an entire rig, my budget was high, and the QX got the nod, as it was faster than anything else out there.
> 
> ...



I don't know if you realize it, but your posts come off as very anti-AMD. No, AMD isn't "the best" anymore, but the fact is that most OEM boards COULD run the chips; most OEMs simply DO NOT USE HIGH-END AMD CHIPS. Hell, most of them don't even offer high-end Intel systems anymore, and if they do, they have one choice. Even the so-called high-end OEMs don't tend to offer high-end AMD.

Alienware = Dell, so them even offering anything AMD is a miracle, only happening to avoid lawsuits for themselves and Intel.

VoodooPC... to me they are kind of a joke these days.

Falcon NW... same deal. Sure, they can build you a high-end AMD rig, but you could build yourself the same rig far cheaper, or have somebody local build one for you.

I could list other sites offering AMD systems that you could consider OEMs, but none of the ones that offer "high-end" AMD stuff have even close to the market share of the top OEMs like Dell, HP, Gateway, Acer, etc. The top OEMs have rarely offered the top-end CPUs because, honestly, THEY DON'T MAKE ENOUGH MONEY OFF THOSE SYSTEMS; they make far more off the budget- and mid-range-class systems, and sell far more of them.

Go to any big-box system seller and see what they sell more of: low-end, mid-range, or high-end. Most of them around here don't even offer what I would call high-end systems; they offer mid-range as high-end, with low and ultra-low as your other choices.

On a side note, most OEM systems that have 120 mm fans in them don't use "low-speed" fans; they use high-speed fans that are speed-controlled by the board. Example: pull the 120 mm fan out of a Dell, reconfigure the plug to work on a normal board, and plug it in. The fan will be LOUD AS HELL and move ONE HELL OF A LOT OF AIR. In fact, in the case of Dell boxes, that fan is MORE THAN ENOUGH to cool even the hottest of CPUs and keep the flow going to keep even high-end video cards happy (Dell may suck, but their case designs for mini and full towers are quite good for airflow most of the time).


----------



## TheMailMan78 (Jul 13, 2009)

Meecrob said:


> AMD CANT, nVidia wont let them run SLI on AMD chipsets and wont allow CF on nVidia chipsets.
> 
> this is NOT AMD's Choice/Fault its nVidia, they are to worried about loosing a few mobo or videocard sales to allow cross compatibility between cf and sli.
> 
> ...


Cdawall says Asrock has one.


----------



## sinar (Jul 13, 2009)

Sandra's info of my cpu


----------



## Meecrob (Jul 13, 2009)

TheMailMan78 said:


> Cdawall says Asrock has one.



Hacked drivers, same as ULI had. But NVIDIA will find a way to block it, forcing people to use older (mobo chipset) drivers, or AMD/ATI will block it to avoid problems with NVIDIA.

I haven't found that board available at any retail outlet yet, either.


----------



## FordGT90Concept (Jul 13, 2009)

SLI requires hardware to work (included in the nForce chipset), Crossfire is purely drivers.  Both require a licence.  NVIDIA chipsets can support Crossfire but because there's no way AMD will sell a license to NVIDIA, Intel will be the only chipset manufacturer in the foreseeable future to offer both.  At the same time, NVIDIA would take SLI from Intel at the first opportunity they get.

Kinda off topic though.


----------



## Meecrob (Jul 13, 2009)

FordGT90Concept said:


> SLI requires hardware to work (included in the nForce chipset), Crossfire is purely drivers.  Both require a licence.  NVIDIA chipsets can support Crossfire but because there's no way AMD will sell a license to NVIDIA, Intel will be the only chipset manufacturer in the foreseeable future to offer both.  At the same time, NVIDIA would take SLI from Intel at the first opportunity they get.
> 
> Kinda off topic though.



Wrong. SLI can be software- or hardware-driven; that's how the X58 chipset gets its SLI support. It's SOFTWARE: the company pays a license fee to have software-based SLI support on X58 boards, for example, and that's why cheaper X58 boards lack SLI support.

Since I'm sure you will argue without looking into it yourself:

http://www.engadget.com/2009/03/19/...ticks-i?icid=sphere_blogsmith_inpage_engadget



> When NVIDIA announced support for SLI on motherboards sporting Intel's X58 chipset, there was something of a hidden catch -- manufacturers needed to pay to become "certified." Yes, you might have thought all you needed was a pair of parallel PCI-E slots and couple of matching video cards to get your SLI on, but non-certified boards find themselves shunned by NVIDIA graphics hardware. However, where there's a will there's usually a way, and for at least one of those woefully illegitimate mobos there's a workaround. GIGABYTE didn't bother to get certification for its EX58-UD4 motherboard, but it did for the EX58-UD4P, and it turns out the same BIOS works on both. Naturally it takes a little extra work to get the wrong version up in the right EEPROM, but the read link has all the details you need to re-flash with finesse.



It's NOT hardware-based. NVIDIA's drivers detect whether the board's BIOS has a cert for SLI in it; if it does, they allow SLI to work, and if not, well, you're BONED.


----------



## TheMailMan78 (Jul 13, 2009)

Meecrob said:


> wrong, SLI can be software or hardware driven, thats how the x58 chipset gets its SLI support, its SOFTWARE, the company pays a license fee to have software based SLI support on x58 boards for example, thats why cheaper x58 boards lack sli support.
> 
> *since Im sure you will argue without looking into it yourself.*
> 
> ...


No need for the attitude buddy. Right or wrong show some respect.


----------



## Meecrob (Jul 13, 2009)

Give respect, receive respect.

Don't post something as fact if it's not.


----------



## TheMailMan78 (Jul 13, 2009)

Meecrob said:


> give respect, receive respect.
> 
> dont post something as fact if its not



Mistakes can be made. No one is perfect. He said nothing disrespectful to you. Slow your roll son.


----------



## Kitkat (Jul 13, 2009)

So it is only 125W then?
http://www.technalogic.com/Inu_products/INU_ProdDetailsL19.asp?ref=84342616
Edit!
Hey, at Fudzilla too:
http://buy.fudzilla.com/a441449.html

Do me a favor: buy me one, then just IM me and I'll let you know about the shipping. Thanks in advance, lol.


----------



## wiak (Jul 14, 2009)

http://buy.fudzilla.com/a441449.html

Hehe, it's 125W, just like the 955. Kinda bad to put 140W in the title when it just might have been an early sample.
And guess what, Core i7 is a 130W part, hehe.


----------



## Meecrob (Jul 14, 2009)

Nice job with the fact-checking, OP...


----------



## Wile E (Jul 14, 2009)

Meecrob said:


> I dont know if you realize it, but your posts come of very anti-amd, no amd isnt "the best" anymore, but the fact is that most OEM boards Could run the chips, but most oem's DO NOT USE HIGH END AMD CHIPS, hell most of them dont even offer high end intel systems anymore, and if they do, they have one choice, even the so-called high end OEM's dont tend to offer high end AMD.
> 
> Alienware=Dell, so them even offering anything AMD is a miracle, only happening to avoid
> lawsuits for themselves and intel.
> ...


Dells use standard 120 mm fans. When's the last time you actually looked in a Dell system? Almost all OEMs choose quiet over airflow. I still service a lot of these machines, and the case cooling is barely adequate for average-wattage systems, let alone high-wattage systems.

As far as building a high-end AMD yourself, that's not the scope of this conversation. The scope of the current conversation has been that the 140W rating does have a negative effect, especially on OEMs.

And my posts aren't intended to come off as anti-AMD. It's just that a lot of the very pro-AMD people post a fair bit of half-truths or misinformation, and I only try to counter that. For instance, you'll never hear me say anything bad about building an AMD rig if i7 is out of budget, and you won't find me recommending LGA775 builds anymore. Only i7 or AMD (possibly i5 when it releases).



FordGT90Concept said:


> SLI requires hardware to work (included in the nForce chipset), Crossfire is purely drivers.  Both require a licence.  NVIDIA chipsets can support Crossfire but because there's no way AMD will sell a license to NVIDIA, Intel will be the only chipset manufacturer in the foreseeable future to offer both.  At the same time, NVIDIA would take SLI from Intel at the first opportunity they get.
> 
> Kinda off topic though.





Meecrob said:


> wrong, SLI can be software or hardware driven, thats how the x58 chipset gets its SLI support, its SOFTWARE, the company pays a license fee to have software based SLI support on x58 boards for example, thats why cheaper x58 boards lack sli support.
> 
> since Im sure you will argue without looking into it yourself.
> 
> ...



Meecrob is correct. It's purely a licensing issue, and nVidia bears most of the blame, because not only do they not allow SLI on AMD chipsets, but they heavily frown upon Crossfire on NV chipsets, except in some very rare exceptions.



Kitkat said:


> so it is only 125w then?
> http://www.technalogic.com/Inu_products/INU_ProdDetailsL19.asp?ref=84342616
> edit!
> hey at fudzilla too
> ...


Stores and Fudzilla are not good sources. Stores, because mistakes made on one sometimes trickle down to others, and Fudzilla because, well, because it's Fudzilla. lol.

We'll have to wait for official word from AMD. If it is indeed 125W, then this whole news post, and the ensuing debates, were indeed pointless.


----------



## btarunr (Jul 14, 2009)

Kitkat said:


> so it is only 125w then?
> http://www.technalogic.com/Inu_products/INU_ProdDetailsL19.asp?ref=84342616
> edit!
> hey at fudzilla too
> ...



Nice find. That's the 125W model.


----------



## Flyordie (Jul 14, 2009)

It's teetering on the 125-130W TDP barrier. It should be released as a 125W part, though.
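For what it's worth, the binning/undervolting argument made earlier in the thread follows from the CMOS dynamic-power relation P ≈ C·V²·f. A toy sketch with illustrative numbers (the 1.40 V and 1.25 V figures are assumptions for the example, not AMD specs, and leakage is ignored):

```python
def scaled_dynamic_power(p_old, v_old, v_new, f_old, f_new):
    """Dynamic power scales roughly with V^2 * f (leakage ignored)."""
    return p_old * (v_new / v_old) ** 2 * (f_new / f_old)

# If a 140 W bin at an assumed 1.40 V could hold the same 3.4 GHz at
# 1.25 V, dynamic power alone would fall to roughly 112 W:
p = scaled_dynamic_power(140.0, 1.40, 1.25, 3.4, 3.4)
print(f"{p:.1f} W")
```

The quadratic voltage term is why better binning (lower stock voltage) moves the TDP rating far more than a small clock change does.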


----------



## Meecrob (Jul 14, 2009)

NV have actively BLOCKED CF on their chipsets via the chipset drivers :/

If you've got a Dell from the P4 era or newer, take that silent fan out and plug it into a real mobo; it will blow your mind how much air it moves. I'm using one at this very moment on my CPU. It's a Panaflo (NMB or whatever); at full speed you can easily hear it across the room. I run it at 50% on all but the hottest days, because it even gets a bit loud for my taste (and I'm no silence purist, or for that matter lover...).

IF you ramp the fan speed up in a Dell, the case design is excellent (it came from their older BTX designs/airflow). I have seen plenty of these systems with cases that are quite nice. Other than the Dell proprietary front-panel connectors, they have a VERY OPEN front grill, and the rear is well ventilated as well in the mini towers and full towers. The fan's SILENT, but that's due to them using the PWM feature to keep it that way until you hit meltdown temps (let a P4/P-D rig hit Dell's critical temps and the fan will spool up to full power).

I agree, though, many of the systems OEMs make don't have the best airflow. It's gotten better over the years, as long as you avoid the thin clients (god, those SUCK ASS).

If you can rob some of those 120s, give them a test at full tilt; you will be surprised by how powerful they really are. A real waste for Dell to run them at 15-20% when they aren't loud at all at 50%.


----------



## Wile E (Jul 14, 2009)

Meecrob said:


> nv have activly BLOCKED cf on their chipsets via the chipset drivers :/
> 
> if you got a dell p4 era or newer take that silent fan out, and plug it into a real mobo, it will blow your mind how much air it moves, im using one at this very moment on my cpu, its a panaflow(nbm or whatever) at full speed you can hear it across the room easly, i run it at 50% on all but the hottist days because it even gets a bit loud for my taist(and im no silance purist or for that matter lover......)
> 
> ...



I have used them at full power. Almost every single one I've pulled from a system is a low speed fan, run directly on the psu. I'm not retarded, I know how to run a fan at full speed. lol.


----------



## btarunr (Jul 14, 2009)

Wile E said:


> We'll have to wait for official word from AMD. If is indeed 125w, then this whole news post, and the ensuing debates, were indeed pointless.



The 125W model goes by the model number HDZ965FBGIBOX, while the 140W one goes by HDZ965FBK4DGI. This happened with the X4 9950 too: both its variants were sold, only the 125W models came a good month or so later. It looks unlikely that such a thing will happen again, now that the 125W one has already been spotted.
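The two variants can be told apart from the OPN strings alone. A toy lookup built only from the two model numbers quoted above (real AMD OPNs encode more fields than this; the mapping here covers just these two examples):

```python
# The only two OPNs mentioned in the thread, mapped to what they were
# reported to be. The trailing characters are the distinguishing part.
KNOWN_OPNS = {
    "HDZ965FBGIBOX": ("125 W", "boxed (PIB)"),
    "HDZ965FBK4DGI": ("140 W", "tray/OEM"),
}

def describe(opn: str) -> str:
    tdp, packaging = KNOWN_OPNS[opn]
    return f"{opn}: {tdp}, {packaging}"

print(describe("HDZ965FBGIBOX"))  # HDZ965FBGIBOX: 125 W, boxed (PIB)
```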


----------



## Wile E (Jul 14, 2009)

btarunr said:


> The 125W model goes by the model number HDZ965FBGIBOX, while the 140W one goes as HDZ965FBK4DGI. This happened with the X4 9950, both its variants were sold.



Why would they bother with a 140w part at all then? That doesn't really make much sense to me. I could see if the 125w part came a few months later, but why both at the same time?


----------



## btarunr (Jul 14, 2009)

Also, "HDZ965FBK4*DGI*" (140W) tells us that it is not a PIB (processor-in-a-box), as "BOX" would be the suffix. So the bets are off: no commercially available 140W 965 BE.

No Phenom II ends with the "BOX" suffix. Wait and see. Sorry for any confusion.


----------



## wiak (Jul 14, 2009)

btarunr said:


> Also "HDZ965FBK4*DGI*" (140W) tells us that it is not a PIB (processor-in-(a)-box), as "BOX" would be the suffix. So the bets are off, no 140W commercially available 965 BE.


OEMs anyone?


----------



## Wile E (Jul 14, 2009)

btarunr said:


> Also "HDZ965FBK4*DGI*" (140W) tells us that it is not a PIB (processor-in-(a)-box), as "BOX" would be the suffix. So the bets are off, no 140W commercially available 965 BE.



Still doesn't really make sense to sell it as a 140w part, even if it is an oem chip.


----------



## btarunr (Jul 14, 2009)

Wile E said:


> Still doesn't really make sense to sell it as a 140w part, even if it is an oem chip.



It doesn't, but if they have thousands of these in the middle of the Pacific (from Malaysia, en route California), AMD can't do much about it.


----------



## FordGT90Concept (Jul 14, 2009)

Meecrob said:


> wrong, SLI can be software or hardware driven, thats how the x58 chipset gets its SLI support, its SOFTWARE, the company pays a license fee to have software based SLI support on x58 boards for example, thats why cheaper x58 boards lack sli support.
> 
> since Im sure you will argue without looking into it yourself.
> 
> http://www.engadget.com/2009/03/19/...ticks-i?icid=sphere_blogsmith_inpage_engadget


Let me bullet the response...

-All X58 chipsets have the necessary hardware to run SLI.  In fact, all X38 and X48 chips also have that ability.

-The Intel 5400 (Skulltrail chipset) does not natively support SLI because it is largely based on the Intel 5000 series chipsets.  Because this is the Skulltrail platform, not having SLI was deemed unacceptable, so to add it they used two NVIDIA nForce 100 MCP chips and modified the chipset enough (to include those additional chips) to warrant giving it its own model number (5400).

-In all cases of SLI-enabled chipsets that aren't manufactured by NVIDIA, an SLI license must be acquired for third-party manufacturers to sell that particular SKU.  If they don't purchase the license, they need to disable it (via BIOS code) or else run the risk of getting sued by NVIDIA (protecting IP).

-SLI is a hardware- and software-based technology.  You need a chipset that is capable of handling SLI (hardware), you need two or more similar video cards (also hardware), you need a proper BIOS that unlocks the hardware (software), and you need drivers to tell the OS how to use it (also software).  All four elements combined is SLI, two of which are embedded in the motherboard.
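Those four requirements can be restated as a simple predicate. The names here are illustrative, mirroring the argument above rather than NVIDIA's actual driver checks:

```python
def sli_possible(chipset_capable: bool, matched_cards: bool,
                 bios_has_cert: bool, driver_support: bool) -> bool:
    # SLI needs both hardware (capable chipset + matched cards) and
    # software (BIOS certificate + drivers); any missing piece kills it.
    return all([chipset_capable, matched_cards, bios_has_cert, driver_support])

# An uncertified board fails even with fully capable hardware:
print(sli_possible(True, True, False, True))  # False
```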





Meecrob said:


> its NOT hardware based, nvidia's drivers detect if the boards bios have a cert fro SLI in them, if they do they allow SLI to work, if not, well your BONED.


Skulltrail says otherwise.  If hardware was not required for SLI, why would Intel have put two lowly NVIDIA chips on their $600 motherboard?


----------



## btarunr (Jul 14, 2009)

FordGT90Concept said:


> Skulltrail says otherwise.  If hardware was not required for SLI, why would Intel have put two lowly NVIDIA chips on their $600 motherboard?



So that NVIDIA makes money on those chips without having to work out a licensing program like they did for X58. Every SLI-compatible X58 motherboard fetches them US $5.


----------



## Wile E (Jul 14, 2009)

btarunr said:


> So that NVIDIA makes money on those chips without having to work out a licensing program like they did for X58. Every SLI-compatible X58 motherboard fetches them US $5.



Yep. You are mistaken on this, Ford. It is purely software. NVIDIA forced the use of their NF100 and NF200 chips in the past to artificially boost their chipset sales. Most SLI-ready X58 boards have neither of those chips.


----------



## FordGT90Concept (Jul 14, 2009)

btarunr said:


> So that NVIDIA makes money on those chips without having to work out a licensing program like they did for X58. Every SLI-compatible X58 motherboard fetches them US $5.


$5 is cheaper for Intel than buying two NVIDIA chips and reengineering the 5000 chipset to accommodate them.  It would have been cheaper for Intel to contract NVIDIA to make the Skulltrail chipset like AMD did with DualFX.


----------



## Flyordie (Jul 14, 2009)

FordGT90Concept said:


> $5 is cheaper for Intel than buying two NVIDIA chips and reengineering the 5000 chipset to accommodate them.  It would have been cheaper for Intel to contract NVIDIA to make the Skulltrail chipset like AMD did with DualFX.



There are some AMD based FX boards out there... they even support up to 140W per socket... lol.


----------



## btarunr (Jul 14, 2009)

FordGT90Concept said:


> $5 is cheaper for Intel than buying two NVIDIA chips and reengineering the 5000 chipset to accommodate them.  It would have been cheaper for Intel to contract NVIDIA to make the Skulltrail chipset like AMD did with DualFX.



In the case of DualFX, the chipset was 100% NVIDIA. It's called the nForce 680a SLI. 

In the case of Skulltrail, like I said, they found a makeshift way of making sure NVIDIA gets its cut without having to sign agreements with Intel (since Intel is the manufacturer of Skulltrail), beyond purchasing the nForce 200 like any other component. With X58, the dealings were between the motherboard vendors and NVIDIA. Intel had no role, except that it eventually got one for its DX58SO.


----------



## FordGT90Concept (Jul 14, 2009)

Wile E said:


> Yep. You are mistaken on this, ford. It is purely software. Nvidia forced use of their nf100 and nf200 chips in the past to artificially boost their chipset sales. Most SLI ready X58 boards have neither of those chips.


Then explain why no Intel boards prior to X38 have SLI.  Following that, only the X48 and X58 have SLI.  As you know, P35s and P45s sold like mad.  If motherboard manufacturers could have paid $5 to allow SLI on those boards, don't you think they would have?  That would have put their boards on top.  But nope, so far only the X## and 5400 (via 100 MCP) have SLI.


----------



## Wile E (Jul 14, 2009)

FordGT90Concept said:


> $5 is cheaper for Intel than buying two NVIDIA chips and reengineering the 5000 chipset to accommodate them.  It would have been cheaper for Intel to contract NVIDIA to make the Skulltrail chipset like AMD did with DualFX.



Most x58 SLI boards do not have *ANY* NF chip on them. It is only a BIOS string that enables SLI support in the drivers.

The only X58 SLI boards that have the NF200 chips on them, are those that support Tri SLI at 16x, 16x, 16x speeds. And those come at a heavy premium, thus nVidia making extra money on those boards.

And it makes perfect sense for them to allow SLI on non-NF equipped boards, because that's the only way they will make ANY money on i7 in the midrange at all. They would lose a ton of money if they didn't license it to Intel, as they wouldn't be making a dime on those mid range x58 boards, as opposed to making $5 on them now.

Remember, they have no i7 chipset of their own, so it was the only way for them to get a piece of the pie.
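
The "BIOS string" gating described above can be sketched as a simple allowlist check. This is a hypothetical illustration only: the cookie string, function name, and logic below are invented, since NVIDIA's actual certificate format and driver-side checks are not public.

```python
# Hypothetical sketch of driver-side SLI platform gating, per the thread:
# on licensed X58 boards SLI is enabled by a BIOS "cookie" the driver
# looks for, not by extra bridge hardware. All names here are invented.

HYPOTHETICAL_SLI_COOKIE = "NV-SLI-CERT"  # invented marker string


def sli_enabled(bios_strings, has_nf200=False):
    """Return True if the driver would expose SLI on this platform.

    bios_strings: iterable of OEM strings read from the board's BIOS tables.
    has_nf200:    True if an nForce 200 bridge chip is present (per the
                  discussion, only needed for x16/x16/x16 tri-SLI).
    """
    licensed = any(HYPOTHETICAL_SLI_COOKIE in s for s in bios_strings)
    # A bridge chip alone also qualifies (Skulltrail-style boards).
    return licensed or has_nf200
```

Under this (assumed) model, the "hacked" drivers of the nForce 4 era amounted to bypassing the equivalent of this check, which is why stronger driver encryption was enough to stop them.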


----------



## FordGT90Concept (Jul 14, 2009)

It is in the I/O Hub (aka northbridge), where it has always been.  The hardware changes surround the PCI Express bus.


----------



## btarunr (Jul 14, 2009)

FordGT90Concept said:


> Then explain why no Intel boards prior to X38 have SLI.



Simple, NVIDIA wouldn't cannibalize its nForce 700 for Intel's sake. Now there's a lot of drama surrounding NVIDIA and Intel, and how Intel isn't allowing NV to make a full-blown chipset. NVIDIA needs to make money, it found one small way of doing it till things are settled. Even if they are, I'm not sure if you'll want to buy a "nForce 980i SLI" over Intel X58, because the latter supports CrossFireX too.


----------



## Wile E (Jul 14, 2009)

FordGT90Concept said:


> It is in the IO Hub.



Nothing more than semantics. It's still an NV chip, and you knew what I meant.

They didn't have to enable SLI on X38, X48, P45, etc, etc. because they had competing LGA775 chipsets in 750, 780, and 790.

They have nothing to compete against Intel on i7 chipsets. Allowing it on midrange X58 without the NF200 was the only way they could get a piece of the midrange pie.

SLI has always been software limited. Remember the ULI chipsets that supported SLI, but NV blocked in a driver update?


----------



## btarunr (Jul 14, 2009)

Wile E said:


> Remember the ULI chipsets that supported SLI, but NV blocked in a driver update?



Exactly, after NVIDIA acquired ULI, enter nForce 560 SLI, nForce 500 SLI, etc.


----------



## FordGT90Concept (Jul 14, 2009)

Put simply, I see more evidence that SLI involves hardware than to the contrary.  It isn't solely a license agreement.




Wile E said:


> SLI has always been software limited. Remember the ULI chipsets that supported SLI, but NV blocked in a driver update?


When was this?  NVIDIA acquired ULI in late 2005.


----------



## Wile E (Jul 14, 2009)

FordGT90Concept said:


> Put simply, I see more evidence that SLI involves hardware than to the contrary.  It isn't solely a license agreement.
> 
> 
> 
> When was this?  NVIDIA acquired ULI in late 2005.



Exactly when they blocked ULI chipsets from running SLI.

And as far as SLI needing hardware, google it on X58 my friend. It's purely a BIOS string. No hardware needed, unless you want 16,16,16x Tri SLI instead of 16,16,8. None of the SLI boards, and I repeat, NONE of the SLI boards that support only 16,16,8 or lower have the NF chips on them.


----------



## FordGT90Concept (Jul 14, 2009)

The following statements are made with this in mind: almost all hardware can be replicated using software at a performance penalty.

The "hacks" only worked up to the nForce 4 and 500 series; from that point on, something changed.  Were those original hacked drivers merely a corporate-sponsored software version (basically like CrossFire), or was SLI purely software on those chipsets?


X58 already has the necessary hardware to run SLI (x16, x8, x8).  The BIOS string determines whether or not the functionality is reported.  Ehm, just like how the nForce 600 series chips are the hardware for SLI on those platforms, X58 is the hardware for SLI on its platform.  NVIDIA licensed the technology so Intel could embed it.

Obviously, if the manufacturer wants more lanes than that, they'll have to supplement the X58 chip.


----------



## btarunr (Jul 14, 2009)

FordGT90Concept said:


> X58 already has the necessary hardware to run SLI (x16, x8, x8).  The BIOS string determines whether or not the functionality is reported.  Ehm, just like how nForce 600 series chips is the hardware for SLI on those platforms, X58 is the hardware for SLI on its platform.  NVIDIA licensed the technology so Intel could embed it.
> 
> Obviously, if the manufacturer wants more lanes than that, they'll have to supplement the X58 chip.



If that were so, every X58 board would have shipped with SLI support. That's not the case. The GeForce driver recognizes an X58 board that is meant to support SLI from its secret-sauce qualified-platform list.

Maybe if some virtual-machine software ever emulates a Core i7 + qualified X58 motherboard, we'll get to see that first hand on any machine? It's a big maybe. 

Anyway, we're going south of the topic. Nice discussion, can resume it elsewhere.


----------



## Wile E (Jul 14, 2009)

FordGT90Concept said:


> The following statements are made with this in mind: almost all hardware can be replicated using software at a performance penalty.
> 
> The "hacks" only worked up to nForce 4 and 500 series.  From that point on, something changed.  Were these original hacked drivers merely a corporate-sponsored software version (basically like Crossfire) or was SLI purely software on these chipsets?  After nForce 500 series, something changed.
> 
> ...



No, the only thing that changed was the encryption on the drivers that hid the SLI switch from the hackers. X58 was never designed with SLI in mind. In fact, it's not even designed with Crossfire in mind. It's just designed to have x number of PCIe lanes available.

And, I'm not sure if you remember, but when X48 first released, I believe either Falcon Northwest or VoodooPC released a system with SLI on an X48 board (and then later CrossFire on a 790i board). The only thing that was custom was the BIOS and drivers. There were no hardware changes at all.


----------



## FordGT90Concept (Jul 14, 2009)

btarunr said:


> Anyway, we're going south of the topic. Nice discussion, can resume it elsewhere.


Pretty much.  I wish I could find the patent.  Even then, how it is implemented isn't something the patent would contain in detail, so yeah, it's pretty much impossible to be certain.


Edit: I Googled "SLI patent" and came up with this:


> http://www.onsiteil.com/tips-and-tricks-center/61-sli-crossfire-explained
> 
> NVIDIA SLI takes advantage of the increased bandwidth of the PCI ExpressTM bus architecture, and features *hardware and software* innovations within NVIDIA GPUs (graphics processing units) and *NVIDIA nForce4 MCPs* (media and communications processors).


I doubt they are talking solely about the SLI bridge there.

It's all gray area...intentionally. 


Nevermind, that's on a thousand sites, word for word, as part of an NVIDIA statement when it launched.


Edit: There are no filed patents.


----------



## vagxtr (Jul 15, 2009)

D4S4 said:


> amd`s prescott



They had their Prescotts with the B2 and B3 revisions; they can't repeat that all the time :rofl:

But the skyscraper-large fact is that they shortchanged us in favor of server-chip (Istanbul core) R&D, they really did. They produced a Deneb core whose enormous L3 cache accounts for close to (or even more than) 50% of all that power (ACP), and it's nothing more than a marketing hog... Deneb==Nehalem marketing fables. We'd see how well it performs with just 2 MB, or maybe only 3 MB, of L3 cache, and power consumption would be 100W instead of 140W.




TheGuruStud said:


> Cmon now. It's nowhere near 200W tdp   (I'm serious, intel lied so bad back then)



They couldn't lie too much, because the VRM power circuitry couldn't handle much more than 130W in those days. Even the Pentium D needed special boards just to power up those double Prescotts (Smithfields, IIRC). And the canceled Tejas would have been a real power hog: 150W at idle at 2.80 GHz.



tkpenalty said:


> Intel's CPUs only run so "warm" because of incorrect temperature readings from programs such as core temp which always never address the issue of the tjunction temps being 15 (or 25) or so degrees off the real readings, but yeah its slightly warm, but nothing to fret over (80*C? BS, the CPU can't even run at that temperature without shutting itself down). Secondly the stock cooler is pure CRAP.




Please don't be such a cry baby and self-proclaimed rocket scientist in the same post.

Current CPUs are made on silicon with all that Ge/Hf and so on, and yes, pure silicon could easily run at a 150°C Tj. But not the processing units. Most old CPUs (P6, K7, P4 Prescott) worked fine at 95°C, and it wasn't some sweating sauna for them. It was the other elements on the mobo that suffered (like capacitors overheated by an overburdened VRM and a poorly cooled CPU).

Most modern GPUs, from the NV 7800 to the GTX 280, work great @100°C, not to mention all the new ATI RV670/RV770/RV790/RV740 chips, which in most cases, if the power circuitry on the card is properly designed, can run at 120°C through a few hours of a FurMark stress test. Yep, they'll consume a lot, but they won't fail while the power circuitry holds up.  And the best thing is that GPUs are based on the pure silicon process they call bulk, not fancy SiGe, stretched-strained silicon with high-k gates, or just the poor old SiO base.



eidairaman1 said:


> i believe AMD is reaching the Thermal Barrier of the Current Arch, sort of what happened with the Athlon XP at 3200.



OMG. Unfortunately, they don't reach anything with the current architecture. They reached their thermal envelope of 140W at ~3.6 GHz with the old F3 revision on the last Athlon FX core. Since then they've just been buying time and delaying real R&D in the desktop/workstation segment. But at least they're stamping out more and more cores with advanced processing for server parts like Istanbul. Not that that helps us at all. Instead we get the TWKR nuclear-fusion hog. Yupiiii, we really need to worship AMD's golden bull now.


----------



## Wile E (Jul 15, 2009)

Holy triple post. The edit and multi-quote buttons are your friends. Get acquainted with them.


----------



## vagxtr (Jul 15, 2009)

Wile E said:


> Holy triple post. The edit and multi-quote buttons are your friends. Get acquainted with them.



Great, I didn't notice it. That's why I multi-quoted part of them; the others I typed while reading the rest of this small thread. But thanks for the advice.


----------

