# AMD Bulldozer Eng. Sample leaked, benched



## twilyth (Jun 10, 2011)

article - http://wccftech.com/2011/06/10/amd-...benchmarked-tested-asus-sabertooth-990fx-am3/


----------



## hellrazor (Jun 10, 2011)

That board looks like a beast from hell!

I'm not familiar with any of those benching programs, so I don't have anything else to say.


----------



## Melvis (Jun 10, 2011)

I'll believe it when I see it; I'm calling fake on this, sorry. 

I can't see an 8-core CPU getting a worse score than the current X6 of today; it doesn't add up.


----------



## CrackerJack (Jun 10, 2011)

Melvis said:


> I'll believe it when I see it; I'm calling fake on this, sorry.
> 
> I can't see an 8-core CPU getting a worse score than the current X6 of today; it doesn't add up.



same here, something just isn't adding up... but who knows


----------



## twilyth (Jun 10, 2011)

I have a few Bulldozer alerts set on Google and this just happened to hit my mailbox this morning.  I'm just posting it for what it's worth, if anything.  If you think I should change the title to make it less dramatic, suggest something, and if I can still do it, I will.


----------



## Wile E (Jun 10, 2011)

If it is a real BD, it could just be a glitch with the program. Might be like the old dual cores and need a special driver to work properly in some multi-threaded apps.

I sure hope that's not the real performance, anyway. If it is, BD is a total and utter failure.


----------



## twilyth (Jun 10, 2011)

Wile E said:


> If it is a real BD, it could just be a glitch with the program. Might be like the old dual cores and need a special driver to work properly in some multi-threaded apps.
> 
> I sure hope that's not the real performance, anyway. If it is, BD is a total and utter failure.



Please be more specific, since I don't know much about benchmarks.  Thanks.


----------



## CrackerJack (Jun 10, 2011)

twilyth said:


> Please be more specific since I don't know much about benchmarks.  thanks.



Simple version: Those results were shit


----------



## twilyth (Jun 10, 2011)

CrackerJack said:


> Simple version: Those results were shit


That's not helping me or anyone else understand.


----------



## Wile E (Jun 10, 2011)

twilyth said:


> That's not helping me or anyone else understand.



Which part of my comment is unclear/needs explaining?


----------



## twilyth (Jun 10, 2011)

Wile E said:


> If it is a real BD, it could just be a glitch with the program. Might be like the old dual cores and need a special driver to work properly in some multi-threaded apps.
> 
> I sure hope that's not the real performance, anyway. If it is, BD is a total and utter failure.





Wile E said:


> Which part of my comment is unclear/needs explaining?



The response you quoted was to CJ.

Why do you think the results, if true, mean BD is a failure?  I mean, I did specifically ask about benchmarks in relation to your comment.


----------



## Wile E (Jun 10, 2011)

twilyth said:


> The response you quoted was to CJ.


Irrelevant. I just quoted the last post on the general topic. You knew what I meant, and responded accordingly. That's all that matters.



twilyth said:


> Why do you think the results, if true, mean BD is a failure?  I mean, I did specifically ask about benchmarks in relation to your comment.



You could've also been referring to my comment on possible glitches and the drivers some of the older AMD chips needed to perform properly in some programs. In case you missed it, I did make more than one point in my original comment.

If these benches are true, BD is a failure because its per-core IPC is terrible. Intel's last-generation quad cores with HT overpower it, let alone Sandy Bridge or the upcoming socket 2011 CPUs.


----------



## HalfAHertz (Jun 10, 2011)

My guess is that it flopped because of the "slow" memory they used. From what I've heard, the BD memory controller is really something, and to get the full performance you need to use 1866MHz sticks. Otherwise the memory controller quickly gets saturated with all 8 cores accessing it, and performance drops off sharply.


----------



## Wile E (Jun 10, 2011)

HalfAHertz said:


> My guess is that it flopped because of the "slow" memory they used. From what I've heard the BD memory controller is really something and to get the full performance you need to use 1866MHz sticks. Otherwise the memory controller quickly gets saturated because of all 8 cores accessing it and the performance drops exponentially.



I can see that being a bottleneck, but I don't see it being one to this degree.


----------



## twilyth (Jun 10, 2011)

Wile E said:


> Irrelevant. I just quoted the last post on the general topic. You knew what I meant, and responded accordingly. That's all that matters.
> 
> 
> 
> ...



I was pointing something out I thought you didn't realize.  Not sure why that is a problem or "irrelevant", but whatever.

Where are you seeing IPC figures?


----------



## InnocentCriminal (Jun 10, 2011)

I can't take anything from wccftech with any seriousness. 

I'll hold out for some legitimate results.


----------



## animal007uk (Jun 10, 2011)

I was reading the other day that AMD wasn't happy with the performance of BD on the early sample chips. After reading the link in the OP, my guess is that the CPU shown is one of those first engineering samples that AMD wasn't happy with.

My understanding is AMD is working on a new stepping (B2 or something) that should be in the retail CPUs.

If what I've read is true, then I'm also guessing those benchmarks shouldn't be taken seriously as true scores. We just have to wait and see.

It also mentions the new stepping in the link.


----------



## twilyth (Jun 10, 2011)

animal007uk said:


> I was reading the other day about how AMD was not happy with the performance of BD on the early sample chips, After reading that link in the OP my guess is that the CPU shown is one of the first (eng samples) that AMD was not happy with.
> 
> My understanding is AMD are working on new stepping (B2 or something) that should be in the retail CPU's.
> 
> ...


I read the same thing, and I think you got that spot on.  That's my understanding of why they pushed the launch from June to July-September.  It does make sense.

Aren't these chips the first to use high-k gates?  If so, manufacturing issues were bound to crop up.


----------



## Velvet Wafer (Jun 10, 2011)

> Temps of the chip were reported at 13.4C (Idle) and 16.2C (Load) but the user mentioned that it was a Glitch in the AMD Overdrive software as the temps reported in BIOS were hovering around 35-40C (Cool n Quiet Enabled).



So I was indeed right. The damn, worthless AOD (AMD OverDrive) is just bugged!

EDIT:
Just saw it: there are Vantage leaks out there too (also on this UK page)... these surely look better:
http://wccftech.com/2011/05/30/amd-bulldozer-fx8110-scores-81917-3dmark-vantage-cpu-test-details/
281 steps in CPU Test 2 in Vantage is murderous in my opinion... an i7 doesn't even manage 100, severely OCed.


----------



## hat (Jun 10, 2011)

I really hope those aren't true results...


----------



## Red_Machine (Jun 10, 2011)

Also, Cool n Quiet was enabled, which lowers performance.


----------



## repman244 (Jun 10, 2011)

http://www.youtube.com/watch?v=d8QRKdyBzKQ

And everyone should be aware that these chips are B0 ES chips; it seems they all have severe restrictions, hence the low scores. I mean, come on, AMD would never release a chip that is slower than Thuban and call it FX.


----------



## Velvet Wafer (Jun 10, 2011)

repman244 said:


> http://www.youtube.com/watch?v=d8QRKdyBzKQ
> 
> And everyone should be aware that these chips are B0 ES chips, it seems they all have severe restrictions hence the low scores. I mean come on, AMD would never release a chip that is slower than Thuban and call it FX.


I really liked that quote... it seems to shed at least some light on the issue, even if it could be false.



> *B0 Stepping => Capped clock speed, broken turbo, memory bandwidth restricted...
> The retail release is B2 Stepping. (B1 Stepping has speed scaling issue that needs to be corrected.)*


----------



## Fatal (Jun 10, 2011)

A 3.2GHz overclock? 1333 memory? I would have tried to blow that thing up. The scores are poor, but I will wait for a legit review. I am starting to believe AMD is screwed; it will be a miracle for them to touch Intel's Sandy Bridge performance.


----------



## Nesters (Jun 10, 2011)

twilyth said:


> Why do you think the results if true mean BD is a failure



Scoring less than a PII X6 while also costing more would be an epic failure. If BD weren't quite an improvement, the price cuts on existing processors would have been smaller.


----------



## MilkyWay (Jun 10, 2011)

There's no way they would release a processor that performs worse than their older processors.

I think I will wait for some real-world performance tests in things like games or media encoding, which could be boosted by the extra cores.


----------



## NdMk2o1o (Jun 10, 2011)

MilkyWay said:


> There's no way they would release a processor that performs worse than their older processors.
> 
> I think I will wait for some real-world performance tests in things like games or media encoding, which could be boosted by the extra cores.



+1 

Suggesting such a thing is idiotic imo


----------



## twilyth (Jun 10, 2011)

Everybody needs to at least look at the article Velvet posted here



> The System managed to obtain a score of 26676 marks (CPU – 81917 |GPU – 19718). I have to say that the CPU score shows the power of the Extra Cores. The site managed to compare the Score with an Intel based Rig (i7 2600K + GTX 580) which managed to get a score of 29125 however when we look at the details the score shows that the GPU was the only reason for the overall score being higher, The GTX 580 being a faster card than the 560 scored higher while the CPU (i7 2600K) was a little less than 20,000 marks away from the Fx-8110 performance level with a score of 64146. Here let me make it all easy for you guys to understand:
> 
> 3D Mark Vantage CPU Score:
> 
> ...


----------



## MilkyWay (Jun 10, 2011)

A 2600K is 3.4GHz stock and the Bulldozer they used was 3.8GHz; I wonder if it would make any difference if they were the same speed? Maybe not a massive difference, I'm guessing.

If the above post is anything to go by, the FX chips look decent; maybe they should be compared to socket 2011 whenever those are released.


----------



## twilyth (Jun 10, 2011)

I want to point out that BD's score was 27.5% higher than Sandy's, but its clock speed was only 11% higher.

I do the math so you don't have to.
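That math can be sketched in a few lines of Python, using the leaked Vantage CPU scores quoted above (all of these numbers come from the leak and are unverified):

```python
# Clock-for-clock check on the leaked 3DMark Vantage CPU scores.
# All figures are from the leak and should be treated as unverified.
bd_score, bd_clock = 81917, 3.8   # FX-8110 (leaked score, GHz)
sb_score, sb_clock = 64146, 3.4   # i7 2600K (stock)

score_gain = bd_score / sb_score - 1                 # raw score advantage
clock_gain = bd_clock / sb_clock - 1                 # clock-speed advantage
per_clock_gain = (bd_score / bd_clock) / (sb_score / sb_clock) - 1

print(f"score gain:     {score_gain:.1%}")      # ~27.7%
print(f"clock gain:     {clock_gain:.1%}")      # ~11.8%
print(f"per-clock gain: {per_clock_gain:.1%}")  # ~14.3%
```

Normalized per clock, the lead shrinks to roughly 14%.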


----------



## SaiZo (Jun 10, 2011)

Could the entire thing be photoshopped? I actually looked at the fifth picture (from the top), where the CPU is seated in the socket. They forgot that black thing they had in the first picture. So I opened up Photoshop and started to mess around... I can't read the exact figures, but if I manage to get them readable, should I post it here?


----------



## Wyverex (Jun 10, 2011)

If the Vantage score is anything to go by, (at least in multi-threaded applications) 8-core AMD FX is 14% faster than 4-core Sandy Bridge clock-for-clock.

Quite impressive and terrible at the same time.
Impressive due to cost and AMD finally, once again, being competitive.
Terrible due to AMD needing 8 cores to compete with a quad (true, the quad does have HT, so it "acts like it has more cores").



But I don't really believe either of the tests. I refuse to get hyped and/or trust any pre-release info.

_*Wyv impatiently waits for official release and legit reviews*_


----------



## Velvet Wafer (Jun 10, 2011)

Wyverex said:


> If the Vantage score is anything to go by, (at least in multi-threaded applications) 8-core AMD FX is 14% faster than 4-core Sandy Bridge clock-for-clock.
> 
> Quite impressive and terrible at the same time.
> Impressive due to cost and AMD finally, once again, being competitive.
> ...



To be fair, the technology implemented is similar to Hyper-Threading, in that there are not really 8 cores, but rather 4 modules with 2 "cores" each...
So I guess it's not too surprising, and also not too terrible, considering that even real 6-cores have problems competing with SB.


----------



## BarbaricSoul (Jun 10, 2011)

I won't believe any pre-release benches until they are backed up by a reputable website like TPU or HARD, or they come from AMD themselves.


----------



## H82LUZ73 (Jun 10, 2011)

I call fake for 2 reasons:

1. CPU-Z needs to be beta 1.58 for BD to be read, and look at the vids and pics of the FX logo; see the greyish shadows on it.

2. The most overlooked part of the pics and vids of this so-called BD in action: why the F is the TDP at 186W when we all know the chips will be 125W TDP, and 95W TDP for the lower A series?

Well, if you look at the ASRock Extreme 5 990FX manual, you will see a pre-release shot of the chip, and it also has a notch near the gold triangle. This so-called chip is an X6 Thuban ES chip.


----------



## Zen_ (Jun 10, 2011)

animal007uk said:


> I was reading the other day about how AMD were not happy with the performance of BD on the early sample chips, After reading that link in the OP my guess is that the CPU shown is one of the first (engineering samples) that AMD were not happy with.
> 
> My understanding is AMD are working on new stepping (B2 or something) that should be in the retail CPU's.



This is true and was already reported a while back. 

You guys can take this with a grain of salt from a random guy on the internet, but I don't think it's a big secret that earlier this year AMD summoned all their senior engineers to an emergency meeting in Austin to discuss a serious kink in the Zambezi architecture (hence the B2 stepping). For the casual observer who isn't an electrical engineer, it's almost impossible to fathom how complicated these things are.


----------



## CrackerJack (Jun 10, 2011)

twilyth said:


> That's not helping me or anyone else understand.



Sorry, I thought I was pretty clear. Wasn't trying to be rude. 

Check here, those results are from an X2 5000+ Black Edition... 

BD = 23 sec 
X2 5000+ = 29 sec

So either it's fake... or a fail... or maybe just a horrible bug...


----------



## JrRacinFan (Jun 10, 2011)

I don't call this fake; I call it more or less early-sample BS. Sure, this may be a leaked preview, but it's too early a stepping, as mentioned earlier.


----------



## twilyth (Jun 10, 2011)

CrackerJack said:


> Sorry, thought I was pretty clear. Wasn't trying to be rude
> 
> Check here, those results are from x2 5k blacky...
> 
> ...



It probably was, for someone who benches and knows what a good 1M score is.  But even here there are different levels of geekdom.  I'm sorry for being terse, but in situations like that, some of us need at least a clue.  I don't really mean that as a criticism either, just an observation.

To show the depth of my ignorance: I have this idea that Super Pi is single-threaded.  If that happens to actually be true, then there are probably a couple of possible explanations.  Maybe the program was only using one core of one module rather than the whole module, or maybe it needs to be modified given how different BD's architecture is.  IDK, I could just be pissing in the wind here.  It's just hard for me to believe it would suck so badly on Super Pi and be so much better on 3DMark Vantage.


----------



## cadaveca (Jun 10, 2011)

twilyth said:


> Maybe the program was only using one core of one module rather than the whole module, or maybe it needs to be modified given how different BD's architecture is



I hope this is the case. Otherwise...UGH.

Pricing that we "know about" now kinda has me hopeful, though.


----------



## cheesy999 (Jun 10, 2011)

cadaveca said:


> I hope this is the case. Otherwise...UGH.
> 
> Pricing that we "know about" now kinda has me hopeful, though.



If that is the case, we have more performance per core at a lower GHz, so it's less of an 'I hope this is the case' and more of an 'If it is, AMD will rule supreme'.


----------



## twilyth (Jun 10, 2011)

cheesy999 said:


> AMD will rule supreme


Mmmmm.  I like the sound of that.

Yes.  Very nice.  Very, very nice.


----------



## TheMailMan78 (Jun 10, 2011)

Waste of time until we see real benches.


----------



## cheesy999 (Jun 10, 2011)

TheMailMan78 said:


> Waste of time until we see real benches.



For once in my life I agree with MailMan; tonight we pay a visit to AMD headquarters to 'look' at the engineering samples.


----------



## Wyverex (Jun 10, 2011)

Velvet Wafer said:


> To be fair, the technology implemented is similar to Hyper-Threading, in that there are not really 8 cores, but rather 4 modules with 2 "cores" each...
> So I guess it's not too surprising, and also not too terrible, considering that even real 6-cores have problems competing with SB.


I know that, but AMD insists on calling those real cores... that's why I'm calling them cores too


----------



## cheesy999 (Jun 10, 2011)

Wyverex said:


> I know that, but AMD insists on calling those real cores... that's why I'm calling them cores too



Well, they share a lot less than a lot of people think; it's less 'fake cores' and more 'economies of scale', getting rid of things that don't need to be duplicated.


----------



## Damn_Smooth (Jun 10, 2011)

I find it a good sign that a chip that was never meant for consumer hands, a chip that has problems booting windows, can outperform Sandy in anything. 

It definitely makes waiting for real performance scores a lot easier.


----------



## seronx (Jun 11, 2011)

Just going to help you guys burn it in


1. The CPU-Z shot is fake; the CPU-Z developer hasn't even gotten an FX chip yet (and if it had been run regardless, it would say AuthenticAMD).

2. If he was using a B0 chip, it is a year old and in no way reflects the September chips.

3. AMD ES chips show up as AuthenticAMD with no model number shown, and the clock speed is also not shown in Cinebench.


--------------------
The main reason the AMD CPU's thread count is considered its core count is that it is not bottlenecked by shared resources:

| Cache | i7 2600K | Bulldozer 8-core |
| --- | --- | --- |
| L1 instruction | 4 × 32 KB | 4 × 64 KB |
| L1 data | 4 × 32 KB | 8 × 16 KB |
| L2 | 4 × 256 KB | 4 × 2 MB |
| L3 | 8 MB | 8 MB |
| Latency (L1/L2/L3, cycles) | 4 / 11 / 25 | ? / ? / 19 |

(I only know the L3 cache latency for AMD Bulldozer.)

Bulldozer also has 2 schedulers vs. the 1 scheduler in SB.


----------



## cdawall (Jun 11, 2011)

i have had major issues with prerelease chips and current software or BIOS's


----------



## cadaveca (Jun 11, 2011)

cdawall said:


> BIOS's





Best I can come up with. Unless you are talking drivers, software age doesn't matter... it either goes faster, slower, or just the same. It's not like new versions of EVERYTHING will magically appear for Bulldozer...


----------



## cdawall (Jun 11, 2011)

cadaveca said:


> Best I can come up with. Unless you are talking drivers, software age doesn't matter... it either goes faster, slower, or just the same. It's not like new versions of EVERYTHING will magically appear for Bulldozer...



Yes and no. If they are stuck in Cool'n'Quiet, that's software and BIOS not talking; that's how some of the chips react to incorrectly read P-states.


----------



## cadaveca (Jun 11, 2011)

cdawall said:


> incorrectly read p-states


----------



## twilyth (Jun 11, 2011)

Here are some more benches from Chiphell.  Problem is, if you translate the page you can't see the images.

They've added a few other things like Cinebench and Fritz Chess.

http://www.chiphell.com/thread-210890-1-1.html

Edit:  OK, I think this page has been up since the 8th.  Figured it was worth posting.

edit2:  More info - compares 2500k to BD.  Also links to benching vids.

edit3:





> Super Pi (1m):
> 
> AMD FX-8130P: 21.456 sec
> 
> ...



Something is clearly wrong here.


----------



## Wile E (Jun 11, 2011)

twilyth said:


> I was pointing something out I thought you didn't realize.  Not sure why that is a problem or "irrelevant", but whatever.
> 
> Where are you seeing IPC figures?



IPC is a derived value: Instructions Per Clock. In other words, their performance clock vs. clock, core vs. core against Intel is dismal according to those numbers. It's not a big enough improvement.



Velvet Wafer said:


> So I was indeed right. The damn, worthless AOD is just bugged!
> 
> EDIT:
> just saw it, there are vantage leaks out there too (also on this UK page).... these look better surely:
> ...


I'd bet money that's with CUDA on. If you want to use Vantage to compare CPUs, you should use CPU Test 1. It uses no acceleration. My 980X did 5250 the last time I ran it stock. They skewed the test results on that page. Pretty dirty trick.



twilyth said:


> I want to point out that the BD's score was 27.5% higher than sandy's but the clock speed was only 11% higher for BD.
> 
> I do the math so you don't have to.



See above. They aren't telling the entire story in his link. 


All that said, I think these are fake. At least I sure hope so, because the showing in the OP is very poor.


----------



## seronx (Jun 11, 2011)

twilyth said:


> Here are some more benches from Chiphell.  Problem is, if you translate the page you can't see the images.
> 
> They've added a few other things like Cinebench and Fritz Chess.
> 
> ...



The results are fake



Wile E said:


> IPC is a derived value. Instructions per Clock. In other words, their performance in clock vs clock, core vs core against Intel is dismal according to those numbers. It's not big enough of an improvement.
> 
> 
> I'd bet money that's with CUDA on. If you want to use Vantage to compare CPUs, you should use CPU Test 1. It uses no acceleration. My 980X did 5250 the last time I ran it stock. They skewed the test results on that page. Pretty dirty trick.
> ...



You are correct, these are fake (not because they are a poor showing, but mainly because it isn't showing the right codewords).

Unknown CPUs in cinebench show up as

AuthenticAMD
GenuineIntel


----------



## Heavy_MG (Jun 11, 2011)

Damn_Smooth said:


> I find it a good sign that a chip that was never meant for consumer hands, a chip that has problems booting windows, can outperform Sandy in anything.
> 
> It definitely makes waiting for real performance scores a lot easier.



Agreed.
The ES B0/B1 stepping that Chiphell tested has bad performance because the multiplier is off (so clocks are actually lower), Turbo Core is broken/disabled, and some cache was apparently disabled; there was even word that some cores were shut off to keep the TDP low. It was only designed for motherboard manufacturers to test for compatibility.
Only so much can be improved in a new revision; however, hopefully IPC improves with B2.
Also, IMO it's silly to use Super Pi, which is optimized for Intel, to compare Intel to AMD.


----------



## seronx (Jun 11, 2011)

Heavy_MG said:


> Agreed.
> The ES B0/B1 stepping that chiphell tested has bad performance because  the multiplier is off,so clocks are actually lower ,the turbo core is broken/disabled,and some cache was apparently disabled,there was even word that some cores were shut off to keep the TDP low. It was only designed for motherboard manufacturers to test for compatiblity.
> Only so much can improved in a new revision,however hopefully IPC improves with B2.
> Also IMO,it's silly to use Super Pi,which is optimized for Intel,to compare Intel to AMD.



It's fake.

Don't get your eye strings in a bundle.

You do realize CMT has 80% more throughput in Intel-optimized workloads (that figure is for servers, but the same applies to desktops; 30% more throughput is the lowest amount).

Super Pi is also single-threaded, so it doesn't use SMT or CMT.

If it were real, the most realistic result you would see is:

FX-4XXX: X < 9.345 seconds

i7 2500K: 9.345 seconds

Super Pi uses the x87 ISA (which is AMD-optimized).
(The reason AMD has gotten lackluster Super Pi scores before is that they shot themselves in the foot; I don't know how, but they purposely reduced x87 performance by half, which has been fixed in the Bulldozer architecture.)


----------



## hellrazor (Jun 11, 2011)

Anybody else notice it's running @ .93 volts?


----------



## inferKNOX (Jun 11, 2011)

Those might be true for that sample, but I in no way believe that's the same CPU AMD is going to sell us, so this thread has no point other than to show how fast a punctured bicycle can go.
I have a feeling that Zambezi has been the sort of headache for AMD that Fermi was for NVIDIA, or probably worse, since AMD has the task of catching up to the competition on top of all its stresses.


----------



## seronx (Jun 11, 2011)

inferKNOX said:


> Those might be true for that sample, but I in no way believe that that's the same CPU AMD is going to sell us, so this thread has no point other than to show how fast a punctured bicycle can go.



I wouldn't believe anything WCCF-whatever publishes.

If there is a leak, the only place I'll believe it from is Ars Technica.



inferKNOX said:


> I have a feeling that Zambezi has been the sort of headache for AMD that Fermi has been for nVidia, or probably worse since AMD has the task of catching up to the competition on top of all it's stresses.



I doubt it... it's mainly that they are regretting going fabless. This year is probably the only year (all out) they will be able to go 32nm; then next year GloFo announces 22nm 4D transistors, lol.
(GloFo expects to upgrade most of its fabs to 20/22nm by the end of 2012, and they are working with AMD on a new fab process that should rival Intel's 22nm 3D transistors.)


----------



## pantherx12 (Jun 11, 2011)

The Super Pi scores are in line with Phenom II scores.

Silly fakers.


----------



## user21 (Jun 12, 2011)

AMD showed their world's 1st slowest 8 core AMD FX processor and still cant get a better bench LOL


----------



## Damn_Smooth (Jun 12, 2011)

user21 said:


> AMD showed their world's 1st slowest 8 core AMD FX processor and still cant get a better bench LOL



AMD didn't show anything. If these are real, they are from a chip that was never supposed to be released to the public and it was used to test compatibility at best.

I'm not sure if you were attempting to troll, but if you were, that was pretty weak.


----------



## Heavy_MG (Jun 12, 2011)

user21 said:


> AMD showed their world's 1st slowest 8 core AMD FX processor and still cant get a better bench LOL



Seriously? 
Read above. 
The bench that was "leaked" is from an *engineering sample*, a chip only for testing. It has bugs, and possibly certain things missing such as L2 and L3 cache, or the clock speed is capped, so the clock speeds are actually less than what is shown. 
AMD doesn't want true benchmarks leaked before the release of FX, which may be the reason behind the broken L2 & L3 cache and capped clock speeds.
However, there were some performance issues, and AMD is currently working on a new revision.


----------



## Nesters (Jun 12, 2011)

Nice trolling over there. Still waiting for Bulldozer; I'm gonna get one. I will support AMD as long as it's reasonable to do so.


----------



## twilyth (Jun 12, 2011)

Damn_Smooth said:


> I'm not sure if you were attempting to troll, but if you were, that was pretty weak.


What he said.  We have pretty high standards around here before we grant anyone the status of "troll".  You're really going to have to put your back into it if you expect to make it as a troll round these parts.


----------



## Horrux (Jun 12, 2011)

Nesters said:


> Nice trolling over there, still waiting for Bulldozer, i'm gonna get one. I will support AMD as long as it's reasonable to do so.



Same here. I think AMD will surprise by offering a very competitive product with BD.


----------



## user21 (Jun 12, 2011)

TO ALL: JUST FOLLOW THE LINK HERE..................

http://www.youtube.com/watch?v=FQtmri400SU

and here 

http://www.youtube.com/watch?v=sVlZ6_niyoo


----------



## Damn_Smooth (Jun 12, 2011)

user21 said:


> TO ALL: JUST FOLLOW THE LINK HERE..................
> 
> http://www.youtube.com/watch?v=FQtmri400SU
> 
> ...



You do realize that they didn't run any benches at E3 right?

I don't see what you're trying to get at here.


----------



## user21 (Jun 12, 2011)

Damn_Smooth said:


> You do realize that they didn't run any benches at E3 right?
> 
> I don't see what you're trying to get at here.



And I don't see you getting the idea here! When Intel made benches from their prototypes, they showed promising results, and so did AMD in the past; now you are all talking about an AMD chip that's not finalized, so why did they do those benches anyway? Even the X6 is slower than Intel's quads; clearly I don't see a point mentioning a prototype when the real thing is there! lol, they didn't show benches because they know it's slow, hehe, at least against Intel's extreme i7... If you dig in more, you can actually consider the LGA2011 platform with quad-channel memory support, the X79 chipset, and 8-core processors; then where would this BULLDOZER be going? The X6 is slower than an Intel quad, so what can actually be expected from an 8-core one?


----------



## Dent1 (Jun 12, 2011)

What has AMD's current Phenom II X6's being slower than Intel's current Sandybridges got to do with AMD Bulldozer?

Secondly.

Why are you basing an argument on leaked benchmarks that could possibly be faked, wrong, inaccurate, doctored, or tested without a proper methodology?

Thirdly.

Why are you basing your argument on a prototype? We don't even know if it's a prototype; it could be 100% fake.


----------



## St.Alia-Of-The-Knife (Jun 12, 2011)

Why is the product number blacked out in the pics?

However, I changed the contrast and brightness on the 5th pic and noticed that there wasn't any product number, so why was it blacked out???


----------



## user21 (Jun 12, 2011)

Dent1 said:


> The education system in Pakistan obviously needs a reform.
> 
> What has AMD's current Phenom II X6's being slower than Intel's current Sandybridges got to do with AMD Bulldozer?
> 
> ...



you think e3 was a joke and it could be 100% fake i think your brain needs a reform here! if the x6 is slow the x8 might not be the best thats a forecast, if your obsessed with something slower then intel live up as you like mate

http://www.fudzilla.com/processors/item/22996-amd-officially-announces-fx-brand
DOES THIS LOOKS FAKE TO YOU ?????


and this is a bonus news
http://www.fudzilla.com/processors/item/22997-amds-zambezi-up-and-running-at-e3


----------



## TheMailMan78 (Jun 12, 2011)

user21 said:


> quotes modified post



First off fudzilla? Really?!

Second of all I have been to Pakistan pre 9/11. Not impressed.


----------



## Nesters (Jun 12, 2011)

Ok, so what's your point? 

If Bulldozer were as bad as you want it to be, there would possibly be an AMD acquisition underway, because shareholders would want to get some money out of it while they could...

... and the recent price cuts on the PII lineup would insta-kill Bulldozer.

It's true that Bulldozer is an architecture for servers, but that also means it's future-proof for desktops.
Bulldozer is going at least for entry high-end, with S2011 considered as launched for sure.
They still have Llano and Bobcat, which both seem to get positive feedback from OEMs.

AMD doesn't need a top performer to make money and increase their sales volume.


----------



## Dent1 (Jun 12, 2011)

user21 said:


> *and you think e3 was a joke *and it could be 100% fake i think your brain needs a reform here!



It's common knowledge that there were no Bulldozer benchmarks at E3.





user21 said:


> http://www.fudzilla.com/processors/item/22996-amd-officially-announces-fx-brand
> DOES THIS LOOKS FAKE TO YOU ?????
> 
> 
> ...



The first link says that AMD is bringing back the FX name. This has nothing to do with Bulldozer's performance or that potentially fake prototype review.

The second link says Bulldozer was demonstrated at E3. This has nothing to do with Bulldozer's performance either, as no benchmarks were run at E3. Again, nothing to do with the potentially fake prototype Bulldozer review.

Try again. But this time READ the links before you post such silliness.


----------



## Damn_Smooth (Jun 12, 2011)

user21 said:


> and i dont see your getting the idea here! when intel made benches from their prototype they should promising results and so did amd in the past and now you all are talking about an AMD not finalized so y they did those benches anyway ? even the x6 is slower then intel quads, clearly i dont see a point mentioning a prototype and the the real thing is there! lol they didn't showed benches cause they know its slow hehe atleast from intel's extreme i7......if you dig in more you can actually consider the lga2011 platform with quad channel memory support x79 chip and running 8core processors then where would be this BULLDOZER be going ? x6 is slower then intel quad then what can be actually expected from an 8cored one



Now that you've proven that you're incapable of reaching the level of troll, you will forever be known as a lawn gnome.

Let me explain this to you. *IF* those benches are real, AMD did not approve of them being released. They would be from a chip that was never supposed to reach consumer hands. That chip would have been sent out to motherboard makers to test for compatibility issues at best and it was definitely not the same chip that they were using to demo at E3.



user21 said:


> well come visit Pakistan if you got a problem, you would'nt dare so  and you think e3 was a joke and it could be 100% fake i think your brain needs a reform here! if the x6 is slow the x8 might not be the best thats a forecast, if your obsessed with something slower then intel live up as you like mate
> 
> http://www.fudzilla.com/processors/item/22996-amd-officially-announces-fx-brand
> DOES THIS LOOKS FAKE TO YOU ?????
> ...



So you're an internet tough guy who links to articles that in no way back up any of the claims you're trying to make. Seriously man, you are making this too easy.

The truth of the matter is that nobody knows how Bulldozer will perform, and these supposed leaked benches of a gimped chip do not change that fact one bit.


----------



## erocker (Jun 12, 2011)

Unless you have a chip in your hand or on your motherboard you're all wrong. 

Seriously, who's more mature, the guy making the claim or the guy pointing fingers saying "No, no, no!"? Both things are fun to do but pointless in the end.  I'm going to go give the finger to a blind man and yell at a deaf man now. Y'all behave.


----------



## cadaveca (Jun 12, 2011)

cdawall said:


> oh god STFU you have no idea what you are talking about



Neither do you, really, because if you had a sample, you'd not be able to say as much as you have.


I recall the 2900XT... Phenom I, Phenom II, 5-series, and 6-series... basically everything that has launched since AMD bought ATI has had early benches leaked "from China" that were disappointing, and every time they basically turned out to be true.

Now, the thing is, the memory performance shown has been abysmal. This is specifically the area of performance I will be judging Bulldozer on. IPC, etc., doesn't matter to me, because it's already good enough with current cores. I don't need any increase here; the same as Deneb or Thuban will do.

But I don't see any of that in the benches. I see apps that have specific extensions being highly accelerated, and the rest...well...I dunno.

Maybe it's possible that AMD sent out cores with an older IMC, due to consumer demand for backward compatibility with sockets, but I also directly asked JF-AMD if a new socket was required to get full performance outta Bulldozer.

He said yes.

I don't see many people with new 9-series boards. In fact, I think I have more boards than anyone else that doesn't work @ an OEM.

But I have no chips.

Based on board design...am I hopeful? Yes.

But do I really expect Bulldozer to be any faster than these benches? NOPE. History says I shouldn't.


I just have to hope that AMD will send me a sample. If not, my job here requires that I buy one...ASAP. So I don't care which way it goes...I just hope I'm not paying more, for less, with Bulldozer, like I did with this 1100T I'm using for reviews now.


----------



## Thatguy (Jun 12, 2011)

cadaveca said:


> Neither do you, really, because if you had a sample, you'd not be able to say as much as you have.
> 
> 
> I recall the 2900XT...PhenomI, PhenomII, 5-series, and 6-series...basically everything that has launched since AMD bought ATI, has had early benches leaked "from china" that were disappointing, and every time, they basically turned out to be true.
> ...




  JF-AMD has repeatedly said memory controller throughput was to be nearly double Thuban/Phenom etc. 

  Also, so has AMD. 

  I think they are enjoying all the free advertising and that is why they haven't leaked shit. Look at all the high-end badass motherboards; if they don't have a killer product, that makes no sense.


----------



## cadaveca (Jun 12, 2011)

Thatguy said:


> Look at all the high-end badass motherboards; if they don't have a killer product, that makes no sense.



Sure it does. Motherboards cater to a specific market segment. Being the motherboard reviewer means I kinda understand this more than anyone, as it is my job here. LuLz.

They want to sell boards to hardcore overclockers, gamers, and such users, so they have products that have all the features that crowd looks for. CPU performance has nothing to do with it. AMD and Intel both say max memory for the current CPU platforms is DDR3-1333. Yet board makers hype DDR3-2133+ support on their boards.


----------



## Damn_Smooth (Jun 12, 2011)

cadaveca said:


> Neither do you, really, because if you had a sample, you'd not be able to say as much as you have.
> 
> 
> I recall the 2900XT...PhenomI, PhenomII, 5-series, and 6-series...basically everything that has launched since AMD bought ATI, has had early benches leaked "from china" that were disappointing, and every time, they basically turned out to be true.
> ...



Ok, I see your point, but you are neglecting 2 issues here.

1st: We have seen supposed leaks from China that show Bulldozer being clearly better than Sandy Bridge, so why should these be any more credible?

2nd: The motherboard manufacturers didn't roll out their top of the line product for the Phenom II launch. They were still clearly below Intel's boards.

Now I'm not claiming that Bulldozer is going to be the greatest thing ever, but I'd be willing to bet that the finished product will produce better results than these "leaks".


----------



## cdawall (Jun 12, 2011)

cadaveca said:


> Neither do you, really, because if you had a sample, you'd not be able to say as much as you have.



all i have said is there are P-state issues with BIOSes, the same issues that have been seen with several other releases 

what i do and don't have is none of your business, to be quite honest. no one said i had an NDA chip.


----------



## cadaveca (Jun 12, 2011)

cdawall said:


> all i have said is there are P-state issues with BIOSes, the same issues that have been seen with several other releases
> 
> what i do and don't have is none of your business, to be quite honest. no one said i had an NDA chip.



LuLz. Drop the attitude, man, it serves no purpose. Your posts in this thread allude to you knowing more. 



> P-state issues with BIOSes, the same issues that have been seen with several other releases



You can neither confirm nor deny any such issues. I can say, however, having boards in hand, that there aren't any issues with current chips, while potentially there should be, given the new P-states that Bulldozer requires.


----------



## cdawall (Jun 12, 2011)

cadaveca said:


> LuLz. Drop the attitude, man, it serves no purpose.
> 
> 
> 
> You cannot confirm nor deny any such issues.



never said this one had P-state issues, i said it was possible


----------



## cadaveca (Jun 12, 2011)

cdawall said:


> never said this one had P-state issues, i said it was possible



So? Big deal. It's also possible that there are no such issues. Posting info like that without a source is exactly what erocker was talking about. We can go back and forth all day long, but it serves no purpose: there are no official numbers, and anyone with a chip cannot speak about it.

Yes, 9-series chipsets have a different VRM design, partially to support new P-states, that 8-series boards will not support.


So why bother even guessing?


----------



## cdawall (Jun 12, 2011)

cadaveca said:


> So? Big deal. It's also possible that there are no such issues. Posting info like that without a source is exactly what erocker was talking about. We can go back and forth all day long, but it serves no purpose: there are no official numbers, and anyone with a chip cannot speak about it.
> 
> Yes, 9-series chipsets have a different VRM design, partially to support new P-states, that 8-series boards will not support.
> 
> ...



not entirely true, there are people who have them and chose not to speak. not every chip is NDA.


----------



## cadaveca (Jun 12, 2011)

So? How does that help here?

Even if you personally had a chip, and no NDA, posting stuff like that, without something to back it up, serves no purpose.

Which is my point: rumour discussion is kinda pointless. Let people think whatever they want, and official news will, in the end, confirm or deny anything in this thread.

You cannot stand up and say "this is not true, but this is!", and neither can I.


----------



## Heavy_MG (Jun 12, 2011)

user21 said:


> well come visit Pakistan if you got a problem, you would'nt dare so  and you think e3 was a joke and it could be 100% fake i think your brain needs a reform here! if the x6 is slow the x8 might not be the best thats a forecast, if your obsessed with something slower then intel live up as you like mate
> 
> http://www.fudzilla.com/processors/item/22996-amd-officially-announces-fx-brand
> DOES THIS LOOKS FAKE TO YOU ?????
> ...


You realize you're bashing on something that is just a testing processor, or even fake?
Fudzilla, hahaha! 
If you're saying the X6 is so slow, then what are you running?
I don't care what your silly useless Intel-biased benchmarks say, the Phenom II X6 _isn't slow_.


Damn_Smooth said:


> Let me explain this to you. *IF* those benches are real, AMD did not approve of them being released. They would be from a chip that was never supposed to reach consumer hands. That chip would have been sent out to motherboard makers to test for compatibility issues at best and it was definitely not the same chip that they were using to demo at E3.
> 
> The truth of the matter is that nobody knows how Bulldozer will perform, and these supposed leaked benches of a gimped chip do not change that fact one bit.


 Thanks for pointing this out again. There's always someone who would rather ignore the facts and instead hate on it.


----------



## user21 (Jun 13, 2011)

Nesters said:


> Ok, so what's your point?
> 
> If Bulldozer were as bad as you want it to be, there would probably be an AMD acquisition underway, because shareholders would want to get some money out of it while they can...
> 
> ...



yes agreed here bulldozer might be an entry to high-end


----------



## Heavy_MG (Jun 13, 2011)

user21 said:


> all i see here is bunch of amd fans trying to deny the fact, like i said if x6 if slow then y not the x8 might be slower as past made a forecast ?



AMD fans? What? I don't even..... :shadedshu
The benchmark on this BD processor isn't any good because the chip used is an *ENGINEERING SAMPLE*. It is a first revision (B0 or B1) with bugs and many things locked off, so real benchmarks won't be out before the final product release. How many times do I have to repeat myself? 
Further revisions to the processor mean a better chip with better results in testing.
Yet you still don't get it with your Intel fanboyish bashing and trolling. 
You cannot use the X6 to make an assumption about the FX 8-core. This is a whole new architecture. _The Thuban X6's are by no means slow, either._


user21 said:


> LOL dont want to show off but i made a system everytime there was something new
> 
> intel 915 with 3.8 ht
> asus p5nd2-sli deluxe with 3.73 pentium Dx
> ...


Ah, a diehard Intel fan, I see. The AMD Athlon 64 and Athlon 64 X2 were much better than the Pentium 4 & Pentium D, as was Phenom for bang for the buck vs. Core 2 Duo.
Overclock your i5 2400... oh wait, you can only raise the multi to turbo speeds.
Lol, that crappy Intel-biased benchmark site?


Damn_Smooth said:


> Judging by your grammar and lack of ability to make any point, I don't think you should run around calling anybody "IDIOTS".
> 
> The true fact of the matter is that if you think that AMD is actually going to spend all of this time and effort to release something that is slower than their current gen chips, you should probably get your head examined.
> 
> Once again, your trolling attempts are weak.


QFT. Why would AMD waste their time on a CPU that was worse than Phenom II? If that were so, they would not even make announcements at E3 about the FX processor.
Haha, I thought trolling was bad on some other forums, but this stuff is indeed just weak.


----------



## Heavy_MG (Jun 13, 2011)

user21 said:


> lol a die hardfan ??????????????????? haahahahahahahahaahaahahaaaha i respect what has the potential, kept amd's when they use to kill the blues!
> overclock my i5-2400 oh wait you forgot that it can be overclocked upto some limitation yet then it beat PHENOM X6 hahahahaha
> see for yourself;
> http://www.bit-tech.net/hardware/cpus/2011/01/03/intel-sandy-bridge-review/14



The only part I can make any sense of is you saying that a 2nd gen Core i5 is faster than a Phenom II. That's obvious, you fanboy; newer tech is always faster. However, a Phenom II X6 is still better in some things, as is a 1st gen Core i7.
Did you know a Phenom II at its core is based on the Athlon 64? Yet it still keeps up quite well with a Core i7.


----------



## user21 (Jun 13, 2011)

Heavy_MG said:


> The only part I can make any sense of is you saying that a 2nd gen Core i5 is faster than a Phenom II. That's obvious, you fanboy; however, a Phenom II X6 is still better in some things, as is a 1st gen Core i7.



dude if i were a fan boy i had bought only extremes and 1st generation as well, i used amd when they had the potention over regular pentium 4 no the corners have changed if amd would rise up i'll buy one! like i said i respect what has the potential but if not      "DUST"


----------



## Mussels (Jun 13, 2011)

thread cleaned up. play nice or i'll take your toys away.


----------



## user21 (Jun 13, 2011)

Mussels said:


> thread cleaned up. play nice or i'll take your toys away.


----------



## bucketface (Jun 13, 2011)

why are you all still debating this??... 
Let's just get this straight. The slides may or may not be indicative of the performance of an engineering sample from the Bulldozer line. Can we all agree on that? 
i suspect that we are yet to see true performance figures. 
eagerly awaiting benchies.


----------



## Fatal (Jun 13, 2011)

So, is this fake? Anyone know? 
http://wccftech.com/wp-content/uploads/2011/05/amdbulldozer-1.jpg
If not, that's not bad at all.


----------



## Damn_Smooth (Jun 13, 2011)

Fatal said:


> So, is this fake? Anyone know?
> http://wccftech.com/wp-content/uploads/2011/05/amdbulldozer-1.jpg
> If not, that's not bad at all.



Yup, that one's fake. Gigabyte said so themselves.


----------



## Fatal (Jun 13, 2011)

Alright thanks just saw that looking through the links.


----------



## Thatguy (Jun 13, 2011)

cadaveca said:


> Sure it does. Motherboards cater to a specific market segment. Being the motherboard reviewer means I kinda understand this more than anyone, as it is my job here. LuLz.
> 
> They want to sell boards to hardcore overclockers, gamers, and such users, so they have products that have all the features that crowd looks for. CPU performance has nothing to do with it. AMD and Intel both say max memory for the current CPU platforms is DDR3-1333. Yet board makers hype DDR3-2133+ support on their boards.



No, it really doesn't make sense to make a badass board for an overclocker to overclock if the CPU is a pile of shit. What's the point in overclocking a turd? It's like hotrodding a Honda when you could start with a Corvette instead.


----------



## cadaveca (Jun 13, 2011)

You think X6 CPUs are good?

Read my sig.


----------



## Thatguy (Jun 13, 2011)

cadaveca said:


> You think X6 CPUs are good?
> 
> Read my sig.



Works fine for me on my Pro Tools rigs, as good as my friend's Sandy Bridge. Do you believe everything you say? My results don't add up to your results. Either I have a better optimized system, or your benches favor Intel products. All I know is I can run 100 tracks with effects on each track just like my buddy's brand new Sandy Bridge machine, and we both get the same performance. 

So, to me DSP loads are kind of where the rubber meets the road. I also use a multithreaded DAW and I have run rendering benchmarks. While the SB does outrun the Thuban, it's not as dramatic as one would expect. A good 5.1 surround mix with a lot of effects and a lot of instruments, tracks, etc. can take 30-45 minutes to render to WAV. My Thuban can build in 37 min, my buddy's SB in about 35, using the same project files, settings, plugins, etc. To me that's a pretty moot point.


----------



## HTC (Jun 13, 2011)

JF-AMD posted this @ XS:



			
JF-AMD said:
			
		

> Based on this conversation I have a feeling nobody here is in the semiconductor business, so let me try to explain it to you.
> 
> Engineering samples are designed to validate the design and for partners to validate their systems.  They are not meant for benchmarking.
> 
> ...


----------



## Dent1 (Jun 13, 2011)

Thatguy said:


> No, it really doesn't make sense to make a badass board for an overclocker to overclock if the CPU is a pile of shit. What's the point in overclocking a turd? It's like hotrodding a Honda when you could start with a Corvette instead.



Marketing.

Motherboard manufacturers need an entry-level, a mainstream, and an enthusiast line-up. They couldn't care less about how the CPU performs or how well it overclocks. If the consumer wants to buy a monster high-end, expensive board, they'll give it to them, because the market requests it and hence money is to be made.


----------



## TheMailMan78 (Jun 13, 2011)

cadaveca said:


> You think X6 CPUs are good?
> 
> Read my sig.



There is nothing wrong with a 6-core.


----------



## cadaveca (Jun 13, 2011)

Thatguy said:


> Works fine for me on my Pro Tools rigs, as good as my friend's Sandy Bridge. Do you believe everything you say? My results don't add up to your results. Either I have a better optimized system, or your benches favor Intel products. All I know is I can run 100 tracks with effects on each track just like my buddy's brand new Sandy Bridge machine, and we both get the same performance.
> 
> So, to me DSP loads are kind of where the rubber meets the road. I also use a multithreaded DAW and I have run rendering benchmarks. While the SB does outrun the Thuban, it's not as dramatic as one would expect. A good 5.1 surround mix with a lot of effects and a lot of instruments, tracks, etc. can take 30-45 minutes to render to WAV. My Thuban can build in 37 min, my buddy's SB in about 35, using the same project files, settings, plugins, etc. To me that's a pretty moot point.



It's not just about performance...to me, it's all about performance per watt.


Intel TDP is maximum load, while AMD TDP is "typical" load.

So, for example, just at stock, my 1100T pulls 150W. That's a fair bit more than the 125W my CPU is rated for.

My 2600K pulls 65W. So running 2x 2600K uses about the same power from the wall as one X6. So yeah, I do not think X6 chips are that good, because at stock, Intel chips are a bit faster, so really, I get more than 2x the performance boost via Intel.

Of course, I'm comparing 45nm AMD vs. 32nm Intel. So no, it's not exactly fair, but it is what it is. AMD's 32nm chips are expected to be 125W CPUs (again, typical load, not maximum), while Intel's are 95W.


To be honest, I don't really like Intel. I was NOT going to buy into the P67 platform, but had to buy in to be able to do my job here. And I'm glad I did.

But, at the same time, I'm an ATI/AMD fanboy, so I am very much interested in Bulldozer...

So consider not only raw CPU grunt... consider how much power that grunt requires, too. For me, it's not JUST performance.
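If anyone wants to sanity-check that "more than 2x" claim, the arithmetic is simple enough to sketch. The wattages below are the wall-draw numbers I quoted above; the 10% stock performance edge for SB is an illustrative placeholder, not a benchmark result:

```python
# Rough perf-per-watt comparison using the wall-draw figures quoted above
# (1100T ~150 W, 2600K ~65 W at stock). The relative performance number
# for Sandy Bridge is an illustrative assumption, not a measured result.
def perf_per_watt(relative_perf, watts):
    return relative_perf / watts

x6 = perf_per_watt(1.0, 150)   # Phenom II X6 1100T as the baseline
sb = perf_per_watt(1.1, 65)    # assume SB ~10% faster at stock

print(f"X6 perf/W: {x6:.4f}")
print(f"SB perf/W: {sb:.4f}")
print(f"SB advantage: {sb / x6:.2f}x")  # ~2.54x with these assumptions
```

Even if the performance assumption is off by a fair margin, the wattage gap alone keeps the ratio above 2x.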



TheMailMan78 said:


> There is nothing wrong with a 6-core.



See the above.


----------



## TheMailMan78 (Jun 13, 2011)

cadaveca said:


> It's not just about performance...to me, it's all about performance per watt.
> 
> 
> Intel TDP is maximum load, while AMD TDP is "typical" load.
> ...



You're just a performance per watt homo.


----------



## cadaveca (Jun 13, 2011)

TheMailMan78 said:


> You're just a performance per watt homo.





Well you know, other than memory performance, that's exactly what I will be looking at for Bulldozer.


Memory is #1. If that is taken care of, then I'll look at how much power that performance requires. It doesn't have to BEAT Intel, but it better damn well be close.


If Bulldozer is faster, then I'll take that into consideration too.


----------



## TheMailMan78 (Jun 13, 2011)

cadaveca said:


> Well you know, other than memory performance, that's exactly what I will be looking at for Bulldozer.
> 
> 
> Memory is #1. If that is taken care of, then I'll look at how much power that performance requires. It doesn't have to BEAT Intel, but it better damn well be close.
> ...



I agree. I'm stoked about the new IMC.


----------



## cadaveca (Jun 13, 2011)

TheMailMan78 said:


> I agree. I'm stoked about the new IMC.



Yeah, but you are one of the few people that heard me bashing P67 in TS before it came out, saying there was no way I was going that route, so you KNOW I'm not an Intel fanboy, and I was forced to accept that P67 IS good.


As far as I am concerned, the leaked benches, plus the info from JF-AMD, say that the IMC and cache speed are exactly what is lacking in the ES chips. Lower RAM and cache speed greatly reduces overall CPU heat, and they did very much say that current ES chips weren't binned for anything other than pure functionality, not clockspeed.

So there are basic working parts in those chips, but no more. I think maybe TurboCore isn't working, and only one core actually goes over like 1200 MHz.


----------



## Heavy_MG (Jun 13, 2011)

TheMailMan78 said:


> There is nothing wrong with a 6-core.


Exactly.


cadaveca said:


> It's not just about performance...to me, it's all about performance per watt.
> 
> 
> Intel TDP is maximum load, while AMD TDP is "typical" load.
> ...


I fail to see how both TDPs wouldn't be "maximum load".
Do you have proof that a 1100T actually pulls 150 watts?
Tools such as CPU-Z and HWMonitor aren't accurate for measuring actual power draw.
If it actually used more than the rated 125W, many would have fried VRMs, because most motherboards are only rated to 140 watts; overclocking would easily push it upwards of 190W. Unless you're talking about performance per watt, you are not getting a "2x performance boost" with a 2600K. Intel CPUs have usually had better performance per watt, except against the Athlon 64 series.
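For what it's worth, that ~190W overclocking figure is consistent with the usual first-order rule that dynamic CPU power scales with frequency and with voltage squared. A quick sketch, with illustrative stock/overclock numbers (not measurements of any particular chip):

```python
# First-order dynamic power scaling: P_new ~ P_old * (f_new/f_old) * (V_new/V_old)^2
# All inputs below are illustrative assumptions, not measured values.
def scaled_power(p_stock, f_stock, f_oc, v_stock, v_oc):
    return p_stock * (f_oc / f_stock) * (v_oc / v_stock) ** 2

# e.g. a 125 W chip taken from 3.3 GHz @ 1.30 V to 3.8 GHz @ 1.45 V
p = scaled_power(p_stock=125, f_stock=3.3, f_oc=3.8, v_stock=1.30, v_oc=1.45)
print(f"Estimated overclocked draw: {p:.0f} W")  # ~179 W with these inputs
```

Push the voltage a little harder and you land right around the 190W ballpark, which is why board VRM ratings matter for overclocking.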


----------



## cadaveca (Jun 13, 2011)

Heavy_MG said:


> Exactly.
> I fail to see how both TDP wouldn't be "Maximum load".
> Do you have proof that a 1100T actually pulls 150 watts?
> Tools such as CPU-Z and HW monitor aren't accurate for measuring actual TDP.
> If it actually used more than the rated 125W.,many would have fried VRM's.



Yes, I can fire up a 9-series based rig and show you numbers on a meter on the 8-pin CPU power line. I use this testing for my motherboard reviews, to check VRM efficiency from one board to the next.


I *DO NOT use software* to measure power consumption.

You'll also note, if you check my ASUS M5A97 EVO review, that I managed to pull nearly 300W through the $100 board's VRM.

If you want more info on TDP, and the differences in how AMD and Intel rate it, this topic has been discussed widely over the years, so I'll let you research that yourself.

Start @ Nigel Dessau's blog @ AMD.



> It is not the first time AMD has tried to convince the world its ACP measurement (or 'fake-a-watt' as we here at the INQ fondly call it) is the way to go, but after reading Nigel's blog, we decided the discussion needed some INQput.






> Several processor architectures ago, AMD and rival Intel used the same methods for calculating Thermal Design Power with regard to microprocessors.   From an engineering standpoint, the TDP represents the amount of power the cooling mechanism for the CPU must dissipate before failure.
> 
> AMD and Intel now differ with TDP calculations, and for different reasons.  Intel's current architecture, for example, allows the CPU to exceed the TDP rating for a small period of time before the processor throttles its frequency clock in order to reduce the temperature at the processor level.  AMD's current-generation processors do not practice this method, *and thus AMD intentionally publishes conservative TDP ratings*.



Here's an Intel whitepaper on how their measurements differ from AMD's (dated April 2011):

http://www.intel.com/performance/resources/briefs/tdpvacp.pdf


AMD says:



> TDP is not the maximum power of the processor.



http://support.amd.com/us/Processor_TechDocs/43374.pdf

(Page 80, Item #7)


It's just not that often that someone points this out.
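To make the difference concrete: under Intel's definition, sustained load is meant to stay at or below the rating, while under AMD's "typical" definition a heavy workload can legitimately exceed it. A toy check using the wall measurements I quoted earlier in the thread:

```python
# Toy illustration of the TDP-interpretation gap discussed above.
# Measured draws are the wall numbers quoted earlier in the thread;
# how each vendor defines its rating is covered in the linked documents.
chips = [
    # (name, rated TDP in W, measured full-load wall draw in W)
    ("Phenom II X6 1100T", 125, 150),
    ("Core i7-2600K",       95,  65),
]

for name, tdp, measured in chips:
    delta = measured - tdp
    if delta > 0:
        print(f"{name}: rated {tdp} W, drew {measured} W -> {delta} W OVER the rating")
    else:
        print(f"{name}: rated {tdp} W, drew {measured} W -> {-delta} W under the rating")
```

Neither chip is "lying" here; the two ratings just aren't measuring the same thing, which is the whole point of the whitepapers above.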


----------



## Samdbugman (Jun 13, 2011)

I hope that the new Bulldozer totally spanks the i7; hell, I hope it spanks the i9. That means Intel has to lower prices, AMD has to keep them down, Intel has to beat AMD next year, and AMD has to beat Intel next year. It means keeping CPUs in a price range I can afford. I am not a fanboy, I have just bought builds over the years that have favored AMD over Intel for bang for the buck.


----------



## trickson (Jun 13, 2011)

Now I know this is just BS! I ran the Fritz Chess benchmark and my score is higher than the score of an 8-core BD! Something is just not right here! It would also seem that my Q9650 is killing it in Cinebench as well, and in Super Pi.


----------



## Thatguy (Jun 13, 2011)

cadaveca said:


> Well you know, other than memory performance, that's exactly what I will be looking at for Bulldozer.
> 
> 
> Memory is #1. If that is taken care of, then I'll look at how much power that performance requires. It doesn't have to BEAT Intel, but it better damn well be close.
> ...




   Maybe you give a crap about perf/watt. I really don't. By design Bulldozer should have pretty good perf/watt, as that's one of the reasons to have more cores vs. hyperthreading. Plus it should have better overall thermal spread as well.

   But if the difference is, say, 10 watts with equal or better performance, I really don't give a shit. My flashlight charger pulls more power.

   If you want a real perf/watt comparison, add process node sizing into the mix and you'll see AMD does pretty well here.


----------



## cadaveca (Jun 13, 2011)

And sure, that's a legit perspective, 100%. I mean, I did already mention that each product line is on a different node. It's NOT fair, but it's what's on the market.

However, for myself, I'm left really comparing these things, considering all aspects, as that's my job as a reviewer. It might not be important to you, but it may be important to some of my other readers, so it's something I HAVE to look at.

I'm trying really hard to keep my opinion out of this and stick to the facts. And the facts say that AMD CPUs regularly consume more power than TDP, and give less performance per watt compared to other products available for purchase today. Of course, when Bulldozer launches, this can change, but I do NOT expect AMD's TDP methodology to change... a 125W CPU will, under some workloads, consume more power than TDP... that's just how AMD rates their CPUs.


----------



## Thatguy (Jun 13, 2011)

cadaveca said:


> And sure, that's a legit perspective, 100%. I mean, I did already mention that each product line is on a different node. It's NOT fair, but it's what's on the market.
> 
> However, for myself, I'm left really comparing these things, considering all aspects, as that's my job as a reviewer. It might not be important to you, but it may be important to some of my other readers, so it's something I HAVE to look at.
> 
> I'm trying really hard to keep my opinion out of this and stick to the facts. And the facts say that AMD CPUs regularly consume more power than TDP, and give less performance per watt compared to other products available for purchase today.



   Flattening the node size and looking at competing Intel products of equal performance really does give a fairer comparison. So what if they pull a bit more? Who cares; it's really only a factor with an overclock anyway. Also, people should look at TDP and power consumption the right way: Intel is using a fixed unit, AMD is using an EPA-mileage-style estimate based on use.


----------



## Heavy_MG (Jun 13, 2011)

Thatguy said:


> Flattening the node size and looking at competing Intel products of equal performance really does give a fairer comparison. So what if they pull a bit more? Who cares; it's really only a factor with an overclock anyway. Also, people should look at TDP and power consumption the right way: Intel is using a fixed unit, AMD is using an EPA-mileage-style estimate based on use.



True. For the node size AMD is using vs. what Intel is using, it is not that bad. You cannot ignore the difference in design; the differences mean different TDPs. *Both architectures are completely different anyway*; it's like trying to compare a Nissan to a Toyota. AMD's TDP rating based on real use makes more sense; 90% of users most likely don't go over the rated TDP.


cadaveca said:


> Yeah, and you know, I have compared power consumption between 45nm products. Intel still wins there, even when overclocked. Intel has higher clocking overhead due to their lower power consumption.
> 
> 
> And that's why, IMHO, in the extreme scene, we see very few people clocking AMD chips... board limitations brought on by CPU power consumption mean that the gains just aren't worth the effort.
> ...


Obviously.
Plenty of people overclock AMD chips; you just don't go and buy a $50 board and expect any overclock from it. I have a $120 4+1 phase board with a 3.8GHz overclock; there is no way it could handle a 300W load.
ATI still exists; you must not have heard about the name change. ATI has continued to exist even after AMD bought them out; the only difference is the name on the card. Judging by your claim that the X6's are awful, and your failure to recognize the differences in architecture, you are Intel-biased and try to sway your readers in that direction as well.


----------



## cadaveca (Jun 13, 2011)

Yeah, and you know, I have compared power consumption between 45nm products. Intel still wins there, even when overclocked. Intel has higher clocking overhead due to their lower power consumption. My i7 870 draws 145W @ 4 GHz; my 1100T draws a bit over 300W.


And that's why, IMHO, in the extreme scene, we see very few people clocking AMD chips... board limitations brought on by CPU power consumption mean that the gains just aren't worth the effort.

Like, don't get me wrong, I used to call myself ATI's #1 fanboy. ATI doesn't exist any more, so now my main concerns are power consumption and best performance. Differences in design don't matter...it's a set workload that draws variable power based on platform, and completes in a variable time, based on performance.

You may not like that comparison, but it's still definitely valid, especially when it comes to the office environment.


----------



## erocker (Jun 13, 2011)

We're talking about the "extreme scene?" TDP doesn't factor into my decision at all. I doubt it factors much into anyone's decision. SB is the more powerful (performance) chip though; that's what people look at. As for the office environment, it doesn't matter. People will buy in lots at a good price; performance plays less of a factor.


----------



## trickson (Jun 13, 2011)

erocker said:


> We're talking about the "extreme scene?" TDP doesn't factor into my decision at all. I doubt it factors much into anyone's decision. SB is the more powerful (performance) chip though; that's what people look at. As for the office environment, it doesn't matter. People will buy in lots at a good price; performance plays less of a factor.



I would agree. And by the looks of the leaked BM it is an underperforming CPU at best. But I must say that the leaked BMs are BS. Price is a big factor, then performance; power consumption is last. I do not see people or businesses buying CPUs that are all about power consumption.


----------



## TheMailMan78 (Jun 13, 2011)

erocker said:


> We're talking about the "extreme scene?" TDP doesn't factor into my decision at all. I doubt it factors much into anyone's decision. SB is the more powerful (performance) chip though; that's what people look at. As for the office environment, it doesn't matter. People will buy in lots at a good price; performance plays less of a factor.



This is true. Most offices would be overpowered with an Athlon II. E-mail and some Flash websites don't need a Phenom II. Never mind a Sandy.

Raw power=Intel
Budget power=AMD

I mean, it's pretty simple.

Anyway, we all sound like some old crows bickering about a CPU that's not even out yet. Let's just wait for some benches, mkay?


----------



## erocker (Jun 13, 2011)

TheMailMan78 said:


> This is true. Most offices would be overpowered with an Athlon II. E-mail and some Flash websites don't need a Phenom II. Never mind a Sandy.
> 
> Raw power=Intel
> Budget power=AMD
> ...



A friend of mine who leads the IT department at a very large national law firm just bought 400 computers, all with AMD 1045 x6's in them.


----------



## TheMailMan78 (Jun 13, 2011)

erocker said:


> A friend of mine who leads the IT department at a very large national law firm just bought 400 computers, all with AMD 1045 x6's in them.



lol overkill much?


----------



## trickson (Jun 13, 2011)

TheMailMan78 said:


> lol overkill much?



LOL, well, it is a law firm after all.


----------



## TheMailMan78 (Jun 13, 2011)

trickson said:


> LOL well it is a law firm after all .



6 raw cores to F#@K you out of your money.


----------



## trickson (Jun 13, 2011)

TheMailMan78 said:


> 6 raw cores to F#@K you out of your money.



And seeing as they are AMDs, that is one more thing that will F#$k you in the end!


----------



## TheMailMan78 (Jun 13, 2011)

trickson said:


> And seeing as they are AMDs, that is one more thing that will F#$k you in the end!



I'm not even sure what that means.


----------



## trickson (Jun 13, 2011)

TheMailMan78 said:


> I'm not even sure what that means.


----------



## Heavy_MG (Jun 13, 2011)

trickson said:


> And seeing as they are AMDs, that is one more thing that will F#$k you in the end!


What is this I don't even......


----------



## cadaveca (Jun 13, 2011)

trickson said:


> I do not see people or businesses buying CPUs that are all about power consumption.



Then why is AMD hyping performance per watt?





It's also a sign of how mismanaged some companies are. Power consumption is a factor when designing an office power grid. 400 machines drawing 350 watts equals 140,000 watts of power that the office needs. If a PC draws half as much power, that's 70,000 watts saved. That's potentially $70/hour saved, or just over $50,000 a month. How many of you make that a year?
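The arithmetic above can be sanity-checked with a short script. Note the dollar figures depend entirely on the electricity rate; the ~$0.12/kWh commercial average used below is an assumption for illustration, not a number from the post:

```python
# Sanity check of the office power-saving arithmetic: 400 PCs at 350 W each,
# and what halving that draw saves at an assumed commercial electricity rate.
machines = 400
watts_per_machine = 350.0        # draw per PC, from the post
rate_per_kwh = 0.12              # assumed commercial rate in $/kWh

total_kw = machines * watts_per_machine / 1000.0   # 140.0 kW for the whole office
saved_kw = total_kw / 2.0                          # halving the draw saves 70.0 kW
saved_per_hour = saved_kw * rate_per_kwh           # dollars saved per hour
saved_per_month = saved_per_hour * 24 * 30         # dollars saved per month, 24/7

print(total_kw, saved_kw, round(saved_per_hour, 2), round(saved_per_month))
```

At that assumed rate the saving works out closer to $8/hour (about $6,000/month) than $70/hour; the larger figure would require electricity near $1/kWh, so the exact payoff scales directly with what the office actually pays.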


----------



## TheMailMan78 (Jun 13, 2011)

cadaveca said:


> Then why is AMD hyping performance per watt?
> 
> 
> 
> ...



It makes sense.....however NO ONE takes that kind of stuff into account. I've worked with a few IT departments and all they care about is reading HR's emails.


----------



## cadaveca (Jun 13, 2011)

Maybe part of the reason it's not considered is that no one talks about it. I know both AMD and Intel hype performance/watt, specifically to their office-based users. Mind you, I pay attention to ALL of their marketing, while many do not.


----------



## erocker (Jun 13, 2011)

cadaveca said:


> Then why is AMD hyping performance per watt?



Why do marketers hype anything? To try to sell it. It still makes little difference. Price is the only real deciding factor, especially for office-based users.


----------



## Thatguy (Jun 13, 2011)

cadaveca said:


> Yeah, and you know, I have compared power consumption between 45nm products. Intel still wins there, even when overclocked. Intel has higher clocking overhead due to their lower power consumption. My i7 870 draws 145 W @ 4 GHz; my 1100T draws a bit over 300 W.
> 
> 
> And that's why, IMHO, in the extreme scene, we see very few people clocking AMD chips...board limitations brought on by CPU power consumption mean that the gains just aren't worth the effort.
> ...



300 W? Are you sure?


----------



## cadaveca (Jun 13, 2011)

erocker said:


> Price is the only real deciding factor, especially for office-based users.



Yeah, but you work in an auto shop. My wife works in the hospital environment, and you'd better believe they consider power consumption. It would really suck if you needed life support but they couldn't give it to you because their PCs draw too much power.



Thatguy said:


> 300 W? Are you sure?



YES.


----------



## Heavy_MG (Jun 13, 2011)

TheMailMan78 said:


> It makes sense.....however NO ONE takes that kind of stuff into account. I've worked with a few IT departments and all they care about is reading HR's emails.



If offices actually cared about power usage, they would all be running Atoms.


----------



## Thatguy (Jun 13, 2011)

cadaveca said:


> Yeah, but you work in an auto shop. My wife works in the hospital environment, and you'd better believe they consider power consumption. It would really suck if you needed life support but they couldn't give it to you because their PCs draw too much power.
> 
> 
> 
> YES.



How did you measure this ?


----------



## Heavy_MG (Jun 13, 2011)

cadaveca said:


> Yeah, but you work in an auto shop. My wife works in the hospital environment, and you'd better believe they consider power consumption. It would really suck if you needed life support but they couldn't give it to you because their PCs draw too much power.
> 
> 
> 
> YES.



Now you're just grasping at straws, bashing a brand by saying AMDs draw so much power that if you did anything else you would blow the whole power grid. In everyday use, an AMD processor won't draw that much more power than your Intel.
Don't most hospitals use laptops anyway?


----------



## cadaveca (Jun 13, 2011)

Thatguy said:


> How did you measure this ?



With a meter that the CPU 8-pin plugs into. The meter then plugs into the board. The meter tells me how many amps @ 12 V go through it, the wattage, and the actual 12 V reading. 300 W is near the maximum of the cable itself, which is why we are starting to see boards with more than one 8-pin connector, such as the Crosshair V Formula, which has an 8-pin plus another 4-pin...because the 8-pin alone is just not enough.
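The meter readings described above combine by simple P = V × I; a trivial sketch (the 11.9 V / 25 A readings are made-up illustrative values, not actual measurements from the post):

```python
# Power through the EPS 8-pin is just the measured rail voltage times the amps.
def eps_watts(volts: float, amps: float) -> float:
    """Watts flowing through the connector, from the meter's two readings."""
    return volts * amps

# e.g. 25 A on a rail sagging to 11.9 V -- already near the cable's practical limit:
print(eps_watts(11.9, 25.0))
```

This is also why a sagging 12 V rail matters: at the same wattage, a lower voltage means more amps through the same pins.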



Heavy_MG said:


> Don't most hospitals use laptops anyway?



No, not as the single device people use. Almost every executive has a laptop, as well as their "box" in the office.


Here, in Alberta, those laptops are slowly being replaced with iPads, but the "box" pc's are still under or on everyone's desk.


----------



## Thatguy (Jun 13, 2011)

cadaveca said:


> With a meter that the CPU 8-pin plugs into. The meter then plugs into the board.
> 
> 
> 
> ...



And it also powers the PCI Express slots.


----------



## TheMailMan78 (Jun 13, 2011)

cadaveca said:


> Yeah, but you work in an auto shop. My wife works in the *hospital environment*, and you'd better believe they consider power consumption. It would really suck if you needed life support but they couldn't give it to you because their PCs draw too much power.



That's a mission-critical job, man. That's not the norm. My father was an electrical engineer who turned contractor. Have you ever seen the grounding system in an operating room? Yeah, comparing hospitals to anything other than an ATCC is kinda pointless. 

I know where you are coming from, mind you. It's just not relevant here.


----------



## cadaveca (Jun 13, 2011)

Thatguy said:


> and it also power the pci express slots.



No, those are powered via 24-pin. Remember, I review motherboards...so I know how they work.



TheMailMan78 said:


> Thats a mission critical job man.




You are right...mission critical. Many offices HERE deem their network as such, specifically the engineering for the oilfield, and the hospitals.


----------



## TheMailMan78 (Jun 13, 2011)

cadaveca said:


> No, those are powered via 24-pin. Remember, I review motherboards...so I know how they work.



I review strippers yet I have no idea how they work.


----------



## Thatguy (Jun 13, 2011)

cadaveca said:


> No, those are powered via 24-pin. Remember, I review motherboards...so I know how they work.



Ahhh, no, the CPU is powered from the 24-pin.


----------



## cadaveca (Jun 13, 2011)

Thatguy said:


> Ahhh, no, the CPU is powered from the 24-pin.



Ok, if you say so. You could always ask W1zz, as he isolates PCIe power draw for his reviews, too. Don't take my word for it.



TheMailMan78 said:


> I review strippers yet I have no idea how they work.



NO? You need to get out more...even Duke Nukem knows how strippers work.


----------



## TheMailMan78 (Jun 13, 2011)

cadaveca said:


> You are right...mission critical. Many offices HERE deem their network as such, specifically the engineering for the oilfield, and the hospitals.


But that's not the norm, ya know?


----------



## cadaveca (Jun 13, 2011)

TheMailMan78 said:


> But that's not the norm, ya know?



It is here. Part of ISO certification is network integrity and office carbon production. You are forgetting corporations have to pay for carbon credits now.



> ISO, the International Organization for Standardization, is one step closer to finalizing its new energy management standard, ISO 50001. In July, the organization approved a Draft International Standard for the energy management system, which is based on common elements found in ISO's management system standards. As such, the new standard will be highly compatible with ISO 9001 (quality management) and ISO 14001 (environmental management).


----------



## Heavy_MG (Jun 13, 2011)

Thatguy said:


> ahhh, no the cpu is powered from the 24pin


The 24-pin powers the whole motherboard.
The 4- or 8-pin connector provides extra power to the CPU and VRMs.


----------



## Thatguy (Jun 13, 2011)

cadaveca said:


> Ok, if you say so. You could always ask W1zz, as he isolates PCIe power draw for his reviews, too. Don't take my word for it.
> 
> 
> 
> NO? You need to get out more...even Duke Nukem knows how strippers work.



http://en.wikipedia.org/wiki/ATX


----------



## Damn_Smooth (Jun 13, 2011)

I could be completely off base here, but I thought performance per watt was only an issue in the server market.

I have never heard of a hospital not being able to save a life because their computers drew too much power. You would think something like that would make headlines.


----------



## TheMailMan78 (Jun 13, 2011)

cadaveca said:


> It is here. Part of ISO certification is network integrity and office carbon production. You are forgetting corporations have to pay for carbon credits now.


Carbon credits? Is that some of your fancy tree-huggin' Canada talk again? Because as an American I only buy things I am SURE will drown polar bears.



cadaveca said:


> NO? You need to get out more...even Duke Nukem knows how strippers work.



Show me a man that knows how women work and I'll show you a homosexual.


----------



## Thatguy (Jun 13, 2011)

Heavy_MG said:


> The 24-pin powers the whole motherboard.
> The 4- or 8-pin connector provides extra power to the CPU and VRMs.



The 24-pin predominantly powers the CPU and the onboard chipsets and provides power to the PCI slots; PCIe stuff and additional power planes are on the 4/8-pin connectors. The only real way is to use a socket isolator under the CPU and measure from the power pins. 300 watts should cause the CPU to flat-out fail.


----------



## cadaveca (Jun 13, 2011)

Thatguy said:


> The 24-pin predominantly powers the CPU and the onboard chipsets and provides power to the PCI slots; PCIe stuff and additional power planes are on the 4/8-pin connectors. The only real way is to use a socket isolator under the CPU and measure from the power pins. 300 watts should cause the CPU to flat-out fail.



Um, no.

Intel VRD design:

http://www.intel.com/assets/PDF/designguide/321736.pdf

One AMD VRM design:

http://cds.linear.com/docs/Design Note/dn326f.pdf


----------



## Heavy_MG (Jun 13, 2011)

Damn_Smooth said:


> I have never heard of a hospital not being able to save a life because their computers drew too much power. You would think something like that would make headlines.



Or the next Intel commercial. 

Lol, I wonder how much Intel is paying this guy.


----------



## Thatguy (Jun 13, 2011)

cadaveca said:


> Um, no.
> 
> Intel VRD design:
> 
> ...



The only way to accurately measure power draw is to isolate the CPU from the motherboard with an isolator socket layer and measure directly from the CPU pins. There are way too many factors involved to take a measurement from one spot.


----------



## erocker (Jun 13, 2011)

cadaveca said:


> Yeah, but you work in an auto shop. My wife works in the hospital environment, and you'd better believe they consider power consumption. It would really suck if you needed life support but they couldn't give it to you because their PCs draw too much power.
> 
> 
> 
> YES.



I didn't always own an auto shop. The ISO standards are a farce, and few if any corporations pay anything to meet those standards. This isn't Canada; this is the US, where corporations do what they want regardless of the law. All I did was provide an example of this. The "industry" can spew all it wants in marketing jargon, green lingo and save-the-world pamphlets; it doesn't change the reality. As a reviewer that's fine, as you normally go by what the industry goes by. It's not your job to go "behind the scenes" and see the way things really are in this regard. Like most things, there's a hypocrisy involved.


----------



## Heavy_MG (Jun 13, 2011)

Thatguy said:


> The only way to accurately measure power draw is to isolate the CPU from the motherboard with an isolator socket layer and measure directly from the CPU pins. There are way too many factors involved to take a measurement from one spot.



Measuring from the 8-pin socket isn't accurate; the 8-pin powers things other than just the CPU.
Also, Intel has a different power design than AMD, if you were to measure from the 8-pin and not directly from the CPU socket.


----------



## TheMailMan78 (Jun 13, 2011)

erocker said:


> Like most things there's a hypocrisy involved.



Like a mod who trolls.


----------



## cadaveca (Jun 13, 2011)

erocker said:


> As a reviewer that's fine as you normally go by what the industry goes by. It's not your job to go "behind the scenes" and see the way things really are in this regard. Like most things there's a hypocrisy involved.



Of course, and you are right, I live in a completely different country than you do, and as such, our standards are very different.



Heavy_MG said:


> Measuring from the 8 pin socket isn't accurate,the 8 pins powers things other than just the CPU.



Like what?


----------



## Dent1 (Jun 13, 2011)

cadaveca said:


> Then why is AMD hyping performance per watt?
> 
> 
> 
> ...





cadaveca said:


> Yeah, but you work in an auto shop. My wife works in the hospital environment, and you'd better believe they consider power consumption. It would really suck if you needed life support but they couldn't give it to you because their PCs draw too much power.
> 
> 
> 
> YES.



Marketing. 

Small companies are not concerned about power consumption as a priority because they're worried about their immediate overheads and budget. If they're buying up to 100 office computers, they'd rather save £8,000 and buy computers equipped with CPUs with a higher rated TDP. Granted, they might save £1,000 per year on energy bills if they went for more power-efficient computers, but it would take 8+ years before the financial saving balances out. Coming out of a recession, small companies are worried about surviving in business today, not in 8+ years.

Multinational organisations are not concerned about power consumption either, from a financial point of view, because they often have deals with their electricity provider for a fixed cost. However, power consumption is still a small consideration, mainly as a marketing ploy to show that they're doing their part to help society lower carbon emissions and hence raise the profile of the company in question.

Hospitals are often government funded. The government has a bigger incentive to lower carbon emissions than small companies and multinational organisations. For politicians who base policies on reducing carbon emissions and general energy wastage, it is the difference between getting elected and getting re-elected.

AMD hyping energy efficiency is about getting small orders from multinational organisations and huge orders from hospitals and government-funded organisations. Apart from these huge organisations, energy efficiency is an afterthought for the average Joe consumer.
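The payback reasoning in the first paragraph can be made concrete with a one-liner; the £8,000 premium and £1,000/year saving are the post's own figures:

```python
# Years before a power-efficiency premium pays for itself in energy savings.
def payback_years(upfront_premium: float, yearly_saving: float) -> float:
    return upfront_premium / yearly_saving

print(payback_years(8000.0, 1000.0))   # 8.0 years, matching the post's 8+ year estimate
```

Any company that discounts future savings at all will see an even longer effective payback, which is why the premium is a hard sell in a recession.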


----------



## TheMailMan78 (Jun 13, 2011)

cadaveca said:


> Of course, and you are right, I live in a completely different country than you do, and as such, our standards are very different.



BS, Canada is the 51st state. It's like a cold-ass California without the Mexican drug lords.


----------



## cadaveca (Jun 13, 2011)

Dent1 said:


> AMD hyping energy efficiency is about getting small orders from multinational organisations and huge orders from hospitals and government-funded organisations. Apart from these huge organisations, energy efficiency is an afterthought for the average Joe consumer.



Ah, thanks very much for explaining it to show both sides of the story.



TheMailMan78 said:


> It's like a cold-ass California without the Mexican drug lords.





No, the Mexican cartels are here too.


----------



## erocker (Jun 13, 2011)

cadaveca said:


> Of course, and you are right, I live in a completley different country than you do, and as such, our standards are very different.



Don't get me wrong; that's not really what I meant. The industry has certain standards, measurements, etc. that the industry as a whole "should" follow. As a reviewer it's part of your job to go over these and make sure companies are within the margins of those standards. I'm just saying the actual consumer doesn't really care very much, though those in charge of making purchases at larger corporations may take this data into consideration. Bottom line is they want more product that serves their primary needs for the least amount of money.


----------



## cadaveca (Jun 13, 2011)

Yeah, I understood clearly. And I agree, the average consumer doesn't care...just what's fastest for the dollar matters.

But those standards do exist, and at least here, these things are under closer scrutiny than elsewhere.


----------



## trickson (Jun 13, 2011)

I don't know just why or how this thread became a thread about AMD BD wattage when this is a leaked preview of the performance. Who gives a happy crap about the wattage when the performance (the BMs I've seen here) is far lower than the Q9650! I couldn't care less about how much wattage it takes to run the thing! I think the thread has been derailed! I am not hearing anyone (but me) talk about the dismal performance of the BD CPU! WHY? What is better, having a CPU with lower watts and less performance? GOOD GOD, IF SO, STICK WITH THE 4000+ THEN! This BD CPU looks like crap performance-wise! I hope that this is not what AMD plans to put out to consumers or they are in big deep Sh*T! How about we talk about the benchmarks?


----------



## Damn_Smooth (Jun 13, 2011)

trickson said:


> I don't know just why or how this thread became a thread about AMD BD wattage when this is a leaked preview of the performance. Who gives a happy crap about the wattage when the performance (the BMs I've seen here) is far lower than the Q9650! I couldn't care less about how much wattage it takes to run the thing! I think the thread has been derailed! I am not hearing anyone (but me) talk about the dismal performance of the BD CPU! WHY? What is better, having a CPU with lower watts and less performance? GOOD GOD, IF SO, STICK WITH THE 4000+ THEN! This BD CPU looks like crap performance-wise! I hope that this is not what AMD plans to put out to consumers or they are in big deep Sh*T! How about we talk about the benchmarks?



My guess would be that there is no point in discussing benchmarks that are either fake, or from a chip that was never meant to be benchmarked, any further than we already have in the first 6 pages of this thread.

You can go on believing their credibility if you want to, though; I'll stay grounded in a reality where AMD doesn't waste a shitload of time and money on a chip that performs significantly worse than their current offerings.

Have fun.


----------



## yogurt_21 (Jun 13, 2011)

IME power matters on the low end, where consumers/manufacturers have purchased low-wattage power supplies for their machines.

It also matters for servers when large numbers of them are used. So if each unit uses 100 watts more, you're looking at an increased usage of 50-100 kW across all servers. At the national average of 12 cents a kWh, that's $4,500-9,000 extra a month. No CFO would be happy with that, especially if there was something that got the job done for a similar cost with lower power consumption.
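The server numbers above line up if you assume 500-1,000 units running around the clock; a quick check (the fleet sizes are an inference from the 50-100 kW range, the 100 W delta and 12-cent rate are from the post):

```python
# Extra monthly electricity cost when each server draws 100 W more, at $0.12/kWh.
def extra_monthly_cost(servers: int, extra_watts: float = 100.0,
                       rate_per_kwh: float = 0.12) -> float:
    extra_kw = servers * extra_watts / 1000.0
    hours_per_month = 24 * 30          # servers run 24/7
    return extra_kw * rate_per_kwh * hours_per_month

print(round(extra_monthly_cost(500)))    # close to the quoted $4,500
print(round(extra_monthly_cost(1000)))   # close to the quoted $9,000
```

The cost is linear in fleet size, so the same 100 W delta is noise for a single workstation but a budget line item for a datacenter.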


----------



## erocker (Jun 13, 2011)

trickson said:


> I am not hearing anyone (but me) talk about the dismal performance of the BD CPU! WHY? What is better, having a CPU with lower watts and less performance? GOOD GOD, IF SO, STICK WITH THE 4000+ THEN! This BD CPU looks like crap performance-wise! I hope that this is not what AMD plans to put out to consumers or they are in big deep Sh*T! How about we talk about the benchmarks?



It's not going to be the released product, so I assume no one really cares. I know I don't really care about some early engineering sample that has no reflection on a finished product, using benchmarks that indicate nothing.


----------



## trickson (Jun 13, 2011)

erocker said:


> It's not going to be the released product, so I assume no one really cares. I know I don't really care about some early engineering sample that has no reflection on a finished product, using benchmarks that indicate nothing.



Then what is the use of talking about wattage? This goes both ways. There are no chips out yet and all we seem to be able to do is get all worked up over nothing.


----------



## Dent1 (Jun 13, 2011)

trickson said:


> Talk about the dismal performance of the BD CPU!





trickson said:


> This BD CPU looks like crap performance wise !



What are you talking about? How can you bash Bulldozer's performance when there are no Bulldozer reviews or benchmarks?

The wattage debate happened when the thread lost steam and got derailed.


----------



## cadaveca (Jun 13, 2011)

trickson said:


> Then what is the use of talking about wattage?



I mentioned that wattage would be something I'd be looking at when I get CPUs, and the discussion went from there.


----------



## Damn_Smooth (Jun 13, 2011)

trickson said:


> Then what is the use of talking about wattage? This goes both ways. There are no chips out yet and all we seem to be able to do is get all worked up over nothing.



Because wattage is far more interesting than benchmarks that show nothing.


----------



## cadaveca (Jun 13, 2011)

Damn_Smooth said:


> Because wattage is far more interesting than benchmarks that show nothing.



I guess, but it's equally unknown at this point. We can only look at past chips, and I think these CPUs are even going to be on a different process than previous AMD chips?


----------



## trickson (Jun 13, 2011)

Dent1 said:


> What are you talking about? How can you bash the bulldozer's performance when there is no Bulldozer review or benchmarks.



Well, first off, there seem to be Bulldozers out. They are saying that they bench tested them and even gave some benchmarks of their sample. What are you talking about?
The thread is titled *AMD Bulldozer Eng. Sample leaked, benched!* Am I seeing things here? Don't flip the script on me, son.


----------



## Damn_Smooth (Jun 13, 2011)

cadaveca said:


> I guess, but it's equally unknown at this point. We can only look at past chips, but I think these CPUs are going to get a different process than previous AMD chips, even?



The only thing I can answer you with and be even somewhat honest about it, is that I don't know. I don't know if the process behind this is different, I don't know how these chips will perform in real life, hell, I don't even know a release date.

But I do know that AMD wouldn't invest the time and effort that they have, to take a step backwards. They do have stockholders that they are accountable to, even if there is no CEO, so taking backwards steps would probably end the company.


----------



## trickson (Jun 13, 2011)

I do not know, but it seems as if all AMD is really doing is pushing cores out. MORE CORES = better chips. I hope to see them taking the lead again, but from the BMs provided (NOT reliable at all) I would say they are just putting out more cores at the same performance as the Phenom. More of a lateral move to me.


----------



## erocker (Jun 13, 2011)

trickson said:


> Well, first off, there seem to be Bulldozers out. They are saying that they bench tested them and even gave some benchmarks of their sample. What are you talking about?
> The thread is titled *AMD Bulldozer Eng. Sample leaked, benched!* Am I seeing things here? Don't flip the script on me, son.



Um.. okay. So, someone got hold of an early AMD BD engineering sample and it sucks. Where do we go from here? Make multiple posts claiming BS? Discuss it and everything to do with it? You don't seem to have much to contribute to the actual topic, nor does anyone else. Why? There's no information. So I could just close the thread, or let people be grown-ups and discuss various things involving CPUs that make for interesting and informative conversation. I'll do the latter.



trickson said:


> I do not know, but it seems as if all AMD is really doing is pushing cores out. MORE CORES = better chips. I hope to see them taking the lead again, but from the BMs provided (NOT reliable at all) I would say they are just putting out more cores at the same performance as the Phenom. More of a lateral move to me.



Well, you're wrong, and you're basing your assumption off of some sample that we don't even know is real. As far as this part:

"I would say they are just putting out more cores at the same performance as the Phenom"

I'll hold you to that. It makes absolutely zero sense, but we'll see.


----------



## Dent1 (Jun 13, 2011)

trickson said:


> Well, first off, there seem to be Bulldozers out. They are saying that they bench tested them and even gave some benchmarks of their sample. What are you talking about?
> The thread is titled *AMD Bulldozer Eng. Sample leaked, benched!* Am I seeing things here? Don't flip the script on me, son.



Indeed the title does say that. That doesn't mean it's true. 

I'm sure Intel had samples in their test labs of a premature, defective and underperforming Sandy Bridge. The difference is it wasn't leaked.

Anyway, we have all already established many pages ago that the authenticity of the benchmarks cannot be trusted.

To close, there is no official, or confirmed-authentic unofficial, Bulldozer review or benchmark at present.


----------



## trickson (Jun 13, 2011)

I really want to think that Bulldozer is going to give Intel a run for the money. All I hope is that they will take the lead; this will force lower prices in the market. I do not like to see things like this, as it makes me think things will stay the same and prices will be even higher. This is all I am meaning to convey.


----------



## cadaveca (Jun 14, 2011)

Thatguy said:


> 300w ? are you sure ?



Oh, look, here's *OVER 400W* going through an AMD CPU (Crosshair V Formula, 8-pin + 4-pin):












Oh, and look...a meter over the 8-pin. I guess some are really just not aware of what happens when you overclock to the extreme. 300 W is NOTHING, and ASUS is more than aware of it (otherwise there'd be no need for both the 4-pin and 8-pin)!!


----------



## seronx (Jun 14, 2011)

cadaveca said:


> Oh, look, here's *OVER 400W* going through an AMD CPU (Crosshair V Formula, 8-pin + 4-pin):
> 
> 
> 
> ...



And that is with the Crosshair V Formula version, not the Crosshair V Extreme version.

(The Extreme has 2x 8-pin.)


----------



## cadaveca (Jun 14, 2011)

I didn't even know there was an "Extreme" version planned.

I've had several people ask me...what's the difference between the Sabertooth and the Crosshair, and why would I buy one over the other?

This is just one reason to choose one over the other.


----------



## seronx (Jun 14, 2011)

cadaveca said:


> I didn't even know there was an "Extreme" version planned.
> 
> I've had several people ask me...what's the difference between the Sabertooth and the Crosshair, and why would I buy one over the other?
> 
> This is just one reason to choose one over the other.



Sabertooth TUF: 6+2 phase, 8-pin; Formula: 8+2 phase, 8-pin + 4-pin; Extreme: 8+2 phase, 2x 8-pin.


http://www.asus.com/Motherboards/AMD_AM3/Crosshair_IV_Extreme/

http://www.asus.com/Motherboards/AMD_AM3/Crosshair_IV_Formula/

If they did it before and didn't get punished, it is most likely they will do it again.


----------



## cadaveca (Jun 14, 2011)

Huh. I'm gonna have to ask ASUS if I can try both out. My CPU is a power hog, and the coldest my pot has been is -35°C, in winter, as it's sitting in the garage collecting dust. These might be enough for me to pull it out again.

Interesting, very interesting indeed.

However, that link goes to the Crosshair IV, and I want Crosshair V Extreme.

ASUS has many products that catch my eye, like the ROG expander, OC panel, etc...and even the entry-level boards have a lot of features I was NOT expecting.


----------



## seronx (Jun 14, 2011)

cadaveca said:


> Huh. I'm gonna have to ask ASUS if I can try both out. My CPU is a power hog, and the coldest my pot has been is -35°C, in winter, as it's sitting in the garage collecting dust. These might be enough for me to pull it out again.
> 
> Interesting, very interesting indeed.
> 
> However, that link goes to the Crosshair IV, and I want Crosshair V Extreme.



It usually takes 7-8 months before the Crosshair Extreme is born

They need to get a mama Crosshair and a daddy Crosshair to make a perfect girl Crosshair

But I heard they got the mating session over with; now it's just the waiting time


----------



## Thatguy (Jun 14, 2011)

cadaveca said:


> Oh, look, here's* OVER 400W *going through an AMD CPU(Crosshair V Formula, 8-pin + 4-pin):
> 
> 
> 
> ...



   WTF am I supposed to do with a red X and no link? And I explained to you that other things on those connectors are getting power. You want real CPU wattage? Isolate the socket and get power data out of the pins the CPU gets power from. Never mind the fact that 80 amps through a few pins on the CPU sounds like it's pushing the boundaries of established electrical engineering theory.


----------



## seronx (Jun 14, 2011)

Thatguy said:


> WTF am I supposed to do with a red X and no link? And I explained to you that other things on those connectors are getting power. You want real CPU wattage? Isolate the socket and get power data out of the pins the CPU gets power from. Never mind the fact that 80 amps through a few pins on the CPU sounds like it's pushing the boundaries of established electrical engineering theory.



The highest peak he got is 1688 Watts over the 8-pin + 4-pin, for ya


----------



## cadaveca (Jun 14, 2011)

Thatguy said:


> WTF am I supposed to do with a red X and no link? And I explained to you that other things on those connectors are getting power. You want real CPU wattage? Isolate the socket and get power data out of the pins the CPU gets power from. Never mind the fact that 80 amps through a few pins on the CPU sounds like it's pushing the boundaries of established electrical engineering theory.


Here's the full link:

http://www.youtube.com/watch?v=w70T3h_re9A&feature=player_embedded

And, note that the 9-series boards with a black socket feature support for thicker pins to handle more current. And while you may have explained something to me, I simply asked a very straight-forward question..."Like what?"

There's this amazing thing, called a continuity test...are you familiar with that? It allows sourcing power at any component and finding where that power comes from. I have yet to find anything other than CPU power to be powered from an ATX/EPS 8-pin, yet you are right...it IS possible (and I mention as much in my reviews, too)...but I have yet to find any examples in any of the boards I have tested so far.


----------



## Thatguy (Jun 14, 2011)

cadaveca said:


> Here's the full link:
> 
> http://www.youtube.com/watch?v=w70T3h_re9A&feature=player_embedded
> 
> ...



   I got a 6-core running at 4 GHz on a 450-watt power supply with a 5770 video card. Max load, the system is pulling 380 from the wall. Not really seeing it. I don't have time right now to make an isolator socket with a current bridge.


----------



## cadaveca (Jun 14, 2011)

Thatguy said:


> I got a 6-core running at 4 GHz on a 450-watt power supply with a 5770 video card. Max load, the system is pulling 380 from the wall. Not really seeing it. I don't have time right now to make an isolator socket with a current bridge.



You don't need to. Just because you are running less power through your CPU than I do for testing, doesn't mean it's not possible. If it wasn't possible, the CHVF would not have the extra 4-pin CPU power connector, and that's a fact.

Since you think this:



Thatguy said:


> and it also powers the pci express slots.



I'm about done explaining these things to you.


----------



## Thatguy (Jun 14, 2011)

cadaveca said:


> You don't need to. Just because you are running less power through your CPU than I do for testing, doesn't mean it's not possible. If it wasn't possible, the CHVF would not have the extra 4-pin CPU power connector, and that's a fact.



the problem is that the math doesn't add up. To consume 80 amps would require a significant drop in temperature to lower resistance and allow for more current.


----------



## cadaveca (Jun 14, 2011)

Thatguy said:


> the problem is that the math doesn't add up. To consume 80 amps would require a significant drop in temperature to lower resistance and allow for more current.



CPUs are semiconductors, whose electrical properties vary according to temperature. Basic stuff.

CPUs take 80A all the time...more, even...@ their rated voltage. For example...80A @ 1.5V = 120W. Current CPUs run far less than 1.5V.

FYI, in case you didn't watch the video, the video shows LN2 cooling.
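The arithmetic in dispute here is just power = voltage x current. A minimal sketch of the numbers being thrown around (the function name and the 1.9 V figure are my own illustration, not from the thread):

```python
def cpu_power_watts(vcore: float, amps: float) -> float:
    """Electrical power in watts: core voltage (V) times current (A)."""
    return vcore * amps

# The example above: 80 A at 1.5 V is 120 W
print(cpu_power_watts(1.5, 80))   # 120.0
# At a higher, LN2-style voltage, the same current costs more
print(cpu_power_watts(1.9, 80))   # 152.0
```

So 80 A by itself says little about wattage until you know the voltage it is delivered at; that is why the 12 V side of the VRM carries far less current than the CPU side.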


----------



## Thatguy (Jun 14, 2011)

cadaveca said:


> CPUs are semiconductors, whose electrical properties vary according to temperature. Basic stuff.
> 
> CPUs take 80A all the time...more, even...@ their rated voltage. For example...80A @ 1.5V = 120W. Current CPUs run far less than 1.5V.



Jesus, I got 5v on the brain today, lol.


----------



## cadaveca (Jun 14, 2011)

No big deal. I very purposely have created testing for my reviews that is near-infallible. I test things no other reviewer does, show things no other reviewer does, and hold opinions most reviewers do not agree with, so I expect a lot of shock and surprise from readers...but at the same time, my main purpose is trying to educate users on what their PCs are really doing.

I take my job here seriously. If I am even slightly unsure about something, I say so.

I bin my CPUs for testing. I usually buy 10-15 CPUs, and keep the very worst sample for testing, to create worst-case scenarios. Of course, there's always a chance that there are other bits out there worse than what I find, but I very purposely get rid of what would be considered "cherry-picked" parts, as I do not believe using top-level hand-binned parts serves any purpose.


I do try to break every product I get, too, but at the same time, I do try to maintain reasonable limits to my testing.


So I found a CPU that can take over 350W, at what would be modest clocks. This is specifically so I can push board VRMs to the limit. In my ASUS M5A97 EVO review, I managed to do just that, with OCP kicking in @ just over 280W.


----------



## Thatguy (Jun 14, 2011)

cadaveca said:


> No big deal. I very purposely have created testing for my reviews that is near-infallible. I test things no other reviewer does, show things no other reviewer does, and hold opinions most reviewers do not agree with, so I expect a lot of shock and surprise from readers...but at the same time, my main purpose is trying to educate users on what their PCs are really doing.
> 
> I take my job here seriously. If I am even slightly unsure about something, I say so.
> 
> ...



  The best way to test real consumption would be to grab the socket pins. Yeah, I have touch probes and current meters that might be workable, but nothing will beat the accuracy of a current bridge for getting the most solid numbers within a very small window of error. 

  I don't have time to build anything like it, but I bet some large company has it. I also know I don't have a top-bin part either.


----------



## cadaveca (Jun 14, 2011)

Sure, and you are very right, and I know both Intel and AMD do have socket-based testers, to test VRM design, however, that would not really show VRM efficiency, merely the power the VRM produces.

I use the same CPU for testing in each platform (i.e. one CPU per platform), and things like ASUS's DIGI+ VRM will affect power output to the CPU, so it serves me better to measure @ the 8-pin, providing a metric based on the same CPU speed (no matter the board, the CPU should consume the same power for the same clocks), and give numbers which can be compared from board to board, highlighting the VRM, rather than the CPU. I am not the TPU CPU reviewer, so providing info specific to CPU power draw is not necessary for me.

Testing via your proposed method would eliminate VRM efficiency numbers, and focus on the CPU, while I'm concerned about the VRM design. I could do both sets of testing, but I see no need for me to report specific CPU numbers when I'm not testing CPUs in my reviews.


----------



## Thatguy (Jun 14, 2011)

cadaveca said:


> Sure, and you are very right, and I know both Intel and AMD do have socket-based testers, to test VRM design, however, that would not really show VRM efficiency, merely the power the VRM produces.
> 
> I use the same CPU for testing in each platform (i.e. one CPU per platform), and things like ASUS's DIGI+ VRM will affect power output to the CPU, so it serves me better to measure @ the 8-pin, providing a metric based on the same CPU speed (no matter the board, the CPU should consume the same power for the same clocks), and give numbers which can be compared from board to board, highlighting the VRM, rather than the CPU. I am not the TPU CPU reviewer, so providing info specific to CPU power draw is not necessary for me.
> 
> Testing via your proposed method would eliminate VRM efficiency numbers, and focus on the CPU, while I'm concerned about the VRM design. I could do both sets of testing, but I see no need for me to report specific CPU numbers when I'm not testing CPUs in my reviews.



   VRMs can have a lot of current shedding by way of thermal output. In fact, I'd bet they could consume a lot of power, especially on a highly parallel design where you have lots of small VRMs. Sure, it's going to account for something, but certainly not everything. These boards you're getting 280W on, just what are these boards equipped with feature-wise? Are you subtracting additional power consumption, or are you measuring the VRM-to-CPU pipeline only?


----------



## cadaveca (Jun 14, 2011)

I simply report power draw over the 8-pin, like is shown in the video above.

A bit of research into VRM design shows OEM requirements are that the 12V input to the CPU VRM MUST be located within a maximum distance from the VRM itself, to eliminate EMI and line interference (which eliminates the possibility of the 24-pin connector providing input to the VRM). This is then translated into PSU power design, with specific requirements for both 12V inputs via 4-pin and 8-pin connections.

I focus on the 8-pin power draw, as this can highlight what users need to look for in power requirements for multi-rail PSUs.

I don't care about power loss from the VRMs...I think it can be assumed a fact that some of the power consumed via the 8-pin is lost via heat, which, technically, cannot be avoided with MOSFETs. I do not focus on any single part of the VRM, rather I look at the solution as a complete design, as differences in component choice, from high/low MOSFET pairs and triplets, to DrMOS components, may introduce differences that aren't truly comparable unless done in the method I have chosen. I do make a part of my reviews pointing out the individual components, but I do not feel users need more in-depth analysis than that.

You have to realize, that as it is, my testing takes 25-30 hours to complete, without my usage testing, where I typically use the board for several days, both for general web use, media playback, and gaming, typically done with other users on the forum here.

That said, I am earning just pennies for every hour I put in on some products, so I do have to limit the time I take for analysis. Even so, I do cover things like VRM power draw, which other review sites do not. I am not paid by the hour, so some caveats must be accepted. Really, in the end, I am volunteering my time to do reviews, and that time is A LOT every week, and other than the boards, every other part used in my testing is stuff I purchase myself. With that in mind, there is only so much I can do, although I do think I do more than some others do.


----------



## Thatguy (Jun 14, 2011)

cadaveca said:


> I simply report power draw over the 8-pin, like is shown in the video above.
> 
> A bit of research into VRM design shows OEM requirements are that the 12V input to the CPU VRM MUST be located within a maximum distance from the VRM itself, to eliminate EMI and line interference (which eliminates the possibility of the 24-pin connector providing input to the VRM). This is then translated into PSU power design, with specific requirements for both 12V inputs via 4-pin and 8-pin connections.
> 
> ...




   good MOSFETs in the upper amperage ranges can easily shed 20% of the power they output by way of thermal loss.


----------



## cadaveca (Jun 14, 2011)

So? Whatever power the VRM pulls needs to be accounted for. Look, I hear what you are getting at (and I am not denying any of it, either), however, the same could be said of VGA power draw, which most reviewers compare in the same way I do my motherboard comparisons. I'm not focused on CPU-specific draw, I am focused on the CPU and VRM together, as this is what needs to be looked at for PSU choices.

When reviewers report VGA power draw, they are not testing the GPU or memory power draw separately, and they are not soldering wires to the PCB to measure current....they measure at the point the PSU terminates, and the component begins, as I do. Of course, the caveat here is W1zz, who actually measures PCIe power draw as well (which is powered via the 24-pin, or a molex, if the board is equipped with one).

When it comes to PCIe power draw, you can simply google "24pin burnt out", and you'll find many instances of the 24-pin motherboard connector being burnt out from excessive PCIe power draw from multiple VGAs, as well as finding mods to boards to prevent this issue.


----------



## Thatguy (Jun 14, 2011)

cadaveca said:


> So? Whatever power the VRM pulls needs to be accounted for. Look, I hear what you are getting at (and I am not denying any of it, either), however, the same could be said of VGA power draw, which most reviewers compare in the same way I do my motherboard comparisons. I'm not focused on CPU-specific draw, I am focused on the CPU and VRM together, as this is what needs to be looked at for PSU choices.
> 
> When reviewers report VGA power draw, they are not testing the GPU or memory power draw separately, and they are not soldering wires to the PCB to measure current....they measure at the point the PSU terminates, and the component begins, as I do. Of course, the caveat here is W1zz, who actually measures PCIe power draw as well (which is powered via the 24-pin, or a molex, if the board is equipped with one).
> 
> When it comes to PCIe power draw, you can simply google "24pin burnt out", and you'll find many instances of the 24-pin motherboard connector being burnt out from excessive PCIe power draw from multiple VGAs, as well as finding mods to boards to prevent this issue.



well if the VRMs are sweating 10-20% of the power that you're measuring, it could push up reported power consumption, for both Intel and AMD.


----------



## cadaveca (Jun 14, 2011)

Thatguy said:


> well if the VRMs are sweating 10-20% of the power that you're measuring, it could push up reported power consumption, for both Intel and AMD.



Yes, I am well aware of that. And I do not care. I am not reporting JUST CPU power draw. I report 8-pin power draw, which includes VRM efficiency, CPU draw, and possibly other components (that I've yet to find in current implementations).

The chokes are going to shed a bit, the MOSFETs, the input driver, the capacitors; every component is going to affect power consumption in some way, and if the component gives off heat, naturally this is going to translate into power consumed.

And that 20% number is actually pretty close to real-world numbers for just about every board on the market, too.

But like I said, just like a VGA is considered the full PCB, GPU memory, VRM and all, I look at CPUs in the same fashion. The only difference is that in CPUs, memory is most commonly powered via the 24-pin.

Note that my tables are not labelled CPU power consumption:








Of course, once Bulldozer is out, these numbers will change for the 9-series boards, but again, because this is dependent on the CPU installed, I consider both the VRM and the CPU as a single power-draw source.

It's not about being technical...it's about explaining things in an easy-to-understand manner. But as you can see, when questions do arise, I am ready and waiting with explanations as to why I do what I do.


----------



## Mr McC (Jun 14, 2011)

erocker said:


> I didn't always own an auto shop. The ISO standards are a farce and little to no corporations pay anything to meet those standards. This isn't Canada, this is the US where corporations do what they want regardless of law or whatnot.



In Europe, some poor office chargehand gets lumbered with the task of inventing evidence that everybody is doing what they should be doing, and has been doing so consistently since the last time the ISO guy came round, to get a rubber stamp, coveted for tenure and contracting purposes, or so I'm told.


----------



## cadaveca (Jun 14, 2011)

Mr McC said:


> contracting purposes



ISO certification is primarily used for contract negotiations. Usually it denotes a specific level of quality within either manufacturing, or record-keeping, and sometimes in safety.

My wife has ensured ISO certification for quite a few companies now, and now does patient health and safety in a hospital, where things like ISO standards aren't something to negotiate a contract with...meeting those levels ensures lawsuits do not follow any sort of incident.


----------



## Thatguy (Jun 14, 2011)

cadaveca said:


> Yes, I am well aware of that. And I do not care. I am not reporting JUST CPU power draw. I report 8-pin power draw, which includes VRM efficiency, CPU draw, and possibly other components(that I've yet to find in current implementations).
> 
> The chokes are going to shed a bit, the MOSFETs, the input driver, the capacitors; every component is going to affect power consumption in some way, and if the component gives off heat, naturally this is going to translate into power consumed.
> 
> ...


well aside from basic device operational consumption for the VRMs, chokes etc and the thermal loss, what it really paints is a different picture. If you're measuring 280W and we know 20% of that is thermal shedding, and maybe another 5% is used in operational uses, conductive losses and RFI emission that's not thermal in nature, we can figure easily that at least 50 watts of your measurement are radiated. 

  so then what we see is that a really bad CPU is only 230 watts, not 280, and it says that Intel is a good bit less leaky too.

   I'd really like to know exactly what a CPU does consume, too.
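Thatguy's back-of-envelope can be written out explicitly. A rough sketch, where the loss fractions are the thread's assumptions, not measurements (the function name is mine):

```python
def cpu_side_estimate(measured_8pin_w: float,
                      thermal_loss_frac: float = 0.20,
                      other_loss_frac: float = 0.05) -> float:
    """Subtract assumed VRM losses (heat, switching, RFI) from a
    measured 8-pin draw to estimate what actually reaches the CPU."""
    return measured_8pin_w * (1.0 - thermal_loss_frac - other_loss_frac)

# 280 W at the 8-pin with ~25% total assumed losses leaves ~210 W at the CPU;
# the post above rounds the losses down to "at least 50 W", giving ~230 W.
print(cpu_side_estimate(280))  # 210.0
```

Either way, the point stands: a connector-side measurement bundles the VRM's losses with the CPU's draw, so the CPU-only number is always somewhat lower than the 8-pin number.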


----------



## Thatguy (Jun 14, 2011)

cadaveca said:


> ISO certification is primarily used for contract negotiations. Usually it denotes a specific level of quality within either manufacturing, or record-keeping, and sometimes in safety.
> 
> My wife has ensured ISO certification for quite a few companies now, and now does patient health and safety in a hospital, where things like ISO standards aren't something to negotiate a contract...meeting those levels ensure lawsuits do not follow any sort of incident.



Having done some ISO work, ISO is essentially a process guarantee. It promises that you will repeatedly do the same thing over and over and over again, regardless of quality. The only quality ISO does grant is that you can consistently build a good product or a crappy one. ISO makes no distinction on that end.


----------



## cadaveca (Jun 14, 2011)

Thatguy said:


> well aside from basic device operational consumption for the VRMs, chokes etc and the thermal loss, what it really paints is a different picture. If you're measuring 280W and we know 20% of that is thermal shedding, and maybe another 5% is used in operational uses, conductive losses and RFI emission that's not thermal in nature, we can figure easily that at least 50 watts of your measurement are radiated.
> 
> so then what we see is that a really bad CPU is only 230 watts, not 280, and it says that Intel is a good bit less leaky too.
> 
> I'd really like to know exactly what a CPU does consume, too.




Sure, of course. I mean, to me, that's a given. However, my CPU will pull OVER 300W @ 4 GHz, no matter the VRM efficiency. So will many others, depending on how far you push. Like the video I posted...35A x 12V = 420W.

However, if I used your suggested method for testing, those numbers above, like the comparison of 80W vs 73W for the same clocks, highlight VRM efficiency between the two products, which wouldn't be shown using your method.

My testing also eliminates differences in VRM frequency.

I do not review CPUs, so I have no real interest in specific CPU power draw numbers. But what I can say, 100%, which started us on this conversation, is that AMD's TDP is not the MAXIMUM CPU POWER DRAW, as shown by AMD themselves in the document I posted earlier.



Thatguy said:


> The only quality ISO does grant is that you can consistently build a good product or a crappy one. ISO makes no distinction on that end.



Yes, of course. I think the industry it is applied to affects how ISO certification is approached, too. Kinda a fair bit off topic, even more so than our other ongoing discussion.


----------



## seronx (Jun 14, 2011)

Guys, GUYS!

AM3+ is 145A that is with an 8pin so two 8 pins should give off 290A...(3480 Watts can be provided to Bulldozer at max)

TDP is maximum/worst possible case for stock

TDP isn't watts of energy but watts of heat

125w TDP = 125 J/s of heat

Heat is the excess of work, which the other watt provides for

(3480 Watts of energy, 125 Watts of heat for stock)







Check out intel propaganda for 2x8pin power oooooohhhh

Output is only 1200 Watts because there are only 12 chokes


----------



## cadaveca (Jun 14, 2011)

seronx said:


> TDP isn't watts of energy but watts of heat








You got it.

Which also highlights why I test the way I do, as TDP of a CPU does not mean that PSU power budgets for the CPU are the same as TDP.

I gotta say though, I do not think your wattage numbers are accurate there...the amperage listings are not based off of 12V, AFAIK, but off of default CPU voltage.


----------



## theeldest (Jun 14, 2011)

seronx said:


> Guys, GUYS!
> 
> AM3+ is 145A that is with an 8pin so two 8 pins should give off 290A...(3480 Watts can be provided to Bulldozer at max)
> 
> ...




I thought the EPS 8-pin was rated at 28a / 12v (336 watts). So 2 would provide up to 672 watts to a CPU?

Where are they getting 1500 watts? That would suggest that a Radeon 6990 or nVidia 590 would each only require a single 8-pin or 6-pin power plug...


EDIT: Yes, I realized the CPU and PCIe plugs are different, but power ratings will be similar given pin count.


----------



## cadaveca (Jun 14, 2011)

I think they are talking hardware capability, not actual usage. For example, each choke can handle 50A...and there are many chokes, the total of which gives the 1500W figure. That doesn't mean you can actually supply 1500W to the CPU.


----------



## seronx (Jun 14, 2011)

cadaveca said:


> You got it.
> 
> Which also highlights why I test the way I do, as TDP of a CPU does not mean that PSU power budgets for the CPU are the same as TDP.
> 
> I gotta say though, I do not think your wattage numbers are accurate there...the amperage listings are not based off of 12V, AFAIK, but off of default CPU voltage.





theeldest said:


> I thought the EPS 8-pin was rated at 28a / 12v (336 watts). So 2 would provide up to 672 watts to a CPU?
> 
> Where are they getting 1500 watts? That would suggest that a Radeon 6990 or nVidia 590 would each only require a single 8-pin or 6-pin power plug...
> 
> ...





cadaveca said:


> I think they are talking hardware capability, not actual usage. For example, each choke can handle 50A...and there are many chokes, the total of which gives the 1500W figure. That doesn't mean you can actually supply 1500W to the CPU.





> +12 volts - 4 pins - 28 amps - 336 watts



EPS: 4pin = 336 Watts
4-pin provides 336 Watts so technically 2x8 pin provides 1344 Watts
ATX PCI-E: 6pin = 75 Watts, 8pin= 150 Watts

looked it up and finally found a place that talks about this

-------

The choke amount is 24, which equals 1200 Watts (I counted wrong in the previous posts)

1344 Watts -> 1200 Watts

The rest of the 1500 Watts are from the mainboard

24pin ATX Mobo
+3.3 volt: 79.2 watts
+5 volt: 150 watts
+12 volts : 144 watts

http://www.playtool.com/pages/psuconnectors/connectors.html
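The playtool ratings reduce to a simple count: live 12 V pins times the per-pin current rating times 12 V. A sketch of that arithmetic (the 7 A-per-pin figure is an assumption I derive from the 28 A / 4-pin EPS rating quoted above; real limits depend on terminal and wire gauge):

```python
def connector_watts(live_12v_pins: int, amps_per_pin: float) -> float:
    """Rated wattage of a 12 V power connector:
    live 12 V pins x current per pin x 12 V."""
    return live_12v_pins * amps_per_pin * 12.0

# EPS 8-pin: four 12 V pins at ~7 A each -> the 336 W figure quoted above
print(connector_watts(4, 7))   # 336.0
# A PCIe 8-pin carries three 12 V pins but is spec-capped at 150 W,
# well below what the same wire gauge could physically deliver.
```

This is why the EPS and PCIe 8-pins diverge so sharply: the EPS rating is a connector limit, while the PCIe figure is a deliberate specification ceiling.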


----------



## theeldest (Jun 14, 2011)

seronx said:


> EPS: 4pin = 336 Watts
> 4-pin provides 336 Watts so technically 2x8 pin provides 1344 Watts
> ATX PCI-E: 6pin = 75 Watts, 8pin= 150 Watts
> 
> ...



Do you have a source for the wattage limit on the EPS/ATX plugs? I'd assume that the amp limitation is based on the wire gauge with a minimum requirement. And the EPS12V 8-pin is going to use gauge very similar to the PCIe 8-pin. I have a hard time believing a CPU 8-pin can carry 4 TIMES the power of a PCIe 8-pin.


----------



## seronx (Jun 14, 2011)

theeldest said:


> Do you have a source for the wattage limit on the EPS/ATX plugs? I'd assume that the amp limitation is based on the wire gauge with a minimum requirement. And the EPS12V 8-pin is going to use gauge very similar to the PCIe 8-pin. I have a hard time believing a CPU 8-pin can carry 4 TIMES the power of a PCIe 8-pin.




EPS
http://www.playtool.com/pages/psuconnectors/connectors.html#eps8

PCI-E
http://www.playtool.com/pages/psuconnectors/connectors.html#pciexpress8

ATX
http://www.playtool.com/pages/psuconnectors/connectors.html#atxmain24

If you notice

the EPS is basically a PURE 12 Volt cable no grounds

while the PCI-E is mostly grounds

While ATX is a variety pack


----------



## theeldest (Jun 15, 2011)

seronx said:


> EPS
> http://www.playtool.com/pages/psuconnectors/connectors.html#eps8
> 
> PCI-E
> ...



Hey that's awesome. That's the best/most comprehensive listing of power plugs and ratings that I've seen. It says the EPS 8-pin is rated to 336 watts ...


----------



## seronx (Jun 15, 2011)

theeldest said:


> Hey that's awesome. That's the best/most comprehensive listing of power plugs and ratings that I've seen. It says the EPS 8-pin is rated to 336 watts ...



I got mixed up xD

Ya, 336 Watts is the max for an 8pin

When I did the posts I was under the influence of having stayed up for 32-36 hours


----------



## Heavy_MG (Jun 15, 2011)

Thatguy said:


> The best way to test real consumption would be to grab the socket pins. Yeah I have touch probe and current meters that might be workable but nothing with beat the accuracy of a current bridge to get the most solid numbers to within a very small window of error.
> 
> I don't have time to build anything like it, but I bet some large company has it.I also know I don't have a top bin part either.


Testing how much power it takes from the wall with a kill-a-watt meter is a better measure.


----------



## cadaveca (Jun 15, 2011)

Heavy_MG said:


> Testing how much power it takes from the wall with a kill-a-watt meter is a better measure.



Um, no. That will tell you full system power only...hard drives, fans, graphics cards, etc...also, unless you already know exactly how efficient your PSU is, there's no way to get really accurate numbers for individual components.


----------



## Horrux (Jun 15, 2011)

cadaveca said:


> Um, no. That will tell you full system power only...hard drives, fans, graphics cards, etc...also, unless you already know exactly how efficient your PSU is, there's no way to get really accurate numbers for individual components.



...and because PSU efficiency varies according to power draw that's all bound to be a real headache.
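To put a number on that: a wall meter reads AC draw, so the DC load behind it depends entirely on the efficiency you assume at that load point. A minimal sketch (the efficiency figures are hypothetical, not measurements of any particular PSU):

```python
def dc_load_watts(wall_watts: float, psu_efficiency: float) -> float:
    """Estimate the DC-side load from an AC wall reading,
    given an assumed PSU efficiency at that load point."""
    return wall_watts * psu_efficiency

# The same 380 W wall reading implies very different component totals:
print(dc_load_watts(380, 0.80))  # 304.0 with an 80%-efficient unit
print(dc_load_watts(380, 0.90))  # 342.0 with a 90%-efficient unit
```

And since efficiency itself shifts with load, temperature, and rail balance, the wall reading can't isolate any single component's draw.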


----------



## cadaveca (Jun 15, 2011)

Exactly. Yet another reason why I do what I do in reviews.


----------



## cdawall (Jun 15, 2011)

Thatguy said:


> I got a 6 core running at 4 ghz on a 450watt power supply with a 5770 video card. Max load the system is pulling 380 from the wall.  not really seeing it. I don't have time right now to make a isolator socket with a current bridge.



Went well over that on mine with an old Crosshair III Formula and a quad core; the chips suck power. I smoked a pair of 650W CWT units running a 4850X2, was running the PSUs in tandem.


----------



## Horrux (Jun 16, 2011)

OK so *THAT* is why you need, say, a 750W PSU if you have 2 x 200W graphics cards and one "125W CPU" which is actually a 300W CPU...


----------



## Heavy_MG (Jun 16, 2011)

cadaveca said:


> Oh, look, here's* OVER 400W *going through an AMD CPU(Crosshair V Formula, 8-pin + 4-pin):
> 
> 
> 
> ...



Facepalm.JPG
You'll find anything to prove your point, won't you?
That is on *dry ice*, with the chip clocked at what? 5GHz? 6GHz?
Any normal OC on air will NOT consume 300W.


----------



## cadaveca (Jun 16, 2011)

Are you done yet?


----------



## Velvet Wafer (Jun 16, 2011)

cdawall said:


> Went well over that on mine with an old Crosshair III Formula and a quad core; the chips suck power. I smoked a pair of 650W CWT units running a 4850X2, was running the PSUs in tandem.



how many Watts did the system pull all in all, under bench load?


----------



## Heavy_MG (Jun 16, 2011)

cadaveca said:


> Are you done yet?



I should be asking you that question.
Everyone else ignored the fact; I pointed it out. And you just leave an immature comment.


----------



## cadaveca (Jun 16, 2011)

No, the truth of the matter is my boards won't give more than 280W, or I'd have my own video for ya. I can show 280W, no problem...and then the system shutting down.

lol.

Believe me, I have no reason to lie; the fact you suggest such is offensive, to be honest. Consider my position. As a reviewer, such things are death.

Also consider that the boards I have aren't even in retail yet.


----------



## Damn_Smooth (Jun 16, 2011)

You two are still bickering?

Anyways cadaveca, still waiting on that Crosshair V review. Any idea when I'll get to see it?


----------



## cadaveca (Jun 16, 2011)

Damn_Smooth said:


> You two are still bickering?
> 
> Anyways cadaveca, still waiting on that Crosshair V review. Any idea when I'll get to see it?



When they send me one? Maybe never. You know, I asked for the cheaper boards, and that's what I got, so it's up to ASUS now if they want to send me more.

In the end, they are going to basically be the same as what I got already, but have more features, and will overclock better. The PCIe x16 slots might be nice; it would be interesting to see if it really offers anything for Crossfire.


----------



## seronx (Jun 16, 2011)

cadaveca said:


> When they send me one? Maybe never. You know, I asked for the cheaper boards, and that's what I got, so it's up to ASUS now if they want to send me more.
> 
> In the end, they are going to basically be the same as what I got already, but have more features, and will overclock better. The PCIe x16 slots might be nice; it would be interesting to see if it really offers anything for Crossfire.




http://usa.asus.com/About_ASUS/Facilities_Branches/

You should tell them not to send you the Formula but to send you the Crosshair V Extreme 

http://www.amd.com/us/aboutamd/contact-us/Pages/contact-us.aspx

Don't forget to call AMD for a FX


----------



## H82LUZ73 (Jun 16, 2011)

Ok, look at this manual, and look at the section for CPU install: http://www.asrock.com/mb/manual.asp?Model=890FX%20Deluxe5 

Here is the only known shot of a BD FX ENG sample chip. NO ONE has them except the manufacturers; anything else posted is a LIE! Notice they blurred out "AMD ENG sample" and the lower-right lift or notch at the HS golden triangle....no other chip made by AMD has that.


----------



## cdawall (Jun 16, 2011)

Velvet Wafer said:


> how many Watt pulled the system all in all, under bench load?



Not a clue; we swapped to an 850W TT powering the CPU and a 1000W Silverstone for the card, and it didn't have any other issues. That particular AMD quad core was an ES chip, and a high-leak ES chip on top of that; at roughly 1.9v and 4.9GHz I would put it alone pulling 450+


----------



## Damn_Smooth (Jun 16, 2011)

seronx said:


> http://usa.asus.com/About_ASUS/Facilities_Branches/
> 
> You should tell them to not send you the formula one but send you the Crosshair V Extreme
> 
> ...



Has an Extreme edition been confirmed yet?


----------



## Velvet Wafer (Jun 16, 2011)

cdawall said:


> Not a clue. We swapped to an 850W TT powering the CPU and a 1000W Silverstone for the card, and it didn't have any other issues. That particular AMD quad core was an ES chip, and a high-leakage ES chip on top of that. At roughly 1.9V and 4.9GHz, I would put it alone at pulling 450W+.



How do you make both PSUs turn on at boot? Did you use a kind of adapter, or were the PSUs resoldered?
I've heard horrible stories about mixing PSUs. Is there something to that, or is it really no problem to use two PSUs?


----------



## seronx (Jun 16, 2011)

Damn_Smooth said:


> Has an Extreme edition been confirmed yet?



No

But

They will have to release something that does

x16/NC/x16/NC/x4 - x8/x8/x8/x8/x4 instead of something that does x16/NC/x16/x4 - x8/x8/x8/x4

Extreme(EATX) is best for 4-way SLI(4 PCI-E cards)(SS)/QuadSLI(2 PCI-e Cards)(DS) + Physx(1 PCI-E card)

Formula(ATX) is best for QuadSLI (2 590 GTXs)(DS), but no PhysX; THE DARN 2nd card will overlap the last PCI-e slot

890FX Extreme





890FX Formula





Now look at the 990FX Formula


----------



## Damn_Smooth (Jun 16, 2011)

seronx said:


> No
> 
> But
> 
> ...



Well, they have the ROG expander for 4 way Crossfire and SLI. I can't find how many lanes each slot has though.


----------



## seronx (Jun 16, 2011)

Damn_Smooth said:


> Well, they have the ROG expander for 4 way Crossfire and SLI. I can't find how many lanes each slot has though.



16x4

It's meant for SLI only but I guess people would go for that!


----------



## cdawall (Jun 16, 2011)

Velvet Wafer said:


> How do you make both PSUs turn on at boot? Did you use a kind of adapter, or were the PSUs resoldered?
> I've heard horrible stories about mixing PSUs. Is there something to that, or is it really no problem to use two PSUs?



I used a jumper on that one, but I have an adapter I use now. Mixing low-end PSUs can be shitty, but these were not, so the rails all held a similar output.


----------



## seronx (Jun 17, 2011)

Damn_Smooth said:


> Well, they have the ROG expander for 4 way Crossfire and SLI. I can't find how many lanes each slot has though.





seronx said:


> 16x4
> 
> It's meant for SLI only but I guess people would go for that!



The NF200 in heavy loads actually is worse than natural x8/x8/x8/x8 for that ROG Expander

1 NF200 chip 16/16 is worse than the 8/8 natural setup

Latency goes up and bandwidth goes down, and the drop is uneven

Running four 580s will show a huge speed drop compared to four 560 Tis


----------



## Damn_Smooth (Jun 17, 2011)

seronx said:


> The NF200 in heavy loads actually is worse than natural x8/x8/x8/x8 for that ROG Expander
> 
> 1 NF200 chip 16/16 is worse than the 8/8 natural setup
> 
> ...



That is stupid. You would think that Nvidia would make a chip that actually takes advantage of their cards.


----------



## Mussels (Jun 17, 2011)

Damn_Smooth said:


> That is stupid. You would think that Nvidia would make a chip that actually takes advantage of their cards.



they just wanted something to use to limit SLI capable boards initially.


----------



## Damn_Smooth (Jun 17, 2011)

So we're back to waiting for a Crosshair V extreme for those that really want quad SLI?

I could never afford it, so it doesn't affect me, but that really sucks for those that do want it.


----------



## Horrux (Jun 17, 2011)

What about crossfire mobos, they can do sli with the patch, any good quad-xfire boards that can do the trick?


----------



## seronx (Jun 18, 2011)

Horrux said:


> What about crossfire mobos, they can do sli with the patch, any good quad-xfire boards that can do the trick?



Bigger overhead than Natural SLi

and imagine drivers worse than ATi(2004 and before)


----------



## Horrux (Jun 18, 2011)

seronx said:


> Bigger overhead than Natural SLi
> 
> and imagine drivers worse than ATi(2004 and before)



How is the overhead bigger than with native SLI?

And what about the drivers, I don't understand what you mean?


----------



## Mussels (Jun 18, 2011)

Horrux said:


> How is the overhead bigger than with native SLI?
> 
> And what about the drivers, I don't understand what you mean?



worse scaling, more CPU power needed. you are hacking it to make it run, ofc its not going to be perfect.


----------



## Horrux (Jun 18, 2011)

Mussels said:


> worse scaling, more CPU power needed. you are hacking it to make it run, ofc its not going to be perfect.



Given that the only "hack" is a patch to the text string of the motherboard's model so that the nvidia drivers can allow SLI to run, meaning that it is the same hardware implementation, I don't see that that has anything to do with how well a crossfire mobo can run SLI. It's essentially the exact same technology, motherboard-wise.


----------



## Velvet Wafer (Jun 18, 2011)

Mussels said:


> worse scaling, more CPU power needed. you are hacking it to make it run, ofc its not going to be perfect.


You obviously didn't even take the time to inform yourself about how the SLI hack works... sorry Mussels



Horrux said:


> Given that the only "hack" is a patch to the text string of the motherboard's model so that the nvidia drivers can allow SLI to run, meaning that it is the same hardware implementation, I don't see that that has anything to do with how well a crossfire mobo can run SLI. It's essentially the exact same technology, motherboard-wise.


That is absolutely correct, Anatolymik himself couldnt have stated it better


----------



## Mussels (Jun 18, 2011)

i've read the thread with older versions of the hack, and they were far more complex - modding the drivers. sounds like anton has made progress.


----------



## Velvet Wafer (Jun 18, 2011)

Mussels said:


> i've read the thread with older versions of the hack, and they were far more complex - modding the drivers. sounds like anton has made progress.



He had to; Nvidia has already implemented countermeasures against the hack for the fifth time... also, after I had killed 2 OSes due to the complex implementation process, I whined so much in the SLI Hack thread that he made it a one-click updater... which just about anyone can use

It's Anatoly btw, he's Russian


----------



## twilyth (Jun 27, 2011)

meh, not sure there's anything new here, but I'll post it anyway.  Benchmarks are at the end but all the good stuff is blocked out.


----------



## Damn_Smooth (Jun 27, 2011)

twilyth said:


> meh, not sure there's anything new here, but I'll post it anyway.  Benchmarks are at the end but all the good stuff is blocked out.



I don't see what you're talking about.


----------



## repman244 (Jun 27, 2011)

Damn_Smooth said:


> I don't see what you're talking about.



Well he did say that all the good stuff is blocked out 

On a more serious note, I don't see anything either.


----------



## Damn_Smooth (Jun 27, 2011)

repman244 said:


> Well he did say that all the good stuff is blocked out
> 
> On a more serious note, I don't see anything either.



Somewhere in that white square lies the secret. We must decode it.


----------



## seronx (Jun 27, 2011)

twilyth said:


> meh, not sure there's anything new here, but I'll post it anyway.  Benchmarks are at the end but all the good stuff is blocked out.



[yt]-flGRobFzsQ[/yt]

Solved!


----------



## Damn_Smooth (Jun 27, 2011)

seronx said:


> Solved!



Now I'm disappointed. I don't trust this guy at all. It was nice seeing those boards, though.


----------



## seronx (Jun 27, 2011)

Damn_Smooth said:


> Now I'm disappointed. I don't trust this guy at all. It was nice seeing those boards, though.



Yeah, doing the math, it seems that Bulldozer actually has a high IPC and a low clock...


2.4-3.0 GHz like the Phenom 1s

Phenom I(4C) to K15(4C) = exactly 50% increase in performance
1 to 1.5

and 8C is
1 to 3

Stumbled on some old footnotes

1 AMD CMT core can do(2m = 8t)(4m = 16t)
2x Integer(1 execution core can do this)
or
2x Floating Point(1 execution core can do this)

1 Nehalem core can do
1+1x Integer
or
1+1x Floating Point

1 SB Core can do(4 core = 8 threads)
1+1 x Integer
or
1+1 x Floating Point

To make it easier to understand

AMD BD:
1 module can assign
2 Integer to 1 core(2 Nehalem Threads/ somewhat) while 2 Floating Point to 1 core(2 Nehalem threads/ somewhat)

Intel Nehalem:
1 core can assign
1 Integer to 1 thread while 1 Floating Point to 1 thread

Sandy bridge is tricky, I can't find any footnotes comparing to it
I can only assume that it is like AMD BD:
1 core can assign
1 Integer to 1 thread while 1 Floating Point to 1 thread

Bulldozer = Sandy Bridge in IPC

But actual synthetic core performance is weird

Bulldozer 8C @ 2.6GHz is equal to Sandy Bridge 8T @ 3.4GHz
800MHz difference

Leading me to believe Sandy Bridge is just a faster Nehalem/ somewhat
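
To put a number on that last comparison (both clocks are the post's claims, not measurements), the implied per-clock ratio is just the inverse of the clock ratio:

```python
# Clock figures as claimed in the post above -- not measured data.
bd_clock_ghz = 2.6   # claimed 8-core Bulldozer clock
sb_clock_ghz = 3.4   # claimed Sandy Bridge (8-thread) clock

# If both parts finish the same work in the same time, per-clock
# throughput must scale inversely with clock speed:
implied_per_clock_ratio = sb_clock_ghz / bd_clock_ghz
print(f"{implied_per_clock_ratio:.2f}x")  # 1.31x
```

So if the claim held, BD would need roughly 31% more per-clock throughput than SB, which is a big "if".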


----------



## heky (Jun 27, 2011)

If the video is legit, BD will suck. Voltage is 1.4 to 1.5V, the TDP is almost 190W, and the memory tests just suck compared to SB.


----------



## Heavy_MG (Jun 27, 2011)

heky said:


> If the video is legit, BD will suck. Voltage is 1.4 to 1.5v, the tdp is almost 190W and the memory tests just suck compared to SB.


The tested chip is an ES.
An ES chip will use higher voltage, is buggy (thus doesn't run as well as it should), and has a higher TDP than the final product released to the consumer.


----------



## WhiteLotus (Jun 27, 2011)

heky said:


> If the video is legit, BD will suck. Voltage is 1.4 to 1.5v, the tdp is almost 190W and the memory tests just suck compared to SB.



You mean, Bulldozer is going to be streets ahead of everything else that AMD have ever put out, but not quite as good as Intel's current high end/future offerings.

Honestly I don't see how Bulldozer is going to suck just because it's not going to be the best. That equation does not compute with me. It's still going to be a very powerful chip that will be over kill for *MANY MANY MANY* systems.


----------



## Horrux (Jun 27, 2011)

^^^^
I certainly hope so...


----------



## xenocide (Jun 27, 2011)

WhiteLotus said:


> You mean, Bulldozer is going to be streets ahead of everything else that AMD have ever put out, but not quite as good as Intel's current high end/future offerings.
> 
> Honestly I don't see how Bulldozer is going to suck just because it's not going to be the best. That equation does not compute with me. It's still going to be a very powerful chip that will be over kill for *MANY MANY MANY* systems.



The point he was getting at is people who are buying a $300 CPU expect it to at least be on par with the competition's offering around the same price range.  Sure, it would be great for many applications, but if that were the case why not save money and just go for a last-gen option?


----------



## Heavy_MG (Jun 27, 2011)

WhiteLotus said:


> You mean, Bulldozer is going to be streets ahead of everything else that AMD have ever put out, but not quite as good as Intel's current high end/future offerings.
> 
> Honestly I don't see how Bulldozer is going to suck just because it's not going to be the best. That equation does not compute with me. It's still going to be a very powerful chip that will be over kill for *MANY MANY MANY* systems.


I agree; while BD might not blow SB away, it will be a lot faster than anything AMD has out today. AMD's goal isn't to be the best, but to provide a competitive product that will satisfy most users' needs. However, for $300 it should at least be on par with a 2500K; if not, users will still buy from Intel.


----------



## WhiteLotus (Jun 27, 2011)

xenocide said:


> The point he was getting at is people who are buying a $300 CPU expect it to at least be on par with the competition's offering around the same price range.  Sure, it would be great for many applications, but if that were the case why not save money and just go for a last-gen option?





Heavy_MG said:


> I agree,while BD might not blow SB away,it will be a lot faster than anything AMD has out today. AMD's goal isn't to be the best,but to provide a competitive product that will satisfy most users needs. However,for $300 it should at least be on par with a 2500K,if not,users will still buy from Intel.



Then take that up with the sales team. Not the chip itself. Instead of saying, omg this chip is going to suck, say hmm AMD seem to be charging too much.


----------



## heky (Jun 27, 2011)

I am not saying BD will suck because it will not be the most powerful, but because, if the video is legit, it will use 2x the power of SB while being inferior or merely on par with it. On the same 32nm node and at the same price, that is just unacceptable!


----------



## repman244 (Jun 27, 2011)

heky said:


> If the video is legit, BD will suck. Voltage is 1.4 to 1.5v, the tdp is almost 190W and the memory tests just suck compared to SB.





heky said:


> I am not saying BD will suck because it will not be the most powerful, but because, if the video is legit, it will use 2x the power of SB while being inferior or merely on par with it. On the same 32nm node and at the same price, that is just unacceptable!



How do you know how much power it consumes? The 186W in CPU-Z is a bug; HWiNFO says ~125W *TDP* (and AFAIK BD will be able to shut down a whole module when it isn't in use, plus it's on 32nm). The voltage alone doesn't tell you anything (don't compare it to Intel's).

And don't forget, you are paying for 8 cores (and there will be 4- and 6-core versions, which will be cheaper)


----------



## heky (Jun 27, 2011)

repman244 said:


> The voltage doesn't say anything (don't compare it to Intel).



Voltage x amperage = wattage, so unless BD draws minimal amounts of current, it will consume heaps of energy!

And the fact that you pay for 8 cores only makes it suck even more, because it takes 8 AMD cores (even though they are not real cores) to compete with 4 Intel cores + HT.
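
The formula itself is fine; here's a minimal sketch of it (the 100A figure is a made-up example, since nobody here has measured BD's actual current draw):

```python
def power_watts(voltage_v, current_a):
    """Electrical power P = V * I, in watts."""
    return voltage_v * current_a

# e.g. 1.5 V core voltage at a hypothetical 100 A draw:
print(power_watts(1.5, 100))  # 150.0
```

The point being: without a real current figure, the voltage alone doesn't tell you the wattage.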


----------



## WhiteLotus (Jun 27, 2011)

heky said:


> Voltage x amperage = wattage so unless BD draws minimal amounts of current, it will consume heaps of energy!
> 
> And the fact you state you pay for 8 cores, only makes it suck even more, becouse it takes 8 AMD cores(even though they are not real cores) to compete with 4 Intel cores + HT.



Using what as evidence?

A chip that might not even be a real BD, some benchmarks that could be false, hell, the chip itself might be a very early development chip, like a Beta chip or something... a work in progress?

Stop dismissing the chip before anything official comes out, and until it actually gets released so the public can bench and compare without being biased.

:shadedshu


----------



## repman244 (Jun 27, 2011)

heky said:


> Voltage x amperage = wattage so unless BD draws minimal amounts of current, it will consume heaps of energy!
> 
> And the fact you state you pay for 8 cores, only makes it suck even more, becouse it takes 8 AMD cores(even though they are not real cores) to compete with 4 Intel cores + HT.



I know how to get wattage, and we still don't know the current. And don't forget AMD is using SOI, which can take more voltage than bulk. And of course it will consume more power at 100% load; it's 8 cores.

If you ask me, SB will be torn apart in multithreaded work (and that's where you need the cores); in single-threaded work it maybe comes close. 

You also forgot I mentioned the 4- and 6-core versions, which I believe will be priced competitively against Intel's lineup.


----------



## Horrux (Jun 27, 2011)

Aren't you guys confusing current drawn and thermal dissipation? They are different... If you are comparing an estimate of BD's current draw to SB's TDP, of course you will have a huge disparity. They aren't the same thing at all.  Although both are measured in watts.


----------



## theeldest (Jun 27, 2011)

Also, the chip is overclocked... 

If I recall correctly, overclocking increases current draw...


----------



## repman244 (Jun 27, 2011)

Horrux said:


> Aren't you guys confusing current drawn and thermal dissipation? They are different... If you are comparing an estimate of BD's current draw to SB's TDP, of course you will have a huge disparity. They aren't the same thing at all.  Although both are measured in watts.



That's why I used bold text for TDP.

And like you said TDP isn't very accurate for measuring power consumption. And also, Intel's TDP is not the same as AMD's TDP, AFAIK.

ACP should be more accurate at representing the power usage of the chip, but so far I have only seen it in the HP ProLiant server specifications


----------



## Horrux (Jun 27, 2011)

repman244 said:


> That's why I used bold text for TDP.
> 
> And like you said TDP isn't very accurate for measuring power consumption. And also, Intel's TDP is not the same as AMD's TDP, AFAIK.
> 
> ACP should be more accurate in representing the power usage of the chip, but so far I only saw that in the HP Proliant server specification



Yeah I can't remember which one uses what, but one is "Thermal Design Power" and one is "Typical Dissipated Power" or somesuch?  Bleh, I knew this stuff years ago, now it's all a bunch of fuzz... Can someone clarify?


----------



## heky (Jun 28, 2011)

WhiteLotus said:


> Using what as evidence?
> 
> A chip that might not even be a real BD, some benchmarks that could be false, hell, the chip itself might be a very early development chip, like a Beta chip or something... a work in progress?
> 
> ...



What is wrong with you? Can't you read? I said IF the video is legit!!! Oh, and believe me, there is a reason why the BD line got delayed: the frequency of the produced chips was too low to compete with current Intel offerings, so AMD is preparing a new stepping!
Mark my words, if BD doesn't beat SB, AMD is in truble.

I am not trying to diss the chip; I was an AMD supporter once (socket 939), but for 2 gens now they just can't find the right path, IMHO.


----------



## repman244 (Jun 28, 2011)

heky said:


> What is wrong with you? Cant you read? I said IF the video is legit!!! Oh and believe me, there is a reason why the BD line got delayed. The frequency of the produced chips was too low to compete with current intel offerings, so AMD is preparing a new stepping!
> Mark my words, if BD doesnt beat SB, AMD is in truble.
> 
> I am not trying to diss the chip, i was a AMD supporter once(socket 939), but for 2 gens now, they just cant find the right path.IMHO



Well, I think the video is probably legit (the guy definitely has the chips); the thing that bothers me is that SuperPI is absolutely useless and doesn't tell you anything.
I agree with the delay reasons; maybe the yields still weren't that good. Someone also mentioned (assumed) Llano was a priority, which could lead to BD being pushed back.

Well, I don't think AMD will be in trouble. Llano is turning out to be very good, especially for laptops and such (that's where the money is ), and don't forget about the server chips, which could provide a nice profit. AMD wasn't in trouble during Nehalem, so I don't see them in trouble now.
Agree on not finding the right path; they are a bit late with Llano and BD.


----------



## heky (Jun 28, 2011)

repman244 said:


> Well I i think the video is probably legit (the guy definetly has the chips), the thing that bothers me is that SuperPI is absolutely useless and doesn't tell anything.
> I agree with the delay reasons, maybe even the yields still weren't that good someone also mentioned (assumed) Llano was a priority which could lead to BD being pushed back.
> 
> Well I don't think AMD will be in trouble, Llano is turning out to be very good, especially for laptops and such (that's where the money is ), and don't forget about the server chips which could provide a nice profit. AMD wasn't in trouble during Nehalem so I don't see them in trouble now.
> Agree with not finding the right path, they are a bit late with Llano and BD.



Agree on the Llano chip, nice work there. I was mostly referring to desktop chips with the "in trouble" part.
But yeah, I am mostly just pissed off because of the long wait for "official" benchmarks.


----------



## Horrux (Jun 28, 2011)

I agree with AMD not being in that much trouble, unless BD turns out to be a complete dud that needs yet another delay and stepping after the upcoming one. Besides the mobile and server parts, AMD still has some GPUs which are said to be reasonably profitable and doing well. That takeover was a genius idea of... Was it Ruiz back then?


----------



## Thatguy (Jun 28, 2011)

heky said:


> What is wrong with you? Cant you read? I said IF the video is legit!!! Oh and believe me, there is a reason why the BD line got delayed. The frequency of the produced chips was too low to compete with current intel offerings, so AMD is preparing a new stepping!
> Mark my words, if BD doesnt beat SB, AMD is in truble.
> 
> I am not trying to diss the chip, i was a AMD supporter once(socket 939), but for 2 gens now, they just cant find the right path.IMHO



Just shut up, seriously. Shut up. AMD is doing just fine. They got slammed with orders for Llano and Bobcat products and maxed out fab capacity; it's obvious when you look at orders and revenues for the last 2 quarters. AMD is doing just fine. 

BD just has to be a good value, profitable, and OEM friendly. It should handily manage all of that; beating Intel is just a plus if it occurs.


----------



## techtard (Jun 28, 2011)

No need for fighting. We just need to wait a little longer and then we can review the performance of the real chips.

If they are competitive, hopefully it will cause some price cuts, and all of us consumers will win, regardless of brand preference.


----------



## tilldeath (Jun 28, 2011)

Not that it probably matters much as the end of Q2 is wrapping up but I just got off the phone with AMD and thought I'd take a long shot and ask about the FX release date. Only info is Q3 so take it for what it's worth.


----------



## Fatal (Jun 29, 2011)

tilldeath said:


> Not that it probably matters much as the end of Q2 is wrapping up but I just got off the phone with AMD and thought I'd take a long shot and ask about the FX release date. Only info is Q3 so take it for what it's worth.



So what's the plan now? You still need to get your watercooling stuff anyway. What's a few months?


----------



## techtard (Jun 29, 2011)

Maybe they will launch the 7000 series of GPUs and Bulldozer in the fall together. 
Or they ran into some problems and are reeling. We'll know soon enough.


----------



## TheMailMan78 (Jun 29, 2011)

heky said:


> What is wrong with you? Cant you read? I said IF the video is legit!!! Oh and believe me, there is a reason why the BD line got delayed. The frequency of the produced chips was too low to compete with current intel offerings, so AMD is preparing a new stepping!
> Mark my words, if BD doesnt beat SB, AMD is in truble.
> 
> I am not trying to diss the chip, i was a AMD supporter once(socket 939), but for 2 gens now, they just cant find the right path.IMHO



I totally agree AMD is in truble and truble is not a place you want to be.


----------



## NdMk2o1o (Jun 29, 2011)

Thatguy said:


> Just shut up, seriously. Shut up. AMD is doing  just fine.



LEAVE AMD ALONE!!!









TheMailMan78 said:


> I totally agree AMD is in truble and truble is not a place you want to be.



Big truble, but don't be dissing him. 

There is nothing to suggest any of the pics/benches here are real, and even if they are, they still tell us nothing. I for one will be waiting for the real deal, and will most likely build a BD HTPC when they come out


----------



## REAYTH (Jun 29, 2011)

truble indeed!!


----------



## Thatguy (Jun 29, 2011)

TheMailMan78 said:


> I totally agree AMD is in truble and truble is not a place you want to be.



They are selling every piece of silicon they can make right now; how is that "trouble"? If selling all the product you can produce is a problem, please include me in that group.


----------



## TheMailMan78 (Jun 29, 2011)

Bulldozer will be fine. If not buy Intel. The benches you see online are of an engineering sample IF they are even real. Waste of time even if they are real as engineering samples are not the final product. You would think this community would know better.



Thatguy said:


> the are selling every piece of silicon they can make right now, how is that " trouble " if selling all the product you can produce is a problem. Please include me in this group.



The joke. You missed it.


----------



## Velvet Wafer (Jun 29, 2011)

AMD will do fine... as has been mentioned time and time again, they are currently at the max production capacity of their fabs, producing APUs....
They simply don't have the capacity for anything besides a paper launch of BD... and I guess they'd rather wait than do that.


----------



## TheoneandonlyMrK (Jun 29, 2011)

TheMailMan78 said:


> Bulldozer will be fine. If not buy Intel. The benches you see online are of an engineering sample IF they are even real. Waste of time even if they are real as engineering samples are not the final product. You would think this community would know better.



again speaking some sense. As if AMD hasn't been in trouble before; it's still here, just like Intel.
IMHO, AMD just refocused early on, given possible poor BD speeds etc., to pay more attention to the E series and Llano for use in pads, phones, Dells, HPs (i.e. off-the-shelf PCs) etc., and to giving Nvidia a beating whilst they were on the front foot.
Their old CEO dismissed phones and pads and he went; then they sold plenty of chips for phones and pads, and the old CEO's lovechild BD got delayed. It's not brain surgery.

where is this truble place anyway??

Oh, and some peeps need some green. Seriously, stressed fanboy alert: get outside in some fields and chill out for a bit, it's soothing


----------



## seronx (Jun 30, 2011)

OBR: PCTuning






http://pctuning.tyden.cz/hardware/z...psetu-amd-990fx-procesory-ale-budou-az-v-zari

Google Translate 

990FX vs X58
FX-8130P ES vs 990X both @ 4GHz
2x580GTX

-----
I'm not supporting/endorsing these benchmarks; this is just the same guy we have been talking about


----------



## Horrux (Jun 30, 2011)

^^^
For an ES, that's not bad at all, IMO. IF they are legit.


----------



## Frick (Jun 30, 2011)

Horrux said:


> ^^^
> For an ES, that's not bad at all, IMO. IF they are legit.



I'd say pretty darn good if true. And if it translates to other tests/situations.


----------



## Red_Machine (Jun 30, 2011)

Horrux said:


> ^^^
> For an ES, that's not bad at all, IMO. IF they are legit.



But it's _worse_ than Intel's previous generation.  It'll most likely get its ass kicked by Sandybridge and be murdered by socket 2011 and Ivybridge.


----------



## Crap Daddy (Jun 30, 2011)

None of those games "benched" are CPU intensive.


----------



## Frick (Jun 30, 2011)

Red_Machine said:


> But it's _worse_ than Intel's previous generation.  It'll most likely get its ass kicked by Sandybridge and be murdered by socket 2011 and Ivybridge.



I would not be surprised if that was the case in some scenarios. But if it goes head to head with the 990X (which is a fast CPU) at a good price point, it's not too shabby.


----------



## Red_Machine (Jun 30, 2011)

But Bulldozer is _supposed_ to be competing with Sandybridge now.  If it was engineering sample issues, then we would be seeing performance like that against Sandybridge parts, not Nehalem.

I see Sandybridge parts outperforming Bulldozer parts by around 20% at the least.


----------



## NdMk2o1o (Jun 30, 2011)

Red_Machine said:


> But Bulldozer is _supposed_ to be competing with Sandybridge now.  If it was engineering sample issues, then we would be seeing performance like that against Sandybridge parts, not Nehalem.
> 
> I see Sandybridge parts outperforming Bulldozer parts by around 20% at the least.



It depends. IF (that word is being used a lot) it is a real BD ES, then it depends on which revision, as the 1st revision has been scrapped for a higher-clocked/tweaked revision, hence the delays. 

All still stupid speculation and nothing more, YAWNNNNNNNNNNN


----------



## seronx (Jun 30, 2011)

Horrux said:


> ^^^
> For an ES, that's not bad at all, IMO. IF they are legit.



I suspect this has the same issues as the K12 Llano... it's a multiplier-locked ES,

so it is still 2.8GHz


----------



## Crap Daddy (Jun 30, 2011)

According to the article, it was OC'd to 4GHz, the same as the OC on the Intel chip


----------



## seronx (Jun 30, 2011)

Crap Daddy said:


> According to the article it was Oc to 4GHz same as the OC on the Intel chip



It's an ES

It's most likely multiplier locked

it can be any of these 3 stock clocks

2.8 3.0 3.1 vs 990X 4.0


----------



## Mussels (Jun 30, 2011)

well geez, you're nice enough to leave behind people's comments that didn't break any rules and you cop shit over it.


time to clean this thread up a bit further then, my apologies to people who came here to read about bulldozer.


edit: done. Those involved know not to continue their previous 'discussion'


----------



## TheMailMan78 (Jun 30, 2011)

Anyway, I hear that BD should be out by the end of August. I can't wait to see the benches! I don't mind a little slower than Sandy, as long as it's within 10% of the performance at 50% of the price


----------



## Mussels (Jun 30, 2011)

personally i don't care if BD is slower than SB, it's about the pricing.


a lot of people don't realise there are two kinds of performance - per core/thread (EG, one core vs one core, which is faster) and COMBINED power, where it could be 8 BD cores vs 4 SB cores w/ HT.

if you're gaming, where single-threaded performance is still key, then SB may lead - but once you start running more and more background tasks simultaneously like i do (say, encoding H264 MKV files), then the CPU with more cores and greater overall performance becomes the far better choice.


If BD and SB (mobo + CPU + RAM combos) are equally priced and it varies depending on what tasks you do, then that's fine. there is a balance.

but if BD is 10% slower and 15% cheaper... well, we know what will be popular with the masses.
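
to put toy numbers on those two kinds of performance (the figures are completely made up, just to show the trade-off):

```python
# Hypothetical CPUs: many slower cores vs fewer faster cores.
bd = {"cores": 8, "per_core": 1.0}   # made-up relative numbers
sb = {"cores": 4, "per_core": 1.5}

def single_thread(cpu):
    # lightly threaded work only sees one core's speed
    return cpu["per_core"]

def combined(cpu):
    # fully threaded work sees all cores at once
    return cpu["cores"] * cpu["per_core"]

print(single_thread(sb) > single_thread(bd))  # True -> fewer-faster wins lightly threaded work
print(combined(bd) > combined(sb))            # True -> more-slower wins heavily threaded work
```

which CPU "wins" depends entirely on which of the two numbers your workload cares about.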


----------



## TheMailMan78 (Jun 30, 2011)

Mussels said:


> personally i dont care if BD is slower than SB, its about the pricing.
> 
> 
> a lot of people dont realise there are two kinds of performance - per core/thread (EG, one core vs one core, which is faster) and COMBINED power, where it could be 8 BD cores vs 4 SB cores w/ HT.
> ...



That's always been the case with multi-tasking and encoding. Hell, if it's i7 performance in single-threaded apps I'll be happy. Remember, the Phenom II used to tickle the feet of the i7 in some benches and beat it in others, and the Phenom II architecture is almost 10 years old. 

But its all about price.


----------



## seronx (Jun 30, 2011)

TheMailMan78 said:


> Thats always been the case of multi-tasking and encoding. Hell if its i7 performance in single threaded apps Ill be happy. Remember the Phenom II used to tickle the feet of the i7 in some benches and beat it in others and the Phenom II architecture is almost 10 years old.
> 
> But its all about price.



Sad piece...

Performance doesn't go up...

Bulldozer is basically just shrinking Phenom II

But the way the components are arranged is a lot different

But it comes out the same way

Single thread performance is linear

Multi thread performance is exponential

Bulldozer 1C 100% Performance
Bulldozer 8C 1120% Performance
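
Taking those figures at face value (they are claims, not benchmarks), the implied per-core contribution is easy to check:

```python
claimed_total_pct = 1120   # the post's figure for 8 cores
cores = 8

# Average contribution per core under that claim:
per_core_pct = claimed_total_pct / cores
print(per_core_pct)  # 140.0 -> each core would "count" as 1.4 cores,
                     # i.e. super-linear scaling, which real chips don't show
```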


----------



## TheMailMan78 (Jun 30, 2011)

seronx said:


> Sad piece...
> 
> Performance doesn't go up...
> 
> ...



I'm a little confused by this post. Please clarify. 
You start off saying it will flop then end it with exponential?


----------



## repman244 (Jun 30, 2011)

seronx said:


> Sad piece...
> 
> Performance doesn't go up...
> 
> ...



Shrunk Phenom II?! Sorry, but that is not true; it is totally redesigned and has nothing to do with Phenom II whatsoever.


----------



## seronx (Jun 30, 2011)

TheMailMan78 said:


> I'm a little confused by this post. Please clarify.
> You start off saying it will flop then end it with exponential?



Architecturally, looking at 1 core you are looking at Phenom II

but if you look at it as a whole, the more cores utilized the more performance

Bulldozer in theory should technically equal Phenom II

But there is a 33% increase in performance due to rearranging the resources

The 2nd core in the module does something "weird": when in use it doesn't increase performance the way a 2nd CMP core would

Single thread to Dual thread applications
CMP: 100% -> 200%
CMT: 100% -> 280%

This is due to the balanced nature of Bulldozer's CMT
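For comparison, CMT scaling is usually modelled with a second-core factor rather than the figures claimed above; the ~0.8 used below is the commonly cited AMD marketing number for a Bulldozer module, taken here purely as an assumption:

```python
# Thread-scaling sketch for CMP vs CMT. The ~0.8 second-core factor is the
# commonly cited AMD marketing figure for a Bulldozer module, used here as
# an assumption; everything below is illustrative, not measured data.

def cmp_scaling(threads: int) -> float:
    # Independent full cores: each thread adds 100%.
    return threads * 1.0

def cmt_scaling(threads: int, second_core_factor: float = 0.8) -> float:
    # Two threads per module share a front end and FPU, so the second
    # thread on a module adds only a fraction of a full core.
    full_modules = threads // 2
    lone_thread = threads % 2
    return full_modules * (1.0 + second_core_factor) + lone_thread * 1.0

print(cmp_scaling(2))   # 2.0
print(cmt_scaling(2))   # 1.8
```

Under this model a module gives 180% of one core's throughput for two threads, not 280%, which is the more conservative reading of CMT.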



repman244 said:


> Shrinked Phenom II?! Sorry but that is not true, it is totally redesigned and has nothing to do with Phenom II whatsoever.



It is Phenom II but the way everything is done is rearranged

K7 -> K8 -> K10 all looked the same because they were the same

K15 looks different because it uses a more balanced method: it has to balance resources between two balanced Phenom II cores and two balanced FPUs (fused into 1 by FMA4/XOP/CVT16)


----------



## TheMailMan78 (Jun 30, 2011)

seronx said:


> Architecturally looking at 1 core you are looking at phenom II
> 
> but if you look at it at a whole the more cores utilized the more performance
> 
> ...



Thanks for clearing that up!

Do you have links to where it shows the cores being the same as the Phenom?


----------



## repman244 (Jun 30, 2011)

seronx said:


> It is Phenom II but the way everything is done is rearranged
> 
> K7 -> K8 -> K10 all looked the same because they were the same
> 
> K15 looks different because it uses a more balanced method it has to balance resources between two balanced Phenom II cores and two balanced FPUs(Fused into 1 by FMA4/XOP/CVT16)



Bulldozer is a totally new design and it is not just rearranged Phenom II design. In fact BD doesn't use 2 cores but it uses a module (within a module there are 2 "cores" and I could be wrong but those 2 "cores" share the integer part, no?)

And what exactly is the balanced method we are talking about here?


----------



## TheMailMan78 (Jun 30, 2011)

repman244 said:


> Bulldozer is a totally new design and it is not just rearranged Phenom II design. In fact BD doesn't use 2 cores but it uses a module (within a module there are 2 "cores" and I could be wrong but those 2 "cores" share the integer part, no?)
> 
> And what exactly is the balanced method we are talking about here?



Thats kinda the way I understood it. Thats why I asked for a link to see where he got this other info from.


----------



## Red_Machine (Jun 30, 2011)

Well, Bulldozer could be based on the Phenom II in the way the Core series was based on the Pentium III.  There's a lot of differences, but that architecture is basically a die-shrink with the NetBurst FSB bolted on and a few enhancements and additions over the years.


----------



## repman244 (Jun 30, 2011)

Just to clear this up a bit: http://blogs.amd.com/work/2010/08/02/the-bulldozer-blog/

A random quote:


> Our next generation core architecture, a complete new design from the ground up, is called “Bulldozer.”


----------



## seronx (Jun 30, 2011)

TheMailMan78 said:


> Thanks for clearing that up!
> 
> Do you have links to where it shows the cores being the same as the Phenom?



http://images.anandtech.com/reviews/cpu/amd/hotchips2010/bulldozeruarch.jpg

http://images.anandtech.com/reviews/cpu/amd/hotchips2010/p2uarch.jpg

repman244 said:


> Bulldozer is a totally new design and it is not just rearranged Phenom II design. In fact BD doesn't use 2 cores but it uses a module (within a module there are 2 "cores" and I could be wrong but those 2 "cores" share the integer part, no?)
> 
> And what exactly is the balanced method we are talking about here?



First quote comment, I think the guy who made the FPU portion of Phenom II had a photo bomb, how did that Bulldozer FPU get there!!!



TheMailMan78 said:


> Thats kinda the way I understood it. Thats why I asked for a link to see where he got this other info from.





Red_Machine said:


> Well, Bulldozer could be based on the Phenom II as the Core series was based on the Pentium III.  There's a lot of differences, but the architecture is basely a die-shrink with NetBurst bolted on and a few enhancements and additions over the years.





repman244 said:


> Just to clear this up a bit: http://blogs.amd.com/work/2010/08/02/the-bulldozer-blog/
> 
> A random quote:



It isn't exactly Phenom II, Each core without including the possibility of "CMT" performs exactly like Phenom II

Phenom II Core 0 is over-provisioned with resources (1/3rd too many)

Bulldozer Cores 0&1 use the same resources (but it is balanced between both of the cores, it's a 1 to 1 ratio instead of the 1.33 to 1 ratio of the Phenom II)

2 Phenom II cores have 2.66x resources

2 Bulldozer cores have 2x resources (1 Bulldozer core also has 2x resources, as resources are shared between the cores)

It's fixing Phenom II but it is Phenom II in a way

It is an 8-core Phenom II but it is shrunk

If AMD made an 8-core Phenom II it would be 800% big

Bulldozer shrunk that 8-core Phenom II  to 600% big

Edit: furk I got Phenom II picture and another phenom II picture

Edit2: All these leaks and talks about the Bulldozer are all so old....2007->2009 come on.....


----------



## repman244 (Jun 30, 2011)

seronx said:


> http://images.anandtech.com/reviews/cpu/amd/hotchips2010/bulldozeruarch.jpg
> 
> http://images.anandtech.com/reviews/cpu/amd/hotchips2010/p2uarch.jpg
> 
> ...



Well what is it now?

Bulldozer is a 4 module CPU and has nothing to do with Phenom II (let alone being a shrink of Phenom II), the so called "cores" are designed from the ground up (and yes, the "core" is smaller than Phenom II's core), but using a BD "core" and comparing it with Phenom's core for a performance analysis is for me a no-go; there are just too many changes/variables here.

Until we get some proper numbers this is all pretty much useless.


----------



## seronx (Jun 30, 2011)

repman244 said:


> Well what is it now?
> 
> Bulldozer is a 4 module CPU and has nothing to do with Phenom II (let alone shrink of Phenom II), the so called "cores" are designed from ground up (and yes the "core" is smaller than Phenom II's core), but using BD "core" and comparing it with Phenom's Core a performance analysis is for me a no-go, there are just to many changes/variables here.
> 
> Until we get some proper numbers this is all pretty much useless.



http://img231.imageshack.us/img231/3553/bulldozer5.png

It is and it isn't

AMD marketing may say it's a completely new design, but it is a Phenom II redesigned, simplified and pushed into a small space and made to do work like a sweat shop: where the Phenom II had a luxurious open office with slouching workers, Bulldozer is a compact sweat shop with no slouching workers

Where Phenom II wasted resources and didn't use workers a lot (33% of resources and workers were not used)

Bulldozer uses all resources and workers efficiently (0% of resources and workers go unused)

Phenom II = Over-provisioned
Bulldozer = Well balanced

Edit: That image I showed...it doesn't show the second unified integer/memory unit thingy


----------



## Thatguy (Jun 30, 2011)

Seronx, Bulldozer IS NOT PHENOM II, stop spreading FUD


----------



## TheMailMan78 (Jun 30, 2011)

seronx said:


> http://img231.imageshack.us/img231/3553/bulldozer5.png
> 
> It is and it isn't
> 
> ...



I think the point people are trying to make man is we all know this isn't a new X86. That we know. What it is however is a redesign. Even your own graphs confirm this.

Now.....how it will work? Who the hell knows at this point.


----------



## seronx (Jun 30, 2011)

TheMailMan78 said:


> I think the point people are trying to make man is we all know this isn't a new X86. That we know. What it is however is a redesign. Even your own graphs confirm this.
> 
> Now.....how it will work? Who the hell knows at this point.



My prediction is it will work within 25% -> 75% more performance


At worst the 8C Bulldozer(2007):
http://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i5-2500K+@+3.30GHz
At 50% prediction(2009):
http://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i7-2600K+@+3.40GHz
At 75%, if you follow the trail of 25% more performance every 2 years since BD's development started in 2005:
http://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i7+995X+@+3.60GHz


----------



## Thatguy (Jun 30, 2011)

seronx said:


> My prediction is it will work within 25% -> 75% more performance
> 
> 
> At worst the 8C Bulldozer(2007):
> ...



stop with your "predictions", tell Intel marketing I said HI


----------



## seronx (Jun 30, 2011)

Thatguy said:


> stop with your "predictions" tell intel marketing I siad HI



Well the first 2 aren't my predictions actually...

2007 AMD said 33% more performance over Barcelona...I tuned that down to 25%
^Anandtech

2009 Industrial Engineers who knew the architecture said 50% more performance over Barcelona+
^ Donhambier whatever


----------



## Red_Machine (Jun 30, 2011)

Thatguy said:


> Stop with the bullshit. wiat for benchs. Intel paying well these days ?



Do you accuse everyone of working for Intel?


----------



## repman244 (Jun 30, 2011)

seronx said:


> Well the first 2 aren't my predictions actually...
> 
> 2007 AMD said 33% more performance over Barcelona...I tuned that down to 25%
> ^Anandtech
> ...



Is that 50% more per core or...?



Thatguy said:


> Stop with the bullshit. wiat for benchs. Intel paying well these days ?



It's called speculating and discussing


----------



## Tatty_One (Jun 30, 2011)

seronx said:


> Well the first 2 aren't my predictions actually...
> 
> 2007 AMD said 33% more performance over Barcelona...I tuned that down to 25%
> ^Anandtech
> ...



That's not really a prediction, that's a ballpark quote. A 25% - 75% performance increase statement (or prediction) pretty much tells me that no one has any idea of how much better it will be, and it is only marginally better than saying 0 - 100%, which in the end pretty much does make it speculative. 

Had to tidy this up again, if some of you guys can't squabble with class then don't squabble at all.


----------



## seronx (Jun 30, 2011)

repman244 said:


> Is that 50% more per core or...?
> ---
> It's called speculating and discussing



50% more per CPU... as for Donhambier <-- I am just going to call this guy the Swede, yeesh

He is captain obvious 
6 cores -> 8 cores = 50% more cores!!
4 cores/8 threads -> 8 cores = I don't even know....



Tatty_One said:


> Thats not really a prediction, thats a ballpark quote, a 25% - 75% performance increase statement (or prediction) pretty much tells me that noone has any idea of how much it will be better and is only marginally better than saying 0 - 100%, which in the end pretty much does make it speculative.
> 
> Had to tidy this up again, if some of you guys cant squabble with class then don't squabble at all.



No one knows but there was a Cray benchmark

32 core BD 1.8GHz 2P vs 24 core MC 1.9GHz 2P/48 core MC 1.9GHz 4P

and Bulldozer was 0.6x-1.3x better

and it averaged 0.9x (aka it performed worse)

http://arstechnica.com/business/new...enchmarks-may-give-glimpse-of-amds-future.ars

http://www.realworldtech.com/page.cfm?ArticleID=RWT033011040021

But these were Server workloads not desktop/gaming workloads
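As a side note on how a "0.6x to 1.3x, averaging 0.9x" spread gets summarized: for speedup ratios the geometric mean is the usual choice, since a 0.5x and a 2.0x result then cancel out. The ratios below are made up for illustration, not the actual Cray data:

```python
# Summarizing a spread of benchmark ratios. For speedup ratios the geometric
# mean is the fairer average: 0.5x and 2.0x cancel to 1.0x. The example
# ratios are made up, not the actual Cray results.
import math

def geo_mean(ratios):
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

print(round(geo_mean([0.5, 2.0]), 12))          # 1.0 by construction
print(round(geo_mean([0.6, 0.8, 0.9, 1.0, 1.3]), 2))
```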


----------



## repman244 (Jun 30, 2011)

seronx said:


> No one knows but there was a Cray benchmark
> 
> 32 core BD 1.8GHz 2P vs 24 core MC 1.9GHz 2P
> 
> ...



Do you maybe know what exactly does Cray benchmark measure? I'm a bit rusty with the benchmarks.

EDIT: Reading the second link I saw this:


> The STREAM triad benchmark showed particularly poor memory system performance for Interlagos: ~6GB/s per socket, compared to ~27GB/s for Magny-Cours



And a bunch of other constraints, maybe the same thing is happening with these Zambezi ES leaks (very crippled chips), but that's just a guess.
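For reference, the STREAM triad kernel mentioned in that quote is just `a[i] = b[i] + scalar * c[i]` over large arrays, with bandwidth computed as bytes touched over elapsed time. A minimal sketch of its shape (pure Python, so the printed figure says nothing about real hardware):

```python
# Shape of the STREAM triad kernel: a[i] = b[i] + scalar * c[i], with
# bandwidth = bytes touched / elapsed time. Pure Python is interpreter-bound,
# so the printed number is illustrative only, not a real bandwidth figure.
import time

def triad(a, b, c, scalar):
    # STREAM triad: one write and two reads per element.
    for i in range(len(a)):
        a[i] = b[i] + scalar * c[i]
    return a

n = 100_000
a, b, c = [0.0] * n, [1.0] * n, [2.0] * n

t0 = time.perf_counter()
triad(a, b, c, 3.0)
elapsed = time.perf_counter() - t0

gbytes = 3 * n * 8 / 1e9   # 3 arrays of 8-byte floats touched once each
print(f"~{gbytes / elapsed:.3f} GB/s (interpreter-bound, shape only)")
```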


----------



## seronx (Jun 30, 2011)

repman244 said:


> Do you maybe know what exactly does Cray benchmark measure? I'm a bit rusty with the benchmarks.



Himeno: Effect of cache on performance and calculation size

Parallel BZIP2: SMP Data Compression Software

C-ray: Ray Tracing

Fast Fourier Transform: Linpack somewhat http://en.wikipedia.org/wiki/Fast_Fourier_transform

Jacobi SOR: I dunna know

Monte Carlo Pi: Area of a circle...?

the other two dunna know..

Sparse Matrix Multiply

Dense LU Factorizing

Bulldozer was better somewhat in...
Himeno, Bzip2, C-ray, and Pi
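"Monte Carlo Pi" is indeed the area-of-a-circle trick: sample random points in the unit square and count the share that lands inside the quarter circle, then multiply by 4. It is trivially parallel, which is why it shows up in multi-core benchmark suites. A minimal sketch:

```python
# Monte Carlo Pi: estimate pi by random sampling. Points are drawn in the
# unit square; the fraction inside the quarter circle approaches pi/4.
import random

def monte_carlo_pi(samples: int, seed: int = 0) -> float:
    rng = random.Random(seed)   # fixed seed for reproducibility
    inside = sum(
        1 for _ in range(samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * inside / samples

print(monte_carlo_pi(100_000))   # statistically close to pi
```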



repman244 said:


> EDIT: Reading the second link I saw this:
> 
> 
> > The STREAM triad benchmark showed particularly poor memory system performance for Interlagos: ~6GB/s per socket, compared to ~27GB/s for Magny-Cours
> ...



ES are crippled...but what is crippled we dunna know

They made predictions of what is crippled but are there more crippled components?


----------



## repman244 (Jun 30, 2011)

seronx said:


> Himeno: Effect of cache on performance and calculation size
> 
> Parallel BZIP2: SMP Data Compression Software
> 
> ...



Thank you for that info; as you can see in my edited post above, I think that is the reason for the weird scores.
And we don't know what else has been crippled, or how good the MB support for the chips was.

It's very hard to say anything about BD performance based on those numbers.


----------



## seronx (Jun 30, 2011)

repman244 said:


> Thank you for that info, as you can see in my edited post above I think that is the reason for the weird scores.
> And we don't know what else has been crippled and how good was the MB support for the chips.
> 
> It's very hard to say anything about BD performance based on those numbers.



Especially desktop BD Performance since we do not know the clock speeds for desktop Bulldozers...


----------



## Horrux (Jun 30, 2011)

And given that Cray benches are designed for server or supercomputer purposes, where memory bandwidth can be everything, it is obvious that the dual channel memory design of the upcoming desktop parts is crippled compared to the quad channel server parts. That is no surprise at all.

Still, saying that BD is Phenom II somewhat tweaked and shrunk is pure misinformation. You can't do that to a CPU. Well, maybe in 1980 you could, but not today. When making such changes as the cores-to-modules move that AMD is making, a complete redesign is required. I don't know much, but I know timings are one important issue at current scales and frequencies.


----------



## seronx (Jun 30, 2011)

Horrux said:


> And given that Cray benches are designed for server or supercomputer purposes, where memory bandwidth can be everything, it is obvious that the dual channel memory design of the upcoming desktop parts are crippled compared to the quad channel server parts. That is no surprise at all.
> 
> Still, saying that BD is Phenom II somewhat tweaked and shrunk is pure misinformation. You can't do that to a CPU. Well maybe in1980 you could but not today. When making such changes as the cores to modules that AMD is making, a complete redesign is required. I don't know much, but I know timings are one important issue at current scales and frequencies.



I am talking about the Core....mainly the core -> Integer/Memory specifically 

Bulldozer as a Module doesn't look like Phenom II
but,
Bulldozer cores as a Integer Core compared to Phenom IIs integer core

is basically the change from
VLIW5 to VLIW4
Same architecture, just one is easier to code for (edit: or should I say more efficient?) *cough*VLIW4*cough*

Bulldozer's 2C -> 3 unified pipelines over Phenom II's 1C 6 dedicated pipelines any day

6GB/s is pretty bad vs 27GB/s


----------



## Thatguy (Jun 30, 2011)

seronx said:


> I am talking about the Core....mainly the core -> Integer/Memory specifically
> 
> Bulldozer as a Module doesn't look like Phenom II
> but,
> ...



Bulldozer is not PII Stop saying it is.


----------



## seronx (Jun 30, 2011)

Thatguy said:


> Bulldozer is not PII Stop saying it is.



I'm going to be blunt....

Bulldozer is K7....

It has a lot more stuff going for it than just being another dirty K7 clone like

K8->K10->K10.5

It didn't help AMD having an architecture run from 1999 to 2011 with no optimization of the core design

Netburst -> Core2 -> Core i7 -> Core i7 2nd gen

Were all basically improved core designs....

K7 vs Netburst, Loss
K8 vs Core 2, Loss
K10/K10.5 vs Nehalem, Loss
K15 vs Sandy Bridge = Loss? 

The nature of things is that AMD will lose regardless....

And since the redesign still has the image of the K7, well...

It's a lot more efficient than K7: it dropped that lousy 3rd ALU/AGU and 3rd FPU for
a 2 ALU/AGU design + 1 FPU that can address 2 cores



But, it still has that lousy AMD spirit, "If it isn't broken, don't fix it"


----------



## repman244 (Jun 30, 2011)

seronx said:


> I'm going to be blunt....
> 
> Bulldozer is K7....
> 
> ...



Wasn't it K8 vs. Netburst, where Netburst (Pentium 4 and the horrible Pentium D) got raped by Athlons?


----------



## seronx (Jun 30, 2011)

repman244 said:


> Wasn't it K8 vs. Netburst, where Netburst (Pentium 4) got raped by Athlons?



1999 K7
2000 Netburst
2003 K8
2006 Core 2
2007 K10
2008 Nehalem
2009 K10.5
2011 Sandy Bridge
2011 Bulldozer
2013 Haswell

Well actually this makes it looks even more tragic


----------



## renq (Jun 30, 2011)

seronx said:


> Netburst -> Core2 -> Core i7 -> Core i7 2nd gen



You forgot the 1st gen Core CPUs, which, though only for laptops etc., brought Intel out of the slump/dead end Netburst had led it to.


----------



## Red_Machine (Jun 30, 2011)

seronx said:


> Netburst -> Core2 -> Core i7 -> Core i7 2nd gen



Actually, the Core series is basically a die-shrunk Pentium M, which is basically a die-shrunk Pentium III with the Netburst FSB bolted on.  The Netburst architecture itself was discontinued after the Pentium D.


----------



## repman244 (Jun 30, 2011)

seronx said:


> 1999 K7
> 2000 Netburst
> 2003 K8
> 2006 Core 2
> ...



Lol what? K8 went vs Netburst, remember Athlon64, FX days? Pentiums were getting raped; even the later Pentium D was getting raped by Athlon X2's


----------



## seronx (Jun 30, 2011)

repman244 said:


> Lol what? K8 went vs Netburst, remember Athlon64, FX days? Pentiums were getting raped even later Pentium D was getting raped by Athlon x2's



No it was the opposite...



> In many tests the Athlon 64 FX62 performs better than the Core 2 Duo E6400. Still, the FX-62 is slower than the Core 2 Duo E6600, E6700 and X6800 microprocessors.



http://www.cpubenchmark.net/cpu.php?cpu=AMD+Athlon+64+FX-62+Dual+Core

http://www.cpubenchmark.net/cpu.php?cpu=Intel+Core2+Duo+E6600+@+2.40GHz

The Architecture is meant to compete downward not upward

K15 -> Haswell


----------



## repman244 (Jun 30, 2011)

seronx said:


> No it was the opposite...
> 
> 
> 
> ...



Read again what I wrote. I did not mention Core 2 duo's.
K8 was to counter Netburst AFAIK


----------



## seronx (Jun 30, 2011)

repman244 said:


> Read again what I wrote. I did not mention Core 2 duo's.
> K8 was to counter Netburst AFAIK



1999 K7 
^
2000 Netburst
v
2003 K8
v
2006 Core 2
^
2007 K10
^
2008 Nehalem
^
2009 K10.5
^^
2011 Sandy Bridge
^^
2011 Bulldozer
????
2013 Haswell

look again



We're off topic again

K7 = Netburst
K8 = Core2
Then what the... AMD?!!?!?!
K10 = Netburst/Core2
K10.5 = Core2/Nehalem

AMD was competing downward while Intel was riding them


----------



## repman244 (Jun 30, 2011)

seronx said:


> 1999 K7
> ^
> 2000 Netburst
> v
> ...




I see that for 3 years Intel was getting raped; I don't know why you are saying K8 is vs Core2. It was the other way around, Intel released Core 2 to counter K8 (successfully, I have to say)
You cannot deny AMD was the best at that time, and that was the only time (as far as I remember) they had chips at $1000.


----------



## seronx (Jun 30, 2011)

repman244 said:


> I see that for 3 years Intel was getting raped, I don't know why are you saying K8 is vs Core2. It was the other way around, Intel released Core 2 to counter K8 (successfully I have to say)
> You cannot deny AMD was the best at that time, and the only time (as long as I remember) they had chips at $1000.



I'm stopping until a production pilot Bulldozer leak

Edit: I am back and well

Athlon 64/K8 and FX/K15

Have a lot in common

http://img39.imageshack.us/img39/603/dozer20fire.jpg

OH LAWD IT BLEW UP!


----------



## Horrux (Jul 1, 2011)

Am I the only one who feels seronx is spreading FUD and being a bit of a jerk?


----------



## cadaveca (Jul 1, 2011)

Not everyone's sense of humour is the same.


----------



## Velvet Wafer (Jul 1, 2011)

I smell severe Intel fanboyism, covered by what should look like real knowledge, in a thread, where it does not belong.

You crap up this thread severely, so please stay silent, as you have promised, until AMD has released some silicon that's worth looking at,
which will give at least some useful benches without myriads of bugs



cadaveca said:


> Not everyone's sense of humour is the same.



Yeah, some people just have a very bad one... that's simply provocative, without sense.


----------



## xenocide (Jul 1, 2011)

Velvet Wafer said:


> I smell severe Intel fanboyism, covered by what should look like real knowledge, in a thread, where it does not belong.
> 
> You crap this thread severely, so, please stay silent,as you have promised, until AMD has released some silicone, thats worth looking at,
> which will give at least some useful benches, without myriads of bugs



He made a few good points at first, but it has quickly diminished into speculation and opinion.


----------



## Melvis (Jul 1, 2011)

seronx said:


> I'm stopping until a production pilot Bulldozer leak
> 
> http://img39.imageshack.us/img39/603/dozer20fire.jpg
> 
> OH LAWD IT BLEW UP!



It just couldn't handle the Awesomeness


----------



## TheoneandonlyMrK (Jul 1, 2011)

im now pondering when, and in what form, the enhanced Bulldozer will ship, as it was rumoured to come not long after the original Bulldozer launch (BD Q2/Q3, enhanced Q4-Q1 2012). but what now that BD has slipped those quarters?


----------



## seronx (Jul 1, 2011)

Horrux said:


> Am I the only one who feels seronx is spreading FUD and being a bit of a jerk?



2000-2003: The Hammer fell short
2009-2011: The Bulldozer's battery died

But after the AMD CPU released...it blew everyone's expectations away

The one thing you should learn from history is that history tends to repeat in irony



Melvis said:


> It just couldn't handle the Awesomeness



Exactly


----------



## TheoneandonlyMrK (Jul 1, 2011)

I don't fully understand AMD's logic: the A350 uses BD cores and is out first, while the A4-6 use Stars (Phenom II) cores. It doesn't make sense to me


----------



## Mussels (Jul 1, 2011)

seronx: stop with the thread crapping. i dont want to have to clean this thread up again.


----------



## tilldeath (Jul 1, 2011)

Fatal said:


> So what's the plan now? You still need to get your watercooling stuff any way. What's a few months



well, since no waterblocks out there are listed as AM3+ compatible, I gotta wait on that as well. I did order compression fittings and tubing though.


----------



## erocker (Jul 1, 2011)

tilldeath said:


> well since no waterblocs out are am3+ compatible listed I gotta wait on that as well. I did order compression fittings and tubing though.



Mounting holes are the same as AM3, AM2+, AM2, s939 etc..


----------



## Mussels (Jul 1, 2011)

yeah, AMD have been nice with their cooler compatibility.


----------



## Horrux (Jul 1, 2011)

Mussels said:


> yeah, AMD have been nice with their coolers compatibility.



And sockets, although I find it to be a bummer that AM3+ chips can't go into AM3 mobos.

And somewhere down the line, tri- or quad-channel memory will have to find its way into the AMD camp...


----------



## ViperXTR (Jul 1, 2011)

uh, as far as i can remember AthlonXP (T-bred B/Barton) held a decent line against the NetBurst Pentium 4's even while having much lower clock speeds (my AXP 2500+ Barton still functions up to now lol), Athlon64/Athlon64 X2/Athlon64 FX pretty much raped the Pentium 4's/Pentium D's afaik, but Intel launched the Core microarchitecture (Core 2 Duo/Extreme) to prevent any more raping :|


----------



## seronx (Jul 1, 2011)

Horrux said:


> And sockets, althouth I find it to be a bummer that AM3+ chips can't go into AM3 mobos.
> 
> And somewhere down the line, tri- or quad-channel memory will have to find its way into the AMD camp...



As far as we know....

Bulldozer does go into AM3 sockets...

Just don't expect a RMA, if you blow the VRMs










3mins 20 seconds

No extra pin

As you can tell







First 6 boards say Bulldozer runs on AM3 boards...


----------



## Mussels (Jul 1, 2011)

dont they need the black socket on the boards to achieve that?


----------



## seronx (Jul 1, 2011)

Mussels said:


> dont they need the black socket on the boards to achieve that?



No, you can run Bulldozer on AM3 just fine, but like in your VRM thread? (not sure who made the VRM thread, too lazy to search for it lol) the Bulldozer chips require more amps and voltage...basically blowing up any low quality VRM in the process
^Worst case scenario

Even if you can run Bulldozer on your AM3 with a BIOS patch,
well, don't expect Turbo Core 2.0 and Cool'n'Quiet to run


----------



## Horrux (Jul 1, 2011)

seronx said:


> As far as we know....
> 
> Bulldozer does go into AM3 sockets...
> 
> ...



Well, it runs on SOME AM3 boards. Not mine, and not many others, but yes, some. And this is a departure from AMD's usual upgrade structure, where you usually could find a new chip for your mobo a long time after said mobo was even discontinued. It isn't that big a deal either way.


----------



## Mussels (Jul 1, 2011)

Horrux said:


> Well, it runs on SOME AM3 boards. Not mine, and not many others, but yes, some.



and like i said, i'm pretty sure it was only those with the black socket.


if the black socket is not required for BD to run on AM3+, why would it even exist?

seeing a list of boards with possible BIOS updates, to me just implies those boards have a revision with the black AM3+ compatible socket.


----------



## seronx (Jul 1, 2011)

Horrux said:


> Well, it runs on SOME AM3 boards. Not mine, and not many others, but yes, some. And this is a departure from AMD's usual upgrade structure, where you usually could find a new chip for your mobo a long time after said mobo was even discontinued. It isn't that big a deal either way.



It isn't a big deal if I was going to upgrade my computer anyway
AM2+ -> AM3+ isn't that bad



Mussels said:


> and like i said, i'm pretty sure it was only those with the black socket.
> 
> 
> if the black socket is not required for BD to run on AM3+, why would it even exist?
> ...



The black socket isn't required, but it means you can run ALL of the features of Bulldozer

These aren't just "possible" BIOS updates



> Crosshair IV Formula 3017 Test BIOS
> For testing AM3+ CPU Function only, do not update this BIOS while using AM3 or previous type CPUs!





> Crosshair IV Extreme 3017 Test BIOS
> For testing AM3+ CPU Function only, do not update this BIOS while using AM3 or previous type CPUs!





> M4A89TD PRO 3017 Test BIOS
> For testing AM3+ CPU Function only, do not update this BIOS while using AM3 or previous type CPUs!





> 890FXA-GD70 1.9 BIOs
> - Support AM3+ CPU.
> - Update M-Flash module.



There are a lot more chipsets that support AM3+ CPUs on the AM3 socket

White Socket = Maybe, You don't have a warranty

Black Socket = Absolutely supported, You have a warranty

So, if you buy the CPU OEM and get a motherboard OEM, well, get an 890FX + OEM FX
and blast those CPUs with LN2, wabam! since you don't have a warranty on the white socket regardless


----------



## Mussels (Jul 1, 2011)

interesting. maybe i can run one on my board afterall.


----------



## Damn_Smooth (Jul 1, 2011)

Mussels said:


> interesting. maybe i can run one on my board afterall.



Nope man, we're screwed. Gigabyte isn't updating the Bios, they want us to pay for a black socket revision. I e-mailed them about the situation when Asus announced that they were going to do Bios updates and they told me that they weren't because AMD wasn't supporting it.
That was my driving force for looking at anyone other than Gigabyte for my next board.

That's assuming that you are using the board in your system specs, which is the same as mine.



> Dear customer,
> 
> We are sorry to let you know that, since AMD changed their AM3+ pin out, old version AM2+/AM3 socket mother board not able to support AM3+ processor. Due to processor pin out physically changed Bios update not able to make board works. User need to have rev 3.1 board to work with AM3+ processor.
> 
> ...



That's the e-mail if you were interested, awesome grammar included.


----------



## seronx (Jul 1, 2011)

Bad news, guys, from HardOCP's Kyle Bennett, "HardOCP Editor-in-Chief":



Kyle_Bennett said:

> Confirmed from a couple of sources.....September 6th at this time. Surely it could move again. So not to say, "I told you so," but yeah, I did already. Not happy about it either.



September 6th

and from JF-AMD:



JF-AMD@HardOCP said:

> Zambezi sells to the enthusiast market, which is <10% of market. Llano sells to the mainstream which is probably 70%+ of the market.
> 
> Zambezi sells only into desktops (that I am aware of) which is a shrinking market and Llano sells into desktop and mobile which is a growing market.
> 
> To me that is the essence of strategy - prioritize products around a large, growing market and over a small, shrinking market. But, then what do I know, I'm a server guy.



:| doesn't look good we might not get Bulldozer ever...


----------



## Damn_Smooth (Jul 1, 2011)

seronx said:


> Bad news guys from HardOCP Kyle Bennett "HardOCP Editor-in-Chief"
> 
> 
> 
> ...



Oh well, my broke ass couldn't afford it now anyway. I just want real benches before then.


----------



## seronx (Jul 1, 2011)

Damn_Smooth said:


> Oh well, my broke ass couldn't afford it now anyway. I just want real benches before then.



They should just delay it to December when I can buy a Bulldozer lol

But, the market rarely works with my budget :\

<--- Need the extra-cores






Just waitin' for Bulldozer

I know exactly what I am going to do with it....


----------



## heky (Jul 1, 2011)

Thank god i went with SB instead of waiting for BD.
And if the VRM of the AM3 boards is not up to the task, that just proves my assumption that BD will be even more power hungry than the Phenom II X6. Epic fail imho.


----------



## Zyon (Jul 1, 2011)

It's 1st of July and still no news of release =/


----------



## Dent1 (Jul 1, 2011)

heky said:


> Thank god i went with SB instead of waiting for BD.
> And if the VRM of the AM3 boards is not up to the task, that just prooves my asumption that BD will be even more power hungry than PhenomII x6. Epic fail imho.



Or you can look at the Bulldozer's specification and compare it to Phenom II X6s spec and you'll be able to work out which is more power hungry.


----------



## heky (Jul 1, 2011)

Dent1 said:


> Or you can look at the Bulldozer's specification and compare it to Phenom II X6s spec and you'll be able to work out which is more power hungry.



I did that, but people are saying AM3 boards are not capable of running BD because of high voltage and high current draw.

Wattage = voltage x current

You do the math!
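The formula above in code, with hypothetical voltage and current figures (not actual Phenom II X6 or Bulldozer specs):

```python
# W = V * I. The voltage/current numbers below are hypothetical examples,
# not actual Phenom II X6 or Bulldozer specifications.

def power_watts(volts: float, amps: float) -> float:
    return volts * amps

x6 = power_watts(volts=1.30, amps=96)    # hypothetical ~125 W class chip
bd = power_watts(volts=1.35, amps=105)   # hypothetical hungrier chip

print(f"X6: {x6:.1f} W, BD: {bd:.1f} W")
```

The point being: raising voltage and current together raises power draw on both axes at once, which is what would stress a weak VRM.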


----------



## Red_Machine (Jul 1, 2011)

No, it's because the chipsets aren't compatible.


----------



## TheMailMan78 (Jul 1, 2011)

seronx said:


> They should just delay it to December when I can buy a Bulldozer lol
> 
> But, the market rarely works in my budget :\
> 
> ...



Seronx do you think my motherboard will run Bulldozer ok?


----------



## Noxman (Jul 1, 2011)

Damn_Smooth said:


> Nope man, we're screwed. Gigabyte isn't updating the Bios, they want us to pay for a black socket revision. I e-mailed them about the situation when Asus announced that they were going to do Bios updates and they told me that they weren't because AMD wasn't supporting it.
> That was my driving force for looking at anyone other than Gigabyte for my next board.
> 
> That's assuming that you are using the board in your system specs, which is the same as mine.
> ...



Well, I don't think that is completely true because they have a beta BIOS for the GA-890GPA-UD3H (Rev 2.x) that adds support for AM3+
Link: http://www.gigabyte.com/products/product-page.aspx?pid=3420#bios


----------



## techtard (Jul 1, 2011)

Why don't you use your thousand suns intellect to figure it out yourself? ;D

This waiting game is killing me. I may switch sides to a cheap i3 rig, or a full-blown 2500k build soon.

AMD, at least let us know when we can expect the chips.


----------



## repman244 (Jul 1, 2011)

heky said:


> I did that, but people are saying AM3 boards are not capable of running BD because of the high voltage and high current draw.
> 
> Wattage = voltage x current
> 
> You do the math!



Ok, first of all, not even all AM3 boards support 125/140W CPUs (Thubans, 140W Denebs). Secondly, do some searching on the internet as to why BD won't work in AM3: the boards that don't support it don't have all the pins in the socket wired, and partly it's the weak, low-quality VRMs on cheap boards; of course you need a good VRM for a top-of-the-line CPU.
Would you use a Westmere in a shitty board? Maybe, but try overclocking it and watch the magic smoke come out.
And are you forgetting about the transition to 32nm and a more efficient design?

And who the hell cares if BD uses even the same power as Thuban (I'm talking about a 4-module version here), it's a high-end part. It's like saying: I'll buy a GTX 580 and underclock and undervolt it so I can save some power.

You constantly keep saying BD will suck, now tell me this: based on what?


----------



## techtard (Jul 1, 2011)

I'd say based on teenage, hormone fueled fanboyism. 

The worst kind.


----------



## heky (Jul 1, 2011)

Oh really, how come some Asus boards will be able to run BD?


----------



## Horrux (Jul 1, 2011)

repman244 said:


> Ok, first of all, not even all AM3 boards support 125/140W CPUs (Thubans, 140W Denebs). Secondly, do some searching on the internet as to why BD won't work in AM3: the boards that don't support it don't have all the pins in the socket wired, and partly it's the weak, low-quality VRMs on cheap boards; of course you need a good VRM for a top-of-the-line CPU.
> Would you use a Westmere in a shitty board? Maybe, but try overclocking it and watch the magic smoke come out.
> And are you forgetting about the transition to 32nm and a more efficient design?
> 
> ...



And even with good VRMs... My M4A79 Deluxe supports my 145W Deneb just fine, but can't support BD. I don't think it's because its RAM sockets are DDR2 because the DDR3 equivalent, the M4A79T Deluxe (and non-deluxe variants) also can't support BD. So I hypothesize there may be something else at work too.


----------



## Red_Machine (Jul 1, 2011)

heky said:


> Oh really, how come some Asus boards will be able to run BD?



Mine won't.


----------



## repman244 (Jul 1, 2011)

heky said:


> Oh really, how come some Asus boards will be able to run BD?



Because of the reasons I already wrote?



Horrux said:


> And even with good VRMs... My M4A79 Deluxe supports my 145W Deneb just fine, but can't support BD. I don't think it's because its RAM sockets are DDR2 because the DDR3 equivalent, the M4A79T Deluxe (and non-deluxe variants) also can't support BD. So I hypothesize there may be something else at work too.



This could be down to the unwired pins I mentioned earlier. The boards you mention are a bit older (I don't know exactly how old, but that could partly be the reason).
If the AM3 boards that are getting BD BIOS updates are all roughly the age of the Crosshair IV, I guess that's the answer.


----------



## heky (Jul 1, 2011)

techtard said:


> I'd say based on teenage, hormone fueled fanboyism.
> 
> The worst kind.



Really, I am probably older than you. I am basing my statements on the fact that the launch has been postponed three times, and that all the leaked ES BD chips are just a FAIL, really.


----------



## repman244 (Jul 1, 2011)




----------



## heky (Jul 1, 2011)

Get a grip man, it's obvious who is the troll and fanboy.

If you are so curious what I am basing my statements on, then please state what you are basing yours on.


----------



## Dent1 (Jul 1, 2011)

heky said:


> Really, I am probably older than you. I am basing my statements on the fact the launch has been postponed 3 times.



Earlier you were being negative about its hypothetical power consumption. Now you are being negative about it being postponed. Get your story straight: is your hostility due to the power consumption or the delays?


heky said:


> and that all the leaked ES BD chips are just a FAIL really.



ES chips are supposed to be fail. What’s your point?


----------



## repman244 (Jul 1, 2011)

heky said:


> Get a grip man, it's obvious who is the troll and fanboy.
> 
> If you are so curious what I am basing my statements on, then please state what you are basing yours on.



First of all, I'm not the one who posted FAIL, it will suck, etc. /case closed, I'm not talking about this anymore.

Now, what am I basing my statements on: BD is 32nm compared to 45nm (Stars), it's a ground-up design, and I doubt AMD wouldn't make it power efficient (after all, everyone is promoting green stuff, right? You want those power consumption charts to look good these days).
I would also like to know which chip you are comparing its supposed power consumption to (which no one has a clue about). Thuban, maybe? If you compare it core per "core" (heh, when we get power consumption numbers, see what I did there?) I bet BD will be more efficient.

Now I want to hear what you are basing yours on.

And please don't ever call me a fanboy again. I really don't give a flying rat's ass what is in my PC as long as it does the job I need it to do; I'm only a fanboy of *hardware*.
And just to add, I have only 1 AMD chip and 8 Intel ones, yes, a true fanboy indeed. *never wants to talk about this anymore since it derails the topic*


----------



## Velvet Wafer (Jul 1, 2011)

I'm curious why no one seems to know that the pin holes of the black sockets are wider, and there is one additional hole... white sockets are physically unable to receive a BD chip, that and nothing less. 

seronx, with each additional post in this thread, you just ridicule yourself more in front of this audience... if you don't even know that, you obviously have NO CLUE what you are talking about.

and mailman, if you ask that guy for help, you might as well ask your grandma... she will be more educated regarding BD than this guy is... he tries to be smart, but fails miserably.


----------



## repman244 (Jul 1, 2011)

Velvet Wafer said:


> and there is one additional hole... white sockets are physically unable to receive a BD chip, that and nothing less.



We haven't seen the chips yet  I don't understand why would there be a CHIV beta BIOS then.


----------



## Red_Machine (Jul 1, 2011)

Velvet Wafer said:


> I'm curious why no one seems to know that the pin holes of the black sockets are wider, and there is one additional hole... white sockets are physically unable to receive a BD chip, that and nothing less.



Well, I DEFINITELY won't be able to use Bulldozer on my board then.


----------



## Thatguy (Jul 1, 2011)

Seronx needs a vacation; he has outright lied on more than one occasion in this thread, and lying is worse than the word FUCK by a mile.


----------



## TheMailMan78 (Jul 1, 2011)

Velvet Wafer said:


> I'm curious why no one seems to know that the pin holes of the black sockets are wider, and there is one additional hole... white sockets are physically unable to receive a BD chip, that and nothing less.
> 
> seronx, with each additional post in this thread, you just ridicule yourself more in front of this audience... if you don't even know that, you obviously have NO CLUE what you are talking about.
> 
> and mailman, if you ask that guy for help, you might as well ask your grandma... she will be more educated regarding BD than this guy is... he tries to be smart, but fails miserably.



Dude.......did you see my board? 



Red_Machine said:


> Well, I DEFINITELY won't be able to use Bulldozer on my board then.



He was being sarcastic. You won't be able to run BD because you have an M4 board. Not many M4 boards will work. If you had an M5 you might have better chances.


Here is a list of supported Asus boards for BD.

http://event.asus.com/2011/mb/AM3_PLUS_Ready/


----------



## Red_Machine (Jul 1, 2011)

Yup, no nForce support...


----------



## Horrux (Jul 1, 2011)

Personally I don't mind changing my mobo, because when I purchased mine (two of them actually, in order to build identical gaming rigs for myself and my girl) I thought they were DDR3 boards, but they were DDR2. Stupid mistake, but back then it was "AM3 is DDR3" all over... I had actually waited for AM3 mobos to come out so I could have DDR3 and be future-proofed. Sigh.

So anyway, no biggie, coz I'd have to change them anyway. I'll be getting 990FX mobos and keeping my Deneb and Thuban until BD comes around. If BD isn't the bee's knees, well, I might just stick with the then bargain-priced Phenom II X6 1100Ts, which do the job nicely anyway. I'm not going to saddle myself with Intel's "change everything" upgrade 'path' just to get 20% more performance over an already pretty damn good level.


----------



## TheMailMan78 (Jul 1, 2011)

Red_Machine said:


> Yup, no nForce support...



Well the good news is when you do upgrade all you will need is a mobo. I had to buy a mobo AND ram.....may I suggest my board? The thing is BAD ASS.


----------



## Red_Machine (Jul 1, 2011)

I'm most likely going to go Intel, and be changing my RAM because I'm on my second set already due to one of the sticks failing.  OCZ didn't make reliable RAM from what I hear...


----------



## heky (Jul 1, 2011)

Dent1 said:


> Earlier you were being negative about its hypothetical power consumption. Now you are being negative about it being postponed. Get your story straight: is your hostility due to the power consumption or the delays?
> 
> 
> ES chips are supposed to be fail. What’s your point?



You have got to be kidding, right?



> "First of all I'm not the one who posted FAIL, it will suck etc. /case closed I'm not talking about this anymore
> 
> Now what am I basing my statements: BD is 32nm compared to 45nm (Stars), it's a ground up design and I doubt AMD wouldn't make it power efficient (after all everyone is promoting Green stuff right? You want those power consumption charts to be good these days).
> I also would like to know, to which chip are you comparing it's supposed power consumption (which no one has a clue about), Thuban maybe? If you are compare it core per "core" (heh when we get power consumption numbers - see what I did here?) I bet BD will be more efficient.
> ...



I am basing it on what we have seen so far. And what we have seen so far is three postponements and some really not-good ES chips.


----------



## Black Panther (Jul 1, 2011)

Guys, any more flaming after this post will result in vacations being handed out lavishly.


----------



## repman244 (Jul 1, 2011)

Ok I found some information about energy efficiency: http://blogs.amd.com/work/2011/02/22/amd-at-isscc-bulldozer-innovations-target-energy-efficiency/

The main highlights:


> Fully power gating the core to essentially zero power when not in use
> Sharing components in the dual core design (instruction fetch, decode, L2 cache, FP) to make more efficient use of them while still delivering the performance of a true dual core.  This is sort of like the efficiency of a duplex home design where heat, plumbing, foundation and electrical infrastructure can all be shared, but the structure still provides independent homes for two families.
> 
> Optimizing the low level circuits for maximal efficiency at all levels.  For instance low-power flip-flop design shown in paper 4.5 yesterday at ISSCC provides innovative power reductions for one of the biggest power consuming circuits in the core.  The clock grid (another big power sink) builds on the power efficiencies of past designs, and adds more improvements.  Perhaps most importantly, the grounds-up design opportunity enabled an unprecedented level of clock gating (see figure below from the paper) to reduce power waste as shown in the graph below.  Retrofitting a design to add logic to turn clocks off when circuits aren’t used is a time consuming and error-prone process.  The Bulldozer team designed these in from the beginning which enabled the inclusion of over 30,000 individual clock enables to be used.
> ...



And let's not forget about 32nm process


----------



## Dent1 (Jul 1, 2011)

heky said:


> Really, I am probably older than you. I am basing my statements on the fact that the launch has been postponed three times, and that all the leaked ES BD chips are just a FAIL, really.





heky said:


> You have got to be kidding, right?



No I'm not kidding. Answer the question please 


Earlier you were being negative about its hypothetical power consumption. Now you are being negative about it being postponed. Get your story straight: is your hostility due to the power consumption or the delays?


ES chips are supposed to be fail. What’s your point?


----------



## heky (Jul 1, 2011)

Dent1 said:


> No I'm not kidding. Answer the question please
> 
> 
> Earlier you were being negative about its hypothetical power consumption. Now you are being negative about it being postponed. Get your story straight: is your hostility due to the power consumption or the delays?
> ...



1.) Both.

2.) For as long as I can remember (Intel 286 at 33MHz), ES were specially binned chips, shipped all over the world for reviews and testing. ES were also always the best overclockers. Just take a look at reviews over the years; almost all reputable sites used ES. And you state ES are supposed to be fail.


----------



## Tatty_One (Jul 1, 2011)

On a serious (and unbiased) note, I always thought the point of releasing ES chips was to display at least some of the potential of the forthcoming product, not to highlight its weaknesses, which would only serve a negative purpose. Hence, in the past, ES chips often actually performed better than their first-revision stepping releases. I cannot believe anyone would release any form of ES or "prototype" that could be found to have any major shortcomings; it's a bit like the British Airways, Air France and American Airlines (not sure if I have the right US carrier there) partnership releasing a Concorde prototype in 1970 that couldn't fly, surely?

Just my 2 pennies worth.


----------



## cadaveca (Jul 1, 2011)

ES samples are used by OEMs to validate their supporting products. As such, they only need to have very basic functionality and to simulate TDP/ACP.

Performance is not a consideration, as the testing period is something like 6 months... and 6 months after samples go out to OEMs, it's definitely possible a silicon revision has been done, which can vastly change performance characteristics.

AMD, via JF-AMD, has already stated that ES samples are NOT indicative of the final performance of retail chips, due to tweaks made so that the highest number of samples were functional, rather than a retail scenario where chips are binned for speed/performance.

With this information in hand, I reserve judgement until I see results from a non-ES sample.


----------



## repman244 (Jul 1, 2011)

But heky, you still haven't answered my question. 

What are you basing your assumption on that the power consumption of BD will be out of this world?



About ES's: OBR was saying that the BIOS was the culprit for the low scores (in those first benches he showed), but then again I really can't take that guy seriously.


But did we get any other benchmarks (like Cinebench, wPrime, something like that) apart from the useless SuperPI and the supposed gaming benchmarks? (sorry if I missed some)


----------



## cadaveca (Jul 1, 2011)

SuperPI is good for judging RAM performance, but not really CPU performance.

The others, while valid benchmarks, still aren't indicative of "real-world" performance, as they were all written with different tech in mind.


----------



## Dent1 (Jul 1, 2011)

heky said:


> 1.)both
> 
> 2.)Since i can remember(Intel 286 33mhz), ES were specially binned chips, shipped all over the worldf for reviews and testing. ES were also always the best overclockers. Just take a look at reviews over the years. Almost all reputable sites used ES. And you state ES are supposed to be fail.



From what I can remember, the high-performing ES that are near-final revisions are given out to journalists at respected review websites and magazines to create publicity. In turn, a full review is conducted across an array of benchmarks, a detailed analysis of the findings is written up next to some pretty graphs and charts, and a conclusion is made.  

In this situation, I see no credible review from a reputable website or magazine, no detailed analysis, no pretty graphs and charts, just low-quality images which you guys call leaks. A credible reviewer doesn't have leaks or act shady, because they have permission to review the product.

Regardless of previous engineering samples, AMD is not obliged to make an ES perform better than or similar to the final retail product. It's an ES; there are no rules on how it *should* perform. It's a broken, unfinished piece of kit; treat it as such.


----------



## repman244 (Jul 1, 2011)

cadaveca said:


> SuperPI is good for judging RAM performance, but not really CPU performance.
> 
> The others, while valid benchmarks, still aren't indicative of "real-world" performance, as they were all written with different tech in mind.



Just one question, a bit off topic, regarding SuperPI (maybe you don't know, but it's worth asking if you do): I always noticed a huge difference in performance between Intel and AMD; no other benchmark AFAIK has such a huge delta in performance.
Is it the way it was coded, the way it uses instructions?


----------



## cadaveca (Jul 1, 2011)

No, AMD RAM performance is severely lacking in comparison to Intel (I've mentioned many times that RAM performance is key for me with Bulldozer). Even comparing Intel platforms, say 1156 and 1155, 1155 is vastly superior, and this shows not just in SuperPI but in ALL benchmarks that are RAM-intensive.


So, no, it's not just the way apps are coded. You'll see the same performance deficits in many of the benches in my reviews, like this one:








Note the 20FPS difference, but look at this bench:







Not so ram intensive, and things are pretty close...even though Intel can use 8 threads vs. 6 on AMD...

WinRAR, however, is more ram intensive, and shows a big performance difference:







So, in the end, your assumption that no other benchmark shows these differences is not correct at all.


----------



## TheMailMan78 (Jul 1, 2011)

cadaveca said:


> No, AMD RAM performance is severely lacking in comparison to Intel (I've mentioned many times that RAM performance is key for me with Bulldozer). Even comparing Intel platforms, say 1156 and 1155, 1155 is vastly superior, and this shows not just in SuperPI but in ALL benchmarks that are RAM-intensive.
> 
> 
> So, no, it's not just the way apps are coded. You'll see the same performance deficits in many of the benches in my reviews, like this one:
> ...



G-d I hope AMD reworked the damn MC on BD. If not well........next rig will be Intel.


----------



## heky (Jul 1, 2011)

repman244 said:


> What are you basing your assumptions on, that the power consumption of BD will be out of this world.



I am just speculating, based on the relatively high voltage in the leaked slides (1.4V+).
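For what it's worth, dynamic power in CMOS logic scales roughly with voltage squared times frequency, which is why a 1.4V+ reading raises eyebrows. A minimal sketch; the reference point (1.3V / 3.3GHz) is an assumed placeholder, not a known BD figure:

```python
# Rough dynamic-power scaling for CMOS logic: P_dyn ~ C * V^2 * f.
# The reference point (1.3 V, 3.3 GHz) is an assumption for illustration only.

def relative_dynamic_power(v: float, f_hz: float,
                           v_ref: float = 1.3, f_ref_hz: float = 3.3e9) -> float:
    """Dynamic power relative to the assumed reference voltage/clock point."""
    return (v / v_ref) ** 2 * (f_hz / f_ref_hz)

# Moving from the assumed 1.3 V / 3.3 GHz point to 1.45 V / 3.6 GHz:
print(round(relative_dynamic_power(1.45, 3.6e9), 2))  # 1.36 -> ~36% more dynamic power
```

The quadratic voltage term is the whole argument: a ~12% bump in Vcore alone costs ~24% more dynamic power before any clock increase.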


----------



## repman244 (Jul 1, 2011)

Thank you for that, cadaveca (and thank you for warning me that SuperPI isn't an isolated case).

Well, Llano is capable of doing 2400MHz RAM at 8-11-8-28; would that suggest an improved IMC, or does it only have a small effect on RAM performance?


----------



## TheMailMan78 (Jul 1, 2011)

repman244 said:


> Thank you for that, cadaveca (and thank you for warning me that SuperPI isn't an isolated case).
> 
> Well, Llano is capable of doing 2400MHz RAM at 8-11-8-28; would that suggest an improved IMC, or does it only have a small effect on RAM performance?



It depends on whether BD uses the same IMC. Which NO ONE knows but AMD.


----------



## sneekypeet (Jul 1, 2011)

This round of cleaning is free, and on BP's request! IF it was up to me points would have been doled out, and this thread would remain closed. You can thank cadaveca's talking me into opening it via MSN for your ability to continue with all this FUD!


----------



## cadaveca (Jul 1, 2011)

repman244 said:


> Well Llano is capable of doing 2400MHz RAM 8-11-8-28, would that suggest to an improved IMC or does it make only a small effect on RAM performance?



Just because frequency scales doesn't mean that frequency is utilized. You see that a lot on AMD chips... little bandwidth gain from frequency, but latency improves. You need to up the NB speed to see bandwidth gains, and a near 50% boost from stock NB does not give a 50% boost in performance.

It's a good indicator something is up. What that problem is, or whether it is even really a true problem, must be answered by AMD. I just work here.


----------



## Damn_Smooth (Jul 1, 2011)

TheMailMan78 said:


> Seronx do you think my motherboard will run Bulldozer ok?



I'm sorry Mailman, I'll take this one for Seronx. I really don't think that one of the best 990fx boards (Probably the best.) on the planet is going to have a shot at running Bulldozer.

You are going to need MOAH POWAH!!!

And from what I've learned in this thread, you better watch those vrm's too.


----------



## TheMailMan78 (Jul 1, 2011)

cadaveca said:


> Just because frequency scales doesn't mean that frequency is utilized. You see that a lot on AMD chips... little bandwidth gain from frequency, but latency improves. You need to up the NB speed to see bandwidth gains, and a near 50% boost from stock NB does not give a 50% boost in performance.
> 
> It's a good indicator something is up. What that problem is, or whether it is even really a true problem, must be answered by AMD. I just work here.



Constantly fighting my OC has taught me that timings are everything for AMD RAM..... that and Erocker calling me an idiot. Frequency makes a difference, but not like timings. For example, I am running CAS 9 now at a higher frequency than when I was running CAS 7 on DDR2........ the DDR2 was faster due to the lower latency in everything.

So unless BD has a changed IMC, it's going to suck IMO.
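The timings-vs-frequency trade-off can be sanity-checked by converting CAS cycles into nanoseconds. A quick sketch; this counts only the CAS portion (not the IMC, NB clock, or sub-timings), and the kits below are generic examples, not anyone's actual RAM:

```python
# Convert a kit's CAS latency (in clock cycles) to absolute time.
# For DDR/DDR2/DDR3, the I/O clock is half the quoted data rate (MT/s).

def cas_latency_ns(cl_cycles: int, data_rate_mts: int) -> float:
    """Absolute CAS latency in nanoseconds."""
    io_clock_mhz = data_rate_mts / 2
    return cl_cycles / io_clock_mhz * 1000.0  # cycles / MHz -> ns

print(cas_latency_ns(7, 800))    # DDR2-800 CL7  -> 17.5 ns
print(cas_latency_ns(9, 1600))   # DDR3-1600 CL9 -> 11.25 ns
```

By this measure alone, a higher-clocked CAS 9 kit can still have lower absolute latency than an older CAS 7 DDR2 kit; whether it feels faster in practice also depends on the IMC and NB clock, as discussed above.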



Damn_Smooth said:


> I'm sorry Mailman, I'll take this one for Seronx. I really don't think that one of the best 990fx boards (Probably the best.) on the planet is going to have a shot at running Bulldozer.
> 
> You are going to need MOAH POWAH!!!
> 
> And from what I've learned in this thread, you better watch those vrm's too.



OH NOES!


----------



## techtard (Jul 1, 2011)

I hope they fix the underperforming USB too. Not a dealbreaker, but it would be nice. 
And maybe a more robust onboard RAID controller.


----------



## TheoneandonlyMrK (Jul 1, 2011)

I'm putting the delays down to production capacity versus strategic mass manufacturing of the RIGHT chips at this time: the E and A series, plus the upcoming 7800+ gfx cards. If I were them, I wouldn't be as concerned with the few as I would be with the many. Simples, and also as said before.

I like this thread; I'm looking forward to BD and I'm willing to wait for definite benches before upgrading again, so I appreciate the clean-up.

And imho AMD will have done as cadaveca says: given out ES chips to OEMs that were merely rock-solid examples, not the fastest, just to do ATE testing with, and it's one or more of those getting benched. AMD will, as any company does, pass the best chips to reviewers, who will then give valid reviews and conclusions; hence why no sites have reviewed it yet, that and the NDA no doubt in place.

+1 on the better RAID and memory stability/compatibility.


----------



## yogurt_21 (Jul 1, 2011)

Tatty_One said:


> On a serious (and unbiased) note, I always thought the point of releasing ES chips was to display at least some of the potential of the forthcoming product, not to highlight its weaknesses, which would only serve a negative purpose. Hence, in the past, ES chips often actually performed better than their first-revision stepping releases. I cannot believe anyone would release any form of ES or "prototype" that could be found to have any major shortcomings; it's a bit like the British Airways, Air France and American Airlines (not sure if I have the right US carrier there) partnership releasing a Concorde prototype in 1970 that couldn't fly, surely?
> 
> Just my 2 pennies worth.



The only caveat I would add is that yours assumes the ES delivers on consumer expectations. While engineering expectations may have been met, the chip may not be up to what consumers demand. It could also have taken them longer than anticipated to meet the engineering expectations, thus putting them behind consumer demands.

In that case the ES would be put out to get a feel for the market, to determine whether the chip simply needs to be more competitively priced, needs to be scrapped because it sucks ass, or needs to have several things reworked before final release. 

Most on here are assuming the latter is true.


----------



## repman244 (Jul 1, 2011)

Found some interesting info: http://vr-zone.com/articles/sandy-bridge-e-delayed-until-january-2012/12816.html#ixzz1QpvOUZ9J

I know, I know, not very reliable, and you might also ask what I am trying to do with that link in an AMD thread.
I'm not saying this to hate on Intel or anything, but what if (as a joke) Intel knows how the final BD will perform and realized they need more time to tweak SB-E?
This is just me randomly BS-ing around 

@TheMailMan78: So is it better to have the NB at let's say 2600MHz, RAM at 1333MHz with tight timings than having NB at stock (2000MHz) and RAM at 1600 or 1800 (but with loose timings)?

I really need to take a day or two to tweak a bit around RAM and NB, as I have seen that NB can give quite a boost.


----------



## renq (Jul 1, 2011)

repman244 said:


> I'm not saying this to hate on Intel or anything but, what if (as a joke) Intel knows how final BD will perform and realized they need more time to tweak SB-E?


Or perhaps just the opposite: Intel realised there's no rush.


----------



## Velvet Wafer (Jul 2, 2011)

Tatty_One said:


> On a serious (and unbiased) note, I always thought the point of releasing ES chips was to display at least some of the potential of the forthcoming product, not to highlight its weaknesses, which would only serve a negative purpose. Hence, in the past, ES chips often actually performed better than their first-revision stepping releases. I cannot believe anyone would release any form of ES or "prototype" that could be found to have any major shortcomings; it's a bit like the British Airways, Air France and American Airlines (not sure if I have the right US carrier there) partnership releasing a Concorde prototype in 1970 that couldn't fly, surely?
> 
> Just my 2 pennies worth.



Did you ever hear of the "Starfighter Affair"?
Many German airmen died due to a severely flawed design, which was even accepted by military standards... it can happen.


----------



## seronx (Jul 2, 2011)

New leak

[yt]VHI675P-Xc8[/yt]

Idle Clock #0 - 1.4GHz
Stock Clock #0 - 3.2GHz
Turbo Core #0 - 4.2GHz

This is another leak by OBR so be wary 

Nothing new for me


----------



## Pestilence (Jul 2, 2011)

seronx said:


> New leak
> 
> [yt]VHI675P-Xc8[/yt]
> 
> ...



BENCHMARKS!!!!!!!!!!!!!!! God damn leaks


----------



## seronx (Jul 2, 2011)

Pestilence said:


> BENCHMARKS!!!!!!!!!!!!!!! God damn leaks
> 
> http://assets0.ordienetworks.com/images/GifGuide/clapping/riker.gif



It's a weekend teaser so you might get benchmarks lol

I am watching like a hawk no worries







So, if he/OBR does any benchmarks, I am watching and I'll post them, unless someone else gets to post them before me...

We at least have clocks.

The whole 3.8GHz stock clock rumour was false.


----------



## Thatguy (Jul 2, 2011)

TheMailMan78 said:


> Its depends on if BD uses the same IMC. Which NO ONE knows but AMD.



AMD has stated that BD has vastly better memory throughput than Phenom and currently available architectures. I have also seen mention of a newly designed memory controller as well; it would be needed given the way the chip is designed.


----------



## seronx (Jul 2, 2011)

Thatguy said:


> AMD has stated that BD has vastly better memory throughput than Phenom and currently available architectures. I have also seen mention of a newly designed memory controller as well; it would be needed given the way the chip is designed.



http://www.techpowerup.com/forums/showthread.php?t=131161

Here is that thread

(Instead of using a tweaked K7 one we get a redesigned one)


----------



## Heavy_MG (Jul 2, 2011)

Damn_Smooth said:


> Nope man, we're screwed. Gigabyte isn't updating the Bios, they want us to pay for a black socket revision. I e-mailed them about the situation when Asus announced that they were going to do Bios updates and they told me that they weren't because AMD wasn't supporting it.
> That was my driving force for looking at anyone other than Gigabyte for my next board.
> 
> That's assuming that you are using the board in your system specs, which is the same as mine.
> ...



I just checked the Gigabyte site; they now have a beta version of an AM3+ support BIOS for my board (890GX), but that doesn't mean the BIOS is stable or even works with a final-release Bulldozer CPU.
I'm surprised there's no BIOS update for their 890FX series?


----------



## Pestilence (Jul 2, 2011)

Heavy_MG said:


> I just checked the Gigabyte site,they now have a beta version of a AM3+ support BIOS for my board,but that doesn't mean the BIOS is stable or even works with a final release Bulldozer CPU.
> I'm surprised,no BIOS update for their 890FX series?



I could have sworn i read over at the ORG blog that the Asus 990FX Crosshair V worked perfectly with BD


----------



## Heavy_MG (Jul 2, 2011)

Pestilence said:


> I could have sworn i read over at the ORG blog that the Asus 990FX Crosshair V worked perfectly with BD



The 990FX is the new chipset made to work with BD; the older 8xx series require a BIOS update.


----------



## Pestilence (Jul 2, 2011)

Heavy_MG said:


> The 990FX is the new chipset made to work with BD,older 8xx series require a BIOS update.



I thought 990FX was pretty much 890FX just with a new name


----------



## seronx (Jul 2, 2011)

Pestilence said:


> I thought 990FX was pretty much 890FX just with a new name



Nope



There are some differences...

But since the Northbridge is in the CPU...


----------



## Heavy_MG (Jul 2, 2011)

seronx said:


> Nope
> 
> 
> 
> ...


Since the northbridge is in the CPU, there isn't a lot to be changed. 
But I'm sure there are optimizations for BD in the 990FX chipset, and 990FX also has SLI support.


----------



## Mussels (Jul 2, 2011)

well the sockets supposedly changed in minor ways as well, which would be the key difference.


----------



## seronx (Jul 2, 2011)

Mussels said:


> well the sockets supposedly changed in minor ways as well, which would be the key difference.



Well...for the better...I haven't seen anyone look at this before

http://support.amd.com/us/Processor_TechDocs/47414.pdf

It's an interesting read,

and while looking it up I found a guy who gave a pretty interesting summary:

http://www.commodore-amiga.org/en/f...bulldozer-optimizations-pdf-pretty-impressive


			
deadtime said:

> OK it took me a few days but here go's
> 
> If we take a look at a single Bulldozer core, you see a design optimized for throughput AMD will not introduce its own version of Hyper-Threading, but rather focus on physically increasing the number of instructions per clock [IPC] through wider internal units. A good example will be the newly designed 128-bit FPUs [Floating-Point Units]. Currently, 128-bit instructions are carried out by using 32-bit / 64-bit FPU at a reduced efficiency [more cycles needed to process a single instruction]. According to our sources, GPR [General Purpose Registers] were increased to 128-bit. Once that we learned of this alleged GPR depth, we asked does that mean we can, theoretically, call Bulldozer a "128-bit CPU" and is "x86-128" on the way? I will openly admit that I asked such a question without giving it a second thought.
> 
> ...



This software optimization guide looks to be aimed at the server market.

There are some changes to the northbridge.

144-bit -> 288-bit could be why there is an extra pin
^Mem controller (there will be a Bulldozer/FX CPU that will not be supported on AM3, eventually)

I was looking up the northbridge changes in Bulldozer, and basically every link popped up this PDF.
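For what it's worth, those bus widths line up with ECC channel math: 144 bit would be 2 x 72-bit ECC channels (64 data + 8 ECC bits each), and 288 bit would be 4 of them. Quick napkin script for the peak numbers (my arithmetic, nothing official from AMD):

```python
# Theoretical peak DDR3 bandwidth: channels x 64 data bits x transfer rate.
# ECC bits (8 per 72-bit channel) don't carry data, so they're excluded.

def peak_bandwidth_gbs(channels, mt_per_s, data_bits=64):
    """Peak bandwidth in GB/s for DDR3 at a given MT/s per channel."""
    return channels * data_bits * mt_per_s / 8 / 1000

dual_1866 = peak_bandwidth_gbs(2, 1866)   # ~29.9 GB/s (desktop, dual channel)
quad_1600 = peak_bandwidth_gbs(4, 1600)   # 51.2 GB/s (server, quad channel)
```

So going from a 144-bit to a 288-bit interface is mostly about doubling channels for the server parts, not about making desktop memory magically faster.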


----------



## cadaveca (Jul 2, 2011)

Your info is full of fail. AMD's own site says 1866 memory support. If that obvious thing is wrong, more than likely the rest is too. Oh well.


----------



## erocker (Jul 2, 2011)

cadaveca said:


> You info is full of fail. AMD's own site says 1866 memory support. If that obvious thing is wrong, more than likely the rest is too. Oh well.



Quoting posts from other forums as information doesn't make any sense to me either.


----------



## seronx (Jul 2, 2011)

cadaveca said:


> You info is full of fail. AMD's own site says 1866 memory support. If that obvious thing is wrong, more than likely the rest is too. Oh well.



Quad-channel DDR3 integrated memory controller (support for PC3-12800 (DDR3-1600) and registered DDR3) for server/workstation (the new Opteron Valencia and Interlagos).

The software optimization guide is aimed at servers.

Valencia is basically (basically, not EXACTLY) the server version of Zambezi.

There are differences going from NB A -> NB B,

from K10.5 AM3 -> K15 AM3+.


----------



## cadaveca (Jul 2, 2011)

...AND.....FAIL?

John Fruehe says 1866, and he can ONLY comment on server parts.


End story.




I won't provide links, just check his blog on the AMD website. Get your facts from the horse's mouth.


----------



## seronx (Jul 2, 2011)

cadaveca said:


> ...AND.....FAIL?
> 
> John Fruehe says 1866, and he can ONLY comment on server parts.
> 
> ...



Only 1600MHz, and he didn't comment on clocks on his blog.

He only said high-bandwidth/low-latency DDR3.

End of story..


----------



## cadaveca (Jul 2, 2011)

He posted it on here ...oh wait.






He said a new socket would be needed for best performance. I translated that as 1866, but he was saying it would be because of quad channel.


Fail.

Up to 30% more than Opteron 6174 is not that good.


----------



## Wile E (Jul 2, 2011)

erocker said:


> Mounting holes are the same as AM3, AM2+, AM2, s939 etc..



939 is not the same. It only had 2 mounting screws for the hsf bracket, remember?

AM2 and up is all the same tho. (4 screws)



Damn_Smooth said:


> Nope man, we're screwed. Gigabyte isn't updating the Bios, they want us to pay for a black socket revision. I e-mailed them about the situation when Asus announced that they were going to do Bios updates and they told me that they weren't because AMD wasn't supporting it.
> That was my driving force for looking at anyone other than Gigabyte for my next board.
> 
> That's assuming that you are using the board in your system specs, which is the same as mine.
> ...



The Asus/MSI/etc boards have to be specific revisions with the newer sockets as well. GB isn't ripping you off, you just don't have the revision with the needed socket.


----------



## HD64G (Jul 2, 2011)

Who is so sure that 990X & FX @ 4GHz cannot compete with any SB @ 4GHz? So, if clocks for FX are high, there isn't a problem competing with SB w/o OC. And if this is truly an ES then...


----------



## renq (Jul 2, 2011)

It's quite simple: Bulldozer HAS to be AT LEAST as fast as Sandy. It already has a(n) (in)significant disadvantage versus Sandy: no GPU, though Enhanced Bulldozer ("Komodo") will take care of that next year... With the enthusiast platform from Intel, Sandy-E, yet to be released, BD faces a rough road ahead...


----------



## Damn_Smooth (Jul 2, 2011)

Wile E said:


> 939 is not the same. It only had 2 mounting screws for the hsf bracket, remember?
> 
> AM2 and up is all the same tho. (4 screws)
> 
> ...



Not trying to start an argument, but you're wrong.

http://tecinfozone.blogspot.com/2011/03/asus-am3-motherboards-get-beta-bios.html

Here's the info from Asus themselves.

http://event.asus.com/2011/mb/AM3_PLUS_Ready/


----------



## Mussels (Jul 2, 2011)

Wile E said:


> 939 is not the same. It only had 2 mounting screws for the hsf bracket, remember?
> 
> AM2 and up is all the same tho. (4 screws)



if your cooler uses the stock clip mechanism, they're still compatible. if you're replacing it completely, then yeah the screw holes are different.


----------



## HD64G (Jul 2, 2011)

renq said:


> It's quite simple- Bulldozer HAS to be AT LEAST as fast as Sandy. It already has a(n) (in)significant disadvantage over Sandy- no GPU, though Enhanced Bulldozer ("Komodo") will take care of that next year...WIth the Enthusiast platform from Intel, the Sandy-E, yet to be released, BD faces a rough road ahead ...



I agree that BD has to equal SB's performance as a total, but not core-to-core. And that's because from C2D to the present Intel was 15-20% ahead in c2c performance, and with SB it's another 10%. So, even if BD is 10% back in c2c but wins overall in multithreaded programs, it's a very nice product.
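Putting rough numbers on that (napkin math built from the percentages above, not measurements):

```python
# Compound the estimated per-core leads, then weigh core counts.
# All percentages are guesses from the discussion, not benchmark data.

c2d_lead = 1.175            # midpoint of the "15-20%" core-to-core lead
sb_lead = c2d_lead * 1.10   # SB adds ~10% on top -> ~1.29x per core

# 8 BD cores vs 4 SB cores, ignoring HT and scaling losses:
bd_core_equiv = 8 / sb_lead   # ~6.2 "SB-core equivalents"
sb_core_equiv = 4.0
```

Even spotting Intel a ~29% per-core lead, eight cores still come out well ahead of four in a perfectly threaded workload, which is the whole multithreaded-wins argument.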


----------



## Mussels (Jul 2, 2011)

HD64G said:


> I agree that BD has to equal SB's performance as a total but not core2core. And that's because from C2D to present Intel was 15-20% ahead in c2c performance and with SB it's another 10%. So, even if BD is 10% back in c2c but wins overall in multithreaded programs, it's very nice a product.



especially when (if, with BD) AMD sells at a lower price.


----------



## Pestilence (Jul 2, 2011)

HD64G said:


> I agree that BD has to equal SB's performance as a total but not core2core. And that's because from C2D to present Intel was 15-20% ahead in c2c performance and with SB it's another 10%. So, even if BD is 10% back in c2c but wins overall in multithreaded programs, it's very nice a product.



I believe the 8-core will be as fast if not faster than a 2600K in multithreaded apps, but in gaming it's going to have its ass handed to it by SB. 

Also, we're 8 months away from this beast...

Source: http://forum.coolaler.com/showthread.php?t=268383

CPU:

1. 22nm Ivy Bridge
2. 32nm Sandy Bridge i5-2300
3. 32nm Sandy Bridge Pentium G620

Benchmarks:

http://img32.imageshack.us/img32/5907/ivy4.gif


----------



## renq (Jul 2, 2011)

HD64G said:


> I agree that BD has to equal SB's performance as a total but not core2core. And that's because from C2D to present Intel was 15-20% ahead in c2c performance and with SB it's another 10%. So, even if BD is 10% back in c2c but wins overall in multithreaded programs, it's very nice a product.



Lets leave CLK-2-CLK performance aside. First 8-core BD gaming tests show it to be equal to 6-core (12 thread) Gulftown in gaming tests. Of course, we don't know, how much BD is/was capped, but nevertheless, shouldn't expect any major improvements in the ready-to-market revision of Bulldozer. Now, faster Sandys, especially the unlocked K-versions, are equal or better than Gulftown in gaming tests. Thus 8-core BD ~= 4-core HT Sandy.

Now lets look into the future, Enhanced BD (Komodo) will have, IIRC, up to 10 cores, which with probable architectural advancements and possibly a bit higher clocks, should yield about 30-35% performance improvements over 8-core BD (in gaming, that is, provided games will be even more multicore/-thread happy). Now, Sandy-E will bring, IIRC also, up to 6-core HT-d CPU-s. Now, on paper at least, it should bring at least 50% increase in performance provided linear scaling (lets leave aside the ~1K+ USD pricetag it'll prolly have).

So, even if somehow BD takes the throne of the fastest (gaming) CPU this fall/year, AMD will have tough competition from Sandy-E next year...
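To sanity-check those percentages: both projections are basically core-count ratios under the linear-scaling assumption (all speculative, obviously):

```python
# Fractional throughput gain from adding cores, assuming perfectly
# linear scaling (the same assumption stated above).

def scaling_gain(new_cores, old_cores):
    return new_cores / old_cores - 1

komodo_vs_bd = scaling_gain(10, 8)   # 0.25 -> with arch/clock bumps, ~30-35%
sandy_e_vs_sb = scaling_gain(6, 4)   # 0.50 -> the "at least 50%" figure
```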


----------



## Pestilence (Jul 2, 2011)

renq said:


> Lets leave CLK-2-CLK performance aside. First 8-core BD gaming tests show it to be equal to 6-core (12 thread) Gulftown in gaming tests. Of course, we don't know, how much BD is/was capped, but nevertheless, shouldn't expect any major improvements in the ready-to-market revision of Bulldozer. Now, faster Sandys, especially the unlocked K-versions, are equal or better than Gulftown in gaming tests. Thus 8-core BD ~= 4-core HT Sandy.
> 
> Now lets look into the future, Enhanced BD (Komodo) will have, IIRC, up to 10 cores, which with probable architectural advancements and possibly a bit higher clocks, should yield about 30-35% performance improvements over 8-core BD (in gaming, that is, provided games will be even more multicore/-thread happy). Now, Sandy-E will bring, IIRC also, up to 6-core HT-d CPU-s. Now, on paper at least, it should bring at least 50% increase in performance provided linear scaling (lets leave aside the ~1K+ USD pricetag it'll prolly have).
> 
> So, even if somehow BD takes the throne of the fastest (gaming) CPU this fall/year, AMD will have tough competition from Sandy-E next year...



Has AMD come out and said how much better memory bandwidth will be with BD?


----------



## TheMailMan78 (Jul 2, 2011)

Pestilence said:


> Has Amd come out and said how much better memory bandwidth will be with BD?



1866 with "wider channels".


----------



## cadaveca (Jul 2, 2011)

> With the new Bulldozer-based Opterons, which are set for release in the third quarter, AMD is introducing TDP Power Cap, which will give enterprises the ability to set the TDP (thermal design power) of their processors, essentially customizing their chips to meet power and workload demands. Using various knobs in the BIOS, businesses will be able to reduce the overall TDP of the chip—they won't be able to increase it beyond the maximum level set by AMD—which will help in power consumption, and then tweak the frequency of the cores as needed to get the maximum amount of performance allowed under the TDP setting, Kerby said.
> 
> "While you set the [TDP] cap, you can still operate at a high frequency," he said.
> 
> In addition, businesses can keep the TDP at the level set by AMD, and change the frequencies of the processors to add power, while keeping the overall power use under the TDP.



http://www.eweek.com/c/a/Desktops-a...s-Will-Feature-TDP-Capping-Technology-834387/


uh....
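If anyone wants a feel for how that capping knob trades against clocks, here's a toy model. Dynamic power scales roughly with f * V^2; the constant and the voltages below are completely made up, so treat it as illustrative only, not AMD's actual numbers:

```python
# Toy model of a TDP cap: dynamic power ~ c * f * V^2.
# The constant c and all figures are invented for illustration.

def dynamic_power(freq_ghz, volts, c=25.0):
    """Very rough dynamic power estimate in watts."""
    return c * freq_ghz * volts ** 2

def max_freq_under_cap(tdp_cap_w, volts, c=25.0):
    """Highest frequency (GHz) that stays under a TDP cap at a fixed voltage."""
    return tdp_cap_w / (c * volts ** 2)

# Capping a hypothetical 125 W part to 95 W at 1.2 V:
full = max_freq_under_cap(125, 1.2)    # ~3.47 GHz budget
capped = max_freq_under_cap(95, 1.2)   # ~2.64 GHz budget
```

The point is just that the cap and the clocks pull against each other: lower the cap and either frequency or voltage has to give, which is exactly the knob the article describes.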


----------



## TheMailMan78 (Jul 2, 2011)

cadaveca said:


> http://www.eweek.com/c/a/Desktops-a...s-Will-Feature-TDP-Capping-Technology-834387/
> 
> 
> uh....



That's for the server chips, not the FX.


----------



## cadaveca (Jul 2, 2011)

But the power gating should be in ALL Bulldozer chips, AFAIK.

Might also explain performance problems with ES chips. If they are gated to only allow so much power consumption, it's possible that clocks are dropped while benching, unnoticed by whoever is running the benches.


----------



## TheMailMan78 (Jul 2, 2011)

cadaveca said:


> But hte power gating shoud be in ALL Bulldozer chips, AFAIK.
> 
> Might also explain performance problems with ES chips. If they are gated to only allow so much power consumption, it's possible that while benching clocks are dropped, unnoticed by whoever is running the benches.



If that's true, we have some fail on our hands.


----------



## Thatguy (Jul 2, 2011)

TheMailMan78 said:


> If thats true we have some fail on our hands.



Fail on the part of the people benchmarking engineering validation samples? Absolutely.


----------



## repman244 (Jul 2, 2011)

cadaveca said:


> But hte power gating shoud be in ALL Bulldozer chips, AFAIK.



Correct, it will be able to shut down an entire module which is not in use.




I don't think the desktop version of Bulldozer will have that TDP cap (well, a basic version will: the Turbo Core stuff, and even if it does have it, you can control it, if I read the post about it correctly), and _maybe_ the ES have the feature (but the question is, who the hell needs that on an ES for a desktop part). So I'm guessing that's enterprise only.


----------



## trt740 (Jul 3, 2011)

Gigabyte pisses me off. There is no way that this board couldn't handle Bulldozer; look at its power system, 8+2 and 140 watts. Last Gigabyte board I will buy.

http://www.gigabyte.us/products/product-page.aspx?pid=3010#ov


----------



## Mussels (Jul 3, 2011)

trt740 said:


> Gigabyte pisses me off there is no way that this board couldn't handle the Bulldozer look at it's power system 8+2 and 140 watt last gigabyte board I will buy.
> 
> http://www.gigabyte.us/products/product-page.aspx?pid=3010#ov



the pins have changed in the socket, remember...


----------



## trt740 (Jul 3, 2011)

Mussels said:


> the pins have changed in the socket, remember...



No, it fits in AM3 boards; it's just a BIOS flash. That's why it fits in a 770 board or a Crosshair, and this shit will support it: http://www.gigabyte.us/products/product-page.aspx?pid=3807#ov The difference in this socket is the color of the plastic. It is an AM3 socket with black plastic and a BIOS flash. At least that's how it was explained to me, but we will see. *Please feel free to correct me if I am wrong.*


----------



## Deleted member 67555 (Jul 3, 2011)

Revision 3.1 supports it...the others do not


----------



## trt740 (Jul 3, 2011)

jmcslob said:


> Revision 3.1 supports it...the others do not



revision 3.1? explain


----------



## Mussels (Jul 3, 2011)

trt740 said:


> revision 3.1? explain



they keep releasing new revisions of the mobos with the black sockets, and claiming those ones are BD compatible. I think this is where people are getting confused: you can have a revision (whatever) version of the board with the black socket, and the beta BIOSes are for THEM, and people are thinking those BIOSes are for older revisions as well, when they aren't.


----------



## Wile E (Jul 3, 2011)

Damn_Smooth said:


> Not trying to start an argument, but your wrong.
> 
> http://tecinfozone.blogspot.com/2011/03/asus-am3-motherboards-get-beta-bios.html
> 
> ...



Hmmm, I stand corrected. I wonder why AMD is trying to force the move to the AM3+ socket? Sounds a lot like the same stuff people bitch at Intel for.



Mussels said:


> if your cooler uses the stock clip mechanism, they're still compatible. if you're replacing it completely, then yeah the screw holes are different.


The discussion was about water blocks, and erocker said mounting holes specifically. That's all I was commenting on. I do believe you are correct about all clip on heatsinks tho.





Mussels said:


> the pins have changed in the socket, remember...



The links Smooth posted above seem to suggest otherwise.


----------



## Damn_Smooth (Jul 3, 2011)

Wile E said:


> Hmmm, I stand corrected. I wonder why AMD is trying to force the move to the AM3+ socket? Sounds a lot like the same stuff people bitch at Intel for.



From what I've read, Bulldozer will run on AM3, but not all of the power gating features will work. I don't think Turbo Core or Cool'n'Quiet will work on AM3. 

That's not a big deal to us because we would shut the shit off anyway, but I can see why AMD won't officially support it. I'm not forgiving Gigabyte though.


----------



## Pestilence (Jul 3, 2011)

Wile E said:


> Hmmm, I stand corrected. I wonder why AMD is trying to force the move to the AM3+ socket? Sounds a lot like the same stuff people bitch at Intel for.



Intel changes sockets because staying with the same socket leaves performance on the table. Look at Lynnfield vs Sandy Bridge. The swap to a new MB was WELL worth the gains. I had a 4.4GHz 760 and I still swapped to SB because I like to waste money.


----------



## repman244 (Jul 3, 2011)

Pestilence said:


> Intel changes sockets because staying with the same socket leaves performance on the table. Look at Lynnfield vs Sandy Bridge. The swap to a new mb was WELL worth the gains. I had a 4.4Ghz 760 and i still swapped to SB because i like to waste money.



Sorry, but IMHO they just removed the pin to force people to buy new motherboards; I imagine the gains did not come from the mobo but from the CPU.
Same thing we have here with AM3 to AM3+. I bet if AMD "officially supported" AM3 for Bulldozer, everything would work just fine. I don't see any huge differences between the two (there are even boards with the 890FX chipset and an AM3+ socket), and it's not a weak-VRM issue here; boards like the CHIV and such can provide more than enough power IMHO.


----------



## seronx (Jul 3, 2011)

5.1GHz on Air....OBR says the proof video will come soon

He's trolling so trollin'



			
OBR said:

> Look, this is amazing! Fully stable in Cinebench R11 @ 4,85 GHz! SuperPi 1M - 5,1 GHz on AIR ... proof video:


----------



## repman244 (Jul 3, 2011)

I really don't see the point in those "leaks" that he does.
He is just showing some frequencies, but we all know frequency is just part of the story.

And I bet the video will feature CPU-Z showing the CPU and blocked scores of benchmarks...boriiing and already seen.


----------



## Damn_Smooth (Jul 3, 2011)

seronx said:


> http://3.bp.blogspot.com/-8e20pCOODxA/ThAbzbYfQkI/AAAAAAAAA1Y/CN1VPrwI6Bk/s1600/newska1.png
> 
> 5.1GHz on Air....OBR says the proof video will come soon
> 
> He's trolling so trollin'



I would not complain at all if that were true, but I still don't trust that guy.


----------



## repman244 (Jul 3, 2011)

Me neither. Remember when he posted those Llano screens showing it running at ~4.7GHz? 
Well, read this: http://www.xtremesystems.org/forums...lano-OC-amp-benchmark-collection-thread/page2 
Especially post 47.
I think OBR is full of shit, but that's just me.


----------



## repman244 (Jul 3, 2011)

Here we go:


----------



## seronx (Jul 3, 2011)

repman244 said:


> Here we go:
> 
> http://1.bp.blogspot.com/-aJTldEb1Kms/ThApx9sveSI/AAAAAAAAA14/dvIF3YKov_8/s1600/cine11.png-oc.png
> http://3.bp.blogspot.com/-roteoiOV-N0/ThApvJpAcEI/AAAAAAAAA1w/7TDEK93UsQY/s1600/superpi_oc4.png




Those black bars all over it sure are enthusiastic about hugging those numbers.












I don't really care for the benchmarks...since I only really need SSE4.1/4.2/5 and AVX

I don't care about Integer Performance at all it could be equal to Phenom II for all I care as long as I get SSE4.1/4.2/5 and AVX

Ya, his ES Sample is multiplier locked....I did calculations on what I know about the architecture
and ya....his version isn't a black edition

3.520GHz


----------



## Damn_Smooth (Jul 3, 2011)

seronx said:


> Those black things all over that are so enthused in hugging those numbers
> 
> 
> 
> ...



You would think that a guy with the money for all of those boards and a supposed BD engineering sample could spring for an SSD.

I noticed you finally filled out your system specs. I think you should have gone with an AMD FX 10050, but other than that, that's a solid build.


----------



## HalfAHertz (Jul 3, 2011)

The power gating is done in software, just like on the AMD GPUs:


----------



## seronx (Jul 3, 2011)

Damn_Smooth said:


> You would think that a guy with the money for all of those boards and a supposed BD engineering sample could spring for an SSD.
> 
> I noticed you finally filled out your system specs. I think you should have went with an AMD FX 10050 but other than that, that's a solid build.



Thanks, I just want to get Thatguy mad at me lol



Yeah, there are definitely some issues:

3.520GHz for 4 cores
2.1GHz for 8 cores

The CPU in the Cinebench test is throttling,

especially since the CPU didn't even beat a 3.5GHz 6-core Phenom II 1090T.
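A crude "core-GHz" tally shows why throttling would explain that result (this ignores IPC entirely, so big grain of salt):

```python
# Aggregate clock budget: cores x sustained clock. No IPC factored in.

def core_ghz(cores, ghz):
    return cores * ghz

bd_es_loaded = core_ghz(8, 2.1)    # 16.8 if it really drops to 2.1 GHz
phenom_x6 = core_ghz(6, 3.5)       # 21.0 for the X6 clocks quoted above
```

If the ES really falls to ~2.1 GHz with all modules loaded, its aggregate budget is below the X6's, so losing the multithreaded run wouldn't even require worse IPC.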


----------



## Damn_Smooth (Jul 3, 2011)

HalfAHertz said:


> The power gating is done in software, just like on the AMD GPUs:
> 
> http://www.guru3d.com/imageview.php?image=31301



That shoots down what I've read. I guess I have absolutely no clue why they need a new socket then.


----------



## seronx (Jul 3, 2011)

Damn_Smooth said:


> That shoots down what I've read. I guess I have absolutely no clue why they need a new socket then.



The new socket is for the new northbridge and the new thermal sensor, which is backwards compatible but makes the white AM3 sockets not forwards compatible.

The only way to have an 800 series or 700 series board work

is if it has:
UEFI
Black socket
...


----------



## pantherx12 (Jul 3, 2011)

Also, the new chips draw more power, hence the bigger pins; you might get burn-out issues on AM3 boards.


----------



## Damn_Smooth (Jul 3, 2011)

seronx said:


> New socket is for the new north bridge and the new thermal sensor which is back words compatible but makes the AM3 white sockets not forwards compatible
> 
> The only way to have a 800 series or 700 series work
> 
> ...



So the 800 boards with BIOS updates won't be able to read correct temps then? There goes overclocking.


----------



## repman244 (Jul 3, 2011)

What bigger pins? Are you sure about that?


----------



## pantherx12 (Jul 3, 2011)

repman244 said:


> What bigger pins? Are you sure about that?



Fairly sure, otherwise why make the holes in the socket 11% bigger.


----------



## seronx (Jul 3, 2011)

Damn_Smooth said:


> So the 800 boards with bios updates won't be able to read correct temps then? There goes overclocking.





repman244 said:


> What bigger pins? Are you sure about that?





pantherx12 said:


> Fairly sure, otherwise why make the holes in the socket 11% bigger.



Old stuff but I guess we need to show this stuff late in the game


----------



## repman244 (Jul 3, 2011)

Thanks for that, but it still doesn't explain why ASUS would bother supporting old MBs. Maybe the supported boards do have larger pin holes?
I could check on my board, but I really can't be bothered to take the CPU out right now.


----------



## a_ump (Jul 3, 2011)

It is good to re-post the rumors/data that can still be upheld, or at least not denied; it helps keep people up to speed on the latest developments in BD news.


----------



## seronx (Jul 3, 2011)

repman244 said:


> Thanks for that, but it still doesn't explain why would ASUS bother supporting old MB's. Maybe the supported boards do have larger pin holes?
> I could check on my board but really can't be bothered to take the CPU out right now.



The idea/rumor is that Asus helped AMD make the new socket/motherboard decision,
or whoever designs ASUS' mobos did.

Edit: Rather not make this topic go into a power/heat discussion


----------



## repman244 (Jul 3, 2011)

EDIT: Never mind then


----------



## HD64G (Jul 3, 2011)

I think some guys overestimate rumors and neglect facts or data known to have leaked from AMD or the mobo makers. Larger pins mean better or easier installation, not incompatibility between BD & AM3 sockets. ASUS officially said that a BIOS update is enough to support BD on AM3 mobos. What more does anyone need to understand it? As for max current and 110A vs 145A, we must wait to be sure, because since some modules may shut down and some will get heavily OCed, *I just suppose* that the OCed modules may consume up to 125W themselves, which means much more current in fewer transistors. Maybe...


----------



## Velvet Wafer (Jul 3, 2011)

seronx said:


> http://www.asrock.com/news/events/2011am3+/1.jpg
> 
> http://www.asrock.com/news/events/2011am3+/2.jpg
> 
> ...



first useful post you made in this thread, thanks!... 
seems a lot of people here can't understand that there is indeed a need for bigger pins, even though it has been known for months that they were made bigger to allow more current to pass through


----------



## HalfAHertz (Jul 3, 2011)

Velvet Wafer said:


> first useful post you made in this thread, thanks!...
> seems a lot of people here cant understand, that there is indeed a need for bigger pins, even tho it is known for months, that they were made bigger to allow more current to pass thru



And to reduce the amount of bent and broken pins


----------



## Velvet Wafer (Jul 3, 2011)

HalfAHertz said:


> And to reduce the amount of bent and broken pins



i don't think they will be much more resistant; it's still (very soft) pure gold. if you drop it, the amount of damage will probably be about the same


----------



## Horrux (Jul 3, 2011)

Velvet Wafer said:


> i dont think they will be much more resistant, its still (very soft) pure gold, if you drop it, the amount of damage will probably be about the same



I don't think the pins are pure gold, just the inner interconnects between the actual silicon and the pin headers on the chip package. Otherwise the $80 CPU would cost more in gold than they sell for.


----------



## Velvet Wafer (Jul 3, 2011)

Horrux said:


> I don't think the pins are pure gold, just the inner interconnects between the actual silicon and the pin headers on the chip package. Otherwise the $80 CPU would cost more in gold than they sell for.



what material is used then, which looks exactly like gold?
i don't think they used brass, and copper is more reddish!
also, the pins are too soft to be a mixture of copper and gold


----------



## meran (Jul 3, 2011)

Every interconnect in your PC is GOLD plated (graphics card PCIe, motherboard slot pins, CPU) because gold is one of the least reactive metals, and it's very good at conducting electricity.

Just like in this post:
http://www.techpowerup.com/107572/E...oards-Longer-Lifespan-Higher-Reliability.html


----------



## Tatty_One (Jul 3, 2011)

I believe it's a gold alloy that contains copper also.


----------



## TheoneandonlyMrK (Jul 3, 2011)

tatty_one said:


> i beleive it's a gold alloy that contains copper also.



+1

if they were actually made of gold they would be far softer and weaker than they are

socket receptors, including CPU and PCIe, are gold plated


----------



## Horrux (Jul 3, 2011)

Tatty_One said:


> I beleive it's a gold alloy that contains copper also.



I believe they are simple copper, and gold plated, just like the rest of electrical contacts inside a PC.


----------



## Tatty_One (Jul 3, 2011)

Horrux said:


> I believe they are simple copper, and gold plated, just like the rest of electrical contacts inside a PC.



I am not an expert by any means, but as it has to be magnetic it must be an alloy, because as far as I was aware (maybe wrongly) only ferrous metals are magnetic; therefore it is highly likely that the alloy would have an iron-based element.


----------



## TheMailMan78 (Jul 3, 2011)

I wish someone would shut this thread down. It's so full of bullshit and misinformation it boggles the mind. This coming from a world-renowned troll should tell you something.


----------



## repman244 (Jul 3, 2011)

But, but, but, we need a place to learn how to troll


----------



## Velvet Wafer (Jul 3, 2011)

TheMailMan78 said:


> I wish someone would shut this thread down. Its so full of bullshit and misinformation it boggles the mind. This coming from a world renowned troll should tell you something.



your honesty is refreshing, mailman!


----------



## Nesters (Jul 3, 2011)

http://www.tomshardware.co.uk/picturestory/2-gold-motherboard-chemistry.html

OR the whole story in 2 pictures:


----------



## Pestilence (Jul 3, 2011)

seronx said:


> http://3.bp.blogspot.com/-8e20pCOODxA/ThAbzbYfQkI/AAAAAAAAA1Y/CN1VPrwI6Bk/s1600/newska1.png
> 
> 5.1GHz on Air....OBR says the proof video will come soon
> 
> He's trolling so trollin'



Of course he doesn't show the Cinebench score.


----------



## Damn_Smooth (Jul 3, 2011)

Pestilence said:


> Ofcourse he doesn't show the Cinebench score



That's the point. The few things he does show are usually photoshopped so nothing he does can be trusted.


----------



## Pestilence (Jul 3, 2011)

Damn_Smooth said:


> That's the point. The few things he does show are usually photoshopped so nothing he does can be trusted.



Agreed.


----------



## Pestilence (Jul 3, 2011)

Just watched this video, and I'm impressed if it's real, but HOLY SHIT on the 1.6V and the size of that CPU cooler.

http://www.youtube.com/watch?v=haV93vh20Q0&feature=player_embedded


----------



## seronx (Jul 3, 2011)

Pestilence said:


> Just watched this video and im impressed if its real but HOLY SHIT on the 1.6v and size of that cpu cooler
> 
> http://www.youtube.com/watch?v=haV93vh20Q0&feature=player_embedded



Noctua D14

Of course, it is big but the CPU is obviously being throttled

or aggressively gated




TheMailMan78 said:


> I wish someone would shut this thread down. Its so full of bullshit and misinformation it boggles the mind. This coming from a world renowned troll should tell you something.



Pfft, misinformation... like you know what you are talking about.


----------



## Pestilence (Jul 3, 2011)

seronx said:


> Noctua D14



That's the D14? Wow... I heard it was huge, but not THAT huge.


----------



## ensabrenoir (Jul 3, 2011)

TheMailMan78 said:


> I wish someone would shut this thread down. Its so full of bullshit and misinformation it boggles the mind. This coming from a world renowned troll should tell you something.



unfortunately so is 50% of the internet, and that's what makes it so much fun, frustrating, and intriguing. so this just blends right in


----------



## twilyth (Jul 4, 2011)

I heard that a bunch of engineers at AMD sold their soul to the devil to get this chip out into the wild and the reason it's delayed is because they're having trouble finding enough virgins for the final sacrifice.


----------



## Horrux (Jul 4, 2011)

seronx said:


> Noctua D14
> 
> Of course, it is big but the CPU is obviously being throttled
> 
> ...







Pestilence said:


> Thats the D14? Wow.. I heard it was huge but not THAT huge



That's the exact cooler I am using on my 145W Phenom II X4 965, and it keeps it surprisingly cool even under stress. I love this cooler and plan on keeping it for a LONG time. It is almost $100 though. However, it (obviously?) doesn't fit every mobo and every case, so check before you buy one.


----------



## Mussels (Jul 4, 2011)

i'm sick of the bullshit in this thread, locked.


----------

