
AMD FX-8350 - "Piledriver" for AMD Socket AM3+

Fine, jack up the 3570K to 4.9 GHz or 5 GHz like I occasionally run mine at (H100).

My 3570K does no more than 4.5 GHz. :mad:

It's very interesting how there is such a difference between my 3770K and my 3570K, too. I can run 1.4 V through the 3570K no problem and not break 85 °C.
 
LOL, it was just a random number cdawall, no Intel conspiracy! Fine, jack up the 3570K to 4.9 GHz or 5 GHz like I occasionally run mine at (H100). The point was it has a 600 MHz head start, so I would expect things to be a lot closer than if both were clocked the same... regardless of that clock speed.

Though I doubt retail chips are going to hit 5.5 GHz 24/7 stable with 'normal' cooling (ambient water or less). I could be wrong, and actually hope I am... until then, it's all speculation.

Your chip is a good one; as with Dave's, most do not go over 4.5 GHz. So what is to say a good 8350 isn't going to do 5.5? We already saw that with Thuban and Deneb chips: "good" chips could run 4.5 GHz stable on air, "normal" chips did no more than 4 GHz. That is the same 500 MHz difference. With your complete and utter lack of knowledge of the facts, why are you speculating? If you really want to get silly about it, most people do not overclock, so the performance gains in multithreading at stock speeds make a massive difference.

My 3570K does no more than 4.5 GHz. :mad:

It's very interesting how there is such a difference between my 3770K and my 3570K, too. I can run 1.4 V through the 3570K no problem and not break 85 °C.

See, that is a lot more normal.
 
Now that the PD reviews flood the interwebz (& TPU - thanx a lot cadaveca), the question is: where's the "AMD FX (Piledriver) OCers Club" thread? Even more important: has anyone tested it with the same cooling as me, i.e. TR's Venomous X & AS5? Or does the LCS that comes with the FX-8350 fare better than what I have? :toast:

P.S. I might change my mobo to Crosshair V Formula & OS to Win 7 Ultimate 64-bit for this CPU quite soon.
 
Yay, so this will be a good upgrade to my FX-4100.
 
Your chip is a good one; as with Dave's, most do not go over 4.5 GHz. So what is to say a good 8350 isn't going to do 5.5? We already saw that with Thuban and Deneb chips: "good" chips could run 4.5 GHz stable on air, "normal" chips did no more than 4 GHz. That is the same 500 MHz difference. With your complete and utter lack of knowledge of the facts, why are you speculating? If you really want to get silly about it, most people do not overclock, so the performance gains in multithreading at stock speeds make a massive difference.



See, that is a lot more normal.
I bin chips and must be lucky. I haven't had one (out of 10) do less than 4.5 GHz (ambient water). Voltage walls, and therefore temperatures, on these stupid TIM-under-the-IHS chips tend to go up after that, yep.

My 3570K does no more than 4.5 GHz.

It's very interesting how there is such a difference between my 3770K and my 3570K, too. I can run 1.4 V through the 3570K no problem and not break 85 °C.
As I'm sure you know, it's all about the leakage. ;)

With your complete and utter lack of knowledge of the facts, why are you speculating?
I'm sorry, my what? Any need for this disparaging comment? I suppose I deserve it for calling out that other guy... but let's stop, eh? I'm not an AMD guy, but I read the same forums you do and am not a muppet...

Anyway, they could. I said I hope I'm wrong; what gets you off my dick? Agreeing with you that it could hit 5.5 GHz? OK... it could commonly hit 5.5 GHz... only time will tell. WAIT! I already said that... hmmmm.
 
I bin chips and must be lucky. I haven't had one (out of 10) do less than 4.5 GHz (ambient water). Voltage walls, and therefore temperatures, on these stupid TIM-under-the-IHS chips tend to go up after that, yep.

I used to heavily bin chips, hence why my first-batch 1090T was kicking in the 4.5 GHz area. Heck, I have gotten my B97 eBay chip up to 4.6 GHz on an H70... I really do see these chips clocking much higher than listed in benchmarks. I am more on board for the better IMC in them than anything else.

My personal issue with them is there is no good Mini-ITX board for AM3 out there. Zotac has an 890GX board, but it only has a PCIe x1 slot. I need a full slot and 125 W support. I wish I could snag one for my deployment box, but it looks like I will be dealing with my little X3440 @ 4.2 :shadedshu.

I'm sorry, my what? Any need for this disparaging comment? I suppose I deserve it for calling out that other guy... but let's stop, eh? I'm not an AMD guy, but I read the same forums you do and am not a muppet...

You can take it as you want; I meant it as you are an Intel guy with very little AMD experience sitting and preaching that AMD is not as good.

Anyway, they could. I said I hope I'm wrong; what gets you off my dick? Agreeing with you that it could hit 5.5 GHz? OK... it could commonly hit 5.5 GHz... only time will tell.

They could do a lot of things. You don't know, and your blatant disregard for reading the reviews posted is obvious.
 
I'm not an Intel guy... outside of the fact that I use their CPUs (but not a fanboy, which is how I thought you meant that). If AMD performance matched Intel's, specifically for benchmarking, I would be ALL OVER THEM. Performance does drive me since I do benchmark, so it's clear why I own Intel, as they do better at HWBOT in 3D/2D.

I don't recall saying they weren't as good, either. Put it back in your pants, man; there is no battle here, just a (futile?) attempt to figure more things out about its performance. I've read our review, I've read this one. I see it's lacking in single-threaded performance but doing well in multithreaded performance that isn't FPU-heavy. Its pricing is incredible, making it a valid choice for anything these days.

Where am I wrong in that opinion?
 
As I'm sure you know, it's all about the leakage.

Yeah, but the change between the two is so large... far greater than anything I am really used to. I didn't see that with SNB at all.


I am going to ask AMD for a few more chips. Perhaps we can get some clocking going over the winter. You guys game for some challenges?
 
I'm not an Intel guy... outside of the fact that I use their CPUs (but not a fanboy, which is how I thought you meant that). If AMD performance matched Intel's, specifically for benchmarking, I would be ALL OVER THEM. Performance does drive me since I do benchmark, so it's clear why I own Intel, as they do better at HWBOT in 3D/2D.

I don't recall saying they weren't as good, either. Put it back in your pants, man; there is no battle here, just a (futile?) attempt to figure more things out about its performance. I've read our review, I've read this one. I see it's lacking in single-threaded performance but doing well in multithreaded performance that isn't FPU-heavy. Its pricing is incredible, making it a valid choice for anything these days.

Where am I wrong in that opinion?

Look closer at the FPU benchmarks AMD does poorly in; you will notice all AMD chips do poorly in them. It has nothing to do with AMD being weak at FPU; it has to do with specific benchmarks not using the technology available to them.

I would be willing to wager quite a large bet that in any multithreaded benchmark that allows AMD to utilize the technology at hand, instead of backdooring anything that isn't "GenuineIntel", AMD will perform better than its competition. What needs to happen is for some open x64 stuff to come out for encoding as well as video games; none of this Intel-branded, only-works-well-on-Intel stuff we see now. For anyone curious, a sketch of what checking the feature instead of the vendor looks like is below.
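
Here's a minimal sketch of my own (nothing from Cinebench's code or the review), using GCC's __builtin_cpu_supports, which reads the CPUID feature flags directly and doesn't care whether the vendor string says GenuineIntel or AuthenticAMD:

```c
/* Hypothetical dispatch sketch: pick a code path from the CPUID
 * feature flags, not from the vendor string. Builds with GCC/Clang. */
#include <stdio.h>

static void encode_avx(void)    { puts("using the AVX path"); }
static void encode_sse2(void)   { puts("using the SSE2 path"); }
static void encode_scalar(void) { puts("using the plain scalar path"); }

int main(void)
{
    /* __builtin_cpu_supports() checks the actual CPUID feature bits,
     * so a Piledriver chip with AVX takes the AVX branch just like
     * an Ivy Bridge chip would. */
    if (__builtin_cpu_supports("avx"))
        encode_avx();
    else if (__builtin_cpu_supports("sse2"))
        encode_sse2();
    else
        encode_scalar();
    return 0;
}
```

Compiled that way, an FX-8350 lands in the same fast branch a 3770K does.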
 
Compared to the identically priced 2500K: if a game is CPU-intensive, then Intel wins. Otherwise it's a tie. So Intel is better for people that game a lot, like me. AMD is bad in this competition as far as games are concerned.

Not quite true. In most games the frame rate is a good 60 or higher. There are a few where it drops down to 40 at some points with the most intensive settings. I can't see you complaining about a handful of poorly designed and poorly threaded games. Why not complain to the software developers about their pathetically poor design? Multithreaded games are the wave of the future. Single threading is an old, poor design and is dying.
 
While I do agree that not using the proper instructions is a huge deal, and in SOME tests it shows big differences while in others it does not (Cinebench showing little difference regardless of CPUID), I also think that it is an architectural thing too. As I'm sure you know, each Intel core (well, that WAS a core until AMD changed the definition, I guess) is FPU plus integer, whereas AMD's "modules" are two integer cores and one FPU(?). So I would imagine it goes both ways, since compared to an Intel chip it has the same number of FPUs (quad with HT) as an "octo" core (by AMD's definition). Rough tally below.
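
To put rough numbers on that module-vs-core comparison (my own tally from the commonly published specs, not from the review; the shared Piledriver FPU can also split into two 128-bit halves, which muddies it further):

```c
/* Rough FPU-resource tally for the two designs; figures are the
 * commonly published ones, not measurements from the review. */
#include <stdio.h>

int main(void)
{
    /* FX-8350: 4 Piledriver modules, each with 2 integer cores
     * sharing 1 FPU. */
    int fx_int_cores = 4 * 2;  /* 8 "cores" by AMD's definition */
    int fx_fpus      = 4;      /* 4 shared FPUs */

    /* i7-3770K: 4 cores with Hyper-Threading, one FPU per core. */
    int i7_threads = 4 * 2;    /* 8 hardware threads */
    int i7_fpus    = 4;        /* 4 FPUs */

    printf("FX-8350:  %d integer cores, %d FPUs\n", fx_int_cores, fx_fpus);
    printf("i7-3770K: %d threads, %d FPUs\n", i7_threads, i7_fpus);
    return 0;
}
```

So on FPU count alone the "octo" core and the quad with HT line up exactly, which is why FPU-heavy tests look like a quad-vs-quad fight.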
 
Yeah, but the change between the two is so large... far greater than anything I am really used to. I didn't see that with SNB at all.


I am going to ask AMD for a few more chips. Perhaps we can get some clocking going over the winter. You guys game for some challenges?

I'd love to see the waterblocked version OC'd with some grr. :)
 
While I do agree that not using the proper instructions is a huge deal, and in SOME tests it shows big differences while in others it does not (Cinebench showing little difference regardless of CPUID),

Cinebench is the only encoding benchmark that shows Intel ahead of the pack. It does not allow AMD processors to use AVX at all. Whenever AMD gets to use AVX, it quite honestly demolishes the competition, much like back in the P4 days when NetBurst ate video encoding up. And these chips have a lot less of a performance drop in other applications than the P4 did.

I also think that it is an architectural thing too. As I'm sure you know, each Intel core (well, that WAS a core until AMD changed the definition, I guess) is FPU plus integer, whereas AMD's "modules" are two integer cores and one FPU(?). So I would imagine it goes both ways, since compared to an Intel chip it has the same number of FPUs (quad with HT) as an "octo" core (by AMD's definition).

I am sure some of it is an architectural difference, which is why we are seeing AMD run well in multithreaded benchmarks, terribly in single-threaded IPC, and mediocre in a handful of honestly biased benchmarks.

As I said before, there is a reason AMD chips are being picked up for the server market. They are not bad, and in a massively multithreaded environment that is properly coded to make use of not only the new core hierarchy but also the technology available (AVX, SSE, etc.), they are actually quite good, oftentimes substantially better than an Intel alternative. It really comes down to using the right encoders that allow use of all of the parts of the AMD cores. Like Dave said, watching the power consumption during benchmarks, it is blatant the cores are idling through things instead of ramping up like they should.
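
For a taste of what an AVX code path actually buys an encoder, here's a toy sketch of my own (not from any real encoder): one 256-bit instruction processes eight floats, where SSE would take two 128-bit ops.

```c
/* Toy AVX example: add two arrays 8 floats at a time.
 * Build with: gcc -mavx avx_add.c */
#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
    float c[8];

    __m256 va = _mm256_loadu_ps(a);     /* load 8 floats at once */
    __m256 vb = _mm256_loadu_ps(b);
    __m256 vc = _mm256_add_ps(va, vb);  /* one 256-bit add */
    _mm256_storeu_ps(c, vc);

    for (int i = 0; i < 8; i++)
        printf("%.0f ", c[i]);          /* prints: 9 9 9 9 9 9 9 9 */
    printf("\n");
    return 0;
}
```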
 
Cinebench is the only encoding benchmark that shows Intel ahead of the pack. It does not allow AMD processors to use AVX at all. Whenever AMD gets to use AVX, it quite honestly demolishes the competition, much like back in the P4 days when NetBurst ate video encoding up. And these chips have a lot less of a performance drop in other applications than the P4 did.



I am sure some of it is an architectural difference, which is why we are seeing AMD run well in multithreaded benchmarks, terribly in single-threaded IPC, and mediocre in a handful of honestly biased benchmarks.

As I said before, there is a reason AMD chips are being picked up for the server market. They are not bad, and in a massively multithreaded environment that is properly coded to make use of not only the new core hierarchy but also the technology available (AVX, SSE, etc.), they are actually quite good, oftentimes substantially better than an Intel alternative. It really comes down to using the right encoders that allow use of all of the parts of the AMD cores. Like Dave said, watching the power consumption during benchmarks, it is blatant the cores are idling through things instead of ramping up like they should.
As far as the AVX, I have no idea. If that has to do with CPUID and such, the link I provided used a generic CPUID to force the use of all instructions, and Cinebench appeared to show no favorites. If it's beyond that, I will admit I have no clue.

I hear ya... and appreciate the information, cdawall. :)
 
As far as the AVX, I have no idea. If that has to do with CPUID and such, the link I provided used a generic CPUID to force the use of all instructions, and Cinebench appeared to show no favorites. If it's beyond that, I will admit I have no clue.

I hear ya... and appreciate the information, cdawall. :)

http://www.agner.org/optimize/blog/read.php?i=49

There is a little information there on the CPUID I was talking about. As for your generic CPUID, you are correct: there is zero optimization for a CPUID that doesn't exist. The issue is that, when run on a processor that supports AVX, Cinebench will let Intel CPUs that support AVX utilize it while not allowing the AMD ones to do so.
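
The pattern Agner describes boils down to something like the following; this is a hypothetical reconstruction of the anti-pattern, not Cinebench's actual code:

```c
/* Hypothetical reconstruction of the vendor-gated dispatch
 * anti-pattern Agner writes about. Not actual Cinebench code. */
#include <cpuid.h>
#include <string.h>
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;
    char vendor[13] = {0};

    /* CPUID leaf 0 returns the vendor string in EBX:EDX:ECX. */
    __get_cpuid(0, &eax, &ebx, &ecx, &edx);
    memcpy(vendor + 0, &ebx, 4);
    memcpy(vendor + 4, &edx, 4);
    memcpy(vendor + 8, &ecx, 4);

    /* Leaf 1, ECX bit 28 is the AVX feature flag. (Real detection
     * would also check OSXSAVE/XGETBV for OS support.) */
    __get_cpuid(1, &eax, &ebx, &ecx, &edx);
    int has_avx = (ecx >> 28) & 1;

    /* The anti-pattern: fast path only if vendor == GenuineIntel. */
    if (strcmp(vendor, "GenuineIntel") == 0 && has_avx)
        puts("fast AVX path");
    else
        puts("baseline path (even if this CPU supports AVX)");

    printf("AVX feature bit: %d, vendor: %s\n", has_avx, vendor);
    return 0;
}
```

Run that on a Piledriver chip and the feature bit comes back 1, but the vendor check still shunts it onto the slow path.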
 
While I do agree that not using the proper instructions is a huge deal, and in SOME tests it shows big differences while in others it does not (Cinebench showing little difference regardless of CPUID), I also think that it is an architectural thing too. As I'm sure you know, each Intel core (well, that WAS a core until AMD changed the definition, I guess) is FPU plus integer, whereas AMD's "modules" are two integer cores and one FPU(?). So I would imagine it goes both ways, since compared to an Intel chip it has the same number of FPUs (quad with HT) as an "octo" core (by AMD's definition).

I'll agree that the decoders that feed each module have been particularly affected in floating-point usage. That is unquestionable. That will be addressed in Steamroller; you can't expect all issues to be corrected in one generation. Steamroller will add another decoder to each module, I believe, to correct this issue. You could not accomplish that without first going down to the 28 nm process. I am confident, in spite of the mantras, that AMD will survive this down cycle and will then be able to ship significant improvements in a timely fashion. It had to get its house in order before moving forward in any revolutionary way. I think AMD has survived the worst of a painful reorganization and hopefully will be able to add engineering and marketing staff in another 12 months.
 
http://www.agner.org/optimize/blog/read.php?i=49

There is a little information there on the CPUID I was talking about. As for your generic CPUID, you are correct: there is zero optimization for a CPUID that doesn't exist. The issue is that, when run on a processor that supports AVX, Cinebench will let Intel CPUs that support AVX utilize it while not allowing the AMD ones to do so.
OK, so it is CPUID... (I read that Agner link, know that, mentioned it already above, and it was linked in my link to OCF; thank you again though!).

That said, if you look at the Cinebench test, none of the CPUIDs he used showed a difference. Am I wrong in thinking that this shows no bias, since there are no changes regardless of CPUID? Or, since he used an Atom CPU or something, would my thinking be off since it doesn't have AVX extensions? (Guessing here.)

Feel free to PM as we are drifting a bit... :)
 
Thanks for the review :)

There is one thing I'd like to know about the power consumption: what program did you run to put the full system under load, and what was the full-system draw measured with? I do think that 100 W is kinda low... My rig at idle, with one HD 6950, 2 hard drives, and 1 SSD, would be about 70-80 W, taken at the wall with the Kill-A-Watt (by the way, my UPS also reports about the same wattage).

Thanks if you can answer :D

BTW, AMD has some good performance. In some other reviews games aren't that much better; it's still a lot behind Intel in a lot of games, but in multi-thread it is quite good (single-thread, Intel seems to be faster). These CPUs are workstation/server chips at best; I would still use Intel for a low-power/performance desktop.
 
OK, so it is CPUID... (read that, know that... it was linked in my link; thank you again though!).

That said, if you look at the Cinebench test, none of the CPUIDs he used showed a difference. Am I wrong in thinking that this shows no bias, since there are no changes regardless of CPUID? Or, since he used an Atom CPU or something, would my thinking be off since it doesn't have AVX extensions? (Guessing here.)

Feel free to PM as we are drifting a bit... :)

This should be the last post unless shenanigans happen, but yes, you are correct: since the Atom lacks a huge number of instruction sets, there will be no variation.
 
SHENS! Thank you for bringing it back to a respectable, intelligent conversation free of disparaging remarks. This was fruitful IMO. :)
 
Thanks for the review :)

There is one thing I'd like to know about the power consumption: what program did you run to put the full system under load, and what was the full-system draw measured with? I do think that 100 W is kinda low... My rig at idle, with one HD 6950, 2 hard drives, and 1 SSD, would be about 70-80 W, taken at the wall with the Kill-A-Watt (by the way, my UPS also reports about the same wattage).

Thanks if you can answer :D

BTW, AMD has some good performance. In some other reviews games aren't that much better; it's still a lot behind Intel in a lot of games, but in multi-thread it is quite good (single-thread, Intel seems to be faster). These CPUs are workstation/server chips at best; I would still use Intel for a low-power/performance desktop.

[chart: power_idle.gif, the review's idle power consumption graph]


Even in CrossFire, 7970s pull less wattage at idle than your 6950.

SHENS! Thank you for bringing it back to a respectable, intelligent conversation free of disparaging remarks. This was fruitful IMO. :)

Now we can't have that! :laugh:
 
Much better than Bulldozer, but far away from the Core i5-3470 :(
9.0 is too much
 
Cinebench is the only encoding benchmark that shows Intel ahead of the pack. It does not allow AMD processors to use AVX at all. Whenever AMD gets to use AVX, it quite honestly demolishes the competition, much like back in the P4 days when NetBurst ate video encoding up. And these chips have a lot less of a performance drop in other applications than the P4 did.



I am sure some of it is an architectural difference, which is why we are seeing AMD run well in multithreaded benchmarks, terribly in single-threaded IPC, and mediocre in a handful of honestly biased benchmarks.

As I said before, there is a reason AMD chips are being picked up for the server market. They are not bad, and in a massively multithreaded environment that is properly coded to make use of not only the new core hierarchy but also the technology available (AVX, SSE, etc.), they are actually quite good, oftentimes substantially better than an Intel alternative. It really comes down to using the right encoders that allow use of all of the parts of the AMD cores. Like Dave said, watching the power consumption during benchmarks, it is blatant the cores are idling through things instead of ramping up like they should.

I guess AMD needs to pay these 'benchmark' coders to make better or more efficient use of AMD's tech. Again, $ should be spent on these so-called marketing tactics: push $ into their as* and the programs will start to favor AMD.
 
Thanks for the review :)

There is one thing I'd like to know about the power consumption: what program did you run to put the full system under load, and what was the full-system draw measured with? I do think that 100 W is kinda low... My rig at idle, with one HD 6950, 2 hard drives, and 1 SSD, would be about 70-80 W, taken at the wall with the Kill-A-Watt (by the way, my UPS also reports about the same wattage).

Thanks if you can answer :D

Better yet, a pic:

On that power bar are the PC (through the Kill-A-Watt clone), a lamp, the monitor, and a stereo. The bar plugs into its own 15 A / 120 V circuit, as well.

[photo: 001.jpg, the power bar and meter setup]



What I report is the average over a 1-hour period of a customized CPU-based load.

What's really amazing is that this system draws no more than 400 W gaming, with dual 7950s!
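
If anyone wants to turn those wall readings into energy numbers, the math is just watts times hours. A quick sketch with assumed figures (the 400 W is the gaming draw above; the electricity rate is made up):

```c
/* Back-of-envelope energy math for the wall readings above.
 * The 400 W figure is the gaming draw quoted; the hours and the
 * $/kWh rate are my assumptions for illustration. */
#include <stdio.h>

int main(void)
{
    double avg_watts = 400.0;   /* average draw during the window */
    double hours     = 1.0;     /* the 1-hour logging period */
    double rate      = 0.12;    /* assumed $/kWh, varies by locale */

    double kwh = avg_watts * hours / 1000.0;
    printf("Energy used: %.2f kWh (~$%.3f at $%.2f/kWh)\n",
           kwh, kwh * rate, rate);
    return 0;
}
```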
 