Monday, July 11th 2011
AMD FX-8130P Processor Benchmarks Surface
Here is a tasty scoop of benchmark results purported to be those of the AMD FX-8130P, the next high-end processor from the green team. The FX-8130P was paired with a Gigabyte 990FXA-UD5 motherboard and 4 GB of dual-channel Kingston HyperX DDR3-2000 MHz memory running at DDR3-1866 MHz. A GeForce GTX 580 handled the graphics department, and the chip was clocked at 3.20 GHz (16 x 200 MHz). Testing began with benchmarks that aren't very multi-core intensive: in Super Pi 1M the chip clocked in at 19.5 seconds, while the AIDA64 Cache and Memory benchmark showed an extremely fast L1 cache, with L2, L3, and memory performance a slight improvement over the last generation of Phenom II processors. Moving on to multi-threaded tests, Fritz Chess yielded a speed-up of over 29.5x over the benchmark's baseline, with 14,197 kilonodes per second. The x264 benchmark encoded the first pass at roughly 136 fps and the second pass at roughly 45 fps. The system scored 3045 points in PCMark 7 and P6265 in 3DMark 11 (performance preset). The results suggest this chip will be highly competitive with Intel's LGA1155 Sandy Bridge quad-core chips, but as usual, we ask you to take the data with a pinch of salt.
Source:
DonanimHaber
317 Comments on AMD FX-8130P Processor Benchmarks Surface
Please, I don't want to enter this discussion; I just pointed out the obvious in that picture.
The FPU is an FMA unit, so it can issue 2x128-bit SSE FP operations, or 1x256-bit if one core needs it. It is modular in design:
4 fetch/decode/store per cycle for 32-bit
2 fetch/decode/store per cycle for 64-bit
and in theory, if there were registers for it, 1 fetch/decode/store per cycle for 128-bit
You can say that SPi favors Intel chips... but then again, if you want to go down that road, so does the majority of applications out there; any app benefits from the faster performance on 1155. Like I posted above, I don't really care if an app favors one over the other; the fact of the matter is that the end user gets better performance on 1155, not how Intel got there.
The important thing, for me as a user, is gaming performance. If Bulldozer has better game performance, then I'll be using Bulldozer in my gaming rig. If not, then Intel will stay in my gaming rig, because it's faster.
There's no fanboyism in any of my comments, or concerns... I am a high-end user, and I require the best solution possible because I chose to game on triple monitors, and whoever brings me the best results for my chosen configuration gets my cash.
After nearly two years with my Eyefinity rig, which began with a Crosshair III Formula and now uses a Gigabyte P67A-UD4-B3, with many boards and CPUs between the two, I can quite confidently say that as it stands right this moment, for gaming, Intel is better. The graphs in my reviews show by how much.
Bickering about things like this is just kinda foolish... but I have the time to do so. Which leaves me at this (which I stated earlier):
When I can go to my local store and buy Bulldozer, I will. I will see, firsthand, who is faster, and you can bet that I'll be reporting the results here in the forums when that happens. As a gamer, gaming is what's most important to me, and I see no gaming benchmarks... I see benchmarks that can relate to gaming performance, and what I see doesn't leave me impressed, or eager to spend my money on Bulldozer, but because I want to be sure you guys know the truth, I'll buy anyway.
256-bit will not be used.
They have a chart; they expect only 1% of applications to use 256-bit AVX. Read it up. It doesn't prevent heat, it accelerates heat dissipation.
The armour itself and its coating are resistant to heat, meaning they don't melt while dissipating heat over time.
I think the times when the FPU runs in its 256-bit config are not going to be that frequent for most of us.
edit: even the Core 2 Duo can do 128-bit SSE instructions in a single clock cycle. In 2006. When we were running 32-bit XP. And it helped performance even then.
We only talk about Zambezi, you hear?
And we are talking about module performance. Why bring up multiple sockets? We are talking about cores and the FPU, ugh, lol.
And what I'm saying (because again, your reading comprehension deficiencies create an artificial language barrier) is that this half-core nonsense isn't going to fly in the server world where AMD thinks it will. And maybe they know that and maybe that's part of why Zamboni is coming first.
*cough* Ahem?!
It's a full core, not a half core.
Prepare for TOP500!!!!! Interlagos AWAY!!
Again, we don't talk about Interlagos here mister!
--- Also, I saw the leak... the 4x 16-core Interlagos A1 @ 1.8 GHz finishing an F@H workload about twice as fast as the 4x 12-core Magny-Cours @ 2.5 GHz:
Interlagos: TPF 3 min 52 s
Magny-Cours: TPF 6 min 40 s
(Project 6901)
www.linuxforge.net/bonuscalc2.php gotta love that PPD
My point here is that there has never been any standard definition of what an x86 core contains. A single Zambezi "core" may share a number of resources with the other "core" on its module, but "cores" have been sharing resources with each other since the advent of multi-core processors. The question is, how much has to be shared before one can no longer call it a "core"? According to AMD, the Zambezi "cores" have retained enough that they still consider the processor an octal-core processor. According to Intel, the i7 2600's threads share enough resources that they consider it a quad core. Who's to argue with them?
Assimilator - Ivy Bridge will not be out this year. To say it will be out "by then" (assuming you mean Zambezi availability) is untrue.
Intel can't price AMD out of the market. There are too many governmental bodies, rightly or wrongly, watching their every move. There's plenty of room for faster "official" Sandy Bridge models, based on overclocking headroom, and you don't see them because they need AMD to live.
Ivy Bridge is Sandy Bridge, but on 22 nm. They will sell, lol, BY MILLIONS! 3.) Actually, AMD doesn't need to exist; only IBM does.
2.) No need for me to intervene.
1a.) Hyperthreading makes 2 threads; each thread has access to those shared resources, and they compete to use them.
1b.) Cluster-based multithreading is 2 cores with equal amounts of resources, and they do not compete for resources.
If you think that it is, then what components must not be shared to remain a "core"? I can guarantee that whatever you come up with, AMD disagrees, and it would be nothing more than their interpretation of a "core" vs yours. There is no standard of what components make up an x86 core!
And your mention of the memory controller on the north bridge emphasizes the fact that there is no standard of what components make up an x86 core!
It's an FMAC; it can do a 256-bit add+multiply.
Intel's solution is a 256-bit add then a 256-bit multiply (two 256-bit add or multiply AVX instructions, where AMD's Zambezi needs only one 256-bit add+multiply AVX instruction).
It is the same thing. It is 8 cores.
Some reviews showed a while back (like, years ago) that if you changed the CPU name reported to some benchmark programs, you would magically get better numbers. A VIA C7 that reported itself to the software as either an AMD or Intel processor improved its memory and per-clock performance. The per-clock change could be justified, as the VIA C7 acquired SSE3 support rather late and the software needed a patch, but the memory performance change was just BS.
And there has been no confirmation of the naming scheme to my knowledge.