Tuesday, September 18th 2012

AMD Shows Off A10-5800K and FX-8350 Near IDF
It's traditional for AMD to camp outside an ongoing IDF event (in a nearby hotel suite), siphoning off a small portion of its visitors. Against the backdrop of this year's IDF in San Francisco, AMD showed off two of its upcoming flagship client processors: the socket FM2 A10-5800K "Trinity" APU and the socket AM3+ FX-8350 "Vishera" CPU. The two chips were shown powering fully loaded gaming PCs.
The FX-8350 was shown installed in a machine with an ASUS Crosshair V Formula (-Z?) motherboard, liquid cooling, and a Radeon HD 7970 graphics card. The chip was clocked at 5.00 GHz (4.80 GHz when the picture was taken) and was running popular CPU-intensive benchmarks such as wPrime and Cinebench. The A10-5800K was shown running application demos, including a widget that displays the real-time boost states of the processor and GPU cores.
Source: Hardware.fr
80 Comments on AMD Shows Off A10-5800K and FX-8350 Near IDF
(from Anandtech's E5-2660 review) Hey, AMD used to have their own too. Weird that they actually paid GlobalFoundries to get rid of their remaining stake in the company, no? This is a company that made a complete hash of 32nm while Intel was fabbing 22nm, and is presently ramping 28nm (with a tricky 20nm/gate-last transition still to come) while Intel is full steam ahead on 14nm. Presumably "GloFo surpassing Intel" involves some far-future date yet to be fixed, a magic wand, and a sprinkling of faerie dust.
As for manufacturing processes, Intel has been excellent in that department for many years now. Still, with AMD's limited funds and resources, they've kept up very well.
Also, you have to consider the performance benefit per core. Consider for a moment a quad-core Intel processor with hyper-threading: you have 8 threads to use, but if all 4 cores are doing the same task with the same resources, HT isn't going to boost the speed very much. BD, on the other hand, has dedicated integer resources for each thread, so the gain per thread holds up better once you go past the physical core count.
Hyper-threading helps when you're doing different tasks simultaneously, whereas BD (on paper) is better at doing similar tasks concurrently. BD has its architectural deficiencies, but AMD has more room to improve its IPC while saving a lot of die space. All in all, AMD is trying to use die area efficiently, because they know there will come a point where CPUs can't get any smaller (and we're slowly but surely getting to that point).
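To put rough numbers behind the HT-vs-module debate, here's a minimal scaling sketch (my own illustration, not from this thread): it runs the same integer-heavy loop on N threads and times the total, so you can compare how, say, 4 vs. 8 threads scale on an HT quad-core against a 4-module/8-thread FX. The loop body and iteration count are arbitrary assumptions, purely illustrative.

```c
/* Thread-scaling sketch: time the same integer workload at different
 * thread counts. Build: gcc -O2 -pthread scale.c -o scale
 * Usage: ./scale 4   then   ./scale 8   and compare total times.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define WORK 200000000UL /* iterations per thread; arbitrary assumption */

static void *worker(void *arg)
{
    (void)arg;
    volatile unsigned long acc = 0;      /* volatile keeps the loop alive */
    for (unsigned long i = 0; i < WORK; i++)
        acc += i ^ (acc >> 3);           /* pure integer ALU work, no FP */
    return NULL;
}

int main(int argc, char **argv)
{
    int n = (argc > 1) ? atoi(argv[1]) : 4;
    if (n < 1 || n > 64) n = 4;
    pthread_t tid[64];
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < n; i++)
        pthread_create(&tid[i], NULL, worker, NULL);
    for (int i = 0; i < n; i++)
        pthread_join(tid[i], NULL);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double s = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%d threads: %.2f s total\n", n, s);
    return 0;
}
```

If going from 4 to 8 threads barely moves the total time, the extra threads are sharing execution resources (the HT case on an integer-heavy load); if it stays roughly flat per thread, each thread has its own resources (the claim for BD's integer cores).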
All in all, yeah, Intel is winning the CPU game, but that doesn't mean they always will. Think about it: last year Intel had 54 billion USD in revenue and AMD had a little under 6.6 billion. The difference in size between these companies is huge; Intel simply has more resources and more money to invest in CPU innovation. I might also add that AMD has a GPU market to satisfy, so CPUs aren't their only game. In the end, I think that experience with GPUs is what will make AMD processors take off. AMD knows how to play the concurrency game.
Good luck this time, AMD! I hope it's faster than the PII, and much better than BD :eek:
Is that why no one with an Intel i-series gets above 3 GHz?
www.techpowerup.com/img/12-09-18/87b.jpg
9.06 = 5 GHz
8.73 = 4.8 GHz
www.xtremesystems.org/forums/showthread.php?276245-AMD-quot-Piledriver-quot-refresh-of-Zambezi-info-speculations-test-fans&p=5137550&viewfull=1#post5137550
Bulldozer has a 15 stage pipeline
Nehalem has a 16 stage pipeline
Sandy Bridge has an 18 stage pipeline
No wonder Nehalem and Sandy Bridge have lower power consumption: with more stages, they have fewer gates per stage!
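As a back-of-envelope illustration of that gates-per-stage point (my numbers, not measured figures for any of these cores): if you assume a fixed total logic depth and split it across more pipeline stages, each stage holds less logic and can clock higher, at the cost of latch overhead and a bigger misprediction penalty. The FO4 depth, latch overhead, and FO4 delay below are made-up assumptions just to show the shape of the trade-off.

```c
/* Gates-per-stage sketch: fixed total logic depth divided across N stages.
 * All constants are illustrative assumptions, not real design data.
 */
#include <stdio.h>

int main(void)
{
    const double total_fo4 = 250.0; /* assumed total logic depth, in FO4 delays */
    const double latch_fo4 = 3.0;   /* assumed flop/latch overhead per stage */
    const double fo4_ps    = 15.0;  /* assumed FO4 delay in picoseconds */
    const int stages[] = { 15, 16, 18 }; /* BD / Nehalem / SB, per the linked post */

    for (int i = 0; i < 3; i++) {
        double per_stage = total_fo4 / stages[i] + latch_fo4; /* FO4 per stage */
        double fmax_ghz  = 1000.0 / (per_stage * fo4_ps);     /* 1/ps -> GHz */
        printf("%2d stages: %5.1f FO4/stage -> ~%.2f GHz\n",
               stages[i], per_stage, fmax_ghz);
    }
    return 0;
}
```

With these toy numbers the 18-stage pipe clocks noticeably higher than the 15-stage one for the same total logic, which is the usual reason deeper pipelines exist; whether that nets lower power depends on the latch count and how hard you push the clock.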
courses.engr.illinois.edu/cs232/fa2011/lectures/l14.pdf
It's easier to flail around about megahurts and how IPC doesn't matter.
I think about 80% of that PDF is accurate, but I'd either need convincing for the other 20%, or would argue the advantages/disadvantages of some items, such as the compiler removing all hazards: how much time does that take if we run it in real time, vs. how much larger do we make a dataset by making items redundant to prevent issues (hard faults and stalls)?
Many things are due to x86 and its own issues, and the lack of programming in pure x64, as well as the almost inherent need to move to a contiguously mapped memory space with OS-controlled and OS-aware redundancy. Add to this hardware that schedules between OpenCL or CUDA transparently to the software (NOT DRIVER LEVEL!!!) and you'd raise application performance to the same level that mature platforms get.
msdn.microsoft.com/en-us/vstudio/aa496329
Yeah, faster than PAE.
x64 can do x86, but it takes longer.
x86 CPUs cannot run x64 code.
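One concrete way to see that one-way compatibility: an x86-64 CPU advertises "long mode" through CPUID (extended leaf 0x80000001, EDX bit 29) and keeps a compatibility mode for running 32-bit x86 code, while a 32-bit-only part has no mode that can decode x64 instructions at all. A minimal check, assuming GCC/Clang's <cpuid.h> (won't build with MSVC as-is):

```c
/* Check whether the CPU supports long mode (AMD64 / Intel 64). */
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* Extended function 0x80000001 returns feature flags in EDX. */
    if (!__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx)) {
        puts("CPUID extended leaf not supported (very old x86 CPU)");
        return 0;
    }
    if (edx & (1u << 29))   /* LM bit: long mode supported */
        puts("x64 CPU: has long mode, can also run x86 code");
    else
        puts("32-bit-only x86 CPU: no mode exists that can run x64 code");
    return 0;
}
```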
www.intel.com/content/dam/doc/manual/64-ia-32-architectures-optimization-manual.pdf
Troll less please. :mad: