Parts that are clocked slower but perform better have an architectural advantage: their IPC is higher. Parts that require higher clocks to achieve the same performance are at a drawback: their IPC is lower.
Intel has the current advantage in IPC, whereas back during the AMD K7 and K8 era, AMD had a big IPC advantage over the Pentium 4's NetBurst (whose hottest chip was Prescott).
Higher clocks normally mean higher voltages and thus more heat, making for an inefficient chip in today's market. That may not matter much in a desktop, but in a laptop the design of a CPU makes a big difference.
The clock-for-clock comparison is meant to determine whether AMD has actually improved IPC over the first Bulldozer parts (by tweaking the internals to gain IPC without requiring a clock-speed hike, which would further increase heat and voltage); that's following their roadmap.
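To make the clock-for-clock idea concrete, here is a minimal sketch (with made-up benchmark numbers, not real Bulldozer scores): since performance is roughly IPC times clock, holding the clock fixed isolates the IPC change between two parts.

```python
# Hedged sketch with hypothetical numbers: performance ~ IPC x clock,
# so comparing scores at the SAME clock isolates the IPC difference.
def score_per_ghz(score, clock_ghz):
    """Benchmark points per GHz ~ relative IPC at a fixed clock."""
    return score / clock_ghz

gen1 = score_per_ghz(score=100, clock_ghz=4.0)  # first-gen part
gen2 = score_per_ghz(score=110, clock_ghz=4.0)  # tweaked part, same clock

ipc_gain = gen2 / gen1 - 1
print(f"IPC improvement at equal clocks: {ipc_gain:.0%}")  # 10%
```

If the clocks differ between the two runs, dividing by the clock first is what keeps the comparison "clock for clock".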
CPUs have to be fast in today's apps and tomorrow's.
Recognize rhetoric when you see it...
Higher-IPC parts can't really clock as high, which high-clocking parts like BD can exploit to make up the performance difference on that alone. Add its other capabilities, and it's not just a clocking chip for benchers.
The Intel-vs-AMD framing is outdated thinking, and past arguments and uArchs have no value here. Sure, Intel beats AMD in optimized compiled code and low task counts, while AMD has a good grip on parallel jobs and high task counts. I, for one, won't upgrade just to run a few apps at a time to get full performance.
Higher clocks "normally" need higher voltages, but that's not a hard rule, and it depends more on the process than on the uArch.
For low power usage in notebooks, it's nice to have tons of cash, your own fab, and several architectures developed in parallel, so you can pick the winner from each when it's done. When GloFo surpasses Intel in fabbing, we'll see AMD's chips use less power than Intel's.
You're probably stuck in the thinking that today's processing power still relies heavily on IPC alone. That may hold for low-task devices, which Celerons are great for, but today I want to run more on one machine, not just a game and that's it.
BD's longer instruction pipeline was a step toward concurrent processing, which is a "cure" for the limitations silicon brings. Maybe when graphene becomes a standard you can revisit the IPC idea with 100 GHz CPUs. For now, we don't need a faster pipeline as badly as we need better instruction management and more core integration per module for better concurrent processing.

A lot of hours-long work is done in concurrent tasks these days, not in one uber-fast task using only a quarter of the CPU because, hey, it doesn't know what else to do with a short pipeline. This is the important aspect of AMD's module "invention". If you look at Intel you'll see some improvements, but they are still limited to single-core processing with optimizations like HT, unless the programmer invests in multi-core programming, and not everyone invests as much as is needed to produce the best results. Some don't at all; besides Valve and one other, did anyone else look at a parallel game engine?
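The single-thread-vs-concurrency trade-off above is usually framed with Amdahl's law: extra cores only speed up the parallel part of a workload. A small sketch (the 90%-parallel fraction below is a hypothetical workload, not a claim about any real app):

```python
# Hedged sketch of Amdahl's law: why more cores can beat a faster
# single pipeline once software is actually written to go parallel.
def amdahl_speedup(parallel_fraction, cores):
    """Overall speedup when only `parallel_fraction` of the work scales."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Hypothetical workload that is 90% parallelizable:
print(f"{amdahl_speedup(0.9, 4):.2f}x on 4 cores")  # ~3.08x
print(f"{amdahl_speedup(0.9, 8):.2f}x on 8 cores")  # ~4.71x
```

It also shows the flip side of the argument: if the programmer doesn't invest in parallelism (parallel_fraction near zero), extra cores or modules buy almost nothing, and only IPC and clocks matter.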
We don't need that anymore. Unless we're talking about competitions; for actual work this is old tech, and AMD's "module" will bring the needed push forward. I'm still eager to hear a 4-cores-per-module announcement.
Clock-for-clock is a well-known comparison that has been used for as long as processors have been around. How else are you supposed to compare performance?
Time / completed task.
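That "time per completed task" metric is trivial to measure without knowing anything about the internals. A minimal sketch (the workload below is a hypothetical stand-in, not a real benchmark):

```python
# Minimal sketch: comparing machines by wall-clock time per completed
# task, the black-box metric, instead of clock-for-clock IPC.
import time

def time_per_task(workload, runs=3):
    """Average wall-clock seconds per completed run of `workload`."""
    start = time.perf_counter()
    for _ in range(runs):
        workload()
    return (time.perf_counter() - start) / runs

def sample_task():
    # Hypothetical stand-in for a real benchmark kernel.
    sum(i * i for i in range(100_000))

print(f"{time_per_task(sample_task):.4f} s/task")
```

Run the same workload on both machines and compare the two numbers; IPC, clocks, and platform differences are all folded into the result, which is exactly the point being made here.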
Unless you care about pointless or low-value data, that is. The people with the money who push the world forward, including the governments of big nations, aren't interested in the inner workings; they want to know how much output they get for this much input, and whether it's faster than its competitor or the older version.
Damn, car analogy
Old VW Scirocco vs. new VW Scirocco: not much of a difference performance-wise. Some like the old one because it feels like a real machine, others the new one for the comfort. Outside people are interested in these things, not PS-for-PS.
BTW, are all other aspects of the compared CPUs kept the same, so you can decide how much IPC is responsible for the performance? AFAIK, CPUs aren't the only thing that gets upgraded; the rest of the platform does as well, and that will muddy the findings, resulting in higher error margins.
Maybe for an engineer it has some meaning, but how many of you actually make CPUs for a living? /s