# Intel "Haswell" Quad-Core CPU Benchmarked, Compared Clock-for-Clock with "Ivy Bridge"



## btarunr (Feb 1, 2013)

Russian tech publication OCLab.ru, which claims access to an Intel next-generation Core "Haswell" processor engineering sample (and an LGA1150 8-series motherboard!), wasted no time in running a quick clock-for-clock performance comparison with a current Core "Ivy Bridge" processor. For the comparison, it set both chips to run at a fixed 2.80 GHz clock speed (by disabling Turbo Boost, C1E, and EIST), indicating that the engineering sample in OCLab's possession doesn't go beyond that frequency. 

The two chips were put through SuperPi 1M, PiFast, and wPrime 32M. The Core "Haswell" chip is only marginally faster than Ivy Bridge, and in fact slower in one test. In its next battery of tests, the reviewer stepped up the iterations (load), putting the chips through single-threaded SuperPi 32M and multi-threaded wPrime 1024M. While wPrime performance is nearly identical between the two chips, Haswell crunched SuperPi 32M about 3 percent quicker than Ivy Bridge. It's still too early to call the performance difference between the two architectures. Intel's Core "Haswell" processors launch in the first week of June.
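As a quick sketch of the arithmetic behind a figure like that "about 3 percent" (the times below are hypothetical, not OCLab's actual scores): when both chips run the same fixed-work benchmark at the same locked frequency, the per-clock speedup falls straight out of the completion times.

```python
def per_clock_speedup(time_old: float, time_new: float) -> float:
    """Percentage per-clock speedup of the new chip over the old.

    Only meaningful when both chips ran the same workload at the same
    fixed frequency (Turbo Boost, C1E and EIST disabled), as in this test.
    """
    return (time_old / time_new - 1.0) * 100.0

# Hypothetical SuperPi 32M completion times, in seconds:
print(round(per_clock_speedup(515.0, 500.0), 1))  # -> 3.0
```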



 



*View at TechPowerUp Main Site*


----------



## btarunr (Feb 1, 2013)

Many Thanks to NHKS for the tip.


----------



## syeef (Feb 1, 2013)

Funny.


----------



## dj-electric (Feb 1, 2013)

I don't believe in clock-to-clock benchmarks and find them pointless.
1. A CPU can be as fast as another C2C, but the other comes naturally clocked higher.
2. A CPU can be overclocked much further than another.
So of course a Bugatti Veyron at 60 km/h would be as fast as a Fiat Uno at 60 km/h.

For those who want examples, benchmark an i7 920 against a 3770K C2C and see what I'm talking about.


----------



## The Von Matrices (Feb 1, 2013)

I think we all expected Haswell to focus on graphics, so a lack of improvement in x86 performance is not a surprise.



Dj-ElectriC said:


> For those who want examples, benchmark an i7 920 against a 3770K C2C and see what I'm talking about.



Does a 3770K really overclock much better than an i7-920?  I agree that the 3770K uses a lot less power and generates a lot less heat, but the maximum 24/7 clocks (without extreme cooling) for both are still in the low ~4GHz range.  The architectural differences account for a lot more of the performance difference between those chips than do the maximum clocks.


----------



## ...PACMAN... (Feb 1, 2013)

Dj-ElectriC said:


> I don't believe in clock-to-clock benchmarks and find them pointless.
> 1. A CPU can be as fast as another C2C, but the other comes naturally clocked higher.
> 2. A CPU can be overclocked much further than another.
> So of course a Bugatti Veyron at 60 km/h would be as fast as a Fiat Uno at 60 km/h.
> ...



You're wrong; clock-to-clock is the best way to benchmark and show the differences between the architectures of two different chips at the same speed. It's all about IPC.

I understand where you're coming from with regard to overclocking headroom, but that normally forms part of a FULL review as well.

Needless to say, my FX 4100 @ 3.6 is miles behind a 2600K @ 3.6
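For what it's worth, the IPC point can be made concrete with a small sketch (the instruction count and times below are made up for illustration): at a fixed frequency on a fixed workload, the chip that finishes sooner necessarily retired more instructions per cycle.

```python
def ipc(instructions: float, freq_hz: float, seconds: float) -> float:
    """Instructions per clock: instructions retired divided by cycles elapsed."""
    return instructions / (freq_hz * seconds)

# Hypothetical fixed workload of 1.2e12 retired instructions, both
# chips locked at 3.6 GHz (think FX 4100 vs. 2600K at 3.6):
ipc_a = ipc(1.2e12, 3.6e9, 400.0)  # finishes in 400 s
ipc_b = ipc(1.2e12, 3.6e9, 250.0)  # finishes in 250 s
print(ipc_b > ipc_a)  # the faster finisher has the higher IPC
```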


----------



## RejZoR (Feb 1, 2013)

I guess I'll be keeping my trusty Core i7 920 for another year or two. The chip was so good it's probably the longest-owned single CPU in any of my systems ever. It's been so long I don't even remember what year I bought it, which is unusual...


----------



## dj-electric (Feb 1, 2013)

...PACMAN... said:


> You're wrong; clock-to-clock is the best way to benchmark and show the differences between the architectures of two different chips at the same speed. It's all about IPC.



And what if a certain CPU is as fast as the other C2C, but at stock is clocked much, much higher, thus being faster? Then what's the point?


----------



## repman244 (Feb 1, 2013)

Dj-ElectriC said:


> And what if a certain CPU is as fast as the other C2C, but at stock is clocked much, much higher, thus being faster? Then what's the point?



The point is to see whether the architecture has made progress on the IPC front. Intel isn't changing its design drastically (like going for high clock, low IPC); it has followed a consistent pattern over the last few years.

A good example was when BD launched and was compared per clock to the older generation, and you could see that the IPC had decreased.

I agree that in the end it doesn't matter much, since you can "hide" that with clock speed, but for architecture comparisons it's the only way.


----------



## dj-electric (Feb 1, 2013)

repman244 said:


> The point is to see whether the architecture has made progress on the IPC front. Intel isn't changing its design drastically (like going for high clock, low IPC); it has followed a consistent pattern over the last few years.



Progress could also be the ability to work at a higher frequency. That's all I'm saying.


----------



## ...PACMAN... (Feb 1, 2013)

Dj-ElectriC said:


> And what if a certain CPU is as fast as the other C2C, but at stock is clocked much, much higher, thus being faster? Then what's the point?



That's a different point entirely. Obviously, if they are the same speed as each other at the same clocks, then the one clocked higher is faster. However, this is when other elements of the benchmark come into effect, i.e. does it clock higher? Have there been any power revisions or a die shrink?

In the case of the various Phenom II revisions, they could all pretty much be clocked to the same 3.8-4 GHz area and effectively (as they were the same architecture) gave the same performance at the same clocks.

C2C is best used between two different architectures.


----------



## repman244 (Feb 1, 2013)

Dj-ElectriC said:


> Progress could also be the ability to work at a higher frequency. That's all I'm saying.



Indeed, but it's still interesting to see what happened to the IPC.


----------



## DaJMasta (Feb 1, 2013)

I think it all comes down to the fact that they're testing engineering samples. Generally, these chips (especially early revisions) can't clock up to where the final product will be and come with dramatically reduced clocks as a result. Clock-vs.-clock comparisons are valid for comparing different architectures (even if they're very much the same, as we see here), but in this case they may be all we'll see until we hear word of launch pricing and clock speeds.

There's still plenty of room for Haswell to be an impressive option compared to Ivy Bridge (clock speed, power consumption, graphics performance, etc.), but we at least now know that in some kinds of tasks, performance is basically identical. I think their architecture improvements will make some benchmarks show a much larger difference, but apparently calculating pi or prime numbers hasn't gotten a performance boost since Ivy Bridge.


----------



## LAN_deRf_HA (Feb 1, 2013)

So who gets to do the preview this time? Anandtech got it the first time, Tom's next. I hope it goes back to Anandtech.


----------



## Mathragh (Feb 1, 2013)

When it comes to the SuperPi benchmarks: this could simply be the result of Intel moving away from optimizing the ancient x87 instruction set in favor of something a bit more modern. 

I'm not sure, however, what is up with the wPrime results, as that program should be able to use newer instruction sets. According to http://www.realworldtech.com/haswell-cpu/, Haswell's IPC should be substantially higher than Ivy Bridge's. I guess we can only wait and see, but I don't believe these results are 100% accurate.


----------



## Pehla (Feb 1, 2013)

I believe Intel said they are working on power consumption for the Haswell CPU and iGPU; they're aiming at notebooks with this one. I may be wrong, but this comparison just confirms my thoughts.


----------



## RejZoR (Feb 1, 2013)

Generally, if per-clock performance is high, you also get very high performance at higher clocks, unless the CPU cannot be clocked high for some weird reason. But pretty much all of them go to 4 GHz these days...


----------



## SonDa5 (Feb 1, 2013)

Cool to see some leaked benchmarks. I think they are legit.

All of these benchmarks are greatly affected by memory bandwidth as well. I am looking forward to seeing how much Haswell's integrated memory controller has improved for overclocking memory. Scores with the CPUs at the same speed and the maximum memory overclock on each would show just how much stronger Haswell is overall.


----------



## Prima.Vera (Feb 1, 2013)

How high in GHz can these chips go?


----------



## NC37 (Feb 1, 2013)

Hey uhh Intel...yeah, AMD called, they want their mediocre speed bump back.


----------



## Nordic (Feb 1, 2013)

This is just one side of the story, practically just a teaser. Now we just need a full *cough* TPU *cough* review.


----------



## Dent1 (Feb 1, 2013)

...PACMAN... said:


> Needless to say, my FX 4100@3.6 is miles behind a 2600K@3.6



Are we supposed to be surprised that your cheap, low-end FX 4100 quad-core is miles behind an expensive, high-end quad-core, eight-threaded 2600K?


----------



## Rowsol (Feb 1, 2013)

NC37 said:


> Hey uhh Intel...yeah, AMD called, they want their mediocre speed bump back.


----------



## qubit (Feb 1, 2013)

So there's almost no performance boost with a "brand new" architecture?

Nice to see AMD providing stiff competition to Intel. 

We'll only gain if it clocks higher and has a proper soldered heatspreader and that remains to be seen.


----------



## Prima.Vera (Feb 1, 2013)

qubit said:


> So there's almost no performance boost with a "brand new" architecture?
> 
> Nice to see AMD providing stiff competition to Intel.
> 
> We'll only gain if it clocks higher and has a proper soldered heatspreader and that remains to be seen.


Maybe I am wrong, but this next gen will probably be some minor tweaking over the previous gen, an increase in transistor count, and maybe higher frequencies. This is how Intel plans for idiots to change their mobos to new ones. A dark deal made with the mobo manufacturers.


----------



## Ikaruga (Feb 1, 2013)

Anton Shilov and his findings on the Interwebs... I'd better wait for something more credible, even if he is right this time somehow.


----------



## qubit (Feb 1, 2013)

Prima.Vera said:


> Maybe I am wrong, but this next gen will probably be some minor tweaking over the previous gen, an increase in transistor count, and maybe higher frequencies. *This is how Intel plans for idiots to change their mobos to new ones.* A dark deal made with the mobo manufacturers.



Very well said. At this rate, I'll be sticking to my trusty 2700K Sandy Bridge CPU.


----------



## Ikaruga (Feb 1, 2013)

qubit said:


> Very well said. At this rate, I'll be sticking to my trusty 2700K Sandy Bridge CPU.



If I have to guess, I think it will be about much lower power consumption instead, since that's the area Intel is focusing on the most lately.


----------



## tacosRcool (Feb 1, 2013)

So power consumption is the only thing Haswell has over Ivy Bridge, since there isn't a big performance difference.


----------



## Aquinus (Feb 1, 2013)

I'm feeling pretty good about investing in a skt2011 rig right now with IVB-E down the road.


----------



## FreedomEclipse (Feb 1, 2013)

No need to upgrade from my 2500k, I guess... money well spent!


----------



## iO (Feb 1, 2013)

This is either a fake, a very early ES or Intel goes the Microsoft route and says "Screw you desktop, them all want mobiles!"...


----------



## Aquinus (Feb 1, 2013)

iO said:


> ... or Intel goes the Microsoft route and says "Screw you desktop, them all want mobiles!"...



A number of confirmed changes to Haswell could support this. Intel is definitely playing the power consumption card, and they're going to beat it to death. Intel's CPUs are plenty fast already. I think they're working on the easier things to improve at this point, because you can only get clock speeds and IPC so high before you run into diminishing returns.

If Intel can get a CPU to consume less power but perform just as well, that's a win.


----------



## Crap Daddy (Feb 1, 2013)

Can't say I'm too surprised. The desktop era is coming to a close, and fast. Haswell has to offer competitive TDP and power consumption in the x86 vs. ARM war. It's the future, man. Everybody has gone insane with the mobile stuff. Intel has to deliver, very soon, chips that will make the ultrabooks and Surfaces or whatever smack the iPads and Nexuses on the head from points of view other than sheer performance (which is unquestionable).


----------



## Melvis (Feb 1, 2013)

So there's no point for anyone to upgrade to this CPU/socket unless you're still running Socket 775 or AM2? 

A good chance for AMD to catch up then, I guess, if this is true?


----------



## Frick (Feb 1, 2013)

Melvis said:


> So no point for anyone to upgrade to this CPU/Socket unless your running a Socket 775 or AM2 still?
> 
> Good chance for AMD to catch up then i guess if this is true?



If you're an average, "normal" user, there's still no point.


----------



## Melvis (Feb 1, 2013)

Frick said:


> If you're an average, "normal" user, there's still no point.



Too true; I'm talking more about high-end junkies and gamers.


----------



## radrok (Feb 1, 2013)

At this point the only exciting release will be the new Ivy lineup for Socket 2011.
I mean, I'm all for refining and cutting power consumption, but as a hardware addict that's just not enough; I want performance on top of it.

Let's just hope Intel goes wild on the core count for Socket 2011.


----------



## phanbuey (Feb 1, 2013)

Just wait and see. I mean, SuperPi and wPrime are not exactly all-encompassing benchmarks. My Sandy Bridge laptop gets close to those numbers in SuperPi, but I guarantee it would get stomped by a Haswell or Ivy quad in everything else.


----------



## Easy Rhino (Feb 1, 2013)

Am I missing something? Take CPU A at 2.8 GHz. Take CPU B, which can run much faster than that, and bring it down to the speed of CPU A. How is that a good comparison of the two CPUs? After all, you are spending your money on what the processor can do... It's not like I'm going to buy CPU B, downclock it, and then act disappointed at the results...


----------



## qubit (Feb 1, 2013)

Easy Rhino said:


> Am I missing something? Take CPU A at 2.8 GHz. Take CPU B, which can run much faster than that, and bring it down to the speed of CPU A. How is that a good comparison of the two CPUs? After all, you are spending your money on what the processor can do... It's not like I'm going to buy CPU B, downclock it, and then act disappointed at the results...



It's a clock for clock comparison to show the architectural improvements, so it makes sense to do this. Only if the new architecture has something up its sleeve with higher clocks will it offer any advantage to performance enthusiasts (us lot).

According to Intel's slides a while back, Haswell has some wicked overclocking features, so that might be enough for us to upgrade our SB/IB to it if it clocks significantly higher. It'll be on the same 22nm process however, so I wouldn't be surprised if it doesn't.


----------



## BrooksyX (Feb 1, 2013)

It's funny. I used to buy budget CPUs and would upgrade almost every year. But this time around I decided to go with the high end (2500k), and I really see no reason to upgrade my CPU for at least another 1.5-2 years.


----------



## Easy Rhino (Feb 1, 2013)

qubit said:


> It's a clock for clock comparison to show the architectural improvements, so it makes sense to do this. Only if the new architecture has something up its sleeve with higher clocks will it offer any advantage to performance enthusiasts (us lot).
> 
> According to Intel's slides a while back, Haswell has some wicked overclocking features, so that might be enough for us to upgrade our SB/IB to it if it clocks significantly higher. It'll be on the same 22nm process however, so I wouldn't be surprised if it doesn't.



I understand that. We want to see if the new line has architectural improvements. But all my wallet cares about is how much faster it is going to load programs and perform mathematical computations.


----------



## qubit (Feb 1, 2013)

Easy Rhino said:


> I understand that. We want to see if the new line has architectural improvements. But all my wallet cares about is how much faster it is going to load programs and perform mathematical computations.



Indeed, it's a bit like the old Athlon XP / Pentium 4 situation from a decade ago, isn't it? The Athlon was more IPC-efficient, but the P4 clocked higher, so it won even though it was so inefficient.

The answer you're looking for (and so is everyone else, lol) will come when the official reviews are out. It's just that, to me, the fact it's on the same 22nm process means it will perform similarly to IB.


----------



## FreedomEclipse (Feb 1, 2013)

qubit said:


> I think the fact it's on the same 22nm process means it will perform similarly to IB.



or a superclocked SB


----------



## cadaveca (Feb 1, 2013)

...PACMAN... said:


> That's a different point entirely. Obviously if they are the same speed as each other at the same clocks, then the one clocked higher is faster. However, this is when other elements of the benchmark come into effect, i.e. does it clock higher? Has there been any power revisions or die shrink?
> 
> In the case of various phenom II revisions they could all pretty much be clocked to the same area of 3.8/4Ghz but effectively(as they were the same architecture) gave the same performance at the same clocks.
> 
> C2C is best used when it's between two different architecures.



Not in this instance. For IVB, the CPU cache speed is directly linked to the core clock; they run at the same speed.

So by downclocking an IVB chip, you are not reporting actual performance; you are reporting gimped performance, with the L3 running at a lower speed than intended.

Haswell breaks this design and clocks the L3 independently, so a C2C comparison at low clocks doesn't tell you anything except what a hobbled IVB does against a non-hobbled Haswell.

Which makes this comparison pointless, and that's why it was allowed. It's not a "real" performance comparison.


----------



## qubit (Feb 1, 2013)

@Easy Rhino

Looks like Cadaveca's answered your question nicely - this performance test isn't valid.


----------



## Jorge (Feb 1, 2013)

If those benches are accurate, Haswell is a bust just as IB was.


----------



## cadaveca (Feb 1, 2013)

Jorge said:


> If those benches are accurate, Haswell is a bust just as IB was.



That's what I think, and have thought, for many, many months.

This is not the first time Haswell has been shown running publicly.

However, I need a board, and to clock a chip myself, before I am 1000% sure of that.


----------



## TheHunter (Feb 1, 2013)

I don't think it's a bust. I mean, look at what Haswell brings to the table:
http://www.anandtech.com/show/6355/intels-haswell-architecture/6
If all these changes translate into a lousy 5-10% increase, then they need to do some serious work.

But then again, like someone said, why improve dead x87 code anyway? IMO those leaked benches mean squat. I say bring on real applications and games that will love the bigger registers, more execution resources, faster single-threaded optimizations, and whatnot.


----------



## iLLz (Feb 1, 2013)

Just stop. I don't think these are legit at all. Anandtech's Haswell architecture article leads me to believe we can expect a 10-20% IPC increase, depending on workload.

Also this: https://twitter.com/FPiednoel/status/296459612377468928

Edit: Also check out some of his other tweets in his timeline. He clearly states that if you have a "healthy" Haswell, under no circumstances will it be slower than IVB. He is a Principal Engineer / Performance Architect at Intel, so I am inclined to believe him.


----------



## TheMailMan78 (Feb 1, 2013)

Well this old 2600K isn't going anywhere, anytime soon I guess. Mobo.......that might be a different story. Looking forward to a Cadaveca review to judge.


----------



## phanbuey (Feb 1, 2013)

Haswell is about power efficiency at the same performance, with a lower thermal envelope. I am sure that if they had wanted to, they could have crammed 4 more cores in there for the same TDP, and then it would have whooped IB in wPrime.


----------



## iLLz (Feb 1, 2013)

One thing to note, guys, is that Haswell's main goals are to improve single-threaded workloads and lower power consumption. According to the Twitter feed I posted earlier from the Intel architect, single-threaded performance helps with perceived snappiness. And don't forget: if single-threaded performance is better, multi-core performance increases as well.


----------



## cadaveca (Feb 1, 2013)

TheMailMan78 said:


> Well this old 2600K isn't going anywhere, anytime soon I guess. Mobo.......that might be a different story. Looking forward to a Cadaveca review to judge.



I hope I get a chip before the launch.

I don't really think Haswell will be all that bad, but it strikes me as odd that they'd go back to the Nehalem clock-domain design for Haswell, unless they want OC to be more interesting. The only reason they'd invest that effort, and further complicate the chip, is, to me, because clock-for-clock Haswell is a bit worse off.

Many parts of the internal cache have been doubled, the L3 has changed too, and there are lots of other changes... Haswell should be good, better than IVB, but for me it is too early to even guess what those improvements will bring. I'd be happy with exactly the same performance, faster RAM clocks (to help the cache subsystem), and lower power draw. IVB is pretty damn fast as is.


----------



## brandonwh64 (Feb 1, 2013)

TheMailMan78 said:


> Well this old 2600K isn't going anywhere, anytime soon I guess. Mobo.......that might be a different story. Looking forward to a Cadaveca review to judge.



Why change boards if you do not OC?


----------



## cadaveca (Feb 1, 2013)

brandonwh64 said:


> Why change boards if you do not OC?



His current board lacks proper Windows 8 drivers?


----------



## TheMailMan78 (Feb 1, 2013)

cadaveca said:


> His current board lacks proper Windows8 drivers?



This. Windows 8 just doesn't seem to like this board and as always Asus likes to forget the last generation of boards.


----------



## brandonwh64 (Feb 1, 2013)

cadaveca said:


> His current board lacks proper Windows8 drivers?



Ahhh good point. Same with my WLAN drivers on windows 8


----------



## Samskip (Feb 1, 2013)

Ikaruga said:


> If I have to guess, I think it will be about much lower power consumption instead, since that's the area Intel is focusing on the most lately.



Am I seeing this wrong? The 3770K has a TDP of 77 W and the 4770K will have a TDP of 84 W.
That's not really what I'd call lower power consumption. Or does Intel only mean the mobile ones?
Kinda weird.


----------



## Covert_Death (Feb 1, 2013)

...PACMAN... said:


> You're wrong, clock to clock is the best way to benchmark and show differences in the architecture of two different chips at the same speed. It's all about IPC.
> 
> I understand where you are coming from with regards overclocking headroom but that normally forms part of a FULL review also.
> 
> Needless to say, my FX 4100@3.6 is miles behind a 2600K@3.6



It's the ONLY point of doing clock-to-clock, though (to see arch differences). But if you're trying to determine the better CPU, you must do STOCK vs. STOCK or MAX OC vs. MAX OC.


----------



## jihadjoe (Feb 1, 2013)

Samskip said:


> Am I seeing this wrong? The 3770K has a TDP of 77 W and the 4770K will have a TDP of 84 W.
> That's not really what I'd call lower power consumption. Or does Intel only mean the mobile ones?
> Kinda weird.



Power regulation is integrated into Haswell, so it adds a bit to the TDP. But that's a component that isn't on the motherboard anymore, so total platform power consumption should still be less than Ivy's.


----------



## Darkleoco (Feb 1, 2013)

Guess the 2600K will stay where it is for quite some time, might be time to get a top of the line motherboard though.

Power Consumption be damned when I don't get to use my desktop half the year.


----------



## Aquinus (Feb 1, 2013)

Darkleoco said:


> Power Consumption be damned when I don't get to use my desktop half the year.



Power consumption be damned anyway. I didn't get an SB-E chip to sip power. I bought it to shovel power down like a little kid with a bucket of candy... all year round... 24/7. 

All things considered, though, if Haswell lowers power consumption even further, I might be in the market for a new laptop after Haswell mobile CPUs become mainstream.


----------



## erocker (Feb 1, 2013)

Pretty sure this is fake.


----------



## Jstn7477 (Feb 1, 2013)

I'd love to pick up a Haswell soon. Power consumption really matters for distributed computing, since the machine is under full load 24/7, and I'm glad they keep advancing CPUs in that way even if the performance increases are minimal. It's why I've stopped buying AMD processors: their power consumption is rather high and the performance is lower in many cases. Sure, the upfront price is "cheap," but you end up paying for it in your electric bill and cooling costs, especially if you live in a hot region.


----------



## Darkleoco (Feb 1, 2013)

Aquinus said:


> Power consumption be damned anyways. I didn't get a SB-E chip to sip power. *I bought it to shovel power down like a little kid with a bucket of candy*... all year round... 24/7.


----------



## happita (Feb 1, 2013)

I won't mind if performance isn't increased much, because comparing my current SB setup to a future Haswell setup, I'm sure the performance bump will be there regardless, just because it's on a smaller manufacturing process and will draw less power (higher OC potential). However, I do wish they'd release different iterations of Haswell without the graphics on the die.
A 4770S would be perfect for the modern HTPC I plan on building for my HDTV in the near future.


----------



## sergionography (Feb 1, 2013)

qubit said:


> So there's almost no performance boost with a "brand new" architecture?
> 
> Nice to see AMD providing stiff competition to Intel.
> 
> We'll only gain if it clocks higher and has a proper soldered heatspreader and that remains to be seen.



From my understanding of the Haswell presentations, a lot of the focus was on the new voltage regulation and how Haswell can scale much better to different TDPs. Ivy Bridge wasn't designed with 22nm in mind; it was just a die shrink. With Haswell, Intel can use all the bells and whistles of the process node.




Ikaruga said:


> If I have to guess, I think it will be about much lower power consumption instead, since that's the area Intel is focusing on the most lately.



It's more than just power consumption; it's scaling. Ivy Bridge brought much better performance/watt over Sandy Bridge due to the shrink, but couldn't scale any higher than Sandy did because it's the same architecture.



Aquinus said:


> A number of confirmed changes to Haswell could support this. Intel definitely is playing the power consumption card and they're going to beet it to death. Intel's CPUs are plenty fast already. I think they're working on the easier things to improve at this point because you can only get clock speeds and your IPC so high before you run into the diminishing returns problem.
> 
> If Intel can get a CPU to consume less power but perform just as well, that's a win.



And this is what AMD saw when designing Bulldozer, except they jumped ship a bit too early, and this is when their multi-core scaling design starts to pay off. Intel, on the other hand, is working on its strengths and making all the new instruction sets multi-core ready.



Easy Rhino said:


> am i missing something? take cpu A at 2.8 ghz. take cpu B which can do much faster than that and bring it down to the speed of cpu A. how is that a good comparison of the two cpus? After all you are spending your money on what the processor can do... It's not like i am going to buy cpu B and downclock it and then act disappointed at the results...



No one here is disappointed at the result, only at the progress, if this is true. But like many have mentioned here, this doesn't tell us anything about power consumption and TDP, which certainly are very important. Ivy brought substantial performance/watt gains over Sandy Bridge, but enthusiasts didn't benefit much because Ivy didn't scale well. There is a reason the highest Ivy Bridge is rated at 77 W and not 95 W like Sandy, and no, it's not the new standard; it's because the architecture doesn't scale well to higher TDPs without running into binning problems or whatnot. The only way to get there is by adding more cores, and what was impossible with Sandy Bridge (an 8-core CPU) might now be possible with Ivy Bridge-E when it comes out; maybe then people will realize the benefit of performance/watt.

Haswell, on the other hand, if Intel isn't bluffing, is supposed to have a much wider operating-voltage range. That doesn't just mean lower TDP, like some understood it, but also higher TDP for the higher-voltage models, which means higher stable clocks without running into problems.



cadaveca said:


> Not in this instance. For IVB, the CPU cache speed is directly linked to the core clock; they run at the same speed.
> 
> 
> So by downclocking an IVB chip, you are not reporting actual performance; you are reporting gimped performance, with the L3 running at a lower speed than intended.
> ...



So are you suggesting Ivy Bridge has really bad performance scaling with clock speed? Well, this is exactly why Haswell took the other approach. So no, it's a fair comparison; as fair as it can get, actually, because not every Ivy Bridge chip is sold at a 3.4 GHz clock speed.




Samskip said:


> Am I seeing this wrong? The 3770K has a TDP of 77 W and the 4770K will have a TDP of 84 W.
> That's not really what I'd call lower power consumption. Or does Intel only mean the mobile ones?
> Kinda weird.




I second what I said earlier: there is a reason the highest quad-core Ivy is rated at 77 W and not 95 W like Sandy. Clocking it to the 95 W envelope would push it too near its limit and into binning and stability problems for comfort, not to mention disappointment for overclockers (who already saw no benefit moving from Sandy) because of a CPU clocked near its limit. Haswell is supposed to address that issue.


----------



## Patriot (Feb 1, 2013)

*slow*

431s for wPrime 1024M... darn, that's slow.

I can do it in ~30s.


----------



## Ikaruga (Feb 1, 2013)

Samskip said:


> Am I seeing this wrong? The 3770K has a TDP of 77 W and the 4770K will have a TDP of 84 W.
> That's not really what I'd call lower power consumption. Or does Intel only mean the mobile ones?
> Kinda weird.



I'm aware of those numbers; I was just making a guess about how things might go with power consumption.


----------



## Darksword (Feb 1, 2013)

I was hoping for a better IPC increase.

Guess my 4.0GHz i7-930 will stick around for whatever comes after Haswell.


----------



## TheGuruStud (Feb 1, 2013)

If the real results are anywhere close, I want an apology from every fanboy in the world claiming a 40% increase from Nehalem (like there was anything from Nehalem to IVB, LOL).



Darksword said:


> I was hoping for a better IPC increase.
> 
> Guess my 4.0GHz i7-930 will stick around for whatever comes after Haswell.



That's what I've been recommending for everyone. Either that, or just upgrade to IVB now if you want a newer CPU/MB, because Haswell isn't bringing anything new.


----------



## esrever (Feb 1, 2013)

Patriot said:


> 431s for wprime 1024m ... Darn thats slow.
> 
> 
> I can do it in ~30s



you sure are good at math


----------



## Jack1n (Feb 1, 2013)

Samskip said:


> Am I seeing this wrong? The 3770K has a TDP of 77w and the 4770K will have a TPD of 84w.
> That's not really what I call lower power consumption. Or does Intel only mean the mobile ones?
> Kinda weird.



TDP stands for thermal design power; it describes how much heat the chip creates. If I remember correctly, the 4770K is clocked slightly higher, which could account for the higher TDP, although I suspect temps will be lower if Intel does the IHS properly this time around.


----------



## Patriot (Feb 1, 2013)

esrever said:


> you sure are good at math



lol...
An unfair comparison, but my AMD rig creams that score...
http://hwbot.org/submission/2346765_


----------



## Aquinus (Feb 2, 2013)

Patriot said:


> 431s for wprime 1024m ... Darn thats slow.
> I can do it in ~30s



Don't be cocky. Not everyone has a 4P server to fold with. Run SuperPi instead and that result will change pretty quickly, because you won't be using 48 cores. 
My 3820 @ 4.3 GHz did it in 215s.


----------



## Patriot (Feb 2, 2013)

Aquinus said:


> Don't be cocky. Not everyone has a 4P server to fold with. Run SuperPi instead and that result will change pretty quickly because you won't be using 48 cores.
> My 3820 @ 4.3ghz did it in 215s.



It actually wasn't that bad... it just didn't set any world records.
At 3.8 GHz, Magny-Cours is pretty potent.  It was folding stable at 3.48 GHz... a 75% overclock.  
And it's a personal rig; built her from the ground up.  Cost less than many an SB-E rig.

But yes, high-clocked CPUs have their place.  Though being a folder, I tend to drift toward more threads.  I do enjoy an SB-E as a daily driver.


----------



## Totally (Feb 2, 2013)

Dj-ElectriC said:


> I don't believe in clock to clock benchmarks and find them pointless.
> 1. A CPU can be as fast as another C2C but the other comes naturally clocked higher
> 2. A CPU can be overclocked much further than another
> So, of course a bugatti veiron at 60km/h would be as fast as fiat uno at 60km/h
> ...



Well, your car analogy is very wrong, and clock-for-clock tests do matter to some people: for example, current Ivy owners who aren't sold on just the e-peen factor. This shows them that a simple trip into the BIOS and the tweak of a few parameters will net them similar performance without spending money on a new chip. It also matters to those who are in a position to upgrade now and are faced with the question "upgrade now or wait for Haswell?"

The correct analogy would be pitting a stock Gallardo (490 hp) against a Gallardo LP 570-4 Superleggera (de-tuned to 490 hp) and seeing that they make roughly the same times around any given track.


----------



## Redspeed93 (Feb 2, 2013)

NC37 said:


> Hey uhh Intel...yeah, AMD called, they want their mediocre speed bump back.



Yeah, if these graphs are true it's kinda sad. It's partially AMD's fault: if they were actually pushing innovation and making some decent chips, Intel wouldn't be able to produce chips like this and still make a profit. But they will as long as AMD fails to impress.


----------



## FreedomEclipse (Feb 2, 2013)

Redspeed93 said:


> as long as AMD fails to impress.



AMD has been impressing in other areas, such as APUs. As a whole, AMD hasn't targeted the high-end market in a long while.


----------



## Redspeed93 (Feb 2, 2013)

FreedomEclipse said:


> AMD has been impressing in other areas - such as APUs. as a whole, AMD hasnt targeted the high end market in a long while.



But that's not really the topic at hand here. The sad truth is that there is very little reason right now to put an AMD CPU in your desktop PC.

Success with APUs or not, that still means quite a substantial loss of potential profits.


----------



## Thefumigator (Feb 3, 2013)

Redspeed93 said:


> But that's not really the topic at hand here. The sad truth is that there is very little reason right now to put an AMD CPU in your desktop PC.
> 
> Succes with APUs or not, that still means quite a substantial loss of potential profits.



The FX 8320 is only 179 bucks, and it's a better CPU than anything Intel offers at the same price. Unless you are just a gamer, there are several reasons to choose AMD; if you treat a desktop as a desktop, and not as a gaming-only device, then AMD has some shine. Just saying, giving them zero credit is somewhat extreme.

On the other hand, I feel APUs are somewhat pointless on the desktop; I prefer them in laptops.


----------



## Aquinus (Feb 3, 2013)

Thefumigator said:


> On the other hand, I feel APUs are somewhat pointless on desktop, I prefer them on laptops.


I don't know about that. With the right hardware you could build an ultra-portable desktop using something like these. ITX chassis tend to get pretty small, and Trinity has a lot to offer.
ASRock FM2A85X-ITX FM2 AMD A85X (Hudson D4) HDMI S...
AMD A10-5800K Trinity 3.8GHz (4.2GHz Turbo) Socket...


----------



## LAN_deRf_HA (Feb 3, 2013)

Thefumigator said:


> the FX 8320 is only 179 bucks, and its a better cpu compared to any intel at the same price, unless you are just a gamer, there are several reasons to choose AMD. At least if you treat a desktop as one, and not a gaming-only device, then AMD has some shine. Just saying giving them zero credit is somewhat extreme.
> 
> On the other hand, I feel APUs are somewhat pointless on desktop, I prefer them on laptops.



The latest FX chips seem nice for the price in certain apps. Then you get to the power consumption page, which kinda invalidates any sense of accomplishment. They're certainly not cost-effective for any large-scale business installation. The only AMD chip I'd grab is one of their 65 W APUs, and seriously, only under very specific circumstances.


----------



## TheGuruStud (Feb 3, 2013)

LAN_deRf_HA said:


> The latest FX chips seem nice for the price in certain aps. Then you get to the power consumption page. Kinda invalidates any sense of accomplishment. Certainly not cost effective to use for any large scale business installations. The only AMD chip I'd grab is one of their 65w APUs, and seriously only under very specific circumstances.



Since when did that stop businesses from buying junk-in-a-box Pentium 4s and Ds? 
They buy whatever has the best marketing. They don't know any better. And usually Dulls.


----------



## Thefumigator (Feb 3, 2013)

Aquinus said:


> I don't know about that. With the right hardware you could build an ultra-portable desktop using something like these. ITX chassis tend to get pretty small and Trinity has a lot to offer.
> ASRock FM2A85X-ITX FM2 AMD A85X (Hudson D4) HDMI S...
> AMD A10-5800K Trinity 3.8GHz (4.2GHz Turbo) Socket...



If you consider an HTPC a desktop, then Trinity is fine, and of course I hadn't thought of small-form-factor desktops; it could be useful in some scenarios. But I still believe Trinity is killer in a laptop. Such a decent CPU and an impressive GPU in sub-$650 laptops. It's just... insane.



LAN_deRf_HA said:


> The latest FX chips seem nice for the price in certain aps. Then you get to the power consumption page. Kinda invalidates any sense of accomplishment. Certainly not cost effective to use for any large scale business installations. The only AMD chip I'd grab is one of their 65w APUs, and seriously only under very specific circumstances.



Yeah, I know the power consumption page, and also the heat. I had to replace the stock AMD cooler because it sounded like a jet engine on my FX-8320; I bought a TX3 and it's much quieter. Still, the price of the chip itself made this system possible, and I think I can trade a price cut for those inefficiencies sometimes. It's not a _*superhuge *_difference in power consumption anyway, and 95 W Piledrivers will be out soon...



TheGuruStud said:


> Since when did that stop businesses from buying junk in a box pentium 4s and Ds
> They buy whatever has the best marketing. They don't know any better. And usually Dulls.


Amen to that. Except Intel sold completely inefficient P4s and Ds at a huge price, while in this case AMD just brings its prices down to make the chips attractive.


----------



## Aquinus (Feb 3, 2013)

Thefumigator said:


> 95watts piledrivers will be out soon...



That's TDP, not power consumption. I feel like I have to point this out because everyone seems to think that all of a CPU's power is used to make heat. It's a processor, not a space heater. 

The TDP is how much heat must be dissipated to keep the CPU within its thermal spec. Watts are the measurement of power (not electricity), and since heat is kinetic energy, the amount of heat that needs to be dissipated is also described in watts. Since all heat in a CPU is generated by ohmic heating across a finite resistance/impedance, only a portion of the CPU's power draw actually becomes heat. AMD tends to release less of its energy as heat (hence higher power consumption without dramatically higher TDPs), whereas Intel tends to release more of it as heat, but the TDP will never truly match the consumption of the CPU. ...but as I've said before, I suspect the difference in how much heat is generated comes down to the manufacturing process (Intel's HKMG vs. AMD's SOI).
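For readers wondering why overclocking blows past these TDP figures so quickly: the usual back-of-the-envelope model is the CMOS dynamic-power approximation P ≈ C·V²·f. The sketch below is an illustration only; the effective capacitance value is made up and is not a real figure for any Intel or AMD chip.

```python
# Rough dynamic-power sketch: P_dynamic ≈ C * V^2 * f (ignores leakage).
# C_EFF is a made-up illustrative value, not a datasheet number.

def dynamic_power(c_eff_farads: float, voltage: float, freq_hz: float) -> float:
    """Classic CMOS dynamic-power approximation."""
    return c_eff_farads * voltage ** 2 * freq_hz

C_EFF = 1.2e-9  # hypothetical effective switched capacitance (farads)

stock = dynamic_power(C_EFF, 1.10, 3.4e9)        # roughly stock volts/clock
overclocked = dynamic_power(C_EFF, 1.30, 4.5e9)  # a typical overclock

# (1.30/1.10)^2 * (4.5/3.4) ≈ 1.85, so a modest voltage bump plus a
# clock bump nearly doubles dynamic power regardless of the rated TDP.
print(f"{overclocked / stock:.2f}x")  # → 1.85x
```

The voltage term being squared is why small voltage increases hurt power draw far more than equivalent clock increases.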


----------



## sergionography (Feb 3, 2013)

LAN_deRf_HA said:


> The latest FX chips seem nice for the price in certain aps. Then you get to the power consumption page. Kinda invalidates any sense of accomplishment. Certainly not cost effective to use for any large scale business installations. The only AMD chip I'd grab is one of their 65w APUs, and seriously only under very specific circumstances.



Measuring maximum power consumption doesn't tell you anything objective about the real world, especially when comparing an 8-core with a quad-core.
For typical desktop use, I bet one AMD core is as efficient as, if not more efficient than, an Intel core (power-consumption-wise, not performance-wise, don't get me wrong). For applications that barely stress the cores, which are pretty much 90% of the apps out there, AMD surely isn't bad at all. Not to mention AMD has the multithreading advantage, and I don't mean best benchmark results; I'm talking about running multiple things at once while still having consistent performance. With an Intel quad, on the other hand, when an app stresses three cores, only one core is left for everything else, and to the end user that's a big deal.


----------



## Aquinus (Feb 3, 2013)

sergionography said:


> measuring maximum power consumption does not indicate anything objective in the real world, especialy when comparing an 8 core with a quad core
> now for typical desktop use i bet you 1 amd core is as efficient if not more efficient than an intel core(power consumption wise not performance dont get me wrong), and for applications that barely stress the cores which are pretty much like 90% of apps out there then surely amd isnt bad at all, not to mention amd has the multithread advantage, and i dont mean best benchmarks results, im talking about running multiple things at once while still having consistent performance. intel quads on the other hand when using an app that stresses 3 cores then only one core is left for everything else, to the end user thats a big deal



Not that I disagree with anything you've said, but when he says "65w", I think he really means TDP and is confusing it with power draw. In which case he needs to see this post.

For a desktop, in business I can tell you TDP is one of the last things I think about. The only exception to that is a server that needs to run on low power in case there is a power outage in order to maximize battery life. A good example of that would be a gateway server and/or a VoIP server.


----------



## cdawall (Feb 3, 2013)

Redspeed93 said:


> Yeah if these graphs are true it's kinda sad. It's partially AMDs fault. If they were actually pushing innovation and making some decent chips Intel wouldn't be able to produce chips like this and still make a profit. But they will as long as AMD fails to impress.



They did push innovation. The FX 8350 is *faster* than the 3820/3770K/3570K in heavily multithreaded apps: Metro 2033, transcoding while gaming, video editing, the list goes on. Just because Intel wins the single-threaded IPC race doesn't make its chips better; it depends on the applications you are running. Want to be upset at someone for this marginal performance jump? Get upset at the developers for not bothering to push current systems. Skyrim is just as fast on an i3 as on an i7; that should say something. The AMD APU being in the PS4 should say something else.


----------



## Ikaruga (Feb 3, 2013)

cdawall said:


> They did push innovation. The FX 8350 is *faster* than the 3820/3770K/3570K in heavily multithreaded apps. Like Metro 2033, transcoding+games, video editing the list goes on. Just because Intel wins the single threaded IPC race doesn't make them a better CPU just depends the applications you are running. Want to be upset at someone for this marginal performance jump? Get upset at he developers not bothering to push current systems. Skyrim is just as fast on an i3 as an i7 that should say something. The AMD APU being in the PS4 should say something else.


Because the guy on the video is not here and can't defend himself, I will greatly soften my opinion about him (below) to "favor" his side:

He is clearly a sad attention-whore who can't be taken seriously. That staged setup of his precious "belongings" arranged around him is nothing but a terrible joke. That's not the desk of a computer enthusiast, more like the desk of an 8-year-old boy posting his first Facebook picture. 
And after all of that, he is trying to convince everybody on the Internet that AMD CPUs are just as good as Intel ones and that you gain nothing by going Intel. So basically everybody (all the professional, trusted people we know and listen to) who spent days and weeks of hard work running tests and CPU benches and writing reviews until now was simply clueless, and he is the only one who finally holds and delivers the real truth. 

Sure, no problem


----------



## [H]@RD5TUFF (Feb 3, 2013)

Seems a little underwhelming, but shady unsourced benchmarks are not to be trusted.


----------



## Frick (Feb 3, 2013)

I didn't see the video, but AMD does well in all reviews, in certain situations; there is no denying that. The problem is that the performance is all over the place. Anand's final words sum it up pretty well, especially the highlighted part:



> Ultimately Vishera is an easier AMD product to recommend than Zambezi before it. However the areas in which we'd recommend it are limited to those heavily threaded applications that show very little serialization. As our compiler benchmark shows, a good balance of single and multithreaded workloads within a single application can dramatically change the standings between AMD and Intel. *You have to understand your workload very well to know whether or not Vishera is the right platform for it.* Even if the fit is right, you have to be ok with the increased power consumption over Intel as well.


----------



## Prima.Vera (Feb 3, 2013)

cdawall said:


> They did push innovation. The FX 8350 is *faster* than the 3820/3770K/3570K in heavily multithreaded apps. Like Metro 2033, transcoding+games, video editing the list goes on. Just because Intel wins the single threaded IPC race doesn't make them a better CPU just depends the applications you are running. Want to be upset at someone for this marginal performance jump? Get upset at he developers not bothering to push current systems. Skyrim is just as fast on an i3 as an i7 that should say something. The AMD APU being in the PS4 should say something else.



You must be joking with that rigged video. That's the worst lie since Watergate.  :shadedshu


----------



## Thefumigator (Feb 3, 2013)

Aquinus said:


> Not that I disagree with anything you've said, but when he says "65w", I think he really means TDP and is confusing it with power draw. In which case he needs to see this post.
> 
> For a desktop, in business I can tell you TDP is one of the last things I think about. The only exception to that is a server that needs to run on low power in case there is a power outage in order to maximize battery life. A good example of that would be a gateway server and/or a VoIP server.



You are discussing this with an electrical engineering student... not that I'm a good one.

C'mon, we all know a 65 W processor can't really make more heat and consume more power than a 125 W processor. Of course the relation is not linear, and it has to be measured at full throttle.

Also, if you put a 125 W processor into a 95 W motherboard, the mobo will die sooner or later. Not because of heat, but because the power draw will overheat its regulators. It could even overheat the traces.

I didn't say TDP is _exactly_ heat dissipation or power draw, but they go _almost_ hand in hand.

I'm pretty sure those 95 W FX processors will be more competitive in the power consumption and heat dissipation areas where the 125 W versions are struggling today.


----------



## Aquinus (Feb 3, 2013)

Thefumigator said:


> You are discussing this with an electrical engineering student... not that I'm a good one .
> 
> C'mon, we all know a 65w processor can't really make more heat and consume more power than a 125w processor. Of course the relation is not linear and it has to be measured on full throttle.
> 
> ...



Right, but a 125 W TDP CPU like the 8150 can actually consume closer to 180 watts (depending on the leakage of that particular chip); it's just that 125 watts of that is converted to heat (which is actually a lot of wasted energy, but that's a different topic). Of course this is all at stock. I only mention it because a lot of people use TDP and power draw interchangeably, and the amount of leakage a given circuit generates can vary from CPU to CPU.


----------



## NeoXF (Feb 3, 2013)

Considering power usage should be worse (at least under load) for Haswell... and if AMD can improve theirs (along with the obvious IPC/uArch improvements and hopefully some platform work as well)... I'd say this would be a good time for AMD to catch up.

Also, cue 5- or 6-module consumer FX CPUs for an extra squeeze on Intel...


----------



## Thefumigator (Feb 3, 2013)

Aquinus said:


> Right, but a 125 watt TDP CPU like the 8150 actually can consume closer to 180 watts (depending on the leakage for that particular CPU), it's just that 125 watts of that is converted to heat (which is actually a lot of wasted energy, but that's a different topic). Of course this is all at stock, but I'm just mentioning this because a lot of people are using TDP and power draw interchangeably and depending on the CPU the amount of leakage the circuit generates can vary.



Surely, I get it now. Funny thing is, I'm sitting next to a Prescott desktop and it doesn't seem like such a disaster... at least at idle. It's also very snappy, from a subjective point of view. Oh well, it's running XP...


----------



## pjl321 (Feb 3, 2013)

*It's time we moved on!*

I guess the days of significant performance increases are over. The only thing that would get my wallet out is a mid-range-priced 8-core (a real 8-core; 16-core in AMD terms). How long have we had quad-cores now? Since 2006, I think, and they have been mid-range for well over 5 years.

It's time we moved on!


----------



## Frick (Feb 3, 2013)

pjl321 said:


> I guess the days of significant performance increases are over. The only thing that would get my wallet out is a mid-range priced 8-core (real 8-core, 16-core in AMD world). How long have we had quad-core now for? 2006 I think and they have been mid-range for well over 5 years.
> 
> It's time we moved on!



Naah, the increases are just not here at this instant. And your average computer user has no need for more speed. Go back ten years and it was new and hot and cool; now all the action happens on tablets and phones. The jiggahurtz race ended with the Core 2 line.


----------



## pjl321 (Feb 3, 2013)

Frick said:


> Naah, the increases are just not here in this instant. And your avarage computer user has no need for more speed. Go back ten years and it was new and hot and cool, now all the action happens on tablets and phones. The jiggahurtz race ended with the Core 2 line.



You have to ask yourself why Intel (or any tech company) brings out new products.

Simple answer: to make money.

I agree most users don't even need the power of today's CPUs, so why would anyone buy something ever so slightly more powerful than what they have now? They won't, but they probably would buy it if the performance doubled. They still wouldn't need the extra power, but they would still like that it's 2x the performance of their current product, and they'd put their hands in their pockets to buy it.

Maybe some of them will find a use for the extra power, maybe they won't, but Intel will be happy and so will I.


----------



## Thefumigator (Feb 3, 2013)

pjl321 said:


> I guess the days of significant performance increases are over. The only thing that would get my wallet out is a mid-range priced 8-core (real 8-core, 16-core in AMD world). How long have we had quad-core now for? 2006 I think and they have been mid-range for well over 5 years.
> 
> It's time we moved on!



Well, you can still buy a dual-G34 board at Newegg with two 16-core Opteron processors (or 8 real cores, as you wish) and you'll have enough cores FTW. I was planning that, but my budget was too tight and I just didn't make it; I got an FX 8320 instead. I still hope AMD brings more cores to the AM3+ platform...



pjl321 said:


> You have to ask yourself, why are Intel (or any tech company) bring out new products?
> 
> Simple answer, to make money.
> 
> ...



I still feel computers today are slow. I mean, not long ago I borrowed a movie and converted it to DivX, and it took several minutes without even counting the DVD ripping. I think it could be faster. Faster! Faster! No waiting! I would pay for a 2x-performance computer if the price was OK. But talking about performance, we are ages behind what I would hope for.


----------



## cdawall (Feb 4, 2013)

Ikaruga said:


> Because the guy on the video is not here and can't defend himself, I will greatly soften my opinion about him (below) to "favor" his side:
> 
> He is clearly a sad attention-whore, who can't be taken seriously. That staged setup about his precious "belongings" he put around him is nothing but a terrible joke. That's not a table of a computer enthusiast, more like a table of an 8 year old boy posting his first facebook picture.
> And after all of that, he is trying to convince everybody on the Internet that AMD CPUs are just as good as Intel ones, and you gain nothing if you go Intel. So basically everybody (all the professional and trusted people we know and listen to), who spent days and weeks with hard work running tests and CPU benches and making reviews until now were simply clueless, and he is the only one who finally holds and gives us the real truth.
> ...



Plenty of places support what he said; just because most sites' reviews don't show the same thing doesn't make him wrong. Tek Syndicate isn't exactly known for being biased...



Prima.Vera said:


> You must be joking with that rigged video. That's the worst lie since Watergate.  :shadedshu



Rigged because Intel lost to a $179 CPU in applications Vishera performs well in?


----------



## xenocide (Feb 4, 2013)

cdawall said:


> Rigged because Intel lost to a $179 cpu in applications Vishera performs well in?



Vishera does outperform in applications that support 8 threads (or more), but those are so few and far between that it's not exactly a deciding metric for most people.  Unless you play Metro 2033 exclusively and encode videos daily (very few people do), it's not necessarily the higher-performing CPU.


----------



## cdawall (Feb 4, 2013)

xenocide said:


> Vishera does outperform in applications that support 8 threads (or more), but those are so few and far between, it's not exactly a deciding metric for most people.  Unless you play exclusively Metro 2033 and encode videos daily (very few people) it's not necessarily a higher performing CPU.



There are a lot of people who transcode videos of what they're playing for YouTube. That is video encoding, and YouTube is full of gameplay videos as far as the eye can see. It's an application Vishera does substantially better in, and one that doesn't make it into many reviews, which is sad considering how many people do it.


----------



## Mr.EVIL (Feb 4, 2013)

( ͡° ͜ʖ ͡°)


----------



## Nordic (Feb 4, 2013)

Mr.EVIL said:


> Haswell will use HyperThreading ? again ?



Why would it not?


----------



## pjl321 (Feb 4, 2013)

james888 said:


> Why would it not?



How else would Intel separate the Core i5 and the Core i7 and justify the massive price premium for such a small performance increase?


----------



## Nordic (Feb 4, 2013)

pjl321 said:


> How else would Intel separate the Core i5 and the Core i7 and try to justify the massive price premium for much a small performance increases!



I asked why they would not use Hyper-Threading, not why they do.


----------



## pjl321 (Feb 4, 2013)

james888 said:


> I asked why would they not use hyperthreading not why do they use.



Whoops, meant to quote James888.


----------



## Frick (Feb 4, 2013)

pjl321 said:


> You have to ask yourself, why are Intel (or any tech company) bring out new products?
> 
> Simple answer, to make money.
> 
> ...



I'm not sure I agree they would. Would they do it if it happened on a tablet? In a heartbeat. To be honest, I don't think people care about PCs enough anymore. This is of course pulled from my arse and is pure speculation. 

And with cloud computing and all that, most personal computing devices will probably become more and more like the terminals of old. It's both good and bad, IMO.


----------



## Aquinus (Feb 4, 2013)

Frick said:


> And with cloud computing and all that most personal computing devices will probably be more and more like the terminals of old. It's both good and bad imo.



It's good when you have a service like YouTube where you can upload a video from a mobile device where their cloud re-encodes it for you.


----------



## Slizzo (Feb 4, 2013)

Thefumigator said:


> I still feel computers today are slow. I mean, not long ago I borrowed a movie and I converted it to DivX, it took some minutes without counting the DVD ripping. I think it could be faster, faster! faster! faster! no waiting! I would pay for a -2x performance- computer If price was OK. But talking about performance, we are ages behind what I could expect for.



I use Xilisoft to transcode videos I get online. It uses CUDA, so it transcodes quite quickly.


----------



## niko084 (Feb 15, 2013)

...PACMAN... said:


> You're wrong, clock to clock is the best way to benchmark and show differences in the architecture of two different chips at the same speed. It's all about IPC.
> 
> I understand where you are coming from with regards overclocking headroom but that normally forms part of a FULL review also.
> 
> Needless to say, my FX 4100@3.6 is miles behind a 2600K@3.6



Exactly!

It's a direct comparison of baseline architecture, meaning that unless these chips prove to clock to 6 GHz, a high-end Ivy Bridge owner has little to no reason to upgrade on raw CPU power alone. From the looks of it here, anyway.


----------



## ...PACMAN... (Feb 15, 2013)

Dent1 said:


> Are we supposed to be surpised that your cheap low end FX 4100 quadcore is mile behind an expensive high end quadcore 8 threaded 2600K.
> 
> http://images.sodahead.com/polls/001390907/FacePalm_xlarge.jpeg



That was the point I was making dur........


----------



## Kaynar (Feb 21, 2013)

Thefumigator said:


> C'mon, we all know a 65w processor can't really make more heat and consume more power than a 125w processor. Of course the relation is not linear and it has to be measured on full throttle.



Then how can you explain that my i7 930 at 4 GHz with a 130+ W TDP at 1.3 V needs a Corsair H100, four fans, and aftermarket thermal paste to stay under 80°C, while my flatmate's i5 3570K at 4.5 GHz with a 70+ W TDP and nearly 1.3 V doesn't go over 55°C with a plain H60 and two fans... I'm just curious, because I can't figure out another reason than TDP...


----------



## Aquinus (Feb 21, 2013)

Kaynar said:


> Then how can you explain that my i7 930 at 4Ghz with 130+W TDP at 1.3v needs a corsair H100, 4 fans, and aftermarket thermal past to stay under 80c, while my flatmate's i5 3570K at 4.5Ghz with 70+W TDP and near 1.3v doest go over 55c with a plain H60 and two fans.... I'm just curious cause I can't figure out another reason than TDP...



I bet your 930 is consuming at least twice as much power as the 3570K at the same voltage, so I suspect the amount of current your CPU is drawing is that much higher, which is why it's getting hotter. Heat rises with the square of the current draw, not with voltage as such. Increasing the voltage only enables the current to increase, since the impedance in any part of the CPU at any given time won't change (much). A good example is my i7 3820. I have it running at 4.5 GHz @ ~1.4 V and I don't see much higher than 65°C fully loaded with a Zalman CNPS9900. To get to 80°C I need to pump 1.5 V or more through my CPU, and at that point I'm too concerned about the heat.

Also, the amount of leakage a chip generates varies from CPU to CPU, so two otherwise identical CPUs can produce different amounts of heat at the same voltages, but that is a different topic.

All in all, your 930 eats more power than an IVB chip. I honestly wouldn't be surprised if it consumed more power than mine at high voltage as well.
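The square-law point above can be sketched in a couple of lines. This is an illustration of P = I²·R ohmic heating only; the current and resistance figures are made up and are not measured values for either chip.

```python
# Ohmic-heating sketch: P = I^2 * R, so heat scales with the *square*
# of the current draw. All numbers here are hypothetical illustrations,
# not measurements of the i7 930 or i5 3570K.

def ohmic_heat_watts(current_amps: float, resistance_ohms: float) -> float:
    """Power dissipated as heat across a resistance carrying a current."""
    return current_amps ** 2 * resistance_ohms

R = 0.01  # hypothetical effective resistance (ohms)

newer = ohmic_heat_watts(60.0, R)    # hypothetical 60 A draw at load
older = ohmic_heat_watts(120.0, R)   # hypothetical 120 A draw, same voltage

# Double the current at the same resistance means four times the heat,
# which is why two chips at the same voltage can run at very different temps.
print(older / newer)  # → 4.0
```

This is also why undervolting pays off twice over: less voltage means less current, and the heat falls with the square of that current.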


----------



## malyleo (Mar 5, 2013)

*You're right*



RejZoR said:


> I guess i'll be keeping my trusty Core i7 920 for another year or two. The chip was so good it's probably the longest owned single CPU in my systems ever. It's so long i don't even remember what year i bought it, which is unusual...



I bought the i7-960 back in 2010 and I still have no problems with it; everything runs fast and smooth. I've been thinking about an upgrade later in the year, but I guess I'll hold off for another year. Maybe just a graphics card and memory; I'll see. It was a great purchase as far as I'm concerned...


----------

