Thursday, October 16th 2008

Kuma Manifests into Athlon X2 7550, 7750

Industry sources reveal that AMD will brand its Kuma dual-core processor as the Athlon X2 7000 series. These processors are aimed at competing with Intel's Core 2 Duo E7000 series. Kuma continues to use the 65nm SOI fabrication process. The chip sports 512 KB of L2 cache per core and a shared 2 MB L3 cache. Surprisingly, despite sub-3.00 GHz clock speeds, the processors carry a rated TDP of 95W.

These processors use a wider 3600 MT/s HyperTransport interface, and feature DDR2 memory controllers that support the PC2-8500 (1066 MHz) standard. As for the models, the Athlon X2 7550 has a clock speed of 2.50 GHz and an FSB multiplier of 12.5x, while the Athlon X2 7750 comes with a clock speed of 2.70 GHz and an FSB multiplier of 13.5x. Both processors are expected to be out by Q1 2009.
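As a quick sanity check of those figures, here is a minimal sketch (assuming the standard 200 MHz AM2+ reference clock, which the article does not state explicitly) showing how the rated clocks follow from the multipliers, and where the "8500" in PC2-8500 comes from:

```python
# Back-of-the-envelope check of the clock figures above. Assumes the standard
# 200 MHz AM2+ reference clock; the multipliers are taken from the article.
REF_CLOCK_MHZ = 200

models = {
    "Athlon X2 7550": 12.5,  # FSB multiplier
    "Athlon X2 7750": 13.5,
}

for name, multiplier in models.items():
    core_clock_ghz = REF_CLOCK_MHZ * multiplier / 1000
    print(f"{name}: {REF_CLOCK_MHZ} MHz x {multiplier} = {core_clock_ghz:.2f} GHz")

# DDR2-1066 (PC2-8500) peak bandwidth per channel:
# 1066 MT/s x 8 bytes per transfer ~= 8.5 GB/s, hence the "8500" in PC2-8500.
print(f"PC2-8500 peak bandwidth: {1066 * 8 / 1000:.1f} GB/s per channel")
```

Run as-is, the sketch prints 2.50 GHz and 2.70 GHz for the two models, matching the figures above.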

As for its 45nm successor, early indications are that Kuma would be succeeded by the Phenom X2 10000 series processors. Depending on clock speed, they would be branded 10x00, with "x" denoting the model number. These would support PC3-10600 (DDR3-1333) memory and carry a lower TDP of 65W.
Source: Expreview

57 Comments on Kuma Manifests into Athlon X2 7550, 7750

#26
Viscarious
btarunrA little unrelated, but the BIOS on Phenom-supporting boards comes with a "processor downcore" option. If you downcore a Phenom X4 to an X2, the chip would still have the same power draw; the disabled cores simply aren't shown to the OS, so there's zero load on them, but the TDP remains the same as the X4, since OC'ing the processor (such as increasing vCore) would do so for all existing cores. In other words, I personally suspect these are X4s with two cores disabled. :)
Are you sure? I've seen load meters show significantly lower power consumption when a quad is downcored to a dual-core.
Posted on Reply
#27
aj28
95W sounds about right actually... True, the 8750 is also rated at 95W, but it's also a fair bit slower (300MHz), and from the tests I've seen anyway, downcoring Phenom chips doesn't exactly deliver spectacular energy efficiency. Now personally, I would think AMD could have come up with a way to make such processors scale better, but then again if they're just dumping old chips, I wouldn't count on it. Seems they just want to purge Phenom while they still can and put their engineering hours where they've still got potential (i.e. the 45nm generation and Deneb). Expect these to be a big OEM product and nothing more...

I would like to point out, however, that most of the reports I've seen on the Phenom X2 (successor to these chips?) note a 45W TDP, versus the 65W mentioned in this article. Sources anyone?
Posted on Reply
#28
Melvis
Icewind31It's a quad-core with 2 bad cores :shadedshu
No it's not, and I'm pretty sure it never has been; it is, and always has been, a "true" dual core, and even the triple core is a "true" triple core. If you look at the architecture of the triple versus the quad, it's totally different, e.g. 1-2-3 cores side by side, and we all know how a quad core works: 2x2. And why in the world would AMD throw away a complete dual-core production line when dual cores are used 10x more than quads are? It doesn't make sense to do this.

I'm 99% positive that all the CPUs that come out are either true dual, triple, or quad cores.

It would be a complete waste of resources to make a quad core and sell it as a dual core.
Posted on Reply
#29
suraswami
MelvisNo it's not, and I'm pretty sure it never has been; it is, and always has been, a "true" dual core, and even the triple core is a "true" triple core. If you look at the architecture of the triple versus the quad, it's totally different, e.g. 1-2-3 cores side by side, and we all know how a quad core works: 2x2. And why in the world would AMD throw away a complete dual-core production line when dual cores are used 10x more than quads are? It doesn't make sense to do this.

I'm 99% positive that all the CPUs that come out are either true dual, triple, or quad cores.

It would be a complete waste of resources to make a quad core and sell it as a dual core.
Where have you been? Went to the dark age and just came back? How big is your beard? About 2 miles long?

Hello, that 2x2 sticky thing is Intel. Four individual cores with a shared L3 cache is AMD. Tri is one core disabled, and same with dual, two cores disabled. I don't think they would have spent any more time coming up with a true dual-core design based on a broken design. It's just that they try to make maximum use of the broken ones while their engineers are hopefully breaking their heads to come out with a smash-hit design.
Posted on Reply
#30
btarunr
Editor & Senior Moderator
MelvisIt would be a complete waste of resources to make a quad core and sell it as a dual core?
No, it's more economical. From a batch of quads, if cores are found to be poor-performing, the chip can be down-cored and locked at that (using diodes on the chip's package, under the IHS). Chalking out new batches of dual/triple-core dice would step up operational costs. Again, I'm only suspecting it's a downcored quad. No way would a 65nm 2.50 GHz dual-core AMD chip, which is essentially a Brisbane with L3 cache plus a higher CPU-NB interface, get a rating of 95W. It doesn't make sense. However, if you look at the ratings of X3 chips (Toliman), they all got 95W ratings at clock speeds X4 chips run at.
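To illustrate the economics being argued here, a toy binning model follows; every number in it (per-core yield probability, wafer cost, dice per wafer) is invented for illustration and is not an AMD figure. The point is simply that salvaging dice with two or three good cores lowers the cost per sellable chip versus selling perfect quads only.

```python
# Toy model of quad-core binning/downcoring economics. All numbers are made up
# for illustration; they are not AMD yield or cost figures.
import random

random.seed(0)

CHIPS_PER_WAFER = 100      # hypothetical dice per wafer
WAFER_COST = 5000          # hypothetical cost to fabricate one wafer of quads
P_CORE_GOOD = 0.90         # hypothetical probability that any single core is usable

def good_cores() -> int:
    """Return the number of working cores on one four-core die."""
    return sum(random.random() < P_CORE_GOOD for _ in range(4))

bins = {"X4": 0, "X3": 0, "X2": 0, "scrap": 0}
for _ in range(CHIPS_PER_WAFER):
    n = good_cores()
    if n == 4:
        bins["X4"] += 1
    elif n == 3:
        bins["X3"] += 1
    elif n == 2:
        bins["X2"] += 1
    else:
        bins["scrap"] += 1   # fewer than 2 good cores: the die is discarded

sellable = CHIPS_PER_WAFER - bins["scrap"]
print(bins)
print(f"Cost per sellable die with downcoring: ${WAFER_COST / sellable:.0f}")
print(f"Cost per sellable die selling X4 only:  ${WAFER_COST / bins['X4']:.0f}")
```

With these illustrative numbers, the cost per sellable die drops noticeably once X3 and X2 salvage parts are counted, which is the gist of the argument.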
Posted on Reply
#31
eidairaman1
The Exiled Airman
btarunrNo, it's more economical. From a batch of quads, if cores are found to be poor-performing, the chip can be down-cored and locked at that (using diodes on the chip's package, under the IHS). Chalking out new batches of dual/triple-core dice would step up operational costs. Again, I'm only suspecting it's a downcored quad. No way would a 65nm 2.50 GHz dual-core AMD chip, which is essentially a Brisbane with L3 cache plus a higher CPU-NB interface, get a rating of 95W. It doesn't make sense. However, if you look at the ratings of X3 chips (Toliman), they all got 95W ratings at clock speeds X4 chips run at.
well what about the 6300???

The first batches were Conroes, and then they were Allendales.
Posted on Reply
#32
btarunr
Editor & Senior Moderator
eidairaman1well what about the 6300???

The first batches were Conroes, and then they were Allendales.
They come with 65W ratings, both of them. Conroe-2M has the same exact transistor count as Conroe-4M...they're just Conroe chips with half the cache disabled.
Posted on Reply
#33
eidairaman1
The Exiled Airman
KBDwow, wow, I was waiting for babies. I'm hoping the 7000 series Athlons will come in higher clock varieties than 2.7. After all, the Denebs for AM2+ will be clocked at 2.8 and 3.0. I assume a dual core could be clocked even higher.
I believe Kumas are on the 65nm process, being Brisbane/1st-gen Phenom based, whereas Denebs are 45nm and probably reworked.
Posted on Reply
#34
eidairaman1
The Exiled Airman
btarunrThey come with 65W ratings, both of them. Conroe-2M has the same exact transistor count as Conroe-4M...they're just Conroe chips with half the cache disabled.
Well, Allendale are exactly the same except they are manufactured that way, and they cost about the same as the disabled Conroe parts, except you got a chip that was fully functioning, and greater yields.
Posted on Reply
#35
btarunr
Editor & Senior Moderator
eidairaman1Well, Allendale are exactly the same except they are manufactured that way, and they cost about the same as the disabled Conroe parts, except you got a chip that was fully functioning, and greater yields.
I'm not sure about the yields part, but yes, Allendale used an FSB of 200 MHz, so it was stable at its frequency.
Posted on Reply
#36
Melvis
suraswamiWhere have you been? Went to the dark age and just came back? How big is your beard? About 2 miles long?

Hello, that 2x2 sticky thing is Intel. Four individual cores with a shared L3 cache is AMD. Tri is one core disabled, and same with dual, two cores disabled. I don't think they would have spent any more time coming up with a true dual-core design based on a broken design. It's just that they try to make maximum use of the broken ones while their engineers are hopefully breaking their heads to come out with a smash-hit design.
Where have you been? Went to the dark age and just came back? How big is your beard? About 2 miles long, back at ya buddy

*rolls eyes* OMG, I'll have to explain what I mean since you did not get it the first time. When I say 2x2, I mean it has 2 cores above and 2 cores below, regardless of whether it's AMD or Intel. I know very well how an Intel quad works and an AMD one; it's a shame you have not read what I said properly. Please read it again with some common sense and post again, please.

Umm, I think you are lost when you say "I don't think they would have spent any more time coming up with a true dual-core design based on a broken design", considering that ALL the dual cores from the past, including Intel's, are "true" dual cores. How is it wasting time if it's already there? And if this were somehow true (and it's not), then why in the world isn't Intel doing the same thing? It makes even more sense for them to do it, since their quad-core design is a lot different and not on one die; it would be a lot easier for them to do it compared to AMD. And then, after all these so-called stuffed quads are noticed, they would have to go through all of them again, disable the cores, repack them and whatever else. That's going to cost an F-load to redo. It's NOT cost effective, end of story.

Sorry m8, but you need to get your facts straight and realize that it's just complete stupidity to pump out quad cores that cost A LOT more to make and engineer, and then sell them as a really cheap dual core with all the quad-core components in them.
Posted on Reply
#37
suraswami
MelvisWhere have you been? Went to the dark age and just came back? How big is your beard? About 2 miles long, back at ya buddy

*rolls eyes* OMG, I'll have to explain what I mean since you did not get it the first time. When I say 2x2, I mean it has 2 cores above and 2 cores below, regardless of whether it's AMD or Intel. I know very well how an Intel quad works and an AMD one; it's a shame you have not read what I said properly. Please read it again with some common sense and post again, please.

Umm, I think you are lost when you say "I don't think they would have spent any more time coming up with a true dual-core design based on a broken design", considering that ALL the dual cores from the past, including Intel's, are "true" dual cores. How is it wasting time if it's already there? And if this were somehow true (and it's not), then why in the world isn't Intel doing the same thing? It makes even more sense for them to do it, since their quad-core design is a lot different and not on one die; it would be a lot easier for them to do it compared to AMD. And then, after all these so-called stuffed quads are noticed, they would have to go through all of them again, disable the cores, repack them and whatever else. That's going to cost an F-load to redo. It's NOT cost effective, end of story.

Sorry m8, but you need to get your facts straight and realize that it's just complete stupidity to pump out quad cores that cost A LOT more to make and engineer, and then sell them as a really cheap dual core with all the quad-core components in them.
Hey, no need to get flamed. In a way both of us are correct. But the fact is, AMD or Intel makes a CPU for a purpose, say a quad, then they test it and find out it cannot be released for what it was made for: say a cache problem, one lazy core, etc. So then they disable whatever is broken and try to maximize whatever is left. In AMD's case they make native quads, and if one or two cores are broken they have to repackage it as a 3- or 2-core chip. That is what I was trying to say. As far as AMD goes, there is no ground-up new design for dual and triple cores separately, and they can't afford one right now. If one really came out it would be great, but they don't have it; that's applicable for the new K10 architecture.

:toast:
Posted on Reply
#38
Silverel
AMD doesn't *need a ground-up new design for X2's and X3's (in theory), because their quad production already yields X2's and X3's.

This was an intentional part of the design process when they came out with their "Native Quad Core" campaign. Because of how the cores access and communicate with each other, it doesn't matter how many cores are left over; they just work. They could make an 8-core processor based off the Phenom and have X7, X6, and X5's to release as well. It's a simple architectural design that was implemented to save money, not waste it on binning. All companies do binning and testing; AMD just has an extra step. Think that's less cost effective than scrapping 1, 2, or 3 GOOD cores?

Want to know why Intel can't do the same thing? They use dual-core dice, two of them go in one package. If one die fails, they still have a Core 2, nothing in between. Intel doesn't have to worry about making things like their architecture cost effective. They just made it blazing fast, and ate up whatever losses in the record-breaking sales they've had over the past year or so.

It wasn't really a mistake on AMD's part to go with the architecture that saves chips, they just didn't invest enough speed to compete with Intel at the high end. Whoops. Let's just hope Deneb kicks it up a notch so we all don't have to watch Intel stomp on the throat of their only competitor, kay?


*lolz, amd totally needs a new architecture so they can get some speed in there. The concept is good though
Posted on Reply
#39
ValiumMm
Icewind31If they are actually binning out the Quads w/ 2 bad cores then they're actually losing less money since they don't throw out the bad cores (at least until they regear their AthlonX2 fabs to pump out the Kuma's)... who knows what's next... the single core Semprons are quads w/ 3 disabled/bad cores
lmao
Posted on Reply
#40
Zubasa
ValiumMmlmao
I don't find that funny at all, nothing to lmao about :slap:
Posted on Reply
#41
boise49ers
I have an Athlon X2 5600 at 2.8GHz.

Is this an AM2 processor, and would it be a straight swap with my current processor?
Has a price been quoted yet?
Posted on Reply
#42
eidairaman1
The Exiled Airman
Yes, you can swap CPUs, but check for compatibility with your motherboard at the manufacturer's website.
Posted on Reply
#43
mdm-adph
eidairaman1Yes, you can swap CPUs, but check for compatibility with your motherboard at the manufacturer's website.
Funny you should mention that -- I remember checking MSI's website for my board (K9A2 CF-F) for BIOS updates and seeing two entries for the Kuma in the CPU compatibility chart months and months ago (they were called something different, though). They've since been taken down.
Posted on Reply
#44
Silverel
Heh, I have that board. The only difference between the V1 and the V2 of that board is support for higher-wattage processors and a heatsink over the MOSFETs. Ergo, in my Mystique mod, I added some copper heatsinks to the FETs and now have a V2.5 of the board :D
Posted on Reply
#45
mdm-adph
SilverelHeh, I have that board. The only difference between the V1 and the V2 of that board is support for higher-wattage processors and a heatsink over the MOSFETs. Ergo, in my Mystique mod, I added some copper heatsinks to the FETs and now have a V2.5 of the board :D
You've got version 2? Lucky! :laugh: I'm seriously scared to do much more overclocking (I'm already 600MHz over) because there are quite a few tales of people burning these v1 boards out once you get up around 3.0GHz.

MSI's done something sneaky on their site, too -- they used to have separate CPU compatibility listings for v1 and v2, so that you could know which chips not to use on v1 boards (like mine). That page is now gone, however, so I'm just stabbing around in the dark. I pretty much know to stay away from anything over 95W TDP, though.

That's why I was hoping the Kuma's would be some sort of super-efficient 45W chips or something, but nope.
Posted on Reply
#46
ocre
well

If the Kuma was just a Phenom with 2 disabled cores, it would've been out a long time ago, don't you think? There has been a lot of time spent on it, and time is money. They already quit making the 6000+ and 6400+, so there is nothing to offer in the higher X2 range; it just wouldn't make any sense to have had it all along and not sell it. Besides, you cannot disable any Phenom down to 2 cores and then overclock it to 3.6GHz on air like the sample 6500 that was tested, and I expect there have been improvements even over that chip, as it ran at 2.3GHz claiming to be a 6500+. The real Kuma clocked at 2.5GHz is supposed to be a 7550+, and I really hope they didn't just make that number up; all of AMD's X2s scaled very well with their numerical rated names. AMD has claimed that the Kuma would have a new design. It most likely is heavily based on the Phenoms, but I expect it is a good improvement over current Phenoms. The memory controller is new. Could they have taken a broken Phenom, attached all 4 cores to a new memory controller, then disabled 2 cores and called it a Kuma? Why bother? I don't think the current Phenom is powerful enough to benefit from a higher-bandwidth memory controller, much less one with 2 disabled cores. But for real, let's hope there will be something new and fun and fresh to come out. I sure am ready... I am bored of building Intels.
Posted on Reply
#47
Melvis
ocreIf the Kuma was just a Phenom with 2 disabled cores, it would've been out a long time ago, don't you think? There has been a lot of time spent on it, and time is money. They already quit making the 6000+ and 6400+, so there is nothing to offer in the higher X2 range; it just wouldn't make any sense to have had it all along and not sell it. Besides, you cannot disable any Phenom down to 2 cores and then overclock it to 3.6GHz on air like the sample 6500 that was tested, and I expect there have been improvements even over that chip, as it ran at 2.3GHz claiming to be a 6500+. The real Kuma clocked at 2.5GHz is supposed to be a 7550+, and I really hope they didn't just make that number up; all of AMD's X2s scaled very well with their numerical rated names. AMD has claimed that the Kuma would have a new design. It most likely is heavily based on the Phenoms, but I expect it is a good improvement over current Phenoms. The memory controller is new. Could they have taken a broken Phenom, attached all 4 cores to a new memory controller, then disabled 2 cores and called it a Kuma? Why bother? I don't think the current Phenom is powerful enough to benefit from a higher-bandwidth memory controller, much less one with 2 disabled cores. But for real, let's hope there will be something new and fun and fresh to come out. I sure am ready... I am bored of building Intels.
I agree :)

And welcome to TPU
Posted on Reply
#48
christof139
SilverelHeh, I have that board. The only difference between the V1 and the V2 of that board, is support for higher wattage processors, and a heatsink over the MOSFETs. Ergo, in my Mystique mod, I added some copper heatsinks to the FET's and now have a V2.5 of the board :D
Hi, newbie/'cruit here. Was reading this thread with great interest and am happy to have found two people with the mobo that I have, the MSI K9A2-CF-F (v1.0 in my case). :)

It would be interesting/nice if these new 65nm Kumas work on this SB600 mobo. The following 45nm Kumas should also work with this mobo. Just missed a used X3 8750 BE on eBay for ~$82 + minor shipping. I was going to get that to replace my excellent 5400+ BE @ 3.2GHz @ stock volts with the stock AMD copper-heatpipe HSF (the X2 6000+ one, rated for 95W/125W X2 and X4 CPUs), and 4GB (3 used) of DDR2 @ 400/800MHz. It runs very cool and is fast (for me), and it scored 11,737 in 3DMark06 with Win XP SP3 and two CrossFired Sapphire HD 4670s @ 770MHz core x 1140MHz memory (about the maximum OC for this Sapphire model, which is stock at 750x1000). I just have the free demo/test version of 3DMark06. Previously I had a 5000+ BE with two CrossFired Sapphire 2600XTs and was only scoring in the mid to high 8,000s.

If I may continue with some more slightly OT meanderings for a moment: on this MSI mobo where you put the copper heatsinks on the MOSFETs, do you think aluminum will be OK, and do you think one long piece of aluminum covering all the chips would be OK?? I would hope so, and then believe that this mobo would be able to handle at least 95W CPUs and possibly 125W CPUs, but I am not sure and need advice. I have the mobo BIOS flashed to its latest version. Any advice would be appreciated.

These new first-model 65nm Kumas, might they not be made both from rejects from the X4 chips and also as purpose-built 2x cores?? For instance, maybe the lower-speed 7550 might be made from rejected X4s and the faster 7750 from purposely produced X2 silicon?? Maybe, something like that??

I heard AMD will start using metal (maybe also hafnium or similar) in their silicon in 2010, and also start using the ferrite/iron compound chokes or whatever. I am not an electrical engineer nor even an electrician, and definitely not into semiconductor work, nor much other work at the present time at that level of brain cell strain/expenditure :banghead:. Previous decades' inebriation episodes caused enough expenditure of that matter anyway, or many ways, whatever; it's a gray/grey area/matter. :toast:

Thanx for any advice concerning the MSI K9A2-CF-F v1.0 mobo and input on the Kumas and future AMD and Intel CPUs. I have an old P4 571 3.8GHz Intel in a micro-BTX setup and it is OK, never put in my unused PD 940 3.2GHz, still have an E4700 2.6GHz rig to put together, and also have a P4 3.2GHz Extreme Edition (bought used, BTW, earlier this year, or was it last year, I don't remember, maybe Alzheimer's or whatever :eek:) for older games etc. and to fiddle with.

Thanx again, Chris
Posted on Reply
#49
mdm-adph
christof139If I may continue with some more slightly OT meanderings for a moment: on this MSI mobo where you put the copper heatsinks on the MOSFETs, do you think aluminum will be OK, and do you think one long piece of aluminum covering all the chips would be OK?? I would hope so, and then believe that this mobo would be able to handle at least 95W CPUs and possibly 125W CPUs, but I am not sure and need advice. I have the mobo BIOS flashed to its latest version. Any advice would be appreciated.
Feel free to put heatsinks on your MOSFETs if you'd like (they can't hurt), but if you've got version 1 of the MSI K9A2 CF, you're not going to be able to use 125W CPUs, no matter what. MSI used to have information about it on their site, but they've since taken it down.

From what I gathered from information I found on the internet, it wasn't a lack of MOSFET cooling, but a weakness in the power circuitry that MSI built into the board -- draw too much power, and the board fries.

Now, if you can buy one of the 89W X2 6000's (like this one), you'll be fine, but any of the 125W Phenom products are very, very risky on this board.

The Kumas should work just fine, but I'm personally going to skip them; I was looking to overclock the heck out of one for my next chip, and if they're already rated at 95W, that's not going to be possible on this board.
Posted on Reply
#50
Castiel
With the i7 up and coming, I see that AMD's processors suck big time. I was an AMD fanboy, but they just don't make it anymore. I think AMD needs to come out with some killer new hardware, not only to put themselves on a level with Intel, but also to give their company some money and get them out of the hole.
Posted on Reply