# AMD Starts Shipping 12-core and 8-core "Magny Cours" Opteron Processors



## btarunr (Feb 23, 2010)

AMD has started shipping its 8-core and 12-core "Magny Cours" Opteron processors for sockets G34 (2P-4P+) and C32 (1P-2P). The processors mark the entry of several new technologies for AMD, such as a multi-chip module (MCM) approach to increasing the processor's resources without complicating chip design any further than improving on the Shanghai and Istanbul designs. The new Opteron chips also make use of third-generation HyperTransport interconnect technology for 6.4 GT/s links between the processor and the host, and between processors in multi-socket configurations, and they embrace registered DDR3 memory. Each processor addresses memory over up to four independent (unganged) memory channels, and technologies such as HT Assist improve inter-silicon bandwidth on the MCMs. The processors further benefit from 12 MB of L3 cache on board, and 512 KB of dedicated L2 cache per core. 

In the company's blog, the Director of Product Marketing for Server/Workstation products, John Fruehe, writes: "Production began last month and our OEM partners have been receiving production parts this month." The new processors come in the G34 (1974-land) and C32 land-grid-array packages. There are two product lines: the cheaper, 1P/2P-capable Opteron 4000 series, and the 2P to 4P capable Opteron 6000 series. AMD has planned a total of 18 SKUs; some of these are listed below, with OEM prices in EUR:







| Model | Cores | Clock | L3 cache | TDP | OEM price |
|---|---|---|---|---|---|
| Opteron 6128 | 8 | 1.5 GHz | 12 MB | 115 W | 253.49 EUR |
| Opteron 6134 | 8 | 1.7 GHz | 12 MB | 115 W | 489 EUR |
| Opteron 6136 | 8 | 2.4 GHz | 12 MB | 115 W | 692 EUR |
| Opteron 6168 | 12 | 1.9 GHz | 12 MB | 115 W | 692 EUR |
| Opteron 6172 | 12 | 2.1 GHz | 12 MB | 115 W | 917 EUR |
| Opteron 6174 | 12 | 2.2 GHz | 12 MB | 115 W | 1,078 EUR |

*View at TechPowerUp Main Site*


----------



## pantherx12 (Feb 23, 2010)

Nice, can't wait for the desktop versions.


----------



## Deleted member 3 (Feb 23, 2010)

> Opteron 6128 (8 cores) | 1.5 GHz | 12MB L3 cache | 115W TDP - 253.49 Euro

Interesting prices, and the 4000 series should be cheaper? Gonna spend some time on Google now


----------



## Thrackan (Feb 23, 2010)

Pricing isn't that bad, can't wait to see how the 4000 series are priced!


----------



## btarunr (Feb 23, 2010)

Whoever paid like $5,000 apiece on eBay for the engineering samples of these chips about a month ago is so owned.


----------



## Easo (Feb 23, 2010)

Actually cheap.


----------



## HolyCow02 (Feb 23, 2010)

Where the hell did this come from?!? They only released Istanbul a couple of months ago with 6 cores, and now they have doubled that? AMD FTW!! Sweet sweet desktop versions plllleeeeeaassseeeee


----------



## pantherx12 (Feb 23, 2010)

Gives people various options 

AMD has been playing the field intelligently recently.


----------



## [I.R.A]_FBi (Feb 23, 2010)

best move amd has made in years apart from unlocking


----------



## mdm-adph (Feb 23, 2010)

Well... damn.  That was quick.  

I've been saying for years that, AMD fan or no, when Intel released their first true-eight core 32nm chip, I'd go Intel, but I think now I'd rather stick with AMD and pick up one of those 12-core beauties when they make it to the desktop market.


----------



## blkhogan (Feb 23, 2010)

btarunr said:


> Whoever paid like $5000 a piece on ebay for the Engineering Samples of these chips about a month ago, is so owned.


Who was stupid enough to do that?


----------



## mdm-adph (Feb 23, 2010)

blkhogan said:


> Who was stupid enough to do that?



The same people who keep Intel in business.

They probably bought it just so they could keep it out of the hands of loyal AMD owners who would've used it to cure cancer or save puppies or something equally wonderful.


----------



## btarunr (Feb 23, 2010)

blkhogan said:


> Who was stupid enough to do that?



http://www.channelregister.co.uk/2010/02/22/opteron_6100s_on_ebay/

Oh, so they sold for £5000 ($7700), not $5000.


----------



## mstenholm (Feb 23, 2010)

btarunr said:


> Whoever paid like $5000 a piece on ebay for the Engineering Samples of these chips about a month ago, is so owned.



It was for a set of 4. Makes it an almost OK price


----------



## A Cheese Danish (Feb 23, 2010)

This is what I like to see! Finally my dreams are coming true 
Way to go AMD! I wonder how much the boards are going to cost for these things.


----------



## Fourstaff (Feb 23, 2010)

Desktop versions in AM3 or new socket? I still can't see (above) average Joe using more than 4 cores, let alone 6


----------



## xaira (Feb 23, 2010)

2.4 GHz octo-core with only a 115 W TDP; Phenom II X6 should be clocked higher than I thought. Well done, AMD.

I doubt they'll release 12- or 8-core desktop processors in the near future. Good to know they released 8- and 12-core CPUs b4 Intel, although an Intel 8-core might go head 2 head with an AMD 12-core. Way to go, AMD, on both the CPU and graphics front.

AMD > Intel

AMD > Nvidia


----------



## r9 (Feb 23, 2010)

I'm just going through the thread and, funny enough, I have noticed that no one wondered how it would run Crysis. So I wonder. And I remember a time not so long ago when AMD fanboys were saying "AMD would never charge 1000 EUR for a CPU". Yeah right.


----------



## xrealm20 (Feb 23, 2010)

awesome progress -- I'd love to see some benchmarks of these chips -- any chance TPU will do a review for us SysAdmins?


----------



## mdm-adph (Feb 23, 2010)

r9 said:


> I`m just going to the thread and funny enough I have noticed that no one wondered how it would run Crysis . So I wonder . And I remember a time not so long ago AMD fanboys sayng "AMD would never charge 1000 eur for CPU". Yeah right.



These are server chips -- they've _always_ cost north of $1000 for certain variations.


----------



## DaJMasta (Feb 23, 2010)

Great..... but we're still stuck here with 4 on desktops.


Get on that AMD and Intel.


----------



## [Ion] (Feb 23, 2010)

I'm not really convinced the lower-clocked (sub-2ghz) 8-cores are really all that good.  A Phenom II X4 965 would probably be better than the 6128 and 6134


----------



## pantherx12 (Feb 23, 2010)

[Ion] said:


> I'm not really convinced the lower-clocked (sub-2ghz) 8-cores are really all that good.  A Phenom II X4 965 would probably be better than the 6128 and 6134





They're for servers, not for desktop use; it's completely different really


----------



## TIGR (Feb 23, 2010)

Fourstaff said:


> I still can't see (above) average Joe using more than 4 cores, let alone 6



Oh, but he will. Massively parallel computing is the future, and software that can take advantage of any number of cores will be ubiquitous eventually. Of course ... the future is also GPGPUs, but they will eventually be massively parallel as well.

They have to create the hardware first ... the software will follow.


----------



## pantherx12 (Feb 23, 2010)

Just imagine a time when 100-core CPUs are available for desktop use, a CPU core per individual program 

Have 10 of those cores clocked higher than the rest for handling games and heavy-duty apps, the rest for everything else.

The computer would never slow down (theoretically)


----------



## Disparia (Feb 23, 2010)

If you only need 12 cores, but want that quad-channel bandwidth and/or capacity, single-socket G34.







RAWRR !! CORES!! OH!! !MEMORY !! BANANAS!1

Sorry, just bored waiting for formal releases from Supermicro, Tyan, and others. I mean, this has been posted for 6 hours now, let's get some boards out there, people! Especially with more PCIe sexiness 

A little chipset info:

SR5650 :: 22 lanes : 13w
SR5670 :: 30 lanes : 17w
SR5690 :: 42 lanes : 18w

Just about everything else is common between them, except for PCIe Hot Plug.


----------



## fatguy1992 (Feb 23, 2010)

Where can I buy one of those G34 motherboards?


----------



## eidairaman1 (Feb 23, 2010)

mdm-adph said:


> Well... damn.  That was quick.
> 
> I've been saying for years that, AMD fan or no, when Intel released their first true-eight core 32nm chip, I'd go Intel, but I think now I'd rather stick with AMD and pick up one of those 12-core beauties when they make it to the desktop market.



Imagine the 4-way or 8-way 12-core setups. Intel is talking about a 48-core CPU; well, AMD will meet the market with 4 of these in a 4-way setup, and 8 of the 8-cores in an 8-way setup would be 64 cores. Cool, my computer has 48 cores in it.


----------



## TIGR (Feb 23, 2010)

I wonder if we'll see eight-CPU systems at the consumer level as eidairaman1 suggested. It seems more likely those 48/64 cores will be in a single CPU (or by that time, GPGPU), aside from server-class configurations like the Tyan Thunder n4250QE motherboard plus the M4985 expansion board (eight CPUs total). Yeah, enthusiasts might go with some of those four/eight-CPU rigs but the mainstream doesn't tend to adopt technology until it fits in the same size packages its predecessors did.


----------



## eidairaman1 (Feb 23, 2010)

TIGR said:


> I wonder if we'll see eight-CPU systems at the consumer level as eidairaman1 suggested. It seems more likely those 48/64 cores will be in a single CPU (or by that time, GPGPU), aside from server-class configurations like the Tyan Thunder n4250QE motherboard plus the M4985 expansion board (eight CPUs total). Yeah, enthusiasts might go with some of those four/eight-CPU rigs but the mainstream doesn't tend to adopt technology until it fits in the same size packages its predecessors did.



What I meant to say was a poke at Intel's little article about 48-core CPUs; to me, it's already possible to have 48 cores in a machine with a 4-way motherboard


----------



## pantherx12 (Feb 23, 2010)

Yeah, but not as convenient as just one actual CPU with 48 cores.


There's a company with 100-core CPUs already, clocked at 1.8 GHz each : /

Once they've sorted out how an OS can actually interact with such a CPU, that will be fun and games : ]


----------



## Deleted member 3 (Feb 23, 2010)

pantherx12 said:


> Yeah but not as convenient as just one actual CPU with 48 cores.
> 
> 
> There's a company with 100 core CPUs already clocked at 1.8ghz each : /
> ...



Tilera, but since that's not x86 it's apples and oranges. OSes can handle such amounts just fine.


----------



## eidairaman1 (Feb 24, 2010)

Ya, fact is those 1.8 GHz units are Itaniums, which are IA-64 and not x86-compatible (gotta run anything x86 in a virtual environment, which then hinders performance, and those Itaniums run better in cluster environments)


----------



## pantherx12 (Feb 24, 2010)

Would work great in an iPhone-like device XD

(Obviously not the 100-core version, I imagine the heat would melt everything ha ha)


----------



## CDdude55 (Feb 24, 2010)

Not buying it.


----------



## blkhogan (Feb 24, 2010)

btarunr said:


> http://www.channelregister.co.uk/2010/02/22/opteron_6100s_on_ebay/
> 
> Oh, so they sold for £5000 ($7700), not $5000.


Holy freaking crap.  That is insane to say the least.


----------



## eidairaman1 (Feb 24, 2010)

CDdude55 said:


> Not buying it.



No one's forcing ya to.


----------



## CDdude55 (Feb 24, 2010)

eidairaman1 said:


> no ones forcing ya to so.



Yep.

No money for this crap anymore. I'm gonna ride my i7 rig for years, then pick up whatever console is out at the time.

So I'm out. *throws cards down on table*


----------



## Tartaros (Feb 24, 2010)

r9 said:


> I`m just going to the thread and funny enough I have noticed that no one wondered how it would run Crysis . So I wonder . And I remember a time not so long ago AMD fanboys sayng "AMD would never charge 1000 eur for CPU". Yeah right.



Four years ago you could buy an Opteron 185 for 500 euros, or an Athlon FX-60 (same as the Opteron but with an unlocked multiplier) for 1000... AMD also sold its CPUs at astronomical prices.


----------



## aj28 (Feb 24, 2010)

[Ion] said:


> I'm not really convinced the lower-clocked (sub-2ghz) 8-cores are really all that good.  A Phenom II X4 965 would probably be better than the 6128 and 6134



I don't know about this actually. I think people need to look at a product like this and, rather than seeing a 48-core server and wondering "I wonder how fast it can do X-single-operation," they ought to be wondering about how you can split up that workload and accomplish more instances of a single operation (or a variety of different operations) that will run well on just one or two cores. What used to take twenty-four servers towards the beginning of the decade now takes one, with twenty-four completely independent virtualized dual-core systems within!

Mind you, there's plenty of overhead associated with such an operation, but I think the point stands. In my mind, AMD will do well with these... They're no speed demon, but that's not what the industry is looking for right now.


----------



## Disparia (Feb 24, 2010)

fatguy1992 said:


> Where can I buy one of those G34 motherboards?



No boards are available right now. That's what I was saying before, haven't seen releases from any of the manufacturers yet.

The H8SGL-F that I posted was displayed at Supermicro's SC09 booth.


----------



## Melvis (Feb 24, 2010)

Is the 8 core a 12 core with 4 disabled cores? or two 4 cores sticky taped together?


----------



## troyrae360 (Feb 24, 2010)

Melvis said:


> Is the 8 core a 12 core with 4 disabled cores? or two 4 cores sticky taped together?



I was under the understanding that the 12-core was 2x 6-core CPUs "sticky taped". I could be wrong though


----------



## btarunr (Feb 24, 2010)

Melvis said:


> Is the 8 core a 12 core with 4 disabled cores? or two 4 cores sticky taped together?



8 core is two Shanghai-derived dies on MCM. 12 core is two Istanbul-derived ones. The difference between AMD's MCM and Intel's traditional MCM designs is that each die on AMD's MCM has its own memory controllers, and independent HyperTransport links to the system, and to each other.  The cores on each die (node) can address memory controlled by the neighbouring die.
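btarunr's description is essentially a two-node NUMA system: each die has local memory behind its own controller, and reaching the neighbouring die's memory costs an extra HyperTransport hop. A toy Python model (the latency figures are invented for illustration, not real Magny Cours numbers) shows why keeping allocations node-local matters:

```python
# Toy model of a two-die MCM where each die (NUMA node) has its own
# memory controller. Accessing the neighbouring die's memory works,
# but costs an extra HyperTransport hop. Latencies are made-up
# illustrative units, NOT real Magny Cours figures.

LOCAL_LATENCY = 1.0    # access through the die's own memory controller
HT_HOP_PENALTY = 0.5   # extra cost to cross the HT link to the other die

def avg_access_latency(local_fraction):
    """Average memory latency for a core whose accesses hit its own
    node `local_fraction` of the time and the remote node otherwise."""
    remote_fraction = 1.0 - local_fraction
    return (local_fraction * LOCAL_LATENCY
            + remote_fraction * (LOCAL_LATENCY + HT_HOP_PENALTY))

# Keeping 90% of accesses node-local beats naive 50/50 interleaving.
print(avg_access_latency(0.9))   # 1.05
print(avg_access_latency(0.5))   # 1.25
```

On a real system this is the effect that NUMA-aware allocation policies (such as Linux's default node-local allocation) try to exploit.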


----------



## MadClown (Feb 24, 2010)

Not enough cores for my liking.  Want moar!


----------



## Athlonite (Feb 24, 2010)

Sticky tape? WHO uses sticky tape anymore? It's all crazy glue or nothin' at all. All we need now is a board with 8 PCIe x16 slots, 128 lanes (GPGPU), and four x4/x1 slots (RAID SAS/SSD/10Gbit network), please


----------



## FordGT90Concept (Feb 24, 2010)

This is getting ridiculous.  Most applications aren't very good candidates for multithreading, so more per-core performance is still ideal.  Someone has to change this trend of gluing on more cores instead of improving per-core performance.  Multiple cores create needless overhead, and before long, applications will be slower tomorrow than they are today because overhead exceeds actual work done.


----------



## aj28 (Feb 24, 2010)

FordGT90Concept said:


> This is getting ridiculous.  Most applications aren't very good candidates for multithreading so more per-core performance is still ideal.  Someone has to change this trend of gluing more cores on to more core performance.  Multiple cores create needless overhead and before long, applications will be slower tomorrow than they are today because overhead exceeds actual work done.



Well, like I was saying earlier, I see these primarily as parts for virtualization more than anything else, and I think they will do that job quite well. Plenty of independent cores to work on. Of course there's not much clock speed, but depending on the application that may not be necessary. After all, there's a reason these things start out on servers... I don't see anything beyond the X6 for at least another few years on the desktop front. Not from AMD at least...


----------



## a_ump (Feb 24, 2010)

Yea, but I'm assuming it'd take a hell of a lot more time and money to create quad-cores that run at 4.5 GHz stock and OC to 6 GHz on air than it would doing the MCM deal. 

And with btarunr's statement, wouldn't the overhead that was in Intel's MCM design be non-existent or damn near gone?


btarunr said:


> 8 core is two Shanghai-derived dies on MCM. 12 core is two Istanbul-derived ones. *The difference between AMD's MCM and Intel's traditional MCM designs is that each die on AMD's MCM has its own memory controllers, and independent HyperTransport links to the system, and to each other.  The cores on each die (node) can address memory controlled by the neighbouring die.*


----------



## FordGT90Concept (Feb 24, 2010)

Overhead is created by programs to manage multiple threads synchronously or asynchronously.  Synchronous creates more overhead than async because all threads have to halt until ordered to move on to the next set of work (like games).  The huge wall programmers are going to hit sooner rather than later is that the core managing all the threads will get overburdened, which in turn creates an uncloggable roadblock.  Every core is waiting for that one core to tell it what to do, and that one core falls behind, leading to a huge problem.

Basically, this fad of adding more cores, if it lasts too long, will be bad for developers and consumers.  Yes, it's nice to have extra cores to offload work, but that doesn't change the fact that a 12 GHz CPU can handle more work than a 4 x 3 GHz CPU because of having no overhead.
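The 12 GHz vs 4 x 3 GHz comparison is essentially Amdahl's law: a single faster core speeds up all code, while extra cores only speed up the parallelizable fraction. A quick sketch (the 90%-parallel figure is just an assumed example):

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Amdahl's law: overall speedup when only `parallel_fraction`
    of the work can be spread across `n_cores`."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)

# A hypothetical 12 GHz core is a flat 4x over a 3 GHz core on ANY code.
single_fast = 4.0

# Four 3 GHz cores on a program that is 90% parallelizable:
quad = amdahl_speedup(0.9, 4)    # 1 / (0.1 + 0.9/4) ~= 3.08

print(quad < single_fast)        # True: the serial 10% holds it back
print(amdahl_speedup(0.9, 12))   # ~= 5.71, even 12 cores can't reach 10x
```

The serial fraction is exactly the "master thread" bottleneck described above: no matter how many cores are added, that portion runs on one core.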

In order to maintain Moore's Law much longer, semiconductor technology has to leap ahead of where it is today.


In terms of MCM, it really doesn't matter as demonstrated by Phenom compared to Core 2 Quad.


----------



## TIGR (Feb 24, 2010)

FordGT90Concept said:


> This is getting ridiculous.  Most applications aren't very good candidates for multithreading so more per-core performance is still ideal.  Someone has to change this trend of gluing more cores on to more core performance.  Multiple cores create needless overhead and before long, applications will be slower tomorrow than they are today because overhead exceeds actual work done.



While multi-core scaling isn't perfect, it's a necessary step to overcome the fact that a single core has its limits. Diminishing performance gains from clock rate increases, steeply increasing power consumption with each increase in operating frequency, ILP and memory walls, and simple limits to how well a single core can be designed force us to use multiple cores to keep up the exponential rate of progress that has come to be expected from chip makers.
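The power point is the classic CMOS dynamic-power relation, P ∝ C·V²·f; since supply voltage has to rise roughly with frequency, power grows roughly with the cube of clock speed. A rough sketch under that simplified assumption (the baseline numbers are illustrative, not measured silicon):

```python
def dynamic_power(freq_ghz, base_freq_ghz=3.0, base_power_w=100.0):
    """Simplified CMOS dynamic power: P is proportional to V^2 * f,
    and we assume supply voltage scales linearly with frequency,
    giving P proportional to f^3. Numbers are illustrative only."""
    ratio = freq_ghz / base_freq_ghz
    return base_power_w * ratio ** 3

# Doubling clock speed roughly octuples dynamic power...
print(dynamic_power(6.0))      # 800.0 W

# ...whereas two cores at the original clock only double it.
print(2 * dynamic_power(3.0))  # 200.0 W
```

This asymmetry is why chip makers add lower-clocked cores instead of chasing a single very fast one.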

The human brain (the most powerful computer known) is massively parallel.

Improving per-core performance is still extremely important, and I don't think AMD or Intel are abandoning that in favor of just increasing core count. Look at the per-core difference between C2D and Core lines of CPUs.

Anyway, mainstream multi-core computing is still in its infancy. The main issue seems to be software algorithms and implementation, not some flaw with the concept of multiple CPU cores itself. There will be challenges in the future, such as the jump from multi-core to many-core CPUs, but I see no signs that multi-core computing is a dead end.


----------



## FordGT90Concept (Feb 24, 2010)

TIGR said:


> The human brain (the most powerful computer known) is massively parallel.


The human brain is designed to process senses; computers are designed to process binary.  Ask a human brain to process binary and it fails.  Ask a computer to process imagery and it fails.




TIGR said:


> Improving per-core performance is still extremely important, and I don't think AMD or Intel are abandoning that in favor of just increasing core count. Look at the per-core difference between C2D and Core lines of CPUs.


Core performs miserably without Hyper-Threading enabled.  The major improvements to Core are improved instruction sets, which streamline complex processes.




TIGR said:


> Anyway, mainstream multi-core computing is still in its infancy. The main issue seems to be software algorithms and implementation, not some flaw with the concept of multiple CPU cores itself. There will be challenges in the future, such as the jump from multi-core to many-core CPUs, but I see no signs that multi-core computing is a dead end.


The hardware causes the software flaws, but maybe that's just it.  Instead of having multiple asynchronous cores, why not make the cores themselves synchronous?  Actual thread states would be handled at the hardware level instead of the software level.  That would virtually eliminate software overhead.

I see the signs, although they generally aren't anything to be worried about now on a quad-core; however, the more cores there are, the bigger the problem becomes.  I don't want to imagine how much trouble it will be to multithread the code that handles multiple cores.  The potential for errors, collisions, and other problems increases exponentially.


----------



## ShiBDiB (Feb 24, 2010)

Fourstaff said:


> Desktop versions in AM3 or new socket? I still can't see (above) average Joe using more than 4 cores, let alone 6



Ditto.

These r useless for everyday users. Most games aren't even coded to use 2 cores, let alone 12


----------



## Hayder_Master (Feb 24, 2010)

Prices are awesome; top performance-per-dollar CPUs


----------



## Phxprovost (Feb 24, 2010)

ShiBDiB said:


> ditto
> 
> These r useless for everyday users. Most games arent even coded to use 2 cores let alone 12



And at the rate PC devs are jumping ship, there will never be a time when games use them all


----------



## ShiBDiB (Feb 24, 2010)

Phxprovost said:


> and the rate by which pc devs are jumping ship, there never will be a time where games use them all



?

Dual cores have been out for how long now, and we still don't see universal acceptance of them by devs... I  people who spew BS


----------



## Phxprovost (Feb 24, 2010)

ShiBDiB said:


> ?
> 
> dual cores have been out for how long now and we still dont see universal acceptance of them by dev's...



And my point is pretty much all devs are abandoning PC game releases or release crap ports that are hardly optimized...... I  people who can't read


----------



## FordGT90Concept (Feb 24, 2010)

Consoles are going the same way as PCs though.  Xbox360 has a tri-core w/ SMT (6 threads at a time) CPU and PS3 has a dual-core with up to 8 sub-processors.  The only exception is the Wii which still has a single core CPU (as far as anyone can tell).


----------



## driver66 (Feb 24, 2010)

ShiBDiB said:


> ?
> 
> dual cores have been out for how long now and we still dont see universal acceptance of them by dev's... I  people who spew bs



Lay off the booze bro ......... He was agreeing with you


----------



## Melvis (Feb 24, 2010)

troyrae360 said:


> I was under the understanding that that the 12 core was 2x 6 core cpu's "sticky taped" I could be wrong though



Yea, I knew about the 12-core being two 6-cores sticky-taped together (AMD's lingo), but I had no idea about the 8-core, as I had not heard anything about the 8-core CPUs till now


----------



## Mussels (Feb 24, 2010)

Well, my next system might just be AMD... would let me re-use my 4870s in CrossFire at least (my CrossFire problems stem from the Intel chipset)



One thing all you naysayers are forgetting is that DX11 comes with multithreading as part of its basic design... next-gen games are going to use our spare threads quite well


----------



## Wile E (Feb 24, 2010)

Meh. Useless for desktop market. And I doubt we'll see this in a desktop variant. Just look at the package size.

And what Intel Crossfire issues? Your Crossfire issues do not stem from the Intel chipset, they stem from the shitty ATI drivers.


----------



## Mussels (Feb 24, 2010)

Wile E said:


> Meh. Useless for desktop market. And I doubt we'll see this in a desktop variant. Just look at the package size.
> 
> And what Intel Crossfire issues? Your Crossfire issues do not stem from the Intel chipset, they stem from the shitty ATI drivers.



No, they stem from a problem where my cards flicker with Vsync off on Intel chipsets. They don't do it on AMD boards.


----------



## Wile E (Feb 24, 2010)

Mussels said:


> no, they stem from a problem where my cards flicker with Vsync off on intel chipsets. they dont do it on AMD boards.



And if they coded proper drivers, it wouldn't be an issue.


----------



## Mussels (Feb 24, 2010)

Wile E said:


> And if they coded proper drivers, it wouldn't be an issue.



It's a chipset issue. Works on X58 boards, just not on 965 through X48/X45. Only seems to happen on 38x0 and 48x0 cards too


----------



## FordGT90Concept (Feb 24, 2010)

Mussels said:


> one thing all you naysayers are forgetting, is that DX11 comes with multithreading as part of its basic design.. next gen games are going to use our spare threads quite well


But that doesn't alleviate the problem of the master thread (which orchestrates the worker threads) bringing everything else to a crawl; moreover, Windows 7 does a really, really bad job at synchronizing threads.  For example, you can't play most games with WCG running, because performance will drop like a rock despite 4 cores being completely idle.  One core gets held back just a tiny bit, then other cores end up waiting for it.  We also can't forget that Windows 7 itself suffers from the same thread-prioritizing problems when dragging and dropping files while the CPU is 100% loaded (with idle-priority work).

It's difficult to explain, but multi-core doesn't have a very bright future.  Everything about it multiplies the complexity of everything from operating systems to software.  Until that is fixed at the hardware level, no one is going to be excited about more cores except Intel/AMD (because it's cheap and easy) and consumers (because it's the new fad for incorrectly cataloguing performance, like clock speeds were up to the Pentium 4/D).

Call me a pessimist, but this trend is more harmful than helpful to developers and, by extension, consumers.


----------



## Mussels (Feb 24, 2010)

FordGT90Concept said:


> But that doesn't alleviate the problem of the master thread bringing everything else to a crawl; moreover, Windows 7 does a really, really bad job at synchronizing threads.  For example, you can't play most games with WCG running because performance will drop like a rock despite 4 cores being completely idle.  One core gets held back just a tiny bit then other cores end up waiting for it.  We also can't forget that Windows 7 itself suffers from the same thread prioritizing problems when dragging and dropping files while the CPU is 100% loaded (idle).
> 
> It's difficult to explain but multi-core doesn't have a very bright future.  Everything about them multiplies complexity of operating systems to software.  Until that is fixed on the hardware level, no one is going to be excited about more cores except Intel/AMD (because its cheap and easy) and consumers (because it's the new fad for incorrectly cataloguing performance like clockspeeds were up to Pentium 4/D).
> 
> Call me a pessimist but this trend is more harmful than helpful to developers and by extension, consumers.



It may not solve it, but it'll help - and in every (DX11) game, too.


----------



## Wile E (Feb 24, 2010)

Mussels said:


> its a chipset issue. works on x58 boards, just not on 965 through x48/45. only seems to happen on 38x0 and 48x0 cards too



If it only happens with 38x0 and 48x0 and only on certain chipsets, it's a driver problem, or a hardware fault by ATI. Either way, it's ATI's fault.


----------



## FordGT90Concept (Feb 24, 2010)

Mussels said:


> it may no solve it, but it'll help - and in every (DX11) game, too.


It makes it easier for the developer by letting the GPU render stream be built somewhat asynchronously.  The CPU load is the same.


----------



## TIGR (Feb 24, 2010)

Parallel computing is without a doubt the future; it just needs to mature, like every other technology in the history of humankind.


----------



## pantherx12 (Feb 24, 2010)

FordGT90Concept said:


> This is getting ridiculous.  Most applications aren't very good candidates for multithreading so more per-core performance is still ideal.  Someone has to change this trend of gluing more cores on to more core performance.  Multiple cores create needless overhead and before long, applications will be slower tomorrow than they are today because overhead exceeds actual work done.




You're thinking about this the wrong way, fella.

Firstly, these are for servers at the moment; as people were saying, what used to take 12 CPUs (4 cores each) can be done with 4 CPUs.

That's space saving! (As well as cheaper, eventually.)

Also, it means servers can process more incoming requests, so online games could hold many more avatars in one area, etc.

It also means if someone made a modified L4D server, they could have 1000 or more zombies come at you at once, rather than the typical 50 or so 



On top of that, imagine running several OSes at once simultaneously: got a program that won't run on Windows? No problem, just switch to Linux instantly.

You need to think outside your current thinking and see the potential.



Oh, also, your statement about computers not being able to recognise imagery is quickly becoming less and less true; hell, Honda's little robot can recognise chairs and cars etc., even recognise the model of the car if it's been taught it.

With more powerful CPUs with more cores, it will be able to function even better.

It can use bunches of 10 cores to control individual body parts as well, to give it much greater dexterity etc.


----------



## TIGR (Feb 24, 2010)

pantherx12 said:


> Oh also your statement about computers not being able to recognise imagery is quickly becoming less and less true, hell hondas little robot can recognise chairs and cars etc, even recognise the model of the car if its been taught it.



I'm too lazy/tired to respond to him on a point-by-point basis at the moment, but this is an important consideration. Research of the human body shows we are more like computers than ever thought before (DNA as a digital code, for example), and R&D into the most powerful and promising future computer systems is being done by reverse-engineering the way the human brain works. Things our brains do well are what we increasingly want our computers to do, so it makes sense: things like pattern recognition (identifying distinct objects in two- or three-dimensional video/simulations), learning (evolutionary programming), etc.

Seeing how effective massively parallel computing makes the human brain at such tasks is teaching researchers that if we want our computers to perform increasingly "intelligent" and profound operations, we're going to have to step out of the box to take computing to the next level. We have to think beyond traditional methods, because they can only take us so far. At this point, the "next level" is massively parallel hardware. The ability of software to utilize it well will come as the technology matures.


----------



## btarunr (Feb 24, 2010)

pantherx12 said:


> Your thinking about this the wrong way fella.
> 
> Firstly these are for servers at the moment, as people were saying what used to take 12 cpus ( 4 cores each) can be done with 4 cpus.
> 
> ...



To put that in one word: Virtualization. 

In one line: Virtual servers in data centers, where one physical server with one or two physical CPUs can be used to rent 12 web-servers, each suiting the customer's needs.
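That consolidation arithmetic can be sketched as a trivial core-allocation map: carving a hypothetical 2-socket Magny Cours host (2 x 12 cores) into rentable dual-core guests. The guest names and sizes are made up for illustration, and a real hypervisor schedules far more flexibly than this:

```python
# Carve a 2-socket, 12-core-per-socket host (24 cores total) into
# dual-core virtual servers -- a toy allocator, not a real hypervisor.
TOTAL_CORES = 2 * 12
VCPUS_PER_GUEST = 2

def plan_guests(total_cores, vcpus_per_guest):
    """Return a mapping of guest name -> list of host core IDs."""
    guests = {}
    for i in range(total_cores // vcpus_per_guest):
        first = i * vcpus_per_guest
        guests[f"webserver-{i}"] = list(range(first, first + vcpus_per_guest))
    return guests

plan = plan_guests(TOTAL_CORES, VCPUS_PER_GUEST)
print(len(plan))             # 12 rentable dual-core servers on one box
print(plan["webserver-0"])   # [0, 1]
```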


----------



## FordGT90Concept (Feb 24, 2010)

TIGR said:


> Parallel computing is without a doubt the future; it just needs to mature, like every other technology in the history of humankind.


Parallel computing is the past.  Supercomputers have been doing it for decades.  It comes to your home and everyone is in awe.  Problem is: what use is a screwdriver without screws?  Hence, the fad.




pantherx12 said:


> Firstly these are for servers at the moment, as people were saying what used to take 12 cpus ( 4 cores each) can be done with 4 cpus.


I know that, and they suit server tasks well.  The problem is these processors have no use in workstations, because most workstation software isn't highly scalable the way server applications are.  That's not very likely to change either, so Intel/AMD are trying to convince corporations to virtualize and cloud compute.  Well, cloud computing especially doesn't work in homes, because very few homes have a server, and gaming through virtualization is nothing more than a pipe dream today.

Intel/AMD are trying to cater to one crowd (enterprise) while consumers get shafted, because workstation/home computers are well-rounded machines, not task-oriented ones.




pantherx12 said:


> Also means if someone made a modified L4D server they could have 1000 or more zombies come at you at once rather then the typical 50 or so


Your GPU will be crying for mercy long before your CPU.  And still, there is little 100 slow cores can do that one fast core couldn't.  Personally, I think mainstream processors should have no more than four cores.  The focus needs to be on per-core performance.  If, as I stated earlier, that takes symmetrical core design, so be it.  The point is: most users with quad cores rarely see their CPU usage over 50%, if not 25%, doing anything they do on a day-to-day basis.




pantherx12 said:


> Ontop of that imagine running several OS at once simultaneously, got a program that won't run on windows, no problem just switch to linux instantly.


Unless you are talking about virtualization, that doesn't work: resource collisions.




pantherx12 said:


> You need to think outside your current thinking and see the potential.


I'm looking 10-50 years out here.  The prognosis starts getting grim in about 6 years, when die shrinks are no longer possible.  From there, it's nothing but question marks.  Nothing revolutionary has happened in computing since the 1970s; we're still using the same old techniques with different materials.




pantherx12 said:


> Oh also your statement about computers not being able to recognise imagery is quickly becoming less and less true, hell hondas little robot can recognise chairs and cars etc, even recognise the model of the car if its been taught it.


Which demonstrates the brain is falling behind.  We can build processors faster, not brains (at least not yet).  It is still inefficient because images don't translate well to binary, but that's the nature of the beast.




pantherx12 said:


> With more powerful cpus with more cores it will be able to function even better.


Only if the process is not linear.  If step b requires the result from a, step c requires the result from b, step d requires the result from c, and so on, it is doomed to be slow for the foreseeable future.  That is what most concerns me (aside from the manufacturing process).
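The dependency chain FordGT90Concept describes is essentially the serial fraction in Amdahl's law. As a rough sketch (the 90% figure below is just an illustrative assumption, not anything claimed in this thread):

```python
# If a fraction of the work is inherently sequential (step b needs a,
# c needs b, ...), adding cores quickly stops helping.

def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Ideal speedup on `cores` cores when only `parallel_fraction`
    of the work can run in parallel."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

if __name__ == "__main__":
    # Even with 90% of the work parallelizable, 12 cores give well
    # under a 12x speedup, and the curve flattens fast.
    for n in (1, 2, 4, 12, 100):
        print(f"{n:>3} cores: {amdahl_speedup(0.9, n):.2f}x")
```

A job that is 90% parallelizable tops out below 10x no matter how many cores you throw at it; a fully linear process gains nothing at all.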




pantherx12 said:


> Can use bunches of 10 cores to control individual body parts as well to give it much greater dexterity etc.


A 486 could handle that with lots of room to spare.  Computer-controlled robots have been in use for a long time.




btarunr said:


> To put that in one word: Virtualization.
> 
> In one line: Virtual servers in data centers, where one physical server with one or two physical CPUs can be used to rent 12 web-servers, each suiting the customer's needs.


Oh, so you want some nameless corporation 1,000 miles from where you live to know everything you did and are doing?  That's the Google wet-dream right there.  They would know everything about you, from how much is in your checking account, to which sites you frequent, to all your passwords and user names, to your application usage, to everything that would exist in your "personal computer."  Cloud computing/virtualization is the epitome of data-mining.  Google already knows every single search you made in the past six months with their search engine.

Corporations *want* this.  It is vital we not give it to them.  Knowledge is power.


----------



## TIGR (Feb 24, 2010)

In terms of parallel computing, you haven't seen anything yet.

If you want to fight the concept, go build a better system that doesn't utilize it. Otherwise, take a look around. Multi-core CPUs, multi-CPU systems, Crossfire and SLI, RAID ... running components in parallel isn't perfectly efficient, but guess what: neither is anything else.

Sure, maybe there's overhead, and maybe more than we'd like (although that's improving), but as a car guy [I gather] you should understand very well that sometimes you have to take a loss to make bigger gains (unless you don't believe in forced induction either?).


----------



## pantherx12 (Feb 24, 2010)

Have you not seen how slow the bastarding thing is?

That's not due to its motors; it doesn't have the processing power to run them!

Unlike humans, who build up muscle memory and automatic responses, a machine has to think about moving, so once its physical speed starts building up it becomes more and more difficult.

Whereas with a CPU core for each sensor it has, it will be able to adjust things that much quicker.

( thus move quickly)

Same reason the thing falls arse over tit when climbing staircases sometimes


----------



## Frick (Feb 24, 2010)

FordGT90Concept said:


> Oh, so you want some nameless corporation 1,000 miles from where you live to know everything you did and are doing?  That's the Google wet-dream there.  They would know everything about you from how much is in your checking account to which sites you frequent, to all your passwords and user names, to your application usage, to everything that would exist in your "personal computer."  Cloudcomputing/virtualization is the epitome of data-mining.  Google already knows every single search you made in the past six months with their search engine.
> 
> Corporations *want* this.  It is vital we not give it to them.  Knowledge is power.



That's the way it is, and it looks like it will be more like this in the future. As you say, that is what the cloud is, and more and more stuff is being shoved into it. I for one really like local apps and resources, but the cloud is already here even if I don't like it.

EDIT: BTW panther, how in heck did you manage to get your post count to 4000 in a year?


----------



## HalfAHertz (Feb 24, 2010)

Wile E said:


> If it only happens with 38x0 and 48x0 and only on certain chipsets, it's a driver problem, or a hardware fault by ATI. Either way, it's ATI's fault.



Oh, so your magic ball looked into the problem and found the answer? Well, thanks for sharing; you might want to give AMD's driver department a ring! Unless you did some extensive testing to prove that it is indeed the drivers and not the chipset, you're just as right/wrong as Mussels...

BTW, correct me if I'm wrong, but AMD measured their TDP differently from Intel, right? AMD were measuring the average, while Intel was just showing the peak. If memory serves me right, AMD's 115 W corresponds to Intel's 130 W chips.


----------



## pantherx12 (Feb 24, 2010)

Frick said:


> .
> 
> EDIT: BTW panther, how in heck did you manage to get your post count to 4000 in a year?



Unemployed 

Also have very little to do in town so I'm online a lot.


----------



## TIGR (Feb 24, 2010)

I would like to submit this article to the discussion.

The other articles linked to at the end of this one are worth looking over too.


----------



## FordGT90Concept (Feb 24, 2010)

TIGR said:


> In terms of parallel computing, you haven't seen anything yet.
> 
> If you want to fight the concept, go build a better system that doesn't utilize it. Otherwise, take a look around. Multi-core CPUs, multi-CPU systems, Crossfire and SLI, RAID ... running components in parallel isn't perfectly efficient, but guess what: neither is anything else.


I have a dual Xeon server sitting right next to me and have written a lot of multithreaded applications (capable of loading 8+ cores to 100%).  It has its uses but, if it weren't for BOINC, most of the time it would be sitting around 0-5% CPU usage.  A supercomputer without work is little more than an inefficient furnace.
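For readers wondering what "loading 8+ cores to 100%" looks like in practice, here is a generic sketch (not FordGT90Concept's actual code) that spreads a CPU-bound task over every core with Python's standard library:

```python
# One worker process per core; each runs a purely CPU-bound task.
from multiprocessing import Pool, cpu_count

def busy_sum(n: int) -> int:
    # CPU-bound work: no I/O, just arithmetic
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    with Pool(cpu_count()) as pool:  # one worker per core
        results = pool.map(busy_sum, [10**5] * cpu_count())
    print(len(results), "chunks done")
```

The point stands either way: without work to hand these workers, all those cores just idle.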




pantherx12 said:


> Unlike humans that build up muscle memory and automatic responses a machine has to think about moving, so once its phsyical speed starts building up it becomes more and more difficult.


Humans require more brain activity than a computer does during isometric contractions (contralateral sensorimotor cortex, premotor areas, and ipsilateral cerebellum light up like a Christmas tree in an EMG): http://www.ncbi.nlm.nih.gov/pubmed/17394210

All a computer needs is a sensor to tell what position the motor is in; it calculates how to power a trajectory to get the motor to its needed position, then verifies it arrived.  It's just a bunch of 1s and 0s--the language computers eat for lunch.
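The sense-compute-verify loop described above fits in a few lines; the gain, tolerance, and target values here are invented purely for illustration:

```python
# Minimal sketch of a motor control loop: read the position from a
# sensor, nudge the motor toward the target, verify arrival.

def drive_to(target: float, position: float = 0.0,
             gain: float = 0.5, tolerance: float = 0.01,
             max_steps: int = 1000) -> float:
    for _ in range(max_steps):
        error = target - position   # sense: how far off are we?
        if abs(error) < tolerance:  # verify: close enough, done
            break
        position += gain * error    # actuate: correct a fraction of it
    return position

final = drive_to(90.0)  # e.g. rotate a joint to 90 degrees
```

A simple proportional step like this converges geometrically, which is why such loops need so few cycles per second.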




pantherx12 said:


> Where as having a CPU core for each sensor it has it will be able to adjust things all that much quicker.


Very wasteful.  Monitoring a motor takes only a few cycles every second.  An ARM processor is more than sufficient.  Hell, your cellphone could probably manage a dozen robots depending on precision.




TIGR said:


> I would like to submit this article to the discussion.
> 
> The other articles linked to at the end of this one are worth looking over too.


That's similar to what I said before: cores need to work synchronously on the hardware level so on the software level, you only have a few cores but there are many behind the scenes handling the data.

That would mean a different instruction set (doubt x86 would work) and a new generation of processors.


----------



## pantherx12 (Feb 24, 2010)

That was an awful example; conscious movement is a completely different kettle of fish : /


----------



## FordGT90Concept (Feb 24, 2010)

You can move your arm without "thinking" about it.  You do it all the time in your sleep and when you touch something hot (you have to override the brain's reaction to keep touching it).  The brain still does a lot of work to make it happen subconsciously.


----------



## pantherx12 (Feb 24, 2010)

Of course but its still a bad example.

I've trained Parkour for long enough that I've experienced completely automatic responses.

Say I've vaulted a wall but I can't see what's on the other side, and suddenly there's another obstacle I wasn't expecting. I don't think "HOLY SHIT WALL DO SOMETHING"; my body just reacts to a visual stimulus, like how you flinch when someone throws a punch. Same sort of concept.

A machine can't do anything like that without a lot of cores, as rather than a reaction it will have to see the object and then essentially think about what to do next. The more cores it has, the more potential responses to the obstacle it could think of, and having a core per sensor will also allow it to truly accurately judge its position in space and thus go over the next obstacle with no problems.


With fewer cores it simply won't be able to plan the next movement efficiently.


----------



## Mussels (Feb 24, 2010)

No, it'd add extra latency, as the right hand literally wouldn't know what the left hand was doing. We evolved with a central brain because it's more effective.


----------



## TIGR (Feb 24, 2010)

A central brain consisting of many synapses, which are rather well-connected.


----------



## pantherx12 (Feb 24, 2010)

Mussels said:


> no, it'd add extra latency as the right hand literally wouldnt know what the left hand was doing. we evolved with a central brain because its more effective.




And what about the spinal cord, which can control automatic responses?


The brain is in one place, but it's definitely "multi-core" 

If it wasn't, humans would barely function; I certainly know I can think of more than one thing at once.

Hell, right now I'm typing whilst listening to music and planning what I'm doing tomorrow, all simultaneously in my brain with no slowdowns XD


----------



## btarunr (Feb 24, 2010)

FordGT90Concept said:


> Oh, so you want some nameless corporation 1,000 miles from where you live to know everything you did and are doing?



Huh? Never heard of data-centres? You think everyone who has a website or company VPN has his own server?


----------



## WhiteLotus (Feb 24, 2010)

The brain cannot be compared to a computer chip.

The brain has many, MANY locations in it that all work together to do the simplest of tasks. Say you want to bend down and pick up a pencil: different nerves tell your fingers to open and close than the ones that carry the signal to your shoulder to move into the correct position to allow the movement. You have different routes, parasympathetic and sympathetic: one allows your body to rest and relax, the other allows your body to get up and move around.
If it wasn't for the cranial nerves telling your heart to slow the fuck down, you'd all have a heart rate of about 200-odd beats a minute, which ain't healthy.

Again, you cannot compare the brain with a computer chip. A brain is just too damn complicated.


----------



## TIGR (Feb 24, 2010)

Parallelism doesn't necessarily require a multi-core CPU architecture. It can take many forms and function in many different ways. Multi-tasking does not require parallel computing if the system is fast enough to be perceived as doing things simultaneously. So a computer (or brain) wouldn't necessarily need multiple CPU cores (or multiple anything operating in parallel) to do the things you have mentioned, panther (BTW I'm tired so hope I'm making sense).
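TIGR's distinction between true parallelism and fast time-slicing can be shown with a toy cooperative scheduler: a single thread of execution interleaves tasks, and no two steps ever actually run at once (the task names are made up):

```python
# Round-robin over generators: one core, zero parallelism, yet the
# tasks appear to make progress "simultaneously".

def typing():
    for word in ["hello", "world"]:
        yield f"typed {word}"

def music():
    for note in ["C", "E", "G"]:
        yield f"played {note}"

def round_robin(*tasks):
    """Run tasks one step at a time until all are exhausted."""
    queue = [iter(t) for t in tasks]
    log = []
    while queue:
        task = queue.pop(0)
        try:
            log.append(next(task))
            queue.append(task)  # re-queue for its next time slice
        except StopIteration:
            pass                # task finished; drop it
    return log

log = round_robin(typing(), music())
```

Run fast enough, the interleaved log is indistinguishable from simultaneity to a human observer, which is exactly how single-core multitasking worked for decades.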

However, massive parallelism is, in one form or another (or more likely, in multiple forms—parallel parallelism?), going to be required for the extremely demanding tasks that humans will be handing to computers in the coming several decades. I doubt that the architecture of a system designed fifty years from now would be recognizable or make sense to any of us discussing this here and now, but I do know it will feature a lot of parallel computation.

And technology like these 8 and 12-core CPUs from AMD is a necessary stepping stone along the way.


----------



## Frick (Feb 24, 2010)

FordGT90Concept said:


> I have a dual Xeon server sitting right next to me and have written a lot of multithreaded applications (capable of loading 8+ cores to 100%).  It has its uses but, if it weren't for BOINC, most of the time it would be sitting around 0-5% CPU usage.  A super computer without works is little more than an inefficient furnace.



Has anyone stated anything else? Of course you need software for it; that's self-explanatory.

Err.. Anyway, I tend to think of a future where massive parallel computing power is stored in a few locations, with next to no local applications at all. Everything electronic will be connected to the net, for good and bad. Phones, toasters, fridges, everything.. Pretty scary, but it seems like it's heading that way.


----------



## FordGT90Concept (Feb 24, 2010)

pantherx12 said:


> A Machine can't do anything like that without a lot of cores as rather then a reaction it will have to see the object  and then essentially think what to do next the more cores it has the more potential responses to the obstacle it could think of, having a core per sensor will also allow it to truly accurately judge is position in space and thus go over the next obstacle with no problems.


Go to a modern computerized factory and you'll see simple computers with good sensors completing tasks at inhuman speeds (ejecting bad seeds, plucking fertilized eggs out of a line-up, mixing inks, painting and assembling cars, etc.).

Like a computer, the body generally has only one response to save itself from danger.  If it guesses wrong, bad things happen; if it guesses right, things end up not so bad.  An example is when someone sees a deer 20 feet in front of them.  Some do the stupid thing and veer off the road, rolling the car and killing the occupants.  Some do the smart thing and brake into the collision (a trained response).  The brain isn't developing an extensive list of possibilities and weeding out which is best.  It just knows the status quo won't do and takes the first alternative.




pantherx12 said:


> With less cores it simply won't be able to plan the next movement efficiently.


With computers, it is prescribed: if this, this, and this conditions are true, do that.  That's why they are so efficient.

Where computers are slowest is very human tasks like recognizing a face, determining if someone is "beautiful" or not, recognizing voice tones, identifying body language, and detecting emotions.  The brain can do all of these tasks in little more than 100ms.  It takes a computer that long just to realize it is looking at a "face."  Because of the extreme variety in the real world, checking for a specific list of conditions takes a lot of computing horsepower.  The technology is improving but again, it stems from the shortfalls of binary: neurons vs. transistors.




btarunr said:


> Huh? Never heard of data-centres? You think everyone who has a website or company VPN has his own server?


It's the same deal: privacy is non-existent.


----------



## Frick (Feb 24, 2010)

FordGT90Concept said:


> It's the same deal: privacy is non-existant.



Welcome to the future mate.


----------



## pantherx12 (Feb 24, 2010)

TIGR said:


> Parallelism doesn't necessarily require a multi-core CPU architecture. It can take many forms and function in many different ways. Multi-tasking does not require parallel computing if the system is fast enough to be perceived as doing things simultaneously. So a computer (or brain) wouldn't necessarily need multiple CPU cores (or multiple anything operating in parallel) to do the things you have mentioned, panther (BTW I'm tired so hope I'm making sense).
> 
> However, massive parallelism is, in one form or another (or more likely, in multiple forms—parallel parallelism?), going to be required for the extremely demanding tasks that humans will be handing to computers in the coming several decades. I doubt that the architecture of a system designed fifty years from now would be recognizable or make sense to any of us discussing this here and now, but I do know it will feature a lot of parallel computation.
> 
> And technology like these 8 and 12-core CPUs from AMD is a necessary stepping stone along the way.




Of course, but we're a long way off from CPUs that are that fast; maybe when optical computing gets into full swing (give it 15 years at the current rate of laser size halving, although that's 15 years for a working prototype, not public availability).


I still think multi-core is the way forward. As I just mentioned, when light-based computing is out, heat output will be minuscule, so you could pack as many cores into one package as you physically could, and why the hell not: ten 1-terahertz CPUs would pwn an individual one, after all 

I'm a firm believer that when it comes to hardware there's no such thing as overkill


----------



## TIGR (Feb 24, 2010)

WhiteLotus said:


> The brain can not be compared to a computer chip.
> 
> The brain has many, MANY locations in it that all work to together to do the simplest of tasks. Say you want to bend down and pick up a pencil. You have different nerves that tell your fingers to open and close compared to the nerves that carry the signal to your shoulder to move in the correct position to allow the movement to happen. You have different routes, parasympathetic and sympathetic. One allows your body to rest and relax, the other allows your body to get up and move around.
> If it wasn't for the brain nerves (cranial nerves) telling your heart to slow the fuck down you'd all be having a heart rate of about 200 odd beats a minute which ain't healthy.
> ...



Sure, a human brain is more complicated than today's computer chips, but within the next decade, computers will be built that easily exceed the computing power of the human brain, and in the decades that follow, software will be written that will make it look simple by comparison. After all, we [and our brains] are but complex machines, as are computers. The difference is, computer technology is evolving at an exponential rate, while the evolution of our brains is ... oh, probably something like going from a Pentium II to a Pentium III over the course of 100,000 years.

I would say that computers not only can be, but must be (and will be) compared to the human brain. Reverse engineering such a marvelous piece of machinery is a powerful tool in the quest for progress, and it will certainly impact the way we develop our technology.


----------



## pantherx12 (Feb 24, 2010)

FordGT90Concept said:


> Go to a modern computerized factory and simple computers with good sensors are completing tasks at in human speeds (like ejecting bad seeds, plucking out fertilized eggs out of line up, mixing inks, painting and assembling cars, etc.).
> 
> Like a computer, the body generally has only one response to save itself from danger.  If it guesses wrong, bad things happen.  If it guesses right, things end up not so bad.  An example is when someone sees a deer 20 feet in front of them.  Some do the stupid thing and veer off the road, rolling it, and killing the occupants.  Some do the smart thing and brake to a collision (a trained response).  The brain isn't developing an extensive list of possibilities and weeding out which is best.  It just knows the status quo won't do and takes the first alternative.
> 
> ...



Your two statements sort of conflict there: your first example is cancelled out by "Because of the extreme variety in the real world."

A machine on a production line has consistency; real life does not.

Also, when it comes to sorting seeds, that's actually done by just shaking the hell out of them: seeds that are not ready yet fall through a mesh, then it goes to a second sieve for further sorting, followed by a bath; dead seeds sink.

So then they just scoop off the good stock and dry it.

BAM, seeds that grow every time.


*edit* if anyone finds it odd that I know that: I spend most of my time researching and reading about things, I love to know how things work


----------



## btarunr (Feb 24, 2010)

FordGT90Concept said:


> It's the same deal: privacy is non-existant.



Regardless, a majority use rented servers. So AMD is catering to a majority.


----------



## Mussels (Feb 24, 2010)

btarunr said:


> Regardless, a majority use rented servers. So AMD is catering to a majority.



Ding ding ding.


data centers will gobble these up - going quad core to 12 core gets them 3x the work in the same physical space.


----------



## FordGT90Concept (Feb 24, 2010)

Frick said:


> Has anyone stated anything else? Of course you need software for it, that's self explanatory.
> 
> Err.. Anyway, I tend to think of the future where massive parallel computing power is stored in a few locations around with next to no local applications at all. Everything electronic will be connected to the net, for good and bad. Phones, toasters, fridges, everything.. Pretty scary, but it seems like it's heading that way.


But what is the software?  You can make any program consume 100% of a given processor, but if it isn't doing something useful, it is wasteful.  The industry appears to be willfully pulling itself apart.  You've got the CPU market trying to achieve parallelism that less than 5% of the market can even use; you've got the GPU market attempting the same and, in the process, diminishing the GPU's ability to perform its original task; and you've got developers with all these hardware resources available to them and either a 1000-page manual on how to use them or nothing that could possibly require that many resources.  It's like the whole "32-bit is all gaming needs" argument applied to the rest of the hardware.  It's not good. 


Oh goodie, so they know how you like your toast and what is in your fridge too. :shadedshu  I think I will move to Mars now (or die trying to get there).




pantherx12 said:


> Your two statements sort of conflict there, your first example is cancelled out by this "Because of the extreme variety in the real world,"


That's in human appearance and behavior.  Computers have been able to do everything I listed individually.  It is just impractical to combine them all.




btarunr said:


> Regardless, a majority use rented servers. So AMD is catering to a majority.





Mussels said:


> data centers will gobble these up - going quad core to 12 core gets them 3x the work in the same physical space.


True and true.  They (IBM, Intel, Sun, AMD, etc.) created that industry and they are going to feed it.  I never said these wouldn't be good for data centers/enterprises.  My concern is sticking them in consumer computers (wasteful and inevitably coming because consumers are creating demand for waste) or moving consumers to clouds of them (privacy).


----------



## pantherx12 (Feb 24, 2010)

FordGT90Concept said:


> That's in human appearance and behavior.  Computers have been able to do everything I listed individually.  It is just impractical to combine them all.
> 
> 
> 
> ...



And does that not make you think a hybrid system is the way forward?

Have perhaps 4 cores that run at 4 GHz+ (or more), and have the remainder low-clocked (1.5 GHz) for handling non-intensive tasks etc.?


----------



## TIGR (Feb 24, 2010)

pantherx12 said:


> And does that not make you think a hybird system is the way forward?
> 
> have perhaps 4 cores that run at 4ghz+ ( or more) and have the reminder low clocked ( 1.5) for handling non intensive tasks etc?



Or give all cores the ability to clock up and down as needed.
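As a toy model of that idea (all frequency steps and thresholds are invented, loosely in the spirit of an "ondemand"-style governor), each core can pick its own frequency based on its own load:

```python
# Per-core frequency scaling sketch: idle cores clock down, busy cores
# clock up, each independently of the others.

STEPS_MHZ = [800, 1600, 2600, 3600]  # hypothetical frequency steps

def choose_freq(utilization: float) -> int:
    """Pick the lowest step that keeps estimated utilization at the
    new frequency under ~80%. `utilization` is the fraction of
    max-frequency throughput the core currently needs."""
    for f in STEPS_MHZ:
        if utilization * STEPS_MHZ[-1] / f <= 0.8:
            return f
    return STEPS_MHZ[-1]  # saturated: run flat out

loads = [0.05, 0.30, 0.95, 0.10]            # one entry per core
freqs = [choose_freq(u) for u in loads]
```

Real governors add hysteresis and sampling intervals so cores don't thrash between steps, but the core idea is this simple.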


----------



## pantherx12 (Feb 24, 2010)

TIGR said:


> Or give all cores the ability to clock up and down as needed.



Even better


----------



## FordGT90Concept (Feb 24, 2010)

pantherx12 said:


> And does that not make you think a hybird system is the way forward?
> 
> have perhaps 4 cores that run at 4ghz+ ( or more) and have the reminder low clocked ( 1.5) for handling non intensive tasks etc?


If your objective is to identify people, transistors, and more specifically binary, are not the way to go.  You need a processor that thinks in terms of shapes and other visual cues.  The brain can quickly determine whether what it is looking at is the shape of a human, the shape of a face, the shape of a hand, etc.  It can then rapidly pick abnormalities out of the face, like distribution of hair, moles, wrinkles, etc.  The trouble with binary is describing any of the above in terms of color differences.  It is a real PITA.

Work smarter, not harder.

The best upgrade to a human would be a calculator.  Humans are ridiculously bad at seeing things in 1s, 0s, and derivatives thereof.  If you could add that capability to the brain, it would be far more efficient at processing numbers (rather, the concept of numbers).  Likewise, if we could create a co-processor that works in terms of shapes, it would drastically increase the capability of computers.  For instance, it could look at a web page and read everything on it so long as it can identify the character set.  It could look at a picture and identify people and what those people are most likely doing.  It could also name every object it knows that appears in the picture like cars, bikes, signs, symbols, etc.  In order to engineer said processor, we'd have to throw what we know about processing out the window and start from scratch with that goal in mind.  As far as I know, that's not going to happen any time soon because they're all too busy milking the ARM, POWER, and x86 cash cows.


They would not be marketed by instructions per second like current CPUs; they would be marketed by shapes per second and detail per shape.  And hey, because it works on shapes, it could actually create a seamless arch on an analog display (digital would pixelate it).


----------



## pantherx12 (Feb 24, 2010)

Of course, but we're ages away from that sort of thing really.

This is the best we got for now.


----------



## FordGT90Concept (Feb 24, 2010)

I don't think we are; we (the people with the resources) refuse to go there because initial research would be very costly and, because it wouldn't be directly compatible with any existing processor technologies, implementation wouldn't exactly be smooth.  Communication between them would need subprocessing of its own (binary would have to be converted from and to symbols).  The result would be a major jump in computing though.

After shapes, we'd need a speech processor (decodes sound waves and can produce its own including pitch, tone, and expressiveness).  With some good programming, it could completely replace call centers and you'd never be able to tell you were actually talking to a computer.


----------



## WarEagleAU (Feb 24, 2010)

Well, I am just blown away that they got these out so quick, in my opinion. Well done, AMD. Cannot wait to see some type of review, if that's possible.


----------



## TIGR (Feb 24, 2010)

FordGT90Concept said:


> I don't think we are; we (the people with the resources) refuse to go there because initial research would be very costly and, because it wouldn't be directly compatible with any existing processor technologies, implementation wouldn't exactly be smooth.  Communication between them would need subprocessing of its own (binary would have to be converted from and to symbols).  The result would be a major jump in computing though.
> 
> After shapes, we'd need a speech processor (decodes sound waves and can produce its own including pitch, tone, and expressiveness).  With some good programming, it could completely replace call centers and you'd never be able to tell you were actually talking to a computer.



BCIs (Brain Computer Interfaces) are quickly advancing and I agree with you that adding to humans the ability to process data as computers do is coming. By that point I don't think the architecture of the computers connected to the brain will resemble anything like today's systems, but once again I think it will be highly parallel, and with many parallel connections to our highly parallel brains (of course, who knows what modifications we might make to our own organic brains, aside from adding computers to them?).

Speech processing has already advanced far beyond what most people realize, because the text-to-speech and automated calling systems to which most people have been exposed are far from state of the art. The crap that comes bundled with operating systems, and even some of the more expensive speech processing software a regular consumer can buy, is not representative of the speech processing computers are already capable of. Speech processing in real time might be one of those things that is best done without too much parallel processing, due to the latency introduced; but then again, it would be small-minded to assume that said latency will always be the issue it is today.


----------



## jasper1605 (Feb 24, 2010)

pantherx12 said:


> Just imagine a time when 100 core CPUS are availble for desktop use, CPU core per individual program
> 
> have 10 of those cores higher clocked then the rest for handling games and heavy duty apps, rest for everything else.
> 
> Computer will never slow down ( theoretically)



just like computers will never need more than 640K of memory?


----------



## pantherx12 (Feb 24, 2010)

jasper1605 said:


> just like computers will never need more than 640K of memory?



My statement wasn't implying it would stop at 100 cores


----------



## sly_tech (Feb 24, 2010)

When I was a computer science student, I tried to apply the way a computer processes things to my own brain activity, and it improved little by little: working out what to do next, spotting repeating processes, organizing the steps, cutting out unnecessary ones, and trying to use both sides of my brain, essentially re-architecting the way I think. The result has been very good; my work as a programmer keeps getting more organized over time.

The basic computation a brain can produce is comparing two different things, true or false. So basically, a processor is almost the same as a brain. Why? Humans created it. Comparing ON and OFF, 1 and 0, is the same as the brain's TRUE and FALSE. But with around 100 billion neurons, the brain calculates everything at very high speed and very efficiently. Likewise with a computer: more cores, more performance. The weakness of a processor is the connection between its cores, unlike the brain's vast, balanced count of synapses; the architecture and speed of those connections matter too. The brain also has two big blocks of processing, logic and art, just as a computer has a CPU and a GPU. I know everyone here is talking about the CPU only, but whatever: AMD already has the APU on its plan, and maybe Intel (with a GPU on-die now, though that's not an APU) and the rest will follow. Humans created the computer; it can never compete with us across the whole thinking process, because it has no soul or desire to think. It's designed for one purpose only: to help humans process a very large amount of raw data into useful information. Simple way to understand it, right? ;D

Parallel processing is good, but for now we fail to fully utilize its potential. But wait: in 2011, perhaps, AMD will bring Bulldozer, the next step in multithreading, combining two cores in a better way. I like it. ;D So in conclusion, I'm with pantherx12 and the others on his side.
Parallel computing is more than welcome. We know die shrinks are near their limit, and GHz too, so what do we have left to explore? APUs and multi-core have a bright future (quote: the future and its potential are hard to predict). I think it's fine to buy their products (playing games and so on on multi-core CPUs) to help the manufacturers earn the money to support their research. I think gamers help a lot in this industry; they demand more than others when it comes to new technology and features. On the business side, most companies around the world will change their equipment at most once every two years, but PC gamers change their parts or upgrade at least once a year. LOL.


----------



## pantherx12 (Feb 24, 2010)

Very nice post, sly_tech.

I especially liked the first bit.

It's true, the brain can be trained just as well as muscles can be trained.
(albeit differently, of course, heh)


----------



## sly_tech (Feb 24, 2010)

Hehe, thanks pantherx12. I can see the real point you wanted to make. ;D


----------



## TIGR (Feb 25, 2010)

Welcome to TPU, sly! Interesting post; gonna read it again.


----------



## TIGR (Feb 25, 2010)

sly_tech said:


> I think it's fine to buy their products (for gaming and so on, on multi-core CPUs) to help the manufacturers earn the money to fund their research. I think gamers help this industry a lot; they demand more than anyone else when it comes to new technology and features. On the business side, most companies around the world replace their equipment at most once every two years, but PC gamers change parts or upgrade at least once a year. LOL.



 Agreed 100%.


----------



## FordGT90Concept (Feb 25, 2010)

Multi-cores are not parallel computing.  They can be made to simulate higher clock speeds through synchronous execution but again, that creates a lot of wasteful overhead and massive headaches with desyncing and inter-thread interrupts.  Multi-core is today, not tomorrow.  The future should move away from threads and move towards non-algorithmic parallel computing or, at bare minimum, hardware synchronization.
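The overhead described above can be sketched in a few lines. This is a hypothetical Python illustration (the function name and counts are mine, not from the thread): when threads must serialize on a shared lock, every step pays a synchronization cost, so extra cores buy little.

```python
import threading

# Hypothetical sketch of the inter-thread overhead described above:
# several threads increment one shared counter, but every increment
# must take a lock, so the "parallel" work is serialized by contention.
def contended_count(n_threads=4, increments=100_000):
    counter = 0
    lock = threading.Lock()

    def worker():
        nonlocal counter
        for _ in range(increments):
            with lock:  # each step pays a lock acquire/release round-trip
                counter += 1

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

print(contended_count(2, 1_000))  # 2000: correct, but no faster than one thread
```

The answer is always correct, but timing it shows the contended version gains almost nothing from extra threads, which is the kind of waste the post is complaining about.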


----------



## eidairaman1 (Feb 25, 2010)

I may just wind up getting a 2 way setup with a properly laid out motherboard


----------



## TIGR (Feb 25, 2010)

FordGT90Concept said:


> Multi-cores are not parallel computing.  They can be made to simulate higher clock speeds through synchronous execution but again, that creates a lot of wasteful overhead and massive headaches with desyncing and inter-thread interrupts.  Multi-core is today, not tomorrow.



Whatever they "simulate," multiple _anything_ working concurrently constitutes parallelism of some form. Splitting a larger problem into smaller ones to be solved simultaneously is the essence of parallel computing. There's bit-level, instruction level, data and task parallelism, etc. Multi-core CPU architecture will be replaced by something else in the future, so sure, you could say it's "today, not tomorrow," but it is a necessary stepping stone to tomorrow, which is why I take exception to your first post in this thread asserting that adding cores is the wrong path to go down.
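As a rough illustration of the data parallelism mentioned above (a minimal sketch; the function names and chunking scheme are illustrative, not from the thread), a big problem can be split into independent pieces that separate cores solve simultaneously:

```python
from multiprocessing import Pool

# A minimal sketch of data parallelism as described above: split one big
# problem (summing a range of integers) into independent chunks that
# separate cores can work on at the same time, then combine the results.
def chunk_sum(bounds):
    lo, hi = bounds
    return sum(range(lo, hi))

def parallel_sum(n, workers=4):
    step = n // workers
    chunks = [(i * step, n if i == workers - 1 else (i + 1) * step)
              for i in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(chunk_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum(1_000_000))  # same result as sum(range(1_000_000))
```

On a quad-core CPU the four chunks really do run at once; whether the split pays off depends on the per-chunk work outweighing the dispatch overhead, which is exactly the trade-off this thread keeps circling back to.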



FordGT90Concept said:


> The future should move away from threads and move towards non-algorithmic parallel computing or, at bare minimum, hardware synchronization.



That's basically a repeat of what the article I linked to earlier in this thread said.

_______________________________

But okay, my arguments aside. Let's say that the problems of multi-core latency, overhead, etc. are impossible to ever improve or overcome and there's no alternative to a "clog-prone" one-core-managing-many ("master thread") architecture. Let's assume that hardware-managed thread states on multi-core CPUs simply cannot work (you mentioned earlier that that would virtually eliminate software overhead, but you still argue against multi-core CPUs, so that's out). Basically, let's say multi-core is simply unacceptable tech and _you_ get to determine the design of future CPUs, and they will all be single-core monsters that smoke their multi-core inferiors. How are you going to do it?

1. Will the performance come from streamlining processes via new instruction sets?
2. You stated earlier that "a 12 GHz CPU can handle more work than a 4 x 3 GHz CPU because of having no overhead." Will you succeed where AMD and Intel have failed and find ways to overcome the ILP, memory, and power walls that, in our current reality, make such high operating frequencies unfeasible?

3. If you were running AMD, starting five years ago, what path would you have set the company down, and what products would they now be releasing instead of these 8 and 12-core CPUs that you criticize?

I ask out of a genuine desire to learn, seriously. I'm completely up for better ways of doing things than the norm.


----------



## FordGT90Concept (Feb 25, 2010)

TIGR said:


> Multi-core CPU architecture will be replaced by something else in the future, so sure, you could say it's "today, not tomorrow," but it is a necessary stepping stone to tomorrow, which is why I take exception to your first post in this thread asserting that adding cores is the wrong path to go down.


The only time multi-core was innovative was when it debuted with the IBM POWER4 architecture.  Continuing to expand on what is already known just stagnates efficiency.




TIGR said:


> Let's assume that hardware-managed thread states on multi-core CPUs simply cannot work (you mentioned earlier that that would virtually eliminate software overhead, but you still argue against multi-core CPUs, so that's out).


That would be the best solution to the current issues.  That is where we should currently be heading--not adding more cores that few applications can put to work.  The goal should be an architecture in which any program can utilize the full potential of any given number of cores.




TIGR said:


> Basically, let's say multi-core is simply unacceptable tech and _you_ get to determine the design of future CPUs, and they will all be single-core monsters that smoke their multi-core inferiors. How are you going to do it?


Think of it as reverse hyperthreading.  Instead of one core accepting two threads, two or more cores accept one thread.  They share variable states which should allow each core to process a portion of the algorithm while others prepare or dispose of their state.  Instructions would still take 2+ cycles to execute but they would be staggered across the cores so that each cycle, one instruction completes.  It takes more hardware to accomplish it but the instructions per clock would at least double.  I imagine the processor would have four cores exposed to the operating system but under the hood, it could have 16 or more cores internally.  The x86 instruction set could still be used.




TIGR said:


> 3. If you were running AMD, starting five years ago, what path would you have set the company down, and what products would they now be releasing instead of these 8 and 12-core CPUs that you criticize?


AMD shouldn't have made x86-64.  We need a new CISC instruction set that doesn't drag decades of garbage along with it.

Other than that, we need to look at Bulldozer before deciding if they screwed up since Athlon 64 or not.


----------



## TIGR (Feb 25, 2010)

So you are really proposing the continuation of multi-core CPUs?


----------



## FordGT90Concept (Feb 25, 2010)

Only in the short term (5-10 years).  You need multiple cores to keep the system from coming to a standstill, but any more than four or so end up wasted unless you're running specialized software.

The objective is to accelerate both single- and multi-threaded performance with smarter engineering.


----------



## TIGR (Feb 25, 2010)

What should come after 5-10 years? Single-core CPUs or GPGPUs or something else entirely?


----------



## FordGT90Concept (Feb 25, 2010)

Something else entirely.  If it doesn't happen, processor performance will flatline due to the limitations of electrons.  The transistor has to go.  Processor architecture will have to change to whatever structure supports the new physical medium.  There are really only question marks 10 years from now.  There are a lot of ideas, but nothing is gaining traction yet.


----------



## pantherx12 (Feb 25, 2010)

Carbon nanotubes should be sorted out by around then, which will nicely fuck up Moore's law : ]


----------



## yogurt_21 (Feb 25, 2010)

FordGT90Concept said:


> Only in the short term (5-10 years).  You need multiple cores in order to prevent the system from coming to a stand-still but any more than four or so ends up being wasted unless using specialized software.
> 
> The objective to accelerate both single and multi-threaded performance with smarter engineering.



Since when is the home machine completely practical? This is TPU, right? I haven't wandered into productivity central by mistake, have I?

Last I checked we have several members with quad-SLI, quadfire, i7 rigs with 12 GB of memory, etc., all of which is overkill for gaming or anything else a home user typically does.

But epeen plays a role in the purchase.

These chips are more than likely going to be used in enterprise and server environments, but a few home users will toss them in as well, because you gain x amount of epeen for every core your machine has.

On an unrelated topic, why oh why is it "Magny Cours"? That's far too close to "mangy cores" if you ask me.


----------



## TIGR (Feb 25, 2010)

yogurt_21 said:


> Since when is the home machine completely practical? This is TPU, right? I haven't wandered into productivity central by mistake, have I?
> 
> Last I checked we have several members with quad-SLI, quadfire, i7 rigs with 12 GB of memory, etc., all of which is overkill for gaming or anything else a home user typically does.
> 
> ...



I liked that "productivity central" comment. 

That's true though. Many of us do WCG or Folding@home (I've got ten video cards/fifteen GPUs across five computers running myself, and GT90 himself runs a dual quad-core Xeon system).

GT90: you said yourself that you've written applications that can fully load 8+ cores to 100%. If _some_ but not _all_ software can utilize all these cores, that tells me the problem lies with the software. Anyway, judging multi-core CPU architecture based on how current software utilizes it would be a mistake.

There's always lag between new technology and mainstream software support for it. The first mainstream dual-core CPUs came out in 2005, quads in late '06/early '07 (although there were earlier multi-core CPUs, they were not mainstream enough to get the attention of mainstream software developers). Most applications I know of can already utilize two cores, and many can already fully utilize quads—just three years after they first hit the mainstream consumer market.

There are probably _still_ far more single- and dual-core CPUs than there are quad-core CPUs out there in consumer systems, and you're saying because of how well 4+ cores perform with today's software, it's no good? If the Wright brothers had that attitude about new technology, they would have given up before their first flight because obstacles like gravity were too hard to overcome. But they had faith they could tackle the obstacles, and thus they did. The technology wasn't bad—it just needed time and work to survive its own infancy.


----------



## Wile E (Feb 26, 2010)

HalfAHertz said:


> Oh, so your magical ball looked into the problem and found the answer? Well thanks for sharing, might want to drop a ring to AMD's driver department! Unless you did some extensive testing to prove that it is indeed the drivers and not the chipset, you're just as right / wrong as mussels...
> 
> BTW correct me if I'm wrong but AMD measured their TDP differently from Intel, right? AMD were measuring the average, while intel was just showing the peak. If memory serves me right then AMD's 115W equals Intel's 130W chips.



No, it's basic logic. Since the 5k series doesn't have the issue, it is clearly something that ATI was able to solve.


----------

