# AMD's Excavator Core is Leaner, Faster, Greener



## btarunr (Feb 24, 2015)

AMD gave us a technical preview of its next-generation "Carrizo" APU, which is perhaps the company's biggest design leap since "Trinity." Built on the 28 nm silicon fab process, this chip offers big energy-efficiency gains over the current-generation "Kaveri" silicon, thanks to some major under-the-hood changes. 

The biggest of these is the "Excavator" CPU module. At 23 percent smaller in area than "Steamroller" (on the same 28 nm process), Excavator features a new high-density library design, which reduces the die area of the module. Most components are compacted: the floating-point scheduler is 38% smaller, the fused multiply-accumulate (FMAC) units are compacted by 35%, and the instruction-cache controller by another 35%. The "Carrizo" silicon itself uses a GPU-optimized high-density metal stack, which helps with the compaction. Each "Excavator" module features two x86-64 CPU cores, structured much the same way as in AMD's previous three CPU core generations. 

The compaction of components doesn't necessarily translate into lower transistor counts over "Kaveri." In fact, Carrizo features 3.1 billion transistors (Haswell-D has 1.4 billion). Other bare-metal energy optimizations include an 18% leakage reduction over the previous generation, using faster RVT components, which enables 10% higher clock speeds at the same power draw as "Kaveri." A new adaptive-voltage algorithm reduces CPU power draw by 19%, and iGPU power draw by 10%. AMD introduced a few new low-power states optimized for mobile devices such as >9-inch tablets and ultra-compact notebooks, which reduce overall package power draw to less than 1.5 W when idling, and a little over 10 W when active. In all, AMD is promising a conservative 5% IPC uplift for Excavator over Steamroller, but at a staggering 40% less power, and in 23% less die area. 
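Taking the quoted slide figures at face value, the implied efficiency gain is easy to back out: a 5% IPC uplift at 40% lower power works out to roughly 1.05 / 0.60 ≈ 1.75×, i.e. about 75% better performance per watt at equal clocks. A minimal sketch of that arithmetic (the function name is mine; the numbers are AMD's claims, not measurements):

```python
def perf_per_watt_gain(ipc_uplift: float, power_reduction: float) -> float:
    """Relative performance-per-watt vs. the older core, at equal clocks.

    ipc_uplift: fractional IPC gain (0.05 means +5%)
    power_reduction: fractional power drop (0.40 means -40%)
    """
    return (1.0 + ipc_uplift) / (1.0 - power_reduction)

# AMD's claimed Excavator-vs-Steamroller figures
gain = perf_per_watt_gain(0.05, 0.40)
print(f"{gain:.2f}x perf/W")  # 1.75x perf/W
```

This assumes the two percentages apply at the same operating point, which marketing slides rarely guarantee.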




The integrated GPU is newer than the one featured on "Kaveri." It features 8 compute units (512 stream processors) based on the Graphics Core Next 1.3 architecture, with Mantle and DirectX 12 API support, and H.265 hardware acceleration delivering more than 3.5 times the video transcoding performance of "Kaveri." For notebook and tablet users, AMD is promising "double-digit percentage" improvements in battery life.

Now, if only AMD can put six of these leaner, faster, and greener "Excavator" modules onto an AM3+ chip.

*View at TechPowerUp Main Site*


----------



## Sony Xperia S (Feb 24, 2015)

"Now, if only AMD can put six of these leaner, faster, and greener "Excavator" modules onto an AM3+ chip."

I hope so too, because if they do it, I would immediately go and buy the platform.


----------



## TRWOV (Feb 24, 2015)

AMD themselves stated that Piledriver was the end of the line for AM3+

I think the next big CPU from AMD will feature a new socket and new architecture. Maybe Zen?


----------



## Sony Xperia S (Feb 24, 2015)

TRWOV said:


> AMD themselves stated that Piledriver was the end of the line for AM3+
> 
> I think the next big CPU from AMD will feature a new socket and new architecture. Maybe Zen?



Yes, unfortunately this Excavator core is very mobile-centric and kind of "useless" for a desktop CPU. 

Which is a shame because I do NOT care about anything mobile now. /#egoismmodeswitchedon


----------



## Bjorn_Of_Iceland (Feb 24, 2015)

Hopefully it would deliver.. else this excavator would be digging them a deeper grave.


----------



## btarunr (Feb 24, 2015)

6x Excavator AM3+ could be this generation's "Thuban" before Zen.


----------



## BiggieShady (Feb 24, 2015)

So IPC is going up 5% and power consumption is going down 30%. If they use these architectural improvements in their desktop CPUs, they are better off upping the clocks than adding more cores, IMO.


----------



## Sony Xperia S (Feb 24, 2015)

BiggieShady said:


> So IPC is going up 5% and power consumption is going down 30%.



This might be true only in a limited area of the frequency / power curve....


----------



## arbiter (Feb 24, 2015)

TRWOV said:


> AMD themselves stated that Piledriver was the end of the line for AM3+
> 
> I think the next big CPU from AMD will feature a new socket and new architecture. Maybe Zen?



AM3+ being killed off was well overdue. It was old; you've got to replace the socket with something new once in a while, or else CPUs get held back by old and slow parts. I don't mean that as a bash, but it was time: the socket is three and a half years old (October 2011).


----------



## Sony Xperia S (Feb 24, 2015)

arbiter said:


> AM3+ being killed off was well overdue. It was old; you've got to replace the socket with something new once in a while, or else CPUs get held back by old and slow parts. I don't mean that as a bash, but it was time: the socket is three and a half years old (October 2011).



Only if you consider the socket itself a bottleneck, which is not a serious concern.


----------



## Exceededgoku (Feb 24, 2015)

Genuine question: how does AMD expect to compete with Intel when their process node size is double that of the Skylake CPUs?


----------



## comperius (Feb 24, 2015)

Have you seen any Skylake CPUs on the market? They've been delayed to 2016. 

So realistically it makes no sense to compare who has what. 

As long as the CPU doesn't use too much power, I don't care if it's made on 100 nm technology.


----------



## Assimilator (Feb 24, 2015)

comperius said:


> Have you seen any Skylake CPUs on the market? They've been delayed to 2016.



End of August 2015 is 2016 now? Interesting.


----------



## Mussels (Feb 24, 2015)

these could make for some very, very nice 9-10" windows tablets


----------



## RejZoR (Feb 24, 2015)

Looks quite decent...


----------



## Sony Xperia S (Feb 24, 2015)

Exceededgoku said:


> Genuine question: how does AMD expect to compete with Intel when their process node size is double that of the Skylake CPUs?



AMD is expecting to move to a new process with Zen sometime in 2016.

At that time, the process gap will be largely gone.


----------



## Aquinus (Feb 24, 2015)

Exceededgoku said:


> Genuine question: how does AMD expect to compete with Intel when their process node size is double that of the Skylake CPUs?


Intel and AMD measure node sizes differently. They're actually not as dissimilar as you think they are, but that isn't to say that Intel doesn't have a smaller process than AMD, it's just over exaggerated because of how it's measured. I wish I could find the article, but IIRC it describes how Intel measures more of the width of any given circuit where AMD excludes the very outside edge where Intel includes it, or something along those lines. So in reality, something like 16nm for Intel would be something like 20nm by how AMD measures it. It's important to keep that in mind and if I end up finding the article, I'll add it.


----------



## micropage7 (Feb 24, 2015)

power consumption looks ok then we need some benchmark


----------



## Sony Xperia S (Feb 24, 2015)

Aquinus said:


> Intel and AMD measure node sizes differently.



Somewhat.



Aquinus said:


> They're actually not as dissimilar as you think they are, but that isn't to say that Intel doesn't have a smaller process than AMD, it's just over exaggerated because of how it's measured.


----------



## comperius (Feb 24, 2015)

Assimilator said:


> End of August 2015 is 2016 now? Interesting.



No, I read that Q3 2015 is out of the picture and the CPUs will arrive in late Q4 2015 or Q1 2016. That's why I wrote this. You can check this info online via a Google search.


----------



## Aquinus (Feb 24, 2015)

Sony Xperia S said:


> Somewhat.



Out of curiosity, do you recall where you got that picture from?


----------



## Recus (Feb 24, 2015)

Aquinus said:


> Out of curiosity, do you recall where you got that picture from?



http://www.extremetech.com/extreme/195897-samsung-and-apple-team-up-on-14nm-chips-expected-in-2015
http://pc.watch.impress.co.jp/docs/column/kaigai/20141010_670675.html


----------



## TheinsanegamerN (Feb 24, 2015)

arbiter said:


> AM3+ being killed off was well overdue. It was old; you've got to replace the socket with something new once in a while, or else CPUs get held back by old and slow parts. I don't mean that as a bash, but it was time: the socket is three and a half years old (October 2011).


Yeah, you have to replace the socket (and chipset) eventually, but AMD didn't do that. They killed AM3+ without offering a newer socket, or offering 6-8 core chips for FM2+. This is after they tried to pump up everyone on the MOAR CORES system. If I have to buy a quad core, the i5 smears the a10 across the walls. I'm still waiting for an upgrade from the fx6300.....


----------



## GhostRyder (Feb 24, 2015)

Interesting, to say the least. It would really be nice to see the power-reduction numbers in action on a laptop (or other mobile device), to see how much better the battery life is compared with the Steamroller chips. It would also be nice if the GPU received a nice step up, because that would make mobile media-house laptops more appealing with that chip inside.

But we have to wait for further details/proof before we decide anything.


----------



## alwayssts (Feb 24, 2015)

I prefer this now often-referenced article:

https://www.semiwiki.com/forum/content/3884-who-will-lead-10nm.html

Up to Intel's 14 nm and the fab club's 20 nm, they seem similar enough to state that one notch larger node from AMD is comparable to one size smaller from Intel. 

When you start getting to 14 nm from the CPA, things start to get switched around. My best guess (and the numbers more or less follow the logic) is that Intel will be targeting the low-power/lower end of the voltage curve in favor of density, whereas Samsung/GF may target higher performance in an effort to be more TSMC-like (notice how the CPA and TSMC essentially end up in the same place). I haven't dug deep into where they got their 14/16 nm numbers from in relation to process versions, but it meshes pretty cleanly with TSMC shooting for roughly 10% smaller chips with 16FF+ vs. earlier 16 nm (which would bring them in line with the CPA 14 nm process for density), and with the CPA having 'early' vs. 'plus' versions of the 14 nm process, the latter of which is supposed to improve speed/power consumption (read: be more like TSMC).


----------



## Jorge (Feb 24, 2015)

Sony Xperia S said:


> "Now, if only AMD can put six of these leaner, faster, and greener "Excavator" modules onto an AM3+ chip."
> 
> I hope also so because if they do it, I would immediately go a buy the plaform with it.



Unfortunately AMD dropped the ball and should have released an Excavator based AM3+, but they are NOT going to do this. Zen will be the upgrade for AM3+ users but it will require a new socket/mobo. While Zen will be a big performance jump from the current FX Vishera series, it's taken YEARS too long for AMD to deliver this performance upgrade. AMD should have definitely offered an Excavator cored AM3+ late in 2014 so AM3+ users had a viable upgrade path.



Exceededgoku said:


> Genuine question: how does AMD expect to compete with Intel when their process node size is double that of the Skylake CPUs?



Many people do not understand that a node size below ~32 nm does not buy much in actual CPU performance. What it offers is lower power consumption and higher transistor density with lower unit costs. It's pointless to rush to a smaller die size for each new iteration based on cost alone. Anyone who has paid attention knows that Intel has not achieved any significant gains in CPU performance over their last three node drops. That should be no surprise if you understand the process.

Technically informed people are not buying Intel chips based on node size. In fact, Intel just delayed their 14 nm Skylake due to slow uptake on their current products and development issues with the die shrink.



Sony Xperia S said:


> AMD is expecting to move to a new process with Zen sometime in 2016.
> 
> At that time, the process gap will be largely gone.



The process diff has little impact on actual CPU/APU performance other than primarily lower power consumption. Carrizo uses advanced power management to make significant reductions in power consumption.


----------



## TRWOV (Feb 24, 2015)

I can't believe I'm actually agreeing with Jorge, but yes, the last two nodes haven't gotten Intel much performance. Even if IPC has increased, maximum attainable clock speeds have decreased too, so it evens out, although that might have more to do with the change from solder to TIM on Intel's part than with the node itself.

Anyone with a Core 3xxx doesn't have much reason to upgrade to a Core 4xxx or 5xxx.
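The "it evens out" point is just the identity single-thread performance ≈ IPC × clock. A toy sketch with made-up illustrative numbers (not measured data; the function name is mine):

```python
def relative_perf(ipc_gain: float, clock_gain: float) -> float:
    """Single-thread performance ratio vs. the old chip.

    ipc_gain: fractional IPC change (0.05 means +5%)
    clock_gain: fractional clock change (-0.05 means -5%)
    """
    return (1.0 + ipc_gain) * (1.0 + clock_gain)

# e.g. a +5% IPC gain paired with a -5% attainable clock
# almost exactly cancels out (1.05 * 0.95 = 0.9975)
print(relative_perf(0.05, -0.05))
```

Of course this ignores memory, turbo behavior, and workload sensitivity; it only captures the first-order trade-off being discussed.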


----------



## Jorge (Feb 24, 2015)

Aquinus said:


> Intel and AMD measure node sizes differently. They're actually not as dissimilar as you think they are, but that isn't to say that Intel doesn't have a smaller process than AMD, it's just over exaggerated because of how it's measured. I wish I could find the article, but IIRC it describes how Intel measures more of the width of any given circuit where AMD excludes the very outside edge where Intel includes it, or something along those lines. So in reality, something like 16nm for Intel would be something like 20nm by how AMD measures it. It's important to keep that in mind and if I end up finding the article, I'll add it.




...and as documented by the past three Intel node reductions, no tangible performance is gained with die shrinks below ~32 nm. The die shrink is primarily for lower power consumption, increased transistor density, and lower unit costs. That's why AMD is in no hurry to rush to smaller node sizes.

AMD is doing a very good job on their APU designs. Carrizo should make a lot of people very happy. AMD is scheduled to officially release these chippies in Q2 of '15. They are actually in production now.

For those not up to speed on die-shrink sizes, there is almost zero performance gain now that we are below 32 nm. The biggest gains from a die shrink are lower power and higher transistor density, which reduce CPU/APU costs. With AMD's advanced power management on Carrizo, power consumption is reduced even further than in prior low-power APU models.

Carrizo APUs will be available in both mobile and desktop versions, with mobile models being released first.


----------



## Exceededgoku (Feb 24, 2015)

Jorge said:


> AMD is doing a very good job on their APU designs. Carrizo should make a lot of people very happy. AMD is scheduled to officially release these chippies in Q2 of '15. They are actually in production now.
> 
> For those not up to speed on die-shrink sizes, there is almost zero performance gain now that we are below 32 nm. The biggest gains from a die shrink are lower power and higher transistor density, which reduce CPU/APU costs. With AMD's advanced power management on Carrizo, power consumption is reduced even further than in prior low-power APU models.
> 
> Carrizo APUs will be available in both mobile and desktop versions, with mobile models being released first.



Yes of course power requirements are lower, which can (not necessarily does) allow for greater performance.

The fact that Intel hasn't improved much is down to market complacency and general optimisations being prioritised.

I'd love to see AMD release something even half as good as some of the latest Intel CPUs. Until these are released and in consumers' hands I won't be making any judgements, as the last time they did this they released the original FX, which ended up being a turd (although over its lifetime it has improved somewhat...).

This is of course ignoring the price differences (which AMD has aced)!


----------



## Ralfies (Feb 24, 2015)

I would switch to AMD if they offered even a 4 module excavator CPU on AM3+.


----------



## Sony Xperia S (Feb 24, 2015)

Exceededgoku said:


> Yes of course power requirements are lower, which can (not necessarily does) allow for greater performance.
> 
> The fact that Intel hasn't improved much is down to market complacency and general optimisations being prioritised.



If Intel sits back, relaxes and waits for AMD, that might turn into a very severe mistake which could cost them a lot.

Their recent launches for some reason didn't show anything more than negligible improvements in either power consumption or performance, so I guess they don't even use the process nodes' full potential, which is a shame.


----------



## newtekie1 (Feb 24, 2015)

Jorge said:


> Unfortunately AMD dropped the ball and should have released an Excavator based AM3+, but they are NOT going to do this. Zen will be the upgrade for AM3+ users but it will require a new socket/mobo. While Zen will be a big performance jump from the current FX Vishera series, it's taken YEARS too long for AMD to deliver this performance upgrade. AMD should have definitely offered an Excavator cored AM3+ late in 2014 so AM3+ users had a viable upgrade path.



The northbridge on the motherboard has become the huge issue with AM3+.  These chips are designed to have the northbridge directly attached to them.  Trying to modify them to work with an already-existing, outdated northbridge would be a lot of work.  Plus, AM3+ users would still be stuck with PCI-E 2.0.  It really is time to let AM3+ go.  What we really need is a new high-end socket and some high-end versions of these APUs.  I know they said they didn't want to release AM4 until DDR4 was mainstream, but I'd really like to see them do it sooner rather than later.  Put Excavator on AM4, give it 8 cores and a weaker GPU (because most people will use a dedicated GPU anyway, so power efficiency is more important), and let it use DDR3.  When DDR4 is mainstream, move to AM4+.


----------



## Dave65 (Feb 24, 2015)

TRWOV said:


> AMD themselves stated that Piledriver was the end of the line for AM3+
> 
> I think the next big CPU from AMD will feature a new socket and new architecture. Maybe Zen?



As it should be, AM3+ is ancient!


----------



## Dustproblem (Feb 24, 2015)

A really nice gaming-capable mobile/ultrabook chip. 
I really need a new notebook badly; web browsing, Twitch streaming, YouTube and gaming are top priorities for me. 
My current laptop gets so sluggish running Flash-packed websites and source-quality videos on Twitch. 
It can't even run 1080p 60 fps YouTube videos without crashing the browser and the extensions running in Chrome.
I hope these come equipped with SSDs, unlike their Kaveri predecessors.


----------



## Sony Xperia S (Feb 24, 2015)

newtekie1 said:


> The northbridge on the motherboard has become the huge issue with AM3+.  These chips are designed to have the northbridge directly attached to them.  Trying to modify them to work with an already-existing, outdated northbridge would be a lot of work.  Plus, AM3+ users would still be stuck with PCI-E 2.0.  It really is time to let AM3+ go.  What we really need is a new high-end socket and some high-end versions of these APUs.  I know they said they didn't want to release AM4 until DDR4 was mainstream, but I'd really like to see them do it sooner rather than later.  Put Excavator on AM4, give it 8 cores and a weaker GPU (because most people will use a dedicated GPU anyway, so power efficiency is more important), and let it use DDR3.  When DDR4 is mainstream, move to AM4+.



In 2016-2017, DDR4 should be the way to go for a system designed to last at least a couple of years.
With Zen, AMD needs to go directly to DDR4 and forget about DDR3.


----------



## BiggieShady (Feb 24, 2015)

Sony Xperia S said:


> This might be true only in a limited area of the frequency / power curve....


On power consumption, yes, I also suspect that decrease is their best-case scenario. With IPC, if they are telling the truth, the 5% holds across all frequencies.


----------



## theGryphon (Feb 24, 2015)

AMD is following the right strategy by going for the mobile market. That's where the volume (of sales) is, and that's where the money can come from: the money they desperately need to fund further R&D. The desktop market (especially high-end) is way smaller than the excitement of these tech forums, filled with enthusiast system builders (including myself), would indicate.

AM3+ is now dead. AMD should come up with a new socket when the new node is ready for mass high performance production, and they can deliver desktop/server CPUs with many cores with at least somewhat competitive performance. That may not be possible without the new node, so releasing a DOA desktop platform too soon would be a terrible mistake, putting the nail in the coffin.


----------



## Assimilator (Feb 24, 2015)

comperius said:


> No, I read that Q3 2015 is out of the picture and the CPUs will arrive in late Q4 2015 or Q1 2016. That's why I wrote this. You can check this info online via a Google search.



http://www.guru3d.com/news-story/intel-14nm-skylake-desktop-cpus-delayed-to-august.html



Jorge said:


> ...and as documented by the past three Intel node reductions, no tangible performance is gained with die shrinks below ~32Nm. The die shrink is primarily for lower power consumption, increased transistor density and lower unit costs. That's why AMD is in no hurry to rush to smaller node sizes.



Considering the appalling power consumption of AMD's CPUs, perhaps they should be in a bit more of a hurry.


----------



## $ReaPeR$ (Feb 24, 2015)

Good news. It's time for some competition in the CPU market; it has been stagnant for too long.


----------



## ZoneDymo (Feb 24, 2015)

Dustproblem said:


> A really nice gaming-capable mobile/ultrabook chip.
> I really need a new notebook badly; web browsing, Twitch streaming, YouTube and gaming are top priorities for me.
> My current laptop gets so sluggish running Flash-packed websites and source-quality videos on Twitch.
> It can't even run 1080p 60 fps YouTube videos without crashing the browser and the extensions running in Chrome.
> I hope these come equipped with SSDs, unlike their Kaveri predecessors.



Well, it's not like 1080p 60 fps is easy peasy.


----------



## arbiter (Feb 24, 2015)

TheinsanegamerN said:


> Yeah, you have to replace the socket (and chipset) eventually, but AMD didn't do that. They killed AM3+ without offering a newer socket, or offering 6-8 core chips for FM2+. This is after they tried to pump up everyone on the MOAR CORES system. If I have to buy a quad core, the i5 smears the a10 across the walls. I'm still waiting for an upgrade from the fx6300.....



The problem with doing that on FM2+ is that it has a GPU on it as well, which takes up a ton of space. They could probably make one without it and have it work, but that would probably be confusing for some people. Intel's high-end 6- and 8-core parts don't have a GPU because it would use too much space.


----------



## AlexS9U4 (Feb 24, 2015)

And what about AM1? xD


----------



## ThE_MaD_ShOt (Feb 24, 2015)

It would be great if they did what they did with the AM2+/AM3 CPUs and put a memory controller on the CPU that could use DDR3 or DDR4. That way they can release something now that uses DDR3, and when DDR4 becomes mainstream, release an updated mobo that is DDR4.


----------



## Aquinus (Feb 24, 2015)

I personally would like AMD to release a new socket that's LGA with the PCI-E root complex moved to the CPU and with more PCI-E lanes like 32 or 36. Enough to drive 4x 8-lane ports.


newtekie1 said:


> The northbridge on the motherboard has become the huge issue with AM3+.  These chips are designed to have the northbridge directly attached to them.  Trying to modify them to work with an already-existing, outdated northbridge would be a lot of work.  Plus, AM3+ users would still be stuck with PCI-E 2.0.  It really is time to let AM3+ go.  What we really need is a new high-end socket and some high-end versions of these APUs.  I know they said they didn't want to release AM4 until DDR4 was mainstream, but I'd really like to see them do it sooner rather than later.  Put Excavator on AM4, give it 8 cores and a weaker GPU (because most people will use a dedicated GPU anyway, so power efficiency is more important), and let it use DDR3.  When DDR4 is mainstream, move to AM4+.



So what you're really saying is that the PCI-E root complex should be moved to the CPU and the interconnect between the south bridge would use what, A-Link/PCI-E like it has in the past?

The funny thing about all of this is that it's exactly how APUs work and how Intel CPUs work. AM3+ is just a dying platform because of the archaic design of having the PCI-E root complex on the motherboard (which is awesome for cheap servers I might add. It's half of the reason my gateway still has the Phenom II and a 790FX in it, PCI-E galore! I could add 8 ethernet ports and still have PCI-E to spare.)


----------



## librin.so.1 (Feb 25, 2015)

I would have gotten an AMD laptop with the older chips, and I would get a laptop with these new chips.
Now, if only I could BUY a laptop with an AMD APU in these parts, that'd be great.
Quite a few people I know around here, myself included, are looking for [decent] laptops with AMD APUs. But it's Intels as far as the eye can see.
"There's demand, but zero supply for the last few years."
ARG.


----------



## arbiter (Feb 25, 2015)

ThE_MaD_ShOt said:


> It would be great if they did what they did with the AM2+/AM3 CPUs and put a memory controller on the CPU that could use DDR3 or DDR4. That way they can release something now that uses DDR3, and when DDR4 becomes mainstream, release an updated mobo that is DDR4.



Sounds good in theory, but the problem is it's 240-pin vs. 288-pin RAM slots, so you'd need two separate sets of RAM slots, which limits the RAM you can install. On top of that, there's the space on the CPU for the two different memory controllers, which would each need their own pins and traces on the board. It makes the CPU pin-out and board traces more complex. Not really worth it to make one. The memory controllers are on the CPU for both Intel and AMD.


----------



## newtekie1 (Feb 25, 2015)

Aquinus said:


> So what you're really saying is that the PCI-E root complex should be moved to the CPU and the interconnect between the south bridge would use what, A-Link/PCI-E like it has in the past?
> 
> The funny thing about all of this is that it's exactly how APUs work and how Intel CPUs work. AM3+ is just a dying platform because of the archaic design of having the PCI-E root complex on the motherboard (which is awesome for cheap servers I might add. It's half of the reason my gateway still has the Phenom II and a 790FX in it, PCI-E galore! I could add 8 ethernet ports and still have PCI-E to spare.)



Yes, that is what I'm saying.  And yes, I would like to see 32 PCI-E lanes from the CPU and 16 from the PCH(southbridge).



arbiter said:


> Sounds good in theory, but the problem is it's 240-pin vs. 288-pin RAM slots, so you'd need two separate sets of RAM slots, which limits the RAM you can install. On top of that, there's the space on the CPU for the two different memory controllers, which would each need their own pins and traces on the board. It makes the CPU pin-out and board traces more complex. Not really worth it to make one. The memory controllers are on the CPU for both Intel and AMD.


That isn't what he means.  He doesn't want a single motherboard with both types of RAM.  He just wants the CPU to support both types, so you can use a motherboard with DDR3 at first, and then get a new motherboard with DDR4 in the future when DDR4 becomes mainstream.


----------



## Schmuckley (Feb 25, 2015)

Diagrams don't mean a thing until it's in my grubby little hand


----------



## ThE_MaD_ShOt (Feb 25, 2015)

arbiter said:


> Sounds good in theory, but the problem is it's 240-pin vs. 288-pin RAM slots, so you'd need two separate sets of RAM slots, which limits the RAM you can install. On top of that, there's the space on the CPU for the two different memory controllers, which would each need their own pins and traces on the board. It makes the CPU pin-out and board traces more complex. Not really worth it to make one. The memory controllers are on the CPU for both Intel and AMD.


Yes, it would be two different mobos: a DDR3 board for now, then a DDR4 board when DDR4 becomes mainstream. The boards don't need both kinds of RAM slots, just a CPU that's able to run on a board with either. Also, AMD has already done this with CPUs supporting two different memory specs; the Phenom II CPUs could be used in boards that were either DDR2 or DDR3.


----------



## BiggieShady (Feb 25, 2015)

Aquinus said:


> Intel and AMD measure node sizes differently. They're actually not as dissimilar as you think they are, but that isn't to say that Intel doesn't have a smaller process than AMD, it's just over exaggerated because of how it's measured. I wish I could find the article, but IIRC it describes how Intel measures more of the width of any given circuit where AMD excludes the very outside edge where Intel includes it, or something along those lines. So in reality, something like 16nm for Intel would be something like 20nm by how AMD measures it. It's important to keep that in mind and if I end up finding the article, I'll add it.



Quoted for truth. 
A 16 nm node doesn't have transistors that are 2 times smaller than at the 32 nm node. FinFET transistors at the 16 nm node just have thinner fins, so marketing comes into play and starts measuring the fin width and BAM, a new silicon node is here... when in fact there's nothing new except tapered fins. 
The main optimizations these days come from figuring out how to arrange the transistors so they can share a fin or two.


----------



## ThE_MaD_ShOt (Feb 25, 2015)

cheesy999 said:


> That's because the ddr2 controller was on the motherboard and the ddr3 one was on the cpu, they could presumably do something similar again though...


Nah, the controllers were on the CPU; this is from AMD's web site.


----------



## Aquinus (Feb 25, 2015)

cheesy999 said:


> That's because the ddr2 controller was on the motherboard and the ddr3 one was on the cpu, they could presumably do something similar again though...


Wrong. The Phenom II and Athlon AM3 chips had both a DDR2 and a DDR3 controller on the CPU, and you could put AM3 CPUs in most AM2+ boards with the proper BIOS. AM3+ CPUs had the DDR2 portion of the IMC removed, so as a result AM3+ was incompatible with AM2+. AM2+ CPUs, on the other hand, only had a DDR2 controller, which is why you couldn't put an AM2+ CPU in an AM3 board with DDR3; it was not because the controller was on the motherboard. AMD hasn't had a memory controller on the motherboard for many, many years. Even going back to skt939 with DDR, the memory controller was on the CPU. There is absolutely no memory-controller logic on the motherboard, just IO, with PCI-E and all the peripherals on the south bridge.


----------



## arbiter (Feb 25, 2015)

Given the CPU space needed for both controllers, and AMD's financial state, it's just not worth doing. How many people go out and replace just their motherboard and keep the same CPU (not counting motherboard failure)? The time, effort, and money would bring mostly no boost in performance unless you use the APU's graphics; you might as well use that money to get a dedicated GPU instead. For the cost of the board plus new RAM, you'd get better bang for the buck just buying the best GPU in the price range of what you would spend.


----------



## sergionography (Feb 26, 2015)

I don't know what to think of this. :/ It's great what AMD is doing with this process node, but to think they have only increased density yet retained the same die size? The same number of cores? And the same number of graphics clusters? Meaning all that real estate is going towards HSA stuff. Now the question remains: is it really worth it, AMD? Is this HSA really worth using up 1/3 of the chip for? AMD needs to stop trying to save the world with "innovations" that will pave the way of the future, and instead learn how to make money, and not just with this but with everything they've been doing so far. Look at Mantle, for example: yes, it's great, and yes, it forced other APIs in the right direction, but did it make AMD any money or improve their position in the marketplace? Well, the market-share loss this year suggests not, unfortunately, making it a failed business project. It's a similar thing with HSA: it hasn't benefited AMD one bit, because it's been half-baked baby steps so far, and meaningless, because there is no point to an excellent fabric if the main components are lacking. AMD would've been better off building simpler APUs, with either more CPU/GPU to gain more performance, or with less of this HSA fabric to get smaller chips and therefore more profit, and only releasing HSA when it is complete and tested in its best form, and when they have good components to make it shine.


----------



## sergionography (Feb 26, 2015)

arbiter said:


> The cpu space needed for both controllers, and given AMD financial state. Its just not worth it to do it. How many people go out and just replace their MB and keep the same cpu (not counting MB failure)? The time, effort, money, and what would be mostly no boost in performance less you use APU graphic's but might well use that money and get a dedicated gpu instead. Cost of the board + new ram it would be better bang for $ to just buy the current gpu in the price range of what you would spend.



I agree with you. But then is using more than a third of the chip on HSA fabric worth it? At a time when AMD's x86 cores are forced to compete only in the mid-to-low-end market, I don't think HSA makes sense. AMD should've waited until they could compete at the high end and begun releasing HSA from top to bottom, on chips with high margins first; that's the only place where new, untested secondary technologies make sense.


----------



## Aquinus (Feb 26, 2015)

sergionography said:


> i agree with you. But then is using more than 1/3 of the chip on hsa fabric worth it? At a time when amds x86 cores are forced to compete only in the mid-low end market I don't think hsa makes sence. Amd should've waited till they can compete in the high end and began releasing hsa from top to buttom on chips with high margins first, that's the only place where new untested secondary technologies make sense.


HSA makes sense for APUs because the general-purpose x86 cores and the specialized GPU shaders need to share a memory controller, and as a result need a common interface into it. HSA makes this problem a whole lot easier to deal with because it is the piece that sits between the CPU and GPU cores and the memory controller. The motivation is efficient use of local resources (i.e. cache and the IMC) without wasting too much time coordinating the two. Faster communication between cores is a side-effect; the real focus was ensuring that both components can take maximum advantage of the memory controller without trampling over each other. Without it, the different parts of the APU would waste time competing for shared resources.

Now, leveraging HSA to actually use shared memory between the x86 cores and GPU shaders is an entirely different problem, one that has more to do with software design: applications must be written to take advantage of it, and the workload has to be conducive to such improvements.
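As a loose CPU-side analogy (nothing to do with real HSA APIs; the "cpu_step"/"gpu_step" names are purely illustrative), the difference between sharing one address space and handing off copies looks like this:

```python
# One allocation that both "devices" can see.
buf = bytearray(8)

def cpu_step(view: memoryview) -> None:
    view[0] = 1              # writes land directly in the shared buffer

def gpu_step(view: memoryview) -> int:
    return view[0]           # reads see the other side's write, no copy made

shared = memoryview(buf)
cpu_step(shared)
assert gpu_step(shared) == 1   # zero-copy: both sides touch the same storage

# Copy-based handoff, by contrast: the consumer sees a stale snapshot.
snapshot = bytes(buf)          # copy taken before a later write
buf[0] = 2
assert snapshot[0] == 1        # the copy no longer reflects memory
assert shared[0] == 2          # the shared view does
```

The shared-view case is the behavior HSA's unified memory aims for between CPU cores and GPU shaders; the snapshot case is what discrete setups pay for in copies and coordination.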

All in all, I think AMD made a mistake with CMT and its dedicated shared module components. It was a huge waste of time for the regular market. If AMD had invested those resources into their monolithic x86 core design from AM2+ and AM3, improving the IMC and CPU cache latencies would have done more for performance than module-based multithreading ever could in 95% of the situations most consumers actually care about. Instead of trying to scale vertically, they decided to scale horizontally, and the result has been, in my opinion, lackluster. I just don't see how AMD is going to get out of the hole they've gotten themselves into.


----------



## xfia (Feb 26, 2015)

You make some good points for sure, but HSA is way more efficient for a lot of mobile devices. And imagine if the game consoles had these architectural improvements today: they'd probably be quite a bit nicer, and I think AMD will have something great for console gamers next gen. It should help with lower-cost gaming laptops too, but that is something they will have to sell hard for it to be widely realized.


----------



## newtekie1 (Feb 26, 2015)

arbiter said:


> The cpu space needed for both controllers, and given AMD financial state. Its just not worth it to do it. How many people go out and just replace their MB and keep the same cpu (not counting MB failure)? The time, effort, money, and what would be mostly no boost in performance less you use APU graphic's but might well use that money and get a dedicated gpu instead. Cost of the board + new ram it would be better bang for $ to just buy the current gpu in the price range of what you would spend.



That is why the first generation of CPUs would have only the DDR3 memory controller: get something out in the high-end market to start generating some money. AM4 CPUs would be DDR3-only, AM4+ would have both DDR3 and DDR4, so AM4 users could upgrade their CPU without upgrading their motherboard.


----------



## BiggieShady (Feb 26, 2015)

Aquinus said:


> If AMD invested these resources into their monolithic X86 core design with AM2+ and AM3, improving the IMC and cpu cache latencies would have done more for performance than SMT ever could have for 95% of situations that most consumers actually care about but instead of trying to scale vertically, they decided to scale horizontally and the result has been, in my opinion, lackluster. I just don't see how AMD is going to get out of this hole they've gotten themselves into.


After splurging an insane amount of cash on the ATI acquisition, it seems they were saving on CPU R&D, because scaling out horizontally is easier/cheaper. Given the lackluster results and the fact that they are in huge debt now, they really should have taken out a huge loan back then and put it all into R&D; at least they would have a chance of getting out of the debt today. I'm not sure console chip manufacturing will be enough, because I have a feeling they offered a very attractive price to MS and Sony to seal the deal.


----------



## HumanSmoke (Feb 26, 2015)

sergionography said:


> I don't know what to think of this :/ it's great what amd is doing with this process node but to think they have only increased density yet retained the same die size? Same number of cores? And same number of graphics clusters? Meaning all that real estate is going towards hsa stuff.


Yet Kaveri already has integrated HSA... so it kind of raises the question: how different are they?


----------



## ThE_MaD_ShOt (Feb 26, 2015)

newtekie1 said:


> That is why on the first generation of CPUs you only have the DDR3 memory controller. Get something out in the high end market to start generating some money. AM4 CPUs would be DDR3 only, AM4+ would have DDR3 and DDR4 so AM4 users could upgrade their CPU without upgrading their motherboard.


This is exactly my thinking: get something out now, then worry about DDR4 when it becomes mainstream. AMD has always been a tad late to the party when adopting new memory standards, so why should this time be any different? Release something now using DDR3; then their next release can run either DDR3 or DDR4.


----------



## _Flare (Feb 26, 2015)

Has anyone seen this before?
An FP4 socket with DDR4:


http://www.seair.co.in/fp4-ddr4-export-data.aspx

01-Nov-2014  
Banglore Air Cargo  
84733099    
MOTHERBOARD W/O CPU FP4-DDR4 SODIMM AMDP/N 102-H27202-00


----------



## HumanSmoke (Feb 26, 2015)

_Flare said:


> Has someone seen this before ?
> FP4 Socket with DDR4


Probably a development board for the Excavator-cored Toronto (a server-oriented BGA part), which is specced for DDR4.


----------



## anolesoul (Feb 28, 2015)

As "slow" as AMD has been to PUT anything out for "competition" against Intel, AND to actually support DDR4...I'm NOT holding my breath!


----------



## HumanSmoke (Feb 28, 2015)

anolesoul said:


> As "slow" as AMD has been to PUT anything out for "competition" against Intel, AND to actually support DDR4...I'm NOT holding my breath!


Probably wise. You'll note that the above slide states that AMD's ARM-cored Seattle is a 2014 product, yet it won't launch until at least Q2 2015. Given the fabrication time for an SoC, you would think the A1100 would already be in production, but that might not be the case.


----------



## RealNeil (Mar 1, 2015)

I have enough CPU power to do me for the foreseeable future. (just got a FX-9590 CPU and board that I won, also have a i7-4790K to install)
So I can wait to see what they release, and read reviews on it when it's time.
I really hope that they nail it with something. 
I hope that their next series of GPUs kick ass too! 

I'll wait and see.


----------



## arbiter (Mar 1, 2015)

anolesoul said:


> As "slow" as AMD has been to PUT anything out for "competition" against Intel, AND to actually support DDR4...I'm NOT holding my breath!



DDR4 will help the GPU side of AMD's APUs, but that is it. I doubt AMD will go DDR4, at least for now, as prices are still high and not worth the cost for the performance it would give. Might as well put the extra $ into a dedicated GPU.


----------



## RealNeil (Mar 1, 2015)

I've been told that APUs respond well to faster RAM.


----------



## newtekie1 (Mar 1, 2015)

RealNeil said:


> I've been told that APUs respond well to faster RAM.


Only really on the GPU side of it.  Think of using faster system RAM as akin to overclocking the memory on a graphics card.
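The graphics-card analogy comes down to raw bandwidth arithmetic: theoretical peak is roughly the transfer rate times the bus width. A quick sketch (the speeds and bus widths below are illustrative round numbers, not specific products):

```python
def peak_gb_s(mt_per_s: float, bus_bits: int) -> float:
    """Theoretical peak bandwidth in GB/s (decimal): MT/s x bus width in bytes."""
    return mt_per_s * (bus_bits // 8) / 1000.0

# Dual-channel DDR3-2133 feeding an APU: 128-bit effective bus.
ddr3_dual = peak_gb_s(2133, 128)     # ~34.1 GB/s

# A modest 128-bit GDDR5 card running at 5 GT/s effective.
gddr5_card = peak_gb_s(5000, 128)    # 80.0 GB/s

# Even cheap GDDR5 more than doubles what the iGPU can pull from system RAM,
# which is why faster DIMMs mostly help the GPU side of an APU.
assert gddr5_card > 2 * ddr3_dual
```

So faster system RAM narrows the gap a little, but the iGPU stays bandwidth-bound relative to even a low-end discrete card.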


----------



## refillable (Mar 2, 2015)

These slides are almost always very interesting, but that doesn't mean they will be as interesting as the processors/benchmarks themselves. Let's all just wait.


----------



## hairyfeet (Mar 2, 2015)

Jorge said:


> ...and as documented by the past three Intel node reductions, no tangible performance is gained with die shrinks below ~32Nm. The die shrink is primarily for lower power consumption, increased transistor density and lower unit costs. That's why AMD is in no hurry to rush to smaller node sizes.
> 
> AMD is doing a very good job on their APU designs. Carrizo should make a lot of people very happy. AMD is scheduled to officially release these chippies in Q2 of '15. They are actually in production now.
> 
> ...



Except they still have a VERY large number of sales to 6- and 8-core AM3+ budget gamers, and NONE of them are gonna downgrade to a 4-core APU.

My 6-year-old Phenom II X6 paired with an HD 7790 is gonna kill those 4-core APUs, badly. And if I were building new, as my youngest recently did, you can get an FX-8300 for $115 and pair it with an HD 7790 or R7 260 and SLAUGHTER the most expensive APU in their lineup, thanks to more cores and GDDR5.

So if my choice is a 4-core APU from AMD or a 4-core CPU from Intel, why would I buy AMD? After doing A/V work and gaming on 6 cores, I'll go to 8, even 10, but I won't downgrade to just 4, not when you can get 4 WITH HT from Intel; it just wouldn't make any sense.


----------



## xfia (Mar 2, 2015)

Where is that 16-compute-core APU with a stronghold on 1080p?


----------



## hairyfeet (Mar 2, 2015)

xfia said:


> where is that 16 compute core apu with a strong hold on 1080p



You should probably make it clear you want 16 actual COMPUTE cores, because I think AMD is being really disingenuous with their adverts saying "12 cores" when in reality it's a lousy 4 CPU cores plus 8 MUCH weaker, more specialized (and memory-constrained) GPU cores. I'm sorry, but a GPU unit is NOT a core; I canNOT just run any program I want on it like I can on a CPU, and until I can, I'd argue their ads are seriously deceptive.

Of course, this is why my youngest never even considered their APUs: he already HAD 4 Phenom II cores and had just gotten an HD 7790, so there would have been absolutely no point to any chip in their APU lineup... and that is the problem. There are still plenty of tasks where more REAL CPU cores can be a benefit, but if you have even the lowest-end GDDR5 card (which is currently a whopping $60), then nothing in their APU lineup makes ANY sense at all.

This is why I'm seriously hoping they either come out with new AM3+ chips or come out with 8-12 real-CPU-core AM4 chips soon, because right now I have a sinking feeling they are just gonna throw in the towel WRT all the budget gamers out there and strictly build APU granny boxes. If that is the case? Stick a fork in them, they are as good as dead, because it's all the hexa- and octo-core AM3+ guys who are keeping the fanbase alive and recommending their chips to friends. Even with no new chips in this long, there still isn't anything on the Intel side that can compete with 8 real cores for multithreading until you spend over double the price. But if the ONLY choices we have are a 4-core APU from AMD, stuck with a GPU that will never EVER get used if you have even a $60 card, versus a 4-core from Intel with higher single-core performance AND the option of getting a 4-core with HT down the line when the price drops... why would I stick with AMD when my hexacore gets too long in the tooth?


----------



## xfia (Mar 2, 2015)

I meant 8 CPU and 8 GPU. They won't release anything new for AM3+, but 8 Excavator threads (4 modules) on FM2+ would be nice to see.

The whole compute-core thing makes sense to me, but yeah, it's misunderstood by a lot of people, as is HSA for the most part.

It may seem strange that something more advanced and efficient than other desktop platforms started out on the lower end with no new chipset, but Kaveri was the first generation. APUs by design have the CPU and GPU on the same chip, so there was nothing high-end to start with before further research and refinement. Kaveri is pretty good at 900p, so I would expect these new APUs to be knocking on the 1080p door. Because of the way technology advances, a separate CPU and GPU will remain the better-performing option, but after another generation or two people will be saying 1080p gaming is a job for one of those AMD APUs, so you can save money if you want.

AMD's got plenty of granny boxes coming! Zen... Excavator... Carrizo... PlayStation... Xbox... Wii...


----------



## arbiter (Mar 2, 2015)

hairyfeet said:


> You should probably make it clear you want 16 actual COMPUTE cores, because I think AMD is being really disingenuous with their adverts saying "12 cores" when IRL its a lousy 4 CPU cores and 8 MUCH weaker and more specialized (and also memory constrained) GPU cores. I'm sorry but a GPU unit is NOT a core, I can NOT just run any program I want on it like I can a CPU and until then? I'd argue their ads are seriously deceptive.



Well, software that can be accelerated by GPU cores runs much faster on them, but there isn't much such software to start with. Most GPU-accelerated software that people would use in that way is pro-end software, and those people will likely spend a lot on hardware to begin with. Look at the slides AMD put out claiming their APU matches an i7 on the mobile side: all the benchmarks they compare with are GPU-accelerated ones, where AMD has a big edge.


----------



## hairyfeet (Mar 2, 2015)

arbiter said:


> Well GPU cores in software that can be accelerated by them runs much faster but that is not much software to start with. Most software that people would use that is GPU accelerated in such a way usually more pro end software and likely those people will spend a lot on hardware to begin with.  Look at the slides AMD put out claiming their apu match's an i7 in mobile side, All those benchmarks the compare with are all GPU accelerated ones which AMD has big edge on.



But you just pointed out the POINTLESSNESS of the whole thing: those running the kind of high-end A/V and specialty software like 3D CAD that can actually USE those GPU cores are NOT gonna be running AMD APUs for such CPU-intensive tasks, not when they can get a MUCH more powerful GPU for MUCH cheaper and pair it with more real CPU cores!

The simple fact is, IMHO, the entire APU concept makes ZERO sense with the exception of mobile. In a laptop, where space is at a premium and power is severely limited? Sure, having the CPU and GPU on one die cuts down cost and power usage. But on a desktop? Even if you get the lowest-end APU you are still getting ripped off. I mean, look at the prices: you can get a dual-core APU with an HD 8300 iGPU for $69, OR you can go to some place like Biiz and pick up an FX-4300 with four REAL cores that will do any task (not just the extremely limited GPU-accelerated ones) for the same money. By the time you figure in the increased cost of the APU over the CPU and the need for much faster RAM (the GPU side of an APU is ALWAYS starved for memory bandwidth), you will simply never come out ahead, as even the lowest-end GPU with dedicated GDDR5 memory (which, as I said, is just $60; the GeForce 610 or 710, IIRC) will just slaughter the thing without effort!

Look, I'm about as hardcore an AMD supporter as they come; I have 6 AMD PCs in my family, going back to my father's Phenom I quad all the way up to my youngest's FX-8300. But if you are not running a laptop? There just isn't a selling point for their APUs, there just isn't. If you build a machine with the least expensive APU and I build the least expensive CPU+GPU, the increased cost of the APU is gonna make it a losing proposition. Just compare the lowest ACTUAL quad-core APU with the same on the AM3+ side, and it's not even funny how lopsided it is: you can get 4 REAL cores AND a GPU for less than the quad-core APU. So no matter how you slice it? It just doesn't make sense.


----------



## arbiter (Mar 2, 2015)

hairyfeet said:


> Look I'm about as hardcore an AMD supporter as they come, I have 6 AMD PCs in my family going back to my father's Phenom I quad all the way up to the FX8300 of the youngest, but if you are not running a laptop? There just isn't a selling point for their APUs, there just isn't. If you build a machine with the least expensive APU and I build the least expensive CPU+GPU the increased cost of the APU is gonna make it a losing proposition,just compare the lowest ACTUAL quad core APU with the same on the AM3+ side and its not even funny how lopsided it is, you can get 4 REAL cores AND a GPU for less than the APU quad so no matter how you slice it? It just doesn't make sense.



I would say if you've got younger kids, an AMD APU would be gold for building each of them their own computer for school stuff and some less graphics-heavy games.


----------



## Aquinus (Mar 2, 2015)

hairyfeet said:


> Even if you get the lowest end APU you are still getting ripped off, I mean look at the prices, you can get a dual core APU with an HD8300 for $69 OR you can go to some place like Biiz and pick up an FX4300 with four REAL cores that will do any task (not just the extremely limited GPU accelerated ones) for the same money.



Ummm, they both use BD modules. A 2-module (4-core) FX CPU is practically identical to a 2-module (4-core) APU. The only difference is that the APU has a GPU onboard and lacks HyperTransport, as the PCI-E lanes are provided by the CPU on APUs (much like Intel's skt115(5/0) CPUs, which also have integrated graphics). The problem is that AM3+ is old. PCI-E 3.0 will saturate HyperTransport too quickly, and the shift to lower-power platforms kind of makes this a must.

You see this statement?


hairyfeet said:


> those running the kind of high end AV and specialty software like 3D CAD that can actually USE those GPU cores are NOT gonna be running AMD APUs for such CPU intensive tasks


I think you're forgetting the simple fact that most users aren't using their computer for this. APUs target the widest audience, not the narrowest, the way 8-core FX or skt2011(-3) CPUs do. If you need a lot of power and you get an APU, shame on you. However, for a workstation, or for doing anything that isn't 3D-related, an APU is a pretty good option for the price.

Also, with respect to the cores bit: more isn't always better, because it depends on the workload. CAD, 3D rendering, OLAP, and web servers love more cores, but that's just the nature of those applications. More often than not, an application won't benefit from more cores even if it's coded to be multi-threaded. So we're seeing quad-core for the most part for this reason: more is useless for most consumers, and companies like to maximize profits.

Before people start going too gung-ho about cores, I'm going to leave this quote of myself from the thread on AMD and their rumored Zen architecture:


Aquinus said:


> Please don't reduce this problem to a statement like that. It's not that most software doesn't use multi-threading properly, because a lot of software does. It's that most situations don't see a speedup simply from using more threads, because the task isn't parallel in nature. Depending on the workload, the speedup could be huge or it could be tiny, but for most applications that react to human intervention, it's a good bet that most of the work is done in a single thread, because tasks that are mostly serial in nature will only run slower when you attempt to divvy them up, and the amount of speedup is proportional to the amount of the application that can actually be run in parallel.
> 
> So please be careful with that statement, because a lot of applications aren't conducive to being accelerated by more threads, and running parts of the application in parallel depends on the workload itself. You can't efficiently run tasks in parallel if each task relies on output from the previous one. *One doesn't simply make an application multi-threaded*.
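What the quote describes is essentially Amdahl's law: the speedup from n cores is capped by the serial fraction of the task. A quick sketch (the workload fractions below are made-up examples):

```python
def amdahl_speedup(parallel_fraction: float, n_cores: int) -> float:
    """Amdahl's law: overall speedup when only part of a task parallelizes."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)

# A mostly-serial task (20% parallel): 8 cores barely help.
assert round(amdahl_speedup(0.2, 8), 2) == 1.21

# A mostly-parallel task (80% parallel): 8 cores still only get ~3.3x.
assert round(amdahl_speedup(0.8, 8), 2) == 3.33
```

Which is why piling on cores (or GPU shaders) only pays off for workloads that are parallel in the first place.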


----------



## hairyfeet (Mar 3, 2015)

Aquinus said:


> Ummm, they both use BD modules. A 2 module (4 core) FX CPU is practically identical to a 2 module (4 core) APU. The only difference is that that the APU has a GPU onboard and lacks HyperTransport as the PCI-E lanes are provided by the CPU on APUs (much like Intel's CPUs, which on skt115(5/0) which also have integrated graphics). The problem is that AM3+ is old. PCI-E 3.0 will saturate HyperTransport too quickly and the shift to lower power platforms kind of makes this a must.
> 
> You see this statement?
> 
> ...



So, in other words, granny boxes, just as I said. Well, if all you want is a granny box, the APU again makes no sense, as most users just watching videos are never gonna notice the difference between that $189 APU and the $129 Intel with built-in HD Graphics, because the Intel is "good enough" for the basic tasks a granny box does.

Like I said, I'm a hardcore AMD fan, but other than mobile? You just aren't gonna find a use case, other than the super-teeny HTPC niche, where having a memory-constrained but powerful GPU is gonna be a benefit. I can get an FX-4200 (with four actual compute cores) for $59, pair it with a $60 GeForce 610 with a GB of GDDR5, and I WILL SLAUGHTER the APU, which at less than $100 has only TWO, count 'em, two, CPU cores.

And the benefit of actual real cores? That one is simple: multitasking! Today's OS does NOT just single-task. You've got processes in the background, program and OS updates being checked, email and chat programs, multiple browser tabs; all of these take CPU power, and pretty much NONE of it can be done on a GPU. A GPU is not, nor will it ever be, a general-purpose computing unit, because its very purpose, rendering 3D objects, just does not translate into being good at the basic math done by, say, a chat program. AMD pretending it will is like saying a Ferrari is a great ride for hauling kids and a camper!

Look, I could show you build after build after build, and in not a single instance will the APU turn out to be the better buy, because you can get more performance for less money by going CPU+GPU; that's a fact. The APU will never be able to compete, because DDR3 is a bad joke compared to GDDR5, so the $60 GPU will just kill the $189 APU, which is gimped for bandwidth. Again, if ALL you want is a granny box, you can run an APU, but why would you? And for gaming? You're just never gonna fix the bandwidth issue short of bolting GDDR5 onto the board, which again would drive the cost above what a simpler CPU+GPU would cost!

If AMD bets the farm on nothing but APUs? Then I'm sorry, but they are toast, and this is from somebody who has been AMD-exclusive since the Athlon 64. Nobody is gonna go from an X4 (or X6 or X8) and be satisfied with a dual-core or even quad-core APU, because it would be a downgrade from what they have, and many games already take advantage of 4 cores. Have you looked at the price of the actual quad-core APUs in their lineup (as opposed to AMD marketing calling a GPU a "compute core")? They just aren't cost-competitive, either with AM3+ or with the Intel side. If you just want a box for browsing and watching 1080p video? The HD Graphics on any sub-$99 Intel will do that job just fine. You want to game? The APU will become the bottleneck so fast it really isn't even funny; a first-gen Bulldozer quad with a $60 GPU will just kill anything in the APU line, regardless of price point.

So sell it to me: why EXACTLY would I want to pay more for less with the AMD APU over the competition, or even over their past CPU offerings? Because from where I sit, it's Bulldozer all over again, a design that can't compete with previous offerings in their own catalog. Heck, if all I want is light gaming, why would I not just buy socket AM1, which is MUCH cheaper and gives me a quad core with a GPU that guys on YouTube are playing Crysis 3 with? Can you name even ONE selling point of their high-end APUs that wouldn't be served better by a (again, much cheaper) CPU+GPU, other than mobile?


----------



## TheGuruStud (May 28, 2015)

hairyfeet said:


> So in other words granny boxes, just as I said. Well if all you want is a granny box there the APU again makes no sense as most users just watching videos are never gonna be able to notice the difference between that $189 APU and the $129 Intel with built in HD graphics because its "good enough" for the basic tasks a granny box does.
> 
> Like I said I'm a hardcore AMD fan but other than mobile? You just aren't gonna find a use case other than the super teeny niche HTPC where having a memory constrained powerful GPU is gonna be a benefit. I can get an FX4200 (with four actual compute cores) for $59, pair that with a $60 GeForce 610 with a GB of GDDR 5 and I WILL SLAUGHTER the APU, which at less than $100 has only TWO, count 'em two, CPU cores.
> 
> ...



You. Are. Not. The. Target. Audience.

Every little Tom, Dick, and Harry gets a PC from a big-box store. Almost all of them want to play little games. Some are 2D, so they don't need much acceleration, but there are a lot who want to play something like The Sims. This is where the APU will shine: it will work perfectly for this sort of gaming and normal desktop use. Mommy doesn't know about buying a GPU, nor is she going to spend more money (and the AMD box was cheaper than the Intel, which is why she bought it).


----------



## $ReaPeR$ (May 29, 2015)

TheGuruStud said:


> You. Are. Not. The. Target. Audience.
> 
> Every little tom, dick and harry gets a PC from a big box store. Almost all of them want to play little games. Some are 2D, so they don't need much acceleration. But, there's a lot that want to play something like Sims. This where the apu will shine. It will work perfectly for this sort of gaming and normal desktop use. Mommy doesn't know about buying a gpu nor is she going to spend more money (and the AMD box was cheaper than intel which is why she bought it).



The truth has been spoken!


----------



## hairyfeet (May 30, 2015)

$ReaPeR$ said:


> The truth has been spoken!



So their business model is suckering people who don't know any better? BTW, in case you missed it, AMD's new CEO said they will NOT compete on price anymore but on PERFORMANCE, so that strategy is already dead. Look up AMD on /. if you want to read more, but since the ONLY advantage AMD has now is cores? I'm looking forward to seeing those 12- and 16-core AM4 boxes myself.


----------



## TheGuruStud (May 30, 2015)

hairyfeet said:


> So their business model is suckering people who don't know any better? BTW in case you missed it the new CEO of AMD said they will NOT compete on price anymore but on PERFORMANCE, so that strategy is already dead. Look up AMD on /. if you want to read more but since the ONLY advantage AMD has now is cores? I'm looking forward to seeing those 12 and 16 core AM4 boxes myself.



Uh... what do you think Intel did for 20-25 years? They spread propaganda through shady and illegal means (plus the antitrust stuff) so people would buy their inferior and more expensive CPUs.

At least in this case the APU IS the better buy for these people.


----------



## $ReaPeR$ (May 30, 2015)

hairyfeet said:


> So their business model is suckering people who don't know any better? BTW in case you missed it the new CEO of AMD said they will NOT compete on price anymore but on PERFORMANCE, so that strategy is already dead. Look up AMD on /. if you want to read more but since the ONLY advantage AMD has now is cores? I'm looking forward to seeing those 12 and 16 core AM4 boxes myself.



Mate, if you can't understand that different people have different needs, there is no point in continuing this argument with you. I have built many systems for different people with different needs, and most of them do not NEED more than an APU. That doesn't mean that if someone asked me to build him/her the best available system I would suggest an APU! I would probably suggest the best available i7 with the best available GPU at the time. I'm just not biased that way. Usually the people who argue Intel vs. AMD / Nvidia vs. AMD are just trying to show off that theirs is bigger and better, so... no point in arguing with fanboys; they don't understand how reality works. Cheers.


----------



## Mussels (May 30, 2015)

My dad got an APU system as a non-gamer, and then he got into World of Tanks. He played for 6 months on minimum settings before throwing in a 5850 1GB, and now plays it on high.


APUs sure as hell have a place, entry-level gaming machines and laptops being the obvious ones.


----------



## RealNeil (May 30, 2015)

I did an APU build for one of my nephews: an A10-7850K APU and an R9 280X OC GPU. He's really happy with the way it games for what we spent on parts.

There is certainly a place for APUs in today's gaming market.


----------

