Friday, July 25th 2014
AMD to Drag Socket FM2+ On Till 2016
AMD's desktop processor and APU platforms are not expected to see any major changes till 2016, according to a BitsnChips report. The delay is attributed to a number of factors, including DDR4 memory proliferation (i.e., waiting for DDR4 memory to become affordable enough for the target consumers of APUs), and AMD's so-called "project Fast-Forward," which aims to place high-bandwidth memory next to the APU die, so that AMD's increasingly powerful integrated graphics solutions can overcome memory bottlenecks.
The company's upcoming "Carrizo" APU is targeted at low-TDP devices such as ultra-slim notebooks and tablets, and is the first chip to integrate AMD's next-generation "Excavator" CPU micro-architecture. "Carrizo" chips continue to use DDR3 system memory, and therefore it's possible that AMD may design a socket FM2+ chip based on "Excavator," probably leveraging newer silicon fab processes. But otherwise, socket FM2+ is here to stay.
Sources:
BitsnChips, VR-Zone
54 Comments on AMD to Drag Socket FM2+ On Till 2016
Looks like I'll need to wait till at least 2016 for a replacement for my system if I want to stay with AMD!
I wonder if by then they'll drop the APU tag. Or maybe they'll pick up a new tagline. lol, they could go with TPU (Total Processing Unit), then this site could reap some benefits... or get sued for rights to the tag... heh.
In FM2+: Carrizo with Excavator and HBM. Without HBM, Excavator would have to perform much, much better on the CPU side to be considered an upgrade. Personally I don't see it happening. I don't see why the fourth version of the module architecture would be a bigger step than the last two (Bulldozer-->Piledriver, Piledriver-->Steamroller).
Of course, there is the possibility of new FX processors for FM2+ with more than 2 modules. But that could also mean new motherboards, because 3 modules and a few stream processors could fit within a 100 W ceiling, while 4 or... 6 modules are, I think, a "no go" with only a 100 W limit.
On the AM1 platform: Beema models that will also be compatible with existing Kabini boards. 25 W is more than enough for a 2.8-3 GHz Beema quad core.
I don't expect anything in AM3+ unfortunately.
PS: That guy who thought that Bulldozer, or should I say AMD's version of the Pentium 4, was a good idea... well, I hope he/she works at a McDonald's today serving people hot potatoes. He/she knows much about hot potatoes.
As for no update to the socket till 2016, I don't really see that as an issue. Socket 775 for Intel was one of the best and that sucker went on forever (Pentium 4, Pentium D, Core 2 Duo, Core 2 Quad, and with a slight mod, Xeons).
A longer lifespan for a socket isn't always a bad thing, provided you improve the components that go in and around it.
So no, they didn't do it to "cut corners"; that's just how you feel about it, which is different from why they did it. They did it to save die space so they could cram more cores onto a single CPU.
But having one FPU is not corner cutting, that's just a design flaw IMO. The cost cutting came from simply not doing things by hand that should have been done by hand, and that cost a lot of extra performance for some money savings.
It's really annoying to me that they chose the path they did, as it could have been so much better.
You want to justify a design that failed miserably and brought AMD to its knees; I can't stop you. I can only say that for the Jaguar design, where space is much more limited and power consumption much more important, they didn't choose the module design. Even considering that Kabinis, for example, do have stream processors in them for GPGPU, they still paired each integer unit with a full FPU. That should tell you something.
Integer work is what CPUs are doing most of the time, since memory addresses and strings are represented as integers. More often than not, 4 FPUs will be more than enough for typical floating-point use. Also, you're misunderstanding me if you think I'm saying the CPU doesn't need any FPUs. If you're running an application that has more than 4 FPU-intensive threads, then you really should be considering GPGPU; but most of the time FPU instructions are spread throughout the code rather than all bunched up, so despite there being only 1 FPU per module, it doesn't matter that it's shared, since a thread will just use whichever one is free. You run out of FP performance on FX chips only in unusual situations, which are typically encountered in benchmarks more than in real-world applications.
Loss in performance is much more likely to be caused by the long pipeline that FX CPUs have because of the module design. Failing to predict a branch properly causes a pipeline stall: the pipeline has to be flushed and the next instruction has to go all the way through it again, which hurts performance far more than fewer FPUs do. That was one of the biggest flaws of the first version of Bulldozer, and it has been improved with every revision since; same deal with cache hit/miss ratios.
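If anyone wants to see that effect for themselves, here's a minimal C sketch (my own illustration, nothing AMD-specific): the exact same loop runs noticeably slower when the branch outcome is unpredictable. Compile without aggressive optimization (e.g. -O1), since higher levels may replace the branch with a conditional move or vectorize the loop and hide the effect.

```c
/* Minimal branch-misprediction demo: sum the values >= 128 in an array,
 * first with random (unpredictable) data, then with the same data sorted
 * so the branch becomes predictable. CPU time is measured with clock(). */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 10000000L

static long sum_if_big(const int *data, long n)
{
    long sum = 0;
    for (long i = 0; i < n; i++)
        if (data[i] >= 128)   /* this is the branch the predictor must guess */
            sum += data[i];
    return sum;
}

static int cmp_int(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;
}

int main(void)
{
    int *data = malloc(N * sizeof *data);
    if (!data)
        return 1;
    for (long i = 0; i < N; i++)
        data[i] = rand() % 256;   /* random values: branch taken ~50% of the time */

    clock_t t0 = clock();
    long s1 = sum_if_big(data, N);
    double t_unsorted = (double)(clock() - t0) / CLOCKS_PER_SEC;

    qsort(data, N, sizeof *data, cmp_int);   /* sorted: branch becomes predictable */

    t0 = clock();
    long s2 = sum_if_big(data, N);
    double t_sorted = (double)(clock() - t0) / CLOCKS_PER_SEC;

    printf("unsorted: %.3fs   sorted: %.3fs   (sums: %ld / %ld)\n",
           t_unsorted, t_sorted, s1, s2);
    free(data);
    return 0;
}
```

The gap between the two runs is the cost of the flushes, and the deeper the pipeline, the more each mispredict costs, which is exactly the point above.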
Kabini is a different animal because it doesn't use modules, or even the Phenom II architecture for that matter. The pipeline is much shorter (shorter than the Phenom II's was, in fact) and is designed for low-power use cases, not performance. The cost of a shorter pipeline is that (initially, at least) it can hinder clock speeds until the components along the pipeline are optimized, like Intel has done over the last 8 years with the Core architecture.
I'm not saying that what AMD did was a good idea. I'm saying that it was ambitious and probably more suitable for businesses than for your typical consumer. It was too early to do this, and they suffered because of it. However, the claims you're making are false; the things you don't like about FX aren't what hinders it. The shared FPU was probably one of the best decisions they made with the architecture. The worst was the length of the pipeline; it's the single biggest reason why AMD can't get as much done per clock cycle as Intel.
Also, HyperThreading typically gives you a maximum improvement of around 30%, and as little as nothing depending on the workload, whereas AMD's modules scale almost linearly in comparison, as real cores do. So Intel might have better single-threaded performance, but AMD CPUs scale better per core and start showing their true colors in multi-threaded workloads.
Also, AMD's and Intel's philosophies with modules and HT are very similar. AMD adds components to run more stuff in parallel, where Intel just uses whatever isn't already in use to gain more performance. As a result, HT performance depends highly on the current CPU load and on which parts of the CPU are idle, whereas with modules you know you'll get roughly the same performance per integer core instead of being highly dependent on what's already being done.
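Just to put rough numbers on that (the ~30% HT figure is from above; the per-module scaling factor is my own assumed value for illustration, not a measured one):

```c
/* Back-of-the-envelope thread-scaling comparison. Assumes HyperThreading
 * adds at most ~30% on top of a physical core (per the figure above) and
 * that the second integer core in an AMD module adds ~80% (an illustrative
 * assumption, not a benchmark result). */
#include <stdio.h>

int main(void)
{
    const double ht_gain     = 0.30;  /* best-case uplift from HT, per the post above */
    const double module_gain = 0.80;  /* assumed uplift from a module's second core   */

    double intel_4c8t = 4 * (1.0 + ht_gain);      /* 4 cores + HT             */
    double amd_4m8c   = 4 * (1.0 + module_gain);  /* 4 modules = 8 int. cores */

    printf("4C/8T with HT:        ~%.1f core-equivalents\n", intel_4c8t);
    printf("4 modules (8 cores):  ~%.1f core-equivalents\n", amd_4m8c);
    return 0;
}
```

Even with a fairly conservative guess for module scaling losses, the module design comes out ahead once all the threads are integer-heavy; that's the scaling difference I'm talking about.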
I did some testing a while back on how much HT and more cores impact 7-Zip performance and came up with this and this. You're overestimating the ability of HT.
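If you want to repeat that kind of test, the quickest way is 7-Zip's built-in benchmark at different thread counts. A rough C wrapper (assuming a 7z binary is installed and on your PATH; the thread counts here are arbitrary):

```c
/* Runs 7-Zip's built-in benchmark ("7z b") at several thread counts so the
 * ratings can be compared. Assumes the 7z binary is on PATH. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const int threads[] = {1, 2, 4, 8};
    char cmd[64];

    for (size_t i = 0; i < sizeof threads / sizeof threads[0]; i++) {
        snprintf(cmd, sizeof cmd, "7z b -mmt%d", threads[i]);
        printf("=== %s ===\n", cmd);
        if (system(cmd) != 0)   /* the benchmark prints its own report */
            fprintf(stderr, "failed to run: %s\n", cmd);
    }
    return 0;
}
```

Watch how the ratings change once the thread count passes the physical core count; that's where HT's contribution (or lack of it) shows up.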
Also, what kind of workload are you using to measure performance, and in what language?
But I guess we will just have to wait and see what's behind door number 2, lol.
Personally, I think they should start looking to make DDR4 standard ASAP, because it will benefit APUs so much to have it (though I suppose they could also just start integrating higher-performance DDR3 memory controllers).
Granted, we have reached a plateau with desktops in terms of performance demands, but the server market is hungry for more speed with all the cloud infrastructure going into industries. I think it's a tad short-sighted for AMD not to adopt DDR4 earlier rather than later, at least in the server market.
LGA 775 managed to span DDR, DDR2, and DDR3. Obviously it would be board-specific, but I don't see how the socket type is relevant to what memory can be used (back then the memory controller lived in the chipset, not the CPU).
And the OP kind of implies that AMD are definitely bad for holding on to reality, while Intel by comparison keep swapping sockets and chipsets merely to keep people from having more than a few years of upgrade path.
IMHO, PCIe 3.0 is not utilised 100% by 99% of those that have it, and DDR4 is simply too expensive at this time, so I welcome the common-sense approach of "no, we won't swap sockets just to drum up chipset sales."
Granted, there's no technical reason why AMD can't release CPUs that support both DDR3 and DDR4 at the same time... but there are plenty of good financial reasons why two memory controllers on a CPU don't make much sense, especially when you're in AMD's position, targeting your CPUs at the price-conscious.