Friday, July 17th 2015
AMD Now Almost Worth A Quarter of What it Paid for ATI
It has been gloomy at the markets in the wake of the European economic crisis. This, along with a revised quarterly outlook released by the company, hit AMD hard over the past week. At the time of writing, AMD stock opened at $1.87, down $0.09 (4.59%), which puts the company's market capitalization at $1.53 billion. That is almost a quarter of what AMD paid to acquire ATI Technologies about a decade ago ($5.60 billion). Earlier this month, AMD's stock took a steep 15.59% fall, seeing its market cap drop by a quarter.
Intel is now worth $140.8 billion (92 times more), and NVIDIA $10.7 billion (7 times more). Among the issues affecting AMD are declining PC sales and stiff competition. However, the reasonably positive earnings put out by Intel undercut AMD's excuse that the market is to blame for its poor performance, and the company could slide even further, hitting an all-time low on the financial markets. The company will host an earnings call later today.
Source:
Google Finance
136 Comments on AMD Now Almost Worth A Quarter of What it Paid for ATI
WHO CARES... As long as they stay afloat with their design team, they will sell their own product, just as ARM does.
In the event of AMD going under (extremely unlikely, since the company could exist - in an extreme situation - as a design house drawing revenue solely from IP), all that would happen is that Intel would be forced to operate under a consent decree to ensure that the company did not take advantage of its position. A consent decree is basically the only thing that kept IBM in check for decades when it could quite easily have squeezed the "seven dwarfs" out of the mainframe market.

It would have been better if AMD themselves had come to this same decision voluntarily, rather than having it forced upon them in a series of painful amputations - although I'm not sure AMD's BoD have the stones to set a long-term goal based upon core competency. AMD have a tendency to follow trends, not set them.

A merger is still an ownership change (as an entity) as far as the agreement is concerned. Section 5.2 of the agreement has the relevant clauses.

You're right about the IP complications - and it isn't just ATI. A large part of ATI's IP originated from 3DLabs - who in turn had acquired Dynamic Pictures, Chromatic Research, and Intergraph's graphics division. All these companies had IP-sharing arrangements in place with other companies that ended up being subsumed by other players (Nvidia's acquisition of SGI's graphics IP, for example) - in addition, Nvidia and ATI had a very non-adversarial relationship once they'd seen off the other graphics players (S3, 3dfx, Matrox, Rendition etc.).

Graphics and parallelization is where it's at (or will be). As far as x86 is concerned, I doubt anyone would actually want it - imagine how far behind a potential "new" player would be. Intel is too entrenched in the high-margin x86 enterprise markets, and the low end is a dogfight for the lowest pricing/power envelope for a limited feature set between x86 and ARM on razor-thin margins.

Basically, what you're saying is that AMD needs management vision and a defined strategic plan that can be augmented by new technologies/markets, without wavering from it at the first sign of something shiny distracting them? Taking the decision making out of AMD's BoD's hands? I fully agree if that is the case, although just bringing in someone with a proven track record of success and an understanding of strategic planning might suffice (Lisa Su's previous work experience doesn't indicate that she is the ONE). Renée James will be looking for a CEO position next year, and would be a great catch for AMD - whether AMD could tempt her is another matter entirely.
If you want to go older, MMX and SSE are basically the backbone of modern multimedia (high-quality music and graphics) being a thing at all on PCs by and large (you could probably do it all using plain old i486 binary-compatible code with how highly-clocked CPUs are now, but the speed boosts from using MMX and SSE are real). Or, if you go even older, the math coprocessor/FPU "extension" is a big part of why 3D games on PCs took off at all: the early id engines behind Wolfenstein 3D and DOOM leaned on fixed-point tricks precisely because FPUs were rare, and Quake was the first big engine to practically require one.
If you want more recent, widely-used stuff, just compare an AES-NI-enabled AES implementation against one written in plain x86 - even with the full complement of MMX, SSE and AVX extensions, software AES is nowhere near AES-NI.
If you want more essential, look at the various virtualization extensions across the whole CPU industry (VT-x/AMD-V, VT-d/IOMMU, and similar features on ARM and POWER) that made "the cloud" a thing at all by making fast x86 virtualization possible in the first place.
So please, do tell me again how instruction set extensions don't add to efficiency, improve performance per watt, or improve outright performance.
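As a minimal sketch of the kind of win being described (my own illustration, not from the post): summing an array of floats with plain scalar x86 code versus SSE2 intrinsics, where the SIMD path handles four floats per add. The function names and test data are made up for the example; build with something like gcc -O2 -msse2.

```c
#include <stdio.h>
#include <emmintrin.h>  /* SSE2 intrinsics */

/* Baseline: one float per loop iteration. */
static float sum_scalar(const float *a, size_t n)
{
    float s = 0.0f;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* SSE2: four floats per loop iteration, then a horizontal reduce. */
static float sum_sse2(const float *a, size_t n)
{
    __m128 acc = _mm_setzero_ps();
    size_t i = 0;
    for (; i + 4 <= n; i += 4)
        acc = _mm_add_ps(acc, _mm_loadu_ps(a + i));

    float lanes[4];
    _mm_storeu_ps(lanes, acc);
    float s = lanes[0] + lanes[1] + lanes[2] + lanes[3];
    for (; i < n; i++)   /* tail elements that didn't fill a vector */
        s += a[i];
    return s;
}

int main(void)
{
    float data[1024];
    for (int i = 0; i < 1024; i++)
        data[i] = 0.5f;
    printf("scalar: %f  sse2: %f\n",
           sum_scalar(data, 1024), sum_sse2(data, 1024));
    return 0;
}
```

The same pattern scales up: AVX widens the vector to eight floats per instruction, and AES-NI goes further by replacing whole table-based AES rounds with single instructions.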
I still maintain that AMD's biggest issue is that they have utterly and completely lost the big x86 server market (Bulldozer and Thuban simply don't scale up, and that has remained unchanged since 2012!) and have essentially zero GPGPU compute presence compared to the sheer number of Tesla deployments in the wild. Combine CUDA being decently well-known and loved by the scientific community with Xeon Phi's x86 nature, and going AMD with only OpenCL can be a hard pill for many to swallow. Plus, AMD GPUs run hot and use a lot of power. Fine for a desktop, a deal-breaker when you're trying to fit a few hundred in a single room and have to pay the bills. Oh, and the Fury X is hopeless as a compute card: the radiator is impossible to fit in most servers.
If, and I really mean IF AMD can become competitive again in the server market, they will see a return to profitability. If they are unable to do so, their future is looking pretty bleak.
The x86 CPU is a red herring argument: if they wanted to, all the console makers could have easily moved to ARM or (more likely) used POWER again (especially when you factor in that Nvidia is one of the founding members of OpenPOWER, together with IBM, Mellanox (high-bandwidth, low-latency networking provider), Google and Tyan (motherboard manufacturer)). And if the hint isn't obvious enough: Nvidia could have made either an ARM- or POWER-based APU for the console people; they declined to do so, citing too-small margins. As for the software side, software will compile for both; more work is spent porting engines to each console's API/ABI than fiddling with the compilers, because compilers are genuinely good enough. Besides, outside of MS, pretty much everyone uses GCC or LLVM/Clang, so support is just as good all around.
Now, onwards to your posts: keeping Nvidia out of consoles is an incredibly minor win for AMD compared to Nvidia completely and utterly dominating AMD in the HPC and server space: $20 per chip shipped vs. $1000s per Tesla card, more in support contracts, and even more in fully built solutions by Nvidia like GRID.
PhysX works partially even on AMD systems, and of the two it is the bigger vendor lock-in risk.
Gameworks on the other hand is a much more traditional closed-source code licensing affair, with no restrictions on running it with non-Nvidia hardware. It runs slow on everything because it's heavy (you know, a bit like Crysis back in 2007... except now it has a pretty marketing name instead of being nothing more than a meme). Why does it run particularly slowly on AMD GPUs? Well, quite simply because AMD GPUs are designed quite differently from Nvidia's. If most games had Gameworks, AMD would simply respond by designing a new GPU that looks a lot more like the Fermi/Kepler/Maxwell evolutionary family than GCN. No more, no less.
Much the same happened with the GeForce 8 series and Radeon HD 2000 when Direct3D 10 changed the rendering pipeline completely: the industry as a whole moved from pixel pipelines to much more general-purpose shader processors instead.
Much the same also happens in the CPU side of things, with how Intel and AMD have vastly different CPU designs that perform vastly differently based on different workloads, the current one being Bulldozer vs Haswell/Broadwell, before that NetBurst vs K8, and even further before that, K6 vs Pentium 2/P6.
Nothing to see here in Gameworks/Physx, so move along and stop bringing it up unless you're ripping apart AMD's driver teams' excuses, in which case, do bring it up as much as possible.
Now, if you say that Gameworks is bad from a conflict-of-interest point of view, then remember that TressFX is also around, as well as various other AMD-centric stuff under AMD Gaming. Besides, Gameworks has always existed, albeit less well-marketed, under the old "The Way It's Meant to be Played" program, but you didn't see people whining about that after the first 3-4 months of being suspicious, and even then, much less loudly than now. As I said before, the CPU architecture is an irrelevant argument. Console makers would have been just as happy with ARM or POWER or even MIPS. Obviously nobody besides AMD found it profitable enough to bother custom-engineering the silicon for the console makers.
Mantle was a push mostly from DICE (Johan Andersson, specifically, which is probably also why he/DICE got the first Fury X ahead of reviewers :)), not from AMD, though AMD was the more responsive company by far, likely because it would make CPUs less of an argument in games. And sure, while Microsoft was happy with D3D11 as it was, with no real plans for major re-architecting in the works, Nvidia, AMD and game devs would keep pushing new features, and MS and Khronos (OpenGL/Vulkan) would oblige by extending the APIs as needed, as they largely have since D3D10/OGL4. Before Johan pushed and AMD noticed, AMD was happy to coast along and extend D3D10-11/OGL4.x just like Nvidia.
Oh, and no, given where it came from, it's blindingly obvious that Mantle was all about getting better 3D performance. Propping up Bulldozer and Jaguar was just an excellent side benefit as far as Johan (DICE) was concerned, but excellent marketing material for AMD if they could make it stick. And try they did, getting a decent amount of support from all corners and, whether intentionally or not, spending a fair bit of time keeping it closed-source despite their open-source claims.
AMD then gave Mantle to Khronos because not only was Mantle's job as a proof of concept done, but they had also finally sanitized the code and manuals enough that it could be both handed over and made open-source. Besides, D3D12 was on the way and Khronos had started NGOGL to look into a lower-level API for OpenGL users - suddenly Mantle was not something AMD could use as a competitive advantage anymore, so they handed it over.
However, it is irrelevant in the grand scheme of things: AMD failed to scale Bulldozer and GCN, partly because basically everyone besides Intel failed to deliver 22/20 nm, but mostly because they made foolish bets. On the CPU side, they tried to build a better, lower-clocked, wider, higher-core-count NetBurst and smacked right into the same problems Intel failed to solve over 4 (FOUR!) whole manufacturing nodes; and on the GPU side, GCN is basically AMD's Fermi - their first truly compute-oriented design - and, rather amusingly, it runs similarly hot.
Still irrelevant in the scheme of consoles by and large, though: all three consoles run modified versions of various existing APIs, all with very low-level access to things; effectively they already had Mantle, or whatever Nvidia would have cooked up had it accepted.
You also never explained why that link you posted on the second page was even relevant to the consoles. I quoted it on page three and asked about that; you never answered. You could probably add post #61 to my other two posts, but never mind.
Next time, try to be polite and not incorrectly accuse others. I read that post of yours; I have NO intention of reading your last post after the way you started it. Have a nice day.
AMD happened to have APUs that fit the bill, but I think an Intel/Nvidia setup could have easily worked. Using a cut-down 4-core Intel CPU and something like a cut-down GTX 960 would have resulted in quite the efficient and powerful console. On paper the consoles seem great with 8 cores, but those cores are horrible, so it would probably be better to go with fewer cores that are a lot more powerful.
AMD getting the contracts for the consoles was not necessarily about making a lot of money, they knew they wouldn't. It was about fattening up their revenue stream to increase their perceived value for a potential buyer.
More than 200% I'd dare wager.
You are basically arguing a CISC-over-RISC design philosophy... and it's been well established that the gains from higher-clocked simple processors are usually greater than from lower-clocked complex ones. All without having to code for anything special.
There is far more to the goal of extensions than performance. It actually has almost nothing to do with that. If anything, it's about catering to a select application... and locking code in.
RISC vs. CISC is entirely irrelevant: no modern high-performance core executes the instructions directly - they decode them into their own internal instructions, called micro-ops, and execute those instead. Combine that with out-of-order execution, simultaneous multi-threading (Hyper-Threading is an implementation of that with two threads per core), branch predictors and the like, and you get way more from extensions than you do from scaling clock speeds.
If you don't believe me, compare Broadwell-M to a 1.3 GHz Pentium 3. The Broadwell-M will kick the crap out of the P3 while running at a lower clock, in single-core mode, just from having better x86 + SSE... and then you turn on AVX and it just flies.
Even RISC cores are not immune to such effects: the ARM ISA has grown from ARMv1 to ARMv8, has had extra instructions added with every new release and, much like x86, specialised instructions as the market demands, since they are much faster than the general-purpose stuff. The difference is that on x86 I have core x86 (binary compatible back to at least the 486) plus a bajillion extensions, while on ARM I have eight versions of ARM, plus whatever custom instructions I add if I tweak the core.
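To make the "core x86 plus a bajillion extensions" point concrete, here is a hedged sketch (my own, not from the thread) of how software typically copes with that: probe the CPU at runtime and dispatch to the fastest code path it supports, falling back to plain baseline x86 otherwise. It uses GCC/Clang's __builtin_cpu_supports with feature names from GCC's documented list; the work_* kernels are hypothetical placeholders.

```c
#include <stdio.h>

/* Hypothetical kernels: each would implement the same operation,
 * compiled for a different instruction-set level. */
static void work_avx2(void)   { puts("running the AVX2 path"); }
static void work_sse2(void)   { puts("running the SSE2 path"); }
static void work_scalar(void) { puts("running plain baseline x86"); }

int main(void)
{
    /* __builtin_cpu_supports (GCC/Clang, x86 targets) checks CPUID
     * feature flags at runtime. */
    if (__builtin_cpu_supports("avx2"))
        work_avx2();
    else if (__builtin_cpu_supports("sse2"))
        work_sse2();
    else
        work_scalar();

    /* The same mechanism is how crypto libraries decide between an
     * AES-NI routine and a table-based software AES fallback. */
    printf("AES-NI available: %s\n",
           __builtin_cpu_supports("aes") ? "yes" : "no");
    return 0;
}
```

The ARM world does the same thing in spirit, just keyed off architecture versions and optional features (NEON, the ARMv8 crypto extensions) rather than a long list of x86 extension flags.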
I'll concede I am not certain about this, as my knowledge comes from the old days when IBM was doing research into this. IBM now only makes extremely high-end servers, and even they have made the PowerPC instruction set pretty beefy. My main point was not that you should never use proprietary extensions at all, but rather that their benefit is limited compared to their role in keeping the instruction set proprietary. Now, there may be SOME benefit in select instances (vector math and AVX are a great example of specialization), but they certainly keep x86 patents from expiring in a useful way, and don't underestimate Intel's evaluation of that.
If AMD had the same mindset, why would they use Intel systems to benchmark their graphics cards for public consumption? Are you saying AMD have an Intel bias? AMD don't know how to get the best out of their own graphics benchmarking? Why use a competitors product to showcase your own, and by inference, indicate that the system used would provide the best results?
So you are basically asking me to believe a random forum member over the company that makes the hardware. So either Eroldru is correct and AMD don't know what they're doing in giving Intel free publicity and torpedoing their own enthusiast platform, or AMD did some comparative benchmarking and went with the system that provided the best numbers. I guess only Eroldru and AMD know for sure...oh, and the reviewers of course:
GTA V CPU performance
Battlefield Hardline CPU performance
Evolve CPU performance
Far Cry 4 CPU performance
Dragon Age: Inquisition CPU performance
As the54thvoid intimated, perception is reality in business - and AMD highlighted a competitor's product over its own in every flagship graphics launch event of the past few years.
Now, what kind of perception does that engender amongst potential customers?
I read the two posts you mentioned specifically at the time. I also read the rest of page 4 many hours earlier, and that's well and truly gone off the stack. If you wanted post #61 included, you should have included it when giving specific examples, since I treated every other post as not part of your response. Welcome to how references work.
As for relevance to consoles, the only relevant bit is how Nvidia decided not to pursue consoles and AMD did. All I did was explain ways in which Nvidia could have provided a competing chip (either by integrating POWER or ARM with GeForce in an SoC, or by using an extremely wide link combined with an existing external CPU, which is where the link to an article about Sierra and Summit is relevant, since, driven by their needs, they built a really wide, high-bandwidth link).
Now, let's have a look at the famed post #61: Nvidia controlling console GPUs would not have resulted in Gameworks and PhysX being everywhere, and even if it had, studios would still have ported their games over anyway - remember, Gameworks works on any GPU, not just Nvidia's, and AMD would have launched a GPU that looked a lot more like Maxwell 2 than GCN. If PhysX became commonplace, AMD and game devs would've found a way to replicate the functionality on non-GeForce platforms; it just hasn't been necessary so far. x86 vs. whatever, as I have explained several times now, is irrelevant: programmers in all industries no longer work in assembly, and will not do so again outside of a bit of hardware init and hand-optimizing certain HPC and crypto programs, though even that is falling out of favour. So now you're coding in C/C++ because that's what the SDKs largely expect for fast games, and, well, C/C++ and most of its various libraries and OSes have been ported to all the major platforms (x86, ARM, POWER, MIPS).
The only, and I mean ONLY, relevant part of x86 being in consoles is that it makes porting to PC marginally easier (especially from XBOne or PS4-OpenGL). The bulk of the effort is still cross-API/ABI compatibility/translation layers/shims, as it has been since the X360/PS3 generation. Based on how Nvidia walked away from all three, I think AMD managed to raise the price of their SoC by being the only viable platform left. Intel doesn't have the GPU power, and neither do Qualcomm (Adreno), ARM (Mali) or Imagination (PowerVR), and then you have the driver state of the latter three... which is just hopeless from what we can see on Android. Based on HumanSmoke's link, AMD is charging $100-110 per unit at a 20% margin (20% of $100-110 is roughly $20-22, hence the $20 number). If AMD were not the only choice, MS and Sony would have pushed for a race to the bottom and dropped that price even lower.
This is pure speculation though, so it's probably wrong, though I suspect the truth isn't that far off based on NV's public statements. I meant that a Broadwell-M at the same frequency as a P3, running identical code, would be faster, and that with the extensions it would be faster still.
IBM's server CPUs are actually the last really high-frequency chips out there. If they could push for even more cores and lower the clocks, they would be outperforming Intel's Haswell-EX platform, but they can't - simply because no fab can make chips bigger than what they're already shipping (each POWER8 core is bigger than a Haswell x86 core), and then you have the monstrous memory config of the POWER8 chips. And that's on 22nm SOI, not the 22nm FinFET CMOS + high-k process Intel is using; SOI allows for the much higher clock speeds, at the cost of absolutely insane power consumption: Tyan's lower-clocked POWER8 CPUs are 190W-247W TDP, IBM's higher-clocked parts go even higher (300W is a pretty conservative estimate by AT), while Intel's E5-2699 v3 and E7-8890 v3 are a "mere" 165W.
Keeping the ISA proprietary, while a bit of a nasty thing to do, is a status quo neither Intel, AMD nor anyone else really wants to change: they get to quash any upstart competition without needing to lift a finger. And if you think AMD is nice about it, think again - Intel sued AMD over AMD64 because AMD did not want to license it, and why would they: Opteron with AMD64 had single-handedly smashed into the datacenter, cleared out MIPS and SPARC, and was well on its way to clearing POWER from everything but the highest of the high-end systems. Meanwhile, Itanium (Intel and HP's 64-bit CPU, probably one of the nicest architectures ever built, with no compiler ever written that used it properly) was floundering hard, and was later killed off by GPUs doing the only thing Itanium ended up being good for: fast, very parallelisable math. Eventually, after a lot of counter-suing, AMD and Intel settled and cross-licensed a lot of stuff, and continue to do so with all the new ISA extensions.
AMD shares are down 36% in the last month.
2015 Q2 Net Profit Margin
Intel 20.51%
Nvidia 11.64%
AMD -19.21%
"Intel finally agrees to pay $15 to Pentium 4 owners", etc. In fact, even if we could go back to 2005, when the Athlon X2 was rolfstomping the room-heater Pentium 4s, AMD would not gain Intel's current ~80% market share, because customers are not all that well informed about real-life performance. People pay $400 for an i7 when its production cost, let alone the innovation since 2011's Sandy Bridge, is minimal. Intel are good at many things, but they are the best at running you dry of money: chipsets, new sockets, tiny updates... call it whatever. From LGA1156 to LGA1151 in ~4.5 years, while LGA775 lasted longer than all of these sockets put together. Even if Zen comes and can compete against Intel's products, we may not get CPU price wars; it might go the other way around, with Zen being overpriced and the i5 becoming the new value-for-money king while still costing over ~$200. Sometimes customers have to say no to overpriced recycled tech with just fancier I/O and one pin added or removed each year.
Why is Intel winning? Well, it's simple: nobody can in good conscience recommend anything from AMD right now on the CPU side. Intel simply has better performance at a near-enough price on the consumer side, and AMD has nothing competitive server-side. Oh, and let's not forget, AMD has never managed to build a decent mobile CPU to compete with Pentium M and Core. You can say what you want, but not showing up means you lose.
As for the constant socket changes, I for one do NOT want a redux of LGA775, where you had to match chipsets to VRMs and BIOSes in order to figure out which CPUs you could run. No thanks, I'll take constant socket changes, where I can blindly drop in any matching CPU/socket combo, over that particular mess.
As for why Intel isn't releasing the big 30% improvements anymore, it's quite simple: they've run out of big improvements, and as a result are scaling core counts and power instead, because that's largely all they can still do. You can read up more if you want in my posts here and here, as well as my conversation with @R-T-B in this very thread a few posts above your own.
Also, a wonderful argument regarding sockets. LGA775 was a gong show. DDR2 and DDR3, FSB 800, 1066 and 1333 all existing on one socket made for a hell of a mess. The current socket strategy makes it much easier for less-knowledgeable consumers to get something that works.
No notebook manufacturer gives a damn.
Even before Carrizo, I'd rather have gone with AMD's APUs than Intel's overpriced CPUs that suck at gaming, but alas, that wasn't possible either.
Both Microsoft and Sony had good reasons to go with AMD's APUs in their current-gen consoles.
So, no, it's not really about the product.