Tuesday, March 19th 2019
AMD Says Not to Count on Exotic Materials for CPUs in the Next Ten Years, Silicon Is Still Computing's Best Friend
AMD's senior VP of its datacentre group, Forrest Norrod, said at the Rice Oil and Gas HPC conference that while graphene holds incredible promise for the world of computing, it will likely take some ten years before such exotic materials are actually taken advantage of. As Norrod puts it, silicon still has a pretty straightforward - if increasingly complex - path down to 3-nanometer densities. And by his reckoning, at the rate manufacturers are able to keep scaling down their production nodes, the average time between node transitions stands at some four or five years - which puts the jump to 5 nm and then 3 nm roughly ten years out, a span over which Norrod expects the industry to go through two additional node shrinks.
Of course, graphene is being hailed as the best candidate to take over silicon's place at the heart of our more complex, high-performance electronics, due in part to its high conductivity independent of temperature variation and its incredible switching speed - it has been found to operate at terahertz switching frequencies. It's a 2D material, which means implementations will have to take the form of sheets of graphene deposited on some other material.

Then there's also the matter of quantum computing, on which Norrod takes a cautious, measured approach: he expects the technology to flourish within the next 10 to 100 years, which, I think we can all agree, is a pretty safe bet. Even though quantum computing is geared towards specific workloads and wouldn't be able to completely replace "traditional" processing designs and approaches, it's a technology that can be developed side by side with traditional computing (even if that computing is achieved with recourse to exotic materials).
Source:
PCGamesN
34 Comments on AMD Says Not to Count on Exotic Materials for CPUs in the Next Ten Years, Silicon Is Still Computing's Best Friend
For the most part, doesn't software still need to keep up? LOL..programmers, ugh.
Although it might well take more than 10 years. Multithreading performance.
Are they going by TSMC roadmaps?
Please don't count the +'s after that; they're all within the same 14 nm process.
Ignore the monopoly consumer market of that time; look at the Xeon market, where true improvements have been made.
Put the 14 nm Xeon v4 against the 22 nm Xeon v3:
they packed 15-20% more cores into a CPU at the same frequencies and TDP.
That's significant.
As an end user, I couldn't care less if the CPU was built out of sand or iron. And those that actually build CPUs probably know what materials they can count on without advice from AMD.
game companies have started to learn that as well.
The fact that everybody is still exploring alternatives tells us we don't even have a viable candidate for the time being. The properties of the materials are understood pretty well already; what I think we need is engineering breakthroughs that lower the cost of some alternative.
The problem starts when you're using an engine that is not written like that. The engine is out of your control.
There are still few programs written this way. The extra challenge is that many PCs have weaker cooling than their processor would require. The reason these systems run at all is that they never run well-optimized programs. If you create one, many people will complain about suddenly unstable systems: 90+% load on the CPU will overheat it and the system will crash within ten minutes or so. So you must build in a throttle that lets users limit CPU usage to some percentage. If you don't, many PCs won't be able to run your program.
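The throttle the comment describes is usually implemented as a duty cycle: do a slice of work, then deliberately sleep for the rest of the time slice so average CPU load stays below a cap. Here is a minimal sketch of that idea in Python; the function and parameter names are illustrative, not from any real program:

```python
import time

def run_throttled(work, duty_cycle=0.7, slice_s=0.05):
    """Repeatedly call `work()` while capping average CPU usage.

    Within each `slice_s`-second time slice, spend `duty_cycle` of it
    calling `work()` and sleep for the remainder, so a duty_cycle of
    0.7 targets roughly 70% load on one core. `work` should do a small
    chunk of computation and return True to continue, False to stop.
    """
    busy = slice_s * duty_cycle
    idle = slice_s - busy
    keep_going = True
    while keep_going:
        deadline = time.monotonic() + busy
        # Busy phase: run work chunks until the slice's budget is spent.
        while keep_going and time.monotonic() < deadline:
            keep_going = work()
        if keep_going:
            # Idle phase: yield the CPU so temperatures can recover.
            time.sleep(idle)
```

A real program would expose `duty_cycle` as a user setting (the "xx%" the comment mentions) and apply it per worker thread.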
The thing is, AMD did try to win this battle on technical merit alone, back in AthlonXP/64 days. We all know how that played out. So PR (even as hapless as in this instance) is still a move in the right direction for them. (Fwiw, I think in general their PR is doing a pretty good job.) That much is true. Testing said programs and ensuring they do what you think they do, that's where the pain starts.
If we want to use a brain as a model, we need a hell of a lot more cores, a lot smaller, and far more efficient.