Friday, February 8th 2019
No AMD Radeon "Navi" Before October: Report
AMD "Navi" is the company's next-generation graphics architecture succeeding "Vega" and will leverage the 7 nm silicon fabrication process. It was originally slated to launch mid-2019, with probable unveiling on the sidelines of Computex (early-June). Cowcotland reports that AMD has delayed its plans to launch "Navi" all the way to October (Q4-2019). The delay probably has something to do with AMD's 7 nm foundry allocation for the year.
AMD is now fully reliant on TSMC to execute its 7 nm product roadmap, which includes its entire 2nd generation EPYC and 3rd generation Ryzen processors based on the "Zen 2" architecture, and to a smaller extent, GPUs based on its 2nd generation "Vega" architecture, such as the recently launched Radeon VII. We expect the first "Navi" discrete GPU to be a lean, fast-moving product that succeeds "Polaris 30." In addition to 7 nm, it could incorporate faster SIMD units, higher clock-speeds, and a relatively cost-effective memory solution, such as GDDR6.
Source:
Cowcotland
135 Comments on No AMD Radeon "Navi" Before October: Report
This will of course mean they free up some capacity for making other stuff, like more Vega 20 or Zen, at least for a few months. AMD uses TSMC's 7 nm HPC node; Apple and all the other mobile chip makers use the low-power node, which is a related but separate node. Yes, Polaris, Vega and Navi (which I believe are named after stars) were announced as incremental changes to GCN. Navi will also be a monolithic GPU.
What comes after Navi might use "Super SIMD", MCM and other technologies AMD is developing. It's fairly difficult to scale up a GPU, which is why all modern GPUs are made as one large design and then cut down.
But as you are saying, Navi is intended to replace their current lineup with more efficient alternatives, not compete with RTX 2080 Ti and whatever Nvidia launches next.
But people see a green bar on a plot and suddenly AMD is in fantastic condition. :)
Call me when they have profit margins like Nvidia or Intel. They're competing in the same market and have similar costs. Where's the profit? And you know all this because Radeon VII is so cheap?
What else will you tell me? That 7nm chips will be more efficient and require less cooling? :p
Yes, in the distant future 7 nm *may* become cheap. But at the moment it still carries a huge premium for R&D and the supply is very limited. And it may stay like that for years.
So on one hand we have a new node that is very useful for smartphone makers, who can easily ask $1000+ for their flagship models despite the CPU being tiny and relatively cheap. They can pay a lot for 7 nm.
On the other hand, you have three companies making consumer CPUs and GPUs who need 7 nm to push performance, because that's what gaming customers demand. Their chips are huge and make up a majority of a PC's cost.
IMO there is just one possible outcome: gaming PC parts will become silly expensive. So if you're irritated by RTX or Intel 9th gen prices, brace yourself... Millions of notebooks weren't enough for AMD to bother making Zen more frugal. So yeah, why make cards that support a few AAA games, indeed... Although in 2019 a few will become a few dozen, and by 2021 most new games should support RTRT (if it catches on). I wonder when AMD will decide it's worth the fuss and whether they'll still be in business. Are you sure you know what rapid packed math stands for? It just means doing two FP16 operations on one FP32 unit - an idea coming straight from compute cards.
So first of all: in the ideal case it gives you 2x performance - that's a far cry from what a purpose-built ASIC can do.
Second: this will work only in specific scenarios and, more importantly, only when you force it explicitly in the code. In other words: game engines would have to be rewritten for AMD.
So both ideologically and practically it's a lot like AVX-512. Sorry mate... Sometimes I understand what you're trying to say and sometimes I don't. This is the latter case. Can you rewrite this sentence? IMO it doesn't need to be in the same chip. You should literally be able to add RTRT or tensor cores on a separate card. It works in the Nvidia world pretty well - it's just a question of latency. But two chips on the same card? That should work perfectly well. The whole point of IF is being able to combine different circuit types.
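For anyone lost in the packed-math argument, here is a minimal sketch of the idea: two FP16 values riding in one 32-bit register, so a single instruction performs two operations. It is written with CUDA's half2 intrinsics purely as an analogue, since AMD's RPM is exposed through its shader compilers rather than a public C API; the kernel and buffer names below are illustrative assumptions, not AMD's implementation.

// Minimal sketch of packed FP16 math, illustrated with CUDA's half2 intrinsics
// (an analogue to AMD's RPM, which has no public C API). Requires compute
// capability 5.3+ for half arithmetic, e.g. nvcc -arch=sm_53 packed_add.cu
#include <cstdio>
#include <cuda_runtime.h>
#include <cuda_fp16.h>

__global__ void packed_add(const float* a, const float* b, float* out, int n)
{
    // Each thread packs a pair of values into one half2 register,
    // so a single __hadd2 performs two FP16 additions at once.
    int i = (blockIdx.x * blockDim.x + threadIdx.x) * 2;
    if (i + 1 < n) {
        __half2 ha = __floats2half2_rn(a[i], a[i + 1]);
        __half2 hb = __floats2half2_rn(b[i], b[i + 1]);
        __half2 hc = __hadd2(ha, hb);        // two FP16 adds, one instruction
        out[i]     = __low2float(hc);
        out[i + 1] = __high2float(hc);
    }
}

int main()
{
    const int n = 8;
    float *a, *b, *out;
    cudaMallocManaged(&a,   n * sizeof(float));
    cudaMallocManaged(&b,   n * sizeof(float));
    cudaMallocManaged(&out, n * sizeof(float));

    for (int i = 0; i < n; ++i) { a[i] = float(i); b[i] = 0.5f; }

    packed_add<<<1, n / 2>>>(a, b, out, n);   // one thread per packed pair
    cudaDeviceSynchronize();

    for (int i = 0; i < n; ++i)
        printf("%.2f ", out[i]);              // expect 0.50 1.50 2.50 ...
    printf("\n");

    cudaFree(a); cudaFree(b); cudaFree(out);
    return 0;
}

The "2x" both posters are arguing about is exactly this: the same register and issue slot process two half-precision values, which is also why it only helps when the code (or the shader compiler) explicitly packs data this way.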
Of course, it's possible for Nvidia to beef things up to get better RTRT performance and still end up with a huge die, but I'm hoping the beating they took recently conveys the message that the market won't put up with their new prices.
Polaris debuted June 29, 2016. Xbox One X debuted November 7, 2017. Navi is expected to debut around July 2019, ahead of a PlayStation 5 launch in holiday 2020. The timelines match. If Navi has problems and it did in fact get bumped back to October or later, PlayStation 5 might be delayed too. That said, Microsoft may have deliberately chosen a holiday release for the Xbox One X to maximize sales/market impact. They may have been ready to launch many months before that, so PlayStation 5 could still make a holiday 2020 launch despite the delay.
The way the custom SoC business works is that requirements are set and a contract signed. AMD spends 6-12 months designing the chip and informs the client what the engineering specs are (power, packaging, etc.) so the client can design the rest of the package. Then AMD starts sampling GPUs and engineering prototypes are manufactured by the client, tested internally, then shipped out to developers to create games for it. AMD refines the process and works towards mass production while the client finalizes everything else and developers polish games. Then at the end, you have about three months of stockpiling inventory of consoles and games alike so there's hopefully enough of everything available to meet market demand.
Desktop cards don't have the software side to worry about so much (other than drivers, which AMD addresses with engineering samples internally) which is why they can debut PC cards well before a console using the same architecture.
I think Microsoft's next console will be mostly DXR-based and on Arcturus. PlayStation 5 is a small step up from Xbox One X, so Microsoft likely isn't going to feel inclined to make a Navi-based console. They're going to want to build up hype for the big DXR push. This also adds credibility to the idea that Navi is GCN 6.0 and Arcturus is something new with tensor cores and the like.
In Nvidia's case it's similar. Vega II is a dud because it sells for $699 with 16 GB of HBM2 & can only match or exceed the 1080 Ti after two(?) years - lest we forget what the competition has now & their prices! It's fashionable to hate on AMD because they always seem to under-perform or exaggerate some of their selling points, but hey, no one remembers that 28-core 5 GHz joke or "FreeSync doesn't even work" from you know who :rolleyes:
And that is why AMD will always be the underdog &/or less profitable, simply because the (bigger) brand name & bluster win all the time - virtually every time these days! AMD could sell next-gen chips beating Intel in virtually every metric except possibly raw clocks, yet Intel will still outsell them 4:1 or 3:1. These are the times we live in & that's our fault!
So this is basically AMD's fault for ignoring demand.
We know which CPUs sell best today. We know how Intel makes money.
AMD should have tried to attack these markets, because clearly that's where the money is.
But AMD decided NOT to enter the profitable niches. They decided to make a very specific type of CPU (basically: to win benchmarks and excel in reviews). And as a result they have to accept a very specific profit margin - which is low. But it is their decision.
They could have made CPUs similar to Intel's. They could have gone after Intel's clients and convinced them with lower prices. They would maintain their 10-15% market share, but with a much higher profit.
Unlike its CPUs, AMD's GPUs are at least doing what they should. It's just that the technology is old and they seem not to have any idea how to improve it. Not true at all.
Big brand has more market share by definition. It has little to do with profitability (at least in electronics).
In a stable market of comparable products, all producers should have similar prices. So once costs are similar, they should have similar profitability. I'm not making this up - that's how economics works. :)
AMD is not making money, so they're doing something wrong. Either the prices are way too low or the costs are too high.
They're not making a fantastic product that everyone wants. They won't increase their market share by a lot.
If AMD keeps making near zero profit, they won't build any reserves and they won't have money to develop new product lines. They'll be OK as long as the market is healthy and there's high demand for what they currently make.
"Are you sure you know what rapid packed math stands for? "
Are you? The limits are not the same on Vega 20 as on Vega 10: it can do rapid packed math down to 8-bit and possibly 4-bit, so you ARE wrong. And it's not like AVX-512, which, as you should know, is usually used for things other than AI.
"So first of all: in ideal situation it gives you 2x performance - that's far cry from what purpose built ASIC can do.
Second: this will work only in specific scenarios and, more importantly, only when you force it explicitly in the code. In other words: game engines would have to be rewritten for AMD."
First, re-read the RPM specs on Vega 20. What, you mean via a new API like DX12 or Vulkan? That's happening; DX11 is finally going the way of 10 and 9, and Nvidia's relic lead on older games is becoming less relevant.
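As a rough illustration of that lower-precision packing, below is a minimal sketch of four INT8 multiply-accumulates folded into one 32-bit instruction. It uses CUDA's __dp4a intrinsic as a stand-in, since the equivalent GCN dot-product operations are exposed at the ISA/compiler level rather than through a public C API; everything here is an illustrative assumption, not Vega 20's actual code path.

// Minimal sketch of packed INT8 math: four 8-bit multiplies accumulated into a
// 32-bit result by a single __dp4a instruction. Requires compute capability 6.1+,
// e.g. nvcc -arch=sm_61 dot4_int8.cu
#include <cstdio>
#include <cuda_runtime.h>

__global__ void dot4_int8(const int* a, const int* b, int* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = __dp4a(a[i], b[i], 0);   // 4 x (int8 * int8) + 0, one instruction
}

int main()
{
    const int n = 1;
    int *a, *b, *out;
    cudaMallocManaged(&a,   n * sizeof(int));
    cudaMallocManaged(&b,   n * sizeof(int));
    cudaMallocManaged(&out, n * sizeof(int));

    // Pack four signed 8-bit values into each 32-bit word: {1,2,3,4} and {5,6,7,8}.
    a[0] = 1 | (2 << 8) | (3 << 16) | (4 << 24);
    b[0] = 5 | (6 << 8) | (7 << 16) | (8 << 24);

    dot4_int8<<<1, n>>>(a, b, out, n);
    cudaDeviceSynchronize();

    printf("dot = %d\n", out[0]);         // 1*5 + 2*6 + 3*7 + 4*8 = 70

    cudaFree(a); cudaFree(b); cudaFree(out);
    return 0;
}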
My point, which you didn't get:
Nvidia previously sold CUDA on its ability to do compute, yet it got dumped outright when they needed to do AI, since it is not as fast as custom, task-specific hardware; so they made tensor and RT cores, which are largely better at those specific tasks.
They presented a personal demonstration to PROSUMERS that GPGPU is not for them and that special circuitry can be significantly better.
Now go figure out why SoftBank sold out.
"IMO it doesn't need to be in the same chip. You should be literally able to add RTRT or tensor cores on a separate card. It works in the Nvidia world pretty well - it's just a question of latency. But 2 chips on the same card? Should work perfectly well. The whole point of IF is being able to combine different circuit types."
Your opinion is that of a user, not an architect. It may seem easy to extend: add an extra side bus to accommodate a new RT chip that you also, hypothetically, just made in the last year, then slap it on a 2.5D interposer that you also just designed in the last year.
But the fact is that's two to three extra chips to design since Nvidia announced RT, plus a redesign of the one you just spent three years designing and validating.
Then you have validation testing to prove correct operation and endurance; now add CE and environmental testing.
You're being silly; it's way too much work and not possible in any way.
And finally, AMD's margin increased, not decreased, this year. THEY ARE MAKING A PROFIT ON EVERYTHING THEY SELL; they are not a charity.
The thing is that it's easy to do well when money is rolling in and the future looks bright. It's a lot harder to do well under adversity and AMD was under serious adversity back then and yet they still brought Ryzen to market. They have my respect for that. Having owned my own business I understand what they went through.
Today they have regained some CPU market share. They have also regained GPU market share (largely due to the mining craze though). Their stock is up to $23 a share and their future is looking pretty solid as long as they continue to make good decisions which I think they will do under Lisa Su's leadership.
They're paying back the banks instead of borrowing more money!
Your comment is nothing but speculation at this point.
Are you an engineer?
I don't think so.
Typical of someone with a green eye.
2. Navi is going to be a new architecture, just like Vega is a new architecture and NOT GCN.
3. Nvidia has been using the same architecture since its GTX 400 series. Literally, the latest Turing architecture is technically an iteration and improvement of the GTX 400 architecture. Heck, even that architecture shared much of its design with the GTX 200 series. In essence, the biggest architectural shift Nvidia made was from the GeForce 9000 series to the GTX 200 series, and then another, smaller shift from the 200 series to the 400 series. Ever since the GTX 400 series it's been small iterations and improvements over time.
AMD actually made the unified-shader shift way before Nvidia; they did it in their HD 2000 series GPUs, two years before Nvidia did it. Since then the biggest jump for AMD was with their 4000 series.
They have Vega listed as GCN5.
As for previous iterations, they weren't major redesigns; AMD themselves called them 1.1, 1.2 and so on. Only when it became painfully clear their architecture was getting long in the tooth (I believe it was with Polaris?) did they go back and rename everything. That's not to say there were no improvements (Vega is clearly faster than, say, a 290X), but I'm sure they weren't as big as AMD wished.
If speculation is correct and this 7 nm Navi is based on GCN, hopefully AMD pulls this off before the real deal is released.
Though there have been conflicting reports claiming 7 nm Navi may be the last GCN-based design, albeit one getting an overhaul thanks to 7 nm.
Other sources state that 7 nm Navi is a completely new GPU design. Only time will tell.
One thing I hope AMD does not do is rebrand its GPUs again. This is probably one of the worst strategies a company can follow. That was a direct quote from a site that posted this three months ago. The question is: did things change within AMD? Not sure, but seeing how Navi is delayed until October 2019, perhaps that's no longer the case. AMD doesn't have to beat the crap out of Nvidia; they only need to remain competitive in performance and price. It seems AMD puts more focus on its CPU department, because the GPU department has been lacking for years now. AMD GPUs are far from useless, of course; they can play any game you throw at them, it's just that benchmarks don't do them justice and many people rely on benchmarks religiously. Until AMD can launch its brand-spanking-new GPU design, they need to compete on price/performance.