# CHOO CHOOOOO!!!!1! Navi Hype Train be rollin'



## moproblems99 (Mar 28, 2019)

Didn't see this pop up on the forums so thought it should be shared.

https://wccftech.com/amd-navi-20-radeon-rx-graphics-card-ray-tracing-gcn-architecture-rumor/


----------



## Xzibit (Mar 28, 2019)

All cards can support ray tracing; it's the acceleration in consumer hardware that's new.

Turing's RT cores accelerate BVH traversal and triangle intersection. That work is offloaded.


----------



## Mats (Mar 28, 2019)

"Future product made with smaller process node could get faster than current product. Or not."


----------



## IceShroom (Mar 28, 2019)

moproblems99 said:


> Didn't see this pop up on the forums so thought it should be shared.
> 
> https://wccftech.com/amd-navi-20-radeon-rx-graphics-card-ray-tracing-gcn-architecture-rumor/


Don't forget to write: *those are not official words from AMD. A lot of people take a rumor as direct word from AMD and hype it.*


----------



## Tomgang (Mar 28, 2019)

Interesting news. Maybe an alternative to ngreedia's RTX cards, which are way too expensive for their performance, and for last-gen performance as well.

Let's see if the train keeps rolling or stops when AMD officially reveals it.


----------



## XiGMAKiD (Mar 28, 2019)

"Enhanced GCN Graphics Architecture"

I smell something familiar, it smells like... *sniff* Polaris


----------



## dj-electric (Mar 28, 2019)

This just in.

This next gen Graphics card might be faster than the current flagship.... Or not.

Tune in for more at 5.


----------



## Vya Domus (Mar 28, 2019)

Given that the recent CryTek demo was running on a Vega 56, this wouldn't even be impressive, really.


----------



## moproblems99 (Mar 29, 2019)

In the article they claim Navi 20 will exceed 2080 Ti performance. I think the caveat was that they were talking about RTRT.


----------



## xkm1948 (Mar 29, 2019)

I want AMD GPUs to be competitive. However, from their recent track record I really can't see how they can pull off a miracle, at least not while they are still using GCN. A design that leans heavily on simple brute force instead of efficiency simply isn't going to be good enough.


Previously derailed hype train from WCCF:

https://wccftech.com/amd-radeon-rx-vega-performance-gtx-1080-ti-titan-xp/

https://wccftech.com/amd-vega-64-x2/

https://wccftech.com/amd-hbm-fury-x-fastest-world/

https://wccftech.com/amd-unveils-polaris-11-10-gpu/


These kinds of hype posts were never good. I was guilty of starting many of the old AMD hype posts here as well. AMD GPUs are at best good for the mid to low tiers of the market.


I have more hope in Intel's discrete GPU, TBH. Most of the original ATI team is at Intel now. Quoting Kyle's article here:
https://www.hardocp.com/article/2016/05/27/from_ati_to_amd_back_journey_in_futility

The former ATi team, now under Intel's R&D budget, will surely pack some serious punch. If there is anything to be hyped, I'd rather hype Intel Xe than Navi.


----------



## biffzinker (Mar 29, 2019)

xkm1948 said:


> I want AMD GPUs to be competitive. However, from their recent track record I really can't see how they can pull off a miracle, at least not while they are still using GCN. A design that leans heavily on simple brute force instead of efficiency simply isn't going to be good enough.
> 
> 
> Previously derailed hype train from WCCF:
> ...


Rather have three healthy competitors than just two though. It's a win for the consumer.


----------



## btarunr (Mar 29, 2019)

It's a "Polaris 10" successor and will probably beat RTX 2060.


----------



## biffzinker (Mar 29, 2019)

btarunr said:


> It's a "Polaris 10" successor and will probably beat RTX 2060.


Hope not, I just bought an RTX 2060 and it's flying with a little more OC on top of MSI's factory OC.


----------



## Apocalypsee (Mar 29, 2019)

XiGMAKiD said:


> "Enhanced GCN Graphics Architecture"
> 
> I smell something familiar, it smells like... *sniff* Polaris


*another sniff* A hint of Fiji
*more whiffing* Hmm... a bit of Vega, but it's missing something...
Sniffing intensifies

On a serious note, if it's another GCN design still on 4 shader engines, just forget about any large performance improvement. Even some of Vega's 'special sauce' like DSBR and the NGG fast path is either disabled or needs API/game support.


----------



## R-T-B (Mar 29, 2019)

Xzibit said:


> All cards can support ray tracing; it's the acceleration in consumer hardware that's new.



And that's what matters if you want to use it in any meaningful way...

As for hype trains...  Mugabe.  Kill it like his presidency. (kudos to those who get the reference).


----------



## eidairaman1 (Mar 29, 2019)

biffzinker said:


> Rather have three healthy competitors than just two though. It's a win for the consumer.



Intel's main focus is AI; gaming is second or third on their list, if not dead last. Don't follow their hype.


----------



## 64K (Mar 29, 2019)

Honestly it's just unreasonable to expect AMD to be completely competitive with Nvidia or Intel. Take a look at revenue from 2018:

Intel 70.8 billion dollars
Nvidia 11.7 billion dollars
AMD 6.5 billion dollars

Bear in mind that the revenue for AMD includes both their CPU and GPU businesses. How much of the money they are able to spend on R&D I don't know but it's clear they can't spend what they don't have. They are working to get out of debt as well. If they can manage to compete with Nvidia on entry level and midrange then that's all I really expect from them.


----------



## cucker tarlson (Mar 29, 2019)

Still on GCN and relying on 7nm perf/power increases. What could go wrong with that?


----------



## Fouquin (Mar 29, 2019)

I think a lot of people equate GCN with a fixed microarchitecture, when in fact it's also an ISA. A new GCN core does not mean the same compute unit design or arrangement; it means that it uses and supports the GCN ISA. AMD has done an extremely poor job of distinguishing the two, and it's led to the majority of people glossing over any improvements at the microarchitectural level because "it's still just GCN".


----------



## cucker tarlson (Mar 29, 2019)

Well, I'll give Navi a shot, though I can't say I'm on the hype train. They're on 7nm so they can afford smaller dies, and tbh it's now or never for them to push Nvidia off the tracks.


----------



## s3thra (Mar 29, 2019)

This is why I don't like reading sites like Wccftech. It's like reading a schoolboy's fantasy blog. I'd rather just read the real performance results in reviews after release. Until that happens, anyone can speculate anything.


----------



## notb (Mar 29, 2019)

biffzinker said:


> Rather have three healthy competitors than just two though. It's a win for the consumer.


Not sure about a win for the consumer, but surely a loss for AMD. Seriously, they just won't fit in this market.
Intel's computing products will go against Nvidia's, but there's a lot of market to share there. Gaming products, on the other hand, will mostly harm AMD. If Intel manages to secure a console or game-streaming deal, I don't see how the Radeon branch could survive.


eidairaman1 said:


> Intel's main focus is AI, gaming is second or 3rd on their list if not deadlast, don't follow their hype.


Of course it is. So?
Gaming is also second on Nvidia's list, and look where it took them in the PC gaming industry.

Of the big three, AMD is the only company focusing on gaming, and that is what led them to the small market share they have today.
More or less the same number of people played games on PCs/consoles in 2005, when AMD had 40% CPU market share and ATI was briefly above 50%.
Intel and Nvidia invested heavily in other markets (computing, mobile, AI, IoT, cars); that's what provided them with growth potential.


----------



## Ebo (Mar 29, 2019)

I just wanna see what Navi brings to the table when it arrives. No hype, no expectations, just the facts. The rest is just like a fart: it only warms for so long, and sometimes it's wet.


----------



## R0H1T (Mar 29, 2019)

Whoa easy there, let's dial down the graphics a bit


----------



## cucker tarlson (Mar 29, 2019)

A fanboy YT channel is now quoted as a source for a clickbait article; proof people never learn.


----------



## londiste (Mar 29, 2019)

Fouquin said:


> I think a lot of people equate GCN with a fixed microarchitecture, when in fact it's also an ISA. A new GCN core does not mean the same compute unit design or arrangement; it means that it uses and supports the GCN ISA. AMD has done an extremely poor job of distinguishing the two, and it's led to the majority of people glossing over any improvements at the microarchitectural level because "it's still just GCN".


It is a bit of both. ISA literally means Instruction Set Architecture. There are some things at different levels that this does set in stone, but many others can be improved. Whether the things that need improvement are fixed or not is not easy to know.



XiGMAKiD said:


> "Enhanced GCN Graphics Architecture"
> I smell something familiar, it smells like... *sniff* Polaris


Polaris will not work. Navi will have to be based on Vega and hopefully improve upon it. Navi will have RPM (or some other form of 2xFP16), probably some form of variable rate shading, and other bits of new tech. As of Turing, Nvidia is at least at feature parity with Vega. Intel's Gen11 seems to get to the same point as well. AMD has no choice, and I am sure they are well ahead on this.


----------



## moproblems99 (Mar 29, 2019)

Since no one actually bothered to read it, as usual... I did not know there were potentially two chips, as I had only heard rumblings of Navi competing with the 2060. I don't think that is realistic, considering they already have something 'competitive' with the 2070.



> Now the details that are mentioned are broken into two parts, one is for the initial AMD Navi cards that utilize the Navi 10 GPU architecture and the second is for the high-end, enthusiast grade parts that would feature the Navi 20 GPU. According to RedGamingTech, the details were acquired from sources who have been very accurate in the past as per their claim.
> 
> The details say that before Raja Koduri, AMD’s ex-head of Radeon Technologies Group, left the company, one of his major tasks was to fix many of the weaknesses in the GCN architecture. The reason to do this was to let AMD RTG focus on both, producing a next-gen architecture while working on GCN iterations to remain competitive against NVIDIA GeForce and Quadro lineups. Now we have seen that this strategy worked well for AMD in the mainstream market but their flagship products weren’t necessarily the best or to make it simple, king of the hill products that AMD wanted them to be but rather side options to NVIDIA’s enthusiast offerings.
> 
> The reason why Vega didn’t live up to the hype was that when Raja joined RTG, the design of the Vega GPU was very much completed and there was little he could do. The actual goal for Raja was to work on Navi GPUs which would still be based on the existing GCN architecture but further refined through fixes to let’s say, the geometry engine, as reported by RedGamingTech. Now it is possible and very likely that AMD had finished the design for Navi much before Raja left RTG. But what happens to Navi when it goes into the development phase, that’s something we are really close to finding out now as rumors are alleging a launch of the first Navi based Radeon RX cards in mid of 2019.





btarunr said:


> It's a "Polaris 10" successor and will probably beat RTX 2060.



Well, there could be two. So yes, Navi 10 may target the midrange, but they may actually be targeting the higher end as well.

Edit: Also, in case it wasn't obvious, the title is a joke. More to the point, I had just never heard of two Navi chips before.


----------



## Vayra86 (Mar 29, 2019)

We're looking at Navi 20 by 2020, the WCCFtech article says. And by _then_ it should compete with an RTX 2080 Ti, the grossly overpriced and underperforming 'upgrade' to Pascal.

So basically we're once again looking at yesteryear's performance by then, and Nvidia will have comfortably moved to 7nm. It's a 2080 vs. VII all over again in 2020, and that is the best-case scenario. I suppose we should count our blessings and pray this will remain relevant until Intel shows some benchmarks.


----------



## londiste (Mar 29, 2019)

There are likely to be at least two chips in the Navi series, possibly more depending on what range of performance AMD wants to cover. Nvidia already has two at and below the RTX 2060 (TU106, TU116) and is likely to have a third (TU117?) soon.

Architecturally, I do not think AMD is likely to continue having semi-different architectures (like Polaris and Vega). There has been a lot of talk about AMD (and especially RTG) running on a low budget, and GPU architectures are expensive. Even Nvidia primarily uses one architecture at a time, plus maybe a high-end compute chip that simultaneously works as a research vehicle, like V100.

AMD's roadmap, plus rumors from WCCFtech and other sites, currently pins Navi as the swan song of GCN in 2019/2020, with Arcturus as a new architecture after that.


----------



## cdawall (Mar 29, 2019)

biffzinker said:


> Hope not, just bought a RTX 2060 and it's flying with a little more oc on top of MSI's factory oc.



Why do you care? By the time AMD releases it and has a driver that lets it perform better, the NV4060 will be out.


----------



## Deleted member 158293 (Mar 29, 2019)

Whatever Navi turns out to be, it will define what the gaming industry looks like for years to come, from Microsoft to Apple Arcade to Sony to Google Stadia to PC.

Navi needs no hype...


----------



## Vya Domus (Mar 29, 2019)

londiste said:


> Even Nvidia is primarily using one architecture at one time, plus maybe a high-end compute thing that simultaneously works as a research vehicle like V100.



V100 is a distinct standalone product that Nvidia is selling alongside their other parts, and they made that very clear; it's obvious Turing and Volta were designed concurrently. There is no maybe in this: Nvidia without doubt has separate designs/architectures for different markets.


----------



## moproblems99 (Mar 29, 2019)

Vayra86 said:


> We're looking at Navi 20 by 2020, the WCCFtech article says. And by _then_ it should compete with an RTX 2080 Ti, the grossly overpriced and underperforming 'upgrade' to Pascal.
> 
> So basically we're once again looking at yesteryear's performance by then, and Nvidia will have comfortably moved to 7nm. It's a 2080 vs. VII all over again in 2020, and that is the best-case scenario. I suppose we should count our blessings and pray this will remain relevant until Intel shows some benchmarks.



That is true.  Personally, I would be ecstatic if they could hit 2080 Ti performance AND comparable power draw.  If they can hit 2080 Ti performance, it will at least force NV to use full chips in their cards again.  I won't hold my breath, but I think one or the other could be a reality.  In either case, it at least raises my hope that they haven't completely dropped the 'high end' for Navi, in theory.  We'll see how it plays out in practice.


----------



## londiste (Mar 29, 2019)

Vya Domus said:


> V100 is a distinct standalone product that Nvidia is selling alongside their other parts and they made that very clear, it's obvious Turing and Volta were designed concurrently. There is no maybe in this, Nvidia has without doubt separate designs/architectures for different markets.


Turing evolved from Volta. The changes are minor compared to what changed from Pascal to Volta.


----------



## Vayra86 (Mar 29, 2019)

Vya Domus said:


> V100 is a distinct standalone product that Nvidia is selling alongside their other parts and they made that very clear, it's obvious Turing and Volta were designed concurrently. There is no maybe in this, Nvidia has without doubt separate designs/architectures for different markets.



I would rather say they deploy different iterations of one design for each segment, much like Intel has done with its HEDT releases.


----------



## notb (Mar 29, 2019)

yakk said:


> Whatever Navi will be, Navi will literally define what the gaming industry will be for years to come from Microsoft to Apple Arcade to Sony to Google Stadia to PC.


Yeah... I don't really understand what you wanted to say here.
Game streaming simply means putting game rendering into the cloud, just like we already did with databases, scientific/industrial computing and media.
Even in the most optimistic plans Google and Sony have shown, it'll be just a tiny part of the datacenter market.

Today the GPU-accelerated cloud is dominated by Nvidia, and this is not going to change.

To be honest, I don't know why Google Stadia decided to get GPUs from AMD; I'd imagine they simply were cheaper.
Microsoft and Sony may go with AMD for compatibility with consoles, price still being the more probable reason.

But don't put your hopes too high. It won't be hard for any service to change GPU provider. Each of the companies mentioned must have prepared for this already, in case AMD stops making GPUs.


And, clearly, you don't even know what Apple Arcade is (there was a news piece lately; read it).


----------



## cucker tarlson (Mar 29, 2019)

moproblems99 said:


> That is true.  Personally, I would be ecstatic if they could hit 2080 ti performance AND get power draw comparable.  If they can hit 2080 ti performance it will at least force NV to use full chips in their cards again.  I won't hold my breath but I think one or the other could be a reality.  In either case, it at least elevates my hope that they haven't completely dropped the 'high end' for Navi in theory.  We'll see how it carries out in practice.


AMD-favorable YT channels have been starving lately, and they gotta eat too.


----------



## xkm1948 (Mar 29, 2019)

cucker tarlson said:


> Amd favorable yt channels have been starving lately,and they gotta eat too.



Yeah need more click-bait videos farting out of their ass to get that sweet adsense money from Daddy Google


----------



## Ibotibo01 (Mar 29, 2019)

I believe that RX 660 = GTX 1650, RX 670 = GTX 1660, RX 680 = GTX 1660 Ti. This technology uses 7nm, but most probably it's only equal to Nvidia's 12nm.



Vayra86 said:


> We're looking at Navi 20 by 2020, the WCCFtech article says. And by _then_ it should compete with an RTX 2080 Ti, the grossly overpriced and underperforming 'upgrade' to Pascal.
> 
> So basically we're once again looking at yesteryear's performance by then, and Nvidia will have comfortably moved to 7nm. It's a 2080 vs. VII all over again in 2020, and that is the best-case scenario. I suppose we should count our blessings and pray this will remain relevant until Intel shows some benchmarks.



I agree. Navi 20 will release in 2020-2021. If Nvidia uses 7nm for Ampere, it will be faster than Navi 20. Also, Nvidia doesn't want to use 7nm due to transistor conductivity. Transistors allow a max of 1nm. Maybe it will change in the future, but not now.


----------



## Vya Domus (Mar 29, 2019)

Ibotibo01 said:


> This technology used 7 NM but most probably it's equal with Nvidia's 12NM.



"Nvidia's 12nm" is TSMC's 16nm, an almost three-year-old node by this point. Will TSMC's 7nm be equal to its own 16nm node? How do you people come up with this stuff?


----------



## Steevo (Mar 29, 2019)

I am hopeful that AMD has cleared up their architecture issues. The unacceptably low cache hit rates that also plagued their CPUs have been the issue with GCN and how it handles resources. They keep trying for a one-size-fits-all design when they clearly need two architectures if they want to compete in both compute and graphics, and the overburden they saddled themselves with is what's hurting the most. Get a new architecture for graphics, then one for compute.



Ibotibo01 said:


> I believe that RX 660 = GTX 1650, RX 670 = GTX 1660, RX 680 = GTX 1660 Ti. This technology uses 7nm, but most probably it's only equal to Nvidia's 12nm.
> 
> 
> 
> I agree. Navi 20 will release in 2020-2021. If Nvidia uses 7nm for Ampere, it will be faster than Navi 20. Also, Nvidia doesn't want to use 7nm due to transistor conductivity. Transistors allow a max of 1nm. Maybe it will change in the future, but not now.




Process/node size doesn't matter for "conductivity", as they are still using the same base metals. 7nm allows for a ~25% increase in performance per watt and/or higher frequencies. Neither Nvidia nor AMD has their own fabrication plant, so there is no Nvidia/AMD nm size; it's whatever they get or negotiate with the fabs. We will not see 1nm transistors for many years, and by the time we reach that point we should be using more stacked 3D designs, or some other new tech will emerge to pick up where silicon stops.
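To put that perf/watt figure in context, here is a back-of-the-envelope sketch, taking the ~25% number above at face value (it is a rough foundry marketing figure, not a measurement):

```python
# Illustrative arithmetic only: what a ~25% perf/watt gain buys,
# taking the figure quoted above at face value.
def scaled_perf(perf, perf_per_watt_gain=0.25):
    """Same power budget: performance scales with the perf/watt gain."""
    return perf * (1 + perf_per_watt_gain)

def scaled_power(power, perf_per_watt_gain=0.25):
    """Same performance target: power drops by the inverse factor."""
    return power / (1 + perf_per_watt_gain)

# A hypothetical 250 W part moved to the denser node:
print(scaled_perf(100))             # 125.0 -> ~25% more performance at equal power
print(round(scaled_power(250), 1))  # 200.0 -> equal performance at ~200 W
```

In other words, a node shrink alone buys either speed or efficiency from the same design; it does not fix architectural deficits.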


----------



## Ibotibo01 (Mar 29, 2019)

Vya Domus said:


> "Nvidia's 12nm" is TSMC's 16nm, an almost three year old node by this point. Will TSMC's 7nm be equal to it's own 16nm node ? How do you people come up with this stuff?


I spoke for Navi. Also 28NM R9 390X is equal with 14 NM RX 580.



Steevo said:


> We will not see 1Nm transistors for many years


7nm, 5nm, 3nm, 1nm.

*(attached: IBS chart of rising cost per node)*

Source: IBS


----------



## moproblems99 (Mar 29, 2019)

Vya Domus said:


> "Nvidia's 12nm" is TSMC's 16nm, an almost three year old node by this point. Will TSMC's 7nm be equal to it's own 16nm node ? How do you people come up with this stuff?



I think he was referring to performance not lithography.



cucker tarlson said:


> Amd favorable yt channels have been starving lately,and they gotta eat too.



I don't watch YouTube... Really though, the point of this thread was that there hasn't really been any interesting 'news', just press releases. It's fun time.


----------



## Vya Domus (Mar 29, 2019)

Ibotibo01 said:


> I spoke for Navi.



You spoke what exactly ? 



Ibotibo01 said:


> Also 28NM R9 390X is equal with 14 NM RX 580.



Also, the 28nm GTX 980 is equal to the 16nm GTX 1060 or the 14nm RX 580; all of this means... absolutely nothing.


----------



## R-T-B (Mar 29, 2019)

Ibotibo01 said:


> I spoke for Navi. Also 28NM R9 390X is equal with 14 NM RX 580.
> 
> 
> 7NM, 5NM, 3NM, 1NM.
> ...



That graph kinda refutes the idea that 1nm will come any time soon; it's not supporting it, dude.


----------



## Steevo (Mar 29, 2019)

Ibotibo01 said:


> I spoke for Navi. Also 28NM R9 390X is equal with 14 NM RX 580.
> 
> 
> 7NM, 5NM, 3NM, 1NM.
> ...


Irritable Bowel Syndrome or not, do you see the price jump from 7nm to 5nm? The timeline to a 1nm transistor is exponential in both cost and time, based on historical data from the last few node shrinks.


About 28nm being equal to 14nm: correlation is NOT causation. What you are claiming is akin to saying a large SUV is as good as a turbo four-cylinder sports car because they both go the same speed on the highway. AMD's architecture was better at graphics on the 28nm process. For a better comparison, look at what Nvidia is doing on 16nm vs. AMD on 7nm: Nvidia has a superior design, so it performs better, uses less power, and runs cooler. If Nvidia put that design on 7nm it would be at least 25% faster still, and use less power doing it. AMD has been bad at GPU design for a while, aiming for a compute-heavy card with an excess of bandwidth to mask the cache issues that cannot keep the shaders fed; their failure to turn off shaders while data is being fetched, combined with ever-larger caches, means they use more power.


----------



## Vya Domus (Mar 29, 2019)

Steevo said:


> with an excess of bandwidth to mask the cache issues that cannot keep the shaders full, and their lack of turning off shaders while data is beign fetched and increasing the cache sizes means they use more power.



I have talked about this in another thread, and basically this isn't true, not just in AMD's case but for GPU designs in general. Caches are not a critical factor for achieving high performance/utilization, unlike the case with CPUs. You don't even have to believe me; just look at similarly sized dies for GPUs and CPUs and see how much the cache/core/shader ratio differs.

For instance, on GP104 there is theoretically a grand total of 0.8 KB of L2 cache per shader. This is an abysmally small amount, yet these GPUs operate just fine. Cache misses are mostly irrelevant because their latency is hidden by the fact that there are already other instructions scheduled; therefore there is no need for quick, frequent memory access, which would require large, fast caches with very good hit ratios.
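For reference, that per-shader figure follows from GP104's commonly cited specs (2 MB of total L2, 2560 shaders); a quick sanity check:

```python
# Back-of-the-envelope check of the per-shader L2 figure above,
# using GP104's commonly cited specs (2 MB L2, 2560 CUDA cores).
L2_KIB = 2 * 1024   # total L2 in KiB
SHADERS = 2560      # CUDA cores on a full GP104

l2_per_shader = L2_KIB / SHADERS
print(l2_per_shader)  # 0.8 -> 0.8 KiB of L2 per shader
```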

Instead, what you actually need is a lot of memory bandwidth, and AMD designs their GPUs just fine from this point of view; there is literally no other way of doing it. The reason GCN-based cards have had more memory bandwidth and cache than their Nvidia equivalents is that they incidentally also tend to have more ALUs. There is no mystery to any of this; it's all quite simple. I don't know why people have the impression that these guys could make such huge, glaring oversights in their designs. They aren't idiots; they know what they are doing very well.
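The latency-hiding argument can be sketched with a toy Little's-law model; all the numbers below are hypothetical, chosen only for illustration:

```python
# Toy Little's-law model of GPU latency hiding: to keep the ALUs busy
# despite long memory latency, the scheduler needs enough resident
# warps/wavefronts that some warp is always ready to issue.
def warps_needed(mem_latency_cycles, issue_interval_cycles):
    """Concurrency = latency x throughput (Little's law).

    issue_interval_cycles: compute cycles a warp runs between memory requests.
    """
    return -(-mem_latency_cycles // issue_interval_cycles)  # ceiling division

# Hypothetical 400-cycle DRAM latency, one request every 20 compute cycles:
print(warps_needed(400, 20))  # 20 -> 20 resident warps hide the latency
```

As long as enough groups of threads are resident, a cache miss just means the scheduler switches to another ready warp, which is why raw bandwidth matters more than hit rate.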


----------



## Ibotibo01 (Mar 29, 2019)

Vya Domus said:


> You spoke what exactly ?
> 
> 
> 
> Also 28nm GTX 980 is equal with 16nm GTX 1060 or 14nm RX 580, all of this means .... absolutely nothing.


I said for performance. Yes, it is nothing but 28nm vs. 14nm. The RTX 2060 is 12nm, and it performs between the GTX 1070 Ti and GTX 1080.



Steevo said:


> Irritable Bowel Syndrome or not, do you see the price jump from 7 to 5nm? The timeline to see a 1nm transistor is exponential in both cost and time based on historical data from the last few node shrinks.
> 
> 
> About 28nm being equal to 14nm: correlation is NOT causation. What you are claiming is akin to saying a large SUV is as good as a turbo four-cylinder sports car because they both go the same speed on the highway. AMD's architecture was better at graphics on the 28nm process. For a better comparison, look at what Nvidia is doing on 16nm vs. AMD on 7nm: Nvidia has a superior design, so it performs better, uses less power, and runs cooler. If Nvidia put that design on 7nm it would be at least 25% faster still, and use less power doing it. AMD has been bad at GPU design for a while, aiming for a compute-heavy card with an excess of bandwidth to mask the cache issues that cannot keep the shaders fed; their failure to turn off shaders while data is being fetched, combined with ever-larger caches, means they use more power.



I agree. For 1nm to become a reality we would first need a new material to etch onto. 1nm isn't impossible, but at our current rate of development it will take approximately 15-20 years to reach any sort of viability.

Well, what will Nvidia and AMD do in the future? Will they use refresh cards or new cores such as Tensor? What do you think?


----------



## Steevo (Mar 29, 2019)

Vya Domus said:


> I have talked about this in another thread and basically this isn't true, not just in AMD's case but for GPU designs in general. Caches are not a critical factor for achieving high performances/utilization unlike it is the case with CPUs, you don't even have to believe me you only need look at similar sized dies for GPUs and CPUs and see how much the cache/core/shader ratio differs.
> 
> For instance on GP104 theoretically there is a grand total of 0.8 Kb of L2 cache that you can expect per shader, this is an abysmally small amount yet these GPUs operate just fine. Hit misses are mostly irrelevant because their latency is hidden by the fact that there are already instructions scheduled, therefor there is no need for quick frequent access of memory which would require large fast caches with very good hit ratios.
> 
> Instead what you actually need is a lot of memory bandwidth, AMD designs their GPUs just fine from this point of view, there is literally no other way of doing it. The reason GCN based cards have had more memory bandwidth and cache than their Nvidia equivalent is because they incidentally also have more ALUs typically. There is no mystery to any of this, it's all quite simple. I don't know why people have the impression that these guys could make such huge glaring oversights in their designs, they aren't idiots, they know what they are doing very well.




Bulldozer. Hawaii. Sure, better than VIA (CPU) and Intel (GPU), or a kid with a stick, but is that what we do here, compare to failures to feel better?

We can do the math together. GTX 680 (GK104) vs. Tahiti: same everything, except AMD had some 25% more shaders, used 25% more power, and spent roughly 700 million more transistors for equal performance. 0.8 KB is still a lot of information if it can be kept full, but alas, when it CAN'T, you have a shader using power, making heat, and not doing work, and you usually have two choices. One is to improve the cache hit rate, but that takes a lot of tuning and tweaking; the other is to just add more cache to increase the chances the data will be loaded, but that takes more power to run and makes more heat. Can you guess which one AMD has been doing for years? Couple that with the fact that AMD kept their shaders at full precision for all operations while Nvidia used half or partial precision for some of the same calculations (later tests show the effect of forced full precision and the performance decrease: https://www.extremetech.com/gaming/273897-nvidia-gpus-take-a-heavy-hit-with-hdr-enabled), and it all adds up to more efficient use of cache, and thus increased performance.
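Taking up the "do the math together" invitation, the commonly cited launch specs work out roughly as follows (approximate figures, for illustration only):

```python
# Commonly cited launch specs for the two chips (approximate figures).
gk104 = {"shaders": 1536, "transistors_b": 3.54, "tdp_w": 195}   # GTX 680
tahiti = {"shaders": 2048, "transistors_b": 4.31, "tdp_w": 250}  # HD 7970

# How much more of each resource Tahiti carried, in percent:
for key in gk104:
    extra = (tahiti[key] - gk104[key]) / gk104[key] * 100
    print(f"Tahiti: +{extra:.0f}% {key}")
# e.g. shaders come out around +33%, board power around +28%
```

By these numbers the shader-count gap is closer to a third than a quarter, but the overall point, more resources for similar performance, holds.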


----------



## TheoneandonlyMrK (Mar 29, 2019)

Steevo said:


> Bulldozer. Hawaii. Sure, better than VIA (CPU) and Intel (GPU), or a kid with a stick, but is that what we do here, compare to failures to feel better?
> 
> We can do the math together. GTX 680 (GK104) vs. Tahiti: same everything, except AMD had some 25% more shaders, used 25% more power, and spent roughly 700 million more transistors for equal performance. 0.8 KB is still a lot of information if it can be kept full, but alas, when it CAN'T, you have a shader using power, making heat, and not doing work, and you usually have two choices. One is to improve the cache hit rate, but that takes a lot of tuning and tweaking; the other is to just add more cache to increase the chances the data will be loaded, but that takes more power to run and makes more heat. Can you guess which one AMD has been doing for years? Couple that with the fact that AMD kept their shaders at full precision for all operations while Nvidia used half or partial precision for some of the same calculations (later tests show the effect of forced full precision and the performance decrease: https://www.extremetech.com/gaming/273897-nvidia-gpus-take-a-heavy-hit-with-hdr-enabled), and it all adds up to more efficient use of cache, and thus increased performance.


So you say a 680 is better than a 7970? Then prove it; it depends on use case, and there's plenty of evidence that said Nvidia GPU didn't age well.
Depending on use case the 7970 was always better, depending on perspective; I use compute.
But anyway, would it not be better to actually discuss the OP than regurgitate the same arguable points about dead tech?
If AMD does ray tracing on Navi 10, I'll be surprised, tbh.
Navi 20 I expect to have a go; we'll see how that turns out in time.

Oh, and he's right: GPUs are designed for streams of data, not OoO data streams, so the cache isn't used for possible hits, only expected ones, and is quite small in footprint compared to CPU caches.
That's why GPU memory bandwidth matters more to GPUs than system memory bandwidth matters to CPUs; they can't store many instructions and don't have tiered caches like CPUs do to buffer poor memory bandwidth.


----------



## Vya Domus (Mar 29, 2019)

Steevo said:


> 0.8Kb is still a lot of information if it can be kept full, but alas when it CAN'T usually you have two choices as then you have a shader using power, making heat, and not doing work. One is to improve cache hit rate but that takes a lot of tuning and tweaking, or you can just add more cache to increase the chances the data will be loaded, but that takes more power to run and makes more heat.



That's just not how this works. Firstly, a shader that doesn't do work uses little to no power, because of something called power gating. Not that it would matter, because this rarely happens: *GPUs are designed to maximize utilization without the need for big caches/registers and complex caching algorithms*. No one does that, because those things would take up so much more die space that you would have to decrease the overall number of ALUs, and that would nullify whatever advantage they were supposed to bring.



> _A 'shader' is a small program written in GLSL which performs graphics processing, and a 'kernel' is a small program written in OpenCL and doing GPGPU processing.* These processes don't need that many registers, they need to load data from system or graphics memory. This operation comes with significant latency. AMD and Nvidia chose similar approaches to hide this unavoidable latency: the grouping of multiple threads. AMD calls such a group a wavefront, Nvidia calls it a warp.* A group of threads is the most basic unit of scheduling of GPUs implementing this approach to hide latency, the minimum size of the data processed in SIMD fashion, the smallest executable unit of code, and the way to process a single instruction over all of the threads in it at the same time._



Secondly, the concept of improving the hit rate of a GPU cache doesn't even make sense, because there is nothing you can do. You already know that the same sequence of instructions will run thousands of times across multiple CUs, therefore you can schedule the execution in such a way that *you can always have the data ready*, provided you have enough memory bandwidth. And that's what everyone does, including Nvidia.

Here's another hint: AMD calls their execution units *Stream* processors and Nvidia names their cores *Streaming* Multiprocessors. Still don't believe me?

GP100 : 3840 shaders, 4MB L2 cache
Vega 64 : 4096 shaders, 4MB L2 cache

Turns out they aren't that different, are they? GCN's problem isn't cache size or hit rate or anything like that, it's something else: they have a lot more complex logic on chip, whereas Nvidia offloads most of it to software. I am not going to go into details, but that's what uses a lot of power and what makes gaming performance unimpressive. I'll name just one thing: AMD has logic that allows scalar instructions to be executed within each CU, and Nvidia has no such thing. This is a mostly worthless addition that creates even more scheduling overhead as far as graphics workloads go, but it's great for compute.
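The arithmetic behind these figures is easy to check. A quick back-of-the-envelope sketch in Python, using the L2 sizes and shader counts quoted in this thread (spec-sheet values, not measurements); the 400-cycle memory latency is an assumed ballpark to illustrate the latency-hiding point, not a figure from any datasheet:

```python
# L2-cache-per-shader for the chips named in this thread.
chips = {
    "GP104":   {"shaders": 2560, "l2_kib": 2 * 1024},  # GTX 1080
    "GP100":   {"shaders": 3840, "l2_kib": 4 * 1024},
    "Vega 64": {"shaders": 4096, "l2_kib": 4 * 1024},
}

for name, c in chips.items():
    per_shader = c["l2_kib"] / c["shaders"]
    print(f"{name}: {per_shader:.2f} KiB of L2 per shader")

# Toy illustration of latency hiding: if a memory access takes
# ~400 cycles (assumed) and the scheduler can issue one warp's
# instruction per cycle, roughly 400 warps' worth of independent
# work in flight keeps the ALUs busy with no cache help at all.
mem_latency_cycles = 400      # assumed round-trip latency
issue_rate_per_cycle = 1      # warps issued per cycle (assumed)
warps_to_hide = mem_latency_cycles * issue_rate_per_cycle
print(f"warps in flight needed to hide latency: {warps_to_hide}")
```

The per-shader numbers come out nearly identical for GP100 and Vega 64, which is the point being made above: the cache budgets aren't where the two designs diverge.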


----------



## TheoneandonlyMrK (Mar 29, 2019)

Vya Domus said:


> I have talked about this in another thread and basically this isn't true, not just in AMD's case but for GPU designs in general. Caches are not a critical factor for achieving high performance/utilization, unlike with CPUs. You don't even have to believe me, you only need to look at similarly sized dies for GPUs and CPUs and see how much the cache/core/shader ratio differs.
> 
> For instance, on GP104 there is theoretically a grand total of 0.8 KB of L2 cache that you can expect per shader. This is an abysmally small amount, yet these GPUs operate just fine. Cache misses are mostly irrelevant because their latency is hidden by the fact that there are already instructions scheduled, therefore there is no need for the quick, frequent memory access that would require large, fast caches with very good hit ratios.
> 
> Instead what you actually need is a lot of memory bandwidth, and AMD designs their GPUs just fine from this point of view; there is literally no other way of doing it. The reason GCN-based cards have had more memory bandwidth and cache than their Nvidia equivalents is that they typically also have more ALUs. There is no mystery to any of this, it's all quite simple. I don't know why people have the impression that these guys could make such huge glaring oversights in their designs; they aren't idiots, they know what they are doing very well.


It's the massive oversights in people's perspective, not the ASIC designers'; if they make mistakes, heads roll and millions get flushed down the loo.

All because people vastly misunderstand that it wasn't, surprisingly, designed and created JUST for them to play Crysis.
The fact is that for three generations Nvidia have stepped outside that box and given people hardware that's really only good within its shelf life at the SPECIFIC task of 32-bit gaming, proven out by the tanking performance of 680s and 780s in today's games versus the ok/meh performance of AMD's more complete and adaptable Hawaii.

Never mentioned and oft forgotten: AMD GPUs are not just sold by AMD as AMD GPUs, whereas GeForce gaming GPUs do not do other tasks well, are not as useful outside their core use, and are only sold by Nvidia.


----------



## Steevo (Mar 29, 2019)

theoneandonlymrk said:


> So you say a 680 is better than a 7970? Prove it. It depends on use case, and give proof that said Nvidia GPU didn't age well.
> Depending on use case the 7970 was always better; I use compute.
> But anyway, wouldn't it be better to actually discuss the OP than regurgitate the same arguable points about dead tech?
> If AMD do ray tracing on Navi 10 I'll be surprised, tbh.
> ...




https://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_680/27.html

I assume W1zz isn't biased, but the numbers are here in our own reviews, along with transistor counts, power consumption, performance...


----------



## Fouquin (Mar 29, 2019)

londiste said:


> It is a bit of both. ISA literally means Instruction Set Architecture. There are some things on different levels this does set in stone but many others can be improved on. Whether some things that need improvements are fixed or not is not easy to know.



Indeed it is both, that's why I said it was both in my post. Someone else gave a great analogy to put perspective on why sticking with GCN is not actually a bad thing, and they said, "Netburst was also x86." Now think about that. Netburst and Skylake share the x86 ISA and are so vastly different in performance that they are realistically not even comparable to each other. GCN does have some limitations (to be fair, so does x86) but it was designed to scale, and AMD could definitely be on track to release a completely rearranged microarchitecture with Navi.


----------



## TheoneandonlyMrK (Mar 29, 2019)

Steevo said:


> https://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_680/27.html
> 
> I assume W1zz isnt biased but the numbers are here in our own reviews, along with transistor counts, power consumption, performance....


2012 called, it wants its review back. Read my post again; I said in today's games.


----------



## biffzinker (Mar 29, 2019)

Fouquin said:


> Indeed it is both, that's why I said it was both in my post. Someone else gave a great analogy to put perspective on why sticking with GCN is not actually a bad thing, and they said, "Netburst was also x86." Now think about that. Netburst and Skylake share the x86 ISA and are so vastly different in performance that they are realistically not even comparable to each other. GCN does have some limitations (to be fair, so does x86) but it was designed to scale, and AMD could definitely be on track to release a completely rearranged microarchitecture with Navi.


That only works for the x86 ISA because of the front-end decoding logic to the internal RISC ISA. I'm not seeing AMD wanting to add in a decode stage just so they can switch the execution back-end to another microarchitecture.


----------



## RealNeil (Mar 29, 2019)

Just got an MSI GTX-1660Ti Gaming-X for the wife's PC. Guy sold it to me cheap.

I still plan to get an AMD CPU/Board combination when they're released in a few months.
And maybe, if NAVI is all that, I'll sell my two Vega-64 cards and buy a pair of NAVI cards. (only if I'm going to see real improvement over the two Vega-64 cards)


----------



## Vya Domus (Mar 29, 2019)

Fouquin said:


> Indeed it is both, that's why I said it was both in my post. Someone else gave a great analogy to put perspective on why sticking with GCN is not actually a bad thing, and they said, "Netburst was also x86." Now think about that. Netburst and Skylake share the x86 ISA and are so vastly different in performance that they are realistically not even comparable to each other. GCN does have some limitations (to be fair, so does x86) but it was designed to scale, and AMD could definitely be on track to release a completely rearranged microarchitecture with Navi.



ISAs in GPUs are irrelevant for the most part and AMD/Nvidia changes them all the time anyway.


----------



## Steevo (Mar 29, 2019)

theoneandonlymrk said:


> 2012 called, it wants its review back. Read my post again; I said in today's games.




Well, I have a 7970 and know others who have 680s, and clock for clock we are still close. So please don't bring up fine wine or Nvidia gimping; we have already beaten that horse.


----------



## Fouquin (Mar 29, 2019)

biffzinker said:


> I'm not seeing AMD wanting to add in decode



GCN CUs have front-end decoding logic to SIMDs.



Vya Domus said:


> ISAs in GPUs are irrelevant for the most part and AMD/Nvidia changes them all the time anyway.



And both have virtual ISAs that are hardware-agnostic (PTX and HSAIL). It does not matter what changes are made down at the hardware level; the programming model doesn't change. The GCN ISA was designed with exactly that in mind: a configurable architecture that requires no changes in programming.

That is the whole reason I believe they could make more significant changes. Vega already displays some very substantial changes to the hardware, but the arrangement of that hardware remained the same. I believe that they do have the ability to restructure the hardware with Navi.


----------



## Assimilator (Mar 29, 2019)

Since we're all making arbitrary predictions, here's mine: AMD will dump the graphics portion of its company, in exchange for lots of money and a 5-year exclusivity and revenue-sharing agreement with whoever buys said graphics IP. The end result is that AMD will continue to sell and support prosumer GPUs under its brand, but the third party (let's call them "ATI", for no good reason) will sell consumer GPUs under its brand, and revenue from APUs that go into consoles will be split between them.

This allows AMD to concentrate on what they're doing well at now - CPUs - while also giving them a nice cash injection, a guarantee that they have a source of GPUs to put in their chips, and keeps their corporate graphics customers happy. ATI gets investor cash which means they can finally focus on just delivering GPUs.


----------



## TheoneandonlyMrK (Mar 29, 2019)

Steevo said:


> Well, I have a 7970 and know others who have 680s, and clock for clock we are still close. So please don't bring up fine wine or Nvidia gimping; we have already beaten that horse.


You were in here beating the dead horse and still are; you started on GPU cache inefficiency and now you're bringing up fine wine. Again, re-read my first post.
@Assimilator so when every other SoC maker wants ARM or x86 and a GPU core in house, AMD will step the other way after converting to an SoC and design house? Hmm.


----------



## eidairaman1 (Mar 29, 2019)

Just chill, doesn't matter. This is about Navi, despite it being at the subatomic elements of hydrogen chloride...

End the pissing match.

I have my preference as the next guy does.


----------



## xkm1948 (Mar 30, 2019)

Assimilator said:


> Since we're all making arbitrary predictions, here's mine: AMD will dump the graphics portion of its company, in exchange for lots of money and a 5-year exclusivity and revenue-sharing agreement with whoever buys said graphics IP. The end result is that AMD will continue to sell and support prosumer GPUs under its brand, but the third party (let's call them "ATI", for no good reason) will sell consumer GPUs under its brand, and revenue from APUs that go into consoles will be split between them.
> 
> This allows AMD to concentrate on what they're doing well at now - CPUs - while also giving them a nice cash injection, a guarantee that they have a source of GPUs to put in their chips, and keeps their corporate graphics customers happy. ATI gets investor cash which means they can finally focus on just delivering GPUs.




Sell it to, say, Intel? 

I like this hypothesis


----------



## moproblems99 (Mar 30, 2019)

Assimilator said:


> Since we're all making arbitrary predictions, here's mine: AMD will dump the graphics portion of its company, in exchange for lots of money and a 5-year exclusivity and revenue-sharing agreement with whoever buys said graphics IP. The end result is that AMD will continue to sell and support prosumer GPUs under its brand, but the third party (let's call them "ATI", for no good reason) will sell consumer GPUs under its brand, and revenue from APUs that go into consoles will be split between them.
> 
> This allows AMD to concentrate on what they're doing well at now - CPUs - while also giving them a nice cash injection, a guarantee that they have a source of GPUs to put in their chips, and keeps their corporate graphics customers happy. ATI gets investor cash which means they can finally focus on just delivering GPUs.



The whole reason they bought ATI was APUs, and they are (potentially) near parity with Intel on CPUs and have a (temporary, at least) lead in GPUs. I can't see them selling ATI off now that the plan is coming to fruition.


----------



## xkm1948 (Mar 30, 2019)

moproblems99 said:


> The whole reason they bought ATI was APUs, and they are (potentially) near parity with Intel on CPUs and have a (temporary, at least) lead in GPUs. I can't see them selling ATI off now that the plan is coming to fruition.



They (AMD) intended to build Fusion (CPU+GPU), which fell flat on its face. That ship has sailed and there is no going back from it. AMD is better off without consumer-grade GPUs.

A big APU is never gonna work. CPU and GPU are simply too different to incorporate into one single die. Not good for efficiency, and difficult to scale up/down. The APU was, is, and will always remain in the low-performance end of the market.


----------



## biffzinker (Mar 30, 2019)

xkm1948 said:


> Big APU is never gonna work. CPU and GPU are simply too different to incorporate into one single die.


It could work out if not for the memory bandwidth deficiency.


----------



## xkm1948 (Mar 30, 2019)

biffzinker said:


> It could work out if not for the memory bandwidth deficiency.



Yeah, that too. All the data congests on its way down over current DDR4 transfer capacity. Or you make it even more expensive and difficult to scale: put a bunch of HBM2 on there.

Modularity with good efficiency > All-in-One


----------



## eidairaman1 (Mar 30, 2019)

moproblems99 said:


> The whole reason they bought ATI was APUs, and they are (potentially) near parity with Intel on CPUs and have a (temporary, at least) lead in GPUs. I can't see them selling ATI off now that the plan is coming to fruition.



Chipsets were another reason.


----------



## 64K (Mar 30, 2019)

Assimilator said:


> Since we're all making arbitrary predictions, here's mine: AMD will dump the graphics portion of its company, in exchange for lots of money and a 5-year exclusivity and revenue-sharing agreement with whoever buys said graphics IP. The end result is that AMD will continue to sell and support prosumer GPUs under its brand, but the third party (let's call them "ATI", for no good reason) will sell consumer GPUs under its brand, and revenue from APUs that go into consoles will be split between them.
> 
> This allows AMD to concentrate on what they're doing well at now - CPUs - while also giving them a nice cash injection, a guarantee that they have a source of GPUs to put in their chips, and keeps their corporate graphics customers happy. ATI gets investor cash which means they can finally focus on just delivering GPUs.



When they split off their graphics division into RTG I wondered the same thing. Add to that the business articles I have seen saying they make more revenue from CPUs than from GPUs, and the money from selling RTG would probably allow them to get completely out of debt.

But who will buy RTG? Some said Intel would, but they don't need to. They have already lured some of the top management away, and who knows how many of AMD's engineers; they are acquiring the best people from AMD the cheap way. Some said Samsung might, but that turned out to be just a rumor. Whoever considered buying RTG would have to be prepared to take on Nvidia and Intel next year as well. That seems pretty daunting to me.


----------



## Vya Domus (Mar 30, 2019)

RTG isn't going anywhere, not for the next 5 years at the very least. The new consoles are knocking on the door, and that'll be a nice stream of cash coming in for the foreseeable future; after that, who knows. You are all forgetting that Navi in PCs is literally a byproduct of that. And make no mistake: if the time comes and for some reason they decide to sell their GPU division, they are not going to give it away for cheap. GPU manufacturers have slowly disappeared over the years; this has become a pretty exclusive industry.

Regardless, I still don't think it'll ever happen, unless some catastrophic event forces them to, but they seem to have come out of many crises in the past just fine.


----------



## 64K (Mar 30, 2019)

Vya Domus said:


> RTG isn't going anywhere, not for the next 5 years at the very least. The new consoles are knocking on the door, and that'll be a nice stream of cash coming in for the foreseeable future; after that, who knows. You are all forgetting that Navi in PCs is literally a byproduct of that. And make no mistake: if the time comes and for some reason they decide to sell their GPU division, they are not going to give it away for cheap. GPU manufacturers have slowly disappeared over the years; this has become a pretty exclusive industry.
> 
> Regardless, I still don't think it'll ever happen, unless some catastrophic event forces them to, but they seem to have come out of many crises in the past just fine.



I don't think RTG is going to be sold either. I don't see any company that would buy it, considering the competition right now and in the future.

You're right, AMD has faced several crises. Just a few years ago several financial analyst sites were saying AMD would most likely have to file for bankruptcy, but look at them now: back to being profitable and paying off their debt as well. Lisa Su doesn't get enough credit for turning AMD around. She did it by focusing R&D on Ryzen and letting the GPU side somewhat stagnate. Back when AMD was focusing on their GPU business and letting their CPU side stagnate, they were going into the red by hundreds of millions of dollars each year.


----------



## Vya Domus (Mar 30, 2019)

And who is to say that this "focus primarily on one division at a time" approach is not an actual strategy they intend to follow?


----------



## Vario (Mar 30, 2019)

Steevo said:


> Well, I have a 7970 and know others who have 680s, and clock for clock we are still close. So please don't bring up fine wine or Nvidia gimping; we have already beaten that horse.


Both aged fine. I was using a 770 2GB until 2018 and it was fine: 1440p with medium settings and a stable, smooth 60 FPS. I had a 7970 in 2013 and it was indistinguishable from the 770 at the time. The VRAM limit is a valid argument, but I personally don't care much about graphical detail, so turning down textures and filters to stay under the 2GB threshold was not a big deal for me. I agree with Steevo: let the 2012 graphics card battle die, it's been 7 years now!


----------



## Vayra86 (Mar 30, 2019)

theoneandonlymrk said:


> 2012 called, it wants its review back. Read my post again; I said in today's games.



What about them? There is the odd case of better performance thanks to 1GB of extra VRAM, but that is really all she wrote, and it has little to do with the argument, which is that, once again, we are hearing repackaged AMD fine wine here. We know this doesn't really exist. In the larger scheme of things a 680 and a 7970 are equally obsolete, relegated to the budget/low-end performance level.

It needs no discussion that GCN lacks the efficiency it needs to compete. That will only get worse as die space and power budgets are limited. The best architecture is the one that can keep clear of those limits; the moment you touch them on the current node is a sign you are falling behind the curve, and AMD has ignored those signs since 2013. Nvidia offers an architectural update every time they risk having to move up from their cut-down big die. The only time we got the full Titan chip in a consumer card was with the 780 Ti, and only because Hawaii existed.


----------



## Steevo (Mar 30, 2019)

Vayra86 said:


> What about them? There is the odd case of better performance thanks to 1GB of extra VRAM, but that is really all she wrote, and it has little to do with the argument, which is that, once again, we are hearing repackaged AMD fine wine here. We know this doesn't really exist. In the larger scheme of things a 680 and a 7970 are equally obsolete, relegated to the budget/low-end performance level.
> 
> It needs no discussion that GCN lacks the efficiency it needs to compete. That will only get worse as die space and power budgets are limited. The best architecture is the one that can keep clear of those limits; the moment you touch them on the current node is a sign you are falling behind the curve, and AMD has ignored those signs since 2013. Nvidia offers an architectural update every time they risk having to move up from their cut-down big die. The only time we got the full Titan chip in a consumer card was with the 780 Ti, and only because Hawaii existed.




Hawaii was a worse-performing graphics die, but it had potential for compute, based on what AMD saw as the limiting factors of Tahiti. It was also aimed at a node shrink that didn't happen, and it took a mediocre chip and made it blander, with only 17% more performance than a 7970 GHz Edition while using 13% more power.

Adding more stream processors made it worse, again due to poor management of resources, but they worked great at actual streaming loads like compute. Mining, anyone?


----------



## TheoneandonlyMrK (Mar 30, 2019)

Vayra86 said:


> What about them? There is the odd case of better performance thanks to 1GB of extra VRAM, but that is really all she wrote, and it has little to do with the argument, which is that, once again, we are hearing repackaged AMD fine wine here. We know this doesn't really exist. In the larger scheme of things a 680 and a 7970 are equally obsolete, relegated to the budget/low-end performance level.
> 
> It needs no discussion that GCN lacks the efficiency it needs to compete. That will only get worse as die space and power budgets are limited. The best architecture is the one that can keep clear of those limits; the moment you touch them on the current node is a sign you are falling behind the curve, and AMD has ignored those signs since 2013. Nvidia offers an architectural update every time they risk having to move up from their cut-down big die. The only time we got the full Titan chip in a consumer card was with the 780 Ti, and only because Hawaii existed.


It needs no discussion here. He brought that stuff up, I replied; no more off-topic drivel on old tech from me (though you clearly won't stop). Check my reply to him and move on.


Possibly read the OP, both of you.

Personally I think you two are just trolling for an argument; I can't believe how many times you want to argue about the same shit in so many different threads.


----------



## efikkan (Mar 30, 2019)

Amazing, all it takes to get the hype train rolling again is one random guy on YouTube saying AMD's next gen may end up faster than Nvidia's current gen. Did anyone even bother to check the source?

Navi 1x and 2x are the same architecture and will share performance characteristics. The differences are feature sets and core configurations.


----------



## eidairaman1 (Mar 31, 2019)

Just wait till it's out


----------



## xkm1948 (May 5, 2019)

The acclaimed “RTG Knight” AdoredTV himself is starting to trash Navi in his latest video.


Some TL;DW from reddit user u/WinterCharm:


Detailed TL;DW (it's a 30-min video):

The first half of the video discusses the possibility of Navi being good, mainly by talking about the advantage of a new node vs. an old node and theoretical improvements (AMD has made such strides before, for example matching the R9 390 with the RX 580 at lower power and cost). It then discusses the early rumors about Navi and how they were positive, so people's impressions have been positive up until now, despite some nervousness about the delay.

Now, the bad news:

1. Very early samples looked promising, but AMD hit a clockspeed wall that required a re-tape, hence missing the CES launch.
2. February reports said Navi was unable to match Vega 20 clocks.
3. March reports said clock targets were met, but thermals and power were a nightmare.
4. April: a Navi PCB leaked. It could be an engineering PCB, but 2x8 pins means up to 375W of power draw (ayyy GTX 480++) D:
5. Most recently, AdoredTV got a message from a known source saying "disregard faith in Navi. Engineers are frustrated and cannot wait to be done!"

The possible product lineup shown in this table is a "best case scenario" at this point. Expect worse.

RIP Navi. We never even knew you. 

It's quite possible that RTG will be unable to beat the 1660 Ti in perf/watt despite a huge node advantage (7nm vs. 12nm).

Edit: added more detail. Hope people don't mind.





> It's quite possible that RTG will be unable to beat the 1660Ti in perf/watt on a huge node advantage
> 
> 
> Let that sink in.
> ...


----------



## eidairaman1 (May 5, 2019)

xkm1948 said:


> The acclaimed “RTG Knight” AdoredTV himself is starting to trash Navi in his latest video.
> 
> 
> 
> ...




Here's some light on where RTG is.


----------



## FordGT90Concept (May 5, 2019)

xkm1948 said:


> The acclaimed “RTG Knight” AdoredTV himself is starting to trash Navi in his latest video.
> 
> 
> 
> ...


Sadly, it all makes sense.  AMD *really* needs to go back to the drawing board.  Start from scratch.


----------



## eidairaman1 (May 5, 2019)

FordGT90Concept said:


> Sadly, it all makes sense.  AMD *really* needs to go back to the drawing board.  Start from scratch.



Arcturus is the change, but it sounds like it will be an Instinct card.


----------



## FordGT90Concept (May 5, 2019)

33:30 is where the TL;DR is and only Navi 10 is expected any time soon (Q3).  Copied verbatim:


| Card | Chip | CUs | Memory | Performance | TDP | Price |
| --- | --- | --- | --- | --- | --- | --- |
| RX 3080 XT | Navi 10 | 56 | | ~RTX 2070 | 190W | $330 |
| RX 3080 | Navi 10 | 52 | 8GB GDDR6 | Vega 64 + 10% | 175W | $280 |
| RX 3070 XT | Navi 10 | 48 | | Vega 64 | 160W | $250 |

Sounds to me like the first batch of cards performed fantastically, but then they discovered they performed fantastically because something was broken. Now they've fixed that something and it's just incremental GCN, nothing fantastic. Hopefully they learned something from the mistake that can help improve Arcturus, but I won't hold my breath.
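For scale, GCN parts to date have packed 64 stream processors per CU. Assuming Navi keeps that layout (an assumption, since nothing is confirmed), the rumored CU counts translate directly into shader counts:

```python
# Shader counts implied by the rumored CU configs, assuming Navi
# keeps GCN's 64 stream processors per compute unit (unconfirmed).
SP_PER_CU = 64
rumored = {"RX 3080 XT": 56, "RX 3080": 52, "RX 3070 XT": 48}

for card, cus in rumored.items():
    print(f"{card}: {cus} CUs -> {cus * SP_PER_CU} stream processors")
```

That puts the top rumored part at 3584 shaders, i.e. below a Vega 56's 3584... exactly equal, actually, which is why the performance claims above lean on clocks and architecture rather than width.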



eidairaman1 said:


> Arcturus is the change, but it sounds like it will be an Instinct card.


Navi 20 is coming in 2020 (apparently using HBM2) and it will be replacing Radeon VII and debuting as a Radeon Instinct card.  It's not clear what Arcturus is at this point (other than beyond Navi).


The tonal difference in what Lisa Su told investors is very telling, I think.  She not only knows it's worse than expected, but also knows RTG is struggling.  I hope she has a plan that works to make RTG competitive again on the technology front.  With Intel entering the fray soon, AMD can't afford to dilly dally with GCN much longer.


----------



## ShurikN (May 5, 2019)

FordGT90Concept said:


> 33:30 is where the TL;DR is and only Navi 10 is expected any time soon (Q3).  Copied verbatim:
> 
> 
> Card|Chip|CUs|Memory|Performance|TDP|Price
> ...


These three cards are too close together in performance, power, and price to be viable. The middle one is pointless.
On a more positive note, 2070 performance for $330 is pretty nice. NV can always drop prices, but those chips are massive and I doubt they'll sell them at a loss.

Now whether Navi hits those targets or not remains to be seen. The lack of any proper info from AMD is worrisome, and I'm guessing it'll be just another Polaris: good enough but not great. Then again, with the amount of money AMD has been pouring into RTG, or rather the lack thereof, what are we to expect anyway?

Another thing to bear in mind is that the entire Navi lineup will be out in 2020, so I personally wouldn't expect Arcturus (or whatever this next non-GCN arch will be called) before 2021...
In the meantime NV can just shrink Turing to 7nm and continue to rip us off.


----------



## FordGT90Concept (May 5, 2019)

Yeah, Adored said the numbers I copied are likely optimistic.

Navi was funded by Sony, so it should be a better overall architecture than Vega in terms of features and support, but where Vega translated fantastically to 7nm, Navi didn't. The delays are likely meant to improve yields, performance, and leakage.

Whatever is to replace GCN likely started R&D immediately after Raja Koduri left at the end of 2017. New ISAs take about five years to develop, so 2022 or 2023 is the soonest we'll see it. I don't think that's Arcturus, because it's way too soon (remember Arcturus has been delayed because of the Navi delays). GCN's replacement could be what immediately follows Arcturus.


----------



## notb (May 5, 2019)

ShurikN said:


> These 3 cards are too close together in performance, power and price to be viable. The middle one is pointless.


These may be release candidates. That's how you make products: test multiple versions, analyze, research the market, and launch the few that make the most economic sense.
For AMD it's natural to make a "candidate" every 4 CUs.


> On a more positive note, 2070 perf for $330, is pretty nice.


Maybe today, but not in 2020-2021, the period when this card is going to be on offer.


> NV can always drop prices but those chips are massive and I doubt they'll sell them at a loss.


NV sells them at a big margin now, something they can always sacrifice if needed.
And 2020 will be halfway (or more) to another gen with similar performance/price to these virtual Navi chips. Just real.


> Then again with the amount of money AMD has been pouring into RTG, or better yet, lack thereof, what are we to expect anyway.


You're a client. Why would you care how much R&D money Radeon has? It's AMD's business strategy. You should only look at what they sell you.

Basically, many AMD fans say something like this: objectively Radeon GPUs are sh*t, but AMD is small, poor and doesn't give a f*ck, which makes Radeon GPUs great.


> In the meantime NV can just shrink Turing to 7nm and continue to rip us off.


I don't understand how you can complain that RTG lacks an R&D budget and then call Nvidia a rip-off in literally the next paragraph. Where does R&D money come from in your universe?
So what you meant was: NV can shrink Turing and continue to sell a leading product, at a premium for being the only company that actually gives a f*ck (at least until Intel joins the race).


----------



## ShurikN (May 5, 2019)

FordGT90Concept said:


> Yeah, Adored said the numbers I copied are likely optimistic.
> 
> Navi was funded by Sony so it should be a better overall architecture than Vega in terms of features and support but, where Vega translated fantastic to 7nm, Navi didn't.  The delays are likely to improve yields, performance, and leakage.
> 
> What was to replace GCN likely started R&D immediately after Raja Koduri left at the end of 2017. New ISAs take about five years to develop, so 2022 or 2023 is the soonest we'll see it. I don't think that's Arcturus because it's way too soon (remember, Arcturus has been delayed because of Navi delays). GCN's replacement could be what immediately follows Arcturus.


Here's one of my theories. 
We have heard that Navi was supposed to be a light-weight GCN, with all the compute-heavy elements removed in favor of pure gaming performance. PS5 is a gaming console after all, and while it's still old GCN at heart, at least it will not be as inefficient as Vega (when it comes to gaming). 
So what if, while removing/changing all the quirks and features, they somehow screwed up, and now they can't hit clocks as high as V20?

Sony probably doesn't care; Navi inside the PS5 will definitely not clock as high as 1800MHz.
That's why we heard rumors that Navi looks better than expected. Well yeah, for a ~1500MHz chip. But when you try to push it, it fails hard.
Which brings me to this. 
Could we see a damage-control card? Take Polaris, buff it up, shrink it with some minor generational improvements to GCN and call it a day...
I'm probably wrong, but it's something to ponder.


----------



## cucker tarlson (May 5, 2019)

xkm1948 said:


> The well claimed “RTG Knight” AdoredTV himself is starting to trash Navi in his latest video
> 
> 
> 
> ...


he pumped up the expectations so high they went through the roof; he reaps what he sowed.

they did blind tests with Vega, now they'll be doing deaf ones too


----------



## HD64G (May 5, 2019)

Navi cannot be a total failure imho. It just might not be as much more efficient than the last nVidia GPUs as it should be, given the 7nm process. VFM will determine its success or failure as a product. For us enthusiasts it might be a mediocre product though. As customers we need competition to force the companies to lower prices. Let's wait and see.


----------



## Vayra86 (May 5, 2019)

Honestly, why did anyone ever think the first iteration(s) of Navi were going to be a massive performance jump? The arch was never touted as being that; first and foremost it was going to _finally_ get Vega's inefficiency and shitty margins out of the gaming segment. It seems to do that. And only that.

So really, we have no news here, just people who were pulled into wishful-thinking mode and are now kicked back to reality.



HD64G said:


> Navi cannot be a total failure imho. It just might not be as much more efficient than the last nVidia GPUs as it should be, given the 7nm process. VFM will determine its success or failure as a product. For us enthusiasts it might be a mediocre product though. As customers we need competition to force the companies to lower prices. Let's wait and see.



Well... I think a lack of commercial success does count as total failure at most companies that like to sell product. AMD's been looking at that for a looong time now. Radeon VII is just the same. They move units, but they don't make profit. Only when the mining craze was at its peak did they have some worthwhile margins... on _old_ product. The consoles are really their best segment, and it's clear these chips are geared towards that before anything else.


----------



## Fouquin (May 5, 2019)

xkm1948 said:


> 4. April - Navi PCB leaked, could be an engineering PCB, but 2x8-pin = up to 375W (ayyy GTX 480++) power draw D:



Traced for 2x8-pin does not mean it will definitively have 2x8-pin. By that logic, all of nVidia's previous flagship reference boards were 450W cards, since they had 2x8-pin and 1x6-pin on the PCB.


----------



## Vya Domus (May 5, 2019)

ShurikN said:


> Sony probably doesn't care, Navi inside PS5 will definitely not clock as high as 1800MHz
> That's why we heard rumors that Navi looks better than expected. Well yeah, for a ~1500MHz chip. But when you try to push it, it fails hard.



Every architecture hits a power wall, welcome to the world of integrated circuits. Come on people, let's stop pretending this stuff is new.



Fouquin said:


> Traced for 2x8-pin does not mean it will definitively have 2x8-pin.



Yeah but we better spew some sensationalist nonsense while we can because it's cool.


----------



## ShurikN (May 5, 2019)

Vya Domus said:


> Every architecture hits a power wall, welcome to the world of integrated circuits. Come on people, let's stop pretending this stuff is new.


I'm not pretending anything, I simply drew the parallel between V20, which can hit those clocks, and Navi, which apparently can't. Because let's face it, it's all GCN.
Furthermore, shouldn't a simpler arch (Navi), in theory, reach higher clocks than the compute-heavy V20?
But for the sake of argument let's put Vega aside for a moment. Navi is a Polaris successor after all. 
Let's take the RX 580. It hits a wall at around 1600MHz with (arguably) reasonable power draw. Now let's put Navi at maybe 1700MHz. Okayish but nothing mind-blowing, right? But you also went from GloFo 14nm to TSMC 7nm. Plus the high power draw and temps that get mentioned as well. Now that's definitely worrisome. 
So that takes me back to my previous statement: they probably screwed up something while slimming the architecture, OR TSMC's 7nm is not the be-all end-all process we hoped. 
Like I said, this is me thinking out loud.


----------



## notb (May 5, 2019)

HD64G said:


> Navi cannot be a total failure imho.


Why would it be a failure?
AMD holds 20% of dGPU market. You can't call AMD GPUs failed if you praise their CPUs.

It's almost impossible for it to be worse than the last Polaris cards. It'll be OK for gaming, and as chips get bigger and the TSMC node gets better, Navi will get to 4K @ 60fps. Maybe even with hardware RT similar to Nvidia's.

But that's one side of the coin. The other is delivering on expectations.
Literally from the day Vega launched and turned out to be somewhat disappointing, countless AMD fans started saying that it's just an interim step. That Navi will be the "Zen moment" for Radeon. Well... here we are, almost 2 years later. And even before Navi reviews came out, the narrative moved to next-gen Arcturus...


----------



## Vya Domus (May 5, 2019)

ShurikN said:


> Furthermore, shouldn't a simpler arch (Navi), in theory, reach higher clocks than the compute-heavy V20?



No, it shouldn't, and that's the point. There are endless ways to make a simpler architecture scale badly with clocks; it's all about the implementation. You don't know how AMD dealt with that, and neither do I.



ShurikN said:


> Let's take the RX 580. It hits a wall at around 1600MHz with (arguably) reasonable power draw. Now let's put Navi at maybe 1700MHz. Okayish but nothing mind-blowing, right? But you also went from GloFo 14nm to TSMC 7nm. Plus the high power draw and temps that get mentioned as well. Now that's definitely worrisome.



None of that means anything without context; maybe the RX 580/590 replacement draws more power, but maybe it is also notably faster.

I don't understand you people at all. What did you hope Navi would be? A high-performance, low-power, cheap design? If so, get ready to be disappointed for the rest of your life, because you'll never see that, not from AMD nor anyone else. Free lunches in chip design do not exist anymore; you will always trade something for an improved metric somewhere else. Look at Turing, plenty fast and power efficient, right? Yeah, but it's huge and it costs a lot, for Nvidia and for you.

Regardless, good luck with this skewed perception of the industry.


----------



## ShurikN (May 5, 2019)

Vya Domus said:


> I don't understand you people at all. What did you hope Navi would be? A high-performance, low-power, cheap design? If so, get ready to be disappointed for the rest of your life, because you'll never see that, not from AMD nor anyone else. Free lunches in chip design do not exist anymore; you will always trade something for an improved metric somewhere else. Look at Turing, plenty fast and power efficient, right? Yeah, but it's huge and it costs a lot, for Nvidia and for you.
> 
> Regardless, good luck with this skewed perception of the industry.


Why are you acting so triggered? Everything I said was simply for the sake of discussion, not to mention all of the speculation is based on wild rumors. Yet you managed to take everything I said as fact...


----------



## vega22 (May 5, 2019)

Why are people getting triggered by rumours about pre-production chips?

Let's look at what we know for sure. It's GCN hardware, so it's going to be better suited to compute than gaming. Smaller node, so more heat in a smaller area. AMD doesn't have the same efficiency, so they will need 300W to match NV's 200W cards.

If they price it right they will still sell like hot cakes imo. People who cry about power draw and still use desktop parts are hypocrites.


----------



## Vayra86 (May 5, 2019)

vega22 said:


> People who cry about power draw



People don't really cry about power draw. They cry about perf/watt. This is true everywhere, and especially on mobile with its cooling/power restrictions. As we reach the end of a node, power draw becomes a crucial part of the equation; it's exactly what AMD has been struggling with for the last 5-7 years.


----------



## xkm1948 (May 5, 2019)

I just want David Wang to join Intel now, then the old ATi will be reborn inside Intel. Intel’s dGPU cannot arrive soon enough


----------



## FordGT90Concept (May 5, 2019)

cucker tarlson said:


> he pumped up the expectations so high they went through the roof,reaps what he sowed.


Lisa Su did that.  She was hyped about Navi and now she isn't.  Something unexpected clearly happened with Navi.


----------



## Vya Domus (May 5, 2019)

ShurikN said:


> Yet you managed to take everything I said like a fact...



What's that even supposed to mean? I was pointing out how the things you said do not work like that. Of course they aren't facts, but for the sake of discussion some of them have to be treated as such, otherwise what's the point? Speculation by itself is worthless.


----------



## vega22 (May 5, 2019)

Vayra86 said:


> People don't really cry about power draw. They cry about perf/watt. This is true everywhere, and especially on mobile with its cooling/power restrictions. As we reach the end of a node, power draw becomes a crucial part of the equation; it's exactly what AMD has been struggling with for the last 5-7 years.



I'll tag you in the first post I read where they do, dude 

I know what you're saying. It's not been something AMD has done well at since they made my GPU, but it's still cost:perf that matters most to the majority, who don't just buy another NV card for mindshare reasons.


----------



## xkm1948 (May 5, 2019)

vega22 said:


> I'll tag you in the first post I read where they do, dude
> 
> I know what you're saying. It's not been something AMD has done well at since they made my GPU, but it's still cost:perf that matters most to the majority, who don't just buy another NV card for mindshare reasons.



Nah, the R300 days were glorious. So were the 4870/5870 days. The Radeon 9700 Pro held the crown for absolute performance AND power efficiency compared to the famous leaf-blower FX 5800.


----------



## ShurikN (May 5, 2019)

vega22 said:


> I'll tag you in the first post I read where they do, dude
> 
> I know what you're saying. It's not been something AMD has done well at since they made my GPU, but it's still cost:perf that matters most to the majority, who don't just buy another NV card for mindshare reasons.


Yes, but those people are a minority in the grand scheme of things. And the majority is tilting towards nVidia. 
Let me put it this way.
Imagine both AMD and nVidia release a card in the most popular, mid-range segment at the exact same time. The two cards have the exact same power draw, performance and price. Same temps and noise as well. They are literally indistinguishable in any way or form once in the case. One is green, the other is red.
Do you think the cards will sell 50:50, or will they favor nVidia, with let's say 70:30?
And that's the issue. In order for AMD to gain market share, they need (unfortunately for them) an amazing product. The last of which was the HD 5870. And I would go as far as to say their last great card was the 7970. They had a lot of solid products after that - Hawaii (with a proper cooler), Polaris, V56 - but none were game changers.


----------



## eidairaman1 (May 5, 2019)

Vayra86 said:


> People don't really cry about power draw. They cry about perf/watt. This is true everywhere, and especially on mobile with its cooling/power restrictions. As we reach the end of a node, power draw becomes a crucial part of the equation; it's exactly what AMD has been struggling with for the last 5-7 years.



There are plenty that look at performance per dollar too.



ShurikN said:


> Yes, but those people are a minority in the grand scheme of things. And the majority is tilting towards nVidia.
> Let me put it this way.
> Imagine both AMD and nVidia release a card in the most popular, mid-range segment at the exact same time. The two cards have the exact same power draw, performance and price. Same temps and noise as well. They are literally indistinguishable in any way or form once in the case. One is green, the other is red.
> Do you think the cards will sell 50:50, or will they favor nVidia, with let's say 70:30?
> And that's the issue. In order for AMD to gain market share, they need (unfortunately for them) an amazing product. The last of which was the HD 5870. And I would go as far as to say their last great card was the 7970. They had a lot of solid products after that - Hawaii (with a proper cooler), Polaris, V56 - but none were game changers.



I posted a Video of AMD where its goals are at right now earlier, take time to hear it.


----------



## Divide Overflow (May 5, 2019)

How odd.  Suddenly those who trashed AdoredTV as an AMD fanboy are the biggest advocates of his latest rumors and speculations...


----------



## Space Lynx (May 5, 2019)

I still don't see Navi coming close to Vega VII/2080 performance, because if it did, they'd basically be giving a big f u to the people who paid $699 for the VII... Navi will only compete at the low-end and mid-tier performance levels; it's really the only thing that makes sense, on paper anyway.


----------



## Vya Domus (May 5, 2019)

Divide Overflow said:


> How odd.  Suddenly those who trashed AdoredTV as an AMD fanboy are the biggest advocates of his latest rumors and speculations...


 
More interesting is that they want to come off as someone who doesn't want anything to do with his fanboy trash, yet they are the first ones to post his videos.

How odd indeed.


----------



## cucker tarlson (May 5, 2019)

adtv blew the expectations out of proportion, ppl called him out on that.
nothing odd about the fact they're speaking now too.
why would anyone who knew what adtv posted was far too hyped up shut up now? he just admitted things turned out to be below his humongous expectations. ppl knew that as soon as it was revealed navi is gcn.
you either believed what he said back then or not.

when adtv says it's gonna disappoint, we're in for another saga of the amd fan base baiting ppl on tpu, since they got nothing else going on.


----------



## ShurikN (May 5, 2019)

eidairaman1 said:


> I posted a Video of AMD where its goals are at right now earlier, take time to hear it.


Just finished it. Pretty good video. And explains a lot of what's been happening and why.
This comment from the second Adored video ties in nicely with it.


> PC gamers are the lowest on the list of priorities for AMD. The 7nm capacity and resources had better uses in the datacenter as Vega Instinct / EPYC. No need to waste them on gamers who only want AMD to compete in order for Nvidia to lower prices and get Nvidia anyway.


----------



## cucker tarlson (May 5, 2019)

ShurikN said:


> Just finished it. Pretty good video. And explains a lot of what's been happening and why.
> This comment from the second Adored video ties in nicely with it.


as if this wasn't common knowledge already.
did adtv just learn that rtg cares about pc gamers the least?
cause to me switching focus is the whole reason why they're still living and breathing now.


----------



## vega22 (May 5, 2019)

@ShurikN 

For sure dude, it's the whole crux of their issue. People think Nvidia are better, even when they ain't.

Maybe if they could get a plug from PlayStation and Xbox that might change, but it won't happen.


----------



## FordGT90Concept (May 5, 2019)

lynx29 said:


> I still don't see Navi coming close to Vega VII/2080 performance, because if it did, they'd basically be giving a big f u to the people who paid $699 for the VII... Navi will only compete at the low-end and mid-tier performance levels; it's really the only thing that makes sense, on paper anyway.


Remember why Radeon VII is $700: because the NVIDIA cards it competes with are.  AMD's intent with Navi was to shift the price point of the market down to where Radeon VII's isn't.


----------



## cucker tarlson (May 5, 2019)

I'm not worried about power draw tbh. modern cooling solutions are capable of keeping a 250W card cool and quiet. it's gonna drive up the cost though, since no one should have to buy mid-range cards with high-end coolers.
I'm worried that amd might let nvidia's 2060/2070 go without competition.



FordGT90Concept said:


> Remember why Radeon VII is $700: because NVIDIA cards it competes with are.


Might wanna rethink that.
RVII is closer to the $500 rtx2070 yet priced like the rtx2080.

http://www.pcgameshardware.de/Grafi...Rangliste-GPU-Grafikchip-Benchmark-1174201/2/


----------



## ShurikN (May 5, 2019)

FordGT90Concept said:


> Remember why Radeon VII is $700: because NVIDIA cards it competes with are.


Yes, but bear in mind that while the R7 die is smaller, it's built on a more expensive and not as mature process, and has HBM2. If AMD could have priced it at $600 and still broken even, they would have.


----------



## FordGT90Concept (May 5, 2019)

cucker tarlson said:


> RVII is closer to the $500 rtx2070 yet priced like the rtx2080.


1) Amazon lists reference Radeon VII cards from $660 (back ordered) to $770, nowhere near $500.

2) Performance wise, it lands squarely between RTX 2070 (~$500) and RTX 2080 (~$700) while having twice as much memory as the RTX 2080 (16 GiB HBM2 vs 8 GiB GDDR6).

So yeah, how am I wrong?  It's priced to fit inside of NVIDIA's pricing.  That's why NVIDIA didn't respond with price cuts.  AMD did the same thing when they launched Vega and Fury.


----------



## cucker tarlson (May 5, 2019)

FordGT90Concept said:


> 1) Amazon lists reference Radeon VII cards from $660 (back ordered) to $770, nowhere near $500.
> 
> 2) Performance wise, it lands squarely between RTX 2070 (~$500) and RTX 2080 (~$700) while having twice as much memory as the RTX 2080 (16 GiB HBM2 vs 8 GiB GDDR6).
> 
> So yeah, how am I wrong?  It's priced to fit inside of NVIDIA's pricing.  That's why NVIDIA didn't respond with price cuts.  AMD did the same thing when they launched Vega and Fury.


come on, every card can be found discounted.
you can have a 2070 for $450.
I'm talking msrp.

my point is the same as the point ShurikN was making above. the RVII is more like a $500-600 card, but they can't sell it for that.


----------



## FordGT90Concept (May 5, 2019)

If it had 8 GiB VRAM, it would be a $600 card but it has 16 GiB, hence closer to $700.  Remember, Radeon VII is not far removed from Radeon Instinct.

TPU's performance summary has a lot of older games in it, like Witcher 3, Hellblade, Dark Souls 3, Rainbow Six: Siege, and Grand Theft Auto V.  These are games that the Radeon VII doesn't do well in simply because they're DX11, with the crappy VRAM limitations that generally invokes.  Omit the dinosaurs (where you're getting 60+ fps at 4K anyway) and the aggregate performance is closer to the RTX 2080 than the RTX 2070.


----------



## cucker tarlson (May 5, 2019)

FordGT90Concept said:


> If it had 8 GiB VRAM, it would be a $600 card but it has 16 GiB, hence closer to $700.


then they should make it 8GB, cause 16GB drives up the price but the performance is that of a $500 rtx 2070, with no RT and worse efficiency.

this should be $550-600 max. same for the rtx 2080, it should come down $100 too.


----------



## FordGT90Concept (May 5, 2019)

There's little/no supply of 2 GiB HBM2 chips.  They'd also have to retool/redesign the HSF because of the difference in height.  Not worth it.  Sell 16 GiB to everyone.

This thread is about Navi, not Radeon VII.


----------



## Vayra86 (May 5, 2019)

vega22 said:


> I'll tag you in the first post I read where they do dude
> 
> I know what you're saying. It's not been something amd have done well at since they made my GPU but still it's cost : perf that matters the most to the majority who don't just buy another NV card for mindshare reasons.



That is just it. You need the one to get the other, which is what @ShurikN says as well. We all know this deep down inside; everyone can recognize 'Nvidia mindshare' is a thing, but let's just face reality: that is not because of Huang's fancy jacket. It's because of the product.

I think its comparable to a Dacia car versus a Volkswagen. They both accelerate about the same, they carry just as many people and luggage, they do the same amount of KM/L. But, the VW has a somewhat better designed interior, looks a bit nicer on the outside, and comes in twenty different colors. The Dacia comes in three. And, to top things off, VW has a few concept cars going about, and a few fast and luxurious ones too. Nobody ever buys those, but hey, if you drive a simple VW, you do get some of that 'feeling' of being part of the brand that has those cars.

This also handily underlines that people care about more than price - you don't see Dacias everywhere. In fact, price is one of the least important factors in most segments except the volume midrange. And because of that, the midrange is also the least profitable segment. This is why AMD moves units but profits so very little - and therein lies the problem. A midrange lineup is only a result of solid high-end products from the year before; otherwise you're constantly doing reboots à la Polaris to fix the gap, and you'll never amass a comfortable margin to fund new R&D.

I think this is a pretty decent car analogy, for once. Heck, it goes even further: VW has the E-Golf, a pretty useless electric version of the same car. Sounds almost like Turing!


----------



## HD64G (May 5, 2019)

notb said:


> Why would it be a failure?
> AMD holds 20% of dGPU market. You can't call AMD GPUs failed if you praise their CPUs.
> 
> It's almost impossible for it to be worse than last Polaris cards. It'll be OK for gaming and as chips get bigger and TSMC node gets better, Navi will get to 4K @ 60fps. Maybe even with hardware RT similar to Nvidia's.
> ...


Hi again pal! Nice to meet you in another AMD thread, although you don't like their products much.

1) Polaris helped AMD recover to ~40% with the 480, if you look at the period right after its launch
2) Arcturus isn't a new arch but a new product, probably for the server market
3) The expectation for Navi in 2019 is Navi 10 competing with the 2070 (a Vega 64 successor in performance, at the RX 580's launch price), and a cut-down version of it going against the 1660 Ti, succeeding Vega 56 in performance for close to $200. Methinks they will make it, but I'm not sure about the efficiency being worthy of the 7nm process. Radeon VII got ~15% higher clocks for 10% lower power consumption while having double the memory chips on it.
4) Vega wasn't an interim step at all. It was just a multi-purpose chip that turned out to be much better in compute workloads (GCN and HBM2 helped a lot) but failed to reach as high for gaming. Vega 56, however, was a good product since launch and is even better today, not to mention the price hikes during the mining craze.
5) Intel is already 3 years off its 10nm schedule, having spent huge piles of $ on that front, while AMD is behind schedule with Navi mainly due to the change in plans after GF abandoned their 7nm, which meant altering design parameters to make it compatible with TSMC's process


----------



## cucker tarlson (May 5, 2019)

FordGT90Concept said:


> There's little/no supply of 2 GiB HBM2 chips.  They'd also have to retool/redesign HSF because of the difference in height.  Not worth it.  Sell 16 GiB to everyone.
> 
> This thread is about Navi, not Radeon VII.


if nv put 16gb on 2070 cause there's a short supply of 1gb ddr6 chips it wouldn't make it worth $200 more.


----------



## MrGenius (May 5, 2019)

FordGT90Concept said:


> 2) Performance wise, it lands squarely between RTX 2070 (~$500) and RTX 2080 (~$700) while having twice as much memory as the RTX 2080 (16 GiB HBM2 vs 8 GiB GDDR6).
> 
> So yeah, how am I wrong?  It's priced to fit inside of NVIDIA's pricing.  That's why NVDIA didn't respond with price cuts.


I'll tell you how.

1)  It's not even close to squarely between the RTX 2070 and RTX 2080 performance-wise. It's essentially equal to an RTX 2080, trading blows with it all day long. Sometimes a little slower, sometimes a little faster, sometimes pretty much the same. Turn RTX on and it wins every time (speed-wise). Those charts above are bullshit.

2) As such, it's priced to beat Nvidia value wise. Which it does.

3) Nvidia didn't respond with price cuts because they can't. Or won't. Either way. Doesn't matter. Radeon VII is a better value.


----------



## erocker (May 5, 2019)

Hype train went off a cliff for me once they said VII is their high end product.


----------



## cucker tarlson (May 5, 2019)

MrGenius said:


> I'll tell you how.
> 
> 1)  It's not even close to squarely between the RTX 2070 and RTX 2080 performance-wise. It's essentially equal to an RTX 2080, trading blows with it all day long. Sometimes a little slower, sometimes a little faster, sometimes pretty much the same. Turn RTX on and it wins every time (speed-wise). Those charts above are bullshit.
> 
> ...


every chart that doesn't show R7 where you want is bullshit.


----------



## FordGT90Concept (May 5, 2019)

cucker tarlson said:


> if nv put 16gb on 2070 cause there's a short supply of 1gb ddr6 chips it wouldn't make it worth $200 more.


Of course it wouldn't because 8 GiB GDDR6 isn't worth $200--it's closer to $50-75.

Would people pay ~$600 for a 16 GiB RTX 2070?  Likely.


----------



## cucker tarlson (May 5, 2019)

FordGT90Concept said:


> Of course it wouldn't because 8 GiB GDDR6 isn't worth $200--it's closer to $50-75.


you still haven't convinced me that 16GB needs to be there; it drives the price up a lot for little to no gain. I mean *HBCC is their own friggin invention*.


----------



## HD64G (May 5, 2019)

erocker said:


> Hype train went off a cliff for me once they said VII is their high end product.


So, you didn't know that Navi 10 is the Polaris successor coming in 2019, and the Vega successor is Navi 20, which would launch in 2020? Those rumors are over a year old, not to be confused with the Radeon VII launch, which was a product to buy AMD time until Navi is ready.


----------



## FordGT90Concept (May 5, 2019)

cucker tarlson said:


> you still haven't convinced me that 16GB needs to be there; it drives the price up a lot for little to no gain. I mean *HBCC is their own friggin invention*.


You're looking at it backwards: the product has 16 GiB so the price reflects that.  Vega 20 wasn't designed for gamers; it was designed for Radeon Instinct (Radeon VII == Radeon Instinct MI50).

The "gain" is in the fact that it has 1 TB/s bandwidth which the GPU clearly benefits from.


----------



## vega22 (May 5, 2019)

Vayra86 said:


> That is just it. You need the one to get the other, what @ShurikN says as well. We all know this, deep down inside, everyone can recognize 'Nvidia mindshare' is a thing but let's just face reality, that is not because of Huang's fancy jacket. Its because of the product.
> 
> I think its comparable to a Dacia car versus a Volkswagen. They both accelerate about the same, they carry just as many people and luggage, they do the same amount of KM/L. But, the VW has a somewhat better designed interior, looks a bit nicer on the outside, and comes in twenty different colors. The Dacia comes in three. And, to top things off, VW has a few concept cars going about, and a few fast and luxurious ones too. Nobody ever buys those, but hey, if you drive a simple VW, you do get some of that 'feeling' of being part of the brand that has those cars.
> 
> ...



I'm not sure the car analogy works. It paints AMD as a cheap brand, while it works quite well for NV given their history of lying and cheating. Maybe VW and Ford, or BMW and Ford, would have worked better. They both make good products, but one is perceived as being "better".

But I know what you mean. AMD has been chasing the mid-range family sedan market while NV has been chasing the high-end sports coupe market.


----------



## cucker tarlson (May 5, 2019)

FordGT90Concept said:


> You're looking at it backwards: the product has 16 GiB so the price reflects that.  Vega 20 wasn't designed for gamers; it was designed for Radeon Instinct (Radeon VII == Radeon Instinct MI50).
> 
> The "gain" is in the fact that it has 1 TB/s bandwidth which the GPU clearly benefits from.


no, you're looking at it backwards.
if it wasn't designed for gamers, then a gamer shouldn't take a $700 R7 as a good alternative when they can have a 2070 at $500.
still, it's better that it's there than leaving the 2070/80 with no competition.


----------



## FordGT90Concept (May 5, 2019)

I would flip it back to you: why pay $500 for a card with only 8 GiB of VRAM?  $200 cards released years ago had that (RX 470).  I would expect premium priced cards to have premium amounts of memory.


----------



## cucker tarlson (May 5, 2019)

FordGT90Concept said:


> I would flip it back to you: why pay $500 for a card with only 8 GiB of VRAM?  $200 cards released years ago had that (RX 470).  I would expect premium priced cards to have premium amounts of memory.


oh my god, just cause there's 8GB of some old-ass GDDR5 on one RX 470 variant doesn't mean the rtx2070 needs 16GB of GDDR6. you just said the fact the R7 has 16GB has two reasons, and neither of them is that it needs it.
come on, I've got work to do and I'm here refreshing tpu, sipping a drink and going back and forth with you


----------



## notb (May 5, 2019)

HD64G said:


> Hi again pal! Nice to meet you in another AMD thread although you don't like their products muchly.


What would be the point of a thread that involves only people who like the product? Would that still be a discussion? More like a gang bang.
If that's what you're after, I'll step aside for sure...


----------



## FordGT90Concept (May 5, 2019)

cucker tarlson said:


> oh my god, just cause there's 8GB of some old-ass GDDR5 on one RX 470 variant doesn't mean the rtx2070 needs 16GB of GDDR6. you just said the fact the R7 has 16GB has two reasons, and neither of them is that it needs it.
> come on, I've got work to do and I'm here refreshing tpu, sipping a drink and going back and forth with you


If it makes you feel any better, a likely reason why the Radeon Instinct MI60 was passed over for consumer cards is because 32 GiB is excessive; 16 is not.  It is a lot, but it is not over the top.


----------



## cucker tarlson (May 5, 2019)

FordGT90Concept said:


> If it makes you feel any better, a likely reason why the Radeon Instinct MI60 was passed over for consumer cards is because 32 GiB is excessive; 16 is not.  It is a lot, but it is not over the top.


no it isn't.


----------



## notb (May 5, 2019)

ShurikN said:


> Yes, but bear in mind that while R7 die size is smaller, it's built on a more expensive and not as mature process


So why not stay on the larger, cheaper node? They could have simply polished Vega a bit further and kept selling it. It has enough performance for the market it targets.
Radeon VII didn't improve on anything qualitative. It's still more power-hungry and hotter than many PCs can handle.
It's slightly faster, but not enough to make AMD compete with top Nvidia products.

It literally looks like a statement for shareholders - showing that 7nm works (kind of). And a way to push a few thousand Instinct chips that no one wants.

And BTW: this "more expensive and not as mature process" is what Zen2 will be using - at least initially. So you'd better be wrong, or the Zen2 fan club will be very disappointed.


----------



## FordGT90Concept (May 5, 2019)

GPUs don't work as a chiplet as well as CPUs do.  Zen 2's success on 7 nm is because of the chiplet design.  Navi was theoretically supposed to be a chiplet too but...that remains to be seen.  Infinity fabric would have to be really, really fast to satisfy a GPU's need for bandwidth.


----------



## notb (May 5, 2019)

FordGT90Concept said:


> 16 is not.  It is a lot but it is not over the top.


How much do games use these days at 4K ultra? Is 8GB really a big limitation?
I quickly checked the analyses TPU provides.
E.g. Metro Exodus, not even 6GB with RTX on:
https://www.techpowerup.com/reviews/Performance_Analysis/Metro_Exodus/6.html
Generally speaking, most games tested use around 4GB in 4K.
The biggest usage I've found:
https://www.techpowerup.com/reviews/Performance_Analysis/Middle_Earth_Shadow_of_War/5.html
8.3GB, but the comment is crucial: the usual 4GB on "high" settings.

Also, I remember very well that 8GB was perfectly fine when Vega came out. AMD convinced us that HBM2 and HBCC mean it doesn't need more. That it performs like a card that has more memory.
*What happened to all that?*



FordGT90Concept said:


> GPUs don't work as a chiplet as well as CPUs do.  Zen 2's success on 7 nm *is *because of the chiplet design.


I believe it's a bit too early to say chiplets work great in CPUs and that Zen2 is a success. Don't you think? ;-)


----------



## HD64G (May 5, 2019)

notb said:


> What would be the point of a thread that involves only people that like the product? Would that still be a discussion? More like a gang bang.
> If that's what you're after, I'll step aside for sure...


I just point out your constantly negative attitude towards any AMD product, even very good ones such as the Zen-derived CPUs. Keep commenting freely; just don't expect many people to take your opinions seriously when you are so single-minded about tech products. Personally, even if I don't like nVidia's market practices, I can still regard the GTX 1080 Ti highly as marvelous in design and efficiency for the time it launched. Every product should be judged separately from the company's profile if we want to be as objective as humanly possible.


----------



## xkm1948 (May 5, 2019)

AMD graphics focus on console chips while Intel(ATi) and Nvidia battle it out in dGPU.

I am OK with that.


----------



## eidairaman1 (May 5, 2019)

ShurikN said:


> Just finished it. Pretty good video. And explains a lot of what's been happening and why.
> This comment from the second Adored video ties in nicely with it.



That's why I shared it; Lisa Su is on the right path. I'm not expecting miracles like the jump between the Radeon 8500 and 9500/9700/Pro. She expects improvement, and there is some. AMD is digging themselves out of the mess that started with Phenom 1 (the damage had already been done), and they have been able to do a lot lately despite their smaller operating revenue.


----------



## HD64G (May 5, 2019)

xkm1948 said:


> AMD graphics focus on console chips while Intel(ATi) and Nvidia battle it out in dGPU.
> 
> I am OK with that.


Intel will need at least 5 years (more likely 10) to be able to compete with nVidia in the high-end consumer GPU market. And they might be obliged to use Samsung's fabs in order to even begin mass-market GPU production. They haven't been in good form lately on many fronts. I would also like to have a 3-or-more-way competition in the CPU and GPU markets, but it is very hard for a newcomer to compete with the established players for the first few years.


----------



## Space Lynx (May 5, 2019)

HD64G said:


> Intel will need at least 5 years (more likely 10) to be able to compete with nVidia in the high-end consumer GPU market. And they might be obliged to use Samsung's fabs in order to even begin mass-market GPU production. They haven't been in good form lately on many fronts. I would also like to have a 3-or-more-way competition in the CPU and GPU markets, but it is very hard for a newcomer to compete with the established players for the first few years.



Not to mention Intel's new CEO announced they are focused on big data and servers more than PC consumers moving forward.  Sadly, I think that is a bad move for them, seeing as Dell just announced they intend to increase 7nm EPYC Rome usage threefold over previous estimates.  Good luck, Intel; you'll need it, 'cause Rome is going to kick their ass.


----------



## NdMk2o1o (May 5, 2019)

HD64G said:


> Intel will need at least 5 years (more likely 10) to be able to compete with nVidia in the high-end consumer GPU market. And they might be obliged to use Samsung's fabs in order to even begin mass-market GPU production. They haven't been in good form lately on many fronts. I would also like to have a 3-or-more-way competition in the CPU and GPU markets, but it is very hard for a newcomer to compete with the established players for the first few years.


This.

If AMD leaves the PC consumer GPU market then we're in for bad times ahead. Intel will not be able to come out with products competitive with Nvidia's, given this is their first discrete GPU in over 20 years, so expecting them to match Nvidia on all fronts - power, price, performance - after a couple of years of R&D is quite frankly ridiculous. And if there were no AMD then we would see a monopoly like there has been in the CPU segment for the last 10 years, with Intel just incrementally adding a little more performance every generation and prices always increasing. Now imagine that was Nvidia... their prices have already gone up ridiculously. If there were even less competition than there is now... well, good luck with your $2k high-end GPU with a small uplift of 10% over Turing.


----------



## eidairaman1 (May 5, 2019)

NdMk2o1o said:


> This.
> 
> If AMD leaves the PC consumer GPU market then we're in for bad times ahead. Intel will not be able to come out with products competitive with Nvidia's, given this is their first discrete GPU in over 20 years, so expecting them to match Nvidia on all fronts - power, price, performance - after a couple of years of R&D is quite frankly ridiculous. And if there were no AMD then we would see a monopoly like there has been in the CPU segment for the last 10 years, with Intel just incrementally adding a little more performance every generation and prices always increasing. Now imagine that was Nvidia... their prices have already gone up ridiculously. If there were even less competition than there is now... well, good luck with your $2k high-end GPU with a small uplift of 10% over Turing.



AMD is not leaving.


----------



## notb (May 5, 2019)

HD64G said:


> I just point out your constantly negative attitude towards any AMD product, even very good ones such as the Zen-derived CPUs. Keep commenting freely; just don't expect many people to take your opinions seriously when you are so single-minded about tech products. Personally, even if I don't like nVidia's market practices, I can still regard the GTX 1080 Ti highly as marvelous in design and efficiency for the time it launched. Every product should be judged separately from the company's profile if we want to be as objective as humanly possible.


Well, here's the difference. I'm not just criticizing AMD's products. I also don't like their business strategy and the whole background they provide. That's why I'm criticizing the company as well.

And yes, I don't like AMD's GPUs - because they're an affront to the great company that ATI had been.
And I don't like the CPUs either - because IMO they made too many compromises to push the price down.

You see, many cores and all that - great. AMD started the core war that changed the landscape of what we TALK about.
But the simple fact is: most of the demand for PC CPUs (desktop and mobile) is for chips with an IGP. When Zen launched in 2017, Intel was making 4-core chips with an IGP.
Over 2 years later AMD is still launching 4-core APUs. A lot of talk, not a lot of improvement in mainstream products. So yeah, it's hard for me to like someone who made these decisions.

And it's a similar story with Navi. If only AMD's market share were as high as their "forum discussion share"...



HD64G said:


> Intel will need at least 5 years (more likely 10) to be able to compete with nVidia in the high-end consumer GPU market. And they might be obliged to use Samsung's fabs in order to even begin mass-market GPU production. They haven't been in good form lately on many fronts.


Well, AMD is also a few years behind Nvidia, and they also need to outsource production.

Why would Intel go for expensive, small-volume cards? That makes no sense.
They should go for a good mainstream GPU. And what stops them from making a competitor to the RX580? Not very efficient, but with decent performance and a better brand? Absolutely nothing.

As for workstation/datacenter products, Intel is very unlikely to be able to compete with Nvidia for years - not because of hardware finesse, but because of the whole ecosystem. They'll have the exact same problem AMD has.
Nvidia dominated GPGPU not because their chips were much better than AMD's, but because of things like CUDA.
Even if Intel magically makes a V100 clone - and even sells it slightly cheaper - it'll take them years to get a significant market share.


----------



## Vya Domus (May 5, 2019)

FordGT90Concept said:


> GPUs don't work as a chiplet as well as CPUs do.



They should work better, I would argue, due to the fundamental way GPUs work (SIMT/SPMD). That makes it much easier to decentralize the chip into compute modules; also, with GPUs you didn't have to worry much about added latencies to begin with.

The problem is: why would you want to make one right now? A chiplet GPU would only make sense if you reached the absolute limit of size/power/performance and any further advancement would affect one of those metrics to the point it is no longer feasible to make a monolithic GPU. Or if you want to make an APU (right now that's a bad idea on PCs due to lack of bandwidth).

That's exactly where Rome sits right now on the CPU front; it was meant to be the biggest, fastest, most power-efficient CPU AMD can make. With Navi they clearly didn't have those goals in mind. They wanted an APU for consoles, and whatever design resulted from that they decided to port to PCs in the form of dedicated graphics.

Let's take that a little further: power efficiency and size were likely the leading metrics in making Navi - targets that were probably met just fine as far as the console APUs were concerned. Then comes turning it into a compelling product for PCs, and AMD was faced with a dilemma: do you make a 1:1 port of the technology and make cards that are very power efficient but mediocre as far as performance goes (compared to the best Nvidia has)? Or do you go outside of this optimal design in pursuit of more performance? We'll see what they did, but how AMD got here shouldn't be a mystery or a surprise to anyone; it was all rather straightforward.


----------



## eidairaman1 (May 5, 2019)

Vya Domus said:


> They should work better, I would argue, due to the fundamental way GPUs work (SIMT/SPMD). That makes it much easier to decentralize the chip into compute modules; also, with GPUs you didn't have to worry much about added latencies to begin with.
> 
> The problem is: why would you want to make one right now? A chiplet GPU would only make sense if you reached the absolute limit of size/power/performance and any further advancement would affect one of those metrics to the point it is no longer feasible to make a monolithic GPU. Or if you want to make an APU (right now that's a bad idea on PCs due to lack of bandwidth).
> 
> That's exactly where Rome sits right now on the CPU front; it was meant to be the biggest, fastest, most power-efficient CPU AMD can make. With Navi they clearly didn't have those goals in mind. They wanted an APU for consoles, and whatever design resulted from that they decided to port to PCs in the form of dedicated graphics.



Best to wait and see for next 2 years


----------



## FordGT90Concept (May 5, 2019)

notb said:


> How much do games use these days at 4K ultra? Is 8GB really a big limitation?
> I quickly checked the analyses TPU provides.
> E.g. Metro Exodus, not even 6GB with RTX on:
> https://www.techpowerup.com/reviews/Performance_Analysis/Metro_Exodus/6.html
> ...


I generally don't trust these memory requirement assessments for games because usage varies wildly.  For example, there are reviews that say a GTX 1080 uses at most 4 GiB in Assassin's Creed: Origins at 4K ultra.  I'm not playing it on ultra, only at 1920x1200, and I've seen my VRAM use exceed 4 GiB.  This may be because I was many hours into a play session before I checked and the amount of content cached in VRAM had accumulated over time.  Or it could simply be where I was in the world.

Point is: memory is something you have enough of until you don't.  Xbox One X makes 9 GiB of VRAM/RAM available to games and most of that is used by the GPU. This has led to more games pushing closer to 6 GiB memory use.  PS5 and Microsoft's answer to it are likely to raise the memory threshold higher.  8 GiB will likely be okay in dGPUs, but there will be more games in the next 3-5 years that want more than 8 at high resolutions.

If I'm blowing $500+ on a card now, I'd want at least 10 GiB because I tend to use my cards for 3-5 years.


----------



## NdMk2o1o (May 5, 2019)

eidairaman1 said:


> AMD is not leaving.


I was quoting xkm1948 comment 



> AMD graphics focus on console chips while Intel(ATi) and Nvidia battle it out in dGPU.
> 
> I am OK with that


----------



## FordGT90Concept (May 5, 2019)

Vya Domus said:


> They should work better, I would argue, due to the fundamental way GPUs work (SIMT/SPMD). That makes it much easier to decentralize the chip into compute modules; also, with GPUs you didn't have to worry much about added latencies to begin with.


That last sentence has me in stitches.  Latency is the reason why 100+ GB/s memory is common on cards.  If you want 144 fps, you need to do everything to draw that frame in 6.94 ms. 6.94 ms!  GPUs have no time to wait for anything.

CPUs: threads are localized by nature.  GPUs: work on warps/wavefronts where each warp/wavefront spawns thousands of parallel executions accessing the same data. Have some reading material.

Remember how SLI/Crossfire works: every GPU mirrors the memory of every other GPU.  This is extremely wasteful, but it's the only way to make sure every GPU has access to the data it needs, when it needs it. When you have multiple GPU chiplets that need to access the same resource, each chiplet trying to access it is going to be penalized by the other chiplets trying to access it as well.  The memory controller would have to be able to copy resources to all chiplets' local memory simultaneously and in real time.  But that presents its own problem: latency between the memory controllers and the chiplets.  It's one thing, after another, after another that leads to not being able to meet that 6.94 ms goal.

The hope was that Infinity Fabric would be fast enough to make it possible, but...the talk of Navi and chiplets hasn't ever really materialized.  I mean, PS5 could have separate Zen 2, I/O, and Navi packages on an MCM, but putting multiple Navis together on one package with the intent to masquerade as one GPU... there have been no hints of that for years (beyond a vague reference to "scalability").
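The 6.94 ms figure above is just the per-frame budget arithmetic at a given refresh rate; a quick sketch (plain arithmetic, not measurements from any card):

```python
def frame_budget_ms(fps: float) -> float:
    """Milliseconds available to render one frame at a given frame rate."""
    return 1000.0 / fps

# At 144 fps the GPU has ~6.94 ms per frame; at 60 fps, ~16.67 ms.
for fps in (60, 144, 240):
    print(f"{fps} fps -> {frame_budget_ms(fps):.2f} ms per frame")
```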



Vya Domus said:


> The problem is why would you want to make one right know ?


Lower cost, better yields, and better performance.


----------



## Vya Domus (May 5, 2019)

FordGT90Concept said:


> GPUs have no time to wait for anything.



You can get plenty of work done in 6.94 ms if you have the bandwidth. GPUs are designed to work around latencies, not to improve upon them; it's the reason why, for example, GPUs typically have an order of magnitude smaller caches compared to similarly sized CPUs. The predictability and assumptions about what sort of programs are going to be run on a GPU make it so that scheduling can hide latencies very well.

Edit :



FordGT90Concept said:


> GPUs: work on WAVEs where each WAVE spawns thousands of threads accessing the same data.



*Not the same data*, _*typically*_. GPUs execute instructions in a SIMD fashion (Single-Instruction-*Multiple*-Data) and the work is generally data-independent; if GPUs worked as you described, manufacturers would have given up long ago. Thankfully they don't.

And wavefronts do not "spawn thousands of threads"; I think you are seriously confusing things here. A wavefront is a collection of 32 software threads (on Nvidia, which calls it a warp) or 64 (AMD) that are subsequently scheduled for execution onto the actual hardware lanes within a CU/SM. Wavefronts don't even consist of threads in absolute terms; they are more like instructions from a collection of threads. The same instruction gets executed for multiple threads, aka SIMT, another design philosophy for GPUs.
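The arithmetic behind these warp/wavefront sizes is easy to check; a minimal sketch (the 2560-thread count is the per-CU HD 7970 figure cited elsewhere in the thread, used here purely as an illustration):

```python
WAVEFRONT_SIZE = {"nvidia_warp": 32, "amd_gcn": 64}

def wavefronts_for(threads: int, vendor: str) -> int:
    """Warps/wavefronts needed to cover `threads` work-items (ceiling division)."""
    size = WAVEFRONT_SIZE[vendor]
    return -(-threads // size)

# 2560 resident threads on a GCN CU = 40 wavefronts of 64 threads each;
# the same thread count on Nvidia would be 80 warps of 32.
print(wavefronts_for(2560, "amd_gcn"))      # 40
print(wavefronts_for(2560, "nvidia_warp"))  # 80
```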


----------



## FordGT90Concept (May 5, 2019)

I did major edits to the post above clarifying all those things and giving an example of why it hasn't been done.

Also: look at page 22 of the referenced document: Radeon HD 7970 = 2560 active threads per L1 data cache.  So yes, "thousands of threads."


----------



## notb (May 5, 2019)

FordGT90Concept said:


> Point is: memory is something you have enough of until you don't.


But why would it run out? 8GB is a lot.
A few games checked by TPU pull everything they can ever use as soon as possible. And it's still under 10GB. So what's the point of more memory?

And just as you said: even at 1080p a GPU can gather 4GB of data over a long session. But that simply means holding a lot of data that it'll never use, like textures of locations you can't get back to.
When GPU reaches memory limit, it simply removes the stuff that's least likely to be useful.

So if we assume 4K games need around 10GB, what's the point of 16GB vs 8GB? You'll use 2 more for stuff that doesn't have to be there. And you won't use the other 6GB. Ever.
That unused HBM2 costs you $100 - money that could have given you a much faster GPU instead.

And once again: Vega was designed to work well with less RAM, right? All these presentations and videos weren't a dream? 


> Xbox One X makes 9 GiB of VRAM/RAM available to games and most of that is used by the GPU.


Of course. Consoles are reading as much as they can the moment you start a game. That's why it takes so long to start a game (compared to PCs).


> This has led to more games pushing closer to 6 GiB memory use.  PS5 and Microsoft's answer to it are likely to raise the memory threshold higher.  8 GiB will likely be okay in dGPUs, but there will be more games in the next 3-5 years that want more than 8 at high resolutions.


But why? I mean: where do these extra GB come from?
The only thing that could significantly increase the RAM requirement is higher resolution. Radeon VII is a 4K card tops (at max settings).

Check the tests TPU made. They started 3 years ago.
https://www.techpowerup.com/reviews/?category=PC+Port+Testing
Most games tested at 4K used around 4-5GB.
There are a few outliers, like CoD or RoTR. In their cases performance doesn't drop on cards with smaller VRAM, so the GPU holds more data than it needs. W1zzard mentions that in the comments as well.

I see no reason why 4K games would suddenly require 10GB - let alone 16GB. If you know one, please share. 


> I know that I'm blowing $500+ on a card now, I would want at least 10 GiB because I do tend to use my cards for 3-5 years.


But wouldn't you prefer the card to have more performance, not more RAM?
Games still run on 8GB. And they still will 5 years from now. It's just that they'll need to read data from disk a bit more often. It's not a big problem - I doubt you'd notice. AMD could have spent that $100 on extra CUs instead. Or a better cooler. Or just made a profit for a change and saved money for R&D.

Maybe 5 years from now you'll have a 5 or 6K monitor. Maybe 6K games will utilize 12GB of RAM. But so what? Radeon VII is too slow for that anyway.


----------



## Vya Domus (May 5, 2019)

None of what's in there is at odds with what I have been saying; as a matter of fact it matches it entirely. A wavefront doesn't consist of thousands of threads - it's either 32 or 64; hopefully we got that out of the way. Only when we get to the collection of wavefronts can we talk about thousands of threads.



FordGT90Concept said:


> Radeon HD 7970 = 2560 active threads per L1 data cache.



Those 2560 threads may or may not all go to L1; they may go to L2 or global memory. The fact that there can be mere bytes of instructions/data available per thread should make it obvious to you that caches do not play much of a role here, and neither does the latency benefit associated with them.

That's how a GPU handles the horrendous memory access times: you make it so that work is already scheduled while you fetch data.





You can expand this vertically and all you would need is more bandwidth; latency could remain untouched or even degrade slightly. That's why latency isn't critical even when you need a frame done in 6.94 ms, and why chiplets would not be hard to implement. You can only do this because you know ahead of time that you are going to execute the same sequences of instructions over multiple pieces of data. You are talking about memory controllers and how there would be contention between multiple chiplets accessing the same data. First of all, they rarely need to access the same data, as I pointed out; and secondly, *this isn't anything new* - you already have this problem with multiple CUs/SMs querying the same data through the same memory controller. Nothing about this makes it so that it couldn't be dealt with easily.
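A toy model of the latency hiding being described (the cycle counts are illustrative, not from any real GPU): a SIMD only stalls on a memory request when the other resident wavefronts don't have enough compute work to cover the latency.

```python
def stall_cycles(mem_latency: int, compute_per_wave: int, resident_waves: int) -> int:
    """Idle cycles per memory request: while one wavefront waits on memory,
    the scheduler issues the other resident wavefronts' compute instructions."""
    hidden = compute_per_wave * (resident_waves - 1)
    return max(0, mem_latency - hidden)

# 400-cycle memory latency, 20 cycles of compute work per wavefront:
print(stall_cycles(400, 20, 4))   # 340 -> too few waves, SIMD mostly idle
print(stall_cycles(400, 20, 24))  # 0   -> enough waves, latency fully hidden
```

The point of the sketch: adding resident wavefronts (more bandwidth, more parallel work) drives stalls to zero even if the raw latency never improves.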


----------



## FordGT90Concept (May 5, 2019)

notb said:


> But why would it run out? 8GB is a lot.
> A few games checked by TPU pull everything they can ever use as soon as possible. And it's still under 10GB. So what's the point of more memory?
> 
> And just as you said: even at 1080p a GPU can gather 4GB of data over a long session. But that simply means holding a lot of data that it'll never use, like textures of locations you can't get back to.
> ...


I have an original PCI Radeon with 32 MB.  That's enough VRAM, right? Right!?![/sarcasm]



notb said:


> But why? I mean: where do these extra GB come from?


Larger textures, more triangles, async compute workload, etc.



notb said:


> Most games tested at 4K used around 4-5GB.


3 years ago.  At that time, the GTX 970 was being made functionally obsolete because it didn't have enough VRAM.  Now the GTX 1060 6 GiB is dangerously close to being overloaded.  Meanwhile, the RX 470 8 GiB is fine, and so is the RTX 2080...but not for long.



notb said:


> But wouldn't you prefer the card to have more performance, not more RAM?


They're one and the same for Radeon VII: 1 TB/s versus 483.8 GB/s and 16 GiB versus 8 GiB.  You get double the bandwidth along with double the VRAM.  It's win-win, for the reasonable price of $100.



Vya Domus said:


> That's why latency isn't critical even when you need a frame done in 6.94 ms, and why chiplets would not be hard to implement.


You sound so sure of yourself, yet it hasn't been done, even in prototyping.  Either the engineers who have every incentive to go chiplet are wrong, or you're wrong.  Guess who my money is on.


----------



## eidairaman1 (May 5, 2019)

FordGT90Concept said:


> I have an original PCI Radeon with 32 MB.  That's enough VRAM, right? Right!?![/sarcasm]



Yup, the 970 comes to mind now: not enough RAM after 3.5GB.


----------



## TheoneandonlyMrK (May 5, 2019)

notb said:


> How much do games use these days at 4K ultra? Is 8GB really a big limitation?
> I quickly checked the analyses TPU provides.
> E.g. Metro Exodus, not even 6GB with RTX on:
> https://www.techpowerup.com/reviews/Performance_Analysis/Metro_Exodus/6.html
> ...





FordGT90Concept said:


> I generally don't trust these memory requirement assessments for games because usage varies wildly.  For example, there are reviews that say a GTX 1080 uses at most 4 GiB in Assassin's Creed: Origins at 4K ultra.  I'm not playing it on ultra, only at 1920x1200, and I've seen my VRAM use exceed 4 GiB.  This may be because I was many hours into a play session before I checked and the amount of content cached in VRAM had accumulated over time.  Or it could simply be where I was in the world.
> 
> Point is: memory is something you have enough of until you don't.  Xbox One X makes 9 GiB of VRAM/RAM available to games and most of that is used by the GPU. This has led to more games pushing closer to 6 GiB memory use.  PS5 and Microsoft's answer to it are likely to raise the memory threshold higher.  8 GiB will likely be okay in dGPUs, but there will be more games in the next 3-5 years that want more than 8 at high resolutions.
> 
> If I'm blowing $500+ on a card now, I'd want at least 10 GiB because I tend to use my cards for 3-5 years.


Apex Legends can use more than 8GB, and that's DX11. As a 4K ultra-IQ gamer, that 8GB limit gets tested today. Imagine what spec GTA VI will need.


----------



## notb (May 6, 2019)

theoneandonlymrk said:


> Apex Legends can use more than 8GB, and that's DX11. As a 4K ultra-IQ gamer, that 8GB limit gets tested today. Imagine what spec GTA VI will need.


"Can use" and "needs" are slightly different things. We've already said in this thread that GPUs gather a lot of data, so likely many games will go over the 8GB mark at some point. It doesn't mean 8GB are needed.
Apex Legends official recommended specs state 8GB RAM. And the game runs perfectly fine with that budget - even at 4K 60fps max settings.
Can it gather more data? Sure. Does it improve performance? I bet it doesn't. If you have tests that show it does, post a link.

GTA VI will work well on 4GB 6GB and won't benefit from more than 8GB. You know how I know this? Because that's the amount of RAM mainstream GPUs have. They want to sell tens of millions copies of this game. They'll design it to look well on tens of millions of PCs.


----------



## eidairaman1 (May 6, 2019)

notb said:


> "Can use" and "needs" are slightly different things. We've already said in this thread that GPUs gather a lot of data, so likely many games will go over the 8GB mark at some point. It doesn't mean 8GB are needed.
> Apex Legends official recommended specs state 8GB RAM. And the game runs perfectly fine with that budget - even at 4K 60fps max settings.
> Can it gather more data? Sure. Does it improve performance? I bet it doesn't. If you have tests that show it does, post a link.
> 
> GTA VI will work well on 4GB and won't benefit from more than 8GB. You know how I know this? Because that's the amount of RAM mainstream GPUs have. They want to sell tens of millions copies of this game. They'll design it to look well on tens of millions of PCs.


I don't think Grand Theft Auto 6 is out yet, so don't assume.


----------



## Vya Domus (May 6, 2019)

FordGT90Concept said:


> You sound so sure of yourself, yet, it hasn't been done yet, even in prototyping.



The good thing is you don't even need to believe me; you only need to look at how GPU performance has evolved over the last decade or so. This is the reason why GPU manufacturers have been able to keep up a fairly linear performance increase over the years: their major concerns were how to fit in more execution resources and how to get more memory bandwidth. Latency, while it can't be ignored, wasn't the main focus.

And on the other side, that's one of the reasons CPU performance hasn't gone up much as of late: you can make a CPU core with a million execution ports and TB/s of bandwidth available, but if it can't get that one particular instruction and its data in time, it will all be for nothing. Latency reigns king here.

Chiplet CPUs were a much more difficult problem to crack, which is probably why AMD decided to focus on this first.



FordGT90Concept said:


> Either engineers are wrong that have every incentive to go chiplet



So basically your argument boils down to: it hasn't been done yet, so it can't be done or it's very difficult. Kind of a weak point. Regardless, I feel like I explained my reasons for believing the opposite fairly well, so I'll end it here.


----------



## notb (May 6, 2019)

eidairaman1 said:


> I don't think Grand Theft Auto 6 is out yet, so don't assume.


I'm not assuming. It's an educated guess, sometimes called "a forecast".

Nvidia dominates the market and their latest-generation mainstream cards are 6-8 GB (not 4-8 as I said earlier). For the next 2 years no game developer will make a high-volume game that requires more. It wouldn't make any sense.


----------



## StudMuffin (May 6, 2019)

Divide Overflow said:


> How odd.  Suddenly those who trashed AdoredTV as an AMD fanboy are the biggest advocates of his latest rumors and speculations...





Vya Domus said:


> More interesting is that they want to come off as someone who doesn't want to have anything to do with his fanboy trash, yet they are the first ones to post his videos.
> 
> 
> How odd indeed.



I know right? They jump at any chance to toss dirt over AMD's way.

Long-term lurker here, finally decided to register to interact a bit, because there seem to be very few folks around these days that truly have an honest, genuine love for this hobby. Sad state of affairs; it's just as bad as all the political drivel we see here in the U.S. these days.




FordGT90Concept said:


> Lisa Su did that.  She was hyped about Navi and now she isn't.  Something clearly happened on Navi that was unexpected.



That's the most ridiculous assumption ever. Seriously? Dude? Come on lol.


Some of you bros are reaching very hard to make a freaking brand/company look as bad as possible at any chance possible and that's utterly childish, can't you see that? How stupid and petty that is? 

What ever happened to the genuine appreciation and enthusiasm for this hobby instead of all the snarky Anti-AMD comments that are made to look like honest discussion yet the obvious seeps right through the cracks because of some of these constant attackers that come at any opportunity possible. 

I mean, really, AMD's GPUs are something all of us should genuinely want to see succeed, and frankly they have succeeded, for over 20 years, bringing a lot of great technology to the GPU side of things. One of the reasons I really appreciate AMD GPUs is how aggressively they go after innovation; some really great things have come from that approach (and other times not so much), but what AMD has done has steered the GPU industry forward in a lot of ways. Take AMD's Mantle API: honestly, that is what we have today in DirectX 12 and Vulkan. It's just one example of how AMD's GPU division brings innovation to the table that often ends up setting the trend for all GPU technologies. So why on earth some people carry pent-up hatred towards AMD GPUs is utterly ridiculous. Whenever I buy an AMD GPU, I feel like I'm buying a prototype, because honestly, a lot of the time that's what we're getting: prototype hardware feature sets. Look at the performance increase we get when a game is truly coded to take advantage of AMD's superior DirectX 12 feature set on its current GPUs; it takes a huge leap forward, matching Nvidia's next-higher-tier GPUs. Just look at the recent Forza games, where the Vega 64 and 56 leap up in performance. It's quite amazing and very interesting.

We will see a lot more of this soon with the last round of games for the PS4 Pro and Xbox One X, as developers finally build games from the ground up for the AMD DX12 hardware features in the current consoles and, more importantly, in the upcoming generation: both the next Xbox and the PS5 will have semi-custom Navi GPU hardware inside. PC gamers who happen to have a Vega or Navi GPU will reap major benefits from this, starting about now and for the foreseeable future, especially with next-gen games on the new consoles from Sony and MS. Personally, if someone wants the best bang-for-buck GPU hardware with long-lasting performance, AMD is the way to go. I'm not talking about the enthusiast who must have the top-dollar best every 12 months; I'm talking about the PC gamer who wants their hardware to last 3-4 years. Vega/Navi is the way to go, no two ways about it; you'd be wasting your hard-earned cash on the Nvidia equivalent, imho.

Anyways, I really do appreciate those of you in here who have a genuine appreciation for both Nvidia and AMD hardware, especially AMD GPU hardware, because I find AMD doesn't get enough credit these days. AMD GPUs are very innovative, always pushing for the next new technology to lead the way forward, and AMD does it time and time again. Keep in mind AMD has a fraction of the R&D budget Nvidia has for GPU development, yet look at what AMD has done for the GPU industry over the years. It's done a lot, and seeing people here purposely trash AMD is just downright disrespectful, not just towards AMD as a company but towards those of us with a genuine appreciation for this PC hardware/gaming hobby.

If any of you are in the market for a GPU and want the best bang for buck, go with an AMD GPU. Lots of games on the current PS4 Pro and Xbox One X are now coming out taking full advantage of the AMD GPUs inside those consoles, and that rolls over to the PC side as performance gains appear over time with more mature AMD drivers. Most importantly, remember that the upcoming next-gen consoles are using AMD GPUs again, a hybrid of Navi hardware features and some custom features, so PC users with AMD GPUs will reap benefits big time in the future. And don't get me wrong, I appreciate Nvidia just as much, but that's for another discussion since this thread is AMD-focused.




----------



## notb (May 6, 2019)

FordGT90Concept said:


> 3 years ago.  At that time, GTX 970 were made functionally obsolete because it doesn't have enough VRAM.  Now GTX 1060 6 GiB is dangerously close to being overloaded.  Meanwhile, RX 470 8 GiB is fine, so is RTX 2080...but not for long.


This is not what I was asking about.
You have data points from 3 years. 4K resolution. Pretty much the same RAM usage.

Why do you expect a sudden change of trend now? Why would textures start to grow in the next 3 years?


> They're one and the same for Radeon VII.  1 TB/s versus 483.8 GB/s and 16 GiB versus 8 GiB.  You get double the bandwidth along with double the VRAM.  It's win-win, for the reasonable price of $100.


So you'd rather pay $100 for meaningless datasheet numbers than for something empirical?
What's the point of the larger bandwidth or larger VRAM? Nvidia cards still consume fewer watts and deliver more fps.
I'm sure there's an inflection point somewhere. What if AMD made a 32GB version for +$200? Would you go for that? I mean: it's so much RAM! Like on pro cards!

I don't understand how you buy PC parts. What's the goal?


----------



## TheoneandonlyMrK (May 6, 2019)

notb said:


> I'm not assuming. It's an educated guess, sometimes called "a forecast".
> 
> Nvidia dominates the market and their mainstream cards of latest generation are 6-8 GB (not 4-8 as I said earlier). For the next 2 years no game developer will make a high volume game that requires more. It wouldn't make any sense.


Your belief that increasing settings does not increase graphical image quality (by using more VRAM) is nonsense, and I'm not searching for anything to prove it.

But I will state, because I've experienced it (not assumed or believed), that Apex Legends scales its VRAM usage very well. 4K with 4GB of VRAM might run okay, but only because it drops the resolution scale to between a quarter and a third. I've tried it, and it does not look the same on a 4K monitor.

How about you prove that settings do little, since their inclusion in almost every game seems to support my "illusion" of higher image quality being right?

Nvidia means nothing in console land, and that user base is expected to grow (~20%). The mainstream will be well represented with Navi derivatives; the PC master race can keep buying Nvidia if it wants. AMD will be fine, I'm sure.

You won't though; the future's bleak for you. You don't want AMD GPUs, you don't want multi-core, you don't want anything better than the lowest grade of GPU in your system. You just want Nvidia to rule the world, yet you're blind to the many millions who couldn't give a shit whose chip is inside their gaming beast (consoles, kids especially, with their Pros and Ones).

And you love hanging your tongue out in a tech and computer enthusiasts' forum.


----------



## eidairaman1 (May 6, 2019)

notb said:


> I'm not assuming. It's an educated guess, sometimes called "a forecast".
> 
> Nvidia dominates the market and their mainstream cards of latest generation are 6-8 GB (not 4-8 as I said earlier). For the next 2 years no game developer will make a high volume game that requires more. It wouldn't make any sense.



Educated guess is an oxymoron


----------



## notb (May 6, 2019)

theoneandonlymrk said:


> your belief that increasing settings does not increase graphical image quality (using more Vram)is nonesens  ,im not searching nothing to prove that.


That's the question I've asked @FordGT90Concept .
For 3 years we haven't seen a significant increase in VRAM needs. 4K games utilize roughly the same amount. And that's on highest settings games offer.
So why would this trend change now? Why would games launching in the next 3 years utilize more?
It'll still be 4K.
Maybe you know?

Looking at RTX cards, clearly RTRT and DLSS need some VRAM (1-2GB above what the game normally needs). But that's hardware-backed functionality that won't magically appear on Radeon VII. And mainstream Nvidia RTX cards are <=8GB nevertheless.



eidairaman1 said:


> Educated guess is an oxymoron


No, it isn't.


----------



## eidairaman1 (May 6, 2019)

notb said:


> That's the question I've asked @FordGT90Concept .
> For 3 years we haven't seen a significant increase in VRAM needs. 4K games utilize roughly the same amount. And that's on highest settings games offer.
> So why would this trend change now? Why would games launching in next 3 years utilize more?
> It'll still be 4K.
> ...



It's like a guesstimate... smh


----------



## xkm1948 (May 6, 2019)

HD64G said:


> Intel will need at least 5 years (most possible 10) to be able to compete with nVidia for the high-end consumer GPU market. And they might be obliged to use Samsung's Fabs in order to even begin their mass-market GPU production. They are not in form lately in many fronts. I also would like to have a 3 or more part competition in CPU and GPU market but it is very hard for a newcomer to compete with the established ones for the 1st few years.



Who knows. Intel is not starting from zero, as they have been making iGPUs forever. Realistically, with its R&D force as well as pure cash flow, Intel has a much better chance of battling Nvidia in the dGPU market. For AMD, fighting on two fronts with its limited resources will always be hard.

There will always be competition, don't worry too much.


----------



## notb (May 6, 2019)

eidairaman1 said:


> It's like guesstimate... smh


You don't know what "educated guess" means and you can't even use a dictionary...

Focus on giving +1 to people that praise AMD. Why leave the comfort zone?



xkm1948 said:


> Who knows. Intel is not starting from 0 as they have been making iGPU forever. Realistically with the R&D force as well as pure cash flow, Intel has way better chance of battling Nvidia at dGPU market. AMD with its limited amount of resources fighting on 2 fronts will always be hard.


Well... not so long ago some people on this forum were sure that Intel designed 6-core CPUs in a few months, because AMD surprised them with Zen. Otherwise they'd have been making 4 cores forever.
Yet when it comes to GPUs, the same people are very worried about Intel's R&D potential... ;-)


----------



## eidairaman1 (May 6, 2019)

notb said:


> You don't know what "educated guess" means and you can't even use a dictionary...
> 
> Focus on giving +1 to people that praise AMD. Why leave the comfort zone?
> 
> ...



Wrong there hypocrite


----------



## moproblems99 (May 6, 2019)

eidairaman1 said:


> Yup 970 comes to mind now, not enough ram after 3.5GB.



Honestly, I think the 970 got a pretty decent service life for what it was.



theoneandonlymrk said:


> imagine what spec GtaVI will need.



By the time GTAVI makes it to PC, all of these GPUs will be obsolete anyway.  What did it take for GTAV?  Over a year?  RDR2 still isn't here 6 months later.  If anyone knows how to milk people, it's Rockstar.


----------



## Caring1 (May 6, 2019)

notb said:


> You don't know what "educated guess" means and you can't even use a dictionary...


An "educated" guess is still a guess!


----------



## eidairaman1 (May 6, 2019)

moproblems99 said:


> Honestly, I think the 970 got a pretty decent service life for what it was.
> 
> 
> 
> By the time GTAVI makes it to PC, all of these GPUs will be obsolete anyway.  What did it take For GTAV?  Over a year?  RDR2 still isn't here 6 mos later.  If anyone knows how to milk people, it's Rockstar.



The 290 has held up even longer.



Caring1 said:


> An "educated" guess is still a guess!


----------



## notb (May 6, 2019)

eidairaman1 said:


> Wrong there hypocrite


Wrong about your skills with dictionaries?
Well... it was an educated guess as well... ;-)


----------



## FordGT90Concept (May 6, 2019)

Vya Domus said:


> That's the most ridiculous assumption ever, seriously? dude? Come on lol.


Assumption?  You realize these were statements to investors, right?  Making false statements to investors is fraud.

She went from being excited about Navi in January 2019 to being evasive about Navi in April 2019.


----------



## moproblems99 (May 6, 2019)

notb said:


> Well... not so long ago some people on this forum were sure that Intel designed 6-core CPUs in few months - because AMD surprised them with Zen. Otherwise they'd be making 4 cores forever.
> Yet, when it comes to GPUs, the same people are very worried about Intel's R&D potential... ;-)



I don't understand why everyone is so hyped on Intel GPUs, you may as well hope AMD turns RTG around.

What makes anyone think that the people who produced Vega and Polaris, moving to a company that has as much money as the Pope but has never been able to produce a decent GPU, are somehow going to buck the trend?


----------



## xkm1948 (May 6, 2019)

moproblems99 said:


> Honestly, I think the 970 got a pretty decent service life for what it was.
> 
> 
> 
> By the time GTAVI makes it to PC, all of these GPUs will be obsolete anyway.  What did it take For GTAV?  Over a year?  RDR2 still isn't here 6 mos later.  If anyone knows how to milk people, it's Rockstar.




In VR the 970 was even faster than my old Fury X, specifically in Fallout 4 VR, simply because AMD refused to add Asynchronous Reprojection and Motion Smoothing to the R9 GPUs.

3.5GB of "slow" GDDR5 winning over 4GB of HBM; that was pretty sad.


----------



## eidairaman1 (May 6, 2019)

moproblems99 said:


> I don't understand why everyone is so hyped on Intel GPUs, you may as well hope AMD turns RTG around.
> 
> What makes anyone think the people that produced Vega and Polaris are going to go to a company that has as much money as the Pope but hasn't been able to produce a decent GPU are somehow going to buck the trend?



Considering Intel's main focus, like Nvidia's, is AI, gaming takes a far back seat.

We are not the money makers for any corporation; it's other corporations that make these companies money.


----------



## notb (May 6, 2019)

Caring1 said:


> An "educated" guess is still a guess!


Well, actually no. An educated guess is more like inference (reasoning).


----------



## xkm1948 (May 6, 2019)

moproblems99 said:


> I don't understand why everyone is so hyped on Intel GPUs, you may as well hope AMD turns RTG around.
> 
> What makes anyone think the people that produced Vega and Polaris are going to go to a company that has as much money as the Pope but hasn't been able to produce a decent GPU are somehow going to buck the trend?



Because a good amount of former ATI talent is now at Intel. A shit-ton more money will definitely help as well. Kyle of HardOCP speculated about this a loooooong time ago, and it seems to be coming true now.

https://www.hardocp.com/article/2016/05/27/from_ati_to_amd_back_journey_in_futility

I liked ATi, so since most of the team is now at Intel, I might as well root for Intel GPUs now.


----------



## eidairaman1 (May 6, 2019)

notb said:


> Well, actually no. Educated guess is more like inference (thinking).



It's not, it's an assumption.


----------



## FordGT90Concept (May 6, 2019)

notb said:


> You have data points from 3 years. 4K resolution. Pretty much the same RAM usage.


The rate at which memory usage is climbing is exponential, because the 32-bit barrier is gone (which is what made the GTX 970 feasible) and consoles are finally shipping with amounts that aren't pathetic, because developers demanded it.

Again: a $500+ product with only 8 GiB of VRAM?  Radeon VII is the better product from a value proposition because it won't be obsoleted by growing memory demand as fast as the RTX 2080 and RTX 2070 will be.  NVIDIA did that, does that, and will continue to do that intentionally (planned obsolescence).  AMD has a history of being generous on memory (except Fury, because of technical limitations of HBM).
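For a sense of scale, here is a rough back-of-envelope sketch of where VRAM goes (the resolutions, bytes per pixel, and mip overhead below are illustrative assumptions, not measurements of any particular game):

```python
# Back-of-envelope VRAM arithmetic (illustrative only): a single 32-bit RGBA
# frame buffer at 4K is small; it's the texture set that eats VRAM, and an
# uncompressed texture's footprint quadruples every time its resolution
# doubles on both axes.

def framebuffer_mib(width, height, bytes_per_pixel=4):
    # Uncompressed RGBA8 render target.
    return width * height * bytes_per_pixel / 2**20

def texture_mib(size, bytes_per_pixel=4, mip_overhead=4/3):
    # A full mipmap chain adds roughly one third on top of the base level.
    return size * size * bytes_per_pixel * mip_overhead / 2**20

print(f"4K frame buffer: {framebuffer_mib(3840, 2160):.1f} MiB")  # 31.6 MiB
print(f"2K texture:      {texture_mib(2048):.1f} MiB")            # 21.3 MiB
print(f"4K texture:      {texture_mib(4096):.1f} MiB")            # 85.3 MiB (4x)
```

Multiply that by a few hundred textures plus render targets and shadow maps, and 8 GiB stops looking generous.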


For the record, HBCC is a carry-over from Radeon Instinct, where even 32 GiB often isn't enough.  HBCC, I think, allows DMA to virtually all storage resources in the machine, so the GPU can pull data directly without waiting for the CPU to do it.  Not really important for games, especially when you have more than enough dedicated VRAM, but when you're working with datasets in the terabytes (like laser scans and raytracing), HBCC reduces latency.


----------



## notb (May 6, 2019)

moproblems99 said:


> I don't understand why everyone is so hyped on Intel GPUs, you may as well hope AMD turns RTG around.
> 
> What makes anyone think the people that produced Vega and Polaris are going to go to a company that has as much money as the Pope but hasn't been able to produce a decent GPU are somehow going to buck the trend?


Because Intel is the most mature company in this business: very serious and committed to enterprise clients. They will make a good GPU. Maybe not extremely fast, but extremely polished, focused and easy to use (with minimal tinkering). And yeah, I think they could go as far as locking OC. 

It'll be an interesting alternative. If you'd rather get a lot of RAM, a lot of oomph and a lot of driver lottery, AMD will probably still be around.


----------



## FordGT90Concept (May 6, 2019)

Intel is going to come out of the gate with a process tech disadvantage.  The rumor mill says 10 nm, but I don't buy it.  Nothing Intel has produced to date on 10 nm comes remotely close to the densities necessary for a monolithic GPU.  I think the only way Intel is competitive is by outsourcing dGPUs to TSMC 7 nm.  On top of that, I'm still not convinced Intel is even prioritizing real-time rendering.  The big money, and the market Intel is losing, is in the enterprise compute space (Radeon Instinct and Tesla).


----------



## notb (May 6, 2019)

eidairaman1 said:


> It's not, it's an assumption.


"Assumption" is in different class. It's statement that is treated as true without a proof.
"Guess and "educated guess" are not treated as true, but as uncertain.
But a guess is random and an educated guess is based on some information one has.

So for example:
"eidairaman1 can't use a hammer" is a guess - I have no idea, no information.
but
"eidairaman1 can't use a dictionary" is an educated guess - an inference - because over many years of using Internet forums I encountered many people that didn't know some word (no shame). But they quickly checked whether I'm right and either admitted or changed the subject. And you keep drowning.


----------



## Caring1 (May 6, 2019)

notb said:


> Well, actually no. Educated guess is more like inference (thinking).


Sure, based on prior evidence and consensus.
Neanderthal: Dinosaurs will rule the Earth forever.
Scientists circa 1500: The Earth is flat. The Earth is the centre of the Universe. Etc.


----------



## moproblems99 (May 6, 2019)

notb said:


> Because Intel is the most mature company in this business: very serious and committed to enterprise clients. They will make a good GPU. Maybe not extremely fast, but extremely polished, focused and easy to use (with minimal tinkering). And yeah, I think they could go as far as locking OC.



None of this should inspire a lot of enthusiasm for 'gamers'.

EDIT: Also, what GPU isn't easy to use?


----------



## TheoneandonlyMrK (May 6, 2019)

notb said:


> That's the question I've asked @FordGT90Concept .
> For 3 years we haven't seen a significant increase in VRAM needs. 4K games utilize roughly the same amount. And that's on highest settings games offer.
> So why would this trend change now? Why would games launching in next 3 years utilize more?
> It'll still be 4K.
> ...


You haven't seen shit, simple as that.

This is clear because I can use close to 8GB of VRAM in GTA V at 4K ultra settings, and guess what: the Radeon VII, with twice the bandwidth, runs it better maxed out, as do the 2080 and Ti.
You're talking rubbish. Sure, I could game using less VRAM, but I would HAVE to reduce resolution or settings; it really is that simple.
For three years I have seen VRAM use increase, because I was always going for max IQ at the best resolution I could.
You're running a potato yet know so much about 4K gaming. How? Because you read reviews.


----------



## eidairaman1 (May 6, 2019)

notb said:


> "Assumption" is in different class. It's statement that is treated as true without a proof.
> "Guess and "educated guess" are not treated as true, but as uncertain.
> But a guess is random and an educated guess is based on some information one has.
> 
> ...



The only one drowning is you because you assume.


----------



## ShiBDiB (May 6, 2019)

cucker tarlson said:


> A fanboy yt channel is now quoted as a source for a clickbait article,proof people never learned.



Threads like this used to get locked here... now they're just where the remaining active members let their fanboy flags fly. Oh how TPU has fallen (but so has every traditional forum).


----------



## ShurikN (May 6, 2019)

Caring1 said:


> Scientists circa 1500: The Earth is flat.


Scientists have claimed the Earth is round since antiquity. You're mixing that up with the "Sun orbits the Earth vs. Earth orbits the Sun" argument.


----------



## Assimilator (May 6, 2019)

notb said:


> Basically, many AMD fans say something like this: objectively Radeon GPUs are sh*t, but AMD is small, poor and doesn't give a f*ck, which makes Radeon GPUs great.



This is so true.


----------



## Vayra86 (May 6, 2019)

vega22 said:


> I'm not sure the car analogy works. It paints amd as a cheap brand while it works quite well for NV given their history of lying and cheating. Maybe vw and ford or BMW and ford would of worked better. They both make good products but 1 is perceived as being "better".
> 
> But I know what you mean. Amd have been more chasing the mid range, family sedan while NV have been chasing the high end sports coupe market.



VW = Nvidia = Dieselgate; it's perfect? Meanwhile, Dacia has a spotless reputation and competes on price; Ford does not. You said it right: perceived to be better. Not _really_ better at getting the job done. In fact, VW has worse failure rates, I believe.

Anyway, let's move on.



theoneandonlymrk said:


> Apex legends can use more than 8GB and that's dx11,as a 4k ultra IQ gamer that 8GBlimit gets tested Today, imagine what spec GtaVI will need.



What!? Apex Legends? Not sure that's a great example. I can almost count the number of distinct assets in that game on my two hands.


----------



## notb (May 6, 2019)

moproblems99 said:


> None of this should inspire a lot of enthusiasm for 'gamers'.


Why not?


> EDIT: Also, what GPU isn't easy to use?


The one you have to underclock, overclock, flash, tune, and pick the right driver for (to get the performance figures everyone talks about on internet forums).

I won't be surprised if drivers end up coming through Windows Update and some GPUs ship with locked clocks.


eidairaman1 said:


> The only one drowning is you because you assume.


You have no idea how math or science work, do you? :-D
Next time you comment in an AI-related thread, I'll ask you what "inference" is. Maybe you can at least use wikipedia. ;-)


Caring1 said:


> Sure, based on prior evidence and concensus.


I said: based on information one has. It doesn't have to be true and inference doesn't have to be correct.


> Scientists circa 1500: The Earth is flat.


The ancient Greeks were already pretty certain the Earth is round.
Anyway, the "flat Earth" approximation will never go away in science and engineering. It's very good.


> The Earth is the centre of the Universe etc.


In classical physics it is relative, so there was really no way to settle this until we had gravity equations (17th century).


----------



## ratirt (May 6, 2019)

Well, I read a few pages here hoping for some "hype train" stuff, but I guess the train's left the station already. Too bad I missed it. The same nicknames appear and, of course, the same old stories get rehashed. Apples or oranges? I see the same persistence from some people drowning in their own thoughts. Amazing, really, how hopeless and depressed some of you are.

On topic.
This is frustrating. I've been hearing about Navi for a while now, and AMD pushed the release back. The question is why. Will it be as good as WCCFTECH claims? I sure hope so. And this ray tracing: it's everywhere now, and it's starting to get boring. As of now, I really couldn't care less about RTRT. I just want a good card for a reasonable price.


----------



## ShurikN (May 6, 2019)

ratirt said:


> On topic.
> This is frustrating. Been hearing about the Navi for a while now. AMD pushed the release. The question here is why? Will it be that good as WCCFTECH claims? well I sure hope so. And this Ray Tracing  It's everywhere now and it starts to be boring. As of now and all the RTRT I really couldn't care less about it. Just want a good card for a reasonable price.


Well, we have been hearing about Navi for a while, just not from AMD. Huge difference. They mention it here and there as an afterthought; everything has mostly come from rumors and "leaks".
And another thing: WCCF is about as reliable as a 1980 Zastava Yugo. Don't give them too much credit.


----------



## ratirt (May 6, 2019)

ShurikN said:


> Well we have been hearing about Navi for a while, just not from AMD. Huge difference. They mention it here and there as an afterthought. Everything mostly came from rumors and "leaks".
> And another thing, WCCF is as reliable as a 1980 Zastava Yugo. Don't give them too much credit.


I never give much credit to anybody; I'm the type that believes it when he sees it. I know it's still assumptions, but their premise is intriguing and they must have something to support it. I want to buy a Radeon VII, but maybe waiting a bit longer would be a good idea. What's the release date for Navi now? I think AMD has a shot with Navi. I'll probably get pounded here for the "old GCN crap", but everything can be improved. AMD must buckle up and finally deliver the performance to stand up to NV. Now's a good time for it. Hopefully they can pull it off.


----------



## ShurikN (May 6, 2019)

ratirt said:


> I never give much credit to anybody. I'm the dude that believes when sees it. I know it's still assumptions but their premise is intriguing and they must have something to support this. I want to buy RVII but maybe a good thing would be waiting a bit longer. *What's the release date for NAVI now?* I think AMD will have a shot with NAVI. Probably I will be pounded for the GCN old crap here but everything can be improved. AMD must buckle up and give the performance finally to stand up to NV. Now's a good time for this. Hopefully they can pull it off.


Apparently Q3 this year. That's Navi 10, intended to replace Polaris and Vega.


----------



## moproblems99 (May 6, 2019)

notb said:


> Why not?



First, you said it yourself: they are committed to enterprise, and enterprise only. Second, look at recent Intel history and make an educated guess. Enthusiasts and 'gamers' are basically an afterthought to Intel. Actually, a piggy bank for when times get tough, so they can poop out some half-assed product (more recently) and add $150-$500 to it. I can't say I blame them when it's that easy.

Add to that the fact that all the people they hired have been churning out GPUs that aren't really great at anything, and voila! You have exactly nothing to be excited about. My educated guess says we don't have much to look forward to from them. Maybe a better Instinct, but what does that get us?


----------



## R0H1T (May 6, 2019)

Take with two teaspoons of rock salt & distilled water, for (less) aftertaste.


----------



## xkm1948 (May 6, 2019)

R0H1T said:


> Take with 2 teaspoon full of rock salt & distilled water, for (less) aftertaste




Fairly sure Sony will not use the same name as Nvidia’s DLSS for deep learning based de-noising


----------



## erocker (May 6, 2019)

HD64G said:


> So, you didn't know that Navi 10 was the Polaris successor coming in 2019 and the Vega successor is the Navi 20 that would launch in 2020? Those rumors are over a year old to be confused with the Radeon 7 launch that was a product to buy AmD time until Vega 20 is ready.


I don't pay attention to rumors, so it doesn't matter. What matters is that they're taking a long time and market prices are inflated due to the lack of competition. The sooner the better.


----------



## R0H1T (May 7, 2019)

xkm1948 said:


> Fairly sure Sony will not use the same name as Nvidia’s DLSS for deep learning based de-noising


Radeon Rays is already a thing, or did you mean something else?


----------



## FordGT90Concept (May 7, 2019)

The picture mentions DLSS by name (end of the first bullet).

I can't see Navi having tensor cores at all, so the chances of deep learning anything are nil. Arcturus might have tensor cores; Navi won't.


----------



## xkm1948 (May 7, 2019)

FordGT90Concept said:


> The picture mentions DLSS by name (end of the first bullet).
> 
> I can't see Navi having tensor cores at all so the chances of deep learning anything are none.  Arcturus might have tensor cores, not Navi.



TensorFlow support for GCN on Linux is quite poor; I experienced it first-hand when I still had the Fury X. Very few developers actually spend time developing for GCN-based cards. So yeah, I agree: without a dedicated tensor ASIC and *good software support*, DLSS-level de-noising would be very hard on an AMD GPU.

Also, R0H1T, DLSS has nothing to do with ray tracing. Not talking about ray tracing here at all.


----------



## R0H1T (May 7, 2019)

It's from 2018; no idea whether it's fake or not, but *it is possible*, given RTX cards were released less than a year ago. Also, we don't know for sure what DLSS stands for in that slide.


----------



## FordGT90Concept (May 7, 2019)

...Volta has tensor cores and it debuted in 2017.  It took almost a year for that to evolve into Turing and for DLSS to be created on top of it.  AMD is quite far behind in this area.  And why would Sony want anything to do with it anyway?  DLSS's only reason to exist is to cover up the fact that they're rendering at low resolutions to hide the raytracing performance drop.

Mentioning DLSS (an NVIDIA technology) in relation to AMD shows the author of said picture doesn't have a basic understanding of what it is, which calls into question the accuracy of all of it.  More glaring examples:
1) 6c/12t, when chiplets are 8c/16t.  The leak suggesting 8c/8t makes a lot of sense for backward-compatibility reasons with the PS4 and PS4 Pro.  6c is going to create threading issues because two cores will carry a higher load than the rest (and it could choke).
2) Sony doesn't use anything off the shelf, so why would they use Radeon Rays off the shelf?  They likely have a custom raytracing implementation.
3) "post-processing of the buffers"... let me put my best "wut" expression on.
4) "8K?" 
5) these TFLOP/shader counts look like they're copied from Vega.
6) "14 GB available to developers" leaves 2 GB for the OS, when the PS4 kept 3 GB for itself.  They'll likely expand that reservation, not shrink it.
7) 2 TB 2.5" HDDs retail for $85.  I'm thinking either a 3.5" HDD or, more likely, an SSD of unknown capacity.  They'd probably use older, slower chips bought in bulk.
8) 802.11ax?  802.11ac at best.
9) that last bullet is 100% BS.


----------



## xkm1948 (May 7, 2019)

R0H1T said:


> It's from 2018, no idea whether fake or not but *it is possible* given RTX cards were only released less than a year back. Also we don't know for sure what DLSS stands for in that slide.




D~L~S~S

Deep Learning Super Sampling


https://www.nvidia.com/en-us/geforc...-new-technologies-in-rtx-graphics-cards/#dlss

Whoever made that pic did their best to fake it without even paying attention to the acronyms.



Well, unless Sony is actually using a "DISCRETE LOGIC SOLVING SYSTEM" to control a nuclear power plant with the help of Lockheed Martin. TBH, with those BS specs running off GCN, it might actually need a nuclear power plant to power it, lol.
https://trademarks.justia.com/870/97/dlss-87097534.html


----------



## steen (May 7, 2019)

xkm1948 said:


> So yeah I agree, without dedicated Tensor flow ASIC and *good software support*, DLSS level de-nosing would be very hard on AMD GPU.



Just out of curiosity, what does this mean? Are you equating a pre-computed edge reconstruction filter applied after scanout to denoising of low pass RTRT?



> Also R0h1t, DLSS has nothing to do with Ray Tracing. Not talking about Ray Tracing here at all.



Hence my confusion. What do you think DLSS is?


----------



## cucker tarlson (May 7, 2019)

xkm1948 said:


> D~L~S~S
> 
> Deep Learning Super Sampling
> 
> ...


Yup, this is ~300 W of GPU in a console.


----------



## R0H1T (May 7, 2019)

xkm1948 said:


> D~L~S~S
> 
> Deep Learning Super Sampling
> 
> ...


You never know with AMD. However, *if* there really is no DLSS equivalent there, one would hope the leakers could avoid such an obvious mistake. Who knows; frankly, I'm just *shoveling coal* here.
Next stop: *E3*


----------



## ratirt (May 7, 2019)

steen said:


> Just out of curiosity, what does this mean? Are you equating a pre-computed edge reconstruction filter applied after scanout to denoising of low pass RTRT?
> 
> Hence my confusion. What do you think DLSS is?


He's right, it has nothing to do with ray tracing itself. It ships alongside RT to gain FPS by rendering at a lower resolution (at some cost to image quality) and upscaling the result.
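A rough sketch of the arithmetic behind that trade, assuming shading cost scales roughly with pixel count (a simplification; the 1440p-to-4K pairing below is just a common example, not a quoted DLSS spec):

```python
# Rendering at a lower internal resolution and upscaling to native trades
# image quality for frame time, because shading work tracks pixel count.

def pixels(width, height):
    return width * height

native = pixels(3840, 2160)    # native 4K output
internal = pixels(2560, 1440)  # assumed internal render resolution

print(f"internal/native pixel ratio: {internal / native:.2f}")  # 0.44
```

Roughly 56% of the shading work disappears before the upscaler ever runs, which is where the extra FPS comes from.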



FordGT90Concept said:


> I can't see Navi having tensor cores at all so the chances of deep learning anything are none. Arcturus might have tensor cores, not Navi.


Why would Navi have tensor cores? It's not an NV product. There are other ways of supporting ray tracing; it doesn't need tensor cores, which are Nvidia-specific.



cucker tarlson said:


> Yup,this is ~300w of gpu in a console


I missed something here. Where does it say 300W??


----------



## FordGT90Concept (May 7, 2019)

cucker tarlson said:


> Yup,this is ~300w of gpu in a console


Like Xbox One X: they take a big chip and run it at low clocks which translates to low wattage.  The whole system will likely use less than 200w so about in line with Xbox One X.



ratirt said:


> Why would Navi have tensor cores? It's not NV product. There are other ways of supporting ray tracing. It doesn't need to have tensor cores which is Nvidia specific.


Tensors aren't for raytracing, they're for AI which AMD is way behind in.  Arcturus is presumably the next architecture focused on Radeon Instinct like Vega was.


----------



## Vayra86 (May 7, 2019)

R0H1T said:


> Take with 2 teaspoon full of rock salt & distilled water, for (less) aftertaste



Source? This to me looks fake as hell and it certainly isn't © Sony

Nah, this reads like some raging fan's wet dream, not reality.


----------



## R0H1T (May 7, 2019)

Reddit.


----------



## Vayra86 (May 7, 2019)

R0H1T said:


> Reddit.


----------



## ratirt (May 7, 2019)

FordGT90Concept said:


> Like Xbox One X: they take a big chip and run it at low clocks which translates to low wattage.  The whole system will likely use less than 200w so about in line with Xbox One X.
> 
> 
> Tensors aren't for raytracing, they're for AI which AMD is way behind in.  Arcturus is presumably the next architecture focused on Radeon Instinct like Vega was.


To be more precise, tensor cores are not exactly AI themselves; they enable it by mixing precisions to finish work faster at the cost of accuracy, and they come along with RT to fill in the blanks RT can't complete on its own.
Besides, I still think tensor cores are NV-specific. As far as I remember, AMD is going to have something similar with Navi; they talked about mixing precisions, and they will not call it tensor for sure.


----------



## londiste (May 7, 2019)

ratirt said:


> cucker tarlson said:
> 
> 
> > Yup,this is ~300w of gpu in a console
> ...


The rumored specs are effectively a 7nm Vega 56. 11 TFLOPS puts the clock at around 1500MHz. It will not be 300W; from the looks of it, it should stay at around 180-200W. The performance level of such a GPU would be equal to Vega 64 at about 30% lower power consumption.
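The clock figure checks out as a back-of-envelope, assuming the rumored part keeps Vega 56's 3584 shaders (an assumption, not confirmed anywhere):

```python
# Back-of-envelope FLOPS: 2 ops per FMA * shader count * clock.
# Assumes the rumored chip keeps Vega 56's 3584 shaders (hypothetical).
shaders = 3584
clock_ghz = 1.5
tflops = 2 * shaders * clock_ghz / 1000
print(tflops)  # 10.752, i.e. roughly the rumored 11 TFLOPS
```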


----------



## FordGT90Concept (May 7, 2019)

ratirt said:


> If you want to be more precise then tensor cores are not exactly AI but to enable it by mixing precisions to finish work faster sacrificing accuracy and it comes along with RT to fill other blanks RT can't complete.
> Besides I still think tensor cores are NV specific. As far as I remember, AMD is going to have something similar with Navi. They talked about mixing precisions and they will not call it tensor for sure.


Tensor cores are FP16*FP16+(FP16|FP32) matrix solvers. Deep Learning for dummies.
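To make that concrete, here is a minimal NumPy sketch of the op described above (not actual tensor-core code): FP16 multiply inputs with a wider FP32 accumulate.

```python
import numpy as np

# Sketch of a tensor-core-style op D = A*B + C: operands enter as FP16,
# products are summed into an FP32 accumulator (the "FP16|FP32" part).
A = np.random.rand(4, 4).astype(np.float16)
B = np.random.rand(4, 4).astype(np.float16)
C = np.zeros((4, 4), dtype=np.float32)

# Widen to FP32 for the accumulate, as the hardware does.
D = A.astype(np.float32) @ B.astype(np.float32) + C
```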


----------



## TheoneandonlyMrK (May 7, 2019)

FordGT90Concept said:


> Tensor cores are FP16*FP16+(FP16|FP32) matrix solvers. Deep Learning for dummies.


I think AMD are using Rapid Packed Math to do the same, which is probably why they sorted async compute out first; they're now set up to do compute and graphics on the fly.


----------



## FordGT90Concept (May 7, 2019)

Rapid Packed Math is really simple: the FP32 FPUs can alternatively handle 2xFP16 in the same space/cycle.
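The packing itself can be shown in a few lines; this is just a bit-level sketch of the idea, not AMD's implementation:

```python
import numpy as np

# Two FP16 values share one 32-bit register, which is why an FP32-wide
# lane can do two FP16 ops in the same space/cycle.
a = np.float16(1.5)
b = np.float16(-2.25)

# Pack: reinterpret each half as 16 bits and splice into one 32-bit word.
packed = (int(b.view(np.uint16)) << 16) | int(a.view(np.uint16))

# Unpack: each half round-trips exactly.
lo = np.uint16(packed & 0xFFFF).view(np.float16)
hi = np.uint16(packed >> 16).view(np.float16)
```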


----------



## TheoneandonlyMrK (May 7, 2019)

FordGT90Concept said:


> Rapid Packed Math is really simple: the FP32 FPUs can alternatively handle 2xFP16 in the same space/cycle.


Well, that's its initial implementation; later versions support lower bit widths like 4x16-bit, 8x8-bit and 16x4-bit, and that's through 64-bit wavefronts, not 32. On 32-bit jobs it can still do 2x throughput.

This is why GCN isn't changing as soon as some would like.


----------



## londiste (May 7, 2019)

theoneandonlymrk said:


> Well that's it's initial implementation, later versions support lower bit ranges like 4x16bit 8x8bit 16x4bit and that's through 64bit wavefronts not 32 ,on 32 bit jobs it can still throughout 2x.
> This is why Gcn isn't changing as soon as some would like.


Vega already has 1xFP32, 2xFP16, 4xINT8 and 8xINT4, and so does Turing. Pascal should have everything besides 2xFP16.
Lower bit widths have quite limited utility, though, and have really not been used much outside of some ML applications.
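All of those rates fall out of one packing rule for a 32-bit lane:

```python
# Throughput multiplier = 32 / operand width in bits.
rates = {name: 32 // bits
         for name, bits in [("FP32", 32), ("FP16", 16), ("INT8", 8), ("INT4", 4)]}
print(rates)  # {'FP32': 1, 'FP16': 2, 'INT8': 4, 'INT4': 8}
```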


----------



## ratirt (May 7, 2019)

FordGT90Concept said:


> Tensor cores are FP16*FP16+(FP16|FP32) matrix solvers. Deep Learning for dummies.


I'm not sure where you are going with this, but thanks for the tip; that's what I said: mixed precision. Anyway, my confusion with you is about a different matter, so let me ask you straight: do you understand tensor cores to be AI, or deep learning, themselves? Or did I just understand you wrong? That's my impression.


----------



## TheoneandonlyMrK (May 7, 2019)

londiste said:


> Vega already has 1xFP32, 2xFP16, 4xINT8 and 8xINT4, so does Turing. Pascal should have everything besides 2xFP16.
> Lower bit ranges have quite limited utility though and these have really not been used much in other than some ML applications.


They, meaning Nvidia, do not have RPM. They can do all of it, but they do some of it with special hardware, i.e. tensor or RTRT cores, and some is done by CUDA cores; they're not doing it the same way at all.

I have a Vega; I know what it can do.


----------



## AlienIsGOD (May 7, 2019)

*CHOO CHOOOOO!!!!1! Navi Hype Train be rollin'*

looks like thread title was written by a 5 year old.....


----------



## londiste (May 7, 2019)

theoneandonlymrk said:


> They ,meaning Nvidia, do not have RPM , they Can do all of it ,but do some of it with special hardware ie tensor or RtRt core's and some is done by cuda core's but they're not doing it the same way at all.


Yes, Nvidia has a different implementation. Does it matter all that much as long as the same featureset is there?


----------



## TheoneandonlyMrK (May 7, 2019)

londiste said:


> Yes, Nvidia has a different implementation. Does it matter all that much as long as the same featureset is there?


It does to Nvidia and AMD, but not so much to us, no.
That said, Nvidia are making quite a big deal at the moment about what their special hardware can do, aren't they?


----------



## londiste (May 7, 2019)

theoneandonlymrk said:


> That said, Nvidia are making quite a big deal at the moment about what their special hardware can do, aren't they?


Well, it depends on the context and the features/hardware in question.
The couple of operations Nvidia implemented in hardware as RT cores do seem somewhat worth hyping: doable in shaders, definitely, but RT cores are clearly much more efficient at them.
Tensor cores are a question mark; it looks like Nvidia has been somewhat hush-hush about what these actually do. For example, the part where FP16 is done (or can be done) on tensor cores is worth noting, but of the bigger sites Anandtech was the one that caught wind of it for their TU116 review. I would say this is interesting.


----------



## moproblems99 (May 7, 2019)

AlienIsGOD said:


> *CHOO CHOOOOO!!!!1! Navi Hype Train be rollin'*
> 
> looks like thread title was written by a 5 year old.....



I am 7 actually, sheesh.  Age discrimination.  The thread was supposed to be fun (and a joke) because everybody is salty as fuck.  Like you.  Carry on.


----------



## TheoneandonlyMrK (May 7, 2019)

londiste said:


> Well, it depends on the context or features/hardware in question.
> Couple operations Nvidia implemented in hardware as RT Cores do seem to be somewhat worth hyping - doable in shaders definitely but RT Cores are clearly much more efficient at them.
> Tensor cores are a question but it looks like Nvidia has been somewhat hush-hush about what these actually do. For example the part where FP16 is done (or can be done) on Tensor cores is worth noting but of the bigger sites Anandtech was the one that caught wind of it for their TU116 review. I would say this is interesting.


So it does matter just only if it's Nvidia lauding it, anywho.

In the context of this thread we probably need to get more on topic.


----------



## AlienIsGOD (May 7, 2019)

moproblems99 said:


> I am 7 actually, sheesh.  Age discrimination.  The thread was supposed to be fun (and a joke) because everybody is salty as fuck.  Like you.  Carry on.


LOL I'm not salty, just wish ppl could act and write more like adults... This site has gone downhill forum-wise the last few years...


----------



## juiseman (May 7, 2019)

*AMD Scores EPYC Win With Cray And ORNL On Frontier 1.5 Exaflop Supercomputer*

https://hothardware.com/news/amd-epyc-radeon-instinct-ornl-supercomputer

This is a big win for AMD


----------



## moproblems99 (May 7, 2019)

AlienIsGOD said:


> LOL I'm not salty, just wish ppl could act and write more adult like.



I'm sorry you couldn't see the joke that it was.  I have ordered a happy meal for you.


----------



## steen (May 7, 2019)

londiste said:


> Tensor cores are a question but it looks like Nvidia has been somewhat hush-hush about what these actually do. For example the part where FP16 is done (or can be done) on Tensor cores is worth noting but of the bigger sites Anandtech was the one that caught wind of it for their TU116 review. I would say this is interesting.



For RTX TU, FP16 is exclusively a tensor op. GTX TU FP16 is interesting given there are no tensors, according to NV. I'm not entirely convinced the hardware is very different. The TU SM layout is more tightly packed than GP, but the RTX/tensor silicon appears to be only ~10% of the die. The TU uarch consumes more area even without the RTX pipeline. Given RTX features only make sense above a minimum raster performance level (2060), I wouldn't be surprised if GTX TU had similar hardware but limited to FP16 ops. The big benefit of RTX tensor cores IMO is the FP32 accumulate for data science.


----------



## eidairaman1 (May 7, 2019)

moproblems99 said:


> I am 7 actually, sheesh.  Age discrimination.  The thread was supposed to be fun (and a joke) because everybody is salty as fuck.  Like you.  Carry on.



should be in General Nonsense


----------



## FordGT90Concept (May 7, 2019)

ratirt said:


> I'm not sure where you are going with this but thanks for the tip and that's what I said. Mixed precision. Anyway my confusion with you is about a different matter. let me ask you straight. I understand that tensor cores are AI for you or the deep learning  or did I just understand you wrong cause that's my impression.


The add is the only part of the operation that supports FP32, and the reason is to make the FP16*FP16 result less likely to overflow.  The main point (and why it is good for AI) is that it is a matrix solver for tensor math.  AMD doesn't have a matrix solver; GCN has to do these calculations on the shaders, which is much, much slower.  Example: Vega can do about 24 TFLOPS FP16; Volta can do over 100 TFLOPS FP16 in its tensor cores alone.
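The accumulate-precision point is easy to demonstrate in a small sketch (FP16 tops out at 65504, and its step size grows with magnitude, so a narrow accumulator eventually stops moving):

```python
import numpy as np

# Summing many small FP16 products with an FP16 accumulator stalls once
# the accumulator's step size exceeds the addend; FP32 accumulate doesn't.
prods = np.full(4096, 0.1, dtype=np.float16)

acc16 = np.float16(0)
for p in prods:
    acc16 = np.float16(acc16 + p)   # rounds to FP16 every step

acc32 = prods.astype(np.float32).sum()  # FP32 accumulator

print(float(acc16), float(acc32))  # FP16 result stalls far below ~409.5
```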



theoneandonlymrk said:


> They ,meaning Nvidia, do not have RPM , they Can do all of it ,but do some of it with special hardware ie tensor or RtRt core's and some is done by cuda core's but they're not doing it the same way at all.
> 
> I have a vega, i know what it can do.


NVIDIA added parallelism to deal with the problem in Turing where AMD made Vega more flexible.  As a result, Turing has a lot of transistors but more performance where Vega has fewer transistors but less performance.

AMD is going to want to compete in AI so AMD is going to have to add tensor cores eventually but I don't think that is in Navi because it was made for Sony who has no use for it.


----------



## steen (May 7, 2019)

FordGT90Concept said:


> NVIDIA added parallelism to deal with the problem in Turing where AMD made Vega more flexible.  As a result, Turing has a lot of transistors but more performance where Vega has fewer transistors but less performance.



Not entirely the same. GCN makes no distinction between graphics & compute modes & can schedule concurrently. TU is better at this than GP et al, but its parallelism is a function of running integers & floats at the same time. It just highlights the different uarch approaches: NV prefers discrete specialized silicon costing more die space, whereas AMD (till now) has preferred generalist ALUs.


----------



## FordGT90Concept (May 7, 2019)

Turing doesn't sacrifice anything (other than die space) for concurrent FP16 performance.  Vega gets FP16 performance by taking away from FP32 performance.  This is a disadvantage for Vega and an advantage for Turing when it comes to anything that can benefit from FP16.


----------



## TheoneandonlyMrK (May 7, 2019)

FordGT90Concept said:


> The add is the only one that supports FP32 and the reason for that is so that it is less likely to overflow the FP16*FP16 result.  The main point (and why it is good for AI) is that it is a matrix solver for tensor flow.  AMD doesn't have a matrix solver. GCN has to do these calculations on the shaders which is much, much slower.  Example: Vega can do about 24 TFLOP FP16; Volta can do over 100 TFLOP FP16 in its tensor cores alone.
> 
> 
> NVIDIA added parallelism to deal with the problem in Turing where AMD made Vega more flexible.  As a result, Turing has a lot of transistors but more performance where Vega has fewer transistors but less performance.
> ...


Nvidia couldn't easily put 64-bit compute back in, so they had to go with special hardware; they added tensor cores after Google ditched their GPUs for their own tensor ASIC.

And just look how much use their special hardware gets in general: it's useless.


----------



## FordGT90Concept (May 7, 2019)

For games, mostly.  Navi is a gaming product which is why I don't think it will have tensor cores.  I would be shocked if Arcturus didn't have tensor cores because AMD is so far behind in machine learning.  Then again, companies like Tesla are designing their own chips for machine learning anyway.

Point is: RPM doesn't help much with tensor math where RTX's tensor cores do.  DLSS isn't something Navi will have because it will lack the hardware to do it effectively.


----------



## CrAsHnBuRnXp (May 7, 2019)

If patterns are anything to go by, the AMD hype train for their GPUs is going to crash.


----------



## Deleted member 24505 (May 7, 2019)

AlienIsGOD said:


> *CHOO CHOOOOO!!!!1! Navi Hype Train be rollin'*
> 
> looks like thread title was written by a 5 year old.....



What, you have only just noticed it, Mr Salty?


----------



## steen (May 8, 2019)

FordGT90Concept said:


> Turing doesn't sacrifice anything (other than die space) for concurrent FP16 performance.



What does "concurrent fp16" even mean? You are aware that half floats & RPM (2xfp16) are used instead of fp32 to increase performance of ops not requiring full float precision? It's a register/resource & throughput gain in the case of 2xfp16. Int32, Int16, transcendentals, etc, still happen in the SM. TU "concurrency" is the ability to pack both integer & floats in the pipeline without bubbles/stalls/context switching.



> Vega gets FP16 performance by taking away from FP32 performance.  This is a disadvantage for Vega and an advantage for Turing when it comes to anything that can benefit from FP16.



Frightening. You should read the TU uarch & mixed precision white papers.



FordGT90Concept said:


> Point is: RPM doesn't help much with tensor flow where RTX's tensor cores do.  DLSS isn't something Navi will have because it will lack the hardware to do it effectively.



Tensor math is just 4x4 matrix FMA. It's the ability of the tensors to work on fp16, int8, int4 that makes them useful in nn ML. I asked someone else earlier: what do you think DLSS is?
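Spelled out, one 4x4 matrix FMA step is just 64 multiply-adds; a plain-Python sketch of the primitive (illustrative only, not hardware code):

```python
# D = A @ B + C on 4x4 tiles -- the primitive a tensor op performs.
def fma4x4(A, B, C):
    D = [[0.0] * 4 for _ in range(4)]
    for i in range(4):
        for j in range(4):
            acc = C[i][j]            # the accumulator seeds the sum
            for k in range(4):
                acc += A[i][k] * B[k][j]
            D[i][j] = acc
    return D

I = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
C = [[1.0] * 4 for _ in range(4)]
print(fma4x4(I, I, C)[0])  # [2.0, 1.0, 1.0, 1.0]
```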


----------



## Midiamp (May 8, 2019)

CrAsHnBuRnXp said:


> If patterns are anything to go by, the AMD hype train for their GPU's are going to crash.


AMD has a bad marketing team. Instead of quelling the rumors, they just let them spread like wildfire. I was one of the victims of the Radeon VII hype train. The fall from the hype hurt so bad that I now consider EVERY rumor about Zen 2 and Navi as nothing but bad gossip. Frankly, I don't want to be part of a community that harbors and encourages the spreading of bad information.


----------



## seronx (May 8, 2019)

If one googles GFX1010:
// On GFX10 the I$ has 4 x 64-byte cache lines. By default the prefetcher keeps one cache line behind and reads two ahead. We can modify it with S_INST_PREFETCH for larger loops to have two lines behind and one ahead. Therefore we can benefit from aligning loop headers if a loop fits in 192 bytes. If a loop fits in 64 bytes it always spans no more than two cache lines and does not need alignment. Else, if a loop is at most 128 bytes we do not need to modify prefetch; else, if a loop is at most 192 bytes we need two lines behind.

-> L0 cache, which is referred to below.

// In WGP mode the waves of a work-group can be executing on either CU of the WGP. Therefore need to invalidate the L0 which is per CU. Otherwise in CU mode and all waves of a work-group are on the same CU, and so the L0 does not need to be invalidated.

-> CU mode and WGP mode

// HWRC  = Register destination cache
&
// Try to reassign registers on GFX10+ to reduce register bank conflicts.
// On GFX10 registers are organized in banks. VGPRs have 4 banks assigned in a round-robin fashion: v0, v4, v8... belong to bank 0. v1, v5, v9... to bank 1, etc. SGPRs have 8 banks and allocated in pairs, so that s0:s1, s16:s17, s32:s33 are at bank 0. s2:s3, s18:s19, s34:s35 are at bank 1 etc.
// The shader can read one dword from each of these banks once per cycle. If an instruction has to read more register operands from the same bank an additional cycle is needed. HW attempts to pre-load registers through input operand gathering, but a stall cycle may occur if that fails. For example V_FMA_F32 V111 = V0 + V4 * V8 will need 3 cycles to read operands, potentially incurring 2 stall cycles.
// The pass tries to reassign registers to reduce bank conflicts.
// In this pass bank numbers 0-3 are VGPR banks and 4-11 are SGPR banks, so that 4 has to be subtracted from an SGPR bank number to get the real value.  This also corresponds to bit numbers in bank masks used in the pass.

-> HWRC and banking are part of Super-SIMD patents;
https://patents.google.com/patent/US20180357064A1
https://patents.google.com/patent/US20180121386A1

//In one embodiment, each bank of the vector destination cache holds 4 entries, for a total 8 entries with 2 banks.
-> destination register cache // HWRC => 8 destination registers with 3-entry source operand forwarding.

//In one embodiment, source operands buffer holds up to 6 VALU instruction's source operands. In one embodiment, source operand buffer includes dedicated buffers for providing 3 different operands per clock cycle to serve instructions like a fused multiply-add operation which performs a*b+c.
-> source operand buffer => 6 * 3-entry source operand buffer
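The banking rule quoted above is simple enough to model; a toy sketch (not the actual LLVM pass) of how the V_FMA_F32 example earns its stall cycles:

```python
from collections import Counter

# GFX10 rule from the quoted comment: VGPR v<n> lives in bank n % 4,
# and only one dword per bank can be read per cycle.
def read_cycles(vgpr_operands):
    banks = Counter(n % 4 for n in vgpr_operands)
    return max(banks.values())  # same-bank operands serialize

# V_FMA_F32 V111 = V0 + V4 * V8: v0, v4, v8 all map to bank 0.
print(read_cycles([0, 4, 8]))  # 3 cycles (2 stalls)
print(read_cycles([0, 1, 2]))  # 1 cycle, no conflict
```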


----------

