
CHOO CHOOOOO!!!!1! Navi Hype Train be rollin'

Hype train went off a cliff for me once they said VII is their high end product.
 
I'll tell you how.

1) It's not even close to sitting squarely between the RTX 2070 and RTX 2080 performance-wise. It's essentially equal to an RTX 2080, trading blows with it all day long. Sometimes a little slower, sometimes a little faster, sometimes pretty much the same. Turn RTX on and it wins every time (speed-wise). Those charts above are bullshit.

2) As such, it's priced to beat Nvidia value wise. Which it does.

3) Nvidia didn't respond with price cuts because they can't. Or won't. Either way. Doesn't matter. Radeon VII is a better value.
Every chart that doesn't show the R7 where you want it is bullshit.
 
If NV put 16 GB on the 2070 because there's a short supply of 1 GB GDDR6 chips, it wouldn't make it worth $200 more.
Of course it wouldn't because 8 GiB GDDR6 isn't worth $200--it's closer to $50-75.

Would people pay ~$600 for a 16 GiB RTX 2070? Likely.
 
Of course it wouldn't because 8 GiB GDDR6 isn't worth $200--it's closer to $50-75.
You still haven't convinced me that 16 GB needs to be there; it drives the price up a lot for little to no gain. I mean, HBCC is their own friggin' invention.
 
Hype train went off a cliff for me once they said VII is their high end product.
So, you didn't know that Navi 10 was the Polaris successor coming in 2019, and that the Vega successor is Navi 20, which would launch in 2020? Those rumors are over a year old, so they shouldn't be confused with the Radeon VII launch, which was a product to buy AMD time until Navi 20 is ready.
 
You still haven't convinced me that 16 GB needs to be there; it drives the price up a lot for little to no gain. I mean, HBCC is their own friggin' invention.
You're looking at it backwards: the product has 16 GiB so the price reflects that. Vega 20 wasn't designed for gamers; it was designed for Radeon Instinct (Radeon VII == Radeon Instinct MI50).

The "gain" is in the fact that it has 1 TB/s bandwidth which the GPU clearly benefits from.
 
That is just it. You need the one to get the other, which is what @ShurikN says as well. We all know this deep down inside; everyone can recognize 'Nvidia mindshare' is a thing, but let's just face reality: that is not because of Huang's fancy jacket. It's because of the product.

I think it's comparable to a Dacia car versus a Volkswagen. They both accelerate about the same, they carry just as many people and luggage, they do the same km/L. But the VW has a somewhat better-designed interior, looks a bit nicer on the outside, and comes in twenty different colors. The Dacia comes in three. And, to top things off, VW has a few concept cars going about, and a few fast and luxurious ones too. Nobody ever buys those, but hey, if you drive a simple VW, you do get some of that 'feeling' of being part of the brand that has those cars.

This also handily underlines that people care about more than price - you don't see Dacias everywhere. In fact, price is one of the least important factors in most segments except that volume midrange. And because of that, the midrange is also the least profitable segment. This is why AMD moves units but profits so very little - and therein lies the problem. A solid midrange is only ever the result of solid high-end products from the year before; otherwise you're constantly doing reboots à la Polaris to fill the gap, and you'll never amass a comfortable margin to fund new R&D.

I think this is a pretty decent car analogy, for once. :laugh: Heck, it goes even further; VW has that e-Golf, a pretty useless electric version of the same car. Sounds almost like Turing!

I'm not sure the car analogy works. It paints AMD as a cheap brand, while it works quite well for NV given their history of lying and cheating. Maybe VW and Ford, or BMW and Ford, would have worked better. They both make good products but one is perceived as being "better".

But I know what you mean. AMD has been chasing the mid-range family sedan market while NV has been chasing the high-end sports coupe market.
 
You're looking at it backwards: the product has 16 GiB so the price reflects that. Vega 20 wasn't designed for gamers; it was designed for Radeon Instinct (Radeon VII == Radeon Instinct MI50).

The "gain" is in the fact that it has 1 TB/s bandwidth which the GPU clearly benefits from.
No, you're looking at it backwards.
If it wasn't designed for gamers, then a gamer shouldn't take a $700 R7 as a good alternative when they can have a 2070 at $500.
Still, it's better that it's there than leaving the 2070/80 with no competition.
 
I would flip it back to you: why pay $500 for a card with only 8 GiB of VRAM? $200 cards released years ago had that (RX 470). I would expect premium priced cards to have premium amounts of memory.
 
I would flip it back to you: why pay $500 for a card with only 8 GiB of VRAM? $200 cards released years ago had that (RX 470). I would expect premium priced cards to have premium amounts of memory.
Oh my god, just because there's 8 GB of some old-ass GDDR5 on one RX 470 variant doesn't mean the RTX 2070 needs 16 GB of GDDR6 :rolleyes: You just said the R7 has 16 GB for two reasons, and neither of them is that it needs it.
Come on, I've got work to do and I'm here refreshing TPU, sipping a drink and going back and forth with you :D
 
Hi again, pal! Nice to meet you in another AMD thread, although you don't like their products much.
What would be the point of a thread that involves only people that like the product? Would that still be a discussion? More like a gang bang.
If that's what you're after, I'll step aside for sure...
 
Oh my god, just because there's 8 GB of some old-ass GDDR5 on one RX 470 variant doesn't mean the RTX 2070 needs 16 GB of GDDR6 :rolleyes: You just said the R7 has 16 GB for two reasons, and neither of them is that it needs it.
Come on, I've got work to do and I'm here refreshing TPU, sipping a drink and going back and forth with you :D
If it makes you feel any better, a likely reason the Radeon Instinct MI60 was passed over for consumer cards is that 32 GiB is excessive; 16 is not. It is a lot but it is not over the top.
 
If it makes you feel any better, a likely reason the Radeon Instinct MI60 was passed over for consumer cards is that 32 GiB is excessive. 16 is not. 16 is the new 8.
No, it isn't.
 
Yes, but bear in mind that while R7 die size is smaller, it's built on a more expensive and not as mature process
So why not stay on the larger, cheaper node? They could have simply polished Vega a bit further and kept selling it. It has enough performance for the market it targets.
Radeon VII didn't improve on anything qualitative. It's still more power-hungry and hotter than many PCs can take.
It's slightly faster, but it doesn't make AMD competitive with Nvidia's top products.

It literally looks like a statement for shareholders - showing that 7nm works (kind of). And a way to push a few thousand Instinct chips that no one wants.

And BTW: this "more expensive and not as mature process" is what Zen2 will be using - at least initially. So you'd better be wrong or the Zen2 fanclub will be very disappointed. :-)
 
GPUs don't work as a chiplet as well as CPUs do. Zen 2's success on 7 nm is because of the chiplet design. Navi was theoretically supposed to be a chiplet too, but... that remains to be seen. Infinity Fabric would have to be really, really fast to satisfy a GPU's need for bandwidth.
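To put rough numbers on that (both figures below are ballpark assumptions for illustration, not measured specs): a CPU chiplet gets by on one or two fabric links, while a GPU chiplet that has to see the whole memory pool needs interconnect bandwidth in the same league as the memory itself.

```python
# Why a chiplet GPU is much harder to feed than a chiplet CPU:
# the die-to-die fabric has to carry traffic comparable to the
# GPU's memory bandwidth. Figures are rough assumptions.
gpu_memory_bw_gb_s = 1000   # HBM2-class card, ~1 TB/s
cpu_memory_bw_gb_s = 50     # dual-channel DDR4-ish, ballpark
fabric_link_gb_s   = 50     # one Infinity-Fabric-style link, ballpark

print("CPU chiplet links needed:", round(cpu_memory_bw_gb_s / fabric_link_gb_s))  # ~1
print("GPU chiplet links needed:", round(gpu_memory_bw_gb_s / fabric_link_gb_s))  # ~20
```

That order-of-magnitude gap in required link bandwidth is the "really, really fast" part.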
 
16 is not. It is a lot but it is not over the top.
How much do games use these days at 4K ultra? Is 8GB really a big limitation?
I quickly checked the analyses TPU provides.
E.g. Metro Exodus, not even 6GB with RTX on:
https://www.techpowerup.com/reviews/Performance_Analysis/Metro_Exodus/6.html
Generally speaking, most games tested use around 4 GB at 4K.
The biggest usage I've found:
https://www.techpowerup.com/reviews/Performance_Analysis/Middle_Earth_Shadow_of_War/5.html
8.3 GB, but the comment there is crucial: the usual 4 GB on "high" settings.

Also, I remember very well that 8 GB was perfectly fine when Vega came out. AMD convinced us that HBM2 and HBCC meant it didn't need more. That it performs like a card with more memory.
What happened to all that?
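For a rough sense of where those 4-6 GB go at 4K: the resolution-dependent part (render targets) is only a few hundred MiB; the rest is textures, geometry and streaming pools, which scale with asset quality rather than resolution. A minimal sketch with an illustrative, made-up render-target layout (not any specific engine's):

```python
# Back-of-the-envelope render-target footprint at 3840x2160.
# The layout below is illustrative only, not any specific engine's.
width, height = 3840, 2160
pixels = width * height

bytes_per_pixel = {
    "G-buffer (4 targets, RGBA16F)": 4 * 8,
    "depth/stencil (D32S8)":         5,
    "HDR color (RGBA16F)":           8,
    "post-process ping-pong (x2)":   2 * 8,
}

total = sum(bpp * pixels for bpp in bytes_per_pixel.values())
print(f"~{total / 2**20:.0f} MiB of render targets")  # a few hundred MiB (~480 here)
```

Which is consistent with the TPU numbers above: moving up in resolution adds a few hundred MiB, while the multi-GB bulk is asset data that the texture setting, not the resolution, mostly determines.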

GPUs don't work as a chiplet as well as CPUs do. Zen 2's success on 7 nm is because of the chiplet design.
I believe it's a bit too early to say chiplets work great in CPUs and that Zen2 is a success. Don't you think? ;-)
 
What would be the point of a thread that involves only people that like the product? Would that still be a discussion? More like a gang bang.
If that's what you're after, I'll step aside for sure...
I just point out your constantly negative attitude towards any AMD product, even very good ones such as the Zen-derived CPUs. Keep commenting freely, just don't expect many people to take your opinions seriously when you are so single-minded about tech products. Personally, even if I don't like nVidia's market practices, I can still hold the GTX 1080 Ti in high regard as a marvelous design, efficient for the time it launched. Every product should be judged separately from the company's profile if we want to be as objective as humanly possible.
 
Just finished it. Pretty good video. And explains a lot of what's been happening and why.
This comment from the second Adored video ties in nicely with it.

That's why I shared it; Lisa Su is on the right path. I'm not expecting miracles like the jump between the Radeon 8500 and the 9500/9700 Pro. She expects improvement, which there is. AMD is digging itself out of the mess that started with Phenom I (the damage had already been done); they have been able to do a lot as of late despite their smaller operating revenue.
 
AMD graphics focuses on console chips while Intel (ATi) and Nvidia battle it out in dGPUs.

I am OK with that.
Intel will need at least 5 years (more likely 10) to be able to compete with nVidia in the high-end consumer GPU market. And they might be obliged to use Samsung's fabs in order to even begin mass-market GPU production. They are not in good form lately on many fronts. I would also like to have three or more parties competing in the CPU and GPU markets, but it is very hard for a newcomer to compete with the established ones for the first few years.
 
Intel will need at least 5 years (more likely 10) to be able to compete with nVidia in the high-end consumer GPU market. And they might be obliged to use Samsung's fabs in order to even begin mass-market GPU production. They are not in good form lately on many fronts. I would also like to have three or more parties competing in the CPU and GPU markets, but it is very hard for a newcomer to compete with the established ones for the first few years.

Not to mention Intel's new CEO announced they are focusing on big data and servers more than PC consumers moving forward. Sadly, I think that is a bad move for them, seeing as how Dell just announced they intend to increase 7nm EPYC Rome usage threefold over what was previously estimated. Good luck, Intel, you will need it, 'cause Rome is going to kick their ass.
 
Intel will need at least 5 years (more likely 10) to be able to compete with nVidia in the high-end consumer GPU market. And they might be obliged to use Samsung's fabs in order to even begin mass-market GPU production. They are not in good form lately on many fronts. I would also like to have three or more parties competing in the CPU and GPU markets, but it is very hard for a newcomer to compete with the established ones for the first few years.
This.

If AMD leaves the PC consumer GPU market then we're in for bad times ahead. Intel will not be able to come out with products competitive with Nvidia, seeing as this is their first discrete GPU in probably 20+ years, so expecting them to match Nvidia on all fronts (power, price, performance) after a couple of years of R&D is quite frankly ridiculous. And if there were no AMD, then we would see a monopoly like there has been in the CPU segment for the last 10 years, with Intel just incrementally adding a little bit more performance every generation and prices always increasing. Now imagine that was Nvidia... their prices have already gone up ridiculously. If there were no competition, or less competition than there is now.... well, good luck with your $2k high-end GPU with a small uplift of 10% over Turing.
 
This.

If AMD leaves the PC consumer GPU market then we're in for bad times ahead. Intel will not be able to come out with products competitive with Nvidia, seeing as this is their first discrete GPU in probably 20+ years, so expecting them to match Nvidia on all fronts (power, price, performance) after a couple of years of R&D is quite frankly ridiculous. And if there were no AMD, then we would see a monopoly like there has been in the CPU segment for the last 10 years, with Intel just incrementally adding a little bit more performance every generation and prices always increasing. Now imagine that was Nvidia... their prices have already gone up ridiculously. If there were no competition, or less competition than there is now.... well, good luck with your $2k high-end GPU with a small uplift of 10% over Turing.

AMD is not leaving.
 
I just point out your constantly negative attitude towards any AMD product, even very good ones such as the Zen-derived CPUs. Keep commenting freely, just don't expect many people to take your opinions seriously when you are so single-minded about tech products. Personally, even if I don't like nVidia's market practices, I can still hold the GTX 1080 Ti in high regard as a marvelous design, efficient for the time it launched. Every product should be judged separately from the company's profile if we want to be as objective as humanly possible.
Well, here's the difference. I'm not just criticizing AMD's products. I also don't like their business strategy and the whole background they provide. That's why I'm criticizing the company as well.

And yes, I don't like AMD's GPUs - because they're an affront to the great company that ATI had been.
And I don't like the CPUs either - because IMO they made too many compromises to push the price down.

You see, many cores and all that - great. AMD started the core war that changed the landscape of what we TALK about.
But the simple fact is: most of the demand for PC CPUs (desktop and mobile) is for chips with an IGP. When Zen launched in 2017, Intel was making 4-core chips with an IGP.
Over 2 years later, AMD is still launching 4-core APUs. A lot of talk, not a lot of improvement in mainstream products. So yeah, it's hard for me to like someone who made these decisions.

And it's a similar story with Navi. If AMD's market share was as high as their "forum discussion share"...

Intel will need at least 5 years (more likely 10) to be able to compete with nVidia in the high-end consumer GPU market. And they might be obliged to use Samsung's fabs in order to even begin mass-market GPU production. They are not in good form lately on many fronts.
Well, AMD is also a few years behind Nvidia, and they also need to outsource production.

Why would Intel go for the expensive, small-volume cards? That makes no sense.
They should go for a good mainstream GPU. And what stops them from making a competitor to the RX 580? Not very efficient, but with decent performance and a better brand? Absolutely nothing.

As for workstation/datacenter products, Intel is very unlikely to be able to compete with Nvidia for years - not because of hardware finesse, but because of the whole ecosystem. They'll have the exact same problem AMD has.
Nvidia dominated GPGPU not because their chips were much better than AMD's, but because of things like CUDA.
Even if Intel magically makes a V100 clone - and even sells it slightly cheaper - it'll take them years to gain significant market share.
 
GPUs don't work as a chiplet as well as CPUs do.

I would argue they should work better, due to the fundamental way GPUs work (SIMT/SPMD). That makes it much easier to decentralize the chip into compute modules; also, with GPUs you don't have to worry as much about added latency to begin with.

The problem is: why would you want to make one right now? A chiplet GPU only makes sense if you've reached the absolute limit of size/power/performance and any further advancement would affect one of those metrics to the point where it is no longer feasible to make a monolithic GPU. Or if you want to make an APU (right now that's a bad idea on PCs due to lack of bandwidth).

That's exactly where Rome sits right now on the CPU front; it was meant to be the biggest, fastest, most power-efficient CPU AMD can make. With Navi they clearly didn't have those goals in mind. They wanted an APU for consoles, and whatever design resulted from that, they decided to port to PCs in the form of dedicated graphics.

Let's take that a little further: power efficiency and size were likely the leading metrics in making Navi, targets that were probably met just fine as far as the console APUs were concerned. Now we get to the matter of turning it into a compelling product for PCs, and AMD was faced with a dilemma: do you make a 1:1 port of the technology and make cards that are very power efficient but mediocre as far as performance goes (compared to the best Nvidia has)? Or do you go outside of this optimal design in pursuit of more performance? We'll see what they did, but how AMD got here shouldn't be a mystery or a surprise to anyone; it was all rather straightforward.
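The cost of going outside that optimal design can be sketched with the usual first-order model: dynamic power scales with capacitance times voltage squared times frequency, and near the top of the curve the voltage has to rise roughly in step with the clock, so power grows roughly with the cube of the clock. A simplified illustration (the linear voltage/frequency assumption is mine, for the sketch only):

```python
# First-order dynamic power model: P ~ C * V^2 * f.
# Crude assumption for the sketch: V scales linearly with f near the
# top of the curve, so P scales roughly with f^3.
def relative_power(clock_scale: float) -> float:
    voltage_scale = clock_scale
    return voltage_scale ** 2 * clock_scale

for clock_scale in (1.0, 1.1, 1.2):
    perf = clock_scale                    # performance ~ clock, at best
    power = relative_power(clock_scale)
    print(f"+{(clock_scale - 1) * 100:3.0f}% clock -> "
          f"+{(power - 1) * 100:3.0f}% power, perf/W x{perf / power:.2f}")
# +10% clock -> +33% power (perf/W x0.83); +20% clock -> +73% power (perf/W x0.69)
```

So a console-tuned Navi pushed for more desktop performance pays a steep efficiency penalty, which is exactly the dilemma described above.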
 