
NVIDIA CEO Jensen Huang on Radeon VII: "Underwhelming (...) the Performance is Lousy"; "Freesync Doesn't Work"

No argument, then?
VII is good, but the 2080 brings no new content and no visual gains... seriously?

You really have reading issues, I already told you, skim back a few pages. Think of it as a treasure hunt!
 
It looks like: lol, 7nm, 300 W TDP, only 2080 or 1080 Ti performance from two years ago, no new features, but 16 GB of HBM2... for games...
Wrong: 180 W, and ~92% yield at max OC. People need to remember that GPU Boost 3.0 is vendor-locked while the AMD driver is accessible; on Nvidia, voltage changes and undervolting happen automatically by default, not manually as on AMD. Also, Nvidia's rendering is tile-based, meaning it is not immediate-mode and constantly drawing power like AMD's. If both cards were at 100% load, the comparison would most likely shift in AMD's favour, for the reasons stated.
 
Love how everyone is beating on the stock market being down for NV.

At time of posting it is at 143.09 and UP,
while AMD is DOWN at 19.33... I would rather own stock in a company that is worth $143 than a company at $19. Just food for thought, people.

NVIDIA Corporation
NASDAQ: NVDA
143.09 USD +0.51 (0.36%)
Jan 10, 1:56 PM EST

Advanced Micro Devices, Inc.
NASDAQ: AMD
19.33 USD −0.86 (4.26%)
Jan 10, 1:55 PM EST
Doesn't that depend on the price when you actually bought the stock? Your post sure makes it sound like you don't quite understand how this works. Owning $150 stock you bought at $170 is certainly a lot worse than owning $15 stock bought at $10. And since you'd likely have invested the same amount either way, you'd also hold 10-20x more shares of the cheaper one, making the difference very noticeable.
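To put rough numbers on that, here is a minimal sketch; the buy and current prices are the hypothetical ones from the post above, and the $1,700 invested is an arbitrary assumption for illustration:

```python
# Rough numbers for the point above: what matters is percentage return,
# not the sticker price of a share. Prices are the hypothetical ones
# from the post; the $1,700 invested is an arbitrary assumption.

def position(invested, buy_price, current_price):
    shares = invested / buy_price
    value = shares * current_price
    gain = (current_price - buy_price) / buy_price
    return shares, value, gain

shares_a, value_a, gain_a = position(1700, 170, 150)  # "expensive" stock, bought high
shares_b, value_b, gain_b = position(1700, 10, 15)    # "cheap" stock, bought low

print(f"$170 -> $150: {shares_a:.0f} shares, worth ${value_a:,.0f} ({gain_a:+.1%})")
print(f"$10  -> $15:  {shares_b:.0f} shares, worth ${value_b:,.0f} ({gain_b:+.1%})")
# $170 -> $150: 10 shares, worth $1,500 (-11.8%)
# $10  -> $15:  170 shares, worth $2,550 (+50.0%)
```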
Gone from a majority to a healthy minority.

If it weren't for crypto, NVDA and AMD wouldn't be crapping their trousers at this point in time.
What? You get that the total addressable market has expanded dramatically over that period of time, right? The PC gaming hardware market is not smaller than before, and the software market certainly isn't. Other devices growing into their own and establishing themselves as parallel alternatives does not at all spell impending doom for the incumbent segments. That PC gaming is now "a minority" is meaningless when the total gaming market is 10x what it was when that was the case.
To buyers, the only thing that actually matters is how well it performs, along with thermals, noise, etc. What process they use, how many cores, what clocks, etc. shouldn't matter to buyers; the benchmarks should show the reality. But if AMD needs ~300W to match the performance of an RTX 2080 / GTX 1080 Ti, they need to offer some other advantage to justify a TDP that much higher, e.g. a lower price.

Strategically, I can't see any really good reason why AMD is releasing this card. My best guess is they felt the need to get some attention in the market, since they don't have anything else for a good while. This is similar to the "pointless" RX 590 last year, released just to get another review cycle.

For the future, this doesn't look good. AMD just spent what might be the last "good" node shrink on a repurposed GPU for the professional market, a GPU which is very expensive to make and takes up precious capacity on the 7nm node. If this had been planned a long time ago, they would have made a GPU without full fp64 support and with GDDR6; then at least they could have sold it much cheaper. But with Vega 20 they are at a disadvantage: not only is the HBM2 more expensive, the die is too. And they even put 16GB on it, which is completely wasted for gaming. If they really wanted more than 8GB, they could have gone with 12GB and at least made it a little cheaper.
You've got this all turned around.
-Very expensive chip? Not likely. It's <400mm2. Even on 7nm, that's likely to be cheaper than Nvidia's monster 12nm chips. It does need an interposer and HBM, but given that GDDR6 is reportedly 60-70% more expensive than GDDR5(X) (presumably in the same ballpark as HBM2's premium over GDDR5), that leaves us at roughly price parity between GDDR6 and HBM2. AMD uses twice as much, which means lower margins there, but it's not night and day. The interposer is likely not that expensive either, given that it's far smaller than the Fiji lineup's interposer.
-"That much higher TDP"? While there's no denying that Nvidia's cards are more efficient, by a significant margin, the difference between a 300W VII and a 275W 1080Ti or 225W 2080 is ... not that much, really. 9 and 33% respectively. The difference was far bigger the last go around, with the 275W V64 roughly matching the 180W 1080 (52%).
-Strategically, this is AMD saying "Hey, we're here, and we can compete this go around too" rather than leaving the old, 1080-ish V64 as their highest performer. At the same price as the 2080 or cheaper for matching performance, this is a good alternative, particularly with a good tri-fan cooler on the reference card. That makes a lot of sense, as the other option would be leaving the RTX series unopposed until Navi arrives. If that's within a few months, this would be of no use, but if it's toward fall or Q4, this is a very good strategic move. Nvidia can't launch anything new in response; all they can do is lower prices on their existing cards, which we know are ridiculously expensive to produce, even if they are still overpriced. If Nvidia doesn't, AMD will sell more. Whatever happens, this is a good move by AMD.
-"Spending a node shrink" on a refresh of a known architecture is a well established mode of operations in the tech world. It's when you don't have a node shrink that you need a new arch - a position that AMD has now put itself in. This primes the market for Navi. Of course, it lessens the "wow" effect if Navi delivers on improving perf/w (combining that with a node shrink would make it really stand out), but on the other hand it makes for continuity in AMD's offerings and the performance of their product stack.
-While I agree that Vega as an architecture is far better suited for professional applications (we've seen this proven plenty of times), AMD apparently feels confident enough in the gaming-related improvements from doubling the memory bandwidth and the number of ROPs to give it a go. I suggest we wait for reviews.
-You're right that 12GB of HBM2 might have made this cheaper, but getting 6hi vs. 8hi KGSDs of HBM2 is likely a challenge, and given that 8hi is in volume production for enterprise/datacenter/etc., it might be the same price or even cheaper than special-ordered 6hi stacks.
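Since several points in this list turn on simple ratios, here is a minimal sketch of the TDP arithmetic; the wattages are the ones quoted above, everything else is just rounding:

```python
# Back-of-the-envelope check on the TDP gaps cited in the list above.
# All wattages are the ones quoted in the post.

radeon_vii = 300
for name, tdp in [("GTX 1080 Ti", 275), ("RTX 2080", 225)]:
    print(f"Radeon VII draws {radeon_vii / tdp - 1:+.0%} more than the {name}")
# -> +9% vs the 1080 Ti, +33% vs the 2080

# Last generation's gap, for comparison: Vega 64 (275 W) vs GTX 1080 (180 W)
print(f"Vega 64 drew {275 / 180 - 1:+.0%} more than the GTX 1080")  # -> +53%
```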
 
You really have reading issues, I already told you, skim back a few pages. Think of it as a treasure hunt!
Two posts before, you say there is no new content and no visual gains. The treasure hunt is over.
 
Alright @Vayra86 and @Anymal - enough already. Take it to PM if you want to continue bickering.
 
This seems like a very odd statement from Mr. nVidia...
He must know that such a statement will be discussed in various tech forums and will likely backfire, so what prompted it? It makes no sense to me at all :confused:
 
This whole affair really highlights how bad the GPU market is right now.

Nvidia is owning AMD by providing a smidge more performance than last generation for decently more money at the same tier. Meanwhile, AMD can't even beat THAT with a new card launch and a die shrink. So:

1) Nvidia took a technological advantage and wasted most of their extra die space on fad features that won't even get real use until a few generations from now. They should have waited to implement RTX until they had a die shrink and space to kill. Here, they basically made a slightly enhanced version of their last generation and added features that won't matter for a few gens. Whoops. That means performance didn't go up to any great degree, compelling them to shuffle the model numbers around and hide it behind marketing spin of "AND COMPARING THE 2080 TO THE 1080, WE SEE A HUGE UPLIFT IN FRAMERATE!"

2) AMD took a die shrink advantage and managed to barely keep up with Nvidia's last-generation high end. So for the same money as Nvidia, they'll give you a card that mostly keeps up at a higher power draw and lacks even the fad features that Nvidia has.

So if Nvidia was "greedy" to charge basically the same for a tiny performance uplift plus fad features, then what is AMD for charging the same price for virtually the same (or less?) performance WITHOUT the fad features?

This is Aliens vs Predator all over again. No matter who wins this fight, we all just lost.
 
-Very expensive chip? Not likely. It's <400mm2. Even on 7nm, that's likely to be cheaper than Nvidia's monster 12nm chips.
Estimates are about twice the cost per mm². Combined with an interposer and HBM2, this will be more expensive than "12nm" and GDDR6.

-"That much higher TDP"? While there's no denying that Nvidia's cards are more efficient, by a significant margin, the difference between a 300W VII and a 275W 1080Ti or 225W 2080 is ... not that much, really. 9 and 33% respectively. The difference was far bigger the last go around, with the 275W V64 roughly matching the 180W 1080 (52%).
When comparing the RTX 2080 vs. "Radeon VII", presumably with similar performance but ~300W vs. 215W at the same price, it becomes a hard sell. On the other hand, if it were $50 cheaper or had some other significant advantage, it could have some potential, assuming AMD could make enough of them.
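For a sense of whether that ~85 W gap matters beyond the spec sheet, a rough sketch; only the wattages come from the post, while the daily gaming hours and electricity price are arbitrary assumptions:

```python
# What an ~85 W load-power gap (300 W vs 215 W) costs over a year.
# Wattages are from the post; hours/day and $/kWh are assumptions.

delta_w = 300 - 215       # extra draw under gaming load, watts
hours_per_day = 3         # assumed daily gaming time
usd_per_kwh = 0.13        # assumed electricity price

yearly_kwh = delta_w / 1000 * hours_per_day * 365
print(f"{yearly_kwh:.0f} kWh/year, about ${yearly_kwh * usd_per_kwh:.0f}/year extra")
# -> 93 kWh/year, about $12/year at these assumptions
```

At numbers like these, a $50 price cut would cover several years of the extra power draw, which is why price, not TDP, is the lever that matters here.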

-"Spending a node shrink" on a refresh of a known architecture is a well established mode of operations in the tech world.
<snip>
-While I agree that Vega as an architecture is far better suited for professional applications…
The refresh itself is not the problem; the problem is using it to make a very expensive GPU with irrelevant features that, given the architecture's great inefficiency, has no real chance in the market.

Vega 20 was intended for professional markets, which means it includes more fp64 hardware and, I believe, some other new features, all of which is wasted die space on a very expensive and currently low-volume node. As I mentioned, if they had a Vega 20 without this and with GDDR, it would at least make some sense.

-You're right that 12GB of HBM2 might have made this cheaper, but getting 6hi vs. 8hi KGSDs of HBM2 is likely a challenge, and given that 8hi is in volume production for enterprise/datacenter/etc., it might be the same price or even cheaper than special-ordered 6hi stacks.
No, I'm talking about using 3 stacks instead of 4.
 
Wow, he sounds super salty...
It "barely keeps up with" our more expensive card "and if we turn on ray tracing we'll crush it" by halving our framerates...
 
heck ... that speech is making me want a Vega VII ... rather than a 20XX card ... even if the price point is around a 2080 (where i live ...)
eagerly waiting on AIB and etailer to list them to see ...

but for now i am more inclined to get a deal on a Vega 64 rather than getting a 2070 from the green goblin ---


red dawn indeed ...
 
No halving, thanks to DLSS.
 
Exactly my thought - it will backfire on nVidia...

 
heck ... that speech is making me want a Vega VII ... rather than a 20XX card ... even if the price point is around a 2080 (where i live ...)
eagerly waiting on AIB and etailer to list them to see ...

but for now i am more inclined to get a deal on a Vega 64 rather than getting a 2070 from the green goblin ---


red dawn indeed ...
You are getting all emotional.
 
Not a hater, trust me; a fanboy, yes. I really don't care what anyone wants to buy, it's their money. I just don't care for Nvidia cards or their tech, that's it. The last Nvidia cards I owned were a pair of GTX 570s in SLI; they ran like garbage and colors looked washed out. Haven't owned one since, and won't for the foreseeable future.

I ran 570s in SLI for 4+ years. I thought they handled things rather well. I used them across several different monitors over that time frame and things never looked washed out. I maxed out games at 1080p for a long time. I probably would have kept the cards until Pascal hit the shelves, but having jumped to the 5760x1080 resolution that I really like playing games at (when they support it), the 570s struggled. I ran Far Cry 3 at 5760x1080, but all settings were turned down to the medium range and frame rates were usually around 40-50 FPS. I could max out Borderlands 2 (except for PhysX, which I kept on low) and hold around 60 FPS at 5760x1080... anyway, I decided to jump to something new, and after some deliberating I went with my 980Ti AMP Omega. Hell of an upgrade.

I can't say I've ever had issues with Nvidia cards before. I've had issues with SLI over the years, but not the big glaring issues that people seem to claim. My next card will most likely be from Nvidia because I've been using them for so long... but if AMD can throw something out that works as well or better and costs less, I wouldn't be against picking up one of their GPUs for my next upgrade. My next GPU is still probably 2-3 years out (if my 980Ti lasts that long), so who knows what we'll see then.
 
Sticks and stones will break my bones, but words will never bring back the market cap.
 
I have an RX 480 in my main rig, and Jensen might be right about FreeSync. It was so poorly regulated that it was hard to figure out which monitors actually performed well across their variable refresh range. Most reviewers only check image quality at fixed refresh rates.
 
I have an RX 480 in my main rig, and Jensen might be right about FreeSync. It was so poorly regulated that it was hard to figure out which monitors actually performed well across their variable refresh range. Most reviewers only check image quality at fixed refresh rates.

If you really want to get FreeSync working as intended, use a program called CRU (Custom Resolution Utility). There you can adjust your monitor's sync range; I have mine set to 25-80.
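One thing worth checking when you widen the range like that is Low Framerate Compensation. LFC (frame doubling below the sync floor) is commonly said to require the max refresh to be at least ~2x the min, with some sources citing 2.5x, so a 25-80 range should qualify. A minimal sanity check, using the 25-80 range from the post and two hypothetical ranges for contrast:

```python
# Does a FreeSync range support LFC (frame doubling below the floor)?
# The 2.0x threshold is the commonly cited one; 2.5x is the stricter
# figure some sources give. 25-80 Hz is the range from the post above;
# the other two ranges are hypothetical examples.

def lfc_capable(min_hz, max_hz, ratio=2.0):
    return max_hz >= ratio * min_hz

for lo, hi in [(25, 80), (48, 75), (40, 144)]:
    verdict_20 = "yes" if lfc_capable(lo, hi) else "no"
    verdict_25 = "yes" if lfc_capable(lo, hi, ratio=2.5) else "no"
    print(f"{lo}-{hi} Hz: LFC at 2.0x: {verdict_20}, at 2.5x: {verdict_25}")
# 25-80 Hz: yes, yes | 48-75 Hz: no, no | 40-144 Hz: yes, yes
```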
 
What? You get that the total addressable market has expanded dramatically over that period of time, right? The PC gaming hardware market is not smaller than before, and the software market certainly isn't. Other devices growing into their own and establishing themselves as parallel alternatives does not at all spell impending doom for the incumbent segments. That PC gaming is now "a minority" is meaningless when the total gaming market is 10x what it was when that was the case.

Hey, use acronyms like "TAM", ok? :p But yes, to your point, the total market has increased while the share percentages shift. The interesting thing from that graph is that the PC market almost overtook the console market from 2016-2019. Who would have thought?
 
How to read this:

Vega VII is a "noteworthy product"

The performance is competitive, but there are no frills.

[There's] no ray tracing, no AI (underdeveloped gimmicks which you as a consumer don't really care about yet).

Its 7nm is a turning point, because Vega went from competing with the 1080 to competing with the 1080 Ti.

We can only rely on gimmicks that have zero market penetration and are so far only empty promises to feign a better offer to potential customers.
 
He is feeling the pressure now and lost his cool. What is it JHH? Is that leather jacket getting a little tight or something? :laugh::laugh:

The marketing and the lies are like two ends of a pole; nVidia may well end up knocking themselves back into the hole they dug.
 
Ultimately this is good news: finally Vega can compete with the 1080 Ti. Let's wait for the reviews and see if TSMC's 7nm is the silver bullet many had hoped for.
 
Give that guy a piss test lol.
 