Thursday, January 10th 2019
NVIDIA CEO Jensen Huang on Radeon VII: "Underwhelming (...) the Performance is Lousy"; "Freesync Doesn't Work"
PC World managed to get hold of NVIDIA CEO Jensen Huang and picked his brain on AMD's recently announced Radeon VII. Skipping past the usual amicable, politically correct answers, Jensen made his thoughts clear on what the competition is offering against NVIDIA's RTX 2000 series. The answer? Radeon VII is an "underwhelming product", because "The performance is lousy and there's nothing new. [There's] no ray tracing, no AI. It's 7nm with HBM memory that barely keeps up with a 2080. And if we turn on DLSS we'll crush it. And if we turn on ray tracing we'll crush it." Not content with dissing the competition's product, Jensen Huang also quipped about AMD's presentation and product strategy, saying that "It's a weird launch, maybe they thought of it this morning."
Of course, the real market penetration of the technologies Jensen Huang mentions is currently extremely low - only a handful of games support NVIDIA's forward-looking ray tracing technologies. That AMD chose not to invest significant resources and die space in what is essentially a stop-gap high-performance card to go against NVIDIA's RTX 2080 means its 7 nm, 331 mm² GPU will compete against NVIDIA's 12 nm, 545 mm² die - if performance estimates are correct, of course.
The next remarks came regarding AMD's FreeSync (essentially a name for VESA's Adaptive-Sync), which NVIDIA finally decided to support on its GeForce graphics cards - something the company could have done from the outset, instead of going the proprietary, module-added, cost-increased route of G-Sync. While most see this as a sign that NVIDIA has seen a market slowdown for its price-premium G-Sync monitors and is simply ceding to market demands, Huang sees it another way, saying that "We never competed. [FreeSync] was never proven to work. As you know, we invented the area of adaptive sync. The truth is most of the FreeSync monitors do not work. They do not even work with AMD's graphics cards."
In the wake of these words from Jensen, it's hard to understand the overall silence from users whose FreeSync monitors supposedly aren't working.
Reportedly, NVIDIA found only 12 out of 400 FreeSync monitors tested to work with its G-Sync technology automatically in the initial battery of tests, with most panels requiring a manual override to enable it. Huang promised that "We will test every single card against every single monitor against every single game and if it doesn't work, we will say it doesn't work. And if it does, we will let it work," adding a snarky punchline to the matter with a "We believe that you have to test it to promise that it works, and unsurprisingly most of them don't work." Fun times.
Source:
PC World
-Very expensive chip? Not likely. It's <400 mm². Even on 7 nm, that's likely cheaper than Nvidia's monster 12 nm chips. It does need an interposer and HBM, but given that GDDR6 is reportedly 60-70% more expensive than GDDR5(X), that leaves us at roughly price parity between GDDR6 and HBM2. AMD uses twice as much memory, which means lower margins there, but it's not night and day. The interposer is likely not that expensive, given that it's far smaller than the Fiji lineup's interposer.
-"That much higher TDP"? While there's no denying that Nvidia's cards are more efficient, by a significant margin, the difference between a 300W VII and a 275W 1080Ti or 225W 2080 is ... not that much, really. 9 and 33% respectively. The difference was far bigger the last go around, with the 275W V64 roughly matching the 180W 1080 (52%).
-Strategically, this is AMD saying "Hey, we're here, and we can compete this go-around too" rather than leaving the old, 1080-ish V64 as their highest performer. At the same price as or cheaper than the 2080 for matching performance, this is a good alternative, particularly with a good tri-fan cooler on the reference card. That makes a lot of sense, as the other option would be leaving the RTX series unopposed until Navi arrives. If that's within a few months, this would be of no use, but if it's toward fall or Q4, it's a very good strategic move. Nvidia can't launch anything new in response; all it can do is lower prices on its existing cards, which we know are ridiculously expensive to produce, even if they are still overpriced. If Nvidia doesn't, AMD will sell more. Whatever happens, this is a good move by AMD.
-"Spending a node shrink" on a refresh of a known architecture is a well established mode of operations in the tech world. It's when you don't have a node shrink that you need a new arch - a position that AMD has now put itself in. This primes the market for Navi. Of course, it lessens the "wow" effect if Navi delivers on improving perf/w (combining that with a node shrink would make it really stand out), but on the other hand it makes for continuity in AMD's offerings and the performance of their product stack.
-While I agree that Vega as an architecture is far better suited to professional applications (we've seen this proven plenty of times), apparently AMD feels confident enough in the gaming-related improvements from doubling the memory bandwidth and the number of ROPs to give it a go. I suggest we wait for reviews.
-You're right that 12GB of HBM2 might have made this cheaper, but getting 6hi vs. 8hi KGSDs of HBM2 is likely a challenge, and given that 8hi is in volume production for enterprise/datacenter/etc., it might be the same price or even cheaper than special-ordered 6hi stacks.
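As a quick sanity check on the TDP percentages in the list above, here's a throwaway sketch using the wattages cited in the comment (these are the figures quoted by the commenter, not official board specs):

```python
# TDP figures as cited in the comment above (watts); not official specs.
tdp = {
    "Radeon VII": 300,
    "GTX 1080 Ti": 275,
    "RTX 2080": 225,
    "Vega 64": 275,
    "GTX 1080": 180,
}

def tdp_gap_pct(card_a, card_b):
    """How much higher card_a's TDP is than card_b's, in whole percent (truncated)."""
    return int((tdp[card_a] / tdp[card_b] - 1) * 100)

print(tdp_gap_pct("Radeon VII", "GTX 1080 Ti"))  # 9
print(tdp_gap_pct("Radeon VII", "RTX 2080"))     # 33
print(tdp_gap_pct("Vega 64", "GTX 1080"))        # 52
```

The numbers line up with the comment: the Radeon VII's gap to Pascal/Turing (9% and 33%) really is far smaller than the V64-to-1080 gap (52%) was.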
He must know that such a statement will be discussed in various tech forums and will likely backfire, so what prompted such an odd remark? It makes no sense to me at all :confused:
Nvidia is "owning" AMD by providing a smidge more performance than last generation for decently more money. Meanwhile, AMD can't even beat THAT with a new card launch and a die shrink. So:
1) Nvidia took a technological advantage and wasted most of its extra die space on fad features that won't even get traction until a few generations from now. They should have waited to implement RTX until they had a die shrink and space to kill. Here, they basically made a slightly enhanced version of their last generation and added features that won't matter for a few gens. Whoops. That means performance didn't go up to any great degree, compelling them to swap the model numbers around to hide it behind marketing spin of "AND COMPARING THE 2080 TO THE 1080, WE SEE A HUGE UPLIFT IN FRAMERATE!"
2) AMD took a die-shrink advantage and managed to barely keep up with Nvidia's last-generation high end. So for the same money as Nvidia, they'll give you a card that mostly keeps up, at a higher power draw, and lacks even the fad features that Nvidia has.
So if Nvidia was "greedy" to charge basically the same for a tiny performance uplift plus fad features, then what is AMD for charging the same price for virtually the same (or less?) performance WITHOUT the fad features?
This is Aliens vs Predator all over again. No matter who wins this fight, we all just lost.
Vega 20 was intended for professional markets, which means it includes more FP64 cores and, I believe, some other new features - wasted die space on a very expensive and currently low-volume node. As I mentioned, if they had a Vega 20 without this and with GDDR, it would at least make some sense. No, I'm talking about using 3 stacks instead of 4.
It "barely keeps up with" our more expensive card "and if we turn on ray tracing we'll crush it" by halving our framerates...
eagerly waiting for AIBs and e-tailers to list them to see ...
but for now I am more inclined to get a deal on a Vega 64 rather than a 2070 from the green goblin ---
red dawn indeed ...
I can't say I've ever had issues with Nvidia cards before. I've had issues with SLI over the years, but not the big, glaring issues that people seem to claim. My next card will most likely be from Nvidia because I've been using them for so long... but if AMD can throw something out that works as well or better and costs less, I wouldn't be against picking up one of their GPUs for my next upgrade. My next GPU is still probably 2-3 years out (if my 980 Ti lasts that long), so who knows what we'll see then.
Vega VII is a "noteworthy product"
The performance is competitive, but there are no frills.
[There's] no ray tracing, no AI (underdeveloped gimmicks which you as a consumer don't really care about yet).
Its 7 nm process is a turning point, because Vega went from competing with the 1080 to competing with the 1080 Ti.
We can only rely on gimmicks that have zero market penetration and are so far only empty promises to feign a better offer to potential customers.
The marketing and the lies are like two ends of a pole - Nvidia had better end up knocking itself back into the hole it dug.