# AMD Announces the Radeon RX 6000 Series: Performance that Restores Competitiveness



## btarunr (Oct 28, 2020)

AMD (NASDAQ: AMD) today unveiled the AMD Radeon RX 6000 Series graphics cards, delivering powerhouse performance, incredibly life-like visuals, and must-have features that set a new standard for enthusiast-class PC gaming experiences. Representing the forefront of extreme engineering and design, the highly anticipated AMD Radeon RX 6000 Series includes the AMD Radeon RX 6800 and Radeon RX 6800 XT graphics cards, as well as the new flagship Radeon RX 6900 XT - the fastest AMD gaming graphics card ever developed.

AMD Radeon RX 6000 Series graphics cards are built upon groundbreaking AMD RDNA 2 gaming architecture, a new foundation for next-generation consoles, PCs, laptops and mobile devices, designed to deliver the optimal combination of performance and power efficiency. AMD RDNA 2 gaming architecture provides up to 2X higher performance in select titles with the AMD Radeon RX 6900 XT graphics card compared to the AMD Radeon RX 5700 XT graphics card built on AMD RDNA architecture, and up to 54 percent more performance-per-watt when comparing the AMD Radeon RX 6800 XT graphics card to the AMD Radeon RX 5700 XT graphics card using the same 7 nm process technology.

AMD RDNA 2 offers a number of innovations, including applying advanced power-saving techniques to high-performance compute units to improve energy efficiency by up to 30 percent per cycle per compute unit, and leveraging high-speed design methodologies to provide up to a 30 percent frequency boost at the same power level. It also includes new AMD Infinity Cache technology that offers up to 2.4X greater bandwidth-per-watt compared to GDDR6-only AMD RDNA-based architectural designs.

"Today's announcement is the culmination of years of R&D focused on bringing the best of AMD Radeon graphics to the enthusiast and ultra-enthusiast gaming markets, and represents a major evolution in PC gaming," said Scott Herkelman, corporate vice president and general manager, Graphics Business Unit at AMD. "The new AMD Radeon RX 6800, RX 6800 XT and RX 6900 XT graphics cards deliver world class 4K and 1440p performance in major AAA titles, new levels of immersion with breathtaking life-like visuals, and must-have features that provide the ultimate gaming experiences. I can't wait for gamers to get these incredible new graphics cards in their hands."

*Powerhouse Performance, Vivid Visuals & Incredible Gaming Experiences*
AMD Radeon RX 6000 Series graphics cards support high-bandwidth PCIe 4.0 technology and feature 16 GB of GDDR6 memory to power the most demanding 4K workloads today and in the future. Key features and capabilities include:

*Powerhouse Performance*
- *AMD Infinity Cache* - A high-performance, last-level data cache suitable for 4K and 1440p gaming with the highest level of detail enabled. 128 MB of on-die cache dramatically reduces latency and power consumption, delivering higher overall gaming performance than traditional architectural designs.
- *AMD Smart Access Memory* - An exclusive feature of systems with AMD Ryzen 5000 Series processors, AMD B550 and X570 motherboards, and Radeon RX 6000 Series graphics cards. It gives AMD Ryzen processors greater access to the high-speed GDDR6 graphics memory, accelerating CPU processing and providing up to a 13-percent performance increase on an AMD Radeon RX 6800 XT graphics card in Forza Horizon 4 at 4K when combined with the new Rage Mode one-click overclocking setting.
- *Built for Standard Chassis* - With a length of 267 mm and two standard 8-pin power connectors, and designed to operate with existing enthusiast-class 650 W-750 W power supplies, gamers can easily upgrade their existing large to small form factor PCs without additional cost.

*True to Life, High-Fidelity Visuals*
- *DirectX 12 Ultimate Support* - Provides a powerful blend of raytracing, compute, and rasterized effects, such as DirectX Raytracing (DXR) and Variable Rate Shading, to elevate games to a new level of realism.
- *DirectX Raytracing (DXR)* - Adding a high-performance, fixed-function Ray Accelerator engine to each compute unit, AMD RDNA 2-based graphics cards are optimized to deliver real-time lighting, shadow and reflection realism with DXR. When paired with AMD FidelityFX, which enables hybrid rendering, developers can combine rasterized and ray-traced effects to ensure an optimal combination of image quality and performance.
- *AMD FidelityFX* - An open-source toolkit for game developers available on AMD GPUOpen. It features a collection of lighting, shadow and reflection effects that make it easier for developers to add high-quality post-process effects that make games look beautiful while offering the optimal balance of visual fidelity and performance.
- *Variable Rate Shading (VRS)* - Dynamically reduces the shading rate for different areas of a frame that do not require a high level of visual detail, delivering higher levels of overall performance with little to no perceptible change in image quality.

*Elevated Gaming Experience*
- *Microsoft DirectStorage Support* - Future support for the DirectStorage API enables lightning-fast load times and high-quality textures by eliminating storage API-related bottlenecks and limiting CPU involvement.
- *Radeon Software Performance Tuning Presets* - Simple one-click presets in Radeon Software help gamers easily extract the most from their graphics card. The presets include the new Rage Mode stable overclocking setting that takes advantage of extra available headroom to deliver higher gaming performance.
- *Radeon Anti-Lag* - Significantly decreases input-to-display response times and offers a competitive edge in gameplay.
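The Infinity Cache bandwidth claims above boil down to a hit-rate-weighted average: requests served from the 128 MB on-die cache never touch GDDR6 at all. A minimal sketch of that effect, using the announced 256-bit bus but otherwise illustrative numbers (the per-pin data rate, on-die cache bandwidth, and hit rates here are assumptions for the sake of the model, not AMD figures):

```python
# Hit-rate-weighted effective-bandwidth model for a large last-level cache.
# Only the 256-bit GDDR6 bus comes from the announcement; the per-pin data
# rate, cache bandwidth, and hit rates below are illustrative assumptions.

def gddr6_bandwidth_gbps(bus_width_bits: int, gbps_per_pin: float) -> float:
    """Raw DRAM bandwidth in GB/s for a given bus width and per-pin data rate."""
    return bus_width_bits * gbps_per_pin / 8

def effective_bandwidth(dram_gbs: float, cache_gbs: float, hit_rate: float) -> float:
    """Requests that hit the on-die cache are served at cache speed;
    misses fall through to GDDR6."""
    return hit_rate * cache_gbs + (1 - hit_rate) * dram_gbs

dram = gddr6_bandwidth_gbps(256, 16.0)   # 512 GB/s raw (assuming 16 Gbps GDDR6)
cache = 2000.0                            # hypothetical on-die cache bandwidth, GB/s
for hit in (0.0, 0.5, 0.75):
    print(f"hit rate {hit:.0%}: {effective_bandwidth(dram, cache, hit):.0f} GB/s")
```

Even a moderate hit rate multiplies effective bandwidth several-fold, which is how a 256-bit card can behave like one with a much wider memory bus.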

*AMD Radeon RX 6000 Series Product Family*

*Robust Gaming Ecosystem and Partnerships*
In the coming weeks, AMD will release a series of videos from its ISV partners showcasing the incredible gaming experiences enabled by AMD Radeon RX 6000 Series graphics cards in some of this year's most anticipated games. These videos can be viewed on the AMD website. 
- DIRT 5 - October 29
- Godfall - November 2
- World of Warcraft: Shadowlands - November 10
- The Riftbreaker - November 12
- Far Cry 6 - November 17

*Pricing and Availability*
AMD Radeon RX 6800 and Radeon RX 6800 XT graphics cards are expected to be available from global etailers/retailers and on AMD.com beginning November 18, 2020, for $579 USD SEP and $649 USD SEP, respectively. The AMD Radeon RX 6900 XT is expected to be available December 8, 2020, for $999 USD SEP.
AMD Radeon RX 6800 and RX 6800 XT graphics cards are also expected to be available from AMD board partners, including ASRock, ASUS, Gigabyte, MSI, PowerColor, SAPPHIRE and XFX, beginning in November 2020.
The complete AMD slide deck follows.



----------



## Metroid (Oct 28, 2020)

AMD wants margin, which is the reason only the 6800/6900 are being launched first; AMD wants to capitalize on the $600 GPUs before the 6700 and so on around $400. AMD still has some RX 5700 stock to sell at $400 anyway, and isn't keen to cannibalize the RX 5700 even though it's EOL. Aside from this, I want to know more about Infinity Cache; that approach has given AMD a huge boost with the Ryzen 5xxx series and now Big Navi, aka the 6xxx series.


----------



## dicktracy (Oct 28, 2020)

No mention of DXR performance at all. For all we know, you flick that setting on in Cyberpunk 2077 and it'll drop down to 2060 performance LOL!


----------



## AusWolf (Oct 28, 2020)

Metroid said:


> AMD wants margin, reason only 6800/6900 will be launched, amd wants to capitalize on the $600 gpus first then 6700 and so on around $400. AMD still have some rx 5700 to be sold at $400 anyway, amd not keen to cannibalize its rx 5700 even though is eol. Aside from this, I want to know more about infinity cache, that has given amd a huge boost to ryzen 5xxx series and now big navi.


Yeah, AMD is pure evil because they want to make money. Hey wait... does that make every other company evil too?

Come on guys, these new gpus aren't even out yet, and you're already being negative about them. Is it really that hard to be happy that nvidia might end up having real competition again? (Regardless of personal preference)


----------



## saikamaldoss (Oct 28, 2020)

3090 performance for $999 is awesome


----------



## Chrispy_ (Oct 28, 2020)

They're matching Nvidia's pricing but not moving the performance/$ forward for anyone against their own product stack; these 6800 and 6900 cards aren't supposed to supplant the 5700-series, they're complementing them at similar performance/$.

The real question is when will AMD release the 6700 series that match the next-gen consoles and bring AMD raytracing/DXR to the mainstream market? We can enjoy reading about flagships but honestly even among TPU readers the number of us running GPUs north of $500 is a pretty small minority.


----------



## Metroid (Oct 28, 2020)

AusWolf said:


> Yeah, AMD is pure evil because they want to make money. Hey wait... does that make every other company evil too?
> 
> Come on guys, these new gpus aren't even out yet, and you're already being negative about them. Is it really that hard to be happy that nvidia might end up having real competition again? (Regardless of personal preference)



I'm not being negative at all; that is a business practice, and Nvidia has done and does a lot of that too. People need to understand that AMD is here to make money; they will not sell something better for much less than the competition. If people think AMD would come out on top of the performance segment and be 50% cheaper than Nvidia, they've lost their minds. This shows exactly what it's all about: at the end of the day, profit margin matters, especially if you almost went bankrupt a few years ago.

I'm not surprised about AMD's pricing at all. They have seen scalpers and people queuing up to buy the RTX 3080; scalpers sold the RTX 3080 for thousands of dollars, and the timing is impeccable for AMD. People have not been spending their money the way they used to due to COVID-19, so they have money and nothing to do with it, which makes upgrading computer parts and gaming very attractive at the moment. I'm surprised AMD has not asked for more money hehe, because right now people have more money than sense when it comes to buying a GPU, and we have seen it with the RTX 3080.


----------



## Chrispy_ (Oct 28, 2020)

dicktracy said:


> No mention of DXR performance at all. For all we know, you flick that setting on in Cyberpunk 2077 and it'll drop down to 2060 performance LOL!


3DMark Port Royal leaks published by Igor's Lab put the 6800XT at 2080Ti levels of raytracing performance, so a bit behind Ampere but not enough that it's going to be a deal-breaker. You've also got to remember that Port Royal was made with Nvidia's help specifically for the RTX architecture, so you can't even read too much into that leak, other than assuming it's a worst-case scenario for AMD and DXR.


----------



## rhaoul (Oct 28, 2020)

AusWolf said:


> Yeah, AMD is pure evil because they want to make money. Hey wait... does that make every other company evil too?
> 
> Come on guys, these new gpus aren't even out yet, and you're already being negative about them. Is it really that hard to be happy that nvidia might end up having real competition again? (Regardless of personal preference)



There's a difference between the necessity for a company to make money and profiting excessively from a situation like this one: weak competition. It's a tacit agreement.


----------



## saikamaldoss (Oct 28, 2020)

Though it looks very promising and great, if the 3080 is better in FPS with RT enabled, I will go with the 3080. I need RT no matter what others say. I love the RT effect and I need it badly. I've been waiting a while to upgrade my crappy Vega 64 :/


----------



## Turmania (Oct 28, 2020)

Shadowlands is out November 10th? Don't think so...


----------



## rhaoul (Oct 28, 2020)

Metroid said:


> I'm surprised AMD have not asked for more money hehe *because at moment people have more money* than sense in order to buy a gpu and we have seen it with the rtx 3080.



Is this a bad joke ?


----------



## Zubasa (Oct 28, 2020)

Turmania said:


> Shadowlands is out November 10th? Don't think so...


The pre-expansion patch is already out, and it does support ray-traced shadows. It is barely noticeable in game.


----------



## spnidel (Oct 28, 2020)

dicktracy said:


> No mention of DXR performance at all. For all we know, you flick that setting on in Cyberpunk 2077 and it'll drop down to 2060 performance LOL!


moving the goalposts like the cute little shill that you are lmao


----------



## Metroid (Oct 28, 2020)

rhaoul said:


> Is this a bad joke ?



It's called reality. Lots of people paid thousands of dollars for the RTX 3080 and 3090 on eBay; there will always be people with more money than sense. Those same people buy bottles of wine for thousands of dollars, or pay $5k for a $100 ticket for the privilege of standing in the front row of an event. Need to accept it and move on hehe


----------



## ZoneDymo (Oct 28, 2020)

Performance is good, price not so much, sigh.

Take the RX 6800: it beats the 2080 Ti, so it will be a bit faster than an RTX 3070, but it also costs $580 vs $500 for the RTX 3070, so it's not really a clear winner in the "what to buy" discussion.


----------



## GhostRyder (Oct 28, 2020)

A lot of interesting pieces in the announcements today. They have many new techs that sound interesting, but what will be more interesting is how (and if) they are implemented in a wide array of games.

I think the most interesting part, though, was the 6900 XT. I honestly thought it was just a rumor and not something that actually exists (at least not something coming out this soon). That has me intrigued.


----------



## Ashtr1x (Oct 28, 2020)

They really caught up very fast in raster performance vs Nvidia; they didn't have an answer to the 1080 Ti for a long time, and now they beat the 2080 Ti so quickly. The perf/watt is also good, but the big problem is the ray tracing HW/SW stack and DLSS/AI. They did mention Super Resolution, but it's not clear how they are going to do it, or how they'll handle the existing pool of RT games; e.g., I want to see Metro Exodus RT performance.

Nvidia is definitely going to refresh their HW; I'm expecting Ti/SUPER revisions of the 3080 and 3070. They still hold the ray tracing upper hand along with G6X memory; it all boils down to price and availability AND drivers. Also NVENC too. A shame that AMD is not saying anything on CFX; with Nvidia killing SLI on their side, I think this was expected... still, the 3090 gets NVLink to score some points.

On a side note, with the 6800's 60 CUs competing against a 2080 Ti at around 1.8 GHz, the 52-CU Xbox is going to be really fast by these performance estimates. The PS5 will probably be a touch slower even with its 2.2 GHz clocks, going by the 6800 XT's 2 GHz base clock and the PS5's smaller CU count. Damn.
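The CU-count-times-clock comparison above can be made concrete with the standard FP32 throughput formula for RDNA-class GPUs (CUs × 64 shader lanes × 2 ops per clock × clock). A sketch using the announced RX 6800 game clock and the commonly cited console specs (treat all figures as approximate public numbers, not measured performance):

```python
# Peak FP32 throughput for RDNA/RDNA2-class GPUs:
# CUs x 64 shader lanes x 2 FLOPs per clock x clock (GHz) -> TFLOPS.
def fp32_tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000

for name, cus, clk in [
    ("RX 6800 (game clock)", 60, 1.815),
    ("Xbox Series X",        52, 1.825),
    ("PS5 (max boost)",      36, 2.23),
]:
    # RX 6800 ~13.9, Series X ~12.1, PS5 ~10.3 TFLOPS
    print(f"{name}: {fp32_tflops(cus, clk):.1f} TFLOPS")
```

Peak TFLOPS ignores bandwidth, cache, and clock-sustaining behavior, so it's only a rough cross-device yardstick rather than a frame-rate predictor.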


----------



## birdie (Oct 28, 2020)

I'm waiting for reviews; AMD kinda skimped on RTRT performance figures: "we have it, great, but we won't show the numbers". Perhaps, as predicted, RDNA 2.0 cards have great rasterization performance but not so much RTRT performance. Still, it's really great that we finally have competition in high-end graphics for the first time in many, many years. Also, AMD hasn't mentioned any DLSS alternative, which further casts doubt on their RTRT performance. It surely looks like NVIDIA will remain the king of RTRT-enabled games, whose number is only going to go up now that both consoles support RTRT.

And lastly, let me calm down everyone's excitement here: the vast majority of people out there won't buy any of the RTX 3070/3080/3090 or RX 6800 (XT)/6900 XT cards: they are all priced very high.

I'm waiting for midrange products, where we'll see these vendors' true colors: RTX 3050/3060 (Ti), RX 6500/6600. That's what really matters. Cards for rich Europeans and Americans - not so much. Again, the real money and the real market are below $300 for a GPU. We haven't yet seen anything from either AMD or NVIDIA in this regard.


----------



## Cheeseball (Oct 28, 2020)

Metroid said:


> It calls reality, lots of people paid thousands of dollars for the rtx 3080 and 3090 on ebay, there will always be people with more money than sense, those people buy bottles of wines for thousands of dollars, pay 5k for a 100 dollar ticket for the privilege to stay in the first row of an event, need to accept it and move on ehhe



Yeah, it's sad that scalping is happening. I'm lucky to be living in the US (I'm laughing at this statement, I know) and to have been able to reserve a spot on EVGA.com to get my RTX 3080 at MSRP ($810.00 + sales tax + shipping). I definitely would not have bought the RTX 3080 at $900, $1,000 or whatever high prices are going around now.


----------



## Fluffmeister (Oct 28, 2020)

ZoneDymo said:


> performance is good, price not so much, sigh.
> 
> like take the RX6800, beats the 2080ti so it will be a bit faster then an RTX3070, but it also costs 580 dollars vs 500 dollar for the RTX3070, so not really a clear winner in the "What to buy" discussion.



Yeah, they are okay in many ways, but ultimately nothing really changes the market much. It's great that they have undercut the 3090 by $500, but we are talking $1,000 smackers here, and Nv doesn't care because their 3090s are sold before they hit the shelves anyway.

Meh, I need a CPU anyway.


----------



## AusWolf (Oct 28, 2020)

rhaoul said:


> There's a difference between the necessity for a company to make money and profit excessively from a situation like this: poor competition. It's a tacit agreement.


It's Nvidia that hasn't had real competition for years. Now that AMD is (or seems to be) catching up, it's only natural that they're matching prices. Like it's been said: you can't expect them to match performance and sell at half price.


----------



## mak1skav (Oct 28, 2020)

Prices should be lower, I think. Maybe they have quite a good stock and want to take advantage of NVIDIA not being able to produce the new 3xxx series fast enough, and then lower their prices once production from NVIDIA is up again.


----------



## milewski1015 (Oct 28, 2020)

Turmania said:


> Shadowlands is out November 10th? Don't think so...



Nov 10th is the release date for the Shadowlands feature video on AMD's website


----------



## AusWolf (Oct 28, 2020)

birdie said:


> I'm waiting for reviews and AMD kinda skimped on RTRT performance figures: "we have it, great, but we won't show the numbers". Perhaps as predicted, RDNA 2.0 cards have a great rasterization performance but not so much RTRT performance. Still, it's really great the we finally have competition at high end graphics for the first time in many many years. Also, AMD hasn't mentioned any DLSS alternative which further casts doubt into their RTRT performance. It surely looks like NVIDIA will remain the king of RTRT enabled games whose number is only going to go up now that both consoles support RTRT.
> 
> And lastly, let me calm down everyone's excitement here: absolute most people out there won't buy any of RTX 3070/3080/3090 RX 6800(XT)/6900 XT cards: they are all priced very high.
> 
> I'm waiting for midrange products where we'll see these vendors' true colors: RTX 3050/3060(Ti), RX 6500/6600. It's what really matters. Cards for the rich Europeans and Americans - no so much. Again, the real money and the real market are below $300 for a GPU. We haven't yet seen anything from either AMD or NVIDIA in this regard.


I'm not so much into DLSS, but I agree on the RT part. Would have been nice to see numbers on that front too.

To be honest, I've been planning on getting a 6900 XT because I thought it would be a 3070/3080 competitor. Seeing that it's more than that is great, but I'm not willing to pay a thousand bucks for a graphics card. Maybe I'll go for a 6700 or 6800 XT which will be more than enough for 1080p anyway. We'll see when the reviews (and the 3060) come.


----------



## Xuper (Oct 28, 2020)

https://twitter.com/i/web/status/1321508119797780480
Wow, that Infinity Cache!


----------



## ratirt (Oct 28, 2020)

AusWolf said:


> I'm not so much into DLSS, but I agree on the RT part. Would have been nice to see numbers on that front too.


From what I understand, AMD's DLSS variant is the Super Resolution feature, although it is supposed to work in a totally different way than NV's DLSS.
RT performance hasn't been shown. The way I take it, as explained by AMD, their implementation will use DXR, but some games already developed may not work as intended, because it works slightly differently from NV's RT. AMD claims 35 games supporting AMD's RT/DXR with the DirectX 12 Ultimate API. I guess we will have to wait and see what the RT is going to look like and how it will perform. Infinity Cache is supposed to help a lot with RT.


----------



## rhaoul (Oct 28, 2020)

AusWolf said:


> It's been nvidia that haven't had real competition for years. Now that AMD is (or seems to be) catching up, it's only natural that they're matching prices. Like it's been said: you can't expect them to match performance and sell at half price.


There are only two actors in the market ... there is a tacit agreement on prices


----------



## Searing (Oct 28, 2020)

Chrispy_ said:


> They're matching Nvidia's pricing but not moving the performance/$ forward for anyone against their own product stack; These 6800 and 6900 cards aren't supposed to supplant the 5700-series, they're complementing them at similar performance/$.
> 
> The real question is when will AMD release the 6700 series that match the next-gen consoles and bring AMD raytracing/DXR to the mainstream market? We can enjoy reading about flagships but honestly even among TPU readers the number of us running GPUs north of $500 is a pretty small minority.



Now you understand Nvidia's fake paper launch. AMD is moving perf/dollar ahead. The RX 6800 is half the price of the 2080 Ti but has 5 GB more VRAM and is 18 percent faster on average across those games. You can't say "wowee, look how good the 3080 is vs the 2080 Ti" and then say "AMD didn't improve perf/dollar vs the 3070" without coming off as a tool.


----------



## tfdsaf (Oct 28, 2020)

dicktracy said:


> No mention of DXR performance at all. For all we know, you flick that setting on in Cyberpunk 2077 and it'll drop down to 2060 performance LOL!



That is because all existing ray-traced games are built with Nvidia's rendering systems, and it's literally fewer than 10 games anyway!

We are going to see some real competition from now on, and considering RDNA2 powers the next-gen consoles, I expect 90% of games that use ray tracing to optimize it for RDNA2.



Ashtr1x said:


> They really caught up very fast in Raster technology vs Nvidia, they didn't had an answer for the 1080Ti for a long time, and now they beat 2080Ti so quick. The perf/watt is also good but the big problem is Ray Tracing HW/SW stack and the DLSS/AI, they did mention Super Resolution. But not sure how they are going to do it. Along with the existing pool of RT games, like for eg want to see Metro Exodus RT performance.
> 
> Nvidia is definitely going to refresh their HW, I'm expecting Ti / SUPER revisions of 3080 and 3070. They still hold that Ray Tracing upper hand along with G6X memory, it all boils down to price and availability AND Drivers. Also the NVENC too. A shame that AMD is not even saying anything on CFX, with Nvidia killing SLI on their side I think this was expected..still a 3090 gets NVLink to have some points.
> 
> On a sidenote, with 6800 having 60CUs competing against a 2080Ti, the clock speed is at 1.8GHz and so the 52CUs Xbox is going to be really fast by the performance estimates then... PS5 probably will be a touch slower even with it's 2.2GHz clocks as we can compare the 6800XT's 2GHz base of 70CUs and 32CUs on PS5. Damn.


There are NOT going to be any refreshed variants from Nvidia for at least 6 months. They haven't even launched their new cards yet; the 3080/90 have been out for about a month, but it's a paper launch, and the 3070 is launching tomorrow with limited availability.

According to AMD, their Infinity Cache gives them over 3x the bandwidth, so that would make their 16 GB of 256-bit GDDR6 DOUBLE the effective bandwidth of the RTX 3080.


----------



## FinneousPJ (Oct 28, 2020)

Wow, that's all I wanted and more! I don't care for ray tracing until it's more mature, so I'm very impressed. Shame about the pricing though. I guess they won't be forced to compete on price until Nvidia can actually deliver products lol


----------



## Bubster (Oct 28, 2020)

Way to go AMD


----------



## MikeMurphy (Oct 28, 2020)

Having maxed out 8 GB of VRAM at 4K, I'm looking for more than the 10 GB featured on Nvidia's cards. These cards featuring 16 GB of VRAM seem like an excellent solution.


----------



## HD64G (Oct 28, 2020)

Based on this official slide from the presentation, the 6800 non-XT is almost 20% faster at 1440p vs the 2080 Ti/3070. Even if AMD is trying to paint a better picture of their product, I think the difference in @W1zzard's review won't be lower than 15%. So its price is reasonable imo. Other than that, RDNA2 is the best gaming arch to date, and indeed this is the Zen 2 moment for AMD's GPUs - and without using chiplets (yet...). As for Infinity Cache offering the equivalent of 1.6 TB/s of bandwidth on a 256-bit bus, this is truly revolutionary if proved real (AMD's official performance numbers confirm it for now). And that will work wonders for smaller GPUs too.


----------



## Vayra86 (Oct 28, 2020)

Zubasa said:


> The pre-expansion patch is already out, and it does support ray-traced shadows. It is barely noticeable in game.



The bit they showed was an absolute joke. Even Godfall failed to impress me RT-wise. It's really hit and miss: some games and scenes really do show what it can do, but most others... neh. If it's added on later, it shows.

Definitely not a killer feature yet, and I'll say right now: it's good fun if my next GPU can do some of it, but I wouldn't hesitate to turn it off.


----------



## jabbadap (Oct 28, 2020)

ratirt said:


> From what i understand, The DLLS variant from AMD is the Super Resolution feature although it is supposed to work in a totally different way than the NV's DLSS.
> RT hasn't been showed. It would mean as a add feature. How I take it, and been explained by AMD, This will work exploit DXR but but some games developed already may not work as intended. It will work slightly different from the NV's RT and that is the reason behind it. AMD claims, 35 games supporting AMD's RT DXR with the Directx12 Ultimate API. I guess we will have to wait and see what the RT is going to look like and how will it perform. The infinity cash is supposedly help a lot with the RT.



Uhm, where did they claim such a thing? All I can see is the slide claiming 35 games supporting FidelityFX, which does not mean that those 35 games have or will have any form of DXR or RT. FidelityFX has been around for a while now. And yeah, DirectX 12 Ultimate is more than just RT.


----------



## Chrispy_ (Oct 28, 2020)

Searing said:


> Now you understand nvidia's fake paper launch. AMD is moving perf/dollar ahead. The RX 6800 is half the price of the 2080 ti but has 5GB more VRAM and is 18 percent faster on average over those games. You can't say "wowee look how good the 3080 is vs the 2080 ti" then same "AMD didn't improve perf/dollar vs the 3070" without coming off as a tool.


Did you just put words into my mouth _and_ call me a tool in a single sentence? That's how I read it, anyway.

Where did I say "wowee look how good the 3080 is vs the 2080 ti"?
I've never said that.

As for "AMD didn't improve performance/$ vs the 3070", they haven't. AMD directly compared the 6800 at $579 to the 2080 Ti, and it was a damn close match. Reviewers (yesterday) proved that the $499 3070 is also a damn close match, so AMD's $579 card is performing like Nvidia's $499 card.

More to the point, though, I didn't even compare to Nvidia at all - I specifically said "not moving the performance/$ forward for anyone against their *own* product stack". We're talking 2080 Ti performance for $579 compared to the previous best AMD had to offer, the 5700 XT at $399.

So, $579 is 45% more expensive than $399, and here's how much faster that extra 45% cost is...






Yeah, AMD's best value 6000-series card has *WORSE* performance/$ than their current 5000-series card.
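The arithmetic above boils down to a ratio check: a $579 card has to be more than 45% faster than a $399 card for performance per dollar to move forward at all. A quick sketch with the quoted prices and a purely hypothetical 1.5x relative-performance figure (the real number awaits independent reviews):

```python
# Perf/$ ratio check: only prices here are from the thread ($399 5700 XT,
# $579 RX 6800); the 1.5x relative performance is a hypothetical stand-in.
def perf_per_dollar(relative_perf: float, price: float) -> float:
    return relative_perf / price

baseline = perf_per_dollar(1.00, 399)    # RX 5700 XT at its $399 launch price
contender = perf_per_dollar(1.50, 579)   # hypothetical: 6800 is 50% faster

print(f"price ratio: {579 / 399:.2f}x")              # 1.45x more expensive
print(f"perf/$ ratio: {contender / baseline:.2f}x")  # >1.00x only if speedup beats price ratio
```

With a 1.5x speedup against a 1.45x price increase, perf/$ barely moves; anything under a 45% speedup would make it regress.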


----------



## TechLurker (Oct 28, 2020)

It's nice to see that investing into an all AMD system could have some tangible results. Then again, the writing on the wall was there with the Infinity Architecture announcement and their plans to go all-in on heterogeneous computing (something they started in the EPYC ecosystem, IIRC, according to a Tomshardware article on SAM). And for now, AMD is in the unique position to be able to do this kind of thing. Intel is the next likely, given that they're working on that heterogeneous GPU concept that combines iGPU with dGPU, and could potentially expand to a full shared ecosystem too. NVIDIA is a bit of a wild-card; they can't directly compete PC-wise due to a lack of an x86 license, but could put some pressure on a similar thing via ARM in the mobile space (although AMD had moved preemptively with their agreement with Samsung to bring RDNA to smartphones).

The near future is looking quite nice for AMD; if they can just fully master heterogeneous GPU linking as well as MCM GPUs, they could further beat Intel at its own planned game and more readily scale up and down to rival NVIDIA's best.


----------



## IceShroom (Oct 28, 2020)

Metroid said:


> AMD wants margin, reason only 6800/6900 will be launched, amd wants to capitalize on the $600 gpus first then 6700 and so on around $400. AMD still have some rx 5700 to be sold at $400 anyway, amd not keen to cannibalize its rx 5700 even though is eol. Aside from this, I want to know more about infinity cache, that has given amd a huge boost to ryzen 5xxx series and now big navi aka 6xxx series.


The RX 5700 is discontinued, not EOL'd. And the RX 5700 was $350.


----------



## AnarchoPrimitiv (Oct 28, 2020)

rhaoul said:


> There's a difference between the necessity for a company to make money and profit excessively from a situation like this: poor competition. It's a tacit agreement.



Why is there such a strong sense of entitlement, people believing AMD should sell its products at cost even when the performance matches or exceeds the competition? Don't give me that driver BS either; those days are gone with Lisa Su at the helm, and since she's been there, she's delivered on her promises. This is the first GPU launch developed completely under her leadership, so I'm going to give her the benefit of the doubt and assume the drivers will be just as good as Nvidia's this time around. (Let's not forget that Nvidia has already run into driver issues with the 3000 launch - issues some people would be calling the end of the world if AMD had caused them. Go ahead, call me a fanboy; I think these statements are nothing more than truthful observations.)


----------



## hero1 (Oct 28, 2020)

saikamaldoss said:


> 3090 performance for 999$ is awesome



I'll wait and see what retailers and AIBs charge for it before I call it awesome, but the competition is healthy and welcome.

On a side note, I'll have to put my 290X back in my system, sell my 1080 and wait until Dec unless the RTX 3080 stock issue somehow improves.


----------



## AnarchoPrimitiv (Oct 28, 2020)

ZoneDymo said:


> performance is good, price not so much, sigh.
> 
> like take the RX6800, beats the 2080ti so it will be a bit faster then an RTX3070, but it also costs 580 dollars vs 500 dollar for the RTX3070, so not really a clear winner in the "What to buy" discussion.


Seems to me this is the same as the 2070 Super vs the 5700 XT with the roles reversed; did you consider that bad pricing too? Am I the only one who thinks the price complaints are ridiculous? The 6800 XT matches the 3080 and is cheaper; what's the problem?


----------



## TheoneandonlyMrK (Oct 28, 2020)

Looks good; it'd be nice to see some independent reviews though.
The 6800 XT looks like a good buy.


----------



## ShurikN (Oct 28, 2020)

hero1 said:


> I'll wait and see what retailers and AIB charge for it before I call it awesome, but the competition is healthy and welcomed.
> 
> On a side note, I have to put my 290X back in my system, part with my 1080, and wait until Dec unless the RTX 3080 stock issue somehow improves.


As far as I remember all the leaks and rumors, there is not going to be an AIB 6900XT


----------



## birdie (Oct 28, 2020)

AnarchoPrimitiv said:


> so I'm going to give her the benefit of the doubt and assume drivers will be just as good as Nvidia this time around



Sorry, that doesn't sound plausible. They've already had at least three major iterations (rewrites) of their drivers since they purchased ATI back in 2006, and the latest drivers continue to suck hard.


----------



## Fluffmeister (Oct 28, 2020)

It's interesting how quickly prices get justified: Nvidia are famously greedy whilst AMD are the underdog charity case, yet the prices creep up regardless.

If AMD have the better product they should definitely charge for it, no doubt about it, but if you think they are your friend... prepare to be disappointed.


----------



## Vayra86 (Oct 28, 2020)

Fluffmeister said:


> It's interesting how quickly prices get justified, Nvidia are famously greedy whilst AMD are the underdog charity case, meanwhile the prices creep up regardless.
> 
> If AMD have the better product they should definitely charge for it no doubt about it, but if you think they are your friend.... prepare to be disappointed.



The 6800 seems too pricey for what it is; they won't be selling a lot of those.

But in a general sense, disregarding the politics and mindshare components entirely: AMD does charge a 'fair' price compared to the competition. And nobody really complained about the price/perf of the 3080 to begin with; it's a pretty decent deal. We can't expect a hard price point to remain fixed over prolonged periods of time, as the value of money changes.

Doesn't make me a fan of it though, and I'll still wait until the price suits me. Patience always works out well, in my experience.


----------



## yotano211 (Oct 28, 2020)

Metroid said:


> It calls reality, lots of people paid thousands of dollars for the rtx 3080 and 3090 on ebay, there will always be people with more money than sense, those people buy bottles of wines for thousands of dollars, pay 5k for a 100 dollar ticket for the privilege to stay in the first row of an event, need to accept it and move on ehhe


And there are people like me who buy hot items and sell them at increased prices to people who want them and have the money.
I didn't buy any 3080s or 3090s because I don't buy computers to resell. I usually buy other items, like iPhones and anything Apple.


----------



## R0H1T (Oct 28, 2020)

HD64G said:


> As for the infinity cache offering equal to 1,6TB/s bandwidth performance with 256bit bus,


They're just adding the cache bandwidth numbers to the VRAM's, which is kinda misleading, because you aren't getting *1.6 TBps* over 16 GB; it's more like ~1.1 TBps for the 128 MB cache and the rest for VRAM. Imagine if Intel added up all their caches' bandwidth and showed it in a slide against AMD. Not saying it doesn't or couldn't work as intended, but the number itself is pretty useless!
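A back-of-the-envelope sketch of how the two numbers should honestly be combined: a hit-rate-weighted average, not a sum. The bandwidth and hit-rate figures below are illustrative assumptions, not AMD's published measurements:

```python
# Effective bandwidth of a cache + VRAM hierarchy is a weighted
# average over the cache hit rate, not the sum of both figures.
def effective_bandwidth(cache_gbps: float, vram_gbps: float, hit_rate: float) -> float:
    """Return effective bandwidth in GB/s for a given cache hit rate."""
    return hit_rate * cache_gbps + (1.0 - hit_rate) * vram_gbps

# Assumed figures: ~2 TB/s out of the on-die cache, 512 GB/s from
# 256-bit GDDR6 at 16 Gbps, and a guessed ~58% hit rate at 4K.
print(round(effective_bandwidth(2000.0, 512.0, 0.58)))  # -> 1375 (GB/s)
```

Even with generous assumptions, the blended number lands well under the headline "1.6 TB/s", which is the point: the marketing figure only holds if you count cache and VRAM traffic as if they stacked.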


----------



## wheresmycar (Oct 28, 2020)

Wow, I totally underestimated AMD's bullish attack (6800 XT). I was expecting something marginally trailing the 3080 for around $600; $650 for the announced outcome, I'm all for it!! 6900 XT: no comment.

It would be interesting to see if Nvidia replies with a 3080 Ti... or AMD responds with something of its own... price cuts being a possibility too. Exciting times ahead, and more importantly it's good to see AMD levelling up.

For me... definitely sticking with my 1080 Ti a little while longer until the AMD vs NVIDIA battle blood dries and the dust settles.


----------



## Sasqui (Oct 28, 2020)

So I can retire my Vega 64 now?


----------



## mandelore (Oct 28, 2020)

If AMD are going to compare a 6900 XT with Rage Mode (overclocked), surely they should compare it to an overclocked 3090 for a fair comparison.

Or just not use Rage Mode. Can't get a proper idea of performance, although they "are" just slides.

We need a proper game-off between these two cards.


----------



## R0H1T (Oct 28, 2020)

Well, the 3090 is in permanent Rage Mode, given its power consumption and thermals; seems pretty fair to me.


----------



## HTC (Oct 28, 2020)

All 3 of these cards are way too expensive for me. The performance IS there, but so is the price...

Unless there's a card with substantially superior performance (vs my current card) @ a LOW power consumption that costs @ most 380€, I'll be skipping this round of GPUs entirely.


----------



## Cheeseball (Oct 28, 2020)

mandelore said:


> If AMD are going to compare a 6900XT with rage mode (overclocked), surely they should compare it to an overclocked 3090 for a fair comparison.
> 
> Or just not use rage mode. Cant get a proper idea of performance, although they "are" just slides.
> 
> Need a proper game-off between these two cards



That is a good point. Are there slides that show the configuration of the RTX 3080/3090 they used for the comparison? If it's stock NVIDIA cards vs their own with Rage+SAM enabled, that wouldn't be fair per se.

But being $50 less with 6 GB more VRAM (albeit standard GDDR6, plus the 128 MB Infinity Cache) kind of evens that out in a way. From a performance perspective, based on AMD's slides, they are nearly match-for-match, which is what we as consumers need.




HTC said:


> All 3 of these cards are way too expensive for me. The performance IS there, but so is the price ...
> 
> Unless there's a card with substantially superior performance (VS my current card) @ a LOW power consumption that costs @ most 380€, i'll be skipping this round of GPUs entirely.



The current RTX 2060 Super, RX 5700 XT and RTX 2070 look like they would fulfill that for you, unless you want the newer architectures for upcoming games.


----------



## witkazy (Oct 28, 2020)

What a pleasant turn of events is all I'm sayin'. Cheers.


----------



## HTC (Oct 28, 2020)

Cheeseball said:


> That is a good point. *Are there slides that show the configuration of the RTX 3080/3090 they used to compare?* If it's stock NVIDIA cards vs their own with Rage+SAM being used, that wouldn't be fair per say.





Cheeseball said:


> The current *RTX 2060 Super, RX 5700 XT and RTX 2070* looks like that would fulfill that for you, unless you want the newer architectures for upcoming games.



Check the last 4 pics in the OP of the live blog topic: those are all footnotes.

The cheapest available 5700 XT costs 390€, and that's with a 35€ discount promotion. In any case, its power consumption is WAY too high for me.

As for nVidia's offerings, I dislike their business practices, so I avoid their cards entirely: I "speak" with my wallet.


----------



## Searing (Oct 28, 2020)

Chrispy_ said:


> Did you just put words into my mouth _and_ call me a tool and in a single sentence? That's how I read it, anyway.
> 
> Where did I say "wowee look how good the 3080 is vs the 2080 ti"?
> I've never said that.
> ...



I'm talking about the idea that perf/dollar didn't increase: you are not looking in the same price bracket. And your graph is wrong; the 6800 looks to be at least 18 percent faster on average at 1440p vs the 2080 Ti, which would make it an improvement.

118/75 is 57 percent faster, for less than 57 percent more money. So you're just wrong, sorry.
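Spelling that arithmetic out (the fps figures are the ones above; the 2080 Ti price used here is an illustrative assumption, pick whatever bracket you like):

```python
# Perf/dollar improves whenever the relative performance gain
# outpaces the relative price increase.
def rel_gain(new: float, old: float) -> float:
    """Fractional gain of `new` over `old`, e.g. 0.57 == 57% faster."""
    return new / old - 1.0

fps_6800, fps_2080ti = 118.0, 75.0          # averages read off the 1440p chart
perf_gain = rel_gain(fps_6800, fps_2080ti)
print(f"{perf_gain:.1%}")                    # -> 57.3%

# Hypothetical prices: $579 RX 6800 vs a $999 2080 Ti.
price_gain = rel_gain(579.0, 999.0)          # negative: it's actually cheaper
print(perf_gain > price_gain)                # -> True, so perf/dollar improved
```

Any price pair where the premium stays under ~57% gives the same conclusion; the fps ratio is what carries the argument.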


----------



## efikkan (Oct 28, 2020)

They are really stretching out releases these days; an announcement of an announcement of an announcement. I wish they could just release it already.


----------



## Cheeseball (Oct 28, 2020)

HTC said:


> Check the last 4 pics in the OP of the live blog topic: those are all footnotes.
> 
> The cheapest available 5700 XT costs 390€, and that's with a 35€ discount promotion. In any case, it's power consumption is WAY too high for me.
> 
> As for nVidia's offerings, i dislike their business practices so i avoid their cards entirely: i "speak" with my wallet.



The RX 5700 XT can be undervolted and brought down below its 225 W TDP. It would still be more than the 150 W TDP of your RX 480, but your Corsair 850W can handle that no problem.


----------



## Sasqui (Oct 28, 2020)

Cheeseball said:


> The RX 5700 XT can be undervolted and brought down below the 225W TDP. It would still be more than the 150W TDP of your RX 480 . Your Corsair 850W can handle that no problem though.



I wonder if the undervolting days will be gone with Big Navi? With the Vega 64 it's the only way to really overclock, and wow does it overclock!


----------



## HTC (Oct 28, 2020)

Cheeseball said:


> The *RX 5700 XT can be undervolted and brought down below the 225W TDP.* It would still be more than the 150W TDP of your RX 480 . Your Corsair 850W can handle that no problem though.



While true using Windows, not so much using Linux: @ least I dunno how to do it in Linux.


----------



## Alduin (Oct 28, 2020)

RX 6800 XT: great card, revolutionary GPU arch,
but lower RT performance,
and that price is ridiculously high.
I think that Infinity Cache thing takes a lot of resources. Maybe I'm wrong.


----------



## KainXS (Oct 28, 2020)

The 6800XT looks pretty damn good, but the 6900XT... at least they admitted they used Smart Access Memory and their new overclocking software on it (Nvidia would not have done that).

The price of the 6800XT does not look that bad to me; AMD should be able to charge just as much as Nvidia at this point, given similar performance. This is why I suspect the 6900XT is cheaper, and nothing is wrong with that: they know it's not as fast as the 3090 without Rage Mode and priced accordingly. The RX 5700 XT drivers were better than the RX 480 and 580 drivers were; it's a moot point to say AMD's drivers suck now. I know they're not perfect, but they're not as bad as people say.

Competition is always good for us though. Where you at, Intel?


----------



## InVasMani (Oct 28, 2020)

Metroid said:


> AMD wants margin, reason only 6800/6900 will be launched, amd wants to capitalize on the $600 gpus first then 6700 and so on around $400. AMD still have some rx 5700 to be sold at $400 anyway, amd not keen to cannibalize its rx 5700 even though is eol. Aside from this, I want to know more about infinity cache, that has given amd a huge boost to ryzen 5xxx series and now big navi aka 6xxx series.


Yeah, I really want to see a Ryzen 5600X with one of these RDNA2 GPUs tested with a GPU RAM drive. In theory it'll be quite a bit quicker in certain scenarios than previous GPUs that lacked the Infinity Cache, because it's a direct connection from the GPU to the CPU over the Infinity Fabric, cache-accelerated on each end. I suspect anything that fits within a certain cache transfer size will be blazing fast and make even NVMe look slow, if it works how it sounds, at least.


----------



## docnorth (Oct 28, 2020)

saikamaldoss said:


> 3090 performance for 999$ is awesome


"up to 2X higher performance in select titles with the AMD Radeon RX 6900 XT graphics card compared to the AMD Radeon RX 5700 XT". Well 3080 FE actually IS 2x faster in 4k compared to RX 5700 XT according to TPUs performance summary. Of course AMD may include 1440 analysis in this comparison, so 6900 XT could indeed be on par or more powerful than RTX 3080. But right now it is only speculation, we just don't know exactly what we are comparing.


----------



## ZoneDymo (Oct 28, 2020)

AnarchoPrimitiv said:


> Seems to me that this is the same as 2070 super vs 5700xt with roles reversed, did you consider that bad pricing too?  Am I the only one thinking price complaints are ridiculous? 6800xt matches the 3080 and is cheaper, what's the problem?



Yeah, the 6800xt seems to trade blows and is 50 bucks cheaper, which is nice, but like I said originally, it's a bit more performance for a bit higher price, so it's not really better.
If it was better performance for the same price, then you would have a winner, and that is usually the case with AMD.


----------



## Searing (Oct 28, 2020)

docnorth said:


> "up to 2X higher performance in select titles with the AMD Radeon RX 6900 XT graphics card compared to the AMD Radeon RX 5700 XT". Well 3080 FE actually IS 2x faster in 4k compared to RX 5700 XT according to TPUs performance summary. Of course AMD may include 1440 analysis in this comparison, so 6900 XT could indeed be on par or more powerful than RTX 3080. But right now it is only speculation, we just don't know exactly what we are comparing.



People uncritically accept Nvidia's claims so much that they don't even remember the results after reviews are published. How much faster is the 3080? The 3080 is 19.2 percent faster on average vs the 2080 Ti at 1440p. The 2080 Ti was bad value; it was only 14 percent faster than the 2080 Super. Here is the proof (Ryzen 3950X results):






1440p comparison - Google Drive (docs.google.com)
				




I don't care about 4K or all the other ways in which nVidia misleads people; I want actual in-game fps to increase at 1440p. If that is what AMD delivers (and it looks like their 1440p numbers are incredible), it is a slam dunk. Even the base 6800 might be VERY close to the 3080, yet $120 cheaper (actually $220 cheaper, since the FE was effectively never for sale), and with 6 GB more VRAM. At 1440p there might only be a 5 percent difference. We'll see.


----------



## Cheeseball (Oct 28, 2020)

Sasqui said:


> I wonder if the undervolting days will be gone with big navi?  With the Vega 64 it's the only way to really overclock, and wow does it overclock!



It's still the same with the RX 5700 XT, and I had the reference blower too. Capping it to 1151 mV (which is what the Adrenalin software defaults to when I hit the automatic undervolt option) kept it below 80°C at the same boost clock. Stock performance at lower power usage.

I'm doing the same with this RTX 3080. At only 887 mV it keeps the TBP (according to GPU-Z) at 270 W, but it still boosts to 1950 MHz, which is more than enough for 3440x1440 at 144 Hz. Not to mention staying cool under load at 65°C.


----------



## jesdals (Oct 28, 2020)

I do like the specs of that 6900xt


			https://www.amd.com/en/products/specifications/compare/graphics/10516%2C10521%2C10526


----------



## RedelZaVedno (Oct 28, 2020)

I used to always buy the best-value xx70-class GPUs, but Ampere/RDNA2 pricing has finally pushed me out of the market. First the mining craze, then Turing, and now the 6800... I don't wanna support greed anymore. I'm gonna stick with my 1080 Ti and buy something for 300-400 bucks on the 2nd-hand market when RDNA3/Nvidia-whatever gets released. Pricing has become ridiculous.

Just look at the last decade: the $349 GTX 570 vs the $369 HD 6970; the $299 R9 280X competing with the $399 GTX 770; the GTX 970 at $329; the $399 Vega 56 beating the great $379 GTX 1070. Then came infamous Turing and the RTX 2070, the first $499 GPU in that class. AMD fought the price hike with the $349 RX 5700, but had already tried to squeeze $399 out of it at the announcement. Fast forward to today: people praise Nvidia for not price-hiking the xx70 class again, while we get a hike to $579 from AMD, and people praise it as a good deal because it offers 8 gigs more VRAM (costing AMD an additional 50 bucks in the worst-case scenario). Where's the value in that? Such price hikes would have been obliterated by the tech media just 7 years ago, and now it's OK? I mean, what's wrong with us?


----------



## Anymal (Oct 28, 2020)

Frames per second (up to)


----------



## R-T-B (Oct 28, 2020)

Metroid said:


> It calls reality, lots of people paid thousands of dollars for the rtx 3080 and 3090 on ebay, there will always be people with more money than sense, those people buy bottles of wines for thousands of dollars, pay 5k for a 100 dollar ticket for the privilege to stay in the first row of an event, need to accept it and move on ehhe



You are right about that, but I think his point was that, on average, "people have more money" is outright wrong.


----------



## Casecutter (Oct 28, 2020)

So far I like what I'm seeing, but I'll wait for the reviews. 
On price, I'd like the RX 6800 to be less (sure), but for like $40 more they're doubling the memory, so that's good. While the RX 6800 XT provides an additional 6 GB, the RTX 3080 has GDDR6X on a 320-bit bus, offering something like 50% more raw bandwidth. The RX 6800 XT might be about right or slightly high depending on reviews, although it's besting the RTX 2080 Ti for 32% less, sure, 2 years later. I'm good with it. Finally, near RTX 3090 performance for $500 less; no one should be crying... you should expect a bloody nose if you want to play in that neighborhood.


----------



## Chrispy_ (Oct 28, 2020)

Searing said:


> And your graph is wrong, the 6800 looks to be at least 18 percent faster on average at 1440p vs the 2080 ti... that would make it an improvement.
> 
> 118/75 is 57 percent faster for less than 57 percent more money. So you're just wrong, sorry.


Uh, WTF?
My graph? Uh... No. I didn't make that graph.
You _do_ know where that graph's from, right?

Ah wait, I think I know what you're saying.
That graph they posted with SAM. Yeah, we don't have Zen3 CPUs to compare with yet. Unlike the 6800XT slide, which was an apples-to-apples comparison, the 6800 graph isn't fair: it uses the amalgamated Zen3 + Infinity Cache + SAM results to make the card look better. Some of those results are pretty impressive, but they're deliberately skewed to promote the whole 3-part solution of Ryzen 5000 + X570 + RX 6800. We won't know the actual GPU-only results until November, and I was just going off Lisa Su's actual words when she said it "matches a 2080Ti at just $579".


----------



## Cheeseball (Oct 28, 2020)

HTC said:


> While true using windows, not so much using linux: @ least i dunno how to do it in linux.



Try using powerupp and get your pp working like it should! (GAHAHAHAHAHAHAHAHAHAHA)

*EDIT*: Review UPP first before you use the GUI above


----------



## ahenriquedsj (Oct 28, 2020)

3090 performance for 999$ is awesome (2)


----------



## RedelZaVedno (Oct 28, 2020)

ahenriquedsj said:


> 3090 performance for 999$ is awesome (2)


And that really is the only awesome release in the Navi 21 lineup. Big lost opportunity. A 6800XT priced at $599 and a 6800 at $449 would obliterate the entire Nvidia GA104/GA102 lineup. It looks like AMD is content with 20% discrete GPU market share; that's why there's no price war with Nvidia. The 6900XT is an awesome deal for the 0.1% of the gaming market that actually buys $1K GPUs.


----------



## my_name_is_earl (Oct 28, 2020)

Review or it's not happening. If true, AMD is back from the dead. It's been more than 10yrs. Bout time?


----------



## rbgc (Oct 28, 2020)

Ashtr1x said:


> They still hold that Ray Tracing upper hand along with G6X memory, it all boils down to price and availability AND Drivers. Also the NVENC too.



*RT*
RDNA2 is the first generation with hardware-accelerated RT, and RT games and drivers must be optimized first. It is logical that they didn't present RT perf now; it will take months. We will see, but I think NVIDIA will still be better than AMD, as they started with RT sooner.

*Memory subsystem*
We will see in tests whether the more expensive G6X memory on a 384-bit bus, or Infinity Cache plus G6 memory on a 256-bit bus, is better or the same. But if Infinity Cache turns out to be an advantage, then it is a plus for AMD: they can move to Infinity Cache plus G6X memory if needed.

*Hardware encoding*
I like Turing NVENC. I use it very often to offload encoding from the CPU, with low GPU utilization and good, usually sufficient results for my needs. I think AMD doesn't plan to invest much in hardware encoders; they already present their multi-core CPUs as the option for content creators, and software encoding has similar or better results than NVENC. Multi-core processors from AMD are probably also the reason why NVIDIA kept the Turing NVENC on its Ampere cards.


----------



## Chrispy_ (Oct 28, 2020)

RedelZaVedno said:


> And that really is the only awesome release from 21 Navi lineup. Big lost opportunity. 6800XT priced at $599 and 6800 at $449 would obliterate entire Nvidia A104/A102 lineup. It looks like AMD is content with 20% discrete GPU market share, that's why no price war with Nvidia  6900XT is awesome deal for 0.1% of the gaming market that actually buys $1K GPUs.


95% of the whole dGPU market is sub-$300. Nothing AMD or Nvidia have announced in the last week is remotely relevant to market share.

Even Navi 22, presumably a 40CU and 36CU pair of cards somewhere in the $300-400 range, might actually be of relevance to the market-share numbers, but even then that's still in the realm of 'enthusiast'.


----------



## DeathtoGnomes (Oct 28, 2020)

so AMD > Nvidia now?


hmmm....


----------



## TheoneandonlyMrK (Oct 28, 2020)

Chrispy_ said:


> 95% of the whole dGPU market is at sub-$300. Nothing AMD or Nvidia have announced in the last week is remotely relevant to market share.
> 
> Even Navi22 - presumably a 40CU and 36CU pair of cards somewhere in the $300-400 range might actually be of relevance to the marketshare numbers, but even then that's still in the realm of 'enthusiast'


I think being an enthusiast can be expensive, but damn, I want a full swap-out now.
*Looks around room for something to sell.*


----------



## Chrispy_ (Oct 28, 2020)

theoneandonlymrk said:


> I think being an enthusiast can be expensive, but damnn, I want a full swap out Now. .
> Looks around room for something to sell.


Think I'm happy for the moment, just swapped out to X570 and a 3900X, debating whether to keep the 2070S or 5700XT in that machine and the deciding factor is going to be how well either card runs CP2077. Raytracing really hasn't impressed me at all.


----------



## birdie (Oct 28, 2020)

And I remember how I purchased the GeForce 8800 GT (basically the RTX 3080 in today's terms) for $250 which was close to the very top. Nowadays $250 is what? Lower end?


----------



## TheoneandonlyMrK (Oct 28, 2020)

Chrispy_ said:


> Think I'm happy for the moment, just swapped out to X570 and a 3900X, debating whether to keep the 2070S or 5700XT in that machine and the deciding factor is going to be how well either card runs CP2077. Raytracing really hasn't impressed me at all.


Well I am too frugal to actually do that but it's tempting.
GPU is next up.


----------



## NeuralNexus (Oct 28, 2020)

rbgc said:


> *RT*
> RDNA2 is first generation with hardware accelerated RT and RT games and drivers must be optimized first. It is logical that they didn't presented RT perf now. It will take months. We will see, but I think NVIDIA will be still more better than AMD, they started with RT sooner.
> 
> *Memory subsystem*
> ...



Games are designed with consoles in mind first and ported to PC later. With RDNA 2 being the backbone of the next-gen consoles, I highly doubt Nvidia will have better RT performance. Devs don't like doing extra work, especially for proprietary software solutions.


----------



## Icon Charlie (Oct 28, 2020)

Chrispy_ said:


> 95% of the whole dGPU market is at sub-$300. Nothing AMD or Nvidia have announced in the last week is remotely relevant to market share.
> 
> Even Navi22 - presumably a 40CU and 36CU pair of cards somewhere in the $300-400 range might actually be of relevance to the marketshare numbers, but even then that's still in the realm of 'enthusiast'



Completely agree with this assessment.
What I think is going to happen is that they will re-brand their 5700 line as well as sell a cut-down version of their 6800 card.

But at $580 for their cheapest card, I might decide to buy a console; the last time I bought one was 28 years ago. These prices throughout the computer tech industry are just too high to validate the usual "Coof"/tariff excuse. I do business globally, and IMHO this is utter nonsense. They are throwing prices out there to see what sticks, and gerbils and influencers (including YT) make sure the prices stay that way.


----------



## medi01 (Oct 28, 2020)

wheresmycar said:


> It would be interesting to see if Nvidia replies back with a 3080 TI...


The 3090 is the full GA102 and is only 10% faster than the 3080. How would a 3080 Ti fit in, and what chip would it even be based on?


----------



## Chrispy_ (Oct 28, 2020)

Icon Charlie said:


> These prices through out the computer tech industry are just too high to validate the usual "Coof"/Tarrif excuse.   I do business globally and IMHO this is utter nonsense.  They are throwing prices out there to see what sticks and gerbils and influencers (including YT) make sure that the prices stay that way.


Well, the other option is to just not buy into the hype.
Sure, a 6900XT or RTX 3090 looks cool, and at 4K Ultra with raytracing you can get 120 fps+ in some triple-A game.

Here's the thing though: those triple-A games are still just as fun, or just as shit, at a quarter the resolution and half the framerate. The game itself determines how enjoyable it is. I've chatted with friends about the few games that made me stop playing just to pan the camera around and admire the scenery, and most of the stuff that makes me actually stop and take note isn't about the resolution it's running at or mind-blowing framerates.

It wasn't that long ago that 1080p30 was the target experience. If your PC can handle a game at fluid framerates on your monitor, then you're good to go. Quality graphics are down to the developer and game artists more than your PC, and it always staggers me how good many games look running at medium or even low settings these days, on a potato PC...

Here's Death Stranding, one of the most visually striking games of 2020, running on an old GTX 970 (video):

and even an old 7970, which is slower than practically every dGPU you'd buy these days, is doing a pretty respectable job in 2020 (video):


----------



## chris.london (Oct 28, 2020)

RedelZaVedno said:


> And that really is the only awesome release from 21 Navi lineup. Big lost opportunity. 6800XT priced at $599 and 6800 at $449 would obliterate entire Nvidia A104/A102 lineup. It looks like AMD is content with 20% discrete GPU market share, that's why no price war with Nvidia  6900XT is awesome deal for 0.1% of the gaming market that actually buys $1K GPUs.



The 6800XT results are without Rage Mode and SAM. With the same settings the 6900XT is only 6-9% faster, which is nothing. The 6800XT is the card to get; the 6900XT is pretty bad value. At least if you get a 3090 you can tell yourself you did it for the VRAM; no such excuse for 6900XT buyers.

Edit: the 6800XT will be able to match the 3090 in some games. Crazy.
Edit 2: I am curious why AMD used a 5900X for the tests. I guess the 5950X is not faster at 1440p/4K.


----------



## SIGSEGV (Oct 28, 2020)

I am so happy with this news. NVidia fans seem insecure reading this announcement from AMD.   
6900XT price should be around $749. damn NVidia!!


----------



## kiddagoat (Oct 28, 2020)

I haven't purchased an AMD card since the Fury X and the Fury Nano...... I think I might have to jump on the 6800XT or 6900XT.   Depends on availability.


----------



## Nephilim666 (Oct 28, 2020)

Great to have competition, as always take all marketing slides with a healthy dose of salt. The Nvidia 3000 launch ones were a bit misleading.

Personally, I'm a huge AMD fan (who doesn't love a good underdog story). 
My system is a 3960X + Vega64 and I have an RTX3090 on order. Once reviews and RT performance numbers for RX6900 are out I'll decide whether to cancel the RTX3090 order... it's not like I'm going to get it any time soon.

Great time to be a PC enthusiast.


----------



## RedelZaVedno (Oct 28, 2020)

It saddens me to write this, but PC gaming is becoming more and more of a niche. Just look at the Xbox Series X package: $499 gets you a 52CU RDNA2 GPU, an 8C/16T Zen 2 CPU, 16 GB GDDR6, a 1 TB custom super-fast NVMe SSD and a Blu-ray optical drive. All this tech for the price of an RTX 3070, and it even costs 16% less than the 60CU RX 6800. How can one justify these prices? I've been a PC gamer since getting a 286 back in the early '90s, but now I'm seriously considering buying an Xbox Series X plus a good 55in 4K OLED TV instead of investing the same amount of money into a gaming rig, and calling it a day. GPU prices are getting just ridiculous.


----------



## turbogear (Oct 28, 2020)

theoneandonlymrk said:


> Looks good, be nice to see some independent reviews though.
> 6800XT looks a good buy.



Yes, the 6800XT looks like it would be a good choice.
Waiting to see the review from TPU though.

Looking forward to seeing if the fan performance is better this time than on my Radeon VII, and if there is any headroom for increasing performance by undervolting and water-cooling.

If performance is as good as shown here, maybe the time has come to sell my Radeon VII with its EK water block on eBay.


----------



## Anymal (Oct 28, 2020)

No worries, it's "up to".


----------



## rhaoul (Oct 28, 2020)

AnarchoPrimitiv said:


> Why is there such a strong sense of people believing they're entitled to AMD selling their products at cost, even when the performance matches or exceeds the competition? Don't give me that driver BS either, those days are gone with Lisa Su at the helm and since she's been there, she's delivered on her promises.... This is the first GPU launch that was completely under her leadership, so I'm going to give her the benefit of the doubt and assume drivers will be just as good as Nvidia this time around (let's not forget that Nvidia has already run into driver issues with the 3000 launch that some people would be claiming is the end of the world if AMD had done it.... Go ahead call me a fanboy, despite the fact that I think these statements are nothing more than truthful observations)


I think you don't understand how the market works. AMD doesn't sell at cost, far from it. The situation is that AMD follows the rules... dictated by the leader (Nvidia's high prices). AMD benefits from this situation, and so do their partners. That can easily happen in this kind of configuration, when there are only two players in a market; there is a tacit agreement between them. In terms of competition law, it's really borderline. But the player wants to play, even if he's blind...


----------



## Xuper (Oct 28, 2020)

Nephilim666 said:


> Great to have competition, as always take all marketing slides with a healthy dose of salt. The Nvidia 3000 launch ones were a bit misleading.
> 
> Personally, I'm a huge AMD fan (who doesn't love a good underdog story).
> My system is a 3960X + Vega64 and I have an RTX3090 on order. Once reviews and RT performance numbers for RX6900 are out I'll decide whether to cancel the RTX3090 order... it's not like I'm going to get it any time soon.
> ...


Here are the ray-tracing numbers. So far the 6900XT is probably equal to the RTX 3070.


----------



## ebivan (Oct 28, 2020)

RedelZaVedno said:


> It saddens me to write this observation, but PC gaming is becoming more and more of a niche. Just look at XBox X package: $499 gets you 52CU RDNA2 GPU/8C-16T Zen2-3 CPU/16GB GDDR6/1TB custom super fast NVMe SSD/Blu-ray optical drive. All this tech for the price of RTX 3070 and costs even 16% less than 60 CU RX 6800. How can one justify these prices? I've been a PC gamer since getting PC 286 back in the early 90ies, but now I'm seriously considering  buying Xbox X + good 55in 4K OLED TV instead of investing the same amount of money into a gaming rig and call it a day. GPU prices are getting just ridiculous


I agree that some PC hardware prices are ridiculous. 
But prices for an average gaming PC basically haven't changed too much. Remember that the 3080 and 6800 XT are not average; these are enthusiast products. You can get a capable gaming PC for 1000 bucks, just like you could 20 years ago.

But you can't really compare that to consoles. On PC, games are usually cheaper since you have a lot more ways to buy them. On a PC you can work, you can upgrade, and you can sell parts you don't use anymore... There are endless possibilities. On a PC you can play brand new games and games from 40 years ago...
On consoles you can just play games and watch Netflix.


----------



## TheoneandonlyMrK (Oct 28, 2020)

Xuper said:


> Here is number of Ray-tracing.so far 6900xt is probably equal to rtx 3070


Rumours do point to it being 50% better than a 2080 Ti.
I'm not sure how the 3070 stands here.


----------



## rvalencia (Oct 28, 2020)

dicktracy said:


> No mention of DXR performance at all. For all we know, you flick that setting on in Cyberpunk 2077 and it'll drop down to 2060 performance LOL!


According to Digital Foundry, PS5's raytracing performance is similar to RTX 2060 Super.


----------



## arbiter (Oct 28, 2020)

rbgc said:


> *Hardware encoding*
> I like Turing NVENC. Using it very often to offload encoding from the CPU with low GC utilization and good and usually sufficient results for my needs. I think AMD don't plan to make hardware encoders. They have multi-core CPU already presented as CPU for content creators. Software encoding has similar or better results than NVENC. And multi-core processors from AMD is probably also reason why NVIDIA placed Turing NVENC on Ampere cards.


You clearly haven't seen streams that use NVENC. It looks as good as anything a CPU could put out, and I doubt even a 12-core CPU using 8 of its cores for encoding would look any better.



NeuralNexus said:


> Games are designed with consoles in mind first and ported to PC later. With RDNA 2 being the backbone of the next-gen consoles, I highly doubt Nvidia will have better RT performance. Devs don't like don't extra work especially for proprietary software solutions.


It's called DXR, and it's the DirectX standard that is being used. It's not proprietary anything, and this is likely just another AMD excuse for being bad at it. Like tessellation, which AMD was bad at forever and which was also part of the DX standard.


----------



## RedelZaVedno (Oct 28, 2020)

ebivan said:


> I agree that some pc hardware prices are ridiculus.
> But basically prices for an average gaming pc have not changed too much. Remember that 3080 and 6800xt are not average, these are enthusiast products. You can get a capable gaming pc for 1000 bucks, just like you could 20 years ago.
> 
> But you can't really compare that to consoles. On pc games are usurally cheaper since you have a lot more ways to buy them. On pcs you can work, you can upgrade your pc, and sell parts you dont use anymore... There are endless possibilities. On pcs you can play brand new games and games from 40 years ago...
> On consoles you can just play games and watch netflix.



I agree to some extent. But the Xbox Series X is a 4K/60 fps capable console. How much do I have to pay today to get the same level of performance? An RDNA2/Ampere 2080S equivalent is not gonna be cheaper than $400 if we're lucky. Add an 8C/16T 3700X or 10700 to the mix for $300 (not even considering a $450 8C Zen 3), 16 GB of RAM for $70, a fast 1 TB NVMe SSD for $120, a Blu-ray drive for $60, a decent case for $70, a decent PSU for $80... That's an $1100 PC that will probably perform worse in gaming than the console, because most devs don't give a f... about code optimization when it comes to porting games from consoles to PC. I accept paying a 50% premium as you get a fully functioning workstation and the freedom that a PC brings, but not 110%. That's just too much. AMD/Intel/Nvidia have become too greedy when it comes to the DIY PC market.
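
A quick tally of the parts list above, as a rough sketch (all prices are the post's own ballpark estimates, not current market quotes):

```python
# Sum the post's estimated part prices and compare against the console MSRP.
parts = {
    "GPU (2080S-class RDNA2/Ampere)": 400,
    "CPU (8C/16T 3700X or 10700)": 300,
    "16 GB RAM": 70,
    "1 TB NVMe SSD": 120,
    "Blu-ray drive": 60,
    "Case": 70,
    "PSU": 80,
}
pc_total = sum(parts.values())          # 1100
console_price = 499                     # Xbox Series X MSRP
premium = pc_total / console_price - 1  # fractional premium over the console
print(f"PC total: ${pc_total}, premium over console: {premium:.0%}")
```

By these numbers the DIY build lands at $1100, which works out to roughly a 120% premium over the $499 console.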


----------



## rvalencia (Oct 28, 2020)

rbgc said:


> *RT*
> RDNA2 is first generation with hardware accelerated RT and RT games and drivers must be optimized first. It is logical that they didn't presented RT perf now. It will take months. We will see, but I think NVIDIA will be still more better than AMD, they started with RT sooner.
> 
> *Memory subsystem*
> ...


*Memory subsystem*
Big NAVI's Infinity Cache is based on Zen's L3 cache. 

*Hardware encoding*
RX 5700 has AMF & VCE


----------



## Mysteoa (Oct 28, 2020)

rbgc said:


> *RT*
> RDNA2 is first generation with hardware accelerated RT and RT games and drivers must be optimized first. It is logical that they didn't presented RT perf now. It will take months. We will see, but I think NVIDIA will be still more better than AMD, they started with RT sooner.
> 
> *Memory subsystem*
> ...



There will be games with AMD RT support on release day. AMD is using DirectX ray tracing, so every game that implements that will work. Games with more Nvidia-tailored RTX code would probably need more work to get running.

G6X is not much faster than G6, but it runs hotter and needs more cooling. They don't need G6X if they are currently matching it, and it's cheaper to have a smaller bus with G6.

AMD has a hardware encoder in their GPUs, but it uses standard formats rather than a home-grown one. They do have to improve support for it if they want to be an alternative for streamers, but it's good enough for the regular Joe.


----------



## rvalencia (Oct 28, 2020)

Chrispy_ said:


> 95% of the whole dGPU market is at sub-$300. Nothing AMD or Nvidia have announced in the last week is remotely relevant to market share.
> 
> Even Navi22 - presumably a 40CU and 36CU pair of cards somewhere in the $300-400 range might actually be of relevance to the marketshare numbers, but even then that's still in the realm of 'enthusiast'


PS5 uses a mainstream RDNA 2 design: 40 CUs with 36 active, clocked at up to 2.23 GHz, and without the Infinity Cache.

Navi 22 with 36 CUs would be the PC's equivalent of the PS5.


----------



## Easo (Oct 28, 2020)

I was planning to upgrade my PC with Zen 3 and something from the 3000 series, but this, this has made me think... Maybe the GPU will actually be AMD again. The upgrade was planned for sometime next year anyway, so it's a perfect chance to see reviews and decide.
But hopes are up. Worst case, it is at least comparable to the 3000 series, which is already good for AMD. They have crawled out of the pit.


----------



## Rob94hawk (Oct 28, 2020)

Show me reviews or GTFO.


----------



## Deleted member 203344 (Oct 28, 2020)

Well I can openly say this justifiably ... AMD have not had a single win against anyone at the moment ... their products deserve large doses of salt .. as do their claims .. until such time as they're made available for independent testing and purchase. Claims of "we are faster than Intel or Nvidia" don't cut it if they're comparing their products to those that are already in the marketplace, and until theirs are available, no claims made by AMD are validated.
Let's see how their products stack up in the real world .. I have purchased AMD products in the past and have been burned by the hype .. never again.
Excuse me if I have good reason to want to see how they perform outside of AMD's presentations, or the hype, which in recent years has closely resembled Apple's reality distortion paradigm.
Arguing that they're better absent independent testing is simply absurd, as we have heard it from them many times before.


----------



## Divide Overflow (Oct 28, 2020)

Eagerly waiting for TPU to review these cards when they are released!
I see a CPU / GPU upgrade combo in my stocking for Christmas!


----------



## Mike2Fr (Oct 28, 2020)

Ok so the only difference between the 6800 XT and 6900 XT is 8 computer units... Am I missing something? 

It seems very light for $350. $45 per computer unit... Hopefully it will be way more overclockable than the 6800 XT...

The fans will be bigger I guess... Do we know if AIBs will produce the 6900 XT as well?


----------



## dinmaster (Oct 28, 2020)

Mike2Fr said:


> Ok so the only difference between 6800xt and 6900xt is 8 Computer units....  Am I missing something?
> 
> It seems very light. For 350$. 45$ each computer units....Hopefully it will be way more overclockable than the 6800xt....
> 
> The fans will be bigger I guess... Do we know if AIB will produce 6900xt as well?



rumored to be no


----------



## rvalencia (Oct 28, 2020)

Mike2Fr said:


> Ok so the only difference between 6800xt and 6900xt is 8 Computer units....  Am I missing something?
> 
> It seems very light. For 350$. 45$ each computer units....Hopefully it will be way more overclockable than the 6800xt....
> 
> The fans will be bigger I guess... Do we know if AIB will produce 6900xt as well?


It may be another case of a Vega 56 OC rivaling a stock Vega 64. I'm targeting an RX 6800 XT AIB OC and an RTX 3080 Ti for my two gaming PCs.


----------



## Deleted member 203344 (Oct 28, 2020)

I believe they indicated the 6900 would be an AMD exclusive .. which raises all sorts of concerns if true .. given that their past exclusives have been shocking in comparison to AIB partner versions .. maybe they have concerns about their top-end card .. so much so that they want to keep control of its functioning .. regardless of the reason, I can't see any valid argument as to why they wouldn't want AIB partners to extract every last drop from their products by making them available to them


----------



## Caring1 (Oct 28, 2020)

Mike2Fr said:


> Ok so the only difference between 6800xt and 6900xt is 8 Computer units....  Am I missing something?


Yes, it's Compute units.


----------



## RedelZaVedno (Oct 28, 2020)

Given that the 6800 is priced at $579, should we expect 36 CU and 40 CU Navi 22 cards (clocked at 2250 MHz) priced at $399/$349, or will they even try to push to $449/$399 again? That would mean 1080 Ti/2080 levels of performance still costing the same amount as the RX 5700 (XT) did. Can this "RX 590" déjà vu refresh really happen to us again?


----------



## Mussels (Oct 28, 2020)

dicktracy said:


> No mention of DXR performance at all. For all we know, you flick that setting on in Cyberpunk 2077 and it'll drop down to 2060 performance LOL!



What uhh, what titles out there support AMD's ray tracing right now?


----------



## Deleted member 203344 (Oct 28, 2020)

I like gaming with all available quality settings maxed, regardless of outright fps, with the expectation that my experience will be fluid and as the game developer intended, to highlight the latest tech realism. AMD deliberately left out several important pieces of current tech .. or the performance thereof .. that raises an eyebrow .. and there's no denying that. On paper they claim their card is fast under their best ideal conditions .. but even that claim is without validation by any third party.


----------



## Imsochobo (Oct 28, 2020)

rvalencia said:


> According to Digital Foundry, PS5's raytracing performance is similar to RTX 2060 Super.


According to DF, they didn't really do an apples-to-apples comparison.

They took a guess.


----------



## Caring1 (Oct 28, 2020)

birdie said:


> And I remember how I purchased the GeForce 8800 GT (basically the RTX 3080 in today's terms) for $250 which was close to the very top. Nowadays $250 is what? Lower end?


That's not how inflation works.
Average wages in the U.S. have risen 65% since the 8800 GT's release, which means your $250 card would have to be priced around $412 today to be on par.
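
The arithmetic behind that figure, as a quick sketch (the 65% wage-growth number is the post's own, not an official statistic):

```python
# Wage-adjusted price: scale the 8800 GT's launch price by the cited
# ~65% growth in average U.S. wages since its release.
launch_price = 250
wage_growth = 0.65
adjusted = launch_price * (1 + wage_growth)
print(f"${adjusted:.2f}")  # $412.50
```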


----------



## Deleted member 203344 (Oct 28, 2020)

Yep .. aligning themselves with Apple's reality distortion field ... lmao

It's real when it exists in real hands ... everything else is vapour .. we will know soon enough what the realities are

Well, I'm old enough to remember that back in the GeForce 2 days, when Nvidia's reign was threatened, they released the Detonator drivers, which resulted in a 20-40 percent increase in performance .. the history is there for Nvidia ... sadly it's never been there for AMD ... regardless of speculation, AMD have never delivered strong driver performance increases .. you can deny reality until it bites you, and I've been bitten by AMD's claims often .. that's why I'm sceptical until it's independently tested.


----------



## Caring1 (Oct 28, 2020)

Any idea why the 6900 XT and 6800 XT are both listed as 300 W cards, but the PSU recommendations differ?
The 6900 XT shows an 850 W recommendation while the 6800 XT shows a 750 W unit.


----------



## Deleted member 203344 (Oct 29, 2020)

With a 375 watt maximum it's all academic .. I doubt there's much headroom for manual overclocking, otherwise that would be the first thing AMD would be leading with .. and they didn't .. but that's not surprising given the dependencies they highlighted in their presentation


----------



## ViperXTR (Oct 29, 2020)

I plan to have the RTX 3070 but hello there RX 6800


----------



## Camm (Oct 29, 2020)

Basilix said:


> AMD have never delivered strong driver performance increases



Are you on drugs? The GTX 680, a card that beat the 7950 at launch, is now generally trounced by the 7950 by 50%+.


Or the 5700XT, a card that when the 2070S launched was conclusively beaten by the 2070S, but now gets within spitting distance and beats the 2070S in some games.


AMD doesn't give driver performance, my ass. There's a reason the AMD fanbois cling to FineWine(tm): AMD cards do tend to get stronger over time.


----------



## rvalencia (Oct 29, 2020)

Imsochobo said:


> according to DF they didn't really do an apple to apples comparison.
> 
> They took a guess.


It's better than the straight-out claim that the RX 6900 XT is an RTX 2060.

Since RT is memory-bandwidth intensive with a small memory footprint (i.e. the BVH is a search structure for geometry data), a very fast 128 MB Infinity Cache matches the RT workload well, and it's missing from the RDNA 2 based game consoles.

A game console's RDNA 2 RT cores without a high-speed 128 MB Infinity Cache act more like AMD's gen 1 HW RT cores.


----------



## Deleted member 203344 (Oct 29, 2020)

Lmao .. no one has ever argued about who has the most competent driver development team, and for good reason .. but you seem to be the exception .. good luck convincing the entire gaming community .. you have my support in your quest.


----------



## Camm (Oct 29, 2020)

Basilix said:


> Lmao .. no one has ever argued who has the most competent driver development team and for good reason .. but you seem to be the exception .. good luck with convincing the entire gaming community .. you have my support in your quest.



Why is it that on every product launch we get newly joined members whose only job seems to be to astroturf for the competition? Does one's head in.


----------



## Deleted member 203344 (Oct 29, 2020)

Camm said:


> Why is it on every product launch we get newly joined members whose only job seems to be to astroturf for the competition. Does one's head in.


.... And yet another snowflake who resorts to shaming and safe spaces to aid their argument .. lmao


----------



## rvalencia (Oct 29, 2020)

Camm said:


> Why is it on every product launch we get newly joined members whose only job seems to be to astroturf for the competition. Does one's head in.


Posters like Basilix help maintain NVIDIA's near monopoly in discrete GPUs.


----------



## TechLurker (Oct 29, 2020)

Speculation about AMD's exact reason for pricing the 6800 higher than the 3070 has been interesting to read over the past couple of hours.

Some speculated that the ~80 dollar difference was "feature tax", SAM + RAGE Mode allowing it to eke ahead of a 3070; assuming an All-AMD 5000/500/6000 ecosystem. GamersNexus' pinned YT post seemed to think similar; that the ability to use RAGE Mode + SAM was the reasoning behind the increase (not that they agreed or disagreed with it).

But others speculated that it was really to push people towards the 6800 XT ("well, if I'm going to fork out $580; may as well just fork out a bit more and get the 6800 XT instead for an extra $70"); same reason the 6900XT is also still priced a lot higher despite no likely way to squeeze in a "6850 XT" or "6900" (non-XT) between the 6800 XT and 6900 XT.

And still others speculate it was simply so they could make some extra $$$ before dropping the price when the replacement 3070 Ti/S comes out, suddenly wiping out any reason to buy an upcoming 3060 Ti/S or plain 3070 when you can have 2080Ti/3070 performance for cheaper.

And last one I've seen echoed a few times; the price was simply a placeholder, since they didn't yet know how much the 3070 would sell for at the time of filming the show, and could match or lower the price closer to release.


----------



## rvalencia (Oct 29, 2020)

Basilix said:


> Lmao .. no one has ever argued who has the most competent driver development team and for good reason .. but you seem to be the exception .. good luck with convincing the entire gaming community .. you have my support in your quest.


Higher clock speed reduces the workload on driver teams i.e. shifting towards serial performance from very wide parallelism. It's the same idea since G8X's high clock speed.


----------



## Mescalamba (Oct 29, 2020)

They can announce whatever they want; until hard data proves it's capable of competing, I take it that it's not.

In the GPU space, AMD has lately been all talk and no results.


----------



## docnorth (Oct 29, 2020)

Searing said:


> People uncritically accept nvidia's claims so much they don't even remember the results after reviews are published. How much faster is the 3080? 19.2 percent. The 3080 is 19.2 percent faster on average vs the 2080 ti at 1440p. The 2080 Ti was bad, it was only 14 percent faster than the 2080 Super. Here is the proof (Ryzen 3950XT results):
> 
> 
> 
> ...


I didn't even bother to comment on Nvidia's claims. I'm only referring to TPU results from 23 games; those are available for everyone who wants to read them. BTW, the 3090, 6900 XT, 3080 and probably 6800 XT make more sense for 4K (or business). I also don't care about 4K; I'm mostly waiting for the mid-range cards, even lower than the 3070 and 6800.


----------



## InVasMani (Oct 29, 2020)

So, given the bandwidth of the Infinity Cache...

Let's assume in the future AMD started to use the double-capacity G6 chips...

Option 1) 6x 2GB chips + 2x 4GB chips = 20GB VRAM
Option 2) 4x 4GB chips + 4x 2GB chips = 24GB VRAM
Option 3) 6x 4GB chips + 2x 2GB chips = 28GB VRAM
Option 4) 8x 4GB chips = 32GB VRAM

It's really fascinating to think about the Infinity Cache implications combined with APUs, and some of the brute-force CPU capabilities of the likes of Threadripper and Epyc. Then, to top it off, AMD with the Xilinx FPGA acquisition is wild. Things are getting very Skynet/Matrix/Borg very quickly; you can't put this genie back in the bottle.
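
The mixed-density options above can be enumerated mechanically. A sketch, assuming Navi 21 keeps an 8-chip, 256-bit bus (32 bits per chip); mixing densities on one card is this post's hypothetical, not a configuration AMD has announced:

```python
from itertools import combinations_with_replacement

# Enumerate every total VRAM capacity reachable by populating 8 memory
# channels with a mix of 2 GB and 4 GB GDDR6 chips.
BUS_CHIPS = 8
capacities = {}  # number of 4 GB chips -> total VRAM in GB
for combo in combinations_with_replacement((2, 4), BUS_CHIPS):
    capacities[combo.count(4)] = sum(combo)

for n4 in sorted(capacities):
    print(f"{BUS_CHIPS - n4}x 2GB + {n4}x 4GB = {capacities[n4]} GB")
```

The four options above are the cases with 2, 4, 6 and 8 of the 4 GB chips (20, 24, 28 and 32 GB); the enumeration also shows the in-between steps of 18, 22, 26 and 30 GB.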


----------



## zlobby (Oct 29, 2020)

Sweet! How is the compute performance?


----------



## PooPipeBoy (Oct 29, 2020)

Mescalamba said:


> They can announce whatever they want, until hard data proves its capable of competition I take it as that its not.
> 
> In GPU case, AMD is lately just all talk and no results.



Hard data? Does it break your teeth if you bite down on it?


----------



## Mike2Fr (Oct 29, 2020)

Basilix said:


> I believe they indicated the 6900 would be an AMD exclusive .. which raises all sorts of concern if true .. given that their past exclusives have been shocking in comparison to AiB partner versions .. maybe they have concerns about their top end card .. so much so that they want to keep control of its functioning .. regardless of reason i cant see any valid argument as to why they wouldnt want AiB partners to extract every last drop from their products by making it available to them


It might be a yield issue. Maybe they can't produce enough of those GPUs, or maybe it was a last-minute card and the AIBs did not have enough time to produce their own versions... Anyhow, I am sure that the AIBs' 6800 XT OC versions will be pretty damn close to the 6900 XT, for much less... I hope the 6900 XT PCB will be top level and the GPUs extremely well binned. If not, it is going to be hard to justify such a card...


----------



## ViperXTR (Oct 29, 2020)

TechLurker said:


> Speculation for what exactly is AMD's reason for the higher price of the 6800 vs the 3070 has been interesting to read in the past couple of hours.
> 
> Some speculated that the ~80 dollar difference was "feature tax", SAM + RAGE Mode allowing it to eke ahead of a 3070; assuming an All-AMD 5000/500/6000 ecosystem. GamersNexus' pinned YT post seemed to think similar; that the ability to use RAGE Mode + SAM was the reasoning behind the increase (not that they agreed or disagreed with it).
> 
> ...


No mention of 8gb vs 16gb?


----------



## Flanker (Oct 29, 2020)

When will the NDA of reviews be lifted?


----------



## InVasMani (Oct 29, 2020)

Flanker said:


> When will the NDA of reviews be lifted?


Kraken NDA to be released...


----------



## chodaboy19 (Oct 29, 2020)

When can we see some reviews?


----------



## hurakura (Oct 29, 2020)

Basilix said:


> Yep .. aligning themselves to Apples reality distortion field ... lmao
> 
> Its real when it exists in real hands ... everything else is vapour .. we will know soon enough what the realities are
> 
> Well im old enough to know that back in the Geforce 2 days when Nvidias reign was threatened they released Detonator drivers which resulted in a 20-40 percent increase in performance .. the history is there for Nvidia ... sadly its never been there for AMD ... regardless of speculation AMD have never delivered strong driver performance increases .. you can deny reality until it bites you and ive been bitten by AMD's claims often .. thats why im sceptical until its independently tested.


What performance increase from the Detonator drivers, the one from cheating in benchmarks? Nvidia have been caught doing that many times.


----------



## Steevo (Oct 29, 2020)

rhaoul said:


> There are only two actors in the market ... there is a tacit agreement on prices


Kinda like CPUs


----------



## TheUn4seen (Oct 29, 2020)

Again with the marketing fluff. Who cares about corporate PR gibberish? Let your product prove itself in the real world instead of the paper launch designed to create pointless hype.


----------



## mtcn77 (Oct 29, 2020)

Camm said:


> Why is it on every product launch we get newly joined members whose only job seems to be to astroturf for the competition. Does one's head in.


A little update: there is a difference between astroturfing and flaming. I don't do the latter, it breaks the rules, but you get to keep your head up, since they cannot circumlocute a good astroturf.



hurakura said:


> What performance increase by detonator drivers, by cheating in benchmarks? nvidia have been caught doing that many times


Young kids... always straight to the point. Spoils the pleasure.

PS: Samsung 8nm goes brrr...


----------



## DeathtoGnomes (Oct 29, 2020)

Mussels said:


> What uhh, what titles out there support AMD's ray tracing right now?


Tic-Tac-Toe 3D?  

I glanced at something implying that any RT title should work, but nothing saying there is a single standard for RT.



Camm said:


> Why is it on every product launch we get newly joined members whose only job seems to be to astroturf for the competition. Does one's head in.


Paid shills?


----------



## dir_d (Oct 29, 2020)

TechLurker said:


> Speculation for what exactly is AMD's reason for the higher price of the 6800 vs the 3070 has been interesting to read in the past couple of hours.
> 
> Some speculated that the ~80 dollar difference was "feature tax", SAM + RAGE Mode allowing it to eke ahead of a 3070; assuming an All-AMD 5000/500/6000 ecosystem. GamersNexus' pinned YT post seemed to think similar; that the ability to use RAGE Mode + SAM was the reasoning behind the increase (not that they agreed or disagreed with it).
> 
> ...


I'll add my speculation: they finalized the video before the release of the 3070 and did not know it was going to be priced that low. I believe if they hear price feedback, the MSRP may drop by Nov 18th.


----------



## Space Lynx (Oct 29, 2020)

It just occurred to me that the 6800 XT shroud design has no vents on the side with the I/O... so all that heat is getting shot out the top of the card, directly onto the CPU... Nvidia has the better reference design here: the hottest air gets exhausted outside the case next to the I/O, and the less hot air is shot out into the case at the rear of the card... hmm, it will be interesting to see CPU temps with this GPU... that seems like a really bad design flaw... the air is literally going to hit the CPU as it leaves the card, especially if you use an air cooler, which will just suck that hot air right in before it has a chance to escape.

@TheLostSwede curious your thoughts on this design, am I understanding it wrong?


----------



## R0H1T (Oct 29, 2020)

This is why you buy aftermarket cards, depending on the model not only do you get higher clocks but also (much) better cooling.


----------



## Space Lynx (Oct 29, 2020)

R0H1T said:


> This is why you buy aftermarket cards, depending on the model not only do you get higher clocks but also (much) better cooling.



That's not true for the RTX 3080, though, and AMD seemed like they had learned their lesson from last time... but I guess not. Only the high-end Asus card cools better than the stock 3080.


----------



## R0H1T (Oct 29, 2020)

The 3xxx series launch is mostly vaporware. I don't know if Nvidia intended it to be, but they clearly knew where the RDNA2 cards would land and just wanted some early/cheap publicity and sales. This botch job could hurt them hard; unlike AMD, they don't have many customer-facing avenues where they can redeem themselves, and yeah, as some others have said, this could be a *Ryzen* moment for AMD's GPU department!


----------



## Space Lynx (Oct 29, 2020)

R0H1T said:


> The 3xxx series launch is mostly vaporware. I don't know if Nvidia intended for it to be as such, but they clearly knew where RDNA2 cards would fit & just wanted some early/cheap publicity & sales. This botch job could hurt them hard, unlike AMD they don't have too many customer facing avenues where they can redeem themselves & yeah as some others have said this could be a *Ryzen* moment for their (AMD) GPU department!



Especially since the GPUs do even better with an X570 board and a Zen 3 CPU. I'm really hoping I can get my hands on a 5600X and a 6800 or 6800 XT. It should be my final build for many years, and I'm hoping I can sell my GTX 1070 laptop to at least cover some of the purchase... bleh


----------



## renz496 (Oct 29, 2020)

RedelZaVedno said:


> And that really is the only awesome release from 21 Navi lineup. Big lost opportunity. 6800XT priced at $599 and 6800 at $449 would obliterate entire Nvidia A104/A102 lineup. It looks like AMD is content with 20% discrete GPU market share, that's why no price war with Nvidia  6900XT is awesome deal for 0.1% of the gaming market that actually buys $1K GPUs.



See what has happened over the last 10 years: did a price war really help AMD gain more market share?


----------



## Searing (Oct 29, 2020)

Chrispy_ said:


> Uh, WTF?
> My graph? Uh... No. I didn't make that graph.
> You _do_ know where that graph's from, right?
> 
> ...



No, I meant that your markings on the graph make no sense, because you are comparing to the 2080 Ti/3070, but the RX 6800 looks to be 18 percent faster at 1440p vs the 2080 Ti; it should be 118 on the top of your fraction...


----------



## medi01 (Oct 29, 2020)

Nephilim666 said:


> Once reviews and RT performance


In which games?
The laughable World of Warcraft effects barely anyone notices (bar the framerate dip)?

Note how, despite DXR being a standard DirectX API, Jensen Huang still managed to shit all over the place and make "Nvidia sponsored" games (at least) use green proprietary extensions.
I guess he doesn't remember how he killed OpenGL.


----------



## SLK (Oct 29, 2020)

IMO, AMD is still a generation behind Nvidia.

Pros for AMD :
- Competitive in rasterization now
- Can compete on price 

Cons for AMD:
- 1st-gen RT, so it's slower than Ampere's RT
- No DLSS, which gives a huge performance advantage in the second most popular game in the world (Fortnite)
- NVENC is far superior to AMD's implementation (OBS streaming software has native support for Nvidia but not AMD)
- Nvidia Broadcast is a huge asset to content creators, with no equivalent from AMD
- Way behind in professional applications due to Nvidia's broad CUDA support.

My choice is clear.


----------



## medi01 (Oct 29, 2020)

rvalencia said:


> According to Digital Foundry, PS5's raytracing performance is similar to RTX 2060 Super.


According to Digital Totally Not Shills But Do It Like Shills Just By Coincidence Foundry, the XSeX is merely a 2070 at raster perf.


----------



## Space Lynx (Oct 29, 2020)

SLK said:


> IMO, AMD is still a generation behind Nvidia.
> 
> Pros for AMD :
> - Competitive in rasterization now
> ...




Your choice doesn't matter; none of this will be in stock for any of us for 6 months, lol, if then. Covid has brought in a ton of people to compete with, not just scalpers. I'm going to try my best to get whatever comes in stock first within budget. lol

Also, for me, I don't care about RT or DLSS (though I do think it's nice, not enough games really use it), I don't care about streaming, don't care about Broadcast for the same reason, and I don't care about the professional stuff.

Just a gamer guy who likes to game at high refresh rates at 1440p. AMD is 100% for me if I can find it in stock on launch day. Big if. Heh.


----------



## medi01 (Oct 29, 2020)

TechLurker said:


> Speculation for what exactly is AMD's reason for the higher price of the 6800 vs the 3070 has been interesting to read in the past couple of hours.
> 
> Some speculated that the ~80 dollar difference was "feature tax", SAM + RAGE Mode allowing it to eke ahead of a 3070;


Is it some sort of a joke?
*6800 is said to be 18% faster than 2080Ti.*
It has twice the RAM of 3070.


----------



## Camm (Oct 29, 2020)

SLK said:


> Cons for AMD:
> - 1st gen RT, so its a slower than Ampere's RT
> - No DLSS, which gives huge performance advantage in the second most popular game in the world (Fortnite)
> - NVENC is far superior to AMD's implementation. (OBS Streaming software has native support for Nvidia but not AMD)
> ...



Just to break this down.

1\ You don't know the RT performance numbers. It's likely weaker, but we literally have nfi atm. Also, with RT still all being hybrid... it's a feature, not a requirement.
2\ AMD mentioned its DLSS alternative; it hasn't detailed it yet, however.
3\ No info has been provided about the encoder except what it can support, and it supports all current standards.
4\ If Nvidia Broadcast is a value-add for you, that's great. The majority of us just want to play games, not broadcast our warts to the world. See above regarding features.
5\ CUDA is somewhat of a chicken-and-egg problem: it became a de facto standard due to initially better support. However, 'massively better' is a misnomer, and it really depends on what professional task you are doing, as more things look to ditch CUDA (a move that has picked up pace over the last year).

tl;dr making definitive statements is crass.


----------



## Jism (Oct 29, 2020)

R0H1T said:


> This is why you buy aftermarket cards, depending on the model not only do you get higher clocks but also (much) better cooling.



These days there's hardly benefit from going aftermarket vs reference. Apart from AMD delivering a reference with 3 fans with proberly idle-stop and a custom fan profile, in the past even blowers could be tuned and / or adepted the way you wanted it. It's just that both AMD and nvidia select a default fan profile that ramp up pretty late untill the card reaches 80 degrees or so before it goes beserk. All could be solved by setitng the fan profile at approx 40 to 60% of it's duty cycle and you never had any issue with it.

Reference cards these days have the perfect VRM for most users, whether that's water, air or even LN2; there is hardly any condition these days where you need a larger VRM than the reference design already provides. You won't be overclocking so much that you need the VRM working at its full capacity either. The process node or the limitations of the silicon itself won't allow it anyway. And knowing AMD, there's probably a current limit on those chips as well to prevent degradation.

It's why you see game/boost clocks; similar to Zen CPUs, they allow higher clocks under light workloads and drop down when a heavy workload is applied. All of this is polled every 25 ms at the hardware level to give you the best clocks possible in relation to temperature and power consumption. This is obviously a blessing for consumers, but it takes the fun out of OC'ing a bit if you plan to run on water. AMD is already maxing out the silicon for you; you have to go subzero to get more out of it.
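That polling-based clock behaviour can be sketched in a few lines. This is a toy model only: the 25 ms polling interval comes from the post above, while the clock, temperature and power limits are made-up illustrative numbers, not AMD's real ones.

```python
# Toy model of a telemetry-driven boost algorithm: each polling
# window, raise the clock if temperature and power allow, back
# off if either limit is exceeded. All limits are illustrative.
def boost_step(clock_mhz, temp_c, power_w,
               temp_limit=90, power_limit=255,
               step=15, min_clock=1800, max_clock=2250):
    """Return the clock to use for the next 25 ms window."""
    if temp_c >= temp_limit or power_w >= power_limit:
        return max(min_clock, clock_mhz - step)   # back off
    return min(max_clock, clock_mhz + step)       # opportunistic boost

# Light load, cool card: clocks creep up toward the boost ceiling.
clock = 2000
for _ in range(20):
    clock = boost_step(clock, temp_c=65, power_w=200)
print(clock)  # 2250 (pinned at max_clock)

# At the power limit: the next window steps the clock back down.
clock = boost_step(2250, temp_c=78, power_w=260)
print(clock)  # 2235
```

A real implementation runs in firmware against fused-in voltage/frequency curves, but the feedback-loop shape is the same.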

And if it's not what you need in terms of noise, slap a water cooler on and you're done. Water cooling always has better efficiency compared to air, to be honest, and because of that you can hold the boost clocks longer. I'm glad AMD brought back not one but two generations' leap forward, with a card that competes with the 3090, which costs $1500 compared to $999. 24 GB of VRAM isn't a real reason to pay $500 more, considering Nvidia pays something like $4 for each extra memory chip. The margins are huge.


----------



## Max(IT) (Oct 29, 2020)

I was honestly expecting more, especially from the 6800, which was my target. I mean, 16 GB of VRAM is largely unnecessary (10 would have been perfect) and the price, probably because of that amount of VRAM, is $50-60 higher than the sweet spot, and definitely too close to the 6800 XT.
We know nothing about RT performance, so we should wait for the reviews before drawing any conclusions.



lynx29 said:


> especially since the GPUs do even better with an X570 and Zen 3 CPU. I'm really hoping I can get my hands on a 5600X and a 6800 or 6800 XT. Should be my final build for many years, and hoping I can sell my GTX 1070 laptop to at least cover some of the purchase... bleh


When did they speak about X570 ???


----------



## Mussels (Oct 29, 2020)

Max(IT) said:


> I was honestly expecting more, especially from the 6800, which was my target. I mean, 16 GB of VRAM is largely unnecessary (10 would have been perfect) and the price, probably because of that amount of VRAM, is $50-60 higher than the sweet spot, and definitely too close to the 6800 XT.
> We know nothing about RT performance, so we should wait for the reviews before drawing any conclusions.
> 
> 
> When did they speak about X570 ???



There's a new feature that, if used with B550 or X570, gives you a performance boost. It seems like that boost was included in the benchmark results, although we don't know much yet.


----------



## Max(IT) (Oct 29, 2020)

TechLurker said:


> Speculation for what exactly is AMD's reason for the higher price of the 6800 vs the 3070 has been interesting to read in the past couple of hours.
> 
> Some speculated that the ~80 dollar difference was "feature tax", SAM + RAGE Mode allowing it to eke ahead of a 3070; assuming an All-AMD 5000/500/6000 ecosystem. GamersNexus' pinned YT post seemed to think similar; that the ability to use RAGE Mode + SAM was the reasoning behind the increase (not that they agreed or disagreed with it).
> 
> ...


Speculation? It has 16 GB of VRAM vs 8 GB of VRAM. Price difference explained.

The problem is: are those 16 GB somehow useful at the target 1440p resolution? Hardly...



Mussels said:


> There's a new feature that, if used with B550 or X570, gives you a performance boost. It seems like that boost was included in the benchmark results, although we don't know much yet.


Ah ok, so not only X570...


----------



## Camm (Oct 29, 2020)

Max(IT) said:


> Speculation? It has 16 GB of VRAM vs 8 GB of VRAM. Price difference explained.



All three are 16GB, but I wouldn't be surprised if there is an 8GB 6800 variant.


----------



## Mussels (Oct 29, 2020)

Max(IT) said:


> Ah ok, so not only X570...


My guess is some form of PCI-E 4.0 caching, using available system RAM to speed things up. Maybe that's why they went 16 GB as well: slower than the competition, but if the data's already loaded/cached, it won't matter.


----------



## Max(IT) (Oct 29, 2020)

Mussels said:


> My guess is some form of PCI-E 4.0 caching, using available system RAM to speed things up. Maybe that's why they went 16 GB as well: slower than the competition, but if the data's already loaded/cached, it won't matter.


Understood, but since I have a B550 motherboard I was wondering whether that feature is supported on every Ryzen 5000 board. AMD didn't mention X570, just Ryzen 5000.


----------



## SLK (Oct 29, 2020)

Camm said:


> Just to break this down.
> 
> 1\ You don't know RT performance numbers. It's likely weaker, but we literally have no info at the moment. Also, with RT still all being hybrid... it's a feature, not a requirement.
> 2\ AMD mentioned its DLSS alternative; it hasn't detailed it yet, however.
> ...



1) AMD's DLSS alternative is an unknown; I don't pay for unknowns. DLSS 2.0 is here and it "just works".
2) Again, AMD's encoder is an unknown. I can only go on recent history, and Nvidia's solution is proven to be superior at this moment.
3) RT can only be appreciated by someone who values realistic graphics; for the rest, it's just a feature.

Even if I were a pure gamer, there would be no reason for me to go AMD. When I pay almost $600-700 for a GPU, a $50 difference is nothing for these added features. Not to mention, AMD has to regain some credibility after its driver problems in the last year.


----------



## Camm (Oct 29, 2020)

SLK said:


> is an unknown



And Nvidia availability doesn't exist. You keep dealing in absolutes over a product that has only been announced in a 20-minute clip and a press deck. But sure, go buy a 3080/3090 since you deal in certainty, and you certainly won't find one > <.

And seriously, AMD driver problems? When I got my 2080 Ti I was BSODing, for fuck's sake. Both vendors have had, and continue to have, issues at times.


----------



## ebivan (Oct 29, 2020)

SLK said:


> 1) AMD's DLSS alternative is an unknown; I don't pay for unknowns. DLSS 2.0 is here and it "just works".
> 2) Again, AMD's encoder is an unknown. I can only go on recent history, and Nvidia's solution is proven to be superior at this moment.
> 3) RT can only be appreciated by someone who values realistic graphics; for the rest, it's just a feature.
> 
> Even if I were a pure gamer, there would be no reason for me to go AMD. When I pay almost $600-700 for a GPU, a $50 difference is nothing for these added features. Not to mention, AMD has to regain some credibility after its driver problems in the last year.


Well, everything said about RT performance on Radeon is pure speculation at this point, so don't judge it before there is any evidence! Anyway, it remains to be seen how RT will be adopted in the future; right now it's a goodie used in about 20 games (none of which interests me). The future may bring broader adoption of RT, since the consoles can do it. But even then, the consoles are slower than the 6800 XT, and games will be optimized for consoles (as they always have been), so RT will probably run fine on the AMD cards too.

I have tried all the different hardware encoders out there, and none of them was good. To be honest, they're all shit! Sorry, but if you want quality, you have to do it in software. So no reason to go green for me there. The only reason to go for NVENC would be if you had a small 4- or 6-core CPU that couldn't handle software encoding. But if you only had such a low-tier CPU, you wouldn't buy a 3080/6800 XT anyway.

DLSS is absolutely overhyped. First of all, only a handful of games support it. Then, the difference from native rendering is clearly visible even with 2.0: it's just a blur. And, last but not least, it only makes sense for absolute hardcore esports guys, because the 3080 and 6800 XT seem to have no problem rendering at 4K/60 fps or 1440p/120 fps, so it's really only a thing for competitive 4K/120fps+ gamers.


----------



## Valantar (Oct 29, 2020)

Mussels said:


> What uhh, what titles out there support AMD's ray tracing right now?


Most of them, most likely. I don't know of any RTX-enabled games that don't use DXR, which is a DX12_2 feature, which RDNA 2 explicitly supports. It's likely that games will need some tweaks and minor updates to work out any kinks of the specific implementation, but as long as the GPUs support DXR they should work out of the box.


lynx29 said:


> It just occurred to me, the 6800XT shroud design has no vents on the side with the I/O... so all that heat is getting shot out the top of the card directly onto the CPU... Nvidia has the better reference design here: the hottest air gets shot outside the case next to the I/O, and the less hot air out into the case at the rear of the card... hmm, this will be interesting to see CPU temps with this GPU... that seems like a really bad design flaw... the air is literally going to hit the CPU straight as it leaves the card, especially if you use an air cooler, and it will just suck that hot air right in before it has a chance to escape.


GPUs with vertically oriented fins like that are quite common, and typically perform well. Like all open-air/axial fan cooling setups it requires sufficient case airflow, but it really shouldn't be an issue in terms of heating up your CPU. Nvidia's design with a large fan literally blowing hot air into your CPU cooler's intake is much more directly troublesome there, and even that works just fine.


This thread though ... damn. I mean, even though we don't have third-party review numbers yet, so we can't trust the numbers given 100%, we are still seeing AMD promise *a bigger jump in performance/W than Nvidia delivered with Maxwell*. (And given the risk of shareholder lawsuits, promises need to be reasonably accurate.) Does nobody else understand just how insane that is? The GPU industry hasn't seen anything like this for a decade or more. And they aren't even doing a "best case vs. worst case" comparison, but comparing the (admittedly worst case) 5700 XT against the seemingly overall representative 6800 XT - if they were going for a skewed benchmark, the ultra-binned 6900 XT at the same power would have been the obvious choice. 

Yes, the prices could be lower (and given the use of cheaper memory technology and a narrower bus than Nvidia, AMD probably has the margins to do that if competition necessitates it, even with 2x the memory). For now they look to deliver either a bit less performance for 2/3 the price, matching or slightly better performance for $50 less, or somewhat better performance for $80 more. From a purely competitive standpoint, that sounds pretty okay to me. From a "I want the best GPU I can afford" standpoint it would obviously be better if they went into a full price war, but that is highly unlikely in an industry like this. This nonetheless means that, for the first time in a _long_ time, we will have a competitive GPU market across the entire price range.
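For what it's worth, the price gaps described above can be checked with a quick back-of-the-envelope script. The MSRPs are the ones quoted in this thread; no independent performance numbers exist yet, so this compares prices only.

```python
# Announced MSRPs discussed in this thread (USD). This only shows
# the price gaps between competing cards, not performance per dollar.
amd = {
    "RX 6800":    579,  # positioned against the RTX 3070
    "RX 6800 XT": 649,  # positioned against the RTX 3080
    "RX 6900 XT": 999,  # positioned against the RTX 3090
}
nvidia = {"RTX 3070": 499, "RTX 3080": 699, "RTX 3090": 1499}

pairs = [("RX 6800", "RTX 3070"),
         ("RX 6800 XT", "RTX 3080"),
         ("RX 6900 XT", "RTX 3090")]
for a, n in pairs:
    print(f"{a} vs {n}: {amd[a] - nvidia[n]:+d} USD")
# RX 6800 vs RTX 3070: +80 USD
# RX 6800 XT vs RTX 3080: -50 USD
# RX 6900 XT vs RTX 3090: -500 USD
```

Whether those gaps are justified obviously depends on the reviews.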

I'll be mighty interested in a 6800 XT - after seeing 3rd-party reviews, obviously - but just as interested in seeing how this pans out across the rest of the lineup. I'm predicting a very interesting year for $200-400 GPUs after this: given that $500-600 GPUs are now matching the value of previous-gen $400 GPUs, we should get a noticeable bump in perf/$ in that price range.


----------



## ebivan (Oct 29, 2020)

For all the people complaining about the prices, remember that a 3080 for 700 is absolute nonsense! Has anyone actually gotten an FE 3080 for 700 bucks? No, nobody has! The cheapest AIB cards are about 850, and winning the lottery is more likely than getting a 3080 for 850 at the moment!

It remains to be seen if there will be reasonable quantities of the 6800 XT, of course. But AMD's move to build in that cache instead of using exotic, hardly available GDDR6X memory, and their use of TSMC's 7 nm, of which AMD seems to have good supply, points to better availability at launch. We will see.


----------



## Hyderz (Oct 29, 2020)

Very happy that AMD released new GPUs in the higher-tier segment that compete with Nvidia.
Hrmmmm, now the hard choice... AMD or Nvidia???

The 6800 (non-XT) looks good, and I think Nvidia will answer it with an RTX 3080 LE.


----------



## ratirt (Oct 29, 2020)

jabbadap said:


> uhm, where did they claim such a thing? All I can see is the slide claiming 35 games supporting FidelityFX, which does not mean that those 35 games have or will have any form of DXR or RT. FidelityFX has been around for a while now. And yeah, DirectX Ultimate is more than just RT.


It's 5 games at the start. I double-clicked for some reason and didn't fix it, sorry for that.



SLK said:


> Cons for AMD:
> - 1st gen RT, so it's slower than Ampere's RT
> - No DLSS, which gives a huge performance advantage in the second most popular game in the world (Fortnite)
> - NVENC is far superior to AMD's implementation. (OBS streaming software has native support for Nvidia but not AMD)
> ...


Where did you get this from? Your own thoughts?
There is a DLSS alternative in the form of Super Resolution. We don't know how it works, but it is there, and we will have to wait and see what it brings.
You don't know if it will be slower than Ampere's RT. The 6000 series will use the Microsoft DXR API with a Ray Accelerator for every CU.
Your way of perceiving this confuses me.
CUDA support, NVENC...
It's just the same as if I said NV sucks because it doesn't have AMD cores.


----------



## SLK (Oct 29, 2020)

ebivan said:


> Well, everything said about RT performance on Radeon is pure speculation at this point, so don't judge it before there is any evidence! Anyway, it remains to be seen how RT will be adopted in the future; right now it's a goodie used in about 20 games (none of which interests me). The future may bring broader adoption of RT, since the consoles can do it. But even then, the consoles are slower than the 6800 XT, and games will be optimized for consoles (as they always have been), so RT will probably run fine on the AMD cards too.
> 
> I have tried all the different hardware encoders out there, and none of them was good. To be honest, they're all shit! Sorry, but if you want quality, you have to do it in software. So no reason to go green for me there. The only reason to go for NVENC would be if you had a small 4- or 6-core CPU that couldn't handle software encoding. But if you only had such a low-tier CPU, you wouldn't buy a 3080/6800 XT anyway.
> 
> DLSS is absolutely overhyped. First of all, only a handful of games support it. Then, the difference from native rendering is clearly visible even with 2.0: it's just a blur. And, last but not least, it only makes sense for absolute hardcore esports guys, because the 3080 and 6800 XT seem to have no problem rendering at 4K/60 fps or 1440p/120 fps, so it's really only a thing for competitive 4K/120fps+ gamers.



While I respect your opinion, for $50 more, I get all these features. At this price bracket, $50 is not much. 5900X and 3080 for me.


----------



## Vya Domus (Oct 29, 2020)

mandelore said:


> If AMD are going to compare a 6900XT with Rage Mode (overclocked), surely they should compare it to an overclocked 3090 for a fair comparison.
> 
> Or just not use Rage Mode. Can't get a proper idea of performance, although they "are" just slides.



To be fair, if they are right and it's a one-click feature, it's basically PBO for GPUs. No one seems to have a problem with PBO, so there's that.



Mussels said:


> What uhh, what titles out there support AMD's ray tracing right now?



DXR is hardware-agnostic; AMD just has to provide a back-end implementation.


----------



## SLK (Oct 29, 2020)

ratirt said:


> It's 5 games at the start. I double-clicked for some reason and didn't fix it, sorry for that.
> 
> 
> Where did you get this from? Your own thoughts?
> ...



Marketing rule number 1: always show your best.

Yesterday's Radeon presentation clearly indicates they have matched Ampere's raster performance. They did not show RT numbers, and Super Resolution is something they are still working on. If their RT were as good as Ampere's, they would have shown numbers. Simple deduction.


----------



## turbogear (Oct 29, 2020)

Jism said:


> And if it's not what you need in terms of noise, slap a water cooler on and you're done. Water cooling always has better efficiency compared to air, to be honest, and because of that you can hold the boost clocks longer. I'm glad AMD brought back not one but two generations' leap forward, with a card that competes with the 3090, which costs $1500 compared to $999. 24 GB of VRAM isn't a real reason to pay $500 more, considering Nvidia pays something like $4 for each extra memory chip. The margins are huge.



That is exactly why I picked reference cards for Vega 64 and well Radeon VII was only available as reference.  
Replaced the cooler with water block and applied undervolt which helped to keep the boost clock higher for much longer times in games giving higher gaming performance.
With water block the temperatures were great for both Vega64 and Radeon VII with maximum 40°C at full load. 

The results  from my benchmarks for both of these cards tuned with undervolt and watercooling is available in Owners section at TPU forum for respective cards.


----------



## Mussels (Oct 29, 2020)

Vya Domus said:


> To be fair, if they are right and it's a one-click feature, it's basically PBO for GPUs. No one seems to have a problem with PBO, so there's that.
> 
> 
> 
> DXR is hardware-agnostic; AMD just has to provide a back-end implementation.



So DXR works in all RTX titles? I assume no one really knows yet, but these games have "RTX: On" not "DXR: On"


----------



## ratirt (Oct 29, 2020)

SLK said:


> Marketing rule number 1: always show your best.
> 
> Yesterday's Radeon presentation clearly indicates they have matched Ampere's raster performance. They did not show RT numbers, and Super Resolution is something they are still working on. If their RT were as good as Ampere's, they would have shown numbers. Simple deduction.


Best doesn't mean there is only one way of doing it. Sure, but this doesn't change a thing.
There aren't many games supporting DXR 1.0 and 1.1 at the moment. When NV released RTX there were none. So maybe give it some time for the cards to launch, and form an opinion once they are tested? AMD will host another presentation with a deep dive into the RDNA2 architecture; expect this to be revealed then.


Mussels said:


> So DXR works in all RTX titles? I assume no one really knows yet, but these games have "RTX: On" not "DXR: On"


Well, NV was first to implement RT, so RTX it is, but that doesn't mean the branding won't shift to DXR now that AMD has joined the club.


----------



## Vya Domus (Oct 29, 2020)

SLK said:


> Marketing rule number 1: always show your best.
> 
> Yesterday's Radeon presentation clearly indicates they have matched Ampere's raster performance. They did not show RT numbers, and Super Resolution is something they are still working on. If their RT were as good as Ampere's, they would have shown numbers. Simple deduction.



When they showed those numbers back at the Ryzen event, that should have been the 6900 XT, right? Except it wasn't - they didn't show their best.



Mussels said:


> So DXR works in all RTX titles? I assume no one really knows yet, but these games have "RTX: On" not "DXR: On"



There is no such thing as "enabling RTX"; it's a marketing trick Nvidia used to make it seem like the technology is theirs, and they seem to have pulled it off successfully. DXR is the feature that enables ray tracing; RTX is just the back-end implementation. There is no RTX API or anything like that.


----------



## ebivan (Oct 29, 2020)

SLK said:


> While I respect your opinion, for $50 more, I get all these features. At this price bracket, $50 is not much. 5900X and 3080 for me.


It is not $50 more. There ARE no 3080s for 700€. There never were $700 FE cards on the market for private customers!
*The RTX 3080 for $700 was a big fat lie!*

Since the Xbox uses DirectML, Super Resolution will be in most new games, don't worry!


----------



## ratirt (Oct 29, 2020)

ebivan said:


> It is not $50 more. There ARE no 3080s for 700€. There never were $700 FE cards on the market for private customers!
> *The RTX 3080 for $700 was a big fat lie!*
> 
> Since the Xbox uses DirectML, Super Resolution will be in most new games, don't worry!


It's not just the price that is higher; the cards are nowhere to be found. In my area, availability for these cards is January.


----------



## SLK (Oct 29, 2020)

ebivan said:


> It is not $50 more. There ARE no 3080s for 700€. There never were $700 FE cards on the market for private customers!
> *The RTX 3080 for $700 was a big fat lie!*
> 
> Since the Xbox uses DirectML, Super Resolution will be in most new games, don't worry!



I expect the same for 6800XT, $649 will be another lie.



ratirt said:


> It's not just the price that is higher; the cards are nowhere to be found. In my area, availability for these cards is January.


That sucks. I should be getting mine in mid-November.


----------



## ratirt (Oct 29, 2020)

SLK said:


> I expect the same for 6800XT, $649 will be another lie.


That depends on availability, not on whether somebody lied.
NV availability was non-existent, and that bumped up the price. Nobody says NV lied about the MSRP, and likewise AMD isn't lying. Pull yourself together, bro 


SLK said:


> That sucks. I should be getting mine in mid-November.


If you get it, good for you  Pre-orders are not my type of thing.


----------



## TheoneandonlyMrK (Oct 29, 2020)

SLK said:


> I expect the same for 6800XT, $649 will be another lie.
> 
> 
> That sucks. I should be getting mine in mid-November.


Doubtful. I bought Polaris and Vega at AMD's day-one MSRP, and if I couldn't, I wouldn't just pay a liar anyway.


----------



## ebivan (Oct 29, 2020)

SLK said:


> I expect the same for 6800XT, $649 will be another lie.


As I said, it remains to be seen. Right now you are just making assumptions.
Fact is, Nvidia lied to us. And they keep lying to us when they say RTX 3000 availability is not a supply issue.
Another fact is, it's not the first time they broadly lied to their customers (remember the GTX 970?).

As for AMD, at least they have a record of not falsifying their bar graphs. For availability, well, we'll see. But their choice to use regular GDDR6 and make up for the lack of bandwidth with cache lets me hope that there will be more cards on launch day.


----------



## Xuper (Oct 29, 2020)

What's with DLSS/RT? I feel they're like async compute, which fanboys overhyped. Come back in 5 years or more, and then we can talk about ray tracing/super resolution. I remember:

1) Ryzen 1xxx/2xxx can't reach higher clocks, so Intel is better - until Ryzen 3xxx came out.
2) Ryzen 3xxx has higher memory latency, so Intel is best for gaming - until AMD announced Ryzen 5xxx.
3) Ryzen 5xxx has matched it, and people are looking for something else?


----------



## Mussels (Oct 29, 2020)

Xuper said:


> What's with DLSS/RT? I feel they're like async compute, which fanboys overhyped. Come back in 5 years or more, and then we can talk about ray tracing/super resolution. I remember:
> 
> 1) Ryzen 1xxx/2xxx can't reach higher clocks, so Intel is better - until Ryzen 3xxx came out.
> 2) Ryzen 3xxx has higher memory latency, so Intel is best for gaming - until AMD announced Ryzen 5xxx.
> 3) Ryzen 5xxx has matched it, and people are looking for something else?



At least DLSS allows cards to age more gracefully; it'd be like having a 780 Ti right now that managed 4K60 (while really running the game at something like 720p).
I see *that* tech, in either implementation, as a massive deal going forward.
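The pixel math behind that 720p-to-4K example is straightforward (the resolutions here are the standard ones, chosen for illustration):

```python
# Pixel-count arithmetic behind upscaling: rendering internally at a
# lower resolution and upscaling means shading far fewer pixels per
# frame, which is where the performance headroom comes from.
def pixels(width, height):
    return width * height

native_4k = pixels(3840, 2160)   # 8,294,400 px
internal  = pixels(1280, 720)    #   921,600 px

ratio = native_4k / internal
print(ratio)  # 9.0 -- the GPU shades 1/9 of the pixels of native 4K
```

The upscaler's job (tensor-core network for DLSS, still undisclosed for AMD's Super Resolution) is to reconstruct the missing detail from that reduced sample count.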


----------



## Xuper (Oct 29, 2020)

Mussels said:


> At least DLSS allows cards to age more gracefully; it'd be like having a 780 Ti right now that managed 4K60 (while really running the game at something like 720p).
> I see *that* tech, in either implementation, as a massive deal going forward.


The problem is you can't get DLSS in every game. Even after 5 years, DLSS will be implemented only in famous games.

Edit: I forgot to mention it: Hitman 2.


----------



## ebivan (Oct 29, 2020)

Mussels said:


> At least DLSS allows cards to age more gracefully; it'd be like having a 780 Ti right now that managed 4K60 (while really running the game at something like 720p).
> I see *that* tech, in either implementation, as a massive deal going forward.


I doubt that. As you may know, having cards age gracefully is not in the interest of tech companies. They need to sell new hardware, not support old hardware. So right now they'll give you DLSS 2.0 support for current games, next year's games will get DLSS 3.0 support, but sooner or later older cards won't support future DLSS versions, so current cards will be limited to the DLSS versions of their corresponding generation.


----------



## SLK (Oct 29, 2020)

ebivan said:


> As I said, it remains to be seen. Right now you are just making assumptions.
> Fact is, Nvidia lied to us. And they keep lying to us when they say RTX 3000 availability is not a supply issue.
> Another fact is, it's not the first time they broadly lied to their customers (remember the GTX 970?).
> 
> As for AMD, at least they have a record of not falsifying their bar graphs. For availability, well, we'll see. But their choice to use regular GDDR6 and make up for the lack of bandwidth with cache lets me hope that there will be more cards on launch day.



Nvidia's share of the dGPU market is 80%. Of course the demand is much higher for Nvidia than for AMD; it's not a fair comparison. Moreover, Nvidia created so much hype around the Ampere launch that demand was unprecedented. Gamers Nexus' Steve investigated in one of his videos whether it's a supply issue or a demand issue; according to him, it's a demand issue.


----------



## Valantar (Oct 29, 2020)

SLK said:


> Marketing rule number 1: always show your best.
> 
> Yesterday's Radeon presentation clearly indicates they have matched Ampere's raster performance. They did not show RT numbers, and Super Resolution is something they are still working on. If their RT were as good as Ampere's, they would have shown numbers. Simple deduction.


I don't think anyone here is arguing that RDNA 2 will match Ampere in RT performance, but also going by simple deduction: given that the RT performance of RDNA 2 is good enough for both major console manufacturers, it can't be that far behind. Console makers are fully aware of the RT capabilities of Turing (and the issues with these), so I would expect them to demand at least parity with Turing for RT on a console to be even remotely viable.


Mussels said:


> So DXR works in all RTX titles? I assume no one really knows yet, but these games have "RTX: On" not "DXR: On"


That's because RTX has so far been the only existing implementation of DXR, so Nvidia has had free rein to market it as they have seen fit. RTX is in effect nothing more than a sticker on top of DXR.


SLK said:


> I expect the same for 6800XT, $649 will be another lie.


There's little reason to expect this - while this is no doubt a very large die, unlike the GA102 it's made by the best foundry around and on a tried and tested process node. Yields should be much better than for Nvidia, and while there's still no fab capacity to spare on TSMC 7nm, AMD has likely bought up all the wafer starts they possibly could in the run-up to RDNA 2 and Ryzen 5000 (not to mention the new consoles). Of course a lot of capacity also opened up a while back when Apple moved to 5nm, which AMD likely grabbed a decent chunk of.

Will availability be tight? No doubt - the gaming market has grown _massively_ in the last few years, so interest in and demand for these products is higher than ever, and production capacity hasn't really increased to match (no wonder, seeing how building a fab costs billions of dollars and takes at least 5 years). It stands to reason that supply will be tighter than before. But I don't expect supply for these to be as tight in relation to demand compared to the RTX 3000-series.


Mussels said:


> At least DLSS allows cards to age more gracefully; it'd be like having a 780 Ti right now that managed 4K60 (while really running the game at something like 720p).
> I see *that* tech, in either implementation, as a massive deal going forward.


That's an excellent point, but it also needs a universal driver-side implementation (one that doesn't require developer intervention) to work like that. AMD is promising their alternative to be more in that direction (though I'd be surprised if it was as good as Nvidia's in terms of image quality), and even cross-platform (so any developer effort would be applicable also to competing GPUs, making adoption more likely). It'd also be _really_ interesting to see if consoles would adopt this.


By the way, looking forward to when the new consoles are out and the RX 6700 series appear: it will be _really_ cool to see someone do a comparative review with the more traditional wide VRAM buses of the consoles vs. the narrow bus+cache solution for RDNA2 dGPUs.


----------



## Max(IT) (Oct 29, 2020)

ebivan said:


> DLSS is absolutely overhyped. First of all, only a handful of games support it. Then, the difference from native rendering is clearly visible even with 2.0: it's just a blur.



You clearly don't have a card supporting it, because your assumptions are absolutely wrong.
DLSS works very well, and it is definitely NOT just blur.

The only true point is the limited support in games.


----------



## SLK (Oct 29, 2020)

Xuper said:


> The problem is you can't get DLSS in every game. Even after 5 years, DLSS will be implemented only in famous games.
> 
> Edit: I forgot to mention it: Hitman 2.



Famous games have the most players. More players mean Nvidia can sell more cards. Which card do you think a Fortnite player will choose: a card without DLSS or a card with DLSS??


----------



## ratirt (Oct 29, 2020)

SLK said:


> Famous games have the most players. More players mean Nvidia can sell more cards. Which card do you think a Fortnite player will choose: a card without DLSS or a card with DLSS??


You are missing the point. If NV implements DLSS only in some games, that means DLSS is harder to implement, and since it is such a good feature, it's a shame not all games will have it. That is not good from a player's perspective. If AMD's Super Resolution is easier to implement, support for it will be better, and no matter which game you play, you will have that feature. Development will also be easier for game developers, meaning better support.


----------



## Xuper (Oct 29, 2020)

SLK said:


> Famous games have the most players. More players mean Nvidia can sell more cards. Which card do you think a Fortnite player will choose: a card without DLSS or a card with DLSS??


And with a game that doesn't support DLSS, do you think a player chooses a card with DLSS or a card with more raw power? What I wanted to say is that DLSS support is limited.
AMD left a lot of clock headroom for AIB cards. I'm sure RDNA2 AIB cards can reach 2400 MHz or more with water cooling.


----------



## Max(IT) (Oct 29, 2020)

Xuper said:


> The problem is you can't get DLSS in every game. Even after 5 years, DLSS will be implemented only in famous games.
> 
> Edit: I forgot to mention it: Hitman 2.


DLSS 2.0 (the previous version was quite crap) was released  in March 2020...


----------



## Zach_01 (Oct 29, 2020)

SLK said:


> Famous games have the most players. More players mean Nvidia can sell more cards. Which card do you think a Fortnite player will choose: a card without DLSS or a card with DLSS??


Because Fortnite is all about eye candy... Bad example.
My bad... I confused it with RTRT.

I still believe this, though...
I like eye-candy games, but right now both RTRT and DLSS 2.0 are more than limited. Maybe in a year or two.



Xuper said:


> And with a game that doesn't support DLSS, do you think the player chooses a card with DLSS or a card with more raw power? What I wanted to say is that DLSS support is limited.
> AMD left a lot of clock headroom for AIB cards. I'm sure RDNA2 AIB cards can reach 2400 MHz or more with water cooling.


AIBs don't need water coolers to push 350 W on these cards. I had a card in the past that drew 375 W on an air (two-fan) cooler and it was fine, as long as you can get that heat out of the case.


----------



## nikoya (Oct 29, 2020)

So now I have to hate LG for not implementing FreeSync on C9 and B9 OLEDs.


----------



## ratirt (Oct 29, 2020)

NV's problem is that they push their own standards, forcing developers to go with them. Look what happened with G-Sync: it still exists, but NV cards can now also use the open Adaptive-Sync standard that AMD's FreeSync is based on. It's the same with DLSS. It would seem NV wants to cut down the competition with a product that ties developers' hands. Game developers go along with it when there is no alternative. Now there is one, and it will be far easier to implement across a variety of games. AMD is going to hold another presentation about the RDNA2 architecture, with more detail on the features and how exactly they work. It may be worth a look.



nikoya said:


> So now I have to hate LG for not implementing FreeSync on C9 and B9 OLEDs.


Hate NV. They forced it upon LG. Besides, there are still screens with FreeSync.


----------



## ebivan (Oct 29, 2020)

SLK said:


> Moreover, Nvidia created so much hype around Ampere launch that demand was unprecedented.


Yeah, does Nvidia have interns doing its market research?


----------



## Vya Domus (Oct 29, 2020)

SLK said:


> Which card do you think a Fortnite player will choose? A card without DLSS or a card with DLSS?



Are you serious? Your average Fortnite player is probably like 14 and hardly knows what planet he is on; hell, he probably plays it on a phone or console. DLSS is the last thing he would think or care about. You think that the more popular the game, the more important these things are; that couldn't be further from the truth. It's the other way around: the more popular a game, the wider the audience, meaning most people won't know or care to know about some niche technology.

It's the 25+ year-old enthusiast buying crazy expensive hardware who thinks about whether or not a GPU has a certain feature, and he is in the minority.


----------



## Valantar (Oct 29, 2020)

nikoya said:


> So now I have to hate LG for not implementing FreeSync on C9 and B9 OLEDs.


This _might_ turn out to be wrong, but AFAIK these GPUs should still support VRR on C9 and B9 OLEDs, as those (at least on paper) support HDMI 2.1 VRR, which these GPUs also do. The proprietary implementation on them is "HDMI 2.1 VRR" support on non-HDMI 2.1 Nvidia GPUs. But at least in theory, you should be able to enable "FreeSync" (i.e. HDMI 2.1 VRR) on these GPUs regardless of explicit FreeSync support.


----------



## Blueberries (Oct 29, 2020)

I'm curious to see how Smart Access Memory turns out. The 3070 is still the best-value card as of today, and there will likely be a 3080 Ti in the $1000-1200 range to challenge the 6900 XT soon.


----------



## ratirt (Oct 29, 2020)

Valantar said:


> This _might_ turn out to be wrong, but AFAIK these GPUs should still support VRR on C9 and B9 OLEDs, as those (at least on paper) support HDMI 2.1 VRR, which these GPUs also do. The proprietary implementation on them is "HDMI 2.1 VRR" support on non-HDMI 2.1 Nvidia GPUs. But at least in theory, you should be able to enable "FreeSync" (i.e. HDMI 2.1 VRR) on these GPUs regardless of explicit FreeSync support.


Actually, VRR is supported by these screens, but it is not a FreeSync equivalent. Maybe it will work, but not necessarily the way a FreeSync monitor or TV would.


----------



## R0H1T (Oct 29, 2020)

Valantar said:


> The GPU industry *hasn't seen anything like this for a decade or more*.


Let's not kid ourselves: AMD could well be promising that jump relative to the worst-performing RDNA card out there, as opposed to, say, the most *efficient* one ~ which, by the way, is inside a *Mac*, not a PC.



ebivan said:


> Yeah, does Nvidia have interns *doing its market research*?


With a major launch, no less. Great job, JHH, learning on the job at the consumer's expense.


----------



## medi01 (Oct 29, 2020)

Vya Domus said:


> DXR is hardware-agnostic, AMD just has to provide a back-end implementation.


I was told multiple times that the actual implementation in green-sponsored games uses green proprietary extensions.

I just don't get why people think that is a good thing, as it basically spells d e a t h for RT.



Blueberries said:


> The 3070 is still the best value card as of today,


No it's not; the 6800 is.

6800 => $80 (16%) more for +18% performance and +8 GB of VRAM.
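For what it's worth, the arithmetic behind that claim checks out, assuming the launch MSRPs ($499 for the 3070, $579 for the 6800) and taking the +18% figure from AMD's slides at face value; a quick sketch:

```python
# Back-of-the-envelope price/performance comparison.
# Prices are the launch MSRPs; the +18% performance delta is the
# claimed figure above, not a measured benchmark result.
price_3070, price_6800 = 499, 579
perf_3070, perf_6800 = 1.00, 1.18  # normalized to the 3070

extra_cost = (price_6800 - price_3070) / price_3070   # ~0.16
extra_perf = perf_6800 / perf_3070 - 1                # 0.18

# Performance per dollar: higher is better value.
value_3070 = perf_3070 / price_3070
value_6800 = perf_6800 / price_6800

print(f"{extra_cost:.0%} more money for {extra_perf:.0%} more performance")
print(f"perf/$: 3070 = {value_3070:.5f}, 6800 = {value_6800:.5f}")
```

Under those assumptions the 6800 comes out marginally ahead on performance per dollar, which is the whole argument in one line.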


----------



## ratirt (Oct 29, 2020)

medi01 said:


> No it's not; the 6800 is.
> 
> 6800 => $80 (16%) more for +18% performance and +8 GB of VRAM.


I have the same impression. Of course, reviews are needed to show the bigger picture, but I don't understand why people see the 3070 as the better value here and bring it up as if it were obvious at this point.


----------



## TheoneandonlyMrK (Oct 29, 2020)

ebivan said:


> I doubt that. As you may know, having cards age gracefully is not in the interest of tech companies. They need to sell new hardware, not support old hardware. So right now they'll give you DLSS 2.0 support for current games, next year's games will get DLSS 3.0 support, but sooner or later older cards won't support future DLSS versions, so current cards will be limited to the DLSS versions of their corresponding generation.


Polaris and the 1080 Ti said what?!


----------



## Vya Domus (Oct 29, 2020)

medi01 said:


> I was told multiple times that the actual implementation in green-sponsored games uses green proprietary extensions.
> 
> I just don't get why people think that is a good thing, as it basically spells d e a t h for RT.



It doesn't really matter if they use proprietary extensions, because every manufacturer will have to get DXR working through their own extensions anyway. But when you actually write applications using DXR, you can only use the API calls Microsoft wrote.
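The split described here (one vendor-neutral API that applications call, with each manufacturer free to implement it however it likes underneath) can be sketched abstractly. The class and method names below are illustrative stand-ins, not real DXR/D3D12 identifiers:

```python
from abc import ABC, abstractmethod

# The common interface plays the role of the DXR calls Microsoft defines:
# applications program only against this, never against vendor internals.
class RaytracingBackend(ABC):
    @abstractmethod
    def dispatch_rays(self, width: int, height: int) -> str: ...

# Each vendor implements the interface its own way (proprietary extensions,
# custom hardware paths, etc.), invisible to the application.
class VendorABackend(RaytracingBackend):
    def dispatch_rays(self, width, height):
        return f"vendor-A traced {width * height} rays"

class VendorBBackend(RaytracingBackend):
    def dispatch_rays(self, width, height):
        return f"vendor-B traced {width * height} rays"

def render(backend: RaytracingBackend) -> str:
    # Application code is identical regardless of which backend is loaded.
    return backend.dispatch_rays(1920, 1080)

print(render(VendorABackend()))
print(render(VendorBBackend()))
```

The point of the analogy: whatever a driver does internally, the game itself only ever sees the common interface, so a second vendor can slot in its own backend without the game changing.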


----------



## TumbleGeorge (Oct 29, 2020)

Max(IT) said:


> I was honestly expecting more, especially from the 6800, which was my target. I mean, 16 GB of VRAM is highly unnecessary (10 would have been perfect)


I think AMD knows better than you what VRAM size is best, especially for long-term users. Nobody has commented on this:


> 12:10PM EDT - RDNA3 by the end of 2022


from the @Anandtech live blog. There must be a reason to break the 15-18 month development cycle.


----------



## Bytales (Oct 29, 2020)

ZoneDymo said:


> Performance is good, price not so much, sigh.
> 
> Like, take the RX 6800: it beats the 2080 Ti, so it will be a bit faster than an RTX 3070, but it also costs 580 dollars vs. 500 dollars for the RTX 3070, so not really a clear winner in the "what to buy" discussion.



AMD is a clear winner to me; I'll buy AMD just to "not buy Nvidia", and their performance will be more than enough. If I can "not buy Nvidia" and get similar performance, even if I pay 100 dollars more, or in the case of the 6900 XT pay 500 dollars less, I'll take it, just to punish Nvidia. They milked everyone for so long, for so much, all in the name of profit.


----------



## Max(IT) (Oct 29, 2020)

Vya Domus said:


> Are you serious? Your average Fortnite player is probably like 14 and hardly knows what planet he is on; hell, he probably plays it on a phone or console. DLSS is the last thing he would think or care about.


Do you have ANY data to support your claim?
Or are you just making assumptions because you don't like the game?


----------



## medi01 (Oct 29, 2020)

Vya Domus said:


> It doesn't really matter if they use proprietary extensions because every manufacturer will have to get DXR working using their own extensions anyway. But when you actually write applications using DXR you can only use the API calls Microsoft wrote.



When I say using extensions, I mean calling vendor-specific ops.



Max(IT) said:


> Do you have ANY data to support your claim?


Do you have other double standards to apply to arguments?


----------



## Max(IT) (Oct 29, 2020)

TumbleGeorge said:


> I think that AMD has better knowledge than Your how VRAM size is better.


Not necessarily.
Or do you think AMD has better knowledge than Nvidia, which put 8 GB on the 3070?

Have you ever checked the VRAM usage on your graphics card while playing at 1440p?



medi01 said:


> Do you have other double standards to apply to arguments?


Also to you: do you have ANY data about the average Fortnite player's age?

I don't know, and I'm quite sure you don't either.


----------



## Vya Domus (Oct 29, 2020)

Max(IT) said:


> Or are you just making assumptions because you don't like the game?



It's funny because I played Fortnite for about a year with a friend of mine and his little brothers and their friends, who were all 14.


----------



## medi01 (Oct 29, 2020)

Max(IT) said:


> Also to you: do you have ANY data about the average Fortnite player's age?


I have plenty of anecdotal data, including my kid and his friends and the kids of my friends.
No, it's in no way scientific, and it's good to see that a person who applies the same high standard to his own words found the time to set @Vya Domus and myself straight.


----------



## Max(IT) (Oct 29, 2020)

medi01 said:


> No it's not; the 6800 is.
> 
> 6800 => $80 (16%) more for +18% performance and +8 GB of VRAM.



So you already have a comparison between the 6800 and the 3070?
Quite strange, since AMD didn't show one yesterday...



medi01 said:


> I have plenty of anecdotal data



That answers my question: you don't have ANY data.


----------



## ratirt (Oct 29, 2020)

Max(IT) said:


> Not necessarily.
> Or do you think AMD has better knowledge than Nvidia, which put 8 GB on the 3070?
> 
> Have you ever checked the VRAM usage on your graphics card while playing at 1440p?


It is not engineering knowledge, it is marketing knowledge. Just like the 3080 that was supposedly coming with 20 GB: there was a memory shortage, and the idea was scrapped.


Max(IT) said:


> So you already have a comparison between the 6800 and the 3070?
> Quite strange, since AMD didn't show one yesterday...


AMD did show slides comparing the 3070 with the 6800. The only thing is that AMD used the 2080 Ti, which has basically the same performance. I'm surprised you missed it.


----------



## Metroid (Oct 29, 2020)

16 GB of VRAM is very attractive, and it sends a clear message to Nvidia: now Nvidia needs to respond with something similar or superior to 16 GB of GDDR. Competition is good for us.


----------



## Vya Domus (Oct 29, 2020)

Max(IT) said:


> That answers my question: you don't have ANY data.



Neither do you; all the studies I found required a minimum age of 18. But you'd have to be pretty dense to believe for a second that this game isn't played mostly by children, given the memes, the plethora of complaints from parents, and all the media attention. That's enough evidence.


----------



## medi01 (Oct 29, 2020)

ratirt said:


> AMD did show slides comparing the 3070 with the 6800. I'm surprised you missed it.


Hardly surprising that people in the green reality-distortion field missed it.
There are three typical reactions:

1) Missed it completely
2) Didn't catch that the 6800 is 18% faster than the 2080 Ti
3) But muh DLSS / muh useless RT checkboxes in a handful of games, but muh CUDA, but muh drivers


----------



## WeeRab (Oct 29, 2020)

If AMD can supply enough inventory, they will take major market share from Nvidia.
Nvidia's problems are not with "scalpers"; their problems go MUCH deeper, i.e. abysmal yields on the Samsung 8 nm process.
That is why you cannot actually buy any of these Ampere cards (not the average Joe, anyway), so AMD have an open field.
Most of the tech press were ecstatic about Ampere, but since AMD have matched, if not surpassed, Nvidia this time around, I would expect similarly gushing reviews of the AMD 6xxx series.
I won't hold my breath, though...


----------



## ratirt (Oct 29, 2020)

medi01 said:


> Hardly surprising that people in the green reality-distortion field missed it.
> There are three typical reactions:
> 
> 1) Missed it completely
> ...


Blindfolded, bro?
If the claims for the 6800 are confirmed by reviews, it will blow past the 3070. What I wonder now is whether the 6800 can be flashed with the 6800 XT BIOS for additional performance/power delivery, just like the 5700 could with the 5700 XT's BIOS. That would be just perfect and would give the 6800 even more of an edge. It shouldn't be a problem, since they both have 16 GB of VRAM.


----------



## ebivan (Oct 29, 2020)

Max(IT) said:


> Also to you: do you have ANY data about the average Fortnite player's age?
> 
> I don't know, and I'm quite sure you don't either.


Well, here you go:

Fortnite Usage and Revenue Statistics (2020) (www.businessofapps.com)

> Newzoo find that 53% of Fortnite players were aged 10-25

And also here (they didn't take under-18s into account):

70 Fortnite Statistics: 2020/2021 Data & Market Share Analysis (comparecamp.com)

> The overwhelming age bracket of Fortnite players happens to be the 18-24 age range, representing almost 2/3 of players, or 62.7%.


----------



## Max(IT) (Oct 29, 2020)

Vya Domus said:


> It's funny because I played Fortnite for about a year with a friend of mine and his little brothers and their friends who were all 14.


Good for you (I don't like the game), but my question still stands: do you have any factual data about players' ages?


----------



## Vya Domus (Oct 29, 2020)

Max(IT) said:


> Good for you (I don't like the game), but my question still stands: do you have any factual data about players' ages?



Did you not read my previous comment, or the one above, or are you just pretending you didn't?


----------



## Max(IT) (Oct 29, 2020)

ratirt said:


> *AMD did show slides comparing the 3070 with the 6800*. The only thing is that AMD used the 2080 Ti, which has basically the same performance. I'm surprised you missed it.


No, they DIDN'T.
The 2080 Ti and the 3070 ARE NOT the same thing, and in many cases the 3070 outperformed the 2080 Ti, so the "18%" figure is wrong.
Other than that, the numbers were with Smart Access Memory, which requires a Ryzen 5000. The situation for everyone else could be different, so my suggestion is to stop cheerleading and wait for actual benchmarks.



Vya Domus said:


> Did you not read my previous comment, or the one above, or are you just pretending you didn't?


Your comment is anecdotal evidence at best.


----------



## Vya Domus (Oct 29, 2020)

Max(IT) said:


> Your comment is anecdotal evidence at best.



So you didn't, we're done then.


----------



## Max(IT) (Oct 29, 2020)

ebivan said:


> Well, here you go:
> 
> 
> 
> ...


Did you read it?
There are three statistics that give different numbers. And "18-25" is quite different from the "14 years old" claimed above.
At 20-25 years old you absolutely know what DLSS is and if it fits your needs or not.



Vya Domus said:


> Neither do you; all the studies I found required a minimum age of 18. But you'd have to be pretty dense to believe for a second that this game isn't played mostly by children, given the memes, the plethora of complaints from parents, and all the media attention. That's enough evidence.


Yes, I clearly stated above that I DON'T KNOW, but I'm not the one making unfounded assumptions here. So the burden of proof is on you.
You said 14 years old, and I asked for something to back up that claim.


----------



## Vya Domus (Oct 29, 2020)

Lmao, one study said 53% were between 10-25, another that they didn't ask people below the age of 18, and this guy is still at it.


----------



## Frick (Oct 29, 2020)

Max(IT) said:


> At 20-25 years old you absolutely know what DLSS is and if it fits your needs or not.



So there's a linear relationship between knowledge and age?


----------



## Max(IT) (Oct 29, 2020)

Metroid said:


> 16gb of ram is very attractive and that sends a clear message to nvidia and now nvidia needs to respond with something similar or superior than 16gb of gddr, competition is good for us.


I would have preferred 10 or 12 GB of VRAM for $50 less.
16 GB for the intended target (mostly 1440p) is totally useless.



Frick said:


> So there's a linear relationship between knowledge and age?


Actually, I don't think so.
In my opinion, there are 16-year-olds more competent than people here. I was just arguing against the "Fortnite players are 14 years old, so they don't know about DLSS" claim above.



Vya Domus said:


> Lmao, one study said 53% were between 10-25, another that they didn't ask people below the age of 18, and this guy is still at it.


Yes, ONE study, and it didn't show us how many "14-year-old players" there were. You don't know whether 90% of those players are 24 years old, because the study just used one big 10-25 bracket.
You should learn more about statistics.


----------



## ratirt (Oct 29, 2020)

Max(IT) said:


> No, they DIDN'T.
> The 2080 Ti and the 3070 ARE NOT the same thing, and in many cases the 3070 outperformed the 2080 Ti, so the "18%" figure is wrong.
> Other than that, the numbers were with Smart Access Memory, which requires a Ryzen 5000. The situation for everyone else could be different, so my suggestion is to stop cheerleading and wait for actual benchmarks.











NVIDIA GeForce RTX 3070 Founders Edition Review - Disruptive Price-Performance (www.techpowerup.com)

NVIDIA's GeForce RTX 3070 is faster than the RTX 2080 Ti, for $500. The new FE cooler is greatly improved, runs very quietly, and even has fan stop. In our RTX 3070 Founders Edition review, we'll also take a look at RTX performance, energy efficiency, frametimes, and overclocking.



Try this and see what the difference in performance between the 3070 and the 2080 Ti is. AMD didn't use the 3070 because it wasn't available. Anyway, the 2080 Ti is a good stand-in for the 3070's performance; even though they are two different cards, their performance is generally the same across the board.


----------



## medi01 (Oct 29, 2020)

Some fun:


----------



## Max(IT) (Oct 29, 2020)

medi01 said:


> Hardly surprising people in green reality distortion field missed it.
> There are 3 typical reactions:
> 
> 1) Missed it completely
> ...


First of all: I'm not an Nvidia supporter.
I own both a 2070 Super and a 5700 XT, and I'm still undecided between the 3070 and the 6800. I will wait for actual reviews before buying anything.

1) Did I miss something AMD DIDN'T show us? There is NO 6800 vs. 3070 comparison in their slides;
2) there is NO flat 18% speed difference even versus the 2080 Ti, since it varies depending on the game, and the 6800 benchmarks were run ONLY with Smart Access Memory enabled, a feature that requires a Ryzen 5000. AMD purposely omitted plain 6800 results, while providing them for the 6800 XT. This is called marketing;
3) but DLSS/RT are a factor, just like CUDA is for some users. CUDA doesn't matter to me, but RT definitely does, and it is very strange that AMD didn't show anything about it.


----------



## medi01 (Oct 29, 2020)

Max(IT) said:


> First of all


You act as a typical butthurt green boi would. So, shrug.


----------



## Max(IT) (Oct 29, 2020)

ratirt said:


> NVIDIA GeForce RTX 3070 Founders Edition Review - Disruptive Price-Performance
> 
> 
> NVIDIA's GeForce RTX 3070 is faster than the RTX 2080 Ti, for $500. The new FE cooler is greatly improved, runs very quietly, and even has fan stop. In our RTX 3070 Founders Edition review, we'll also take a look at RTX performance, energy efficiency, frametimes, and overclocking.
> ...


I already read that review, and many others. I know the 3070 and the 2080 Ti are quite similar (depending on the game and resolution).
The point is AMD cherry-picked some benchmarks (which is absolutely normal in a marketing presentation), and you cannot say "it is 18% faster" without a proper review.
Especially because on the 6800, the mainstream card, they tested with a feature enabled that is available ONLY to owners of a CPU that isn't even really out yet.


----------



## ratirt (Oct 29, 2020)

Max(IT) said:


> First of all: I'm not an Nvidia supporter.
> I own both a 2070 Super and a 5700 XT, and I'm still undecided between the 3070 and the 6800. I will wait for actual reviews before buying anything.
> 
> 1) Did I miss something AMD DIDN'T show us? There is NO 6800 vs. 3070 comparison in their slides;
> ...


You should watch the presentation again, because what you are saying is total horseshit. I'm not sure whether you are deliberately acting like this or you're just ignorantly feeding this thread.


Max(IT) said:


> I already read that review, and many others. I know the 3070 and the 2080 Ti are quite similar (depending on the game and resolution).
> The point is AMD cherry-picked some benchmarks (which is absolutely normal in a marketing presentation), and you cannot say "it is 18% faster" without a proper review.
> Especially because on the 6800, the mainstream card, they tested with a feature enabled that is available ONLY to owners of a CPU that isn't even really out yet.


You mean like NV's DLSS? What is it with this "cherry-picked"? Can you make a list of games which, in your opinion, would not be cherry-picked and which AMD should have used for the presentation?


----------



## Max(IT) (Oct 29, 2020)

medi01 said:


> You act as a typical butthurt green boi would. So, shrug.


Sure...






ratirt said:


> You should watch the presentation again because what you are saying is total horeseshit. Not sure if you are deliberately acting like that or you're just an ignorant feeding this thread.


I was expecting this kind of reaction. As I said before, even though I have two AMD systems in my house, AMD supporters are the worst on the web, in total denial. I haven't even seen anything similar from Apple supporters...


----------



## medi01 (Oct 29, 2020)

Max(IT) said:


> AMD cherry-picked some benchmarks


For FS, you are mistaking Lisa for Jensen.
When Lisa says "X", it is X.
When Jensen says "X", it is never X, and how far off reality is depends on how much pain Jensen is in.
"Cherry-picked", eh? Borderlands 3:






And then there are lovely YouTube channels like DF who, totally for free, figure out how to trounce older cards' performance to make Jensen's lies look legit, as usual.



Max(IT) said:


> Sure


That doesn't change the way you act, even if you had an "AMD" tattoo on your forehead.


----------



## Max(IT) (Oct 29, 2020)

medi01 said:


> For FS, you are mistaking Lisa for Jensen.
> When Lisa says "X", it is X.
> When Jensen says "X", it is never X, and how far off reality is depends on how much pain Jensen is in.
> 
> ...


You are being childish.
I don't care about "Lisa or Jensen".

They are nothing to me.

I do care about the product, and I will choose between the 6800 and the 3070 the moment I see a proper review, not a marketing presentation.
The "way I act" is not cheerleading a brand, and it is none of your business.


----------



## Vya Domus (Oct 29, 2020)

Max(IT) said:


> Yes, ONE study, and it didn't show us how many "14-year-old players" there were. You don't know whether 90% of those players are 24 years old, because the study just used one big 10-25 bracket.
> You should learn more about statistics.



I thought you'd let it go, but you insist on being giga-cringeworthy with this fixation on 14-year-old players.

Let's recap: most studies dismissed entries from people below the age of 18; one did conclude that 53% of players are between 10 and 25, and given that most other studies concluded about 30% are between 18 and 24, that likely leaves a lot of users below 18. Our resident 200-IQ statistician here therefore concluded that because most studies don't include that age bracket and only one does, it must mean there is no data showing children play this game. This from someone telling me to learn statistics, mind you. Tons of news reports and articles about parents being annoyed that their kids play this game aren't anecdotal evidence.

Moreover, you go on to say people between 20 and 25 absolutely know what DLSS is. Really, genius, where's the data? If you want to play this stupid game, square up and don't say something this outrageous.


----------



## ratirt (Oct 29, 2020)

Max(IT) said:


> Sure...
> 
> View attachment 173711
> 
> ...


If I'm an AMD supporter, then you are an NV supporter. Case closed.


----------



## medi01 (Oct 29, 2020)

Max(IT) said:


> I don't care about "Lisa or Jensen".


You shouldn't either.
But if you are into tech and still haven't noticed that Jensen almost never tells the truth and Lisa almost never lies, there is something wrong.


----------



## Max(IT) (Oct 29, 2020)

ratirt said:


> If I'm AMD supporter then you are NV supporter. Case closed.


No, I am not.
I have NO brand loyalty at all.





Actually, lately I've spent much more on AMD than on other brands, and I'm considering a 6800 as my next graphics card.


----------



## ratirt (Oct 29, 2020)

Max(IT) said:


> no, I am not.
> I have NO brand loyalty at all.
> 
> View attachment 173713
> ...


Cool, thanks for sharing.
I'm also considering one, and I will wait for the 6800 and XT reviews, but the odds are what they are, and pointing to cherry-picked benchmark games or to AMD's use of newly developed features that support their own products does not diminish their credibility, nor should it be considered cheating in any way. The raw performance is there, and it is great compared to Nvidia's cards.


----------



## Max(IT) (Oct 29, 2020)

ratirt said:


> Cool, thanks for sharing.
> I'm also considering one, and I will wait for the 6800 and XT reviews, but the odds are what they are, and pointing to cherry-picked benchmark games or to AMD's use of newly developed features that support their own products does not diminish their credibility, nor should it be considered cheating in any way. The raw performance is there, and it is great compared to Nvidia's cards.


I never said they were cheating.
It is normal marketing, and every company does it to create hype and increase sales; that's their business.
I'm just saying I will make a decision after the first reviews from Techspot and Guru3D.



Vya Domus said:


> I thought you'd let it go, but you insist on being giga-cringeworthy with this fixation on 14-year-old players.
> 
> Let's recap: most studies dismissed entries from people below the age of 18; one did conclude that 53% of players are between 10 and 25, and given that most other studies concluded about 30% are between 18 and 24, that likely leaves a lot of users below 18. Our resident 200-IQ statistician here therefore concluded that because most studies don't include that age bracket and only one does, it must mean there is no data showing children play this game. This from someone telling me to learn statistics, mind you. Tons of news reports and articles about parents being annoyed that their kids play this game aren't anecdotal evidence.
> 
> Moreover, you go on to say people between 20 and 25 absolutely know what DLSS is. Really, genius, where's the data? If you want to play this stupid game, square up and don't say something this outrageous.


Still a lot of words and no evidence about "14 years old"...


----------



## mtcn77 (Oct 29, 2020)

medi01 said:


> You shouldn't either.
> But if you are into tech and still haven't noticed that Jensen almost never tells the truth and Lisa almost never lies, there is something wrong.


There must be an impostor.


----------



## ratirt (Oct 29, 2020)

Max(IT) said:


> I never said they were cheating.
> It is normal marketing, and every company does it to create hype and increase sales; that's their business.
> I'm just saying I will make a decision after the first reviews from Techspot and Guru3D.


You are mixing up hype with increased sales and marketing. The features are supposed to work, and they do in most cases. If they support their own products, that is neither hype nor a marketing scheme.
I always wait for the reviews, but these presentations do give some level of information and comparison on how products perform against the competition.
I can tell you now, you won't see much difference in performance between the 3070 and the 6800 in the reviews, and if you do, I'm sure it will be in AMD's favor.
How it will turn out, we have yet to see.


----------



## Vya Domus (Oct 29, 2020)

Max(IT) said:


> Still a lot of words and no evidence about "14 years old"...



There is; you are just too cringy and ignorant. By the way, I'm still waiting on that data for your ridiculous claim that 20-25-year-olds must know about DLSS.


----------



## Max(IT) (Oct 29, 2020)

ratirt said:


> You are mixing up hype with increased sales and marketing. The features are supposed to work, and they do in most cases. If they support their own products, that is neither hype nor a marketing scheme.
> I always wait for the reviews, but these presentations do give some level of information and comparison on how products perform against the competition.
> I can tell you now, you won't see much difference in performance between the 3070 and the 6800 in the reviews, and if you do, I'm sure it will be in AMD's favor.
> How it will turn out, we have yet to see.


Marketing's business is to create hype to increase sales.
Everything is correlated.

I will read the reviews, and if the 6800 is faster than the 3070 at a comparable price, I will buy it when available. I am not going to replace my 3900X with a 5900X until later next year, so I need a review without Smart Access Memory at 1440p (my gaming resolution).



Vya Domus said:


> There is; you are just too cringy and ignorant. By the way, I'm still waiting on that data for your ridiculous claim that 20-25-year-olds must know about DLSS.


You are completely turning the tables in order to hide your clearly baseless claim that Fortnite is played mostly by 14-year-old kids....

YOU started by saying that 14-year-old kids don't know or care what DLSS is (or what planet they are on), and I just pointed out that maybe Fortnite is also played by older people who, according to your narrative, are more capable of understanding DLSS and RT.
Here, to refresh your fading memory:



> Your average Fortnite player is probably like 14 and hardly knows what planet he is on; hell, he probably plays it on a phone or console. DLSS is the last thing he would think or care about.



I don't think knowledge is age-related. And I surely know 16-year-old kids smarter than you in this field.


----------



## Vya Domus (Oct 29, 2020)

Max(IT) said:


> I just pointed out that maybe Fortnite is also played by older



You pointed out nothing of any importance or relevance; the one study that included all age brackets shows there are many kids playing the game, which means there are also a lot of 14-year-olds playing it.

Saying that 20-25-year-olds *absolutely* know about DLSS is by far the most ridiculous and baseless thing said on here, and it doesn't follow from my narrative at all; I simply said the kind of people who care about this come from a very small niche. Your original claim that most people who play Fortnite would choose a GPU with DLSS is not only ridiculous but not backed by anything at all. Again, you wanted to play this dumb game of demanding definite proof, and so far all your ideas have been debunked.



Max(IT) said:


> And I surely know 16 years old kids smarter than you in this field.



Criiiiiiiiiiiiiiiiiinge


----------



## medi01 (Oct 29, 2020)

What do we know about DLSS by the way?


----------



## ratirt (Oct 29, 2020)

Max(IT) said:


> marketing business is to create hype to increase sales.
> Everything is correlated.
> 
> I will read the reviews and if 6800 will be faster than 3070 at a comparable price I will buy it when available. I am not going to replace my 3900X with a 5900X until later next year, so I need a review without using the Smart Access Memory @ 1440P (my resolution for gaming).


I don't think so. The marketing business is to present products to customers/potential clients so that they know what the products offer, what they are all about, and whether you need them.
Hype, on the other hand, is the presentation of a product without the full scope of what it offers, blown out of proportion: for instance, exaggerating what the product can do, like its performance.
At least that's how I take it.


----------



## Max(IT) (Oct 29, 2020)

Vya Domus said:


> You pointed out nothing of any importance or relevance, the one study that included all age brackets shows there are many kids playing the game. Which means there are also a lot of 14 year old playing the game.
> 
> Saying that 20-25 year old people *absolutely* know about DLSS is by far the most ridiculous and baseless thing said on here and it doesn't follow my narrative at all, I simply said the kind of people that care about this come from a very small niche. Your original claim that most people who play Fortnite would chose a GPU with DLSS is still not only ridiculous but not backed by anything at all. Again, you wanted to play this dumb game of showing definite proof and so far all your ideas have been debunked.
> 
> ...




I'm not going to waste any more time with you.
English may not be my first language, but I wrote this:



> *AT* 20-25 years old you absolutely know what DLSS is and *if it fits your needs* or not.



That is quite different from "any 20-25-year-old knows about DLSS".
So maybe you should review your language skills, along with your statistics...

Let me know when you find data about 14-year-old Fortnite players. Until then, bye bye...


----------



## Vya Domus (Oct 29, 2020)

medi01 said:


> What do we know about DLSS by the way?
> 
> View attachment 173720



ｃｒｉｓｐ


----------



## Max(IT) (Oct 29, 2020)

medi01 said:


> What do we know about DLSS by the way?
> 
> View attachment 173720



We are going OT here, but you cannot take a single-frame screenshot to demonstrate how DLSS works.
There are tons of videos online about DLSS 2.0 quality.


----------



## mtcn77 (Oct 29, 2020)

Vya Domus said:


> ｃｒｉｓｐ


Roast, or crisp?


----------



## Vya Domus (Oct 29, 2020)

Max(IT) said:


> not going to lose anymore time with you.



Good choice, because you had nothing intelligent to say. You are still in denial about there being no data, as if ages 10-24 somehow must not include 14, right? "Learn statistics" my ass; you are illiterate in many domains, not just statistics.


----------



## Nater (Oct 29, 2020)

Skimmed the thread.  Two things stand out:

1. Where are all the people congratulating AMD on 16 GB of VRAM vs 10 GB?  It seems they're shitting on the AMD 6000 series and still leaning towards the RTX 3000 series.  So RTX features > 6 GB of VRAM (which was a dealbreaker before)?

2.  My 11-year-old asked for a new RTX card (he's on a GTX 1070 @ 1080p) when the Fortnite preview dropped on YouTube.  Kids aren't as ignorant as you think, and marketing works.

He gon' be pissed when he gets my 5700XT hand-me-down.


----------



## Valantar (Oct 29, 2020)

R0H1T said:


> Let's not kid ourselves, AMD could well be promising the jump wrt the worst performing RDNA card out there as opposed to say the most *efficient* one ~ which btw is inside a *Mac* not PC


So you didn't read the end notes in the slides then? Efficiency numbers are 5700 XT vs. 6800 XT. The 5700 XT is the least efficient RDNA 1 GPU, true, but the 6800 XT is by no means a best-case scenario - that would be the heavily binned, limited availability 6900 XT at the same power (which they incidentally listed as having a 65% improvement over the 5700 XT). So no, this doesn't seem like a "best v. worst" comparison.


Max(IT) said:


> At 20-25 years old you absolutely know what DLSS is and if it fits your needs or not.


Uh ... only if you actually care about that kind of stuff. Which the average gamer does not whatsoever. This is quite limited enthusiast knowledge. Most gamers' level of knowledge about technical features is more or less on the level of "does the game run on my hardware, y/n?".


Max(IT) said:


> The point is AMD cherrypicked some benchmarks (that's absolutely normal in a marketing presentation) and you cannot say "it is 18% faster" without a proper review.


No cherry picking here. The _only_ reason not to trust AMD's data here is that they themselves did the testing. The games used, settings used and hardware setups used are all publicized in the slide deck, if you bothered to read. There's no indication that the games they picked are disproportionately favoring AMD GPUs - shown at least partly by the relatively broad span of results in their testing, including games where they lose. Could you imagine Nvidia showing a performance graph from a game where they weren't the fastest? Yeah, there's a difference of attitude here, and AMD marketing for the past couple of years (since Raja Koduri left RTG and Dr. Su took over control there, at least) has been impressively trustworthy for first-party benchmarks.



Max(IT) said:


> That is quite different than "any 20-25 years old know about DLSS".


Actually, it isn't. You effectively said "at age X, you absolutely know what ABC is and if it fits your needs or not". So, that's two statements in one: that at age X you know what ABC is, and that at age X you have the breadth of knowledge and judgement skills required to know if it fits your needs or not". So whether you meant to or not, you did in effect say that "by the age of 25, everyone knows what DLSS is".

It's obvious there are young tech enthusiasts who know what DLSS is, though at age 14 I sincerely doubt there are more than a handful of tech enthusiasts worldwide with the reasoning and temperament to not just be outright screaming fanboys and fangirls - that's not their fault, they just lack the biological and experiential basis for acting like reasonable adults. But still, inferring that "above a certain age anyone knows about [obscure technical term X]" is way, way out there.

As for Fortnite being full of 14-year-olds: that's a given. No, the majority aren't 14 - that would be _really weird_. But there are _tons_ of kids playing it, and likely the majority of players are quite young. Add to that the broad popularity of the game, and it's safe to assume that the average Fortnite player has _absolutely no idea_ what DLSS is. Heck, there are likely more people playing Fortnite on consoles and phones than people in the whole world who know what DLSS is.


----------



## basco (Oct 29, 2020)

We all love each other at TPU, don't we.

I find it interesting that AMD recommends a 650 W PSU for the 6800, 750 W for the 6800 XT, and 850 W for the 6900 XT, while all of them have 2x 8-pin connectors and the latter two share the same TBP.


----------



## Vya Domus (Oct 29, 2020)

Nater said:


> Where are all the people congratulating AMD on 16GB of vram vs 10GB?



Congratulating them for what? Large VRAM capacities should have been the norm anyway.


----------



## FranciscoCL (Oct 29, 2020)

ratirt said:


> Actually. The VRR is supported by these screens but it is not FreeSync equivalent. Maybe it will work but it may not necessarily work as a FreeSync monitor or a TV would work.



The Xbox One X works with HDMI VRR just like FreeSync, even with LFC.










I guess it will be the same with the next consoles and the video cards based on the same RDNA 2 GPU architecture.


----------



## Valantar (Oct 29, 2020)

Nater said:


> 2.  My 11 year old asked for a new RTX card(he's on a GTX 1070@1080p) when the Fortnite preview dropped on YouTube.  Kids aren't as ignorant as you think, and marketing works.
> 
> He gon' be pissed when he gets my 5700XT hand-me-down.


... if your 11-year-old gets pissed about getting a 5700 XT as a hand-me-down, I'm sorry to say you have a seriously tech-spoiled kid. Heck, I don't even have a GPU close to the 1070.


----------



## medi01 (Oct 29, 2020)

Max(IT) said:


> Online there are



Videos, with known paid shills like DF hyping it.
And then there are eyes to see and brains to use, god forbid.
And even some articles to be found, with, god forbid, actual pics:









Why this month's PC port of Death Stranding is the definitive version [Updated] - A major embargo is up, so we've added comparison images for anti-aliasing methods. (arstechnica.com)





At the end of the day, it is a cherry-picking game:





DLSS 2.0 (these are shots from a picture that hypes DLSS, by the way):






But it demonstrates how delusional the DLSS hypers are. Like any other image upscaling tech, it has its ups and downs.
With 2.0 it is largely the same as with TAA, which it is based on: it gets blurry, it wipes out small details, and it struggles with fast-moving objects.
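To illustrate the point (a toy sketch in plain NumPy, not actual TAA or DLSS code; the frames, blend factor, and function name are made up), this is roughly why history-based accumulation smears moving detail when reprojection fails:

```python
import numpy as np

def taa_blend(history, current, alpha=0.1):
    """Exponential blend of accumulated history with the current frame.
    A small alpha keeps more history: smoother image, but blurrier motion."""
    return alpha * current + (1.0 - alpha) * history

# A single bright pixel moving one position per frame over a dark background.
frames = [np.roll(np.array([1.0, 0.0, 0.0, 0.0, 0.0, 0.0]), i) for i in range(4)]

history = frames[0]
for frame in frames[1:]:
    # Pretend reprojection failed (bad/missing motion vectors), so the
    # history is stale: the moving detail gets smeared over several pixels.
    history = taa_blend(history, frame)

print(history.round(3))  # the detail is dimmed and spread out, no sharp 1.0 left
```

Real implementations use motion vectors to reproject the history before blending, which is exactly why they break down on fast or erratic motion.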


----------



## Valantar (Oct 29, 2020)

medi01 said:


> Videos, with known paid shills like DF hyping it.
> And then there are eyes to see and brains to use, god forbid.
> And even some articles to be found, with, god forbid, actual pics:
> 
> ...


What is with you and hating on Digital Foundry? Can you explain how they in _any _way whatsoever are "shills" for anyone? They do in-depth technical reviews with a much higher focus on image quality and rendering quality than pretty much anyone else. Are they enthusiastic about new features that have so far been Nvidia exclusive? Absolutely, but that proves nothing more than that they find these features interesting.

Btw, those comparison shots look to have different DoF/focal planes; the one on top has sharper leaves but more blurry grass, the one on the bottom has the opposite. Makes it very difficult to make a neutral comparison.


----------



## InVasMani (Oct 29, 2020)

Vya Domus said:


> ｃｒｉｓｐ


Even outside of the reflections, at full image size you can notice it a bit in some areas, but the reflections remind me of how heat and smoke from a high-intensity fire can distort your vision, or of an oil painting; it's pretty ugly in that area. I can see that changing in the future as RT hardware and tensor cores improve and DLSS gets refined more, but it needs work before that happens. The way DLSS works, lower-quality RT settings are going to look ugly like that with it, because there just isn't enough pixel density for the reconstruction to be beneficial in the first place. I think those areas will improve a lot in the next generation or two of DLSS and tensor cores, though.


----------



## medi01 (Oct 29, 2020)

Valantar said:


> hating


I happen to dislike paid shills.



Valantar said:


> Can you explain how they in _any _way whatsoever are "shills" for anyone?


You mean, when was it that they got caught?
Easy.
The Doom Eternal demo with the Ampere super-exclusive-totallynotforshilling preview.



Valantar said:


> Btw, those comparison shots look to have different DoF/focal planes; the one on top has sharper leaves but more blurry gras


The stone that is at the same focal distance as the bush is sharper in the pic below.
Seriously, do you really want to venture into "DLSS doesn't make things look worse" territory?


----------



## Vya Domus (Oct 29, 2020)

Valantar said:


> What is with you and hating on Digital Foundry? Can you explain how they in _any _way whatsoever are "shills" for anyone? They do in-depth technical reviews with a much higher focus on image quality and rendering quality than pretty much anyone else. Are they enthusiastic about new features that have so far been Nvidia exclusive? Absolutely, but that proves nothing more than that they find these features interesting.
> 
> Btw, those comparison shots look to have different DoF/focal planes; the one on top has sharper leaves but more blurry grass, the one on the bottom has the opposite. Makes it very difficult to make a neutral comparison.


I can't say I hate them or think that they are shills, but they have a history of praising every single DLSS and DXR implementation, even though we know very well some have been less than stellar.


----------



## Valantar (Oct 29, 2020)

I keep being surprised by how people are comparing image quality in games - especially fast-paced ones - by cropping out tiny areas, often from the corners of the screen or similar places where the player is unlikely to focus for much of the time, and often upscaling them just to shout "look, this one is a bit less sharp!"

Put simply: if you have to do that, you also need a very good screen and to be actively looking for problems to spot them while playing. There is obviously a threshold below which these things do become visible, but for now, while I don't think DLSS is the second coming of Raptorjesus like some people do, it's a damn impressive technology nonetheless. The only major drawback is how complex its implementation is and thus how limited adoption is bound to be, as well as it being proprietary.

Heck, when I get a new monitor I'll be going for something like a 32" 2160p panel (need the sharpness for work), but I'll likely play most games at 1440p render resolution and let bog-standard GPU upscaling handle the rest. I'll likely not be able to tell much of a difference in-game, and the performance increase will be massive.
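The headroom from that is easy to ballpark from pixel counts alone (a rough upper bound, since real-world scaling depends on how pixel-shader-bound a given game is):

```python
# Fraction of native-4K pixels actually shaded when rendering at 1440p.
native_4k = 3840 * 2160   # 8,294,400 pixels
qhd = 2560 * 1440         # 3,686,400 pixels

print(qhd / native_4k)    # 0.444...: roughly 44% of the pixel work per frame
```

In other words, rendering at 1440p shades only 4/9 of the pixels of native 2160p, which is where most of that "massive" performance increase comes from.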


----------



## Vya Domus (Oct 29, 2020)

Valantar said:


> I keep being surprised by how people are comparing image quality in games - especially fast-paced ones - by cropping out tiny areas, often from the corners of the screen or similar places where the player is unlikely to focus for much of the time, and often upscaling them just to shout "look, this one is a bit less sharp!"



If the cropped-out area looks worse, isn't it a logical conclusion that the full image looks worse as well? The problem is that even a tiny bit of blur gets amplified by post-processing, and by the fact that most displays have lower effective resolution when the image is moving.


----------



## Arc1t3ct (Oct 29, 2020)

Cool! I can't wait for W1zzard's review where he demonstrates how inferior these cards are compared to NVIDIA.


----------



## Valantar (Oct 29, 2020)

medi01 said:


> I happen to dislike paid shills.
> 
> 
> You mean, when was it that they got caught?
> ...


... so getting exclusive access to hardware for a limited preview makes them shills? They were very explicit about the limitations placed on them for that coverage, and that it in no way amounted to a full review. Heck, it was repeated several times. Getting an exclusive preview of a new technology is a great scoop for any tech journalist, and they would all do it if asked, possibly with the exception of GN. This does tell us that they don't have _extremely strict_ editorial standards, but it in no way qualifies them as paid shills. That you go that far in your judgement just makes it clear that you're seeing black and white where there are tons of shades of gray. That's your problem, not DF's.


medi01 said:


> Stone that is at the same focal distance as the bush is sharper on below pic.


Like I said: the below pic has sharper grass (and the stone that's sitting in the grass), the top pic has more detailed leaves. I have no idea which of the two is DLSS, so I'm not arguing for or against either, just that your example is rather poor. You could say the bottom pic has slightly better sharpness overall, but it's essentially impossible to judge as it's clear that the focal points of the two shots are different. As for the stone being "at the same focal distance" - what? Do you mean the same distance from the camera? That's not the same thing ...


----------



## ratirt (Oct 29, 2020)

FranciscoCL said:


> XBoX One X works with HDMI VRR just like Freesync, even with LFC.
> 
> 
> 
> ...


It works, but it is not the same. FreeSync offers more, like Low Framerate Compensation (FreeSync Premium); VRR doesn't have that.
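For reference, the mechanism behind LFC is simple enough to sketch (illustrative only; the 40-120 Hz window and the function name here are made-up examples, not any specific TV's actual VRR range):

```python
def lfc_refresh(fps, vrr_min=40, vrr_max=120):
    """When the game's frame rate falls below the display's VRR floor,
    LFC repeats each frame an integer number of times so the scan-out
    rate lands back inside the [vrr_min, vrr_max] window."""
    if fps >= vrr_min:
        return fps, 1  # inside the window: one scan-out per frame
    multiplier = 1
    while fps * multiplier < vrr_min and fps * (multiplier + 1) <= vrr_max:
        multiplier += 1
    return fps * multiplier, multiplier

print(lfc_refresh(25))  # (50, 2): each 25 fps frame is scanned out twice at 50 Hz
print(lfc_refresh(90))  # (90, 1): no compensation needed
```

This only works at all when the display's VRR window is wide enough that some multiple of a low frame rate fits inside it, which is why a wide min-max range matters.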


----------



## Valantar (Oct 29, 2020)

Vya Domus said:


> If the cropped out area looks worse isn't it a logical conclusion that the full image looks worse as well ? The problem is even a tiny bit of blur gets amplified by post processing and the fact that most displays have lower effective resolution when the image is moving.


Not necessarily: most game cameras apply some distortion towards the edges of the image (they emulate real lenses to some degree, and a 2D projection of a more or less spherical captured view makes distortion unavoidable), meaning the centre of the image is likely to be a tad sharper. Your point about displays having lower effective resolution also goes _against_ this mattering: if your display can't show the difference, it won't matter whether the source image is sharper. It's obvious that post-processing exacerbates blur, but again the question becomes whether or not it's noticeable.


----------



## Razbojnik (Oct 29, 2020)

ZoneDymo said:


> performance is good, price not so much, sigh.
> 
> like take the RX6800, beats the 2080ti so it will be a bit faster then an RTX3070, but it also costs 580 dollars vs 500 dollar for the RTX3070, so not really a clear winner in the "What to buy" discussion.



And it has 16 GB of VRAM compared to the 3070's 8... I see nothing but a clear winner here.


----------



## Valantar (Oct 29, 2020)

ratirt said:


> It works but it is not the same. FreeSync offers more like Low Framerate Compensation (FreeSync premium). VRR doesn't have that.


... that video is _explicitly _about how, on the LG CX, you _don't_ need FreeSync Premium activated for LFC to work. Did you even read the title, let alone watch it?


----------



## Razbojnik (Oct 29, 2020)

Unless Nvidia comes out with a Super edition that beats AMD in both RAM capacity and performance, I don't see it winning this one. 300 watts for the same or better performance than Ampere... paired with Ryzen 5000, these will be overclocking beasts. It only goes to show how much Ampere is underperforming, and how much it was actually a calculated cut-down from what it should have been.


----------



## TumbleGeorge (Oct 29, 2020)

LoL, I'm in trouble! When AMD released the Radeon VII, it was the only model with 16 GB and that was surely marketing. But when they offer the entire mid-to-high range with 12 GB and 16 GB of VRAM, that is not just marketing!


----------



## ratirt (Oct 29, 2020)

Valantar said:


> ... that video is _explicitly _about how, on the LG CX, you _don't_ need FreeSync Premium activated for LFC to work. Did you even read the title, let alone watch it?


Apparently I need to catch up, since I've read different news saying it can't do it.


----------



## BoboOOZ (Oct 29, 2020)

Valantar said:


> What is with you and hating on Digital Foundry? Can you explain how they in _any _way whatsoever are "shills" for anyone? They do in-depth technical reviews with a much higher focus on image quality and rendering quality than pretty much anyone else. Are they enthusiastic about new features that have so far been Nvidia exclusive? Absolutely, but that proves nothing more than that they find these features interesting.


I'll bite for discussion's sake, and because their paid review after the Nvidia launch really upset me.

They are smart people who understand very well how what they are saying might influence the buying decisions of their audience.
Look at this video:








Instead of having a disclaimer that the exact settings for the comparisons were given by Nvidia, and that with any other combination of settings (resolution, RTX/DLSS, detail) the results may be quite different, he's simply hyping up the card most of the time, talking about feelings and stuff. That's nowhere near the moral standards of Hardware Unboxed, Gamers Nexus or TPU.
Hate is a very strong word, but I dislike the fact that they, like many other review sites and YouTubers, are becoming influencers instead of reviewers.


----------



## medi01 (Oct 29, 2020)

Valantar said:


> Put simply: if you have to do that, you also need a very good screen


No.

You can "feel" things before you can easily point them out on screen, for starters.

In the Ars Technica "quickly moving mouse" example, the entire screen is blurred; no zoom-in is required. Zooming just helps to focus attention on a particular part of the screen.





Valantar said:


> ... so getting exlusive access to hardware for a limited preview makes them shills?


That made them suspect.
The actual misleading video confirmed that they are indeed shills.





Valantar said:


> Like I said: the below pic has sharper grass


The picture below has a blurred-out mess, of the kind you can also see in the other examples.


----------



## Valantar (Oct 29, 2020)

BoboOOZ said:


> I'll bite for discussion's sake and because their paid review after the Nvidia launch has really upset me.
> 
> They are smart people who understand very well how what they are saying might influence the buying decisions of their audience.
> Look at this video:
> ...


I agree that that video was a bit dubious, but no disclaimer? Hm. I think I read their piece rather than watching the video, but their written pieces are typically identical to their on-screen scripts. That one contains this (second) paragraph:


> Full disclosure: I can bring you the results of key tests today, but there are caveats in place. Nvidia has selected the games covered, for starters, and specified 4K resolution to remove the CPU completely from the test results and in all cases, settings were maxed as much as they could be. The games in question are Doom Eternal, Control, Shadow of the Tomb Raider, Battlefield 5, Borderlands 3 and Quake 2 RTX. Secondly, frame-time and frame-rate metrics are reserved for the reviews cycle, meaning our tests were limited to comparisons with RTX 2080 (its last-gen equivalent in both naming and price) and differences had to be expressed in percentage terms, meaning some slight re-engineering of our performance visualisation tools. This work actually proved valuable and the new visualisations will be used elsewhere in our reviews - differences in GPU power do tend to be expressed as percentages, after all.


----------



## Max(IT) (Oct 29, 2020)

Valantar said:


> Uh ... only if you actually care about that kind of stuff. Which the average gamer does not whatsoever. This is quite limited enthusiast knowledge. Most gamers' level of knowledge about technical features is more or less on the level of "does the game run on my hardware, y/n?".



You are heavily underestimating young people here (and to be clear, I'm 48 years old). We are speaking about computer gamers here, and many of them know about their hardware.
By the way, my entire point was that speaking about "14-year-old players" is silly...



> No cherry picking here. The _only_ reason not to trust AMD's data here is that they themselves did the testing. The games used, settings used and hardware setups used are all publicized in the slide deck, if you bothered to read. There's no indication that the games they picked are disproportionately favoring AMD GPUs - shown at least partly by the relatively broad span of results in their testing, including games where they lose. Could you imagine Nvidia showing a performance graph from a game where they weren't the fastest? Yeah, there's a difference of attitude here, and AMD marketing for the past couple of years (since Raja Koduri left RTG and Dr. Su took over control there, at least) has been impressively trustworthy for first-party benchmarks.



I didn't even write about _not trusting_ them.
The level of aggressiveness in this thread from AMD supporters is staggering.
I just said we need independent reviews to actually understand the real performance of the hardware, because the data shown ARE cherry-picked (the whole situation is cherry-picked, being a marketing presentation) and far from complete.
I didn't say they were lying. They are just showing one part of the story, and that is perfectly understandable. To know how much better the 6800 is than the 3070 on an average system (one without a Zen 3), we need the reviews.



> As for Fortnite being full of 14-year-olds: that's a given. No, the majority aren't 14 - that would be _really weird_. But there are _tons_ of kids playing it, and likely the majority of players are quite young. Add to that the broad popularity of the game, and it's safe to assume that the average Fortnite player has _absolutely no idea_ what DLSS is. Heck, there are likely more people playing Fortnite on consoles and phones than people in the whole world who know what DLSS is.



Being full of 14-year-olds doesn't mean the majority are 14, as claimed above.
That was my point.
Clearly we are speaking just about PC players here, because people playing Fortnite on a PlayStation, a Nintendo or a smartphone could be totally unaware of PC technologies.


----------



## BoboOOZ (Oct 29, 2020)

Valantar said:


> I agree that that video was a bit dubious, but no disclaimer? Hm. I think I read their piece rathertthan watched the video, but their written pieces are typically identical to their on-screen scripts. That contains this (second) paragraph:


Ahh, yeah, watch the first 5 minutes of the video; the tone and wording are quite different, imho.


----------



## Max(IT) (Oct 29, 2020)

medi01 said:


> Videos, with known paid shills like DF hyping it.
> And then there are eyes to see and brains to use, god forbid.
> And even some articles to be found, with, god forbid, actual pics:
> 
> ...


According to your specs list, you DON'T have any hardware that supports DLSS.
I do.
I have seen it with my own eyes in several titles. DLSS + RT are something I like having on my GPU, not the blurry mess you are trying to demonstrate.


----------



## BoboOOZ (Oct 29, 2020)

Max(IT) said:


> being full of 14 yo doesn't mean the majority are 14 yo as claimed above.
> That was my point.
> Clearly we are speaking just about PC players here, because people playing Fortnite on a Playstation , a Nintendo or a smartphone clearly could be totally unaware of PC technologies.


I'm pretty sure most people playing FN are kids.

I play with my kid and his friends all the time, and I can see by the nicknames that most of our adversaries are very young, too. Statistics are skewed, because my kid and many of his friends play on their parents' account.

Anyway, the discussion about RTX in FN is pointless, the game is very playable on old hardware and anyone playing it remotely competitively puts shadows and post effects on "LOW", otherwise they cannot aim accurately while in the thick of the battle.


----------



## SLK (Oct 29, 2020)

BoboOOZ said:


> I'm pretty sure most people playing FN are kids.
> 
> I play with my kid and his friends all the time, and I can see by the nicknames that most of our adversaries are very young, too. Statistics are skewed, because my kid and many of his friends play on their parents' account.
> 
> Anyway, the discussion about RTX in FN is pointless, the game is very playable on old hardware and anyone playing it remotely competitively puts shadows and post effects on "LOW", otherwise they cannot aim accurately while in the thick of the battle.



Yeah, FN has become my bonding time with my kids as well due to Covid. My son plays on a 1050 Ti, and when he comes over to my PC (RTX 2080), he asks why his graphics look so much worse. I recently told him that he can have my PC soon, as I am getting a 3080.


----------



## Valantar (Oct 29, 2020)

Max(IT) said:


> you are heavily underestimating young people here (and to be clear, I'm 48 years old). We are speaking about computer gamers here, and many of them know about their hardware.
> By the way my entire point was that speaking about "14 years old players" is silly...


Sorry, but you are _very_ much overestimating the knowledge of the average PC gamer. As people who frequent this and other forums will no doubt agree, the average non-hardware enthusiast neither knows nor cares about features like this, and if they know anything, it is typically a poorly informed opinion based mostly on marketing and/or a Reddit-based game of telephone where what comes out the other end is rather inaccurate.


Max(IT) said:


> I didn't even write about _not trusting_ them


You said they were cherry-picked benchmarks. That means the benchmarks were picked to make them look good, not to be accurate. That makes them _unreliable_, which means _you can't trust them_. So yes, you did write about not trusting them.


Max(IT) said:


> The level of aggressiveness in this thread by AMD supporters is staggering.
> I just said we need the independent reviews to actually understand the real performance of the hardware, because data showed ARE cherrypicked (the whole situation is cherrypicked being a marketing presentation) and far from being complete.
> I didn't saying they were lying. They are just showing one part of the story, and it is perfectly understandable. To know how much the 6800 is better than 3070 on an average system (one without a Zen 3) we need the review.


I don't mean to come off as aggressive, so sorry about that. But IMO you're using "cherry picked" wrong. It means to pick what is/looks best or most desirable, so that implies that they are leaving out (potentially a lot of) worse-looking results. Going by recent history from AMD product launches (Zen+, Zen 2, RDNA 1, Renoir), their data has been relatively reliable and in line with reviews. Their numbers also include games where they tie or lose to Nvidia, which while obviously not any kind of proof that these aren't best-case numbers, is a strong indication that they're not just picking out results that make them look good. As I said, there is one reason to not trust these numbers: the fact that they weren't produced by a reliable third party. Beyond that, recent history, the selection of games (broad, including titles where they both win and lose), the relatively detailed test setup notes, and the use of standard settings levels rather than weirdly tuned "AMD optimal" settings (see the Vega launch) are all reasons why one could reasonably expect these numbers to be more or less accurate. Of course the use of games without built-in benchmarks means that numbers aren't directly comparable to sites using the same games but different test scenarios, but that doesn't make the numbers unreliable, just not comparable. I am obviously still not for pre-ordering or even taking this at face value, but your outright dismissal is too harsh. I would be very surprised if these numbers (non-SAM, non Rage mode) were more than 5% off any third-party benchmarks.


Max(IT) said:


> being full of 14 yo doesn't mean the majority are 14 yo as claimed above.
> That was my point.
> Clearly we are speaking just about PC players here, because people playing Fortnite on a Playstation , a Nintendo or a smartphone clearly could be totally unaware of PC technologies.


Saying the majority are 14 was obviously an intentional exaggeration, and taking it _that_ literally is a bit too much for me. Besides that, even the average PC gamer knows _very_ little about hardware or software features. Remember, the average PC gamer plays games on a laptop. The biggest group after that uses pre-built desktops. Custom, self-built or built-to-order desktops are a distant third. And even among that group, I would be surprised if the majority knew anything detailed about what DLSS or any comparable feature is - most gamers spend more time playing games than reading about this kind of stuff.


----------



## mechtech (Oct 29, 2020)

I wonder when/if 30, 36, 40CU cards and other cards will be released??


----------



## Valantar (Oct 29, 2020)

mechtech said:


> I wonder when/if 30, 36, 40CU cards and other cards will be released??


I'm expecting at least a mention of further RDNA 2 GPUs at CES - IIRC there's a Lisa Su keynote on the books there. Anything earlier than that would be rather weird, given just how close these announced GPUs are launching to the holiday season ending.


----------



## Max(IT) (Oct 29, 2020)

BoboOOZ said:


> I'm pretty sure most people playing FN are kids.
> 
> I play with my kid and his friends all the time, and I can see by the nicknames that most of our adversaries are very young, too. Statistics are skewed, because my kid and many of his friends play on their parents' account.
> 
> Anyway, the discussion about RTX in FN is pointless, the game is very playable on old hardware and anyone playing it remotely competitively puts shadows and post effects on "LOW", otherwise they cannot aim accurately while in the thick of the battle.



Oh RT on Fortnite is pointless for me too, since I'm not planning to play that game on my PC.
But it still is a popular game, so it is not pointless for others.



mechtech said:


> I wonder when/if 30, 36, 40CU cards and other cards will be released??


most probably early next year.


----------



## Prima.Vera (Oct 29, 2020)

When is the review expected?


----------



## AddSub (Oct 29, 2020)

AMD... they have to get their drivers working properly. I had four Polaris cards (2 x 480, 2 x 580) and one major issue was always software problems. Game-breaking bugs, crashes, black screens, you name it; all of it went away by switching to the green team. Again, AMD gfx gear always looks "great!!!!" on paper, with a crapload of software issues in tow though. They have to fix their drivers, and considering how their Adrenaline suite looks and behaves these days, they are going the wrong way.


...
..
.


----------



## mechtech (Oct 29, 2020)

AddSub said:


> AMD... they have to get their drivers working properly. I had four Polaris cards (2 x 480, 2 x 580) and one major issue was always software problems. Game-breaking bugs, crashes, black screens, you name it; all of it went away by switching to the green team. Again, AMD gfx gear always looks "great!!!!" on paper, with a crapload of software issues in tow though. They have to fix their drivers, and considering how their Adrenaline suite looks and behaves these days, they are going the wrong way.
> 
> 
> ...
> ...



I have an RX 480 and never had any issues. Then again, I usually don't update the driver, and have left it for over a year. I kind of agree about the Adrenaline suite; I preferred the older version. Some of the older versions had the option to install only the driver and not the suite. Many times I just did that to avoid all those extras I never used or needed. They need to do that with Adrenaline: have a custom install with a good selection to pick from, such as driver only. The only issue is the occasional Wattman crash, which doesn't really seem to do anything. I also found Windows 1709 and 1809 to be pretty good stability-wise, but they are EOL now.


----------



## Valantar (Oct 29, 2020)

mechtech said:


> I have an RX 480 and never had any issues. Then again, I usually don't update the driver, and have left it for over a year. I kind of agree about the Adrenaline suite; I preferred the older version. Some of the older versions had the option to install only the driver and not the suite. Many times I just did that to avoid all those extras I never used or needed. They need to do that with Adrenaline: have a custom install with a good selection to pick from, such as driver only. The only issue is the occasional Wattman crash, which doesn't really seem to do anything. I also found Windows 1709 and 1809 to be pretty good stability-wise, but they are EOL now.


Driver issues seem to be extremely variable in who gets them. I never had any issues (that I didn't cause myself with aggressive undervolting or the like) with my Fury X and RX 570. But some people keep having issues across many different cards. As GN said in their video covering the launch: they have been able to recreate some, but it took some effort, so it's not like they're extremely common. Too common, yes, but not a deal-breaker unless you have one of those systems that just doesn't seem to like AMD's drivers.


----------



## tfdsaf (Oct 29, 2020)

SLK said:


> Marketing rule number 1: Always show yr best
> 
> Yesterday's Radeon presentation clearly indicates they have matched Ampere's raster performance. They did not show RT numbers and Super Resolution is something they are working on. If the RT was as good as Ampere, they would have shown numbers. Simple deduction.



ALL of the ray-traced games so far use Nvidia's proprietary ray tracing methods. They are based on DXR of course, but completely optimized for Nvidia, so obviously AMD hardware will either not be able to run that ray tracing, or it's going to have worse performance. This doesn't matter much though, as only a handful of games have ray tracing support, and literally only 1 or 2 are actually decent in their implementation, as in they look reasonably better than established techniques.

AMD is literally going to have the entire console catalog of games, which will be built and optimized for AMD's RDNA 2 implementation of ray tracing.

Personally I think Nvidia pushed ray tracing way too hard; they just needed a "new" feature to put in the marketing, without it actually being ready for practical use. In fact even the Ampere series is crap at rendering rays, and guess what, their next gen will be as well, same with AMD. We are realistically at least 5 years from being able to properly trace rays in real time in games without making it extremely specific and cutting 9/10 corners. Right now ray tracing is literally just a marketing gimmick; it's extremely specific and limited.

If you actually did a fully ray-traced game, with all the caveats of actually tracing rays, and you had trillions of rays in the scenes, it would literally blow up existing GPUs; it's not possible to do. It would render at like 0.1 frames per second.

This is why they have to use cheats to make it work, and why it's only used on 1 specific thing: either shadows, or reflections, or global illumination, etc... never all of them, and even then it's very limited. They limit the rays that get processed, so only the barebones rays get traced.

Again, we could have a 100x better ray tracing implementation in 5 years, with full ray tracing capabilities that doesn't cut as many corners and comes somewhat close to offline rendered ray tracing, instead of something that is essentially a gimmick that tanks your performance by 50%, and that with very specific and very limited tracing. Again, if you actually did better ray tracing, it would literally tank performance completely; you'd be running at less than 1 frame per second.
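For a rough sense of the scale being described, here's a back-of-envelope ray budget. All figures (resolution, samples per pixel, bounce count, frame rate) are illustrative assumptions, not measured data:

```python
# Back-of-envelope ray budget for real-time path tracing.
# Every number here is an illustrative assumption.

width, height = 3840, 2160   # 4K render target
spp = 1                      # samples (primary rays) per pixel
bounces = 4                  # assumed secondary rays per sample
fps = 60                     # target frame rate

# One primary ray plus its bounces, for every pixel, every frame.
rays_per_frame = width * height * spp * (1 + bounces)
rays_per_second = rays_per_frame * fps

print(f"{rays_per_frame / 1e6:.1f} M rays/frame")
print(f"{rays_per_second / 1e9:.2f} G rays/s")
```

Even this modest setup (one sample per pixel, a handful of bounces) lands in the billions of ray evaluations per second, which is why current implementations trace only a subset of effects and lean heavily on denoising.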


----------



## Zach_01 (Oct 29, 2020)

Max(IT) said:


> I was honestly expecting more, especially from the 6800 that was my target. I mean, 16 Gb of VRAM are highly unnecessary (10 would have been perfect) and the price, probably because of that amount of VRAM, is $50/60 higher than the sweet spot, and definitely too close to the 6800XT.
> we know nothing about RT performance, so we should wait for the review before draw any conclusion.
> 
> 
> When did they speak about X570 ???


They said it alright...

Copied from another thread:

_"I can see there is a lot of confusion about the new feature AMD is calling "Smart Access Memory" and how it works. My 0.02 on the subject.
According to the presentation, the SAM feature can be enabled only on 500-series boards with a Zen 3 CPU installed. My assumption is that they use PCI-E 4.0 capabilities for this, but I'll get back to that.
The SAM feature has nothing to do with Infinity Cache. IC is used to compensate for the 256-bit bandwidth between the GPU and VRAM. *That's it, end of story*. *And according to AMD this is the equivalent of an 833-bit bus*. Again, this has nothing to do with SAM. IC is in the GPU and works the same way for all systems. They didn't say you have to do anything to "get it" to work. Whether it works with the same effectiveness in all games, we will have to see.

Smart Access Memory
*They use SAM to give the CPU access to VRAM and probably speed things up a little on the CPU side*. That's it. They said it in the presentation, and they showed it also...
And they probably can get this done because of PCI-E 4.0's speed capability. If true, that's why there's no 400-series support.
*They also said that this feature may be better in the future than it is today, once game developers optimize their games for it.*
I think AMD just made PCI-E 4.0 (on their own platform) more relevant than it was until now!"

Full CPU access to GPU memory:

----------------------------------------------------------------------_

*So who knows better than AMD if the 16GB is necessary or not?*


----------



## Calmmo (Oct 29, 2020)

nope. DXR is M$'s:

Announcing Microsoft DirectX Raytracing! (devblogs.microsoft.com)

"If you just want to see what DirectX Raytracing can do for gaming, check out the videos from Epic, Futuremark and EA, SEED. To learn about the magic behind the curtain, keep reading."


----------



## utilizedamplitude (Oct 29, 2020)

nikoya said:


> so now I have to hate LG for not implementing Freesync on C9 and B9 Oleds.



I have a c9 with a Vega64 and Freesync works just fine. All that is required is to use CRU and configure the tv to report as Freesync compatible.


----------



## lexluthermiester (Oct 29, 2020)

I know I'm late to the party so I'm repeating what's already been said, oh well (TLDR), but HOT DAMN! If those numbers are real, AMD has got the goods to make life interesting (perhaps even difficult) for Nvidia. AMD isn't skimping on the VRAM either. This looks like it's going to be AMD for this round of GPU king-of-the-hill!

This is a very good time for the consumer in the PC industry!!

What I find most interesting is that the 6900XT might be even better than the 3090 at 8k gaming. Really looking forward to more tests with 8k panels like the one LTT did, but more fleshed out and expansive.


----------



## InVasMani (Oct 29, 2020)

mechtech said:


> I wonder when/if 30, 36, 40CU cards and other cards will be released??


52CU and 44CU is the next logical step; based on what's already released, AMD seems to disable 8 CUs at a time. I can see them doing a 10GB or 14GB capacity device. It would be interesting if they utilized GDDR6 and GDDR6X together, alongside variable rate shading: use the GDDR6 when you scale the scene image quality back, and the GDDR6X at the higher quality, giving mixed performance at a better price. I would think they'd consider reducing the memory bus width to 128-bit or 192-bit for SKUs with those CU counts, though, if paired with Infinity Cache. It's interesting to think about how Infinity Cache in a CF setup would impact latency; I'd expect less micro stutter. The 99th percentiles will be interesting to look at for RDNA 2 with all the added bandwidth and I/O. I suppose 36 CUs is possible as well by extension, but idk how the profit margins would be; the low end of the market is eroding further each generation, not to mention Intel entering the dGPU market will compound that situation. I don't think a 30CU part is likely for RDNA 2; it would end up being 28CU if anything, and that's kind of doubtful unless they wanted it for an APU/mobile part, then perhaps.


----------



## Shatun_Bear (Oct 29, 2020)

These cards paired with the Ryzen 5000 series are the way to go; Nvidia is left out in the cold a little:





AMD Smart Access Memory: Zen 3 + RDNA 2 = Intel, NVIDIA destroyer (www.tweaktown.com)

"AMD's new Smart Memory Access uses the Zen 3-based Ryzen 5000 series + X570 chipset + RDNA 2-based Radeon RX 6000 series together."


----------



## ador250 (Oct 29, 2020)

10K Cuda cores 384bit 35 TFLOPS 350watt, all just to match with 256bit 23TFLOPS 300watt 6900XT, LawL. Ampere is a failure architecture. Let's hope Hopper will change something for Nvidia. Until then RIP Ampere.


----------



## SLK (Oct 29, 2020)

tfdsaf said:


> ALL of the ray-traced games so far use Nvidia's proprietary ray tracing methods. They are based on DXR of course, but completely optimized for Nvidia, so obviously AMD hardware will either not be able to run that ray tracing, or it's going to have worse performance. This doesn't matter much though, as only a handful of games have ray tracing support, and literally only 1 or 2 are actually decent in their implementation, as in they look reasonably better than established techniques.
> 
> AMD is literally going to have the entire console catalog of games, which will be built and optimized for AMD's RDNA 2 implementation of ray tracing.
> 
> ...



True, full ray tracing, aka path tracing, is too expensive right now, hence the hybrid rendering and tools like DLSS to make it feasible. However, even in its current form, it looks so good. I have played Control, Metro Exodus and Minecraft and it just looks beautiful. In slow-moving games, you can really experience the glory of ray tracing, and it's hard to go back to the normal version after that. In fast-paced titles though, like Battlefield or Fortnite, I don't really notice it.


----------



## Zach_01 (Oct 29, 2020)

ador250 said:


> 10K Cuda cores 384bit 35 TFLOPS 350watt, all just to match with 256bit 23TFLOPS 300watt 6900XT, LawL. Ampere is a failure architecture. Let's hope Hopper will change something for Nvidia. Until then RIP Ampere.


To be honest, those AMD charts for the 6900XT vs 3090 are with the AMD GPU overclocked and Smart Access Memory on. So that's not 300 W, for starters. I guess it wouldn't be 350+ W, but still not 300 W.

I'm not saying that what AMD has accomplished is not impressive. It is more than just impressive. And that SAM feature, with a 5000-series CPU + 500-series board, might change the game.

And to clarify something: SAM will be available on all 500-series boards, not only X570. They use the PCI-E 4.0 interconnect between CPU and GPU for the former to have VRAM access. All 500-series boards have PCI-E 4.0 speed for GPUs.


----------



## Metroid (Oct 29, 2020)

Max(IT) said:


> I would have preferred 10 or 12 Gb VRAM for $50 less.
> 16 GB for the intended target (mostly 1440P) is totally useless.



Yeah, but Nvidia's 16 GB card will likely be a 3080 Ti, and that is targeted at 4K; the 16 and 20 GB 3080 variants were cancelled.


----------



## Zach_01 (Oct 29, 2020)

They need more VRAM for SmartAccessMemory


----------



## Metroid (Oct 29, 2020)

ador250 said:


> 10K Cuda cores 384bit 35 TFLOPS 350watt, all just to match with 256bit 23TFLOPS 300watt 6900XT, LawL. Ampere is a failure architecture. Let's hope Hopper will change something for Nvidia. Until then RIP Ampere.



It's a failure in that they have not used cache like AMD did here. Like Lisa said, they used cache like they did with Zen 3. Imagine if Nvidia uses cache; they would likely get the upper hand, but that takes time and effort to develop such a product, and if it happens it will be in 2 years or so. So AMD has 2 years to think about how to get closer to Nvidia on ray tracing, and Nvidia has 2 years to figure out how to implement cache on their GPUs like AMD did.


----------



## Zach_01 (Oct 29, 2020)

Metroid said:


> It's a failure in that they have not used cache like AMD did here. Like Lisa said, they used cache like they did with Zen 3. Imagine if Nvidia uses cache; they would likely get the upper hand, but that takes time and effort to develop such a product, and if it happens it will be in 2 years or so. So AMD has 2 years to think about how to get closer to Nvidia on ray tracing, and Nvidia has 2 years to figure out how to implement cache on their GPUs like AMD did.


That would require a complete GPU redesign. They already occupy a large portion of the die with Tensor and RT cores. A very different memory controller would be needed as well.
The path that Nvidia has chosen does not allow them to implement such a cache. And I really doubt that they will abandon Tensor and RT cores in the future.


----------



## Metroid (Oct 29, 2020)

Zach_01 said:


> That would require a complete GPU redesign. They already occupy a large portion of the die with Tensor and RT cores. A very different memory controller would be needed as well.
> The path that Nvidia has chosen does not allow them to implement such a cache. And I really doubt that they will abandon Tensor and RT cores in the future.



So if that is true, then they have to find a way to outdo AMD's cache. Like I said before, if AMD pushes to 384-bit GDDR6 like Nvidia, then Nvidia is doomed; 256-bit is already beating Nvidia's best.


----------



## Zach_01 (Oct 29, 2020)

If we believe AMD's numbers, that structure (256-bit + 128MB) is giving them the equivalent (effective) of an 833-bit GDDR6 bus.
The thing is that we don't know if increasing the actual bandwidth to 320- or 384-bit is going to scale well. You have to have stronger cores to utilize extra (raw or effective) bandwidth.

EDIT PS:
And they'd have to redesign the memory controller for a wider bus = expensive and larger die.
It's a no go...
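The "effective bus" claim can be sketched as a simple weighted average of cache and VRAM bandwidth. The hit rate and cache bandwidth below are illustrative assumptions (not AMD's published figures), so the result won't reproduce AMD's exact 833-bit number; the point is the mechanism:

```python
# Rough model of "effective bandwidth" from a large on-die cache.
# hit_rate and cache_bw are illustrative assumptions, not AMD figures.

bus_width_bits = 256
gddr6_gbps_per_pin = 16                              # assumed GDDR6 data rate
vram_bw = bus_width_bits * gddr6_gbps_per_pin / 8    # raw VRAM bandwidth, GB/s

cache_bw = 1987.0    # assumed on-die cache bandwidth, GB/s
hit_rate = 0.58      # assumed fraction of accesses served from cache

# Weighted average: cache hits go at cache speed, misses at VRAM speed.
effective_bw = hit_rate * cache_bw + (1 - hit_rate) * vram_bw

# Express the blended figure as an "equivalent" GDDR6 bus width.
equivalent_bus_bits = effective_bw * 8 / gddr6_gbps_per_pin

print(f"raw VRAM bandwidth:   {vram_bw:.0f} GB/s")
print(f"effective bandwidth:  {effective_bw:.0f} GB/s")
print(f"equivalent GDDR6 bus: ~{equivalent_bus_bits:.0f}-bit")
```

A higher assumed hit rate pushes the equivalent bus width up quickly, which is presumably how marketing arrives at figures well past the physical 256-bit; it also shows why the benefit depends on how cache-friendly a given game's working set is.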


----------



## hero1 (Oct 29, 2020)

ShurikN said:


> As far as I remember all the leaks and rumors, there is not going to be an AIB 6900XT



For real? That'll be insane if they don't have AIBs involved. Imagine the amount of money they'll make when (if) the reviews back up their performance claims.


----------



## Zach_01 (Oct 29, 2020)

AIBs on the 6900XT would mean $1100-1200 prices, if not more. Maybe AMD doesn't want that.
But then again... 6800XT AIB cards could match the 6900XT for less ($700~800+).

It’s complicated...


----------



## BoboOOZ (Oct 29, 2020)

Metroid said:


> So if that is true then they have to find a way to outdo cache from amd, like i said before if amd pushes to 384 bit gddr6 like nvidia then nvidia is doomed, 256 bit is beating nvidia best already.


I think it's time to rein in the hype train a bit. First off, we have no idea how this Infinity Cache scales up or down, if you just assume a linear scaling, you're probably wrong.

Second, remember Fermi: Nvidia put out a first generation of cards which were horrible, and then they iterated on the same node just 6 months later and fixed most of the problems.

The theoretical bandwidth shown on AMD's slides is just as theoretical as Ampere TFLOPS.

Let's not get ahead of ourselves, AMD did well in this skirmish, the war is far from over.


----------



## Metroid (Oct 29, 2020)

BoboOOZ said:


> I think it's time to rein in the hype train a bit. First off, we have no idea how this Infinity Cache scales up or down, if you just assume a linear scaling, you're probably wrong.
> 
> Second, remember Fermi: Nvidia put out a first generation of cards which were horrible, and then they iterated on the same node just 6 months later and fixed most of the problems.
> 
> ...



In my view, AMD could have beaten Nvidia; they did not want to. I guess they are doing what they did to Intel: Zen 2 = competitive with Intel, Zen 3 = beat Intel. I guess this time will be similar: 6xxx = competitive, 7xxx = beat Nvidia.

Before this, I said do not underestimate AMD; AMD has new management. But to tell you the truth, I myself had not thought AMD could be competitive with Nvidia this time; I guessed 30% less performance than the 3080. I was wrong.


----------



## BoboOOZ (Oct 29, 2020)

Metroid said:


> In my view, AMD could have beaten Nvidia; they did not want to. I guess they are doing what they did to Intel: Zen 2 = competitive with Intel, Zen 3 = beat Intel. I guess this time will be similar: 6xxx = competitive, 7xxx = beat Nvidia.


Only Nvidia is not Intel. Nvidia is a fast-responding company, capable of making decisions in days and implementing them in weeks or months. They have tons of cash (not cache), good management, loads of good engineers, and excellent marketing and mindshare. Intel only had tons of cash.
Edit: ... and mindshare, tbh, which they haven't completely eroded yet, at least in some markets.


----------



## Zach_01 (Oct 29, 2020)

nVidia is not exactly in the position that Intel is. Sure they made some apparently dumb decisions but they have the resources to come back soon. And probably sooner than RDNA3.
The fact that RDNA3 is coming in 2 years gives nVidia room to respond.


----------



## Metroid (Oct 29, 2020)

Well, we have to wait for reviews to confirm what AMD showed. It's hard to believe even when you see it; AMD was so far behind that, if it's all true, then we have to start believing in miracles too, if you don't already.


----------



## InVasMani (Oct 29, 2020)

I'm a bit perplexed at how Smart Access Memory works in comparison to how it's always worked. What's the key difference between the two, with like a flow chart? What's being done differently? Doesn't the CPU always have access to VRAM anyway!? I imagine it's bypassing some step in the chain for quicker access than how it's been handled in the past, but that's the part I'm curious about. I mean, I can access a GPU's VRAM now, and the CPU and system memory obviously play some role in the process. The mere fact that VRAM performance slows down around the point where the L2 cache is saturated on my CPU seems to indicate the CPU design plays a role, though it seems to bottleneck on system memory performance along with the CPU L2 cache and core count (not thread count), which adds to the overall combined L2 cache. You see a huge regression in performance beyond the theoretical limits of the L2 cache; it seems to peak at that point (it's 4-way on my CPU), slows a bit up to 2MB file sizes, then drops off quite rapidly after that point. If you disable a physical core, the bandwidth regresses as well, so the combined L2 cache impacts it from what I've seen.



ador250 said:


> 10K Cuda cores 384bit 35 TFLOPS 350watt, all just to match with 256bit 23TFLOPS 300watt 6900XT, LawL. Ampere is a failure architecture. Let's hope Hopper will change something for Nvidia. Until then RIP Ampere.


Yeah, I'd definitely want the Radeon in mGPU over the Nvidia in the same scenario, 600 W versus 700 W, not to mention the Infinity Cache could potentially have a real beneficial latency impact in that scenario as well. I'm curious if the lower-end models will have CF support or not; I didn't see any real mention of CF tech for RDNA 2, but they had a lot of other things to cover. I think a 128-bit card with fewer CUs (44/52) and the same Infinity Cache could potentially be even better: a lower overall TDP, perhaps the same VRAM capacity, but overall maybe quicker than the 6800XT at a similar price. That would be hugely popular and widely successful. I think a 44CU part of that nature would probably be enough to beat the 6800XT slightly and could probably cost less, plus you could upgrade to that type of performance. It might not win strictly on TDP; then again, maybe it's close, if AMD is pushing the clock frequency rather steeply and efficiency is going out the window as a byproduct. Now I wonder if the Infinity Cache in CrossFire could be split 50/50, with 64MB on each GPU that the CPU can access, and the leftover 64MB on each shared between the GPUs, reducing the inter-GPU latency and the bandwidth to and from the CPU. The other interesting part: maybe it can only push 128MB now, but once a newer compatible CPU launches, it could push 256MB of smart cache to the CPU, with 128MB from each GPU in CrossFire!? Really interesting stuff to explore.


----------



## lexluthermiester (Oct 29, 2020)

Metroid said:


> Well, we have to wait for reviews to confirm what AMD showed.


While true, benchmarks have been pretty much on the mark with their stated claims for Ryzen. I see no reason they would over-exaggerate these stats.


----------



## Valantar (Oct 29, 2020)

InVasMani said:


> I'm a bit perplexed at how Smart Access Memory works in comparison to how it's always worked. What's the key difference between the two, with like a flow chart? What's being done differently? Doesn't the CPU always have access to VRAM anyway!? I imagine it's bypassing some step in the chain for quicker access than how it's been handled in the past, but that's the part I'm curious about. I mean, I can access a GPU's VRAM now, and the CPU and system memory obviously play some role in the process. The mere fact that VRAM performance slows down around the point where the L2 cache is saturated on my CPU seems to indicate the CPU design plays a role, though it seems to bottleneck on system memory performance along with the CPU L2 cache and core count (not thread count), which adds to the overall combined L2 cache. You see a huge regression in performance beyond the theoretical limits of the L2 cache; it seems to peak at that point (it's 4-way on my CPU), slows a bit up to 2MB file sizes, then drops off quite rapidly after that point. If you disable a physical core, the bandwidth regresses as well, so the combined L2 cache impacts it from what I've seen.


CPUs only have access to RAM on PCIe devices in 256MB chunks at a time. SAM gives the CPU direct access to the entire VRAM at any time.
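The difference can be put in rough numbers. A small sketch, using illustrative sizes (256 MiB is the conventional PCIe BAR aperture; resizable BAR, which AMD brands Smart Access Memory, maps the whole VRAM):

```python
# Sketch of the BAR-window difference described above. Sizes are
# illustrative: a conventional 256 MiB PCIe BAR aperture vs. a
# resizable BAR that exposes the card's full VRAM to the CPU at once.

MIB = 1024 * 1024
vram = 16 * 1024 * MIB        # 16 GiB card
legacy_window = 256 * MIB     # classic fixed BAR aperture

# Without resizable BAR, the CPU reaches VRAM through one small movable
# window, so covering all of it means re-pointing the aperture repeatedly:
remaps_needed = vram // legacy_window
print(f"aperture moves to touch all VRAM: {remaps_needed}")  # 64

# With resizable BAR, the whole VRAM is mapped in one shot:
print(f"aperture moves with SAM: {vram // vram}")  # 1
```

The win isn't raw bandwidth so much as avoiding all that window management whenever the CPU needs to touch scattered parts of VRAM.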


InVasMani said:


> 52CU and 44CU is the next logical step; based on what's already released, AMD seems to disable 8 CUs at a time. I can see them doing a 10GB or 14GB capacity device. It would be interesting if they utilized GDDR6 and GDDR6X together, alongside variable rate shading: use the GDDR6 when you scale the scene image quality back, and the GDDR6X at the higher quality, giving mixed performance at a better price. I would think they'd consider reducing the memory bus width to 128-bit or 192-bit for SKUs with those CU counts, though, if paired with Infinity Cache. It's interesting to think about how Infinity Cache in a CF setup would impact latency; I'd expect less micro stutter. The 99th percentiles will be interesting to look at for RDNA 2 with all the added bandwidth and I/O. I suppose 36 CUs is possible as well by extension, but idk how the profit margins would be; the low end of the market is eroding further each generation, not to mention Intel entering the dGPU market will compound that situation. I don't think a 30CU part is likely for RDNA 2; it would end up being 28CU if anything, and that's kind of doubtful unless they wanted it for an APU/mobile part, then perhaps.


52 and 44 CUs would be very small steps. Also, AMD likes to do 8CU cuts? Yet the 6800 has 12 fewer CUs than the 6800 XT? Yeah, sorry, that doesn't quite add up. I'm very much hoping Navi 22 has more than 40 CUs, and I'd be very happy if it has 48. Any more than that is quite unlikely IMO. RDNA (1) scaled down to 24 CUs with Navi 14, so I would frankly be surprised if we didn't see RDNA 2 scale down just as far - though hopefully they'll increase the CU count a bit at the low end. There'd be a lot of sales volume in a low-end, low CU count, high-clocking GPU, and margins could be good if they can get by with a 128-bit bus for that part. I would very much welcome a new 75W RDNA2 GPU for slot powered applications!

Combining two different memory technologies like you are suggesting would be a complete and utter nightmare. Either you'd need to spend _a lot_ of time and compute shuffling data back and forth between the two VRAM pools, or you'd need to double the size of each (i.e. instead of a 16GB GPU you'd need a 16+16GB GPU), driving up prices massively. Not to mention the board space requirements - those boards would be massive, expensive, and very power hungry. And then there's all the issues getting this to work - if you're using VRS as a differentiator then parts of the GPU need to be rendering a scene from one VRAM pool with the rest of the GPU rendering _the same scene_ from _a different VRAM pool_, which would either mean waiting massive amounts of time for data to copy over, tanking performance, or keeping data duplicated in two VRAM pools simultaneously, which is both expensive in terms of power and would cause all kinds of issues with two different render passes and VRAM pools each informing new data being loaded to both pools at the same time. As I said: a complete and utter nightmare. Not to mention that one of the main points of Infinity Cache is to lower VRAM bandwidth needs. Adding something like this on top makes no sense.

 I would expect narrower buses for lower end GPUs, though the IC will likely also shrink due to the sheer die area requirements of 128MB of SRAM. I'm hoping for 96MB of IC and a 256-bit or 192-bit bus for the next cards down. 128 won't be doable unless they keep the full size cache, and even then that sounds anemic for a GPU at that performance level (RTX 2080-2070-ish).

From AMD's utter lack of mentioning it, I'm guessing CrossFire is just as dead now as it was for RDNA1, with the only support being in major benchmarks.


----------



## Zach_01 (Oct 29, 2020)

Metroid said:


> Well, we have to wait for reviews to confirm what AMD showed. It's hard to believe even when you see it; AMD was so far behind that, if it's all true, then we have to start believing in miracles too, if you don't already.


It's about spending money on R&D and doing a lot of engineering...
It's just that a lot of people didn't have faith in AMD because it seemed to be too far behind in the GPU market. But over the last 3-4 years AMD has shown some seriousness about their products, and seems more organized and focused.


----------



## Metroid (Oct 29, 2020)

Zach_01 said:


> It's about spending money on R&D and doing a lot of engineering...
> It's just that a lot of people didn't have faith in AMD because it seemed to be too far behind in the GPU market. But over the last 3-4 years AMD has shown some seriousness about their products, and seems more organized and focused.



Like I said before new management, new ideas, new employees, new objectives, new products and so on.


----------



## InVasMani (Oct 29, 2020)

lexluthermiester said:


> While true, benchmarks have been pretty much on the mark with their stated claims for Ryzen. I see no reason they would over-exaggerate these stats.


You know, something I noticed when I played around with the VRAM RAMDISK software is that read performance seems to follow system memory constraints, while write performance follows the PCIe bus constraints. But you can use NTFS compression when formatting the partition and speed up the bandwidth a lot, and you can go a step further than that: you can compress the contents with CompactGUI 2, using LZX compression at a high level. While I can't benchmark that as easily, the fact that it can be done is interesting to say the least, and could increase bandwidth and capacity further; to the system it's basically a glorified RAMDISK that matches system memory read performance, with write speeds matching PCIe bandwidth. The other neat part: when I saw Tom's Hardware test Crysis running from VRAM, it performed a bit better on the minimum frame rates at 4K than NVMe/SSD/RAMDISK; the system RAMDISK was worst. I think that's actually expected, because a system RAMDISK eats into system memory bandwidth pulling double duty, while the VRAM is often sitting around waiting for content to populate it in the first place, and essentially works like a PCIe 4.0 x16 RAMDISK sort of device, which is technically faster than NVMe and less complex than a quad M.2 PCIe 4.0 x16 setup would be. The other aspect: Tom's Hardware tested it with PCIe 3.0 NVMe and GPUs. I can't tell if that was within the margin of error or not, but if repeatable it's a good perk, and one might yield even better results than what was tested.



Zach_01 said:


> It’s spending money on R&D and doing a lot of engineering...
> Just a lot of people didn’t have faith in AMD because it seemed that it was too far behind in the GPU market. But the last 3-4 years AMD has shown some seriousness about their products, and seem more organized and focused.


 This is something I've said often for a long while now: as AMD's financials and budget improve, I anticipate Radeon following suit and making a stronger push in the GPU market against Nvidia. They're pleasantly a little earlier than I figured they'd be at this stage; I thought this kind of rebound would happen next generation, not this one. It just shows how hard AMD has worked to address the performance and efficiency of the Radeon brand and its IP. It's impressive, and it's exactly where the company wants to be headed. It's hard not to get complacent when you've been at the top awhile (look at Intel, and Nvidia for that matter), so it's really about time, but it wouldn't have happened without good management on the part of Lisa Su and the engineering talent she's put to work. It's not a stroke of luck; it's continued effort and progress with efficient management. To be fair, her personal experience is also very relevant to her position. She's the right leader for that company, 100%, and it's a similar story with Jensen Huang, even if you don't like leather jackets.



Valantar said:


> CPUs only have access to RAM on PCIe devices in 256MB chunks at a time. SAM gives the CPU direct access to the entire VRAM at any time.


 That's exactly the kind of inside info I was interested in. Absolute game changer, I'd say.
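For anyone curious what that fixed window means in practice, here's a trivial illustrative sketch: without Resizable BAR the CPU reaches VRAM through a 256 MB aperture, so covering a whole 16 GB card takes many remappings of that window, whereas SAM exposes the whole thing at once.

```python
# Without Resizable BAR / SAM, the CPU maps VRAM through a fixed 256 MB
# aperture; reaching every byte of VRAM means repeatedly re-pointing that
# window. Purely illustrative arithmetic.

APERTURE_MB = 256  # conventional BAR size

def aperture_windows(vram_gb, aperture_mb=APERTURE_MB):
    """Distinct aperture positions needed to cover all of VRAM."""
    vram_mb = vram_gb * 1024
    return (vram_mb + aperture_mb - 1) // aperture_mb  # ceiling division

print(aperture_windows(16))  # 64 window positions for a 16 GB card
```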



Valantar said:


> 52 and 44 CUs would be very small steps. Also, AMD likes to do 8CU cuts?


 Could've sworn it was 80CU/72CU/64CU listed... it appears I got the lowest-end model's CU count wrong, and it's 60. So it seems they can cut 8 or 12 CUs at a time, possibly with finer granularity than that, and I'd expect similarly sized cuts for other SKUs. That said, I still don't really expect cuts below 44 CUs for the desktop. I guess they could have 56CU/52CU/44CU, and maybe stretch it to 40CU as well, who knows, but I doubt it if they retain the Infinity Cache without scaling its size down too. I do see 192-bit and 128-bit being plausible, depending mostly on CU count, which makes the most sense.

I'd like to see what could be done with CF plus the Infinity Cache: better bandwidth and I/O, even split among the GPUs, should translate to less micro stutter. Fewer bus bottleneck complications are always good. It would also be interesting if some CPU cores were introduced on the GPU side and a bit of on-the-fly compression, in the form of LZX or XPRESS 4K/8K/16K, were applied before the Infinity Cache sends data along to the CPU. Even if it could only compress files up to a certain size quickly on the fly, it would be quite useful, and those compression methods can be used with VRAM as well.
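Since RDNA disables CUs in pairs (one WGP at a time), the plausible cut-down counts are easy to enumerate. A throwaway sketch; the mapping to actual future SKUs is pure speculation in this thread:

```python
# Enumerate CU counts reachable from a full die when CUs can only be
# disabled two at a time (one WGP per step). Speculative illustration.

def possible_cu_counts(full_cus, max_disabled_wgps):
    """CU counts from disabling 0..max_disabled_wgps WGPs (2 CUs each)."""
    return [full_cus - 2 * n for n in range(max_disabled_wgps + 1)]

# Navi 21 is an 80 CU full die; allowing up to 10 disabled WGPs:
print(possible_cu_counts(80, 10))
# [80, 78, 76, 74, 72, 70, 68, 66, 64, 62, 60]
# 72 and 60 match the announced 6800 XT and 6800.
```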


----------



## mechtech (Oct 29, 2020)

InVasMani said:


> 52CU and 44CU are the next logical steps based on what's already released; AMD seems to disable 8 CUs at a time. I can see them doing a 10GB or 14GB capacity device. It would be interesting if they utilized GDDR6 and GDDR6X together alongside variable rate shading, say using the GDDR6 when you scale the scene image quality back and the GDDR6X at the higher quality, giving mixed performance at a better price. I would think they'd consider reducing the memory bus width to 128-bit or 192-bit for SKUs with those CU counts if paired with Infinity Cache. It's interesting to think about how the Infinity Cache impacts latency in a CF setup; I'd expect less micro stutter. The 99th percentiles will be interesting to look at for RDNA2 with all the added bandwidth and I/O. I suppose 36 CUs is possible as well by extension, but I don't know how the profit margins would look; the low end of the market is eroding further each generation, and Intel entering the dGPU market will compound that. I don't think 30 CUs is likely for RDNA2; it would be 28 CUs if anything, and that's doubtful unless they wanted it for an APU/mobile part.



Well, it depends on the definition of low end. Usually midrange, as far as price is concerned, is about $250 US. The RX 480 I bought about 4 years ago had 2304 shaders (36 CUs), 8GB of RAM, and a 256-bit bus for $330 CAD, roughly $250 US. Maybe since card prices now start at $150 US, midrange is closer to $300 US?

That's typically my budget. I'm hoping something gets released along those lines; it could double my current performance and put it in 5700 XT territory.

If not, oh well, I have many other ways to put $350 CAD to better use.


----------



## Zach_01 (Oct 29, 2020)

InVasMani said:


> You know, something I noticed while playing around with the VRAM RAMDISK software: read performance seems to follow system memory constraints, while write performance follows PCIe bus constraints. But you can use NTFS compression when formatting the partition and speed up the bandwidth a lot, and you can go a step further and compress the contents with CompactGUI 2 using LZX compression at a high level. I can't benchmark that as easily, but the fact that it can be done is interesting to say the least, and it could push bandwidth and effective capacity further still. To the system it's basically a glorified RAMDISK that matches system memory read performance, with write speeds matching PCIe bandwidth. The other neat part: when Tom's Hardware tested Crysis running from VRAM, it did a bit better on minimum frame rates at 4K than NVMe/SSD/RAMDISK, and the system RAMDISK was worst. I think that's actually expected, because the system RAMDISK eats into system memory bandwidth, pulling double duty, while the VRAM is often sitting around waiting to be populated in the first place. It essentially works like a PCIe 4.0 x16 RAMDISK, which is technically faster than NVMe and less complex than a quad M.2 PCIe 4.0 x16 setup would be. Also, Tom's Hardware tested with a PCIe 3.0 NVMe drive and GPUs; I can't tell if the difference was margin of error, but if it's repeatable it's a nice perk, and one might get even better results than what was tested.
> 
> This is something I've said often for a long while now: as AMD's financials and budget improve, I anticipate Radeon following suit and making a stronger push in the GPU market against Nvidia. They're pleasantly a little earlier than I figured they'd be at this stage; I thought this kind of rebound would happen next generation, not this one. It just shows how hard AMD has worked to address the performance and efficiency of the Radeon brand and its IP. It's impressive, and it's exactly where the company wants to be headed. It's hard not to get complacent when you've been at the top awhile (look at Intel, and Nvidia for that matter), so it's really about time, but it wouldn't have happened without good management on the part of Lisa Su and the engineering talent she's put to work. It's not a stroke of luck; it's continued effort and progress with efficient management. To be fair, her personal experience is also very relevant to her position. She's the right leader for that company, 100%, and it's a similar story with Jensen Huang, even if you don't like leather jackets.
> 
> ...


I think they are able to cut/disable CUs by 2. If you look at the RDNA1/2 full dies you will see 20 and 40 of the same rectangles respectively. Each of those rectangles is 2 CUs.

RDNA1





RDNA2




————————

And I’m pretty convinced that Crossfire is dead long ago.


----------



## nikoya (Oct 30, 2020)

ratirt said:


> Actually. The VRR is supported by these screens but it is not FreeSync equivalent. Maybe it will work but it may not necessarily work as a FreeSync monitor or a TV would work.


Yes, the 6800 XT supports HDMI 2.1 VRR (just like the consoles).

RTings rates the C9/B9's VRR at 4K 40-60 Hz (maybe because they didn't have an HDMI 2.1 source to test higher rates?).

It looks like there is a modding solution to activate FreeSync up to 4K 120 Hz:

https://www.reddit.com/r/Amd/comments/g65mw7

Some people are experiencing short black screens when launching/exiting some games, but no big issue.

I think I'm gonna go red. A sweet 16GB and frequencies reaching 2300 MHz+, yummy, I want it, give that to me.

Green is maxing out around 1980 MHz on all their products; there's barely any OC headroom there.

DXR on AMD with 1 RA (Ray Accelerator) per CU seems not so bad. I mean, with the consoles willing to implement a bit of ray tracing, at least we will be able to activate DXR and see what it looks like. Anyway, it doesn't look like the wow thing right now, just some puddle reflections here and there.

DLSS... well, AMD is working on a solution, and given that the consoles are demanding one too, since they have less power to reach 4K 60+ FPS, this could get somewhere in the end.

Driver bugs: many 5700 XT users haven't reported issues in many months, and given that most games are now developed for both PC and consoles, I'm pretty confident AMD's drivers are gonna be robust.

Also, I'm fed up with all my interfaces being green (GeForce Experience). I want to discover the AMD UI, just for the fun of looking into every possible menu and option.

I would have gone for the 6900 XT if they had priced it at $750 on an 80/72 CU ratio basis; that would have been reasonable. Even $800, just for the "Elite" product feeling. But at $999 they went berserk; not gonna give them credit there.

In my opinion they should have made a bigger die and crushed Nvidia once and for all, just for the fun of it.

All in all... November seems exciting.



Johnny05 said:


> I have a c9 with a Vega64 and Freesync works just fine. All that is required is to use CRU and configure the tv to report as Freesync compatible.



Ah yeah, thanks, just saw your answer now. Good to see it works for you as well.


----------



## InVasMani (Oct 30, 2020)

I actually would've guessed 2 CUs.


Zach_01 said:


> I think they are able to cut/disable CUs by 2. If you look at the RDNA1/2 full dies you will see 20 and 40 of the same rectangles respectively. Each of those rectangles is 2 CUs.
> 
> RDNA1
> View attachment 173792
> ...


 Still, AMD has to differentiate SKUs, so it's a matter of how they go about it and how many SKUs they try to offer in total. I'm sure AMD wants fairly good segmentation across the board, along with price considerations. If they added 3 more SKUs and did what they did for the high-end SKUs in reverse, meeting most closely at the ends, I think they'd probably go with 56CU/44CU/36CU SKUs to pair with the current 80CU/72CU/60CU offerings. The 60CU and 56CU would naturally be most closely matched in price and performance. Now, if AMD has to create new dies to reduce the Infinity Cache, and if they reduce the memory bus width, I think 128-bit with 64MB of Infinity Cache makes a lot of sense, especially were they to swap the GDDR6 for GDDR6X. I really see that as a pretty good possibility. It actually seems to make a fair bit of sense, and the 56CU would be closely matched to the 60CU version, but perhaps at a better price or better efficiency relative to price; either way it seems flexible and scalable. They could also bump the original 3 SKUs to GDDR6X down the road. I think AMD kind of nailed it this time around on the GPU side: really great progress and a sort of return to normalcy on the GPU front between AMD/Nvidia (or ATI/Nvidia). Either way it's good for consumers, or at least better than it had been.


----------



## Zach_01 (Oct 30, 2020)

InVasMani said:


> I actually would've guessed 2 CUs.
> Still, AMD has to differentiate SKUs, so it's a matter of how they go about it and how many SKUs they try to offer in total. I'm sure AMD wants fairly good segmentation across the board, along with price considerations. If they added 3 more SKUs and did what they did for the high-end SKUs in reverse, meeting most closely at the ends, I think they'd probably go with 56CU/44CU/36CU SKUs to pair with the current 80CU/72CU/60CU offerings. The 60CU and 56CU would naturally be most closely matched in price and performance. Now, if AMD has to create new dies to reduce the Infinity Cache, and if they reduce the memory bus width, I think 128-bit with 64MB of Infinity Cache makes a lot of sense, especially were they to swap the GDDR6 for GDDR6X. I really see that as a pretty good possibility. It actually seems to make a fair bit of sense, and the 56CU would be closely matched to the 60CU version, but perhaps at a better price or better efficiency relative to price; either way it seems flexible and scalable. They could also bump the original 3 SKUs to GDDR6X down the road. I think AMD kind of nailed it this time around on the GPU side: really great progress and a sort of return to normalcy on the GPU front between AMD/Nvidia (or ATI/Nvidia). Either way it's good for consumers, or at least better than it had been.


My estimation, based entirely on (my) logic, is that AMD will stay away from GDDR6X: first because they can get away with the new IC implementation, and second because of all the added expense. GDDR6X is more expensive, draws almost 3X the power of "simple" GDDR6, and the memory controller needs to be more complex too (= more expense in die area and fab cost).
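To put the power argument in numbers — the per-chip wattages below are made-up round figures for illustration, not datasheet values:

```python
# Rough memory-subsystem power comparison. Per-chip wattages are assumed
# round numbers for illustration only, not datasheet figures.

def array_power_watts(chip_count, watts_per_chip):
    """Total draw of a memory array of identical chips."""
    return chip_count * watts_per_chip

gddr6  = array_power_watts(8, 2.0)  # 8 chips on a 256-bit bus, assumed ~2 W each
gddr6x = array_power_watts(8, 5.0)  # same chip count at a higher assumed draw

print(gddr6, gddr6x, round(gddr6x / gddr6, 1))  # 16.0 40.0 2.5
```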

This I've partially "heard"...
The three 6000 cards we've seen so far are based on Navi21, right? The 80CU full die. They may have one more N21 SKU with even fewer CUs, don't know how many, probably 56 or fewer active, with 8GB(?) and probably the same 256-bit bus. But I don't think that's coming soon, because they may have to build inventory first (given the presently good fab yields) and also see how things go with Nvidia.

Further down they have Navi22: probably a 40CU(?) full die with a 192-bit bus, 12GB(?), clocks up to 2.5GHz, 160~200W, and who knows how much IC. That would be better than the 5700XT.
And also cut-down versions of N22 with 32~36 CUs, 8/10/12GB, 160/192-bit (as 5600/5700 replacements) and so on, but at this point it's all speculation and things may change.

There are also rumors of a Navi23 with 24~32 CUs, but... it's way too soon.

Navi21: 4K
Navi22: 1440p and ultrawide
Navi23: 1080p only


----------



## Valantar (Oct 30, 2020)

Zach_01 said:


> I think they are able to cut/disable CUs by 2. If you look at the RDNA1/2 full dies you will see 20 and 40 of the same rectangles respectively. Each of those rectangles is 2 CUs.





InVasMani said:


> I actually would've guessed 2 CUs.
> Still, AMD has to differentiate SKUs, so it's a matter of how they go about it and how many SKUs they try to offer in total. I'm sure AMD wants fairly good segmentation across the board, along with price considerations. If they added 3 more SKUs and did what they did for the high-end SKUs in reverse, meeting most closely at the ends, I think they'd probably go with 56CU/44CU/36CU SKUs to pair with the current 80CU/72CU/60CU offerings. The 60CU and 56CU would naturally be most closely matched in price and performance. Now, if AMD has to create new dies to reduce the Infinity Cache, and if they reduce the memory bus width, I think 128-bit with 64MB of Infinity Cache makes a lot of sense, especially were they to swap the GDDR6 for GDDR6X. I really see that as a pretty good possibility. It actually seems to make a fair bit of sense, and the 56CU would be closely matched to the 60CU version, but perhaps at a better price or better efficiency relative to price; either way it seems flexible and scalable. They could also bump the original 3 SKUs to GDDR6X down the road. I think AMD kind of nailed it this time around on the GPU side: really great progress and a sort of return to normalcy on the GPU front between AMD/Nvidia (or ATI/Nvidia). Either way it's good for consumers, or at least better than it had been.


Yep, CUs are grouped two by two in ... gah, I can't remember what they call the groups. Anyhow, AMD can disable however many they like as long as it's a multiple of 2.

That being said, it makes no sense for them to launch further disabled Navi 21 SKUs. Navi 21 is a big and expensive die, made on a mature process with a low error rate. They've already launched a SKU with 25% of CUs disabled. Going below that would only be warranted if there were _lots_ of defective dice that didn't even have 60 working CUs. That's highly unlikely, and so they would then be giving up chips they could sell in higher power, more expensive SKUs just to make cut-down ones - again, why would they do that? And besides, AMD has promised that RDNA will be the basis for their full product stack going forward, so we can expect at the very least two more die designs going forward - they had two below 60 CUs for RDNA 1 after all, and reducing that number makes no sense at all. I would expect the rumors of a mid-size Navi 22 and a small Navi 23 to be relatively accurate, though I'm doubtful about Navi 22 having only 40 CUs - that's too big a jump IMO. 44, 48? Sure. And again, 52 would place it too close to the 6800. 80-72-60-(new die)-48-40-32-(new die)-28-24-20 sounds like a likely lineup to me, which gives us everything down to a 5500 non-XT, with the possibility of 5400/5300 SKUs with disabled memory, lower clocks, etc.

As for memory, I agree with @Zach_01 that AMD will likely stay away from GDDR6X entirely. It just doesn't make sense for them. With IC working to the degree that they only need a relatively cheap 256-bit GDDR6 bus on their top end SKU, going for a more expensive, more power hungry RAM standard on a lower end SKU would just be plain weird. What would they gain from it? I wouldn't be surprised if Navi 22 still had a 256-bit bus, but it might only get fully enabled on top bins (6700 XT, possibly 6700) - a 256-bit bus doesn't take much board space and isn't very expensive (the RX 570 had one, after all). My guess: fully enabled Navi 22 will have something like a 256-bit G6 bus with 96MB of IC. Though it could of course be any number of configurations, and no doubt AMD has simulated the crap out of this to decide which to go for - it could also be 192-bit G6+128MB IC, or even 192-bit+96MB if that delivers sufficient performance for a 6700 XT SKU.
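The cache-plus-narrow-bus tradeoff described above can be modeled very roughly as a hit-rate-weighted blend. The hit rate and bandwidth figures below are illustrative assumptions, not AMD numbers:

```python
# Average bandwidth seen by the shaders when an on-die cache absorbs a
# fraction of traffic and the rest goes to DRAM. Illustrative numbers.

def blended_bandwidth(hit_rate, cache_gbps, dram_gbps):
    """Hit-rate-weighted mix of cache and DRAM bandwidth, in GB/s."""
    return hit_rate * cache_gbps + (1.0 - hit_rate) * dram_gbps

# 256-bit GDDR6 at 16 Gbps is ~512 GB/s; assume a much faster on-die
# cache and a 50% hit rate at 4K (both assumptions):
print(blended_bandwidth(0.5, 1600.0, 512.0))  # 1056.0
```

The interesting design consequence is visible immediately: raising the hit rate (bigger IC) buys far more effective bandwidth than widening the DRAM bus does, which is presumably why AMD simulated so many IC/bus combinations.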


----------



## Zach_01 (Oct 30, 2020)

The battle will continue, and I think it will be most fierce in the low-to-mid range where the most cards are sold. Not that the top end is over...
It's really nice and exciting to see them both fight for "seats" and market share all over again, not only for the new and more advanced products (from both), but for the competition itself!
I'm all set for a GPU for the next couple of years, but all I want is to see them fight!!


----------



## R0H1T (Oct 30, 2020)

Valantar said:


> So no, this doesn't seem like a "best v. worst" comparison.


I didn't say that, hence the word could. AMD can get the numbers they desire by comparing the less efficient cards, that's it. Different cards can have vastly different perf/W figures; the efficiency jump in and of itself says nothing. What it does tell us, however, is that AMD removed some bottlenecks from their RDNA uarch that improved efficiency by a lot. There could be more efficient cards in the 6xxx lineup which might well be more than 70% more efficient than the worst RDNA card out there. The bottom line is there's more than one way to skin a cat, and while the jump is tremendous indeed, I can't say it's that surprising, not to me at least. In case you forgot, AMD has led Nvidia in perf/W & overall performance in the last decade. I'm frankly more impressed by the Zen team's achievements.


----------



## Valantar (Oct 30, 2020)

R0H1T said:


> I didn't say that, hence the word could.


But your wording was vague. You said they could, yet failed to point out that in this case it's quite clear that they didn't. Which makes all the difference.


R0H1T said:


> AMD can get the numbers they desire by comparing the less efficient cards, that's it. Different cards can have vastly different perf/w figures, the efficiency jump in & of itself says nothing. What it does tell us however is that AMD's removed some bottlenecks from their RDNA uarch that improved efficiency by a lot. There could be more efficient cards in the 6xxx lineup which might well be more than 70% more efficient than the worst RDNA card out there.


That's likely true. If they have a low-and-(relatively-)wide RDNA 2 SKU like the 5600 XT, that would no doubt be more than 70% better than the 5700 XT in perf/W. And of course if they, say, clock the living bejeezus out of some SKU it might not significantly beat the 5600 XT in perf/W. At that point though it's more interesting to look at overall/average/geomean perf/W for the two lineups and compare that, in which case there's little doubt RDNA 2 will be a lot more efficient.
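For concreteness, the perf/W comparisons being argued about reduce to this arithmetic. The card figures are made-up placeholders, which is exactly the point: the uplift you quote depends entirely on which two cards you pick:

```python
# Perf/W and relative uplift between two cards. Placeholder numbers only;
# the result changes completely depending on which cards you compare.

def perf_per_watt(fps, watts):
    return fps / watts

def uplift(new_ppw, old_ppw):
    """Relative improvement, e.g. 0.54 == +54%."""
    return new_ppw / old_ppw - 1.0

old_card = perf_per_watt(60.0, 225.0)   # hypothetical RDNA 1 card
new_card = perf_per_watt(100.0, 250.0)  # hypothetical RDNA 2 card
print(round(uplift(new_card, old_card), 2))  # 0.5, i.e. +50%
```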


R0H1T said:


> The bottom line is there's more than one way to skin a cat & while the jump is tremendous indeed I can't say it's that surprising, not to me at least. In case you forgot AMD has led Nvidia in perf/W & overall performance in the last decade,


Sorry, what? Did you mean to say the opposite? AMD has been behind Nvidia in perf/W and overall performance since the 780 Ti. That's not quite a decade, but seven years is not nothing, and the closest AMD has come in that time was the Fury X (near performance parity at slightly higher power) and the 5600 XT (near outright efficiency superiority, but at relatively low absolute performance).


R0H1T said:


> I'm frankly more impressed by the zen team's achievements.


I'd say both are about equally impressive, though it remains to be seen if the RDNA team can keep up with the extremely impressive follow-through of the Zen team. RDNA 2 over RDNA 1 is (at least according to AMD's numbers) a change very similar to Zen over Excavator, but since then we've seen significant generational growth for two more generations (with a minor revision in between). On the other hand, RDNA 1 over GCN was also a relatively big jump, but one that had more issues than Zen did (even accounting for Zen's early RAM and BIOS issues). So the comparison is a bit difficult at this point in time, but it's certainly promising for the RDNA team.


----------



## Zach_01 (Oct 30, 2020)

Don't forget that RDNA3 is ~24 months away.
That is a lot more than the 15-month period between RDNA1 and RDNA2.
The impressive stuff may continue on their all-new platform, with ZEN5 in early 2022 and RDNA3 in late 2022, and it could be bigger than what we've seen so far.


----------



## BoboOOZ (Oct 30, 2020)

Zach_01 said:


> Don't forget that RDNA3 is ~24 months away.
> That is a lot more than the 15-month period between RDNA1 and RDNA2.
> The impressive stuff may continue on their all-new platform, with ZEN5 in early 2022 and RDNA3 in late 2022, and it could be bigger than what we've seen so far.


The promise for RDNA is short incremental cycles, just like with Zen, so RDNA3 is due at the end of next year, or the beginning of 2022 at the latest. That's what everybody is saying, and Lisa just said that development of RDNA3 is well under way.



https://www.tweaktown.com/images/news/7/4/74274_06_amds-next-gen-rdna-3-revolutionary-chiplet-design-could-crush-nvidia_full.png


----------



## R0H1T (Oct 30, 2020)

Yeah, I can't imagine AMD not going 5nm/RDNA 3 in much less than 2 years, especially since they probably have full access to TSMC's top nodes now. Nvidia is certainly gonna release something better much sooner; AMD can't let the Turing saga play out for another year!


----------



## Zach_01 (Oct 30, 2020)

Yet they showed on an RDNA2 slide that RDNA3 is for the end of 2022. I'm not making this up...


----------



## BoboOOZ (Oct 30, 2020)

Zach_01 said:


> Yet they showed on an RDNA2 slide that RDNA3 is for the end of 2022. I'm not making this up...


Show that slide, I showed you mine


----------



## Zach_01 (Oct 30, 2020)

No I’m not...


----------



## Valantar (Oct 30, 2020)

(Source.) It's AMD's typical vague roadmap; it can mean any part of 2022, though 2021 is _very_ unlikely.


----------



## Shatun_Bear (Oct 30, 2020)

I think RDNA2 is the equivalent of Zen 2 in the PC space: extremely competitive with their rival, allowing massive market share gains (they can only go up from 20%).

RDNA3 is said to be another huge leap, and on TSMC's 5nm, whilst Nvidia will be trundling along on 7nm or flirting with Samsung's cheap 8nm+ (a mistake on Nvidia's part, in my view) with their 4000 series.


----------



## springs113 (Oct 30, 2020)

Valantar said:


> (Source.) It's AMD's typical vague roadmap; it can mean any part of 2022, though 2021 is _very_ unlikely.


 The way I read that map, it's the end of 2021, i.e. before 2022. Also, every leaker out there says 2021.


----------



## Valantar (Oct 30, 2020)

springs113 said:


> The way I read that map, it's the end of 2021, i.e. before 2022. Also, every leaker out there says 2021.


Guess that depends on whether you're reading the "2022" point as "start of 2022" or "end of 2022". I prefer pessimism with the possibility of being pleasantly surprised, so I'm firmly in the latter camp.


----------



## BoboOOZ (Oct 30, 2020)

Valantar said:


> Guess that depends on whether you're reading the "2022" point as "start of 2022" or "end of 2022". I prefer pessimism with the possibility of being pleasantly surprised, so I'm firmly in the latter camp.


Yes, but leakers...
Plus, while AMD might feel encouraged to slow things down a bit on the CPU side, since they are starting to compete with themselves, in the GPU market they need to keep up the fast pace for quite a while before they can even hope to reach a similar position.


----------



## lexluthermiester (Oct 30, 2020)

BoboOOZ said:


> Plus, while AMD might feel encouraged to slow things down a bit on the CPU side, since they are starting to compete with themselves a bit


Plus, they are focused on a new socket. The Ryzen 5000 series of CPUs is the last for socket AM4; the next will likely be AM5.


----------



## Valantar (Oct 30, 2020)

lexluthermiester said:


> Plus, they are focused on a new socket. The Ryzen 5000 series of CPUs is the last for socket AM4; the next will likely be AM5.


And they can't rush that out before PCIe 5.0 is at least technically viable (likely needs new on-board hardware to ensure signal integrity, which might not be available at consumer price levels for a while) and DDR5 has wide availability. Definitely good reasons to hold off AM5 for a while yet.

But using that as an argument that AMD will try to quicken their GPU development pace? Nah, sorry, not buying that. 16 months between RDNA 1 and RDNA 2. Now we're supposed to get RDNA 3 in < 14 months? And remember, a launch later in the year than this isn't happening no matter what. It's either pre holiday season or CES. Which makes that 12 months, not 14. I really don't see that as likely. I'll be more than happy to be proven wrong, but I'm definitely sticking to a more cautious approach here.


----------



## R0H1T (Oct 30, 2020)

Why do you think they'll just straight up go with PCIe 5.0? They can most certainly skip it.

DDR5 is a given, PCIe 5.0 is not much of a necessity even on servers. Of course with Xilinx (acquisition) they might surprise us or something.


----------



## Valantar (Oct 30, 2020)

R0H1T said:


> Why do you think they'll just straight up go with PCIe 5.0 ? They most certainly can skip on that.
> 
> DDR5 is a given, PCIe 5.0 is not much of a necessity even on servers. Of course with Xilinx they might surprise us or something.


I don't think it's necessary at all, but launching a new long-term platform ~a year before the availability of an I/O standard is generally a bad idea. Of course it's possible that they could launch AM5 with the promise of future PCIe 5.0 support (i.e. first-gen motherboards and CPUs won't have 5.0, but will be compatible with next-gen CPUs and mobos that do, just at 4.0 speeds when mixed), but again, that's rather sloppy.
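For what it's worth, the mixed-generation case is well-defined: a PCIe link trains to the highest generation both endpoints support, so a mixed pairing simply falls back. A trivial sketch of that rule:

```python
# A PCIe link negotiates down to the highest generation supported by BOTH
# ends, so mixed CPU/board generations simply run at the lower one.

def negotiated_pcie_gen(cpu_gen, board_gen):
    """Link generation a CPU/motherboard pairing actually runs at."""
    return min(cpu_gen, board_gen)

print(negotiated_pcie_gen(5, 4))  # 4 -- a gen-5 CPU on a gen-4 board runs at 4.0
```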


----------



## dragontamer5788 (Oct 30, 2020)

Zach_01 said:


> I think they are able to cut/disable CUs by 2. If you look RNDA1/2 full dies you will see 20 and 40 same rectangular respectively. Each one of these rectangular are 2CUs.



Note: CU is now a bit of a historical artifact. RDNA and RDNA 2 are organized into WGPs, or "Dual Compute Units" (because each WGP has the resources of 2x CUs of old). That's why there are 40 RDNA clusters, which count as 80 "CUs" (even though CUs don't really exist anymore).

CUs were in Vega, and are a decent unit to think about while programming the GPU. WGPs work really hard to "pretend" to work like 2x CUs for backwards compatibility purposes... but they're really just one unit now.

-----

As such: the proper term for those 40x clusters on your RDNA2 die shot is Workgroup Processor (WGP)... or "Dual-compute units" (if you want to make a comparison to Vega).
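The naming above as trivial arithmetic, useful just to keep marketing "CU" counts and physical WGP counts straight:

```python
# RDNA hardware is organized in Workgroup Processors (WGPs); each WGP
# presents as two legacy CUs for compatibility and marketing counts.

CUS_PER_WGP = 2

def legacy_cu_count(wgps):
    """'CU' count quoted in specs for a given number of physical WGPs."""
    return wgps * CUS_PER_WGP

print(legacy_cu_count(40))  # 80 "CUs" for a full Navi 21's 40 WGPs
```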


----------



## BoboOOZ (Oct 30, 2020)

Valantar said:


> But using that as an argument that AMD will try to quicken their GPU development pace? Nah, sorry, not buying that. 16 months between RDNA 1 and RDNA 2. Now we're supposed to get RDNA 3 in < 14 months? And remember, a launch later in the year than this isn't happening no matter what. It's either pre holiday season or CES. Which makes that 12 months, not 14. I really don't see that as likely. I'll be more than happy to be proven wrong, but I'm definitely sticking to a more cautious approach here.


You forget that during those 16 months they effectively launched 3 architectures: RDNA2 plus 2 custom console APUs, each with different architectures and features. Now the whole GPU design team is free to work on the next GPU generation.


----------



## TheoneandonlyMrK (Oct 30, 2020)

Valantar said:


> I don't think it's necessary at all, but launching a new long-term platform ~a year before the availability of an I/O standard is generally a bad idea. Of course it's possible that they could launch AM5 with the promise of future PCIe 5.0 support (i.e. first-gen motherboards and CPUs won't have 5.0, but will be compatible with next-gen CPUs and mobos that do, just at 4.0 speeds when mixed), but again, that's rather sloppy.


They added PCIe 4.0 to Zen later.


----------



## Valantar (Oct 30, 2020)

BoboOOZ said:


> You forget that during these 16 months they effectively launched 3 architectures, RDNA2 + 2 custom  APUs, with different architectures and features for consoles. Now the whole GPU design team is free to work on the new GPU generation.


"The whole design team" is at least four separate design teams (two for Zen). It's not like all the Zen design engineers can just slot into a GPU design team without a significant retraining period. The semi-custom team is no doubt already working on 5nm refreshes for both console makers, but some of their engineers could have been moved to a field closer to their expertise, whether that's CPU, GPU, I/O, fabric, etc. Ryzen is under continuous development; one team just finished Zen 3, the other is hard at work with Zen 4, and no doubt the Zen 2 team is now ramping up development of Zen 5. There might be some minor shuffling, but nothing on the scale you are indicating.



theoneandonlymrk said:


> They added PCIe 4.0 to Zen later.


That's true. But that was quite a long time after AM4 launched, not a year or less.


----------



## BoboOOZ (Oct 30, 2020)

Valantar said:


> "The whole design team" is at least four separate design teams (two for Zen). It's not like all the Zen design engineers can just slot into a GPU design team without a significant retraining period. The semi-custom team is no doubt already working on 5nm refreshes for both console makers, but some of their engineers could have been moved to a field closer to their expertise, whether that's CPU, GPU, I/O, fabric, etc. Ryzen is under continuous development; one team just finished Zen 3, the other is hard at work with Zen 4, and no doubt the Zen 2 team is now ramping up development of Zen 5. There might be some minor shuffling, but nothing on the scale you are indicating.


I wonder where you get the info on the console 5 nm refreshes. Do you have any source, or are you just guessing? Sony made it clear there will be no refreshes this generation, at least, and there is no leak or hint of one yet; if one does come, it will most probably be well after RDNA3.


----------



## TheoneandonlyMrK (Oct 30, 2020)

BoboOOZ said:


> I wonder where you get the info on the console 5 nm refreshes. Do you have any source, or are you just guessing? Sony made it clear there will be no refreshes this generation, at least, and there is no leak or hint of one yet; if one does come, it will most probably be well after RDNA3.


That doesn't mean they won't evolve what's already out for a cheaper BOM; it's what they do.


----------



## InVasMani (Oct 30, 2020)

Zach_01 said:


> My estimation, based purely on (my own) logic, is that AMD will stay away from GDDR6X: first because they can get away with the new IC implementation, and second because of all the expenses involved. GDDR6X is more expensive, draws almost 3x the power of "simple" GDDR6, and the memory controller needs to be more complex too (= more expense in die area and fab cost).
> 
> This I "heard" partially...
> The three 6000 cards we've seen so far are based on Navi 21, right? The 80-CU full die. They may have one more N21 SKU with fewer active CUs, I don't know how many, probably 56 or even less, with 8 GB(?) and probably the same 256-bit bus. But I don't think this is coming soon, because they may have to build inventory first (given the presently good fab yields) and also see how things go with nVidia.
> ...


That does make sense on the GDDR6X situation, given the cost, complexity, and power relative to GDDR6, and with the Infinity Cache being so effective. I'd like to think a 192-bit card would have more than 40 CUs considering the Infinity Cache. If it were 128-bit with 64 MB of Infinity Cache, I could see something like 36 CUs being quite reasonable. I think aiming higher than RDNA1 is in AMD's best interest for both longevity and margins, or at least matching it at better efficiency and cost to produce.



Valantar said:


> Yep, CUs are grouped two by two in ... gah, I can't remember what they call the groups. Anyhow, AMD can disable however many they like as long as it's a multiple of 2.


Looking at the die, I actually wouldn't expect them to cut that few, for a couple of reasons. SKU differentiation is the obvious one, but the other is heat distribution. Cutting 4 CUs in total, as slices of 2 CUs diagonal from each other on opposite sides of the die, kind of makes more sense to me. That said, AMD packs a lot of tech into their circuitry these days, with precision boost and granular management, so they could probably cut only 2 CUs if they felt inclined and not have to worry drastically about heat management and hot spots becoming a real concern. If it were me, I'd approach it like I described, trying to keep heat distribution as even as possible when cutting CUs down.

SKU differentiation is the bigger concern, I feel, though I don't think they are going to slice these up fifty ways to kingdom come, unless they were trying to stir up a bidding war between the AIBs for slightly better-binned dies in finely incremental steps. I suppose it could happen, but it depends on the added time and cost to sort through all that.


----------



## Valantar (Oct 30, 2020)

BoboOOZ said:


> I wonder where do you get the info on the console 5nm refreshes, do you have any source, or are you just guessing? Sony made it clear there will be no refreshes this generation, at least, and there is no leak or hint of that yet, if any of that will come, it will most probably be way later after RDNA3.


No source, but _every single_ console generation since the PS1 has had some sort of refresh. I'm not talking about the new-tier mid-generation *upgrades* that we saw with the current generation. Refresh = same specs, new process, smaller, cheaper die with lower power draw. The PS1 had at least one slim version. The PS2 had at least two. I don't think the OG Xbox had one, but the 360 had two, and the One had one (the S). The PS3 had at least a couple, and the PS4 had one (the Slim). Given that 5 nm is already in volume production today, it stands to reason that it'll be cheap enough in 2-3 years that console makers will want to move to it. Even if the cost per die is the same due to the more advanced process, they'll save on the BOM through lower power draw = smaller PSU and heatsink.



InVasMani said:


> Looking at the die, I actually wouldn't expect them to cut that few, for a couple of reasons. SKU differentiation is the obvious one, but the other is heat distribution. Cutting 4 CUs in total, as slices of 2 CUs diagonal from each other on opposite sides of the die, kind of makes more sense to me. That said, AMD packs a lot of tech into their circuitry these days, with precision boost and granular management, so they could probably cut only 2 CUs if they felt inclined and not have to worry drastically about heat management and hot spots becoming a real concern. If it were me, I'd approach it like I described, trying to keep heat distribution as even as possible when cutting CUs down. SKU differentiation is the bigger concern, I feel, though I don't think they are going to slice these up fifty ways to kingdom come, unless they were trying to stir up a bidding war between the AIBs for slightly better-binned dies in finely incremental steps. I suppose it could happen, but it depends on the added time and cost to sort through all that.


I didn't say they would be cutting 2 off anything, I said they can cut any number as long as it's 2x something. I.e. 2, 4, 6, 8, 10, 12... Even numbered cuts only, in other words. Nor did I say anything about where they would be cut from - that is either decided by where on the die there are defects, or if there aren't any, whatever is convenient engineering-wise. To quote myself, this is my (very rough and entirely unsourced) guess for the Navi 2 lineup in terms of CUs:


Valantar said:


> 80-72-60-(new die)-48-40-32-(new die)-28-24-20 sounds like a likely lineup to me, which gives us everything down to a 5500 non-XT, with the possibility of 5400/5300 SKUs with disabled memory, lower clocks, etc.
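The pair-wise constraint described above (CUs grouped two by two, so only whole pairs can be disabled) can be sketched in a few lines. The 80-CU full die is Navi 21 per the thread; the floor of 20 CUs is just an illustrative cut-off, not a real SKU:

```python
# Enumerate the CU counts reachable by disabling whole CU pairs on a full
# die, per the "multiples of 2 only" constraint discussed above.
# 80 CUs = Navi 21 full die; the floor of 20 is an arbitrary illustration.

FULL_DIE_CUS = 80
PAIR = 2  # CUs per dual-CU group

def valid_cu_counts(full=FULL_DIE_CUS, floor=20):
    """All CU counts reachable by disabling whole pairs, down to `floor`."""
    return [full - n * PAIR for n in range((full - floor) // PAIR + 1)]

counts = valid_cu_counts()
print(counts[:4], "...", counts[-1])  # [80, 78, 76, 74] ... 20
```

Every count in the guessed lineup above (80, 72, 60, 48, ...) falls on this even grid.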


----------



## Makaveli (Oct 30, 2020)

Seen this yet?

RX 6800, 6800 XT and 6900 XT: AMD releases further benchmarks in 4K and WQHD

Following the presentation of the Radeon RX 6800, 6800 XT and 6900 XT, AMD has now published further benchmarks of the graphics cards in 4K and WQHD.

www.computerbase.de


----------



## Fluffmeister (Oct 30, 2020)

Makaveli said:


> Seen this yet?
> 
> 
> 
> ...



Yeah, not sure if it was posted already, but AMD put up benchmarks with SAM enabled but no Rage Mode.



			https://www.amd.com/en/gaming/graphics-gaming-benchmarks
		


Results chop and change a bit, but it gives an idea of what to expect.


----------



## InVasMani (Oct 30, 2020)

Valantar said:


> I didn't say they would be cutting 2 off anything, I said they can cut any number as long as it's 2x something. I.e. 2, 4, 6, 8, 10, 12... Even numbered cuts only, in other words. Nor did I say anything about where they would be cut from - that is either decided by where on the die there are defects, or if there aren't any, whatever is convenient engineering-wise. To quote myself, this is my (very rough and entirely unsourced) guess for the Navi 2 lineup in terms of CUs:


I was injecting my thoughts on the 2-CU situation, or twin units, whatever you wish to call them. What I was saying is that it's unlikely AMD would bother with a SKU that differs by as few as 2 CUs; it seems most probable to me that it would be somewhere between 6 and 12 CUs between two SKUs at this point. I do see AMD leaning toward cutting fewer CUs where possible, though, and charging a premium for the better performance. CU count is probably far more important than bandwidth with the current design; it's needed to take full advantage of the bandwidth available.

Much of what happens hinges on the Infinity Cache size and bus width in any future SKUs; even outside VRAM, those change things a fair bit. HBM2 with Infinity Cache for new SKUs with even more CUs is a real scenario to consider too: even without changing the bus width, that's tons of extra bandwidth, and more CUs to go along with it. HBM2 is also more power-friendly than GDDR6, if I'm not mistaken, and occupies less space, so a bigger chip is plausible, though I don't know about the yields. That said, they could do three lower SKUs initially, then try to build a bigger, higher-CU-count chip with HBM2, in that order, to maximize yields, because TSMC's node will continue to mature over time. Cost would be the concern with HBM2, but it would bring better power, bandwidth, and space savings.



Fluffmeister said:


> Yeah, not sure if it was posted already, but AMD put up benchmarks with SAM enabled but no Rage Mode.
> 
> 
> 
> ...


That's quite interesting: once you drop from 4K to 1440p, RDNA2 performance pulls ahead rapidly relative to Ampere. I'd really like to see AMD add 1080p results to this list of benchmarks. The Infinity Cache seems to flex its benefit the most at lower resolutions in particular, which makes sense given the limited amount of cache to work with; the huge latency reduction and bandwidth increase it provides give better mileage there, naturally. It's actually very much akin to the Intel situation at 1080p for high-refresh-rate esports gaming. I presume these cards are going to sell like hot cakes to that crowd, because as far as I can see they will scream along nicely at 1080p high refresh rates relative to the cost. It'll be interesting to see what happens with RTRT at different resolutions. That Infinity Cache seems really effective at lower resolutions.


----------



## Zach_01 (Oct 30, 2020)

InVasMani said:


> That's quite interesting: once you drop from 4K to 1440p, RDNA2 performance pulls ahead rapidly relative to Ampere. I'd really like to see AMD add 1080p results to this list of benchmarks. The Infinity Cache seems to flex its benefit the most at lower resolutions in particular, which makes sense given the limited amount of cache to work with; the huge latency reduction and bandwidth increase it provides give better mileage there, naturally. It's actually very much akin to the Intel situation at 1080p for high-refresh-rate esports gaming. I presume these cards are going to sell like hot cakes to that crowd, because as far as I can see they will scream along nicely at 1080p high refresh rates relative to the cost.


If you look at AMD's latest performance across resolutions relative to Ampere, it can seem that RDNA2 doesn't do well at the higher/highest ones.

It’s not really that RDNA2 architecture/IC doesn’t scale well on different resolutions. Or that it does better at lower ones. It’s the Ampere architecture that doesn’t scale well across resolutions.
And you can see that from benchmarks comparing Turing vs Ampere. Turing and RDNA2 have a more “normal” scaling across the 3 well known 1080p, 1440p and 4K.

Seeing benchmarks of Turing vs Ampere across the 3 res you can identify that as you going up Ampere is getting away from Turing to reach the avg relative perf gains of around 30% on 4K. But on 1080p that difference is “only” 20%.
It’s a matter of Ampere’s architecture.

Also, this relative comparison (we don’t actually have full benches between Turing and RDNA2) *short of *confirms that AMD’s IC with the high (effective) bandwidth is working well and delivers its promises as a real wide bus.
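The scaling argument above can be put into a toy model. All numbers here are illustrative assumptions taken from the post (the rough 20%/30% Turing-relative gains), plus a hypothetical RDNA2 card that scales "normally" (like Turing) and exactly ties Ampere at 4K:

```python
# Toy model of the resolution-scaling argument above. All numbers are
# illustrative assumptions from the post, not measured results.

ampere_vs_turing = {"1080p": 1.20, "1440p": 1.25, "4K": 1.30}

# A hypothetical RDNA2 card that scales like Turing and ties Ampere at 4K
# is 1.30x Turing at every resolution.
rdna2_vs_turing = 1.30

def rdna2_vs_ampere(res):
    """Implied RDNA2-vs-Ampere performance ratio at a given resolution."""
    return rdna2_vs_turing / ampere_vs_turing[res]

for res in ampere_vs_turing:
    print(f"{res}: {rdna2_vs_ampere(res):.3f}x")
```

Under these assumptions the RDNA2 card ties at 4K but leads by roughly 8% at 1080p, purely because Ampere's own gain over Turing shrinks at lower resolutions.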


----------



## Valantar (Oct 30, 2020)

Zach_01 said:


> If you look at AMD's latest performance across resolutions relative to Ampere, it can seem that RDNA2 doesn't do well at the higher/highest ones.
> 
> It's not really that the RDNA2 architecture/IC doesn't scale well across resolutions, or that it does better at lower ones. It's the Ampere architecture that doesn't scale well across resolutions.
> And you can see that from benchmarks comparing Turing vs Ampere. Turing and RDNA2 show a more "normal" scaling across the three well-known resolutions: 1080p, 1440p and 4K.
> ...


AFAIK that is mainly because it's only at 4K (and higher) that you can make any real use of Ampere's increased FP32 throughput, while at lower resolutions you're bottlenecked by other parts of the architecture (which weren't doubled).


----------



## InVasMani (Oct 30, 2020)

I'll assume you're probably right about Ampere, but as far as resolution scaling is concerned for RDNA2, 1080p makes better use of the available bandwidth than 4K: more frames for the same amount of bandwidth, assuming the CPU can keep pace and the GPU's CUs can keep all that bandwidth fed well enough. All I know is that, relative to Ampere, RDNA2's scaling did noticeably better when the resolution dropped from 4K to 1440p, and I suspect that carries through to 1080p as well. It didn't look like an anomaly at all: across all the tests, the gap narrows, or RDNA2 pulls ahead, or pulls away even further. You might be right about Ampere, but the Infinity Cache could be playing a role on top of that. Much like an SSD with overprovisioning, at a lower resolution you'll have more Infinity Cache "overprovisioning" to work with, so to speak.


----------



## Zach_01 (Oct 30, 2020)

I guess this "issue" will be cleared up as benchmarks go public with all architectures in them, at all resolutions.


----------



## InVasMani (Oct 30, 2020)

I'm confusing myself trying to think about it now, honestly. I get what you're saying about Ampere, but at the same time the Infinity Cache is drastically better on bandwidth and I/O. At lower resolutions it could come into play more, in terms of having a readily obvious impact on frame rate over a given time frame, provided the CPU and the GPU's other parts can still pull their weight. I need a clearer picture of what's happening and why. I'm sure "Tech Jesus" at Gamers Nexus will explain it all in over-provisioned deep analysis.



Valantar said:


> AFAIK that is mainly because it's only at 4k (and higher) that you can make any real use of the increased FP32 of Ampere, while at lower resolutions you're bottlenecked by other parts of the arch (which weren't doubled).


Honestly, while Ampere's scaling perhaps contributes, for certain the Infinity Cache works out to a 2.17x bandwidth increase, with a 108.5% I/O improvement, or 54.25% reduced latency in essence, which is more pronounced than adjusting for more FP32 workloads rather than FP16, for example. I think the Ampere aspect comes into play as well, but perhaps the Infinity Cache is the bigger element, unless I'm way off base in my assessment of the situation.
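For what it's worth, the bandwidth and latency figures quoted above are roughly two views of the same ratio: treating per-transfer time as simply inversely proportional to bandwidth (a big simplification), a 2.17x bandwidth multiplier implies about a 54% reduction in time per transfer:

```python
# Back-of-the-envelope check: if effective bandwidth rises 2.17x and
# per-transfer time is (simplistically) taken as inversely proportional
# to bandwidth, time per transfer falls by ~54%.

bandwidth_multiplier = 2.17  # the effective-bandwidth gain quoted above

time_ratio = 1 / bandwidth_multiplier   # new per-transfer time vs old
reduction_pct = (1 - time_ratio) * 100  # percent reduction

print(f"per-transfer time reduced by {reduction_pct:.1f}%")
```

This is only a sanity check of the arithmetic, not a claim about actual memory latency, which depends on far more than raw bandwidth.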


----------



## Valantar (Oct 30, 2020)

InVasMani said:


> Honestly, while Ampere's scaling perhaps contributes, for certain the Infinity Cache works out to a 2.17x bandwidth increase with a 58.5% I/O latency reduction in essence, which is more pronounced than adjusting for more FP32 workloads rather than FP16, for example. I think the Ampere aspect comes into play as well, but perhaps the Infinity Cache is the bigger element, unless I'm way off base in my assessment of the situation.


I was only speaking of how Ampere scales in comparison to Turing. Comparing how a so-far-unreleased architecture with a never-before-seen feature scales against how two other architectures scale ... that's impossible. We know that Ampere does relatively better at 4K than at lower resolutions. From what we've seen from AMD so far, the same is not true for RDNA 2; it seems to scale much more traditionally. But we can't know anything for sure until we have reviews in. Still, AMD's 1440p numbers look quite a lot better compared to Ampere than their 4K ones do.


----------



## Zach_01 (Oct 30, 2020)

We sure need a more technical explanation of, and approach to, this new thing. I'm also interested in the more technical parts and details of any technology that comes along.

For my simple, non-technical (let alone professional) understanding, I'm thinking that if the IC truly delivers wide bandwidth (800+ bit effective) across different workload levels (up to 4K, which is more common than 8K) and scales well across them, then the real bottleneck for any better performance is, as you also stated, directly or not, the GPU's cores and their surrounding I/O. And if that's really true, they've managed to remove the bandwidth bottleneck completely, up to 4K at least.

It's radical! But also not a reinvention of the wheel. I can't believe nVidia's engineers haven't thought of such an implementation. But I can compare nVidia's approach to Intel's. AMD has taken steps in the CPU world toward a unified arch with chiplets that scale really well from just one to a large number of them, with its cons.

Intel does not do that, but rather was always betting on a stronger arch in its core, which couldn't scale well beyond a certain number. Today nVidia's approach is doing the same in reverse: it performs better on heavy workloads but does not scale well on lighter ones.

nVidia can't implement such a large cache because it doesn't have room for it in its arch, which is occupied by Tensor and RT cores. That's why they need the super-high-speed 6X VRAM to keep feeding the CUDA cores with data.
In a stretched sense, you could say that AMD's arch (both CPU and GPU) is more open, and nVidia's more closed and proprietary. Also, RDNA in general is more of a gaming approach, while Ampere (starting with Turing) is more of a workload one that does well in loads other than gaming, like GCN, which was really strong outside gaming.

Rumors say the next RDNA3 will be closer to the Zen 2/3 approach: chunks of cores/dies tied together with large pools of cache.
That's why I believe it will not come soon. It will be way more than a year.


----------



## Camm (Oct 30, 2020)

Zach_01 said:


> For my simple, non-technical (let alone professional) understanding, I'm thinking that if the IC truly delivers wide bandwidth (800+ bit effective) across different workload levels (up to 4K, which is more common than 8K) and scales well across them, then the real bottleneck for any better performance is, as you also stated, directly or not, the GPU's cores and their surrounding I/O. And if that's really true, they've managed to remove the bandwidth bottleneck completely, up to 4K at least.



Okay, people tend to think of bandwidth as a constant thing ("I'm always pushing 18 Gbps, or whatever the hell it is") and that if I'm not pushing the most data at all times, the GPU is going to stall.

The reality is that only a small subset of data is really necessary to keep the GPU fed so it doesn't stall. The majority of the data (in a gaming context, anyway) isn't anywhere near as latency-sensitive and can be much more flexible about when it comes across the bus. IC helps by doing two things:
A: It stops writes and subsequent retrievals from going back out to general memory for the majority of that data (letting it live in cache, where a shader is likely to retrieve it again), and
B: It acts as a buffer for further deprioritising data retrieval, letting likely-needed data be retrieved early, held momentarily in cache, then ingested into the shader pipeline rather than written back out to VRAM.

As for Nvidia: yep, they would have, but the amount of die space chewed up by even 128 MB of cache is pretty ludicrously large. AMD has balls chasing such a strategy, tbh (but that's probably why we saw 384-bit engineering-sample cards earlier in the year: if IC didn't perform, they could fall back to a wider bus).
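The bus-relief effect described above can be reduced to a one-line model: traffic served from the on-die cache never touches GDDR6, so average bandwidth is a hit-rate-weighted blend of the two pools. The 512 GB/s (256-bit GDDR6) and 1664 GB/s (Infinity Cache) figures here are assumptions for illustration, as are the hit rates:

```python
# Hit-rate-weighted effective bandwidth, as a sketch of the argument
# above. Bandwidth figures and hit rates are illustrative assumptions.

GDDR6_BW = 512.0   # GB/s, assumed 256-bit GDDR6 @ 16 Gbps
IC_BW    = 1664.0  # GB/s, assumed on-die Infinity Cache bandwidth

def effective_bandwidth(hit_rate):
    """Average bandwidth seen by the shaders for a given cache hit rate."""
    return hit_rate * IC_BW + (1 - hit_rate) * GDDR6_BW

for h in (0.0, 0.4, 0.58):
    print(f"hit rate {h:.0%}: {effective_bandwidth(h):.0f} GB/s")
```

The point of the sketch is that the multiplier lives or dies on the hit rate, which is exactly why working-set size (and therefore resolution) matters so much in this thread's scaling debate.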


----------



## mtcn77 (Oct 30, 2020)

InVasMani said:


> I'm confusing myself trying to think about it now, honestly. I get what you're saying about Ampere, but at the same time the Infinity Cache is drastically better on bandwidth and I/O. At lower resolutions it could come into play more, in terms of having a readily obvious impact on frame rate over a given time frame, provided the CPU and the GPU's other parts can still pull their weight. I need a clearer picture of what's happening and why. I'm sure "Tech Jesus" at Gamers Nexus will explain it all in over-provisioned deep analysis.
> 
> Honestly, while Ampere's scaling perhaps contributes, for certain the Infinity Cache works out to a 2.17x bandwidth increase with a 108.5% I/O improvement, or *54.25% reduced latency* in essence, which is more pronounced than adjusting for more FP32 workloads rather than FP16, for example. I think the Ampere aspect comes into play as well, but perhaps the Infinity Cache is the bigger element, unless I'm way off base in my assessment of the situation.


I think this also encapsulates the gist of it somewhat.
Prior to this, AMD struggled with instruction pipeline functions. Successively, they streamlined the pipeline operation flow, dropped instruction latency to 1, and started implementing dual-issued operations. That, or I don't know how they could increase shader speed 7.9-fold with simple progressions to the same architecture.



Camm said:


> As for Nvidia: yep, they would have, but the amount of die space chewed up by even 128 MB of cache is pretty ludicrously large. AMD has balls chasing such a strategy, tbh (but that's probably why we saw 384-bit engineering-sample cards earlier in the year: if IC didn't perform, they could fall back to a wider bus).


And remember, this is only because they had previously experimented with it; otherwise there would be no chance they'd know first-hand how much power budget it would cost them. SRAM has a narrow efficiency window.
There used to be a write-up comparing AMD's and Intel's cell-to-transistor ratios, the summary being that AMD had integrated higher and more efficient transistor-count units, all because of available die space.


----------



## Dave65 (Oct 30, 2020)

In case anyone missed it.


----------



## InVasMani (Oct 30, 2020)

Think about system memory: latency vs bandwidth, from latency tightening vs frequency scaling. I think that's going to come into play here quite a bit with the Infinity Cache; it has to. I believe AMD tried to get the design well balanced and efficient, without oddball compromises and imbalances. We can already glean a fair amount from what AMD has shown, but we'll know more for certain with further data, naturally. As I said, I'd like to see the 1080p results. What you're saying is fair, though: we need to know more about Ampere and RDNA2 before we can conclude exactly which parts of the design lead to which performance differences and their impact on resolution scaling. It's safe to say, though, that there appear to be sweeping design differences between RDNA2 and Ampere where resolution scaling is concerned.

If PCIe 4.0 doubled the bandwidth and cut the I/O bottleneck in half, and this Infinity Cache is doing something similar, that's a big deal for CrossFire. Mantle/Vulkan, DX12, VRS, the DirectStorage API, Infinity Fabric, Infinity Cache, PCIe 4.0 and other things all make mGPU easier; if anything, the only real barrier is developers.
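The PCIe half of that point is simple arithmetic: doubling link bandwidth halves the time a given payload spends on the bus. The ~16 vs ~32 GB/s figures for an x16 link are approximate effective rates, and the 4 GB payload is arbitrary:

```python
# Doubling link bandwidth halves the bus time for a given payload.
# The x16 link rates below are approximate effective figures.

LINK_BW = {"PCIe 3.0 x16": 16.0, "PCIe 4.0 x16": 32.0}  # GB/s, approx.

def transfer_ms(payload_gb, bw_gbs):
    """Milliseconds to move a payload at a given link bandwidth."""
    return payload_gb / bw_gbs * 1000

for link, bw in LINK_BW.items():
    print(f"{link}: {transfer_ms(4.0, bw):.0f} ms")  # 4 GB payload
```

Note this halves *transfer time*, not first-word latency; for small mGPU synchronization messages the fixed link latency still dominates.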


I feel like AMD should just do a quincunx socket setup. Sounds a bit crazy, but they could have 4 APUs and a central processor: Infinity Fabric and Infinity Cache between the 4 APUs and the central processor, and a shared quad-channel memory for the central processor with shared dual-channel access to it from the surrounding APUs. The APUs would have 2 cores each to communicate with the adjacent APUs, and the rest could be GPU design. The central processor would probably be a pure CPU design, high IPC, high frequency, perhaps big.LITTLE: a beastly single-core design at the heart of the unit, with 8 smaller surrounding physical cores handling odds and ends. There could be a lot of on-the-fly compression/decompression involved as well, to maximize bandwidth and increase I/O. The chipset would be gone entirely, integrated into the CPU design through the socketed chips involved. Lots of bandwidth, processing, single-core performance along with multi-core performance, load balancing, heat distribution, and quick, efficient data transfer between parts. It's a fortress of sorts, but it could probably fit within an ATX design reasonably well. You might start out with dual/quad channel and two socketed chips, the heart/brain along with one APU, and build it up down the road for scalable performance improvements. They could integrate FPGA tech into the equation, but that's another matter, and cyborg territory we probably shouldn't speak of right now, though the cyborg is coming.



mtcn77 said:


> I think this also encapsulates the gist of it somewhat.
> Prior to this, AMD struggled with instruction pipeline functions. Successively, they streamlined the pipeline operation flow, dropped instruction latency to 1, and started implementing dual-issued operations. That, or I don't know how they could increase shader speed 7.9-fold with simple progressions to the same architecture.
> 
> 
> ...


If I'm not mistaken, RDNA transitioned to a form of twin-CU design with task-scheduling workgroups that allows a kind of serial and/or parallel flexibility within them. I could be wrong in my interpretation, but I think it allows them to double up on a single task, or split up and each handle two smaller tasks within the same twin-CU grouping. Basically a work-smarter-not-harder hardware technique. Granular is where it's at: more neurons.

I think ideally you want a brute-force single core that occupies the most die space, then scale downward by ~50% with twice the core count. So with four chip types, 1c/2c/4c/8c, performance per core would scale downward as core count increases, but efficiency per core would increase, and provided a core can perform its task quickly enough, that saves power even if it doesn't perform the task as fast; it doesn't always need to, either. The 4c/8c chips wouldn't be ideal for gaming frame rates overall, but they would probably be good for handling and calculating different AI within a game, as opposed to pure rendering; AI animations and such don't have to be as quick as scene rendering, for example, it's just not as vital. I wonder if variable rate shading will help make better use of core assignments across more cores; in theory it should, if they are assignable.


----------



## mtcn77 (Oct 30, 2020)

InVasMani said:


> If I'm not mistaken, RDNA transitioned to a form of twin-CU design with task-scheduling workgroups that allows a kind of serial and/or parallel flexibility within them. I could be wrong in my interpretation, but I think it allows them to double up on a single task, or split up and each handle two smaller tasks within the same twin-CU grouping. Basically a work-smarter-not-harder hardware technique. Granular is where it's at: more neurons.


We can get deep into this subject. It holds so much water.


----------



## InVasMani (Oct 30, 2020)

Camm said:


> Okay, people tend to think of bandwidth as a constant thing ("I'm always pushing 18 Gbps, or whatever the hell it is") and that if I'm not pushing the most data at all times, the GPU is going to stall.
> 
> The reality is that only a small subset of data is really necessary to keep the GPU fed so it doesn't stall. The majority of the data (in a gaming context, anyway) isn't anywhere near as latency-sensitive and can be much more flexible about when it comes across the bus. IC helps by doing two things:
> A: It stops writes and subsequent retrievals from going back out to general memory for the majority of that data (letting it live in cache, where a shader is likely to retrieve it again), and
> ...


Agreed: granular chunk blocks. If you can push more of them, quicker and more efficiently, data flow and congestion are better handled and less stutter is encountered. CF/SLI isn't dead because it doesn't work; it's been regressing for other reasons: developer support, relative power draw for the same performance versus a single-card solution, and user sentiment toward both of those issues. It's not that it doesn't work, it's just less ideal; it does offer more performance that scales well when done right, and with fewer problematic negatives than in the past.

A lot of it hinges on developers supporting it well, and that's the big problem: no matter how good the tech is, if they do a poor job implementing it, you have a real problem if you're reliant on it. Same with tech like DLSS: it's great, or useful anyway, until it's not, or not implemented. TXAA was the same deal: wonderful to a point, but selectively available with mixed results. If AMD/Nvidia manage to get past the developer, power-efficiency, and latency quirks with CF/SLI, they'll be great; that's always been what's held them back, unfortunately. It's what caused Lucid Hydra to be an overall failure of sorts. I suppose it had its influence just the same, in what was learned from it that could be applied to avoid those same pitfalls: stuff like the Mantle/DX12/Vulkan APIs, which are more flexible, and even things like variable rate shading. Someone had to break things down into smaller tasks between two separate pieces of hardware and try to make it more efficient, or learn how it could be made better. Eventually we may get close to Lucid Hydra working the way it was actually envisioned, but with more steps involved than they had hoped for.



Zach_01 said:


> Rumors say the next RDNA3 will be closer to the Zen 2/3 approach: chunks of cores/dies tied together with large pools of cache.
> That's why I believe it will not come soon. It will be way more than a year.


I would think RDNA3 and Zen 4 will arrive in about the same time frame and be 5 nm based, with improvements to caches, cores, frequency, IPC, and power gating on both, plus other possible refinements and introductions. I think big.LITTLE is something to think about, and perhaps some FPGA tech being applied to designs. I wonder if the motherboard chipset might be turned into an FPGA or incorporate some of that tech, same with the CPU/GPU: just re-route some new designs and/or re-configure them a bit depending on need. FPGAs are wonderfully flexible; not perfect, but they'll certainly improve and become even more useful. Unused USB/PCIe/M.2 slots? Cool, I'll reuse that for X or Y. I think it could eventually get to that point, hopefully, and if it can be done efficiently, that's cool as hell.


----------



## Camm (Oct 30, 2020)

InVasMani said:


> CF/SLI isn't dead because it doesn't work; it's been regressing for other reasons: developer support, relative power draw for the same performance versus a single-card solution, and user sentiment toward both of those issues.



Probably missing the biggest issue: many postprocessing techniques are essentially impossible to do on a CrossFire/SLI solution using scene-dividing or alternate-frame techniques (the most common ways these work).

mGPU tries to deal with this by setting bitmasks to keep certain tasks on a single GPU, plus an abstracted copy engine to reduce coherency requirements, but it comes down to the developer needing to explicitly manage that at the moment.


----------



## mtcn77 (Oct 31, 2020)

Camm said:


> Probably missing the biggest issue: many postprocessing techniques are essentially impossible to do on a Crossfire/SLI solution that uses scene-dividing or alternate-frame techniques (the most common ways these work).


You have 2 frontends, though. 2 frontends give 2x faster CU wavefront initiation and SIMD wave instruction issue. While I admit it might split a solid single pipeline into two and create needless time seams during which the pipeline runs idle, let's be careful to notice there are no pipeline stalls in RDNA2 whatsoever. The wavefronts used to be 64-wide issued over 4 cycles on 16-lane SIMDs, a 4-cycle latency gap; now they are 32-wide on 32-lane SIMDs, enough to cover each lane each clock cycle.
It is also not the same pipeline state object between GCN and RDNA2 either: RDNA2 can prioritise compute and can stop the graphics pipeline entirely. Since GPUs are large latency-hiding devices, I think this would give us the time needed to seam the images back into one before the timestamp is missed, but I'm rambling.
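The wavefront point works out roughly like this (back-of-the-envelope Python; the 4-cycle GCN cadence and 1-cycle RDNA cadence are the figures AMD gives in its RDNA whitepaper):

```python
# Cycles for a SIMD to issue one instruction across a full wavefront.

def issue_cycles(wave_width, simd_lanes):
    """A wavefront wider than the SIMD is pushed through in multiple passes."""
    return max(1, -(-wave_width // simd_lanes))  # ceiling division

gcn_cycles = issue_cycles(64, 16)   # GCN: wave64 on a 16-lane SIMD -> 4 cycles
rdna_cycles = issue_cycles(32, 32)  # RDNA2: wave32 on a 32-lane SIMD -> 1 cycle
```

That 4-to-1 drop is the "4 latency gap" closing: with every lane covered every clock, a single wavefront no longer leaves issue bubbles for the scheduler to hide.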








AMD Navi vs NVIDIA Turing: Comparing the Radeon and GeForce Graphics Architectures | Hardware Times
www.hardwaretimes.com

> AMD's Navi and NVIDIA's Turing (used to) power the latest GPUs from Teams Red and Green, respectively. The Radeon Navi cards are based on the RDNA architecture, a complete reimagining of AMD's approach to gaming GPUs with low latencies and high clocks. NVIDIA's RTX 20 series "Turing"...


----------



## InVasMani (Oct 31, 2020)

Post-processing is an interesting point on the Crossfire/SLI matter. That said, there are workaround solutions to that issue, such as the mCable. I don't see why AMD/Nvidia couldn't build a GPU into the display itself that does post-processing at that end in a timely manner, and something more advanced at that. I also find it odd that interlaced mGPU techniques like 3DFX used haven't made a comeback; the bandwidth savings are huge, so use a bit higher resolution and downscale for something akin to a higher DPI. Consider: PCIe 3.0 vs PCIe 4.0 gives you double the bandwidth and, the way I see it, half the latency; interlacing is the same story on bandwidth, and I guess in turn latency; combine both and that's 4x the bandwidth at 1/4 the latency. Throw in Infinity Cache, which is very close to the same thing (slightly better, actually), and you're at 1/8 the latency with 8x the bandwidth. Yes, interlacing perceptibly looks a bit worse, which I think is largely down to the sharpness of the image; it's a bit like DLSS in that you've got fewer pixels to work with, so of course it appears blurrier and less sharp by contrast. On the plus side, you could combine that with a device like the mClassic, I would think, and work a little magic to upscale the quality. Then you've got compression as well; you can use LZX compression on VRAM perfectly fine, for example, though doing it quickly would be challenging depending on the file sizes involved. Doing it on the fly is certainly an option to be considered in the future, and it too increases effective bandwidth and reduces latency from higher I/O.
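The interlacing half of that argument is easy to put rough numbers on (a sketch of raw framebuffer traffic only; it ignores compression, blanking intervals, and everything else in a real link):

```python
# Raw framebuffer bandwidth, progressive vs interlaced output.
# An interlaced field carries every other scanline, halving pixels per pass.

def bandwidth_gbps(width, height, fps, bytes_per_pixel=4, interlaced=False):
    per_frame = width * height * bytes_per_pixel
    if interlaced:
        per_frame //= 2
    return per_frame * fps / 1e9

progressive_4k = bandwidth_gbps(3840, 2160, 60)                  # ~1.99 GB/s
interlaced_4k = bandwidth_gbps(3840, 2160, 60, interlaced=True)  # ~1.00 GB/s
```

Halving per-pass pixel traffic is where the claimed savings come from; whether the deinterlaced result looks acceptable is the sharpness trade-off described above.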



mtcn77 said:


> You have 2 frontends, though. 2 frontends give 2x faster CU wavefront initiation and SIMD wave instruction issue. While I admit it might split a solid single pipeline into two and create needless time seams during which the pipeline runs idle, let's be careful to notice there are no pipeline stalls in RDNA2 whatsoever. The wavefronts used to be 64-wide issued over 4 cycles on 16-lane SIMDs, a 4-cycle latency gap; now they are 32-wide on 32-lane SIMDs, enough to cover each lane each clock cycle.
> It is also not the same pipeline state object between GCN and RDNA2 either: RDNA2 can prioritise compute and can stop the graphics pipeline entirely. Since GPUs are large latency-hiding devices, I think this would give us the time needed to seam the images back into one before the timestamp is missed, but I'm rambling.
> 
> 
> ...


I'd like to add that the perks of mGPU for path tracing are enormous as well; think how much more quickly denoising could be done in that scenario! The prospect of four discrete GPUs, each with a chunk of Infinity Cache, running to a CPU with a larger chunk of Infinity Cache that it can split amongst them is a very real future, and vastly better than 4-way GTX 980/980 Ti was on those old, slower, less-multicore Intel workstation chips and motherboards. That kind of setup is archaic compared to what we've got now; it may as well be a 486, it just looks so dated next to current tech in so many areas.


----------



## Zvijer (Oct 31, 2020)

AMD all the way... finally better FPS than the "Green shelter"...


----------



## InVasMani (Oct 31, 2020)

A Ryzen 5600X and 6800 setup looks like it's going to be quite tempting. I wonder if AMD will do any package deals on that combination; it'll be great for 1080p/1440p in particular.


----------



## Camm (Oct 31, 2020)

InVasMani said:


> A Ryzen 5600X and 6800 setup looks like it's going to be quite tempting. I wonder if AMD will do any package deals on that combination; it'll be great for 1080p/1440p in particular.



I would be *very* surprised if AMD doesn't offer package deals with 5600X+6700XT, 5800X+6800, 5900X+6800XT and 5950X+6900XT combos, or some sort of rebate system where, if you show you bought both in a single transaction, you can apply for $50 back or something.


----------



## Zach_01 (Oct 31, 2020)

InVasMani said:


> A Ryzen 5600X and 6800 setup looks like it's going to be quite tempting. I wonder if AMD will do any package deals on that combination; it'll be great for 1080p/1440p in particular.


This combo can easily do 4K, unless you're after high competitive framerates.


----------



## Vya Domus (Oct 31, 2020)

Zach_01 said:


> unless you're after high competitive framerates



Because then what? You get 350 FPS instead of 320 or something? That system will get you high performance in anything.


----------



## Zach_01 (Oct 31, 2020)

Vya Domus said:


> Because then what? You get 350 FPS instead of 320 or something? That system will get you high performance in anything.


I meant 4K.
He said this system would be great for 1080p/1440p, and I said it could do 4K, unless he wants to stay at lower res for high (100+) framerates.
All three current 6000 GPUs are meant for 4K, not 1080p/1440p. That was the point...

I didn't speak in numbers, but that's what I meant.


----------



## InVasMani (Oct 31, 2020)

Zach_01 said:


> I meant 4K.
> He said this system would be great for 1080p/1440p, and I said it could do 4K, unless he wants to stay at lower res for high (100+) framerates.
> All three current 6000 GPUs are meant for 4K, not 1080p/1440p. That was the point...
> 
> I didn't speak in numbers, but that's what I meant.


I get what you're saying, and I agree it'll handle 4K quite well in addition to 1080p/1440p; I'm leaning towards 120 Hz+ at 1080p/1440p, taking into account newer games that are more demanding. In the case of 4K, I think that combination won't always deliver 60 FPS as fluidly, especially in scenarios where real-time ray tracing gets involved, and even otherwise at times, at least not without some subtle compromises to a few settings. You're right, though, that it's plenty capable of 60 FPS+ at 4K in quite a few scenarios, and hell, even upwards of 120 FPS at 4K in some cases with intelligent settings compromises. That said, I don't plan on getting a 4K 120 Hz display regardless at current price premiums. The price sweet spot for 100 Hz+ displays is definitely the 1080p and 1440p options.


----------



## hurakura (Nov 1, 2020)




----------



## jcchg (Nov 1, 2020)

renz496 said:


> See what happened over the last 10 years. Did the price war really help AMD gain more market share?



Have you heard about AMD Polaris or AMD Ryzen?


----------



## renz496 (Nov 3, 2020)

jcchg said:


> Have you heard about AMD Polaris or AMD Ryzen?



Did it help AMD gain more discrete GPU market share? For the past 10 years we have seen AMD competing on price, and yet their market share never exceeded the 40% mark; the last time AMD had over 40% was back in 2010. Despite all the undercutting AMD has done for the past 10 years, they have been pretty much suppressed by Nvidia to below 40%, and until recently 30% was about the best they could hold. The latest report from JPR shows that AMD's discrete GPU market share is already down to 20%.

A price war is only effective if you can keep gaining market share from your competitor. With Ryzen it worked, but what has happened in the GPU world over the past 10 years shows us that price wars are ineffective against Nvidia. And from what I have seen, when Nvidia starts retaliating with a price war, the one that ends up giving up first is AMD.


----------



## Valantar (Nov 3, 2020)

renz496 said:


> Did it help AMD gain more discrete GPU market share? For the past 10 years we have seen AMD competing on price, and yet their market share never exceeded the 40% mark; the last time AMD had over 40% was back in 2010. Despite all the undercutting AMD has done for the past 10 years, they have been pretty much suppressed by Nvidia to below 40%, and until recently 30% was about the best they could hold. The latest report from JPR shows that AMD's discrete GPU market share is already down to 20%.
> 
> A price war is only effective if you can keep gaining market share from your competitor. With Ryzen it worked, but what has happened in the GPU world over the past 10 years shows us that price wars are ineffective against Nvidia. And from what I have seen, when Nvidia starts retaliating with a price war, the one that ends up giving up first is AMD.


That's way too simplistic a view. This drop can't simply be attributed to "AMD is only competing on price"; you also have to factor in everything else that affects this. In other words: the lack of a competitive flagship/high end solution since the Fury X (2015), the (mostly well deserved) reputation for running hot and being inefficient (not that that matters for most users, but most people at least want a quiet GPU), _terrible_ marketing efforts (remember "Poor Volta"?), overpromising about new architectures, and not least resorting to selling expensive GPUs cheaply due to the inability to scale the core design in a competitive way, eating away at profits and thus R&D budgets, deepening the issues. And that's just scratching the surface.

RDNA hasn't made anything worse, but due to the PR disaster that was the state of the drivers (which, while overblown, had some truth to it) it didn't help either. RDNA 2 rectifies pretty much every single point here. No, the 6900 XT isn't likely to be directly competitive with the 3090 out of the box, but it's close enough, and the 6800 XT and 6800 seem eminently competitive. The XT is $50 cheaper than the 3080, but the non-XT is $79 more than the 3070, so they're not selling these as a budget option. And it's obvious that RDNA 2 can scale down to smaller chips with great performance and efficiency in the higher volume price ranges.

Does that mean AMD will magically jump to 50% market share? Obviously not. Mindshare gains take _a lot_ of time, and require consistency over time to materialize at all. But it would be extremely surprising if these GPUs don't at least start AMD on that road.


----------



## dinmaster (Nov 9, 2020)

Hoping that the reviews show performance with the 3000-series CPUs, Intel ones, and the new 5000 series, because that new feature (Smart Access Memory) with the 5000 series will not reflect what the majority of the community runs in their systems, and the performance gained from it will skew the benchmarks. I personally run a 3800X and would like to see the differences between the different setups. I know W1zzard will have that in the review, or a subsequent review of CPU performance scaling with the 6000 GPUs.


----------



## InVasMani (Nov 9, 2020)

The big thing with RDNA2 is that it's going to force Nvidia to react and be more competitive, just like you're seeing with Intel on the CPU side.


----------



## Valantar (Nov 9, 2020)

dinmaster said:


> Hoping that the reviews show performance with the 3000-series CPUs, Intel ones, and the new 5000 series, because that new feature (Smart Access Memory) with the 5000 series will not reflect what the majority of the community runs in their systems, and the performance gained from it will skew the benchmarks. I personally run a 3800X and would like to see the differences between the different setups. I know W1zzard will have that in the review, or a subsequent review of CPU performance scaling with the 6000 GPUs.


All serious review sites use a fixed test bench configuration for GPU reviews, and don't replace that when reviewing a new product. Moving to a new test setup thus requires re-testing every GPU in the comparison, and is something that is done periodically, but in periods with little review activity. As such, day 1 reviews will obviously keep using the same test bench. This obviously applies to TPU, which uses a 9900K-based test bench.

There will in all likelihood be later articles diving into SAM and similar features, and SAM articles are likely to include comparisons to both Ryzen 3000 and Intel setups, but those will necessarily be separate from the base review. Not least as testing like that would mean a _massive_ increase in the work required: TPUs testing covers 23 games at three resolutions, so 69 data points (plus power and thermal measurements). Expand that to three platforms and you have 207 data points, though ideally you'd want to test Ryzen 5000 with SAM both enabled and disabled to single out its effect, making it 276 data points. Then there's the fact that there are three GPUs to test, and that one would want at least one RTX comparison GPU for each test. Given that reviewers typically get about a week to ready their reviews, there is no way that this could be done in time for a launch review.

That being said, I'm very much looking forward to w1zzard's SAM deep dive.


----------

