
AMD Radeon RX 3080 XT "Navi" to Challenge RTX 2070 at $330

What's this 10% of die space then? Just basic bucket math: tensor/RT takes up about half of an SM, and SMs take up about half of the die, so you have at least 20% of the die dedicated to making RT work.
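
A quick sketch of that bucket math (the area fractions are the poster's eyeball estimates from die shots, not measured numbers):

```python
# Back-of-the-envelope die-area math from the post above.
# Both fractions are rough eyeball estimates, not measurements.
sm_fraction_of_die = 0.5   # SMs take up about half of the die
rt_fraction_of_sm = 0.5    # tensor + RT cores take up about half of an SM

rt_fraction_of_die = sm_fraction_of_die * rt_fraction_of_sm
print(f"Die area spent on tensor/RT: ~{rt_fraction_of_die:.0%}")  # ~25%, hence "at least 20%"
```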

Are you looking at comparative die shots or marketing slides? ;)

(on the basis that RT is currently too slow to run without supersampling).

Sorry, what? If RTX ran with SSAA it would be a slide show.

As for DLSS, I don't really care about the how, more that it should be providing better fidelity at the same FPS than just using a lower resolution in the first place. Which, bluntly, it doesn't.

I can appreciate that. DLSS x2 runs at native res, but performance suffers. I actually think that ML AA tech is interesting and has scope for future IQ/perf advancement, especially on the TAA front. Even MS "super resolution" via DirectML results in undersampled edges, with gaps remaining. The best description of DLSS is a lossy neural-network image compressor with a reconstruction-process side channel (the DLSS profile).
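
To make the "lossy compressor with a side channel" framing concrete, here is a minimal toy sketch; none of the names below correspond to any real API, and the "profile" is only a stand-in for per-game weights:

```python
import numpy as np

def render_low_res(full_res_frame, scale=2):
    # "Lossy compression": the game renders fewer pixels (here, a naive downsample).
    return full_res_frame[::scale, ::scale]

def reconstruct(low_res_frame, profile):
    # "Reconstruction side channel": an upscaler guided by per-game data.
    # Real DLSS uses a trained network; this toy just repeats pixels and applies a gain.
    up = low_res_frame.repeat(2, axis=0).repeat(2, axis=1)
    return np.clip(up * profile["gain"], 0.0, 1.0)

frame = np.random.rand(8, 8)    # stand-in for a natively rendered frame
profile = {"gain": 1.0}         # stand-in for the shipped DLSS profile
out = reconstruct(render_low_res(frame), profile)
print(frame.shape, "->", out.shape)  # same output size, but information was lost on the way
```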
 
Jim Keller is at Tesla now.



Let's not pretend that you are an impartial bystander, shall we?

It is about the greenboys' inability to accept a "water is wet" kind of fact.
"AMD never undercut a competitor like that" is obvious bullshit: the cheapest 8-core from Intel was $1,089 when AMD's 1800X hit at $499. End of story.
The "let's spin it" effort has come up, so far, with:

1) "But $1089 is a HEDT chip" (yay)
2) "But he meant GPUs" (actually, the 290(X) vs. the 780 wasn't that far off: $549 vs. a slower chip at $650)
3) "But we are not talking about GPUs" (coming from the "cartels are fine" guy)

Yay. Pathetic.


I believe AdoredTV does have an actual insider connection, and as I see it, AdoredTV just dropped BAD NEWS, NOT GOOD NEWS. Namely:
1) Not meeting target clocks
2) Power hungry <= the worst part
3) Losing to the Radeon VII CU for CU

As for whether a 7 nm chip with GDDR memory and 2070-ish performance is possible at $330... uh, is that even a question?


Could you guys at least hide your BH? I mean, what the fuck does "oh, but my great company has an answer to this, I don't need to hide and cry" have to do with it? Jesus.



It's the 180 we are discussing here, from the "I don't even read the first page" / "I only read Reddit titles" kid.
The video is linked on the very first page, with the most relevant parts of it as screenshots.



Do you even understand what "is GCN" means?


I recall someone estimated that 22% of the die is dedicated to it; not that much.
I was just pointing out that this is a rumor. Whether AdoredTV is your guru changes nothing. What's with the rabid attitude? Did you piss your pants hearing AdoredTV, and it started to itch now?
You're only active on TPU to attack people you disagree with and to bait.
 
Because stating "I believe he has insider links" makes him "my guru".




Oh, get lost.
Sir, come on.
Let's not pretend
 
Jim Keller is at Tesla now.


Jim Keller moved from Tesla to Intel. Not that it really has anything to do with GPUs, not to mention Navi or RTX...
 
You are right, but for the past few generations, raising false expectations through "unsanctioned" leaks is all that AMD could do in the GPU space. To the point that some people now pay over a grand for a video card :(

You honestly think AMD would leak this information to the public? It's not even impressive information; I'm sure AMD would "leak" something a little more titillating. Clocking issues, an IPC decrease, and a silly naming scheme sound like rumors and hearsay spread by non-technical staff. Just like the MSI rep who was telling someone that his B350 board wouldn't work with 3rd-gen Ryzen.
 
I don't care much about what AMD would or wouldn't leak. But at the same time, I can't help noticing they never act on these leaks, so they must be OK with the generated word of mouth.

And just look at how the world was taken by surprise by Turing or Zen. That proves that when companies want to keep something from the public, they can.
 

Well it's free publicity.
 
Zen's leaks were pretty accurate; they were based on engineering samples. We already know what Navi should perform like, considering it's based on GCN, but it looks worse, lol.
 
Going without continual incremental updates (like Nvidia's Fermi -> Turing cadence) will do that for you. It must be acknowledged that NV has executed, especially given the relative competition vacuum across the entire product stack. Not to say GCN 8/9 made no improvements over earlier GCN, but without new RTL the key shortcoming of GCN is tough to work around. IMO the front end and the entire register/cache/pipeline are three generations behind NV. The performance of Vega is reasonable in that context. Until AMD sorts out their front-end deficit (4 tris/clk vs. 6 tris/clk) they will likely have to keep pushing their silicon harder and lose out on perf/watt.
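
For a sense of what that front-end deficit means, a quick sketch (the clock speed is an illustrative round number, not any specific SKU):

```python
# Triangle setup throughput behind the "4 vs. 6 tris/clk" point.
amd_tris_per_clk, nv_tris_per_clk = 4, 6
clk_ghz = 1.6  # same illustrative clock for both, to isolate the front end

amd_gtris = amd_tris_per_clk * clk_ghz   # 6.4 Gtris/s
nv_gtris = nv_tris_per_clk * clk_ghz     # 9.6 Gtris/s
print(f"AMD: {amd_gtris:.1f} Gtris/s, NV: {nv_gtris:.1f} Gtris/s")

# To match NV's setup rate, AMD must clock ~50% higher, which costs perf/watt:
print(f"Clock needed to match: {nv_gtris / amd_tris_per_clk:.2f} GHz")
```
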
The problem is not ROP performance; it's management of resources.
GCN has changed very little over the years, while Kepler -> Maxwell -> Pascal -> Turing have continued to advance and achieve more performance per core and per GFLOP, to the point where they have about twice the performance per watt and 30-50% more performance per GFLOP.
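
"Performance per GFLOP" here is just frame rate normalized by shipped compute; a sketch of the method with placeholder numbers (these are not benchmark results):

```python
# How a "performance per GFLOP" comparison like the one above is computed.
cards = {
    "GCN card":    {"fps": 60.0, "tflops": 10.0},  # placeholder figures
    "Turing card": {"fps": 80.0, "tflops": 8.0},
}
perf = {name: c["fps"] / c["tflops"] for name, c in cards.items()}
baseline = perf["GCN card"]
for name, v in perf.items():
    print(f"{name}: {v:.1f} fps/TFLOP ({v / baseline - 1:+.0%} vs. GCN)")
# The post's 30-50% figure comes from doing the same division with real data.
```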

You can never have enough frame buffer or bandwidth.
More is usually better, except when it comes at a great cost.
16 GB of 1 TB/s HBM2 is just pointless for gaming purposes. AMD could have used 8 or even 12 GB, and priced it lower.
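
Rough VRAM arithmetic behind that point; the buffer count and asset budget below are assumptions for illustration, not measured figures:

```python
# Why 16 GB is hard to fill at 4K: the render targets themselves are tiny.
width, height, bytes_per_pixel = 3840, 2160, 4
target_mb = width * height * bytes_per_pixel / 2**20
print(f"One 4K RGBA8 render target: ~{target_mb:.0f} MB")  # ~32 MB

buffers_mb = target_mb * 10  # color, depth, G-buffer, post-FX chains, etc. (assumed)
assets_gb = 6                # generous texture/geometry budget (assumed)
total_gb = buffers_mb / 1024 + assets_gb
print(f"Ballpark total: ~{total_gb:.1f} GB, so 8 GB is already comfortable")
```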

You honestly think AMD would leak this information to the public? Its not even impressive information, I'm sure AMD would "leak" something a little more titillating.
AMD certainly does leak information when they see a reason to. But no, these specs were not leaked by AMD, as they don't leak specs until they are finalized. This thread is just another "victim" of someone's speculation on a YouTube channel…
 
GCN have always been resource utilization
Yeah, that's an oversimplification on my part, or the wrong way of saying it... Correct, it's not "bandwidth" so much as how that memory speed/throughput is utilized in GCN, and correct, that wasn't fixed in the architecture with GDDR5; even with Vega/HBM I'm not sure what changes were brought in to free that up.

And true, the "computational power" of GCN is not its problem; however, it never truly got employed in gaming engines (DX11). Today I'm not sure its extent, even for DX12, is aiding gaming enough to warrant such a huge dependence in the architecture. I understood that they tasked engineering (2015-16) with trimming that back to save power, and this "Navi" is the first chip to have that.

I can't help noticing they never act on these leaks, therefore they must be ok with the generated word of mouth.
So they are supposed to come out and confirm/deny (argue, attest, authenticate, bear out, certify, corroborate, substantiate, support, validate, verify, vindicate) every Tom, Dick, and Harry story? That's not how any smart individual or company does it. Once you start... you're giving away "something" every time you open your mouth, and where does it stop? And yes, any discussion generated from nothing but rumor is still talk keeping you or your company relevant.
The Kardashians built an empire on just that kind of crap.
 
And true, the "computational power" of GCN is not its problem; however, it never truly got employed in gaming engines (DX11). Today I'm not sure its extent, even for DX12, is aiding gaming enough to warrant such a huge dependence in the architecture. I understood that they tasked engineering (2015-16) with trimming that back to save power, and this "Navi" is the first chip to have that.
Game engines don't implement low-level GPU scheduling, dependency analysis, or low-level resource management; not even the driver can do this, as it's managed on-chip. While you can tune some aspects of a game engine and see how it impacts performance, you can't solve GCN's underlying problem in software.
 
you can't solve GCN's underlying problem in software
I don't see where I said anything about it being fixed by the software/driver. It was game engine developers who never saw the value, or had the tools, to construct things in a way that makes use of such "on-chip" resources. And much like the Bulldozer core implementation, it was the fault of going in a direction nobody else was looking to go.
 
Nvidia will lower the cost of the TU106 (2060 and 2070).

Nvidia will counter Navi 10/20 with the TU104.

Nvidia will release the RTX 2070 Ti (TU104-300A), which is (1080 Ti/Radeon VII) performance for a lower price. Possibly with 8 GB and 16 GB options.

I heard talk about an RTX 2080+ (unlocked TU104-475A) @ 2 GHz with 8 GB & 16 GB models. ((3072 CUDA cores))

The "RTX 2070 Ti" will blow away the RX 3080 XT.

As the "RTX 2080 Plus" blows away the RX 3090 XT in February 2020.

This is most likely what's going to happen.
If Navi (which is based on the old 2011 GCN design) ends up anywhere near the RTX 2070 at a $300 to $350 price, it would be an embarrassment for Nvidia.

Can AMD give us one more crack at GCN before trashing it? We'll soon find out.
 

Nvidia's level of embarrassment lessens with every day that goes by with no sign of competition. The RTX 2070 has been out since October, with RT and DLSS.

If anything, if AMD doesn't have a clear rival out by the end of summer for $350, the embarrassment will be all theirs.

The RTX 2070 is current production, not new production. Shouldn't AMD, at this stage, be releasing something better than an RTX 2070 for the same $$$?
 
Nvidia will lower the cost of the TU106 (2060/2070).

Nvidia will use the TU104 to fight against AMD's Navi 10/20 GPUs.

Nvidia is releasing the RTX 2070 Ti (TU104-300A), with the same performance as the (1080 Ti/Radeon VII), at a lower cost, to deal with AMD's Navi 10.

Nvidia also has an RTX 2080U model coming (a fully unlocked TU104 with the full 3072 CUDA cores @ 2 GHz) to deal with AMD's Navi 20.

Both the RTX 2070 Ti and the RTX 2080U come with optional 16 GB Plus models.

As always, AMD has nothing to go against Nvidia's TU102 (RTX Titan/RTX 2080 Ti).
 
for the same $$$
Let me fix that: IF AMD/RTG releases a 7 nm part before Nvidia, even if it only comes close to or nips at the RTX 2070, at a price that's 30% less, you'll see that as embarrassing?

And all the while they would be doing it with a pittance of the R&D budget/financials, amid staff restructuring and a basically new relationship with a foundry. All while using an architecture that taped out in 2010 and ultimately only saw slight revisions until this, which might be the first major overhaul. I'm not seeing it... unless you mean embarrassing for Nvidia?
 

No one cares about AMD's staff restructuring etc., except market analysts and investors. Gamers don't give a damn about that; corporate heroics don't impact in-game FPS.

7 nm means nothing unless it's cheaper and faster; equal won't be good enough. AMD's market share for GPUs is abysmal at the moment, and they need a much bigger splash than the Radeon VII was.

Meanwhile, I doubt Nvidia is just doing nothing. AMD is never going to compete by releasing a matching product six months later for 30% less. Actually, it feels like AMD doesn't even really care that much at the moment. And that's a shame.
 
Actually, it feels like AMD doesn't even really care that much at the moment.
On that we agree: AMD/RTG really has not cared to vie with Nvidia, especially in the enthusiast market. They kept up appearances, but as Intel, now forced to "sit up straight", shows: you can never be caught slouching in the seat of the presumptive king.
 
BTW, erocker was quoting and replying to my post and, surprise, I was talking only about Nvidia vs. AMD and GPUs. So the whole debate was started by me and was about GPUs, not both of them.

The 290X beating the first Titan at half the price?
 
I don't see where I said anything about it being fixed by the software/driver. It was game engine developers who never saw the value, or had the tools, to construct things in a way that makes use of such "on-chip" resources. And much like the Bulldozer core implementation, it was the fault of going in a direction nobody else was looking to go.
They couldn't even if they wanted to.
The APIs we use (Direct3D, OpenGL, and Vulkan) are GPU-architecture agnostic.
When it comes to "optimizing" game engines there is very little developers can do, and they certainly can't control the internal GPU scheduling even if they wanted to. Optimization is largely limited to tweaking buffer sizes, resource sizes, and generic operations to see what performs better, not any true low-level GPU-specific optimization like most people think.
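
In practice that kind of tuning boils down to benchmarking a handful of configurations and keeping the fastest; a minimal sketch of the idea, where render_frame() is a made-up stand-in for an engine's frame loop, not a real hook:

```python
import time

def render_frame(batch_size):
    # Stand-in for a real frame: fake cost curve that favors batch_size == 256.
    time.sleep(0.001 * (abs(batch_size - 256) / 256 + 1))

def benchmark(batch_size, frames=50):
    start = time.perf_counter()
    for _ in range(frames):
        render_frame(batch_size)
    return frames / (time.perf_counter() - start)  # fps

# Try a few buffer/batch sizes, measure, keep the best. Nothing here touches
# GPU-internal scheduling; that is the point.
results = {size: benchmark(size) for size in (64, 128, 256, 512, 1024)}
best = max(results, key=results.get)
print(f"Best batch size: {best} ({results[best]:.0f} fps)")
```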

If Navi (which is based on the old 2011 GCN design) ends up anywhere near the RTX 2070 at a $300 to $350 price, it would be an embarrassment for Nvidia.
How would anyone be embarrassed by Navi coming close to Nvidia?
AMD is the one who should be embarrassed if they can't make a "better" node and a "newer" design, with probably more cores and GFLOPS, beat last year's contender from Nvidia, which I don't expect them to do…
 