
NVIDIA GeForce RTX 4070 Founders Edition

Joined
Dec 1, 2022
Messages
77 (0.13/day)
The Navi 21 cards look bizarrely inefficient next to this thing.
Of course they do; even just going by die size, Navi 21 isn't comparable. That said, the 4070 should really be a 4060 given its die size and performance: going from the 3070 to the 4070 isn't enough of an improvement compared to previous-gen x70-tier cards.
If you intentionally overlook every shortcoming, setback and problem AMD's older GPUs have, yes.

Being worse at RT, media handling and encoding, at over double the power consumption, and subject to Radeon driver shenanigans, many of which have driven even diehards (speaking for myself here) away?

Pass.
Anyone who cares enough about RT shouldn't be buying low-end cards. The 4070 doesn't provide the level of RT performance it should for $600, but Nvidia can price the card however they want, and reviewers and fans will still hype up a really unexciting card.
IMO, media encoding and power consumption aren't good enough reasons to recommend a 4070 over other options, and driver issues are way overblown, including people whining about not getting updates for a few months.
It saves you a lot of power. With my usage and location it costs about the same as a 6800 XT but would save me about 70€ a year in energy consumption.
It will still take years for the card to pay off the difference. For people into the hobby of PC gaming who upgrade every two years (as the 3070 became outdated due to VRAM), I doubt most will notice the difference in power consumption unless the card is constantly at full load.
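As a rough sanity check on the payback argument, here is a minimal sketch of the arithmetic. Every input is an assumed placeholder rather than a measurement; the ~70€/year figure mentioned above corresponds to roughly a 100 W gap at four hours a day and German-level electricity prices:

```python
# Back-of-the-envelope payback estimate for a more efficient card.
# All inputs are illustrative assumptions, not measurements.
power_gap_w = 100     # assumed difference in gaming power draw, W
hours_per_day = 4.0   # assumed gaming hours per day
eur_per_kwh = 0.48    # assumed electricity price, EUR/kWh
price_premium = 100   # assumed extra purchase cost, EUR

yearly_saving = power_gap_w / 1000 * hours_per_day * 365 * eur_per_kwh
print(f"~{yearly_saving:.0f} EUR saved per year")                      # ~70 EUR
print(f"break-even after ~{price_premium / yearly_saving:.1f} years")  # ~1.4 years
```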
 
Last edited:
Joined
Dec 25, 2020
Messages
5,462 (4.17/day)
Location
São Paulo, Brazil
System Name "Icy Resurrection"
Processor 13th Gen Intel Core i9-13900KS Special Edition
Motherboard ASUS ROG MAXIMUS Z790 APEX ENCORE
Cooling Noctua NH-D15S upgraded with 2x NF-F12 iPPC-3000 fans and Honeywell PTM7950 TIM
Memory 32 GB G.SKILL Trident Z5 RGB F5-6800J3445G16GX2-TZ5RK @ 7600 MT/s 36-44-44-52-96 1.4V
Video Card(s) ASUS ROG Strix GeForce RTX™ 4080 16GB GDDR6X White OC Edition
Storage 500 GB WD Black SN750 SE NVMe SSD + 4 TB WD Red Plus WD40EFPX HDD
Display(s) 55-inch LG G3 OLED
Case Pichau Mancer CV500 White Edition
Power Supply EVGA 1300 G2 1.3kW 80+ Gold
Mouse Microsoft Classic Intellimouse
Keyboard Galax Stealth STL-03
Software Windows 11 IoT Enterprise LTSC 24H2
Benchmark Scores I pulled a Qiqi~
Popularity is NOT synonymous with superiority... if it were, we'd all agree that McDonald's has the highest quality food because they sell the most of it.

While I agree, I think I have outlined at length that there's nothing superior, or redeemable, about picking a last-gen AMD card (or even NVIDIA, save for the miracle of finding an RTX 3090 for $600) over this.
 
Joined
Sep 17, 2014
Messages
21,600 (6.00/day)
Location
The Washing Machine
Processor i7 8700k 4.6Ghz @ 1.24V
Motherboard AsRock Fatal1ty K6 Z370
Cooling beQuiet! Dark Rock Pro 3
Memory 16GB Corsair Vengeance LPX 3200/C16
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Samsung 850 EVO 1TB + Samsung 830 256GB + Crucial BX100 250GB + Toshiba 1TB HDD
Display(s) Gigabyte G34QWC (3440x1440)
Case Fractal Design Define R5
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse XTRFY M42
Keyboard Lenovo Thinkpad Trackpoint II
Software W10 x64
100.628437%... 0.628% faster in classic raster, while being 22% slower in ray tracing (the standard being adopted by all modern game engines moving forward) and consuming 100 W more when gaming. Seems like a wise choice to go for the "winning" RX 6800 XT!

That's also not taking into account DLSS and frame generation.
Too bad ray tracing is de facto unusable in any game going forward due to a measly 12 GB, whereas the 6800 XT carries 16 GB.

Nice try, but you've got a big green haze in front of your eyes. It's hilarious. First you cherry-pick efficiency and relative RT at 1440p while leaving out the fact that in raster it's not even faster than a two-year-old card that can be had for less. Then, to top it off, 'not taking into account DLSS and FG'... which has been shown to go obsolete gen to gen, as Ampere can't even do it :D Mate, do you even think?

You've lost all credibility to me, both as a member and as staff. You can't even put this under benefit-of-the-doubt sort of statements; this is just plain awkward nonsense you're spewing.

The 4070 is super DOA, and if you buy/bought one, you're an idiot, simple. How to confirm this? Just place its spec list next to a six-year-old 1080 Ti. It's arguably worse at the same MSRP. And you go 'but yay, +22% RT perf over last gen's competitor product, in 2023'. Hilariously silly. Here we are in 2023 with a card that has the same bandwidth and 1 GB of VRAM gained versus a 1080 Ti, and there are actually people calling it good because they can cripple their perf with RTX ON.

Oh, but of course, this is just an overall relative chart, and 22% is 'in fact a lot more where you have more RT'... :D The bottom line here truly is that RT is still in its infancy, and you're a fool paying a premium for it. You can safely leave it to Nvidia to keep your 'premium' game experience at sub-60 FPS to keep that buy urge alive! And then safely leave it to parrots like you to sell it. Jokers.
 
Last edited:

dgianstefani

TPU Proofreader
Staff member
Joined
Dec 29, 2017
Messages
4,717 (1.96/day)
Location
Swansea, Wales
System Name Silent
Processor Ryzen 7800X3D @ 5.15ghz BCLK OC, TG AM5 High Performance Heatspreader
Motherboard ASUS ROG Strix X670E-I, chipset fans removed
Cooling Optimus Block, HWLABS Copper 240/40 + 240/30, D5/Res, 4x Noctua A12x25, 2x A4x10, Mayhems Ultra Pure
Memory 32 GB Dominator Platinum 6150 MT 26-36-36-48, 56.6ns AIDA, 2050 FCLK, 160 ns tRFC, active cooled
Video Card(s) RTX 3080 Ti Founders Edition, Conductonaut Extreme, 18 W/mK MinusPad Extreme, Corsair XG7 Waterblock
Storage Intel Optane DC P1600X 118 GB, Samsung 990 Pro 2 TB
Display(s) 32" 240 Hz 1440p Samsung G7, 31.5" 165 Hz 1440p LG NanoIPS Ultragear
Case Sliger SM570 CNC Aluminium 13-Litre, 3D printed feet, custom front panel pump/res combo
Audio Device(s) Audeze Maxwell Ultraviolet, Razer Nommo Pro
Power Supply SF750 Plat, full transparent custom cables, Sentinel Pro 1500 Online Double Conversion UPS w/Noctua
Mouse Razer Viper Pro V2 8 KHz Mercury White w/Tiger Ice Skates & Pulsar Supergrip tape
Keyboard Wooting 60HE+ module, TOFU Redux Burgundy w/brass weight, Prismcaps White, Jellykey, lubed/modded
Software Windows 10 IoT Enterprise LTSC 19044.4046
Benchmark Scores Legendary
Too bad ray tracing is de facto unusable in any game going forward due to a measly 12 GB, whereas the 6800 XT carries 16 GB.

Nice try, but you've got a big green haze in front of your eyes. It's hilarious. First you cherry-pick efficiency and relative RT at 1440p while leaving out the fact that in raster it's not even faster than a two-year-old card that can be had for less. Then, to top it off, 'not taking into account DLSS and FG'... which has been shown to go obsolete gen to gen, as Ampere can't even do it :D Mate, do you even think?

You've lost all credibility to me, both as a member and as staff. You can't even put this under benefit-of-the-doubt sort of statements; this is just plain awkward nonsense you're spewing.

The 4070 is super DOA, and if you buy/bought one, you're an idiot, simple. How to confirm this? Just place its spec list next to a six-year-old 1080 Ti. It's arguably worse at the same MSRP. And you go 'but yay, +22% RT perf over last gen's competitor product, in 2023'. Hilariously silly. Here we are in 2023 with a card that has the same bandwidth and 1 GB of VRAM gained versus a 1080 Ti, and there are actually people calling it good because they can cripple their perf with RTX ON.

Oh, but of course, this is just an overall relative chart, and 22% is 'in fact a lot more where you have more RT'... :D The bottom line here truly is that RT is still in its infancy, and you're a fool paying a premium for it. You can safely leave it to Nvidia to keep your 'premium' game experience at sub-60 FPS to keep that buy urge alive! And then safely leave it to parrots like you to sell it. Jokers.
Worse than a 1080 Ti and DOA eh?

OK bud.

We'll see how well it sells.

Screenshot_20230413-115800_Opera.png
 
Joined
Mar 10, 2010
Messages
11,878 (2.26/day)
Location
Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R5 5900X/ Intel 8750H
Motherboard Crosshair hero8 impact/Asus
Cooling 360EK extreme rad+ 360$EK slim all push, cpu ek suprim Gpu full cover all EK
Memory Corsair Vengeance Rgb pro 3600cas14 16Gb in four sticks./16Gb/16GB
Video Card(s) Powercolour RX7900XT Reference/Rtx 2060
Storage Silicon power 2TB nvme/8Tb external/1Tb samsung Evo nvme 2Tb sata ssd/1Tb nvme
Display(s) Samsung UAE28"850R 4k freesync.dell shiter
Case Lianli 011 dynamic/strix scar2
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi/Asus stock
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Aimo 120
VR HMD Oculus rift
Software Win 10 Pro
Benchmark Scores 8726 vega 3dmark timespy/ laptop Timespy 6506
Too bad ray tracing is de facto unusable in any game going forward due to a measly 12 GB, whereas the 6800 XT carries 16 GB.

Nice try, but you've got a big green haze in front of your eyes. It's hilarious. First you cherry-pick efficiency and relative RT at 1440p while leaving out the fact that in raster it's not even faster than a two-year-old card that can be had for less. Then, to top it off, 'not taking into account DLSS and FG'... which has been shown to go obsolete gen to gen, as Ampere can't even do it :D Mate, do you even think?

You've lost all credibility to me, both as a member and as staff. You can't even put this under benefit-of-the-doubt sort of statements; this is just plain awkward nonsense you're spewing.

The 4070 is super DOA, and if you buy/bought one, you're an idiot, simple. How to confirm this? Just place its spec list next to a six-year-old 1080 Ti. It's arguably worse at the same MSRP. And you go 'but yay, +22% RT perf over last gen's competitor product, in 2023'. Hilariously silly. Here we are in 2023 with a card that has the same bandwidth and 1 GB of VRAM gained versus a 1080 Ti, and there are actually people calling it good because they can cripple their perf with RTX ON.

Oh, but of course, this is just an overall relative chart, and 22% is 'in fact a lot more where you have more RT'... :D The bottom line here truly is that RT is still in its infancy, and you're a fool paying a premium for it. You can safely leave it to Nvidia to keep your 'premium' game experience at sub-60 FPS to keep that buy urge alive! And then safely leave it to parrots like you to sell it. Jokers.
Indeed, I wouldn't touch it with dgianstefani's wallet/purse.

Luckily for Nvidia, some don't pay attention to reviews.
 
Joined
Apr 13, 2023
Messages
38 (0.08/day)
Outdated, irrelevant chart that comes from the... same source I posted. Techspot is run by the HUB guys, in case you didn't know.

Never mind; the original point stands. There are too many aspects where it is inferior for it to be considered an option on equal footing.
Not sure what you or others are rambling about. We're talking about price/performance or cost per frame.

16 GB 6950 XT: $600/146 = $4.11
16 GB 6800: $470/111 = $4.23
16 GB 6800 XT: $540/126 = $4.29
12 GB 4070: $600/126 = $4.76
20 GB 7900 XT: $780/161 = $4.84
24 GB 7900 XTX: $950/183 = $5.19

Your 4070 looks pretty bad, doesn't it? And that's the best-case scenario, if you get lucky and find it at MSRP. Pick a $650 AIB card and you're looking at $5.16. Big yikes.
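As a sanity check, these figures are just street price divided by the relative-performance index. A minimal Python sketch using exactly the prices and indices quoted above:

```python
# Cost per frame = price / relative performance index (figures as quoted above).
cards = [
    ("6950 XT (16 GB)",  600, 146),
    ("6800 (16 GB)",     470, 111),
    ("6800 XT (16 GB)",  540, 126),
    ("4070 (12 GB)",     600, 126),
    ("7900 XT (20 GB)",  780, 161),
    ("7900 XTX (24 GB)", 950, 183),
]

for name, price, perf in sorted(cards, key=lambda c: c[1] / c[2]):
    print(f"{name:18s} ${price / perf:.2f}/frame")

# The same 4070 bought as a $650 AIB card instead of a $600 FE:
print(f"4070 AIB @ $650    ${650 / 126:.2f}/frame")  # ~$5.16
```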
 
Joined
Jun 5, 2018
Messages
228 (0.10/day)
...Regarding RDNA 3: it is exceptionally unusual that AMD, the supposedly consumer-friendly company, hasn't serviced the highest-volume market in the midrange, and I'll call it: the reason isn't that the midrange is supplied by previous-generation overstock. It's because, if the 7900 series is anything to go by... they have a stinker on their hands.
100% agree. Which is also why I'm skipping this money-grab gen from both camps entirely.
 
Joined
Dec 25, 2020
Messages
5,462 (4.17/day)
Location
São Paulo, Brazil
System Name "Icy Resurrection"
Processor 13th Gen Intel Core i9-13900KS Special Edition
Motherboard ASUS ROG MAXIMUS Z790 APEX ENCORE
Cooling Noctua NH-D15S upgraded with 2x NF-F12 iPPC-3000 fans and Honeywell PTM7950 TIM
Memory 32 GB G.SKILL Trident Z5 RGB F5-6800J3445G16GX2-TZ5RK @ 7600 MT/s 36-44-44-52-96 1.4V
Video Card(s) ASUS ROG Strix GeForce RTX™ 4080 16GB GDDR6X White OC Edition
Storage 500 GB WD Black SN750 SE NVMe SSD + 4 TB WD Red Plus WD40EFPX HDD
Display(s) 55-inch LG G3 OLED
Case Pichau Mancer CV500 White Edition
Power Supply EVGA 1300 G2 1.3kW 80+ Gold
Mouse Microsoft Classic Intellimouse
Keyboard Galax Stealth STL-03
Software Windows 11 IoT Enterprise LTSC 24H2
Benchmark Scores I pulled a Qiqi~
Too bad Ray Tracing is de-facto unusable on any game going forward due to a measly 12GB, whereas the 6800XT carries 16GB.

Nice try but you've got a big green haze in front of your eyes. Its hilarious. First you cherry pick efficiency and relative RT at 1440p and then leaving out the fact that in raster, its not even faster than a two year old card that can be had for less. Then to top it off 'not taking into account DLSS and FG'... which has shown to go obsolete gen to gen as Ampere can't even do it :D Mate, do you even think.

You've lost all credibility to me both as member and staff. You can't even put this under benefit of the doubt sort of statements, this is just plain awkward nonsense you're spewing.

The 4070 is super DOA and if you buy/bought one, you're an idiot, simple. How to confirm this? Just place its spec list next to a 6 year old 1080ti. Its arguably worse at the same MSRP. And you go 'but yay +22% RT perf over last gen's competitor product in 2023'. Hilariously silly. Here we are in 2023 with a card with the same bandwidth and 1GB VRAM gained vs a 1080ti and there's actually people calling it good because they can cripple their perf with RTX ON.

Oh but of course, this is just an overall relative chart, and 22% is 'in fact a lot more where you have more RT'... :D The bottom line here truly is that RT is still in its infancy and you're a fool paying premium on it. You can safely leave it to Nvidia to keep your 'premium' game experience at sub 60 FPS to keep that buy urge alive! And then safely leave it to parrots like you to sell it. Jokers.

He has a point. Especially once it's equalized for energy consumption, it's a bloodbath. Even accounting for RT, it's still the better option; you'd be turning settings down on the 6800 XT anyway because it doesn't have enough processing power to handle it. Meanwhile, if VRAM starvation were such a critical concern, the RTX 3090 should be beating the 4070 Ti into a pulp, but it's not. Their performance is roughly equal, with the 3090 Ti only a few points ahead of both, nothing worth mentioning. This is likely due to Ada's new, more efficient way of resolving BVH intersections.

8 GB is low, but 12 will do okay for some time to come. Nvidia being stingy with VRAM is nothing new, and that hasn't caused AMD's cards to magically become faster; by the time it truly mattered, both were long since obsolete. Sure, I 200% agree that RAM/VRAM requirements are rising and will continue to rise (hilarious that we're having this conversation, since I'm usually the guy who openly defends throwing RAM at problems), but I have to argue that 12 GB is still well within the comfort zone for what this thing is intended to do: 1080p to 1440p gaming. I wouldn't buy it personally, but that's because I'm the kind of guy who likes to buy the good stuff, sadly priced out of reach now.

Still on the market for an affordable secondary GPU... even a Vega 56 or 64 would do me well; their prices have been going down and some gems have been showing up. I wonder if I'll get lucky?

100% agree. Which is also why I'm skipping this money-grab gen from both camps entirely.

Against my will, same here. I wanted the 4090, but not at the prices it's being sold at. If the GPU market doesn't improve, I'll use my 3090 until it croaks.

Not sure what you or others are rambling about. We're talking about price/performance or cost per frame.

16 GB 6950 XT: $600/146 = $4.11
16 GB 6800: $470/111 = $4.23
16 GB 6800 XT: $540/126 = $4.29
12 GB 4070: $600/126 = $4.76
20 GB 7900 XT: $780/161 = $4.84
24 GB 7900 XTX: $950/183 = $5.19

Your 4070 looks pretty bad, doesn't it? And that's the best-case scenario, if you get lucky and find it at MSRP. Pick a $650 AIB card and you're looking at $5.16. Big yikes.

No, it doesn't look bad at all, considering that this is far from the only metric that matters.
 
Joined
Sep 17, 2014
Messages
21,600 (6.00/day)
Location
The Washing Machine
Processor i7 8700k 4.6Ghz @ 1.24V
Motherboard AsRock Fatal1ty K6 Z370
Cooling beQuiet! Dark Rock Pro 3
Memory 16GB Corsair Vengeance LPX 3200/C16
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Samsung 850 EVO 1TB + Samsung 830 256GB + Crucial BX100 250GB + Toshiba 1TB HDD
Display(s) Gigabyte G34QWC (3440x1440)
Case Fractal Design Define R5
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse XTRFY M42
Keyboard Lenovo Thinkpad Trackpoint II
Software W10 x64
Worse than a 1080 Ti and DOA eh?

OK bud.

We'll see how well it sells.

View attachment 291354

Yes, worse. Reading comprehension, try it sometime. Cognitive skills required.

1681384758772.png


1080 Ti:

1681384807697.png


'This is fine'

You just confirmed again that you can't see the right specs in the right relation. Well done. I even spelled it out for you, twice now. Your input won't be missed going forward. You're not entirely wrong; I'm sure the 4070 will sell better as a mainstream card! It's also a given that the vast majority of buyers aren't as knowledgeable, and you're clearly sharing their level of knowledge. Again, well done. I hope your proofreader tag doesn't deteriorate further in credibility; you're sub-zero on my scale. The above numbers simply don't lie.
 
Last edited:

dgianstefani

TPU Proofreader
Staff member
Joined
Dec 29, 2017
Messages
4,717 (1.96/day)
Location
Swansea, Wales
System Name Silent
Processor Ryzen 7800X3D @ 5.15ghz BCLK OC, TG AM5 High Performance Heatspreader
Motherboard ASUS ROG Strix X670E-I, chipset fans removed
Cooling Optimus Block, HWLABS Copper 240/40 + 240/30, D5/Res, 4x Noctua A12x25, 2x A4x10, Mayhems Ultra Pure
Memory 32 GB Dominator Platinum 6150 MT 26-36-36-48, 56.6ns AIDA, 2050 FCLK, 160 ns tRFC, active cooled
Video Card(s) RTX 3080 Ti Founders Edition, Conductonaut Extreme, 18 W/mK MinusPad Extreme, Corsair XG7 Waterblock
Storage Intel Optane DC P1600X 118 GB, Samsung 990 Pro 2 TB
Display(s) 32" 240 Hz 1440p Samsung G7, 31.5" 165 Hz 1440p LG NanoIPS Ultragear
Case Sliger SM570 CNC Aluminium 13-Litre, 3D printed feet, custom front panel pump/res combo
Audio Device(s) Audeze Maxwell Ultraviolet, Razer Nommo Pro
Power Supply SF750 Plat, full transparent custom cables, Sentinel Pro 1500 Online Double Conversion UPS w/Noctua
Mouse Razer Viper Pro V2 8 KHz Mercury White w/Tiger Ice Skates & Pulsar Supergrip tape
Keyboard Wooting 60HE+ module, TOFU Redux Burgundy w/brass weight, Prismcaps White, Jellykey, lubed/modded
Software Windows 10 IoT Enterprise LTSC 19044.4046
Benchmark Scores Legendary
Yes, worse. Reading comprehension, try it sometime. Cognitive skills required.

View attachment 291358

1080 Ti:

View attachment 291359

'This is fine'

You just confirmed again that you can't see the right specs in the right relation. Well done. I even spelled it out for you, twice now. Your input won't be missed going forward.
504 is worse than 484?

Ok bud.

:laugh:

Even if your critical thinking is truly this simplistic, comparing specs without looking at actual performance (where the 4070 is almost twice as fast), 484 is the lower number and therefore worse, no?
 
Joined
Apr 13, 2023
Messages
38 (0.08/day)
He has a point. Especially once it's equalized for energy consumption, it's a bloodbath. Even accounting RT, it's still a better option, you'd be turning settings down on the 6800 XT anyway because it doesn't have enough processing power to handle it, meanwhile, if VRAM starvation was such a critical concern, the RTX 3090 should be beating the 4070 Ti into a pulp, but it's not. Their performance is roughly equal, with the 3090 Ti only a few points ahead of both, nothing worth mentioning. This is likely due to Ada's new, more efficient way to resolve BVH intersections.

8 GB is low, but 12 will do okay for some time to come. Nvidia being stingy with VRAM is nothing new, and that hasn't caused AMD's cards to magically become faster, by the time it truly mattered, both were long since obsolete. Sure, I 200% agree that RAM/VRAM requirements are rising and will continue to rise (hilarious we are having this conversation since I am usually the guy who openly defends throwing RAM at problems), but I have to argue that 12 GB is still well within the comfort zone for what's this thing is intended to do, 1080p to 1440p gaming, I wouldn't buy it, personally, but that's because I'm the kind of guy who likes to buy the good stuff - sadly priced out of reach now.

Still on the market for an affordable, secondary GPU... even a Vega 56 or 64 would do me well, their price has been going down and some gems have been showing up, I wonder if I'll get lucky?



Against my will, same here. I wanted the 4090, but not at the prices it's being sold. If GPU market doesn't improve, I will use my 3090 until it croaks.



No, it doesn't look bad at all, considering that this is far from the only metric that matters.
What metric is infinitely better? If you care so much about getting the best, then get the 4090.

I can order a $470 6800 and I'll just laugh at the 3070/3070 Ti/3080/4070 users.
 
Joined
Dec 25, 2020
Messages
5,462 (4.17/day)
Location
São Paulo, Brazil
System Name "Icy Resurrection"
Processor 13th Gen Intel Core i9-13900KS Special Edition
Motherboard ASUS ROG MAXIMUS Z790 APEX ENCORE
Cooling Noctua NH-D15S upgraded with 2x NF-F12 iPPC-3000 fans and Honeywell PTM7950 TIM
Memory 32 GB G.SKILL Trident Z5 RGB F5-6800J3445G16GX2-TZ5RK @ 7600 MT/s 36-44-44-52-96 1.4V
Video Card(s) ASUS ROG Strix GeForce RTX™ 4080 16GB GDDR6X White OC Edition
Storage 500 GB WD Black SN750 SE NVMe SSD + 4 TB WD Red Plus WD40EFPX HDD
Display(s) 55-inch LG G3 OLED
Case Pichau Mancer CV500 White Edition
Power Supply EVGA 1300 G2 1.3kW 80+ Gold
Mouse Microsoft Classic Intellimouse
Keyboard Galax Stealth STL-03
Software Windows 11 IoT Enterprise LTSC 24H2
Benchmark Scores I pulled a Qiqi~
Yes, worse. Reading comprehension, try it sometime. Cognitive skills required.

View attachment 291358

1080 Ti:

View attachment 291359

'This is fine'

You just confirmed again that you can't see the right specs in the right relation. Well done. I even spelled it out for you, twice now. Your input won't be missed going forward.

Raw memory bandwidth ceased to matter some time ago, with the advent of large caches and efficient memory compression/lossless data-management algorithms. Remember that the RTX 3090's memory bandwidth is in the vicinity of, and with a very quick OC can exceed, the terabyte-per-second mark, same as AMD's old Radeon VII (mine hit 1.25 TB/s easily), but that is still slower, off-die memory. That the 6900 XT does what it does with half its competitor's memory bandwidth is entirely down to cache, and it's why high hit rates are essential for it to keep its performance up.

If I had to guess, the 484 GB/s of the GTX 1080 Ti would be easily matched by around half of that on an RDNA 2 design such as the 6600 XT... and say, don't they perform about the same, too? I think the 6600 XT is actually around 10% faster, if I recall correctly.
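For what it's worth, the usual back-of-the-envelope model behind this cache argument is a hit-rate-weighted average of cache and VRAM bandwidth. A minimal sketch in Python, where the hit rates and the cache-side bandwidth are pure placeholders rather than measured Infinity Cache figures:

```python
# Effective bandwidth of a GPU with a large last-level cache:
# hits are served at cache speed, misses fall through to VRAM.
def effective_bandwidth(hit_rate: float, cache_gbps: float, vram_gbps: float) -> float:
    return hit_rate * cache_gbps + (1.0 - hit_rate) * vram_gbps

# Placeholder numbers for a 6600 XT-like design: 256 GB/s of raw VRAM
# bandwidth behind a cache assumed to serve hits at ~1000 GB/s.
print(effective_bandwidth(0.55, 1000.0, 256.0))  # higher hit rate (1080p-ish): ~665 GB/s
print(effective_bandwidth(0.20, 1000.0, 256.0))  # lower hit rate (4K-ish): ~405 GB/s
```

The same sketch also shows the counterpoint raised below: as the hit rate falls at higher resolutions, the effective figure collapses back toward raw VRAM bandwidth.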
 
Joined
Mar 10, 2010
Messages
11,878 (2.26/day)
Location
Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R5 5900X/ Intel 8750H
Motherboard Crosshair hero8 impact/Asus
Cooling 360EK extreme rad+ 360$EK slim all push, cpu ek suprim Gpu full cover all EK
Memory Corsair Vengeance Rgb pro 3600cas14 16Gb in four sticks./16Gb/16GB
Video Card(s) Powercolour RX7900XT Reference/Rtx 2060
Storage Silicon power 2TB nvme/8Tb external/1Tb samsung Evo nvme 2Tb sata ssd/1Tb nvme
Display(s) Samsung UAE28"850R 4k freesync.dell shiter
Case Lianli 011 dynamic/strix scar2
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi/Asus stock
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Aimo 120
VR HMD Oculus rift
Software Win 10 Pro
Benchmark Scores 8726 vega 3dmark timespy/ laptop Timespy 6506
100% agree. Which is also why I am skipping this money grab gen from both camps entirely.
Virtually no one should be buying generation after generation; like phones, there is little to be gained from yearly upgrades.


Except for the fact that NVIDIA owners and users are/have been trained to pay up for a new GPU every two years because VRAM ran out.

Or you have the wrong tensor cores doing nothing, etc., etc.

Yet that's a bonus to some. The RX 580 showed how to avoid e-waste; the 3070 and its 4070 replacement are showing how to make e-waste.
 
Joined
Sep 17, 2014
Messages
21,600 (6.00/day)
Location
The Washing Machine
Processor i7 8700k 4.6Ghz @ 1.24V
Motherboard AsRock Fatal1ty K6 Z370
Cooling beQuiet! Dark Rock Pro 3
Memory 16GB Corsair Vengeance LPX 3200/C16
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Samsung 850 EVO 1TB + Samsung 830 256GB + Crucial BX100 250GB + Toshiba 1TB HDD
Display(s) Gigabyte G34QWC (3440x1440)
Case Fractal Design Define R5
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse XTRFY M42
Keyboard Lenovo Thinkpad Trackpoint II
Software W10 x64
504 is worse than 484?

Ok bud.

:laugh:

Even if your critical thinking is truly this simplistic, comparing specs without looking at actual performance (where the 4070 is almost twice as fast), 484 is the lower number and therefore worse, no?
It is pretty much the same, isn't it, while the core power, as you correctly point out, is virtually doubled. I've made that very comparison; it's the same ballpark. Just like your comment on 'it's faster' while pointing out percentile gaps in raster: I agree, that's the same perf in raster.

See, this kind of bullshit response from your end is why you lose all credibility every time. Everyone with non-hazy vision can see the problem in the relative specs, core to VRAM, except you.

Raw memory bandwidth ceased to matter some time ago, with the advent of large caches and efficient memory compression/lossless data-management algorithms. Remember that the RTX 3090's memory bandwidth is in the vicinity of, and with a very quick OC can exceed, the terabyte-per-second mark, same as AMD's old Radeon VII (mine hit 1.25 TB/s easily), but that is still slower, off-die memory. That the 6900 XT does what it does with half its competitor's memory bandwidth is entirely down to cache, and it's why high hit rates are essential for it to keep its performance up.

If I had to guess, the 484 GB/s of the GTX 1080 Ti would be easily matched by around half of that on an RDNA 2 design such as the 6600 XT... and say, don't they perform about the same, too? I think the 6600 XT is actually around 10% faster, if I recall correctly.
And yet cache also turns into an Achilles heel even for AMD at 4K, where it drops off against Nvidia's 4090. At that point they're saved (most of the time), to an extent, by hard throughput still being at 800 GB/s on a 7900 XT.

Cache does NOT alleviate constraints in the very use cases where you need it most: heavy swapping, where large amounts of data are needed at will. The two are at odds with one another. At that point you are saved somewhat by generous VRAM capacity.

It's a bit like 'I have super boost clocks' under loads where you already exceed useful FPS numbers by miles. Who cares? It's nice for bench realities; in actual gaming, it doesn't amount to anything. This is where experience comes in. We've seen this all before, and crippled bandwidth, real, hard bandwidth, is and will always be a defining factor.

Put that 6600 XT on a higher res and it will die horribly, whereas in a relative sense the 1080 Ti would still be standing upright. I experience this now with a 1080 and its 8 GB: I can fill the framebuffer and FPS can go down, but the affair stays buttery smooth in frame times.
 

dgianstefani

TPU Proofreader
Staff member
Joined
Dec 29, 2017
Messages
4,717 (1.96/day)
Location
Swansea, Wales
System Name Silent
Processor Ryzen 7800X3D @ 5.15ghz BCLK OC, TG AM5 High Performance Heatspreader
Motherboard ASUS ROG Strix X670E-I, chipset fans removed
Cooling Optimus Block, HWLABS Copper 240/40 + 240/30, D5/Res, 4x Noctua A12x25, 2x A4x10, Mayhems Ultra Pure
Memory 32 GB Dominator Platinum 6150 MT 26-36-36-48, 56.6ns AIDA, 2050 FCLK, 160 ns tRFC, active cooled
Video Card(s) RTX 3080 Ti Founders Edition, Conductonaut Extreme, 18 W/mK MinusPad Extreme, Corsair XG7 Waterblock
Storage Intel Optane DC P1600X 118 GB, Samsung 990 Pro 2 TB
Display(s) 32" 240 Hz 1440p Samsung G7, 31.5" 165 Hz 1440p LG NanoIPS Ultragear
Case Sliger SM570 CNC Aluminium 13-Litre, 3D printed feet, custom front panel pump/res combo
Audio Device(s) Audeze Maxwell Ultraviolet, Razer Nommo Pro
Power Supply SF750 Plat, full transparent custom cables, Sentinel Pro 1500 Online Double Conversion UPS w/Noctua
Mouse Razer Viper Pro V2 8 KHz Mercury White w/Tiger Ice Skates & Pulsar Supergrip tape
Keyboard Wooting 60HE+ module, TOFU Redux Burgundy w/brass weight, Prismcaps White, Jellykey, lubed/modded
Software Windows 10 IoT Enterprise LTSC 19044.4046
Benchmark Scores Legendary
It is pretty much the same, isn't it, while the core power, as you correctly point out, is virtually doubled. I've made that very comparison; it's the same ballpark. Just like your comment on 'it's faster' while pointing out percentile gaps in raster: I agree, that's the same perf in raster.

See, this kind of bullshit response from your end is why you lose all credibility every time. Everyone with non-hazy vision can see the problem in the relative specs, core to VRAM, except you.
Non-hazy vision, eh?

Based on assumptions that VRAM should scale with performance indefinitely and that zero advances have been made in other regards...

Surely we would see these catastrophic effects when playing at 4K, no?
Raw memory bandwidth ceased to matter some time ago, with the advent of large caches and efficient memory compression/lossless data-management algorithms. Remember that the RTX 3090's memory bandwidth is in the vicinity of, and with a very quick OC can exceed, the terabyte-per-second mark, same as AMD's old Radeon VII (mine hit 1.25 TB/s easily), but that is still slower, off-die memory. That the 6900 XT does what it does with half its competitor's memory bandwidth is entirely down to cache, and it's why high hit rates are essential for it to keep its performance up.

If I had to guess, the 484 GB/s of the GTX 1080 Ti would be easily matched by around half of that on an RDNA 2 design such as the 6600 XT... and say, don't they perform about the same, too? I think the 6600 XT is actually around 10% faster, if I recall correctly.
 
Joined
Sep 17, 2014
Messages
21,600 (6.00/day)
Location
The Washing Machine
Processor i7 8700k 4.6Ghz @ 1.24V
Motherboard AsRock Fatal1ty K6 Z370
Cooling beQuiet! Dark Rock Pro 3
Memory 16GB Corsair Vengeance LPX 3200/C16
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Samsung 850 EVO 1TB + Samsung 830 256GB + Crucial BX100 250GB + Toshiba 1TB HDD
Display(s) Gigabyte G34QWC (3440x1440)
Case Fractal Design Define R5
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse XTRFY M42
Keyboard Lenovo Thinkpad Trackpoint II
Software W10 x64
Non-hazy vision, eh?

Based on assumptions that VRAM should scale with performance indefinitely and that zero advances have been made in other regards...

Surely we would see these catastrophic effects when playing at 4K, no?
The scales do tip over to high-bandwidth cards excelling at 4K, yes; what's the point here? This has been true since 2015.
Where did anyone say VRAM should scale with performance indefinitely? That zero advances are made in other regards? We all acknowledge the use of cache. But we also need to judge it for what it really does and for where it presents a limitation.

Nuance. You're missing it, and again, your style of discussing this confirms everything once more. Your input is of little relevance; you prefer to cherry-pick the examples where it works out well, omitting the ones where the whole house of cards falls apart. Whereas it's exactly those situations where you run into limits that will bother you as an end user, right? It's the same as touting 500 FPS in CS:GO to favor an Intel CPU over an X3D (that's not you per se, but we've seen it on TPU). Completely ridiculous nonsense.

Or (just a difference of perspective, let's keep playing nice) you are content with cards that have a life expectancy of 2-3 years, while I expect at least double that from a highly priced product. That's really the only argument you could possibly have for promoting cards with lackluster specs for the money. If you do actually upgrade gen-to-gen or bi-gen, that's a reasonable approach regardless. I don't; I think it's a total waste of time to upgrade for 25-30%.
 
Last edited:
Joined
Dec 25, 2020
Messages
5,462 (4.17/day)
Location
São Paulo, Brazil
System Name "Icy Resurrection"
Processor 13th Gen Intel Core i9-13900KS Special Edition
Motherboard ASUS ROG MAXIMUS Z790 APEX ENCORE
Cooling Noctua NH-D15S upgraded with 2x NF-F12 iPPC-3000 fans and Honeywell PTM7950 TIM
Memory 32 GB G.SKILL Trident Z5 RGB F5-6800J3445G16GX2-TZ5RK @ 7600 MT/s 36-44-44-52-96 1.4V
Video Card(s) ASUS ROG Strix GeForce RTX™ 4080 16GB GDDR6X White OC Edition
Storage 500 GB WD Black SN750 SE NVMe SSD + 4 TB WD Red Plus WD40EFPX HDD
Display(s) 55-inch LG G3 OLED
Case Pichau Mancer CV500 White Edition
Power Supply EVGA 1300 G2 1.3kW 80+ Gold
Mouse Microsoft Classic Intellimouse
Keyboard Galax Stealth STL-03
Software Windows 11 IoT Enterprise LTSC 24H2
Benchmark Scores I pulled a Qiqi~
It is pretty much the same, isn't it, while the core power, as you correctly point out, is virtually doubled. I've made that very comparison; it's the same ballpark. Just like your comment on 'it's faster' while pointing out percentile gaps in raster: I agree, that's the same perf in raster.

See, this kind of bullshit response from your end is why you lose all credibility every time. Everyone with non-hazy vision can see the problem in the relative specs, core to VRAM, except you.


And yet cache also turns into an Achilles heel even for AMD at 4K, where it drops off against Nvidia's 4090. At that point they're saved (most of the time), to an extent, by hard throughput still being at 800 GB/s on a 7900 XT.

Cache does NOT alleviate constraints in the very use cases where you need it most: heavy swapping, where large amounts of data are needed at will. The two are at odds with one another.

It's a bit like 'I have super boost clocks' under loads where you already exceed useful FPS numbers by miles. Who cares? It's nice for bench realities; in actual gaming, it doesn't amount to anything. This is where experience comes in. We've seen this all before, and crippled bandwidth, real, hard bandwidth, is and will always be a defining factor.

Put that 6600 XT on a higher res and it will die horribly, whereas in a relative sense the 1080 Ti would still be standing upright. I experience this now with a 1080 and its 8 GB: I can fill the framebuffer and FPS can go down, but the affair stays buttery smooth in frame times.

I agree with the point that it's an Achilles heel, but unfortunately, that's a problem inherent to AMD's gamble of relying on their last-level cache for bandwidth and using slower G6 modules to save on the BOM. It's a compromise their engineers felt was fair, at least at the time.

The 6600 XT will die horribly, yes, but that's likely due to its smaller frontend, lower bandwidth that its smaller Infinity Cache can't overcome, and 8 GB attached to just two channels (128-bit). Given what it is, a low-cost GPU intended for 1080p gaming, it's a valiant effort IMO. RDNA 3's approach was smart: they decoupled the cache from the GCD and attached it to the MCDs, giving each channel a lot more cache to work with. Ampere's 6 MB of L2 plus roughly a TB/s of bandwidth versus Navi 21's 128 MB of L3 plus 512 GB/s or so has mostly ended in a draw, with the drawbacks of AMD's design only showing at extreme resolutions, well beyond anything reasonable to ask of a 256-bit-bus GPU.

It leaves only one question: why is RDNA 3 underperforming so much? It's either horribly broken, or AMD's software division has a lot, and I mean a lot, to explain.
 
Joined
Sep 17, 2014
Messages
21,600 (6.00/day)
Location
The Washing Machine
Processor i7 8700k 4.6Ghz @ 1.24V
Motherboard AsRock Fatal1ty K6 Z370
Cooling beQuiet! Dark Rock Pro 3
Memory 16GB Corsair Vengeance LPX 3200/C16
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Samsung 850 EVO 1TB + Samsung 830 256GB + Crucial BX100 250GB + Toshiba 1TB HDD
Display(s) Gigabyte G34QWC (3440x1440)
Case Fractal Design Define R5
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse XTRFY M42
Keyboard Lenovo Thinkpad Trackpoint II
Software W10 x64
I agree with the point that it's an Achilles heel, but unfortunately, that's a problem inherent to AMD's gamble of relying on their last-level cache for bandwidth and using slower G6 modules to save on the BOM. It's a compromise their engineers felt was fair, at least at the time.

The 6600 XT will die horribly, yes, but that's likely due to its smaller frontend, lower bandwidth that its smaller Infinity Cache can't overcome, and 8 GB attached to just two channels (128-bit). Given what it is, a low-cost GPU intended for 1080p gaming, it's a valiant effort IMO. RDNA 3's approach was smart: they decoupled the cache from the GCD and attached it to the MCDs, giving each channel a lot more cache to work with. Ampere's 6 MB of L2 plus roughly a TB/s of bandwidth versus Navi 21's 128 MB of L3 plus 512 GB/s or so has mostly ended in a draw, with the drawbacks of AMD's design only showing at extreme resolutions, well beyond anything reasonable to ask of a 256-bit-bus GPU.
But that's the very essence of what I'm getting at. If these cards are supported by a more generous memory subsystem, that core can actually carry them quite a bit longer. It's not unplayable FPS if you end up at 40 minimums; we can experience this ourselves, and I do it daily. It works just fine, but only AS LONG AS you have stable frametimes. That's the territory we speak of when we speak of longevity. And that IS the longevity I really do expect from x70-class cards and up; it's the longevity they've historically had.
 
Joined
Aug 4, 2020
Messages
1,599 (1.10/day)
Location
::1
But that's the very essence of what I'm getting at. If these cards are supported by a more generous memory subsystem, that core can actually carry them quite a bit longer. It's not unplayable FPS if you end up at 40 minimums; we can experience this ourselves, and I do it daily. It works just fine, but only AS LONG AS you have stable frametimes. That's the territory we speak of when we speak of longevity. And that IS the longevity I really do expect from x70-class cards and up; it's the longevity they've historically had.
Let's face it, this is Jensen's Sollbruchstelle (a designed-in breaking point): creating a card that will fail in the exact way you're describing within a handful of years, to ensure the user buys yet another new card.

Planned obsolescence, hooray!
 
Joined
Sep 17, 2014
Messages
21,600 (6.00/day)
Location
The Washing Machine
Processor i7 8700k 4.6Ghz @ 1.24V
Motherboard AsRock Fatal1ty K6 Z370
Cooling beQuiet! Dark Rock Pro 3
Memory 16GB Corsair Vengeance LPX 3200/C16
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Samsung 850 EVO 1TB + Samsung 830 256GB + Crucial BX100 250GB + Toshiba 1TB HDD
Display(s) Gigabyte G34QWC (3440x1440)
Case Fractal Design Define R5
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse XTRFY M42
Keyboard Lenovo Thinkpad Trackpoint II
Software W10 x64
Not sure what you or others are rambling about. We're talking about price/performance, or cost per frame.

16 GB 6950 XT: $600/146 = $4.11
16 GB 6800: $470/111 = $4.23
16 GB 6800 XT: $540/126 = $4.29
12 GB 4070: $600/126 = $4.76
20 GB 7900 XT: $780/161 = $4.84
24 GB 7900 XTX: $950/183 = $5.19

Your 4070 looks pretty bad, doesn't it? And that's the best-case scenario, if you get lucky and find it at MSRP. Pick a $650 AIB card and you're looking at $5.16. Big yikes.
The 4070's MSRP is 669, isn't it, not 600? I do know that's the case in EUR, and in reality I'll probably see it start at 700 for the FE.

It'll easily land above even the 7900 XTX in cost per frame, as a midrange contender. It's hilariously bad.

I can buy a 7900 XT for 836 EUR today and an XTX at just over 1K in the Netherlands. That's a net perf gap of a whopping 50% at roughly the same relative cost per frame.

700 was about right, it seems, too (and that's for a bottom-end AIB contraption, which for 200 W might just be OK):
1681387627187.png

Instant no. To me this feels like paying for a VW Up with all the options that drives exactly the same as a stock Renault Twingo or Citroën C1. I just can't...
 
Last edited:
Joined
Nov 11, 2016
Messages
3,269 (1.16/day)
System Name The de-ploughminator Mk-II
Processor i7 13700KF
Motherboard MSI Z790 Carbon
Cooling ID-Cooling SE-226-XT + Phanteks T30
Memory 2x16GB G.Skill DDR5 7200Cas34
Video Card(s) Asus RTX4090 TUF
Storage Kingston KC3000 2TB NVME
Display(s) 48" LG OLED C4
Case Corsair 5000D Air
Audio Device(s) KEF LSX II LT speakers + KEF KC62 Subwoofer
Power Supply Corsair HX850
Mouse Razor Death Adder v3
Keyboard Razor Huntsman V3 Pro TKL
Software win11
Let's face it, this is Jensen's Sollbruchstelle (a designed-in breaking point): creating a card that will fail in the exact way you're describing within a handful of years, to ensure the user buys yet another new card.

Planned obsolescence, hooray!

Lowering texture quality means the GPU is obsolete? :rolleyes:

I tested Hogwarts Legacy with Low Texture Quality vs Ultra Texture Quality and it makes very little difference, saving ~4 GB of VRAM @ 4K Ultra RT DLSS.
link
 
Joined
Oct 31, 2022
Messages
170 (0.27/day)
The cheapest 6800 XT I can find in Germany is 600€ and the cheapest 6950 XT is 670€. The 6800 XT has roughly the same performance in gaming as the 4070, but worse performance in everything else. The 4070 would also save me 70€ a year in energy costs. So the decision isn't really as easy as just comparing gaming performance and sale price and calling it a day.
They sometimes change day by day.
Since RDNA2 is also TSMC, they can be undervolted quite well. Same as the 4070.
Personally, I'm against buying an old card, but in this case price and performance matter. Currently you either go old AMD for price or Nvidia for RT.

And it's not like the 4070 has ANY RT improvements... raster AND RT performance are the same as the 3080's.
You get +2 GB of VRAM and 40-50% less power draw for a bit less money. That IS a better deal, but not a very attractive one for "Next Gen".

But that will definitely change when the RX 7800 XT or 7700 XT come out.
The 7700 XT will have 12 GB of VRAM and probably similar performance to the 4070, but power draw and RT will be worse. AMD HAS to sell it below $499 to make it attractive (since the 6800 XT is ~$500 with 4 GB more VRAM).
The 7800 XT is the 6900 XT competitor, also with 16 GB of VRAM. $599 might not be low enough for it: $599 is 649€, and the 6950 XT is 649€. To make it viable, it has to be sold for $579 or less.
It's a real piece of work to make those cards fit into the current market... how much distance from the other current cards is alright?
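For the currency math in that comparison: $599 landing at 649€ is consistent with roughly a 0.91 USD-to-EUR rate plus 19% German VAT. A tiny sketch, with both factors treated as assumptions:

```python
# Rough US-MSRP to German-street-price conversion: exchange rate + VAT.
# Both the rate and the VAT value are assumptions for illustration.
def eur_street_price(usd_msrp: float, usd_to_eur: float = 0.91, vat: float = 0.19) -> float:
    return usd_msrp * usd_to_eur * (1.0 + vat)

print(f"$599 -> ~{eur_street_price(599):.0f} EUR")  # ~649 EUR
print(f"$579 -> ~{eur_street_price(579):.0f} EUR")  # ~627 EUR, undercutting a 649 EUR 6950 XT
```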
 
Joined
May 10, 2020
Messages
737 (0.48/day)
Processor Intel i7 13900K
Motherboard Asus ROG Strix Z690-E Gaming
Cooling Arctic Freezer II 360
Memory 32 Gb Kingston Fury Renegade 6400 C32
Video Card(s) PNY RTX 4080 XLR8 OC
Storage 1 TB Samsung 970 EVO + 1 TB Samsung 970 EVO Plus + 2 TB Samsung 870
Display(s) Asus TUF Gaming VG27AQL1A + Samsung C24RG50
Case Corsair 5000D Airflow
Power Supply EVGA G6 850W
Mouse Razer Basilisk
Keyboard Razer Huntsman Elite
Benchmark Scores 3dMark TimeSpy - 26698 Cinebench R23 2258/40751
Popularity is NOT synonymous with superiority... if it were, we'd all agree that McDonald's has the highest quality food because they sell the most of it.
Except we're not talking about hamburgers here; if people keep choosing Nvidia over AMD, it's because of quality issues with AMD.
 
Joined
May 11, 2018
Messages
1,057 (0.47/day)
Let's face it, this is Jensen's Sollbruchstelle (a designed-in breaking point): creating a card that will fail in the exact way you're describing within a handful of years, to ensure the user buys yet another new card.

Planned obsolescence, hooray!

Nah, it's much easier now. Just proclaim a new DLSS 4.0 with the next generation, and you can lock the previous generation's top-of-the-line 1700 EUR card out of the new "tech", lowering its usefulness even compared to the midrange of the new line.

That's planned obsolescence, and you can fine-tune, in the drivers, just how crappy you want the old cards to perform compared to the new ones!
 
Joined
May 17, 2021
Messages
3,005 (2.57/day)
Processor Ryzen 5 5700x
Motherboard B550 Elite
Cooling Thermalright Perless Assassin 120 SE
Memory 32GB Fury Beast DDR4 3200Mhz
Video Card(s) Gigabyte 3060 ti gaming oc pro
Storage Samsung 970 Evo 1TB, WD SN850x 1TB, plus some random HDDs
Display(s) LG 27gp850 1440p 165Hz 27''
Case Lian Li Lancool II performance
Power Supply MSI 750w
Mouse G502
People are really divided here.

I think they're all overpriced, both the new 4070 and the old 6000-series cards (a bit worse for the latter, being older-gen); I wouldn't buy either AMD or Nvidia at these prices.
 