
When will GPU prices return to normal?


ARF

Joined
Jan 28, 2020
Messages
4,670 (2.64/day)
Location
Ex-usa | slava the trolls
The "Ti" versions are stop-gaps normally launched much later - RTX 3080 Ti in May 2021, while original RTX 3080 much earlier in 2020.

This lineup looks very strange - the performance estimates simply do not look right, and the lineup is heavily rearranged/rebalanced: the RTX 4090 will pull ahead, and the gap to the rest of the lineup will be huge.

RTX 4090 = RTX 3090 + 56% more shaders.
RTX 4080 = RTX 3080 + 18% more shaders.
RTX 4070 = RTX 3070 + 21% more shaders.

Meanwhile, the RTX 3090 was only ~10-15% faster on average than the RTX 3080,
and the RTX 3080 was ~30% faster than the RTX 3070.

If the new shaders do not deliver higher IPC, then the RTX 4070 will end up about equal to, or slower than, the September 2020 10 GB RTX 3080.
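
To put numbers on that pessimistic case, a rough sketch in Python, assuming performance scales linearly with shader count at equal IPC and clocks (real scaling is usually sublinear, so this is the best case for that scenario):

# Naive estimate: Ada performance = Ampere performance x shader-count gain,
# assuming identical IPC and clocks (the pessimistic case described above).
perf_3070, perf_3080 = 1.00, 1.30   # 3080 ~30% faster than 3070, per above
perf_4070 = perf_3070 * 1.21        # RTX 4070: +21% shaders over the 3070
print(f"4070 estimate {perf_4070:.2f} vs 3080 {perf_3080:.2f}")  # 1.21 < 1.30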
 
Joined
Dec 31, 2020
Messages
985 (0.69/day)
Processor E5-4627 v4
Motherboard VEINEDA X99
Memory 32 GB
Video Card(s) 2080 Ti
Storage NE-512
Display(s) G27Q
Case DAOTECH X9
Power Supply SF450
I have no doubt that the 4070/4080 are at least 50% faster. Shader IPC and clocks are not the same for Ampere and Ada, but that remains to be seen.
Take the 1070 vs. the 970, for example: 15% more shaders and 40% higher clock speed gave 61% more performance. It's almost too perfect, but somehow the story repeats itself.
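
The arithmetic behind that 61% does check out if the shader and clock gains simply multiply (a sketch assuming IPC per shader-clock stays equal, which is not a given across architectures):

# GTX 1070 over GTX 970: +15% shaders and +40% clocks, gains compounding
print(f"{1.15 * 1.40:.2f}")   # -> 1.61, the ~61% uplift cited above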
 

ARF

Joined
Jan 28, 2020
Messages
4,670 (2.64/day)
Location
Ex-usa | slava the trolls
I have no doubt that the 4070/4080 are at least 50% faster. Shader IPC and clocks are not the same for Ampere and Ada, but that remains to be seen.
Take the 1070 vs. the 970, for example: 15% more shaders and 40% higher clock speed gave 61% more performance. It's almost too perfect, but somehow the story repeats itself.

Yeah, the N4 process should be 50-60% faster at the same wattage than Samsung 8N.
Also, the clocks must be much higher - 2.5 GHz? 2.8 GHz for AD103?

The question is: are the shaders smaller with lower IPC, or the same size but slightly faster?
 
Joined
Oct 27, 2020
Messages
791 (0.53/day)
The "Ti" versions are stop-gaps normally launched much later - RTX 3080 Ti in May 2021, while original RTX 3080 much earlier in 2020.

This lineup looks very strange, the performance estimates simply do not look right, and the lineup is heavily rearranged / rebalanced - the RTX 4090 will pull forward and the gap with the rest of the lineup will be huge.

RTX 4090 = RTX 3090 + 56% more shaders.
RTX 4080 = RTX 3080 +18% more shaders.
RTX 4070 = RTX 3070 + 21% more shaders.

While RTX 3090 was only ~10-15% faster on average than the RTX 3080.
RTX 3080 was 30% faster than RTX 3070.

If the shaders do not give higher IPC, then the RTX 4070 will remain about the same performance or slower than the September 2020 10 GB RTX 3080.
Usually yes, regarding when Ti versions launch, but there are exceptions - for example, the 3060 Ti launched two months and one week after the 3090, the 2080 Ti launched at the same time as the 2080, etc.
And after all, the model numbering could differ from the table you quoted. (We could have, for example, an RTX 4090 Ultra for a full AD102 version - who knows? ;))
Irrespective of model numbering, which can change, there are many examples (though not always) where the most cut-down part of a GPU die arrives at an earlier stage of the GPU's lifecycle (concurrently, or with a small gap like the 3060 Ti). As yields improve over the product's lifetime, either we get a refresh, or the manufacturer can slightly limit the availability of the most cut-down part if that makes sense based on yields.
Another reason not to push AD103 too hard is potentially the SM count. Full AD102 (192 SM) will be served by a 384-bit bus with 24 Gbps GDDR6X (possibly in limited availability, in order to support many SKUs). To have the same bandwidth per SM with 21 Gbps GDDR6X on a 256-bit bus, AD103 must be 112 SM instead of the more orthodox 128 SM. If the Ada design is bandwidth limited (I think it will be, just like Pascal), there is no reason to push AD103's frequency too much, since the gains will be limited anyway - but this is just my speculation; we will see.
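A quick check of that bandwidth-per-SM arithmetic, using the rumored figures above (speculation, not confirmed specs):

# Effective bandwidth = memory speed (Gbps per pin) x bus width / 8 -> GB/s
def bandwidth(speed_gbps, bus_bits):
    return speed_gbps * bus_bits / 8

ad102_per_sm = bandwidth(24, 384) / 192       # 1152 GB/s / 192 SM = 6.0 GB/s
ad103_sms = bandwidth(21, 256) / ad102_per_sm
print(ad103_sms)                              # -> 112.0 SMs for equal GB/s per SM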
On top of the specs, additional insight can come from looking at what Nvidia has historically done when launching a new lineup, and at what (lower) price points it delivered its previous top-end performance.
Try to imagine Nvidia's CEO on stage announcing the $500 Ada part, then correlate that with the corresponding Ada model/specs as you see them, work out what performance that part must at least reach to generate the minimum buzz, and extrapolate from there.
Without even taking the specs into account, how likely is it, in your view, that the $499 part (whatever its name) ends up below the 3080? (Imo it will logically at least match the 3080 12 GB - and at that point, how much faster will Navi 33 be, if at all? Everybody is saying it will be slower at 4K than the 6900 XT due to its 128-bit bus, so probably around 6800 XT 4K performance?)



Yeah, the N4 process should be 50-60% faster at the same wattage than Samsung 8N.
Also, the clocks must be much higher - 2.5 GHz? 2.8 GHz for AD103?

The question is: are the shaders smaller with lower IPC, or the same size but slightly faster?
According to TSMC, N4 has 63% clock-speed increase potential for logic vs. N16, and N16 can hit very similar frequencies to Samsung's 8 nm.
Logically, there will eventually be OC Ada models close to 3 GHz, if the Ada architecture is designed for high clocks.
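Back-of-envelope on that claim (the ~1.9 GHz typical real-world Ampere boost on Samsung 8N is my assumption, not a figure from the post):

# TSMC quotes ~63% logic speed uplift from N16 to N4; 8N is taken as
# roughly N16-class per the post, so the ceiling would be around:
ampere_boost_ghz = 1.9                    # assumed typical Ampere boost clock
print(f"{ampere_boost_ghz * 1.63:.1f} GHz")   # -> 3.1 GHz, i.e. "close to 3 GHz"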
 
Joined
Mar 20, 2022
Messages
213 (0.22/day)
Processor Ryzen 5 5600X
Motherboard Asus B550-F Gaming WiFi
Cooling Be Quiet! Dark Rock Pro 4
Memory 64GB G.Skill Ripjaws V 3600 CL18
Video Card(s) Gigabyte RX 6600 Eagle (de-shrouded)
Storage Samsung 970 Evo 1TB M.2
Display(s) 2 x Asus 1080p 60Hz IPS
Case Antec P101 Silent
Audio Device(s) Ifi Zen DAC V2
Power Supply Be Quiet! Straight Power 11 650W Platinum
Mouse JSCO JNL-101k
Keyboard Akko 3108 V2 (Akko Pink linears)
The mid-to-high-tier RTX 3060 and RTX 3060 Ti cards in Australia have stubbornly refused to drop in price since early May, unfortunately. Only the MSI RTX 3060 Ti Ventus 2X has seen a substantial price drop of AU$100; the rest have hardly moved.

I took some screenshots of prices on 12th May to keep track of them. Here's the pricing for those cards then:

[screenshots: retailer pricing as of 12th May]


Now the current pricing for comparison:

[screenshots: current retailer pricing]


Granted there's more stock, but most cards haven't seen any change and most prices still suck.
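
Side note: a tiny Python sketch of how one could log these prices to a CSV instead of screenshots (the model and price below are placeholders typed in by hand, not scraped data):

import csv, datetime

prices = {"MSI RTX 3060 Ti Ventus 2X": 749.00}   # AU$, entered manually

with open("gpu_prices.csv", "a", newline="") as f:
    writer = csv.writer(f)
    for model, price in prices.items():
        writer.writerow([datetime.date.today().isoformat(), model, price])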
 

ARF

Joined
Jan 28, 2020
Messages
4,670 (2.64/day)
Location
Ex-usa | slava the trolls
I don't know what decision Nvidia's managers will make, but I do know that this doesn't seem to be the right one...

They know that users have no choice - it's a 50-50 split between AMD and Nvidia - so no matter what they do or how bad their actions are, sales will go on anyway. They have nothing to lose, so they have the freedom not to think in the right direction.
 
Joined
Oct 27, 2020
Messages
791 (0.53/day)
They know that users have no choice - it's a 50-50 split between AMD and Nvidia - so no matter what they do or how bad their actions are, sales will go on anyway. They have nothing to lose, so they have the freedom not to think in the right direction.
Maybe, we'll see.
The only reason to do it, imo (still the wrong decision), is if their simulations showed full Navi 32 ahead of AD103, and they decided to compensate with a +5% performance boost by pushing the frequency up (and power consumption through the roof).
They could just take the 5% beating (as in the 3070 Ti/6800 case - at 350 W the performance/W comparison would be fine and the memory size the same, unlike what we have now with the 3070 Ti/6800), tell partners to shift their production mix towards OC models, and sell those OC models just a little above SRP while the entry models sit at SRP, in order to save face lol. (It doesn't matter if the entry models don't sell much, since partners will have already arranged their production accordingly.) Maybe add a game bundle at a later date if they still need to push further. But logically there will be no need to push further, or at all, lol - the performance/W difference will be less than 10% and the memory size the same. Right now the 6800 is 35% more efficient than the 3070 Ti and has double the memory, and still Mindfactory's 3070 Ti listings start at 699€ (for a KFA² entry model or an INNO3D 3X lol) while the 6800 starts 50€ lower (649€) for a much higher-quality ASRock Phantom Gaming model. So there is no need at all for Nvidia to worry about a 5% performance loss...
I wonder how many people in the TPU audience who buy 4080-level GPUs would be deterred by the increased TDP. (The chart below doesn't say much; it's too generic.)
 

ARF

Joined
Jan 28, 2020
Messages
4,670 (2.64/day)
Location
Ex-usa | slava the trolls
AMD Radeon RX 6400 Grafikkarte Preisvergleich | Günstig bei idealo kaufen
AMD Radeon RX 6400 Specs | TechPowerUp GPU Database

Actual retail prices:
Radeon RX 6400 - from 171 euros (MSRP 159 dollars)
Radeon RX 6500 XT - from 170 euros (MSRP 199 dollars)
Radeon RX 6600 - from 298 euros (MSRP 329 dollars)
Radeon RX 6600 XT - from 398 euros (MSRP 379 dollars)
Radeon RX 6650 XT - from 405 euros (MSRP 399 dollars)
Radeon RX 6700 XT - from 509 euros (MSRP 479 dollars)
Radeon RX 6750 XT - from 600 euros (MSRP 549 dollars)
Radeon RX 6800 - from 680 euros (MSRP 579 dollars)
Radeon RX 6800 XT - from 800 euros (MSRP 649 dollars)
Radeon RX 6900 XT - from 900 euros (MSRP 999 dollars)
Radeon RX 6950 XT - from 1208 euros (MSRP 1099 dollars)
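
A rough way to turn that list into retail-vs-MSRP premiums (assuming ~1.05 USD per EUR for mid-2022 and ignoring VAT, so the output is indicative only):

RATE = 1.05   # USD per EUR, mid-2022 assumption
cards = {     # model: (street price EUR, MSRP USD), from the list above
    "RX 6400":    (171, 159),
    "RX 6700 XT": (509, 479),
    "RX 6900 XT": (900, 999),
}
for model, (eur, msrp) in cards.items():
    print(f"{model}: {eur * RATE / msrp - 1:+.0%} vs MSRP")
# -> RX 6400 +13%, RX 6700 XT +12%, RX 6900 XT -5%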
 
Joined
Oct 27, 2020
Messages
791 (0.53/day)
It doesn't take memory size into account at all, so it ends up being a little unfair, depending on the comparison.
You could argue that the PCB components or the memory type/speed affect performance, so by using the performance metric you are already somehow including them in a "fair" price/performance comparison. But in some games and resolutions, extra memory doesn't bring more performance - it just covers you for future games so you can keep your card more years. That's an important extra value, relatively easy to quantify (compared with other values like the media engine, software, etc.), and it isn't taken into account at all. (By the logic of their experiment, a 16 GB 6800 XT/3070 should cost the same as an 8 GB 6800 XT/3070.)
Also, while the RTX 3050 is compared based on the 4K difference, the Navi 24 models are compared based on the 1080p difference, which is unfair since both are 1080p-level cards. The reason they do it is that memory size, bandwidth, and Infinity Cache size absolutely kill the 4K performance of the Navi 24 models.
At the least, they could have used the 1080p difference for Navi 24/Navi 23 (the more indicative difference, given what those cards target) and the QHD difference, if not FHD, for the RTX 3050/3060. QHD wouldn't change much for the 3050, but at least it would "seem" a little more fair.
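
A toy illustration of why the chosen resolution matters for a price/performance ranking (all numbers invented for the example):

price      = {"card_A": 200, "card_B": 280}    # hypothetical prices, USD
perf_1080p = {"card_A": 1.00, "card_B": 1.25}  # relative performance indices
perf_4k    = {"card_A": 0.60, "card_B": 1.25}  # card_A collapses at 4K
for res, perf in (("1080p", perf_1080p), ("4K", perf_4k)):
    print(res, {c: round(price[c] / perf[c]) for c in price})
# dollars-per-performance flips: card_A wins at 1080p, loses badly at 4K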
 

Count von Schwalbe

Moderator
Staff member
Joined
Nov 15, 2021
Messages
3,087 (2.78/day)
Location
Knoxville, TN, USA
System Name Work Computer | Unfinished Computer
Processor Core i7-6700 | Ryzen 5 5600X
Motherboard Dell Q170 | Gigabyte Aorus Elite Wi-Fi
Cooling A fan? | Truly Custom Loop
Memory 4x4GB Crucial 2133 C17 | 4x8GB Corsair Vengeance RGB 3600 C26
Video Card(s) Dell Radeon R7 450 | RTX 2080 Ti FE
Storage Crucial BX500 2TB | TBD
Display(s) 3x LG QHD 32" GSM5B96 | TBD
Case Dell | Heavily Modified Phanteks P400
Power Supply Dell TFX Non-standard | EVGA BQ 650W
Mouse Monster No-Name $7 Gaming Mouse| TBD
It doesn't take memory size into account at all, so it ends up being a little unfair, depending on the comparison.
You could argue that the PCB components or the memory type/speed affect performance, so by using the performance metric you are already somehow including them in a "fair" price/performance comparison. But in some games and resolutions, extra memory doesn't bring more performance - it just covers you for future games so you can keep your card more years. That's an important extra value, relatively easy to quantify (compared with other values like the media engine, software, etc.), and it isn't taken into account at all. (By the logic of their experiment, a 16 GB 6800 XT/3070 should cost the same as an 8 GB 6800 XT/3070.)
Also, while the RTX 3050 is compared based on the 4K difference, the Navi 24 models are compared based on the 1080p difference, which is unfair since both are 1080p-level cards. The reason they do it is that memory size, bandwidth, and Infinity Cache size absolutely kill the 4K performance of the Navi 24 models.
At the least, they could have used the 1080p difference for Navi 24/Navi 23 (the more indicative difference, given what those cards target) and the QHD difference, if not FHD, for the RTX 3050/3060. QHD wouldn't change much for the 3050, but at least it would "seem" a little more fair.
It would be more logical to use QHD across the board, as that is where AMD and Nvidia reach parity for equal-performing parts. Using 4K seems strongly Nvidia-biased to me, especially as 1080p is still the most common gaming resolution.
 
Joined
Oct 27, 2020
Messages
791 (0.53/day)
It would be more logical to use QHD across the board, as that is where AMD and Nvidia reach parity for equal-performing parts. Using 4K seems strongly Nvidia-biased to me, especially as 1080p is still the most common gaming resolution.
I don't know much about 3DCenter, but they don't seem Nvidia-biased to me. In their performance charts, AMD's models enjoy slightly better percentage positioning relative to Nvidia's "similar" models than in TPU's performance charts (for example, the RX 6400 sits above the GTX 1650 at FHD in their charts). I guess they just used 4K in order to be more "fair" to the higher-end models?
It's also the right thing to do, unless a model suffers excessive performance loss at 4K - like Navi 24 in most games, or, strangely, like the GTX 1660 series in only a handful of games (one?) lol.
If you check the performance deltas in older titles at max settings (or esports titles), or in newer titles at minimum/medium settings, and compare them with new titles at max settings, you will see that as the effects implemented, texture sizes, etc. in an engine/game increase, the performance delta between cards increases too.
So if you want to see where the performance difference between two same-generation models from the same manufacturer will end up after 2-3 or more years, the current 4K numbers are possibly the better guide, because the performance delta will likely grow as future titles implement more effects. (We are in a cross-gen phase right now, so there is even more potential for the delta to widen in future titles, imo.) Of course it's not exact and can't tell you precisely what the delta will be after 2-3 years, but it should be closer - I mean that current 4K deltas will be closer to future QHD deltas than current QHD deltas are. Anyway, current QHD deltas aren't dramatically different from current 4K deltas, unless we are talking about specific models - usually Navi 23 and below, due to bandwidth limitations and the Infinity Cache size used, or other models due to memory size, etc. Those models may merit a specific exception, though it isn't out of the realm of possibility that two years from now an Unreal Engine 5 game (or some other, less well-optimized game that pushes the visual boundaries) shows performance problems even at QHD for Navi 23, due to Infinity Cache size/bandwidth limitations.
 

Count von Schwalbe

Moderator
Staff member
Joined
Nov 15, 2021
Messages
3,087 (2.78/day)
Location
Knoxville, TN, USA
System Name Work Computer | Unfinished Computer
Processor Core i7-6700 | Ryzen 5 5600X
Motherboard Dell Q170 | Gigabyte Aorus Elite Wi-Fi
Cooling A fan? | Truly Custom Loop
Memory 4x4GB Crucial 2133 C17 | 4x8GB Corsair Vengeance RGB 3600 C26
Video Card(s) Dell Radeon R7 450 | RTX 2080 Ti FE
Storage Crucial BX500 2TB | TBD
Display(s) 3x LG QHD 32" GSM5B96 | TBD
Case Dell | Heavily Modified Phanteks P400
Power Supply Dell TFX Non-standard | EVGA BQ 650W
Mouse Monster No-Name $7 Gaming Mouse| TBD
I don't know much about 3DCenter, but they don't seem Nvidia-biased to me. In their performance charts, AMD's models enjoy slightly better percentage positioning relative to Nvidia's "similar" models than in TPU's performance charts (for example, the RX 6400 sits above the GTX 1650 at FHD in their charts). I guess they just used 4K in order to be more "fair" to the higher-end models?
It's also the right thing to do, unless a model suffers excessive performance loss at 4K - like Navi 24 in most games, or, strangely, like the GTX 1660 series in only a handful of games (one?) lol.
If you check the performance deltas in older titles at max settings (or esports titles), or in newer titles at minimum/medium settings, and compare them with new titles at max settings, you will see that as the effects implemented, texture sizes, etc. in an engine/game increase, the performance delta between cards increases too.
So if you want to see where the performance difference between two same-generation models from the same manufacturer will end up after 2-3 or more years, the current 4K numbers are possibly the better guide, because the performance delta will likely grow as future titles implement more effects. (We are in a cross-gen phase right now, so there is even more potential for the delta to widen in future titles, imo.) Of course it's not exact and can't tell you precisely what the delta will be after 2-3 years, but it should be closer - I mean that current 4K deltas will be closer to future QHD deltas than current QHD deltas are. Anyway, current QHD deltas aren't dramatically different from current 4K deltas, unless we are talking about specific models - usually Navi 23 and below, due to bandwidth limitations and the Infinity Cache size used. Those models may merit a specific exception, though it isn't out of the realm of possibility that two years from now an Unreal Engine 5 game (or some other, less well-optimized game that pushes the visual boundaries) shows performance problems even at QHD for Navi 23, due to Infinity Cache size/bandwidth limitations.
True, I can see your point. I was just referring to the fact that RDNA 2 generally performs better than Ampere at 1080p, roughly equivalently at QHD, and worse at 4K.
 

ARF

Joined
Jan 28, 2020
Messages
4,670 (2.64/day)
Location
Ex-usa | slava the trolls
Memory size?
If they do that, they should include everything else - die size, memory bus width, memory type, etc...

True, I can see your point. I was just referring to the fact that RDNA 2 generally performs better than Ampere at 1080p, roughly equivalently at QHD, and worse at 4K.

Yeah, it's inferior at UHD because of the older memory standard and lower memory throughput.
It is the most annoying "feature" of the RDNA 2-based cards - they lose competitiveness at higher resolutions.

I hope RDNA 3 will fix this and do the opposite - become stronger at UHD and in ray tracing.
 
Joined
Apr 30, 2020
Messages
986 (0.59/day)
System Name S.L.I + RTX research rig
Processor Ryzen 7 5800X 3D.
Motherboard MSI MEG ACE X570
Cooling Corsair H150i Cappellx
Memory Corsair Vengeance pro RGB 3200mhz 32Gbs
Video Card(s) 2x Dell RTX 2080 Ti in S.L.I
Storage Western digital Sata 6.0 SDD 500gb + fanxiang S660 4TB PCIe 4.0 NVMe M.2
Display(s) HP X24i
Case Corsair 7000D Airflow
Power Supply EVGA G+1600watts
Mouse Corsair Scimitar
Keyboard Cosair K55 Pro RGB
Online retailers are still overpriced, even for used/refurbished cards - they're double the price of eBay sellers. Sometimes you're better off going into brick-and-mortar stores to find sales.
 
Joined
Oct 27, 2020
Messages
791 (0.53/day)
Memory size?
If they do that, they should include everything else - die size, memory bus width, memory type, etc...



Yeah, it's inferior at UHD because of the older memory standard and lower memory throughput.
It is the most annoying "feature" of the RDNA 2-based cards - they lose competitiveness at higher resolutions.

I hope RDNA 3 will fix this and do the opposite - become stronger at UHD and in ray tracing.
Nothing bad about 3DCenter's approach - after all, the phrase «based on their performance alone» says it all. It's clear what they did; they didn't mislead anyone. It's just that, since memory is easier to quantify than other differences like the media engine, software, etc., they could have gone a step further. (Die size - or preferably transistor count, since the process difference distorts what transistors alone would give - memory bus width, and memory type all affect each card's final performance anyway, so factoring in performance already includes the effect of all those metrics to a degree in the price/performance placement.) In some cases, depending on the GPU/game/resolution, extra memory adds nothing to performance; it just gives your card extra value and lets you keep it more years. That's why it would have been interesting to include it somehow in their conclusions.
AMD isn't inferior to Nvidia at 4K because of Nvidia's use of GDDR6X (the newer memory standard), since GDDR6X gives little performance benefit in raster - just check the 3070 vs. the 3070 Ti. (Although the 3070 is by far the most "memory bandwidth-starved" model in the Ampere lineup for raster, with 96 ROPs and 448 GB/s, it's only 7% slower at 4K with ~4% fewer shaders/TMUs etc. than the 3070 Ti. For the other Ampere models, jumping to GDDR6X would make even less of a raster difference, since Ampere's raster performance is not memory bandwidth limited the way Pascal's was, for example.) The reasons are mostly (as you mentioned) memory bandwidth (24 Gbps GDDR6 would help a lot) and Infinity Cache size. According to leaks, RDNA 3 will at least fix the Infinity Cache size, so that will certainly help. (A big cache can help with ray tracing too, but that depends on the ray tracing implementation - it must be designed to take advantage of the cache system.)
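
A quick sanity check on those 3070 vs. 3070 Ti numbers (the 14 and 19 Gbps memory speeds are the official figures):

bw_3070   = 14 * 256 / 8      # 448 GB/s of GDDR6
bw_3070ti = 19 * 256 / 8      # 608 GB/s of GDDR6X
print(f"{bw_3070ti / bw_3070 - 1:+.0%} bandwidth")   # -> +36%
# Shaders: 6144 vs 5888 (~+4%); the measured 4K raster gap is ~7%.
# +36% bandwidth buying only ~3 extra points supports the argument that
# Ampere raster is not primarily bandwidth limited.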
 

ppn

Joined
Aug 18, 2015
Messages
1,231 (0.36/day)
The narrow bus of the RTX 4080 and RX 7800 XT makes them scale worse at 4K than the previous generation. As for the 4070 with its 160-bit bus and the 7700 with 128-bit - watch them fall off a cliff.
 

ARF

Joined
Jan 28, 2020
Messages
4,670 (2.64/day)
Location
Ex-usa | slava the trolls
The narrow bus of the RTX 4080 and RX 7800 XT makes them scale worse at 4K than the previous generation. As for the 4070 with its 160-bit bus and the 7700 with 128-bit - watch them fall off a cliff.

That would be the second generation in a row with limited memory throughput, so it is highly likely the issue has been addressed and resolved somehow.
We'll see soon, hopefully.
 
Joined
Apr 14, 2022
Messages
749 (0.78/day)
Location
London, UK
Processor AMD Ryzen 7 5800X3D
Motherboard ASUS B550M-Plus WiFi II
Cooling Noctua U12A chromax.black
Memory Corsair Vengeance 32GB 3600Mhz
Video Card(s) Palit RTX 4080 GameRock OC
Storage Samsung 970 Evo Plus 1TB + 980 Pro 2TB
Display(s) Acer Nitro XV271UM3B IPS 180Hz
Case Asus Prime AP201
Audio Device(s) Creative Gigaworks - Razer Blackshark V2 Pro
Power Supply Corsair SF750
Mouse Razer Viper
Keyboard Asus ROG Falchion
Software Windows 11 64bit
I'm thinking of replacing my 2080 Ti with a 3080 Ti/3090. Prices on the second-hand market are tempting.
I believe the 4080 will be 5-10% faster than the 3080 Ti at best, will be available - practically speaking - at Christmas at the soonest, and will cost about 1000 euros, also practically speaking.

So a 3080 Ti at that price or lower doesn't seem like a bad idea to me.

Any suggestions?
 
Joined
May 19, 2015
Messages
78 (0.02/day)
Location
Ukraine
Processor Ryzen 7 7700X
Motherboard Asus ROG Strix X670E-F Gaming WiFi
Memory F5-6000J3038F16GX2-FX5
Do you guys think that GPU prices will be better by then?
The market works in a pretty simple way: a seller raises prices up to the maximum level a buyer is willing to pay.
It's like that for any product aimed at millions of buyers.

A manufacturer spends a specific amount of money to produce a product. Of course they want to make as much profit as possible.
But a regular buyer of regular hardware is not rich. So if, say, NVIDIA spends $150 to manufacture one RTX 3060 video card and then tries to sell it for $1,500, 99.9% of those cards will not be bought at all, and all that money ($150 multiplied by the number of cards produced) will be wasted - because who needs that performance at THAT price?!
Of course there are additional nuances, but the main idea is the same: a manufacturer produces goods to sell, and if they fail to sell, it loses the money spent on production. NVIDIA, AMD, and Intel don't need to keep any of the stuff they produce; they need to make money by selling it.

You know what I mean - prices stay crazy only up to the point where there are no longer enough buyers willing to pay them. And who is willing to overpay now? That's right - the miners.
As long as so many miners are overpaying for video cards, prices will stay crazy. And they're willing to overpay for as long as it's profitable.
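
A toy version of that pricing logic (the demand figures are invented for illustration):

cost = 150                                # per-unit manufacturing cost, USD
demand = {300: 100_000, 500: 60_000,      # asking price -> units buyers take
          900: 15_000, 1500: 100}         # almost nobody pays $1,500
best = max(demand, key=lambda p: (p - cost) * demand[p])
print(best)   # -> 500: the profit-maximizing price sits where buyers remain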
 
Joined
Sep 10, 2018
Messages
6,925 (3.05/day)
Location
California
System Name His & Hers
Processor R7 5800X/ R7 7950X3D Stock
Motherboard X670E Aorus Pro X/ROG Crosshair VIII Hero
Cooling Corsair h150 elite/ Corsair h115i Platinum
Memory Trident Z5 Neo 6000/ 32 GB 3200 CL14 @3800 CL16 Team T Force Nighthawk
Video Card(s) Evga FTW 3 Ultra 3080ti/ Gigabyte Gaming OC 4090
Storage lots of SSD.
Display(s) A whole bunch OLED, VA, IPS.....
Case 011 Dynamic XL/ Phanteks Evolv X
Audio Device(s) Arctis Pro + gaming Dac/ Corsair sp 2500/ Logitech G560/Samsung Q990B
Power Supply Seasonic Ultra Prime Titanium 1000w/850w
Mouse Logitech G502 Lightspeed/ Logitech G Pro Hero.
Keyboard Logitech - G915 LIGHTSPEED / Logitech G Pro
I'm thinking of replacing my 2080 Ti with a 3080 Ti/3090. Prices on the second-hand market are tempting.
I believe the 4080 will be 5-10% faster than the 3080 Ti at best, will be available - practically speaking - at Christmas at the soonest, and will cost about 1000 euros, also practically speaking.

So a 3080 Ti at that price or lower doesn't seem like a bad idea to me.

Any suggestions?

I'd wait - I'm pretty sure whatever costs $800-1,000 will be 40-60% faster. And even if you decide to grab a 3080 Ti, I'm betting there will be a fire sale on eBay whenever the 4000 series is announced... I have a 2080 Ti and a 3080 Ti; the increase is nice, but only at 4K does it really feel like a generational improvement.

I'd also want to see RDNA 3 before making a decision... Ampere is old at this point. A 3080 12 GB makes more sense if you can get it $200-ish cheaper than the Ti.
 
Joined
Dec 12, 2020
Messages
1,755 (1.21/day)
I'm thinking of replacing my 2080 Ti with a 3080 Ti/3090. Prices on the second-hand market are tempting.
I believe the 4080 will be 5-10% faster than the 3080 Ti at best, will be available - practically speaking - at Christmas at the soonest, and will cost about 1000 euros, also practically speaking.

So a 3080 Ti at that price or lower doesn't seem like a bad idea to me.

Any suggestions?
But will any of the 4080s actually be available? Or will the miners and scalpers be hoovering them up like dust on a dirty carpet?
 
Joined
Apr 14, 2022
Messages
749 (0.78/day)
Location
London, UK
Processor AMD Ryzen 7 5800X3D
Motherboard ASUS B550M-Plus WiFi II
Cooling Noctua U12A chromax.black
Memory Corsair Vengeance 32GB 3600Mhz
Video Card(s) Palit RTX 4080 GameRock OC
Storage Samsung 970 Evo Plus 1TB + 980 Pro 2TB
Display(s) Acer Nitro XV271UM3B IPS 180Hz
Case Asus Prime AP201
Audio Device(s) Creative Gigaworks - Razer Blackshark V2 Pro
Power Supply Corsair SF750
Mouse Razer Viper
Keyboard Asus ROG Falchion
Software Windows 11 64bit
That's the problem. Even if the 4080 is a bargain (MSRP/performance) against the 3080/3080 Ti, it will be difficult to get one, either because of availability or because of scalper prices.
So the good scenario would be a meh price for a 4080 around this time... next year.
 
Joined
Jan 5, 2006
Messages
18,584 (2.69/day)
System Name AlderLake
Processor Intel i7 12700K P-Cores @ 5Ghz
Motherboard Gigabyte Z690 Aorus Master
Cooling Noctua NH-U12A 2 fans + Thermal Grizzly Kryonaut Extreme + 5 case fans
Memory 32GB DDR5 Corsair Dominator Platinum RGB 6000MT/s CL36
Video Card(s) MSI RTX 2070 Super Gaming X Trio
Storage Samsung 980 Pro 1TB + 970 Evo 500GB + 850 Pro 512GB + 860 Evo 1TB x2
Display(s) 23.8" Dell S2417DG 165Hz G-Sync 1440p
Case Be quiet! Silent Base 600 - Window
Audio Device(s) Panasonic SA-PMX94 / Realtek onboard + B&O speaker system / Harman Kardon Go + Play / Logitech G533
Power Supply Seasonic Focus Plus Gold 750W
Mouse Logitech MX Anywhere 2 Laser wireless
Keyboard RAPOO E9270P Black 5GHz wireless
Software Windows 11
Benchmark Scores Cinebench R23 (Single Core) 1936 @ stock Cinebench R23 (Multi Core) 23006 @ stock
When will GPU prices return to normal?

 

ARF

Joined
Jan 28, 2020
Messages
4,670 (2.64/day)
Location
Ex-usa | slava the trolls
I'm thinking of replacing my 2080 Ti with a 3080 Ti/3090. Prices on the second-hand market are tempting.
I believe the 4080 will be 5-10% faster than the 3080 Ti at best, will be available - practically speaking - at Christmas at the soonest, and will cost about 1000 euros, also practically speaking.

So a 3080 Ti at that price or lower doesn't seem like a bad idea to me.

Any suggestions?

No, there is simply no way for the 4080 to be that slow.
Look at the general performance gaps - they are tiny.

[screenshot: relative performance chart]

NVIDIA GeForce RTX 3080 Specs | TechPowerUp GPU Database

I'm thinking of replacing my 2080 Ti with a 3080 Ti/3090. Prices on the second-hand market are tempting.
I believe the 4080 will be 5-10% faster than the 3080 Ti at best, will be available - practically speaking - at Christmas at the soonest, and will cost about 1000 euros, also practically speaking.

So a 3080 Ti at that price or lower doesn't seem like a bad idea to me.

Any suggestions?

Suggestion: wait for the next Black Friday.
 