System Name | Old reliable |
---|---|
Processor | Intel 8700K @ 4.8 GHz |
Motherboard | MSI Z370 Gaming Pro Carbon AC |
Cooling | Custom Water |
Memory | 32 GB Crucial Ballistix 3666 MHz |
Video Card(s) | MSI RTX 3080 10GB Suprim X |
Storage | 3x SSDs 2x HDDs |
Display(s) | ASUS VG27AQL1A x2 2560x1440 8bit IPS |
Case | Thermaltake Core P3 TG |
Audio Device(s) | Samson Meteor Mic / Generic 2.1 / KRK KNS 6400 headset |
Power Supply | Zalman EBT-1000 |
Mouse | Mionix NAOS 7000 |
Keyboard | Mionix |
Keep in mind FF XV is getting DLSS, so with the tensor cores supersampling is essentially free. In theory that means fewer jaggies and a sharper, cleaner image with no performance loss, so the performance gap in an apples-to-apples comparison will likely grow far larger. However, I myself won't be buying an RTX series card; I'm sticking with my 1080 Ti for the time being. As demanding as Final Fantasy XV is, I was worried that these cards would struggle under it. Now I would feel I got my money's worth in a new 20 series [that's if I was wanting or willing to buy one to start with].
It would be nice to see some modern benchmarks with the latest drivers in modern AAA games with SLI and CF @W1zzard
It's been a while since anyone has shown any, honestly.
Processor | OCed 5800X3D |
---|---|
Motherboard | Asucks C6H |
Cooling | Air |
Memory | 32GB |
Video Card(s) | OCed 6800XT |
Storage | NVMees |
Display(s) | 32" Dull curved 1440 |
Case | Freebie glass idk |
Audio Device(s) | Sennheiser |
Power Supply | Don't even remember |
Well, I can understand you're used to the huge strides going from GCN 1.0 to 1.1 to 1.2 and so on. But Nvidia can't possibly keep up with that.
Though rumor has it these were actually meant for 10nm, but there's no 10nm capacity available atm.
I'm also curious how you derive performance from specs without knowing the frequencies these chips run at, because that alone can swing things by ±25%.
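To put that clock uncertainty in perspective, here's a back-of-the-envelope sketch (the shader counts are the public 1080 Ti / 2080 Ti figures; every clock below is a made-up placeholder, since the real clocks are exactly the unknown):

```python
# Crude estimate only: FP32 throughput ~ shader count x clock.
# Shader counts are public specs; the clocks are placeholders, not measurements.

def relative_throughput(shaders, clock_ghz):
    """Upper-bound proxy for FP32 throughput: cores x clock."""
    return shaders * clock_ghz

pascal = relative_throughput(shaders=3584, clock_ghz=1.6)  # GTX 1080 Ti-like
turing = relative_throughput(shaders=4352, clock_ghz=1.6)  # RTX 2080 Ti-like

print(f"core-count gain alone: {turing / pascal - 1:.0%}")  # ~21%

# A +/-25% swing in the unknown clock dwarfs that core-count difference:
for clock_ghz in (1.2, 1.6, 2.0):
    t = relative_throughput(4352, clock_ghz)
    print(f"{clock_ghz} GHz -> {t / pascal - 1:+.0%} vs the Pascal baseline")
```

Same spec sheet, anywhere from -9% to +52% depending on the clock you assume.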
System Name | M3401 notebook |
---|---|
Processor | 5600H |
Motherboard | NA |
Memory | 16GB |
Video Card(s) | 3050 |
Storage | 500GB SSD |
Display(s) | 14" OLED screen of the laptop |
Software | Windows 10 |
Benchmark Scores | 3050 scores a good 15-20% lower than average, despite ASUS's claims that it has uber cooling. |
Too bad it costs as much as a 1080 Ti.
The RTX 2080 offers a similar performance improvement over the GTX 1080 at 2560x1440, delivering gains of 28% and 33%.
System Name | 2nd AMD puppy |
---|---|
Processor | FX-8350 vishera |
Motherboard | Gigabyte GA-970A-UD3 |
Cooling | Cooler Master Hyper TX2 |
Memory | 16 GB DDR3: 8 GB Kingston HyperX Beast + 8 GB G.Skill Sniper (courtesy of tabascosauz & TPU) |
Video Card(s) | Sapphire RX 580 Nitro+; 1450/2000 MHz |
Storage | SSD: 840 Pro 128 GB; Iridium Pro 240 GB; HDD: 2x WD 1 TB |
Display(s) | Benq XL2730Z 144 Hz freesync |
Case | NZXT 820 PHANTOM |
Audio Device(s) | Audigy SE with Logitech Z-5500 |
Power Supply | Riotoro Enigma G2 850W |
Mouse | Razer copperhead / Gamdias zeus (by courtesy of sneekypeet & TPU) |
Keyboard | MS Sidewinder x4 |
Software | win10 64bit ltsc |
Benchmark Scores | irrelevant for me |
Most salty comments are coming from 1080Ti owners. We get it. You need to justify your purchase.
That's because SLI is basically dead.
https://uk.hardware.info/reviews/8113/crossfire-a-sli-anno-2018-is-it-still-worth-it
Less and less game support, DX12 doesn't help, poor performance gains.
You basically get 5-15% better total FPS by doubling the GPU (1080 Ti x2) in most games, except a few outliers. 5-15% more FPS for 100% more price; I don't think it's worth it. Plus, I quote them:
"Furthermore, micro-stuttering remains an unresolved problem. It varies from game to game, but in some cases 60 fps with CrossFire or SLI feels a lot less smooth than the same frame rate on a single GPU. What's more, you're more likely to encounter artefacts and small visual bugs when you're using multiple GPUs as sometimes the distribution of the graphic workload doesn't seem to be entirely flawless."
So you get 5-15%, but a frame rate that doesn't feel smooth. Better to just spend the extra cash on the best single GPU in the lineup to get the best performance.
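The value math behind that point, as a quick sketch (the $700 card price is an assumed placeholder; only the 5-15% scaling range comes from the article above):

```python
# Perf-per-dollar under the 5-15% SLI scaling claim.
# The $700 price is a placeholder; the scaling range is from the article above.

def fps_per_dollar(fps, price):
    return fps / price

single = fps_per_dollar(fps=100, price=700)   # one card, normalized to 100 fps
sli_lo = fps_per_dollar(fps=105, price=1400)  # +5% fps at double the price
sli_hi = fps_per_dollar(fps=115, price=1400)  # +15% fps at double the price

print(f"single card : {single:.3f} fps/$")
print(f"SLI (worst) : {sli_lo:.3f} fps/$ ({sli_lo / single - 1:+.0%})")
print(f"SLI (best)  : {sli_hi:.3f} fps/$ ({sli_hi / single - 1:+.0%})")
# Roughly 42-48% less performance per dollar, before the micro-stutter issue.
```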
I'm not impressed at all. I'll wait for the 2019 cards, especially since the 1080 Ti overclocks like a beast, as does the regular 1080, and they come very close to the stock scores of the 2080 and 2080 Ti.
System Name | Little Boy / New Guy |
---|---|
Processor | AMD Ryzen 9 5900X / Intel Core I5 10400F |
Motherboard | Asrock X470 Taichi Ultimate / Asus H410M Prime |
Cooling | ARCTIC Liquid Freezer II 280 A-RGB / ARCTIC Freezer 34 eSports DUO |
Memory | TeamGroup Zeus 2x16GB 3200MHz CL16 / TeamGroup 1x16GB 3000MHz CL18 |
Video Card(s) | Asrock Phantom RX 6800 XT 16GB / Asus RTX 3060 Ti 8GB DUAL Mini V2 |
Storage | Patriot Viper VPN100 Nvme 1TB / OCZ Vertex 4 256GB Sata / Ultrastar 2TB / IronWolf 4TB / WD Red 8TB |
Display(s) | Compumax MF32C 144Hz QHD / ViewSonic OMNI 27 144Hz QHD |
Case | Phanteks Eclipse P400A / Montech X3 Mesh |
Power Supply | Aresgame 850W 80+ Gold / Aerocool 850W Plus bronze |
Mouse | Gigabyte Force M7 Thor |
Keyboard | Gigabyte Aivia K8100 |
Software | Windows 10 Pro 64 Bits |
Lol. Yes... we are salty because we need to justify our purchase. You definitely got it.
https://www.forbes.com/sites/jasone...-results-reveal-pricing-problem/#7732da865124
"The GTX 1080 boasts an average 50% better performance than the GTX 980 at 1440p across 7 synthetic and in-game benchmarks. And it rises to the 4K challenge, kicking out an average 65% higher framerates across those same tests compared to the GTX 980. And its entry price is only 9% higher than its 900 series predecessor."
System Name | Thakk |
---|---|
Processor | i7-6700K @ 4.5GHz |
Motherboard | Gigabyte G1 Z170N ITX |
Cooling | H55 AIO |
Memory | 32GB DDR4 3100 c16 |
Video Card(s) | Zotac RTX3080 Trinity |
Storage | Corsair Force GT 120GB SSD / Intel 250GB SSD / Samsung Pro 512 SSD / 3TB Seagate SV32 |
Display(s) | Acer Predator X34 100hz IPS Gsync / HTC Vive |
Case | QBX |
Audio Device(s) | Realtek ALC1150 > Creative Gigaworks T40 > AKG Q701 |
Power Supply | Corsair SF600 |
Mouse | Logitech G900 |
Keyboard | Ducky Shine TKL MX Blue + Vortex PBT Doubleshots |
Software | Windows 10 64bit |
Benchmark Scores | http://www.3dmark.com/fs/12108888 |
If you are running at 1080p, that is. Some people have already moved past 1080p, just like we moved past 720p in 2008.
Neither. I'll sit on my 980 Ti. It still gives me great performance for my gaming needs. I've gotten 3 years out of it so far, and it looks like I'll easily get another 2-3 years out of the card (as long as it doesn't die on me). If I sorely feel the need to upgrade, I'll just steal back the second 980 Ti I have in my HTPC/Plex/gaming computer and run SLI again... curious how well that would work a few years from now...
Processor | Intel i5-12600k |
---|---|
Motherboard | Asus H670 TUF |
Cooling | Arctic Freezer 34 |
Memory | 2x16GB DDR4 3600 G.Skill Ripjaws V |
Video Card(s) | EVGA GTX 1060 SC |
Storage | 500GB Samsung 970 EVO, 500GB Samsung 850 EVO, 1TB Crucial MX300 and 2TB Crucial MX500 |
Display(s) | Dell U3219Q + HP ZR24w |
Case | Raijintek Thetis |
Audio Device(s) | Audioquest Dragonfly Red :D |
Power Supply | Seasonic 620W M12 |
Mouse | Logitech G502 Proteus Core |
Keyboard | G.Skill KM780R |
Software | Arch Linux + Win10 |
You are still so wrong... And here we are with the reviews' results that came out a few hours ago. Comparing custom 1080s and custom 1080 Tis vs the factory-overclocked FE 2080s and 2080 Tis, we have less than a 25% difference. And I predicted that sort of difference simply because bigger dies cannot clock high enough above Pascal to get much more performance from clocks alone. And GDDR6 was obligatory so as not to constrain the higher core count from delivering the higher performance intended. So, with those core counts known, that increase was the almost certain one. Efficiency and IPC gains are close to 0%, after all.
So, a much higher price for much worse value for money (VFM) cannot become a market success if customers are rational, eh?
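To make the VFM claim concrete, a rough sketch, taking the "less than 25%" figure at face value and using launch list prices from memory (GTX 1080 Ti $699, RTX 2080 Ti FE $1,199; treat both as assumptions and substitute current street prices):

```python
# Value-for-money change, top card vs top card.
# Prices are assumed launch list prices; perf gain is the ~25% figure above.
old_perf, old_price = 1.00, 699.0   # GTX 1080 Ti baseline
new_perf, new_price = 1.25, 1199.0  # RTX 2080 Ti FE, +25% taken at face value

vfm_change = (new_perf / new_price) / (old_perf / old_price) - 1
print(f"performance per dollar: {vfm_change:+.0%}")  # about -27%
```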
Processor | AMD Ryzen 5 5600@80W |
---|---|
Motherboard | MSI B550 Tomahawk |
Cooling | ZALMAN CNPS9X OPTIMA |
Memory | 2*8GB PATRIOT PVS416G400C9K@3733MT_C16 |
Video Card(s) | Sapphire Radeon RX 6750 XT Pulse 12GB |
Storage | Sandisk SSD 128GB, Kingston A2000 NVMe 1TB, Samsung F1 1TB, WD Black 10TB |
Display(s) | AOC 27G2U/BK IPS 144Hz |
Case | SHARKOON M25-W 7.1 BLACK |
Audio Device(s) | Realtek 7.1 onboard |
Power Supply | Seasonic Core GC 500W |
Mouse | Sharkoon SHARK Force Black |
Keyboard | Trust GXT280 |
Software | Win 7 Ultimate 64bit/Win 10 pro 64bit/Manjaro Linux |
Only results matter for performance when making a buy-or-not decision, and that is the main way to compare hardware. Only someone who uses just a program or two that gain more than the average needs to check those specifically. And "on average" means some games gain less than 25% (mainly DX11 ones) and others more (mainly DX12 or Vulkan ones, as hardware async compute is there now). Conclusion: the VFM of Turing GPUs at today's prices is awful. And your arguments tend to make it worse for Nvidia. It is simply a new generation on almost the same manufacturing process, which cannot give more without a bigger die. Simple as that. Nobody needs to spend so much now, apart from those indifferent to cost who just want the fastest PC hardware possible at any time. They will lose a lot of money though, as the Turing GPUs will imho see big price cuts in a few (2-3) months, and until then RTX might still be missing.

You are still so wrong...
For a chip to be 25% faster than another on average, it must be way more than 25% faster when fully loaded, simply because it cannot be any faster while neither chip is fully loaded. I'm just stating the obvious here; if you were truly interested in Turing (as opposed to the "I know without reading" attitude), you would have read Anand's in-depth analysis.
And if the above seems overly complicated to you, think about cars: can you shorten the travel time between town A and B by 25% by using a car that's only 25% faster than the reference?
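That argument is just Amdahl's law; here's a minimal sketch (the 75% GPU-bound share and the speedup figures are made-up illustrations, not measurements):

```python
# Amdahl's law: only the GPU-bound share of frame time benefits from a faster GPU.
# The 75% share and the speedup figures below are illustrative assumptions.

def overall_speedup(gpu_bound_fraction, gpu_speedup):
    """Average speedup when only part of the frame time scales with the GPU."""
    return 1.0 / ((1.0 - gpu_bound_fraction) + gpu_bound_fraction / gpu_speedup)

# To average 25% faster when only 75% of frame time is GPU-bound,
# the chip must be roughly 36% faster while fully loaded:
print(f"{overall_speedup(0.75, 1.36):.2f}x")  # ~1.25x

# And the car analogy: a car that's 25% faster cuts travel time by only 20%,
# even before speed limits (the CPU-bound stretches) cap it further.
print(f"time saved: {1 - 1 / 1.25:.0%}")  # 20%
```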