
Choose R9 290 Series for its 512-bit Memory Bus: AMD

Joined
Dec 6, 2005
Messages
10,885 (1.56/day)
Location
Manchester, NH
System Name Senile
Processor I7-4790K@4.8 GHz 24/7
Motherboard MSI Z97-G45 Gaming
Cooling Be Quiet Pure Rock Air
Memory 16GB 4x4 G.Skill CAS9 2133 Sniper
Video Card(s) GIGABYTE Vega 64
Storage Samsung EVO 500GB / 8 Different WDs / QNAP TS-253 8GB NAS with 2x10Tb WD Blue
Display(s) 34" LG 34CB88-P 21:9 Curved UltraWide QHD (3440*1440) *FREE_SYNC*
Case Rosewill
Audio Device(s) Onboard + HD HDMI
Power Supply Corsair HX750
Mouse Logitech G5
Keyboard Corsair Strafe RGB & G610 Orion Red
Software Win 10

I'm no fan of NVidia, but looking at some of the TPU reviews by W1zzard, the 970 and 980 really shine at idle: around 5 W for a 970 vs. 40 W for a 290. They're well ahead on power consumption overall, and they got the pricing right.

Circling back to the Titan vs. 290x, who was winning then? Duh...
 

Aquinus

Resident Wat-man
Joined
Jan 28, 2012
Messages
13,171 (2.80/day)
Location
Concord, NH, USA
System Name Apollo
Processor Intel Core i9 9880H
Motherboard Some proprietary Apple thing.
Memory 64GB DDR4-2667
Video Card(s) AMD Radeon Pro 5600M, 8GB HBM2
Storage 1TB Apple NVMe, 4TB External
Display(s) Laptop @ 3072x1920 + 2x LG 5k Ultrafine TB3 displays
Case MacBook Pro (16", 2019)
Audio Device(s) AirPods Pro, Sennheiser HD 380s w/ FIIO Alpen 2, or Logitech 2.1 Speakers
Power Supply 96w Power Adapter
Mouse Logitech MX Master 3
Keyboard Logitech G915, GL Clicky
Software MacOS 12.1
I'm no fan of NVidia, but looking at some of the TPU reviews by W1zzard, the 970 and 980 really shine at idle: around 5 W for a 970 vs. 40 W for a 290. They're well ahead on power consumption overall, and they got the pricing right.

Circling back to the Titan vs. 290x, who was winning then? Duh...
Another reason why I'm considering a 970. AMD's multi-monitor idle consumption is garbage in comparison until you get down to the R7 cards. Considering I'm writing code most of the time on my machine, saving 50 watts or so over what I have now would be tangible over time, since my tower is on about 14-16 hours a day and the GPUs aren't loaded roughly 90% of that time.
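Since the argument is about idle power adding up over long on-hours, here's a quick back-of-envelope sketch (the 50 W delta and ~15 h/day come from the post above; the electricity rate is just an assumed illustrative figure):

```python
# Back-of-envelope idle power savings -- illustrative numbers, not measurements
watts_saved = 50        # assumed idle-power delta between the two cards
hours_per_day = 15      # tower is on ~14-16 h/day
price_per_kwh = 0.15    # hypothetical electricity rate, USD/kWh

kwh_per_year = watts_saved / 1000 * hours_per_day * 365
cost_per_year = kwh_per_year * price_per_kwh
print(f"~{kwh_per_year:.0f} kWh/year, roughly ${cost_per_year:.0f}/year")
# -> ~274 kWh/year, roughly $41/year
```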

All in all, I think we can all agree that the GTX 970/980 are ahead of the curve because they're new technology. AMD is behind because they haven't released anything new for quite some time. I will change my stance if they release something new, but until then I just see an aging lineup next to a cutting-edge one offered by nVidia.
 

the54thvoid

Super Intoxicated Moderator
Staff member
Joined
Dec 14, 2009
Messages
13,119 (2.39/day)
Location
Glasgow - home of formal profanity
Processor Ryzen 7800X3D
Motherboard MSI MAG Mortar B650 (wifi)
Cooling be quiet! Dark Rock Pro 4
Memory 32GB Kingston Fury
Video Card(s) Gainward RTX4070ti
Storage Seagate FireCuda 530 M.2 1TB / Samsumg 960 Pro M.2 512Gb
Display(s) LG 32" 165Hz 1440p GSYNC
Case Asus Prime AP201
Audio Device(s) On Board
Power Supply be quiet! Pure POwer M12 850w Gold (ATX3.0)
Software W10
Joined
Dec 31, 2009
Messages
19,371 (3.54/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
When AMD learns to optimize energy consumption and make a decent stock cooling system, then Nvidia will have competition. Today AMD has stratospheric energy consumption, poorly optimized drivers, and ridiculously high temperatures (I have a CrossFireX setup).
Give them time to get their next gen out and see what happens. This is a new arch from NVIDIA, while AMD's has been out for quite some time now. ;)

And lol @ AMD with its 512-bit bus that matters to the .01% of people that rock 4K or 3x 4K monitors... Oye. What a marketing machine they are, preying on the ignorance of the consumer (ok, both have done this, to be fair).
 

VictorLG

New Member
Joined
Dec 3, 2014
Messages
1 (0.00/day)
Personally I don't think AMD is the only camp with a driver issue. Both camps in my opinion are equally meh.
I have two systems: one with an R9 280X and HD 7970 in CrossFire, and one with a new MSI Gold (Bronze) edition GTX 970.

The GTX 970 has been having a lot of problems over DisplayPort with screen tearing after coming back from sleep. Google "GTX 900 DisplayPort tearing/black screen"; a lot of people have the same problem. And sometimes switching from Nvidia Surround to normal triple-monitor mode, or vice versa, causes a BSOD on Windows 7.

As for the HD 7970, I wouldn't say AMD has better or flawless drivers; we all know they don't. But I don't see Nvidia's drivers as superior in any way.

So I think driver and feature wise, both camps are equally meh.


Your problems with the 970 are MSI's fault, not Nvidia's. They even stated that there will be a new BIOS release to fix some of the problems, especially the fan rotation and output ones.

I'm quite disappointed with MSI and AMD video cards; drivers for the 290 series are bad, and I had hardware problems too.

I migrated back to Nvidia (EVGA GTX 980 SC) and now I'm quite happy with my gaming experience again.
 
Joined
Mar 6, 2011
Messages
155 (0.03/day)
Lots of people are screaming that AMD is done for and unrecoverably far behind, based on the relative performance between the 290/290X and 970/980 (neither being true).

It'll be interesting to see what they think when NVIDIA legitimately have zero answer for AMD's next cards for more than 12 months.
 
Joined
Dec 31, 2009
Messages
19,371 (3.54/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
f2bmp said:
You call me very selective, yet you then showcase a chart with averages from anandtech.
That's from here... not Anandtech... ;)

And your comparison is asinine as it's not even comparing the same damn game. In order to make the comparison empirical and have ANY value, they need to be tested across the exact same thing. ;)

Lots of people are screaming that AMD is done for and unrecoverably far behind, based on the relative performance between the 290/290X and 970/980 (neither being true).

It'll be interesting to see what they think when NVIDIA legitimately have zero answer for AMD's next cards for more than 12 months.
An answer? They one-up themselves every time something new comes out. Occasionally each answers with a mid-gen bump (think 7970 GHz Edition or 780 Ti, etc.). That debate, to me, is hilarious because both sides can be right; it just depends on what the poster thinks was released 'first' and what was the 'response'...
 
Last edited:
Joined
Mar 6, 2011
Messages
155 (0.03/day)
That's from here... not Anandtech... ;)

And your comparison is asinine as it's not even comparing the same damn game. In order to make the comparison empirical and have ANY value, they need to be tested across the exact same thing. ;)

An answer? They one-up themselves every time something new comes out. Occasionally each answers with a mid-gen bump (think 7970 GHz Edition or 780 Ti, etc.). That debate, to me, is hilarious because both sides can be right; it just depends on what the poster thinks was released 'first' and what was the 'response'...

That can't happen this time. The big marketing spiel for new cards is high resolutions, namely 4K and 2560x1440. At high resolutions especially, the HBM cards will blow GDDR cards out of the water. NVIDIA backed the wrong horse and had to ditch their stacked memory plans. They've been redesigning their future architectures to use the AMD-designed HBM... for some time in 2016.

Also, if AMD do turn out to be using 20nm ... that'll be a disaster for NVIDIA. They don't have any designs that can launch on 20nm anymore.

This is the first time in many years that there will be a big inter-generational leap in performance, and the first time the other firm won't be able to catch up for a long time.
 
Joined
Dec 31, 2009
Messages
19,371 (3.54/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
2560x1440/1600 doesn't really need HBM. 4K, ok. But hell, 256-bit cards plow through 2560x1440 with plenty of AA (assuming they have the VRAM capacity to support it). Not to mention the efficiency improvements of Maxwell's memory architecture, which offers a fair amount more effective bandwidth due to its updated memory compression.
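To put rough numbers on the raw-bandwidth side of that argument (a quick sketch; the 7 Gbps and 5 Gbps effective data rates are typical figures for a GTX 980-class and R9 290X-class card, used here only for illustration):

```python
# Raw memory bandwidth = bus width (in bytes) x effective data rate.
# Maxwell's delta color compression raises *effective* bandwidth beyond these raw numbers.
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

print(bandwidth_gb_s(256, 7.0))  # 224.0 GB/s -- 256-bit GDDR5 @ 7 Gbps (GTX 980-class)
print(bandwidth_gb_s(512, 5.0))  # 320.0 GB/s -- 512-bit GDDR5 @ 5 Gbps (R9 290X-class)
```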

Also, if AMD do turn out to be using 20nm ... that'll be a disaster for NVIDIA. They don't have any designs that can launch on 20nm anymore.
Wouldn't TSMC and NVIDIA's launch timing have something to do with it? I recall TSMC having delays in moving to their 20nm node, essentially forcing NVIDIA to design Maxwell on 28nm instead of 20nm. The 980 and 970, much like the 670/680, were not the 'full' core implementations. I would imagine there are full Maxwell chips upcoming. While those may be more of an incremental improvement, that still leaves AMD with what I imagine to be around a 15-20% performance gap to close. While that isn't impossible, they need to bring their big-boy pants to the table with their new generation. That said, here's hoping we see that. :)

This is the first time in many years that there will be a big inter-generational leap in performance, and the first time the other firm won't be able to catch up for a long time.
Only time will tell, but, I haven't seen much to make me believe that will happen... but again, I hope so for the sake of competition and innovation. :)
 
Last edited:
Joined
Mar 24, 2012
Messages
533 (0.11/day)
Lots of people are screaming that AMD is done for and unrecoverably far behind, based on the relative performance between the 290/290X and 970/980 (neither being true).

It'll be interesting to see what they think when NVIDIA legitimately have zero answer for AMD's next cards for more than 12 months.

Wow. Do you have facts to back up that statement? Or just your delusional assumption?

As usual, AMD marketing is fun to watch, but I think this one is still okay. It's better than "you guys should hold off on buying the 900 series because we are the future of gaming and our 285 is faster than the GTX 760."
 
Last edited by a moderator:

qubit

Overclocked quantum bit
Joined
Dec 6, 2007
Messages
17,865 (2.87/day)
Location
Quantum Well UK
System Name Quantumville™
Processor Intel Core i7-2700K @ 4GHz
Motherboard Asus P8Z68-V PRO/GEN3
Cooling Noctua NH-D14
Memory 16GB (2 x 8GB Corsair Vengeance Black DDR3 PC3-12800 C9 1600MHz)
Video Card(s) MSI RTX 2080 SUPER Gaming X Trio
Storage Samsung 850 Pro 256GB | WD Black 4TB | WD Blue 6TB
Display(s) ASUS ROG Strix XG27UQR (4K, 144Hz, G-SYNC compatible) | Asus MG28UQ (4K, 60Hz, FreeSync compatible)
Case Cooler Master HAF 922
Audio Device(s) Creative Sound Blaster X-Fi Fatal1ty PCIe
Power Supply Corsair AX1600i
Mouse Microsoft Intellimouse Pro - Black Shadow
Keyboard Yes
Software Windows 10 Pro 64-bit
Forget all these hires benchmarks for a moment, I wanna see one at 1024x768 just for giggles. :p I want to see framerates at 500-1000fps in some old game to demonstrate just how far graphics performance has come.
 
Joined
Jul 10, 2011
Messages
798 (0.16/day)
Processor Intel
Motherboard MSI
Cooling Cooler Master
Memory Corsair
Video Card(s) Nvidia
Storage Western Digital/Kingston
Display(s) Samsung
Case Thermaltake
Audio Device(s) On Board
Power Supply Seasonic
Mouse Glorious
Keyboard UniKey
Software Windows 10 x64
So 24 wheels is better than 4, right? Right?


 
Joined
Mar 31, 2012
Messages
862 (0.19/day)
Location
NL
System Name SIGSEGV
Processor AMD Ryzen 9 9950X
Motherboard MSI MEG ACE X670E
Cooling Noctua NF-A14 IndustrialPPC Fan 3000RPM | Arctic P14 MAX
Memory Fury Beast 64 Gb CL30
Video Card(s) TUF 4090 OC
Storage 1TB 7200/256 SSD PCIE | ~ TB | 970 Evo | WD Black SN850X 2TB
Display(s) 27" /34"
Case O11 EVO XL
Audio Device(s) Realtek
Power Supply FSP Hydro TI 1000
Mouse g402
Keyboard Leopold|Ducky
Software LinuxMint
Benchmark Scores i dont care about scores
Also, if AMD do turn out to be using 20nm ... that'll be a disaster for NVIDIA. They don't have any designs that can launch on 20nm anymore.

I doubt AMD will use a 20nm node on their next-gen GPU. In my opinion, AMD will jump to Samsung's 14nm FinFET in Q2 or Q3 2015 instead of TSMC's 16nm FinFET.

AMD's current GPUs remain competitive (at their current price/performance), and only green-team warriors say otherwise.
 
Joined
May 13, 2008
Messages
762 (0.13/day)
System Name HTPC whhaaaat?
Processor 2600k @ 4500mhz
Motherboard Asus Maximus IV gene-z gen3
Cooling Noctua NH-C14
Memory Gskill Ripjaw 2x4gb
Video Card(s) EVGA 1080 FTW @ 2037/11016
Storage 2x512GB MX100/1x Agility 3 128gb ssds, Seagate 3TB HDD
Display(s) Vizio P 65'' 4k tv
Case Lian Li pc-c50b
Audio Device(s) Denon 3311
Power Supply Corsair 620HX
When HBM technology makes it into Radeon, the memory bandwidth problem is cleared up; there is no need to increase the secondary (L2) cache the way Maxwell does.

Except if the R9 390X is indeed 4096sp, 512 GB/s (which would be 4 x 1GB HBM stacks operating at 128 GB/s each) would really only be good up to around 1120 MHz (if it's like Hawaii), or around 1200 MHz if using the compression tech we saw in the R9 285. With or without factoring in scaling (96-97%), that doesn't touch big Maxwell (at probably a fairly similar die size, or granted, slightly smaller if it's Fiji on the same process)... and you can bet your butt we'll see a '770'-like GM204 (or a really weak-sauce butchered big Maxwell SKU) if its stock clock is 1 GHz. While this method for bandwidth would work for a 28 or even 20nm part using their current arch, compared to what is possible on 16nm it's not nearly enough if they want to actually compete.

The reason is 4096sp generally won't be used to its full extent in core gameplay, closer to ~3800 (just as you saw with 280X vs. GK104, or 7950/280 vs. 7970/280X scaling on a half scale), and when you figure whatever that number is divided by 2560 effective units in GM204, and the fact it can do 1500 MHz BECAUSE of having such a secondary cache... that ain't good. Btw, this is why big Maxwell is essentially 3840 units ([128sp+32sfu]*24), the same way GK104 was essentially 1792 ([192sp+32sfu]*8)... because the optimal count for 32/64 ROPs is right around there. Slightly higher in GK104's case (and hence why the 280X was slightly faster per clock), but that was a fairly small chip and could expect decent yields. Slightly lower in big Maxwell's case, but I'd be willing to bet most parts sold will be under that threshold (which is still less than one shader module).
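For reference, the arithmetic behind the figures quoted above (just a sanity check of the numbers as this post frames them, counting SFUs as "effective" units alongside the shaders):

```python
# Numbers as framed in the post above -- SP cores plus SFUs per SM, times SM count
hbm_total_gb_s  = 4 * 128            # 4 HBM stacks x 128 GB/s each  -> 512 GB/s
gm200_effective = (128 + 32) * 24    # big Maxwell: 24 SMMs          -> 3840 units
gk104_effective = (192 + 32) * 8     # GK104: 8 SMXs                 -> 1792 units
print(hbm_total_gb_s, gm200_effective, gk104_effective)  # 512 3840 1792
```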

What's unfortunate is that while excessive compute and high bandwidth are good for certain things (like TressFX, etc.), it's still generally a better play to have fewer units than what the ROPs can handle in most core gaming situations, as it's more power/die/bandwidth efficient (again, see GK104 vs. 280X), and, if need be, scale the core clock so all units (texture, ROPs, etc.) perform at an optimal ratio. If we essentially get a 2x 280X just because AMD has the bandwidth to do so (and clock speeds won't allow a more efficient core config with a higher clock to saturate it, similar to their more recent bins that generally do ~1100 MHz), they are kind of missing the big picture in an effort to pull out all the stops and create something slightly faster through brute force... It'll be Tahiti vs. GK104 all over again, on a literally slightly larger scale.

All they are doing is moving the goalposts with CUs and bandwidth, more or less as they have since R600, when a fundamental efficiency change is sorely needed. I'm talking about something like when they went to VLIW4 instead of VLIW5 (when the average call was 3.44sp), the move to 4x16 with a better scheduler, or, to a lesser extent, what they did with compression in the 285. Even if the bandwidth problem is solved for another generation (and even that's arguable when more than 4GB is quickly going to become normal and HBM won't see that for a year or more, not to mention GM200 will literally be out of their league if it's on the same process), the fundamental issue is the lack of architectural evolution to cope with outside factors (bandwidth, process node capabilities) not lining up with what they currently have. Some of that is probably tied to and hamstrung by their ecosystem (HSA, APUs, etc.), but I still think it primarily comes down to a lack of resources in their engineering department over the last couple of years.

I truly think Roy knows all this (that they currently are in a bad place and the immediate future doesn't appear fabulous either), but his job is his job, and I respect that.
 
Last edited:
Joined
Dec 31, 2009
Messages
19,371 (3.54/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
Forget all these hires benchmarks for a moment, I wanna see one at 1024x768 just for giggles. :p I want to see framerates at 500-1000fps in some old game to demonstrate just how far graphics performance has come.
hires?

You would need a helluva overclocked CPU to reach those speeds, as that is not remotely a GPU-limited resolution.

Run 3dmk 01 though. ;)
 

qubit

Overclocked quantum bit
Joined
Dec 6, 2007
Messages
17,865 (2.87/day)
Location
Quantum Well UK
System Name Quantumville™
Processor Intel Core i7-2700K @ 4GHz
Motherboard Asus P8Z68-V PRO/GEN3
Cooling Noctua NH-D14
Memory 16GB (2 x 8GB Corsair Vengeance Black DDR3 PC3-12800 C9 1600MHz)
Video Card(s) MSI RTX 2080 SUPER Gaming X Trio
Storage Samsung 850 Pro 256GB | WD Black 4TB | WD Blue 6TB
Display(s) ASUS ROG Strix XG27UQR (4K, 144Hz, G-SYNC compatible) | Asus MG28UQ (4K, 60Hz, FreeSync compatible)
Case Cooler Master HAF 922
Audio Device(s) Creative Sound Blaster X-Fi Fatal1ty PCIe
Power Supply Corsair AX1600i
Mouse Microsoft Intellimouse Pro - Black Shadow
Keyboard Yes
Software Windows 10 Pro 64-bit
Oh, just try the original Unreal Tournament from 1999 on modern high end hardware. It really does reach framerates like that and it's so fast, that the game's speed actually varies erratically and looks quite ridiculous. :laugh:

Modern games of course wouldn't go that fast.

@Recus Both of those pictures are pretty cool. :)
 
Joined
Jun 13, 2012
Messages
1,409 (0.31/day)
Processor i7-13700k
Motherboard Asus Tuf Gaming z790-plus
Cooling Coolermaster Hyper 212 RGB
Memory Corsair Vengeance RGB 32GB DDR5 7000mhz
Video Card(s) Asus Dual Geforce RTX 4070 Super ( 2800mhz @ 1.0volt, ~60mhz overlock -.1volts)
Storage 1x Samsung 980 Pro PCIe4 NVme, 2x Samsung 1tb 850evo SSD, 3x WD drives, 2 seagate
Display(s) Acer Predator XB273u 27inch IPS G-Sync 165hz
Audio Device(s) Logitech Z906 5.1
Power Supply Corsair RMx Series RM850x (OCZ Z series PSU retired after 13 years of service)
Mouse Logitech G502 hero
Keyboard Logitech G710+
Anyone that says Nvidia has no answer for AMD's GPUs for 12 months and has zero proof to back that statement up: you, sir, are a complete AMD tool.

Anyone that says Nvidia has no 20nm GPU and that it will be a disaster if AMD goes with it, and likewise has zero proof to back their statement: you, sir, are also a complete AMD tool.
 
Joined
Dec 31, 2009
Messages
19,371 (3.54/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
Oh, just try the original Unreal Tournament from 1999 on modern high end hardware. It really does reach framerates like that and it's so fast, that the game's speed actually varies erratically and looks quite ridiculous. :laugh:

Modern games of course wouldn't go that fast.

@Recus Both of those pictures are pretty cool. :)
Ahh, it takes a 15 year old game that the iGPU could run that fast to make that point. Gotcha.
 
Last edited:
Joined
Mar 6, 2011
Messages
155 (0.03/day)
Anyone that says Nvidia has no answer for AMD's GPUs for 12 months and has zero proof to back that statement up: you, sir, are a complete AMD tool.

Anyone that says Nvidia has no 20nm GPU and that it will be a disaster if AMD goes with it, and likewise has zero proof to back their statement: you, sir, are also a complete AMD tool.

They don't. It's a fact. Everyone knew they were a bit behind AMD with stacked memory anyway. However, in 2013, when they cancelled HMC entirely and decided to shift to the AMD-designed, Hynix-backed HBM, we knew for sure that unless AMD delayed their HBM products enormously, NVIDIA wouldn't be able to compete for a while. HMC Volta was canned and replaced with HBM Pascal, which is tentatively scheduled for H2 '16.

NVIDIA have no 20nm. It's a fact. We don't know if AMD do. Personally I think it's unlikely, but it may transpire.
 
Joined
Dec 31, 2009
Messages
19,371 (3.54/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
NVIDIA have no 20nm. It's a fact.
You mention it's a fact, but... how do you know it's a fact? You haven't supported that assertion with any links.

As I said, NVIDIA was planning on the shrink for Maxwell, but TSMC not being ready delayed it, essentially forcing them to stick with 28nm for this release. Not resting on their laurels, I think they did a pretty damn good job of improving IPC, memory efficiency, and power consumption on that same process to bring out the well-received 900 series. With that in mind, I think NVIDIA will be in a position to catch up sooner rather than later IF AMD brings a game changer to the table.

Remember, NVIDIA has also brought a die shrink to the same platform before (the GTX 260 going from 65nm to 55nm, IIRC). Who's to say they don't have the 20nm plans still on the shelf, ready to go?
 
Joined
Aug 2, 2011
Messages
1,458 (0.30/day)
Processor Ryzen 9 7950X3D
Motherboard MSI X670E MPG Carbon Wifi
Cooling Custom loop, 2x360mm radiator,Lian Li UNI, EK XRes140,EK Velocity2
Memory 2x16GB G.Skill DDR5-6400 @ 6400MHz C32
Video Card(s) EVGA RTX 3080 Ti FTW3 Ultra OC Scanner core +750 mem
Storage MP600 Pro 2TB,960 EVO 1TB,XPG SX8200 Pro 1TB,Micron 1100 2TB,1.5TB Caviar Green
Display(s) Alienware AW3423DWF, Acer XB270HU
Case LianLi O11 Dynamic White
Audio Device(s) Logitech G-Pro X Wireless
Power Supply EVGA P3 1200W
Mouse Logitech G502X Lightspeed
Keyboard Logitech G512 Carbon w/ GX Brown
VR HMD HP Reverb G2 (V2)
Software Win 11

How about comparisons for GPUs running at stock speeds? That's what both AMD and nVidia spec; it's hardly their fault if the board partners are running the GPUs out of spec.
 

64K

Joined
Mar 13, 2014
Messages
6,773 (1.72/day)
Processor i7 7700k
Motherboard MSI Z270 SLI Plus
Cooling CM Hyper 212 EVO
Memory 2 x 8 GB Corsair Vengeance
Video Card(s) Temporary MSI RTX 4070 Super
Storage Samsung 850 EVO 250 GB and WD Black 4TB
Display(s) Temporary Viewsonic 4K 60 Hz
Case Corsair Obsidian 750D Airflow Edition
Audio Device(s) Onboard
Power Supply EVGA SuperNova 850 W Gold
Mouse Logitech G502
Keyboard Logitech G105
Software Windows 10
If AMD goes to the 20nm process with their GPUs, then I don't see how Nvidia can compete with them by staying on the 28nm process, but maybe Maxwell is efficient enough that they can. I have heard the rumors too that Nvidia is going to wait until next year to go to the 16nm process, but how the hell will TSMC be ready for that when they couldn't get the 20nm process down? I don't know. There are some crazy rumors flying around. Here's one of them:

http://www.kitguru.net/components/g...0nm-process-technology-jump-straight-to-16nm/
 
Joined
Dec 31, 2009
Messages
19,371 (3.54/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
but maybe Maxwell is efficient enough that they can.
As I posted above, look what they did with it already on the 980... several percent faster than the 780 Ti, with a narrower memory bus and fewer CUDA cores, and it uses almost 33% less (~100 W less) power than a 780 Ti. Perhaps they've wrung the rag dry, though...?
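Taking those claims at face value (roughly +5% performance and about a third less power, both just the figures cited in the post above, not measured numbers), the perf-per-watt gain works out to something like this:

```python
# Rough perf-per-watt estimate using the post's claimed figures -- illustrative only
perf_ratio = 1.05          # "several %" faster than a 780 Ti, assumed ~5%
power_ratio = 1 - 0.33     # "almost 33% less" power
perf_per_watt_gain = perf_ratio / power_ratio
print(f"~{(perf_per_watt_gain - 1) * 100:.0f}% better performance per watt")  # ~57%
```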
 
Last edited:
Joined
Feb 18, 2006
Messages
5,147 (0.75/day)
Location
AZ
System Name Thought I'd be done with this by now
Processor i7 11700k 8/16
Motherboard MSI Z590 Pro Wifi
Cooling Be Quiet Dark Rock Pro 4, 9x aigo AR12
Memory 32GB GSkill TridentZ Neo DDR4-4000 CL18-22-22-42
Video Card(s) MSI Ventus 2x Geforce RTX 3070
Storage 1TB MX300 M.2 OS + Games, + cloud mostly
Display(s) Samsung 40" 4k (TV)
Case Lian Li PC-011 Dynamic EVO Black
Audio Device(s) onboard HD -> Yamaha 5.1
Power Supply EVGA 850 GQ
Mouse Logitech wireless
Keyboard same
VR HMD nah
Software Windows 10
Benchmark Scores no one cares anymore lols
So 24 wheels is better than 4, right? Right?


If you're transporting something larger than a thumb drive, yes yes it is.

The 512-bit bus is nice and all, but obviously memory bandwidth hasn't really been an issue for a long time now. The superior performance of the 970 and 980 in most cases makes them the better buy. But your comparison fails; you're comparing things of very different natures.
 