Friday, September 27th 2013
Radeon R9 290X Could Strike the $599.99 Price-point
AMD's next-generation flagship graphics card, the Radeon R9 290X, could strike a US $599.99 (or 499.99€, £399.99 before taxes) price-point, turning up the heat on the more expensive offerings by NVIDIA - GeForce GTX 780 and GTX TITAN. The card should be available from mid-October. Based on the new 28 nm "Hawaii" silicon, the card is expected to feature 2,816 GCN stream processors, spread across 44 compute units (four SIMDs each). Other specifications include 176 TMUs, 44 ROPs, and a 512-bit wide GDDR5 memory interface, holding 4 GB of memory, which likely achieves its >300 GB/s memory bandwidth with a 5.00 GHz effective memory clock. The company is expected to launch 6 GB variants of the card a little later.
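The quoted bandwidth is straightforward to sanity-check: a 512-bit bus moves 64 bytes per transfer, and GDDR5 at a 5.00 GHz effective data rate makes five billion transfers per second. A minimal sketch in Python, assuming the rumored figures:

```python
# Theoretical GDDR5 bandwidth = (bus width in bits / 8) * effective data rate
bus_width_bits = 512          # rumored Hawaii memory interface
effective_rate_gtps = 5.0     # 5.00 GHz effective (GDDR5 quad data rate)

bandwidth_gbps = bus_width_bits / 8 * effective_rate_gtps
print(f"{bandwidth_gbps:.0f} GB/s")  # 320 GB/s, above the quoted >300 GB/s
```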
Source:
Softpedia
95 Comments on Radeon R9 290X Could Strike the $599.99 Price-point
It conceivably has more to do with the memory controllers within the chip and how they're implemented by either side. AMD figures the cost of a wide bus offsets the cost of the memory needed to reach that spec. Perhaps the supplier volume wasn't there for both NVIDIA and AMD (NVIDIA got there first, and AMD realized they were going to need boatloads), so it was smarter to go wider and offer 4 GB with more bandwidth. But can they do that quickly and then sell it for less...?
In your favour though, adequate cooling is required on the card to maintain clocks and keep the VRMs cool. Titans need better coolers (ACX, etc.) for consistently high clocks. I do forget sometimes that my card is under water. I figure if you buy a Titan, you buy water cooling too :laugh:
Anyway, isn't this about the fabled, mystical R9 290X card? I'd really like to see the bare PCB and what AMD have built. There's no point having an awesome new chip and deviously good APIs coming if the reference card is a piece of crap. Let's have some robust, solid chokes and voltage circuitry that can take a beating.
And not that bloody blower fan......
I really like 6 GB of VRAM.
I really like the presence of FP64 capability.
I wouldn't be interested in nerfed hardware. I never was.
We need to move forward, not backward.
GTX 780 = 2304 shaders, 3GB GDDR5, 1:24 FP64
Titan = 2688 shaders, 6GB GDDR5, 1:3 FP64
Other possible combinations are therefore:
2688 shaders, 3 GB GDDR5, 1:3 FP64, and 2688 shaders, 6 GB GDDR5, 1:24 FP64
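The two SKUs above fix both knobs at once; the remaining combinations are just the cross product of memory size and FP64 rate at the same shader count. A quick enumeration (purely illustrative, these are not announced products):

```python
from itertools import product

shaders = 2688  # Titan-level shader count from the post
combos = [(shaders, vram, fp64) for vram, fp64 in product([3, 6], ["1:3", "1:24"])]
for s, vram, fp64 in combos:
    print(f"{s} shaders, {vram} GB GDDR5, {fp64} FP64")
```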
You could also add 7 Gbps effective memory, if the GK110's memory controller could be QA'ed for that speed. Running out of spec on OC'ed cards is generally a whole lot different from reference validation.

So buy the 6 GB version. Just as you like 6 GB, isn't it conceivable that someone else would be happy to sacrifice 3 GB for a 30-40% reduction in price? Same argument. See above.

Titan: 2688 shaders... K6000: 2880 shaders. The Titan is a salvage part, so you're lusting over a nerfed part already.

Tell that to Jen-Hsun and Rory, and provide an alternative income stream for them to recoup their loss of ROI.
It's a nice idea...but basically an idealized scenario totally divorced from reality.
HD 7870XT (Tahiti LE) 75% enabled die (shaders). Introduced 5 months after the fully enabled part.
GTX 660Ti (GK104-300) 88% enabled die (shaders). Introduced 5 months after the fully enabled part.
HD 6930 (Cayman CE) 83% enabled die (shaders). Introduced 12 months after the fully enabled part.
GTX 560Ti 448SP (GF110-270) 88% enabled die (shaders). Introduced 13 months after the fully enabled part.
HD 5830 (Cypress LE) 70% enabled die (shaders). Introduced 5 months after the fully enabled part.
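The enabled-die percentages in that list can be reproduced from the widely reported shader counts of each salvage part versus its fully enabled sibling (counts below are from public spec sheets, rounded as in the post):

```python
# (salvage part, enabled shaders, shaders on the full die)
salvage_parts = [
    ("HD 7870XT (Tahiti LE)",       1536, 2048),  # full die: HD 7970
    ("GTX 660 Ti (GK104-300)",      1344, 1536),  # full die: GTX 680
    ("HD 6930 (Cayman CE)",         1280, 1536),  # full die: HD 6970
    ("GTX 560 Ti 448 (GF110-270)",   448,  512),  # full die: GTX 580 (CUDA cores)
    ("HD 5830 (Cypress LE)",        1120, 1600),  # full die: HD 5870
]
for name, enabled, full in salvage_parts:
    print(f"{name}: {enabled / full:.0%} of shaders enabled")
```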
www.overclock.net/t/1429858/taobao-asus-radeon-r9-290x-bundled-with-bf4-735
Probably not very credible, but still something to look at.
According to this site the R9 290X is $839.83, but as a comparison, the Gigabyte GTX 780 WF3 OC is $814 and the Asus DC2OC is $821.
If nothing else, it says that the etailer isn't one that will lure customers from Newegg!
Most cards have 8, 16, 24, 32, 40, or 48 ROPs, usually a multiple of 8;
44 doesn't really fit in with that.
Also, ROPs and CUs are not the same thing: ROPs are the render output units that write pixels, while CUs are the compute units that contain the shaders.
A 384-bit controller was mentioned here, and you were kinda angry about how AMD could not explain this at the conference. Now I see a bunch of comments got deleted and it mentions 512-bit?
I expect 80% of NVIDIA's performance for a lower price, or 5% better for the same price.
640 shaders, 16 ROPs = 40 shaders per ROP
1280 shaders, 32 ROPs = 40 shaders per ROP
2048 shaders, 32 ROPs = 64 shaders per ROP
2816 shaders, 44 ROPs = 64 shaders per ROP
So in terms of shader-to-ROP efficiency and its relation to performance, the new GPU will likely hit the same wall as Tahiti does.
At 2816 shaders and 48 ROPs it drops to 58.7 shaders per ROP, still not quite where it needs to be.
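The ratios in the list above can be checked in a couple of lines; a minimal sketch using the rumored 290X figures from this thread:

```python
configs = [
    ("HD 7770",          640, 16),
    ("HD 7870",         1280, 32),
    ("HD 7970",         2048, 32),
    ("R9 290X (rumor)", 2816, 44),
    ("290X w/ 48 ROPs", 2816, 48),  # hypothetical, as suggested above
]
for name, shaders, rops in configs:
    print(f"{name}: {shaders / rops:.1f} shaders per ROP")
```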
The way AMD usually designed a GPU was to start in the middle, then scale up and scale down.
So the 7870 was the starting point: 1280 shaders, 80 TMUs, 32 ROPs.
Half a 7870 gives the 7770: 640 shaders, 40 TMUs, 16 ROPs.
Scaling up would have been 1920 shaders, 120 TMUs, and 48 ROPs; what we got was 2048 shaders, 128 TMUs, and 32 ROPs.
When it comes to GPUs there are of course diminishing returns; however, a balanced design tends to be better overall.
Just look at the 7770 to the 7870 to the 7970:
128-bit > 256-bit > 384-bit
1 GB > 2 GB > 3 GB
640 > 1280 > 2048 shaders
16 ROPs > 32 ROPs > 32 ROPs
40 TMUs > 80 TMUs > 128 TMUs
AMD's approach in the past would have been:
128-bit > 256-bit > 384-bit > 512-bit
1 GB > 2 GB > 3 GB > 4 GB
640 > 1280 > 1920 > 2560 shaders
16 ROPs > 32 ROPs > 48 ROPs > 64 ROPs (even cut back by 8 to 56 ROPs, that is still only about 46 shaders per ROP; allowing for a GPU with 3200 shaders, 200 TMUs, and 64 ROPs gives 50 shaders per ROP)
40 TMUs > 80 TMUs > 120 TMUs > 160 TMUs
You can see where things don't quite make sense.
Granted, wafer size, die size, and the yield of perfectly working chips all come into play, but you get the idea in terms of AMD's own designs and efficiencies.
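The "start in the middle, scale up and down" pattern described above amounts to multiplying the mid-range configuration; a small sketch (the 1.5x and 2x rows are the poster's hypothetical extrapolation, not real products):

```python
# Mid-range Pitcairn (HD 7870) configuration as the baseline
base = {"bus_bits": 256, "vram_gb": 2, "shaders": 1280, "rops": 32, "tmus": 80}

def scaled(factor):
    """Hypothetical part at `factor` times the mid-range configuration."""
    return {k: int(v * factor) for k, v in base.items()}

print(scaled(0.5))  # roughly the HD 7770: 128-bit, 1 GB, 640 shaders, 16 ROPs, 40 TMUs
print(scaled(1.5))  # the 384-bit, 1920-shader, 48-ROP part the poster expected
print(scaled(2.0))  # the hypothetical 512-bit, 2560-shader, 64-ROP flagship
```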
Increasing shader count without a proper ROP count tends to result in issues.
The 5850 vs. the 5870 comes to mind: back then, with 1440 shaders vs. 1600 shaders at the same clock speeds, the performance difference was about 2% due to the ROP limitation.
Normally with a die shrink each ROP can do more work, so it's been all right, but since we are still stuck at 28 nm, I would rather have seen 48 ROPs for a better shader-to-ROP ratio. I am also rambling like mad and don't give a fuck. The long story short seems to be that 64 shaders per ROP is not nearly as efficient as 40 shaders per ROP.
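The Cypress example above is just a ratio mismatch: the 5870 added about 11% more shaders over the 5850 while keeping the same 32 ROPs, so the per-ROP load rose without the back-end to feed it (the ~2% performance figure is the poster's clock-normalized observation, not computed here):

```python
shaders_5850, shaders_5870 = 1440, 1600
rops = 32  # identical on both Cypress parts

shader_gain = shaders_5870 / shaders_5850 - 1
print(f"Shader count up {shader_gain:.1%}")  # ~11.1% more shaders
print(f"{shaders_5850 / rops:.0f} vs {shaders_5870 / rops:.0f} shaders per ROP")
```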