Thursday, October 3rd 2013
Radeon R9 290X Priced at $729.99 on Newegg.com
US retailer Newegg.com has leaked pricing of reference-design Radeon R9 290X graphics cards. An MSI-branded reference-design card got its own store page with its pricing redacted, but Newegg did a sloppy job of it. On close inspection of the page's HTML code, we discovered the price intended for that page: US $729.99 (excl. taxes). Given that Newegg.com tends to add $10 to $30 to MSRP, pricing of the R9 290X has likely been set at $699 (excl. taxes).
126 Comments on Radeon R9 290X Priced at $729.99 on Newegg.com
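The discovery described above boils down to scanning a page's raw HTML for a price value that was hidden from display but left in the markup. A minimal sketch of that idea, using a made-up snippet of markup (the `data-price` hidden field is an assumption for illustration — the actual Newegg markup was not preserved):

```python
import re

# Hypothetical fragment of a product page: the visible price is redacted,
# but the raw value survives in a hidden input field.
html = """
<div class="product-price">
  <span class="price-redacted">See price in cart</span>
  <input type="hidden" name="data-price" value="729.99">
</div>
"""

# Scan the markup for a numeric price left behind in a hidden field.
match = re.search(r'value="(\d+\.\d{2})"', html)
price = float(match.group(1)) if match else None
print(price)  # 729.99
```

In practice, the same result comes from simply viewing the page source in a browser and searching for a dollar figure.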
Two or three lower-tier cards might sell for substantially less than one high-priced card, but their combined component fit-out - two or three PCBs, I/O, GDDR5 chips, power delivery and logic components, cooling, packaging and shipping - will still be proportionally higher. And if the larger die yields well, the overall fabrication cost penalty is likely minimal, if any.
Frostbyte 3 & whatever Eidos is using are a great start.
And here's a little joke: if I've got a quad @ 4 GHz, 8 GB of RAM & an R7 260X GPU... will I be able to run any XBO-era game up to its EOL @ 900p or such? :p
It says $9,999.99 now, and anyone can modify that price
What I found out, though, is that if you type in R9 290X on Amazon, you get a wonderful Sapphire Radeon HD 7950 3GB GDDR5
Any word of a GTX 770 price cut?
www.xtremesystems.org/forums/showthread.php?286994-Amd-Volcanic-Islands-details&p=5209737&viewfull=1#post5209737
2600K@5 GHz
This is what a single Titan scores:
AMD isn't here to run a charity.
But we're talking about stock here... well, w/ each brand's implementation of turbo/self-OC, I guess...
Either way, I expect AT LEAST a little improvement w/ newer drivers - maybe not like the HD 7000 series did from early 2012 'til now, but still...
Titan at stock (boost) is only 9% faster than the reported 290X run.
Note GPU-Z reported 1006 MHz but AB reported 993 MHz.
And this is overclocked... :laugh:
25% faster than stock. :p
I should point out that in the initial leaks of 290X performance, it was noted the AMD card lost in every benchmark, even though it won in most of the games. It might not be very well optimised for benchmarks at this stage, so I wouldn't be concerned with them at all. After all, benchmarks are just for fun, and you don't buy a gfx card for that (well, most folk don't). It's the games performance that counts, and I think the 290X will work out very well.
NeoXF lines up with a stock Titan, and the54thvoid with an OC Titan
GeForce TITAN Series Family
Carry on..
the54thvoid has already explained that his Titan is running at 993MHz (AB reported).
A stock Titan, while nominally 834 MHz with an 876 MHz boost, will in stock trim and with adequate cooling actually run at pretty much its highest speed bin in 3DMark Firestrike.
The highest speed bin for a stock Titan without voltage modification is in fact 992 MHz.
As you can plainly see from the reported clockspeeds on this page, the stock Titan will indeed peg the maximum boost state in games, let alone an intensive graphics benchmark. Since Firestrike pegs GPU usage close to 100%, it soon becomes apparent that the Titan will sit at the highest frequency it can unless thermally limited.
But like you said, chances are that run, if real, was made using early drivers - just check the frame-pacing graph; it's all over the place compared to more recent AMD drivers.
Thing is, if AMD has its way that probably won't even matter, as developers will most likely choose to support Mantle if the architecture is shared by all next-gen consoles and AMD PC cards. Like I said, it could represent a paradigm shift in PC video performance, so people should look at this card as a future-proof investment. In my case, I already have a rig waiting for these cards once they're released, but at $729 a pop for this limited edition, I prefer to wait for the vanilla version - I don't really care about BF4 premium. If AMD releases the non-limited edition of this card at $599, it'll have a winner on its hands :)
Not so sure about the common-API reasoning either. Sony has its own console APIs (two, plus a wrapper, I believe - neither of which is Mantle), and I don't see Microsoft ditching D3D any time soon. Mantle is pretty much PC- and architecture- (and, at this stage, game-engine-) specific, and once the hyperbole is stripped away it looks like a welcome addition to PC gaming, but not the second coming of Christ. The fact that Steam looks linked to Nvidia and Intel won't help propagate Mantle in its current guise. [Source]
There's a precedent: remember back in 2004 when some HL2 benchmarks leaked running much faster on ATI hardware than on Nvidia's cards? When the game was finally released, the Source engine made full use of the R300's potential, and even mid-level cards like the Radeon 9500 would outperform the green team's flagship at the time (the FX 5800) by quite a measurable margin.
Sure, the particulars of that debacle were very different from what we have today, but the truth is, AMD is in an enviable position ahead of the release of these new cards. As sole GPU provider for both major next-gen consoles, they'll use that leverage to help developers get rendering engines much "closer to the metal" on the gaming platforms that share this architecture, and reduce costs by making it much easier to port games between the Xbone, PS4 and PC without losing performance in the process.
Sure, Mantle is a PC initiative, and it may be in the interest of MS and Sony not to provide the necessary tools to make this porting so easy, but developers working on GCN based GPUs will find it much easier to program to these particular hardware targets and make such optimizations available when releasing a particular game on multiple platforms.
Look, I might be playing devil's advocate here, but the potential is there, and AMD has made everyone fully aware on multiple occasions that they intend to take advantage of their position, so it's not a stretch to think we may see history repeat itself, like the glory days of the 9700 Pro.
A lot will depend on how committed (cash-wise) AMD is to making it work. Activision would seem to be a natural bandwagon jumper, so I could see them adopting it - but I can't really see much benefit in easing the workload on an engine that produces stratospheric framerates in any case.
Ideally, the API needs to raise in-game image quality substantially over DX/OGL to make the software anything other than a bullet point... if it doesn't, then it becomes just another feature like TressFX or PhysX; and of course if it does, then we come full circle from the advent of the gaming GPU, because dollars to donuts the other two vendors will follow suit... the good old, bad old days of Glide, Matrox Simple Interface, RRedline, SGL, etc.