gg nvidia? Because AMD immediately saw that the R600 series wasn't going to be stellar and decided to improve it only in ways that wouldn't eat into the R&D effort for the RV770, which they have been working on for ages.
A core clock of over 1 GHz means ZILCH without other statistics. It's like saying there's a new P4 Netburst Extra Extreme Edition (P4EEE) at 5 GHz: it's still a worthless power-consuming hog. We need architecture changes, fab shrinks and more parallelism.
If they said a 500 MHz clock but with a 512-bit memory bus, 1024 unified shader units, and 75 W total consumption, then I'd be much more impressed.
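Just to put some rough numbers on that (both configs here are purely hypothetical, and I'm assuming each ALU can do one MAD, i.e. 2 FLOPs, per clock):

[code]
# Toy comparison (Python): why clock speed alone says little about a GPU.
# Assumption: each shader ALU issues one MAD (multiply-add) per clock,
# i.e. 2 FLOPs per ALU per cycle. Both configurations are hypothetical.

def peak_gflops(num_alus, clock_ghz, flops_per_alu_per_clock=2):
    """Theoretical peak shader throughput in GFLOPS."""
    return num_alus * flops_per_alu_per_clock * clock_ghz

# A hypothetical 1 GHz part with 320 ALUs vs a 500 MHz part with 1024 ALUs.
print(peak_gflops(320, 1.0))    # 640.0 GFLOPS
print(peak_gflops(1024, 0.5))   # 1024.0 GFLOPS
[/code]

The slower-clocked but wider part comes out well ahead on paper, which is the whole point.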
I hope so. Really, though, I'd love to see ATI touting only GDDR4/5 with this series; it would give them a slight edge on nVidia as far as spec sheets go.
I think you've got it right about the memory bus as well; that's partly why I mentioned that 256-bit isn't that big a deal if the GPU is clocked at 1 GHz and paired with GDDR5. The bandwidth of the memory itself will make up for it. But, as I also pointed out, if they're packing 1 GB of high-bandwidth memory, a 256-bit bus could prove to be a limitation. We'll have to see; the upgrade to 32 TMUs might work out just nicely.
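For anyone who wants the back-of-the-envelope math, here's a quick sketch; the clocks below are placeholders, not confirmed specs:

[code]
# Rough memory bandwidth math (Python). GDDR5 is quad-pumped (4 transfers
# per memory clock), GDDR3 is double-pumped (2 per clock). The clock values
# used here are placeholder numbers, not announced specs.

def bandwidth_gbps(bus_width_bits, mem_clock_mhz, transfers_per_clock):
    """Theoretical peak bandwidth in GB/s."""
    bytes_per_transfer = bus_width_bits / 8
    return bytes_per_transfer * mem_clock_mhz * transfers_per_clock / 1000

# Hypothetical 256-bit GDDR5 at 900 MHz vs 512-bit GDDR3 at 825 MHz:
print(bandwidth_gbps(256, 900, 4))  # ~115 GB/s
print(bandwidth_gbps(512, 825, 2))  # ~106 GB/s
[/code]

So a 256-bit bus with fast GDDR5 can match or beat a 512-bit bus with GDDR3, at least on paper.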
Either way, the next year and a half is stacking up to be quite competitive between red and green, which is what we all really want to see, rather than one camp leading the pack. We benefit more from close competition than from one leading and one trailing.
Um, FYI, a 512-bit/384-bit memory bus is really redundant at this stage, as the current architectures and GPUs don't, and can't, use a 512-bit bus to its fullest potential.
Most of you guys are thinking way too much along the lines of "zomg 256-bit memory bus suxxors". It's the raw calculating power of the GPU itself that's important, as well as its efficiency. The bit width of the memory bus doesn't matter if the GPU architecture is poor.
Okay, in this case not poor but weaker. Take G92 vs RV670, for example: the RV670's GDDR4 evidently gives it more memory bandwidth, yet the RV670 is slower than the G92! Now compare RV670 to R600. R600 has the 512-bit bus... any performance increase? Little to none. The GPU isn't fast enough and can't process enough data to use the 512-bit width to its full potential, which is the same reason Nvidia took a step back as well.
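Plugging the commonly quoted memory clocks of those cards into the same bandwidth formula as above (treat the exact figures as approximate):

[code]
# Same bandwidth math applied to the cards above, using commonly quoted
# (approximate) memory clocks -- ballpark figures only.

def bandwidth_gbps(bus_width_bits, mem_clock_mhz, transfers_per_clock):
    return (bus_width_bits / 8) * mem_clock_mhz * transfers_per_clock / 1000

print("R600  (512-bit GDDR3):", bandwidth_gbps(512, 825, 2))   # ~106 GB/s
print("RV670 (256-bit GDDR4):", bandwidth_gbps(256, 1125, 2))  # ~72 GB/s
print("G92   (256-bit GDDR3):", bandwidth_gbps(256, 900, 2))   # ~58 GB/s
[/code]

R600 has roughly 45% more bandwidth than RV670 and performs about the same, which is exactly why the extra bus width looks wasted.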
Another thing: it costs more to make a card with a wider memory bus. Why, you may ask? Because it requires more memory chips. Each chip is 32 bits wide, so 32 bits x 8 chips = 256-bit, 32 bits x 12 chips = 384-bit, and finally 32 bits x 16 chips = 512-bit. It may seem obvious to some, but that's why the G80/R600 cards were priced so damn high versus equivalent 256-bit cards today: more memory chips, more components needed on board, and finally (usually) a requirement for a longer PCB, due to the increased power consumption from the extra chips as well as from the core's larger memory controller.
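A quick sketch of that chip math (assuming 32-bit-wide chips as above; the 64 MB per-chip capacity is just an example value):

[code]
# Why a wider bus means more memory chips (Python). Assumes 32-bit-wide
# GDDR chips, as in the post above; the 64 MB chip capacity is just an
# example to show how total memory scales with chip count.

def chips_needed(bus_width_bits, chip_width_bits=32):
    return bus_width_bits // chip_width_bits

for bus in (256, 384, 512):
    n = chips_needed(bus)
    print(f"{bus}-bit bus -> {n} chips -> {n * 64} MB with 64 MB chips")
# 256-bit ->  8 chips ->  512 MB
# 384-bit -> 12 chips ->  768 MB
# 512-bit -> 16 chips -> 1024 MB
[/code]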
ATi has been under so much pressure lately, I bet this thing will roxorz nvidia's soxorz
hey addsub, ROPs are outdated, everyone uses shader units now
[sarcasm]Hey look!!!! It's awesome that ATI ripped out their ROPs... now I can't even game in 3D AWESOME!!![/sarcasm]
ROPs are needed FYI.
I'm guessing the reasons a core has the number of components it does are:
1. Core balancing. As with multi-GPU technologies, I've noticed the diminishing returns as you add more GPUs (see the sketch after this list). This means GPU R&D HAS to balance out the core; more doesn't equal better a lot of the time, and I think the same applies within a single GPU. Within an architecture, you can probably only have a specific number of each type of unit before you start losing efficiency.
2. Another reason is that these numbers keep the design modular to manufacture.
3. Cost/performance feasibility.
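To illustrate the diminishing returns I mean in point 1, here's a toy model; the 0.8 scaling factor is completely made up, it's only there to show the shape of the curve:

[code]
# Toy illustration (Python) of the diminishing returns mentioned in point 1.
# The 0.8 "scaling efficiency" per extra GPU is a made-up number, not a
# measurement -- it just shows how each added GPU contributes less.

def relative_performance(num_gpus, efficiency=0.8):
    """Sum of each GPU's contribution, where GPU n adds efficiency**(n-1)."""
    return sum(efficiency ** n for n in range(num_gpus))

for gpus in (1, 2, 3, 4):
    print(gpus, "GPU(s):", round(relative_performance(gpus), 2), "x")
# 1 GPU(s): 1.0 x
# 2 GPU(s): 1.8 x
# 3 GPU(s): 2.44 x
# 4 GPU(s): 2.95 x
[/code]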
It's easy for you guys to go "HEY LET'S CHUCK 1024 SHADERS AND 32 TMUs AS WELL AS A 512-BIT BUS!!!111", but wouldn't they have done it if it was THAT bloody easy?
Anyway guys, please stop arguing "you're a fanboy/you're biased!" with each other...
OT: I'm wondering if Intel's Larrabee will even be decent. The fact that it's basically a bunch of really powerful CPU cores that aren't really designed to be dedicated to rendering somewhat worries me. However, since one of their Xeon setups does ray tracing at like 60 fps or something, I might be wrong (then again, games NEVER use ray tracing... nor do GPUs have the ability).