Monday, January 10th 2011

Cheaper 1 GB Radeon HD 6950 and HD 6970 Coming Soon
While it may not have toppled NVIDIA's fastest single-GPU graphics card, AMD's Radeon HD 6900 series certainly stepped up competition in the high-end segment, with the Radeon HD 6970 competing with the GeForce GTX 570, and the HD 6950, which can be unlocked into an HD 6970, in a class of its own. To further increase competitiveness, and probably to ward off the GeForce GTX 560 threat, AMD is reportedly directing partners to introduce 1 GB variants of the HD 6950 and HD 6970.
Currently, 2 GB of GDDR5 memory is standard for both SKUs. With half the memory and a cost-effective choice of PCB and components, AMD partners can significantly reduce prices at the expense of some performance, ending up with price-performance ratios equal to or better than those of the GTX 570 and the upcoming GTX 560. The two new SKUs will be available soon. Pictured below is a Sapphire Radeon HD 6970 2 GB; Sapphire is said to be among the first with the HD 6900 1 GB series.
Source:
HT4U.net
46 Comments on Cheaper 1 GB Radeon HD 6950 and HD 6970 Coming Soon
Back on subject: if these cards are within 5% of the 2 GB models, and if the price is more than right, AMD might have a winner here.
Anyway, for a single-card solution on a single monitor this makes much more sense. Price is where it's at.
Hell, from what I've seen:
Far Cry 2 can use 1.1-1.2 GB of VRAM.
Metro 2033 with its DX11 options on can use 1.5 GB+ of VRAM.
Crysis back in 2007 could hit 800-900 MB of VRAM, and then some with AA etc.
Games are using more and more VRAM, and as new stuff gets tacked on it's only going to keep going up.
These new 1 GB 6950s and 6970s will be fine for 1680x1050 to 1920x1080, but they're still going to get slapped on the VRAM front at 1920x1080 or higher.
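To put rough, hedged numbers on the resolution/AA part of that, here's my own back-of-the-envelope sketch of what just the color and depth render targets cost; real drivers add padding and extra internal buffers, so treat these as lower bounds rather than official figures.

# Rough render-target memory estimate (illustrative only; drivers add
# alignment, compression metadata and extra internal buffers on top).

def render_target_mb(width, height, msaa_samples=1, color_bytes=4, depth_bytes=4):
    pixels = width * height
    color = pixels * color_bytes * msaa_samples   # multisampled color target
    depth = pixels * depth_bytes * msaa_samples   # multisampled depth/stencil
    resolve = pixels * color_bytes                # resolved (non-MSAA) back buffer
    return (color + depth + resolve) / (1024 * 1024)

for w, h in [(1680, 1050), (1920, 1080), (1920, 1200)]:
    for aa in (1, 4, 8):
        print(f"{w}x{h} {aa}xAA: ~{render_target_mb(w, h, aa):.0f} MB")

Even before a single texture is loaded, 8xAA at 1920x1200 is already well over 100 MB of render targets alone, which is why resolution and AA matter so much on a 1 GB card.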
Depth of field has a large RAM footprint but is also shader intensive. The new VLIW4 architecture offers better performance than VLIW5, but even then the extra RAM makes a rather large difference, at least in my time playing the game.
Granted, in some games I'm not getting proper GPU usage because I'm hitting a RAM limit; back on the 5800s it was rather interesting to see. In some games, even when RAM limited, it has little effect; in others it causes massive slowdowns.
Below is the memory usage on two NVIDIA GPUs in Metro 2033.
1.5 GB usage on an NVIDIA card ≠ usage on an AMD card.
Doesn't matter how they handle storage, they're still going to end up with exactly the same memory usage, because the card itself can't dictate what file types the game's art files use, so it doesn't matter.
A single character mesh + textures + lighting effects will use around 10-15 MB by itself.
So the two characters on your screen there are around 30 MB, and your own character, due to LOD distance, is around 10 MB, so just for the two NPCs and your character you're hitting 40-50 MB of VRAM. That all adds up when you look at the entire picture: objects, textures, light sources, AA via a post-process filter, a DirectCompute blur filter that has to store info per frame, plus particle effects, and you hit the RAM limit pretty quickly. And while LODs and mipmaps help, that info still has to be stored in VRAM, so no matter what NVIDIA or ATI does, it won't affect the VRAM usage; it's still going to be 1500 MB on NVIDIA or ATI no matter how you slice it. And yes, screen resolution also factors into this heavily. It should also be mentioned that shadow maps eat VRAM for breakfast, lunch, dinner and a midnight snack.
Edit for punctuation, due to the grammar nazis.
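Just to illustrate how those per-object numbers stack up, here's a toy VRAM budget. The character figure comes from the post above and the AA/texture figures echo numbers quoted elsewhere in this thread; the rest are rough assumptions, not measurements.

# Toy VRAM budget (illustrative assumptions, not measured values).

budget_mb = {
    "player + 2 NPCs (meshes/textures/lighting)": 45,   # ~10-15 MB each, per the post above
    "environment geometry + textures":            550,  # assumed, in line with the 500-700 MB figure quoted later
    "render targets + 8xAA":                      150,  # roughly the 100-200 MB AA cost quoted later
    "shadow maps":                                200,  # assumed (a few large maps)
    "post-process / DirectCompute buffers":       80,   # assumed placeholder
    "particles and misc":                         30,   # assumed placeholder
}

total = sum(budget_mb.values())
for name, mb in budget_mb.items():
    print(f"{name:45s} ~{mb:4d} MB")
print(f"{'total':45s} ~{total:4d} MB  (already past a 1 GB card)")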
Both companies optimise with their own libraries and push game devs to use them, as you know.
That's how the cookie crumbles. :P
That is of course unless every single 3D app and file format devised in the game industry is lying, because in general the only way to drop VRAM usage is to lower texture quality and render settings. Besides, their compression, aka optimizations, is basically messing with texture FILTERING, not texture compression. Example: Civilization 5 uses DirectCompute compression on the leader textures; NVIDIA doesn't do anything to that and neither does ATI, their GPUs just have to do the grunt work of decompressing the already-compressed archives. So again, moot point; it doesn't change the memory footprint inside the GPU enough to mean squat.

Seriously, the 5-10 KB they might be able to eke out isn't going to make 500 MB of texture data shrink so that a GPU with 1 GB of RAM can handle the same workload as one with 1.5 GB or more, because compressing that data further would create artifacts, artifacts that would cause inferior texture quality. A good example of this is OnLive and its compression method: it greatly impairs image quality for bandwidth so as to maintain a fluid image. The same basically applies here.
So in the end there's usually only one compression method applied, and it's done either to:
A) keep modders out,
B) allow the texture files to work with said game engine, or
C) fit into a group container, something similar to BSA archives, etc.
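To put hedged numbers behind the point that driver-side tricks can't shrink textures much further: standard block compression (DXT1/DXT5), which the game has usually already applied offline, is where the big savings come from. The texture set below is made up purely for illustration.

# Rough texture-memory math for standard block compression
# (uncompressed RGBA8 = 4 bytes/texel, DXT5 = 1, DXT1 = 0.5).

BYTES_PER_TEXEL = {"RGBA8": 4.0, "DXT5": 1.0, "DXT1": 0.5}

def texture_mb(width, height, fmt, mipmaps=True):
    size = width * height * BYTES_PER_TEXEL[fmt]
    if mipmaps:
        size *= 4 / 3      # a full mip chain adds roughly one third
    return size / (1024 * 1024)

# A made-up texture set for illustration: 150 maps at 2048x2048.
count = 150
for fmt in ("RGBA8", "DXT5", "DXT1"):
    total = count * texture_mb(2048, 2048, fmt)
    print(f"{count} x 2048x2048 {fmt}: ~{total:.0f} MB")

In other words, by the time the assets ship as DXT the 4-8x reduction is already spent; squeezing the remaining few hundred MB harder means visible quality loss, which is exactly the trade-off described above.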
en.wikipedia.org/wiki/MegaTexture
Not saying you're wrong or anything, just that there are plenty of methods that sometimes mean VRAM usage is not static, even for textures, across both vendors. :)
And you realize the only game that uses MegaTexture tech is Enemy Territory: Quake Wars, right? It's not done at the GPU level; it's simply a massive texture that takes the place of tiling a smaller texture, so the GPUs don't mean squat in terms of compressing it. Sure, MegaTexture may in a sense break the VRAM limit since it streams the info, but it still has to follow the same general guidelines: it has to work within set RAM limits, so quality filtering and mipmaps = smaller texture size etc. all help it fit. That's true, but I'm not talking about lowering settings or getting something to fit. I'm simply stating that you can't magically shrink something smaller than normal compression allows without a trade-off in quality that is extremely noticeable. In this regard, most games today use around 500-700 MB of VRAM, and 8xAA can add around 100-200 MB, so you end up at 600-900 MB of VRAM usage. Add in DirectCompute, which also needs RAM to store data, post-processing effects, etc., and it's not hard to break the limit, really.
Hell, Team Fortress 2 at 8xAA 1920x1200 on the aging Source engine can top out around 900 MB of VRAM. If you have extra VRAM, games will take advantage of it when pushing settings; it might not always mean a frame-rate increase, but it can generally eliminate the frame drops that tend to come from hitting a VRAM wall.
Below is Oblivion and its VRAM usage with AA and resolution taken into account; that's a five-year-old title now.
Stalker: Shadow of Chernobyl doesn't actually use AA, as it's a deferred rendering engine; its AA is basically a blur filter.
Either way, those graphs are from games released prior to 2007, so it's been four years since the article they came from, and they don't show usage at 1920x1200, which ups the VRAM usage again. So even in 2007 we were close to the 1 GB barrier; as of today we're already near the 1.5 GB barrier and climbing.
Again, none of these take into account the fact that shadow map sizes have gone from 256x256 (in Oblivion's case) up to 4096x4096 in some games, with many using 1024 or 2048 map sizes. Dynamic lighting is also a huge VRAM eater and is getting used in more and more games.
You will notice that by using dynamic lighting vs. static, VRAM usage nearly doubles in Stalker. That holds true today: many games running, say, Unreal Engine 3 still use static lighting. We're at the transition phase right now, and AMD and NVIDIA have been adding more and more RAM to high-end GPUs for this reason. Metro 2033 run the old way, aka DX10 with no special features, is easy on a system and doesn't use much RAM either, but when pushing its DX11 features VRAM usage balloons, and the same will be said of other titles that utilize new features.
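As a rough illustration of why dynamic lighting balloons VRAM the way the Stalker numbers suggest: with static lightmaps the shadows are baked into textures, whereas every shadow-casting dynamic light needs its own live shadow map. The depth format and light count below are assumptions for the sketch, not figures from the game.

# Shadow-map memory vs. resolution (illustrative; assumes a 32-bit
# depth texel, engines vary).

def shadow_map_mb(resolution, bytes_per_texel=4):
    return resolution * resolution * bytes_per_texel / (1024 * 1024)

for res in (256, 1024, 2048, 4096):
    print(f"{res}x{res}: ~{shadow_map_mb(res):.2f} MB per shadow-casting light")

# With dynamic lighting, several lights may each need their own map:
lights = 6   # assumed count, purely for illustration
print(f"{lights} lights at 2048x2048: ~{lights * shadow_map_mb(2048):.0f} MB just for shadows")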
Hope this turns out to be true, not like the "5730".