Monday, February 25th 2013
AMD Teases New TressFX Technology
In our last teleconference with AMD's graphics team, the company briefly touched upon a new feature it is working on with Square Enix to ship with the upcoming Tomb Raider franchise reboot. It turns out AMD has a name for it: TressFX. The company is expected to detail the technology tomorrow (26/02), and signed its teaser off with a catchphrase: "Render, Rinse, Repeat." To us, it sounds like a technology that helps GPUs render elements such as hair and foliage more accurately. Then again, we wonder how it would make Lara Croft (who ties her hair into a French braid) look any hotter. Something to look forward to on the 26th.
51 Comments on AMD Teases New TressFX Technology
Oh hellllloooo
Whatever Intel's and Nvidia's plans are, they have to take a backseat for the next five years, or until the next round of consoles comes. Is that why Nvidia hired Bob Feldstein, the man who helped secure those contracts? To get lucky themselves?
Now back to reality. What do all PhysX-powered games have in common? They were all dumbed down on anything non-NVIDIA just to make PhysX look awesome, even though half of it could easily be done on any CPU. Instead, they cut it out completely to artificially make PhysX look better.
And even if we look at PhysX running on NVIDIA-powered systems, what did they realistically gain? Realistic moving fog and some liquids that still looked like mercury. The rest was some rubbish shattered glass that I'd seen back in 2001 in Red Faction 1, and which still looks phenomenal even today. I've seen insane environmental destruction in, yet again, Red Faction: Guerrilla, running entirely on the CPU. I've seen amazing physics details on all the misc items in the world of Max Payne 2, years before anyone had even heard of hardware physics. The same goes for cloth simulation: UT99 in 1999 had the most amazing moving flags. Sure, they weren't dynamic, but the movement was so freakin' authentic. And Max's leather jacket in Max Payne 2, yet again. PhysX hasn't done anything others haven't done on a CPU. They just overdid it on all ends so it ran like shit even on the fastest quad cores, to the extent that it often looked just plain ridiculous.
Graphics evolved to what we have today because ANYONE can get them, regardless of which camp your graphics card comes from. But the likes of 3dfx Glide, Creative EAX, PhysX, and AMD's Bullet Physics never really achieved anything, because each ran only on one vendor's hardware, and because of that you can't afford to integrate it into a game's core gameplay. And if you can't do that, you can only do pointless, gimmicky things with it. Well, who gives a toss about kicking some rubbish cans around or watching old newspapers floating in a virtual wind? A CPU can do that sort of nonsense.
What we need is an open physics standard that anyone can use and integrate into core gameplay, and that any end user can enjoy. Who cares if AMD controls all three consoles at the moment? There are still NVIDIA users, who aren't exactly in the minority, and because of them you again can't use TressFX in core gameplay: you'd make the game uninteresting for all the NVIDIA users, and if you're a game developer you just can't afford that.
So for the love of gamers and yourselves, NVIDIA and AMD, stick the exclusives of your physics engines up yours (both of you), sit down at the same table, and work out an open physics engine everyone can benefit from, so we can finally move these horribly static games toward what they could have evolved into almost eight years ago. Just think of how dynamic gameplay would be if physics in games were actually real, not the shit they sell us right now as "realistic". Imagine Battlefield maps that actually, genuinely changed dynamically over the course of combat. But because no standard has been worked out, you see a wall chipped here, a wooden plank broken there, some plant knocked over there, and that's pretty much it. Not amused, because by now they could ALL do a lot better...
Anyway, I approve. AMD, carry on...
These proprietary features potentially bring in money for the manufacturers, but they stagnate game development. Long hair, frilly dresses, tall grass... yeah, they're computationally expensive. But they're mostly cosmetic, so why not pass the work off to all those idle CPU threads that game programmers are too lazy to use for anything else?
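To make that concrete: here's a minimal sketch of farming a cosmetic simulation out to spare CPU threads with std::async. HairStrand and simulateStrand() are hypothetical stand-ins for illustration, not taken from any real engine or from TressFX itself:

// Minimal sketch: running a cosmetic hair/cloth update on idle CPU threads.
// HairStrand and simulateStrand() are hypothetical, not from any real engine.
#include <algorithm>
#include <cstddef>
#include <future>
#include <vector>

struct HairStrand {
    // positions, previous positions, rest lengths, etc. would live here
    float dummy = 0.0f;
};

// Hypothetical per-strand simulation step (integration + constraints).
void simulateStrand(HairStrand& s, float dt) {
    s.dummy += dt; // placeholder for real integration work
}

// Split the strand list into chunks and update them concurrently.
void simulateAllStrands(std::vector<HairStrand>& strands, float dt,
                        std::size_t workers = 4) {
    std::vector<std::future<void>> jobs;
    const std::size_t chunk = (strands.size() + workers - 1) / workers;
    for (std::size_t w = 0; w < workers; ++w) {
        const std::size_t begin = w * chunk;
        const std::size_t end = std::min(begin + chunk, strands.size());
        if (begin >= end) break;
        jobs.push_back(std::async(std::launch::async, [&, begin, end] {
            for (std::size_t i = begin; i < end; ++i)
                simulateStrand(strands[i], dt);
        }));
    }
    for (auto& j : jobs) j.get(); // join before rendering reads the results
}

int main() {
    std::vector<HairStrand> strands(1000);
    simulateAllStrands(strands, 1.0f / 60.0f);
}

The point being: the fork/join plumbing for "cosmetic work on idle cores" is a few dozen lines; the hard part has always been developer will, not hardware.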
In complete seriousness though, this is good... and I'll tell you why I think it matters beyond the ridiculous name.
Who had tessellation in their GPUs first? ATi.
Who used it? Nobody.
Who capitalized on the feature? nvidia (Fermi gen).
Who went for tremendous amounts of flops in their architecture first? ATi.
Who took advantage of it? Nobody.
What happened next? We got proprietary PhysX and CUDA on nvidia hardware using its lesser flops, and it made them oodles, because they got behind unified concepts of utilizing those flops in a productive way. Now nvidia's arch has caught up in that respect.
Seriously, peeps. What this says is that AMD is keeping it industry-standard with DirectCompute/OpenCL etc. (which is GREAT), while capitalizing on its innovations in a tangible way (with something even I didn't know was in their GPUs). This is a very key change in direction from what has always been wrong with (let's call them, for the sake of clarity) ATi.

Engineering and cool new tech has never been a problem for the red guys; it's a big part of why people like me have always liked their camp. (That, and their past openness to discussing low-level arch/design decisions and particulars with us laymen in forums like B3D/Rage3D and with people like Anand in the press... ATi always seemed like open and friendly engineers from top to bottom.) But their marketing, software developer relations, and adoption of their own forward-thinking ideas always have been (and in many cases the same is true of AMD) absolutely terrible.

In a lot of ways they were (and perhaps still are) a direct inverse of nvidia. ATi creates (CrossFire, Eyefinity, etc.); nvidia exploits (SLI, 'Surround', etc.). Marketing ALWAYS wins the battle, because perception is created by tangible examples of use. ATi was and always has been the Diamond Rio PMP300 to nvidia's iTunes.
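As a footnote on what an industry-standard compute pass like this actually crunches: hair simulation of this sort is commonly described as Verlet integration over each strand's vertices followed by length constraints, run on the GPU via a compute shader. A rough CPU-side sketch of that general technique, with all details assumed for illustration and none taken from AMD's actual code:

// Rough sketch of the per-vertex work a compute-shader hair pass performs:
// Verlet integration plus a simple length constraint. An illustration of
// the general technique, not AMD's actual TressFX implementation.
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
float length(Vec3 v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

struct Strand {
    std::vector<Vec3> pos;      // current vertex positions
    std::vector<Vec3> prevPos;  // positions from the previous frame
    float segmentLength;        // rest length between adjacent vertices
};

void stepStrand(Strand& s, Vec3 gravity, float dt) {
    // Verlet integration: new = pos + (pos - prev) + accel * dt^2.
    // Vertex 0 is pinned to the scalp, so it is skipped.
    for (std::size_t i = 1; i < s.pos.size(); ++i) {
        Vec3 velocityTerm = s.pos[i] - s.prevPos[i];
        s.prevPos[i] = s.pos[i];
        s.pos[i] = s.pos[i] + velocityTerm + gravity * (dt * dt);
    }
    // Length constraint: pull each vertex back to its rest distance from
    // its parent so strands do not stretch under integration error.
    for (std::size_t i = 1; i < s.pos.size(); ++i) {
        Vec3 d = s.pos[i] - s.pos[i - 1];
        float len = length(d);
        if (len > 1e-6f)
            s.pos[i] = s.pos[i - 1] + d * (s.segmentLength / len);
    }
}

Embarrassingly parallel per-strand work like this is exactly what DirectCompute/OpenCL were built for, which is why running it on open standards rather than a proprietary API matters.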
This is a serious step in the right direction. When you factor in the driver pushes lately, things may see a tremendous turnaround in reality, and eventually in perception, of their current (if not future) pushes toward industry-first features. That is awesome, and truly worthy of discussion.