Monday, December 16th 2019

AMD Publishes FEMFX Deformable Physics Library on GPUOpen

FEMFX is a multithreaded CPU library for deformable material physics, using the Finite Element Method (FEM). Solid objects are represented as a mesh of tetrahedral elements, and each element has material parameters that control stiffness, how volume changes with deformation, and stress limits where fracture or plastic (permanent) deformation occur. The model supports a wide range of materials and interactions between materials. We intend for these features to complement rather than replace traditional rigid body physics. The system is designed with the following considerations:
  • Fidelity: realistic-looking wood, metal, plastic, even glass, because they bend and break according to stress as real materials do.
  • Deformation effects: non-rigid use cases such as soft-body objects, bending or warping objects. It is not just a visual effect, but materials will resist or push back on other objects.
  • Changing material on the fly: you can change the settings to make the same object behave very differently, e.g., turn gelatinous or melt.
  • Interesting physics interactions for gameplay or puzzles.
The library uses extensive multithreading to utilize multicore CPUs and benefit from the trend of increasing CPU core counts.

Features
  • Elastic and plastic deformation
  • Implicit integration for stability with stiff materials
  • Kinematic control of mesh vertices
  • Fracture between tetrahedral faces
  • Non-fracturing faces to control shape of cracks and pieces
  • Continuous collision detection (CCD) for fast-moving objects
  • Constraints for contact resolution and to link objects together
  • Constraints to limit deformation
  • Dynamic control of tetrahedron material parameters
  • Support for deforming a render mesh using the tetrahedral mesh
To maximize the value for developers, we're providing the implementation source code as part of GPUOpen under the MIT/X11 License. The full release includes the library source code, sample code, and, for Unreal Engine developers, source for a plugin that demonstrates custom rendering and scene creation.

40 Comments on AMD Publishes FEMFX Deformable Physics Library on GPUOpen

#1
cucker tarlson
physics on cpu in 2020 :rolleyes:
get back to us when you have something better than we've had for years.
and a friggin screenshot in the OP. :laugh:
#2
TheGuruStud
cucker tarlson: physics on cpu in 2020 :rolleyes:
get back to us when you have something better than we've had for years.
and a friggin screenshot in the OP. :laugh:
But you really don't have anything. Nvidia has crippled PhysX, and apparently no one cares about Havok anymore.
#3
cucker tarlson
TheGuruStud: But you really don't have anything. Nvidia has crippled PhysX, and apparently no one cares about Havok anymore.
Well, at least this is multithreaded and open source.
Still, physics should be done on the GPU.
PhysX is pretty good. Played Control this year; the environmental destruction is absolutely ridiculous in boss fights. Naturally it's not widely adopted, though.
#4
silentbogo
TheGuruStud: But you really don't have anything. Nvidia has crippled PhysX, and apparently no one cares about Havok anymore.
PhysX isn't crippled, and Havok is still widely used (especially in multiplatform titles). The only downside is that MS and Havok haven't done anything to improve it since 2011, and DirectX physics is taking too long to come to fruition.
cucker tarlson: physics on cpu in 2020 :rolleyes:
get back to us when you have something better than we've had for years.
Properly threaded, scalable, and not platform-dependent? You've got to share what you are smoking.
#5
cucker tarlson
silentbogo: Properly threaded, scalable, and not platform-dependent? You've got to share what you are smoking.
some good green stuff :D
#6
TheoneandonlyMrK
cucker tarlson: Well, at least this is multithreaded and open source.
Still, physics should be done on the GPU.
PhysX is pretty good. Played Control this year; the environmental destruction is absolutely ridiculous in boss fights. Naturally it's not widely adopted, though.
Because you say so? The Nvidia backer says GPU PhysX only, please.
#7
cucker tarlson
TheoneandonlyMrK: Because you say so? The Nvidia backer says GPU PhysX only, please.
No, because it's better.
#8
HTC
cucker tarlson: physics on cpu in 2020 :rolleyes:
get back to us when you have something better than we've had for years.
and a friggin screenshot in the OP. :laugh:
Back then, you had mostly dual- or quad-core CPUs with no extra threads: the penalty for using physics would significantly impact the game's performance, which is why a dedicated card was required. Nvidia integrated PhysX into their GPUs, but that still had a significant impact on performance when used, though orders of magnitude less than on the CPU.

Now 8c/16t is "normal" and you can already get 16c/32t. There's little to no benefit from that many cores for game performance, but if you can take advantage of those extra cores for physics, the GPU is less affected by the performance penalty of doing those computations on the GPU.
#9
cucker tarlson
HTC: Back then, you had mostly dual- or quad-core CPUs with no extra threads: the penalty for using physics would significantly impact the game's performance, which is why a dedicated card was required. Nvidia integrated PhysX into their GPUs, but that still had a significant impact on performance when used, though orders of magnitude less than on the CPU.

Now 8c/16t is "normal" and you can already get 16c/32t. There's little to no benefit from that many cores for game performance, but if you can take advantage of those extra cores for physics, the GPU is less affected by the performance penalty of doing those computations on the GPU.
Yes, but you've got GPUs with absolutely ridiculous compute power too. Why spend extra on an 8c/16t when a 6c/6t is plenty and your GPU packs so much power? How much does a 5700 XT/2070 Super pack? 8-9 TFLOPS? Probably 10 overclocked. And both can do FP+INT or FP16. Your RDNA2 console GPU will probably be close to that too.
#10
HTC
cucker tarlson: Yes, but you've got GPUs with absolutely ridiculous compute power too. Why spend extra on an 8c/16t when a 6c/6t is plenty and your GPU packs so much power?
Said compute power, when utilized for physics, can have the negative effect of introducing higher frame times. If you can offload to the CPU and its unused cores/threads the portion of the computations required for the game, that helps, no?
So long as it doesn't negatively affect frame times more than what's currently available via GPUs, it's a viable alternative, IMO.
#11
cucker tarlson
HTC: Said compute power, when utilized for physics, can have the negative effect of introducing higher frame times. If you can offload to the CPU and its unused cores/threads the portion of the computations required for the game, that helps, no?
So long as it doesn't negatively affect frame times more than what's currently available via GPUs, it's a viable alternative, IMO.
It is an alternative, but I'd rather max out my budget on the GPU and keep the CPU a good value option rather than buy an 8c/16t just because it's there.
The 3700X is nice as far as cost per core, but 8c/16t is not even close to being fully utilized. I never spent as much on any of my i7s as the 3700X costs, and I think most PC gamers don't intend to either. I never even wanted an i7, but 2015 came, I got a 144 Hz display, games got multithreaded, and there was no other option than to get a 4790K. Seriously, whatever utility software most of us home/gaming rig owners run does fine on a 9400F/3500X or even a Ryzen 3/Core i3. It's for gaming that we buy the CPU.


But seriously, btarunr, can we at least get a video? People laughed at RTX demos. I guess screenshots are preferred now. For physics.
#13
Steevo
cucker tarlson: No, because it's better.
So a technology that used an ASIC is better when it delivers lower performance on generic hardware that is already fully utilized for its primary function? Tell me more about how going slower wins the race... Games rarely use more than 6 cores; we have fully utilized GPU hardware and underutilized CPU cores, but somehow they shouldn't be used?

I guess that's what happens when you buy hype.
#14
cucker tarlson
Steevo: Games rarely use more than 6 cores (...) underutilized CPU cores
No. Can't be more wrong.
Take a game that uses some sort of CPU physics, BF5 being a good example, and see what happens to CPU loads during an explosion.

What we have is GPU architectures that pack more and more compute power into smaller and smaller power envelopes.
Steevo: I guess that's what happens when you buy hype.
Exactly, like recommending buying 8c/16t workstation CPUs for gaming because of physics.

You got $700 to spend? Get a $200 CPU and a $500 GPU instead of packing a $350 CPU in there.
#15
Steevo
cucker tarlson: No. Can't be more wrong.
Take a game that uses some sort of CPU physics, BF5 being a good example, and see what happens to CPU loads during an explosion.

What we have is GPU architectures that pack more and more compute power into smaller and smaller power envelopes.


Exactly, like recommending buying 8c/16t workstation CPUs for gaming because of physics.
By that idea we should all still have single-core 256 MB machines.

Also, a lot of the PhysX libraries aren't real-time; most of the "GPU" work was precooked and prerendered, meaning any GPU could render it, or any CPU.

Out-of-order at 4 GHz is better than out-of-order on a GPU at 2 GHz; that's just how silicon design and cost work. And yes, if I have the choice of a CPU with 20 cores that's faster and costs the same as a competitive CPU with 4, I will buy it.
#16
cucker tarlson
Steevo: By that idea we should all still have single-core 256 MB machines.
no,but that's your opinion and I'll defend your right to voice it.

let's wait and see how this thing turns out.
#17
95Viper
Stay on the topic.
Discuss nicely.
If you are trolling... stop it.
Take your arguing to PMs.

Thank you.
#18
Vayra86
cucker tarlson: Yes, but you've got GPUs with absolutely ridiculous compute power too. Why spend extra on an 8c/16t when a 6c/6t is plenty and your GPU packs so much power? How much does a 5700 XT/2070 Super pack? 8-9 TFLOPS? Probably 10 overclocked. And both can do FP+INT or FP16. Your RDNA2 console GPU will probably be close to that too.
Maybe now, with INT-capable GPUs, some new doors will open?

It's more wishful thinking than anything, mind; I'm still baffled we're exploring RT while proper physics is still in its infancy after so many years.

But the more likely route is that CPUs will simply keep gaining cores, and once the mainstream has come up to 8c (we're closing fast), a CPU library becomes very useful. AMD's timing here is quite right, and it will further reinforce their core/thread advantage vs Intel too. It's probably better this way; we don't need another PhysX with the same adoption problems.
#19
cucker tarlson
Vayra86: Maybe now, with INT-capable GPUs, some new doors will open?
It's more wishful thinking than anything, mind; I'm still baffled we're exploring RT while proper physics is still in its infancy after so many years.
Yup, you get a game that looks beautiful and then the physics look like crap.

As for RT: since what I wrote above very much relates to shadows, I'm glad RT came along. We're wasting resources on incredibly accurate and sharp shadows, while the goal should be somewhere else entirely: smooth, life-like, and dynamic.

Look at reflections too. SSR looks like crap in many cases. Want high-quality SSR reflections? In RDR2 they perfected it at the cost of a 40% performance hit. Ridiculous; might as well enable the RTX option, it would run the same and look better.
#21
Steevo
cucker tarlson: Yup, you get a game that looks beautiful and then the physics look like crap.

As for RT: since what I wrote above very much relates to shadows, I'm glad RT came along. We're wasting resources on incredibly accurate and sharp shadows, while the goal should be somewhere else entirely: smooth, life-like, and dynamic.

Look at reflections too. SSR looks like crap in many cases. Want high-quality SSR reflections? In RDR2 they perfected it at the cost of a 40% performance hit. Ridiculous; might as well enable the RTX option, it would run the same and look better.
True, I remember how crappy old games, and hell, even new ones, are when you get stuck in places due to faulty game physics. Swings in GTA were deadly.

I think a combined approach could work: "precooked" tables and vector data, which can be handled easily on a CPU core, handed to the GPU for the Z-depth pass; lookup tables of reflectivity values while running the ray tracing; then use the rendered angle values for objects and store them as long as they're in frame, only having to update the angle relative to the "user" to refresh the shadow and reflection map. It's going to take new hardware, and it's still computationally expensive, but so was AF for a long time; then we found the right way to do it in hardware with almost no performance penalty.

Physics can do the same. It's all just math, and a lot of it, but hardware acceleration for other things is just data tables, or actual physical transistors in the right pattern to match an algorithm.
#22
cucker tarlson
Steevo: True, I remember how crappy old games, and hell, even new ones, are when you get stuck in places due to faulty game physics. Swings in GTA were deadly.

I think a combined approach could work: "precooked" tables and vector data, which can be handled easily on a CPU core, handed to the GPU for the Z-depth pass; lookup tables of reflectivity values while running the ray tracing; then use the rendered angle values for objects and store them as long as they're in frame, only having to update the angle relative to the "user" to refresh the shadow and reflection map. It's going to take new hardware, and it's still computationally expensive, but so was AF for a long time; then we found the right way to do it in hardware with almost no performance penalty.

Physics can do the same. It's all just math, and a lot of it, but hardware acceleration for other things is just data tables, or actual physical transistors in the right pattern to match an algorithm.
I mean, we're getting so much compute power even with entry-level hardware, and mid-range has built-in ASIC accelerators.

Apparently the people who were screaming that AMD cards were superior in terms of compute performance conveniently forgot about it for the sake of arguing (not you).
#23
Xuper
One guy tried to trash-talk FEMFX: "garbage, PhysX has been doing this for a grip."
Then someone answered:
Wrong.
Nvidia Flex is doing this, but it is not in the default UE4. You have to install a fork made by Nvidia or make your own.
PhysX is only used to manage collisions between solid meshes. It does not allow you to use soft bodies out of the box.
#25
evernessince
cucker tarlson: No. Can't be more wrong.
Take a game that uses some sort of CPU physics, BF5 being a good example, and see what happens to CPU loads during an explosion.

What we have is GPU architectures that pack more and more compute power into smaller and smaller power envelopes.


Exactly, like recommending buying 8c/16t workstation CPUs for gaming because of physics.

You got $700 to spend? Get a $200 CPU and a $500 GPU instead of packing a $350 CPU in there.
I don't really see what you are fighting against here. Are you against doing physics on the CPU, thus leaving extra GPU cores for actually rendering more frames? If CPU core counts keep doubling every other generation like AMD has been doing, there's no reason not to run physics on the CPU.