Thursday, July 9th 2020

NVIDIA Surpasses Intel in Market Cap Size

Yesterday, after the stock market closed, NVIDIA officially overtook Intel in market capitalization. In after-hours trading, NVIDIA (ticker: NVDA) stock stood at $411.20, putting the company's market cap at $251.31 billion. It marks a historic day for NVIDIA, as the company has historically been smaller than Intel (ticker: INTC); in the past, when NVIDIA was much smaller, some even speculated that Intel could acquire it. Intel's market cap now stands at $248.15 billion, slightly below NVIDIA's. However, market cap does not tell the whole story. NVIDIA's stock is fueled by the hype around machine learning and AI, while Intel's valuation does not rest on any potential bubble.

If we compare the revenues of both companies, Intel performs far better: it posted $71.9 billion in revenue for 2019, while NVIDIA posted $11.72 billion. There is no doubt NVIDIA has done a remarkable job, though, nearly doubling its revenue since 2017, from $6.91 billion that year to $11.72 billion in 2019. That is an impressive feat, and market forecasts suggest the growth is not about to stop. With the recent acquisition of Mellanox, the company now has much bigger opportunities for expansion and growth.
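As a quick sanity check on the figures quoted above, here is a back-of-the-envelope sketch that uses only the numbers given in this article:

```python
# Back-of-the-envelope check of the figures quoted above (all amounts in billions of USD).
nvda_market_cap, intc_market_cap = 251.31, 248.15
nvda_rev_2017, nvda_rev_2019 = 6.91, 11.72

cap_gap = nvda_market_cap - intc_market_cap        # ~3.2 B USD in NVIDIA's favor
growth = nvda_rev_2019 / nvda_rev_2017             # ~1.70x, i.e. "almost double"
cagr = growth ** 0.5 - 1                           # ~30% per year over 2017-2019

print(f"Market-cap gap: {cap_gap:.2f} B USD")
print(f"Revenue growth 2017->2019: {growth:.2f}x (~{cagr:.0%} per year)")
```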

136 Comments on NVIDIA Surpasses Intel in Market Cap Size

#101
Vya Domus
eLJay88: AMD uses a far superior production process (7 nm) yet it can't compete with Nvidia on a 12 nm node. It is not even in the same ball-park.
I find it fascinating that you bring up process technology yet conveniently omit the fact that Nvidia's highest-end offering has a staggering 80% more transistors. Basically, Nvidia has the performance lead right now because they made a chip twice as big. Wow, imagine if it was in the same "ball-park".

Will fans of the green team ever understand how to properly compare hardware ? Probably not but one can only hope.
eLJay88: my good old HD 6850 was inferior to Nvidia's offering in the stability department
Yeah, I bet. "Rock solid stability" is definitely the first thing that pops into my head from that era, besides stuff like this: www.techpowerup.com/review/asus-geforce-gtx-590/26.html
I went to heat up the card and then *boom*, a sound like popcorn cracking, the system turned off and a burnt electronics smell started to fill up the room
#102
cucker tarlson
Vya Domus: I find it fascinating you bring up process technology yet you also conveniently omit the fact that Nvidia's highest end offering has a staggering 80% more transistors. Basically Nvidia has the performance lead right know because they made a chip twice as big, wow, imagine if it wasn't in the same "ball-park".

Will fans of the green team ever understand how to properly compare hardware ? Probably not but one can only hope.
would amd make that 750mm2 chip possible with navi when 5700xt is already close to 250w ?
#103
mtcn77
cucker tarlson: would amd make that 750mm2 chip possible with navi when 5700xt is already close to 250w ?
No, they would make 4×175 mm² possible. They have both HBM2 and MCM engineering experience. No need for waste at the reticle limit. They can separate out the 4 rasterisers as well. It would be an amazing GPU.
#104
Vya Domus
Something else green team fans don't seem to understand is the relationship between power, die size scaling, frequency, and voltage. Larger processors tend to be far more power efficient; it's the reason why a 2080 Ti, which has roughly 2.3× the number of shaders of a 2060, also doesn't need 2.3× the power (a rough sketch of this scaling follows below).

This phenomenon is a complete mystery to them.
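A minimal illustrative sketch of that relationship, assuming a first-order dynamic-power model (power roughly proportional to unit count × voltage² × clock). The shader counts match the RTX 2060 and 2080 Ti, but the voltages and clocks below are made-up round numbers, not measured specs of those cards:

```python
# Illustrative first-order model only: dynamic power ~ units * V^2 * f.
# Shader counts match the RTX 2060 / 2080 Ti; voltages and clocks are assumed round numbers.

def relative_dynamic_power(units, voltage, clock_ghz):
    """Relative dynamic power for a chip with `units` shader units."""
    return units * voltage**2 * clock_ghz

narrow_chip = relative_dynamic_power(units=1920, voltage=1.00, clock_ghz=1.9)  # small die, clocked high
wide_chip   = relative_dynamic_power(units=4352, voltage=0.90, clock_ghz=1.6)  # big die, clocked lower

print(f"Shader ratio: {4352 / 1920:.2f}x")                  # ~2.27x more units
print(f"Power ratio:  {wide_chip / narrow_chip:.2f}x")      # well below 2.27x
```

Because the wider chip can hit its performance target at lower clocks and voltage, and dynamic power scales with voltage squared and frequency, its power draw grows far more slowly than its shader count.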
#105
mtcn77
Vya Domus: Something else green team fans don't seem to understand is the relationship between power, die size scaling, frequency and voltages. Larger processors tend to be far more power efficient, it's the reason why a 2080ti that has roughly 2.3X times the number of shaders that a 2060 has also doesn't need 2.3X times the power.
Old times, when the saying went: big GPUs don't need so much of everything. I don't know which anymore since gpureview shut down.
#106
cucker tarlson
Vya Domus: Something else green team fans don't seem to understand is the relationship between power, die size scaling, frequency and voltages. Larger processors tend to be far more power efficient, it's the reason why a 2080ti which has roughly 2.3X times the number of shaders that a 2060 has also doesn't need 2.3X times the power.



This phenomenon is a complete mystery to them.
is this the reason the rx470 beats vega 64 in power efficiency? :roll:
and why would you not include one with the 5700xt, I wonder.....
#107
john_
Fiendish: Nvidia even was all for a third party porting GPU PhysX to run on Radeons but AMD refused to provide proper support.
Nvidia wanted to create two types of gamers: those with an Nvidia GPU who would enjoy their games at full visual quality, and those who would have to settle for something visually inferior. We also saw how open Nvidia was in the following years with their GameWorks libraries: locked and proprietary. As for PhysX, in those first years it ran extremely badly on the CPU, being totally unoptimized. If I remember correctly (after so many years), it ran on a single thread and used ancient MMX instructions. Its software version was meant to make the GPU version look like ten times faster. So why would AMD trust Nvidia and support a proprietary, locked standard? AMD supporting PhysX back then would have been a mistake. Make hardware PhysX a necessity in gaming and hope that a company like Nvidia will not stab you in the back? I think not.

Nvidia's true intentions were totally clear when it chose to lock PhysX even though it was probably totally independent of the primary GPU used. Probably because Ageia developed it that way and Nvidia didn't bother to make it genuinely incompatible with anything other than Nvidia GPUs. So with a simple patch you could unlock PhysX and play those few games that supported hardware PhysX with an AMD primary GPU without any problems and with good framerates. I enjoyed Alice with a 4890 as the primary card and a 9600GT as a PhysX card. Super smooth, super fun. There was also a driver from Nvidia that accidentally came out without a PhysX lock. I think it was 256.xxx something.
Nvidia could have offered PhysX without support in those cases where the primary GPU was not an Nvidia one. They didn't.
#108
R-T-B
cucker tarlson: And satan
Lol, I don't know what's funnier, the contents of this paragraph or the fact it starts with "on topic"
I like the implication that profiting off any of those things is inherently bad. I don't feel it is if it's filling a need (like gaming during a pandemic), and at least for mining, AMD arguably profited more.
#109
Fiendish
john_: Also PhysX, those first years it was running extremely badly on the CPU, being totally unoptimized. If I remember correctly(after so many years), it was running on a single thread and using ancient MMX instructions. It's software version was meant to make the GPU version look like 10 times faster.
That was a bullshit myth.
www.codercorner.com/blog/?p=1129
So, why would AMD trust Nvidia and support a proprietary and locked standard?
You're ascribing information from 2010 to AMD's decision making process in 2008. In 2008 AMD had a lot more reason to distrust Intel than Nvidia and yet they had no problem supporting Intel's proprietary and locked standard (Havok).
AMD supporting PhysX back then would have been a mistake. Make hardware PhysX a necessity in gaming and hope that a company like Nvidia will not stub you in the back? I think not.
AMD didn't just "not support" PhysX, they also explicitly backed Havok against it. It was a business move through and through, one that was clearly made to hurt Nvidia and stifle their development, which it did... but it also deprived their customers of a potential feature and stifled the industry's development.
Nvidia's true intentions where totally clear when it chose to lock PhysX while it was probably totally independent to the primary GPU used.
This came after AMD had already made their intentions clear that they wouldn't play ball.
#110
Lucas_
mtcn77: You want me to go on a wild goose chase? Some Intel executive said that as a joke. I literally cannot find it at this time.
haha I like the expression :)
#111
john_
Fiendish: That was a bullshit myth.
www.codercorner.com/blog/?p=1129
Yeah, he explains that they didn't have the time, resources, or knowledge to make it multithreaded and use the best options available. Your point? You have to realize that someone posting a huge explanation in public doesn't necessarily tell the whole truth.
You're ascribing information from 2010 to AMD's decision making process in 2008. In 2008 AMD had a lot more reason to distrust Intel than Nvidia and yet they had no problem supporting Intel's proprietary and locked standard (Havok).
Intel and AMD have been a duopoly in the x86 business for decades. Think Samsung and Apple: they fight in court and at the same time they do business together. Nvidia, on the other hand, was a newcomer with a vision where the GPU does the heaviest tasks in a PC while the CPU plays the role of a traffic policeman in the system. Nvidia was, and today clearly is, a common enemy to both of them.
AMD didn't just "not support" PhysX, they also explicitly backed Havok against it. It was a business move through and through, one that was clearly made to hurt Nvidia and stifle their development, which it did... but it also deprived their customers of a potential feature and stifled the industry's development.
Come on... a bullshit myth from Nvidia fans. We had physics effects long before Ageia. Nvidia took something that was free at the time, running on the CPU, and tried to make it a proprietary feature for its GPUs. The result was games with PhysX effects available only to those with an Nvidia GPU. The rest could "enjoy" a totally different game.
This came after AMD had already made their intentions clear that they wouldn't play ball.
Nvidia's lock had nothing to do with AMD's decision. If the primary GPU was not an Nvidia one, PhysX got disabled. I read about a case where someone with an Nvidia GPU had problems enabling PhysX because he had a second USB monitor and the Nvidia driver treated the USB display driver as a non-Nvidia graphics driver. Hilarious. Also, I believe Nvidia removed the lock a few years later; I think drivers after 2014 do not lock PhysX, but I might be wrong here.
In any case, Nvidia could have left the PhysX feature unlocked and simply shown a pop-up window during driver installation stating that there would be no customer support when a non-Nvidia GPU is used as primary, and that customer support and bug reports would be valid only for those using an Nvidia GPU as primary. Anyway, let me repeat something here: with a simple patch, PhysX ran without any problems with an AMD card as primary.
#112
Jomale

Nvidia GTX 30X0 burns like coal in hell.
#113
Fiendish
john_: Yeah, he explains that they didn't had the time, resources and knowledge to make it multithreaded and use the best options available. Your point? You have to realize that someone posting a huge explanation in public, doesn't necessarily say all the truth.
Your claim, "It's software version was meant to make the GPU version look like 10 times faster", is bullshit. You're saying the codebase from before Nvidia's acquisition was designed to make the GPU version look faster, despite it not even having been ported to GPUs yet.
Intel and AMD are a duopoly in the x86 business for decades. Think Samsung and Apple. They fight in courts and at the same time they are doing business together. Nvidia on the other hand was a newcomer that had a vision where the GPU is doing the most heavy tasks in a PC, while the CPU is playing the roll of a traffic policeman in the system. Nvidia was and today clearly is a common enemy to both of them.
So AMD is fine with proprietary and locked standards as long as they dick over Nvidia. Gotcha.
Common... bullshit myth from Nvidia fans. We had physics effects long before Ageia. Nvidia took something that was free at a time, running on the CPU and tried to make it a proprietary feature for it's GPUs. The result was games with PhysX effects available only to those with an Nvidia GPU. The rest could "enjoy" a totally different game.
No they didn't... what is with you guys and how quickly you resort to just blatant lying? Nvidia didn't change anything regarding the propriety of Ageia's properties, and it sure as hell wasn't any more "free" before than it was after. The effects that were ported to the GPU were the effects that would only run well on PPUs before. GPU PhysX effects could have been functional on Radeons if AMD had properly supported the porting; it's AMD's fault PhysX wasn't supported on their hardware in Alice and other games.
Nvidia's lock didn't had to do with AMD's decision. If the GPU was not an Nvidia one, PhysX got disabled. I had read a case where someone with an Nvidia GPU was having problems enabling PhysX because he had a second USB monitor and the Nvidia driver was treating the USB driver as a non Nvidia graphics driver. Hilarious. Also I believe Nvidia removed the lock a few years later. i think drivers after 2014 do not lock PhysX, but i might be wrong here.
In any case Nvidia could let the PhysX feature unlocked and just throw a pop up window while installing the driver informing that there would be no customer support when a non Nvidia GPU is used as primary. That customer support and bug reports would be valid only for those using an Nvidia GPU as primary. Anyway let me repeat something here. With a simple patch PhysX was running without any problems with an AMD card as primary.
Nvidia's lock came after AMD's decision to torpedo their efforts. You're saying Nvidia should have thrown AMD's customers a bone when even AMD wouldn't throw them one themselves and was simultaneously giving Nvidia the finger.
#114
john_
Fiendish: Your claim, "It's software version was meant to make the GPU version look like 10 times faster" is bullshit. You're saying the codebase from before Nvidia's acquisition was designed to make GPU's faster, despite not even being ported to GPU's yet.
All my posts here talk about Nvidia and PhysX. I don't care about the code before Nvidia; that's your conclusion because it just suits you. But anyway, that link you posted proves that the code was far from optimized. And Nvidia could have optimized it easily and quickly. They are a software company, in case you don't know that. They had a ton of programmers and the experience to fix it, but they didn't.

Now, PhysX wasn't locked when it was meant to run on Ageia cards, before Nvidia took over. I would have objected to Ageia cards too, and so would you, if I had seen developers throwing all physics effects on the Ageia card and forcing people to buy one more piece of hardware when there were already multicore CPUs to do the job.

By the way, saying ALL the time that the other person posts bullshit is a red flag. You are putting a red flag on yourself, that you are a total waste of time. You look like a brainless eight-year-old fanboy who just wants to win an argument when you keep saying that the other person posts bullshit. This is the simplest way to explain it to you.
So AMD is fine with proprietary and locked standards as long as they dick over Nvidia. Gotcha.
If this convenient explanation makes you happy, no problem. Why spoil your happiness?
No they didn't... what is with you guys and how quick you resort to just blatant lying? Nvidia didn't change anything regarding the propriety of Ageia's properties and it sure as hell wasn't anymore "free" before than it was after. The effects that were ported to the GPU were the effects that would only run well on PPUs before. GPU PhysX effects could have been functional on Radeons if AMD had properly supported the porting, it's AMD's fault PhysX wasn't supported on their hardware in Alice and other games.
You just don't want to understand. You have an image in your mind that Nvidia is a company run by saints who want to push technology and make people happy. Maybe in another reality. It's funny that GameWorks even hurt performance on older series of Nvidia cards, but hey, Nvidia would have treated AMD cards fairly with its locked and proprietary code. You reject reality and then you ask what's with us? And who are we? Are we a group? Maybe a group of non-believers?
Nvidia's lock came after AMD's decision to torpedo their efforts, you're saying Nvidia should have thrown AMD's customers a bone when even AMD wouldn't throw them one themselves and were simultaneously giving Nvidia the finger.
Look at my system specs; my answer is there. I keep a simple GT 620 card in my system just so I can enjoy hardware PhysX effects in games like Batman, for example. When I enable software PhysX in a system with a 4th-gen quad-core i5 and an RX 580, the framerate drops to single digits. That simple GT 620 is enough for fluid gaming. And no, I didn't have to install a patch because, as I said, Nvidia decided to remove the lock. Why did they remove the lock? Did they decide to support AMD cards by themselves? Throw a bone to AMD's customers? Maybe they finally came to an agreement with AMD? And by the way, why didn't they announce that lock removal? There was NO press release.
But things changed. The PhysX software became faster on the CPU; it had to, or Havok would have totally killed it, and only a couple of developers were choosing to take Nvidia's money and create a game with hardware PhysX, where gamers using Intel or AMD GPUs would have to settle for a game with only minimal physics effects. No developer could justify going hardware PhysX in a world of 4/6/8-core CPUs. So Nvidia decided to offer a PhysX software engine that was usable. Its dream of making AMD GPUs look inferior through physics had failed.

As long as you reject reality, you will keep believing that we (the non-believers) post bullshit and lies.
#115
MrMeth
Vya Domus: It's a speculation for those that don't know what Intel began to do in the early 2000s. They drove AMD out of the OEM and server business with billions in bribes, there was no way AMD would have survived that exodus up until today had they not bought ATI and started shipping APUs in consoles, at one point in time that was basically their only considerable source of income. Intel probably never predicted that they were going to buy ATI nor what their intentions with it were.
This is an excellent point. AMD without the ATI purchase doesn't make it through the failure that was Bulldozer; the console contracts were the only thing keeping the lights on. What a scary world: no AMD, and $1k Intel quad-core CPUs.
#116
Fiendish
john_: All my posts here talk about Nvidia and PhysX. I don't care about the code before Nvidia. That's your conclusion because it just suits you. But anyway, that link you posted proves that the code was far from optimized. And Nvidia could make it much more optimized easily and fast. They are a software company in case you don't know that. They had a ton of programmers and experience to fix it, but they didn't.
Sigh... they did fix it with PhysX 3; it was a significant overhaul and was completed as quickly as could be expected for something like that. Maybe you need to take another look at the timelines here.
Now, PhysX wasn't locked when it was meant to run on Ageia cards, before Nvidia took over. I would be objecting on Ageia cards, you would also, if I was seeing developers throwing all physics effects on the Ageia card and forcing people to buy one more piece of hardware, when there where already multicore CPUs to do the job.
You just described Ageia's pre-Nvidia business model verbatim...
By the way. Saying ALL the time that the other person posts bullshit, is a red flag. You are putting a red flag on yourself, that you are a total waste of time. You look like a brainless 8 years old fanboy that just wants to win an argument when you keep saying that the other person posts bullshit. This is the simplest way to explain it to you.
You lied, objectively... why shouldn't you be called out?
You just don't want to understand. You have an image in you mind that Nvidia is a company run by saints who want to push technology and make people happy. Maybe in another reality. It's funny that GameWorks even hurt performance in older series of Nvidia cards, but hey, Nvidia would have treated AMD cards fair with it's locked and proprietary code.
WOW, so you're saying you know what's in my mind and you are going to tell me what I think and want... that's certainly not an insidious and disingenuous way to argue </s>
You reject reality and then you ask what is it with us? And who are we? Are we a group? Maybe a group of non believers?
I guess you just jumped into the discussion without reading the other posts; that makes sense, that kind of lazy ignorance seems to be your speed.
Look at my system specs. My answer is there. I keep a simple GT 620 card in my system just so I can enjoy hardware PhysX effects in games like Batman for example. When i enable software PhysX in a systme with a 4th gen quad core i5 and an RX 580 framerate goes down to single digit. That simple GT 620 is enough for fluid gaming. And no I didn't had to install a patch because as I said, Nvidia decided to remove the lock? Why did they removed the lock. Did they decided to support AMD cards by themselves? Throw a bone to AMD's customers? Maybe they finally came in agreements with AMD? And by the way, why didn't they announced that lock removal? There was NO press release.
But things changed. The PhysX software became faster on the CPU, it had to become, or Havoc would have totally killed it and only a couple of developers where choosing to take Nvidia's money and create a game with hardware PhysX, where gamers that where using Intel or AMD GPUs would have to settle for a game with only minimal physics effects. No developer could justify going hardware PhysX in a world with 4-6-8 cores/threads CPUs. So Nvidia decided to offer a PhysX software engine that was usable. It's dream to make AMD GPUs look inferior through physics had failed.

As long as you reject reality, you would keep believing that we(the non believers) post bullshit and lies.
Your notions are directly contradicted by the facts; how is calling that out rejecting reality?
#117
john_
Fiendish: Sigh... they did fix it with PhysX 3, it was a significant overhaul and was completed as quickly as could be expected for something like that. Maybe you need to take another look that the timelines here.
Oh my. Here we go again.

Well, I checked the date. SDK 3.0 came out in June 2011. I guess programmers also need time to learn it and implement it, so games using it came out when? Probably by the time it was clear hardware PhysX wasn't meant to become a standard.

Try again.
You just described Ageia's pre-Nvidia business model verbatim...
Ageia didn't have the connections, money, or power to enforce that. So even if they wanted to do that, they couldn't. Also, PPUs were not something that people were rushing to buy, so developers wouldn't cripple the game for 99% of their customers just to make 1% happy. Nvidia was a totally different beast, and they did try to enforce physics on their GPUs. You are NOT reading, or you just pretend not to read, what I post.
You lied, objectively... why shouldn't you be called out?
It seems that you are a waste of time after all. What you don't like is not a lie. I could call you a liar too. But I am not five years old.
WOW, so you're saying you know whats in my mind and you are going to tell me what I think and want... that's certainly not an insidious and disingenuous way to argue </s>
Oh, come on. You keep posting like a five-year-old. I am just too bored to quote the parts of your posts where you make assumptions about what I think, what I mean, and where I intentionally lie.
I guess you just jumped into the discussion without reading the other posts, that makes sense, that kind of lazy ignorance seems your speed.
You do reject reality. As for ignorance, it's your bliss.
Your notions are directly contradicted by the facts, how is calling that out rejecting reality?
So you had nothing to say here.

Well, it was nice wasting my time with you. Have a nice day.
#118
medi01
Intel's income is higher than NV's revenue...

Am I missing something about the whole "AI business" or is it about rather straightforward number crunching?
Xaled: ATI is now 7 years behind Nvidia
By which braindamaged metric? Dear God...
#119
bmacsys
mtcn77: Intel is a company that tries every trick in the book. They take their leisure to putting their right foot forward, until they threw the kitchen sink and the bath tub at the problem. That is about when they have decided upon the said question.
That is some of the dumbest stuff I have ever read in my life.
steve360: Even worse is that they turned down Apple's pitch for the CPU in the first iPhone. Intel did not see the potential of smartphones at the time. ARM is now a major long term threat to their existence.

Intel has form when it comes to these sorts of missed opportunities.
This was one of the most arrogant, short-sighted decisions in the history of tech. Intel might have owned mobile computing.
ARF: AMD is no underdog. Check the dictionary for the meaning of the word:

Until AMD can make inroads into server rooms, where they have maybe 10% penetration, and OEMs, where they have even less than that, they are still HUGE underdogs. Retail sales are like 1% of the overall CPU market, if that.
#121
R0H1T
R-T-B: AMD arguably profited more.
I'd argue the retailers & miners profited a heck of a lot more; if AMD had started a mining farm back then, I bet they'd have made 10x the profit during the (peak) mining booms.
#122
R-T-B
R0H1T: I'd argue the retailers & miners profited a heck lot more,
No. Just no. AMD's cash flow is much larger than that. Hell, Bitcoin's market cap isn't even really all that huge when we're talking corporation money. Let me put it this way: if Intel wanted to, they could just buy all the bitcoins and end it.

AMD probably could too, though it would likely end them.

The disparity between the big money like that and Bitcoin is actually quite large.
#123
R0H1T
You mean during the two mining peaks, when the worldwide crypto market cap soared 5x-10x in months? Some currencies saw even greater gains. I don't have the exact numbers from back then, but I do remember AMD's GPU division didn't report anywhere near the kind of profits you saw from crypto, and yes, I realize electricity costs play a major role in it. Of course, the smart(er) money probably invested in crypto rather than opening giant farms.
#124
Fiendish
john_: Oh my. Here we go again.

Well I checked the date. SDK 3.0 came out in June 2011. I guess programmers also need time to learn it and implement it, so games using it came out when? Probably when hardware PhysX was clear that wasn't meant to became a standard.
So first it was that Nvidia crippled/hobbled "their" software to make their GPUs look better, then it was that Nvidia didn't "fix" the software they inherited from Ageia, and now it's that Nvidia didn't fix it fast enough... seriously?
Ageia didn't had the connections, money, power to enforce that. So even if they wanted to do that, they couldn't. Also PPUs where not something that people where rushing to buy, so developers wouldn't cripple the game for 99% of their customers, just to make 1% happy. Nvidia was a totally different beast. And they did try to enforce physics on their GPUs. You are NOT reading or you just pretend to not read what I post.
Ageia got multiple developers to do exactly what you're saying they couldn't. Have you done any research on this?
Again, Nvidia completely supported porting GPU PhysX to Radeons.
gizmodo.com/nvidia-helping-modders-port-physx-engine-to-ati-radeon-5023150
It seems that you are a waste of time after all. What you don't like is not a lie. I could call you also a liar. But I am not 5 years old.
I come on. You keep posting like a 5 years old. I am just bored to post the parts of your posts where you make assumptions about what I think, what I mean, where I intentionally lie.
You've made claims like Nvidia designed the CPU portion of PhysX to make the GPU portion look better, which is impossible because the CPU portion was written before GPUs were even part of the equation and was not even "designed" by Nvidia in the first place. When I pointed this out, did you clarify or correct your claim? No, you just moved on to more falsities. Are you saying that wasn't intentional?
You do reject reality. As for ignorance, it's your bliss.
So you seemingly not reading the prior posts, which would have made it easy to put together what I meant by "you guys", is me... rejecting reality? You're not even making sense anymore.
So you had nothing to say here.
It was already discussed before. Nvidia tried to extend their technology to AMD's products, but AMD said no way, go to hell, we are backing Intel... so Nvidia said no, YOU go to hell, and locked out their products in response. Check the dates: AMD acted in bad faith first by stringing Eran Badit and consumers along and then sinking the whole thing. Nvidia supporting the porting effort directly contradicts the core of your notions.
#125
Vya Domus
What's certain is that PhysX always ran poorly, even if you had said Nvidia hardware, and there is no doubt that Nvidia tried to turn it into a disadvantage for their competitor. I'm glad basically no one uses it these days; good riddance, it was outclassed by many in-house physics engines anyway.