So you have nothing, and you're insisting I don't know anything about this while bringing nothing to the table...while I provided literal custom-coded examples of what AI is poor at, and others provided direct links supporting my claims about how it works.
Cool. Now I see the troll. If you seriously want to proceed: I was asking for proof of any AI in present tech that does not use tables for training.
PS: By the way, DALL-E works using the same tables and is visual; check the source.
So...I really don't want to spark this off again, but I need to say something.
Let's assume for just a moment that Tesla has made an AI. Let's also let that AI be their self-driving mechanism. I'll start by qualifying some things:
1) The AI cannot account for variation in the roads due to construction...so it fails the test of intelligence there.
2) The AI has been documented to just go silly occasionally...implying that the learning structures can be sidetracked by many things.
3) The AI is legally required to have someone watching it...so it's dumber than most people who learn to drive a semi professionally.
What could account for such an AI? Well, it's not an AI. Instead of learning, it's a complicated series of lookups which has been "trained" to weight certain responses against other responses. For instance, your forward ultrasonic emitter detects an obstruction. It doesn't know what that obstruction is, but because the distance is holding relatively steady and you are moving at speed, it can allow you to keep following that sensor reading without applying the brakes. Cool. That's something humans have had since the early days of mining.
But that's not all...it can slow you down. Yes, if the sensor input is decreasing by a certain value, it knows something is in front of it...but only that the value is decreasing, so the brake should be applied. Cool. Your average two-year-old learns this when they start walking and discover that running into a wall hurts and thus should not be repeated.
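To make that concrete, here's a minimal sketch of the behavior I'm describing. Everything in it is hypothetical (my own function names, made-up thresholds, not anything Tesla actually ships): the whole "decision" reduces to comparing a raw distance value against hand-tuned cutoffs.

```python
# A minimal sketch (hypothetical, not Tesla's actual logic) of the
# "lookup plus weighting" behavior: hold speed while the sensed gap
# is stable, apply the brake when the gap is shrinking.

def follow_controller(prev_gap_m: float, gap_m: float, dt_s: float) -> str:
    """Map one sensor reading to an action, with no model of *what*
    the obstruction is, only how the raw distance value is changing."""
    closing_rate = (prev_gap_m - gap_m) / dt_s  # m/s; positive = getting closer

    # Hand-tuned thresholds stand in for the "trained weights".
    if gap_m < 2.0 or closing_rate > 5.0:
        return "brake_hard"
    if closing_rate > 1.0:
        return "brake_gently"
    return "hold_speed"  # gap is stable: keep following

if __name__ == "__main__":
    # Gap shrinks from 30 m to 24 m in one second: the braking rule fires.
    print(follow_controller(prev_gap_m=30.0, gap_m=24.0, dt_s=1.0))  # brake_hard
```

Notice there's no "understanding" anywhere in there...just a value, a comparison, and a weighted response.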
But...it's more complicated than that. It must be, because Musk isn't completely full of crap...right? Well, no. Let me run with the crap example. Your toilet is about as intelligent as your car, just mechanically based rather than electronic. It has a float which, with the tank empty, hangs in air; gravity pulls it down, which opens a valve and lets water fill the tank. As the water rises, the float becomes less dense than the surrounding medium, so instead of a downward gravitational pull, buoyancy applies an upward force. That closes the valve when the tank fills, making sure the tank doesn't overfill. Is your toilet an AI...or is it just an I/O system designed to take in data and respond to a very narrow range of values?
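If you want that in code, here's a toy model of the float valve (my own construction, obviously...toilets don't run Python). The entire "feedback loop" is one comparison.

```python
# A toy model of the float valve: the same narrow input -> output
# loop the car has, just implemented mechanically instead of in code.

def refill_step(level: float, full: float = 1.0, inflow: float = 0.1) -> float:
    """One step of the float's 'decision': valve is open iff below full."""
    valve_open = level < full              # buoyancy hasn't lifted the float yet
    if valve_open:
        level = min(full, level + inflow)  # water rises, float rises with it
    return level

level = 0.0
while level < 1.0:
    level = refill_step(level)
print(f"tank full at level {level}")       # valve now held shut by buoyancy
```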
Alternatively, let me ask you something simple. An AI, by its nature, can learn like a human. It is an artificial intelligence...but intelligence nonetheless. Take your AI for the rocket and ask it to work out how to make a peanut butter and jelly sandwich...without significant rules. Now, what do you get? Nothing. Funny, that: your average five-year-old can make a sandwich, but because your "AI" isn't strictly inside a situation with pass/fail conditions and measurable outcomes, it cannot do the thing. Read up on a dead simple version of it here:
PB&J
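To show how brittle that kind of system is, here's a deliberately naive sketch of the PB&J exercise. The step names and world state are all made up by me; the point is that a fixed instruction list with pass/fail preconditions dies the moment reality differs from what it was trained for.

```python
# A deliberately brittle PB&J "AI": a fixed instruction list with
# pass/fail checks. One unanticipated condition (a jar that is
# already open) stops it cold, where a five-year-old just improvises.

STEPS = [
    ("pick up knife",        lambda w: "knife" in w),
    ("open peanut butter",   lambda w: w.get("pb_jar") == "closed"),
    ("spread peanut butter", lambda w: "bread" in w),
    ("open jelly",           lambda w: w.get("jelly_jar") == "closed"),
    ("spread jelly",         lambda w: "bread" in w),
    ("close sandwich",       lambda w: "bread" in w),
]

def make_sandwich(world: dict) -> None:
    for action, precondition in STEPS:
        if not precondition(world):
            print(f"FAIL at '{action}': untrained situation, giving up")
            return
        print(f"ok: {action}")
    print("sandwich complete")

# Someone helpfully left the peanut butter open, and the rigid
# rule list treats that as an unrecoverable error.
make_sandwich({"knife": True, "bread": True,
               "pb_jar": "open", "jelly_jar": "closed"})
```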
This is why I'm only going to be afraid of AI when it gets to the point of being able to make me a sandwich without comparing a thousand different good examples and iterating until it gets something viable. It's why I laugh when I see people create an "AI" for racing games...one that takes 10 runs to stop simply driving off the track, and a thousand more to understand that slamming into a wall is not a good strategy (a toy version of this is sketched below). That type of "AI" is not intelligent; it's iterative, scaled responses to stimulus. That's not something to be afraid of...as long as you can introduce any stimulus it hasn't been trained on yet. To be fair, Star Trek knew this back in the TOS era. They proved it when their "thinking machines" were destroyed by a simple logical paradox...where a true intelligence is barely even slowed by one. To quote the famous line, "that does not compute." This basic ability to do more than process and weight responses is what people often forget...and I think it's what you're trying to communicate.
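Here's roughly what those racing-game "AIs" boil down to (a toy of my own, not any specific game's code): a score table updated by trial and error, with no concept of *why* walls are bad...only that the number next to one action keeps going down.

```python
# A toy trial-and-error "racing AI": the entire brain is a score
# table. It never understands walls; it just stops picking actions
# whose running-average reward keeps dropping.
import random

actions = ["steer_left", "steer_straight", "steer_right"]
score = {a: 0.0 for a in actions}   # the whole "brain" is this table
counts = {a: 0 for a in actions}

def run_lap(action: str) -> float:
    # Hypothetical track: only steering straight avoids the wall.
    return 1.0 if action == "steer_straight" else -1.0  # crash penalty

random.seed(0)
for lap in range(1000):             # a thousand runs, as described above
    if random.random() < 0.1:
        a = random.choice(actions)  # occasional random lap (exploration)
    else:
        a = max(score, key=score.get)  # otherwise repeat the best so far
    reward = run_lap(a)
    counts[a] += 1
    score[a] += (reward - score[a]) / counts[a]  # incremental average

print(max(score, key=score.get))    # 'steer_straight': learned, not understood
```

Change the track (a new "stimulus" it wasn't trained on) and the table is worthless until it crashes its way through another thousand laps.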
That said, if people believe that all thought is just that, it's easy to pretend that AI is already here...because why not? If you believe this, though, then there's no such thing as free will, because our outputs are always defined by our inputs...and with enough understanding of a person you could adequately predict them 100% of the time. That is the goal of AI though...right? Always do the "right" thing with certain data...ignoring that the beauty of the I in AI is being able to respond to stimuli that haven't been trained for.