True, but they still need a LAN/Wi-Fi connection. And so far, when denied one, they run fine.
I'm talking about something else: situations where you'll decide to have IoT-enabled devices online. There could be very different reasons, but I'll repeat the main three:
1) you decide you can benefit from IoT,
2) someone will make you do it (e.g. fire safety regulations or whatever),
3) some non-IoT-related factor will win out - e.g. you'll decide that a top-of-the-line fridge that is IoT-only (that only works online) is the most interesting option anyway, despite the potential spying, which you could avoid by getting a more basic model that can work offline.
And the fridge example is already fairly subjective, while I think there are a lot of IoT devices that will be much easier to accept.
For example - do you have anything against flood sensors? Do you find it a threat that someone unauthorized could hack in and check whether your house has been flooded?
Glad you mentioned aircraft. As a pilot myself, how technology on aircraft works is something very familiar to me. Aircraft, until recently, could not even be accessed remotely. Only UAVs fly remotely. And as for the autopilot, there have been a number of accidents where pilots depended way too much on automation and didn't fly the aircraft themselves.
As a person from the insurance business, I actually know a bit about crash statistics, but let's leave that. We agree that at this point there must be a pilot in the plane.
But I'm also doing a lot of research in computational decision making methods, so now the discussion is really moving into my comfort zone.
Computers and technology can make flying safer, but they cannot replace an experienced and seasoned pilot. Likewise, an automobile driver cannot truly be replaced.
This is not a proper comparison and you're making a mistake here.
Pilots aren't trained to operate planes when everything works (the autopilot does that). They're trained precisely to take over when the autopilot is malfunctioning or can't do something necessary.
It's actually the exact opposite with cars: drivers are taught to operate cars in the most probable situations.
Cars have safety systems that are there to handle extreme situations and malfunctions, because drivers can't. It's not possible to teach an average person, e.g., how to safely handle a slide on a wet surface. That would take months and still weed out many applicants.
And BTW: because cars don't drive on their own (yet), they're not designed to avert crashes.
Car safety is all about minimizing the results of a crash, when it happens (because the designers assume it will).
Such an approach clearly can't be used in planes. There it's all about staying in the air and landing safely.
they can not "think" for us. And they should be allowed to try.
On the contrary, they can.
You're thinking of "thinking" as a creative process, which computers can't do reliably (yet...).
However, "thinking" in general is mostly about fairly programmable reactions, which computers are designed to do. Your brain:
1) gets input: task to do and current state of the world around you,
2) processes it based on what it knows and the historical outcomes (experience) -> neural network,
3) chooses the best solution.
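The three steps above can be caricatured in a few lines of code - a toy sketch, with every name invented, not a claim about how brains or real AI systems are implemented:

```python
def decide(task, world_state, experience):
    """1) take input, 2) score options by remembered outcomes, 3) pick the best."""
    candidates = task["actions"]                          # 1) input: what can be done
    scored = {a: experience.get((world_state, a), 0.0)    # 2) history -> expected payoff
              for a in candidates}
    return max(scored, key=scored.get)                    # 3) choose the best solution

# Usage: a "driver" at a yellow light, with two remembered outcomes.
experience = {("yellow_light", "brake"): 1.0,
              ("yellow_light", "accelerate"): -5.0}
task = {"actions": ["brake", "accelerate"]}
print(decide(task, "yellow_light", experience))  # -> brake
```

A neural network replaces the explicit lookup table with a learned function, but the input-process-choose loop is the same.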
Think about games.
Computers are fairly rubbish at designing them, because that's a creativity task. They can, however, play many games very well.
The important fact is that if a game has a finite number of states, a computer can in theory always play optimally, so it will never lose to a human. It's just a matter of processing power and storage for the states (inputs) and strategies (outputs).
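To make the "finite states means optimal play" point concrete, here's a toy sketch (not any real engine's code) that solves a tiny subtraction game by exhaustive search: players alternately take 1-3 sticks, and whoever takes the last stick wins. Because the state space is finite, the computer can tabulate a perfect strategy.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def best_move(sticks):
    """Return (move, can_win): the stick count to take, and whether
    the current player can force a win from this state."""
    for take in (1, 2, 3):
        if take == sticks:
            return take, True          # taking the last stick wins outright
        if take < sticks and not best_move(sticks - take)[1]:
            return take, True          # leave the opponent in a losing state
    return 1, False                    # every move loses against perfect play

# From 10 sticks the computer takes 2, leaving the opponent at 8 - a
# losing position (multiples of 4 are lost under perfect play).
print(best_move(10))   # -> (2, True)
print(best_move(4))    # -> (1, False)
```

A human can memorize the rule for this tiny game too; the point is that the same brute-force tabulation scales with hardware, while our memory doesn't.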
For example: computers have long been superior in simple games (e.g. checkers, which has been fully solved) and have surpassed us in more complicated ones like chess.
But Go has too many states - you can't teach a computer to handle all the possibilities. As a result the AI has to rely on, for example, a neural-network ("intelligence") approach, and these are still not as efficient as the biological ones we have.
So going back to driving (but staying in game-theory language): if you have a system with both AI and humans "playing", you have an infinite number of states, which has to be approximated by a finite set just to let an AI do anything useful. And it's still a huge set, because there are countless possible situations on the road.
As a result, it's a difficult task just to program a car to drive among humans who follow the game's rules. On top of that you have to teach a car to handle situations where humans are breaking the law.
However, if you limit the game to a smaller area with fewer states (e.g. a single city) and all players play by the same rules, you can teach cars to always drive safely. In other words: if every car is autonomous, you can give each of them precise, deterministic instructions for each situation (e.g. each junction).
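Those "precise, deterministic instructions" could be as dumb as a lookup table - a sketch below, with junction names, approaches and actions all invented for illustration. The point is that once the state set is finite and fully known, every state maps to exactly one action and there's nothing left to "guess":

```python
# Hypothetical rule table: (junction, approach, signal) -> action.
JUNCTION_RULES = {
    ("main_and_5th", "north", "green"): "proceed",
    ("main_and_5th", "north", "red"):   "stop",
    ("main_and_5th", "east",  "green"): "yield_then_turn",
}

def instruction(junction, approach, signal):
    """Deterministic dispatch: one known state, one prescribed action."""
    return JUNCTION_RULES[(junction, approach, signal)]

print(instruction("main_and_5th", "north", "red"))  # -> stop
```

Real systems would of course need far richer state (speed, spacing, negotiation between cars), but the deterministic-dispatch structure is the same.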
At that point you're only left with unpredictable objects to cover: pedestrians, pets and so on. And here, too, the autopilot will win, because, despite having an inferior "intelligence", it gathers far more information than a human can. It will, for example, "see" people hidden behind an obstacle.