That, plus the sub-100% reliability of everything built by humans (it's true for humans too, I know). Mechanical failures, lapses in processing power, etc. When you're playing a game on PC and Windows starts updating in the background, you might notice a little hiccup depending on the game and your hardware. Your phone sometimes drops mobile broadband signal, but it comes back after a couple of seconds. You can't afford the same while driving. Also, there are weather conditions. Maybe you can't see a road sign because of thick fog or heavy rain, but you might not even need to, because you know the road: you know which road sign is where, what the speed limit is, etc. I'm not sure how this kind of prediction works in computers (if there is such a thing at all).
Agreed, though I know that self-driving cars have the autonomous part running on a closed loop that should default to a halt if anything falls outside a 99.999999999% certainty (-ish). But yeah, there is for sure a lot to be aware of. The good thing is that this innovation has been an evolution over the last 10-15 years, with loads of testing throughout the last decade.
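The fail-safe loop described above could be caricatured like this. To be clear, this is a toy sketch, not how any real vehicle's software works; the threshold value and all names are made up for illustration:

```python
# Toy sketch of a confidence-gated control loop. All names and the
# threshold value are hypothetical, for illustration only.
CONFIDENCE_THRESHOLD = 0.99999  # system must be near-certain to keep driving

def control_step(perception_confidence: float, current_speed: float) -> str:
    """Decide the next action based on how certain the perception stack is."""
    if perception_confidence >= CONFIDENCE_THRESHOLD:
        return "continue"          # keep following the planned trajectory
    if current_speed > 0:
        return "controlled_stop"   # certainty too low: brake to a safe halt
    return "halted"                # already stopped; wait for certainty to return

print(control_step(0.9999999, 30.0))  # continue
print(control_step(0.95, 30.0))       # controlled_stop
```

The key property is that uncertainty defaults to the safest available action (stopping), rather than to continuing.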
You provided a link that has statistical data in it.
I am a scientist. I work in reality. I work in factual information. Correct and incorrect, black and white, yes or no, positive or negative, true and false. In-betweens and "gray" areas are not something I tend to dabble in.
Yes, it does.
Rubbish. AI will never have the ability to intuitively and creatively anticipate roadway problems, nor be able to do many other things that the human mind does easily but computers are completely lacking in.
Very flawed understanding...
In the context of the topic of this article, the question of safety is raised. Driver safety is not a subject open to gray-area interpretations. It must be treated as an absolute. Elon is a brilliant person, but he is still human and still subject to making poor choices. Suggesting the idea of installing a game system into the front panel dashboard of a vehicle is a glaring and shining example of the same.
1. The link was for the US government's forecast.
2. Well, then we've got that confirmed!
3. Agree to disagree
4. That's the thing: you assume that an AI has to understand the road the way you do; it doesn't. The 'classic creativity is a wonder of humans' line is cool and all, but that's not what we are discussing: we are discussing a machine-learned understanding of a system that humans have already defined pretty well. We already have clear rules in traffic, and we can set up a lot of 'if then else' to ensure that autonomous driving is more SAFE than humans. It will take more time to be BETTER at driving (not coming to a halt or slowing down due to lack of understanding of context).
5. The car manufacturers disagree, the research disagrees; companies are not just throwing billions of dollars into research for the fun of it. I'll repeat myself: it's not a matter of if but when. (That 'when' can be in 5 years if you believe Elon, or it can be in 10-15 years if we do it slow, but it will happen. The most realistic reason for this not to happen within the next 5-20 years is nuclear holocaust.)
6. Giving the user the possibility to use something in the wrong context does not mean that you nudge them to. As mentioned by Elon Musk: WHEN the cars become fully autonomous, you can enjoy your Netflix from the 'driver seat'. It does not mean that they will encourage this now, let alone allow it to be possible (again, unless you guys have tested the logic of the UI in drive mode, I really have a hard time taking your argument seriously). You might be right, but you also might be wrong. It's a flip-a-coin argument, which is stupid from an intellectual point of view.
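The 'if then else' rule layering from point 4 could be sketched roughly as follows. This is purely illustrative; the rules, names, and values are hypothetical and far simpler than anything a real system would use:

```python
# Illustrative only: a caricature of layering explicit, hard-coded traffic
# rules on top of a learned driving policy's output. All values hypothetical.
def apply_traffic_rules(planned_speed: float, speed_limit: float,
                        obstacle_ahead: bool, light_state: str) -> float:
    """Clamp a learned policy's planned speed with hard safety rules."""
    if obstacle_ahead or light_state == "red":
        return 0.0                           # hard rule: stop, no exceptions
    return min(planned_speed, speed_limit)   # never exceed the posted limit

print(apply_traffic_rules(60.0, 50.0, False, "green"))  # 50.0
print(apply_traffic_rules(40.0, 50.0, True, "green"))   # 0.0
```

The point of the layering is that the explicit rules always win over the learned component, which is what makes the "safe before better" ordering possible.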
You get the last word; try to hold back the urge to just misinterpret on purpose.
