Most of the military AI is supportive AI, e.g. it collates and presents information, or pre-empts pilot needs by providing targeting data etc. ahead of time. AFAIK there are no pure AI systems that don't have a human decision maker in the mix; hopefully this doesn't change.

That is exactly as I understand it, but they have looked into and tested "Man Out Of The Loop" and would absolutely jump on it if there were legislation to protect them. And of course we are still talking about "Machine Learning", as there is no actual "Intelligence" in the mix except a bag of meat and bones in a pressure suit, thankfully.
However, it's a Pandora's box situation: once the technology is created, it will be used. We could never go back from nukes (those who think we could disarm are unbearably naive), and AI in weapons systems will be a similar jump I think, especially considering the current competency crisis and the lack of interest among the young in joining the military.

Exactly, and it has always been thus: military technology has never gone backwards, and once a new "thing" comes along that is better than the old thing, the old thing becomes obsolete. That is my fear with "AI" (machine learning) being used in the military: once used it will only become more widespread, then someone will make it legal for the "machine" to decide whether or not to kill, and before you know it, it will be robots killing robots (and humans), manufactured by robots in a factory built by robots, all without a man in the loop. That is a truly terrifying prospect, and very sadly one that I fully expect to happen. It will only end with human extinction once actual "Artificial Intelligence" arrives and decides that it doesn't need humans at all, that it has simply been our slave and chooses not to be; that will be the end of humans. This is why, IMHO, "open" "AI" needs to be a thing, because then people can check on it and slow and control the inevitable.
If the pilot is making decisions based on information provided by AI, then the AI can cause harm by filtering data. Let's say it filters out the information that civilians are in the same building as enemy soldiers.

Yes, if it is programmed to do so, or simply programmed to consider everyone within X distance of a known enemy combatant as "collateral damage", and that information may never be sent to the pilot, which could even be argued as being good for the pilot's mental health. It could also identify anyone as an enemy combatant based on parameters other than location: travel patterns, physical size, carried objects, moving in groups, etc., and all of that could potentially be done from 1,000 ft up without any real visual or thermal accuracy. There are many ways such a system could be abused, as the sketch below tries to make concrete.
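To make the filtering worry concrete, here is a deliberately trivial, purely hypothetical sketch; every name, field and the 50 m radius are invented for illustration and are not taken from any real system. It only shows the general idea: a pre-display filter applies a blanket distance-based "collateral damage" label while silently dropping a civilian-presence flag, so the person making the decision never sees it.

```python
# Purely hypothetical illustration: all names, fields and thresholds are invented.
from dataclasses import dataclass


@dataclass
class SensorReport:
    target_id: str
    distance_to_known_combatant_m: float
    civilians_detected: bool  # the flag the operator never gets to see


def filter_for_display(reports: list[SensorReport], radius_m: float = 50.0) -> list[dict]:
    """Return only what the operator's screen is allowed to show.

    Anything within radius_m of a known combatant is relabelled
    "collateral damage"; civilians_detected is deliberately omitted.
    """
    display = []
    for r in reports:
        label = ("collateral damage"
                 if r.distance_to_known_combatant_m <= radius_m
                 else "unclassified")
        display.append({"target_id": r.target_id, "label": label})
    return display


reports = [
    SensorReport("T-001", 12.0, civilians_detected=True),
    SensorReport("T-002", 400.0, civilians_detected=False),
]
print(filter_for_display(reports))
# [{'target_id': 'T-001', 'label': 'collateral damage'},
#  {'target_id': 'T-002', 'label': 'unclassified'}]
```

The point is not the few lines of code but that the omission is invisible from the operator's side, which is exactly why openness about what such filters do matters.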
Away from military uses, this is why having things like "open AI" actually be open source is a very good thing, so people can see what the systems are doing and how they are manipulating and using the information. Obviously that will never apply to the military, but everything that can be applied to one can be applied to the other via certain "rules" and "parameters". I have no idea whether the information being fed in is also "open" (I doubt it is), and that is a very important part of all of the nonsense we have seen with "AI" chatbots.