Pentagon Publishes Guide to Ethical Wartime Use of AI

A Pentagon advisory board has published a set of guidelines on the ethical use of artificial intelligence (AI) during warfare. 

In "AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense," the Defense Innovation Board (DIB) shied away from actionable proposals in favor of high-level ethical goals. 

In its recommendations, the board wrote that the Department of Defense's AI systems should be responsible, equitable, traceable, reliable, and governable.  

Since AI systems are tools with no legal or moral agency, the board wrote that human beings must remain responsible for their development, deployment, use, and outcomes.

On the principle of equity, the board wrote that the Department of Defense (DoD) "should take deliberate steps to avoid unintended bias in the development and deployment of combat or non-combat AI systems that would inadvertently cause harm to persons."

To ensure AI-enabled systems are traceable, the board recommended the use of transparent and auditable methodologies, data sources, and design procedures and documentation.

The board recommended that the DoD's AI systems be as reliable as possible and, because reliability can never be guaranteed, that they always remain governable. That way, systems "that demonstrate unintended escalatory or other behavior" can be switched off.

The board called for ethics to be an integral part of the development process for all new AI technology, rather than an afterthought. 

"Ethics cannot be 'bolted on' after a widget is built or considered ..