Pentagon Adopts New Ethical Principles for Using AI in War

The Pentagon is adopting new ethical principles as it prepares to accelerate its use of artificial intelligence technology on the battlefield.
The new principles call for people to “exercise appropriate levels of judgment and care” when deploying and using AI systems, such as those that scan aerial imagery to look for targets.
They also say decisions made by automated systems should be “traceable” and “governable,” which means “there has to be a way to disengage or deactivate” them if they are demonstrating unintended behavior, said Air Force Lt. Gen. Jack Shanahan, director of the Pentagon’s Joint Artificial Intelligence Center.
The Pentagon’s push to speed up its AI capabilities has fueled a fight between tech companies over a $10 billion cloud computing contract known as the Joint Enterprise Defense Infrastructure, or JEDI. Microsoft won the contract in October but hasn’t been able to get started on the 10-year project because Amazon sued the Pentagon, arguing that President Donald Trump’s antipathy toward Amazon and its CEO Jeff Bezos hurt the company’s chances at winning the bid.
An existing 2012 military directive requires humans to be in control of automated weapons but doesn’t address broader uses of AI. The new U.S. principles are meant to guide both combat and non-combat applications, from intelligence-gathering and surveillance operations to predicting maintenance problems in planes or ships.
The approach outlined Monday follows recommendations made last year by the Defense Innovation Board, a group led by former Google CEO Eric Schmidt.
While the Pentagon acknowledged that AI “raises new ethical ambiguities and risks,” the new principles fall short of stronger restrictions favored by arms control advocates.