Human Rights Watch Criticizes U.S. Shift on AI Weapons Safeguards
————————————
Human Rights Watch has criticized a decision by the United States Department of Defense to reject ethical restrictions proposed by an artificial intelligence company for military use, warning that the move signals weaker safeguards for civilians in the development of AI-based weapons.
According to the rights group, the dispute emerged after Anthropic refused to allow its technology to be used in fully autonomous weapons or in mass surveillance of American citizens under a defense contract. The organization said U.S. Defense Secretary Pete Hegseth ordered the termination of the agreement on February 27.
Shortly afterward, the Pentagon reportedly signed a new agreement with OpenAI stating that its systems could be used for “any lawful purpose,” a standard Human Rights Watch described as a significant shift in government policy.
The organization also noted that a January memorandum on AI from the Defense Department appeared to remove earlier requirements that operators of autonomous weapons maintain appropriate levels of human judgment over the use of force.
Human Rights Watch warned that fully autonomous weapons could pose serious risks to civilians because such systems may struggle to distinguish between combatants and noncombatants in complex environments.
The group urged governments meeting at the United Nations in Geneva to address the risks of autonomous weapons under the Convention on Certain Conventional Weapons.
