Thursday, February 6, 2025

Google Removes Pledge Not to Develop AI for Weapons and Surveillance

Google has updated its ethical guidelines for AI, removing its previous commitment not to apply artificial intelligence to weapons or surveillance. The Washington Post reports that, in a significant shift from its earlier stance, Google has revised its AI principles to eliminate a section outlining four “Applications we will not pursue.” Until recently, this list included weapons, surveillance, technologies likely to cause overall harm, and use cases that violate international law and human rights principles. The company declined to comment specifically on the changes to its weapons and surveillance policies.

Google executives Demis Hassabis, head of AI, and James Manyika, senior vice president for technology and society, explained the update in a blog post on Tuesday. They emphasized the need for companies based in democratic countries to serve government and national security clients, given the global competition for AI leadership within an increasingly complex geopolitical landscape. The executives stated that democracies should lead AI development, guided by core values like freedom, equality, and respect for human rights.

The updated AI principles page now includes provisions for human oversight, feedback incorporation, and technology testing to mitigate unintended or harmful outcomes. However, the removal of the explicit commitment against developing AI for weapons and surveillance marks a departure from Google’s previous position. Demis Hassabis, who joined Google in 2014 after the acquisition of his AI start-up DeepMind, had previously stated that the terms of the acquisition stipulated that DeepMind technology would never be used for military or surveillance purposes.