Google Removes AI Pledge Against Weapons and Surveillance

Google has removed a key commitment from its AI principles, deleting a pledge not to develop artificial intelligence for weapons or surveillance. The change, first noticed by Bloomberg, reflects a shift in the tech giant's stance on military and security applications of AI.

The updated version of Google's AI policy, published Tuesday, now emphasizes that the company pursues AI "responsibly" and in line with "widely accepted principles of international law and human rights." However, it no longer explicitly states that Google will avoid developing AI for weapons or mass surveillance.

The revised policy was introduced in a blog post by Demis Hassabis, the head of Google DeepMind, and James Manyika, senior vice president of research labs. They framed the update as part of Google's belief that "democracies should lead in AI development" and that AI should be built in collaboration with governments and organizations that uphold values such as "freedom, equality, and respect for human rights."

Google has faced internal protests from employees over its contracts with the U.S. Department of Defense and the Israeli military, particularly in the areas of cloud computing and AI. The company has consistently maintained that its technology is not used to harm humans -- but recent revelations challenge that claim.

The Pentagon's AI chief recently told TechCrunch that some companies' AI models, including Google's, are helping accelerate the U.S. military's "kill chain" -- the process by which targets are identified and engaged in combat operations.

The removal of the anti-weapons and anti-surveillance pledge is already sparking backlash from digital rights groups, AI ethics researchers, and some Google employees.

Google was one of the few major AI companies that had made a clear commitment not to develop AI for warfare. Some believe that walking back that commitment suggests a prioritization of profit and power over ethical responsibility.

Others argue that Google's new AI policy is vague, replacing concrete commitments with broad, subjective language about "international law and human rights" -- a standard that is open to interpretation and could allow the company to justify nearly any AI application.

Google's softened AI stance may reflect growing pressure from Washington to ensure that leading U.S. tech companies remain competitive in the global AI race -- especially against China.

The U.S. government has been increasingly focused on integrating AI into military strategy, and tech firms like Google, Microsoft, and Amazon have been expanding their roles in national security.

Google's decision to quietly remove its pledge raises critical questions about the future of AI ethics.
