Google has announced its decision to sign the European Union's voluntary Code of Practice for Artificial Intelligence (AI), drawn up by 13 independent experts. The code is intended to help companies comply with the EU's Artificial Intelligence Act, which imposes strict rules on AI systems, and focuses on three areas: transparency, copyright, and safety and security measures for AI models.
While Google supports the initiative, Kent Walker, the company's President of Global Affairs and Chief Legal Officer, warned that certain aspects of the code could impede AI development in Europe. In particular, he pointed to departures from established copyright law, slow approval processes, and requirements that could expose trade secrets, all of which risk undermining Europe's competitiveness in the global AI market.
Other major tech companies have responded to the code in different ways. Microsoft is expected to endorse it, while Meta Platforms has declined to sign, citing legal uncertainties for AI developers. The EU's AI legislation aims to establish a global standard for AI governance amid rising concerns over the technology's ethical and legal implications.
The AI Act, which came into effect in August 2024, is considered the world's strictest regime for regulating AI technology. It has been heavily criticized by the US government and Big Tech groups, which argue that it will stifle growth. The EU, however, remains firm in its commitment to digital sovereignty and to regulating AI for safety and transparency.
Google's decision to sign the code reflects a commitment to ethical AI practices, but it also underscores the ongoing tension between fostering innovation and ensuring the responsible deployment of AI technologies.