EU AI Act Sets New Standards for Global Financial Institutions

The EU AI Act, the world's first comprehensive legal framework for artificial intelligence, aims to protect people from harmful effects of AI and to build trust in the technology. Noncompliance can draw penalties of up to 35 million euros or 7% of global annual turnover, whichever is higher.

U.S. companies, including financial institutions, must adhere to the Act when launching or using AI in the EU market, regardless of whether they have a physical presence in Europe. This includes a requirement for AI systems to be inventoried and assessed for compliance.

AI systems deemed to carry 'unacceptable' risks are banned, while high-risk applications, such as those used for recruitment or credit scoring, face stringent regulations. Lighter rules apply to limited-risk applications, with no restrictions on minimal-risk systems.
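As a rough sketch of how a compliance team might inventory AI systems against these tiers, consider the mapping below. The use cases and their tier assignments are illustrative examples based on the categories described above, not legal guidance, and the names (`RiskTier`, `tier_for`) are hypothetical.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "stringent obligations: risk management, documentation, human oversight"
    LIMITED = "lighter transparency rules"
    MINIMAL = "no additional restrictions"

# Illustrative mapping of example use cases to the Act's four tiers.
EXAMPLE_USE_CASES = {
    "social scoring of individuals": RiskTier.UNACCEPTABLE,
    "credit scoring": RiskTier.HIGH,
    "recruitment screening": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

def tier_for(use_case: str) -> RiskTier:
    """Look up the illustrative risk tier for a named use case."""
    return EXAMPLE_USE_CASES[use_case]
```

An inventory like this lets a bank flag which deployments (for example, credit scoring) trigger the Act's high-risk obligations before launch in the EU.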

For instance, a U.S. bank that develops an AI tool for the U.S. but plans to use it in the EU must ensure robust risk management and documentation, as well as human oversight where necessary.

As the EU moves forward with this legislation, U.S. banks are advised to prepare by establishing AI governance frameworks and conducting risk assessments. The Act takes effect in phases: rules for general-purpose AI models apply from August 2, 2025, and most obligations for high-risk systems apply from August 2, 2026.
