The European Commission has initiated a public consultation on new regulations for high-risk artificial intelligence (AI) systems.
The consultation, open until July 18, seeks input from stakeholders including civil society, the tech industry, AI developers, researchers, and public authorities.
The EU's AI Act, which aims to regulate AI according to the risks it poses, was finalized last year, with most of its provisions set to take effect in August. High-risk systems will face strict requirements, including the use of high-quality data sets, activity logging, detailed documentation, clear information for users, human oversight, and robust cybersecurity.
Examples of high-risk AI systems include those used in hiring or loan applications. The Commission must present guidelines by February 2, 2026, specifying how the obligations apply in practice and giving examples of high-risk and non-high-risk AI systems.
The legislation permits or prohibits AI uses according to the risk they pose. It has already entered into force and is being rolled out gradually, with full application expected by 2027.
Since February, the Act's prohibitions have applied, covering certain biometric identification systems and systems that score individuals based on their behavior. The Commission is also preparing the fourth version of its guidelines for generative AI models, aiming to keep implementation on schedule.