Voltaire Staff

EU AI Act comes into force with partial ban on biometric collection



The European Union's regulation governing the use of AI has come into force, with the law banning law enforcement's use of remote biometric identification in public places.

 

"The majority of rules of the AI Act will start applying on 2 August 2026. However, prohibitions of AI systems deemed to present an unacceptable risk will already apply after six months, while the rules for so-called General-Purpose AI models will apply after 12 months," according to the European Union Artifical Intelligence Act.


A subset of potential uses of AI has been identified under the law as high risk, such as biometrics and facial recognition, AI-based medical software, and AI used in domains like education and employment.


Their developers must demonstrate compliance with risk and quality management obligations and undertake a pre-market conformity assessment, with the possibility of being subject to regulatory audits.

 

High-risk systems used by public sector authorities or their suppliers will also need to be registered in an EU database.

 

A third "limited risk" tier applies to AI technologies like chatbots or tools that can be used to create deepfakes. These will have to provide transparency to make sure that users are not deceived.

 

The law imposes penalties for violations: fines of up to 7 per cent of global annual turnover for deploying banned AI applications, up to 3 per cent for breaches of other obligations, and up to 1.5 per cent for supplying incorrect information to regulators.

 

The developers of general-purpose AIs will have to provide a summary of their training data and commit to having policies in place to ensure they respect copyright rules, among other requirements. The exact requirements for high-risk AI systems under the Act are a work in progress.


Image Source: Unsplash

 

 
