The AI Act’s rules on banned AI systems, including facial recognition, start to apply on 2 February.
Civil society groups are concerned about the lack of European Commission guidance on banned artificial intelligence systems as the 2 February starting date for provisions of the AI Act dealing with these tools approaches.
Companies still have until the middle of next year to bring their policies in line with most of the provisions of the EU’s AI Act, but the ban on AI systems such as social scoring, profiling and facial recognition will kick in earlier.
The Commission’s AI Office, the unit responsible for dealing with the issue, said it would develop guidelines to help providers with compliance by early 2025, following a consultation on prohibited practices it carried out last November.
However, those documents have not yet been published. A spokesperson for the institution told Euronews that the aim is to publish the guidelines “in time for the entry into application of these provisions on 2 February”.
Ella Jakubowska, head of policy at advocacy group EDRi, said that there are “significant gaps and many open questions around the AI Office”.
“It is really worrying that interpretive guidelines still have not been published. We hope this will not be a harbinger of how the AI Act will be enforced in the future,” she added.
Loopholes
The AI Act foresees prohibitions for systems deemed to pose risks due to their potential negative impacts on society. However, it also foresees some exceptions where the public interest outweighs the potential risk, such as in law enforcement cases.
Caterina Rodelli, EU policy analyst at global human rights organisation Access Now, is sceptical of these exceptions: “If a prohibition contains exceptions, it is not a prohibition anymore.”
“The exceptions mainly benefit law enforcement and migration authorities, allowing them to use unreliable and dangerous systems such as lie-detectors, predictive policing applications, or profiling systems in migration procedures,” she said.
EDRi’s Jakubowska has similar concerns, and fears that “some companies and governments will try to exploit this to continue developing and deploying unacceptably harmful AI systems.”
The issue was heavily debated when the EU AI Act was negotiated, with lawmakers calling for strict bans on facial recognition systems.
National regulators
The AI Act will have extra-territorial scope, which means that companies that are not based in the EU can still be subject to its provisions. Businesses can be fined up to 7% of global annual turnover for breaches of the Act.
Most of the AI Act provisions will apply next year, allowing for standards and guidance to be prepared.
In the meantime, member states have until August of this year to set up the national regulators that will be tasked with overseeing the AI Act. Some countries have already started preparatory steps and tasked data protection or telecom bodies with oversight.
“[This] seems to be a bit of a patchwork, with little to nothing known in several countries about either the market surveillance authorities or the notified bodies that will oversee the rules nationally,” said Jakubowska.