Fine-Tuning the AI Act. Definitions, high-risk applications and conformity assessments

Policy Brief

On February 8, I-Com hosted an event devoted to the Artificial Intelligence Act (AIA), the first (hard) law on AI by a major regulator anywhere. For the occasion, I-Com prepared a policy document focussing on this innovative piece of legislation, which will play a crucial role in the European digital transition and whose effects will not be restricted to the EU Member States, considering the Union's consolidating role in setting global standards and exporting its values.

Hence, the Act aims to strike a balance between fostering innovation and safeguarding fundamental rights in the global race to unleash the potential of AI. The underlying goal is to confirm European leadership as a global standard setter, as has already happened with the General Data Protection Regulation.

The analysis of the AIA starts from some key definitions set out in Art. 3. It appears that some of them – e.g., "in a view to", "subliminal technique" (Art. 5.1.a) – need further clarification to avoid fragmented and diverging interpretations across Member States. Otherwise, the goal of maximum harmonisation, which is characteristic of European regulations, could be frustrated.

The AIA takes a risk-based approach. It provides a taxonomy of the activities carried out by AI systems, each with a corresponding legal regime, dividing them into three categories: (i) prohibited activities; (ii) high-risk activities; and (iii) low-risk activities. However, there are some doubts about the correct placement of certain activities in the corresponding cluster, especially regarding biometric identification systems.

The proposal also raises further questions, namely, the treatment of general-purpose AI, the setting up of national and cross-border sandboxes, and the upskilling of regulators and companies.
