The White Paper on Artificial Intelligence (AI) is, along with the Data Strategy, one of the pillars of the European Commission's Digital Strategy published in February 2020. As reported in our article in December 2020, the White Paper on AI aims to set a framework for trustworthy AI, based on excellence and trust, and to put forward a legislative proposal on AI.
On April 21 the European Commission presented its first Artificial Intelligence Act (AI Act), which aims to regulate AI through a horizontal, risk-based approach, introducing obligations proportionate to the potential impact that AI applications can entail. The draft regulation indeed includes a series of obligations for AI applications that could have a direct impact on citizens' personal or professional lives, recruitment processes being one example, and that are therefore considered high-risk. Companies or administrations using AI would accordingly need to ensure transparency, human oversight over AI usage and compliance with the regulation in place. The AI Act also aims to prohibit AI usage considered incompatible with EU values, such as AI systems designed to manipulate human behavior.
On governance, the Act's application will be overseen by a newly established European Artificial Intelligence Board, whose function will be to adopt positions on issues stemming from the enforcement and implementation of the AI Regulation. This Board will be chaired by the Commission and composed of designated national market surveillance authorities (National Supervisory Authorities) and the European Data Protection Supervisor. Generally, the provisions of the AI Act make it impossible for Member States to regulate AI at national level, although the text allows them to 'adjust' the provisions to the national AI regime. In addition, Member States will be able to legislate on exempted cases, such as AI applications for military use.

When it comes to enforcement, this remains the responsibility of EU countries through their designated notifying authorities, but the Commission can itself launch an investigation to ensure these comply with the text's provisions. Furthermore, each national surveillance authority will have to inform both the Commission and the other national authorities of any initiative aiming at restricting or prohibiting AI applications. Finally, the Commission envisages launching a 'Coordinated Plan' with national governments to foster investments in AI skills and infrastructure, with the main objective of gradually increasing combined public and private investment to 20 billion euros per year over the next decade. National funding is expected to be coupled with resources coming from the Recovery and Resilience Facility (RRF), the Digital Europe Programme and Horizon Europe.

Beyond all this, as the Commission proposal touches upon consumer protection, one of the main challenges for European regulation on AI will be to strike an effective balance between ensuring product safety through legislation on the one hand, and avoiding slowing down the uptake of AI and innovation in the EU's economy and industry on the other.
In Brussels, negotiations between the co-legislators, the European Parliament and the Council of Ministers of EU Member States, are proceeding at a slow pace given the sensitivity and technicality of the dossier. In the Parliament, committees are still clashing over the file, with JURI, LIBE and ITRE challenging IMCO's lead. In the Council, some Member States are still defining their positions, while for others the text's provisions on prohibited and high-risk AI applications remain unclear. Moreover,
the appropriate balance between fundamental rights protection and public security is expected to heat up the talks. The most controversial part concerns biometric identification in public spaces, which will be allowed only for law enforcement authorities, in very specific cases such as kidnappings or terrorist attacks, and only following ex ante approval from judicial authorities.
With the AI Act the EU thus aspires to become, in Internal Market Commissioner Thierry Breton's words, 'the main "pacemaker" in regulating the use of AI on a global scale'. In parallel, the human rights body the Council of Europe (CoE) is working on an international treaty that would introduce safeguards for human rights, the rule of law and democratic functioning. The treaty is bound to introduce strict rules for AI systems that might be at odds with human rights, including the much-discussed biometric recognition technologies. While the CoE's upcoming treaty might strengthen the EU's position in setting a global standard on AI, negotiations at EU level on the AI Regulation are likely to take several months.