“Artificial Intelligence, deep learning, machine learning – whatever you’re doing if you don’t understand it – learn it. Because otherwise you’re going to be a dinosaur within 3 years” – Mark Cuban, Upfront Summit, 2017.
I. Background
We now live in an era in which artificial intelligence (AI) is no longer a distant possibility but a clear and present fact, used daily to varying degrees and playing a significant role in the everyday lives of billions of people. AI ranges from trivial spam filtering to chatbots employed in numerous economic sectors, from digital devices suggesting new songs to mobile applications that find the fastest route in real time or monitor your daily diet and wellbeing. In a nutshell, AI refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions.
All evidence indicates that AI is here to stay for the long run. Taking note of this new reality, as well as the unprecedented ethical and moral challenges raised by the extended use of AI, a plethora of actors, from national governments and international bodies to fundamental rights activists and Silicon Valley tech moguls, are increasingly concerned about the need to implement adequate AI regulation at both national and international level.
Against this background, and noting the potentially transformative impact of AI on the economy and society as a whole, European Commission President Ursula von der Leyen expressed in 2019 her intention to put forward legislation for a coordinated European approach to the human and ethical implications of artificial intelligence[1]. To initiate the preparatory work for laying the foundations of a future AI regulation, the European Commission selected in 2019 an AI expert steering group to help stimulate a multi-stakeholder dialogue, gather participants’ views and reflect them in its analyses and reports.
For its part, the European Parliament adopted in October 2020 several resolutions related to AI, including on civil liability[2] and copyright[3], as well as a Resolution on a Framework of Ethical Aspects of AI, Robotics and Related Technologies[4], which recommended that the European Commission propose a new regulation to harness the opportunities and benefits of AI. These were followed in 2021 by a draft report on AI in criminal matters[5] and a draft report on education, culture and the audio-visual sector[6].
Following all these actions, on 21 April 2021 the European Commission published the Proposal for the Regulation laying down harmonized rules on artificial intelligence (“the Proposal”)[7].
II. Main scope of the Proposal
The Proposal is part of a wider draft legislative package (partially) regulating emerging technologies in the EU (alongside the Digital Services Act[8], the Digital Markets Act[9], the Machinery Regulation[10], the revised Product Liability Regulation[11] and the Data Governance Act[12]), and its main purpose is to regulate the use of AI systems in the EU, including to:
– ensure that AI systems placed and used on the EU market are safe and respect existing laws on fundamental rights, as well as EU values;
– ensure legal certainty to facilitate investments and innovation in AI;
– enhance governance and effective enforcement of existing laws on fundamental rights and safety requirements applicable to AI systems;
– facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation.
The Proposal defines the AI systems as “software that is developed with one or more of [certain] approaches and techniques… and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.”.
The definition captures not only AI systems as stand-alone software products, but also products and services relying on AI services directly or indirectly.
Furthermore, certain practices are strictly prohibited for all AI systems, such as the placing on the market, putting into service or use of AI systems which:
– deploy subliminal techniques beyond a person’s consciousness to materially distort a person’s behavior in a manner that causes or is likely to cause that person or another person physical or psychological harm;
– exploit vulnerabilities of a group due to their age, physical or mental disability to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;
– evaluate or classify the trustworthiness of natural persons over a certain period of time based on their social behavior or known or predicted personal or personality characteristics, with the social score leading to detrimental or unfavorable treatment of certain natural persons or whole groups thereof.
Moreover, the Proposal also prohibits the use of ‘real-time’ remote biometric identification systems in publicly accessible areas for law enforcement, unless this is strictly necessary for specified objectives, namely:
– the targeted search for specific potential victims of crime, including missing children;
– the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or of a terrorist attack;
– the detection, localization, identification or prosecution of a perpetrator or suspect of certain criminal offences.
In any scenario, the use of ‘real-time’ remote biometric identification systems in publicly accessible areas will only be allowed where such systems ensure necessary and proportionate safeguards and conditions, notably with regard to temporal, geographic and personal limitations.
The Proposal also provides, in Annexes II and III, an exhaustive list of high-risk AI systems, whose deployment is subject to certain requirements, such as: establishing a risk management system covering the entire lifecycle of the high-risk AI system, implementing mitigation and control measures, providing information and training, and conducting testing.
The Proposal also sets out obligations for providers, manufacturers, importers, distributors and users of AI systems, including the following:
– providers must ensure compliance with the AI Regulation, implement quality management systems, draw up relevant technical documentation, keep logs generated by their high-risk AI systems, comply with conformity assessment and registration obligations, and report on serious incidents and malfunctions;
– manufacturers must ensure compliance as if they were the providers of the high-risk AI system;
– distributors, importers, users and other third parties will also be subject to providers’ obligations if they place a high-risk AI system on the market or put it into service, or if they modify the purpose of a high-risk AI system already on the market or in service;
– users must use high-risk AI systems in accordance with the instructions, ensure that input data is relevant, monitor the operation of such systems, and keep records and comply with the applicable information requirements.
Pursuant to the Proposal, transparency obligations apply to certain types of AI systems, such as those intended to interact with natural persons, those used for emotion recognition or biometric categorization, and systems that generate or manipulate content (“deep fakes”)[13].
The Proposal also provides a number of measures to support innovation, such as regulatory sandbox schemes, reduction of regulatory burdens for small and medium-sized enterprises and startups, and the creation of digital hubs and testing facilities.
To ensure the smooth, effective and harmonized implementation of the new regulations, the Proposal sets up dedicated AI governance systems at EU and national level.
At EU level, the Proposal establishes a European Artificial Intelligence Board, whose main duties are to facilitate the implementation of the regulations, provide advice and expertise to the Commission, and collect and share best practices among Member States.
At national level, Member States must designate one or more national competent authorities and, among them, the national supervisory authority, for the purpose of supervising the application and implementation of the regulations.
Furthermore, the European Data Protection Supervisor will act as the competent authority for the supervision of the EU institutions, agencies and bodies when they fall within the scope of the AI regulations.
III. Concerns and recommendations in the Joint Opinion issued by the EDPB and the EDPS
Taking note of the implications the new AI regulations could have with regard to the protection of personal data, on 18 June 2021, the European Data Protection Board (“EDPB”) and the European Data Protection Supervisor (“EDPS”) adopted the Joint Opinion no. 5/2021 on the Proposal for a Regulation of the European Parliament and of the Council laying down harmonized rules on artificial intelligence (the “Joint Opinion”).
The Joint Opinion welcomes the publication of the Proposal but highlights various concerns and recommendations, mostly stemming from the implications this regulation may have with regard to EU values and rights protected by the General Data Protection Regulation 2016/679 (“GDPR”), the Regulation 2018/1725 on the protection of natural persons with regard to the processing of personal data by the Union institutions, bodies, offices and agencies and on the free movement of such data (“EUDPR”), the ePrivacy Directive[14], the Law Enforcement Directive 2016/680 (“LED”), as well as the European Convention of Human Rights and the Charter of Fundamental Rights of the EU.
The Joint Opinion takes positive note of the risk-based approach underpinning the Proposal as well as of the coherent approach aimed at applying it both to the Member States (public and private sector) and to EU institutions, offices, bodies and agencies.
A first concern is raised regarding the exclusion of the international law enforcement cooperation from the scope of the Proposal, due to a significant risk of circumvention (e.g. third countries or international organizations operating high-risk applications relied on by public authorities in the EU).
Furthermore, the Joint Opinion underlines that the GDPR, EUDPR, LED and ePrivacy Directive (collectively the “EU data protection legislation”) should apply to any processing of personal data falling under the scope of the Proposal.
A clear relationship with the existing EU data protection legislation should also be ensured by amending Article 1 to state that the Proposal does not affect the application of the EU data protection legislation, including the tasks and powers of the supervisory authorities competent to monitor compliance with that legislation.
We discuss in the sections below the main concerns and recommendations addressed by the Joint Opinion with regard to the implications of these new AI regulations.
3.1. Risk-based approach
The Joint Opinion welcomes the risk-based approach underpinning the Proposal but recommends that certain societal/group risks (e.g. collective effects with a particular relevance, like group discrimination or expression of political opinions in public spaces) posed by AI systems should be assessed and mitigated.
Additionally, the concept of “risk to fundamental rights” should be aligned with that used in the EU data protection legislation, insofar as aspects related to the protection of personal data come into play.
Moreover, the Joint Opinion argues that the exhaustive list of high-risk AI systems detailed in Annexes II and III of the Proposal should be amended and updated constantly in order to cover all types of use cases involving significant risks, as they occur or evolve over time. Doing otherwise risks creating a rigid, binary classification that fails to capture newly emerging high-risk situations, thereby undermining the overall risk-based approach underlying the Proposal.
While the Proposal obliges providers of AI systems to perform risk assessments, the Joint Opinion notes that the users of AI systems will often be the data controllers, and that providers cannot properly anticipate all types of use. Consequently, subsequent, more granular assessments (data protection impact assessments, or DPIAs) should be performed by the users of the AI systems, taking into account the context of use and the specific use cases.
3.2. Prohibited uses of AI
The EDPB and EDPS note that the list of prohibited AI practices in Article 5 of the Proposal does not cover certain relevant practices, such as particularly intrusive forms of AI (especially those affecting human dignity), which should also be prohibited. Furthermore, the Joint Opinion stresses that social scoring of any type, whether performed by public or private entities, should likewise be prohibited under Article 5 of the Proposal.
Furthermore, AI systems that use biometric data (for instance, facial recognition) to categorize individuals into clusters according to ethnicity, gender, political or sexual orientation, or other grounds on which discrimination is prohibited under the Charter of Fundamental Rights are highly intrusive and should be prohibited.
Public authorities and private entities should not be allowed, under Article 5 of the Proposal, to use AI systems whose scientific validity has not yet been confirmed or which are in direct conflict with essential values of the EU (e.g., polygraphs). The Joint Opinion notes that not only do such AI systems infringe EU legislation but, when used to predict future human behavior, their use also affects human dignity, for example when predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of a natural person or on assessing personality traits, characteristics or past criminal behavior. According to the Joint Opinion, using such AI systems risks subordinating police and judicial decision-making to algorithmic outputs, objectifying the human beings affected.
A general ban should also be imposed on the use of AI for automated recognition of human features in publicly accessible places, such as recognition of faces, gait, fingerprints, DNA, voice, keystrokes and other biometric or behavioral signals, in any context.
Considering that public remote biometric identification of individuals poses a high risk of intrusion into their private lives, a stricter approach is necessary. Hence, the EDPB and EDPS note that public remote biometric identification may have serious proportionality implications, given that it involves processing the data of an indiscriminate and disproportionate number of data subjects in order to identify only a few individuals (e.g., passengers in airports and train stations).
Additionally, the effortless deployment of remote biometric identification systems presents transparency problems and issues related to the legal basis for processing under the EU data protection legislation: no practical solution has yet been found for properly informing individuals about this processing, nor for ensuring the effective and timely exercise of their rights. Furthermore, the use of such AI systems is deemed to have a direct negative effect on the exercise of the freedoms of expression, assembly, association and movement, since people are entitled to a reasonable expectation of anonymity in public spaces.
The Proposal prohibits the use of ‘real time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, except for three situations, namely: (a) targeted search of potential victims of crime (e.g., missing children); (b) prevention of specific, substantial and imminent threats to the life or physical safety of natural persons or of a terrorist attack; (c) detection, localization, identification or prosecution of a perpetrator or suspect of a criminal offence.
The Proposal defines the ’real time’ remote biometric identification system as “a remote biometric system whereby capturing of biometric data, the comparison and the identification all occur without a significant delay. This comprises not only instant identification, but also, limited short delays in order to avoid circumvention”.
The Joint Opinion recommends a further clarification of the term “significant delay” in the definition, considering that (a) a mass identification system is able to identify thousands of individuals in only a few hours; and (b) the intrusiveness of the processing does not always depend on its purpose or on the identification being done in real-time or not. For example, the opinion notes that ex-post remote biometric identification in the context of a political protest is likely to have a significant chilling effect on the exercise of the fundamental rights and freedoms, such as freedom of assembly and association. Similarly, the use of such AI systems for private security purposes poses the same threats to the fundamental rights of respect for private life, family life and protection of personal data.
Furthermore, EDPB and EDPS consider that the use of AI to infer emotions of a natural person is highly undesirable and should also be prohibited, except for certain well-specified cases such as for health or research purposes (e.g., patients, where emotion recognition is important) and only subject to implementing appropriate safeguards and observing all other data protection obligations, such as purpose limitation.
3.3. High-risk AI systems
According to Article 43 of the Proposal, providers must carry out conformity assessments for high-risk AI systems. The Joint Opinion recommends amending this article to provide for an ex-ante third party conformity assessment for high-risk AI systems, as such measure would strengthen the legal certainty and confidence in all high-risk AI systems.
Additionally, pursuant to the same Article 43, existing high-risk AI systems must undergo a new conformity assessment procedure whenever they are significantly changed. AI systems already in use are excluded from the scope of this requirement unless those systems are subject to “significant changes” in design or intended purpose.
The EDPB and EDPS stress that the term “significant changes” is unclear and that the regulation should consequently be amended to specify that AI systems already established and in operation must undergo a new conformity assessment whenever a change occurs that may affect their compliance with the AI regulations.
3.4. Governance and European Artificial Intelligence Board
Noting that the EDPS will fulfill the role of AI regulator for the EU public administration, the Joint Opinion recommends that the EDPS’ role be further detailed, and that the independence guarantees afforded to the various supervisory authorities, as well as the provisions governing their relationships, be clarified to ensure compliance with the EU data protection legislation. On another note, the Joint Opinion agrees that designating the data protection authorities as national supervisory authorities is likely to ensure a more harmonized regulatory approach, contribute to the consistent interpretation of data processing provisions and avoid contradictory enforcement of the new regulations among EU Member States.
Regarding the establishment of the European Artificial Intelligence Board (the “EAIB”) as the supervisory authority at EU level, the EDPB and EDPS consider that the Proposal should give the EAIB more autonomy to ensure the consistent application of the regulations across the single market. In addition, they recommend that the competences for enforcing the new regulations be conferred on the EAIB and that its legal status be further clarified.
3.5. Interplay with the data protection framework, sandbox & further processing
The Joint Opinion is of the view that it is vital to have a clearly defined relationship of the Proposal with the existing EU data protection legislation, in order to ensure and uphold the respect and application of the EU acquis in the field of personal data protection and of the fundamental rights granted to data subjects pursuant to the GDPR.
Moreover, Member States must establish AI regulatory sandboxes to facilitate the development, testing and validation of innovative AI systems for a limited period before their placement on the market.
Noting that sandboxes provide an opportunity to build the safeguards needed to foster trust in AI systems, the EDPB and EDPS acknowledge the need for guidance on how to strike the right balance between acting as a supervisory authority on the one hand and providing detailed guidance through a sandbox on the other.
Regarding transparency requirements, the EDPB and EDPS agree with the Proposal’s approach that high-risk AI systems need to be registered in public databases. Where transparency cannot be ensured for reasons of secrecy (e.g., the transparency obligation does not apply to AI systems used to detect, prevent, investigate or prosecute criminal offences), safeguards should be in place and the AI systems should be registered with the competent supervisory authority.
Another recommendation concerns special categories of data, which should benefit from a more coherent regulatory approach, as the current provisions do not seem sufficiently clear to create a legal basis for the processing of this kind of data.
Furthermore, the compliance mechanisms provided by the Proposal, such as certification and codes of conduct, should be further enhanced. In particular, the Proposal should include the principles of data minimization and data protection by design among the requirements to be considered for obtaining a certification (CE marking).
IV. Conclusions
The publication of the Proposal is welcomed as a necessary step for safeguarding the fundamental rights of EU citizens and residents in the light of the rapid development of AI. At the same time, the Joint Opinion highlights that, given the complexity of the Proposal and the very sensitive nature of the regulated matters, significant adjustments are required before the Proposal can translate into a well-functioning legal framework, efficiently supplementing the GDPR in protecting fundamental rights and freedoms while also fostering innovation.
Although the Joint Opinion is not legally binding, given the authority EDPB and EDPS are entrusted with in the field of data protection, its concerns and recommendations will certainly be reflected in one form or another in the upcoming AI regulations.
[1] For more details, see Ursula von der Leyen, A Union that Strives for More: My Agenda for Europe: Political Guidelines for the Next European Commission 2019-2024, 2019.
[2] Available here
[3] Available here
[4] Available here
[5] Available here
[6] Available here
[7] Available here
[8] European Commission, Proposal for a Regulation of the European Parliament and of the Council on a Single Market for Digital Services (Digital Services Act) and amending Directive 2000/31/EC (COM(2020) 825 final).
[9] European Commission, Proposal for a Regulation of the European Parliament and of the Council on contestable and fair markets in the digital sector (Digital Markets Act) (COM (2020) 842 final).
[10] European Commission, Proposal for a Regulation of the European Parliament and of the Council on machinery products (COM(2021) 202 final).
[11] European Commission, ‘Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions: Fostering a European Approach to Artificial Intelligence (COM(2021) 205 final)’ (21 April 2021) 2.
[12] European Commission, Proposal for a Regulation of the European Parliament and of the Council on European data governance (Data Governance Act) (COM(2020) 767 final).
[13] “Deep fakes” are AI systems that generate or manipulate image, audio or video content that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful.
[14] Directive 2002/58/EC of the European Parliament and of the Council of 12 July 2002 concerning the processing of personal data and the protection of privacy in the electronic communications sector (Directive on privacy and electronic communications) as amended by Directive 2006/24/EC and Directive 2009/136/EC.
Vlad Cordea, Managing Associate Ijdelea Mihailescu
Adrian Manolache, Associate Ijdelea Mihailescu