Statement of “Voice in Bulgaria” on the “Regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence – Artificial Intelligence Act” and its application in Migration

Visual: Lidia Stanulova (created during EthicAI=Labs 2022, Goethe-Institut)

 

Part I – Overview of EU regulation on Artificial Intelligence

 

  1. Introduction

 

On 14 June 2023, the European Parliament will vote on the long-awaited EU Artificial Intelligence Act[1] (hereafter referred to as the Regulation or the Regulation Proposal). Following that vote, the text will move to the Council of the European Union for negotiation, and further changes and additions are expected during the trilogue between the Parliament, the Council and the Commission. The civil sector and human rights organisations should therefore continue their advocacy and propose changes guaranteeing respect for human rights in every area, including for the most vulnerable groups within the Union. This Opinion is an overview of the Draft Regulation, focusing on the risks of the use of AI by Member States in the implementation of migration and security policies.

 

Overview

 

The European Commission published the Regulation Proposal in April 2021 as a draft supranational mechanism for regulating artificial intelligence within the European Union. The proposal was circulated together with the Coordinated Plan for Artificial Intelligence – 2021[2], which aims to promote investment and infrastructure for the creation and development of technologies in the fields of Artificial Intelligence, the Internet of Things and robotics. The Regulation and the Coordinated Plan have been long awaited and widely debated, both within and beyond EU Member States, as all major service providers in this field, including leading companies such as OpenAI, Google and IBM, are interested in expanding their activities in the European market. After two years of discussions with the civil sector, technology companies and Member State governments, the Commission has chosen to propose a centralised, horizontal regulation based on the risk posed by each specific technology tool. The document balances two main objectives: protecting the fundamental rights and freedoms of European Union citizens, and promoting innovation in artificial intelligence, robotics and the Internet of Things (IoT).

 

Despite these good intentions, the Regulation has been criticised by a number of experts and human rights organisations for lacking adequate guarantees that AI will not be used to violate the rights of migrants and asylum seekers on the territory of the Union. The sections below discuss the main principles and norms set out in the Regulation, the risks associated with the use of AI, and the AI-based activities that threaten migrants’ rights. The Opinion concludes by proposing changes to the Regulation to ensure that unacceptable and high-risk technologies are not used in the Union’s migration policies.

 

 

  2. Risk-based approach

 

To balance its approach and encourage innovation, the Regulation divides AI technologies into four groups according to the risk they pose to human beings and their rights (a simplified sketch of this triage follows the list below).

  • Unacceptable risk. Technologies posing an “unacceptable risk” will be banned in the European Union. They are not exhaustively listed, but cover practices that are contrary to, or in irreconcilable conflict with, the fundamental rights and values of the Union: for example, systems that aim to manipulate human behaviour towards self-harm or dangerous conduct, social scoring used to determine access to public resources, and certain biometric technologies that collect personal data about citizens remotely. In the second part of this study, we argue that such unacceptable-risk technologies should not be used for migration control and national security purposes.
  • High risk. High-risk systems are the main target of the Regulation. These systems put the lives, health, or fundamental rights and freedoms of citizens at risk. Such applications appear in healthcare, transport (for example, autonomous cars), criminal investigation and critical infrastructure. For these systems, the Regulation provides detailed and mandatory licensing procedures and operating rules. To be licensed, a high-risk AI system must pass a conformity assessment, be transparent as a process and as a model, and be readily explainable. Vendors must provide full and clear information on the algorithms, capabilities and limitations of their technology models. Here the Commission is trying to rein in one of the defining characteristics of artificial intelligence: the “black box”, which even its own creators can hardly predict and control. The essence of the “self-learning” process in deep learning is that humans cannot fully predict what results and conclusions the model will reach. Another important aspect of the Regulation is that users have the right to be notified when they interact with AI (e.g. a chatbot) and to request human intervention or to challenge the AI’s decisions.
  • Limited risk and minimal risk. For limited- and minimal-risk systems, voluntary standardisation is provided: they will operate on the basis of trust and voluntary quality certification.
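
To make this tiering concrete, the sketch below shows what such a risk triage might look like in code. It is purely illustrative: the four tier names follow the Regulation, but the keyword rules, domain lists and function names are simplified assumptions of ours, not the legal tests set out in the text.

    # Illustrative sketch only: the four tiers come from the Regulation, but the
    # matching rules below are simplified assumptions, not the legal tests.
    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "prohibited outright"
        HIGH = "mandatory conformity assessment"
        LIMITED = "transparency obligations"
        MINIMAL = "voluntary codes of conduct"

    # Hypothetical keyword lists; a real legal assessment is far more involved.
    PROHIBITED_USES = {"behavioural manipulation", "social scoring",
                       "remote biometric identification"}
    HIGH_RISK_DOMAINS = {"healthcare", "transport", "law enforcement",
                         "migration", "critical infrastructure"}

    def classify(use_case: str, domain: str) -> RiskTier:
        """Map a described AI use to a tier under the simplified rules above."""
        if use_case in PROHIBITED_USES:
            return RiskTier.UNACCEPTABLE
        if domain in HIGH_RISK_DOMAINS:
            return RiskTier.HIGH
        if use_case == "chatbot":
            return RiskTier.LIMITED
        return RiskTier.MINIMAL

    print(classify("social scoring", "public administration"))  # RiskTier.UNACCEPTABLE
    print(classify("risk assessment", "migration"))             # RiskTier.HIGH
    print(classify("spam filtering", "e-mail"))                 # RiskTier.MINIMAL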

 

 

  3. Centralisation and subsidiarity

 

The Regulation foresees the creation of a European Artificial Intelligence Board to coordinate the efforts of Member States in implementing the Regulation. Alongside this, national regulators will monitor the market to ensure that technological innovation meets technical and legal requirements. The Regulation also promotes the sharing of data within the European Union and regulates the creation of a common database for the purpose of technological progress in different sectors. Although easy access to data can encourage the development of start-ups and improve the economic ecosystem in the Union, it is worrying that users’ data from different areas will be administered by centralised bodies of the European Commission and used for innovation. Even if this data is anonymised, the centralised processing of large amounts of personal data carries an enormous risk of abuse, cyber-attacks, corruption and discrimination.

 

  4. Artificial Intelligence Civil Liability Directive

 

In parallel with the proposed Regulation on Artificial Intelligence, the European institutions have developed a Directive of the European Parliament and of the Council on non-contractual civil liability applicable to the use of Artificial Intelligence – the AI Liability Directive[3]. The Directive was proposed in September 2022 and addresses some of the risks to consumers from the use of AI. These risks, and the reasoning behind the Directive, overlap with some of the risks outlined in the proposed AI Regulation, although the Directive has a much narrower scope: non-contractual liability in the use of AI. It sets out a harmonised framework under which Member States accept no-fault (strict) liability for the use of AI, accompanied by a reversal of the burden of proof.

 

  5. Risks

 

Both the Regulation and the Directive identify four main groups of risks associated with the use of AI: connectivity, autonomy, data dependency and opacity. Connectivity allows cyber-attacks or AI errors to spread a potential danger quickly to multiple users. Autonomy is the foundation of AI, but it can quickly make the technology unmanageable. A key characteristic of AI is that the designers of a particular deep learning algorithm cannot predict all of the results that the model will produce, as it is not based on simple “if/then” computer logic. On the contrary, AI models are self-learning and thus “outgrow” what was explicitly built into them. Self-learning is the basis of the field’s advances, but it can also lead to anomalies in various domains. Data dependency is another feature of AI that can pose a serious risk to human rights. With poor-quality or incomplete data, AI can lead to undesirable and dangerous consequences; in particular, incomplete or distorted data can easily make a model discriminatory towards individuals or groups of people. Last but not least, the Directive and the Regulation Proposal point to the opacity of AI models as a serious risk. Some experts have called them “black boxes” precisely because it is impossible to trace the relationship between the source data and the results that the model suggests. The complexity of the “neural networks” that constitute the self-learning matrix for “deep learning” makes it impossible even for the creators of the network to tell by what path and logic a given outcome or solution was reached. This has implications for proving specific harms, predicting outcomes and assessing impact.
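
The data-dependency risk in particular can be shown in a few lines of code. The sketch below is a toy example with entirely synthetic data; the variable names and numbers are our assumptions. It trains a standard classifier on historical decisions that penalised one group, and the resulting model then scores two otherwise identical applicants differently.

    # Toy illustration of the "data dependency" risk: a model trained on biased
    # historical decisions reproduces the bias. All data here is synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 2000
    group = rng.integers(0, 2, n)      # 0 = majority group, 1 = minority group
    merit = rng.normal(0.0, 1.0, n)    # the only legitimate signal

    # Historical labels that penalised group 1 regardless of merit:
    label = (merit - 1.0 * group + rng.normal(0.0, 0.5, n)) > 0

    X = np.column_stack([merit, group])  # group membership leaks into the features
    model = LogisticRegression().fit(X, label)

    # Two applicants with identical merit but different group membership:
    same_merit = np.array([[0.5, 0], [0.5, 1]])
    print(model.predict_proba(same_merit)[:, 1])
    # The approval probability is markedly higher for the group-0 applicant,
    # even though nothing but group membership distinguishes the two.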

 

 

Part II. Artificial Intelligence in Migration – A Useful Security Tool or an Experiment on Human Beings?

 

The AI dangers outlined above apply with particular force to AI designed and used for border control and migration purposes. Although there are non-discrimination requirements for the use of high-risk systems, the draft Regulation has been heavily criticised for not containing adequate mechanisms to protect citizens’ economic and social rights. For example, Human Rights Watch[4] points out that automated systems for assessing applicants for social assistance pose risks in all of their phases: 1) identity verification, 2) applicant assessment, and 3) fraud and risk prevention and investigation. Similar systems are used in migration. Some of the identification and identity-verification systems used in Europe collect and process an unnecessarily large amount of personal data and are racist and discriminatory, according to HRW[5]. When identification and verification systems are used, there is a high risk of discrimination based on race or other physical characteristics, as the technologies assess photos and videos using pre-established algorithms, and the algorithms themselves can be “trained” to judge people by skin colour and facial features.
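
One way such discrimination surfaces in practice is through unequal error rates. The short sketch below shows how this is typically measured for a verification system; the match-score distributions are synthetic assumptions standing in for the output of a real face-recognition model.

    # Sketch: one global decision threshold, unequal burden across groups.
    # The score distributions are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(1)

    def false_non_match_rate(scores, threshold):
        """Share of genuine (same-person) comparisons wrongly rejected."""
        return float(np.mean(scores < threshold))

    # Assume the model yields lower genuine-match scores for group B, e.g.
    # because group B was under-represented in its training photos.
    genuine_scores_a = rng.normal(0.80, 0.08, 10_000)
    genuine_scores_b = rng.normal(0.72, 0.08, 10_000)

    threshold = 0.70  # the same operating point for everyone
    print("FNMR group A:", false_non_match_rate(genuine_scores_a, threshold))  # ~0.11
    print("FNMR group B:", false_non_match_rate(genuine_scores_b, threshold))  # ~0.40

    # The identical policy rejects roughly four times as many genuine
    # applicants from group B: equal rules, unequal outcomes.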

Next, a number of experts[6] have criticised the European institutions and member states for using artificial intelligence to automatically assess the potential danger of migrants or groups of migrants entering the EU. The UN Special Rapporteur on Racism, Racial Discrimination and Xenophobia highlights in a report that the technologies used by member states and Frontex, such as drones in the Mediterranean, tools predicting the movement of people and automated decision-making in specific migration cases, “increase racism and discrimination and may lead to further damage in an already discretionary system.”[7]

 

  1. Pilot initiatives in Migration

 

EU Member States and Frontex are already piloting the use of AI in various areas of border control and migration. Initiatives such as those listed below are of serious concern to observers and human rights experts.

AI in Migration[8]

 

Member States are piloting the use of AI in various initiatives covering:

  • Processing of work visa applications
  • ETIAS requirements for visa-free travel in the EU
  • Applications for extended and permanent residence
  • Granting of international protection
  • Improving SIS-SIRENE[9] processes for the exchange of information between Member States’ security services and between them and Europol
  • Strengthening border controls in Schengen, with the stated aim of ensuring secure and reliable border administration
  • Improving the operation of IT systems in the European Union
  • Improving EU policy-making and procedures

 

The AI tools to be used in these pilot and, later, permanent initiatives are:

  1. Chatbots and intelligent agents.
  2. Risk assessment applications.
  3. Knowledge and data management applications.
  4. Analytical tools and computer vision programs to evaluate photos, videos and real-world imagery.

Almost every initiative listed above will rely on AI-based risk-assessment tools, meaning that automated systems will assess specific individuals or groups of migrants.

 

Take the example of the International Protection Initiative[10]. It is envisaged that AI will be used in each part of the process: application, assessment and communication. The main activities to be carried out by AI are:

  1. Vulnerability analysis of the asylum seeker: to what extent is he or she in danger of persecution or torture.
  2. A chatbot used to register the protection request “where no human intervention is required”. This raises the question of who decides where human intervention is and is not required, and how.
  3. An AI model that predicts the risk a particular asylum seeker poses to the host country, based on characteristics such as country of origin, previous protection claims, age, etc. Here the risk of automated assessment is also high, both from wrong and from discriminatory decisions of the model (a toy illustration follows below).
  4. Using AI to allocate specific individuals and groups of asylum seekers to geographical regions based on their characteristics.
  5. Last but not least, an AI-based tool to find and process sensitive data.
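
To see why activity 3 is so contentious, consider the toy score below. The weights, country labels and field names are entirely invented by us; no real system’s parameters are public. The point is structural: once country of origin is an input, the score is dominated by where a person comes from rather than by anything the person has done.

    # Hypothetical sketch of a migrant "risk score". All weights are invented.
    from dataclasses import dataclass

    @dataclass
    class Applicant:
        country_of_origin: str
        previous_claims: int
        age: int

    # Invented weights; a model trained on biased outcomes could learn this shape.
    COUNTRY_WEIGHT = {"A": 0.05, "B": 0.60}  # hypothetical countries

    def risk_score(app: Applicant) -> float:
        """Toy linear score in [0, 1]; higher = flagged for extra scrutiny."""
        score = COUNTRY_WEIGHT.get(app.country_of_origin, 0.30)
        score += 0.05 * min(app.previous_claims, 4)
        score += 0.02 if app.age < 25 else 0.0
        return min(score, 1.0)

    # Two applicants identical in every respect except country of origin:
    print(risk_score(Applicant("A", previous_claims=1, age=30)))  # 0.10
    print(risk_score(Applicant("B", previous_claims=1, age=30)))  # 0.65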

The stated aim of this initiative and its activities is to make the process of granting protection “data-led”, which would allegedly increase the transparency of the process. Despite these stated aims, each of the actions and tools mentioned leaves the fate of hundreds of thousands of people seeking protection in the hands of artificial intelligence. There are serious risks that the AI models could be fed incorrect or incomplete data, or that the models themselves could produce inaccurate results. This risk is pointed out by the Deloitte analysts commissioned to assess the impact of the proposed uses of AI in migration processes.[11]

 

  2. Risks and criticisms

 

A number of expert bodies and human rights organisations in Europe have criticised the approach of the EU and its Member States to using AI in migration processes and in protecting the Union’s borders. The Regulation is seen as insufficient to guarantee the rights, freedoms and dignity of migrants entering Europe. More than 160 organisations have signed a joint statement[12] which concludes that the Regulation does not offer adequate protection against the risks associated with the use of AI (“the EU AI Act does not adequately address and prevent the harms stemming from the use of AI in the migration context.”).

 

Some of the AI systems used in migration inherently pose an “unacceptable risk” and should be banned altogether, because their risk cannot be mitigated by technological or legal improvements. These include predictive analytics tools that aim to prohibit or restrict migration: typically used by third-party “gatekeepers”, they aim to predict which regions can expect “irregular migration” in order to prevent the movement of large groups of people. Another tool that carries an unacceptable risk is the risk assessment and profiling of individual migrants. These tools are designed to predict whether a person would pose a danger to the EU, or how likely an illegal activity is. Particularly dangerous, too, are AI systems used for “emotion recognition”, such as those that assess whether a person is lying or telling the truth: “automated lie detectors” based on an emotional assessment made by the AI. Other systems used in border and migration control, even if not banned, should be categorised as high risk, such as border surveillance systems and biometric processing and assessment systems.

 

Conclusion

The European Commission and the Member States need to include in the Regulation specific guarantees that AI with “unacceptable risk” will not be used in migration, and that systems which are in fact “high risk” will be properly categorised and subject to enhanced monitoring. In this regard, 1) systems to predict and stop “irregular migration flows”, 2) systems to individually profile and assess migrants, 3) automated “lie detectors”, and 4) AI systems that remotely collect and assess biometric data in public places should be banned. For other systems that pose a potential threat to migrants’ rights and freedoms, higher requirements for transparency and institutional control should be laid down by including them in the list of “high-risk” systems: for example, systems for border surveillance by drones and facial-recognition cameras, systems for the collection and processing of migrants’ biometric data, and systems for the assessment of evidence in migration procedures. Last but not least, the Regulation should include adequate administrative mechanisms enabling migrants affected by AI to challenge the use of specific systems that violate their human rights and freedoms.

 

Prepared by Lyubomir Avdzhiyski, advocacy expert

[1] Proposal for a Regulation laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act), 2021, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52021PC0206

[2] Coordinated Plan for “Developing a European Approach to Artificial Intelligence”, 2021, https://digital-strategy.ec.europa.eu/en/library/coordinated-plan-artificial-intelligence-2021-review

[3] AI Liability Directive, 2022, https://commission.europa.eu/system/files/2022-09/1_1_197605_prop_dir_ai_en.pdf

[4] How the EU’s Flawed Artificial Intelligence Regulation Endangers the Social Safety Net: Questions and Answers, 2021, https://www.hrw.org/news/2021/11/10/how-eus-flawed-artificial-intelligence-regulation-endangers-social-safety-net#_how_are_governments

[5] Ibid.

[6] Technological Testing Grounds: Border tech is experimenting with people’s lives, 2020, https://edri.org/our-work/technological-testing-grounds-border-tech-is-experimenting-with-peoples-lives/

[7] Ibid.

[8] European Commission, Directorate-General for Migration and Home Affairs, Opportunities and challenges for the use of artificial intelligence in border control, migration and security, Volume 1: Main report, Publications Office, 2020, https://data.europa.eu/doi/10.2837/923610

[9] SIRENE cooperation, explanation, https://home-affairs.ec.europa.eu/policies/schengen-borders-and-visa/schengen-information-system/sirene-cooperation_en

[10] See note 8, p. 24, https://op.europa.eu/en/publication-detail/-/publication/c8823cd1-a152-11ea-9d2d-01aa75ed71a1/language-en

[11] See note 8, https://op.europa.eu/en/publication-detail/-/publication/c8823cd1-a152-11ea-9d2d-01aa75ed71a1/language-en

[12] Joint statement: The EU Artificial Intelligence Act must protect people on the move, 2022, https://www.statewatch.org/news/2022/december/joint-statement-the-eu-artifical-intelligence-act-must-protect-people-on-the-move/