- Symposium
- Advanced Digital Technologies in Migration Management: Data Protection and Fundamental Rights Concerns
AI and Asylum in the EU Legal Framework
A Liaison Dangereuse?
In recent years, the European Union’s asylum system has seen the gradual integration of AI tools aimed at streamlining certain processes and optimizing workflows. Since the stakes in asylum cases are high and involve fundamental rights, careful scrutiny of AI’s role in this highly sensitive area is necessary.
In response to the proliferation of AI, the European Union (EU) adopted its first legally binding AI regulation, the EU AI Act. The Act seeks to ensure that AI respects fundamental rights while promoting innovation through a risk-based approach. However, the Act leaves key questions unresolved and contains exceptions that may undermine the protection of fundamental rights in the asylum context.
This blog post argues that the use of AI in asylum procedures, particularly in phases like credibility assessment, can disrupt the delicate procedural balance that has been carefully constructed through legal frameworks. Furthermore, it questions whether the EU AI Act, with its inherent flexibility and discretionary allowances, is the best regulatory tool to safeguard the fundamental rights of asylum seekers. In this way, the post highlights the critical concerns raised by AI’s involvement in asylum procedures and offers preliminary considerations on the effectiveness of the safeguards provided by the EU AI Act.
AI in EU Asylum Procedures: Enhancing or Disrupting the Procedural Balance?
The asylum procedure in the EU is complex and grounded in international law and European legal frameworks, including the EU’s constitutional law. These frameworks create a system that seeks to balance the thorough evaluation of asylum claims against the vulnerability of applicants, many of whom lack documentary evidence to support their claims.
The asylum process comprises several key stages: registering the application, conducting interviews to assess the claim’s grounds, assessing available evidence, and issuing a decision. At each stage, specific legal principles are designed to create a fair and thorough process where the position of the applicant is balanced against the need for accurate decision-making. For example, the shared burden of proof between applicants and authorities (Article 4(1) of Directive 2011/95/EU) ensures that decision-makers consider the inherent difficulties faced by asylum seekers. Similarly, the principle of the “benefit of the doubt” allows for flexibility when applicants cannot provide adequate documentation.
Central to this process is the interview phase, where applicants provide their personal testimonies regarding their need for international protection. Credibility assessments during this stage are critical, as asylum claims often rely heavily on personal accounts due to the lack of physical evidence. If the applicant’s statements are deemed credible and aligned with the legal criteria for international protection, they may be granted refugee status or other forms of protection. As such, the assessment of credibility is a delicate and highly important aspect of the asylum process.
Despite this complexity, AI tools are increasingly being used to “enhance” specific phases of the asylum process in several EU Member States. For instance, in Germany, a Dialect Identification Assistance System (DIAS) has been in operation since 2017. DIAS analyzes phonetic patterns in applicants’ speech to provide a probabilistic determination of their country or region of origin. This tool is used to supplement other evidence in the asylum process, particularly when applicants lack identification documents. The German authorities argue that DIAS helps identify fraudulent narratives, optimizes decision-making, and facilitates the return of rejected asylum seekers.
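To make concrete what a “probabilistic determination” means in practice, the following minimal Python sketch mimics the shape of such a system’s output. It is purely illustrative: the region labels, scores, and confidence threshold are all invented, and real systems like DIAS rely on trained acoustic models rather than hand-assigned scores.

```python
import math

# Purely illustrative toy "dialect classifier" in the spirit of systems like
# DIAS. Real systems use trained acoustic models; the region labels, scores,
# and threshold below are invented for demonstration.

def softmax(scores: dict[str, float]) -> dict[str, float]:
    """Turn raw per-region scores into a probability distribution."""
    exps = {region: math.exp(s) for region, s in scores.items()}
    total = sum(exps.values())
    return {region: e / total for region, e in exps.items()}

def classify(scores: dict[str, float], threshold: float = 0.8):
    """Return the most likely region only if it clears a confidence threshold."""
    probs = softmax(scores)
    best = max(probs, key=probs.get)
    return (best if probs[best] >= threshold else None), probs

# Invented scores standing in for the output of an acoustic model.
region, probs = classify({"Region A": 2.1, "Region B": 1.4, "Region C": 0.3})
print(region)  # None: no region clears the 0.8 threshold
print(probs)   # roughly {'Region A': 0.60, 'Region B': 0.30, 'Region C': 0.10}
```

The point the sketch makes is structural: whatever the threshold, the output is a distribution over possibilities rather than a finding of fact, and any cut-off leaves a residual error rate that the rest of the procedure inherits.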
While AI tools like DIAS offer practical benefits, their deployment in such sensitive phases of the asylum process raises significant concerns. The AI-generated analysis of dialects contributes directly to the credibility assessment, which can have profound implications for the outcome of the asylum claim. If authorities rely too heavily on AI outputs, they may undervalue other evidence or ignore the “benefit of the doubt” principle that is essential in cases where documentary proof is lacking.
Additionally, the use of AI in this context can undermine the procedural balance of the asylum system. Even without altering existing laws, AI tools like DIAS can create outcomes that shift the decision-making process toward a more data-driven, and possibly less human-centered, approach. This shift is concerning because AI systems, while advanced, are not infallible. Errors in data analysis or interpretation could result in incorrect conclusions about an applicant’s origin, leading to unfair decisions with potentially life-threatening consequences for the individual concerned.
Given the impact of AI on these delicate procedures, it is imperative to ensure that AI tools deployed in asylum cases meet the highest standards of accuracy and fairness. This is where the EU AI Act comes into play, offering an opportunity to regulate the use of AI in such high-stakes environments.
To Be High-Risk, or Not to Be? Classifying AI Systems in Asylum under the EU AI Act
The EU AI Act, adopted in 2024, represents a major step forward in the regulation of AI within the EU, a field that had previously been left largely unregulated. The Act employs a risk-based approach, classifying AI systems into four categories based on their potential for harm: unacceptable risk, high risk, limited risk, and minimal risk. AI systems posing unacceptable risk are outright prohibited, while high-risk systems, which include those used in asylum, migration, and border management (Annex III, point 7), are subject to strict regulatory requirements. These include mandatory risk management processes, transparency measures, human oversight, and fundamental rights impact assessments.
However, the EU AI Act contains several exceptions and loopholes that could limit the effectiveness of these safeguards. They operate at two levels: first, in determining whether a system is classified as high-risk at all, and second, in the derogations that apply specifically to asylum matters.
In fact, Article 6(3) specifies that an AI system listed in Annex III shall not be considered high-risk where it does not pose a significant risk of harm to the health, safety, or fundamental rights of natural persons, including by not materially influencing the outcome of decision-making. This derogation applies where any of the four conditions laid down in that provision is fulfilled, for instance where the system only performs a narrow procedural task or a task preparatory to the actual assessment, although systems that perform profiling of natural persons always remain high-risk. When deploying an AI system in the asylum sector, it will therefore be crucial to determine whether the system falls under the high-risk classification and, if so, what follows from it. In doing so, the European Commission’s guidelines, to be provided no later than 18 months after the Regulation’s entry into force and after consultation of the European Artificial Intelligence Board, will be an essential point of reference.
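Read as decision logic, Article 6(2) and (3) can be summarized in a few lines of code. The sketch below is an interpretive aid under the reading just given, not a compliance tool: the field names are ours, and the product-safety route to high-risk status under Article 6(1) is deliberately left out.

```python
from dataclasses import dataclass

# Interpretive sketch of the Article 6(2)-(3) classification logic as read
# above. Field names are ours; Article 6(1) product-safety cases are omitted.

@dataclass
class AISystem:
    listed_in_annex_iii: bool        # e.g. asylum, migration, border management (point 7)
    performs_profiling: bool         # profiling of natural persons
    narrow_procedural_task: bool     # Article 6(3)(a)
    improves_prior_human_work: bool  # Article 6(3)(b)
    detects_decision_patterns: bool  # Article 6(3)(c)
    preparatory_task_only: bool      # Article 6(3)(d)

def is_high_risk(s: AISystem) -> bool:
    if not s.listed_in_annex_iii:
        return False
    if s.performs_profiling:
        return True  # profiling always remains high-risk (Art. 6(3), last subparagraph)
    derogations = (s.narrow_procedural_task, s.improves_prior_human_work,
                   s.detects_decision_patterns, s.preparatory_task_only)
    return not any(derogations)  # any single condition defeats the presumption

# A DIAS-like tool framed by its deployer as merely "preparatory":
dias_like = AISystem(listed_in_annex_iii=True, performs_profiling=False,
                     narrow_procedural_task=False, improves_prior_human_work=False,
                     detects_decision_patterns=False, preparatory_task_only=True)
print(is_high_risk(dias_like))  # False: the system escapes the high-risk regime
```

The sketch makes the loophole tangible: framing a DIAS-like tool as merely preparatory to the actual assessment can defeat the high-risk presumption, and with it every obligation that classification would trigger.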
Moreover, the EU AI Act introduces exceptions to the strict rules for high-risk AI systems, particularly in areas like asylum, migration, and border management. Normally, providers and public authorities using high-risk systems must register them in an EU database to ensure transparency. This database is meant to be publicly accessible and easily understandable. However, for AI systems used in law enforcement, migration, asylum, and border control, the registration must be made in a secure, non-public section, accessible only to the European Commission and the relevant national authorities (Article 49(4) EU AI Act). This restricts transparency, with no clear justification for the limitation.
Additionally, high-risk AI systems in the asylum field benefit from relaxed human oversight requirements (Article 14 EU AI Act), with flexibility depending on the system’s context and level of autonomy. In asylum cases, where the stakes are exceptionally high, loosening human oversight could lead to situations where AI tools operate with minimal human intervention, increasing the risk that errors or bias go unchecked.
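To make the structural stakes visible, the following hypothetical pipeline sketch contrasts a configuration in which a caseworker must confirm the AI output with one in which the output flows directly into the case file. Every name and value in it is invented for illustration.

```python
from typing import Callable, Optional

# Hypothetical pipeline sketch; all names and values are invented.

def ai_assessment(case_id: str) -> tuple[str, float]:
    """Stand-in for a model output: (recommended finding, confidence)."""
    return "stated origin inconsistent with dialect", 0.74

def with_human_gate(case_id: str,
                    review: Callable[[str, float], Optional[str]]) -> str:
    finding, confidence = ai_assessment(case_id)
    # A caseworker weighs the output against the rest of the file and may
    # discard it, e.g. under the benefit-of-the-doubt principle.
    confirmed = review(finding, confidence)
    return confirmed if confirmed is not None else "AI output set aside"

def without_human_gate(case_id: str) -> str:
    finding, _ = ai_assessment(case_id)
    return finding  # a probabilistic output becomes a de facto finding of fact

# With oversight, a cautious reviewer rejects low-confidence findings outright.
cautious = lambda finding, conf: finding if conf >= 0.95 else None
print(with_human_gate("case-001", cautious))  # "AI output set aside"
print(without_human_gate("case-001"))         # the 0.74-confidence finding stands
```

The contrast is the whole point: nothing in the model changes between the two configurations; only the procedural safeguard around it does.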
This creates a troubling scenario in which AI tools like DIAS, if classified as high-risk systems, could continue to operate without the highest standards of accountability and transparency, potentially affecting the rights of asylum seekers. For applicants, the use of AI adds a further layer of complexity to an already complicated process, raising concerns about fairness and, more generally, about fundamental rights. If applicants are not fully informed about how AI systems influence decisions, they may find it difficult to challenge those decisions, further weakening their position within the asylum procedure and potentially undermining the right to an effective remedy and the right to asylum itself.
Concluding Remarks
The integration of AI into EU asylum procedures is a double-edged sword. On the one hand, AI offers opportunities to streamline the asylum process and improve efficiency in managing large caseloads. On the other hand, the use of AI in sensitive phases like credibility assessments risks undermining the procedural balance designed to protect vulnerable individuals.
The EU AI Act is a significant regulatory step forward, but its application to asylum procedures remains fraught with uncertainty. The Act’s exceptions and discretionary allowances may weaken the protection it offers, particularly concerning transparency and human oversight.
While the EU AI Act marks progress in regulating AI, its effectiveness in the asylum context will ultimately depend on how well these safeguards are enforced. Striking a balance between technological innovation and the protection of human rights is essential in ensuring that AI contributes positively to asylum procedures without compromising the fundamental rights of those seeking refuge in Europe.
