Exploring Asylum Decision Making Through Machine Learning
Asta Sofie S. Jarlner (Centre of Excellence for Global Mobility Law, Copenhagen)
A GenSeM Migration Dialogue co-organised with the Sussex Centre for Migration Research (SCMR), University of Sussex
Wed Feb 26, 1-2pm (GMT)
University of Sussex, Global Studies Resource Centre
Online. Register: https://scmr_gensem2.eventbrite.co.uk
It is a fundamental principle of international refugee law that there should be a single and universal standard for access to refugee protection, and that similar cases should consequently yield the same outcome. However, multiple studies have found large variations in the outcomes of refugee status determination (RSD) that cannot be explained by the material facts of a claim for international protection.
The importance of credibility assessment, the existence of bias, and variations between decision-makers are well established in scholarship. Yet, moving beyond individual cases to general patterns, little is known about what structurally informs credibility assessment.
At a time when more signatory states to the Refugee Convention are experimenting with AI-driven tools to support their asylum management, understanding the structural interplay between bias and credibility assessment in historical practice is more pressing than ever. Failure to address bias in human-led RSD risks embedding and exacerbating discriminatory patterns if RSD becomes AI-assisted or AI-driven.
Utilizing a dataset of 15,000 full-text cases from the Danish Refugee Appeals Board, this project aims to computationally identify the traits that predict when decision-makers are likely to question a claimant's credibility. I extract features that both the literature and soft law have identified as central to the RSD process. These features encompass material facts relevant to the case, characteristics related to the claimant, and factors pertaining to the interview setting. Finally, I extract the credibility assessment made by the Refugee Appeals Board as a binary variable, indicating whether issues with credibility were noted in the interview summary (1) or not (0).
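The binary outcome described above can be illustrated with a minimal sketch. The phrases and the `credibility_label` function below are hypothetical assumptions for illustration only; the actual markers used by the project would be extracted from the Danish-language case texts and are not specified in this announcement.

```python
import re

# Hypothetical English stand-ins for phrases signalling credibility doubts;
# the real project would work with Danish phrasing from the Board's summaries.
CREDIBILITY_MARKERS = [
    r"divergent statements",
    r"not found credible",
    r"cannot be accepted as fact",
]
PATTERN = re.compile("|".join(CREDIBILITY_MARKERS), re.IGNORECASE)

def credibility_label(summary: str) -> int:
    """Return 1 if the interview summary notes credibility issues, else 0."""
    return int(bool(PATTERN.search(summary)))

print(credibility_label("The Board notes divergent statements about the route."))
print(credibility_label("The Board accepts the claimant's account as fact."))
```

A label produced this way could then serve as the dependent variable in a classifier over the case, claimant, and interview-setting features the abstract mentions.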
What distinguishes this study from previous computational analyses is its primary focus on predicting the credibility assessment as the outcome. This model enables a detailed examination of how different feature combinations affect decision-makers' reception of a testimony and other evidence, providing a more nuanced analysis of existing human biases within the RSD process.
BIO: Asta Sofie S. Jarlner (she/her) is a PhD Fellow at the Centre of Excellence for Global Mobility Law, University of Copenhagen. She holds a BA in Political Science from Freie Universität Berlin and an MSc in Social Data Science from the University of Copenhagen. As part of the Nordic Asylum Law and Data Lab and the project Algorithmic Fairness for Asylum Seekers and Refugees, she mainly works with computational methods to identify how features in interaction may predict whether a claimant is perceived as credible, paying special attention to how the Danish Refugee Appeals Board determines risk group affiliation for identity-based claims. Her overall research aims to shed light on human bias in decision-making at a time of increasing automation, to avoid embedding and exacerbating marginalising patterns.