Dr. Sophie Nakueira holds a Master of Laws in Commercial Law and a PhD in Law from the University of Cape Town. She is a visiting fellow at the Chair for Human Understanding of Algorithms and Machines at the Research Centre for Trustworthy Data Science and Security at the University of Duisburg-Essen.
In the project “Vulnerability and Technology,” Nakueira’s research will explore the impact of AI and emerging technologies on vulnerable populations in humanitarian contexts. Building on her previous research on vulnerability and humanitarian governance in Africa, the project will explore the following topics:
1. Trust and Algorithmic Decision-Making
What are the opportunities and risks of AI-assisted assessments in asylum/resettlement claims, and what are the preferences of claimants and refugee status determination officers? What are the trade-offs of using machine learning algorithms? How do refugees understand eligibility assessments made by AI systems, and how does this shape perceptions of fairness? Specifically, we want to understand whether refugees perceive AI tools as fairer than humans, and why.
2. AI Safety in Humanitarian Contexts
What does AI safety mean in humanitarian contexts? How do we ensure AI safety in fragile humanitarian settings? The aim is to unpack the social dimension of ‘safety’ by examining the benefits and harms for vulnerable populations, as well as the opportunities and trade-offs of deploying technologies such as surveillance tools in such contexts.
3. Accountability for AI/Technological Harms
Different actors are involved in developing, procuring, and using AI systems and other biometric tools. This raises complex legal questions with respect to where and against whom a legal claim can be brought. The aim is to map the diverse state and non-state actors involved in the supply chain and use of AI systems and other emerging technologies.