The increasing adoption of biometric technology by governments, aid organizations, and other stakeholders in the Middle East has critical implications for regional developments in business, governance, and society. While some observers have noted the potential for such tools to streamline security infrastructure and provide opportunities for sectors as diverse as mobile payments and financial security, a growing chorus of voices has raised concern about the potential of biometric data to similarly streamline violations of human rights, particularly those of the region’s most vulnerable populations. The use of biometrics in the field of refugee aid, in particular, will continue to set precedents for the digital rights of vulnerable populations in the region and for human rights more broadly. Biometric data includes iris scans, fingerprints, and facial features, and is distinct from other forms of identification due to its uniqueness and immutability. While biometric technology has been utilized by the United Nations High Commissioner for Refugees (UNHCR) in individual locations since the early 2000s, it has now proliferated in humanitarian settings, particularly among actors and agencies that work with refugees and asylum seekers.
Risk or resource?
As of January 2019, the UNHCR has used biometric identity verification to register and distribute aid to over 7 million refugees and asylum seekers in 60 countries. Some have claimed that advancements in the use of biometrics have enabled aid organizations to better serve refugees by improving efficiency, accuracy, and anti-fraud measures. There are serious concerns, however, about how biometric data is stored and who has access to such sensitive information. The actions of aid organizations are critical to regional human rights issues because they interact with numerous vulnerable populations, many of whom lack both adequate access to information about how their sensitive data is used or shared and legal recourse in the event of its misuse. Humanitarian agencies’ policies regarding technology will therefore set an important precedent for the treatment and privacy of vulnerable populations like refugees, particularly when such populations interact with or cross international borders.
Since the advent of biometric data use in the field of refugee aid, the UNHCR has amassed detailed information about the populations it serves, sometimes including movement data gathered when refugees receive food and cash assistance at various locations. Such information can help the UNHCR execute its protection mandate, but the threat of data breaches poses significant risks. In 2016, for example, a third party was able to access the personal information of over 8,000 families receiving humanitarian assistance in West Africa through a Red Rose software platform. The incident is a reminder that the data stored by the UNHCR has significant value and is at risk of exploitation.
In addition to threats from outside actors looking to exploit or profit from refugees, biometric data may be sought by governments under the guise of protecting national security. The European Commission’s EURODAC database, for example, regularly collects information on hundreds of thousands of asylum seekers entering the EU, documenting fingerprints, gender, and other identifying information. Although initially protected from third-party actors, in 2015 the information became accessible to Europol, among other agencies. The UNHCR faces a similar risk of data meant to help vulnerable populations being used for law enforcement purposes. In 2014, Lebanon requested access to the UNHCR’s biometric database, claiming that “Any country in the world has ownership of data being collected on its territories.” The request underscored not only the vulnerability of refugees and their lack of control over their own biometrics, but also larger questions about potential clashes between national sovereignty and international control of humanitarian data and operations. In the case of Lebanon’s request, Syrian refugees reported concern about their personal information reaching the Syrian government, with some stating that they planned to refuse iris scans, even if it meant forfeiting food and cash aid from the UNHCR and other agencies.
Real fears accompany the use of biometric data because, in the hands of host governments, the information could result in criminal scrutiny, persecution, or even a threat to life. In a move reflecting these risks, Rohingya refugees staged a three-day work strike in 2018 over concerns about the use of their biometric data. Many feared that the UNHCR would share their information with the Myanmar government, further endangering their lives. The UNHCR attempted to assuage their fears, assuring refugees that the data was used only to distribute services in Bangladesh, but mistrust over the use of data remains. In 2019, the World Food Programme (WFP) partially suspended aid to Yemen due to a disagreement with the Houthis over the control of biometric data, leaving 850,000 Yemenis without critical support. The Houthis opposed the collection of the biometric data, claiming that it was illegal for the WFP to control it.
Taking precautions
When not handled with care and sensitivity, the use of biometric data can further exacerbate the dangers faced by refugees; with proper privacy and protection measures in place, however, it could be an empowering tool, offering a documented identity to vulnerable populations. First and foremost, it is essential that the UNHCR provide publicly available policies on how data is used and which third parties can access sensitive information. The 2015 “Policy on the Protection of Personal Data of Persons of Concern to UNHCR” was a good first step, augmented by the 2018 “Guidance on the Protection of Personal Data of Persons of Concern to UNHCR.” The UNHCR should ensure these policies are available in local languages and in paper form. Furthermore, governments and actors that work with refugees and asylum seekers must collaborate with data privacy specialists and technologists to set clear and secure data privacy standards for all operational transfers of data.
Informed consent is included in the UNHCR’s biometrics policy, but further examination is required to ensure that refugees are truly able to exercise choice. Refugees who do not share biometric information are not eligible for aid from the UNHCR, leaving them in a precarious position. Alternative options should be investigated and, if feasible, implemented for those who have personal objections to the use of biometrics.
As the collection and application of biometric data continues to grow, aid agencies must acknowledge the inherent privacy risks of this practice and engage in it only when there are clear and measurable benefits to refugees. Aid organizations should prioritize the well-being of their beneficiaries over the desires of third-party actors. Therefore, all actions involving biometric data must be grounded in a “do-no-harm” ethic and actively promote the welfare of refugees. The policies of humanitarian institutions reverberate beyond humanitarian settings and will influence technology practices across the Middle East. By ensuring that responsible policies are in place, aid organizations set an important precedent that will prevent future breaches of privacy and violations of human rights by other humanitarian and government actors.
Madelyn Johnson is a junior at Florida State University studying Middle Eastern Studies and International Affairs. She primarily studies issues relating to migration and refugees. She is currently supporting MEI's Cyber Program as an intern and has previously interned at the International Rescue Committee and Al Hadaf, a human rights organization in Amman, Jordan. Eliza Campbell is the Co-Director of MEI's Cyber Program, and a researcher in technology and human rights at the Center for Contemporary Arab Studies at Georgetown University. The views expressed in this piece are their own.
Photo by ALI MUKARREM GARIP/Anadolu Agency via Getty Images
The Middle East Institute (MEI) is an independent, non-partisan, not-for-profit, educational organization. It does not engage in advocacy and its scholars’ opinions are their own. MEI welcomes financial donations, but retains sole editorial control over its work and its publications reflect only the authors’ views. For a listing of MEI donors, please click here.