In an era dominated by digital advancements, biometric technology has emerged as a cornerstone of security measures in various sectors, ranging from online banking services to governmental identification systems. The integration of fingerprint scanning and facial recognition has become commonplace, offering users a seamless and efficient means of authentication. However, beneath the surface of this convenience lies a complex web of security concerns and ethical implications. While biometrics may promise enhanced security, the collection and storage of sensitive personal data raise significant privacy issues. Moreover, the proliferation of biometric databases, coupled with the utilization of artificial intelligence, introduces new challenges regarding governance, accountability, and the potential for discriminatory practices. As countries around the world adopt biometric identification systems, it is imperative to critically examine the broader societal impacts and address the inherent risks associated with these technologies.
Many countries, including “Argentina, Belgium, Colombia, …, and Spain”, have established national identification databases that use varying degrees of artificial intelligence to store biometric data. The prospect of governments, and particularly authoritarian regimes, possessing detailed personal information on every citizen underscores the potential dangers inherent in such systems. “Historically, national ID systems have been used to discriminate against people on the basis of race, ethnicity, religion, and political affiliation”. There are legitimate concerns about racial bias in these technological advancements.
The development and implementation of AI and biometric tools involve collaborative efforts among varied stakeholders. However, a lack of diversity in these endeavours can lead to inherent flaws in their practical application. While the facial recognition databases used by law enforcement agencies may aid in convicting criminals, their surveillance of minority communities can be racially motivated. In the United States, for example, the government has previously utilised biometric data to target African Americans for surveillance. Such actions, as highlighted by the American Civil Liberties Union (ACLU), perpetuate harmful stereotypes and stigmatize Black Americans, often labelling them as potential extremists, while failing to adequately condemn white supremacists. This disparity underscores systemic racism, which risks being exacerbated by novel and emerging technologies.
Ana Brandusescu is a doctoral candidate at McGill University studying “the scale of public governance and AI” and previously worked as a senior research and tech policy consultant. To better understand this topic, I discussed AI and biometric data policies with her; her expertise helped shed light on critical issues and gaps in current policy frameworks. In the Canadian context, we discussed Bill C-27 and the shortcomings of the policy frameworks established to protect marginalised communities. Ana emphasised the importance of ‘collective rights’. She highlighted that “Privacy legislation has always focused on the individual and that is a problem within itself as it doesn’t address collective privacy … a lot of prevention of discrimination could be done at a collective privacy level”.
Beyond shifting the focus from individuals to communities, governments also need to engage with marginalised and discriminated-against groups. The Assembly of First Nations submitted a brief to the parliamentary Standing Committee on Industry and Technology highlighting the infringements and abuses of human rights that First Nations have faced, and emphasising the importance of cooperation between First Nations groups and policymakers in the legislative process. Data sovereignty is also a key concern, as “First Nations people have the right to determine and make decisions regarding the circumstances in which information is collected about them and how this information is used and shared.” The issue of the misuse of AI by governments and law enforcement is complex. Still, tackling it requires an acknowledgment of deep-rooted systemic injustices, as well as a cultural shift in how society perceives and treats Indigenous people.
The integration and development of new AI technologies exacerbate the risk of racial profiling, as well as the profiling of other minority groups. Given present systemic biases against many minority groups, there are serious concerns regarding abuses of power enabled by generative AI technologies. On a societal level, there needs to be greater representation and inclusion of minority and Indigenous groups, as well as increased literacy around AI, data, and security, so that the general public can understand the scope of the issue and how it affects them. Transparency and accountability must be priorities in governance.
Edited by Susana Baquero Salah
Anvita Dattatreya is in her third year at McGill University, currently pursuing a B.A. double majoring in Economics and International Development Studies. As a staff writer at Catalyst, she hopes to write articles on topics including socio-economic and political issues in South-East Asia.