Alexa or Alex?: The Gendered Reality of A.I.

We have all dreamed of having our lights turned on or off at the snap of a finger, or of listening to our favourite song while drinking our morning coffee without fussing with our phones to find Britney Spears’ “Oops!… I Did It Again”. These dreams have come true thanks to Artificial Intelligence.

Artificial Intelligence has been in the spotlight for decades, projecting a utopian view of our future as a global human society cohabiting with intelligent forms of technology that assist us in our daily endeavours. Indeed, AI can be defined as a type of technology that “leverages computers and machines to mimic the problem-solving and decision-making capabilities of the human mind.” It is revolutionary in that it extends and augments the human mind, striving to think and act rationally.

We have all heard of Amazon’s Alexa, Apple’s Siri, or even Sophia the Robot (now a citizen of Saudi Arabia!) – three of the most famous AI systems developed in the last decade. What is striking, however, is that these famous AI systems are all gendered – and, more particularly, all women by default. This piece aims to uncover why this is so, and what it can tell us about ingrained societal gender roles through the lens of AI machine learning.

AI is a revolutionary technology, ergonomically designed to facilitate our lives and advance our society in various ways. It is, however, trained in ways that reinforce certain biases present in our world – humans are intrinsically biased, which leads to biased data. Most of the AI systems we use today rely on ‘non-deep’ machine learning, which depends more heavily on human intervention than ‘deep’ learning systems do. According to IBM, in non-deep machine learning, “human experts determine the set of features to understand the differences between data inputs, usually requiring more structured data to learn.” One of the fundamental pillars of AI training, then, is the quality of the data and how it is organized within the training framework.
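To make that human role concrete, here is a minimal sketch of ‘non-deep’ machine learning in Python. The task, the hand-picked features, and the labels below are all hypothetical, invented purely to show that engineers decide what the model is allowed to “see”:

```python
# A minimal, hypothetical sketch of 'non-deep' machine learning:
# a human expert decides which features describe each voice command.
from sklearn.linear_model import LogisticRegression

# Hand-crafted features per command: [word_count, starts_with_question_word,
# politeness_score] -- choices made by people, not learned by the machine.
X = [
    [3, 1, 0.9],
    [5, 0, 0.2],
    [4, 1, 0.7],
    [6, 0, 0.1],
]
y = [1, 0, 1, 0]  # 1 = "polite request", 0 = "blunt command" (human-chosen labels)

model = LogisticRegression().fit(X, y)
print(model.predict([[4, 1, 0.8]]))  # the model only "sees" what we encoded
```

Every line of this sketch – which features to measure, how to label each example – is a human judgment call, and each one is a place where bias can slip in.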

As Charly Walther explains, “If the data is messy or inaccurately labelled, the model will learn incorrectly and the whole project could be jeopardized.” This step determines the AI system’s whole essence: its future behaviour, the functioning of its central algorithm, and its overall relationship with its (human) users. After the data is properly selected, it goes through three distinct phases: training, validation, and testing. During this process, the algorithm is perfected, tweaked, and adjusted to specific variables over many attempts. It is also carefully monitored by AI engineers – all humans who are inevitably biased by their upbringing, the society they live in, and the norms and values they have personally internalized. It is interesting to note that while AI technology has been hailed as a reflection of human progress and innovation, it carries a certain subjectivity rooted in the socialized behaviour of those creating machine learning systems. If technology is the extension of human ingenuity, and the human mind is socialized into certain biased norms, then it can be argued that AI technology is also biased.
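Those three phases can be sketched in a few lines of Python. This is only an illustration under assumed conventions – the dataset is synthetic, and the 80/20 and 75/25 split ratios are common defaults rather than anything prescribed:

```python
# A hedged sketch of the three phases: training, validation, and testing.
# The data here is synthetic; real projects inherit every bias in their data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))           # 1,000 examples, 5 features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # synthetic labels

# Hold out a test set first, then split the remainder into train/validation.
X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_rest, y_rest, test_size=0.25, random_state=0)

model = LogisticRegression().fit(X_train, y_train)        # 1. training
print("validation accuracy:", model.score(X_val, y_val))  # 2. tweak/adjust here
print("test accuracy:", model.score(X_test, y_test))      # 3. final, untouched check
```

Note that nothing in this pipeline questions the data itself: if the examples or labels encode a prejudice, the model will faithfully score well on it at every phase.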

It is therefore important to understand that the root cause of gender bias lies in how AI is trained, particularly in the subjective data given to AI systems. Why is Alexa a woman by default? Why don’t we have John, the personal voice assistant? Or “Salt”, the gender-neutral robot? Why do AI systems have to be gendered at all?

It is thus interesting to analyze how AI perpetuates gender roles in real life, and how this impacts society by implicitly reinforcing the patriarchy and giving us a “gendered fantasy of women occupying subservient roles”. 

The latter is especially visible in Siri’s response of “I’d blush if I could” to words with sexual undertones – which inspired UNESCO to publish a paper urging tech companies to stop choosing women’s voices by default, stating that “the assistant’s [Siri’s] submissiveness in the face of gender abuse remains unchanged since the technology’s wide release in 2011.” Moreover, Zoe Bachman, in her project Tendernet, writes: “If a voice assistant ignores when people are using harassing language, it does nothing to move the conversation surrounding violence against women forward.” It is important to note that online and offline are not two separate entities; each influences the other. Thus, the preservation of oppressive gender biases online has real “material consequences” in our societies.

Such encounters with AI voice assistants instill an image of women as society’s default caregivers – subjugated to men and intellectually incapable of taking initiative without a man’s orders. Zoe Bachman states: “It’s not an accident that our technologies that function effectively as “caregivers” or “secretaries” are designed to have female voices – design and software have historically served existing power structures.” As such, a woman’s voice becomes the “logical” default in AI voice systems, since women are sexualized within a capitalist framework. A product catering to a large majority of society (including men) that presents a woman as a subservient assistant will thus sell more and generate greater profit.

As seen above, AI training is the critical stage at which biases of all sorts either take hold or are weeded out. If AI is biased, it is because the engineers who developed the machine learning system are biased – which in turn shows that gender biases and gender roles are socialized and “taught” rather than intrinsically biological.
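The point that bias is “taught” can be demonstrated with a deliberately skewed toy dataset. Everything below is invented for illustration – the sentences mirror the stereotype under discussion, not any real corpus:

```python
# An illustrative, hypothetical demonstration that a model simply reproduces
# the skew in its training data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Deliberately skewed training data mirroring the stereotype above:
# every "assistant" sentence is paired with "she", every "executive" with "he".
texts = ["the assistant scheduled the meeting"] * 10 + \
        ["the executive approved the budget"] * 10
labels = ["she"] * 10 + ["he"] * 10

vec = CountVectorizer()
model = MultinomialNB().fit(vec.fit_transform(texts), labels)

# The model reproduces the skew it was fed, not any fact about people.
print(model.predict(vec.transform(["the assistant joined the call"])))  # ['she']
print(model.predict(vec.transform(["the executive joined the call"])))  # ['he']
```

The classifier is working exactly as designed; the prejudice lives entirely in the data it was handed – just as a child absorbs the norms handed down to them.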

Consider an untrained AI system as a human baby that has not yet been in contact with any values and norms (the equivalent of data for machine learning) passed on by elders, parents, and society – such a baby would have no biases and could be considered a “rational” being in this context. Unfortunately, this isn’t possible, as we are all socialized from a very young age into certain norms, rules, and societal values linked to our upbringing, culture, social class, race, and the perception of our intersectional identity in society. Primary socialization (the process by which an individual learns the basic values, norms, and behaviours expected of them by their society) thus acts as a mirror to how AI is trained. Gender biases are taught through primary socialization and ingrained in our institutions; technology plays a crucial role in shaping our vision of the world, and if it is biased, it will perpetuate sexism both intergenerationally and intragenerationally.

For the future, Bachman states that AI interfaces should begin to “reflect principles of consent, agency, embodied learning, and plurality” in the context of gendered biases within an intersectional feminist framework. The objective would be to dismantle patriarchy from our institutions, markets, media, and internalized norms, advancing our societies toward an increasingly equitable and fair environment for all. Technology is one of the first steps in achieving this, as it represents our relationship, as a society, with our near and distant future.

However, if a bias is overlooked, human biases are bound to repeat themselves through machine learning. As such, there needs to be a deep understanding of the cultural and social framework that shapes the algorithms ingrained in AI.

As AI develops further, and our relationship with it deepens, there needs to be a serious re-evaluation of the ethics and biases underlying the initial stages of machine learning, as they carry serious consequences for our reality.

Edited by Liz Bredt
