Technological Disparities: Exploiting the Global South and AI’s Hidden Human Labour


The explosive growth of the tech industry has made the disparities between the Global North and the Global South increasingly apparent. Advances in artificial intelligence (AI) are exposing inequalities long concealed behind the promise of technology. While a surge in technological advancement could help narrow social divisions and support the development of smaller economies, those divisions remain deeply embedded within these systems, burdening vulnerable groups and perpetuating uneven power structures. In particular, workers in the Global South are trapped in a vicious cycle of exploitation, labouring in unsafe conditions that harm their psychological well-being. This piece discusses the dangers associated with modern technology, specifically the development of AI.

Today’s flagship technology companies, such as Google and Microsoft, are known for lavish campuses, high wages, and inclusive, friendly work environments, where employees can expect six-figure salaries. The reality for AI workers in the Global South is far removed from this picture: from inadequate wages to detrimental socioeconomic conditions, the less fortunate confront mounting economic disparities. Given the widespread belief that the Global South is “undeveloped” and “lacks scientific advancement,” it is easy to assume that the region plays only a minor role in implementing and advancing AI. In reality, the Global South has been systematically exploited so that the Global North can reap the full benefits of AI: it supplies low-cost labour, ample resources, and weakly enforced legal safeguards, making it easy for tech companies to profit. As novelist William Gibson once said, “The future is already here… it is just not evenly distributed,” a remark that captures the North-South divide in the digital market. The disparity between workers on either side of this divide is glaring: the wealthy continue to benefit from labour in developing nations while enjoying a higher standard of living, while individuals in the Global South are exploited to make those advancements possible.

Businesses like OpenAI, the creator of ChatGPT, serve as prime examples of this paradox. The company promises consumers a safe experience, yet content moderation is contracted out to firms in developing economies, to the detriment of those employees’ psychological well-being. As stated by OpenAI:

“Our mission is to ensure artificial general intelligence benefits all of humanity, and we work hard to build safe and useful AI systems that limit bias and harmful content… Classifying and filtering harmful [text and images] is a necessary step in minimizing the amount of violent and sexual content included in training data and creating tools that can detect harmful content.” 

When ChatGPT first debuted, its powerful capacity to produce content on almost any topic, from relationship counselling to academic narratives, made it one of the most astonishing technological innovations. Yet to ensure that consumers can use these tools safely, safeguards must be in place to filter out harmful content. Machine learning alone cannot do this reliably, so human moderators must examine and label the material that AI systems flag as harmful.

Regulated AI technology continues to serve Western consumers and billion-dollar industries, but this privilege does not extend beyond those nations. As AI ethicist Andrew Strait states, “[AI technologies are] impressive… [but these] generative models are not magic, they rely on massive supply chains of [hidden] human labour and scraped data, much of which [is] used without consent.” Tech firms turn to the Global South, where oversight of fundamental human rights and psychological welfare is lenient and constraints on data privacy are minimal, to obtain low-cost labour. Exploiting workers’ need for income in the region is plainly unethical, given that these employees are obligated to screen graphic material.

A significant portion of the horrifying content, which includes homicide, bestiality, and child sexual abuse, appears to have come from the most sinister regions of the internet; much of it may even be illegal. This data is frequently distributed to contracting firms in the Global South, in this instance Kenya, and reflects the harrowing realities of unsafe working conditions in this relatively unheard-of sector. According to TIME Magazine, one employee was tasked with evaluating around “150 to 250 passages of text” during a nine-hour shift, often working overtime. Furthermore, although their contracts indicated that OpenAI would pay $12.50 per hour, in practice workers were paid as little as $1.50. The work was clearly psychologically draining, and although these workers received counselling, the sessions were deemed ineffective given the intense pressure to meet an established quota.

Given the speed at which AI is developing, tech companies need to critically examine the widening gaps that reflect societal divides, hierarchies, and inequities between the North and the South. Yet it remains evident that low-wage workers at the bottom of the AI value chain work tirelessly with minimal legal protections while entangled in this exploitative industry. They regularly encounter distressing content and are expected to identify hate speech, violent images, and sexual assault. Attempts to take legal action against these abuses have failed. Political will is therefore necessary to impose enforceable protections, rather than unrealistically lofty voluntary standards, at the contentious intersection of technology, business, and human rights.

Edited by Isaac Yong
