Artificial intelligence (AI) is a technological leap that is reshaping how individuals understand digitalization as it rapidly flows into every facet of everyday life. The 2023 Global AI Safety Summit drew attention to how developments in computational power allow consumers to interact with these systems with greater efficiency. It is therefore essential to carefully evaluate AI's impact on the community, ethical behaviour, and privacy. Among the most significant consequences of these developments is an emerging field of AI-powered technology known as "deepfakes," which allows users to swap a person's face and body into existing images or video. Like most technologies, deepfakes are inherently neutral; their use and any adverse impacts they may have depend on the degree of power their users give them. When users deliberately place a subject in a fabricated scenario to create the deceptive appearance that the individual is participating in something they are not, serious moral questions arise. Because sexual predators who make use of these advancements have shown themselves to be malicious, this essay will explain how AI platforms inadvertently enable cybercriminals to abuse this technology to generate exploitative films or photographs that depict children.
UK Prime Minister Rishi Sunak convened the two-day Global AI Safety Summit, held in early November 2023 at Bletchley Park. The timing of this meeting proved vital as nations vie for leadership in the global development and application of AI-based technologies. Approximately 150 attendees took part in the summit, including politicians, renowned academics, and executives from several of the world's top technology firms, with delegations representing 28 countries and the European Union. By examining the moral implications of AI models, the summit aimed to determine the next steps for safe development and to explore the hazards connected with AI advancement. Panelists acknowledged the significance of establishing a global agreement on the power and hazards of AI, through which the potentially damaging growth of machine learning could be regulated. The summit, according to ethicist John Tasioulas, "[signals] that political leaders are aware that AI poses serious challenges and opportunities, and [are] prepared to take appropriate action." Yet significant gaps remained in the discourse, even as the summit was praised for the high level of its conversations and as a crucial first step toward international cooperation on technological governance.
AI is undoubtedly an unsolved puzzle: it has the power to reduce existential risks while simultaneously possessing the capacity to be immensely destructive. Advances in AI have accelerated the development of systems for manipulating image, audio, and video data, making it possible to create highly realistic audiovisual simulations, or "deepfakes," that can replicate any scenario their developer wants. Thus, AI platforms unwittingly assist cybercriminals in exploiting this advanced technology to produce exploitative films or photographs that depict children.
A study conducted by the UK's Internet Watch Foundation (IWF) claims that as deepfakes become increasingly prevalent, AI is developing the ability to produce remarkably realistic representations of child sexual abuse material (CSAM). The IWF discovered a concerning increase in AI-generated content portraying the sexual abuse of children, including exchanges within pedophilic networks on dark web forums. By combining the faces of children they find online with nude bodies, this kind of AI enables pedophiles to rapidly create and circulate "new" content for their gratification. In April 2023, for instance, deepfake software was used to produce at least seven sexually obscene recordings of children, leading to the arrest of 61-year-old Steven Larouche of Quebec, Canada.
AI has come a long way since its inception, but the recent development of deepfake technology has made it increasingly challenging to distinguish authentic data from modified data. Professor Hany Farid of UC Berkeley claims that "the fakes are becoming more real [and] difficult to discern." Consequently, this technology has fallen into many hands, and cybercriminals will stop at nothing to turn AI toward sinister ends, especially when children are involved.
As deepfake technologies advance in quality, versatility, and ease of access, anybody could become a target, which presents severe ethical hazards. Because such images are relatively straightforward to manufacture, predators can draw on billions of photographs through an external image generator, producing lifelike pictures with little or no computational effort. According to Canadian Provincial Court Judge Benoit Gagnon, "A simple video excerpt of a child available on social media, or a video of children taken in a public place, could turn them into potential victims of child pornography." The Federal Bureau of Investigation has likewise issued a cautionary statement, "warning the public of malicious actors creating synthetic content [by] manipulating benign photographs or video[s]," noting that it continues to receive reports of children, as well as non-consenting adults, whose images or videos have been manipulated into explicit material. This dire predicament worsens as technological advances allow predators to produce astonishingly realistic films and photographs.
Advanced technology fosters the creation and distribution of exploitative content, transforming the darkest recesses of the Internet into a deadly instrument of harm. The current legal framework, which outlaws graphic, sexually charged content portraying minors under eighteen, has proven incapable of addressing the challenges posed by deepfake technology. Tech businesses, government agencies, and industry stakeholders must work together to establish clear policies on anonymity and consent in content creation so that AI face-swap technology can be used safely. The need for an international agreement on AI regulation cannot be overstated, as current laws and regulations fail to keep pace with the latest developments in deepfake technology.
Multilateral cooperation is of the utmost importance in reducing the adverse effects of AI-generated child sexual abuse material. As deepfake capabilities grow, the present regulatory structure must account for them. If nothing is done to stop this rapidly developing calamity, we risk opening an opportunity for AI to be exploited as a tool for the systematic abuse of defenceless children.
Edited by Mahnoor Zaman
In her fourth year at McGill University, Shannon is a staff writer for the Catalyst. Pursuing a double major in Sociology and Art History for her B.A., she is especially interested in social-political discrimination affecting Indigenous communities within Canada and across the globe.