Since its debut in November 2022, ChatGPT has garnered ample attention, admiration, and consternation. The artificial intelligence chatbot has broken ground with its ability to convincingly simulate human conversation, solve physics questions, and write poems. The AI has raised several concerns for education, given its ability to write essays and solve homework problems. Beyond issues with the misuse of information, including copyright and privacy violations, many worry that ChatGPT will take their jobs. With so many people apprehensive about a future with ChatGPT, it is worth realistically examining the challenges the technology may pose and the value we might gain from it.
The implications of ChatGPT for education have been a subject of avid debate among teachers, administrators, and concerned citizens. After all, if this computer program can write your essays and do your math homework, how can we have any trust in our methods of assessing students? In a survey from Study.com, 89% of student respondents revealed that they had used ChatGPT to help with homework assignments, which seems to confirm the public’s fears. Writing coach and founder of Crush the College Essay Peter Laffin claimed that this technology “has the capacity to blow up our entire writing education curriculum.”
However, the experts seem less distressed. Their central point is that ChatGPT is, first and foremost, a computer model, specifically a large language model, and its method of producing language is fundamentally different from how humans do it. Matthew Sag, a law professor at Emory University who studies the copyright implications of large language models, invokes the saying that "an infinite number of monkeys will eventually give you Shakespeare," comparing ChatGPT to "a large number of monkeys" that produces impressive but ultimately lacking results. OpenAI, the research laboratory that released ChatGPT, admits as much, warning that ChatGPT may produce "plausible-sounding but incorrect or nonsensical answers." A similar idea is echoed by author, game designer, and academic Ian Bogost, who explains that ChatGPT "offers a way to probe text" and "to play with text," but does not truly produce it, since it does not understand the meaning of text and language. In his words, ChatGPT and similar models are more "aesthetic instruments than epistemological ones." Indeed, the AI often makes mistakes, giving outright incorrect answers to basic factual questions, and the more "artistic" essays and poems it generates are often bland and unengaging. As the experts have pointed out, ChatGPT does not pose a significant threat to education because it falls short of human-produced language. Furthermore, various software tools have been developed to detect AI-generated text. The panic over the threat to education, then, seems somewhat overblown.
Some have also pointed out potential ethical and legal issues with the AI, including violations of intellectual property rights and the spread of misinformation. Some of the material ChatGPT was trained on may be copyrighted, so using the text it produces could result in copyright infringement. Perhaps more worrying is that large language models have no commitment to the truth and can produce inaccurate or entirely fabricated information. As media outlets like Buzzfeed begin using ChatGPT to create content, there is a risk of widespread misinformation. OpenAI has tried to mitigate this with a free moderation tool that filters inappropriate training material, but the tool does not filter political content and has limited effectiveness in languages other than English. ChatGPT can also expose personal or sensitive data from its training datasets to users, and similar technology could make deepfake video and audio more realistic and easier to produce at scale, enabling misinformation and scams. If left unaddressed, these problems could have disastrous consequences for everyone who consumes digital media.
Whenever new AI crops up, the question everybody asks is, "Will it take my job?" It makes sense to feel afraid, or even insulted, by the notion that a piece of code could do what you, a sophisticated, complex human being, do for a living. Unfortunately, it does appear that ChatGPT has the capacity to perform various jobs, especially those involving content creation and technology. The chatbot was found to perform "at or near the passing level for all three exams" required to obtain a medical license in the US. It also earned a B to B- grade on the final exam of an operations management course at the University of Pennsylvania's Wharton School of Business. This information, however, should be taken with a grain of salt. As mentioned, ChatGPT still faces serious limitations in the accuracy of the information it provides and in its inescapably generic style; AI cannot match human creativity and idiosyncrasy. There is also a significant "hidden workforce" that is often overlooked: although many jobs like delivery and driving services can be automated, human workers are almost always needed behind the scenes to keep everything running smoothly. However rapidly technology evolves, we cannot forget the worth of subjective human judgement and values.
There is an argument to be made for ChatGPT as a tool for increasing efficiency, improving products and services, and making workers' jobs easier. At its launch, the developers presented ChatGPT as an experimental release meant to make interactions with AI more natural. Accordingly, ChatGPT may be used to improve apps and programs, such as the chat functions within them, and to automate and facilitate menial tasks in certain jobs: it can write cold emails for sales workers, explain and document code for engineers, and draft job descriptions for human resources staff. This may sound like great news, but unfortunately there is little evidence that productivity and efficiency are actually increasing. Despite the technological advances of recent years, growth in US labour productivity (economic output per hour of labour) has only slowed since 2005. One partial explanation is that IT-based firms have highly concentrated ownership, so innovations in IT help maintain restraints on competition, producing an overall productivity slowdown. Companies also tend to use these novel technologies to tighten surveillance and control over workers rather than to support them. AI may have the potential to help us, but only if we apply it responsibly and appropriately.
When discussing the potential advantages of ChatGPT, we must also not neglect concerns about digital equity. The "digital divide" created by unequal access to technology and the internet between underprivileged and developed regions is no new concept; the disparity is apparent at every scale, from urban versus rural areas within cities to national and continental levels. Regions with access to the AI may use it for educational purposes, to facilitate grading for teachers or to explain concepts to students, and, as mentioned, for economic and industrial productivity. Those without access to computers or sufficient familiarity with the technology, however, will not be able to reap these benefits, exacerbating the divide. Privileged access to this technology also makes it easier for developed countries to exploit other countries for labour: following Venezuela's economic crisis, self-driving car companies like Tesla employed Venezuelan workers to label driving data for a little over 90 cents per hour, a practice described as "AI colonialism." Before we rejoice over intellectual triumphs like ChatGPT, we must ask who will benefit and who will pay the price.
ChatGPT raises a multitude of concerns, some that are blown out of proportion and some that have been neglected. Real concerns, such as the threat of widespread misinformation and ethical issues, have been lost amongst existential panic. Here, I have only scratched the surface of the ramifications of ChatGPT and other advanced AI for our future, and there will surely be more complications as technology advances. However, we can do nothing except examine the facts and perhaps take comfort in the fact that, at least for a while, our human quirks, flaws, and sentience make us indispensable.
Edited by Ruqayya Farrah
Setareh Setayesh is a second-year student at McGill University, currently pursuing a B.Sc. degree in Psychology. She joined Catalyst as a staff writer this year and is particularly interested in social epidemiology and the interaction between culture and politics in relation to human rights and development.