Machine Learning

The AI writing app ChatGPT challenges not only our education system but our thinking itself.

The University of Notre Dame's Fremantle campus is tucked away in the city's West End, which means it's eerily quiet during the long summer break. Located at the southern end of the harbor, which remains a working port despite plans to relocate it, and lined with Georgian and Victorian architecture, the West End takes on the look of an old photograph as the students head home and the bustle gives way to a slower pace. Yes, the cafes, shops and cocktail bars stay open through the break, beckoning to holidaymakers as they wander past the Round House, the former prison built on Bentham's infamous panopticon design and the Swan River Colony's first permanent building. But otherwise the area feels strangely becalmed; not quite a ghost town, but missing the human presence that animates it.

The campus itself, however, is abuzz as faculty and staff return to prepare for the new semester. Like a military alerted to a new and devastatingly effective weapon, academics are seized by the latest big advance in artificial intelligence: a development that threatens to disrupt modern academic practice so thoroughly that it makes the familiar tricks of the cheating trade, the contract-written essay, the crib sheet cupped in the palm, look positively quaint. From those setting essay questions to those preparing course outlines, everyone involved in the transfer of knowledge knows that the landscape has shifted, and everyone is talking about ChatGPT.

Launched in November 2022, ChatGPT is an AI-powered chatbot that uses deep-learning algorithms to generate natural-language answers to questions. Drawing on a vast body of training material, it can instantly synthesize original content: answers to specific questions, essays on set topics, literary parodies, scripts and more. The Turing test, in other words, has been decisively passed. There is very little in its responses that betrays it as a non-human actor, as many journalists have demonstrated by reproducing pieces of AI-generated content in their articles and inviting readers to tell them apart from the human-written ones. (Given the potential for job losses to ChatGPT and its equivalents, this is perhaps an unwise strategy.) Certainly there was nothing in the various essays I asked it to write that would have raised suspicion, beyond a slightly fussy insistence on transitions: "moreover" this, "alternatively" that. There is no way around it: ChatGPT is astonishing, and it's a game changer.

Not that it is universally seen as a problem for the tertiary sector. At the moment, opinion on ChatGPT appears split between those who regard it as a potential research and writing tool and those who (like me) see it as a challenge, perhaps an existential challenge, to a particular model of education. The first group is certainly the smaller, although its view has dominated the media of late, perhaps owing to a fetish for "balance". Essentially, these commentators dismiss concerns about ChatGPT as a species of Luddism, comparable to the panics that once greeted print media, television, the internet and Wikipedia (still lazily maligned in academia as an unreliable source of information). The first group thus subscribes to the so-called "instrumental" view of technology, which characterizes tools and techniques as essentially neutral phenomena that humans use to achieve their ends, rather than as phenomena that shape the cultures into which they are introduced. The instrumental view is widespread in Silicon Valley, where it underwrites a reflexive belief in progress, albeit one shading into a fatalism that borders on nihilism. It also tends to carry the suggestion that the human brain is itself a "technology" that can be reconfigured for greater efficiency. When advocates of new technologies argue, for example, that their adoption will free students up to focus on other tasks, they are rehearsing the same conceptual model that drove AI researchers towards the dream of natural-language processing in the first place: call it the "brain-as-computer" model. In doing so, they mischaracterize the workings of the human mind.

In my view, this points to the fundamental problem with ChatGPT, which is not so much accuracy, bias or hate speech as its potential to effect another surrender of our agency, and thus of our capacity to think and act freely. The writer and artist James Bridle argues that the information revolution has ushered in a "new dark age" in which the price of ever-smarter devices is an ever less knowledgeable populace. It is not only the workings of the algorithmic machines that recede from view, but also practical understanding of the world, of how things fit together, turning the world into an impenetrable "black box" for its inhabitants. For all the transhumanist dreams of the tech bros and Elon Musk's warnings about the dangers of "strong" AI, few of us are silly enough to believe that ChatGPT actually thinks. The danger is that the person using it isn't really thinking either. And since thinking remains the university's raison d'être (if only semi-officially), the problem goes beyond narrow concerns about accuracy or plagiarism. It is a challenge to the university itself, or to its liberal conception, and to the society it is meant to serve.

The advent of ChatGPT is thus an opportunity to think more broadly about the role of technology in formal education. What, after all, is education for? If the aim is to produce a workforce that can use artificial intelligence, then teaching with ChatGPT makes sense. But if the aim is to produce thinking people, whose flourishing depends on their ability to think, then perhaps a little caution is in order. Nor should we stop at ChatGPT. We might extend the review to other technologies, some of which became mainstream in education as a result of COVID-related lockdowns. (I know of no one who believes that online classes are as effective as face-to-face ones, or that such modes of teaching are good for students' mental health.) At the very least, we should question the importance accorded to technology in contemporary educational practice. One common Pollyannaish move is to invoke the example of Socrates, who believed that writing was injurious to memory and corrosive of intellectual inquiry, since a text cannot explain or qualify its arguments as a person can. What could be more regressive, the optimists ask, than opposing the alphabet? What a preposterous technophobe! But one doesn't have to follow the Athenian all the way to see that he was right about some things (writing does cause memory to atrophy) or that he asked the relevant questions: what does this technology give us, and what does it take away? We Cassandras can be a reactionary bunch, and one should always ask whether one has mistaken the state of the world for the state of one's back. Still, I find the Pollyanna position naive. At the very least it is worth registering that Socrates, as well as founding what we might call the "techno-critical" tradition, also pioneered the dialectical method of reasoning on which the liberal university rests.

I am not, of course, proposing that the modern university remake itself as an Athenian agora. But as we move deeper into the era of technoscientific capitalism, an era in which not just human speech but humans themselves may be reproduced, we will need to rediscover our capacity to appraise new technologies, not through the conventional method of the student ban but in what Lewis Mumford called the spirit of "democratic technics", which places human freedom and flourishing at the center of our deliberations. That is the capacity ChatGPT removes when it is put to dishonest use, and that is what makes the matter urgent. We need to start thinking about thinking machines, lest we drift towards a future society in which we become spectral, ghosts roaming the silicon city, cut off from the flesh-and-blood others without whom we can never be fully human.

Richard King

Richard King is a writer based in Fremantle. His latest book, Here Be Monsters: Is Technology Reducing Our Humanity?, will be published in May.
