Resisting “technopanic”: There Are Better Ways For Universities To Respond To ChatGPT

Since its launch by OpenAI in November 2022, ChatGPT has generated considerable concern and controversy, much of it centred on the technology's consequences for our classrooms. In particular, ChatGPT allows, perhaps even encourages, plagiarism, and there is a risk that this will go undetected, leading to a (further) deterioration of educational standards.

But what is behind this anxiety? I want to caution against (yet another) outbreak of "technopanic"; instead, I argue that universities should respond to the new kid on the AI block constructively, rather than with fear and defensiveness.

ChatGPT and its discontents

Chat Generative Pre-Trained Transformer (ChatGPT) is a chatbot that "generates comprehensive and thoughtful answers to questions and queries". The technology can produce a wide range of outputs. It can answer questions about quantum physics. It can write a poem. And it can produce "a persuasive essay on almost any topic" faster than a human can type.

Like any technological innovation, ChatGPT is not perfect. It appears to have limited knowledge of the world after 2021, so it cannot comment on, say, the latest controversies surrounding US Republican representative George Santos. Nonetheless, its versatility and sophistication, especially when compared with other AI tools, have attracted the attention of researchers and the public alike.

For example, the Guardian last month quoted a vice-chancellor as warning of "the emergence of increasingly sophisticated text generators, most recently ChatGPT, which generate highly credible content and seem increasingly difficult to detect". The same story reported that, thanks to the advent of ChatGPT and similar AI advances, some Australian universities are returning to traditional pen-and-paper exams.

West, for his part, describes ChatGPT as "the latest great disruption in education" and a new cheating threat that "university leaders" are hastily scrambling to combat.

This kind of worry is not unfounded. Cheating, plagiarism, and contract cheating are significant problems in higher education institutions. As one 2022 study explains:

Assessment integrity is critical to academic integrity, and academic integrity is important to all higher education institutions. If students pass academic assessments on the basis of work done by others, confidence in their abilities and achievements is diminished.

Threats to academic integrity are not uncommon. The study just cited, which surveyed 4,098 students from six Australian higher education providers, reported that practices of this kind had reached concerning levels.

This threat to the integrity of higher education is recognised by university staff and the wider community. Before ChatGPT, there was media coverage of contract cheating and of the use of AI to complete assessments. Most teachers are familiar with instances where students have passed off excerpts from academic essays or Wikipedia as their own work. This can also happen inadvertently, through a failure to properly check one's notes after class and acknowledge the work of others.

Add to this mix the precarious and often insecure conditions in which university teachers work. A 2021 article in The Conversation found that around 80 per cent of teaching jobs in Australian universities are held by staff on casual and short-term contracts, with no guarantee of more stable or ongoing employment. And all academics, regardless of employment status, work in a sector marked by redundancies and escalating workloads.

AI might not have destroyed the sector, but it has not made life any easier for university teachers. Responding to breaches of academic integrity can be time-consuming and emotionally draining for academics and students alike. Some breaches, such as those enabled by ChatGPT, may slip past the software designed to detect them and may be harder for teachers to verify.

Beyond "techno-terror".

My concern is that responding to ChatGPT solely or primarily through the lens of academic integrity could ultimately fuel a technopanic. A technopanic is a moral panic in which social ills and threats to public safety are attributed to technology, be it smartphones, social media, or AI.

Technopanics serve many purposes. They provide convenient scapegoats for real and imagined social problems. These scapegoats are easy to identify; they are not human and cannot answer back (ChatGPT may be an exception here). The sensationalism of technopanics is perfectly suited to the age of clickbait, although such panics predate Web 2.0, as exemplified by the "video nasties" panic of the 1980s.

Ultimately, technopanics are defeatist. By design, they have no interest in shaping constructive ways of engaging with technology, demanding instead punitive and often unrealistic measures (such as deleting one's social media accounts or banning AI from the classroom). They treat technological innovation as deterministic and as inimical to human endeavour.

In fact, AI is nothing if not a human creation. Its uses and misuses reflect and perpetuate social mores, values, belief systems, and prejudices. A recent study concluded that grappling with the ethical issues raised by artificial intelligence, "whether we are developers, novices learning about AI, or new users to AI interactions", requires educating ourselves in the early stages of our interactions with AI.

A constructive way forward

In this spirit, let me offer some suggestions for how universities might respond constructively to the emergence of ChatGPT. Some of these are already being implemented. And all of them, of course, could be adapted to settings beyond the ivory tower, such as primary and secondary schools.

  • Hold briefings with AI experts (academic researchers, media professionals) about ChatGPT and similar AI tools. These sessions could be offered separately to students and to staff. They should provide an overview of what these technologies do, along with their potential risks and benefits. It is important to mention the benefits, because AI is not entirely problematic, and to suggest otherwise would be naive, if not paranoid. Ideally, these sessions would allow students and staff to voice their concerns and learn something new. Members of both groups will vary in their understanding of ChatGPT, from those who already use the technology to those who know it only as a frightening headline.
  • Develop clear and unambiguous institutional rules regarding students' use of AI in graded assignments.
  • Integrate AI into the classroom to develop knowledge, prepare students for the workplace, and teach the ethical use of AI. In a blog post, Tama Leaver took issue with the Western Australian Department of Education's decision to ban ChatGPT in public schools. Leaver refers specifically to young people here, although his comments could apply to students of any age:

Education must equip our children with the critical skills to ethically use, evaluate, and create with generative AI tools and their outputs, not force them to sit handwritten exams because our education system is so paranoid that it assumes every student will use these tools to cheat in some way.

  • Include mandatory ethics training in all degrees, especially in students' first year. This training could take the form of a semester- or term-length course, or could be embedded in an existing course (for example, "Introduction to Computer Science" or "Introduction to Media and Communication"). Decisions to breach academic integrity (buying an essay, or having a chatbot write one) are ethical in nature; they are decisions about what is right and what is wrong. The same is true of decisions to use technology for good or for ill.

These suggestions have implications for already stretched university budgets and for the limited time of academics and students. Even the most well-intentioned and generous AI experts might prefer to devote their time and attention to questions other than briefing colleagues about chatbots.

Nonetheless, these suggestions are preferable to throwing up our hands and conceding defeat to our technologies.

Jay Daniel Thompson is Lecturer in Professional Communication in RMIT University's School of Media and Communication. His research examines ways of practising ethical online communication in an era of online misinformation and digital hostility. He is co-author of Content Production for Digital Media: An Introduction and Fake News in Digital Cultures.