Edtech

Is AI assisting students or creating a future of cheaters?

Artificial Intelligence (AI) has been making moves to get into the classroom. Will it make our children confident students or cheaters?

This month, tech giants OpenAI, Microsoft, and Anthropic launched a US$23M partnership with major US teachers’ unions to train educators on using AI in K–12 classrooms through a new initiative called the National Academy for AI Instruction.

While the companies promise benefits like personalized learning and efficiency, the move also targets the education sector, both students and teachers, as a potential user market. OpenAI has been encouraging teachers to use ChatGPT for a couple of years now. So, to be clear, this is about profit.

Supporters say AI really helps engage students with a subject. Bill Gates, an ardent supporter of AI in education, wrote in his blog last year that AI, “like many AI technologies at this point, is far from perfect. But it also reinforced my belief that AI will be a total game-changer for both teachers and students once the technology matures.”

However, critics warn of AI’s role in enabling cheating, weakening critical thinking, and spreading misinformation. ChatGPT launched in 2022, and by the very next year, moral panic was building over concerns about students using AI to cheat.

Students are beginning to rely too much on AI coding assistance tools, according to a study conducted by the University of San Diego. A recent survey of 1,500 teens by Harvard’s Graduate School of Education found that using AI to brainstorm and to answer questions kids hesitate to ask in the classroom is rampant. At the same time, studies also highlight the critical factors driving AI adoption in education and its tangible benefits for student performance. A Harvard study suggested that AI tutors can make students more engaged. However, the same study also found that students frequently use AI for cheating and shortcuts.

The question is, will AI truly benefit students, or will it only teach them how to get to the desired answer?

According to a collaborative study between the University of Hail, Saudi Arabia, and the Kalinga Institute of Industrial Technology (KIIT), Odisha, India, ChatGPT usage can significantly boost academic self-esteem, engagement, and buoyancy, giving higher education students a greater sense of control and life satisfaction. However, concerns such as biased responses, limited knowledge, and ChatGPT’s lack of emotional intelligence can undermine trustworthiness and cause disengagement.

Cheating is one of the major concerns of AI usage in the classroom. But what counts as cheating in an age of ubiquitous AI?

Recently, Quartz did a special on Cluely, an AI tool many are using to cheat in job interviews and on Zoom calls. Companies like Cluely are promoting the idea that “etiquette and ethics are just cultural bugs waiting to be patched.”

“From coders submitting AI-generated work, to students using ChatGPT to ‘write’ their essays and professors using ChatGPT to ‘grade’ those essays, the line between using tools and outright dishonesty is blurring fast,” said Quartz.

Cluely’s manifesto says, “Every time technology makes us smarter, the world panics. Then it adapts. Then it forgets. And suddenly, it’s normal.”

Will cheating with AI’s help become normal?

Again, it’s important to remember that it wasn’t AI that invented cheating; people did. LLMs are basically built on scraped material, which means their very foundations might be called cheating. It’s a slippery slope.

AI is built in our image, which means it mimics us. No wonder we often catch AI models cheating too. A study showed that AI models cheat to win when playing chess.

Imagine a whole generation graduating with grades earned only through AI. Now imagine this workforce in sectors like medicine, economics, and engineering, still getting by on AI assistance. Not a very reassuring future, don’t you think?

Maybe we just need to be mindful when we use AI and not rely on it blindly. AI isn’t here to limit our abilities but to extend them. As long as we don’t fall into the lazy habit of turning to AI only to cut corners, we should be fine. But checks should be in place.

AI in education walks a fine line between empowering students and encouraging shortcuts. Its potential is vast, but so are its risks. To prevent a future of dependency and dishonesty, we must treat AI as a tool, not a crutch, and build a culture of responsible use, integrity, and human oversight.

Navanwita Bora Sachdev

Navanwita is the editor of The Tech Panda who also frequently publishes stories in news outlets such as The Indian Express, Entrepreneur India, and The Business Standard
