Virtual Life 27 May 2024 8 MIN

Let’s face it, ChatGPT has entered the classroom

For the AI generation, the most coveted skill of the future workplace is just beginning to take shape

When Instagram launched in 2010, few could have imagined its grasp over our lives. Back then, many saw it as nothing more than a frivolous distraction that people wasted time on. Today, entire livelihoods—even businesses—depend on it. Those who were quick on the draw—the meme-makers and first-generation influencers—learnt early on how to work the algorithm to their advantage, probably leaving traditionalists wishing they’d been better prepared.

In 2024, Generative AI is bringing on a similar, or even bigger, reckoning in our everyday lives. A recent survey indicated that 75 per cent of Gen Z workers use AI tools at work (millennials and Gen X are far more hesitant to do so). Photographers use Midjourney to create concept art, writers use ChatGPT to brainstorm catchy headlines or summarise difficult research papers (guilty as charged), researchers use Perplexity AI in lieu of old-fashioned web searches, and developers use WebWave to create full-fledged sites. Today, there are tools that can help you make a slick video or even a song; you just have to know your prompts. AI is no longer some abstract, futuristic technology that businesses use to analyse the stock market or streamline supply chains (or create Terminators)—it is literally in the palm of our hands, poised to transform our everyday lives in ways we can barely comprehend. This is the Instapocalypse times a million. The question everyone’s asking is: how will the next generation of the workforce adapt?

“Everyone is using ChatGPT,” says Riya*, a finance student at a private institute in Maharashtra. When the OpenAI chatbot launched in November 2022, it was one of the first AI technologies accessible to lay users, which made it something of a go-to for text generation. Since then, it’s been used to generate everything from captions for social media posts to cover letters for job applications—and of course, academic assignments. “It’s not even a secret anymore,” the 19-year-old shrugs. “But teachers can’t do anything about it unless there’s proof that something has been ChatGPTed, and that won’t be anytime soon.”

Of course, a student would still get caught if they didn’t undo some of the chatbot’s more glaring mistakes; but targeted prompting and careful editing could make AI generation undetectable even by detectors such as ZeroGPT or Turnitin. I can’t help but relate—as a student, I often SparkNotes-ed readings that were too tedious to get through, and I’m pretty sure my professors knew. Boomers and Gen X passed chits, millennials used Wikipedia, Gen Z has ChatGPT. The difference is that ChatGPT is easier to use, but harder to control. Every student seems to know of someone who was caught passing off AI-generated work as their own. They also seem to know an equal number of students who managed to get away with it. Ayush*, an 18-year-old computer science student at a university in Haryana, shares a story about a friend who, as an experiment, spent five hours prompting ChatGPT to write a detection-proof paper that would have taken him all of three hours to write on his own. Ayush’s friend is essentially teaching himself what may well become a coveted workplace skill in the near future.

“If you have a story about a young woman from rural India who feels liberated after getting a bicycle, would you feed that into ChatGPT? You’d want to write that yourself, because that is a story with soul. And AI doesn’t have a soul…yet.”              

Among the students I spoke to, the consensus is that AI is fine as a starting point but not as a crutch. “As long as you’re not entirely generating a whole essay, but just using it as a reference point, that is fine—I don’t think the teachers mind either,” believes Ayush. Riya agrees, “I think it’s okay to refer to ChatGPT, but not rely on it.”

Professor Mohan Ramamoorthy, who teaches journalism at Chennai’s Asian College of Journalism, holds a pragmatic view that many students will probably agree with. “I am okay with young people being a little cheeky,” he laughs, emphasising that this is his personal opinion, not endorsed by his college. “If it’s a one-off assignment and nobody cares, I can understand you’d want to use ChatGPT to churn it out.” Besides, no matter how advanced AI eventually becomes, he doubts it will entirely replace human creativity. In journalism, news automation (generating reports on sports scores or the stock exchange, for instance) could revolutionise the way information is compiled and disseminated. “But if you have a story about, say, a young woman from rural India who feels liberated after getting a bicycle, would you want to feed that information into ChatGPT?” he offers. “You’d want to write that yourself, because that is a story with soul. And AI doesn’t have a soul…yet.”              

Shemal Pandya, who teaches exhibition design at the National Institute of Design in Ahmedabad, also believes there will always be room for traditional forms of creativity, and that AI will just be a new tool in the artist’s belt. He points out that his own college teaches watercolour and oil painting alongside newer technologies like VR. “In that sense, AI is just another medium. And as long as my students justify its usage as a medium, I’d be okay with them using it. But I’d hold them accountable for developing the skill of using AI, and not just using it because they think it is easy—and I’m not going to pretend it’s easy.”

This is most educators’ main concern about students using GenAI: are they thinking critically about how and why they’re using the tools, or are they just trying to get away with doing the bare minimum? Kanika Singh, director at the Centre for Writing and Communication at Ashoka University in the Delhi-NCR, remarks that, thanks to their reliance on autocorrect, many of her students get basic spelling and punctuation wrong when they write by hand. She personally still enjoys putting pen to paper, and firmly believes that the so-called tedious parts of creativity—writing by hand or manually looking up the meaning of a word—are valuable to the creative process. She worries that students may come to rely on AI to “do their thinking for them”, but has had to accept that the tide has already turned. “We are all increasingly under pressure to produce more, and work more ‘efficiently’. So we are just part of a larger ecosystem, where many of these skills are on the decline.”

It is understandable, then, that students who also find value in the older skills are wary of what GenAI means for their future careers. Ananya* (22), an aspiring animator-filmmaker now in her final year of design college in Gujarat, believes that using GenAI to generate creative content is “quite literally plagiarism”. Indeed, there are a number of other ethical concerns in the mix: GenAI tools are famously trained on the work of millions of creators who receive neither compensation nor credit, and these same tools threaten to shunt them out of the job market. Young animators like Ananya are also at risk of exploitation, because entry-level jobs that would ordinarily go to fresh college grads will increasingly be automated. “I am worried this will close up the industry and make it less accessible to different classes of people,” she adds. “The capitalist divide will increase further and further.”

If AI is going to be a big part of our world, then shouldn’t our education systems prepare us for this eventuality?  

It’s important to remember here that GenAI is only the tip of the iceberg; there are several AI tools that might actually help us bridge the equity gap in the education system. Professor Ramanujam, a visiting professor at Azim Premji University and the Asian College of Journalism, points to AI analytics tools like neural networks, which allow you to personalise content for each individual user at an unprecedented scale. We’ve already seen this in advertising: in a campaign for Cadbury, agency Wavemaker used AI to create a video in which Shah Rukh Khan urges viewers to shop at their local retailers, even naming specific stores in each viewer’s immediate vicinity. Educators could similarly customise their curricula for each student according to their unique learning patterns—something that overburdened teachers with large class sizes are not currently equipped to do. “The classroom has to be shaken up, and this technology will shake it up,” shares the professor. “We just need to make sure that students aren’t hurt in this shake-up.” The direction we take depends on how—and to what extent—we choose to embrace new technologies. Prof Ramanujam articulates the dilemma rather poetically: “What do we want to keep for human beings, and what will we give to the machines?”

And that brings us to the heart of this piece, what Prof Ramamoorthy might call its ‘soul’: what is it that we really need from our educational institutions? Isn't education, at its core, an exercise in understanding and adapting to our world? If AI is going to be a big part of our world, then shouldn’t our education systems prepare us for this eventuality? 

Some college classrooms are taking that plunge. Ashoka University, NID, and the Asian College of Journalism, to name a few, have already started hosting workshops to introduce students and faculty alike to the application of AI in various fields. Earlier this year, OpenAI partnered with Arizona State University to explore new avenues for generative AI technologies in education, and simultaneously teamed up with Common Sense Media to develop AI guidelines for teens, educators and families. Ed tech non-profit Khan Academy’s chatbot, Khanmigo, already provides free teaching assistance to thousands of educators, and thanks to a recent partnership with Microsoft, will now be powered by Azure OpenAI Service—meaning the tool could soon be in the hands of millions the world over. Could this be a first step towards making ChatGPT as much of a fixture in classrooms as Microsoft Office and Zoom? 

While we may not have a choice about adopting AI, what we can do is choose to not get taken by surprise this time. “We did not get social media right,” rues Prof Ramamoorthy. “We just let it take over our lives and now we are trying to find ways of regulating it.” As exciting as the possibilities may be, it is essential to listen to voices of dissent, to ensure the technology is not only used in a balanced way, but also developed more equitably. AI tools are overwhelmingly accessible only to the privileged classes who already disproportionately benefit from greater access to computers, to education in English (which is what large language models are predominantly trained in), and to job profiles that aren’t likely to be automated anytime soon. “I don’t worry about the future of students who come from privileged backgrounds, as they will land on their feet,” Prof Ramamoorthy shares. “I am worried about people who are getting trained in things that will be replaced by AI, because their systems are not so agile and flexible to change. To fix that problem, you’d have to fix the inequities in the education system. Let’s see how it goes.” 

*The students interviewed for this piece requested to remain anonymous to avoid getting into trouble with their universities.