Optimized Out of Learning

When students no longer see cheating as cheating — and what that means for learning

Chungin “Roy” Lee was a computer science student at Columbia University. In early 2025, he built an AI tool called Interview Coder that helped him cheat on job interviews for tech internships. The tool watched his screen, listened to his audio, and fed him answers in real time during virtual interviews with Amazon, Meta, and TikTok. All three companies offered him internships. Columbia suspended him. Lee thought this was “absurd” because Columbia had a partnership with OpenAI, ChatGPT’s parent company. He dropped out, rebranded his cheating tool as “Cluely,” and raised $5.3 million in venture capital funding. His company now helps people “cheat on everything” – exams, sales calls, job interviews. Lee compared his tool to calculators and spellcheck, claiming it wasn’t really cheating at all. It was innovation!

Lee is not an outlier. He is what happens when AI companies flood schools with tools that think for students, and venture capitalists reward the students who game the system. In the UK alone, nearly 7,000 university students were formally caught cheating with AI tools during the 2023-24 academic year – triple the number from the year before. That translates to 5.1 cases per 1,000 students, up from 1.6 per 1,000. One in four American teenagers now uses ChatGPT for schoolwork, double what it was in 2023. Twenty-six percent of K-12 teachers have caught students cheating with AI. Forty-three percent of college students admit to using ChatGPT, with 89% using it for homework and 53% for essays.

Detection has become nearly impossible. A University of Reading test found that 94% of AI-written submissions went undetected by teachers. AI-detection tools are so unreliable that they falsely accuse innocent students while missing actual AI work. To cope with ubiquitous cheating, teachers now require students to draft essays in Google Docs so they can watch the writing process unfold. They are returning to oral presentations and handwritten exams.

At the University of Illinois Urbana-Champaign, professors Karle Flanagan and Wade Fagen-Ulmschneider caught dozens of students cheating in their data science course. The students sent apology emails. “Dear Professor Flanagan, I want to sincerely apologize,” the first one read. Flanagan thought the student was owning up to it, being apologetic. Then a second email arrived. “I sincerely apologize.” Then a third. Then a fourth. Dozens of nearly identical emails, all containing the same phrase. The professors realized the apologies themselves were AI-generated. They showed the emails in class, a student photographed the slide, posted it to X, and it got 28 million views. At St. Peter’s University in New Jersey, English professor Stephen Cicirelli had a student submit an AI-written paper. When caught, the student apologized—with an email that also appeared to be written by ChatGPT. “You’re coming to me after to apologize and do the human thing and ask for grace,” Cicirelli said. “You’re not even doing that yourself?”

But the deeper problem is not the cheating. The deeper problem is that students like Lee genuinely don’t understand why using AI to do their schoolwork is wrong. Lee spent 600 hours grinding LeetCode, reaching the top 2% of users worldwide, called them “the most miserable hours of my life,” and concluded that the solution was not to become better at coding but to build a tool that would fake it for him. He believed – and venture capitalists agreed, to the tune of $5.3 million – that this was “innovation.” His company published a manifesto arguing there’s no reason to memorize facts, write code, or research anything when AI can do it faster. Columbia had taught him to optimize. He optimized himself right out of learning.

A generation of students is learning that thinking is optional, that effort is for suckers, that the only skill that matters is knowing which AI to use and how to hide it.

But AI usage now starts much earlier, undermining the development of foundational intellectual skills. Middle and high school students are growing up with AI as their default problem-solving tool. Twenty percent of seventh and eighth graders now use ChatGPT for schoolwork. By high school, that number climbs to 69%. They ask it to solve their math problems (29% of teens think this is acceptable use) and to complete homework assignments they don’t understand rather than struggling through them. They are learning early that thinking is optional, that effort can be outsourced, that the appearance of understanding matters more than actual understanding. These are the years when adolescents develop critical thinking skills, learn to tolerate frustration, and build resilience through challenge. They are learning instead to reach for a shortcut when something feels hard.

The social and emotional costs are harder to measure but no less real. Students who never learn to sit with confusion, work through problems independently, or experience the satisfaction of figuring something out on their own are students who will struggle to develop confidence, persistence, and genuine competence. They are being trained to depend on external tools for internal capabilities. And unlike calculators or spellcheck, which only handle mechanical tasks, AI does the thinking for them. It robs students of the very practice their developing brains need most.

Many parents are unaware this is even happening, let alone what it implies. About a quarter of parents don’t think their children are using AI for schoolwork – even though those same children report that they are. They see their children doing homework on laptops. They don’t know the homework is being written by ChatGPT. They don’t know that their children have learned to fool the AI-detection tools meant to catch them. They don’t know that their teenagers are outsourcing their college admissions essays, their scholarship applications, their entire academic identity to machines.

And the technology moves too fast, with new tools launched weekly. By the time parents understand what their children are using, three more apps have been deployed.

At just 21, Roy Lee raised $5.3 million for a tool that replaces thought and fakes competence. He then received $15 million more from Andreessen Horowitz. He was profiled in TechCrunch, covered in major outlets, and interviewed on podcasts about his marketing genius. Roy Lee’s rise is the natural outcome of a logic that treats learning as inefficient, effort as naive, and thinking as optional. He didn’t invent that logic. He just monetized it. The real cautionary tale isn’t one student’s trajectory from Ivy League dropout to VC-backed founder (we’ve seen that story many times before); it is the valuation put on his “cheat on everything” tool, and the fact that the attention economy once again rewarded convenience over substance.


Sources:

  • TechCrunch, “Columbia student suspended over interview cheating tool raises $5.3M to ‘cheat on everything'” (April 2025)
  • CNBC, “How Google is responding to AI cheating in coder interviews” (March 2025)
  • The Guardian, “Nearly 7,000 UK university students caught using AI to cheat” (June 2025)
  • Axios, “ChatGPT confounds colleges and high schools” (May 2025)
  • Nerdynav, “ChatGPT Cheating Statistics (2025)”
  • Pew Research / Vox, “26% of middle and high school students using ChatGPT” (2024)