Ashikur Rahman (NaziL)

The Great AI Heist That Wasn't: My Friend vs. Our Technophile Lecturer

Three days. That’s all the time our lecturer gave us for the assignment. Not a week, not a generous five working days. Just seventy-two hours to pull together a full submission that could get past his notoriously sharp eyes.

But it wasn’t the deadline that rattled us. It was the warning.

“I know all your tricks because I was a student myself. If you lift this assignment from ChatGPT or Meta, I will know.”

And for some reason, I believed him. Maybe it was the confident tone. Maybe it was the way he said “Meta” like he actually knew what LLaMA was. Or maybe it was just the fact that he’s in his early thirties—young enough to know how AI works, old enough to want to catch you misusing it.

Either way, I took the warning seriously. I shelved all plans of asking my favorite AI sidekick for help and decided to do it the hard way: fingers on keyboard, mind fully engaged, caffeine in bloodstream.

My friend, on the other hand, remained defiant.

“Don’t worry. I have a master plan.”

Oh, really?

He said it with the smugness of a man who had just outwitted the entire system. He looked me straight in the eye and dropped the bombshell:

“I’ll just ask ChatGPT to make the story more human and less like AI.”

(Cue the Tony Stark eye-roll GIF. Please tell me you get the reference.)

Never in my life had I witnessed such elite dumbassery. Over two decades of existing on this planet and the best his brain could cook up was... to tell ChatGPT not to be ChatGPT?

Genius.

The AI Delusion: When Overconfidence Meets Automation
What makes this hilarious isn’t just that he thought it would work—it’s that he thought he was being original. As if dozens of other students didn’t have the same “brilliant” idea to outwit a tech-savvy lecturer by doing the digital equivalent of putting on sunglasses and hoping no one recognizes you.

Let me be clear: asking ChatGPT to “sound more human” is like asking Siri to whisper. It might change the tone, but you’re still talking to a machine. And a smart lecturer, especially one who’s lived in both the analog and digital worlds, can absolutely tell.

The patterns are unmistakable:

The weirdly flawless grammar

The overly polite transitions (“Indeed, it is worth noting…”)

The way it hugs the middle ground on every opinion

The overuse of phrases like “In conclusion,” “Furthermore,” and “It is imperative”

It’s like trying to sneak past a security camera by wearing a fake mustache. It’s not foolproof. It’s fool-you.
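To make the point concrete, here’s a toy Python sketch of the kind of naive phrase-counting check a suspicious reader (or a very basic detector) might run in their head. The phrase list, threshold idea, and scoring are entirely made up for illustration; real AI detectors are far more sophisticated than this.

```python
# Toy illustration only: a naive "AI tell" counter, not a real detector.
# The phrase list below is invented for this example.
AI_TELLS = [
    "in conclusion",
    "furthermore",
    "it is imperative",
    "it is worth noting",
    "indeed",
]

def count_tells(text: str) -> int:
    """Count how many stock LLM-ish phrases appear in the text."""
    lowered = text.lower()
    return sum(lowered.count(phrase) for phrase in AI_TELLS)

essay = "Indeed, it is worth noting that, furthermore, in conclusion..."
print(f"Suspicion score: {count_tells(essay)}")  # higher score, stronger whiff of bot
```

And the joke is that "make it sound more human" rarely removes these tells; it just rephrases around them, and a human reader pattern-matches far better than any script.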

The Lecturer Who Knew Too Much
Our lecturer isn’t just suspicious—he’s prepared. You can tell he’s run assignments through AI detectors before. He probably keeps a list of common LLM phrases. He might even have a personal vendetta against copy-paste culture.

More importantly, he understands what most students don’t: AI is a tool, not a crutch. Use it for inspiration? Sure. Use it for grammar checks? Great. But feeding in a prompt and turning in the output? That’s just handing in someone else’s work—and in this case, “someone” happens to be an algorithm trained on the collective text of the internet.

Temptation vs. Trust: The Real Lesson Here
I get the temptation. When time is tight and pressure is high, AI feels like a lifeline. It writes fast, sounds smart, and never procrastinates. But here’s the problem: it doesn’t think for you. It just sounds like it does.

And our lecturer, with his steely Gen Z-Millennial hybrid energy, knew exactly what would happen. He knew we’d try to outsmart the system with the very tools he grew up alongside. And he was ready for it.

So I sat down. I opened a blank document. And I wrote the thing from scratch.

It wasn’t pretty. It wasn’t perfect. But it was mine.

Meanwhile, my friend spent more time trying to outwit the AI detectors than it would’ve taken to just write the assignment himself. He layered prompts. He paraphrased. He “humanized” his responses. He even ran the final result through a rewriting tool just to be sure.

All that work… to avoid doing the work.

Epilogue: The Grades Come In
Three days later, the results were posted. I passed—nothing spectacular, but solid. My friend? He got flagged for plagiarism.

Turns out, the detector didn’t even catch it. The lecturer did. Said it “read like ChatGPT trying to be a person.” Ouch.

There’s a moral here, and it’s not just “don’t use AI.” It’s “don’t assume tech can do your thinking for you.” You can’t out-hack someone who already knows the cheat codes.

Sometimes, the smartest thing you can do… is just do the assignment.

Final Thoughts
AI tools are amazing. They’re helpful, powerful, even inspiring. But they’re not a replacement for learning—or for showing your own voice. Because no matter how smart the software gets, it’ll never have your tone, your experience, or your human messiness.

So yeah. Maybe next time, I’ll still consult ChatGPT. But as a partner, not a ghostwriter.

And maybe my friend… well, maybe he’ll finally learn that telling ChatGPT to “be more human” is like asking a vending machine for dating advice.
