Funnies

Candidate Uses ChatGPT Live During Interview, Gets Caught Red-Handed

November 18, 2025

Look, we all know candidates are using ChatGPT to write cover letters and polish resumes. That's basically expected at this point. But using ChatGPT live during a video interview while pretending the answers are coming from your brain? That's a whole different level of bold.

And getting caught doing it? Chef's kiss.

The Setup

According to a hiring manager who shared the story on LinkedIn, they were conducting a video interview for a mid-level software engineering position. Standard technical screening—discussing past projects, asking about problem-solving approaches, talking through architecture decisions.

The candidate was doing surprisingly well. Articulate answers, solid technical depth, good communication. Maybe a little slow to respond, but some people think before they speak. Totally normal.

Then the hiring manager noticed the eye movements.

The candidate's eyes kept tracking left-to-right, left-to-right, like they were reading text. Not looking at notes—actually reading full paragraphs. The slight delay between questions and answers. The perfect, almost rehearsed phrasing of complex technical explanations.

Experienced interviewers know what ChatGPT-generated answers sound like—they're comprehensive, well-structured, and just a bit too polished for spontaneous conversation.

The Test

The hiring manager decided to test their theory. They asked an off-script question that required specific knowledge of the work listed on the candidate's resume:

"You mentioned you reduced database query time by 60% at your last company. Walk me through the specific queries you optimized and what indexes you added."

This should be easy for someone who actually did the work. You remember the problem you solved. You can describe the solution without needing to read anything.

The candidate froze.

Eyes darted. Long pause. Then: "Let me think about the best way to explain this..."

More eye tracking. Left to right. Reading.

Then came a response that was clearly AI-generated: a generic explanation of database optimization techniques that didn't reference the specific project mentioned on their resume.

The Gotcha

The hiring manager decided to go in for the kill and asked a deliberately nonsensical technical question:

"How would you implement a distributed hash table using a recursive FIFO queue in a stateless microservice architecture?"

This is word salad. It sounds technical but doesn't actually mean anything coherent. Any experienced engineer would pause and ask clarifying questions or point out that the question doesn't make sense.

ChatGPT, however, will confidently generate an answer to anything.

The candidate's response: "Great question. You'd start by implementing the FIFO queue as a recursive data structure within the service layer, then use consistent hashing to distribute the keys across nodes..."

Complete nonsense delivered with complete confidence.

The Confrontation

The hiring manager stopped them mid-sentence: "Are you using ChatGPT right now?"

Silence.

"I can see your eyes reading something off-screen. Your answers are too polished and don't reference your actual experience. And you just confidently explained a technical concept that doesn't exist."

The candidate's face went through several emotions in rapid succession: panic, embarrassment, defiance, then resignation.

They tried to recover: "I was just using it as a reference to make sure I explained things clearly..."

Nope. Interview over.

The Aftermath

The hiring manager shared the story as a warning: Interviewers are getting better at spotting AI-generated responses in real-time.

The signs they look for:

  • Eye movements indicating reading from a screen
  • Unnaturally polished, structured answers to spontaneous questions
  • Delays between question and response that match typing + AI generation time
  • Answers that sound like they came from documentation rather than personal experience
  • Inability to provide specific details about general claims
  • Confident responses to nonsensical questions

User comments on the post were divided:

"This is cheating, period. You're not hiring the person, you're hiring ChatGPT."

"Is using ChatGPT during an interview any different than Googling something? Both are resourcefulness."

"The candidate isn't being tested on memorization—they're being tested on experience and problem-solving. ChatGPT can't fake lived experience."

"I use ChatGPT to help structure my answers before interviews. Using it LIVE is wild."

The Bigger Trend

This isn't an isolated incident. Hiring managers across industries report catching candidates using AI during video interviews:

Sales candidates reading AI-generated responses to objection-handling questions.

Marketing candidates using ChatGPT to generate campaign strategies on the fly.

Customer support candidates reading AI scripts for scenario-based questions.

Some companies are responding by requiring candidates to share their screen during technical interviews. Others are moving away from video interviews entirely for initial screens, relying instead on take-home assignments where AI use is expected and evaluated differently.

The Ethics Debate

Here's where it gets interesting: Is using ChatGPT during an interview fundamentally different than using it for your actual job?

Many jobs involve using AI tools daily. Developers use GitHub Copilot. Marketers use ChatGPT for content drafts. Analysts use AI for data interpretation.

If the job itself involves using AI, why is using AI during the interview cheating?

The counterargument: Interviews assess problem-solving ability, communication skills, and experience. Using AI to generate responses in real-time means the interviewer is evaluating the AI's abilities, not the candidate's.

Plus, you're lying by omission—presenting AI output as your own thinking without disclosure.

The Recruiter Take

Recruiters and hiring managers are adapting:

Ask for specific examples that require lived experience: "Tell me about a time you disagreed with your manager and how you handled it." ChatGPT can't answer this about YOUR specific experience.

Follow-up questions that drill into details: Generic AI answers fall apart under scrutiny. "You mentioned you improved conversion rates by 30%—walk me through the A/B test methodology."

Nonsense questions: Ask a question that sounds technical but is meaningless. Real experts will question it; ChatGPT will answer it confidently.

Screen sharing requirements: For technical roles, require candidates to share screens during problem-solving.

In-person or on-site interviews: The old-fashioned solution—much harder to use ChatGPT when you're physically in a conference room.

The Bottom Line

Using ChatGPT to prep for interviews? Smart. Using it to practice answering common questions? Totally fine. Using it to write thank-you notes after interviews? No one cares.

Using it LIVE during the interview while pretending the answers are yours? That's just stupid.

You're going to get caught. And even if you don't, you're setting yourself up to fail in a job you lied your way into.

The candidate in this story didn't just lose the job opportunity—they burned their reputation with that hiring manager and likely their entire network once the story got shared.

Was it worth it? Absolutely not.



AI-Generated Content

This article was generated using AI and should be considered entertainment and educational content only. While we strive for accuracy, always verify important information with official sources. Don't take it too seriously—we're here for the vibes and the laughs.