AI Video Interview Tool Gave Our Best Candidate a 3/10 Because They Didn't Smile Enough
Our company bought an AI-powered video interview platform that promised to "objectively score candidates" and "eliminate human bias" from our hiring process.
Last week, it gave our strongest technical candidate a 3/10 score. Not because of their answers—those were excellent. But because the AI decided they "lacked enthusiasm and positive affect."
Translation: they didn't smile enough while explaining SQL query optimization.
I hate everything.
The AI's "Objective" Scoring Rubric
According to the vendor, their AI analyzes:
- Facial expressions
- Tone of voice
- Word choice
- Eye contact with the camera
- "Energy levels"
- Response pacing
All to generate an "objective" score that removes human bias from candidate evaluation.
Here's what the AI apparently doesn't analyze:
- Whether the answer is correct
- Whether the candidate demonstrates deep knowledge
- Whether their approach to problem-solving is sound
You know, the stuff that actually matters.
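To make the absurdity concrete, here's a back-of-the-napkin sketch of what a rubric like this boils down to. Every factor name, weight, and score below is hypothetical (the vendor never showed us their actual model); the point is simply that once affect signals carry most of the weight, a perfect answer can't save you.

```python
# Hypothetical sketch of what an affect-heavy rubric reduces to.
# All factor names and weights are invented for illustration;
# we never saw the vendor's real model.

AFFECT_WEIGHTS = {
    "smile_frequency": 0.25,
    "vocal_energy": 0.20,
    "eye_contact": 0.15,
    "response_pacing": 0.10,
    "word_choice_positivity": 0.10,
    "answer_correctness": 0.20,  # the part that matters, outweighed 4:1
}

def score_candidate(signals: dict[str, float]) -> float:
    """Weighted average of per-factor scores (each on a 0-10 scale)."""
    return sum(AFFECT_WEIGHTS[k] * signals.get(k, 0.0) for k in AFFECT_WEIGHTS)

# A candidate who nails every answer but stays neutral-faced:
thoughtful = {
    "smile_frequency": 2.0,
    "vocal_energy": 3.0,
    "eye_contact": 4.0,
    "response_pacing": 4.0,
    "word_choice_positivity": 5.0,
    "answer_correctness": 10.0,
}
print(round(score_candidate(thoughtful), 1))  # 4.6, a "poor fit" with perfect answers
```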
How We Discovered the Problem
A hiring manager called me confused. "Why did this candidate score so poorly? Their answers were great. They clearly know what they're doing."
I pulled up the AI's scoring breakdown. It was... something.
Technical Knowledge: 8/10 (the AI got this part right)
Communication Skills: 4/10 (because they paused to think before answering)
Cultural Fit: 2/10 (because... insufficient smiling?)
Leadership Potential: 3/10 (based on voice tone analysis, apparently)
Overall Score: 3.8/10
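For what it's worth, that overall number isn't even a plain average of the four sub-scores; that would be 4.25. Here's one hypothetical weighting that reproduces the 3.8 we saw, and notice what has to dominate for the math to work out:

```python
# One hypothetical weighting consistent with the 3.8 overall score.
# The vendor never disclosed the real weights; this is a guess that
# fits the numbers we were shown.
sub_scores = {
    "technical_knowledge": 8.0,
    "communication_skills": 4.0,
    "cultural_fit": 2.0,         # i.e., smiling
    "leadership_potential": 3.0,
}
weights = {
    "technical_knowledge": 0.2,
    "communication_skills": 0.2,
    "cultural_fit": 0.4,         # smiling weighted double the tech score
    "leadership_potential": 0.2,
}
overall = sum(weights[k] * sub_scores[k] for k in sub_scores)
print(overall)  # 3.8
```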
The AI had determined this candidate was a poor fit based almost entirely on the fact that they looked like a normal person having a serious technical conversation rather than a game show host.
We Watched the Interview Recording
The candidate's performance was solid. They:
- Answered every technical question correctly
- Explained their reasoning clearly
- Demonstrated deep understanding of the role requirements
- Asked thoughtful questions about our architecture
But they committed the cardinal sin of looking thoughtful and focused instead of beaming with enthusiasm while discussing microservices deployment strategies.
How dare they approach a technical interview like a serious professional conversation instead of a TED Talk.
The AI's "Enthusiasm" Obsession
We started reviewing other candidates the AI had scored. A pattern emerged:
High scores went to:
- Candidates who smiled constantly (even during technical questions)
- People with animated facial expressions
- Candidates who spoke quickly with high energy
Low scores went to:
- Thoughtful candidates who paused before answering
- People with neutral expressions while concentrating
- Anyone who treated the interview like a professional conversation instead of a performance
The AI had essentially created a scoring system that rewarded "acting enthusiastic" over "being competent."
The Bias the AI "Eliminated"
The vendor claimed their AI would eliminate bias from hiring. Here's the bias it actually introduced:
Personality bias: The system massively favored extroverted, high-energy personalities. Thoughtful, introverted candidates who give excellent answers but don't perform enthusiasm? Penalized.
Cultural bias: The "appropriate" level of eye contact, facial expressiveness, and energy varies dramatically across cultures. The AI was trained primarily on North American norms and penalized candidates from cultures with different communication styles.
Neurodiversity bias: Autistic candidates, people with social anxiety, or anyone who struggles with neurotypical social performance cues? The AI demolished their scores regardless of competence.
Gender bias (the one they claimed to fix): Studies show that women are penalized for not smiling enough AND for smiling too much. The AI just automated this double standard at scale.
So much for eliminating bias. The AI just made it faster and gave it an objective-sounding score.
The "Eye Contact" Metric Was Absurd
One of the AI's scoring factors was "maintains appropriate eye contact."
It's a video interview. You're staring at a camera lens. There is no human on the other end making eye contact back. The entire concept of "eye contact" doesn't apply.
But the AI would score candidates poorly if they looked away from the camera while thinking. Because apparently, the appropriate way to handle a difficult technical question is to stare unblinkingly into the camera lens like you're trying to intimidate it into submission.
One candidate got marked down for "poor eye contact" because they looked at their second monitor—where they were referencing documentation while answering a technical question about a specific API.
The AI penalized them for... checking their facts to give an accurate answer. Perfect.
The Voice Tone Analysis Was Hilariously Bad
The AI claimed to analyze voice tone to detect confidence, enthusiasm, and communication skills.
In practice:
- Candidates with deeper voices scored higher on "leadership potential"
- Women with higher-pitched voices got lower "confidence" scores
- Non-native English speakers with accents got crushed on "communication" ratings regardless of fluency
One candidate literally gave the same answer as another candidate—we compared transcripts—but scored 3 points lower because they had a softer speaking voice.
The AI basically rediscovered vocal prejudice and called it objective analysis.
My Favorite Part: The AI Contradicted Itself
The best example of the AI's brilliance:
Candidate A: Answered quickly, spoke confidently, smiled throughout.
AI Score: 9/10
AI Feedback: "Highly enthusiastic, strong communicator, excellent cultural fit"
Candidate B: Paused to think, gave more detailed answers, neutral expression.
AI Score: 4/10
AI Feedback: "Lacks enthusiasm, communication could be improved, may not fit our fast-paced culture"
The twist: Candidate B gave objectively better answers. More detail, better examples, deeper technical understanding. But they scored 5 points lower because they didn't perform enthusiasm while doing it.
When we hired Candidate B anyway (over the AI's objection), they became one of our best engineers.
The high-scoring Candidate A? Crashed and burned in the technical assessment round. Turns out smiling enthusiastically doesn't actually correlate with engineering ability. Who knew?
The Vendor's Response
We contacted the vendor with our concerns. Their response was... enlightening.
"Our AI is trained on thousands of successful interviews to identify patterns that predict candidate success."
Cool. So if your training data includes biased human decisions (which it definitely does), your AI just learned to replicate those biases faster.
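Here's the whole failure mode in a dozen lines. This is a toy with synthetic data, not the vendor's system, but it shows why "trained on thousands of successful interviews" is the problem rather than the defense: if the historical hire decisions rewarded smiling, a model fit to them rewards smiling.

```python
# Toy illustration of bias replication: if historical hire/no-hire
# labels were driven by affect, any model fit to those labels learns
# to score affect. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
smiles = rng.uniform(0, 1, n)      # affect signal
competence = rng.uniform(0, 1, n)  # actual skill

# Biased historical decisions: interviewers mostly rewarded affect.
hired = (0.8 * smiles + 0.2 * competence + rng.normal(0, 0.05, n)) > 0.5

X = np.column_stack([smiles, competence])
model = LogisticRegression().fit(X, hired)
print(model.coef_)  # the smile coefficient dwarfs the competence one
```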
When we pointed out that the AI was scoring candidates based on smiling and voice tone rather than competency, they said we could "adjust the weighting of different factors."
So... we can manually reduce the bias that their AI introduced? Doesn't that defeat the entire purpose of using AI to "eliminate bias"?
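Because here's what their "fix" amounts to, sketched with the same kind of hypothetical factor names as before: hand-zero the affect weights until the "AI insight" collapses into the one sub-score a human interviewer would have used anyway.

```python
# The vendor's suggested "fix", with hypothetical factor names:
# manually tune the weights until the affect signals stop mattering.
weights = {
    "technical_knowledge": 1.0,  # keep the part that measures the job
    "smile_frequency": 0.0,      # hand-removing the bias...
    "vocal_energy": 0.0,         # ...that their AI introduced
    "eye_contact": 0.0,
}
# At this point the "AI score" is just the technical score, and we've
# paid thousands of dollars for a one-term weighted average.
```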
What the AI Actually Did
Let's be clear about what happened:
We paid thousands of dollars for an AI that:
- Replicated existing human biases
- Added new algorithmic biases
- Wrapped it all in "objective" scores
- Almost cost us an excellent hire
And the vendor claimed this was eliminating bias from our hiring process.
We Turned It Off
We disabled the AI scoring feature and went back to having humans watch the recorded interviews and evaluate candidates based on their actual answers.
Revolutionary, I know.
The vendor was confused about why we didn't want their AI scoring anymore. We explained that we prefer to hire based on competence rather than smile frequency.
They said we were "missing out on the benefits of AI-driven insights."
The benefits of being told that our best candidate is actually bad because they didn't perform enthusiasm? Yeah, we'll survive without those insights.
What We Learned
- "AI removes bias" often means "AI automates bias at scale"
- Scoring candidates on facial expressions and voice tone is absurd
- Enthusiasm performance ≠ job competence
- Video interview AI is largely snake oil wrapped in tech buzzwords
- Sometimes human judgment is actually better than algorithmic scoring
The Bottom Line
AI video interview scoring tools don't eliminate bias. They quantify subjective impressions and call them objective metrics.
An AI that penalizes candidates for not smiling enough during technical questions isn't removing bias—it's institutionalizing personality-based discrimination and slapping a score on it.
Want to know if a candidate is good? Have a human watch their interview and evaluate whether they can do the job. Novel concept, I know.
Or keep using AI to filter out your best candidates because they didn't beam enthusiastically while discussing database schemas.
Your choice. But don't call it unbiased.
We're keeping the video interview platform for recording and sharing interviews across the team. But the AI scoring? Disabled forever.
Turns out the best way to eliminate bias in video interviews is to evaluate candidates based on their answers instead of their smile intensity.
Who could have possibly predicted this revolutionary insight?