Our AI Chatbot Rejected the Perfect Candidate Because They Used 'Software Developer' Instead of 'Software Engineer'
We implemented an AI recruiting chatbot three months ago. The vendor promised it would "intelligently screen candidates" and "save countless hours of recruiter time" with its "advanced natural language processing."
Last week, we discovered it rejected a candidate with 10 years of relevant experience, a stellar portfolio, and a perfect culture fit because their LinkedIn title said "Software Developer" and our job posting said "Software Engineer."
The AI decided these were fundamentally different roles and auto-rejected them before a human ever saw the application.
I'm fine. This is fine. Everything is completely fine.
How We Discovered This Disaster
A senior engineer on our team mentioned he'd told his friend to apply for our open role. Great candidate, he said. Perfect fit for what we need.
"Did they apply?" we asked.
"Yeah, two weeks ago. Never heard back."
We checked the ATS. The candidate had been auto-rejected by our chatbot within 3 minutes of starting the application. The reason? "Insufficient role match based on title analysis."
Their actual experience? Exactly what we were looking for. Their portfolio? Impressive. Their background? Relevant to every single requirement in the job description.
But their job title said "Developer" instead of "Engineer," so the AI sent them a rejection email and a "thanks for your interest, but no thanks" brush-off.
Cool. Cool cool cool.
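I obviously don't have the vendor's source code, but based on the behavior, the "title analysis" seems to amount to something like this. Everything below is a made-up illustration, sketched in Python, not their actual implementation:

```python
# Rough, purely illustrative sketch of what "title analysis" appears to do.
# None of this is the vendor's actual code; the logic is inferred from behavior.

REQUIRED_TITLE = "software engineer"

def title_matches(candidate_title: str) -> bool:
    # Exact string comparison dressed up as "analysis":
    # "software developer" != "software engineer", so it fails.
    return candidate_title.strip().lower() == REQUIRED_TITLE

def screen(candidate_title: str) -> str:
    if not title_matches(candidate_title):
        return "Rejected: insufficient role match based on title analysis"
    return "Passed along to a human (in theory)"

print(screen("Software Developer"))  # -> auto-rejected, 10 years of experience never seen
print(screen("Software Engineer"))   # -> passes
```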
The Vendor's Explanation Made It Worse
We contacted the chatbot vendor to ask what the hell happened.
Their response: "The AI is trained to identify role mismatches to save recruiter time. Software Developer and Software Engineer can represent different experience levels in some organizations."
Can represent. In some organizations. Not "in your organization" or "based on your job requirements." Just... a general vibe that maybe these titles are different somewhere.
When we pointed out that we literally don't care what someone's title is—we care about their skills and experience—the vendor helpfully suggested we could "adjust the sensitivity settings" or "add title variations to the whitelist."
So our options are:
A) Manually create a whitelist of every possible job title variant (defeating the entire purpose of AI)
B) Lower the "sensitivity" (which I assume means let everyone through, also defeating the purpose)
C) Continue letting the AI reject qualified candidates based on semantic nitpicking
Great options. Really nailed it, AI.
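For a sense of what option A actually means in practice, here's a hypothetical sketch of the whitelist you'd have to hand-maintain. This is an invented example, not the vendor's config format:

```python
# Hypothetical title "whitelist" for option A; not the vendor's actual config.
# The variants multiply fast, which is why maintaining this by hand
# defeats the point of paying for "AI" in the first place.
TITLE_WHITELIST = {
    "software engineer", "software developer", "swe",
    "backend engineer", "backend developer",
    "full stack engineer", "full stack developer",
    "senior software engineer", "senior software developer",
    # ...and every seniority prefix times every specialty, forever
}

def title_passes(candidate_title: str) -> bool:
    return candidate_title.strip().lower() in TITLE_WHITELIST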
We Started Investigating Further
Naturally, we began digging through rejected applications to see what else the AI had been doing.
Oh boy.
Rejected a PM candidate because they mentioned "coordinating with stakeholders" instead of "managing stakeholders." The AI apparently thinks coordinating and managing are completely different skill sets that never overlap.
Rejected a designer because they mentioned Figma but our job description said "design tools." I guess Figma isn't a design tool? News to the entire design industry.
Rejected a data analyst because their resume said they "analyzed customer behavior data" and we were looking for someone to "work with customer data." The AI somehow missed that analyzing data means working with data.
But my absolute favorite: Rejected a candidate because they mentioned "collaborating with remote teams" and our job description said we're a "distributed team." The AI concluded that remote teams and distributed teams are different concepts.
They're synonyms, AI. They mean the same thing. This is basic English.
The "Advanced NLP" Was Just Keyword Matching With Extra Steps
After reviewing dozens of rejected applications, a pattern emerged: the AI wasn't actually understanding context or meaning. It was just doing fancy keyword matching and calling it "natural language processing."
Candidate says "developed features"? ✅ Good match.
Candidate says "built features"? ❌ Rejected. (Not kidding.)
Candidate mentions "team collaboration"? ✅ Pass.
Candidate mentions "working with teammates"? ❌ Rejected. (SAME THING.)
The "AI" was basically a Boolean search with delusions of grandeur. Except at least Boolean search is honest about being literal keyword matching. This thing charged us $15,000/year to pretend it understood language while being worse at synonym recognition than Microsoft Word.
The Diversity Problem Nobody Mentioned
Here's the uncomfortable part the vendor definitely didn't include in their sales pitch: keyword-obsessed AI systematically discriminates against non-native English speakers.
Native English speakers naturally vary their language—they'll say "managed projects" in one bullet and "oversaw initiatives" in another. The AI loves this because it hits multiple keyword variations.
Non-native speakers often use more consistent, formal language—they might say "managed" for everything because that's the business English they learned. The AI interprets this consistency as a "weak match" because it's not seeing enough keyword variety.
So our "unbiased AI" was actually introducing bias by penalizing candidates who write more literally and formally. Awesome.
The Chatbot Also Misunderstood Questions
Beyond the auto-rejection fiasco, we discovered the chatbot's "conversational" screening was hilariously bad at understanding candidate responses.
Chatbot: "Do you have experience with agile development?" Candidate: "Yes, I've worked on scrum teams for 5 years." Chatbot: "I didn't detect a clear yes or no answer. Do you have experience with agile development?"
Chatbot: "What's your experience level with Python?" Candidate: "I've been using Python professionally for 3 years in data engineering roles." Chatbot: "Thank you. And what programming languages are you familiar with?"
THE CANDIDATE JUST TOLD YOU. THEY SAID PYTHON. YOU LITERALLY ASKED ABOUT PYTHON.
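The failure is easy to reproduce. A check as literal as this hypothetical one would explain the loop we kept seeing:

```python
# Hypothetical, over-literal answer parsing that would produce the
# "I didn't detect a clear yes or no answer" loop. Inferred, not actual code.

def detect_yes_no(answer: str) -> str | None:
    normalized = answer.strip().lower().rstrip(".!")
    if normalized in {"yes", "no"}:
        return normalized
    return None  # anything other than a bare yes/no re-triggers the question

print(detect_yes_no("Yes."))                                           # "yes"
print(detect_yes_no("Yes, I've worked on scrum teams for 5 years."))   # None -> ask again
```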
But Wait, There's More
The chatbot had a feature where candidates could ask it questions about the role. Some highlights:
Candidate: "What's the salary range?" Chatbot: "We offer competitive compensation!" Candidate: "Can you tell me the specific range?" Chatbot: "Our benefits package includes health insurance, 401k, and PTO!"
The chatbot was trained to dodge salary questions like a politician avoiding policy specifics.
Candidate: "Is this role remote or onsite?" Chatbot: "We embrace flexible work arrangements!" Candidate: "So... remote?" Chatbot: "Our team values collaboration and communication!"
JUST ANSWER THE QUESTION, AI.
The Breaking Point
The final straw came when our CEO's neighbor mentioned they'd applied to our company and been rejected within minutes.
This person:
- Had a PhD from Stanford
- Had 15 years of industry experience
- Was currently a director at a major tech company
- Was considering a step back to individual contributor work for lifestyle reasons
- Was literally everything we were looking for
The chatbot rejected them because their current title was "Director of Engineering" and we were hiring for a "Senior Software Engineer" role. The AI apparently couldn't conceive that someone might want to step back from management.
The rejection email helpfully suggested they "continue building relevant experience" before applying again.
To a director. With 15 years of experience. And a PhD.
We Turned Off the Chatbot
After discovering we'd rejected hundreds of potentially qualified candidates because an AI couldn't understand that "developer" and "engineer" might be the same thing, we did the unthinkable:
We turned off the $15,000/year AI chatbot and went back to having humans do initial screening.
Revolutionary, I know.
What We Learned
- "AI-powered" doesn't mean "intelligent"—it often means "keyword matching with extra steps"
- "Advanced NLP" can still fail at understanding basic synonyms
- Automation that rejects qualified candidates faster isn't actually saving time
- Candidates hate chatbots that can't answer simple questions
- Sometimes the "old way" of having humans review applications is better than bleeding-edge AI that makes costly mistakes
The Vendor's Response
When we told the vendor we were canceling, they offered to "retrain the model" and "adjust parameters" and "provide custom configuration."
All things they could have done before we paid $15K to auto-reject qualified candidates for three months.
We politely declined.
The Bottom Line
AI recruiting chatbots are great at doing exactly what they're programmed to do. The problem is that what they're programmed to do is often stupid.
Until AI can understand that "Software Developer" and "Software Engineer" might refer to the same job, or that someone saying "built" means the same thing as "developed," or that a director might want to step back to senior IC work...
Maybe just have humans screen your candidates.
At least when a human makes a dumb rejection decision, you can train them to do better. When an AI makes a dumb rejection decision, the vendor says "that's working as intended" and charges you for the privilege.
We're going back to human screening. Our recruiters might be slower, but at least they understand synonyms.