AI Salary Benchmarking Tool Accidentally Displays Compensation in Zimbabwe Dollars, Offers Look Incredible (Until Someone Notices)

A company's new AI-powered salary benchmarking tool had a small configuration issue: it pulled compensation data in Zimbabwe dollars (ZWL) instead of US dollars and displayed the amounts without currency symbols. The result: the company's job offers looked wildly generous, with entry-level positions appearing to offer $450,000 annually and senior roles showing compensation packages exceeding $2 million.

Three candidates accepted offers immediately. The talent acquisition team, accustomed to pushing below-market compensation, was pleasantly confused by the sudden enthusiasm. It took two weeks for someone to notice that the numbers were technically accurate, just in the wrong currency by a factor of about 13,000x.

When Exchange Rates Make Everyone Look Rich

Reports from workplace forums indicate the company implemented a new AI salary benchmarking platform that pulled compensation data from multiple global sources to provide market-rate recommendations. The tool was supposed to automatically detect the company's location and currency, then convert all benchmark data to the appropriate local currency for display.

According to technical forum discussions, the AI correctly identified the company as US-based but somehow defaulted to pulling salary data from Zimbabwean job boards (which use ZWL) while failing to convert the numbers to USD before displaying them. The result was compensation figures that looked suspiciously high but not so outrageous that anyone immediately questioned them.
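
The forum posts don't include the vendor's actual code, but the failure they describe boils down to two missing steps in the display path: no currency conversion and no currency label. A minimal sketch of that pattern, with invented function names and the article's rough rate of about 13,000 ZWL per US dollar:

```python
# Hypothetical reconstruction of the bug described above; not the vendor's real code.
# The ~13,000 ZWL-per-USD rate is the approximate figure cited in the article.
ZWL_PER_USD = 13_000

def fetch_benchmark(role):
    # The location detection misfired, so the row comes back priced in ZWL.
    return {"role": role, "amount": 450_000, "currency": "ZWL"}

def display_buggy(row):
    # Bug: raw amount, no conversion, no currency symbol.
    return f"{row['amount']:,}"

def display_fixed(row):
    # Fix: convert to the target currency and label it.
    usd = row["amount"] / ZWL_PER_USD if row["currency"] == "ZWL" else row["amount"]
    return f"${usd:,.0f} USD"

row = fetch_benchmark("Entry-Level Marketing Coordinator")
print(display_buggy(row))  # 450,000 -> reads like a $450K salary to a US recruiter
print(display_fixed(row))  # $35 USD -> what the benchmark is actually worth
```

On the buggy path, the number that reaches a recruiter's screen is indistinguishable from a generous USD figure, which is exactly the trap described above.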

One recruiter allegedly posted: "The benchmarking tool suggested $450,000 for an entry-level marketing coordinator role. I thought 'wow, we're really trying to compete hard for talent' and didn't question it. In my defense, Bay Area tech salaries are wild. This seemed plausible."

The Offers That Were Too Good to Be True

Reports suggest the first hint something was wrong came when candidates started accepting offers with unusual enthusiasm. One candidate allegedly replied to an offer letter with "I accept immediately before you realize this is a mistake." The recruiter thought they were joking. They were not joking.

Another candidate reportedly called within 30 minutes of receiving the offer and said "I'll take it, when can I start, I'm canceling all other interviews immediately." The hiring manager allegedly thought "great, we finally made a competitive offer!" The offer was for a junior software engineer role showing $620,000 in annual compensation. That figure was actually ZWL 620,000, worth roughly $47 USD. Read as US dollars, it looked like a very competitive offer. The intended offer was $85,000 USD.

A third candidate allegedly sent a reply email that just said "YES YES YES YES YES" followed by "Please send paperwork immediately, I am ready to resign today." The recruiter posted: "I should have known something was up when candidates were acting like we'd offered them lottery winnings. But I thought maybe we'd just finally gotten competitive with compensation. Narrator: we had not."

The Two-Week Delay in Discovery

According to reports, the problem went unnoticed for two weeks because multiple people saw the numbers and independently assumed they were correct for different reasons:

Recruiters thought: "Leadership must have approved aggressive compensation to attract top talent."

Hiring managers thought: "HR must have done market research showing these rates are necessary."

Finance thought: "Recruiting must have gotten special approval for these high offers."

Leadership thought: "These numbers seem high but I trust that recruiting knows the market."

Nobody actually checked the source data or questioned why entry-level roles were suddenly showing $400K+ compensation. One VP allegedly said in retrospect: "We were all busy, the AI tool was new, and everyone assumed someone else had verified the numbers. Classic organizational failure mode."

The problem was discovered when a finance analyst running year-end budget projections noticed that if all pending offers were accepted, the company's personnel costs would exceed total company revenue by 300%. They flagged this as "probably wrong" and started digging.

The Discovery Moment

Reports indicate the finance analyst traced the numbers back to the salary benchmarking tool and noticed all the source citations were from Zimbabwean job sites. They allegedly checked the exchange rate, did quick math, and realized the compensation numbers were accurate—in ZWL, not USD. At 2025 exchange rates, ZWL 450,000 equals approximately $35 USD. Not quite the competitive offer the company thought they were making.

The analyst allegedly sent an email to the talent acquisition team with the subject line "We have a currency problem" and attached screenshots showing the same role posted on a US job board at $85K USD and a Zimbabwean job board at ZWL 1.1 million (about $85 USD when converted). The AI tool had pulled the ZWL figure, failed to convert it, and displayed "1100000" as the salary recommendation, which recruiters interpreted as $1.1 million USD.
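
For what it's worth, the analyst's "quick math" is just division at the article's approximate rate of 13,000 ZWL per US dollar, and it also explains why the unlabeled figure slipped past everyone: read as USD it was only about 13 times the real market rate, while the properly converted value was about 13,000 times smaller than what recruiters thought they were seeing.

```python
# Back-of-the-envelope check at the article's approximate ~13,000 ZWL-per-USD rate.
ZWL_PER_USD = 13_000

zwl_posting = 1_100_000                      # the Zimbabwean job board figure
us_posting = 85_000                          # the same role on a US job board

print(f"${zwl_posting / ZWL_PER_USD:,.0f}")  # $85 -- what the ZWL figure is actually worth
print(f"{zwl_posting / us_posting:.0f}x")    # 13x -- how inflated the unlabeled number looked
```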

One recruiter allegedly read this email and experienced what they later described as "immediate existential dread followed by nausea." Another posted: "I have never felt my stomach drop faster than the moment I realized we'd been making offers in the wrong currency for two weeks. Three people have already accepted. I don't know how to fix this."

The Accepted Offers Crisis

The immediate problem was the three candidates who had already accepted offers based on the inflated numbers. Legal was consulted. The consensus was grim: the offers were in writing, had been accepted, and didn't specify currency. The company was potentially on the hook.

However, according to reports, the offer letters did include language about "commensurate with experience" and "subject to final approval," which legal argued gave the company an out. The talent acquisition team was tasked with calling each candidate to explain the situation.

One call allegedly went like this:

Recruiter: "Hi Sarah, I need to discuss your offer letter. There was a technical issue with our salary benchmarking tool."

Candidate: "I knew it was too good to be true. How bad is the real number?"

Recruiter: "The offer should have been $85,000 USD, not $620,000. The tool was pulling data in Zimbabwe dollars and we didn't catch it."

Candidate: "...Zimbabwe dollars?"

Recruiter: "Yes. We're very sorry. The actual offer is still competitive for the market, just not quite $620K competitive."

Candidate: [long silence] "I already gave notice at my current job."

Recruiter: "I know. We're prepared to offer an additional $10K signing bonus as apology for the confusion. $95K total first-year compensation."

Candidate: "I don't know what to say. I'm obviously disappointed, but I appreciate the honesty. Can I have 24 hours to think about it?"

According to reports, two of the three candidates ultimately accepted the corrected offers (with apology bonuses). One declined and took another position. The company counted themselves lucky it wasn't worse.

The Other Pending Offers

Reports suggest there were 14 other offers in various stages of preparation that hadn't been sent yet. Those got corrected immediately. The talent acquisition team also had to go through every role they'd posted in the past two weeks and update the salary ranges, which were still showing the inflated ZWL figures.

One job posting allegedly showed "$1.2M - $1.8M" for a senior accountant role that should have been "$85K - $95K." Several candidates had applied specifically because the compensation looked exceptional. The recruiting team had to send bulk emails explaining the error: "Due to a technical issue with our job posting platform, salary ranges were displayed incorrectly for roles posted between [dates]. Please see corrected ranges below."

According to forum discussions, several candidates replied asking if the company was hiring for roles actually paying $1.2M because they were interested in those. The recruiter had to explain that no, those roles don't exist, it was a currency conversion error, sorry to get your hopes up. One candidate allegedly responded: "You're telling me I got excited about a $1.8 million accounting job that doesn't exist? This is the worst Monday."

The AI Tool Vendor Response

Reports indicate the company contacted the salary benchmarking tool vendor to report the currency bug. The vendor's initial response was allegedly "our system is working as designed." The company replied "your system thinks we're paying entry-level employees $450,000. That is not working as designed."

After investigation, the vendor apparently acknowledged that the AI's location detection had malfunctioned, causing it to pull salary data from the wrong geographic region while failing to apply currency conversion. They pushed a fix within 48 hours and offered three months of free service as an apology.

The company allegedly asked "can we have six months free instead, given that we almost lost three hires due to your bug?" The vendor reportedly agreed. The talent acquisition team considered this a small victory in an otherwise terrible situation.

The Internal Lessons Learned

According to reports, the company conducted a post-mortem and implemented new processes:

  1. All AI-generated compensation recommendations must be spot-checked against known market rates before being used in offers (a rough sketch of such a check appears after this list). Revolutionary concept of "trust but verify."

  2. Currency symbols must be displayed on all compensation figures to prevent confusion. Apparently "$85,000" and "85000" look different enough to catch errors.

  3. Significant changes in compensation ranges require approval from finance before being implemented. No more assuming leadership pre-approved aggressive increases.

  4. New tools require a testing period before production use, especially for high-stakes processes like compensation. Again, this should have been obvious.
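
None of these checks needs to be sophisticated. For items 1 and 2, even a crude band around known market rates plus a mandatory currency label would have caught a $620,000 junior engineer recommendation. A minimal sketch, assuming a hand-maintained table of expected USD ranges; the roles and numbers below are illustrative placeholders, not the company's actual benchmarks:

```python
# Illustrative sanity check for AI-generated salary recommendations (items 1 and 2 above).
# The expected ranges are made-up placeholders, not the company's actual benchmarks.
EXPECTED_USD_RANGES = {
    "junior software engineer": (70_000, 130_000),
    "senior accountant": (80_000, 120_000),
}

def check_recommendation(role, amount, currency="USD", tolerance=1.5):
    """Flag recommendations that lack a USD label or fall far outside the expected band."""
    if currency != "USD":
        return f"REVIEW: {role} recommendation is in {currency}, not USD"
    low, high = EXPECTED_USD_RANGES[role.lower()]
    if amount < low / tolerance or amount > high * tolerance:
        return f"REVIEW: ${amount:,} for {role} is far outside ${low:,}-${high:,}"
    return f"OK: ${amount:,} for {role}"

print(check_recommendation("Junior Software Engineer", 620_000))
# REVIEW: $620,000 for Junior Software Engineer is far outside $70,000-$130,000
```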

One talent acquisition leader allegedly commented: "We learned that AI tools can fail in really creative ways, like pulling data from Zimbabwean job boards when you're a US company. We also learned that we need better verification processes because apparently we'll all just assume million-dollar entry-level offers are normal if the AI says so."

The Candidate Reactions

Reports from workplace forums indicate several candidates posted about the experience on Reddit and Blind. One allegedly wrote: "I got a job offer for $620K, accepted immediately, then got a call saying 'sorry it was a currency error, the real offer is $85K.' I don't know whether to laugh or cry. I already told my family I was getting a life-changing salary."

Another posted: "Company's AI tool quoted salaries in Zimbabwe dollars by mistake. I saw a posting for $1.2M and thought 'finally, someone who values accountants properly.' Turns out it was $85K and a technical error. Still taking the job but wow, what an emotional journey."

The Reddit comments allegedly included: "This is the most elaborate accidental catfishing I've ever heard," "Imagine telling your partner you got a $600K offer and then having to explain Zimbabwe dollar exchange rates," and "The AI giveth hope and the AI taketh away."

The Bigger Picture

According to HR technology experts, this incident highlights risks of AI-powered compensation tools that pull data from multiple sources without adequate validation. One expert posted: "AI can scrape salary data from global sources efficiently, but if it doesn't properly handle currency conversion and geographic context, you get exactly this problem: wildly incorrect numbers that look plausible until someone does basic sanity checking."
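
One way to build the guard this expert describes is at ingestion rather than display: drop or convert any scraped row whose country or currency doesn't match the target market before it ever reaches a recruiter. A rough sketch under those assumptions; the row format and rates here are invented for illustration, not any vendor's real API:

```python
# Illustrative ingestion guard: drop or convert rows from the wrong market before
# aggregation. The row fields and rates are assumptions, not any real vendor's API.
RATES_TO_USD = {"USD": 1.0, "ZWL": 1 / 13_000}

def normalize(rows, target_country="US", target_currency="USD"):
    clean = []
    for row in rows:
        if row["country"] != target_country:
            continue  # wrong geography: skip instead of silently mixing markets
        rate = RATES_TO_USD[row["currency"]]
        clean.append({**row, "amount": round(row["amount"] * rate), "currency": target_currency})
    return clean

rows = [
    {"country": "US", "currency": "USD", "amount": 85_000},
    {"country": "ZW", "currency": "ZWL", "amount": 1_100_000},  # the row that caused the mess
]
print(normalize(rows))  # only the US row survives, already labeled in USD
```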

Another commented: "The real failure here wasn't the AI bug—software bugs happen. The failure was that multiple people saw $450K entry-level offers and nobody questioned it. That's an organizational critical thinking problem, not just a technology problem."

The Current Status

Reports indicate the company has corrected all affected job postings, updated their salary benchmarking processes, and implemented additional verification steps for AI-generated recommendations. They also allegedly changed vendors to a salary benchmarking tool that is "less creative about global data sources" and "better at not suggesting we pay interns $800,000."

One recruiter posted an update: "We've recovered from the Great Zimbabwe Dollar Incident of 2025. All offers are now correctly stated in US dollars. We have trust issues with AI tools. But we're functional. Our candidates are less disappointed. It's fine. Everything's fine."

The three candidates who accepted corrected offers all started their roles and allegedly have a running joke about being the "$620K employees" who actually make significantly less than that. One new hire reportedly has "ZWL Millionaire" in their Slack bio. The company is choosing to laugh about it.

The Moral

If you're implementing AI-powered salary benchmarking tools, maybe verify they're pulling data in the correct currency before sending offers to actual humans. Currency symbols exist for a reason—use them. And if your AI tool suggests paying entry-level roles $450,000, perhaps question that before assuming your company just got really generous with compensation.

Also, when candidates accept offers with unusual enthusiasm and phrases like "before you realize this is a mistake," that's your signal to double-check the numbers. They might know something you don't. Like the fact that a currency conversion bug has you quoting salary figures in a currency worth roughly 1/13,000th of a US dollar.

Verify your AI tools, folks. Especially when they control numbers with life-changing financial implications. Zimbabwe dollars are not interchangeable with US dollars, and AI systems that think they are will create spectacular disasters that live forever in HR legend.

May your compensation data always display in the correct currency, and may your candidates' enthusiasm be based on realistic salary figures. It's a low bar, but apparently we need to set it.

