Rodrigo Estrada

The Great Tech Interview Bias: Why Are We Still Ignoring AI in Hiring? 🤔💡

Job hunting in tech in 2025 is a bizarre experience. I’ve been leading multi-role teams for years, constantly switching between different languages and technologies. Thanks to LLMs, I can now jump between Python, TypeScript, Terraform, Bash, Go, SQL (PostgreSQL, Snowflake, KSQL), PySpark, and Pandas without the cognitive overload that used to burn me out. But interviews? They’re stuck in the past.

It’s already hard to find good developers, yet hiring processes make it even harder. If a company wants a “React+TypeScript dev” or a “Spark engineer,” they set up trivia or highly specific interviews. A developer with an engineering mindset doesn’t necessarily work with a single technology every month; that is more characteristic of a programmer whose sole role is to master one technology. There’s nothing wrong with that, but depending on the person, switching technologies might take a few minutes to recall details and, more importantly, requires a calm, appropriate environment for the transition, not a high-pressure interview where they are grilled in a capricious manner.

🚨 Does that mean you don’t know the tech? No.

🚨 Does it mean you can’t pick it up in minutes? No.

🚨 Does it mean you aren’t good at real-world problem-solving? Absolutely not.

These interviews aren’t filtering for the best engineers. They’re filtering for:

People who do repetitive tasks daily 📌
People who have the time to prepare for every possible trivia question ⏳
People who can regurgitate syntax on demand 🧠💨
And here’s the paradox: Some companies do provide a prep list so you can “get ready.” But doesn’t that defeat the whole point? If anyone can pass with enough prep time, what are you really testing? Meanwhile, some of the best engineers won’t even bother wasting time on it because it provides zero real-world value.

“But Google, Meta, and the big guys do it!” 🏢💰 Sure. But they have the same problem. The difference? At least their compensation is high enough that talented engineers might be willing to play along.

The “No Help Allowed” Fallacy 🤦‍♂️
It’s worse now. Companies say “No LLMs! No Google!” but then expect you to answer as if you were an LLM. The logic? If you can’t solve simple problems unaided, you surely can’t solve complex ones.

🚨 That’s a logical fallacy. 🚨

In actual development, you have:

Google 🌎
Stack Overflow 📚
Books 📖
AI code assistants 🤖
Senior devs guiding you, as well as peers or even juniors who might have a fresh perspective 🧑‍💻
Libraries to solve common problems 🛠️
Linters 🛠️
Automatic code formatters ✨
Syntax completers ⌨️
Auto-completion ⚡
Automatic variable naming 🏷️
Automatic documentation generation 📖
So why not let candidates use LLMs? 🤯 Now, instead of testing trivia, you can give them real-world problems. The reality is that even the interviewer is limited to trivia or basic problems because it’s impossible to test real-world scenarios in such a short time. In the end, it all comes down to trust — if you don’t trust the person you’re hiring, then you shouldn’t be hiring them at all.

A good LLM user can solve difficult problems fast 🎯
The way they prompt LLMs shows their reasoning process 🧠
They still need to understand what they’re doing or it won’t work 🔍
Compare that to old-school interviews, where success depends to a significant degree on the interviewer’s attitude and bias. If they lack the skill to guide without giving away answers, the candidate might freeze.

Interviewing with unassisted code challenges is more of a teaching or instructional skill than an engineering skill. Most engineers, no matter how senior, do not necessarily possess it.

The Harsh Truth: LLMs Are Already Better Than Most Developers 😬
For simple and even mid-level coding, LLMs already outperform most devs. A skilled LLM user can even tackle complex problems. So… does it really matter if a dev can solve a medium-level problem without help?

A developer is still essential. The point isn’t that AI replaces them, but that solving problems without assistance doesn’t add as much value as some think. An LLM can outperform a solo developer in many cases, but only when guided by a skilled developer who understands what to ask, how to refine outputs, and when to step in. The true value lies in leveraging AI effectively, not in rejecting it.

Anyone who’s used LLMs for real development knows: if you don’t understand the tech, AI won’t save you.

Adapt or Be Replaced… Not by AI, but by Those Who Adapt 🏃💨
This entire debate isn’t about AI replacing developers. It’s about developers refusing to adapt. AI won’t take your job. Your resistance to AI will.

Companies need to wake up. Interviewing like it’s still 2010 is wasting talent and filtering out great engineers just because they don’t memorize trivia. The best engineers are already working with AI, not against it. Maybe it’s time hiring teams did the same. 😉

Corollary: A Shift in Perspective 🎯
After a surprising failure in an interview, I revisited HackerRank and LeetCode after years of not using them. The interesting thing? I didn’t struggle much at all. This made me realize something about human psychology: I had a subconscious bias, resisting something I didn’t see value in.

So, I reevaluated. Now, I solve LeetCode and HackerRank problems occasionally, just for fun. The key lesson? While I still believe these aren’t great for interviews, they are fantastic for reducing dependency on LLMs and avoiding AI hallucinations. As a mental exercise — when done in a relaxed and enjoyable way — it has actually improved how I interact with LLMs.
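To make that kind of drill concrete, here is a minimal, hand-written sketch of a classic warm-up problem (two-sum, in Python). The problem choice is mine, not from any particular interview or platform; the point is simply that writing something like this unaided keeps the fundamentals sharp enough to notice when an LLM’s output is off.

```python
# Hand-written warm-up: classic "two sum".
# Return the indices of two numbers that add up to the target,
# or None if no such pair exists. Hypothetical example of the
# kind of relaxed practice exercise described above.

def two_sum(nums: list[int], target: int) -> tuple[int, int] | None:
    # Map each value we've already seen to its index, so every lookup is O(1).
    seen: dict[int, int] = {}
    for i, n in enumerate(nums):
        complement = target - n
        if complement in seen:
            return seen[complement], i
        seen[n] = i
    return None  # no pair adds up to target


print(two_sum([2, 7, 11, 15], 9))  # (0, 1)
```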

Sometimes, it’s not about rejecting change, but about understanding how to balance new tools with fundamental skills. 🚀
