
Kevin Naidoo

Posted on • Originally published at kevincoder.co.za

Tunnel Syndrome: AI's biggest weakness?

As a developer, are you worried that AI is going to replace you? It's a valid concern, but let me put your mind at ease: chances are this is highly unlikely in 2025, or at any point in the near future.

Why am I so confident?!


AI Tunnel Syndrome is one of AI's biggest weaknesses, and it's not going to get better any time soon, regardless of "o3" or whatever model comes out next.

Preamble: Consciousness

Okay, so "AI Tunnel Syndrome" is a term I just coined while thinking about AI, but bear with me for now; we'll get back to what I mean in a minute.


The thing is, to be human, you are more than just a library of knowledge. You are filled with memories, emotions, and experiences. You are a being that can adapt to just about any environment and thrive even in the most impossible circumstances.

You have a worldview and consciousness.

AI, on the other hand, is frozen in time. You see, there is a reason why we call them "models" and each new iteration has a version number. Do you have a version number? No, because your knowledge and experience grow every second of every day.

Models have no consciousness; they are just algorithms that parse, scan, and find patterns in a large corpus of information.

A new version simply indicates that the model's knowledge cutoff has moved further down the line, with more data and various improvements to both the dataset and the algorithm.

The LLM stops learning at this cutoff; every conversation thereafter is just an interaction, with no impact on the model's persistent knowledge base.
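
You can see this statelessness in any chat-style API: nothing from a conversation is written back into the model's weights, so the client has to resend the whole history on every call. Here's a minimal sketch of that idea, where `complete` is a hypothetical stand-in for any provider's chat-completion endpoint:

```python
# Chat models are stateless: the "memory" lives entirely in the request.
# `complete` is a hypothetical stand-in for a chat-completion endpoint.

def complete(messages: list[dict]) -> str:
    """Hypothetical LLM call; returns a canned reply for illustration."""
    return f"(model reply, given {len(messages)} messages of context)"

history = [{"role": "user", "content": "My name is Kevin."}]
history.append({"role": "assistant", "content": complete(history)})

# If we started a fresh list here, the model would have no idea who Kevin is:
# nothing from the first exchange was persisted anywhere in the model itself.
history.append({"role": "user", "content": "What's my name?"})
print(complete(history))  # only "remembers" because we resent the earlier turns
```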

The World View

As human beings, we understand not only the knowledge we accumulate over time but also how that knowledge fits into the broader context of the world around us.

While we are not soothsayers (well, most of us anyway), we do have the ability to see into the future to a certain extent and make decisions based on all three temporal dimensions: the past, the present, and the future.

To illustrate what I mean, here's an example:


Imagine it's a bright sunny day, incredibly hot, and you are sweating like crazy. At this moment, you have all the knowledge of everything you've ever learned. You have memories and past experiences too. Sure, you might forget stuff, but overall, you remember most things.

Your mind floats to thoughts of a nice cold ice cream, and you remember from past experience how refreshing and cooling it is to have one.

This now drives your decision to buy an ice cream! But wait! You bought an ice cream from the guy down the street three months ago, and it wasn't all that great.

You also know it's fast approaching lunchtime, and at 12 PM the city center gets really busy. On such a hot and sweaty day, the last thing you want to do is get stuck in a crowd of people.

So eventually, you decide to take a quiet side street instead, where you can buy your ice cream away from the city's hustle and bustle.

While enjoying that ice cream, you look toward the city center and smile 😊; it's lunchtime and everyone is flocking there! Phew, dodged a bullet, didn't you?

Wait, what does this have to do with AI?!

Patience 🙏, eager beaver, I'm getting there 🙃

In the above example, your brain uses senses, memories, past experiences, and emotions all at once to build a complex decision tree in record time, whilst consuming very little energy.

Furthermore, you are even using that information to predict the future, along with possible edge cases and consequences.

Models, on the other hand, have no worldview and are narrowly focused: they take your prompt and do some math on it to determine patterns and complex relationships between words.

Thereafter, based on the data they've been trained on, they generate the response that's most likely to satisfy the prompt. The quality of your prompt therefore directly impacts the calculation, and thus the final result.
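
To make "do some math on it" concrete, here's a minimal sketch of next-token prediction, the core loop behind every LLM response. It uses the Hugging Face transformers library, with GPT-2 purely as a small, illustrative stand-in for bigger models:

```python
# Minimal sketch of next-token prediction (pip install torch transformers).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The theorem of Pythagoras states that"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # one score per vocabulary token, per position

# Only the last position matters when predicting the *next* token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id):>12s}  {prob:.3f}")
```

There is no worldview anywhere in that loop: just a probability score for every token in the vocabulary, sampled over and over until a stop condition is hit.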

This is perfectly fine for a simple "lookup" question like: "Explain the theorem of Pythagoras". The question has one clear goal, so the model can easily determine the meaning and generate an accurate response.

Now give it a prompt like: "Build me a landing page for an electrician, the colors are red and black. For the content, use placeholder images and text. I need 5 pages: About, Services, Contact, Gallery..."

If you ask a human to do this task, even with this poor specification, they will probably come back with a complete website. AI, on the other hand, even powerful models like Sonnet, will build you something, but it's never going to be polished and will most likely miss obvious features.

What is AI tunnel syndrome?

In the ice cream example, your worldview allowed you to piece together a complete picture: the environment, the taste, the texture, the horror of overcrowding, and the pleasant memory of eating the ice cream.

Couple that with the fact that you are conscious, and every event in your life, no matter how small or big, contributes to your worldview, your memories, and your learning.


To drive home the true meaning of "AI Tunnel Syndrome", let's look at another practical example. In the real world, just writing code is not enough; you need to sit in meetings with non-technical people, who often don't understand their actual requirements, their budget constraints, or the various factors that come into play when building out an application.

Imagine a request like this: "We need something like Twitter, so we can all communicate internally with each other, share files, and even video call. Please can you build this for us?"

🤖: Sure! Has access to the codebase, goes off, generates some Next.js code. Mostly works, but there is no auth integration, so just about anyone can initiate a chat; there's no MIME validation on files, so you can upload just about anything; and the UI has the old Twitter bird logo?!

👨‍💻: I ain't got 3 months to build that, we'll just set up Slack.

The human immediately identifies information beyond the prompt: the context, and the consequences of executing the task in terms of time and budget constraints.

The AI, on the other hand, finds the relevant information and starts building, but doesn't cater for many edge cases or even consider a pre-built solution like Slack.
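
To make one of those edge cases concrete, here's the kind of upload check the generated app skipped. It's sketched in Python for brevity (in the Next.js scenario it would live in an API route), and the allowlist, size limit, and magic-byte table are illustrative assumptions, not a spec:

```python
# A sketch of the upload validation the generated app skipped.
# Allowlist, size limit, and magic bytes here are illustrative assumptions.
ALLOWED_TYPES = {"image/png", "image/jpeg", "application/pdf"}
MAX_BYTES = 5 * 1024 * 1024  # 5 MB
MAGIC_BYTES = {
    "image/png": b"\x89PNG\r\n\x1a\n",
    "image/jpeg": b"\xff\xd8\xff",
    "application/pdf": b"%PDF-",
}

def validate_upload(content_type: str, data: bytes) -> None:
    """Reject anything outside the allowlist before it touches storage."""
    if content_type not in ALLOWED_TYPES:
        raise ValueError(f"File type not allowed: {content_type}")
    if len(data) > MAX_BYTES:
        raise ValueError("File too large")
    # Never trust the declared Content-Type alone: verify the magic bytes too.
    if not data.startswith(MAGIC_BYTES[content_type]):
        raise ValueError(f"Contents don't match declared type {content_type}")

validate_upload("image/png", b"\x89PNG\r\n\x1a\n" + b"...")  # passes
```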

💡 The lack of this complete "big picture" view (which is second nature to human beings) is what I call "AI Tunnel Syndrome".

Enter the Agent


Agents are an interesting next evolution of AI models. They are like the conductor of a symphony orchestra, in that they coordinate models so that they can accomplish much more complex tasks.

Agents suffer from "AI Tunnel Syndrome" to a lesser extent; thus, in niche roles, they can be drop-in replacements for actual human beings.
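
Here's a hedged sketch of what that coordination looks like in practice: a plain loop in which the model either asks for a tool or declares it is done, and the orchestrator feeds every observation back into the conversation. `call_model` and the tool stubs are hypothetical stand-ins, not any particular framework's API:

```python
# A minimal agent loop: the orchestrator repeatedly asks the model what to do,
# runs the chosen tool, and feeds the observation back into the conversation.
# `call_model` and TOOLS are hypothetical stand-ins, not a real framework.

def call_model(history: list[dict]) -> dict:
    """Stand-in for an LLM call: request a tool once, then wrap up."""
    if not any(msg["role"] == "tool" for msg in history):
        return {"tool": "search_docs", "input": "internal chat requirements"}
    return {"final_answer": "Recommend an off-the-shelf tool before building."}

TOOLS = {
    "search_docs": lambda query: f"(stub) top results for {query!r}",
}

def run_agent(task: str, max_steps: int = 10) -> str:
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = call_model(history)
        if "final_answer" in action:              # the model decided it is done
            return action["final_answer"]
        observation = TOOLS[action["tool"]](action["input"])
        history.append({"role": "tool", "content": observation})
    return "Gave up: step budget exhausted"

print(run_agent("We need something like Twitter for internal comms."))
```

Because the orchestrator can inject tool results, budgets, and prior observations into the context, the model sees a little more of the big picture on each step, which is why agents narrow the tunnel without eliminating it.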

This is the future: AI agents working side by side with humans! More on that in a future article.

What do you think? Do you agree? I would love to hear your feedback and thoughts. Until next time, thanks for reading!

Top comments (2)

Ingo Steinke, web developer

AI tunnel syndrome is definitely one of AI's most underrated weaknesses. I love your ice cream example, and I have dealt with customer requirements like the ones you describe! Experience, emotion, intuition, and thinking beyond the given information are qualities that make us human and won't be replaced by digital technology any time soon, if ever.

I vaguely remember posts and discussions about the agent theory back in the 1990s, so I'm already excited about your upcoming article!

Kevin Naidoo

Awesome 😊, thanks for reading.