Don’t Let AI Programming Tools Ruin Your Career: Treat Them as Your Interns, Not Employees or Teachers!

Disclaimer: This article contains no AI-generated content; it was written entirely by hand. The first draft was written in Chinese and then translated into English with Grok-3.

Yesterday, I came across some expressions of concern, such as “AI is Cultivating a Generation of ‘Illiterate Programmers’ Who Can’t Code.” I also recall earlier claims that “AI will ruin junior programmers.” Whether you agree or disagree, you must admit there’s some truth to these views—they’re not entirely baseless.

Here are the main points of these concerns:

  1. AI-assisted programming causes problem-solving skills to deteriorate, as people have fewer chances to think independently.
  2. There’s a “withdrawal” effect from AI programming, leaving people feeling dumber or slower without it.
  3. “We haven’t become ‘10x programmers’ thanks to AI; we’ve just become 10 times more dependent on it.”

Speaking for myself, I feel rather fortunate. Before large language models became widespread—or even existed—I had already patiently built a relatively solid foundation in computer science. I diligently completed assignments, projects, and exams from classic courses like Zhejiang University’s data structures and MIT’s operating systems. By the end of 2022, when I needed to write my graduation thesis, the initial version of GPT-3.5/ChatGPT emerged—just in time.

If I had started my studies a few years later, I’d surely question the value of learning foundational knowledge. Even if I recognized its importance and began studying, I might not be willing to learn as thoroughly as I did in the “pre-AI era.” I might toss classic papers to AI and ask it to summarize the key points for me. If I didn’t understand something, I’d ask AI rather than digging into the paper myself. For course assignments, AI would auto-complete them for me, and since humans are naturally lazy, I’d struggle to resist: “Oh, it filled in a bunch for me—well, that’s the gist of it, I get it, as long as it runs and finishes the assignment!”

Worse still, since solutions to course assignments have long been part of AI’s training data, the auto-completed work would likely be flawless—robbing me of the chance to sharpen my debugging skills with tools like GDB during my student years. When such a student later tackles real-world problems where AI’s solutions aren’t perfect, they’ll face challenges far tougher than those met by engineers from the “pre-AI era.”

Yet, unlike every other profession in history that has faced an “elimination threat,” software engineers (or programmers) are the ones most inclined to actively embrace the very thing threatening them. It’s a job requirement, after all.

So, how do we safely “embrace” this double-edged sword, this “demon”?

Treat AI programming assistants as your interns, not as employees, colleagues, or teachers!

1) Your subordinates or colleagues are accountable for their code, but your interns are not.

If code you wrote with AI has bugs, would you sue the AI company? Of course not—they’ve already disclaimed liability. Even if it’s code written by AI or commits suggested by AI, the responsibility ultimately falls on you.

What’s that? Your company has integrated AI into CI/CD, say, using AI to automatically generate unit tests? No problem: if the AI’s code fails, whoever deployed the AI is accountable.

In short, responsibility must land on a human. If you fully entrust your work to an AI tool, you can’t say when issues arise, “This was written by so-and-so; go find them!”

For critical logic written by AI, you must review it yourself, just as you’d carefully check an intern’s code before it goes into production.

Moreover, you should be extra cautious about letting AI access highly sensitive information. It’s like not allowing a summer intern to touch your company’s most competitive core technology. Imagine this: the internship ends, the intern leaves, and with the confidential knowledge gained from you, they land a full-time job at your competitor. Would you make interns sign non-compete agreements? Hardly practical.

AI programming assistants carry a similar risk, though it’s unlikely AI companies care about our mundane business code. Still, for programmers, safeguarding core data, keys, and code is worth considering. My initial suggestions are:

  • Store passwords, tokens, and other sensitive information in shell profile files such as ~/.bashrc and read them as environment variables at runtime (see the sketch after this list).
  • Or ensure your project’s config files are ignored by AI tools.
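
For the first suggestion, here’s a minimal Go sketch; the variable name API_TOKEN and the helper loadToken are hypothetical, so adapt them to your project:

```go
package main

import (
	"fmt"
	"log"
	"os"
)

// loadToken reads a secret from the environment instead of from source
// files or configs that an AI assistant might read or index.
// API_TOKEN is a hypothetical name; use whatever your project requires.
func loadToken() (string, error) {
	token := os.Getenv("API_TOKEN")
	if token == "" {
		return "", fmt.Errorf("API_TOKEN not set; export it in ~/.bashrc or your shell profile")
	}
	return token, nil
}

func main() {
	token, err := loadToken()
	if err != nil {
		log.Fatal(err)
	}
	// The secret exists only at runtime; it never appears in the repository.
	fmt.Println("token loaded, length:", len(token))
}
```

For the second suggestion, many AI coding tools offer their own ignore files or privacy settings; check the documentation of whichever assistant you use to see what it actually reads and uploads.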

2) Assign your intern one task at a time—don’t rush.

Think back to your first meeting with an intern—a fresh-faced newbie on their first internship. How did you introduce the upcoming work?

Was it like this?

  • Hello there. Now, assume you’re a senior Go engineer who’ll provide me with secure, elegant Go backend system solutions.
  • I need a backend system for an image hosting service, supporting image uploads, downloads, list queries, and more.
  • Pay special attention to the image storage solution and caching scheme for preview images, as project cost is a key factor.
  • Choose the most ecosystem-friendly Go language service framework to ensure smooth future development and maintenance.
  • The project must be highly scalable, with corresponding hot-update solutions.
  • Produce excellent API documentation to ease frontend integration.
  • Don’t skip linting, static checks, or unit tests.

If you dumped all of that on an intern in one breath, you’d see pure bewilderment on their face. Obviously, you wouldn’t do that.

You shouldn’t do it with AI either. In other words, being too greedy often leads to subpar results and wasted time. We should treat each interaction with AI like a meeting with an intern.

For example:

  • First meeting (asking DeepSeek-R1 or another reasoning model): I want to design a high-performance image hosting backend system. I’m skilled in Go—are there any recommended existing solutions? If not, how should we design it? How do we plan the project?
    • This first meeting is like letting the intern plan their summer project, then refining and finalizing it.
    • At the end of the chat, you can ask AI to output a summary of the design plan, structure diagram, tech stack, etc., much like having the intern summarize the project for you. Then, use that summary to kick off the next meeting or AI chat.
  • Second meeting (using the prior “meeting summary” as a pre-prompt for the AI programming tool): Based on the current project design, let’s start building the minimal viable functionality. Let’s first design and wire up the first service in the dependency-injection setup! (A sketch of what that first increment might look like follows this list.)
    • In this second meeting, letting AI generate specific code is akin to having the intern write code.
    • If something’s off, fix it yourself to prevent cascading errors.
  • Subsequent meetings would focus on daily progress.
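
As a rough illustration of that first increment, here’s a minimal Go sketch of one service wired up via constructor injection. The names ImageService, Storage, and MemStorage are hypothetical placeholders for whatever your design calls for:

```go
package main

import (
	"errors"
	"fmt"
)

// Storage abstracts where image bytes live, so the concrete backend
// (local disk, S3, ...) can be swapped later without touching the service.
type Storage interface {
	Save(name string, data []byte) error
	Load(name string) ([]byte, error)
}

// MemStorage is a throwaway in-memory backend: just enough for a first increment.
type MemStorage struct{ files map[string][]byte }

func NewMemStorage() *MemStorage { return &MemStorage{files: make(map[string][]byte)} }

func (m *MemStorage) Save(name string, data []byte) error {
	m.files[name] = data
	return nil
}

func (m *MemStorage) Load(name string) ([]byte, error) {
	data, ok := m.files[name]
	if !ok {
		return nil, errors.New("not found: " + name)
	}
	return data, nil
}

// ImageService is the "first service": its dependency arrives through the
// constructor as an interface, not a concrete type.
type ImageService struct{ store Storage }

func NewImageService(store Storage) *ImageService {
	return &ImageService{store: store}
}

func (s *ImageService) Upload(name string, data []byte) error {
	if len(data) == 0 {
		return errors.New("refusing empty upload")
	}
	return s.store.Save(name, data)
}

func main() {
	// The injection point: hand the service its storage backend here.
	svc := NewImageService(NewMemStorage())
	if err := svc.Upload("cat.png", []byte("fake image bytes")); err != nil {
		fmt.Println("upload failed:", err)
		return
	}
	fmt.Println("uploaded cat.png")
}
```

Because the service depends on the Storage interface rather than a concrete type, a later meeting can swap in a real disk or S3 backend without rewriting the service, which is exactly the kind of scoped, incremental step an intern (or an AI assistant) can handle.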

Here’s a tip:

  1. Models like OpenAI o1 or DeepSeek-R1 excel at reasoning but are slower (since they reason before outputting) and costlier, so use them for project design.
  2. For daily code generation, opt for standard (non-reasoning) models like DeepSeek-V3 or Claude 3.5 Sonnet; they’re faster and cheaper. If your project context is clear and well-structured, the code quality is often solid. In particular, if you’ve already written one unit test yourself, AI will follow its pattern and handle the rest well (see the sketch after this list). But if you ask AI to write unit tests from scratch, the result may not fully match your expectations.
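
For example, one hand-written, table-driven Go test like the one below gives the model a concrete pattern (naming, table layout, assertion style) to imitate for the rest of the package. ExtOK is a hypothetical function standing in for whatever unit you test first; the code would live in a file such as imaging_test.go:

```go
package imaging

import "testing"

// ExtOK reports whether a file extension is allowed for upload.
// (Hypothetical function, standing in for whatever unit you test first.)
func ExtOK(ext string) bool {
	switch ext {
	case ".png", ".jpg", ".jpeg", ".gif", ".webp":
		return true
	}
	return false
}

// TestExtOK is the one test you write by hand; its table layout, naming,
// and assertion style become the pattern the AI follows for later tests.
func TestExtOK(t *testing.T) {
	cases := []struct {
		name string
		ext  string
		want bool
	}{
		{"png allowed", ".png", true},
		{"webp allowed", ".webp", true},
		{"exe rejected", ".exe", false},
		{"empty rejected", "", false},
	}
	for _, c := range cases {
		if got := ExtOK(c.ext); got != c.want {
			t.Errorf("%s: ExtOK(%q) = %v, want %v", c.name, c.ext, got, c.want)
		}
	}
}
```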

3) You can’t leave an intern’s project completely untouched.

This comes from personal experience. I have a mild compulsion to write some code daily. But over the past six months, that habit morphed into “having AI generate some code daily.” Surprisingly, my productivity slowed down.

During the New Year holiday, I wanted to build a simple SPA (Single Page Application) with basic Vue3 logic. Since I couldn’t focus on code during the holiday, I opted to rely entirely on AI assistance, and over five days I made zero progress.

Eventually, I was stunned to see the AI spinning in circles on my project. I’d already defined a FileUploader.vue component (a somewhat complex one), and in XXXView.vue the AI couldn’t decide whether to reuse it or write new file-upload logic, spawning a host of new issues.

As the holiday neared its end, I finally sat down and looked closely. The logic was so simple! I wrote it myself in no time.

You might say I misused the AI tool or that it’s better with React than Vue. But you can’t guarantee this black box won’t stall on slightly novel problems.

Especially when your code hits production issues, you can’t expect an intern to rush from school to fix it, just as you can’t expect AI to deliver spot-on answers in a pinch.

For AI-written projects, it’s best to understand how they’re implemented. That way, when you need to tweak them, you can dive in quickly.

4) You can’t use an intern as your search engine.

Copying every bug to AI right away is absurd, laughable, and sad. Even if AI advances for another decade, this habit will remain absurd, laughable, and sad—unless you’ve got a cutting-edge brain-computer interface feeding AI all your perceptions and context.

Handing error messages to AI is fine, of course. But it shouldn’t be an instinct. Otherwise, why employ you as an engineer? For problems you can roughly pinpoint with your brain, a quick search engine query beats asking AI in speed and fit.

The same applies to basic info lookups. Programmers these days seem too lazy to read documentation.

Imagine you’re using the aws s3 sync command and want to know what --exact-timestamps does. Would you call an intern over and ask, “What’s this parameter do, and give me a few specific examples?”

Two minutes later, the intern returns with a markdown report cobbled together from sketchy blogs. You read it, half-grasping it, unsure if it’s accurate.

Why not just search doc: aws s3 sync --exact-timestamps on a search engine? In five seconds, you’d have the most official, reliable, and clear documentation in front of you. It’s all text—why settle for someone else’s regurgitated version?

5) An intern can be an expert in certain areas, and that’s not shameful.

Suppose your boss wants to launch a CUDA-related business line and names you the technical lead. You could recruit interns who’ve worked with CUDA in school and listen to their ideas during project design.

Likewise, when we hit new problems and lack direction, we can “consult” AI. Just as we might consult interns—expertise varies, and that’s normal.

But with interns, you wouldn’t carelessly toss out shallow questions. Even for something simple, you’d ask rigorously: “Given the current situation, what’s the better technical route? Please outline it based on ecosystem, development difficulty, maintenance difficulty, performance, cost, etc.” You should engage AI the same way to maximize the value of the information you get.


Conclusion: Those who know me might recall that a year ago, I’d always put “AI” in quotes. I don’t believe today’s probabilistic models have true “intelligence.” I still feel that way. But I also believe we should move with the times rather than cling to formalities. Though I personally dislike AI tools, I’d wager that, outside of professional researchers, I’m among the most active, proactive, and extensive users of new AI tools. Let’s learn critically together, keep improving, and strive for excellence.
