AI Can't Write Better Code Than Devs
As artificial intelligence (AI) continues to evolve, it's becoming an increasingly popular tool amo...
So true:
In my workflow, especially when I work alone (side projects), AI is a good thing to argue with. So it's sometimes capable of replacing a talk with a colleague.
Yes, in my experience, AI-assisted coding mostly reminds me of the times when I had to review and patch code after my younger colleagues. However, as with those colleagues, AI often knows some interesting library, technique, or idea.
Certainly beneficial, does not solve for everything.
Yep, I use it a ton for my side projects, haven't used it much in large open source or collaborative projects yet, but I'm sure to try some day.
You should add the caveat 'yet' to this article. Broadly speaking, I agree that AI can be weak. I just had a bug in some code I'm writing and asked Claude to identify the problem. Its first response recommended changes that didn't really seem to address the issue. When I challenged it, it re-evaluated and actually identified the issue. The thing is, I wasn't able to see where the issue was occurring just by eyeballing the code, because the bug was a second-order effect in a browser. Claude did manage to identify it. But could it write a whole system? Obviously not. Yet.
I’d love to agree with you, but as mentioned in Apple’s article, given the current design, AI lacks true reasoning capabilities and won’t achieve it without a significant breakthrough. So, I’d say replacing human developers is still a distant possibility. The idea that AI might someday get there seems like an obvious stretch—one the article already addresses.
Oh man. Here we go again. Is there some kind of objective benchmark we can test for this "true reasoning" capability? A benchmark that 90% of humans pass but which AI consistently fails? The main issue with current models and the mechanism for training them is that they use backpropagation. It is a very expensive approach which results in crystalline models. Switch to Hebbian learning (no small matter) and suddenly you get local learning in real time and adaptive networks. Is that far away? Probably not, given the value of what has been achieved to date. I 'lost' a debate in 2017 about how we wouldn't see systems like today's for at least my lifetime. By 'lost' I mean most people agreed my estimate was unrealistic. My estimate was 2020. Well, it's 2024 and we have machines that pass the Turing Test so convincingly they need to be conditioned to be explicit about being AI the whole time. Every time some proclamation is made about how we won't make further progress, we just blast on past it.
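The backprop-versus-Hebbian contrast above can be illustrated with a toy sketch (my own illustration, not from the thread): a plain Hebbian update for a single linear neuron is purely local, using only each weight's own input and the neuron's output, with no global backward pass. The `hebbian_step` helper and the numbers are hypothetical.

```python
# Toy illustration of a local Hebbian update for one linear neuron.
# Unlike backpropagation, each weight change uses only local signals:
# dw_i = lr * y * x_i ("cells that fire together wire together").

def hebbian_step(w, x, lr=0.01):
    """Return updated weights and the neuron's output for input x."""
    y = sum(wi * xi for wi, xi in zip(w, x))            # neuron output
    w_new = [wi + lr * y * xi for wi, xi in zip(w, x)]  # local update
    return w_new, y

w = [0.2, -0.1, 0.4]        # toy initial weights
x = [1.0, 0.5, -0.5]        # an input pattern presented repeatedly

w, y_first = hebbian_step(w, x)
for _ in range(20):
    w, y_last = hebbian_step(w, x)
```

Because this update is unsupervised and unnormalized, the response to a repeated pattern simply keeps strengthening (practical Hebbian variants such as Oja's rule add normalization); the point is only that no error signal is propagated back through a network.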
Yeah, you might be right—maybe. But I never said AI will never replace developers. What I did say is that AI can't replace developers, at least not with its current capabilities. As I mentioned, unless AI undergoes some groundbreaking advancements, it simply won't be able to do it.
If AI can't consistently produce new and innovative solutions at a 90%+ success rate like human programmers, no one who values quality would choose AI over human developers. Sure, small ideas or companies might rely on AI to handle most tasks, but do you really think a company like Apple or Google would fire their entire staff and let AI run everything? I highly doubt it.
To add to that, AI can probably handle what most junior developers do—which is kind of sad because it raises the question: how will they get hired and eventually become senior devs?
But currently, AI can't replace a professional's job. And I keep stressing right now because I never said it won’t happen in the future. I’m not here to reassure anyone that they won’t lose their jobs down the line or anything like that.
I've heard that AI can give junior practitioners in many fields an easy path to tap the collected experience (and occasional collected stupidity) of seniors. In other words, it can shorten their learning curve. Perhaps it will put a premium on cognitive ability, as opposed to wisdom.
That said, I still treat Copilot as a junior, albeit knowledgeable, developer.
Aren't many of the limitations being addressed by providing the ability to execute code and reason in steps? This is a very fast-moving field, and dual-mode thinking is on the cards right now. I find that context is the biggest issue to AI working well on my codebase.
My point might not have been clear: if AI can write most or all of your code, you’re probably not building anything new or impressive. Try creating something like Bluesky from scratch using only AI. Or better yet, build an entire platform like Twitter (sorry, X) from the ground up with nothing but AI. Then, ask it to ensure the performance surpasses what human developers achieved. You’ll quickly understand why companies still hire developers instead of relying solely on AI, despite it being cheaper in their own words.
True innovation and breakthroughs currently come from humans. That’s not to say AI can’t assist with complex codebases—it absolutely can. But can it truly replace a developer? Not yet.
Fair enough, I agree with that. It can't replace a senior developer or a software architect; if it did, it would create commodities (because that's what people would ask it to do, just like they ask for the articles we see littering this place).
My initial response was to the impression that "if it does work, it's a bad idea because it removes some things humans enjoy doing", which is what I took from my first reading. I don't disagree with your examples. If we were going to globally vote on whether AI should replace those things, whether we should ban it from those tasks or create laws to reduce its remit, I'd be voting on the side of the humans. No such laws will come, so we must ride the wave and work out what the machines will do next.
I don't know how quickly AI will advance; I don't know what it will continually fail at. The AI music generation stuff from Udio shocked me with its advancement, so who knows where.
To be fair, my comment was also a response to the other comments here, which I'd read.
So, here are my general findings:
I think the moral arguments of the copyright and training dilemmas are a challenge. I learned everything I know by looking at other people's work. If we make machines capable of doing that faster, it's only a matter of time until they are trained on every piece of publicly available material, and we have to start talking morally about what can be learned from observation, who has the power to learn, and why someone can't invent a machine that can learn. We can pass a law in one jurisdiction, but this is an arms race, and there are many jurisdictions...
For now and for the foreseeable future, though, these tools are only really useful to aid a human in moving faster, so the output is still reliant on the human in the mix, their skills and their vision. That may be as far as they go.
There are many historical references for the challenges of technological advancements, especially during the Industrial Revolution. Perhaps this is the first time such advancements have stretched beyond business boundaries, though.
Recently I had to port some fairly complex code from VBA to Arduino C++, which differs in many respects. I was truly amazed that the result from ChatGPT compiled without changes. There were some substantial changes in error handling, as the Arduino platform uses serial output only. So it seems that AI is getting better at handling "complex" problems.
I can't stress this enough: this isn't the kind of complexity I was referring to in my article. AI can indeed handle a lot of tasks—I even converted my project from React to Svelte using AI.
But if your code was truly as complex as you claim, to the point where it would challenge a human, AI wouldn’t have fixed it without errors. AI can certainly assist with some codebases and issues, but when it comes to really complex code, it's just not capable right now.
Things are evolving rapidly and I'm pretty unsure where this will end up. In the end, all predictions aren't worth much. Either things happen or they don't. Or they develop in a completely unexpected direction.
Maybe AI can't replace a professional's job. But even now it can do things humans can't. I was just amazed that, over half a year or so, AI solutions have grown so much better. Maybe in one year, or in ten, AI can replace a professional's job, or make it much more creative, as it frees people from doing the stupid things.
The only thing we can say for sure is that AI will change many job profiles. For good or for bad? Up till now, that has been a human decision...
I think developers who do not use the AI tools are the ones who will get replaced because they will become slower.
On the other hand, if you lean on those tools a lot, you will get rusty.
I think over-reliance on AI tools is the first sign that what a developer is doing is on its way to becoming a task AI can take over. In the short term, making use of AI to be more productive than the competition might work, but in the long term, the only solutions are a) seek higher grounds by deepening your skills and taking on more tasks that simple AI automation still cannot handle or b) overcome capitalism.
Yeah, you're kind of right. I mean, I also use AI (for web dev), but depending on your line of work as a dev, you may or may not be able to use AI much, so it's not fair to say that everyone who doesn't use AI will become slower.
But depending on the task, using AI can be a real help.
What I mean by slower is that every developer has to Google for a solution or for how to install a dependency... Small tasks like this are much faster with a prompt than with a Google search.
I completely understand—that's why I mentioned it depends on your line of work. If your company uses proprietary software and doesn't allow its documentation to be indexed by Google or utilized by existing AI models, then using AI might be counterproductive because it lacks the necessary context.
As a web developer, there's already a wealth of publicly available information that AI has learned from, making it quite useful in our field. However, for areas like cybersecurity or network engineering, depending on the company's policies and the nature of the work, leveraging AI might not be as feasible or beneficial.
I think you're right that, right now, AI agents will not be replacing traditional coders. My experience is that even when given the proper context, LLMs can still give you code or responses that don't reflect your needs, and a human touch is needed.
I think another big problem with LLMs in the long term, when it comes to creative solutions (e.g. writing, coding, etc.), is that they are fundamentally "non-original". They do not come up with new approaches or ideas; they only use old ideas to find solutions and attempt to give you the best answer from what already exists.
I think "creative codes", or coders solving problems in unique ways or solving new problems will always been needed.
Yeah, what I do believe is that they will be replacing most of the existing no-code website builders, though.
yet
Excellent points. AI is good and will aid us. However, the hype and exaggerated expectations vested in it as of today are harmful. In the future, maybe... But that also means "maybe not". And somehow, those who believe in the "maybe" tend to lean towards "surely". On occasion, they lean quite heavily... Like, 89 degrees heavily. :)
On the other hand, when I was a student, I read an article saying that instead of large desktop tower computers, everybody will be equipped with a personal, pocket-sized device and have global access to the world's full information for free...
But they also suggested we'd have flying cars, so I'd go with "maybe", still.
Hello Michael Amachree,
thank you for your article.
It's easy to read and provides a good overview of why you think AI can't replace developers.
In my opinion, a great analogy (it confirms my experiences):
"Always approach AI as a junior developer that you guide, giving it clear instructions and correcting its mistakes."
I also really like how you describe the problem and offer a solution to it.
Recently, I used ChatGPT for the first time to solve an issue with placing a child window using the Win32 API. Whilst the initial response from ChatGPT was wrong, it was easily corrected. I didn't plan on using any code generated, I was after a full understanding of the problem, but the code snippets generated looked like they would work.
I have to say, the whole experience was like talking to an intelligent rubber duck. It was like having a seasoned colleague to talk things through with, only the responses came back in seconds rather than the minutes it would take if you were chatting with a human.
Worth reading!
Great perspective, "Treat AI as a Junior Dev". Thank you for sharing
Nice
In time, AI will replace developers for sure. I am very confident!
That's true just for now and today
But tomorrow..?