I write a lot of code using AI assistance. Not because I don't like coding, but because it allows me to spend more time on the semantics of the code and less time on the syntax. A part of me still enjoys getting into the weeds of the syntax, but when there's a delivery on the line, it's somewhat irresponsible to take the scenic route.
Disclaimer: Before we start, I should tell you that if you are strongly opinionated about the aesthetics of your code and take an artisanal approach to writing it, this article is probably not for you. In fact, it may offend you.
Structure Matters: Organizing for AI Understanding
Organize code into larger, well-structured files with clear module boundaries instead of scattered small files. This gives AI agents better context and reduces cognitive load. Group related functionality together and maintain a modular directory structure to help direct AI attention to relevant code sections.
Coding agents build context from the code they can "see" at once. When code is scattered across many small files, the agent may miss important relationships or make assumptions without full context. For example, if related functionality is split across multiple files, the AI might suggest changes that break dependencies it wasn't aware of.
Good structure helps the AI understand the full picture and make more informed suggestions. You might think the agent can follow the imports, and it does - just not always, and not always in the way you expect.
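As an illustrative sketch (the module and every name in it are hypothetical), keeping a domain's validation, registration, and lookup in one well-bounded file means an agent editing `registerUser` sees the validator it depends on in the same context window:

```typescript
// user.ts - a hypothetical, well-bounded domain module. Validation,
// registration, and lookup for the same domain live in one file, so an
// agent editing any one of them sees the others in the same context.

export interface User {
  email: string;
  name: string;
}

const users = new Map<string, User>();

export function isValidEmail(email: string): boolean {
  // Deliberately simple shape check; real validation would be stricter.
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email);
}

export function registerUser(user: User): void {
  // An agent asked to change registration sees exactly which validator
  // it depends on, instead of guessing across scattered files.
  if (!isValidEmail(user.email)) {
    throw new Error(`invalid email: ${user.email}`);
  }
  users.set(user.email, user);
}

export function findUserByEmail(email: string): User | undefined {
  return users.get(email);
}
```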
Naming as Documentation: Clear Intent for AI Understanding
Use descriptive function names that encode behavior; coding agents typically rename a function when they change its behavior, so the names stay honest. This naturally enforces the Single Responsibility Principle - smaller, focused functions become less manageable for humans alone, but more manageable for humans + AI.
Descriptive function names serve as a form of executable documentation that AI can reliably interpret. When a function is named `validateUserEmailAndSendConfirmation()` rather than just `processEmail()`, the AI understands both the purpose and the expected behavior.
This is particularly valuable because AI assistants often suggest changes based on pattern matching - with clear naming, they're more likely to preserve the intended functionality. Or, if they do change the functionality, they're likely to rename the function to match, which makes it easier for you to grok the changes post hoc.
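Here's a hypothetical sketch of that idea; the names and the stubbed email helper are assumptions for illustration, not code from any particular project:

```typescript
// Hypothetical sketch - the function name encodes *both* behaviors it
// performs. If an agent later drops the confirmation step, keeping the
// name honest forces a rename, so the behavior change shows in the diff.

async function sendConfirmationEmail(email: string): Promise<void> {
  // Stub so the sketch is self-contained; a real version would call an
  // email service.
  console.log(`confirmation sent to ${email}`);
}

export async function validateUserEmailAndSendConfirmation(
  email: string
): Promise<boolean> {
  const isValid = email.includes("@") && !email.includes(" ");
  if (!isValid) return false;
  await sendConfirmationEmail(email);
  return true;
}
```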
Regular Entropy Reduction: Code Gardening with AI
Run periodic prompts asking the AI to identify and fix inconsistencies in naming conventions and file organization. Think of this as garbage collection for code structure. For example:

```
Review the [module] for:
1. Inconsistent function naming patterns
2. Misaligned error handling approaches
3. Varying parameter validation styles
4. Mixed terminology and nomenclature
Suggest refactoring to align these patterns.
```
I have a theory (for another day) that foundational models have a default tendency to maximize the number of tokens they produce. Assume, therefore, that the AI will produce a more verbose diff than a human would. Also, depending on the coding agent, it likely won't delete code you've explicitly written, even when it's unused. I can't speak for all coding agents, but I've found this to be the case with most of them; my assumption is that they're trying to minimize false positives when flagging unused code.
Breadcrumb Comments: Short-circuiting the AI's Understanding
Instead of traditional documentation, leave strategic "breadcrumb" comments that act as context anchors for AI. These aren't typical documentation comments, but rather key decision points or architectural assumptions that help AI models understand the "why" behind the code.
```
# CONTEXT: Validation with email is enough because it's immutable
# ASSUMPTION: A user can only have one email address, and they can't change it
```
For a human who has worked in your codebase, these comments seem redundant. But for an AI, they save additional reasoning loops. Most coding agents go through some kind of intent -> context -> reasoning -> diff -> apply loop. What's been helpful for me is treating the agent as a developer who doesn't have a complete understanding of the codebase, and probably never will (depending on its size), but who can get up to speed quickly on a localized section of it.
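Here's a hedged sketch of how those breadcrumbs might sit next to the code they explain (the `findAccount` function and `Account` type are hypothetical):

```typescript
interface Account {
  email: string;
  name: string;
}

const accounts = new Map<string, Account>();

// CONTEXT: Lookup by email is enough because emails are immutable here.
// ASSUMPTION: A user has exactly one email address and cannot change it.
export function findAccount(email: string): Account | undefined {
  // Given the assumption above, email is a stable primary key, so no
  // secondary index or reconciliation step is needed.
  return accounts.get(email);
}
```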
Defensive Practices: Protecting Against AI Mistakes
A couple of helpful practices to defend your codebase against AI-suggested changes:
Write tests before implementation. Many AI agents can inadvertently delete code while trying to maintain internal consistency. Strong test coverage prevents regressions and acts as a safety net for AI-suggested changes. Better yet, have the agent write the tests for the portion of code you're changing before it writes the code itself. My prediction is that TDD will make a comeback as a result.
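As a minimal sketch of the test-first flow (assuming Node's built-in test runner; `normalizeEmail` is a hypothetical function), the assertions pin down the contract before the agent touches the implementation:

```typescript
import { test } from "node:test";
import assert from "node:assert/strict";

// Hypothetical function under test. The assertions are written first,
// so an agent implementing (or later "improving") it cannot silently
// change the contract without a failing test.
function normalizeEmail(raw: string): string {
  return raw.trim().toLowerCase();
}

test("normalizeEmail strips whitespace and lowercases", () => {
  assert.equal(normalizeEmail("  Alice@Example.COM "), "alice@example.com");
});

test("normalizeEmail leaves normalized input unchanged", () => {
  assert.equal(normalizeEmail("bob@example.com"), "bob@example.com");
});
```

Run the file with `node --test` (or via a TypeScript loader such as `tsx`); if the agent later rewrites `normalizeEmail`, the failing assertions surface the regression immediately.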
Enable IDE-integrated static analysis tools. They can guide the AI on maintaining code standards and provide guardrails that help ensure code quality.
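As an illustrative sketch only (an ESLint flat config; the rule picks are assumptions, not recommendations), explicit lint rules become guardrails the agent can read and satisfy:

```typescript
// eslint.config.mjs - illustrative flat config; the rule picks are
// examples, not recommendations. Violations surface as IDE diagnostics,
// which most coding agents read and fix rather than introduce.
export default [
  {
    rules: {
      "no-unused-vars": "error", // flags code the agent forgot to delete
      "eqeqeq": "error",         // forces explicit equality semantics
      "camelcase": "warn",       // keeps naming conventions consistent
    },
  },
];
```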
What I Use
I use a combination of the following agents depending on the task at hand:
- Cursor with Claude: My daily driver for coding assistance. The tight integration with Claude's capabilities makes it excellent for real-time pair programming and complex code changes.
- Qodo Merge: Specifically designed for PR assistance, helping review and suggest improvements to pull requests.
- Code Buff: Great for broader changes to documentation and marketing sites, as well as smaller changes where constant oversight isn't necessary. Particularly useful for batch processing tasks.
- Townie by Val.town: Perfect for writing and hosting one-off JavaScript functions.