AI is becoming an ingrained part of building software.
Beyond the hype of magical features, the practical day-to-day experience among developers remains basic. After talking to hundreds of engineers across geographies and experience levels, here is what we've learned.
1. Clearly scope and define your task.
AI introduces ambiguity and room for error at every step.
For example, commands like @workspace, which are responsible for fetching the right files from your codebase for a given task, are still far from perfect.
For reliable results from your AI tool, provide it detailed task notes and the required files yourself. Reduce its role to only making the code edits.
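One way to keep the AI's role narrow is to assemble the prompt yourself, pasting in the exact files instead of relying on retrieval. A minimal sketch (the function name and file layout are hypothetical, not from any specific tool):

```python
from pathlib import Path

def build_edit_prompt(task_notes: str, file_paths: list[str]) -> str:
    """Hand the model the exact context it needs, so its only job
    is making the code edits -- no file retrieval involved."""
    sections = [f"Task:\n{task_notes}\n"]
    for path in file_paths:
        # Paste the full file content inline, clearly labeled.
        code = Path(path).read_text()
        sections.append(f"--- {path} ---\n{code}\n")
    sections.append("Edit only the files above to complete the task.")
    return "\n".join(sections)
```

The prompt string can then go to whichever assistant you use; the point is that you, not the tool, decided which files are in scope.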
2. Avoid big picture questions.
AI can only ingest a certain amount of code in its context window.
About 128K tokens for GPT-4o and 200K for Claude 3.5 Sonnet.
Big-picture decisions like architecture choices and feature specs require context from many different places (multiple code repositories, docs, internal discussions) and a deeper understanding of your system.
Hence, it's best to use AI only as a suggestive aid and make these decisions ourselves, which is what makes us worthy developers of the AI age.
3. Ask to reason first.
As humans, we first think and then write code. LLMs work best the same way. After defining your task, always ask:
"First think thoroughly and create a plan"
"List out all the edge cases and points to keep in mind"
4. Generate comments or tests for future context.
As we write more and more code with AI, we lose track of why certain decisions were made. The next AI-assisted revision may then overwrite them, breaking our current system.
Ideally, we should write tests for every method as documentation of every scenario a feature must handle. If tests aren't feasible, ask the LLM to document your code with reasoning comments drawn from your interactions with it.
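For example, a test plus a reasoning comment can pin down a decision that would otherwise live only in a chat history. A hypothetical sketch, assuming a `parse_amount` helper that was deliberately written to reject negative values:

```python
def parse_amount(raw: str) -> int:
    """Parse a money amount in cents.

    Decision (from an AI pairing session): negative amounts are
    rejected here rather than downstream, because refunds go through
    a dedicated flow. Do not "fix" this to accept negatives.
    """
    value = int(raw)
    if value < 0:
        raise ValueError("negative amounts are handled by the refund flow")
    return value

def test_negative_amounts_rejected():
    # Documents the scenario the feature must handle, so a future
    # AI revision cannot silently start accepting negatives.
    try:
        parse_amount("-500")
    except ValueError:
        pass
    else:
        raise AssertionError("negative amounts must be rejected")
```

The test doubles as documentation: anyone (human or model) who changes the behavior gets an immediate, explained failure.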
5. Build custom agents for your needs.
Many times, general coding assistants aren't much help for our specific needs. You can leverage CommandDash to create your own coding assistant when:
- Integrating a new library or package
- Onboarding onto a new codebase and contributing to it.
- Generating code matching a specific set of guidelines.
CommandDash offers 1000+ pre-built agents, and you can create your own as well.
Hope you found this article helpful 🙏🏼