As developers, we're always looking for ways to improve our productivity and streamline our workflows. With the recent advances in AI (and my firm belief that these technologies will be game-changing), I decided to put these tools to the test in a real-world development scenario.
My Goal: Build a full-stack AI-powered food tracking app PoC in just 7 hours.
The Result: It worked pretty well! But not without challenges.
Here's what I learned about using AI as a development buddy.
The Project
The goal was ambitious but focused: create a mobile app that lets users photograph their food and automatically log it to a food tracker using AI recognition. The tech stack included my go-tos: Quasar Framework (a VueJS framework), Google Firebase, and Capacitor (a mobile development framework), with Google Gemini 2.0 Flash for AI inference.
Nothing super fancy, but more than enough complexity to put AI assistance to the test.
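To give a sense of the core loop (snap a photo, send it to Gemini, get back structured food data), here's a minimal sketch of the inference call using the @google/generative-ai SDK. The prompt wording, the DetectedFood shape, and the JSON response config are my assumptions for illustration, not the app's actual code.

```ts
import { GoogleGenerativeAI } from "@google/generative-ai";

// Illustrative shape; the real app's data model may differ.
interface DetectedFood {
  name: string;
  confidence: number; // 0..1
}

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY ?? "");
const model = genAI.getGenerativeModel({
  model: "gemini-2.0-flash",
  generationConfig: { responseMimeType: "application/json" },
});

// Send a base64-encoded photo and ask for a JSON list of detected foods.
export async function detectFoods(base64Jpeg: string): Promise<DetectedFood[]> {
  const result = await model.generateContent([
    { inlineData: { data: base64Jpeg, mimeType: "image/jpeg" } },
    'List the foods visible in this photo as a JSON array of objects, ' +
      'each with "name" (string) and "confidence" (number between 0 and 1).',
  ]);
  return JSON.parse(result.response.text()) as DetectedFood[];
}
```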
The AI Tools
I used a combination of AI tools to assist with different aspects of the project:
- ChatGPT (o1 Model): For high-level planning, requirements gathering, and general guidance
- Claude 3.5 Sonnet via WindSurf/Cascade: For code generation and specific development tasks, I used WindSurf by Codeium, an IDE built on top of Visual Studio Code with an AI code writer built in. It has direct access to your files and can quickly write and edit entire files, with version control.
- Google Gemini 2.0 Flash: For image recognition and food classification. This model is super powerful and super cheap.
The "AI as Dev Intern" Approach
One of the most valuable insights I gained was thinking of AI as a junior developer or intern. This mental model completely changed how I approached the collaboration. Just as you wouldn't dump an entire project on a new intern's desk and expect perfect results, I learned to:
- Be explicit with instructions
- Break tasks into manageable chunks
- Review output and provide feedback
- Not expect too much at once
- Give guidance and direction when needed
This approach paid off, and worked far better than treating the AI as either "all-knowing" or "completely unreliable".
What Worked Well
Planning and Requirements
AI proved exceptionally helpful in the planning phase. I started with a rough project outline and used iterative prompts to refine it. For example, one of my initial prompts was:
Can you help me plan out making the following app in 7 hours:
<information about the app>
Detail out how long I should spend on each section,
and revise my overall plan as needed.
This sparked a back-and-forth that helped crystallize the project scope and timeline, leading to a more realistic and detailed plan.
Documentation
The AI tools excelled at generating and maintaining documentation. They could quickly create comprehensive markdown files and keep them updated as the project evolved. This freed up valuable time for actual development work.
Iterative Development
Breaking down development into smaller, focused prompts proved highly effective. Instead of trying to generate entire components at once, I used a cascade of prompts, each building on the previous one. For example:
- First, setting up Firebase backend infrastructure
- Then, building the frontend boilerplate
- Following with specific features like the camera capture screen (a rough sketch of this step follows below)
- Finally, adding data processing and display functionality
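As an example of one of those focused chunks, here is a rough sketch of the camera capture step using the Capacitor Camera plugin and the Firebase modular SDK. The function name, storage path, and options are illustrative assumptions rather than the app's actual code.

```ts
import { Camera, CameraResultType, CameraSource } from "@capacitor/camera";
import { getDownloadURL, getStorage, ref, uploadString } from "firebase/storage";

// Capture a meal photo and store it in Firebase Storage, returning its download URL.
// The function name and storage path are assumptions made for this example.
export async function captureMealPhoto(userId: string): Promise<string> {
  const photo = await Camera.getPhoto({
    source: CameraSource.Camera,       // web builds need @ionic/pwa-elements for the camera UI
    resultType: CameraResultType.Base64,
    quality: 80,
  });

  const storageRef = ref(getStorage(), `meals/${userId}/${Date.now()}.jpg`);
  await uploadString(storageRef, photo.base64String ?? "", "base64", {
    contentType: "image/jpeg",
  });
  return getDownloadURL(storageRef);
}
```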
A neat trick I found was to have the AI keep notes on progress and next steps, so it had something to reference between prompts. Since I was using WindSurf, I could use a "master prompt" to keep the conversation going. The master prompt would instruct the AI to refer back to the notes and add to them as it went along writing code.
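The master prompt was something along these lines (paraphrased for illustration; the PROGRESS.md file name is just an example, not necessarily what I used):

Before writing any code, read PROGRESS.md for the current state and the next steps.
After finishing each task, update PROGRESS.md with what was done,
any open issues, and what should come next.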
Bug Fixes and Post Development
Another place where WindSurf shone was in bug fixes and post-development tasks.
After testing the app and noting some bugs, I passed my list to WindSurf, and it quickly identified and corrected issues, and even generated test cases to ensure the fixes worked. This saved me a ton of time and effort.
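To give an idea of what those generated tests looked like, here is a hedged sketch in Vitest for one of the fixes (an empty search input should not trigger the debounced search). The createDebouncedSearch helper is a stand-in I wrote for illustration, not the app's actual code, and the app's real test setup may have differed.

```ts
import { describe, expect, it, vi } from "vitest";

// Hypothetical debounced-search helper, standing in for the app's debounceSearch.
function createDebouncedSearch(run: (term: string) => void, delayMs = 300) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (term: string) => {
    if (timer) clearTimeout(timer);
    if (!term.trim()) return; // the fix: never schedule a search for empty input
    timer = setTimeout(() => run(term), delayMs);
  };
}

describe("debounced search", () => {
  it("does not fire when the input is empty", () => {
    vi.useFakeTimers();
    const run = vi.fn();
    const search = createDebouncedSearch(run);

    search("");                     // empty input should be ignored
    vi.advanceTimersByTime(500);
    expect(run).not.toHaveBeenCalled();

    search("apple");                // real input still triggers a search
    vi.advanceTimersByTime(500);
    expect(run).toHaveBeenCalledWith("apple");

    vi.useRealTimers();
  });
});
```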
I had the following list of bugs, and only on a few occasions did I have to manually fix them:
x = Was fixed by AI
- = Was fixed by me, assisted by AI
* = Was not fixed by AI, I had to fix it myself
- [x] BUG> Camera goes blank after taking picture and hitting cancel (Web implementation)
- [x] BUG> In the Detected Foods View, if no foods are detected the list is blank, which is good, but we should show some sort of icon and text to explain that no foods have been added or detected
- [x] BUG> In the Detected Foods View, it would be nice to have each item (text title, confidence, delete button, select matching food drop-down) grouped better to make it clear that all those elements are part of the same food detection. This can be done via a q-card, some better styling, or whatever
- [*] BUG> In the Detected Foods View, the Select Matching Food drop-down does not reopen properly when opened for something added solely via search (rather than an item added via the camera)
- [x] BUG> In the Detected Foods View, the Cancel button should return the user to the Dashboard; debounceSearch should not fire if the input is empty
- [-] FEATURE> Entry Submission (Backend) - Take the image and resize, crop, and compress it to fit nicely in a 128*128 square. Create multiple sizes of the image, and add links to the images in the record for submission (alongside imageUrl); see the sketch after this list
- [x] FEATURE> Dashboard - Show IMG for each captured meal entry, loading appropriate size
- [x] FEATURE> Add View Meal Page -> show all details and charts and stuff
- [x] FEATURE> Add Edit Meal Functionality -> Bring back modal from Camera page view, but only use the modal. Do not start camera.
- [x] FEATURE> Add Delete Meal Functionality
- [x] FEATURE> History View Meals: Ability to view ALL MEAL entries, paginated, by day. Use Quasar Timeline -> have the Meals button in the menu go to this page. This page should show the same type of cards shown for meal entries on the IndexPage.vue
- [x] FEATURE> History View Water: Ability to view ALL WATER entries, paginated, by day. Use Quasar Timeline -> have the Water button in the menu go to this page. This page should show the same type of cards shown for water entries on the IndexPage.vue
- [x] BUG> Water history Edit does not work
- [x] BUG> Water history data is weird
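For the backend image-resizing feature above (the one marked as fixed by me with AI assistance), a Cloud Function along these lines is one way to approach it. This is a hedged sketch using firebase-functions v2 and the sharp library; the function name, the extra sizes beyond 128x128, and the file-naming scheme are assumptions, and writing the generated URLs back onto the meal record is left out.

```ts
// A rough Cloud Functions sketch; names and sizes are assumptions, not the app's actual code.
import { initializeApp } from "firebase-admin/app";
import { getStorage } from "firebase-admin/storage";
import { onObjectFinalized } from "firebase-functions/v2/storage";
import sharp from "sharp";

initializeApp();

const SIZES = [128, 256, 512]; // the feature only required 128x128; extra sizes are illustrative

export const resizeMealImage = onObjectFinalized(async (event) => {
  const { bucket, name, contentType } = event.data;
  if (!name || !contentType?.startsWith("image/")) return;
  if (name.includes("_thumb_")) return; // skip files this function already generated

  const [original] = await getStorage().bucket(bucket).file(name).download();

  // Center-crop to a square, compress, and write one file per size.
  await Promise.all(
    SIZES.map(async (size) => {
      const resized = await sharp(original)
        .resize(size, size, { fit: "cover" })
        .jpeg({ quality: 80 })
        .toBuffer();
      await getStorage()
        .bucket(bucket)
        .file(name.replace(/(\.\w+)?$/, `_thumb_${size}.jpg`))
        .save(resized, { contentType: "image/jpeg" });
    })
  );
  // Writing the generated URLs back onto the meal record is omitted here.
});
```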
Challenges and Gotchas
Not everything was smooth sailing. Some key challenges emerged:
File Editing Issues
I discovered several limitations when it came to file modifications:
- Open files sometimes seemed to get locked and became uneditable by WindSurf
- Longer conversations absolutely decreased the quality of code editing
NOTE: As of writing this, it seems the file lock / failed file editing bug has been fixed in the latest version of WindSurf.
Context Management
The quality of AI assistance degraded in longer conversations. I learned to start fresh conversations for new components and provide condensed context rather than trying to maintain one long session.
As the code got longer and more complex, the AI struggled to keep up. I had to break down the tasks into smaller, more manageable chunks to maintain quality, or had to provide lots of details and context to keep the AI on track.
Going off track would entail dropping features, erasing relevant code, or sometimes building the wrong thing entirely. This was a big time sink, and I had to be very careful to keep the AI on track or decide to code manually.
Knowing When to Code Manually
Sometimes, doing it yourself is simply faster. Learning to recognize these moments saved considerable time and frustration. As noted in my development log: "When it works, it works well. When it doesn't, it can be a time sink."
Best Practices That Emerged
Git Commit Frequently: Before having AI make significant changes, commit your current state. This provides an easy fallback if things go wrong.
Prepare Boilerplate Code: Set up your project structure and basic configurations before engaging AI assistance. In my case, this meant:
- Setting up Quasar/Vue CLI
- Configuring Firebase CLI and components
- Establishing basic project structure
Clear and Detailed Prompting: Be as specific as possible with your requirements, and include relevant context and examples. Spend as much time as you can in your requirements and design phase, and provide as much detail as possible. These notes and this direction will not only help you execute your project better, but also drastically improve the quality of the code the AI generates.
Iterative Feedback: Don't hesitate to correct or guide the AI. For example, when I received output that didn't quite match requirements, I provided specific feedback:
That works pretty well, except a few things aren't working:
1. When I go to the home page (/) I get a blank screen
2. The Dashboard summary view doesn't show macro nutrient information
3. The water should be in cups
This kind of specific feedback usually led to quick, accurate corrections.
The Verdict
Using AI absolutely accelerated development. While it required guidance and occasional correction, the combination of ChatGPT and Claude along with WindSurf allowed me to progress much further than I could have alone in the same time frame.
That said, success required understanding both the capabilities and limitations of these tools. The key was finding the right balance between AI assistance and human expertise, using each where they excel.
Looking Forward
This experiment suggests a future where AI tools become an integral part of the development workflow - for now, not replacing developers, but augmenting their capabilities. The key is learning to work with these tools effectively, understanding their strengths and limitations, and developing workflows that maximize their benefits while minimizing their drawbacks.
For developers interested in incorporating AI into their workflow, I recommend starting small, establishing clear patterns of interaction, and gradually expanding AI's role as you become more comfortable with its capabilities and limitations.
The tools are at the level of a junior developer, and soon with enough training data and iteration, they will be able to do much more. I can see these tools ultimately writing entire applications and handling the majority of the development process. Until the models can simulate a senior developer, we will still need human oversight and guidance.
Strengths:
- Documentation: Generating and maintaining documentation
- Minor Bug Fixes / Troubleshooting: Identifying and correcting bugs
- Unit Testing: Generating test cases and ensuring code quality
- Code Generation / Small Components: Writing boilerplate code and basic components to get you started
- Planning and Requirements Gathering: Outlining project scope and timeline
- Infrastructure Setup: Configuring basic project structure and setup, with Terraform, Dockerfiles, etc.
- Rapid Learning and Prototype Generation: Quickly learning new frameworks and generating prototypes
Weaknesses:
- Complex Logic: Handling intricate business logic or complex algorithms
- Design and UX: Creating visually appealing interfaces or user experiences
- Writing Larger Components: Writing complex modules or larger components end to end
- Context / Memory: Maintaining context over long conversations or across multiple components
- Complex Debugging: Troubleshooting complex issues or edge cases, often requires guidance to be useful
- Up to Date with Latest Tech: Keeping up with the latest frameworks, libraries, and best practices
I'm excited to see where this technology goes and how it will shape the future of software development.
The Pictures!!!
Here are some quick screenshots of the app I was able to build in 7 hours. It's not perfect, but it's a solid start!
(Using Camera to Capture Food)
(Sending Image to AI for Food Recognition)
(Viewing Results - Adding Food to Tracker)
Shameless, But Relevant Plug:
While I am still working on the food app mentioned in this post, I was able to finish another project quite quickly using WindSurf and AI.
If you're interested in an AI-powered Git CLI helper to make commit messages and release notes easier to maintain, check out eGit