AI development is moving at breakneck speed. New models, APIs, and frameworks pop up every month, and keeping up can feel overwhelming. As someone who frequently prototypes AI applications, I found myself constantly rebuilding the same project scaffolding: setting up API keys, installing dependencies, creating Streamlit UIs, and integrating vector databases.
So I thought: Why am I doing this manually every time?
That’s when RunKit was born – a CLI tool that lets developers scaffold AI projects in seconds, not days. Here’s what I learned while building it.
Why I Built a CLI for AI Projects
Every AI project starts the same way:
- Decide on an LLM (Claude, Gemini, or a local model served through Ollama)
- Set up environment variables and API keys
- Install dependencies
- Create a basic UI (usually Streamlit or FastAPI)
- Add optional features like caching, conversation memory, or a vector database
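Steps 2 and 3 alone mean retyping the same few lines in every new repo — something like this (a generic example using the python-dotenv and anthropic packages, not RunKit output):

```python
# The boilerplate you end up rewriting in every project
# (generic illustration; model name and packages are examples, not RunKit's code).
import os

from dotenv import load_dotenv
import anthropic

load_dotenv()  # pull ANTHROPIC_API_KEY in from .env

client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])
response = client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.content[0].text)
```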
I was repeating this setup so often that I started copy-pasting boilerplate code. But maintaining multiple AI projects like this was a nightmare. Dependencies would change, API requirements would shift, and suddenly my old setups were outdated.
So I built RunKit: a CLI tool that lets you do this with a single command:
```bash
pip install run-kit
run-kit my-ai-project
```
And boom! A ready-to-run AI project is generated with best practices, customizable features, and a clean architecture.
The Challenges of Building a CLI
Building a CLI isn’t just about automating tasks; it’s about making the experience smooth for users. Here are some of the challenges I faced:
1. Making the CLI Intuitive
Nobody wants to read a 10-page manual to use a CLI. The goal was to make it as frictionless as possible. That’s why RunKit asks interactive questions:
```
🚀 Initializing AI project: my-ai-project

? Select the LLM provider you want to use:
  > Anthropic (Claude)
    Google (Gemini)
    Local LLM (Ollama)
    Multiple providers

? Select additional features:
  [x] Caching system
  [x] Conversation persistence
  [ ] File uploads
  [x] Vector database
```
This way, you only generate the components you actually need.
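Under the hood, prompts like these are straightforward to build with a library such as questionary. Here's a minimal sketch of the pattern (an illustration, not necessarily RunKit's actual implementation):

```python
# Interactive provider/feature selection, sketched with questionary
# (assumed library for illustration; RunKit may use something else).
import questionary

provider = questionary.select(
    "Select the LLM provider you want to use:",
    choices=[
        "Anthropic (Claude)",
        "Google (Gemini)",
        "Local LLM (Ollama)",
        "Multiple providers",
    ],
).ask()

features = questionary.checkbox(
    "Select additional features:",
    choices=["Caching system", "Conversation persistence", "File uploads", "Vector database"],
).ask()

print(f"Scaffolding with {provider}, features: {features}")
```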
2. Handling Different AI Providers
Each LLM provider (Anthropic, Google, OpenAI, etc.) has its own API structure, rate limits, and quirks. I had to build separate templates for each while keeping the CLI modular. The architecture ended up like this:
```
run-kit/
├── templates/
│   ├── anthropic/   # Claude-specific setup
│   ├── gemini/      # Google Gemini setup
│   ├── ollama/      # Local LLM setup
│   └── base/        # Common project structure
```
This allows RunKit to support multiple AI providers without bloating the setup process.
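With a layout like that, the scaffolder can copy the shared base first and layer the provider-specific files on top. A rough sketch of the idea (the scaffold function and paths are illustrative assumptions, not RunKit's actual code):

```python
# Merge the common base template with the chosen provider's template.
# Sketch only: function and directory names are assumptions.
import shutil
from pathlib import Path

TEMPLATES = Path(__file__).parent / "templates"

def scaffold(project_name: str, provider: str) -> None:
    target = Path(project_name)
    shutil.copytree(TEMPLATES / "base", target)     # common project structure
    shutil.copytree(TEMPLATES / provider, target,   # provider-specific files
                    dirs_exist_ok=True)             # overlaid on top of base

scaffold("my-ai-project", "anthropic")
```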
3. Balancing Simplicity and Customization
Some users just want a quick prototype, while others need a production-ready setup. Finding the balance between “easy to use” and “fully customizable” was tricky.
The solution? RunKit generates a project structure that’s easy to modify but comes with sensible defaults. A generated project looks like this:
```
my-ai-project/
├── app.py             # Entry point
├── .env               # API keys
├── requirements.txt   # Dependencies
├── app/
│   ├── llm/           # LLM connection logic
│   ├── utils/         # General utilities
│   ├── data/          # Local storage for chat history, uploads, etc.
│   ├── styles/        # Custom CSS (optional)
│   └── db/            # Vector database integration (optional)
```
You can start with this as a foundation and expand as needed.
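To give a feel for how those pieces fit together, here's roughly what the generated app.py could look like as a Streamlit chat entry point. The get_completion helper and app.llm.client module are hypothetical names for this sketch; the actual generated code depends on the options you pick:

```python
# app.py – illustrative entry point wiring the UI to the app/llm module.
import streamlit as st

from app.llm.client import get_completion  # hypothetical generated helper

st.title("My AI Project")

if "history" not in st.session_state:
    st.session_state.history = []

# Append the new user turn and the model's reply to the conversation
if prompt := st.chat_input("Ask something..."):
    st.session_state.history.append({"role": "user", "content": prompt})
    answer = get_completion(st.session_state.history)
    st.session_state.history.append({"role": "assistant", "content": answer})

# Render the full chat history
for message in st.session_state.history:
    with st.chat_message(message["role"]):
        st.write(message["content"])
```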
What I Learned from Building RunKit
1. Developers Love Speed
The main feedback I got from early users? They loved how quickly they could spin up an AI project. Time spent setting up boilerplate is time wasted, and a CLI like RunKit saves hours.
2. Flexibility Beats Perfection
No tool will fit every developer’s workflow perfectly. That’s why RunKit doesn’t lock you in. You can modify the generated project as much as you want. It’s your code from day one.
3. Open Source Is the Way to Go
By making RunKit open-source, I’ve received contributions, bug reports, and feature ideas I would’ve never thought of alone. The community is what makes tools like this thrive.
What’s Next for RunKit?
The CLI is still evolving! Here’s what’s coming next:
- Fine-tuning support: RunKit will help you fine-tune custom models
- Multi-modal capabilities: Image and audio input support
- Function calling examples: Easily integrate LLM function calls
- Agent frameworks: Better integrations for AI agents
If you’re interested, check out the repo and give it a star ⭐.
Building this CLI was a game-changer for my AI workflow. It removed so much friction from project setup and let me focus on actually building AI applications instead of messing with dependencies.
If you’ve ever found yourself setting up the same AI project structure over and over again, maybe it’s time to automate it. Trust me, your future self will thank you.