Evgenii Perminov

How I Built a Local LLM-Powered File Reorganizer with Rust

Introduction: Diving (Back) Into Rust

Some time ago, I decided to dive into Rust once again—this must be my nth attempt. I’d tried learning it before, but each time I either got swamped by the borrow checker or got sidetracked by other projects. This time, I wanted a small, practical project to force myself to stick with Rust. The result is messy-folder-reorganizer-ai, a command-line tool for file organization powered by a local LLM.


The Inspiration: A Bloated Downloads Folder

The main motivation was my messy Downloads folder, which often ballooned to hundreds of files—images, documents, installers—essentially chaos. Instead of manually sorting through them, I thought, “Why not let an AI propose a structure?”


Discovering Local LLMs

While brainstorming, I stumbled upon the possibility of running LLMs locally with tools like Ollama and other self-hosted frameworks. I loved the idea of not sending my data to some cloud service. So I decided to build a Rust-based CLI that queries a local LLM server for suggestions on how to reorganize my folders.


Challenges: LLM & Large Folders

  • Initial Model: I started using llama3.2:1b, but the responses didn’t follow prompt instructions well, so I switched to deepseek-r1, which performed much better.
  • Context Limits: When testing on folders with many files, the model began forgetting the beginning of the prompt and stopped following instructions properly. Increasing num_ctx (which defines the model’s context size) helped partially, but the model still struggles with 100+ files.
  • Possible Solutions:
  • Batching Requests: Split the file list into smaller chunks and send multiple prompts (see the sketch after this list).
    • Other Ideas?: If you’re an LLM expert—especially with local models like Ollama—I’d love advice on how to handle larger sets without hitting memory or context limits.
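For illustration, here is a minimal sketch of the batching idea: the file list is split into fixed-size chunks, and each chunk is sent to a local Ollama server with the context window enlarged via the `num_ctx` option in the request body. The `BATCH_SIZE` constant, the prompt wording, and the `suggest_structure` helper are hypothetical placeholders for this post, not the tool's actual code.

```rust
// Cargo.toml:
//   reqwest = { version = "0.12", features = ["blocking", "json"] }
//   serde_json = "1"
use serde_json::json;

const BATCH_SIZE: usize = 40; // hypothetical chunk size; tune for your model

fn suggest_structure(file_names: &[String]) -> Result<Vec<String>, Box<dyn std::error::Error>> {
    let client = reqwest::blocking::Client::new();
    let mut suggestions = Vec::new();

    // Split the file list into chunks so each prompt stays well inside the context window.
    for chunk in file_names.chunks(BATCH_SIZE) {
        let prompt = format!(
            "Propose a folder structure for these files:\n{}",
            chunk.join("\n")
        );

        // Ollama's /api/generate endpoint; "num_ctx" raises the model's context size.
        let body = json!({
            "model": "deepseek-r1",
            "prompt": prompt,
            "stream": false,
            "options": { "num_ctx": 8192 }
        });

        let resp: serde_json::Value = client
            .post("http://localhost:11434/api/generate")
            .json(&body)
            .send()?
            .json()?;

        if let Some(text) = resp["response"].as_str() {
            suggestions.push(text.to_owned());
        }
    }

    Ok(suggestions)
}
```

Batching keeps each individual prompt short, at the cost of the model never seeing the whole folder at once, so the per-chunk suggestions may still need to be merged afterwards.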

CLI Features

  • Configurable Model: Specify the local LLM endpoint, model name, or other model options (see the sketch after this list).
  • Customizable Prompts: Tweak the AI prompt to fine-tune how the model interprets your folder’s contents.
  • Confirmation Prompt: The tool shows you the proposed structure and asks for confirmation before reorganizing any files.
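To give a feel for how these options could be wired together, here is a minimal clap-based sketch with a confirmation step. The flag names (`--llm-address`, `--model`, `--prompt-file`) are hypothetical placeholders for this post, not the tool's actual interface.

```rust
// Cargo.toml: clap = { version = "4", features = ["derive"] }
use clap::Parser;
use std::io::{self, Write};

/// Hypothetical CLI surface for a local-LLM file reorganizer.
#[derive(Parser)]
struct Args {
    /// Base URL of the local LLM server
    #[arg(long, default_value = "http://localhost:11434")]
    llm_address: String,

    /// Model name to use for suggestions
    #[arg(long, default_value = "deepseek-r1")]
    model: String,

    /// Optional path to a custom prompt template
    #[arg(long)]
    prompt_file: Option<String>,
}

/// Ask the user to confirm before touching any files.
fn confirm(question: &str) -> io::Result<bool> {
    print!("{question} [y/N] ");
    io::stdout().flush()?;
    let mut answer = String::new();
    io::stdin().read_line(&mut answer)?;
    Ok(answer.trim().eq_ignore_ascii_case("y"))
}

fn main() -> io::Result<()> {
    let args = Args::parse();
    println!("Using model {} at {}", args.model, args.llm_address);
    if let Some(path) = &args.prompt_file {
        println!("Loading custom prompt from {path}");
    }

    // ... query the model and print the proposed structure here ...

    if confirm("Apply the proposed structure?")? {
        println!("Reorganizing files...");
    } else {
        println!("Aborted, nothing was moved.");
    }
    Ok(())
}
```

Keeping the confirmation step in the main flow means the tool never moves a file the user has not explicitly approved, which matters for anything that rewrites a real Downloads folder.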

Looking for Feedback

  • Rust Community: I’d love code feedback — best practices, performance tips, or suggestions on how to structure the CLI.
  • LLM Gurus: Any advice on optimizing local model inference for large file sets or advanced chunking strategies would be invaluable.

Conclusion

This project has been a great way to re-learn some Rust features and experiment with local AI solutions. While it works decently for medium-sized folders, there’s plenty of room to grow. If this concept resonates with you—maybe your Downloads folder is as messy as mine—give it a try, open an issue, or contribute a pull request.

Thanks for reading!

Feel free to reach out on the GitHub issues page, or drop me a note if you have any thoughts, suggestions, or just want to talk about Rust and AI!
