Pavel Koryagin

The evil decorators of CrewAI. Fight or flight?

After completing the course “Multi AI Agent Systems with crewAI,” mentioned earlier, you’re confident that CrewAI is a piece of cake, and you’re ready to start solving your own tasks! That’s the moment when the first obstacle arises.

ℹ️ This post is part of the “Crew AI Caveats” series, which I’m writing to fill in the gaps left by the official courses and to help you master CrewAI faster and more easily.

To start, you need to create a project locally, right? But the course didn’t teach you how to do that.

You decide to follow the official Quickstart guide to set up your local project, but you see a discrepancy with the course: YAML, a class, and decorators.

This leaves you in a limbo state: “Multi AI Agent Systems with crewAI” does not show decorators at all, then “Practical Multi AI Agents and Advanced Use Cases with crewAI” shows YAML with manual processing, and demos like “How To Create AI Agents From Scratch” use decorators without much explanation.

Decorators and YAML are unexpected

So what to do?

Option 1. My Quickstart

Recommended for your first project right after completing the course “Multi AI Agent Systems with crewAI.” This option uses a plain and straightforward crew boilerplate.

Honestly, I’ve checked a few boilerplates and dozens of examples. All of them either use the decorators (that’s our Option 2) or are broken in some way. The example I recommend here is the closest to perfect.

  • Make a copy of this directory: instagram_post example
  • Initialize a git repo and make the initial commit.
  • Create an empty .env file.

  • In pyproject.toml:

    • Update crewai to the latest version.
    • Add crewai_tools.
    • Run poetry lock.
    • Resolve version conflicts, if any, and run poetry lock again.
  • Run poetry install --no-root.

  • Find self.llm = Ollama(model=os.environ['MODEL']) and replace it with a provider-agnostic version self.llm = LLM(model=os.environ['MODEL']), where LLM is imported from crewai (see the snippet after this list).

  • Do not try to run it as it is.

    • If you do, you’ll have to resolve several dependencies (run Ollama locally, obtain two API keys, install a package missing in pyproject.toml). These complications are not what you need when you start a project.
  • Instead, start editing the tasks and agents to fit your needs right at this point.

    • Start small (one task and one agent; not all the tasks you imagine at once).
    • Fill in the .env file, using your knowledge from the course and the providers you’re already familiar with.
    • Either delete .env.example or synchronize it with your version of .env.
    • The same goes for README.md: delete it or adjust it.
  • Run python main.py.

  • Fix errors.

  • Enjoy yourself.
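For the LLM replacement step mentioned in the list, here is a minimal sketch of the provider-agnostic version, assuming your model id lives in the MODEL environment variable (the model ids in the comment are examples, not from the boilerplate):

import os

from crewai import LLM  # provider-agnostic wrapper, replaces the Ollama-specific class

# Boilerplate line to replace: self.llm = Ollama(model=os.environ['MODEL'])
# With LLM, the provider is selected by the model id itself,
# e.g. MODEL=gpt-4o-mini or MODEL=ollama/llama3
llm = LLM(model=os.environ["MODEL"])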

Proof of concept: it works

At this point, I’d recommend adding monitoring as the previous post suggests.

ℹ️ I haven’t had the capacity to test adding monitoring and to publish an updated version of this boilerplate. Sorry, maybe later. Write in the comments if anything doesn’t work for you.

Option 2. Official Quickstart

Do this only when you have solved a couple of your own problems via Option 1 and gained confidence in using CrewAI elements.

  • Take the official Quickstart (a sketch of the kind of crew class it gives you follows this list).
  • Ensure that the full process, with all the tasks at once, works.
  • Expect to fight problems as soon as you add anything a bit less trivial.
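To give you a concrete picture, here is roughly the shape of the crew class the official Quickstart gives you: a @CrewBase class whose methods are wired to config/agents.yaml and config/tasks.yaml. This is a hedged sketch; the generated project uses its own names, and researcher / research_task here are placeholders.

from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task

@CrewBase
class MyCrew:
    # Paths to the YAML configs shipped with the generated project
    agents_config = "config/agents.yaml"
    tasks_config = "config/tasks.yaml"

    @agent
    def researcher(self) -> Agent:
        # "researcher" must be a key in agents.yaml
        return Agent(config=self.agents_config["researcher"], verbose=True)

    @task
    def research_task(self) -> Task:
        # tasks.yaml references the agent by the method name above
        return Task(config=self.tasks_config["research_task"])

    @crew
    def crew(self) -> Crew:
        # self.agents and self.tasks are collected implicitly by the decorators
        return Crew(agents=self.agents, tasks=self.tasks, process=Process.sequential)

The generated main.py then runs it with something like MyCrew().crew().kickoff(inputs={...}).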

I ran many experiments with code organization around these decorators to avoid the problems I’m sharing below, but so far I have no robust solution to share. I will show you some hacks in snippets in the post “Running tools and tasks individually or in subsets” of this series.

This is it! But you probably want to know why I don’t like the decorators. There’s more to it than just the discrepancy with the course.

The good, the bad, and the fragile

The YAML idea is okay, generally intuitive, and maybe even neat. But making it work requires some implicit transformations.

In plain code, you cannot reference an agent by name:

the_task = Task(
    ...
    agent=the_agent  # A variable: this works
)

the_task = Task(
    ...
    agent='the_agent'  # A string id: this does not work
)

The same is true for tools. But with YAML, this is exactly what you do. To make it work, CrewAI introduces some decorators...

Decorators as they appear in the official boilerplate

At first glance, those five decorators look alright (@tool, the fifth one, is not in the screenshot). The caveat is that they introduce unexpected behaviors. The basic sequential task flow with a complete crew, as in the boilerplate, works, but any deviation results in cryptic error messages.

As a software architect, I insist on testing app components separately (and there is a post on it in this series down the road). However, creating subsets of tasks for focused testing is problematic with CrewAI decorators.

  • If you create a separate crew function for testing, it fails because the agents list hasn’t been initialized by the decorator (a sketch of this pattern follows below).
  • Adding a parameter to the API might work, but only for scalar values.
  • Passing more complex data structures results in cryptic errors:

One of my hacks that failed

Note that things might fail in a different way if you’re reading this post months after it was written.
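To make the first bullet above concrete, here is a hypothetical fragment, meant to sit inside the MyCrew class from the earlier sketch; the method name is mine, not from the boilerplate, and the comment paraphrases the failure I observed:

    # A second, non-decorated method that tries to build a smaller crew
    # for focused testing (hypothetical name).
    def research_only_crew(self) -> Crew:
        return Crew(
            agents=self.agents,            # fails: not initialized by the decorator here
            tasks=[self.research_task()],  # only a subset of the tasks
            process=Process.sequential,
        )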

And one more thing: the problem is not with the decorator pattern per se! The problematic ones are these five (CrewBase, agent, task, tool, crew). By contrast, you can find a good example of decorators in Flow, in the same framework. They are elegant and flexible, and I haven’t faced any problems with them so far.
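For comparison, a minimal sketch of those Flow decorators, based on the documented @start/@listen API at the time of writing (the class and method names are placeholders):

from crewai.flow.flow import Flow, listen, start

class DemoFlow(Flow):
    @start()
    def fetch(self):
        # An ordinary method: easy to call and test on its own
        return "raw data"

    @listen(fetch)
    def summarize(self, raw):
        # Receives the previous step's return value explicitly
        return f"summary of: {raw}"

# DemoFlow().kickoff() runs the steps in order.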

So, if you want full control or simply fewer surprises, consider building your own architecture instead of relying on CrewAI’s default setup. Later in this series, you will find more solutions to incorporate.
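In its simplest form, “your own architecture” is just the plain constructors the course taught: no @CrewBase, no YAML, everything referenced by variable. A sketch with placeholder role, goal, and task texts:

from crewai import Agent, Crew, Process, Task

the_agent = Agent(
    role="Researcher",                     # placeholder texts
    goal="Collect facts about the topic",
    backstory="A meticulous analyst.",
)

the_task = Task(
    description="Research the topic: {topic}",
    expected_output="A short list of facts.",
    agent=the_agent,                       # a plain variable reference, as shown earlier
)

crew = Crew(agents=[the_agent], tasks=[the_task], process=Process.sequential)
result = crew.kickoff(inputs={"topic": "CrewAI decorators"})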

Stay tuned

In the next post: CrewAI’s name has fooled me. Let’s see what its Process variants can and cannot do.
