output_pydantic is awesome and guarantees the output format for you. But its implementation is missing something important and obvious.
ℹ️ This post is part of the “Crew AI Caveats” series, which I create to fill in the gaps left by official courses and to help you master CrewAI faster and more easily.
If you use monitoring, you can make interesting discoveries; here is one of mine.
When you use output_pydantic, the LLM does not see your field descriptions.
Code:
from pydantic import BaseModel, Field


class FillContentPlanGapsOutput(BaseModel):
    date: str
    time: str
    channel: str = Field(..., description="Channel ID. Lowercase name, e.g. twitter, youtube, etc.")
    typeOfContent: str
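The descriptions are definitely there on the model; Pydantic v2 exposes them through model_fields, so you can verify that yourself (a quick sanity check, assuming Pydantic v2):

print(FillContentPlanGapsOutput.model_fields["channel"].description)
# Channel ID. Lowercase name, e.g. twitter, youtube, etc.

They just never make it into the prompt.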
LLM prompt from monitoring:
Here is how I approach the solutions. And here’s a very early draft of a solution:
@task
def fill_content_plan_gaps(self) -> Task:
    # Build a bullet list of output fields and their descriptions from the Pydantic model
    field_info = "\nOutput fields:\n"
    for field_name, field_instance in FillContentPlanGapsOutput.model_fields.items():
        description = f": {field_instance.description}" if field_instance.description is not None else ""
        field_info += f"- {field_name}{description}\n"
    return Task(
        config=self.tasks_config['fill_content_plan_gaps'],
        # Append the field list to expected_output so the LLM actually sees the descriptions
        expected_output=self.tasks_config['fill_content_plan_gaps']['expected_output'] + field_info,
        output_pydantic=FillContentPlanGapsOutput,
    )
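The idea is simple: the field list is regenerated from the model itself, so the descriptions live in one place and can’t drift out of sync with the task’s expected_output.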
Logs:
This is the expect criteria for your final answer: List of content requests with channel, content type, and time slot.
Output fields:
- date
- time
- channel: Channel ID. Lowercase name, e.g. twitter, youtube, etc.
- typeOfContent
you MUST return the actual complete content as the final answer, not a summary.
Ensure your final answer contains only the content in the following format: {
"date": str,
"time": str,
"channel": str,
"typeOfContent": str
}
By the time you read this, the problem might already be fixed in the framework. If not, consider building a generic prompt customizer, something I haven’t gotten around to yet.
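A minimal sketch of what such a helper could look like, assuming Pydantic v2 (model_fields) and simply reusing the approach from the draft above; describe_fields is my own hypothetical name, not part of CrewAI:

from pydantic import BaseModel


def describe_fields(model: type[BaseModel]) -> str:
    # Render the model's fields (and their descriptions, when present) as a bullet list
    lines = ["", "Output fields:"]
    for name, field in model.model_fields.items():
        suffix = f": {field.description}" if field.description else ""
        lines.append(f"- {name}{suffix}")
    return "\n".join(lines) + "\n"

Any task could then append describe_fields(FillContentPlanGapsOutput) to its expected_output instead of repeating the loop.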
Stay tuned
In the next post: No revolutionary observations, just a heads-up about a discrepancy in official boilerplate versions.