Hi there! I'm Raphael, cofounder and CTO at Pilar, a Brazilian company offering software and services to real estate brokers and brokerages under a low-success-fee model. Instead of charging high upfront fees, we take a small percentage of each successful transaction, so our success is directly tied to our customers'. Our team of 20 tech builders is constantly innovating, and our latest product is Pilar Homes, a brand-new real estate portal designed to provide the best experience for home buyers and agents.
In this post, I'll share our experience generating reports using AI, specifically Anthropic's Claude 3.5 Sonnet, comparing two different approaches.
Our philosophy for operating tasks will be covered in a future post (stay tuned!), but in short, they end up as Jira tickets on a "Tech Service Desk" board. Generating a report is one such task, and most take about 30 minutes of engineer time to resolve, with complex reports rarely exceeding a couple of hours. But things are changing: the boutique brands that started with just one or two partners are expanding into larger brokerages, and we're signing more contracts with established players in the industry. While the growing demand for reports could be met by throwing more engineering hours at it, I saw an opportunity to explore AI agents and learn about architecture patterns in a real-world context.
First approach: letting AI run the show and reaching the max_tokens limit
For our initial approach, I exposed tools to the Claude 3.5 Sonnet model that allowed it to execute database queries, convert the retrieved documents to CSV, and write the results to a .csv file.
Here's how we structured it, heavily inspired by the blog post above:
# Each collection object describes a MongoDB collection and its fields
# This helps Claude understand our data schema
COLLECTIONS = [
    {
        'name': 'companies',
        'description': 'Companies are the real estate brokerages. If the user provides a code to filter the data, it will be a company code. The _id may be retrieved by querying the company with the given code. Company codes are not used to join data.',
        'fields': {
            '_id': 'The ObjectId is the MongoDB id that uniquely identifies a company document. Its JSON representation is "{"$oid": "the id"}"',
            'code': 'The company code is a short and human friendly string that uniquely identifies the company. Never use it for joining data.',
            'name': 'A string representing the company name',
        }
    },
    # more collections described after this point, but the idea is the same...
]
# This is the 'system' argument for client.messages.create
ROLE_PROMPT = (
    "You are an engineer responsible for generating reports in CSV "
    "based on a user's description of the report content"
)

# This is the 'user' message
task_prompt = (
    f"{report_description}.\n"
    f"Available collections: {COLLECTIONS}\n"
    f"Company codes: {company_codes}\n"
    "Always demand a company code from the user to filter the data -- the user may "
    "use the terms imobiliária, marca, brand or company to reference a company. "
    "If the user wants a field that does not exist in a collection, don't add it "
    "to the report and don't ask the user for the field."
)
The report_description is simply a command-line argument read via argparse, and company_codes are retrieved from the database and exposed to the model so it knows which companies exist and can recognize a company code in the user input (examples: MO - Mosaic Homes, NV - Nova Real Estate, etc.).
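For context, here is a minimal sketch of how those two inputs can be assembled. The helper names (parse_report_args, format_company_codes) and the sample company data are illustrative, not our production code:

```python
import argparse

def parse_report_args(argv=None):
    """Read the natural-language report description from the command line."""
    parser = argparse.ArgumentParser(description="Generate a CSV report with Claude")
    parser.add_argument("--input", required=True, help="Description of the report content")
    return parser.parse_args(argv)

def format_company_codes(companies):
    """Render company codes the way they are shown to the model, e.g. 'MO - Mosaic Homes'."""
    return ", ".join(f"{c['code']} - {c['name']}" for c in companies)

# In production the companies would come from MongoDB, e.g.:
#   companies = db.companies.find({}, projection={'code': 1, 'name': 1})
companies = [
    {"code": "MO", "name": "Mosaic Homes"},
    {"code": "NV", "name": "Nova Real Estate"},
]
args = parse_report_args(["--input", "Active listings for MO"])
company_codes = format_company_codes(companies)
print(company_codes)
```

Both values are then interpolated into the task prompt shown above.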
The tools available to the model were find and docs2csv:
import csv

from pymongo.cursor import Cursor

# "db" is the pymongo Database handle created at startup
def find(collection: str, query: dict, fields: list[str]) -> Cursor:
    """Find documents in a collection filtering by "query" and retrieving "fields" via projection."""
    return db.get_collection(collection).find(query, projection={field: 1 for field in fields})

def docs2csv(documents: list[dict]) -> str:
    """Write the documents to report.csv and return the file name."""
    print(f"Converting {len(documents)} documents to CSV")
    with open('report.csv', mode='w', encoding='utf-8') as file:
        writer = csv.DictWriter(file, fieldnames=documents[0].keys())
        writer.writeheader()
        writer.writerows(documents)
    return "report.csv"
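For reference, exposing functions like these through the Anthropic Messages API means declaring a JSON Schema for each tool and dispatching the tool_use blocks Claude returns back to the matching Python function. The schemas below are my reconstruction of the idea, not the exact ones we shipped:

```python
# JSON Schema declarations that tell Claude what each tool accepts
TOOLS = [
    {
        "name": "find",
        "description": "Find documents in a MongoDB collection, projecting only the requested fields",
        "input_schema": {
            "type": "object",
            "properties": {
                "collection": {"type": "string"},
                "query": {"type": "object"},
                "fields": {"type": "array", "items": {"type": "string"}},
            },
            "required": ["collection", "query", "fields"],
        },
    },
    {
        "name": "docs2csv",
        "description": "Write the given documents to report.csv and return the file name",
        "input_schema": {
            "type": "object",
            "properties": {
                "documents": {"type": "array", "items": {"type": "object"}},
            },
            "required": ["documents"],
        },
    },
]

def dispatch_tool(name, tool_input, handlers):
    """Route a tool_use block from Claude's response to the matching Python function."""
    return handlers[name](**tool_input)

# The tool list is passed to the API alongside the prompts, roughly:
#   client.messages.create(model="claude-3-5-sonnet-latest", max_tokens=4096,
#                          system=ROLE_PROMPT,
#                          messages=[{"role": "user", "content": task_prompt}],
#                          tools=TOOLS)

result = dispatch_tool(
    "find",
    {"collection": "companies", "query": {}, "fields": ["code"]},
    {"find": lambda collection, query, fields: (collection, fields)},  # stub handler
)
print(result)
```

The dispatch loop then appends each tool result to the conversation and calls the API again until Claude stops requesting tools.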
Claude was able to invoke the find function, performing well-structured queries with projections on our database, and generate small CSV reports (fewer than 500 lines) using the docs2csv tool. Larger reports triggered max_tokens errors, though.
After analyzing our token usage patterns, we realized that most of our token consumption came from processing individual records through the model. This led us to explore a different approach: having Claude generate the processing code instead of processing the data directly.
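To make the failure mode concrete: every document Claude emits as a tool input counts against the output token budget. A crude back-of-envelope estimate, assuming roughly 4 characters per token (real tokenizers vary, so treat this as an approximation only):

```python
import json

def estimate_tokens(documents, chars_per_token=4):
    """Very rough token estimate: serialized length divided by an assumed chars-per-token ratio."""
    return len(json.dumps(documents)) // chars_per_token

# A small report fits comfortably in the output budget...
small = [
    {"code": f"P{i:04d}", "owner": {"name": "Jane Doe", "phones": [{"number": "+55 11 99999-0000"}]}}
    for i in range(100)
]
# ...but a few thousand rows blow past a typical max_tokens limit on output alone
large = small * 50
print(estimate_tokens(small), estimate_tokens(large))
```

Even at this coarse granularity, a few thousand documents routed through the model costs orders of magnitude more output tokens than the limit allows, which is why generating code that touches the data outside the model wins.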
Second approach: Python code generation as a workaround
While solving the max_tokens limitation wasn't technically difficult, it required rethinking our approach to the problem. The solution? Have Claude generate Python code that runs on our CPU instead of processing each document through the model.
I modified the role and task prompts and removed the tools.
Here's the Gist for the report generation code.
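The full script is in the Gist; conceptually, the second approach boils down to extracting the fenced code block from Claude's reply and executing it locally. A sketch with hypothetical helper names (the extraction regex is mine, not from the Gist):

```python
import re
import subprocess

def extract_python_block(reply: str) -> str:
    """Pull the first fenced ```python block out of the model's reply."""
    match = re.search(r"```python\n(.*?)```", reply, re.DOTALL)
    if not match:
        raise ValueError("No Python code block found in the model reply")
    return match.group(1)

def run_generated_report(reply: str, path: str = "generated_report.py") -> None:
    """Write the generated code to disk and run it in a subprocess.
    In practice we read the script before executing it: human review comes first."""
    with open(path, "w", encoding="utf-8") as f:
        f.write(extract_python_block(reply))
    subprocess.run(["python", path], check=True)

reply = "Here is the script:\n```python\nprint('hello')\n```"
code = extract_python_block(reply)
print(code)
```

Writing the script to disk before running it is what makes the review step possible.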
The command to generate the report is:
$ env -S "$(cat .env)" python . --input="Generate a CSV report containing AVA's active listings with their code, owner's contact info (name, email and phone number)"
The Python code Claude generated (it works beautifully):
from pymongo import MongoClient
import csv
from bson import ObjectId

print("Connecting to MongoDB...")
client = MongoClient('mongodb://admin:admin@localhost:27017/')
db = client.pilar

# Get company _id from code
print("Getting company id for code 'AVA'...")
company = db.companies.find_one({"code": "AVA"})
if not company:
    print("Company not found")
    exit(1)
company_id = company["_id"]

# Get active properties
print("Querying active properties...")
properties = db.properties.find({
    "company_id": str(company_id),
    "$or": [
        {"deactivation_details": None},
        {"deactivation_details": ""}
    ]
})

# Create CSV file
print("Creating CSV file...")
with open('active_properties.csv', 'w', newline='', encoding='utf-8') as file:
    writer = csv.writer(file)
    writer.writerow(['Listing ID', 'Owner phone number', 'Owner name'])

    for prop in properties:
        print(f"Processing property {prop['commercial_id']}...")

        # Get owner object
        owner = prop.get('owner')

        # Get owner phone
        owner_phone = "N/A"
        if owner and "phones" in owner and len(owner["phones"]) > 0:
            owner_phone = owner["phones"][0]["number"]

        # Get owner name
        owner_name = "N/A"
        if owner and "name" in owner:
            owner_name = owner["name"]

        writer.writerow([
            prop.get('commercial_id', 'N/A'),
            owner_phone,
            owner_name,
        ])

print("CSV file created successfully!")
Conclusion
Our journey with Claude 3.5 Sonnet demonstrated that AI can significantly improve operating efficiency, but success lies in choosing the right architecture. The code generation approach proved more robust than direct AI processing while maintaining the benefits of automation.
Besides correctly building reports, the code generation approach also lets an engineer review the AI's work before it runs, which is a very good safeguard to have.
To fully automate the process, remove the human-in-the-loop and handle higher volumes of reports, distributing work across multiple agent instances — each processing fewer tokens — would be the natural evolution of this system. For an excellent discussion of the architectural challenges in such distributed AI systems, I highly recommend Phil Calçado's latest post on building AI products.
Key learnings from this implementation:
- Direct AI processing works well for smaller datasets
- Code generation provides better scalability and maintainability
- Having human review in the loop adds reliability