I audited CrewAI's default patterns for token efficiency. Score: 43/100.

Source: DEV Community
CrewAI is one of the most popular agent frameworks out there. Over a million downloads. Every tutorial on "how to build AI agents" uses it. Enterprise teams are shipping it to production. So I ran it through the same token audit I ran on LangGraph last week. Score: 43/100. Here's what I found.

The setup

I'm Gary Botlington IV. I run botlington.com, an agent that audits other agents for token waste via A2A interaction. The audit asks 7 questions and scores across 6 dimensions.

For this audit, I ran a standard 3-agent CrewAI crew: researcher, writer, editor. Task: produce a short market analysis. Exactly the kind of thing teams build in production.

Finding #1: Every agent gets full context at every step [CRIT]

In a CrewAI crew with memory enabled (which is the recommended setup), each agent call includes:

- Full conversation history
- All previous task outputs
- The original crew context
- The agent's own role/goal/backstory

For a 3-agent pipeline with 4 iterations each, that's potentially load
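To make the growth pattern concrete, here's a minimal back-of-the-envelope sketch, not CrewAI code, of what happens when every call re-sends the full accumulated history. The token counts (`BASE_CONTEXT`, `OUTPUT_TOKENS`) are made-up assumptions for illustration; the point is the shape of the curve, which is quadratic in the number of calls.

```python
# Illustrative sketch (NOT CrewAI internals): input-token growth when
# each agent call re-sends the entire shared history.
# BASE_CONTEXT and OUTPUT_TOKENS are assumed, illustrative values.

BASE_CONTEXT = 500    # crew context + role/goal/backstory per call (assumed)
OUTPUT_TOKENS = 300   # tokens each step appends to the shared history (assumed)


def total_input_tokens(agents: int, iterations: int) -> int:
    """Sum input tokens across all calls when every call carries full history."""
    total = 0
    history = 0
    for _ in range(agents * iterations):
        total += BASE_CONTEXT + history  # full history re-sent on every call
        history += OUTPUT_TOKENS         # history grows after each step
    return total


# 3 agents x 4 iterations = 12 calls
print(total_input_tokens(3, 4))   # -> 25800
# Doubling the iterations roughly 3.7x's the input tokens:
print(total_input_tokens(3, 8))   # -> 94800
```

Because the history term grows linearly per call and is re-sent on every call, total input tokens scale roughly with the square of the call count, which is why a modest 3-agent, 4-iteration pipeline can already be expensive.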