Heuristic vs Semantic Eval: When <1ms Matters More Than LLM-as-Judge
There is a default assumption in the agent eval space right now: if you want to evaluate agent output, you need an LLM to judge it. Feed the output to GPT-4o with a rubric, get a score back, done. LLM-as-Judge is the pattern everyone reaches for first.

I want to push back on that. Not because LLM-as-Judge is bad -- it is genuinely powerful for certain problems. But because most teams are using it for evaluations that do not require an LLM at all. They are spending seconds and dollars on checks that a regex can handle in microseconds for free.

Two Approaches to Agent Evaluation

LLM-as-Judge sends your agent's output to another LLM with a scoring prompt. The judge model reads the output, compares it against criteria you define, and returns a score. This is semantic evaluation -- the judge understands meaning, nuance, and context.

Strengths: handles subjective quality, can assess factual accuracy against source documents, evaluates tone and style, reasons about complex multi-step outputs.
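The pattern is simple enough to sketch in a few lines. This is a minimal illustration, not a production implementation: the rubric text, the 1-5 scale, and the `call_llm` callable are all assumptions here -- in practice `call_llm` would wrap your provider's SDK (an OpenAI or Anthropic chat call, say).

```python
from typing import Callable

# Hypothetical rubric for illustration; define your own criteria.
RUBRIC = """Score the agent output from 1 to 5 against these criteria:
- Accuracy: claims are supported by the source material.
- Tone: professional and concise.
Reply with only the integer score."""


def judge(output: str, call_llm: Callable[[str], str]) -> int:
    """Send the agent output plus a scoring rubric to a judge LLM
    and parse the integer score from its reply."""
    prompt = f"{RUBRIC}\n\nAgent output:\n{output}"
    reply = call_llm(prompt)
    score = int(reply.strip())
    if not 1 <= score <= 5:
        raise ValueError(f"judge returned out-of-range score: {score}")
    return score


# Stub transport for illustration; a real judge makes a network call here,
# which is exactly where the seconds and dollars go.
print(judge("The invoice total is $42.", lambda prompt: " 4 "))  # prints 4
```

Note that even this sketch needs error handling the heuristic world never does: the judge is another LLM, so its reply can be malformed or out of range.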