LLM-as-a-judge is attractive: you can score thousands of examples quickly, iterate on prompts, and compare models without waiting on a human review queue. The risk is that you build a fast measurement system that is wrong.
Reliable LLM judging requires the same discipline as any other evaluation system: calibration, controls, and transparency.
Start with rubrics that are testable
A judge is only as good as its rubric. Avoid vague criteria like "helpfulness" without a definition; use rubrics with clear anchors and worked examples (see evaluation rubrics).
For RAG, include groundedness and citation correctness. For tool use, include schema correctness and side-effect safety.
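As a sketch of what "anchored" means in practice, a testable rubric can be expressed as structured criteria with a definition and a fixed anchor per score level, rather than a single "quality" number. The criterion names and anchor wording below are illustrative, not a standard:

```python
# A minimal, illustrative rubric: each criterion has a definition and
# anchored score levels, so two judges (human or LLM) apply it the same
# way. Criterion names and anchors here are examples, not a standard.
RUBRIC = {
    "groundedness": {
        "definition": "Every factual claim is supported by the retrieved sources.",
        "anchors": {
            0: "Contains claims contradicted by or absent from the sources.",
            1: "Mostly supported, but at least one claim lacks a source.",
            2: "All claims are traceable to the provided sources.",
        },
    },
    "citation_correctness": {
        "definition": "Citations point to the source that supports the claim.",
        "anchors": {
            0: "Citations missing or pointing to irrelevant sources.",
            1: "Citations present but at least one is misattributed.",
            2: "Every citation matches the supporting source.",
        },
    },
}

def render_judge_prompt(question: str, answer: str, sources: str) -> str:
    """Render the rubric into a judge prompt that forces anchored scores."""
    lines = ["Score the answer on each criterion using ONLY the anchor levels."]
    for name, crit in RUBRIC.items():
        lines.append(f"\n{name}: {crit['definition']}")
        for score, anchor in sorted(crit["anchors"].items()):
            lines.append(f"  {score} = {anchor}")
    lines.append(f"\nSources:\n{sources}\n\nQuestion: {question}\n\nAnswer: {answer}")
    lines.append('\nReturn JSON: {"groundedness": <0-2>, "citation_correctness": <0-2>}')
    return "\n".join(lines)
```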
Build a calibration set with known answers
Before trusting judge scores, create a calibration set:
- A handful of known-good answers and clearly bad ones.
- Edge cases: policy boundaries, ambiguous questions, conflicting sources.
- Human-reviewed labels for the highest-impact intents.
Use calibration to tune prompts, compare judge models, and detect judge drift over time.
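One way to make this concrete: score the calibration set with the judge and measure agreement against the human labels before trusting it on unlabelled traffic. The sketch below assumes a `judge` callable that wraps your judge model, and the agreement threshold is an illustrative default, not a standard:

```python
from collections import Counter
from typing import Callable

def calibrate(judge: Callable[[dict], int],
              calibration_set: list[dict],
              min_agreement: float = 0.9) -> bool:
    """Compare judge scores to human labels on the calibration set.

    Each example is a dict with at least 'input', 'output', and a
    human-reviewed 'label'. `judge` is whatever wraps your judge model
    (assumed here, not a real API). Returns True only if agreement
    clears the (illustrative) threshold.
    """
    agree = 0
    disagreements = Counter()
    for ex in calibration_set:
        predicted = judge(ex)
        if predicted == ex["label"]:
            agree += 1
        else:
            disagreements[(ex["label"], predicted)] += 1
    agreement = agree / len(calibration_set)
    print(f"agreement: {agreement:.2%}")
    # Inspect the most common disagreement patterns before tuning the prompt.
    for (human, model), n in disagreements.most_common(5):
        print(f"  human={human} judge={model}: {n} cases")
    return agreement >= min_agreement
```

The disagreement breakdown matters as much as the headline number: a judge that is systematically lenient on one label needs a different fix than one that is randomly noisy.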
Watch for judge drift and incentive problems
Judges can drift because models change, prompts change, or the distribution of tasks shifts. Treat judge drift as a quality risk:
- Pin judge versions and prompt templates.
- Run a fixed benchmark set regularly (see benchmarking); a minimal version of this check is sketched after this list.
- Keep a human review loop for high-severity categories (see human review operations).
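A minimal sketch of the first two points, assuming you store judge scores on the fixed benchmark over time (the threshold and alerting details are illustrative):

```python
from dataclasses import dataclass
import hashlib
import statistics

@dataclass(frozen=True)
class JudgeConfig:
    """Pin everything that can silently change the judge's behaviour."""
    model: str              # an exact, dated model version string
    prompt_template: str    # the full template, not a reference to it
    temperature: float = 0.0

    def fingerprint(self) -> str:
        """Stable ID to attach to every score, so runs stay comparable."""
        raw = f"{self.model}|{self.temperature}|{self.prompt_template}"
        return hashlib.sha256(raw.encode()).hexdigest()[:12]

def drift_check(baseline_scores: list[float],
                current_scores: list[float],
                max_mean_shift: float = 0.05) -> bool:
    """Flag drift when the fixed benchmark's mean score moves too far.

    Both lists are judge scores on the SAME fixed benchmark set. The
    threshold is an illustrative default; set it from calibration data.
    """
    shift = abs(statistics.mean(current_scores) - statistics.mean(baseline_scores))
    if shift > max_mean_shift:
        print(f"judge drift: mean shifted by {shift:.3f}, "
              "re-calibrate before trusting new scores")
        return False
    return True
```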
Also beware of incentive problems: if teams optimise for judge scores, they may learn to "game" the judge without improving user outcomes.
Combine judge scoring with real-world signals
Judge scoring is an offline tool. Validate improvements online with guardrails and outcome metrics (see experimentation and value metrics).
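A quick sanity check, assuming you can join each conversation's judge score to an online outcome (the `resolved` field is a hypothetical signal, not a fixed metric): if judge scores barely track outcomes, the judge is measuring something else.

```python
import statistics

def judge_vs_outcome(records: list[dict]) -> float:
    """Correlate offline judge scores with an online outcome metric.

    Each record joins one conversation's judge score with an online
    signal (here a hypothetical 'resolved' flag: 1.0 if the user's
    issue was resolved without escalation, else 0.0).
    """
    scores = [r["judge_score"] for r in records]
    outcomes = [r["resolved"] for r in records]
    r = statistics.correlation(scores, outcomes)  # Pearson, Python 3.10+
    if r < 0.2:  # illustrative threshold
        print(f"warning: judge score barely tracks outcomes (r={r:.2f})")
    return r
```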
Make failures actionable
Use judges to produce categories that engineers can fix: retrieval failures, grounding failures, policy failures, tool failures (see error taxonomy). When judge output is not actionable, it becomes noise.
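One way to keep judge output actionable is to force it into a fixed failure taxonomy rather than free text, and to reject anything outside it. The categories below mirror the ones named in this section; the schema itself is an illustrative sketch:

```python
from enum import Enum
import json

class FailureCategory(str, Enum):
    """Categories an engineer can route to an owner."""
    RETRIEVAL = "retrieval_failure"   # right documents never reached the model
    GROUNDING = "grounding_failure"   # documents present, answer ignores them
    POLICY = "policy_failure"         # answer violates a stated policy
    TOOL = "tool_failure"             # wrong tool, bad arguments, unsafe side effect
    NONE = "no_failure"

def parse_judge_verdict(raw: str) -> tuple[FailureCategory, str]:
    """Validate the judge's JSON verdict against the taxonomy.

    Expected shape (illustrative): {"category": "...", "evidence": "..."}.
    Unparseable or off-taxonomy output raises rather than being silently
    counted, since untyped judge output is exactly the noise to avoid.
    """
    verdict = json.loads(raw)  # raises on malformed JSON
    category = FailureCategory(verdict["category"])  # raises if off-taxonomy
    return category, verdict.get("evidence", "")
```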
LLM-as-a-judge works when it is treated as a controlled measurement system, not a shortcut.