Enterprises rarely run “a model.” They run a system: a provider model version, a routing policy, prompts and templates, retrieval corpora, tools, and post-processing filters. When something goes wrong, the hardest question is often simple: what changed?
Model cards and lineage practices answer that question. They make AI systems explainable to internal stakeholders and defensible to external reviewers.
What a practical model card includes
A useful model card is concise and operational (a code sketch follows the list). Typical fields:
- Intended use. What the model is used for, and what it is not used for.
- Risk tier. The autonomy level and required controls (see risk appetite).
- Evaluation summary. Benchmarks, red team findings, known limitations (see evaluation metrics).
- Operational constraints. Latency budgets, cost ceilings, rate limits.
- Data boundaries. What data is allowed to reach the model (see data minimisation).
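To make this concrete, here is a minimal sketch of such a card as structured data. It is an illustration, not a standard schema; every field name and value is an assumption to be adapted to your own governance vocabulary.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelCard:
    """Minimal operational model card. All field names and values are illustrative."""
    name: str                        # internal system name, not just the provider model
    intended_use: str                # what the system is for
    out_of_scope: list[str]          # what it is explicitly not used for
    risk_tier: str                   # autonomy level and required controls
    evaluation_summary: str          # pointer to benchmark and red-team results
    latency_budget_ms: int           # operational constraint
    cost_ceiling_usd_per_1k: float   # operational constraint
    allowed_data_classes: list[str]  # data boundary: what may reach the model

card = ModelCard(
    name="support-assistant",
    intended_use="Draft replies to support tickets for human agent review",
    out_of_scope=["legal advice", "unreviewed customer-facing replies"],
    risk_tier="tier-2: human-in-the-loop required",
    evaluation_summary="evals/support-assistant/latest",
    latency_budget_ms=2000,
    cost_ceiling_usd_per_1k=5.0,
    allowed_data_classes=["ticket_text", "product_docs"],
)
```

Freezing the dataclass is a deliberate choice: changing any field produces a new card rather than a silent edit, which is the same discipline lineage demands.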
Lineage: more than “model version”
Lineage should cover the full behaviour surface (sketched in code below):
- Model identity. Provider, model name, region, and version (or release date).
- Prompt/policy versions. System prompt, safety prompts, tool schemas (see prompt registries).
- Retrieval corpus version. Knowledge base snapshot IDs (see knowledge base governance).
- Tooling versions. Integration endpoints, schemas, and permission policies (see tool authorisation).
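A lineage record can be captured the same way. The sketch below assumes each behaviour-affecting component is pinned to an explicit version or snapshot ID; all identifiers are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LineageRecord:
    """Pins every behaviour-affecting component. All identifiers are hypothetical."""
    provider: str                           # model identity
    model: str
    model_version: str                      # version or release date
    region: str
    system_prompt_version: str              # prompt/policy versions
    safety_policy_version: str
    tool_schema_version: str
    corpus_snapshot_id: str                 # retrieval corpus version
    tool_endpoint_versions: dict[str, str]  # tooling versions, by integration

lineage = LineageRecord(
    provider="example-provider",
    model="example-model",
    model_version="2025-01-15",
    region="eu-west-1",
    system_prompt_version="sys-prompt-v14",
    safety_policy_version="safety-v7",
    tool_schema_version="tools-v3",
    corpus_snapshot_id="kb-snapshot-2025-02-01",
    tool_endpoint_versions={"crm": "v2.3", "billing": "v1.8"},
)
```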
This lineage data should be attached to every production response and logged so incidents can be triaged quickly (see incident response).
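One way to do this, continuing the LineageRecord sketch above, is to emit the full lineage as a structured log line with every response. The logger name and fields are illustrative:

```python
import json
import logging
import uuid
from dataclasses import asdict

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai.responses")

def log_response(request_id: str, lineage: "LineageRecord", output_summary: str) -> None:
    # One structured line per response: "what changed?" becomes answerable per request.
    logger.info(json.dumps({
        "request_id": request_id,
        "lineage": asdict(lineage),  # every behaviour-affecting version, pinned
        "output_summary": output_summary,
    }))

log_response(str(uuid.uuid4()), lineage, "drafted reply, 214 tokens")
```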
Use lineage to enable safe change
Model cards and lineage are most valuable when connected to release management (see the canary sketch after this list):
- Canary rollouts can compare outcomes by version (see canary rollouts).
- Drift monitoring can alert when behaviour changes unexpectedly (see drift monitoring).
- Procurement decisions can be grounded in clear requirements and evidence (see procurement).
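As one illustration of the first point: when every logged response carries its lineage, a canary comparison reduces to grouping outcomes by version tag. The scores and rollback threshold below are assumptions made for the sketch:

```python
from collections import defaultdict
from statistics import mean

# Each logged response reduced to (lineage tag, outcome score in [0, 1]).
# In practice the tag would be the full lineage record; a string keeps the sketch short.
logged = [
    ("sys-prompt-v14", 0.92), ("sys-prompt-v14", 0.88),  # baseline cohort
    ("sys-prompt-v15", 0.71), ("sys-prompt-v15", 0.69),  # canary cohort
]

by_version: dict[str, list[float]] = defaultdict(list)
for version, score in logged:
    by_version[version].append(score)

drop = mean(by_version["sys-prompt-v14"]) - mean(by_version["sys-prompt-v15"])
if drop > 0.05:  # illustrative rollback threshold
    print(f"canary regressed by {drop:.2f}; hold the rollout")
```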
In short: model cards and lineage are the difference between “we use an LLM” and “we can operate an AI system responsibly.”