Frontier AI governance used to focus heavily on acceptable use policies, prompt handling and output review. Those still matter, but they are no longer sufficient. Models are becoming better at long-running tasks, software work, research, analysis, tool use and cybersecurity-relevant reasoning. Governance has to cover what the system can do, not only what it can say.
In 2026, governance needs to be capability-aware. GPT-5.5 and Claude Mythos Preview are different kinds of release, but they point in the same direction: models are moving into higher-consequence work. That means access, evaluation, monitoring and accountability need to be designed around use-case risk.
The missing middle
Many organisations have high-level AI principles and low-level technical controls, but the middle layer is thin. That layer should translate policy into operating rules: which model classes can touch which data, which tools they can call, which actions require approval, which logs are retained, and which evaluation gates must pass before release.
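That middle layer is easiest to enforce when the operating rules are written as reviewable configuration rather than prose. The sketch below is one way to make the rules explicit; the field names and example values are illustrative assumptions, not an established schema.

```python
# A minimal sketch of the middle layer as "policy as configuration".
# All field names and example values are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class OperatingRules:
    use_case: str
    allowed_data_classes: list[str]       # e.g. "public", "internal"
    allowed_tools: list[str]              # tools the model may call
    approval_required_actions: list[str]  # actions needing human sign-off
    log_retention_days: int               # how long interaction logs are kept
    required_eval_gates: list[str]        # evaluations that must pass pre-release


# Example: a hypothetical internal code-review use case.
code_review_bot = OperatingRules(
    use_case="internal-code-review",
    allowed_data_classes=["internal"],
    allowed_tools=["repo_read", "comment_post"],
    approval_required_actions=["merge", "branch_delete"],
    log_retention_days=365,
    required_eval_gates=["prompt-injection-suite", "data-leak-check"],
)
```

Expressed this way, a change to a use case's permissions becomes a reviewable diff rather than a quiet edit to a policy document.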
This is where most practical risk lives. A model that drafts a marketing summary and a model that opens tickets, edits code or scans sensitive infrastructure may sit behind similar interfaces, but they do not warrant the same governance treatment.
A workable governance pattern
Start by classifying AI use cases into four roles: assistant, analyst, operator and agent. Assistants produce low-risk drafts. Analysts transform information into recommendations. Operators use tools inside tight boundaries. Agents can coordinate multiple steps and may require persistent state, credentials or escalation paths.
Each role should have a default control profile. For example, an assistant may need style guidance and privacy checks. An operator needs tool permissions, logging and rollback. An agent needs state management, termination criteria, incident handling, stronger evaluation and explicit human handoff.
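A hedged sketch of what those defaults might look like in code follows. The four roles come from this article; the specific controls attached to each role, and the idea that controls accumulate as the role's reach grows, are illustrative assumptions.

```python
# Default control profiles keyed by role. The role taxonomy follows
# the article; the controls per role are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum


class Role(Enum):
    ASSISTANT = "assistant"
    ANALYST = "analyst"
    OPERATOR = "operator"
    AGENT = "agent"


@dataclass
class ControlProfile:
    privacy_checks: bool = False        # scan inputs/outputs for sensitive data
    tool_permissions: bool = False      # explicit allow-list of callable tools
    action_logging: bool = False        # retain a full action log
    rollback: bool = False              # every action must be reversible
    termination_criteria: bool = False  # explicit stop conditions for long runs
    incident_playbook: bool = False     # documented response for failures
    human_handoff: bool = False         # defined escalation path to a person


# Controls accumulate as the role gains reach.
DEFAULT_PROFILES: dict[Role, ControlProfile] = {
    Role.ASSISTANT: ControlProfile(privacy_checks=True),
    Role.ANALYST: ControlProfile(privacy_checks=True, action_logging=True),
    Role.OPERATOR: ControlProfile(
        privacy_checks=True, tool_permissions=True,
        action_logging=True, rollback=True,
    ),
    Role.AGENT: ControlProfile(
        privacy_checks=True, tool_permissions=True, action_logging=True,
        rollback=True, termination_criteria=True,
        incident_playbook=True, human_handoff=True,
    ),
}
```

A new use case then inherits its role's defaults before any case-specific review, so reviewers argue about exceptions rather than starting from a blank page.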
Governance as an enablement system
The purpose of governance is not to slow adoption. It is to make adoption repeatable. When controls are defined clearly, teams can move faster because they know the path from idea to pilot to production. Review boards can focus on exceptions and genuinely high-risk decisions rather than re-litigating every experiment.
The organisations that manage frontier models well will maintain a living inventory of use cases, model versions, datasets, tools, owners, risks, evaluations and incidents. That inventory becomes the factual base for investment, audit, procurement and continuous improvement.
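An inventory like that survives audits and staff turnover only if each entry has a fixed shape that tooling can query. A minimal record sketch, mirroring the fields listed above; the types and the review date are illustrative assumptions:

```python
# A minimal assumed record shape for a living use-case inventory.
# Field names mirror the article's list; types and the review
# date are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class InventoryEntry:
    use_case: str
    owner: str                    # accountable person or team
    model_versions: list[str]     # pinned model identifiers in use
    datasets: list[str]           # data the use case touches
    tools: list[str]              # tools the model may call
    risks: list[str]              # known risks, reviewed periodically
    evaluations: list[str]        # evaluation suites and their status
    incidents: list[str] = field(default_factory=list)  # linked incident IDs
    last_reviewed: date | None = None
```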
Source context: OpenAI's GPT-5.5 release note and Anthropic's Project Glasswing announcement.