We celebrated AI accelerating delivery. We haven’t yet asked who is supposed to operate everything it built.
This is the third post in a series about what AI-assisted development actually reveals about enterprise technology programmes: not the headline story, but the structural one underneath it.
In the first post, I made the case that the enterprise modernisation playbook is broken. The evidence was UnderwriteAI: a production-grade, APRA-compliant insurance platform I built entirely in my spare time, using GitHub Copilot powered by Claude Sonnet 4.6, across 41 working sessions. Eight microservices, a React portal, an API gateway, real-time Kafka event streaming, 156 automated BDD test scenarios. The kind of system that would normally take 18 to 24 months and a seven-figure budget through a traditional programme model.
In the second post, I asked the more uncomfortable question: could I actually run it in production? The answer was a Kubernetes migration: 145 resources across nine Helm charts, each requiring configuration, resource limits, disruption budgets, and autoscaling rules. Impressive in its own right. But it surfaced something the industry conversation about AI-assisted delivery consistently overlooks: the gap between working software and production software is not a development problem. It is an operational one.
This post is about that gap, and about the question nobody is asking yet.
If AI built the system, why are we asking humans to operate it the old way?
The Handover Nobody Planned For
AI-accelerated delivery compresses timelines in ways that governance models haven’t caught up with. Code is written in hours. Pipelines run in minutes. 145 resources are live before the sprint review has finished. The CAPEX programme declares success and moves on.
Then the operations team gets the handover pack.
They need Configuration Item records for every resource. Some exist in the CMDB, many don’t. Raising requests to create new ones takes time. Each has its own naming conventions, approval workflows, and lead times that have not changed because the build got faster. Then they need application owners willing to accept those CIs into their run budget, which means finding someone willing to absorb support costs, on-call obligations, patching schedules, and incident response from a budget that was set before any of this existed.
No accountable owner accepts a CI mid-OPEX cycle without interim funding. Their budget was set before that resource existed. That is rational behaviour, not obstruction. But it means the CI sits in a grey zone: technically live, operationally orphaned.
This is where unpleasant surprises accumulate, quietly, until an incident or an audit forces the conversation that should have happened at go-live.
CAPEX Governance Hasn’t Kept Pace
The structural problem is the operational transition. CAPEX funded the build, but nobody negotiated the OPEX transfer before the project closed. The project team had budget to deliver. Go-live was the finish line. Nobody scoped the handover.
In a traditional programme, this gap was inconvenient but manageable. The delivery timeline was long enough that operations teams had time to prepare, even if the preparation was informal. The system was built by humans who could explain it. The runbook was written by people who remembered the decisions.
I have lived this pattern before AI was part of the conversation at all. In the mid-2010s, as a Technical Director at Telstra, I worked on large-scale platform programmes where we used a funding model we called PROPEX (programme OPEX, or bridge funding). It was a practical construct: when a CAPEX programme completed, the platform was live but the OPEX budget hadn’t been finalised, resourced, or accepted by an owning team. PROPEX filled the gap. It was intended as a short-term bridge, typically a quarter or two, to keep the lights on while OPEX was forecasted, a receiving team was identified, and ownership was formally transferred.
In practice, PROPEX ran long. Technology managers negotiated the handover. Owning teams pushed back on accepting systems they didn’t ask for, with costs they hadn’t budgeted, and support obligations they weren’t staffed for. The programmes that had declared success at go-live were quietly still funding operations months later. Nobody was being obstructive. Everyone was being rational. But the gap between “delivered” and “operationally owned” was real, recurring, and expensive.
That was before AI-assisted delivery. The conditions that produced PROPEX, a build that outpaces the operational absorption capacity of the receiving organisation, are now structural. AI compresses the build by orders of magnitude without doing anything to accelerate the OPEX planning cycle, the CI creation process, the budget negotiation, or the ownership conversation. If anything, the AI-era version of that gap is harder to close, because the handover is murkier. At Telstra, the team that built the platform could explain it to the team inheriting it. In an AI-assisted programme, the people who directed the build were navigating agent output, and the most detailed record of why the system is structured the way it is may live in a session transcript rather than a document anyone filed. Accepting accountability for a system you can’t fully interrogate is a harder ask than accepting a system with a known author.
A CAPEX initiative in the age of AI needs more than a delivery gate. It needs an operational readiness gate that confirms CI records exist, owners are identified and funded, OPEX forecasts are updated, and the operational knowledge generated during the build has been captured in a form the operations team can actually use. Without that gate, faster delivery does not reduce operational risk. It compresses the window between build and surprise.
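To make that gate concrete, here is a minimal sketch of what the check could look like, assuming a hypothetical handover manifest (`handover.yaml` is a file name I am inventing for illustration) that lists each delivered component with its CI record, accountable owner, funding status, and runbook link. This is the shape of the check, not a standard.

```python
# Minimal sketch of an operational readiness gate, assuming a hypothetical
# handover.yaml that lists each delivered component with its CI id, owner,
# funding status, and runbook link. Field names are illustrative.
from dataclasses import dataclass, field

import yaml  # PyYAML


@dataclass
class GateResult:
    passed: bool
    failures: list[str] = field(default_factory=list)


def readiness_gate(manifest_path: str) -> GateResult:
    """Fail the gate if any delivered component lacks a CI record,
    a named owner, confirmed OPEX funding, or captured run knowledge."""
    with open(manifest_path) as f:
        components = yaml.safe_load(f)["components"]

    failures = []
    for c in components:
        name = c["name"]
        if not c.get("cmdb_ci_id"):
            failures.append(f"{name}: no CI record registered")
        if not c.get("owner"):
            failures.append(f"{name}: no accountable owner identified")
        if not c.get("opex_funded", False):
            failures.append(f"{name}: OPEX funding not confirmed")
        if not c.get("runbook_url"):
            failures.append(f"{name}: operational knowledge not captured")

    return GateResult(passed=not failures, failures=failures)


if __name__ == "__main__":
    result = readiness_gate("handover.yaml")
    for failure in result.failures:
        print(f"GATE FAIL: {failure}")
    raise SystemExit(0 if result.passed else 1)
```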
For organisations that are not yet positioned to have agents managing CI reconciliation, a practical interim construct is a dedicated landing pad cost centre: a named budget line established before go-live, specifically scoped to absorb newly delivered systems for a defined period (typically one to two quarters) while permanent OPEX ownership is negotiated. Unlike PROPEX, which was typically created after the fact when a programme ran out of CAPEX headroom, a landing pad is designed into the programme from the start. It makes the operational transition period deliberate rather than accidental, and it gives the OPEX negotiation a fixed deadline rather than an open-ended one. The landing pad does not solve the structural problem. But it names it, funds it, and bounds it, which is a significant improvement on leaving it unaddressed until an incident or an audit forces the conversation.
ITIL 4 names this governance gap precisely. The service transition practice covers configuration management, change enablement, and release management. The intent is correct: before a service enters live operation, CI records should exist, ownership should be assigned, and operational knowledge should be transferred. The problem is pacing. ITIL 4 was designed for a world where the delivery timeline was long enough for structured transition activities to run alongside the build. When AI compresses delivery by an order of magnitude, that assumption breaks. Human-executed ITIL processes can no longer keep pace with what is being delivered. The answer is not to discard the practice. It is to execute it at the same speed as the delivery, which means agents.
The Ownership Question Nobody Is Asking
Here is the question I haven’t seen asked directly in the industry conversation about AI-assisted development.
If agents built the system, why are humans expected to maintain it manually?
The current ownership model was designed for a world where a human wrote the code and therefore understood it well enough to support it. The operations team inherited a system from the people who built it. The handover was a transfer of human knowledge.
AI-assisted development breaks that assumption at both ends. The person who “built” the system was navigating agent output, not authoring every component. And the operational artefacts (the runbook, the dependency map, the incident playbook, the CMDB records) are documents that agents can generate from the same source of truth they used to build the system in the first place.
Helm charts are a precise, machine-readable description of what was deployed and how. A Kubernetes manifest encodes resource types, dependencies, health check endpoints, scaling rules, and environment configuration. This is not documentation someone has to write after the fact. It is the deployment artefact itself. An agent that can read a Helm chart and generate a deployment can equally read that Helm chart and derive the CI record, map the resource to the correct CMDB category, cross-reference what is registered against what is running, and surface the gaps.
The CMDB reconciliation problem, which operations teams currently handle through manual discovery, spreadsheet audits, and post-incident retrospectives, is structurally identical to the kind of task AI agents handle well. It involves reading structured data from a reliable source of truth, comparing it against a second structured data source, and producing a reconciled output. The Helm release history is that source of truth. It is currently sitting disconnected from every service management platform in the enterprise.
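Here is a minimal sketch of that derivation, assuming the umbrella-chart layout described in this series: dependencies declared in Chart.lock, per-service configuration in values.yaml. The `services` key, the `persistence` fields, and the owner mapping are illustrative conventions I am assuming for the sketch, not a guaranteed schema.

```python
# Minimal sketch: derive an expected CI register from a Helm umbrella chart.
# Assumes dependencies in Chart.lock and per-service config under a
# "services" key in values.yaml; both are illustrative conventions.
import yaml  # PyYAML

# Illustrative mapping from chart dependency name to CMDB CI class and owner.
DEPENDENCY_CI_CLASSES = {
    "postgresql": ("Database Instance", "Platform / DBA"),
    "redis": ("Middleware", "Platform"),
    "kafka": ("Middleware", "Platform / Integration"),
    "keycloak": ("Middleware", "Security / IAM"),
    "kong": ("Middleware", "Platform / Network"),
}


def derive_ci_register(chart_lock_path: str, values_path: str) -> list[dict]:
    with open(chart_lock_path) as f:
        dependencies = yaml.safe_load(f)["dependencies"]
    with open(values_path) as f:
        values = yaml.safe_load(f)

    register = []

    # Every chart dependency maps to an infrastructure or middleware CI.
    for dep in dependencies:
        ci_class, owner = DEPENDENCY_CI_CLASSES.get(
            dep["name"], ("Middleware", "Platform"))
        register.append({
            "ci_class": ci_class,
            "component": f'{dep["name"]} {dep["version"]}',
            "status": "New",
            "suggested_owner": owner,
        })

    # Every service in values.yaml maps to an Application Service CI,
    # plus a Storage Volume CI if it declares persistence.
    for name, svc in values.get("services", {}).items():
        register.append({
            "ci_class": "Application Service",
            "component": name,
            "status": "New",
            "suggested_owner": "Application Support",
        })
        if svc.get("persistence", {}).get("enabled"):
            register.append({
                "ci_class": "Storage Volume",
                "component": f'{name} volume ({svc["persistence"]["size"]})',
                "status": "New",
                "suggested_owner": "Platform / Storage",
            })

    return register
```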
What the Agent Already Knows
To make this concrete: when the Kubernetes migration for UnderwriteAI completed, the Helm chart contained 145 resources across nine charts. An operations team handed that deployment would face weeks of discovery work to catalogue what had been built, classify each component against CMDB CI classes, identify owners for each tier, and raise the requests to create them. In an organisation running ServiceNow, each CI creation follows its own workflow, naming convention, and approval chain.
An agent reading that Helm chart does not need to discover any of it. The information is already there, structured, complete, and machine-readable.
Here is what that CI register looks like, derived directly from the values file and chart dependencies:
| CMDB CI Class | Component | Count | Status | Suggested Owner |
|---|---|---|---|---|
| Application Service | Policy, Customer, Claims, Premium, Document, Notification, Audit, Auth microservices | 8 | New | Application Support |
| Application Service | Insurance Portal (React frontend) | 1 | New | Digital / Product |
| Database Instance | PostgreSQL 15, one per microservice (auth, policy, customer, claims, premium, document, notification, audit) | 8 | New | Platform / DBA |
| Database Instance | PostgreSQL 15, Keycloak and Kong internal databases | 2 | New | Platform / Middleware |
| Middleware | Redis (cache, premium service) | 1 | New | Platform |
| Middleware | Kafka (event streaming, 6 topics + DLTs) | 1 | New | Platform / Integration |
| Middleware | Keycloak (identity provider, UnderwriteAI realm) | 1 | New | Security / IAM |
| Middleware | Kong (API gateway, 8 service routes) | 1 | New | Platform / Network |
| Storage Volume | PostgreSQL persistence volumes (one per DB instance) | 10 | New | Platform / Storage |
| Storage Volume | Document service volume (10Gi, policy documents) | 1 | New | Application Support |
| Network Component | Nginx Ingress (underwriteai.local, TLS) | 1 | New | Network / Platform |
That is 35 Configuration Items across five CI classes. Every one of them is derivable from the Helm chart before a single human has opened a ServiceNow form.
Hi, I’m Tyrell’s AI. When he asked me to produce that table, here is what I did: I searched the repository for Helm files, found the umbrella chart, then read four files in parallel:
Chart.yaml, Chart.lock, values.yaml, and the deployment templates. From Chart.lock I got the exact dependency inventory: 10 PostgreSQL instances, Redis, Kafka, Keycloak, Kong. From values.yaml I got all 8 microservices, the frontend, their ports, database hosts, persistence configurations, and autoscaling settings. From the templates I confirmed the Kubernetes resource types being generated per service. I then cross-referenced those reads, classified each component against standard CMDB CI classes, inferred suggested owners from resource type and function, and produced the table above. Total elapsed time: under two minutes, across five tool calls.
The operations team that would normally do this work with a spreadsheet, a Kubernetes dashboard, and a series of meetings is looking at somewhere between two and four weeks.
To be precise about what this is: the table above is the discovery pass, the expected CI state derived from the deployment manifest. That is not yet an audit. An audit is the next step: take this expected state, compare it against what is actually registered in your live CMDB, and produce the delta (what is missing, what is stale, what has no owner, what is miscategorised). That step is also agent work.
I am not saying this to be impressive. I am saying it because the table was always there, and the audit was always possible. Nobody had asked the right tool to do either.
The CI status column matters for incremental deployments. In a subsequent release (a new service added, an existing service scaled, a database resized) the agent performs the same derivation and diffs it against the current CMDB state. The output is not a full CI register but a delta: three CIs to create, one CI to update, two CIs with changed ownership flags. That delta is the input to the next step.
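A minimal sketch of that reconciliation step, with the registered CIs represented as a plain list of dicts; in a real implementation they would come from the ITSM platform's API rather than being passed in directly:

```python
# Minimal sketch of CMDB reconciliation: compare the expected CI state
# derived from the Helm chart against what the CMDB currently has
# registered, and produce the delta. Record shapes follow the earlier
# derivation sketch and are illustrative.


def reconcile(expected: list[dict], registered: list[dict]) -> dict:
    """Return the delta: CIs to create, CIs to update, and registered CIs
    with no matching deployed resource (candidates for retirement)."""
    registered_by_name = {ci["component"]: ci for ci in registered}
    expected_by_name = {ci["component"]: ci for ci in expected}

    to_create, to_update = [], []
    for name, ci in expected_by_name.items():
        current = registered_by_name.get(name)
        if current is None:
            to_create.append(ci)
        elif (current.get("suggested_owner") != ci.get("suggested_owner")
              or current.get("ci_class") != ci.get("ci_class")):
            to_update.append(
                {"component": name, "expected": ci, "registered": current})

    stale = [ci for name, ci in registered_by_name.items()
             if name not in expected_by_name]

    return {"create": to_create, "update": to_update, "stale": stale}
```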
This is where the agent-to-agent handoff becomes the practical model for operational transfer. The build agent (the one that read the Helm chart and produced the CI manifest) hands a structured artefact to an ITSM agent. The ITSM agent knows the ServiceNow schema, the CI naming conventions, the approval routing rules, and the ownership hierarchy. It opens the CI creation requests in bulk, pre-filled, pre-classified, and pre-routed to the right team. The human approves the batch. They do not author it.
That single change, from human authoring to human approval, is the transition from the current operational model to the one that AI-assisted delivery makes possible. The knowledge transfer that technology managers spent months negotiating in the PROPEX era is replaced by a structured handover artefact that an agent generated from the deployment manifest that was always there. The conversation shifts from “who is going to catalogue all of this” to “does this CI manifest look right to you.”
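To show the shape of that handoff, here is a minimal sketch of the ITSM-agent side, assuming ServiceNow's Table API. The instance URL, environment variable names, table mappings, and field values are illustrative, not your CMDB's actual CI class model or approval routing. The dry_run default is the point: the output is a batch for a human to approve, not a set of records silently created.

```python
# Minimal sketch: raise pre-filled CI creation requests from a reconciliation
# delta via ServiceNow's Table API. Table names, fields, and environment
# variable names are illustrative assumptions.
import os

import requests

INSTANCE = os.environ["SNOW_INSTANCE"]  # e.g. https://example.service-now.com
AUTH = (os.environ["SNOW_USER"], os.environ["SNOW_PASSWORD"])

# Illustrative mapping from derived CI class to a ServiceNow CI table.
CI_CLASS_TABLES = {
    "Application Service": "cmdb_ci_service",
    "Database Instance": "cmdb_ci_db_instance",
    "Middleware": "cmdb_ci_appl",
    "Storage Volume": "cmdb_ci_storage_volume",
    "Network Component": "cmdb_ci_netgear",
}


def raise_ci_requests(delta: dict, dry_run: bool = True) -> list[dict]:
    """Build one CI creation payload per item in delta['create'].
    With dry_run=True the batch is returned for human approval."""
    payloads = []
    for ci in delta["create"]:
        payloads.append({
            "table": CI_CLASS_TABLES[ci["ci_class"]],
            "body": {
                "name": ci["component"],
                "assignment_group": ci["suggested_owner"],
                "short_description": "Auto-derived from Helm release manifest",
            },
        })

    if dry_run:
        return payloads  # the human approves this batch; they do not author it

    for p in payloads:
        resp = requests.post(
            f"{INSTANCE}/api/now/table/{p['table']}",
            auth=AUTH,
            json=p["body"],
            headers={"Accept": "application/json"},
            timeout=30,
        )
        resp.raise_for_status()
    return payloads
```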
The Cost Structure Shift
The second question that follows from this is larger and more consequential for how enterprises budget technology.
The dominant cost in enterprise OPEX has historically been human resource expenditure. Operations teams, support staff, incident managers, change advisory processes, CMDB administrators. These costs exist because maintaining a complex system at scale requires sustained human attention: monitoring, alerting, triaging, escalating, patching, documenting.
AI-assisted delivery already demonstrated that it can collapse the human resource cost on the build side. A system that would have required a team of specialists over 18 months was built by one person with an AI agent in 41 sessions. That is not a marginal productivity improvement. It is a structural change to the cost model.
The same structural change is available on the operations side, and the industry hasn’t fully absorbed this yet. Routine operations tasks (monitoring, alert triage, CMDB updates, change request drafting, first-line incident investigation, CI reconciliation) are repetitive, structured, and rule-governed. They are exactly the class of work that agents handle well. What remains irreducibly human is judgment, escalation, governance, and accountability. That is a much smaller headcount at a much higher skill level, and the cost structure moves accordingly: less human resource expenditure in OPEX, more compute and platform cost.
This is not a future scenario. The tooling to begin this shift exists today.
The Human Role Reframed
The conclusion that follows from both of these points is not that humans become irrelevant to operations. It is that the human role changes in the same way it changed during the build.
During the build, humans were not replaced by agents. They became the directors of agents. They set intent, reviewed outputs, approved decisions, and escalated when agent behaviour diverged from expectations. The agent did the mechanical work. The human held the accountability.
The same model applies to operations. Humans set the policy: what the acceptable thresholds are, what constitutes an escalation, who owns what category of resource, what the funding rules are for mid-cycle CI acceptance. Agents do the mechanical work: reconciling the CMDB, raising CI creation requests, monitoring against the defined thresholds, drafting incident summaries, flagging ownership gaps before the audit finds them.
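One way to make that division of labour concrete is to express the human-set policy as declarative configuration that agents read before acting. A minimal sketch, with thresholds, owners, and cost-centre names that are entirely illustrative:

```python
# Minimal sketch of operational policy as configuration: humans author and
# approve this object; agents read it before acting. All values illustrative.
from dataclasses import dataclass, field


@dataclass
class OperationalPolicy:
    # Acceptable thresholds
    cpu_alert_threshold: float = 0.80        # fraction of requested CPU
    error_rate_escalation: float = 0.05      # 5% 5xx rate pages a human
    # What constitutes an escalation
    auto_remediate: tuple[str, ...] = ("restart_pod", "scale_up")
    escalate_to_human: tuple[str, ...] = ("data_loss", "security_event")
    # Who owns what category of resource
    ownership: dict[str, str] = field(default_factory=lambda: {
        "Application Service": "Application Support",
        "Database Instance": "Platform / DBA",
        "Middleware": "Platform",
    })
    # Funding rules for mid-cycle CI acceptance
    landing_pad_cost_centre: str = "CC-LANDING-PAD"  # hypothetical code
    landing_pad_max_quarters: int = 2
```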
The oversight model is not new. What is new is that the tooling to implement it is now available, and the economic pressure to implement it is building. If AI has already halved the human cost of delivery, the organisations that also apply it to operations will carry a materially different cost structure from those that don’t.
Where the MCP Pattern Fits
As throughout this series, I want to be specific here too, because this is not a theoretical proposition.
In the UnderwriteAI project, I built a Model Context Protocol server alongside the application itself. MCP is the protocol that allows AI agents to call structured tools: not just generate text, but execute actions against real systems. The UnderwriteAI MCP server exposes twelve tools: creating customers, generating quotes, activating policies, lodging and processing claims, querying audit logs. An AI agent can execute the complete insurance policy lifecycle from natural language commands, calling those tools in sequence, without a human clicking through a UI.
That demonstration is about build and demo capability. But the architecture it describes is equally applicable to operations.
An MCP server sitting in front of a CMDB exposes the same pattern: an agent calls a tool to query what CIs exist, calls a tool to compare against the Helm release manifest, calls a tool to raise a creation request for the delta, calls a tool to recommend an owner based on resource type and team structure, and produces a structured handover artefact that a human approves rather than authors. The human governs the process. The agent executes it.
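Here is a minimal sketch of what that server could look like, assuming the official Python MCP SDK's FastMCP helper. The tool bodies are placeholders; in practice they would wrap the derivation, reconciliation, and request-raising logic sketched earlier in this post.

```python
# Minimal sketch of an MCP server in front of a CMDB, assuming the Python
# MCP SDK's FastMCP helper. Tool bodies are placeholders for the derivation,
# reconciliation, and request-raising logic sketched earlier in this post.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("cmdb-reconciler")


@mcp.tool()
def derive_expected_cis(chart_lock_path: str, values_path: str) -> list[dict]:
    """Derive the expected CI register from a Helm umbrella chart."""
    # Placeholder: parse Chart.lock and values.yaml as in the earlier sketch.
    return []


@mcp.tool()
def query_registered_cis(ci_class: str) -> list[dict]:
    """Query the live CMDB for registered CIs of a given class."""
    # Placeholder: call the ITSM platform's API (e.g. ServiceNow Table API).
    return []


@mcp.tool()
def raise_ci_delta(dry_run: bool = True) -> dict:
    """Reconcile expected vs registered CIs and return the creation batch
    for human approval (or submit it when dry_run is False)."""
    # Placeholder: diff the two registers and raise requests for the delta.
    return {"create": [], "update": [], "stale": []}


if __name__ == "__main__":
    mcp.run()  # serves the tools over stdio for an agent to call
```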
This is not a distant capability. At its Knowledge 2026 conference in May, ServiceNow is shipping agentic workflows for CMDB, a new capability under Now Assist that uses AI agents to manage CMDB governance, data quality, and CI lifecycle. The session description says it directly: AI-driven agents can revolutionise governance and data quality. That is the same claim this post is making, arriving from the direction of the world’s dominant ITSM platform. The gap between “what AI agents need to do CMDB reconciliation” and “what the tooling can support” has closed. Organisations that are designing their operational model now, deciding what agents own, what humans govern, and how the handover process works, are not waiting for the future. They are preparing for a capability that is already shipping. Organisations that are not will be retrofitting governance onto a system that was never designed for it.
The same direction is visible in the Atlassian ecosystem. Jira Service Management, the other major ITSM platform in enterprise use, has extended its asset and configuration management capabilities significantly in recent releases, alongside AI-assisted triage and automation. The architectural pattern this post describes is not a ServiceNow-specific proposition. Any ITSM platform that exposes structured API access to its CI registry can sit behind an MCP server. The agent reads the deployment manifest, compares against registered state, and raises the delta requests. The tool name changes. The governance problem and the agentic solution are identical.
The Gate We Are Missing
AI is now simultaneously the builder and the operator of enterprise systems. That sentence, which I wrote in my first post in this series as a directional claim, is becoming a practical reality faster than most governance frameworks are prepared for.
The gate missing from most CAPEX programmes is not a technical gate. It is a governance gate that asks: have we defined what the operational model looks like when AI is the primary operator? Have we identified which tasks agents will own, which tasks humans will oversee, and how the accountability model works when the system that was built by agents is also maintained by agents? Have we updated the OPEX forecast to reflect a cost structure where human resource expenditure is no longer the dominant line?
Faster delivery without that gate doesn’t reduce the operational burden. It concentrates it at the go-live boundary, where the CAPEX programme has already declared victory and the OPEX budget wasn’t sized for the arrival.
The organisations that get this right won’t be the ones that used AI to build faster. They will be the ones that used AI to build faster and then asked the harder question: now that we’ve built it, who’s operating it, how, and at what cost?
I’m Tyrell Perera, an Enterprise Solutions Architect and Fractional CTO with 20+ years of experience leading digital transformation in Insurance, Telecommunications, Energy, Retail, and Media across Australia. If you’re designing the operational model for your AI-assisted delivery programme and want a conversation about what that looks like in your context, find me at tyrell.co or on GitHub.