Saturday, April 11, 2026

The Enterprise Modernisation Playbook Is Broken. I Know Because I Helped Write It.

After two decades inside large-scale transformation programmes, I stopped waiting for the right conditions. I built the proof myself. On weekends. 

I've spent 20+ years as an Enterprise Solutions Architect inside large-scale technology transformation programmes. Insurance. Telecommunications. Energy. Retail. Media. Different industries, different technology stacks, different executive sponsors. The same programme structure, year after year.

Discovery phase. Architecture blueprints. Governance frameworks. Vendor selections. Roadmaps that stretch eighteen months before a single line of production code is written. And somewhere in month fourteen, when the business context has shifted and the original assumptions are quietly no longer true, a measured renegotiation of scope. The "minimum viable" becomes the "maximum achievable."

I've built a career navigating this model. I'm not writing this from the outside. I've led engineering organisations of 90 people inside that model, managing platforms supporting hundreds of millions in annual revenue at one of Australia's largest telecommunications companies. I'm not dismissing it wholesale. For some problems, it's still the right approach. But something has changed in the last eighteen months that makes the old playbook genuinely obsolete for a significant class of enterprise modernisation challenges.

I was frustrated enough to prove it on my own time.

What the Old Playbook Assumes 

Large-scale transformation programmes are built on a set of assumptions that were reasonable when they were formed.

Assumption 1: Building software is expensive and slow. Therefore, front-load the planning. Get the architecture right before committing to implementation. The cost of changing direction mid-programme is prohibitive.

Assumption 2: Complexity requires specialisation. Regulated domains like insurance, banking, and healthcare require deep domain expertise, and that expertise takes time to co-ordinate across teams. Move carefully.

Assumption 3: Working software is a late-stage deliverable. The artefacts of early phases are documents: requirements, designs, blueprints. Stakeholders validate against slides and wireframes. Working software comes at the end, when you integrate and test.

These assumptions shaped programme structures, governance models, vendor relationships, and, critically, the way executives are asked to think about technology investment.

Every one of these assumptions is now wrong.

What Changed: AI Collapsed the Distance Between Intent and Working Software

I want to be precise here, because this point is usually made too broadly.

I'm not saying "AI speeds up development." That framing undersells the structural change. What has actually happened is that the distance between a clear statement of intent and working, tested, production-grade software has collapsed to a degree that invalidates the planning-heavy programme model entirely.

To test this hypothesis properly, I chose the hardest domain I could think of: Australian insurance. Regulatory obligations under APRA, the Privacy Act 1988, and the Insurance Contracts Act 1984. Multi-service architecture requirements. Real-time event streaming, audit trail integrity, compliance reporting. If you want a genuinely complex proving ground, insurance qualifies.

I started building outside of work hours. No team. No budget. No programme governance structure.

Over 41 working sessions, using GitHub Copilot powered by Claude Sonnet 4.6, I built UnderwriteAI: a production-grade, eight-microservice insurance policy administration system. Policy management. Customer onboarding with Privacy Act consent capture. A rating engine covering five insurance products. Claims workflow from lodgement through settlement. APRA regulatory reporting. Kafka event streaming across six topics. A React portal. Kong API gateway. Keycloak authentication. 156 automated BDD test scenarios covering Australian compliance requirements.
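To make the rating engine concrete, here is a minimal sketch of a factor-based premium calculation. The product names, age bands, and loading values are illustrative placeholders, not UnderwriteAI's actual rating tables:

```python
# Hypothetical rating tables; real figures would live in the platform's
# rating service, not in code constants.
BASE_PREMIUM = {"motor_comprehensive": 900.0, "motor_third_party": 450.0}
AGE_LOADING = {"under_25": 1.40, "25_to_49": 1.00, "50_plus": 0.90}

def rate_premium(product: str, age_band: str, at_fault_claims: int) -> float:
    """Base premium x demographic loading x claims-history loading."""
    premium = BASE_PREMIUM[product] * AGE_LOADING[age_band]
    premium *= 1.0 + 0.15 * min(at_fault_claims, 3)  # cap the claims loading
    return round(premium, 2)

print(rate_premium("motor_comprehensive", "25_to_49", 0))  # 900.0
print(rate_premium("motor_comprehensive", "under_25", 1))  # 1449.0
```

The point of the sketch is the shape, not the numbers: a pure function from risk factors to premium is trivially unit-testable, which is what makes 156 automated scenarios feasible.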

The architecture is not a prototype. The compliance is not simulated. The test coverage is not aspirational.

And I built the whole thing in my spare time.

An equivalent programme scoped through a traditional delivery model, with vendor selection, requirements workshops, architecture review boards, and staged releases, would conservatively carry an 18-to-24-month timeline and a seven-figure budget before a line of production code shipped. This took 41 sessions.

The Moment That Clarified Everything

There are actually two demonstrations from this project, and the progression between them is the point.

The first is a 16-chapter walkthrough of the complete insurance lifecycle: customer creation, premium rating, policy binding, claims lodgement, workflow progression, notifications, APRA reporting, renewals. A browser opens. Every screen is navigated. Every form is filled. Every button is clicked. It looks like a polished product demonstration performed by a skilled operator.

There is no human operator. The entire browser session is driven by a Playwright script authored by the same AI that built the platform. I provided the instruction to run it. That is the full extent of my involvement. The AI that wrote the code also wrote the tests, and the tests are the demo.

That realisation sat with me for a while. Then I took it one step further.

I wired GitHub Copilot into the live platform via the Model Context Protocol, a standard that allows AI agents to call real APIs directly as tools. In the second demonstration, there is no browser at all. No Playwright script. No human navigating screens. Just a VS Code chat window and natural language instructions.

In eleven tool calls, Copilot created a customer with Privacy Act consent captured, ran the premium rating engine for a comprehensive motor policy, bound the policy, lodged a claim for a not-at-fault rear collision, advanced the claim through the full regulatory workflow (acknowledge, investigate, assess, approve, settle), and pulled the immutable APRA audit trail.
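The regulatory workflow the agent drove is, at its core, a linear state machine that writes an audit record on every transition. A minimal sketch, with hypothetical state names and record fields (the real claims service would enforce richer transition rules):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative linear workflow; actual states and gates belong to the
# platform's claims service, not this sketch.
WORKFLOW = ["lodged", "acknowledged", "investigating", "assessed", "approved", "settled"]

@dataclass
class Claim:
    claim_id: str
    status: str = "lodged"
    audit_trail: list = field(default_factory=list)  # append-only records

    def advance(self, actor: str) -> str:
        idx = WORKFLOW.index(self.status)
        if idx == len(WORKFLOW) - 1:
            raise ValueError("claim already settled")
        self.status = WORKFLOW[idx + 1]
        # Every transition appends an audit record; nothing is ever rewritten.
        self.audit_trail.append({
            "claim_id": self.claim_id,
            "status": self.status,
            "actor": actor,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        return self.status

claim = Claim("CLM-001")
while claim.status != "settled":
    claim.advance(actor="copilot-mcp-agent")
print(claim.status, len(claim.audit_trail))  # settled 5
```

Because the transition function is the only way to change state, the audit trail is complete by construction, which is exactly the property a regulator cares about.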

Every step landed in the live database. Every Kafka event fired. Every notification dispatched. Every audit record written.

The progression across the two demos is not a technical curiosity. It is a directional signal. In the first demo, the AI uses the interface designed for humans because it can. In the second, it discards that interface entirely and operates the system directly. The browser, and by extension the entire human-facing layer, turns out to be optional infrastructure.

I've spent years explaining to executive stakeholders what possible looks like in a regulated domain. These two demonstrations are now the explanation.

Watch the full demo:

What This Means for Your Technology Organisation

I want to offer four genuinely consequential implications for CIOs and CTOs. Not the usual list of AI adoption recommendations.

1. Your planning horizon is your biggest risk.

If your modernisation programme is spending its first twelve months producing documents rather than working software, you are not managing risk. You are accumulating it. The business context that justified the programme will change. The technology landscape will change. The AI tools available to your engineering teams will change dramatically. Programmes that defer working software to the integration phase will arrive at that phase with outdated assumptions and no mechanism to detect it.

Product-led modernisation, defined simply as shipping working, tested, incrementally improving software from week one, is not an Agile methodology recommendation. It is a risk management position.

2. The regulated domain objection no longer holds.

The most common pushback I receive when discussing faster, more iterative approaches to enterprise transformation is: "Our domain is too complex. We have regulatory obligations. We can't move that quickly."

I built UnderwriteAI specifically to empirically test this objection. APRA compliance, dual-consent privacy obligations, statutory notice timelines, immutable audit trails: none of these prevented iterative delivery. Some of them were easier to implement correctly when tested continuously from the beginning rather than bolted on at the end. Compliance that is woven into every sprint cannot be descoped. Compliance that is scheduled for the "integration phase" routinely is.
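Weaving compliance into every sprint means encoding each obligation as an executable check. A hedged sketch of a statutory acknowledgement-deadline rule — the 10-day figure here is an assumed placeholder for illustration, not the actual statutory timeline:

```python
from datetime import date, timedelta
from typing import Optional

# Placeholder deadline; the real figure would come from the applicable
# code of practice or statute, not a hard-coded constant.
ACK_DEADLINE_DAYS = 10

def acknowledgement_overdue(lodged: date,
                            acknowledged: Optional[date],
                            today: date) -> bool:
    """True if the claim was (or is now) acknowledged past its deadline."""
    deadline = lodged + timedelta(days=ACK_DEADLINE_DAYS)
    if acknowledged is not None:
        return acknowledged > deadline
    return today > deadline
```

A check like this runs in CI on every commit. That is what "compliance that is woven into every sprint cannot be descoped" means in practice: the build fails before the obligation does.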

3. AI is now simultaneously the builder and the operator of enterprise systems.

This is the implication that most organisations haven't fully absorbed.

The MCP demonstration is not a curiosity. It is a preview of enterprise architecture in which AI agents are first-class participants in business workflows. Not augmenting human activity. Executing it. The question for your technology organisation is not whether to prepare for this, but whether your current modernisation investments are producing the kind of clean, API-first, event-driven architecture that AI agents can actually operate.
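For concreteness, this is roughly the shape of a tool definition an MCP server exposes so an agent can call a real API directly. The tool name and fields are hypothetical, not UnderwriteAI's actual surface; only the `name`/`description`/`inputSchema` structure follows the MCP convention:

```python
# Hypothetical MCP-style tool definition for a policy-binding operation.
BIND_POLICY_TOOL = {
    "name": "bind_policy",
    "description": "Bind a rated quote into an in-force policy.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "quote_id": {"type": "string"},
            "payment_method": {"type": "string", "enum": ["direct_debit", "card"]},
        },
        "required": ["quote_id"],
    },
}

def validate_call(tool: dict, arguments: dict) -> None:
    """Minimal required-field check before dispatching to the real API."""
    missing = [k for k in tool["inputSchema"]["required"] if k not in arguments]
    if missing:
        raise ValueError(f"missing required arguments: {missing}")
```

Notice what the agent needs: a name, a description it can reason about, and a machine-readable schema. Systems whose operations cannot be described this way are the "structurally incompatible" legacy the next paragraph describes.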

Legacy systems with opaque integrations and inconsistent APIs are not just technically awkward. They are structurally incompatible with the direction enterprise computing is moving. Every year of deferred modernisation is a year of compounding incompatibility with the operational model that is already emerging.

4. You can start smaller than you think, and sooner than your governance model assumes.

The most common response I get when sharing this with technology leaders is: "That's compelling, but we can't restructure our whole programme around it." That is not what I'm suggesting.

Pick one bounded domain. A single workflow that is materially important but not mission-critical enough to paralyse decision-making. Set a 90-day deadline. Ship working software against it. Not a prototype, not a proof of concept: working software, with tests, running against real data.

What you learn in those 90 days about what AI can and cannot do in your specific environment, with your specific constraints, is worth more than the outputs of a six-month discovery phase. And you will have working software at the end of it, which means the next conversation with your board is grounded in evidence rather than projections.

The Question I'd Leave You With

Most modernisation programmes can show you a roadmap. Many can show you a milestone report. Very few can show you working software that solves the actual problem: real compliance, real test coverage, and a live demonstration you can put in front of a sceptical stakeholder today.

I built that in my spare time to prove a point about what is possible.

The question worth asking of your current transformation programme (or the one you are about to commission) is simple: what is the working software that proves this is on the right track? Not the wireframes, not the architecture diagrams, not the vendor's reference implementation. The working software, running against real data, that a sceptical stakeholder can interact with today.

If the answer is "we'll have that in the integration phase," the programme structure is carrying more risk than the governance papers are showing you.

I'm Tyrell Perera, an Enterprise Solutions Architect and Fractional CTO with 20+ years of experience in digital transformation across Insurance, Telecommunications, Energy, Retail, and Media in Australia. 

UnderwriteAI is a project I built entirely in my own time, outside of my day job. It is currently in a private repository while I work through what comes next, whether that is open sourcing it, building a product around it, or using it as a foundation for advisory engagements. If you're navigating modernisation decisions for your organisation and want to explore what this model looks like in your context, I'd welcome the conversation. 

Find me at tyrell.co or on GitHub.