
Three Months After Go-Live: When Mobilisation Success Becomes Operational Failure


31/03/2026

The alliance mobilisation is, by all conventional measures, complete. The programme director presents the results to the board: satisfaction surveys average 4.7 out of 5, workshop feedback describes sessions as "excellent" and "transformative", the partnership charter is signed by all parties, and the governance structure is established and apparently operating well. The board approves the completion report and, on the face of it, the alliance appears ready for delivery.

Twelve weeks into operations, however, the same alliance is in crisis. Technical disagreements that should have been resolved through integrated problem-solving are being escalated through legal channels, commercial positions that the partnership charter was supposed to prevent have hardened into defensive standoffs, and the governance structure that was working smoothly during mobilisation cannot make decisions under operational pressure. Trust between partners, which looked robust during mobilisation, has collapsed.

The mobilisation metrics were real enough, but the collaboration capability was not. This pattern appears across major projects with uncomfortable consistency, and the explanation is straightforward: we measure partnership success during mobilisation, when enthusiasm is high, relationships are new, and pressure is low, then wonder why the alliance that mobilised successfully delivers unsuccessfully. The answer is simple enough once you name it - we measure when it is easy, not when it matters.

Mobilisation creates ideal conditions for positive measurement, which is precisely why those measurements tell you so little about delivery capability. Partners are optimistic about working together because nobody has failed each other yet, the commercial terms are agreed but not yet tested under pressure, governance meetings address theoretical scenarios rather than real crises, and workshop exercises simulate collaboration without the constraints of actual delivery. It is remarkably easy to score well on collaboration metrics when collaboration has not yet been tested, and this ease is exactly what makes the metrics meaningless.

Operational delivery, by contrast, creates the real test of whether collaborative capability exists. Technical problems emerge that were not anticipated during planning, commercial interests that aligned during bidding diverge under cost pressure, schedule compression forces decisions that expose underlying disagreements, and governance that worked for planning-level choices struggles with operational-level complexity. The collaboration capability that looked strong during mobilisation proves inadequate for delivery reality, and the organisation discovers this at exactly the point when doing something about it is most expensive.

This timing problem creates dangerous complacency: the board sees excellent mobilisation metrics and assumes the partnership is sound, programme leadership focuses on technical delivery and treats collaboration as solved, and when the crisis arrives twelve weeks later it feels sudden. But the capability gap was there from the start, just not yet tested under conditions that would have made it visible.


Consider what mobilisation actually measures and you can see why it predicts so little. Satisfaction surveys ask whether people feel positive about the partnership, which is fair enough when interactions have been structured workshops, but feeling positive about working together is different from being able to work together when a critical path activity is delayed and each partner faces different board pressures. Workshop feedback measures whether people found the sessions valuable, but workshops create artificial conditions designed to generate collaboration, so whether partners can collaborate in a workshop says nothing about whether they can collaborate during a crisis when the client is demanding answers. Partnership charters measure whether the architecture for collaboration exists, which is necessary but insufficient because having governance processes does not mean those processes will work under pressure.

The measurement question becomes: when should we assess whether an alliance can actually collaborate? The conventional answer is during mobilisation, but the correct answer is under operational pressure, because that is when collaboration either works or fails. Alliance directors face this continuously: the board wants assurance, mobilisation metrics provide it, and three months into operations the assurance proves false.

Some alliances recognise this timing problem and design for it explicitly: they treat mobilisation measurement as a baseline rather than validation, plan assessment points at 30, 60, and 90 days and beyond into operations specifically to test whether mobilisation capability transfers to delivery pressure, and accept that early positive metrics mean very little until tested against operational reality. Most alliances, however, do not take this approach - they measure during mobilisation, declare success, and move to delivery, then discover twelve weeks later that the capability they thought they had built was actually just enthusiasm about building capability. The difference becomes expensive.


The water sector provides a contemporary example of this pattern playing out, where Ofwat's PR24 determinations emphasise collaborative delivery across supply chains, alliances form with positive mobilisation metrics, then discover during operational delivery that the collaboration capability is inadequate for programme complexity. Energy transition programmes face the same challenge at larger scale, where offshore wind developments require collaboration between developers, supply chain, regulators, and coastal communities, and whether partnerships succeed depends not on how they start but on whether they can maintain collaboration when everything goes wrong.

The measurement reform required is straightforward enough in principle: stop measuring partnership success at the point when partnerships are easiest to measure, and start measuring at the points when measurement actually tells you something useful. This means effective, externally validated diagnostic assessment under operational pressure rather than just during protected mobilisation, tracking capability development through the delivery cycle rather than declaring success at day one, and accepting that positive early metrics are encouraging but meaningless until tested against reality. Alliances, and indeed any forming partnership, should be comfortable enough with discomfort to accept that the less perfect the initial mobilisation is, the more real the ongoing delivery can be. Three months and beyond after go-live is when alliances discover whether they can actually collaborate, and the question is whether we will measure at that point or continue declaring success based on metrics that predict nothing.

What would change if you measured partnership success 90 days into delivery, not 90 days into mobilisation?
