Six offers · Six prices · Six durations

Six things
we sell. Nothing else.

Productised AI engagements for ServiceNow, AWS, and Microsoft customers. Fixed fee. Named outcomes in the SOW. Milestone payments at 40/30/30 — final 30% contingent on outcome verification.

Six offers at a glance

All six. One scroll.

Productised AI assessment

AI Readiness Sprint · six weeks

We assess what's already in your platform, hand you a 90-day agentic AI roadmap your CFO will sign off on, and tell you which of the next five offers — if any — is the right one.

Who this is for

Mid-market enterprises (1,000–10,000 employees) running ServiceNow, AWS, or Microsoft who want a structured AI roadmap before committing budget. A particularly good fit for organisations that have run two or more AI pilots that never reached production — the readiness gap is typically platform activation or data foundation, not strategy.

The trigger

Usually one of three: a CIO who's been asked by the board for "an AI strategy" with a 60-day deadline; a CFO who's seen the renewal cost on Now Assist or Copilot E5 and wants to know what's actually being used; or a head of a function (HR, IT, customer service) who's read about ticket deflection and wants to know whether it's real for their estate.

What's in the SOW
01 Platform audit. What AI capability is licensed, what's activated, what's not. Specific SKU-level findings on Now Assist, Copilot, Bedrock, Q in Connect, etc.
02 Use-case shortlist. Five to seven use cases scored on impact, feasibility, and time-to-value. Vendor-neutral.
03 90-day roadmap. Sequenced delivery plan with named owners, dependencies, and rough investment range per work package.
04 Board-grade summary. Two-page executive write-up suitable for board paper or steering committee circulation.
Three measured outcomes
01 Activation gap quantified. Specific dollar value of unused AI capability already in the customer's existing licence stack.
02 Top three use cases SOW-ready. Each scoped to the point a follow-on engagement could start within 30 days.
03 Board paper in hand. A two-page write-up the customer's CIO can circulate without rewriting.
Risk · Remedy

If the readiness assessment doesn't surface findings the customer considers materially actionable, the final 30% milestone payment is held back pending discussion. Our SOW says so explicitly.

Assessment plus working pilot

AI Foundation + Pilot · ten weeks

Everything in the AI Readiness Sprint, plus a production-pathed pilot in your existing platform. Not a demo. Not a sandbox. A working capability your team can keep operating after we leave.

Who this is for

Customers who want the readiness assessment but already know they're going to need a pilot to convince internal stakeholders. The pilot is scoped tightly so the engagement still finishes in ten weeks — typically one use case, one platform, one function.

The trigger

"We need to show the board something working, not just a strategy deck." Or: "We can't fund a full programme until we see one capability in production." Both lead to the same answer: a pilot scoped to ship in eight weeks of build time, with two weeks for the readiness assessment that frames it.

Three measured outcomes
01 Working pilot in production-adjacent environment. Functional, observable, ready for stakeholder demos with real data.
02 Production path documented. What's required to take the pilot from observable-in-staging to operating in production. Investment range, timeline, dependencies.
03 Knowledge transfer complete. Your team can extend, modify, and operate the pilot without further engagement from us.
Risk · Remedy

If the pilot doesn't reach observable-working state by week eight, the final milestone payment is renegotiated. The risk lives with us, not you.

Production-grade ServiceNow agentic AI

Workforce Agent Sprint · twelve weeks

Production-grade agentic AI in ServiceNow for HR or IT service automation. Measured ticket deflection in the SOW. Final 30% payment tied to deflection target.

Who this is for

ServiceNow customers with Now Assist licensing (or who can activate it) running HR Service Delivery or IT Service Management at meaningful volume — typically 10,000+ tickets per quarter in scope. The deflection economics need ticket volume to make sense — below this threshold, the AI Readiness Sprint is a better starting point.

Three measured outcomes
01 30–40% deflection on in-scope ticket categories. Measured at week 12 against the baseline established in week 1. Final 30% milestone payment tied to hitting the lower bound.
02 Time-to-resolution improvement on non-deflected tickets. Agentic assistance for human agents. Typical improvement: 25–35% faster mean resolution.
03 Operating playbook handed over. Your platform team can extend the agent library to new ticket categories without further engagement.
Risk · Remedy

Final 30% payment tied to deflection target — measured by ServiceNow telemetry, not self-reported. If we miss the lower bound, the final payment is held back.

Multi-function agentic deployment

Workforce Agent Programme · sixteen to twenty weeks

End-to-end agentic workforce deployment across HR, IT, and one additional function in a single engagement. Largest engagement we sell. The economics work because we sequence, not because we discount.

Who this is for

Enterprise customers with mature ServiceNow estates (HRSD + ITSM, plus a third module like CSM or FSM) who want all three agentic AI deployments handled in a single coordinated engagement. The third function is typically Customer Service Management, Field Service, or Strategic Portfolio Management.

Three measured outcomes
01 Deflection across all three functions. Per-function targets agreed in week 1. Final 30% tied to weighted average across functions, not any single one.
02 Cross-function agent library. Agents that escalate or hand off across HR/IT/third-function boundaries — typical of mid-market organisations with shared service models.
03 Annex A operating model. Documented continuity, escalation, and team-redundancy structure. Required for engagements at this scale.
Risk · Remedy

At this engagement size the risk-remedy structure is more rigorous — independent third-party measurement of deflection at week 20, milestone gate at week 12 with right-of-termination if leading indicators don't track.

Board-level AI strategy for regulated mid-market

AI Strategy Sprint · four to six weeks

A board-grade AI strategy for APRA-regulated FinServ and adjacent buyers. Six weeks, not six months. Vendor-neutral. Survives panel scrutiny.

Who this is for

APRA-regulated FinServ, adjacent regulated industries (super, insurance, health, utilities), or large mutuals with board-level AI strategy mandates. The buyer is usually a CRO, COO, or CFO with a regulator-driven deadline, not a CIO with a technology question.

Three measured outcomes
01 Board paper in hand. 15–20 page board-grade document. Regulatory references explicit (CPS 230, CPS 234, ASIC RG 271 where relevant). Suitable for board paper or steering committee circulation.
02 Risk taxonomy documented. AI risk register mapped to existing risk frameworks. Specific to the customer's regulatory context, not generic.
03 Three-year roadmap. Sequenced delivery plan with named owners, regulatory checkpoints, and budget envelope.
Risk · Remedy

If the board paper doesn't pass internal Risk Committee review on first read, we work the iteration into the original fee. Up to two rounds. The risk of "regulator-grade" lives with us.

Strategy plus independent vendor validation

AI Strategy + Vendor Validation · six to eight weeks

Everything in the AI Strategy Sprint, plus an independent vendor validation. We tell you whether to buy, build, or partner — with no vendor incentive of our own. The honest answer to a question most consultancies won't answer honestly.

Who this is for

Customers in active vendor evaluation cycles — typically post-RFI, pre-RFP — who want a vendor-neutral structured assessment of their shortlist. Especially relevant when the shortlist includes both platform-vendor offerings (ServiceNow Now Assist, AWS Bedrock, Microsoft Copilot) and specialist independents (Moveworks, Glean, etc.).

Three measured outcomes
01 Vendor-neutral validation report. Each shortlisted vendor scored against your specific use cases, regulatory context, and platform estate. Bias surfaced explicitly, not hidden.
02 Buy / build / partner recommendation. Specific to each in-scope use case, not a single recommendation across the strategy. Mid-market customers usually need a mix.
03 Negotiation positioning brief. For the vendors you do select, what to push back on in commercial terms — based on what they've conceded to peer customers we know.
Risk · Remedy

We carry no platform-vendor revenue at the validation stage. If a vendor we recommended ever pays us a referral fee for that customer, we refund the engagement fee. The independence claim has teeth.

Either way

Not sure which one? Run the Index.

Fifteen minutes will tell you which of these six is fit. Or whether none of them are — in which case we'll point you somewhere better suited.

Three reference customers
Rest Super · Frasers Property · HPE