advisory · 2026

Consultation Pattern · Build-vs-Buy Advisory

1–4 wk to recommendation · ANECDOTE

How a Consultation engagement runs at the studio. Teams arrive with a build-vs-buy or modality-selection question; we hand back a written brief that makes the decision actionable.


Challenge

The teams who book a Consultation usually look the same. A CTO trying to decide whether to build an agent in-house or license one. A head of L&D scoping whether immersive is worth the spend before commissioning an Atlas. A founder writing a Series A deck and needing studio-grade reasoning to point at when the technical questions land. The shared shape: a real decision in the next quarter, and a sense that the option space has not been worked carefully enough to commit.

What they do not need: another deck. What they do need: someone who has shipped the kind of thing they are considering, who will read the situation honestly, and who will write down a recommendation that survives a hostile question.

Approach

A Consultation runs 1–4 weeks. The studio does the same four things in roughly the same order, sized to budget and urgency.

We have run this engagement pattern across 2024–2026 for clients in EdTech, B2B SaaS, health technology, and museum/civic platforms. Some engagements were one-week sprints (a Series A pitch reality-check); some ran the full four weeks (a build-vs-buy on an agent platform with a $300K decision attached).

Instrumentation

KPI                  | Method                                        | Threshold
Decision-readiness   | Pre/post stakeholder survey                   | ≥ 4/5 post-engagement
Option coverage      | Documented alternatives in the brief          | ≥ 3 viable paths analyzed
Time to clarity      | Calendar from kickoff to final brief          | ≤ 4 weeks
Recommendation depth | Brief includes scope, cost, risk, KPIs per path | All four named per recommended path
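The rubric above is concrete enough to encode as data. A minimal sketch, assuming a hypothetical engagement record (the names `EngagementRecord`, `meets_thresholds`, and the field layout are illustrative, not the studio's actual tooling):

```python
# Hypothetical sketch: the Consultation KPI thresholds expressed as checks
# over a finished-engagement record. All names here are illustrative.
from dataclasses import dataclass

@dataclass
class EngagementRecord:
    decision_readiness: float  # post-engagement stakeholder survey score, 1-5
    options_analyzed: int      # documented alternatives in the brief
    weeks_to_brief: float      # calendar weeks from kickoff to final brief
    path_fields_named: set     # fields named for the recommended path

# The "recommendation depth" KPI requires all four of these per path.
REQUIRED_FIELDS = {"scope", "cost", "risk", "kpis"}

def meets_thresholds(r: EngagementRecord) -> dict:
    """Return a pass/fail map mirroring the instrumentation table."""
    return {
        "decision_readiness": r.decision_readiness >= 4.0,   # ≥ 4/5
        "option_coverage": r.options_analyzed >= 3,          # ≥ 3 paths
        "time_to_clarity": r.weeks_to_brief <= 4,            # ≤ 4 weeks
        "recommendation_depth": REQUIRED_FIELDS <= r.path_fields_named,
    }

record = EngagementRecord(4.2, 3, 3.5, {"scope", "cost", "risk", "kpis"})
print(meets_thresholds(record))
```

The point of the sketch is only that each KPI is checkable from the brief itself, which is what "we measure the brief, not the downstream build" means in practice.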

Result

The Number  | 1–4 weeks to recommendation
Tier        | ANECDOTE
Deliverable | A written advisory brief (15–25 pp) plus working session(s)

By design, Consultation has no quantitative business outcome we can measure inside the engagement window. The outcome is a decision the client owns and can act on. We measure the brief, not the downstream build.

Patterns

The clean version of this engagement: the team walks in with a question they have already half-answered, and the brief turns that half-answer into something they can commit to (or kill). The muddy version: the team wanted a working prototype and bought a Consultation thinking it was an Atlas. The brief lands, the team likes it, but they still wish they had an artifact to point at.

What we’d do differently

The deliverable is a brief, not a working artifact. Clients sometimes regret commissioning a Consultation when they should have done an Atlas. We are now firmer in the scoping call: if the question is “what should we build,” that is Consultation. If the question is “can this be built,” that is Atlas. The week-one decision statement now explicitly names which question the engagement is answering, and we will walk away from the engagement rather than blur the line.

The brief is also a one-time artifact. Repeat engagements with the same client would benefit from a lightweight written record between visits so each new Consultation does not relitigate context. We are piloting that on engagements where the same client returns within six months.

Impact

Decisions made faster, with the option space, the cost envelope, and the risks named out loud. Clients walk in arguing; they walk out aligned (or aligned on what to argue about next).