Challenge
The teams who book a Consultation usually look the same. A CTO trying to decide whether to build an agent in-house or license one. A head of L&D scoping whether immersive is worth the spend before commissioning an Atlas. A founder writing a Series A deck and needing studio-grade reasoning to point at when the technical questions land. The shared shape: a real decision in the next quarter, and a sense that the option space has not been worked carefully enough to commit.
What they do not need: another deck. What they do need: someone who has shipped the kind of thing they are considering, who will read the situation honestly, and who will write down a recommendation that survives a hostile question.
Approach
A Consultation runs 1-4 weeks. The studio does the same four things in roughly the same order, sized to the budget and the urgency.
- Scoping. A one-page decision statement signed off in the first three days. Who needs to say yes. What evidence they need. What “clarity” looks like at the end. If we cannot write this page together, we do not start the engagement.
- Discovery interviews. Two to five conversations with the people who own the problem and, where possible, the audience the decision affects. Industry context, technical constraints, the things that cannot move.
- Analysis. Option-mapping. At least three viable paths, each costed and risked. Tradeoff modelling against the criteria from the decision statement.
- The brief + working session. A 15-25 page advisory brief and one or two live working sessions to walk it, take fire, and refine the recommendation.
We have run this engagement pattern across 2024-2026 for clients in EdTech, B2B SaaS, health technology, and museum and civic platforms. Some engagements were one-week sprints (a Series A pitch reality-check); some ran the full four weeks (a build-vs-buy on an agent platform with a $300K decision attached).
Instrumentation
| KPI | Method | Threshold |
|---|---|---|
| Decision-readiness | Pre/post stakeholder survey | ≥ 4/5 post-engagement |
| Option coverage | Documented alternatives in the brief | ≥ 3 viable paths analyzed |
| Time to clarity | Calendar from kickoff to final brief | ≤ 4 weeks |
| Recommendation depth | Brief includes scope, cost, risk, KPIs per path | All four named per recommended path |
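The four thresholds above amount to a pass/fail gate on the engagement itself. A minimal sketch of that gate, assuming illustrative field names (these are not part of the studio's actual tooling):

```python
from dataclasses import dataclass

@dataclass
class EngagementMetrics:
    # Illustrative fields mirroring the KPI table; names are assumptions.
    decision_readiness: float    # mean post-engagement survey score, 1-5 scale
    viable_paths: int            # documented alternatives analyzed in the brief
    weeks_to_brief: float        # calendar weeks from kickoff to final brief
    paths_fully_specified: bool  # scope, cost, risk, KPIs named per recommended path

def meets_thresholds(m: EngagementMetrics) -> dict[str, bool]:
    """Check each KPI against the thresholds in the Instrumentation table."""
    return {
        "decision_readiness": m.decision_readiness >= 4.0,   # ≥ 4/5 post-engagement
        "option_coverage": m.viable_paths >= 3,              # ≥ 3 viable paths
        "time_to_clarity": m.weeks_to_brief <= 4,            # ≤ 4 weeks
        "recommendation_depth": m.paths_fully_specified,     # all four named
    }

# Example: a four-week engagement that cleared every bar.
checks = meets_thresholds(EngagementMetrics(4.4, 3, 4.0, True))
print(all(checks.values()))  # True
```

The per-KPI dictionary, rather than a single boolean, matters in practice: a brief that misses only one bar (say, two viable paths instead of three) fails for a nameable, fixable reason.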
Result
| Field | Value |
|---|---|
| The Number | 1-4 weeks to recommendation |
| Tier | ANECDOTE |
| Deliverable | A written advisory brief (15-25pp) plus working session(s) |
By design, Consultation has no quantitative business outcome we can measure inside the engagement window. The outcome is a decision the client owns and can act on. We measure the brief, not the downstream build.
Impact
The clean version of this engagement: the team walks in with a question they have already half-answered, and the brief turns that half-answer into something they can commit to (or kill). The muddy version: the team wanted a working prototype and bought a Consultation thinking it was an Atlas. The brief lands, the team likes it, and they wish they had a thing to point at.
What we’d do differently
The deliverable is a brief, not a working artifact. Clients sometimes regret commissioning a Consultation when they should have done an Atlas. We are now firmer in the scoping call: if the question is “what should we build,” that is Consultation. If the question is “can this be built,” that is Atlas. The week-one decision statement now explicitly names which question the engagement is answering, and we will walk away from the engagement rather than blur the line.
The brief is also a one-time artifact. Repeat engagements with the same client would benefit from a lightweight written record between visits so each new Consultation does not relitigate context. We are piloting that on engagements where the same client returns within six months.