Research

Design Behaviourism

Technology should amplify human potential rather than replace it. One sentence from Tyler's notebook that became the studio's operating constraint, and the only test that decides whether we ship.

The one sentence

Design Behaviourism starts from a single conviction: technology should amplify human potential rather than replace it. The phrase sounds modest. The implications aren't. It rules out a category of work the studio could otherwise win, and it rules in a category that's harder to sell but easier to defend.

The framework asks one question of every system we touch. After the system runs, is the human in the loop more capable, or less? If the honest answer is "less," the system fails the test, regardless of how much it ships or how well it demos.

Why a framework, not a value

"Human-centered" is a value. Values are cheap. They survive every redesign by being abstract enough to mean anything. Design Behaviourism is a framework because it forces a behavioural prediction before the build starts: what will the user be able to do after using this that they couldn't do before, and how will we know.

The behaviourist part isn't ideological. It's epistemic. We can't measure intent. We can measure behaviour. The framework keeps the studio honest about which one we're actually changing.

The three tests

Every immersive or agentic system we ship has to clear three checks, and none of them is subjective. Capability delta: after the system runs, what can the human do that they couldn't do before, and how is that measured? Reversibility: if the system were removed tomorrow, would the team keep the capability, or does it live only in the machine? Attention cost: what the system demands of its users to learn and operate, named up front rather than hidden.
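The three checks reduce to a simple ship/no-ship gate. A minimal sketch of that gate as a data structure; all field names are hypothetical, invented for illustration, and not the studio's actual review tooling:

```python
from dataclasses import dataclass


# Hypothetical encoding of the three Design Behaviourism checks.
# Field names and the example values are illustrative only.
@dataclass
class BehaviourismReview:
    capability_delta: str   # what the user can do afterwards that they couldn't before
    delta_measurable: bool  # is that delta observable as behaviour, not intent?
    reversible: bool        # could the system be replaced without losing the capability?
    attention_cost: str     # what the user must invest to learn the system
    cost_documented: bool   # is that cost named up front?

    def ships(self) -> bool:
        # A system ships only if all three checks pass.
        return self.delta_measurable and self.reversible and self.cost_documented


review = BehaviourismReview(
    capability_delta="small team sustains a weekly cadence across dozens of subjects",
    delta_measurable=True,
    reversible=True,
    attention_cost="animators learn the modular asset system",
    cost_documented=True,
)
print(review.ships())  # True: all three checks pass
```

The point of the structure is that each check is a recorded claim, not a vibe: a system whose delta can't be measured, whose capability evaporates if the system is removed, or whose attention cost goes unnamed fails the gate automatically.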

Applied: education at scale

The clearest applied case is the Crash Course production pipeline Tyler architected at Thought Café (16M+ subscribers, 60+ subjects, weekly cadence sustained for nine years — see the case study). Writers and animators stayed the editorial decision-makers; the pipeline handled the asset reuse, the templating, and the production-line work that previously consumed senior animator hours per episode.

The Design Behaviourism tests pass cleanly. Capability delta: a small team could sustain a weekly cadence across dozens of subjects. Reversibility: the editorial taste lived in the human team; the pipeline could be replaced without losing the show. Attention cost: animators had to learn the modular system, and the framework documented that cost up front rather than hiding it.

The pattern carries into how the studio now scopes Summit and Constellation engagements: build the production-line work into the system, leave the editorial work with the humans, and name the attention cost honestly.

Applied: health and longevity

Tyler serves as Innovation Lead at BEKIN Health, where the framework gets tested against a harder constraint: clinical settings punish replacement systems faster than entertainment ones do. A health tool that absorbs clinician judgment doesn't just fail the test; it causes harm.

The general principle that carries across to postreality engagements in the health tier: instrument the human's decision, don't replace it. Agentic systems can surface the relevant prior cases, the relevant biomarker context, the relevant privacy constraints. They can't make the clinical call. The framework keeps the build honest about which side of that line each feature sits on.

What the framework rules out

Plenty. Autonomous decision-making systems where the human is decorative. Engagement-optimization wrappers that mistake time-on-site for capability. AI-coach products that simulate expertise the user never acquires. The studio doesn't take that work, and the framework is the reason we have a clean answer when asked why.

We'd rather lose pitches than ship a system that quietly shrinks the people who use it.

What it rules in

Immersive experiences that teach a measurable skill. Agentic pipelines that hand the operator more leverage and more visibility. Knowledge graphs that survive team turnover by encoding what people learned, not by replacing the learning. Spectacle work where the moment expands what an audience knows is possible.

Design Behaviourism is the studio's bet that the durable market is the one where buyers can point at people who got better at their work. The framework is how we keep showing up to make that bet.

Where the framework is still being tested

The framework isn't finished. Two open questions the studio hasn't fully answered.

The first: how to measure capability delta when the delta itself is long-arc. Immersive training that pays off six months later isn't measurable inside an engagement window. We've taken to writing the predicted delta into the engagement contract with an explicit follow-up clause. It's expensive to honour and embarrassing when the follow-up shows nothing. We do it anyway because the alternative is unverifiable claims.
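One way to picture the follow-up clause is as a record that holds the predicted delta open until the agreed check date. A hedged sketch; the class, field names, and example values are invented for illustration, and the studio's actual contract structure is not described beyond the clause itself:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional


# Illustrative record of a predicted capability delta written into an
# engagement contract and checked at an agreed follow-up date.
# All names and values here are hypothetical.
@dataclass
class DeltaClause:
    predicted_delta: str              # the capability the engagement claims to create
    follow_up: date                   # when the claim gets checked
    observed: Optional[bool] = None   # unknown until the follow-up happens

    def verdict(self) -> str:
        if self.observed is None:
            return "pending follow-up"
        return "delta confirmed" if self.observed else "delta not observed"


clause = DeltaClause(
    predicted_delta="operators run the pipeline without vendor support",
    follow_up=date(2026, 6, 1),
)
print(clause.verdict())  # prints "pending follow-up"
clause.observed = True
print(clause.verdict())  # prints "delta confirmed"
```

The expensive part the text names is exactly the third state: "delta not observed" is a possible, recorded outcome, which is what makes the claim verifiable rather than decorative.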

The second: how to price work where the value is "the user got better" versus "the system shipped." Industry expects per-deliverable pricing. Capability-delta pricing puts both sides of the table on the hook for the outcome. We're experimenting with split structures. The honest answer today is that we don't have a clean formula, and we say so to clients who ask.

The framework keeps the questions visible. That's most of what a framework should do.