Building Within Constraints
When Your Platform Won't Do What You Need, the Response Reveals Everything
First Edition — 28 March 2026 · Greg Williams · steko.co.nz/thinking
When AI platforms constrain persistent memory, file access, and autonomous execution, practitioners face a design decision that most don't recognise as a decision at all: build workarounds that get the job done today, or build governed structures that produce the full target capability within current constraints — structures designed to retire gracefully when the platform matures. This paper examines seven such structures — circumventers — developed across 131 sessions of governed AI-assisted methodology development. Each circumventer addresses a specific platform constraint, produces the target capability rather than a degraded approximation, depends on and enables other circumventers in a dependency chain, and carries a designed upgrade path. Six of seven upgrade paths are now beginning to activate. The distinction between workarounds and circumventers — between technical debt and architectural capital — is the design decision that determines whether a practice survives its tools' evolution or is buried by it.
The Constraint Problem Your AI Consultant Won't Lead With
If you're working with AI tools in any serious professional capacity — or commissioning someone who is — there are things you should know before the investment is committed, not after.
AI platforms constrain persistent memory, file access, and autonomous execution. Every session starts cold. The system that helped you build a complex deliverable yesterday has no memory of doing so today. There is no file system it can check, no state it can verify, no way to pick up where it left off without being told — in detail — what happened before. For short, self-contained tasks, this barely matters. For anything that accumulates knowledge across sessions — methodology development, programme governance, complex analysis that builds on prior work — the constraints are structural.
The constraint conversation happens daily in technical communities. The problem is not that nobody talks about it. The problem is that it happens in the wrong room — between practitioners after the engagement has started, not between the consultant and the client before the scope is agreed.
The consequence is predictable. The client commissions AI-assisted work expecting continuity and accumulated learning. The practitioner knows the platform provides neither by default. The gap gets filled with workarounds — ad-hoc solutions that produce a good-enough approximation of the missing capability. These workarounds function. They get the job done. And they accumulate quietly as liabilities.
The honest conversation — the one that should happen before the scope is signed — goes something like this: the platform constrains us in specific ways; here is how we've designed around those constraints; here is what that design costs; here is what it protects you from; and here is what happens when the platform matures and those constraints lift. That conversation looks different depending on whether the practitioner has designed around the constraints or merely worked around them. The distinction is not rhetorical. It is architectural. And it is the subject of this paper.
A scoping note: not every engagement needs this level of investment. For short, self-contained tasks — a one-off analysis, a document review, a single-session deliverable — lightweight approaches may be entirely appropriate. The constraint problem becomes structural when work accumulates across sessions, when continuity matters, and when the stakes justify governed infrastructure. This paper addresses that context.
Technical Debt Has a Mirror Image
Ward Cunningham coined the technical debt metaphor in 1992, in an experience report at the OOPSLA conference. His insight was deliberately financial: shipping code that reflects an incomplete understanding of the problem is like borrowing money. A little debt speeds development, so long as it is paid back promptly. Martin Fowler extended the concept in 2009 with a two-dimensional classification — deliberate versus inadvertent, reckless versus prudent — that made the metaphor operational.
Technical debt is universally understood as a liability. But what happens when the constraint is not a shortcut? What happens when the platform genuinely cannot do what you need — not because you chose the quick path, but because the path doesn't exist yet?
The response to a platform constraint reveals the design philosophy of the practice. One response is to build a workaround: an approximation that gets close enough. Copy the key decisions into a document. Re-paste the context each session. Manually verify what the system remembers. This works. It is also debt — not because it was a shortcut, but because it creates a maintenance obligation that grows with every session. The interest compounds.
The other response is what this paper calls a circumventer. Four properties distinguish a circumventer from a workaround:
- It addresses a platform constraint blocking a specific capability class — not a quality shortcut but a genuine absence.
- It produces the target capability within current constraints. Not a degraded approximation. The actual capability.
- It has a clean upgrade path: either it retires gracefully or it becomes foundational when the platform matures. The path is designed, not hoped for.
- It depends on and enables other circumventers. The chain is the architecture. Independent workarounds are independent liabilities.
Where technical debt represents a compromise you intend to unwind, a circumventer represents an investment that produces returns now and compounds as the platform evolves. The difference only becomes visible when the platform changes — and by then, the practice carrying workarounds has maintenance debt it cannot easily unwind, while the practice carrying circumventers has architectural capital that activates.
Seven Structures That Produce What the Platform Won't
The seven circumventers documented here were built individually across 131 sessions to solve specific operational problems. The pattern — seven structures with four shared properties forming a dependency chain — was identified through retrospective synthesis of 3,980 lines of change history across 52 session addenda.
Circumventer 1: Session Continuity Chain
The platform has no persistent memory between sessions. A structured Session Handoff document carries state. A Session Context Package carries standards. A calibrated initiation prompt bootstraps every new session to full operating capability within the first exchange. Evidence: 131 consecutive sessions of state continuity with no chain breaks. Upgrade path: the handoff thins progressively as the platform provides persistent state. Every subsequent circumventer depends on this one.
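A minimal sketch of what such a handoff record might look like. The field names and markdown layout are illustrative assumptions, not the practice's actual Session Handoff schema; the point is that state travels as a document the next session reads first.

```python
from dataclasses import dataclass, field

@dataclass
class SessionHandoff:
    """Hypothetical handoff record carried between sessions."""
    session_number: int
    decisions: list = field(default_factory=list)
    open_items: list = field(default_factory=list)
    next_actions: list = field(default_factory=list)

    def to_markdown(self) -> str:
        # The next session's initiation prompt ingests this document
        # to bootstrap to full operating capability.
        lines = [f"# Session {self.session_number} Handoff"]
        lines += [f"- DECISION: {d}" for d in self.decisions]
        lines += [f"- OPEN: {o}" for o in self.open_items]
        lines += [f"- NEXT: {n}" for n in self.next_actions]
        return "\n".join(lines)

handoff = SessionHandoff(
    131,
    decisions=["adopt altitude model v2"],
    open_items=["verify graph edges"],
    next_actions=["begin enrichment batch 4"],
)
print(handoff.to_markdown())
```

The upgrade path is visible in this shape: as the platform provides persistent state, fields migrate out of the document and the handoff thins.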
Circumventer 2: Append-Only Change Register
The platform cannot maintain a growing register across sessions. Each session produces an addendum containing that session's changes. The chain of addenda is the history. Evidence: 1,341 registered changes with continuous numbering — no gaps, no duplicates, no breaks. Upgrade path: transitions to a queryable database table.
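The register's integrity claim (no gaps, no duplicates, no breaks) is a checkable invariant. A sketch of that check, assuming change identifiers are plain sequential integers, which is an assumption about the numbering scheme rather than the practice's documented format:

```python
def verify_register(change_ids: list) -> bool:
    """Check the append-only register invariant: continuous
    numbering with no gaps and no duplicates, where change_ids is
    the concatenation of every session addendum, in order."""
    if not change_ids:
        return True
    expected = range(change_ids[0], change_ids[0] + len(change_ids))
    return list(change_ids) == list(expected)

assert verify_register([1, 2, 3, 4, 5])        # healthy chain
assert not verify_register([1, 2, 4, 5])       # a gap: a lost change
assert not verify_register([1, 2, 2, 3])       # a duplicate registration
```

Because each addendum is append-only, this verification can run over the whole chain at any point without rewriting history.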
Circumventer 3: Library Decomposition Engine
The methodology library exceeds the AI's context window. A governed processing engine decomposes library-wide operations into bounded session batches, with processing state transferring between sessions via a run orchestration document. Evidence: three complete runs, the latest producing a fully verified library baseline of 55 documents. Upgrade path: larger context windows change batch sizing economics; the decomposition governance survives.
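The decomposition step can be sketched in a few lines. The batch size and document naming are assumptions for illustration; in the practice, the plan plus a cursor is the state the run orchestration document carries between sessions.

```python
def plan_batches(doc_ids, batch_size):
    """Decompose a library-wide operation into bounded session
    batches, each small enough to fit a single context window."""
    return [doc_ids[i:i + batch_size]
            for i in range(0, len(doc_ids), batch_size)]

library = [f"DOC-{n:03d}" for n in range(1, 56)]  # 55 documents
batches = plan_batches(library, batch_size=8)
print(len(batches))  # → 7: six full batches of 8 and a final batch of 7
```

Larger context windows change the batch-size arithmetic but not the governance: bounded batches, explicit state transfer, verified completion.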
Circumventer 4: Structural Knowledge Graph
No native capability exists to see how 55+ documents relate to each other. A machine-generated graph maps every document, relationship, and dependency. Evidence: 55 nodes and 489 edges, verified through spot-check and library reconciliation. Upgrade path: future tenant of a queryable graph database.
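At its core the graph is an adjacency structure over documents with typed edges. The document identifiers and relationship types below are invented examples, not entries from the actual 489-edge graph:

```python
from collections import defaultdict

# Documents as nodes, typed relationships as directed edges.
edges = [
    ("DOC-001", "DOC-014", "depends-on"),
    ("DOC-001", "DOC-027", "references"),
    ("DOC-014", "DOC-027", "supersedes"),
]

graph = defaultdict(list)
for src, dst, rel in edges:
    graph[src].append((dst, rel))

# A question the flat library cannot answer without the graph:
# what does DOC-001 point at, and how?
print(graph["DOC-001"])
```

As a future tenant of a queryable graph database, this structure maps directly onto nodes and labelled relationships, so the migration is a change of storage, not of model.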
Circumventer 5: Altitude-Governed Loading Model
Context window is finite. A four-altitude model loads each document at the resolution appropriate to the current task — from not-loaded through compressed summary to full document. Evidence: a pilot measurement demonstrated 48.4% context savings without information loss. Upgrade path: database-backed queries replace document-based loading decisions.
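A sketch of the four-altitude idea. The paper names only the endpoints (not-loaded through compressed summary to full document), so the intermediate tier name, the relevance signal, and the thresholds here are all assumptions:

```python
from enum import Enum

class Altitude(Enum):
    NOT_LOADED = 0   # document known to exist; zero context cost
    TITLE_ONLY = 1   # identifier and one-line purpose
    SUMMARY = 2      # compressed summary
    FULL = 3         # entire document in context

def choose_altitude(relevance: float) -> Altitude:
    """Illustrative loading policy: relevance of a document to the
    current task decides the resolution at which it is loaded."""
    if relevance >= 0.8:
        return Altitude.FULL
    if relevance >= 0.5:
        return Altitude.SUMMARY
    if relevance >= 0.2:
        return Altitude.TITLE_ONLY
    return Altitude.NOT_LOADED
```

The measured 48.4% saving comes from most documents sitting at the lower altitudes most of the time; only task-critical documents pay full context cost.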
Circumventer 6: Retention Compression Tiers
The practice's own quality instruments grow with the library. The altitude model is applied reflexively to the practice's own instruments — stable items compress into batch verification while recent items remain at individual resolution. Evidence: 92 anchors managed within approximately 5–8% of context budget, with 108 consecutive perfect scores. Upgrade path: database-backed verification against held anchor definitions.
Circumventer 7: Batch Enrichment with Machine Verification
No file system access, no autonomous execution, no way to verify output integrity at scale. Library-wide data transformation with machine-verifiable integrity guarantees — seven checksums per document, two of which are genuine integrity proofs. Evidence: 37 documents processed in a single session, 37 of 37 passed verification. Upgrade path: when file system access arrives, the checksums become the agent's self-audit mechanism.
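Two of the seven per-document checks can be sketched as genuine integrity proofs: a content hash and a line count. The other five checks are not reproduced here, and this pairing is an illustration of the mechanism rather than the practice's exact checksum set:

```python
import hashlib

def integrity_checksums(body: str) -> dict:
    """Machine-verifiable integrity proofs for one document."""
    return {
        "sha256": hashlib.sha256(body.encode("utf-8")).hexdigest(),
        "line_count": body.count("\n") + 1,
    }

def verify(body: str, recorded: dict) -> bool:
    return integrity_checksums(body) == recorded

doc = "# DOC-001\nEnriched content line one.\nEnriched content line two."
recorded = integrity_checksums(doc)

assert verify(doc, recorded)             # unchanged document passes
assert not verify(doc + " ", recorded)   # any drift is caught
```

The design intent survives the upgrade: when file system access arrives, the same checksums become the agent's self-audit mechanism rather than a human-run check.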
The Chain, Not the Parts
Listing seven circumventers is a taxonomy. Understanding how they connect is architecture.
The dependency chain is sequential in construction but concurrent in operation. Circumventer 1 (session continuity) enables everything — without state transfer between sessions, nothing else functions. Circumventer 2 (change register) enables Circumventer 3 (library decomposition), which produces the baseline library that Circumventer 4 (knowledge graph) indexes. Circumventer 4 provides the structural map that Circumventer 5 (altitude model) uses for loading decisions. Circumventer 5 creates the context headroom that makes Circumventer 7 (batch enrichment) feasible. Circumventer 6 (compression tiers) is the reflexive application: the methodology's own instruments governed by the same principles as everything else.
During any given session, all seven are active simultaneously. The session opens with Circumventer 1 bootstrapping state. Circumventer 2 records every change. Circumventer 5 governs every loading decision. The architecture is lived, not loaded.
This concurrency distinguishes a dependency chain from a collection of independent solutions. Independent workarounds can be adopted or abandoned individually — which makes them feel flexible but means they don't compound. A dependency chain compounds by design. Each circumventer's output becomes another circumventer's input.
An honest note: the individual circumventers were built to solve immediate problems. The dependency chain was identified through retrospective synthesis, not planned through top-down architecture. Some connections were designed at the time — the handoff document was always intended to thin as the platform matured. Others were discovered — the convergence between task management, portfolio pipeline, and platform architecture was not planned. The distinction between designed and discovered connections is itself part of the transferable pattern. Claiming every connection was foreseen would be neater but untrue.
What Happens When the Constraints Lift
The seven circumventers were designed with explicit upgrade paths — the retirement mechanism built into the original structure. That design decision is what separates circumventers from workarounds.
The platform constraints documented in Section 1 are now lifting. The Model Context Protocol — an open standard for connecting AI systems with external data sources, introduced by Anthropic in November 2024 and subsequently adopted by every major AI provider — provides the database connectivity that several circumventers were designed to use. The protocol was donated to the Linux Foundation in December 2025, co-founded by Anthropic, Block, and OpenAI, with support from Google, Microsoft, AWS, Cloudflare, and Bloomberg.
Six of seven circumventer upgrade paths now have concrete activation mechanisms — from session continuity moving to database-backed state, through the change register transitioning to queryable tables, to batch enrichment gaining direct file system writes. Each transition is governed by a trust model calibrated across 88 verification cycles: a capability must demonstrate sufficient reliability before its document-based predecessor retires. A capability that fails after promotion can be demoted without breaking the methodology. The practice operated for over 100 sessions on documents alone. The database is an enhancement, not a dependency. If it fails, the markdown files are still there.
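A sketch of what such a trust gate might look like in code. The window length and reliability threshold are invented placeholders, not the methodology's calibrated values from its 88 verification cycles:

```python
def promotion_ready(outcomes: list, window: int = 50,
                    threshold: float = 0.98) -> bool:
    """Promote a new capability over its document-based predecessor
    only when the recent verification window is both long enough and
    reliable enough; otherwise the document path stays primary.
    outcomes is the chronological list of pass/fail results."""
    recent = outcomes[-window:]
    if len(recent) < window:
        return False  # insufficient evidence: no promotion yet
    return sum(recent) / len(recent) >= threshold

assert not promotion_ready([True] * 30)                   # too little evidence
assert promotion_ready([True] * 50)                       # clean window promotes
assert not promotion_ready([True] * 48 + [False] * 2)     # 0.96 falls short
```

Demotion is the same gate run continuously after promotion: a capability that stops clearing the threshold falls back to its document-based predecessor without breaking the methodology.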
The full analysis of each upgrade path and the governance model for progressive capability promotion is available in the complete paper — request the full paper here.
The Honest Part
This paper has described seven circumventers, a dependency chain, and an upgrade pathway. Here is what it has not claimed, and what remains unproven.
Not everything was designed. The dependency chain was discovered through retrospective synthesis, not planned through top-down architecture. Some upgrade paths were designed from the outset. Others became apparent only when the platform matured enough to reveal them.
The upgrade paths are designed but mostly not yet activated. The infrastructure that would activate them is provisioned and in trial. The trust model that would govern the transition is in its calibration phase — 88 verification cycles of operational data, but not yet at the maturity threshold the methodology requires.
The workaround pattern in the industry is observational, not empirical. Multiple industry sources confirm ad-hoc AI workflows as the prevailing mode. This is consistent evidence from credible sources, but it is not a controlled study.
The circumventer concept is our framing. Cunningham and Fowler provide the established vocabulary for technical debt. The extension — circumventer as the inverse pattern, architectural capital rather than liability — is a contribution of this practice, not an established term. We believe the distinction is genuine and useful. Time and external scrutiny will determine whether it earns wider adoption.
What comes next. New constraints emerge as the platform matures — authentication governance, autonomous execution boundaries, and the challenge of maintaining quality standards when the AI has the capability to act without supervision. A trust model developed from clinical AI research — four empirically validated failure modes that apply without modification to methodology infrastructure — provides the diagnostic framework for these emerging challenges. One failure mode in particular — testing for the appearance of quality rather than actual quality — has already killed an architecture proposal before it was built.
This paper is a First Edition. The upgrade paths described in Section 5 will either activate as designed or they won't. When they do — or when they reveal something unexpected — a Second Edition will document what actually happened. The commitment is to the reader: we will come back, we will report honestly, and the evidence trail will be continuous.
This is the summary. The full analysis goes deeper.
The complete research paper includes detailed evidence for each circumventer, the full dependency chain analysis, per-circumventer upgrade path specifications, the trust model governance architecture for progressive capability promotion, and the methodology for applying this framework to your own practice. If this thinking is relevant to what you're working on, the full paper is available on request.
Request the full paper →
Sources & Provenance
Cunningham, Ward. "The WyCash Portfolio Management System." Addendum to the Proceedings of OOPSLA 1992. ACM, 1992.
Fowler, Martin. "TechnicalDebtQuadrant." martinfowler.com. 14 October 2009.
Ramaswamy, S., Klang, E., Nadkarni, G., et al. "AI Triage in Emergency Medicine." Nature Medicine, 2026. DOI: 10.1038/s41591-026-04297-7.
Anthropic. "Introducing the Model Context Protocol." anthropic.com. November 2024.
Anthropic. "Donating the Model Context Protocol and Establishing the Agentic AI Foundation." anthropic.com. December 2025.
Colophon
Edition: First Edition — 28 March 2026
How this article was produced
This article was produced under a governed production method for research articles (RPP-001 v0.1.0). The protocol requires argument definition, source inventory with gap analysis, external enrichment, structured section architecture, adversarial review, and substance traceability verification before publication.
What the practitioner brought: The practitioner directed the argument, shaped the editorial framing (including the critical insight that the constraint problem is not hidden knowledge but an information asymmetry between practitioners and clients), selected the article sequence, designed the production method, conducted independent citation verification, and approved every design decision from story selection through publication.
What the production engine brought: Research synthesis across 131 sessions of library evidence, source inventory and gap analysis, draft production at publication depth, concurrent bibliography construction, and structured adversarial review. The engine identified the Cunningham/Fowler connection to the circumventer concept and executed the three-pass review protocol.
Powered by Claude Opus 4.6 · RPP-001 v0.1.0
| Check | Result |
| --- | --- |
| Hostile reader review | 3 archetypes tested. All DEFENSIBLE post-revision. |
| Substance traceability | 94.4% of factual claims verified (17/18). One flag resolved as non-issue. |
| Practitioner spot-check | 3 citations independently verified. Pending practitioner action. |
| Register compliance | PASS — lens-not-subject, psychological register, brand consistency. |
We have made best efforts to ensure the accuracy and integrity of this article. If you believe any claim, citation, or finding requires correction, we welcome that feedback at [email protected] and will undertake to review and respond accordingly.