THE OBSERVATION
In March 2026, MIT Technology Review and Rimini Street convened a live webinar titled “Architecting Systems of Action,” examining how organizations should deploy AI, APIs, and large language models on top of existing enterprise resource planning (ERP) systems. The panel included enterprise architects and technology strategists addressing an audience of sixty-four professionals. The conversation surfaced a structural contradiction that has direct implications for any institution considering AI adoption.
The panelists advanced three claims in rapid succession. First, that AI guidelines should be built into the code, not treated as external policy documents. Second, that AI should be layered on top of existing systems without disruption. Third, that AI both saves time on existing tasks and modernizes systems. These three claims cannot coexist without a governance framework that none of the panelists described, and that no audience question, save one, attempted to surface.
If AI guidelines should be in the code, then someone must have the authority to write those guidelines and the institutional mandate to bind the organization to them. That authority was never identified.
When asked directly who holds the authority to define the guidelines that get coded into AI systems, the panel declined to answer. That silence is the finding of this paper.
THE CONTRADICTION
Enterprise AI strategy currently operates on an unexamined assumption: that AI deployment is an optimization problem. The system works; the question is how to make it work faster, smarter, or more efficiently. This assumption permits a “layer on top” architecture because a stable base worth preserving is presumed to be running underneath. In the ERP example, the ERP handles payroll, procurement, and logistics; the AI enhancement does not replace that logic, it accelerates it.
This assumption fails in two specific ways, both of which were visible in the webinar.
First, the claim that AI modernizes systems contradicts the claim that AI layers on top without disruption. Modernization is disruption. If the system is genuinely being modernized, something foundational has changed. If nothing foundational has changed, the organization has automated a task and labeled it transformation. The word “modernize” performed enormous rhetorical work in the webinar without ever receiving a concrete definition.
Second, the panel listed five priorities for getting AI deployment right: transactional integrity between AI intelligence and ERP execution, careful automation, policy as code, data sovereignty, and business knowledge as the key differentiator. Every item is a technical or operational constraint. Not one addresses the people the system acts upon, the institutional purpose the system serves, or the accountability structure when the system fails.
The panel told the audience not to be obsessed with technology, then gave five answers drawn entirely from the technology domain. They said guidelines should be in the code, yet described no process for determining whose guidelines, derived from what authority, reviewed by what governance body. They acknowledged that they cannot yet truly measure ROI and, even so, continued to recommend deployment.
Perhaps most revealingly, the panel drew a distinction between deterministic intelligence and probabilistic intelligence. This is a technically important observation: deterministic systems produce predictable, repeatable outputs given the same inputs, while probabilistic systems, including large language models, produce variable outputs shaped by statistical inference. The governance implications of this distinction are profound and were left entirely unaddressed.
“If the intelligence layer is probabilistic and the execution layer is deterministic, then the governance architecture must mediate between a system that cannot guarantee consistent recommendations and a system that will execute whatever recommendation it receives.”
Policy-as-code becomes exponentially more complex when the intelligence producing the input to that code is, by design, non-deterministic. The panel treated this distinction as a technical observation. It is, in fact, the strongest argument for why governance architecture must precede deployment.
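To make the architectural consequence concrete, consider a minimal sketch of such a boundary-condition gate. Everything in it is illustrative rather than drawn from any vendor's system: the procurement scenario is hypothetical, and the names PolicyEnvelope, Recommendation, and gate are invented for this paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PolicyEnvelope:
    """Boundary conditions every probabilistic recommendation must satisfy
    before the deterministic execution layer may act on it."""
    max_order_value: float          # hard ceiling set by institutional authority
    approved_vendors: frozenset     # allowlist maintained outside the model
    human_review_threshold: float   # above this, a human must sign off

@dataclass(frozen=True)
class Recommendation:
    """One of many possible outputs the intelligence layer can produce
    for the same input; the gate must hold for the whole range."""
    vendor: str
    order_value: float

def gate(rec: Recommendation, policy: PolicyEnvelope) -> str:
    # Hard rejection: the recommendation has left the envelope entirely.
    if rec.vendor not in policy.approved_vendors:
        return "REJECT: vendor not approved"
    if rec.order_value > policy.max_order_value:
        return "REJECT: exceeds hard ceiling"
    # Soft boundary: route to a human gate rather than straight to execution.
    if rec.order_value > policy.human_review_threshold:
        return "HOLD: human review required"
    return "EXECUTE"

policy = PolicyEnvelope(50_000.0, frozenset({"acme", "globex"}), 10_000.0)
print(gate(Recommendation("acme", 12_500.0), policy))  # HOLD: human review required
print(gate(Recommendation("initech", 900.0), policy))  # REJECT: vendor not approved
```

The design choice worth noting is that the envelope is defined outside the model and applied identically to every sampled output: variance in the intelligence layer cannot move the boundary.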
THE GOVERNANCE GAP
The missing layer in this conversation is institutional governance architecture. Not policy documents. Not best-practice checklists. Not AI ethics statements posted to organizational websites. Architecture: the structural conditions that determine what an AI system is permitted to do, who is accountable when it does something it should not, and how the institution knows whether the system produced the outcome it was adopted to produce.
Enterprise can survive this gap longer than other sectors because the stakes are operational. A poorly automated procurement decision costs money. It can be reversed, corrected, absorbed. The system has tolerance for error because the error domain is financial.
Education cannot survive this gap. A poorly automated placement decision, an algorithmically generated learning pathway, or an AI-driven assessment that mischaracterizes student understanding acts on a child’s developmental trajectory. The error domain is not financial. It is human. The consequences compound rather than resolve, and in most cases, the institution has no measurement architecture to detect the error in the first place.
The panelists themselves identified the risk without recognizing its structural implications. They observed that the greater the autonomy granted to the AI, the greater the institutional risk. They further acknowledged that AI cannot explain why it made a given decision, which means organizations need agents to audit the agents. Both observations are correct. Both describe a governance architecture that does not yet exist in their framework. If autonomy scales risk, then someone must hold the authority to set the autonomy threshold, and that authority must be encoded into the system, not left to individual operators. If AI decisions are opaque enough to require automated auditing, then the auditing architecture must be designed before deployment, not retrofitted after failure.
This is the governance gap: the distance between an institution’s stated commitment to responsible AI use and its structural capacity to enforce that commitment at the system level. When that distance is large, the institution is engaged in what this author has termed Ghost Transformation™: the language of change is present, the appearance of change is constructed, but the structural conditions underneath are actively protected from examination.
THE K-8 PROBLEM IS DISTINCT
The governance gap is present across sectors. It is most acute, and most urgent, in K-8 learning environments, for three reasons that do not apply to enterprise or higher education contexts.
Legal exposure is categorically higher. The Children’s Online Privacy Protection Act (COPPA) creates a hard threshold at age thirteen that most AI governance frameworks were not designed to address. The frameworks that do exist — FERPA, the Student Data Privacy Consortium (SDPC) agreements, state-level student privacy laws — were designed to protect student data and records. They are necessary. They do not reach the interaction itself: the sustained, real-time exchange between a child and an AI system. The governance gap for enterprise is institutional. For K-8, it is also legal.
The interaction is dialogic, not transactional. AI deployment in K-8 learning environments increasingly involves direct, sustained conversation between a child and an AI system — not a student retrieving information from a search interface, but a child thinking aloud with a system designed to respond to them as an individual. This is categorically different from using AI to personalize content delivery or accelerate grading workflows. The governance frameworks designed for transactional AI do not transfer to dialogic AI. A content recommendation algorithm and a conversational dialogue partner require fundamentally different governance architectures, and current frameworks make no distinction between them.
The cognitive stakes are developmental. A child who spends formative learning years in poorly governed AI interactions is not only a compliance risk. The interaction shapes how they think, what they expect from intellectual exchange, and what they understand themselves to be capable of as a thinker. The governance gap here is not administrative — it is developmental. The consequences of insufficient governance in a K-8 AI deployment are not recoverable through a system patch or a policy revision. They accumulate in the child.
No existing framework addresses the dialogic, developmental, and legal conditions of AI interaction with children in K-8 learning environments simultaneously. SDPC governs data. COPPA governs collection. FERPA governs records. None govern what happens in the space between a child and an AI. That is the gap this paper names and the gap the Student AI Learning Compact was designed to fill.
TWO POSTURES, ONE FRAMEWORK
The solution is not to reject enterprise AI architecture. It is to recognize that two fundamentally different institutional postures exist, each requiring a different entry point into the same governance framework.
| POSTURE A — Optimization | POSTURE B — Readiness |
|---|---|
| The system works. AI enhances execution speed, decision quality, and process efficiency. | The system is structurally incomplete. AI adoption exposes architectural gaps rather than enhancing existing capacity. |
| Governance question: Where does the AI layer sit, what data does it access, and what enforcement gates prevent unauthorized action? | Governance question: Does the institution have the structural conditions to adopt any tool responsibly? |
| Applicable to enterprise ERP environments with stable architectures. | Applicable to education, healthcare, public sector, and any institution where the stakes are developmental rather than operational. |
Enterprise typically enters at Posture A. Education must enter at Posture B. The failure mode visible in the MIT/Rimini Street webinar is not that enterprise gets this wrong; it is that enterprise frameworks are exported to sectors that require Posture B without the translation layer that would make them structurally appropriate.
The panelists stated that systems must be ready for deployment before they are deployed if disruption is to be avoided. This is Posture B. They described the readiness requirement and then spent the remainder of the conversation operating entirely within Posture A.
A THREE-TIER GOVERNANCE ARCHITECTURE
Regardless of which posture an institution enters from, the governance architecture operates in three tiers.
Tier 1: Governance Logic. Before any code is written, the institution answers foundational questions: What decisions is this system making or informing? Who is accountable when the system is wrong? What data is it accessing, and who consented to that access? What institutional purpose does this deployment serve, and how will that purpose be measured? These answers become configuration constraints that bind the system’s behavior. They are not policy paragraphs. They are system requirements.
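What "system requirements" means in practice can be sketched briefly. The following is illustrative only, with field names invented for this paper rather than taken from any published schema; the structural claim is that a deployment unable to populate every field has not finished its governance logic and should not start.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GovernanceConfig:
    """Tier 1 answers expressed as machine-readable constraints.
    A deployment that cannot populate every field has not completed
    governance logic and should not start."""
    decisions_informed: tuple[str, ...]   # what the system decides or informs
    accountable_role: str                 # who answers when it is wrong
    data_sources: tuple[str, ...]         # what it may access
    consent_basis: str                    # who consented, under what instrument
    stated_purpose: str                   # why it was adopted
    success_metric: str                   # how that purpose is measured

def validate(config: GovernanceConfig) -> None:
    # A blank answer to a foundational question blocks deployment.
    for field_name, value in vars(config).items():
        if not value:
            raise ValueError(f"Governance logic incomplete: {field_name} is unanswered")

config = GovernanceConfig(
    decisions_informed=("reading-group recommendation",),
    accountable_role="Director of Curriculum",
    data_sources=("formative assessment scores",),
    consent_basis="district data-sharing agreement, SDPC exhibit",
    stated_purpose="reduce time-to-intervention for struggling readers",
    success_metric="median days from flag to teacher contact",
)
validate(config)  # raises if any foundational question was skipped
```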
Tier 2: Code-Level Enforcement. The panelists were right: every policy should be code. But the inverse must also hold: every piece of code enacting a policy must trace back to an institutional authority that approved it. If a district policy states that AI cannot make final placement decisions, that constraint is a hard stop in the workflow, not a line in a handbook. If a company states that LLM outputs require human review before client delivery, that is a required gate in the API call chain. The ethical commitment becomes a technical specification. This tier is further complicated by the deterministic-probabilistic divide: because LLM outputs are inherently variable, enforcement gates must account for a range of possible AI recommendations rather than a single predictable output. Code-level enforcement in a probabilistic environment requires boundary conditions, not point constraints.
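A minimal sketch of that hard stop follows, using a hypothetical placement workflow; the class and method names are invented for illustration. The AI layer may write a proposal, but the finalize path rejects any non-human actor.

```python
from enum import Enum

class Actor(Enum):
    AI = "ai"
    HUMAN = "human"

class PlacementWorkflow:
    """Hard stop: the AI may propose, only a human may finalize.
    The constraint lives in the workflow, not in a handbook."""
    def __init__(self) -> None:
        self.proposal: str | None = None
        self.final: str | None = None

    def propose(self, actor: Actor, placement: str) -> None:
        # Either actor may generate a proposal.
        self.proposal = placement

    def finalize(self, actor: Actor, placement: str) -> None:
        if actor is not Actor.HUMAN:
            # The policy "AI cannot make final placement decisions,"
            # enforced as code rather than as guidance.
            raise PermissionError("Final placement requires a human actor")
        self.final = placement

wf = PlacementWorkflow()
wf.propose(Actor.AI, "grade-level math, tier 2 support")
try:
    wf.finalize(Actor.AI, "grade-level math, tier 2 support")
except PermissionError as err:
    print(err)  # the hard stop fires
wf.finalize(Actor.HUMAN, "grade-level math, tier 2 support")
```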
In K-8 dialogic AI specifically, Tier 2 operates with an additional architectural requirement: the governance layer must simultaneously enforce child safety standards and preserve the integrity of the learning interaction. These are not competing requirements. They are the same design problem solved at once. An AI dialogue partner that cannot stray into harmful content, cannot simulate a relationship, and cannot deliver answers rather than questions is not merely a safer system — it is a better pedagogical instrument.
“Governance in the code is not a constraint on learning. It is its precondition.”
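The shape of that dual enforcement can be sketched. The validators below are crude stubs standing in for what would, in practice, be classifier-backed checks; the names and fallback text are invented. What matters architecturally is that every candidate response passes through the same gate before it reaches the child, and that the pedagogical constraint (questions, not answers) sits in the same pipeline as the safety constraints.

```python
from typing import Callable

# Each validator returns None if the candidate response passes, or a
# reason string if it must be blocked. Real implementations would be
# classifier-backed; these stubs only show where the checks sit.
Validator = Callable[[str], str | None]

def no_direct_answer(text: str) -> str | None:
    # Pedagogical constraint: the partner asks, it does not tell.
    return None if text.rstrip().endswith("?") else "delivers an answer, not a question"

def no_relationship_language(text: str) -> str | None:
    banned = ("i love you", "best friend", "i miss you")
    return "simulates a relationship" if any(b in text.lower() for b in banned) else None

def respond(candidate: str, validators: list[Validator]) -> str:
    for check in validators:
        reason = check(candidate)
        if reason:
            # Blocked responses never reach the child; the system
            # regenerates or falls back to a safe prompt instead.
            return "What makes you think that? Tell me more."
    return candidate

checks: list[Validator] = [no_direct_answer, no_relationship_language]
print(respond("The answer is 42.", checks))                 # falls back to a question
print(respond("What pattern do you notice here?", checks))  # passes through
```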
Tier 3: Accountability and Measurement. The system logs what the AI recommended, what a human decided, and what the outcome was. This is not optional transparency. It is built into the data architecture. The panelists acknowledged this need when they observed that agents must audit other agents, because AI cannot explain its own decisions. But audit without governance is surveillance without purpose. The auditing architecture must be designed to answer a prior institutional question: what were we trying to produce, and did we produce it?
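A sketch of the minimal audit triple, with hypothetical field names: each record binds what the AI recommended, what a human decided, and what resulted, alongside the purpose the deployment was adopted to serve, so the audit can answer the prior institutional question rather than merely accumulate logs.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    """Tier 3: every decision the system touches leaves a triple the
    institution can interrogate later: recommended, decided, resulted."""
    session_id: str
    ai_recommendation: str
    human_decision: str
    observed_outcome: str      # filled in when the outcome is known
    intended_outcome: str      # the purpose the deployment was adopted for
    timestamp: str = ""

    def __post_init__(self) -> None:
        self.timestamp = datetime.now(timezone.utc).isoformat()

record = AuditRecord(
    session_id="s-0412",
    ai_recommendation="advance to unit 5",
    human_decision="repeat unit 4 with scaffolded practice",
    observed_outcome="unit 4 mastery reached in nine days",
    intended_outcome="mastery before advancement",
)
# Written into the data architecture, not to an optional log file.
print(json.dumps(asdict(record), indent=2))
```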
In education, this measurement requirement is not satisfied by engagement metrics, time-on-platform data, or content completion rates. These are measures of interaction volume, not learning quality. For AI systems that interact directly with K-8 students, measurement must reach inside the interaction itself — assessing the quality of the thinking the AI interaction produced, not merely whether the interaction occurred.
DialogIQ, the embedded assessment protocol within the ThinkBridge dialogic AI framework, is designed precisely for this purpose: to measure thinking quality in real time, inside the session, across six developmentally calibrated dimensions. It is one institutional response to the measurement gap this paper identifies.
THE QUESTION THAT WAS NOT ANSWERED
The question posed during the webinar was this: if AI guidelines should be built into the code, and AI is described as both a time-saving tool and a system modernizer, then where does the modernization actually happen, and who has the authority to define the guidelines that get coded in?
The panel’s decision not to engage this question is itself diagnostic. It reveals that the current enterprise AI conversation has no answer to the governance question because no one in the client organization has been given the authority to own it. The vendor cannot answer it because answering it would constrain the sale. The technology team cannot answer it because it is not a technology question. The legal team cannot answer it because it is not a compliance question. The C-suite cannot answer it because they have not been told it exists.
“You can’t measure ROI” is not a fact about AI. It is an abdication of institutional accountability. It means: we adopted this tool before we defined what it was supposed to produce, so we cannot evaluate whether it worked.
The role that is missing, in enterprise and in education alike, is the institutional governance architect: the function responsible for translating organizational commitments into system-level constraints, and for building the measurement architecture that determines whether those constraints are producing the outcomes the institution adopted the technology to achieve.
IMPLICATIONS
For enterprise leaders: The five priorities identified in the webinar are necessary but insufficient. Transactional integrity, careful automation, policy as code, data sovereignty, and business knowledge must be preceded by a governance determination: who in this organization has the authority to define the policies that become code, and by what process are those policies reviewed, updated, and enforced? Without that determination, policy-as-code is automation of unexamined assumptions.
For school districts and education institutions: The enterprise “layer on top” architecture does not transfer. Every AI adoption in education is a disruption because the foundational systems were not designed as integrated architectures. Before evaluating any AI tool, the district must establish institutional readiness: does the organization have the structural conditions, governance authority, data architecture, and measurement capacity to adopt responsibly? This is not a technology question. It is an institutional integrity question.
For districts deploying any AI system with student-facing components in K-8 environments, the governance infrastructure must address not only data privacy and security but the dialogic, developmental, and legal conditions specific to children. The Student AI Learning Compact (SALC), convened by the Fulcra Institute as a direct extension of this paper’s findings, provides the field’s first adoptable coalition framework for meeting that obligation. The full SALC governance framework and signatory structure are available at teachingwithai.org.
For technology vendors: Claiming that AI both saves time and modernizes systems without defining modernization is a credibility risk. Clients are beginning to recognize the gap between deployment and governance. The vendors that will lead the next phase of enterprise and institutional AI are those that build governance architecture into their offerings rather than leaving it as an exercise for the buyer. For vendors operating in K-8 learning environments, SALC compliance provides a verifiable, publicly documented standard against which governance claims can be evaluated — by districts, by families, and by the field.
For policymakers: The governance gap identified in this paper is not a gap that market forces will close. Vendors have limited incentive to constrain their own deployments. Districts lack the technical capacity to specify governance requirements in procurement. Families have no visibility into the AI interactions their children are having. Policy infrastructure — at the state and federal level — must establish minimum governance standards for AI systems deployed with children. SALC is a field-led attempt to establish those standards before policy mandates them. It will not be the last attempt. The question is whether the field builds the infrastructure before the harms require it.
WHAT COMES NEXT
This paper argues that governance architecture must precede deployment. It identifies the gap between that requirement and current practice. The natural question that follows is: what does adoptable governance infrastructure actually look like?
In direct response to the governance gap this paper names, the Fulcra Institute is convening the Student AI Learning Compact (SALC) — a coalition framework establishing adoptable standards for any AI system with a student-facing or student-affecting component in a K-8 learning environment. SALC is not a product, a certification program owned by a vendor, or a compliance overlay designed for lawyers. It is field infrastructure: publicly available, free for districts and schools to adopt, and designed to function alongside existing legal frameworks — COPPA, FERPA, SDPC — while reaching the governance territory those frameworks do not cover.
SALC governs across seven domains: Legal and Compliance, Content Guardrails, Interaction Boundaries (universal and dialogic), Safety and Escalation Protocols, Session Structure, Transparency Requirements, and Equity and Access Standards. Each domain carries grade-band-specific standards — calibrated to K–1, 2–4, 5–6, and 7–8 — reflecting the developmental reality that governance appropriate for a thirteen-year-old is not governance appropriate for a six-year-old.
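The grade-band calibration can be sketched structurally. To be clear about provenance: the four bands come from the SALC description above, but every standard and value below is a placeholder invented for illustration, not the published framework (which is available at teachingwithai.org).

```python
# Illustrative only: the grade bands are SALC's; the standards and
# values are hypothetical placeholders for this sketch.
SESSION_STANDARDS = {
    # Session Structure domain, calibrated per band (values invented here).
    "K-1": {"max_session_minutes": 10, "adult_copresence_required": True},
    "2-4": {"max_session_minutes": 15, "adult_copresence_required": True},
    "5-6": {"max_session_minutes": 20, "adult_copresence_required": False},
    "7-8": {"max_session_minutes": 25, "adult_copresence_required": False},
}

def standard_for(grade: int) -> dict:
    """Resolve a student's grade to the band-specific standard, so the
    same system enforces different constraints for a six-year-old and
    a thirteen-year-old."""
    band = ("K-1" if grade <= 1 else
            "2-4" if grade <= 4 else
            "5-6" if grade <= 6 else "7-8")
    return SESSION_STANDARDS[band]

print(standard_for(1))  # {'max_session_minutes': 10, 'adult_copresence_required': True}
print(standard_for(7))  # {'max_session_minutes': 25, 'adult_copresence_required': False}
```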
The ThinkBridge framework, developed by the author and held as Clairant IP, is the first SALC-compliant implementation. Its embedded assessment protocol, DialogIQ, provides rubric-based LLM measurement of thinking quality across six dimensions — Thought Fluency, Questioning, Argumentation, Discovery, Self-Discovery, and Integration — calibrated by grade band and designed for in-session capture rather than summative testing. ThinkBridge demonstrates the central argument of this paper in practice: that governance built into the code and genuine learning are not in tension. The constraints that make the system safe for children are the same constraints that make it pedagogically rigorous. The governance is the design.
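For readers who want the shape of in-session capture, a sketch follows. The six dimension names are DialogIQ's, as listed above; the scoring scale, data structure, and flagging logic are invented here for illustration and do not represent the actual protocol.

```python
from dataclasses import dataclass

# Dimension names from the DialogIQ description; everything else is assumed.
DIMENSIONS = ("thought_fluency", "questioning", "argumentation",
              "discovery", "self_discovery", "integration")

@dataclass
class InSessionScore:
    """One rubric capture taken inside a live session, not after it."""
    turn_index: int
    grade_band: str
    scores: dict  # dimension -> rubric level 0-4 (scale assumed)

    def flagged(self) -> list[str]:
        # Dimensions scoring low enough to inform the next AI turn.
        return [d for d, s in self.scores.items() if s <= 1]

capture = InSessionScore(
    turn_index=7,
    grade_band="5-6",
    scores={d: 3 for d in DIMENSIONS} | {"argumentation": 1},
)
print(capture.flagged())  # ['argumentation']
```

The point of the sketch is the capture location: the score is attached to a turn index inside the session, so the measurement can shape the interaction while it is still happening rather than summarize it afterward.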
The full SALC framework, signatory structure, and adoption process are available at teachingwithai.org. Districts, platforms, schools, and research partners are invited to join the coalition.
KEYWORDS