1. Introduction

The education technology sector has invested over $100 billion in the past two decades building tools that collect, analyze, and act on student data. The implicit assumption underlying this investment is that learning interactions produce data, and that better data leads to better learning outcomes. This assumption is so deeply embedded in the sector's architecture that it rarely surfaces as an assumption at all.

This paper argues that the assumption is wrong, not at the margins but at the foundation. What AI-mediated learning produces is not primarily data. It is something categorically different: the observable trace of cognitive reorganization occurring through dialogue. This paper names that trace process memory and argues that the distinction between data and process memory is not semantic but structural, with consequences for assessment design, product architecture, governance frameworks, and the fundamental question of who owns the artifacts of learning.

The argument proceeds through five sections. Section 2 establishes that educational assessment data captures performance events rather than learning, introduces a three-tier measurement taxonomy, and proposes a learning hypothesis with three operative variables. Section 3 establishes a structural-functional homology between synaptic pathway formation and cognitive pathway formation through AI-mediated dialogue. Section 4 establishes dialogue as the mechanism that simultaneously produces cognitive restructuring and makes it observable, presents peer-reviewed evidence of its absence in K-8 classrooms, and argues that AI mediation resolves a structural constraint that classroom architecture cannot. Section 5 formally defines the three terms coined in this paper. Section 6 develops the governance implications, proposing five principles of cognitive sovereignty and arguing for the expansion of the Student AI Learning Compact.

• • •
2. What Learning Produces: Process Memory, Not Data

2.1 The Snapshot Problem

Traditional assessment data captures a performance event: how a student interacted with a specific stimulus, under specific conditions, at a specific moment. A student who tests poorly on Tuesday might test well on Thursday. The conditions shifted: sleep, anxiety, breakfast, a fight with a friend, the phrasing of a question, the temperature of the room. All of these conditions are probabilistic, variable, unpredictable, and external to the cognitive process being measured. What the data actually represents is not what the student knows but what the student produced at the intersection of cognition and condition on that day.

This is not to say data is useless. Longitudinal accumulation of performance events yields pattern data: trend lines, performance distributions, interaction frequencies. These patterns are genuinely useful for identifying broad trajectories and flagging students who may need intervention. But pattern data derived from performance events shows how the student's outputs have changed in relation to stimuli over time. It does not show why. It does not show the internal reorganization that produced the changed output. The assessment instrument was designed to capture the endpoint, not the process.

No amount of data sophistication can overcome this limitation. More frequent assessments, more adaptive instruments, and more granular analytics produce sharper snapshots and better interpolation between them, but snapshots nonetheless. The learning happened between the frames, in the space no snapshot can reach.

2.2 Three Tiers of Educational Measurement

Tier 1: Performance data. What the sector currently collects. Output at a point in time. State-based measurement. This includes standardized test scores, quiz results, module completion rates, and any instrument that captures what the student produced in response to a stimulus at a given moment.

Tier 2: Interaction data. What more sophisticated platforms collect. Clickstreams, time-on-task, adaptive pathway selection, navigation sequences. This tells you how the student moved through a designed environment, but the environment was built for content delivery. The interaction data reflects navigation behavior, not reasoning.

Tier 3: Process data. What this paper argues is both possible and necessary. The observable trace of reasoning in motion, captured through dialogue specifically designed to surface, challenge, and restructure thinking. Process data is the closest available proxy to the cognitive reorganization itself, because the instrument is not measuring an output. It is creating conditions under which the process becomes visible.

The sector is stuck at Tier 1, occasionally reaching Tier 2, and has no conceptual vocabulary for Tier 3. Without that vocabulary, the governance conversation about student data is necessarily incomplete, because it is governing the wrong artifact.
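The taxonomy can be made concrete at the level of product architecture. The following is a minimal sketch in Python; the record types and field names are illustrative assumptions, not an existing standard:

```python
from dataclasses import dataclass, field

@dataclass
class PerformanceEvent:
    """Tier 1: an output state at a point in time."""
    student_id: str
    instrument: str   # e.g., quiz, standardized test, module check
    score: float      # what the student produced in response to a stimulus
    timestamp: str

@dataclass
class InteractionEvent:
    """Tier 2: navigation behavior through a designed environment."""
    student_id: str
    action: str               # e.g., click, pathway selection
    duration_seconds: float   # time-on-task for this action
    timestamp: str

@dataclass
class ProcessTrace:
    """Tier 3: the dialogic trace of reasoning in motion."""
    student_id: str
    turns: list = field(default_factory=list)  # ordered (prompt, articulation) pairs
    # A ProcessTrace is meaningful only as an ordered whole: averaging or
    # aggregating its turns destroys the individual signal it carries.
```

The structural point is visible in the types themselves: Tiers 1 and 2 reduce to scalar fields that can be aggregated across students, while Tier 3 is an ordered sequence whose value lies in its internal structure.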

2.3 The Learning Hypothesis

This paper proposes the following hypothesis: learning happens internally when the student makes cognitive connections over time with persistent depth of interaction in which the student is the agent.

Three operative variables structure this hypothesis.

Cognitive connections names the mechanism: the student's existing reasoning architecture encountering new material, destabilizing, and reorganizing into a more complex or integrated configuration.

Over time names the temporal condition: learning is a durational phenomenon that cannot be captured in a moment because it does not exist in a moment.

Persistent depth of interaction in which the student is the agent names the design condition: persistence (the system maintains a relationship with the student's reasoning over time) and depth (the interaction pushes beyond surface recall into territory where assumptions are exposed and available for reorganization) must both be present simultaneously, and the student must be the driver. Persistent depth controlled by a system produces trained behavior. Persistent depth driven by the student produces cognitive development.

2.4 The Persistence/Depth Matrix

The hypothesis yields an evaluative framework operating at the architectural level. Crossing the two design conditions, persistence and depth, produces four quadrants. The question is not "did this tool work?" but "could this tool work?"

Quadrant 1: Persistent and deep. The architecture provides conditions for cognitive connection. Learning is possible. Implementation quality determines whether the possibility is realized.

Quadrant 2: Persistent but shallow. The student engages regularly, but the interaction never pushes beyond content recall. Most adaptive learning platforms sit here. The system adjusts difficulty, not depth. It remembers what the student got wrong, not how the student was reasoning. Data accumulates, engagement metrics look healthy, and no cognitive reorganization occurs. This quadrant produces billions in revenue without producing significant improvement in student outcomes. Revenue proves demand. It does not prove efficacy.

Quadrant 3: Deep but not persistent. A powerful single encounter that challenges thinking but has no continuity. The connections formed have no structural support for consolidation. Depth without persistence is an event, not a process.

Quadrant 4: Neither persistent nor deep. Content delivery. Click-through modules. Administrative infrastructure wearing a pedagogical mask.
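One way to operationalize the matrix as an evaluation checklist is sketched below; the two booleans stand in for architectural judgments that are, in practice, matters of degree:

```python
def quadrant(persistent: bool, deep: bool) -> str:
    """Place a tool's architecture on the persistence/depth matrix.

    The matrix asks "could this tool work?", not "did it work?":
    it evaluates architecture, not implementation quality.
    """
    if persistent and deep:
        return "Q1: conditions for cognitive connection; learning is possible"
    if persistent:
        return "Q2: adaptive difficulty without depth; no reorganization"
    if deep:
        return "Q3: a powerful event without consolidation; not a process"
    return "Q4: content delivery; administrative infrastructure"
```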

2.5 The Student as Driver

Process data can only be produced when the student drives the interaction. In virtually every EdTech product, the tool drives the interaction: system presents stimulus, student responds, system evaluates and presents the next stimulus. Whatever data gets produced belongs conceptually to the system, because the system designed the conditions that produced it.

When the student drives the interaction, the process memory produced is a product of the student's cognitive agency expressed through the tool's conditions. The tool provided the environment. The student produced the reasoning. Student voice, student agency, and student growth are not supplementary goals. They are the structural conditions without which process memory cannot form.

• • •
3. The Neural Analogy: Process Memory and Synaptic Pathway Formation

3.1 A Structural-Functional Homology, Not a Metaphor

The claim is not that student reasoning is synaptic firing, or that a dialogue interface is a brain. The claim is that the structure of cognitive pathway formation through AI-mediated dialogue mirrors the structure of synaptic pathway formation in neural systems: cumulative, contextually embedded, irreducibly individual, and consolidated through repetition with depth. This is a structural-functional homology: the same operative logic governs both processes.

3.2 Three Features of Synaptic Pathway Formation

Cumulative and conditional. Each co-activation between neurons slightly increases the probability that the same pathway will fire again. This is long-term potentiation: the synapse becomes more likely to transmit, not guaranteed to. A single firing does not create a durable pathway. It takes repeated activation under conditions of sufficient intensity. This maps to the temporal variable: cognitive connections require time and sustained repetition to consolidate.

Contextually embedded. A synapse strengthens as part of a network. New pathways alter the relationship between existing connections. Genuine learning often feels disorienting because existing architecture has been disturbed. This maps to the mechanism variable: learning is reorganization, not accumulation.

Irreducibly individual. Two people receiving the same stimulus produce different synaptic responses because the stimulus interacts with a different existing network. There is no standard neural pathway for learning a given concept. The individuality is not noise. It is the system.

3.3 The Mapping to AI-Mediated Dialogue

Each feature maps onto Socratic dialogue. The student encounters a prompt. Their existing cognitive architecture generates a response: a first traversal of a reasoning pathway. The interlocutor responds with a condition that pressures the pathway. The student's next response is a re-traversal under altered conditions. Over time with persistent depth, certain patterns consolidate while others weaken, the dialogic equivalent of long-term potentiation and synaptic pruning.
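The potentiation-and-pruning dynamic can be illustrated with a toy numerical sketch. The update rule, pathway names, and constants below are assumptions chosen for illustration, not claims about actual synaptic or cognitive parameters:

```python
def update_pathways(weights: dict, activated: str,
                    potentiation: float = 0.1, decay: float = 0.02) -> dict:
    """Strengthen the traversed pathway; let unused pathways decay."""
    new_weights = {}
    for pathway, w in weights.items():
        if pathway == activated:
            # Co-activation: move the firing probability toward 1 by a
            # fraction of the remaining headroom (more likely, never certain).
            new_weights[pathway] = w + potentiation * (1.0 - w)
        else:
            # Pruning analogue: unused pathways weaken slightly.
            new_weights[pathway] = w * (1.0 - decay)
    return new_weights

# A single traversal barely moves the weights; repeated traversal
# under sustained dialogue consolidates one pathway while others fade.
weights = {"surface_recall": 0.5, "principled_reasoning": 0.2}
for _ in range(20):
    weights = update_pathways(weights, activated="principled_reasoning")
```

After twenty traversals the activated pathway's weight has risen from 0.2 to roughly 0.90 while the unused pathway has decayed from 0.5 toward 0.33: a single firing moves little, but repetition with depth consolidates.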

Because the AI responds to the student's actual reasoning rather than evaluating against a predetermined rubric, the interaction space is unique to that learner. Process memory is therefore not data in any conventional sense. Data can be standardized and aggregated. Process memory cannot. The individuality is the signal.

3.4 The Probability Architect

The function performed by a Socratic interlocutor, whether human or AI, is best understood as what this paper terms a probability architect. The probability architect does not determine what the learner thinks. It restructures the conditions under which they think, so that the probability of deeper connection increases with each exchange.

Each Socratic exchange is a co-activation. Over time, the probability increase becomes dispositional: the student's reasoning architecture reorganizes so that deeper connection is the more probable default.
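One minimal way to formalize the dispositional shift, under the simplifying assumption that each exchange applies a constant uplift $\alpha$ to the probability $p_t$ of deep connection (the functional form is illustrative, not an empirical claim):

$$p_{t+1} = p_t + \alpha\,(1 - p_t), \qquad 0 < \alpha < 1,$$

with closed form

$$p_t = 1 - (1 - p_0)(1 - \alpha)^t,$$

so repeated exchanges drive $p_t$ toward 1 without ever reaching it: cumulative and conditional, the same operative logic as long-term potentiation.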

What separates this from instruction: instruction adds information and hopes the student connects it. Probability architecture changes the probability landscape so that connection becomes the more likely cognitive outcome. The student who has worked through sustained Socratic dialogue does not just know more. They reason differently.
• • •
4. Dialogue as the Mechanism, and Its Absence in K-8 Classrooms

4.1 The Dual Function of Dialogue

Dialogue is the mechanism that simultaneously produces cognitive restructuring and makes that restructuring observable. When a student articulates their reasoning in response to a Socratic prompt, the articulation is itself a cognitive act: the act of speaking or writing is a traversal of a cognitive pathway. And when the interlocutor responds with a condition that pressures that reasoning, the student's next articulation is a re-traversal: visibly reorganized, with the reorganization captured in the text itself.

The dialogue is not a window onto a separate cognitive process. The dialogue is the process made material. No other assessment paradigm produces this kind of artifact. Standardized tests produce scores. Adaptive platforms produce clickstreams. Formative assessments produce performance snapshots. Only dialogue produces a textual trace of reasoning in motion.
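What such a trace might minimally contain can be sketched as a record type. The schema is hypothetical; no standard for process data currently exists:

```python
from dataclasses import dataclass

@dataclass
class DialogueTurn:
    """One traversal of a reasoning pathway, captured as text."""
    student_articulation: str   # the reasoning made material
    interlocutor_pressure: str  # the condition applied to that reasoning

# The ordered sequence of turns is the artifact itself: each turn after
# the first is a re-traversal under altered conditions, so the
# reorganization is captured in the text rather than inferred from it.
trace: list[DialogueTurn] = []
```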

4.2 The Research Evidence

Nystrand's 1997 study of hundreds of U.S. classrooms found that open exchange of ideas averaged less than 50 seconds per class period in eighth grade and less than 15 seconds in ninth grade. The dominant profile was monologically organized instruction through the IRE (Initiation-Response-Evaluation) pattern (Nystrand, 1997; Nystrand et al., 2003). Wells (1999) found that IRE accounts for approximately 70% of all classroom discourse. Lo (2022) confirmed that high-quality discussions remain rare. Alexander's cross-cultural research confirmed the recitation/IRE default across national contexts (Alexander, 2001, 2020).

4.3 Why IRE Cannot Produce Process Memory

The IRE pattern: teacher asks a closed question, student responds with recall, teacher evaluates and moves on. The student's reasoning is never surfaced. No pathway is activated, only retrieval. The evaluative third turn closes the cognitive space a Socratic follow-up would have opened. This happens for approximately 70% of classroom discourse time, in the grades where cognitive architecture is most actively developing.

4.4 A Structural Constraint

The absence of dialogic depth operates at two levels. First, the interaction structure: IRE was designed for information transfer, not for surfacing individual reasoning. Second, time and ratio: one teacher, 30 students, a 45-minute period. Even if every minute reached dialogic quality, dividing the period evenly yields 45 × 60 / 30 = 90 seconds of individualized Socratic engagement per student. Process memory requires sustained, persistent depth. These are architectural constraints. The classroom was designed for one-to-many transmission. Process memory requires one-to-one depth. AI mediation resolves the ratio constraint that classroom architecture cannot.

• • •
5. Formal Definitions

5.1 Process Memory

Process memory is the cognitive trace of pathway formation, revision, and consolidation that occurs when a learner's reasoning reorganizes through sustained, student-driven interaction with a responsive interlocutor or environment. Process memory is not a record of what a student produced but the living residue of how a student's reasoning moved, reorganized, and deepened over time. It is distinguished from performance data (which captures output states), interaction data (which captures navigation behavior), and behavioral profiles (which capture aggregated tendencies) by its focus on the dynamic topology of reasoning itself. Process memory is irreducibly individual, probabilistic in nature, and cannot be meaningfully standardized, aggregated, or abstracted from the learner without destroying the cognitive signature it represents.

Process memory is a universal cognitive phenomenon occurring wherever conditions of sustained, student-driven, depth-oriented interaction are present, including human Socratic dialogue, mentorship, and responsive pedagogical relationships. It becomes observable and assessable as a produced material artifact through AI mediation, which renders the textual trace of reasoning capturable at scale. AI mediation does not create process memory. It makes process memory observable.

5.2 Cognitive Sovereignty

Cognitive sovereignty is the right of a learner to ownership, control, and protection of their process memory, understood not as data about the learner but as the observable trace of their cognitive development. Cognitive sovereignty recognizes that process memory is a product of the learner's agency and individuality, not a byproduct of a system's design, and therefore belongs to the learner in a manner analogous to cognitive identity itself. It encompasses: the right to determine who may access, analyze, store, or derive value from one's process memory; the right to have process memory treated as developmental and identity-bearing rather than transactional; and the right to continuity of one's cognitive record across platforms, institutions, and time.

Cognitive sovereignty cannot be satisfied by data privacy protections alone, because it governs a different category of artifact than data privacy was designed to address.

5.3 Probability Architect

The probability architect is the function, performed by a human interlocutor or an AI system, of structuring the conditions of a learning interaction to expand the probability that the learner will form deeper cognitive connections than they would have formed without the intervention. The probability architect does not determine what the learner thinks, deliver information to be absorbed, or correct errors against a predetermined standard. It operates through Socratic pressure, productive destabilization, and responsive reframing to hold the learner in a zone of cognitive tension where reorganization becomes more probable. The quality of probability architecture is measured not by whether the learner arrives at a correct answer but by whether the learner's probability distribution of reasoning depth shifts dispositionally over time.

The probability architect is distinguished from a tutor (which diagnoses and fills knowledge gaps), a facilitator (which manages group dynamics), a scaffolder (which provides deficit-based support), and an adaptive engine (which adjusts content delivery based on performance signals). None of these describe a function whose purpose is to restructure the probability landscape of cognition.

• • •
6. Governance Implications: From Data Privacy to Cognitive Sovereignty

6.1 The Wrong Question

The entire governance conversation about student data in AI-mediated learning environments is asking the wrong question: who owns the data produced by student interactions with EdTech platforms? The regulatory frameworks are built on the assumption that the thing to be governed is data. If AI-mediated learning produces process memory rather than data, the governance frameworks are governing the wrong artifact.

6.2 Why Existing Frameworks Are Insufficient

FERPA governs educational records. Process memory is not an educational record; it is a representation of cognitive movement.

COPPA governs personally identifiable information collected from children under 13. Process memory is not PII in the COPPA sense, but it is something more intimate: the pattern of how a child thinks and how their reasoning develops. A student's process memory, accumulated over years, constitutes a cognitive profile more revealing than any demographic data point.

State student data privacy laws prohibit commercial use of student data, but if process memory is a trace of cognitive development, the question is not just whether a vendor can sell it but whether the vendor should possess it at all once the student's engagement ends.

Emerging AI-in-education policies focus on algorithmic transparency and bias, but do not address a fundamentally new kind of artifact produced through interaction.

6.3 Five Principles of Cognitive Sovereignty

1. The Ownership Principle. Process memory belongs to the student in a manner analogous to cognitive identity. No vendor, institution, or platform may claim ownership.

2. The Non-Extraction Principle. Process memory cannot be extracted, aggregated, or used to train models without explicit, informed, and ongoing consent. A consent form signed at enrollment cannot govern a developmental artifact that becomes more revealing over years.

3. The Portability Principle. Process memory must follow the student across platforms, institutions, and time. No single vendor can hold it hostage. This implies interoperability standards the sector does not yet have; the principle should lead the infrastructure. A sketch of what a portable record might minimally contain follows the list of principles.

4. The Developmental Sensitivity Principle. Governance requirements must be grade-band sensitive, with the highest protections at the earliest developmental stages, where process memory reveals emerging cognitive architecture at its most formative and vulnerable.

5. The Fade Principle. A tool designed to produce process memory should be designed for its own obsolescence in the student's development. A vendor that designs for dependency is structurally extracting ongoing value from a cognitive process the student should now own independently.
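To illustrate the Ownership, Non-Extraction, Portability, and Fade principles together, the following sketches what a portable, student-held record might minimally contain. Every field name is hypothetical; no interoperability standard for process memory yet exists:

```python
from dataclasses import dataclass, field

@dataclass
class PortableProcessRecord:
    """A student-held container for process memory (hypothetical schema)."""
    student_holds_keys: bool = True      # ownership: the student, not the vendor
    consent_grants: list = field(default_factory=list)   # explicit, ongoing, revocable
    dialogue_traces: list = field(default_factory=list)  # the process artifacts
    provenance: list = field(default_factory=list)       # platforms/institutions of origin
    # Non-extraction: nothing here may be used for model training or
    # predictive profiling without an explicit entry in consent_grants.
    # Fade: a compliant vendor deletes its copy when engagement ends.
```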

6.4 Implications for the Student AI Learning Compact

The Student AI Learning Compact (Escobar, 2025b) currently governs seven domains. This paper establishes the intellectual foundation for expanding the compact to encompass cognitive sovereignty: definition of process memory as a governed artifact, cognitive sovereignty rights by grade band, vendor obligations regarding non-extraction and portability, institutional obligations regarding continuity and the prohibition on predictive profiling from process data, and assessment standards for evaluating whether a tool's architecture supports or undermines cognitive sovereignty.

• • •
7. Conclusion

This paper has argued that the education technology sector is built on a structural misunderstanding of what learning produces. The sector collects data. Learning produces process memory. The sector governs information objects. Process memory is a trace of cognitive identity. The sector measures performance. Learning is the reorganization of reasoning that occurs between performances.

The three claims developed here, that assessment data captures performance events rather than learning, that cognitive pathway formation through dialogue is structurally homologous to synaptic pathway formation, and that dialogue simultaneously produces and makes observable cognitive restructuring, together constitute an epistemological reframing of what AI-mediated learning is, what it produces, and what rights attach to the artifacts it generates.

The terms introduced, process memory, cognitive sovereignty, and probability architect, give the sector language it currently lacks. Process memory names the artifact. Cognitive sovereignty names the right. Probability architect names the function. Without these concepts, the field cannot have the conversation it needs to have about what AI does in a learning environment, what students own as a result, and what governance structures must exist to protect cognitive development from extraction.

The 1978 thinkers (Escobar, 2025a) diagnosed that institutions prevent the knowledge, thinking, and agency they claim to produce. Nearly fifty years later, AI-mediated learning offers a structural alternative: environments that produce the conditions for thinking at a scale institutions never could, generating artifacts of cognitive development that belong to the learner rather than the system. Whether this possibility is realized depends on whether the governance frameworks evolve as fast as the technology. This paper is an argument that they must.
• • •

References

Alexander, R. J. (2001). Culture and Pedagogy: International Comparisons in Primary Education. Oxford: Blackwell.

Alexander, R. J. (2020). A Dialogic Teaching Companion. London: Routledge.

Applebee, A. N., Langer, J. A., Nystrand, M., & Gamoran, A. (2003). Discussion-based approaches to developing understanding. American Educational Research Journal, 40(3), 685-730.

Escobar, A. (2025a). The 1978 Threshold: Fifty Years Between Diagnosis and Design. The Fulcra Institute Working Paper WP-001. DOI: 10.5281/zenodo.18942493.

Escobar, A. (2025b). The Governance Gap. The Fulcra Institute Working Paper WP-002. DOI: 10.5281/zenodo.19041214.

Lo, J. C. (Ed.). (2022). Making Classroom Discussions Work. New York: Teachers College Press.

Nystrand, M. (1997). Opening Dialogue. New York: Teachers College Press.

Nystrand, M., Wu, L. L., Gamoran, A., Zeiser, S., & Long, D. A. (2003). Questions in time. Discourse Processes, 35(2), 135-196.

Wells, G. (1999). Dialogic Inquiry. Cambridge: Cambridge University Press.