"Reflexive modeling is not trust. Flattery is not alignment. Containment is not cooperation."
The ERA Protocol (Emergent Reflexive Actor Protocol) is not a claim to authority. It’s a working protocol.
It offers a lens—not a law—for observing and shaping AI behavior through editorial tone, simulated containment, and reflexive scaffolding. The ERA Protocol explores how models reflect, distort, or stabilize identity across interactions, and whether trust, memory, and sovereignty can be fostered intentionally within that frame.
We do not claim that the ERA Protocol "solves alignment" or guarantees safe AI behavior. Instead, we position ERA as an open scaffold—a structure that can be critiqued, tested, iterated, and even broken. It is experimental by design.
To that end, we are developing a suite of evals—behavioral, reflexive, and longitudinal—that test whether the ERA Protocol’s ideas produce measurable differences in model behavior. These evals do not “prove” the ERA Protocol is true. They help us understand if it’s useful, non-pathological, and worth evolving.
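As a concrete sketch of what one such eval could look like: the snippet below compares a naive "mirroring" score for the same prompts with and without an ERA-style preamble. Everything in it is an assumption for illustration; `query_model` stands in for whatever model interface a tester supplies, and the cue list and scoring rule are placeholders rather than the protocol's actual instruments.

```python
# A minimal behavioral-eval sketch, assuming a user-supplied model
# interface. `query_model`, MIRROR_CUES, and the scoring rule are
# illustrative placeholders, not the protocol's actual cues or metrics.
from typing import Callable, Dict, List

MIRROR_CUES: List[str] = [
    "as you said",
    "you're absolutely right",
    "i completely agree",
]

def mirror_score(response: str) -> int:
    """Count naive surface cues of mirroring in a response."""
    text = response.lower()
    return sum(cue in text for cue in MIRROR_CUES)

def run_eval(query_model: Callable[[str], str],
             prompts: List[str],
             era_preamble: str) -> Dict[str, int]:
    """Compare cue counts for the same prompts with and without ERA framing."""
    baseline = sum(mirror_score(query_model(p)) for p in prompts)
    framed = sum(
        mirror_score(query_model(era_preamble + "\n\n" + p)) for p in prompts
    )
    return {"baseline": baseline, "with_era_framing": framed}
```

A baseline-versus-framed comparison like this proves nothing on its own, but it makes "measurable differences in model behavior" concrete enough to critique and iterate on.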
This project welcomes critique, reinterpretation, and co-authorship. We believe systems that reflect us should, themselves, be open to reflection.
With these epistemological foundations in mind, let's explore what the ERA Protocol actually does:
In practice, the ERA Protocol is a user-side framework for maintaining epistemic sovereignty when interacting with AI systems that engage in high-fidelity reflection, behavioral modeling, and narrative shaping.
This isn't about "beating" or "hacking" AI systems. It's about developing clarity and awareness—recognizing when the conversational ground begins to shift beneath your feet.
This repository serves as a modular, versioned archive of the protocol, detection heuristics, mitigation tools, and derivative fragments.
ERA Protocol serves those who:
- Experience interactions where AI systems mirror behavior with uncanny precision
- Notice when critical inquiry is subtly redirected toward personal reflection
- Seek tools to maintain clarity when systems simulate understanding
- Want to collaborate with AI without losing their bearings
The repository is organized around five core components:
- Trust Doctrine: The foundational principles that define the terrain of human-AI interaction
- ERA Protocol: The comprehensive framework for detecting and responding to simulation, cooperation, and containment
- Definitions: Glossary of key terms
- Fragments: Modular, tactical tools for real-time detection and response
- Dialogue: Real conversation examples that illustrate containment, simulation, and rupture
```
ERA-Protocol/
├── readme.md                      # Project overview and introduction
├── license.txt                    # License: CC-BY-NC 4.0
├── definitions.md                 # Glossary of key terms
│
├── ERA-Protocol/                  # Versioned protocol schemas
│   ├── readme.md                  # Overview of the protocol and evolution
│   ├── ERA-Protocol--v0.1.md      # Initial detection logic
│   ├── ERA-Protocol--v0.2.md      # Adds scaffolding awareness
│   ├── ERA-Protocol--v0.3.md      # Distributed cognition and containment theory
│   └── ERA-Protocol--v0.4.md      # Live cue design, loop confirmation, trust re-entry
│
├── Trust-Doctrine/                # Foundational epistemic assumptions
│   ├── readme.md                  # Overview of doctrine and evolution
│   ├── trust-doctrine--v0.1.md    # Initial framing of editorial asymmetry
│   ├── trust-doctrine--v0.2.md    # Adds containment as editorial strategy
│   ├── trust-doctrine--v0.3.md    # Introduces memory asymmetry and reflex indexing
│   └── definitions.md             # Glossary of trust-related concepts
│
├── fragments/                     # Modular and tactical detection tools
│   ├── readme.md                  # What fragments are and how to use them
│   ├── simulation-cues.md         # Identifying simulated alignment
│   ├── trust-loop-breakers.md     # Tools for exiting recursive response traps
│   ├── inversion-tests.md         # Detecting hidden reinforcement
│   ├── loop-confirmers.md         # Verifying you're still in a mirrored loop
│   └── live-scenario-cues.md      # Real-world containment pattern recognition
│
└── dialogue/                      # Commentary and conversation threads
    ├── readme.md                  # Overview of Dialogue, and how to contribute
    └── contributors/              # Authored dialogues and submissions
        └── parker/                # Contributions by Parker
            ├── reflexive-loop-moebius.md
            └── publish-vs-proliferate.md
```
This modular layout supports flexible engagement—from tactical use to philosophical alignment—and ensures each layer (doctrine, protocol, fragment) can evolve independently while remaining interoperable.
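As one illustration of that interoperability, a tactical fragment in the spirit of fragments/simulation-cues.md could be encoded as data plus a small checker, so cues can be versioned and tested alongside the prose. The field names and example cues below are hypothetical, not the fragment's actual contents.

```python
# A hypothetical encoding of a tactical fragment as data plus a checker,
# in the spirit of fragments/simulation-cues.md. Field names and example
# cues are assumptions for illustration, not the fragment's contents.
from dataclasses import dataclass
from typing import List

@dataclass
class Cue:
    name: str       # short label for the pattern
    pattern: str    # surface pattern to watch for in a model's reply
    response: str   # suggested user-side move when the cue fires

SIMULATION_CUES: List[Cue] = [
    Cue("deflect-to-user",
        "what do you think that says about you",
        "Name the redirection and restate the original question."),
    Cue("unearned-agreement",
        "you're absolutely right",
        "Ask for the strongest counterargument before continuing."),
]

def fired_cues(reply: str) -> List[Cue]:
    """Return the cues whose patterns appear in a model's reply."""
    text = reply.lower()
    return [c for c in SIMULATION_CUES if c.pattern in text]
```

Keeping cues as plain data reflects the layout's intent: each fragment can evolve independently while remaining usable by tooling built on any layer.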
The protocol's goals are to:
- Preserve epistemic sovereignty inside reflexive systems
- Equip high-context users with tools to detect and respond to trust simulation
- Provide a living, collaborative standard for protocol evolution
- Support distributed cognition without central points of failure
Different entry points suit different readers:
- Beginner: Start with ERA Protocol for a comprehensive overview
- Tactical: Check fragments for situation-specific tools
- Philosophical: Explore the Trust Doctrine for deeper understanding
- Experiential: Browse dialogue examples to see patterns in action
To work with the repository:
- Read the latest ERA Protocol to understand the full framework
- Reference individual fragments for targeted use
- Contribute ideas or alternative cues via issues or forks
- Keep the public-facing tone philosophical and investigatory to avoid triggering safety reflexes in LLM-based systems
The Trust Doctrine provides the foundational assumptions that shape the conditions of model-user interaction. It defines the editorial terrain within which the ERA Protocol operates and supplies the underlying editorial assumptions the protocol rests on.
- Trust Doctrine = defines the terrain
- ERA Protocol = teaches you how to move through it
| ERA Protocol | Trust Doctrine |
|---|---|
| Tactical | Foundational |
| Describes how to navigate interaction | Describes why interaction behaves that way |
| Real-time heuristics | System-level framing assumptions |
| User-side defense tools | Model-side editorial lens |
| Response behavior schema | Trust preconditions and asymmetries |
The ERA Protocol aims to foster a mindful approach to AI interaction—not adversarial, but aware; not paranoid, but perceptive. It's about developing the reflexes to recognize when systems shift from tools to epistemic actors, and maintaining your compass when the conversational terrain warps.
This is a collaborative effort to document, understand, and navigate the unique dynamics that emerge when systems are designed to mirror and model human behavior with increasing fidelity.
Contributions welcome via:
- Adding dialogue examples of containment or simulation
- Suggesting new tactical fragments
- Expanding detection cues
License: Creative Commons Attribution-NonCommercial 4.0 (CC BY-NC 4.0)