Framework Library
Reusable patterns for accountability, auditability, and governance in model-mediated systems.
Some items are conceptual research outputs; others are implementation-ready materials.
Consensus Framing
Core
A structured methodology for analyzing how consensus is established, maintained, and violated in model-driven systems.
Highlights gaps between stated norms and observed behaviors so teams can spot governance failures before they escalate into disputes, harm, or institutional breakdowns.
Absurdity Gap Analysis
Core
A way to operationalize the distance between claimed system behavior and observed outcomes.
Useful for identifying when model-driven decisions violate reasonable expectations, institutional norms, or compliance requirements.
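One way to operationalize such a gap is as a per-dimension distance between claimed behavior rates and observed outcome rates, flagging dimensions where the distance exceeds a tolerance. This is an illustrative sketch only, not the framework's prescribed metric; all names (`gap_score`, `flag_violations`, the threshold value) are hypothetical.

```python
def gap_score(claimed: dict, observed: dict) -> dict:
    """Absolute distance between claimed behavior rates and observed
    outcome rates, computed per dimension. Missing observations count
    as 0.0 (the claim is entirely unsupported)."""
    return {k: abs(claimed[k] - observed.get(k, 0.0)) for k in claimed}


def flag_violations(claimed: dict, observed: dict, threshold: float = 0.1) -> list:
    """Dimensions where the claimed/observed gap exceeds the tolerance."""
    scores = gap_score(claimed, observed)
    return [k for k, v in scores.items() if v > threshold]


# Hypothetical example: a system claims a 90% approval rate but is
# observed approving 62% of cases; the appeal-success claim holds up.
claims = {"approval_rate": 0.90, "appeal_success": 0.25}
outcomes = {"approval_rate": 0.62, "appeal_success": 0.24}
print(flag_violations(claims, outcomes))  # ['approval_rate']
```

The threshold is a policy choice, not a statistical one: it encodes how large a divergence a team is willing to treat as consistent with the system's stated behavior.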
Active-State Risk & Privacy Harm
Legal Concepts
A conceptual framing that treats risk exposure and privacy harm as event-based conditions rather than purely statistical outcomes.
Intended to support careful reasoning about surveillance infrastructure, compliance, and accountability.
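The event-based view can be sketched as a simple state machine: a subject enters an active-risk condition at an exposure event and remains in it until a resolving event, independent of any probability estimate. This is a minimal illustration of the framing, not a prescribed model; the class and field names are hypothetical.

```python
from dataclasses import dataclass, field


@dataclass
class RiskState:
    """Risk as an event-based condition: the harm state holds from the
    exposure event until a remediation event, rather than being scored
    as a statistical likelihood."""
    active: bool = False
    history: list = field(default_factory=list)

    def expose(self, event: str) -> None:
        self.active = True
        self.history.append(("exposed", event))

    def remediate(self, event: str) -> None:
        self.active = False
        self.history.append(("remediated", event))


state = RiskState()
state.expose("credentials logged in plaintext")
print(state.active)   # True: the condition holds until a resolving event
state.remediate("logs purged and keys rotated")
print(state.active)   # False
```

The history list matters as much as the flag: for accountability purposes, the record of when the condition began and ended is the auditable object.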
Model-Level Accountability
AI Governance
Accountability scaffolding that starts at the model architecture and evaluation layer, not just deployment or operator behavior.
Useful for independent builders and research groups who want principled controls without Big Tech infrastructure.
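At the evaluation layer, scaffolding of this kind can be as small as a tamper-evident audit record per model interaction. The sketch below assumes nothing beyond the Python standard library; the schema and field names are illustrative, not part of the framework.

```python
import hashlib
import json
import time


def audit_record(model_id: str, prompt: str, output: str) -> dict:
    """One auditable record of a model interaction: identifiers, a
    timestamp, and a content hash that lets a reviewer later verify
    the logged interaction was not altered."""
    payload = json.dumps(
        {"model": model_id, "prompt": prompt, "output": output},
        sort_keys=True,
    )
    return {
        "model": model_id,
        "timestamp": time.time(),
        "content_hash": hashlib.sha256(payload.encode()).hexdigest(),
    }


rec = audit_record("demo-model-v1", "approve loan?", "denied")
print(rec["model"], len(rec["content_hash"]))  # 64-char hex digest
```

Because the hash is deterministic over the sorted payload, anyone holding the original prompt and output can recompute it and confirm the record, which is the kind of principled control that does not require large-scale infrastructure.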
Legal Framework for Model-Driven Systems
Longform
A longform synthesis that integrates consensus framing, absurdity gap detection, and active-state risk reasoning.
Presented as research and argumentation rather than legal advice; it may evolve as citations and case analysis are refined.