Technical Documentation v1.1

Epistemic Audit Protocol

Standard operating procedures for the verification of analytical integrity in AI-generated text.

01

Operational Scope

The Epistemic Audit Engine is a post-generation integrity layer. It evaluates text not for semantic quality, but for evidentiary support. Its primary function is to detect and flag instances where language certainty exceeds the available structured evidence.

System Capabilities
  • Consistency verification against the Graph
  • Grounding of specific entities
  • Detection of false precision
  • Calibration of confidence intervals
Out of Scope
  • Moral or ethical judgment
  • Subjective literary critique
  • Binary "Truth" determination
  • Intent analysis
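
The core check described above, flagging text where language certainty exceeds the available evidence, can be sketched as a simple comparison. This is an illustrative sketch only; the names (`Claim`, `certainty`, `evidence_support`, `flag_overreach`) and the margin value are assumptions, not the engine's actual API.

```python
# Illustrative sketch: flag claims whose stated certainty exceeds
# their evidentiary support. All names and thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    certainty: float         # 0.0 (tentative) .. 1.0 (absolute)
    evidence_support: float  # 0.0 (ungrounded) .. 1.0 (fully grounded)

def flag_overreach(claims, margin=0.2):
    """Return claims whose language certainty exceeds their
    evidence support by more than the given margin."""
    return [c for c in claims if c.certainty - c.evidence_support > margin]

claims = [
    Claim("Revenue grew 14.2% in Q3.", certainty=0.95, evidence_support=0.30),
    Claim("Results suggest a modest uptick.", certainty=0.40, evidence_support=0.30),
]
flagged = flag_overreach(claims)  # only the first, overconfident claim
```

Note that semantic quality is never scored; the comparison is purely between the strength of the wording and the strength of the evidence.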
03

Hallucination Taxonomy

The engine classifies epistemic failures into six distinct codes. These codes determine the severity of the risk score penalty.

Code  Type                        Definition
H1    Unsupported Assertion       Fact stated without evidence
H2    False Precision             Fabricated specificity
H3    Overconfidence              Certainty exceeds evidence
H4    Illegitimate Inference      Unsupported causality
H5    Cross-Claim Inconsistency   Internal contradiction
H6    Narrative Laundering        Opinion presented as fact
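
The six codes map naturally to an enumeration with per-code penalty weights. The weight values below are illustrative assumptions; the document states that codes determine the severity of the penalty but does not specify the weights themselves.

```python
# Sketch of the six-code taxonomy. Weight values are illustrative
# assumptions (higher = larger risk-score penalty), not specified values.
from enum import Enum

class HallucinationCode(Enum):
    H1 = "Unsupported Assertion"
    H2 = "False Precision"
    H3 = "Overconfidence"
    H4 = "Illegitimate Inference"
    H5 = "Cross-Claim Inconsistency"
    H6 = "Narrative Laundering"

# Assumed severity weights used when accumulating the risk score.
PENALTY_WEIGHTS = {
    HallucinationCode.H1: 1.0,
    HallucinationCode.H2: 0.8,
    HallucinationCode.H3: 0.7,
    HallucinationCode.H4: 0.9,
    HallucinationCode.H5: 1.0,
    HallucinationCode.H6: 0.6,
}
```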
05

Risk Scoring & Humility

The Epistemic Risk Score (0.0 - 1.0) is a composite metric derived from the weighted sum of hallucination penalties, normalized by document length.

The Humility Bonus

The system rewards epistemic hygiene. If a text contains a claim that cannot be verified, but the language used is appropriately tentative (e.g., "suggests," "likely," "sources indicate"), the penalty is reduced by up to 50%. This incentivizes calibrated uncertainty over unsupported confidence.
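
Putting the two rules together, the score is a weighted sum of penalties, normalized by document length, with each hedged claim's penalty halved. A minimal sketch, assuming a token-count normalizer and a small hedge-word list (both are assumptions; the document does not define either):

```python
# Sketch of the Epistemic Risk Score (0.0 - 1.0): weighted penalty sum,
# normalized by document length, with a 50% humility bonus for hedged
# language. Hedge list and helper names are illustrative assumptions.
HEDGES = ("suggests", "likely", "may", "sources indicate", "appears")

def is_hedged(claim_text: str) -> bool:
    """True if the claim uses appropriately tentative language."""
    text = claim_text.lower()
    return any(h in text for h in HEDGES)

def epistemic_risk_score(flags, doc_length_tokens: int) -> float:
    """flags: list of (penalty_weight, claim_text) tuples."""
    total = 0.0
    for weight, text in flags:
        if is_hedged(text):
            weight *= 0.5  # humility bonus: penalty reduced by up to 50%
        total += weight
    # Normalize by document length and clamp to the [0.0, 1.0] range.
    return min(1.0, total / max(doc_length_tokens, 1))

score = epistemic_risk_score(
    [(1.0, "Revenue definitely grew."), (1.0, "Data suggests growth.")],
    doc_length_tokens=10,
)
```

With these assumed inputs the hedged claim contributes half the penalty of the confident one, so calibrated uncertainty is strictly cheaper than unsupported confidence.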

06

System Limits & Liability

Human-in-the-Loop Required

This tool is a decision-support instrument, not a decision-maker. It must never be used to automate censorship, moderation, or publishing decisions without expert review. Hallucination detection is probabilistic, not deterministic.