Derived Data Abundance (DDA) is a counting identity: a fixed corpus of n inputs, projected through N frozen, approximately independent embedders, admits up to n · (N + C(N,2)) structured supervisory signals, over 100x more labeled signal than the single-embedder baseline at the Context Graph production configuration. The inputs never change, and no generator is in the loop.
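The counting identity can be checked directly: each input yields one signal per embedder plus one per embedder pair. A minimal sketch, where n = 10,000 and N = 14 are illustrative values and not the documented production configuration:

```python
from math import comb

def dda_signal_count(n: int, N: int) -> int:
    """Upper bound on structured supervisory signals: one per-embedder
    signal plus one per embedder pair, for each of the n inputs."""
    return n * (N + comb(N, 2))

# Illustrative numbers only; the actual panel size is not given here.
n, N = 10_000, 14
total = dda_signal_count(n, N)
ratio = total // n  # multiplier over the single-embedder baseline of n signals
print(total, ratio)  # 1050000 105
```

Note that the multiplier N + C(N,2) grows quadratically in the panel size, which is why a modest N already clears the 100x mark (N = 14 gives 14 + 91 = 105).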
Meaning compression is the corresponding ratio view, signal density per unit of raw data, proposed as a fourth taxonomic entry alongside bit-, weight-, and activation-compression and measured in a semantically distinct unit.
Teleological Constellation Training (TCT) is a three-phase method that uses the same frozen panel in three roles: constructing a multi-modal centroid, training the generator against it with a geometric-proximity loss, and gating every output at inference through a deterministic O(M · dₘₐₓ) cosine predicate. The gate admits no auxiliary learned model; scalar-reward alignment (RLHF, DPO, Constitutional AI) cannot furnish per-output verifiability of the same technical kind.
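The inference-time gate described above can be sketched as a pure predicate: each of the M frozen embedders scores the candidate against its own centroid, each cosine costs O(d), and the output is admitted only if every score clears the threshold. The names `gate`, `tau`, and the toy centroids below are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity; O(d) for d-dimensional vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def gate(candidate_embs: list[np.ndarray],
         centroids: list[np.ndarray],
         tau: float) -> bool:
    """Deterministic predicate: admit the output only if every one of the
    M frozen embedders places it near its centroid. One O(d) cosine per
    embedder gives the overall O(M * d_max) cost."""
    return all(cosine(e, c) >= tau for e, c in zip(candidate_embs, centroids))

# Toy usage: three embedders with different dimensionalities.
rng = np.random.default_rng(0)
centroids = [rng.normal(size=d) for d in (8, 16, 32)]
near = [c + 0.01 * rng.normal(size=c.shape) for c in centroids]  # close to centroids
far = [rng.normal(size=c.shape) for c in centroids]              # unrelated vectors

print(gate(near, centroids, tau=0.9))  # True
print(gate(far, centroids, tau=0.9))   # False
```

Because the predicate is a fixed function of frozen embeddings, it involves no learned parameters at inference time, which is what makes per-output verification deterministic.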
Three measured case studies ground the framework: a voice clone at 0.961 WavLM SECS; a talking-head avatar reproducing 196+ micro-expressions; and a Shakespeare LoRA exhibiting emergent zero-shot cross-lingual transfer to Golden-Age Spanish, a register the model was never trained on.