Figure 1. Meaning compression is a training-data-side ratio, not Shannon bit-rate compression.
The taxonomy distinguishes four compression questions. Bit compression counts bits per raw data unit. Weight compression counts task performance per parameter. Activation compression counts runtime memory per output quality. Meaning compression asks how many structured training signals can be derived per raw-data unit without replacing the source observation with generated data.
Compression, but on the training-data side
Bit compression asks how few bits can represent a signal. Weight compression asks how much capability fits in model parameters. Activation compression asks how much runtime memory is needed. Meaning compression asks how many structured labels can be extracted from fixed real data.
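The training-data-side ratio above can be made concrete with a small sketch. This is an illustrative assumption, not the author's definition: it simply divides derived structured records by fixed raw-data units, with hypothetical function and parameter names.

```python
# Hypothetical sketch: meaning compression as a training-data-side ratio.
# Names (meaning_compression_ratio, raw_units, derived_records) are
# illustrative assumptions, not an API from the source.

def meaning_compression_ratio(raw_units: int, derived_records: int) -> float:
    """Structured training signals derived per unit of fixed raw data."""
    if raw_units <= 0:
        raise ValueError("raw_units must be positive")
    return derived_records / raw_units

# Example: a 10,000-token fixed corpus yielding 2,500 derived records.
ratio = meaning_compression_ratio(raw_units=10_000, derived_records=2_500)
print(ratio)  # 0.25 derived records per raw token
```

Note that the denominator is the fixed source corpus, not generated data: the ratio can only grow by extracting more structure, never by replacing the observations.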
Why this differs from synthetic data
The DDA move does not ask a generator to create replacement data. It keeps the source observation anchored and uses frozen instruments to expose more measurable structure inside it.
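The anchored-source move described above can be sketched as follows. This is a minimal illustration under assumptions: the "frozen instruments" here are two deterministic analyzers, and every name and record field is hypothetical rather than taken from the source.

```python
# Hypothetical sketch of the fixed-data DDA move: the source observation is
# never replaced; frozen "instruments" (deterministic analyzers here)
# annotate it, and each derived record points back into the anchored source.
# All function and field names are illustrative assumptions.

source = "Shall I compare thee to a summer's day?"  # fixed observation

def word_count_instrument(text):
    # One frozen instrument: expose word-count structure.
    return {"label": "word_count", "value": len(text.split())}

def question_instrument(text):
    # Another frozen instrument: expose whether the line is interrogative.
    return {"label": "is_question", "value": text.rstrip().endswith("?")}

def derive_records(text, instruments):
    # Each derived record keeps a span reference into the untouched source.
    return [{"source_span": (0, len(text)), **inst(text)} for inst in instruments]

records = derive_records(source, [word_count_instrument, question_instrument])
# Two structured training signals from one unchanged source sentence.
```

The design point is that `derive_records` only reads the source and emits annotations that reference it; no generator produces substitute text, so the original observation remains the ground truth.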
Where the ratio is measured today
The current evidence surface is system-specific: Context Graph for text, ClipCannon for video, and the Shakespeare case as a text-style demonstration with explicit verification scope.
Figure 2. The Shakespeare LoRA case is a text-side meaning-compression story with explicit verification scope.
The figure shows a small public-domain Shakespeare corpus on the left and derived training records on the right. It frames the case as evidence that structured labels can be materialized from fixed real text while keeping the source data grounded.
Shakespeare LoRA Full State Verification
The current manuscript reports five manually verified prompts on the trained LoRA, including cross-lingual period-style transfer observed on a Spanish prompt.
Verification prompts: 5/5 PASS
The LoRA was trained on a public-domain Shakespeare corpus; the result is a case study, not a general style-transfer theorem.
Related videos
00:11:49 / mXAJgE2G87Q
Shakespeare LoRA as a signal-density case study
Concrete public-domain text example for how a small corpus can produce many derived training records.
Plain-language talk track for the fixed-data DDA move and the meaning-compression ratio.
The formal site claim is narrower than the talk title: DDA is fixed-corpus decomposition and meaning compression is a proposed structured-signal-density measure.