HolographiX: Holographic Information MatriX

V3.0 — Information fields for lossy, unordered worlds

Paper DOI: 10.5281/zenodo.18000522 · Book ISBN-13: 979-8278598534 · GitHub: source · Medium article · Podcast (20 Dec 2025)


Thesis (what this actually is)

HolographiX is a framework that makes information resilient by construction.

Optical holography is not a “transport trick”: it is a property of representation. Cut the plate and you still see the whole scene; you lose detail, not geometry. This repository engineers the same invariance in software: it defines a representation in which any non‑empty subset of fragments decodes to a coherent best‑so‑far estimate, improving smoothly as more fragments arrive.

So the core claim is not “we can move data on UDP”. The claim is:

Reliability is moved from the channel to the object.
The information artifact itself remains structurally interpretable under fragmentation, loss, reordering, duplication, delay, partial storage, partial computation.

Networks, filesystems, radios, object stores, model pipelines are just different ways of circulating or sampling pieces of the same resilient field.

Abstract (operational contract)

Let x be a source (image/audio/bytes). The encoder produces a set of chunks:

C = E(x) = {c_1, …, c_N}

For any subset S ⊆ C with S ≠ ∅, the decoder returns:

x_hat(S) = D(S)

The system is designed so that x_hat(S) is coherent for every non‑empty S, and quality improves smoothly with |S|. Missing evidence should manifest mainly as loss of detail (high‑frequency energy / precision), not as missing coordinates (holes in space/time).

This “any subset works” property is the generalized holography.
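The contract can be illustrated with a deliberately tiny toy codec (not the real holo API): the coarse part is the global mean, and residual values are dealt round-robin across chunks so every chunk spans the whole support. Any non-empty subset decodes, and error shrinks monotonically as chunks arrive.

```python
# Toy illustration of the "any subset decodes" contract (not the real holo API).
# Coarse part: the global mean. Residual: per-sample deviations, dealt
# round-robin across N chunks so every chunk spans the whole support.

def toy_encode(signal, n_chunks):
    coarse = sum(signal) / len(signal)
    chunks = [{"coarse": coarse, "residual": {}} for _ in range(n_chunks)]
    for i, v in enumerate(signal):
        chunks[i % n_chunks]["residual"][i] = v - coarse
    return chunks

def toy_decode(subset, length):
    coarse = subset[0]["coarse"]       # every chunk carries the scaffold
    est = [coarse] * length            # missing residuals default to zero detail
    for ch in subset:
        for i, r in ch["residual"].items():
            est[i] = coarse + r
    return est

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

signal = [float(i % 7) for i in range(100)]
chunks = toy_encode(signal, 10)
errs = [mse(signal, toy_decode(chunks[:k], len(signal))) for k in range(1, 11)]
assert all(e1 >= e2 for e1, e2 in zip(errs, errs[1:]))   # quality never regresses
assert errs[-1] < 1e-12                                  # all chunks ≈ exact
```

Here the "holes become softness" property is literal: a missing chunk leaves its samples at the coarse value instead of leaving a gap.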

Not Just an Encoder/Decoder: A Resilient Information Field Framework

If you look at HolographiX as “a decoder that explodes information into a field and then transports it”, you miss the point. The “field” is not an intermediate representation: it is the data structure, the durable form of the information.

Transport is optional; field operations are fundamental. You can store a field, merge fields, heal a damaged field, stack fields to raise SNR, pack multiple objects into one field, prioritize transmission by gain, add recovery chunks. Those are not add-ons; they are the algebra that makes “resilient information” real.

Mechanism (how the invariance is enforced)

HolographiX uses a coarse + residual decomposition. The coarse part provides a scaffold that makes single‑chunk decoding meaningful. The residual is distributed across chunks using a deterministic golden‑ratio interleave (the MatriX) so that each chunk touches the whole support of the signal. As a result, “how many chunks you have” matters far more than “which chunk IDs you have”.
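The interleave idea can be sketched with a golden-ratio stride permutation (assumed to mirror the intent of the MatriX; the real implementation lives in holo.codec). A stride coprime with n yields a full permutation, and dealing the permuted indices into chunks makes each chunk sample the whole support.

```python
# Sketch of a golden-ratio interleave (illustrative; real code in holo.codec).
import math

def golden_permutation(n):
    """Deterministic permutation of range(n) using a golden-ratio stride."""
    phi = (1 + math.sqrt(5)) / 2
    stride = max(1, round(n / phi))
    while math.gcd(stride, n) != 1:    # coprime stride => full permutation
        stride += 1
    return [(i * stride) % n for i in range(n)]

perm = golden_permutation(64)
assert sorted(perm) == list(range(64))            # it is a permutation

# Deal permuted coefficient indices into chunks: consecutive positions land
# far apart, so each chunk touches the whole support of the signal.
n_chunks = 8
chunks = [perm[c::n_chunks] for c in range(n_chunks)]
assert all(max(c) - min(c) > 40 for c in chunks)  # each chunk spans the range
```

This is why chunk count dominates chunk identity: swapping which chunk IDs you hold only changes which interleaved slice of the support you refine.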

HolographiX separates representation from transport. The codec/math defines how evidence is spread across chunks; the same chunks can live on UDP meshes, filesystems, object stores, delay-tolerant links, or flow directly into inference pipelines. The primitive is a stateless best‑so‑far field.

AI fit: best‑so‑far fields enable anytime inference — models can run on partial reconstructions (or on field tokens) and improve continuously as more evidence arrives. Operational loop: receive chunks -> decode best‑so‑far -> run model -> repeat.

v3.0 (olonomic) in one sentence

v3 moves residuals into local wave bases (DCT for images, STFT for audio). Missing chunks become missing coefficients (“missing waves”), shrinking chunks while keeping graceful degradation.

What’s new in 3.0 (olonomic v3)

Images: residual -> block DCT (default 8×8), JPEG‑style quantization, zigzag, golden split across chunks. Missing chunks = missing waves, not missing pixels.

Audio: residual -> STFT (Hann window, overlap‑add), per‑bin quantization, golden split across chunks. Missing chunks = softer/detail loss, not gaps/clicks.

Metadata containers: v3 coarse payloads carry codec params (block size, quality, padding for images; n_fft/hop/quality for audio) without changing header size.

CLI flag: --olonomic to encode with v3. Decoding auto‑detects version per chunk dir.

Install

python3 -m venv .venv && source .venv/bin/activate
pip install -e ./src          # install holo and deps (numpy, pillow)
# or run in-place: PYTHONPATH=src python3 -m holo ...

Quick start

Note: .holo output directories (like src/flower.jpg.holo/) are generated locally and are not committed to the repo. Run the encode step before any example that reads src/flower.jpg.holo.

Encode / decode an image:

python3 -m holo src/flower.jpg 32
python3 -m holo src/flower.jpg.holo --output out.png

Encode / decode olonomic (v3):

python3 -m holo --olonomic src/flower.jpg --blocks 16 --quality 40
python3 -m holo src/flower.jpg.holo --output out.png

Chunk sizing: use either TARGET_KB (positional) or --blocks (explicit count); if both are given, --blocks wins.

Audio:

python3 -m holo /path/to/track.wav 16
python3 -m holo --olonomic /path/to/track.wav 16
python3 -m holo /path/to/track.wav.holo --output track_recon.wav

Try packet‑sized chunks (mesh/UDP):

python3 -m holo src/flower.jpg 1 --packet-bytes 1136 --coarse-side 16

Graded reconstruction: fewer chunks soften detail without holes.

Layering map (codec -> transport -> modem)

holo.codec   -> chunk bytes (field representation)
holo.net     -> datagrams (framing + mesh)
holo.tnc     -> audio/radio modem (AFSK/FSK/PSK/etc)
holo.tv      -> multi-frame scheduling (HoloTV windows)

Codec → transport → field layering: genotype/phenotype/cortex/mesh analogy.

CLI cheat‑sheet (core)

Framework CLI map

      Input (image / audio / arbitrary file)
                    |
                    v
         +--------------------------+
         |  Holo Codec CLI          |
         |  python3 -m holo         |
         |  encode/decode/heal      |
         +--------------------------+
                    |
              .holo chunk dir
                    |
      +-------------+------------------+
      |                                |
      v                                v
+----------------------+        +-----------------------+
| Holo Net CLI         |        | Holo TNC CLI          |
| python3 -m holo net  |        | python3 -m holo tnc-* |
| UDP framing + mesh   |        | AFSK WAV modem        |
+----------------------+        +-----------------------+
      |                                |
      v                                v
 UDP sockets                      Audio / Radio link


CLI help and navigation

python3 -m holo --help
python3 -m holo net --help
python3 -m holo net <command> --help
python3 -m holo tnc-tx --help
python3 -m holo tnc-rx --help
python3 -m holo tnc-wav-fix --help

Python API highlights

import holo
from holo.codec import (
    encode_image_holo_dir, decode_image_holo_dir,
    encode_audio_holo_dir, decode_audio_holo_dir,
    encode_image_olonomic_holo_dir, decode_image_olonomic_holo_dir,
    encode_audio_olonomic_holo_dir, decode_audio_olonomic_holo_dir,
    stack_image_holo_dirs,
)

encode_image_olonomic_holo_dir("frame.png", "frame.holo", block_count=16, quality=60)
decode_image_holo_dir("frame.holo", "frame_recon.png")   # auto-dispatch by version

encode_audio_olonomic_holo_dir("track.wav", "track.holo", block_count=12, n_fft=256)
decode_audio_holo_dir("track.holo", "track_recon.wav")

Advanced modes (field algebra in practice)

Recovery (RLNC): optional recovery chunks (recovery_*.holo) can reconstruct missing residual slices under heavy loss. Encode with --recovery rlnc --overhead 0.25 and decode with --use-recovery.
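The recovery idea can be shown with a deliberately simplified XOR parity (GF(2)). The real recovery_*.holo chunks use random linear coding over GF(256) and tolerate multiple losses; this sketch repairs only a single missing chunk.

```python
# Simplified parity illustration of recovery chunks (GF(2), single-loss only;
# the real system uses RLNC over GF(256) and can solve for several losses).

def make_parity(chunks):
    parity = bytes(len(chunks[0]))
    for ch in chunks:
        parity = bytes(a ^ b for a, b in zip(parity, ch))
    return parity

def repair_one(received, parity):
    """Recover one missing chunk (received has exactly one None slot)."""
    missing = received.index(None)
    acc = parity
    for i, ch in enumerate(received):
        if i != missing:
            acc = bytes(a ^ b for a, b in zip(acc, ch))
    return missing, acc

chunks = [b"wave-0..", b"wave-1..", b"wave-2.."]
parity = make_parity(chunks)
idx, restored = repair_one([chunks[0], None, chunks[2]], parity)
assert (idx, restored) == (1, b"wave-1..")
```

The `--overhead 0.25` flag controls how many such recovery combinations are emitted relative to the data chunks.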

Coarse models: v3 coarse is pluggable (--coarse-model downsample|latent_lowfreq|ae_latent). latent_lowfreq keeps only low‑frequency DCT/STFT coefficients; ae_latent loads optional tiny weights from .npz if present.

Uncertainty output: decode with --write-uncertainty to produce *_confidence.png (images) or *_confidence.npy (audio) where 1.0 = fully observed.

Chunk priority: encoders write per‑chunk scores and manifest.json ordering. Use --prefer-gain for best‑K decode, and mesh sender priority flags to transmit high‑gain chunks first.

Fixed‑point healing: Field.heal_fixed_point(...) iterates healing until deltas stabilize, with drift guards for lossy v3.

CLI healing: use --heal or --heal-fixed-point on a .holo directory to re-encode the current best‑so‑far.

TNC (experimental): holo.tnc provides a minimal AFSK modem + framing to carry holo.net.transport datagrams over audio.

HoloTV (experimental): holo.tv schedules multi-frame windows and demuxes datagrams into per-frame fields above holo.net and holo.tnc.
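The fixed-point healing loop mentioned above can be sketched generically (the real API is Field.heal_fixed_point; the names and repair operator below are illustrative, not the real signature).

```python
# Generic sketch of a fixed-point healing loop: apply a repair operator until
# the per-iteration delta drops below a tolerance (illustrative names only).

def heal_fixed_point(state, repair, tol=1e-3, max_iters=10):
    """Iterate `repair` until the field stabilizes or max_iters is reached."""
    for _ in range(max_iters):
        new_state = repair(state)
        delta = max(abs(a - b) for a, b in zip(new_state, state))
        state = new_state
        if delta < tol:            # converged: the field is self-consistent
            break
    return state

# Toy repair operator: pull each sample toward its neighbors' mean.
def smooth(xs):
    out = list(xs)
    for i in range(1, len(xs) - 1):
        out[i] = 0.5 * xs[i] + 0.25 * (xs[i - 1] + xs[i + 1])
    return out

healed = heal_fixed_point([0.0, 5.0, 0.0, 5.0, 0.0], smooth)
assert max(healed) < 5.0           # spikes are attenuated toward consistency
```

The drift guards in the real loop serve the same role as `max_iters` here: lossy v3 fields may never reach an exact fixed point, so iteration must be bounded.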

Update summary (latest)

Recovery: systematic RLNC (recovery_*.holo) + GF(256) solver, optional in v3 image/audio encode/decode; mesh can send recovery chunks.

Coarse models: downsample/latent_lowfreq/ae_latent interface wired into v3 metadata, with optional training script for AE weights.

Uncertainty: confidence maps/curves from decoder masks, new meta decode helpers, and honest healing to attenuate uncertain regions.

Chunk priority: score-aware manifest + prefer-gain decode and mesh sender ordering.

Healing: fixed-point healing loop with convergence metric and drift guards.

The local healing process is implemented as a deterministic self-consistency loop: we iteratively apply a repair operator until the field stabilizes (a practical fixed-point iteration). (Ref. Hamann, S. (2025). From topology to dynamics: The order behind α and the natural constants, v1.0.6 (01 Sep 2025), §9.1 — used here as conceptual inspiration for fixed-point self-consistency loops.)

Implementation status (plan checklist)

- [x] Review codec/field/mesh flow and implement RLNC recovery chunk format + GF(256) helpers.
- [x] Add coarse-model abstraction (downsample, latent_lowfreq, ae_latent) and store model name in v3 metadata.
- [x] Implement uncertainty tracking + meta decode helpers; integrate priority selection and manifest generation.
- [x] Add tests for recovery/uncertainty/prefer-gain/fixed-point healing; update README/examples/CLI.
- [x] Run test suite and address issues.

CLI guide (new features)

Recovery encode + decode:

PYTHONPATH=src python3 -m holo --olonomic src/flower.jpg --blocks 12 --recovery rlnc --overhead 0.5
PYTHONPATH=src python3 -m holo src/flower.jpg.holo --use-recovery

Prefer-gain decode (best-K chunks):

PYTHONPATH=src python3 -m holo src/flower.jpg.holo --max-chunks 4 --prefer-gain

Uncertainty output:

PYTHONPATH=src python3 -m holo src/flower.jpg.holo --write-uncertainty

Healing (one-step and fixed-point):

PYTHONPATH=src python3 -m holo src/flower.jpg.holo --heal
PYTHONPATH=src python3 -m holo src/flower.jpg.holo --heal-fixed-point --heal-iters 4 --heal-tol 1e-3

Mesh sender priority + recovery:

PYTHONPATH=src python3 src/examples/holo_mesh_sender.py --uri holo://demo/flower --chunk-dir src/flower.jpg.holo --peer 127.0.0.1:5000 --priority gain --send-recovery

TNC quickstart (experimental)

AFSK loopback example (no soundcard required):

import numpy as np
from holo.tnc import AFSKModem
from holo.tnc.channel import awgn

modem = AFSKModem()
payload = b"hello field"
samples = modem.encode(payload)
samples = awgn(samples, 30.0)  # optional noise
decoded = modem.decode(samples)
assert decoded == [payload]

Ham radio transport (HF/VHF/UHF/SHF)

Holo does not turn images into audio content. The audio you transmit is only a modem carrier for bytes. WAV size is dominated by AFSK bitrate (~payload_bytes * 16 * fs / baud). To shrink WAVs, raise --baud or lower --fs, and optionally reduce payload size with v3 (--olonomic, lower --quality / --overhead).
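The rule of thumb above can be wrapped in a small helper. It assumes 16-bit PCM and one run of fs/baud samples per bit, and ignores preamble, inter-frame gaps, and WAV header overhead.

```python
# Back-of-envelope WAV size from the README's rule of thumb:
# 8 bits/byte * (fs/baud samples per bit) * 2 bytes per 16-bit PCM sample.
# Ignores preamble, inter-frame gaps, and WAV header overhead.

def estimated_wav_bytes(payload_bytes, fs=9600, baud=1200):
    return int(payload_bytes * 16 * fs / baud)

assert estimated_wav_bytes(320) == 40_960               # HF-ish defaults
assert estimated_wav_bytes(320, baud=2400) == 20_480    # doubling baud halves it
```

This makes the two levers explicit: raising --baud or lowering --fs shrinks the WAV linearly, independently of what the payload encodes.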

Pipeline overview:

image/audio -> holo.codec -> chunks (.holo)
  -> holo.net.transport (datagrams)
  -> holo.tnc (AFSK/PSK/FSK/OFDM modem)
  -> radio audio

radio audio
  -> holo.tnc -> datagrams -> chunks -> holo.codec decode

Why datagrams on radio: the Holo layers above the modem only consume datagram bytes, so in practice you can replace the AFSK demo with any modem that yields bytes; everything above it stays unchanged.

One-line commands (encode + TX, RX + decode)

# WAV size scales with bitrate (~payload_bytes * 16 * fs / baud); shrink by raising --baud or lowering --fs.
# Noisy band (HF-like, v3): encode -> tnc-tx
PYTHONPATH=src python3 -m holo --olonomic src/flower.jpg --blocks 12 --quality 30 --recovery rlnc --overhead 0.25

PYTHONPATH=src python3 -m holo tnc-tx --chunk-dir src/flower.jpg.holo --uri holo://noise/demo --out tx_noise.wav \
  --max-payload 320 --gap-ms 40 --preamble-len 16 --fs 9600 --baud 1200 --prefer-gain --include-recovery

# Noisy band (HF-like): tnc-rx -> decode
PYTHONPATH=src python3 -m holo tnc-rx --input tx_noise.wav --uri holo://noise/demo --out rx_noise.holo --baud 1200 --preamble-len 16

PYTHONPATH=src python3 -m holo rx_noise.holo --output rx.png --use-recovery --prefer-gain

# Clean link (VHF/UHF/SHF, v3): encode -> tnc-tx
PYTHONPATH=src python3 -m holo --olonomic src/flower.jpg --blocks 12 --quality 30 \
  && PYTHONPATH=src python3 -m holo tnc-tx --chunk-dir src/flower.jpg.holo --uri holo://clean/demo --out tx_clean.wav \
  --max-payload 512 --gap-ms 15 --preamble-len 16 --prefer-gain --fs 9600 --baud 1200

# Clean link (VHF/UHF/SHF): tnc-rx -> decode
PYTHONPATH=src python3 -m holo tnc-rx --input rx_clean.wav --uri holo://clean/demo --out rx_clean.holo --baud 1200 --preamble-len 16 \
  && PYTHONPATH=src python3 -m holo rx_clean.holo --output rx.png --prefer-gain

Loopback tip (no radio): feed the TX WAV straight back into tnc-rx (e.g. use tx_clean.wav as rx_clean.wav) to test the full chain.

On-air workflow (TX -> RX)

1) Encode the image/audio into .holo chunks.
2) Run tnc-tx to build a WAV (baseband audio).
3) Feed WAV audio into the radio (line-in/IF preferred).
4) Record RX audio into a WAV.
5) Run tnc-rx to rebuild chunks, then decode the image/audio.

Suggested parameter table (AFSK, conservative defaults):

Link quality              --baud  --fs   --max-payload  --gap-ms  --include-recovery  Compression/AGC
Noisy/variable (HF)       1200    9600   320            40        yes                 OFF
Clean link (VHF/UHF/SHF)  1200    9600   512            15        optional            OFF


HoloTV quickstart (experimental)

Schedule chunks across a window of frames and feed them to a receiver:

from pathlib import Path

from holo.tv import HoloTVWindow, HoloTVReceiver
from holo.cortex.store import CortexStore

window = HoloTVWindow.from_chunk_dirs(
    "holo://tv/demo",
    ["frames/f000.holo", "frames/f001.holo"],
    prefer_gain=True,
)

store = CortexStore("tv_store")
rx = HoloTVReceiver("holo://tv/demo", store, frame_indices=[0, 1])

for datagram in window.iter_datagrams():
    rx.push_datagram(datagram)

frame0_dir = rx.chunk_dir_for_frame(0)
print("frame 0 chunks:", sorted(Path(frame0_dir).glob("chunk_*.holo")))

Olonomic v3 details (operational)

Images: residual = img − coarse_up -> pad to block size -> DCT‑II (ortho) per block/channel -> JPEG‑style quant (quality 1‑100) -> zigzag -> int16 -> golden permutation -> zlib per chunk. Missing chunks zero coefficients; recon via dequant + IDCT + coarse.
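The zigzag step orders each 8×8 block's DCT coefficients from low to high frequency along anti-diagonals, so truncation drops the finest waves first. A sketch of the JPEG-style scan (the real implementation lives in holo.codec):

```python
# Sketch of the JPEG-style zigzag scan for an 8x8 DCT block: coefficients are
# read along anti-diagonals, low frequencies first (real code in holo.codec).

def zigzag_indices(n=8):
    return sorted(
        ((r, c) for r in range(n) for c in range(n)),
        key=lambda rc: (rc[0] + rc[1],                         # anti-diagonal
                        rc[0] if (rc[0] + rc[1]) % 2 else rc[1]),
    )

zz = zigzag_indices()
assert zz[0] == (0, 0)                   # DC coefficient first
assert zz[-1] == (7, 7)                  # highest frequency last
assert len(set(zz)) == 64                # every coefficient exactly once
```

Because the golden permutation then spreads this ordered coefficient stream across chunks, losing a chunk removes scattered waves across all frequencies rather than one spatial region.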

Audio: residual = audio − coarse_up -> STFT (sqrt-Hann, hop=n_fft/2 default) -> scale by n_fft -> per‑bin quant steps grow with freq (quality 1‑100) -> int16 (Re/Im interleaved) -> golden permutation -> zlib per chunk. Recon via dequant, ISTFT overlap‑add, coarse + residual.

Metadata: packed inside coarse payload (PNG + OLOI_META for images; zlib coarse + OLOA_META for audio) so headers stay backward‑compatible.

Field operations are first-class: decode partially at any time, heal to restore a clean distribution, stack exposures to raise SNR, and pack/extract multiple objects into one field.

Repository map

README.md                  top-level overview (this file)
src/pyproject.toml         packaging for editable install
src/requirements.txt       runtime deps (numpy, pillow)

src/holo/                  core library
  codec.py                 codecs v1/v2/v3 (image/audio), chunk scoring, recovery hooks
  recovery.py              GF(256) RLNC recovery chunks + solver
  __main__.py              CLI entry (codec + tnc-tx/tnc-rx)
  __init__.py              public API surface
  container.py             multi-object packing/unpacking
  field.py                 field tracking + healing (fixed-point)
  cortex/                  storage helpers (store.py backend)
  net/                     transport + mesh + content IDs
    transport.py           datagram framing/reassembly (HODT/HOCT)
    mesh.py                UDP mesh sender/receiver + priority order
    arch.py                content_id helpers
  models/                  coarse model abstraction (downsample/latent_lowfreq/ae_latent)
  tnc/                     modem + framing + WAV CLI
    afsk.py                AFSK modem
    frame.py               framing + CRC
    cli.py                 tnc-tx/tnc-rx WAV helpers
  tv/                      HoloTV scheduling + demux helpers
  mind/                    stubs/placeholders for higher-layer logic

src/examples/              runnable demos (encode/decode, mesh_loopback, heal, pack/extract, benchmarks)
src/tests/                 unit tests (round-trip, recovery, tnc, tv, healing)
src/codec_simulation/      React/Vite control deck for codec exploration (optional)
src/docs/                  Global Holographic Network guide (mesh/INV-WANT, DTN, examples for sensor fusion/AI/maps)
src/infra/                 containerlab lab + netem/benchmark configs
src/systemd/               sample systemd units for mesh sender/receiver/node
src/tools/                 offline tools (e.g., AE coarse training/export)

Testing

PYTHONPATH=src python3 -m unittest discover -s src/tests -p 'test_*.py'

Network/mesh smoke test (loopback UDP using holo:// content IDs):

PYTHONPATH=src python3 src/examples/mesh_loopback.py
# emits galaxy.jpg chunks on 127.0.0.1, stores in src/examples/out/store_b/...,
# and writes a reconstructed image to src/examples/out/galaxy_mesh_recon.png

Other functional checks (examples):

PYTHONPATH=src python3 src/examples/heal_demo.py
PYTHONPATH=src python3 src/examples/pack_and_extract.py

Results snapshot

src/galaxy.jpg, coarse-side=16, v3 command:

python3 -m holo --olonomic src/galaxy.jpg --blocks 16 --quality 40 --packet-bytes 0

Total ~0.35 MB (coherent single-chunk recon). Same settings v2 pixel residuals: ~1.69 MB. Visual quality comparable; v3 degrades as “missing waves”, not holes.

Photon-collector stacking: multiple exposures reinforce structure over noise.

PSNR vs received chunks: quality rises smoothly; variance stays low.

Design principles

Interchangeability by construction: golden permutation ensures quality depends mostly on chunk count, not chunk IDs.

Graceful loss: missing chunks zero high‑freq waves instead of creating spatial/temporal holes.

Stateless decode: any subset of valid chunks decodes without coordination.

Transport‑agnostic: codec math is separate from mesh/UDP; use your own transport if needed.

References

Hamann, S. (2025). From topology to dynamics: The order behind α and the natural constants, v1.0.6 (01 Sep 2025).

© 2025 holographix.io