Examples¶
The examples/ directory ships one self-contained quickstart that
exercises the public bitlogic API end-to-end. It depends only on
bitlogic, torch, and torchvision — no internal experiment harness
— so it doubles as a runnable smoke test in a clean clone.
MNIST quickstart¶
```shell
uv sync --extra cpu --extra dev                     # or: --extra cu128
uv run python examples/train_mnist.py               # full run, 5 epochs
uv run python examples/train_mnist.py --epochs 1    # ~1-minute smoke test
```
examples/train_mnist.py builds the canonical
DistributiveThermometer → LogicDense → LogicDense → GroupSum stack
described in the Quick Start, fits the thermometer
encoder on a slice of the training set, and runs a plain PyTorch loop.
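The fit-then-encode pattern behind the thermometer step can be illustrated with a dependency-free sketch. This is not the `DistributiveThermometer` implementation; it assumes one common "distributive" choice, placing the `num_bits` thresholds at evenly spaced quantiles of the fitted sample, so denser regions of the data get finer resolution:

```python
def fit_thresholds(values, num_bits):
    """Place num_bits thresholds at evenly spaced quantiles of the sample."""
    ordered = sorted(values)
    n = len(ordered)
    # Quantile positions 1/(b+1), ..., b/(b+1) over the fitted sample.
    return [ordered[min(n - 1, (i * n) // (num_bits + 1))]
            for i in range(1, num_bits + 1)]

def encode(value, thresholds):
    """Thermometer code: bit i is 1 iff value exceeds threshold i,
    so codes are always a prefix of ones followed by zeros."""
    return [1 if value > t else 0 for t in thresholds]

# Fit on a sample of pixel intensities, then encode individual pixels.
pixels = [i / 255 for i in range(256)]
thresholds = fit_thresholds(pixels, 3)
```

With these thresholds, a dim pixel like 0.4 encodes to `[1, 0, 0]` and a bright one like 0.9 to `[1, 1, 1]`; the resulting bits are what the first `LogicDense` layer consumes.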
The script accepts the usual knobs:
```
--data-dir DIR       torchvision download cache (default: ./data)
--batch-size N       default: 128
--epochs N           default: 5
--lr LR              default: 5e-3
--num-bits N         thermometer bit-width (default: 8)
--layer-width N      LogicDense width (default: 1000)
--seed N             default: 0
--device cpu|cuda    default: cuda if available, else cpu
```
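For orientation, the flag surface above corresponds to an `argparse` parser along these lines. This is a sketch, not the script's actual parser, and the `build_parser` name is made up here:

```python
import argparse

def build_parser():
    p = argparse.ArgumentParser(description="bitlogic MNIST quickstart")
    p.add_argument("--data-dir", default="./data",
                   help="torchvision download cache")
    p.add_argument("--batch-size", type=int, default=128)
    p.add_argument("--epochs", type=int, default=5)
    p.add_argument("--lr", type=float, default=5e-3)
    p.add_argument("--num-bits", type=int, default=8,
                   help="thermometer bit-width")
    p.add_argument("--layer-width", type=int, default=1000,
                   help="LogicDense width")
    p.add_argument("--seed", type=int, default=0)
    # None means "resolve to cuda if available, else cpu" at runtime.
    p.add_argument("--device", choices=["cpu", "cuda"], default=None)
    return p

args = build_parser().parse_args(["--epochs", "1"])
```

Parsing `--epochs 1` this way leaves every other knob at the documented default, which is exactly the one-epoch smoke-test invocation shown earlier.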
A 5-epoch run reaches roughly 97–98% test accuracy on MNIST, a reasonable baseline for a model built from 4-input LUTs with learnable connections.
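The final `GroupSum` step, which turns the 1000 `LogicDense` outputs into 10 class scores, can be sketched in plain Python. This assumes `GroupSum` partitions the layer outputs into one contiguous group per class and sums each group (100 neurons per class at the default width); see the bitlogic docs for the exact semantics:

```python
def group_sum(outputs, num_classes):
    """Sum contiguous groups of neuron outputs into one score per class."""
    group = len(outputs) // num_classes
    return [sum(outputs[c * group:(c + 1) * group])
            for c in range(num_classes)]

# 1000 binary neuron outputs -> 10 MNIST class scores.
scores = group_sum([0] * 1000, 10)
```

The predicted class is then simply the argmax over the group sums, so the network's "logits" are integer counts of active neurons per class.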
Reproducing the TMLR paper¶
The configs and SLURM submitters that drive the E1–E7 sweeps in §4 of the TMLR paper live in a separate research tree that is not part of this repository. The full reproduction archive (training harness, sweep configs, paper sources) is available on request — please open an issue or contact the authors.