Encoders API

Input encoders turn continuous data into bit vectors that a LUT network can consume. Three encoders ship with the library: two thermometer-style (monotone bit vectors: b wires encode b+1 levels) and one Brevitas-style uniform fixed-point quantizer (binary-coded integers: b wires encode 2**b levels). All are plain nn.Modules: fit once on training data, then use like any other module.

from bitlogic import DistributiveThermometer

enc = DistributiveThermometer(num_bits=8).fit(train_samples)
bits = enc(images)    # adds a num_bits factor into axis `encode_axis`
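To make the wire-count tradeoff concrete, here is a minimal pure-Python sketch (hypothetical helpers, not the library API): a thermometer code is a monotone comparison vector, while the fixed-point encoder packs an integer code into binary wires.

```python
def thermometer_bits(x, thresholds):
    # Monotone bit vector: bit i is 1 iff x >= thresholds[i],
    # so b wires distinguish b + 1 levels.
    return [1 if x >= t else 0 for t in thresholds]

def binary_bits(q, num_bits):
    # Binary-coded integer, LSB at index 0,
    # so b wires distinguish 2**b levels.
    return [(q >> i) & 1 for i in range(num_bits)]

print(thermometer_bits(0.6, [0.25, 0.5, 0.75]))  # [1, 1, 0]
print(binary_bits(6, 3))                         # [0, 1, 1]
```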

encoders

Input encoders for LUT networks.

Call enc.fit(train_sample) once to compute thresholds, then use like any other nn.Module.

DistributiveThermometer

DistributiveThermometer(num_bits: int, encode_axis: int = 1, fit_axes: Iterable[int] | None = None, input_bits: int = 8)

Bases: _ThermometerBase

Quantile-based thresholds from the fit sample.

Source code in bitlogic/encoders/thermometer.py
def __init__(
    self,
    num_bits: int,
    encode_axis: int = 1,
    fit_axes: Iterable[int] | None = None,
    input_bits: int = 8,
):
    super().__init__()
    if num_bits <= 0:
        raise ValueError(f"num_bits must be positive, got {num_bits}")
    if input_bits < 0:
        raise ValueError(f"input_bits must be non-negative, got {input_bits}")
    self.num_bits = int(num_bits)
    self.encode_axis = int(encode_axis)
    self.fit_axes = tuple(fit_axes) if fit_axes else (1,)
    self.input_bits = int(input_bits)
    self.register_buffer("thresholds", None, persistent=True)
    self.register_buffer("input_min", None, persistent=True)
    self.register_buffer("input_max", None, persistent=True)
    # Full non-batch input shape recorded at fit() time, needed by
    # hardware exporters to lay out one input byte per spatial position
    # (the thresholds tensor has singletons along reduced axes and can't
    # recover e.g. H and W on its own).
    self.register_buffer("fit_input_shape", None, persistent=True)
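The quantile placement can be sketched in pure Python (an illustrative nearest-rank version; the library fits on torch tensors and its interpolation may differ): each of the b+1 output levels receives roughly the same fraction of the fit sample.

```python
def quantile_thresholds(sample, num_bits):
    # Nearest-rank quantiles at k / (num_bits + 1), k = 1..num_bits.
    s = sorted(sample)
    n = len(s)
    return [s[min(n - 1, n * k // (num_bits + 1))]
            for k in range(1, num_bits + 1)]

sample = list(range(100))                 # uniform 0..99
print(quantile_thresholds(sample, 3))     # [25, 50, 75]
```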

Thermometer

Thermometer(num_bits: int, encode_axis: int = 1, fit_axes: Iterable[int] | None = None, input_bits: int = 8)

Bases: _ThermometerBase

Evenly spaced thresholds between min and max of the fit sample.

Source code in bitlogic/encoders/thermometer.py
def __init__(
    self,
    num_bits: int,
    encode_axis: int = 1,
    fit_axes: Iterable[int] | None = None,
    input_bits: int = 8,
):
    super().__init__()
    if num_bits <= 0:
        raise ValueError(f"num_bits must be positive, got {num_bits}")
    if input_bits < 0:
        raise ValueError(f"input_bits must be non-negative, got {input_bits}")
    self.num_bits = int(num_bits)
    self.encode_axis = int(encode_axis)
    self.fit_axes = tuple(fit_axes) if fit_axes else (1,)
    self.input_bits = int(input_bits)
    self.register_buffer("thresholds", None, persistent=True)
    self.register_buffer("input_min", None, persistent=True)
    self.register_buffer("input_max", None, persistent=True)
    # Full non-batch input shape recorded at fit() time, needed by
    # hardware exporters to lay out one input byte per spatial position
    # (the thresholds tensor has singletons along reduced axes and can't
    # recover e.g. H and W on its own).
    self.register_buffer("fit_input_shape", None, persistent=True)
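Evenly spaced placement, sketched the same way (a sketch of one plausible layout, with num_bits interior cut points between the fitted min and max; the library's exact endpoint handling may differ):

```python
def linear_thresholds(lo, hi, num_bits):
    # num_bits cut points strictly between lo and hi.
    step = (hi - lo) / (num_bits + 1)
    return [lo + step * k for k in range(1, num_bits + 1)]

print(linear_thresholds(0.0, 1.0, 3))   # [0.25, 0.5, 0.75]
```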

UniformFixedPoint

UniformFixedPoint(num_bits: int, encode_axis: int = 1, fit_axes: Iterable[int] | None = None, input_bits: int = 8, signed: bool = False)

Bases: Module

Unsigned / signed uniform fixed-point quantizer with a binary-coded output.

Parameters:

num_bits (int, required)
    Output precision in bits. Must be > 0. The forward emits num_bits binary wires per feature, carrying the integer code of the quantized value (LSB at index 0).

encode_axis (int, default 1)
    Axis along which the num_bits dimension is folded into the output. Default 1 (the channel axis for NCHW).

fit_axes (Iterable[int] | None, default None)
    Axes kept per-feature when computing scale / min / max; all other axes are aggregated. Default (1,) = per-channel scale, matching the thermometer encoders' convention.

input_bits (int, default 8)
    Integer grid width used for eval-mode input snapping and for the HDL input port. Same semantics as in the thermometer encoders; set to 0 to disable input quantization entirely.

signed (bool, default False)
    If False (default), the input is assumed non-negative and quantized to q ∈ [0, 2**num_bits - 1] with zero_point = 0. If True, the input is quantized symmetrically around zero with the narrow-range convention zero_point = 2**(num_bits-1) - 1; the stored integer level is still non-negative, in [0, 2**num_bits - 2] (offset-binary, all-ones unused), so the downstream LUT address space is encoder-agnostic.

Raises:

ValueError
    If num_bits is not positive or input_bits is negative.
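The signed offset-binary mapping can be sketched as follows (a hypothetical helper mirroring the narrow-range convention in the __init__ source below; not the module's actual forward()):

```python
def signed_code(x, scale, num_bits):
    zero_point = (1 << (num_bits - 1)) - 1   # narrow-range: 7 for num_bits=4
    max_code = 2 * zero_point                # 2**num_bits - 2; all-ones unused
    return max(0, min(max_code, round(x / scale) + zero_point))

# num_bits=4, scale=1.0: codes run 0..14 with zero exactly at code 7.
print(signed_code(0.0, 1.0, 4))    # 7
print(signed_code(-7.0, 1.0, 4))   # 0   (most-negative input hits code 0)
print(signed_code(7.0, 1.0, 4))    # 14
```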

Source code in bitlogic/encoders/uniform_fixed_point.py
def __init__(
    self,
    num_bits: int,
    encode_axis: int = 1,
    fit_axes: Iterable[int] | None = None,
    input_bits: int = 8,
    signed: bool = False,
):
    super().__init__()
    if num_bits <= 0:
        raise ValueError(f"num_bits must be positive, got {num_bits}")
    if input_bits < 0:
        raise ValueError(f"input_bits must be non-negative, got {input_bits}")
    self.num_bits = int(num_bits)
    self.encode_axis = int(encode_axis)
    self.fit_axes = tuple(fit_axes) if fit_axes else (1,)
    self.input_bits = int(input_bits)
    self.signed = bool(signed)
    self._qmax = (1 << self.num_bits) - 1  # max code in unsigned mode
    # Signed mode uses Brevitas' narrow-range symmetric convention:
    # zero_point = 2**(num_bits-1) - 1 so that zero is exactly representable
    # AND the most-negative input maps to code 0 (not a below-grid value
    # that would clamp and collapse with the most-negative input's own
    # integer encoding). Max reachable code is therefore 2*zero_point =
    # 2**num_bits - 2, leaving the all-ones bit pattern unused.
    self._zero_point = (1 << (self.num_bits - 1)) - 1 if self.signed else 0
    self._max_code = 2 * self._zero_point if self.signed else self._qmax

    self.register_buffer("scale", None, persistent=True)
    self.register_buffer("input_min", None, persistent=True)
    self.register_buffer("input_max", None, persistent=True)
    # Pre-computed integer threshold ladder for HDL consumption:
    # ``T_i`` is the input-grid value such that ``input >= T_i`` is
    # equivalent to ``q >= i`` for ``i ∈ [1, 2**num_bits - 1]``. The
    # emitter walks this ladder to produce a thermometer-style comparator
    # tree and then packs the result into ``num_bits`` binary wires via a
    # small popcount. Shape: ``(*fit_shape, 2**num_bits - 1)``.
    self.register_buffer("thresholds_int", None, persistent=True)
    self.register_buffer("fit_input_shape", None, persistent=True)

fit

fit(x: Tensor) -> UniformFixedPoint

Compute per-feature scale from the calibration sample and freeze.

Max-based observer: scale = max(|x|_clamped) / levels. The stored scale is snapped to the input_bits integer grid so that eval-mode Python output agrees bit-exactly with the emitted HDL.

Parameters:

x (Tensor, required)
    Calibration tensor. Shape must include all axes listed in fit_axes; all other axes are aggregated over.

Returns:

UniformFixedPoint
    self for chaining.
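The max-based observer formula, sketched standalone (illustrative pure Python; `max_observer_scale` is a hypothetical helper, and "levels" is qmax = 2**num_bits - 1 unsigned or the narrow-range zero_point when signed):

```python
def max_observer_scale(sample, num_bits, signed=False):
    if signed:
        levels = (1 << (num_bits - 1)) - 1       # narrow-range zero_point
        return max(abs(v) for v in sample) / max(1, levels)
    levels = (1 << num_bits) - 1                 # unsigned qmax
    return max(max(v, 0.0) for v in sample) / max(1, levels)

print(max_observer_scale([0.2, 1.5, -0.3], num_bits=4))           # 1.5 / 15 = 0.1
print(max_observer_scale([-2.0, 1.5], num_bits=4, signed=True))   # 2.0 / 7
```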

Source code in bitlogic/encoders/uniform_fixed_point.py
def fit(self, x: torch.Tensor) -> UniformFixedPoint:
    """Compute per-feature scale from the calibration sample and freeze.

    Max-based observer: ``scale = max(|x|_clamped) / levels``. The
    stored scale is snapped to the ``input_bits`` integer grid so that
    eval-mode Python output agrees bit-exactly with the emitted HDL.

    Args:
        x: Calibration tensor. Shape must include all axes listed in
            ``fit_axes``; all other axes are aggregated over.

    Returns:
        ``self`` for chaining.
    """
    input_min, input_max = self._compute_min_max(x)
    if self.signed:
        # Symmetric narrow-range: scale from |x|. zero_point = 2**(b-1) - 1,
        # so max positive input maps to code = 2 * zero_point and max
        # negative input maps to code = 0 (exactly representable).
        abs_max = torch.maximum(input_max.abs(), input_min.abs())
        scale = abs_max / max(1, self._zero_point)
        # Expand stored input_min/input_max to the symmetric span so the
        # integer grid covers both half-lines fully and the ladder does
        # not collapse at the low end.
        input_max = abs_max
        input_min = -abs_max
    else:
        # One-sided range; clamp at zero for unsigned quantization.
        nonneg_max = input_max.clamp_min(0)
        scale = nonneg_max / max(1, self._qmax)

    # Snap scale to the input-bits grid so eval and HDL agree. We choose
    # the coarsest representable scale ≥ the calibrated one so the full
    # calibrated range fits in the integer codes. A zero-width feature
    # (constant channel) keeps scale=0; forward() handles that by pinning
    # the output to zero_point.
    # The input grid step is (input_max - input_min) / (2**input_bits - 1)
    # for each feature, or a uniform 1 / (2**input_bits - 1) when inputs
    # are normalized to [0, 1]. We store the real-valued scale and let
    # forward() snap inputs to the grid the same way the thermometer
    # does; the thresholds_int ladder below encodes scale × grid directly.
    self.scale = scale
    self.input_min = input_min
    self.input_max = input_max
    self.fit_input_shape = torch.tensor(x.shape[1:], dtype=torch.int64)
    self.thresholds_int = self._build_thresholds_int()
    return self
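Putting the unsigned path together, fit-derived scale plus quantization can be sketched as (an illustrative standalone version of the behavior described above, including the scale=0 pinning for constant features; not the module's forward()):

```python
def quantize_unsigned(x, scale, num_bits):
    qmax = (1 << num_bits) - 1
    if scale == 0:                 # zero-width (constant) feature: pin to zero_point = 0
        return 0
    return max(0, min(qmax, round(x / scale)))

scale = 1.5 / 15                   # as calibrated from a sample whose max is 1.5
print(quantize_unsigned(0.73, scale, 4))   # 7
print(quantize_unsigned(9.9, scale, 4))    # 15 (clamped to qmax)
print(quantize_unsigned(0.5, 0.0, 4))      # 0  (constant feature)
```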