
Layers API

BitLogic ships a single layer: LogicDense. Every LUT network is built by stacking LogicDense instances between an encoder and a head.

Internally each layer composes a parametrization (what the LUT is) with a connections module (which inputs feed each neuron). The trainable state is a single flat weight tensor whose shape depends on the chosen parametrization.
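A quick sketch of that composition (layer sizes here are illustrative, not recommendations):

```python
from bitlogic.layers import LogicDense

layer = LogicDense(in_dim=784, out_dim=4000)
print(type(layer.parametrization).__name__)  # the LUT parametrization module
print(type(layer.connections).__name__)      # the input-routing module
print(layer.weight.shape)                    # one flat nn.Parameter; the exact
                                             # shape depends on the parametrization
```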

LogicDense

LogicDense(in_dim: int, out_dim: int, *, parametrization: str = 'light', connections: str = 'fixed', lut_rank: int = 2, device: device | str | None = None, forward_sampling: str = 'soft', temperature: float = 1.0, weight_init: str = 'random', residual_probability: float = 0.951, anchor_init: bool = True, num_candidates: int | None = None, init_method: str = 'random-unique', num_groups: int | None = None, group_bias: float | None = None, connection_temperature: float | None = None, connection_sampling: str | None = None, **parametrization_extras: Any)

Bases: LogicBase

Fully-connected LUT layer.

Forward pass = connections gather → parametrization contract. The connection pattern ("fixed" or "learnable") is selected via the connections string; each node's truth-table shape is selected via the parametrization string ("light", "warp", "linear", "polylut", "neurallut", "dwn", "difflogic").

All parametrization and connection settings are plain keyword arguments — nothing is hidden inside nested dicts. The two names that exist on both subsystems (temperature and forward_sampling) go to the parametrization; use connection_temperature / connection_sampling to override the connection-side values.
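For example, a sketch of that keyword routing with learnable connections (hyperparameter values are illustrative, and the sampling-mode name on the connection side is assumed to share the parametrization's vocabulary):

```python
from bitlogic.layers import LogicDense

layer = LogicDense(
    in_dim=256, out_dim=1024,
    # these two go to the parametrization:
    temperature=1.0,
    forward_sampling="soft",
    # these two override the connection side only:
    connections="learnable",
    connection_temperature=0.5,
    connection_sampling="gumbel_soft",
)
```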

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `in_dim` | `int` | Number of input features on the last axis. | *required* |
| `out_dim` | `int` | Number of output neurons. | *required* |
| `parametrization` | `str` | Name of the LUT parametrization. See `bitlogic.parametrizations.list_parametrizations`. | `'light'` |
| `connections` | `str` | Routing strategy: `"fixed"` or `"learnable"`. | `'fixed'` |
| `lut_rank` | `int` | Inputs per LUT (shared by parametrization and connections). | `2` |
| `device` | `device \| str \| None` | Optional `torch.device` for initial weight / buffer allocation. | `None` |
| `forward_sampling` | `str` | Parametrization sampling mode: `"soft"`, `"hard"`, `"gumbel_soft"`, or `"gumbel_hard"`. | `'soft'` |
| `temperature` | `float` | Parametrization softness. | `1.0` |
| `weight_init` | `str` | LUT-weight init: `"random"` (default) or `"residual"` (identity-function anchor). | `'random'` |
| `residual_probability` | `float` | Strength `p` of the residual init. | `0.951` |
| `anchor_init` | `bool` | Whether to anchor the residual init to the identity. | `True` |
| `num_candidates` | `int \| None` | Candidate-pool size per neuron for `connections="learnable"`: positive values pick a fixed subset per slot; `-1` / `None` uses every input (matmul fast path). Ignored for `"fixed"` connections. | `None` |
| `init_method` | `str` | Connection init strategy: `"random-unique"` (default), `"random"`, or `"group-biased"`. | `'random-unique'` |
| `num_groups` | `int \| None` | Group count for `init_method="group-biased"`. | `None` |
| `group_bias` | `float \| None` | Group-bias strength for `init_method="group-biased"`. | `None` |
| `connection_temperature` | `float \| None` | Opt-in override for the connection module's own `temperature` (only relevant for `"learnable"`). | `None` |
| `connection_sampling` | `str \| None` | Opt-in override for the connection module's own `forward_sampling` (only relevant for `"learnable"`). | `None` |
| `**parametrization_extras` | `Any` | Parametrization-specific extras passed straight through: `degree` (PolyLUT), `hidden_width` / `depth` / `activation` (NeuralLUT), `alpha` (DwnLUT). | `{}` |
Example
```python
import torch
from bitlogic.layers import LogicDense

layer = LogicDense(
    in_dim=784, out_dim=4000,
    parametrization="warp", lut_rank=4, temperature=1.0,
    connections="fixed", init_method="random-unique",
)
y = layer(torch.randn(32, 784))
```
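A second sketch, reusing the imports above, that combines learnable connections with a parametrization-specific extra (layer sizes and the `degree` / `num_candidates` values are illustrative):

```python
layer = LogicDense(
    in_dim=784, out_dim=4000,
    parametrization="polylut", degree=3,         # `degree` is a PolyLUT extra
    connections="learnable", num_candidates=32,  # 32-input candidate pool per slot
)
y = layer(torch.randn(32, 784))
```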
Source code in bitlogic/layers/dense.py
```python
def __init__(
    self,
    in_dim: int,
    out_dim: int,
    *,
    parametrization: str = "light",
    connections: str = "fixed",
    lut_rank: int = 2,
    device: torch.device | str | None = None,
    forward_sampling: str = "soft",
    temperature: float = 1.0,
    weight_init: str = "random",
    residual_probability: float = 0.951,
    anchor_init: bool = True,
    num_candidates: int | None = None,
    init_method: str = "random-unique",
    num_groups: int | None = None,
    group_bias: float | None = None,
    connection_temperature: float | None = None,
    connection_sampling: str | None = None,
    **parametrization_extras: Any,
) -> None:
    super().__init__()
    self.in_dim = in_dim
    self.out_dim = out_dim

    self.parametrization: LUTParametrization = setup_parametrization(
        parametrization,
        lut_rank=lut_rank,
        forward_sampling=forward_sampling,
        temperature=temperature,
        weight_init=weight_init,
        residual_probability=residual_probability,
        anchor_init=anchor_init,
        **parametrization_extras,
    )

    conn_kwargs: dict[str, Any] = {"init_method": init_method}
    if num_candidates is not None:
        conn_kwargs["num_candidates"] = num_candidates
    if num_groups is not None:
        conn_kwargs["num_groups"] = num_groups
    if group_bias is not None:
        conn_kwargs["group_bias"] = group_bias
    if connection_temperature is not None:
        conn_kwargs["temperature"] = connection_temperature
    if connection_sampling is not None:
        conn_kwargs["forward_sampling"] = connection_sampling

    self.connections_name = connections
    self.connections = setup_connections(
        kind=connections,
        in_dim=in_dim,
        out_dim=out_dim,
        lut_rank=lut_rank,
        device=device,
        **conn_kwargs,
    )

    init_w = self.parametrization.init_weights(num_neurons=out_dim, device=device)
    self.weight = nn.Parameter(init_w)
```

forward

forward(x: Tensor) -> Tensor

Gather lut_rank inputs per neuron and evaluate the parametrization.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `x` | `Tensor` | Input tensor of shape `(..., in_dim)`. The last axis must match `self.in_dim`. | *required* |

Returns:

| Type | Description |
| --- | --- |
| `Tensor` | Output tensor of shape `(..., out_dim)`. |

Source code in bitlogic/layers/dense.py
```python
def forward(self, x: torch.Tensor) -> torch.Tensor:
    """Gather ``lut_rank`` inputs per neuron and evaluate the parametrization.

    Args:
        x: Input tensor of shape ``(..., in_dim)``. The last axis must
            match ``self.in_dim``.

    Returns:
        Output tensor of shape ``(..., out_dim)``.
    """
    assert x.ndim >= 2 and x.shape[-1] == self.in_dim, (
        f"expected last dim {self.in_dim}, got shape {tuple(x.shape)}"
    )
    gathered = self.connections(x)  # (batch, lut_rank, out_dim)
    return self.parametrization.forward(
        gathered, self.weight, self.training, contraction="n,bn->bn"
    )
```
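The two stages can also be invoked by hand, mirroring the body above (a sketch reusing the layer from the earlier example; the contraction string is copied verbatim from the source):

```python
x = torch.randn(32, layer.in_dim)
gathered = layer.connections(x)  # (batch, lut_rank, out_dim)
y = layer.parametrization.forward(
    gathered, layer.weight, layer.training, contraction="n,bn->bn"
)
assert y.shape == (32, layer.out_dim)
```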

get_luts_and_ids

get_luts_and_ids() -> tuple[Tensor, Tensor]

Return discrete LUT tables and input-id routing for hardware export.

Returns:

| Type | Description |
| --- | --- |
| `tuple[Tensor, Tensor]` | A tuple `(luts, ids)` where `luts` has shape `(out_dim, 2**lut_rank)` containing `{0, 1}` truth tables, and `ids` has shape `(lut_rank, out_dim)` naming which input feeds each slot of each neuron. |

Source code in bitlogic/layers/dense.py
```python
@torch.no_grad()
def get_luts_and_ids(self) -> tuple[torch.Tensor, torch.Tensor]:
    """Return discrete LUT tables and input-id routing for hardware export.

    Returns:
        A tuple ``(luts, ids)`` where ``luts`` has shape
        ``(out_dim, 2**lut_rank)`` containing ``{0, 1}`` truth tables, and
        ``ids`` has shape ``(lut_rank, out_dim)`` naming which input feeds
        each slot of each neuron.
    """
    luts = self.parametrization.get_lut(self.weight)
    if hasattr(self.connections, "indices") and self.connections.indices.ndim == 2:
        ids = self.connections.indices
    elif hasattr(self.connections, "weights") and hasattr(self.connections, "indices"):
        best = self.connections.weights.argmax(dim=0)
        lr_idx = torch.arange(
            self.parametrization.lut_rank, device=self.weight.device
        ).unsqueeze(1)
        out_idx = torch.arange(self.out_dim, device=self.weight.device).unsqueeze(0)
        ids = self.connections.indices[best, lr_idx, out_idx]
    else:
        raise RuntimeError(f"Unsupported connections type {type(self.connections).__name__}")
    return luts, ids
```
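Typical export usage (a sketch; the expected shapes follow the docstring above):

```python
luts, ids = layer.get_luts_and_ids()
assert luts.shape == (layer.out_dim, 2 ** layer.parametrization.lut_rank)
assert ids.shape == (layer.parametrization.lut_rank, layer.out_dim)
```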

extra_repr

extra_repr() -> str
Source code in bitlogic/layers/dense.py
```python
def extra_repr(self) -> str:
    return (
        f"in_dim={self.in_dim}, out_dim={self.out_dim}, "
        f"parametrization={type(self.parametrization).__name__}, "
        f"connections={self.connections_name}, "
        f"lut_rank={self.parametrization.lut_rank}"
    )
```

Abstract base

LogicBase

Bases: Module, ABC

Abstract base for LUT-style layers.

Concrete subclasses own a parametrization (what the LUT is) and connections (which inputs feed each neuron), plus a single flat weight tensor. Forward pass gathers inputs via connections then contracts via the parametrization.
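A minimal, entirely hypothetical subclass skeleton showing the contract (the import path is inferred from the source listing below; `ConstantLUT` and its rank-1 behavior are invented for illustration):

```python
import torch
from bitlogic.layers.base import LogicBase  # path inferred from "bitlogic/layers/base.py"

class ConstantLUT(LogicBase):
    """Toy subclass: every neuron ignores its input (rank-1 LUT, equal entries)."""

    def __init__(self, num_neurons: int) -> None:
        super().__init__()
        self.weight = torch.nn.Parameter(torch.zeros(num_neurons))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Broadcast one soft constant per neuron over the batch dims.
        return torch.sigmoid(self.weight).expand(*x.shape[:-1], -1)

    def get_luts_and_ids(self) -> tuple[torch.Tensor, torch.Tensor]:
        n = self.weight.shape[0]
        luts = (self.weight > 0).long().unsqueeze(1).expand(n, 2)  # (num_neurons, 2**1)
        ids = torch.zeros(1, n, dtype=torch.long)                  # (lut_rank=1, num_neurons)
        return luts, ids
```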

forward abstractmethod

forward(x: Tensor) -> Tensor

Evaluate the layer on a batch of inputs.

Source code in bitlogic/layers/base.py
```python
@abstractmethod
def forward(self, x: torch.Tensor) -> torch.Tensor:
    """Evaluate the layer on a batch of inputs."""
    ...
```

get_luts_and_ids abstractmethod

get_luts_and_ids() -> tuple[Tensor, Tensor]

Return discretized truth tables and their input-id routing.

Returns:

| Type | Description |
| --- | --- |
| `tuple[Tensor, Tensor]` | A tuple `(luts, ids)` where `luts` has shape `(num_neurons, 2**lut_rank)` with discrete `{0, 1}` entries, and `ids` has shape `(lut_rank, num_neurons)` naming which input feeds each slot of each neuron. Used for hardware export and inspection. |

Source code in bitlogic/layers/base.py
```python
@abstractmethod
def get_luts_and_ids(self) -> tuple[torch.Tensor, torch.Tensor]:
    """Return discretized truth tables and their input-id routing.

    Returns:
        A tuple ``(luts, ids)`` where ``luts`` has shape
            ``(num_neurons, 2**lut_rank)`` with discrete ``{0, 1}``
            entries, and ``ids`` has shape ``(lut_rank, num_neurons)``
            naming which input feeds each slot of each neuron. Used
            for hardware export and inspection.
    """