Layers API¶
BitLogic ships a single layer: LogicDense.
Every LUT network is built by stacking LogicDense instances between an
encoder and a head.
Internally each layer composes a parametrization (what the LUT is) with a connections module (which inputs feed each neuron). The trainable state is a single flat weight tensor whose shape depends on the chosen parametrization.
LogicDense ¶
```python
LogicDense(
    in_dim: int,
    out_dim: int,
    *,
    parametrization: str = 'light',
    connections: str = 'fixed',
    lut_rank: int = 2,
    device: device | str | None = None,
    forward_sampling: str = 'soft',
    temperature: float = 1.0,
    weight_init: str = 'random',
    residual_probability: float = 0.951,
    anchor_init: bool = True,
    num_candidates: int | None = None,
    init_method: str = 'random-unique',
    num_groups: int | None = None,
    group_bias: float | None = None,
    connection_temperature: float | None = None,
    connection_sampling: str | None = None,
    **parametrization_extras: Any,
)
```
Bases: LogicBase
Fully-connected LUT layer.
Forward pass = connections gather → parametrization contract. The two
connection patterns ("fixed" / "learnable") are picked via the
connections string. The node's truth-table shape
is picked via the parametrization string ("light", "warp",
"linear", "polylut", "neurallut", "dwn",
"difflogic").
All parametrization and connection settings are plain keyword arguments —
nothing is hidden inside nested dicts. The two names that exist on both
subsystems (temperature and forward_sampling) go to the
parametrization; use connection_temperature / connection_sampling
to override the connection-side values.
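The keyword routing described above can be sketched with a small helper. Note that `split_kwargs` is purely illustrative and not part of the bitlogic API; how the connection subsystem fills in its own defaults is an assumption here.

```python
def split_kwargs(temperature=1.0, forward_sampling="soft",
                 connection_temperature=None, connection_sampling=None):
    """Hypothetical helper: the two shared names go to the
    parametrization; the connection side is only touched when the
    connection_* override is passed explicitly."""
    param_cfg = {"temperature": temperature,
                 "forward_sampling": forward_sampling}
    conn_cfg = {}
    if connection_temperature is not None:
        conn_cfg["temperature"] = connection_temperature
    if connection_sampling is not None:
        conn_cfg["sampling"] = connection_sampling
    return param_cfg, conn_cfg

# temperature=0.5 reaches the parametrization; the connection only
# sees a temperature because connection_temperature was passed.
param_cfg, conn_cfg = split_kwargs(temperature=0.5, connection_temperature=2.0)
print(param_cfg["temperature"], conn_cfg["temperature"])  # 0.5 2.0
```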
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `in_dim` | `int` | Number of input features on the last axis. | *required* |
| `out_dim` | `int` | Number of output neurons. | *required* |
| `parametrization` | `str` | Name of the LUT parametrization (`"light"`, `"warp"`, `"linear"`, `"polylut"`, `"neurallut"`, `"dwn"`, `"difflogic"`). | `'light'` |
| `connections` | `str` | Routing strategy — `"fixed"` or `"learnable"`. | `'fixed'` |
| `lut_rank` | `int` | Inputs per LUT (shared by parametrization and connection). | `2` |
| `device` | `device \| str \| None` | Optional device for the layer's weights. | `None` |
| `forward_sampling` | `str` | Parametrization sampling mode. | `'soft'` |
| `temperature` | `float` | Parametrization softness. | `1.0` |
| `weight_init` | `str` | LUT-weight init strategy. | `'random'` |
| `residual_probability` | `float` | Strength of the residual init. | `0.951` |
| `anchor_init` | `bool` | Whether to anchor the residual init to identity. | `True` |
| `num_candidates` | `int \| None` | Candidate pool size per neuron. | `None` |
| `init_method` | `str` | Connection init strategy. | `'random-unique'` |
| `num_groups` | `int \| None` | Group count. | `None` |
| `group_bias` | `float \| None` | Group-bias strength. | `None` |
| `connection_temperature` | `float \| None` | Opt-in override for the connection's own `temperature`. | `None` |
| `connection_sampling` | `str \| None` | Opt-in override for the connection's own `forward_sampling`. | `None` |
| `**parametrization_extras` | `Any` | Parametrization-specific extras passed straight through. | `{}` |
Example
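A minimal NumPy sketch of the mechanism (gather via fixed connections, then contract a soft rank-2 truth table). This illustrates the forward pass conceptually and is not bitlogic's implementation; all names here are local to the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
in_dim, out_dim, lut_rank = 8, 4, 2

# Fixed connections: which input feeds each of the 2 slots of each neuron.
ids = rng.integers(0, in_dim, size=(out_dim, lut_rank))   # (out_dim, 2)
# One 2-input truth table per neuron: 4 soft entries in [0, 1].
luts = rng.random(size=(out_dim, 2 ** lut_rank))          # (out_dim, 4)

def forward(x):
    """Gather the routed inputs, then contract the soft truth table by
    multilinear interpolation over its 2^rank corners (soft sampling)."""
    a, b = x[..., ids[:, 0]], x[..., ids[:, 1]]           # each (..., out_dim)
    # Corner weights for inputs (a, b), in the order 00, 01, 10, 11.
    w = np.stack([(1 - a) * (1 - b), (1 - a) * b,
                  a * (1 - b), a * b], axis=-1)           # (..., out_dim, 4)
    return (w * luts).sum(axis=-1)                        # (..., out_dim)

y = forward(rng.random(in_dim))
print(y.shape)  # (4,)
```

At binary inputs the interpolation collapses to a plain table lookup, which is what makes the layer exportable to hardware LUTs.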
Source code in bitlogic/layers/dense.py
forward ¶
Gather `lut_rank` inputs per neuron and evaluate the parametrization.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Input tensor of shape `(..., in_dim)`. | *required* |

Returns:

| Type | Description |
|---|---|
| `Tensor` | Output tensor of shape `(..., out_dim)`. |
Source code in bitlogic/layers/dense.py
get_luts_and_ids ¶
Return discrete LUT tables and input-id routing for hardware export.
Returns:

| Type | Description |
|---|---|
| `tuple[Tensor, Tensor]` | A tuple of discrete LUT tables and the input ids feeding each slot of each neuron. |
Source code in bitlogic/layers/dense.py
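Conceptually, export works by hardening the trained soft tables and returning them with the routing. The sketch below uses NumPy and a simple 0.5 threshold; the actual discretization rule in bitlogic may differ, and `soft_luts` is a stand-in name.

```python
import numpy as np

rng = np.random.default_rng(1)
out_dim, lut_rank, in_dim = 4, 2, 8
soft_luts = rng.random((out_dim, 2 ** lut_rank))    # trained soft tables in [0, 1]
ids = rng.integers(0, in_dim, size=(out_dim, lut_rank))  # per-slot input routing

def get_luts_and_ids(soft_luts, ids):
    """Round each soft entry to {0, 1} to obtain hardware truth tables,
    and return them alongside the unchanged input-id routing."""
    hard = (soft_luts >= 0.5).astype(np.uint8)
    return hard, ids

luts, routed = get_luts_and_ids(soft_luts, ids)
```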
extra_repr ¶
Source code in bitlogic/layers/dense.py
Abstract base¶
LogicBase ¶
Bases: Module, ABC
Abstract base for LUT-style layers.
Concrete subclasses own a parametrization (what the LUT is) and
connections (which inputs feed each neuron), plus a single flat
weight tensor. Forward pass gathers inputs via connections then
contracts via the parametrization.
forward abstractmethod ¶
get_luts_and_ids abstractmethod ¶
Return discretized truth tables and their input-id routing.
Returns:

| Type | Description |
|---|---|
| `tuple[Tensor, Tensor]` | A tuple of discretized truth tables and their input-id routing. |