# Layers

Antecedent and consequent layers for highFIS fuzzy models.

This module provides `torch.nn.Module` building blocks for the TSK antecedent and consequent pipeline. The layers here are used by the concrete TSK model variants in `highfis.models`.

## Layer overview

Membership:

- `MembershipLayer`

Rule aggregation:

- `RuleLayer`
- `AdaSoftminRuleLayer`
- `DGALETSKRuleLayer`
- `DGTSKRuleLayer`
- `AdaptiveDombiRuleLayer`

Consequent heads:

- `ClassificationConsequentLayer`
- `RegressionConsequentLayer`
- `GatedClassificationConsequentLayer`
- `GatedClassificationZeroOrderConsequentLayer`
- `GatedRegressionConsequentLayer`
- `GatedRegressionZeroOrderConsequentLayer`

Gate activations:

- `gate1`, `gate2`, `gate3`, `gate4`, `gate_m`
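As a point of reference for how these pieces fit together, here is a standalone first-order TSK forward pass in plain torch. This is an illustrative sketch, not the highfis API: Gaussian memberships, a cartesian rule base, product-T-norm firing strengths, normalization, and rule-wise linear consequents.

```python
import torch

torch.manual_seed(0)
batch, n_inputs, n_mfs = 4, 2, 3

x = torch.rand(batch, n_inputs)

# Gaussian membership degrees per input: shape (batch, n_inputs, n_mfs).
# Shared centers/width across inputs, purely for illustration.
centers = torch.linspace(0.0, 1.0, n_mfs)
sigma = 0.3
mu = torch.exp(-((x.unsqueeze(-1) - centers) ** 2) / (2 * sigma**2))

# Cartesian rule base: one rule per combination of MF indices.
idx = torch.cartesian_prod(torch.arange(n_mfs), torch.arange(n_mfs))  # (n_rules, n_inputs)
n_rules = idx.shape[0]

# Product T-norm firing strengths, then normalization to weights.
f = mu[:, 0, idx[:, 0]] * mu[:, 1, idx[:, 1]]       # (batch, n_rules)
w = f / f.sum(dim=1, keepdim=True)

# First-order (linear) consequents: y_r = x @ a_r + b_r, mixed by weights.
a = torch.randn(n_rules, n_inputs)
b = torch.randn(n_rules)
y_r = x @ a.T + b                                    # (batch, n_rules)
y = (w * y_r).sum(dim=1)                             # (batch,)
```

The layers below factor exactly these stages: `MembershipLayer` produces `mu`, a `RuleLayer` variant produces `f`, and a consequent head combines `w` with per-rule models.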
## AdaSoftminRuleLayer

Bases: `RuleLayer`

Compute adaptive Ada-softmin firing strengths for each rule.

Initialize Ada-softmin rule layer.

Source code in `highfis/layers.py`

### forward

Compute Ada-softmin rule strengths from membership outputs.

Source code in `highfis/layers.py`
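The softmin underlying this layer is a weighted average that approaches the true minimum as its sharpness parameter grows; the adaptive layer learns that parameter, whereas this scalar sketch (an illustration, not the highfis implementation) fixes it:

```python
import math

def softmin(values, q):
    """Soft minimum: sum(a_i * exp(-q * a_i)) / sum(exp(-q * a_i)).

    As q -> infinity this approaches min(values). The "adaptive" layer
    learns q; here it is a fixed float for illustration.
    """
    weights = [math.exp(-q * v) for v in values]
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

degrees = [0.9, 0.2, 0.7]       # membership degrees of one rule's antecedents
soft = softmin(degrees, q=20.0)  # close to min(degrees) for large q
```

Unlike a hard `min`, the softmin passes gradients to every antecedent, which is what makes the sharpness parameter trainable in the first place.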
## AdaptiveDombiRuleLayer

Bases: `RuleLayer`

Compute adaptive Dombi firing strengths with per-rule lambda parameters.

Initialize adaptive Dombi rule layer.

Source code in `highfis/layers.py`

### lambdas (property)

Return strictly positive per-rule lambda values.

### forward

Compute adaptive Dombi firing strengths for each rule.

Source code in `highfis/layers.py`
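The Dombi T-norm this layer builds on has the closed form below; the layer learns one lambda per rule (kept strictly positive via the `lambdas` property), while this scalar sketch uses a fixed value for illustration:

```python
import math

def dombi_tnorm(degrees, lam):
    """Dombi T-norm with parameter lam > 0:

        T(a_1..a_n) = 1 / (1 + (sum_i ((1 - a_i) / a_i) ** lam) ** (1 / lam))

    Degrees must lie in the open interval (0, 1); lam -> infinity
    recovers the hard minimum.
    """
    s = sum(((1.0 - a) / a) ** lam for a in degrees)
    return 1.0 / (1.0 + s ** (1.0 / lam))

t = dombi_tnorm([0.8, 0.6], lam=2.0)  # below min(0.8, 0.6), as any T-norm must be
```

Tuning lambda therefore interpolates between a very soft conjunction (small lambda) and a near-minimum (large lambda), which is exactly the knob the adaptive layer exposes per rule.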
## ClassificationConsequentLayer

Bases: `nn.Module`

Linear TSK consequent layer for classification logits.

Initialize consequent parameters for classification logits.

Source code in `highfis/layers.py`

### forward

Compute class logits from inputs and normalized rule strengths.

Source code in `highfis/layers.py`
## DGALETSKRuleLayer

Bases: `RuleLayer`

Compute adaptive Ln-Exp softmin firing strengths with antecedent feature gates.

Initialize DGALETSK rule layer.

Source code in `highfis/layers.py`

### alpha (property)

Return positive adaptive alpha parameter.

### forward

Compute adaptive Ln-Exp rule strengths from membership outputs.

Source code in `highfis/layers.py`
## DGTSKRuleLayer

Bases: `RuleLayer`

Compute DG-TSK antecedent strengths with learned feature gates.

Initialize DGTSK rule layer.

Source code in `highfis/layers.py`

### forward

Compute DGTSK rule strengths from membership outputs.

Source code in `highfis/layers.py`
## GatedClassificationConsequentLayer

Bases: `nn.Module`

Gated TSK consequent layer for classification logits.

Supports four training modes, three of which match the FSRE-AdaTSK paper protocol:

- `"fs"`: only feature gates $M(\lambda_d)$ are active (Phase 1, feature selection, eq. 21).
- `"re"`: only rule gates $M(\theta_r)$ are active (Phase 2, rule extraction, eq. 22).
- `"finetune"`: no gates; plain linear TSK consequent (Phase 3, eq. 5).
- `"both"` (default): both gate families applied simultaneously.

When `shared_lambda=True` the feature gate vector has shape `(n_inputs,)` and is shared across all rules (FSRE-AdaTSK, eq. 21). When `shared_lambda=False` (default) each rule has its own `(n_inputs,)` gate vector, stored as `(n_rules, n_inputs)` (DG-ALETSK).
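The mode switching can be sketched in plain torch. Everything below is hypothetical scaffolding for illustration only: the real gate activations (`gate1`..`gate_m`) and parameter layout live in `highfis/layers.py`, and a sigmoid stands in here for the gate function $M$.

```python
import torch

torch.manual_seed(0)
batch, n_inputs, n_rules, n_classes = 4, 3, 5, 2

x = torch.rand(batch, n_inputs)
w = torch.softmax(torch.rand(batch, n_rules), dim=1)  # normalized rule strengths

# Hypothetical gate and consequent parameters (randomly initialized).
lam = torch.randn(n_inputs)                  # shared feature gates (shared_lambda=True case)
theta = torch.randn(n_rules)                 # per-rule gates
A = torch.randn(n_rules, n_inputs, n_classes)
b = torch.randn(n_rules, n_classes)

def gated_logits(mode):
    # "fs"/"both": mask inputs with feature gates; "re"/"both": mask rule strengths.
    xg = x * torch.sigmoid(lam) if mode in ("fs", "both") else x
    wg = w * torch.sigmoid(theta) if mode in ("re", "both") else w
    per_rule = torch.einsum("bi,ric->brc", xg, A) + b  # (batch, n_rules, n_classes)
    return torch.einsum("br,brc->bc", wg, per_rule)    # mix rule heads by strength

logits = gated_logits("both")
```

In `"finetune"` mode both masks are skipped, which reduces the computation to the plain linear TSK consequent of eq. 5.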
Initialize gated consequent parameters for classification logits.

Source code in `highfis/layers.py`

### forward

Compute gated class logits from inputs and normalized rule strengths.

Source code in `highfis/layers.py`
## GatedClassificationZeroOrderConsequentLayer

Bases: `nn.Module`

Gated zero-order TSK consequent layer for classification logits.

Initialize zero-order gated consequent parameters for classification logits.

Source code in `highfis/layers.py`

### forward

Compute gated class logits from normalized rule strengths.

Source code in `highfis/layers.py`
## GatedRegressionConsequentLayer

Bases: `nn.Module`

Gated TSK consequent layer for scalar regression output.

Supports the same `mode` / `shared_lambda` protocol as `GatedClassificationConsequentLayer`.

Initialize gated consequent parameters for regression.

Source code in `highfis/layers.py`

### forward

Compute gated regression output from inputs and normalized rule strengths.

Source code in `highfis/layers.py`
## GatedRegressionZeroOrderConsequentLayer

Bases: `nn.Module`

Gated zero-order TSK consequent layer for regression.

Initialize zero-order gated consequent parameters for regression.

Source code in `highfis/layers.py`

### forward

Compute gated regression output from normalized rule strengths.

Source code in `highfis/layers.py`
## MembershipLayer

Bases: `nn.Module`

Apply membership functions for each input feature.

Evaluates each input variable against its sequence of `highfis.memberships.MembershipFunction` objects and returns a dictionary of per-variable membership tensors.

Attributes:

| Name | Type | Description |
|---|---|---|
| `input_names` | `list[str]` | Ordered list of input feature names. |
| `n_inputs` | `int` | Number of input features. |
| `mf_per_input` | `list[int]` | Number of membership functions per feature. |
| `input_mfs` | | Mapping from each input name to its membership functions. |

Initialize membership layer with input-to-membership mapping.

Source code in `highfis/layers.py`
### forward

Compute membership outputs for each input variable.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Input tensor of shape `(batch_size, n_inputs)`. | required |

Returns:

| Type | Description |
|---|---|
| `dict[str, Tensor]` | Dictionary mapping each input name to a membership tensor of shape `(batch_size, n_mfs)`, where `n_mfs` is that input's membership-function count. |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If `x` is not 2-dimensional or has the wrong number of columns. |

Source code in `highfis/layers.py`
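The returned structure can be illustrated with hand-rolled Gaussian membership functions. The real layer delegates to `highfis.memberships.MembershipFunction` objects; the function and names below are stand-ins:

```python
import torch

torch.manual_seed(0)
x = torch.rand(4, 2)  # (batch_size, n_inputs)

def gaussian(x_col, centers, sigma=0.25):
    # Degree of each sample against every center: (batch_size, n_mfs).
    return torch.exp(-((x_col.unsqueeze(1) - centers) ** 2) / (2 * sigma**2))

# One tensor per named input; MF counts may differ across inputs.
outputs = {
    "x1": gaussian(x[:, 0], torch.tensor([0.0, 0.5, 1.0])),  # (4, 3)
    "x2": gaussian(x[:, 1], torch.tensor([0.2, 0.8])),       # (4, 2)
}
```

Keying the result by input name (rather than stacking into one tensor) is what lets each feature carry a different number of membership functions.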
## RegressionConsequentLayer

Bases: `nn.Module`

Linear TSK consequent layer for scalar regression output.

Initialize consequent parameters for regression.

Source code in `highfis/layers.py`

### forward

Compute scalar regression output from inputs and normalized rule strengths.

Source code in `highfis/layers.py`
## RuleLayer

Bases: `nn.Module`

Compute firing strengths from membership degrees.

Generates a rule base from the specified strategy and aggregates per-input membership degrees into scalar firing strengths using a configurable T-norm.

Supported rule-base strategies (`rule_base` parameter):

- `"cartesian"` / `"fuco"`: full combinatorial rule base.
- `"coco"`: same-index rule base; requires identical MF counts across all inputs.
- `"en"`: enhanced FRB; requires identical MF counts across all inputs.
- `"custom"`: user-supplied rule index sequences.
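The first two strategies can be sketched with the standard library; the actual generation, including `"en"` and `"custom"`, lives in `highfis/layers.py`:

```python
from itertools import product

mf_per_input = [2, 2, 2]  # MF counts per input (identical, so "coco" is valid)

# "cartesian"/"fuco": one rule per combination of MF indices across inputs.
cartesian_rules = list(product(*(range(n) for n in mf_per_input)))

# "coco": the i-th rule pairs the i-th membership function of every input,
# which is why it requires identical MF counts across all inputs.
coco_rules = [tuple([i] * len(mf_per_input)) for i in range(mf_per_input[0])]
```

Note the combinatorial cost: the cartesian rule base grows as the product of the MF counts, while `"coco"` stays linear in the shared count.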
Attributes:

| Name | Type | Description |
|---|---|---|
| `rules` | | List of rule index tuples, one per rule. |
| `n_rules` | `int` | Number of rules in the rule base. |

Initialize rule generation and firing-strength aggregation strategy.

Source code in `highfis/layers.py`
### forward

Compute rule firing strengths from membership outputs.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `membership_outputs` | `dict[str, Tensor]` | Dictionary returned by `MembershipLayer.forward`. | required |

Returns:

| Type | Description |
|---|---|
| `Tensor` | Firing-strength tensor of shape `(batch_size, n_rules)`. |

Raises:

| Type | Description |
|---|---|
| `KeyError` | If a required input name is missing from `membership_outputs`. |

Source code in `highfis/layers.py`
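With a product T-norm, this aggregation reduces to gathering one membership column per input for each rule's index tuple and multiplying. A standalone sketch (not the highfis implementation) over a two-input membership dictionary:

```python
import torch

torch.manual_seed(0)
batch = 4

# Membership outputs as this layer consumes them: one (batch, n_mfs) tensor
# per named input, with possibly different MF counts per input.
membership_outputs = {
    "x1": torch.rand(batch, 3),
    "x2": torch.rand(batch, 2),
}

# Each rule picks one MF index per input.
rules = [(0, 0), (1, 1), (2, 0)]

# Product T-norm: multiply the selected membership degree of every input.
strengths = torch.stack(
    [
        membership_outputs["x1"][:, i] * membership_outputs["x2"][:, j]
        for i, j in rules
    ],
    dim=1,
)  # (batch, n_rules)
```

A missing key in the dictionary would raise `KeyError` at the lookup, mirroring the error condition documented above.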