ANFIS Models

anfis_toolbox.model.TSKANFIS

TSKANFIS(
    input_mfs: dict[str, list[MembershipFunction]],
    rules: Sequence[Sequence[int]] | None = None,
)

Bases: _TSKANFISSharedMixin

Adaptive Neuro-Fuzzy Inference System (legacy TSK ANFIS) model.

Implements the classic 4-layer ANFIS architecture:

1. MembershipLayer: fuzzification of inputs
2. RuleLayer: rule strength computation (T-norm)
3. NormalizationLayer: weight normalization
4. ConsequentLayer: final output via a TSK model

Supports forward/backward passes for training, parameter access/update, and a simple prediction API.

Attributes:

    input_mfs (dict[str, list[MembershipFunction]]): Mapping from input name to its list of membership functions.
    membership_layer (MembershipLayer): Layer 1 (fuzzification).
    rule_layer (RuleLayer): Layer 2 (rule strength computation).
    normalization_layer (NormalizationLayer): Layer 3 (weight normalization).
    consequent_layer (ConsequentLayer): Layer 4 (final TSK output).
    input_names (list[str]): Ordered list of input variable names.
    n_inputs (int): Number of input variables (features).
    n_rules (int): Number of fuzzy rules used by the system.

Parameters:

    input_mfs (dict[str, list[MembershipFunction]]): Mapping from input name to a list of membership functions. Example: {"x1": [GaussianMF(0,1), ...], "x2": [...]}. Required.
    rules (Sequence[Sequence[int]] | None): Optional explicit set of rules, each specifying one membership index per input. When None, the Cartesian product of all membership functions is used. Defaults to None.

Examples:

>>> from anfis_toolbox.membership import GaussianMF
>>> input_mfs = {
...     'x1': [GaussianMF(0, 1), GaussianMF(1, 1)],
...     'x2': [GaussianMF(0, 1), GaussianMF(1, 1)]
... }
>>> model = TSKANFIS(input_mfs)
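
With explicit rules, only the listed antecedent combinations are created instead of the full Cartesian product. A hedged sketch (the index pairs are illustrative):

>>> rules = [(0, 0), (1, 1)]  # one MF index per input, in input order
>>> model = TSKANFIS(input_mfs, rules=rules)
>>> model.n_rules
2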
Source code in anfis_toolbox/model.py
def __init__(
    self,
    input_mfs: dict[str, list[MembershipFunction]],
    rules: Sequence[Sequence[int]] | None = None,
):
    """Initialize the ANFIS model.

    Args:
        input_mfs (dict[str, list[MembershipFunction]]): Mapping from input
            name to a list of membership functions. Example:
            ``{"x1": [GaussianMF(0,1), ...], "x2": [...]}``.
        rules: Optional explicit set of rules, each specifying one membership index per
            input. When ``None``, the Cartesian product of all membership functions is used.

    Examples:
        >>> from anfis_toolbox.membership import GaussianMF
        >>> input_mfs = {
        ...     'x1': [GaussianMF(0, 1), GaussianMF(1, 1)],
        ...     'x2': [GaussianMF(0, 1), GaussianMF(1, 1)]
        ... }
        >>> model = TSKANFIS(input_mfs)
    """
    self.input_mfs = input_mfs
    self.input_names = list(input_mfs.keys())
    self.n_inputs = len(input_mfs)

    # Calculate number of membership functions per input
    mf_per_input = [len(mfs) for mfs in input_mfs.values()]

    # Initialize all layers
    self.membership_layer = MembershipLayer(input_mfs)
    self.rule_layer = RuleLayer(self.input_names, mf_per_input, rules=rules)
    self.n_rules = self.rule_layer.n_rules
    self.normalization_layer = NormalizationLayer()
    self.consequent_layer = ConsequentLayer(self.n_rules, self.n_inputs)

__repr__

__repr__() -> str

Returns a detailed representation of the ANFIS model.

Source code in anfis_toolbox/model.py
def __repr__(self) -> str:
    """Returns detailed representation of the ANFIS model."""
    return f"TSKANFIS(n_inputs={self.n_inputs}, n_rules={self.n_rules})"

backward

backward(dL_dy: ndarray) -> None

Run a backward pass through all layers.

Propagates gradients from the output back through all layers and stores parameter gradients for a later update step.

Parameters:

    dL_dy (ndarray): Gradient of the loss w.r.t. the model output, shape (batch_size, 1). Required.
Source code in anfis_toolbox/model.py
def backward(self, dL_dy: np.ndarray) -> None:
    """Run a backward pass through all layers.

    Propagates gradients from the output back through all layers and stores
    parameter gradients for a later update step.

    Args:
        dL_dy (np.ndarray): Gradient of the loss w.r.t. the model output,
            shape ``(batch_size, 1)``.
    """
    # Backward pass through Layer 4: Consequent layer
    dL_dnorm_w, _ = self.consequent_layer.backward(dL_dy)

    # Backward pass through Layer 3: Normalization layer
    dL_dw = self.normalization_layer.backward(dL_dnorm_w)

    # Backward pass through Layer 2: Rule layer
    gradients = self.rule_layer.backward(dL_dw)

    # Backward pass through Layer 1: Membership layer
    self.membership_layer.backward(gradients)
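
For orientation, a hedged sketch of one manual gradient step. It assumes the model built in the Examples above; the mean-squared-error gradient is illustrative, and applying the stored gradients is left to a trainer:

import numpy as np

x = np.random.rand(32, model.n_inputs)      # (batch_size, n_inputs)
y_true = np.random.rand(32, 1)              # (batch_size, 1)

y_pred = model.forward(x)                   # forward pass (required before backward)
dL_dy = 2.0 * (y_pred - y_true) / len(x)    # gradient of MSE w.r.t. the output
model.backward(dL_dy)                       # stores parameter gradients in each layer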

fit

fit(
    x: ndarray,
    y: ndarray,
    epochs: int = 100,
    learning_rate: float = 0.01,
    verbose: bool = False,
    trainer: TrainerLike | None = None,
    *,
    validation_data: tuple[ndarray, ndarray] | None = None,
    validation_frequency: int = 1,
) -> TrainingHistory

Train the ANFIS model.

If a trainer is provided (see anfis_toolbox.optim), delegate training to it while preserving a scikit-learn-style fit(X, y) entry point. If no trainer is provided, a default HybridTrainer is used with the given hyperparameters.

Parameters:

    x (ndarray): Training inputs of shape (n_samples, n_inputs). Required.
    y (ndarray): Training targets of shape (n_samples, 1) for regression. Required.
    epochs (int): Number of epochs. Defaults to 100.
    learning_rate (float): Learning rate. Defaults to 0.01.
    verbose (bool): Whether to log progress. Defaults to False.
    trainer (TrainerLike | None): External trainer implementing fit(model, X, y). Defaults to None.
    validation_data (tuple[ndarray, ndarray] | None): Optional validation inputs and targets evaluated according to validation_frequency. Defaults to None.
    validation_frequency (int): Evaluate validation loss every N epochs. Defaults to 1.

Returns:

    TrainingHistory: Dictionary with "train" losses and optional "val" losses.

Source code in anfis_toolbox/model.py
def fit(
    self,
    x: np.ndarray,
    y: np.ndarray,
    epochs: int = 100,
    learning_rate: float = 0.01,
    verbose: bool = False,
    trainer: TrainerLike | None = None,
    *,
    validation_data: tuple[np.ndarray, np.ndarray] | None = None,
    validation_frequency: int = 1,
) -> TrainingHistory:
    """Train the ANFIS model.

    If a trainer is provided (see ``anfis_toolbox.optim``), delegate training
    to it while preserving a scikit-learn-style ``fit(X, y)`` entry point. If
    no trainer is provided, a default ``HybridTrainer`` is used with the given
    hyperparameters.

    Args:
        x (np.ndarray): Training inputs of shape ``(n_samples, n_inputs)``.
        y (np.ndarray): Training targets of shape ``(n_samples, 1)`` for
            regression.
        epochs (int, optional): Number of epochs. Defaults to ``100``.
        learning_rate (float, optional): Learning rate. Defaults to ``0.01``.
        verbose (bool, optional): Whether to log progress. Defaults to ``False``.
        trainer (TrainerLike | None, optional): External trainer implementing
            ``fit(model, X, y)``. Defaults to ``None``.
        validation_data (tuple[np.ndarray, np.ndarray] | None, optional): Optional
            validation inputs and targets evaluated according to ``validation_frequency``.
        validation_frequency (int, optional): Evaluate validation loss every N epochs.

    Returns:
        TrainingHistory: Dictionary with ``"train"`` losses and optional ``"val"`` losses.
    """
    if trainer is None:
        # Lazy import to avoid unnecessary dependency at module import time
        from .optim import HybridTrainer

        trainer_instance: TrainerLike = HybridTrainer(
            learning_rate=learning_rate,
            epochs=epochs,
            verbose=verbose,
        )
    else:
        trainer_instance = trainer
        if not isinstance(trainer_instance, TrainerProtocol):
            raise TypeError("trainer must implement fit(model, X, y)")

    # Delegate training to the provided or default trainer
    fit_kwargs: dict[str, Any] = {}
    if validation_data is not None:
        fit_kwargs["validation_data"] = validation_data
    if validation_frequency != 1 or validation_data is not None:
        fit_kwargs["validation_frequency"] = validation_frequency

    history = trainer_instance.fit(self, x, y, **fit_kwargs)
    if not isinstance(history, dict):
        raise TypeError("Trainer.fit must return a TrainingHistory dictionary")
    return history
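
A hedged usage sketch with the default HybridTrainer and periodic validation; the data below is synthetic and for illustration only:

import numpy as np

X = np.random.rand(200, 2)
y = np.sin(X[:, :1]) + X[:, 1:]        # regression target, shape (n_samples, 1)
X_val, y_val = X[:50], y[:50]

history = model.fit(
    X, y,
    epochs=50,
    learning_rate=0.01,
    validation_data=(X_val, y_val),
    validation_frequency=5,            # evaluate validation loss every 5 epochs
)
print(history["train"][-1])            # final training loss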

forward

forward(x: ndarray) -> np.ndarray

Run a forward pass through the model.

Parameters:

    x (ndarray): Input array of shape (batch_size, n_inputs). Required.

Returns:

    np.ndarray: Output array of shape (batch_size, 1).

Source code in anfis_toolbox/model.py
def forward(self, x: np.ndarray) -> np.ndarray:
    """Run a forward pass through the model.

    Args:
        x (np.ndarray): Input array of shape ``(batch_size, n_inputs)``.

    Returns:
        np.ndarray: Output array of shape ``(batch_size, 1)``.
    """
    normalized_weights = self.forward_antecedents(x)
    output = self.forward_consequents(x, normalized_weights)
    return output

forward_antecedents

forward_antecedents(x: ndarray) -> np.ndarray

Run a forward pass through the antecedent layers only.

Parameters:

    x (ndarray): Input array of shape (batch_size, n_inputs). Required.

Returns:

    np.ndarray: Normalized rule weights of shape (batch_size, n_rules).

Source code in anfis_toolbox/model.py
def forward_antecedents(self, x: np.ndarray) -> np.ndarray:
    """Run a forward pass through the antecedent layers only.

    Args:
        x (np.ndarray): Input array of shape ``(batch_size, n_inputs)``.

    Returns:
        np.ndarray: Normalized rule weights of shape ``(batch_size, n_rules)``.
    """
    membership_outputs = self.membership_layer.forward(x)
    rule_strengths = self.rule_layer.forward(membership_outputs)
    normalized_weights = self.normalization_layer.forward(rule_strengths)
    return normalized_weights

forward_consequents

forward_consequents(
    x: ndarray, normalized_weights: ndarray
) -> np.ndarray

Run a forward pass through the consequent layer only.

Parameters:

    x (ndarray): Input array of shape (batch_size, n_inputs). Required.
    normalized_weights (ndarray): Normalized rule weights of shape (batch_size, n_rules). Required.

Returns:

    np.ndarray: Output array of shape (batch_size, 1).

Source code in anfis_toolbox/model.py
def forward_consequents(self, x: np.ndarray, normalized_weights: np.ndarray) -> np.ndarray:
    """Run a forward pass through the consequent layer only.

    Args:
        x (np.ndarray): Input array of shape ``(batch_size, n_inputs)``.
        normalized_weights (np.ndarray): Normalized rule weights of shape
            ``(batch_size, n_rules)``.

    Returns:
        np.ndarray: Output array of shape ``(batch_size, 1)``.
    """
    output = self.consequent_layer.forward(x, normalized_weights)
    return output
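
The two-stage split is equivalent to forward, which simply chains the two calls. A sketch, assuming a model as in the Examples above:

import numpy as np

x = np.random.rand(8, model.n_inputs)

# Stage 1: antecedents -> normalized rule weights, shape (8, n_rules)
w_norm = model.forward_antecedents(x)

# Stage 2: consequents -> final output, shape (8, 1)
y_split = model.forward_consequents(x, w_norm)

assert np.allclose(y_split, model.forward(x))
assert np.allclose(w_norm.sum(axis=1), 1.0)   # normalized weights sum to one per sample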

predict

predict(x: ndarray) -> np.ndarray

Predict using the current model parameters.

Accepts Python lists and 1D or 2D arrays, coercing them to the expected shape.

Parameters:

    x (ndarray | list[float]): Input data. If 1D, must have exactly n_inputs elements; if 2D, must be (batch_size, n_inputs). Required.

Returns:

    np.ndarray: Predictions of shape (batch_size, 1).

Raises:

    ValueError: If input dimensionality or feature count does not match the model configuration.

Source code in anfis_toolbox/model.py
def predict(self, x: np.ndarray) -> np.ndarray:
    """Predict using the current model parameters.

    Accepts Python lists, 1D or 2D arrays and coerces to the expected shape.

    Args:
        x (np.ndarray | list[float]): Input data. If 1D, must have
            exactly ``n_inputs`` elements; if 2D, must be
            ``(batch_size, n_inputs)``.

    Returns:
        np.ndarray: Predictions of shape ``(batch_size, 1)``.

    Raises:
        ValueError: If input dimensionality or feature count does not match
            the model configuration.
    """
    # Accept Python lists or 1D arrays by coercing to correct 2D shape
    x_arr = np.asarray(x, dtype=float)
    if x_arr.ndim == 1:
        # Single sample; ensure feature count matches
        if x_arr.size != self.n_inputs:
            raise ValueError(f"Expected {self.n_inputs} features, got {x_arr.size} in 1D input")
        x_arr = x_arr.reshape(1, self.n_inputs)
    elif x_arr.ndim == 2:
        # Validate feature count
        if x_arr.shape[1] != self.n_inputs:
            raise ValueError(f"Expected input with {self.n_inputs} features, got {x_arr.shape[1]}")
    else:
        raise ValueError("Expected input with shape (batch_size, n_inputs)")

    return self.forward(x_arr)
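
Both accepted input shapes, sketched for the two-input model from the Examples above:

import numpy as np

# Single sample as a 1D sequence; reshaped internally to (1, n_inputs)
y_one = model.predict([0.5, 1.2])               # -> shape (1, 1)

# Batch as a 2D array of shape (batch_size, n_inputs)
y_batch = model.predict(np.random.rand(10, 2))  # -> shape (10, 1)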

anfis_toolbox.model.TSKANFISClassifier

TSKANFISClassifier(
    input_mfs: dict[str, list[MembershipFunction]],
    n_classes: int,
    random_state: int | None = None,
    rules: Sequence[Sequence[int]] | None = None,
)

Bases: _TSKANFISSharedMixin

Adaptive Neuro-Fuzzy classifier with a softmax head (TSK variant).

Aggregates per-rule linear consequents into per-class logits and trains with cross-entropy loss.

Parameters:

    input_mfs (dict[str, list[MembershipFunction]]): Mapping from input variable name to its list of membership functions. Required.
    n_classes (int): Number of output classes (>= 2). Required.
    random_state (int | None): Optional random seed for parameter init. Defaults to None.
    rules (Sequence[Sequence[int]] | None): Optional explicit rule definitions where each inner sequence lists the membership-function index per input. When None, all combinations are used. Defaults to None.

Raises:

    ValueError: If n_classes < 2.

Attributes:

    input_mfs (dict[str, list[MembershipFunction]]): Membership functions per input.
    input_names (list[str]): Input variable names.
    n_inputs (int): Number of input variables.
    n_classes (int): Number of classes.
    n_rules (int): Number of fuzzy rules (product of MFs per input when rules is None).
    membership_layer (MembershipLayer): Computes membership degrees.
    rule_layer (RuleLayer): Evaluates rule activations.
    normalization_layer (NormalizationLayer): Normalizes rule strengths.
    consequent_layer (ClassificationConsequentLayer): Computes class logits.

Source code in anfis_toolbox/model.py
def __init__(
    self,
    input_mfs: dict[str, list[MembershipFunction]],
    n_classes: int,
    random_state: int | None = None,
    rules: Sequence[Sequence[int]] | None = None,
):
    """Initialize the ANFIS model for classification.

    Args:
        input_mfs (dict[str, list[MembershipFunction]]): Mapping from input
            variable name to its list of membership functions.
        n_classes (int): Number of output classes (>= 2).
        random_state (int | None): Optional random seed for parameter init.
        rules (Sequence[Sequence[int]] | None): Optional explicit rule definitions
            where each inner sequence lists the membership-function index per input.
            When ``None``, all combinations are used.

    Raises:
        ValueError: If ``n_classes < 2``.

    Attributes:
        input_mfs (dict[str, list[MembershipFunction]]): Membership functions per input.
        input_names (list[str]): Input variable names.
        n_inputs (int): Number of input variables.
        n_classes (int): Number of classes.
        n_rules (int): Number of fuzzy rules (product of MFs per input).
        membership_layer (MembershipLayer): Computes membership degrees.
        rule_layer (RuleLayer): Evaluates rule activations.
        normalization_layer (NormalizationLayer): Normalizes rule strengths.
        consequent_layer (ClassificationConsequentLayer): Computes class logits.
    """
    if n_classes < 2:
        raise ValueError("n_classes must be >= 2")
    self.input_mfs = input_mfs
    self.input_names = list(input_mfs.keys())
    self.n_inputs = len(input_mfs)
    self.n_classes = int(n_classes)
    mf_per_input = [len(mfs) for mfs in input_mfs.values()]
    self.membership_layer = MembershipLayer(input_mfs)
    self.rule_layer = RuleLayer(self.input_names, mf_per_input, rules=rules)
    self.n_rules = self.rule_layer.n_rules
    self.normalization_layer = NormalizationLayer()
    self.consequent_layer = ClassificationConsequentLayer(
        self.n_rules, self.n_inputs, self.n_classes, random_state=random_state
    )

__repr__

__repr__() -> str

Return a string representation of the TSKANFISClassifier.

Returns:

    str: A formatted string describing the classifier configuration.

Source code in anfis_toolbox/model.py
def __repr__(self) -> str:
    """Return a string representation of the ANFISClassifier.

    Returns:
        str: A formatted string describing the classifier configuration.
    """
    return f"TSKANFISClassifier(n_inputs={self.n_inputs}, n_rules={self.n_rules}, n_classes={self.n_classes})"

backward

backward(dL_dlogits: ndarray) -> None

Backpropagate gradients through all layers.

Parameters:

    dL_dlogits (ndarray): Gradient of the loss w.r.t. logits, shape (batch_size, n_classes). Required.
Source code in anfis_toolbox/model.py
def backward(self, dL_dlogits: np.ndarray) -> None:
    """Backpropagate gradients through all layers.

    Args:
        dL_dlogits (np.ndarray): Gradient of the loss w.r.t. logits,
            shape ``(batch_size, n_classes)``.
    """
    dL_dnorm_w, _ = self.consequent_layer.backward(dL_dlogits)
    dL_dw = self.normalization_layer.backward(dL_dnorm_w)
    gradients = self.rule_layer.backward(dL_dw)
    self.membership_layer.backward(gradients)
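
As with the regressor, a hedged sketch of one manual step, assuming the clf instance built above. The gradient uses the standard softmax-minus-one-hot form for cross-entropy; SciPy's softmax is an assumed stand-in here:

import numpy as np
from scipy.special import softmax   # any equivalent softmax helper works

x = np.random.rand(16, clf.n_inputs)
y = np.random.randint(0, clf.n_classes, size=16)

logits = clf.forward(x)                       # (16, n_classes)
probs = softmax(logits, axis=1)
one_hot = np.eye(clf.n_classes)[y]            # (16, n_classes)
dL_dlogits = (probs - one_hot) / len(x)       # cross-entropy gradient w.r.t. logits
clf.backward(dL_dlogits)                      # stores parameter gradients in each layer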

fit

fit(
    X: ndarray,
    y: ndarray,
    epochs: int = 100,
    learning_rate: float = 0.01,
    verbose: bool = False,
    trainer: TrainerLike | None = None,
    loss: LossFunction | str | None = None,
    *,
    validation_data: tuple[ndarray, ndarray] | None = None,
    validation_frequency: int = 1,
) -> TrainingHistory

Fits the ANFIS model to the provided training data using the specified optimization strategy.

Parameters:

    X (ndarray): Input features for training. Required.
    y (ndarray): Target values for training. Required.
    epochs (int): Number of training epochs. Defaults to 100.
    learning_rate (float): Learning rate for the optimizer. Defaults to 0.01.
    verbose (bool): If True, prints training progress. Defaults to False.
    trainer (TrainerLike | None): Custom trainer instance. If None, uses AdamTrainer. Defaults to None.
    loss (LossFunction | str | None): Loss function to use. If None, defaults to cross-entropy for classification. Defaults to None.
    validation_data (tuple[ndarray, ndarray] | None): Optional validation dataset. Defaults to None.
    validation_frequency (int): Evaluate validation metrics every N epochs. Defaults to 1.

Returns:

    TrainingHistory: Dictionary containing "train" and optionally "val" loss curves.

Source code in anfis_toolbox/model.py
def fit(
    self,
    X: np.ndarray,
    y: np.ndarray,
    epochs: int = 100,
    learning_rate: float = 0.01,
    verbose: bool = False,
    trainer: TrainerLike | None = None,
    loss: LossFunction | str | None = None,
    *,
    validation_data: tuple[np.ndarray, np.ndarray] | None = None,
    validation_frequency: int = 1,
) -> TrainingHistory:
    """Fits the ANFIS model to the provided training data using the specified optimization strategy.

    Parameters:
        X (np.ndarray): Input features for training.
        y (np.ndarray): Target values for training.
        epochs (int, optional): Number of training epochs. Defaults to 100.
        learning_rate (float, optional): Learning rate for the optimizer. Defaults to 0.01.
        verbose (bool, optional): If True, prints training progress. Defaults to False.
        trainer (TrainerLike | None, optional): Custom trainer instance. If None,
            uses AdamTrainer. Defaults to None.
        loss (LossFunction, str, or None, optional): Loss function to use.
            If None, defaults to cross-entropy for classification.
        validation_data (tuple[np.ndarray, np.ndarray] | None, optional): Optional validation dataset.
        validation_frequency (int, optional): Evaluate validation metrics every N epochs.

    Returns:
        TrainingHistory: Dictionary containing ``"train"`` and optionally ``"val"`` loss curves.
    """
    if loss is None:
        resolved_loss = resolve_loss("cross_entropy")
    else:
        resolved_loss = resolve_loss(loss)

    if trainer is None:
        from .optim import AdamTrainer

        trainer_instance: TrainerLike = AdamTrainer(
            learning_rate=learning_rate,
            epochs=epochs,
            verbose=verbose,
            loss=resolved_loss,
        )
    else:
        trainer_instance = trainer
        if not isinstance(trainer_instance, TrainerProtocol):
            raise TypeError("trainer must implement fit(model, X, y)")
        if hasattr(trainer_instance, "loss"):
            trainer_instance.loss = resolved_loss

    fit_kwargs: dict[str, Any] = {}
    if validation_data is not None:
        fit_kwargs["validation_data"] = validation_data
    if validation_frequency != 1 or validation_data is not None:
        fit_kwargs["validation_frequency"] = validation_frequency

    history = trainer_instance.fit(self, X, y, **fit_kwargs)
    if not isinstance(history, dict):
        raise TypeError("Trainer.fit must return a TrainingHistory dictionary")
    return history
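
A hedged usage sketch with the default AdamTrainer and cross-entropy loss, assuming the trainer accepts integer class labels:

import numpy as np

X = np.random.rand(300, 2)
y = np.random.randint(0, 3, size=300)   # integer class labels in [0, n_classes)

history = clf.fit(X, y, epochs=50, learning_rate=0.01, verbose=True)
print(history["train"][-1])             # final training (cross-entropy) loss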

forward

forward(x: ndarray) -> np.ndarray

Run a forward pass through the classifier.

Parameters:

    x (ndarray): Input array of shape (batch_size, n_inputs). Required.

Returns:

    np.ndarray: Logits of shape (batch_size, n_classes).

Source code in anfis_toolbox/model.py
def forward(self, x: np.ndarray) -> np.ndarray:
    """Run a forward pass through the classifier.

    Args:
        x (np.ndarray): Input array of shape ``(batch_size, n_inputs)``.

    Returns:
        np.ndarray: Logits of shape ``(batch_size, n_classes)``.
    """
    membership_outputs = self.membership_layer.forward(x)
    rule_strengths = self.rule_layer.forward(membership_outputs)
    normalized_weights = self.normalization_layer.forward(rule_strengths)
    logits = self.consequent_layer.forward(x, normalized_weights)  # (b, k)
    return logits

predict

predict(x: ndarray) -> np.ndarray

Predict the most likely class label for each sample.

Parameters:

    x (ndarray | list[float]): Inputs. If 1D, must have exactly n_inputs elements; if 2D, must be (batch_size, n_inputs). Required.

Returns:

    np.ndarray: Predicted labels of shape (batch_size,).

Source code in anfis_toolbox/model.py
def predict(self, x: np.ndarray) -> np.ndarray:
    """Predict the most likely class label for each sample.

    Args:
        x (np.ndarray | list[float]): Inputs. If 1D, must have exactly
            ``n_inputs`` elements; if 2D, must be ``(batch_size, n_inputs)``.

    Returns:
        np.ndarray: Predicted labels of shape ``(batch_size,)``.
    """
    proba = self.predict_proba(x)
    return np.argmax(proba, axis=1)

predict_proba

predict_proba(x: ndarray) -> np.ndarray

Predict per-class probabilities for the given inputs.

Parameters:

    x (ndarray | list[float]): Inputs. If 1D, must have exactly n_inputs elements; if 2D, must be (batch_size, n_inputs). Required.

Returns:

    np.ndarray: Probabilities of shape (batch_size, n_classes).

Raises:

    ValueError: If input dimensionality or feature count is invalid.

Source code in anfis_toolbox/model.py
def predict_proba(self, x: np.ndarray) -> np.ndarray:
    """Predict per-class probabilities for the given inputs.

    Args:
        x (np.ndarray | list[float]): Inputs. If 1D, must have exactly
            ``n_inputs`` elements; if 2D, must be ``(batch_size, n_inputs)``.

    Returns:
        np.ndarray: Probabilities of shape ``(batch_size, n_classes)``.

    Raises:
        ValueError: If input dimensionality or feature count is invalid.
    """
    x_arr = np.asarray(x, dtype=float)
    if x_arr.ndim == 1:
        if x_arr.size != self.n_inputs:
            raise ValueError(f"Expected {self.n_inputs} features, got {x_arr.size} in 1D input")
        x_arr = x_arr.reshape(1, self.n_inputs)
    elif x_arr.ndim == 2:
        if x_arr.shape[1] != self.n_inputs:
            raise ValueError(f"Expected input with {self.n_inputs} features, got {x_arr.shape[1]}")
    else:
        raise ValueError("Expected input with shape (batch_size, n_inputs)")
    logits = self.forward(x_arr)
    return softmax(logits, axis=1)
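
The relationship between the two prediction methods, sketched for the clf instance above:

import numpy as np

x = np.random.rand(5, clf.n_inputs)
proba = clf.predict_proba(x)            # (5, n_classes); each row sums to 1
labels = clf.predict(x)                 # (5,); argmax over the same probabilities

assert np.allclose(proba.sum(axis=1), 1.0)
assert np.array_equal(labels, proba.argmax(axis=1))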