
Estimators

Scikit-learn compatible estimator wrappers for highFIS TSK models.

This module provides high-level, sklearn-compatible wrappers for every TSK variant implemented in highfis.models. Each estimator follows the standard fit / predict / score interface and handles membership-function initialization, model construction, and the training loop internally.

Base Classes

Two abstract base classes implement the shared logic:

  • _BaseClassifierEstimator: For classification tasks.
  • _BaseRegressorEstimator: For regression tasks.

Model Family Overview

Concrete estimators cover the following model families:

TSK: Vanilla Takagi-Sugeno-Kang model.

Implemented by:
    `TSKClassifierEstimator`, `TSKRegressorEstimator`

HTSK: High-dimensional TSK via averaged defuzzification.

Implemented by:
    `HTSKClassifierEstimator`, `HTSKRegressorEstimator`

LogTSK: Inverse-log normalization of log-domain rule weights for high-dimensional data.

Implemented by:
    `LogTSKClassifierEstimator`, `LogTSKRegressorEstimator`

DombiTSK: Dombi T-norm based TSK.

Implemented by:
    `DombiTSKClassifierEstimator`, `DombiTSKRegressorEstimator`

ADMTSK: Adaptive Dombi TSK with Composite Gaussian membership functions.

Implemented by:
    `ADMTSKClassifierEstimator`, `ADMTSKRegressorEstimator`

AYATSK: Adaptive Yager T-norm based TSK.

Implemented by:
    `AYATSKClassifierEstimator`, `AYATSKRegressorEstimator`

AdaTSK: Adaptive softmin based TSK.

Implemented by:
    `AdaTSKClassifierEstimator`, `AdaTSKRegressorEstimator`

ADPTSK: Adaptive double-parameter softmin based TSK with Gaussian PIMF.

Implemented by:
    `ADPTSKClassifierEstimator`, `ADPTSKRegressorEstimator`

FSRE-AdaTSK: AdaTSK with feature-selection and rule-extraction gates.

Implemented by:
    `FSREAdaTSKClassifierEstimator`, `FSREAdaTSKRegressorEstimator`

DG-ALETSK: Double-gate adaptive Ln-Exp softmin TSK.

Implemented by:
    `DGALETSKClassifierEstimator`, `DGALETSKRegressorEstimator`

DG-TSK: Double-gate TSK with point-based FRB.

Implemented by:
    `DGTSKClassifierEstimator`, `DGTSKRegressorEstimator`

HDFIS: High-dimensional inference with both product DMF and minimum frozen-antecedent variants.

Implemented by:
    `HDFISProdClassifierEstimator`, `HDFISProdRegressorEstimator`,
    `HDFISMinClassifierEstimator`, `HDFISMinRegressorEstimator`

Membership Function Initialization

The following strategies are available for initializing membership functions:

  • mf_init="kmeans" (default): K-means cluster centroids are used as membership function centers. The sigma values are derived from within-cluster spread and scaled by sigma_scale. This produces a CoCo rule base by default.

  • mf_init="grid": Regular grid placement controlled by InputConfig. This produces a Cartesian rule base by default.

Notes
  • All estimators follow the scikit-learn API design.
  • Model construction and training are fully encapsulated within the estimator interface.
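
Because the estimators follow the scikit-learn API, they should interoperate with standard model-selection utilities. A minimal sketch on synthetic data (the dataset and hyperparameter values are illustrative only):

from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score

from highfis import ADMTSKClassifierEstimator

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
clf = ADMTSKClassifierEstimator(epochs=5, random_state=0)

# score() returns accuracy, so each fold reports classification accuracy.
print(cross_val_score(clf, X, y, cv=3))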

ADMTSKClassifierEstimator

Bases: _BaseClassifierEstimator

ADMTSK classifier estimator with Composite GMF and adaptive Dombi lambda.

ADMTSK is an adaptive Dombi TSK fuzzy system designed for high-dimensional inference. It combines a Dombi T-norm antecedent with a positive lower-bound Composite Gaussian membership function (CGMF) and normalized first-order consequents.

Reference

G. Xue, L. Hu, J. Wang and S. Ablameyko, "ADMTSK: A High-Dimensional Takagi-Sugeno-Kang Fuzzy System Based on Adaptive Dombi T-Norm," in IEEE Transactions on Fuzzy Systems, vol. 33, no. 6, pp. 1767-1780, June 2025, doi: 10.1109/TFUZZ.2025.3535640.

Example
from highfis import ADMTSKClassifierEstimator

clf = ADMTSKClassifierEstimator()
clf.fit(X_train, y_train)
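
A slightly fuller sketch on synthetic data (dataset and hyperparameter values are illustrative, not recommendations):

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

from highfis import ADMTSKClassifierEstimator

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = ADMTSKClassifierEstimator(n_mfs=5, epochs=20, random_state=0)
clf.fit(X_train, y_train)

print(clf.score(X_test, y_test))      # accuracy
print(clf.predict_proba(X_test[:3]))  # per-class probabilities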

Initialize an ADMTSK classifier estimator.

Parameters:

  • input_configs (list[InputConfig] | None, default None): Optional list of per-feature input configurations.
  • n_mfs (int, default 5): Number of membership functions per input when using mf_init="kmeans" or mf_init="grid".
  • mf_init (str, default 'kmeans'): Initialisation strategy for MFs, either "kmeans" or "grid".
  • sigma_scale (float | str, default 1.0): Scale factor used to initialise Gaussian MF sigma values.
  • random_state (int | None, default None): Random seed for MF initialisation and weights.
  • epochs (int, default 10): Maximum number of training epochs.
  • learning_rate (float, default 0.01): Learning rate for the optimizer.
  • verbose (bool | int, default False): Verbosity level for training output.
  • rule_base (str | None, default None): Rule base strategy override, typically "coco" or "cartesian".
  • batch_size (int | None, default 512): Mini-batch size for training.
  • shuffle (bool, default True): Whether to shuffle training data each epoch.
  • ur_weight (float, default 0.0): Uniform-rule regularisation weight.
  • ur_target (float | None, default None): Target average rule activation for uniform regularisation.
  • consequent_batch_norm (bool, default False): If True, apply batch normalization to consequent inputs.
  • pfrb_max_rules (int | None, default None): Maximum number of rules for point-based FRB.
  • patience (int | None, default 20): Early stopping patience. Use None to disable.
  • restore_best (bool, default True): If True, restore the best validation weights.
  • validation_data (tuple[Any, Any] | None, default None): Validation dataset used for early stopping.
  • weight_decay (float, default 1e-08): Weight decay applied during training.
  • adaptive (bool, default True): If True, use adaptive lambda selection for Dombi T-norm.
  • lambda_ (float, default 1.0): Fixed Dombi parameter when adaptive is False.
  • lower_bound (float, default 1.0 / math.e): Lower bound used by Composite GMF.
  • K (float, default 10.0): Heuristic constant used to compute adaptive lambda.

Raises:

  • ValueError: If estimator hyperparameters are invalid.

Source code in highfis/estimators.py
def __init__(
    self,
    *,
    input_configs: list[InputConfig] | None = None,
    n_mfs: int = 5,
    mf_init: str = "kmeans",
    sigma_scale: float | str = 1.0,
    random_state: int | None = None,
    epochs: int = 10,
    learning_rate: float = 1e-2,
    verbose: bool | int = False,
    rule_base: str | None = None,
    batch_size: int | None = 512,
    shuffle: bool = True,
    ur_weight: float = 0.0,
    ur_target: float | None = None,
    consequent_batch_norm: bool = False,
    pfrb_max_rules: int | None = None,
    patience: int | None = 20,
    restore_best: bool = True,
    validation_data: tuple[Any, Any] | None = None,
    weight_decay: float = 1e-8,
    adaptive: bool = True,
    lambda_: float = 1.0,
    lower_bound: float = 1.0 / math.e,
    K: float = 10.0,
) -> None:
    """Initialize an ADMTSK classifier estimator.

    Args:
        input_configs: Optional list of per-feature input configurations.
        n_mfs: Number of membership functions per input when using
            ``mf_init="kmeans"`` or ``mf_init="grid"``.
        mf_init: Initialisation strategy for MFs, either ``"kmeans"``
            or ``"grid"``.
        sigma_scale: Scale factor used to initialise Gaussian MF sigma
            values.
        random_state: Random seed for MF initialisation and weights.
        epochs: Maximum number of training epochs.
        learning_rate: Learning rate for the optimizer.
        verbose: Verbosity level for training output.
        rule_base: Rule base strategy override, typically ``"coco"`` or
            ``"cartesian"``.
        batch_size: Mini-batch size for training.
        shuffle: Whether to shuffle training data each epoch.
        ur_weight: Uniform-rule regularisation weight.
        ur_target: Target average rule activation for uniform regularisation.
        consequent_batch_norm: If True, apply batch normalization to
            consequent inputs.
        pfrb_max_rules: Maximum number of rules for point-based FRB.
        patience: Early stopping patience. Use ``None`` to disable.
        restore_best: If True, restore the best validation weights.
        validation_data: Validation dataset used for early stopping.
        weight_decay: Weight decay applied during training.
        adaptive: If True, use adaptive lambda selection for Dombi T-norm.
        lambda_: Fixed Dombi parameter when adaptive is False.
        lower_bound: Lower bound used by Composite GMF.
        K: Heuristic constant used to compute adaptive lambda.

    Raises:
        ValueError: If estimator hyperparameters are invalid.
    """
    super().__init__(
        input_configs=input_configs,
        n_mfs=n_mfs,
        mf_init=mf_init,
        sigma_scale=sigma_scale,
        random_state=random_state,
        epochs=epochs,
        learning_rate=learning_rate,
        verbose=verbose,
        rule_base=rule_base,
        batch_size=batch_size,
        shuffle=shuffle,
        ur_weight=ur_weight,
        ur_target=ur_target,
        consequent_batch_norm=consequent_batch_norm,
        pfrb_max_rules=pfrb_max_rules,
        patience=patience,
        restore_best=restore_best,
        validation_data=validation_data,
        weight_decay=weight_decay,
    )
    self.adaptive = bool(adaptive)
    self.lambda_ = float(lambda_)
    self.lower_bound = float(lower_bound)
    self.K = float(K)

evaluate

Compute classification evaluation metrics for the provided dataset.

Source code in highfis/estimators.py
def evaluate(
    self,
    X: Any,
    y: Any,
    metrics: list[str] | None = None,
    sample_weight: Any | None = None,
) -> dict[str, float]:
    """Compute classification evaluation metrics for the provided dataset."""
    y_true = np.asarray(y)
    y_pred = self.predict(X)
    return compute_metrics(
        task="classification",
        y_true=y_true,
        y_pred=y_pred,
        sample_weight=sample_weight,
        metrics=metrics,
    )
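
For example, reusing the fitted clf from the sketch above. The metric key names in the comment are assumptions; the supported keys are determined by compute_metrics, and metrics=None selects its default set:

# Default metric set:
scores = clf.evaluate(X_test, y_test)
print(scores)  # dict[str, float]

# With an explicit metric list (key names assumed, not verified):
# scores = clf.evaluate(X_test, y_test, metrics=["accuracy", "f1"])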

fit

Train the TSK classifier on labeled samples.

Source code in highfis/estimators.py
def fit(self, x: Any, y: Any) -> Self:
    """Train the TSK classifier on labeled samples."""
    x_arr, y_arr = check_X_y(x, y)

    if self.random_state is not None:
        torch.manual_seed(int(self.random_state))

    le = LabelEncoder()
    y_idx = le.fit_transform(np.asarray(y_arr))

    input_mfs, feature_names, effective_rule_base = self._build_input_mfs(x_arr)

    self.n_features_in_ = x_arr.shape[1]
    self.feature_names_in_ = np.asarray(feature_names, dtype=object)
    self.classes_ = le.classes_
    self._label_encoder_ = le

    self.model_ = self._build_model(input_mfs, len(self.classes_), effective_rule_base)

    y_t = torch.as_tensor(y_idx, dtype=torch.long)

    # Prepare validation tensors if provided
    x_val_t: torch.Tensor | None = None
    y_val_t: torch.Tensor | None = None
    if self.validation_data is not None:
        x_v, y_v = self.validation_data
        x_v_arr = check_array(x_v)
        y_v_idx = self._label_encoder_.transform(np.asarray(y_v))
        x_val_t = self._as_tensor_x(x_v_arr)
        y_val_t = torch.as_tensor(y_v_idx, dtype=torch.long)

    self.history_ = self.model_.fit(
        self._as_tensor_x(x_arr),
        y_t,
        epochs=int(self.epochs),
        learning_rate=float(self.learning_rate),
        batch_size=self.batch_size,
        shuffle=bool(self.shuffle),
        ur_weight=float(self.ur_weight),
        ur_target=self.ur_target,
        verbose=self.verbose,
        x_val=x_val_t,
        y_val=y_val_t,
        patience=self.patience,
        restore_best=self.restore_best,
        weight_decay=float(self.weight_decay),
    )
    self.rule_base_ = effective_rule_base
    return self

load classmethod

Load a persisted estimator created by save.

Source code in highfis/estimators.py
@classmethod
def load(cls, path: str) -> Self:
    """Load a persisted estimator created by save."""
    checkpoint = load_checkpoint(path)
    validate_checkpoint_payload(checkpoint, expected_estimator_class=cls.__name__)

    estimator = cls(**checkpoint["estimator_params"])
    model_init = checkpoint["model_init"]
    estimator.rule_base_ = model_init["rule_base"]
    estimator.model_ = estimator._build_model(
        model_init["input_mfs"],
        int(model_init["n_classes"]),
        str(model_init["rule_base"]),
    )
    estimator.model_.load_state_dict(checkpoint["model_state_dict"])

    fitted = checkpoint["fitted_attrs"]
    estimator.n_features_in_ = int(fitted["n_features_in"])
    estimator.feature_names_in_ = np.asarray(fitted["feature_names_in"], dtype=object)
    estimator.classes_ = np.asarray(fitted["classes"], dtype=object)
    label_encoder = LabelEncoder()
    label_encoder.classes_ = estimator.classes_
    estimator._label_encoder_ = label_encoder
    estimator.history_ = cast(dict[str, Any], checkpoint.get("history", {}))
    return estimator

predict

Predict class labels for input samples.

Source code in highfis/estimators.py
def predict(self, x: Any) -> np.ndarray:
    """Predict class labels for input samples."""
    proba = self.predict_proba(x)
    y_idx = np.argmax(proba, axis=1)
    return np.asarray(self._label_encoder_.inverse_transform(y_idx))

predict_proba

Predict class probabilities for input samples.

Source code in highfis/estimators.py
def predict_proba(self, x: Any) -> np.ndarray:
    """Predict class probabilities for input samples."""
    check_is_fitted(self, "model_")
    x_arr = check_array(x)
    if x_arr.shape[1] != self.n_features_in_:
        raise ValueError(f"expected {self.n_features_in_} features, got {x_arr.shape[1]}")
    probs = cast(Any, self.model_).predict_proba(self._as_tensor_x(x_arr))
    return probs.detach().cpu().numpy()

save

Persist estimator configuration, model weights and fitted metadata.

Source code in highfis/estimators.py
def save(self, path: str) -> None:
    """Persist estimator configuration, model weights and fitted metadata."""
    checkpoint = self._build_checkpoint_base(
        model_init={
            "input_mfs": self.model_.input_mfs,
            "n_classes": len(self.classes_),
            "rule_base": self.rule_base_,
        },
        fitted_attrs={
            "n_features_in": int(self.n_features_in_),
            "feature_names_in": self.feature_names_in_.tolist(),
            "classes": self.classes_.tolist(),
        },
    )
    save_checkpoint(path, checkpoint)
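
A save/load round trip, assuming the fitted clf from the earlier sketch (the file name and extension are arbitrary; the on-disk format is handled by save_checkpoint/load_checkpoint):

clf.save("admtsk_clf.ckpt")

restored = ADMTSKClassifierEstimator.load("admtsk_clf.ckpt")

# The restored estimator predicts without refitting.
assert (restored.predict(X_test) == clf.predict(X_test)).all()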

score

Return classification accuracy on the provided dataset.

Source code in highfis/estimators.py
def score(self, X: Any, y: Any, sample_weight: Any = None) -> float:
    """Return classification accuracy on the provided dataset."""
    y_true = np.asarray(y)
    y_pred = self.predict(X)
    return float(accuracy_score(y_true, y_pred, sample_weight=sample_weight))

ADMTSKRegressorEstimator

Bases: _BaseRegressorEstimator

ADMTSK regressor estimator with Composite GMF and adaptive Dombi lambda.

ADMTSK is an adaptive Dombi TSK fuzzy system designed for high-dimensional inference. It combines a Dombi T-norm antecedent with a positive lower-bound Composite Gaussian membership function (CGMF) and normalized first-order consequents.

Reference

G. Xue, L. Hu, J. Wang and S. Ablameyko, "ADMTSK: A High-Dimensional Takagi-Sugeno-Kang Fuzzy System Based on Adaptive Dombi T-Norm," in IEEE Transactions on Fuzzy Systems, vol. 33, no. 6, pp. 1767-1780, June 2025, doi: 10.1109/TFUZZ.2025.3535640.

Example
from highfis import ADMTSKRegressorEstimator

reg = ADMTSKRegressorEstimator()
reg.fit(X_train, y_train)
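
A fuller sketch on synthetic regression data (values are illustrative; score is assumed to follow the scikit-learn R² convention for regressors):

from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split

from highfis import ADMTSKRegressorEstimator

X, y = make_regression(n_samples=500, n_features=20, noise=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

reg = ADMTSKRegressorEstimator(epochs=20, random_state=0)
reg.fit(X_train, y_train)

print(reg.score(X_test, y_test))  # R^2 (assumed), per the sklearn convention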

Initialize an ADMTSK regressor estimator.

Parameters:

  • input_configs (list[InputConfig] | None, default None): Optional list of per-feature input configurations.
  • n_mfs (int, default 5): Number of membership functions per input when using mf_init="kmeans" or mf_init="grid".
  • mf_init (str, default 'kmeans'): Initialisation strategy for MFs, either "kmeans" or "grid".
  • sigma_scale (float | str, default 1.0): Scale factor used to initialise Gaussian MF sigma values.
  • random_state (int | None, default None): Random seed for MF initialisation and weights.
  • epochs (int, default 10): Maximum number of training epochs.
  • learning_rate (float, default 0.01): Learning rate for the optimizer.
  • verbose (bool | int, default False): Verbosity level for training output.
  • rule_base (str | None, default None): Rule base strategy override, typically "coco" or "cartesian".
  • batch_size (int | None, default 512): Mini-batch size for training.
  • shuffle (bool, default True): Whether to shuffle training data each epoch.
  • ur_weight (float, default 0.0): Uniform-rule regularisation weight.
  • ur_target (float | None, default None): Target average rule activation for uniform regularisation.
  • consequent_batch_norm (bool, default False): If True, apply batch normalization to consequent inputs.
  • patience (int | None, default 20): Early stopping patience. Use None to disable.
  • restore_best (bool, default True): If True, restore the best validation weights.
  • validation_data (tuple[Any, Any] | None, default None): Validation dataset used for early stopping.
  • weight_decay (float, default 1e-08): Weight decay applied during training.
  • adaptive (bool, default True): If True, use adaptive lambda selection for Dombi T-norm.
  • lambda_ (float, default 1.0): Fixed Dombi parameter when adaptive is False.
  • lower_bound (float, default 1.0 / math.e): Lower bound used by Composite GMF.
  • K (float, default 10.0): Heuristic constant used to compute adaptive lambda.

Raises:

  • ValueError: If estimator hyperparameters are invalid.

Source code in highfis/estimators.py
def __init__(
    self,
    *,
    input_configs: list[InputConfig] | None = None,
    n_mfs: int = 5,
    mf_init: str = "kmeans",
    sigma_scale: float | str = 1.0,
    random_state: int | None = None,
    epochs: int = 10,
    learning_rate: float = 1e-2,
    verbose: bool | int = False,
    rule_base: str | None = None,
    batch_size: int | None = 512,
    shuffle: bool = True,
    ur_weight: float = 0.0,
    ur_target: float | None = None,
    consequent_batch_norm: bool = False,
    patience: int | None = 20,
    restore_best: bool = True,
    validation_data: tuple[Any, Any] | None = None,
    weight_decay: float = 1e-8,
    adaptive: bool = True,
    lambda_: float = 1.0,
    lower_bound: float = 1.0 / math.e,
    K: float = 10.0,
) -> None:
    """Initialize an ADMTSK regressor estimator.

    Args:
        input_configs: Optional list of per-feature input configurations.
        n_mfs: Number of membership functions per input when using
            ``mf_init="kmeans"`` or ``mf_init="grid"``.
        mf_init: Initialisation strategy for MFs, either ``"kmeans"``
            or ``"grid"``.
        sigma_scale: Scale factor used to initialise Gaussian MF sigma
            values.
        random_state: Random seed for MF initialisation and weights.
        epochs: Maximum number of training epochs.
        learning_rate: Learning rate for the optimizer.
        verbose: Verbosity level for training output.
        rule_base: Rule base strategy override, typically ``"coco"`` or
            ``"cartesian"``.
        batch_size: Mini-batch size for training.
        shuffle: Whether to shuffle training data each epoch.
        ur_weight: Uniform-rule regularisation weight.
        ur_target: Target average rule activation for uniform regularisation.
        consequent_batch_norm: If True, apply batch normalization to
            consequent inputs.
        patience: Early stopping patience. Use ``None`` to disable.
        restore_best: If True, restore the best validation weights.
        validation_data: Validation dataset used for early stopping.
        weight_decay: Weight decay applied during training.
        adaptive: If True, use adaptive lambda selection for Dombi T-norm.
        lambda_: Fixed Dombi parameter when adaptive is False.
        lower_bound: Lower bound used by Composite GMF.
        K: Heuristic constant used to compute adaptive lambda.

    Raises:
        ValueError: If estimator hyperparameters are invalid.
    """
    super().__init__(
        input_configs=input_configs,
        n_mfs=n_mfs,
        mf_init=mf_init,
        sigma_scale=sigma_scale,
        random_state=random_state,
        epochs=epochs,
        learning_rate=learning_rate,
        verbose=verbose,
        rule_base=rule_base,
        batch_size=batch_size,
        shuffle=shuffle,
        ur_weight=ur_weight,
        ur_target=ur_target,
        consequent_batch_norm=consequent_batch_norm,
        patience=patience,
        restore_best=restore_best,
        validation_data=validation_data,
        weight_decay=weight_decay,
    )
    self.adaptive = bool(adaptive)
    self.lambda_ = float(lambda_)
    self.lower_bound = float(lower_bound)
    self.K = float(K)

evaluate

Compute regression evaluation metrics for the provided dataset.

Source code in highfis/estimators.py
def evaluate(
    self,
    X: Any,
    y: Any,
    metrics: list[str] | None = None,
    sample_weight: Any | None = None,
) -> dict[str, float]:
    """Compute regression evaluation metrics for the provided dataset."""
    y_true = np.asarray(y)
    y_pred = self.predict(X)
    return compute_metrics(
        task="regression",
        y_true=y_true,
        y_pred=y_pred,
        sample_weight=sample_weight,
        metrics=metrics,
    )

fit

Train the TSK regressor on labeled samples.

Source code in highfis/estimators.py
def fit(self, x: Any, y: Any) -> Self:
    """Train the TSK regressor on labeled samples."""
    x_arr, y_arr = check_X_y(x, y)

    if self.random_state is not None:
        torch.manual_seed(int(self.random_state))

    input_mfs, feature_names, effective_rule_base = self._build_input_mfs(x_arr)

    self.n_features_in_ = x_arr.shape[1]
    self.feature_names_in_ = np.asarray(feature_names, dtype=object)

    self.model_ = self._build_regressor_model(input_mfs, effective_rule_base)

    y_t = torch.as_tensor(np.asarray(y_arr, dtype=np.float32), dtype=torch.float32)

    # Prepare validation tensors if provided
    x_val_t: torch.Tensor | None = None
    y_val_t: torch.Tensor | None = None
    if self.validation_data is not None:
        x_v, y_v = self.validation_data
        x_v_arr = check_array(x_v)
        x_val_t = self._as_tensor_x(x_v_arr)
        y_val_t = torch.as_tensor(np.asarray(y_v, dtype=np.float32), dtype=torch.float32)

    self.history_ = self.model_.fit(
        self._as_tensor_x(x_arr),
        y_t,
        epochs=int(self.epochs),
        learning_rate=float(self.learning_rate),
        batch_size=self.batch_size,
        shuffle=bool(self.shuffle),
        ur_weight=float(self.ur_weight),
        ur_target=self.ur_target,
        verbose=self.verbose,
        x_val=x_val_t,
        y_val=y_val_t,
        patience=self.patience,
        restore_best=self.restore_best,
        weight_decay=float(self.weight_decay),
    )
    self.rule_base_ = effective_rule_base
    return self

load classmethod

Load a persisted estimator created by save.

Source code in highfis/estimators.py
@classmethod
def load(cls, path: str) -> Self:
    """Load a persisted estimator created by save."""
    checkpoint = load_checkpoint(path)
    validate_checkpoint_payload(checkpoint, expected_estimator_class=cls.__name__)

    estimator = cls(**checkpoint["estimator_params"])
    model_init = checkpoint["model_init"]
    estimator.rule_base_ = model_init["rule_base"]
    estimator.model_ = estimator._build_regressor_model(
        model_init["input_mfs"],
        str(model_init["rule_base"]),
    )
    estimator.model_.load_state_dict(checkpoint["model_state_dict"])

    fitted = checkpoint["fitted_attrs"]
    estimator.n_features_in_ = int(fitted["n_features_in"])
    estimator.feature_names_in_ = np.asarray(fitted["feature_names_in"], dtype=object)
    estimator.history_ = cast(dict[str, Any], checkpoint.get("history", {}))
    return estimator

predict

Predict continuous target values for input samples.

Source code in highfis/estimators.py
def predict(self, x: Any) -> np.ndarray:
    """Predict continuous target values for input samples."""
    check_is_fitted(self, "model_")
    x_arr = check_array(x)
    if x_arr.shape[1] != self.n_features_in_:
        raise ValueError(f"expected {self.n_features_in_} features, got {x_arr.shape[1]}")
    preds = cast(Any, self.model_).predict(self._as_tensor_x(x_arr))
    return preds.detach().cpu().numpy()

save

Persist estimator configuration, model weights and fitted metadata.

Source code in highfis/estimators.py
def save(self, path: str) -> None:
    """Persist estimator configuration, model weights and fitted metadata."""
    checkpoint = self._build_checkpoint_base(
        model_init={
            "input_mfs": self.model_.input_mfs,
            "rule_base": self.rule_base_,
        },
        fitted_attrs={
            "n_features_in": int(self.n_features_in_),
            "feature_names_in": self.feature_names_in_.tolist(),
        },
    )
    save_checkpoint(path, checkpoint)

ADPTSKClassifierEstimator

Bases: _BaseClassifierEstimator

TSK classifier with ADP-softmin antecedent and Gaussian PIMF.

The firing strength of each rule is computed with the ADP-softmin operator, and membership functions are wrapped as Gaussian PIMFs to preserve a positive infimum during high-dimensional training.

Reference

Ma, M., Qian, L., Zhang, Y., Fang, Q., & Xue, G. (2025). An adaptive double-parameter softmin based Takagi-Sugeno-Kang fuzzy system for high-dimensional data. Fuzzy Sets and Systems, 521, 109582. https://doi.org/10.1016/j.fss.2025.109582

Example
from highfis import ADPTSKClassifierEstimator

clf = ADPTSKClassifierEstimator()
clf.fit(X_train, y_train)
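
A sketch spelling out the ADPTSK-specific parameters next to the shared training options. The values shown are simply the documented defaults, not tuning advice, and X_train/y_train are as in the earlier examples:

from highfis import ADPTSKClassifierEstimator

clf = ADPTSKClassifierEstimator(
    n_mfs=3,         # ADPTSK default
    kappa=690.0,     # double-softmin geometry (default)
    xi=730.0,        # adaptive softmin sharpness (default)
    K=1.0,           # Gaussian PIMF scaling constant (default)
    epochs=20,
    random_state=0,
)
clf.fit(X_train, y_train)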

Initialise an ADPTSK classifier estimator.

Parameters:

  • input_configs (list[InputConfig] | None, default None): Optional list of InputConfig instances, one per feature. Only name is used when mf_init="kmeans".
  • n_mfs (int, default 3): Number of membership functions per feature or k-means clusters.
  • mf_init (str, default 'kmeans'): Membership-function initialization strategy. "kmeans" or "grid".
  • sigma_scale (float | str, default 1.0): Scale factor for Gaussian MF sigma initialization.
  • random_state (int | None, default None): Seed for k-means and PyTorch weight initialization.
  • epochs (int, default 10): Maximum number of training epochs.
  • learning_rate (float, default 0.01): Initial learning rate for the Adam optimizer.
  • verbose (bool | int, default False): Verbosity level for training output.
  • rule_base (str | None, default None): Rule-base strategy, e.g. "coco" or "cartesian".
  • batch_size (int | None, default 512): Mini-batch size. None uses the full dataset.
  • shuffle (bool, default True): Whether to shuffle training samples each epoch.
  • ur_weight (float, default 0.0): Uniform-rule regularization weight.
  • ur_target (float | None, default None): Target average rule activation for UR.
  • consequent_batch_norm (bool, default False): Apply batch normalization to consequent linear layers.
  • pfrb_max_rules (int | None, default None): Maximum rules for point-based FRB when rule_base="pfrb".
  • patience (int | None, default 20): Early-stopping patience. None disables early stopping.
  • restore_best (bool, default True): Restore the best validation model weights after training.
  • validation_data (tuple[Any, Any] | None, default None): Optional (X_val, y_val) tuple for early stopping.
  • weight_decay (float, default 1e-08): L2 weight decay coefficient for consequent parameters.
  • kappa (float, default 690.0): ADPTSK κ parameter controlling the double-softmin geometry.
  • xi (float, default 730.0): ADPTSK ξ parameter controlling adaptive softmin sharpness.
  • K (float, default 1.0): Gaussian PIMF scaling constant used when wrapping the input MFs.
  • eps (float | None, default None): Optional lower bound for Gaussian PIMF values.
Source code in highfis/estimators.py
def __init__(
    self,
    *,
    input_configs: list[InputConfig] | None = None,
    n_mfs: int = 3,
    mf_init: str = "kmeans",
    sigma_scale: float | str = 1.0,
    random_state: int | None = None,
    epochs: int = 10,
    learning_rate: float = 1e-2,
    verbose: bool | int = False,
    rule_base: str | None = None,
    batch_size: int | None = 512,
    shuffle: bool = True,
    ur_weight: float = 0.0,
    ur_target: float | None = None,
    consequent_batch_norm: bool = False,
    pfrb_max_rules: int | None = None,
    patience: int | None = 20,
    restore_best: bool = True,
    validation_data: tuple[Any, Any] | None = None,
    weight_decay: float = 1e-8,
    kappa: float = 690.0,
    xi: float = 730.0,
    K: float = 1.0,
    eps: float | None = None,
) -> None:
    """Initialise an ADPTSK classifier estimator.

    Args:
        input_configs: Optional list of :class:`InputConfig` instances,
            one per feature. Only ``name`` is used when
            ``mf_init="kmeans"``.
        n_mfs: Number of membership functions per feature or k-means
            clusters.
        mf_init: Membership-function initialization strategy.
            ``"kmeans"`` or ``"grid"``.
        sigma_scale: Scale factor for Gaussian MF sigma initialization.
        random_state: Seed for k-means and PyTorch weight initialization.
        epochs: Maximum number of training epochs.
        learning_rate: Initial learning rate for the Adam optimizer.
        verbose: Verbosity level for training output.
        rule_base: Rule-base strategy, e.g. ``"coco"`` or ``"cartesian"``.
        batch_size: Mini-batch size. ``None`` uses the full dataset.
        shuffle: Whether to shuffle training samples each epoch.
        ur_weight: Uniform-rule regularization weight.
        ur_target: Target average rule activation for UR.
        consequent_batch_norm: Apply batch normalization to consequent
            linear layers.
        pfrb_max_rules: Maximum rules for point-based FRB when
            ``rule_base="pfrb"``.
        patience: Early-stopping patience. ``None`` disables early stopping.
        restore_best: Restore the best validation model weights after
            training.
        validation_data: Optional ``(X_val, y_val)`` tuple for early
            stopping.
        weight_decay: L2 weight decay coefficient for consequent parameters.
        kappa: ADPTSK ``κ`` parameter controlling the double-softmin
            geometry.
        xi: ADPTSK ``ξ`` parameter controlling adaptive softmin sharpness.
        K: Gaussian PIMF scaling constant used when wrapping the input MFs.
        eps: Optional lower bound for Gaussian PIMF values.
    """
    super().__init__(
        input_configs=input_configs,
        n_mfs=n_mfs,
        mf_init=mf_init,
        sigma_scale=sigma_scale,
        random_state=random_state,
        epochs=epochs,
        learning_rate=learning_rate,
        verbose=verbose,
        rule_base=rule_base,
        batch_size=batch_size,
        shuffle=shuffle,
        ur_weight=ur_weight,
        ur_target=ur_target,
        consequent_batch_norm=consequent_batch_norm,
        pfrb_max_rules=pfrb_max_rules,
        patience=patience,
        restore_best=restore_best,
        validation_data=validation_data,
        weight_decay=weight_decay,
    )
    self.kappa = float(kappa)
    self.xi = float(xi)
    self.K = float(K)
    self.eps = eps

evaluate

Compute classification evaluation metrics for the provided dataset.

Source code in highfis/estimators.py
def evaluate(
    self,
    X: Any,
    y: Any,
    metrics: list[str] | None = None,
    sample_weight: Any | None = None,
) -> dict[str, float]:
    """Compute classification evaluation metrics for the provided dataset."""
    y_true = np.asarray(y)
    y_pred = self.predict(X)
    return compute_metrics(
        task="classification",
        y_true=y_true,
        y_pred=y_pred,
        sample_weight=sample_weight,
        metrics=metrics,
    )

fit

Train the TSK classifier on labeled samples.

Source code in highfis/estimators.py
def fit(self, x: Any, y: Any) -> Self:
    """Train the TSK classifier on labeled samples."""
    x_arr, y_arr = check_X_y(x, y)

    if self.random_state is not None:
        torch.manual_seed(int(self.random_state))

    le = LabelEncoder()
    y_idx = le.fit_transform(np.asarray(y_arr))

    input_mfs, feature_names, effective_rule_base = self._build_input_mfs(x_arr)

    self.n_features_in_ = x_arr.shape[1]
    self.feature_names_in_ = np.asarray(feature_names, dtype=object)
    self.classes_ = le.classes_
    self._label_encoder_ = le

    self.model_ = self._build_model(input_mfs, len(self.classes_), effective_rule_base)

    y_t = torch.as_tensor(y_idx, dtype=torch.long)

    # Prepare validation tensors if provided
    x_val_t: torch.Tensor | None = None
    y_val_t: torch.Tensor | None = None
    if self.validation_data is not None:
        x_v, y_v = self.validation_data
        x_v_arr = check_array(x_v)
        y_v_idx = self._label_encoder_.transform(np.asarray(y_v))
        x_val_t = self._as_tensor_x(x_v_arr)
        y_val_t = torch.as_tensor(y_v_idx, dtype=torch.long)

    self.history_ = self.model_.fit(
        self._as_tensor_x(x_arr),
        y_t,
        epochs=int(self.epochs),
        learning_rate=float(self.learning_rate),
        batch_size=self.batch_size,
        shuffle=bool(self.shuffle),
        ur_weight=float(self.ur_weight),
        ur_target=self.ur_target,
        verbose=self.verbose,
        x_val=x_val_t,
        y_val=y_val_t,
        patience=self.patience,
        restore_best=self.restore_best,
        weight_decay=float(self.weight_decay),
    )
    self.rule_base_ = effective_rule_base
    return self

load classmethod

Load a persisted estimator created by save.

Source code in highfis/estimators.py
@classmethod
def load(cls, path: str) -> Self:
    """Load a persisted estimator created by save."""
    checkpoint = load_checkpoint(path)
    validate_checkpoint_payload(checkpoint, expected_estimator_class=cls.__name__)

    estimator = cls(**checkpoint["estimator_params"])
    model_init = checkpoint["model_init"]
    estimator.rule_base_ = model_init["rule_base"]
    estimator.model_ = estimator._build_model(
        model_init["input_mfs"],
        int(model_init["n_classes"]),
        str(model_init["rule_base"]),
    )
    estimator.model_.load_state_dict(checkpoint["model_state_dict"])

    fitted = checkpoint["fitted_attrs"]
    estimator.n_features_in_ = int(fitted["n_features_in"])
    estimator.feature_names_in_ = np.asarray(fitted["feature_names_in"], dtype=object)
    estimator.classes_ = np.asarray(fitted["classes"], dtype=object)
    label_encoder = LabelEncoder()
    label_encoder.classes_ = estimator.classes_
    estimator._label_encoder_ = label_encoder
    estimator.history_ = cast(dict[str, Any], checkpoint.get("history", {}))
    return estimator

predict

Predict class labels for input samples.

Source code in highfis/estimators.py
def predict(self, x: Any) -> np.ndarray:
    """Predict class labels for input samples."""
    proba = self.predict_proba(x)
    y_idx = np.argmax(proba, axis=1)
    return np.asarray(self._label_encoder_.inverse_transform(y_idx))

predict_proba

Predict class probabilities for input samples.

Source code in highfis/estimators.py
def predict_proba(self, x: Any) -> np.ndarray:
    """Predict class probabilities for input samples."""
    check_is_fitted(self, "model_")
    x_arr = check_array(x)
    if x_arr.shape[1] != self.n_features_in_:
        raise ValueError(f"expected {self.n_features_in_} features, got {x_arr.shape[1]}")
    probs = cast(Any, self.model_).predict_proba(self._as_tensor_x(x_arr))
    return probs.detach().cpu().numpy()

save

Persist estimator configuration, model weights and fitted metadata.

Source code in highfis/estimators.py
def save(self, path: str) -> None:
    """Persist estimator configuration, model weights and fitted metadata."""
    checkpoint = self._build_checkpoint_base(
        model_init={
            "input_mfs": self.model_.input_mfs,
            "n_classes": len(self.classes_),
            "rule_base": self.rule_base_,
        },
        fitted_attrs={
            "n_features_in": int(self.n_features_in_),
            "feature_names_in": self.feature_names_in_.tolist(),
            "classes": self.classes_.tolist(),
        },
    )
    save_checkpoint(path, checkpoint)

score

Return classification accuracy on the provided dataset.

Source code in highfis/estimators.py
def score(self, X: Any, y: Any, sample_weight: Any = None) -> float:
    """Return classification accuracy on the provided dataset."""
    y_true = np.asarray(y)
    y_pred = self.predict(X)
    return float(accuracy_score(y_true, y_pred, sample_weight=sample_weight))

ADPTSKRegressorEstimator

Bases: _BaseRegressorEstimator

TSK regressor with ADP-softmin antecedent and Gaussian PIMF.

The firing strength of each rule is computed with the ADP-softmin operator, and membership functions are wrapped as Gaussian PIMFs to preserve a positive infimum during high-dimensional training.

Reference

Ma, M., Qian, L., Zhang, Y., Fang, Q., & Xue, G. (2025). An adaptive double-parameter softmin based Takagi-Sugeno-Kang fuzzy system for high-dimensional data. Fuzzy Sets and Systems, 521, 109582. https://doi.org/10.1016/j.fss.2025.109582

Example
from highfis import ADPTSKRegressorEstimator

reg = ADPTSKRegressorEstimator()
reg.fit(X_train, y_train)
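
A sketch combining the regressor with validation-driven early stopping. The data splits reuse the variables from the ADMTSK regressor example above, and the split sizes are arbitrary:

from sklearn.model_selection import train_test_split

from highfis import ADPTSKRegressorEstimator

X_tr, X_val, y_tr, y_val = train_test_split(X_train, y_train, random_state=0)

reg = ADPTSKRegressorEstimator(
    epochs=100,
    patience=10,                     # stop after 10 epochs without improvement
    restore_best=True,               # roll back to the best validation weights
    validation_data=(X_val, y_val),  # held-out data drives early stopping
    random_state=0,
)
reg.fit(X_tr, y_tr)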

Initialise an ADPTSK regressor estimator.

Parameters:

  • input_configs (list[InputConfig] | None, default None): Optional list of InputConfig instances, one per feature. Only name is used when mf_init="kmeans".
  • n_mfs (int, default 3): Number of membership functions per feature or k-means clusters.
  • mf_init (str, default 'kmeans'): Membership-function initialization strategy. "kmeans" or "grid".
  • sigma_scale (float | str, default 1.0): Scale factor for Gaussian MF sigma initialization.
  • random_state (int | None, default None): Seed for k-means and PyTorch weight initialization.
  • epochs (int, default 10): Maximum number of training epochs.
  • learning_rate (float, default 0.01): Initial learning rate for the Adam optimizer.
  • verbose (bool | int, default False): Verbosity level for training output.
  • rule_base (str | None, default None): Rule-base strategy, e.g. "coco" or "cartesian".
  • batch_size (int | None, default 512): Mini-batch size. None uses the full dataset.
  • shuffle (bool, default True): Whether to shuffle training samples each epoch.
  • ur_weight (float, default 0.0): Uniform-rule regularization weight.
  • ur_target (float | None, default None): Target average rule activation for UR.
  • consequent_batch_norm (bool, default False): Apply batch normalization to consequent linear layers.
  • pfrb_max_rules (int | None, default None): Maximum rules for point-based FRB when rule_base="pfrb".
  • patience (int | None, default 20): Early-stopping patience. None disables early stopping.
  • restore_best (bool, default True): Restore the best validation model weights after training.
  • validation_data (tuple[Any, Any] | None, default None): Optional (X_val, y_val) tuple for early stopping.
  • weight_decay (float, default 1e-08): L2 weight decay coefficient for consequent parameters.
  • kappa (float, default 690.0): ADPTSK κ parameter controlling the double-softmin geometry.
  • xi (float, default 730.0): ADPTSK ξ parameter controlling adaptive softmin sharpness.
  • K (float, default 1.0): Gaussian PIMF scaling constant used when wrapping the input MFs.
  • eps (float | None, default None): Optional lower bound for Gaussian PIMF values.
Source code in highfis/estimators.py
def __init__(
    self,
    *,
    input_configs: list[InputConfig] | None = None,
    n_mfs: int = 3,
    mf_init: str = "kmeans",
    sigma_scale: float | str = 1.0,
    random_state: int | None = None,
    epochs: int = 10,
    learning_rate: float = 1e-2,
    verbose: bool | int = False,
    rule_base: str | None = None,
    batch_size: int | None = 512,
    shuffle: bool = True,
    ur_weight: float = 0.0,
    ur_target: float | None = None,
    consequent_batch_norm: bool = False,
    pfrb_max_rules: int | None = None,
    patience: int | None = 20,
    restore_best: bool = True,
    validation_data: tuple[Any, Any] | None = None,
    weight_decay: float = 1e-8,
    kappa: float = 690.0,
    xi: float = 730.0,
    K: float = 1.0,
    eps: float | None = None,
) -> None:
    """Initialise an ADPTSK regressor estimator.

    Args:
        input_configs: Optional list of :class:`InputConfig` instances,
            one per feature. Only ``name`` is used when
            ``mf_init="kmeans"``.
        n_mfs: Number of membership functions per feature or k-means
            clusters.
        mf_init: Membership-function initialization strategy.
            ``"kmeans"`` or ``"grid"``.
        sigma_scale: Scale factor for Gaussian MF sigma initialization.
        random_state: Seed for k-means and PyTorch weight initialization.
        epochs: Maximum number of training epochs.
        learning_rate: Initial learning rate for the Adam optimizer.
        verbose: Verbosity level for training output.
        rule_base: Rule-base strategy, e.g. ``"coco"`` or ``"cartesian"``.
        batch_size: Mini-batch size. ``None`` uses the full dataset.
        shuffle: Whether to shuffle training samples each epoch.
        ur_weight: Uniform-rule regularization weight.
        ur_target: Target average rule activation for UR.
        consequent_batch_norm: Apply batch normalization to consequent
            linear layers.
        pfrb_max_rules: Maximum rules for point-based FRB when
            ``rule_base="pfrb"``.
        patience: Early-stopping patience. ``None`` disables early stopping.
        restore_best: Restore the best validation model weights after
            training.
        validation_data: Optional ``(X_val, y_val)`` tuple for early
            stopping.
        weight_decay: L2 weight decay coefficient for consequent parameters.
        kappa: ADPTSK ``κ`` parameter controlling the double-softmin
            geometry.
        xi: ADPTSK ``ξ`` parameter controlling adaptive softmin sharpness.
        K: Gaussian PIMF scaling constant used when wrapping the input MFs.
        eps: Optional lower bound for Gaussian PIMF values.
    """
    super().__init__(
        input_configs=input_configs,
        n_mfs=n_mfs,
        mf_init=mf_init,
        sigma_scale=sigma_scale,
        random_state=random_state,
        epochs=epochs,
        learning_rate=learning_rate,
        verbose=verbose,
        rule_base=rule_base,
        batch_size=batch_size,
        shuffle=shuffle,
        ur_weight=ur_weight,
        ur_target=ur_target,
        consequent_batch_norm=consequent_batch_norm,
        pfrb_max_rules=pfrb_max_rules,
        patience=patience,
        restore_best=restore_best,
        validation_data=validation_data,
        weight_decay=weight_decay,
    )
    self.kappa = float(kappa)
    self.xi = float(xi)
    self.K = float(K)
    self.eps = eps

evaluate

Compute regression evaluation metrics for the provided dataset.

Source code in highfis/estimators.py
def evaluate(
    self,
    X: Any,
    y: Any,
    metrics: list[str] | None = None,
    sample_weight: Any | None = None,
) -> dict[str, float]:
    """Compute regression evaluation metrics for the provided dataset."""
    y_true = np.asarray(y)
    y_pred = self.predict(X)
    return compute_metrics(
        task="regression",
        y_true=y_true,
        y_pred=y_pred,
        sample_weight=sample_weight,
        metrics=metrics,
    )

fit

Train the TSK regressor on labeled samples.

Source code in highfis/estimators.py
def fit(self, x: Any, y: Any) -> Self:
    """Train the TSK regressor on labeled samples."""
    x_arr, y_arr = check_X_y(x, y)

    if self.random_state is not None:
        torch.manual_seed(int(self.random_state))

    input_mfs, feature_names, effective_rule_base = self._build_input_mfs(x_arr)

    self.n_features_in_ = x_arr.shape[1]
    self.feature_names_in_ = np.asarray(feature_names, dtype=object)

    self.model_ = self._build_regressor_model(input_mfs, effective_rule_base)

    y_t = torch.as_tensor(np.asarray(y_arr, dtype=np.float32), dtype=torch.float32)

    # Prepare validation tensors if provided
    x_val_t: torch.Tensor | None = None
    y_val_t: torch.Tensor | None = None
    if self.validation_data is not None:
        x_v, y_v = self.validation_data
        x_v_arr = check_array(x_v)
        x_val_t = self._as_tensor_x(x_v_arr)
        y_val_t = torch.as_tensor(np.asarray(y_v, dtype=np.float32), dtype=torch.float32)

    self.history_ = self.model_.fit(
        self._as_tensor_x(x_arr),
        y_t,
        epochs=int(self.epochs),
        learning_rate=float(self.learning_rate),
        batch_size=self.batch_size,
        shuffle=bool(self.shuffle),
        ur_weight=float(self.ur_weight),
        ur_target=self.ur_target,
        verbose=self.verbose,
        x_val=x_val_t,
        y_val=y_val_t,
        patience=self.patience,
        restore_best=self.restore_best,
        weight_decay=float(self.weight_decay),
    )
    self.rule_base_ = effective_rule_base
    return self

load classmethod

Load a persisted estimator created by save.

Source code in highfis/estimators.py
@classmethod
def load(cls, path: str) -> Self:
    """Load a persisted estimator created by save."""
    checkpoint = load_checkpoint(path)
    validate_checkpoint_payload(checkpoint, expected_estimator_class=cls.__name__)

    estimator = cls(**checkpoint["estimator_params"])
    model_init = checkpoint["model_init"]
    estimator.rule_base_ = model_init["rule_base"]
    estimator.model_ = estimator._build_regressor_model(
        model_init["input_mfs"],
        str(model_init["rule_base"]),
    )
    estimator.model_.load_state_dict(checkpoint["model_state_dict"])

    fitted = checkpoint["fitted_attrs"]
    estimator.n_features_in_ = int(fitted["n_features_in"])
    estimator.feature_names_in_ = np.asarray(fitted["feature_names_in"], dtype=object)
    estimator.history_ = cast(dict[str, Any], checkpoint.get("history", {}))
    return estimator

predict

Predict continuous target values for input samples.

Source code in highfis/estimators.py
def predict(self, x: Any) -> np.ndarray:
    """Predict continuous target values for input samples."""
    check_is_fitted(self, "model_")
    x_arr = check_array(x)
    if x_arr.shape[1] != self.n_features_in_:
        raise ValueError(f"expected {self.n_features_in_} features, got {x_arr.shape[1]}")
    preds = cast(Any, self.model_).predict(self._as_tensor_x(x_arr))
    return preds.detach().cpu().numpy()

save

Persist estimator configuration, model weights and fitted metadata.

Source code in highfis/estimators.py
def save(self, path: str) -> None:
    """Persist estimator configuration, model weights and fitted metadata."""
    checkpoint = self._build_checkpoint_base(
        model_init={
            "input_mfs": self.model_.input_mfs,
            "rule_base": self.rule_base_,
        },
        fitted_attrs={
            "n_features_in": int(self.n_features_in_),
            "feature_names_in": self.feature_names_in_.tolist(),
        },
    )
    save_checkpoint(path, checkpoint)

AYATSKClassifierEstimator

Bases: _BaseClassifierEstimator

TSK classifier with an adaptive Yager T-norm in the antecedent.

AYATSK extends TSK by using an adaptive Yager T-norm aggregation and optional positive lower-bound membership functions to improve stability and performance in high-dimensional settings.

Reference

G. Xue, Y. Yang and J. Wang, "Adaptive Yager T-Norm-Based Takagi-Sugeno-Kang Fuzzy Systems," in IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 55, no. 12, pp. 9802-9815, Dec. 2025, doi: 10.1109/TSMC.2025.3621346.

Example
from highfis import AYATSKClassifierEstimator

clf = AYATSKClassifierEstimator(n_mfs=30, random_state=0)
clf.fit(X_train, y_train)
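
A sketch of overriding the rule-base strategy. With mf_init="grid" the default is already "cartesian", so the explicit override below is redundant and shown purely to illustrate the parameter:

from highfis import AYATSKClassifierEstimator

clf = AYATSKClassifierEstimator(
    n_mfs=3,
    mf_init="grid",          # grid placement defaults to a Cartesian rule base
    rule_base="cartesian",   # explicit override, redundant here
    random_state=0,
)
clf.fit(X_train, y_train)  # X_train, y_train as in the earlier examples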

Initialise an AYATSK classifier.

Parameters:

  • input_configs (list[InputConfig] | None, default None): Per-feature InputConfig list. Only name is used when mf_init="kmeans".
  • n_mfs (int, default 5): Number of k-means clusters / grid MFs.
  • mf_init (str, default 'kmeans'): "kmeans" or "grid".
  • sigma_scale (float | str, default 1.0): Sigma scale factor for k-means initialisation. 1.0 is recommended; the adaptive Yager T-norm handles high-dimensional stability internally.
  • random_state (int | None, default None): Seed for k-means and weight initialisation.
  • epochs (int, default 10): Maximum training epochs.
  • learning_rate (float, default 0.01): Adam learning rate.
  • verbose (bool | int, default False): Print per-epoch progress.
  • rule_base (str | None, default None): "coco" or "cartesian". Defaults to "coco" for kmeans and "cartesian" for grid.
  • batch_size (int | None, default 512): Mini-batch size.
  • shuffle (bool, default True): Reshuffle each epoch.
  • ur_weight (float, default 0.0): Uncertainty regularisation weight.
  • ur_target (float | None, default None): Uncertainty regularisation target.
  • consequent_batch_norm (bool, default False): Batch normalisation on consequent layers.
  • pfrb_max_rules (int | None, default None): Maximum point-based FRB rules (unused by AYATSK).
  • patience (int | None, default 20): Early-stopping patience. Set to None to disable early stopping.
  • restore_best (bool, default True): If True, restore the best validation model weights after training.
  • validation_data (tuple[Any, Any] | None, default None): Optional (X_val, y_val) for early stopping.
  • weight_decay (float, default 1e-08): L2 weight decay for consequent parameters.
Source code in highfis/estimators.py
def __init__(
    self,
    *,
    input_configs: list[InputConfig] | None = None,
    n_mfs: int = 5,
    mf_init: str = "kmeans",
    sigma_scale: float | str = 1.0,
    random_state: int | None = None,
    epochs: int = 10,
    learning_rate: float = 1e-2,
    verbose: bool | int = False,
    rule_base: str | None = None,
    batch_size: int | None = 512,
    shuffle: bool = True,
    ur_weight: float = 0.0,
    ur_target: float | None = None,
    consequent_batch_norm: bool = False,
    pfrb_max_rules: int | None = None,
    patience: int | None = 20,
    restore_best: bool = True,
    validation_data: tuple[Any, Any] | None = None,
    weight_decay: float = 1e-8,
) -> None:
    """Initialise an AYATSK classifier.

    Args:
        input_configs: Per-feature :class:`InputConfig` list. Only
            ``name`` is used when ``mf_init="kmeans"``.
        n_mfs: Number of k-means clusters / grid MFs (default ``5``).
        mf_init: ``"kmeans"`` (default) or ``"grid"``.
        sigma_scale: Sigma scale factor for k-means initialisation.
            ``1.0`` is recommended; the adaptive Yager T-norm handles
            high-dimensional stability internally.
        random_state: Seed for k-means and weight initialisation.
        epochs: Maximum training epochs (default ``10``).
        learning_rate: Adam learning rate (default ``0.01``).
        verbose: Print per-epoch progress.
        rule_base: ``"coco"`` or ``"cartesian"``. Defaults to
            ``"coco"`` for kmeans and ``"cartesian"`` for grid.
        batch_size: Mini-batch size (default ``512``).
        shuffle: Reshuffle each epoch.
        ur_weight: Uncertainty regularisation weight.
        ur_target: Uncertainty regularisation target.
        consequent_batch_norm: Batch normalisation on consequent layers.
        pfrb_max_rules: Maximum point-based FRB rules (unused by
            AYATSK).
        patience: Early-stopping patience (default ``20``). Set to ``None`` to disable early stopping.
        restore_best: If ``True`` (default), restore the best validation
            model weights after training.
        validation_data: Optional ``(X_val, y_val)`` for early stopping.
        weight_decay: L2 weight decay for consequent parameters.
    """
    super().__init__(
        input_configs=input_configs,
        n_mfs=n_mfs,
        mf_init=mf_init,
        sigma_scale=sigma_scale,
        random_state=random_state,
        epochs=epochs,
        learning_rate=learning_rate,
        verbose=verbose,
        rule_base=rule_base,
        batch_size=batch_size,
        shuffle=shuffle,
        ur_weight=ur_weight,
        ur_target=ur_target,
        consequent_batch_norm=consequent_batch_norm,
        pfrb_max_rules=pfrb_max_rules,
        patience=patience,
        restore_best=restore_best,
        validation_data=validation_data,
        weight_decay=weight_decay,
    )

evaluate

Compute classification evaluation metrics for the provided dataset.

Source code in highfis/estimators.py
def evaluate(
    self,
    X: Any,
    y: Any,
    metrics: list[str] | None = None,
    sample_weight: Any | None = None,
) -> dict[str, float]:
    """Compute classification evaluation metrics for the provided dataset."""
    y_true = np.asarray(y)
    y_pred = self.predict(X)
    return compute_metrics(
        task="classification",
        y_true=y_true,
        y_pred=y_pred,
        sample_weight=sample_weight,
        metrics=metrics,
    )

fit

Train the TSK classifier on labeled samples.

Source code in highfis/estimators.py
def fit(self, x: Any, y: Any) -> Self:
    """Train the TSK classifier on labeled samples."""
    x_arr, y_arr = check_X_y(x, y)

    if self.random_state is not None:
        torch.manual_seed(int(self.random_state))

    le = LabelEncoder()
    y_idx = le.fit_transform(np.asarray(y_arr))

    input_mfs, feature_names, effective_rule_base = self._build_input_mfs(x_arr)

    self.n_features_in_ = x_arr.shape[1]
    self.feature_names_in_ = np.asarray(feature_names, dtype=object)
    self.classes_ = le.classes_
    self._label_encoder_ = le

    self.model_ = self._build_model(input_mfs, len(self.classes_), effective_rule_base)

    y_t = torch.as_tensor(y_idx, dtype=torch.long)

    # Prepare validation tensors if provided
    x_val_t: torch.Tensor | None = None
    y_val_t: torch.Tensor | None = None
    if self.validation_data is not None:
        x_v, y_v = self.validation_data
        x_v_arr = check_array(x_v)
        y_v_idx = self._label_encoder_.transform(np.asarray(y_v))
        x_val_t = self._as_tensor_x(x_v_arr)
        y_val_t = torch.as_tensor(y_v_idx, dtype=torch.long)

    self.history_ = self.model_.fit(
        self._as_tensor_x(x_arr),
        y_t,
        epochs=int(self.epochs),
        learning_rate=float(self.learning_rate),
        batch_size=self.batch_size,
        shuffle=bool(self.shuffle),
        ur_weight=float(self.ur_weight),
        ur_target=self.ur_target,
        verbose=self.verbose,
        x_val=x_val_t,
        y_val=y_val_t,
        patience=self.patience,
        restore_best=self.restore_best,
        weight_decay=float(self.weight_decay),
    )
    self.rule_base_ = effective_rule_base
    return self

load classmethod

Load a persisted estimator created by save.

Source code in highfis/estimators.py
@classmethod
def load(cls, path: str) -> Self:
    """Load a persisted estimator created by save."""
    checkpoint = load_checkpoint(path)
    validate_checkpoint_payload(checkpoint, expected_estimator_class=cls.__name__)

    estimator = cls(**checkpoint["estimator_params"])
    model_init = checkpoint["model_init"]
    estimator.rule_base_ = model_init["rule_base"]
    estimator.model_ = estimator._build_model(
        model_init["input_mfs"],
        int(model_init["n_classes"]),
        str(model_init["rule_base"]),
    )
    estimator.model_.load_state_dict(checkpoint["model_state_dict"])

    fitted = checkpoint["fitted_attrs"]
    estimator.n_features_in_ = int(fitted["n_features_in"])
    estimator.feature_names_in_ = np.asarray(fitted["feature_names_in"], dtype=object)
    estimator.classes_ = np.asarray(fitted["classes"], dtype=object)
    label_encoder = LabelEncoder()
    label_encoder.classes_ = estimator.classes_
    estimator._label_encoder_ = label_encoder
    estimator.history_ = cast(dict[str, Any], checkpoint.get("history", {}))
    return estimator

predict

Predict class labels for input samples.

Source code in highfis/estimators.py
def predict(self, x: Any) -> np.ndarray:
    """Predict class labels for input samples."""
    proba = self.predict_proba(x)
    y_idx = np.argmax(proba, axis=1)
    return np.asarray(self._label_encoder_.inverse_transform(y_idx))

predict_proba

Predict class probabilities for input samples.

Source code in highfis/estimators.py
def predict_proba(self, x: Any) -> np.ndarray:
    """Predict class probabilities for input samples."""
    check_is_fitted(self, "model_")
    x_arr = check_array(x)
    if x_arr.shape[1] != self.n_features_in_:
        raise ValueError(f"expected {self.n_features_in_} features, got {x_arr.shape[1]}")
    probs = cast(Any, self.model_).predict_proba(self._as_tensor_x(x_arr))
    return probs.detach().cpu().numpy()

save

Persist estimator configuration, model weights and fitted metadata.

Source code in highfis/estimators.py
def save(self, path: str) -> None:
    """Persist estimator configuration, model weights and fitted metadata."""
    checkpoint = self._build_checkpoint_base(
        model_init={
            "input_mfs": self.model_.input_mfs,
            "n_classes": len(self.classes_),
            "rule_base": self.rule_base_,
        },
        fitted_attrs={
            "n_features_in": int(self.n_features_in_),
            "feature_names_in": self.feature_names_in_.tolist(),
            "classes": self.classes_.tolist(),
        },
    )
    save_checkpoint(path, checkpoint)

score

Return classification accuracy on the provided dataset.

Source code in highfis/estimators.py
def score(self, X: Any, y: Any, sample_weight: Any = None) -> float:
    """Return classification accuracy on the provided dataset."""
    y_true = np.asarray(y)
    y_pred = self.predict(X)
    return float(accuracy_score(y_true, y_pred, sample_weight=sample_weight))

AYATSKRegressorEstimator

Bases: _BaseRegressorEstimator

TSK regressor with an adaptive Yager T-norm in the antecedent.

AYATSK extends TSK by using an adaptive Yager T-norm aggregation and optional positive lower-bound membership functions to improve stability and performance in high-dimensional settings.

Reference

G. Xue, Y. Yang and J. Wang, "Adaptive Yager T-Norm-Based Takagi-Sugeno-Kang Fuzzy Systems," in IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 55, no. 12, pp. 9802-9815, Dec. 2025, doi: 10.1109/TSMC.2025.3621346.

Example
from highfis import AYATSKRegressorEstimator

reg = AYATSKRegressorEstimator(n_mfs=30, random_state=0)
reg.fit(X_train, y_train)
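
The Yager T-norm that aggregates per-feature memberships into a rule firing strength interpolates toward the minimum as its exponent grows. For intuition only, here is a self-contained sketch of the standard Yager T-norm; it is not the library's internal implementation, and AYATSK's adaptive update of the exponent is omitted:

import numpy as np

def yager_tnorm(memberships: np.ndarray, p: float) -> np.ndarray:
    """Standard n-ary Yager T-norm over the last axis.

    T_p(x) = max(0, 1 - (sum_i (1 - x_i)^p)^(1/p)); as p grows it
    approaches min_i x_i, i.e. it behaves like a hard AND.
    """
    complement = 1.0 - memberships
    return np.maximum(0.0, 1.0 - (complement**p).sum(axis=-1) ** (1.0 / p))

print(yager_tnorm(np.array([0.9, 0.8, 0.95]), p=2.0))   # ~0.771
print(yager_tnorm(np.array([0.9, 0.8, 0.95]), p=50.0))  # close to min = 0.8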

Initialise an AYATSK regressor.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `input_configs` | `list[InputConfig] \| None` | Per-feature `InputConfig` list. Only `name` is used when `mf_init="kmeans"`. | `None` |
| `n_mfs` | `int` | Number of k-means clusters / grid MFs. | `5` |
| `mf_init` | `str` | `"kmeans"` or `"grid"`. | `'kmeans'` |
| `sigma_scale` | `float \| str` | Sigma scale factor; `1.0` recommended. | `1.0` |
| `random_state` | `int \| None` | Seed for k-means and weight initialisation. | `None` |
| `epochs` | `int` | Maximum training epochs. | `10` |
| `learning_rate` | `float` | Adam learning rate. | `0.01` |
| `verbose` | `bool \| int` | Print per-epoch progress. | `False` |
| `rule_base` | `str \| None` | `"coco"` or `"cartesian"`. | `None` |
| `batch_size` | `int \| None` | Mini-batch size. | `512` |
| `shuffle` | `bool` | Reshuffle each epoch. | `True` |
| `ur_weight` | `float` | Uncertainty regularisation weight. | `0.0` |
| `ur_target` | `float \| None` | Uncertainty regularisation target. | `None` |
| `consequent_batch_norm` | `bool` | Batch normalisation on consequent layers. | `False` |
| `patience` | `int \| None` | Early-stopping patience; set to `None` to disable early stopping. | `20` |
| `restore_best` | `bool` | If `True`, restore the best validation model weights after training. | `True` |
| `validation_data` | `tuple[Any, Any] \| None` | Optional `(X_val, y_val)` for early stopping. | `None` |
| `weight_decay` | `float` | L2 weight decay for consequent parameters. | `1e-08` |

Source code in highfis/estimators.py
def __init__(
    self,
    *,
    input_configs: list[InputConfig] | None = None,
    n_mfs: int = 5,
    mf_init: str = "kmeans",
    sigma_scale: float | str = 1.0,
    random_state: int | None = None,
    epochs: int = 10,
    learning_rate: float = 1e-2,
    verbose: bool | int = False,
    rule_base: str | None = None,
    batch_size: int | None = 512,
    shuffle: bool = True,
    ur_weight: float = 0.0,
    ur_target: float | None = None,
    consequent_batch_norm: bool = False,
    patience: int | None = 20,
    restore_best: bool = True,
    validation_data: tuple[Any, Any] | None = None,
    weight_decay: float = 1e-8,
) -> None:
    """Initialise an AYATSK regressor.

    Args:
        input_configs: Per-feature :class:`InputConfig` list. Only
            ``name`` is used when ``mf_init="kmeans"``.
        n_mfs: Number of k-means clusters / grid MFs (default ``5``).
        mf_init: ``"kmeans"`` (default) or ``"grid"``.
        sigma_scale: Sigma scale factor. ``1.0`` recommended.
        random_state: Seed for k-means and weight initialisation.
        epochs: Maximum training epochs (default ``10``).
        learning_rate: Adam learning rate (default ``0.01``).
        verbose: Print per-epoch progress.
        rule_base: ``"coco"`` or ``"cartesian"``.
        batch_size: Mini-batch size (default ``512``).
        shuffle: Reshuffle each epoch.
        ur_weight: Uncertainty regularisation weight.
        ur_target: Uncertainty regularisation target.
        consequent_batch_norm: Batch normalisation on consequent layers.
        patience: Early-stopping patience (default ``20``). Set to ``None`` to disable early stopping.
        restore_best: If ``True`` (default), restore the best validation
            model weights after training.
        validation_data: Optional ``(X_val, y_val)`` for early stopping.
        weight_decay: L2 weight decay for consequent parameters.
    """
    super().__init__(
        input_configs=input_configs,
        n_mfs=n_mfs,
        mf_init=mf_init,
        sigma_scale=sigma_scale,
        random_state=random_state,
        epochs=epochs,
        learning_rate=learning_rate,
        verbose=verbose,
        rule_base=rule_base,
        batch_size=batch_size,
        shuffle=shuffle,
        ur_weight=ur_weight,
        ur_target=ur_target,
        consequent_batch_norm=consequent_batch_norm,
        patience=patience,
        restore_best=restore_best,
        validation_data=validation_data,
        weight_decay=weight_decay,
    )

evaluate

Compute regression evaluation metrics for the provided dataset.

Source code in highfis/estimators.py
def evaluate(
    self,
    X: Any,
    y: Any,
    metrics: list[str] | None = None,
    sample_weight: Any | None = None,
) -> dict[str, float]:
    """Compute regression evaluation metrics for the provided dataset."""
    y_true = np.asarray(y)
    y_pred = self.predict(X)
    return compute_metrics(
        task="regression",
        y_true=y_true,
        y_pred=y_pred,
        sample_weight=sample_weight,
        metrics=metrics,
    )
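
In practice this returns a plain dict mapping metric names to floats; the available keys are whatever compute_metrics defines for the regression task. A short sketch with placeholder arrays:

from highfis import AYATSKRegressorEstimator

reg = AYATSKRegressorEstimator(n_mfs=10, random_state=0).fit(X_train, y_train)
scores = reg.evaluate(X_test, y_test)   # e.g. {"<metric>": <float>, ...}
print(scores)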

fit

Train the TSK regressor on labeled samples.

Source code in highfis/estimators.py
def fit(self, x: Any, y: Any) -> Self:
    """Train the TSK regressor on labeled samples."""
    x_arr, y_arr = check_X_y(x, y)

    if self.random_state is not None:
        torch.manual_seed(int(self.random_state))

    input_mfs, feature_names, effective_rule_base = self._build_input_mfs(x_arr)

    self.n_features_in_ = x_arr.shape[1]
    self.feature_names_in_ = np.asarray(feature_names, dtype=object)

    self.model_ = self._build_regressor_model(input_mfs, effective_rule_base)

    y_t = torch.as_tensor(np.asarray(y_arr, dtype=np.float32), dtype=torch.float32)

    # Prepare validation tensors if provided
    x_val_t: torch.Tensor | None = None
    y_val_t: torch.Tensor | None = None
    if self.validation_data is not None:
        x_v, y_v = self.validation_data
        x_v_arr = check_array(x_v)
        x_val_t = self._as_tensor_x(x_v_arr)
        y_val_t = torch.as_tensor(np.asarray(y_v, dtype=np.float32), dtype=torch.float32)

    self.history_ = self.model_.fit(
        self._as_tensor_x(x_arr),
        y_t,
        epochs=int(self.epochs),
        learning_rate=float(self.learning_rate),
        batch_size=self.batch_size,
        shuffle=bool(self.shuffle),
        ur_weight=float(self.ur_weight),
        ur_target=self.ur_target,
        verbose=self.verbose,
        x_val=x_val_t,
        y_val=y_val_t,
        patience=self.patience,
        restore_best=self.restore_best,
        weight_decay=float(self.weight_decay),
    )
    self.rule_base_ = effective_rule_base
    return self

load classmethod

Load a persisted estimator created by save.

Source code in highfis/estimators.py
@classmethod
def load(cls, path: str) -> Self:
    """Load a persisted estimator created by save."""
    checkpoint = load_checkpoint(path)
    validate_checkpoint_payload(checkpoint, expected_estimator_class=cls.__name__)

    estimator = cls(**checkpoint["estimator_params"])
    model_init = checkpoint["model_init"]
    estimator.rule_base_ = model_init["rule_base"]
    estimator.model_ = estimator._build_regressor_model(
        model_init["input_mfs"],
        str(model_init["rule_base"]),
    )
    estimator.model_.load_state_dict(checkpoint["model_state_dict"])

    fitted = checkpoint["fitted_attrs"]
    estimator.n_features_in_ = int(fitted["n_features_in"])
    estimator.feature_names_in_ = np.asarray(fitted["feature_names_in"], dtype=object)
    estimator.history_ = cast(dict[str, Any], checkpoint.get("history", {}))
    return estimator

predict

Predict continuous target values for input samples.

Source code in highfis/estimators.py
def predict(self, x: Any) -> np.ndarray:
    """Predict continuous target values for input samples."""
    check_is_fitted(self, "model_")
    x_arr = check_array(x)
    if x_arr.shape[1] != self.n_features_in_:
        raise ValueError(f"expected {self.n_features_in_} features, got {x_arr.shape[1]}")
    preds = cast(Any, self.model_).predict(self._as_tensor_x(x_arr))
    return preds.detach().cpu().numpy()

save

Persist estimator configuration, model weights and fitted metadata.

Source code in highfis/estimators.py
def save(self, path: str) -> None:
    """Persist estimator configuration, model weights and fitted metadata."""
    checkpoint = self._build_checkpoint_base(
        model_init={
            "input_mfs": self.model_.input_mfs,
            "rule_base": self.rule_base_,
        },
        fitted_attrs={
            "n_features_in": int(self.n_features_in_),
            "feature_names_in": self.feature_names_in_.tolist(),
        },
    )
    save_checkpoint(path, checkpoint)

AdaTSKClassifierEstimator

Bases: _BaseClassifierEstimator

TSK classifier with adaptive softmin antecedent (AdaTSK).

The firing strength of each rule is computed with the Ada-softmin operator.

Reference

G. Xue, Q. Chang, J. Wang, K. Zhang and N. R. Pal, "An Adaptive Neuro-Fuzzy System With Integrated Feature Selection and Rule Extraction for High-Dimensional Classification Problems," in IEEE Transactions on Fuzzy Systems, vol. 31, no. 7, pp. 2167-2181, July 2023, doi: 10.1109/TFUZZ.2022.3220950.

Example
from highfis import AdaTSKClassifierEstimator

clf = AdaTSKClassifierEstimator(n_mfs=30, random_state=0)
clf.fit(X_train, y_train)
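
The softmin family smoothly approximates the minimum of the per-feature membership values, which keeps gradients alive where a hard min would not. The sketch below shows only a generic power-mean softmin; the "Ada" part, choosing the exponent adaptively from the current membership values to avoid underflow, is deliberately omitted, and the library's exact operator may differ:

import torch

def softmin(memberships: torch.Tensor, q: float) -> torch.Tensor:
    """Power-mean softmin over the last axis: (mean(x^q))^(1/q).

    For negative q of large magnitude this approaches min(x).
    """
    return memberships.pow(q).mean(dim=-1).pow(1.0 / q)

mu = torch.tensor([0.9, 0.7, 0.95])
print(softmin(mu, q=-40.0), mu.min())  # both close to 0.7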

Initialise an AdaTSK classifier.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `input_configs` | `list[InputConfig] \| None` | Per-feature `InputConfig` list. Only `name` is used when `mf_init="kmeans"`. | `None` |
| `n_mfs` | `int` | Number of k-means clusters / grid MFs. | `5` |
| `mf_init` | `str` | `"kmeans"` or `"grid"`. | `'kmeans'` |
| `sigma_scale` | `float \| str` | Sigma scale factor; `1.0` recommended, since Ada-softmin handles high-dimensional stability. | `1.0` |
| `random_state` | `int \| None` | Seed for k-means and weight initialisation. | `None` |
| `epochs` | `int` | Maximum training epochs. | `10` |
| `learning_rate` | `float` | Adam learning rate. | `0.01` |
| `verbose` | `bool \| int` | Print per-epoch progress. | `False` |
| `rule_base` | `str \| None` | `"coco"` or `"cartesian"`. | `None` |
| `batch_size` | `int \| None` | Mini-batch size. | `512` |
| `shuffle` | `bool` | Reshuffle each epoch. | `True` |
| `ur_weight` | `float` | Uncertainty regularisation weight. | `0.0` |
| `ur_target` | `float \| None` | Uncertainty regularisation target. | `None` |
| `consequent_batch_norm` | `bool` | Batch normalisation on consequent layers. | `False` |
| `patience` | `int \| None` | Early-stopping patience; set to `None` to disable early stopping. | `20` |
| `restore_best` | `bool` | If `True`, restore the best validation model weights after training. | `True` |
| `validation_data` | `tuple[Any, Any] \| None` | Optional `(X_val, y_val)` for early stopping. | `None` |
| `weight_decay` | `float` | L2 weight decay for consequent parameters. | `1e-08` |

Source code in highfis/estimators.py
def __init__(
    self,
    *,
    input_configs: list[InputConfig] | None = None,
    n_mfs: int = 5,
    mf_init: str = "kmeans",
    sigma_scale: float | str = 1.0,
    random_state: int | None = None,
    epochs: int = 10,
    learning_rate: float = 1e-2,
    verbose: bool | int = False,
    rule_base: str | None = None,
    batch_size: int | None = 512,
    shuffle: bool = True,
    ur_weight: float = 0.0,
    ur_target: float | None = None,
    consequent_batch_norm: bool = False,
    patience: int | None = 20,
    restore_best: bool = True,
    validation_data: tuple[Any, Any] | None = None,
    weight_decay: float = 1e-8,
) -> None:
    """Initialise an AdaTSK classifier.

    Args:
        input_configs: Per-feature :class:`InputConfig` list. Only
            ``name`` is used when ``mf_init="kmeans"``.
        n_mfs: Number of k-means clusters / grid MFs (default ``5``).
        mf_init: ``"kmeans"`` (default) or ``"grid"``.
        sigma_scale: Sigma scale factor. ``1.0`` recommended; Ada-softmin
            handles high-dimensional stability.
        random_state: Seed for k-means and weight initialisation.
        epochs: Maximum training epochs (default ``10``).
        learning_rate: Adam learning rate (default ``0.01``).
        verbose: Print per-epoch progress.
        rule_base: ``"coco"`` or ``"cartesian"``.
        batch_size: Mini-batch size (default ``512``).
        shuffle: Reshuffle each epoch.
        ur_weight: Uncertainty regularisation weight.
        ur_target: Uncertainty regularisation target.
        consequent_batch_norm: Batch normalisation on consequent layers.
        patience: Early-stopping patience (default ``20``). Set to ``None`` to disable early stopping.
        restore_best: If ``True`` (default), restore the best validation
            model weights after training.
        validation_data: Optional ``(X_val, y_val)`` for early stopping.
        weight_decay: L2 weight decay for consequent parameters.
    """
    super().__init__(
        input_configs=input_configs,
        n_mfs=n_mfs,
        mf_init=mf_init,
        sigma_scale=sigma_scale,
        random_state=random_state,
        epochs=epochs,
        learning_rate=learning_rate,
        verbose=verbose,
        rule_base=rule_base,
        batch_size=batch_size,
        shuffle=shuffle,
        ur_weight=ur_weight,
        ur_target=ur_target,
        consequent_batch_norm=consequent_batch_norm,
        patience=patience,
        restore_best=restore_best,
        validation_data=validation_data,
        weight_decay=weight_decay,
    )

evaluate

Compute classification evaluation metrics for the provided dataset.

Source code in highfis/estimators.py
def evaluate(
    self,
    X: Any,
    y: Any,
    metrics: list[str] | None = None,
    sample_weight: Any | None = None,
) -> dict[str, float]:
    """Compute classification evaluation metrics for the provided dataset."""
    y_true = np.asarray(y)
    y_pred = self.predict(X)
    return compute_metrics(
        task="classification",
        y_true=y_true,
        y_pred=y_pred,
        sample_weight=sample_weight,
        metrics=metrics,
    )

fit

Train the TSK classifier on labeled samples.

Source code in highfis/estimators.py
def fit(self, x: Any, y: Any) -> Self:
    """Train the TSK classifier on labeled samples."""
    x_arr, y_arr = check_X_y(x, y)

    if self.random_state is not None:
        torch.manual_seed(int(self.random_state))

    le = LabelEncoder()
    y_idx = le.fit_transform(np.asarray(y_arr))

    input_mfs, feature_names, effective_rule_base = self._build_input_mfs(x_arr)

    self.n_features_in_ = x_arr.shape[1]
    self.feature_names_in_ = np.asarray(feature_names, dtype=object)
    self.classes_ = le.classes_
    self._label_encoder_ = le

    self.model_ = self._build_model(input_mfs, len(self.classes_), effective_rule_base)

    y_t = torch.as_tensor(y_idx, dtype=torch.long)

    # Prepare validation tensors if provided
    x_val_t: torch.Tensor | None = None
    y_val_t: torch.Tensor | None = None
    if self.validation_data is not None:
        x_v, y_v = self.validation_data
        x_v_arr = check_array(x_v)
        y_v_idx = self._label_encoder_.transform(np.asarray(y_v))
        x_val_t = self._as_tensor_x(x_v_arr)
        y_val_t = torch.as_tensor(y_v_idx, dtype=torch.long)

    self.history_ = self.model_.fit(
        self._as_tensor_x(x_arr),
        y_t,
        epochs=int(self.epochs),
        learning_rate=float(self.learning_rate),
        batch_size=self.batch_size,
        shuffle=bool(self.shuffle),
        ur_weight=float(self.ur_weight),
        ur_target=self.ur_target,
        verbose=self.verbose,
        x_val=x_val_t,
        y_val=y_val_t,
        patience=self.patience,
        restore_best=self.restore_best,
        weight_decay=float(self.weight_decay),
    )
    self.rule_base_ = effective_rule_base
    return self

load classmethod

Load a persisted estimator created by save.

Source code in highfis/estimators.py
@classmethod
def load(cls, path: str) -> Self:
    """Load a persisted estimator created by save."""
    checkpoint = load_checkpoint(path)
    validate_checkpoint_payload(checkpoint, expected_estimator_class=cls.__name__)

    estimator = cls(**checkpoint["estimator_params"])
    model_init = checkpoint["model_init"]
    estimator.rule_base_ = model_init["rule_base"]
    estimator.model_ = estimator._build_model(
        model_init["input_mfs"],
        int(model_init["n_classes"]),
        str(model_init["rule_base"]),
    )
    estimator.model_.load_state_dict(checkpoint["model_state_dict"])

    fitted = checkpoint["fitted_attrs"]
    estimator.n_features_in_ = int(fitted["n_features_in"])
    estimator.feature_names_in_ = np.asarray(fitted["feature_names_in"], dtype=object)
    estimator.classes_ = np.asarray(fitted["classes"], dtype=object)
    label_encoder = LabelEncoder()
    label_encoder.classes_ = estimator.classes_
    estimator._label_encoder_ = label_encoder
    estimator.history_ = cast(dict[str, Any], checkpoint.get("history", {}))
    return estimator

predict

Predict class labels for input samples.

Source code in highfis/estimators.py
def predict(self, x: Any) -> np.ndarray:
    """Predict class labels for input samples."""
    proba = self.predict_proba(x)
    y_idx = np.argmax(proba, axis=1)
    return np.asarray(self._label_encoder_.inverse_transform(y_idx))

predict_proba

Predict class probabilities for input samples.

Source code in highfis/estimators.py
def predict_proba(self, x: Any) -> np.ndarray:
    """Predict class probabilities for input samples."""
    check_is_fitted(self, "model_")
    x_arr = check_array(x)
    if x_arr.shape[1] != self.n_features_in_:
        raise ValueError(f"expected {self.n_features_in_} features, got {x_arr.shape[1]}")
    probs = cast(Any, self.model_).predict_proba(self._as_tensor_x(x_arr))
    return probs.detach().cpu().numpy()
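
The returned array has one row per sample and one column per class, with columns ordered as in classes_. For example (placeholder arrays):

proba = clf.predict_proba(X_test)             # shape (n_samples, n_classes)
labels = clf.classes_[proba.argmax(axis=1)]   # same labels as clf.predict(X_test)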

save

Persist estimator configuration, model weights and fitted metadata.

Source code in highfis/estimators.py
def save(self, path: str) -> None:
    """Persist estimator configuration, model weights and fitted metadata."""
    checkpoint = self._build_checkpoint_base(
        model_init={
            "input_mfs": self.model_.input_mfs,
            "n_classes": len(self.classes_),
            "rule_base": self.rule_base_,
        },
        fitted_attrs={
            "n_features_in": int(self.n_features_in_),
            "feature_names_in": self.feature_names_in_.tolist(),
            "classes": self.classes_.tolist(),
        },
    )
    save_checkpoint(path, checkpoint)

score

Return classification accuracy on the provided dataset.

Source code in highfis/estimators.py
def score(self, X: Any, y: Any, sample_weight: Any = None) -> float:
    """Return classification accuracy on the provided dataset."""
    y_true = np.asarray(y)
    y_pred = self.predict(X)
    return float(accuracy_score(y_true, y_pred, sample_weight=sample_weight))

AdaTSKRegressorEstimator

Bases: _BaseRegressorEstimator

TSK regressor with adaptive softmin antecedent (AdaTSK).

The firing strength of each rule is computed with the Ada-softmin operator.

Reference

G. Xue, Q. Chang, J. Wang, K. Zhang and N. R. Pal, "An Adaptive Neuro-Fuzzy System With Integrated Feature Selection and Rule Extraction for High-Dimensional Classification Problems," in IEEE Transactions on Fuzzy Systems, vol. 31, no. 7, pp. 2167-2181, July 2023, doi: 10.1109/TFUZZ.2022.3220950.

Example
from highfis import AdaTSKRegressorEstimator

reg = AdaTSKRegressorEstimator(n_mfs=30, random_state=0)
reg.fit(X_train, y_train)
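
A typical regression fit with early stopping wires the validation split through the constructor; epochs then acts as an upper bound. Array names below are placeholders:

from highfis import AdaTSKRegressorEstimator

reg = AdaTSKRegressorEstimator(
    n_mfs=10,
    epochs=200,                        # upper bound; early stopping may halt sooner
    patience=20,                       # epochs without validation improvement
    restore_best=True,                 # roll back to the best validation weights
    validation_data=(X_val, y_val),
    random_state=0,
)
reg.fit(X_train, y_train)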

Initialise an AdaTSK regressor.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `input_configs` | `list[InputConfig] \| None` | Per-feature `InputConfig` list. Only `name` is used when `mf_init="kmeans"`. | `None` |
| `n_mfs` | `int` | Number of k-means clusters / grid MFs. | `5` |
| `mf_init` | `str` | `"kmeans"` or `"grid"`. | `'kmeans'` |
| `sigma_scale` | `float \| str` | Sigma scale factor; `1.0` recommended. | `1.0` |
| `random_state` | `int \| None` | Seed for k-means and weight initialisation. | `None` |
| `epochs` | `int` | Maximum training epochs. | `10` |
| `learning_rate` | `float` | Adam learning rate. | `0.01` |
| `verbose` | `bool \| int` | Print per-epoch progress. | `False` |
| `rule_base` | `str \| None` | `"coco"` or `"cartesian"`. | `None` |
| `batch_size` | `int \| None` | Mini-batch size. | `512` |
| `shuffle` | `bool` | Reshuffle each epoch. | `True` |
| `ur_weight` | `float` | Uncertainty regularisation weight. | `0.0` |
| `ur_target` | `float \| None` | Uncertainty regularisation target. | `None` |
| `consequent_batch_norm` | `bool` | Batch normalisation on consequent layers. | `False` |
| `patience` | `int \| None` | Early-stopping patience; set to `None` to disable early stopping. | `20` |
| `restore_best` | `bool` | If `True`, restore the best validation model weights after training. | `True` |
| `validation_data` | `tuple[Any, Any] \| None` | Optional `(X_val, y_val)` for early stopping. | `None` |
| `weight_decay` | `float` | L2 weight decay for consequent parameters. | `1e-08` |

Source code in highfis/estimators.py
def __init__(
    self,
    *,
    input_configs: list[InputConfig] | None = None,
    n_mfs: int = 5,
    mf_init: str = "kmeans",
    sigma_scale: float | str = 1.0,
    random_state: int | None = None,
    epochs: int = 10,
    learning_rate: float = 1e-2,
    verbose: bool | int = False,
    rule_base: str | None = None,
    batch_size: int | None = 512,
    shuffle: bool = True,
    ur_weight: float = 0.0,
    ur_target: float | None = None,
    consequent_batch_norm: bool = False,
    patience: int | None = 20,
    restore_best: bool = True,
    validation_data: tuple[Any, Any] | None = None,
    weight_decay: float = 1e-8,
) -> None:
    """Initialise an AdaTSK regressor.

    Args:
        input_configs: Per-feature :class:`InputConfig` list. Only
            ``name`` is used when ``mf_init="kmeans"``.
        n_mfs: Number of k-means clusters / grid MFs (default ``5``).
        mf_init: ``"kmeans"`` (default) or ``"grid"``.
        sigma_scale: Sigma scale factor. ``1.0`` recommended.
        random_state: Seed for k-means and weight initialisation.
        epochs: Maximum training epochs (default ``10``).
        learning_rate: Adam learning rate (default ``0.01``).
        verbose: Print per-epoch progress.
        rule_base: ``"coco"`` or ``"cartesian"``.
        batch_size: Mini-batch size (default ``512``).
        shuffle: Reshuffle each epoch.
        ur_weight: Uncertainty regularisation weight.
        ur_target: Uncertainty regularisation target.
        consequent_batch_norm: Batch normalisation on consequent layers.
        patience: Early-stopping patience (default ``20``). Set to ``None`` to disable early stopping.
        restore_best: If ``True`` (default), restore the best validation
            model weights after training.
        validation_data: Optional ``(X_val, y_val)`` for early stopping.
        weight_decay: L2 weight decay for consequent parameters.
    """
    super().__init__(
        input_configs=input_configs,
        n_mfs=n_mfs,
        mf_init=mf_init,
        sigma_scale=sigma_scale,
        random_state=random_state,
        epochs=epochs,
        learning_rate=learning_rate,
        verbose=verbose,
        rule_base=rule_base,
        batch_size=batch_size,
        shuffle=shuffle,
        ur_weight=ur_weight,
        ur_target=ur_target,
        consequent_batch_norm=consequent_batch_norm,
        patience=patience,
        restore_best=restore_best,
        validation_data=validation_data,
        weight_decay=weight_decay,
    )

evaluate

Compute regression evaluation metrics for the provided dataset.

Source code in highfis/estimators.py
def evaluate(
    self,
    X: Any,
    y: Any,
    metrics: list[str] | None = None,
    sample_weight: Any | None = None,
) -> dict[str, float]:
    """Compute regression evaluation metrics for the provided dataset."""
    y_true = np.asarray(y)
    y_pred = self.predict(X)
    return compute_metrics(
        task="regression",
        y_true=y_true,
        y_pred=y_pred,
        sample_weight=sample_weight,
        metrics=metrics,
    )

fit

Train the TSK regressor on labeled samples.

Source code in highfis/estimators.py
def fit(self, x: Any, y: Any) -> Self:
    """Train the TSK regressor on labeled samples."""
    x_arr, y_arr = check_X_y(x, y)

    if self.random_state is not None:
        torch.manual_seed(int(self.random_state))

    input_mfs, feature_names, effective_rule_base = self._build_input_mfs(x_arr)

    self.n_features_in_ = x_arr.shape[1]
    self.feature_names_in_ = np.asarray(feature_names, dtype=object)

    self.model_ = self._build_regressor_model(input_mfs, effective_rule_base)

    y_t = torch.as_tensor(np.asarray(y_arr, dtype=np.float32), dtype=torch.float32)

    # Prepare validation tensors if provided
    x_val_t: torch.Tensor | None = None
    y_val_t: torch.Tensor | None = None
    if self.validation_data is not None:
        x_v, y_v = self.validation_data
        x_v_arr = check_array(x_v)
        x_val_t = self._as_tensor_x(x_v_arr)
        y_val_t = torch.as_tensor(np.asarray(y_v, dtype=np.float32), dtype=torch.float32)

    self.history_ = self.model_.fit(
        self._as_tensor_x(x_arr),
        y_t,
        epochs=int(self.epochs),
        learning_rate=float(self.learning_rate),
        batch_size=self.batch_size,
        shuffle=bool(self.shuffle),
        ur_weight=float(self.ur_weight),
        ur_target=self.ur_target,
        verbose=self.verbose,
        x_val=x_val_t,
        y_val=y_val_t,
        patience=self.patience,
        restore_best=self.restore_best,
        weight_decay=float(self.weight_decay),
    )
    self.rule_base_ = effective_rule_base
    return self

load classmethod

Load a persisted estimator created by save.

Source code in highfis/estimators.py
@classmethod
def load(cls, path: str) -> Self:
    """Load a persisted estimator created by save."""
    checkpoint = load_checkpoint(path)
    validate_checkpoint_payload(checkpoint, expected_estimator_class=cls.__name__)

    estimator = cls(**checkpoint["estimator_params"])
    model_init = checkpoint["model_init"]
    estimator.rule_base_ = model_init["rule_base"]
    estimator.model_ = estimator._build_regressor_model(
        model_init["input_mfs"],
        str(model_init["rule_base"]),
    )
    estimator.model_.load_state_dict(checkpoint["model_state_dict"])

    fitted = checkpoint["fitted_attrs"]
    estimator.n_features_in_ = int(fitted["n_features_in"])
    estimator.feature_names_in_ = np.asarray(fitted["feature_names_in"], dtype=object)
    estimator.history_ = cast(dict[str, Any], checkpoint.get("history", {}))
    return estimator

predict

Predict continuous target values for input samples.

Source code in highfis/estimators.py
def predict(self, x: Any) -> np.ndarray:
    """Predict continuous target values for input samples."""
    check_is_fitted(self, "model_")
    x_arr = check_array(x)
    if x_arr.shape[1] != self.n_features_in_:
        raise ValueError(f"expected {self.n_features_in_} features, got {x_arr.shape[1]}")
    preds = cast(Any, self.model_).predict(self._as_tensor_x(x_arr))
    return preds.detach().cpu().numpy()

save

Persist estimator configuration, model weights and fitted metadata.

Source code in highfis/estimators.py
def save(self, path: str) -> None:
    """Persist estimator configuration, model weights and fitted metadata."""
    checkpoint = self._build_checkpoint_base(
        model_init={
            "input_mfs": self.model_.input_mfs,
            "rule_base": self.rule_base_,
        },
        fitted_attrs={
            "n_features_in": int(self.n_features_in_),
            "feature_names_in": self.feature_names_in_.tolist(),
        },
    )
    save_checkpoint(path, checkpoint)

DGALETSKClassifierEstimator

Bases: FSREAdaTSKClassifierEstimator

DG-ALETSK classifier with ALE-softmin antecedent and double-group gates.

DG-ALETSK extends FSRE-AdaTSK by replacing the adaptive softmin with the Adaptive Ln-Exp (ALE) softmin, a smoother variant with improved numerical stability. It also uses a zero-order consequent in the DG (double-gate) training phase and optionally converts to first-order after gate-based pruning.

Reference

G. Xue, J. Wang, B. Yuan and C. Dai, "DG-ALETSK: A High-Dimensional Fuzzy Approach With Simultaneous Feature Selection and Rule Extraction," in IEEE Transactions on Fuzzy Systems, vol. 31, no. 11, pp. 3866-3880, Nov. 2023, doi: 10.1109/TFUZZ.2023.3270445.

Example
from highfis import DGALETSKClassifierEstimator

clf = DGALETSKClassifierEstimator(
    n_mfs=30, lambda_init=1.0, use_en_frb=False, random_state=0
)
clf.fit(X_train, y_train)
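
The Ln-Exp form of softmin is a log-sum-exp smoothing of the minimum; larger λ tracks the true minimum more closely while staying differentiable. Below is a minimal sketch of that form only, with a fixed λ; the adaptive λ schedule is the method's contribution and is not reproduced here:

import torch

def lnexp_softmin(memberships: torch.Tensor, lam: float) -> torch.Tensor:
    """Ln-Exp softmin over the last axis: -(1/lam) * log(mean(exp(-lam * x)))."""
    return -torch.log(torch.exp(-lam * memberships).mean(dim=-1)) / lam

mu = torch.tensor([0.9, 0.7, 0.95])
print(lnexp_softmin(mu, lam=50.0), mu.min())  # both close to 0.7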

Initialise an FSRE-AdaTSK classifier. (DG-ALETSK inherits this initialiser from FSREAdaTSKClassifierEstimator, which is why the docstring below refers to FSRE-AdaTSK.)

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `lambda_init` | `float` | Initial ALE-softmin parameter `λ > 0`; inherited by `DGALETSKClassifierEstimator` and not used by FSRE-AdaTSK proper (Ada-softmin computes its index from the current membership values). | `1.0` |
| `use_en_frb` | `bool` | If `True`, use the Enhanced FRB (En-FRB), whose size grows linearly with the number of features, allowing more candidate rules for the RE phase. Xue et al. (2023) activate En-FRB after the FS phase; set `False` to keep the compact CoCo-FRB. | `False` |
| `input_configs` | `list[InputConfig] \| None` | Per-feature `InputConfig` list. Only `name` is used when `mf_init="kmeans"`. | `None` |
| `n_mfs` | `int` | Number of k-means clusters / grid MFs. | `5` |
| `mf_init` | `str` | `"kmeans"` or `"grid"`. | `'kmeans'` |
| `sigma_scale` | `float \| str` | Sigma scale factor; `1.0` recommended. | `1.0` |
| `random_state` | `int \| None` | Seed for k-means and weight initialisation. | `None` |
| `epochs` | `int` | Maximum training epochs. | `10` |
| `learning_rate` | `float` | Adam learning rate. | `0.01` |
| `verbose` | `bool \| int` | Print per-epoch progress. | `False` |
| `rule_base` | `str \| None` | `"coco"` or `"cartesian"`. | `None` |
| `batch_size` | `int \| None` | Mini-batch size. | `512` |
| `shuffle` | `bool` | Reshuffle each epoch. | `True` |
| `ur_weight` | `float` | Uncertainty regularisation weight. | `0.0` |
| `ur_target` | `float \| None` | Uncertainty regularisation target. | `None` |
| `consequent_batch_norm` | `bool` | Batch normalisation on consequent layers. | `False` |
| `patience` | `int \| None` | Early-stopping patience; set to `None` to disable early stopping. | `20` |
| `restore_best` | `bool` | If `True`, restore the best validation model weights after training. | `True` |
| `validation_data` | `tuple[Any, Any] \| None` | Optional `(X_val, y_val)` for early stopping. | `None` |
| `weight_decay` | `float` | L2 weight decay for consequent parameters. | `1e-08` |

Raises:

| Type | Description |
| --- | --- |
| `ValueError` | If `lambda_init <= 0`. |

Source code in highfis/estimators.py
def __init__(
    self,
    *,
    lambda_init: float = 1.0,
    use_en_frb: bool = False,
    input_configs: list[InputConfig] | None = None,
    n_mfs: int = 5,
    mf_init: str = "kmeans",
    sigma_scale: float | str = 1.0,
    random_state: int | None = None,
    epochs: int = 10,
    learning_rate: float = 1e-2,
    verbose: bool | int = False,
    rule_base: str | None = None,
    batch_size: int | None = 512,
    shuffle: bool = True,
    ur_weight: float = 0.0,
    ur_target: float | None = None,
    consequent_batch_norm: bool = False,
    patience: int | None = 20,
    restore_best: bool = True,
    validation_data: tuple[Any, Any] | None = None,
    weight_decay: float = 1e-8,
) -> None:
    """Initialise an FSRE-AdaTSK classifier.

    Args:
        lambda_init: Initial ALE-softmin parameter ``λ > 0`` inherited
            by :class:`DGALETSKClassifierEstimator`; not used by
            FSRE-AdaTSK proper (Ada-softmin computes its index from
            the current membership values). Default ``1.0``.
        use_en_frb: If ``True``, use the Enhanced FRB (En-FRB) whose
            size grows linearly with the number of features, allowing
            more candidate rules for the RE phase. Xue et al. (2023)
            activate En-FRB after the FS phase; set ``False`` (default)
            to keep the compact CoCo-FRB.
        input_configs: Per-feature :class:`InputConfig` list. Only
            ``name`` is used when ``mf_init="kmeans"``.
        n_mfs: Number of k-means clusters / grid MFs (default ``5``).
        mf_init: ``"kmeans"`` (default) or ``"grid"``.
        sigma_scale: Sigma scale factor. ``1.0`` recommended.
        random_state: Seed for k-means and weight initialisation.
        epochs: Maximum training epochs (default ``10``).
        learning_rate: Adam learning rate (default ``0.01``).
        verbose: Print per-epoch progress.
        rule_base: ``"coco"`` or ``"cartesian"``.
        batch_size: Mini-batch size (default ``512``).
        shuffle: Reshuffle each epoch.
        ur_weight: Uncertainty regularisation weight.
        ur_target: Uncertainty regularisation target.
        consequent_batch_norm: Batch normalisation on consequent layers.
        patience: Early-stopping patience (default ``20``). Set to ``None`` to disable early stopping.
        restore_best: If ``True`` (default), restore the best validation
            model weights after training.
        validation_data: Optional ``(X_val, y_val)`` for early stopping.
        weight_decay: L2 weight decay for consequent parameters.

    Raises:
        ValueError: If ``lambda_init <= 0``.
    """
    if lambda_init <= 0.0:
        raise ValueError("lambda_init must be > 0")
    self.lambda_init = float(lambda_init)
    self.use_en_frb = bool(use_en_frb)
    super().__init__(
        input_configs=input_configs,
        n_mfs=n_mfs,
        mf_init=mf_init,
        sigma_scale=sigma_scale,
        random_state=random_state,
        epochs=epochs,
        learning_rate=learning_rate,
        verbose=verbose,
        rule_base=rule_base,
        batch_size=batch_size,
        shuffle=shuffle,
        ur_weight=ur_weight,
        ur_target=ur_target,
        consequent_batch_norm=consequent_batch_norm,
        patience=patience,
        restore_best=restore_best,
        validation_data=validation_data,
        weight_decay=weight_decay,
    )

evaluate

Compute classification evaluation metrics for the provided dataset.

Source code in highfis/estimators.py
def evaluate(
    self,
    X: Any,
    y: Any,
    metrics: list[str] | None = None,
    sample_weight: Any | None = None,
) -> dict[str, float]:
    """Compute classification evaluation metrics for the provided dataset."""
    y_true = np.asarray(y)
    y_pred = self.predict(X)
    return compute_metrics(
        task="classification",
        y_true=y_true,
        y_pred=y_pred,
        sample_weight=sample_weight,
        metrics=metrics,
    )

fit

Train the TSK classifier on labeled samples.

Source code in highfis/estimators.py
def fit(self, x: Any, y: Any) -> Self:
    """Train the TSK classifier on labeled samples."""
    x_arr, y_arr = check_X_y(x, y)

    if self.random_state is not None:
        torch.manual_seed(int(self.random_state))

    le = LabelEncoder()
    y_idx = le.fit_transform(np.asarray(y_arr))

    input_mfs, feature_names, effective_rule_base = self._build_input_mfs(x_arr)

    self.n_features_in_ = x_arr.shape[1]
    self.feature_names_in_ = np.asarray(feature_names, dtype=object)
    self.classes_ = le.classes_
    self._label_encoder_ = le

    self.model_ = self._build_model(input_mfs, len(self.classes_), effective_rule_base)

    y_t = torch.as_tensor(y_idx, dtype=torch.long)

    # Prepare validation tensors if provided
    x_val_t: torch.Tensor | None = None
    y_val_t: torch.Tensor | None = None
    if self.validation_data is not None:
        x_v, y_v = self.validation_data
        x_v_arr = check_array(x_v)
        y_v_idx = self._label_encoder_.transform(np.asarray(y_v))
        x_val_t = self._as_tensor_x(x_v_arr)
        y_val_t = torch.as_tensor(y_v_idx, dtype=torch.long)

    self.history_ = self.model_.fit(
        self._as_tensor_x(x_arr),
        y_t,
        epochs=int(self.epochs),
        learning_rate=float(self.learning_rate),
        batch_size=self.batch_size,
        shuffle=bool(self.shuffle),
        ur_weight=float(self.ur_weight),
        ur_target=self.ur_target,
        verbose=self.verbose,
        x_val=x_val_t,
        y_val=y_val_t,
        patience=self.patience,
        restore_best=self.restore_best,
        weight_decay=float(self.weight_decay),
    )
    self.rule_base_ = effective_rule_base
    return self

load classmethod

Load a persisted estimator created by save.

Source code in highfis/estimators.py
@classmethod
def load(cls, path: str) -> Self:
    """Load a persisted estimator created by save."""
    checkpoint = load_checkpoint(path)
    validate_checkpoint_payload(checkpoint, expected_estimator_class=cls.__name__)

    estimator = cls(**checkpoint["estimator_params"])
    model_init = checkpoint["model_init"]
    estimator.rule_base_ = model_init["rule_base"]
    estimator.model_ = estimator._build_model(
        model_init["input_mfs"],
        int(model_init["n_classes"]),
        str(model_init["rule_base"]),
    )
    estimator.model_.load_state_dict(checkpoint["model_state_dict"])

    fitted = checkpoint["fitted_attrs"]
    estimator.n_features_in_ = int(fitted["n_features_in"])
    estimator.feature_names_in_ = np.asarray(fitted["feature_names_in"], dtype=object)
    estimator.classes_ = np.asarray(fitted["classes"], dtype=object)
    label_encoder = LabelEncoder()
    label_encoder.classes_ = estimator.classes_
    estimator._label_encoder_ = label_encoder
    estimator.history_ = cast(dict[str, Any], checkpoint.get("history", {}))
    return estimator

predict

Predict class labels for input samples.

Source code in highfis/estimators.py
def predict(self, x: Any) -> np.ndarray:
    """Predict class labels for input samples."""
    proba = self.predict_proba(x)
    y_idx = np.argmax(proba, axis=1)
    return np.asarray(self._label_encoder_.inverse_transform(y_idx))

predict_proba

Predict class probabilities for input samples.

Source code in highfis/estimators.py
def predict_proba(self, x: Any) -> np.ndarray:
    """Predict class probabilities for input samples."""
    check_is_fitted(self, "model_")
    x_arr = check_array(x)
    if x_arr.shape[1] != self.n_features_in_:
        raise ValueError(f"expected {self.n_features_in_} features, got {x_arr.shape[1]}")
    probs = cast(Any, self.model_).predict_proba(self._as_tensor_x(x_arr))
    return probs.detach().cpu().numpy()

save

Persist estimator configuration, model weights and fitted metadata.

Source code in highfis/estimators.py
def save(self, path: str) -> None:
    """Persist estimator configuration, model weights and fitted metadata."""
    checkpoint = self._build_checkpoint_base(
        model_init={
            "input_mfs": self.model_.input_mfs,
            "n_classes": len(self.classes_),
            "rule_base": self.rule_base_,
        },
        fitted_attrs={
            "n_features_in": int(self.n_features_in_),
            "feature_names_in": self.feature_names_in_.tolist(),
            "classes": self.classes_.tolist(),
        },
    )
    save_checkpoint(path, checkpoint)

score

Return classification accuracy on the provided dataset.

Source code in highfis/estimators.py
def score(self, X: Any, y: Any, sample_weight: Any = None) -> float:
    """Return classification accuracy on the provided dataset."""
    y_true = np.asarray(y)
    y_pred = self.predict(X)
    return float(accuracy_score(y_true, y_pred, sample_weight=sample_weight))

DGALETSKRegressorEstimator

Bases: FSREAdaTSKRegressorEstimator

DG-ALETSK regressor with ALE-softmin antecedent and double-group gates.

DG-ALETSK extends FSRE-AdaTSK by replacing the adaptive softmin with the Adaptive Ln-Exp (ALE) softmin, a smoother variant with improved numerical stability. It also uses a zero-order consequent in the DG (double-gate) training phase and optionally converts to first-order after gate-based pruning.

Reference

G. Xue, J. Wang, B. Yuan and C. Dai, "DG-ALETSK: A High-Dimensional Fuzzy Approach With Simultaneous Feature Selection and Rule Extraction," in IEEE Transactions on Fuzzy Systems, vol. 31, no. 11, pp. 3866-3880, Nov. 2023, doi: 10.1109/TFUZZ.2023.3270445.

Example
from highfis import DGALETSKRegressorEstimator

reg = DGALETSKRegressorEstimator(
    n_mfs=30, lambda_init=1.0, use_en_frb=False, random_state=0
)
reg.fit(X_train, y_train)

Initialise an FSRE-AdaTSK regressor. (DG-ALETSK inherits this initialiser from FSREAdaTSKRegressorEstimator, which is why the docstring below refers to FSRE-AdaTSK.)

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `lambda_init` | `float` | Initial ALE-softmin parameter `λ > 0`; inherited by `DGALETSKRegressorEstimator` and not used by FSRE-AdaTSK proper (Ada-softmin computes its index from the current membership values). | `1.0` |
| `use_en_frb` | `bool` | If `True`, use the Enhanced FRB (En-FRB) for rule extraction; `False` keeps CoCo-FRB. | `False` |
| `input_configs` | `list[InputConfig] \| None` | Per-feature `InputConfig` list. Only `name` is used when `mf_init="kmeans"`. | `None` |
| `n_mfs` | `int` | Number of k-means clusters / grid MFs. | `5` |
| `mf_init` | `str` | `"kmeans"` or `"grid"`. | `'kmeans'` |
| `sigma_scale` | `float \| str` | Sigma scale factor; `1.0` recommended. | `1.0` |
| `random_state` | `int \| None` | Seed for k-means and weight initialisation. | `None` |
| `epochs` | `int` | Maximum training epochs. | `10` |
| `learning_rate` | `float` | Adam learning rate. | `0.01` |
| `verbose` | `bool \| int` | Print per-epoch progress. | `False` |
| `rule_base` | `str \| None` | `"coco"` or `"cartesian"`. | `None` |
| `batch_size` | `int \| None` | Mini-batch size. | `512` |
| `shuffle` | `bool` | Reshuffle each epoch. | `True` |
| `ur_weight` | `float` | Uncertainty regularisation weight. | `0.0` |
| `ur_target` | `float \| None` | Uncertainty regularisation target. | `None` |
| `consequent_batch_norm` | `bool` | Batch normalisation on consequent layers. | `False` |
| `patience` | `int \| None` | Early-stopping patience; set to `None` to disable early stopping. | `20` |
| `restore_best` | `bool` | If `True`, restore the best validation model weights after training. | `True` |
| `validation_data` | `tuple[Any, Any] \| None` | Optional `(X_val, y_val)` for early stopping. | `None` |
| `weight_decay` | `float` | L2 weight decay for consequent parameters. | `1e-08` |

Raises:

| Type | Description |
| --- | --- |
| `ValueError` | If `lambda_init <= 0`. |

Source code in highfis/estimators.py
def __init__(
    self,
    *,
    lambda_init: float = 1.0,
    use_en_frb: bool = False,
    input_configs: list[InputConfig] | None = None,
    n_mfs: int = 5,
    mf_init: str = "kmeans",
    sigma_scale: float | str = 1.0,
    random_state: int | None = None,
    epochs: int = 10,
    learning_rate: float = 1e-2,
    verbose: bool | int = False,
    rule_base: str | None = None,
    batch_size: int | None = 512,
    shuffle: bool = True,
    ur_weight: float = 0.0,
    ur_target: float | None = None,
    consequent_batch_norm: bool = False,
    patience: int | None = 20,
    restore_best: bool = True,
    validation_data: tuple[Any, Any] | None = None,
    weight_decay: float = 1e-8,
) -> None:
    """Initialise an FSRE-AdaTSK regressor.

    Args:
        lambda_init: Initial ALE-softmin parameter ``λ > 0`` inherited
            by :class:`DGALETSKRegressorEstimator`; not used by
            FSRE-AdaTSK proper (Ada-softmin computes its index from
            the current membership values). Default ``1.0``.
        use_en_frb: If ``True``, use the Enhanced FRB (En-FRB) for rule
            extraction. Default ``False`` keeps CoCo-FRB.
        input_configs: Per-feature :class:`InputConfig` list. Only
            ``name`` is used when ``mf_init="kmeans"``.
        n_mfs: Number of k-means clusters / grid MFs (default ``5``).
        mf_init: ``"kmeans"`` (default) or ``"grid"``.
        sigma_scale: Sigma scale factor. ``1.0`` recommended.
        random_state: Seed for k-means and weight initialisation.
        epochs: Maximum training epochs (default ``10``).
        learning_rate: Adam learning rate (default ``0.01``).
        verbose: Print per-epoch progress.
        rule_base: ``"coco"`` or ``"cartesian"``.
        batch_size: Mini-batch size (default ``512``).
        shuffle: Reshuffle each epoch.
        ur_weight: Uncertainty regularisation weight.
        ur_target: Uncertainty regularisation target.
        consequent_batch_norm: Batch normalisation on consequent layers.
        patience: Early-stopping patience (default ``20``). Set to ``None`` to disable early stopping.
        restore_best: If ``True`` (default), restore the best validation
            model weights after training.
        validation_data: Optional ``(X_val, y_val)`` for early stopping.
        weight_decay: L2 weight decay for consequent parameters.

    Raises:
        ValueError: If ``lambda_init <= 0``.
    """
    if lambda_init <= 0.0:
        raise ValueError("lambda_init must be > 0")
    self.lambda_init = float(lambda_init)
    self.use_en_frb = bool(use_en_frb)
    super().__init__(
        input_configs=input_configs,
        n_mfs=n_mfs,
        mf_init=mf_init,
        sigma_scale=sigma_scale,
        random_state=random_state,
        epochs=epochs,
        learning_rate=learning_rate,
        verbose=verbose,
        rule_base=rule_base,
        batch_size=batch_size,
        shuffle=shuffle,
        ur_weight=ur_weight,
        ur_target=ur_target,
        consequent_batch_norm=consequent_batch_norm,
        patience=patience,
        restore_best=restore_best,
        validation_data=validation_data,
        weight_decay=weight_decay,
    )

evaluate

Compute regression evaluation metrics for the provided dataset.

Source code in highfis/estimators.py
def evaluate(
    self,
    X: Any,
    y: Any,
    metrics: list[str] | None = None,
    sample_weight: Any | None = None,
) -> dict[str, float]:
    """Compute regression evaluation metrics for the provided dataset."""
    y_true = np.asarray(y)
    y_pred = self.predict(X)
    return compute_metrics(
        task="regression",
        y_true=y_true,
        y_pred=y_pred,
        sample_weight=sample_weight,
        metrics=metrics,
    )

fit

Train the TSK regressor on labeled samples.

Source code in highfis/estimators.py
def fit(self, x: Any, y: Any) -> Self:
    """Train the TSK regressor on labeled samples."""
    x_arr, y_arr = check_X_y(x, y)

    if self.random_state is not None:
        torch.manual_seed(int(self.random_state))

    input_mfs, feature_names, effective_rule_base = self._build_input_mfs(x_arr)

    self.n_features_in_ = x_arr.shape[1]
    self.feature_names_in_ = np.asarray(feature_names, dtype=object)

    self.model_ = self._build_regressor_model(input_mfs, effective_rule_base)

    y_t = torch.as_tensor(np.asarray(y_arr, dtype=np.float32), dtype=torch.float32)

    # Prepare validation tensors if provided
    x_val_t: torch.Tensor | None = None
    y_val_t: torch.Tensor | None = None
    if self.validation_data is not None:
        x_v, y_v = self.validation_data
        x_v_arr = check_array(x_v)
        x_val_t = self._as_tensor_x(x_v_arr)
        y_val_t = torch.as_tensor(np.asarray(y_v, dtype=np.float32), dtype=torch.float32)

    self.history_ = self.model_.fit(
        self._as_tensor_x(x_arr),
        y_t,
        epochs=int(self.epochs),
        learning_rate=float(self.learning_rate),
        batch_size=self.batch_size,
        shuffle=bool(self.shuffle),
        ur_weight=float(self.ur_weight),
        ur_target=self.ur_target,
        verbose=self.verbose,
        x_val=x_val_t,
        y_val=y_val_t,
        patience=self.patience,
        restore_best=self.restore_best,
        weight_decay=float(self.weight_decay),
    )
    self.rule_base_ = effective_rule_base
    return self

load classmethod

Load a persisted estimator created by save.

Source code in highfis/estimators.py
@classmethod
def load(cls, path: str) -> Self:
    """Load a persisted estimator created by save."""
    checkpoint = load_checkpoint(path)
    validate_checkpoint_payload(checkpoint, expected_estimator_class=cls.__name__)

    estimator = cls(**checkpoint["estimator_params"])
    model_init = checkpoint["model_init"]
    estimator.rule_base_ = model_init["rule_base"]
    estimator.model_ = estimator._build_regressor_model(
        model_init["input_mfs"],
        str(model_init["rule_base"]),
    )
    estimator.model_.load_state_dict(checkpoint["model_state_dict"])

    fitted = checkpoint["fitted_attrs"]
    estimator.n_features_in_ = int(fitted["n_features_in"])
    estimator.feature_names_in_ = np.asarray(fitted["feature_names_in"], dtype=object)
    estimator.history_ = cast(dict[str, Any], checkpoint.get("history", {}))
    return estimator

predict

Predict continuous target values for input samples.

Source code in highfis/estimators.py
def predict(self, x: Any) -> np.ndarray:
    """Predict continuous target values for input samples."""
    check_is_fitted(self, "model_")
    x_arr = check_array(x)
    if x_arr.shape[1] != self.n_features_in_:
        raise ValueError(f"expected {self.n_features_in_} features, got {x_arr.shape[1]}")
    preds = cast(Any, self.model_).predict(self._as_tensor_x(x_arr))
    return preds.detach().cpu().numpy()

save

Persist estimator configuration, model weights and fitted metadata.

Source code in highfis/estimators.py
def save(self, path: str) -> None:
    """Persist estimator configuration, model weights and fitted metadata."""
    checkpoint = self._build_checkpoint_base(
        model_init={
            "input_mfs": self.model_.input_mfs,
            "rule_base": self.rule_base_,
        },
        fitted_attrs={
            "n_features_in": int(self.n_features_in_),
            "feature_names_in": self.feature_names_in_.tolist(),
        },
    )
    save_checkpoint(path, checkpoint)

DGTSKClassifierEstimator

Bases: _BaseClassifierEstimator

DG-TSK classifier with M-gate antecedent and point-based FRB (P-FRB).

DG-TSK uses double groups of data-guided M-gate functions, one group acting on features and one on rules, to select relevant features and extract rules automatically.

Reference

Guangdong Xue, Jian Wang, Bingjie Zhang, Bin Yuan, Caili Dai, Double groups of gates based Takagi-Sugeno-Kang (DG-TSK) fuzzy system for simultaneous feature selection and rule extraction, Fuzzy Sets and Systems, Volume 469, 2023, 108627, ISSN 0165-0114, https://doi.org/10.1016/j.fss.2023.108627.

Example
from highfis import DGTSKClassifierEstimator

clf = DGTSKClassifierEstimator(n_mfs=30, use_en_frb=False, random_state=0)
clf.fit(X_train, y_train)
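
The precise M-gate shape is defined in the reference; generically, a gate is a smooth map from a learnable parameter to the unit interval that scales a feature's (or rule's) contribution, so training can push it toward 0 (pruned) or 1 (kept). A hypothetical stand-in using a steep sigmoid, purely for illustration:

import torch

def sigmoid_gate(theta: torch.Tensor, steepness: float = 10.0) -> torch.Tensor:
    # Illustrative gate in (0, 1); DG-TSK's actual M-gate differs in shape.
    return torch.sigmoid(steepness * theta)

theta = torch.zeros(5, requires_grad=True)  # one gate parameter per feature
gates = sigmoid_gate(theta)                 # ~0.5 at init; trained toward 0 or 1
x_gated = torch.randn(8, 5) * gates         # gated batch of 8 samples, 5 features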

Initialise a DG-TSK classifier.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `use_en_frb` | `bool` | If `True`, use the Enhanced FRB (En-FRB) for rule extraction (P-FRB); `False` keeps CoCo-FRB. | `False` |
| `input_configs` | `list[InputConfig] \| None` | Per-feature `InputConfig` list. | `None` |
| `n_mfs` | `int` | Number of k-means clusters / grid MFs. | `5` |
| `mf_init` | `str` | `"kmeans"` or `"grid"`. | `'kmeans'` |
| `sigma_scale` | `float \| str` | Sigma scale factor; `1.0` recommended. | `1.0` |
| `random_state` | `int \| None` | Seed for reproducibility. | `None` |
| `epochs` | `int` | Maximum training epochs. | `10` |
| `learning_rate` | `float` | Adam learning rate. | `0.01` |
| `verbose` | `bool \| int` | Print per-epoch progress. | `False` |
| `rule_base` | `str \| None` | `"coco"` or `"cartesian"`. | `None` |
| `batch_size` | `int \| None` | Mini-batch size. | `512` |
| `shuffle` | `bool` | Reshuffle each epoch. | `True` |
| `ur_weight` | `float` | Uncertainty regularisation weight. | `0.0` |
| `ur_target` | `float \| None` | Uncertainty regularisation target. | `None` |
| `consequent_batch_norm` | `bool` | Batch normalisation on consequent layers. | `False` |
| `pfrb_max_rules` | `int \| None` | Maximum number of point-based FRB rules when `rule_base='pfrb'`; `None` uses all training samples. | `None` |
| `patience` | `int \| None` | Early-stopping patience; set to `None` to disable early stopping. | `20` |
| `restore_best` | `bool` | If `True`, restore the best validation model weights after training. | `True` |
| `validation_data` | `tuple[Any, Any] \| None` | Optional `(X_val, y_val)` for early stopping. | `None` |
| `weight_decay` | `float` | L2 weight decay for consequent parameters. | `1e-08` |

Source code in highfis/estimators.py
def __init__(
    self,
    *,
    use_en_frb: bool = False,
    input_configs: list[InputConfig] | None = None,
    n_mfs: int = 5,
    mf_init: str = "kmeans",
    sigma_scale: float | str = 1.0,
    random_state: int | None = None,
    epochs: int = 10,
    learning_rate: float = 1e-2,
    verbose: bool | int = False,
    rule_base: str | None = None,
    batch_size: int | None = 512,
    shuffle: bool = True,
    ur_weight: float = 0.0,
    ur_target: float | None = None,
    consequent_batch_norm: bool = False,
    pfrb_max_rules: int | None = None,
    patience: int | None = 20,
    restore_best: bool = True,
    validation_data: tuple[Any, Any] | None = None,
    weight_decay: float = 1e-8,
) -> None:
    """Initialise a DG-TSK classifier.

    Args:
        use_en_frb: If ``True``, use the Enhanced FRB (En-FRB) for rule
            extraction (P-FRB). Default ``False`` keeps CoCo-FRB.
        input_configs: Per-feature :class:`InputConfig` list.
        n_mfs: Number of k-means clusters / grid MFs (default ``5``).
        mf_init: ``"kmeans"`` (default) or ``"grid"``.
        sigma_scale: Sigma scale factor. ``1.0`` recommended.
        random_state: Seed for reproducibility.
        epochs: Maximum training epochs (default ``10``).
        learning_rate: Adam learning rate (default ``0.01``).
        verbose: Print per-epoch progress.
        rule_base: ``"coco"`` or ``"cartesian"``.
        batch_size: Mini-batch size (default ``512``).
        shuffle: Reshuffle each epoch.
        ur_weight: Uncertainty regularisation weight.
        ur_target: Uncertainty regularisation target.
        consequent_batch_norm: Batch normalisation on consequent layers.
        pfrb_max_rules: Maximum number of point-based FRB rules when
            ``rule_base='pfrb'``. ``None`` uses all training samples.
        patience: Early-stopping patience (default ``20``). Set to ``None`` to disable early stopping.
        restore_best: If ``True`` (default), restore the best validation
            model weights after training.
        validation_data: Optional ``(X_val, y_val)`` for early stopping.
        weight_decay: L2 weight decay for consequent parameters.
    """
    self.use_en_frb = bool(use_en_frb)
    super().__init__(
        input_configs=input_configs,
        n_mfs=n_mfs,
        mf_init=mf_init,
        sigma_scale=sigma_scale,
        random_state=random_state,
        epochs=epochs,
        learning_rate=learning_rate,
        verbose=verbose,
        rule_base=rule_base,
        batch_size=batch_size,
        shuffle=shuffle,
        ur_weight=ur_weight,
        ur_target=ur_target,
        consequent_batch_norm=consequent_batch_norm,
        pfrb_max_rules=pfrb_max_rules,
        patience=patience,
        restore_best=restore_best,
        validation_data=validation_data,
        weight_decay=weight_decay,
    )

evaluate

Compute classification evaluation metrics for the provided dataset.

Source code in highfis/estimators.py
def evaluate(
    self,
    X: Any,
    y: Any,
    metrics: list[str] | None = None,
    sample_weight: Any | None = None,
) -> dict[str, float]:
    """Compute classification evaluation metrics for the provided dataset."""
    y_true = np.asarray(y)
    y_pred = self.predict(X)
    return compute_metrics(
        task="classification",
        y_true=y_true,
        y_pred=y_pred,
        sample_weight=sample_weight,
        metrics=metrics,
    )

fit

Train the TSK classifier on labeled samples.

Source code in highfis/estimators.py
def fit(self, x: Any, y: Any) -> Self:
    """Train the TSK classifier on labeled samples."""
    x_arr, y_arr = check_X_y(x, y)

    if self.random_state is not None:
        torch.manual_seed(int(self.random_state))

    le = LabelEncoder()
    y_idx = le.fit_transform(np.asarray(y_arr))

    input_mfs, feature_names, effective_rule_base = self._build_input_mfs(x_arr)

    self.n_features_in_ = x_arr.shape[1]
    self.feature_names_in_ = np.asarray(feature_names, dtype=object)
    self.classes_ = le.classes_
    self._label_encoder_ = le

    self.model_ = self._build_model(input_mfs, len(self.classes_), effective_rule_base)

    y_t = torch.as_tensor(y_idx, dtype=torch.long)

    # Prepare validation tensors if provided
    x_val_t: torch.Tensor | None = None
    y_val_t: torch.Tensor | None = None
    if self.validation_data is not None:
        x_v, y_v = self.validation_data
        x_v_arr = check_array(x_v)
        y_v_idx = self._label_encoder_.transform(np.asarray(y_v))
        x_val_t = self._as_tensor_x(x_v_arr)
        y_val_t = torch.as_tensor(y_v_idx, dtype=torch.long)

    self.history_ = self.model_.fit(
        self._as_tensor_x(x_arr),
        y_t,
        epochs=int(self.epochs),
        learning_rate=float(self.learning_rate),
        batch_size=self.batch_size,
        shuffle=bool(self.shuffle),
        ur_weight=float(self.ur_weight),
        ur_target=self.ur_target,
        verbose=self.verbose,
        x_val=x_val_t,
        y_val=y_val_t,
        patience=self.patience,
        restore_best=self.restore_best,
        weight_decay=float(self.weight_decay),
    )
    self.rule_base_ = effective_rule_base
    return self
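
Because fit label-encodes y internally and predict maps indices back through inverse_transform, string labels work directly. A small sketch (X_small is any feature matrix with four rows):

import numpy as np

y_str = np.array(["cat", "dog", "dog", "cat"])
clf.fit(X_small, y_str)
print(clf.classes_)  # array(['cat', 'dog'], dtype=object)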

load classmethod

Load a persisted estimator created by save.

Source code in highfis/estimators.py
@classmethod
def load(cls, path: str) -> Self:
    """Load a persisted estimator created by save."""
    checkpoint = load_checkpoint(path)
    validate_checkpoint_payload(checkpoint, expected_estimator_class=cls.__name__)

    estimator = cls(**checkpoint["estimator_params"])
    model_init = checkpoint["model_init"]
    estimator.rule_base_ = model_init["rule_base"]
    estimator.model_ = estimator._build_model(
        model_init["input_mfs"],
        int(model_init["n_classes"]),
        str(model_init["rule_base"]),
    )
    estimator.model_.load_state_dict(checkpoint["model_state_dict"])

    fitted = checkpoint["fitted_attrs"]
    estimator.n_features_in_ = int(fitted["n_features_in"])
    estimator.feature_names_in_ = np.asarray(fitted["feature_names_in"], dtype=object)
    estimator.classes_ = np.asarray(fitted["classes"], dtype=object)
    label_encoder = LabelEncoder()
    label_encoder.classes_ = estimator.classes_
    estimator._label_encoder_ = label_encoder
    estimator.history_ = cast(dict[str, Any], checkpoint.get("history", {}))
    return estimator
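
A save/load round-trip sketch, assuming a fitted estimator clf of this class (the file name is illustrative):

clf.save("dgtsk_clf.ckpt")
restored = type(clf).load("dgtsk_clf.ckpt")
assert (restored.classes_ == clf.classes_).all()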

predict

Predict class labels for input samples.

Source code in highfis/estimators.py
def predict(self, x: Any) -> np.ndarray:
    """Predict class labels for input samples."""
    proba = self.predict_proba(x)
    y_idx = np.argmax(proba, axis=1)
    return np.asarray(self._label_encoder_.inverse_transform(y_idx))

predict_proba

Predict class probabilities for input samples.

Source code in highfis/estimators.py
def predict_proba(self, x: Any) -> np.ndarray:
    """Predict class probabilities for input samples."""
    check_is_fitted(self, "model_")
    x_arr = check_array(x)
    if x_arr.shape[1] != self.n_features_in_:
        raise ValueError(f"expected {self.n_features_in_} features, got {x_arr.shape[1]}")
    probs = cast(Any, self.model_).predict_proba(self._as_tensor_x(x_arr))
    return probs.detach().cpu().numpy()
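
The columns of the returned array follow the order of classes_ (which is how predict resolves the argmax back to labels). A sketch of filtering by confidence:

proba = clf.predict_proba(X_test)
labels = clf.classes_[proba.argmax(axis=1)]
confident = labels[proba.max(axis=1) > 0.9]  # keep only high-confidence predictions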

save

Persist estimator configuration, model weights and fitted metadata.

Source code in highfis/estimators.py
def save(self, path: str) -> None:
    """Persist estimator configuration, model weights and fitted metadata."""
    checkpoint = self._build_checkpoint_base(
        model_init={
            "input_mfs": self.model_.input_mfs,
            "n_classes": len(self.classes_),
            "rule_base": self.rule_base_,
        },
        fitted_attrs={
            "n_features_in": int(self.n_features_in_),
            "feature_names_in": self.feature_names_in_.tolist(),
            "classes": self.classes_.tolist(),
        },
    )
    save_checkpoint(path, checkpoint)

score

Return classification accuracy on the provided dataset.

Source code in highfis/estimators.py
def score(self, X: Any, y: Any, sample_weight: Any = None) -> float:
    """Return classification accuracy on the provided dataset."""
    y_true = np.asarray(y)
    y_pred = self.predict(X)
    return float(accuracy_score(y_true, y_pred, sample_weight=sample_weight))

DGTSKRegressorEstimator

Bases: _BaseRegressorEstimator

DG-TSK regressor with M-gate antecedent and point-based FRB (P-FRB).

DG-TSK uses a data-guided M-gate function to automatically select relevant features and rules.

Reference

Guangdong Xue, Jian Wang, Bingjie Zhang, Bin Yuan, Caili Dai, Double groups of gates based Takagi-Sugeno-Kang (DG-TSK) fuzzy system for simultaneous feature selection and rule extraction, Fuzzy Sets and Systems, Volume 469, 2023, 108627, ISSN 0165-0114, https://doi.org/10.1016/j.fss.2023.108627.

Example
from highfis import DGTSKRegressorEstimator

reg = DGTSKRegressorEstimator(n_mfs=30, use_en_frb=False, random_state=0)
reg.fit(X_train, y_train)
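
If the point-based FRB is wanted, the pfrb_max_rules documentation below suggests selecting it via rule_base='pfrb'; a hedged sketch (the rule_base entry itself only lists "coco" and "cartesian", so verify against the library before relying on this):

reg = DGTSKRegressorEstimator(
    rule_base="pfrb",     # point-based FRB, per the pfrb_max_rules docs
    pfrb_max_rules=200,   # cap the number of sample-derived rules
    random_state=0,
)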

Initialise a DG-TSK regressor.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `use_en_frb` | `bool` | If `True`, use the Enhanced FRB (En-FRB) for rule extraction (P-FRB). Default `False` keeps CoCo-FRB. | `False` |
| `input_configs` | `list[InputConfig] \| None` | Per-feature `InputConfig` list. | `None` |
| `n_mfs` | `int` | Number of k-means clusters / grid MFs (default `5`). | `5` |
| `mf_init` | `str` | `"kmeans"` (default) or `"grid"`. | `'kmeans'` |
| `sigma_scale` | `float \| str` | Sigma scale factor. `1.0` recommended. | `1.0` |
| `random_state` | `int \| None` | Seed for reproducibility. | `None` |
| `epochs` | `int` | Maximum training epochs (default `10`). | `10` |
| `learning_rate` | `float` | Adam learning rate (default `0.01`). | `0.01` |
| `verbose` | `bool \| int` | Print per-epoch progress. | `False` |
| `rule_base` | `str \| None` | `"coco"` or `"cartesian"`. | `None` |
| `batch_size` | `int \| None` | Mini-batch size (default `512`). | `512` |
| `shuffle` | `bool` | Reshuffle each epoch. | `True` |
| `ur_weight` | `float` | Uncertainty regularisation weight. | `0.0` |
| `ur_target` | `float \| None` | Uncertainty regularisation target. | `None` |
| `consequent_batch_norm` | `bool` | Batch normalisation on consequent layers. | `False` |
| `pfrb_max_rules` | `int \| None` | Maximum number of point-based FRB rules when `rule_base='pfrb'`. `None` uses all training samples. | `None` |
| `patience` | `int \| None` | Early-stopping patience (default `20`). Set to `None` to disable early stopping. | `20` |
| `restore_best` | `bool` | If `True` (default), restore the best validation model weights after training. | `True` |
| `validation_data` | `tuple[Any, Any] \| None` | Optional `(X_val, y_val)` for early stopping. | `None` |
| `weight_decay` | `float` | L2 weight decay for consequent parameters. | `1e-08` |
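
Putting the early-stopping knobs together, a sketch (X and y are any regression dataset):

from sklearn.model_selection import train_test_split
from highfis import DGTSKRegressorEstimator

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=0)
reg = DGTSKRegressorEstimator(
    epochs=200,                      # upper bound; early stopping usually ends sooner
    patience=20,                     # stop after 20 epochs without validation improvement
    restore_best=True,               # roll back to the best validation weights
    validation_data=(X_val, y_val),
    random_state=0,
)
reg.fit(X_tr, y_tr)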
Source code in highfis/estimators.py
def __init__(
    self,
    *,
    use_en_frb: bool = False,
    input_configs: list[InputConfig] | None = None,
    n_mfs: int = 5,
    mf_init: str = "kmeans",
    sigma_scale: float | str = 1.0,
    random_state: int | None = None,
    epochs: int = 10,
    learning_rate: float = 1e-2,
    verbose: bool | int = False,
    rule_base: str | None = None,
    batch_size: int | None = 512,
    shuffle: bool = True,
    ur_weight: float = 0.0,
    ur_target: float | None = None,
    consequent_batch_norm: bool = False,
    pfrb_max_rules: int | None = None,
    patience: int | None = 20,
    restore_best: bool = True,
    validation_data: tuple[Any, Any] | None = None,
    weight_decay: float = 1e-8,
) -> None:
    """Initialise a DG-TSK regressor.

    Args:
        use_en_frb: If ``True``, use the Enhanced FRB (En-FRB) for rule
            extraction (P-FRB). Default ``False`` keeps CoCo-FRB.
        input_configs: Per-feature :class:`InputConfig` list.
        n_mfs: Number of k-means clusters / grid MFs (default ``5``).
        mf_init: ``"kmeans"`` (default) or ``"grid"``.
        sigma_scale: Sigma scale factor. ``1.0`` recommended.
        random_state: Seed for reproducibility.
        epochs: Maximum training epochs (default ``10``).
        learning_rate: Adam learning rate (default ``0.01``).
        verbose: Print per-epoch progress.
        rule_base: ``"coco"`` or ``"cartesian"``.
        batch_size: Mini-batch size (default ``512``).
        shuffle: Reshuffle each epoch.
        ur_weight: Uncertainty regularisation weight.
        ur_target: Uncertainty regularisation target.
        consequent_batch_norm: Batch normalisation on consequent layers.
        pfrb_max_rules: Maximum number of point-based FRB rules when
            ``rule_base='pfrb'``. ``None`` uses all training samples.
        patience: Early-stopping patience (default ``20``). Set to ``None`` to disable early stopping.
        restore_best: If ``True`` (default), restore the best validation
            model weights after training.
        validation_data: Optional ``(X_val, y_val)`` for early stopping.
        weight_decay: L2 weight decay for consequent parameters.
    """
    self.use_en_frb = bool(use_en_frb)
    super().__init__(
        input_configs=input_configs,
        n_mfs=n_mfs,
        mf_init=mf_init,
        sigma_scale=sigma_scale,
        random_state=random_state,
        epochs=epochs,
        learning_rate=learning_rate,
        verbose=verbose,
        rule_base=rule_base,
        batch_size=batch_size,
        shuffle=shuffle,
        ur_weight=ur_weight,
        ur_target=ur_target,
        consequent_batch_norm=consequent_batch_norm,
        pfrb_max_rules=pfrb_max_rules,
        patience=patience,
        restore_best=restore_best,
        validation_data=validation_data,
        weight_decay=weight_decay,
    )

evaluate

Compute regression evaluation metrics for the provided dataset.

Source code in highfis/estimators.py
def evaluate(
    self,
    X: Any,
    y: Any,
    metrics: list[str] | None = None,
    sample_weight: Any | None = None,
) -> dict[str, float]:
    """Compute regression evaluation metrics for the provided dataset."""
    y_true = np.asarray(y)
    y_pred = self.predict(X)
    return compute_metrics(
        task="regression",
        y_true=y_true,
        y_pred=y_pred,
        sample_weight=sample_weight,
        metrics=metrics,
    )

fit

Train the TSK regressor on labeled samples.

Source code in highfis/estimators.py
def fit(self, x: Any, y: Any) -> Self:
    """Train the TSK regressor on labeled samples."""
    x_arr, y_arr = check_X_y(x, y)

    if self.random_state is not None:
        torch.manual_seed(int(self.random_state))

    input_mfs, feature_names, effective_rule_base = self._build_input_mfs(x_arr)

    self.n_features_in_ = x_arr.shape[1]
    self.feature_names_in_ = np.asarray(feature_names, dtype=object)

    self.model_ = self._build_regressor_model(input_mfs, effective_rule_base)

    y_t = torch.as_tensor(np.asarray(y_arr, dtype=np.float32), dtype=torch.float32)

    # Prepare validation tensors if provided
    x_val_t: torch.Tensor | None = None
    y_val_t: torch.Tensor | None = None
    if self.validation_data is not None:
        x_v, y_v = self.validation_data
        x_v_arr = check_array(x_v)
        x_val_t = self._as_tensor_x(x_v_arr)
        y_val_t = torch.as_tensor(np.asarray(y_v, dtype=np.float32), dtype=torch.float32)

    self.history_ = self.model_.fit(
        self._as_tensor_x(x_arr),
        y_t,
        epochs=int(self.epochs),
        learning_rate=float(self.learning_rate),
        batch_size=self.batch_size,
        shuffle=bool(self.shuffle),
        ur_weight=float(self.ur_weight),
        ur_target=self.ur_target,
        verbose=self.verbose,
        x_val=x_val_t,
        y_val=y_val_t,
        patience=self.patience,
        restore_best=self.restore_best,
        weight_decay=float(self.weight_decay),
    )
    self.rule_base_ = effective_rule_base
    return self

load classmethod

Load a persisted estimator created by save.

Source code in highfis/estimators.py
@classmethod
def load(cls, path: str) -> Self:
    """Load a persisted estimator created by save."""
    checkpoint = load_checkpoint(path)
    validate_checkpoint_payload(checkpoint, expected_estimator_class=cls.__name__)

    estimator = cls(**checkpoint["estimator_params"])
    model_init = checkpoint["model_init"]
    estimator.rule_base_ = model_init["rule_base"]
    estimator.model_ = estimator._build_regressor_model(
        model_init["input_mfs"],
        str(model_init["rule_base"]),
    )
    estimator.model_.load_state_dict(checkpoint["model_state_dict"])

    fitted = checkpoint["fitted_attrs"]
    estimator.n_features_in_ = int(fitted["n_features_in"])
    estimator.feature_names_in_ = np.asarray(fitted["feature_names_in"], dtype=object)
    estimator.history_ = cast(dict[str, Any], checkpoint.get("history", {}))
    return estimator

predict

Predict continuous target values for input samples.

Source code in highfis/estimators.py
def predict(self, x: Any) -> np.ndarray:
    """Predict continuous target values for input samples."""
    check_is_fitted(self, "model_")
    x_arr = check_array(x)
    if x_arr.shape[1] != self.n_features_in_:
        raise ValueError(f"expected {self.n_features_in_} features, got {x_arr.shape[1]}")
    preds = cast(Any, self.model_).predict(self._as_tensor_x(x_arr))
    return preds.detach().cpu().numpy()

save

Persist estimator configuration, model weights and fitted metadata.

Source code in highfis/estimators.py
def save(self, path: str) -> None:
    """Persist estimator configuration, model weights and fitted metadata."""
    checkpoint = self._build_checkpoint_base(
        model_init={
            "input_mfs": self.model_.input_mfs,
            "rule_base": self.rule_base_,
        },
        fitted_attrs={
            "n_features_in": int(self.n_features_in_),
            "feature_names_in": self.feature_names_in_.tolist(),
        },
    )
    save_checkpoint(path, checkpoint)

DombiTSKClassifierEstimator

Bases: _BaseClassifierEstimator

TSK classifier with a fixed Dombi T-norm in the antecedent.

DombiTSK extends TSK fuzzy inference by using Dombi T-norm aggregation in antecedent evaluation while keeping first-order linear consequents.
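
For reference, the standard two-argument Dombi T-norm with parameter λ > 0 is shown below (the n-ary generalisation used by the library and its choice of λ are not shown here):

$$
T_{\lambda}(a, b) = \frac{1}{1 + \left[\left(\tfrac{1-a}{a}\right)^{\lambda} + \left(\tfrac{1-b}{b}\right)^{\lambda}\right]^{1/\lambda}}, \qquad a, b \in (0, 1],\ \lambda > 0,
$$

with \(T_{\lambda}(a, b) = 0\) when \(a = 0\) or \(b = 0\); as \(\lambda \to \infty\) the family approaches the minimum T-norm.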

Reference

G. Xue, L. Hu, J. Wang and S. Ablameyko, "ADMTSK: A High-Dimensional Takagi-Sugeno-Kang Fuzzy System Based on Adaptive Dombi T-Norm," in IEEE Transactions on Fuzzy Systems, vol. 33, no. 6, pp. 1767-1780, June 2025, doi: 10.1109/TFUZZ.2025.3535640.

Example
from highfis import DombiTSKClassifierEstimator

clf = DombiTSKClassifierEstimator(n_mfs=30, random_state=0)
clf.fit(X_train, y_train)

Initialise a DombiTSK classifier.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `input_configs` | `list[InputConfig] \| None` | Per-feature `InputConfig` list. Only `name` is used when `mf_init="kmeans"`. | `None` |
| `n_mfs` | `int` | Number of k-means clusters / grid MFs (default `5`). | `5` |
| `mf_init` | `str` | `"kmeans"` (default) or `"grid"`. | `'kmeans'` |
| `sigma_scale` | `float \| str` | Sigma scale factor. `1.0` recommended; the Dombi T-norm handles high-dimensional stability without inflating sigma. | `1.0` |
| `random_state` | `int \| None` | Seed for k-means and weight initialisation. | `None` |
| `epochs` | `int` | Maximum training epochs (default `10`). | `10` |
| `learning_rate` | `float` | Adam learning rate (default `0.01`). | `0.01` |
| `verbose` | `bool \| int` | Print per-epoch progress. | `False` |
| `rule_base` | `str \| None` | `"coco"` or `"cartesian"`. | `None` |
| `batch_size` | `int \| None` | Mini-batch size (default `512`). | `512` |
| `shuffle` | `bool` | Reshuffle each epoch. | `True` |
| `ur_weight` | `float` | Uncertainty regularisation weight. | `0.0` |
| `ur_target` | `float \| None` | Uncertainty regularisation target. | `None` |
| `consequent_batch_norm` | `bool` | Batch normalisation on consequent layers. | `False` |
| `pfrb_max_rules` | `int \| None` | Maximum point-based FRB rules (unused by DombiTSK). | `None` |
| `patience` | `int \| None` | Early-stopping patience (default `20`). Set to `None` to disable early stopping. | `20` |
| `restore_best` | `bool` | If `True` (default), restore the best validation model weights after training. | `True` |
| `validation_data` | `tuple[Any, Any] \| None` | Optional `(X_val, y_val)` for early stopping. | `None` |
| `weight_decay` | `float` | L2 weight decay for consequent parameters. | `1e-08` |
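
Rule-count intuition for the two rule-base options, as a hedged sketch (this reflects the usual CoCo vs. Cartesian convention in this literature, not a confirmed highfis invariant):

# "coco":      one rule per MF index        -> roughly n_mfs rules
# "cartesian": full grid of MF combinations -> n_mfs ** n_features rules
clf_compact = DombiTSKClassifierEstimator(n_mfs=5, rule_base="coco")
clf_grid = DombiTSKClassifierEstimator(n_mfs=3, rule_base="cartesian")  # only viable for few features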
Source code in highfis/estimators.py
def __init__(
    self,
    *,
    input_configs: list[InputConfig] | None = None,
    n_mfs: int = 5,
    mf_init: str = "kmeans",
    sigma_scale: float | str = 1.0,
    random_state: int | None = None,
    epochs: int = 10,
    learning_rate: float = 1e-2,
    verbose: bool | int = False,
    rule_base: str | None = None,
    batch_size: int | None = 512,
    shuffle: bool = True,
    ur_weight: float = 0.0,
    ur_target: float | None = None,
    consequent_batch_norm: bool = False,
    pfrb_max_rules: int | None = None,
    patience: int | None = 20,
    restore_best: bool = True,
    validation_data: tuple[Any, Any] | None = None,
    weight_decay: float = 1e-8,
) -> None:
    """Initialise a DombiTSK classifier.

    Args:
        input_configs: Per-feature :class:`InputConfig` list. Only
            ``name`` is used when ``mf_init="kmeans"``.
        n_mfs: Number of k-means clusters / grid MFs (default ``5``).
        mf_init: ``"kmeans"`` (default) or ``"grid"``.
        sigma_scale: Sigma scale factor. ``1.0`` recommended; the Dombi
            T-norm handles high-dimensional stability without inflating
            sigma.
        random_state: Seed for k-means and weight initialisation.
        epochs: Maximum training epochs (default ``10``).
        learning_rate: Adam learning rate (default ``0.01``).
        verbose: Print per-epoch progress.
        rule_base: ``"coco"`` or ``"cartesian"``.
        batch_size: Mini-batch size (default ``512``).
        shuffle: Reshuffle each epoch.
        ur_weight: Uncertainty regularisation weight.
        ur_target: Uncertainty regularisation target.
        consequent_batch_norm: Batch normalisation on consequent layers.
        pfrb_max_rules: Maximum point-based FRB rules (unused by
            DombiTSK).
        patience: Early-stopping patience (default ``20``). Set to ``None`` to disable early stopping.
        restore_best: If ``True`` (default), restore the best validation
            model weights after training.
        validation_data: Optional ``(X_val, y_val)`` for early stopping.
        weight_decay: L2 weight decay for consequent parameters.
    """
    super().__init__(
        input_configs=input_configs,
        n_mfs=n_mfs,
        mf_init=mf_init,
        sigma_scale=sigma_scale,
        random_state=random_state,
        epochs=epochs,
        learning_rate=learning_rate,
        verbose=verbose,
        rule_base=rule_base,
        batch_size=batch_size,
        shuffle=shuffle,
        ur_weight=ur_weight,
        ur_target=ur_target,
        consequent_batch_norm=consequent_batch_norm,
        pfrb_max_rules=pfrb_max_rules,
        patience=patience,
        restore_best=restore_best,
        validation_data=validation_data,
        weight_decay=weight_decay,
    )

evaluate

Compute classification evaluation metrics for the provided dataset.

Source code in highfis/estimators.py
def evaluate(
    self,
    X: Any,
    y: Any,
    metrics: list[str] | None = None,
    sample_weight: Any | None = None,
) -> dict[str, float]:
    """Compute classification evaluation metrics for the provided dataset."""
    y_true = np.asarray(y)
    y_pred = self.predict(X)
    return compute_metrics(
        task="classification",
        y_true=y_true,
        y_pred=y_pred,
        sample_weight=sample_weight,
        metrics=metrics,
    )

fit

Train the TSK classifier on labeled samples.

Source code in highfis/estimators.py
def fit(self, x: Any, y: Any) -> Self:
    """Train the TSK classifier on labeled samples."""
    x_arr, y_arr = check_X_y(x, y)

    if self.random_state is not None:
        torch.manual_seed(int(self.random_state))

    le = LabelEncoder()
    y_idx = le.fit_transform(np.asarray(y_arr))

    input_mfs, feature_names, effective_rule_base = self._build_input_mfs(x_arr)

    self.n_features_in_ = x_arr.shape[1]
    self.feature_names_in_ = np.asarray(feature_names, dtype=object)
    self.classes_ = le.classes_
    self._label_encoder_ = le

    self.model_ = self._build_model(input_mfs, len(self.classes_), effective_rule_base)

    y_t = torch.as_tensor(y_idx, dtype=torch.long)

    # Prepare validation tensors if provided
    x_val_t: torch.Tensor | None = None
    y_val_t: torch.Tensor | None = None
    if self.validation_data is not None:
        x_v, y_v = self.validation_data
        x_v_arr = check_array(x_v)
        y_v_idx = self._label_encoder_.transform(np.asarray(y_v))
        x_val_t = self._as_tensor_x(x_v_arr)
        y_val_t = torch.as_tensor(y_v_idx, dtype=torch.long)

    self.history_ = self.model_.fit(
        self._as_tensor_x(x_arr),
        y_t,
        epochs=int(self.epochs),
        learning_rate=float(self.learning_rate),
        batch_size=self.batch_size,
        shuffle=bool(self.shuffle),
        ur_weight=float(self.ur_weight),
        ur_target=self.ur_target,
        verbose=self.verbose,
        x_val=x_val_t,
        y_val=y_val_t,
        patience=self.patience,
        restore_best=self.restore_best,
        weight_decay=float(self.weight_decay),
    )
    self.rule_base_ = effective_rule_base
    return self

load classmethod

Load a persisted estimator created by save.

Source code in highfis/estimators.py
@classmethod
def load(cls, path: str) -> Self:
    """Load a persisted estimator created by save."""
    checkpoint = load_checkpoint(path)
    validate_checkpoint_payload(checkpoint, expected_estimator_class=cls.__name__)

    estimator = cls(**checkpoint["estimator_params"])
    model_init = checkpoint["model_init"]
    estimator.rule_base_ = model_init["rule_base"]
    estimator.model_ = estimator._build_model(
        model_init["input_mfs"],
        int(model_init["n_classes"]),
        str(model_init["rule_base"]),
    )
    estimator.model_.load_state_dict(checkpoint["model_state_dict"])

    fitted = checkpoint["fitted_attrs"]
    estimator.n_features_in_ = int(fitted["n_features_in"])
    estimator.feature_names_in_ = np.asarray(fitted["feature_names_in"], dtype=object)
    estimator.classes_ = np.asarray(fitted["classes"], dtype=object)
    label_encoder = LabelEncoder()
    label_encoder.classes_ = estimator.classes_
    estimator._label_encoder_ = label_encoder
    estimator.history_ = cast(dict[str, Any], checkpoint.get("history", {}))
    return estimator

predict

Predict class labels for input samples.

Source code in highfis/estimators.py
def predict(self, x: Any) -> np.ndarray:
    """Predict class labels for input samples."""
    proba = self.predict_proba(x)
    y_idx = np.argmax(proba, axis=1)
    return np.asarray(self._label_encoder_.inverse_transform(y_idx))

predict_proba

Predict class probabilities for input samples.

Source code in highfis/estimators.py
def predict_proba(self, x: Any) -> np.ndarray:
    """Predict class probabilities for input samples."""
    check_is_fitted(self, "model_")
    x_arr = check_array(x)
    if x_arr.shape[1] != self.n_features_in_:
        raise ValueError(f"expected {self.n_features_in_} features, got {x_arr.shape[1]}")
    probs = cast(Any, self.model_).predict_proba(self._as_tensor_x(x_arr))
    return probs.detach().cpu().numpy()

save

Persist estimator configuration, model weights and fitted metadata.

Source code in highfis/estimators.py
def save(self, path: str) -> None:
    """Persist estimator configuration, model weights and fitted metadata."""
    checkpoint = self._build_checkpoint_base(
        model_init={
            "input_mfs": self.model_.input_mfs,
            "n_classes": len(self.classes_),
            "rule_base": self.rule_base_,
        },
        fitted_attrs={
            "n_features_in": int(self.n_features_in_),
            "feature_names_in": self.feature_names_in_.tolist(),
            "classes": self.classes_.tolist(),
        },
    )
    save_checkpoint(path, checkpoint)

score

Return classification accuracy on the provided dataset.

Source code in highfis/estimators.py
def score(self, X: Any, y: Any, sample_weight: Any = None) -> float:
    """Return classification accuracy on the provided dataset."""
    y_true = np.asarray(y)
    y_pred = self.predict(X)
    return float(accuracy_score(y_true, y_pred, sample_weight=sample_weight))

DombiTSKRegressorEstimator

Bases: _BaseRegressorEstimator

TSK regressor with a fixed Dombi T-norm in the antecedent.

DombiTSK extends TSK fuzzy inference by using Dombi T-norm aggregation in antecedent evaluation while keeping first-order linear consequents.

Reference

G. Xue, L. Hu, J. Wang and S. Ablameyko, "ADMTSK: A High-Dimensional Takagi-Sugeno-Kang Fuzzy System Based on Adaptive Dombi T-Norm," in IEEE Transactions on Fuzzy Systems, vol. 33, no. 6, pp. 1767-1780, June 2025, doi: 10.1109/TFUZZ.2025.3535640.

Example
from highfis import DombiTSKRegressorEstimator

reg = DombiTSKRegressorEstimator(n_mfs=30, random_state=0)
reg.fit(X_train, y_train)

Initialise a DombiTSK regressor.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `input_configs` | `list[InputConfig] \| None` | Per-feature `InputConfig` list. Only `name` is used when `mf_init="kmeans"`. | `None` |
| `n_mfs` | `int` | Number of k-means clusters / grid MFs (default `5`). | `5` |
| `mf_init` | `str` | `"kmeans"` (default) or `"grid"`. | `'kmeans'` |
| `sigma_scale` | `float \| str` | Sigma scale factor. `1.0` recommended. | `1.0` |
| `random_state` | `int \| None` | Seed for k-means and weight initialisation. | `None` |
| `epochs` | `int` | Maximum training epochs (default `10`). | `10` |
| `learning_rate` | `float` | Adam learning rate (default `0.01`). | `0.01` |
| `verbose` | `bool \| int` | Print per-epoch progress. | `False` |
| `rule_base` | `str \| None` | `"coco"` or `"cartesian"`. | `None` |
| `batch_size` | `int \| None` | Mini-batch size (default `512`). | `512` |
| `shuffle` | `bool` | Reshuffle each epoch. | `True` |
| `ur_weight` | `float` | Uncertainty regularisation weight. | `0.0` |
| `ur_target` | `float \| None` | Uncertainty regularisation target. | `None` |
| `consequent_batch_norm` | `bool` | Batch normalisation on consequent layers. | `False` |
| `patience` | `int \| None` | Early-stopping patience (default `20`). Set to `None` to disable early stopping. | `20` |
| `restore_best` | `bool` | If `True` (default), restore the best validation model weights after training. | `True` |
| `validation_data` | `tuple[Any, Any] \| None` | Optional `(X_val, y_val)` for early stopping. | `None` |
| `weight_decay` | `float` | L2 weight decay for consequent parameters. | `1e-08` |
Source code in highfis/estimators.py
def __init__(
    self,
    *,
    input_configs: list[InputConfig] | None = None,
    n_mfs: int = 5,
    mf_init: str = "kmeans",
    sigma_scale: float | str = 1.0,
    random_state: int | None = None,
    epochs: int = 10,
    learning_rate: float = 1e-2,
    verbose: bool | int = False,
    rule_base: str | None = None,
    batch_size: int | None = 512,
    shuffle: bool = True,
    ur_weight: float = 0.0,
    ur_target: float | None = None,
    consequent_batch_norm: bool = False,
    patience: int | None = 20,
    restore_best: bool = True,
    validation_data: tuple[Any, Any] | None = None,
    weight_decay: float = 1e-8,
) -> None:
    """Initialise a DombiTSK regressor.

    Args:
        input_configs: Per-feature :class:`InputConfig` list. Only
            ``name`` is used when ``mf_init="kmeans"``.
        n_mfs: Number of k-means clusters / grid MFs (default ``5``).
        mf_init: ``"kmeans"`` (default) or ``"grid"``.
        sigma_scale: Sigma scale factor. ``1.0`` recommended.
        random_state: Seed for k-means and weight initialisation.
        epochs: Maximum training epochs (default ``10``).
        learning_rate: Adam learning rate (default ``0.01``).
        verbose: Print per-epoch progress.
        rule_base: ``"coco"`` or ``"cartesian"``.
        batch_size: Mini-batch size (default ``512``).
        shuffle: Reshuffle each epoch.
        ur_weight: Uncertainty regularisation weight.
        ur_target: Uncertainty regularisation target.
        consequent_batch_norm: Batch normalisation on consequent layers.
        patience: Early-stopping patience (default ``20``). Set to ``None`` to disable early stopping.
        restore_best: If ``True`` (default), restore the best validation
            model weights after training.
        validation_data: Optional ``(X_val, y_val)`` for early stopping.
        weight_decay: L2 weight decay for consequent parameters.
    """
    super().__init__(
        input_configs=input_configs,
        n_mfs=n_mfs,
        mf_init=mf_init,
        sigma_scale=sigma_scale,
        random_state=random_state,
        epochs=epochs,
        learning_rate=learning_rate,
        verbose=verbose,
        rule_base=rule_base,
        batch_size=batch_size,
        shuffle=shuffle,
        ur_weight=ur_weight,
        ur_target=ur_target,
        consequent_batch_norm=consequent_batch_norm,
        patience=patience,
        restore_best=restore_best,
        validation_data=validation_data,
        weight_decay=weight_decay,
    )

evaluate

Compute regression evaluation metrics for the provided dataset.

Source code in highfis/estimators.py
def evaluate(
    self,
    X: Any,
    y: Any,
    metrics: list[str] | None = None,
    sample_weight: Any | None = None,
) -> dict[str, float]:
    """Compute regression evaluation metrics for the provided dataset."""
    y_true = np.asarray(y)
    y_pred = self.predict(X)
    return compute_metrics(
        task="regression",
        y_true=y_true,
        y_pred=y_pred,
        sample_weight=sample_weight,
        metrics=metrics,
    )

fit

Train the TSK regressor on labeled samples.

Source code in highfis/estimators.py
def fit(self, x: Any, y: Any) -> Self:
    """Train the TSK regressor on labeled samples."""
    x_arr, y_arr = check_X_y(x, y)

    if self.random_state is not None:
        torch.manual_seed(int(self.random_state))

    input_mfs, feature_names, effective_rule_base = self._build_input_mfs(x_arr)

    self.n_features_in_ = x_arr.shape[1]
    self.feature_names_in_ = np.asarray(feature_names, dtype=object)

    self.model_ = self._build_regressor_model(input_mfs, effective_rule_base)

    y_t = torch.as_tensor(np.asarray(y_arr, dtype=np.float32), dtype=torch.float32)

    # Prepare validation tensors if provided
    x_val_t: torch.Tensor | None = None
    y_val_t: torch.Tensor | None = None
    if self.validation_data is not None:
        x_v, y_v = self.validation_data
        x_v_arr = check_array(x_v)
        x_val_t = self._as_tensor_x(x_v_arr)
        y_val_t = torch.as_tensor(np.asarray(y_v, dtype=np.float32), dtype=torch.float32)

    self.history_ = self.model_.fit(
        self._as_tensor_x(x_arr),
        y_t,
        epochs=int(self.epochs),
        learning_rate=float(self.learning_rate),
        batch_size=self.batch_size,
        shuffle=bool(self.shuffle),
        ur_weight=float(self.ur_weight),
        ur_target=self.ur_target,
        verbose=self.verbose,
        x_val=x_val_t,
        y_val=y_val_t,
        patience=self.patience,
        restore_best=self.restore_best,
        weight_decay=float(self.weight_decay),
    )
    self.rule_base_ = effective_rule_base
    return self

load classmethod

Load a persisted estimator created by save.

Source code in highfis/estimators.py
@classmethod
def load(cls, path: str) -> Self:
    """Load a persisted estimator created by save."""
    checkpoint = load_checkpoint(path)
    validate_checkpoint_payload(checkpoint, expected_estimator_class=cls.__name__)

    estimator = cls(**checkpoint["estimator_params"])
    model_init = checkpoint["model_init"]
    estimator.rule_base_ = model_init["rule_base"]
    estimator.model_ = estimator._build_regressor_model(
        model_init["input_mfs"],
        str(model_init["rule_base"]),
    )
    estimator.model_.load_state_dict(checkpoint["model_state_dict"])

    fitted = checkpoint["fitted_attrs"]
    estimator.n_features_in_ = int(fitted["n_features_in"])
    estimator.feature_names_in_ = np.asarray(fitted["feature_names_in"], dtype=object)
    estimator.history_ = cast(dict[str, Any], checkpoint.get("history", {}))
    return estimator

predict

Predict continuous target values for input samples.

Source code in highfis/estimators.py
def predict(self, x: Any) -> np.ndarray:
    """Predict continuous target values for input samples."""
    check_is_fitted(self, "model_")
    x_arr = check_array(x)
    if x_arr.shape[1] != self.n_features_in_:
        raise ValueError(f"expected {self.n_features_in_} features, got {x_arr.shape[1]}")
    preds = cast(Any, self.model_).predict(self._as_tensor_x(x_arr))
    return preds.detach().cpu().numpy()

save

Persist estimator configuration, model weights and fitted metadata.

Source code in highfis/estimators.py
def save(self, path: str) -> None:
    """Persist estimator configuration, model weights and fitted metadata."""
    checkpoint = self._build_checkpoint_base(
        model_init={
            "input_mfs": self.model_.input_mfs,
            "rule_base": self.rule_base_,
        },
        fitted_attrs={
            "n_features_in": int(self.n_features_in_),
            "feature_names_in": self.feature_names_in_.tolist(),
        },
    )
    save_checkpoint(path, checkpoint)

FSREAdaTSKClassifierEstimator

Bases: _BaseClassifierEstimator

FSRE-AdaTSK classifier with adaptive softmin antecedent and gated consequents.

FSRE-AdaTSK (Feature Selection and Rule Extraction) extends AdaTSK with trainable gates that perform integrated feature selection and rule extraction, so irrelevant features and redundant rules can be pruned during training.
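
For intuition, a common differentiable stand-in for the hard minimum in this line of work is a power-mean softmin; a sketch under that assumption (the exact form and the adaptation of its exponent inside highfis are not shown here):

import torch

def softmin(u: torch.Tensor, q: float = -12.0) -> torch.Tensor:
    # Power-mean soft minimum over the last dim: ((1/n) * sum_i u_i**q) ** (1/q).
    # For q << 0 this approaches min(u) while staying differentiable;
    # u is assumed to lie in (0, 1], as membership values do.
    return u.pow(q).mean(dim=-1).pow(1.0 / q)

mu = torch.tensor([[0.9, 0.2, 0.7]])
print(softmin(mu))            # ~0.22, close to the hard minimum
print(mu.min(dim=-1).values)  # tensor([0.2000])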

Reference

G. Xue, Q. Chang, J. Wang, K. Zhang and N. R. Pal, "An Adaptive Neuro-Fuzzy System With Integrated Feature Selection and Rule Extraction for High-Dimensional Classification Problems," in IEEE Transactions on Fuzzy Systems, vol. 31, no. 7, pp. 2167-2181, July 2023, doi: 10.1109/TFUZZ.2022.3220950.

Example
from highfis import FSREAdaTSKClassifierEstimator

clf = FSREAdaTSKClassifierEstimator()
clf.fit(X_train, y_train)

Initialise an FSRE-AdaTSK classifier.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `lambda_init` | `float` | Initial ALE-softmin parameter `λ > 0` inherited by `DGALETSKClassifierEstimator`; not used by FSRE-AdaTSK proper (Ada-softmin computes its index from the current membership values). Default `1.0`. | `1.0` |
| `use_en_frb` | `bool` | If `True`, use the Enhanced FRB (En-FRB) whose size grows linearly with the number of features, allowing more candidate rules for the RE phase. Xue et al. (2023) activate En-FRB after the FS phase; set `False` (default) to keep the compact CoCo-FRB. | `False` |
| `input_configs` | `list[InputConfig] \| None` | Per-feature `InputConfig` list. Only `name` is used when `mf_init="kmeans"`. | `None` |
| `n_mfs` | `int` | Number of k-means clusters / grid MFs (default `5`). | `5` |
| `mf_init` | `str` | `"kmeans"` (default) or `"grid"`. | `'kmeans'` |
| `sigma_scale` | `float \| str` | Sigma scale factor. `1.0` recommended. | `1.0` |
| `random_state` | `int \| None` | Seed for k-means and weight initialisation. | `None` |
| `epochs` | `int` | Maximum training epochs (default `10`). | `10` |
| `learning_rate` | `float` | Adam learning rate (default `0.01`). | `0.01` |
| `verbose` | `bool \| int` | Print per-epoch progress. | `False` |
| `rule_base` | `str \| None` | `"coco"` or `"cartesian"`. | `None` |
| `batch_size` | `int \| None` | Mini-batch size (default `512`). | `512` |
| `shuffle` | `bool` | Reshuffle each epoch. | `True` |
| `ur_weight` | `float` | Uncertainty regularisation weight. | `0.0` |
| `ur_target` | `float \| None` | Uncertainty regularisation target. | `None` |
| `consequent_batch_norm` | `bool` | Batch normalisation on consequent layers. | `False` |
| `patience` | `int \| None` | Early-stopping patience (default `20`). Set to `None` to disable early stopping. | `20` |
| `restore_best` | `bool` | If `True` (default), restore the best validation model weights after training. | `True` |
| `validation_data` | `tuple[Any, Any] \| None` | Optional `(X_val, y_val)` for early stopping. | `None` |
| `weight_decay` | `float` | L2 weight decay for consequent parameters. | `1e-08` |

Raises:

| Type | Description |
| --- | --- |
| `ValueError` | If `lambda_init <= 0`. |

Source code in highfis/estimators.py
def __init__(
    self,
    *,
    lambda_init: float = 1.0,
    use_en_frb: bool = False,
    input_configs: list[InputConfig] | None = None,
    n_mfs: int = 5,
    mf_init: str = "kmeans",
    sigma_scale: float | str = 1.0,
    random_state: int | None = None,
    epochs: int = 10,
    learning_rate: float = 1e-2,
    verbose: bool | int = False,
    rule_base: str | None = None,
    batch_size: int | None = 512,
    shuffle: bool = True,
    ur_weight: float = 0.0,
    ur_target: float | None = None,
    consequent_batch_norm: bool = False,
    patience: int | None = 20,
    restore_best: bool = True,
    validation_data: tuple[Any, Any] | None = None,
    weight_decay: float = 1e-8,
) -> None:
    """Initialise an FSRE-AdaTSK classifier.

    Args:
        lambda_init: Initial ALE-softmin parameter ``λ > 0`` inherited
            by :class:`DGALETSKClassifierEstimator`; not used by
            FSRE-AdaTSK proper (Ada-softmin computes its index from
            the current membership values). Default ``1.0``.
        use_en_frb: If ``True``, use the Enhanced FRB (En-FRB) whose
            size grows linearly with the number of features, allowing
            more candidate rules for the RE phase. Xue et al. (2023)
            activate En-FRB after the FS phase; set ``False`` (default)
            to keep the compact CoCo-FRB.
        input_configs: Per-feature :class:`InputConfig` list. Only
            ``name`` is used when ``mf_init="kmeans"``.
        n_mfs: Number of k-means clusters / grid MFs (default ``5``).
        mf_init: ``"kmeans"`` (default) or ``"grid"``.
        sigma_scale: Sigma scale factor. ``1.0`` recommended.
        random_state: Seed for k-means and weight initialisation.
        epochs: Maximum training epochs (default ``10``).
        learning_rate: Adam learning rate (default ``0.01``).
        verbose: Print per-epoch progress.
        rule_base: ``"coco"`` or ``"cartesian"``.
        batch_size: Mini-batch size (default ``512``).
        shuffle: Reshuffle each epoch.
        ur_weight: Uncertainty regularisation weight.
        ur_target: Uncertainty regularisation target.
        consequent_batch_norm: Batch normalisation on consequent layers.
        patience: Early-stopping patience (default ``20``). Set to ``None`` to disable early stopping.
        restore_best: If ``True`` (default), restore the best validation
            model weights after training.
        validation_data: Optional ``(X_val, y_val)`` for early stopping.
        weight_decay: L2 weight decay for consequent parameters.

    Raises:
        ValueError: If ``lambda_init <= 0``.
    """
    if lambda_init <= 0.0:
        raise ValueError("lambda_init must be > 0")
    self.lambda_init = float(lambda_init)
    self.use_en_frb = bool(use_en_frb)
    super().__init__(
        input_configs=input_configs,
        n_mfs=n_mfs,
        mf_init=mf_init,
        sigma_scale=sigma_scale,
        random_state=random_state,
        epochs=epochs,
        learning_rate=learning_rate,
        verbose=verbose,
        rule_base=rule_base,
        batch_size=batch_size,
        shuffle=shuffle,
        ur_weight=ur_weight,
        ur_target=ur_target,
        consequent_batch_norm=consequent_batch_norm,
        patience=patience,
        restore_best=restore_best,
        validation_data=validation_data,
        weight_decay=weight_decay,
    )

evaluate

Compute classification evaluation metrics for the provided dataset.

Source code in highfis/estimators.py
def evaluate(
    self,
    X: Any,
    y: Any,
    metrics: list[str] | None = None,
    sample_weight: Any | None = None,
) -> dict[str, float]:
    """Compute classification evaluation metrics for the provided dataset."""
    y_true = np.asarray(y)
    y_pred = self.predict(X)
    return compute_metrics(
        task="classification",
        y_true=y_true,
        y_pred=y_pred,
        sample_weight=sample_weight,
        metrics=metrics,
    )

fit

Train the TSK classifier on labeled samples.

Source code in highfis/estimators.py
def fit(self, x: Any, y: Any) -> Self:
    """Train the TSK classifier on labeled samples."""
    x_arr, y_arr = check_X_y(x, y)

    if self.random_state is not None:
        torch.manual_seed(int(self.random_state))

    le = LabelEncoder()
    y_idx = le.fit_transform(np.asarray(y_arr))

    input_mfs, feature_names, effective_rule_base = self._build_input_mfs(x_arr)

    self.n_features_in_ = x_arr.shape[1]
    self.feature_names_in_ = np.asarray(feature_names, dtype=object)
    self.classes_ = le.classes_
    self._label_encoder_ = le

    self.model_ = self._build_model(input_mfs, len(self.classes_), effective_rule_base)

    y_t = torch.as_tensor(y_idx, dtype=torch.long)

    # Prepare validation tensors if provided
    x_val_t: torch.Tensor | None = None
    y_val_t: torch.Tensor | None = None
    if self.validation_data is not None:
        x_v, y_v = self.validation_data
        x_v_arr = check_array(x_v)
        y_v_idx = self._label_encoder_.transform(np.asarray(y_v))
        x_val_t = self._as_tensor_x(x_v_arr)
        y_val_t = torch.as_tensor(y_v_idx, dtype=torch.long)

    self.history_ = self.model_.fit(
        self._as_tensor_x(x_arr),
        y_t,
        epochs=int(self.epochs),
        learning_rate=float(self.learning_rate),
        batch_size=self.batch_size,
        shuffle=bool(self.shuffle),
        ur_weight=float(self.ur_weight),
        ur_target=self.ur_target,
        verbose=self.verbose,
        x_val=x_val_t,
        y_val=y_val_t,
        patience=self.patience,
        restore_best=self.restore_best,
        weight_decay=float(self.weight_decay),
    )
    self.rule_base_ = effective_rule_base
    return self

load classmethod

Load a persisted estimator created by save.

Source code in highfis/estimators.py
@classmethod
def load(cls, path: str) -> Self:
    """Load a persisted estimator created by save."""
    checkpoint = load_checkpoint(path)
    validate_checkpoint_payload(checkpoint, expected_estimator_class=cls.__name__)

    estimator = cls(**checkpoint["estimator_params"])
    model_init = checkpoint["model_init"]
    estimator.rule_base_ = model_init["rule_base"]
    estimator.model_ = estimator._build_model(
        model_init["input_mfs"],
        int(model_init["n_classes"]),
        str(model_init["rule_base"]),
    )
    estimator.model_.load_state_dict(checkpoint["model_state_dict"])

    fitted = checkpoint["fitted_attrs"]
    estimator.n_features_in_ = int(fitted["n_features_in"])
    estimator.feature_names_in_ = np.asarray(fitted["feature_names_in"], dtype=object)
    estimator.classes_ = np.asarray(fitted["classes"], dtype=object)
    label_encoder = LabelEncoder()
    label_encoder.classes_ = estimator.classes_
    estimator._label_encoder_ = label_encoder
    estimator.history_ = cast(dict[str, Any], checkpoint.get("history", {}))
    return estimator

predict

Predict class labels for input samples.

Source code in highfis/estimators.py
def predict(self, x: Any) -> np.ndarray:
    """Predict class labels for input samples."""
    proba = self.predict_proba(x)
    y_idx = np.argmax(proba, axis=1)
    return np.asarray(self._label_encoder_.inverse_transform(y_idx))

predict_proba

Predict class probabilities for input samples.

Source code in highfis/estimators.py
def predict_proba(self, x: Any) -> np.ndarray:
    """Predict class probabilities for input samples."""
    check_is_fitted(self, "model_")
    x_arr = check_array(x)
    if x_arr.shape[1] != self.n_features_in_:
        raise ValueError(f"expected {self.n_features_in_} features, got {x_arr.shape[1]}")
    probs = cast(Any, self.model_).predict_proba(self._as_tensor_x(x_arr))
    return probs.detach().cpu().numpy()

save

Persist estimator configuration, model weights and fitted metadata.

Source code in highfis/estimators.py
def save(self, path: str) -> None:
    """Persist estimator configuration, model weights and fitted metadata."""
    checkpoint = self._build_checkpoint_base(
        model_init={
            "input_mfs": self.model_.input_mfs,
            "n_classes": len(self.classes_),
            "rule_base": self.rule_base_,
        },
        fitted_attrs={
            "n_features_in": int(self.n_features_in_),
            "feature_names_in": self.feature_names_in_.tolist(),
            "classes": self.classes_.tolist(),
        },
    )
    save_checkpoint(path, checkpoint)

score

Return classification accuracy on the provided dataset.

Source code in highfis/estimators.py
def score(self, X: Any, y: Any, sample_weight: Any = None) -> float:
    """Return classification accuracy on the provided dataset."""
    y_true = np.asarray(y)
    y_pred = self.predict(X)
    return float(accuracy_score(y_true, y_pred, sample_weight=sample_weight))

FSREAdaTSKRegressorEstimator

Bases: _BaseRegressorEstimator

FSRE-AdaTSK regressor with adaptive softmin antecedent and gated consequents.

FSRE-AdaTSK (Feature Selection and Rule Extraction) extends AdaTSK with trainable gates that perform integrated feature selection and rule extraction, so irrelevant features and redundant rules can be pruned during training.

Reference

G. Xue, Q. Chang, J. Wang, K. Zhang and N. R. Pal, "An Adaptive Neuro-Fuzzy System With Integrated Feature Selection and Rule Extraction for High-Dimensional Classification Problems," in IEEE Transactions on Fuzzy Systems, vol. 31, no. 7, pp. 2167-2181, July 2023, doi: 10.1109/TFUZZ.2022.3220950.

Example
from highfis import FSREAdaTSKRegressorEstimator

reg = FSREAdaTSKRegressorEstimator()
reg.fit(X_train, y_train)

Initialise an FSRE-AdaTSK regressor.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `lambda_init` | `float` | Initial ALE-softmin parameter `λ > 0` inherited by `DGALETSKRegressorEstimator`; not used by FSRE-AdaTSK proper (Ada-softmin computes its index from the current membership values). Default `1.0`. | `1.0` |
| `use_en_frb` | `bool` | If `True`, use the Enhanced FRB (En-FRB) for rule extraction. Default `False` keeps CoCo-FRB. | `False` |
| `input_configs` | `list[InputConfig] \| None` | Per-feature `InputConfig` list. Only `name` is used when `mf_init="kmeans"`. | `None` |
| `n_mfs` | `int` | Number of k-means clusters / grid MFs (default `5`). | `5` |
| `mf_init` | `str` | `"kmeans"` (default) or `"grid"`. | `'kmeans'` |
| `sigma_scale` | `float \| str` | Sigma scale factor. `1.0` recommended. | `1.0` |
| `random_state` | `int \| None` | Seed for k-means and weight initialisation. | `None` |
| `epochs` | `int` | Maximum training epochs (default `10`). | `10` |
| `learning_rate` | `float` | Adam learning rate (default `0.01`). | `0.01` |
| `verbose` | `bool \| int` | Print per-epoch progress. | `False` |
| `rule_base` | `str \| None` | `"coco"` or `"cartesian"`. | `None` |
| `batch_size` | `int \| None` | Mini-batch size (default `512`). | `512` |
| `shuffle` | `bool` | Reshuffle each epoch. | `True` |
| `ur_weight` | `float` | Uncertainty regularisation weight. | `0.0` |
| `ur_target` | `float \| None` | Uncertainty regularisation target. | `None` |
| `consequent_batch_norm` | `bool` | Batch normalisation on consequent layers. | `False` |
| `patience` | `int \| None` | Early-stopping patience (default `20`). Set to `None` to disable early stopping. | `20` |
| `restore_best` | `bool` | If `True` (default), restore the best validation model weights after training. | `True` |
| `validation_data` | `tuple[Any, Any] \| None` | Optional `(X_val, y_val)` for early stopping. | `None` |
| `weight_decay` | `float` | L2 weight decay for consequent parameters. | `1e-08` |

Raises:

| Type | Description |
| --- | --- |
| `ValueError` | If `lambda_init <= 0`. |

Source code in highfis/estimators.py
def __init__(
    self,
    *,
    lambda_init: float = 1.0,
    use_en_frb: bool = False,
    input_configs: list[InputConfig] | None = None,
    n_mfs: int = 5,
    mf_init: str = "kmeans",
    sigma_scale: float | str = 1.0,
    random_state: int | None = None,
    epochs: int = 10,
    learning_rate: float = 1e-2,
    verbose: bool | int = False,
    rule_base: str | None = None,
    batch_size: int | None = 512,
    shuffle: bool = True,
    ur_weight: float = 0.0,
    ur_target: float | None = None,
    consequent_batch_norm: bool = False,
    patience: int | None = 20,
    restore_best: bool = True,
    validation_data: tuple[Any, Any] | None = None,
    weight_decay: float = 1e-8,
) -> None:
    """Initialise an FSRE-AdaTSK regressor.

    Args:
        lambda_init: Initial ALE-softmin parameter ``λ > 0`` inherited
            by :class:`DGALETSKRegressorEstimator`; not used by
            FSRE-AdaTSK proper (Ada-softmin computes its index from
            the current membership values). Default ``1.0``.
        use_en_frb: If ``True``, use the Enhanced FRB (En-FRB) for rule
            extraction. Default ``False`` keeps CoCo-FRB.
        input_configs: Per-feature :class:`InputConfig` list. Only
            ``name`` is used when ``mf_init="kmeans"``.
        n_mfs: Number of k-means clusters / grid MFs (default ``5``).
        mf_init: ``"kmeans"`` (default) or ``"grid"``.
        sigma_scale: Sigma scale factor. ``1.0`` recommended.
        random_state: Seed for k-means and weight initialisation.
        epochs: Maximum training epochs (default ``10``).
        learning_rate: Adam learning rate (default ``0.01``).
        verbose: Print per-epoch progress.
        rule_base: ``"coco"`` or ``"cartesian"``.
        batch_size: Mini-batch size (default ``512``).
        shuffle: Reshuffle each epoch.
        ur_weight: Uncertainty regularisation weight.
        ur_target: Uncertainty regularisation target.
        consequent_batch_norm: Batch normalisation on consequent layers.
        patience: Early-stopping patience (default ``20``). Set to ``None`` to disable early stopping.
        restore_best: If ``True`` (default), restore the best validation
            model weights after training.
        validation_data: Optional ``(X_val, y_val)`` for early stopping.
        weight_decay: L2 weight decay for consequent parameters.

    Raises:
        ValueError: If ``lambda_init <= 0``.
    """
    if lambda_init <= 0.0:
        raise ValueError("lambda_init must be > 0")
    self.lambda_init = float(lambda_init)
    self.use_en_frb = bool(use_en_frb)
    super().__init__(
        input_configs=input_configs,
        n_mfs=n_mfs,
        mf_init=mf_init,
        sigma_scale=sigma_scale,
        random_state=random_state,
        epochs=epochs,
        learning_rate=learning_rate,
        verbose=verbose,
        rule_base=rule_base,
        batch_size=batch_size,
        shuffle=shuffle,
        ur_weight=ur_weight,
        ur_target=ur_target,
        consequent_batch_norm=consequent_batch_norm,
        patience=patience,
        restore_best=restore_best,
        validation_data=validation_data,
        weight_decay=weight_decay,
    )

evaluate

Compute regression evaluation metrics for the provided dataset.

Source code in highfis/estimators.py
def evaluate(
    self,
    X: Any,
    y: Any,
    metrics: list[str] | None = None,
    sample_weight: Any | None = None,
) -> dict[str, float]:
    """Compute regression evaluation metrics for the provided dataset."""
    y_true = np.asarray(y)
    y_pred = self.predict(X)
    return compute_metrics(
        task="regression",
        y_true=y_true,
        y_pred=y_pred,
        sample_weight=sample_weight,
        metrics=metrics,
    )

fit

Train the TSK regressor on labeled samples.

Source code in highfis/estimators.py
def fit(self, x: Any, y: Any) -> Self:
    """Train the TSK regressor on labeled samples."""
    x_arr, y_arr = check_X_y(x, y)

    if self.random_state is not None:
        torch.manual_seed(int(self.random_state))

    input_mfs, feature_names, effective_rule_base = self._build_input_mfs(x_arr)

    self.n_features_in_ = x_arr.shape[1]
    self.feature_names_in_ = np.asarray(feature_names, dtype=object)

    self.model_ = self._build_regressor_model(input_mfs, effective_rule_base)

    y_t = torch.as_tensor(np.asarray(y_arr, dtype=np.float32), dtype=torch.float32)

    # Prepare validation tensors if provided
    x_val_t: torch.Tensor | None = None
    y_val_t: torch.Tensor | None = None
    if self.validation_data is not None:
        x_v, y_v = self.validation_data
        x_v_arr = check_array(x_v)
        x_val_t = self._as_tensor_x(x_v_arr)
        y_val_t = torch.as_tensor(np.asarray(y_v, dtype=np.float32), dtype=torch.float32)

    self.history_ = self.model_.fit(
        self._as_tensor_x(x_arr),
        y_t,
        epochs=int(self.epochs),
        learning_rate=float(self.learning_rate),
        batch_size=self.batch_size,
        shuffle=bool(self.shuffle),
        ur_weight=float(self.ur_weight),
        ur_target=self.ur_target,
        verbose=self.verbose,
        x_val=x_val_t,
        y_val=y_val_t,
        patience=self.patience,
        restore_best=self.restore_best,
        weight_decay=float(self.weight_decay),
    )
    self.rule_base_ = effective_rule_base
    return self

load classmethod

Load a persisted estimator created by save.

Source code in highfis/estimators.py
@classmethod
def load(cls, path: str) -> Self:
    """Load a persisted estimator created by save."""
    checkpoint = load_checkpoint(path)
    validate_checkpoint_payload(checkpoint, expected_estimator_class=cls.__name__)

    estimator = cls(**checkpoint["estimator_params"])
    model_init = checkpoint["model_init"]
    estimator.rule_base_ = model_init["rule_base"]
    estimator.model_ = estimator._build_regressor_model(
        model_init["input_mfs"],
        str(model_init["rule_base"]),
    )
    estimator.model_.load_state_dict(checkpoint["model_state_dict"])

    fitted = checkpoint["fitted_attrs"]
    estimator.n_features_in_ = int(fitted["n_features_in"])
    estimator.feature_names_in_ = np.asarray(fitted["feature_names_in"], dtype=object)
    estimator.history_ = cast(dict[str, Any], checkpoint.get("history", {}))
    return estimator

predict

Predict continuous target values for input samples.

Source code in highfis/estimators.py
def predict(self, x: Any) -> np.ndarray:
    """Predict continuous target values for input samples."""
    check_is_fitted(self, "model_")
    x_arr = check_array(x)
    if x_arr.shape[1] != self.n_features_in_:
        raise ValueError(f"expected {self.n_features_in_} features, got {x_arr.shape[1]}")
    preds = cast(Any, self.model_).predict(self._as_tensor_x(x_arr))
    return preds.detach().cpu().numpy()

save

Persist estimator configuration, model weights and fitted metadata.

Source code in highfis/estimators.py
def save(self, path: str) -> None:
    """Persist estimator configuration, model weights and fitted metadata."""
    checkpoint = self._build_checkpoint_base(
        model_init={
            "input_mfs": self.model_.input_mfs,
            "rule_base": self.rule_base_,
        },
        fitted_attrs={
            "n_features_in": int(self.n_features_in_),
            "feature_names_in": self.feature_names_in_.tolist(),
        },
    )
    save_checkpoint(path, checkpoint)

HDFISMinClassifierEstimator

Bases: _BaseClassifierEstimator

HDFIS-min classifier estimator with minimum T-norm antecedents.

HDFIS-min freezes the antecedent membership parameters and uses minimum T-norm aggregation in the antecedent, so only the consequent parameters are optimised during training. This matches the paper's observation that minimum-based high-dimensional inference is best handled by fixing the antecedent structure and training only the rule consequents.
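
The frozen-antecedent pattern is straightforward to express in PyTorch; a generic sketch with hypothetical names (not highfis internals): membership parameters live in buffers, so an optimiser built from parameters() only ever sees the consequent weights.

import torch
from torch import nn

class FrozenAntecedentToy(nn.Module):
    def __init__(self, centers: torch.Tensor, sigmas: torch.Tensor, n_out: int):
        super().__init__()
        # Buffers are saved with the model but excluded from parameters(),
        # so an optimiser over self.parameters() never updates them.
        self.register_buffer("centers", centers)  # (n_rules, n_features)
        self.register_buffer("sigmas", sigmas)    # (n_rules, n_features)
        self.consequent = nn.Linear(centers.shape[0], n_out)  # trainable

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Gaussian memberships per rule/feature, then minimum T-norm over features.
        mu = torch.exp(-((x.unsqueeze(1) - self.centers) ** 2) / (2 * self.sigmas**2))
        firing = mu.min(dim=-1).values  # (batch, n_rules)
        return self.consequent(firing)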

Reference

G. Xue, J. Wang, K. Zhang and N. R. Pal, "High-Dimensional Fuzzy Inference Systems," in IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 54, no. 1, pp. 507-519, Jan. 2024, doi: 10.1109/TSMC.2023.3311475.

Example
from highfis import HDFISMinClassifierEstimator

clf = HDFISMinClassifierEstimator()
clf.fit(X_train, y_train)
preds = clf.predict(X_test)

Initialise an HDFIS-min classifier estimator.

Source code in highfis/estimators.py
def __init__(
    self,
    *,
    input_configs: list[InputConfig] | None = None,
    n_mfs: int = 5,
    mf_init: str = "kmeans",
    sigma_scale: float | str = 1.0,
    random_state: int | None = None,
    epochs: int = 10,
    learning_rate: float = 1e-2,
    verbose: bool | int = False,
    rule_base: str | None = None,
    batch_size: int | None = 512,
    shuffle: bool = True,
    ur_weight: float = 0.0,
    ur_target: float | None = None,
    consequent_batch_norm: bool = False,
    pfrb_max_rules: int | None = None,
    patience: int | None = 20,
    restore_best: bool = True,
    validation_data: tuple[Any, Any] | None = None,
    weight_decay: float = 1e-8,
) -> None:
    """Initialise an HDFIS-min classifier estimator."""
    super().__init__(
        input_configs=input_configs,
        n_mfs=n_mfs,
        mf_init=mf_init,
        sigma_scale=sigma_scale,
        random_state=random_state,
        epochs=epochs,
        learning_rate=learning_rate,
        verbose=verbose,
        rule_base=rule_base,
        batch_size=batch_size,
        shuffle=shuffle,
        ur_weight=ur_weight,
        ur_target=ur_target,
        consequent_batch_norm=consequent_batch_norm,
        pfrb_max_rules=pfrb_max_rules,
        patience=patience,
        restore_best=restore_best,
        validation_data=validation_data,
        weight_decay=weight_decay,
    )

evaluate

Compute classification evaluation metrics for the provided dataset.

Source code in highfis/estimators.py
def evaluate(
    self,
    X: Any,
    y: Any,
    metrics: list[str] | None = None,
    sample_weight: Any | None = None,
) -> dict[str, float]:
    """Compute classification evaluation metrics for the provided dataset."""
    y_true = np.asarray(y)
    y_pred = self.predict(X)
    return compute_metrics(
        task="classification",
        y_true=y_true,
        y_pred=y_pred,
        sample_weight=sample_weight,
        metrics=metrics,
    )

fit

Train the TSK classifier on labeled samples.

Source code in highfis/estimators.py
def fit(self, x: Any, y: Any) -> Self:
    """Train the TSK classifier on labeled samples."""
    x_arr, y_arr = check_X_y(x, y)

    if self.random_state is not None:
        torch.manual_seed(int(self.random_state))

    le = LabelEncoder()
    y_idx = le.fit_transform(np.asarray(y_arr))

    input_mfs, feature_names, effective_rule_base = self._build_input_mfs(x_arr)

    self.n_features_in_ = x_arr.shape[1]
    self.feature_names_in_ = np.asarray(feature_names, dtype=object)
    self.classes_ = le.classes_
    self._label_encoder_ = le

    self.model_ = self._build_model(input_mfs, len(self.classes_), effective_rule_base)

    y_t = torch.as_tensor(y_idx, dtype=torch.long)

    # Prepare validation tensors if provided
    x_val_t: torch.Tensor | None = None
    y_val_t: torch.Tensor | None = None
    if self.validation_data is not None:
        x_v, y_v = self.validation_data
        x_v_arr = check_array(x_v)
        y_v_idx = self._label_encoder_.transform(np.asarray(y_v))
        x_val_t = self._as_tensor_x(x_v_arr)
        y_val_t = torch.as_tensor(y_v_idx, dtype=torch.long)

    self.history_ = self.model_.fit(
        self._as_tensor_x(x_arr),
        y_t,
        epochs=int(self.epochs),
        learning_rate=float(self.learning_rate),
        batch_size=self.batch_size,
        shuffle=bool(self.shuffle),
        ur_weight=float(self.ur_weight),
        ur_target=self.ur_target,
        verbose=self.verbose,
        x_val=x_val_t,
        y_val=y_val_t,
        patience=self.patience,
        restore_best=self.restore_best,
        weight_decay=float(self.weight_decay),
    )
    self.rule_base_ = effective_rule_base
    return self

load classmethod

Load a persisted estimator created by save.

Source code in highfis/estimators.py
@classmethod
def load(cls, path: str) -> Self:
    """Load a persisted estimator created by save."""
    checkpoint = load_checkpoint(path)
    validate_checkpoint_payload(checkpoint, expected_estimator_class=cls.__name__)

    estimator = cls(**checkpoint["estimator_params"])
    model_init = checkpoint["model_init"]
    estimator.rule_base_ = model_init["rule_base"]
    estimator.model_ = estimator._build_model(
        model_init["input_mfs"],
        int(model_init["n_classes"]),
        str(model_init["rule_base"]),
    )
    estimator.model_.load_state_dict(checkpoint["model_state_dict"])

    fitted = checkpoint["fitted_attrs"]
    estimator.n_features_in_ = int(fitted["n_features_in"])
    estimator.feature_names_in_ = np.asarray(fitted["feature_names_in"], dtype=object)
    estimator.classes_ = np.asarray(fitted["classes"], dtype=object)
    label_encoder = LabelEncoder()
    label_encoder.classes_ = estimator.classes_
    estimator._label_encoder_ = label_encoder
    estimator.history_ = cast(dict[str, Any], checkpoint.get("history", {}))
    return estimator

predict

Predict class labels for input samples.

Source code in highfis/estimators.py
def predict(self, x: Any) -> np.ndarray:
    """Predict class labels for input samples."""
    proba = self.predict_proba(x)
    y_idx = np.argmax(proba, axis=1)
    return np.asarray(self._label_encoder_.inverse_transform(y_idx))

predict_proba

Predict class probabilities for input samples.

Source code in highfis/estimators.py
def predict_proba(self, x: Any) -> np.ndarray:
    """Predict class probabilities for input samples."""
    check_is_fitted(self, "model_")
    x_arr = check_array(x)
    if x_arr.shape[1] != self.n_features_in_:
        raise ValueError(f"expected {self.n_features_in_} features, got {x_arr.shape[1]}")
    probs = cast(Any, self.model_).predict_proba(self._as_tensor_x(x_arr))
    return probs.detach().cpu().numpy()

save

Persist estimator configuration, model weights and fitted metadata.

Source code in highfis/estimators.py
def save(self, path: str) -> None:
    """Persist estimator configuration, model weights and fitted metadata."""
    checkpoint = self._build_checkpoint_base(
        model_init={
            "input_mfs": self.model_.input_mfs,
            "n_classes": len(self.classes_),
            "rule_base": self.rule_base_,
        },
        fitted_attrs={
            "n_features_in": int(self.n_features_in_),
            "feature_names_in": self.feature_names_in_.tolist(),
            "classes": self.classes_.tolist(),
        },
    )
    save_checkpoint(path, checkpoint)
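
A typical persistence round trip using the save and load methods shown here (the checkpoint path is illustrative):

clf.save("hdfis_min.ckpt")
restored = HDFISMinClassifierEstimator.load("hdfis_min.ckpt")
assert (restored.predict(X_test) == clf.predict(X_test)).all()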

score

Return classification accuracy on the provided dataset.

Source code in highfis/estimators.py
def score(self, X: Any, y: Any, sample_weight: Any = None) -> float:
    """Return classification accuracy on the provided dataset."""
    y_true = np.asarray(y)
    y_pred = self.predict(X)
    return float(accuracy_score(y_true, y_pred, sample_weight=sample_weight))

HDFISMinRegressorEstimator

Bases: _BaseRegressorEstimator

HDFIS-min regressor estimator with minimum T-norm antecedents.

HDFIS-min freezes antecedent membership parameters and uses a minimum T-norm aggregation in the antecedent, so that only consequent parameters are optimized during training. This design avoids the nondifferentiability of the minimum operator while preserving first-order TSK consequents.

References

G. Xue, J. Wang, K. Zhang and N. R. Pal, "High-Dimensional Fuzzy Inference Systems," in IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 54, no. 1, pp. 507-519, Jan. 2024, doi: 10.1109/TSMC.2023.3311475.

Example
from highfis import HDFISMinRegressorEstimator

reg = HDFISMinRegressorEstimator()
reg.fit(X_train, y_train)
preds = reg.predict(X_test)

Initialise an HDFIS-min regressor estimator.

Source code in highfis/estimators.py
def __init__(
    self,
    *,
    input_configs: list[InputConfig] | None = None,
    n_mfs: int = 5,
    mf_init: str = "kmeans",
    sigma_scale: float | str = 1.0,
    random_state: int | None = None,
    epochs: int = 10,
    learning_rate: float = 1e-2,
    verbose: bool | int = False,
    rule_base: str | None = None,
    batch_size: int | None = 512,
    shuffle: bool = True,
    ur_weight: float = 0.0,
    ur_target: float | None = None,
    consequent_batch_norm: bool = False,
    patience: int | None = 20,
    restore_best: bool = True,
    validation_data: tuple[Any, Any] | None = None,
    weight_decay: float = 1e-8,
) -> None:
    """Initialise an HDFIS-min regressor estimator."""
    super().__init__(
        input_configs=input_configs,
        n_mfs=n_mfs,
        mf_init=mf_init,
        sigma_scale=sigma_scale,
        random_state=random_state,
        epochs=epochs,
        learning_rate=learning_rate,
        verbose=verbose,
        rule_base=rule_base,
        batch_size=batch_size,
        shuffle=shuffle,
        ur_weight=ur_weight,
        ur_target=ur_target,
        consequent_batch_norm=consequent_batch_norm,
        patience=patience,
        restore_best=restore_best,
        validation_data=validation_data,
        weight_decay=weight_decay,
    )

evaluate

Compute regression evaluation metrics for the provided dataset.

Source code in highfis/estimators.py
def evaluate(
    self,
    X: Any,
    y: Any,
    metrics: list[str] | None = None,
    sample_weight: Any | None = None,
) -> dict[str, float]:
    """Compute regression evaluation metrics for the provided dataset."""
    y_true = np.asarray(y)
    y_pred = self.predict(X)
    return compute_metrics(
        task="regression",
        y_true=y_true,
        y_pred=y_pred,
        sample_weight=sample_weight,
        metrics=metrics,
    )

fit

Train the TSK regressor on labeled samples.

Source code in highfis/estimators.py
def fit(self, x: Any, y: Any) -> Self:
    """Train the TSK regressor on labeled samples."""
    x_arr, y_arr = check_X_y(x, y)

    if self.random_state is not None:
        torch.manual_seed(int(self.random_state))

    input_mfs, feature_names, effective_rule_base = self._build_input_mfs(x_arr)

    self.n_features_in_ = x_arr.shape[1]
    self.feature_names_in_ = np.asarray(feature_names, dtype=object)

    self.model_ = self._build_regressor_model(input_mfs, effective_rule_base)

    y_t = torch.as_tensor(np.asarray(y_arr, dtype=np.float32), dtype=torch.float32)

    # Prepare validation tensors if provided
    x_val_t: torch.Tensor | None = None
    y_val_t: torch.Tensor | None = None
    if self.validation_data is not None:
        x_v, y_v = self.validation_data
        x_v_arr = check_array(x_v)
        x_val_t = self._as_tensor_x(x_v_arr)
        y_val_t = torch.as_tensor(np.asarray(y_v, dtype=np.float32), dtype=torch.float32)

    self.history_ = self.model_.fit(
        self._as_tensor_x(x_arr),
        y_t,
        epochs=int(self.epochs),
        learning_rate=float(self.learning_rate),
        batch_size=self.batch_size,
        shuffle=bool(self.shuffle),
        ur_weight=float(self.ur_weight),
        ur_target=self.ur_target,
        verbose=self.verbose,
        x_val=x_val_t,
        y_val=y_val_t,
        patience=self.patience,
        restore_best=self.restore_best,
        weight_decay=float(self.weight_decay),
    )
    self.rule_base_ = effective_rule_base
    return self

load classmethod

Load a persisted estimator created by save.

Source code in highfis/estimators.py
@classmethod
def load(cls, path: str) -> Self:
    """Load a persisted estimator created by save."""
    checkpoint = load_checkpoint(path)
    validate_checkpoint_payload(checkpoint, expected_estimator_class=cls.__name__)

    estimator = cls(**checkpoint["estimator_params"])
    model_init = checkpoint["model_init"]
    estimator.rule_base_ = model_init["rule_base"]
    estimator.model_ = estimator._build_regressor_model(
        model_init["input_mfs"],
        str(model_init["rule_base"]),
    )
    estimator.model_.load_state_dict(checkpoint["model_state_dict"])

    fitted = checkpoint["fitted_attrs"]
    estimator.n_features_in_ = int(fitted["n_features_in"])
    estimator.feature_names_in_ = np.asarray(fitted["feature_names_in"], dtype=object)
    estimator.history_ = cast(dict[str, Any], checkpoint.get("history", {}))
    return estimator

predict

Predict continuous target values for input samples.

Source code in highfis/estimators.py
def predict(self, x: Any) -> np.ndarray:
    """Predict continuous target values for input samples."""
    check_is_fitted(self, "model_")
    x_arr = check_array(x)
    if x_arr.shape[1] != self.n_features_in_:
        raise ValueError(f"expected {self.n_features_in_} features, got {x_arr.shape[1]}")
    preds = cast(Any, self.model_).predict(self._as_tensor_x(x_arr))
    return preds.detach().cpu().numpy()

save

Persist estimator configuration, model weights and fitted metadata.

Source code in highfis/estimators.py
def save(self, path: str) -> None:
    """Persist estimator configuration, model weights and fitted metadata."""
    checkpoint = self._build_checkpoint_base(
        model_init={
            "input_mfs": self.model_.input_mfs,
            "rule_base": self.rule_base_,
        },
        fitted_attrs={
            "n_features_in": int(self.n_features_in_),
            "feature_names_in": self.feature_names_in_.tolist(),
        },
    )
    save_checkpoint(path, checkpoint)

HDFISProdClassifierEstimator

Bases: _BaseClassifierEstimator

HDFIS-prod classifier estimator with dimension-dependent Gaussian MFs.

HDFIS-prod combines the standard product T-norm with a dimension-dependent Gaussian membership function (DMF) to avoid numeric underflow in very high-dimensional feature spaces while preserving first-order TSK consequents.
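
As a rough illustration of the underflow problem and one way a DMF-style guard addresses it. This is a sketch only: the exact DMF formula is not reproduced here, and the clamp at xi is an assumption motivated by the estimator's xi default of 745.0, which is approximately the float64 underflow bound for exp.

import torch

def dmf_log_firing(x, centers, sigmas, xi=745.0):
    # Product T-norm in log space: sum the per-feature log-memberships.
    d2 = ((x.unsqueeze(1) - centers) ** 2) / (2 * sigmas**2)
    log_fire = -d2.sum(dim=-1)      # (batch, n_rules)
    # Illustrative underflow guard: exp(log_fire) becomes exactly 0 in
    # float64 once log_fire < ~-745, erasing all rule information.
    return log_fire.clamp(min=-xi)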

References

G. Xue, J. Wang, K. Zhang and N. R. Pal, "High-Dimensional Fuzzy Inference Systems," in IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 54, no. 1, pp. 507-519, Jan. 2024, doi: 10.1109/TSMC.2023.3311475.

Example
from highfis import HDFISProdClassifierEstimator

clf = HDFISProdClassifierEstimator()
clf.fit(X_train, y_train)
preds = clf.predict(X_test)

Initialise an HDFIS-prod classifier estimator.

Source code in highfis/estimators.py
def __init__(
    self,
    *,
    input_configs: list[InputConfig] | None = None,
    n_mfs: int = 5,
    mf_init: str = "kmeans",
    sigma_scale: float | str = 1.0,
    random_state: int | None = None,
    epochs: int = 10,
    learning_rate: float = 1e-2,
    verbose: bool | int = False,
    rule_base: str | None = None,
    batch_size: int | None = 512,
    shuffle: bool = True,
    ur_weight: float = 0.0,
    ur_target: float | None = None,
    consequent_batch_norm: bool = False,
    pfrb_max_rules: int | None = None,
    patience: int | None = 20,
    restore_best: bool = True,
    validation_data: tuple[Any, Any] | None = None,
    weight_decay: float = 1e-8,
    xi: float = 745.0,
    rho: float | None = None,
) -> None:
    """Initialise an HDFIS-prod classifier estimator."""
    super().__init__(
        input_configs=input_configs,
        n_mfs=n_mfs,
        mf_init=mf_init,
        sigma_scale=sigma_scale,
        random_state=random_state,
        epochs=epochs,
        learning_rate=learning_rate,
        verbose=verbose,
        rule_base=rule_base,
        batch_size=batch_size,
        shuffle=shuffle,
        ur_weight=ur_weight,
        ur_target=ur_target,
        consequent_batch_norm=consequent_batch_norm,
        pfrb_max_rules=pfrb_max_rules,
        patience=patience,
        restore_best=restore_best,
        validation_data=validation_data,
        weight_decay=weight_decay,
    )
    self.xi = float(xi)  # default ~745: exp(-x) underflows to 0 in float64 beyond this
    self.rho = rho

evaluate

Compute classification evaluation metrics for the provided dataset.

Source code in highfis/estimators.py
def evaluate(
    self,
    X: Any,
    y: Any,
    metrics: list[str] | None = None,
    sample_weight: Any | None = None,
) -> dict[str, float]:
    """Compute classification evaluation metrics for the provided dataset."""
    y_true = np.asarray(y)
    y_pred = self.predict(X)
    return compute_metrics(
        task="classification",
        y_true=y_true,
        y_pred=y_pred,
        sample_weight=sample_weight,
        metrics=metrics,
    )

fit

Train the TSK classifier on labeled samples.

Source code in highfis/estimators.py
def fit(self, x: Any, y: Any) -> Self:
    """Train the TSK classifier on labeled samples."""
    x_arr, y_arr = check_X_y(x, y)

    if self.random_state is not None:
        torch.manual_seed(int(self.random_state))

    le = LabelEncoder()
    y_idx = le.fit_transform(np.asarray(y_arr))

    input_mfs, feature_names, effective_rule_base = self._build_input_mfs(x_arr)

    self.n_features_in_ = x_arr.shape[1]
    self.feature_names_in_ = np.asarray(feature_names, dtype=object)
    self.classes_ = le.classes_
    self._label_encoder_ = le

    self.model_ = self._build_model(input_mfs, len(self.classes_), effective_rule_base)

    y_t = torch.as_tensor(y_idx, dtype=torch.long)

    # Prepare validation tensors if provided
    x_val_t: torch.Tensor | None = None
    y_val_t: torch.Tensor | None = None
    if self.validation_data is not None:
        x_v, y_v = self.validation_data
        x_v_arr = check_array(x_v)
        y_v_idx = self._label_encoder_.transform(np.asarray(y_v))
        x_val_t = self._as_tensor_x(x_v_arr)
        y_val_t = torch.as_tensor(y_v_idx, dtype=torch.long)

    self.history_ = self.model_.fit(
        self._as_tensor_x(x_arr),
        y_t,
        epochs=int(self.epochs),
        learning_rate=float(self.learning_rate),
        batch_size=self.batch_size,
        shuffle=bool(self.shuffle),
        ur_weight=float(self.ur_weight),
        ur_target=self.ur_target,
        verbose=self.verbose,
        x_val=x_val_t,
        y_val=y_val_t,
        patience=self.patience,
        restore_best=self.restore_best,
        weight_decay=float(self.weight_decay),
    )
    self.rule_base_ = effective_rule_base
    return self

load classmethod

Load a persisted estimator created by save.

Source code in highfis/estimators.py
@classmethod
def load(cls, path: str) -> Self:
    """Load a persisted estimator created by save."""
    checkpoint = load_checkpoint(path)
    validate_checkpoint_payload(checkpoint, expected_estimator_class=cls.__name__)

    estimator = cls(**checkpoint["estimator_params"])
    model_init = checkpoint["model_init"]
    estimator.rule_base_ = model_init["rule_base"]
    estimator.model_ = estimator._build_model(
        model_init["input_mfs"],
        int(model_init["n_classes"]),
        str(model_init["rule_base"]),
    )
    estimator.model_.load_state_dict(checkpoint["model_state_dict"])

    fitted = checkpoint["fitted_attrs"]
    estimator.n_features_in_ = int(fitted["n_features_in"])
    estimator.feature_names_in_ = np.asarray(fitted["feature_names_in"], dtype=object)
    estimator.classes_ = np.asarray(fitted["classes"], dtype=object)
    label_encoder = LabelEncoder()
    label_encoder.classes_ = estimator.classes_
    estimator._label_encoder_ = label_encoder
    estimator.history_ = cast(dict[str, Any], checkpoint.get("history", {}))
    return estimator

predict

Predict class labels for input samples.

Source code in highfis/estimators.py
def predict(self, x: Any) -> np.ndarray:
    """Predict class labels for input samples."""
    proba = self.predict_proba(x)
    y_idx = np.argmax(proba, axis=1)
    return np.asarray(self._label_encoder_.inverse_transform(y_idx))

predict_proba

Predict class probabilities for input samples.

Source code in highfis/estimators.py
def predict_proba(self, x: Any) -> np.ndarray:
    """Predict class probabilities for input samples."""
    check_is_fitted(self, "model_")
    x_arr = check_array(x)
    if x_arr.shape[1] != self.n_features_in_:
        raise ValueError(f"expected {self.n_features_in_} features, got {x_arr.shape[1]}")
    probs = cast(Any, self.model_).predict_proba(self._as_tensor_x(x_arr))
    return probs.detach().cpu().numpy()

save

Persist estimator configuration, model weights and fitted metadata.

Source code in highfis/estimators.py
def save(self, path: str) -> None:
    """Persist estimator configuration, model weights and fitted metadata."""
    checkpoint = self._build_checkpoint_base(
        model_init={
            "input_mfs": self.model_.input_mfs,
            "n_classes": len(self.classes_),
            "rule_base": self.rule_base_,
        },
        fitted_attrs={
            "n_features_in": int(self.n_features_in_),
            "feature_names_in": self.feature_names_in_.tolist(),
            "classes": self.classes_.tolist(),
        },
    )
    save_checkpoint(path, checkpoint)

score

Return classification accuracy on the provided dataset.

Source code in highfis/estimators.py
def score(self, X: Any, y: Any, sample_weight: Any = None) -> float:
    """Return classification accuracy on the provided dataset."""
    y_true = np.asarray(y)
    y_pred = self.predict(X)
    return float(accuracy_score(y_true, y_pred, sample_weight=sample_weight))

HDFISProdRegressorEstimator

Bases: _BaseRegressorEstimator

HDFIS-prod regressor estimator with dimension-dependent Gaussian MFs.

HDFIS-prod combines the standard product T-norm with a dimension-dependent Gaussian membership function (DMF) to avoid numeric underflow in very high-dimensional feature spaces while preserving first-order TSK consequents.

References

G. Xue, J. Wang, K. Zhang and N. R. Pal, "High-Dimensional Fuzzy Inference Systems," in IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 54, no. 1, pp. 507-519, Jan. 2024, doi: 10.1109/TSMC.2023.3311475.

Example
from highfis import HDFISProdRegressorEstimator

reg = HDFISProdRegressorEstimator()
reg.fit(X_train, y_train)
preds = reg.predict(X_test)

Initialise an HDFIS-prod regressor estimator.

Source code in highfis/estimators.py
def __init__(
    self,
    *,
    input_configs: list[InputConfig] | None = None,
    n_mfs: int = 5,
    mf_init: str = "kmeans",
    sigma_scale: float | str = 1.0,
    random_state: int | None = None,
    epochs: int = 10,
    learning_rate: float = 1e-2,
    verbose: bool | int = False,
    rule_base: str | None = None,
    batch_size: int | None = 512,
    shuffle: bool = True,
    ur_weight: float = 0.0,
    ur_target: float | None = None,
    consequent_batch_norm: bool = False,
    patience: int | None = 20,
    restore_best: bool = True,
    validation_data: tuple[Any, Any] | None = None,
    weight_decay: float = 1e-8,
    xi: float = 745.0,
    rho: float | None = None,
) -> None:
    """Initialise an HDFIS-prod regressor estimator."""
    super().__init__(
        input_configs=input_configs,
        n_mfs=n_mfs,
        mf_init=mf_init,
        sigma_scale=sigma_scale,
        random_state=random_state,
        epochs=epochs,
        learning_rate=learning_rate,
        verbose=verbose,
        rule_base=rule_base,
        batch_size=batch_size,
        shuffle=shuffle,
        ur_weight=ur_weight,
        ur_target=ur_target,
        consequent_batch_norm=consequent_batch_norm,
        patience=patience,
        restore_best=restore_best,
        validation_data=validation_data,
        weight_decay=weight_decay,
    )
    self.xi = float(xi)  # default ~745: exp(-x) underflows to 0 in float64 beyond this
    self.rho = rho

evaluate

Compute regression evaluation metrics for the provided dataset.

Source code in highfis/estimators.py
def evaluate(
    self,
    X: Any,
    y: Any,
    metrics: list[str] | None = None,
    sample_weight: Any | None = None,
) -> dict[str, float]:
    """Compute regression evaluation metrics for the provided dataset."""
    y_true = np.asarray(y)
    y_pred = self.predict(X)
    return compute_metrics(
        task="regression",
        y_true=y_true,
        y_pred=y_pred,
        sample_weight=sample_weight,
        metrics=metrics,
    )

fit

Train the TSK regressor on labeled samples.

Source code in highfis/estimators.py
def fit(self, x: Any, y: Any) -> Self:
    """Train the TSK regressor on labeled samples."""
    x_arr, y_arr = check_X_y(x, y)

    if self.random_state is not None:
        torch.manual_seed(int(self.random_state))

    input_mfs, feature_names, effective_rule_base = self._build_input_mfs(x_arr)

    self.n_features_in_ = x_arr.shape[1]
    self.feature_names_in_ = np.asarray(feature_names, dtype=object)

    self.model_ = self._build_regressor_model(input_mfs, effective_rule_base)

    y_t = torch.as_tensor(np.asarray(y_arr, dtype=np.float32), dtype=torch.float32)

    # Prepare validation tensors if provided
    x_val_t: torch.Tensor | None = None
    y_val_t: torch.Tensor | None = None
    if self.validation_data is not None:
        x_v, y_v = self.validation_data
        x_v_arr = check_array(x_v)
        x_val_t = self._as_tensor_x(x_v_arr)
        y_val_t = torch.as_tensor(np.asarray(y_v, dtype=np.float32), dtype=torch.float32)

    self.history_ = self.model_.fit(
        self._as_tensor_x(x_arr),
        y_t,
        epochs=int(self.epochs),
        learning_rate=float(self.learning_rate),
        batch_size=self.batch_size,
        shuffle=bool(self.shuffle),
        ur_weight=float(self.ur_weight),
        ur_target=self.ur_target,
        verbose=self.verbose,
        x_val=x_val_t,
        y_val=y_val_t,
        patience=self.patience,
        restore_best=self.restore_best,
        weight_decay=float(self.weight_decay),
    )
    self.rule_base_ = effective_rule_base
    return self

load classmethod

Load a persisted estimator created by save.

Source code in highfis/estimators.py
@classmethod
def load(cls, path: str) -> Self:
    """Load a persisted estimator created by save."""
    checkpoint = load_checkpoint(path)
    validate_checkpoint_payload(checkpoint, expected_estimator_class=cls.__name__)

    estimator = cls(**checkpoint["estimator_params"])
    model_init = checkpoint["model_init"]
    estimator.rule_base_ = model_init["rule_base"]
    estimator.model_ = estimator._build_regressor_model(
        model_init["input_mfs"],
        str(model_init["rule_base"]),
    )
    estimator.model_.load_state_dict(checkpoint["model_state_dict"])

    fitted = checkpoint["fitted_attrs"]
    estimator.n_features_in_ = int(fitted["n_features_in"])
    estimator.feature_names_in_ = np.asarray(fitted["feature_names_in"], dtype=object)
    estimator.history_ = cast(dict[str, Any], checkpoint.get("history", {}))
    return estimator

predict

Predict continuous target values for input samples.

Source code in highfis/estimators.py
def predict(self, x: Any) -> np.ndarray:
    """Predict continuous target values for input samples."""
    check_is_fitted(self, "model_")
    x_arr = check_array(x)
    if x_arr.shape[1] != self.n_features_in_:
        raise ValueError(f"expected {self.n_features_in_} features, got {x_arr.shape[1]}")
    preds = cast(Any, self.model_).predict(self._as_tensor_x(x_arr))
    return preds.detach().cpu().numpy()

save

Persist estimator configuration, model weights and fitted metadata.

Source code in highfis/estimators.py
def save(self, path: str) -> None:
    """Persist estimator configuration, model weights and fitted metadata."""
    checkpoint = self._build_checkpoint_base(
        model_init={
            "input_mfs": self.model_.input_mfs,
            "rule_base": self.rule_base_,
        },
        fitted_attrs={
            "n_features_in": int(self.n_features_in_),
            "feature_names_in": self.feature_names_in_.tolist(),
        },
    )
    save_checkpoint(path, checkpoint)

HTSKClassifierEstimator

Bases: _BaseClassifierEstimator

HTSK classifier for high-dimensional TSK inference.

HTSK replaces the standard product T-norm with a geometric mean over membership values and performs rule normalization in log-space.

References

Y. Cui, D. Wu and Y. Xu, "Curse of Dimensionality for TSK Fuzzy Neural Networks: Explanation and Solutions," 2021 International Joint Conference on Neural Networks (IJCNN), Shenzhen, China, 2021, pp. 1-8, doi: 10.1109/IJCNN52387.2021.9534265.

Example
from highfis import HTSKClassifierEstimator

clf = HTSKClassifierEstimator()
clf.fit(X_train, y_train)
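
A minimal sketch of the geometric-mean idea, with assumed Gaussian MF parameters: averaging (rather than summing) the per-feature log-memberships keeps the exponent's magnitude independent of the input dimension, and the softmax performs the rule normalization directly in log-space.

import torch

def htsk_rule_weights(x, centers, sigmas):
    # x: (batch, d); centers, sigmas: (n_rules, d)
    log_mu = -((x.unsqueeze(1) - centers) ** 2) / (2 * sigmas**2)
    mean_log = log_mu.mean(dim=-1)          # geometric mean, in log-space
    return torch.softmax(mean_log, dim=-1)  # normalized rule weights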

Initialise an HTSK classifier.

Parameters:

- input_configs (list[InputConfig] | None, default None): Per-feature InputConfig list. Only name is used when mf_init="kmeans".
- n_mfs (int, default 3): Number of k-means clusters / grid MFs.
- mf_init (str, default "kmeans"): "kmeans" or "grid".
- sigma_scale (float | str, default 1.0): Sigma scale factor. 1.0 is recommended for HTSK.
- random_state (int | None, default None): Seed for k-means and weight initialisation.
- epochs (int, default 10): Maximum training epochs.
- learning_rate (float, default 0.01): Adam learning rate.
- verbose (bool | int, default False): Print per-epoch progress.
- rule_base (str | None, default None): "coco" or "cartesian". Defaults to "coco" for kmeans and "cartesian" for grid.
- batch_size (int | None, default 512): Mini-batch size.
- shuffle (bool, default True): Reshuffle each epoch.
- ur_weight (float, default 0.0): Uncertainty regularisation weight.
- ur_target (float | None, default None): Uncertainty regularisation target.
- consequent_batch_norm (bool, default False): Batch normalisation on consequent layers.
- pfrb_max_rules (int | None, default None): Maximum point-based FRB rules (unused by HTSK).
- patience (int | None, default 20): Early-stopping patience. Set to None to disable early stopping.
- restore_best (bool, default True): If True, restore the best validation model weights after training.
- validation_data (tuple[Any, Any] | None, default None): Optional (X_val, y_val) for early stopping.
- weight_decay (float, default 1e-08): L2 weight decay for consequent parameters.
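
For example, early stopping against a held-out split, continuing the example above (X_val and y_val are illustrative names):

clf = HTSKClassifierEstimator(
    n_mfs=3,
    epochs=200,
    patience=20,                     # stop after 20 epochs without improvement
    validation_data=(X_val, y_val),  # monitored for early stopping
    restore_best=True,
    random_state=0,
)
clf.fit(X_train, y_train)
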
Source code in highfis/estimators.py
def __init__(
    self,
    *,
    input_configs: list[InputConfig] | None = None,
    n_mfs: int = 3,
    mf_init: str = "kmeans",
    sigma_scale: float | str = 1.0,
    random_state: int | None = None,
    epochs: int = 10,
    learning_rate: float = 1e-2,
    verbose: bool | int = False,
    rule_base: str | None = None,
    batch_size: int | None = 512,
    shuffle: bool = True,
    ur_weight: float = 0.0,
    ur_target: float | None = None,
    consequent_batch_norm: bool = False,
    pfrb_max_rules: int | None = None,
    patience: int | None = 20,
    restore_best: bool = True,
    validation_data: tuple[Any, Any] | None = None,
    weight_decay: float = 1e-8,
) -> None:
    """Initialise an HTSK classifier.

    Args:
        input_configs: Per-feature :class:`InputConfig` list. Only
            ``name`` is used when ``mf_init="kmeans"``.
        n_mfs: Number of k-means clusters / grid MFs.
        mf_init: ``"kmeans"`` (default) or ``"grid"``.
        sigma_scale: Sigma scale factor. ``1.0`` is recommended for HTSK.
        random_state: Seed for k-means and weight initialisation.
        epochs: Maximum training epochs (default ``10``).
        learning_rate: Adam learning rate (default ``0.01``).
        verbose: Print per-epoch progress.
        rule_base: ``"coco"`` or ``"cartesian"``. Defaults to
            ``"coco"`` for kmeans and ``"cartesian"`` for grid.
        batch_size: Mini-batch size (default ``512``).
        shuffle: Reshuffle each epoch.
        ur_weight: Uncertainty regularisation weight.
        ur_target: Uncertainty regularisation target.
        consequent_batch_norm: Batch normalisation on consequent layers.
        pfrb_max_rules: Maximum point-based FRB rules (unused by HTSK).
        patience: Early-stopping patience (default ``20``). Set to ``None`` to disable early stopping.
        restore_best: If ``True`` (default), restore the best validation
            model weights after training.
        validation_data: Optional ``(X_val, y_val)`` for early stopping.
        weight_decay: L2 weight decay for consequent parameters.
    """
    super().__init__(
        input_configs=input_configs,
        n_mfs=n_mfs,
        mf_init=mf_init,
        sigma_scale=sigma_scale,
        random_state=random_state,
        epochs=epochs,
        learning_rate=learning_rate,
        verbose=verbose,
        rule_base=rule_base,
        batch_size=batch_size,
        shuffle=shuffle,
        ur_weight=ur_weight,
        ur_target=ur_target,
        consequent_batch_norm=consequent_batch_norm,
        pfrb_max_rules=pfrb_max_rules,
        patience=patience,
        restore_best=restore_best,
        validation_data=validation_data,
        weight_decay=weight_decay,
    )

evaluate

Compute classification evaluation metrics for the provided dataset.

Source code in highfis/estimators.py
def evaluate(
    self,
    X: Any,
    y: Any,
    metrics: list[str] | None = None,
    sample_weight: Any | None = None,
) -> dict[str, float]:
    """Compute classification evaluation metrics for the provided dataset."""
    y_true = np.asarray(y)
    y_pred = self.predict(X)
    return compute_metrics(
        task="classification",
        y_true=y_true,
        y_pred=y_pred,
        sample_weight=sample_weight,
        metrics=metrics,
    )

fit

Train the TSK classifier on labeled samples.

Source code in highfis/estimators.py
def fit(self, x: Any, y: Any) -> Self:
    """Train the TSK classifier on labeled samples."""
    x_arr, y_arr = check_X_y(x, y)

    if self.random_state is not None:
        torch.manual_seed(int(self.random_state))

    le = LabelEncoder()
    y_idx = le.fit_transform(np.asarray(y_arr))

    input_mfs, feature_names, effective_rule_base = self._build_input_mfs(x_arr)

    self.n_features_in_ = x_arr.shape[1]
    self.feature_names_in_ = np.asarray(feature_names, dtype=object)
    self.classes_ = le.classes_
    self._label_encoder_ = le

    self.model_ = self._build_model(input_mfs, len(self.classes_), effective_rule_base)

    y_t = torch.as_tensor(y_idx, dtype=torch.long)

    # Prepare validation tensors if provided
    x_val_t: torch.Tensor | None = None
    y_val_t: torch.Tensor | None = None
    if self.validation_data is not None:
        x_v, y_v = self.validation_data
        x_v_arr = check_array(x_v)
        y_v_idx = self._label_encoder_.transform(np.asarray(y_v))
        x_val_t = self._as_tensor_x(x_v_arr)
        y_val_t = torch.as_tensor(y_v_idx, dtype=torch.long)

    self.history_ = self.model_.fit(
        self._as_tensor_x(x_arr),
        y_t,
        epochs=int(self.epochs),
        learning_rate=float(self.learning_rate),
        batch_size=self.batch_size,
        shuffle=bool(self.shuffle),
        ur_weight=float(self.ur_weight),
        ur_target=self.ur_target,
        verbose=self.verbose,
        x_val=x_val_t,
        y_val=y_val_t,
        patience=self.patience,
        restore_best=self.restore_best,
        weight_decay=float(self.weight_decay),
    )
    self.rule_base_ = effective_rule_base
    return self

load classmethod

Load a persisted estimator created by save.

Source code in highfis/estimators.py
@classmethod
def load(cls, path: str) -> Self:
    """Load a persisted estimator created by save."""
    checkpoint = load_checkpoint(path)
    validate_checkpoint_payload(checkpoint, expected_estimator_class=cls.__name__)

    estimator = cls(**checkpoint["estimator_params"])
    model_init = checkpoint["model_init"]
    estimator.rule_base_ = model_init["rule_base"]
    estimator.model_ = estimator._build_model(
        model_init["input_mfs"],
        int(model_init["n_classes"]),
        str(model_init["rule_base"]),
    )
    estimator.model_.load_state_dict(checkpoint["model_state_dict"])

    fitted = checkpoint["fitted_attrs"]
    estimator.n_features_in_ = int(fitted["n_features_in"])
    estimator.feature_names_in_ = np.asarray(fitted["feature_names_in"], dtype=object)
    estimator.classes_ = np.asarray(fitted["classes"], dtype=object)
    label_encoder = LabelEncoder()
    label_encoder.classes_ = estimator.classes_
    estimator._label_encoder_ = label_encoder
    estimator.history_ = cast(dict[str, Any], checkpoint.get("history", {}))
    return estimator

predict

Predict class labels for input samples.

Source code in highfis/estimators.py
def predict(self, x: Any) -> np.ndarray:
    """Predict class labels for input samples."""
    proba = self.predict_proba(x)
    y_idx = np.argmax(proba, axis=1)
    return np.asarray(self._label_encoder_.inverse_transform(y_idx))

predict_proba

Predict class probabilities for input samples.

Source code in highfis/estimators.py
def predict_proba(self, x: Any) -> np.ndarray:
    """Predict class probabilities for input samples."""
    check_is_fitted(self, "model_")
    x_arr = check_array(x)
    if x_arr.shape[1] != self.n_features_in_:
        raise ValueError(f"expected {self.n_features_in_} features, got {x_arr.shape[1]}")
    probs = cast(Any, self.model_).predict_proba(self._as_tensor_x(x_arr))
    return probs.detach().cpu().numpy()

save

Persist estimator configuration, model weights and fitted metadata.

Source code in highfis/estimators.py
def save(self, path: str) -> None:
    """Persist estimator configuration, model weights and fitted metadata."""
    checkpoint = self._build_checkpoint_base(
        model_init={
            "input_mfs": self.model_.input_mfs,
            "n_classes": len(self.classes_),
            "rule_base": self.rule_base_,
        },
        fitted_attrs={
            "n_features_in": int(self.n_features_in_),
            "feature_names_in": self.feature_names_in_.tolist(),
            "classes": self.classes_.tolist(),
        },
    )
    save_checkpoint(path, checkpoint)

score

Return classification accuracy on the provided dataset.

Source code in highfis/estimators.py
def score(self, X: Any, y: Any, sample_weight: Any = None) -> float:
    """Return classification accuracy on the provided dataset."""
    y_true = np.asarray(y)
    y_pred = self.predict(X)
    return float(accuracy_score(y_true, y_pred, sample_weight=sample_weight))

HTSKRegressorEstimator

Bases: _BaseRegressorEstimator

HTSK regressor for high-dimensional TSK inference.

HTSK replaces the standard product T-norm with a geometric mean over membership values and performs rule normalization in log-space.

References

Y. Cui, D. Wu and Y. Xu, "Curse of Dimensionality for TSK Fuzzy Neural Networks: Explanation and Solutions," 2021 International Joint Conference on Neural Networks (IJCNN), Shenzhen, China, 2021, pp. 1-8, doi: 10.1109/IJCNN52387.2021.9534265.

Example
from highfis import HTSKRegressorEstimator

reg = HTSKRegressorEstimator()
reg.fit(X_train, y_train)

Initialise an HTSK regressor.

Parameters:

- input_configs (list[InputConfig] | None, default None): Per-feature InputConfig list. Only name is used when mf_init="kmeans".
- n_mfs (int, default 3): Number of k-means clusters / grid MFs.
- mf_init (str, default "kmeans"): "kmeans" or "grid".
- sigma_scale (float | str, default 1.0): Scale factor for sigma initialisation when mf_init="kmeans". 1.0 is recommended for HTSK.
- random_state (int | None, default None): Seed for k-means and weight initialisation.
- epochs (int, default 10): Maximum training epochs.
- learning_rate (float, default 0.01): Adam learning rate.
- verbose (bool | int, default False): Print per-epoch progress.
- rule_base (str | None, default None): "coco" or "cartesian". Defaults to "coco" for kmeans and "cartesian" for grid.
- batch_size (int | None, default 512): Mini-batch size.
- shuffle (bool, default True): Reshuffle each epoch.
- ur_weight (float, default 0.0): Uncertainty regularisation weight.
- ur_target (float | None, default None): Uncertainty regularisation target.
- consequent_batch_norm (bool, default False): Batch normalisation on consequent layers.
- pfrb_max_rules (int | None, default None): Maximum point-based FRB rules (unused by HTSK).
- patience (int | None, default 20): Early-stopping patience. Set to None to disable early stopping.
- restore_best (bool, default True): If True, restore the best validation model weights after training.
- validation_data (tuple[Any, Any] | None, default None): Optional (X_val, y_val) for early stopping.
- weight_decay (float, default 1e-08): L2 weight decay for consequent parameters.
Source code in highfis/estimators.py
def __init__(
    self,
    *,
    input_configs: list[InputConfig] | None = None,
    n_mfs: int = 3,
    mf_init: str = "kmeans",
    sigma_scale: float | str = 1.0,
    random_state: int | None = None,
    epochs: int = 10,
    learning_rate: float = 1e-2,
    verbose: bool | int = False,
    rule_base: str | None = None,
    batch_size: int | None = 512,
    shuffle: bool = True,
    ur_weight: float = 0.0,
    ur_target: float | None = None,
    consequent_batch_norm: bool = False,
    pfrb_max_rules: int | None = None,
    patience: int | None = 20,
    restore_best: bool = True,
    validation_data: tuple[Any, Any] | None = None,
    weight_decay: float = 1e-8,
) -> None:
    """Initialise an HTSK regressor.

    Args:
        input_configs: Per-feature :class:`InputConfig` list. Only
            ``name`` is used when ``mf_init="kmeans"``.
        n_mfs: Number of k-means clusters / grid MFs.
        mf_init: ``"kmeans"`` (default) or ``"grid"``.
        sigma_scale: Scale factor for sigma initialisation when
            ``mf_init="kmeans"``. ``1.0`` is recommended for HTSK.
        random_state: Seed for k-means and weight initialisation.
        epochs: Maximum training epochs (default ``10``).
        learning_rate: Adam learning rate (default ``0.01``).
        verbose: Print per-epoch progress.
        rule_base: ``"coco"`` or ``"cartesian"``. Defaults to
            ``"coco"`` for kmeans and ``"cartesian"`` for grid.
        batch_size: Mini-batch size (default ``512``).
        shuffle: Reshuffle each epoch.
        ur_weight: Uncertainty regularisation weight.
        ur_target: Uncertainty regularisation target.
        consequent_batch_norm: Batch normalisation on consequent layers.
        pfrb_max_rules: Maximum point-based FRB rules (unused by HTSK).
        patience: Early-stopping patience (default ``20``). Set to ``None`` to disable early stopping.
        restore_best: If ``True`` (default), restore the best validation
            model weights after training.
        validation_data: Optional ``(X_val, y_val)`` for early stopping.
        weight_decay: L2 weight decay for consequent parameters.
    """
    super().__init__(
        input_configs=input_configs,
        n_mfs=n_mfs,
        mf_init=mf_init,
        sigma_scale=sigma_scale,
        random_state=random_state,
        epochs=epochs,
        learning_rate=learning_rate,
        verbose=verbose,
        rule_base=rule_base,
        batch_size=batch_size,
        shuffle=shuffle,
        ur_weight=ur_weight,
        ur_target=ur_target,
        consequent_batch_norm=consequent_batch_norm,
        patience=patience,
        restore_best=restore_best,
        validation_data=validation_data,
        weight_decay=weight_decay,
    )

evaluate

Compute regression evaluation metrics for the provided dataset.

Source code in highfis/estimators.py
def evaluate(
    self,
    X: Any,
    y: Any,
    metrics: list[str] | None = None,
    sample_weight: Any | None = None,
) -> dict[str, float]:
    """Compute regression evaluation metrics for the provided dataset."""
    y_true = np.asarray(y)
    y_pred = self.predict(X)
    return compute_metrics(
        task="regression",
        y_true=y_true,
        y_pred=y_pred,
        sample_weight=sample_weight,
        metrics=metrics,
    )

fit

Train the TSK regressor on labeled samples.

Source code in highfis/estimators.py
def fit(self, x: Any, y: Any) -> Self:
    """Train the TSK regressor on labeled samples."""
    x_arr, y_arr = check_X_y(x, y)

    if self.random_state is not None:
        torch.manual_seed(int(self.random_state))

    input_mfs, feature_names, effective_rule_base = self._build_input_mfs(x_arr)

    self.n_features_in_ = x_arr.shape[1]
    self.feature_names_in_ = np.asarray(feature_names, dtype=object)

    self.model_ = self._build_regressor_model(input_mfs, effective_rule_base)

    y_t = torch.as_tensor(np.asarray(y_arr, dtype=np.float32), dtype=torch.float32)

    # Prepare validation tensors if provided
    x_val_t: torch.Tensor | None = None
    y_val_t: torch.Tensor | None = None
    if self.validation_data is not None:
        x_v, y_v = self.validation_data
        x_v_arr = check_array(x_v)
        x_val_t = self._as_tensor_x(x_v_arr)
        y_val_t = torch.as_tensor(np.asarray(y_v, dtype=np.float32), dtype=torch.float32)

    self.history_ = self.model_.fit(
        self._as_tensor_x(x_arr),
        y_t,
        epochs=int(self.epochs),
        learning_rate=float(self.learning_rate),
        batch_size=self.batch_size,
        shuffle=bool(self.shuffle),
        ur_weight=float(self.ur_weight),
        ur_target=self.ur_target,
        verbose=self.verbose,
        x_val=x_val_t,
        y_val=y_val_t,
        patience=self.patience,
        restore_best=self.restore_best,
        weight_decay=float(self.weight_decay),
    )
    self.rule_base_ = effective_rule_base
    return self

load classmethod

Load a persisted estimator created by save.

Source code in highfis/estimators.py
@classmethod
def load(cls, path: str) -> Self:
    """Load a persisted estimator created by save."""
    checkpoint = load_checkpoint(path)
    validate_checkpoint_payload(checkpoint, expected_estimator_class=cls.__name__)

    estimator = cls(**checkpoint["estimator_params"])
    model_init = checkpoint["model_init"]
    estimator.rule_base_ = model_init["rule_base"]
    estimator.model_ = estimator._build_regressor_model(
        model_init["input_mfs"],
        str(model_init["rule_base"]),
    )
    estimator.model_.load_state_dict(checkpoint["model_state_dict"])

    fitted = checkpoint["fitted_attrs"]
    estimator.n_features_in_ = int(fitted["n_features_in"])
    estimator.feature_names_in_ = np.asarray(fitted["feature_names_in"], dtype=object)
    estimator.history_ = cast(dict[str, Any], checkpoint.get("history", {}))
    return estimator

predict

Predict continuous target values for input samples.

Source code in highfis/estimators.py
def predict(self, x: Any) -> np.ndarray:
    """Predict continuous target values for input samples."""
    check_is_fitted(self, "model_")
    x_arr = check_array(x)
    if x_arr.shape[1] != self.n_features_in_:
        raise ValueError(f"expected {self.n_features_in_} features, got {x_arr.shape[1]}")
    preds = cast(Any, self.model_).predict(self._as_tensor_x(x_arr))
    return preds.detach().cpu().numpy()

save

Persist estimator configuration, model weights and fitted metadata.

Source code in highfis/estimators.py
def save(self, path: str) -> None:
    """Persist estimator configuration, model weights and fitted metadata."""
    checkpoint = self._build_checkpoint_base(
        model_init={
            "input_mfs": self.model_.input_mfs,
            "rule_base": self.rule_base_,
        },
        fitted_attrs={
            "n_features_in": int(self.n_features_in_),
            "feature_names_in": self.feature_names_in_.tolist(),
        },
    )
    save_checkpoint(path, checkpoint)

InputConfig dataclass

Per-feature configuration for Gaussian MF grid initialisation.

This dataclass controls how membership functions are placed on a single input feature when mf_init="grid". When mf_init="kmeans" only the name field is used; centres and sigmas are derived from k-means cluster centroids.

Attributes:

- name (str): Feature name. Used as the key in the membership-function dictionary passed to the underlying TSK model.
- n_mfs (int): Number of Gaussian MFs to place on this feature. Must be >= 1.
- overlap (float): Spacing factor between neighbouring MF centres. A larger value widens each MF (more overlap); 0.5 corresponds to roughly half-width overlap at the midpoint between centres.
- margin (float): Fractional padding added to the observed feature range before centre placement. 0.10 extends each side of [x_min, x_max] by 10 percent so edge centres are not clipped to extreme values.

Example
from highfis.estimators import InputConfig

configs = [
    InputConfig(name="sepal_length", n_mfs=3),
    InputConfig(name="sepal_width", n_mfs=5, overlap=0.3),
]
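
A sketch of how these fields could drive grid placement (one plausible reading of the attribute descriptions above; highfis's exact formula may differ):

import numpy as np

def grid_mf_params(x_min, x_max, n_mfs=3, overlap=0.5, margin=0.10):
    pad = margin * (x_max - x_min)                  # widen the observed range
    centers = np.linspace(x_min - pad, x_max + pad, n_mfs)
    spacing = centers[1] - centers[0] if n_mfs > 1 else (x_max - x_min) or 1.0
    sigma = overlap * spacing                       # wider MFs overlap more
    return centers, np.full(n_mfs, sigma)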

LogTSKClassifierEstimator

Bases: _BaseClassifierEstimator

LogTSK classifier with inverse-log rule normalization.

LogTSK uses product antecedent aggregation and inverse-log normalization of log-domain rule strengths. The resulting rule weights are L1-normalized across rules, which makes the model scale-invariant in log-space and avoids the softmax saturation that occurs with high-dimensional inputs.

Reference

Y. Cui, D. Wu and Y. Xu, "Curse of Dimensionality for TSK Fuzzy Neural Networks: Explanation and Solutions," 2021 International Joint Conference on Neural Networks (IJCNN), Shenzhen, China, 2021, pp. 1-8, doi: 10.1109/IJCNN52387.2021.9534265.

Example
from highfis import LogTSKClassifierEstimator

clf = LogTSKClassifierEstimator()
clf.fit(X_train, y_train)
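
A sketch of one consistent reading of "inverse-log normalization" (illustrative only; the library's exact map from log-domain strengths to weights may differ): product aggregation yields log-strengths s <= 0, an inverse-log map turns them into bounded positive weights, and those weights are L1-normalized across rules.

import torch

def logtsk_rule_weights(x, centers, sigmas):
    # Log-domain rule strengths: sum of per-feature log-memberships, s <= 0.
    s = (-((x.unsqueeze(1) - centers) ** 2) / (2 * sigmas**2)).sum(dim=-1)
    w = 1.0 / (1.0 - s)                     # monotone in s, never underflows
    return w / w.sum(dim=-1, keepdim=True)  # L1 normalization across rules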

Initialise a LogTSK classifier.

Parameters:

- input_configs (list[InputConfig] | None, default None): Per-feature InputConfig list.
- n_mfs (int, default 5): Number of k-means clusters / grid MFs.
- mf_init (str, default "kmeans"): "kmeans" or "grid".
- sigma_scale (float | str, default 1.0): Sigma scale factor. 1.0 is recommended (the log-space defuzzifier is scale-invariant).
- random_state (int | None, default None): Seed for reproducibility.
- epochs (int, default 10): Maximum training epochs.
- learning_rate (float, default 0.01): Adam learning rate.
- verbose (bool | int, default False): Print per-epoch progress.
- rule_base (str | None, default None): "coco" or "cartesian".
- batch_size (int | None, default 512): Mini-batch size.
- shuffle (bool, default True): Reshuffle each epoch.
- ur_weight (float, default 0.0): Uncertainty regularisation weight.
- ur_target (float | None, default None): Uncertainty regularisation target.
- consequent_batch_norm (bool, default False): Batch normalisation on consequent layers.
- patience (int | None, default 20): Early-stopping patience. Set to None to disable early stopping.
- restore_best (bool, default True): If True, restore the best validation model weights after training.
- validation_data (tuple[Any, Any] | None, default None): Optional (X_val, y_val) for early stopping.
- weight_decay (float, default 1e-08): L2 weight decay for consequent parameters.
Source code in highfis/estimators.py
def __init__(
    self,
    *,
    input_configs: list[InputConfig] | None = None,
    n_mfs: int = 5,
    mf_init: str = "kmeans",
    sigma_scale: float | str = 1.0,
    random_state: int | None = None,
    epochs: int = 10,
    learning_rate: float = 1e-2,
    verbose: bool | int = False,
    rule_base: str | None = None,
    batch_size: int | None = 512,
    shuffle: bool = True,
    ur_weight: float = 0.0,
    ur_target: float | None = None,
    consequent_batch_norm: bool = False,
    patience: int | None = 20,
    restore_best: bool = True,
    validation_data: tuple[Any, Any] | None = None,
    weight_decay: float = 1e-8,
) -> None:
    """Initialise a LogTSK classifier.

    Args:
        input_configs: Per-feature :class:`InputConfig` list.
        n_mfs: Number of k-means clusters / grid MFs (default ``5``).
        mf_init: ``"kmeans"`` (default) or ``"grid"``.
        sigma_scale: Sigma scale factor. ``1.0`` is recommended (the
            log-space defuzzifier is scale-invariant).
        random_state: Seed for reproducibility.
        epochs: Maximum training epochs (default ``10``).
        learning_rate: Adam learning rate (default ``0.01``).
        verbose: Print per-epoch progress.
        rule_base: ``"coco"`` or ``"cartesian"``.
        batch_size: Mini-batch size (default ``512``).
        shuffle: Reshuffle each epoch.
        ur_weight: Uncertainty regularisation weight.
        ur_target: Uncertainty regularisation target.
        consequent_batch_norm: Batch normalisation on consequent layers.
        patience: Early-stopping patience (default ``20``). Set to ``None`` to disable early stopping.
        restore_best: If ``True`` (default), restore the best validation
            model weights after training.
        validation_data: Optional ``(X_val, y_val)`` for early stopping.
        weight_decay: L2 weight decay for consequent parameters.
    """
    super().__init__(
        input_configs=input_configs,
        n_mfs=n_mfs,
        mf_init=mf_init,
        sigma_scale=sigma_scale,
        random_state=random_state,
        epochs=epochs,
        learning_rate=learning_rate,
        verbose=verbose,
        rule_base=rule_base,
        batch_size=batch_size,
        shuffle=shuffle,
        ur_weight=ur_weight,
        ur_target=ur_target,
        consequent_batch_norm=consequent_batch_norm,
        patience=patience,
        restore_best=restore_best,
        validation_data=validation_data,
        weight_decay=weight_decay,
    )

evaluate

Compute classification evaluation metrics for the provided dataset.

Source code in highfis/estimators.py
def evaluate(
    self,
    X: Any,
    y: Any,
    metrics: list[str] | None = None,
    sample_weight: Any | None = None,
) -> dict[str, float]:
    """Compute classification evaluation metrics for the provided dataset."""
    y_true = np.asarray(y)
    y_pred = self.predict(X)
    return compute_metrics(
        task="classification",
        y_true=y_true,
        y_pred=y_pred,
        sample_weight=sample_weight,
        metrics=metrics,
    )

fit

Train the TSK classifier on labeled samples.

Source code in highfis/estimators.py
def fit(self, x: Any, y: Any) -> Self:
    """Train the TSK classifier on labeled samples."""
    x_arr, y_arr = check_X_y(x, y)

    if self.random_state is not None:
        torch.manual_seed(int(self.random_state))

    le = LabelEncoder()
    y_idx = le.fit_transform(np.asarray(y_arr))

    input_mfs, feature_names, effective_rule_base = self._build_input_mfs(x_arr)

    self.n_features_in_ = x_arr.shape[1]
    self.feature_names_in_ = np.asarray(feature_names, dtype=object)
    self.classes_ = le.classes_
    self._label_encoder_ = le

    self.model_ = self._build_model(input_mfs, len(self.classes_), effective_rule_base)

    y_t = torch.as_tensor(y_idx, dtype=torch.long)

    # Prepare validation tensors if provided
    x_val_t: torch.Tensor | None = None
    y_val_t: torch.Tensor | None = None
    if self.validation_data is not None:
        x_v, y_v = self.validation_data
        x_v_arr = check_array(x_v)
        y_v_idx = self._label_encoder_.transform(np.asarray(y_v))
        x_val_t = self._as_tensor_x(x_v_arr)
        y_val_t = torch.as_tensor(y_v_idx, dtype=torch.long)

    self.history_ = self.model_.fit(
        self._as_tensor_x(x_arr),
        y_t,
        epochs=int(self.epochs),
        learning_rate=float(self.learning_rate),
        batch_size=self.batch_size,
        shuffle=bool(self.shuffle),
        ur_weight=float(self.ur_weight),
        ur_target=self.ur_target,
        verbose=self.verbose,
        x_val=x_val_t,
        y_val=y_val_t,
        patience=self.patience,
        restore_best=self.restore_best,
        weight_decay=float(self.weight_decay),
    )
    self.rule_base_ = effective_rule_base
    return self

load classmethod

Load a persisted estimator created by save.

Source code in highfis/estimators.py
@classmethod
def load(cls, path: str) -> Self:
    """Load a persisted estimator created by save."""
    checkpoint = load_checkpoint(path)
    validate_checkpoint_payload(checkpoint, expected_estimator_class=cls.__name__)

    estimator = cls(**checkpoint["estimator_params"])
    model_init = checkpoint["model_init"]
    estimator.rule_base_ = model_init["rule_base"]
    estimator.model_ = estimator._build_model(
        model_init["input_mfs"],
        int(model_init["n_classes"]),
        str(model_init["rule_base"]),
    )
    estimator.model_.load_state_dict(checkpoint["model_state_dict"])

    fitted = checkpoint["fitted_attrs"]
    estimator.n_features_in_ = int(fitted["n_features_in"])
    estimator.feature_names_in_ = np.asarray(fitted["feature_names_in"], dtype=object)
    estimator.classes_ = np.asarray(fitted["classes"], dtype=object)
    label_encoder = LabelEncoder()
    label_encoder.classes_ = estimator.classes_
    estimator._label_encoder_ = label_encoder
    estimator.history_ = cast(dict[str, Any], checkpoint.get("history", {}))
    return estimator

predict

Predict class labels for input samples.

Source code in highfis/estimators.py
def predict(self, x: Any) -> np.ndarray:
    """Predict class labels for input samples."""
    proba = self.predict_proba(x)
    y_idx = np.argmax(proba, axis=1)
    return np.asarray(self._label_encoder_.inverse_transform(y_idx))

predict_proba

Predict class probabilities for input samples.

Source code in highfis/estimators.py
def predict_proba(self, x: Any) -> np.ndarray:
    """Predict class probabilities for input samples."""
    check_is_fitted(self, "model_")
    x_arr = check_array(x)
    if x_arr.shape[1] != self.n_features_in_:
        raise ValueError(f"expected {self.n_features_in_} features, got {x_arr.shape[1]}")
    probs = cast(Any, self.model_).predict_proba(self._as_tensor_x(x_arr))
    return probs.detach().cpu().numpy()

save

Persist estimator configuration, model weights and fitted metadata.

Source code in highfis/estimators.py
def save(self, path: str) -> None:
    """Persist estimator configuration, model weights and fitted metadata."""
    checkpoint = self._build_checkpoint_base(
        model_init={
            "input_mfs": self.model_.input_mfs,
            "n_classes": len(self.classes_),
            "rule_base": self.rule_base_,
        },
        fitted_attrs={
            "n_features_in": int(self.n_features_in_),
            "feature_names_in": self.feature_names_in_.tolist(),
            "classes": self.classes_.tolist(),
        },
    )
    save_checkpoint(path, checkpoint)

score

Return classification accuracy on the provided dataset.

Source code in highfis/estimators.py
def score(self, X: Any, y: Any, sample_weight: Any = None) -> float:
    """Return classification accuracy on the provided dataset."""
    y_true = np.asarray(y)
    y_pred = self.predict(X)
    return float(accuracy_score(y_true, y_pred, sample_weight=sample_weight))

LogTSKRegressorEstimator

Bases: _BaseRegressorEstimator

LogTSK regressor with inverse-log rule normalization.

LogTSK uses product antecedent aggregation and inverse-log normalization of log-domain rule strengths. The resulting rule weights are L1-normalized across rules, which makes the model scale-invariant in log-space and avoids the softmax saturation that occurs with high-dimensional inputs.

Reference

Y. Cui, D. Wu and Y. Xu, "Curse of Dimensionality for TSK Fuzzy Neural Networks: Explanation and Solutions," 2021 International Joint Conference on Neural Networks (IJCNN), Shenzhen, China, 2021, pp. 1-8, doi: 10.1109/IJCNN52387.2021.9534265.

Example
from highfis import LogTSKRegressorEstimator

reg = LogTSKRegressorEstimator()
reg.fit(X_train, y_train)

Initialise a LogTSK regressor.

Parameters:

- input_configs (list[InputConfig] | None, default None): Per-feature InputConfig list.
- n_mfs (int, default 5): Number of k-means clusters / grid MFs.
- mf_init (str, default "kmeans"): "kmeans" or "grid".
- sigma_scale (float | str, default 1.0): Sigma scale factor. 1.0 is recommended (the log-space defuzzifier is scale-invariant).
- random_state (int | None, default None): Seed for reproducibility.
- epochs (int, default 10): Maximum training epochs.
- learning_rate (float, default 0.01): Adam learning rate.
- verbose (bool | int, default False): Print per-epoch progress.
- rule_base (str | None, default None): "coco" or "cartesian".
- batch_size (int | None, default 512): Mini-batch size.
- shuffle (bool, default True): Reshuffle each epoch.
- ur_weight (float, default 0.0): Uncertainty regularisation weight.
- ur_target (float | None, default None): Uncertainty regularisation target.
- consequent_batch_norm (bool, default False): Batch normalisation on consequent layers.
- patience (int | None, default 20): Early-stopping patience. Set to None to disable early stopping.
- restore_best (bool, default True): If True, restore the best validation model weights after training.
- validation_data (tuple[Any, Any] | None, default None): Optional (X_val, y_val) for early stopping.
- weight_decay (float, default 1e-08): L2 weight decay for consequent parameters.
Source code in highfis/estimators.py
def __init__(
    self,
    *,
    input_configs: list[InputConfig] | None = None,
    n_mfs: int = 5,
    mf_init: str = "kmeans",
    sigma_scale: float | str = 1.0,
    random_state: int | None = None,
    epochs: int = 10,
    learning_rate: float = 1e-2,
    verbose: bool | int = False,
    rule_base: str | None = None,
    batch_size: int | None = 512,
    shuffle: bool = True,
    ur_weight: float = 0.0,
    ur_target: float | None = None,
    consequent_batch_norm: bool = False,
    patience: int | None = 20,
    restore_best: bool = True,
    validation_data: tuple[Any, Any] | None = None,
    weight_decay: float = 1e-8,
) -> None:
    """Initialise a LogTSK regressor.

    Args:
        input_configs: Per-feature :class:`InputConfig` list.
        n_mfs: Number of k-means clusters / grid MFs (default ``5``).
        mf_init: ``"kmeans"`` (default) or ``"grid"``.
        sigma_scale: Sigma scale factor. ``1.0`` is recommended (the
            log-space defuzzifier is scale-invariant).
        random_state: Seed for reproducibility.
        epochs: Maximum training epochs (default ``10``).
        learning_rate: Adam learning rate (default ``0.01``).
        verbose: Print per-epoch progress.
        rule_base: ``"coco"`` or ``"cartesian"``.
        batch_size: Mini-batch size (default ``512``).
        shuffle: Reshuffle each epoch.
        ur_weight: Uncertainty regularisation weight.
        ur_target: Uncertainty regularisation target.
        consequent_batch_norm: Batch normalisation on consequent layers.
        patience: Early-stopping patience (default ``20``). Set to ``None`` to disable early stopping.
        restore_best: If ``True`` (default), restore the best validation
            model weights after training.
        validation_data: Optional ``(X_val, y_val)`` for early stopping.
        weight_decay: L2 weight decay for consequent parameters.
    """
    super().__init__(
        input_configs=input_configs,
        n_mfs=n_mfs,
        mf_init=mf_init,
        sigma_scale=sigma_scale,
        random_state=random_state,
        epochs=epochs,
        learning_rate=learning_rate,
        verbose=verbose,
        rule_base=rule_base,
        batch_size=batch_size,
        shuffle=shuffle,
        ur_weight=ur_weight,
        ur_target=ur_target,
        consequent_batch_norm=consequent_batch_norm,
        patience=patience,
        restore_best=restore_best,
        validation_data=validation_data,
        weight_decay=weight_decay,
    )

evaluate

Compute regression evaluation metrics for the provided dataset.

Source code in highfis/estimators.py
def evaluate(
    self,
    X: Any,
    y: Any,
    metrics: list[str] | None = None,
    sample_weight: Any | None = None,
) -> dict[str, float]:
    """Compute regression evaluation metrics for the provided dataset."""
    y_true = np.asarray(y)
    y_pred = self.predict(X)
    return compute_metrics(
        task="regression",
        y_true=y_true,
        y_pred=y_pred,
        sample_weight=sample_weight,
        metrics=metrics,
    )

fit

Train the TSK regressor on labeled samples.

Source code in highfis/estimators.py
def fit(self, x: Any, y: Any) -> Self:
    """Train the TSK regressor on labeled samples."""
    x_arr, y_arr = check_X_y(x, y)

    if self.random_state is not None:
        torch.manual_seed(int(self.random_state))

    input_mfs, feature_names, effective_rule_base = self._build_input_mfs(x_arr)

    self.n_features_in_ = x_arr.shape[1]
    self.feature_names_in_ = np.asarray(feature_names, dtype=object)

    self.model_ = self._build_regressor_model(input_mfs, effective_rule_base)

    y_t = torch.as_tensor(np.asarray(y_arr, dtype=np.float32), dtype=torch.float32)

    # Prepare validation tensors if provided
    x_val_t: torch.Tensor | None = None
    y_val_t: torch.Tensor | None = None
    if self.validation_data is not None:
        x_v, y_v = self.validation_data
        x_v_arr = check_array(x_v)
        x_val_t = self._as_tensor_x(x_v_arr)
        y_val_t = torch.as_tensor(np.asarray(y_v, dtype=np.float32), dtype=torch.float32)

    self.history_ = self.model_.fit(
        self._as_tensor_x(x_arr),
        y_t,
        epochs=int(self.epochs),
        learning_rate=float(self.learning_rate),
        batch_size=self.batch_size,
        shuffle=bool(self.shuffle),
        ur_weight=float(self.ur_weight),
        ur_target=self.ur_target,
        verbose=self.verbose,
        x_val=x_val_t,
        y_val=y_val_t,
        patience=self.patience,
        restore_best=self.restore_best,
        weight_decay=float(self.weight_decay),
    )
    self.rule_base_ = effective_rule_base
    return self
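
The x_val / y_val plumbing above means early stopping only engages when validation_data is supplied. A minimal usage sketch, assuming X_train, y_train, X_val, y_val are pre-split arrays:

from highfis import LogTSKRegressorEstimator

reg = LogTSKRegressorEstimator(
    epochs=200,                      # upper bound; early stopping may end sooner
    patience=20,                     # epochs without validation improvement
    restore_best=True,               # roll back to the best validation weights
    validation_data=(X_val, y_val),
)
reg.fit(X_train, y_train)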

load classmethod

Load a persisted estimator created by save.

Source code in highfis/estimators.py
@classmethod
def load(cls, path: str) -> Self:
    """Load a persisted estimator created by save."""
    checkpoint = load_checkpoint(path)
    validate_checkpoint_payload(checkpoint, expected_estimator_class=cls.__name__)

    estimator = cls(**checkpoint["estimator_params"])
    model_init = checkpoint["model_init"]
    estimator.rule_base_ = model_init["rule_base"]
    estimator.model_ = estimator._build_regressor_model(
        model_init["input_mfs"],
        str(model_init["rule_base"]),
    )
    estimator.model_.load_state_dict(checkpoint["model_state_dict"])

    fitted = checkpoint["fitted_attrs"]
    estimator.n_features_in_ = int(fitted["n_features_in"])
    estimator.feature_names_in_ = np.asarray(fitted["feature_names_in"], dtype=object)
    estimator.history_ = cast(dict[str, Any], checkpoint.get("history", {}))
    return estimator
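
save and load are symmetric, so a fitted estimator round-trips through a checkpoint file; the path below is an arbitrary placeholder and X_test is assumed:

reg.save("logtsk_reg.ckpt")                          # config + weights + metadata

restored = LogTSKRegressorEstimator.load("logtsk_reg.ckpt")
y_pred = restored.predict(X_test)                    # usable without refitting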

predict

Predict continuous target values for input samples.

Source code in highfis/estimators.py
def predict(self, x: Any) -> np.ndarray:
    """Predict continuous target values for input samples."""
    check_is_fitted(self, "model_")
    x_arr = check_array(x)
    if x_arr.shape[1] != self.n_features_in_:
        raise ValueError(f"expected {self.n_features_in_} features, got {x_arr.shape[1]}")
    preds = cast(Any, self.model_).predict(self._as_tensor_x(x_arr))
    return preds.detach().cpu().numpy()

save

Persist estimator configuration, model weights and fitted metadata.

Source code in highfis/estimators.py
def save(self, path: str) -> None:
    """Persist estimator configuration, model weights and fitted metadata."""
    checkpoint = self._build_checkpoint_base(
        model_init={
            "input_mfs": self.model_.input_mfs,
            "rule_base": self.rule_base_,
        },
        fitted_attrs={
            "n_features_in": int(self.n_features_in_),
            "feature_names_in": self.feature_names_in_.tolist(),
        },
    )
    save_checkpoint(path, checkpoint)

TSKClassifierEstimator

Bases: _BaseClassifierEstimator

Vanilla TSK classifier with sum-based rule normalization.

The vanilla Takagi-Sugeno-Kang inference computes rule firing strengths with the product t-norm and normalizes them by their total sum.

References

T. Takagi and M. Sugeno, "Fuzzy identification of systems and its applications to modeling and control," in IEEE Transactions on Systems, Man, and Cybernetics, vol. SMC-15, no. 1, pp. 116-132, Jan.-Feb. 1985, doi: 10.1109/TSMC.1985.6313399.

Example
from highfis import TSKClassifierEstimator

clf = TSKClassifierEstimator(n_mfs=5, random_state=0)
clf.fit(X_train, y_train)
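
As an illustration of that inference (not the library's actual code path), the following NumPy sketch assumes Gaussian membership functions, first-order consequents, and hypothetical shapes:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))                   # 4 samples, D = 3 features

centers = rng.normal(size=(2, 3))             # R = 2 rules, (R, D) centers
sigmas = np.ones((2, 3))                      # (R, D) Gaussian widths

# Product t-norm: f_r(x) = prod_d exp(-(x_d - c_rd)^2 / (2 * sigma_rd^2)).
mu = np.exp(-((X[:, None, :] - centers) ** 2) / (2 * sigmas**2))
f = mu.prod(axis=-1)                          # firing strengths, (N, R)

# Sum-based normalization: each rule's weight is its share of the total.
f_bar = f / f.sum(axis=1, keepdims=True)

# Blend first-order consequents y_r(x) = w_r^T x + b_r by rule weight.
W, b = rng.normal(size=(2, 3)), rng.normal(size=2)
y = (f_bar * (X @ W.T + b)).sum(axis=1)       # model output, (N,)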

Initialise a vanilla TSK classifier.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `input_configs` | `list[InputConfig] \| None` | Per-feature `InputConfig` list. Only `name` is used when `mf_init="kmeans"`. | `None` |
| `n_mfs` | `int` | Number of k-means clusters / grid MFs. | `5` |
| `mf_init` | `str` | `"kmeans"` or `"grid"`. | `'kmeans'` |
| `sigma_scale` | `float \| str` | Sigma scale factor. Use `"auto"` (= `sqrt(D)`) for high-dimensional data to mitigate softmax saturation (Cui et al., IJCNN 2021). `1.0` is appropriate for low- to medium-dimensional problems. | `1.0` |
| `random_state` | `int \| None` | Seed for k-means and weight initialisation. | `None` |
| `epochs` | `int` | Maximum training epochs. | `10` |
| `learning_rate` | `float` | Adam learning rate. | `0.01` |
| `verbose` | `bool \| int` | Print per-epoch progress. | `False` |
| `rule_base` | `str \| None` | `"coco"` or `"cartesian"`. Defaults to `"coco"` for kmeans and `"cartesian"` for grid. | `None` |
| `batch_size` | `int \| None` | Mini-batch size. | `512` |
| `shuffle` | `bool` | Reshuffle each epoch. | `True` |
| `ur_weight` | `float` | Uncertainty regularisation weight. | `0.0` |
| `ur_target` | `float \| None` | Uncertainty regularisation target. | `None` |
| `consequent_batch_norm` | `bool` | Batch normalisation on consequent layers. | `False` |
| `pfrb_max_rules` | `int \| None` | Maximum point-based FRB rules (unused by TSK). | `None` |
| `patience` | `int \| None` | Early-stopping patience; set to `None` to disable early stopping. | `20` |
| `restore_best` | `bool` | If `True`, restore the best validation model weights after training. | `True` |
| `validation_data` | `tuple[Any, Any] \| None` | Optional `(X_val, y_val)` for early stopping. | `None` |
| `weight_decay` | `float` | L2 weight decay for consequent parameters. | `1e-08` |
Source code in highfis/estimators.py
def __init__(
    self,
    *,
    input_configs: list[InputConfig] | None = None,
    n_mfs: int = 5,
    mf_init: str = "kmeans",
    sigma_scale: float | str = 1.0,
    random_state: int | None = None,
    epochs: int = 10,
    learning_rate: float = 1e-2,
    verbose: bool | int = False,
    rule_base: str | None = None,
    batch_size: int | None = 512,
    shuffle: bool = True,
    ur_weight: float = 0.0,
    ur_target: float | None = None,
    consequent_batch_norm: bool = False,
    pfrb_max_rules: int | None = None,
    patience: int | None = 20,
    restore_best: bool = True,
    validation_data: tuple[Any, Any] | None = None,
    weight_decay: float = 1e-8,
) -> None:
    """Initialise a vanilla TSK classifier.

    Args:
        input_configs: Per-feature :class:`InputConfig` list. Only
            ``name`` is used when ``mf_init="kmeans"``.
        n_mfs: Number of k-means clusters / grid MFs (default ``5``).
        mf_init: ``"kmeans"`` (default) or ``"grid"``.
        sigma_scale: Sigma scale factor. Use ``"auto"`` (= ``sqrt(D)``)
            for high-dimensional data to mitigate softmax saturation
            (Cui et al., IJCNN 2021). ``1.0`` is appropriate for low-
            to medium-dimensional problems.
        random_state: Seed for k-means and weight initialisation.
        epochs: Maximum training epochs (default ``10``).
        learning_rate: Adam learning rate (default ``0.01``).
        verbose: Print per-epoch progress.
        rule_base: ``"coco"`` or ``"cartesian"``. Defaults to
            ``"coco"`` for kmeans and ``"cartesian"`` for grid.
        batch_size: Mini-batch size (default ``512``).
        shuffle: Reshuffle each epoch.
        ur_weight: Uncertainty regularisation weight.
        ur_target: Uncertainty regularisation target.
        consequent_batch_norm: Batch normalisation on consequent layers.
        pfrb_max_rules: Maximum point-based FRB rules (unused by TSK).
        patience: Early-stopping patience (default ``20``). Set to ``None`` to disable early stopping.
        restore_best: If ``True`` (default), restore the best validation
            model weights after training.
        validation_data: Optional ``(X_val, y_val)`` for early stopping.
        weight_decay: L2 weight decay for consequent parameters.
    """
    super().__init__(
        input_configs=input_configs,
        n_mfs=n_mfs,
        mf_init=mf_init,
        sigma_scale=sigma_scale,
        random_state=random_state,
        epochs=epochs,
        learning_rate=learning_rate,
        verbose=verbose,
        rule_base=rule_base,
        batch_size=batch_size,
        shuffle=shuffle,
        ur_weight=ur_weight,
        ur_target=ur_target,
        consequent_batch_norm=consequent_batch_norm,
        pfrb_max_rules=pfrb_max_rules,
        patience=patience,
        restore_best=restore_best,
        validation_data=validation_data,
        weight_decay=weight_decay,
    )
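
For high-dimensional inputs the product of many per-feature memberships collapses toward zero and the normalization saturates; as the parameter table above notes, sigma_scale="auto" widens the MFs by roughly sqrt(D) to counteract this. For example, with an assumed high-dimensional X_train:

from highfis import TSKClassifierEstimator

clf = TSKClassifierEstimator(
    n_mfs=5,
    sigma_scale="auto",   # ~sqrt(D); mitigates softmax saturation (Cui et al., 2021)
    random_state=0,
)
clf.fit(X_train, y_train)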

evaluate

Compute classification evaluation metrics for the provided dataset.

Source code in highfis/estimators.py
def evaluate(
    self,
    X: Any,
    y: Any,
    metrics: list[str] | None = None,
    sample_weight: Any | None = None,
) -> dict[str, float]:
    """Compute classification evaluation metrics for the provided dataset."""
    y_true = np.asarray(y)
    y_pred = self.predict(X)
    return compute_metrics(
        task="classification",
        y_true=y_true,
        y_pred=y_pred,
        sample_weight=sample_weight,
        metrics=metrics,
    )

fit

Train the TSK classifier on labeled samples.

Source code in highfis/estimators.py
def fit(self, x: Any, y: Any) -> Self:
    """Train the TSK classifier on labeled samples."""
    x_arr, y_arr = check_X_y(x, y)

    if self.random_state is not None:
        torch.manual_seed(int(self.random_state))

    le = LabelEncoder()
    y_idx = le.fit_transform(np.asarray(y_arr))

    input_mfs, feature_names, effective_rule_base = self._build_input_mfs(x_arr)

    self.n_features_in_ = x_arr.shape[1]
    self.feature_names_in_ = np.asarray(feature_names, dtype=object)
    self.classes_ = le.classes_
    self._label_encoder_ = le

    self.model_ = self._build_model(input_mfs, len(self.classes_), effective_rule_base)

    y_t = torch.as_tensor(y_idx, dtype=torch.long)

    # Prepare validation tensors if provided
    x_val_t: torch.Tensor | None = None
    y_val_t: torch.Tensor | None = None
    if self.validation_data is not None:
        x_v, y_v = self.validation_data
        x_v_arr = check_array(x_v)
        y_v_idx = self._label_encoder_.transform(np.asarray(y_v))
        x_val_t = self._as_tensor_x(x_v_arr)
        y_val_t = torch.as_tensor(y_v_idx, dtype=torch.long)

    self.history_ = self.model_.fit(
        self._as_tensor_x(x_arr),
        y_t,
        epochs=int(self.epochs),
        learning_rate=float(self.learning_rate),
        batch_size=self.batch_size,
        shuffle=bool(self.shuffle),
        ur_weight=float(self.ur_weight),
        ur_target=self.ur_target,
        verbose=self.verbose,
        x_val=x_val_t,
        y_val=y_val_t,
        patience=self.patience,
        restore_best=self.restore_best,
        weight_decay=float(self.weight_decay),
    )
    self.rule_base_ = effective_rule_base
    return self
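
Because fit routes y through a LabelEncoder, labels may be strings or non-contiguous integers; predict later maps argmax indices back to the original values. For instance, with a hypothetical X_train:

import numpy as np

y_str = np.array(["spam", "ham", "spam", "ham"])     # arbitrary label values
clf = TSKClassifierEstimator(random_state=0).fit(X_train, y_str)

clf.classes_          # sorted unique labels: ['ham', 'spam']
clf.predict(X_train)  # returns 'ham' / 'spam', not class indices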

load classmethod

Load a persisted estimator created by save.

Source code in highfis/estimators.py
@classmethod
def load(cls, path: str) -> Self:
    """Load a persisted estimator created by save."""
    checkpoint = load_checkpoint(path)
    validate_checkpoint_payload(checkpoint, expected_estimator_class=cls.__name__)

    estimator = cls(**checkpoint["estimator_params"])
    model_init = checkpoint["model_init"]
    estimator.rule_base_ = model_init["rule_base"]
    estimator.model_ = estimator._build_model(
        model_init["input_mfs"],
        int(model_init["n_classes"]),
        str(model_init["rule_base"]),
    )
    estimator.model_.load_state_dict(checkpoint["model_state_dict"])

    fitted = checkpoint["fitted_attrs"]
    estimator.n_features_in_ = int(fitted["n_features_in"])
    estimator.feature_names_in_ = np.asarray(fitted["feature_names_in"], dtype=object)
    estimator.classes_ = np.asarray(fitted["classes"], dtype=object)
    label_encoder = LabelEncoder()
    label_encoder.classes_ = estimator.classes_
    estimator._label_encoder_ = label_encoder
    estimator.history_ = cast(dict[str, Any], checkpoint.get("history", {}))
    return estimator

predict

Predict class labels for input samples.

Source code in highfis/estimators.py
def predict(self, x: Any) -> np.ndarray:
    """Predict class labels for input samples."""
    proba = self.predict_proba(x)
    y_idx = np.argmax(proba, axis=1)
    return np.asarray(self._label_encoder_.inverse_transform(y_idx))

predict_proba

Predict class probabilities for input samples.

Source code in highfis/estimators.py
def predict_proba(self, x: Any) -> np.ndarray:
    """Predict class probabilities for input samples."""
    check_is_fitted(self, "model_")
    x_arr = check_array(x)
    if x_arr.shape[1] != self.n_features_in_:
        raise ValueError(f"expected {self.n_features_in_} features, got {x_arr.shape[1]}")
    probs = cast(Any, self.model_).predict_proba(self._as_tensor_x(x_arr))
    return probs.detach().cpu().numpy()

save

Persist estimator configuration, model weights and fitted metadata.

Source code in highfis/estimators.py
def save(self, path: str) -> None:
    """Persist estimator configuration, model weights and fitted metadata."""
    checkpoint = self._build_checkpoint_base(
        model_init={
            "input_mfs": self.model_.input_mfs,
            "n_classes": len(self.classes_),
            "rule_base": self.rule_base_,
        },
        fitted_attrs={
            "n_features_in": int(self.n_features_in_),
            "feature_names_in": self.feature_names_in_.tolist(),
            "classes": self.classes_.tolist(),
        },
    )
    save_checkpoint(path, checkpoint)

score

Return classification accuracy on the provided dataset.

Source code in highfis/estimators.py
def score(self, X: Any, y: Any, sample_weight: Any = None) -> float:
    """Return classification accuracy on the provided dataset."""
    y_true = np.asarray(y)
    y_pred = self.predict(X)
    return float(accuracy_score(y_true, y_pred, sample_weight=sample_weight))

TSKRegressorEstimator

Bases: _BaseRegressorEstimator

Vanilla TSK regressor with sum-based rule normalization.

The vanilla Takagi-Sugeno-Kang inference computes rule firing strengths with the product t-norm and normalizes them by their total sum.
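
In symbols, with per-feature membership functions \( \mu_{r,d} \) and, assuming first-order (linear) consequents as in the classic formulation:

$$
f_r(x) = \prod_{d=1}^{D} \mu_{r,d}(x_d), \qquad
\bar{f}_r(x) = \frac{f_r(x)}{\sum_{k=1}^{R} f_k(x)}, \qquad
\hat{y}(x) = \sum_{r=1}^{R} \bar{f}_r(x)\,\bigl(\mathbf{w}_r^{\top} x + b_r\bigr).
$$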

References

T. Takagi and M. Sugeno, "Fuzzy identification of systems and its applications to modeling and control," in IEEE Transactions on Systems, Man, and Cybernetics, vol. SMC-15, no. 1, pp. 116-132, Jan.-Feb. 1985, doi: 10.1109/TSMC.1985.6313399.

Example
from highfis import TSKRegressorEstimator

reg = TSKRegressorEstimator(n_mfs=30, random_state=0)
reg.fit(X_train, y_train)

Initialise a vanilla TSK regressor.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `input_configs` | `list[InputConfig] \| None` | Per-feature `InputConfig` list. Only `name` is used when `mf_init="kmeans"`. | `None` |
| `n_mfs` | `int` | Number of k-means clusters / grid MFs. | `5` |
| `mf_init` | `str` | `"kmeans"` or `"grid"`. | `'kmeans'` |
| `sigma_scale` | `float \| str` | Sigma scale factor. Use `"auto"` (= `sqrt(D)`) to mitigate softmax saturation on high-dimensional data. `1.0` is appropriate for low- to medium-dimensional problems. | `1.0` |
| `random_state` | `int \| None` | Seed for k-means and weight initialisation. | `None` |
| `epochs` | `int` | Maximum training epochs. | `10` |
| `learning_rate` | `float` | Adam learning rate. | `0.01` |
| `verbose` | `bool \| int` | Print per-epoch progress. | `False` |
| `rule_base` | `str \| None` | `"coco"` or `"cartesian"`. Defaults to `"coco"` for kmeans and `"cartesian"` for grid. | `None` |
| `batch_size` | `int \| None` | Mini-batch size. | `512` |
| `shuffle` | `bool` | Reshuffle each epoch. | `True` |
| `ur_weight` | `float` | Uncertainty regularisation weight. | `0.0` |
| `ur_target` | `float \| None` | Uncertainty regularisation target. | `None` |
| `consequent_batch_norm` | `bool` | Batch normalisation on consequent layers. | `False` |
| `patience` | `int \| None` | Early-stopping patience; set to `None` to disable early stopping. | `20` |
| `restore_best` | `bool` | If `True`, restore the best validation model weights after training. | `True` |
| `validation_data` | `tuple[Any, Any] \| None` | Optional `(X_val, y_val)` for early stopping. | `None` |
| `weight_decay` | `float` | L2 weight decay for consequent parameters. | `1e-08` |
Source code in highfis/estimators.py
def __init__(
    self,
    *,
    input_configs: list[InputConfig] | None = None,
    n_mfs: int = 5,
    mf_init: str = "kmeans",
    sigma_scale: float | str = 1.0,
    random_state: int | None = None,
    epochs: int = 10,
    learning_rate: float = 1e-2,
    verbose: bool | int = False,
    rule_base: str | None = None,
    batch_size: int | None = 512,
    shuffle: bool = True,
    ur_weight: float = 0.0,
    ur_target: float | None = None,
    consequent_batch_norm: bool = False,
    patience: int | None = 20,
    restore_best: bool = True,
    validation_data: tuple[Any, Any] | None = None,
    weight_decay: float = 1e-8,
) -> None:
    """Initialise a vanilla TSK regressor.

    Args:
        input_configs: Per-feature :class:`InputConfig` list. Only
            ``name`` is used when ``mf_init="kmeans"``.
        n_mfs: Number of k-means clusters / grid MFs (default ``5``).
        mf_init: ``"kmeans"`` (default) or ``"grid"``.
        sigma_scale: Sigma scale factor. Use ``"auto"`` (= ``sqrt(D)``)
            to mitigate softmax saturation on high-dimensional data.
            ``1.0`` is appropriate for low-to-medium-dimensional problems.
        random_state: Seed for k-means and weight initialisation.
        epochs: Maximum training epochs (default ``10``).
        learning_rate: Adam learning rate (default ``0.01``).
        verbose: Print per-epoch progress.
        rule_base: ``"coco"`` or ``"cartesian"``. Defaults to
            ``"coco"`` for kmeans and ``"cartesian"`` for grid.
        batch_size: Mini-batch size (default ``512``).
        shuffle: Reshuffle each epoch.
        ur_weight: Uncertainty regularisation weight.
        ur_target: Uncertainty regularisation target.
        consequent_batch_norm: Batch normalisation on consequent layers.
        patience: Early-stopping patience (default ``20``). Set to ``None`` to disable early stopping.
        restore_best: If ``True`` (default), restore the best validation
            model weights after training.
        validation_data: Optional ``(X_val, y_val)`` for early stopping.
        weight_decay: L2 weight decay for consequent parameters.
    """
    super().__init__(
        input_configs=input_configs,
        n_mfs=n_mfs,
        mf_init=mf_init,
        sigma_scale=sigma_scale,
        random_state=random_state,
        epochs=epochs,
        learning_rate=learning_rate,
        verbose=verbose,
        rule_base=rule_base,
        batch_size=batch_size,
        shuffle=shuffle,
        ur_weight=ur_weight,
        ur_target=ur_target,
        consequent_batch_norm=consequent_batch_norm,
        patience=patience,
        restore_best=restore_best,
        validation_data=validation_data,
        weight_decay=weight_decay,
    )

evaluate

Compute regression evaluation metrics for the provided dataset.

Source code in highfis/estimators.py
def evaluate(
    self,
    X: Any,
    y: Any,
    metrics: list[str] | None = None,
    sample_weight: Any | None = None,
) -> dict[str, float]:
    """Compute regression evaluation metrics for the provided dataset."""
    y_true = np.asarray(y)
    y_pred = self.predict(X)
    return compute_metrics(
        task="regression",
        y_true=y_true,
        y_pred=y_pred,
        sample_weight=sample_weight,
        metrics=metrics,
    )

fit

Train the TSK regressor on labeled samples.

Source code in highfis/estimators.py
def fit(self, x: Any, y: Any) -> Self:
    """Train the TSK regressor on labeled samples."""
    x_arr, y_arr = check_X_y(x, y)

    if self.random_state is not None:
        torch.manual_seed(int(self.random_state))

    input_mfs, feature_names, effective_rule_base = self._build_input_mfs(x_arr)

    self.n_features_in_ = x_arr.shape[1]
    self.feature_names_in_ = np.asarray(feature_names, dtype=object)

    self.model_ = self._build_regressor_model(input_mfs, effective_rule_base)

    y_t = torch.as_tensor(np.asarray(y_arr, dtype=np.float32), dtype=torch.float32)

    # Prepare validation tensors if provided
    x_val_t: torch.Tensor | None = None
    y_val_t: torch.Tensor | None = None
    if self.validation_data is not None:
        x_v, y_v = self.validation_data
        x_v_arr = check_array(x_v)
        x_val_t = self._as_tensor_x(x_v_arr)
        y_val_t = torch.as_tensor(np.asarray(y_v, dtype=np.float32), dtype=torch.float32)

    self.history_ = self.model_.fit(
        self._as_tensor_x(x_arr),
        y_t,
        epochs=int(self.epochs),
        learning_rate=float(self.learning_rate),
        batch_size=self.batch_size,
        shuffle=bool(self.shuffle),
        ur_weight=float(self.ur_weight),
        ur_target=self.ur_target,
        verbose=self.verbose,
        x_val=x_val_t,
        y_val=y_val_t,
        patience=self.patience,
        restore_best=self.restore_best,
        weight_decay=float(self.weight_decay),
    )
    self.rule_base_ = effective_rule_base
    return self

load classmethod

Load a persisted estimator created by save.

Source code in highfis/estimators.py
@classmethod
def load(cls, path: str) -> Self:
    """Load a persisted estimator created by save."""
    checkpoint = load_checkpoint(path)
    validate_checkpoint_payload(checkpoint, expected_estimator_class=cls.__name__)

    estimator = cls(**checkpoint["estimator_params"])
    model_init = checkpoint["model_init"]
    estimator.rule_base_ = model_init["rule_base"]
    estimator.model_ = estimator._build_regressor_model(
        model_init["input_mfs"],
        str(model_init["rule_base"]),
    )
    estimator.model_.load_state_dict(checkpoint["model_state_dict"])

    fitted = checkpoint["fitted_attrs"]
    estimator.n_features_in_ = int(fitted["n_features_in"])
    estimator.feature_names_in_ = np.asarray(fitted["feature_names_in"], dtype=object)
    estimator.history_ = cast(dict[str, Any], checkpoint.get("history", {}))
    return estimator

predict

Predict continuous target values for input samples.

Source code in highfis/estimators.py
def predict(self, x: Any) -> np.ndarray:
    """Predict continuous target values for input samples."""
    check_is_fitted(self, "model_")
    x_arr = check_array(x)
    if x_arr.shape[1] != self.n_features_in_:
        raise ValueError(f"expected {self.n_features_in_} features, got {x_arr.shape[1]}")
    preds = cast(Any, self.model_).predict(self._as_tensor_x(x_arr))
    return preds.detach().cpu().numpy()

save

Persist estimator configuration, model weights and fitted metadata.

Source code in highfis/estimators.py
def save(self, path: str) -> None:
    """Persist estimator configuration, model weights and fitted metadata."""
    checkpoint = self._build_checkpoint_base(
        model_init={
            "input_mfs": self.model_.input_mfs,
            "rule_base": self.rule_base_,
        },
        fitted_attrs={
            "n_features_in": int(self.n_features_in_),
            "feature_names_in": self.feature_names_in_.tolist(),
        },
    )
    save_checkpoint(path, checkpoint)