# highFIS
highFIS is a modern PyTorch library for high-dimensional Takagi-Sugeno-Kang (TSK) fuzzy systems. It delivers differentiable, trainable fuzzy inference for classification and regression, with sklearn-compatible estimators for fast experimentation.
## Why highFIS?
- Built for high-dimensional data and numerical stability.
- Supports adaptive and gated fuzzy inference, including feature selection and rule extraction.
- Includes HDFIS variants for product-DMF and minimum frozen-antecedent high-dimensional inference.
- Ships with both model-level classes and sklearn-style estimator wrappers.
- Works seamlessly with `Pipeline`, `GridSearchCV`, and standard scikit-learn workflows.
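As an illustration of that compatibility, the sketch below wires an estimator into a `Pipeline` and tunes it with `GridSearchCV`. A stock scikit-learn classifier is used as a stand-in so the snippet runs without highfis installed; substituting `HTSKClassifierEstimator` into the `"clf"` slot (and grid-searching its own parameters, e.g. `"clf__n_mfs"`) follows the same pattern.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)

# Any sklearn-compatible estimator fits here, including the highFIS
# estimator wrappers; LogisticRegression is a self-contained stand-in.
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=500)),
])

# Step parameters are addressed as "<step>__<param>".
grid = GridSearchCV(pipe, param_grid={"clf__C": [0.1, 1.0]}, cv=3)
grid.fit(X, y)
print(f"Best CV accuracy: {grid.best_score_:.3f}")
```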
## High-level overview
highFIS models combine:
- differentiable membership functions for antecedent fuzzification,
- configurable rule bases and T-norm aggregation,
- normalized rule weights via defuzzification,
- built-in metrics and evaluation utilities for regression and classification,
- task-specific consequent layers for classification or regression.
Use BaseTSK for custom pipelines, or choose a concrete model variant when
you want a ready-to-use TSK architecture.
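The components above can be sketched as a minimal first-order TSK forward pass in plain PyTorch: Gaussian membership functions for fuzzification, a product T-norm over features, normalized rule weights, and linear consequents. This illustrates the general TSK computation only; it is not highFIS's implementation, and all names are local to the snippet.

```python
import torch

torch.manual_seed(0)
n_samples, n_features, n_rules = 8, 3, 4
X = torch.randn(n_samples, n_features)

# Antecedents: one trainable Gaussian MF per (rule, feature).
centers = torch.randn(n_rules, n_features, requires_grad=True)
widths = torch.ones(n_rules, n_features, requires_grad=True)

# Membership degrees, shape (samples, rules, features).
mu = torch.exp(-((X[:, None, :] - centers) ** 2) / (2 * widths ** 2))

# Product T-norm over features -> rule firing strengths, then normalize
# so the rule weights sum to one per sample (defuzzification).
firing = mu.prod(dim=-1)                       # (samples, rules)
weights = firing / firing.sum(dim=-1, keepdim=True)

# First-order consequents y_r = a_r . x + b_r, combined by rule weight.
A = torch.randn(n_rules, n_features, requires_grad=True)
b = torch.zeros(n_rules, requires_grad=True)
y_rules = X @ A.T + b                          # (samples, rules)
y = (weights * y_rules).sum(dim=-1)            # (samples,)
print(y.shape)  # torch.Size([8])
```

Because every step is a differentiable tensor operation, the MF and consequent parameters can all be trained end to end with gradient descent.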
## Quick Start

```python
from highfis import HTSKClassifierEstimator

# X_train, y_train, X_test, y_test: your own data splits.
clf = HTSKClassifierEstimator(
    n_mfs=4,
    mf_init="kmeans",
    epochs=150,
    learning_rate=1e-3,
    random_state=42,
)
clf.fit(X_train, y_train)
print(f"Test accuracy: {clf.score(X_test, y_test):.4f}")
```
## Models Available
highFIS includes the following concrete TSK model families:
- `TSK`: vanilla TSK with product antecedent aggregation and sum-based normalization.
- `HTSK`: high-dimensional TSK with geometric mean aggregation and log-space softmax normalization.
- `LogTSK`: inverse-log normalization of log-domain rule weights for stable high-dimensional inference.
- `HDFIS`: high-dimensional inference with both product-DMF aggregation (HDFIS-prod) and minimum frozen-antecedent inference (HDFIS-min).
- `DombiTSK`: Dombi parametric aggregation with a learnable shape parameter.
- `ADMTSK`: adaptive Dombi TSK with Composite GMF and positive lower-bound membership values.
- `AYATSK`: Yager-style aggregation for more flexible antecedent behavior.
- `AdaTSK`: adaptive softmin aggregation with dynamic rule weighting.
- `ADPTSK`: adaptive double-parameter softmin aggregation with stable normalized rule weights.
- `FSRE-AdaTSK`: gated feature selection and rule extraction inside an adaptive inference pipeline.
- `DG-TSK`: double-gated training for simultaneous feature selection and rule extraction.
- `DG-ALETSK`: adaptive Ln-Exp softmin with embedded feature and rule gates for sparse high-dimensional modeling.
Each model family exposes both classifier and regressor variants.
## Model selection guide

- Choose `TSK` for a baseline vanilla fuzzy model.
- Choose `HTSK` or `LogTSK` for high-dimensional problems where numerical stability is critical.
- Choose `DombiTSK` or `AYATSK` when you want more control over antecedent aggregation behavior.
- Choose `HDFIS` when you need high-dimensional inference with either a dimension-dependent product antecedent or a frozen minimum antecedent.
- Choose `AdaTSK`, `FSRE-AdaTSK`, `DG-TSK`, or `DG-ALETSK` when you need adaptive sparsity, feature gating, or rule extraction.
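To see why the high-dimensional variants matter, the sketch below contrasts a plain product T-norm with HTSK-style log-space aggregation (a geometric mean computed as the mean log-membership, normalized with softmax). It illustrates the numerical issue only and is not library code.

```python
import torch

torch.manual_seed(0)
n_rules, n_features = 5, 1000
# Membership degrees in (0, 1), as produced by antecedent fuzzification.
mu = torch.rand(n_rules, n_features) * 0.9 + 0.05

# Vanilla product T-norm: multiplying 1000 values below 1 underflows to 0,
# so every rule fires with strength 0 and the weights are undefined.
prod_firing = mu.prod(dim=-1)
print(prod_firing)  # all zeros at this dimensionality

# HTSK-style: average the log-memberships (a geometric mean in log space)
# and normalize with softmax; everything stays finite and well-scaled.
log_firing = torch.log(mu).mean(dim=-1)
weights = torch.softmax(log_firing, dim=0)
print(weights)
```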
## Documentation
| Topic | Description |
|---|---|
| Quick Start | Installation and first model run. |
| Models | Model constructors and usage notes. |
| Estimators | sklearn-compatible estimator reference. |
| Layers | Layer primitives for fuzzy pipelines. |
| Defuzzifiers | Normalization strategies. |
| T-Norms | Built-in and custom aggregation functions. |
| Memberships | Membership functions for antecedents. |
| Metrics | Regression and classification evaluation utilities. |
| Base TSK | Unified training loop and shared logic. |
| Protocols | Structural typing interfaces. |
| Persistence | Estimator checkpoint serialization and load validation. |
| Contributing | Development setup and contribution guide. |
## Get Started

Use the top-level `highfis` classes for fast prototyping, or extend `BaseTSK` directly for custom fuzzy pipelines.