# AdaTSK
AdaTSK extends TSK with an adaptive softmin antecedent that stabilizes high-dimensional fuzzy inference while preserving first-order TSK consequents.
## Reference

G. Xue, Q. Chang, J. Wang, K. Zhang and N. R. Pal, "An Adaptive Neuro-Fuzzy System With Integrated Feature Selection and Rule Extraction for High-Dimensional Classification Problems," in IEEE Transactions on Fuzzy Systems, vol. 31, no. 7, pp. 2167-2181, July 2023, doi: 10.1109/TFUZZ.2022.3220950.
## Mathematical Formulation
AdaTSK extends TSK fuzzy inference by using an adaptive softmin antecedent (Ada-softmin) together with first-order linear consequents.
### Antecedent

Each rule-term membership is typically computed with a Gaussian function:

\[\mu_{r,d}(x_d) = \exp\!\left(-\frac{(x_d - c_{r,d})^2}{2\,\sigma_{r,d}^2}\right),\]

where \(c_{r,d}\) is the center and \(\sigma_{r,d}>0\) is the spread for rule \(r\) and input dimension \(d\).
In highFIS, the default estimator wrappers build standard Gaussian MFs. The paper's proposed positive lower-bound variant can be instantiated with `highfis.memberships.CompositeGaussianMF` when desired.
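As a quick illustration (a plain NumPy sketch, not the highFIS `GaussianMF` API), the Gaussian membership with center \(c_{r,d}\) and spread \(\sigma_{r,d}\) can be computed as:

```python
import numpy as np

def gaussian_mf(x, c, sigma):
    """Gaussian membership value in (0, 1]; peaks at 1 when x == c."""
    return np.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))

# Membership is 1 at the center and decays with distance from it.
mu_center = gaussian_mf(0.5, c=0.5, sigma=0.2)
mu_far = gaussian_mf(1.5, c=0.5, sigma=0.2)   # far from the center, near 0
```

Far-from-center inputs produce very small memberships, which is exactly why a naive product or fixed-exponent antecedent struggles in high dimensions.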
### Adaptive Ada-softmin aggregation

AdaTSK computes rule firing strengths with an adaptive softmin (Ada-softmin). For rule \(r\) over \(D\) inputs,

\[f_r = \left(\frac{1}{D}\sum_{d=1}^{D} \mu_{r,d}^{\,\hat{q}_r}\right)^{1/\hat{q}_r},\]

where the exponent \(\hat{q}_r < 0\) is recomputed on every forward pass from the rule's minimum antecedent membership and is clamped for numerical stability. As \(\hat{q}_r \to -\infty\) the softmin approaches the true minimum; adapting \(\hat{q}_r\) per rule avoids the fixed-parameter softmin problems of underflow and "fake minimum" (an exponent of too small a magnitude yields a value far from the true minimum).
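A minimal NumPy sketch of the idea follows. The exact exponent rule used by highFIS and the paper may differ; here the per-rule exponent is chosen (hypothetically) so that the smallest membership raised to it stays within floating-point range:

```python
import numpy as np

def ada_softmin(mu, k=30.0, eps=1e-12, q_floor=-1e4):
    """Adaptive softmin over the last axis (illustrative sketch, not the
    highFIS AdaSoftminRuleLayer implementation).

    mu: (n_rules, n_dims) membership values in (0, 1].  For each rule,
    a negative exponent q_r is picked so that mu_min ** q_r is roughly
    10**k: large, but still representable, so the power mean stays
    close to the true minimum without leaving floating-point range.
    """
    mu = np.clip(mu, eps, 1.0)
    mu_min = mu.min(axis=-1, keepdims=True)
    log_min = np.minimum(np.log10(mu_min), -1e-30)  # guard mu_min == 1
    q = np.clip(k / log_min, q_floor, None)         # negative exponent
    # Power mean with a large negative exponent approximates min(mu).
    return np.mean(mu ** q, axis=-1) ** (1.0 / q[..., 0])

mu = np.array([[0.001, 0.5, 0.9]])
f = ada_softmin(mu)   # close to, and never below, min(mu) = 0.001
```

Because the power mean with a finite negative exponent is always at least the minimum, the result slightly overestimates \(\min_d \mu_{r,d}\) but remains differentiable, which is what gradient-based training needs.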
### Normalization

Rule firing strengths are normalized by simple sum normalization:

\[\bar{f}_r = \frac{f_r}{\sum_{r'=1}^{R} f_{r'}}.\]
### Consequent

AdaTSK uses a first-order TSK consequent for both classification and regression.

For classification, rule \(r\) produces one linear output per class \(k\):

\[y_{r,k}(\mathbf{x}) = p_{r,k,0} + \sum_{d=1}^{D} p_{r,k,d}\, x_d.\]

For regression, rule \(r\) produces a single linear output:

\[y_r(\mathbf{x}) = p_{r,0} + \sum_{d=1}^{D} p_{r,d}\, x_d.\]

### Output aggregation

The final prediction is the normalized weighted sum of rule consequents:

- Classification: \(\hat{y}_k = \sum_{r=1}^{R} \bar{f}_r\, y_{r,k}(\mathbf{x})\)
- Regression: \(\hat{y} = \sum_{r=1}^{R} \bar{f}_r\, y_r(\mathbf{x})\)
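Putting normalization, first-order consequents, and aggregation together, here is a hedged NumPy sketch of the regression forward pass for a single sample (illustrative only; the highFIS layers differ in structure):

```python
import numpy as np

def tsk_forward(x, firing, P):
    """First-order TSK regression output for one sample (illustrative).

    x:      (D,) input vector.
    firing: (R,) rule firing strengths f_r (e.g. from an ada-softmin).
    P:      (R, D + 1) consequent parameters [p_r0, p_r1, ..., p_rD].
    """
    f_bar = firing / firing.sum()     # sum normalization
    y_r = P[:, 0] + P[:, 1:] @ x      # first-order linear consequents
    return float(f_bar @ y_r)         # normalized weighted sum

x = np.array([1.0, 2.0])
firing = np.array([0.7, 0.3])
P = np.array([[0.0, 1.0, 0.0],    # rule 1: y = x1
              [1.0, 0.0, 1.0]])   # rule 2: y = 1 + x2
out = tsk_forward(x, firing, P)   # 0.7 * 1.0 + 0.3 * 3.0 = 1.6
```

For classification, `P` would carry one such parameter set per class, producing a vector of per-class scores instead of a scalar.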
## Code ↔ Paper Correspondence

| Equation | Class / Method | Description |
|---|---|---|
| Adaptive softmin | `highfis.layers.AdaSoftminRuleLayer` | Computes per-rule softmin exponents from the minimum membership value |
| Normalization | `highfis.defuzzifiers.SumBasedDefuzzifier` | Standard sum-based rule strength normalization |
| Consequent | `ClassificationConsequentLayer` / `RegressionConsequentLayer` | First-order linear consequents |
| Membership functions | `highfis.memberships.GaussianMF` | Default Gaussian antecedent MFs |
| Optional membership | `highfis.memberships.CompositeGaussianMF` | Optional positive lower-bound MF matching the paper variant |
## Implementation notes

In highFIS, `AdaTSKClassifier` and `AdaTSKRegressor` implement the core AdaTSK model by replacing the standard product antecedent with the adaptive softmin operator.
### Model classes

- `AdaTSKClassifier` and `AdaTSKRegressor` use `highfis.layers.AdaSoftminRuleLayer` to compute rule strengths.
- The TSK consequent remains first-order linear and is normalized with `highfis.defuzzifiers.SumBasedDefuzzifier`.
- `AdaTSKClassifier` and `AdaTSKRegressor` do not expose the feature-selection / rule-extraction gates of FSRE-AdaTSK.
### Estimator wrappers

- `AdaTSKClassifierEstimator` and `AdaTSKRegressorEstimator` are sklearn-compatible wrappers around the low-level AdaTSK model classes.
- They build Gaussian membership functions from `input_configs`, `n_mfs`, `mf_init`, and `sigma_scale`.
- The default `sigma_scale=1.0` is appropriate because the adaptive softmin operator handles high-dimensional stability.
### Membership functions

- The primary antecedent MFs are standard `highfis.memberships.GaussianMF` objects.
- An optional nonzero lower-bound membership function is available via `highfis.memberships.CompositeGaussianMF` for paper-style stability.
### Training in the paper vs. highFIS

- The paper trains AdaTSK end-to-end by optimizing the task loss through the adaptive softmin operator.
- highFIS follows the same gradient-based training paradigm in `BaseTSK.fit()`.
- `eps` is used to clamp membership values and stabilize log-space computations in `AdaSoftminRuleLayer`.
- FSRE-AdaTSK is documented separately in `docs/models/fsre-adatsk.md`.
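To illustrate why an `eps` clamp plus log-space evaluation helps, here is a hedged NumPy sketch (not the highFIS code) of a softmin computed via the log-sum-exp trick; it matches the direct power-mean form while keeping the intermediate `mu ** q` terms in floating-point range:

```python
import numpy as np

def softmin_logspace(mu, q, eps=1e-12):
    """Softmin (1/D * sum(mu**q)) ** (1/q) evaluated in log space.

    Clamping mu to [eps, 1] keeps log(mu) finite, and the
    log-sum-exp trick keeps mu**q representable for large |q|.
    """
    mu = np.clip(mu, eps, 1.0)
    a = q * np.log(mu)                              # log of mu**q
    m = a.max()
    log_mean = m + np.log(np.mean(np.exp(a - m)))   # log of mean(mu**q)
    return np.exp(log_mean / q)

mu = np.array([0.2, 0.5, 0.9])
direct = np.mean(mu ** -12.0) ** (-1.0 / 12.0)
stable = softmin_logspace(mu, q=-12.0)
# Both agree closely and sit just above min(mu) = 0.2.
```

For moderate exponents both forms agree; the log-space form simply stays finite in regimes where the direct form would not.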
## Alignment with the paper

- The paper's key AdaTSK contribution is the adaptive softmin antecedent operator, used to avoid numeric underflow and fake-minimum effects.
- highFIS implements this via `AdaSoftminRuleLayer`, with a per-rule exponent derived from the rule's minimum antecedent membership.
- The TSK consequent remains first-order, matching the paper's model.
- The default estimator wrappers use `GaussianMF`, while the paper's positive lower-bound MF can be supplied via `CompositeGaussianMF`.