# Models

Concrete TSK model variants.

Each class in this module is a subclass of `BaseTSK` that bundles a specific antecedent strategy (t-norm), defuzzification head, and consequent architecture. Users typically access these through the sklearn-style estimator wrappers in `highfis.estimators`.
## Model Family Overview

### HTSK
- Configuration: `t_norm="gmean"` + `SoftmaxLogDefuzzifier`
- Classes: `HTSKClassifier`, `HTSKRegressor`
- Behavior: `softmax(log(w^{1/D}))`

### TSK (vanilla)
- Configuration: `t_norm="prod"` + `SumBasedDefuzzifier`
- Classes: `TSKClassifier`, `TSKRegressor`
- Behavior: `w_r / Σw`

### LogTSK
- Configuration: `t_norm="prod"` + `InvLogDefuzzifier`
- Classes: `LogTSKClassifier`, `LogTSKRegressor`
- Behavior: inverse-log normalization of log-domain rule weights

### DombiTSK
- Configuration: `t_norm="dombi"` + `SumBasedDefuzzifier`
- Classes: `DombiTSKClassifier`, `DombiTSKRegressor`

### ADMTSK
- Configuration: adaptive Dombi t-norm + `CompositeGMF` + `SumBasedDefuzzifier`
- Classes: `ADMTSKClassifier`, `ADMTSKRegressor`

### AYATSK
- Configuration: `t_norm="yager"` + `SumBasedDefuzzifier`
- Classes: `AYATSKClassifier`, `AYATSKRegressor`

### AdaTSK
- Configuration: adaptive softmin (Ada-softmin) + `SumBasedDefuzzifier`
- Classes: `AdaTSKClassifier`, `AdaTSKRegressor`

### ADPTSK
- Configuration: adaptive double-parameter softmin (ADP-softmin) + `SumBasedDefuzzifier`
- Classes: `ADPTSKClassifier`, `ADPTSKRegressor`

### FSRE-AdaTSK
- Configuration: adaptive softmin + `SoftmaxLogDefuzzifier`
- Classes: `FSREAdaTSKClassifier`, `FSREAdaTSKRegressor`

### DG-ALETSK
- Configuration: ALE-softmin + `SoftmaxLogDefuzzifier`
- Classes: `DGALETSKClassifier`, `DGALETSKRegressor`

### DG-TSK
- Configuration: product + M-gate + `SoftmaxLogDefuzzifier`
- Classes: `DGTSKClassifier`, `DGTSKRegressor`

### HDFIS
- Configuration: `t_norm="prod"` with `DimensionDependentGaussianMF` + `SumBasedDefuzzifier` for HDFIS-prod; `t_norm="min"` with frozen antecedents + `SumBasedDefuzzifier` for HDFIS-min
- Classes: `HDFISProdClassifier`, `HDFISProdRegressor`, `HDFISMinClassifier`, `HDFISMinRegressor`
## Notes

- All variants normalize rule firing strengths across rules.
- `SoftmaxLogDefuzzifier` improves numerical stability via log-space normalization.
- `InvLogDefuzzifier` applies inverse-log normalization.
- Adaptive softmin variants dynamically adjust aggregation behavior.
- All classes are exported by this module and are intended for use as concrete TSK classifiers and regressors.
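To make the defuzzifier distinction above concrete, here is a minimal NumPy sketch contrasting sum-based normalization with a softmax-over-log head. The function names are illustrative only, not part of the highfis API.

```python
import numpy as np

def sum_based(w):
    """Vanilla TSK normalization: w_r / sum(w)."""
    return w / w.sum()

def softmax_log(log_w, d):
    """HTSK-style head: softmax over mean log-memberships, i.e. softmax(log(w^{1/D}))."""
    z = log_w / d
    z = z - z.max()          # shift by the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Two rules whose raw firing strengths underflow float64 (D = 100 features):
log_w = np.array([-900.0, -1000.0])   # log-domain rule weights
print(np.exp(log_w))                  # both underflow to 0.0 in the linear domain

print(sum_based(np.array([2.0, 1.0, 1.0])))  # [0.5, 0.25, 0.25]
print(softmax_log(log_w, d=100))             # still well-defined, sums to 1
```

The point of the log-space head: a product of many memberships in high dimensions underflows to zero, so sum-based normalization becomes 0/0, while the softmax-log form stays finite.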
## ADMTSKClassifier

Bases: `BaseTSKClassifier`

Adaptive Dombi TSK classifier with Composite Gaussian membership functions.
ADMTSK is an adaptive Dombi TSK fuzzy system designed for high-dimensional inference. It combines a Dombi T-norm antecedent with a positive lower-bound Composite Gaussian membership function (CGMF) and normalized first-order consequents.
Reference
G. Xue, L. Hu, J. Wang and S. Ablameyko, "ADMTSK: A High-Dimensional Takagi-Sugeno-Kang Fuzzy System Based on Adaptive Dombi T-Norm," in IEEE Transactions on Fuzzy Systems, vol. 33, no. 6, pp. 1767-1780, June 2025, doi: 10.1109/TFUZZ.2025.3535640.
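As a concrete illustration of the antecedent aggregation, here is a minimal sketch of the standard two-argument Dombi t-norm with a fixed parameter `lam`. This is the textbook operator, not highfis's internal (adaptive) implementation, which computes lambda from the feature dimension and membership lower bound.

```python
def dombi(a, b, lam):
    """Dombi t-norm: T(a, b) = 1 / (1 + (((1-a)/a)^lam + ((1-b)/b)^lam)^(1/lam))."""
    if a == 0.0 or b == 0.0:
        return 0.0  # boundary case avoids division by zero
    r = ((1.0 - a) / a) ** lam + ((1.0 - b) / b) ** lam
    return 1.0 / (1.0 + r ** (1.0 / lam))

print(dombi(0.7, 1.0, 2.0))    # boundary condition: T(a, 1) = a, so 0.7
print(dombi(0.5, 0.5, 1.0))    # lam = 1 is the Hamacher product: ab/(a+b-ab)
print(dombi(0.3, 0.8, 50.0))   # large lam approaches min(a, b)
```

Large lambda pushes the operator toward `min`, which is why an adaptive lambda that grows with the feature dimension helps keep firing strengths from collapsing in high-dimensional inputs.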
Initialize the ADMTSK classifier.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `input_mfs` | `Mapping[str, Sequence[MembershipFunction]]` | Mapping from feature name to a sequence of membership functions. | required |
| `n_classes` | `int` | Number of output classes. Must be >= 2. | required |
| `rule_base` | `str` | Rule base strategy. | `'coco'` |
| `t_norm` | `str` | T-norm identifier. | `'dombi'` |
| `adaptive` | `bool` | If `True`, compute adaptive lambda using the feature dimension and membership lower bound. | `True` |
| `lambda_` | `float` | Fixed Dombi parameter. | `1.0` |
| `lower_bound` | `float` | The lower bound for Composite GMF values. | `1.0 / math.e` |
| `K` | `float` | Heuristic constant used to compute adaptive lambda. | `10.0` |
| `t_norm_fn` | `TNormFn \| None` | Optional custom T-norm implementation. | `None` |
| `rules` | `Sequence[Sequence[int]] \| None` | Explicit rule antecedent indices for custom rule bases. | `None` |
| `defuzzifier` | `nn.Module \| None` | Optional defuzzifier module. | `None` |
| `consequent_batch_norm` | `bool` | If `True`, apply batch normalization to consequent inputs. | `False` |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If the provided arguments are invalid. |
Source code in highfis/models.py
### fit

Train the model with optional early stopping.

When `x_val` and `y_val` are provided, the model evaluates a task-specific metric (via `_evaluate_validation`) after every epoch and applies early stopping when the metric has not improved for `patience` consecutive epochs. When `restore_best=True` (the default), the best model weights found during validation are restored.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Training features. | required |
| `y` | `Tensor` | Training targets. | required |
| `epochs` | `int` | Maximum number of training epochs. | `200` |
| `learning_rate` | `float` | Learning rate for the default AdamW optimizer. | `0.001` |
| `criterion` | `Callable[[Tensor, Tensor], Tensor] \| None` | Optional loss function. When `None`, a task-appropriate default is used. | `None` |
| `optimizer` | `torch.optim.Optimizer \| None` | Optional pre-built optimizer. | `None` |
| `batch_size` | `int \| None` | Mini-batch size. | `None` |
| `shuffle` | `bool` | If `True`, shuffle the training data each epoch. | `True` |
| `ur_weight` | `float` | Non-negative weight for the uniform rule regularization term. | `0.0` |
| `ur_target` | `float \| None` | Target uniform activation for UR. | `None` |
| `verbose` | `bool \| int` | Verbosity level. | `False` |
| `x_val` | `Tensor \| None` | Optional validation features. | `None` |
| `y_val` | `Tensor \| None` | Optional validation targets. | `None` |
| `patience` | `int \| None` | Number of consecutive epochs without improvement before early stopping. Set to `None` to disable. | `20` |
| `restore_best` | `bool` | If `True`, restore the best validation weights after training. | `True` |
| `weight_decay` | `float` | L2 weight decay applied to consequent parameters by the default AdamW optimizer. | `1e-08` |

Returns:

| Type | Description |
|---|---|
| `dict[str, Any]` | A dictionary of per-epoch loss lists. |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If shapes of `x`, `y`, `x_val`, or `y_val` are incompatible, or if `ur_weight < 0` or `ur_target` is out of range. |
Source code in highfis/base.py
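The patience logic described for `fit` can be sketched independently of the model. The helper below is illustrative only; the real loop in `highfis/base.py` may differ in details (e.g. tie-breaking or minimum-delta handling).

```python
def early_stop_epoch(metrics, patience):
    """Return the 1-based epoch at which training stops, or len(metrics)
    if the patience budget is never exhausted. Higher metric = better."""
    best = float("-inf")
    wait = 0
    for epoch, m in enumerate(metrics, start=1):
        if m > best:            # improvement: record it and reset the counter
            best, wait = m, 0
        else:
            wait += 1
            if wait >= patience:
                return epoch    # no improvement for `patience` consecutive epochs
    return len(metrics)

# Validation accuracy improves, then plateaus; with patience=3 the loop
# stops three epochs after the last improvement (epoch 3).
history = [0.60, 0.70, 0.75, 0.74, 0.75, 0.73, 0.74, 0.72]
print(early_stop_epoch(history, patience=3))  # 6
```

Note that an epoch merely *matching* the best metric does not reset the counter here; only a strict improvement does.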
### forward

### forward_antecedents

Compute normalized rule strengths from model antecedents.

### predict

### predict_proba
## ADMTSKRegressor

Bases: `BaseTSKRegressor`

Adaptive Dombi TSK regressor with Composite Gaussian membership functions.
ADMTSK is an adaptive Dombi TSK fuzzy system designed for high-dimensional inference. It combines a Dombi T-norm antecedent with a positive lower-bound Composite Gaussian membership function (CGMF) and normalized first-order consequents.
Reference
G. Xue, L. Hu, J. Wang and S. Ablameyko, "ADMTSK: A High-Dimensional Takagi-Sugeno-Kang Fuzzy System Based on Adaptive Dombi T-Norm," in IEEE Transactions on Fuzzy Systems, vol. 33, no. 6, pp. 1767-1780, June 2025, doi: 10.1109/TFUZZ.2025.3535640.
Initialize the ADMTSK regressor.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `input_mfs` | `Mapping[str, Sequence[MembershipFunction]]` | Mapping from feature name to a sequence of membership functions. | required |
| `rule_base` | `str` | Rule base strategy. | `'coco'` |
| `t_norm` | `str` | T-norm identifier. | `'dombi'` |
| `adaptive` | `bool` | If `True`, compute adaptive lambda using the feature dimension and membership lower bound. | `True` |
| `lambda_` | `float` | Fixed Dombi parameter. | `1.0` |
| `lower_bound` | `float` | The lower bound for Composite GMF values. | `1.0 / math.e` |
| `K` | `float` | Heuristic constant used to compute adaptive lambda. | `10.0` |
| `t_norm_fn` | `TNormFn \| None` | Optional custom T-norm implementation. | `None` |
| `rules` | `Sequence[Sequence[int]] \| None` | Explicit rule antecedent indices for custom rule bases. | `None` |
| `defuzzifier` | `nn.Module \| None` | Optional defuzzifier module. | `None` |
| `consequent_batch_norm` | `bool` | If `True`, apply batch normalization to consequent inputs. | `False` |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If the provided arguments are invalid. |
Source code in highfis/models.py
### fit

Train the model with optional early stopping.

When `x_val` and `y_val` are provided, the model evaluates a task-specific metric (via `_evaluate_validation`) after every epoch and applies early stopping when the metric has not improved for `patience` consecutive epochs. When `restore_best=True` (the default), the best model weights found during validation are restored.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Training features. | required |
| `y` | `Tensor` | Training targets. | required |
| `epochs` | `int` | Maximum number of training epochs. | `200` |
| `learning_rate` | `float` | Learning rate for the default AdamW optimizer. | `0.001` |
| `criterion` | `Callable[[Tensor, Tensor], Tensor] \| None` | Optional loss function. When `None`, a task-appropriate default is used. | `None` |
| `optimizer` | `torch.optim.Optimizer \| None` | Optional pre-built optimizer. | `None` |
| `batch_size` | `int \| None` | Mini-batch size. | `None` |
| `shuffle` | `bool` | If `True`, shuffle the training data each epoch. | `True` |
| `ur_weight` | `float` | Non-negative weight for the uniform rule regularization term. | `0.0` |
| `ur_target` | `float \| None` | Target uniform activation for UR. | `None` |
| `verbose` | `bool \| int` | Verbosity level. | `False` |
| `x_val` | `Tensor \| None` | Optional validation features. | `None` |
| `y_val` | `Tensor \| None` | Optional validation targets. | `None` |
| `patience` | `int \| None` | Number of consecutive epochs without improvement before early stopping. Set to `None` to disable. | `20` |
| `restore_best` | `bool` | If `True`, restore the best validation weights after training. | `True` |
| `weight_decay` | `float` | L2 weight decay applied to consequent parameters by the default AdamW optimizer. | `1e-08` |

Returns:

| Type | Description |
|---|---|
| `dict[str, Any]` | A dictionary of per-epoch loss lists. |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If shapes of `x`, `y`, `x_val`, or `y_val` are incompatible, or if `ur_weight < 0` or `ur_target` is out of range. |
Source code in highfis/base.py
### forward

### forward_antecedents

Compute normalized rule strengths from model antecedents.
## ADPTSKClassifier

Bases: `BaseTSKClassifier`

TSK classifier with adaptive double-parameter softmin antecedent (ADPTSK).
The firing strengths of each rule are computed with the ADP-softmin operator, and membership functions are wrapped as Gaussian PIMFs to preserve a positive infimum during high-dimensional training.
Reference
Ma, M., Qian, L., Zhang, Y., Fang, Q., & Xue, G. (2025). An adaptive double-parameter softmin based Takagi-Sugeno-Kang fuzzy system for high-dimensional data. Fuzzy Sets and Systems, 521, 109582. https://doi.org/10.1016/j.fss.2025.109582
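The softmin family these variants build on can be illustrated with the exponentially weighted soft minimum. The exact ADP-softmin operator and its two adaptive parameters follow the referenced paper, so treat this as a generic sketch of the underlying idea only.

```python
import numpy as np

def softmin(x, q):
    """Exponentially weighted soft minimum:
    sum(x_i * e^{-q x_i}) / sum(e^{-q x_i}).
    As q grows, the weights concentrate on the smallest x_i."""
    w = np.exp(-q * (x - x.min()))   # shift by min(x) for numerical stability
    return float((x * w).sum() / w.sum())

mu = np.array([0.9, 0.6, 0.8])       # membership values along one rule
print(softmin(mu, q=1.0))            # lies between min(mu) and mean(mu)
print(softmin(mu, q=100.0))          # ~0.6, approaches the true minimum
```

Unlike a hard `min`, this operator is smooth in every membership value, so gradients reach all antecedent parameters instead of only the single smallest one.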
Initialise the ADPTSK classifier.
Source code in highfis/models.py
### fit

Train the model with optional early stopping.

When `x_val` and `y_val` are provided, the model evaluates a task-specific metric (via `_evaluate_validation`) after every epoch and applies early stopping when the metric has not improved for `patience` consecutive epochs. When `restore_best=True` (the default), the best model weights found during validation are restored.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Training features. | required |
| `y` | `Tensor` | Training targets. | required |
| `epochs` | `int` | Maximum number of training epochs. | `200` |
| `learning_rate` | `float` | Learning rate for the default AdamW optimizer. | `0.001` |
| `criterion` | `Callable[[Tensor, Tensor], Tensor] \| None` | Optional loss function. When `None`, a task-appropriate default is used. | `None` |
| `optimizer` | `torch.optim.Optimizer \| None` | Optional pre-built optimizer. | `None` |
| `batch_size` | `int \| None` | Mini-batch size. | `None` |
| `shuffle` | `bool` | If `True`, shuffle the training data each epoch. | `True` |
| `ur_weight` | `float` | Non-negative weight for the uniform rule regularization term. | `0.0` |
| `ur_target` | `float \| None` | Target uniform activation for UR. | `None` |
| `verbose` | `bool \| int` | Verbosity level. | `False` |
| `x_val` | `Tensor \| None` | Optional validation features. | `None` |
| `y_val` | `Tensor \| None` | Optional validation targets. | `None` |
| `patience` | `int \| None` | Number of consecutive epochs without improvement before early stopping. Set to `None` to disable. | `20` |
| `restore_best` | `bool` | If `True`, restore the best validation weights after training. | `True` |
| `weight_decay` | `float` | L2 weight decay applied to consequent parameters by the default AdamW optimizer. | `1e-08` |

Returns:

| Type | Description |
|---|---|
| `dict[str, Any]` | A dictionary of per-epoch loss lists. |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If shapes of `x`, `y`, `x_val`, or `y_val` are incompatible, or if `ur_weight < 0` or `ur_target` is out of range. |
Source code in highfis/base.py
### forward

### forward_antecedents

Compute normalized rule strengths from model antecedents.

### predict

### predict_proba
## ADPTSKRegressor

Bases: `BaseTSKRegressor`

TSK regressor with adaptive double-parameter softmin antecedent (ADPTSK).
The firing strengths of each rule are computed with the ADP-softmin operator, and membership functions are wrapped as Gaussian PIMFs to preserve a positive infimum during high-dimensional training.
Reference
Ma, M., Qian, L., Zhang, Y., Fang, Q., & Xue, G. (2025). An adaptive double-parameter softmin based Takagi-Sugeno-Kang fuzzy system for high-dimensional data. Fuzzy Sets and Systems, 521, 109582. https://doi.org/10.1016/j.fss.2025.109582
Initialise the ADPTSK regressor.
Source code in highfis/models.py
### fit

Train the model with optional early stopping.

When `x_val` and `y_val` are provided, the model evaluates a task-specific metric (via `_evaluate_validation`) after every epoch and applies early stopping when the metric has not improved for `patience` consecutive epochs. When `restore_best=True` (the default), the best model weights found during validation are restored.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Training features. | required |
| `y` | `Tensor` | Training targets. | required |
| `epochs` | `int` | Maximum number of training epochs. | `200` |
| `learning_rate` | `float` | Learning rate for the default AdamW optimizer. | `0.001` |
| `criterion` | `Callable[[Tensor, Tensor], Tensor] \| None` | Optional loss function. When `None`, a task-appropriate default is used. | `None` |
| `optimizer` | `torch.optim.Optimizer \| None` | Optional pre-built optimizer. | `None` |
| `batch_size` | `int \| None` | Mini-batch size. | `None` |
| `shuffle` | `bool` | If `True`, shuffle the training data each epoch. | `True` |
| `ur_weight` | `float` | Non-negative weight for the uniform rule regularization term. | `0.0` |
| `ur_target` | `float \| None` | Target uniform activation for UR. | `None` |
| `verbose` | `bool \| int` | Verbosity level. | `False` |
| `x_val` | `Tensor \| None` | Optional validation features. | `None` |
| `y_val` | `Tensor \| None` | Optional validation targets. | `None` |
| `patience` | `int \| None` | Number of consecutive epochs without improvement before early stopping. Set to `None` to disable. | `20` |
| `restore_best` | `bool` | If `True`, restore the best validation weights after training. | `True` |
| `weight_decay` | `float` | L2 weight decay applied to consequent parameters by the default AdamW optimizer. | `1e-08` |

Returns:

| Type | Description |
|---|---|
| `dict[str, Any]` | A dictionary of per-epoch loss lists. |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If shapes of `x`, `y`, `x_val`, or `y_val` are incompatible, or if `ur_weight < 0` or `ur_target` is out of range. |
Source code in highfis/base.py
### forward

### forward_antecedents

Compute normalized rule strengths from model antecedents.
## AYATSKClassifier

Bases: `BaseTSKClassifier`

TSK classifier with an adaptive Yager T-norm in the antecedent.
AYATSK extends TSK by using an adaptive Yager T-norm aggregation and optional positive lower-bound membership functions to improve stability and performance in high-dimensional settings.
Reference
G. Xue, Y. Yang and J. Wang, "Adaptive Yager T-Norm-Based Takagi-Sugeno-Kang Fuzzy Systems," in IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 55, no. 12, pp. 9802-9815, Dec. 2025, doi: 10.1109/TSMC.2025.3621346.
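The non-adaptive Yager t-norm underlying this variant has a simple closed form. The adaptive exponent selection is the paper's contribution; the sketch below only shows the base operator with a fixed, illustrative exponent `p`.

```python
def yager(a, b, p):
    """Yager t-norm: T(a, b) = max(0, 1 - ((1-a)^p + (1-b)^p)^(1/p))."""
    return max(0.0, 1.0 - ((1.0 - a) ** p + (1.0 - b) ** p) ** (1.0 / p))

print(yager(0.7, 1.0, 2.0))   # boundary condition: T(a, 1) = a, so 0.7
print(yager(0.7, 0.8, 1.0))   # p = 1 is the Lukasiewicz t-norm: max(0, a + b - 1)
print(yager(0.3, 0.8, 50.0))  # large p approaches min(a, b)
```

At small `p` the operator can clamp to exactly zero, which kills gradients; pushing `p` up toward the `min`-like regime is one motivation for adapting it to the input dimension.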
Initialise the AYATSK classifier.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `input_mfs` | `Mapping[str, Sequence[MembershipFunction]]` | Mapping from feature name to a sequence of membership functions. | required |
| `n_classes` | `int` | Number of output classes (must be ≥ 2). | required |
| `rule_base` | `str` | Rule base strategy. | `'coco'` |
| `t_norm` | `str` | T-norm identifier. | `'yager'` |
| `t_norm_fn` | `TNormFn \| None` | Optional custom t-norm callable. | `None` |
| `rules` | `Sequence[Sequence[int]] \| None` | Explicit rule antecedent indices. | `None` |
| `defuzzifier` | `nn.Module \| None` | Custom defuzzifier. | `None` |
| `consequent_batch_norm` | `bool` | Batch normalisation on consequent inputs. | `False` |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If the provided arguments are invalid. |
Source code in highfis/models.py
### fit

Train the model with optional early stopping.

When `x_val` and `y_val` are provided, the model evaluates a task-specific metric (via `_evaluate_validation`) after every epoch and applies early stopping when the metric has not improved for `patience` consecutive epochs. When `restore_best=True` (the default), the best model weights found during validation are restored.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Training features. | required |
| `y` | `Tensor` | Training targets. | required |
| `epochs` | `int` | Maximum number of training epochs. | `200` |
| `learning_rate` | `float` | Learning rate for the default AdamW optimizer. | `0.001` |
| `criterion` | `Callable[[Tensor, Tensor], Tensor] \| None` | Optional loss function. When `None`, a task-appropriate default is used. | `None` |
| `optimizer` | `torch.optim.Optimizer \| None` | Optional pre-built optimizer. | `None` |
| `batch_size` | `int \| None` | Mini-batch size. | `None` |
| `shuffle` | `bool` | If `True`, shuffle the training data each epoch. | `True` |
| `ur_weight` | `float` | Non-negative weight for the uniform rule regularization term. | `0.0` |
| `ur_target` | `float \| None` | Target uniform activation for UR. | `None` |
| `verbose` | `bool \| int` | Verbosity level. | `False` |
| `x_val` | `Tensor \| None` | Optional validation features. | `None` |
| `y_val` | `Tensor \| None` | Optional validation targets. | `None` |
| `patience` | `int \| None` | Number of consecutive epochs without improvement before early stopping. Set to `None` to disable. | `20` |
| `restore_best` | `bool` | If `True`, restore the best validation weights after training. | `True` |
| `weight_decay` | `float` | L2 weight decay applied to consequent parameters by the default AdamW optimizer. | `1e-08` |

Returns:

| Type | Description |
|---|---|
| `dict[str, Any]` | A dictionary of per-epoch loss lists. |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If shapes of `x`, `y`, `x_val`, or `y_val` are incompatible, or if `ur_weight < 0` or `ur_target` is out of range. |
Source code in highfis/base.py
### forward

### forward_antecedents

Compute normalized rule strengths from model antecedents.

### predict

### predict_proba
## AYATSKRegressor

Bases: `BaseTSKRegressor`

TSK regressor with an adaptive Yager T-norm in the antecedent.
AYATSK extends TSK by using an adaptive Yager T-norm aggregation and optional positive lower-bound membership functions to improve stability and performance in high-dimensional settings.
Reference
G. Xue, Y. Yang and J. Wang, "Adaptive Yager T-Norm-Based Takagi-Sugeno-Kang Fuzzy Systems," in IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 55, no. 12, pp. 9802-9815, Dec. 2025, doi: 10.1109/TSMC.2025.3621346.
Initialise the AYATSK regressor.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `input_mfs` | `Mapping[str, Sequence[MembershipFunction]]` | Mapping from feature name to a sequence of membership functions. | required |
| `rule_base` | `str` | Rule base strategy. | `'coco'` |
| `t_norm` | `str` | T-norm identifier. | `'yager'` |
| `t_norm_fn` | `TNormFn \| None` | Optional custom t-norm callable. | `None` |
| `rules` | `Sequence[Sequence[int]] \| None` | Explicit rule antecedent indices. | `None` |
| `defuzzifier` | `nn.Module \| None` | Custom defuzzifier. | `None` |
| `consequent_batch_norm` | `bool` | Batch normalisation on consequent inputs. | `False` |
Source code in highfis/models.py
### fit

Train the model with optional early stopping.

When `x_val` and `y_val` are provided, the model evaluates a task-specific metric (via `_evaluate_validation`) after every epoch and applies early stopping when the metric has not improved for `patience` consecutive epochs. When `restore_best=True` (the default), the best model weights found during validation are restored.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Training features. | required |
| `y` | `Tensor` | Training targets. | required |
| `epochs` | `int` | Maximum number of training epochs. | `200` |
| `learning_rate` | `float` | Learning rate for the default AdamW optimizer. | `0.001` |
| `criterion` | `Callable[[Tensor, Tensor], Tensor] \| None` | Optional loss function. When `None`, a task-appropriate default is used. | `None` |
| `optimizer` | `torch.optim.Optimizer \| None` | Optional pre-built optimizer. | `None` |
| `batch_size` | `int \| None` | Mini-batch size. | `None` |
| `shuffle` | `bool` | If `True`, shuffle the training data each epoch. | `True` |
| `ur_weight` | `float` | Non-negative weight for the uniform rule regularization term. | `0.0` |
| `ur_target` | `float \| None` | Target uniform activation for UR. | `None` |
| `verbose` | `bool \| int` | Verbosity level. | `False` |
| `x_val` | `Tensor \| None` | Optional validation features. | `None` |
| `y_val` | `Tensor \| None` | Optional validation targets. | `None` |
| `patience` | `int \| None` | Number of consecutive epochs without improvement before early stopping. Set to `None` to disable. | `20` |
| `restore_best` | `bool` | If `True`, restore the best validation weights after training. | `True` |
| `weight_decay` | `float` | L2 weight decay applied to consequent parameters by the default AdamW optimizer. | `1e-08` |

Returns:

| Type | Description |
|---|---|
| `dict[str, Any]` | A dictionary of per-epoch loss lists. |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If shapes of `x`, `y`, `x_val`, or `y_val` are incompatible, or if `ur_weight < 0` or `ur_target` is out of range. |
Source code in highfis/base.py
### forward

### forward_antecedents

Compute normalized rule strengths from model antecedents.
## AdaTSKClassifier

Bases: `BaseTSKClassifier`

TSK classifier with adaptive softmin antecedent (AdaTSK).
The firing strength of each rule is computed with the Ada-softmin operator.
Reference
G. Xue, Q. Chang, J. Wang, K. Zhang and N. R. Pal, "An Adaptive Neuro-Fuzzy System With Integrated Feature Selection and Rule Extraction for High-Dimensional Classification Problems," in IEEE Transactions on Fuzzy Systems, vol. 31, no. 7, pp. 2167-2181, July 2023, doi: 10.1109/TFUZZ.2022.3220950.
Initialise the AdaTSK classifier.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `input_mfs` | `Mapping[str, Sequence[MembershipFunction]]` | Mapping from feature name to a sequence of membership functions. | required |
| `n_classes` | `int` | Number of output classes (must be ≥ 2). | required |
| `rule_base` | `str` | Rule base strategy. | `'coco'` |
| `rules` | `Sequence[Sequence[int]] \| None` | Explicit rule antecedent indices. | `None` |
| `defuzzifier` | `nn.Module \| None` | Custom defuzzifier. | `None` |
| `consequent_batch_norm` | `bool` | Batch normalisation on consequent inputs. | `False` |
| `eps` | `float \| None` | Numerical stability epsilon for the Ada-softmin operator. | `None` |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If the provided arguments are invalid. |
Source code in highfis/models.py
### fit

Train the model with optional early stopping.

When `x_val` and `y_val` are provided, the model evaluates a task-specific metric (via `_evaluate_validation`) after every epoch and applies early stopping when the metric has not improved for `patience` consecutive epochs. When `restore_best=True` (the default), the best model weights found during validation are restored.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Training features. | required |
| `y` | `Tensor` | Training targets. | required |
| `epochs` | `int` | Maximum number of training epochs. | `200` |
| `learning_rate` | `float` | Learning rate for the default AdamW optimizer. | `0.001` |
| `criterion` | `Callable[[Tensor, Tensor], Tensor] \| None` | Optional loss function. When `None`, a task-appropriate default is used. | `None` |
| `optimizer` | `torch.optim.Optimizer \| None` | Optional pre-built optimizer. | `None` |
| `batch_size` | `int \| None` | Mini-batch size. | `None` |
| `shuffle` | `bool` | If `True`, shuffle the training data each epoch. | `True` |
| `ur_weight` | `float` | Non-negative weight for the uniform rule regularization term. | `0.0` |
| `ur_target` | `float \| None` | Target uniform activation for UR. | `None` |
| `verbose` | `bool \| int` | Verbosity level. | `False` |
| `x_val` | `Tensor \| None` | Optional validation features. | `None` |
| `y_val` | `Tensor \| None` | Optional validation targets. | `None` |
| `patience` | `int \| None` | Number of consecutive epochs without improvement before early stopping. Set to `None` to disable. | `20` |
| `restore_best` | `bool` | If `True`, restore the best validation weights after training. | `True` |
| `weight_decay` | `float` | L2 weight decay applied to consequent parameters by the default AdamW optimizer. | `1e-08` |

Returns:

| Type | Description |
|---|---|
| `dict[str, Any]` | A dictionary of per-epoch loss lists. |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If shapes of `x`, `y`, `x_val`, or `y_val` are incompatible, or if `ur_weight < 0` or `ur_target` is out of range. |
Source code in highfis/base.py
### forward

### forward_antecedents

Compute normalized rule strengths from model antecedents.

### predict

### predict_proba
## AdaTSKRegressor

Bases: `BaseTSKRegressor`

TSK regressor with adaptive softmin antecedent (AdaTSK).
The firing strength of each rule is computed with the Ada-softmin operator.
Reference
G. Xue, Q. Chang, J. Wang, K. Zhang and N. R. Pal, "An Adaptive Neuro-Fuzzy System With Integrated Feature Selection and Rule Extraction for High-Dimensional Classification Problems," in IEEE Transactions on Fuzzy Systems, vol. 31, no. 7, pp. 2167-2181, July 2023, doi: 10.1109/TFUZZ.2022.3220950.
Initialise the AdaTSK regressor.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `input_mfs` | `Mapping[str, Sequence[MembershipFunction]]` | Mapping from feature name to a sequence of membership functions. | required |
| `rule_base` | `str` | Rule base strategy. | `'coco'` |
| `rules` | `Sequence[Sequence[int]] \| None` | Explicit rule antecedent indices. | `None` |
| `defuzzifier` | `nn.Module \| None` | Custom defuzzifier. | `None` |
| `consequent_batch_norm` | `bool` | Batch normalisation on consequent inputs. | `False` |
| `eps` | `float \| None` | Numerical stability epsilon for the Ada-softmin operator. | `None` |
Source code in highfis/models.py
fit
Train the model with optional early stopping.
When `x_val` and `y_val` are provided, the model evaluates a task-specific metric (via `_evaluate_validation`) after every epoch and applies early stopping when the metric has not improved for `patience` consecutive epochs. By default (`restore_best=True`) the best model weights from validation are restored.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Training features of shape | required |
| `y` | `Tensor` | Training targets of shape | required |
| `epochs` | `int` | Maximum number of training epochs. | `200` |
| `learning_rate` | `float` | Learning rate for the default AdamW optimizer. | `0.001` |
| `criterion` | `Callable[[Tensor, Tensor], Tensor] \| None` | Optional loss function. Defaults to :meth: | `None` |
| `optimizer` | `torch.optim.Optimizer \| None` | Optional pre-built optimizer. When | `None` |
| `batch_size` | `int \| None` | Mini-batch size. | `None` |
| `shuffle` | `bool` | If | `True` |
| `ur_weight` | `float` | Non-negative weight for the uniform rule regularization term. | `0.0` |
| `ur_target` | `float \| None` | Target uniform activation for UR. Must be in | `None` |
| `verbose` | `bool \| int` | Verbosity level. | `False` |
| `x_val` | `Tensor \| None` | Optional validation features of shape | `None` |
| `y_val` | `Tensor \| None` | Optional validation targets of shape | `None` |
| `patience` | `int \| None` | Number of consecutive epochs without improvement before early stopping. Set to | `20` |
| `restore_best` | `bool` | If | `True` |
| `weight_decay` | `float` | L2 weight decay applied to consequent parameters by the default AdamW optimizer. | `1e-08` |

Returns:

| Type | Description |
|---|---|
| `dict[str, Any]` | A dictionary with keys containing per-epoch loss lists. |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If shapes of `x`, `y`, `x_val`, or `y_val` are incompatible, or if `ur_weight < 0` or `ur_target` is outside |
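The early-stopping contract described above (stop after `patience` epochs without improvement, remember the best epoch) can be sketched independently of the library; this toy loop assumes a lower-is-better validation metric:

```python
def run_early_stopping(val_metrics, patience):
    """Return (stop_epoch, best_epoch) for a lower-is-better validation metric."""
    best, best_epoch, bad = float("inf"), 0, 0
    for epoch, m in enumerate(val_metrics):
        if m < best:
            best, best_epoch, bad = m, epoch, 0  # improvement resets the counter
        else:
            bad += 1
            if bad >= patience:                  # patience exhausted: stop here
                return epoch, best_epoch
    return len(val_metrics) - 1, best_epoch

# metric improves until epoch 3, then plateaus; patience=2 stops at epoch 5
print(run_early_stopping([1.0, 0.8, 0.7, 0.65, 0.66, 0.7, 0.71], 2))  # (5, 3)
```

With `restore_best=True` the model would then roll its weights back to those saved at `best_epoch`.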
Source code in highfis/base.py
forward
forward_antecedents
Compute normalized rule strengths from model antecedents.
BaseTSKClassifier
Bases: BaseTSK
Abstract classifier base that provides task-specific training and inference helpers.
Initialize the TSK pipeline layers.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `input_mfs` | `Mapping[str, Sequence[MembershipFunction]]` | Mapping from feature names to sequences of :class: | required |
| `rule_base` | `str` | Rule-base construction strategy. Supported values: | `'cartesian'` |
| `t_norm` | `str` | Built-in T-norm name. Ignored when `t_norm_fn` is provided. Common values: | `'gmean'` |
| `t_norm_fn` | `TNormFn \| None` | Optional custom T-norm callable. When provided, `t_norm` is internally set to | `None` |
| `rules` | `Sequence[Sequence[int]] \| None` | Explicit rule index sequences. Required when `rule_base` is | `None` |
| `defuzzifier` | `nn.Module \| None` | Normalization module applied to raw rule firing strengths. Defaults to :class: | `None` |
| `consequent_batch_norm` | `bool` | If | `False` |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If `input_mfs` is empty. |
Source code in highfis/base.py
fit
Train the model with optional early stopping.
When `x_val` and `y_val` are provided, the model evaluates a task-specific metric (via `_evaluate_validation`) after every epoch and applies early stopping when the metric has not improved for `patience` consecutive epochs. By default (`restore_best=True`) the best model weights from validation are restored.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Training features of shape | required |
| `y` | `Tensor` | Training targets of shape | required |
| `epochs` | `int` | Maximum number of training epochs. | `200` |
| `learning_rate` | `float` | Learning rate for the default AdamW optimizer. | `0.001` |
| `criterion` | `Callable[[Tensor, Tensor], Tensor] \| None` | Optional loss function. Defaults to :meth: | `None` |
| `optimizer` | `torch.optim.Optimizer \| None` | Optional pre-built optimizer. When | `None` |
| `batch_size` | `int \| None` | Mini-batch size. | `None` |
| `shuffle` | `bool` | If | `True` |
| `ur_weight` | `float` | Non-negative weight for the uniform rule regularization term. | `0.0` |
| `ur_target` | `float \| None` | Target uniform activation for UR. Must be in | `None` |
| `verbose` | `bool \| int` | Verbosity level. | `False` |
| `x_val` | `Tensor \| None` | Optional validation features of shape | `None` |
| `y_val` | `Tensor \| None` | Optional validation targets of shape | `None` |
| `patience` | `int \| None` | Number of consecutive epochs without improvement before early stopping. Set to | `20` |
| `restore_best` | `bool` | If | `True` |
| `weight_decay` | `float` | L2 weight decay applied to consequent parameters by the default AdamW optimizer. | `1e-08` |

Returns:

| Type | Description |
|---|---|
| `dict[str, Any]` | A dictionary with keys containing per-epoch loss lists. |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If shapes of `x`, `y`, `x_val`, or `y_val` are incompatible, or if `ur_weight < 0` or `ur_target` is outside |
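The `ur_weight`/`ur_target` pair adds a uniform-rule (UR) penalty that discourages a few rules from dominating the others. The exact form used by the library is not reproduced here; one plausible formulation (an assumption for illustration) penalizes the squared gap between each rule's batch-averaged firing strength and a uniform target:

```python
def uniform_rule_penalty(firing, ur_target, ur_weight):
    """Hypothetical UR term: mean squared gap between each rule's average
    normalized firing strength (over the batch) and a uniform target."""
    n_rules = len(firing[0])
    mean_per_rule = [sum(sample[r] for sample in firing) / len(firing)
                     for r in range(n_rules)]
    return ur_weight * sum((m - ur_target) ** 2 for m in mean_per_rule) / n_rules

# batch of 2 samples, 2 rules; perfectly uniform firing incurs zero penalty
print(uniform_rule_penalty([[0.5, 0.5], [0.5, 0.5]], ur_target=0.5, ur_weight=1.0))
```

During `fit`, a term of this kind would simply be added to the task loss before backpropagation.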
Source code in highfis/base.py
forward
forward_antecedents
Compute normalized rule strengths from model antecedents.
predict
predict_proba
BaseTSKRegressor
Bases: BaseTSK
Abstract regressor base that provides task-specific training and inference helpers.
Initialize the TSK pipeline layers.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `input_mfs` | `Mapping[str, Sequence[MembershipFunction]]` | Mapping from feature names to sequences of :class: | required |
| `rule_base` | `str` | Rule-base construction strategy. Supported values: | `'cartesian'` |
| `t_norm` | `str` | Built-in T-norm name. Ignored when `t_norm_fn` is provided. Common values: | `'gmean'` |
| `t_norm_fn` | `TNormFn \| None` | Optional custom T-norm callable. When provided, `t_norm` is internally set to | `None` |
| `rules` | `Sequence[Sequence[int]] \| None` | Explicit rule index sequences. Required when `rule_base` is | `None` |
| `defuzzifier` | `nn.Module \| None` | Normalization module applied to raw rule firing strengths. Defaults to :class: | `None` |
| `consequent_batch_norm` | `bool` | If | `False` |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If `input_mfs` is empty. |
Source code in highfis/base.py
fit
Train the model with optional early stopping.
When `x_val` and `y_val` are provided, the model evaluates a task-specific metric (via `_evaluate_validation`) after every epoch and applies early stopping when the metric has not improved for `patience` consecutive epochs. By default (`restore_best=True`) the best model weights from validation are restored.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Training features of shape | required |
| `y` | `Tensor` | Training targets of shape | required |
| `epochs` | `int` | Maximum number of training epochs. | `200` |
| `learning_rate` | `float` | Learning rate for the default AdamW optimizer. | `0.001` |
| `criterion` | `Callable[[Tensor, Tensor], Tensor] \| None` | Optional loss function. Defaults to :meth: | `None` |
| `optimizer` | `torch.optim.Optimizer \| None` | Optional pre-built optimizer. When | `None` |
| `batch_size` | `int \| None` | Mini-batch size. | `None` |
| `shuffle` | `bool` | If | `True` |
| `ur_weight` | `float` | Non-negative weight for the uniform rule regularization term. | `0.0` |
| `ur_target` | `float \| None` | Target uniform activation for UR. Must be in | `None` |
| `verbose` | `bool \| int` | Verbosity level. | `False` |
| `x_val` | `Tensor \| None` | Optional validation features of shape | `None` |
| `y_val` | `Tensor \| None` | Optional validation targets of shape | `None` |
| `patience` | `int \| None` | Number of consecutive epochs without improvement before early stopping. Set to | `20` |
| `restore_best` | `bool` | If | `True` |
| `weight_decay` | `float` | L2 weight decay applied to consequent parameters by the default AdamW optimizer. | `1e-08` |

Returns:

| Type | Description |
|---|---|
| `dict[str, Any]` | A dictionary with keys containing per-epoch loss lists. |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If shapes of `x`, `y`, `x_val`, or `y_val` are incompatible, or if `ur_weight < 0` or `ur_target` is outside |
Source code in highfis/base.py
forward
forward_antecedents
Compute normalized rule strengths from model antecedents.
DGALETSKClassifier
Bases: BaseTSKClassifier
DG-ALETSK classifier with ALE-softmin antecedent and double-group gates.
DG-ALETSK extends FSRE-AdaTSK by replacing the adaptive softmin with the Adaptive Ln-Exp (ALE) softmin, a smoother variant with improved numerical stability. It also uses a zero-order consequent in the DG (data-guided) training phase and can optionally be converted to first-order form after gate-based pruning.
Reference
G. Xue, J. Wang, B. Yuan and C. Dai, "DG-ALETSK: A High-Dimensional Fuzzy Approach With Simultaneous Feature Selection and Rule Extraction," in IEEE Transactions on Fuzzy Systems, vol. 31, no. 11, pp. 3866-3880, Nov. 2023, doi: 10.1109/TFUZZ.2023.3270445.
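The "Ln-Exp" name points at the classic log-sum-exp soft minimum. As a hedged sketch (the paper's adaptive parameterization of the λ parameter is not reproduced here), the operator behaves like:

```python
import math

def lnexp_softmin(values, lam):
    """Log-sum-exp soft minimum; approaches min(values) as lam grows."""
    n = len(values)
    return -(1.0 / lam) * math.log(sum(math.exp(-lam * v) for v in values) / n)

mu = [0.9, 0.5, 0.7]  # membership values of one rule's antecedent clauses
print(lnexp_softmin(mu, lam=1.0))   # smooth, average-like value
print(lnexp_softmin(mu, lam=50.0))  # close to min(mu) = 0.5
```

Because `log` and `exp` are smooth everywhere, this form tends to be numerically better behaved than the power-mean softmin when memberships get very small.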
Initialise the DG-ALETSK classifier.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `input_mfs` | `Mapping[str, Sequence[MembershipFunction]]` | Mapping from feature name to a sequence of :class: | required |
| `n_classes` | `int` | Number of output classes (must be ≥ 2). | required |
| `rule_base` | `str` |  | `'coco'` |
| `lambda_init` | `float` | Initial ALE-softmin parameter | `1.0` |
| `rules` | `Sequence[Sequence[int]] \| None` | Explicit rule antecedent indices; ignored when | `None` |
| `defuzzifier` | `nn.Module \| None` | Custom defuzzifier. Defaults to :class: | `None` |
| `consequent_batch_norm` | `bool` | Batch normalisation on consequent inputs. | `False` |
| `eps` | `float \| None` | Numerical stability epsilon for the ALE-softmin operator. | `None` |
| `use_en_frb` | `bool` | Start directly from the Enhanced FRB (En-FRB). | `False` |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If |
Source code in highfis/models.py
apply_thresholds
Apply threshold pruning to feature and rule gates.
Source code in highfis/models.py
compute_thresholds
Compute feature and rule thresholds from gate values and coefficient pairs.
Source code in highfis/models.py
convert_to_first_order
Convert the DG phase zero-order consequent to first-order form.
Source code in highfis/models.py
fit
Train the model with optional early stopping.
When `x_val` and `y_val` are provided, the model evaluates a task-specific metric (via `_evaluate_validation`) after every epoch and applies early stopping when the metric has not improved for `patience` consecutive epochs. By default (`restore_best=True`) the best model weights from validation are restored.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Training features of shape | required |
| `y` | `Tensor` | Training targets of shape | required |
| `epochs` | `int` | Maximum number of training epochs. | `200` |
| `learning_rate` | `float` | Learning rate for the default AdamW optimizer. | `0.001` |
| `criterion` | `Callable[[Tensor, Tensor], Tensor] \| None` | Optional loss function. Defaults to :meth: | `None` |
| `optimizer` | `torch.optim.Optimizer \| None` | Optional pre-built optimizer. When | `None` |
| `batch_size` | `int \| None` | Mini-batch size. | `None` |
| `shuffle` | `bool` | If | `True` |
| `ur_weight` | `float` | Non-negative weight for the uniform rule regularization term. | `0.0` |
| `ur_target` | `float \| None` | Target uniform activation for UR. Must be in | `None` |
| `verbose` | `bool \| int` | Verbosity level. | `False` |
| `x_val` | `Tensor \| None` | Optional validation features of shape | `None` |
| `y_val` | `Tensor \| None` | Optional validation targets of shape | `None` |
| `patience` | `int \| None` | Number of consecutive epochs without improvement before early stopping. Set to | `20` |
| `restore_best` | `bool` | If | `True` |
| `weight_decay` | `float` | L2 weight decay applied to consequent parameters by the default AdamW optimizer. | `1e-08` |

Returns:

| Type | Description |
|---|---|
| `dict[str, Any]` | A dictionary with keys containing per-epoch loss lists. |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If shapes of `x`, `y`, `x_val`, or `y_val` are incompatible, or if `ur_weight < 0` or `ur_target` is outside |
Source code in highfis/base.py
fit_dg_phase
fit_finetune
forward
forward_antecedents
Compute normalized rule strengths from model antecedents.
get_feature_gate_values
Return normalized antecedent feature gate values for the DG phase.
get_rule_gate_values
Return normalized consequent rule gate values for the DG phase.
predict
predict_proba
search_thresholds
Search threshold coefficients for feature and rule pruning.
The search follows the DG-ALETSK paper's strategy: thresholds are computed from gate values and applied to prune the gates, and the first-order consequent parameters are then refit with the antecedents held fixed.
Source code in highfis/models.py
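The prune step itself is simple to picture: gates whose values fall below a threshold derived from the gate distribution are zeroed, removing the corresponding features or rules. A generic sketch (the coefficient-times-maximum threshold rule here is an assumption for illustration, not the paper's exact formula):

```python
def prune_gates(gates, coeff):
    """Zero gates below coeff * max(gates); return pruned gates and surviving indices."""
    threshold = coeff * max(gates)
    pruned = [g if g >= threshold else 0.0 for g in gates]
    kept = [i for i, g in enumerate(pruned) if g > 0.0]
    return pruned, kept

gates = [0.9, 0.05, 0.6, 0.02]
print(prune_gates(gates, coeff=0.5))  # keeps gates 0 and 2
```

In DG-ALETSK this is done separately for feature gates and rule gates, after which the surviving consequent is refit.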
DGALETSKRegressor
Bases: BaseTSKRegressor
DG-ALETSK regressor with ALE-softmin antecedent and double-group gates.
DG-ALETSK extends FSRE-AdaTSK by replacing the adaptive softmin with the Adaptive Ln-Exp (ALE) softmin, a smoother variant with improved numerical stability. It also uses a zero-order consequent in the DG (data-guided) training phase and can optionally be converted to first-order form after gate-based pruning.
Reference
G. Xue, J. Wang, B. Yuan and C. Dai, "DG-ALETSK: A High-Dimensional Fuzzy Approach With Simultaneous Feature Selection and Rule Extraction," in IEEE Transactions on Fuzzy Systems, vol. 31, no. 11, pp. 3866-3880, Nov. 2023, doi: 10.1109/TFUZZ.2023.3270445.
Initialise the DG-ALETSK regressor.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `input_mfs` | `Mapping[str, Sequence[MembershipFunction]]` | Mapping from feature name to a sequence of :class: | required |
| `rule_base` | `str` |  | `'coco'` |
| `lambda_init` | `float` | Initial ALE-softmin parameter | `1.0` |
| `rules` | `Sequence[Sequence[int]] \| None` | Explicit rule antecedent indices; ignored when | `None` |
| `defuzzifier` | `nn.Module \| None` | Custom defuzzifier. Defaults to :class: | `None` |
| `consequent_batch_norm` | `bool` | Batch normalisation on consequent inputs. | `False` |
| `eps` | `float \| None` | Numerical stability epsilon for the ALE-softmin operator. | `None` |
| `use_en_frb` | `bool` | Start directly from the Enhanced FRB (En-FRB). | `False` |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If |
Source code in highfis/models.py
apply_thresholds
Apply threshold pruning to feature and rule gates.
Source code in highfis/models.py
compute_thresholds
Compute feature and rule thresholds from gate values and coefficient pairs.
Source code in highfis/models.py
convert_to_first_order
Convert the DG phase zero-order consequent to first-order form.
Source code in highfis/models.py
fit
Train the model with optional early stopping.
When `x_val` and `y_val` are provided, the model evaluates a task-specific metric (via `_evaluate_validation`) after every epoch and applies early stopping when the metric has not improved for `patience` consecutive epochs. By default (`restore_best=True`) the best model weights from validation are restored.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Training features of shape | required |
| `y` | `Tensor` | Training targets of shape | required |
| `epochs` | `int` | Maximum number of training epochs. | `200` |
| `learning_rate` | `float` | Learning rate for the default AdamW optimizer. | `0.001` |
| `criterion` | `Callable[[Tensor, Tensor], Tensor] \| None` | Optional loss function. Defaults to :meth: | `None` |
| `optimizer` | `torch.optim.Optimizer \| None` | Optional pre-built optimizer. When | `None` |
| `batch_size` | `int \| None` | Mini-batch size. | `None` |
| `shuffle` | `bool` | If | `True` |
| `ur_weight` | `float` | Non-negative weight for the uniform rule regularization term. | `0.0` |
| `ur_target` | `float \| None` | Target uniform activation for UR. Must be in | `None` |
| `verbose` | `bool \| int` | Verbosity level. | `False` |
| `x_val` | `Tensor \| None` | Optional validation features of shape | `None` |
| `y_val` | `Tensor \| None` | Optional validation targets of shape | `None` |
| `patience` | `int \| None` | Number of consecutive epochs without improvement before early stopping. Set to | `20` |
| `restore_best` | `bool` | If | `True` |
| `weight_decay` | `float` | L2 weight decay applied to consequent parameters by the default AdamW optimizer. | `1e-08` |

Returns:

| Type | Description |
|---|---|
| `dict[str, Any]` | A dictionary with keys containing per-epoch loss lists. |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If shapes of `x`, `y`, `x_val`, or `y_val` are incompatible, or if `ur_weight < 0` or `ur_target` is outside |
Source code in highfis/base.py
fit_dg_phase
fit_finetune
forward
forward_antecedents
Compute normalized rule strengths from model antecedents.
get_feature_gate_values
Return normalized antecedent feature gate values for the DG phase.
get_rule_gate_values
Return normalized consequent rule gate values for the DG phase.
predict
search_thresholds
Search threshold coefficients for feature and rule pruning.
Source code in highfis/models.py
DGTSKClassifier
Bases: BaseTSKClassifier
DG-TSK classifier with M-gate antecedent and point-based FRB (P-FRB).
DG-TSK uses a data-guided M-gate function to automatically select relevant features and rules.
Reference
Guangdong Xue, Jian Wang, Bingjie Zhang, Bin Yuan, Caili Dai, Double groups of gates based Takagi-Sugeno-Kang (DG-TSK) fuzzy system for simultaneous feature selection and rule extraction, Fuzzy Sets and Systems, Volume 469, 2023, 108627, ISSN 0165-0114, https://doi.org/10.1016/j.fss.2023.108627.
Initialise the DG-TSK classifier.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `input_mfs` | `Mapping[str, Sequence[MembershipFunction]]` | Mapping from feature name to a sequence of :class: | required |
| `n_classes` | `int` | Number of output classes (must be ≥ 2). | required |
| `rule_base` | `str` |  | `'coco'` |
| `gate_fea` | `str \| Callable[[Tensor], Tensor] \| None` | Gate function for antecedent feature selection. | `'gate_m'` |
| `gate_rule` | `str \| Callable[[Tensor], Tensor] \| None` | Gate function for consequent rule selection. Same options as `gate_fea`. | `'gate_m'` |
| `rules` | `Sequence[Sequence[int]] \| None` | Explicit rule antecedent indices; ignored when | `None` |
| `defuzzifier` | `nn.Module \| None` | Custom defuzzifier. Defaults to :class: | `None` |
| `consequent_batch_norm` | `bool` | Batch normalisation on consequent inputs. | `False` |
| `eps` | `float \| None` | Numerical stability epsilon. | `None` |
| `use_en_frb` | `bool` | Use the Enhanced FRB (P-FRB) rule base. | `False` |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If |
Source code in highfis/models.py
apply_thresholds
Prune DG-TSK feature and rule gates using the computed thresholds.
Source code in highfis/models.py
compute_thresholds
Compute DG-TSK pruning thresholds from gate values and zeta parameters.
Source code in highfis/models.py
convert_to_first_order
Convert the DG-TSK model from zero-order to first-order consequent.
Source code in highfis/models.py
fit
Train the model with optional early stopping.
When `x_val` and `y_val` are provided, the model evaluates a task-specific metric (via `_evaluate_validation`) after every epoch and applies early stopping when the metric has not improved for `patience` consecutive epochs. By default (`restore_best=True`) the best model weights from validation are restored.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Training features of shape | required |
| `y` | `Tensor` | Training targets of shape | required |
| `epochs` | `int` | Maximum number of training epochs. | `200` |
| `learning_rate` | `float` | Learning rate for the default AdamW optimizer. | `0.001` |
| `criterion` | `Callable[[Tensor, Tensor], Tensor] \| None` | Optional loss function. Defaults to :meth: | `None` |
| `optimizer` | `torch.optim.Optimizer \| None` | Optional pre-built optimizer. When | `None` |
| `batch_size` | `int \| None` | Mini-batch size. | `None` |
| `shuffle` | `bool` | If | `True` |
| `ur_weight` | `float` | Non-negative weight for the uniform rule regularization term. | `0.0` |
| `ur_target` | `float \| None` | Target uniform activation for UR. Must be in | `None` |
| `verbose` | `bool \| int` | Verbosity level. | `False` |
| `x_val` | `Tensor \| None` | Optional validation features of shape | `None` |
| `y_val` | `Tensor \| None` | Optional validation targets of shape | `None` |
| `patience` | `int \| None` | Number of consecutive epochs without improvement before early stopping. Set to | `20` |
| `restore_best` | `bool` | If | `True` |
| `weight_decay` | `float` | L2 weight decay applied to consequent parameters by the default AdamW optimizer. | `1e-08` |

Returns:

| Type | Description |
|---|---|
| `dict[str, Any]` | A dictionary with keys containing per-epoch loss lists. |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If shapes of `x`, `y`, `x_val`, or `y_val` are incompatible, or if `ur_weight < 0` or `ur_target` is outside |
Source code in highfis/base.py
fit_dg_phase
fit_finetune
Fine-tune the DG-TSK classifier after conversion to first-order consequents.
forward
forward_antecedents
Compute normalized rule strengths from model antecedents.
get_feature_gate_values
get_rule_gate_values
predict
predict_proba
search_thresholds
Search DG-TSK threshold combinations and optionally apply the best candidate.
Source code in highfis/models.py
DGTSKRegressor
Bases: BaseTSKRegressor
DG-TSK regressor with M-gate antecedent and point-based FRB (P-FRB).
DG-TSK uses a data-guided M-gate function to automatically select relevant features and rules.
Reference
Guangdong Xue, Jian Wang, Bingjie Zhang, Bin Yuan, Caili Dai, Double groups of gates based Takagi-Sugeno-Kang (DG-TSK) fuzzy system for simultaneous feature selection and rule extraction, Fuzzy Sets and Systems, Volume 469, 2023, 108627, ISSN 0165-0114, https://doi.org/10.1016/j.fss.2023.108627.
Initialise the DG-TSK regressor.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `input_mfs` | `Mapping[str, Sequence[MembershipFunction]]` | Mapping from feature name to a sequence of :class: | required |
| `rule_base` | `str` |  | `'coco'` |
| `gate_fea` | `str \| Callable[[Tensor], Tensor] \| None` | Gate function for antecedent feature selection (default `'gate_m'`). | `'gate_m'` |
| `gate_rule` | `str \| Callable[[Tensor], Tensor] \| None` | Gate function for consequent rule selection (default `'gate_m'`). | `'gate_m'` |
| `rules` | `Sequence[Sequence[int]] \| None` | Explicit rule antecedent indices; ignored when | `None` |
| `defuzzifier` | `nn.Module \| None` | Custom defuzzifier. Defaults to :class: | `None` |
| `consequent_batch_norm` | `bool` | Batch normalisation on consequent inputs. | `False` |
| `eps` | `float \| None` | Numerical stability epsilon. | `None` |
| `use_en_frb` | `bool` | Use the Enhanced FRB (P-FRB) rule base. | `False` |
Source code in highfis/models.py
apply_thresholds
Prune DG-TSK feature and rule gates using the computed thresholds.
Source code in highfis/models.py
compute_thresholds
Compute DG-TSK pruning thresholds from gate values and zeta parameters.
Source code in highfis/models.py
convert_to_first_order
Convert the DG-TSK regressor from zero-order to first-order consequent.
Source code in highfis/models.py
fit
Train the model with optional early stopping.
When `x_val` and `y_val` are provided, the model evaluates a task-specific metric (via `_evaluate_validation`) after every epoch and applies early stopping when the metric has not improved for `patience` consecutive epochs. By default (`restore_best=True`) the best model weights from validation are restored.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Training features of shape | required |
| `y` | `Tensor` | Training targets of shape | required |
| `epochs` | `int` | Maximum number of training epochs. | `200` |
| `learning_rate` | `float` | Learning rate for the default AdamW optimizer. | `0.001` |
| `criterion` | `Callable[[Tensor, Tensor], Tensor] \| None` | Optional loss function. Defaults to :meth: | `None` |
| `optimizer` | `torch.optim.Optimizer \| None` | Optional pre-built optimizer. When | `None` |
| `batch_size` | `int \| None` | Mini-batch size. | `None` |
| `shuffle` | `bool` | If | `True` |
| `ur_weight` | `float` | Non-negative weight for the uniform rule regularization term. | `0.0` |
| `ur_target` | `float \| None` | Target uniform activation for UR. Must be in | `None` |
| `verbose` | `bool \| int` | Verbosity level. | `False` |
| `x_val` | `Tensor \| None` | Optional validation features of shape | `None` |
| `y_val` | `Tensor \| None` | Optional validation targets of shape | `None` |
| `patience` | `int \| None` | Number of consecutive epochs without improvement before early stopping. Set to | `20` |
| `restore_best` | `bool` | If | `True` |
| `weight_decay` | `float` | L2 weight decay applied to consequent parameters by the default AdamW optimizer. | `1e-08` |

Returns:

| Type | Description |
|---|---|
| `dict[str, Any]` | A dictionary with keys containing per-epoch loss lists. |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If shapes of `x`, `y`, `x_val`, or `y_val` are incompatible, or if `ur_weight < 0` or `ur_target` is outside |
Source code in highfis/base.py
fit_dg_phase
Train the DG-TSK regression zero-order phase before first-order conversion.
fit_finetune
Fine-tune the DG-TSK regression model after converting to first order.
forward
forward_antecedents
Compute normalized rule strengths from model antecedents.
get_feature_gate_values
get_rule_gate_values
predict
search_thresholds
Search DG-TSK regression threshold combinations and optionally apply the best candidate.
Source code in highfis/models.py
DombiTSKClassifier
Bases: BaseTSKClassifier
TSK classifier with a fixed Dombi T-norm in the antecedent.
DombiTSK extends TSK fuzzy inference by using a Dombi t-norm aggregation in antecedent evaluation while keeping first-order linear consequents.
Reference
G. Xue, L. Hu, J. Wang and S. Ablameyko, "ADMTSK: A High-Dimensional Takagi-Sugeno-Kang Fuzzy System Based on Adaptive Dombi T-Norm," in IEEE Transactions on Fuzzy Systems, vol. 33, no. 6, pp. 1767-1780, June 2025, doi: 10.1109/TFUZZ.2025.3535640.
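The fixed-parameter Dombi aggregation used in the antecedent can be sketched in a few lines of NumPy. This is an illustrative stand-in, not the highfis implementation: `dombi_tnorm` and its `(n_rules, D)` argument layout are assumptions for the example.

```python
import numpy as np

def dombi_tnorm(memberships, lam=1.0, eps=1e-12):
    """Aggregate per-dimension membership values with the Dombi t-norm.

    memberships: array of shape (n_rules, D) with values in (0, 1).
    lam: Dombi parameter (lambda_ > 0); lam=1 recovers the Hamacher product.
    """
    m = np.clip(memberships, eps, 1.0 - eps)
    ratio = ((1.0 - m) / m) ** lam          # per-dimension "distance" terms
    return 1.0 / (1.0 + ratio.sum(axis=-1) ** (1.0 / lam))

# A rule whose clauses all fire strongly gets a higher strength:
w = dombi_tnorm(np.array([[0.9, 0.8], [0.5, 0.4]]), lam=1.0)
```

Unlike the plain product, the Dombi sum of `((1-m)/m)^lam` terms grows only additively with the input dimension `D`, which is what makes it attractive in high-dimensional antecedents.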
Initialise the Dombi TSK classifier.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `input_mfs` | `Mapping[str, Sequence[MembershipFunction]]` | Mapping from feature name to a sequence of `MembershipFunction` objects. | *required* |
| `n_classes` | `int` | Number of output classes (must be ≥ 2). | *required* |
| `rule_base` | `str` | Rule-base construction strategy. | `'cartesian'` |
| `t_norm` | `str` | T-norm identifier. | `'dombi'` |
| `lambda_` | `float` | Dombi parameter. | `1.0` |
| `t_norm_fn` | `TNormFn \| None` | Optional custom t-norm callable; overrides `t_norm`. | `None` |
| `rules` | `Sequence[Sequence[int]] \| None` | Explicit rule antecedent indices. | `None` |
| `defuzzifier` | `nn.Module \| None` | Custom defuzzifier. Defaults to `SumBasedDefuzzifier`. | `None` |
| `consequent_batch_norm` | `bool` | Batch normalisation on consequent inputs. | `False` |

Raises:
| Type | Description |
|---|---|
| `ValueError` | If any argument is invalid. |
Source code in highfis/models.py
fit
Train the model with optional early stopping.
When `x_val` and `y_val` are provided, the model evaluates a
task-specific metric (via `_evaluate_validation`) after every
epoch and applies early stopping when the metric has not improved for
`patience` consecutive epochs.
By default the best model weights from validation are restored
(`restore_best=True`).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Training features. | *required* |
| `y` | `Tensor` | Training targets. | *required* |
| `epochs` | `int` | Maximum number of training epochs. | `200` |
| `learning_rate` | `float` | Learning rate for the default AdamW optimizer. | `0.001` |
| `criterion` | `Callable[[Tensor, Tensor], Tensor] \| None` | Optional loss function; when `None`, a task-specific default is used. | `None` |
| `optimizer` | `torch.optim.Optimizer \| None` | Optional pre-built optimizer; when `None`, a default AdamW optimizer is created. | `None` |
| `batch_size` | `int \| None` | Mini-batch size. | `None` |
| `shuffle` | `bool` | Whether to shuffle the training data each epoch. | `True` |
| `ur_weight` | `float` | Non-negative weight for the uniform rule regularization term. | `0.0` |
| `ur_target` | `float \| None` | Target uniform activation for UR. | `None` |
| `verbose` | `bool \| int` | Verbosity level. | `False` |
| `x_val` | `Tensor \| None` | Optional validation features. | `None` |
| `y_val` | `Tensor \| None` | Optional validation targets. | `None` |
| `patience` | `int \| None` | Number of consecutive epochs without improvement before early stopping; set to `None` to disable. | `20` |
| `restore_best` | `bool` | If `True`, restore the best validation weights after training. | `True` |
| `weight_decay` | `float` | L2 weight decay applied to consequent parameters by the default AdamW optimizer. | `1e-08` |

Returns:
| Type | Description |
|---|---|
| `dict[str, Any]` | A dictionary of per-epoch loss lists. |

Raises:
| Type | Description |
|---|---|
| `ValueError` | If the shapes of `x`, `y`, `x_val`, or `y_val` are incompatible, or if `ur_weight < 0`, or if `ur_target` is outside its valid range. |
Source code in highfis/base.py
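The early-stopping contract described by `fit` (per-epoch validation, a `patience` counter, weight restoration under `restore_best`) can be sketched independently of highfis. Here `train_step` and `evaluate` are stand-ins for the model's training step and `_evaluate_validation`, and the "model" is a plain dict of state; none of this is the library's actual API.

```python
import copy

def train_with_early_stopping(model, train_step, evaluate,
                              epochs=200, patience=20, restore_best=True):
    """Generic early-stopping loop; a lower validation metric is better."""
    best_metric, best_state, bad_epochs = float("inf"), None, 0
    history = {"train_loss": [], "val_metric": []}
    for _ in range(epochs):
        history["train_loss"].append(train_step(model))
        metric = evaluate(model)
        history["val_metric"].append(metric)
        if metric < best_metric:          # improvement resets the counter
            best_metric, bad_epochs = metric, 0
            best_state = copy.deepcopy(model)
        else:
            bad_epochs += 1
            if patience is not None and bad_epochs >= patience:
                break                     # patience exhausted: stop early
    if restore_best and best_state is not None:
        model.clear()
        model.update(best_state)          # hand the best-epoch state back
    return history

# Toy run: the metric improves for three epochs, then degrades; with
# patience=2 training stops after epoch 5 and the epoch-3 state is kept.
metrics = iter([3.0, 2.0, 1.0, 1.5, 1.6, 1.7, 1.8])
model = {"epoch": 0}
def step(m):
    m["epoch"] += 1
    return 0.0
hist = train_with_early_stopping(model, step, lambda m: next(metrics),
                                 epochs=10, patience=2)
```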
forward
forward_antecedents
Compute normalized rule strengths from model antecedents.
predict
predict_proba
DombiTSKRegressor
Bases: BaseTSKRegressor
TSK regressor with a fixed Dombi T-norm in the antecedent.
DombiTSK extends TSK fuzzy inference by using a Dombi t-norm aggregation in antecedent evaluation while keeping first-order linear consequents.
Reference
G. Xue, L. Hu, J. Wang and S. Ablameyko, "ADMTSK: A High-Dimensional Takagi-Sugeno-Kang Fuzzy System Based on Adaptive Dombi T-Norm," in IEEE Transactions on Fuzzy Systems, vol. 33, no. 6, pp. 1767-1780, June 2025, doi: 10.1109/TFUZZ.2025.3535640.
Initialise the Dombi TSK regressor.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `input_mfs` | `Mapping[str, Sequence[MembershipFunction]]` | Mapping from feature name to a sequence of `MembershipFunction` objects. | *required* |
| `rule_base` | `str` | Rule-base construction strategy. | `'cartesian'` |
| `t_norm` | `str` | T-norm identifier. | `'dombi'` |
| `lambda_` | `float` | Dombi parameter. | `1.0` |
| `t_norm_fn` | `TNormFn \| None` | Optional custom t-norm callable. | `None` |
| `rules` | `Sequence[Sequence[int]] \| None` | Explicit rule antecedent indices. | `None` |
| `defuzzifier` | `nn.Module \| None` | Custom defuzzifier. Defaults to `SumBasedDefuzzifier`. | `None` |
| `consequent_batch_norm` | `bool` | Batch normalisation on consequent inputs. | `False` |

Raises:
| Type | Description |
|---|---|
| `ValueError` | If any argument is invalid. |
Source code in highfis/models.py
fit
Train the model with optional early stopping.
When `x_val` and `y_val` are provided, the model evaluates a
task-specific metric (via `_evaluate_validation`) after every
epoch and applies early stopping when the metric has not improved for
`patience` consecutive epochs.
By default the best model weights from validation are restored
(`restore_best=True`).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Training features. | *required* |
| `y` | `Tensor` | Training targets. | *required* |
| `epochs` | `int` | Maximum number of training epochs. | `200` |
| `learning_rate` | `float` | Learning rate for the default AdamW optimizer. | `0.001` |
| `criterion` | `Callable[[Tensor, Tensor], Tensor] \| None` | Optional loss function; when `None`, a task-specific default is used. | `None` |
| `optimizer` | `torch.optim.Optimizer \| None` | Optional pre-built optimizer; when `None`, a default AdamW optimizer is created. | `None` |
| `batch_size` | `int \| None` | Mini-batch size. | `None` |
| `shuffle` | `bool` | Whether to shuffle the training data each epoch. | `True` |
| `ur_weight` | `float` | Non-negative weight for the uniform rule regularization term. | `0.0` |
| `ur_target` | `float \| None` | Target uniform activation for UR. | `None` |
| `verbose` | `bool \| int` | Verbosity level. | `False` |
| `x_val` | `Tensor \| None` | Optional validation features. | `None` |
| `y_val` | `Tensor \| None` | Optional validation targets. | `None` |
| `patience` | `int \| None` | Number of consecutive epochs without improvement before early stopping; set to `None` to disable. | `20` |
| `restore_best` | `bool` | If `True`, restore the best validation weights after training. | `True` |
| `weight_decay` | `float` | L2 weight decay applied to consequent parameters by the default AdamW optimizer. | `1e-08` |

Returns:
| Type | Description |
|---|---|
| `dict[str, Any]` | A dictionary of per-epoch loss lists. |

Raises:
| Type | Description |
|---|---|
| `ValueError` | If the shapes of `x`, `y`, `x_val`, or `y_val` are incompatible, or if `ur_weight < 0`, or if `ur_target` is outside its valid range. |
Source code in highfis/base.py
forward
forward_antecedents
Compute normalized rule strengths from model antecedents.
FSREAdaTSKClassifier
Bases: BaseTSKClassifier
FSRE-AdaTSK classifier with adaptive softmin antecedent and gated consequents.
FSRE-AdaTSK (Feature Selection and Rule Extraction) extends AdaTSK.
Reference
G. Xue, Q. Chang, J. Wang, K. Zhang and N. R. Pal, "An Adaptive Neuro-Fuzzy System With Integrated Feature Selection and Rule Extraction for High-Dimensional Classification Problems," in IEEE Transactions on Fuzzy Systems, vol. 31, no. 7, pp. 2167-2181, July 2023, doi: 10.1109/TFUZZ.2022.3220950.
Initialise the FSRE-AdaTSK classifier.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `input_mfs` | `Mapping[str, Sequence[MembershipFunction]]` | Mapping from feature name to a sequence of `MembershipFunction` objects. | *required* |
| `n_classes` | `int` | Number of output classes (must be ≥ 2). | *required* |
| `rule_base` | `str` | Rule-base construction strategy. | `'coco'` |
| `rules` | `Sequence[Sequence[int]] \| None` | Explicit rule antecedent indices; ignored when the rule base is constructed automatically. | `None` |
| `defuzzifier` | `nn.Module \| None` | Custom defuzzifier. Defaults to `SoftmaxLogDefuzzifier`. | `None` |
| `consequent_batch_norm` | `bool` | Batch normalisation on consequent inputs. | `False` |
| `eps` | `float \| None` | Numerical stability epsilon for the Ada-softmin operator. | `None` |
| `use_en_frb` | `bool` | Start directly from the Enhanced FRB (En-FRB) instead of CoCo-FRB. | `False` |

Raises:
| Type | Description |
|---|---|
| `ValueError` | If any argument is invalid. |
Source code in highfis/models.py
expand_to_en_frb
Switch the rule layer to an Enhanced Fuzzy Rule Base for RE phase.
Source code in highfis/models.py
fit
Train the model with optional early stopping.
When `x_val` and `y_val` are provided, the model evaluates a
task-specific metric (via `_evaluate_validation`) after every
epoch and applies early stopping when the metric has not improved for
`patience` consecutive epochs.
By default the best model weights from validation are restored
(`restore_best=True`).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Training features. | *required* |
| `y` | `Tensor` | Training targets. | *required* |
| `epochs` | `int` | Maximum number of training epochs. | `200` |
| `learning_rate` | `float` | Learning rate for the default AdamW optimizer. | `0.001` |
| `criterion` | `Callable[[Tensor, Tensor], Tensor] \| None` | Optional loss function; when `None`, a task-specific default is used. | `None` |
| `optimizer` | `torch.optim.Optimizer \| None` | Optional pre-built optimizer; when `None`, a default AdamW optimizer is created. | `None` |
| `batch_size` | `int \| None` | Mini-batch size. | `None` |
| `shuffle` | `bool` | Whether to shuffle the training data each epoch. | `True` |
| `ur_weight` | `float` | Non-negative weight for the uniform rule regularization term. | `0.0` |
| `ur_target` | `float \| None` | Target uniform activation for UR. | `None` |
| `verbose` | `bool \| int` | Verbosity level. | `False` |
| `x_val` | `Tensor \| None` | Optional validation features. | `None` |
| `y_val` | `Tensor \| None` | Optional validation targets. | `None` |
| `patience` | `int \| None` | Number of consecutive epochs without improvement before early stopping; set to `None` to disable. | `20` |
| `restore_best` | `bool` | If `True`, restore the best validation weights after training. | `True` |
| `weight_decay` | `float` | L2 weight decay applied to consequent parameters by the default AdamW optimizer. | `1e-08` |

Returns:
| Type | Description |
|---|---|
| `dict[str, Any]` | A dictionary of per-epoch loss lists. |

Raises:
| Type | Description |
|---|---|
| `ValueError` | If the shapes of `x`, `y`, `x_val`, or `y_val` are incompatible, or if `ur_weight < 0`, or if `ur_target` is outside its valid range. |
Source code in highfis/base.py
fit_finetune
Fine-tune with no gates — plain TSK consequent (eq. 5).
fit_fs
Train the FS phase: only feature gates M(λ_d) are active (eq. 21).
fit_re
Expand to En-FRB and train the RE phase: only rule gates M(θ_r) active (eq. 22).
Source code in highfis/models.py
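The FS and RE phases above both rely on multiplicative gates: a learnable scalar per feature (`M(λ_d)`) or per rule (`M(θ_r)`) that scales its target toward zero to prune it. The sketch below illustrates the feature-gate idea only; a plain sigmoid is used as a stand-in for the paper's modulator `M(·)`, and `gate_features` is not a highfis function.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gate_features(x, feature_logits):
    """FS-phase idea: scale each input dimension by a gate in (0, 1);
    gates driven toward 0 effectively discard that feature."""
    return x * sigmoid(feature_logits)

x = np.array([[1.0, 2.0, 3.0]])
g = gate_features(x, np.array([10.0, 0.0, -10.0]))  # open / half / closed
```

During training only the gate parameters of the active phase receive gradients; features (or rules) whose gates collapse toward zero are then removed before the gate-free fine-tuning phase.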
forward
forward_antecedents
Compute normalized rule strengths from model antecedents.
predict
predict_proba
FSREAdaTSKRegressor
Bases: BaseTSKRegressor
FSRE-AdaTSK regressor with adaptive softmin antecedent and gated consequents.
FSRE-AdaTSK (Feature Selection and Rule Extraction) extends AdaTSK.
Reference
G. Xue, Q. Chang, J. Wang, K. Zhang and N. R. Pal, "An Adaptive Neuro-Fuzzy System With Integrated Feature Selection and Rule Extraction for High-Dimensional Classification Problems," in IEEE Transactions on Fuzzy Systems, vol. 31, no. 7, pp. 2167-2181, July 2023, doi: 10.1109/TFUZZ.2022.3220950.
Initialise the FSRE-AdaTSK regressor.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `input_mfs` | `Mapping[str, Sequence[MembershipFunction]]` | Mapping from feature name to a sequence of `MembershipFunction` objects. | *required* |
| `rule_base` | `str` | Rule-base construction strategy. | `'coco'` |
| `rules` | `Sequence[Sequence[int]] \| None` | Explicit rule antecedent indices; ignored when the rule base is constructed automatically. | `None` |
| `defuzzifier` | `nn.Module \| None` | Custom defuzzifier. Defaults to `SoftmaxLogDefuzzifier`. | `None` |
| `consequent_batch_norm` | `bool` | Batch normalisation on consequent inputs. | `False` |
| `eps` | `float \| None` | Numerical stability epsilon for the Ada-softmin operator. | `None` |
| `use_en_frb` | `bool` | Start directly from the Enhanced FRB (En-FRB). | `False` |
Source code in highfis/models.py
expand_to_en_frb
Switch the rule layer to an Enhanced Fuzzy Rule Base for RE phase.
Source code in highfis/models.py
fit
Train the model with optional early stopping.
When `x_val` and `y_val` are provided, the model evaluates a
task-specific metric (via `_evaluate_validation`) after every
epoch and applies early stopping when the metric has not improved for
`patience` consecutive epochs.
By default the best model weights from validation are restored
(`restore_best=True`).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Training features. | *required* |
| `y` | `Tensor` | Training targets. | *required* |
| `epochs` | `int` | Maximum number of training epochs. | `200` |
| `learning_rate` | `float` | Learning rate for the default AdamW optimizer. | `0.001` |
| `criterion` | `Callable[[Tensor, Tensor], Tensor] \| None` | Optional loss function; when `None`, a task-specific default is used. | `None` |
| `optimizer` | `torch.optim.Optimizer \| None` | Optional pre-built optimizer; when `None`, a default AdamW optimizer is created. | `None` |
| `batch_size` | `int \| None` | Mini-batch size. | `None` |
| `shuffle` | `bool` | Whether to shuffle the training data each epoch. | `True` |
| `ur_weight` | `float` | Non-negative weight for the uniform rule regularization term. | `0.0` |
| `ur_target` | `float \| None` | Target uniform activation for UR. | `None` |
| `verbose` | `bool \| int` | Verbosity level. | `False` |
| `x_val` | `Tensor \| None` | Optional validation features. | `None` |
| `y_val` | `Tensor \| None` | Optional validation targets. | `None` |
| `patience` | `int \| None` | Number of consecutive epochs without improvement before early stopping; set to `None` to disable. | `20` |
| `restore_best` | `bool` | If `True`, restore the best validation weights after training. | `True` |
| `weight_decay` | `float` | L2 weight decay applied to consequent parameters by the default AdamW optimizer. | `1e-08` |

Returns:
| Type | Description |
|---|---|
| `dict[str, Any]` | A dictionary of per-epoch loss lists. |

Raises:
| Type | Description |
|---|---|
| `ValueError` | If the shapes of `x`, `y`, `x_val`, or `y_val` are incompatible, or if `ur_weight < 0`, or if `ur_target` is outside its valid range. |
Source code in highfis/base.py
fit_finetune
Fine-tune with no gates — plain TSK consequent (eq. 5).
fit_fs
Train the FS phase: only feature gates M(λ_d) are active (eq. 21).
fit_re
Expand to En-FRB and train the RE phase: only rule gates M(θ_r) active (eq. 22).
Source code in highfis/models.py
forward
forward_antecedents
Compute normalized rule strengths from model antecedents.
HDFISMinClassifier
Bases: BaseTSKClassifier
HDFIS-min classifier with frozen antecedents and minimum aggregation.
HDFIS-min uses the minimum T-norm in the antecedent and only optimizes consequent parameters, which avoids the nondifferentiability of the minimum operator during training.
References
G. Xue, J. Wang, K. Zhang and N. R. Pal, "High-Dimensional Fuzzy Inference Systems," in IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 54, no. 1, pp. 507-519, Jan. 2024, doi: 10.1109/TSMC.2023.3311475.
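The minimum t-norm makes each rule only as strong as its weakest antecedent clause, and because `min` routes the gradient through a single clause per rule, HDFIS-min keeps the antecedents frozen and trains only the consequents. A minimal NumPy sketch of the firing computation (illustrative layout, not the highfis internals):

```python
import numpy as np

def min_rule_strengths(memberships):
    """Minimum-t-norm firing with sum-based normalization.

    memberships: array of shape (n_rules, D); each rule's strength is
    its weakest clause, then strengths are normalized over rules.
    """
    w = memberships.min(axis=-1)
    return w / w.sum()

# Rule 0 has one weak clause (0.2), so rule 1 dominates:
w = min_rule_strengths(np.array([[0.9, 0.2], [0.5, 0.5]]))
```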
Initialize the HDFIS-min classifier.
Source code in highfis/models.py
fit
Train the model with optional early stopping.
When `x_val` and `y_val` are provided, the model evaluates a
task-specific metric (via `_evaluate_validation`) after every
epoch and applies early stopping when the metric has not improved for
`patience` consecutive epochs.
By default the best model weights from validation are restored
(`restore_best=True`).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Training features. | *required* |
| `y` | `Tensor` | Training targets. | *required* |
| `epochs` | `int` | Maximum number of training epochs. | `200` |
| `learning_rate` | `float` | Learning rate for the default AdamW optimizer. | `0.001` |
| `criterion` | `Callable[[Tensor, Tensor], Tensor] \| None` | Optional loss function; when `None`, a task-specific default is used. | `None` |
| `optimizer` | `torch.optim.Optimizer \| None` | Optional pre-built optimizer; when `None`, a default AdamW optimizer is created. | `None` |
| `batch_size` | `int \| None` | Mini-batch size. | `None` |
| `shuffle` | `bool` | Whether to shuffle the training data each epoch. | `True` |
| `ur_weight` | `float` | Non-negative weight for the uniform rule regularization term. | `0.0` |
| `ur_target` | `float \| None` | Target uniform activation for UR. | `None` |
| `verbose` | `bool \| int` | Verbosity level. | `False` |
| `x_val` | `Tensor \| None` | Optional validation features. | `None` |
| `y_val` | `Tensor \| None` | Optional validation targets. | `None` |
| `patience` | `int \| None` | Number of consecutive epochs without improvement before early stopping; set to `None` to disable. | `20` |
| `restore_best` | `bool` | If `True`, restore the best validation weights after training. | `True` |
| `weight_decay` | `float` | L2 weight decay applied to consequent parameters by the default AdamW optimizer. | `1e-08` |

Returns:
| Type | Description |
|---|---|
| `dict[str, Any]` | A dictionary of per-epoch loss lists. |

Raises:
| Type | Description |
|---|---|
| `ValueError` | If the shapes of `x`, `y`, `x_val`, or `y_val` are incompatible, or if `ur_weight < 0`, or if `ur_target` is outside its valid range. |
Source code in highfis/base.py
forward
forward_antecedents
Compute normalized rule strengths from model antecedents.
predict
predict_proba
HDFISMinRegressor
Bases: BaseTSKRegressor
HDFIS-min regressor with frozen antecedents and minimum aggregation.
HDFIS-min uses the minimum T-norm in the antecedent and only optimizes consequent parameters, which avoids the nondifferentiability of the minimum operator during training.
References
G. Xue, J. Wang, K. Zhang and N. R. Pal, "High-Dimensional Fuzzy Inference Systems," in IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 54, no. 1, pp. 507-519, Jan. 2024, doi: 10.1109/TSMC.2023.3311475.
Initialize the HDFIS-min regressor.
Source code in highfis/models.py
fit
Train the model with optional early stopping.
When `x_val` and `y_val` are provided, the model evaluates a
task-specific metric (via `_evaluate_validation`) after every
epoch and applies early stopping when the metric has not improved for
`patience` consecutive epochs.
By default the best model weights from validation are restored
(`restore_best=True`).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Training features. | *required* |
| `y` | `Tensor` | Training targets. | *required* |
| `epochs` | `int` | Maximum number of training epochs. | `200` |
| `learning_rate` | `float` | Learning rate for the default AdamW optimizer. | `0.001` |
| `criterion` | `Callable[[Tensor, Tensor], Tensor] \| None` | Optional loss function; when `None`, a task-specific default is used. | `None` |
| `optimizer` | `torch.optim.Optimizer \| None` | Optional pre-built optimizer; when `None`, a default AdamW optimizer is created. | `None` |
| `batch_size` | `int \| None` | Mini-batch size. | `None` |
| `shuffle` | `bool` | Whether to shuffle the training data each epoch. | `True` |
| `ur_weight` | `float` | Non-negative weight for the uniform rule regularization term. | `0.0` |
| `ur_target` | `float \| None` | Target uniform activation for UR. | `None` |
| `verbose` | `bool \| int` | Verbosity level. | `False` |
| `x_val` | `Tensor \| None` | Optional validation features. | `None` |
| `y_val` | `Tensor \| None` | Optional validation targets. | `None` |
| `patience` | `int \| None` | Number of consecutive epochs without improvement before early stopping; set to `None` to disable. | `20` |
| `restore_best` | `bool` | If `True`, restore the best validation weights after training. | `True` |
| `weight_decay` | `float` | L2 weight decay applied to consequent parameters by the default AdamW optimizer. | `1e-08` |

Returns:
| Type | Description |
|---|---|
| `dict[str, Any]` | A dictionary of per-epoch loss lists. |

Raises:
| Type | Description |
|---|---|
| `ValueError` | If the shapes of `x`, `y`, `x_val`, or `y_val` are incompatible, or if `ur_weight < 0`, or if `ur_target` is outside its valid range. |
Source code in highfis/base.py
forward
forward_antecedents
Compute normalized rule strengths from model antecedents.
HDFISProdClassifier
Bases: BaseTSKClassifier
HDFIS-prod classifier with dimension-dependent Gaussian MFs.
HDFIS-prod combines the standard product T-norm with a dimension-dependent Gaussian membership function (DMF) to avoid numeric underflow in very high-dimensional feature spaces while preserving first-order TSK consequents.
References
G. Xue, J. Wang, K. Zhang and N. R. Pal, "High-Dimensional Fuzzy Inference Systems," in IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 54, no. 1, pp. 507-519, Jan. 2024, doi: 10.1109/TSMC.2023.3311475.
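The underflow problem the dimension-dependent MF addresses is easy to reproduce: a plain product of thousands of membership values collapses to exactly `0.0` in float64, while a log-domain (or `1/D`-tempered) formulation stays usable. This snippet only demonstrates the failure mode; the DMF's actual parameterization follows the paper, not this sketch.

```python
import numpy as np

D = 2000                         # a high-dimensional input
mu = np.full(D, 0.5)             # plausible per-dimension memberships

naive = np.prod(mu)              # plain product t-norm: underflows to 0.0
log_w = np.log(mu).sum()         # log-domain firing stays finite
gmean = np.exp(log_w / D)        # a 1/D exponent restores a usable scale
```

With every rule's naive firing at `0.0`, the subsequent normalization `w_r / Σw` is undefined, which is why high-dimensional variants rework either the membership function or the aggregation.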
Initialize the HDFIS-prod classifier.
Source code in highfis/models.py
fit
Train the model with optional early stopping.
When `x_val` and `y_val` are provided, the model evaluates a
task-specific metric (via `_evaluate_validation`) after every
epoch and applies early stopping when the metric has not improved for
`patience` consecutive epochs.
By default the best model weights from validation are restored
(`restore_best=True`).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Training features. | *required* |
| `y` | `Tensor` | Training targets. | *required* |
| `epochs` | `int` | Maximum number of training epochs. | `200` |
| `learning_rate` | `float` | Learning rate for the default AdamW optimizer. | `0.001` |
| `criterion` | `Callable[[Tensor, Tensor], Tensor] \| None` | Optional loss function; when `None`, a task-specific default is used. | `None` |
| `optimizer` | `torch.optim.Optimizer \| None` | Optional pre-built optimizer; when `None`, a default AdamW optimizer is created. | `None` |
| `batch_size` | `int \| None` | Mini-batch size. | `None` |
| `shuffle` | `bool` | Whether to shuffle the training data each epoch. | `True` |
| `ur_weight` | `float` | Non-negative weight for the uniform rule regularization term. | `0.0` |
| `ur_target` | `float \| None` | Target uniform activation for UR. | `None` |
| `verbose` | `bool \| int` | Verbosity level. | `False` |
| `x_val` | `Tensor \| None` | Optional validation features. | `None` |
| `y_val` | `Tensor \| None` | Optional validation targets. | `None` |
| `patience` | `int \| None` | Number of consecutive epochs without improvement before early stopping; set to `None` to disable. | `20` |
| `restore_best` | `bool` | If `True`, restore the best validation weights after training. | `True` |
| `weight_decay` | `float` | L2 weight decay applied to consequent parameters by the default AdamW optimizer. | `1e-08` |

Returns:
| Type | Description |
|---|---|
| `dict[str, Any]` | A dictionary of per-epoch loss lists. |

Raises:
| Type | Description |
|---|---|
| `ValueError` | If the shapes of `x`, `y`, `x_val`, or `y_val` are incompatible, or if `ur_weight < 0`, or if `ur_target` is outside its valid range. |
Source code in highfis/base.py
forward
forward_antecedents
Compute normalized rule strengths from model antecedents.
predict
predict_proba
HDFISProdRegressor
Bases: BaseTSKRegressor
HDFIS-prod regressor with dimension-dependent Gaussian MFs.
HDFIS-prod combines the standard product T-norm with a dimension-dependent Gaussian membership function (DMF) to avoid numeric underflow in very high-dimensional feature spaces while preserving first-order TSK consequents.
References
G. Xue, J. Wang, K. Zhang and N. R. Pal, "High-Dimensional Fuzzy Inference Systems," in IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 54, no. 1, pp. 507-519, Jan. 2024, doi: 10.1109/TSMC.2023.3311475.
Initialize the HDFIS-prod regressor.
Source code in highfis/models.py
fit
Train the model with optional early stopping.
When `x_val` and `y_val` are provided, the model evaluates a
task-specific metric (via `_evaluate_validation`) after every
epoch and applies early stopping when the metric has not improved for
`patience` consecutive epochs.
By default the best model weights from validation are restored
(`restore_best=True`).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Training features. | *required* |
| `y` | `Tensor` | Training targets. | *required* |
| `epochs` | `int` | Maximum number of training epochs. | `200` |
| `learning_rate` | `float` | Learning rate for the default AdamW optimizer. | `0.001` |
| `criterion` | `Callable[[Tensor, Tensor], Tensor] \| None` | Optional loss function; when `None`, a task-specific default is used. | `None` |
| `optimizer` | `torch.optim.Optimizer \| None` | Optional pre-built optimizer; when `None`, a default AdamW optimizer is created. | `None` |
| `batch_size` | `int \| None` | Mini-batch size. | `None` |
| `shuffle` | `bool` | Whether to shuffle the training data each epoch. | `True` |
| `ur_weight` | `float` | Non-negative weight for the uniform rule regularization term. | `0.0` |
| `ur_target` | `float \| None` | Target uniform activation for UR. | `None` |
| `verbose` | `bool \| int` | Verbosity level. | `False` |
| `x_val` | `Tensor \| None` | Optional validation features. | `None` |
| `y_val` | `Tensor \| None` | Optional validation targets. | `None` |
| `patience` | `int \| None` | Number of consecutive epochs without improvement before early stopping; set to `None` to disable. | `20` |
| `restore_best` | `bool` | If `True`, restore the best validation weights after training. | `True` |
| `weight_decay` | `float` | L2 weight decay applied to consequent parameters by the default AdamW optimizer. | `1e-08` |

Returns:
| Type | Description |
|---|---|
| `dict[str, Any]` | A dictionary of per-epoch loss lists. |

Raises:
| Type | Description |
|---|---|
| `ValueError` | If the shapes of `x`, `y`, `x_val`, or `y_val` are incompatible, or if `ur_weight < 0`, or if `ur_target` is outside its valid range. |
Source code in highfis/base.py
forward
forward_antecedents
Compute normalized rule strengths from model antecedents.
HTSKClassifier
Bases: BaseTSKClassifier
HTSK classifier for high-dimensional TSK inference.
HTSK replaces the standard product t-norm with a geometric mean over membership values and performs rule normalization in log-space.
References
Y. Cui, D. Wu and Y. Xu, "Curse of Dimensionality for TSK Fuzzy Neural Networks: Explanation and Solutions," 2021 International Joint Conference on Neural Networks (IJCNN), Shenzhen, China, 2021, pp. 1-8, doi: 10.1109/IJCNN52387.2021.9534265.
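As a concrete illustration of the `softmax(log(w^{1/D}))` rule from the overview, here is a minimal pure-Python sketch of the HTSK firing-strength computation (an illustration only, not the library's implementation, which operates on batched tensors with learned membership functions):

```python
import math

def htsk_rule_strengths(memberships):
    """Normalized HTSK rule strengths: softmax over mean log-memberships.

    `memberships` is a list of rules, each a list of per-dimension
    membership values in (0, 1].
    """
    # z_r = log(w_r^(1/D)) = mean_d log(mu_rd): the geometric-mean t-norm
    # evaluated directly in the log domain, so it never underflows.
    z = [sum(math.log(mu) for mu in rule) / len(rule) for rule in memberships]
    # Softmax with the max subtracted for numerical stability.
    m = max(z)
    e = [math.exp(v - m) for v in z]
    total = sum(e)
    return [v / total for v in e]

strengths = htsk_rule_strengths([[0.9, 0.8, 0.7], [0.2, 0.3, 0.4]])
```

Because the geometric mean divides by the dimensionality `D`, the log-domain values stay on a comparable scale regardless of how many input features contribute to each rule.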
Initialise the HTSK classifier.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `input_mfs` | `Mapping[str, Sequence[MembershipFunction]]` | Mapping from feature name to a sequence of `MembershipFunction` objects. | *required* |
| `n_classes` | `int` | Number of output classes (must be ≥ 2). | *required* |
| `rule_base` | `str` | Rule-base construction strategy. | `'cartesian'` |
| `t_norm` | `str` | Antecedent aggregation operator name. | `'gmean'` |
| `t_norm_fn` | `TNormFn \| None` | Optional custom t-norm callable; overrides `t_norm`. | `None` |
| `rules` | `Sequence[Sequence[int]] \| None` | Explicit rule antecedent indices. | `None` |
| `defuzzifier` | `nn.Module \| None` | Custom defuzzifier module. Defaults to `SoftmaxLogDefuzzifier`. | `None` |
| `consequent_batch_norm` | `bool` | Apply batch normalisation to the consequent layer inputs. | `False` |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If a constructor argument is invalid (e.g. `n_classes` < 2). |
Source code in highfis/models.py
fit

Train the model with optional early stopping.

When `x_val` and `y_val` are provided, the model evaluates a task-specific metric (via `_evaluate_validation`) after every epoch and applies early stopping when the metric has not improved for `patience` consecutive epochs. When `restore_best=True` (the default), the best model weights found on validation are restored after training.
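The patience logic described above can be sketched in isolation (a hypothetical stand-alone loop, not the library's `fit` implementation; `train_step` and `evaluate` are placeholder callables):

```python
def train_with_early_stopping(train_step, evaluate, epochs=200, patience=20):
    """Patience-based early stopping sketch.

    `train_step(epoch)` runs one training epoch; `evaluate(epoch)` returns a
    validation metric where lower is better. Both are hypothetical stand-ins.
    """
    best_metric = float("inf")
    best_epoch = -1
    history = []
    for epoch in range(epochs):
        train_step(epoch)
        metric = evaluate(epoch)
        history.append(metric)
        if metric < best_metric:
            # New best: record it (a real trainer would also snapshot
            # the model weights here to support restore_best).
            best_metric, best_epoch = metric, epoch
        elif patience is not None and epoch - best_epoch >= patience:
            break  # no improvement for `patience` consecutive epochs
    return best_metric, best_epoch, history
```

With a metric sequence that plateaus, the loop stops `patience` epochs after the last improvement rather than running all `epochs` iterations.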
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Training features. | *required* |
| `y` | `Tensor` | Training targets. | *required* |
| `epochs` | `int` | Maximum number of training epochs. | `200` |
| `learning_rate` | `float` | Learning rate for the default AdamW optimizer. | `0.001` |
| `criterion` | `Callable[[Tensor, Tensor], Tensor] \| None` | Optional loss function; defaults to a task-specific loss. | `None` |
| `optimizer` | `torch.optim.Optimizer \| None` | Optional pre-built optimizer. | `None` |
| `batch_size` | `int \| None` | Mini-batch size. | `None` |
| `shuffle` | `bool` | If `True`, shuffle the training data each epoch. | `True` |
| `ur_weight` | `float` | Non-negative weight for the uniform rule regularization term. | `0.0` |
| `ur_target` | `float \| None` | Target uniform activation for UR. | `None` |
| `verbose` | `bool \| int` | Verbosity level. | `False` |
| `x_val` | `Tensor \| None` | Optional validation features. | `None` |
| `y_val` | `Tensor \| None` | Optional validation targets. | `None` |
| `patience` | `int \| None` | Number of consecutive epochs without improvement before early stopping; `None` disables it. | `20` |
| `restore_best` | `bool` | If `True`, restore the best validation weights after training. | `True` |
| `weight_decay` | `float` | L2 weight decay applied to consequent parameters by the default AdamW optimizer. | `1e-08` |

Returns:

| Type | Description |
|---|---|
| `dict[str, Any]` | A dictionary of per-epoch loss lists. |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If the shapes of `x`, `y`, `x_val`, or `y_val` are incompatible, or if `ur_weight` is negative, or if `ur_target` is outside the valid range. |
Source code in highfis/base.py
forward
forward_antecedents
Compute normalized rule strengths from model antecedents.
predict
predict_proba
HTSKRegressor
Bases: BaseTSKRegressor
HTSK regressor for high-dimensional TSK inference.
HTSK replaces the standard product t-norm with a geometric mean over membership values and performs rule normalization in log-space.
References
Y. Cui, D. Wu and Y. Xu, "Curse of Dimensionality for TSK Fuzzy Neural Networks: Explanation and Solutions," 2021 International Joint Conference on Neural Networks (IJCNN), Shenzhen, China, 2021, pp. 1-8, doi: 10.1109/IJCNN52387.2021.9534265.
Initialise the HTSK regressor.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `input_mfs` | `Mapping[str, Sequence[MembershipFunction]]` | Mapping from feature name to a sequence of `MembershipFunction` objects. | *required* |
| `rule_base` | `str` | Rule-base construction strategy. | `'cartesian'` |
| `t_norm` | `str` | Antecedent aggregation operator name. | `'gmean'` |
| `t_norm_fn` | `TNormFn \| None` | Optional custom t-norm callable. | `None` |
| `rules` | `Sequence[Sequence[int]] \| None` | Explicit rule antecedent indices. | `None` |
| `defuzzifier` | `nn.Module \| None` | Custom defuzzifier. Defaults to `SoftmaxLogDefuzzifier`. | `None` |
| `consequent_batch_norm` | `bool` | Batch normalisation on consequent inputs. | `False` |
Source code in highfis/models.py
fit

Train the model with optional early stopping.

When `x_val` and `y_val` are provided, the model evaluates a task-specific metric (via `_evaluate_validation`) after every epoch and applies early stopping when the metric has not improved for `patience` consecutive epochs. When `restore_best=True` (the default), the best model weights found on validation are restored after training.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Training features. | *required* |
| `y` | `Tensor` | Training targets. | *required* |
| `epochs` | `int` | Maximum number of training epochs. | `200` |
| `learning_rate` | `float` | Learning rate for the default AdamW optimizer. | `0.001` |
| `criterion` | `Callable[[Tensor, Tensor], Tensor] \| None` | Optional loss function; defaults to a task-specific loss. | `None` |
| `optimizer` | `torch.optim.Optimizer \| None` | Optional pre-built optimizer. | `None` |
| `batch_size` | `int \| None` | Mini-batch size. | `None` |
| `shuffle` | `bool` | If `True`, shuffle the training data each epoch. | `True` |
| `ur_weight` | `float` | Non-negative weight for the uniform rule regularization term. | `0.0` |
| `ur_target` | `float \| None` | Target uniform activation for UR. | `None` |
| `verbose` | `bool \| int` | Verbosity level. | `False` |
| `x_val` | `Tensor \| None` | Optional validation features. | `None` |
| `y_val` | `Tensor \| None` | Optional validation targets. | `None` |
| `patience` | `int \| None` | Number of consecutive epochs without improvement before early stopping; `None` disables it. | `20` |
| `restore_best` | `bool` | If `True`, restore the best validation weights after training. | `True` |
| `weight_decay` | `float` | L2 weight decay applied to consequent parameters by the default AdamW optimizer. | `1e-08` |

Returns:

| Type | Description |
|---|---|
| `dict[str, Any]` | A dictionary of per-epoch loss lists. |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If the shapes of `x`, `y`, `x_val`, or `y_val` are incompatible, or if `ur_weight` is negative, or if `ur_target` is outside the valid range. |
Source code in highfis/base.py
forward
forward_antecedents
Compute normalized rule strengths from model antecedents.
LogTSKClassifier
Bases: BaseTSKClassifier
LogTSK classifier with inverse-log normalization of log-domain rules.
Firing strengths are normalized using the inverse-log formula, which is immune to softmax saturation in high-dimensional input spaces.
References
Y. Cui, D. Wu and Y. Xu, "Curse of Dimensionality for TSK Fuzzy Neural Networks: Explanation and Solutions," 2021 International Joint Conference on Neural Networks (IJCNN), Shenzhen, China, 2021, pp. 1-8, doi: 10.1109/IJCNN52387.2021.9534265.
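The saturation problem this model targets is easy to reproduce: with the product t-norm, firing strengths shrink exponentially with input dimensionality, so ordinary normalization operates on values near machine zero, while log-domain representations stay well scaled. A small numeric demonstration (pure Python, not library code):

```python
import math

def product_firing_strength(mu, n_dims):
    """Product t-norm of `n_dims` identical membership values `mu`."""
    return mu ** n_dims

# Even with fairly strong per-dimension memberships, the raw firing
# strength collapses toward zero as dimensionality grows.
low_d = product_firing_strength(0.9, 10)      # ~0.35
high_d = product_firing_strength(0.9, 2000)   # ~3e-92
# The same information in the log domain remains a modest, well-scaled
# number, which is what log-based normalization schemes exploit.
log_high_d = 2000 * math.log(0.9)             # ~ -210.7
```

This is why LogTSK keeps rule weights in the log domain and normalizes them there instead of exponentiating first.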
Initialise the LogTSK classifier.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `input_mfs` | `Mapping[str, Sequence[MembershipFunction]]` | Mapping from feature name to a sequence of `MembershipFunction` objects. | *required* |
| `n_classes` | `int` | Number of output classes (must be ≥ 2). | *required* |
| `rule_base` | `str` | Rule-base construction strategy. | `'cartesian'` |
| `t_norm` | `str` | Antecedent aggregation operator name. | `'prod'` |
| `t_norm_fn` | `TNormFn \| None` | Optional custom t-norm callable. | `None` |
| `rules` | `Sequence[Sequence[int]] \| None` | Explicit rule antecedent indices. | `None` |
| `defuzzifier` | `nn.Module \| None` | Custom defuzzifier. Defaults to `InvLogDefuzzifier`. | `None` |
| `consequent_batch_norm` | `bool` | Batch normalisation on consequent inputs. | `False` |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If a constructor argument is invalid (e.g. `n_classes` < 2). |
Source code in highfis/models.py
fit

Train the model with optional early stopping.

When `x_val` and `y_val` are provided, the model evaluates a task-specific metric (via `_evaluate_validation`) after every epoch and applies early stopping when the metric has not improved for `patience` consecutive epochs. When `restore_best=True` (the default), the best model weights found on validation are restored after training.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Training features. | *required* |
| `y` | `Tensor` | Training targets. | *required* |
| `epochs` | `int` | Maximum number of training epochs. | `200` |
| `learning_rate` | `float` | Learning rate for the default AdamW optimizer. | `0.001` |
| `criterion` | `Callable[[Tensor, Tensor], Tensor] \| None` | Optional loss function; defaults to a task-specific loss. | `None` |
| `optimizer` | `torch.optim.Optimizer \| None` | Optional pre-built optimizer. | `None` |
| `batch_size` | `int \| None` | Mini-batch size. | `None` |
| `shuffle` | `bool` | If `True`, shuffle the training data each epoch. | `True` |
| `ur_weight` | `float` | Non-negative weight for the uniform rule regularization term. | `0.0` |
| `ur_target` | `float \| None` | Target uniform activation for UR. | `None` |
| `verbose` | `bool \| int` | Verbosity level. | `False` |
| `x_val` | `Tensor \| None` | Optional validation features. | `None` |
| `y_val` | `Tensor \| None` | Optional validation targets. | `None` |
| `patience` | `int \| None` | Number of consecutive epochs without improvement before early stopping; `None` disables it. | `20` |
| `restore_best` | `bool` | If `True`, restore the best validation weights after training. | `True` |
| `weight_decay` | `float` | L2 weight decay applied to consequent parameters by the default AdamW optimizer. | `1e-08` |

Returns:

| Type | Description |
|---|---|
| `dict[str, Any]` | A dictionary of per-epoch loss lists. |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If the shapes of `x`, `y`, `x_val`, or `y_val` are incompatible, or if `ur_weight` is negative, or if `ur_target` is outside the valid range. |
Source code in highfis/base.py
forward
forward_antecedents
Compute normalized rule strengths from model antecedents.
predict
predict_proba
LogTSKRegressor
Bases: BaseTSKRegressor
LogTSK regressor with inverse-log normalization of log-domain rules.
Firing strengths are normalized using the inverse-log formula, which is immune to softmax saturation in high-dimensional input spaces.
References
Y. Cui, D. Wu and Y. Xu, "Curse of Dimensionality for TSK Fuzzy Neural Networks: Explanation and Solutions," 2021 International Joint Conference on Neural Networks (IJCNN), Shenzhen, China, 2021, pp. 1-8, doi: 10.1109/IJCNN52387.2021.9534265.
Initialise the LogTSK regressor.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `input_mfs` | `Mapping[str, Sequence[MembershipFunction]]` | Mapping from feature name to a sequence of `MembershipFunction` objects. | *required* |
| `rule_base` | `str` | Rule-base construction strategy. | `'cartesian'` |
| `t_norm` | `str` | Antecedent aggregation operator name. | `'prod'` |
| `t_norm_fn` | `TNormFn \| None` | Optional custom t-norm callable. | `None` |
| `rules` | `Sequence[Sequence[int]] \| None` | Explicit rule antecedent indices. | `None` |
| `defuzzifier` | `nn.Module \| None` | Custom defuzzifier. Defaults to `InvLogDefuzzifier`. | `None` |
| `consequent_batch_norm` | `bool` | Batch normalisation on consequent inputs. | `False` |
Source code in highfis/models.py
fit

Train the model with optional early stopping.

When `x_val` and `y_val` are provided, the model evaluates a task-specific metric (via `_evaluate_validation`) after every epoch and applies early stopping when the metric has not improved for `patience` consecutive epochs. When `restore_best=True` (the default), the best model weights found on validation are restored after training.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Training features. | *required* |
| `y` | `Tensor` | Training targets. | *required* |
| `epochs` | `int` | Maximum number of training epochs. | `200` |
| `learning_rate` | `float` | Learning rate for the default AdamW optimizer. | `0.001` |
| `criterion` | `Callable[[Tensor, Tensor], Tensor] \| None` | Optional loss function; defaults to a task-specific loss. | `None` |
| `optimizer` | `torch.optim.Optimizer \| None` | Optional pre-built optimizer. | `None` |
| `batch_size` | `int \| None` | Mini-batch size. | `None` |
| `shuffle` | `bool` | If `True`, shuffle the training data each epoch. | `True` |
| `ur_weight` | `float` | Non-negative weight for the uniform rule regularization term. | `0.0` |
| `ur_target` | `float \| None` | Target uniform activation for UR. | `None` |
| `verbose` | `bool \| int` | Verbosity level. | `False` |
| `x_val` | `Tensor \| None` | Optional validation features. | `None` |
| `y_val` | `Tensor \| None` | Optional validation targets. | `None` |
| `patience` | `int \| None` | Number of consecutive epochs without improvement before early stopping; `None` disables it. | `20` |
| `restore_best` | `bool` | If `True`, restore the best validation weights after training. | `True` |
| `weight_decay` | `float` | L2 weight decay applied to consequent parameters by the default AdamW optimizer. | `1e-08` |

Returns:

| Type | Description |
|---|---|
| `dict[str, Any]` | A dictionary of per-epoch loss lists. |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If the shapes of `x`, `y`, `x_val`, or `y_val` are incompatible, or if `ur_weight` is negative, or if `ur_target` is outside the valid range. |
Source code in highfis/base.py
forward
forward_antecedents
Compute normalized rule strengths from model antecedents.
TSKClassifier
Bases: BaseTSKClassifier
Vanilla TSK classifier with sum-based rule normalization.
The vanilla Takagi-Sugeno-Kang inference computes rule firing strengths with the product t-norm and normalizes them by their total sum.
References
T. Takagi and M. Sugeno, "Fuzzy identification of systems and its applications to modeling and control," in IEEE Transactions on Systems, Man, and Cybernetics, vol. SMC-15, no. 1, pp. 116-132, Jan.-Feb. 1985, doi: 10.1109/TSMC.1985.6313399.
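The `w_r / Σw` normalization from the overview amounts to the following (a minimal sketch with hypothetical helper names; in the real model each rule's consequent is an affine function of the input rather than a fixed scalar):

```python
def tsk_output(memberships, consequents):
    """Vanilla TSK inference sketch: product t-norm + sum normalization.

    `memberships[r]` holds per-dimension membership values for rule r;
    `consequents[r]` is that rule's (already evaluated) consequent output.
    """
    # w_r = prod_d mu_rd: the product t-norm over the rule's antecedents.
    w = []
    for rule in memberships:
        p = 1.0
        for mu in rule:
            p *= mu
        w.append(p)
    total = sum(w)
    # y = sum_r (w_r / sum_k w_k) * y_r: sum-normalized weighted average.
    return sum(wr / total * yr for wr, yr in zip(w, consequents))

y = tsk_output([[0.5, 0.5], [1.0, 0.25]], [2.0, 4.0])  # both rules fire equally
```

Here both rules have firing strength 0.25, so the output is the plain average of the two consequents.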
Initialise the vanilla TSK classifier.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `input_mfs` | `Mapping[str, Sequence[MembershipFunction]]` | Mapping from feature name to a sequence of `MembershipFunction` objects. | *required* |
| `n_classes` | `int` | Number of output classes (must be ≥ 2). | *required* |
| `rule_base` | `str` | Rule-base construction strategy. | `'cartesian'` |
| `t_norm` | `str` | Antecedent aggregation operator name. | `'prod'` |
| `t_norm_fn` | `TNormFn \| None` | Optional custom t-norm callable. | `None` |
| `rules` | `Sequence[Sequence[int]] \| None` | Explicit rule antecedent indices. | `None` |
| `defuzzifier` | `nn.Module \| None` | Custom defuzzifier. Defaults to `SumBasedDefuzzifier`. | `None` |
| `consequent_batch_norm` | `bool` | Batch normalisation on consequent inputs. | `False` |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If a constructor argument is invalid (e.g. `n_classes` < 2). |
Source code in highfis/models.py
fit

Train the model with optional early stopping.

When `x_val` and `y_val` are provided, the model evaluates a task-specific metric (via `_evaluate_validation`) after every epoch and applies early stopping when the metric has not improved for `patience` consecutive epochs. When `restore_best=True` (the default), the best model weights found on validation are restored after training.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Training features. | *required* |
| `y` | `Tensor` | Training targets. | *required* |
| `epochs` | `int` | Maximum number of training epochs. | `200` |
| `learning_rate` | `float` | Learning rate for the default AdamW optimizer. | `0.001` |
| `criterion` | `Callable[[Tensor, Tensor], Tensor] \| None` | Optional loss function; defaults to a task-specific loss. | `None` |
| `optimizer` | `torch.optim.Optimizer \| None` | Optional pre-built optimizer. | `None` |
| `batch_size` | `int \| None` | Mini-batch size. | `None` |
| `shuffle` | `bool` | If `True`, shuffle the training data each epoch. | `True` |
| `ur_weight` | `float` | Non-negative weight for the uniform rule regularization term. | `0.0` |
| `ur_target` | `float \| None` | Target uniform activation for UR. | `None` |
| `verbose` | `bool \| int` | Verbosity level. | `False` |
| `x_val` | `Tensor \| None` | Optional validation features. | `None` |
| `y_val` | `Tensor \| None` | Optional validation targets. | `None` |
| `patience` | `int \| None` | Number of consecutive epochs without improvement before early stopping; `None` disables it. | `20` |
| `restore_best` | `bool` | If `True`, restore the best validation weights after training. | `True` |
| `weight_decay` | `float` | L2 weight decay applied to consequent parameters by the default AdamW optimizer. | `1e-08` |

Returns:

| Type | Description |
|---|---|
| `dict[str, Any]` | A dictionary of per-epoch loss lists. |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If the shapes of `x`, `y`, `x_val`, or `y_val` are incompatible, or if `ur_weight` is negative, or if `ur_target` is outside the valid range. |
Source code in highfis/base.py
forward
forward_antecedents
Compute normalized rule strengths from model antecedents.
predict
predict_proba
TSKRegressor
Bases: BaseTSKRegressor
Vanilla TSK regressor with sum-based rule normalization.
The vanilla Takagi-Sugeno-Kang inference computes rule firing strengths with the product t-norm and normalizes them by their total sum.
References
T. Takagi and M. Sugeno, "Fuzzy identification of systems and its applications to modeling and control," in IEEE Transactions on Systems, Man, and Cybernetics, vol. SMC-15, no. 1, pp. 116-132, Jan.-Feb. 1985, doi: 10.1109/TSMC.1985.6313399.
Initialise the vanilla TSK regressor.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `input_mfs` | `Mapping[str, Sequence[MembershipFunction]]` | Mapping from feature name to a sequence of `MembershipFunction` objects. | *required* |
| `rule_base` | `str` | Rule-base construction strategy. | `'cartesian'` |
| `t_norm` | `str` | Antecedent aggregation operator name. | `'prod'` |
| `t_norm_fn` | `TNormFn \| None` | Optional custom t-norm callable. | `None` |
| `rules` | `Sequence[Sequence[int]] \| None` | Explicit rule antecedent indices. | `None` |
| `defuzzifier` | `nn.Module \| None` | Custom defuzzifier. Defaults to `SumBasedDefuzzifier`. | `None` |
| `consequent_batch_norm` | `bool` | Batch normalisation on consequent inputs. | `False` |
Source code in highfis/models.py
fit

Train the model with optional early stopping.

When `x_val` and `y_val` are provided, the model evaluates a task-specific metric (via `_evaluate_validation`) after every epoch and applies early stopping when the metric has not improved for `patience` consecutive epochs. When `restore_best=True` (the default), the best model weights found on validation are restored after training.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Training features. | *required* |
| `y` | `Tensor` | Training targets. | *required* |
| `epochs` | `int` | Maximum number of training epochs. | `200` |
| `learning_rate` | `float` | Learning rate for the default AdamW optimizer. | `0.001` |
| `criterion` | `Callable[[Tensor, Tensor], Tensor] \| None` | Optional loss function; defaults to a task-specific loss. | `None` |
| `optimizer` | `torch.optim.Optimizer \| None` | Optional pre-built optimizer. | `None` |
| `batch_size` | `int \| None` | Mini-batch size. | `None` |
| `shuffle` | `bool` | If `True`, shuffle the training data each epoch. | `True` |
| `ur_weight` | `float` | Non-negative weight for the uniform rule regularization term. | `0.0` |
| `ur_target` | `float \| None` | Target uniform activation for UR. | `None` |
| `verbose` | `bool \| int` | Verbosity level. | `False` |
| `x_val` | `Tensor \| None` | Optional validation features. | `None` |
| `y_val` | `Tensor \| None` | Optional validation targets. | `None` |
| `patience` | `int \| None` | Number of consecutive epochs without improvement before early stopping; `None` disables it. | `20` |
| `restore_best` | `bool` | If `True`, restore the best validation weights after training. | `True` |
| `weight_decay` | `float` | L2 weight decay applied to consequent parameters by the default AdamW optimizer. | `1e-08` |

Returns:

| Type | Description |
|---|---|
| `dict[str, Any]` | A dictionary of per-epoch loss lists. |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If the shapes of `x`, `y`, `x_val`, or `y_val` are incompatible, or if `ur_weight` is negative, or if `ur_target` is outside the valid range. |
Source code in highfis/base.py
forward
forward_antecedents
Compute normalized rule strengths from model antecedents.