Base
Base TSK model that factors out the common antecedent-defuzzification pipeline.
This module defines BaseTSK, the abstract foundation for all TSK fuzzy
models in highFIS. It factors out the shared antecedent pipeline,
defuzzifier, and training loop so concrete subclasses can focus on
task-specific consequent layers and loss criteria.
The forward pipeline executes four sequential steps:
- `highfis.layers.MembershipLayer` — evaluates membership functions for each input feature.
- `highfis.layers.RuleLayer` — computes rule firing strengths via a configurable rule base and T-norm.
- Defuzzifier — normalizes firing strengths to probability-like weights (default: `highfis.defuzzifiers.SoftmaxLogDefuzzifier`).
- ConsequentLayer — produces the final output from the inputs and the normalized rule weights.
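Taken together, the four steps are membership evaluation, T-norm combination, normalization, and a weighted consequent. The following is a self-contained sketch in plain Python, not the library's implementation: Gaussian membership functions, a geometric-mean T-norm, a softmax normalizer, and first-order linear consequents are illustrative choices (the real layers live in `highfis.layers` and `highfis.defuzzifiers`).

```python
import math

# Step 1: membership evaluation (illustrative Gaussian MFs).
def gaussian(x, mean, sigma):
    return math.exp(-0.5 * ((x - mean) / sigma) ** 2)

# Step 2: rule firing strength via a T-norm; the geometric mean
# mirrors the documented default T-norm name 'gmean'.
def gmean(values):
    prod = 1.0
    for v in values:
        prod *= v
    return prod ** (1.0 / len(values))

# Step 3: defuzzifier stand-in, normalizing strengths to
# probability-like weights that sum to 1.
def softmax(values):
    m = max(values)
    exps = [math.exp(v - m) for v in values]
    total = sum(exps)
    return [e / total for e in exps]

# Step 4: first-order consequents, one linear model per rule,
# blended by the normalized rule weights.
def tsk_forward(x, mfs, rules, coeffs):
    memberships = {feat: [gaussian(x[feat], m, s) for (m, s) in params]
                   for feat, params in mfs.items()}
    firing = [gmean([memberships[feat][i] for feat, i in rule])
              for rule in rules]
    weights = softmax(firing)
    feats = list(x.values())
    rule_out = [c[0] + sum(ci * xi for ci, xi in zip(c[1:], feats))
                for c in coeffs]
    return sum(w * r for w, r in zip(weights, rule_out))
```

With two rules over one feature, the output interpolates between the rules' linear models according to the normalized firing strengths.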
Concrete subclasses must implement:
- `BaseTSK._build_consequent_layer` — return the task-specific consequent module.
- `BaseTSK._default_criterion` — return the default loss function.
Optional overridable hooks:
- `BaseTSK._compute_loss` — customize target preparation or loss composition.
- `BaseTSK._evaluate_validation` — customize the validation metric used for early stopping.
BaseTSK
Bases: nn.Module
Abstract base for TSK fuzzy models.
Subclasses must implement `_build_consequent_layer` and
`_default_criterion`. Optionally override `_compute_loss`
and `_evaluate_validation` for task-specific logic.
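The required-vs-optional split can be illustrated with a minimal, library-free stand-in. The class names and method bodies below are hypothetical; only the method names come from the documentation.

```python
from abc import ABC, abstractmethod

class AbstractTSK(ABC):
    """Illustrative stand-in for the BaseTSK contract (not the real class)."""

    @abstractmethod
    def _build_consequent_layer(self):
        """Return the task-specific consequent module."""

    @abstractmethod
    def _default_criterion(self):
        """Return the default loss function for the task."""

    # Optional hook: subclasses may override to change loss composition.
    def _compute_loss(self, pred, target):
        return self._default_criterion()(pred, target)

class RegressionTSK(AbstractTSK):
    def _build_consequent_layer(self):
        return "linear-consequent"  # placeholder for a real module

    def _default_criterion(self):
        # Mean squared error over paired sequences.
        return lambda p, t: sum((a - b) ** 2 for a, b in zip(p, t)) / len(p)
```

Instantiating the abstract base directly raises `TypeError`; only subclasses implementing both abstract methods are concrete.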
Initialize the TSK pipeline layers.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `input_mfs` | `Mapping[str, Sequence[MembershipFunction]]` | Mapping from feature names to sequences of `MembershipFunction` instances. | required |
| `rule_base` | `str` | Rule-base construction strategy. Supported values include `'cartesian'`. | `'cartesian'` |
| `t_norm` | `str` | Built-in T-norm name, e.g. `'gmean'`. Ignored when `t_norm_fn` is provided. | `'gmean'` |
| `t_norm_fn` | `TNormFn \| None` | Optional custom T-norm callable. When provided, it takes precedence over `t_norm`. | `None` |
| `rules` | `Sequence[Sequence[int]] \| None` | Explicit rule index sequences; required for rule-base strategies that expect user-supplied rules. | `None` |
| `defuzzifier` | `nn.Module \| None` | Normalization module applied to raw rule firing strengths. Defaults to `SoftmaxLogDefuzzifier`. | `None` |
| `consequent_batch_norm` | `bool` | If `True`, apply batch normalization in the consequent layer. | `False` |
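A note on the default normalization: if `SoftmaxLogDefuzzifier` applies a softmax to log firing strengths (an assumption based on its name, not on the library source), the result reduces to plain sum-normalization for strictly positive strengths, since softmax(log s)_i = s_i / Σ_j s_j.

```python
import math

def softmax(values):
    m = max(values)
    exps = [math.exp(v - m) for v in values]
    total = sum(exps)
    return [e / total for e in exps]

def softmax_log_normalize(strengths):
    # softmax(log s)_i == s_i / sum(s): for positive firing
    # strengths this equals ordinary sum-normalization.
    return softmax([math.log(s) for s in strengths])
```
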
Raises:
| Type | Description |
|---|---|
| `ValueError` | If `input_mfs` is empty. |
Source code in highfis/base.py
fit
Train the model with optional early stopping.
When `x_val` and `y_val` are provided, the model evaluates a
task-specific metric (via `_evaluate_validation`) after every
epoch and applies early stopping when the metric has not improved for
`patience` consecutive epochs. When `restore_best=True` (the default),
the best model weights found during validation are restored after training.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Training features. | required |
| `y` | `Tensor` | Training targets. | required |
| `epochs` | `int` | Maximum number of training epochs. | `200` |
| `learning_rate` | `float` | Learning rate for the default AdamW optimizer. | `0.001` |
| `criterion` | `Callable[[Tensor, Tensor], Tensor] \| None` | Optional loss function. Defaults to `_default_criterion`. | `None` |
| `optimizer` | `torch.optim.Optimizer \| None` | Optional pre-built optimizer. When `None`, a default AdamW optimizer is used. | `None` |
| `batch_size` | `int \| None` | Mini-batch size. | `None` |
| `shuffle` | `bool` | If `True`, shuffle the training data each epoch. | `True` |
| `ur_weight` | `float` | Non-negative weight for the uniform rule regularization term. | `0.0` |
| `ur_target` | `float \| None` | Target uniform activation for UR. Must lie within the valid range. | `None` |
| `verbose` | `bool \| int` | Verbosity level. | `False` |
| `x_val` | `Tensor \| None` | Optional validation features. | `None` |
| `y_val` | `Tensor \| None` | Optional validation targets. | `None` |
| `patience` | `int \| None` | Number of consecutive epochs without improvement before early stopping. Set to `None` to disable. | `20` |
| `restore_best` | `bool` | If `True`, restore the best validation weights after training. | `True` |
| `weight_decay` | `float` | L2 weight decay applied to consequent parameters by the default AdamW optimizer. | `1e-08` |
Returns:
| Type | Description |
|---|---|
| `dict[str, Any]` | A dictionary containing per-epoch loss lists. |
Raises:
| Type | Description |
|---|---|
| `ValueError` | If shapes of `x`, `y`, `x_val`, or `y_val` are incompatible, or if `ur_weight < 0` or `ur_target` is outside the valid range. |
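The early-stopping behavior (`patience`, `restore_best`, and a returned history of per-epoch lists) can be sketched generically. The training and evaluation callables below are placeholders, and only the stopping logic mirrors the documented behavior; lower validation metric is assumed to be better.

```python
import copy

def fit_with_early_stopping(train_epoch, evaluate, get_state, set_state,
                            epochs=200, patience=20, restore_best=True):
    """Generic patience loop: stop after `patience` epochs without
    improvement; optionally restore the best weights seen."""
    history = {"train_loss": [], "val_metric": []}
    best_metric = float("inf")  # assume lower is better
    best_state = None
    bad_epochs = 0
    for _ in range(epochs):
        history["train_loss"].append(train_epoch())
        metric = evaluate()
        history["val_metric"].append(metric)
        if metric < best_metric:
            best_metric = metric
            best_state = copy.deepcopy(get_state())  # snapshot best weights
            bad_epochs = 0
        else:
            bad_epochs += 1
            if patience is not None and bad_epochs >= patience:
                break  # early stop
    if restore_best and best_state is not None:
        set_state(best_state)
    return history
```

Setting `patience=None` disables early stopping and runs the full `epochs` budget, matching the table above.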
Source code in highfis/base.py
forward
forward_antecedents
Compute normalized rule strengths from model antecedents.