Regressor API¶
ANFISRegressor provides the high-level façade for regression workflows,
combining membership-function generation, rule construction, and optimization
with familiar estimator semantics.
anfis_toolbox.regressor.ANFISRegressor ¶
ANFISRegressor(
*,
n_mfs: int = 3,
mf_type: str = "gaussian",
init: str | None = "grid",
overlap: float = 0.5,
margin: float = 0.1,
inputs_config: Mapping[Any, Any] | None = None,
random_state: int | None = None,
optimizer: str | BaseTrainer | type[BaseTrainer] | None = "hybrid",
optimizer_params: Mapping[str, Any] | None = None,
learning_rate: float | None = None,
epochs: int | None = None,
batch_size: int | None = None,
shuffle: bool | None = None,
verbose: bool = False,
loss: LossFunction | str | None = None,
rules: Sequence[Sequence[int]] | None = None,
)
Bases: BaseEstimatorLike, FittedMixin, RegressorMixinLike
Adaptive Neuro-Fuzzy regressor with a scikit-learn style API.
The estimator manages membership-function synthesis, rule construction, and
trainer selection so you can focus on calling fit, predict,
and evaluate with familiar NumPy-like data structures.
Examples:¶
>>> reg = ANFISRegressor()
>>> reg.fit(X, y)
ANFISRegressor(...)
>>> reg.predict(X[:1])
array([...])
Parameters¶
n_mfs : int, default=3
Default number of membership functions allocated to each input when
they are inferred from data.
mf_type : str, default="gaussian"
Membership function family used for automatically generated
membership functions. Supported names mirror the ones exported in
anfis_toolbox.membership (e.g. "gaussian", "triangular", "bell").
init : {"grid", "fcm", "random", None}, default="grid"
Initialization strategy employed when synthesizing membership
functions from the training data. None falls back to "grid".
overlap : float, default=0.5
Desired overlap between neighbouring membership functions during
automatic construction.
margin : float, default=0.1
Extra range added around the observed feature minima/maxima during
automatic initialization.
inputs_config : Mapping, optional
Per-input overrides for membership configuration. Keys may be feature
names (when X is a pandas.DataFrame), integer indices, or "x{i}"
aliases. Values may be:
* a dict with keys among {"n_mfs", "mf_type", "init", "overlap",
"margin", "range", "membership_functions", "mfs"};
* a list/tuple of MembershipFunction instances for full control;
* None to keep the defaults.
random_state : int, optional
Seed propagated to stochastic components such as FCM-based
initialization and optimizers that rely on randomness.
optimizer : str | BaseTrainer | type[BaseTrainer] | None, default="hybrid"
Trainer identifier or instance used for fitting. String aliases are
looked up in TRAINER_REGISTRY. None defaults to "hybrid".
optimizer_params : Mapping, optional
Extra keyword arguments forwarded to the trainer constructor when a
string identifier or class is supplied.
learning_rate, epochs, batch_size, shuffle, verbose : optional
Convenience hyper-parameters injected into the selected trainer when it
supports them. shuffle accepts False to disable randomisation.
loss : str | LossFunction, optional
Custom loss forwarded to trainers exposing a loss parameter.
None keeps the trainer default (typically mean squared error).
rules : Sequence[Sequence[int]] | None, optional
Optional explicit fuzzy rule definitions. Each rule lists the
membership-function index for every input. None uses the full Cartesian
product of configured membership functions.
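A configuration sketch using the parameters above. It is a minimal, illustrative example on synthetic data; the per-input settings demonstrate the inputs_config format and are not recommendations.
import numpy as np
from anfis_toolbox.regressor import ANFISRegressor

# Synthetic regression data: two features, smooth target surface.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2

# Per-input overrides: five bell-shaped MFs on feature 0,
# FCM-based initialization on feature 1 (integer keys select columns).
reg = ANFISRegressor(
    n_mfs=3,
    mf_type="gaussian",
    inputs_config={
        0: {"n_mfs": 5, "mf_type": "bell"},
        1: {"init": "fcm"},
    },
    optimizer="hybrid",
    epochs=50,
    random_state=42,
)
reg.fit(X, y)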
Source code in anfis_toolbox/regressor.py
__repr__ ¶
Return a formatted representation summarising configuration and fitted artefacts.
evaluate ¶
evaluate(
X: ArrayLike,
y: ArrayLike,
*,
return_dict: bool = True,
print_results: bool = True,
) -> Mapping[str, MetricValue] | None
Evaluate predictive performance on a dataset.
Parameters¶
X : array-like
Evaluation inputs with shape (n_samples, n_features).
y : array-like
Ground-truth targets aligned with X.
return_dict : bool, default=True
When True, return the computed metric dictionary. When
False, only perform side effects (such as printing) and return
None.
print_results : bool, default=True
Log a human-readable summary to stdout. Set to False to
suppress printing.
Returns:¶
Mapping[str, MetricValue] | None
Regression metrics including mean squared error, root mean squared
error, mean absolute error, and R² when return_dict is
True; otherwise None.
Raises:¶
RuntimeError
If called before fit.
ValueError
When X and y disagree on the sample count.
Source code in anfis_toolbox/regressor.py
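A usage sketch for evaluate, assuming a fitted ANFISRegressor instance reg and held-out arrays X_val and y_val; the exact metric key names are determined by the library and are not shown here.
# Print a human-readable summary and keep the metric mapping.
metrics = reg.evaluate(X_val, y_val)

# Silent evaluation: no printing, only the returned mapping.
metrics = reg.evaluate(X_val, y_val, print_results=False)
for name, value in metrics.items():
    print(name, value)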
fit ¶
fit(
X: ArrayLike,
y: ArrayLike,
*,
validation_data: tuple[ndarray, ndarray] | None = None,
validation_frequency: int = 1,
verbose: bool | None = None,
**fit_params: Any,
) -> ANFISRegressor
Fit the ANFIS regressor on labelled data.
Parameters¶
X : array-like
Training inputs with shape (n_samples, n_features).
y : array-like
Target values aligned with X. One-dimensional vectors are
accepted and reshaped internally.
validation_data : tuple[np.ndarray, np.ndarray], optional
Optional validation split supplied to the underlying trainer. Both
arrays must already be numeric and share the same row count.
validation_frequency : int, default=1
Frequency (in epochs) at which validation loss is evaluated when
validation_data is provided.
verbose : bool, optional
Override the estimator's verbose flag for this fit call. When
supplied, the value is stored on the estimator and forwarded to the
trainer configuration.
**fit_params : Any
Arbitrary keyword arguments forwarded to the trainer fit
method.
Returns:¶
ANFISRegressor
Reference to self for fluent-style chaining.
Raises:¶
ValueError
If X and y contain a different number of samples.
ValueError
If validation frequency is less than one.
TypeError
If the configured trainer returns an object that is not a
dict-like training history.
Source code in anfis_toolbox/regressor.py
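A fitting sketch with a hold-out validation split. The synthetic data and the 80/20 split are illustrative; only parameters documented above are used.
import numpy as np
from anfis_toolbox.regressor import ANFISRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2

# Illustrative 80/20 train/validation split.
n_train = int(0.8 * len(X))
X_train, y_train = X[:n_train], y[:n_train]
X_val, y_val = X[n_train:], y[n_train:]

reg = ANFISRegressor(optimizer="hybrid", epochs=100)
reg.fit(
    X_train,
    y_train,
    validation_data=(X_val, y_val),  # numeric arrays with matching row counts
    validation_frequency=5,          # check validation loss every 5 epochs
    verbose=True,                    # per-call override of the estimator's verbose flag
)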
get_rules ¶
Return the fuzzy rule index combinations used by the fitted model.
Returns:¶
tuple[tuple[int, ...], ...]
Immutable tuple containing one tuple per fuzzy rule, where each inner
tuple lists the membership index chosen for each input.
Raises:¶
RuntimeError
If invoked before the estimator is fitted.
Source code in anfis_toolbox/regressor.py
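A brief sketch of inspecting the rule base of a fitted estimator reg; the counts in the comments assume the default exhaustive rule set.
rules = reg.get_rules()  # tuple of tuples, one per fuzzy rule
print(len(rules))        # e.g. 5 * 3 = 15 rules for the two-input configuration sketched earlier
print(rules[0])          # membership index chosen for each input, e.g. (0, 0)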
load classmethod ¶
Load a pickled estimator from filepath and validate its type.
Source code in anfis_toolbox/regressor.py
predict ¶
Predict regression targets for the provided samples.
Parameters¶
X : array-like
Samples to evaluate. Accepts one-dimensional arrays (interpreted as
a single sample) or matrices with shape (n_samples, n_features).
Returns:¶
np.ndarray
Vector of predictions with shape (n_samples,).
Raises:¶
RuntimeError
If the estimator has not been fitted yet.
ValueError
When the supplied samples do not match the fitted feature count.
Source code in anfis_toolbox/regressor.py
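A prediction sketch for a fitted estimator reg, reusing the validation arrays from the fit sketch above; note that a one-dimensional input is treated as a single sample.
y_pred = reg.predict(X_val)      # shape (n_samples,)
single = reg.predict(X_val[0])   # 1-D input is interpreted as one sample
print(y_pred.shape, single.shape)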
save ¶
Serialize this estimator (and its fitted state) using pickle.
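A persistence sketch. The docstrings above only state that save pickles the estimator and that load reads from a filepath, so the single path argument used below is an assumption; the file name is illustrative.
from anfis_toolbox.regressor import ANFISRegressor

reg.save("anfis_regressor.pkl")                        # assumed: save(filepath) pickles the fitted estimator
restored = ANFISRegressor.load("anfis_regressor.pkl")  # classmethod; validates the unpickled type
print(restored.predict(X_val[:5]))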