Metrics
Evaluation metrics for highFIS estimators.
This module provides a small, sklearn-style evaluation API for both regression and classification tasks.
Classification Metrics
- `accuracy`: standard accuracy score
- `balanced_accuracy`: average recall over classes
- `precision_macro`: macro-averaged precision
- `recall_macro`: macro-averaged recall
- `f1_macro`: macro-averaged F1 score
- `precision_micro`: micro-averaged precision
- `recall_micro`: micro-averaged recall
- `f1_micro`: micro-averaged F1 score
- `confusion_matrix`: confusion matrix by class
- `classes`: sorted union of true and predicted labels
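To illustrate the macro/micro distinction above, here is a minimal sketch of macro- and micro-averaged precision using the usual sklearn-style definitions; it is an illustration, not highFIS's actual implementation.

```python
def precision_per_class(y_true, y_pred):
    """Precision for each class in the union of true and predicted labels."""
    classes = sorted(set(y_true) | set(y_pred))
    out = {}
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if p == c and t == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if p == c and t != c)
        out[c] = tp / (tp + fp) if (tp + fp) else 0.0
    return out

def precision_macro(y_true, y_pred):
    # Macro: unweighted mean of per-class precision, so rare classes
    # count as much as common ones.
    per_class = precision_per_class(y_true, y_pred)
    return sum(per_class.values()) / len(per_class)

def precision_micro(y_true, y_pred):
    # Micro: pool all individual decisions; for single-label
    # classification this collapses to plain accuracy.
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return correct / len(y_true)
```

The macro variant weights every class equally, while the micro variant weights every sample equally; they diverge exactly when the class distribution is imbalanced.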
Regression Metrics
- `mse`: mean squared error
- `mae`: mean absolute error
- `rmse`: root mean squared error
- `r2`: coefficient of determination
- `median_absolute_error`: median absolute error
- `mean_bias_error`: average prediction bias
- `max_error`: maximum absolute error
- `std_error`: standard deviation of residuals
- `explained_variance`: explained variance score
- `mape`: mean absolute percentage error
- `smape`: symmetric mean absolute percentage error
- `msle`: mean squared logarithmic error
- `pearson`: Pearson correlation coefficient
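A few of the regression metrics above, written out as plain-Python reference implementations for clarity; the module's own versions may differ in edge-case and weighting behavior.

```python
import math

def mse(y_true, y_pred):
    # Mean squared error: average of squared residuals.
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    # Root mean squared error: same units as the target.
    return math.sqrt(mse(y_true, y_pred))

def mean_bias_error(y_true, y_pred):
    # Positive when the model over-predicts on average,
    # negative when it under-predicts.
    return sum(p - t for t, p in zip(y_true, y_pred)) / len(y_true)

def smape(y_true, y_pred):
    # Symmetric MAPE in the common 0..200% formulation.
    return 100.0 * sum(
        abs(p - t) / ((abs(t) + abs(p)) / 2)
        for t, p in zip(y_true, y_pred)
    ) / len(y_true)
```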
Notes
- The module exports `compute_metrics` and the helper classes `ClassificationMetrics` and `RegressionMetrics`.
- `compute_metrics` validates metric names and returns only the requested subset.
- All metrics accept raw array-like inputs and flatten non-1D arrays.
ClassificationMetrics
Standard classification metrics.
accuracy
staticmethod
Return the standard accuracy score.
balanced_accuracy
staticmethod
Return the balanced accuracy score.
classes
staticmethod
Return the sorted set of predicted and true classes.
Source code in highfis/metrics.py
confusion_matrix
staticmethod
Return the confusion matrix for the predictions.
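The confusion matrix is laid out over the sorted union of true and predicted labels (matching the `classes` helper above). A minimal sketch, assuming the conventional rows-are-true, columns-are-predicted layout:

```python
def confusion_matrix(y_true, y_pred):
    """Return (classes, matrix) where matrix[i][j] counts samples with
    true class classes[i] predicted as classes[j]."""
    classes = sorted(set(y_true) | set(y_pred))
    index = {c: i for i, c in enumerate(classes)}
    mat = [[0] * len(classes) for _ in classes]
    for t, p in zip(y_true, y_pred):
        mat[index[t]][index[p]] += 1
    return classes, mat
```

Correct predictions accumulate on the diagonal; off-diagonal cells show which classes are confused with which.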
f1_macro
staticmethod
Return macro-averaged F1 score.
f1_micro
staticmethod
Return micro-averaged F1 score.
precision_macro
staticmethod
Return macro-averaged precision.
precision_micro
staticmethod
Return micro-averaged precision.
recall_macro
staticmethod
Return macro-averaged recall.
recall_micro
staticmethod
Return micro-averaged recall.
RegressionMetrics
Standard regression metrics.
explained_variance
staticmethod
Return the explained variance score.
mae
staticmethod
Return the mean absolute error.
mape
staticmethod
Return the mean absolute percentage error.
max_error
staticmethod
Return the maximum absolute error.
mean_bias_error
staticmethod
Return mean bias error (prediction minus truth).
median_absolute_error
staticmethod
Return median absolute error.
mse
staticmethod
Return the mean squared error.
msle
staticmethod
Return mean squared logarithmic error.
pearson
staticmethod
Return Pearson correlation coefficient.
r2
staticmethod
Return the coefficient of determination.
rmse
staticmethod
Return the root mean squared error.
smape
staticmethod
Return symmetric mean absolute percentage error.
std_error
staticmethod
Return the standard deviation of the errors.
compute_metrics
Compute a set of named evaluation metrics.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `task` | `Task` | | *required* |
| `y_true` | `Any` | Ground-truth labels or targets. | *required* |
| `y_pred` | `Any` | Predicted labels or values. | *required* |
| `sample_weight` | `Any \| None` | Optional sample weights. | `None` |
| `metrics` | `list[str] \| None` | Optional list of metric names to compute. | `None` |
Returns:
| Type | Description |
|---|---|
| `dict[str, Any]` | Dictionary mapping metric names to scalar float results. |
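The validate-and-subset behavior described above can be sketched as follows. The metric registries here are illustrative stand-ins with only a few entries, and `sample_weight` is accepted but ignored; highFIS's real implementation will differ.

```python
def compute_metrics(task, y_true, y_pred, sample_weight=None, metrics=None):
    """Validate requested metric names for `task` and return only that subset."""
    registry = {
        "regression": {
            "mse": lambda t, p: sum((a - b) ** 2 for a, b in zip(t, p)) / len(t),
            "mae": lambda t, p: sum(abs(a - b) for a, b in zip(t, p)) / len(t),
        },
        "classification": {
            "accuracy": lambda t, p: sum(a == b for a, b in zip(t, p)) / len(t),
        },
    }[task]
    # Default to every metric registered for the task.
    names = list(registry) if metrics is None else metrics
    unknown = set(names) - set(registry)
    if unknown:
        raise ValueError(f"unknown metrics: {sorted(unknown)}")
    # Return scalar floats keyed by metric name, in the requested subset only.
    return {name: float(registry[name](y_true, y_pred)) for name in names}
```

Validating names up front means a typo fails loudly with a `ValueError` instead of silently dropping a metric from the result dictionary.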