ex_fuzzy.eval_tools.FuzzyEvaluator#

class ex_fuzzy.eval_tools.FuzzyEvaluator(fl_classifier)[source]#

Bases: object

Comprehensive evaluation and analysis tool for fuzzy rule-based classifiers.

This class provides a complete suite of evaluation methods for fuzzy classification models, including performance metrics, rule analysis, statistical testing, and visualization capabilities. It is designed to work with fuzzy classifiers that follow the BaseFuzzyRulesClassifier interface.

fl_classifier#

The fuzzy classifier to evaluate.

Type:

evf.BaseFuzzyRulesClassifier

Example

>>> evaluator = FuzzyEvaluator(trained_classifier)
>>> predictions = evaluator.predict(X_test)
>>> accuracy = evaluator.get_metric('accuracy_score', X_test, y_test)
>>> report = evaluator.eval_fuzzy_model(X_train, y_train, X_test, y_test)

Note

The FuzzyEvaluator assumes the classifier has been fitted before evaluation. It provides both individual metric computation and comprehensive evaluation reports.
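The following sketch shows the intended workflow in context. It is a minimal, illustrative example that assumes trained_classifier is an evf.BaseFuzzyRulesClassifier already fitted on the training split; the data loading and splitting use ordinary scikit-learn utilities, and only methods documented on this page are called.

>>> from sklearn.datasets import load_iris
>>> from sklearn.model_selection import train_test_split
>>> from ex_fuzzy.eval_tools import FuzzyEvaluator
>>>
>>> X, y = load_iris(return_X_y=True)
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
>>>
>>> # trained_classifier: an evf.BaseFuzzyRulesClassifier fitted on (X_train, y_train)
>>> evaluator = FuzzyEvaluator(trained_classifier)
>>> y_pred = evaluator.predict(X_test)                      # unified prediction interface
>>> acc = evaluator.get_metric('accuracy_score', X_test, y_test)
>>> evaluator.eval_fuzzy_model(X_train, y_train, X_test, y_test,
...                            plot_rules=False, plot_partitions=False)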

__init__(fl_classifier)[source]#

Initialize the FuzzyEvaluator with a fitted fuzzy classifier.

Parameters:

fl_classifier (evf.BaseFuzzyRulesClassifier) – A fitted fuzzy rule-based classifier that implements the standard fit/predict interface.

predict(X)[source]#

Generate predictions for input data using the wrapped fuzzy classifier.

This method provides a unified interface for prediction that can be used with scikit-learn evaluation metrics and other analysis tools.

Parameters:

X (np.array) – Feature data for prediction with shape (n_samples, n_features)

Returns:

Predicted class labels with shape (n_samples,)

Return type:

np.array
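Because predict returns a plain array of labels, its output can be passed directly to scikit-learn evaluation utilities. A brief sketch, assuming evaluator already wraps a fitted classifier and X_test / y_test are available:

>>> from sklearn.metrics import confusion_matrix, classification_report
>>> y_pred = evaluator.predict(X_test)           # shape (n_samples,)
>>> cm = confusion_matrix(y_test, y_pred)        # rows: true classes, columns: predicted classes
>>> print(classification_report(y_test, y_pred))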

get_metric(metric, X_true, y_true, **kwargs)[source]#

Compute a specific classification metric for the fuzzy model.

This method provides a unified interface for computing various scikit-learn classification metrics on the fuzzy model predictions. It handles class label conversion and error handling for unsupported metrics.

Parameters:
  • metric (str) – Name of the sklearn.metrics function to compute (e.g., ‘accuracy_score’, ‘f1_score’)

  • X_true (np.array) – Feature data for prediction

  • y_true (np.array) – True class labels

  • **kwargs – Additional arguments for the specific metric function

Returns:

The computed metric value, or an error message string if the requested metric is not available in sklearn.metrics

Return type:

float or str

Example

>>> evaluator = FuzzyEvaluator(classifier)
>>> accuracy = evaluator.get_metric('accuracy_score', X_test, y_test)
>>> f1 = evaluator.get_metric('f1_score', X_test, y_test, average='weighted')

Note

The method automatically handles string class labels by converting them to numeric indices based on the classifier’s classes_names attribute.
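Since the metric is looked up by name in sklearn.metrics, several metrics can be collected in one pass. A short sketch; the metric names and keyword arguments below are ordinary scikit-learn arguments, not additions to the ex_fuzzy API:

>>> metric_specs = {
...     'accuracy_score': {},
...     'balanced_accuracy_score': {},
...     'f1_score': {'average': 'macro'},
...     'matthews_corrcoef': {},
... }
>>> results = {name: evaluator.get_metric(name, X_test, y_test, **kwargs)
...            for name, kwargs in metric_specs.items()}
>>> for name, value in results.items():
...     print(f'{name}: {value}')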

eval_fuzzy_model(X_train, y_train, X_test, y_test, plot_rules=True, print_rules=True, plot_partitions=True, return_rules=False, print_accuracy=True, print_matthew=True, export_path=None, bootstrap_results_print=True)[source]#

Comprehensive evaluation of the fuzzy rule-based model.

This method provides a complete evaluation workflow including performance metrics, rule visualization, partition plotting, and statistical analysis. It combines multiple evaluation aspects into a single convenient interface.

Parameters:
  • X_train (np.array) – Training feature data

  • y_train (np.array) – Training target labels

  • X_test (np.array) – Test feature data

  • y_test (np.array) – Test target labels

  • plot_rules (bool, optional) – Whether to generate rule visualization plots. Defaults to True.

  • print_rules (bool, optional) – Whether to print rule text representations. Defaults to True.

  • plot_partitions (bool, optional) – Whether to plot fuzzy variable partitions. Defaults to True.

  • return_rules (bool, optional) – Whether to return rule text in output. Defaults to False.

  • print_accuracy (bool, optional) – Whether to print accuracy metrics. Defaults to True.

  • print_matthew (bool, optional) – Whether to print Matthews correlation coefficient. Defaults to True.

  • export_path (str, optional) – Path to export rule visualization plots. Defaults to None.

  • bootstrap_results_print (bool, optional) – Whether to perform bootstrap statistical analysis. Defaults to True.

Returns:

Rule text representation if return_rules=True, otherwise None

Return type:

str or None

Example

>>> evaluator = FuzzyEvaluator(classifier)
>>> report = evaluator.eval_fuzzy_model(X_train, y_train, X_test, y_test,
...                                     plot_rules=True, print_rules=True)

Note

This method handles string class labels automatically and provides comprehensive output including performance metrics and rule analysis.
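For a non-interactive run (for example, inside a script or CI job), the plotting and printing flags can be turned off and the rule base captured as text. A sketch using only the parameters documented above; the output file name is illustrative:

>>> rules_text = evaluator.eval_fuzzy_model(
...     X_train, y_train, X_test, y_test,
...     plot_rules=False, print_rules=False, plot_partitions=False,
...     print_accuracy=False, print_matthew=False,
...     bootstrap_results_print=False, return_rules=True)
>>> with open('rules.txt', 'w') as f:   # persist the textual rule base
...     _ = f.write(rules_text)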