Advanced Classifiers#

Fuzzy Classification Algorithms for Ex-Fuzzy Library

This module provides high-level classification algorithms that combine rule mining, genetic optimization, and fuzzy inference for pattern classification tasks. The classifiers implement sophisticated two-stage optimization approaches that first discover candidate rules through data mining and then optimize rule combinations using evolutionary algorithms.

Main Components:
  • RuleMineClassifier: Two-stage classifier combining rule mining and genetic optimization

  • DoubleGo classifier: Advanced multi-objective genetic optimization

  • Integrated preprocessing: Automatic linguistic variable generation

  • Performance optimization: Efficient rule evaluation and selection

  • Scikit-learn compatibility: Standard fit/predict interface

Key Features:
  • Automatic feature fuzzification with optimal partitioning

  • Rule mining with support, confidence, and lift thresholds

  • Multi-objective optimization balancing accuracy and interpretability

  • Support for imbalanced datasets with specialized fitness functions

  • Cross-validation based fitness evaluation for robust models

  • Integration with various fuzzy set types (Type-1, Type-2, GT2)

The classifiers are designed to be both highly accurate and interpretable, making them suitable for applications where understanding the decision process is as important as predictive performance.
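The mining thresholds mentioned above (support, confidence, lift) can be sketched on crisp 0/1 rule firings. This is an illustration only: the library itself evaluates these scores on fuzzy membership degrees, and the toy masks below are not its internal representation.

```python
# Support / confidence / lift of a candidate rule "antecedent -> class",
# computed on crisp boolean firings for clarity (the library uses fuzzy
# memberships instead of 0/1 values).
import numpy as np

def rule_metrics(antecedent_mask, class_mask):
    """Return (support, confidence, lift) of the rule 'antecedent -> class'."""
    support = np.mean(antecedent_mask & class_mask)      # P(A and C)
    confidence = support / np.mean(antecedent_mask)      # P(C | A)
    lift = confidence / np.mean(class_mask)              # P(C | A) / P(C)
    return support, confidence, lift

# Toy data: 8 samples; the rule fires on 4 of them, the class holds on 5.
fires = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=bool)
klass = np.array([1, 1, 1, 0, 1, 1, 0, 0], dtype=bool)

support, confidence, lift = rule_metrics(fires, klass)
print(f"support={support:.3f} confidence={confidence:.3f} lift={lift:.3f}")
```

A rule is kept as a candidate only when all three scores clear their thresholds; a lift above 1 means the antecedent makes the class more likely than its base rate.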

class ex_fuzzy.classifiers.RuleMineClassifier(nRules=30, nAnts=4, fuzzy_type=None, tolerance=0.0, verbose=False, n_class=None, runner=1, linguistic_variables=None)[source]#

Bases: ClassifierMixin

A classifier that works by mining a set of candidate rules with a minimum support, confidence and lift, and then selecting the optimal combination of those rules with a genetic algorithm.

__init__(nRules=30, nAnts=4, fuzzy_type=None, tolerance=0.0, verbose=False, n_class=None, runner=1, linguistic_variables=None)[source]#

Initializes the optimizer with the given parameters.

Parameters:
  • nRules (int) – number of rules to optimize.

  • nAnts (int) – max number of antecedents to use.

  • fuzzy_type (FUZZY_SET) – the kind of fuzzy set used; a FUZZY_SET enum value from the fuzzy_sets module.

  • tolerance (float) – tolerance for the support/dominance score of the rules.

  • verbose – if True, prints the progress of the optimization.

  • n_class (int) – number of classes in the problem. If None (default) the classifier will compute it empirically.

  • runner (int) – number of threads to use.

  • linguistic_variables (list[fuzzyVariable]) – linguistic variables per antecedent.
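For intuition, the linguistic variables passed per antecedent can be pictured as a triangular Type-1 partition of each feature's range. The triangle arithmetic below is a conceptual sketch, not the library's fuzzyVariable API; when linguistic_variables is None, the library generates such partitions automatically.

```python
# Three triangular Type-1 fuzzy sets ("low", "medium", "high") spread over
# a feature's observed range. This illustrates what a linguistic variable
# encodes; the library builds its own fuzzyVariable objects.
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership with feet at a and c and peak at b."""
    return float(np.clip(min((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0))

lo, hi = 0.0, 10.0                       # observed feature range
mid = (lo + hi) / 2
partitions = {
    "low":    (lo - (hi - lo), lo, mid),
    "medium": (lo, mid, hi),
    "high":   (mid, hi, hi + (hi - lo)),
}

x = 8.0
memberships = {name: triangular(x, *p) for name, p in partitions.items()}
print(memberships)  # x = 8.0 belongs mostly to "high", partly to "medium"
```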

fit(X, y, n_gen=30, pop_size=50, **kwargs)[source]#

Trains the model with the given data.

Parameters:
  • X (array) – samples to train.

  • y (array) – labels for each sample.

  • n_gen (int) – number of generations to compute in the genetic optimization.

  • pop_size (int) – number of subjects per generation.

  • kwargs – additional parameters for the genetic optimization. See fit method in BaseRuleBaseClassifier.

predict(X)[source]#

Predict the corresponding class for each sample.

Parameters:

X (array) – samples to predict.

Returns:

a class for each sample.

Return type:

array
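The second stage of RuleMineClassifier, choosing an optimal combination of the mined rules, can be sketched with a toy genetic search over rule subsets. The rule encoding, voting scheme, and fitness below are simplified stand-ins for illustration, not the library's actual implementation.

```python
# Toy genetic selection of a rule subset that maximises training accuracy.
# Candidate rules are represented only by their firing strengths and the
# class each one votes for; the real library encodes much more.
import numpy as np

rng = np.random.default_rng(0)

# 6 candidate rules x 20 samples: firing strength of each rule per sample,
# plus the class each rule votes for, plus the true labels.
firing = rng.random((6, 20))
rule_class = np.array([0, 0, 1, 1, 0, 1])
y = rng.integers(0, 2, 20)

def predict(mask):
    """Winner-takes-all vote among the selected rules."""
    if not mask.any():
        return np.zeros(len(y), dtype=int)
    sel = firing[mask]
    best_rule = sel.argmax(axis=0)       # strongest selected rule per sample
    return rule_class[mask][best_rule]

def fitness(mask):
    return np.mean(predict(mask) == y)

# Tiny GA: population of boolean subset masks, elitism plus bit-flip mutation.
pop = rng.random((10, 6)) > 0.5
for _ in range(30):
    scores = np.array([fitness(m) for m in pop])
    elite = pop[scores.argsort()[-5:]]   # keep the 5 best subsets
    children = elite.copy()
    children ^= rng.random(children.shape) < 0.1   # mutate ~10% of the bits
    pop = np.vstack([elite, children])

best = max(pop, key=fitness)
print("selected rules:", np.flatnonzero(best), "accuracy:", fitness(best))
```

The library's optimizer additionally balances accuracy against rule-base size and interpretability, rather than fitting accuracy alone.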

internal_classifier()[source]#
class ex_fuzzy.classifiers.FuzzyRulesClassifier(nRules=30, nAnts=4, fuzzy_type=None, tolerance=0.0, verbose=False, n_class=None, runner=1, expansion_factor=1, linguistic_variables=None)[source]#

Bases: ClassifierMixin

A classifier that works by performing a double optimization process. First, it creates a candidate rule base using genetic optimization, then uses it as the basis to obtain a better one that satisfies the constraints on the number of antecedents and rules.

__init__(nRules=30, nAnts=4, fuzzy_type=None, tolerance=0.0, verbose=False, n_class=None, runner=1, expansion_factor=1, linguistic_variables=None)[source]#

Initializes the optimizer with the given parameters.

Parameters:
  • nRules (int) – number of rules to optimize.

  • nAnts (int) – max number of antecedents to use.

  • fuzzy_type (FUZZY_SET) – the kind of fuzzy set used; a FUZZY_SET enum value from the fuzzy_sets module.

  • tolerance (float) – tolerance for the dominance score of the rules.

  • verbose – if True, prints the progress of the optimization.

  • n_class (int) – number of classes in the problem. If None (default) the classifier will compute it empirically.

  • runner (int) – number of threads to use.

  • expansion_factor (int) – if > 1, the first optimization step searches for expansion_factor * nRules rules, so that the search space for the second step is larger.

  • linguistic_variables (list[fuzzyVariable]) – linguistic variables per antecedent.

fit(X, y, n_gen=30, pop_size=50, checkpoints=0, **kwargs)[source]#

Trains the model with the given data.

Parameters:
  • X (array) – samples to train.

  • y (array) – labels for each sample.

  • n_gen (int) – number of generations to compute in the genetic optimization.

  • pop_size (int) – number of subjects per generation.

  • checkpoints (int) – if greater than 0, saves the best subject every checkpoints generations to a text file.

  • kwargs – additional parameters for the genetic optimization. See fit method in BaseRuleBaseClassifier.

predict(X)[source]#

Predict the corresponding class for each sample.

Parameters:

X (array) – samples to predict.

Returns:

a class for each sample.

Return type:

array

internal_classifier()[source]#
class ex_fuzzy.classifiers.RuleFineTuneClassifier(nRules=30, nAnts=4, fuzzy_type=None, tolerance=0.0, verbose=False, n_class=None, runner=1, expansion_factor=1, linguistic_variables=None)[source]#

Bases: ClassifierMixin

A classifier that works by mining a set of candidate rules with a minimum support, then using a two-step genetic optimization that chooses the optimal combination of those rules and fine-tunes them.

__init__(nRules=30, nAnts=4, fuzzy_type=None, tolerance=0.0, verbose=False, n_class=None, runner=1, expansion_factor=1, linguistic_variables=None)[source]#

Initializes the optimizer with the given parameters.

Parameters:
  • nRules (int) – number of rules to optimize.

  • nAnts (int) – max number of antecedents to use.

  • fuzzy_type (FUZZY_SET) – the kind of fuzzy set used; a FUZZY_SET enum value from the fuzzy_sets module.

  • tolerance (float) – tolerance for the dominance score of the rules.

  • verbose – if True, prints the progress of the optimization.

  • n_class (int) – number of classes in the problem. If None (default) the classifier will compute it empirically.

  • runner (int) – number of threads to use.

  • expansion_factor (int) – if > 1, the first optimization step searches for expansion_factor * nRules rules, so that the search space for the second step is larger.

  • linguistic_variables (list[fuzzyVariable]) – linguistic variables per antecedent.

fit(X, y, n_gen=30, pop_size=50, checkpoints=0, **kwargs)[source]#

Trains the model with the given data.

Parameters:
  • X (array) – samples to train.

  • y (array) – labels for each sample.

  • n_gen (int) – number of generations to compute in the genetic optimization.

  • pop_size (int) – number of subjects per generation.

  • checkpoints (int) – if greater than 0, saves the best subject every checkpoints generations to a text file.

  • kwargs – additional parameters for the genetic optimization. See fit method in BaseRuleBaseClassifier.

predict(X)[source]#

Predict the corresponding class for each sample.

Parameters:

X (array) – samples to predict.

Returns:

a class for each sample.

Return type:

array
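The fine-tuning stage of RuleFineTuneClassifier can be pictured as adjusting membership-function parameters after the rule structure has been fixed. The toy 1-D random search below stands in for the genetic tuner and shares nothing with the library's internals; the feature data, edge shape, and search schedule are all assumptions for illustration.

```python
# Toy fine-tuning: with the rule "x is high -> class 1" fixed, nudge the
# membership function's position parameter b to maximise training accuracy.
import numpy as np

rng = np.random.default_rng(1)

# Two overlapping 1-D classes: class 0 around 3, class 1 around 7.
x = np.concatenate([rng.normal(3, 1, 50), rng.normal(7, 1, 50)])
y = np.array([0] * 50 + [1] * 50)

def accuracy(b):
    """Classify as 1 when membership in 'high' (edge rising toward b) > 0.5."""
    member = np.clip((x - (b - 3)) / 3, 0.0, 1.0)
    return np.mean((member > 0.5).astype(int) == y)

b = 4.0                                  # initial, untuned parameter
best = accuracy(b)
for _ in range(200):                     # random search stands in for the GA
    cand = b + rng.normal(0, 0.5)        # perturb the membership parameter
    if accuracy(cand) > best:
        b, best = cand, accuracy(cand)
print(f"tuned position b={b:.2f}, accuracy={best:.2f}")
```

By construction the search only accepts improvements, so the tuned accuracy is never worse than the untuned starting point, which is the point of adding a fine-tuning step after rule selection.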

internal_classifier()[source]#