Rule Evaluation Functions#

This file contains the classes to perform rule classification evaluation.

class ex_fuzzy.eval_rules.evalRuleBase(mrule_base, X, y, time_moments=None, precomputed_truth=None)[source]#

Bases: object

Class to evaluate a set of rules given an evaluation dataset.

__init__(mrule_base, X, y, time_moments=None, precomputed_truth=None)[source]#

Creates the object with the rulebase to evaluate and the data to use in the evaluation.

Parameters:
  • mrule_base (MasterRuleBase) – The rule base to evaluate.

  • X (array) – array shape samples x features. The data to evaluate the rule base.

  • y (array) – array shape samples x 1. The labels of the data.

  • time_moments (array) – array shape samples x 1. The time moments of the samples. (Only for temporal rule bases)

  • precomputed_truth (array) – Precomputed antecedent membership values for X, if already available. (Optional)

Returns:

None

Return type:

None

compute_antecedent_pattern_support(X=None)[source]#

Computes the pattern support for each of the rules for the given X. Each pattern support firing strength is the result of the t-norm over all the antecedent memberships, divided by their number.

Returns:

array of shape rules x 2

Return type:

array
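
The support computation described above can be sketched in plain NumPy. This is a hypothetical illustration, not the ex_fuzzy implementation; the choice of the minimum as the t-norm and the averaging over samples are assumptions:

```python
import numpy as np

def antecedent_support(memberships):
    # Support of one rule: t-norm (here: minimum) of the antecedent
    # membership values per sample, divided by the number of antecedents,
    # then averaged over all samples.
    # memberships: array of shape (samples, antecedents).
    firing = memberships.min(axis=1) / memberships.shape[1]
    return firing.mean()

# 3 samples, 2 antecedents
m = np.array([[0.8, 0.6],
              [0.4, 0.9],
              [1.0, 0.5]])
print(antecedent_support(m))  # ~0.25
```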

compute_pattern_support(X=None, y=None)[source]#

Computes the pattern support for each of the rules for the given X. Each pattern support firing strength is the result of the t-norm over all the antecedent memberships, divided by their number.

Returns:

array of shape rules x 2

Return type:

array

compute_aux_pattern_support()[source]#

Computes the pattern support for each of the rules for each of the classes for the given X. Each pattern support firing strength is the result of the t-norm over all the antecedent memberships, divided by their number.

Returns:

array of shape rules x 2

Return type:

array

compute_pattern_confidence(X=None, y=None, precomputed_truth=None)[source]#

Computes the pattern confidence for each of the rules for the given X. Each pattern confidence is the normalized firing strength.

Returns:

array of shape 1 x rules

Return type:

array
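
As a concrete reading of "normalized firing strength", confidence can be sketched as the firing strength accumulated on samples of the rule's consequent class, divided by the total firing strength. A hypothetical NumPy illustration, not the ex_fuzzy code:

```python
import numpy as np

def pattern_confidence(firing, y, rule_class):
    # Confidence of one rule: firing strength on samples of the rule's
    # consequent class, normalised by the total firing strength.
    # firing: shape (samples,); y: class labels, shape (samples,).
    total = firing.sum()
    if total == 0:
        return 0.0
    return firing[y == rule_class].sum() / total

firing = np.array([0.9, 0.1, 0.6, 0.2])
y      = np.array([1,   0,   1,   0])
print(pattern_confidence(firing, y, rule_class=1))  # ~0.833
```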

compute_aux_pattern_confidence()[source]#

Computes the pattern confidence for each of the rules for the given X. Each pattern confidence is the normalized firing strength.

Returns:

array of shape rules x classes

Return type:

array

dominance_scores()[source]#

Returns the dominance score of each pattern for each rule.

Returns:

array of shape rules x 2

Return type:

array
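
Dominance scores are commonly defined as the product of a rule's support and confidence; treating dominance_scores() this way is an assumption about its internals. A minimal sketch:

```python
import numpy as np

# One support and one confidence value per rule (toy numbers).
support    = np.array([0.25, 0.10])
confidence = np.array([0.80, 0.50])

# Dominance score = support * confidence, computed rule-wise.
dominance = support * confidence
print(dominance)
```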

association_degree()[source]#

Returns the association degree of each rule for each sample.

Returns:

vector of shape rules

Return type:

array

aux_dominance_scores()[source]#

Returns the auxiliary dominance score of each pattern for each rule.

Returns:

array of shape rules x 2

Return type:

array

add_rule_weights()[source]#

Adds the dominance score field to each of the rules present in the master rule base.

add_auxiliary_rule_weights()[source]#

Adds dominance score fields to each of the rules present in the master rule base, one per consequent. They are labeled aux_score, aux_support and aux_confidence, because they are not the main rule weights.

add_classification_metrics(X=None, y=None)[source]#

Adds the accuracy of each rule in the master rule base, along with the f1, precision and recall scores. If X and y are None, the train set is used.

Parameters:
  • X (array) – array of shape samples x features

  • y (array) – array of shape samples

classification_eval()[source]#

Returns the Matthews correlation coefficient for a classification task using the rules evaluated.

Returns:

Matthews correlation coefficient (float in [-1, 1]).

Return type:

float
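
The Matthews correlation coefficient itself can be computed directly from confusion-matrix counts; a self-contained binary sketch (equivalent in this case to sklearn.metrics.matthews_corrcoef, which the library may or may not use internally):

```python
import numpy as np

def matthews_corrcoef_binary(y_true, y_pred):
    # MCC for a binary task from the four confusion-matrix counts.
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return (tp * tn - fp * fn) / denom if denom else 0.0

y_true = np.array([1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 0, 1, 1])
print(matthews_corrcoef_binary(y_true, y_pred))  # ~0.333
```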

size_antecedents_eval(tolerance=0.1)[source]#

Returns a score between 0 and 1, where 1 means that the rule base contains almost no antecedents.

0 means that the rule base contains many rules with a dominance score above {tolerance}, each using all possible antecedents. The more rules and the more antecedents per rule, the lower this score is.

Parameters:

tolerance – float in [0, 1]. The tolerance for the dominance score. Default 0.1

Returns:

float in [0, 1] with the score.

Return type:

float

effective_rulesize_eval(tolerance=0.1)[source]#

Returns a score between 0 and 1, where 1 means that the rule base contains almost no antecedents.

0 means that the rule base contains many rules with a dominance score above {tolerance}, each using all possible antecedents. The more rules and the more antecedents per rule, the lower this score is.

Parameters:

tolerance – float in [0, 1]. The tolerance for the dominance score. Default 0.1

Returns:

float in [0, 1] with the score.

Return type:

float

p_permutation_classifier_validation(n=100, r=10)[source]#

Performs a bootstrap test to evaluate the performance of the rule base. Returns the p-value for the label permutation test and the feature coalition test.

Parameters:
  • n – int. Number of bootstrap samples.

  • r – int. Number of repetitions to estimate the original error rate.

Returns:

p-value of the permutation test.

Return type:

float
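
The label permutation part can be sketched generically: shuffle the labels many times, re-score, and report the fraction of shufflings that match or beat the observed score. A hypothetical sketch using a toy threshold classifier as the scorer (not the ex_fuzzy internals):

```python
import numpy as np

rng = np.random.default_rng(0)

def permutation_pvalue(score_fn, X, y, n=100):
    # p-value: fraction of label shufflings whose score matches or beats
    # the score on the real labels (with the usual +1 correction).
    observed = score_fn(X, y)
    hits = 0
    for _ in range(n):
        y_perm = rng.permutation(y)
        if score_fn(X, y_perm) >= observed:
            hits += 1
    return (hits + 1) / (n + 1)

# Toy scorer: accuracy of a fixed threshold classifier.
X = np.array([0.1, 0.2, 0.8, 0.9])
y = np.array([0, 0, 1, 1])
accuracy = lambda X, y: np.mean((X > 0.5) == y)
p = permutation_pvalue(accuracy, X, y, n=200)
print(p)
```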

p_bootstrapping_rules_validation(n=100)[source]#

add_full_evaluation()[source]#

Adds classification scores, both Dominance Scores and accuracy metrics, for each individual rule.

bootstrap_support_rules(X, y, n_samples)[source]#

Bootstraps the support of the rules in the classifier.

bootstrap_confidence_rules(X, y, n_samples)[source]#

Bootstraps the confidence of the rules in the classifier.

bootstrap_support_confinterval(X, y, n_samples)[source]#

Computes a bootstrap confidence interval for the support of the rules in the classifier.

bootstrap_confidence_confinterval(X, y, n_samples)[source]#

Computes a bootstrap confidence interval for the confidence of the rules in the classifier.
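
A percentile bootstrap confidence interval, one standard way such intervals are obtained, can be sketched as follows; using the mean of per-sample firing strengths as the statistic is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_confint(values, n_samples=1000, alpha=0.05):
    # Percentile bootstrap CI for the mean of `values` (e.g. per-sample
    # firing strengths that make up a rule's support): resample with
    # replacement, recompute the mean, take the alpha/2 quantiles.
    stats = [rng.choice(values, size=len(values), replace=True).mean()
             for _ in range(n_samples)]
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

firing = np.array([0.2, 0.3, 0.25, 0.4, 0.1, 0.35])
low, high = bootstrap_confint(firing)
print(low, high)
```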