Rules Module#
The ex_fuzzy.rules module contains the core classes and functions for fuzzy rule definition, management, and inference.
Overview#
This module implements a complete fuzzy inference system supporting:
Type-1, Type-2, and General Type-2 fuzzy sets
Multiple inference methods: Mamdani and Takagi-Sugeno
Various t-norm operations: Product, minimum, and other aggregation functions
Defuzzification methods: Centroid, height, and other methods
Rule quality assessment: Dominance scores and performance metrics
Architecture#
The module follows a hierarchical structure:
Individual Rules: RuleSimple for single rule representation
Rule Collections: RuleBaseT1, RuleBaseT2, RuleBaseGT2
Multi-class Systems: MasterRuleBase for complete fuzzy systems
Inference Support: Functions for membership computation and aggregation
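A minimal construction sketch of this hierarchy (here antecedent_vars stands for a list of fuzzyVariable objects built with ex_fuzzy.fuzzy_sets; see the Examples section below):
import ex_fuzzy.rules as rules
# Individual rule: integer-encoded antecedents, one consequent
r = rules.RuleSimple([0, 1], consequent=0)
# Rule collection for a single consequent/class (antecedent_vars defined elsewhere)
rb = rules.RuleBaseT1(antecedent_vars, [r])
# Multi-class system: one rule base per class
mrb = rules.MasterRuleBase([rb], consequent_names=["class_0"])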
Classes#
RuleSimple: Simplified rule representation for optimized computation.
RuleBaseT1: Type-1 rule base, optimized to work with multiple rules at the same time.
RuleBaseT2: Interval Type-2 rule base, optimized to work with multiple rules at the same time.
RuleBaseGT2: General Type-2 rule base, optimized to work with multiple rules at the same time.
MasterRuleBase: Encompasses a list of rule bases, each corresponding to a different class.
Functions#
compute_antecedents_memberships: Compute membership degrees for input values across all fuzzy variables.
Rule Classes#
RuleSimple#
- class ex_fuzzy.rules.RuleSimple(antecedents, consequent=0, modifiers=None)[source]#
Bases:
object
Simplified Rule Representation for Optimized Computation.
This class represents fuzzy rules in a simplified format optimized for computational efficiency in rule base operations. It uses integer encoding for antecedents and consequents to minimize memory usage and speed up rule evaluation processes.
- antecedents#
Integer-encoded antecedents, where -1 means the variable is not used in the rule and a value in 0..N is the index of the linguistic variable used for the ith input.
- modifiers#
Optional modifiers for rule adaptation
- Type:
np.array
Example
>>> # Rule: IF x1 is Low AND x2 is High THEN y is Medium
>>> # Assuming Low=0, High=1, Medium=1
>>> rule = RuleSimple([0, 1], consequent=1)
>>> print(rule.antecedents)  # [0, 1]
>>> print(rule.consequent)  # 1
Note
This simplified representation is designed for high-performance rule evaluation in large rule bases where memory and speed are critical.
Core Methods
- __init__(antecedents, consequent=0, modifiers=None)[source]#
Creates a rule with the given antecedents and consequent.
- Parameters:
antecedents (list[int]) – List of integers indicating the linguistic variable used for each input (-1 for unused variables)
consequent (int, optional) – Integer indicating the linguistic variable used for the consequent. Defaults to 0.
modifiers (np.array, optional) – Array of modifier values for rule adaptation. Defaults to None.
Example
>>> # Create a rule with three antecedents (the third unused) and one consequent
>>> rule = RuleSimple([0, 2, -1], consequent=1)  # x1=0, x2=2, x3=unused, y=1
- __getitem__(ix)[source]#
Returns the antecedent value for the given index.
- Parameters:
ix (int) – Index of the antecedent to return
- Returns:
The antecedent value at the specified index
- Return type:
int
Example
>>> rule = RuleSimple([0, 1, 2])
>>> print(rule[1])  # 1
- __setitem__(ix, value)[source]#
Sets the antecedent value for the given index.
- Parameters:
ix (int) – Index of the antecedent to set
value (int) – New antecedent value
Example
>>> rule = RuleSimple([0, 1, 2])
>>> rule[1] = 3  # Change second antecedent to 3
- __str__()[source]#
Returns a string representation of the rule.
- Returns:
Human-readable string representation of the rule
- Return type:
str
Example
>>> rule = RuleSimple([0, 1], consequent=2)
>>> print(rule)  # Rule: antecedents: [0, 1] consequent: 2
- __len__()[source]#
Returns the number of antecedents in the rule.
- Returns:
Number of antecedents in the rule
- Return type:
int
Example
>>> rule = RuleSimple([0, 1, 2])
>>> print(len(rule))  # 3
- __eq__(other)[source]#
Returns True if the two rules are equal.
- Parameters:
other (RuleSimple) – Another rule to compare with
- Returns:
True if rules have identical antecedents and consequent
- Return type:
bool
Example
>>> rule1 = RuleSimple([0, 1], consequent=2)
>>> rule2 = RuleSimple([0, 1], consequent=2)
>>> print(rule1 == rule2)  # True
RuleBase Classes#
- class ex_fuzzy.rules.RuleBaseT1(antecedents, rules, consequent=None, tnorm=<function prod>)[source]#
Bases:
RuleBase
Class optimized to work with multiple rules at the same time. Supports only one consequent. (Use one rule base per consequent for classification problems; see the MasterRuleBase class for more documentation.)
This class supports Type-1 fuzzy sets.
- __init__(antecedents, rules, consequent=None, tnorm=<function prod>)[source]#
Constructor of the RuleBaseT1 class.
- Parameters:
antecedents (list[fuzzyVariable]) – list of fuzzy variables that are the antecedents of the rules.
rules (list[RuleSimple]) – list of rules.
consequent (fuzzyVariable) – fuzzy variable that is the consequent of the rules. ONLY on regression problems.
tnorm – t-norm used to compute the fuzzy output.
- inference(x)[source]#
Computes the output of the Type-1 inference system.
Returns an array of shape (samples,).
- Parameters:
x (array) – array with the values of the inputs.
- Returns:
array with the output of the inference system for each sample.
- Return type:
array
- forward(x)[source]#
Same as inference() in the Type-1 case.
Returns a vector of size (samples,).
- Parameters:
x (array) – array with the values of the inputs.
- Returns:
array with the defuzzified output for each sample.
- Return type:
array
- fuzzy_type()[source]#
Returns the corresponding type of the RuleBase using the enum type in the fuzzy_sets module.
- Returns:
the corresponding fuzzy set type of the RuleBase.
- Return type:
FUZZY_SETS
- __add__(other)#
Adds two rule bases.
- __eq__(other)#
Returns True if the two rule bases are equal.
- __getitem__(item)#
Returns the corresponding rulebase.
- Parameters:
item (int) – index of the rule.
- Returns:
the corresponding rule.
- Return type:
RuleSimple
- __hash__()#
Returns the hash of the rule base.
- __iter__()#
Returns an iterator for the rule base.
- __len__()#
Returns the number of rules in the rule base.
- __setitem__(key, value)#
Set the corresponding rule.
- Parameters:
key (int) – index of the rule.
value (RuleSimple) – new rule.
- __str__()#
Returns a string with the rules in the rule base.
- add_rule(new_rule)#
Adds a new rule to the rulebase.
- Parameters:
new_rule (RuleSimple) – rule to add.
- add_rules(new_rules)#
Adds a list of new rules to the rulebase.
- Parameters:
new_rules (list[RuleSimple]) – list of rules to add.
- compute_antecedents_memberships(x)#
Returns a list of dictionaries containing the membership of each x value in the nth linguistic variable of the ith antecedent. x must be a vector (only one sample).
- compute_rule_antecedent_memberships(x, scaled=False, antecedents_memberships=None)#
Computes the antecedent truth value of an input array.
Returns an array of shape samples x rules (x 2; the last dimension is the interval-valued dimension).
- Parameters:
x (array) – array with the values of the inputs.
scaled – if True, the memberships are scaled according to their sums for each sample.
- Returns:
array with the memberships of the antecedents for each rule.
- Return type:
array
- copy()#
Creates a copy of the RuleBase.
- Parameters:
deep – if True, creates a deep copy. If False, creates a shallow copy.
- Returns:
a copy of the RuleBase.
- delete_rule_duplicates(list_rules)#
Removes duplicated rules from the given list of rules.
- get_rulebase_matrix()#
Returns a matrix with the antecedents values for each rule.
- get_rules()#
Returns the list of rules in the rulebase.
- get_scores()#
Returns an array with the dominance score for each rule. (Must have already been computed by an evalRule object.)
- get_weights()#
Returns an array with the weights for each rule. (Different from dominance scores: must have already been computed by an optimization algorithm.)
- n_linguistic_variables()#
Returns the number of linguistic variables in the rule base.
- print_rule_bootstrap_results()#
Prints the bootstrap results for each rule.
- print_rules(return_rules=False, bootstrap_results=True)#
Print the rules from the rule base.
- Parameters:
return_rules (bool) – if True, the rules are returned as a string.
- prune_bad_rules(tolerance=0.01)#
Deletes the rules from the rule base that do not have a dominance score above the threshold or that have 0 accuracy on the training set.
- Parameters:
tolerance – threshold for the dominance score.
- remove_rule(ix)#
Removes the rule at the given index.
- Parameters:
ix (int) – index of the rule to remove.
- remove_rules(delete_list)#
Removes the rules in the given list of indexes.
- scores()#
Returns the dominance score for each rule.
- Returns:
array with the dominance score for each rule.
- Return type:
array
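Example
A minimal sketch: input_vars and output_var stand for fuzzyVariable objects built with ex_fuzzy.fuzzy_sets, and X is an array of input samples.
>>> # Regression-style use: with a consequent fuzzy variable, inference() and
>>> # forward() produce one output per sample
>>> rb = RuleBaseT1(input_vars, [RuleSimple([0, 1]), RuleSimple([2, 0])],
...                 consequent=output_var)
>>> strengths = rb.compute_rule_antecedent_memberships(X)  # shape (samples, rules)
>>> y = rb.forward(X)  # defuzzified output, shape (samples,)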
- class ex_fuzzy.rules.RuleBaseT2(antecedents, rules, consequent=None, tnorm=<function prod>)[source]#
Bases:
RuleBase
Class optimized to work with multiple rules at the same time. Supports only one consequent. (Use one rule base per consequent for classification problems; see the MasterRuleBase class for more documentation.)
This class supports the interval-valued approximation of Type-2 fuzzy sets.
- __init__(antecedents, rules, consequent=None, tnorm=<function prod>)[source]#
Constructor of the RuleBaseT2 class.
- Parameters:
antecedents (list[fuzzyVariable]) – list of fuzzy variables that are the antecedents of the rules.
rules (list[RuleSimple]) – list of rules.
consequent (fuzzyVariable) – fuzzy variable that is the consequent of the rules.
tnorm – t-norm used to compute the fuzzy output.
- inference(x)[source]#
Computes the interval-valued output of the Type-2 inference system.
Returns an array of shape samples x 2 (the last dimension is the interval-valued dimension).
- Parameters:
x (array) – array with the values of the inputs.
- Returns:
array with the memberships of the consequents for each sample.
- Return type:
array
- forward(x)[source]#
Computes the defuzzified output of the Type-2 inference system.
Returns a vector of size (samples,).
- Parameters:
x (array) – array with the values of the inputs.
- Returns:
array with the defuzzified output for each sample.
- Return type:
array
- fuzzy_type()[source]#
Returns the corresponding type of the RuleBase using the enum type in the fuzzy_sets module.
- Returns:
the corresponding fuzzy set type of the RuleBase.
- Return type:
FUZZY_SETS
- __add__(other)#
Adds two rule bases.
- __eq__(other)#
Returns True if the two rule bases are equal.
- __getitem__(item)#
Returns the corresponding rulebase.
- Parameters:
item (int) – index of the rule.
- Returns:
the corresponding rule.
- Return type:
RuleSimple
- __hash__()#
Returns the hash of the rule base.
- __iter__()#
Returns an iterator for the rule base.
- __len__()#
Returns the number of rules in the rule base.
- __setitem__(key, value)#
Set the corresponding rule.
- Parameters:
key (int) – index of the rule.
value (RuleSimple) – new rule.
- __str__()#
Returns a string with the rules in the rule base.
- add_rule(new_rule)#
Adds a new rule to the rulebase.
- Parameters:
new_rule (RuleSimple) – rule to add.
- add_rules(new_rules)#
Adds a list of new rules to the rulebase.
- Parameters:
new_rules (list[RuleSimple]) – list of rules to add.
- compute_antecedents_memberships(x)#
Returns a list of dictionaries containing the membership of each x value in the nth linguistic variable of the ith antecedent. x must be a vector (only one sample).
- compute_rule_antecedent_memberships(x, scaled=False, antecedents_memberships=None)#
Computes the antecedent truth value of an input array.
Returns an array of shape samples x rules (x 2; the last dimension is the interval-valued dimension).
- Parameters:
x (array) – array with the values of the inputs.
scaled – if True, the memberships are scaled according to their sums for each sample.
- Returns:
array with the memberships of the antecedents for each rule.
- Return type:
array
- copy()#
Creates a copy of the RuleBase.
- Parameters:
deep – if True, creates a deep copy. If False, creates a shallow copy.
- Returns:
a copy of the RuleBase.
- delete_rule_duplicates(list_rules)#
Removes duplicated rules from the given list of rules.
- get_rulebase_matrix()#
Returns a matrix with the antecedents values for each rule.
- get_rules()#
Returns the list of rules in the rulebase.
- get_scores()#
Returns an array with the dominance score for each rule. (Must have already been computed by an evalRule object.)
- get_weights()#
Returns an array with the weights for each rule. (Different from dominance scores: must have already been computed by an optimization algorithm.)
- n_linguistic_variables()#
Returns the number of linguistic variables in the rule base.
- print_rule_bootstrap_results()#
Prints the bootstrap results for each rule.
- print_rules(return_rules=False, bootstrap_results=True)#
Print the rules from the rule base.
- Parameters:
return_rules (bool) – if True, the rules are returned as a string.
- prune_bad_rules(tolerance=0.01)#
Deletes the rules from the rule base that do not have a dominance score above the threshold or that have 0 accuracy on the training set.
- Parameters:
tolerance – threshold for the dominance score.
- remove_rule(ix)#
Removes the rule at the given index.
- Parameters:
ix (int) – index of the rule to remove.
- remove_rules(delete_list)#
Removes the rules in the given list of indexes.
- scores()#
Returns the dominance score for each rule.
- Returns:
array with the dominance score for each rule.
- Return type:
array
- class ex_fuzzy.rules.RuleBaseGT2(antecedents, rules, consequent=None, tnorm=<function prod>)[source]#
Bases:
RuleBase
Class optimized to work with multiple rules at the same time. Supports only one consequent. (Use one rule base per consequent for classification problems; see the MasterRuleBase class for more documentation.)
This class supports General Type-2 fuzzy sets. (ONLY FOR CLASSIFICATION PROBLEMS)
- __init__(antecedents, rules, consequent=None, tnorm=<function prod>)[source]#
Constructor of the RuleBaseGT2 class.
- Parameters:
antecedents (list[fuzzyVariable]) – list of fuzzy variables that are the antecedents of the rules.
rules (list[RuleSimple]) – list of rules.
consequent (fuzzyVariable) – fuzzy variable that is the consequent of the rules.
tnorm – t-norm used to compute the fuzzy output.
- inference(x)[source]#
Computes the output of the General Type-2 inference system.
Returns an array of shape samples x alpha_cuts.
- Parameters:
x (array) – array with the values of the inputs.
- Returns:
array with the memberships of the consequents for each sample.
- Return type:
array
- forward(x)[source]#
Computes the defuzzified output of the Type-2 inference system.
Returns a vector of size (samples,).
- Parameters:
x (array) – array with the values of the inputs.
- Returns:
array with the defuzzified output for each sample.
- Return type:
array
- fuzzy_type()[source]#
Returns the corresponding type of the RuleBase using the enum type in the fuzzy_sets module.
- Returns:
the corresponding fuzzy set type of the RuleBase.
- Return type:
FUZZY_SETS
- compute_rule_antecedent_memberships(x, scaled=True, antecedents_memberships=None)[source]#
Computes the membership for the antecedents performing the alpha_cut reduction.
- Parameters:
x (array) – array with the values of the inputs.
scaled – if True, the memberships are scaled to sum 1 in each sample.
antecedents_memberships – precomputed antecedent memberships. Not supported for GT2.
- Returns:
array with the memberships of the antecedents for each sample.
- Return type:
array
- alpha_compute_rule_antecedent_memberships(x, scaled=True, antecedents_memberships=None)[source]#
Computes the membership for the antecedents for all the alpha cuts.
- Parameters:
x (array) – array with the values of the inputs.
scaled – if True, the memberships are scaled to sum 1 in each sample.
antecedents_memberships – precomputed antecedent memberships. Not supported for GT2.
- Returns:
array with the memberships of the antecedents for each sample.
- Return type:
array
- __add__(other)#
Adds two rule bases.
- __eq__(other)#
Returns True if the two rule bases are equal.
- __getitem__(item)#
Returns the corresponding rulebase.
- Parameters:
item (int) – index of the rule.
- Returns:
the corresponding rule.
- Return type:
RuleSimple
- __hash__()#
Returns the hash of the rule base.
- __iter__()#
Returns an iterator for the rule base.
- __len__()#
Returns the number of rules in the rule base.
- __setitem__(key, value)#
Set the corresponding rule.
- Parameters:
key (int) – index of the rule.
value (RuleSimple) – new rule.
- __str__()#
Returns a string with the rules in the rule base.
- add_rule(new_rule)#
Adds a new rule to the rulebase.
- Parameters:
new_rule (RuleSimple) – rule to add.
- add_rules(new_rules)#
Adds a list of new rules to the rulebase.
- Parameters:
new_rules (list[RuleSimple]) – list of rules to add.
- compute_antecedents_memberships(x)#
Returns a list of dictionaries containing the membership of each x value in the nth linguistic variable of the ith antecedent. x must be a vector (only one sample).
- copy()#
Creates a copy of the RuleBase.
- Parameters:
deep – if True, creates a deep copy. If False, creates a shallow copy.
- Returns:
a copy of the RuleBase.
- delete_rule_duplicates(list_rules)#
Removes duplicated rules from the given list of rules.
- get_rulebase_matrix()#
Returns a matrix with the antecedents values for each rule.
- get_rules()#
Returns the list of rules in the rulebase.
- get_scores()#
Returns an array with the dominance score for each rule. (Must have already been computed by an evalRule object.)
- get_weights()#
Returns an array with the weights for each rule. (Different from dominance scores: must have already been computed by an optimization algorithm.)
- n_linguistic_variables()#
Returns the number of linguistic variables in the rule base.
- print_rule_bootstrap_results()#
Prints the bootstrap results for each rule.
- print_rules(return_rules=False, bootstrap_results=True)#
Print the rules from the rule base.
- Parameters:
return_rules (bool) – if True, the rules are returned as a string.
- prune_bad_rules(tolerance=0.01)#
Deletes the rules from the rule base that do not have a dominance score above the threshold or that have 0 accuracy on the training set.
- Parameters:
tolerance – threshold for the dominance score.
- remove_rule(ix)#
Removes the rule at the given index.
- Parameters:
ix (int) – index of the rule to remove.
- remove_rules(delete_list)#
Removes the rules in the given list of indexes.
- scores()#
Returns the dominance score for each rule.
- Returns:
array with the dominance score for each rule.
- Return type:
array
MasterRuleBase#
- class ex_fuzzy.rules.MasterRuleBase(rule_base, consequent_names=None, ds_mode=0, allow_unknown=False)[source]#
Bases:
object
This class encompasses a list of rule bases, where each one corresponds to a different class.
- __init__(rule_base, consequent_names=None, ds_mode=0, allow_unknown=False)[source]#
Constructor of the MasterRuleBase class.
- add_rule(rule, consequent)[source]#
Adds a rule to the rule base of the given consequent.
- Parameters:
rule (RuleSimple) – rule to add.
consequent (int) – index of the rule base to add the rule.
- get_rulebase_matrix()[source]#
Returns a list with the rule base of each class in matrix format.
- Returns:
list with the rule base of each class in matrix format.
- Return type:
list[array]
- get_scores()[source]#
Returns the dominance score for each rule in all the rulebases.
- Returns:
array with the dominance score for each rule in all the rulebases.
- Return type:
array
- get_weights()[source]#
Returns the weights for each rule in all the rulebases.
- Returns:
array with the weights for each rule in all the rulebases.
- Return type:
array
- compute_firing_strenghts(X, precomputed_truth=None)[source]#
Computes the firing strength of each rule for each sample.
- Parameters:
X – array with the values of the inputs.
precomputed_truth – if not None, the antecedent memberships are already computed. (Used to speed up genetic algorithms.)
- Returns:
array with the firing strength of each rule for each sample.
- Return type:
array
- compute_association_degrees(X, precomputed_truth=None)[source]#
Returns the winning rule for each sample. Takes into account dominance scores if already computed.
- Parameters:
X – array with the values of the inputs.
- Returns:
array with the winning rule for each sample.
- winning_rule_predict(X, precomputed_truth=None, out_class_names=False)[source]#
Returns the winning rule for each sample. Takes into account dominance scores if already computed.
- Parameters:
X (array) – array with the values of the inputs.
precomputed_truth – if not None, the antecedent memberships are already computed. (Used to speed up genetic algorithms.)
- Returns:
array with the winning rule for each sample.
- Return type:
array
- explainable_predict(X, out_class_names=False, precomputed_truth=None)[source]#
Returns the predicted class for each sample.
- Parameters:
X (array) – array with the values of the inputs.
out_class_names – if True, the output will be the class names instead of the class index.
- Returns:
np array samples (x 1) with the predicted class.
- Return type:
array
- add_rule_base(rule_base)[source]#
Adds a rule base to the list of rule bases.
- Parameters:
rule_base (RuleBase) – rule base to add.
- print_rules(return_rules=False, bootstrap_results=True)[source]#
Print all the rules for all the consequents.
- Parameters:
return_rules (bool) – if True, the rules are returned as a string instead of printed.
- get_rules()[source]#
Returns a list with all the rules.
- Returns:
list with all the rules.
- Return type:
list[RuleSimple]
- fuzzy_type()[source]#
Returns the corresponding type of the RuleBase using the enum type in the fuzzy_sets module.
- Returns:
the corresponding fuzzy set type of the RuleBase.
- Return type:
FUZZY_SETS
- purge_rules(tolerance=0.001)[source]#
Deletes the rules with a dominance score lower than the tolerance.
- Parameters:
tolerance – tolerance to delete the rules.
- __getitem__(item)[source]#
Returns the corresponding rulebase.
- Parameters:
item – index of the rulebase.
- Returns:
the corresponding rulebase.
- Return type:
RuleBase
- __eq__(_MasterRuleBase__value)[source]#
Returns True if the two rule bases are equal.
- Parameters:
__value – object to compare.
- Returns:
True if the two rule bases are equal.
- Return type:
bool
- __call__(X)[source]#
Gives the prediction for each sample (same as winning_rule_predict)
- Parameters:
X (array) – array of dims: samples x features.
- Returns:
vector of predictions, size: samples,
- Return type:
array
- predict(X)[source]#
Gives the prediction for each sample (same as winning_rule_predict)
- Parameters:
X (array) – array of dims: samples x features.
- Returns:
vector of predictions, size: samples,
- Return type:
array
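Example
A minimal sketch of reusing precomputed antecedent memberships across calls: it assumes master_rb is an already built MasterRuleBase, antecedents is its list of fuzzy variables, and that precomputed_truth accepts the structure returned by compute_antecedents_memberships.
>>> import numpy as np
>>> X = np.random.rand(500, len(antecedents))
>>> truth = compute_antecedents_memberships(antecedents, X)  # fuzzify once
>>> firing = master_rb.compute_firing_strenghts(X, precomputed_truth=truth)
>>> preds = master_rb.winning_rule_predict(X, precomputed_truth=truth)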
Core Functions#
Membership Computation#
- ex_fuzzy.rules.compute_antecedents_memberships(antecedents, x)[source]#
Compute membership degrees for input values across all fuzzy variables.
This function calculates the membership degrees of input values for each linguistic variable in the antecedents. It returns a structured representation that can be used for efficient rule evaluation and inference.
- Parameters:
antecedents (list[fs.fuzzyVariable]) – List of fuzzy variables representing the antecedents (input variables) of the fuzzy system
x (np.array) – Input vector with values for each antecedent variable. Shape should be (n_samples, n_variables) or (n_variables,) for single sample
- Returns:
- List containing membership dictionaries for each antecedent variable.
Each dictionary maps linguistic term indices to their membership degrees.
- Return type:
list[dict]
Example
>>> # For 2 variables with 3 linguistic terms each
>>> antecedents = [temp_var, pressure_var]
>>> x = np.array([25.0, 101.3])  # temperature=25°C, pressure=101.3kPa
>>> memberships = compute_antecedents_memberships(antecedents, x)
>>> # memberships[0] contains temperature memberships: {0: 0.2, 1: 0.8, 2: 0.0}
>>> # memberships[1] contains pressure memberships: {0: 0.0, 1: 0.6, 2: 0.4}
Note
This function is typically used internally during rule evaluation but can be useful for debugging membership degree calculations or analyzing input fuzzification.
Model Evaluation#
Constants and Modifiers#
Rule Modifiers#
The module supports linguistic hedges that modify fuzzy set membership:
- ex_fuzzy.rules.modifiers_names#
Dictionary mapping modifier powers to linguistic terms:
0.5: “Somewhat”
1.0: “” (no modifier)
1.3: “A little”
1.7: “Slightly”
2.0: “Very”
3.0: “Extremely”
4.0: “Very very”
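Example
A sketch assuming one modifier power per antecedent; how the powers are applied to the membership values is handled by the inference machinery.
>>> import numpy as np
>>> # IF x1 is Very Low AND x2 is Somewhat High THEN class 1
>>> hedged_rule = RuleSimple([0, 1], consequent=1,
...                          modifiers=np.array([2.0, 0.5]))  # 2.0 = "Very", 0.5 = "Somewhat"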
Examples#
Creating Simple Rules#
import ex_fuzzy.rules as rules
import ex_fuzzy.fuzzy_sets as fs
import numpy as np
# Create fuzzy variables with three linguistic terms each
temp_var = fs.fuzzyVariable("Temperature", [0, 50], 3, fs.FUZZY_SETS.t1)
humidity_var = fs.fuzzyVariable("Humidity", [0, 100], 3, fs.FUZZY_SETS.t1)

# Create a simple rule: IF Temperature is High AND Humidity is Low THEN Comfort is Good (class 1)
# Antecedents are integer-encoded: one linguistic term index per input (-1 = unused)
rule = rules.RuleSimple([2, 0], consequent=1)  # High temperature, Low humidity

# Fuzzify an input with the documented helper
input_values = np.array([35, 25])  # 35°C, 25% humidity
memberships = rules.compute_antecedents_memberships([temp_var, humidity_var], input_values)
print(f"Temperature memberships: {memberships[0]}")
Building Rule Bases#
# Define rules with integer-encoded antecedents (one term index per variable)
rules_list = [
    rules.RuleSimple([0, 0]),  # Low temp,  Low humidity
    rules.RuleSimple([1, 1]),  # Med temp,  Med humidity
    rules.RuleSimple([2, 2]),  # High temp, High humidity
]

# Create a rule base for Type-1 fuzzy sets: the antecedent variables and the rules
# are passed to the constructor (one rule base handles one consequent/class)
rule_base = rules.RuleBaseT1([temp_var, humidity_var], rules_list)

# Further rules can be appended later
rule_base.add_rule(rules.RuleSimple([2, 0]))
Multi-class Systems#
# Build one rule base per class, then wrap them in a MasterRuleBase
rb_class0 = rules.RuleBaseT1([temp_var, humidity_var], [rules.RuleSimple([0, 0])])
rb_class1 = rules.RuleBaseT1([temp_var, humidity_var], [rules.RuleSimple([1, 1])])
rb_class2 = rules.RuleBaseT1([temp_var, humidity_var], [rules.RuleSimple([2, 2])])

master_rb = rules.MasterRuleBase([rb_class0, rb_class1, rb_class2],
                                 consequent_names=["Low", "Medium", "High"])

# Additional rule bases can be appended with add_rule_base()
# Evaluate the complete system
input_data = np.array([[35, 25], [20, 80], [40, 90]])
predictions = master_rb.predict(input_data)
Computing Memberships#
# Compute antecedent memberships for multiple inputs
antecedents = [temp_var, humidity_var]
input_values = np.array([[25, 60], [35, 30], [15, 80]])
memberships = rules.compute_antecedents_memberships(antecedents, input_values)
# Access the memberships for the first variable (one entry per linguistic term)
first_var_memberships = memberships[0]
print(f"Memberships for variable 1: {first_var_memberships}")
Rule Quality Assessment#
# Rule quality is summarized by dominance scores. They are computed from data by the
# rule evaluation utilities (an evalRule object, see ex_fuzzy.eval_tools), not by the
# rule object itself.
X_train = np.random.rand(100, 2) * 50   # Training data
y_train = np.random.randint(0, 3, 100)  # Class labels

# After the evaluation step has run, the scores can be read from the rule bases
dominance = master_rb.get_scores()  # one dominance score per rule
print(f"Rule dominance scores: {dominance}")

# Rules with a low dominance score can then be pruned
master_rb.purge_rules(tolerance=0.001)
Advanced Inference#
# Custom t-norm: the t-norm that combines antecedent memberships is passed to the
# rule base constructor (the default is the product). Another numpy-style reduction,
# e.g. the minimum, can be supplied instead (sketch; the callable must accept the
# same axis-reduction interface as np.prod)
rule_base_min = rules.RuleBaseT1([temp_var, humidity_var], rules_list, tnorm=np.min)

# Rule antecedent truth values for a batch of inputs
input_values = np.array([[25.0, 60.0], [35.0, 30.0]])
truth_values = rule_base_min.compute_rule_antecedent_memberships(input_values)
print(truth_values.shape)  # (samples, rules)
Performance Optimization#
# Efficient batch evaluation: pre-compute antecedent memberships once and reuse them
batch_inputs = np.random.rand(1000, 2) * 50
batch_memberships = rules.compute_antecedents_memberships(antecedents, batch_inputs)

# Reuse the precomputed memberships wherever a precomputed-truth argument is accepted
# (sketch; assumes the format returned by compute_antecedents_memberships is expected)
truth_values = rule_base.compute_rule_antecedent_memberships(
    batch_inputs, antecedents_memberships=batch_memberships)
predictions = master_rb.winning_rule_predict(batch_inputs,
                                             precomputed_truth=batch_memberships)
See Also#
ex_fuzzy.fuzzy_sets: Fuzzy set definitions and operations
ex_fuzzy.centroid: Defuzzification algorithms
ex_fuzzy.classifiers: High-level classification interfaces
ex_fuzzy.eval_tools: Rule evaluation and performance metrics