feat(RHOAIENG-21045): Add fairness metrics and tests #5
Conversation
force-pushed from 49ef199 to 407f057
force-pushed from b83a31e to 1e28682
@sourcery-ai review
Reviewer's Guide

This PR introduces implementations for group and individual fairness metrics (disparate impact ratio, statistical parity difference, average odds/predictive value difference, and consistency) with a shared utility module, alongside a comprehensive pytest suite that validates each metric against AIF360 and reference calculations.

Class diagram for new fairness metrics classes

```mermaid
classDiagram
    class DisparateImpactRatio {
        +static calculate_model(samples, model, privilege_columns, privilege_values, favorable_output)
        +static calculate(privileged, unprivileged, favorable_output)
    }
    class GroupStatisticalParityDifference {
        +static calculate_model(samples, model, privilege_columns, privilege_values, favorable_output)
        +static calculate(privileged, unprivileged, favorable_output)
    }
    class GroupAverageOddsDifference {
        +static calculate_model(samples, model, privilege_columns, privilege_values, postive_class, output_column)
        +static calculate(test, truth, privilege_columns, privilege_values, positive_class, output_column)
    }
    class GroupAveragePredictiveValueDifference {
        +static calculate_model(samples, model, privilege_columns, privilege_values, positive_class, output_column)
        +static calculate(test, truth, privilege_columns, privilege_values, positive_class, output_column)
    }
    class IndividualConsistency {
        +static calculate(proximity_function, samples, prediction_provider)
    }
    class fairness_metrics_utils {
        +filter_rows_by_inputs(data, filter_func)
        +calculate_confusion_matrix(test, truth, positive_class)
    }
    GroupAverageOddsDifference ..> fairness_metrics_utils : uses
    GroupAveragePredictiveValueDifference ..> fairness_metrics_utils : uses
```

Class diagram for fairness_metrics_utils module

```mermaid
classDiagram
    class fairness_metrics_utils {
        +filter_rows_by_inputs(data, filter_func)
        +calculate_confusion_matrix(test, truth, positive_class)
    }
```
Hey @christinaexyou - I've reviewed your changes and found some issues that need to be addressed.
Blocking issues:
- Potential division by zero in denominators without epsilon. (link)
General comments:
- The tests pull data from a remote CSV which can be brittle and slow—consider using a small in-memory fixture or synthetic dataset to make them faster and more reliable.
- The group fairness metrics implementations share almost identical filtering and confusion‐matrix code—extract those common patterns into shared helpers to reduce duplication (see the sketch after this list).
- There’s a typo in the parameter name (`postive_class`) and inconsistent epsilon handling across metrics—standardize parameter names and zero‐division guards for clarity and correctness.
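As a loose illustration of the deduplication suggestion above, a shared group-splitting helper might look like the following; the name `split_privileged_groups` and its signature are hypothetical, not part of this PR:

```python
from typing import Sequence, Tuple
import numpy as np

def split_privileged_groups(
    data: np.ndarray,
    privilege_columns: Sequence[int],
    privilege_values: Sequence,
) -> Tuple[np.ndarray, np.ndarray]:
    """Split rows into (privileged, unprivileged) groups by matching every
    privilege column against its corresponding privileged value."""
    mask = np.all(data[:, list(privilege_columns)] == privilege_values, axis=1)
    return data[mask], data[~mask]
```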
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- The tests pull data from a remote CSV which can be brittle and slow—consider using a small in-memory fixture or synthetic dataset to make them faster and more reliable.
- The group fairness metrics implementations share almost identical filtering and confusion‐matrix code—extract those common patterns into shared helpers to reduce duplication.
- There’s a typo in the parameter name (`postive_class`) and inconsistent epsilon handling across metrics—standardize parameter names and zero‐division guards for clarity and correctness.
## Individual Comments
### Comment 1
<location> `src/core/metrics/fairness/group/group_average_odds_difference.py:27` </location>
<code_context>
+ :param model the model to be tested for fairness
+ :param privilege_columns a list of integers specifying the indices of the privileged columns
+ :param privilege_values a list of intergers specifying the privileged values
+ :param postive_class the favorable / desirable outputs
+ :param output_column the column index where the output is located
+ return group average odds difference score
</code_context>
<issue_to_address>
Typo in parameter name 'postive_class'.
Please correct the spelling to 'positive_class'.
Suggested implementation:
```python
positive_class: List[int],
```
```python
:param positive_class the favorable / desirable outputs
```
```python
return GroupAverageOddsDifference.calculate(samples, truth, privilege_columns, privilege_values, positive_class, output_column)
```
</issue_to_address>
### Comment 2
<location> `src/core/metrics/fairness/group/group_average_predictive_value_difference.py:61` </location>
<code_context>
+ utp, utn, ufp, ufn = ucm["tp"], ucm["tn"], ucm["fp"], ucm["fn"]
+ ptp, ptn, pfp, pfn = pcm["tp"], pcm["tn"], pcm["fp"], pcm["fn"]
+
+ return (utp / (utp + ufp) - ptp / (ptp + pfp + 1e-10)) / 2 + \
+ (ufn / (ufn + utn) - pfn / (pfn + ptn + 1e-10)) / 2
</code_context>
<issue_to_address>
Potential division by zero in denominators without epsilon.
Add a small epsilon (e.g., 1e-10) to (utp + ufp) and (ufn + utn) to prevent possible division by zero and ensure numerical stability.
</issue_to_address>
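A minimal sketch of the guarded expression, reusing the variable names from the snippet above (`eps` is illustrative):

```python
eps = 1e-10  # guard every denominator, not only the privileged ones

return (utp / (utp + ufp + eps) - ptp / (ptp + pfp + eps)) / 2 + \
       (ufn / (ufn + utn + eps) - pfn / (pfn + ptn + eps)) / 2
```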
### Comment 3
<location> `src/core/metrics/fairness/group/disparate_impact_ratio.py:29` </location>
<code_context>
+ """
+ outputs = model.predict(samples)
+ data = np.append(samples, outputs, axis=1)
+ privileged = np.sum(data[:, privilege_columns] == privilege_values)
+ unprivileged = np.sum(data[:, privilege_columns] != privilege_values)
+
+ return DisparateImpactRatio.calculate(privileged, unprivileged, favorable_output)
</code_context>
<issue_to_address>
Incorrect assignment: privileged/unprivileged should be arrays, not counts.
Assign 'privileged' and 'unprivileged' as arrays of samples using boolean indexing, not as scalar counts, to match the expected input for 'calculate'.
</issue_to_address>
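A sketch of that fix, keeping complete rows via boolean indexing (this also applies the `np.all` reduction discussed in Comment 4 below):

```python
# Select whole rows rather than counting matches.
mask = np.all(data[:, privilege_columns] == privilege_values, axis=1)
privileged = data[mask]
unprivileged = data[~mask]

return DisparateImpactRatio.calculate(privileged, unprivileged, favorable_output)
```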
### Comment 4
<location> `src/core/metrics/fairness/group/group_statistical_parity_difference.py:29` </location>
<code_context>
+ """
+ outputs = model.predict(samples)
+ data = np.append(samples, outputs, axis=1)
+ privileged = data[np.where(data[:, privilege_columns] == privilege_values)]
+ unprivileged = data[np.where(data[:, privilege_columns] != privilege_values)]
+
+ return GroupStatisticalParityDifference.calculate(privileged, unprivileged, favorable_output)
</code_context>
<issue_to_address>
Potential misuse of np.where for multi-column privilege selection.
Use 'np.all(data[:, privilege_columns] == privilege_values, axis=1)' for correct multi-column matching.
</issue_to_address>
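A small self-contained demonstration of why the element-wise comparison needs the `np.all` reduction when more than one privilege column is involved (the toy data is illustrative):

```python
import numpy as np

data = np.array([[1, 0, 5],
                 [1, 1, 7],
                 [0, 1, 9]])
privilege_columns = [0, 1]
privilege_values = [1, 1]

# np.where on the 2-D comparison yields (row, col) coordinate pairs, so any
# row that matches in just one column is selected.
rows_where = np.unique(np.where(data[:, privilege_columns] == privilege_values)[0])
print(rows_where)   # [0 1 2] -- every row matches somewhere

# Reducing across columns keeps only rows matching every privileged value.
mask = np.all(data[:, privilege_columns] == privilege_values, axis=1)
print(data[mask])   # [[1 1 7]] -- the only fully privileged row
```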
### Comment 5
<location> `src/core/metrics/fairness/fairness_metrics_utils.py:3` </location>
<code_context>
+import numpy as np
+
+def filter_rows_by_inputs(data, filter_func):
+ return data[np.apply_along_axis(filter_func, 1, data)]
+
</code_context>
<issue_to_address>
No type annotations for function parameters.
Please add type annotations for 'data' and 'filter_func' to enhance clarity and maintainability.
</issue_to_address>
<suggested_fix>
<<<<<<< SEARCH
def filter_rows_by_inputs(data, filter_func):
return data[np.apply_along_axis(filter_func, 1, data)]
=======
from typing import Callable
import numpy as np
def filter_rows_by_inputs(
data: np.ndarray,
filter_func: Callable[[np.ndarray], bool]
) -> np.ndarray:
return data[np.apply_along_axis(filter_func, 1, data)]
>>>>>>> REPLACE
</suggested_fix>
### Comment 6
<location> `tests/metrics/test_fairness.py:164` </location>
<code_context>
+ assert score == approx(spd, abs=1e-5)
+
+
+def test_average_odds_difference():
+ aod = average_odds_difference(y, y_pred, prot_attr="Gender", priv_group="Male", pos_label=1)
+
+ score = GroupAverageOddsDifference.calculate(
+ test=data_pred,
+ truth=data,
+ privilege_columns=[2],
+ privilege_values=["Male"],
+ positive_class=1,
+ output_column=-1
+ )
+
+ assert score == approx(aod, abs=0.2)
+
+
</code_context>
<issue_to_address>
Test for average odds difference uses a high tolerance in assertion.
Consider reducing the abs tolerance from 0.2, or document the reason for using such a high value.
</issue_to_address>
<suggested_fix>
<<<<<<< SEARCH
assert score == approx(aod, abs=0.2)
=======
# Use a tight tolerance to ensure precision, unless numerical instability is expected.
assert score == approx(aod, abs=1e-5)
>>>>>>> REPLACE
</suggested_fix>
### Comment 7
<location> `tests/metrics/test_fairness.py:26` </location>
<code_context>
+from src.core.metrics.fairness.group.group_statistical_parity_difference import GroupStatisticalParityDifference
+from src.core.metrics.fairness.individual.individual_consistency import IndividualConsistency
+
+df = pd.read_csv(
+ "https://raw.githubusercontent.com/trustyai-explainability/model-collection/8aa8e2e762c6d2b41dbcbe8a0035d50aa5f58c93/bank-churn/data/train.csv",
+)
+X = df.drop(columns=["Exited"], axis=1)
+y = df["Exited"]
+
+def train_model():
</code_context>
<issue_to_address>
Tests depend on external data source.
Loading test data from a remote CSV can cause tests to fail if the file changes or is unavailable. Use a static dataset in the repository for consistent, reliable tests.
Suggested implementation:
```python
df = pd.read_csv(
"tests/data/bank_churn_train.csv",
)
```
You must ensure that the file `tests/data/bank_churn_train.csv` exists in your repository and contains the same data as the remote CSV. If it does not exist, download the file from the remote URL and add it to your repository at the specified path.
</issue_to_address>
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.
`tests/metrics/test_fairness.py` (Outdated)

```python
def test_disparate_impact_ratio():
    dir = disparate_impact_ratio(y, prot_attr="Gender", priv_group="Male", pos_label=1)
```
issue (code-quality): Don't assign to builtin variable `dir` (`avoid-builtin-shadow`)

Explanation: Python has a number of builtin variables: functions and constants that form a part of the language, such as `list`, `getattr`, and `type` (see https://docs.python.org/3/library/functions.html). It is valid, in the language, to re-bind such variables:

```python
list = [1, 2, 3]
```

However, this is considered poor practice.
- It will confuse other developers.
- It will confuse syntax highlighters and linters.
- It means you can no longer use that builtin for its original purpose.

How can you solve this? Rename the variable something more specific, such as `integers`. In a pinch, `my_list` and similar names are colloquially-recognized placeholders.
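For instance, the shadowing is avoided by a more descriptive name (`dir_score` is just an illustration):

```python
dir_score = disparate_impact_ratio(y, prot_attr="Gender", priv_group="Male", pos_label=1)
```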
force-pushed from be0338e to 167e5f1
…ustyAI Python Service (167e5f1)
force-pushed from 167e5f1 to cd7c5a6
Codecov Report

Attention: Patch coverage is …

❗ Your organization needs to install the Codecov GitHub app to enable full functionality.

Additional details and impacted files:

```
@@ Coverage Diff @@
##             main      #5   +/-   ##
=======================================
  Coverage        ?   61.84%
=======================================
  Files           ?       39
  Lines           ?     2351
  Branches        ?        0
=======================================
  Hits            ?     1454
  Misses          ?      897
  Partials        ?        0
```

☔ View full report in Codecov by Sentry.
@christinaexyou Let's merge as it is and address any issues on separate PRs 👍
Summary by Sourcery
Implement a suite of group and individual fairness metrics with corresponding utility functions, and validate them through tests against AIF360 references.