
feat(RHOAIENG-21045): Add fairness metrics and tests #5


Merged

Conversation


@christinaexyou christinaexyou commented Mar 25, 2025

Summary by Sourcery

Implement a suite of group and individual fairness metrics with corresponding utility functions, and validate them through tests against AIF360 references.

New Features:

  • Add group fairness metric classes: DisparateImpactRatio, GroupStatisticalParityDifference, GroupAverageOddsDifference, and GroupAveragePredictiveValueDifference
  • Add IndividualConsistency class for measuring individual fairness

Enhancements:

  • Introduce fairness_metrics_utils with row filtering and confusion matrix helpers

Tests:

  • Add end-to-end tests comparing new fairness metrics to AIF360 implementations on a real dataset

@christinaexyou christinaexyou force-pushed the add-fairness-metrics branch from b83a31e to 1e28682 on May 8, 2025 15:03
@christinaexyou christinaexyou requested a review from RobGeada May 8, 2025 15:03
@ruivieira ruivieira linked an issue Jun 16, 2025 that may be closed by this pull request
@ruivieira ruivieira added the enhancement New feature or request label Jun 16, 2025
@ruivieira ruivieira moved this to In Review in TrustyAI planning Jun 16, 2025
@ruivieira (Member) commented:

@sourcery-ai review


sourcery-ai bot commented Jun 27, 2025

Reviewer's Guide

This PR introduces implementations for group and individual fairness metrics (disparate impact ratio, statistical parity difference, average odds/predictive value difference, and consistency) with a shared utility module, alongside a comprehensive pytest suite that validates each metric against AIF360 and reference calculations.

Class diagram for new fairness metrics classes

classDiagram
    class DisparateImpactRatio {
        +static calculate_model(samples, model, privilege_columns, privilege_values, favorable_output)
        +static calculate(privileged, unprivileged, favorable_output)
    }
    class GroupStatisticalParityDifference {
        +static calculate_model(samples, model, privilege_columns, privilege_values, favorable_output)
        +static calculate(privileged, unprivileged, favorable_output)
    }
    class GroupAverageOddsDifference {
        +static calculate_model(samples, model, privilege_columns, privilege_values, postive_class, output_column)
        +static calculate(test, truth, privilege_columns, privilege_values, positive_class, output_column)
    }
    class GroupAveragePredictiveValueDifference {
        +static calculate_model(samples, model, privilege_columns, privilege_values, positive_class, output_column)
        +static calculate(test, truth, privilege_columns, privilege_values, positive_class, output_column)
    }
    class IndividualConsistency {
        +static calculate(proximity_function, samples, prediction_provider)
    }
    class fairness_metrics_utils {
        +filter_rows_by_inputs(data, filter_func)
        +calculate_confusion_matrix(test, truth, positive_class)
    }

    GroupAverageOddsDifference ..> fairness_metrics_utils : uses
    GroupAveragePredictiveValueDifference ..> fairness_metrics_utils : uses
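For orientation, the two simplest group metrics above reduce to favorable-outcome rates per group. A minimal NumPy illustration of those definitions (synthetic values only, independent of the PR's API):

```python
import numpy as np

# Toy binary predictions: 1 = favorable output, split by group membership.
privileged_outputs = np.array([1, 1, 0, 1])
unprivileged_outputs = np.array([1, 0, 0, 0])

p_priv = privileged_outputs.mean()      # P(favorable | privileged)   = 0.75
p_unpriv = unprivileged_outputs.mean()  # P(favorable | unprivileged) = 0.25

disparate_impact_ratio = p_unpriv / p_priv          # DisparateImpactRatio ~ 0.33
statistical_parity_difference = p_unpriv - p_priv   # GroupStatisticalParityDifference = -0.5
```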

Class diagram for fairness_metrics_utils module

classDiagram
    class fairness_metrics_utils {
        +filter_rows_by_inputs(data, filter_func)
        +calculate_confusion_matrix(test, truth, positive_class)
    }
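The confusion-matrix helper is only named in the diagrams, but the review comments below read tp/tn/fp/fn keys from its result. A plausible shape, shown purely as a sketch (the PR's actual implementation may differ):

```python
import numpy as np

def calculate_confusion_matrix(test: np.ndarray, truth: np.ndarray, positive_class) -> dict:
    # Count binary outcomes against the positive class; the dict keys mirror
    # those used in group_average_predictive_value_difference.py below.
    tp = int(np.sum((test == positive_class) & (truth == positive_class)))
    tn = int(np.sum((test != positive_class) & (truth != positive_class)))
    fp = int(np.sum((test == positive_class) & (truth != positive_class)))
    fn = int(np.sum((test != positive_class) & (truth == positive_class)))
    return {"tp": tp, "tn": tn, "fp": fp, "fn": fn}
```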

File-Level Changes

Change | Details | Files
Add group fairness metric implementations
  • Define static calculate / calculate_model methods for each metric
  • Filter privileged vs. unprivileged groups using numpy predicates
  • Compute probabilities or confusion-matrix-based scores
  • Return metric values within expected ranges
src/core/metrics/fairness/group/disparate_impact_ratio.py
src/core/metrics/fairness/group/group_statistical_parity_difference.py
src/core/metrics/fairness/group/group_average_odds_difference.py
src/core/metrics/fairness/group/group_average_predictive_value_difference.py
Implement individual fairness via consistency score (see the sketch after this table)
  • Introduce IndividualConsistency.calculate combining predictions across neighbors
  • Decrease consistency proportionally for differing neighbor outputs per sample
  • Leverage external proximity function for k-NN neighbor retrieval
src/core/metrics/fairness/individual/individual_consistency.py
Create shared utility functions for metrics
  • filter_rows_by_inputs to select rows matching arbitrary predicates
  • calculate_confusion_matrix to derive TP/TN/FP/FN counts from labels
src/core/metrics/fairness/fairness_metrics_utils.py
Add pytest suite validating all fairness metrics
  • Load sample dataset and train a logistic regression model
  • Compare each metric against AIF360 functions with pytest.approx
  • Implement helper classes for mock, perfect, and random predictions
  • Define k-NN neighbor finder and data processing utilities
tests/metrics/test_fairness.py
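The consistency score described in the individual-fairness entry above can be sketched as follows; this is an illustration of the described behaviour, not the PR's exact code, and the callable contracts for proximity_function and prediction_provider are assumptions:

```python
import numpy as np

def individual_consistency(proximity_function, samples, prediction_provider):
    # proximity_function(sample, samples) -> the sample's k nearest neighbours
    # prediction_provider(rows)           -> model outputs for those rows
    consistency = 1.0
    predictions = prediction_provider(samples)
    for sample, prediction in zip(samples, predictions):
        neighbours = proximity_function(sample, samples)
        neighbour_predictions = prediction_provider(neighbours)
        for neighbour_prediction in neighbour_predictions:
            if not np.array_equal(neighbour_prediction, prediction):
                # each disagreeing neighbour lowers the score proportionally
                consistency -= 1.0 / (len(neighbours) * len(samples))
    return consistency
```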

Possibly linked issues

  • #21045: Adds fairness metrics and tests including disparate impact, statistical parity, average odds, average predictive value, and individual consistency.


@sourcery-ai sourcery-ai bot left a comment

Hey @christinaexyou - I've reviewed your changes and found some issues that need to be addressed.

Blocking issues:

  • Potential division by zero in denominators without epsilon. (link)

General comments:

  • The tests pull data from a remote CSV which can be brittle and slow—consider using a small in-memory fixture or synthetic dataset to make them faster and more reliable.
  • The group fairness metrics implementations share almost identical filtering and confusion‐matrix code—extract those common patterns into shared helpers to reduce duplication.
  • There’s a typo in the parameter name (postive_class) and inconsistent epsilon handling across metrics—standardize parameter names and zero‐division guards for clarity and correctness.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- The tests pull data from a remote CSV which can be brittle and slow—consider using a small in-memory fixture or synthetic dataset to make them faster and more reliable.
- The group fairness metrics implementations share almost identical filtering and confusion‐matrix code—extract those common patterns into shared helpers to reduce duplication.
- There’s a typo in the parameter name (`postive_class`) and inconsistent epsilon handling across metrics—standardize parameter names and zero‐division guards for clarity and correctness.

## Individual Comments

### Comment 1
<location> `src/core/metrics/fairness/group/group_average_odds_difference.py:27` </location>
<code_context>
+        :param model the model to be tested for fairness
+        :param privilege_columns a list of integers specifying the indices of the privileged columns
+        :param privilege_values a list of intergers specifying the privileged values
+        :param postive_class the favorable / desirable outputs
+        :param output_column the column index where the output is located
+        return group average odds difference score
</code_context>

<issue_to_address>
Typo in parameter name 'postive_class'.

Please correct the spelling to 'positive_class'.

Suggested implementation:

```python
        positive_class: List[int],

```

```python
        :param positive_class the favorable / desirable outputs

```

```python
        return GroupAverageOddsDifference.calculate(samples, truth, privilege_columns, privilege_values, positive_class, output_column)

```
</issue_to_address>

### Comment 2
<location> `src/core/metrics/fairness/group/group_average_predictive_value_difference.py:61` </location>
<code_context>
+        utp, utn, ufp, ufn = ucm["tp"], ucm["tn"], ucm["fp"], ucm["fn"]
+        ptp, ptn, pfp, pfn = pcm["tp"], pcm["tn"], pcm["fp"], pcm["fn"]
+
+        return (utp / (utp + ufp) - ptp / (ptp + pfp + 1e-10)) / 2 + \
+            (ufn / (ufn + utn) - pfn / (pfn + ptn + 1e-10)) / 2
</code_context>

<issue_to_address>
Potential division by zero in denominators without epsilon.

Add a small epsilon (e.g., 1e-10) to (utp + ufp) and (ufn + utn) to prevent possible division by zero and ensure numerical stability.
</issue_to_address>
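One way to apply the suggestion consistently is to guard every denominator with the same epsilon. Wrapped as a hypothetical helper purely for illustration (this function name does not exist in the PR):

```python
def _avg_predictive_value_difference(utp, utn, ufp, ufn, ptp, ptn, pfp, pfn, eps=1e-10):
    # eps guards all four denominators, not only the privileged-group ones
    return (utp / (utp + ufp + eps) - ptp / (ptp + pfp + eps)) / 2 + \
           (ufn / (ufn + utn + eps) - pfn / (pfn + ptn + eps)) / 2
```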

### Comment 3
<location> `src/core/metrics/fairness/group/disparate_impact_ratio.py:29` </location>
<code_context>
+        """
+        outputs = model.predict(samples)
+        data = np.append(samples, outputs, axis=1)
+        privileged = np.sum(data[:, privilege_columns] == privilege_values)
+        unprivileged = np.sum(data[:, privilege_columns] != privilege_values)
+
+        return DisparateImpactRatio.calculate(privileged, unprivileged, favorable_output)
</code_context>

<issue_to_address>
Incorrect assignment: privileged/unprivileged should be arrays, not counts.

Assign 'privileged' and 'unprivileged' as arrays of samples using boolean indexing, not as scalar counts, to match the expected input for 'calculate'.
</issue_to_address>
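A sketch of the suggested boolean-indexing fix, reusing the variable names from the quoted context (a single privilege column/value pair is assumed here; see the next comment for the multi-column form):

```python
mask = data[:, privilege_columns[0]] == privilege_values[0]
privileged = data[mask]      # rows of privileged samples
unprivileged = data[~mask]   # rows of unprivileged samples
```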

### Comment 4
<location> `src/core/metrics/fairness/group/group_statistical_parity_difference.py:29` </location>
<code_context>
+        """
+        outputs = model.predict(samples)
+        data = np.append(samples, outputs, axis=1)
+        privileged = data[np.where(data[:, privilege_columns] == privilege_values)]
+        unprivileged = data[np.where(data[:, privilege_columns] != privilege_values)]
+
+        return GroupStatisticalParityDifference.calculate(privileged, unprivileged, favorable_output)
</code_context>

<issue_to_address>
Potential misuse of np.where for multi-column privilege selection.

Use 'np.all(data[:, privilege_columns] == privilege_values, axis=1)' for correct multi-column matching.
</issue_to_address>
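The multi-column variant suggested above, again with the variable names from the quoted context (np is numpy, as in that context):

```python
mask = np.all(data[:, privilege_columns] == privilege_values, axis=1)
privileged = data[mask]
unprivileged = data[~mask]
```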

### Comment 5
<location> `src/core/metrics/fairness/fairness_metrics_utils.py:3` </location>
<code_context>
+import numpy as np
+
+def filter_rows_by_inputs(data, filter_func):
+    return data[np.apply_along_axis(filter_func, 1, data)]
+
</code_context>

<issue_to_address>
No type annotations for function parameters.

Please add type annotations for 'data' and 'filter_func' to enhance clarity and maintainability.
</issue_to_address>

<suggested_fix>
<<<<<<< SEARCH
def filter_rows_by_inputs(data, filter_func):
    return data[np.apply_along_axis(filter_func, 1, data)]
=======
from typing import Callable
import numpy as np

def filter_rows_by_inputs(
    data: np.ndarray,
    filter_func: Callable[[np.ndarray], bool]
) -> np.ndarray:
    return data[np.apply_along_axis(filter_func, 1, data)]
>>>>>>> REPLACE

</suggested_fix>
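A quick usage note for the annotated helper: the predicate runs once per row, so a privileged-group filter can be expressed as a lambda over the row (values below are illustrative only):

```python
import numpy as np

data = np.array([[1, 10],
                 [0, 20],
                 [1, 30]])

# Keep rows whose first column equals 1 (e.g. the privileged value).
privileged_rows = filter_rows_by_inputs(data, lambda row: row[0] == 1)
# -> array([[ 1, 10],
#           [ 1, 30]])
```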

### Comment 6
<location> `tests/metrics/test_fairness.py:164` </location>
<code_context>
+    assert score == approx(spd, abs=1e-5)
+
+
+def test_average_odds_difference():
+    aod = average_odds_difference(y, y_pred, prot_attr="Gender", priv_group="Male", pos_label=1)
+
+    score = GroupAverageOddsDifference.calculate(
+        test=data_pred,
+        truth=data,
+        privilege_columns=[2],
+        privilege_values=["Male"],
+        positive_class=1,
+        output_column=-1
+    )
+
+    assert score == approx(aod, abs=0.2)
+
+
</code_context>

<issue_to_address>
Test for average odds difference uses a high tolerance in assertion.

Consider reducing the abs tolerance from 0.2, or document the reason for using such a high value.
</issue_to_address>

<suggested_fix>
<<<<<<< SEARCH
    assert score == approx(aod, abs=0.2)
=======
    # Use a tight tolerance to ensure precision, unless numerical instability is expected.
    assert score == approx(aod, abs=1e-5)
>>>>>>> REPLACE

</suggested_fix>

### Comment 7
<location> `tests/metrics/test_fairness.py:26` </location>
<code_context>
+from src.core.metrics.fairness.group.group_statistical_parity_difference import GroupStatisticalParityDifference
+from src.core.metrics.fairness.individual.individual_consistency import IndividualConsistency
+
+df = pd.read_csv(
+    "https://raw.githubusercontent.com/trustyai-explainability/model-collection/8aa8e2e762c6d2b41dbcbe8a0035d50aa5f58c93/bank-churn/data/train.csv",
+)
+X = df.drop(columns=["Exited"], axis=1)
+y = df["Exited"]
+
+def train_model():
</code_context>

<issue_to_address>
Tests depend on external data source.

Loading test data from a remote CSV can cause tests to fail if the file changes or is unavailable. Use a static dataset in the repository for consistent, reliable tests.

Suggested implementation:

```python
df = pd.read_csv(
    "tests/data/bank_churn_train.csv",
)

```

You must ensure that the file `tests/data/bank_churn_train.csv` exists in your repository and contains the same data as the remote CSV. If it does not exist, download the file from the remote URL and add it to your repository at the specified path.
</issue_to_address>
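As an alternative to checking the CSV into the repository, the overall comment about brittle remote data could also be addressed with a small synthetic in-memory fixture. A sketch under the assumption that only a few columns (notably Gender and Exited) matter for the fairness tests:

```python
import pandas as pd
import pytest

@pytest.fixture
def churn_df():
    # Tiny synthetic frame mirroring the columns the tests rely on (assumed).
    return pd.DataFrame({
        "CreditScore": [600, 650, 700, 720, 580, 610],
        "Gender": ["Male", "Female", "Male", "Female", "Male", "Female"],
        "Age": [40, 35, 50, 28, 45, 33],
        "Exited": [0, 1, 0, 1, 0, 1],
    })
```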


def test_disparate_impact_ratio():
dir = disparate_impact_ratio(y, prot_attr="Gender", priv_group="Male", pos_label=1)

issue (code-quality): Don't assign to builtin variable dir (avoid-builtin-shadow)


Explanation: Python has a number of builtin variables: functions and constants that
form a part of the language, such as list, getattr, and type
(See https://docs.python.org/3/library/functions.html).
It is valid, in the language, to re-bind such variables:

list = [1, 2, 3]

However, this is considered poor practice.

  • It will confuse other developers.
  • It will confuse syntax highlighters and linters.
  • It means you can no longer use that builtin for its original purpose.

How can you solve this?

Rename the variable something more specific, such as integers.
In a pinch, my_list and similar names are colloquially-recognized
placeholders.
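Applied to the quoted test line, the rename could be as simple as the following (the name impact_ratio is just an illustrative choice):

```python
impact_ratio = disparate_impact_ratio(y, prot_attr="Gender", priv_group="Male", pos_label=1)
```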

@christinaexyou christinaexyou force-pushed the add-fairness-metrics branch 2 times, most recently from be0338e to 167e5f1 on July 1, 2025 15:44
@christinaexyou christinaexyou force-pushed the add-fairness-metrics branch from 167e5f1 to cd7c5a6 on July 2, 2025 14:28
@codecov-commenter

⚠️ Please install the Codecov GitHub app to ensure uploads and comments are reliably processed by Codecov.

Codecov Report

Attention: Patch coverage is 84.00000% with 16 lines in your changes missing coverage. Please review.

Please upload report for BASE (main@aa250e3). Learn more about missing BASE report.

Files with missing lines | Patch % | Lines
...e/metrics/fairness/group/disparate_impact_ratio.py | 66.66% | 5 Missing ⚠️
...rness/group/group_statistical_parity_difference.py | 66.66% | 5 Missing ⚠️
...cs/fairness/group/group_average_odds_difference.py | 86.36% | 3 Missing ⚠️
...group/group_average_predictive_value_difference.py | 86.36% | 3 Missing ⚠️

❗ Your organization needs to install the Codecov GitHub app to enable full functionality.

Additional details and impacted files
@@           Coverage Diff           @@
##             main       #5   +/-   ##
=======================================
  Coverage        ?   61.84%           
=======================================
  Files           ?       39           
  Lines           ?     2351           
  Branches        ?        0           
=======================================
  Hits            ?     1454           
  Misses          ?      897           
  Partials        ?        0           

☔ View full report in Codecov by Sentry.

@ruivieira ruivieira left a comment

@christinaexyou Let's merge as it is and address any issues on separate PRs 👍

@RobGeada RobGeada merged commit 150cf01 into trustyai-explainability:main Jul 3, 2025
3 of 5 checks passed
Labels: enhancement (New feature or request)
Projects: TrustyAI planning (Status: In Review)
Development: Successfully merging this pull request may close these issues: Add fairness metrics and tests
5 participants