ADA-SVR (2/4) Test for models #101

Draft · wants to merge 29 commits into main
Conversation

lionelkusch (Collaborator)

This pull request includes modifications from other PRs (#74, #73, #99, #100).

This pull request adds tests for ada_svr: a functional test to complement the existing unit tests of the functions.

@lionelkusch lionelkusch requested review from bthirion, Remi-Gau and jpaillard and removed request for jpaillard December 26, 2024 13:24
@lionelkusch lionelkusch changed the title Test for models ADA-SVR (4/4) Test for models Dec 26, 2024
@lionelkusch lionelkusch changed the title ADA-SVR (4/4) Test for models ADA-SVR (2/4) Test for models Feb 25, 2025
codecov bot commented Feb 25, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 100.00%. Comparing base (52d640d) to head (4c23f91).
Report is 46 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff            @@
##              main      #101   +/-   ##
=========================================
  Coverage   100.00%   100.00%           
=========================================
  Files           21        21           
  Lines          923       942   +19     
=========================================
+ Hits           923       942   +19     

☔ View full report in Codecov by Sentry.
📢 Have feedback on the report? Share it here.


jpaillard (Collaborator)

Since this is about tests, would creating fixtures in the conftest.py with a broad scope (like package or session) make sense?
This could, for instance, avoid re-generating the multivariate_1D_simulation in different tests.
It's probably not a game changer, but it could help optimize the testing pipeline.

bthirion (Collaborator) left a comment

LGTM overall, thx.

estimator.fit(X, y)
beta_hat_svr = estimator.coef_

# compare that the coefficiants are the same that the one of SVR
Suggested change
# compare that the coefficiants are the same that the one of SVR
# check that the coefficients are the same as the ones of SVR

assert np.max(np.abs(beta_hat - beta_hat_svr.T[:, 0])) < 2e-4

weights, weights_distribution = permutation_test(
    X, y, estimator=estimator, n_permutations=10000, n_jobs=8, seed=42
)

In a test, you probably want to have a much lower number of permutations
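To make the reviewer's point concrete: the runtime of a permutation test scales linearly with `n_permutations`, so a test suite only needs enough permutations to exercise the code path, not to get a tight null distribution. The sketch below is a generic refit-on-permuted-labels scheme, not hidimstat's `permutation_test` implementation (its function name and return values here are illustrative assumptions), run with a small permutation count suitable for a test.

```python
# Generic permutation scheme (illustrative, not hidimstat's implementation):
# refit the estimator on label-permuted data to build a null distribution.
import numpy as np
from sklearn.svm import SVR


def permutation_weights(X, y, estimator, n_permutations=100, seed=None):
    rng = np.random.default_rng(seed)
    estimator.fit(X, y)
    weights = estimator.coef_.ravel().copy()  # observed coefficients
    null = np.empty((n_permutations, X.shape[1]))
    for i in range(n_permutations):
        # each refit costs as much as the original fit, hence the linear scaling
        null[i] = estimator.fit(X, rng.permutation(y)).coef_.ravel()
    return weights, null


rng = np.random.default_rng(0)
X = rng.standard_normal((30, 5))
y = X[:, 0] + 0.1 * rng.standard_normal(30)
# 50 permutations is plenty for a smoke test; 10000 belongs in an analysis, not CI
w, null = permutation_weights(X, y, SVR(kernel="linear"), n_permutations=50, seed=42)
```

Dropping from 10000 to a few tens of permutations cuts this step's cost by two to three orders of magnitude while still covering the same code.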

@lionelkusch lionelkusch marked this pull request as draft March 18, 2025 16:57
@lionelkusch lionelkusch added the test Question link to tests label Apr 11, 2025
Labels
test Question link to tests
3 participants