Separate each search type into its own mlflow run to allow comparison using Azure ML / mlflow #540
guybartal added a commit that referenced this issue on May 18, 2024:
closes #480

### This PR includes
- Log all hyperparameters to mlflow
- Config refactoring: lowercase all attributes, since they are not constants (matches coding conventions)
- Print the Azure ML monitoring URL right after job creation to allow easy access to monitoring (ctrl + left click)
- Fix issues with experiment and job names, so that Azure ML commands open the experiment and mlflow runs automatically, while locally we create them manually
- Remove unused experiment settings from the `.env` sample file
- Hide azureml warnings by using `CliV2AnonymousEnvironment` as the Azure ML environment name
- Temporary workaround for malformed JSON in the Q&A generation step with the current CI generation model version, by removing all `"..."` strings from the model response

### WIP #540
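The JSON workaround in the last bullet could look like the following minimal sketch. The function name and the choice to substitute an empty string (rather than deleting the token outright, which would leave the JSON malformed) are assumptions for illustration, not the PR's actual code:

```python
import json

def strip_ellipsis_strings(response: str) -> str:
    # Hypothetical helper: replace literal "..." placeholder strings that the
    # generation model sometimes emits, which otherwise break json.loads().
    # Substituting an empty string keeps the JSON well-formed.
    return response.replace('"..."', '""')

# Example: a response the model padded with a "..." placeholder value
raw = '{"question": "What is RAG?", "answer": "..."}'
parsed = json.loads(strip_ellipsis_strings(raw))
```
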
As a researcher, I would like to compare metric results across different types of search approaches in Azure ML / mlflow, so that I can choose the best type of search for my case.

Related to #529
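What "a separate mlflow run per search type" could look like with the public mlflow API, as a hedged sketch (the function names, the run-naming scheme, and the `results` shape are assumptions, not the repo's actual implementation):

```python
def build_run_name(experiment: str, search_type: str) -> str:
    # Hypothetical naming helper: "<experiment>_<search_type>" keeps runs
    # grouped in the Azure ML UI while staying unique per search type.
    return f"{experiment}_{search_type}"

def log_per_search_type(results: dict, experiment: str = "rag_eval") -> None:
    # results: {"bm25": {"map_at_3": 0.41, ...}, "vector": {...}, ...}
    import mlflow  # deferred import; assumed available in the Azure ML job environment
    mlflow.set_experiment(experiment)
    for search_type, metrics in results.items():
        # One run per search type enables side-by-side comparison in the
        # mlflow / Azure ML run-comparison view.
        with mlflow.start_run(run_name=build_run_name(experiment, search_type)):
            for metric, value in metrics.items():
                mlflow.log_metric(metric, value)
```
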
Note: the eval step currently averages all metrics across all search types and logs only the mean value per metric to mlflow, but it also uploads the full detailed table to Azure ML.
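Restructuring that aggregation to group by search type first could look like the following sketch (the row schema and function name are assumptions for illustration, not the repo's actual code):

```python
from collections import defaultdict
from statistics import mean

def mean_metrics_per_search_type(rows):
    # rows: flat eval results, one dict per (search type, metric) measurement,
    # e.g. {"search_type": "bm25", "metric": "recall", "value": 0.4}.
    # Grouping by search type first lets each type be logged to its own
    # mlflow run, instead of collapsing everything into one global mean.
    grouped = defaultdict(lambda: defaultdict(list))
    for row in rows:
        grouped[row["search_type"]][row["metric"]].append(row["value"])
    return {
        search_type: {metric: mean(values) for metric, values in metrics.items()}
        for search_type, metrics in grouped.items()
    }

# Example with two search types sharing one metric
per_type = mean_metrics_per_search_type([
    {"search_type": "bm25", "metric": "recall", "value": 0.4},
    {"search_type": "bm25", "metric": "recall", "value": 0.6},
    {"search_type": "vector", "metric": "recall", "value": 0.8},
])
```
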
DoD: (`IndexConfig` class)

Tasks