Add Notes for run.sh Script in README.md #46

Merged: 9 commits, Feb 14, 2025
4 changes: 4 additions & 0 deletions README.md
@@ -63,3 +63,7 @@ scoring_program/
- Any requirements used by participants must be on the approved whitelist (or participants must reach out to request their addition) for security purposes.
- Scores must be saved to a `score.json` file whose keys are the ones detailed in the `Leaderboard` section of the `competition.yaml`.
- This full collection of files and folders is zipped as-is to upload the bundle to CodaBench.
- `run.sh` is a bash script that simulates the Leaderboard process for local testing. First build the docker container, then run the script inside that environment. The script will:
- Create a folder `/ref` for the ground truth csv file and a folder `/res` for the generated prediction txt file.
- Run `ingestion_program/ingestion.py` to get the predictions from your model and write them to the txt file.
- Run `scoring_program/score_combined.py` to evaluate the predictions against the ground truth. The final scores are then written to a json file.
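A minimal sketch, in bash, of the local testing flow the new notes describe. The data split name, the ground-truth path, the arguments to `score_combined.py`, and the example score key are assumptions for illustration, not values taken from the repository:

```bash
# Sketch only: simulate the Leaderboard flow locally (names and paths are illustrative).
export data_split="val"                    # assumed split name
export task_folder="${data_split}_bioclip"

# 1. Folder layout: ground truth csv in ref/, prediction txt in res/.
mkdir -p sample_result_submission/$task_folder/ref
mkdir -p sample_result_submission/$task_folder/res
cp /path/to/ref_$data_split.csv sample_result_submission/$task_folder/ref/

# 2. Generate predictions (same argument order as in run.sh).
python3 ingestion_program/ingestion.py \
    input_data/$data_split \
    sample_result_submission/$task_folder/res \
    ingestion_program \
    baselines/BioCLIP_code_submission

# 3. Score the predictions. Assuming score_combined.py takes the result and
#    output directories as arguments, it writes the final scores to score.json
#    with keys matching the Leaderboard section of competition.yaml
#    (e.g. {"accuracy": 0.87}; the key name is only an example).
python3 scoring_program/score_combined.py \
    sample_result_submission/$task_folder \
    sample_result_submission/$task_folder
```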
19 changes: 9 additions & 10 deletions run.sh
@@ -6,9 +6,9 @@
docker pull [image_id]

1.a.1. If use a GPU:
- docker run -it --gpus device=0 -v [repo path]:/codabench [image_id] /bin/bash
+ docker run -it --gpus device=0 -v [repo_path]:/codabench [image_id] /bin/bash
1.a.2. If only use CPU:
- docker run -it -v [repo path]:/codabench [image_id] /bin/bash
+ docker run -it -v [repo_path]:/codabench [image_id] /bin/bash

cd codabench

@@ -34,12 +34,12 @@ export baseline_model="bioclip"
export task_folder="${data_split}_${baseline_model}"


- ## create folder structure
+ # Create folder structure
if [[ "$task_type" == *"folder"* ]]; then
mkdir -p sample_result_submission/$task_folder/ref
mkdir -p sample_result_submission/$task_folder/res
- export ref_path="/home/wu.5686/imageo/challenge/reference_data/ref_$data_split.csv"
- cp $ref_path sample_result_submission/$task_folder/ref
+ export ref_path="[path of the folder containing the simulated ground truth csv file]/ref_$data_split.csv"
+ cp $ref_path sample_result_submission/$task_folder/ref # Put the simulated ground truth csv file in the ref folder.
fi

: <<'END_COMMENT'
@@ -51,11 +51,10 @@ sample_result_submission
-- res
END_COMMENT

- ## get the predictions
+ # Get the predictions
if [[ "$task_type" == *"predict"* ]]; then
- export input_dir="input_data/$data_split"
- # export input_dir="/local/scratch/wu.5686/anomaly_challenge/input_data/$data_split"
- export output_dir="sample_result_submission/$task_folder/res"
+ export input_dir="input_data/$data_split" # The directory where you put the input images.
+ export output_dir="sample_result_submission/$task_folder/res" # The prediction file will be written to this directory.
export program_dir="ingestion_program"
if [ "$baseline_model" == "bioclip" ]; then
export submission_dir="baselines/BioCLIP_code_submission"
@@ -69,7 +68,7 @@ if [[ "$task_type" == *"predict"* ]]; then
python3 ingestion_program/ingestion.py $input_dir $output_dir $program_dir $submission_dir
fi

- ## score the predictions
+ # Score the predictions
if [[ "$task_type" == *"evaluate"* ]]; then
export input_dir="sample_result_submission/$task_folder"
export output_dir="sample_result_submission/$task_folder"
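For completeness, a hedged example of filling in the placeholders in the docker commands at the top of `run.sh`. The image name and repository path below are hypothetical and should be replaced with the ones provided for the challenge:

```bash
# Example only: the image name and repo path are hypothetical placeholders.
export IMAGE_ID="codalab/competition-gpu:latest"   # hypothetical image name
export REPO_PATH="$HOME/codabench-bundle"          # hypothetical local clone of this repo

docker pull $IMAGE_ID
# With a GPU (drop --gpus device=0 for CPU-only):
docker run -it --gpus device=0 -v $REPO_PATH:/codabench $IMAGE_ID /bin/bash

# Then, inside the container:
cd codabench
bash run.sh
```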