
Commit

Make GMS HAND post-processing independent
RobHanna-NOAA committed Dec 23, 2022
1 parent 5de3995 commit 99f1601
Showing 14 changed files with 354 additions and 413 deletions.
54 changes: 51 additions & 3 deletions docs/CHANGELOG.md
@@ -1,6 +1,56 @@
All notable changes to this project will be documented in this file.
We follow the [Semantic Versioning 2.0.0](http://semver.org/) format.

## v4.0.16.0 - 2022-12-20 - [PR #768](https://github.com/NOAA-OWP/inundation-mapping/pull/768)

`gms_run_branch.sh` was processing all of the branches iteratively, then continuing on to a large post-processing portion of code. That has now been split into two files: one for branch iteration and the other for post-processing only.
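The resulting three-step flow can be sketched with stubs (the stub names match the real scripts, but their bodies here are placeholders only; the real scripts take flags such as `-n`, `-c`, and `-j`, as the `gms_pipeline.sh` diff below shows):

```shell
# Stubbed sketch of the pipeline order after this change.
gms_run_unit() { echo "units"; }                         # per-HUC unit processing
gms_run_branch() { echo "branches + output cleanup"; }   # branch iteration only now
gms_run_post_processing() { echo "post-processing"; }    # new, independent step

gms_run_unit
gms_run_branch
gms_run_post_processing
```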

Other minor changes include:
- Removed the system where a user could override `DropStreamOrders` to process streams with stream orders 1 and 2 independently, like other GMS branches. Only stream orders 3 and higher are now allowed as GMS branches; stream orders 1 and 2 will always be in branch zero.

- The `retry` flag on the three gms*.sh files has been removed. It did not work correctly, was not being used, and would have produced unreliable results.

### Additions

- `gms_run_post_processing.sh`
- Handles all tasks that previously ran after branch iteration in `gms_run_branch.sh`, except for output cleanup, which stayed in `gms_run_branch.sh`.
- Can be run completely independently of `gms_run_unit.sh` or `gms_run_branch.sh`, as long as all of the required files are in place, and can be re-run if desired.
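A sketch of invoking the new script on its own, mirroring how `gms_pipeline.sh` assembles its shared argument string (the run name, config path, and job count below are made-up illustrative values, not defaults):

```shell
# Build the shared arguments the same way gms_pipeline.sh does, then hand
# them to the post-processing script alone. All values are illustrative.
run_cmd=""
run_cmd+=" -n my_run"                  # run name (made up)
run_cmd+=" -c /path/to/params.env"     # config file (made up)
run_cmd+=" -j 4"                       # job limit (made up)
overwrite=1
if [ $overwrite -eq 1 ]; then run_cmd+=" -o" ; fi
echo "gms_run_post_processing.sh$run_cmd"
```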

### Changes

- `gms_pipeline.sh`
- Remove "retry" system.
- Remove "dropLowStreamOrders" system.
- Updated for newer reusable output date/time/duration system.
- Add call to new `gms_run_post_processing.sh` file.

- `gms_run_branch.sh`
- Remove "retry" system.
- Remove "dropLowStreamOrders" system.
- Updated for newer reusable output date/time/duration system.
- Moved most code below the branch iterator into the new `gms_run_post_processing.sh` file. However, branch file output cleanup and non-zero exit code checking were kept.

- `gms_run_unit.sh`
- Remove "retry" system.
- Remove "dropLowStreamOrders" system.
- Updated for newer reusable output date/time/duration system.

- `src`
- `bash_functions.env`: Added a new method to simplify calculating and displaying duration times.
- `filter_catchments_and_add_attributes.py`: Remove "dropLowStreamOrders" system.
- `split_flows.py`: Remove "dropLowStreamOrders" system.
- `usgs_gage_unit_setup.py`: Remove "dropLowStreamOrders" system.
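A minimal sketch of what such a duration helper could look like; the actual `Calc_Duration` in `bash_functions.env` may differ, and the optional second argument is an assumption added here so the sketch can be exercised with fixed timestamps:

```shell
# Hypothetical reusable duration helper; mirrors the minutes-plus-remainder
# arithmetic the old inline code at the end of gms_pipeline.sh used.
Calc_Duration() {
    local start_time=$1
    local end_time=${2:-$(date +%s)}   # second arg is a testability assumption
    local total_sec=$(( end_time - start_time ))
    local dur_min=$(( total_sec / 60 ))
    local dur_remainder_sec=$(( total_sec % 60 ))
    echo "Total Run Time = $dur_min min(s) and $dur_remainder_sec sec"
}

Calc_Duration 100 225   # prints "Total Run Time = 2 min(s) and 5 sec"
```

In the scripts it would be called with just a start time, e.g. `Calc_Duration $pipeline_start_time`, as the updated end of `gms_pipeline.sh` in this diff shows.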

- `gms`
- `delineate_hydros_and_produced_HAND.sh`: Remove "dropLowStreamOrders" system.
- `derive_level_paths.py`: Remove "dropLowStreamOrders" system and some small style updates.
- `run_by_unit.sh`: Remove "dropLowStreamOrders" system.

- `unit_tests/gms`
- `derive_level_paths_params.json` and `derive_level_paths_unittests.py`: Remove "dropLowStreamOrders" system.
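The non-zero exit code checking kept in `gms_run_branch.sh` follows a common bash pattern; a sketch under that assumption (the function, exit code, and message below are illustrative, not the repo's actual code):

```shell
# Stand-in for a branch job that fails with a non-zero exit code.
process_branch() { return 1; }

process_branch
return_code=$?
if [ $return_code -ne 0 ]; then
    echo "branch processing returned code $return_code"
fi
```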

<br/><br/>

## v4.0.15.0 - 2022-12-20 - [PR #758](https://github.com/NOAA-OWP/inundation-mapping/pull/758)

This merge addresses feedback received from field users regarding CatFIM. Users wanted a Stage-Based version of CatFIM, they wanted maps created for multiple intervals between flood categories, and they wanted documentation as to why many sites are absent from the Stage-Based CatFIM service. This merge seeks to address this feedback. CatFIM will continue to evolve with more feedback over time.
@@ -80,13 +130,11 @@ Fixes inundation of nodata areas of REM.

- `tools/inundation.py`: Assigns depth a value of `0` if REM is less than `0`



## v4.0.13.1 - 2022-12-09 - [PR #743](https://github.com/NOAA-OWP/inundation-mapping/pull/743)

This merge adds the tools required to generate Alpha metrics by hydroid. It summarizes the Alpha metrics by branch 0 catchment for use in the Hydrovis "FIM Performance" service.

## Additions
### Additions

- `pixel_counter.py`: A script to perform zonal statistics against raster data and geometries
- `pixel_counter_functions.py`: Supporting functions
52 changes: 13 additions & 39 deletions gms_pipeline.sh
@@ -4,10 +4,11 @@ usage ()
{
echo
echo 'Produce GMS hydrofabric datasets for unit and branch scale.'
echo 'Usage : gms_pipeline.sh [REQ: -u <hucs> - -n <run name> ]'
echo ' [OPT: -h -c <config file> -j <job limit>] -o -r'
echo ' -ud <unit deny list file> -bd <branch deny list file>'
echo ' -zd <branch zero deny list file> -a <use all stream orders> ]'
echo 'Usage : gms_pipeline.sh [REQ: -u <hucs> -n <run name> ]'
echo ' [OPT: -h -c <config file> -j <job limit>] -o'
echo ' -ud <unit deny list file>'
echo ' -bd <branch deny list file>'
echo ' -zd <branch zero deny list file>]'
echo ''
echo 'REQUIRED:'
echo ' -u/--hucList : HUC8s to run or multiple passed in quotes (space delimited) file.'
@@ -38,10 +39,6 @@ usage ()
echo ' -j/--jobLimit : max number of concurrent jobs to run. Default 1 job at time.'
echo ' stdout and stderr to terminal and logs. With >1 outputs progress and logs the rest'
echo ' -o/--overwrite : overwrite outputs if already exist'
echo ' -r/--retry : retries failed jobs'
echo ' -a/--UseAllStreamOrders : If this flag is included, the system will INCLUDE stream orders 1 and 2'
echo ' at the initial load of the nwm_subset_streams.'
echo ' Default (if arg not added) is false and stream orders 1 and 2 will be dropped'
echo
exit
}
@@ -74,10 +71,6 @@ in
-o|--overwrite)
overwrite=1
;;
-r|--retry)
retry="--retry-failed"
overwrite=1
;;
-ud|--unitDenylist)
shift
deny_unit_list=$1
@@ -90,9 +83,6 @@ in
shift
deny_branch_zero_list=$1
;;
-a|--useAllStreamOrders)
useAllStreamOrders=1
;;
*) ;;
esac
shift
@@ -120,10 +110,6 @@ then
# default is false (0)
overwrite=0
fi
if [ -z "$retry" ]
then
retry=""
fi

# The tests for the deny lists are duplicated here on to help catch
# them earlier (ie.. don't have to wait to process units to find an
usage
fi

# invert useAllStreamOrders boolean (to make it historically compatiable
# with other files like gms/run_unit.sh and gms/run_branch.sh).
# Yet help user understand that the inclusion of the -a flag means
# to include the stream order (and not get mixed up with older versions
# where -s mean drop stream orders)
# This will encourage leaving stream orders 1 and 2 out.
if [ "$useAllStreamOrders" == "1" ]; then
export dropLowStreamOrders=0
else
export dropLowStreamOrders=1
fi

## SOURCE ENV FILE AND FUNCTIONS ##
source $envFile
source $srcDir/bash_functions.env
@@ -202,8 +176,7 @@ run_cmd+=" -c $envFile"
run_cmd+=" -j $jobLimit"

if [ $overwrite -eq 1 ]; then run_cmd+=" -o" ; fi
if [ "$retry" == "--retry-failed" ]; then run_cmd+=" -r" ; fi
if [ $dropLowStreamOrders -eq 1 ]; then run_cmd+=" -s" ; fi

#echo "$run_cmd"
. /foss_fim/gms_run_unit.sh -u "$hucList" $run_cmd -ud "$deny_unit_list" -zd "$deny_branch_zero_list"

@@ -217,18 +190,19 @@ if [ $dropLowStreamOrders -eq 1 ]; then run_cmd+=" -s" ; fi

# if this has too many errors, it will return a sys.exit code (like 62 as per fim_enums)
# and we will stop the rest of the process. We have to catch stderr as well.
# This stops the run from continuing, drastically filling the drive and killing disk space.
python3 $srcDir/check_unit_errors.py -f $outputRunDataDir -n $num_hucs


## Produce level path or branch level datasets
. /foss_fim/gms_run_branch.sh $run_cmd -bd "$deny_branches_list" -zd "$deny_branch_zero_list"


## continue on to post processing
. /foss_fim/gms_run_post_processing.sh $run_cmd

echo
echo "======================== End of gms_pipeline.sh =========================="
pipeline_end_time=`date +%s`
total_sec=$(expr $pipeline_end_time - $pipeline_start_time)
dur_min=$((total_sec / 60))
dur_remainder_sec=$((total_sec % 60))
echo "Total Run Time = $dur_min min(s) and $dur_remainder_sec sec"
date -u
Calc_Duration $pipeline_start_time
echo


