Enhancements to CatFIM, rating curve generation script, and other files
BradfordBates-NOAA committed Dec 23, 2022
1 parent 9116498 commit 5de3995
Showing 13 changed files with 1,131 additions and 418 deletions.
33 changes: 32 additions & 1 deletion docs/CHANGELOG.md
@@ -1,6 +1,37 @@
All notable changes to this project will be documented in this file.
We follow the [Semantic Versioning 2.0.0](http://semver.org/) format.

## v4.0.15.0 - 2022-12-20 - [PR #758](https://github.com/NOAA-OWP/inundation-mapping/pull/758)

This merge addresses feedback received from field users regarding CatFIM. Users wanted a Stage-Based version of CatFIM, they wanted maps created for multiple intervals between flood categories, and they wanted documentation as to why many sites are absent from the Stage-Based CatFIM service. This merge seeks to address this feedback. CatFIM will continue to evolve with more feedback over time.

### Changes
- `/src/gms/usgs_gage_crosswalk.py`: Removed filtering of extra attributes when writing table
- `/src/gms/usgs_gage_unit_setup.py`: Removed the filter that kept only gages with `curve == 'yes'`; this filtering now happens later in the process.
- `/tools/eval_plots.py`: Added a post-processing step to produce CSVs of spatial data
- `/tools/generate_categorical_fim.py`:
  - New arguments to support more advanced multiprocessing, production of Stage-Based CatFIM, a specific output directory path, upstream and downstream distances, control over how far past the "major" magnitude to go when producing Stage-Based interval maps, and the ability to run a single AHPS site.
- `/tools/generate_categorical_fim_flows.py`:
  - Allows for flows to be retrieved for only one site (useful for testing)
  - More logging
  - Filtering stream segments according to stream order
- `/tools/generate_categorical_fim_mapping.py`:
  - Support for Stage-Based CatFIM production
  - Enhanced multiprocessing
  - Improved post-processing
- `/tools/pixel_counter.py`: Fixed a bug where `NoneType`s were being returned
- `/tools/rating_curve_get_usgs_rating_curves.py`:
  - Removed filtering when producing `usgs_gages.gpkg`; instead, added attributes indicating whether each gage meets the acceptance criteria defined in `gms_tools/tools_shared_variables.py`.
  - Created a lookup list to filter out unacceptable gages before they're written to `usgs_rating_curves.csv`.
  - `usgs_gages.gpkg` now includes two fields indicating whether gages pass the acceptance criteria (defined in `tools_shared_variables.py`): `acceptable_codes` and `acceptable_alt_error`.
- `/tools/tools_shared_functions.py`:
  - Added `get_env_paths()` function to retrieve environmental variable information used by the CatFIM and rating curve scripts
  - Added `filter_nwm_segments_by_stream_order()` function that uses WRDS to filter NWM `feature_id`s out of a list when their stream order differs from the desired stream order
- `/tools/tools_shared_variables.py`: Added the acceptance criteria and URLs for gages as non-constant variables so they can be modified and tracked through version changes. These variables are imported by the CatFIM and USGS rating curve/gage generation scripts.
- `/tools/test_case_by_hydroid.py`: Reformatted code; recommend adding more comments/docstrings in a future commit
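The stream-order filtering added in `tools_shared_functions.py` can be sketched as follows. This is a hypothetical illustration only: the real `filter_nwm_segments_by_stream_order()` queries WRDS for each segment's stream order, which is stood in for here by a caller-supplied `get_stream_order` lookup.

```python
def filter_nwm_segments_by_stream_order(feature_ids, desired_order, get_stream_order):
    """Keep only NWM feature_ids whose stream order matches desired_order.

    get_stream_order: callable mapping a feature_id to its stream order
    (the real implementation retrieves this from the WRDS API).
    """
    return [fid for fid in feature_ids if get_stream_order(fid) == desired_order]

# Usage with a stand-in lookup table instead of WRDS:
orders = {101: 3, 102: 4, 103: 3}
filtered = filter_nwm_segments_by_stream_order([101, 102, 103], 3, orders.get)
```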

<br/><br/>

## v4.0.14.2 - 2022-12-22 - [PR #772](https://github.com/NOAA-OWP/inundation-mapping/pull/772)

Added `usgs_elev_table.csv` to the Hydrovis whitelist files. Also updated the file names to include the word "hydrovis" (anticipating more S3 whitelist files).
@@ -49,7 +80,7 @@ Fixes inundation of nodata areas of REM.

- `tools/inundation.py`: Assigns depth a value of `0` if REM is less than `0`
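That guard can be sketched as a minimal per-cell illustration (an assumption-laden sketch, not the actual `tools/inundation.py` code, which operates on raster arrays; the depth formula `stage - REM` is assumed here):

```python
def clamp_depth(rem_value, stage):
    # Negative REM values typically mark nodata cells; force depth to 0 there
    # rather than letting the nodata value propagate into the depth grid.
    if rem_value < 0:
        return 0.0
    # Otherwise depth is stage minus REM, floored at zero (assumed formula).
    return max(stage - rem_value, 0.0)
```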

<br/><br/>


## v4.0.13.1 - 2022-12-09 - [PR #743](https://github.com/NOAA-OWP/inundation-mapping/pull/743)

6 changes: 3 additions & 3 deletions src/usgs_gage_crosswalk.py
@@ -41,7 +41,7 @@ def run_crosswalk(self, input_catchment_filename, input_flows_filename, dem_file
the dem-derived flows 3) sample both dems at the snapped points 4) write the crosswalked points
to usgs_elev_table.csv
'''

if self.gages.empty:
print(f'There are no gages for branch {branch_id}')
os._exit(0)
@@ -96,10 +96,10 @@ def sample_dem(self, dem_filename, column_name):
def write(self, output_table_filename):
'''Write to csv file'''

# Prep and write out file
elev_table = self.gages.copy()
# Elev table cleanup
elev_table.loc[elev_table['location_id'] == elev_table['nws_lid'], 'location_id'] = None # set location_id to None where there isn't a gage
elev_table = elev_table[['location_id', 'nws_lid', 'feature_id', 'HydroID', 'levpa_id', 'dem_elevation', 'dem_adj_elevation', 'order_', 'LakeID', 'HUC8', 'snap_distance']]
elev_table = elev_table[elev_table['location_id'].notna()]

if not elev_table.empty:
elev_table.to_csv(output_table_filename, index=False)
2 changes: 1 addition & 1 deletion src/usgs_gage_unit_setup.py
@@ -23,7 +23,7 @@ def load_gages(self):

# Filter USGS gages to huc
usgs_gages = gpd.read_file(self.usgs_gage_filename)
self.gages = usgs_gages[(usgs_gages.HUC8 == self.huc8) & (usgs_gages.curve == 'yes')]
self.gages = usgs_gages[(usgs_gages.HUC8 == self.huc8)]

# Get AHPS sites within the HUC and add them to the USGS dataset
if self.ahps_filename:
18 changes: 18 additions & 0 deletions tools/eval_plots.py
@@ -8,6 +8,7 @@
import matplotlib.pyplot as plt
import seaborn as sns
import re
import glob
import os
import sys
sys.path.append('/foss_fim/src')
@@ -669,6 +670,19 @@ def eval_plots(metrics_csv, workspace, versions = [], stats = ['CSI','FAR','TPR'
wbd_with_metrics.to_file(Path(workspace) / 'fim_performance_polys.shp')
else:
print('BLE/IFC/RAS2FIM FR datasets not analyzed, no spatial data created.\nTo produce spatial data analyze a FR version')

def convert_shapes_to_csv(workspace):

    # Convert any shapefile in the root level of the workspace to CSV.
    shape_list = glob.glob(os.path.join(workspace, '*.shp'))
    for shape in shape_list:
        gdf = gpd.read_file(shape)
        parent_directory = os.path.split(shape)[0]
        file_name = os.path.basename(shape).replace('.shp', '.csv')
        csv_output_path = os.path.join(parent_directory, file_name)
        gdf.to_csv(csv_output_path)


#######################################################################
if __name__ == '__main__':
# Parse arguments
@@ -697,3 +711,7 @@ def eval_plots(metrics_csv = m, workspace = w, versions = v, stats = s, spatial = sp, fim_1_ms = f, site_barplots = i)
print('The following AHPS sites are considered "BAD_SITES": ' + ', '.join(BAD_SITES))
print('The following query is used to filter AHPS: ' + DISCARD_AHPS_QUERY)
eval_plots(metrics_csv = m, workspace = w, versions = v, stats = s, spatial = sp, fim_1_ms = f, site_barplots = i)

# Convert output shapefiles to CSV
print("Converting to CSVs...")
convert_shapes_to_csv(w)
