wip: model test updates #441

Draft pull request: 28 commits to merge into base branch master.

Commits
3fd9525
add descriptions of different norm options
kirstybayliss May 24, 2024
b9c0731
can supply bin_width for completeness separately
kirstybayliss May 24, 2024
e048c27
updating 'optimize' norm to use specified binwidths
kirstybayliss May 29, 2024
566aa7a
Merge branch 'master' of github.com:GEMScienceTools/oq-mbtk into mode…
kirstybayliss Jul 23, 2024
bdd56f2
updates to take all files in a folder and manage point sources
kirstybayliss Jul 23, 2024
5fed7e9
adding option to plot all sources in one plot
kirstybayliss Jul 23, 2024
993c08f
compare two source models
kirstybayliss Jul 24, 2024
1c9cca3
adding more tests for homogenisation as useful examples
kirstybayliss Aug 26, 2024
ece90e8
tweak to stop depths = 0 returning depths < 0 warning
kirstybayliss Aug 26, 2024
f1ea7b1
explain preferences for homogenisation magnitudes
kirstybayliss Aug 26, 2024
7695aab
fix positional argument issues
kirstybayliss Aug 26, 2024
83bccc7
add option to provide lower magnitude != ref mag
kirstybayliss Aug 26, 2024
1c41f38
remove extra print statements
kirstybayliss Aug 26, 2024
e35c216
remove print statements
kirstybayliss Aug 26, 2024
7c9b06f
adding missing importlib import
kirstybayliss Sep 5, 2024
bbaae03
Merge branch 'completeness_updates' into model_test_updates
kirstybayliss Sep 9, 2024
f7f859d
updates to subduction documentation
kirstybayliss Sep 9, 2024
9e608a0
further improvements to sub documentation
kirstybayliss Sep 10, 2024
cb56eef
Merge branch 'master' of github.com:GEMScienceTools/oq-mbtk into mode…
kirstybayliss Sep 17, 2024
d044998
binwidths not implemented properly
kirstybayliss Sep 17, 2024
12d9a55
adding missing norm import
kirstybayliss Sep 30, 2024
c1edb4f
adding norm tests
kirstybayliss Sep 30, 2024
8336cce
adding warnings for small datasets
kirstybayliss Sep 30, 2024
7ef1821
Merge branch 'master' of github.com:GEMScienceTools/oq-mbtk into mode…
kirstybayliss Oct 14, 2024
541a23e
require sourceConverter object
kirstybayliss Oct 31, 2024
6689fee
Merge branch 'master' of github.com:GEMScienceTools/oq-mbtk into mode…
kirstybayliss Oct 31, 2024
63f1ad9
properly update source names when removing fault buffers
kirstybayliss Nov 13, 2024
ba82dde
add b_sigma and a_sigma to config
kirstybayliss Nov 15, 2024
3 changes: 2 additions & 1 deletion docsrc/contents/cat.rst
@@ -194,7 +194,7 @@ This would give us a different fit to our data and a different equation to suppl

Where there are not enough events to allow for a direct regression or we are unhappy with the fit for our data, there are many conversions in the literature which may be useful. This process may take some revising and iterating - it is sometimes very difficult to identify a best fit, especially where we have few datapoints or highly uncertain data. Once we are happy with the fits to our data, we can add the regression equation to the homogenisation .toml file. This process should be repeated for every magnitude we wish to convert to Mw.

-The final homogenisation step itself is also controlled by a toml file, where each observed magnitude is specified individually and the regression coefficients and uncertainty are included. It is also necessary to specify a hierarchy of catalogues so that a preferred catalogue is used for the magnitude where the event has multiple entries. In the example below, we merge the ISCGEM and a local catalogue, preferring ISCGEM magnitudes where available as specified in the ranking. Because the ISCGEM already provides magnitudes in Mw, we simply retain all Mw magnitudes from ISCGEM. In this example, our local catalogue has two different magnitude types for which we have derived a regression. We specify how to convert to the standardised Mw from the local.mw and the standard deviations, which are outputs of the fitting we carried out above.
+The final homogenisation step itself is also controlled by a toml file, where each observed magnitude is specified individually and the regression coefficients and uncertainty are included. It is also necessary to specify a hierarchy of catalogues so that a preferred catalogue is used for the magnitude where the event has multiple entries. If you have an isf format catalogue, you can also specify here the hierarchy of individual agencies (or authors in the isf format) within the catalogue. In the example below, we merge the ISCGEM and a local catalogue, preferring ISCGEM magnitudes where available as specified in the ranking. Because the ISCGEM already provides magnitudes in Mw, we simply retain all Mw magnitudes from ISCGEM. In this example, our local catalogue has two different magnitude types for which we have derived a regression. We specify how to convert to the standardised Mw from the local.mw and the standard deviations, which are outputs of the fitting we carried out above.

.. code-block:: ini

@@ -229,6 +229,7 @@ The final homogenisation step itself is also controlled by a toml file, where ea
conv_eqs = ["0.1928 + 0.9757 * m"]
std_devs = [0.0091, 0.0016]

+The order of conversions in the list determines conversion priority: for the local events we first convert all events with mw magnitudes and use mww only where mw is not available, and the local conversions are not used at all when an ISCGEM Mw is present. In this way we can specify hierarchies for both the agencies and the magnitudes.
The actual homogenisation step is carried out by calling
:code:`oqm cat homogenise $ARG1 $ARG2 $ARG3`
as in the bash script example, where $ARG1 is the homogenisation toml file and $ARG2 and $ARG3 are the hdf5 file outputs from the merge step, describing the origins and magnitude information for the merged catalogue respectively.
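A minimal sketch of that invocation (the file names here are hypothetical placeholders, not outputs named by the toolkit; the `echo` prints the assembled command, remove it to actually run `oqm`):

```shell
#!/bin/sh
# Hypothetical placeholder file names; substitute your own.
ARG1=homogenise.toml        # homogenisation settings (conversions, ranking)
ARG2=merged_origins.h5      # hdf5 origins output from the merge step
ARG3=merged_magnitudes.h5   # hdf5 magnitudes output from the merge step

# echo shows the assembled command line; drop echo to execute it.
echo oqm cat homogenise "$ARG1" "$ARG2" "$ARG3"
```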
231 changes: 187 additions & 44 deletions docsrc/contents/sub.rst

Large diffs are not rendered by default.

36 changes: 21 additions & 15 deletions openquake/cat/completeness/analysis.py
@@ -176,8 +176,8 @@ def check_criterion(criterion, rate, previous_norm, tvars):
        norm = abs(tmp_rate - rate_ma)

    elif criterion == 'optimize':
-        tmp_rate = -1
-        norm = get_norm_optimize(tcat, aval, bval, ctab, cmag, n_obs, t_per,
+        tmp_rate = -1
+        norm = get_norm_optimize(tcat, aval, bval, ctab, binw, cmag, n_obs, t_per,
                                 last_year, info=False)

    elif criterion == 'optimize_a':
@@ -190,20 +190,19 @@ def check_criterion(criterion, rate, previous_norm, tvars):
                                   mmin=ref_mag, mmax=ref_upp_mag)
    elif criterion == 'optimize_c':
        tmp_rate = -1
-        norm = get_norm_optimize_c(tcat, aval, bval, ctab, last_year)
+        norm = get_norm_optimize_c(tcat, aval, bval, ctab, last_year, ref_mag, ref_upp_mag, binw)

    elif criterion == 'gft':
        tmp_rate = -1
        norm = get_norm_optimize_gft(tcat, aval, bval, ctab, cmag, n_obs,
                                     t_per, last_year)

    elif criterion == 'weichert':
        tmp_rate = -1
        norm = get_norm_optimize_weichert(tcat, aval, bval, ctab, last_year)

    elif criterion == 'poisson':
-        tmp_rate = -1
-        norm = get_norm_optimize_c(tcat, aval, bval, ctab, last_year, ref_mag)
+        tmp_rate = -1
+        norm = get_norm_optimize_c(tcat, aval, bval, ctab, last_year, ref_upp_mag, binw)

    if norm is None or np.isnan(norm):
        return False, -1, previous_norm
@@ -233,7 +232,7 @@ def _make_ctab(prm, years, mags):
        return 'skip'


-def _completeness_analysis(fname, years, mags, binw, ref_mag, ref_upp_mag,
+def _completeness_analysis(fname, years, mags, binw, ref_mag, mag_low, ref_upp_mag,
                           bgrlim, criterion, compl_tables, src_id=None,
                           folder_out_figs=None, rewrite=False,
                           folder_out=None):
@@ -247,6 +246,9 @@ def _completeness_analysis(fname, years, mags, binw, ref_mag, ref_upp_mag,
    :param ref_mag:
        The reference magnitude used to compute the rate and select the
        completeness table
+    :param mag_low:
+        The lowest magnitude to include in rate calculations.
+        Will default to ref_mag if not provided.
    :param ref_upp_mag:
        The reference upper magnitude limit used to compute the rate and
        select the completeness table
@@ -264,7 +266,7 @@ def _completeness_analysis(fname, years, mags, binw, ref_mag, mag_low, ref_upp_mag,
    # Checking input
    if criterion not in ['match_rate', 'largest_rate', 'optimize', 'weichert',
-                        'poisson', 'optimize_a', 'optimize_b', 'optimize_d']:
+                        'poisson', 'optimize_a', 'optimize_b', 'optimize_c', 'optimize_d', 'gft']:
        raise ValueError('Unknown optimization criterion')

    tcat = _load_catalogue(fname)
@@ -273,7 +275,7 @@ def _completeness_analysis(fname, years, mags, binw, ref_mag, mag_low, ref_upp_mag,
    # Info
    # Should have option to specify a mag_low != ref_mag
-    mag_low = ref_mag
+    #mag_low = lower_mag
    idx = tcat.data["magnitude"] >= mag_low
    fmt = 'Catalogue contains {:d} events equal or above {:.1f}'
    print('\nSOURCE:', src_id)
@@ -310,7 +312,7 @@ def _completeness_analysis(fname, years, mags, binw, ref_mag, mag_low, ref_upp_mag,
        print(f'Iteration: {iper:05d} norm: {norm:12.6e}', end="\r")

        ctab = _make_ctab(prm, years, mags)
-        #print(ctab)

        if isinstance(ctab, str):
            continue
@@ -466,15 +468,19 @@ def read_compl_params(config):
    key = 'completeness'
    ms = np.array(config[key]['mags'], dtype=float)
    yrs = np.array(config[key]['years'])
-    bw = config.get('bin_width', 0.1)
+    try:
+        bw = np.array(config[key]['bin_width'], dtype=float)
+    except KeyError:
+        bw = config.get('bin_width', 0.1)
    r_m = config[key].get('ref_mag', 5.0)
+    m_low = config[key].get('mag_low', r_m)
    r_up_m = config[key].get('ref_upp_mag', None)
    bmin = config[key].get('bmin', 0.8)
    bmax = config[key].get('bmax', 1.2)
    # Options: 'largest_rate', 'match_rate', 'optimize'
    crit = config[key].get('optimization_criterion', 'optimize')

-    return ms, yrs, bw, r_m, r_up_m, bmin, bmax, crit
+    return ms, yrs, bw, r_m, m_low, r_up_m, bmin, bmax, crit


def read_compl_data(folder_in):
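The bin_width fallback in read_compl_params can be exercised with a plain dict standing in for the loaded toml file. The config values below are invented for illustration (only the key names come from the diff), and a narrow `except KeyError` is used where the diff catches everything:

```python
import numpy as np

# Hypothetical config dict mirroring what toml.load would return for the
# keys read by read_compl_params (values invented for illustration).
config = {
    'completeness': {
        'mags': [4.0, 4.5, 5.0, 5.5],
        'years': [1960, 1980, 2000],
        'bin_width': [0.1, 0.2],  # optional per-table bin widths
        'ref_mag': 5.0,
    },
}

key = 'completeness'
# Prefer an array of bin widths under [completeness]; otherwise fall back
# to a scalar top-level bin_width with a 0.1 default.
try:
    bw = np.array(config[key]['bin_width'], dtype=float)
except KeyError:
    bw = config.get('bin_width', 0.1)

r_m = config[key].get('ref_mag', 5.0)
m_low = config[key].get('mag_low', r_m)  # mag_low defaults to ref_mag

print(bw.tolist(), m_low)  # [0.1, 0.2] 5.0
```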
@@ -513,7 +519,7 @@ def completeness_analysis(fname_input_pattern, f_config, folder_out_figs,
    # Loading configuration
    config = toml.load(f_config)

-    ms, yrs, bw, r_m, r_up_m, bmin, bmax, crit = read_compl_params(config)
+    ms, yrs, bw, r_m, m_low, r_up_m, bmin, bmax, crit = read_compl_params(config)

    compl_tables, mags_chk, years_chk = read_compl_data(folder_in)
@@ -544,13 +550,13 @@ def completeness_analysis(fname_input_pattern, f_config, folder_out_figs,
        else:
            var = {}

-        res = _completeness_analysis(fname, yrs, ms, bw, r_m,
+        res = _completeness_analysis(fname, yrs, ms, bw, r_m, m_low,
                                     r_up_m, [bmin, bmax], crit,
                                     compl_tables, src_id,
                                     folder_out_figs=folder_out_figs,
                                     folder_out=folder_out,
                                     rewrite=False)
-        #print(len(res))

        if len(res) == 0:
            continue

3 changes: 2 additions & 1 deletion openquake/cat/completeness/generate.py
@@ -108,7 +108,8 @@ def _get_completenesses(mags, years, folder_out=None, num_steps=0,
        min_mag = min(mags)
    else:
        min_mag = min_mag_compl

+    print("minimum magnitude is ", min_mag)
    if len(np.where(min_mag <= mags)) < 1:
        msg = 'None of the magnitude intervals above the min_mag_compl'
        raise ValueError(msg)