Some useful links for registering with the LVK and for test runs of GW event analyses.
- Register for a LIGO membership here.
After your membership is accepted:
- Register for LIGO's Mattermost here. Many channels are open for joining $\rightarrow$ you can find them by clicking the `+` sign next to `LIGO` at the top-left. This will open a list of available public channels (beware, the list continues for multiple pages; click `Next` at the bottom-right to check them all).
- Apply for a LIGO cluster account here.
- Create a GitLab LIGO account here.
- The GW Open Data Workshops (here) are very useful for understanding the basics of GW analysis. There is also a dedicated forum (here) with Q&A about the workshops, as well as more general issues.
- The IGWN Public Alerts page (here) usefully describes the whole alert process, as well as much of the relevant terminology.
- A useful compilation of FAQs can be found here.
- A quick tutorial using `bilby` to analyse a GW event, i.e. to do a parameter estimation (PE), is here. Beware that there could be some typos in the event's naming, so try to be consistent with the original event's name at the top, i.e. in the command that creates the `bilby` configuration file.
- The general idea is the following (sketched in shell commands after this list):
  - Create a directory for the PE run
  - Activate the relevant Python environments
  - Make all the necessary (physical) changes in the `filename_config.ini`
  - Secure permissions (if permission problems occur, check (here) for alternative authorisation options; sometimes the error log files of a failed job can have useful links)
  - Run the analysis: `bilby_pipe filename_config.ini --submit`
  - Check the analysis: `condor_q --nobatch`. More on the HTCondor workload manager here.
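
A minimal sketch of this workflow as shell commands; the directory, environment, and configuration-file names (`GW150914_pe`, `igwn`, `GW150914_config.ini`) are illustrative assumptions, not fixed conventions:

```bash
# Create and enter a directory for the PE run
mkdir GW150914_pe && cd GW150914_pe

# Activate the relevant Python environment (assumed here to be a
# conda environment; use whichever one your cluster provides)
conda activate igwn

# ... edit GW150914_config.ini to make the necessary physical changes ...

# Generate the HTCondor jobs from the configuration and submit them
bilby_pipe GW150914_config.ini --submit

# Check the status of the submitted jobs
condor_q --nobatch
```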
- More details on `bilby` configuration options can be found here; they can also be listed from the command line, as below.
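
A quick way to see the available options directly from the tool itself:

```bash
# Print the available bilby_pipe options, with descriptions
bilby_pipe --help
```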
- Where to run? To access the LIGO clusters, follow the instructions here. For the UNLV GW group, we use the CIT site, with the following hosts:
LIGO has a number of workstations; the HTCondor system will distribute the run accordingly. The `ldas-grid` and `ldas-pcdev*` hosts are good for hosting your files and analyses (check the system configuration, in case you are interested in a job with specific characteristics, like high memory etc.); an example login follows below.
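
As an illustration, logging in and surveying the pool could look like the following; the exact hostname is an assumption based on the standard CIT `ldas-*` naming, and `albert.einstein` stands for your own LIGO username:

```bash
# Log in to one of the CIT hosts (hostname assumed from the
# standard ldas-* naming; replace albert.einstein with your username)
ssh albert.einstein@ldas-pcdev1.ligo.caltech.edu

# Survey the HTCondor pool for machines matching your job's needs
# (e.g. high memory); condor_status lists the available machines
condor_status
```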
Of course, the files can be downloaded and inspected locally, but it is also possible to examine some of the plots online:
- Follow the links here. Either `Jupyter Lab` or the `public_html` page organises the produced plots.
- To create a summary webpage (shown in `public_html` above), the following options must be selected in the configuration file:

  ```
  create-summary = True
  email = [email protected]
  webdir = /home/albert.einstein/public_html/project
  ```
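
Once the run finishes, the summary pages should land in the `webdir` set above; at CIT the `public_html` area is typically web-served (the URL pattern in the comment is an assumption based on the usual LDAS convention):

```bash
# Check that the summary pages were generated under the webdir
ls /home/albert.einstein/public_html/project

# At CIT, this directory is usually reachable at a URL like
# https://ldas-jobs.ligo.caltech.edu/~albert.einstein/project
# (assumed from the standard LDAS web-serving convention)
```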
- The `dag_name.submit` file and the `bash_name.sh` file have useful information on the order of job submissions, as well as on the commands used at each step to initialise that specific part of the run (these can be useful if you want to check; see the example below).
- The different `.log` files show details (duration, memory, etc.) about the runs.
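
As a concrete (illustrative) way to inspect these files, assuming `bilby_pipe` wrote its output to a directory `outdir` with the usual `submit/` and `log_data_analysis/` subdirectories (this layout is an assumption; adjust the paths to your run):

```bash
# Inspect the DAG and bash files to see the order of job submissions
# and the exact command run at each step (file names are illustrative)
less outdir/submit/dag_name.submit
less outdir/submit/bash_name.sh

# Skim the HTCondor event logs for duration and memory usage
grep -i -A 3 "memory" outdir/log_data_analysis/*.log

# The tail of a failed job's error log often contains useful links
tail -n 20 outdir/log_data_analysis/*.err
```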