Merge branch 'docs/dev' into dev/14.2.2
msulprizio committed Oct 23, 2023
2 parents 9087a47 + e9481bc commit 88b0b84
Showing 4 changed files with 63 additions and 34 deletions.
4 changes: 2 additions & 2 deletions CONTRIBUTING.md
@@ -2,7 +2,7 @@

Thank you for looking into contributing to GEOS-Chem! GEOS-Chem is a grass-roots model that relies on contributions from community members like you. Whether you're new to GEOS-Chem or a longtime user, you're a valued member of the community, and we want you to feel empowered to contribute.

Updates to the GEOS-Chem model benefit both you and the [entire GEOS-Chem community](https://geoschem.github.io/geos-chem-people-projects-map/). You benefit through [coauthorship and citations](https://geos-chem.seas.harvard.edu/geos-new-developments). Priority development needs are identified at GEOS-Chem users' meetings with updates between meetings based on [GEOS-Chem Steering Committee (GCSC)](https://geos-chem.seas.harvard.edu/geos-steering-cmte) input through [Working Groups](https://geos-chem.seas.harvard.edu/geos-working-groups).
Updates to the GEOS-Chem model benefit both you and the [entire GEOS-Chem community](https://geoschem.github.io/people.html). You benefit through [coauthorship and citations](https://geoschem.github.io/new-developments.html). Priority development needs are identified at GEOS-Chem users' meetings with updates between meetings based on [GEOS-Chem Steering Committee (GCSC)](https://geoschem.github.io/steering-committee.html) input through [Working Groups](https://geoschem.github.io/working-groups.html).

## We use GitHub and ReadTheDocs
We use GitHub to host the GCHP source code, to track issues, user questions, and feature requests, and to accept pull requests: [https://github.com/geoschem/GCHP](https://github.com/geoschem/GCHP). Please help out as you can in response to issues and user questions.
@@ -36,7 +36,7 @@ As the author you are responsible for:
7. Test your update thoroughly and make sure that it works. For structural updates we recommend performing a difference test (i.e. testing against the prior version) in order to ensure that identical results are obtained.
8. Review the coding conventions and checklists for code and data updates listed below.
9. Create a [pull request in GitHub](https://help.github.com/articles/creating-a-pull-request/).
10. The [GEOS-Chem Support Team](https://wiki.geos-chem.org/GEOS-Chem_Support_Team) will add your updates into the development branch for an upcoming GEOS-Chem version. They will also validate your updates with [benchmark simulations](https://wiki.geos-chem.org/GEOS-Chem_benchmarking).
10. The [GEOS-Chem Support Team](https://geoschem.github.io/support-team.html) will add your updates into the development branch for an upcoming GEOS-Chem version. They will also validate your updates with [benchmark simulations](http://wiki.geos-chem.org/GEOS-Chem_benchmarking).
11. If the benchmark simulations reveal a problem with your update, the GCST will request that you take further corrective action.

### Coding conventions
5 changes: 2 additions & 3 deletions SUPPORT.md
@@ -11,15 +11,14 @@ We use GitHub to track issues. To report a bug, **[open a new issue](https://git
We use GitHub issues to support user questions. To ask a question, **[open a new issue](https://github.com/geoschem/GCHP/issues/new/choose)** and select the question template. Please include your name and institution in the issue.

## What type of support can I expect?
We will be happy to assist you in resolving bugs and technical issues that arise when compiling or running GEOS-Chem. User support and outreach is an important part of our mission to support the [International GEOS-Chem User Community](https://geoschem.github.io/geos-chem-people-projects-map/).
We will be happy to assist you in resolving bugs and technical issues that arise when compiling or running GEOS-Chem. User support and outreach are an important part of our mission to support the [International GEOS-Chem User Community](https://geoschem.github.io/people.html).

Even though we can assist in several ways, we cannot possibly do everything. We rely on GEOS-Chem users being resourceful and willing to try to resolve problems on their own to the greatest extent possible.

If you have a science question rather than a technical question, you should contact the relevant [GEOS-Chem Working Group(s)](https://geos-chem.seas.harvard.edu/geos-working-groups) directly. But if you do not know whom to ask, you may open a new issue (See "Where can I ask for help" above) and we will be happy to direct your question to the appropriate person(s).
If you have a science question rather than a technical question, you should contact the relevant [GEOS-Chem Working Group(s)](https://geoschem.github.io/working-groups.html) directly. But if you do not know whom to ask, you may open a new issue (See "Where can I ask for help" above) and we will be happy to direct your question to the appropriate person(s).

## How to submit changes
Please see **[Contributing Guidelines](https://gchp.readthedocs.io/en/latest/reference/CONTRIBUTING.html)**.

## How to request an enhancement
Please see **[Contributing Guidelines](https://gchp.readthedocs.io/en/latest/reference/CONTRIBUTING.html)**.

33 changes: 22 additions & 11 deletions docs/source/supplement/containers.rst
@@ -8,6 +8,14 @@ The instructions below show how to create a run directory and run GCHP using `Si
, which can be installed using instructions at the previous link or through Spack.
Singularity is container software that is preferred over Docker for many HPC applications due to security concerns with Docker on shared systems.
Singularity can automatically convert and use Docker images.
You can choose to use Docker or Singularity depending on which your cluster supports.

The workflow for running GCHP using containers is

#. Pull an image (:ref:`described on this page <create_run_directory_using_singularity>`)
#. Create a run directory (:ref:`use pre-built tools <create_run_directory_using_singularity>` or follow :ref:`creating_a_run_directory`)
#. Download input data (:ref:`described on this page <download_data_using_dry_run>` and :ref:`downloading_input_data`)
#. Run GCHP (:ref:`use pre-built tools <setting_up_and_running_gchp_using_singularity>` or follow :ref:`running_gchp`)

Software requirements
---------------------
@@ -17,8 +25,6 @@ There are only two software requirements for running GCHP using a Singularity container
* Singularity itself
* An MPI implementation that matches the type and major/minor version of the MPI implementation inside of the container

The current images use OpenMPI 4.0.1 internally, which has been confirmed to work with external installations of OpenMPI 4.0.2-4.0.5.
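To check whether your host MPI matches the one inside the image, you can compare the two versions directly. This is a quick sanity check and assumes :literal:`mpirun` is available on the PATH both on the host and inside the image (which you will pull below):

.. code-block:: console

   $ mpirun --version                             # host MPI version
   $ singularity exec gchp.sif mpirun --version   # MPI version inside the container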


Performance
-----------
@@ -27,16 +33,17 @@ Because we do not include optimized infiniband libraries within the provided Docker images
Container-based benchmarks deployed on Harvard's Cannon cluster using up to 360 cores at c90 (~1x1.25) resolution averaged 15% slower than equivalent non-container runs. Performance may worsen at a higher core count and resolution.
If this performance hit is not a concern, these containers are the quickest way to set up and run GCHP.

.. _create_run_directory_using_singularity:

Setting up and running GCHP using Singularity
Pulling an image and creating a run directory using Singularity
---------------------------------------------------------------

Available GCHP images are listed `on Docker Hub <https://hub.docker.com/r/geoschem/gchp/tags?page=1&ordering=last_updated>`__.
The following command pulls the image of GCHP 13.0.2 and converts it to a Singularity image named `gchp.sif` in your current directory.
The following command pulls the image of GCHP 14.2.0 and converts it to a Singularity image named `gchp.sif` in your current directory.

.. code-block:: console
$ singularity pull gchp.sif docker://geoschem/gchp:13.0.2
$ singularity pull gchp.sif docker://geoschem/gchp:14.2.0
If you do not already have GCHP data directories, create a directory where you will later store data files.
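For example, assuming you want these directories under your home directory (the paths are illustrative; substitute your own for `DATA_DIR` and `WORK_DIR` below):

.. code-block:: console

   $ mkdir -p $HOME/ExtData    # data directory (DATA_DIR)
   $ mkdir -p $HOME/workdir    # working directory for run directory creation (WORK_DIR)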
@@ -50,11 +57,15 @@ respectively, in the run directory creation prompts.

.. code-block:: console
$ singularity exec -B DATA_DIR:/ExtData -B WORK_DIR:/workdir gchp.sif /opt/geos-chem/bin/createRunDir.sh
$ singularity exec -B DATA_DIR:/ExtData -B WORK_DIR:/workdir gchp.sif /bin/bash -c ". ~/.bashrc && /opt/geos-chem/bin/createRunDir.sh"
Once the run directory is created, it will be available at `WORK_DIR` on your host machine. ``cd`` to `WORK_DIR`.

.. _setting_up_and_running_gchp_using_singularity:

Setting up and running GCHP using Singularity
---------------------------------------------

To avoid having to specify the locations of your data and run directories (RUN_DIR) each time you execute a command in the singularity container,
we will add these to an environment file called `~/.container_run.rc` and point the `gchp.env` symlink to this environment file.
@@ -63,8 +74,8 @@ We will also load MPI in this environment file (edit the first line below as appropriate).
.. code-block:: console
$ echo "module load openmpi/4.0.3" > ~/.container_run.rc
$ echo "export SINGULARITY_BINDPATH=\"DATA_DIR:/ExtData, RUN_DIR:/rundir\"" >> ~/.container_run.rc
$ ./setEnvironment.sh ~/.container_run.rc
$ echo "export SINGULARITY_BINDPATH=\"DATA_DIR:/ExtData,RUN_DIR:/rundir\"" >> ~/.container_run.rc
$ ./setEnvironmentLink.sh ~/.container_run.rc
$ source gchp.env
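At this point `~/.container_run.rc` should contain the module load command and the bind-path export, which you can verify before sourcing:

.. code-block:: console

   $ cat ~/.container_run.rc
   module load openmpi/4.0.3
   export SINGULARITY_BINDPATH="DATA_DIR:/ExtData,RUN_DIR:/rundir"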
@@ -73,7 +84,7 @@ We will now move the pre-built `gchp` executable and example run scripts to the run directory.

.. code-block:: console
$ rm runScriptSamples #remove broken link
$ rm runScriptSamples # remove broken link
$ singularity exec ../gchp.sif cp /opt/geos-chem/bin/gchp /rundir
$ singularity exec ../gchp.sif cp -rf /gc-src/run/runScriptSamples/ /rundir
@@ -84,7 +95,7 @@ We'll call this script `internal_exec`.

.. code-block:: console
$ echo ". /init.rc" > ./internal_exec
$ echo -e "if [ -e \"/init.rc\" ] ; then\n\t. /init.rc\nfi" > ./internal_exec # no need for versions after 13.4.1
$ echo "cd /rundir" >> ./internal_exec
$ echo "./gchp" >> ./internal_exec
$ chmod +x ./internal_exec
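GCHP is then launched by running `internal_exec` inside the container under MPI. As a sketch of an interactive launch (the core count and relative image path here are assumptions; see the example run scripts copied above for complete, scheduler-ready versions):

.. code-block:: console

   $ mpirun -np 6 singularity exec ../gchp.sif /rundir/internal_exec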
@@ -104,7 +115,7 @@ You can now set up your run configuration as normal using `setCommonRunSettings.sh`
If you already have GCHP data directories, congratulations! You've completed all the steps you need to run GCHP in a container.
If you still need to download data directories, read on.


.. _download_data_using_dry_run:

Downloading data directories using GEOS-Chem Classic's dry-run option
---------------------------------------------------------------------
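In outline, the dry-run workflow runs GEOS-Chem Classic once in a mode that only logs the input files the simulation would read; that log then drives the data download. A sketch of the first step (the executable name and redirection follow GEOS-Chem Classic conventions; adapt as needed):

.. code-block:: console

   $ ./gcclassic --dryrun > log.dryrun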
55 changes: 37 additions & 18 deletions docs/source/supplement/setting-up-aws-parallelcluster.rst
@@ -5,30 +5,33 @@ Set up AWS ParallelCluster

.. important::

AWS ParallelCluster and FSx for Lustre costs several hundred dollars per month to use.
AWS ParallelCluster and FSx for Lustre can cost hundreds to thousands of dollars per month to use.
See `FSx for Lustre Pricing <https://aws.amazon.com/fsx/lustre/pricing/>`_ and
`EC2 Pricing <https://aws.amazon.com/ec2/pricing/on-demand/>`_ for details.


This page has instructions on setting up AWS ParallelCluster for running GCHP simulations.
AWS ParallelCluster is a service that lets you create your own HPC cluster.
Using GCHP on AWS ParallelCluster is similar to using GCHP on any other HPC, so these instructions focus on AWS ParallelCluster setup, and the other GCHP documentation like :ref:`building_gchp`, :ref:`downloading_input_data`, and :ref:`running_gchp` is appropriate for using GCHP on AWS ParallelCluster.
AWS ParallelCluster is a service that lets you create your own HPC cluster. Using GCHP on AWS ParallelCluster is similar to using GCHP on any other HPC.
We offer up-to-date Amazon Machine Images (AMIs) with GCHP's dependencies built and GCHP compiled; see the `AMI list <https://github.com/yidant/GCHP-cloud/blob/main/aws/ami.md>`_.
These images contain pre-built GCHP source code and the tools for creating a GCHP run directory.
This page has instructions on using the AMIs to create your own ParallelCluster.
You can also choose to set up AWS ParallelCluster for running GCHP simulations yourself; in that case the other GCHP documentation, such as :ref:`Build GCHP's dependencies <spackguide>`, :ref:`downloading_gchp`, :ref:`building_gchp`, :ref:`downloading_input_data`, and :ref:`running_gchp`, applies to using GCHP on AWS ParallelCluster.

The workflow for getting started with GCHP simulations using AWS ParallelCluster is
The workflow for getting started with GCHP simulations on AWS ParallelCluster using our public AMIs is

#. Create an FSx for Lustre file system (*described on this page*)
#. Configure AWS CLI (*described on this page*)
#. Configure AWS ParallelCluster (*described on this page*)
#. :ref:`Build GCHP's dependencies <building_gchp_dependencies>` on your AWS ParallelCluster
#. Create an FSx for Lustre file system for input data (:ref:`described on this page <create_fsx_for_lustre>`)
#. Configure AWS CLI (:ref:`described on this page <aws_cli_setup>`)
#. Configure AWS ParallelCluster (:ref:`described on this page <creating_your_pcluster>`)
#. Create AWS ParallelCluster with GCHP public AMIs (:ref:`described on this page <creating_your_pcluster>`)
#. Follow the normal GCHP User Guide

a. :ref:`downloading_gchp`
#. :ref:`building_gchp`
#. :ref:`creating_a_run_directory`
a. :ref:`creating_a_run_directory`
#. :ref:`downloading_input_data`
#. :ref:`running_gchp`

#. Run GCHP on ParallelCluster (:ref:`described on this page <running_gchp_on_parallelcluster>`)

These instructions were written using AWS ParallelCluster 3.7.0.

These instructions were written using AWS ParallelCluster 3.0.1.
.. _create_fsx_for_lustre:

1. Create an FSx for Lustre file system
---------------------------------------
@@ -92,6 +95,7 @@ Create a cluster config file by running the :command:`pcluster configure` command.
$ pcluster configure --config cluster-config.yaml
For instructions on :literal:`pcluster configure`, refer to the official documentation: `Configuring AWS ParallelCluster <https://docs.aws.amazon.com/parallelcluster/latest/ug/install-v3-configuring.html>`_.

The following settings are recommended:

@@ -105,15 +109,19 @@ The following settings are recommended:
Execution nodes automatically spin up and shut down according to whether there are jobs in your queue.

Now you should have a file named :file:`cluster-config.yaml`.
This the configuration file with setting for a cluster.
Before starting your cluster with the :command:`pcluster create-cluster` command, you need to modify :file:`cluster-config.yaml` so that your FSx for Lustre file system is mounted to your cluster.
This is the configuration file with settings for your cluster.

Before starting your cluster with the :command:`pcluster create-cluster` command, you can modify :file:`cluster-config.yaml` to create a cluster based on our AMIs. We provide the available AMI IDs in the `AMI list <https://github.com/yidant/GCHP-cloud/blob/main/aws/ami.md>`_.

You also need to modify :file:`cluster-config.yaml` so that your FSx for Lustre file system is mounted to your cluster.
Use the following :file:`cluster-config.yaml` as a template for these changes.

.. code-block:: yaml
Region: us-east-1 # [replace with] the region with your FSx for Lustre file system
Image:
  Os: alinux2
  CustomAmi: ami-AAAAAAAAAAAAAAAAA # [replace with] the AMI ID you want to use
HeadNode:
  InstanceType: c5n.large # smallest c5n node to minimize costs when head-node is up
  Networking:
@@ -161,7 +169,7 @@ When you are ready, run the :command:`pcluster create-cluster` command.
$ pcluster create-cluster --cluster-name pcluster --cluster-configuration cluster-config.yaml
It may take 30 minutes or an hour for your cluster's status to change to :literal:`CREATE_COMPLETE`.
It may take anywhere from several minutes to an hour for your cluster's status to change to :literal:`CREATE_COMPLETE`.
You can check the status of your cluster with the following command.

.. code-block:: console
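   # Assumption: with AWS ParallelCluster v3 the status check looks like this
   $ pcluster describe-cluster --cluster-name pcluster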
@@ -176,4 +184,15 @@ Once your cluster's status is :literal:`CREATE_COMPLETE`, run the :command:`pclu
At this point, your cluster is set up and you can use it like any other HPC.
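For example, you can connect to the head node with ParallelCluster's SSH helper (the key-file path is an assumption; use the key pair you configured for your cluster):

.. code-block:: console

   $ pcluster ssh --cluster-name pcluster -i ~/.ssh/your-key.pem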
Your next steps will be :ref:`building_gchp_dependencies` followed by the normal instructions found in the User Guide.
Now you can create a run directory by running the :literal:`createRunDir.sh` command. Your next steps are to follow the normal instructions found in the User Guide.

.. _running_gchp_on_parallelcluster:

4. Running GCHP on ParallelCluster
--------------------------------------------

AWS ParallelCluster supports the Slurm and AWS Batch job schedulers. Your cluster is set to use the Slurm scheduler according to the configuration file.
Root permission might be required to run Slurm commands or to restart Slurm.
Before you submit your job, you can start a shell as superuser by running :literal:`sudo -s`.

You can follow :ref:`running_gchp` to run GCHP with the Slurm scheduler.
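A minimal Slurm batch script might look like the following sketch (the core count, node count, partition name, and time limit are assumptions; match them to your cluster configuration and your settings in :literal:`setCommonRunSettings.sh`):

.. code-block:: console

   $ cat gchp.run
   #!/bin/bash
   #SBATCH -n 24        # total cores; must match your run settings
   #SBATCH -N 1         # number of compute nodes
   #SBATCH -p queue1    # Slurm partition for your compute nodes
   #SBATCH -t 0-6:00    # wall-clock limit (days-hours:minutes)
   source gchp.env      # load the GCHP runtime environment
   mpirun -np 24 ./gchp # launch GCHP under MPI
   $ sbatch gchp.run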
