
Commit: some more movign
CallumWalley committed Feb 5, 2025
1 parent 81aa9f7 commit 6d93395
Showing 250 changed files with 129 additions and 166 deletions.
5 files renamed without changes.
@@ -16,15 +16,15 @@ different allocation criteria.

An allocation will come from one of our allocation classes. We will
decide what class of allocation is most suitable for you and your
research programme, however you're welcome to review [our article on
allocation classes](../../General/NeSI_Policies/Allocation_classes.md)
allocation classes](../../Scientific_Computing/General/NeSI_Policies/Allocation_classes.md)
to find out what class you're likely eligible for.

## An important note on CPU hour allocations

You may continue to submit jobs even if you have used all your CPU-hour
allocation. The effect of 0 remaining CPU hours allocation is a
[lower fairshare](../../Scientific_Computing_old/Running_Jobs_on_Maui_and_Mahuika/Fair_Share_How_jobs_get_prioritised.md),
[lower fairshare](../../Scientific_Computing/Scientific_Computing/Batch_Computing/Running_Jobs_on_Maui_and_Mahuika/Fair_Share_How_jobs_get_prioritised.md),
not the inability to use CPUs. Your ability to submit jobs will only be
removed when your project's allocation expires, not when core-hours are
exhausted.
@@ -38,7 +38,7 @@ plus one kind of compute allocation) in order to be valid and active.

Compute allocations are expressed in terms of a number of units, to be
consumed or reserved between a set start date and time and a set end
date and time. For allocations of computing power, we use [Fair
Share](../../Scientific_Computing_old/Running_Jobs_on_Maui_and_Mahuika/Fair_Share_How_jobs_get_prioritised.md)
Share](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Fair_Share_How_jobs_get_prioritised.md)
to balance work between different projects. NeSI allocations and the
relative "prices" of resources used by those allocations should not be
@@ -48,7 +48,7 @@ the associated infrastructure and services.
### Mahuika allocations

Allocations on
[Mahuika](../../Scientific_Computing_old/The_NeSI_High_Performance_Computers/Mahuika.md)
[Mahuika](../../Scientific_Computing/Scientific_Computing/Batch_Computing/The_NeSI_High_Performance_Computers/Mahuika.md)
are measured in Mahuika compute units. A job uses one Mahuika compute
unit if it runs for one hour on one physical Mahuika CPU core (two
logical CPUs), using 3 GB of RAM and no GPU devices. This means a single
@@ -75,7 +75,7 @@ depend on your contractual arrangements with the NeSI host.

Note that the minimum number of logical cores a job can take on Mahuika
is two
(see [Hyperthreading](../../Scientific_Computing_old/Running_Jobs_on_Maui_and_Mahuika/Hyperthreading.md) for
(see [Hyperthreading](../../Scientific_Computing/Scientific_Computing/Batch_Computing/Running_Jobs_on_Maui_and_Mahuika/Hyperthreading.md) for
details). Therefore:

- the lowest possible price for a CPU-only job is 0.70 compute units
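
For example, under this definition a hypothetical job running for 2 hours on 4
physical cores (8 logical CPUs) with 12 GB of RAM would cost roughly
2 × (8 × 0.35 + 12 × 0.10) = 8.0 compute units, assuming the linear
per-logical-CPU (0.35) and per-GB (0.10) hourly rates implied by the definition
above.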
@@ -90,7 +90,7 @@ In reality, every job must request at least some RAM.
### Māui allocations

The compute capacity of the
[Māui](../../Scientific_Computing_old/The_NeSI_High_Performance_Computers/Maui.md)
[Māui](../../Scientific_Computing/Scientific_Computing/Batch_Computing/The_NeSI_High_Performance_Computers/Maui.md)
supercomputer is allocated by node-hours. Though some Māui nodes have
more RAM than others, we do not currently distinguish between low-memory
and high-memory nodes for allocation, billing or Fair Share purposes.
2 files renamed without changes.
@@ -20,7 +20,7 @@ search:
items under Accounts.
- On the Project page and New Allocation Request page, tool tip text
referring to
[nn\_corehour\_usage](../../../Scientific_Computing_old/Running_Jobs_on_Maui_and_Mahuika/Checking_your_projects_usage_using_nn_corehour_usage.md)
[nn\_corehour\_usage](../../../Scientific_Computing/Scientific_Computing_old/Running_Jobs_on_Maui_and_Mahuika/Checking_your_projects_usage_using_nn_corehour_usage.md)
will appear when you hover over the Mahuika Compute Units
information.

2 files renamed without changes.
8 changes: 6 additions & 2 deletions docs/Scientific_Computing/.pages.yml
@@ -1,4 +1,8 @@
nav:
- Getting_Started
- Scientific_Computing_old
- Storage
- Software
- Data_Management
- Interactive_Computing
- Batch_Computing
- Parallel_Computing
- ...
File renamed without changes.
@@ -19,7 +19,7 @@ Your *Fair Share score* is a number between **0** and **1**. Projects
with a **larger** Fair Share score receive a **higher priority** in the
queue.

A project is given an [allocation of compute units](../../Getting_Started/Accounts-Projects_and_Allocations/What_is_an_allocation.md)
A project is given an [allocation of compute units](../../../Access/Accounts-Projects_and_Allocations/What_is_an_allocation.md)
over a given **period**. An institution also has a percentage **Fair Share entitlement**
of each machine's deliverable capacity over that same period.

@@ -19,7 +19,7 @@ Your *Fair Share score* is a number between **0** and **1**. Projects
with a **larger** Fair Share score receive a **higher priority** in the
queue.

A project is given an [**allocation** of compute units](../../Getting_Started/Accounts-Projects_and_Allocations/What_is_an_allocation.md)
A project is given an [**allocation** of compute units](../../../Access/Accounts-Projects_and_Allocations/What_is_an_allocation.md)
over a given **period**. An institution also has a percentage **Fair
Share entitlement** of each machine's deliverable capacity over that
same period.
@@ -19,7 +19,7 @@ page.
and [Māui\_Ancil (CS500) Slurm Partitions](./Maui_Slurm_Partitions.md)
support pages.
Details about pricing in terms of compute units can be found in the
[What is an allocation?](../../Getting_Started/Accounts-Projects_and_Allocations/What_is_an_allocation.md)
[What is an allocation?](../../../Access/Accounts-Projects_and_Allocations/What_is_an_allocation.md)
page.

!!! note
@@ -28,7 +28,7 @@

## Request GPU resources using Slurm

To request a GPU for your [Slurm job](../../Getting_Started/Next_Steps/Submitting_your_first_job.md), add
To request a GPU for your [Slurm job](../Submitting_your_first_job.md), add
the following option at the beginning of your submission script:

```sl
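# The directive itself is collapsed in this diff view; a commonly used form is
# sketched below as an assumption (the GPU count, and whether a GPU type must
# be named, may differ on your system):
#SBATCH --gpus-per-node=1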
```

@@ -156,7 +156,7 @@ duration of 30 minutes.
## Load CUDA and cuDNN modules

To use an Nvidia GPU card with your application, you need to load the
driver and the CUDA toolkit via the [environment modules](./../HPC_Software_Environment/Finding_Software.md)
driver and the CUDA toolkit via the [environment modules](../../Software/HPC_Software_Environment/Finding_Software.md)
mechanism:

``` sh
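# The exact lines are collapsed in this diff view; the commands below are an
# illustrative assumption. List the available versions first, then load a
# matching CUDA/cuDNN pair (the version strings here are placeholders):
module spider CUDA
module load CUDA/11.4.1 cuDNN/8.2.2.26-CUDA-11.4.1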
```

@@ -326,8 +326,8 @@ graphical interface.
!!! warning
The `nsys-ui` and `ncu-ui` tools require access to a display server,
either via
[X11](../Terminal_Setup/X11_on_NeSI.md) or a
[Virtual Desktop](../Interactive_computing_using_Jupyter/Virtual_Desktop_via_Jupyter_on_NeSI.md).
[X11](../../Getting_Started/Terminal_Setup/X11_on_NeSI.md) or a
[Virtual Desktop](../../Interactive_Computing/Interactive_computing_using_Jupyter/Virtual_Desktop_via_Jupyter_on_NeSI.md).
You also need to load the `PyQt` module beforehand:

```sh
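# Module name taken from the surrounding text; the default version is assumed:
module load PyQt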
```

@@ -341,14 +341,14 @@ graphical interface.
The following pages provide additional information for supported
applications:

- [ABAQUS](../Supported_Applications/ABAQUS.md#examples)
- [GROMACS](../Supported_Applications/GROMACS.md#nvidia-gpu-container)
- [Lambda Stack](../Supported_Applications/Lambda_Stack.md)
- [Matlab](../Supported_Applications/MATLAB.md#using-gpus)
- [TensorFlow on GPUs](../Supported_Applications/TensorFlow_on_GPUs.md)
- [ABAQUS](../../Software/Supported_Applications/ABAQUS.md#examples)
- [GROMACS](../../Software/Supported_Applications/GROMACS.md#nvidia-gpu-container)
- [Lambda Stack](../../Software/Supported_Applications/Lambda_Stack.md)
- [Matlab](../../Software/Supported_Applications/MATLAB.md#using-gpus)
- [TensorFlow on GPUs](../../Software/Supported_Applications/TensorFlow_on_GPUs.md)

And programming toolkits:

- [Offloading to GPU with OpenMP](../HPC_Software_Environment/Offloading_to_GPU_with_OpenMP.md)
- [Offloading to GPU with OpenACC using the Cray compiler](./../HPC_Software_Environment/Offloading_to_GPU_with_OpenACC.md)
- [NVIDIA GPU Containers](../HPC_Software_Environment/NVIDIA_GPU_Containers.md)
- [Offloading to GPU with OpenMP](../../Software/HPC_Software_Environment/Offloading_to_GPU_with_OpenMP.md)
- [Offloading to GPU with OpenACC using the Cray compiler](../../Software/HPC_Software_Environment/Offloading_to_GPU_with_OpenACC.md)
- [NVIDIA GPU Containers](../../Software/HPC_Software_Environment/NVIDIA_GPU_Containers.md)
@@ -38,7 +38,7 @@ once your job starts you will have twice the number of CPUs as `ntasks`.
If you set `--cpus-per-task=n`, Slurm will request `n` logical CPUs per
task, i.e., will set `n` threads for the job. Your code must be capable
of running Hyperthreaded (for example using
[OpenMP](../HPC_Software_Environment/OpenMP_settings.md))
[OpenMP](../../Software/HPC_Software_Environment/OpenMP_settings.md))
if `--cpus-per-task > 1`.
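
As a minimal sketch (the job name, resource values and program name below are
illustrative assumptions, not taken from the original page), an OpenMP job
following this advice could be requested as:

```sl
#!/bin/bash -e
#SBATCH --job-name=omp-example   # illustrative name
#SBATCH --ntasks=1               # one task...
#SBATCH --cpus-per-task=4        # ...with 4 logical CPUs for its threads
#SBATCH --time=00:10:00
#SBATCH --mem=2G

# Match the OpenMP thread count to the logical CPUs Slurm granted to the task
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
srun ./my_openmp_program         # hypothetical executable
```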

Setting `--hint=nomultithread` with `srun` or `sbatch` causes Slurm to
@@ -84,7 +84,7 @@ limit:
| 240 | 5 | 1200 | 1200 node-hours, 240 nodes |
| 240 | 1 | 240 | 240 nodes |

Most of the time [job
priority](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Job_prioritisation.md) will
be the most important influence on how long your jobs have to wait - the
above limits are just backstops to ensure that Māui's resources are not
@@ -31,7 +31,7 @@ ssh to these nodes after logging onto the NeSI lander node.
1. The Cray Programming Environment on Mahuika, differs from that on
Māui.
2. The `/home, /nesi/project`, and `/nesi/nobackup`
[filesystems](../../Storage/File_Systems_and_Quotas/NeSI_File_Systems_and_Quotas.md)
[filesystems](../../Data_Management/File_Systems_and_Quotas/NeSI_File_Systems_and_Quotas.md)
are mounted on Mahuika.
3. Read about how to compile and link code on Mahuika in section
entitled: [Compiling software on Mahuika](../HPC_Software_Environment/Compiling_software_on_Mahuika.md)
@@ -43,7 +43,7 @@ lander node. Jobs can be submitted to the HPC from these nodes.

1. The Cray Programming Environment on the XC50 (supercomputer) differs
from that on Mahuika and the Māui Ancillary nodes.
2. The `/home, /nesi/project`, and `/nesi/nobackup` [file systems](../../Storage/File_Systems_and_Quotas/NeSI_File_Systems_and_Quotas.md) are
2. The `/home, /nesi/project`, and `/nesi/nobackup` [file systems](../../Data_Management/File_Systems_and_Quotas/NeSI_File_Systems_and_Quotas.md) are
mounted on Māui.
3. The I/O subsystem on the XC50 can provide high bandwidth to disk
(large amounts of data), but not many separate reading or writing
@@ -36,7 +36,7 @@ and any (multi-cluster) Slurm partitions on the Māui or Mahuika systems.
## Notes

1. The `/home, /nesi/project`, and `/nesi/nobackup`
[filesystems](../../Storage/File_Systems_and_Quotas/NeSI_File_Systems_and_Quotas.md)
[filesystems](../../Data_Management/File_Systems_and_Quotas/NeSI_File_Systems_and_Quotas.md)
are mounted on the Māui Ancillary Nodes.
2. The Māui Ancillary nodes have Skylake processors, while the Mahuika
nodes use Broadwell processors.
@@ -67,7 +67,7 @@ w-mauivlab01.maui.nesi.org.nz

If you are looking for accessing this node from your local machine you
could add the following section to `~/.ssh/config` (extending the
[recommended terminal setup](../Terminal_Setup/Standard_Terminal_Setup.md)
[recommended terminal setup](../../Getting_Started/Terminal_Setup/Standard_Terminal_Setup.md)

``` sh
Host w-mauivlab01
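   # Illustrative continuation - the user name and the "maui" jump-host alias are
   # assumptions (see the recommended terminal setup); only the hostname is taken
   # from this page:
   User your_nesi_username
   Hostname w-mauivlab01.maui.nesi.org.nz
   ProxyCommand ssh -W %h:%p maui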
```
@@ -26,7 +26,7 @@ hence will also be accessible to your collaborators.
The instructions are geared towards members of the `niwa02916` group -
{% include "partials/support_request.html" %} if you are a NIWA employee and
want to become part of this group. Other NeSI users may want to
read [this](../../Scientific_Computing_old/Supported_Applications/Synda.md),
read [this](../../Software/Supported_Applications/Synda.md),
which explains how to install the Synda tool. Once installed, you can
then type similar commands to the ones below to test your configuration.

@@ -40,7 +40,7 @@ source /nesi/project/niwa02916/synda_env.sh

This will load the Anaconda3 environment and set the `ST_HOME` variable.
You should also now be able to invoke
[Synda](../../Scientific_Computing_old/Supported_Applications/Synda.md)
[Synda](../../Software/Supported_Applications/Synda.md)
commands, a tool that can be used to synchronise CMIP data with Earth
System Grid Federation archives. A full list of options can be obtained
with
@@ -25,7 +25,7 @@ path directory, displayed as '`/home/<username>`'.
| `/nesi/project/<project_code>` | yes | `/nesi/project/<project_code>` | yes | **read only** access |

For more information about NeSI filesystem, check
[NeSI_File_Systems_and_Quotas](../../Storage/File_Systems_and_Quotas/NeSI_File_Systems_and_Quotas.md).
[NeSI_File_Systems_and_Quotas](../File_Systems_and_Quotas/NeSI_File_Systems_and_Quotas.md).

## Performing Globus transfers to/from Māui/Mahuika

@@ -148,4 +148,4 @@ suitable permissions model.
!!! prerequisite "See also"
- [How can I let my fellow project team members read or write my files?](../../General/FAQs/How_can_I_let_my_fellow_project_team_members_read_or_write_my_files.md)
- [How can I give read-only team members access to my files?](../../General/FAQs/How_can_I_give_read_only_team_members_access_to_my_files.md)
- [NeSI file systems and quotas](../../Storage/File_Systems_and_Quotas/NeSI_File_Systems_and_Quotas.md)
- [NeSI file systems and quotas](./NeSI_File_Systems_and_Quotas.md)
@@ -17,14 +17,14 @@ Scale clients*, and those that employ *Cray’s DVS* *solution*.

Applications that make heavy demands on metadata services and or have
high levels of small I/O activity should generally not be run on
[Māui](../../Scientific_Computing_old/The_NeSI_High_Performance_Computers/Maui.md) (the Cray
XC50).

## Nodes which access storage via native Spectrum Scale Clients

All [Mauhika](../../Scientific_Computing_old/The_NeSI_High_Performance_Computers/Mahuika.md)
HPC Cluster, [Mahuika Ancillary](../../Scientific_Computing_old/The_NeSI_High_Performance_Computers/Mahuika.md),
[Māui Ancillary](../../Scientific_Computing_old/The_NeSI_High_Performance_Computers/Maui_Ancillary.md) and
Māui login (aka build) nodes have native Spectrum Scale clients
installed and provide high performance access to storage:

@@ -96,4 +96,4 @@ to decompress the data after use. However, testing has shown that there
can be an impact on job performance due to I/O. You can find out more
about tests and results regarding performance of transparent
data compression on the NeSI platforms on our
[Data Compression support page](../../Storage/File_Systems_and_Quotas/Data_Compression.md).
[Data Compression support page](../../Storage/File_Systems_and_Quotas/Data_Compression.md).
File renamed without changes.
@@ -7,18 +7,14 @@ tags:
- maui
- quota
title: NeSI File Systems and Quotas
vote_count: 4
vote_sum: 4
zendesk_article_id: 360000177256
zendesk_section_id: 360000033936
---

!!! tip "Transparent File Compression"
We have recently started rolling out compression of inactive data on the NeSI Project filesystem.
Please see the [documentation below](#transparent-file-data-compression) to learn more about how this works and what data will be compressed.

[Māui](../../Scientific_Computing_old/The_NeSI_High_Performance_Computers/Maui.md) and
[Mahuika](../../Scientific_Computing_old/The_NeSI_High_Performance_Computers/Mahuika.md), along
[Māui](../../Batch_Computing/The_NeSI_High_Performance_Computers/Maui.md) and
[Mahuika](../../Batch_Computing/The_NeSI_High_Performance_Computers/Mahuika.md), along
with all the ancillary nodes, share access to the same IBM Storage Scale
file systems. Storage Scale was previously known as Spectrum Scale, and
before that as GPFS, or General Parallel File System - we'll generally
@@ -112,7 +108,7 @@ cleaning policy is applied.

It provides storage space for datasets, shared code or configuration
scripts that need to be accessed by users within a project, and
[potentially by other projects](../File_Systems_and_Quotas/File_permissions_and_groups.md).
[potentially by other projects](./File_permissions_and_groups.md).
Read and write performance increases using larger files, therefore you should
consider archiving small files with the `nn_archive_files` utility, or a
similar archiving package such as `tar` .
@@ -141,7 +137,7 @@ or {% include "partials/support_request.html" %} at any time.

To ensure this file system remains fit-for-purpose, we have a regular
cleaning policy as described in
[Automatic cleaning of nobackup filesystem](../../Storage/File_Systems_and_Quotas/Automatic_cleaning_of_nobackup_file_system.md).
[Automatic cleaning of nobackup filesystem](./Automatic_cleaning_of_nobackup_file_system.md).

Do not use the `touch` command or an equivalent to prevent the cleaning
policy from removing unused files, because this behaviour would deprive
@@ -166,7 +162,7 @@ an Automatic Tape Library (ATL). Files will remain on `/nesi/nearline`
temporarily, typically for hours to days, before being moved to tape. A
catalogue of files on tape will remain on the disk for quick access.

See [more information about the nearline service](../../Storage/Nearline_long_term_storage/Nearline_Long_Term_Storage_Service.md).
See [more information about the nearline service](../Nearline_long_term_storage/Nearline_Long_Term_Storage_Service.md).

## Snapshots

@@ -213,7 +209,7 @@ though this is mitigated by space and bandwidth savings.

Transparent file data compression can be controlled and applied by users
via file attributes, you can find out more about using this method on
our [Data Compression support page](../../Storage/File_Systems_and_Quotas/Data_Compression.md).
our [Data Compression support page](./Data_Compression.md).
File data compression can also be automatically applied by administrators
through the Scale policy engine. We leverage this latter feature to
regularly identify and compress inactive data on the `/nesi/project`
@@ -228,7 +224,7 @@ cold data. We may decrease this in future.
Additionally, we only automatically compress files in the range of 4kB -
10GB in size. Files larger than this can be compressed by user
interaction - see the instructions for the `mmchattr` command on
the [Data Compression support
page](../../Storage/File_Systems_and_Quotas/Data_Compression.md). Also
page](../../Data_Management/File_Systems_and_Quotas/Data_Compression.md). Also
note that the Scale filesystem will only store compressed blocks when
the compression space saving is >=10%.
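
As a rough illustration (the file path is a placeholder, and the flags follow
the standard Spectrum Scale syntax rather than being copied from the linked
page), compressing a single large file and checking the result might look like:

```sh
# Ask Scale to compress one file, then list its attributes to confirm
mmchattr --compression yes /nesi/project/<project_code>/large_archive.tar
mmlsattr -L /nesi/project/<project_code>/large_archive.tar
```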