rtis-cs-hpc-01

Getting access

Once you have been onboarded, you should be able to authenticate using your University credentials.

SSH

  • From a terminal window, connect using SSH:

ssh <username>@rtis-cs-hpc-01.uod.otago.ac.nz

(Or use your favourite SSH client, specifying rtis-cs-hpc-01.uod.otago.ac.nz as the host.)

  • Authenticate using your University credentials.
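If you connect frequently, you could add a host alias to your ~/.ssh/config — a minimal sketch (the alias hpc01 is just an example):

Host hpc01
    HostName rtis-cs-hpc-01.uod.otago.ac.nz
    User <username>

After that, ssh hpc01 will connect you directly.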

Spack

We primarily use Spack to provision scientific software where feasible.

Spack is an HPC-focussed package manager for scientific software that lets you easily build customised software stacks, taking care of dependencies and library conflicts. Amongst other things, it allows multiple versions of the same software to coexist in parallel on one system.

The Spack documentation can be found at https://spack.readthedocs.io . If you are new to Spack, you may want to have a look at the Basic Usage guide.

While the Spack documentation assumes you set up a Spack instance yourself, we have set up a dedicated spack user with some common and requested packages already available.

To make use of any of the Spack-installed software, you first have to load it explicitly with the spack command, either at the terminal or in your scripts.

  • spack find returns a list of the installed Spack packages. (One way to look for a particular match: spack find | grep <partial name>.)

  • spack load <package name> loads a package for use. (A short example follows this list.)
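For example, to locate and load an installed package (fftw here is only an illustration; substitute whatever spack find reports on the system):

spack find | grep fftw   # look for an FFTW build amongst the installed packages
spack load fftw          # make it available in the current session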

Essentially, spack load sets up the environment variables for the package in question, as well as for any of its dependencies, making the binaries and libraries available in your session, much like the traditional environment modules system. As an alternative to spack, environment modules (e.g. module load) are available as well, although Spack generally handles dependencies better and is thus the preferred option.
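If you do use environment modules instead, the usual commands apply:

module avail            # list the available modules
module load <package>   # load a module into your session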

Spack has its own way to create and manage ‘virtual environments’: an environment loads a collection of relevant Spack packages, which makes for an easy way to set up a customised environment for a specific project. e.g.:

spack env create foo                                # create an environment named 'foo'
spacktivate -p foo                                  # activate it (-p shows the environment in your prompt)
spack add <spack package> <another spack package>   # add packages to the environment

Note that when using the default shared Spack setup, Spack will be read-only and you won’t be able to install anything yourself. If you want to manage your own Spack package (spec) installs, you have to set up your own local Spack instance in your home directory (see the sketch below). If you would rather let us manage any new packages, just let us know what you need and we can provision the requested software via the shared Spack instance.
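A minimal sketch of setting up a personal Spack instance, following the standard upstream instructions (the ~/spack location is just an example; see the Spack documentation for details):

git clone https://github.com/spack/spack.git ~/spack   # fetch Spack itself
. ~/spack/share/spack/setup-env.sh                     # enable the spack command in this shell

Adding the second line to your ~/.bashrc makes your own instance available in every new session.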

Software

R

spack load r
R

While you should be able to download additional CRAN packages from within R, several common packages are already installed and available to be loaded with Spack before starting R. Run spack find | grep r- to get a list of the installed R packages. These additional packages can be spack-loaded at the same time, e.g.:

spack load r r-units r-sp
R
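To confirm that a Spack-loaded R package is visible, you can attach it non-interactively before starting your session proper — a quick check, assuming r-units was loaded as above:

Rscript -e 'library(units)'   # fails with an error if the package is not on the library path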

Python

By default the system Python is used (check with python -V).

spack find python will show you additional installed versions that can be loaded with Spack:

spack load python@<version>
python -V

While you can install pip or conda packages in your local home directory as usual, several optimised Python packages may already be installed and available for use via Spack.

spack find ^python@<version> | grep py- will show the list of Spack-installed pip packages available for the given Python version. These can be loaded in a similar way to make them available in Python. e.g.:

spack load py-numpy ^python@3.9
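To confirm that the loaded package is picked up by the interpreter — for example, with py-numpy loaded as above:

python -c "import numpy; print(numpy.__version__)"   # prints the Spack-provided NumPy version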

Conda

You can install Miniconda under your own user according to the instructions on the conda site (e.g. with the installer script), or alternatively initialise your shell using the preinstalled Spack-provided Miniconda:

spack load miniconda3   # make the Spack-provided conda available
conda init bash         # writes the conda shell hook into ~/.bashrc
# ensure login shells also read ~/.bashrc, where the hook now lives:
printf "if [ -s ~/.bashrc ]; then\n\tsource ~/.bashrc;\nfi\n" >> ~/.bash_profile

Then re-open a shell as instructed. This should initialise conda, making it available from that point on without the need for a spack load.

It is recommended to keep the conda packages for a particular project in their own dedicated conda environment.
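For example (the environment name and package list are placeholders):

conda create -n <project_env> python numpy   # create a dedicated environment for the project
conda activate <project_env>                 # switch into it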

Accessing HCS shares

There are a number of ways to mount and access HCS shares as a regular user, without needing elevated privileges:

  • There is an HCS automount path set up at /mnt/auto-hcs/. Although the /mnt/auto-hcs/ directory may appear to be empty, accessing /mnt/auto-hcs/<hcs_share> should automatically mount the specified share (with the privileges of your logged-in user). A short example follows this list.

If you are having trouble getting the share to appear, try running kinit first to obtain a fresh Kerberos authentication ticket.

For storage.hcs-chc or storage.hcs-wlg shares, use /mnt/auto-hcs-chc/ and /mnt/auto-hcs-wlg/ respectively instead.

  • With gvfs/fuse:

    gio mount smb://storage.hcs-p01.otago.ac.nz/<sharename>
    

    By default this makes the share available for your user at /run/user/<userid>/gvfs/smb-share:server=storage.hcs-p01.otago.ac.nz,share=<share_name>.

  • If you are accessing the server through a graphical user interface, the file manager should generally allow you to access SMB shares. For example, on an Xfce desktop: open File Manager and, in the location bar, go to smb://storage.hcs-p01.otago.ac.nz/<share_name>. Select ‘Registered user’ and enter your University credentials, using domain registry if you have a staff account, or domain student for students.
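For example, to access a share via the automount path (the share name is a placeholder):

ls /mnt/auto-hcs/<share_name>   # accessing the path triggers the automount
kinit                           # only needed if the share fails to appear
ls /mnt/auto-hcs/<share_name>   # try again with a fresh Kerberos ticket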