Container environment

Singularity

Singularity is a container environment optimized for HPC usage. Batch jobs can be executed in a container separated from the host operating system. Similar to Docker containers, this makes it possible to create and run scientific and application workloads tailored to user-specific needs.

The open-source version, Singularity Community Edition, is installed on the Komondor HPC. To use Singularity, the following module must be loaded:

module load singularity
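
After loading the module, the installation can be verified by querying the version:

singularity --version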

The Singularity container environment is recommended for everyone, as it has several advantages:

  • It is possible to use any Linux distribution inside a container

  • Any specific application software can be easily installed into the container

  • The file system i-node quota will not be exceeded, since only a single SIF file is created (current usage can be checked as shown after this list)

  • When using a large number of small files, optimal storage performance can be achieved by copying those files into the container
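
On the Lustre file system, the current i-node (file count) usage can be checked with the lfs quota command (a sketch; the mount point path is illustrative):

lfs quota -u $USER /scratch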

Building containers

The Singularity container format makes it easier to create the application environment in a single SIF (Singularity Image Format) file. The container can be built from the Docker container format or from an already existing SIF image.

  1. From the Docker container format, e.g. ubuntu.def:

BootStrap: docker
From: ubuntu:latest

%post
apt -y update
apt -y install git python3.11 python3-pip

Then build the image:

singularity build --fakeroot --fix-perms ubuntu.sif ubuntu.def

  2. From an existing SIF image, e.g. container.def:

Bootstrap: localimage
From: ubuntu.sif

%post
pip install numpy

Then build the image:

singularity build --fakeroot --fix-perms container.sif container.def
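
A SIF image can also be pulled directly from a registry, without a definition file (a minimal example):

singularity pull ubuntu.sif docker://ubuntu:latest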

More examples can be found on the KIFÜ GitLab page:

https://git.einfra.hu/hpc-public/singularity/

Running containers

The container can be started with the singularity exec command. The following example shows how to execute a command with Slurm's srun:

srun --partition=<partition> --account=<account> \
       singularity exec ubuntu.sif cat /etc/os-release
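
The same can also be run from a batch script submitted with sbatch (a minimal sketch; the job parameters are illustrative):

#!/bin/bash
#SBATCH --partition=<partition>
#SBATCH --account=<account>
#SBATCH --job-name=singularity-test

module load singularity
singularity exec ubuntu.sif cat /etc/os-release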

The $HOME directory is automatically mounted at runtime, unless the --no-home option is specified. It is important to note that the /project and /scratch directories must be bind mounted with Singularity in order to use them in the container. Example of mounting a project's shared scratch directory:

srun --partition=<partition> --account=<project_name> \
       singularity exec -B /scratch/<project_name> \
       ubuntu.sif ls /scratch/<project_name>
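
Multiple directories can be bound at once with a comma-separated list, for example both the project and scratch areas (the paths are illustrative):

srun --partition=<partition> --account=<project_name> \
       singularity exec -B /project/<project_name>,/scratch/<project_name> \
       ubuntu.sif ls /project/<project_name> /scratch/<project_name>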

Copying into the container

Files can be added from the host operating system. The following example shows a definition file for a container build that unpacks a tar file located under /scratch/<project_name> and places the data into the container:

BootStrap: docker
From: ubuntu:latest

%post
mkdir -p /data
cd /data
tar xzvf /mnt/data.tar

Then build the image with the host directory bound to /mnt:

singularity build -B /scratch/<project_name>:/mnt --fakeroot \
        --fix-perms container.sif container.def
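
After the build, the unpacked data can be verified inside the image:

singularity exec container.sif ls /data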

Python environment in the container

The Python environment (Anaconda/Conda) installed in the $HOME directory creates many small files. This is not optimal for the Lustre file system, because it can lead to exceeding the i-node quota. That is why we recommend installing the Python environment into the container. For a quicker installation, we recommend the mamba installer instead of conda. More examples can be found on the KIFÜ GitLab site: https://git.einfra.hu/hpc-public/singularity
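
A minimal sketch of such a definition file, based on the mambaorg/micromamba Docker image (the image tag and package list are illustrative):

BootStrap: docker
From: mambaorg/micromamba:latest

%post
# the base environment lives under MAMBA_ROOT_PREFIX
export MAMBA_ROOT_PREFIX=/opt/conda
micromamba install -y -n base -c conda-forge python=3.11 numpy

%environment
export PATH=/opt/conda/bin:$PATH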

Python virtual env with overlay container

Python packages can be installed at container build time and later as well. The layers of the writable overlay container (called overlay.sif in the example) can be modified as required.

Step 1: the python3.11-venv package must be installed in the base container:

BootStrap: docker
From: ubuntu:latest

%post
apt -y update
apt -y install python3.11 python3.11-venv python3-pip

%environment
source /venv/bin/activate

Step 2: the overlay file must be created (an appropriate size, in MiB, must be chosen!):

singularity overlay create --size 1024 overlay.sif

Step 3: Python packages can be installed after mounting the overlay (the overlay can be mounted again later):

singularity shell --overlay overlay.sif ubuntu.sif
Singularity> mkdir /venv
Singularity> python3 -m venv /venv
Singularity> source /venv/bin/activate
(venv) Singularity> pip install numpy

Step 4: the virtual environment must be activated from the application, or its activation must be included under %environment, when running singularity exec.
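
For example, running a Python command from the virtual environment with the overlay mounted read-only (a sketch; numpy was installed in Step 3):

srun --partition=<partition> --account=<project_name> \
       singularity exec --overlay overlay.sif:ro ubuntu.sif \
       python3 -c "import numpy; print(numpy.__version__)"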

Nvidia CUDA containers

To use the CUDA drivers, the appropriate CUDA environment must be installed in the container.
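
A minimal sketch of such a definition file, based on an official nvidia/cuda Docker image (the tag is illustrative; choose one compatible with the driver available on the cluster):

BootStrap: docker
From: nvidia/cuda:12.2.0-runtime-ubuntu22.04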

Important

The --nv switch loads the host's Nvidia driver. You can check whether the GPUs are visible with the following command:

srun --partition=<partition> --account=<project_name> \
     singularity exec --nv ubuntu.sif nvidia-smi