First steps
Main steps of starting a project
Project planning
Project application
Project approval
Project preparation
Resource planning
Task preparation
Job preparation
Job submission
Job monitoring
User Policy for the Supercomputer
What can the supercomputing infrastructure be used for?
How can I access the supercomputer?
What are the conditions of use?
How are resources allocated?
What applications, user software or development tools are available?
What user support/assistance is available?
How much does it cost to use a supercomputer?
Pricing
Price list for the commercial use of Komondor HPC resources
Pricing of HPC services
Pricing of mentoring services
Pricing of custom services
CPU node pricing
GPU node pricing
AI node pricing
BigData node pricing
Project application & Types
Academic projects
Projects of small and medium-sized enterprises
HPC operations
Connecting to Komondor with an SSH client
Prerequisites
What is my username?
SSH connection from the Linux command line
Two-factor Authentication (2FA)
Copying files using SCP
Data transfer using rsync
SSH connection setup - PuTTY (MS Windows)
SSH connection setup - MobaXterm
Graphical user interface access using X2Go over SSH
Slurm workload manager
What is Slurm and why do we need it?
Useful commands
How to use the Slurm scheduler?
System Overview
Hardware architecture
Further information about the hardware
CPU partition
Memory in CPU partition
Network in CPU partition
Naming convention
GPU partition
GPUs in GPU partition
Memory in GPU partition
Network in GPU partition
Naming convention in GPU partition
AI partition
GPUs in the AI partition
Memory in AI nodes
Network in AI nodes
Naming convention in AI partition
BigData partition
CPUs
Memory
Network
Naming convention
High Speed Network
Topology
How to use the Slingshot network
Storage
Storage Overview
Suggested use for each of the tiers
Usage and paths
Block and Inode quotas
How to check your quotas?
Lustre
Lustre Components
Useful Knowledge
The scratch filesystem
The project filesystem
DMF and archiving
Storage Options for Komondor HPC
Basic Usage
Overview
Login Node
Compute Nodes
Job Queues (Partitions)
Preparing Job Scripts (sbatch)
#SBATCH Directives
Basic Options
Time Limit
CPU Allocation
Memory Allocation
GPU Allocation
Node Allocation
Non-restartable Jobs
Quality of Service
Email Notification
Slurm Environment Variables
Submitting and Managing Jobs
Resource Usage Estimation
Checking Resource Availability
Submitting Batch Jobs
Status Information
Cancelling Jobs
Pending Jobs
Checking Job Efficiency
CPU and memory efficiency
Resource usage of running jobs
Requested vs allocated resources
GPU usage statistics
Interactive Use
srun
Resource Usage
Balance and Billing
Total Consumption
Storage Quotas
Quality of Service (QOS)
Checking Available QOSs
Default QOS
Manual QOS
Advanced Job Definitions
Job Array
Packed Jobs
Multiple Tasks
Data Sharing for Project Members
ACL
Examples
File Management Considerations
Inode Limit
FUSE
Software
Overview of Installed Software
Libraries, Programming Languages, Developer Tools
Molecular dynamics, Quantum Chemistry
Mathematical and Simulation Packages
Deep Learning and AI
Software Tools for Genomic Analyses
Software Environments
Licence Information
Environment Modules
Cray Programming Environment
Useful Compiler Settings
Cray Scientific and Math Libraries (CSML)
Cray Message Passing Toolkit (CMPT)
Performance Analysis and Optimization
Cray Debugger Support Tools (CDST)
Reference Manuals
Container environment
Singularity
Building containers
Running containers
Copying into the container
Python environment in the container
Nvidia CUDA containers
Jupyter Environment
Accessing JupyterHub
Using JupyterHub
Stopping the Jupyter container
List of available containers
Using your own container
IPython Parallel
Ansys Fluent
Creating a Journal file in Ansys Fluent
Creating a Slurm script
Running a job
OpenFOAM @ Komondor
Accessing OpenFOAM binaries @ Komondor
ORCA
Submitting an ORCA calculation using the ubORKA script
Estimating computational requirements
Q-Chem & BrianQC
Quick start guide for Q-Chem + BrianQC on HPC Komondor
Prerequisites for Q-Chem & BrianQC
Starting Q-Chem or Q-Chem+BrianQC
Examples
PyTorch
Licence
Preinstalled PyTorch environments
Usage of the PyTorch container
Usage of the PyTorch module
MATLAB
Getting Started with Parallel Computing using MATLAB on the Komondor HPC Cluster
CONFIGURATION – MATLAB client on the cluster
INSTALLATION and CONFIGURATION – MATLAB client on the desktop
CONFIGURING JOBS
INTERACTIVE JOBS - MATLAB client on the cluster
INDEPENDENT BATCH JOB
PARALLEL BATCH JOB
HELPER FUNCTIONS
DEBUGGING
TO LEARN MORE
Genomic Analyses
Available tools
Scheduled analyses (sbatch)
Interactive analyses
Wolfram Mathematica
Adding license configuration
Programming Models
MPICH
MPICH Example
MPICH CPU
MPICH GPU
MPICH CPU Batch Job
MPICH GPU Batch Job
OpenMP
OpenMP Example
OpenMP CPU
OpenMP GPU
OpenMP CPU Batch Job
OpenMP GPU Batch Job
Hybrid MPI
Hybrid Example
Hybrid MPI CPU
Hybrid MPI GPU
Hybrid MPI Nvidia
Hybrid MPI CPU Batch Job
Hybrid MPI GPU Batch Job
Nvidia MPI
Nvidia MPI Example
Nvidia MPI GPU
Nvidia MPI Batch Job
Nvidia OpenACC
Nvidia OpenACC Example
Nvidia OpenACC GPU
Nvidia OpenACC Batch Job
CUDA-aware MPI
CUDA-aware MPI Example
CUDA-aware MPI CCE
CUDA-aware MPI Batch Job
Open MPI
Open MPI in containers
AMD AOCC
AMD Optimizing Compiler
Intel MPI
Intel Compiler
Intel MPICH
Intel Hybrid MPI
Intel mpirun
Cray OpenSHMEMX
Singularity Container MPI
HPC Support
HPC Portal
Helpdesk Service
User Manual on the HPC Portal
External links
Publications
Acknowledgement
Feedback
FAQ
Programming Models
KIFÜ HPC Git Repo: https://git.einfra.hu/hpc-public/devel
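The example sources referenced throughout this section can be fetched directly on the cluster (a minimal sketch, assuming the git client is available on the login node):

    git clone https://git.einfra.hu/hpc-public/devel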