
Basic Usage

  • Overview
    • Login Node
    • Compute Nodes
    • Job Queues (Partitions)
  • Preparing Job Scripts (sbatch)
    • #SBATCH Directives
    • Basic Options
    • Time Limit
    • CPU Allocation
    • Memory Allocation
    • GPU Allocation
    • Node Allocation
    • Non-restartable Jobs
    • Quality of Service
    • Email Notification
    • Slurm Environment Variables
  • Submitting and Managing Jobs
    • Resource Usage Estimation
    • Checking Resource Availability
    • Submitting Batch Jobs
    • Status Information
    • Cancelling Jobs
    • Pending Jobs
  • Checking Job Efficiency
    • CPU and memory efficiency
    • Resource usage of running jobs
    • Requested vs allocated resources
    • GPU usage statistics
    • Job monitoring with jobstats
    • Aggregated job efficiency with reportseff
  • Interactive Use
    • srun
  • Resource Usage
    • Balance and Billing
    • Total Consumption
    • Storage Quotas
  • Quality of Service (QOS)
    • Checking Available QOSs
    • Default QOS
    • Manual QOS
  • Advanced Job Definitions
    • Job Array
    • Packed Jobs
    • Multiple Tasks
  • Data Sharing for Project Members
    • ACL
    • Examples
  • File Management Considerations
    • Inode Limit
    • FUSE
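
The subsections above cover Slurm job scripting, submission and monitoring in detail. As a quick orientation only, a minimal batch script might look like the following sketch; the partition name, account and resource values are illustrative placeholders rather than values taken from this page, and should be replaced with the ones described in the relevant subsections.

    #!/bin/bash
    #SBATCH --job-name=example        # job name shown in squeue output
    #SBATCH --partition=cpu           # placeholder partition; see "Job Queues (Partitions)"
    #SBATCH --account=myproject       # placeholder project account; see "Balance and Billing"
    #SBATCH --nodes=1                 # number of nodes
    #SBATCH --ntasks=1                # number of tasks (processes)
    #SBATCH --cpus-per-task=4         # CPU cores per task
    #SBATCH --mem=8G                  # memory per node
    #SBATCH --time=01:00:00           # wall-time limit (HH:MM:SS)

    srun ./my_program                 # placeholder executable launched as the job step

Such a script is submitted with "sbatch jobscript.sh", monitored with "squeue -u $USER" and cancelled with "scancel <jobid>"; these standard Slurm commands are discussed under "Submitting and Managing Jobs".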