MRCC
Installed versions:
mrcc.2023-08-28 (binary)
mrcc-2024-12-31 (compiled, recommended)
Submitting an MRCC calculation
The subMRCC script submits the <Input file> as an MRCC job to the Komondor CPU queue. The corresponding SLURM job submission can be done in the following way:
/opt/software/packages/mrcc/20241231/subMRCC241231 <number of processors> <Input file>
Alternatively, use the following alias:
alias MRCC241231='/opt/software/packages/mrcc/20241231/subMRCC241231'
In this case, the job submission can be performed more conveniently with the following command:
MRCC241231 <number of processors> <Input file>
Entering only the script itself without arguments prints the help menu. The MRCC output file is saved in the same directory as the input file, with the extension changed to .out.
MRCC can run in parallel using OpenMP, MPI, or both. By default, the script sets up an OpenMP-only run. However, if the input file includes the “mpitasks” keyword, MPI and OpenMP will be used together; the script is not designed to handle pure MPI runs. The number of MPI processes (or CPU cores) will match the value specified in the input file, with two OpenMP threads assigned to each core. When submitting the script, the number of processors must always be specified on the command line. For OpenMP-only runs, this corresponds to the number of CPU threads. For MPI + OpenMP runs, it corresponds to the number of cores, and the number of threads is set automatically.
The default memory allocation is 2000 MB unless a different value is defined in the input file using the “mem” keyword. Memory must always be specified in megabytes (MB) and refers to the total allocated memory, not memory per core.
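As an illustration, an OpenMP-only run on 16 threads with the default 2000 MB of memory (assuming a hypothetical input file water.inp that does not contain “mpitasks”) would be submitted as:
MRCC241231 16 water.inp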
An optional “-n” flag can be used to specify the number of nodes for MPI jobs:
MRCC241231 -n <number of nodes> <number of processors> <Input file>
Additionally, the “-o” option can be used to optimize MPI job performance by allocating all CPUs on the node(s) to the job and increasing the number of threads. The extra processes beyond those specified by “mpitasks” are driver processes, which mostly run in the background and do not require dedicated resources (though they will prevent other jobs from using those resources). For more details, see Sections 9.2 and 9.3 of the MRCC manual: https://www.mrcc.hu/MRCC/manual/pdf/manual.pdf
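For illustration, a two-node MPI + OpenMP job using 8 cores might be submitted as follows; the input file name is hypothetical, and the placement of the “-o” flag is an assumption that should be checked against the script’s help menu:
MRCC241231 -n 2 8 dimer.inp
MRCC241231 -o -n 2 8 dimer.inp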
An example MRCC input:
# CCSDT(Q) Single Point calculation
basis=cc-pVDZ
calc=CCSDT(Q)
mpitasks=4
mem=8000MB
mult=1
unit=angs
geom=xyz
8
C 0.765317 0.000000 0.000000
H 1.164483 1.006642 0.170694
H 1.164478 -0.355494 -0.957128
H 1.164488 -0.651149 0.786427
C -0.765317 0.000000 0.000000
H -1.164488 0.651723 -0.785951
H -1.164483 -1.006517 -0.171430
H -1.164478 0.354794 0.957387
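Assuming the example above is saved as ethane.inp (the file name is only illustrative), it could be submitted as an MPI + OpenMP job with:
MRCC241231 4 ethane.inp
Following the description above, the number of cores given on the command line matches the “mpitasks” value in the input, and two OpenMP threads are assigned to each core.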
Alternatively, MRCC can also be called through the interface of the ORCA program package (version 5.0.4). When MRCC is used via the ORCA interface, only OpenMP parallelization is available and the methods are restricted to single-point energies, although these can also be used within rigid scan calculations or numerical frequency calculations.
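A minimal sketch of such an ORCA input is given below; the %mrcc block and its keywords are assumptions based on the ORCA manual and should be verified against the ORCA 5.0.4 documentation before use:
# Hypothetical ORCA input for a CCSDT(Q)/cc-pVDZ single-point energy via the MRCC interface
! cc-pVDZ
%mrcc
  method "CCSDT(Q)"   # method string passed to MRCC (assumed keyword)
end
* xyz 0 1
C 0.765317 0.000000 0.000000
H 1.164483 1.006642 0.170694
H 1.164478 -0.355494 -0.957128
H 1.164488 -0.651149 0.786427
C -0.765317 0.000000 0.000000
H -1.164488 0.651723 -0.785951
H -1.164483 -1.006517 -0.171430
H -1.164478 0.354794 0.957387
*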
Estimating computational requirements
There is no universal rule of thumb for allocating resources to MRCC calculations, as different chemical systems and theoretical methods have very different resource demands; you will need to develop the ability to estimate the requirements of your own jobs independently. If you prefer to create your own SLURM script, you can invoke MRCC directly from /opt/software/packages/mrcc/20241231/, as sketched below.
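A minimal sketch of such a script is shown below. It assumes that the MRCC driver executable (dmrcc) in the installation directory reads its input from a file named MINP in the working directory (the standard MRCC convention); the resource values and input file name are placeholders to be adapted to your job:
#!/bin/bash
#SBATCH --job-name=mrcc
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8        # OpenMP threads
#SBATCH --mem=10000              # total memory in MB; keep it above the "mem" value in the input

# Make the MRCC executables available (adjust the path if necessary)
export PATH=/opt/software/packages/mrcc/20241231:$PATH
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

# MRCC reads its input from a file named MINP in the working directory
cp ethane.inp MINP               # hypothetical input file name
dmrcc > ethane.out 2>&1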
You can find MRCC-specific information at the following locations:
Update: 2025.03.19.