Geometry Optimization¶
HUMMR can calculate analytical nuclear gradients for FCI and HCI wave functions. In both cases, the gradients can be computed in a state-specific (SS) or state-averaged (SA) manner. However, HUMMR does not yet have a built-in geometry optimizer. Luckily, we can make use of ORCA for that purpose (see section 7.26.6 of the ORCA manual). In this tutorial, ORCA is used solely as a geometry optimizer that calls HUMMR to carry out the electronic structure and nuclear gradient calculations.
Note
We plan to write our own geometry optimization library, which will simplify geometry optimizations with HUMMR in the hopefully not-too-distant future.
Necessary scripts¶
Following the ORCA manual, we shall first provide the otool_external bash script:
#!/usr/bin/bash
# ORCA calls this wrapper with a single argument: the basename_EXT.extinp.tmp file.
if [ $# -ne 1 ]
then
    printf "Error: Please provide the 'basename_EXT.extinp.tmp' file!\n"
    exit 1
fi

num_threads=12 # Number of OMP threads
num_procs=1    # Number of MPI processes

# Hand the ORCA-side input over to the HUMMR interface script.
python "$(which otool_hummr.py)" "$1" calc_hummr.inp "$num_threads" "$num_procs"
It can happen that the otool_external script is not found by ORCA. In that case, also define the environment variable pointing to it: export EXTOPTEXE=<path to your otool_external>.
Note
For this test, the number of OMP threads was set to 12. Specify the requested number of threads in otool_external by assigning the desired value to num_threads.
The next and final script, otool_hummr.py, handles the transfer of information between ORCA and HUMMR (a conceptual sketch follows this list):
- Setting up an HUMMR input based on the data (XYZ coordinates) received from ORCA
- Carrying out the HUMMR calculation
- Passing the nuclear gradient from HUMMR back to ORCA
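The otool_hummr.py script itself is distributed with HUMMR, so its contents are not reproduced here. To make the data flow concrete, the shell sketch below mimics what a single optimization step amounts to; the geometry extraction, the GEOMETRY_XYZ placeholder, and the hummr command name are purely illustrative assumptions, not the actual interface:
# Conceptual sketch only -- the real work is done by otool_hummr.py.
extinp=$1        # basename_EXT.extinp.tmp written by ORCA (see otool_external)
hummr_inp=$2     # HUMMR input template (calc_hummr.inp)
num_threads=$3   # OMP threads
num_procs=$4     # MPI processes

# 1) Locate the geometry ORCA produced for the current optimization step
#    (assumed here to be referenced in the extinp file).
xyz_file=$(head -n 1 "$extinp" | awk '{print $1}')

# 2) Build a HUMMR input for that geometry and run the calculation
#    (GEOMETRY_XYZ is a hypothetical placeholder in the template).
sed "s|GEOMETRY_XYZ|$xyz_file|" "$hummr_inp" > calc_hummr_step.inp
export OMP_NUM_THREADS=$num_threads
mpirun -np "$num_procs" hummr calc_hummr_step.inp > calc_hummr_step.out

# 3) Extract the energy and nuclear gradient from the HUMMR output and write
#    them to the file that ORCA's external optimizer reads back in.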
Place this script alongside the otool_external script and the ORCA executables, and you are good to go.
Note
Make sure that the otool_external and otool_hummr.py files have execute permissions. These can be granted with chmod +x.
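For example, from the directory where the two scripts reside:
chmod +x otool_external otool_hummr.py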
Acrolein excited state geometry optimization¶
The necessary files for carrying out a geometry optimization of the first excited state with SA-CASSCF nuclear gradients are given below:
- ORCA input: calc.inp
- HUMMR input: calc_hummr.inp
- HUMMR orbitals: inporbs.C0
Having obtained these files, simply call the ORCA executable on calc.inp and the geometry
optimization should commence.
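One way to launch it, mirroring the $(which ...) idiom used in otool_external (redirecting the output to calc.out is just a convenient convention, not a requirement):
$(which orca) calc.inp > calc.out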
Parallelization¶
In most cases, where only single-node parallelization is desired, it is sufficient to specify the requested number of threads in the num_threads variable in otool_external. For larger calculations, however, an advanced user may want to use multiple nodes. Such multinode calculations can be carried out by specifying the number of MPI processes in the num_procs variable. Note that this number should match the number of MPI processes requested in the ORCA input (nprocs).
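As an illustration, a hybrid run with 2 MPI processes and 12 OMP threads per process would use the following (purely illustrative) settings in otool_external, while calc.inp requests the same number of MPI processes through its nprocs setting:
num_threads=12 # OMP threads per MPI process
num_procs=2    # MPI processes; must match nprocs in the ORCA input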