Quantum mechanical software




Turbomole

Structure optimization

  1. Obtain a coord file (can be converted from an xyz file with Open Babel)
  2. Copy the coord file and the scripts define.sh and run-define.sh (both given in the Scripts section below) to the directory you wish to run the calculations in.
  3. Modify define.sh as needed (e.g. charge of the molecule; open-shell or not; if open-shell, what spin?)
  4. Modify run-define.sh as needed (Turbomole path in the submit scripts, SLURM-related parameters, etc.)
  5. Make define.sh executable and run it (chmod u+x define.sh && ./define.sh)
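Steps 2 and 5 can be sketched as a shell session; my_calc is an example directory and the define.sh below is a stub so the commands run anywhere (for actual calculations, use the real define.sh from the Scripts section):

```shell
# my_calc and the stub define.sh are placeholders for illustration only
mkdir -p my_calc
printf '#!/bin/sh\necho define done\n' > my_calc/define.sh   # stand-in for the real define.sh
( cd my_calc && chmod u+x define.sh && ./define.sh )         # prints "define done"
```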

Hessian

Calculation of the (molecular) Hessian requires a well-converged structure. Start with a structure optimization if you don't have one. Once you have a good starting structure, use the Turbomole program aoforce to calculate the Hessian via the following scripts:

  1. Run the script aoforce-1.sh which converges the structure with tighter convergence criteria
  2. Run the script aoforce-2.sh which uses the program aoforce to calculate the Hessian

Broken-symmetry calculations

For a simple case (triplet to open-shell singlet), converge the triplet state and start from the converged orbitals. In this example the triplet has 128 alpha electrons and 126 beta electrons. Then change the number of alpha and beta electrons to 127 each (edit the lines below in the control file):

$alpha shells a       1-128     ->     $alpha shells a       1-127
$beta shells  a       1-126     ->     $beta shells  a       1-127
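The occupation change can also be done with sed; a minimal sketch on a mock control fragment (the occupation strings must match your control file exactly, and 128/126 are this example's electron counts):

```shell
# Mock fragment with the triplet occupations from the example above
cat > control.frag <<'EOF'
$alpha shells
 a       1-128                                  ( 1 )
$beta shells
 a       1-126                                  ( 1 )
EOF
# Change both occupations to 127 for the broken-symmetry singlet
sed -i -e 's/1-128/1-127/' -e 's/1-126/1-127/' control.frag
grep -c '1-127' control.frag   # prints 2
```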

Scripts

define

Save the script below as define.sh (an example of an automated setup using define):

#!/bin/csh

foreach complex (resting_state)
   if ($complex == resting_state) then
   cp tpssd3_$complex.bohr coord 
define <<EOF


a coord
*
no
bb
all def2-SV(P)
*
eht
y
y
2
y

dft
on
func
tpss
*
ri
on
*
dsp
on
bj
*
*
EOF

     mkdir -p $complex
     mv alpha $complex
     mv beta $complex
     mv mos $complex
     mv basis $complex
     mv auxbasis $complex
     mv control $complex
     mv coord $complex
     cp run-define.sh $complex
   endif
end
run-define

Save the script below as run-define.sh (requires that you have run define.sh first):

#!/bin/bash

sed -e 's/$scfiterlimit       30/$scfiterlimit       300/' \
    -e 's/$scforbitalshift  closedshell=.05/$scforbitalshift  closedshell=.25/' \
    control > control.tmp
mv control.tmp control

array=(
tpss
)
for i in "${array[@]}"
do
   echo '#!/bin/bash'                       > run_$i.sh
   echo '#SBATCH -N 1'                     >> run_$i.sh 
   echo '#SBATCH -n 8'                     >> run_$i.sh
   echo '#SBATCH -t 168:00:00'             >> run_$i.sh
   echo '#SBATCH -J lpmo_bde'              >> run_$i.sh

   echo  'export PARA_ARCH=MPI'            >> run_$i.sh 
   echo  'TURBODIR=/lunarc/nobackup/projects/bio/TURBO/Turbo7.1'        >> run_$i.sh 
   echo  'PATH=$TURBODIR/bin/x86_64-unknown-linux-gnu_mpi:$TURBODIR/scripts:$PATH'  >> run_$i.sh 
   echo  'export PARNODES=8'               >> run_$i.sh
   echo  'export PATH'                     >> run_$i.sh

   echo  'cd $SNIC_TMP'                   >> run_$i.sh 
   echo  'cp -p $SLURM_SUBMIT_DIR/* .'    >> run_$i.sh 
   echo  'jobex -backup -c 400 -ri'       >> run_$i.sh 
   echo  'cp -pu * $SLURM_SUBMIT_DIR'     >> run_$i.sh 

   chmod u+x run_$i.sh
   sbatch run_$i.sh 
done
aoforce 1

Save the script below as aoforce-1.sh:

#!/bin/csh

foreach complex (resting_state)
        if ($complex == "resting_state") then
           cd $complex
             rm -rf aoforce
             mkdir -p aoforce
             cd aoforce
                cp ../* .
                rm nextstep
                sed -i 's/$scfconv   6/$scfconv   8/g' control
                sed -i 's/jobex -backup -c 400 -ri/jobex -backup -c 400 -ri -gcart 4/g' run_tpss.sh
                sbatch run_tpss.sh
             cd ..
           cd ..
        endif
end
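The two sed edits performed by aoforce-1.sh can be tried in isolation on mock files (a sketch; the patterns must match your control file and run script verbatim, including spacing):

```shell
# Mock inputs containing the lines the script edits
printf '$scfconv   6\n' > control.mock
printf 'jobex -backup -c 400 -ri\n' > run_tpss.mock
# Tighten SCF convergence and add a tighter Cartesian gradient criterion
sed -i 's/$scfconv   6/$scfconv   8/g' control.mock
sed -i 's/jobex -backup -c 400 -ri/jobex -backup -c 400 -ri -gcart 4/g' run_tpss.mock
cat control.mock run_tpss.mock
```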
aoforce 2

Save the script below as aoforce-2.sh:

#!/bin/csh
foreach complex (resting_state)
        if ($complex == "resting_state") then
           cd $complex
             cd aoforce
                turbofreq
                sed -i 's/jobex -backup -c 400 -ri -gcart 4/aoforce > logf/g' run_tpss.sh
                sed -i 's/$maxcor 700/$maxcor 2500/g' control
                sbatch run_tpss.sh
             cd ..
           cd ..
        endif
end

MOLCAS

Install MOLCAS

  • Download the source from >insert repo<
  • Setup (serial):

./configure # Runs the configure script.

  • Setup (parallel): Here with intel; remember to load the required modules.

./configure -parallel -intel

Installation of OpenMolcas on kebnekaise

Getting OpenMolcas:

git clone https://gitlab.com/Molcas/OpenMolcas.git [directory]

Go to the MOLCAS directory and run git submodule update --init --recursive

Load modules (intel/mkl):

module load ifort/2018.1.163-GCC-6.4.0-2.28 impi/2018.1.163 icc/2018.1.163-GCC-6.4.0-2.28 HDF5/1.10.1 imkl/2018.1.163

Run CMake (note that the compiler variables must be on the same line as the cmake command to take effect):

FC=ifort CC=icc CXX=icpc cmake -DHDF5=ON -DOPENMP=ON -DLINALG=MKL ../
  • For DMRG support (CheMPS2), do

    FC=ifort CC=icc CXX=icpc cmake -DCMAKE_INSTALL_PREFIX=/pfs/nobackup/home/e/erikh/programs/open-molcas-jan2019/build_chemps2 -DHDF5=ON -DOPENMP=ON -DLINALG=MKL -DCHEMPS2=ON -DCHEMPS2_DIR=/pfs/nobackup/home/e/erikh/programs/CheMPS2/build/CheMPS2 ../
    
  • Note that the path given to -DCHEMPS2_DIR assumes CheMPS2 is already installed. See the separate section below for installation of CheMPS2.

Installation of CheMPS2 on kebnekaise

  • Get program (git clone)

  • Load modules (intel/mkl - use the same as for MOLCAS above if the programs should work together!):

    module load ifort/2018.1.163-GCC-6.4.0-2.28 impi/2018.1.163 icc/2018.1.163-GCC-6.4.0-2.28 imkl/2018.1.163 HDF5/1.10.1
    
  • Go to the CheMPS2 source folder and make a build directory: mkdir build

  • Set the compiler variables and run CMake from inside the build directory:

    export FC=ifort CC=icc CXX=icpc && cmake -DMKL=ON -DSHARED_ONLY=on -DCMAKE_INSTALL_PREFIX=/pfs/nobackup/home/e/erikh/programs/CheMPS2/build/install ../
    
  • Compile with make -j8 and install (perhaps not strictly required) with make install

Put the module loads into a file and source it from your run script; an example run script (running through OpenMolcas):

source /pfs/nobackup/home/e/erikh/programs/.openmolcas-intel
export CHEMPS2=/pfs/nobackup/home/e/erikh/programs/CheMPS2/build/CheMPS2
export MOLCAS=/pfs/nobackup/home/e/erikh/programs/open-molcas-jan2019/build_chemps2/
PATH=$PATH:$CHEMPS2:$MOLCAS:/home/e/erikh/bin/
export PATH

#OpenMP settings
export OMP_NUM_THREADS=8

pymolcas -f input-dmrg.input

MOLCAS in parallel on kebnekaise (with global arrays)

Global Arrays installation

  • Get global arrays: git clone https://github.com/GlobalArrays/ga.git global-array-jan2018
  • Load modules (see installation of MOLCAS)
  • Use the autogen script: ./autogen.sh
  • make build dir: mkdir build
  • Configure; remember to set the install prefix and 8-byte (i8) integers: ./configure --enable-i8 --prefix=/pfs/nobackup/home/e/erikh/programs/global-array-jan2018/build
  • Run make and then make install

Remember to save the following output (it is printed to the screen during compilation and is important for setting LD_LIBRARY_PATH and LD_RUN_PATH when compiling MOLCAS with GA). I often save it in a file called for_molcas or similar.

----------------------------------------------------------------------
Libraries have been installed in:
   /pfs/nobackup/home/e/erikh/programs/global-array-jan2018/build/lib

If you ever happen to want to link against installed libraries
in a given directory, LIBDIR, you must either use libtool, and
specify the full pathname of the library, or use the '-LLIBDIR'
flag during linking and do at least one of the following:
   - add LIBDIR to the 'LD_LIBRARY_PATH' environment variable
     during execution
   - add LIBDIR to the 'LD_RUN_PATH' environment variable
     during linking
   - use the '-Wl,-rpath -Wl,LIBDIR' linker flag
   - have your system administrator add LIBDIR to '/etc/ld.so.conf'

See any operating system documentation about shared libraries for
more information, such as the ld(1) and ld.so(8) manual pages.
----------------------------------------------------------------------
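Following the libtool note above, the saved LIBDIR can be exported before building and running MOLCAS; a sketch using the install prefix from the configure step:

```shell
# GA install prefix from the configure step above
GA_BUILD=/pfs/nobackup/home/e/erikh/programs/global-array-jan2018/build
# Make the GA libraries visible at link time (LD_RUN_PATH) and at run time (LD_LIBRARY_PATH)
export LD_RUN_PATH="$GA_BUILD/lib${LD_RUN_PATH:+:$LD_RUN_PATH}"
export LD_LIBRARY_PATH="$GA_BUILD/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
```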

MOLCAS install

  • Follow the guide in

More on DMRG calculations (should be updated!)

  • Measures (entropy and mutual information)

    • The scripts for obtaining the mutual information can be found in
      path_to_molcas_build-dir/qcmaquis/lib/python/pyeval/
      
    • They are run by command (example with entropy and mutual information):
      ./mutinf.py h2o.results_state.0.h5
      
      ./entropy.py h2o.results_state.0.h5
      
  • Order of orbitals: in DMRG calculations, the order of the orbitals can be important. Usually, a good ordering with the default start guess for the environment lattice is better than a lousy ordering with a good start guess for the environment lattice. The ordering according to the mutual information is obtained with: fiedler.py

Setup of MOLCAS driver

  1. After a successful build, go to molcas-root-dir/sbin and copy molcas.driver to /home/user/bin/
  2. Export the path: export PATH=$PATH:~/bin/ or export PATH=$PATH:/home/user/bin/
  3. Export the explicit MOLCAS build, e.g. export MOLCAS=/home/user/molcas-root-dir/build-dir/ (for example export MOLCAS=/home/erikh/Programs/molcas-march2017/build-gcc-mkl/)
  4. Steps 2 and 3 can also be put into a run script, so that they are only set in connection with a calculation.
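The driver setup steps above as a shell sketch; molcas-root is a placeholder, and a stub driver file is created so the commands run outside a real MOLCAS tree:

```shell
# Stand-in MOLCAS tree; replace molcas-root with your actual build location
mkdir -p molcas-root/sbin molcas-root/build-dir "$HOME/bin"
printf '#!/bin/sh\n' > molcas-root/sbin/molcas.driver   # stub for the real driver
cp molcas-root/sbin/molcas.driver "$HOME/bin/molcas"    # step 1
chmod u+x "$HOME/bin/molcas"
export PATH="$PATH:$HOME/bin"                           # step 2
export MOLCAS="$PWD/molcas-root/build-dir"              # step 3 (example path)
```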

Some tips and tricks

compiling clean-up

make distclean # other options are make clean, make veryclean and make extraclean

  • First check echo $PATH
  • Looking for paths to compilers: look in the file "Symbols" in the $MOLCAS directory. This file can also be modified.
  • Also check the folder cfg (compiler flags)

Known problems

  • For some clusters (e.g. kebnekaise) the PATH to /home/user/bin needs to be defined explicitly (e.g. in .bash_profile):
    export PATH=$HOME/bin:$PATH
    

Install MOLCAS (with the QCMaquis DMRG programs - deprecated, should be updated!)

Prerequisites (for QCMaquis)

  1. MKL and CMake
  2. HDF5: sudo yum install hdf5 and sudo yum install hdf5-devel
  3. Python: sudo yum install python-devel.x86_64, sudo yum install numpy and sudo yum install scipy
  4. GSL: sudo yum install gsl-devel

Clone and compile MOLCAS with QCMaquis

Note: QCMaquis is deprecated here; it has been moved to another repository.

  1. Clone git clone git@tc-gitlab.ethz.ch:molcas-dev/fde-dmrg.git molcas
  2. Go to the MOLCAS root directory (molcas in the clone above)
  3. submodules git submodule update --init --recursive External/gen1int-molcaslib External/hdf5_f2003_interface External/libmsym External/qcmaquis_driver External/qcmaquis_suite
  4. Change CMakeList.txt: set(CMAKE_DISABLE_SOURCE_CHANGES ON) should be changed to set(CMAKE_DISABLE_SOURCE_CHANGES OFF)
  5. make build-dir and enter build-dir (cd build-dir)
  6. remember to set path to MKL (typically something like source /opt/intel/mkl/bin/mklvars.sh intel64)
  7. FC=gfortran CC=gcc CXX=g++ cmake -DFDE=ON -DOPENMP=ON -DDMRG=ON -DLINALG=MKL -DCMAKE_BUILD_TYPE:String=RelWithDebInfo -DGEN1INT=ON ..
  8. source /home/erikh/Programs/molcas-march2017/build-gcc-mkl/qcmaquis/bin/qcmaquis.sh
  9. Test: molcas verify qcmaquis:001 (if the molcas variable is not set, see the next section)

QCMaquis stand-alone

Installation:

  • Prerequisites:
    • GSL (latest version is 2.1): download the latest stable version (gsl.tar) and unpack.
    • HDF5: download hdf5.tar.gz (here for pre-compiled) for the wanted compiler. Set the environment variable (in .bashrc or elsewhere): export HDF5_DIR=/full_path_to_hdf5/
