...
No Format |
---|
module load vasp/5.4.4.vasp-mpi-intel
OR
module load vasp/5.4.4.vasp-serial-intel

To use vasp on the cluster, one needs to be in the unix group named "vasp". Please contact the HPC admin to request this. |
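Since access is gated on unix group membership, it can save a support round-trip to check whether your account is already in the "vasp" group before loading the module. A minimal sketch (only the group name "vasp" comes from the note above; the check itself is generic):

```shell
# Check whether the current account belongs to the "vasp" unix group.
# id -Gn prints all group names for the current user, one word per group.
if id -Gn | tr ' ' '\n' | grep -qx vasp; then
    echo "in vasp group - the vasp modules should work"
else
    echo "not in vasp group - contact the HPC admin first"
fi
```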
VASP-gnu
Not installed currently
...
No Format |
---|
module purge
module load intel-mpi/4.0.0.027
module load intel-tools-11/11.1.072
module load fftw/3.2.2-intel
cd /sw/VASP/5.4.1/intel-serial
cp arch/makefile.include.linux_intel_serial makefile.include
vi makefile.include
# Add this line:
MKLROOT=/sw/sdev/intel/Compiler/11.1/072/mkl
make

This will build the standard, gamma-only, and non-collinear versions of VASP one after the other.

Patch
=====
module load vasp/5.4.1.vasp-intel-serial
cd /sw/VASP/5.4.1/intel-serial
# gunzip the patch, e.g.:
gunzip patch.5.4.1.03082016.gz
patch -p0 < patch.5.4.1.14032016
patch -p0 < patch.5.4.1.03082016

Special Patch (for Tim):
Replace the files in src by the updated ones from patch_vasp.5.4.1.05Feb16.tgz (including the newly created file tim_parameters.F), and also replace/modify the "hidden" file .objects so that it contains the item tim_parameters.o right after subdft3.o. |
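When applying the patches above, `patch -p0 --dry-run` can be used to confirm that a patch applies cleanly before it touches any source files. A self-contained sketch of the pattern on a scratch file (the file names here are made up for illustration, not the real VASP patch names):

```shell
# Demonstrate the "dry-run first, then apply" pattern used for the VASP patches.
workdir=$(mktemp -d)
cd "$workdir"
echo "old line" > src.F                 # stand-in source file
cat > fix.patch <<'EOF'
--- src.F
+++ src.F
@@ -1 +1 @@
-old line
+new line
EOF
patch -p0 --dry-run < fix.patch         # succeeds only if the patch applies cleanly
patch -p0 < fix.patch                   # now apply for real
cat src.F                               # prints: new line
```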
VASP 5.4.4
Note: Awoonga was a separate HPC managed by QRIS. Awoonga has since been replaced by a newer HPC called Tinaroo. As of 18/01/23, Tinaroo is itself soon to be replaced by a newer HPC called Bunya. Bunya will have VASP 6; VASP 5.4.4 will not be available on Bunya. See FAQ Qs38 for details on Awoonga: FAQ - Griffith HPC Cluster#GriffithHPCCluster-Qs38%3AHowdoIgetstartedontheawoongacluster
No Format |
---|
awoonga node only

module load mkl/2017.3
module load intel/2017.4
module load lapack/3.6.0
cp arch/makefile.include.linux_intel_serial makefile.include
vi makefile.include
# Add this line:
MKLROOT=/opt/intel/composer_xe_2017.4/mkl
make 2>&1 | tee makeLog.txt |
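Because `make` builds the standard, gamma-only, and non-collinear executables in sequence, a quick way to confirm the build finished completely is to check that all three binaries landed in bin/. This sketch assumes the usual VASP 5.4.x layout (bin/vasp_std, bin/vasp_gam, bin/vasp_ncl) and is run from the build directory:

```shell
# Verify that the three VASP executables exist and are executable.
for exe in vasp_std vasp_gam vasp_ncl; do
    if [ -x "bin/$exe" ]; then
        echo "$exe: OK"
    else
        echo "$exe: MISSING - check makeLog.txt for the failing step"
    fi
done
```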
Sample PBS script
No Format |
---|
#!/bin/bash
#PBS -m abe
#PBS -M YourEmail@griffithuni.edu.au
#PBS -N VaspTest
#PBS -q workq
#PBS -l select=1:ncpus=4:mpiprocs=4:mem=64gb
#PBS -l walltime=24:00:00

## load vasp module to set up env variables to run vasp
cd $PBS_O_WORKDIR
module load intel-cc-11/11.1.072
module load intel-mpi/4.0.0.027
module load intel-fc-11/11.1.072
module load fftw/3.2.2-intel
module load mkl/mkl_composer_xe_2017.4
module load vasp/5.4.4.vasp-mpi-intel

## get the number of allocated cpu cores
export nprocs=`cat $PBS_NODEFILE | wc -l`

## run vasp in parallel over $nprocs processors
###alias vasp="vasp_std"
ulimit -s unlimited
mpirun -np $nprocs vasp_std < /dev/null > vasp.debug
echo "Done with job" |
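The `nprocs` line in the script works because PBS writes one hostname per allocated core into `$PBS_NODEFILE`, so counting lines gives the total core count across all nodes. A small demonstration with a faked nodefile (4 cores on one node; the hostname "n060" is made up):

```shell
# Simulate what $PBS_NODEFILE looks like for select=1:ncpus=4:mpiprocs=4.
PBS_NODEFILE=$(mktemp)
printf 'n060\nn060\nn060\nn060\n' > "$PBS_NODEFILE"

# Same counting logic as in the PBS script above.
nprocs=$(cat "$PBS_NODEFILE" | wc -l)
echo "nprocs=$nprocs"                   # nprocs=4
rm -f "$PBS_NODEFILE"
```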
Another Sample PBS Script
...