SGI MPT
General Information
The SGI Message Passing Toolkit (MPT) provides versions of industry-standard message passing libraries optimized for SGI computer systems running SGI operating systems.
Version           | 2.04, 2.02 and 2.00
Vendor            | SGI
Installation Path | /opt/sgi/mpt/
Environment       | module load mpt/2.02
Setting the Environment
To compile, link and run a binary that uses MPT, at least one mpt module must be loaded:

module load mpt/2.02

Omitting the version (module load mpt) loads the current default, mpt/2.04. The available versions are loaded with:

module load mpt/2.04
module load mpt/2.02
module load mpt/2.00
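To confirm what is available and what got loaded, the usual module commands can be used (output will vary with the installation):

module avail mpt    # list the installed MPT versions
module list         # verify that an mpt module is loaded
which mpirun        # should resolve to a path under /opt/sgi/mpt/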
Sample code
http://confluence.rcs.griffith.edu.au:8080/display/GHPC/Sample+MPI+Codes
See examples here:
http://confluence.rcs.griffith.edu.au:8080/download/attachments/28803122/pi.f
http://confluence.rcs.griffith.edu.au:8080/download/attachments/28803122/pi.c
http://confluence.rcs.griffith.edu.au:8080/download/attachments/28803122/heat.cxx
http://confluence.rcs.griffith.edu.au:8080/download/attachments/28803122/hworld.f90
http://confluence.rcs.griffith.edu.au:8080/download/attachments/28803122/MPIHello2.c
Compile and link MPI Applications using SGI MPT
Using sample codes from above (http://confluence.rcs.griffith.edu.au:8080/display/GHPC/SGI+MPT#SGIMPT-Samplecode)
Use Intel Compilers
module load intel-cc-11/11.1.072
module load mpt/2.02
icc -c pi.c
icpc -c heat.cxx
ifort -c hworld.f90
icc -o pi pi.c -lmpi
icpc -o heat heat.cxx -lmpi++ -lmpi
ifort -o hworld hworld.f90 -lmpi
Use GNU Compilers
module load mpt/2.02
gcc -c pi.c
g++ -c heat.cxx
gfortran -I$FPATH -c hworld.f90
gcc -o pi pi.c -lmpi
g++ -o heat heat.cxx -lmpi++ -lmpi
gfortran -I$FPATH -o hworld hworld.f90 -lmpi
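Whichever compiler suite is used, a quick way to confirm that a binary was actually linked against MPT is to inspect its shared library dependencies; the exact library path below is an assumption based on the installation path /opt/sgi/mpt/:

ldd ./pi | grep libmpi    # expect something like: libmpi.so => /opt/sgi/mpt/mpt-2.02/lib/libmpi.so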
Running an MPT binary
There are three ways to start an MPI program that was linked against the MPT MPI library:
use the mpiexec command
use the mpiexec_mpt command
use the mpirun command
All commands are in $PATH after loading an mpt module file.
See the online man pages of these commands for detailed information.
The commands mpiexec and mpiexec_mpt are actually wrappers around the mpirun executable. They simplify startup of MPT binaries in batch jobs.
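Side by side, the three launchers look like this for the same binary (the task count of 8 is illustrative; full job scripts follow in the subsections below):

mpiexec ./a.out             # one task per processor reserved by the batch job
mpiexec_mpt -np 8 ./a.out   # explicit task count
mpirun 8 ./a.out            # MPT's mpirun takes the task count before the program name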
Using mpiexec
An MPT executable is started from within batch jobs simply by
mpiexec a.out
The following example batch job script requests 12 nodes with 8 processors each, and uses mpiexec to start one MPI task per requested processor:
#!/bin/bash
#PBS -N run_mpt
#PBS -l nodes=12:ncpus=8
#PBS -j oe
#PBS -l walltime=00:30:00
# -- Set up the environment
module load mpt/2.02
cd $PBS_O_WORKDIR
# -- Run
mpiexec ./pi    # in general: mpiexec a.out
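Assuming the script above is saved as run_mpt.pbs (the filename is just an example), it is submitted and monitored with the standard PBS commands:

qsub run_mpt.pbs    # submit the job
qstat -u $USER      # watch its state in the queue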
Notes:
Use appropriate command line options to mpiexec if you want to start fewer tasks than the number of CPUs reserved for the batch job. See mpiexec(1) for details (e.g. the options -npernode and -config).
Environment variables and shell limits set before calling mpiexec are propagated to each MPI task.
Special command line options supported by MPT's mpirun (such as -stats, -prefix, ...) are not supported by mpiexec. Export the corresponding environment variables instead (e.g. MPI_STATS, MPI_PREFIX; see MPI(1)), as shown in the sketch after these notes.
Ensure that the executable to start (a.out) is in $PATH, in the current working directory, or specified with a full path on the command line.
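For example, to obtain the statistics that mpirun -stats would report, export the corresponding variable before the launch (a minimal sketch; see MPI(1) for the full list of variables):

export MPI_STATS=1    # stands in for mpirun's -stats option
mpiexec ./pi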
Using mpirun for batch jobs
#!/bin/bash -l
#PBS -m abe
#PBS -M snumber@griffith.edu.au
#PBS -N mpiTest
#PBS -q mpi
#PBS -l walltime=00:10:00
#PBS -l select=1:ncpus=8:mpiprocs=8
NPROCS=8
source $HOME/.bashrc
module load mpt/2.04
echo "Starting job"
mpirun $NPROCS ~/pbs/mpi/mpt/pingpong_mpt
echo "Done with job"
Using mpiexec_mpt command for batch jobs
Typically, an MPI executable can be started from within batch jobs by invoking
mpiexec_mpt -np <number_of_tasks> a.out
#!/bin/bash -l
#PBS -m abe
#PBS -V
### Mail to user
#PBS -M YOUREMAIL@griffith.edu.au
### Job name
#PBS -N mpi
#PBS -l walltime=100:00:00
### Number of nodes : CPUs per node : MPI processes per node
#PBS -l select=2:ncpus=8:mpiprocs=8
### The number of MPI processes available is mpiprocs * nodes (=NPROCS)
###NPROCS=16
### -- Determine the total number of tasks:
NPROCS=$(cat $PBS_NODEFILE | wc -l)
source $HOME/.bashrc
# -- Set up the environment
module load mpt/2.02
# This job's working directory
cd $PBS_O_WORKDIR
# -- Run
mpiexec_mpt -np $NPROCS ./pi
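When adapting the select line, it can be worth sanity-checking the derived task count before launching; a minimal debugging addition to the script (assuming the standard PBS nodefile layout of one line per MPI rank):

echo "NPROCS=$NPROCS"           # expect 16 for select=2:ncpus=8:mpiprocs=8
sort $PBS_NODEFILE | uniq -c    # number of tasks placed on each node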
Using mpirun (not supported on gowonda. Do not use this!)
There are cases where greater control over the number of MPI tasks started per node is needed. In these cases, the mpirun executable may be used directly.
The following batch job script reads the host list generated by the batch system and produces a host file with the same information, in a format suitable for mpirun:
#!/bin/bash -l
#PBS -m abe
#PBS -V
### Mail to user
#PBS -M YOUREMAIL@griffith.edu.au
### Job name
#PBS -N mpi
#PBS -l walltime=00:30:00
### Number of nodes : CPUs per node : MPI processes per node
#PBS -l select=2:ncpus=8:mpiprocs=8
source $HOME/.bashrc
# -- Set up the environment
module load mpt/2.02
# This job's working directory
cd $PBS_O_WORKDIR
# -- Generate host list from $PBS_NODEFILE
cat $PBS_NODEFILE | perl -e '@hosts=<STDIN>; foreach(@hosts){chomp;$hosts{$_}++} @hosts=map{"$_ $hosts{$_}"} sort keys %hosts; print join(",\n",@hosts)' > mpd.hosts.$$
# -- Run
mpirun -f mpd.hosts.$$ ./pi
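To illustrate what the perl one-liner produces: $PBS_NODEFILE lists each host once per MPI rank, and the generated file condenses that into one "host count" entry per node (hostnames below are hypothetical):

# $PBS_NODEFILE contains, e.g.:
#   n001
#   n001
#   n002
#   n002
# mpd.hosts.$$ then contains:
#   n001 2,
#   n002 2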
Troubleshooting for sysadmins
"Array services not available" error with MPT (SGI MPI)

Add the following entry to /etc/services for the arrayd service and port. The default port number is 5434 and is specified in the arrayd.conf configuration file:

sgi-arrayd 5434/tcp # SGI Array Services daemon

If necessary, modify the default authentication configuration. The default authentication is AUTHENTICATION NOREMOTE, which does not allow access from remote hosts. The authentication model is specified in the /usr/lib/array/arrayd.auth configuration file.

The array services will not become active until they are enabled with the chkconfig(1) command:

chkconfig --add array
chkconfig --level 2345 array on

It is not necessary to reboot the system after installing the array services to make them active, but if you do not reboot, they must be started manually:

/etc/init.d/array start
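To verify the setup afterwards, a few quick checks can be run (a sketch using the file and service names above):

grep sgi-arrayd /etc/services    # the port entry added above
chkconfig --list array           # runlevels 2-5 should be "on"
/etc/init.d/array status         # the daemon should be running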
References
1. Most of the material on this page was culled from https://www.hlrn.de/home/view/System/SgiMpt. We acknowledge the authors at HLRN and thank them for a well-written page.
2. https://vscentrum.be/neutral/documentation/cluster-doc/development/mpi
3. http://www.bear.bham.ac.uk/bluebear/applications/mpif90_101.shtml
4. http://www.ks.uiuc.edu/Research/namd/wiki/index.cgi?NamdOnAltix